Dataset columns: source_id (int64, values 1 to 4.64M), question (string, length 0 to 28.4k), response (string, length 0 to 28.8k), metadata (dict).
5,007
I am currently in college for a bachelor's degree in Network Engineering, and one of my professors explained in class that a traceroute that shows, for example, 15 hops is actually abstracting the path, and in reality many more nodes are involved. Is this true? This contradicts everything I can find on traceroute. To my knowledge, traceroute works by sending ICMP (or UDP) packets to a specific destination with a TTL from 1 up to n until the destination is reached. The probe packets time out at each location along the way in succession, producing an ICMP "time exceeded" reply, and finally a "port unreachable" message when reaching the destination. I understand the imperfections of traceroute - for example, traceroute traffic may be blocked by certain gateways, or the TTL of the reply packet may be set to the probe's remaining TTL, causing it to never return to the sender. However, after a lot of research, I can't find anything referencing traceroute being inaccurate in the case of a traceroute that always returns the same path. Likewise, I can find nothing referencing any "extra" hops not reported by traceroute (other than * * * hops that timed out with no reply). I'm open to discussion, and I'm genuinely interested to know the answer to this.
A traceroute will show you how many layer 3 hops you pass through from A to B. However, you could be going through hundreds of switches in between. You could also be going through 10 ISP routers running a layer 2 VPN which appears as a single hop. An MPLS network could hide its internals, or show its internals to you. You could have transparent firewalls in the path as well. Either way, your professor is correct in saying that you cannot guarantee that every single device in the path will count as a hop to you. Because of the points above, you could be going through 50 devices but it could look like three to you. It doesn't happen all the time, though. If you see 15 hops, it very well could be 15 hops. This is a basic example of an MPLS setup with regard to TTL: http://www.juniper.net/techpubs/en_US/junos13.2/topics/reference/configuration-statement/no-propagate-ttl-edit-protocols-mpls.html
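For readers who want to see the probe mechanism the question describes, here is a minimal sketch of the TTL-stepping loop using the Python scapy library. It is an illustration only, not the MPLS example linked above; the destination address, hop limit and timeout are arbitrary assumptions, and it needs root privileges to send raw packets.

    from scapy.all import IP, ICMP, sr1   # requires scapy; run as root

    def trace(dst="192.0.2.1", max_hops=30):
        # Send ICMP echo probes with TTL = 1, 2, 3, ... and see who answers.
        for ttl in range(1, max_hops + 1):
            reply = sr1(IP(dst=dst, ttl=ttl) / ICMP(), timeout=2, verbose=0)
            if reply is None:
                print(f"{ttl:2d}  * * *")                    # probe timed out
            elif reply.type == 11:                           # ICMP time-exceeded
                print(f"{ttl:2d}  {reply.src}")              # an intermediate hop
            else:                                            # echo-reply: destination reached
                print(f"{ttl:2d}  {reply.src}  (destination)")
                break

    trace()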
{ "source": [ "https://networkengineering.stackexchange.com/questions/5007", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/3187/" ] }
5,057
I think I might be getting confused with terminology surrounding MTU. This definition from Wendell Odom's CCNA book on MTU: The IEEE 802.3 specification limits the data portion of the 802.3 frame to a minimum of 46 and a maximum of 1500 bytes. The term maximum transmission unit (MTU) defines the maximum layer 3 packet that can be sent over a medium. Because the layer 3 packet rests inside the data portion of an Ethernet frame, 1500 bytes is the largest IP MTU allowed over an Ethernet. My understanding is that an Ethernet frame is the last phase of encapsulation before it gets transmitted onto the wire. When I look at a diagram of an Ethernet frame, its total size can equal a maximum of 1526 bytes. Am I right in saying that an Ethernet frame MTU is 1526 while the MTU at the IP layer is 1500? Does the MTU change at each phase of encapsulation, or is the term "MTU" only meant to define the maximum size of a packet at layer 3?
Am I right in saying that an Ethernet frame MTU is 1526 while the MTU at the IP layer is 1500? The Ethernet MTU is 1500 bytes, meaning the largest IP packet (or some other payload) an Ethernet frame can contain is 1500 bytes. Adding 26 bytes for the Ethernet framing (preamble, header and FCS) results in a maximum frame size (not the same as MTU) of 1526 bytes. Does the MTU change at each phase of encapsulation, or is the term "MTU" only meant to define the maximum size of a packet at layer 3? The MTU is usually considered a property of a network link, and will generally refer to the layer 2 MTU. The limits at layer 3 are far higher (see below) and cause no issues. The length of an IPv4 packet (layer 3) is limited by the maximum value of the 16-bit Total Length field in the IP header, which includes the header itself; this gives a maximum payload of 65515 bytes (2^16 - 1 - 20 bytes of header). IPv6 uses a 16-bit Payload Length field that does not include its 40-byte header, so it allows payloads up to 65535 bytes, and with the Jumbo Payload extension header IPv6 can allow packets up to 4 GB... When setting up a TCP connection, a Maximum Segment Size (MSS) is agreed upon. This could be considered an MTU at layer 4, but it is not fixed. It is often set to the largest payload that can be sent in a TCP segment without causing fragmentation, thus reflecting the lowest layer 2 MTU on the path. With an Ethernet MTU of 1500, this MSS would be 1460 after subtracting 20 bytes each for the IPv4 and TCP headers.
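As a quick illustration of the arithmetic in this answer, the following sketch just adds up the layer sizes. The byte counts are the standard Ethernet values discussed above; nothing here is device-specific.

    ETH_MTU          = 1500   # largest payload an Ethernet frame may carry (the "MTU")
    ETH_HEADER       = 14     # dst MAC + src MAC + EtherType
    ETH_FCS          = 4      # frame check sequence
    ETH_PREAMBLE_SFD = 8      # preamble + start-of-frame delimiter (on the wire only)

    max_frame_no_preamble = ETH_MTU + ETH_HEADER + ETH_FCS
    max_frame_on_wire     = max_frame_no_preamble + ETH_PREAMBLE_SFD

    print(max_frame_no_preamble)   # 1518
    print(max_frame_on_wire)       # 1526 -> the figure from the question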
{ "source": [ "https://networkengineering.stackexchange.com/questions/5057", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/3278/" ] }
5,825
The standard I've seen so far is to use 192.168.*.* IP addresses for devices on the local network. Why this combination? If it were me, I would have chosen something simpler, like 1.0.*.*. What's the historical reason?
Note: Unless we can get one of the original authors of RFC 1918 / RFC 1597, or someone from InterNIC/RIPE NCC at that time (1994-1996), to comment*, we may be left taking guesses, and the answers to this question will be mostly opinion based. Per RFC 1918, the following three ranges are reserved for use on private networks:

    10.0.0.0    - 10.255.255.255  (10/8 prefix)
    172.16.0.0  - 172.31.255.255  (172.16/12 prefix)
    192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

This is why you see them used for devices on the local network. The reasoning behind at least parts of each of these three "private" address ranges is fairly straightforward, but beyond that logic, these are guesses based on my readings over the years. First consider that the classful networks are as follows (source: Wikipedia article on Classful Network):

    Class A
      0.  0.  0.  0 = 00000000.00000000.00000000.00000000
    127.255.255.255 = 01111111.11111111.11111111.11111111
                      0nnnnnnn.HHHHHHHH.HHHHHHHH.HHHHHHHH
    Class B
    128.  0.  0.  0 = 10000000.00000000.00000000.00000000
    191.255.255.255 = 10111111.11111111.11111111.11111111
                      10nnnnnn.nnnnnnnn.HHHHHHHH.HHHHHHHH
    Class C
    192.  0.  0.  0 = 11000000.00000000.00000000.00000000
    223.255.255.255 = 11011111.11111111.11111111.11111111
                      110nnnnn.nnnnnnnn.nnnnnnnn.HHHHHHHH
    Class D
    224.  0.  0.  0 = 11100000.00000000.00000000.00000000
    239.255.255.255 = 11101111.11111111.11111111.11111111
                      1110XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX
    Class E
    240.  0.  0.  0 = 11110000.00000000.00000000.00000000
    255.255.255.255 = 11111111.11111111.11111111.11111111
                      1111XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX

As you can see, each of the three RFC 1918 ranges cuts out a private block from one of the old "classful" network ranges (Class A, Class B, and Class C in this case). To quote Dumbledore, "From this point forth, we shall be leaving the firm foundation of fact and journeying together through the murky marshes of memory into thickets of wildest guesswork." The IANA had been assigning addresses for many years before the inception of RFC 1918 (February 1996). (The private ranges were actually first put forth in RFC 1597 in March 1994.) For example, if you conduct a whois 8.0.0.0 lookup, you can see that Level 3 had that block assigned on 1992-12-01. Therefore it can be assumed that the authors of RFC 1918 had to work with the IANA / Jon Postel to find available ranges, giving us the private ranges listed above. But again, unless someone directly involved with the process* speaks up, this may remain guesswork.

* Or just someone with better Google-fu than myself. I was unable to find a good primary source for this information.
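To tie the three RFC 1918 blocks back to everyday use, here is a small sketch using Python's standard ipaddress module to test whether an address falls inside one of them. The sample addresses are arbitrary.

    import ipaddress

    RFC1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in RFC1918)

    print(is_rfc1918("192.168.1.25"))   # True
    print(is_rfc1918("172.32.0.1"))     # False (just above 172.16/12)
    print(is_rfc1918("8.8.8.8"))        # False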
{ "source": [ "https://networkengineering.stackexchange.com/questions/5825", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/3878/" ] }
6,209
How can ISPs on one continent connect to ISPs on another continent, from a physical layer standpoint? Say one ISP is located in Asia and another one in Europe; how would they connect their fiber optic cables to exchange traffic? Also, how would they connect physically through IXPs (Internet Exchange Points)? Do they have to drag their fiber optic cables all the way to these locations?
There are a number of things ISPs do:

- Drag bundles of fibers across continents. Since this is very costly, only a small number of very large companies do this, and many ISPs rent fiber pairs from these companies.
- Rent capacity (a wavelength, VLAN, MPLS circuit, etc.) to an IXP from a company that owns (or rents) fibers. Since the capacity of the fibers is shared, this is usually less costly.
- Buy IP transit from a transit provider. These transit providers typically have their own global connectivity and can offer ISPs routes to the entire internet, so an ISP does not have to be present on every IXP to connect to every other ISP on the planet.

The last option is the most common. There is only a limited number of transit providers who don't buy transit from another ISP; they're usually called Tier 1. Most ISPs combine IXP connectivity and IP transit for their global connectivity. Edit: Here's a real-world example: I used NLNOG Ring's ring-trace to create a graph of how networks around the world reach Facebook's network. As you can see from this example, a lot of networks reach Facebook via DE-CIX (the IXP in Frankfurt, Germany, one of the largest in the world), but there are also a large number of networks which use Telia (AS1299) and NTT (AS2914) to reach Facebook. Telia and NTT are Tier 1 transit providers. Edit 2: Since the image is downscaled it's hard to read; here is a full-size version.
{ "source": [ "https://networkengineering.stackexchange.com/questions/6209", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4211/" ] }
6,329
This question is a literal repost of the same question asked on The Cisco Support Community . Answers are unique to Stack Exchange. Why is area 0 the backbone area in OSPF? Why must all other areas connect to it? I have been searching for the right reason why all the areas must be connected to area 0 in OSPF. I have a small idea, but I am not clear with the whole concept. If 2 areas aren't connected through area 0 (discontiguous), how does OSPF behaving as a link state protocol increase the possibility of routing loops?
OSPF Backbone: Why is area 0 the backbone area in OSPF? Why must all other areas connect to it? This is explained very well in RFC 3509, Section 1.2 [1]:

1.2 Motivation: In OSPF domains the area topology is restricted so that there must be a backbone area (area 0) and all other areas must have either physical or virtual connections to the backbone. The reason for this star-like topology is that OSPF inter-area routing uses the distance-vector approach and a strict area hierarchy permits avoidance of the "counting to infinity" problem. OSPF prevents inter-area routing loops by implementing a split-horizon mechanism, allowing ABRs to inject into the backbone only Summary-LSAs derived from the intra-area routes, and limiting ABRs' SPF calculation to consider only Summary-LSAs in the backbone area's link-state database.

OSPF is usually considered a link-state protocol. What some people miss is that OSPF uses both link-state and distance-vector algorithms. Routes within the backbone, or within a non-backbone area, are computed as a link-state protocol does (ref. Dijkstra's algorithm). When OSPF must carry non-backbone routes through the backbone, it uses some distance-vector behavior (i.e. parts of the Bellman-Ford algorithm) to propagate Type 3 LSA metrics into non-backbone areas. Simple example of OSPF's distance-vector behavior:

    <-- Area 5 --><-- Area 0 --><---------- Area 4 ---------->
    R5-----------R1-----------R2------------R3---------------------R4
        Cost 3       Cost 5        Cost 7            Cost 12
        LSA-->       Type3 LSA     Type3 LSA
                     {From R1}     {From R2}
                     R5 cost is 3  R5 cost is 8

Consider what happens to a /32 loopback route for R5:

- R5 sends a Type 1 LSA containing the /32 loopback.
- R1 (Area 5 ABR) is connected to Area 0; it translates the Type 1 LSA into a Type 3 LSA with a cost of 3.
- R2 (Area 4 ABR) receives R1's Type 3 LSA (metric 3) and changes the metric to R5's loopback based on R2's cost to reach R1. Now R2's Type 3 LSA for R5 has a cost of 8. This is the distance-vector behavior I mentioned above.

Requiring all non-backbone routes to go through the backbone is a loop-prevention mechanism.

Connecting non-backbone OSPF areas at an ABR: If 2 areas aren't connected through area 0 (discontiguous), how does OSPF behaving as a link state protocol increase the possibility of routing loops? As we saw above, OSPF uses distance-vector behavior to send routes through the Area 0 backbone. Distance-vector protocols have well-known limits, such as the count-to-infinity problem. OSPF would be vulnerable to the same issues if we didn't have boundaries on its behavior.

[1] RFC 3509 describes Cisco IOS's ABR behavior.
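The metric handling in that diagram is simple enough to restate as arithmetic. This is only an illustration of how each ABR adds its own backbone cost before re-advertising a Type 3 LSA; the cost values are the ones from the example above.

    # Costs along the chain R5 -> R1 -> R2 (from the example above).
    cost_r5_to_r1 = 3     # intra-area cost in Area 5
    cost_r1_to_r2 = 5     # backbone cost between the two ABRs

    # R1 (ABR) summarizes R5's /32 into the backbone with its intra-area cost.
    type3_from_r1 = cost_r5_to_r1                  # metric 3
    # R2 (ABR) adds its own cost to reach R1 before advertising into Area 4.
    type3_from_r2 = type3_from_r1 + cost_r1_to_r2  # metric 8

    print(type3_from_r1, type3_from_r2)            # 3 8 -- matches the diagram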
{ "source": [ "https://networkengineering.stackexchange.com/questions/6329", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4289/" ] }
6,380
When most networking students first learn about the OSI model, they spend a great deal of time trying to figure out which layer of the model a particular protocol fits into. We get a lot of questions about OSI layers on this forum, and they are usually like: Which OSI layer does IS-IS operate at? Is HTML a presentation or application protocol? Are VPN tunnels layer 2 or 3? How should a student (or professional for that matter) understand the relationship between the OSI model and protocols he/she works with?
There are two important facts about the OSI model to remember: It is a conceptual model. That means it describes an idealized, abstract, theoretical group of networking functions. It does not describe anything that someone actually built (at least nothing that is in use today). It is not the only model. There are other models, most notably the TCP/IP protocol suite (RFC-1122 and RFC-1123), which is much closer to what is currently in use. A bit of history: You've probably all heard about the early days of packet networking, including ARPANET, the Internet's predecessor. In addition to the U.S. Defense Department's efforts to create networking protocols, several other groups and companies were involved as well. Each group was developing their own protocols in the brand new field of packet switching. IBM and the telephone companies were developing their own standards. In France, researchers were working on their own networking project called Cyclades. Work on the OSI model began in the late 1970s, mostly as a reaction to the growing influence of big companies like IBM, NCR, Burroughs, Honeywell (and others) and their proprietary protocols and hardware. The idea behind it was to create an open standard that would provide interoperability between different manufacturers. But because the OSI model was international in scope, it had many competing political, cultural, and technical interests. It took well over six years to come to consensus and publish the standards. In the meantime, the TCP/IP model was also developed. It was simple, easy to implement, and most importantly, it was free. You had to purchase the OSI standard specifications to create software for it. All the attention and development effort gravitated to TCP/IP. As a result, the OSI model was never commercially successful as a set of protocols, and TCP/IP became the standard for the Internet. The point is, all of the protocols in use today - the TCP/IP suite; routing protocols like RIP, OSPF and BGP; and host OS protocols like Windows SMB and Unix RPC - were developed without the OSI model in mind. They sometimes bear some resemblance to it, but the OSI standards were never followed during their development. So it's a fool's errand to try to fit these protocols into OSI. They just don't exactly fit. That doesn't mean the model has no value; it is still a good idea to study it so you can understand the general concepts. The concept of the OSI layers is so woven into network terminology that we talk about layer 1, 2 and 3 in everyday networking speech. The definitions of layers 1, 2 and 3 are, if you squint a bit, fairly well agreed upon. For that reason alone, it's worth knowing. The most important things to understand about the OSI (or any other) model are: we can divide the protocols into layers; layers provide encapsulation; layers provide abstraction; layers decouple functions from one another. Dividing the protocols into layers allows us to talk about their different aspects separately. It makes the protocols easier to understand and easier to troubleshoot. We can isolate specific functions easily, and group them with similar functions of other protocols. Each "function" (broadly speaking) encapsulates the layer(s) above it: the network layer encapsulates the layers above it, the data link layer encapsulates the network layer, and so on. Layers abstract the layers below them. Your web browser doesn't need to know whether you're using TCP/IP or something else at the network layer (as if there were something else).
To your browser, the lower layers just provide a stream of data. How that stream manages to show up is hidden from the browser. TCP/IP doesn't know (or care) if you're using Ethernet, a cable modem, a T1 line, or satellite. It just processes packets. Imagine how hard it would be to design an application that would have to deal with all of that. The layers abstract lower layers so software design and operation become much simpler. Decoupling: In theory, you can substitute one specific technology for another at the same layer. As long as the layer communicates with the one above and the one below in the same way, it shouldn't matter how it's implemented. For example, we can remove the very well-known layer 3 protocol, IP version 4, and replace it with IP version 6. Everything else should work exactly the same. To your browser or your cable modem, it should make no difference. The TCP/IP model is what the TCP/IP protocol suite was based on (surprise!). It only has four layers, and everything above transport is just "application." It is simpler to understand, and prevents endless questions like "Is this session layer or presentation layer?" But it too is just a model, and some things don't fit well into it either, like tunneling protocols (GRE, MPLS, IPSec to name a few). Ultimately, the models are a way of representing invisible abstract ideas like addresses and packets and bits. As long as you keep that in mind, the OSI or TCP/IP model can be useful in understanding networking.
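The encapsulation idea in this answer is easy to see with the Python scapy library, where each layer literally wraps the one above it. A minimal sketch; the addresses and payload are placeholders.

    from scapy.all import Ether, IP, TCP, Raw

    # Application data, wrapped by TCP, wrapped by IP, wrapped by Ethernet.
    frame = (Ether(dst="ff:ff:ff:ff:ff:ff") /
             IP(dst="192.0.2.10") /
             TCP(dport=80) /
             Raw(load=b"GET / HTTP/1.1\r\n\r\n"))

    frame.show()   # prints each layer and its fields, outermost (Ethernet) first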
{ "source": [ "https://networkengineering.stackexchange.com/questions/6380", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/2142/" ] }
6,483
I hear about VLAN tagging, but I don’t quite understand the concept. I know a trunk cannot accept untagged packets without configuring a native VLAN, and that access ports only accept untagged packets. But I don’t understand why packets need to be tagged or untagged. What purpose does it serve?
If you have more than one VLAN on a port (a "trunk port"), you need some way to tell, at the other end, which packet belongs to which VLAN. To do this you "tag" a packet with a VLAN tag (or VLAN header, if you like). In reality, a VLAN tag is inserted into the Ethernet frame between the source MAC address and the EtherType field. The 802.1Q (dot1q, VLAN) tag contains a VLAN ID and other things explained in the 802.1Q standard. The first 16 bits contain the "Tag Protocol Identifier" (TPID), which is 0x8100. This also doubles as the EtherType 0x8100 for devices that don't understand VLANs. So a "tagged" packet contains the VLAN information in the Ethernet frame, while an "untagged" packet doesn't. A typical use case would be one port from a router to a switch to which multiple customers are attached: say customer "Green" has VLAN 10 and customer "Blue" has VLAN 20. The ports between the switch and the customers are "untagged", meaning that for the customer the arriving packet is just a normal Ethernet packet. The port between the router and the switch is configured as a trunk port, so that both router and switch know which packet belongs to which customer VLAN; on that port the Ethernet frames are tagged with the 802.1Q tag.
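A tagged frame is easy to build and inspect with the Python scapy library, which is one way to see the 0x8100 TPID and the VLAN ID in practice. This is only an illustration; the MAC, IP and VLAN number are made up.

    from scapy.all import Ether, Dot1Q, IP

    untagged = Ether(src="00:11:22:33:44:55") / IP(dst="192.0.2.1")
    tagged   = Ether(src="00:11:22:33:44:55") / Dot1Q(vlan=10) / IP(dst="192.0.2.1")

    raw = bytes(tagged)                         # build the frame
    print(raw[12:14].hex())                     # '8100' -> the TPID that marks a tagged frame
    print(tagged[Dot1Q].vlan)                   # 10     -> the VLAN ID inside the tag
    print(len(raw) - len(bytes(untagged)))      # 4      -> the 802.1Q tag adds four bytes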
{ "source": [ "https://networkengineering.stackexchange.com/questions/6483", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4483/" ] }
6,491
I'm studying IPv4 addresses and came across this whole thing about classful addressing. I get the idea behind it, but there is something I find confusing: there are two "ABC" ranges. First one:

    A: 1.0.0.0 to 126.0.0.0 with /8
    B: 128.0.0.0 to 191.255.0.0 with /16
    C: 192.0.0.0 to 223.255.255.0 with /24

Second one:

    A: 10.0.0.0 to 10.255.255.255 with /8
    B: 172.16.0.0 to 172.31.255.255 with /12
    C: 192.168.0.0 to 192.168.255.255 with /16

Why are both of these using the names A, B and C? They aren't even using the same sets of subnet masks! Is the first one only for public addresses? Because the second one is only private addresses. Help appreciated!
It's likely that the subnet masks are throwing you off. As long as you keep in mind that the below rules no longer apply, you should be fine. Ultimately, classful addressing came down to the most significant (or "leading") bits in the address. Nothing more, nothing less.

    Class A: most significant bit starts with 0
    Class B: most significant bits start with 10
    Class C: most significant bits start with 110

The "classes" came from the way they split up the address space between "host" and "network". Keep in mind that back then (way, way back, in the days of ARPANET), subnet masks did not exist, and the network was intended to be inferred from the address itself. So, with the above in mind, this is what they came up with (this is intended to be a binary representation; each N or H represents a single bit in the 32-bit address):

    Class A: NNNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH (fewer networks, more hosts)
    Class B: NNNNNNNN.NNNNNNNN.HHHHHHHH.HHHHHHHH (more networks, fewer hosts)
    Class C: NNNNNNNN.NNNNNNNN.NNNNNNNN.HHHHHHHH (even more networks, even fewer hosts)

Here N represents the network portion of the address, and H represents the host portion, or as they called it back in the day, the "rest field." Combining that with what was said earlier about the most significant bits, we have the following:

    Class A: 0.0.0.0   - 127.255.255.255
    Class B: 128.0.0.0 - 191.255.255.255
    Class C: 192.0.0.0 - 223.255.255.255

Converting those ranges to binary may make this clearer:

    Class A
      0.0.0.0         ----------- [0]0000000.00000000.00000000.00000000
      127.255.255.255 ----------- [0]1111111.11111111.11111111.11111111
                                   ^ most significant bit = 0
    Class B
      128.0.0.0       ----------- [10]000000.00000000.00000000.00000000
      191.255.255.255 ----------- [10]111111.11111111.11111111.11111111
                                   ^ most significant bits = 10
    Class C
      192.0.0.0       ----------- [110]00000.00000000.00000000.00000000
      223.255.255.255 ----------- [110]11111.11111111.11111111.11111111
                                   ^ most significant bits = 110

Every single address within those ranges shares the same leading bit(s). The moral of the story is, if you can remember what the leading bits are supposed to be (0 for Class A, 10 for Class B, 110 for Class C), it's extremely simple to determine which "class" an address would have otherwise belonged to. Or, if decimal is easier:

    Class A: first octet is between 0 and 127, inclusive
    Class B: first octet is between 128 and 191, inclusive
    Class C: first octet is between 192 and 223, inclusive

The easiest way to mess someone up on "classful addressing", whether on a test, an exam, or whatever, is to use misdirection by way of a subnet mask. Again, remember that the subnet mask does not apply when determining the class of an address. This is easy to forget because, as others have said, classless addressing and routing have been around for over two decades now, and the subnet mask and CIDR notation have become ubiquitous in the industry.
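The leading-bits rule described above is mechanical enough to express in a few lines of Python. This sketch classifies an address purely by its first octet, exactly as the answer describes, and deliberately ignores any subnet mask.

    def address_class(addr: str) -> str:
        first_octet = int(addr.split(".")[0])
        if first_octet <= 127:          # leading bit 0
            return "A"
        if first_octet <= 191:          # leading bits 10
            return "B"
        if first_octet <= 223:          # leading bits 110
            return "C"
        if first_octet <= 239:          # leading bits 1110
            return "D"
        return "E"                      # leading bits 1111

    for ip in ("10.1.2.3", "172.20.1.1", "192.168.0.1", "224.0.0.5"):
        print(ip, "-> class", address_class(ip))   # A, B, C, D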
{ "source": [ "https://networkengineering.stackexchange.com/questions/6491", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4491/" ] }
6,915
Personally, I don't even feel that there is a need for ACKs. It would be faster if we just sent NACK(n) for the lost packets instead of sending an ACK for each received packet. So in which situations would one use ACK over NACK, and vice versa?
The reason for the ACK is that a NACK is simply not sufficient. Let's say I send you a data stream of X segments (let's say 10 for simplicity). You are on a bad connection, and only receive segments 1, 2, 4, and 5. Your computer sends the NACK for segment 3, but doesn't realize there should be segments 6-10 and does not NACK those. So, I resend segment 3, but then my computer falsely believes the data is successfully sent. ACKs provide some assurance that the segment has arrived at the destination. If you want the application to deal with order of data and retransmissions, you can simply choose to utilize a protocol like UDP (for instance, like TFTP does).
{ "source": [ "https://networkengineering.stackexchange.com/questions/6915", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4855/" ] }
6,938
It seems that everything I read about wifi says there are only three usable channels in the 2.4 GHz band; however, there are eleven 2.4 GHz wifi channels allowed in the US. If everyone is using 1, 6 or 11, wouldn't it make sense to use an unused channel, say channel 3, for my wifi infrastructure? Or to put the question differently, why can't I use channels other than 1, 6 and 11?
The 2.4 GHz band is one of several portions of radio spectrum, called the Industrial, Scientific and Medical (ISM) bands, that are allocated for unlicensed use. As long as you operate within the power and antenna limits, you can pretty much do what you want. So the short answer is: you can. But there are very good reasons why you shouldn't. Part of the confusion regarding wifi channels comes from the allocation of the frequency spectrum. The ISM band was first allocated in 1958, before most of us were born and well before anyone even imagined wireless networking. The channel definitions were made before wifi was invented, and they assumed 5 MHz spacing. 802.11b and g transmissions require 22 MHz of bandwidth. Because they're 22 MHz wide, the signal covers two channels above and below the center frequency. So if you use channel 6, your signal spreads across channels 4-8. There is only room in the entire band for three 22 MHz wide signals (in the US) without overlapping if they center on channels 1, 6 and 11. If you transmit your wifi signal between two of these channels, say centered on channel 3, two things happen: your signal interferes with other wifi users on 1 and 6, and their signals interfere with you. This will greatly increase the number of data errors, which in turn will cause retransmissions and significantly reduce your throughput. It's as if there are a number of parallel bicycle lanes, and you try to drive a bus down one of them. Although you drive down one lane, your bus will occupy several of the adjacent lanes. If someone happens to be driving their bus in one of those adjacent lanes when your bus goes by, well... it won't be pretty. If you only want to use one access point in a remote area with no other wifi signals, then you can probably get away with using a different channel. But in most urban commercial environments, the 2.4 GHz band is pretty crowded. If you use an overlapping channel, you are likely to experience (and cause) interference. If your wireless system is large with many access points, then you will need all three non-overlapping channels to get good coverage. Using something other than 1, 6 or 11 will limit the density of your access points, further reducing throughput. In summary, it's good practice to use 1, 6 and 11 to get the maximum use of the radio spectrum with a minimum of interference.
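The "bus in a bicycle lane" picture can be put into numbers. In the 2.4 GHz band the center frequency of channel n is 2407 + 5*n MHz (for channels 1-13), and a DSSS transmission occupies roughly 22 MHz around that center. A small sketch under those assumptions:

    SIGNAL_WIDTH_MHZ = 22          # approximate width of an 802.11b transmission

    def center_mhz(channel: int) -> int:
        return 2407 + 5 * channel  # 2.4 GHz band, channels 1-13

    def overlap(ch_a: int, ch_b: int) -> bool:
        # Two signals overlap if their centers are closer than one signal width.
        return abs(center_mhz(ch_a) - center_mhz(ch_b)) < SIGNAL_WIDTH_MHZ

    print(overlap(1, 6))    # False -> 25 MHz apart, the classic non-overlapping pair
    print(overlap(1, 3))    # True  -> only 10 MHz apart
    print(overlap(6, 11))   # False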
{ "source": [ "https://networkengineering.stackexchange.com/questions/6938", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/2142/" ] }
7,106
Example:

    IP:        128.42.5.4
    In binary: 10000000 00101010 00000101 00000100
    Subnet:    255.255.248.0

How could you determine the prefix, network, subnet, and host numbers?
Calculating the netmask length (also called a prefix): Convert the dotted-decimal representation of the netmask to binary. Then, count the number of contiguous 1 bits, starting at the most significant bit in the first octet (i.e. the left-hand side of the binary number).

    255.255.248.0 in binary: 11111111 11111111 11111000 00000000
    -----------------------------------
    I counted twenty-one 1s -------> /21

The prefix of 128.42.5.4 with a 255.255.248.0 netmask is /21.

Calculating the network address: The network address is the logical AND of the respective bits in the binary representation of the IP address and network mask. Align the bits in both addresses, and perform a logical AND on each pair of the respective bits (0 AND 0 = 0, 0 AND 1 = 0, 1 AND 1 = 1). Then convert the individual octets of the result back to decimal.

    128.42.5.4 in binary:    10000000 00101010 00000101 00000100
    255.255.248.0 in binary: 11111111 11111111 11111000 00000000
    ----------------------------------- [Logical AND]
                             10000000 00101010 00000000 00000000  ------> 128.42.0.0

As you can see, the network address of 128.42.5.4/21 is 128.42.0.0.

Calculating the broadcast address: The broadcast address sets all host bits to 1. Remember that our IP address in binary is:

    128.42.5.4 in binary:    10000000 00101010 00000101 00000100

The network mask is:

    255.255.248.0 in binary: 11111111 11111111 11111000 00000000

This means our host bits are the last 11 bits of the IP address, because we find the host mask by inverting the network mask:

    Host bit mask:           00000000 00000000 00000hhh hhhhhhhh

To calculate the broadcast address, we force all host bits to be 1:

    128.42.5.4 in binary:    10000000 00101010 00000101 00000100
    Host bit mask:           00000000 00000000 00000hhh hhhhhhhh
    ----------------------------------- [Force host bits to 1]
                             10000000 00101010 00000111 11111111  ----> 128.42.7.255

Calculating subnets: You haven't given enough information to calculate subnets for this network; as a general rule you build subnets by reallocating some of the host bits as network bits for each subnet. Many times there isn't one right way to subnet a block; depending on your constraints, there could be several valid ways to subnet a block of addresses. Let's assume we will break 128.42.0.0/21 into 4 subnets that must hold at least 100 hosts each. In this example, we know that you need at least a /25 prefix to contain 100 hosts; I chose a /24 because it falls on an octet boundary. Notice that the network address for each subnet borrows host bits from the parent network block.

Finding the required subnet mask length or netmask: How did I know that I need at least a /25 mask length for 100 hosts? Calculate the prefix by backing into the number of host bits required to contain 100 hosts. One needs 7 host bits to contain 100 hosts. Officially this is calculated with:

    Host bits = log2(number of hosts) = log2(100) = 6.64, rounded up to 7

Since IPv4 addresses are 32 bits wide, and we are using the host bits (i.e. the least significant bits), simply subtract 7 from 32 to calculate the minimum subnet prefix for each subnet: 32 - 7 = 25.

The lazy way to break 128.42.0.0/21 into four equal subnets: Since we only want four subnets from the whole 128.42.0.0/21 block, we could use /23 subnets. I chose /23 because we need 4 subnets, i.e. an extra two bits added to the netmask. This is an equally valid answer to the constraint, using /23 subnets of 128.42.0.0/21.

Calculating the host number: This is what we've already done above; just reuse the host mask from the work we did when we calculated the broadcast address of 128.42.5.4/21. This time I'll use 1s instead of h, because we need to perform a logical AND on the address again.

    128.42.5.4 in binary:    10000000 00101010 00000101 00000100
    Host bit mask:           00000000 00000000 00000111 11111111
    ----------------------------------- [Logical AND]
                             00000000 00000000 00000101 00000100  -----> 0.0.5.4

Calculating the maximum possible number of hosts in a subnet: To find the maximum number of hosts, look at the number of binary bits in the host number above. The easiest way to do this is to subtract the netmask length from 32 (the number of bits in an IPv4 address). This gives you the number of host bits in the address. At that point:

    Maximum number of hosts = 2**(32 - netmask_length) - 2

The reason we subtract 2 above is that the all-ones and all-zeros host numbers are reserved: the all-zeros host number is the network number, and the all-ones host number is the broadcast address. Using the example subnet of 128.42.0.0/21 above, the number of hosts is:

    Maximum number of hosts = 2**(32 - 21) - 2 = 2048 - 2 = 2046

Finding the maximum netmask (minimum hostmask) which contains two IP addresses: Suppose someone gives us two IP addresses and expects us to find the longest netmask which contains both of them; for example, what if we had 128.42.5.17 and 128.42.5.67?

    128.42.5.17 in binary:   10000000 00101010 00000101 00010001
    128.42.5.67 in binary:   10000000 00101010 00000101 01000011
                             ^                         ^        ^
                             +--------- Network -------+- Host -+
                              (all these bits are the same)  bits

In this case the maximum netmask (minimum hostmask) would be /25. NOTE: If you try starting from the right-hand side, don't get tricked just because you find one matching column of bits; there could be unmatched bits beyond those matching bits. Honestly, the safest thing to do is to start from the left-hand side.
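All of these calculations can be cross-checked with Python's standard ipaddress module. A short sketch using the same example address and mask:

    import ipaddress

    iface = ipaddress.ip_interface("128.42.5.4/255.255.248.0")
    net = iface.network

    print(iface.with_prefixlen)        # 128.42.5.4/21   <- prefix length
    print(net.network_address)         # 128.42.0.0      <- network address
    print(net.broadcast_address)       # 128.42.7.255    <- broadcast address
    print(net.num_addresses - 2)       # 2046            <- usable hosts

    # Four equal subnets of the /21 (the "lazy" /23 split from the answer):
    for sub in net.subnets(prefixlen_diff=2):
        print(sub)                     # 128.42.0.0/23 ... 128.42.6.0/23

    # Longest prefix that contains both 128.42.5.17 and 128.42.5.67 (the /25 example):
    a = ipaddress.ip_address("128.42.5.17")
    b = ipaddress.ip_address("128.42.5.67")
    for plen in range(32, -1, -1):
        candidate = ipaddress.ip_network(f"128.42.5.17/{plen}", strict=False)
        if a in candidate and b in candidate:
            print(candidate)           # 128.42.5.0/25
            break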
{ "source": [ "https://networkengineering.stackexchange.com/questions/7106", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/5032/" ] }
7,713
Does gratuitous ARP work like a normal ARP request? Why is gratuitous ARP used for HSRP?
Gratuitous ARP is a sort of "advance notification": it updates the ARP cache of other systems before they ask for it (no ARP request), or updates outdated information. When talking about gratuitous ARP, the packets are actually special ARP request packets, not ARP reply packets as one would perhaps expect. Some reasons for this are explained in RFC 5227. The gratuitous ARP packet has the following characteristics:

- Both the source and destination IP in the packet are the IP of the host issuing the gratuitous ARP.
- The destination MAC address is the broadcast MAC address (ff:ff:ff:ff:ff:ff), which means the packet will be flooded to all ports on a switch.
- No reply is expected.

Gratuitous ARP is used for several reasons:

- To update ARP tables after the MAC address for an IP changes (failover, new NIC, etc.).
- To update the MAC address tables on L2 devices (switches) when a MAC address moves to a different port.
- To notify other hosts of new MAC/IP bindings in advance when an interface comes up, so that they don't have to use ARP requests to find out.
- Duplicate address detection: when a reply to a gratuitous ARP request is received, you know that you have an IP address conflict in your network.

As for the second part of your question, HSRP, VRRP, etc. use gratuitous ARP to update the MAC address tables on L2 devices (switches). There is also the option to use the burned-in MAC address for HSRP instead of the "virtual" one; in that case the gratuitous ARP would also update the ARP tables on L3 devices/hosts.
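For illustration, here is roughly what such a packet looks like when built with the Python scapy library: an ARP request whose sender and target IP are both the host's own address, broadcast at layer 2. The MAC, IP and interface name are placeholders, and sending it requires root privileges.

    from scapy.all import Ether, ARP, sendp

    my_ip  = "192.0.2.10"            # placeholder values
    my_mac = "00:11:22:33:44:55"

    garp = (Ether(src=my_mac, dst="ff:ff:ff:ff:ff:ff") /
            ARP(op=1,                                    # "who-has" (request form)
                hwsrc=my_mac, psrc=my_ip,
                hwdst="00:00:00:00:00:00", pdst=my_ip))  # sender IP == target IP

    sendp(garp, iface="eth0", verbose=0)                 # interface name is an assumption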
{ "source": [ "https://networkengineering.stackexchange.com/questions/7713", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4289/" ] }
7,924
In this NPR article 'network neutrality' is mentioned with very little detail on what it actually is or how it actually works. I tried researching it on my own, but I get a lot of non-technical explanations of what it is fighting against (essentially metering internet traffic speeds), and I am very confused as to how this works. My understanding of the internet (broadly) is that user Joe opens up a connection with website npr.com via the HTTP protocol (after some DNS work), which sends and receives data to and from NPR's server, utilizing both NPR's and Joe's upload and download speeds. Where is the throttling occurring? Am I missing a crucial step? Is the 'traffic' being throttled 'on the way' to the client/server, somewhat like a tollbooth at the ISP level? The NPR article brings up the example of how one website could pay to have their traffic go to the user faster. I just don't understand this, because isn't all my incoming traffic maxed out at whatever my download speed is? Further, isn't the server maxed out at whatever their upload speed is? For example, if I try to send 1 MB of data from a server (www.mysimplesite.com) with an upload speed of 1 MB/s to a client (Joe) who has a download speed of 1 MB/s, would this transfer not happen in the same [theoretical] time as from a server (www.thesuperubersite.com) with an upload speed of 2 MB/s? I fail to see how any server can pay to have their content 'reach the user faster' if it is the client who generally is the speed limitation. From a technical perspective, how would this work? I'm also not looking for an analogy or opinions.
Network neutrality effectively governs how providers can handle traffic. It's a broad concept in theory, with potential upsides and downsides in practice. Consensus seems to be the downsides might be fairly harmful to consumers and startups . In the absence of net neutrality, internet service providers and the big businesses with the budget to support them would win – with consumers and internet startups losing. In theory, abolishing net neutrality would allow ISPs like Verizon to decide for themselves whether or not they want to provide VoIP traffic, or any other traffic type, to traverse their networks. Other Tier 1 providers would be free to do the same. To be fair, internet in the US isn't currently operated under perfect net neutrality rules. Information freedom is increasingly eroding around the edges of the law, with an egregious, perhaps extortive agreement taking place earlier this year between Netflix and Comcast . Comcast makes up a significant portion of Netflix’s customer base; the video streaming service was all but forced to pay up. This eventually caused Netflix to raise its prices (however slightly), thus, altering their business model and harming consumers. If net neutrality is to fail completely, then expect more deals like this. As for your question: Where is the throttling occurring? Am I missing a crucial step? Is the 'traffic' being throttled 'on the way' to the client/server somewhat like a tollbooth at the ISP level? When you think of throttling, think of quality of service. Most points of the internet work on the principle of oversubscription, which makes more effective use of infrastructure. So even if you have a 1Gb/s internet connection, you probably won't be getting 1Gb/s service. That's because Tier 1 internet providers' backbones aren’t capable of sustaining 1Gb/s for millions of customers. I fail to see how any server can pay to have their content 'reach the user faster' if it is the client who generally is the speed limitation. From a technical perspective, how would this work? A provider has the ability to service packets as they see fit. At micro-scale, imagine you have 2 users connected via a 100Mb connection to separate providers with each provider peering through a 100Mb connection to one another. If either provider decides those users are less important than any other customer they have, they can shape that traffic to be lower priority than anything else, meaning they have the capability of being dropped first if there isn’t enough bandwidth to support them. It's also important to point out that those packets may not make it to the consumer at all. It is entirely possible to drop packets altogether if they fall outside the threshold those providers provisioned.
{ "source": [ "https://networkengineering.stackexchange.com/questions/7924", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/5692/" ] }
7,928
Many moons ago, when I was just a wee bairn commencing my career, I had a job interview for a low-level developer role. Having at that time just learned how CIDR was implemented, I was keen to show off my knowledge. Sadly, that tactic didn't work out too well for me. I recall being completely floored by the very first question that was asked (and, then ruffled, it all went downhill). The question was: Why are IPv4 addresses 32-bit? I readily admitted that I didn't know the answer, but I did know that the original protocol design divided the address space into an 8-bit network number and a 24-bit host identifier—so I tried to rationalise it on the grounds that the protocol designers imagined an Internet of a few networks (after all, it was originally intended to link together a specific few) each comprising many hosts and, for simplicity of programming, kept everything aligned to byte boundaries. I recall the interviewer being unsatisfied with my answer and suggesting to me that the real reason is that it's guaranteed to fit inside a long int in C, so simplifies implementation details. Being young and green at the time, I accepted that as a reasonable answer and (before today) hadn't thought any more of it. For some reason that conversation has just returned to me and, now that I reflect upon it, it doesn't seem entirely plausible: Under the original addressing scheme comprising fixed-size network and host fields, it's unlikely that a developer would have wanted to assign the concatenation of the two fields to a single variable (I don't have access to any early IP implementations to verify what they actually did in practice); and At the time that works on TCP/IP began, C was neither standardized nor the de facto "lingua franca" of low-level software development that it has become today. Was the interviewer's suggestion actually founded in fact? If not, what were the real reasons that the protocol designers chose 32-bit addressing?
Here's a link to a Hangout with Vint Cerf (Apr. 2014) where he explains how he thought that this internet was supposed to be an experiment only: As we were thinking about the Internet (thinking well, this is going to be some arbitrary number of networks all interconnected — we don't know how many and we don't know how they'll be connected), but national scale networks we thought "well, maybe there'll be two per country" (because it was expensive: at this point Ethernet had been invented but it wasn't proliferating everywhere, as it did do a few years later). Then we said "how many countries are there?" (two networks per country, how many networks?) and we didn't have Google to ask, so we guessed at 128 and that would be 2 times 128 is 256 networks (that's 8 bits) and then we said "how many computers will there be on each network?" and we said "how about 16 million?" (that's another 24 bits) so we had a 32-bit address which allowed 4.3 billion terminations — which I thought in 1974/3 was enough to do the experiment! I had already posted this as a comment to Jens Link's answer, but I felt it should surface a bit more.
{ "source": [ "https://networkengineering.stackexchange.com/questions/7928", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4911/" ] }
8,288
I'm confused as to the difference between the maximum segment size and the maximum transmission unit. Can someone please explain the difference in relation to layers 2 and 3? If I had a packet with 800 bytes of payload, would it be correct to say that the MSS would be 800 bytes (if I set it to be that) and the MTU would be 840 (TCP 20 bytes and IP 20 bytes)? Would it be any different if I was doing PPPoE?
In addition, the MSS value is derived from the MTU. Consider that you have 2260 bytes of data to send to a remote device. If the MTU is 1500, and we take the IP header plus TCP header to be 40 bytes, then only 1460 bytes of data can be sent in the first IP packet. The remaining 800 bytes will be sent in the second IP packet. So, for an MSS of 800, the MTU should be at least 840. With PPPoE the overhead is 8 bytes, so the MTU becomes 1492 bytes and the MSS = 1492 - 40 = 1452 bytes.
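A few lines of arithmetic make the relationship explicit. This only restates the numbers in the answer (plain IPv4 and TCP headers of 20 bytes each, and 8 bytes of PPPoE overhead where applicable):

    IPV4_HEADER = 20
    TCP_HEADER  = 20

    def mss_from_mtu(mtu: int) -> int:
        return mtu - IPV4_HEADER - TCP_HEADER

    print(mss_from_mtu(1500))        # 1460 -> plain Ethernet
    print(mss_from_mtu(1500 - 8))    # 1452 -> PPPoE (8 bytes of overhead, MTU 1492)
    print(mss_from_mtu(840))         # 800  -> the example in the question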
{ "source": [ "https://networkengineering.stackexchange.com/questions/8288", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/5622/" ] }
9,429
What is the proper term for (example) hostname.tld:433 (hostname:portnumber)? It is not just hostname, and it is not really a URL either :) same goes for 10.0.0.1:3306 etc.
An IP address and port pair is called a socket address. A pair of socket addresses (10.0.0.1:123, 192.168.0.1:123) may also be called a 4-tuple, or a 5-tuple if the protocol is specified as well (10.0.0.1:123, 192.168.0.1:123, UDP).
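This terminology shows up directly in code: Python's socket API represents a socket address as an (ip, port) tuple, and a connected TCP socket has both a local and a remote socket address, together forming the 4-tuple. A tiny sketch; the target host, port and printed values are illustrative assumptions.

    import socket

    s = socket.create_connection(("example.com", 80), timeout=5)
    print(s.getsockname())   # local socket address,  e.g. ('10.0.0.5', 53211)
    print(s.getpeername())   # remote socket address, e.g. ('93.184.216.34', 80)
    s.close()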
{ "source": [ "https://networkengineering.stackexchange.com/questions/9429", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/6123/" ] }
9,571
I am preparing for the ICND1 exam and recently started to learn about different Cisco devices. I have just come to know how a packet is generated to be transmitted over a network, or outside the network. For example, when the packet is generated, it includes the source IP address, destination IP address, source MAC address, destination MAC address, and other data. Since a switch is a layer 2 device, and it uses MAC addresses to interact with other hosts within the network, why do we use IP addresses within our local networks? What if someone does not need to connect to any host or network outside their own network; why do they still need to have an IP address? Isn't a MAC address enough?
Since a switch is a layer 2 device, and it uses MAC addresses to interact with other hosts within the network, why do we use IP addresses within our local networks? Well, let's start with what traffic you're sending. If you use a strictly layer-2 protocol inside your own LAN with no HTTP, SSL, NFS, CIFS, iSCSI, H.323, SIP, DNS, ICMP, databases, or websockets, then your proposal works just fine. In fact, FCoE does not rely on an IP layer... so if that's what you want, knock yourself out :-) The problem is that you just crippled 95% of the utility of most networks by removing those IP-based services. Networks exist to share information; all operating systems on the planet share information by binding services to, and encapsulating inside, IP. That information is usually wrapped inside TCP as well. Rhetorical question: Could a bunch of determined people implement TCP and UDP services directly on top of Ethernet in all the major operating systems? Pedantic answer: Yes, but that's a colossal waste of time and resources for insignificant gain. Let's start with the basics... there is no DNS name service for Ethernet MAC addresses. That means, unless you build one, how would you resolve URLs without IP addresses? I doubt that anyone really wants to type http://00c0.9b4a.fb2c/ just so they can avoid 20 extra bytes in each packet. This is just an example of the work required. What if someone does not need to connect to any host or network outside its own network? Why does he still need to have an IP address; isn't a MAC address enough? Technically, yes. In the real world... it's a pretty boring network without IP.
{ "source": [ "https://networkengineering.stackexchange.com/questions/9571", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/3821/" ] }
9,612
I've been reading about the differences between MAC and IP addresses, and why we need both of them. To summarize, MAC addresses are physical, unchangeable, unique IDs for every single device, while IP addresses are assigned, changeable and virtual. To analogize, MAC addresses are like people who have permanent names, and IP addresses are where they currently live. In the real world, we link addresses and names with the help of a phone book. What mechanism links IP addresses to MAC addresses, and where is this mechanism located in the network?
The mechanism is called the Address Resolution Protocol (ARP). Every Ethernet IPv4 device ARPs to resolve the Ethernet MAC addresses of target IPs. IP-to-MAC mappings are stored in each device's ARP table (the phone book in your analogy). To simplify: in most cases, to resolve the MAC address associated with an IP address, you send a broadcast ARP packet (to all devices in the network) asking who has that IP address. The device with that IP address replies to the ARP with its MAC address.
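The request/reply exchange can be reproduced with the Python scapy library; this sketch broadcasts a "who-has" for one target IP and prints the MAC from the reply. The target address is a placeholder and the script needs root privileges.

    from scapy.all import Ether, ARP, srp

    target_ip = "192.0.2.20"                                   # placeholder

    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip),  # broadcast "who-has"
        timeout=2, verbose=0)

    for _, reply in answered:
        print(target_ip, "is at", reply[ARP].hwsrc)            # MAC from the "is-at" reply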
{ "source": [ "https://networkengineering.stackexchange.com/questions/9612", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/6026/" ] }
10,073
I am unsure what the following networking tools do. They all seem to do a similar thing. First, some background: I am familiar with Cisco IOS. I am doing some Linux networking experimentation with virtual machines, so I am trying to create a small virtual network. I started playing with virtual interfaces (tun/tap, loop, br, etc.) and I'd like to be able to examine the traffic going through them for debugging purposes. I'm a bit unsure of which tool to use. I know of the following:

- tshark (wireshark)
- dumpcap
- tcpdump
- ettercap

I think tshark/wireshark uses dumpcap underneath. ettercap seems to be a man-in-the-middle attack tool. Which tool (others not listed included) would you use to debug an interface?
- wireshark - powerful sniffer which can decode lots of protocols, with lots of filters.
- tshark - command-line version of wireshark.
- dumpcap (part of wireshark) - can only capture traffic, and can be used by wireshark/tshark.
- tcpdump - limited protocol decoding, but available on most *NIX platforms.
- ettercap - used for injecting traffic, not sniffing.

All of these tools use libpcap (on Windows, winpcap) for sniffing. Wireshark/tshark/dumpcap can use tcpdump filter syntax as a capture filter. As tcpdump is available on most *NIX systems, I usually use tcpdump. Depending on the problem, I sometimes use tcpdump to capture traffic and write it to a file, and then later use Wireshark to analyze it. If available, I use tshark, but if the problem gets more complicated I still like to write the data to a file and then use Wireshark for analysis.
{ "source": [ "https://networkengineering.stackexchange.com/questions/10073", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/2986/" ] }
12,858
I work for a VoIP service provider and I am working an issue with a customer who has a cable internet connection that's throwing me for a loop. He has a single block, which we'll pretend is 70.141.15.0/29, with the gateway at .1 and routers at .2 and .3. Both routers are connected to his cable modem which is, to the best of our knowledge, set to whatever cable providers imagine "bridge mode" is. I am pinging both these routers simultaneously from the same box, which is a linux system connected to fiber from (probably) Level(3). So needless to say, nobody on the planet knows how many nodes there are between here and there. But check out the ping results. To the first router:

    64 bytes from 70.141.15.2: icmp_seq=2637 ttl=47 time=45.0 ms
    64 bytes from 70.141.15.2: icmp_seq=2638 ttl=47 time=39.2 ms
    64 bytes from 70.141.15.2: icmp_seq=2639 ttl=47 time=37.3 ms
    64 bytes from 70.141.15.2: icmp_seq=2640 ttl=47 time=46.1 ms
    64 bytes from 70.141.15.2: icmp_seq=2641 ttl=47 time=45.8 ms
    64 bytes from 70.141.15.2: icmp_seq=2642 ttl=47 time=46.5 ms
    64 bytes from 70.141.15.2: icmp_seq=2643 ttl=47 time=40.9 ms

From the second:

    64 bytes from 70.141.15.3: icmp_seq=631 ttl=239 time=54.7 ms
    64 bytes from 70.141.15.3: icmp_seq=637 ttl=239 time=40.5 ms
    64 bytes from 70.141.15.3: icmp_seq=638 ttl=239 time=40.3 ms
    64 bytes from 70.141.15.3: icmp_seq=639 ttl=239 time=38.4 ms
    64 bytes from 70.141.15.3: icmp_seq=640 ttl=239 time=44.9 ms
    64 bytes from 70.141.15.3: icmp_seq=641 ttl=239 time=38.4 ms
    64 bytes from 70.141.15.3: icmp_seq=642 ttl=239 time=38.8 ms

Check out the TTL values. Does this make sense? These devices are directly adjacent to each other, plugged into that modem via separate switch ports. How can one appear to have nearly 200 more hops? From pinging other sites I'm getting the impression that TTL is just not implemented the way I think it is. I doubt there are 200 hops between me and 4.2.2.2, or woot.com, yet I get under 50 TTL results from both of those. One of these routers (the one with the higher TTL) is from Fortinet, while the other one is a custom Linux-based device. I'm pretty sure the Forti has a home-rolled network stack, while the Linux box uses whatever came with the source tarball the hardware devs downloaded. Is it likely that ICMP echo is implemented in a bizarre form on one of these and deliberately sends all replies with a TTL of 50? I also notice that one of the only sites I can find that replies with sane-looking TTL is slashdot, and I can imagine that their servers and routers might be a little less "whatever we found in the garage" than the average website, which sort of makes me feel like I might be on the right track with that last supposition. To sum up: does TTL on ping mean anything reliable whatsoever?
So needless to say, nobody on the planet knows how many nodes there are between here and there. I know how many nodes. There are exactly 16. The reason you get different responses is because different operating systems use different starting values for TTL. Some devices use 255, while others use 63. So, one of the devices you are pinging sends the reply with the TTL set to 255. By the time it gets back to you, it has decremented to 239. That's 16 hops. The other device you ping sets the TTL to 63. So when it gets to you, the value is 47. 255-239 = 63-47 = 16. If you want to be sure about the number of hops between you and the target, use traceroute.
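If you know (or guess) the sender's initial TTL, the arithmetic in this answer is a one-liner. Common initial values in practice are 64, 128 and 255, so a usual heuristic is to assume the smallest of those that is not below the observed TTL. A sketch, with the caveat that the guess can be wrong:

    COMMON_INITIAL_TTLS = (64, 128, 255)

    def estimated_hops(observed_ttl: int) -> int:
        # Guess the initial TTL as the smallest common value >= what we observed.
        initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
        return initial - observed_ttl

    print(estimated_hops(239))   # 16 (initial guessed as 255)
    print(estimated_hops(47))    # 17 (initial guessed as 64; the answer's device
                                 #     apparently started at 63, which gives 16)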
{ "source": [ "https://networkengineering.stackexchange.com/questions/12858", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/10255/" ] }
13,012
Say the switch table is empty. If computer A sends a frame destined to computer B, the switch will broadcast asking who has the mac address of B. What if C suddenly sends a frame to A? What is the mechanism so the switch doesn't mistakenly think computer C is computer B? Is it that it remembers the mac address of the destination desired by computer A, and when C tries to get to A it also contains its own mac address and the switch sees it isn't the same destination as computer A wanted? Basically I'm asking, when a switch floods for an unknown mac address for a request sent by host A, how does it know that the destination is responding to host A or if some other host just happens to be transmitting to A?
Layer 2 switches (bridges) have a MAC address table that maps MAC addresses to physical port numbers. Switches follow this simple algorithm for forwarding frames:

1. When a frame is received, the switch compares the SOURCE MAC address to the MAC address table. If the SOURCE is unknown, the switch adds it to the table along with the physical port number the frame was received on. In this way, the switch learns the MAC address and physical connection port of every transmitting device.
2. The switch then compares the DESTINATION MAC address with the table. If there is an entry, the switch forwards the frame out the associated physical port. If there is no entry, the switch sends the frame out all its physical ports except the physical port that the frame was received on (flooding).
3. If the DESTINATION is on the same port as the SOURCE (i.e. they're both on the same segment), the switch will not forward the frame.

Note that the switch does not learn the destination MAC until it receives a frame from that device.
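The forwarding algorithm above is small enough to model directly. Here is a toy sketch of a learning switch's table logic; the port numbers and MAC strings are made up, and real switches do this in hardware.

    mac_table = {}   # MAC address -> physical port

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        mac_table[src_mac] = in_port                   # learn/refresh the source
        if dst_mac in mac_table:
            out = mac_table[dst_mac]
            if out == in_port:
                return []                              # same segment: don't forward
            return [out]                               # known destination: one port
        return [p for p in all_ports if p != in_port]  # unknown destination: flood

    ports = [1, 2, 3, 4]
    print(handle_frame("AA", "BB", in_port=1, all_ports=ports))  # flood: [2, 3, 4]
    print(handle_frame("BB", "AA", in_port=2, all_ports=ports))  # A already learned: [1]
    print(handle_frame("AA", "BB", in_port=1, all_ports=ports))  # B now learned: [2]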
{ "source": [ "https://networkengineering.stackexchange.com/questions/13012", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/8025/" ] }
13,417
In discussions that have spurred from other questions on this site, I've realised that I don't have a solid understanding of when Path MTU Discovery (PMTUD) is performed. I know what it does (discover the lowest MTU on a path from client to server). I know how it does it (send progressively larger packets with their "Don't Fragment" bit set, and see how big a packet you can get through without getting an ICMP "fragmentation needed" error). My question is specifically, then: when will a host perform PMTUD? I'm looking for specific cases, not just something generic like "when a host wants to discover the path MTU". Bonus points if you can provide a packet capture of a host doing it, or provide instructions for generating such a packet capture. Also, I am specifically referring to IPv4. I know that in IPv6 transit routers aren't responsible for fragmentation, and I can imagine that PMTUD happens much more commonly. But for now, I'm looking for specific examples of PMTUD in IPv4 (although if the only packet capture you can put together of PMTUD is in IPv6, I would still love to see it).
The answer is simple: whenever the host pleases. Really. It's that simple. The explanation below assumes an IPv4-only environment, since IPv6 does away with fragmentation in the routers (forcing the host to always deal with fragmentation and MTU discovery). There is no strict rule that governs when (or even if) a host does Path MTU Discovery. The reason that PMTUD surfaced is that fragmentation is considered harmful for various reasons. To avoid packet fragmentation, the concept of PMTUD was brought to life as a workaround. Of course, a nice operating system should use PMTUD to minimize fragmentation. So, naturally, the exact semantics of when PMTUD is used depend on the sender's operating system - in particular, the socket implementation. I can only speak for the specific case of Linux, but other UNIX variants are probably not very different. In Linux, PMTUD is controlled by the IP_MTU_DISCOVER socket option. You can retrieve its current status with getsockopt(2) by specifying the level IPPROTO_IP and the IP_MTU_DISCOVER option. This option is valid for SOCK_STREAM sockets only (a SOCK_STREAM socket is a two-way, connection-oriented, reliable socket; in practice it's a TCP socket, although other protocols are possible), and when set, Linux will perform PMTUD exactly as defined in RFC 1191. Note that in practice, PMTUD is a continuous process; packets are sent with the DF bit set - including the 3-way handshake packets - you can think of it as a connection property (although an implementation may be willing to accept a certain degree of fragmentation at some point and stop sending packets with the DF bit set). Thus, PMTUD is just a consequence of the fact that everything on that connection is being sent with DF. What if you don't set IP_MTU_DISCOVER ? There's a default value. By default, IP_MTU_DISCOVER is enabled on SOCK_STREAM sockets. This can be read or changed by reading /proc/sys/net/ipv4/ip_no_pmtu_disc . A zero value means that IP_MTU_DISCOVER is enabled by default in new sockets; a non-zero means the opposite. What about connectionless sockets? This is tricky because connectionless, unreliable sockets do not retransmit lost segments. It becomes the user's responsibility to packetize the data in MTU-sized chunks. Also, the user is expected to make the necessary retransmits in case of a Message too big error. So, essentially user code must reimplement PMTUD. Nevertheless, if you're up for the challenge, you can force the DF bit by passing the IP_PMTUDISC_DO flag to setsockopt(2) . The bottomline The host decides when (and if) to use PMTUD When it uses PMTUD, it's like a connection attribute, it happens continuously (but at any point the implementation is free to stop doing so) Different operating systems use different approaches, but usually, reliable, connection-oriented sockets perform PMTUD by default, whereas unreliable, connectionless sockets don't
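For the Linux case described above, here is a small, hedged sketch of poking at those knobs from Python on a UDP socket; the numeric fallbacks are the <linux/in.h> values and the destination address is just a documentation-range placeholder:

    # Linux-only sketch: force the DF bit (PMTUD) on a UDP socket and read the
    # path MTU the kernel has cached for this destination.
    import socket

    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)   # <linux/in.h> values
    IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)
    IP_MTU          = getattr(socket, "IP_MTU", 14)

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # always set DF
    s.connect(("192.0.2.1", 9))      # placeholder destination, discard port
    try:
        s.send(b"x" * 1400)          # with DF set, an oversized datagram fails
    except OSError as err:           # instead of being fragmented
        print("send failed (Message too big?):", err)
    print("kernel's cached path MTU:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))
    s.close()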
{ "source": [ "https://networkengineering.stackexchange.com/questions/13417", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/3675/" ] }
16,005
I'm setting up a Cisco 2901 router. I have a login password on the console line, and the vty lines are configured to only accept ssh connections with public key authentication. The auxiliary line is shut down. There are only two admins who will be accessing the router and we are both authorized to perform any configuration on the router. I'm not an expert on Cisco gear, but I consider this adequate to secure access to the router configuration. However, every single guide I've read states I should set an enable secret, regardless of any other user or line passwords. Is there something more to the enable password that I'm not aware of? Is there any other way to access the router than the console, auxiliary, or vty lines? EDIT: I've added the actual configuration below to be more clear about my situation. The following works without requiring an enable password or a username config aside from the one within ip ssh pubkey-chain:
aaa new-model
ip ssh time-out 60
ip ssh authentication-retries 2
ip ssh version 2
ip ssh pubkey-chain
 username tech
  key-hash ssh-rsa [HASH]
ip scp server enable
line vty 0 4
 transport input ssh
No, you don't -- technically. But whether you can enter enable mode without one depends on how you log in. Here's the instant gratification version: You can enter via the console without an enable password, but you will be stuck in user mode if you use a simple vty login password without an enable password set. Here's the long-winded StackExchange answerer version: Cisco authentication is kind of a mess for a beginner. There's a lot of legacy baggage there. Let me try to break this down in a real-world sense. Everyone that has any business logging into a router or switch pretty much goes directly to privileged (enable) mode. The user mode is basically a front lobby, and serves little more purpose than to keep the draft out. In large organizations where you have vast networks and equally vast pools of labor, it may be justifiable to have someone who can knock on the front door and make sure someone is still there. (That is, to log in and run the most trivial commands just to see that the device is, in fact, responding and not on fire.) But in every environment I've ever worked in, tier 1 had at least some ability to break things. As such, and particularly in a scenario like yours, knowing the enable password is obligatory to get anything done. You could say this is a second level of security -- one password to enter the device, another to escalate to administrative privilege -- but that seems a little bit silly to me. As already noted, you can (and many people do) use the same password, which doesn't help much if someone has gained unauthorized access via telnet/ssh. Having static, global passwords shared by everyone is arguably more of an issue than having just one token required to enter. Finally, most other systems (services, appliances, etc.) don't require a second layer of authentication, and are not generally considered insecure because of this. OK, that's my opinion on the topic. You'll have to decide for yourself whether it makes sense in light of your own security stance. Let's get down to business. Cisco (wisely) requires you to set a remote access password by default. When you get into line configuration mode... router> enable router# configure terminal router(config)# line vty 0 15 router(config-line)# ...you can tell the router to skip authentication: router(config-line)# no login ...and promptly get hacked, but your attacker will end up in user mode. So if you have an enable password set, at least you have somewhat limited the damage that can be done. (Technically, you can't go any further without an enable password either. More on that in a moment...) Naturally, no one would do this in real life. Your minimum requirement, by default and by common sense, is to set a simple password: router(config-line)# login router(config-line)# password cisco Now, you will be asked for a password, and you will again end up in user mode. If you're coming in via the console, you can just type enable to get access without having to enter another password. But things are different via telnet, where you will probably get this instead: $ telnet 10.1.1.1 Trying 10.1.1.1... Connected to 10.1.1.1. Escape character is '^]'. User Access Verification Password: ***** router> enable % No password set router> Moving on... You probably already know that, by default, all your configured passwords show up as plain text: router# show run | inc password no service password-encryption password cisco This is one of those things that tightens the sphincter of the security-conscious. 
Whether it's justified anxiety is again something you have to decide for yourself. On one hand, if you have sufficient access to see the configuration, you probably have sufficient access to change the configuration. On the other hand, if you happen to have carelessly revealed your configuration to someone who doesn't have the means themselves, then ... well, now they do have the means. Luckily, that first line in the snippet above, no service password-encryption , is the key to changing that: router(config)# service password-encryption router(config)# line vty 0 15 router(config-line)# password cisco Now, when you look at the configuration, you see this: router(config-line)# do show run | begin line vty line vty 0 4 password 7 01100F175804 login line vty 5 15 password 7 01100F175804 login ! ! end This is marginally better than plain-text passwords, because the displayed string isn't memorable enough to shoulder-surf. However, it's trivial to decrypt -- and I use that term loosely here. You can literally paste that string above into one of a dozen JavaScript password crackers on the first Google results page, and get the original text back immediately. These so-called "7" passwords are commonly considered "obfuscated" rather than "encrypted" to highlight the fact that it is just barely better than nothing. As it turns out, however, all those password commands are deprecated. (Or if they're not, they should be.) That's why you have the following two options: router(config)# enable password PlainText router(config)# enable secret Encrypted router(config)# do show run | inc enable enable secret 5 $1$sIwN$Vl980eEefD4mCyH7NLAHcl enable password PlainText The secret version is hashed with a one-way algorithm, meaning the only way to get the original text back is by brute-force -- that is, trying every possible input string until you happen to generate the known hash. When you enter the password at the prompt, it goes through the same hashing algorithm, and should therefore end up generating the same hash, which is then compared to the one in the configuration file. If they match, your password is accepted. That way, the plain text isn't known to the router except during the brief moment when you are creating or entering the password. Note: There's always the chance some other input can generate the same hash, but statistically it's a very low (read: negligible) probability. If you were to use the above configuration yourself, the router will allow both the enable password and enable secret lines to exist, but the secret wins from the password prompt. This is one of those Cisco-isms that doesn't make much sense, but it's the way it is. Furthermore, there's no secret equivalent command from line configuration mode, so you're stuck with obfuscated passwords there. Alright, so we now have a password that can't be recovered (easily) from the config file -- but there's still one problem. It's being transmitted in plain text when you log in via telnet. No good. We want SSH. SSH, being designed with more robust security in mind, requires a little extra work -- and an IOS image with a certain feature set. One big difference is that a simple password is no longer good enough. You need to graduate to user-based authentication. 
And while you're at it, set up an encryption key pair: router(config)# username admin privilege 15 secret EncryptedPassword router(config)# line vty 0 15 router(config-line)# transport input ssh router(config-line)# no password router(config-line)# login local router(config-line)# exit router(config)# ip ssh version 2 router(config)# crypto key generate rsa modulus 1024 Now you're cooking with gas! Notice this command uses secret passwords. (Yes, you can, but shouldn't, use password ). The privilege 15 part allows you to bypass user mode entirely. When you log in, you go straight to privileged mode: $ ssh [email protected] Password: ***** router# In this scenario, there's no need to use an enable password (or secret.) If you're not yet thinking, "wow... what a clusterfudge that was", bear in mind there's a whole other long-winded post still lurking behind the command aaa new-model , where you get to dive into things like external authentication servers (RADIUS, TACACS+, LDAP, etc.), authentication lists (which define the sources to use, and in which order), authorization levels, and user activity accounting. Save all that for a time when you feel like getting locked out of your router for a while. Hope that helps!
{ "source": [ "https://networkengineering.stackexchange.com/questions/16005", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/13237/" ] }
16,438
I'm reading Computer Networks - A Systems Approach 5th ed., and I came across the following statistics for the speed of light through different mediums: Copper – 2.3 × 10^8 m/s; Fiber – 2.0 × 10^8 m/s. So, are these figures wrong, or is there another reason to explain why copper is worse than fiber? Does fiber have better bandwidth (per volume) or something?
No, the numbers are right (Page 46). If I can reword your question, it's "Why should I use fiber if the propagation delay is worse than copper?" You are assuming that propagation delay is an important characteristic. In fact (as you'll see a few pages later), it rarely is. Fiber has four characteristics that make it superior to copper in many (but not all) scenarios. Higher bandwidth. Because fiber uses light, it can be modulated at a much higher frequency than electrical signals on copper wire, giving you a much higher bandwidth. Also the maximum modulation frequency on copper wire is highly dependent on the length -- inductance and capacitance increase with length, reducing the maximum modulation frequency. Longer distance. Light over fiber can travel tens of kilometers with little attenuation, which makes it ideal for long distance connections. Less interference. Because fiber uses light, it is impervious to electromagnetic interference. That makes it best for "noisy" electromagnetic environments. Electrical isolation. Fiber does not conduct electricity, so it can electrically isolate devices. But fiber has drawbacks too. Expense. The optical transmitters and receivers can be expensive ($100's) and have more stringent environmental requirements than copper wire. Fiber optic cable is more fragile than wire. If you bend it too sharply, it will fracture. Copper wire is much more tolerant of movement and bending. Difficult to terminate. Placing a connector on an optical fiber strand requires precision tools, technique, and expertise. Fiber cables are usually terminated by trained specialists. In comparison, you can terminate a copper cable in seconds with little or no training.
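A quick back-of-the-envelope comparison (the 100 km link and the 1 GB download at 30 Mbit/s are arbitrary example figures) shows why the small propagation-speed difference rarely matters in practice:

    # Propagation-delay difference vs. time spent actually pushing bits.
    distance_m = 100_000        # 100 km link, example value
    v_copper   = 2.3e8          # m/s, figure from the question
    v_fiber    = 2.0e8          # m/s, figure from the question

    prop_copper = distance_m / v_copper          # ~0.43 ms one way
    prop_fiber  = distance_m / v_fiber           # ~0.50 ms one way
    print(f"propagation difference: {(prop_fiber - prop_copper) * 1e3:.3f} ms")

    file_bits = 1e9 * 8                          # a 1 GB download
    rate_bps  = 30e6                             # at 30 Mbit/s
    print(f"time to push the bits: {file_bits / rate_bps:.0f} s")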
{ "source": [ "https://networkengineering.stackexchange.com/questions/16438", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/13664/" ] }
18,280
As is clear from the title, why do switches need ARP tables when the translations are done on the machines' side? Roughly speaking, why are there two ARP tables, on machines and on switches? Isn't the one on the switch sufficient?
This is a pretty common misconception or more specifically, a terminology problem. In a layer two switch, there is not an ARP table, only a forwarding table. The switch records each src MAC address it sees inbound in the forwarding table, and attributes it to the port so frames with a dst MAC will only get sent to the port known for that MAC. Many people call this an "arp table" or "arp cache" even though it is neither. In a managed layer two switch, there is a forwarding table plus an ARP table but the latter is only used for the management interface to talk to interested hosts (i.e. the PC you are using to configure the switch.) In a managed layer 3 switch there will be a forwarding table plus an ARP table, since it needs it for the management interface plus router functionality exists to perform forwarding between subnets.
{ "source": [ "https://networkengineering.stackexchange.com/questions/18280", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/13402/" ] }
18,873
In context of networking I keep running into statements such as "east-west traffic is larger than north-south traffic". I have tried a bit of googling but have not been able to locate an authoritative answer to understand the origin of these terms. What is the definition of north-south traffic? What is the definition of east-west traffic? What is the origin of these terms?
The terms are usually used in the context of data centers. Generally speaking, "east-west" traffic refers to traffic within a data center -- i.e. server to server traffic. "North-south" traffic is client to server traffic, between the data center and the rest of the network (anything outside the data center). I believe the terms have come into use from the way network diagrams are typically drawn, with servers or access switches spread out horizontally, and external connections at the top or bottom.
{ "source": [ "https://networkengineering.stackexchange.com/questions/18873", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/15708/" ] }
19,377
... or does the "default VLAN" have some wider meaning? Also, can it / should it be changed? For instance, if a switch is going into part of a network that is only one VLAN and it's not VLAN 1, is it possible to make the "default" / native VLAN on all ports a particular VLAN using one global command, or is the preferred method to make all ports access ports and set the access VLAN to 10 on each of them?
This is an often confused point for people new to the Networking, in particular to people coming up the Cisco track, due to Cisco's over emphasis on this point. It is more or less just a terminology thing. Let me explain. The 802.1q standard defines a method of tagging traffic between two switches to distinguish which traffic belongs to which VLANs. In Cisco terms, this is what happens on a " trunk " port. I've seen other vendors refer to this as a "tagged" port. In this context, it means the same: adding an identifier to frames to indicate what VLAN the frame belongs to. Terminology aside, the main thing to keep in mind is a VLAN tag is necessary, because often the traffic traversing two switches belongs to multiple VLANs, and there must be a way to determine which 1's and 0's belong to which VLAN. But what happens if a trunk port, who is expecting to receive traffic that includes the VLAN tag, receives traffic with no tag? In the predecessor to 802.1q, known as ISL (cisco proprietary, but archaic, no one supports it anymore, not even Cisco), untagged traffic on a trunk would simply be dropped. 802.1q however, provided for a way to not only receive this traffic, but also associate it to a VLAN of your choosing. This method is known as setting a Native VLAN . Effectively, you configure your trunk port with a Native VLAN, and whatever traffic arrives on that port without an existing VLAN tag, gets associated to your Native VLAN. As with all configuration items, if you do not explicitly configure something, usually some sort of default behavior exists. In the case of Cisco (and most vendors), the Default Native VLAN is VLAN 1. Which is to say, if you do not set a Native VLAN explicitly, any untagged traffic received on a trunk port is automatically placed in VLAN 1. The trunk port is the "opposite" (sort of) from what is known as an Access Port . An access port sends and expects to receive traffic with no VLAN tag. The way this can work, is that an access port also only ever sends and expects to receive traffic belonging to one VLAN . The access port is statically configured for a particular VLAN, and any traffic received on that port is internally associated on the Switch itself as belonging to a particular VLAN (despite not tagging traffic for that VLAN when it leaves the switch port). Now, to add to the confusing mix. Cisco books will often refer to the "default VLAN". The Default VLAN is simply the VLAN which all Access Ports are assigned to until they are explicitly placed in another VLAN. In the case of Cisco switches (and most other Vendors), the Default VLAN is usually VLAN 1. Typically, this VLAN is only relevant on an Access port, which is a port that sends and expects to receive traffic without a VLAN tag (also referred to an 'untagged port' by other vendors). So, to summarize: The Native VLAN can change . You can set it to anything you like. The Access Port VLAN can change . You can set it to anything you like. The Default Native VLAN is always 1, this can not be change, because its set that way by Cisco The Default VLAN is always 1, this can not be changed, because it is set that way by Cisco edit: forgot your other questions: Also, can it / should it be changed? This is largely an opinion question. I tend to agree with this school of thought: All unused ports should be in a specific VLAN. All active ports should be explicitly set on to a particular VLAN. 
Your switch should then prevent traffic from traversing the uplink into the rest of your network if the traffic belongs on VLAN 1, or the VLAN you are using for unused ports. Everything else should be allowed up the uplink. But there are many different theories behind this, as well as differing requirements which would prevent having such a restricted switch policy (scale, resources, etc). For instance, if a switch is going into part of a network that is only one VLAN and it's not VLAN 1, is it possible to make the "default" / native VLAN on all ports a particular VLAN using one global command, or is the preferred method to make all ports access ports and set the access VLAN to 10 on each of them? You cannot change the default Cisco configurations. You can use the "interface range" command to put all ports in a different VLAN in one go. You don't really need to change the Native VLAN on the uplink trunk, so long as the other switch is using the same Native VLAN. If you really want to spare the switch from adding the VLAN tag, you could get creative and do the following (although it's probably not recommended). Leave all access ports in VLAN 1. Leave the Native VLAN at its default (VLAN 1). On the uplink switch, set the port as a trunk port, and set its Native VLAN to the VLAN you want the lower switch to be a part of. Since the lower switch will send traffic to the upper switch untagged, the upper switch will receive it and associate it with what it considers the Native VLAN.
{ "source": [ "https://networkengineering.stackexchange.com/questions/19377", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/16081/" ] }
19,445
According to wikipedia host is A network host is a computer or other device connected to a computer network. A network host may offer information resources, services , and applications to users or other nodes on the network. A network host is a network node that is assigned a network layer host address . Computer is a host. Printers also provide services and have ip address. So in these which are really host? router, printer, "Camera in network", switch I'm totally confused with these things. Thanks in advance
I actually like the way the IPv6 RFC defines it: 2. Terminology node - a device that implements IPv6. router - a node that forwards IPv6 packets not explicitly addressed to itself. host - any node that is not a router. So in your list: router, printer, "Camera in network", switch A router is a node and a router (and therefore, by the definition above, not a host) A printer is a node, and a host* A Camera is a node, and a host* *(Provided that it has an IP address configured) A switch is tricky, because it comes down to how it is configured: A switch without an IP address configured is neither a host, nor a router, nor a node A switch with an IP address configured is a node and a host for the interface/vlan with the configured IP. For all the other ports, it can be considered a switch without an IP address. (Both bullet points above consider a switch that is not participating in IP routing. If it is, then you could consider it a Router, and the bullet points above these two can be applied)
{ "source": [ "https://networkengineering.stackexchange.com/questions/19445", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/16027/" ] }
19,456
I was watching a Meraki AP presentation, and one of the features they showed was an analytic tab, which showed the number of devices that passed by, visitors, and connected to the AP, as well as many other things. I'm assuming passerby is defined as devices that are within the range of connectivity. Perhaps my understanding of AP isn't correct, but I thought a WAP would be something like my wireless router, that allows devices to connect to a network. Is the reason I'm not able to see devices that are "passerbys" in my router that my router doesn't have that feature? How does a WAP see a passerby? Like is a mobile emitting something that the router can pick up? Lastly, is it possible for a device to hide itself from WAPs?
{ "source": [ "https://networkengineering.stackexchange.com/questions/19456", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/16145/" ] }
19,737
I would like to display all traffic for or from a specific MAC address. For that I tried sudo tcpdump host aa:bb:cc:11:22:33 It does not work and returns me an error tcpdump: pktap_filter_packet: pcap_add_if_info(en0, 1) failed: pcap_add_if_info: pcap_compile_nopcap() failed I don't know how to interpret this error message and I don't know how to solve the problem. Any help ?
You have used the following as your packet filter: host aa:bb:cc:11:22:33 As it stands, this is looking for an IP or hostname but you are giving it a MAC address. To use a MAC address, you need to include the ether packet filter primitive. In your case, the following should work: sudo tcpdump ether host aa:bb:cc:11:22:33 Or, if it needs you to specify the interface, then it would be something like: sudo tcpdump -i eth0 ether host aa:bb:cc:11:22:33
{ "source": [ "https://networkengineering.stackexchange.com/questions/19737", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/16335/" ] }
19,840
I am still struggling to understand to what extent CIDR really renders IP address classes obsolete. Here's what I understood so far: It's ridiculously inefficient (and impossible, too) to assign every organization that needs to address more than 255 hosts a class B address, which could technically address 65535 hosts. However, if such an organization needed to address, say, approximately 700 hosts, one could just assign three (preferably contiguous) class C network addresses to that organization. E.g.: 192.42.42 192.42.43 192.42.44 Problem: For that one organization, routers would have to store three entries in their forwarding tables, which won't scale. CIDR solves this problem by introducing route summarization/aggregation, enabling the ISP that assigned the three class C networks to the organization to advertise only one prefix to the rest of the world. E.g., 192.42.42.0/21 So far, so good. However, I just can't grasp why every resource I touch claims that classful addressing is "a thing of the past". After all, the ISP is in charge of, say, class C network addresses, and does assign these to its customers. CIDR just fixes the problem of multiple entries in the forwarding tables, right? Thus, IP address classes are still around, are they not? Exam's coming up, so help is much appreciated. :P
Address delegation really used to happen in three sizes: class A, B and C. Class A delegations would be given from a certain address range, class B delegations from a different range etc. Because the different classes used different address ranges you could determine the class by looking at the first part of an address. And this was built into the routing protocols. Class A delegations contained 16777216 addresses each Class B delegations contained 65536 addresses each Class C delegations contained 256 addresses each This was very inefficient for networks that didn't fit these sizes. A network that needed 4096 addresses would either get sixteen Class C delegations (which would be bad for the global routing table because each of them would have to be routed separately: the class size was built into the protocol) or they would get one Class B delegation (which would waste a lot of addresses). In 1993 CIDR was introduced. The protocols were adjusted to be able to deal with prefixes of different sizes and it became possible to route (both internally and externally) prefixes like a /30 or a /21 or a /15 etc etc. Anything between /0 and /32 became possible. Organisations that needed 2048 addresses could get a /21: exactly what they would need. The way you could internally subdivide those addresses was also limited. There were rules on how you could subnet. Originally each subnet within your classful network had to be the same size. You need one subnet with 128 addresses and another subnet with 16 addresses: too bad. Variable Length Subnet Masking (VLSM) is the internal-network equivalent of CIDR. VLSM has existed longer than CIDR. It was already mentioned in 1985. So CIDR is basically extending VLSM to inter-domain routing. With VLSM your subnets don't all have to be the same size anymore. You can assign a different number of addresses for each subnet, depending on your needs. These days all routing on the internet is done without classes. A prefix in the routing table might by coincidence (or because of history) match the classful structure, but protocols will no longer assume they can deduce the prefix length (subnet mask) from the first part of the address. All prefix lengths are explicitly communicated: classless. Saying that an ISP is in charge of a Class C network is similarly obsolete. Addresses are distributed completely classless by the RIRs ( Regional Internet Registries , the organisations responsible for delegating addresses to ISPs and businesses with their own independent addresses). IPv4 addresses classes really don't exist anymore, and have been deprecated in 1993. If you look at old obsolete routing protocols you can of course still see the assumptions they made based on address class, but that was 20 years ago...
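If it helps to see the classless arithmetic concretely, Python's standard ipaddress module can do it; the prefixes below reuse the ones from the question and show that arbitrary /24s only summarise where the alignment allows:

    # Classless prefix arithmetic with the standard library.
    import ipaddress

    # An organisation needing ~2000 addresses gets exactly a /21:
    print(ipaddress.ip_network("192.42.40.0/21").num_addresses)   # 2048

    # The three /24s from the question collapse only partially:
    blocks = [ipaddress.ip_network(p) for p in
              ("192.42.42.0/24", "192.42.43.0/24", "192.42.44.0/24")]
    print(list(ipaddress.collapse_addresses(blocks)))
    # [IPv4Network('192.42.42.0/23'), IPv4Network('192.42.44.0/24')]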
{ "source": [ "https://networkengineering.stackexchange.com/questions/19840", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/-1/" ] }
23,959
My employer requires me to first log on to a VPN, and only then I can SSH into the servers. But, given the security of SSH, is a VPN overkill? What is the use of a VPN in terms of security if I am already using SSH ?
The reasoning behind your current setup is probably some combination of the following three reasons. The VPN is a security solution for outside your company's network (see #1 below). SSH, however, might be a second layer of security outside of your company's network... but its main purpose is to secure the traffic within your company's network (see #2 below). VPN is also necessary if the device you are trying to SSH into is using a private address on your company's network (see #3 below). VPN creates the tunnel to your company network that you push data through. Thus no one seeing the traffic between you and your company's network can actually see what you're sending. All they see is the tunnel. This prevents people that are outside the company network from intercepting your traffic in a way that is useful. SSH is an encrypted way of connecting to devices on your network (as opposed to Telnet, which is clear text). Companies often require SSH connections even on a company network for security's sake. If I have installed malware on a network device and you telnet into that device (even if you're coming through a VPN tunnel - as the VPN tunnel usually terminates at the perimeter of a company's network), I can see your username and password. If it's SSH you're using, then I cannot. If your company is using private addressing for the internal network, then the device you are connecting to may not be routable over the internet. Connecting via a VPN tunnel would be like you are directly connected in the office; therefore you would use the internal routing of the company network that would not be reachable outside of the company network.
{ "source": [ "https://networkengineering.stackexchange.com/questions/23959", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/4339/" ] }
24,068
The TCP 3-way handshake works like this: Client ------SYN-----> Server Client <---ACK/SYN---- Server Client ------ACK-----> Server Why not just this? Client ------SYN-----> Server Client <-----ACK------ Server
Break down the handshake into what it is really doing. In TCP, the two parties keep track of what they have sent by using a Sequence number. Effectively it ends up being a running byte count of everything that was sent. The receiving party can use the opposite speaker's sequence number to acknowledge what it has received. But the sequence number doesn't start at 0. It starts at the ISN (Initial Sequence Number), which is a randomly chosen value. And since TCP is a bi-directional communication, both parties can "speak", and therefore both must randomly generate an ISN as their starting Sequence Number. Which in turn means, both parties need to notify the other party of their starting ISN. So you end up with this sequence of events for a start of a TCP conversation between Alice and Bob: Alice ---> Bob SYNchronize with my Initial Sequence Number of X Alice <--- Bob I received your syn, I ACKnowledge that I am ready for [X+1] Alice <--- Bob SYNchronize with my Initial Sequence Number of Y Alice ---> Bob I received your syn, I ACKnowledge that I am ready for [Y+1] Notice, four events are occurring: Alice picks an ISN and SYNchronizes it with Bob. Bob ACKnowledges the ISN. Bob picks an ISN and SYNchronizes it with Alice. Alice ACKnowledges the ISN. In actuality though, the middle two events (#2 and #3) happen in the same packet. What makes a packet a SYN or ACK is simply a binary flag turned on or off inside each TCP header , so there is nothing preventing both of these flags from being enabled on the same packet. So the three-way handshake ends up being: Bob <--- Alice SYN Bob ---> Alice SYN ACK Bob <--- Alice ACK Notice the two instances of "SYN" and "ACK", one of each, in both directions. So to come back to your question, why not just use a two-way handshake? The short answer is because a two way handshake would only allow one party to establish an ISN, and the other party to acknowledge it. Which means only one party can send data. But TCP is a bi-directional communication protocol, which means either end ought to be able to send data reliably. Both parties need to establish an ISN, and both parties need to acknowledge the other's ISN. So in effect, what you have is exactly your description of the two-way handshake, but in each direction . Hence, four events occurring. And again, the middle two flags happen in the same packet. As such three packets are involved in a full TCP connection initiation process.
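If you want to watch those flags and sequence numbers yourself, here is a hedged sketch using scapy (assumes scapy is installed, raw-socket/root privileges, and a placeholder target of 198.51.100.10:80; also note the local kernel, which never saw this handshake, may answer the SYN/ACK with a RST unless you filter that):

    # Hand-rolled three-way handshake with scapy, for observation only.
    from scapy.all import IP, TCP, sr1, send

    ip  = IP(dst="198.51.100.10")            # placeholder address
    isn = 1000                               # our Initial Sequence Number

    synack = sr1(ip / TCP(sport=40000, dport=80, flags="S", seq=isn), timeout=2)
    if synack is not None:
        print("their ISN:", synack[TCP].seq, "their ack:", synack[TCP].ack)  # ack == isn + 1
        # Third packet: ACKnowledge their ISN + 1 to complete the handshake.
        send(ip / TCP(sport=40000, dport=80, flags="A",
                      seq=isn + 1, ack=synack[TCP].seq + 1))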
{ "source": [ "https://networkengineering.stackexchange.com/questions/24068", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/13856/" ] }
24,132
This will be a kind of newbie question but I am not quite sure why we really need IPv6. AFAIK, the story is as follows: In the olden days, when computers were not plentiful, 32 bit IP addresses were enough for everybody. At these times, the subnet mask was implicit. Then the number of computers have increased and 32 bits started to become insufficient. So the subnet mask started to become explicit. Essentially the size of an IP address has increased. My question is, what is the downside of continuing the addressing with the subnet masks? For example when they become insufficient as well, can't we continue with using "subnet-subnet masks" etc.? I understand that it consumes more space than the original IPv4 (and maybe not much different than using IPv6) but aren't explicit subnet masks a sufficient solution? If not, why are they an insufficient solution?
Two things are getting confused here: classful addressing vs CIDR Masquerading / NAT Going from classful addressing to Classless Inter Domain Routing (CIDR) was an improvement that made the address distribution to ISPs and organisations more efficient, thereby also increasing the lifetime of IPv4. In classful addressing an organisation would get one of these: a class A network (a /8 in CIDR terms, with netmask 255.0.0.0) a class B network (a /16 in CIDR terms, with netmask 255.255.0.0) a class C network (a /24 in CIDR terms, with netmask 255.255.255.0) All of these classes were allocated from fixed ranges. Class A contained all addresses where the first digit was between 1 and 126, class B was from 128 to 191 and class C from 192 to 223. Routing between organisations had all of this hard-coded into the protocols. In the classful days when an organisation would need e.g. 4000 addresses there were two options: give them 16 class C blocks (16 x 256 = 4096 addresses) or give them one class B block (65536 addresses). Because of the sizes being hard-coded the 16 separate class C blocks would all have to be routed separately. So many got a class B block, containing many more addresses than they actually needed. Many large organisations would get a class A block (16,777,216 addresses) even when only a few hundred thousand were needed. This wasted a lot of addresses. CIDR removed these limitations. Classes A, B and C don't exist anymore (since ±1993) and routing between organisations can happen on any prefix length (although something smaller than a /24 is usually not accepted to prevent lots of tiny blocks increasing the size of routing tables). So since then it was possible to route blocks of different sizes, and allocate them from any of the previously-classes-A-B-C parts of the address space. An organisation needing 4000 addresses could get a /20, which is 4096 addresses. Subnetting means dividing your allocated address block into smaller blocks. Smaller blocks can then be configured on physical networks etc. It doesn't magically create more addresses. It only means that you divide your allocation according to how you want to use it. What did create more addresses was Masquerading, better known as NAT (Network Address Translation). With NAT one device with a single public address provides connectivity for a whole network with private (internal) addresses behind it. Every device on the local network thinks it is connected to the internet, even when it isn't really. The NAT router will look at outbound traffic and replace the private address of the local device with its own public address, pretending to be the source of the packet (which is why it was also known as masquerading). It remembers which translations it has made so that for any replies coming back it can put back the original private address of the local device. This is generally considered a hack, but it worked and it allowed many devices to send traffic to the internet while using less public addresses. This extended the lifetime of IPv4 immensely. It is possible to have multiple NAT devices behind each other. This is done for example by ISPs that don't have enough public IPv4 addresses. The ISP has some huge NAT routers that have a handful of public IPv4 addresses. The customers are then connected using a special range of IPv4 addresses ( 100.64.0.0/10 , although sometimes they also use normal private addresses) as their external address. 
The customers then again have NAT router that uses that single address they get on the external side and performs NAT to connect a whole internal network which uses normal private addresses. There are a few downsides to having NAT routers though: incoming connections: devices behind a NAT router can only make outbound connections as they don't have their own 'real' address to accept incoming connections on port forwarding: this is usually made less of a problem by port forwarding, where the NAT routed dedicates some UDP and/or TCP ports on its public address to an internal device. The NAT router can then forward incoming traffic on those ports to that internal device. This needs the user to configure those forwardings on the NAT router carrier grade NAT: is where the ISP performs NAT. Yyou won't be able to configure any port forwarding, so accepting any incoming connections becomes (bit torrent, having your own VPN/web/mail/etc server) impossible fate sharing: the outside world only sees a single device: that NAT router. Therefore all devices behind the NAT router share its fate. If one device behind the NAT router misbehaves it's the address of the NAT router that ends up on a blacklist, thereby blocking every other internal device as well redundancy: a NAT router must remember which internal devices are communicating through it so that it can send the replies to the right device. Therefore all traffic of a set of users must go through a single NAT router. Normal routers don't have to remember anything, and so it's easy to build redundant routes. With NAT it's not. single point of failure: when a NAT router fails it forgets all existing communications, so all existing connections through it will be broken big central NAT routers are expensive As you can see both CIDR and NAT have extended the lifetime of IPv4 for many many years. But CIDR can't create more addresses, only allocate the existing ones more efficiently. And NAT does work, but only for outbound traffic and with higher performance and stability risks, and less functionality compared to having public addresses. Which is why IPv6 was invented: Lots of addresses and public addresses for every device. So your device (or the firewall in front of it) can decide for itself which inbound connections it wants to accept. If you want to run your own mail server that is possible, and if you don't want anybody from the outside connecting to you: that's possible too :) IPv6 gives you the options back that you used to have before NAT was introduced, and you are free to use them if you want to.
{ "source": [ "https://networkengineering.stackexchange.com/questions/24132", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/19156/" ] }
26,694
I thought the 1000M meant that it was capable of Gigabit speed, but it isn't; I noticed then that it stated 10/100 Mbps. What does this 1000M mean?
"I thought the 1000M meant that it was capable of Gigabit speed" - The 1000M is clearly labelling indicator lights showing that the ports in question are running at gigabit speeds. "but it isn't, I noticed then that it stated 10/100 Mbps" - Have you tested all the ports? The line you highlighted part of says "24 port 10/100 + 2 SFP/1000T combo". "24 port 10/100" indicates that your switch has 24 ports that are capable of 10/100 speeds. "2 SFP/1000T combo" indicates that your switch has two gigabit ports which can be used either directly for 1000BASE-T copper or used with an SFP module (for gigabit fiber). Since only ports 25 and 26 have "1000M" lights it's pretty clear that ports 1-24 are the 10/100 ports and ports 25-26 are the gigabit ports. If you connect two devices to ports 25 and 26 you should get gigabit speeds between them. If you don't, you might want to check the configuration to make sure no one has locked the ports to a lower speed. It's very common in switches to have a couple of ports that are faster than the rest. This makes sense because in general you want your backbones to be faster than your access ports.
{ "source": [ "https://networkengineering.stackexchange.com/questions/26694", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/22783/" ] }
27,830
I know that both TCP and UDP are built on top of IP, and I know the differences between TCP & UDP, but I'm confused about what exactly "raw ip" is. Would it be fair to say that TCP & UDP both implement IP, but that IP in and of itself isn't capable of transferring data? Or is IP some very low level form of communication, which is further abstracted by TCP and UDP?
IP is a Layer 3 protocol. TCP/UDP are Layer 4 protocols. They each serve different purposes. Layer 3 is in charge of end-to-end delivery. Its sole function is adding whatever is necessary to a packet to get a packet from one host to another. Layer 4 is in charge of service-to-service delivery. Its sole function is to segregate data streams. Your computer can have multiple programs running, each of which sends/receives bits on to the wire. For example, you could have multiple browser tabs running, streaming internet radio, running a download, running some legal torrents, using a chat application, etc. All of these receive 1s and 0s from the wire, and Layer 4 segregates each data stream to the unique application that needs it. IP is unable to deliver a packet to the correct service/application. And TCP/UDP is unable to deliver a packet from one end of the internet to the other. Both TCP and IP work together to enable them both to achieve the "end-goal" of Internet communication. Data that needs to get from one host to another is generated by the upper layers of the OSI model. This data is passed down to L4, which will add the information necessary to deliver the data from service to service, like a TCP header with a Source and Destination Port. The Data and the L4 header are now referred to as a segment. Then the Segment will be passed to L3, which will add the information necessary to deliver the segment from end to end, like an IP header with a Source and Destination IP address. The L3 header and the segment can now be referred to as a Packet. This process is known as Encapsulation and De-encapsulation (or sometimes decapsulation). If this doesn't make sense, I suggest reading more about the OSI model, and how each layer has different responsibilities that all work together to accomplish moving a packet across the Internet.
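To make the split visible in bytes, here is a small, hedged sketch that packs a bare-bones IPv4 header (which carries host addresses but no ports) and a UDP header (which carries ports but no addresses); the field values are arbitrary and both checksums are left at zero, so this is for inspection rather than for putting on the wire:

    # Layer 3 vs Layer 4 in 28 bytes.
    import socket, struct

    src = socket.inet_aton("192.0.2.10")        # example addresses
    dst = socket.inet_aton("198.51.100.20")
    payload = b"hello"

    # UDP header: source port, destination port, length, checksum.
    udp = struct.pack("!HHHH", 5353, 53, 8 + len(payload), 0)

    # IPv4 header: no ports anywhere, only "which host to which host".
    ip = struct.pack("!BBHHHBBH4s4s",
                     (4 << 4) | 5,                  # version 4, header length 5 words
                     0,                             # DSCP/ECN
                     20 + len(udp) + len(payload),  # total length
                     0, 0,                          # identification, flags/fragment
                     64,                            # TTL
                     17,                            # protocol 17 = UDP
                     0,                             # checksum left at 0 in this sketch
                     src, dst)

    print("IPv4 header:", ip.hex())
    print("UDP header: ", udp.hex())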
{ "source": [ "https://networkengineering.stackexchange.com/questions/27830", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/22916/" ] }
29,321
I have a relatively dumb question. Suppose the Switch just started, and it received a frame that contains a destination MAC address for a network device not in its MAC addresses table. What happens then? Does it broadcast (MAC address ff:ff:ff:ff:ff:ff ) and receive answers from connected devices, or is there a protocol dedicated for that which is used? I don't think the switch uses the ARP (Address Resolution Protocol)?
Good question. Here is what happens, step by step: When Host A sends the frame, the switch does not have anything in its MAC address table. Upon receiving the frame, it records Host A's MAC Address to Switch Port mapping. Since it doesn't know where the destination MAC address is, it floods the frame out all ports. This assures that if host B exists (which, at this point, the switch does not know), it will receive it. Hopefully, upon receiving the frame, Host B will generate a response frame, which will allow the Switch to learn the MAC address mapping from the return frame. You can read more about how a switch works, and how a packet moves through a network, in the original article series. One last note regarding the terms Flooding vs Broadcast: a switch never broadcasts frames; a broadcast is not an action a switch can take. A switch can only flood a frame. A broadcast is simply a frame with a destination MAC address of ffff.ffff.ffff. This is often confused because the end effect is the same, but they are actually different.
{ "source": [ "https://networkengineering.stackexchange.com/questions/29321", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/-1/" ] }
32,096
According to the notes I'm reading: 10/100 Mbit/s Ethernet refers to the standard that can autosense which speed it needs to run at, between 10 Mbit/s and 100 Mbit/s. Why would autosensing be required? Wouldn't it be best to default to 100 Mbit/s, or will this impact the network in a negative way?
Some devices could only run at 10 megabit/s, so the device at the other end would autosense the speed to match. If a device that has a maximum speed of 10 Mbit/s is connected to a 10 Mbit/s / 100 Mbit/s switch, the switch needs to lower its speed on that particular port in order to effectively (efficiently) communicate with the device. These days, most devices will autosense between 10 Mbit/s, 100 Mbit/s, and 1000 Mbit/s, but back in the days of "fast Ethernet" the choices were 10 Mbit/s and 100 Mbit/s.
{ "source": [ "https://networkengineering.stackexchange.com/questions/32096", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/24256/" ] }
37,896
Yesterday an interviewer asked me what the port number for ping is, and which protocol ping uses: TCP or UDP. After the interview I searched on the internet and found different results: someone says ICMP uses port 7, someone says it does not use a port number, on one site I found it uses IP protocol 1, etc. Can anyone help me with the correct explanation?
The standard ping command does not use TCP or UDP. It uses ICMP. To be more precise ICMP type 8 (echo message) and type 0 (echo reply message) are used. ICMP has no ports! See RFC792 for further details.
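To underline the "no ports" point, here is a hedged sketch that builds an ICMP echo request (type 8, code 0) by hand -- the header has a type, code, checksum, identifier and sequence number, but no field where a port could go; actually sending it needs a raw socket and therefore root, so that part is left commented out:

    # ICMP echo request built by hand: there is simply no port field.
    import struct

    def inet_checksum(data):
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total & 0xFFFF) + (total >> 16)   # fold carries
        total += total >> 16
        return (~total) & 0xFFFF

    ident, seq, payload = 0x1234, 1, b"ping"       # arbitrary example values
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)          # checksum = 0 first
    packet = struct.pack("!BBHHH", 8, 0,
                         inet_checksum(header + payload), ident, seq) + payload
    print(packet.hex())

    # Sending would need a raw socket (root), bypassing TCP/UDP entirely:
    # import socket
    # s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    # s.sendto(packet, ("192.0.2.1", 0))   # the 0 is a dummy -- ICMP has no ports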
{ "source": [ "https://networkengineering.stackexchange.com/questions/37896", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/22728/" ] }
38,573
I think I understand these concepts, but I'm a little rusty. Can someone give a concise, easy-to-understand explanation of these concepts? The planes are logical concepts, aren't they? Is this a Cisco only thing?
These terms are abstract logical concepts, much like the OSI model. Data plane refers to all the functions and processes that forward packets/frames from one interface to another. Control plane refers to all the functions and processes that determine which path to use. Routing protocols (such as OSPF, ISIS, EIGRP, etc.), spanning tree, LDP, etc. are examples. Management plane is all the functions you use to control and monitor devices. These are mostly logical concepts, but things like SDN (Software Defined Network) separate them into actual devices. Finally, all manufacturers use these concepts.
{ "source": [ "https://networkengineering.stackexchange.com/questions/38573", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/15501/" ] }
39,678
In our rented office, we have a router (BT smart hub) connected to a large switch (model unknown). The two are connected using three ethernet cables, thus:
-----------------------
| 1 2 3 4      Router |
-----------------------
  | | | |
  | | | ------[Mobile signal booster]
  | | |
------------------------------------
| 1 2 3 4 5 6 7 8 9 10      Switch |
------------------------------------
Can anyone please explain the advantage this offers? I always assumed that a switch was connected to a router using a single cable (as here).
The switch is probably a managed switch on which multiple VLANs are configured. The three cables between the router and the switch are used to provide inter-VLAN routing. Another possibility is that the multiple cables are used for link aggregation (e.g. LACP) to increase throughput.
{ "source": [ "https://networkengineering.stackexchange.com/questions/39678", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/19942/" ] }
39,681
I was researching network load balancer features recently and I couldn't find a certain feature. I believe I heard some years ago about a network load balancer feature (don't remember any feature name) in which the network balancer 'knows' how loaded (CPU, I/O, RAM etc.) a backend server is and dispatches the next request to the least loaded server. Is there such a feature? Are there reasons for not having this feature?
{ "source": [ "https://networkengineering.stackexchange.com/questions/39681", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/35169/" ] }
39,686
I know that MAC address is a physical address attached to Network Interface Card(NIC) to identify your computer uniquely. However, MAC address of Ethernet NIC or Wireless 802.11a/b/g/n WiFi NIC are different. So, does it mean a laptop can be identified with 2 different MAC address depending on the network interfaces it supports?(Wifi/Ethernet or both) Also, modem has MAC address found on the bottom of the Modem. Why does modem has a unique MAC address on your home network besides laptop, phone and other devices connected to the Internet?
{ "source": [ "https://networkengineering.stackexchange.com/questions/39686", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/34724/" ] }
43,736
I know the CAM table in a switch holds MAC addresses and the ports that are associated with the respective MAC addresses. There is no such thing as CAM addresses, to my knowledge, so why is it called a CAM table and not a MAC table?
CAM (Content Addressable Memory) is memory that can be addressed by content, rather than a numeric memory address. You can look up the interface by presenting the memory with the MAC address. This is done in a single CPU cycle vs. the traditional programming of searching through a table, which will cost many CPU cycles. There is also TCAM (Ternary Content Addressable Memory) that can use a mask. This is particularly useful for IP addressing, and it is used by ACLs or routing tables, among other things. CAM and TCAM cost much more than standard DRAM, but the performance boost given by them for specific applications can be worth the cost, power, and size compromises you must make. Since most standard PCs do not include anything like this, you can see how a purpose-built piece of hardware, e.g. router or switch, can have a performance advantage over a standard PC for the purpose of routing or switching.
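A loose software analogy, purely as an illustration of "addressed by content" versus "searched for content" (this is not how the silicon actually works):

    # Analogy only: a CAM behaves like a dict lookup keyed on the MAC address,
    # while ordinary memory would force the lookup to scan entry by entry.
    entries       = [("0000.1111.2222", 1), ("0000.1111.3333", 7)]
    table_as_list = entries            # "plain memory": search it
    table_as_dict = dict(entries)      # "CAM-like": present the content, get the port

    mac = "0000.1111.3333"
    port_by_scan   = next(p for m, p in table_as_list if m == mac)  # cost grows with size
    port_by_lookup = table_as_dict[mac]                             # single lookup
    print(port_by_scan, port_by_lookup)   # 7 7 -- same answer, found differently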
{ "source": [ "https://networkengineering.stackexchange.com/questions/43736", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/23730/" ] }
44,010
Internet Protocol version 10 (IPv10) Specification. The name is funny (IPv4 + IPv6 == IPv10), but the actual proposal looks strange (one more packet format to battle incompatibility between packet formats). Is it a normal proposal that has balanced pros and cons, or just a minimally viable document to make fun of "IPv10" with a serious face? If serious, please describe it in a "tl;dr" fashion. Why this and not another transition technology like NAT64/Teredo?
As Ron said, anyone can write a proposal. I have a hard time taking proposals seriously from someone who suggests interconnecting satellites with optical fiber , though. Also, I can't imagine this actual proposal gaining any momentum, especially due to this note: All Internet connected hosts must be IPv10 hosts to be able to communicate regardless the used IP version, and the IPv10 deployment process can be accomplished by ALL technology companies developing OSs for hosts networking and security devices. So, to solve the problem of IPv4-only hosts not being able to talk to IPv6-only hosts (and vice versa) you need to implement another protocol: IPv10. Then, why bother with that and not just implement IPv6 on that IPv4-only host and be done with it. In addition, as can be read in RFC7059 , there are already more than enough tunnel mechanisms available which can be used to solve parts of this problem. To be honest, I think the author is hoping on some commercial success by claiming copyright, as can be read in these tweets : ANNOUNCEMENT: Protecting the Copyright, The #IPv10 and KHALED Routing Protocol (#KRP or #RRP) are developed by @The_Road_Series CEO. They MUST NOT be represented or published by any organization without approval from their developer @Eng_Khaled_Omar Today 26th of May, 2017, a 2nd request was sent to the ietf for removing #IPv10 #KRP (#RRP) drafts from their repository.
{ "source": [ "https://networkengineering.stackexchange.com/questions/44010", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/39738/" ] }
47,994
Is it possible to use multicast on the public internet? If yes: How? Are special IP addresses required and where do you get them from?
You cannot multicast on the public Internet, but you can multicast across the public Internet to another site by using a tunnel that supports multicast. Multicast routing is very different from unicast routing, and all the routers in the path of the multicast packets need to have multicast routing configured.
{ "source": [ "https://networkengineering.stackexchange.com/questions/47994", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/43145/" ] }
48,047
Two sites are connected by an IPsec VPN tunnel. Site A has 16 static IP addresses for outside-to-inside access (NAT). Can I route one of my public IP addresses from site A across the VPN tunnel to a host on site B?
{ "source": [ "https://networkengineering.stackexchange.com/questions/48047", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/43752/" ] }
48,526
A professional network engineer is overseeing our town's installation of fibre and was explaining how much faster it was. I pointed out that it wasn't a silver bullet, as you had to pay the provider for a certain bandwidth. For example, I explained, I have 20 Mbit/s copper for about €30 per month, and 30 Mbit/s fibre costs about the same, but it wasn't going to be ten times faster. 200 Mbit/s would cost you €70 per month, so there was a price. He strongly disagreed and said the reduced latency meant 30 Mbit/s fibre was twice as fast as 30 Mbit/s copper. Now, I get that reduced ping times and latency mean the time to first byte is faster, but if I'm downloading a 1 gigabyte file, 30 Mbit/s is 30 Mbit/s, right? I'm not sure how it affects streaming, but was he right, or talking rubbish?
30 Mbit/s is the same speed, no matter if it runs over copper or fiber. However, there are important link parameters other than link speed/pure bandwidth, so there may be differences. First, latency on fiber can be better than on copper depending on the line encoding - fiber requires much less elaborate encoding (see below) than e.g. xDSL. However, lower latency doesn't make the fiber faster but sensitive applications may respond faster. Second, fiber's scalability is much better - in the future, you can just call your provider and order more speed. Speed on copper may be very limited, depending on the line length and quality. Speed on fiber is practically limited by your budget only. Third, reliability or packet loss ratio is usually much better on fiber than on copper. Copper is generally susceptible to EMI (depending on cable type, cable quality, link length, and environment) while fiber is practically immune. EDIT: in regard to "elaborate coding": Fiber commonly uses 8b/10b line code with 20% line/bandwidth overhead, or 64b/66b line code with 3% overhead, but next to no time overhead or delay (less than a microsecond). xDSL variants use OFDM/DMT and QAM encoding and modulation to cope with the channel's high attenuation/low signal-to-noise ratio. Reed-Solomon forward error-correction (FEC) is added to decrease the effective error rate, causing a transmission delay/added latency of a few milliseconds or a few thousand microseconds. Long lines also need to add interleaving for protection against burst errors, striping consecutive packets into each other - this causes significant, yet additional delay/latency in the order of 20 ms. In a nutshell, voice-grade copper's low frequency bandwidth and its sensitivity to noise require elaborate encoding and FEC, which in turn significantly increase latency. Of course, when FEC fails and an error cannot be compensated, a retransmission is much worse than the usual 60 ms RTT for (long-line) ADSL.
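As a quick sanity check of the "30 Mbit/s is 30 Mbit/s" point, here is some back-of-the-envelope arithmetic in Python; the RTT figures are assumptions for illustration only and the calculation ignores protocol overhead and TCP ramp-up:

```python
# Back-of-the-envelope: transfer time of a 1 GB file is dominated by the
# line rate, not by the medium. Figures are illustrative and ignore
# TCP/IP overhead, slow start, etc.

file_bits = 1 * 10**9 * 8          # 1 GB expressed in bits
rate_bps  = 30 * 10**6             # 30 Mbit/s, copper or fiber

transfer_s = file_bits / rate_bps  # ~266.7 s in both cases

# Latency mainly shifts the start of the transfer (and affects TCP ramp-up):
copper_rtt_s = 0.030               # assumed 30 ms RTT on DSL
fiber_rtt_s  = 0.005               # assumed 5 ms RTT on fiber

print(f"transfer time:      {transfer_s:.1f} s on either medium")
print(f"latency difference: {(copper_rtt_s - fiber_rtt_s) * 1000:.0f} ms, "
      f"negligible for a {transfer_s:.0f} s download")
```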
{ "source": [ "https://networkengineering.stackexchange.com/questions/48526", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/44028/" ] }
48,874
This is the process of DHCP operation. My question is about the 3rd step: why does the client send a broadcast and not a unicast, given that after the previous two operations the address of the DHCP server / relay server should be known?
https://www.rfc-editor.org/rfc/rfc2131#page-13 The servers receive the DHCPREQUEST broadcast from the client. Those servers not selected by the DHCPREQUEST message use the message as notification that the client has declined that server's offer. The protocol assumes there may be multiple DHCP servers. By broadcasting the request message, all servers that may have issued an offer can be aware of the client's choice.
{ "source": [ "https://networkengineering.stackexchange.com/questions/48874", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/21569/" ] }
49,078
This might be a silly question, but a few buddies and I have been discussing the potential limitations of TCP. We have an application that is going to listen for clients (think of a gateway) and route all connected clients' data through a single connected Kafka publisher to one topic. One of my buddies is saying that TCP will be a problem for this gateway because it is going to establish a new connection for every message it sends (not Kafka, but the underlying transport protocol itself is the issue), requiring a new port each time. At the rate we'll be sending these clients' messages (gigabytes), Kafka will run out of ports to read from?? I've done development for several years, have never heard of this before, and would like to get a lower-level understanding (which I thought I had) of how TCP works. My understanding is that when you establish a TCP connection, that connection remains open until it is timed out by the application or forcibly closed by either the server or the client. Data that is sent over this connection is a stream and won't open/close new connections regardless of the 3 V's (volume, velocity, variety). As far as the ports go, one port is used for listening, and the internal file descriptor is something the application manages for reading/writing to individual clients. I've never understood TCP to establish new connections for every packet that it writes. I apologize in advance if this question is not direct and/or is too vague. I really am baffled and am hoping someone could provide some more context for what my colleagues are saying.
One of my buddies is saying that TCP will be a problem for this gateway because it is going to establish a new connection for every message it sends (not kafka but the underlying transportation protocol itself is the issue), requiring a new port each time. At the rate we'll be sending these clients messages (gigabytes), kafka will run out of ports to read from?? Your friend is badly confused. TCP is a stream-oriented protocol. It has no notion of messages. Of course, it does use packets at the IP layer, but to the application this is an implementation detail. TCP inserts packet boundaries where it makes sense to do so, and not necessarily once per write() or send() . Similarly, it combines successive packets together if you receive more than one between calls to read() or recv() . Needless to say, this stream-oriented design would be completely unworkable if every send established a new connection. So, the only way to establish a new connection is to close and reopen the connection manually. (In practice, most protocols built on top of TCP have something which resembles messages, such as HTTP requests and responses. But TCP does not know or care about the structures of such things.) It is possible that your friend was thinking of UDP, which does have messages, but is also connectionless. Most socket implementations allow you to "connect" a UDP socket to a remote host, but this is just a convenient way to avoid having to repeatedly specify the IP address and port. It does not actually do anything at the networking level. Nevertheless, you can manually keep track of which peers you are talking to under UDP. But if you do that, then deciding what counts as a "connection" is your problem, not the OS's. If you want to re-establish a "connection" on every message, you could do that. It probably isn't a very good idea, however.
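If it helps to see it, here is a minimal sketch in Python (the loopback address and port 50007 are arbitrary choices for the demo) showing one TCP connection carrying several application "messages" without any new connection or port being created per send:

```python
# Minimal sketch: one TCP connection carries many application "messages".
# No new connection or port is used per send(); the loopback address and
# port 50007 are just for illustration.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007
ready = threading.Event()

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()          # a single accept: one connection total
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)      # echo back whatever arrived

threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

with socket.create_connection((HOST, PORT)) as c:
    local_port = c.getsockname()[1]     # stays the same for every message
    for i in range(5):
        c.sendall(f"message {i}\n".encode())
        print(local_port, c.recv(4096).decode().strip())
# All five "messages" travel over one connection and one local port;
# TCP itself never opens a new connection per send().
```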
{ "source": [ "https://networkengineering.stackexchange.com/questions/49078", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/44682/" ] }
52,232
The Golang community provides an HTTP/2 demo website to compare the performance of HTTP 1.1 and HTTP/2. We can choose different latencies, e.g. 0 s latency, 30 ms latency, 200 ms latency. Is latency a computer science term? What does it mean? What's the difference between latency and Round Trip Time?
Network latency is how long it takes for something sent from a source host to reach a destination host. There are many components to latency, and the latency can actually be different A to B and B to A. The round trip time is how long it takes for a request sent from a source to a destination, and for the response to get back to the original source. Basically, the latency in each direction, plus the processing time.
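One rough way to observe a round trip time from an application, without ping, is to time a TCP handshake: connect() returns after roughly one round trip (SYN out, SYN/ACK back) plus a little processing at each end. The host and port below are just examples.

```python
# Rough RTT observation from an application: time a TCP connect(), which
# completes after about one network round trip plus some processing.
import socket
import time

def handshake_rtt_ms(host, port=443, timeout=5):
    t0 = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - t0) * 1000

print(f"{handshake_rtt_ms('example.com'):.1f} ms")
```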
{ "source": [ "https://networkengineering.stackexchange.com/questions/52232", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/47956/" ] }
52,257
I have a windows DHCP server in VLAN 10 and I have wireless clients in VLAN 288. The VLANs are defined in my Juniper EX-4600 and the VLANs are tagged/trunked all the way to the Meraki APs. [DHCP]---[EX-4600]---[EX-4300]--[HP5130]---[Meraki APs] There are VLANs that are working and VLANs that do not and they all seem to have the same configuration with the exception of the subnet. The only thing that I have found in searching for a solution was on the juniper KB that said to configure IP helpers, but none of the other VLANs have that and they work, so I don't think that is the solution. ## Last commit: 2018-07-26 01:20:45 UTC by root version 14.1X53-D40.8; system { processes { dhcp-service { traceoptions { file dhcp_logfile size 10m; level all; flag all; } } app-engine-virtual-machine-management-service { traceoptions { level notice; flag all; } } } } chassis { redundancy { graceful-switchover; } } interfaces { interface-range VLAN4 { member-range ge-1/0/5 to ge-1/0/11; unit 0 { family ethernet-switching { vlan { members 4; } } } } xe-0/0/23 { ether-options { flow-control; } unit 0 { family ethernet-switching { interface-mode trunk; vlan { members [ 1 3 10 100 140-141 172 176 210-213 230-232 234 236 238 270 280 288 ]; } storm-control default; } } } em1 { unit 0 { family inet; } } irb { unit 0 { family inet; } unit 1 { family inet { address 10.1.1.100/24; } } unit 2 { family inet { address 192.168.11.1/24; } } unit 3 { family inet { address 192.168.10.1/24; } } unit 4 { family inet { address 192.168.9.1/24; } } unit 5 { family inet { address 192.168.15.1/24; } } unit 6 { family inet { address 192.168.35.1/24; } } unit 8 { family inet { address 192.168.36.1/24; } } unit 9 { family inet { address 192.168.37.1/24; } } unit 10 { family inet { address 192.168.75.2/24; } } unit 11 { family inet { address 192.168.76.1/24; } } unit 15 { family inet { address 192.168.8.1/24; } } unit 20 { family inet { address 10.0.0.1/24; } } unit 100 { family inet { address 192.168.100.1/24; } } unit 111 { family inet { address 192.168.111.6/24; } } unit 130 { family inet { address 10.1.30.1/23; } } unit 132 { family inet { address 10.1.32.1/24; } } unit 133 { family inet { address 10.1.33.1/24; } } unit 136 { family inet { address 10.1.36.1/22; } } unit 138 { family inet { address 10.1.56.1/21; } } unit 140 { family inet { address 10.1.40.1/24; } } unit 141 { family inet { address 172.16.41.142/29; } } unit 170 { family inet { address 10.1.72.1/22; } } unit 171 { family inet { address 10.1.76.1/24; } } unit 172 { family inet { address 172.26.88.10/21; } } unit 176 { family inet { address 172.16.41.177/29; } } unit 180 { family inet { address 10.1.80.1/23; } } unit 188 { family inet { address 10.1.88.1/21; } } unit 210 { family inet { address 10.41.10.1/24; } } unit 211 { family inet { address 10.41.11.1/24; } } unit 212 { family inet { address 10.41.12.1/24; } } unit 213 { family inet { address 10.41.13.1/24; } } unit 230 { family inet { address 10.41.30.1/24; } } unit 231 { family inet { address 10.41.31.1/24; } } unit 232 { family inet { address 10.41.32.1/24; } } unit 234 { family inet { address 10.41.34.1/23; } } unit 236 { family inet { address 10.41.36.1/23; } } unit 238 { family inet { address 10.41.56.1/21; } } unit 270 { family inet { address 10.41.72.1/22; } } unit 271 { family inet { address 10.41.76.1/24; } } unit 280 { family inet { address 10.41.80.1/23; } } unit 288 { family inet { address 10.41.88.1/21; } } unit 311 { family inet { address 10.103.11.1/24; } } unit 331 { family inet { address 
10.103.31.1/24; } } unit 332 { family inet { address 10.103.32.1/24; } } unit 334 { family inet { address 10.103.34.1/23; } } unit 336 { family inet { address 10.103.36.1/23; } } unit 338 { family inet { address 10.103.56.1/21; } } unit 370 { family inet { address 10.103.72.1/22; } } unit 371 { family inet { address 10.103.76.1/24; } } unit 380 { family inet { address 10.103.80.1/23; } } unit 388 { family inet { address 10.103.88.1/21; } } } vme { unit 0 { family inet; } } } snmp { name HSServerRoom-4600; description HSServerRoom-4600; location "HS Server Room"; contact "[email protected]"; client-list list0 { 192.168.75.0/24; } community public { authorization read-only; client-list-name list0; } trap-group kcisd-traps { destination-port 162; targets { 192.168.75.29; } } } forwarding-options { storm-control-profiles default { all; } analyzer { INTERNET_IN { input { ingress { interface ge-1/0/0.0; } egress { interface ge-1/0/0.0; } } output { interface ge-1/0/2.0; } } } dhcp-relay { server-group { dhcp_servers { 192.168.75.12; } } group group1 { active-server-group dhcp_servers; interface irb.1; interface irb.2; interface irb.3; interface irb.4; interface irb.5; interface irb.6; interface irb.8; interface irb.9; interface irb.10; interface irb.11; interface irb.15; interface irb.100; interface irb.111; interface irb.130; interface irb.132; interface irb.133; interface irb.136; interface irb.138; interface irb.170; interface irb.180; interface irb.188; interface irb.210; interface irb.211; interface irb.212; interface irb.213; interface irb.230; interface irb.231; interface irb.232; interface irb.234; interface irb.236; interface irb.238; interface irb.270; interface irb.280; interface irb.288; interface irb.311; interface irb.331; interface irb.332; interface irb.334; interface irb.338; interface irb.370; interface irb.380; interface irb.388; } } } routing-options { nonstop-routing; static { route 0.0.0.0/0 next-hop 192.168.75.13; route 172.25.0.18/32 next-hop 172.16.41.182; route 172.25.0.19/32 next-hop 172.16.41.182; } router-id 192.168.100.1; } protocols { ospf { area 0.0.0.0 { interface irb.10; interface irb.1 { passive; } interface irb.4 { passive; } interface irb.2 { passive; } interface irb.5 { passive; } interface irb.6 { passive; } interface irb.8 { passive; } interface irb.9 { passive; } interface irb.15 { passive; } interface irb.111 { passive; } interface irb.100 { passive; } interface irb.172 { passive; } interface irb.141 { passive; } interface irb.11 { passive; } interface irb.130 { passive; } interface irb.311 { passive; } interface irb.140 { passive; } interface irb.133 { passive; } interface irb.132 { passive; } interface irb.176 { passive; } interface irb.331 { passive; } interface irb.332 { passive; } interface irb.334 { passive; } interface irb.136 { passive; } interface irb.20 { passive; } interface irb.212 { passive; } interface irb.3 { passive; } interface irb.211 { passive; } interface irb.213 { passive; } interface irb.230 { passive; } interface irb.231 { passive; } interface irb.232 { passive; } interface irb.234 { passive; } interface irb.236 { passive; } interface irb.210 { passive; } interface irb.138 { passive; } interface irb.238 { passive; } interface irb.338 { passive; } interface irb.170 { passive; } interface irb.180 { passive; } interface irb.188 { passive; } interface irb.288 { passive; } interface irb.388 { passive; } interface irb.380 { passive; } interface irb.370 { passive; } interface irb.270 { passive; } interface irb.280 { passive; } } } lldp { 
interface all; } lldp-med { interface all; } igmp-snooping { vlan default; } inactive: rstp { interface xe-0/0/0; interface xe-0/0/1; interface xe-0/0/2; interface xe-0/0/3; interface xe-0/0/4; interface xe-0/0/5; interface xe-0/0/6; interface xe-0/0/7; interface xe-0/0/8; interface xe-0/0/9; interface xe-0/0/10; interface xe-0/0/11; interface xe-0/0/12; interface xe-0/0/13; interface xe-0/0/14; interface xe-0/0/15; interface xe-0/0/16; interface xe-0/0/17; interface xe-0/0/18; interface xe-0/0/19; interface xe-0/0/20; interface xe-0/0/21; interface xe-0/0/22; interface xe-0/0/23; interface et-0/0/24; interface xe-0/0/24:0; interface xe-0/0/24:1; interface xe-0/0/24:2; interface xe-0/0/24:3; interface et-0/0/25; interface xe-0/0/25:0; interface xe-0/0/25:1; interface xe-0/0/25:2; interface xe-0/0/25:3; interface et-0/0/26; interface xe-0/0/26:0; interface xe-0/0/26:1; interface xe-0/0/26:2; interface xe-0/0/26:3; interface et-0/0/27; interface xe-0/0/27:0; interface xe-0/0/27:1; interface xe-0/0/27:2; interface xe-0/0/27:3; } } virtual-chassis { preprovisioned; member 0 { role routing-engine; serial-number TC3716430102; } member 1 { role routing-engine; serial-number PE3716350055; } member 2 { role line-card; serial-number PE3716350466; } } vlans { Backbone { vlan-id 10; l3-interface irb.10; } Central { vlan-id 5; l3-interface irb.5; } DEFAULT_VLAN { vlan-id 1; l3-interface irb.1; } DLVIDEO { vlan-id 141; l3-interface irb.141; } ES4thLab { vlan-id 311; l3-interface irb.311; } ESFac { vlan-id 380; l3-interface irb.380; } ESFacBYOD { vlan-id 388; l3-interface irb.388; } ESStu { vlan-id 370; l3-interface irb.370; } ES_BYOD { vlan-id 331; l3-interface irb.331; } ES_BYOD_Fac { vlan-id 332; l3-interface irb.332; } ES_Main_Wifi { vlan-id 334; l3-interface irb.334; } Fog { vlan-id 11; l3-interface irb.11; } HSFac { vlan-id 180; l3-interface irb.180; } HSFacBYOD { vlan-id 188; l3-interface irb.188; } HSPhone { vlan-id 140; l3-interface irb.140; } HSStu { vlan-id 170; l3-interface irb.170; } HS_BYOD_Fac { vlan-id 133; l3-interface irb.133; } HS_BYOD_Stu { vlan-id 132; l3-interface irb.132; } HighSchool { vlan-id 4; l3-interface irb.4; } JH { vlan-id 3; l3-interface irb.3; } JH107 { vlan-id 211; l3-interface irb.211; } JH116 { vlan-id 212; l3-interface irb.212; } JHFac { vlan-id 280; l3-interface irb.280; } JHFacBYOD { vlan-id 288; l3-interface irb.288; } JHOffice { vlan-id 213; l3-interface irb.213; } JHStu { vlan-id 270; l3-interface irb.270; } JH_BYOD_Fac { vlan-id 232; l3-interface irb.232; } JH_BYOD_Stu { vlan-id 231; l3-interface irb.231; } JH_Data { vlan-id 210; l3-interface irb.210; } JH_Faculty { vlan-id 234; l3-interface irb.234; } JH_Student { vlan-id 236; l3-interface irb.236; } JH_Wifi_APs { vlan-id 230; l3-interface irb.230; } KCISDFac_ES { vlan-id 338; l3-interface irb.338; } KCISDFac_HS { vlan-id 138; l3-interface irb.138; } KCISDFac_JH { vlan-id 238; l3-interface irb.238; } KHFac { vlan-id 130; l3-interface irb.130; } KHStu { vlan-id 136; l3-interface irb.136; } Scale { vlan-id 20; l3-interface irb.20; } SciAutoAGath { vlan-id 6; l3-interface irb.6; } VLAN172 { vlan-id 172; l3-interface irb.172; } VLAN176 { vlan-id 176; l3-interface irb.176; } VideoSur { vlan-id 100; l3-interface irb.100; } busbarn { vlan-id 9; l3-interface irb.9; } elem456 { vlan-id 15; l3-interface irb.15; } elementary { vlan-id 2; l3-interface irb.2; } hsmacser { vlan-id 8; l3-interface irb.8; } weather { vlan-id 111; l3-interface irb.111; } } The juniper at the junior high MDF is on port xe-0/0/23. 
root@HSServerRoom-4600> show dhcp relay statistics Packets dropped: Total 127849 Bootp packets 126744 Interface not configured 588 Bad UDP checksum 1 No binding found 512 dhcp-service total 4 Messages received: BOOTREQUEST 5571847 DHCPDECLINE 142 DHCPDISCOVER 1751818 DHCPINFORM 996327 DHCPRELEASE 99599 DHCPREQUEST 2723961 Messages sent: BOOTREPLY 3999872 DHCPOFFER 968432 DHCPACK 3030802 DHCPNAK 638 DHCPFORCERENEW 0 Packets forwarded: Total 5737 BOOTREQUEST 0 BOOTREPLY 5737
{ "source": [ "https://networkengineering.stackexchange.com/questions/52257", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/49138/" ] }
53,166
I'm reading up on the relationship between latency and ISP interconnectivity (that higher ISP interconnectivity results in lower latency, which makes sense to me). My understanding is that IXPs provide the primary means for ISPs to connect with each other (taken from this article on edge servers by Cloudflare). But why not, as an ISP, connect directly to another ISP? Does this happen? And, in terms of terminology, would the connection then be referred to as an IXP?
Yes, this does happen quite a lot, and it is called private peering. It has some benefits over peering over an IXP:
- dedicated bandwidth - you can be sure you can use the full capacity of the interconnecting link for traffic to and from the other ISP
- no dependency on the IXP - an IXP connects two ISPs on its switch(es); with private peering you don't suffer from any outages of the IXP. Also, you're in direct contact with the other ISP when solving problems.
- possibly lower costs - if an ISP does a lot of traffic with one specific other ISP, it can be cost efficient not to pay an IXP to provide the connectivity, but instead just use a direct connection.
However, there can be downsides too:
- cost and availability of router ports - routers often have a very limited number of ports, and ports can be very costly (especially for high speed connections). By connecting to an IXP, you can reduce the number of private peering connections and thus lower costs.
- localisation - not every ISP is present in every datacenter. IXPs often provide a peering LAN which stretches over multiple datacenters spanning a city (or sometimes a country or even a continent). Buying fiber paths to every other ISP can become very, very costly, especially if the distances are longer.
- operational costs - having many interconnections means more configurations, outages, links and ports to monitor, etc. Doing this for every single ISP can be very cost inefficient.
- connectivity between unequal peers - not every ISP wants to do private peering with all other peers, especially if there's a large difference in size. IXPs may enable them to peer with smaller peers, because the operational costs are much lower. Also, IXPs often offer route servers, which can function as an intermediary between ISPs, so they do not have to set up peering sessions with every other peer on the IXP.
{ "source": [ "https://networkengineering.stackexchange.com/questions/53166", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/36904/" ] }
53,253
If we use ICMP's ping, we know the TTL and round-trip time are stored in the IP header. In the IP header map below we know the TTL's location, but where is the round-trip time? Is it stored in Options?
The round trip time is not actually stored anywhere. The sending host remembers the time it sends each ICMP Echo Request message, using ICMP's 16-bit identifier and sequence fields. When it gets the ICMP Echo Reply, it notes the current time, finds the time it sent the matching Request packet identified by the reply, calculates the difference, and reports it. Typically ping uses ICMP's identification field to differentiate multiple simultaneous pings, and the sequence field to differentiate individual packets. It is up to the implementation to decide where to store the outgoing time for a given packet: instead of storing it on the host in a table, it typically sends it in the outgoing request and uses the copy in the reply to calculate the time. (Thanks commenters for pointing this out.) It's sent in whatever way is convenient for the implementation, and of course has to trust the far end, and any intervening equipment, to properly copy the data. Some systems are known to represent the time in 16 bytes with a resolution of microseconds, some as 8 bytes with a resolution of milliseconds. The format inside the data portion of the IP packet is the ICMP Echo Request/Reply message, copied here from RFC 792 "Internet Control Message Protocol" (p14). Type is 8 for Request, 0 for Reply; Code is 0.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |          Checksum             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Identifier          |        Sequence Number        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Data ...
+-+-+-+-+-

PS. Just to be clear, the identification field of the IP header is normally set to an arbitrary value, different for each outgoing packet, used for reassembly of any fragmentation, and doesn't have the same value as anything in the ICMP body. Also, although there is a mechanism defined for putting timestamps into the IP header as an option, this is not the normal mechanism for ping, because very many routers are configured not to pass certain IP options. See RFC 781, Specification of the Internet Protocol Timestamp Option. Finally, everything here was written from an IPv4 perspective, per the original question; ping in IPv6 is extremely similar, see ICMPv6 RFC 4443.
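For illustration only, here is a small Python sketch of the bookkeeping described above. It does not use raw sockets or real ICMP; it just shows how a send timestamp can be packed into the Echo Request payload and subtracted from the current time when the (assumed unmodified) payload comes back in the Echo Reply. The 8-byte layout is an assumption; real ping implementations vary.

```python
# Sketch of the RTT bookkeeping only (no raw sockets / no real ICMP here):
# the sender packs its send time into the Echo Request payload and, when the
# Echo Reply carries the same payload back, subtracts it from "now".
import struct
import time

def build_echo_payload():
    # 8 bytes: send time in nanoseconds (format is implementation-specific;
    # real ping programs use various layouts and resolutions).
    return struct.pack("!Q", time.monotonic_ns())

def rtt_from_reply(payload):
    (sent_ns,) = struct.unpack("!Q", payload[:8])
    return (time.monotonic_ns() - sent_ns) / 1e6   # milliseconds

request_payload = build_echo_payload()
# ... the payload would be sent inside an ICMP Echo Request and come back
# unchanged inside the Echo Reply; here we just hand it straight back:
time.sleep(0.02)                                    # pretend 20 ms in flight
reply_payload = request_payload
print(f"rtt = {rtt_from_reply(reply_payload):.1f} ms")
```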
{ "source": [ "https://networkengineering.stackexchange.com/questions/53253", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/38050/" ] }
53,935
I understand that we are running out (or ran out already?) of IPv4 addresses, but I don't really understand why that is. Right now, every home has its own IPv4 address (dynamically assigned, but still, each has an address). Why can't a city (for example) have just one IPv4 address and all homes in this city would just be on a private network of that city? Then this one city would be able to assign addresses from range 0.0.0.1 to 255.255.255.254 . I'm sure that my understanding is wrong somehow otherwise IPv4 addresses would not run out. What's wrong with my understanding?
The IPv4 Address Shortage According to Vint Cerf (the father of IP), the IPv4 32-bit address size was chosen arbitrarily. IP was a government/academic collaborative experiment, and the current public Internet was never envisioned. The IP paradigm was that each connected device would have a unique IP address (all packets sent between IP devices would be end-to-end connected from the source IP address to the destination IP address), and many protocols using IP depend on each device having a unique IP address. Assuming we could use every possible IPv4 address*, there are only 4,294,967,296 possible IPv4 addresses, but (as of September 2018) the current world population is 7,648,290,361. As you can see, there are not enough possible IPv4 addresses for every person to have even one, but many people have a computer, printer, cell phone, tablet, gaming console, smart TV, etc., each requiring an IP address, and that doesn’t even touch on the business needs for IP addresses. We are also on the cusp of the IoT (Internet of Things), where every device needs an IP address: light bulbs, thermostats, thermometers, rain gauges and sprinkler systems, alarm sensors, appliances, vehicles, garage door openers, entertainment systems, pet collars, and who knows what all else. All this adds up to the fact that IPv4 simply cannot handle the addressing needs of the modern world. * There are blocks of IPv4 addresses that cannot be used for host addressing. For example, multicast has a block of 268,435,456 addresses that cannot be used for host addressing. IANA maintains the IANA IPv4 Special-Purpose Address Registry at https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml to document all the special address blocks and their purposes. IANA (Internet Assigned Numbers Authority) ran out of IPv4 address blocks to assign to the RIRs (Regional Internet Registries) to be assigned in their respective regions, and the RIRs have now also run out of IPv4 addresses to assign in each region. ISPs (Internet Service Providers) and companies that want or need IPv4 addresses can no longer get IPv4 addresses from their RIRs and now must try to buy IPv4 addresses from businesses that may have extra (as the IPv4 address shortage deepens, the price of IPv4 addresses goes up). Even if all the IPv4 addresses that are reserved for special purposes and cannot be used for host addressing were made available for use, we would still be in the same position because there are simply not enough IPv4 addresses due to the limited size of IPv4 addresses. Mitigating the IPv4 Address Shortage IANA and the RIRs would have run out of IPv4 addresses many years before they did if IANA and the IETF (Internet Engineering Task Force) had not adopted mitigations for the IPv4 address shortage. One important mitigation was the deprecation of IPv4 network classes in favor of CIDR (Classless Inter-Domain Routing).
Classful addressing only allows for three assigned network sizes (16,777,216, 65,536, or 256 total host addresses per network), meaning that many addresses are wasted (a business needing only 300 host addresses would need to be allocated a classful network that has 65,536 possible host addresses, wasting over 99% of the addresses in the classful network), but CIDR allows for network sizes to fit more closely with network address requirements (a business needing only 300 host addresses could be allocated a CIDR /23 network that has only 510 usable host addresses), wasting far fewer addresses and still providing some room for growth. By far, the mitigation that has had the biggest impact on extending the life of IPv4 is the use of Private Addressing and a variant of NAT (Network Address Translation) called NAPT (Network Address Port Translation), which is what most people mean when they refer to NAT or PAT (PAT is a vendor-specific term for NAPT). Unfortunately, NAPT is an ugly workaround that breaks the IP end-to-end paradigm, and that breaks protocols that depend on unique IP addressing, requiring even more ugly workarounds. NAT/NAPT The concept of NAT is pretty simple: it replaces either or both the source and destination IPv4 addresses in a packet header as the packet passes through the NAT device. In practice, it requires computation because the IPv4 header has a computed field to check the integrity of the IPv4 header, and any change made to the IPv4 header requires recalculation of the field, and some transport protocols in the packet payload also have their own computed fields that must be recalculated, using computing resources in the NAT device that could be used for packet forwarding. In Basic NAT, the NAT device has a pool of IPv4 addresses that it uses to replace the source IPv4 addresses of the packet headers for IPv4 packets sent from an inside network to an outside network, and it maintains a translation table in order to translate the destination IPv4 addresses of traffic returning from the outside network in order to deliver the packets back to the correct hosts on the inside network. This also requires resources on the NAT device to build and maintain the translation table, and to perform table lookups. This resource utilization can slow the forwarding of packets because the resources used by NAT are taken from the resources that could be used for packet forwarding. NAPT takes Basic NAT further by also translating the transport protocol addresses (ports) for TCP and UDP, and the Query IDs for ICMP. By also translating the transport-layer addresses, NAPT allows the use of a single outside IPv4 address for many inside host IPv4 addresses. NAPT is even more resource intensive than Basic NAT because it requires a separate table for each transport-layer protocol, and it must also perform the integrity calculations for the transport protocols. The use of Private IPv4 addressing, that can be reused on multiple networks (you may have noticed that most home/residential networks default to use the same 192.168.1.0/24 network, which is in one of the IANA allocated Private IPv4 address ranges), along with NAPT, allows business and home users to each use a single outside (public) address for a large inside (privately addressed) network. This saves many, many IPv4 addresses (several times the total number of possible IPv4 addresses) and has extended the life of IPv4 far beyond the point at which it would have collapsed without NAPT. 
NAPT does have some serious drawbacks: NAPT breaks the IP end-to-end paradigm, and it only works with TCP, UDP, and ICMP, breaking other transport protocols. There are also application-layer protocols that use TCP or UDP that are broken by NAPT, even though TCP and UDP nominally work with NAPT. Other mitigations, e.g. STUN/TURN, may be available for some application-layer protocols, but they can add cost and complexity. NAPT is very resource intensive, slowing packet forwarding compared to what is possible without using any form of NAT. Some vendors add dedicated hardware to mitigate the need to steal resources from packet forwarding, but this comes at added expense, size, complexity, and power usage. When using NAPT, traffic initiated from outside the NAPT network cannot be delivered to the inside network because there is no translation entry in the translation table, which is added by inside-initiated traffic. The single outside (public) address is configured on the NAT device, and any packets with that destination IPv4 address and no entry for the source IPv4 address in the translation table for the transport protocol is assumed to be for the NAPT device, itself, not the inside network. There is a mitigation, called Port Forwarding, for this problem. Port Forwarding basically configures, manually, a permanent entry in a translation table to allow outside-initiated traffic that is destined to a particular transport protocol and address for the protocol to be delivered to a particular inside host. This does have the drawback of only allowing one inside host to be the target for a particular transport protocol and address. For example, if there are multiple web servers on the inside network, only one of the web servers can be exposed on TCP port 80 (the default for web servers). Because the IPv4 address shortage is so severe, the ISPs (Internet Service Providers) are running out of public addresses to assign to their customers. The ISPs can no longer get any more public addresses, so they have adopted some mitigations that especially hurt home/residential users. The ISPs want to reserve their precious public address pool for their business customers that are willing to pay for the privilege of getting public addresses. To do that, the ISPs are now starting to assign Private or Shared addresses to their home/residential customers, and the ISPs use NAPT on their own routers to facilitate the use of multiple Private or Shared addresses on a single public address. That creates a situation where a home/residential network is behind two NAPT translations (ISP NAPT to customer NAPT), and port forwarding configured by the customer on the home/residential router no longer works because it is broken by the ISP NAPT, which is not configured to forward the port to the customer router. Many people make the mistake of equating NAPT and security because the inside hosts cannot be directly addressed from outside. This is a false sense of security. Because a firewall connecting a network to the public Internet is a convenient place to run NAPT, that simply confuses the situation. It creates a dangerous perception that that NAPT, itself, is the firewall, and a real firewall is unnecessary. Network security comes from firewalls, which block all outside-initiated traffic by default, only allowing traffic it is explicitly configured to permit, possibly doing a deep inspection on the packet contents to drop dangerous packet payloads. 
What some people fail to realize is that, without a firewall, either in hardware or software, on the outside of or built into the NAPT device, to protect the NAPT device, the NAPT device itself is vulnerable. If the NAPT device is compromised, it, and by extension an attacker, has full access to the privately addressed inside network. Outside-initiated packets that do not match a translation table are destined to the NAPT device, itself, because it is the device that is actually addressed with the external address, so the NAPT device can be directly attacked. The Solution to the IPv4 Address Shortage The IETF predicted the IPv4 address shortage, and it created the solution: IPv6, which uses 128-bit addresses, meaning there are 340,282,366,920,938,463,463,374,607,431,768,211,456 (340 undecillion) possible IPv6 addresses. The almost unimaginable number of IPv6 addresses removes the need for NAPT (IPv6 doesn’t have any NAT standards, the way IPv4 does, and the experimental IPv6 NAT RFC specifically forbids NAPT), restoring the original IP end-to-end paradigm. The mitigations for the IPv4 address shortage are meant to extend the life of IPv4 until IPv6 is ubiquitous, at which point IPv4 should fade away. Humans cannot really comprehend numbers of the size used for IPv6. For example, a standard IPv6 network uses 64 bits for each of the network and host portions of the network address. That is 18,446,744,073,709,551,616 possible IPv6 standard /64 networks, and that same (huge) number of host addresses for each of those networks. To try to understand a number that large, consider tools that scan all the possible addresses on a network. If such a tool could scan 1,000,000 addresses per second (unlikely), it would take over 584,542 years to perform the scan on a single /64 IPv6 network. Currently, only 1/8 of the total IPv6 address space is allocated for global IPv6 addresses, which works out to 2,305,843,009,213,693,952 standard IPv6 /64 networks, and if the world population is 21 billion in the year 2100 (a somewhat realistic number), every one of those 21 billion people could have 109,802,048 standard IPv6 /64 networks, each network having 18,446,744,073,709,551,616 possible host addresses. Unfortunately, the (decades of) IPv4 address shortage has so ingrained address conservation in people, that many people simply cannot let it go, and they try to apply it to IPv6, which is pointless and actually detrimental. IPv6 is actually designed to waste addresses. The IETF also had the advantage of hindsight, and it improved IP (in IPv6) by removing features of IPv4 that didn’t work well, improving some IPv4 features, and adding features that IPv4 didn’t have, creating a new and improved IP. Because IPv6 is a completely separate protocol from IPv4, it can be run in parallel with IPv4 as the transition is made from IPv4 to IPv6. Hosts and network devices can run both IPv4 and IPv6 on the same interface at the same time (dual-stacked), and each is invisible to the other; there is no interference between the two protocols. The problem with IPv6 is that it is actually a completely different protocol that is incompatible with the ubiquitous IPv4, and the mitigations for the IPv4 address shortage are seen by many people to be “good enough.” The result is that it has been over 20 years since IPv6 was standardized, and we are just now getting some real traction in using IPv6 (Google reports, as of September 2018, worldwide IPv6 adoption of over 20%, and the IPv6 adoption rate in the U.S. is over 35%). 
The reason we are finally moving to IPv6 is that there are simply no more unused IPv4 addresses to be assigned. There are other obstacles, all part of the IPv4 culture, that are simply hard for people to look past. Many people are also scared of IPv6, having grown up and being comfortable with IPv4, warts and all. For example, the IPv6 addresses appear to be large and ugly compared to IPv4 addresses, and that seems to put many people off. The reality is that IPv6 is often easier and more flexible than IPv4, especially for addressing, and the lessons learned in IPv4 have been applied to IPv6 from the beginning.
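As a quick back-of-the-envelope check of the numbers quoted in this answer (using the same assumption of one million addresses probed per second):

```python
# Quick check of the arithmetic quoted above (pure back-of-the-envelope).
ipv4_total = 2**32
ipv6_total = 2**128
hosts_per_64 = 2**64                      # host addresses in one /64

scan_rate = 1_000_000                     # assumed 1 million probes/second
seconds_per_year = 365.25 * 24 * 3600
scan_years = hosts_per_64 / scan_rate / seconds_per_year

print(f"IPv4 addresses:        {ipv4_total:,}")
print(f"IPv6 addresses:        {ipv6_total:,}")
print(f"Years to scan one /64: {scan_years:,.0f}")          # ~584,542 years
print(f"Global /64 networks (1/8 of the space): {2**125 // 2**64:,}")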
{ "source": [ "https://networkengineering.stackexchange.com/questions/53935", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/50866/" ] }
54,069
Does anyone know what this thread-like material on the right of the picture is? Does it have anything to do with grounding? It is a Cat6 U/UTP cable.
It is used to split the outer jacket away without needing to use a sharp object, which could potentially damage the wires themselves. It is commonly called a ripcord. Image taken from http://netx.us.com/Product%20pdf/Copper_Solutions/A6.pdf
{ "source": [ "https://networkengineering.stackexchange.com/questions/54069", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/51127/" ] }
55,581
In RFC 793 there is a part about the acknowledgment of TCP segments: When the TCP transmits a segment containing data, it puts a copy on a retransmission queue and starts a timer; when the acknowledgment for that data is received, the segment is deleted from the queue. If the acknowledgment is not received before the timer runs out, the segment is retransmitted. An acknowledgment by TCP does not guarantee that the data has been delivered to the end user, but only that the receiving TCP has taken the responsibility to do so. Now, this is interesting. In our NOC, we often troubleshoot connectivity issues between our network and external client networks, and whenever we sniff traffic on a firewall and see SYN and ACK bits sent and received in both directions, we assume that the connectivity is established and the issue has nothing to do with the network. But now this RFC made me think - what else should I check (without setting up Wireshark) if a TCP connection is established but the users are still experiencing connectivity issues?
This part of the RFC is about passing responsibility over to the operating system or whatever is the next stage of the process. It's fundamentally concerned with the separation of layers. An acknowledgment by TCP does not guarantee that the data has been delivered to the end user, but only that the receiving TCP has taken the responsibility to do so. I have always thought about it this way:
- The OS could crash between sending the ACK and the data reaching the client process ("client" here means client of the OS, not "network client")
- The client process could be buggy or crash, or just be much slower than anticipated to get round to dealing with its incoming data, or indeed only read it under non-obvious circumstances
- If the client is sending the data onwards, perhaps to a disk file, the file may not have been written or flushed yet
- If the client is sending the data onwards by TCP, the far side TCP may not have transmitted the data, received an ACK, or the far process may not have successfully consumed the data
All it is saying is that this is a layer-4 (transport) acknowledgement ("I hear your bytes"), not a higher-layer acknowledgement. Consider, for example, the difference between the TCP ACK, the SMTP 250 OK after the next-hop mail gateway accepts a message, a message receipt message (e.g. per RFC 3798), a message-opened tracking pixel, a thank-you note from a PA, and a reply saying "Yes I'll do it." Another concrete example would be a printer:
- It must ACK the data early, before it knows what the end of it contains (it might be a PostScript file beginning with an included library bigger than the TCP transmit window)
- It might contain a status query ("do you have paper?", which it can obviously execute)
- It might contain a print command ("please print this", which it might fail to carry out if out of paper)
I would suggest that if users are seeing and sending ACKs but still experiencing connectivity issues, it is orders of magnitude more likely that there are congestion, OS, or application issues than anything strictly network-related. To diagnose, I suggest looking for retransmits, rather than the ACKs specifically.
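A hands-on way to see this separation of layers is the following Python sketch (the loopback address and port 50123 are arbitrary demo values): the server application accepts the connection but never calls recv(), yet the client's small send completes, because the receiving kernel buffers, and will acknowledge, the bytes at the TCP level regardless of whether the application ever reads them.

```python
# Illustration that a TCP-level ACK only means "the receiving TCP took the
# data", not "the application saw it": the server below accepts the
# connection but never calls recv(). The client's small message fits in the
# peer's receive buffer, so sendall() completes and the peer's kernel will
# ACK the bytes even though no application ever reads them.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50123
ready = threading.Event()

def lazy_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        time.sleep(2)          # the "application" is busy and never reads
        conn.close()

threading.Thread(target=lazy_server, daemon=True).start()
ready.wait()

with socket.create_connection((HOST, PORT)) as c:
    c.sendall(b"important business message")
    print("sendall() returned; the receiving TCP buffers and ACKs these bytes")
    print("even though the receiving application never calls recv().")
```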
{ "source": [ "https://networkengineering.stackexchange.com/questions/55581", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/47224/" ] }
56,156
Why do we need to give IP addresses to each interface? Wouldn't giving one to each device be enough?
Connecting an interface to a network makes it a part of that network. Therefore, the IP address is a property of the connection, not the host. Likewise, a host can have many network connections and accordingly, many IP addresses. Different interfaces or addresses often have different functions, so it's important to distinguish between them (e.g. internal console, public services, iSCSI). Routers require multiple IP addresses for their interfaces. However, there are scenarios when you'd like to assign an address to the actual device behind its interfaces - e.g. when you've got redundant links/interfaces and you don't really care which path a connection takes. Then you can use a virtual loopback interface inside the device which is always linked and up. Using addresses to generally identify hosts instead of interfaces creates the problem that you'd need to propagate information on which host is reachable from which network. How do you handle multi-home nodes like routers or other gateways? How about hosts moving between networks (e.g. mobile devices)? That very quickly causes problems that severely limit scalability.
{ "source": [ "https://networkengineering.stackexchange.com/questions/56156", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/54373/" ] }
56,208
Here's the scenario. I was picturing a university that bought a range of IP addresses. I think their network would still be connected to an ISP (right?), but they'd have the freedom to configure stuff the way they wanted. What stops them from assigning already-in-use IP addresses to their routers and hosts? And what would happen if someone did indeed do this?
Most likely, if they're a big university, they are their own ISP, using BGP to connect their network to the Internet via a number of upstream networks. Nothing stops them from using IP addresses they should not be using, and it would work in their local network. However, it won't work on the Internet. The upstream networks providing them connectivity should have filters in place which only allow the university to advertise IP addresses assigned to them. If the direct upstreams don't filter them, the upstreams' upstreams will. And if the university used IP addresses that are in use by another network, that other network would become unreachable from the university network. In addition, there are a number of projects (for example, RIPE RIS and BGPmon) which monitor routing tables and alert on any "illegal" IP advertisement (BGP hijacks and routing anomalies).
{ "source": [ "https://networkengineering.stackexchange.com/questions/56208", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/54373/" ] }
56,707
This question may not look like an important one, and in fact it is just out of curiosity. But to the point: is Ethernet port blinking really useful? Ethernet ports usually have two light indicators of activity (blinking LEDs). Yes, I know that they indeed indicate activity, but why do we need that for Ethernet ports while we don't for other interfaces? Is there any official explanation for that, or is it just a matter of tradition nobody cares about? Or something else? Looking forward to seeing your knowledge or ideas on the topic.
Yes, blinkenlights are your friends! The lights can be very helpful when diagnosing problems - especially when dealing with non-managed switches or remote diagnosis with inexperienced users. Rule of thumb:
- no link light = layer 1 (cable/port) problem
- link light but no traffic = layer 2 problem (or higher) - VLANs, STP, port security, IP subnet mismatch, ...
- traffic light constantly on but little useful traffic = bridge loop (or constant collisions with a repeater hub), possible duplex mismatch
Many switches can signal multiple kinds of information (sometimes using a toggle button): link speed (none/10M/100M/1G/10G), half/full duplex mode, slow/fast traffic, PoE available/applied, high error rate, inserted transceiver accepted/rejected, and more. Most of these are very helpful when something doesn't work. With other (shorter-range) interfaces, most often the other cable end is in the same room - that situation is rather easy to diagnose, even without link and traffic lights. With Ethernet, the far side can be in another room, another building, or even another city. Twisted-pair cabling is mostly limited to 100 meters, but fiber can cover 100 kilometers or even more. You'll be glad to have a quick, local info light. Note that the traffic blinking frequency is very often significantly reduced for the human eye - with a potential rate of more than 140,000 frames per second just for 100M Ethernet, indicating each single frame wouldn't make sense. Well-designed devices attempt to signal various port load states by frequency (like constantly lit for 80-100%, fast blinking for 50-80%, slow for 5-50%, intermittent for 1-5%), but sadly there's no standard.
{ "source": [ "https://networkengineering.stackexchange.com/questions/56707", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/55001/" ] }
56,845
From the book Computer Networks: The IP address 0.0.0.0, the lowest address, is used by hosts when they are being booted. It means ‘‘this network’’ or ‘‘this host.’’ ... All addresses of the form 127.xx.yy.zz are reserved for loopback testing. Packets sent to that address are not put out onto the wire; they are processed locally and treated as incoming packets. This allows packets to be sent to the host without the sender knowing its number, which is useful for testing. If I am correct, a loopback IP address refers to the current host. What is the difference between 0.0.0.0 and a loopback IP address then?
The statement: The IP address 0.0.0.0 [...] means ‘‘this network’’ or ‘‘this host.’’ is misleading. It is not an "or" but "This host on this network." From RFC 1122: { 0, 0 } This host on this network. MUST NOT be sent, except as a source address as part of an initialization procedure by which the host learns its own IP address. The loopback address (actually any address in the 127.0.0.0/8 network) is explained in the same RFC this way: { 127, any } Internal host loopback address. Addresses of this form MUST NOT appear outside a host. So both a loopback address and the all-zero address can be referred to as "this host", but they in fact have very different usages:
- the 0.0.0.0 address can be observed on a network, but only during the DHCP/BOOTP process, and only as a source address.
- any address in 127.0.0.0/8 cannot be seen anywhere on the network, and can only be used for testing the TCP/IP stack of the host, or for two applications on the same host to communicate together.
A 127.X.X.X address is attached to a loopback interface. Such an interface has no underlying layer attached (i.e. it is not attached to a link layer). The packet is processed and responded to in the Internet layer. So there's really no way for this packet to reach anything outside the host. But a packet sent from 0.0.0.0 is processed normally by the network stack, except that there's no routing decision; it is bound to the interface that is initializing, so it's sent out of this interface and goes through the link layer (which can be something other than Ethernet), and then onto the network.
{ "source": [ "https://networkengineering.stackexchange.com/questions/56845", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/7894/" ] }
59,610
I have an IP address and can traceroute to it, but I can not ping. You see, I can traceroute 43.24.226.50 : dele-MBP:~ ll$ traceroute 43.24.226.50 traceroute to 43.24.226.50 (43.24.226.50), 64 hops max, 52 byte packets 1 router.asus.com (192.168.2.1) 2.082 ms 1.039 ms 0.924 ms 2 100.64.0.1 (100.64.0.1) 3.648 ms 3.795 ms 3.955 ms 3 118.112.212.225 (118.112.212.225) 4.252 ms 4.569 ms 4.168 ms 4 171.208.203.73 (171.208.203.73) 6.378 ms 171.208.198.25 (171.208.198.25) 6.943 ms 171.208.203.61 (171.208.203.61) 7.055 ms 5 202.97.36.225 (202.97.36.225) 38.149 ms 202.97.36.221 (202.97.36.221) 39.949 ms 202.97.36.225 (202.97.36.225) 40.780 ms 6 202.97.90.158 (202.97.90.158) 37.894 ms 202.97.94.146 (202.97.94.146) 39.885 ms 39.354 ms 7 202.97.38.166 (202.97.38.166) 45.324 ms 202.97.39.149 (202.97.39.149) 40.097 ms 202.97.94.77 (202.97.94.77) 40.580 ms 8 202.97.51.118 (202.97.51.118) 374.218 ms 202.97.27.238 (202.97.27.238) 187.573 ms 202.97.86.138 (202.97.86.138) 197.524 ms 9 218.30.53.190 (218.30.53.190) 201.597 ms 218.30.54.190 (218.30.54.190) 194.194 ms 218.30.53.190 (218.30.53.190) 204.027 ms 10 182.54.129.91 (182.54.129.91) 220.026 ms 282.360 ms et-11-1-5.r01.laxus01.us.bb.bgp.net (182.54.129.38) 185.700 ms 11 182.54.129.91 (182.54.129.91) 229.700 ms 508.509 ms 266.683 ms 12 * 212.5.128.2 (212.5.128.2) 565.161 ms * 13 43.24.226.50 (43.24.226.50) 200.531 ms 201.911 ms 191.566 ms But I can not ping it: dele-MBP:~ ll$ ping 43.24.226.50 PING 43.24.226.50 (43.24.226.50): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 Request timeout for icmp_seq 3 Request timeout for icmp_seq 4 Request timeout for icmp_seq 5 Request timeout for icmp_seq 6 Request timeout for icmp_seq 7 Request timeout for icmp_seq 8 Request timeout for icmp_seq 9 Request timeout for icmp_seq 10 Request timeout for icmp_seq 11 If there is a ban on ICMP, traceroute should not work either. What's the reason for it? I checked the server's firewall is stopped.
On a similar question here, Luke Savage explained it perfectly: Traceroute is not a protocol itself, it is an application, and the protocols used depend on the implementation you are using. Primarily this is ICMP. There are two main implementations:
- tracert - tracert is a Windows application that utilises ICMP packets with an incrementing TTL field to map the hops to the final destination address.
- traceroute - traceroute is a *nix application available on most Linux based systems, including network devices, and on Cisco devices. This uses UDP packets with an incrementing TTL field to map the hops to the final destination.
The difference between these is useful to know, as some networks now block ICMP by default, so both ping and tracert from a Windows machine will fail, but a traceroute from a Linux device may still work. From your shared output I can see that you are using the traceroute command and not tracert, which leads me to think that you are using a Unix- or GNU-based operating system. In the answer I mentioned, you can see that Unix-based systems do not use ICMP for traceroute. In other words, since ping uses ICMP (which I think is blocked by the system you are trying to reach) and traceroute uses UDP packets with an incrementing TTL field (which I think is not blocked at the system you are trying to reach), ping fails but traceroute succeeds.
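If you want to reproduce the difference yourself, here is a hedged sketch using the Scapy library (assumptions: Scapy must be installed, root privileges are required, and the target address is just the one from the question). It sends one UDP-style probe and one ICMP-style probe with the same small TTL and reports what comes back:

```python
# Sketch (requires the scapy package and root privileges): send one
# traceroute-style probe of each flavor with the same small TTL and see
# which one draws a reply. 33434 is the traditional base UDP port used by
# Unix traceroute; the target and TTL are example values.
from scapy.all import IP, UDP, ICMP, sr1

target = "43.24.226.50"
ttl = 3

udp_probe  = IP(dst=target, ttl=ttl) / UDP(dport=33434)   # Unix traceroute style
icmp_probe = IP(dst=target, ttl=ttl) / ICMP()             # Windows tracert style

for name, probe in [("UDP", udp_probe), ("ICMP", icmp_probe)]:
    reply = sr1(probe, timeout=2, verbose=0)
    if reply is None:
        print(f"{name} probe: no reply (filtered or rate-limited)")
    elif reply.haslayer(ICMP):
        print(f"{name} probe: ICMP type {reply[ICMP].type} back from {reply.src}")
    else:
        print(f"{name} probe: reply from {reply.src}")
```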
{ "source": [ "https://networkengineering.stackexchange.com/questions/59610", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/51231/" ] }
60,002
I have a question about oversubscription in networking. I read a lot of documentation but I still don't understand what it means. I read the following on the Cisco website, oversubscription of the ISL is typically on the order of 7:1 or greater. What does oversubscription mean? Where is it used? Where should it be avoided? How do we calculate this value? If this is a configuration parameter, which commands are used to set it? (Cisco or Juniper) If it is configuration parameter, which devices or which IOS version support it?
Suppose you have a core switch that connects to several access switches (leaf and spine topology). If your access switches each have 48 1 Gb/s ports, you can potentially aggregate 48 Gb/s of traffic to be passed to the core switch, so you would need a connection between the core switch and each access switch of at least 48 Gb/s. Most often, this would be wasteful, because in practice you will never encounter a situation where all ports receive traffic at their maximum rate at the same time. So we could have an access switch with 48 ports at 1 Gb/s and an uplink to the core switch at 10 Gb/s. We then have an over-subscription of 4.8:1. If we use a LAG with 2 x 10 Gb/s ports, we can reduce it to: 48 x 1 Gb/s / 2 x 10 Gb/s = 2.4:1 When do we use it and when not? As you can see, it is almost always used when you have several switch layers. You don't use it: when you have only one switch layer (very small networks), or when you have very specific requirements and want the full bandwidth available on all ports at any time (and enough money to do so). How do we calculate this value? As in the example above, the over-subscription ratio is the ratio between the aggregate downstream capacity and the upstream (uplink) bandwidth. As for how to decide which final ratio to attain when designing / upgrading a network, it can be tricky. This is why, from its vast experience and analysis of real networks, Cisco makes some recommendations, such as the one you quoted, or the one quoted by @RonMaupin in a comment: the access to distribution oversubscription ratio is recommended to be no more than 20:1 (for every 20 access 1 Gbps ports on your access switch, you need 1 Gbps in the uplink to the distribution switch), and the distribution to core ratio is recommended to be no more than 4:1. But the correct values for a given network highly depend on the traffic pattern. For an existing network, close monitoring of the bandwidth used on each port should give enough insight. You can also use NetFlow / sFlow to analyze further what uses the bandwidth. When designing a new network you need to assess the expected traffic. If this is a configurable parameter, what are the commands used to configure it? (Cisco or Juniper) You can see now that it is not something we configure; it is a design choice. Note: The port speed is not always the limiting factor. Most often the switch hardware is not capable of handling the full bandwidth on all its ports simultaneously; this is indeed a kind of internal over-subscription (once again mostly driven by real usage patterns and costs).
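A quick way to sanity-check these figures is to compute the ratio yourself; the sketch below simply restates the arithmetic from the answer (the port counts and speeds are the example values above, not a recommendation).

```python
# Oversubscription ratio = aggregate downstream capacity / uplink capacity
def oversubscription(ports, port_speed_gbps, uplinks, uplink_speed_gbps):
    downstream = ports * port_speed_gbps
    upstream = uplinks * uplink_speed_gbps
    return downstream / upstream

print(oversubscription(48, 1, 1, 10))  # 4.8  -> 4.8:1 with a single 10G uplink
print(oversubscription(48, 1, 2, 10))  # 2.4  -> 2.4:1 with a 2 x 10G LAG
print(oversubscription(20, 1, 1, 1))   # 20.0 -> the 20:1 access/distribution guideline
```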
{ "source": [ "https://networkengineering.stackexchange.com/questions/60002", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/52517/" ] }
60,074
TCP provides reliability at the transport layer while UDP does not. So, UDP is fast. But a protocol at the application layer can implement a reliability mechanism while using UDP. In this sense, why isn't UDP with reliability (implemented at the application layer) a substitute for TCP, given that UDP is faster than TCP, when we need reliability?
TCP is about as fast as you can make something with its reliability properties. If you only need, say, sequencing and error detection, UDP can be made to serve perfectly well. This is the basis for most real-time protocols such as voice, video streaming etc., where lag and jitter are more important than "absolute" error correction. Fundamentally, TCP says its streams can be relied upon eventually. How fast that is depends on the various timers, speeds etc. The time taken to resolve errors can be unpredictable, but the basic operations are as fast as practicable when there are no errors. If a system knows something about the kinds of errors which are likely, it might be able to do something which isn't possible with TCP. For example, if single-bit errors are especially likely, you can use error-correcting coding for those bit errors: however, this is much better implemented in the link layer. As another example, if short bursts of whole-packet loss are common, you can address this with multiple transmissions without waiting for loss, but obviously this is expensive in bandwidth. Or alternatively, slow the speed down until the error probability is negligible: also expensive in bandwidth. In the end, a protocol has to pay for reliability with either a) bandwidth or b) delay. In implementation terms, you would find that the programmer-centuries invested in TCP will make it faster than anything general you could afford to make, as well as more reliable in the obscure edge cases. TCP provides: a ubiquitous method of connecting (essential where the communicating systems have no common control) giving a reliable, ordered (and deduplicated), two-way, windowed byte stream with congestion control over arbitrary-distance multi-hop networks. If an application doesn't require ubiquity (your software runs on both sides), or doesn't need all of TCP's features, many people profitably use other protocols, often on top of UDP. Examples include TFTP (minimalistic, with really inefficient error handling), QUIC, which is designed to reduce overheads (still marked as experimental), and libraries such as lidgren, which has fine-grained control over exactly which reliability features are required. [Thanks commenters.]
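As a concrete illustration of "reliability on top of UDP", here is a minimal stop-and-wait sender sketch; the framing (a 4-byte sequence number), the chunk size, and the retry policy are invented for this example and are far simpler than what TCP or QUIC actually do.

```python
import socket
import struct

def send_reliable(dest, payload, timeout=0.5, retries=5):
    """Very small stop-and-wait: send a numbered datagram, wait for an ACK,
    retransmit on timeout. Only one datagram in flight at a time (no windowing)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    seq = 0
    for start in range(0, len(payload), 1000):
        chunk = payload[start:start + 1000]
        packet = struct.pack("!I", seq) + chunk          # 4-byte sequence number header
        for _attempt in range(retries):
            sock.sendto(packet, dest)
            try:
                ack, _ = sock.recvfrom(4)
                if struct.unpack("!I", ack)[0] == seq:   # receiver echoes the sequence
                    break
            except socket.timeout:
                continue                                  # lost datagram or ACK: resend
        else:
            raise IOError("peer not acknowledging")
        seq += 1

# send_reliable(("192.0.2.10", 9000), b"x" * 5000)  # example usage (address is illustrative)
```

Even this toy shows where the cost goes: every loss is paid for with a timeout (delay) or a retransmission (bandwidth), exactly as the answer describes.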
{ "source": [ "https://networkengineering.stackexchange.com/questions/60074", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/58678/" ] }
62,949
I’ve been following the CCENT official certification book (100-105) and came upon this question in the “do I know this already?” quiz. The book has only covered /24 subnetting so far. Which of the following is a network broadcast address? a. 10.1.255.255 b. 192.168.255.1 c. 224.1.1.255 d. 172.30.255.255 As no subnetting notation has been included, I’ll stick with a .255 ending as being broadcast. a = seems correct. b = incorrect, ends with .1 c = incorrect, it’s a class D multicast. d = seems correct too. The answer says ONLY D is correct. So why is A incorrect? My understanding: 10.1.255.0 as the network ID, 10.1.255.1 to 10.1.255.254 as valid IP addresses, 10.1.255.255 as the broadcast. Class A = 8 bits network ID, 24 bits host ID (0 subnet bits as there's only 1 whole subnet? Is this correct?), subnet mask = 255.0.0.0. I believe it's my lack of understanding of how subnet masks correlate to IP addresses.
I believe the book wrongly assumes network classes are still in effect. So a) would be a "Class A" network, where 10.255.255.255 would be the broadcast address. Another hint: There is no explicit network size specified (/24, /27, ..) so it is implied you know about network classes. Classical example of outdated literature.
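You can check the classful interpretation with Python's ipaddress module; the prefix lengths below are just the classful defaults (/8 for class A, /16 for class B), which is exactly the assumption the book is making.

```python
import ipaddress

# Class A: 10.0.0.0/8 -> directed broadcast is 10.255.255.255, so 10.1.255.255 is
# an ordinary host address under classful rules and answer (a) is not a broadcast.
print(ipaddress.ip_network("10.0.0.0/8").broadcast_address)     # 10.255.255.255

# Class B: 172.30.0.0/16 -> directed broadcast is 172.30.255.255, so answer (d) fits.
print(ipaddress.ip_network("172.30.0.0/16").broadcast_address)  # 172.30.255.255
```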
{ "source": [ "https://networkengineering.stackexchange.com/questions/62949", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/62562/" ] }
63,738
I have reset the password on a Cisco 1700 router, now when I copy run start or write mem then reload or reboot, it defaults to factory settings, i.e. it is not holding the configuration I have just written. To reset the password I used the boot from ROMmon command. Any help would be very much appreciated.
Check your config register by running show version . If it shows 0x2142, it means that on boot the router ignores the NVRAM config and loads the factory defaults. To change that, boot the router and, in configuration mode, enter config-register 0x2102 . Then write memory and reload.
{ "source": [ "https://networkengineering.stackexchange.com/questions/63738", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/22358/" ] }
64,401
DHCP uses UDP as its transport protocol. DHCP messages that a client sends to a server are sent to well-known port 67 (UDP—Bootstrap Protocol and DHCP). DHCP messages that a server sends to a client are sent to port 68. So, can DHCP use TCP?
DHCP cannot use TCP as the transport protocol because TCP requires both end-points to have unique IP addresses. At the time a host is required to use DHCP, it does not have an IP address it can source the packets from, nor does it have the IP address of the DHCP server. So it uses 0.0.0.0 as the source IP address and 255.255.255.255 (broadcast) as the destination IP address (this is for DHCP - similar behaviour is present for DHCPv6). These IP addresses are not valid host IP addresses and can be used by multiple clients at any time, so a TCP connection wouldn't be "unique", for lack of a better term.
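For illustration only, this sketch shows how a client can already send a UDP datagram from 0.0.0.0 to the broadcast address before it has any lease; the payload is a placeholder, not a real DHCPDISCOVER message, and binding to port 68 requires administrative privileges.

```python
import socket

# A UDP socket can transmit without a usable unicast source address:
# bind to the wildcard address and send to the local broadcast address.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(("0.0.0.0", 68))                    # DHCP client port (needs privileges)
sock.sendto(b"placeholder, not a real DHCPDISCOVER", ("255.255.255.255", 67))

# TCP offers no equivalent: connect() needs a specific, unique peer address and a
# three-way handshake, neither of which exists before the client is configured.
```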
{ "source": [ "https://networkengineering.stackexchange.com/questions/64401", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/64200/" ] }
64,679
I know that the organization that distributes IP addresses decided to assign 192.168.xxx, 172.xxx and 10.xxx to private networks. However, I thought that private networks have their own address space so shouldn't a private network be able to assign any values in the IP address space and not be limited to those values? Assuming IPV4 CIDR notation
RFC 1918 allocates the following for private address space: 10.0.0.0/8 172.16.0.0/12 (not 172.0.0.0/8 !!!) 192.168.0.0/16 While those are private, network engineers often use NAT to allow users on those nets to reach internet resources. If you used 8.0.0.0/8 for private address space (for example), you would not be able to reach the google address server 8.8.8.8 , because you would have an internal route for that block. In addition, even if your “private” servers did not need to reach the internet at all, if google tried to reach your public webserver, and your public webserver had your internal routing table (with your “private” 8-net route), the replies would not get back to google. So use the RFC1918 private address space and save yourself a bunch of trouble.
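If you ever need to check whether an address falls in these ranges programmatically, Python's ipaddress module already knows them; note that is_private also covers a few other special-purpose blocks, so treat this as a convenience check rather than a strict RFC 1918 test.

```python
import ipaddress

for addr in ("10.1.2.3", "172.16.0.1", "172.32.0.1", "192.168.1.1", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3    True
# 172.16.0.1  True
# 172.32.0.1  False  <- outside 172.16.0.0/12, a common misconception
# 192.168.1.1 True
# 8.8.8.8     False
```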
{ "source": [ "https://networkengineering.stackexchange.com/questions/64679", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/64551/" ] }
64,803
It is stated in Wikipedia that an IPv6 header does not include a checksum. What are the reasons that were behind this decision?
One of the ideas around IPv6 was to speed up packet forwarding. To that end, several decisions were made. For example, the IPv6 header was greatly simplified and is a fixed length, unlike the variable length IPv4 header. Also, you cannot fragment IPv6 packets along the path, the way you can for IPv4, because packet fragmentation is resource intensive. Not having a checksum in the IPv6 header means that an IPv6 router does not need to recalculate the checksum to see if the packet header is corrupt, and recalculate the checksum after decrementing the hop count. That saves processing time and speeds up the packet forwarding. The logic is that the layer-2 and layer-4 protocols each already have a checksum. The layer-2 checksum covers the entire IPv6 packet, and the layer-4 checksum covers the transport datagram. Where UDP has an optional checksum for IPv4, it is required for IPv6.
{ "source": [ "https://networkengineering.stackexchange.com/questions/64803", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/1776/" ] }
64,871
Suppose you have two NICs with the same MAC address, but not necessarily the same IP address. What is the least possible separation (in terms of number of switches, routers, different IP subnets etc.) needed that would still allow traffic between the NICs?
Suppose you have two NICs with the same MAC address, but not necessarily the same IP address. You can't have that within the same link-layer segment. Identical MAC addresses will disable reliable switching/bridging. What is the least possible separation (in terms of number of switches, routers, different IP subnets etc.) needed that would still allow traffic between the NICs? The NICs need to be in different L2 segments - at least one router in between. Also, they would have to be in different IP subnets to enable normal routing. For a router to see identical MAC addresses in different subnets is not normally a problem. The number of switches in between doesn't matter - each broadcast is propagated throughout the broadcast domain (=L2 segment), so each NIC messes up the source-address table for the other on every participating switch. Of course, both NICs could be in different VLANs since those represent separate segments. [edit] As has been pointed out by Jörg, the "router" above can very well be an L3 switch that is used as a router. Note that the switching/bridging function of an L3 switch can not cope with identical MACs within the same segment either. [edit2] Also, (see comments, I thought that was pretty obvious) having multiple NICs with identical MACs is a bad thing. Generally, MACs are supposed to be unique (at least within a site's scope) in order to avoid problems that may be hard to diagnose. If need be (thx Ron!) you need to separate those NICs into their own broadcast domains/L2 segments/ESXi port groups and use a router to enable IP communication between them. Make sure your router or L3 switch is fine with duplicate MACs across its L3 interfaces. Do not replace the router without prior testing. Running that router inside a VM might have its own tribulations. Disclaimer : I have no experience running something like that in an ESXi environment - since a vSwitch works somewhat differently from a hardware switch - it has considerably more insight - there may be unexpected problems (unless you distribute the VMs to different hosts). In any case, duplicate MACs will likely require the "MAC address changes" option on the port group. They might even require running separate vSwitches in addition to using separate port groups.
{ "source": [ "https://networkengineering.stackexchange.com/questions/64871", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/64792/" ] }
67,209
So, AFAIK, packets "hop" between routers. Packets are forwarded via a router's default path until they get to the destination IP. So is it possible to specify a specific set of routers this packet "hops" to?
It's theoretically possible, but not really in a practical sense. The IP protocol includes two options: Loose Source and Record Route (LSRR) and Strict Source and Record Route (SSRR). They're both described in RFC 791 . The difference between them is that LSRR can specify a partial route, while SSRR specifies the complete, exact route. With LSRR, each router along the path uses its local routing table to determine how to send to the next hop in the source route. The reason it's not practical is that most routers are configured to ignore these options. RFC 1122 says that source-route forwarding must be disabled by default, and I would be surprised if any ISP enables it.
{ "source": [ "https://networkengineering.stackexchange.com/questions/67209", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/65099/" ] }
67,290
I'm studying some networking concepts and have learned that a receiving host looks at an Ethernet frame's header to determine what protocol is being used by the packet. It makes me wonder, though: since a packet header includes this same info, why doesn't the Ethernet protocol try to save space in the frame by just determining what network protocol is being used by looking at the packet header? Thank you for the help!
The receiver has to look at the Ethernet frame to decide its contents, which might be DECnet, Appletalk or many other things -- Internet Protocol is only one of many protocols running on top of Ethernet. When Ethernet was being designed, it wasn't obvious at all what protocols might exist in the future, and the winner-take-all effect wasn't obviously so large. A key goal of Ethernet was ubiquity -- in order to be cheap it had to be everywhere, and so it was designed to be completely neutral about its contents. It's also a fundamental idea in the separation of the layers: code for each layer does not look inside the contents, it has a label on the outside which says what it contains along with whatever is required for addressing. For example the Ethernet frame might say "contains IP". Inside its payload, the IP header has a label which says "contains UDP". The UDP header says "contains DNS". And the DNS program is responsible for deciding what to do with that. If you don't do it like this then you duplicate at least a little of the code of the upper layer in the lower. (Or, required some kind of mechanism for the lower layer to ask an upper layer "is this packet one of yours?".) And, worse than that, this then limits the lower layer to working with upper layers which the lower-layer implementer knew and cared about. I guarantee you the Ethernet designers didn't know about IPv6, and it's the separation of layers which permits the easy development of new protocols. Those who were obsessed about a few bytes generally wrote their protocols raw on top of Ethernet, essentially using it in the way we might use UDP. Most of those surely regretted it and converted to TCP or UDP later when they needed a facility such as ports or reliable ordered two-way byte streams.
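Here is a small sketch of what "looking at the Ethernet header" means in practice: the receiver reads the 2-byte EtherType and dispatches on it without inspecting the payload at all. The sample frame bytes are made up for the example.

```python
import struct

ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}  # a few common values

def dispatch(frame: bytes) -> str:
    # Ethernet II header: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return ETHERTYPES.get(ethertype, f"unknown 0x{ethertype:04x}")

# Fabricated example frame: broadcast destination, arbitrary source, EtherType 0x0800
frame = b"\xff" * 6 + b"\x02\x00\x00\x00\x00\x01" + b"\x08\x00" + b"payload..."
print(dispatch(frame))  # IPv4
```

The label on the outside is all the lower layer ever needs; only the upper-layer handler selected here goes on to parse the payload.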
{ "source": [ "https://networkengineering.stackexchange.com/questions/67290", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/67585/" ] }
67,412
Everybody says routers (and VLANs) break broadcast domains, but nobody goes into WHY that is, it seems. What's the router logic? Say I have three routers on my LAN, with one main router and the other two merely bridged to the first (not creating different networks with different addresses, so they're not acting as gateways). Let's say the main router sends out a broadcast. The packet gets encapsulated into a frame, and the switches will all forward the broadcast. But will these other routers, behind which I might have some more switches and devices, do the same? What I want to understand is: do routers only break broadcast domains when they're acting as gateways and actually route between networks, in which case they'll discard the broadcast by decapsulating the frame and seeing the address in the packet header (?), or do they always break broadcast domains, even when they're merely used more like switches, for their ports, behind a gateway? And HOW exactly do VLANs break broadcast domains? If VLANs abstract the underlying switch and logically divide it - say, in half - why would the resulting VLANs break the broadcast domain? Even if they are perceived as different switches, don't interconnected switches forward broadcasts? How does it all work here? Thanks in advance
Let's talk about it using this topology of three networks (red / orange / blue): A Router's primary function is to facilitate communication between IP networks. Which means if A wants to speak to D or B, the Router must be used. However, a Broadcast by definition is a message intended to be sent to everyone within the sender's local network. If Host A sends a broadcast, then Host A means for the packet to only be delivered to Host C, and the Router on the left -- and no one else. The Router, by definition, does not need to, and should not, forward that broadcast anywhere. So it isn't so much that the Router is "breaking" the Broadcast domain as much as it is that the Router is the natural boundary for the Broadcast domain. It is analogous to a wall being the natural boundary of a room. If a Router is merely "switching" between its interfaces and not actually routing, then you can safely consider that router as behaving like a Switch -- whose primary purpose is to facilitate communication WITHIN networks. As such, a Switch will not limit a Broadcast in any way, and in fact will help it along by flooding the broadcast out every port. Edit: forgot your VLANs question: And HOW exactly do VLANs break broadcast domains? If VLANs abstract the underlying switch and logically divide it - say, in half - why would the resulting VLANs break the broadcast domain? Even if they are perceived as different switches, don't interconnected switches forward broadcasts? How does it all work here? VLANs simply break up one switch into multiple "virtual" switches. That image above with the three "switches" can also be represented as two physical switches with three VLANs: In fact, you could consider this image the "Physical Topology" and the image above it as the "Logical Topology". They are essentially the same topology. In this image, if the switches receive traffic (to include broadcasts) on VLAN 10 ports, they will only send that traffic out other VLAN 10 ports -- this is by definition of what VLANs do. So whether there is only 1 switch or many switches in a row, Switches still only facilitate communication WITHIN networks, meaning across any number of switches you still have a single IP network. Disclaimer: The image and links above are to my own blog. The blog is not monetized. I make no profit from you visiting and am providing the links to help the reader.
{ "source": [ "https://networkengineering.stackexchange.com/questions/67412", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/58434/" ] }
67,434
Here's the ping output from Singapore: 64 bytes from 8.8.8.8: icmp_seq=1 ttl=50 time=1.96 ms Singapore to 8.8.8.8 = ~2ms Another ping output: 64 bytes from 23.59.8.146: icmp_seq=81 ttl=44 time=66.1 ms Singapore to 23.59.8.146 = ~66ms Now my question is, even if both servers reside in the USA (found from https://ipinfo.io/ ), how come the first server's latency/RTT is surprisingly lower?
As others have pointed out, Google DNS uses IP anycast, which allows multiple servers in multiple locations to effectively share an IP address. Google (like many others) has many servers around the world that will respond to 8.8.8.8. So when you ping that address, the server that responds is the one closest to you, perhaps one in your country. Note that geo-location services, like ipinfo, use a variety of techniques, including registration information. All of it is an estimation.
{ "source": [ "https://networkengineering.stackexchange.com/questions/67434", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/67799/" ] }
67,545
Will it still be useful to make use of private IP addresses with IPv6?
No, private addressing will not become obsolete. But actually, there are two kinds of private addresses: the Unique Local Addresses (ULAs) and the link-local addresses (LLAs). There will always be a need for big (i.e. routed) private networks that are not directly connected to the internet. Now, you could argue you don't require a private address space for this, just use whatever you want. There are still good arguments for having a dedicated IANA-assigned prefix FC00::/7 with a 40 bit randomizer for local unicast addresses as per RFC 4193 : Some devices on that network may be connected to the global internet also, either simultaneously or by moving networks. In any case, re-using prefixes will mess up routing tables. Worst case you even leak a 'privately used prefix' into the global routing table and black-hole that regularly routable address. Leaking a non-routable address is less bad. Actual global addresses are owned by people and organisations. These people are free to do with those addresses what they want, including hard-coding them in certain applications. If you re-use those addresses internally you may see some weird traffic and/or application behavior. At some point you may want to interconnect two private networks (e.g. combine two offices), at which point overlap would be a headache. The randomization makes sure this should be extremely unlikely. Now, most homes and offices don't have such a complex network and only have one router that guarantees connection to the internet. Even here it is useful to have a kind of private addressing. For example you want to run a service that is only reachable from inside the home, or run a service that is not dependent on your internet connection. For this you can easily rely on the LLAs in prefix fe80::/64 as defined in RFC 4291 . Most devices will automatically assign a link-local address to any IPv6-enabled interface. This address is only meaningful within a given broadcast domain and cannot be reached from outside that broadcast domain. As a final note: what IPv6 obsoletes is the need for private addresses that are NAT'ed into public ones. IPv6 allows for multiple addresses per interface, so the private addresses above can perfectly be combined with public addresses. Aside from that, the IP resource saving from NAT is no longer necessary, and the negligible perceived security that NAT gives can be built for IPv6 without actually doing NAT itself; you are better off using a dedicated firewall anyway.
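As an illustration (not part of the original answer), the 40-bit randomization of a ULA prefix and the classification of both address types can be shown with a few lines of Python; the generated prefix is random on every run and only an example.

```python
import ipaddress
import os

# Generate a ULA /48 the RFC 4193 way: fd00::/8 plus 40 random bits for the Global ID.
global_id = os.urandom(5).hex()                      # 40 random bits as 10 hex digits
ula_prefix = ipaddress.ip_network(
    f"fd{global_id[:2]}:{global_id[2:6]}:{global_id[6:]}::/48")
print("example ULA prefix:", ula_prefix)

# Both kinds of "private" IPv6 addresses are recognized by the standard library:
print(ipaddress.ip_address("fe80::1").is_link_local)  # True  (LLA, fe80::/64)
print(ula_prefix.network_address.is_private)           # True  (ULA, fc00::/7)
```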
{ "source": [ "https://networkengineering.stackexchange.com/questions/67545", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/67985/" ] }
69,842
Multicasting seems to provide an efficient method of routing network traffic from a source to multiple end users, especially at this moment in time with teleconferencing, streaming media, and online collaboration tools in high use. Looking into it, it appears to be seldom used for such applications. Why is this?
Because multicast is one source to many receivers, and thus two way communications (and anything using TCP connections) won't work. That makes it unfit to use for teleconferencing, online collaboration and many more applications. Streaming media would work, but many people like to be able to pause the stream for example, and that wouldn't be possible with multicast. To add to that, multicast is quite complex to implement, especially between networks. It is used however, but mostly only within networks. Many consumer networks I know use multicast to provide IPTV with a fallback to regular unicast if functions like time shifting are activated.
{ "source": [ "https://networkengineering.stackexchange.com/questions/69842", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/70705/" ] }
70,040
It's my understanding that TCP has logic for ensuring reliable communication, but UDP just naively sends information along the channel set up for it using IP and things in lower layers. Does UDP actually do anything? I'm kind of confused by why it even has a name.
Interesting perspective and question! Yes, most of what UDP does is supply a standard means for multiple applications to co-exist using the same IP address, by defining the concept of UDP ports . The exciting part about UDP isn't so much the network protocol but the API implemented by operating systems and socket libraries. While not part of the UDP specifications itself, the ability to use abstractions like the POSIX socket API to easily develop software atop protocols like UDP is key to the success of the Internet Protocol stack.
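A tiny sketch of that multiplexing: two independent "applications" on the same host simply bind different UDP ports, and the operating system delivers each datagram to the right socket. The port numbers and the loopback address are arbitrary example values.

```python
import socket

# Two services sharing one IP address, distinguished only by their UDP port.
echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 5001))

logger = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
logger.bind(("127.0.0.1", 5002))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello echo", ("127.0.0.1", 5001))
client.sendto(b"hello logger", ("127.0.0.1", 5002))

print(echo.recvfrom(1500))    # only the datagram addressed to port 5001
print(logger.recvfrom(1500))  # only the datagram addressed to port 5002
```

Everything interesting here happens in the socket API and the kernel's demultiplexing; the UDP header itself only carries the port numbers, a length, and a checksum.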
{ "source": [ "https://networkengineering.stackexchange.com/questions/70040", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/58757/" ] }
73,261
Why does one say fragmentation is bad and must be avoided due to performance issues, when in reality fragmentation intrinsically occurs within the communication? Example: User1 wants to send 100,000 bytes to User2. The maximum MTU size of the network devices connecting the nodes is 1500 bytes. The maximum MSS size is 1460 bytes in TCP. In the sending process, 100,000 bytes / 1460 => ~70 packets; therefore, segmentation occurs automatically. What am I missing here? Regards
Fragmentation is resource intensive in a router, and it slows packet forwarding. Today, we use PMTUD to determine the smallest MTU in the path so that packets are properly sized prior to sending. There are also fragmentation attacks, so many businesses drop fragments. What you are confusing is something like TCP segmentation , which is very different than fragmentation . Segmentation happens in the source host, not in the routers in the path, so the routers can forward at top speed, rather than pausing to fragment a packet and perform all the necessary calculations and build new packet fragments. IPv6 has eliminated in-path packet fragmentation. It must use PMTUD to find the smallest MTU in the path and properly size packets prior to sending. This was one of the choices made to speed packet forwarding. See this answer about that.
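The "properly size packets prior to sending" part is just header arithmetic; this sketch shows how a sender derives the largest payload once the path MTU is known (the 1500- and 1400-byte MTUs are example values, and the header sizes ignore options).

```python
IPV4_HEADER = 20   # bytes, without options
IPV6_HEADER = 40
TCP_HEADER = 20    # bytes, without options
UDP_HEADER = 8

def max_payload(path_mtu, ip_header, l4_header):
    """Largest transport payload that fits in one unfragmented packet."""
    return path_mtu - ip_header - l4_header

print(max_payload(1500, IPV4_HEADER, TCP_HEADER))  # 1460 -> the familiar TCP MSS
print(max_payload(1500, IPV6_HEADER, TCP_HEADER))  # 1440
print(max_payload(1400, IPV4_HEADER, UDP_HEADER))  # 1372, e.g. behind a tunnel
```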
{ "source": [ "https://networkengineering.stackexchange.com/questions/73261", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/68541/" ] }
75,994
I recently saw two men open up the sidewalk in a major German city. It seemed they dug out this device Both men worked for a networking technology company, so I hope this is on topic here. What is this device?
Connecting two fiber trunks is a tedious process. Those are splice cassettes for optical fiber - each single fiber is fusion spliced and looped into one of the cassettes for protection. Afterwards the whole box is sealed and buried or put in a cabinet.
{ "source": [ "https://networkengineering.stackexchange.com/questions/75994", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/78646/" ] }
76,484
I'm curious about why 10Base-T and then 100Base-T Ethernet networks used cables which had four pairs if they only needed two? Were there some cables that only had two pairs? If we wanted to increase the speed of Ethernet further in the future, could adding more pairs of cables help?
10BASE-T saw first light as StarLAN that made use of the already existing twisted-pair category 3 telephone cabling (instead of the dedicated coax that 10BASE5/2 required). That cabling standard carries four pairs to each wall jack. StarLAN (10) and subsequently 10BASE-T had no use for more than two pairs, so they ignored the other two. 100BASE-TX borrowed FDDI's copper design ("CDDI") with also just two pairs, so it's the same situation. The already existing cabling is also the reason for the use of straight-through cables as standard (with the crossover, MDI vs MDI-X logic on the device port side), and crossover cables only in special cases. Fiber uses crossovers throughout. Cables with only 1-2 & 3-6 pairs are sub-standard, yet functional with 10BASE-T and 100BASE-TX. Of course, there are (or at least were) many installations and adapters that would split a four-pair cable into two independent 10/100 links (for cost, cabling restrictions, ...). That won't work with 1000BASE-T and faster though - the faster Ethernet variants split traffic into four full-duplex lanes to reduce the cable's frequency requirements to a quarter, so they require all four lanes. For completeness, the 100BASE-T4 and 100BaseVG standards that initially competed with 100BASE-TX both used all four pairs of voice-grade category 3 cabling, non-intuitively both were inherently half duplex. In spite of the extra cost of upgrading to category 5 cable (but with lower port cost), 100BASE-TX quickly won the race and became ubiquitous. If we wanted to increase the speed of ethernet further in future could adding more pairs of cables help? It could, in theory, use more than the current four lanes. But nobody's going to redeploy their whole cable plant. By various standards (e.g. TIA-568), horizontal/tertiary cabling is commonly deployed using twisted-pair cabling (good for up to 10 Gbit/s, depending on category and length), but vertical/secondary cabling and upwards uses fiber anyway. 40GBASE-T and 25GBASE-T using category 8 cabling (30 m max) are already having a hard time competing with fiber and likely we won't get faster TP standards ever. Fiber is already at 400 Gbit/s and ready for much more using WDM or multi-lane fiber which is much more practical than with copper, so that's the future. EDIT As has been pointed out in the comments, you could also add pairs for more speed using another cable . That is actually common practice called link aggregation (LAG). However, LAG performance can differ substantially from a single, equally fast multi-lane link. Multiple lanes across a single cable are irrelevant to performance because data distribution across the lanes is very finely grained - usually on a multi-bit level, depending on the PCS line code. An aggregated link distributes data coarsely on the frame level. You really have to avoid changing the frame order and considerably hurting overall performance, so traffic isn't distributed dynamically (by link utilization) but statically - by L2 addresses, L3 addresses, L4 ports, ..., depending on the switches and their configuration. The higher the layer, the better, usually. At best, you distribute traffic on the L4 connection level - but still, no single flow can ever exceed the bandwidth of a single link. With L3-based distribution, that limit applies to any host-to-host connection. Much worse even, L2-based traffic distribution puts that limit on the sum of all flows between any two routers, hosts or mix. 
Accordingly, interconnecting 10G switches with aggregated 1G links is a bad idea mostly. Aggregating multiple links is generally limited as well. The most common LAG protocol LACP allows a maximum of eight active links, and all switches I've seen limit the number to eight (or less) for static LAG as well. Ethernet's speeds usually increase tenfold between grades, so that's more than you can gain by LAG. Correspondingly, link aggregation is often just a stop-gap measure before upgrading the actual link speeds.
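To see why a single flow cannot exceed one member link, here is a toy model of the hash-based distribution a switch performs; real switches use vendor-specific hash functions and field selections, so this is only illustrative.

```python
import hashlib

LINKS = 2  # member links in the aggregated bundle

def member_link(src_ip, dst_ip, src_port, dst_port):
    """Toy LAG hash: every packet of the same flow maps to the same member link."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % LINKS

# One big flow: every packet hashes to the same link, so it is capped at one link's speed.
print(member_link("10.0.0.1", "10.0.0.2", 49152, 445))

# Many flows: they spread across the links, so only the aggregate approaches 2x.
for port in range(49152, 49158):
    print(port, "->", member_link("10.0.0.1", "10.0.0.2", port, 445))
```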
{ "source": [ "https://networkengineering.stackexchange.com/questions/76484", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/77056/" ] }
78,359
I was reading: this answer to "Maximum packet size for a TCP connection" , where it says: The absolute limitation on TCP packet size is 64K (65535 bytes), but in practicality this is far larger than the size of any packet you will see, because the lower layers (e.g. ethernet) have lower packet sizes. A few questions remain unclear for me: Why don't we just send one single packet? Why do we need to split content into multiple packets (ignoring the size limit)? If a lower layer (like the internet layer) has a lower packet size what does this have to do with TCP packet size limitations? A higher layer (than the internet layer) can add as much data as it wants to.
Why don't we just send one single packet? Why do we need to split content into multiple packets (ignoring the size limit)? That would just lead back to circuit-switched networks like the original PSTN (Public Switched Telephone Network). The government funded research into packet-switched networks (result: Internet) to overcome the limitations of circuit-switched networks. In a circuit-switched network, or what you propose, one caller or packet would monopolize the circuit or path until it is done, not giving anyone or any other process a chance to use the circuit or path until it is done. Breaking things up into smaller packets means that you can share the circuit among callers or processes. Each IP packet is routed independently, so a packet follows a path to the destination, regardless of the path any other packet took to the same destination. If the path loses a link, then the routers in the path can reroute packets to a different path to the destination, and the sender does not know or care. The big driver of the government funding was the threat of disaster (including nuclear war, which was a big threat in the 1960s and 1970s). If you are making a call (say to respond to ICBM launches), and the telephone company central office is destroyed, then you lose the call and need to start all over, manually rerouting the call. The same holds true for a giant data packet. If you break things up into smaller packets, and there is an interruption in the path, the rest of the packets can automatically be re-routed around the damage. So, in the simple case you get to share the circuit or path, and you lose very little in the event of a circuit or path interruption. If a lower layer (like the internet layer) has a lower packet size, what does this have to do with TCP packet size limitations? A higher layer (than the internet layer) can add as much data as it wants to. TCP takes a stream of data (which can be very large) and segments it into PDUs (Protocol Data Units) we call segments. The segments fit into the IP packets, which fit into the data-link protocol frames. TCP is a very large subject, far too large to explain it all in a site like this. Once you understand the reasons for the different layers in the network stack (abstraction and encapsulation), you will see how that works. Basically, the data-link protocol is responsible for delivering frames in the local network, IP is responsible for delivering packets between networks, and a transport protocol like TCP is responsible for delivering datagrams between host processes on different hosts.
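For the 100,000-byte example, the bookkeeping looks roughly like this; the 1460-byte MSS is the usual value on a 1500-byte Ethernet MTU, and the overhead figures ignore options, the preamble, and the inter-frame gap, so treat the numbers as approximate.

```python
import math

data = 100_000          # bytes handed to the TCP socket by the application
mss = 1460              # typical MSS on a 1500-byte Ethernet MTU (no options)

segments = math.ceil(data / mss)
per_packet_overhead = 20 + 20 + 18          # TCP + IPv4 + Ethernet header/FCS
on_the_wire = data + segments * per_packet_overhead

print(segments)       # 69 segments, each one an independently routed packet
print(on_the_wire)    # ~104,000 bytes actually transmitted
```

Each of those 69 packets can take its own path, be delayed, or be retransmitted without the others noticing, which is the whole point of the design described above.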
{ "source": [ "https://networkengineering.stackexchange.com/questions/78359", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/83210/" ] }
79,835
My understanding is there are 2^32 - 1 possible IPv4 addresses, and 2^16 - 1 possible ports. Which gives ~2^48 addresses. The additional 2^16 ports seem almost insignificant considering the IPv6 address space is 2^80 times larger than the number of IPv4 addresses * ports. With the 2^128 possible IPv6 addresses, why do we need ports at all? Why not assign each application, tab, etc... its own public IPv6 address?
An IP address targets a host on the network layer. Transport layer ports multiplex an L4 protocol within a host (to different processes/services). Both are different things on different layers. Basically, if you'd repurpose IPv6 addresses (or bits) for host-level multiplexing you'd gain very little but you'd break logic compatibility between IPv4 and IPv6. A transport layer protocol works the same way on any IP version.
{ "source": [ "https://networkengineering.stackexchange.com/questions/79835", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/85233/" ] }
79,865
I am currently working on connecting IP cameras together. I am using Cat7 S/FTP cable. According to the article below: http://www.differencebetween.net/technology/difference-between-rj45-and-rj48/ I understand that: RJ45 is used with UTP cable only. I can't use RJ48 because it will not fit into my NVR and switches. I have two questions: I found an RJ45 connector which fits Cat7 cable: https://www.amazon.com/dp/B071GXPTVZ/ref=twister_B09ZYBKJFZ How could that happen? RJ45 should be used with UTP cables only, right? What is the right connector to choose to connect Cat7 S/FTP cable to an IP camera? Some people suggest connecting a Cat6a/Cat7 keystone at the end of the Cat7 cable: https://www.amazon.com/dp/B074PKQ1S1/ and then using a short Cat6 cable to connect between the keystone and the IP camera. Is that the correct method? Thank you
An IP address targets a host on the network layer. Transport layer ports multiplex an L4 protocol within a host (to different processes/services). Both are different things on different layers. Basically, if you'd repurpose IPv6 addresses (or bits) for host-level multiplexing you'd gain very little but you'd break logic compatibility between IPv4 and IPv6. A transport layer protocol works the same way on any IP version.
{ "source": [ "https://networkengineering.stackexchange.com/questions/79865", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/85292/" ] }
80,419
Just as the question says: why would there be an NTP server when the devices already have their own internal clocks and are accurate?
Internal clocks are only so accurate. With an uptime of months or sometimes years, your nodes can drift considerably from the correct time. Also, some nodes lack a battery-powered RTC, so they default to some point in time on bootup and need to be set somehow. Incorrect time may cause confusion and even break some protocols. NTP can provide accuracy within a few milliseconds, so if you e.g. analyze logs from various sources the time stamps will match very nicely. Of course, you can use alternative protocols for synchronization, with more or less precision. Setting the clocks manually on dozens or even hundreds of nodes does not look like a viable option.
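To get a feel for how simple the query itself is, here is a minimal SNTP client sketch; pool.ntp.org is just a public example server, and a real client would also compensate for network delay and poll repeatedly instead of reading a single timestamp.

```python
import socket
import struct
import time

NTP_TO_UNIX = 2208988800          # seconds between the 1900 and 1970 epochs

# 48-byte request: first byte 0x1b = LI 0, version 3, mode 3 (client)
request = b"\x1b" + 47 * b"\x00"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(request, ("pool.ntp.org", 123))
response, _ = sock.recvfrom(48)

# Transmit timestamp: the seconds field starts at byte 40 of the reply
server_seconds = struct.unpack("!I", response[40:44])[0] - NTP_TO_UNIX
print("server time:", time.ctime(server_seconds))
print("local time :", time.ctime())
```

Comparing the two printed lines on a long-running, unsynchronized box usually makes the case for running NTP by itself.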
{ "source": [ "https://networkengineering.stackexchange.com/questions/80419", "https://networkengineering.stackexchange.com", "https://networkengineering.stackexchange.com/users/86158/" ] }
714
What is the policy on asking homework questions on Physics Stack Exchange? What kinds of questions are considered homework questions? Are homework questions allowed? What should I include in a homework question? Why don't you provide a complete answer to homework questions? Return to FAQ index
Summary It's not enough to just show your work and ask where you went wrong. If you just need someone to check your work, you can always seek out a friend, classmate, or teacher. As a rule of thumb, a good conceptual question should be useful even to someone who isn't looking at the problem you happen to be working on What kinds of questions are considered homework questions? A "homework question" is any question whose value lies in helping you understand the method by which the question can be solved, rather than getting the answer itself. This includes not just questions from actual homework assignments, but also self-study problems, puzzles, etc. On the other hand, questions that come up in the course of doing a homework problem, but are separate from the main point of the problem, might not be considered homework questions. There's a bit of a judgment call to be made, depending on the context of the problem. If you're not sure, it's probably safer to treat your question as a homework question and later find out that it isn't, than the other way around. Can I ask a homework question here? Yes, but there are a couple of things you need to make sure of first. As a general rule, we do not discourage homework questions, as long as they are related to physics. But do keep in mind that Physics Stack Exchange is not primarily a homework help site ; it's a place to get specific conceptual physics questions answered. The list in the following section will help you ask questions about your homework in a way that fits in with the site's philosophy. Also, make sure you know whether your learning institution (middle school, high school, college, etc.) and your teacher or professor allow you to consult other people, or to post the exact question on the internet. This is usually addressed by your institution's honor code or rules and regulations, and any specific class policies. You should ask your teacher whether asking a homework question here is appropriate before posting your question. How should I ask a homework question on this website? See if an existing question helps you Check and see if someone has already asked a question that gives you the information you need. The search box at the top right corner of the page will be pretty useful here, but you can also try looking at tags that are relevant to your question. If you find a prior question that seems relevant but doesn't clear up your confusion, mention it when you write your own question. That gives the people answering a better idea of what kinds of explanations don't work for you, and what might be more effective. Ask about the specific concept that gives you trouble We expect you to narrow down the problem to the particular concept that's giving you trouble and ask about that specifically. That produces a question that is more relevant to others who might be having the same problem, as well as probably more interesting to answer. As a side effect it shows that you're not just being lazy and trying to get us to do your work for you. The best way to produce a focused, specific question is to show your work . Explain what you've been able to figure out so far and how you did it. Showing your work will help us gauge where you are having problems: if it is a technical thing near the end, a short to the point answer will suffice; if it is some fundamental problem with understanding the subject, somebody will then write a longer, more detailed response. 
It will also prevent people from spending a lot of time going over ground that you have already covered or understand well already. It's not enough to just show your work and ask where you went wrong. If you just need someone to check your work, you can always seek out a friend, classmate, or teacher. As a rule of thumb, a good conceptual question should be useful even to someone who isn't looking at the problem you happen to be working on. Of course, it's still good to include the text of your problem, just in case (more on that a few paragraphs down). Don't just copy the exact problem from your homework assignment or textbook. In particular, when you are asking for help, writing in imperative mode ("Show that...", "Compute...", or "Prove or find a counterexample: ...") is at the very least impolite: you are, after all, trying to ask a question, not give an assignment . It also turns many people off. Reference the source If you're asking about a specific homework problem from a textbook, include the book and the problem number, so that someone trying to answer the question can go look it up themselves if they need to. If you're asking about a specific problem from a custom assignment prepared by the instructor, it helps if you quote the complete text of the problem in your post. Again, this shouldn't be the entire content of the post - you still need to ask about the specific issue that's confusing you, in addition to quoting the problem - but you never know when the person answering might need additional information from the original problem. Use the homework tag Use the homework-and-exercises tag on your question, in addition to any other tags that identify the kind of physics involved. This lets answerers know that you're looking for an answer which explains the underlying concepts. If you don't include the tag, someone will usually add it for you. If that happens, don't think that we're accusing you of lying about whether your question is from a homework assignment! The tag is used for any question in which the point is to learn the method you're using to solve it, rather than just to get the answer. Why don't you provide a complete answer to homework questions? This is pretty well covered by a discussion on the Math Stack Exchange site. Providing an answer that doesn't help a student learn is not in the student's own best interest, and if a solution complete enough to be copied verbatim and handed in is given immediately, it will encourage more people to use the site as a free homework service. In the spirit of creating a lasting resource of mathematical knowledge, you may come back after a suitable amount of time and edit your response to include a more complete answer. Or even better, the student can post his own correct answer! If someone posts an answer to a homework-type question that gives away a complete or near-complete solution, in most cases it will be temporarily deleted. Examples The rules for how to properly post homework questions can be a bit confusing, so here are some examples: Good: Use the relative velocity formula to find $v_{2f}$ in terms of $v_{1f}$? Can the process $u\overline{u} \rightarrow s\overline{s}$ be mediated by the EM interaction? Mechanics + Thermodynamics: Bouncing Ball How to solve this Schrödinger equation? Which trigonometric ratio should be used to describe simple harmonic motion as a function of time? 
"Find the net force the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere" Peskin and Schroeder Exercise 10.2 - Yukawa Theory Renormalization Equations of motion only have a solution for very specific initial conditions A good homework question states the problem clearly, shows an attempt to work through it, and identifies the specific issue that is giving the questioner trouble. These questions demonstrate that pretty well. Bad: Moment calculation physics- momentum ( a space question) https://physics.stackexchange.com/questions/9994/acceleration-force Creating an ammeter? These homework questions don't show any effort put into solving the problem, and they are too specific to be of use to anybody except the person asking. That makes them inappropriate for this site. Where else can I go? If your question was closed as off-topic on this site, and you are unable to formulate it in the ways explained above, there are still other sites where you can ask for help. We keep a list on the thread My question was closed on Phys.SE. Can you recommend me another internet site where my question might be on-topic? . Parts adapted from https://math.meta.stackexchange.com/questions/1803/how-to-ask-a-homework-question
{ "source": [ "https://physics.meta.stackexchange.com/questions/714", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/124/" ] }
1,197
It was recently announced that several SE sites are closing , including Theoretical Physics and Astronomy. Given that our scope here is basically a superset of the scope of Theoretical Physics and that there is a lot of overlap between the sites, there is considerable support for migrating the entire content of that site to Physics Stack Exchange . However, it's also been mentioned in several places that there is a fair amount of overlap between Astronomy Stack Exchange and this site as well. Indeed, we get quite a few astronomy and cosmology questions posted here, and looking at the astronomy site I see a decent number of astrophysics-related questions. Certainly those questions can find a good home here. But since physics and astronomy are such closely related subjects, I think even the other, less physical questions from the astronomy site might be of interest to our audience. This raises the possibility of incorporating Astronomy.SE into this site as well. Should we expand the scope of this site to include both physics and astronomy? To be clear, what I am proposing is that this site expands its scope to include anything that is currently on topic for Astronomy SE (including the non-physics-related questions) as well as anything that is currently on topic here. If this change were to happen, it might be appropriate to alter the name and/or design of the site accordingly. This is being addressed by a separate question . Obviously, for this reason, the change in scope would be contingent on the approval of the Stack Exchange team. But I would also like to see whether there is community support for the idea.
Should we expand the scope of this site to include both physics and astronomy? Yep :-)
{ "source": [ "https://physics.meta.stackexchange.com/questions/1197", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/124/" ] }
1,444
I am afraid that David will delete comments of 't Hooft, and others, on threads where he has participated. I don't like to see any of 't Hooft's writing go anywhere, at least not for a long time, so please can he hold his horses? The same goes for other comment threads, which are actively going on. The trigger-happy deletion has caused some consternation to users in the past. It is only reasonable for stale discussions, not for active things that are still being sorted out.
I think the "deleting" feature, however it works, is one of the biggest "bugs" on Stack Exchange. In addition to the current controversy, in the past it has deleted very valuable references from the answers to some of my questions. Now I cannot recover them. I am highly displeased, to say the least. If there was an edit trail, or something, it might be tolerable. This kind of delete without memory should be reserved for abusive comments and the worst kind of spam, in my opinion. It should never be applied to debates with significant physical content. Move, yes if you must. Even hide, if absolutely necessary. Delete without memory, only in the most flagrant cases. My very passionate two cents worth.
{ "source": [ "https://physics.meta.stackexchange.com/questions/1444", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/4864/" ] }
4,306
Should I answer old questions (over a year old), even if they have already been answered? I think that this practice is correct, but since in many forums it is discouraged, I'm not sure.
Stack Exchange sites are not forums, and many of the rules which are conventional among forums don't apply here. The rule against answering old questions is one of them. So yes, you can and should answer old questions when you feel you have something to contribute, just as you would do with new questions. (Even if a question already has an accepted answer, don't let that stop you from posting another answer of your own.) In fact, one of the guiding principles behind the design of the Stack Exchange software is to allow old questions to receive new answers.
{ "source": [ "https://physics.meta.stackexchange.com/questions/4306", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/21817/" ] }
4,538
Is non-mainstream physics allowed here? What defines non-mainstream physics? What sort of questions and answers are disallowed by this policy? What should I do if I see a question or answer containing non-mainstream physics? Return to FAQ index
Is non-mainstream physics allowed here? No, questions and answers about non-mainstream physics are not allowed here. We are not a substitute for peer-review, and cannot evaluate new theories. While some questions can lead to legitimate new theories, the question will need to be specific in order to fit this format. What defines mainstream physics? Mainstream physics is physics which has been accepted by a significant portion of the physics community. In the case of modern physics, if a theory has not been published in a reputable journal, it is not considered mainstream. What sort of questions and answers are disallowed by this policy? Any post that attempts to work within the bounds of what we have determined to be "mainstream physics" is considered on topic for this site barring any other issues. For example, a question that proposes a new concept or paradigm, but asks for evaluation of that concept within the framework of current (mainstream) physics is OK. Similarly, a wrong answer that makes false statements but claims to work within the bounds of a mainstream theory is also allowed. On the other hand, if a question or answer uses a non-mainstream theory as its premise and attempts to go forward in that direction, it can be safely closed or deleted. What should I do if I see a question or answer containing non-mainstream physics? Firstly, be certain that it is indeed off topic by the above rules. Note that if a post is simply wrong , leave a constructive comment explaining why, and downvote. If the post is indeed non-mainstream, leave a comment stating the fact and linking to this meta post. For questions, flag or vote to close as non-mainstream. For answers, use a custom moderator flag mentioning that it is non-mainstream.
{ "source": [ "https://physics.meta.stackexchange.com/questions/4538", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/7433/" ] }
4,571
I've been kicking around this site for a little while now and I've realized that I'm very rarely even tempted to post a question. I'm an astronomy grad student with a background in physics, so naturally as I go about my life and work questions relevant to physics occur to me; it's one of my main areas of interest, after all. However, I feel like that same background/interests prevents me from actually wanting to post questions here. There is a very limited set of questions that (i) I think of and (ii) I can't reason out a plausible answer to, this is after all what my training as a physicist is supposed to make me capable of... or (iii) I can't find an answer to with a bit of research, another skill I think any self-respecting physicist possesses. It seems I'm in good company. Most of the top answer authors on this site seem to have a ratio of questions to answers in the range of 1:100 or 1:1000. I wondered if this might be particular to Physics.SE (and other SE's where the topic is one that will naturally attract problem-solvers), and got sort of mixed results. SO's top answer authors barely ask anything, and likewise on Math.SE. Photography.SE, which I'd describe as a non-problem solving based topic, is similar as well. Bicycles.SE has somewhat more even Q:A ratios amongst its top users. Top users from Gaming.SE and RPG.SE ask a lot of questions, and I think I'd describe gamers as keen problem solvers. So hmm, not a lot of support for that hypothesis. It's probably a lot simpler; it's a lot easier to answer questions than to ask (good) questions, especially in the volume required to end up at the top of the rep scale. Great questions can come from anywhere, and sometimes a simple question that anyone can ask can turn out to be very interesting and complex (my favourite example here is A mirror flips left and right, but not up and down ). But a piece of old wisdom goes that the more things you know about, the more things you find that you don't know about. An expert (I use the term loosely, what I really mean in the context of Physics.SE is anyone with about the equivalent knowledge required for a bachelor's degree in physics) should have some advantages when putting together a question, though. They should understand the basic physics underlying the topic, so they can really zero in on the concept they want to ask about. They know where to look for reliable information and can understand and evaluate the validity of whatever background material they come across. And I'm sure there are more reasons. Of course there are also potential drawbacks to being an expert trying to write a question, which I sort of outlined above. So my question is: Why do you think many clearly knowledgeable users don't ask more questions? Can we/should we/how can we encourage them/help them to ask more? I feel like there may be an untapped wealth of great questions lurking out there.
Can we/should we/how can we encourage them/help them to ask more? I feel like there may be an untapped wealth of great questions lurking out there. Yes, we should encourage expert users to ask more questions in their area of expertise. I agree with you that the proportion of good questions would be greater if the experts shared more. Here are six suggestions for experts considering contributing a question: 1) Answer your own questions. 2) Write questions for an audience of physicists trained outside your specialty. 3) Share the big, significant, important questions in your field. 4) Elaborate on the scientific news that does manage to get attention in the media. 5) Ask questions with answers that actually matter to the quality of life for people on our planet. 6) Be an example of reason, scientific methodology, and professionalism. Reasons to contribute: 1) We become what we write. 2) There are too few examples of scientific thinking available to people outside a university. 3) Good posts have a feedback effect by encouraging others – making physics.se more fun.
{ "source": [ "https://physics.meta.stackexchange.com/questions/4571", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/11053/" ] }
5,263
It's the time of the year again, namely it is December 2013, and so we shall now refresh our Community Promotion Ads for the new year. What are Community Promotion Ads? Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown. Why do we have Community Promotion Ads? This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things: the site's twitter account useful tools or resources for physics research interesting articles or findings for the curious cool events or conferences anything else your community would genuinely be interested in The goal is for future visitors to find out about the stuff your community deems important . This also serves as a way to promote information and resources that are relevant to your own community's interests , both for those already in the community and those yet to join. Why do we reset the ads every year? Some services will maintain usefulness over the years, while other things will wane to allow for new faces to show up. Resetting the ads every year helps accommodate this, and allows old ads that have served their purpose to be cycled out for fresher ads for newer things. This helps keep the material in the ads relevant to not just the subject matter of the community, but to the current status of the community. We reset the ads once a year, every December. The community promotion ads have no restrictions against reposting an ad from a previous cycle. If a particular service or ad is very valuable to the community and will continue to be so, it is a good idea to repost it. It may be helpful to give it a new face in the process, so as to prevent the imagery of the ad from getting stale after a year of exposure. How does it work? The answers you post to this question must conform to the following rules, or they will be ignored. All answers should be in the exact form of: [![Tagline to show on mouseover][1]][2] [1]: http://image-url [2]: http://clickthrough-url Please do not add anything else to the body of the post . If you want to discuss something, do it in the comments. The question must always be tagged with the magic community-ads tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form. Image requirements The image that you create must be 220 x 250 pixels Must be hosted through our standard image uploader (imgur) Must be GIF or PNG No animated GIFs Absolute limit on file size of 150 KB Score Threshold There is a minimum score threshold an answer must meet (currently 6 ) before it will be shown on the main site. You can check out the ads that have met the threshold with basic click stats here .
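To make the required answer format concrete, here is a minimal sketch of what a conforming ad answer could look like, with the reference-style links each on their own line. The tagline, image URL, and clickthrough URL below are made-up placeholders rather than a real ad, and any real submission would still need to meet the 220 x 250 pixel, PNG/GIF, under-150-KB image requirements:

    [![Example: join our community blog][1]][2]
    [1]: http://i.stack.imgur.com/xxxxx.png
    [2]: http://example.com/community-blog

As stated in the rules above, the answer body contains nothing but this snippet; any discussion of the ad belongs in the comments.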
{ "source": [ "https://physics.meta.stackexchange.com/questions/5263", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/4826/" ] }
5,778
On Physics Stack Exchange, people asking questions are expected to demonstrate that they've put a certain amount of effort into answering the question themselves. What exactly counts as sufficient effort? Return to FAQ index
What research do I need to do before asking a question? Here's the general rule: Before asking a question, do anything else you can think of that might get you the answer. In particular, you should probably do all of the following before asking here: Think about the question. What do you already know that relates to it? Narrow it down to the specific physics concept you are really asking about or are confused by. Type the question title into Google (and/or another good-quality search engine) and look at the top few (at least 5-10) results. Also search a few combinations of key words, and again look at the top few results. Identify what physics concepts are involved and look at the relevant Wikipedia pages. Look in a textbook or equivalent resource on the appropriate subject. If you don't have actual textbooks, there are plenty of online resources, like Hyperphysics , that you can use. If you're asking a more advanced question which concerns current research, look for relevant papers, including checking on arXiv . You may find it useful to use a dedicated scientific search engine, such as Google Scholar , INSPIRE , or ADS . Use the search box in the top right to check this site for similar questions. Also, when you're typing your question in the form, look at the suggested similar questions the system shows you. If you have friends, colleagues, teachers, or so on who would know something about your question, ask them for input. And anything else you can think of that you think has a reasonable chance of getting you the information you're looking for. (For example, if your question is about a calculation, try working through the math yourself.) If you find the answer to your question while doing all this research, great! You can, if you want, post your question here (if it's not already on the site) along with your own answer describing what you found. OK, I didn't find the answer. Now what? If your prior research didn't give the answer you were looking for, tell us what you checked . Don't explicitly list each one of the steps above to say that you followed it, but do mention anything you found that is close to the topic of your question, and point out why it didn't give you the answer you wanted. If someone reads your question and immediately finds a standard resource (Google search, Wikipedia, HyperPhysics, another SE question) that seems to answer it, it's going to look like you didn't do your research, unless you explain why that resource doesn't actually contain the information you're looking for. What if I think I know what the answer is? If you have a guess or hypothesis about the answer to your question (one that is grounded in solid physics), that's fine, but you should go out and check that hypothesis yourself. Maybe it'll be correct, in which case you can post the question here with your hypothesis (and the evidence that shows it's correct) as an answer. If you think it's incorrect, on the other hand, mention it in the question and say why you think it's wrong. A question that just ends on "My hypothesis is XXXX" comes across as lazy, but if you instead write "My hypothesis is XXXX, but that doesn't seem correct because Wikipedia says YYYYY," and so on, that shows research effort and makes a better question. What happens if I don't follow these guidelines? People trying to answer your question will follow the steps in the first section: they'll search Google, check on Wikipedia, on Hyperphysics, in textbooks, and do some simple calculations if it's that type of question. 
If, by doing so, they find the answer to your question, they're likely to downvote your question for not showing research effort. You may also get comments pointing you to the answer on Wikipedia or a top search result. Questions that show a particular lack of research effort may be put on hold (or "closed") . In many of these cases, the advice in our homework policy may be useful in improving the question enough to have it reopened.
{ "source": [ "https://physics.meta.stackexchange.com/questions/5778", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/124/" ] }
5,958
Well, meta posts about "homework" keep on coming, but we still don't seem to have a solution that sticks, so I'll give it another shot. For what it's worth I think the current system isn't so bad that we need to go back to the drawing board, but that there's also room for improvement. My suggestion is to use this question as follows: Post answers containing a short, self-contained statement on the topic of the homework policy. Keep these as close to one line as possible. I'll put up a couple of my own to show the format I have in mind and get the ball rolling. Vote according to whether you agree or disagree with the statement. This is of course the normal voting process for meta. My hope is that this can break up the question of homework policy into bite-sized chunks. Some items we will have good agreement on - these can be either implemented in the case of positive agreement, or discarded in the case of negative agreement. Other items will be debated - these can go to other meta-questions for further discussion.
Questions that can be summarized as "please solve this exercise" or "please plug these numbers into an equation for me" are OFF topic .
{ "source": [ "https://physics.meta.stackexchange.com/questions/5958", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/11053/" ] }
6,093
It's long overdue that I make this post revisiting our policy on "check-my-work" questions. These are questions, often (but not necessarily) homework-like, that present a complete mathematical or logical derivation and ask whether it's correct. Historically our homework policy has rendered check-my-work questions off topic. However, we're currently in the (very long) process of revisiting that policy. According to the poll on homework close reasons (answer 1, answer 2), the community seems to be generally in favor of keeping such questions off topic, although not obviously so. It was brought up in a chat session a while ago that perhaps people would support making only certain kinds of check-my-work questions on topic. When deciding whether a given check-my-work question is on topic, we might take into account factors such as: the level of the question (basic or advanced); how much prior research the asker has demonstrated (have they tried asking colleagues?); whether they have a reason to believe they have made a mistake at all; whether they have an idea of what the mistake might be; and others to be suggested. So the point of this question is to resolve the issue for whenever our homework policy gets revamped. Should any kinds of check-my-work questions be on topic? If so, how do we distinguish the on-topic ones from the off-topic ones? Return to FAQ index
"Check my work" questions should always be off-topic. Those that can be rephrased should be rephrased. "Am I right?", "Is this correct?" or something else is always only of use to people who did the exact same derivation, and this is definitely too localized. To answer the bullet points in order: The level of the question should be utterly irrelevant. I have no more desire to correct a multiplication error in a basic kinematics derivation than I have to hunt sign errors in an advanced QFT calculation. Aside from my desire, neither will be of use to people who have not committed the exact same error, so it is definitely too localized either way. Asking colleagues or anything else like that is nothing we can verify. The close-worthiness of a question must not be influenced by a statement like "I asked my prof and he didn't know either" since we have no way to know whether that is even true or not, or whether that prof even should have known, etc. Prior research that I would expect is finding out the correct answer (preferably with derivation) or, if that cannot/has not been done, finding similar problems or techniques and briefly explaining why they are of no use in this case. This is crucial. If it is simply "Am I right?", a probably full and complete answer is "Yes.", which is too short to even submit as an answer, which shows unequivocally that we as an SE do not want such questions. Asking "Where's the mistake?" is better, but... The only kind of "check my work" I think we should allow is the one where a derivation is presented, leading to a wrong result, and the question is "It seems as if step X is wrong? But it should be right because of Y, so why is this not the case?". There must be a reasonable explanation (by established physics, of course) of why the derivation is expected to work in the eye of the asker, and then the answer pointing out the flaw in the reasoning can actually be useful, since the question is then essentially "Why is the physical principle Y not applicable here?" The question should also be edited to reflect that. Note that it is still possible that questions of the latter type are essentially based on a sign error or somesuch, because we all are sometimes blind. Point it out in a comment, VTC the question (or not, that must probably be left to individual judgement, but that's what the close/reopen queues are for), and move on.
{ "source": [ "https://physics.meta.stackexchange.com/questions/6093", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/124/" ] }
6,135
I know that most of us are "paper theorists," but I think we need to remember that experimental physics is physics too! We have several tags for experimental physics (with tag excerpts): experimental-technology: Use this tag for questions pertaining to the limits, management, and operation of equipment necessary to experimental physics. This tag is not intended for "does this thing I heard about actually work" type questions. experimental-physics: for questions about design, process, data, or analysis of experiments and observations. experimental-technique: filling in the unspecified details of elegant-sounding descriptions of experimental methods. I think some of us are a little quick to close questions that seem to be engineering (e.g., How can I make a dry dilution refrigerator quiet?, which was closed and even downvoted, and which is also the cause for this Meta post). This merely raises the question: What objectively separates engineering questions from experimental physics ones? See also: Experimental technology questions: on topic?; Does the Area 51 proposal "Experimental or Applied Physics" duplicate this site?; Should "How do I solve this experimental problem?" questions be on-topic; What is engineering and what is experimental design? (of which this might be a possible duplicate?)
What objectively separates engineering questions from experimental physics ones? Nothing. I will argue that trying to formulate such a distinction is not the correct way to attack the problem of deciding what questions should and shouldn't be considered on-topic for this site. A better approach is to ask whether a physics or an engineering flavor is more appropriate in the answer. It's a question of answers. Suppose I want to build a waveguide resonator with low dielectric loss. I could post the following question to either Physics SE or Electrical Engineering SE: How can I design the geometry of a waveguide resonator so that it has the minimal dielectric loss? If I pose this question to an electrical engineer, I will likely get design rules, possibly in the form of immediately useful formulae. If I pose it to a physicist, I may learn something about the physics of electromagnetic fields near dielectric boundaries, diverging fields at conductor corners, and maybe even something about conformal mapping. Heck, if I pose it to the right person I might learn about numerical recipes to solve electrostatics problems. This example illustrates that the appropriate place for a question can depend on what kind of answer the person posing the question wants. It would be wise to keep this in mind when judging the appropriateness of questions on this site. Before knee-jerk voting to close a question, we must ask ourselves if the asker may have had a reason to post it here instead of somewhere else. Who is reading your question? In our capacity as experimental physicists, my colleagues and I spend >99% of our time building equipment, fixing equipment, and programming a computer. These are all activities which could be classified as "engineering", yet we identify as physicists. I frequent Physics SE and not Electrical Engineering SE. This is not without reason: I am far more qualified to answer questions here than I am on Electrical Engineering and, even when I'm asking an engineeringy question, I would rather get the benefit of people who work in similar environments and with similar equipment as myself. The point here is that another important aspect of deciding whether a question should be categorized as "engineering" is the readership of the site. If I ask a question about a machine, the Physics Lab Quizzwopper 2000X, nobody on Electrical Engineering SE will ever have heard of it. So, while a theorist might find a question about equipment off-topic, an experimentalist will not, because the experimentalist wants the answer which only a physicist can give. Audiences aren't well delineated. Consider the following question: How can I use the symmetries of physical system XYZ in order to vectorize my simulation program in computer memory for optimal performance? There is no way to objectively categorize this question as "physics" or "engineering". All we can say is that the person most likely to give a useful answer to this question is someone who has solved a similar problem in the past. This could be a "physicist" who had to learn some programming or an "engineer" who had to solve a physics problem. In this case, it's probably reasonable to let the question sit wherever the asker put it, and if it doesn't get any attention, that person can move it somewhere else and try again. P.S. I have avoided proposing an objective criterion by which to decide whether a given question is appropriate for this site, as that issue is not strictly within the scope of the original question.
{ "source": [ "https://physics.meta.stackexchange.com/questions/6135", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/25301/" ] }
6,196
A community ad caught my eye, and I visited http://www.physicsoverflow.org . What are the links between Physics SE and Physics Overflow and how are they different from each other?
PhysicsOverflow is meant to be some kind of rebirth of the untimely departed Theoretical Physics SE and a little physics brother of MathOverflow. Compared to the former theoretical physics site, we have slightly lowered the bar for asking questions to graduate level and upward, and broadened the scope to include experimental physics and phenomenology. Apart from the high-level Q&A, PhysicsOverflow also offers a Reviews section, dedicated to discussing and peer reviewing papers (mostly from the arXiv, but other sources can be considered too) publicly and "in real time". PhysicsOverflow was built outside the SE network so that it does not have to fulfill any externally prescribed activity criteria for graduation (Area51 statistics). As Danu hinted at, this also allows us to avoid having to obey the "SE model" or SE rules, policies, and guidelines where they would clash with the goals of a high-level academic community (MathOverflow negotiated a special agreement with SE before joining the network). You can find a full description of the site in our FAQ and a short summary of its purpose in our official announcement on MathOverflow.
{ "source": [ "https://physics.meta.stackexchange.com/questions/6196", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/22254/" ] }
6,971
The context is the following post: Hooke's Law and Bungee Jumping and the revision history can be viewed here . As one can see from the revision history, the post owner himself/herself vandalized the post, removing the entire content, and replacing it with a stream of "aaaaaa...". Then, this edit was suggested by another user, which merely reverted to an earlier version of the question. I was one of the two people who reviewed this, and instinctively, went for the approve button (as did the other reviewer), since the edit was setting right, what was messed up. However, thinking about it afterwards, this edit was clearly in conflict with the OP's intentions , which were, ummm... deliberately destructive . This raises two questions - 1) Does being the post owner entitle you to self-destruct your posts? 2) In the event of suggested edits like these, as a reviewer, am I supposed to: Approve it, because it cures self-vandalism? Reject it, because it conflicts with the OP's intentions?
Note that you can't delete a question that has an accepted or upvoted answer, as per this Mother Meta FAQ. In this particular case, then, the post owner seems to have tried to delete the question and, when that failed, they vandalized the post. This is supported by a detailed breakdown in the question timeline: The question is already upvoted when the OP comments "Don't worry, I solved it :)" and vandalizes the post four minutes later. More generally, being the post owner does not entitle you to vandalize your posts. From a legal perspective, you have granted a CC BY-SA license to the site to display any and all the versions you post. While the site generally respects the OP's wishes as to which version will be displayed, by posting you do cede a measure of control over your posts. From a moral perspective, if you post you are soliciting work from others in the community, and if you receive an answer then you are receiving work from community members. This is freely given, with the payback being shiny internet points and having their content up. By deleting a question, you deprive the people who worked to give you an answer of the chance to have their content displayed, and to gain more shiny internet points in the future. This is why you can't delete questions that have upvoted answers, and it works exactly the same for vandalism. If you encounter a self-vandalized post, you should roll it back to the last working version. If you see edit suggestions that cure self-vandalism, you should approve them.
{ "source": [ "https://physics.meta.stackexchange.com/questions/6971", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/46399/" ] }
7,181
It was some time back that I realized that what we are synthesizing here (possibly) is an archive for an expert system. Put the Physics Stack Exchange together with an AI machine and you have a super-physicist. The same might be said of other Stack areas and knowledge bases. What will the stack eventually become? Are there longer-term goals than to just ask/answer questions?
Eventually what it will become is a place where anyone who asks any question will be referred to a previous answer before their question is closed down.
{ "source": [ "https://physics.meta.stackexchange.com/questions/7181", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/45613/" ] }
7,407
In our last chat session two weeks ago (sorry for the delay), we had a discussion about updating the homework policy and homework close reason. I'm making this post to summarize that discussion and solicit feedback on how to proceed afterwards. The current status: According to the homework policy, questions are considered "homework-like" (or "educational") and fall under the policy whenever "...[the question's] value lies in helping you understand the method by which the question can be solved, rather than getting the answer itself." However, it's not always clear whether a question fits this description just from its content. And if you track the questions that we actually close these days using the homework-like close reason, quite a few of them are likely not of an educational nature. Instead, we've taken to using the homework-like close reason on questions that simply ask us to calculate something without the original poster making an attempt at it. We do have a policy on showing effort, separate from the homework policy, but it doesn't justify closing those questions. (Also, it was meant to catch questions that the homework policy doesn't.) It seems that most people believe that questions where the poster asks us to do some simple calculation for them should be off topic. However, it's not clear exactly where we should draw the boundary of what sorts of questions are off topic, and what reason is best to give for considering them off topic. There have been a number of discussions on this in the past, among them: Banning homework: vote and documentation (November 2013); What's the current status of the homework policy? (December 2013); Homework - the view from the chat session (March 2015); and Should we rename the homework policy? (October 2015). The options: We will probably want to create a new policy that replaces the current homework policy, and gives a different criterion for a question that is off topic. This new policy would become the new justification for closing low-effort homework questions (those where it is clear that it's a homework/educational question), so whatever we wind up going with, it should be some criterion that does catch those questions. There are a few options that I can identify: (1) questions which ask us to perform calculations are off topic; (2) questions which don't show sufficient effort are off topic (I suppose this is the nice wording for the poster being lazy); (3) a combination of the previous two - kind of like our current homework policy, except that the criteria of "ask a conceptual question" (more or less) and "show effort" would apply to all questions, not just those we consider to be of an educational nature. Those are the most likely contenders, but for completeness, some other options: (4) questions which ask for a conceptual understanding of something are on topic, and everything else is not (this is kind of like #1, but excludes more questions); (5) all questions which come from homework assignments are off topic (difficult to enforce); etc. etc. And for comparison, our current policy would be something like: questions which are educational in nature are off topic unless they ask a conceptual question and show effort. The issues: We have some ideas for what kind of criterion should replace our current homework policy. For each of these ideas, we should consider: Are there a significant number of questions which are off topic under our current homework policy which would be on topic under the new policy? If so, should they be on topic?
Exactly what sorts of questions would become off topic under the new policy which are not covered by current close reasons? Do we want them to be off topic? Hopefully the answers we come up with will help us focus the new close reason. At some point in the future, we will also have to discuss the role of the homework-and-exercises tag in this new policy.
Questions which ask us to perform calculations are off topic. This is too broad. I recognize that it's intended to head off boring copied-from-homework questions like "what's the optimal angle for a 45 mph banked turn if the coefficient of friction is μ = 0.233457821234 also are all those digits important kthxbye". However, calculation questions like "what's the average antimatter content of a banana?" or "with my eyes closed at sea level, how often should I expect to see Cherenkov flashes from cosmic ray muons in my vitreous humor?" are interesting entrées into a host of more complicated topics. It'd be the responsibility of the asker to elaborate on which of these other topics is really at issue. In fact, I find that turning my conceptual questions into model calculations gives me better answers than asking in vague terms, and I'm loath to see such questions closed. Furthermore, many of my favorite questions on this site have inspired clever calculations from answerers; I would prefer to encourage these sorts of answers with inviting questions. I might propose as an alternative: Questions which attempt to outsource tedious calculations to the community, without any broader context, are off-topic. I recognize that a key word here ("tedious") implies a value judgement, and there's a bias in writing these guidelines towards "objective", judgement-free criteria. That bias is flawed, which is why we have human moderators and the opportunity to discuss some decisions with them.
{ "source": [ "https://physics.meta.stackexchange.com/questions/7407", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/124/" ] }
8,930
When we last visited the issue of history questions, the dedicated History of Science and Mathematics Stack Exchange was in its early stages. At that time, we decided against closing questions for being historical, but with the implication that it may make sense to revisit that decision someday. Now that HSM SE is well established, the time seems right to do that revisit. So, going forward from here, do we want history-of-physics questions to be on topic? Where exactly should we draw the line between on-topic and off-topic historical questions? Keep in mind this is primarily asking about history-of-physics questions to be asked in the future, not old ones that have already been asked here. Return to FAQ index
Summary: History questions are welcome on this site whenever they have any bearing on our modern understanding of physics. However, if a question has only minimal or null bearing on our current understanding, or it specifically requires a historian's skills, toolset, and mindset to answer, then it should be migrated to the History of Science and Maths Stack Exchange site. I think we should stay open to a significant set of history questions, and we should keep the history tag. Unlike in mathematics, the historical perspective is very often one of the key ways to understand the world and our knowledge about physics. The way physics is structured is very often centered around key experiments, and these are generally thought of as historical events, and their contribution and motivation is understood in a historical perspective. It doesn't have to be that way: you can perfectly well motivate, say, special relativity as one of only two possible cases, with the Galilean case ruled out through muon lifetime measurements, but you don't - you usually start with the Michelson-Morley experiment and tell the story from there. (Math, on the other hand, does not have to do this: you can simply start with whatever axioms and definitions you want to work with, and you build your theory. Later on, or as an aside or a brief introduction, you might comment, as motivation, that the axioms went through this and then that version before people found the Right Version to use, but the brunt of the work in multiple fields - particularly those taught in depth in physics degrees - tends to follow from the axioms to the consequences rather than the historical refinements of the axioms.) Going through recent, good-score questions on the history tag yields plenty of questions that are definitely of a historical character and yet have direct bearing on our understanding of physics, and which I think should definitely be on topic here. Taking a few non-rigorous picks: How did physicists know that only negative charges move? How did physicists know that there are two kind of charges? Why was the first discovered neutrino an anti-neutrino? How did we realize that light travels at a finite speed? Why is a second equal to the duration of exactly 9,192,631,770 periods of radiations? Did Newton conduct any experiments to find something called momentum, or was he such a great genius that he was able to spot it intuitively? Why was the Stark effect discovered much later than the Zeeman effect? On the other hand, there are questions that should be moved to the history site, because they require a historian's skills, toolset, and mindset to answer, and because they have minimal or null bearing on our modern understanding of physics. From the first page of the search above, for example, this could include https://physics.stackexchange.com/questions/266815/was-einstein-familiar-with-the-michelson-gale-experiment (screenshot) Bohr on wholeness? What was Newton's own explanation of Newton's rings? (For full clarity, these questions weren't migrated because it was too late to migrate them when this question was posted. If they were posted now, they would be migrated.) However, I find that the set of questions that satisfy that criterion strictly is really rather small. There is, though, a relatively large gray area of questions that are primarily of historical interest, but that still illuminate aspects of how we think about physics and why we no longer think or do certain things.
Some examples from this category are Was Leverrier-Adams prediction of Neptune a lucky coincidence? The role of anharmonic oscillator(s) in Heisenberg's 1925 paper What are the original papers on the hydrogen spectral lines? Were the Michelson-Morley results a surprise? I think these questions should be allowed here: they could also be asked at HSM if the OP is interested in a science historian's perspective, but there is no need to migrate them if asked here, in which case the focus should be (because it can be) on how that part of the historical record helps explain how we think of physics. If the OP wants to shift the focus specifically to what some historical character thought and did, what information they had available, and so on, then it can move to HSM.
{ "source": [ "https://physics.meta.stackexchange.com/questions/8930", "https://physics.meta.stackexchange.com", "https://physics.meta.stackexchange.com/users/124/" ] }