Dataset columns: source_id (int64), question (string), response (string), metadata (dict).
I use yum to update my Fedora. After a huge update, I got many .rpmnew and .rpmsave files. I could understand if I had modified these files. But I'm sure that I didn't edit these files. What should I do with these files? What will happen at the next update? Here's a sample of these files:

/usr/share/texmf-var/fonts/map/dvipdfm/updmap/dvipdfm_dl14.map.rpmnew
/usr/share/texmf-var/fonts/map/dvipdfm/updmap/dvipdfm_dl14.map.rpmsave

Thanks
There are two cases:

If a file was installed as part of an RPM, is a config file (i.e. marked with the %config tag), you've edited the file afterwards, and you now update the RPM, then the new config file (from the newer RPM) will replace your old config file (i.e. become the active file). The latter will be renamed with the .rpmsave suffix.

If a file was installed as part of an RPM, is a noreplace config file (i.e. marked with the %config(noreplace) tag), you've edited the file afterwards, and you now update the RPM, then your old config file will stay in place (i.e. stay active) and the new config file (from the newer RPM) will be copied to disk with the .rpmnew suffix.

See e.g. this table for all the details.

In both cases you or some program has edited the config file(s), and that's why you see the .rpmsave / .rpmnew files after the upgrade: rpm upgrades config files silently and without backup files if the local file is untouched.

After a system upgrade it is a good idea to scan your filesystem for these files, make sure that the correct config files are active, and maybe merge the new contents from the .rpmnew files into the production files. You can remove the .rpmsave and .rpmnew files when you're done.
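If you want to track these files down after an upgrade, a minimal shell sketch along these lines can help (the search roots are just a guess; adjust them, and the diff target, to your own system):

    # List leftover config copies created by rpm:
    find /etc /usr /var -name '*.rpmnew' -o -name '*.rpmsave' 2>/dev/null

    # Compare one of them against the active file before merging or deleting:
    diff -u /usr/share/texmf-var/fonts/map/dvipdfm/updmap/dvipdfm_dl14.map \
            /usr/share/texmf-var/fonts/map/dvipdfm/updmap/dvipdfm_dl14.map.rpmnew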
{ "source": [ "https://serverfault.com/questions/48776", "https://serverfault.com", "https://serverfault.com/users/6343/" ] }
48,808
My company production servers (FOO, BAR...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open a ssh connection with server A or B with my username JOHNDOE, then from A (or B) I can access any production server opening a SSH connection with a standard username (let's call it WEBBY). So, each time I have to do something like:

ssh johndoe@a
...
ssh webby@foo
...
# now I can work on the server

As you can imagine, this is a hassle when I need to use scp or if I need to quickly open multiple connections. I have configured a ssh key and also I'm using .ssh/config for some shortcuts. I was wondering if I can create some kind of ssh configuration in order to type ssh foo and let SSH open/forward all the connections for me. Is it possible?

Edit

womble's answer is exactly what I was looking for but it seems right now I can't use netcat because it's not installed on the gateway server.

weppos:~ weppos$ ssh foo -vv
OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
debug1: Reading configuration data /Users/xyz/.ssh/config
debug1: Applying options for foo
debug1: Reading configuration data /etc/ssh_config
debug2: ssh_connect: needpriv 0
debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
debug1: permanently_drop_suid: 501
debug1: identity file /Users/xyz/.ssh/identity type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /Users/xyz/.ssh/id_rsa type 1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /Users/xyz/.ssh/id_dsa type 2
bash: nc: command not found
ssh_exchange_identification: Connection closed by remote host
As a more concrete version of Kyle's answer, what you want to put in your ~/.ssh/config file is:

host foo
    User webby
    ProxyCommand ssh a nc -w 3 %h %p

host a
    User johndoe

Then, when you run "ssh foo", SSH will attempt to SSH to johndoe@a, run netcat (nc), then perform an SSH to webby@foo through this tunnel. Magic!

Of course, in order to do this, netcat needs to be installed on the gateway server; this package is available for every major distribution and OS.
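If netcat isn't available on the gateway (as in the asker's edit), newer OpenSSH clients can tunnel without it. This is a sketch assuming OpenSSH 5.4+ on the client for -W (or 7.3+ for ProxyJump), which is newer than the OpenSSH 5.1 shown in the debug output:

    host foo
        User webby
        # stdio forwarding, no nc needed on the gateway:
        ProxyCommand ssh -W %h:%p a
        # on OpenSSH 7.3 or later you could instead simply use:
        # ProxyJump a

    host a
        User johndoe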
{ "source": [ "https://serverfault.com/questions/48808", "https://serverfault.com", "https://serverfault.com/users/9633/" ] }
49,082
Is it possible to run a cron job every 30 seconds without a sleep command?
If your task needs to run that frequently, cron is the wrong tool. Aside from the fact that it simply won't launch jobs that frequently, you also risk some serious problems if the job takes longer to run than the interval between launches. Rewrite your task to daemonize and run persistently, then launch it from cron if necessary (while making sure that it won't relaunch if it's already running).
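One way to follow this advice, as a rough sketch: a small wrapper that loops forever and uses flock so a second copy never starts. The task path and lock file below are hypothetical, and you'd launch the wrapper once from an init script or an @reboot cron entry:

    #!/bin/bash
    # Refuse to start if another instance already holds the lock.
    exec 9>/var/run/mytask.lock
    flock -n 9 || exit 1

    # Run the (hypothetical) task every 30 seconds, forever.
    while true; do
        /usr/local/bin/mytask
        sleep 30
    done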
{ "source": [ "https://serverfault.com/questions/49082", "https://serverfault.com", "https://serverfault.com/users/15336/" ] }
49,105
In a Windows domain, the PDC isn't necessarily the domain time server. How could I identify the authoritative time server?
I'm assuming that you're looking for the server used by the W32Time service to perform time sync on domain-member computers.

In a stock Active Directory deployment the only computer configured with a time server explicitly will be the computer holding the PDC Emulator FSMO role in the forest root domain. All domain controllers in the forest root domain synchronize time with the PDC Emulator FSMO role-holder. All PDC Emulator FSMO role-holders in child domains synchronize their time with domain controllers in their parent domain (including, potentially, the PDC Emulator FSMO role-holder in the forest root domain). All domain member computers synchronize time with domain controller computers in their respective domains.

To determine if a domain member is configured for domain time sync, examine the REG_SZ value at HKLM\System\CurrentControlSet\Services\W32Time\Parameters\Type. If it is set to "Nt5DS" then the computer is synchronizing time with the Active Directory time hierarchy. If it's configured with the value "NTP" then the computer is synchronizing time with the NTP server specified in the NtpServer REG_SZ value in the same registry key.

The low-level details of the time synchronization protocol are available in this article: How Windows Time Service Works

Beware that not every domain controller (the KDCs, as James directs you in finding via DNS in his post) may be running a time service. In a stock AD deployment every domain controller will be, but some deployments may use virtualized domain controllers that have the W32Time service disabled (to facilitate hypervisor-based time synchronization) and, as such, you would probably do well to implement functionality as described by the "How Windows Time Service Works" article if you're developing a piece of software that needs to synchronize time in the same manner that a domain member computer would.
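As a quick illustration of checking those values from a command prompt (w32tm /query needs Windows Vista / Server 2008 or later; on older systems stick to the registry queries):

    reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type
    reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer
    w32tm /query /source
    w32tm /query /configuration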
{ "source": [ "https://serverfault.com/questions/49105", "https://serverfault.com", "https://serverfault.com/users/15341/" ] }
49,235
I'm trying to determine where connectivity to an external host using a specific TCP port is being blocked. Traceroute for Windows only uses ICMP, and telnet will only tell me that the port is blocked and not where. Does anyone know of a Windows utility similar to traceroute that will achieve this?
You can use nmap 5.0 with the --traceroute option. You will also get a portscan for free :). If you want to test a specific port, you can use the -p port option. (You should also use the -Pn option so that nmap doesn't try to do a regular ICMP probe first.) This is an example:

$ sudo nmap -Pn --traceroute -p 8000 destination.com

PORT     STATE SERVICE
8000/tcp open  http-alt

TRACEROUTE (using port 443/tcp)
HOP RTT  ADDRESS
1   0.30 origin.com (192.168.100.1)
2   0.26 10.3.0.4
3   0.42 10.1.1.253
4   1.00 gateway1.com (33.33.33.33)
5   2.18 gateway2.com (66.66.66.66)
6   ...
7   1.96 gateway3.com (99.99.99.99)
8   ...
9   8.28 destination.com (111.111.111.111)

If you're interested in a graphical tool, you can use zenmap, which also displays topology maps based on traceroute output.
{ "source": [ "https://serverfault.com/questions/49235", "https://serverfault.com", "https://serverfault.com/users/11198/" ] }
49,405
Is there a command line way to list all the users in a particular Active Directory group? I can see who is in the group by going to Manage Computer --> Local User / Groups --> Groups and double clicking the group. I just need a command line way to retrieve the data, so I can do some other automated tasks.
Here's another way from the command prompt, though I'm not sure how automatable it is, since you would have to parse the output.

If the group is a "global security group":

net group <your_groupname> /domain

If you are looking for a "domain local security group":

net localgroup <your_groupname> /domain
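If parsing net group output gets awkward, a couple of other options you could try: the dsquery/dsget pair (available on a DC or wherever the AD admin tools are installed), or the Get-ADGroupMember cmdlet if you have the ActiveDirectory PowerShell module (Server 2008 R2 / RSAT). The group name below is a placeholder:

    dsquery group -samid "YourGroupName" | dsget group -members -expand

    Get-ADGroupMember -Identity "YourGroupName" -Recursive | Select-Object SamAccountName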
{ "source": [ "https://serverfault.com/questions/49405", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
49,614
What's a good Windows command line option for deleting all files in a given folder older than (n) days? Also note there may be many thousands of these files, so forfiles with a shell to cmd is not a great idea here.. unless you like spawning thousands of command shells. I consider that a pretty nasty hack, so let's see if we can do better! Ideally, something built into (or easily installable into) Windows Server 2008.
I looked around a bit more and found a PowerShell way.

Delete all files more than 8 days old from the specified folder (with preview):

dir |? {$_.CreationTime -lt (get-date).AddDays(-8)} | del -whatif

(Remove the -whatif to make it happen.)
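A slightly fuller sketch of the same idea for anyone who wants to target a specific folder recursively; the path is hypothetical, and note the choice between CreationTime and LastWriteTime depending on what "older than" should mean. Keep -WhatIf for a dry run:

    Get-ChildItem -Path 'D:\Logs' -Recurse |
        Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -lt (Get-Date).AddDays(-8) } |
        Remove-Item -WhatIf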
{ "source": [ "https://serverfault.com/questions/49614", "https://serverfault.com", "https://serverfault.com/users/1/" ] }
49,639
I am really curious. Mac OS X as a server sounds like a very expensive solution that is not much better than the free one provided by free software. I understand paying extra for a nice UI and an Apple logo on your desktop computer or laptop (I did it). But an Apple logo gathering dust in a dark room somewhere with no monitor attached doesn't make much sense to me. But then either Apple is producing the server at a loss, or some people know something that I don't and choose Apple servers. If that's your case, why are you doing it? Enlighten me.
Edit, as this post still gathers the occasional vote: All of the points below are now irrelevant. With no real Mac server hardware, and the Server software being just a cheap add-on to the client OS X with dramatically limited usability and functionality, newer OS X Server versions (10.7+) can't reasonably be used beyond small workgroups, preferably in Mac-only shops.

I was about to write an endless essay about the pros and cons, but let's make it short instead.

MacOS Server offers major advantages if you use Mac clients in your network. It allows for extremely easy creation of features comparable to Win group policies for Mac clients, much easier than doing the same for Win clients on a Windows server. Naturally, it also has full support for all the small Mac client specifics like resource forks, Finder attributes and the like, which all have the potential to become a real PITA if you use a Win or Linux server instead. Telling your users you don't support these might be possible, but it also might break some applications.

In my experience, general administration is much easier than on any Linux system, and also than on Windows, at least for smallish groups. Scaling out is another thing, but this requires detailed knowledge on any platform. At least with simple requirements, the promise of not needing a pro admin is much more realistic for a Mac-only shop than for any other platform. Even if you plan to run a Win-clients-only or mixed Win/Mac environment with a Linux server and Samba in a 10 to 20 user environment without a pro admin, I would recommend using MacOS Server in many cases, as it shields all those implicit complexities behind a really easy-to-use GUI.

While this is not the subject of the question: even being more expensive than Windows clients for the initial purchase, Macs have a much lower TCO in many environments, if users would stop thinking in brands and reputations and instead start to learn what the real differences and pros/cons are, beyond the logo and the more or less fancy GUIs.

That said, MacOS Server has some drawbacks, of course. First, while certainly possible, it is not really cut out to scale to the enterprise, and doing so will require intimate knowledge of the system. Also, while Apple used many standard open source software packages to create the system, they often decided to do things a little bit differently than others, sometimes for no apparent reason. This might require working around some issues (not storing the password in the LDAP database being a prime example). If you know your way around Linux, have more Win/Linux than Mac clients, and can live with some restrictions on the Mac side, a Linux server might indeed be cheaper. Integrating MacOS Server into a larger environment can sometimes be quite difficult. Often, software/hardware packages are not certified for MacOS, leaving you without support if needed. I'm currently experiencing this while planning a SAN.

All in all, I can only recommend really learning what the different architectures offer and what your requirements are, and making a decision based on that. A boss who just wants to add a few Macs to the network for no other reason than to be hip and have a Mac, without thinking about the consequences, is the same kind of idiot as the admin who shuns everything Apple because "Apple is for fanboys only", without knowing anything about the platform.
{ "source": [ "https://serverfault.com/questions/49639", "https://serverfault.com", "https://serverfault.com/users/2563/" ] }
49,765
This is a Canonical Question about IPv4 Subnets.

Related:
How does IPv6 subnetting work and how does it differ from IPv4 subnetting?

How does Subnetting Work, and How do you do it by hand or in your head? Can someone explain both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself.

If I have a network, how do I figure out how to split it up?
If I am given a netmask, how do I know what the network range is for it?
Sometimes there is a slash followed by a number, what is that number?
Sometimes there is a subnet mask, but also a wildcard mask; they seem like the same thing but they are different?
Someone mentioned something about knowing binary for this?
IP subnets exist to allow routers to choose appropriate destinations for packets. You can use IP subnets to break up larger networks for logical reasons (firewalling, etc), or physical need (smaller broadcast domains, etc). Simply put, though, IP routers use your IP subnets to make routing decisions. Understand how those decisions work, and you can understand how to plan IP subnets.

Counting to 1

If you are already fluent in binary (base 2) notation you can skip this section. For those of you who are left: Shame on you for not being fluent in binary notation! Yes, that may be a bit harsh. It's really, really easy to learn to count in binary, and to learn shortcuts to convert binary to decimal and back. You really should know how to do it.

Counting in binary is so simple because you only have to know how to count to 1!

Think of a car's "odometer", except that unlike a traditional odometer each digit can only count up to 1 from 0. When the car is fresh from the factory the odometer reads "00000000".

When you've driven your first mile the odometer reads "00000001". So far, so good.

When you've driven your second mile the first digit of the odometer rolls back over to "0" (since its maximum value is "1") and the second digit of the odometer rolls over to "1", making the odometer read "00000010". This looks like the number 10 in decimal notation, but it's actually 2 (the number of miles you've driven the car so far) in binary notation.

When you've driven the third mile the odometer reads "00000011", since the first digit of the odometer turns again. The number "11", in binary notation, is the same as the decimal number 3.

Finally, when you've driven your fourth mile both digits (which were reading "1" at the end of the third mile) roll back over to zero position, and the 3rd digit rolls up to the "1" position, giving us "00000100". That's the binary representation of the decimal number 4.

You can memorize all of that if you want, but you really only need to understand how the little odometer "rolls over" as the number it's counting gets bigger. It's exactly the same as a traditional decimal odometer's operation, except that each digit can only be "0" or "1" on our fictional "binary odometer".

To convert a decimal number to binary you could roll the odometer forward, tick by tick, counting aloud until you've rolled it a number of times equal to the decimal number you want to convert to binary. Whatever is displayed on the odometer after all that counting and rolling would be the binary representation of the decimal number you counted up to.

Since you understand how the odometer rolls forward you'll also understand how it rolls backward, too. To convert a binary number displayed on the odometer back to decimal you could roll the odometer back one tick at a time, counting aloud until the odometer reads "00000000". When all that counting and rolling is done, the last number you say aloud would be the decimal representation of the binary number the odometer started with.

Converting values between binary and decimal this way would be very tedious. You could do it, but it wouldn't be very efficient. It's easier to learn a little algorithm to do it faster.

A quick aside: Each digit in a binary number is known as a "bit". That's "b" from "binary" and "it" from "digit". A bit is a binary digit.

Converting a binary number like, say, "1101011" to decimal is a simple process with a handy little algorithm.

Start by counting the number of bits in the binary number. In this case, there are 7. Make 7 divisions on a sheet of paper (in your mind, in a text file, etc) and begin filling them in from right to left. In the rightmost slot, enter the number "1", because we'll always start with "1". In the next slot to the left enter double the value in the slot to the right (so, "2" in the next one, "4" in the next one) and continue until all the slots are full. (You'll end up memorizing these numbers, which are the powers of 2, as you do this more and more. I'm alright up to 131,072 in my head but I usually need a calculator or paper after that.)

So, you should have the following on your paper in your little slots.

    64 | 32 | 16 |  8 |  4 |  2 |  1 |

Transcribe the bits from the binary number below the slots, like so:

    64 | 32 | 16 |  8 |  4 |  2 |  1 |
     1    1    0    1    0    1    1

Now, add some symbols and compute the answer to the problem:

    64 | 32 | 16 |  8 |  4 |  2 |  1 |
   x 1  x 1  x 0  x 1  x 0  x 1  x 1
   ---  ---  ---  ---  ---  ---  ---
       +    +    +    +    +    +    =

Doing all the math, you should come up with:

    64 | 32 | 16 |  8 |  4 |  2 |  1 |
   x 1  x 1  x 0  x 1  x 0  x 1  x 1
   ---  ---  ---  ---  ---  ---  ---
    64 + 32 +  0 +  8 +  0 +  2 +  1 = 107

That's got it. "1101011" in decimal is 107. It's just simple steps and easy math.

Converting decimal to binary is just as easy and is the same basic algorithm, run in reverse.

Say that we want to convert the number 218 to binary. Starting on the right of a sheet of paper, write the number "1". To the left, double that value (so, "2") and continue moving toward the left of the paper doubling the last value. If the number you are about to write is greater than the number being converted, stop writing. Otherwise, continue doubling the prior number and writing. (Converting a big number, like 34,157,216,092, to binary using this algorithm can be a bit tedious but it's certainly possible.)

So, you should have on your paper:

    128 |  64 |  32 |  16 |   8 |   4 |   2 |   1 |

You stopped writing numbers at 128 because doubling 128, which would give you 256, would be larger than the number being converted (218).

Beginning from the leftmost number, write "218" above it (128) and ask yourself: "Is 218 larger than or equal to 128?" If the answer is yes, scratch a "1" below "128". Above "64", write the result of 218 minus 128 (90).

Looking at "64", ask yourself: "Is 90 larger than or equal to 64?" It is, so you'd write a "1" below "64", then subtract 64 from 90 and write that above "32" (26).

When you get to "32", though, you find that 32 is not greater than or equal to 26. In this case, write a "0" below "32", copy the number (26) from above "32" to above "16" and then continue asking yourself the same question with the rest of the numbers.

When you're all done, you should have:

    218    90    26    26    10     2     2     0
    128 |  64 |  32 |  16 |   8 |   4 |   2 |   1 |
      1     1     0     1     1     0     1     0

The numbers at the top are just notes used in computation and don't mean much to us. At the bottom, though, you see a binary number "11011010". Sure enough, 218, converted to binary, is "11011010".

Following these very simple procedures you can convert binary to decimal and back again w/o a calculator. The math is all very simple and the rules can be memorized with just a bit of practice.

Splitting Up Addresses

Think of IP routing like pizza delivery.

When you're asked to deliver a pizza to "123 Main Street" it's very clear to you, as a human, that you want to go to the building numbered "123" on the street named "Main Street".
It's easy to know that you need to go to the 100-block of Main Street because the building number is between 100 and 199 and most city blocks are numbered in hundreds. You "just know" how to split the address up.

Routers deliver packets, not pizza. Their job is the same as a pizza driver: To get the cargo (packets) as close to the destination as possible. A router is connected to two or more IP subnets (to be at all useful). A router must examine destination IP addresses of packets and break those destination addresses up into their "street name" and "building number" components, just like the pizza driver, to make decisions about delivery.

Each computer (or "host") on an IP network is configured with a unique IP address and subnet mask. That IP address can be divided up into a "building number" component (like "123" in the example above) called the "host ID" and a "street name" component (like "Main Street" in the example above) called the "network ID". For our human eyes, it's easy to see where the building number and the street name are in "123 Main Street", but harder to see that division in "10.13.216.41 with a subnet mask of 255.255.192.0".

IP routers "just know" how to split up IP addresses into these component parts to make routing decisions. Since understanding how IP packets are routed hinges on understanding this process, we need to know how to break up IP addresses, too. Fortunately, extracting the host ID and the network ID out of an IP address and subnet mask is actually pretty easy.

Start by writing out the IP address in binary (use a calculator if you haven't learned to do this in your head just yet, but make a note to learn how to do it -- it's really, really easy and impresses the opposite sex at parties):

          10.      13.     216.      41
    00001010.00001101.11011000.00101001

Write out the subnet mask in binary, too:

         255.     255.     192.       0
    11111111.11111111.11000000.00000000

Written side-by-side, you can see that the point in the subnet mask where the "1s" stop "lines up" to a point in the IP address. That's the point that the network ID and the host ID split. So, in this case:

          10.      13.     216.      41
    00001010.00001101.11011000.00101001 - IP address
    11111111.11111111.11000000.00000000 - subnet mask
    00001010.00001101.11000000.00000000 - Portion of IP address covered by 1s in subnet mask, remaining bits set to 0
    00000000.00000000.00011000.00101001 - Portion of IP address covered by 0s in subnet mask, remaining bits set to 0

Routers use the subnet mask to "mask out" the bits covered by 1s in the IP address (replacing the bits that are not "masked out" with 0s) to extract the network ID:

          10.      13.     192.       0
    00001010.00001101.11000000.00000000 - Network ID

Likewise, by using the subnet mask to "mask out" the bits covered by 0s in the IP address (replacing the bits that are not "masked out" with 0s again) a router can extract the host ID:

           0.       0.      24.      41
    00000000.00000000.00011000.00101001 - Host ID (the portion of the IP address covered by 0s in the subnet mask)

It's not as easy for our human eyes to see the "break" between the network ID and the host ID as it is between the "building number" and the "street name" in physical addresses during pizza delivery, but the ultimate effect is the same.

Now that you can split up IP addresses and subnet masks into host IDs and network IDs you can route IP just like a router does.

More Terminology

You're going to see subnet masks written all over the Internet and throughout the rest of this answer as (IP/number). This notation is known as "Classless Inter-Domain Routing" (CIDR) notation. "255.255.255.0" is made up of 24 bits of 1s at the beginning, and it's faster to write that as "/24" than as "255.255.255.0". To convert a CIDR number (like "/16") to a dotted-decimal subnet mask just write out that number of 1s, split it into groups of 8 bits, and convert it to decimal. (A "/16" is "255.255.0.0", for instance.)

Back in the "old days", subnet masks weren't specified, but rather were derived by looking at certain bits of the IP address. An IP address starting with 0 - 127, for example, had an implied subnet mask of 255.0.0.0 (called a "class A" IP address). These implied subnet masks aren't used today and I don't recommend learning about them anymore unless you have the misfortune of dealing with very old equipment or old protocols (like RIPv1) that don't support classless IP addressing. I'm not going to mention these "classes" of addresses further because it's inapplicable today and can be confusing.

Some devices use a notation called "wildcard masks". A "wildcard mask" is nothing more than a subnet mask with all 0s where there would be 1s, and 1s where there would be 0s. The "wildcard mask" of a /26 is:

    11111111.11111111.11111111.11000000 - /26 subnet mask
    00000000.00000000.00000000.00111111 - /26 "wildcard mask"

Typically you see "wildcard masks" used to match host IDs in access-control lists or firewall rules. We won't discuss them any further here.

How a Router Works

As I've said before, IP routers have a similar job to a pizza delivery driver in that they need to get their cargo (packets) to its destination. When presented with a packet bound for address 192.168.10.2, an IP router needs to determine which of its network interfaces will best get that packet closer to its destination.

Let's say that you are an IP router, and you have interfaces connected to you numbered:

    Ethernet0 - 192.168.20.1, subnet mask /24
    Ethernet1 - 192.168.10.1, subnet mask /24

If you receive a packet to deliver with a destination address of "192.168.10.2", it's pretty easy to tell (with your human eyes) that the packet should be sent out the interface Ethernet1, because the Ethernet1 interface address corresponds to the packet's destination address. All the computers attached to the Ethernet1 interface will have IP addresses starting with "192.168.10.", because the network ID of the IP address assigned to your interface Ethernet1 is "192.168.10.0".

For a router, this route selection process is done by building a routing table and consulting the table each time a packet is to be delivered. A routing table contains network ID and destination interface names. You already know how to obtain a network ID from an IP address and subnet mask, so you're on your way to building a routing table. Here's our routing table for this router:

    Network ID: 192.168.20.0 (11000000.10101000.00010100.00000000) - 24 bit subnet mask - Interface Ethernet0
    Network ID: 192.168.10.0 (11000000.10101000.00001010.00000000) - 24 bit subnet mask - Interface Ethernet1

For our incoming packet bound for "192.168.10.2", we need only convert that packet's address to binary (as humans -- the router gets it as binary off the wire to begin with) and attempt to match it to each address in our routing table (up to the number of bits in the subnet mask) until we match an entry.
Incoming packet destination: 11000000.10101000.00001010.00000010

Comparing that to the entries in our routing table:

    11000000.10101000.00001010.00000010 - Destination address for packet
    11000000.10101000.00010100.00000000 - Interface Ethernet0
    !!!!!!!!.!!!!!!!!.!!!????!.xxxxxxxx - ! indicates matched digits, ? indicates no match, x indicates not checked (beyond subnet mask)

    11000000.10101000.00001010.00000010 - Destination address for packet
    11000000.10101000.00001010.00000000 - Interface Ethernet1, 24 bit subnet mask
    !!!!!!!!.!!!!!!!!.!!!!!!!!.xxxxxxxx - ! indicates matched digits, ? indicates no match, x indicates not checked (beyond subnet mask)

The entry for Ethernet0 matches the first 19 bits fine, but then stops matching. That means it's not the proper destination interface. You can see that the interface Ethernet1 matches 24 bits of the destination address. Ah, ha! The packet is bound for interface Ethernet1.

In a real-life router, the routing table is sorted in such a manner that the longest subnet masks are checked for matches first (i.e. the most specific routes), and numerically so that as soon as a match is found the packet can be routed and no further matching attempts are necessary (meaning that 192.168.10.0 would be listed first and 192.168.20.0 would never have been checked). Here, we're simplifying that a bit. Fancy data structures and algorithms make faster IP routers, but simple algorithms will produce the same results.

Static Routes

Up to this point, we've talked about our hypothetical router as having networks directly connected to it. That's not, obviously, how the world really works. In the pizza-driving analogy, sometimes the driver isn't allowed any further into the building than the front desk, and has to hand off the pizza to somebody else for delivery to the final recipient (suspend your disbelief and bear with me while I stretch my analogy, please).

Let's start by calling our router from the earlier examples "Router A". You already know Router A's routing table as:

    Network ID: 192.168.20.0 (11000000.10101000.00010100.00000000) - subnet mask /24 - Interface RouterA-Ethernet0
    Network ID: 192.168.10.0 (11000000.10101000.00001010.00000000) - subnet mask /24 - Interface RouterA-Ethernet1

Suppose that there's another router, "Router B", with the IP addresses 192.168.10.254/24 and 192.168.30.1/24 assigned to its Ethernet0 and Ethernet1 interfaces. It has the following routing table:

    Network ID: 192.168.10.0 (11000000.10101000.00001010.00000000) - subnet mask /24 - Interface RouterB-Ethernet0
    Network ID: 192.168.30.0 (11000000.10101000.00011110.00000000) - subnet mask /24 - Interface RouterB-Ethernet1

In pretty ASCII art, the network looks like this:

               Interface                            Interface
               Ethernet1                            Ethernet1
               192.168.10.1/24                      192.168.30.254/24
        __________ V                         __________ V
       |          | V                       |          | V
   ----| ROUTER A |--------- /// -----------| ROUTER B |----
   ^   |__________|                     ^   |__________|
   ^                                    ^
   Interface                            Interface
   Ethernet0                            Ethernet0
   192.168.20.1/24                      192.168.10.254/24

You can see that Router B knows how to "get to" a network, 192.168.30.0/24, that Router A knows nothing about.

Suppose that a PC with the IP address 192.168.20.13 attached to the network connected to Router A's Ethernet0 interface sends a packet to Router A for delivery. Our hypothetical packet is destined for the IP address 192.168.30.46, which is a device attached to the network connected to the Ethernet1 interface of Router B.

With the routing table shown above, neither entry in Router A's routing table matches the destination 192.168.30.46, so Router A will return the packet to the sending PC with the message "Destination network unreachable".

To make Router A "aware" of the existence of the 192.168.30.0/24 network, we add the following entry to the routing table on Router A:

    Network ID: 192.168.30.0 (11000000.10101000.00011110.00000000) - subnet mask /24 - Accessible via 192.168.10.254

In this way, Router A has a routing table entry that matches the 192.168.30.46 destination of our example packet. This routing table entry effectively says "If you get a packet bound for 192.168.30.0/24, send it on to 192.168.10.254 because he knows how to deal with it." This is the analogous "hand off the pizza at the front desk" action that I mentioned earlier -- passing the packet on to somebody else who knows how to get it closer to its destination.

Adding an entry to a routing table "by hand" is known as adding a "static route".

If Router B wants to deliver packets to the 192.168.20.0 subnet mask 255.255.255.0 network, it will need an entry in its routing table, too:

    Network ID: 192.168.20.0 (11000000.10101000.00010100.00000000) - subnet mask /24 - Accessible via: 192.168.10.1 (Router A's IP address in the 192.168.10.0 network)

This would create a path for delivery between the 192.168.30.0/24 network and the 192.168.20.0/24 network across the 192.168.10.0/24 network between these routers.

You always want to be sure that routers on both sides of such an "interstitial network" have a routing table entry for the "far end" network. If Router B in our example didn't have a routing table entry for the "far end" network 192.168.20.0/24 attached to Router A, our hypothetical packet from the PC at 192.168.20.13 would get to the destination device at 192.168.30.46, but any reply that 192.168.30.46 tried to send back would be returned by Router B as "Destination network unreachable." One-way communication is generally not desirable. Always be sure you think about traffic flowing in both directions when you think about communication in computer networks.

You can get a lot of mileage out of static routes. Dynamic routing protocols like EIGRP, RIP, etc, are really nothing more than a way for routers to exchange routing information between each other that could, in fact, be configured with static routes. One large advantage to using dynamic routing protocols over static routes, though, is that dynamic routing protocols can dynamically change the routing table based on network conditions (bandwidth utilization, an interface "going down", etc) and, as such, using a dynamic routing protocol can result in a configuration that "routes around" failures or bottlenecks in the network infrastructure. (Dynamic routing protocols are WAY outside the scope of this answer, though.)

You Can't Get There From Here

In the case of our example Router A, what happens when a packet bound for "172.16.31.92" comes in?

Looking at the Router A routing table, neither destination interface nor static route matches the first 24 bits of 172.16.31.92 (which is 10101100.00010000.00011111.01011100, by the way). As we already know, Router A would return the packet to the sender via a "Destination network unreachable" message.

Say that there's another router (Router C) sitting at the address "192.168.20.254". Router C has a connection to the Internet!
                   Interface                      Interface                      Interface
                   Ethernet1                      Ethernet1                      Ethernet1
                   192.168.20.254/24              192.168.10.1/24                192.168.30.254/24
                __________ V                   __________ V                   __________ V
 (( heap o   )) |          | V                |          | V                 |          | V
 (( internet ))-| ROUTER C |------- /// ------| ROUTER A |------- /// -------| ROUTER B |----
 (( w00t!    )) |__________|              ^   |__________|               ^   |__________|
             ^                            ^                              ^
             Interface                    Interface                      Interface
             Ethernet0                    Ethernet0                      Ethernet0
             10.35.1.1/30                 192.168.20.1/24                192.168.10.254/24

It would be nice if Router A could route packets that do not match any local interface up to Router C such that Router C can send them on to the Internet. Enter the "default gateway" route.

Add an entry at the end of our routing table like this:

    Network ID: 0.0.0.0 (00000000.00000000.00000000.00000000) - subnet mask /0 - Destination router: 192.168.20.254

When we attempt to match "172.16.31.92" to each entry in the routing table we end up hitting this new entry. It's a bit perplexing, at first. We're looking to match zero bits of the destination address with... wait... what? Matching zero bits? So, we're not looking for a match at all. This routing table entry is saying, basically, "If you get here, rather than giving up on delivery, send the packet on to the router at 192.168.20.254 and let him handle it".

192.168.20.254 is a destination we DO know how to deliver a packet to. When confronted with a packet bound for a destination for which we have no specific routing table entry this "default gateway" entry will always match (since it matches zero bits of the destination address) and gives us a "last resort" place that we can send packets for delivery. You'll sometimes hear the default gateway called the "gateway of last resort."

In order for a default gateway route to be effective it must refer to a router that is reachable using the other entries in the routing table. If you tried to specify a default gateway of 192.168.50.254 in Router A, for example, delivery to such a default gateway would fail. 192.168.50.254 isn't an address that Router A knows how to deliver packets to using any of the other routes in its routing table, so such an address would be ineffective as a default gateway. This can be stated concisely: The default gateway must be set to an address already reachable by using another route in the routing table.

Real routers typically store the default gateway as the last route in their routing table such that it matches packets after they've failed to match all other entries in the table.

Urban Planning and IP Routing

Breaking up an IP subnet into smaller IP subnets is like urban planning. In urban planning, zoning is used to adapt to natural features of the landscape (rivers, lakes, etc), to influence traffic flows between different parts of the city, and to segregate different types of land-use (industrial, residential, etc). IP subnetting is really much the same.

There are three main reasons why you would subnet a network:

You may want to communicate across different unlike communication media. If you have a T1 WAN connection between two buildings, IP routers could be placed on the ends of these connections to facilitate communication across the T1. The networks on each end (and possibly the "interstitial" network on the T1 itself) would be assigned to unique IP subnets so that the routers can make decisions about which traffic should be sent across the T1 line.

In an Ethernet network, you might use subnetting to limit the amount of broadcast traffic in a given portion of the network. Application-layer protocols use the broadcast capability of Ethernet for very useful purposes. As you get more and more hosts packed into the same Ethernet network, though, the percentage of broadcast traffic on the wire (or air, in wireless Ethernet) can increase to such a point as to create problems for delivery of non-broadcast traffic. (In the olden days, broadcast traffic could overwhelm the CPU of hosts by forcing them to examine each broadcast packet. That's less likely today.) Excessive traffic on switched Ethernet can also come in the form of "flooding of frames to unknown destinations". This condition is caused by an Ethernet switch being unable to keep track of every destination on the network and is the reason why switched Ethernet networks can't scale to an infinite number of hosts. The effect of flooding of frames to unknown destinations is similar to the effect of excess broadcast traffic, for the purposes of subnetting.

You may want to "police" the types of traffic flowing between different groups of hosts. Perhaps you have print server devices and you only want authorized print queuing server computers to send jobs to them. By limiting the traffic allowed to flow to the print server device subnet, users can't configure their PCs to talk directly to the print server devices to bypass print accounting. You might put the print server devices into a subnet all to themselves and create a rule in the router or firewall attached to that subnet to control the list of hosts permitted to send traffic to the print server devices. (Both routers and firewalls can typically make decisions about how or whether to deliver a packet based on the source and destination addresses of the packet. Firewalls are typically a sub-species of router with an obsessive personality. They can be very, very concerned about the payload of packets, whereas routers typically disregard payloads and just deliver the packets.)

In planning a city, you can plan how streets intersect with each other, and can use turn-only, one-way, and dead-end streets to influence traffic flows. You might want Main Street to be 30 blocks long, with each block having up to 99 buildings. It's pretty easy to plan your street numbering such that each block in Main Street has a range of street numbers increasing by 100 for each block. It's very easy to know what the "starting number" in each subsequent block should be.

In planning IP subnets, you're concerned with building the right number of subnets (streets) with the right number of available host IDs (building numbers), and using routers to connect the subnets to each other (intersections). Rules about allowed source and destination addresses specified in the routers can further control the flow of traffic. Firewalls can act like obsessive traffic cops.

For the purposes of this answer, building our subnets is our only major concern. Instead of working in decimal, as you would with urban planning, you work in binary to describe the bounds of each subnet.

Continued on: How does IPv4 Subnetting Work? (Yes ... we reached the maximum size of an answer (30,000 characters).)
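If you'd rather let a few lines of code do the masking described above, here is a minimal Python 3 sketch (the ipaddress module ships with Python 3.3+); it just repeats the 10.13.216.41 / 255.255.192.0 example with a bitwise AND:

    import ipaddress

    ip   = int(ipaddress.IPv4Address("10.13.216.41"))
    mask = int(ipaddress.IPv4Address("255.255.192.0"))

    network_id = ipaddress.IPv4Address(ip & mask)                # 10.13.192.0
    host_id    = ipaddress.IPv4Address(ip & ~mask & 0xFFFFFFFF)  # 0.0.24.41

    print(network_id, host_id)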
{ "source": [ "https://serverfault.com/questions/49765", "https://serverfault.com", "https://serverfault.com/users/2561/" ] }
49,998
Our company is moving to new offices in a couple of months, and I am responsible for looking after the move of the development servers in the company. Most of the dev equipment is in five 42U cabinets, plus a rack for switching/routing equipment. How do most people do this sort of thing? Move the cabinets whole, or extract the individual components and move the racks empty? Any advice on prep and shutdown before the move would be welcome.
We have done a couple of moves with racks. Here's what I took away from the experience:

Before:

- label everything. Both ends. And the rack slot it's plugged into too.
- diagram everything.
- if you have important data, schedule a full backup to finish 6-12 hours before the move. At a minimum, validate the last full backup you have.
- plan to take everything out of the rack during the move. At a minimum take out half the mass, from the top down -- this means leave the top empty.
- Don't expect movers to move a top-heavy rack. In a rack with a UPS at the base that means you are taking out probably 2/3 of the servers.
- Anything which is left racked MUST have rails and be secure in the rails (less than 0.5mm movement, and tolerant of any orientation -- including upside down and face down -- was our margin). We've moved full racks, and half-full racks, and the racks always need to be tilted/rotated, and it is downright scary to watch your livelihood being dangled over a concrete loading dock.
- in our experience, front and rear doors are more decorative than load-bearing, and leaving them on can make moving the racks very awkward because all the good hand-holding points are unavailable. Plan to remove the doors during the actual move. This forces the movers to lift, pull, push, and tug the rack by its strongest parts. Usually side panels can stay on, but be prepared to remove them if the movers think having them gone would help.
- hire someone with an insured truck to do the transportation. DO NOT do it yourself. You might think you are insured, but your insurance company will probably think otherwise. The second last thing you want is to be involved in a three-way blamestorm between your employer's insurance company and your insurance company.
- hire someone insured to physically pick up, move, put down, and (during an accident) drop your gear for you. DO NOT do it yourself. The last thing you want to do is hurt yourself or be involved in a co-worker's injury.
- make sure your insured someones specialize in high-tech moves and can supply blankets, anti-static bubble-wrap, and if necessary crates.

During the move:

- backups are good, right?
- do the wire disconnect yourself. Pull ALL wires that are disconnectable. Each wire should be labelled and go into a box that itself is well labelled.
- let the strong, well-insured guys do the server extractions from the rack. Supervise the wrapping personally.
- stay out of the strong, well-insured guys' way when they do the loading and unloading.
- supervise the unwrapping and re-insertion into the racks. If there is ANY question about a server's condition, mention it, and note that it was mentioned.
- If the job is taking a while, have coffee and/or doughnuts available for the guys. If appropriate (and if they've done a good job), cold beer for after the job goes well too.

Physical bring-up:

- double your time estimate for getting things back on track.
- have a build-up plan. A good build-up plan includes a checklist of all services, servers, and a test plan.
- validate, validate, validate. Check any servers which may have been damaged in transit as noted above.
- don't let management cheap out when it comes to food if you are working through meal times. Hungry techs make mistakes. If the bring-up is going to be long and complicated, schedule time to get out of the room for 90 minutes to go get a real meal.

After:

- a cold beer for you may be appropriate, too.
{ "source": [ "https://serverfault.com/questions/49998", "https://serverfault.com", "https://serverfault.com/users/7261/" ] }
50,085
I am attempting to change directories to a file server such as:

cd \\someServer\someStuff\

However, I get the following error:

CMD does not support UNC paths as current directories

What are my options to navigate to that directory?
If you're considering scripting it, it's always helpful to learn about the pushd and popd commands. Sometimes you can't be sure what drives letters are already used on the machine that the script will run on and you simply need to take the next available drive letter. Since net use will require you to specify the drive, you can simply use pushd \\server\folder and then popd when you're finished.
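A quick illustration at the prompt (the drive letter you get back is just whatever is free, usually starting from Z: and working down):

    C:\> pushd \\someServer\someStuff
    Z:\> rem ... work here; pushd temporarily mapped the share to a drive letter ...
    Z:\> popd
    C:\>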
{ "source": [ "https://serverfault.com/questions/50085", "https://serverfault.com", "https://serverfault.com/users/866/" ] }
50,128
I'm looking for a good explanation of the IUSR and IWAM accounts used by IIS to help me better configure our hosting environment:

Why are they there?
What's the difference between them?
Do the names stand for something meaningful?
Are there any best practice changes I should make?

IIS also gives me options to run application pools as Network Service, Local Service or Local System. Should I? My web server is part of a domain, how does this change things?

It seems to be common to create your own versions of these accounts when deploying multiple sites to a server, which raises some extra questions:

When might I want to create my own IUSR and IWAM accounts?
How should I create these additional accounts so they have the correct permissions?

I am using both IIS 6 and IIS 7 with mostly default configurations.
IUSR and IWAM date back to the very early days of IIS when you installed it separately (not as an OS component). By default, if a web site permits anonymous authentication, the IUSR account is used with respect to permissions on the OS. This can be changed from the default. There are some security recommendations to at least rename the account, so it's not a "known" account, much like there is a recommendation to rename the administrator account on a server. You can learn more about IUSR and authentication at MSDN.

IWAM was designed for any out-of-process applications and is only used in IIS 6.0 when you're in IIS 5.0 isolation mode. You usually saw it with COM/DCOM objects.

With respect to application pool identities, the default is to run as the Network Service. You should not run as Local System because that account has rights greater than that of an administrator. So that basically leaves you with Network Service, Local Service, or a local/domain account other than those two.

As to what to do, it depends. One advantage of leaving it as Network Service is that this is a limited-privilege account on the server. However, when it accesses resources across the network, it appears as Domain\ComputerName$, meaning you can assign permissions that permit the Network Service account to access resources such as SQL Server running on a different box. Also, since it appears as the computer account, if you enable Kerberos authentication, the SPN is already in place if you're accessing the website by the server name.

A case where you'd consider changing the application pool to a particular Windows domain account is if you want a particular account accessing networked resources, such as a service account accessing SQL Server for a web-based application. There are other options within ASP.NET for doing this without changing the application pool identity, so this isn't strictly necessary any longer.

Another reason you'd consider using a domain user account is if you were doing Kerberos authentication and you had multiple web servers servicing a web application. A good example is if you had two or more web servers serving up SQL Server Reporting Services. The front end would probably go to a generic URL such as reports.mydomain.com or reporting.mydomain.com. In that case, the SPN can only be applied to one account within AD. If you have the app pools running under Network Service on each server, that won't work, because when they leave the servers, they appear as Domain\ComputerName$, meaning you'd have as many accounts as you had servers serving up the app. The solution is to create a domain account, set the app pool identity on all servers to the same domain user account, and create the one SPN, thereby permitting Kerberos authentication. In the case of an app like SSRS, where you may want to pass the user credentials through to the back-end database server, Kerberos authentication is a must because then you're going to have to configure Kerberos delegation.

I know that's a lot to take in, but the short answer is: except for Local System, it depends.
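For the multi-server SSRS scenario above, the SPN registration would look roughly like this; the service account name is hypothetical and the host name is the generic URL from the example (run it as a domain admin, and point every server's app pool at that same account):

    setspn -A HTTP/reports.mydomain.com MYDOMAIN\svcReportsAppPool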
{ "source": [ "https://serverfault.com/questions/50128", "https://serverfault.com", "https://serverfault.com/users/1964/" ] }
50,181
Is there a way to do this?
In your logrotate.conf (or the equivalent logrotate.d file), change the line that says "rotate 10" (your number may be different) to a bigger number. That tells logrotate how many rotated copies to keep; if you rotate daily, that's how many days of logs you'll have. You can make it 36500, which would last you 100 years of daily rotation.
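For a log that logrotate doesn't already know about, a hedged example of a complete stanza (the path is made up; a file like this would typically go in /etc/logrotate.d/):

    /var/log/myapp/*.log {
        daily
        rotate 36500
        compress
        missingok
        notifempty
    }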
{ "source": [ "https://serverfault.com/questions/50181", "https://serverfault.com", "https://serverfault.com/users/15619/" ] }
50,577
I know that explicit "negotiated" FTPS is preferred, because it still uses the standard port 21 with that method. But in regards to "implicit" non-negotiated FTPS using a standard port of 990 vs. port 22 (which I have seen some people describe), why is there this difference in a "standard" for the non-negotiated port number? Note: I also noticed that a FileZilla server won't work properly (when connecting from a FileZilla client) if I configure it to use anything other than the default of port 990.
SFTP (SSH File Transfer Protocol) is not the same as FTPS (FTP-SSL). SFTP is intimately related to SSH , and has no relation, except in purpose and name, with FTP. Contrast with FTPS, which is simply the FTP protocol with SSL. The main difference is that SFTP only uses one stream, whereas FTPS, like FTP, uses at least two: a control stream, where the commands are issued, and another one for each data transfer.
{ "source": [ "https://serverfault.com/questions/50577", "https://serverfault.com", "https://serverfault.com/users/9227/" ] }
50,585
What's the best way to check if a volume is mounted in a Bash script? What I'd really like is a method that I can use like this:

if <something is mounted at /mnt/foo>
then
    <Do some stuff>
else
    <Do some different stuff>
fi
Avoid using /etc/mtab because it may be inconsistent. Avoid piping mount because it needn't be that complicated. Simply:

if grep -qs '/mnt/foo ' /proc/mounts; then
    echo "It's mounted."
else
    echo "It's not mounted."
fi

(The space after /mnt/foo is to avoid matching e.g. /mnt/foo-bar.)
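An alternative sketch, assuming the mountpoint utility (part of util-linux on most modern distributions, sysvinit-tools on some older ones) is available:

    if mountpoint -q /mnt/foo; then
        echo "It's mounted."
    else
        echo "It's not mounted."
    fi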
{ "source": [ "https://serverfault.com/questions/50585", "https://serverfault.com", "https://serverfault.com/users/466/" ] }
50,628
Monit runs as root, but I don't want to start my processes as root (e.g. MySQL, Mongrel, Apache).
check process tomcat with pidfile /var/run/tomcat.pid
    start program = "/etc/init.d/tomcat start"
        as uid nobody and gid nobody
    stop program = "/etc/init.d/tomcat stop"
        # You can also use id numbers instead and write:
        as uid 99 and with gid 99
    if failed port 8080 then restart

(source)
{ "source": [ "https://serverfault.com/questions/50628", "https://serverfault.com", "https://serverfault.com/users/15619/" ] }
50,775
I have an existing public/private key pair. The private key is password protected, and the encryption may be either RSA or DSA. These keys are the kind you generate with ssh-keygen and generally store under ~/.ssh . I'd like to change the private key's password. How do I go about it, on a standard Unix shell? Also, how do I simply remove the password? Just change it to empty?
To change the passphrase on your default key:

$ ssh-keygen -p

If you need to specify a key, pass the -f option:

$ ssh-keygen -p -f ~/.ssh/id_dsa

then provide your old and new passphrase (twice) at the prompts. (Use ~/.ssh/id_rsa if you have an RSA key.)

More details from man ssh-keygen:

[...]
SYNOPSIS
    ssh-keygen [-q] [-b bits] -t type [-N new_passphrase] [-C comment] [-f output_keyfile]
    ssh-keygen -p [-P old_passphrase] [-N new_passphrase] [-f keyfile]
[...]
    -f filename
            Specifies the filename of the key file.
[...]
    -N new_passphrase
            Provides the new passphrase.
    -P passphrase
            Provides the (old) passphrase.
    -p      Requests changing the passphrase of a private key file instead of creating a new private key. The program will prompt for the file containing the private key, for the old passphrase, and twice for the new passphrase.
[...]
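To answer the second part of the question (removing the passphrase entirely), you can pass an empty new passphrase; a sketch, assuming the key is ~/.ssh/id_rsa:

    $ ssh-keygen -p -f ~/.ssh/id_rsa -N ""

    # or run "ssh-keygen -p" interactively and just press Enter twice
    # at the "new passphrase" prompts.

Bear in mind that an unprotected private key is usable by anyone who gets hold of the file.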
{ "source": [ "https://serverfault.com/questions/50775", "https://serverfault.com", "https://serverfault.com/users/149/" ] }
50,789
We've recently migrated our Windows network to use DFS for shared files. DFS is working well, except for one annoying problem: users experience a significant delay when they try to access a DFS namespace that they have not accessed for some time. I have tried to troubleshoot the issue but have not had any success so far, and I was hoping someone here may have some pointers to help resolve the problem.

Firstly, some background on our network:

- The network uses a Windows 2008 functional level Active Directory domain with two Windows 2008 DCs and two DNS servers (one on each of the DCs). The network is DNS only - no WINS.
- All computers are located at the same site and connected by Gigabit Ethernet.
- We have approximately 20 domain-based DFS namespaces in Windows 2008 mode, and each DFS namespace has two Windows 2008 DFS namespace servers (the same two servers for all namespaces).
- All namespace servers are in FQDN mode and all folder targets are specified using their FQDN.
- All computers are up-to-date with Service Packs and patches.
- The actual folder targets (i.e. the SMB shares our DFS folders point to) are scattered across several file and application servers, all running Windows 2008 bar two application servers which run Windows 2003 R2, with no replication set up at all (e.g. all DFS folders currently only have one folder target).

Some more detail on the problem:

The namespace access delay is generally 1 - 10 seconds long and seems to occur when a particular computer has not accessed the requested namespace for approximately five minutes or more. For example, if the user has not accessed \\domain.name\namespace1\ for more than five minutes and attempts to access \\domain.name\namespace1\ via Windows Explorer, the Explorer window will freeze for 1 - 10 seconds before finally resuming and displaying the folders that exist in \\domain.name\namespace1. If they then close the Explorer window and attempt to access \\domain.name\namespace1\ again within five minutes, the contents will be displayed almost instantly; if they wait longer than five minutes it will go through the 1 - 10 second pause again. Once "inside" the namespace everything is nice and snappy; it's just the initial connection to the namespace that is slow.

The browsing delays seem to affect all variants of Windows that we use (Windows 2008 x64 SP2, Windows 2003 R2 x86 SP2, Windows XP Pro x86 SP3) - it is possibly a bit worse in Windows XP / 2003 than in Windows 2008, but I'm not sure if the difference isn't just psychological.

Accessing the underlying folder targets directly exhibits no delay at all - i.e. if the SMB shares pointed to by DFS are accessed directly (bypassing DFS) then there is no pause.

During troubleshooting I noticed that the "Cache duration" for all of our DFS roots is set to 300 seconds - 5 minutes. Given that this is the same amount of time required to trigger the pause, I assume that this caching is somehow related, although I am unsure exactly what is cached on the client and hence what needs to be looked up again after 5 minutes have elapsed.

In trying to resolve the problem I have already tried / checked the following (without success):

- Run dcdiag on both Domain Controllers - no problems found.
- Done some basic DNS server checks without finding any problems - I don't know how to check the DNS servers in detail, but I would add that the network is not exhibiting any other strange behavior that may point to a DNS problem.
- Disabled anti-virus on clients and servers.
- Removed one of the namespace servers from a couple of namespaces - no difference.

So that's where I'm up to - and I'm out of ideas. Can anyone suggest what may be causing the delays and/or what I should be trying next?
Well, we finally appear to have resolved this issue in our environment. For the benefit of others, here's what we discovered and how we fixed the problem: To try and gain further insight into what was occurring before/during/after the delays we used Wireshark on a client machine to capture/analyse network traffic whilst that client attempted to access a DFS share. These captures showed something strange: whenever the delay occurred, in between the DFS request being sent from the client to a DC, and the referral to a DFS root server coming back from the DC to the client, the DC was sending out several broadcast name lookups to the network. Firstly, the DC would broadcast a NetBIOS lookup for DOMAIN (where DOMAIN is our pre-Windows 2000 Active Directory domain name). A few seconds later, it would broadcast an LLMNR lookup for DOMAIN. This would be followed by yet another broadcast NetBIOS lookup for DOMAIN. After these three lookups had been broadcast (and I assume timed out) the DC would finally respond to the client with a (correct) referral to a DFS root server. These broadcast name lookups for DOMAIN were only being sent when the long delay opening a DFS share occurred, and we could clearly see from the Wireshark capture that the DC wasn't returning a referral to a DFS root server until all three lookups had been sent (and ~7 seconds had passed). So, these broadcast name lookups were pretty obviously the cause of our delays. Now that we knew what the problem was, we started trying to figure out why these broadcast name lookups were occurring. After a bit more Googling and some trial-and-error, we found our answer: we hadn't set the DfsDnsConfig registry key on our domain controllers to 1, as is required when using DFS in a DNS-only environment. When we originally set up DFS in our environment we did read the various articles about how to configure DFS for a DNS-only environment (e.g. Microsoft KB244380 and others) and were aware of this registry key, but had misinterpreted the instructions on when/how to use it. KB244380 says: The DFSDnsConfig registry key must be added to each server that will participate in the DFS namespace for all computers to understand fully qualified names. We thought this meant that the registry key has to be set on the DFS namespace servers only, not realising that it was also required on the domain controllers. After we set DfsDnsConfig to 1 on our domain controllers (and restarted the "DFS Namespace" service), the problem vanished. Obviously we're happy with this outcome, but I would add that I'm still not 100% convinced that this is our only problem - I wonder if adding DfsDnsConfig=1 to our DCs has only worked around the problem, rather than solving it. I can't figure out why the DCs would be trying to look up DOMAIN (the domain name itself, rather than a server in the domain) during the DFS referral process, even in a non-DNS-only environment, and I also know I haven't set DfsDnsConfig=1 on domain controllers in other (admittedly much smaller / simpler) DNS-only environments and haven't had the same issue. Still, we've solved our problem so we are happy. I hope this is helpful to the others who are experiencing a similar issue - and thanks again to those that offered suggestions along the way.
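For anyone wanting to apply the same fix, here is a rough sketch of the commands involved, run from an elevated prompt on each domain controller. The registry path is written from memory of KB244380, so treat it as an assumption and verify it against the article before touching production DCs:
    rem assumed key path - confirm against KB244380 before running
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Dfs\Parameters /v DfsDnsConfig /t REG_DWORD /d 1 /f
    net stop dfs
    net start dfs
The "dfs" service name corresponds to the DFS Namespace service mentioned above; restarting it makes the new value take effect.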
{ "source": [ "https://serverfault.com/questions/50789", "https://serverfault.com", "https://serverfault.com/users/12993/" ] }
50,883
I have installed Java through yum on CentOS, however another Java program needs to know what the JAVA_HOME environment variable is. I know all about setting environment variables, but what do I set it to? Java is installed in /usr/bin/java; it can't be there!
Actually I found it, it's /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/ . I found out what it was by doing update-alternatives --display java and it showed me the directory /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
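As a quick illustration (a minimal sketch using the path found above; substitute whatever update-alternatives reports on your own system):
    export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64
    export PATH=$JAVA_HOME/bin:$PATH
To make this permanent for all users, the same two lines can go in a file such as /etc/profile.d/java.sh.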
{ "source": [ "https://serverfault.com/questions/50883", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
50,934
My (Windows XP, Professional, v2002, SP3) workstation is completely ignoring my hosts file. Here is the code in my hosts file: 127.0.0.1 localhost 172.17.1.107 wiki But, when I open a browser and type "wiki" in the URL bar and hit "Enter" it takes me to the old location of my wiki as it appeared in my old hosts file: 10.0.36.100 wiki Even though I have renamed the old hosts file "hosts_full" and moved it to my desktop (so, out of the etc folder entirely). I have so far taken the following steps: Restarted (3 times) Ran ipconfig /flushdns from the command line Ran ping wiki from the command line, response was Reply from 10.0.36.100: bytes=32 time=1ms TTL=63 I've cleared every cache I can think of (IE, FF). I have an ISA firewall client that runs on my machine and I've tried all of this with it disabled and enabled. In fact, the firewall uses the old hosts file to resolve itself: 10.0.2.126 isa3 And somehow it still works fine even though the new hosts file doesn't contain that line. Any ideas??? Thanks in advance for the help!
Any chance you are using a proxy server for browsing? If so, it might be that the proxy server is resolving the DNS name for you, and that's why you get different results on the command line with ping as opposed to the browser. There's also an off chance that traffic is being intercepted and changed. A very off chance...
{ "source": [ "https://serverfault.com/questions/50934", "https://serverfault.com", "https://serverfault.com/users/11863/" ] }
51,014
I simply need to get the match from a regular expression: $ cat myfile.txt | SOMETHING_HERE "/(\w).+/" The output has to be only what was matched, inside the parenthesis. Don't think I can use grep because it matches the whole line. Please let me know how to do this.
Two things: As stated by @Rory, you need the -o option, so only the match is printed (instead of the whole line). In addition, you need the -P option, to use Perl regular expressions, which include useful elements like look-ahead (?= ) and look-behind (?<= ); those look for parts, but don't actually match and print them. If you want only the part inside the parentheses to be matched, do the following: grep -oP '(?<=\/\()\w(?=\).+\/)' myfile.txt If the file contains the string /(a)5667/ , grep will print 'a', because: /( are found by \/\( , but because they are in a look-behind (?<= ) they are not reported a is matched by \w and is thus printed (because of -o ) )5667/ are found by \).+\/ , but because they are in a look-ahead (?= ) they are not reported
{ "source": [ "https://serverfault.com/questions/51014", "https://serverfault.com", "https://serverfault.com/users/141516/" ] }
51,032
How do I check what options are compiled into a Linux kernel without looking at /boot/config-* and if I don't have access to the /boot/config-* file that's left over?
Unless your kernel was built with CONFIG_IKCONFIG_PROC , which would make the config available in /proc as sysadmin1138 mentioned above, you're pretty much out of luck. Debian and RH based kernel packages do, however, generally install a config-$version file in /boot . So unless it's a custom kernel, it should be available there.
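A few hedged examples of what that looks like in practice (the CONFIG_EXT4 option is just an arbitrary example to grep for):
    # if CONFIG_IKCONFIG_PROC is enabled, the running kernel's config is exposed here
    zcat /proc/config.gz | grep CONFIG_EXT4
    # if IKCONFIG was built as a module, loading it creates /proc/config.gz
    modprobe configs
    # the usual distro-provided copy, when it exists
    cat /boot/config-$(uname -r)
If IKCONFIG was built in but not exposed through /proc, the scripts/extract-ikconfig helper in the kernel source tree can usually pull the embedded config out of the kernel image.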
{ "source": [ "https://serverfault.com/questions/51032", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
51,409
Is it possible to dump entire HTTP requests with Apache? I need to track all HTTP headers of incoming requests. How can I do that?
I think what you want instead of Apache might be a packet analyzer, also known as a packet sniffer. Two of the most popular ones are probably tcpdump and Wireshark, both of which are free and have versions for Windows and *nix operating systems. These will show you all traffic coming in on an interface, not just what Apache sees. But you can use filters to restrict to a specified port, such as 80 for http. tcpdump: The following command run from the server will show you all packets destined for port 80: sudo tcpdump -s 0 -X 'tcp dst port 80' The capital X switch dumps the payload in hex and ASCII. The s switch with 0 means to get the whole packet. 'tcp dst port 80' means to filter and only show packets destined for port 80 in the TCP header. Wireshark: For the more user-friendly version, if you have a GUI running, consider Wireshark (formerly known as Ethereal).
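If you would rather have Apache itself record the incoming headers instead of running a sniffer, one alternative (sketched here with arbitrary header names; extend the list to whatever you care about) is to log selected request headers with mod_log_config:
    LogFormat "%h %t \"%r\" \"%{Host}i\" \"%{User-Agent}i\" \"%{Referer}i\" \"%{Cookie}i\"" reqheaders
    CustomLog /var/log/apache2/request-headers.log reqheaders
Apache also ships a mod_dumpio module that can dump entire requests to the error log at debug log levels, but it is very verbose, so check your version's documentation before enabling it.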
{ "source": [ "https://serverfault.com/questions/51409", "https://serverfault.com", "https://serverfault.com/users/3320/" ] }
51,477
Is there any Linux command to extract all the ASCII strings from an executable or other binary file? I suppose I could do it with a grep, but I remember hearing somewhere that such a command existed?
The command you are looking for is strings Its name is quite self-explanatory, it retrieves any printable string from a given file. man strings gives: STRINGS(1) NAME strings - find the printable strings in a object, or other binary, file SYNOPSIS strings [ - ] [ -a ] [ -o ] [ -t format ] [ -number ] [ -n number ] [--] [file ...]
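A small, throwaway usage example (the file name and search term are arbitrary):
    # -n 8 only reports strings of at least 8 printable characters, which cuts down the noise
    strings -n 8 /usr/bin/ssh | grep -i openssl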
{ "source": [ "https://serverfault.com/questions/51477", "https://serverfault.com", "https://serverfault.com/users/5684/" ] }
51,567
I would like to copy stuff in bulk (reimage a disk using dd) with netcat from host A to B via an SSH-encrypted channel on Linux. What commands should I type on both ends?
Copying from source to target where target has sshd running: dd if=/dev/sda | gzip | ssh root@target 'gzip -d | dd of=/dev/sda' Copying from source to target via sshd_host when target is not running sshd. Target: nc -l -p 62222 | dd of=/dev/sda bs=$((16 * 1024 * 1024)) Source: ssh -L 62222:target:62222 sshd_host & Source: dd if=/dev/sda | nc -w 3 localhost 62222 dd - if= is the source, of= is the destination, bs= is the block size. Different block sizes may improve performance. 16 MB is usually a fairly reasonable starting point. You can also use count= to indicate how many blocks to copy. nc - -p indicates the port to use for services. -l is used to listen for an incoming connection. -w sets up the time to wait for data in the pipeline before quitting. ssh - -L sets up the tunnel on the remote host. The format of the argument is local_port:target_host:target_port . Your local program (nc) connects to the local_port; this connection is tunneled and connected to target_port on the target_host. The options defined are just the ones used for this. Look at the man pages for more details. A few notes: If you are doing this over anything but a LAN, I'd suggest compressing the datastream with gzip or compress. Bzip2 would work too but it takes a bit more CPU time. The first one has an example of that usage. It's better if the source partition is not mounted or is mounted read-only. If not you will need to fsck the destination image. Unless one of the machines has netcat but not ssh, netcat isn't really needed here. That case would look like: source machine dd -> nc -> ssh -> ssh tunnel -> sshd server -> nc on target -> dd dd works best if the source and target are the same size. If not the target must be the bigger of the 2. If you are using ext2/3 or xfs, dump (or xfsdump) and restore may be a better option. It won't handle the boot sector but it works when the target and source are different sizes.
{ "source": [ "https://serverfault.com/questions/51567", "https://serverfault.com", "https://serverfault.com/users/13823/" ] }
51,635
I have a directory shared on my computer, which is part of the domain. Is it possible to set up the share so that a user logged on to a different machine which is not part of the domain can access my share? From the machine not on the domain, I can browse to the share, but it asks for credentials, and I just want to allow anonymous access.
To do what you want you'll have to enable the "Guest" account on the computer hosting the files and then grant the "Everyone" group whatever access you want. "Guest" is a user account, but its enabled / disabled status is interpreted by the operating system as a boolean "Allow unauthenticated users to connect?" Permissions still control the access to files, but you open things up a LOT by enabling Guest. Don't do this on a domain controller computer, BTW, because you'll be Guest on all DCs...
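As a rough sketch of what that could look like from an elevated command prompt on a Vista/2008-era machine (the share name and path below are made up; double-check each command against your own setup before running it):
    net user guest /active:yes
    icacls D:\Public /grant "Everyone:(OI)(CI)RX"
    net share Public=D:\Public /grant:Everyone,READ
The icacls line grants read/execute on the files themselves, while the net share line opens up the share-level permissions; both levels have to allow access.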
{ "source": [ "https://serverfault.com/questions/51635", "https://serverfault.com", "https://serverfault.com/users/9350/" ] }
51,681
I don't have any real (i.e. professional) experience with Steve Gibson's SpinRite so I'd like to put this to the SF community. Does SpinRite actually do what it claims? Is it a good product to use? With a proper backup solution and RAID fault tolerance, I've never found need for it, but I'm curious. There seems to be some conflicting messages regarding it, and no hard data to be found either way. On one hand, I've heard many home users claim it helped them, but I've heard home users say a lot of things -- most of the time they don't have the knowledge or experience to accurately describe what really happened. On the other hand, Steve's own description and documentation don't give me a warm fuzzy about it either. So what is the truth of the matter? Would you use it?
I've had a reasonably good experience with SpinRite, but I think it's highly overrated. In fact, it might just be too clever for its own good. There are free solutions which work just as well (actually, the free ones might work even better). We had a 200 GB NTFS drive that suddenly failed catastrophically. This was supposed to be the "shared" drive on which people just dumped stuff temporarily, but it ended up turning into a huge data repository that had miscellaneous backups, as well as a bunch of files that nobody bothered to back up anywhere. When the drive died, we couldn't get it to mount, no matter how many times we ran chkdsk or other tools. In the end, we purchased and ran SpinRite...which continued to run for more than 1 month. Every time it hit a bad cluster, it spent hours trying to recover data from it. Again, it ran nonstop for more than a month trying to recover data from a defective 200 GB drive. (In SpinRite's defense, it can scan a drive in just a few hours if there are no physical defects.) SpinRite was eventually able to recover all our files, although many of the larger ones turned out to be corrupt anyway. SpinRite also made the drive mountable again. So I'd definitely say it did something. However, despite the fact that it worked, I don't know if it helped any more than just booting off a Linux CD and running dd to copy the entire drive to a file. There's something to be said for not running a dying disk for an entire month, as it's dying! Physical defects seem to have a habit of spreading. It wouldn't surprise me if the disk degraded even further while SpinRite was running. Personally, I'd rather get the data off the disk as quickly as possible, make several backup images, and try to repair the files offline. We've had to recover other data recently, and dd has done a great job. You can tell it to copy all the good data off the drive, then you can run it a few more times to go and try harder (i.e., use smaller block sizes) trying to pull data off the bad areas. If you've got an hour or so to spare, I'd say it's worth your time to learn how to use dd instead of buying SpinRite: http://www.debianadmin.com/recover-data-from-a-dead-hard-drive-using-dd.html Or go the slightly easier route and just download dd_rescue: http://www.garloff.de/kurt/linux/ddrescue If you still want to run SpinRite, I'd highly recommend doing it AFTER you've copied all existing data off the drive, just in case running the drive for a longer period of time allows it to become further degraded. Every time you get a new drive, you should boot off a Linux CD and run badblocks to check it for defects. You should also periodically check your drives for degradation. We've had at least 2 brand-new drives come with defects, and 3 or 4 more die within a couple of months (even though we did thorough tests before putting them into service). Note that you need to run badblocks as root, or prefix the commands with "sudo " if you're booting off an Ubuntu live CD. Brand-new drives (warning: destroys all data!): badblocks -wvs /dev/sd# or badblocks -wvs /dev/hd# In-use drives (read-only test): badblocks -vs /dev/sd# or badblocks -vs /dev/hd# Where # is the drive number in Linux. IDE drives usually are called /dev/hd#, and SCSI (and often SATA) drives are /dev/sd#. 
More info on badblocks here: http://en.wikipedia.org/wiki/Badblocks By the way, even though dd and badblocks are Linux programs, you can use them on NTFS drives, and you can even mount NTFS partitions in Linux, regardless of whether you're using MBR partitions, dynamic disks, or GPT disks. Steve's documentation discusses a lot of hypothetical problems that SpinRite theoretically could help with. For example: data fading away over time and needing to be "refreshed" by reading every block and writing it back to the disk again, or the notion that repeatedly repositioning the read head on either side of a block will eventually permit you to statistically divine the original data stored in that block. Logically, these things make sense, but I think they are just solutions to academic problems which may not actually arise in the real world. (At least, with hard disks--maybe Zip disks and such were more susceptible to data fading.) If Steve cited papers on the subjects, or if these techniques had been experimentally proven to be effective, then I would expect for there to be many open-source or commercially-available SpinRite clones. It would be well within the capabilities of an average script programmer to write a Python, Perl, or UNIX shell script that includes all of SpinRite's documented features.
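For completeness, here is a hedged GNU ddrescue example of the "image first, repair later" approach described above (device and file names are placeholders):
    # first pass: grab everything that reads cleanly, skip the slow scraping of bad areas
    ddrescue -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
    # second pass: go back and retry the bad areas a few times
    ddrescue -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
Note that GNU ddrescue and the dd_rescue tool linked above are related but separate projects with different option syntax.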
{ "source": [ "https://serverfault.com/questions/51681", "https://serverfault.com", "https://serverfault.com/users/3454/" ] }
51,851
If I were to archive data on a hard drive, unplug it, and set it on a (not dusty, temperature-controlled) shelf somewhere, would that drive deteriorate much? How does the data retention of an unplugged hard drive compare to tapes?
Hard drives are unsuitable for anything other than short term archival storage. The problem is not one of data retention, it's the fact that stored drives have a bad track record when it comes to spinning up again. Contrary to what Vilx- wrote, many of us know what happens to a drive that's been stored for 20 years - they won't spin up 90+% of the time. "Modern" drives are in fact worse than the drives of a decade or two back. CDs have been around more than long enough to know that the organic dye layer of an RW disc deteriorates surprisingly fast, especially when exposed to ultraviolet light (e.g. fluorescent lighting) and temperature fluctuations. One thing that's often overlooked when comparing hard drives to tapes is the simple fact that backup tape technology is specifically designed to store data for extended periods. Hard drives are a temporary medium only. Edit: The following was posted as an answer to a newer version of essentially the same question, so I'll merge it in here rather than have a second post Hard drives are designed for relatively short term data storage. Quite apart from the mechanical issues there's the fact that magnetic media, and I mean ALL magnetic media, does deteriorate with age, even under absolutely ideal conditions. This doesn't require any external influences as magnetic particles are affected by others near by and in modern drives the density is nothing short of astonishing. A drive in use has the magnetic strength regularly refreshed, either by actual writing to disk or by the drive's own low level system, which periodically reads and rewrites sectors. A drive in storage, or even one that's simply powered down, receives none of that refreshing of the data. How long a drive can be stored without becoming at least partially unreadable is an ongoing topic of debate and will certainly vary even across drives from the same batch, let alone different models or manufacturers. I personally would never rely on a device specifically designed for short term data storage for any significant length of time. Even tapes, which are designed for relatively long term data storage, should be refreshed every few years. With all that said, I don't know what your alternative options are. Gold layer CDs/DVDs are currently being claimed to have 20+ years of safe storage but I can also remember when the exact same claim was made for burnable CDs. Those early ones proved to have a safe life of less than two years.
{ "source": [ "https://serverfault.com/questions/51851", "https://serverfault.com", "https://serverfault.com/users/1753/" ] }
52,034
I just wondered what exactly the difference is between [[ $STRING != foo ]] and [ $STRING != foo ] , apart from the fact that the latter is POSIX-compliant and found in sh, while the former is an extension found in bash.
There are several differences. In my opinion, a few of the most important are: [ is a builtin in Bash and many other modern shells. The builtin [ is similar to test with the additional requirement of a closing ] . The builtins [ and test imitate the functionality of /bin/[ and /bin/test along with their limitations so that scripts would be backwards compatible. The original executables still exist mostly for POSIX compliance and backwards compatibility. Running the command type [ in Bash indicates that [ is interpreted as a builtin by default. (Note: which [ only looks for executables on the PATH and is equivalent to type -P [ . You can execute type --help for details) [[ is not as compatible, it won't necessarily work with whatever /bin/sh points to. So [[ is the more modern Bash / Zsh / Ksh option. Because [[ is built into the shell and does not have legacy requirements, you don't need to worry about word splitting based on the IFS variable messing up variables that evaluate to a string with spaces. Therefore, you don't really need to put the variable in double quotes. For the most part, the rest is just some nicer syntax. To see more differences, I recommend this link to an FAQ answer: What is the difference between test, [ and [[ ? . In fact, if you are serious about bash scripting, I recommend reading the entire wiki, including the FAQ, Pitfalls, and Guide. The test section of the guide explains these differences as well, and why the author(s) think [[ is a better choice if you don't need to worry about being as portable. The main reasons are: You don't have to worry about quoting the left-hand side of the test so that it actually gets read as a variable. You don't have to escape less than and greater than < > with backslashes in order for them not to get evaluated as input redirection, which can really mess some stuff up by overwriting files. This again goes back to [[ being a builtin. If [ (test) is an external program the shell would have to make an exception in the way it evaluates < and > only if /bin/test is being called, which wouldn't really make sense.
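A tiny bash session illustrating the quoting point (the commented output is what I would expect; verify on your own shell):
    var="two words"
    [ $var = "two words" ] && echo match      # error: bash: [: too many arguments
    [ "$var" = "two words" ] && echo match    # prints "match"; quotes are required with [
    [[ $var = "two words" ]] && echo match    # prints "match" even without the quotes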
{ "source": [ "https://serverfault.com/questions/52034", "https://serverfault.com", "https://serverfault.com/users/15325/" ] }
52,068
When I log in with PuTTY, I always have to: change settings appearance font change 8 resize window so that I can see enough text to work with the log files. I don't see where I can save these settings to my saved session. Is this possible? Answer: So the answer is click on change settings, change everything but then you have to also click on session the name again and save, thanks David:
You need to go "Change Settings -> Window" and set it there. Do the same under Appearance for the font and size. Then go back to the session and save it.
{ "source": [ "https://serverfault.com/questions/52068", "https://serverfault.com", "https://serverfault.com/users/2571/" ] }
52,071
I've got a 3rd party binary library for connecting to a server via SSL that works fine on Cent OS 4 32-bit , but on Debian Lenny 32-bit I get an SSL initialization error when trying to process a transaction. When I execute ldd on the library on Debian, 5 links are missing that ldd on Cent OS has: libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 libkrb5.so.3 => /usr/lib/libkrb5.so.3 libcom_err.so.2 => /lib/libcom_err.so.2 libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 libresolv.so.2 => /lib/libresolv.so.2 I suspect my problem lies here. All those libraries are installed on the Debian system so I'm perplexed that the 3rd party binary does not see them. I've done an md5sum on the 3rd party binary on each system, & they are exactly the same. Here is the complete ldd listing from Cent OS: [root@localhost ~]# ldd /usr/lib/libwebpayclient.so libssl.so.4 => /lib/libssl.so.4 (0x0026a000) libcrypto.so.4 => /lib/libcrypto.so.4 (0x00c41000) libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0x00544000) libm.so.6 => /lib/tls/libm.so.6 (0x0093e000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00da4000) libc.so.6 => /lib/tls/libc.so.6 (0x0066e000) libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x008dd000) libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00394000) libcom_err.so.2 => /lib/libcom_err.so.2 (0x00111000) libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00114000) libresolv.so.2 => /lib/libresolv.so.2 (0x007da000) libdl.so.2 => /lib/libdl.so.2 (0x00135000) libz.so.1 => /usr/lib/libz.so.1 (0x004d1000) /lib/ld-linux.so.2 (0x008b5000) Note that I had to install the package compat-libstdc++-33.i386 to resolve libstdc++.so.5 And here is the complete ldd listing from Debian: localhost:~# ldd /usr/lib/libwebpayclient.so linux-gate.so.1 => (0xb7fcb000) libssl.so.4 => not found libcrypto.so.4 => not found libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0xb7ee5000) libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7ebf000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7eb2000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7d57000) /lib/ld-linux.so.2 (0xb7fcc000) Note that I had to install the package libstdc++5 to resolve libstdc++.so.5. Using ln -s to fix the 2 "not found" links I get: localhost:~# ldd /usr/lib/libwebpayclient.so linux-gate.so.1 => (0xb7eff000) libssl.so.4 => /usr/lib/libssl.so.4 (0xb7e8e000) libcrypto.so.4 => /usr/lib/libcrypto.so.4 (0xb7d31000) libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0xb7c76000) libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7c50000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7c43000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7ae8000) libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7ae4000) libz.so.1 => /usr/lib/libz.so.1 (0xb7acf000) /lib/ld-linux.so.2 (0xb7f00000) Interestingly, libz.so.1 appears. So there is the clue. The version of SSL on Cent OS is 0.9.7a where it's 0.9.8 on Debian. I bet it's linked to less libraries...
You need to go "Change Settings -> Window" and set it there. Do the same under Appearance for the font and size. Then go back to the session and save it.
{ "source": [ "https://serverfault.com/questions/52071", "https://serverfault.com", "https://serverfault.com/users/16149/" ] }
52,074
I would like to turn my extra wireless router into a wireless bridge so that I don't have to buy a card for my Xbox 360. I have found an article on how to do so with DD-WRT (if there is an easier way please tell me). The router is a Netgear WGR614 v6 which according to the DD-WRT compatibility list has 2MB of Flash memory and hence can use only DD-WRT Micro. I am searching through their downloads page and for the life of me I can't find anything named DD-WRT Micro. Can anyone point me to the download?
You need to go "Change Settings -> Window" and set it there. Do the same under Appearance for the font and size. Then go back to the session and save it.
{ "source": [ "https://serverfault.com/questions/52074", "https://serverfault.com", "https://serverfault.com/users/841/" ] }
52,260
Maybe this will sound like a dumb question but the way I'm trying to do it doesn't work. I'm on a live CD, the drive is unmounted, etc. When I do the backup this way sudo dd if=/dev/sda2 of=/media/disk/sda2-backup-10august09.ext3 bs=64k ...normally it would work, but I don't have enough space on the external HD I'm copying to (it ALMOST fits into it). So I wanted to compress it this way sudo dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz ...but I got permission denied. I don't understand.
Do you have access to the sda2-backup...gz file? Sudo only works with the command after it, and doesn't apply to the redirection. If you want it to apply to the redirection, then run the shell as root so all the children process are root as well: sudo bash -c "dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz" Alternatively, you could mount the disk with the uid / gid mount options (assuming ext3) so you have write permissions as whatever user you are. Or, use root to create a folder in /media/disk which you have permissions for. Other Information that might help you: The block size only really matters for speed for the most part. The default is 512 bytes which you want to keep for the MBR and floppy disks. Larger sizes to a point should speed up the operations, think of it as analogous to a buffer. Here is a link to someone who did some speed benchmarks with different block sizes. But you should do your own testing, as performance is influenced by many factors. Take also a look at the other answer by andreas If you want to accomplish this over the network with ssh and netcat so space may not be as big of an issue, see this serverfault question . Do you really need an image of the partition, there might be better backup strategies? dd is a very dangerous command, use of instead of if and you end up overwriting what you are trying to backup!! Notice how the keys o and i are next to each other? So be very very very careful.
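Another common pattern for the same problem, shown here as a sketch using the file name from the question, is to let a sudo-elevated tee do the writing instead of the shell redirection:
    dd if=/dev/sda2 | gzip | sudo tee /media/disk/sda2-backup-10august09.gz > /dev/null
The > /dev/null just stops tee from also echoing the compressed stream to the terminal.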
{ "source": [ "https://serverfault.com/questions/52260", "https://serverfault.com", "https://serverfault.com/users/12427/" ] }
52,285
Let's suppose I have an SSH key, but I've deleted the public key part. I have the private key part. Is there some way I can regenerate the public key part?
Use the -y option to ssh-keygen: ssh-keygen -f ~/.ssh/id_rsa -y > ~/.ssh/id_rsa.pub From the 'man ssh-keygen' -y This option will read a private OpenSSH format file and print an OpenSSH public key to stdout. Specify the private key with the -f option; yours might be DSA instead of RSA, and the name of your private key file probably indicates which type you used. The newly generated public key should be the same as the one you generated before.
{ "source": [ "https://serverfault.com/questions/52285", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
52,301
I administer a number of Linux servers that require telnet access for users. Currently the users' credentials are stored locally on each server, the passwords tend to be very weak, and there is no requirement for them to be changed. The logons will soon be integrated with Active Directory and this is a more closely guarded identity. Is it really a concern that a user's password could be sniffed from the LAN given that we have a fully switched network, so any hacker would need to insert themselves physically between the user's computer and the server?
It is a reasonable concern, as there are tools that accomplish ARP poisoning (spoofing) and allow you to convince computers that you are the gateway. An example of a relatively easy-to-use tool would be Ettercap, which automates the whole process. It will convince their computer that you are the gateway and sniff the traffic; it will also forward packets, so unless there is an IDS running the whole process might be transparent and undetected. Since these tools are available to the kiddies it is a fairly large threat. Even if the systems themselves are not that important, people reuse passwords and might expose passwords to more important things. Switched networks only make sniffing more inconvenient, not hard or difficult.
{ "source": [ "https://serverfault.com/questions/52301", "https://serverfault.com", "https://serverfault.com/users/16211/" ] }
52,335
I schedule some tasks using crontab. What will happen if my computer is shut down or turned off during the time when cron was scheduled to do something? Does each missed cron job run after the computer is turned on, or are missed jobs for that day ignored? If missed jobs don't resume, is there any way to configure cron such that it runs the missed tasks after the computer is turned back on?
When your computer is shut down (or the cron daemon is otherwise not running), cron jobs will not be started. If you have jobs that you would like to run after the fact during those times when the computer is shut down, use anacron. Installed by default, see "man anacron", "man anacrontab", or the file /etc/anacrontab for more info. Ubuntu uses anacron by default for crontab entries in: /etc/cron.daily /etc/cron.weekly /etc/cron.monthly leaving the remaining crontabs to be handled by the main cron daemon, specifically: /etc/crontab /etc/cron.d /var/spool/cron NOTES Anacron itself does not run as a daemon, but relies on system startup scripts and cron itself to run. On the Ubuntu 8.04 box I'm looking at, /etc/init.d/anacron is run at boot, and again by cron each morning at 07:30. The README at /usr/share/doc/anacron/README.gz has a slight bit more info than is contained in the manpages. EXAMPLES For simple "daily", "weekly", "monthly" jobs, put a copy of or a symlink to the script in one of the /etc/cron.{daily|weekly|monthly} directories above. Anacron will take care of running it daily/weekly/monthly, and if your computer is off on the day the "weekly" scripts would normally run, it'll run them the next time the computer is on. As another example, assuming you have a script here: /usr/local/sbin/maint.sh And you wish to run it every three days, the standard entry in /etc/crontab would look like this: # m h dom mon dow user command 0 0 */3 * * root /usr/local/sbin/maint.sh If your computer was not on at 00:00 on the 3rd of the month, the job would not run until the 6th. To have the job instead run on the 4th when the computer is off and "misses" the run on the 3rd, you'd use this in /etc/anacrontab (don't forget to remove the line from /etc/crontab): # period delay job-identifier command 3 5 maint-job /usr/local/sbin/maint.sh The "delay" of "5" above means that anacron will wait for 5 minutes before it runs this job. The idea is to prevent anacron from firing things off immediately at boot time.
{ "source": [ "https://serverfault.com/questions/52335", "https://serverfault.com", "https://serverfault.com/users/13830/" ] }
52,408
These three questions have been merged into this one. The following answers might have come from any of them, or might be a generic answer to all of them. Is there any kind of tool to assist in loading and unloading servers? I realized that I lack both the height and upper body strength to remove servers from the upper tiers of a rack. I could not find the name or type of equipment that folks are using to do this kind of work safely. In this video you can see that they use a dedicated "server lift" http://www.racklift.com/RackLift-Demo-Video.html which looks quite expensive until you consider the cost of what you're lifting. Does anyone know of cheap server lifts or other tools that can be used for this? In this video: http://vodpod.com/watch/2515104-rack-and-install-the-cisco-ucs-5108-server-chassis 3:18 into the video. I'm curious about server jacks and possibly ballpark costs for them.
Why is everyone giving the wrong answer? It's called a scissor lift (image source: vestilmfg.com). They make servers that are hundreds of pounds. Lots of storage arrays are far more than that. There comes a time when you don't want to rely on hands that were, most likely, just reaching for greasy potato chips. Use the actual tool if server lifting is an issue.
{ "source": [ "https://serverfault.com/questions/52408", "https://serverfault.com", "https://serverfault.com/users/7261/" ] }
52,640
What is the best way to format a USB drive with FAT32 (for Mac compatibility) from within Windows 7/Vista? I ask because the Disk Management only lets you pick exFAT (because the disk is over 32 GB I believe). Doing it from the command line with diskpart doesn't seem to work either.
Download fat32format. It should work fine.
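If I recall correctly the tool is run from an (elevated) command prompt with just the target drive letter, something like the line below, but check the documentation that ships with it since I may be misremembering the exact invocation:
    fat32format F: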
{ "source": [ "https://serverfault.com/questions/52640", "https://serverfault.com", "https://serverfault.com/users/87112/" ] }
52,830
I got DKIM set up on my mail server (Postfix and Ubuntu) so it signs outgoing emails. I used these instructions: https://help.ubuntu.com/community/Postfix/DKIM However, I need it to sign emails from any domain (in the From address) and not just my own. I'm building an email newsletter service and clients will be sending their own email through the server. First I set "Domain *" in /etc/dkim-filter.conf. This got it to include the DKIM headers in all outgoing emails, no matter what the domain. However, the verification check fails on Gmail because it is checking the domain in the From address, and not my domain (and DNS record). Does anyone know how to do this?
Ok I managed to figure this out on my own, but I wanted to post the steps here for posterity because there was zero documentation on this (that I could find) and it was practically guess and check. After I set "Domain *" as described above, it would sign it like this: DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=clientdomain.com; s=main; t=1250005729; bh=twleuNpYDuUTZQ/ur9Y2wxCprI0RpF4+LlFYMG81xwE=; h=Date:From:To:Message-Id:Subject:Mime-Version:Content-Type; b=kohI7XnLlw/uG4XMJoloc4m9zC13g48+Av5w5z7CVE0u3NxsfEqwfDriapn7s7Upi 31F3k8PDT+eF57gOu2riXaOi53bH3Fn/+j0xCgJf8QpRVfk397w4nUWP/y8tz4jfRx GhH21iYo05umP0XflHNglpyEX02bssscu2VzXwMc= notice the "d=clientdomain.com". It was generating this based on the from address in the email, where the from address was something like "[email protected]". Obviously if it checked the client's domain and not mine no DNS TXT record was there and the verification would fail. So anyway I found out in this documentaion that you can set a KeyList parameter. http://manpages.ubuntu.com/manpages/hardy/man5/dkim-filter.conf.5.html It didn't really describe what I wanted to do, but I figured I'd play with it. I commented out KeyFile and set KeyList to "/etc/mail/dkim_domains.key" which is an arbitrary file name I made up. I then created that file and put this in it "*:feedmailpro.com:/etc/mail/dkim.key". This tells it for any client domain, sign it with my domain (feedmailpro.com), and use the dkim.key file. Restarted DKIM and postfix sudo /etc/init.d/dkim-filter restart sudo /etc/init.d/postfix restart Now this is the key it generated when I sent a test email. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=feedmailpro.com; s=dkim.key; t=1250005729; bh=twleuNpYDuUTZQ/ur9Y2wxCprI0RpF4+LlFYMG81xwE=; h=Date:From:To:Message-Id:Subject:Mime-Version:Content-Type; b=kohI7XnLlw/uG4XMJoloc4m9zC13g48+Av5w5z7CVE0u3NxsfEqwfDriapn7s7Upi 31F3k8PDT+eF57gOu2riXaOi53bH3Fn/+j0xCgJf8QpRVfk397w4nUWP/y8tz4jfRx GhH21iYo05umP0XflHNglpyEX02bssscu2VzXwMc= Improvement, you see the d= now is set to my domain (even though the from address of the email was not my domain). However s= got changed to "dkim.key" instead of the selector I chose in dkim-filter.conf. In the original setup instructions I'd set the selector to "mail". That was weird, but I noticed it changed it to the filename of my key, dkim.key. So I went and renamed "/etc/mail/dkim.key" to "/etc/mail/mail". Also updated the reference to it in "/etc/mail/dkim_domains.key". Restart dkim-filter and postfix again same as above, and now it started working. Here is the final header which signs correctly using the right selector (apparently based off the filename of the key). DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=feedmailpro.com; s=mail; t=1250006218; bh=tBguOuDhBDlhv0m4KF66LG10V/8ijLcAKZ4JbjpLXFM=; h=Date:From:To:Message-Id:Subject:Mime-Version:Content-Type; b=c9eqvd+CY86BJDUItWVVRvI3nibfEDORZbye+sD1PVltrcSBOiLZAxF3Y/4mP6vRX MUUNCC004oIH1u7FYafgF32lpuioMP1cd7bi6x3AZ5zH4BYETNBnnz4AhAPBtqlIh/ FFMz8jkhhLhcM2hDpwJkuKjAe3LzfNVDP8kD11ZI= Now s=mail is right, and d=feedmailpro.com is right. It works! Overall this was way harder than I expected and there seemed to be zero documentation on how to do this (signing for all outgoing domains), but I guess it's open source software so I can't complain. One final note, to check if the TXT DNS record was setup correctly you can do a command like with your domain dig mail._domainkey.feedmailpro.com TXT May need to install dig (sudo apt-get install dig). 
If you're using Slicehost manager to add the DNS entry, you'd enter the TXT record like this. Type: TXT Name: mail._domainkey Data: k=rsa; t=s; p=M5GfMA0...YOUR LONG KEY...fIDAQAB TTL seconds: 86400 I don't really understand why the name is set to "mail._domainkey" without a period on the end or without my domain, like "mail._domainkey.feedmailpro.com". But whatever, it seems to work so I'm happy. If you're trying to duplicate this, here are the instructions I started with: https://help.ubuntu.com/community/Postfix/DKIM
{ "source": [ "https://serverfault.com/questions/52830", "https://serverfault.com", "https://serverfault.com/users/15088/" ] }
52,861
I have a free Dropbox account (2GB), and I was wondering how the versioning of large files works. I have a full backup of all my web files that sits at just over 1GB. After the initial upload of 1GB, every time it syncs will Dropbox figure out the delta of the file, or will it have to upload the entire thing again to version it? It would be cool to always have an up-to-date version of a large file, but I don't want to kill my bandwidth uploading 1GB every time. Is this possible? Thanks,
Dropbox uses a binary diff algorithm to break down all files into blocks, and only uploads blocks that it doesn't already have in the cloud. All of this is done locally on your computer. Dropbox doesn't just use your files that you have already uploaded, it aggregates everyone's files into one database of blocks, and checks each local block hash against that database. This means that if someone else has uploaded the same file as yourself (say for example, the latest Ubuntu ISO), then the upload will seem instant as there is nothing to upload, but if you are updating a file that changes regularly, like your backup file, then only the changes are uploaded. If you upload a totally unique file, then you have to wait for it all to upload.
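To make the idea concrete, here is a rough Python sketch of fixed-size block deduplication. It is not Dropbox's actual code, and the 4 MB block size and SHA-256 hash are assumptions chosen purely for illustration:
    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # assumed block size, purely illustrative

    def block_hashes(path):
        # split the file into fixed-size blocks and hash each one
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                yield hashlib.sha256(block).hexdigest(), block

    def blocks_to_upload(path, hashes_already_on_server):
        # only blocks the server has never seen need to cross the wire
        return [(h, b) for h, b in block_hashes(path)
                if h not in hashes_already_on_server]
With a scheme like this, a small change in a 1GB backup file only dirties the blocks that actually changed, which is why re-syncing it is cheap; an insertion near the start of the file that shifts everything afterwards would be the worst case for fixed-size blocks.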
{ "source": [ "https://serverfault.com/questions/52861", "https://serverfault.com", "https://serverfault.com/users/3260/" ] }
52,983
I'm trying to use robocopy to transfer a single file from one location to another but robocopy seems to think I'm always specifying a folder. Here is an example: robocopy "c:\transfer_this.txt" "z:\transferred.txt" But I get this error instead: 2009/08/11 15:21:57 ERROR 123 (0x0000007B) Accessing Source Directory c:\transfer_this.txt\ (note the \ at the end of transfer_this.txt ) But if I treat it like an entire folder: robocopy "c:\folder" "z:\folder" It works but then I have to transfer everything in the folder. How can I only transfer a single file with robocopy ?
See: Robocopy /? Usage : ROBOCOPY source destination [file [file]...] [options] robocopy c:\folder d:\folder transfer_this.txt
{ "source": [ "https://serverfault.com/questions/52983", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
53,061
I need to return a 503 status code from one of my sites while it's down for maintenance, in the time-honoured SE-friendly fashion. I can't seem to work out how to do this without invoking external scripts, which I'd rather avoid. Is there an Apache directive which will allow me to return an arbitrary HTTP status code without resorting to hacks like invoking a PHP script which sets the status header?
This serves every request a static holding page along with the 503 status. RedirectMatch 503 ^/(?!holding\.html) ErrorDocument 503 /holding.html Header always set Retry-After "18000" RedirectMatch is used to negate the holding page itself which would otherwise create an infinite loop. mod_headers is used to set a Retry-After header so that you can tell Google/other bots etc that they should come back after 18000 seconds (5 hours in this example). You can run sudo a2enmod headers to activate mod_headers (which is required for the Header directive).
{ "source": [ "https://serverfault.com/questions/53061", "https://serverfault.com", "https://serverfault.com/users/16414/" ] }
53,080
I have hosts A, B and C. From host A I can access only B through ssh. From B I can access C. I want to be able to run X11 programs on C and forward the display to A. I tried this: A$ ssh -X B B$ ssh -X C C$ xclock Error: Can't open display: But it doesn't work.
There are several ways to do this; the one I prefer is to forward the ssh port: First, connect to machine B and forward [localPort] to C:22 through B A$ ssh -L [localPort]:C:22 B Next, connect to C from A through this newly-created tunnel using [localPort], forwarding X11 A$ ssh -X -p [localPort] localhost Now we can run X11 programs on C and have them display on A C$ xclock [localPort] can be any port that is not already in use on A; I often use 2222 for simplicity.
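As a side note for newer setups (a sketch using the host names from the question): more recent OpenSSH versions can express the same jump without manually managing a forwarded port.
    # OpenSSH 7.3 and later: jump through B in a single command
    ssh -X -J B C
    # OpenSSH 5.4 and later: put the jump in ~/.ssh/config so a plain "ssh -X C" works
    Host C
        ProxyCommand ssh -W %h:%p B
        ForwardX11 yes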
{ "source": [ "https://serverfault.com/questions/53080", "https://serverfault.com", "https://serverfault.com/users/8975/" ] }
53,314
I am trying to configure a demo machine, which is an Eee PC with Windows 7 Home Premium, all the drivers properly loaded (don't ask me why it's Home edition), and IIS7 installed. I've deployed the application to be demoed on the machine, which is an ASP.NET MVC site, added the website via the console, and added an app pool. The app pool runs as NetworkService and guests authenticate as IUSR. I've added modify rights for NetworkService and read & execute rights for IUSR to the website's folder and its content. When I hit the root of the web, say http://example.com/ , I get proper HTML rendered from the website (which means the application works) but the problem is that all static content returns blank. I'm not sure why this is happening. No 404 or 500 error page, just a plain empty response when I access static content. All ASP.NET-generated content works fine (albeit it looks a little strange since CSS and images won't load). Please help; the IIS7 Management Console is very confusing to me and I need the machine by tomorrow.
Did you turn on the static content feature? http://weblogs.asp.net/anasghanem/archive/2008/05/23/don-t-forget-to-check-quot-static-content-service-quot-in-iis7-installation.aspx
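If the feature turns out to be missing, one way to add it from the command line is sketched below; the feature name is written from memory, so list the available features first to confirm it on your edition of Windows 7:
    dism /online /get-features | findstr /i static
    dism /online /enable-feature /featurename:IIS-StaticContent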
{ "source": [ "https://serverfault.com/questions/53314", "https://serverfault.com", "https://serverfault.com/users/1398/" ] }
53,577
What do the ' && ', ' \ ' and ' - ' mean at the end of bash commands? In particular, I came across the following combination of lines that are supposed to add public keys to Ubuntu's aptitude package manager, what are those characters used here for? gpg --keyserver keyserver.ubuntu.com --recv 26C2E075 && \ gpg --export --armor 26C2E075 | sudo apt-key add - && \ sudo apt-get update
"&&" is used to chain commands together, such that the next command is run if and only if the preceding command exited without errors (or, more accurately, exits with a return code of 0). "\" by itself at the end of a line is a means of concatenating lines together. So the following two lines: gpg --keyserver keyserver.ubuntu.com --recv 26C2E075 && \ gpg --export --armor 26C2E075 are processed exactly the same as if the line was written as the single line: gpg --keyserver keyserver.ubuntu.com --recv 26C2E075 && gpg --export --armor 26C2E075 "-" is a command line argument with no specific bash function. Exactly how it is handled is dependent on the command being run (in this case apt-key ). Depending on context, it's typically used to indicate either " read data from stdin rather than from a file ", or " process the remainder of the line as data rather than as command line arguments ".
{ "source": [ "https://serverfault.com/questions/53577", "https://serverfault.com", "https://serverfault.com/users/3320/" ] }
53,699
We're using tail to continuously monitor several logs, but when a log is rotated the tail for that file will cease. As far as I understand, the problem is that when the log is rotated, there is a new file created, and the running tail process doesn't know anything about that new file handle.
Ah, there's a flag for this. Instead of using tail -f /var/log/file we should be using tail -F /var/log/file tail -F translates to tail --follow=name --retry as in: --follow=name : follow the name of the file instead of the file descriptor --retry : if the file is inaccessible, try again later instead of dying
{ "source": [ "https://serverfault.com/questions/53699", "https://serverfault.com", "https://serverfault.com/users/1595/" ] }
53,954
I had a web server that ran Ubuntu, but the hard drive failed recently and everything was erased. I decided to try CentOS on the machine instead of Ubuntu, since it's based on Red Hat. That association meant a lot to me because Red Hat is a commercial server product and is officially supported by my server's manufacturer. However, after a few days I'm starting to miss Ubuntu. I have trouble finding some of the packages I want in the CentOS repositories, and the third-party packages I've tried have been a hassle to deal with. My question is, what are the advantages of using CentOS as a server over Ubuntu? CentOS is ostensibly designed for this purpose, but so far I would prefer to use a desktop edition of Ubuntu over CentOS. Are there any killer features of CentOS which make it a better server OS? Is there any reason I shouldn't switch back to Ubuntu Server or Xubuntu?
There are no benefits that I can discern for using CentOS (or RHEL) over Ubuntu if you are equally familiar with using both OSes. We use RHEL and CentOS heavily at work, and it's just painful -- we're building custom packages left and right because the OS doesn't come with them, and paid RedHat support is worse than useless, being chock full of "pillars of intransigence" who see it as their duty to make sure you never get to speak to anyone who can actually answer your question. (I've heard that if you spend enough money with them their support improves markedly, so if you're a fortune 500 you'll probably have better luck than we do -- but then again, if you're fortune 500 you're probably chock full of useless oxygen thieves internally anyway, so it feels natural to deal with another bunch of them) That much-vaunted "hardware support" pretty much always comes in the form of puke-worthy binary-only drivers and utilities that I'd prefer to avoid by almost any means necessary. Just choosing hardware that has proper support to begin with is much less hassle than trying to deal with the crap utilities. The long-term stability of the OS platform isn't a differentiating factor -- Ubuntu has LTS (long-term support) releases that are around for five years (and which are coming out more often than RHEL releases, so if you want the latest and greatest you're not waiting as long), so there's no benefit there either . Proprietary software doesn't get much of a benefit, either -- installing Oracle on RedHat is just as much of a "genitals in the shredder" experience as installing it on Debian, and you won't get any useful help from Oracle either (proprietary software support is near-universally worthless in my long and painful experience). The only benefit to running CentOS is if you are more comfortable working in that environment and have your processes and tools tuned that way.
{ "source": [ "https://serverfault.com/questions/53954", "https://serverfault.com", "https://serverfault.com/users/14723/" ] }
54,152
I create cron-jobs in Ubuntu by placing the executable in one of /etc/cron.{daily,hourly,monthly,weekly} . There are lots of directories starting with cron: kent@rat:~$ ls -ld /etc/cron* drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.d drwxr-xr-x 2 root root 4096 2009-07-16 13:17 /etc/cron.daily drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.hourly drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.monthly -rw-r--r-- 1 root root 724 2009-05-16 23:49 /etc/crontab drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.weekly I would like to get e-mail from my scripts when: A script fails and gives an exit code of non-zero. The script has something to tell me I have SSMTP installed and working, I send my mail from my Google-account. The fact that SSMTP can only send mail using one account isn't a problem for me. It's just a home server and the users I have do not have the ability to add cron-jobs. I would like to know how the mailing from scripts usually works in Linux/Unix in general and in Ubuntu specifically. I would also like to know of a good way for me to get mails in the two situations above.
By default, cron will email the owner of the account under which the crontab is running. The system-wide crontab is in /etc/crontab and runs under the user `root'. Because root is used widely, I'd recommend adding a root alias to your /etc/aliases file anyway. (run 'newaliases' after) The normal way to structure this is for root to be aliased to another user on the system, e.g. for me I'd alias 'root' to 'phil' (my user account) and alias 'phil' to my external email address. If you have a specific user's cron that you'd like emailed to you on output, you can use /etc/aliases again (providing you have superuser access) to redirect the user to another email address, or you can use the following at the top of your crontab: MAILTO="[email protected]" If mail should be sent to a local user, you may put just the username instead: MAILTO=someuser If you need more information see crontab(5) by running: man 5 crontab
{ "source": [ "https://serverfault.com/questions/54152", "https://serverfault.com", "https://serverfault.com/users/32224/" ] }
54,287
Am I correct that Ubuntu desktop and server are the same OS, but that desktop runs X and lacks things that a server might have, like a DHCP server, mysqld, Apache, etc.? And that if I add those items it would in fact be a server with X instead of just the command line that is given with the server? Thank you. EDIT: Is this pretty much the same with all Linux distros? I like Fedora, but I only saw Fedora Desktop. I can update it to become a server, right?
The differences are just in what's bundled as the default packaging to make things easier. In reality the difference between a server and a workstation is just the purpose they're used for; Linux is Linux in either case (indeed, Windows NT variants were largely just differences in packaged tools/DLLs and some registry hacks to enforce licensing differences for how much you paid for your license...the kernel was the same and the base OS was the same). In other words, Ubuntu Server and Ubuntu Desktop are two sides of the same coin. Server is just meant to ship with some packages by default to make it easier to set up a LAMP server or file server, while desktop looks nicer and has office tools/GUI/etc. for desktop users.
{ "source": [ "https://serverfault.com/questions/54287", "https://serverfault.com", "https://serverfault.com/users/15827/" ] }
54,357
Possible Duplicate: How to use DNS to redirect domain to specific port on my server I want to trick my browser into going to localhost:3000 instead of xyz.com. I went into /etc/hosts on OS X 10.5 and added the following entry: 127.0.0.1:3000 xyz.com That does not work but without specifying the port the trick works. Is there a way to do this specifying the port?
No, the hosts file is simply a way to statically resolve names when no DNS server is present.
{ "source": [ "https://serverfault.com/questions/54357", "https://serverfault.com", "https://serverfault.com/users/3567/" ] }
54,413
I am using MySQL Administrator for making my database backup. I can perfectly back up the whole database with all its tables. There are some tables whose size is very big so I wonder if I could only back up the tables' structure (only their elements) but not their data.
Use the --no-data switch with mysqldump to tell it not to dump the data, only the table structure. This will output the CREATE TABLE statement for the tables. Something like this mysqldump --no-data -h localhost -u root -ppassword mydatabase > mydatabase_backup.sql To target specific tables, enter them after the database name. mysqldump --no-data -h localhost -u root -ppassword mydatabase table1 table2 > mydatabase_backup.sql http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_no-data http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
{ "source": [ "https://serverfault.com/questions/54413", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
54,591
I've written a web application for which the user interface is in Dutch. I use the system's date and time routines to format date strings in the application. However, the date strings that the system formats are in English but I want them in Dutch, so I need to set the system's locale. How do I do that on Debian? I tried setting LC_ALL=nl_NL but it doesn't seem to have any effect: $ date Sat Aug 15 14:31:31 UTC 2009 $ LC_ALL=nl_NL date Sat Aug 15 14:31:36 UTC 2009 I remember that setting LC_ALL on my Ubuntu desktop system works fine. Do I need to install extra packages to make this work, or am I doing it entirely wrong?
Edit /etc/default/locale and set the contents to: LANG="nl_NL.UTF-8" You can check which locales you currently have generated using: locale -a You can generate more by editing /etc/locale.gen and uncommenting the lines for the locales that you want to enable. Then you can generate them by running the command: locale-gen You can find a list of supported locales in /usr/share/i18n/SUPPORTED There is more information available on the Debian wiki .
{ "source": [ "https://serverfault.com/questions/54591", "https://serverfault.com", "https://serverfault.com/users/16845/" ] }
54,736
In Linux, how do I check if a library is installed or not? (from the command line of course). In my specific case now, I want to check whether libjpeg is installed.
To do this in a distro-independent* fashion you can use ldconfig with grep, like this: ldconfig -p | grep libjpeg If libjpeg is not installed, there will be no output. If it is installed, you will get a line for each version available. Replace libjpeg with any library you want, and you have a generic, distro-independent* way of checking for library availability. If for some reason the path to ldconfig is not set, you can try to invoke it using its full path, usually /sbin/ldconfig. * 99% of the time
{ "source": [ "https://serverfault.com/questions/54736", "https://serverfault.com", "https://serverfault.com/users/1618/" ] }
54,800
Let's say I have a domain that I run a web application on, for example cranketywidgets.com, and I'm using Google Apps for handling email for people working on that domain, for example [email protected], [email protected], [email protected] and so on. Google's own mail services aren't always the best for sending automated reminder emails, comment notifications and so on, so the current solution I plan to pursue is to create a separate subdomain called mailer.cranketywidgets.com, run a mail server off it, and create a few accounts specifically for sending these kinds of emails. What should the MX records and A records look like here? I'm somewhat confused by the fact that MX records can be names, but that they must eventually resolve to an A record. What should the records look like? cranketywidgets.com - A record to the actual server, like 10.24.233.214 cranketywidgets.com - MX records for Google's email applications mailer.cranketywidgets.com - MX name pointing to the server's IP address I would greatly appreciate some help on this - the answer seems like it'll be obvious, but email spam is a difficult problem to solve.
You should never point your MX to an IP address if you want to be RFC compliant. Make an A record for the IP address instead and point the MX record at it. The zone should then look like this, @ IN MX 1 ASPMX.L.GOOGLE.COM. @ IN MX 5 ALT1.ASPMX.L.GOOGLE.COM. @ IN MX 5 ALT2.ASPMX.L.GOOGLE.COM. @ IN MX 10 ASPMX2.GOOGLEMAIL.COM. @ IN MX 10 ASPMX3.GOOGLEMAIL.COM. @ IN MX 10 ASPMX4.GOOGLEMAIL.COM. @ IN MX 10 ASPMX5.GOOGLEMAIL.COM. @ IN A 10.24.233.214 mailer IN A 10.24.233.214 mailer IN MX 10 mailer.cranketywidgets.com.
{ "source": [ "https://serverfault.com/questions/54800", "https://serverfault.com", "https://serverfault.com/users/10967/" ] }
54,949
I would like to write a simple backup script that saves some data to a FAT drive. Should I reformat the drive and use a better file system or is it possible to use rsync with FAT? If so, what problems might I run into? Would performance be a lot worse? EDIT: This is on linux, didn't even know there was a rsync for windows. The sources are various file systems (it's a mess), and the destination is currently formatted with FAT32. Thank you for your answers, I'll probably go for a reformat, since I'm not completely sure about the file sizes we'll have.
I use rsync to back up the photos I store and process on a laptop running Linux (Ubuntu 10.4). I back them up to a very basic NAS with a 1TB hard disk formatted as FAT32. The NAS case and firmware are very basic, so reformatting the drive isn't an option. The command I use is: $ rsync --progress --modify-window=1 --update --recursive --times \ /home/mloskot/Pictures /mnt/nas/Pictures To allow correct time comparison, the --modify-window=1 option is used, because FAT32 records file timestamps with 2-second resolution, which is different from the filesystem(s) used on Linux. The --update option avoids unnecessary copying of existing files - it behaves like an incremental backup. In order to do a size-based comparison, you can specify the --size-only option.
{ "source": [ "https://serverfault.com/questions/54949", "https://serverfault.com", "https://serverfault.com/users/1342/" ] }
54,958
I've just read through a lot of MSDN documentation and I think I understand the different recovery models and the concept of a backup chain. I still have one question: Does a full database backup truncate the transaction log (using the full recovery model)? If yes: Where is this mentioned in the MSDN? All I could find was that only BACKUP LOG truncates the log. If no: Why? Since a full database backup starts a new backup chain, what's the point in keeping the transactions that were finished before the full backup active in the log?
Nope - it definitely doesn't. The only thing that allows the log to clear/truncate in the FULL or BULK_LOGGED recovery models is a log backup - no exceptions. I had this argument a while back and posted a long and detailed blog post with an explanation and a script that you can use to prove it to yourself at Misconceptions around the log and log backups: how to convince yourself . Feel free to follow up with more questions. Btw - also see the long article I wrote for TechNet Magazine on Understanding Logging and Recovery in SQL Server . Thanks
{ "source": [ "https://serverfault.com/questions/54958", "https://serverfault.com", "https://serverfault.com/users/17182/" ] }
54,981
I'm terrible at working out network subnets in my head. Is there some command line tool for linux (ubuntu packages a plus), that lets me put in 255.255.255.224 and it'll tell me that is a /27 ?
ipcalc can do this, for example: [kbrandt@kbrandt-opadmin: ~] ipcalc 192.168.1.1/24 Address: 192.168.1.1 11000000.10101000.00000001. 00000001 Netmask: 255.255.255.0 = 24 11111111.11111111.11111111. 00000000 Wildcard: 0.0.0.255 00000000.00000000.00000000. 11111111 => Network: 192.168.1.0/24 11000000.10101000.00000001. 00000000 HostMin: 192.168.1.1 11000000.10101000.00000001. 00000001 HostMax: 192.168.1.254 11000000.10101000.00000001. 11111110 Broadcast: 192.168.1.255 11000000.10101000.00000001. 11111111 Hosts/Net: 254 Class C, Private Internet if you entered a subnet mask instead of CIDR, you will still see the /## CIDR number after 'Network:', so it goes both ways. or with sipcalc : [kbrandt@kbrandt-opadmin: ~] sipcalc 192.168.1.1/24 <23403@8:55> -[ipv4 : 192.168.1.1/24] - 0 [CIDR] Host address - 192.168.1.1 Host address (decimal) - 3232235777 Host address (hex) - C0A80101 Network address - 192.168.1.0 Network mask - 255.255.255.0 Network mask (bits) - 24 Network mask (hex) - FFFFFF00 Broadcast address - 192.168.1.255 Cisco wildcard - 0.0.0.255 Addresses in network - 256 Network range - 192.168.1.0 - 192.168.1.255 Usable range - 192.168.1.1 - 192.168.1.254 The Ubuntu Packages are ipcalc and sipcalc: sudo apt-get install ipcalc sudo apt-get install sipcalc
{ "source": [ "https://serverfault.com/questions/54981", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
55,343
My server is running CentOS 5.3. I'm on a Mac running Leopard. I don't know which is responsible for this: I can log on to my server just fine via password authentication. I've gone through all of the steps for setting up PKA (as described at http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ssh-beyondshell.html ), but when I use SSH, it refuses to even attempt publickey verification. Using the command ssh -vvv user@host (where -vvv cranks up verbosity to the maximum level) I get the following relevant output: debug2: key: /Users/me/.ssh/id_dsa (0x123456) debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug3: start over, passed a different list publickey,gssapi-with-mic,password debug3: preferred keyboard-interactive,password debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password followed by a prompt for my password. If I try to force the issue with ssh -vvv -o PreferredAuthentications=publickey user@host I get debug2: key: /Users/me/.ssh/id_dsa (0x123456) debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug3: start over, passed a different list publickey,gssapi-with-mic,password debug3: preferred publickey debug3: authmethod_lookup publickey debug3: No more authentication methods to try. So, even though the server says it accepts the publickey authentication method, and my SSH client insists on it, I'm rebutted. (Note the conspicuous absence of an "Offering public key:" line above.) Any suggestions?
Check that your Centos machine has: RSAAuthentication yes PubkeyAuthentication yes in sshd_config and ensure that you have proper permission on the centos machine's ~/.ssh/ directory. chmod 700 ~/.ssh/ chmod 600 ~/.ssh/* should do the trick.
{ "source": [ "https://serverfault.com/questions/55343", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
55,355
What are the major differences between Windows Server 2008, 2008 SP2 and 2008 R2? Are the code bases for these OSes different? If I'm developing applications for any one of these three, should I be worried that it might not work on the other two?
Windows Server 2008 and Windows Server 2008 SP2 are the same operating system, just at a different service pack level (Windows Server 2008 started at the SP1 level because it was released quite a bit after Windows Vista and SP1 was already out). Windows Server 2008 R2 is the server release of Windows 7, so it's version 6.1 of the O.S.; it introduces quite a lot of new features, because it's actually a new release of the system. This is a good place to start: http://www.microsoft.com/windowsserver2008/en/us/whats-new.aspx . There are also differences at the GUI level, because WS2008R2 uses the same new GUI introduced with Windows 7 (new taskbar, etc.). Depending on what kind of applications you're developing, they may or may not encounter problems on different O.S. releases; you should definitely check MSDN. The single most important point: Windows Server 2008 R2 exists only for 64-bit platforms, there's no x86 version anymore.
{ "source": [ "https://serverfault.com/questions/55355", "https://serverfault.com", "https://serverfault.com/users/122256/" ] }
55,394
I am running Windows 7 RTM and have both physical drives BitLockered. Because my machine has a TPM it will boot all very nicely when I turn it on. But my employers would prefer if I was challenged for a password at boot time. I have found this article: http://4sysops.com/archives/review-windows-7-bitlocker/ that tells me which group policy flags to set to get it BitLocker to challenge for a PIN at startup. What I can't find is how to set this PIN given the system is already encrypted? I have also come across http://technet.microsoft.com/en-us/library/dd875532%28WS.10%29.aspx and am curious to know which of these recommendations it is safe to apply to an already encrypted system?
Found the answer. Assuming you have BitLocker up and running, make these changes: To enable TPM & PIN at boot: Using the Group Policy Editor (Start -> gpedit.msc and press Enter), go to: Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Bitlocker Drive Encryption > Operating System Drives and open the key "Require additional authentication at startup" Then enable that key and set " Configure TPM startup Pin: " to "Require startup PIN with TPM" To set the actual PIN, use the following in a CMD prompt manage-bde -protectors -add c: -TPMAndPIN This will prompt you for a PIN, which it then requires you to enter at boot.
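As a sanity check before rebooting (the drive letter here is just whichever volume you encrypted), manage-bde can show you the configured key protectors, and you should now see a TPM-and-PIN protector listed:

manage-bde -status C: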
{ "source": [ "https://serverfault.com/questions/55394", "https://serverfault.com", "https://serverfault.com/users/2068/" ] }
55,528
I need to create an NS record for a domain that is a CNAME, for the purpose of having two domains pointed at one IP, and not having to maintain the current IP address in two different places. The DNS provider for this domain is DynDNS, but they block this operation: CNAME cannot be created with label that is equal to zone name I can do this with another domain whose DNS is served by 1and1: root@srv-ubuntu:~# dig myseconddomain.co.uk ; <<>> DiG 9.4.2-P1 <<>> myseconddomain.co.uk ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61795 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myseconddomain.co.uk. IN A ;; ANSWER SECTION: myseconddomain.co.uk. 71605 IN CNAME myfirstdomain.co.uk. myfirstdomain.co.uk. 59 IN A www.xxx.yyy.zzz ;; Query time: 298 msec ;; SERVER: 10.0.0.10#53(10.0.0.10) ;; WHEN: Tue Aug 18 14:17:26 2009 ;; MSG SIZE rcvd: 78 Is this a breach of the RFCs or does DynDNS have a legitimate reason for blocking this action? Followup Thanks to the two answers already posted I now know that 1and1 IS breaching RFCs to do this. However it does work and they seem to support it. For a company that hosts so many domains it seems very odd that they get away with doing this on such a massive scale without objection. More followup The output of "dig myseconddomain.co.uk ns" as requested. root@srv-ubuntu:~# dig myseconddomain.co.uk ns ; <<>> DiG 9.4.2-P1 <<>> myseconddomain.co.uk ns ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18085 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2 ;; QUESTION SECTION: ; myseconddomain.co.uk. IN NS ;; ANSWER SECTION: myseconddomain.co.uk. 4798 IN NS ns67.1and1.co.uk. myseconddomain.co.uk. 4798 IN NS ns68.1and1.co.uk. ;; ADDITIONAL SECTION: ns67.1and1.co.uk. 78798 IN A 195.20.224.201 ns68.1and1.co.uk. 86400 IN A 212.227.123.89 ;; Query time: 59 msec ;; SERVER: 10.0.0.10#53(10.0.0.10) ;; WHEN: Wed Aug 19 12:54:58 2009 ;; MSG SIZE rcvd: 111
Correct, it is a breach of RFC 1034 , section 3.6.2, paragraph 3: ... If a CNAME RR is present at a node, no other data should be present; this ensures that the data for a canonical name and its aliases cannot be different. ... This applies here because the root of your zone must also have SOA and NS records.
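You can see the conflict for yourself with dig (example.co.uk below is just a placeholder for your own zone): the apex of any zone always answers for SOA and NS records, and that is exactly the "other data" a CNAME is not allowed to coexist with.

dig +noall +answer example.co.uk SOA
dig +noall +answer example.co.uk NS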
{ "source": [ "https://serverfault.com/questions/55528", "https://serverfault.com", "https://serverfault.com/users/1439/" ] }
55,560
When using rsync+ssh to access a remote machine, is there a way to "nice" the rsync process on the remote machine (to lower its priority)? Editing the question to clarify: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND backups 16651 86.2 0.1 3576 1636 ? Rs 11:06 0:06 rsync --ser... (rsync line snipped) This is a backup cron job that normally runs at 4am, but when I happen to be awake (and committing, or using Bugzilla hosted on that same machine), it kills server performance, so I wanted a quick "hack" to try and fix it a bit.
You can use the --rsync-path option, eg. rsync --rsync-path="nice rsync" foo remotebox:/tmp/
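If the remote box is hurting from disk contention rather than CPU, the same trick can stack ionice in front of it (this assumes ionice is available on the remote side and its I/O scheduler honours the idle class):

rsync --rsync-path="ionice -c3 nice rsync" foo remotebox:/tmp/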
{ "source": [ "https://serverfault.com/questions/55560", "https://serverfault.com", "https://serverfault.com/users/12036/" ] }
55,611
This is a Canonical Question about Hairpin NAT (Loopback NAT). The generic form of this question is: We have a network with clients, a server, and a NAT router. There is port forwarding on the router to the server so some of its services are available externally. We have DNS pointing to the external IP. Local network clients fail to connect, but external clients work. Why does this fail? How can I create a unified naming scheme (DNS names which work both locally and externally)? This question has answers merged from multiple other questions. They originally referenced FreeBSD, D-Link, MikroTik, and other equipment. They're all trying to solve the same problem, however.
Since this has been elevated to be the canonical question on hairpin NAT, I thought it should probably have an answer that was more generally-valid than the currently-accepted one, which (though excellent) relates specifically to FreeBSD. This question applies to services provided by servers on RFC1918-addressed IPv4 networks, which are made available to external users by introducing destination NAT (DNAT) at the gateway. Internal users then try to access those services via the external address. Their packet goes out from the client to the gateway device, which rewrites the destination address and immediately injects it back into the internal network. It is this sharp about-turn the packet makes at the gateway that gives rise to the name hairpin NAT, by analogy with the hairpin turn. The problem arises when the gateway device rewrites the destination address, but not the source address. The server then receives a packet with an internal destination address (its own), and an internal source address (the client's); it knows it can reply directly to such an address, so it does so. Since that reply is direct, it doesn't go via the gateway, which therefore never gets a chance to balance the effect of inbound destination NAT on the initial packet by rewriting the source address of the return packet. The client thus sends a packet to an external IP address, but gets a reply from an internal IP address. It has no idea that the two packets are part of the same conversation, so no conversation happens. The solution is, for packets which require such destination NAT and which reach the gateway from the internal network, to also perform source NAT (SNAT) on the inbound packet, usually by rewriting the source address to be that of the gateway. The server then thinks the client is the gateway itself, and replies directly to it. That in turn gives the gateway a chance to balance the effects of both DNAT and SNAT on the inbound packet by rewriting both source and destination addresses on the return packet. The client thinks it's talking to an external server. The server thinks it's talking to the gateway device. All parties are happy. A diagram may be helpful at this point. Some consumer gateway devices are bright enough to recognise those packets for which the second NAT step is needed, and those will probably work out-of-the-box in a hairpin NAT scenario. Others aren't, and so won't, and it is unlikely that they can be made to work. A discussion of which consumer-grade devices are which is off-topic for Server Fault. Proper networking devices can generally be told to work, but - because they are not in the business of second-guessing their admins - they do have to be told to do so. Linux uses iptables to do the DNAT thus: iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.3.11 which will enable simple DNAT for the HTTP port, to an internal server on 192.168.3.11. But to enable hairpin NAT, one would also need a rule such as: iptables -t nat -A POSTROUTING -d 192.168.3.11 -p tcp --dport 80 -j MASQUERADE Note that such rules need to be in the right place in the relevant chains in order to work properly, and depending on settings in the filter chain, additional rules may be needed to permit the NATted traffic to flow. All such discussions are outside the scope of this answer. But as others have said, properly-enabling hairpin NAT isn't the best way to handle the problem.
The best is split-horizon DNS, where your organisation serves different answers for the original lookup depending on where the requesting client is, either by having different physical servers for internal vs. external users, or by configuring the DNS server to respond differently according to the address of the requesting client.
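For the split-horizon approach, the usual tool is views in BIND. This is only a rough sketch - the zone name, file names and internal address range are all placeholders for your own:

view "internal" {
    match-clients { 192.168.0.0/16; };
    zone "example.com" { type master; file "db.example.com.internal"; };   // carries the RFC1918 address
};
view "external" {
    match-clients { any; };
    zone "example.com" { type master; file "db.example.com.external"; };   // carries the public address
};

Internal clients then resolve the name straight to the internal address, external clients get the public one, and nobody's traffic has to hairpin through the gateway at all.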
{ "source": [ "https://serverfault.com/questions/55611", "https://serverfault.com", "https://serverfault.com/users/6233/" ] }
55,880
My office job routinely sees me connected to a Linux box via VNC. Sometimes I start a remote job on the console, and realize later that it runs much longer than expected. (Should have started that one under Screen in the first place...) I don't want to keep my workstation running overnight just to keep the VNC session open; I would like to move that already-running remote job into a Screen session (on the remote box), so I can power down the workstation (and reconnect next morning). How can this be done, if at all?
Have a look at reptyr , which does exactly that. The github page has all the information. reptyr - A tool for "re-ptying" programs. reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home. USAGE reptyr PID "reptyr PID" will grab the process with id PID and attach it to your current terminal. After attaching, the process will take input from and write output to the new terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable way without patching your shell.)
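A typical rescue looks something like this (the PID is made up - substitute that of your own job; on Ubuntu kernels with the Yama LSM you may also need to relax ptrace_scope before reptyr can attach):

ssh user@remotebox
screen -S rescue                                  # open a screen session to adopt the job into
reptyr 12345                                      # grab the long-running process by PID
# if reptyr complains about ptrace permissions:
# echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

Once the job is attached you can detach the screen (Ctrl-a d), power down the workstation, and reattach in the morning with screen -r rescue.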
{ "source": [ "https://serverfault.com/questions/55880", "https://serverfault.com", "https://serverfault.com/users/20371/" ] }
55,984
On our production server there is a small drive for the root mount point /; /var/log is taking too much space and I have to manually delete some files. How can I move /var/log/ to, let's say, /home/log WITHOUT REBOOTING? Here is what I had in mind: $ mkdir /home/log $ rsync -a /var/log /home/log $ mount --bind /home/log /var/log $ /etc/init.d/rsyslog restart But I know that some services hold open file descriptors, so they'll continue to use the old /var/log inodes.
Proper design I assume you are unable to simply extend the filesystem in question (using lvextend && ext2online), because you do not use LVM or use the wrong filesystem type. Your approach What you've proposed might work if you signal the daemons with SIGHUP (kill -1 pid). Obviously you would need to later "mount -o bind / /somewhere" and clean up what has been left underneath the mounted /var/log. But it has a bad smell for me, especially for production. Avoid downtime, have a clean result (but complicated to do) Forget about the "mount -o bind" idea; create a new LV/partition, but don't mount it yet. lsof | grep /var/log # lists open files in /var/log For each daemon that has any open file (I would expect at least syslog, inetd, sshd): reconfigure the daemon not to log to /var/log refresh the daemon ( kill -1 or /etc/init.d/script reload ) confirm with lsof | grep /var/log that the daemon has closed its files Mount over /var/log. Restore the old configurations, SIGHUP/reload the daemons again. Easy way (downtime) Create a new LV/partition and mount it properly over either /var or /var/log. The easy way is to take the server down to maintenance mode (single-user mode), and use the actual console (not ssh) for the operation.
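The zero-downtime loop above boils down to something like this (a sketch only - the LV device path and init script names are invented and will differ on your system):

lsof | grep /var/log            # see which daemons still hold files open there
/etc/init.d/rsyslog reload      # after pointing each daemon's config elsewhere, reload it (or kill -1 <pid>)
lsof | grep /var/log            # repeat until nothing is listed
mount /dev/vg0/log /var/log     # now mount the new LV/partition on top
                                # then point the daemons back at /var/log and reload once more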
{ "source": [ "https://serverfault.com/questions/55984", "https://serverfault.com", "https://serverfault.com/users/14987/" ] }
56,040
What tool can I use to generate a Windows wallpaper at logon that contains user info? I've seen it in several places, like in the screenshot. I found whoami , but it looks different and with less information than the screenshot I uploaded. (maybe the screenshot is a heavily configured version of it) I've seen the same information in several Windows installs, but couldn't find the tool to do it. Any pointer?
In the office we use BGInfo by Sysinternals. It's easy to configure and you can have it execute external scripts if you want it to pull in some specific information.
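The usual way to roll it out is a single line in a logon script, pointing at a shared copy of the tool and a saved .bgi configuration. The paths below are invented and the switches should be checked against the BGInfo documentation for your version:

\\fileserver\netlogon\Bginfo.exe \\fileserver\netlogon\standard.bgi /timer:0 /silent /nolicprompt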
{ "source": [ "https://serverfault.com/questions/56040", "https://serverfault.com", "https://serverfault.com/users/7790/" ] }
56,136
Slicehost.com vs Linode.com - which one do you recommend? Are there any major differences between the two? What are your personal experiences?
I've used Linode for about a year now, specifically their Dallas DC. I have nothing but praise for them. The only thing they could do to make me happier is to make it free (but with their referral program it's been free for 6 months now!). I've blogged a bit about them: Linode Review Bring your Linode Home with You There are comments on those articles from other users. In the interest of full disclosure, those articles link to Linode with my referral code - if you find the articles helpful, I'd appreciate the referral. I've heard SliceHost is good, but there have been a few people who came to Linode from SliceHost and are happier with their Linode.
{ "source": [ "https://serverfault.com/questions/56136", "https://serverfault.com", "https://serverfault.com/users/10157/" ] }
56,148
I keep hearing about PHP (opcode) caches like APC, XCache, Memcache, eAccelerator, etc., but I could never figure out how to go about choosing one. Apart from the performance benefit, which a caching system is supposed to deliver, which other factors should be a point of concern? For example, why would you say cache system X is better than Y? I am less worried about relative performance gain; small differences between any two systems matter less. If a generic answer to my question is not possible, here are a few pointers. I use a dedicated VPS with Mediatemple (with root access). RAM is 512 MB (physical) + 400 MB (swap). I am concerned about WordPress and its cousins WordPress-MU and BuddyPress; 90% of our code/sites fall into the WordPress family. Thanks in advance for some help.
The products you list serve different purposes. OPCode caches There are many PHP accelerators (opcode caches), as seen on the Wikipedia list. As is common with open source products, they are all fairly similar. XCache is the lighttpd PHP accelerator, and is the default choice when you are running that HTTPd. It works well with Apache as well; however, APC seems to "play well with others" slightly better, socially speaking, being officially supported as part of PHP and released in step with the official PHP distribution. I abandoned using eAccelerator due to its slowing development, its lag behind PHP releases, and the officially blessed status APC offers with similar performance. These products typically are drop-in; no code changes, instant performance boost. With large codebases (Drupal, WordPress) the performance can be up to 3x better while lowering response time and memory usage. Data Caching Memcache is a slightly different product -- you might think of it as a lightweight key-value system that can be scaled to multiple servers. Software has to be enhanced to support Memcache, and it solves certain problems better than others. If you had a list of realtime stock values on your website, you might use Memcache to keep a resident list of the current values that is displayed across your website. You might use it to store session data for short-term reuse. You wouldn't use it for other things such as full-page caches, or as a replacement for MySQL. There are also WordPress addons such as WP-Super-Cache that can drastically improve WordPress' performance (in fact, WP-Super-Cache can rival static HTML based sites in many cases). In summary -- I would highly recommend APC if you want a "set it and forget it", well supported product.
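If you do go with APC, getting it onto a VPS like the one described is usually just a PECL build plus an ini line. This is only a sketch - the ini path and a sensible shared-memory size depend on your distro and your site, so treat the values below as guesses to tune:

sudo pecl install apc
echo "extension=apc.so" | sudo tee /etc/php5/conf.d/apc.ini
echo "apc.shm_size=64"  | sudo tee -a /etc/php5/conf.d/apc.ini
sudo /etc/init.d/apache2 restart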
{ "source": [ "https://serverfault.com/questions/56148", "https://serverfault.com", "https://serverfault.com/users/17440/" ] }
56,280
I'm looking for a quick, simple, and effective way to erase the hard drives of computers that my company will be getting rid of (donation to charity, most likely). Ideally, I would like a single-purpose bootable utility CD that upon booting, finds all attached hard drives and performs an "NSA grade" disk erasure. Is anyone aware of such a utility (even one not quite as automated as what I've described)?
DBAN: Darik's "Boot and Nuke" bootable CD will do this. It takes a while, but that is because it really makes sure everything gets erased when you use the longer format options. Keep in mind 'sure' and 'fast' are opposing forces with something like DBAN: the faster the wipe, the easier it will be to recover the data. Other Options: If you have a lot of drives, you might consider looking at 3rd-party vendors that provide this service; lots of companies that shred paper will do this as well (for tapes and hard drives). If this is something you are going to be doing a lot in the future, you might want to buy a degausser. Both the 3rd-party vendor and the degausser options will destroy the drives for future use, but you could still donate the rest of the hardware.
{ "source": [ "https://serverfault.com/questions/56280", "https://serverfault.com", "https://serverfault.com/users/4159/" ] }
56,394
How do I enable apache modules from the command line in RedHat? On Debian/Ubuntu systems I use a2enmod to enable modules from the command line. Is there an equivalent for RedHat/CentOS type systems?
There is no equivalent. Debian/Ubuntu butcher the apache configuration into a large number of files, where directories of mods and sites enabled are symlinked to other snippets of configuration files. The a2enmod/a2ensite scripts just manipulate these symlinks. debian$ ls /etc/apache2/mods-enabled lrwxrwxrwx 1 root root 28 2009-03-12 18:02 alias.conf -> ../mods-available/alias.conf lrwxrwxrwx 1 root root 28 2009-03-12 18:02 alias.load -> ../mods-available/alias.load lrwxrwxrwx 1 root root 33 2009-03-12 18:02 auth_basic.load -> ../mods-available/auth_basic.load lrwxrwxrwx 1 root root 33 2009-03-12 18:02 authn_file.load -> ../mods-available/authn_file.load lrwxrwxrwx 1 root root 36 2009-03-12 18:02 authz_default.load -> ../mods-available/autoindex.load lrwxrwxrwx 1 root root 26 2009-03-12 18:02 env.load -> ../mods-available/env.load lrwxrwxrwx 1 root root 27 2009-03-12 18:02 mime.conf -> ../mods-available/mime.conf lrwxrwxrwx 1 root root 27 2009-03-12 18:02 mime.load -> ../mods-available/mime.load lrwxrwxrwx 1 root root 34 2009-03-12 18:02 negotiation.conf -> ../mods-available/negotiation.conf lrwxrwxrwx 1 root root 34 2009-03-12 18:02 negotiation.load -> ../mods-available/negotiation.load lrwxrwxrwx 1 root root 27 2009-06-16 21:47 php5.conf -> ../mods-available/php5.conf lrwxrwxrwx 1 root root 27 2009-06-16 21:47 php5.load -> ../mods-available/php5.load On RedHat systems the apache configuration is by default held in one file, /etc/httpd/conf/httpd.conf. All modules are loaded from this file, and can be disabled by commenting out the appropriate LoadModule statement. ... LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so ... What RedHat/CentOS are doing is giving you a pretty stock apache setup, while Debian adds its own "improvements". You could of course use the Debian split-config system as a template to make your own, and copy the scripts. However, the main argument for the Debian setup is that apache module packages can install their own config files, so without that it's significantly less useful. Edit: If you're looking for an equivalent way of scripting this then I suggest you use the /etc/httpd/conf.d directory; any config files in there will be included. Depending on how complicated the script is, it might make sense to directly write one-line files into conf.d, or use symlinks for more complicated bits.
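A rough equivalent of "enabling a module" on RedHat/CentOS is therefore just dropping a one-line file into conf.d and reloading. The module name here is purely illustrative - use whichever module you actually need, and first check it isn't already loaded in httpd.conf:

echo 'LoadModule speling_module modules/mod_speling.so' > /etc/httpd/conf.d/speling.conf
apachectl -t && apachectl graceful    # syntax-check, then reload without dropping connections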
{ "source": [ "https://serverfault.com/questions/56394", "https://serverfault.com", "https://serverfault.com/users/17507/" ] }
56,497
I am running a half dozen different cron jobs from my hosting at Hostmonster.com. When a cronjob has been executed I receive an email with the output of the script. The email comes in the format of: From: Cron Daemon Subject: Cron /ramdisk/bin/php5 -c /home5/username/scheduled/optimize_mysql.bash The problem with this is that the subject of the email makes it very hard to read which cronjob the email is pertaining to. Is there a way to modify the subject of a cronjob email so that it's easier to read? For example: From: Cron Daemon Subject: Optimize MySQL Database
Or use the sh noop command (:) 0 9-17 * * 1-5 : Queue Summary; PATH=/usr/sbin qshape The subject still looks kludgey, but at least it's descriptive and requires no extraneous scripts.
{ "source": [ "https://serverfault.com/questions/56497", "https://serverfault.com", "https://serverfault.com/users/17540/" ] }
56,498
I have a Rails app using the Rails javascript helpers to concat all the javascript together into a single file. In addition my web server is serving this javascript file to the end users as gzip compressed. As modern browsers will request the compressed version - is there any real benefit to me bothering to minify the javascript on each deploy? I'm sure it will save some bandwidth but will it be worth it if the content was already compressed.
{ "source": [ "https://serverfault.com/questions/56498", "https://serverfault.com", "https://serverfault.com/users/12978/" ] }
56,539
Let's say I have a webserver called 'www'. www.example.com resolves to the IP address of that machine. Then I want to make some virtual hosts, and DNS records for them, like webmail.example.com. For 'webmail', should I put in an A record with www's IP address, or should I create a CNAME to www? What's cleaner, more robust, better?
There are two alternate views of this question, and it's one that is ultimately going to be debated forever. I'm not going to give my opinion (because I'm torn myself), but the general arguments each way typically are: You should define A records for your physical machines, and then CNAME services onto those machines. This does make it rather clear as to what is what, and in the event that you need to renumber there's not a lot of records to change -- just the machine records. On the other hand, it does increase your DNS lookup load somewhat, and "auxiliary" IPs (think SSL vhosts) don't fit neatly into this model. The literal meaning of "canonical name" (CNAME) is to define strict aliases of the same name (think mail and smtp ), and if you have multiple services running on the same machine they should all have A records, because it reduces load on DNS and some services (NS records and, to a lesser extent, MX records) really aren't impressed with dealing with CNAMEs, so if you have to handle those services differently anyway, we may as well do it for everything.
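For the record, the two camps look like this in a zone file (a sketch using documentation addresses - substitute your own names and IPs):

; camp 1: one A record per machine, CNAMEs for the services it hosts
www      IN A      192.0.2.10
webmail  IN CNAME  www
; camp 2: plain A records for everything
www      IN A      192.0.2.10
webmail  IN A      192.0.2.10

With camp 1 a renumbering touches a single record; with camp 2 you save the extra lookup and avoid the CNAME restrictions on names that also carry MX or NS data.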
{ "source": [ "https://serverfault.com/questions/56539", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
56,566
Is it possible to configure ssh (on linux) to allow access for tunneling only? Ie user can setup tunnels but cannot get a shell/access files?
Yes, just use /bin/false as shell and instruct the user to start the tunneling SSH process without executing any remote command (i.e. the -N flag for OpenSSH): ssh -N -L 1234:target-host:5678 ssh-host
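To flesh that out a little (the username and port are placeholders): give the account a non-interactive shell, and optionally pin the key down in authorized_keys so it can only forward to the one destination you intend:

sudo usermod -s /bin/false tunneluser
# in ~tunneluser/.ssh/authorized_keys, prefix the key with restrictive options:
# no-pty,permitopen="target-host:5678" ssh-rsa AAAA... tunneluser@client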
{ "source": [ "https://serverfault.com/questions/56566", "https://serverfault.com", "https://serverfault.com/users/3211/" ] }
56,588
Server A used to be a NFS server. Server B was mounting an export of that. Everything was fine. Then A died. Just switched off. Gone. Vanished. However that folder is still mounted on B. I obviously can't cd into it or anything. However umount /mnt/myfolder just hangs and won't umount. Is there anyway to umount it without restarting B? Both client and server are Linux machines.
Assuming Linux: umount -f -l /mnt/myfolder Will sort of fix the problem: -f Force unmount (in case of an unreachable NFS system). (Requires kernel 2.1.116 or later.) -l Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.) -f also exists on Solaris and AIX.
{ "source": [ "https://serverfault.com/questions/56588", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
56,667
As per the title: How can I tell how much RAM is installed on a FreeBSD server? Thanks!
sysctl hw.physmem
{ "source": [ "https://serverfault.com/questions/56667", "https://serverfault.com", "https://serverfault.com/users/51807/" ] }
56,672
I have a Thecus N5200 Pro integrated into my Windows 2003 AD network. My current backup solution involves Backup Exec 11D but this requires a running service for windows boxes or a similar daemon for linux machines. The N5200 runs a custom linux kernel but as of yet I am unable to add it to my backups through Backup Exec. Does anyone know of a method of backing up directly from the N5200 to Backup Exec without moving the data to an intermediary for archiving?
{ "source": [ "https://serverfault.com/questions/56672", "https://serverfault.com", "https://serverfault.com/users/9884/" ] }
56,691
And is it configurable? Can I set up Tomcat so that a URL with, say, 200K of query params goes through successfully to the contained servlet? Yes, I know one should use POST when you have lots of data; that's a less pleasant option in this particular case. The contained application (a search engine) expects a GET request to perform a search.
You can edit tomcat/conf/server.xml's HTTP/1.1 Connector entry, and add a maxHttpHeaderSize="65536" to increase from the default maximum of 8K or so, to 64K. I imagine that you could up this number as high as necessary, but 64K suffices for my needs at the moment so I haven't tried it. <Connector port="8080" maxHttpHeaderSize="65536" protocol="HTTP/1.1" ... />
{ "source": [ "https://serverfault.com/questions/56691", "https://serverfault.com", "https://serverfault.com/users/2404/" ] }
56,700
I see lots of information about enabling http compression for server responses but what about for incoming requests. Wouldn't it make sense for the browsers to compress large form posts before sending them to the server? Another example is a REST web service that we use. We have to send frequent PUT requests with large XML files (10+ MB) and would definitely see some bandwidth/speed benefits on both sides. So is this a solved problem on the server side or does each web application have to handle it individually?
To PUT data to the server compressed you must compress the request body and set the Content-Encoding: gzip header. The header itself must be uncompressed. It's documented in mod_deflate : The mod_deflate module also provides a filter for decompressing a gzip compressed request body. In order to activate this feature you have to insert the DEFLATE filter into the input filter chain using SetInputFilter or AddInputFilter. ... Now if a request contains a Content-Encoding: gzip header, the body will be automatically decompressed. Few browsers have the ability to gzip request bodies. However, some special applications actually do support request compression, for instance some WebDAV clients. And an article describing it is here : So how do you do it? Here is a blurb, again from the mod_deflate source code: only work on main request/no subrequests. This means that the whole body of the request must be gzip compressed if we chose to use this, it is not possible to compress only the part containing the file for example in a multipart request. Separately, a browser can request server response content to be compressed by setting Accept-Encoding header as per here : GET /index.html HTTP/1.1 Host: www.http-compression.com Accept-Encoding: gzip User-Agent: Firefox/1.0 This will return compressed data to the browser.
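From the client side this is easy to try with curl (the URL is a placeholder, and the server must already have the DEFLATE input filter described above configured, otherwise the body arrives as opaque gzip):

gzip -c payload.xml > payload.xml.gz
curl -X PUT \
     -H "Content-Encoding: gzip" \
     -H "Content-Type: application/xml" \
     --data-binary @payload.xml.gz \
     http://example.com/service/resource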
{ "source": [ "https://serverfault.com/questions/56700", "https://serverfault.com", "https://serverfault.com/users/3940/" ] }
56,709
So... I've built my web application (ASP.NET MVC). So far I have used the dev server. Works lovely. Now I want to test performance running on IIS 7.0. For the life of me I can't work out how to get the web app running. At the moment, Visual Studio won't set up the virtual directory - stating that I need to install a bunch of components: ASP.NET, IIS 6.0 configuration compatibility/metabase, and Windows authentication. How do I do this? Any help will be appreciated! Can anybody recommend a walkthrough for an IIS 7.0 newbie?
{ "source": [ "https://serverfault.com/questions/56709", "https://serverfault.com", "https://serverfault.com/users/17615/" ] }
56,999
At an Not For Profit/Charity I volunteer for occasionally we were very fortunate to be provided (donated) with a brand new 3Kva APC UPS. This is total overkill for our modest server rack (four mid-range servers and a switch!) but hey, I'll take what I can get! Pressing the TEST button on the front indicates that yes, the UPS does work. Brilliant. But it only tests it for about 15 seconds. My question is - will I degrade the UPS by unplugging it from the wall to see how long it will last? My plan is to unplug it, and wait until the battery meter reaches its last LED before plugging it back in, so that I know about how long I will have in the event of a power outage. Do people do this on a regular basis? I'm guessing no (Lead Acid is very different to Li-Ion batteries)... but what kind of harm would it do if this were to happen (on purpose) every 6 months?
You should consider turning off the circuit breaker to the outlet running the rack in lieu of unplugging the cord from the wall. The UPS is losing its electrical ground when you unplug it from the wall. While it's unlikely that anything would go wrong, the UPS designers "expect" that path to ground to remain available at all times, and if something did short during your test you might see sparks (smoke, flame, etc) when the electricity takes another path to ground. I've unplugged UPSs from the wall for testing before, but seeing a flash of "lightning" and hearing a loud "bang" coming out of a UPS during one such test gave me "religion" about not doing that again. After talking to an electrician friend I decided that, from then on, I'd do UPS tests that didn't interrupt the ground to the UPS. BTW: The PowerChute Network Shutdown software from APC is garbage. You might have a look at apcupsd . It runs under a variety of operating systems (Windows included) and is much easier to configure (and to replicate the configuration on multiple servers via copying files) than the APC alternative.
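Once apcupsd is talking to the UPS you can also read the charge, load and estimated runtime without pulling any plugs at all, which is a nice low-risk complement to the occasional full load test (this assumes the package is installed and the daemon is running):

apcaccess status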
{ "source": [ "https://serverfault.com/questions/56999", "https://serverfault.com", "https://serverfault.com/users/7709/" ] }
57,077
Basic question from a novice: What is the difference between authentication and authorization ?
Authentication is the process of verifying who you are. When you log on to a PC with a user name and password you are authenticating. Authorization is the process of verifying that you have access to something. Gaining access to a resource (e.g. directory on a hard disk) because the permissions configured on it allow you access is authorization.
{ "source": [ "https://serverfault.com/questions/57077", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
57,098
I have a fileserver where df reports 94% of / full. But according to du, much less is used: # df -h / Filesystem Size Used Avail Use% Mounted on /dev/sda3 270G 240G 17G 94% / # du -hxs / 124G / I read that open but deleted files could be responsible for it but a reboot did not fix this. This is Linux, ext3. regards
Ok, found it. I had an old backup in /mnt/Backup on the same filesystem, and then an external drive mounted in that place. So du didn't see the files. Cleaning this up gave me back my disk space. It probably happened this way: the external drive was once unmounted while the daily backup script ran.
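A handy way to spot this kind of thing without unmounting anything is to bind-mount the root filesystem somewhere else and run du against that copy, which shows what is hiding underneath every mount point (the paths below are from this particular case and are otherwise arbitrary):

mkdir /mnt/rootonly
mount --bind / /mnt/rootonly
du -shx /mnt/rootonly/mnt/Backup    # files hidden under the real /mnt/Backup mount show up here
umount /mnt/rootonly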
{ "source": [ "https://serverfault.com/questions/57098", "https://serverfault.com", "https://serverfault.com/users/17601/" ] }
57,222
How can I send ctrl + alt + del to a remote computer over Remote Desktop? For example, if I wanted to change the local admin password on a remote PC using a Remote Desktop connection, it would be helpful to be able to send the ctrl + alt + del key sequence to the remote computer. I would normally do this by pressing ctrl + alt + del and selecting the change password option. But I can't send ctrl + alt + del using Remote Desktop since this "special" key series is always handled by the local client.
ctrl + alt + end is the prescribed way to do this. Coding Horror has some other shortcuts.
{ "source": [ "https://serverfault.com/questions/57222", "https://serverfault.com", "https://serverfault.com/users/17431/" ] }
57,321
I currently have n data files in a directory where each file has at most 1 line of very long data. My directory structure is directory/ data1.json data2.json data3.json I know that at least one of those files contains the keyword I'm looking for, but since the one line of data is so long, it covers my entire terminal. How do I get the filename only after performing a keyword grep? The grep command I'm using is: grep keyword *.json
The -l argument should do what you want. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match. (-l is specified by POSIX.)
{ "source": [ "https://serverfault.com/questions/57321", "https://serverfault.com", "https://serverfault.com/users/16033/" ] }
57,360
Is it possible to run cp again after it was aborted and make it start where it ended last time (not overwrite data that's already copied, only copy what's still left)?
It's cases like this that have taught me to use rsync from the start. However in your case, you can use rsync now. It will only copy new data across, including if cp stopped half way through a big file. You can use it just like cp, like this: rsync --append /where/your/copying/from /where/you/want/to/copy
{ "source": [ "https://serverfault.com/questions/57360", "https://serverfault.com", "https://serverfault.com/users/12427/" ] }
57,377
I'm trying to install the imagick pecl extension on my Ubuntu server and am getting the below error. I've installed the ImageMagick rpm using aptitude already and the pecl extension is version 2.3.0. I've looked around online but can't find anything pointing me in the right direction. I also tried looking for anything that looked like it might be the Wand-config or MagickWand-config program that the error is mentioning but can't find any. steven@server:/var/www$ sudo pecl install imagick downloading imagick-2.3.0.tgz ... Starting to download imagick-2.3.0.tgz (86,976 bytes) .....................done: 86,976 bytes 12 source files, building running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 Please provide the prefix of Imagemagick installation [autodetect] : building in /var/tmp/pear-build-root/imagick-2.3.0 running: /tmp/pear/temp/imagick/configure --with-imagick *** ... snip ... *** checking ImageMagick MagickWand API configuration program... configure: error: not found. Please provide a path to MagickWand-config or Wand-config program. ERROR: `/tmp/pear/temp/imagick/configure --with-imagick' failed I snipped most of the output because it didn't really seem to helpful but I can post if requested. PHP is 5.2.4 ImageMagick is 6.3.7 Ran sudo aptitude upgrade today to upgrade RPMs as well before install ImageMagick
You need to install the ImageMagick devel package. In Ubuntu try: sudo apt-get install libmagickwand-dev libmagickcore-dev
{ "source": [ "https://serverfault.com/questions/57377", "https://serverfault.com", "https://serverfault.com/users/15900/" ] }
57,479
I'd like to be able to schedule a server reboot at a specific time, but not regularly. How can I do this without futzing with adding and removing cron entries?
If it is a one-time deal, you can use the shutdown command with -r as an argument. Instead of using shutdown now, you can add a time as a parameter (e.g. shutdown -r 12:30 ).
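If the reboot needs to happen on a different day, the at(1) job spooler is another cron-free option (this assumes the at package is installed and atd is running; the time spec is just an example):

echo "/sbin/shutdown -r now" | at 02:30 tomorrow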
{ "source": [ "https://serverfault.com/questions/57479", "https://serverfault.com", "https://serverfault.com/users/919/" ] }
57,529
I need to check the MD5 of a few files on Windows. Any recommendations on either a command line or an explorer-plugin utility?
There's a built-in Windows command-line tool for this: CertUtil -hashfile yourFileName MD5 The following rules are as of Windows 7 SP1, Windows Server 2012, and beyond. If they are known to work in older versions, they will be noted with: (independent of Windows version) You will need to open a Command Prompt OR PowerShell to run this command (a quick guide to opening CMD/PowerShell is at the bottom of the answer). You can find the checksum for a file using ANY of the following hashing algorithms, not just MD5: MD2 MD4 MD5 SHA1 SHA256 SHA384 SHA512 To get the current list of supported hash algorithms on your specific Windows machine (independent of Windows version), run CertUtil -hashfile -? The full format is below; optional parameters are in square brackets - just replace [HashAlgorithm] with your desired hash from above: CertUtil -hashfile InFile [HashAlgorithm] You can do this command-line operation for ANY file, whether it provides a certificate or not (independent of Windows version). If you leave off the [HashAlgorithm], it will default to the SHA1 checksum of your chosen file. It's helpful to note that [HashAlgorithm] is case-INsensitive in both CMD and PowerShell, meaning you can do any of the following (for example): CertUtil -hashfile yourFileName md5 certutil -hashfile yourFileName MD5 CertUtil -hashfile yourFileName sHa1 certutil -hashfile yourFileName SHA256 Quick: How to Open Command Prompt or PowerShell In case you do not know how to open the Command Prompt or PowerShell and you got here by search engine, the following is a quick guide that will work for Windows XP and beyond: Press [Windows]+[R] Then, type cmd (or powershell if Windows 8+) Press [OK] or hit Enter
{ "source": [ "https://serverfault.com/questions/57529", "https://serverfault.com", "https://serverfault.com/users/9060/" ] }
57,596
Using my Django app, I'm able to read from the database just fine. When the application didn't have permission to access the file, it gave me this error: attempt to write a readonly database Which made sense. So I edited the permissions on the file, so that the Apache process had write permissions. However, instead of it being able to write, I get this cryptic error: unable to open database file If it's useful, here's the entire output: Request Method: POST Request URL: http://home-sv-1/hellodjango1/polls/1/vote/ Exception Type: OperationalError Exception Value: unable to open database file Exception Location: /usr/lib/pymodules/python2.5/django/db/backends/sqlite3/base.py in execute, line 193 Python Executable: /usr/bin/python Python Version: 2.5.2 Python Path: ['/var/www', '/usr/lib/python2.5', '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk', '/usr/lib/python2.5/lib-dynload', '/usr/local/lib/python2.5/site-packages', '/usr/lib/python2.5/site-packages', '/usr/lib/pymodules/python2.5', '/usr/lib/pymodules/python2.5/gtk-2.0'] Server time: Sun, 23 Aug 2009 07:06:08 -0500 Let me know if a stack trace is necessary.
Aha, just stumbled across an article explaining this. Also Django have info on their NewbieMistakes page. The solution is to make sure the directory containing the database file also has write access allowed to the process. In my case, running this command fixed the problem: sudo chown www-data .
{ "source": [ "https://serverfault.com/questions/57596", "https://serverfault.com", "https://serverfault.com/users/17170/" ] }
57,747
I've seen a few random pages mention using empty gif images to somehow increase performance. I've also found the nginx has a module for just this purpose. What I can't figure out, is exactly how serving this small file is supposed to boost performance or perceived responsiveness from a web server. Can anyone help me understand the benefits?
1x1 gif files are used by some websites to set spacing between elements (particularly on older websites, made when browsers' interpretations of HTML/CSS were more divergent - cough IE cough). They are also used more often today as a request target for "tracking pixels", which are used as a tool for gathering usage stats, etc., especially for email/marketing campaigns. The reason you'd provide a special module for this file is that (a) it's requested often, and (b) it returns the same thing every time, so you don't want to have to go to disk for the file if you can avoid it.
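The nginx module mentioned in the question is exactly that optimisation: it answers matching requests with a 1x1 transparent GIF held in memory, so no file is ever opened. A minimal sketch (the location path is arbitrary):

location = /pixel.gif {
    empty_gif;
}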
{ "source": [ "https://serverfault.com/questions/57747", "https://serverfault.com", "https://serverfault.com/users/14426/" ] }
57,932
I'm looking to set up a simple fileserver: 5 - 7 clients - Mixed Windows, Linux, Mac OSX - connecting over wireless and wired Serving ~200GB content - Photos, MP3's, ISO's etc What OS would you recommend for this fileserver? I understand XP limits the number of concurrent connections to its shares, so it probably isn't the best choice. Any recommendations are appreciated. Thank you,
Whichever OS you can support the best - seriously, for relatively basic stuff as you've described they can all do a good enough job so it comes down to how quickly you can set it up, how often it stays up and running and how quickly you can fix it when it breaks - so in my mind the best is the one that you yourself can deal with best in these situations.
{ "source": [ "https://serverfault.com/questions/57932", "https://serverfault.com", "https://serverfault.com/users/3260/" ] }
57,954
I work in a small organization with 2 servers and 30 clients. We are completely Windows Server 2003/XP. Besides me, the Director of Operations and our IT consulting company need access to a domain administrator account. Should we have multiple domain administrator accounts for any reason (change logging, security, or otherwise)? Are there reasons not to have multiple domain admin accounts?
Each user who performs administrative activities should have a dedicated account to perform those activities. In a Windows environment, the built-in (RID 500) Administrator account should have a complex password set, printed, and locked away in a safe, etc. for emergencies. A general tenet of security goes like this: You want to know who is performing which (administrative, in this case) activities (i.e. having an audit trail). Sharing accounts blows that out of the water. Further, you want to be able to cut off an individual's access in case of a breach of password security, termination, etc. Shared accounts don't meet that criterion, either. Shared, common-use accounts of any type should be considered highly dubious in value, but shared administration credentials are always bad. re: Windows-specific considerations like limited Remote Desktop / Terminal Services connections: Be courteous to your fellow admins and don't leave disconnected sessions lying around. I've found that social pressure works fairly well in small organizations (i.e. mentioning frequently and loudly the fact that admin XXX doesn't remember to log off servers). You can always boot other users' disconnected sessions off if you really have to. It adds, maybe, 30 seconds to a connection attempt. In a larger organization, or if it becomes a major problem, you might consider implementing disconnected session timeouts. A little aside, but one that's probably on-topic since you mentioned an IT consultant: As an IT contractor myself I always request a dedicated administration account for myself, and I demand not to know any "shared" administration credentials. It protects both parties and provides an audit trail. I always want my Customers to feel like they can "lock me out" at a moment's notice (and to actually have that ability, too) because I believe it sends a powerful message that I'm confident in my ability to maintain the relationship with them based on the merits of my skills and the value I provide, not based on some vague feeling that they're "locked in" to me.
{ "source": [ "https://serverfault.com/questions/57954", "https://serverfault.com", "https://serverfault.com/users/6494/" ] }
57,962
I have a feeling this is a stupid question, but this is something I've wondered about for a while. I have a VPS and this is my first big Linux venture. I am the only person who has access to it. My question is, what is wrong with just logging in as root as opposed to making an account and giving it sudo access? If a sudoer can do everything root can, then what's the difference? If a hacker could crack the password to my standard, non-root account, then he could also execute sudo commands, so how does a hacker cracking my root account matter any more or less?
If you're logged in as root, you can easily wipe directories or do something that in retrospect is really dumb on the system with the flip of a finger, while as a user you normally have to put a few extra mental cycles into what you're typing before doing something that is dangerous. Also, any program you run as root has root privileges, meaning if someone or something gets you to run or compile a dangerous program, or browse a malicious website, that trojan or other malware has full access to your system and can do what it wants, including access to TCP ports below 1024 (so it can turn your system into a remailer without your knowledge, for example). Basically you're kind of asking for trouble that logging in as yourself may prevent. I've known many people that ended up being glad they had that safety net in a moment of carelessness. EDIT: There is also the issue of root being the most well-known account, and thus an easy target, for scripts and hacks. Systems that disable the account and instead force users to use sudo mean that any attempt to crack root from ssh, or a local exploit against the account, is banging its head against a wall. Attackers would have to guess/crack a password and a username. It's security through obscurity to a degree, but it's hard to argue that it doesn't foil most script kiddie attacks.
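To make that EDIT concrete, one common hardening step (a sketch, not a full recipe - make sure a sudo-capable account works over SSH before relying on it) is to refuse root logins in /etc/ssh/sshd_config:

    # Attackers can no longer brute-force "root" over SSH directly;
    # they have to guess a valid username as well.
    PermitRootLogin no

followed by reloading the SSH daemon (/etc/init.d/ssh reload, or whatever your distribution uses).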
{ "source": [ "https://serverfault.com/questions/57962", "https://serverfault.com", "https://serverfault.com/users/1828/" ] }
57,963
We have a web application load-balanced across two machines, each with a Java server (Tomcat), ColdFusion, and Apache in front. Some time ago we noticed that the cfregistry file has a different size on each server: 89MB on one and 44MB on the other. Is it normal for the file to be this size? Is it normal for the sizes to differ this much? Thanks in advance.
{ "source": [ "https://serverfault.com/questions/57963", "https://serverfault.com", "https://serverfault.com/users/18032/" ] }
58,052
I'm running Ubuntu Intrepid, and have been seeing the following in my logs: Aug 23 16:01:03 wp1 sm-mta[13700]: n7NFJIad013566: Warning: program /usr/sbin/sensible-mda unsafe: No such file or directory Aug 23 16:01:03 wp1 sm-mta[13700]: n7NFJIad013566: SYSERR(root): Cannot exec /usr/sbin/sensible-mda: No such file or directory Aug 23 16:01:03 wp1 sm-mta[13700]: n7NFJIad013566: Warning: program /usr/sbin/sensible-mda unsafe: No such file or directory Aug 23 16:01:03 wp1 sm-mta[13700]: n7NFJIad013566: SYSERR(root): Cannot exec /usr/sbin/sensible-mda: No such file or directory I have tons of these messages now, where I had none before. Looking it up, it appears Ubuntu has some special sendmail packages that might not have been installed when I installed sendmail. Do I need "sensible-mda"? No one should be authenticating or sending via the server - it's just a default local SMTP host that's set up to allow web forms to post to email, and the system to send logs, etc. Why would these messages just start appearing?
Perhaps you've installed sendmail by using the sendmail-bin individual package instead of installing the sendmail wrapper package. Anyway, if you install sensible-mda (or the sendmail wrapper package), the problem you're seeing should disappear.
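On Ubuntu that boils down to one of the following (package names as shipped by the sendmail packaging; apt-cache show sensible-mda will confirm what your release actually provides):

    # Install just the missing MDA wrapper...
    sudo apt-get install sensible-mda
    # ...or the full sendmail wrapper package, which depends on it.
    sudo apt-get install sendmail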
{ "source": [ "https://serverfault.com/questions/58052", "https://serverfault.com", "https://serverfault.com/users/18048/" ] }
58,097
I'm running a traffic intense site plenty of dynamic content, mostly user-generated. The server is a dedicated one and has a total of 4 Intel(R) Xeon(R) CPU X3210 @ 2.13GHz proccesors. I need to know to optimal values for ServerLimit and MaxClients apache's directives, considering that the server has 4GB of RAM and the MySQL database runs on a separate server. The panel is DirectAdmin with CentOS. Below are my current directives, but during peak hours with over 5k users, an important lag is noticed - and it's not entirey MySQL's fault, because pages seem to be generated fast (I implemented a page generation time counter), but there is a long connection delay until the page starts responding and is sent to the browser. <IfModule prefork.c> StartServers 800 MinSpareServers 20 MaxSpareServers 60 ServerLimit 900 MaxClients 900 MaxRequestsPerChild 2000 </IfModule> Timeout 90 KeepAlive On KeepAliveTimeout 5 I should mention that monitoring the server using the top command, CPU usage never goes beyond 20% ~ 30% on peak hour. The MySQL server also has a 30~50% usage at that time, and I'm constantly working on fixing slow queries, but that's a different issue. I know it's not a DB bottleneck because static pages also take long to load on peak hours. Any tips to optimize these values will be greatly appreciated, thanks.
Your MaxClients is WAY WAY WAY too high. What is the current size of your Apache process? Multiply that x 900. Is that greater than 4GB? If so, the machine is likely going into swap. I usually start with MaxClients = 2x vCPUs in the box (grep -c processor /proc/cpuinfo), which in this case would be about 8. Then make sure that MaxClients x apache process size isn't over 4GB. You can up your MaxClients from there, depending on the type of connection that your clients have. (Dial-up users need to be spoonfed, etc.) But make sure you never get yourself into a swapping situation. Then set your Min, Max, and Start servers to MaxClients. There's no real need to have them differ in a dedicated server environment. Then do some testing with ab (as goose notes).
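A rough way to get the numbers that calculation needs (CentOS names the Apache binary httpd; the awk one-liner is only an illustration of averaging the RSS column):

    # Average resident size of an Apache child, in MB (RSS is reported in KB).
    ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum+=$8; n++} END {if (n) printf "%.1f MB avg per child\n", sum/n/1024}'
    # Core count for the "2x vCPUs" starting point mentioned above.
    grep -c processor /proc/cpuinfo
    # Keep MaxClients * (avg child size) comfortably under the RAM left over
    # after the OS and anything else running on the box.
    free -m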
{ "source": [ "https://serverfault.com/questions/58097", "https://serverfault.com", "https://serverfault.com/users/18046/" ] }
58,196
What's the Postfix equivalent of sendmail -bp?
Or, less typing: mailq
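For reference, the longer spelling this answer is presumably trimming down (from a sibling answer not reproduced here) is Postfix's own queue-listing command; both produce the same report:

    postqueue -p   # native Postfix command
    mailq          # sendmail-compatible alias that Postfix installs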
{ "source": [ "https://serverfault.com/questions/58196", "https://serverfault.com", "https://serverfault.com/users/995/" ] }
58,363
When trying to start sendmail or send mail using a WordPress plugin, this error shows up in the maillog: "My unqualified host name (foo.bar) unknown; sleeping for retry" After Googling, the best advice was "add foo.bar to the /etc/hosts file", but it already is: 127.0.0.1 localhost localhost.localdomain 127.0.0.1 foo.bar
Simply changed: 127.0.0.1 localhost localhost.localdomain 127.0.0.1 foo.bar To this: 127.0.0.1 localhost localhost.localdomain foo.bar Sendmail looks for a fully qualified domain name (FQDN) and, with the single-line version, will use localhost.localdomain as that FQDN.
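After editing /etc/hosts, a quick sanity check (hostname's --fqdn flag goes through roughly the same resolver lookup sendmail uses to build its canonical name) is:

    # Should print a dotted name such as foo.bar, not just a short hostname.
    hostname --fqdn

then restart sendmail so it picks the name up.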
{ "source": [ "https://serverfault.com/questions/58363", "https://serverfault.com", "https://serverfault.com/users/1786/" ] }
58,378
I want to add a new user and grant that new user full root access; how can I do that? I did sudo adduser --system testuser but this is not working as I expected. Thanks for your help.
There are actually three ways you can do this: the right way, the wrong way, and the ugly way. First, create a normal user account. adduser username Then select one of the following: The Right Way Create a sudo entry for the wheel group in /etc/sudoers like this: ## Allows people in group wheel to run all commands %wheel ALL=(ALL) ALL Or for "modern" versions: ## Allows people in group sudoers to run all commands %sudoers ALL=(ALL) ALL Then add the user to the wheel group. Adding and removing users with administrative privileges now becomes a function of remembering to add them to wheel, instead of creating an entry in sudo. The great thing about using wheel is that you can extend this mechanism into other authentication schemes that support groups, i.e. winbind/Active Directory, and reap the benefits in the process. You would accomplish this by mapping wheel to a group in your authentication schema that has admin privileges. Note that some distributions use different administrative accounts. Wheel is a "traditional" approach to this, but you may encounter admin, adm, and other group accounts that serve the same purpose. Follow-up Edit: I have to give a point to Bart Silverstrim for pointing out that Ubuntu uses admin as the group for this purpose. He got to this first, although I didn't notice an Ubuntu tag at the time. Again, it all depends on what distribution you are using. The Ugly Way Create a sudo entry for the user account in question and give them complete access. Again, you create the entry in /etc/sudoers like this: ## Allows just user "username" to run all commands as root username ALL=(ALL) ALL ADDED: ## For Ubuntu version: username ALL=(ALL:ALL) ALL This is great if you only have one (or two) normal accounts. It is ugly when you have a hundred accounts over multiple (geographical) sites and have to constantly maintain the sudo file. The Wrong Way You can edit the /etc/passwd file and change the user account ID from whatever number it is to 0. That's right, zero. username:x:0:502::/home/username:/bin/bash See that third entry as zero? When you log into that account, you are, for all practical purposes, root. I do not recommend this. If you do not remember "who" you are, you can create all kinds of havoc as you start creating and touching files as root. You could also add your username to the root group. This has the same effect for file access but it creates other issues; programs will notice you are not user root and refuse to run, but you will gain access to files that belong to group root. If you did this, you did use vipw instead of just editing with vi, right? (or whatever your favorite text editor is) After all, a single typo in this file can lock you out of your system - and that means a physical visit to the computer in question with a repair disc...
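The answer says to "add the user to the wheel group" without spelling out the command; one common way is:

    # Append (-a) wheel to the user's supplementary groups (-G); omitting -a
    # would replace the existing group list rather than add to it.
    usermod -aG wheel username

On systems where the admin group is admin or sudo instead of wheel, substitute that group name.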
{ "source": [ "https://serverfault.com/questions/58378", "https://serverfault.com", "https://serverfault.com/users/13830/" ] }