source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---
216,330 | I am building a site that I anticipate will have high usage. Currently, my registrar (GoDaddy) is handling DNS. However, Amazon's Route 53 looks interesting. They promise high speed and offer globally distributed DNS servers and a programmable interface. While GoDaddy doesn't offer a programmable interface, I assume their servers are geographically distributed as well. What are the main reasons I should opt to use Amazon Route 53 over free registrar-based DNS? | GoDaddy [...] I assume their servers are geographically distributed Don't assume, verify with GoDaddy or verify it yourself. A quick traceroute to nsX.secureserver.com (a common DNS server name for GoDaddy) gives me a response from a datacenter here in Scandinavia where I live. So yes, it seems that GoDaddy has its nameservers spread out over at least the US and Europe. But check the nameservers assigned to your own domain (a quick way to do this is sketched after this row). main reasons I should opt to use Amazon Route 53 Amazon has clearly documented how their servers are set up. They use Anycast, and have DNS servers in 15+ locations worldwide. Their service seems well engineered for high uptime. Having your DNS resolve from 15+ locations worldwide makes your website a little bit faster for your end users. It also allows you to use a lower TTL, which means in case of a website failure, you can move your service over to a new IP faster. In the future, Amazon plans to integrate Route 53 with their other cloud offerings. If you use some of these, such as EC2 and Elastic Load Balancer, then you will benefit from this integration. What they'll build isn't known yet, but one-step setup of Elastic Load Balancing and health check integration with CloudWatch seem like reasonable guesses. | {
"source": [
"https://serverfault.com/questions/216330",
"https://serverfault.com",
"https://serverfault.com/users/22895/"
]
} |
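A quick way to do the verification suggested above is to list the nameservers actually delegated for your zone and query one directly; this is only a sketch, and example.com / ns1.example-dns.net are placeholders for your own domain and its nameservers.
# List the nameservers currently delegated for the domain
dig +short NS example.com
# Query one of the returned nameservers directly and note the reported query time;
# repeating this from a few different networks gives a rough idea of geographic spread
dig @ns1.example-dns.net example.com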
216,421 | We updated our servers this weekend (windows updates), everything went fine except one of our terminal servers now hangs at login with the message, "waiting for windows modules installer." It eventually times out and leaves an event log message that the service has stopped unexpectedly. I have disabled the service and users can now login in a reasonable time frame. However we will need to re-enable the service in order to install further updates. I'm not sure where to start with this one, I'm an entry level admin and my colleagues are on vacation today, thank God this isn't a serious problem. Further details: -It affects all users.
-The only third party software on the server is our ERP software and screwdrivers from Tricerat.
-The only event log message is that the service has stopped unexpectedly.
-The server manager screen does not display any information about roles it just says, "error".
-The remote desktop roles all seem to be functioning properly; RemoteApp works, as does standard RDP. Let me know if there are any further details I can provide; I will be checking this frequently throughout the day. | Be patient and give the server time to do what is needed; mine took nearly 30 minutes after doing a disk cleanup on the C drive. I would certainly let it go 30-45 minutes; a hard reboot when the server is deep in thought is not a good thing. | {
"source": [
"https://serverfault.com/questions/216421",
"https://serverfault.com",
"https://serverfault.com/users/59344/"
]
} |
216,427 | I've a KVM system upon which I'm running a network bridge directly between all VM's and a bond0 (eth0, eth1) on the host OS. As such, all machines are presented on the same subnet, available outside of the box. The bond is doing mode 1 active / passive, with an arp_ip_target set to the default gateway, which has caused some issues in itself, but I can't see the bond configs mattering here myself. I'm seeing odd things most times when I stop and start a guest on the platform, in that on the host I lose network connectivity (icmp, ssh) for about 30 seconds. I don't lose connectivity on the other already running VM's though... they can always ping the default GW, but the host can't. I say "about 30 seconds" but from some tests it actually seems to be 28 seconds usually (or at least, I lose 28 pings...) and I'm wondering if this somehow relates to the bridge config. I'm not running STP on the bridge at all, and the forwarding delay is set to 1 second, path cost on the bond0 lowered to 10 and port priority of bond0 also lowered to 1. As such I don't think that the bridge should ever be able to think that bond0 is not connected just fine (as continued guest connectivity implies) yet the IP of the host, which is on the bridge device (... could that matter?? ) becomes unreachable. I'm fairly sure it's about the bridged networking, but at the same time as this happens when a VM is started there are clearly loads of other things also happening so maybe I'm way off the mark. Lack of connectivity: # ping 10.20.11.254
PING 10.20.11.254 (10.20.11.254) 56(84) bytes of data.
64 bytes from 10.20.11.254: icmp_seq=1 ttl=255 time=0.921 ms
64 bytes from 10.20.11.254: icmp_seq=2 ttl=255 time=0.541 ms
type=1700 audit(1293462808.589:325): dev=vnet6 prom=256 old_prom=0 auid=42949672
95 ses=4294967295
type=1700 audit(1293462808.604:326): dev=vnet7 prom=256 old_prom=0 auid=42949672
95 ses=4294967295
type=1700 audit(1293462808.618:327): dev=vnet8 prom=256 old_prom=0 auid=42949672
95 ses=4294967295
kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079
kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffdd694a
kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079
64 bytes from 10.20.11.254: icmp_seq=30 ttl=255 time=0.514 ms
64 bytes from 10.20.11.254: icmp_seq=31 ttl=255 time=0.551 ms
64 bytes from 10.20.11.254: icmp_seq=32 ttl=255 time=0.437 ms
64 bytes from 10.20.11.254: icmp_seq=33 ttl=255 time=0.392 ms brctl output of relevant bridge: # brctl showstp brdev
brdev
bridge id 8000.b2e1378d1396
designated root 8000.b2e1378d1396
root port 0 path cost 0
max age 19.99 bridge max age 19.99
hello time 1.99 bridge hello time 1.99
forward delay 0.99 bridge forward delay 0.99
ageing time 299.95
hello timer 0.50 tcn timer 0.00
topology change timer 0.00 gc timer 0.04
flags
vnet5 (3)
port id 8003 state forwarding
designated root 8000.b2e1378d1396 path cost 100
designated bridge 8000.b2e1378d1396 message age timer 0.00
designated port 8003 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags
vnet0 (2)
port id 8002 state forwarding
designated root 8000.b2e1378d1396 path cost 100
designated bridge 8000.b2e1378d1396 message age timer 0.00
designated port 8002 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags
bond0 (1)
port id 0001 state forwarding
designated root 8000.b2e1378d1396 path cost 10
designated bridge 8000.b2e1378d1396 message age timer 0.00
designated port 0001 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags I do see the new port listed as learning, but in line with the forward delay, only for 1 or 2 seconds when polling the brctl output on a loop. ifconfig without sample VM: bond0 Link encap:Ethernet HWaddr D4:85:64:65:FA:4E
inet6 addr: fe80::d685:64ff:fe65:fa4e/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:21168629 errors:0 dropped:0 overruns:0 frame:0
TX packets:9280285 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8777768179 (8.1 GiB) TX bytes:2671736365 (2.4 GiB)
bradSP1 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1656 (1.6 KiB) TX bytes:6592 (6.4 KiB)
brawSP1 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:109 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4996 (4.8 KiB) TX bytes:6592 (6.4 KiB)
brdev Link encap:Ethernet HWaddr B2:E1:37:8D:13:96
inet addr:10.20.11.129 Bcast:10.20.11.255 Mask:255.255.255.0
inet6 addr: fe80::d685:64ff:fe65:fa4e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16663718 errors:0 dropped:0 overruns:0 frame:0
TX packets:8800468 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3268513274 (3.0 GiB) TX bytes:2587834869 (2.4 GiB)
brmgtSP1 Link encap:Ethernet HWaddr 1A:CA:AE:08:1C:42
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:699322 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:928721301 (885.6 MiB) TX bytes:6706 (6.5 KiB)
eth0 Link encap:Ethernet HWaddr D4:85:64:65:FA:4E
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:20412120 errors:0 dropped:0 overruns:0 frame:0
TX packets:9280285 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8720799421 (8.1 GiB) TX bytes:2671736365 (2.4 GiB)
Interrupt:169 Memory:f4000000-f4012800
eth1 Link encap:Ethernet HWaddr D4:85:64:65:FA:4E
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:756509 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:56968758 (54.3 MiB) TX bytes:0 (0.0 b)
Interrupt:186 Memory:f2000000-f2012800
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:3937 errors:0 dropped:0 overruns:0 frame:0
TX packets:3937 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6641553 (6.3 MiB) TX bytes:6641553 (6.3 MiB)
vnet0 Link encap:Ethernet HWaddr B2:E1:37:8D:13:96
inet6 addr: fe80::b0e1:37ff:fe8d:1396/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:59861 errors:0 dropped:0 overruns:0 frame:0
TX packets:5924530 errors:0 dropped:0 overruns:2 carrier:0
collisions:0 txqueuelen:500
RX bytes:6405635 (6.1 MiB) TX bytes:1987480170 (1.8 GiB)
vnet1 Link encap:Ethernet HWaddr 1A:CA:AE:08:1C:42
inet6 addr: fe80::18ca:aeff:fe08:1c42/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:541798 errors:0 dropped:0 overruns:0 frame:0
TX packets:61998 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:802746110 (765.5 MiB) TX bytes:6498514 (6.1 MiB) ifconfig with sample VM: bond0 Link encap:Ethernet HWaddr D4:85:64:65:FA:4E
inet6 addr: fe80::d685:64ff:fe65:fa4e/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:21285120 errors:0 dropped:0 overruns:0 frame:0
TX packets:9291457 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8948482155 (8.3 GiB) TX bytes:2673235824 (2.4 GiB)
bradSP1 Link encap:Ethernet HWaddr 2A:18:E1:2D:1A:EC
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1656 (1.6 KiB) TX bytes:6592 (6.4 KiB)
brawSP1 Link encap:Ethernet HWaddr 96:55:AA:14:67:07
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:109 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4996 (4.8 KiB) TX bytes:6592 (6.4 KiB)
brdev Link encap:Ethernet HWaddr 16:5C:BC:E5:90:11
inet addr:10.20.11.129 Bcast:10.20.11.255 Mask:255.255.255.0
inet6 addr: fe80::d685:64ff:fe65:fa4e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16673094 errors:0 dropped:0 overruns:0 frame:0
TX packets:8801611 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3279365967 (3.0 GiB) TX bytes:2587927761 (2.4 GiB)
brmgtSP1 Link encap:Ethernet HWaddr 1A:CA:AE:08:1C:42
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:699342 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:928723605 (885.6 MiB) TX bytes:6706 (6.5 KiB)
eth0 Link encap:Ethernet HWaddr D4:85:64:65:FA:4E
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:20528382 errors:0 dropped:0 overruns:0 frame:0
TX packets:9291457 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8891497316 (8.2 GiB) TX bytes:2673235824 (2.4 GiB)
Interrupt:169 Memory:f4000000-f4012800
eth1 Link encap:Ethernet HWaddr D4:85:64:65:FA:4E
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:756738 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:56984839 (54.3 MiB) TX bytes:0 (0.0 b)
Interrupt:186 Memory:f2000000-f2012800
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:3937 errors:0 dropped:0 overruns:0 frame:0
TX packets:3937 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6641553 (6.3 MiB) TX bytes:6641553 (6.3 MiB)
vnet0 Link encap:Ethernet HWaddr B2:E1:37:8D:13:96
inet6 addr: fe80::b0e1:37ff:fe8d:1396/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:69818 errors:0 dropped:0 overruns:0 frame:0
TX packets:6034715 errors:0 dropped:0 overruns:2 carrier:0
collisions:0 txqueuelen:500
RX bytes:7763947 (7.4 MiB) TX bytes:2149238089 (2.0 GiB)
vnet1 Link encap:Ethernet HWaddr 1A:CA:AE:08:1C:42
inet6 addr: fe80::18ca:aeff:fe08:1c42/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:650557 errors:0 dropped:0 overruns:0 frame:0
TX packets:72519 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:964153780 (919.4 MiB) TX bytes:7896728 (7.5 MiB)
vnet2 Link encap:Ethernet HWaddr AA:4B:22:76:D2:EC
inet6 addr: fe80::a84b:22ff:fe76:d2ec/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10521 errors:0 dropped:0 overruns:0 frame:0
TX packets:108765 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:1398214 (1.3 MiB) TX bytes:161408138 (153.9 MiB)
vnet3 Link encap:Ethernet HWaddr 96:55:AA:14:67:07
inet6 addr: fe80::9455:aaff:fe14:6707/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:468 (468.0 b)
vnet4 Link encap:Ethernet HWaddr 2A:18:E1:2D:1A:EC
inet6 addr: fe80::2818:e1ff:fe2d:1aec/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:468 (468.0 b)
vnet5 Link encap:Ethernet HWaddr 16:5C:BC:E5:90:11
inet6 addr: fe80::145c:bcff:fee5:9011/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:241 errors:0 dropped:0 overruns:1 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:47167 (46.0 KiB) All pointers, tips or stabs in the dark appreciated. | Be patient and give the server time to do what is needed; mine took nearly 30 minutes after doing a disk cleanup on the C drive. I would certainly let it go 30-45 minutes; a hard reboot when the server is deep in thought is not a good thing. | {
"source": [
"https://serverfault.com/questions/216427",
"https://serverfault.com",
"https://serverfault.com/users/53660/"
]
} |
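A diagnostic sketch for the question above (not from the original answer): polling the bridge port states and the bond's active slave while the guest starts can show whether the ~28-second gap lines up with a bridge or bonding event. Interface names follow the question; adjust them for your own host.
# Refresh once a second while the guest is being started
watch -n 1 'brctl showstp brdev | grep -E "state|^(vnet|bond)"; grep -E "MII Status|Currently Active Slave" /proc/net/bonding/bond0'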
216,477 | I have my cert.pem and cert.key files in /etc/apache2/ssl folders. What would be the most secure permissions and ownership of: /etc/apache2/ssl directory /etc/apache2/ssl/cert.pem file /etc/apache2/ssl/cert.key file (Ensuring https:// access works of course :). Thanks, JP | The directory permissions should be 700, the file permissions on all the files should be 600, and the directory and files should be owned by root. | {
"source": [
"https://serverfault.com/questions/216477",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
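Spelled out as commands, the permissions described above would look like this (paths taken from the question). Apache reads the key while it still runs as root during startup, so HTTPS keeps working with these settings.
chown -R root:root /etc/apache2/ssl
chmod 700 /etc/apache2/ssl
chmod 600 /etc/apache2/ssl/cert.pem /etc/apache2/ssl/cert.key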
216,508 | I want to interrupt a running resync operation on a debian squeeze software raid. (This is the regular scheduled compare resync. The raid array is still clean in such a case. Do not confuse this with a rebuild after a disk failed and was replaced.) How to stop this scheduled resync operation while it is running? Another raid array is "resync pending", because they all get checked on the same day (sunday night) one after another. I want a complete stop of this sunday night resyncing. [Edit: sudo kill -9 1010 doesn't stop it, 1010 is the PID of the md2_resync process] I would also like to know how I can control the intervals between resyncs and the remainig time till the next one. [Edit2: What I did now was to make the resync go very slow, so it does not disturb anymore: sudo sysctl -w dev.raid.speed_limit_max=1000 taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html During the night I will set it back to a high value, so the resync can terminate. This workaround is fine for most situations, nonetheless it would be interesting to know if what I asked is possible. For example it does not seem to be possible to grow an array, while it is resyncing or resyncing "pending"] | If your array is md0 then echo "idle" > /sys/block/md0/md/sync_action 'idle' will stop an active
resync/recovery etc. There is no
guarantee that another resync/recovery
may not be automatically started
again, though some event will be
needed to trigger this. http://www.mjmwired.net/kernel/Documentation/md.txt#477 | {
"source": [
"https://serverfault.com/questions/216508",
"https://serverfault.com",
"https://serverfault.com/users/64799/"
]
} |
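A small sketch of the commands involved, assuming the array being checked is md2 as in the question (the sysfs path can vary slightly between kernels). On Debian-derived systems the Sunday check is normally driven by a cron entry, commonly /etc/cron.d/mdadm invoking checkarray; editing or disabling that entry is the usual way to change when, or whether, the periodic compare runs - treat the exact path as an assumption and check your own system.
cat /proc/mdstat                                    # see which array is currently checking/resyncing
echo idle | sudo tee /sys/block/md2/md/sync_action  # stop the running check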
216,801 | The server I am using is Ubuntu 10.10. To ensure security I want to edit the banner the server sends to the client. If I telnet to my host on port 22 it tells me the exact version of SSH I am running (SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4). The situation is the same with MySQL and Cyrus. Any suggestions? At least for SSH? Thanks | While it's prohibitively difficult to hide the version number of your SSH daemon, you can easily hide the linux version (Debian-3ubuntu4) Add the following line to /etc/ssh/sshd_config DebianBanner no And restart your SSH daemon: /etc/init.d/ssh restart or service ssh restart | {
"source": [
"https://serverfault.com/questions/216801",
"https://serverfault.com",
"https://serverfault.com/users/64816/"
]
} |
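To confirm what the daemon actually announces before and after the change, you can grab the banner directly; a quick sketch (netcat option names differ slightly between versions).
nc -w 3 localhost 22   # the first line printed is the SSH banner; press Ctrl-C to exit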
216,965 | I've been using, happily, opendns to block facebook on my network.
Then I started thinking about tricks to circumvent this block and, of course, I've read here on serverfault how to block the facebook ip address.
But what if someone uses Tor or Freegate? What can I do? | What you have isn't really a technical problem, it's a management problem; don't try to make it a technical problem. You need to have an acceptable use policy that clearly defines what users can and can't do with the resources provided by your organisation. This should also detail what steps may be taken to enforce the AUP (monitoring usage/auditing machines etc.) and what the sanctions for breaking the AUP are. | {
"source": [
"https://serverfault.com/questions/216965",
"https://serverfault.com",
"https://serverfault.com/users/60311/"
]
} |
217,395 | Out of curiosity, why does it often take seconds to obtain network configuration via DHCP when the CPU is capable of processing millions of operations per second and ping to the router takes a couple of milliseconds? In my home environment with one WiFi router and about 5 devices, it is not rare to see times like 5-10 seconds. | In addition to the actual acquisition of the DHCP lease from the DHCP server (which typically doesn't take very long), some servers will first ping the IP address it's about to hand out before it actually hands it out to verify that it's not already in use on the network - this takes a few seconds to time out. The client sometimes will do the same (again, to prevent IP address conflicts) which will add some more time. Then, on top of that, some clients will also register their DNS entries etc. | {
"source": [
"https://serverfault.com/questions/217395",
"https://serverfault.com",
"https://serverfault.com/users/14078/"
]
} |
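One way to see where the seconds go is to run the DHCP client verbosely and watch the timestamps on each step; a sketch, with the interface name as a placeholder.
sudo dhclient -v eth0   # logs the DISCOVER/OFFER/REQUEST/ACK exchange as it happens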
217,605 | I want to use a filter rule to capture only ack or syn packets. How do I do this? | The pcap filter syntax used for tcpdump should work exactly the same way on wireshark capture filter. With tcpdump I would use a filter like this. tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) != 0" Check out the tcpdump man page , and pay close attention to the tcpflags. Be sure to also check out the sections in the Wireshark Wiki about capture and display filters. Unfortunately the two types of filters use a completely different syntax, and different names for the same thing. If you wanted a display filter instead of capture filter you would probably need to build an expression combining tcp.flags.ack, and tcp.flags.syn. I am far more familiar with capture filters though, so you'll have to work that out on your own. http://wiki.wireshark.org/DisplayFilters Display filter ref: http://www.wireshark.org/docs/dfref/ TCP display ref: http://www.wireshark.org/docs/dfref/t/tcp.html http://wiki.wireshark.org/CaptureFilters | {
"source": [
"https://serverfault.com/questions/217605",
"https://serverfault.com",
"https://serverfault.com/users/55582/"
]
} |
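For the display-filter side that the answer leaves as an exercise, one plausible equivalent, shown via tshark so it can be run from a shell (the -Y display-filter option needs a reasonably recent Wireshark/tshark; older releases used -R).
tshark -i eth0 -Y 'tcp.flags.syn == 1 or tcp.flags.ack == 1'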
217,666 | I currently have LVM on software RAID, but I'd like to ask which you think is the better solution, and maybe some pros and cons? Edit: It is about software RAID on LVM versus LVM on software RAID. I know that hardware RAID is better if we are thinking about performance. | Your current setup is like this: | / | /var | /usr | /home |
--------------------------
| LVM Volume |
--------------------------
| RAID Volume |
--------------------------
| Disk 1 | Disk 2 | Disk 3 | It's a much simpler setup with more flexibility. You can use all of the disks in the RAID volume and slice and dice them whatever way you like with LVM. The other way isn't even worth thinking about - it's ridiculously complicated and you lose the benefits of LVM at the filesystem level. If you tried to RAID LVM volumes, you would be left with a normal device without any of the LVM volume benefits (e.g. growing filesystems etc.) | {
"source": [
"https://serverfault.com/questions/217666",
"https://serverfault.com",
"https://serverfault.com/users/61930/"
]
} |
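A minimal sketch of building the recommended stack (one RAID volume underneath, LVM carved out on top); the device names, RAID level and sizes are placeholders, not taken from the question.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0              # turn the array into an LVM physical volume
vgcreate vg0 /dev/md0          # one volume group spanning the array
lvcreate -n home -L 100G vg0   # slice out logical volumes as needed
mkfs.ext4 /dev/vg0/home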
218,005 | This is a Canonical Question about Server Security - Responding to Breach Events (Hacking) See Also: Tips for Securing a LAMP Server Reinstall after a Root Compromise? Canonical Version I suspect that one or more of my servers is compromised by a hacker, virus, or other mechanism: What are my first steps? When I arrive on site should I disconnect the server, preserve "evidence", are there other initial considerations? How do I go about getting services back online? How do I prevent the same thing from happening immediately again? Are there best practices or methodologies for learning from this incident? If I wanted to put an Incident Response Plan together, where would I start? Should this be part of my Disaster Recovery or Business Continuity Planning? Original Version 2011.01.02 - I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DOS attack on our provider. The server's access to the Internet
has been shut down which means over 5-600 of our clients sites are now
down. Now this could be an FTP hack, or some weakness in code
somewhere. I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of
litigation if I don't get the server back up ASAP. Any help is
appreciated. We are running Open SUSE 11.0. 2011.01.03 - Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed
to resolve this problem, although it may not apply to many others in a
different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to
perform) a Denial Of Service attack on another server in Indonesia,
and the guilty party was also based there. We firstly tried to identify where on the server this was coming from,
considering we have over 500 sites on the server, we expected to be
moonlighting for some time. However, with SSH access still, we ran a
command to find all files edited or created in the time the attacks
started. Luckily, the offending file was created over the winter
holidays which meant that not many other files were created on the
server at that time. We were then able to identify the offending file which was inside the
uploaded images folder within a ZenCart website. After a short cigarette break we concluded that, due to the files
location, it must have been uploaded via a file upload facility that
was inadequately secured. After some googling, we found that there was
a security vulnerability that allowed files to be uploaded, within the
ZenCart admin panel, for a picture for a record company. (The section
that it never really even used), posting this form just uploaded any
file, it did not check the extension of the file, and didn't even
check to see if the user was logged in. This meant that any files could be uploaded, including a PHP file for
the attack. We secured the vulnerability with ZenCart on the infected
site, and removed the offending files. The job was done, and I was home by 2 a.m. The Moral - Always apply security patches for ZenCart, or any other CMS system for that matter. When security updates are released, the whole
world is made aware of the vulnerability.
- Always do backups, and backup your backups.
- Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server
Fault. | It's hard to give specific advice from what you've posted here but I do have some generic advice based on a post I wrote ages ago back when I could still be bothered to blog. Don't Panic First things first, there are no "quick fixes" other than restoring your system from a backup taken prior to the intrusion, and this has at least two problems. It's difficult to pinpoint when the intrusion happened. It doesn't help you close the "hole" that allowed them to break in last time, nor deal with the consequences of any "data theft" that may also have taken place. This question keeps being asked repeatedly by the victims of hackers breaking into their web server. The answers very rarely change, but people keep asking the question. I'm not sure why. Perhaps people just don't like the answers they've seen when searching for help, or they can't find someone they trust to give them advice. Or perhaps people read an answer to this question and focus too much on the 5% of why their case is special and different from the answers they can find online and miss the 95% of the question and answer where their case is near enough the same as the one they read online. That brings me to the first important nugget of information. I really do appreciate that you are a special unique snowflake. I appreciate that your website is too, as it's a reflection of you and your business or at the very least, your hard work on behalf of an employer. But to someone on the outside looking in, whether a computer security person looking at the problem to try and help you or even the attacker himself, it is very likely that your problem will be at least 95% identical to every other case they've ever looked at. Don't take the attack personally, and don't take the recommendations that follow here or that you get from other people personally. If you are reading this after just becoming the victim of a website hack then I really am sorry, and I really hope you can find something helpful here, but this is not the time to let your ego get in the way of what you need to do. You have just found out that your server(s) got hacked. Now what? Do not panic. Absolutely do not act in haste, and absolutely do not try and pretend things never happened and not act at all. First: understand that the disaster has already happened. This is not the time for denial; it is the time to accept what has happened, to be realistic about it, and to take steps to manage the consequences of the impact. Some of these steps are going to hurt, and (unless your website holds a copy of my details) I really don't care if you ignore all or some of these steps, that's up to you. But following them properly will make things better in the end. The medicine might taste awful but sometimes you have to overlook that if you really want the cure to work. Stop the problem from becoming worse than it already is: The first thing you should do is disconnect the affected systems from the Internet. Whatever other problems you have, leaving the system connected to the web will only allow the attack to continue. I mean this quite literally; get someone to physically visit the server and unplug network cables if that is what it takes, but disconnect the victim from its muggers before you try to do anything else. Change all your passwords for all accounts on all computers that are on the same network as the compromised systems. No really. All accounts. All computers. Yes, you're right, this might be overkill; on the other hand, it might not. 
You don't know either way, do you? Check your other systems. Pay special attention to other Internet facing services, and to those that hold financial or other commercially sensitive data. If the system holds anyone's personal data, immediately inform the person responsible for data protection (if that's not you) and URGE a full disclosure. I know this one is tough. I know this one is going to hurt. I know that many businesses want to sweep this kind of problem under the carpet but the business is going to have to deal with it - and needs to do so with an eye on any and all relevant privacy laws. However annoyed your customers might be to have you tell them about a problem, they'll be far more annoyed if you don't tell them, and they only find out for themselves after someone charges $8,000 worth of goods using the credit card details they stole from your site. Remember what I said previously? The bad thing has already happened. The only question now is how well you deal with it. Understand the problem fully: Do NOT put the affected systems back online until this stage is fully complete, unless you want to be the person whose post was the tipping point for me actually deciding to write this article. I'm not going to link to that post so that people can get a cheap laugh, but the real tragedy is when people fail to learn from their mistakes. Examine the 'attacked' systems to understand how the attacks succeeded in compromising your security. Make every effort to find out where the attacks "came from", so that you understand what problems you have and need to address to make your system safe in the future. Examine the 'attacked' systems again, this time to understand where the attacks went, so that you understand what systems were compromised in the attack. Ensure you follow up any pointers that suggest compromised systems could become a springboard to attack your systems further. Ensure the "gateways" used in any and all attacks are fully understood, so that you may begin to close them properly. (e.g. if your systems were compromised by a SQL injection attack, then not only do you need to close the particular flawed line of code that they broke in by, you would want to audit all of your code to see if the same type of mistake was made elsewhere). Understand that attacks might succeed because of more than one flaw. Often, attacks succeed not through finding one major bug in a system but by stringing together several issues (sometimes minor and trivial by themselves) to compromise a system. For example, using SQL injection attacks to send commands to a database server, discovering the website/application you're attacking is running in the context of an administrative user and using the rights of that account as a stepping-stone to compromise other parts of a system. Or as hackers like to call it: "another day in the office taking advantage of common mistakes people make". Why not just "repair" the exploit or rootkit you've detected and put the system back online? In situations like this the problem is that you don't have control of that system any more. It's not your computer any more. The only way to be certain that you've got control of the system is to rebuild the system. 
While there's a lot of value in finding and fixing the exploit used to break into the system, you can't be sure about what else has been done to the system once the intruders gained control (indeed, its not unheard of for hackers that recruit systems into a botnet to patch the exploits they used themselves, to safeguard "their" new computer from other hackers, as well as installing their rootkit). Make a plan for recovery and to bring your website back online and stick to it: Nobody wants to be offline for longer than they have to be. That's a given. If this website is a revenue generating mechanism then the pressure to bring it back online quickly will be intense. Even if the only thing at stake is your / your company's reputation, this is still going generate a lot of pressure to put things back up quickly. However, don't give in to the temptation to go back online too quickly. Instead move with as fast as possible to understand what caused the problem and to solve it before you go back online or else you will almost certainly fall victim to an intrusion once again, and remember, "to get hacked once can be classed as misfortune; to get hacked again straight afterward looks like carelessness" (with apologies to Oscar Wilde). I'm assuming you've understood all the issues that led to the successful intrusion in the first place before you even start this section. I don't want to overstate the case but if you haven't done that first then you really do need to. Sorry. Never pay blackmail / protection money. This is the sign of an easy mark and you don't want that phrase ever used to describe you. Don't be tempted to put the same server(s) back online without a full rebuild. It should be far quicker to build a new box or "nuke the server from orbit and do a clean install" on the old hardware than it would be to audit every single corner of the old system to make sure it is clean before putting it back online again. If you disagree with that then you probably don't know what it really means to ensure a system is fully cleaned, or your website deployment procedures are an unholy mess. You presumably have backups and test deployments of your site that you can just use to build the live site, and if you don't then being hacked is not your biggest problem. Be very careful about re-using data that was "live" on the system at the time of the hack. I won't say "never ever do it" because you'll just ignore me, but frankly I think you do need to consider the consequences of keeping data around when you know you cannot guarantee its integrity. Ideally, you should restore this from a backup made prior to the intrusion. If you cannot or will not do that, you should be very careful with that data because it's tainted. You should especially be aware of the consequences to others if this data belongs to customers or site visitors rather than directly to you. Monitor the system(s) carefully. You should resolve to do this as an ongoing process in the future (more below) but you take extra pains to be vigilant during the period immediately following your site coming back online. The intruders will almost certainly be back, and if you can spot them trying to break in again you will certainly be able to see quickly if you really have closed all the holes they used before plus any they made for themselves, and you might gather useful information you can pass on to your local law enforcement. Reducing the risk in the future. 
The first thing you need to understand is that security is a process that you have to apply throughout the entire life-cycle of designing, deploying and maintaining an Internet-facing system, not something you can slap a few layers over your code afterwards like cheap paint. To be properly secure, a service and an application need to be designed from the start with this in mind as one of the major goals of the project. I realise that's boring and you've heard it all before and that I "just don't realise the pressure man" of getting your beta web2.0 (beta) service into beta status on the web, but the fact is that this keeps getting repeated because it was true the first time it was said and it hasn't yet become a lie. You can't eliminate risk. You shouldn't even try to do that. What you should do however is to understand which security risks are important to you, and understand how to manage and reduce both the impact of the risk and the probability that the risk will occur. What steps can you take to reduce the probability of an attack being successful? For example: Was the flaw that allowed people to break into your site a known bug in vendor code, for which a patch was available? If so, do you need to re-think your approach to how you patch applications on your Internet-facing servers? Was the flaw that allowed people to break into your site an unknown bug in vendor code, for which a patch was not available? I most certainly do not advocate changing suppliers whenever something like this bites you because they all have their problems and you'll run out of platforms in a year at the most if you take this approach. However, if a system constantly lets you down then you should either migrate to something more robust or at the very least, re-architect your system so that vulnerable components stay wrapped up in cotton wool and as far away as possible from hostile eyes. Was the flaw a bug in code developed by you (or a contractor working for you)? If so, do you need to re-think your approach to how you approve code for deployment to your live site? Could the bug have been caught with an improved test system, or with changes to your coding "standard" (for example, while technology is not a panacea, you can reduce the probability of a successful SQL injection attack by using well-documented coding techniques). Was the flaw due to a problem with how the server or application software was deployed? If so, are you using automated procedures to build and deploy servers where possible? These are a great help in maintaining a consistent "baseline" state on all your servers, minimising the amount of custom work that has to be done on each one and hence hopefully minimising the opportunity for a mistake to be made. Same goes with code deployment - if you require something "special" to be done to deploy the latest version of your web app then try hard to automate it and ensure it always is done in a consistent manner. Could the intrusion have been caught earlier with better monitoring of your systems? Of course, 24-hour monitoring or an "on call" system for your staff might not be cost effective, but there are companies out there who can monitor your web facing services for you and alert you in the event of a problem. You might decide you can't afford this or don't need it and that's just fine... just take it into consideration. Use tools such as tripwire and nessus where appropriate - but don't just use them blindly because I said so. 
Take the time to learn how to use a few good security tools that are appropriate to your environment, keep these tools updated and use them on a regular basis. Consider hiring security experts to 'audit' your website security on a regular basis. Again, you might decide you can't afford this or don't need it and that's just fine... just take it into consideration. What steps can you take to reduce the consequences of a successful attack? If you decide that the "risk" of the lower floor of your home flooding is high, but not high enough to warrant moving, you should at least move the irreplaceable family heirlooms upstairs. Right? Can you reduce the amount of services directly exposed to the Internet? Can you maintain some kind of gap between your internal services and your Internet-facing services? This ensures that even if your external systems are compromised the chances of using this as a springboard to attack your internal systems are limited. Are you storing information you don't need to store? Are you storing such information "online" when it could be archived somewhere else. There are two points to this part; the obvious one is that people cannot steal information from you that you don't have, and the second point is that the less you store, the less you need to maintain and code for, and so there are fewer chances for bugs to slip into your code or systems design. Are you using "least access" principles for your web app? If users only need to read from a database, then make sure the account the web app uses to service this only has read access, don't allow it write access and certainly not system-level access. If you're not very experienced at something and it is not central to your business, consider outsourcing it. In other words, if you run a small website talking about writing desktop application code and decide to start selling small desktop applications from the site then consider "outsourcing" your credit card order system to someone like Paypal. If at all possible, make practicing recovery from compromised systems part of your Disaster Recovery plan. This is arguably just another "disaster scenario" that you could encounter, simply one with its own set of problems and issues that are distinct from the usual 'server room caught fire'/'was invaded by giant server eating furbies' kind of thing. ... And finally I've probably left out no end of stuff that others consider important, but the steps above should at least help you start sorting things out if you are unlucky enough to fall victim to hackers. Above all: Don't panic. Think before you act. Act firmly once you've made a decision, and leave a comment below if you have something to add to my list of steps. | {
"source": [
"https://serverfault.com/questions/218005",
"https://serverfault.com",
"https://serverfault.com/users/65500/"
]
} |
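For the "find everything created or modified around the time the attacks started" step mentioned in the question, a hedged sketch with GNU find; the web root and the date range are placeholders for your own values.
find /var/www -type f -newermt '2010-12-24' ! -newermt '2011-01-03' -ls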
218,256 | I use a non-standard ssh port (1234) for most of the servers I connect to. So in my ssh config file I have: Port 1234 But github.com uses port 22. When I try to connect to github it obviously tries to use port 1234. Right now I have to edit my ssh config to get things to work. Here is a snippet from my git config: [remote "origin"]
url = git@github.com:asdf/asdf.git | Just have a look at how to set up your ~/.ssh/config file correctly (man 5 ssh_config). You can specify different settings for different hosts easily. To solve your problem you would set Host github.com
Port 22
Host *
Port 1234 Do have a look at the ssh_config manual page, it explains everything you need to know on the first few pages. | {
"source": [
"https://serverfault.com/questions/218256",
"https://serverfault.com",
"https://serverfault.com/users/64506/"
]
} |
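Two quick ways to check which port a given host will actually use after editing ~/.ssh/config (ssh -G needs a reasonably recent OpenSSH, and the second line is specific to GitHub).
ssh -G github.com | grep -i '^port'   # prints the effective Port for that host
ssh -T git@github.com                 # GitHub replies with a greeting if the port and key are right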
218,707 | I need to configure my machine as to allow HTTP traffic to/from serverfault.com only. All other websites, services ports are not accessible. I came up with these iptables rules: #drop everything
iptables -P INPUT DROP
iptables -P OUTPUT DROP
#Now, allow connection to website serverfault.com on port 80
iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
#allow loopback
iptables -I INPUT 1 -i lo -j ACCEPT It doesn't work quite well: After I drop everything, and move on to rule 3: iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT I get this error: iptables v1.4.4: host/network `serverfault.com' not found
Try `iptables -h' or 'iptables --help' for more information. Do you think it is related to DNS? Should I allow it as well? Or should I just put IP addresses in the rules?
Do you think what I'm trying to do could be achieved with simpler rules? How? I would appreciate any help or hints on this. Thanks a lot! | With IPTables rules, order matters. The rules are added, and applied, in order. Moreover, when adding rules manually they get applied immediately. Thus, in your example, any packets going through the INPUT and OUTPUT chains start getting dropped as soon as the default policy is set. This is also, incidentally, why you received the error
message you did. What is happening is this: the default DROP policy gets applied; iptables receives a hostname as a destination; iptables attempts a DNS lookup on 'serverfault.com'; the DNS lookup is blocked by the DROP action. While the source/destination options will accept hostnames, it is strongly discouraged. To quote the man page, Hostnames will be resolved once
only, before the rule is submitted to
the kernel. Please note that
specifying any name to be resolved
with a remote query such as DNS is a
really bad idea. Slillibri hit the nail on the head with his answer: you missed the DNS ACCEPT rule. In your case it won't matter, but generally I would set the default policy later in the process. The last thing you want is to be working remotely and to allow SSH only after turning on a default deny. Also, depending on your distribution, you should be able to save your firewall rules such that they will be automatically applied at start time (see the sketch after this row). Knowing all that, and rearranging your script, here is what I would recommend. # Allow loopback
iptables -I INPUT 1 -i lo -j ACCEPT
# Allow DNS
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# Now, allow connection to website serverfault.com on port 80
iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Drop everything
iptables -P INPUT DROP
iptables -P OUTPUT DROP | {
"source": [
"https://serverfault.com/questions/218707",
"https://serverfault.com",
"https://serverfault.com/users/34653/"
]
} |
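For the "save your firewall rules so they are applied at start time" point, a sketch; the mechanism and file path are distribution-specific assumptions (Debian/Ubuntu with the iptables-persistent package shown here).
sudo iptables-save | sudo tee /etc/iptables/rules.v4   # dump the running ruleset where it will be reloaded at boot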
218,750 | I started a couple servers on EC2 and they don't have swap. Am I doing something wrong or is it that the machines just don't have any? | You are right, the Ubuntu EC2 EBS images don't come with swap space configured (for 11.04 at least). The "regular" instance-type images do have a swap partition, albeit only 896 MB on the one I tested. If some process blows up and you don't have swap space, your server could come to a crawling halt for a good while before the OOM killer kicks in, whereas with swap, it merely gets slow. For that reason, I always like to have swap space around, even with enough RAM. Here's your options: Create an EBS volume (2-4 times the size of your RAM), attach it to your instance (I like calling it /dev/xvdm for "memory"), sudo mkswap /dev/xvdm , add it to fstab, sudo swapon -a , and you're good to go. I have done this before and it works fine, but it is probably a bit slower than instance store because it goes over the network. Or you might be able to repartition your disk to add a swap partition, though this might require creating a new AMI. I have not been able to do this in a running instance, because I cannot unmount the root file system, and I do not even have access to the disk device (/dev/xvda), only the partition (xvda1). Or you can create a swap file. This is my preferred solution right now. sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
sudo chmod 600 /var/swapfile &&
sudo mkswap /var/swapfile &&
echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
sudo swapon -a Done. :) I know a lot of people feel icky about using files instead of partitions, but it certainly works well enough as emergency swap space. | {
"source": [
"https://serverfault.com/questions/218750",
"https://serverfault.com",
"https://serverfault.com/users/62095/"
]
} |
218,887 | I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error: ERROR 2005 (HY000) at line 3118: Unknown MySQL server host
'╖?*á±dÆ╦N╪Æ·h^ye"π╩i╪ Z+-$▼₧╬Y.∞┌|↕╘l∞/l╞⌂î7æ▌X█XE.ºΓ[ ;╦ï♣éµ♂º╜┤║].♂┐φ9dë╟█'╕ÿG∟═0à¡úè♦╥↑ù♣♦¥'╔NÑ' (11004) I'm attaching the screenshot because i'm assuming the binary data will get lost... I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 Gb) which is not insanely large, but it's not trivially small either, and the other is the fact that many of these tables have JPG images in them (which is why the file is 2Gb large, for the most part). Also, the dump was taken in a Linux machine and I'm importing this into Windows, not sure if that could add to the problems (I understand it shouldn't) Now, that binary garbage is why I think the images in the file might be a problem, but i've been able to import similar dumps from the same hosting company in the past, so i'm not sure what might be the issue. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (i'm not really handy with Linux command line tools like grep, sed, etc). The file might be corrupted, but i'm not exactly sure how to check it. What I downloaded was a .gz file, which I "tested" with WinRar and it says it looks OK (i'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on / how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, i'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel | For this reason I always use mysqldump --hex-blob . Re-dump the database encoding the blobs using this switch and it will work. You can try to import it using a windows mysql client IDE like sqlyog or mysql administrator. It worked for me once. | {
"source": [
"https://serverfault.com/questions/218887",
"https://serverfault.com",
"https://serverfault.com/users/1168/"
]
} |
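Spelled out as commands, the re-dump and re-import might look like this sketch; the credentials and database name are placeholders.
# On the Linux source host
mysqldump --hex-blob -u dbuser -p dbname | gzip > dump.sql.gz
# On the Windows dev machine, after unpacking the .gz
mysql -u root -p dbname < dump.sql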
218,912 | By default, Puppet clients ask for updates every 30 minutes. I would like to change this interval. What is the most convenient way to do it? | On the client(s), edit /etc/puppet/puppet.conf and set the following (add a new line if it's not already present) in the [main] section of the file: runinterval=xxx where xxx is your desired polling interval in seconds. Runinterval How often puppet agent applies the catalog. Note that a runinterval of 0 means “run continuously” rather than “never run.” If you want puppet agent to never run, you should start it with the --no-client option. This setting can be a time interval in seconds (30 or 30s), minutes (30m), hours (6h), days (2d), or years (5y). Default: 30m | {
"source": [
"https://serverfault.com/questions/218912",
"https://serverfault.com",
"https://serverfault.com/users/27416/"
]
} |
218,993 | What's the difference between useradd and adduser ? When/why should I prefer using one or the other? | In the case of Debian and its related distros, adduser is a friendlier interactive frontend to useradd. | {
"source": [
"https://serverfault.com/questions/218993",
"https://serverfault.com",
"https://serverfault.com/users/2401/"
]
} |
219,013 | I have searched for this option already, but have only found solutions that involve custom patching. The fact that it does not show in --help and no more info can be found probably indicates the answer is 'no', but I'd like to see this confirmed. Is it possible to show total file transfer progress with rsync? | There is now an official way to do this in rsync (version 3.1.0, protocol version 31; tested with Ubuntu Trusty 14.04). #> ./rsync -a --info=progress2 /usr .
305,002,533 80% 65.69MB/s 0:00:01 xfr#1653, ir-chk=1593/3594) I tried with my /usr folder because I wanted this feature for transferring whole filesystems, and /usr seemed to be a good representative sample. The --info=progress2 gives a nice overall percentage, even if it's just a partial value. In fact, my /usr folder is more than 6 gigs: #> du -sh /usr
6,6G /usr/ and rsync took a lot of time to scan it all. So almost all the time the percentage I've seen was about 90% completed, but nonetheless it's comforting to see that something is being copied :) References: https://stackoverflow.com/a/7272339/1396334 https://download.samba.org/pub/rsync/NEWS#3.1.0 | {
"source": [
"https://serverfault.com/questions/219013",
"https://serverfault.com",
"https://serverfault.com/users/22/"
]
} |
219,041 | What is the difference? | Pretty much what it says on the tin. In the first case, domain 2's SPF record is included in the SPF record for domain1, but can still be modified eg by adding another A host that isn't permitted for domain2.com: "v=spf1 include:domain2.com a:othermailhost.domain1.com -all" In the second case, domain2's SPF record is used as the complete SPF record for domain1, and no further modifications are possible. "v=spf1 redirect=domain2.com" | {
"source": [
"https://serverfault.com/questions/219041",
"https://serverfault.com",
"https://serverfault.com/users/58001/"
]
} |
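To see what a receiving mail server will actually fetch for each domain while testing include versus redirect, you can pull the published TXT records directly; a verification sketch using the placeholder names from the answer.
dig +short TXT domain1.com
dig +short TXT domain2.com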
219,524 | Looks like someone or something is attempting a brute force attack, trying to log into our production SQL Server instance with the 'sa' account. They haven't been successful because our 'sa' account is disabled, but what steps should I take to make sure things are secure? | Does your SQL Server need to be publicly available to the Internet? This is usually not the case. If it absolutely has to be this way, you could restrict access by IP address or maybe set up a VPN. Obviously, make the sa password unguessable, or see about restricting sa logins to only your LAN IP addresses. Please provide more details so others can assist you with better solutions. | {
"source": [
"https://serverfault.com/questions/219524",
"https://serverfault.com",
"https://serverfault.com/users/3298/"
]
} |
219,764 | We're trying to debug some applications performing broadcast. What is the difference between the broadcast address 255.255.255.255 and the one reported by ifconfig, e.g. Bcast:192.168.1.255? | A broadcast address is always relative to a given network; there is no broadcast address per se. When you have a network, you can compute its broadcast address by replacing all the host bits with 1s; simply put, the broadcast address is the highest numbered address you can have on the network, while the network address is the lowest one (with all host bits set to 0s); this is why you can't use either of them as actual host addresses: they are reserved for this use. If your network is 192.168.1.0/24, then your network address will be 192.168.1.0 and your broadcast address will be 192.168.1.255. If your network is 192.168.0.0/16, then your network address will be 192.168.0.0 and your broadcast address will be 192.168.255.255. And so on... 255.255.255.255 is a special broadcast address, which means "this network": it lets you send a broadcast packet to the network you're connected to, without actually caring about its address; in this respect it is similar to 127.0.0.1, which is a virtual address meaning "local host". More info here: http://en.wikipedia.org/wiki/Broadcast_address | {
"source": [
"https://serverfault.com/questions/219764",
"https://serverfault.com",
"https://serverfault.com/users/10271/"
]
} |
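The "replace all host bits with 1s" computation can be checked quickly from a shell; a sketch using Python's standard ipaddress module, with any CIDR substituted for the example network.
python3 -c 'import ipaddress; n = ipaddress.ip_network("192.168.1.0/24"); print(n.network_address, n.broadcast_address)'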
219,930 | I have some tasks in the Task Scheduler on Windows 2008 R2. I created them as the admin and I'm logged in as the admin. I have no easy way to rename the tasks. The only way I can is to export the task config to a XML file and re-import to a new task, change the name there, then delete the old task. Is there an easier way? | Congratulations! You've come up against a problem that has stumped many a Windows user/admin. No, you cannot rename a task except for exporting, renaming and importing again. Yes, it's rather silly. Perhaps an enterprising scripter could create a simple PowerShell script that automates this, but until then, you're stuck with your export/import two-step. Sorry. =( (You also can't rename a task folder after you've created it.) | {
"source": [
"https://serverfault.com/questions/219930",
"https://serverfault.com",
"https://serverfault.com/users/18156/"
]
} |
220,000 | I haven't put much thought into this until now, but it seems odd that there is a /var/tmp and /tmp directories for most of the linux distros I routinely use (Ubuntu, Centos, Redhat). Is there any semantic difference between the two, like when whoever designed the first file system layout, he or she thought "Not all tmp files are created equal!" The only difference I've found for Centos is that /tmp routinely scrubs out files older than 240 hours while /var/tmp holds onto stale files for 720 hours. | The main difference between both, is that /tmp is wiped whenever the system reboots where as /var/tmp gets preserved across reboots. You'll be able to find further information regarding linux standard directory structures at the following link : http://www.pathname.com/fhs/pub/fhs-2.3.html#VARTMPTEMPORARYFILESPRESERVEDBETWEE . | {
"source": [
"https://serverfault.com/questions/220000",
"https://serverfault.com",
"https://serverfault.com/users/30247/"
]
} |
220,006 | I have 2 cron jobs: I want one of them to run every odd minute (1,3,5,7,9,11....57,59)
and I want the other one to run every even minute (0,2,4,6,8,10,12...58). How can I do it in an easy way? (no scripting - just cron job rules) | */2 * * * * date >>/tmp/even
1-59/2 * * * * date >>/tmp/odd | {
"source": [
"https://serverfault.com/questions/220006",
"https://serverfault.com",
"https://serverfault.com/users/66073/"
]
} |
220,046 | I have a Django site running on Gunicorn with a reverse proxy through Nginx.
Isn't Nginx just an extra unnecessary overhead?
How does adding that on top of Gunicorn help? | I'm going to focus on slow client behavior, and how your configuration handles it, but don't be tempted to believe that is the only benefit. The same method that benefits slow clients also has benefits for fast clients, SSL handling, dealing with traffic surges, and other aspects of serving HTTP on the Internet. Gunicorn is pre-forking software. For low latency communications, such as load balancer to app server or communications between services, pre-fork systems can be very successful. There is no cost in spinning up a process to handle the request, and a single process can be dedicated to handling a single request; the elimination of these things can lead to an overall faster, more efficient system until the number of simultaneous connections exceeds the number of available processes to handle them. In your situation, you are dealing with high latency clients over the internet. These slow clients can tie up those same processes. When QPS matters, the application code needs to receive, handle, and resolve the request as quickly as possible so it can move on to another request. When slow clients communicate directly with your system, they tie up that process and slow it down. Instead of handling and disposing of the request as quickly as possible, that process now also has to wait around for the slow client. Effective QPS goes down. Handling large numbers of connections with very little cpu and memory cost is what asynchronous servers like Nginx are good at. They aren't affected in the same negative manner by slow clients because they are adept at handling large numbers of clients simultaneously. In Nginx's case, running on modern hardware it can handle tens of thousands of connections at once. Nginx in front of a pre-forking server is a great combination. Nginx handles communications with clients, and doesn't suffer a penalty for handling slow clients. It sends requests to the backend as fast as the backend can handle those requests, enabling the backend to be as efficient with server resources as possible. The backend returns the result as soon as it calculates it, and Nginx buffers that response to feed it to slow clients at their own pace. Meanwhile, the backend can move on to handling another request even as the slow client is still receiving the result. | {
"source": [
"https://serverfault.com/questions/220046",
"https://serverfault.com",
"https://serverfault.com/users/64674/"
]
} |
220,112 | Please bear with me as being a bit of a newcomer at 19" rack-mounted equipment. I've thought a fair bit lately about the best way of getting 4x or 6x of 2.5" hard drives into my rack and are currently really confused about would be the best (read economical) solution. After scouting the market, I've found this type of disk array units that offers built in RAID and a lot of drive slots and a truckload of geek cred, but at a price that just isn't going to fit in my budget. I've also found these type of cute adapters that takes two 2.5" drives in one 3.5" slot, but I will obviously need a chassie with a lot of 3.5" spaces in order to make it work. So what is the most economical way to house my harddrives in my rack? | It's easy to look at pictures of hard drive caddies and storage arrays but that isn't going to help. As I'm sure you know, it's not just about getting a large amount of disks and throwing them into a rack - you need to think about how they will be accessed, monitored, controlled, etc. I'm also a little confused - in your question title you talk about "many" hard drives and in the detail you talk about 4 drives - do you literally mean 4 drives, or do you mean 4 drive chassis of the sort in your picture? The most "economical way of getting lots of disk into a rack mount" is difficult to answer because what that actually means changes depending on the problem you're trying to solve. You need to define what you're going to use them for, what sort of risks are acceptable to you and how you define "economical". And while you might have a tight budget, which is fine, you need to accept there will be some real costs here, either in time or money if not both. What sort of problem are you trying to solve In other words, what do you want to do with the disks, how will they connect to the thing(s) you want to use them with, etc. Different types of storage are suited to different types of job - broadly speaking you can divide "a bunch of disks in a rack" into 3 broad categories depending on what they are connected to (there are lots of other ways to group this and break it down, of course) Direct Attached storage - DAS for short. This is a dedicated storage array that plugs directly into an already existing server to expand the storage available on that server, usually via either SCSI , (more recently) SAS , or (typically at the lower end) SATA . This will give you a reasonably economical way of providing a lot of storage to one machine. That one machine might then act as a file server and publish shares on your network to contain files, and you can even find software to turn this hypothetical server into a NAS (see openfiler or FreeNAS for examples) or SAN (openfiler is an obvious example). Network Attached Storage - NAS for short A NAS is essentially a minimalistic server that is dedicated to providing shared storage on a network. Typically this will be an appliance with a highly tuned OS and file system, designed to publish fileshares on a network with reasonable performance and security, and not do a lot else (though many home/small office NASes do other tricks as well). If you're trying to provide bulk "network" storage, perhaps centralised storage for office workers to store documents, or even for their workstations to be backed up to a central point this can be a good bet. You will probably find that a NAS might cost more than a DAS solution, but then you don't also have to provide a server and spend time configuring the server as a file server. 
You pays your money and you takes your choice. There are some cheap NAS devices out there ( like this one ) but once you start talking about rack mountable devices you're talking about "enterprise computing" and the prices and features start going up accordingly. Storage Area Network - SAN for short A SAN is a more specialised network file store, which is designed to allow its storage to be divided between several servers and viewed "logically" on each server as if it were a local direct attached/internal device. SANs are connected to the servers using them by a " network " that is usually (but not always) dedicated to the SAN connections to ensure both good security, reliability and performance. SAN infrastructure and disk typically ranges from "quite expensive" to "Is that really a price, or an international phone number" so with your worries about budget you probably see this outside of your price range - though depending on your requirements it may turn out this is what you need, in which case you may be able to set one up for "free" using the resources I suggest above. Risk, and how you define "economical" You mention a NAS that supports RAID as being out of your price range in your question, but you need to think about risk - only you can define what chances you are prepared to take with your data and how valuable it is, but you need to be aware that the more disks you have in a storage array, the greater the chances that one will fail and the greater the chances that another will fail while the first one has not yet been replaced and brought back online. There's a discussion about this here . This bring us to "economical" - do you consider this to mean you want the cheapest possible solution, period (which will probably be a server with a lot of DAS boxes, configured in one giant RAID 5 array) and damn the problems and risks this might bring? Or do you consider "economical" to mean "good value for money" (which isn't always the same as 'cheapest'). I'm a lot more comfortable with that second definition myself. Other considerations If you want a rack full of hard drives, then you need to be aware that this will require a good power supply and will also generate a lot of heat which will need to be removed/cooled in order to keep the hard disks operating reliably, so air conditioning and careful planning of rack air-flow and power needs may be a requirement. | {
"source": [
"https://serverfault.com/questions/220112",
"https://serverfault.com",
"https://serverfault.com/users/38710/"
]
} |
220,633 | How can you calculate the number of days since 1/1/1970? This is for updating the shadowLastChange attribute on OpenLDAP. Is there a way to do this using the linux date command? | ring0 beat me by a few seconds, but the full command is: echo $(($(date --utc --date "$1" +%s)/86400)) This goes by UTC time. Result: root@hostname:~# echo $((`date --utc --date "$1" +%s`/86400))
14984 A quick check with WolframAlpha shows that this is the correct value. | {
"source": [
"https://serverfault.com/questions/220633",
"https://serverfault.com",
"https://serverfault.com/users/8888/"
]
} |
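Tying this back to the shadowLastChange use case in the question, the computed day count could then be written to the directory with ldapmodify; a rough sketch, where the bind DN, entry DN and value are purely illustrative:
DAYS=$(( $(date --utc +%s) / 86400 ))
ldapmodify -x -D "cn=admin,dc=example,dc=com" -W <<EOF
dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: modify
replace: shadowLastChange
shadowLastChange: $DAYS
EOF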
220,657 | I'm about to buy (for the first time obviously) a bare rackable box to build a server. As I understands things, standard rack size is 19" wide. But some boxes specs (Antec's one to be specific) says they are 17" wide... Is this a problem ? | It will most likely come with ears or rails to make it the right size to fit in the rack. If the device is indicated to be rack-mountable, it will come with the supplied hardware to make up the difference in width. (In my experience, at least) | {
"source": [
"https://serverfault.com/questions/220657",
"https://serverfault.com",
"https://serverfault.com/users/48501/"
]
} |
220,775 | Sometimes a record is listed as www IN A 192.168.1.1 and sometimes it is listed as www A 192.168.1.1 . What is the purpose of the IN and when is it required/not required? | That is referring to the DNS class. 'IN' refers to 'Internet' while the only other option in common use is 'CH' for 'CHAOS'. The CH class is (presently) commonly used for things like querying DNS server versions, while the IN class is the default and generally what "the internet" uses. http://victor.se/bjorn/its/chaos-dns.php http://www.faqs.org/rfcs/rfc2929.html (section 3.3) | {
"source": [
"https://serverfault.com/questions/220775",
"https://serverfault.com",
"https://serverfault.com/users/66322/"
]
} |
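To see the two classes side by side, dig lets you specify the class explicitly; for example (server and domain names are illustrative, and not every server answers the CHAOS query):
dig example.com IN A +short
dig @ns1.example.com version.bind CH TXT +short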
220,988 | I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files). Local server: ftp and lftp commands are available, no ncftp or any graphical tools. Remote server: only accessible via FTP. No rsync nor SSH nor FXP. I am thinking about listing local and remote files to generate a lftp script and then run it. Is there a better way? Note: Uploading only modified files would be a plus, but not required. | lftp should be able to do this in one step, in particular with lftp mirror : The lftp command syntax is confusing, original invocation I posted doesn't work. Try it like this: lftp -e "mirror -R {local dir} {remote dir}" -u {username},{password} {host} note the quotes around the arguments to the -e switch. | {
"source": [
"https://serverfault.com/questions/220988",
"https://serverfault.com",
"https://serverfault.com/users/29296/"
]
} |
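Since the question also asks about removing stale remote files and only uploading what changed, lftp's mirror has flags for both; a sketch with placeholder paths and credentials:
lftp -e "mirror -R --delete --only-newer --verbose /local/dir /remote/dir; quit" -u user,password ftp.example.com
Here --delete removes remote files that no longer exist locally, and --only-newer skips files that have not changed.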
221,007 | I'm running CentOS 64 bit, and just found out I am running prefork MPM on my dual quad Xeon. I was told worker will give me lower memory usage and higher performance, since I run a very high traffic website. If this is true, how do I do it? | Edit:
/etc/sysconfig/httpd Uncomment: HTTPD=/usr/sbin/httpd.worker Restart, voila! | {
"source": [
"https://serverfault.com/questions/221007",
"https://serverfault.com",
"https://serverfault.com/users/62254/"
]
} |
221,337 | Has anyone had any issues with logrotate before that cause a log file to get rotated and then go back to the same size it originally was? Here are my findings: Logrotate Script: /var/log/mylogfile.log {
rotate 7
daily
compress
olddir /log_archives
missingok
notifempty
copytruncate
} Verbose Output of Logrotate: copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
truncating /var/log/mylogfile.log
compressing log with: /bin/gzip
removing old log /log_archives/mylogfile.log.8.gz Log file after truncate happens [root@server ~]# ls -lh /var/log/mylogfile.log
-rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log Literally Seconds Later: [root@server ~]# ls -lh /var/log/mylogfile.log
-rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log RHEL Version: [root@server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux ES release 4 (Nahant Update 4) Logrotate Version: [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
logrotate-3.7.1-10.RHEL4 A few notes: Service can't be restarted on the fly, so that's why I'm using copytruncate Logs are rotating every night, according to the olddir directory having log files in it from each night. | This is probably because even though you truncate the file, the process writing to the file will continue writing at whichever offset it was at last. So what's happening is that logrotate truncates the file, size is zero, the process writes to the file again, continuing at the offset it left off, and you now have a file with NULL-bytes up to the point where you truncated it plus the new entries written to the log. Running od -c on the file after the truncate + sudden growth generated output along the lines of: 0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
33255657600 \0 C K B - s e r v e r [ h t t
33255657620 <more log output> What this says is that from offset 0 to 33255657600 your file consists of null bytes, and then some legible data. Getting to this state doesn't take the same amount of time it would take to actually write all those null-bytes. The ext{2,3,4} filesystems support something called sparse files, so if you seek past a region of a file that doesn't contain anything, that region will be assumed to contain null-bytes and won't take up space on disk. Those null bytes won't actually be written, just assumed to be there, hence growing from 0 to 3.5GB takes hardly any time at all. (You can test this by doing something like dd if=${HOME}/.bashrc of=largefile.bin seek=3432343264 bs=1 , which should generate a file of over 3GB in a few milliseconds). If you run ls -ls on your logfiles after they've been truncated and had a sudden growth again, it should now report a number at the beginning of the line which represents the actual size (in blocks occupied on disk), which probably is orders of magnitude smaller than the size reported by just ls -l . | {
"source": [
"https://serverfault.com/questions/221337",
"https://serverfault.com",
"https://serverfault.com/users/32999/"
]
} |
221,377 | I thought that I could easily check the timestamp of particular files. Then I realized that it wouldn't be so easy when I saw timestamps like 1991 . | The simplest way would probably be (presuming /dev/sda1 holds your root filesystem, /): tune2fs -l /dev/sda1 | grep created This should show you the date on which the file system was created. Confirmed to work on ext2 to ext4, not sure about other file systems! | {
"source": [
"https://serverfault.com/questions/221377",
"https://serverfault.com",
"https://serverfault.com/users/45709/"
]
} |
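If you are not sure which block device actually backs the root filesystem, that can be checked first; the device name below is just an example:
df /                                      # shows which device is mounted at /
dumpe2fs -h /dev/sda1 | grep -i created   # alternative that also prints "Filesystem created"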
221,541 | As a test of the Opteron processor family, I bought a HP DL385 G7 6128 with HP Smart Array P410i Controller - no memory. The machine has 20GB ram 2x146GB 15k rpm SAS + 2x250GB SATA2, both in Raid 1 configurations. I run Vmware ESXi 4.1. Problem: Even with one virtual machine only, tried Linux 2.6/Windows server 2008/Windows 7, the VMs' feel really sluggish. With windows 7, the vmware converter installation even timed out. Tried both SATA and SAS disks and SATA disks are nearly unsusable, while SAS disks feels extremely slow. I can't see a lot of disk activity in the infrastructure client, but I haven't been looking for causes or even tried diagnostics because I have a feeling that it's either because of the cheap raid controller - or simply because of the lack of memory for it. Despite the problems, I continued and installed a virtual machine that serves a key function, so it's not easy to take it down and run diagnostics. Would very much like to know what you guys have to say of it, is it more likely to be a problem with the controller/disks or is it low performance because of budget components? Thanks in advance, | The HP Smart Array P410 is a fine controller, but you will get poor performance out of it if you don't have the battery-backed or flash-backed cache units installed. The cache makes a tremendous difference in that writes are buffered by the cache memory before being committed to disk. You get the write confirmation to the application without having to incur the latency of the physical disk drives. Here's a 4GB dd on a similarly-spec'd system (DL380 G7 with 24GB RAM and a p410 with 2 x SAS disks and 1GB Flash-Backed Write Cache). The RAM helps a lot in a test like this, but you get the idea... [root@xxxx /]# dd if=/dev/zero of=somefile bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 3.70558 seconds, 1.2 GB/s But realistically, your write performance with two SAS drives in a RAID 1 on that controller with the appropriate cache should be between a sustained 130-170 megabytes/second. A quick iozone test on the above server configuration shows: [root@xxxx /]# iozone -t1 -i0 -i1 -r1m -s16g
Write
Avg throughput per process = 166499.47 KB/sec
Rewrite:
Avg throughput per process = 177147.75 KB/sec Since you're using ESXi, you can't run online firmware updates. You should download the Current Smart Update Firmware DVD , burn it to disk and make sure your system is patched to a relatively recent level. Here are the controller's quickspecs: http://h18004.www1.hp.com/products/quickspecs/13201_na/13201_na.html You will want to purchase one of the following, ranging from $350-$600 US: 512MB BBWC 512MB Flash Backed Write Cache 1G Flash Backed Write Cache To answer your question, the cache solution will help the most. Additional disks won't make much of a difference until you handle the caching situation. *Note for other users. If you have cache memory on recent HP controllers with up-to-date firmware, there is a write cache override available if you have RAM on the controller but no battery unit. It's slightly risky, but can be an intermediate step in testing what performance would be like on the way to buying a battery or flash unit. | {
"source": [
"https://serverfault.com/questions/221541",
"https://serverfault.com",
"https://serverfault.com/users/24836/"
]
} |
221,555 | I've exported a Mantis project from one server to another and despite the MySQL SQL file (from which it was populated) showing: (15375,'\r\n1. Log out\r\n\r\n2. When logging in, start ... The final end-user view loses the \r\n and shows it only on one line: 1. Log out 2. When logging in, start typing When viewing through phpMyAdmin, I can see the record properly: 1. Log out
2. When logging in, start typing How can I correct this behavior when displaying this data? | The HP Smart Array P410 is a fine controller, but you will get poor performance out of it if you don't have the battery-backed or flash-backed cache units installed. The cache makes a tremendous difference in that writes are buffered by the cache memory before being committed to disk. You get the write confirmation to the application without having to incur the latency of the physical disk drives. Here's a 4GB dd on a similarly-spec'd system (DL380 G7 with 24GB RAM and a p410 with 2 x SAS disks and 1GB Flash-Backed Write Cache). The RAM helps a lot in a test like this, but you get the idea... [root@xxxx /]# dd if=/dev/zero of=somefile bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 3.70558 seconds, 1.2 GB/s But realistically, your write performance with two SAS drives in a RAID 1 on that controller with the appropriate cache should be between a sustained 130-170 megabytes/second. A quick iozone test on the above server configuration shows: [root@xxxx /]# iozone -t1 -i0 -i1 -r1m -s16g
Write
Avg throughput per process = 166499.47 KB/sec
Rewrite:
Avg throughput per process = 177147.75 KB/sec Since you're using ESXi, you can't run online firmware updates. You should download the Current Smart Update Firmware DVD , burn it to disk and make sure your system is patched to a relatively recent level. Here are the controller's quickspecs: http://h18004.www1.hp.com/products/quickspecs/13201_na/13201_na.html You will want to purchase one of the following, ranging from $350-$600 US: 512MB BBWC 512MB Flash Backed Write Cache 1G Flash Backed Write Cache To answer your question, the cache solution will help the most. Additional disks won't make much of a difference until you handle the caching situation. *Note for other users. If you have cache memory on recent HP controllers with up-to-date firmware, there is a write cache override available if you have RAM on the controller but no battery unit. It's slightly risky, but can be an intermediate step in testing what performance would be like on the way to buying a battery or flash unit. | {
"source": [
"https://serverfault.com/questions/221555",
"https://serverfault.com",
"https://serverfault.com/users/54725/"
]
} |
221,734 | Is there an easy way to set up a bucket in s3 to automatically delete files older than x days? | Amazon now has the ability to set bucket policies to automatically expire content: https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html | {
"source": [
"https://serverfault.com/questions/221734",
"https://serverfault.com",
"https://serverfault.com/users/52260/"
]
} |
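With current tooling the same idea can be expressed as a lifecycle rule from the command line; a sketch using the AWS CLI, where the bucket name, rule ID and 30-day retention are only examples:
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-objects",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json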
221,760 | This question is similar to SSH public key authentication - can one public key be used for multiple users? but it's the other way around. I'm experimenting on using ssh so any ssh server would work for your answers. Can I have multiple public keys link to the same user? What are the benefits of doing so? Also, can different home directories be set for different keys used (all of which link to the same user)? Please let me know if I'm unclear. Thanks. | You can have as many keys as you desire. It's good practice to use separate private/public key sets for different realms anyway, like one set for your personal use, one for your work, etc. First, generate two separate keypairs, one for home and one for work: ssh-keygen -t rsa -f ~/.ssh/id_rsa.home
ssh-keygen -t rsa -f ~/.ssh/id_rsa.work Next, add an entry to your ~/.ssh/config file to pick the key to use based on the server you connect to: Host home
Hostname home.example.com
IdentityFile ~/.ssh/id_rsa.home
User <your home acct>
Host work
Hostname work.example.com
IdentityFile ~/.ssh/id_rsa.work
User <your work acct> Next, append the contents of your id_rsa.work.pub into ~/.ssh/authorized_keys on the work machine, and do the same for the home key on your home machine. Then when you connect to the home server you use one of the keys, and the work server you use another. Note you probably want to add both keys to your ssh-agent so you don't have to type your passphrase all the time. | {
"source": [
"https://serverfault.com/questions/221760",
"https://serverfault.com",
"https://serverfault.com/users/66657/"
]
} |
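Following on from the last sentence of that answer, loading both keys into the agent looks roughly like this (key file names match the example above):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa.home ~/.ssh/id_rsa.work
ssh-add -l    # list the keys the agent is currently holding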
221,761 | I have three VMware Workstation 7.03 VMs collected in one team that works great on Workstation 7.03. I would like to convert/transfer the team to ESXi (I have a working host) but the VMware Client Converter (runs on Windows) will not convert the team (*.vmtm file) to ESXi. Is this doable? How? Limitations? | You can have as many keys as you desire. It's good practice to use separate private/public key sets for different realms anyway, like one set for your personal use, one for your work, etc. First, generate two separate keypairs, one for home and one for work: ssh-keygen -t rsa -f ~/.ssh/id_rsa.home
ssh-keygen -t rsa -f ~/.ssh/id_rsa.work Next, add an entry to your ~/.ssh/config file to pick the key to use based on the server you connect to: Host home
Hostname home.example.com
IdentityFile ~/.ssh/id_rsa.home
User <your home acct>
Host work
Hostname work.example.com
IdentityFile ~/.ssh/id_rsa.work
User <your work acct> Next, append the contents of your id_rsa.work.pub into ~/.ssh/authorized_keys on the work machine, and do the same for the home key on your home machine. Then when you connect to the home server you use one of the keys, and the work server you use another. Note you probably want to add both keys to your ssh-agent so you don't have to type your passphrase all the time. | {
"source": [
"https://serverfault.com/questions/221761",
"https://serverfault.com",
"https://serverfault.com/users/63805/"
]
} |
222,010 | I know that Xen is usually better than OpenVZ as the provider cannot oversell in Xen.
However, what is the difference between Xen PV , Xen KVM and HVM (I was going through this provider's specs)? Which one is better for what purposes and why? Edit: For an end-user who will just be hosting websites, which is better? From an efficiency or other point of view, is there any advantage of one over the other? | Xen supported virtualization types Xen supports running two different
types of guests. Xen guests are often
called as domUs (unprivileged
domains). Both guest types (PV, HVM)
can be used at the same time on a
single Xen system. Xen Paravirtualization (PV) Paravirtualization is an efficient and
lightweight virtualization technique
introduced by Xen, later adopted also
by other virtualization solutions.
Paravirtualization doesn't require
virtualization extensions from the
host CPU. However paravirtualized
guests require special kernel that is
ported to run natively on Xen, so the
guests are aware of the hypervisor and
can run efficiently without emulation
or virtual emulated hardware. Xen PV
guest kernels exist for Linux, NetBSD,
FreeBSD, OpenSolaris and Novell
Netware operating systems. PV guests don't have any kind of
virtual emulated hardware, but
graphical console is still possible
using guest pvfb (paravirtual
framebuffer). PV guest graphical
console can be viewed using VNC
client, or Redhat's virt-viewer.
There's a separate VNC server in dom0
for each guest's PVFB. Upstream kernel.org Linux kernels
since Linux 2.6.24 include Xen PV
guest (domU) support based on the
Linux pvops framework, so every
upstream Linux kernel can be
automatically used as Xen PV guest
kernel without any additional patches
or modifications. See XenParavirtOps wiki page for more
information about Linux pvops Xen
support. Xen Full virtualization (HVM) Fully virtualized aka HVM (Hardware
Virtual Machine) guests require CPU
virtualization extensions from the
host CPU (Intel VT, AMD-V). Xen uses
modified version of Qemu to emulate
full PC hardware, including BIOS, IDE
disk controller, VGA graphic adapter,
USB controller, network adapter etc
for HVM guests. CPU virtualization
extensions are used to boost
performance of the emulation. Fully
virtualized guests don't require
special kernel, so for example Windows
operating systems can be used as Xen
HVM guest. Fully virtualized guests
are usually slower than
paravirtualized guests, because of the
required emulation. To boost performance fully virtualized
HVM guests can use special paravirtual
device drivers to bypass the emulation
for disk and network IO. Xen Windows
HVM guests can use the opensource
GPLPV drivers. See
XenLinuxPVonHVMdrivers wiki page for
more information about Xen PV-on-HVM
drivers for Linux HVM guests. This is from http://wiki.xenproject.org/wiki/XenOverview KVM is not Xen at all, it is another technology, where KVM is a Linux native kernel module and not an additional kernel, like Xen. Which makes KVM a better design. the downside here is that KVM is newer than Xen, so it might be lacking some of the features. | {
"source": [
"https://serverfault.com/questions/222010",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
222,012 | I need some clarification about managing SQL Server 2008. The scenario is as follows:
I have one physical Windows server at the primary site, and I want to host two different application databases on it; shall I create two instances on the same server, or have a different server for the second database? The first database is for management purposes while the second would be used for reporting.
There is a second database at the secondary site, which will be in passive mode, and I intend to connect them through MSCS.
Now, can I have both instances on a single server and will both work fine? The management database will be used more.
Please reply soon.
Can both instances have dedicated resources allocated to them? | Xen supported virtualization types Xen supports running two different
types of guests. Xen guests are often
called as domUs (unprivileged
domains). Both guest types (PV, HVM)
can be used at the same time on a
single Xen system. Xen Paravirtualization (PV) Paravirtualization is an efficient and
lightweight virtualization technique
introduced by Xen, later adopted also
by other virtualization solutions.
Paravirtualization doesn't require
virtualization extensions from the
host CPU. However paravirtualized
guests require special kernel that is
ported to run natively on Xen, so the
guests are aware of the hypervisor and
can run efficiently without emulation
or virtual emulated hardware. Xen PV
guest kernels exist for Linux, NetBSD,
FreeBSD, OpenSolaris and Novell
Netware operating systems. PV guests don't have any kind of
virtual emulated hardware, but
graphical console is still possible
using guest pvfb (paravirtual
framebuffer). PV guest graphical
console can be viewed using VNC
client, or Redhat's virt-viewer.
There's a separate VNC server in dom0
for each guest's PVFB. Upstream kernel.org Linux kernels
since Linux 2.6.24 include Xen PV
guest (domU) support based on the
Linux pvops framework, so every
upstream Linux kernel can be
automatically used as Xen PV guest
kernel without any additional patches
or modifications. See XenParavirtOps wiki page for more
information about Linux pvops Xen
support. Xen Full virtualization (HVM) Fully virtualized aka HVM (Hardware
Virtual Machine) guests require CPU
virtualization extensions from the
host CPU (Intel VT, AMD-V). Xen uses
modified version of Qemu to emulate
full PC hardware, including BIOS, IDE
disk controller, VGA graphic adapter,
USB controller, network adapter etc
for HVM guests. CPU virtualization
extensions are used to boost
performance of the emulation. Fully
virtualized guests don't require
special kernel, so for example Windows
operating systems can be used as Xen
HVM guest. Fully virtualized guests
are usually slower than
paravirtualized guests, because of the
required emulation. To boost performance fully virtualized
HVM guests can use special paravirtual
device drivers to bypass the emulation
for disk and network IO. Xen Windows
HVM guests can use the opensource
GPLPV drivers. See
XenLinuxPVonHVMdrivers wiki page for
more information about Xen PV-on-HVM
drivers for Linux HVM guests. This is from http://wiki.xenproject.org/wiki/XenOverview KVM is not Xen at all, it is another technology, where KVM is a Linux native kernel module and not an additional kernel, like Xen. Which makes KVM a better design. the downside here is that KVM is newer than Xen, so it might be lacking some of the features. | {
"source": [
"https://serverfault.com/questions/222012",
"https://serverfault.com",
"https://serverfault.com/users/38731/"
]
} |
222,043 | I can't visualize in my mind the network traffic flow. eg. If there are 15 pc's in a LAN When packet goes from router to local LAN, do it passes all the computers? Does it go to the ethernet card of every computer and those computers accept the packet based on their physical address? To which pc the packet will go first? To the nearest to the router? What happens if that first pc captures that packet(though it is not for it)? What happens when a pc broadcast a message? Do it have to generate 14 packets for all the pc's or only one packet reach to all pc's? If it is one packet and captured by first pc, how other pc's can get that? I can't imagine how this traffic is exactly flows? May be my analogy is completely wrong. Can anybody explain me this? | The exact procedure depends on the type of networks, the topology, and the equipment. I will attempt to describe the process with regard to most Ethernet networks. Terms : MAC Address : Like a Social Security Number. It doesn't change as you move IP Address : Like an address, when you move (over long distances), it changes. TCP Packet : Data with TCP Port information (sometimes referred to as a TCP Segment) IP Datagram : Data with IP information Ethernet Frame : Data with MAC information The IP Address is divided into two parts, the network and the node. The subnet you configure on your computer or router determines what network an IP address is on. You need to configure an interface with an IP Address (and subnet) to route to it. Depending on your router, there are several things that might happen when it receives a packet: Home Router ( NAT Gateway) Packet comes in on Router Router extracts IP address from IP Datagram Router checks destination, 3. If the address is not the current router, it usually drops the packet (read below if its more than a NAT gateway) Router extracts port number from the packet
5. Router checks forwarding tables to see if that port is associated with an internal IP Address If yes: Delivers it (see below) Otherwise: Drop " Real Router " Packet comes in on Router Router extracts IP address from IP Datagram Router checks to see if it is a part of destination IP network If yes: delivers it (see below) Otherwise, check the TTL (also from the IP Datagram) to see if it should be dropped or signalled as undeliverable. If still deliverable, check routing table for network destination, forward it to next router if known. Otherwise, forward it to the default gateway, drop the packet, or send back an ICMP response that it's undeliverable. (depends on configuration) Delivery (Ethernet) Router checks to see if IP address is in its ARP table (IP address to MAC addresses). If not, send an ARP request to locate the MAC address. Once an ARP response is received, send the packet to that MAC address. The ARP request is a broadcast frame, so every computer sees the request. If there is no response, it may be silently dropped or responded to. The router only sends one frame for the broadcast (if it's also a bridge, it may send it out on each interface the bridge is on). To send broadcast frames, there is a special address called the Broadcast Address . On Ethernet networks, the address is FF:FF:FF:FF:FF:FF (all 1s in binary). Bridges (including switches) recognise frames directed to FF:FF:FF:FF:FF:FF as a broadcast, and transmit it on every port. Some bridges (like managed switches) keep track of ARP requests themselves, so that they do not need to broadcast and can simply reply with what's in their ARP cache. Firewalls can be configured to block these broadcasts, but this may have detrimental effects on the IP network (the sides of the firewall can no longer talk to each other without a router). Getting from the router to the node depends on the hardware (usually a bridge, a switch, or a hub) Bridge A bridge takes input from one port, and sends it to one (or more) ports. Technically, switches are bridges, but firewalls and wireless access points are also bridges. Switch A switch remembers which port has which MAC address. (Usually, it'll learn it from the ARP response). The switch will send the frame (which contains a packet) to the destination port. In the rare instances that the switch doesn't know the MAC address, it behaves like a Hub and sends the information to every port. Hub A hub will not remember which port has which MAC address. A hub will always send the frame to every port. There are a lot of problems (like collisions) associated with hubs. Delivery (again) Finally, the frame will continue on through hubs and switches until it reaches its destination or is dropped. Things like STP exist to prevent it from being forwarded forever. | {
"source": [
"https://serverfault.com/questions/222043",
"https://serverfault.com",
"https://serverfault.com/users/58859/"
]
} |
222,424 | Can anyone tell me how to get the PID of a command executed in bash? E.g. I have a bash script that runs imapsync. When the script is killed the imapsync process does not always get killed, so I'd like to be able to identify the PID of imapsync programmatically from my script, so that I can kill the imapsync process myself in a signal handler. So how do I programmatically get the PID of a child process from a parent bash script? | any_command args &
my_child_PID=$! IOW, just like $$ holds your PID, $! has the PID of the most recently executed background command . | {
"source": [
"https://serverfault.com/questions/222424",
"https://serverfault.com",
"https://serverfault.com/users/5464/"
]
} |
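Applied to the imapsync scenario in the question, a minimal sketch of a wrapper with a signal handler might look like this (the imapsync arguments are placeholders):
#!/bin/bash
imapsync "$@" &                              # start the long-running child in the background
child_pid=$!                                 # PID of the most recently started background job
trap 'kill "$child_pid" 2>/dev/null' INT TERM EXIT
wait "$child_pid"                            # block until the child exits or we are signalled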
222,430 | I have been running PostgreSQL on Windows Server 2003 without a hitch and its fast, so to answer my own question it seems fine. However I am about to launch a new project and am considering using a Linux box instead as stability and performance are crucial. Since PostgreSQL seems to be developed on Linux distributions mostly, maybe it would be better to stick with Linux? | PostgreSQL will definitely run faster on Linux than on Windows (and I say this as one of the guys who wrote the windows port of it..) It is designed for a Unix style architecture, and implements this same architecture on Windows, which means it does a number of things that Windows isn't designed to do well. It works fine, but it doesn't perform as well. For example, PostgreSQL uses a process-per-connection model, not threading. Windows is designed to do threading. If your application does lots of connect and disconnects, it will definitely run significantly slower on Windows, for example. There is also some assumptions around the filesystem which don't exactly favor NTFS. The one thing you really need to think about - if you are on Windows, most antivirus products will bug out when used with PostgreSQL, because they are not used to this type of workload (such as 1000 different processes reading and writing to the same file through different handles). That means that the strong recommendation is to always uninstall any antivirus if possible (just disabling it or excluding the PostgreSQL processes/files is often not enough). And this is not just for performance reasons, but also stability under load. | {
"source": [
"https://serverfault.com/questions/222430",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
222,982 | Is there a nice way to monitor and/or control Intel Turbo Boost technology on Nehalem processors from a Linux host? I'm looking to do this on RHEL/CentOS 5.5 hosts running stock or Realtime MRG kernels. Has anyone here found a good way to leverage Turbo Boost in your environments? | i7z is a good tool for monitoring Intel Turbo Boost on Intel CPUs that support it (i7 and later) on Linux. If it is working, you will see the current frequency change as you add load to the CPUs, due to the multiplier increasing dynamically under load. Try BurnP6 for this. Basic description (pdf) of power states: C0 - active state. While in C0, instructions are being executed by the core. For
Intel® Turbo Boost technology, a core in C0 is considered an active core. C1 - halt state. While in C1, no instructions are being executed. For Intel® Turbo Boost technology, a core in C1 is considered an active core. C3 - While in C3 the core PLLs are turned off, and all the core caches are flushed. For Intel® Turbo Boost technology, a core in C3 is considered an inactive core. C6 - While in C6, the core PLLs are turned off, the core caches are flushed and the core state is saved to the Last Level Cache. Power Gates are used to reduce power consumption to close to zero. For
Intel® Turbo Boost technology, a core in C6 is considered an inactive core. C7 - New, slightly deeper sleep state introduced with Sandy Bridge and later. Be warned that C6 and C7 states are "deep" sleep modes and may have some latency penalties that might not be great for certain types of server workloads. For more detail see Intel's Power Management for Embedded Apps (pdf). Turbo Boost is P0 state, kind of the opposite of sleep. It scales the CPU multipliers up when only a few cores are active, but ramps down under extreme multi-core load to prevent thermal issues with the CPU. In general ACPI support must be enabled in Linux for i7z to show correct temps and Turbo Boost (dynamic multipliers above the default) to work. You can find useful information on how to enable Intel Turbo Boost on Linux in this post. | {
"source": [
"https://serverfault.com/questions/222982",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
223,305 | How can I keep updated of changes to the range of IP addresses that Amazon will use for EC2 instances. I want to add a range of IPs to my firewall settings to allow access to my 'ground based' mysql database from instances started by my autoscale policy. As i understand each instance will have an IP address that will be in this range? is this correct? | https://ip-ranges.amazonaws.com/ip-ranges.json This question has been answered before, but here is the link to the forums, select the sticky link to the list of Ip ranges used by Amazon, it gets updated reliably when they add new information. EDIT: Changed link, whenever the post is updated the link breaks, so just gave a static link to the forum page with the sticky, should be safer. | {
"source": [
"https://serverfault.com/questions/223305",
"https://serverfault.com",
"https://serverfault.com/users/23609/"
]
} |
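The published JSON is easy to filter programmatically; for example, to list only the EC2 prefixes (assumes curl and jq are available):
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.service == "EC2") | .ip_prefix'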
223,306 | I am a php developer and have recently decided to make one of my Magento extensions commercial. I have downloaded and configured MageParts CEM Server and that is all working perfectly in regard to licencing and delivery of module packages. The only issue is that the directory that the packages are stored in could be accessed by anyone. I tried this in a .htaccess file, but now it is not working. <Files services.wsdl>
allow from all
</Files>
deny from all Clients are receiving a 403 Forbidden response. Have I done something wrong in the .htaccess file or would there be a better way to secure the directory? Any help would be greatly appreciated. | https://ip-ranges.amazonaws.com/ip-ranges.json This question has been answered before, but here is the link to the forums, select the sticky link to the list of Ip ranges used by Amazon, it gets updated reliably when they add new information. EDIT: Changed link, whenever the post is updated the link breaks, so just gave a static link to the forum page with the sticky, should be safer. | {
"source": [
"https://serverfault.com/questions/223306",
"https://serverfault.com",
"https://serverfault.com/users/65308/"
]
} |
223,500 | Good day. While this post discusses a similar setup to mine serving blank pages occasionally after having made a successful installation, I am unable to serve anything but blank pages. There are no errors present in /var/log/nginx/error.log , /var/log/php-fpm.log or /var/log/nginx/us/sharonrhodes/blog/error.log . My setup: Wordpress 3.0.4 nginx 0.8.54 php-fpm 5.3.5 (fpm-fcgi) Arch Linux Configuration Files php-fpm.conf: [global]
pid = run/php-fpm/php-fpm.pid
error_log = log/php-fpm.log
log_level = notice
[www]
listen = 127.0.0.1:9000
listen.owner = www
listen.group = www
listen.mode = 0660
user = www
group = www
pm = dynamic
pm.max_children = 50
pm.start_servers = 20
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 500 nginx.conf: user www;
worker_processes 1;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
include /etc/nginx/sites-enabled/*.conf;
} /etc/nginx/sites-enabled/blog_sharonrhodes_us.conf: upstream php {
server 127.0.0.1:9000;
}
server {
error_log /var/log/nginx/us/sharonrhodes/blog/error.log notice;
access_log /var/log/nginx/us/sharonrhodes/blog/access.log;
server_name blog.sharonrhodes.us;
root /srv/apps/us/sharonrhodes/blog;
index index.php;
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location / {
# This is cool because no php is touched for static content
try_files $uri $uri/ /index.php?q=$uri&$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
#NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
include fastcgi_params;
fastcgi_intercept_errors on;
fastcgi_pass php;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
} /etc/nginx/conf/fastcgi.conf: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200; | By default the Nginx source does not define SCRIPT_FILENAME in the fastcgi_params file, so unless the repo you installed Nginx from does that you need to do it yourself. Check if the following line is in your fastcgi_params file: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; and if not then add it. | {
"source": [
"https://serverfault.com/questions/223500",
"https://serverfault.com",
"https://serverfault.com/users/67165/"
]
} |
223,509 | How can I check what modules have been added to an nginx installation? | nginx -V will list all the configured modules. There is no explicit enable/load command. | {
"source": [
"https://serverfault.com/questions/223509",
"https://serverfault.com",
"https://serverfault.com/users/53070/"
]
} |
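Note that nginx -V prints to stderr, so redirect it if you want to filter the list of configure-time module flags; for example:
nginx -V 2>&1 | tr ' ' '\n' | grep _module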
223,601 | Simple question:
How can I set up multiple MAC addresses on one physical network interface (Linux)? Why?
My ISP is checking ip<->mac on the GW and I'd like to route traffic through my "linuxbox" and then forward it with a different source IP. Without the ip<->mac check I would use eth0, eth0:0, but in this situation I need a unique MAC address for every IP. | You can use macvlan to create multiple virtual interfaces with different MAC addresses. ip link add link eth0 address 00:11:11:11:11:11 eth0.1 type macvlan
ip link add link eth0 address 00:22:22:22:22:22 eth0.2 type macvlan In theory that should be all you need, though at some point something broke in the kernel and it would cause it to use one MAC for everything. I'm not sure what the status of that is; hopefully it's fixed. If not, you could use arptables to rewrite the MAC addresses on output based on the egress interface or on input based on destination IP: arptables -A OUT -o eth0.1 --arhln 06 -j mangle --mangle-hw-s 00:11:11:11:11:11
arptables -A OUT -o eth0.2 --arhln 06 -j mangle --mangle-hw-s 00:22:22:22:22:22
arptables -A IN -d 192.168.1.1 --arhln 06 -j mangle --mangle-hw-d 00:11:11:11:11:11
arptables -A IN -d 192.168.1.2 --arhln 06 -j mangle --mangle-hw-d 00:22:22:22:22:22 Unfortunately arptables is also quite buggy in my experience. | {
"source": [
"https://serverfault.com/questions/223601",
"https://serverfault.com",
"https://serverfault.com/users/67200/"
]
} |
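The macvlan links created above still need addresses and to be brought up before use; roughly (the addresses are examples):
ip addr add 192.0.2.10/24 dev eth0.1
ip addr add 192.0.2.11/24 dev eth0.2
ip link set eth0.1 up
ip link set eth0.2 up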
224,122 | I've the following configuration: SSLEngine on
SSLCertificateFile /etc/httpd/conf/login.domain.com.crt
SSLCertificateKeyFile /etc/httpd/conf/login.domain.com.key
SSLCipherSuite ALL:-ADH:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP but I don't know how to generate .crt and .key files. | crt and key files represent both parts of a certificate, key being the private key to the certificate and crt being the signed certificate. It's only one of the ways to generate certs, another way would be having both inside a pem file or another in a p12 container. You have several ways to generate those files, if you want to self-sign the certificate you can just issue this commands openssl genrsa 2048 > host.key
chmod 400 host.key
openssl req -new -x509 -nodes -sha256 -days 365 -key host.key -out host.cert Note that with self-signed certificates your browser will warn you that the certificate is not "trusted" because it hasn't been signed by a certification authority that is in the trust list of your browser. From there onwards you can either generate your own chain of trust by making your CA or buy a certificate from a company like Verisign or Thawte. | {
"source": [
"https://serverfault.com/questions/224122",
"https://serverfault.com",
"https://serverfault.com/users/67251/"
]
} |
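If the certificate is to be signed by a commercial CA instead of self-signed, the same private key can be used to produce a CSR, and the issued certificate can be inspected afterwards; file names are arbitrary:
openssl req -new -key host.key -out host.csr     # submit host.csr to the CA
openssl x509 -in host.cert -noout -text          # inspect the certificate you got back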
224,467 | At the office where I work, three of the other members of the IT staff are logged into their computers all the time with accounts that are members of the domain administrators group. I have serious concerns about being logged in with admin rights (either local or for the domain). As such, for everyday computer use, I use an account that just has regular user privelages. I also have an different account that is part of the domain admins group. I use this account when I need to do something that requires elevated privilages on my computer, one of the servers, or on another user's computer. What is the best practice here? Should network admins be logged in with rights to the entire network all the time (or even their local computer for that matter)? | Absolute best-practice is to Live User, Work Root . The user you're logged in as when you hit refresh on Server Fault every 5 minutes should be a normal user. The one you use to diagnose Exchange routing problems should be Admin. Getting this separation can be hard, since in Windows at least it requires dual login-sessions and that means two computers in some way. VMs work real well for this, and that's how I solve it. I've heard of organizations that login-restrict their elevated accounts to certain special VMs hosted internally, and admins rely on RDP for access. UAC helps limit what an admin can do (accessing special programs), but the continual prompts can be just as annoying as having to remote into a whole other machine to do what needs doing. Why is this a best-practice? In part it's because I said so , and so do a lot of others. SysAdminning doesn't have a central body that sets best-practices in any kind of definitive way. In the last decade we've had some IT Security best-practices published suggesting that you only use elevated privs when you actually need them. some of the best-practice is set through the gestalt of experience by sysadmins over the last 40+ years. A paper from LISA 1993 ( link ), an example paper from SANS ( link , a PDF), a section from SANS 'critical security controls' touches on this ( link ). | {
"source": [
"https://serverfault.com/questions/224467",
"https://serverfault.com",
"https://serverfault.com/users/52664/"
]
} |
224,560 | I need some help setting the correct permissions or ownership of the apache document root. Here is what I need: different websites stored in /var/www/html/<site> two users should update/manage the websites through ssh ownership should be different than the apache user (for security) How can I do this? At the moment all files are world-writeable, which isn't good. The server runs CentOS 5.5 Thanks | Create a new group groupadd webadmin Add your users to the group usermod -a -G webadmin user1
usermod -a -G webadmin user2 Change ownership of the sites directory chown root:webadmin /var/www/html/ Change permissions of the sites directory chmod 2775 /var/www/html/ -R Now anybody can read the files (including the apache user) but only root and webadmin can modify their contents. | {
"source": [
"https://serverfault.com/questions/224560",
"https://serverfault.com",
"https://serverfault.com/users/67518/"
]
} |
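If the tree already contains files, note that a blanket recursive chmod would also mark plain files executable; fixing directories and files separately avoids that:
find /var/www/html -type d -exec chmod 2775 {} \;
find /var/www/html -type f -exec chmod 0664 {} \;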
224,810 | Is there any equivalent or port of ssh-copy-id available for Windows? That is, is there an easy way to transfer SSH keys from a local machine to a remote server under Windows? In case it helps, I'm using Pageant and Kitty (a Putty alternative) already. | ssh-copy-id is a pretty simple script that should be pretty easy to replicate under windows. If you ignore all the parameter handling, error handling, and so on, these are the two commands from ssh-copy-id that are actually doing the work most of the time. GET_ID="cat ${ID_FILE}"
{ eval "$GET_ID" ; } | ssh ${1%:} "umask 077; test -d .ssh || mkdir .ssh ; cat >> .ssh/authorized_keys" || exit 1 Using the putty tools a command like this should be equivalent (not tested). type public_id | plink.exe username@hostname "umask 077; test -d .ssh || mkdir .ssh ; cat >> .ssh/authorized_keys" If you want to do all the same error handling, and the automatic key location, I am sure writing a script under Windows will be a lot trickier, but certainly possible. | {
"source": [
"https://serverfault.com/questions/224810",
"https://serverfault.com",
"https://serverfault.com/users/61293/"
]
} |
224,813 | Q1) Do I need mod_deflate running on apache? does it help in performance in anyway? Q2) Do I need mod_cache running on apache if nginx is serving a static caching proxy? <IfModule mod_cache.c>
CacheEnable disk http://website.com/
CacheIgnoreNoLastMod On
CacheMaxExpire 86400
CacheLastModifiedFactor 0.1
CacheStoreNoStore Off
CacheStorePrivate On
<IfModule mod_disk_cache.c>
CacheDefaultExpire 3600
CacheDirLength 3
CacheDirLevels 2
CacheMaxFileSize 640000
CacheMinFileSize 1
CacheRoot /opt/apicache
</IfModule>
</IfModule> | ssh-copy-id is a pretty simple script that should be pretty easy to replicate under windows. If you ignore all the parameter handling, error handling, and so on, these are the two commands from ssh-copy-id that are actually doing the work most of the time. GET_ID="cat ${ID_FILE}"
{ eval "$GET_ID" ; } | ssh ${1%:} "umask 077; test -d .ssh || mkdir .ssh ; cat >> .ssh/authorized_keys" || exit 1 Using the putty tools a command like this should be equivalent (not tested). type public_id | plink.exe username@hostname "umask 077; test -d .ssh || mkdir .ssh ; cat >> .ssh/authorized_keys" If you want to do all the same error handling, and the automatic key location, I am sure writing a script under Windows will be a lot trickier, but certainly possible. | {
"source": [
"https://serverfault.com/questions/224813",
"https://serverfault.com",
"https://serverfault.com/users/67177/"
]
} |
224,920 | I'm trying to understand DNS a bit better, but I still don't get A and NS records completely. As far as I understood, the A record tells which IP-address belongs to a (sub) domain, so far it was still clear to me. But as I understood, the NS record tells which nameserver points belongs to a (sub) domain, and that nameserver should tell which IP-address belongs to a (sub) domain. But that was already specified in the A record in the same DNS file. So can someone explain to me what the NS records and nameservers exactly do, because probably I understood something wrong. edit: As I understand you correctly, a NS record tells you were to find the DNS server with the A record for a certain domain, and the A record tells you which ip-address belongs to a domain. But what is the use of putting an A and an NS record in the same DNS file? If there is already an A record for a certain domain, then why do you need to point to another DNS server, which would probably give you the same information? | Some examples out of the fictitious foo.com zone file ....... SOA record & lots more stuff .......
foo.com. IN NS ns1.bar.com.
foo.com. IN A 192.168.100.1
....... More A/CNAME/AAAA/etc. records ....... A Record = "The host called foo.com lives at address 192.168.100.1" NS Record = "If you want to know about hosts in the foo.com zone, ask the name server ns1.bar.com" | {
"source": [
"https://serverfault.com/questions/224920",
"https://serverfault.com",
"https://serverfault.com/users/67634/"
]
} |
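A quick way to see the difference in practice, assuming the dig utility is available (foo.com stays the fictitious zone from the answer):
dig NS foo.com +short    # which name servers are authoritative for the foo.com zone?
dig A foo.com +short     # which IP address does the name foo.com map to?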
225,155 | I was wondering if someone could give me a simple guide on how to set up virtual networking in VirtualBox (4.0.2) so that the following scenarios work: Both Host and Guest can access the Internet Host can ping Guest and vice versa Host can access, for example, an apache web server running on Guest and vice versa I've been fiddling around with the various Network Adapters available in the settings for my Guest, but I'm just not able to figure it out. Is there anyone that can help me out here? The host is running Windows 7 32-bit and the guest is running Ubuntu 10.10 32-bit. | Try this: Setup the virtualbox to use 2 adapters: The first adapter is set to NAT (that will give you the internet connection). The second adapter is set to host only . Start the virtual machine and assign a static IP for the second adapter in Ubuntu (for instance 192.168.56.56 ). The host Windows will have 192.168.56.1 as IP for the internal network ( VirtualBox Host-Only Network is the name in network connections in Windows). What this will give you is being able to access the apache server on ubuntu, from windows, by going to 192.168.56.56. Also, Ubuntu will have internet access, since the first adapter (set to NAT) will take care of that. Now, to make the connection available both ways (accessing the windows host from the ubuntu guest) there's still one more step to be performed. Windows will automatically add the virtualbox host-only network to the list of public networks and that cannot be changed. This entails that the firewall will prevent proper access. To overcome this and not make any security breaches in your setup: go to the windows firewall section, in control panel, click on advanced settings. In the page that pops up, click on inbound rules (left column), then on new rule (right column). Chose custom rule, set the rule to allow all programs, and any protocol. For the scope, add in the first box (local IP addresses) 192.168.56.1, and in the second box (remote IP) 192.168.56.56. Click next, select allow the connection, next, check all profiles, next, give it a name and save. That's it, now you have 2 way communication, with apache/any other service available as well as internet.
The final step is to setup a share. Do not use the shared folders feature in virtualbox, it's quite buggy especially with windows 7 (and 64 bit). Instead use samba shares - fast and efficient. Follow this link for how to set that up: https://wiki.ubuntu.com/MountWindowsSharesPermanently | {
"source": [
"https://serverfault.com/questions/225155",
"https://serverfault.com",
"https://serverfault.com/users/773/"
]
} |
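A minimal sketch of the static-IP step above for an Ubuntu 10.10 guest; it assumes the host-only adapter shows up as eth1 inside the guest.
# /etc/network/interfaces -- second (host-only) adapter, assumed to be eth1
auto eth1
iface eth1 inet static
    address 192.168.56.56
    netmask 255.255.255.0
After editing, sudo /etc/init.d/networking restart (or a reboot) applies the address.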
225,777 | I am having a few situations for which I do not see anything in the du man pages. I want to see files in a subdirectory which are larger than a particular size only. I use du -sh > du_output.txt I see the output as described for options -s and -h . However, what I am more interested in is whether the output comes in a format which is, say for example:
dir0--->dir1-->dir3-->dir4
 |                     |
 ->dir2                |-file1
                       |-file2
If the above is the directory layout and I want to just see the size of individual directories in all the subdirectories, then what can I do? (The depth of each subdirectory is variable.) | To only show folders over 1GB in size: du -h --threshold=1G You may also want to order by size, to easily find the biggest ones. du -h --threshold=1G | sort -h (Works on: Ubuntu/Mint. Does not work on: OSX or RHEL 6.2) | {
"source": [
"https://serverfault.com/questions/225777",
"https://serverfault.com",
"https://serverfault.com/users/49959/"
]
} |
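If the goal is a per-directory breakdown at a controlled depth rather than a size filter, GNU du can do that too; a sketch assuming GNU coreutils and dir0 as the starting point:
du -h --max-depth=2 dir0 | sort -h    # size of each directory up to two levels deep, smallest first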
225,798 | Even when /tmp has no file called something , searching for it with find will return 0: $ find /tmp -name something
$ echo $?
0 How can I get a non-zero exit status when find does not find anything? | find /tmp -name something | grep . The return status will be 0 when something is found, and non-zero otherwise. EDIT: Changed from egrep '.*' to the much simpler grep . , since the result is the same. | {
"source": [
"https://serverfault.com/questions/225798",
"https://serverfault.com",
"https://serverfault.com/users/67054/"
]
} |
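A small usage sketch of the grep trick above inside a script, using the path and name from the question:
if find /tmp -name something | grep -q . ; then
    echo "found at least one match"
else
    echo "no match"      # reached when grep exits non-zero, i.e. find printed nothing
fi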
225,948 | I just installed Nginx on Mac OS X (thanks http://www.kevinworthington.com/nginx-mac-os-snow-leopard-2-minutes/ ), but how do I restart Nginx on Mac OS X? Thanks! | sudo nginx -s stop && sudo nginx | {
"source": [
"https://serverfault.com/questions/225948",
"https://serverfault.com",
"https://serverfault.com/users/67949/"
]
} |
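Two related commands that are often useful alongside the stop/start pair above (both are standard nginx flags, not macOS-specific):
sudo nginx -t           # test the configuration before touching the running server
sudo nginx -s reload    # reload the configuration without a full stop/start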
225,999 | Any disadvantage to short DNS TTL? | Your DNS should not change very often in the first place. Many DNS servers do not honor your TTL "request" and impose their own policy. If you're going to make a change, set the TTL lower weeks before the change. Normally having a long TTL helps reduce load on your authoritative server(s) and saves clients a bit of lookup time when accessing your site. I commonly use 3600, or even 36000 depending on the situation. | {
"source": [
"https://serverfault.com/questions/225999",
"https://serverfault.com",
"https://serverfault.com/users/66939/"
]
} |
226,074 | I finally set up a realistic backup schedule on my data through a shell script, which is handled by cron on tight intervals. Unfortunately, I keep getting empty emails each time the cron job has been executed and not only when things go wrong. Is it possible to only make cron send emails when something goes wrong, i.e. my TAR doesn't execute as intended? Here's how my crontab is set up for the moment: 0 */2 * * * /bin/backup.sh 2>&1 | mail -s "Backup status" [email protected] Thanks a lot! | Ideally you'd want your backup script to output nothing if everything goes as expected and only produce output when something goes wrong. Then use the MAILTO environment variable to send any output generated by your script to your email address. [email protected]
0 */2 * * * /bin/backup.sh If your script normally produces output but you don't care about it in cron, just send it to /dev/null and it'll email you only when something is written to stderr. [email protected]
0 */2 * * * /bin/backup.sh > /dev/null | {
"source": [
"https://serverfault.com/questions/226074",
"https://serverfault.com",
"https://serverfault.com/users/38710/"
]
} |
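A sketch of what a 'quiet on success, noisy on failure' backup script can look like; the tar paths are placeholders, not values from the question.
#!/bin/sh
# stays silent when everything works, writes to stderr (and exits non-zero) on failure
if ! tar -czf /backups/data.tar.gz /data 2>/tmp/backup.err; then
    cat /tmp/backup.err >&2
    exit 1
fi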
226,078 | I've set up a scheduled backup using the Windows Server Backup. Also, I've created a managed service account that should execute the backup, but the Schedule wizard doesn't accept the user account. The account is granted membership to the Backup Operators group and have all necessary permissions to the backup location. Is it possible to assign the managed service account to the backup service by other means? Update: since the backup is scheduled using the Task Scheduler in Windows, I've also tried to configure the job to run as a managed service account without success. | Ideally you'd want your backup script to output nothing if everything goes as expected and only produce output when something goes wrong. Then use the MAILTO environment variable to send any output generated by your script to your email address. [email protected]
0 */2 * * * /bin/backup.sh If your script normally produces output but you don't care about it in cron, just sent it to /dev/null and it'll email you only when something is written to stderr. [email protected]
0 */2 * * * /bin/backup.sh > /dev/null | {
"source": [
"https://serverfault.com/questions/226078",
"https://serverfault.com",
"https://serverfault.com/users/16646/"
]
} |
226,090 | My file server (Ubuntu) on the internet runs OpenSSH, where users upload and download files using scp or WinSCP. At the same time, the server runs some web applications (http), which need higher priority. Is there a way to give HTTP priority over SSH file transfer? If not, can I limit SSH bandwidth? | Ideally you'd want your backup script to output nothing if everything goes as expected and only produce output when something goes wrong. Then use the MAILTO environment variable to send any output generated by your script to your email address. [email protected]
0 */2 * * * /bin/backup.sh If your script normally produces output but you don't care about it in cron, just sent it to /dev/null and it'll email you only when something is written to stderr. [email protected]
0 */2 * * * /bin/backup.sh > /dev/null | {
"source": [
"https://serverfault.com/questions/226090",
"https://serverfault.com",
"https://serverfault.com/users/20773/"
]
} |
226,386 | I want to download a script from: http://dl.dropbox.com/u/11210438/flockonus-stack.sh and execute it.
My guess is to use wget, rename it, chmod it, and execute it. What are the commands for doing that on Ubuntu? | Careful Before running the script, do you trust the person who wrote it? For example, did you expect the script to contain this? echo "brain1" > /etc/hostname That will try to change your hostname. For future reference, after verifying the script is correct and not malicious, you can run it in one line like this: wget -O - http://dl.dropbox.com/u/11210438/flockonus-stack.sh | bash But download it separately and read it before running it the first time. Also note that interactive prompts inside the downloaded script may not work properly using this method. | {
"source": [
"https://serverfault.com/questions/226386",
"https://serverfault.com",
"https://serverfault.com/users/67834/"
]
} |
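Spelled out as the safer multi-step version of the advice above, using the URL from the question:
wget -O flockonus-stack.sh "http://dl.dropbox.com/u/11210438/flockonus-stack.sh"
less flockonus-stack.sh      # read it before trusting it
chmod +x flockonus-stack.sh
./flockonus-stack.sh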
226,700 | It looks like creating a bucket policy might do the trick, but I am having trouble creating the policy. | The AWS Policy Generator is a very helpful tool for the exploration and creation of such policies, its usage is explained in the respective introductory blog post (a direct link to it it is meanwhile available alongside most policy entry forms within the AWS Management Console as well). I've created an example according to your specification (make sure to generate a new one for your own resources of course): {
"Id": "ExamplePolicyId12345678",
"Statement": [
{
"Sid": "ExampleStmtSid12345678",
"Action": [
"s3:DeleteBucket"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::test-example-com",
"Principal": {
"AWS": [
"*"
]
}
}
]
} Please note that the AWS Management Console currently neither hides the delete command nor reports on its execution being unsuccessful for buckets with such policies, however, the bucket remains in place ;) | {
"source": [
"https://serverfault.com/questions/226700",
"https://serverfault.com",
"https://serverfault.com/users/29574/"
]
} |
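For reference, the same policy can also be attached from the command line with the (newer) AWS CLI; this is a sketch and assumes the CLI is installed and configured and the policy JSON is saved as policy.json.
aws s3api put-bucket-policy --bucket test-example-com --policy file://policy.json
aws s3api get-bucket-policy --bucket test-example-com    # verify it was applied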
227,190 | I have a server build script which uses apt-get to install packages. It then puts pre-written configuration files directly in place, so the interactive post-install configuration dialog in packages such as postfix is not needed. How do I skip this stage of the installation? It creates a piece of manual intervention that I would rather avoid. I am aware of the -qq option, but the manpage warns against using it without specifying a no-action modifier. I do want to perform an action, I just want to suppress a specific part of it. | You can do a couple of things for avoiding this. Setting the DEBIAN_FRONTEND variable to noninteractive and using -y flag. For example: export DEBIAN_FRONTEND=noninteractive
apt-get -yq install [packagename] If you need to install it via sudo, use: sudo DEBIAN_FRONTEND=noninteractive apt-get -yq install [packagename] | {
"source": [
"https://serverfault.com/questions/227190",
"https://serverfault.com",
"https://serverfault.com/users/50856/"
]
} |
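When a package still needs answers (postfix is the classic example), the answers can be preseeded with debconf before the noninteractive install; the selection values below are illustrative assumptions, not prescriptions.
echo "postfix postfix/main_mailer_type select Internet Site" | debconf-set-selections
echo "postfix postfix/mailname string mail.example.com" | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get -yq install postfix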
227,242 | This is a canonical question about avoiding outgoing mail being classified as spam. Also related: Fighting Spam - What can I do as an: Email Administrator, Domain Owner, or User? What are SPF records, and how do I configure them? I'm wondering how to prevent my emails from my site being marked as spam? I'm using sendmail. I'm trying to send emails through my ruby-on-rails application. The mails are all written in swedish (if that does make a difference?). I don't know why they keep getting marked as spam. Are there any things that I can do to minimize the risk? | Mail will be marked as spam by major ISPs (including webmail providers like gmail, hotmail, yahoo) for several possible reasons: If you're sending it from a residential IP address If you're sending it from an IP address with a poor reputation If you're sending mail which matches certain patterns (these are hard to describe, but software looks for things like "Congratulations, you've won $1 billion!", in a fuzzy-matching sort of way). If you send too much mail to the ISP too fast If too many people at the ISP click the "This is spam" button on your emails If you don't use SPF to identify which mail servers for your domain may send email, and which servers may not If you don't use DKIM to sign your messages If you haven't requested permission to be a "bulk sender" (some offer this like AOL and hotmail) If you IP address is on any DNS blocklists and many, many other possible reasons. You can check the reputation of your IP address at https://www.senderscore.org/ You can check if you're on various blocklists at http://www.mxtoolbox.com/blacklists.aspx | {
"source": [
"https://serverfault.com/questions/227242",
"https://serverfault.com",
"https://serverfault.com/users/68340/"
]
} |
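As a concrete illustration of the SPF point above, a minimal SPF record is just a TXT record in the zone; the domain and address below are examples, not values from the question.
example.com.   IN TXT   "v=spf1 mx ip4:203.0.113.10 -all"
This says: mail for example.com may come from the domain's MX hosts and from 203.0.113.10, and everything else should fail SPF.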
227,480 | I currently have Nginx installed via the instructions on the Nginx site: nginx=stable
sudo su -
add-apt-repository ppa:nginx/$nginx
apt-get update
apt-get install I have configured Nginx and it has been running great for a little while. Now, I want to add some custom modules--say, the Upload Progress Module . The instructions for this module say to add --add-module=path/to/nginx_uploadprogress_module to your ./configure command. However, I did not install Nginx from source. What is the best way to handle this situation? Is it possible to tell APT to compile from source and pass options to ./configure ? Can I compile over the existing installation? What about paths--how do I make them match? Or do I have to remove the APT managed version and start over? | Install dpkg-dev: sudo apt-get install dpkg-dev Add repository: sudo add-apt-repository ppa:nginx/stable Edit /etc/apt/sources.list.d/nginx-stable-lucid.list , add dpkg-src: deb http://ppa.launchpad.net/nginx/stable/ubuntu lucid main
deb-src http://ppa.launchpad.net/nginx/stable/ubuntu lucid main note: (the previous step may have already been automatically performed on Ubuntu >= 12.04 - also make sure that you change lucid to reflect your version) Resynchronize the package index files: sudo apt-get update Get sources: apt-get source nginx Build dependencies: sudo apt-get build-dep nginx Edit nginx-0.8.54/debian/rules: config.status.full: config.env.full config.sub config.guess
...
--add-module=path/to/nginx_uploadprogress_module Build package: cd nginx-0.8.54 && dpkg-buildpackage -b Install packages: sudo dpkg --install nginx-common_1.2.4-2ubuntu0ppa1~precise_all.deb
sudo dpkg --install nginx-full_1.2.4-2ubuntu0ppa1~precise_amd64.deb | {
"source": [
"https://serverfault.com/questions/227480",
"https://serverfault.com",
"https://serverfault.com/users/55077/"
]
} |
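After installing the rebuilt packages, it is worth confirming the extra module really made it into the binary; nginx -V prints the configure arguments the server was built with.
nginx -V 2>&1 | tr ' ' '\n' | grep -- --add-module    # should list the upload progress module path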
227,510 | We are using RVM for managing Ruby installations and environments. Usually we are using this .rvmrc script: #!/bin/bash
if [ ! -e '.version' ]; then
VERSION=`pwd | sed 's/[a-z/-]//g'`
echo $VERSION > .version
rvm gemset create $VERSION
fi
VERSION=`cat .version`
rvm use 1.9.2@$VERSION This script forces RVM to create a new gem environment for each of our projects/versions. But each time we deploy a new version, RVM asks us to confirm the new .rvmrc file. When we cd into this directory for the first time, we get something like: ===============================================================
= NOTICE: =
===============================================================
= RVM has encountered a not yet trusted .rvmrc file in the =
= current working directory which may contain nasty code. =
= =
= Examine the contents of this file to be sure the contents =
= are good before trusting it! =
= =
= Press 'q' to exit the reader when finished reading the file =
===============================================================
(press enter to continue when ready) This is not as bad for development environments, but with auto deploy it require to manually confirm each new version on each server. Is it possible to skip this confirmation? | I found these notes on Waynes blog, http://wayneeseguin.beginrescueend.com/ Basically, adding: export rvm_trust_rvmrcs_flag=1 to ~/.rvmrc will bypass the check. There is also rvm rvmrc <command> [dir] for manually trusting/untrusting .rvmrc files. Looking for the same thing so thought I'd post the solution. HTH Regards, Phil | {
"source": [
"https://serverfault.com/questions/227510",
"https://serverfault.com",
"https://serverfault.com/users/68420/"
]
} |
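If trusting every .rvmrc globally feels too broad, individual project directories can be trusted instead; a sketch, with the path as a placeholder.
rvm rvmrc trust /var/www/myapp      # trust just this project's .rvmrc
rvm rvmrc untrust /var/www/myapp    # and the reverse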
227,682 | I've been setting up Amazon EC2 instances for an upcoming project. They are all micro instances, running Ubuntu Server 64bit. Here's what I've setup so far: Web Server -- Apache Database Server -- MySQL Development Server -- Apache & MySQL File Server -- SVN & Bacula (backups are done to S3 buckets) Currently, there's only one Web Server, but eventually there will be more. My first question is, what is the
best, most secure way for Amazon EC2 instances to communicate between each other? Currently I'm using SSH, is that the best method? According to Amazon, instances communicating between themselves using their Elastic IP addresses will be charged data transfer fees. However, instances communicating using their Private IP addresses can do so for free. Unfortunately, it appears Private IPs change if the instance is stopped and re-started. So that's my second question, how do you make use of Amazon instances'
Private IPs if they're not static? I know that the instances probably won't be stopped and started very frequently, but still, if the IP address is in various config files, it would be a pain to have to go through them all and change it. I'm primarily concerned about the Web servers, which will need access to the Database server and the File server, which will need access to all the instances when performing backups. Note: I've never used Bacula before and I don't have it setup yet, but I'm assuming it will need the IP addresses of the clients to back them up. | Check out Eric Hammond's article explaining how to use Elastic IP addresses even from within EC2. This method does NOT result in any bandwidth charges because resolving the Elastic IP address (by name) from within EC2 returns the Private IP address. http://alestic.com/2009/06/ec2-elastic-ip-internal For more options, I have an article examining a few alternatives: http://shlomoswidler.com/2010/06/track-changes-to-your-dynamic-cloud-services-automatically.html | {
"source": [
"https://serverfault.com/questions/227682",
"https://serverfault.com",
"https://serverfault.com/users/47893/"
]
} |
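The behaviour described above is easy to check from inside an instance; the hostname and addresses here are made-up example values, and results can differ depending on the account's DNS settings.
# run from inside EC2: the instance's public DNS name resolves to its private address
dig +short ec2-203-0-113-25.compute-1.amazonaws.com
# expected style of output (example): 10.0.0.25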
227,852 | What is the plus sign at the end of the permissions telling me? ls -l
total 4
drwxrwxrwx+ 2 benson avahi-autoipd 4096 Jan 27 17:37 docs Here's the context: cat /etc/issue
CentOS release 5.3 (Final)
Kernel \r on an \m | It means your file has extended permissions called ACLs. You have to run getfacl <file> to see the full permissions. See Access Control Lists for more details. | {
"source": [
"https://serverfault.com/questions/227852",
"https://serverfault.com",
"https://serverfault.com/users/24563/"
]
} |
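To inspect and manage those extended permissions on the directory from the question (alice is a placeholder user):
getfacl docs                      # lists the full ACL behind the '+' sign
setfacl -m u:alice:rwx docs       # example of how such an entry gets added
setfacl -x u:alice docs           # and how it gets removed again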
227,929 | If I do this 10 times a month, how much would that cost me? | Very close to nothing. There is no cost to make an AMI itself, but if you're making it from a running instance you will pay the fees for running a micro instance (which is about $0.02/hr, depending on availability region - see the pricing details ) and also fees for using the EBS root for however long you use it. When storing the AMI, you only pay for the S3 storage taken from the snapshot. The root FS must be an EBS volume (for all micro instances), but this is stored as an EBS snapshot (which are stored in S3, rather than EBS) so you pay S3 fees to store it. Note that EBS charges per allocated GB , while S3 charges per used GB ; with the way they do things, a 10G EBS volume that only has 500MB on it will only take up approx. 500MB of S3 storage. Further to that, Micro instances are included in Amazon's free tier at the moment, so all of the above will cost you nothing (with limits - see the pricing link above). | {
"source": [
"https://serverfault.com/questions/227929",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
228,102 | I've recently been "forced" to perform some sysadmin work, while this isn't something that I absolutely love doing I've been reading, experimenting and learning a lot. There is one fundamental aspect of server configuration that I've not been able to grasp - hostnames . In Ubuntu for instance, one should set the hostname like this (according to the Linode Library ): echo "plato" > /etc/hostname
hostname -F /etc/hostname File: /etc/hosts 127.0.0.1 localhost.localdomain localhost
12.34.56.78 plato.example.com plato I assume that plato is an arbitrary name and that plato.example.com is the FQDN. Now my questions are: Is it mandatory? To what purpose? Where is it needed / used? Why can't I define "localhost" as the hostname for every machine? Do I have to set up a DNS entry for the plato.example.com FQDN? Should plato.example.com be used as the reverse DNS entry for my IP? Also, are there any "best practices" for picking hostnames? I've seen people using Greek letters, planet names and even mythological figures... What happens when we run out of letters / planets? I'm sorry if this is a dumb question but I've never been too enthusiastic with network configurations. | These days, a system may have multiple interfaces, each with multiple addresses, and each address may even have multiple DNS entries associated with it. So what does a "system hostname" even mean? Many applications will use the system hostname as a default identifier when they communicate elsewhere. For example, if you're collecting syslog messages at a central server, the messages will all be tagged with the hostname of the originating system. In an ideal world you would probably ignore this (because you don't necessarily want to trust the client), but the default behavior -- if you named all your systems "localhost" -- would result in a bunch of log messages that you wouldn't be able to associate with a specific system. As other folks have pointed out, the system hostname is also a useful identifier if you find yourself remotely accessing a number of system. If you've got five windows attached to a systems named "localhost" then you're going to have a hard time keeping them straight. In a similar vein, we try to make the system hostname matches the hostname we use for administrative access to a system. This helps avoid confusion when referring to the system (in email, conversations, documentation, etc). Regarding DNS: You want to have proper forward and reverse DNS entries for your applications in order to avoid confusion. You need some forward entry (name -> ip address) for people to be able to access your application conveniently. Having the reverse entry match is useful for an number of reasons -- for example, it helps you correctly identify the application if you find the corresponding ip address in a log. Note that here I'm talking about "applications" and not "systems", because -- particularly with web servers -- it's common to have multiple ip addresses on a system, associated with different hostnames and services. Trying to maintain name to ip mappings in your /etc/hosts file quickly becomes difficult as you manage an increasing number of systems. It's very easy to for the local hosts file to fall out of sync with respect to DNS, potentially leading to confusion and in some cases malfunction (because something tries to bind to an ip address that no longer exists on the system, for example). | {
"source": [
"https://serverfault.com/questions/228102",
"https://serverfault.com",
"https://serverfault.com/users/17287/"
]
} |
228,170 | Is there a way to export a PostgreSQL database and later import it with another name? I'm using PostgreSQL with Rails and I often export the data from production, where the database is called blah_production and import it on development or staging with names blah_development and blah_staging. On MySQL this is trivial as the export doesn't have the database anywhere (except a comment maybe), but on PostgreSQL it seems to be impossible. Is it impossible? I'm currently dumping the database this way: pg_dump blah > blah.dump I'm not using the -c or -C options. That dump contains statements such as: COMMENT ON DATABASE blah IS 'blah';
ALTER TABLE public.checks OWNER TO blah;
ALTER TABLE public.users OWNER TO blah; When I try to import with psql blah_devel < blah.dump I get WARNING: database "blah" does not exist
ERROR: role "blah" does not exist Maybe the problem is not really the database but the role? If I dump it this way: pg_dump --format=c blah > blah.dump and try to import it this way: pg_restore -d blah_devel < tmp/blah.psql I get these errors: pg_restore: WARNING: database "blah" does not exist
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1513; 1259 16435 TABLE checks blah
pg_restore: [archiver (db)] could not execute query: ERROR: role "blah" does not exist
Command was: ALTER TABLE public.checks OWNER TO blah;
pg_restore: [archiver (db)] Error from TOC entry 1509; 1259 16409 TABLE users blah
pg_restore: [archiver (db)] could not execute query: ERROR: role "blah" does not exist
Command was: ALTER TABLE public.users OWNER TO blah;
pg_restore: [archiver (db)] Error from TOC entry 1508; 1259 16407 SEQUENCE users_id_seq blah
pg_restore: [archiver (db)] could not execute query: ERROR: role "blah" does not exist
Command was: ALTER TABLE public.users_id_seq OWNER TO blah;
pg_restore: [archiver (db)] Error from TOC entry 1824; 0 0 ACL public postgres
pg_restore: [archiver (db)] could not execute query: ERROR: role "postgres" does not exist
Command was: REVOKE ALL ON SCHEMA public FROM postgres;
pg_restore: [archiver (db)] could not execute query: ERROR: role "postgres" does not exist
Command was: GRANT ALL ON SCHEMA public TO postgres;
WARNING: errors ignored on restore: 11 Any ideas? I've seen some people out there using sed scripts to modify the dump. I'd like to avoid that solution, but if there is no alternative I'll take it. Has anybody written a script to alter the dump's database name while ensuring no data is ever altered? | The solution was dumping it like this: pg_dump --no-owner --no-acl blah > blah.psql and importing it like this: psql blah_devel < blah.psql > /dev/null I still get this warning: WARNING: database "blah" does not exist but the rest seems to work. | {
"source": [
"https://serverfault.com/questions/228170",
"https://serverfault.com",
"https://serverfault.com/users/2563/"
]
} |
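Putting the accepted approach together as one pipeline; a sketch that assumes the target database does not exist yet and that your login role may create databases.
createdb blah_devel                                   # create the empty target database first
pg_dump --no-owner --no-acl blah | psql blah_devel    # restore the dump under the new name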
228,281 | I want to run an executable in Linux, and regardless of the exit status that it returns, I want to return a good exit status. (i.e. no error.) (This is because I'm using sh -ex and I want the script to keep running even if one (specific) command fails.) | Give this a try: command || true From man bash : The shell does not
exit if the command that fails is part of the command
list immediately following a while or until keyword,
part of the test following the if or elif reserved
words, part of any command executed in a && or || list
except the command following the final && or ||, any
command in a pipeline but the last, or if the command's
return value is being inverted with !. | {
"source": [
"https://serverfault.com/questions/228281",
"https://serverfault.com",
"https://serverfault.com/users/20346/"
]
} |
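In the context of the question (a script run with sh -ex), the pattern looks like this; the failing command is a placeholder.
#!/bin/sh -ex
some_command_that_may_fail || true    # the non-zero status is swallowed, so -e does not abort here
echo "still running"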
228,396 | I've been searching for a way to setup OpenSSH's umask to 0027 in a consistent way across all connection types. By connection types I'm referring to: sftp scp ssh hostname ssh hostname program The difference between 3. and 4. is that the former starts a shell which usually reads the /etc/profile information while the latter doesn't. In addition by reading this post I've became aware of the -u option that is present in newer versions of OpenSSH. However this doesn't work. I must also add that /etc/profile now includes umask 0027 . Going point by point: sftp - Setting -u 0027 in sshd_config as mentioned here , is not enough. If I don't set this parameter, sftp uses by default umask 0022 . This means that if I have the two files: -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute
-rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write When I use sftp to put them in the destination machine I actually get: -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute
-rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write However when I set -u 0027 on sshd_config of the destination machine I actually get: -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute
-rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write which is not expected, since it should actually be: -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute
-rw-r----- 1 user user 0 2011-01-29 02:04 read-write Anyone understands why this happens? scp - Independently of what is setup for sftp , permissions are always umask 0022 . I currently have no idea how to alter this. ssh hostname - no problem here since the shell reads /etc/profile by default which means umask 0027 in the current setup. ssh hostname program - same situation as scp . In sum, setting umask on sftp alters the result but not as it should, ssh hostname works as expected reading /etc/profile and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome. EDIT: I would like to avoid patches that require manually compiling openssh. The system is running Ubuntu Server 10.04.01 (lucid) LTS with openssh packages from maverick. Answer: As indicated by poige, using pam_umask did the trick. The exact changes were: Lines added to /etc/pam.d/sshd : # Setting UMASK for all ssh based connections (ssh, sftp, scp)
session optional pam_umask.so umask=0027 Also, in order to affect all login shells regardless of if they source /etc/profile or not, the same lines were also added to /etc/pam.d/login . EDIT : After some of the comments I retested this issue. At least in Ubuntu (where I tested) it seems that if the user has a different umask set in their shell's init files (.bashrc, .zshrc,...), the PAM umask is ignored and the user defined umask used instead. Changes in /etc/profile did't affect the outcome unless the user explicitly sources those changes in the init files. It is unclear at this point if this behavior happens in all distros. | I can suggest trying 2 things: pam_umask LD_PRELOAD wrapper (self-written?) | {
"source": [
"https://serverfault.com/questions/228396",
"https://serverfault.com",
"https://serverfault.com/users/60866/"
]
} |
228,468 | I have two Amazon EC2 Instances running Windows Server 2003 and IIS 6.0.
Both the instances are created in the same region and have the same Security Group.
I enabled icmp for all ports and connection methods, and am able to successfully ping between both my instances. However, when I try to access the shared locations of one EC2 instance from another, using: \\<elastic-ip> or \\<internal-private-ip> I am unable to see the shared locations, and get an error saying: No Network Provider accepted the given network path I am able to trace from both EC2 instances using the tracert command. Please let me know of a way to access shared locations between two EC2 instances. Thanks P.S.: I know that this can alternatively be achieved using S3, but do not wish to use it for different reasons. | I found the answer to my own query, and here it is: Theory: This can be found at this Microsoft knowledgebase article which deals with the ways to enable Microsoft file sharing SMB. The below matter is of relevance: The following ports are
with file sharing and server message
block (SMB) communications: Microsoft file sharing SMB: User Datagram Protocol (UDP) ports from 135
through 139 and Transmission Control
Protocol (TCP) ports from 135 through
139. Direct-hosted SMB traffic without a network basic input/output system
(NetBIOS): port 445 (TCP and UDP). How to do it: Enable the above ports in the security group associated with your EC2 Instance. Once you have done this, your Security Group Permissions should look something like the image below: Enable the ports in the Windows firewalls of both the instances. A detailed method to do so can be found here . Skip step 7 for Windows Server. This solves the issue, however, a restart of the instances might be needed. | {
"source": [
"https://serverfault.com/questions/228468",
"https://serverfault.com",
"https://serverfault.com/users/68717/"
]
} |
228,481 | Where does output from cloud-init (automatically runs scripts when starting up a virtual machine in the cloud, for example at Amazon EC2) go? I would like to know that my initialization scripts executed successfully. There is a /var/log/cloud-init.log file, but it seems to contain only partial output (namely from the SSH key initialization). | Since cloud-init 0.7.5 (released on Apr 1 2014), all output from cloud-init is captured by default to /var/log/cloud-init-output.log . This default logging configuration was added in a commit from Jan 14 2014: # this tells cloud-init to redirect its stdout and stderr to
# 'tee -a /var/log/cloud-init-output.log' so the user can see output
# there without needing to look on the console.
output: {all: '| tee -a /var/log/cloud-init-output.log'} To add support for previous versions of cloud-init , you can manually add this configuration manually to your Cloud Config Data . | {
"source": [
"https://serverfault.com/questions/228481",
"https://serverfault.com",
"https://serverfault.com/users/37524/"
]
} |
228,629 | As 127.0.0.1 is known as the loopback address, is there a shorter term to refer to 0.0.0.0 other than "the IP address who means all IP address on local machine"? | Sometimes it is called "wildcard address", INADDR_ANY , or "unspecified address" . The official name is "source address for this host on this network" ( RFC 5735, Section 3 ). It must not appear in packets sent to the network under normal circumstances: This host on this network. MUST NOT
be sent, except as
a source address as part of an initialization procedure
by which the host learns its own IP address. But if it appears as destination address in incoming packet it should be treated as broadcast address 255.255.255.255 ( RFC 1122, Section 3.3.6 ) | {
"source": [
"https://serverfault.com/questions/228629",
"https://serverfault.com",
"https://serverfault.com/users/66154/"
]
} |
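In day-to-day use the address most often shows up as a listening socket bound to 'all interfaces'; for example (the output line is illustrative):
netstat -ltn | grep ':80 '
# tcp   0   0 0.0.0.0:80   0.0.0.0:*   LISTEN    <- listening on every local address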
228,690 | We have a MySQL table that has an auto-incrementing field set as an INT (11). This table pretty much stores a list of jobs that are running in an application. At any given moment during the lifetime of the application, the table could well contain thousands of entries or be completely empty (i.e. everything has finished). The field is not foreign-keyed to anything else. The auto-increment seems to randomly reset itself to zero, although we have never actually been able to trap the reset. The problem becomes evident because we see the auto-increment field get up to, say, 600,000 records or so and then a while later the auto-increment field seems to be running in the low 1000's. It is almost as if the auto-increment resets itself if the table is empty. Is this possible and if it is, how do I turn it off or change the manner in which it resets? If it isn't, does anyone have an explanation of why it might be doing this? Thanks! | The auto-increment counter is stored only in main memory, not on disk. http://dev.mysql.com/doc/refman/4.1/en/innodb-auto-increment-handling.html Because of this when the service (or server) restarts the following will happen: After a server startup, for the first insert into a table t, InnoDB
executes the equivalent of this statement: SELECT MAX(ai_col) FROM t
FOR UPDATE; InnoDB increments by one the value retrieved by the statement and
assigns it to the column and to the auto-increment counter for the
table. If the table is empty, InnoDB uses the value 1. So in plain English, after the MySQL service starts it has no idea what the auto-increment value for your table should be. So when you first insert a row, it finds the max value of the field that uses auto-increment, adds 1 to this value, and uses the resulting value. If there are no rows, it will start at 1. This was a problem for us, as we were using the table and mysql's auto-increment feature to neatly manage IDs in a multi-threaded environment where users were getting re-directed to a third-party payment site. So we had to make sure the ID the third party got and sent back to us was unique and would stay that way (and of course there's the possibility the user would cancel the transaction after they had been redirected). So we were creating a row, obtaining the generated auto-increment value, deleting the row to keep the table clean, and forwarding the value to the payment site. What we ended up doing to to fix the issue of the way InnoDB handles AI values was the following: $query = "INSERT INTO transactions_counter () VALUES ();";
mysql_query($query);
$transactionId = mysql_insert_id();
$previousId = $transactionId - 1;
$query = "DELETE FROM transactions_counter WHERE transactionId='$previousId';";
mysql_query($query); This always keeps the latest transactionId generated as a row in the table, without unnecessarily blowing up the table. Hope that helps anyone else that might run into this. Edit (2018-04-18) : As Finesse mentioned below it appears the behavior of this has been modified in MySQL 8.0+. https://dev.mysql.com/worklog/task/?id=6204 The wording in that worklog is faulty at best, however it appears InnoDB in those newer versions now support persistent autoinc values across reboots. -Gremio | {
"source": [
"https://serverfault.com/questions/228690",
"https://serverfault.com",
"https://serverfault.com/users/49503/"
]
} |
228,733 | Replace ACDC to AC-DC For example we have these files ACDC - Rock N' Roll Ain't Noise Pollution.xxx ACDC - Rocker.xxx ACDC - Shoot To Thrill.xxx I want them to become: AC-DC - Rock N' Roll Ain't Noise Pollution.xxx AC-DC - Rocker.xxx AC-DC - Shoot To Thrill.xxx I know that sed or awk is used for this operation. I can't google anything so I'm asking for your help =) Could you please provide full working shell command for this task? Feedback: Solution for OSX users | rename 's/ACDC/AC-DC/' *.xxx from man rename DESCRIPTION
"rename" renames the filenames supplied according to the rule specified as the
first argument. The perlexpr argument is a Perl expression which is expected to modify the
$_ string in Perl for at least some of the filenames specified. If a given filename is not
modified by the expression, it will not be renamed. If no filenames are given on
the command line, filenames will be read via standard input. For example, to rename all files matching "*.bak" to strip the extension, you might say rename 's/\.bak$//' *.bak To translate uppercase names to lower, you'd use rename 'y/A-Z/a-z/' * | {
"source": [
"https://serverfault.com/questions/228733",
"https://serverfault.com",
"https://serverfault.com/users/46779/"
]
} |
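Where the perl rename utility is not available (a stock OS X box, for instance), a plain bash loop does the same job; shown as a sketch for the files in the question.
for f in ACDC*.xxx; do mv -- "$f" "${f/ACDC/AC-DC}"; done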
229,331 | I'm using names like "a.alpha" for the hostname of my linux box, but it seems that these names are not completely usable. The response of a hostname shell command is correct (a.alpha).
But the name printed after my user account is "user@a" instead of "user@a.alpha". When I use avahi, I can reach (by hostname) "a.alpha", but not "b.alpha". Is that normal? | Chopper is right. Due to how DNS works, the "alpha" component of "a.alpha" is considered a discrete 'label' in DNS. Using a hostname with a dot in it will cause inconsistent results from any system that consumes DNS. Avahi does interact with DNS names, and specifically the <host-name> directive needs to have the DNS FQDN of the service in it, so it's also subject to DNS inconsistency with dotted names. Don't use dotted names. | {
"source": [
"https://serverfault.com/questions/229331",
"https://serverfault.com",
"https://serverfault.com/users/68345/"
]
} |
229,337 | I am a database administrator and I need to move SQL databases from one server to another for enterprise project management, Project Server 2007 installed on WSS 3.0 (both at patch level SP2 + Aug09 CU). I know how to move databases but I don't know how to re-point the application to the new server. Can anyone please help me? - Thanks! | Chopper is right. Due to how DNS works, the "alpha" component of "a.alpha" is considered a discrete 'label' in DNS. Using a hostname with a dot in it will cause inconsistent results from any system that consumes DNS. Avahi does interact with DNS names, and specifically the <host-name> directive needs to have the DNS FQDN of the service in it, so it's also subject to DNS inconsistency with dotted names. Don't use dotted name. | {
"source": [
"https://serverfault.com/questions/229337",
"https://serverfault.com",
"https://serverfault.com/users/19202/"
]
} |
229,441 | I have an app running on my computer at 127.0.0.1:3000 I would like to access that app from an iPhone connected to the same network. I have done this before but blanking out on how I did it. Any ideas? | First you need to determine the ip address or name of the machine you are running the webserver on. I'm assuming you are running the webserver on a mac since you tagged your post macosx athough the instructions are similar for linux machines. So, on your mac: Open Terminal.app . It's under Applications->Utilities . Run ifconfig in the terminal. That shows you all the network interfaces on the machine. One of them is the network your machine is actively connected to. If you mac is on a wired connection that should be en0 . Make a note of the address after inet - that should be the address your machine uses. Let's assume you discover it's 192.168.10.1. Verify that you can connect to that address from your server with nc -v 192.168.10.1 3000 . (replace 3000 with the port your application is running on) You should see a message like Connection to 192.168.10.1 3000 port [tcp/http] succeeded! . If that doesn't work, see below. If it does work, hit ctrl-C to exit the nc session. Now try to connect on your client machine. If this is a web app, you should be able to connect via the browser For example, try http://192.168.10.1:3000 If you are unable to connect to your application on the server's real address, that means your application isn't listening on that address. You will need to investigate how to change your application configuration to modify that behavior. Since I don't know what application you are running I can't offer any good ideas on that. | {
"source": [
"https://serverfault.com/questions/229441",
"https://serverfault.com",
"https://serverfault.com/users/25834/"
]
} |
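One common reason the last step fails is that the app is bound only to 127.0.0.1. For a Rails app on port 3000 (an assumption about the framework, not something stated in the question), binding to all interfaces looks like this:
rails server -b 0.0.0.0 -p 3000    # listen on every interface, not just loopback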
229,454 | I deleted a 2.3GB log file on my Ubuntu server, and df doesn't seem to be picking up the change. Is there typically a delay before df can detect that a large file has been deleted? | It sounds like the file is still open by some process. You'll need to restart that service for the disk space to be freed. | {
"source": [
"https://serverfault.com/questions/229454",
"https://serverfault.com",
"https://serverfault.com/users/68755/"
]
} |
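To track down which process is still holding the deleted log open (assumes lsof is installed):
lsof +L1 | grep -i log        # open files with a link count of 0, i.e. deleted but still held
lsof | grep '(deleted)'       # a looser variant of the same idea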
229,464 | I'm running a 2 processor VMWare server. We pushed it up from 1 core to 2 cores recently and noticed it is now generating large numbers of hardware interrupts, which is killing performance. Does anyone know why this is and how to fix it, short of reverting to 1 cpu, which would also kill performance. | It sounds like the file is still open by some process. You'll need to restart that service for the disk space to be freed. | {
"source": [
"https://serverfault.com/questions/229464",
"https://serverfault.com",
"https://serverfault.com/users/5974/"
]
} |
229,467 | I've had OpenSSH server running on my debian server for a couple weeks and all of a sudden now when I go to login the next day it rejects my ssh key and I have to manually add a new one each time. Not only that but I have the "tunneling with clear-text passwords" option enabled and the non-root (login with root is disabled) account for that is rejected too. I'm at a loss why this is happening and I can't find any ssh options that would explain it. --update-- I just changed debug level to DEBUG. But before that I'm seeing a lot of the following in auth.log Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root
...
Feb 1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT!
...
Feb 1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx
... My sshd_conf file settings are: # Package generated configuration file
# See the sshd(8) manpage for details
# What ports, IPs and protocols we listen for
Port xxx
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768
# Logging
SyslogFacility AUTH
LogLevel DEBUG
# Authentication:
LoginGraceTime 120
PermitRootLogin no
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding no
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM no
ClientAliveInterval 60
AllowUsers myuser | It sounds like the file is still open by some process. You'll need to restart that service for the disk space to be freed. | {
"source": [
"https://serverfault.com/questions/229467",
"https://serverfault.com",
"https://serverfault.com/users/53393/"
]
} |
229,833 | We (and by we I mean Jeff) are looking into the possibility of using Consumer MLC SSD disks in our backup data center. We want to try to keep costs down and usable space up - so the Intel X25-E's are pretty much out at about 700$ each and 64GB of capacity. What we are thinking of doing is to buy some of the lower end SSD's that offer more capacity at a lower price point. My boss doesn't think spending about 5k for disks in servers running out of the backup data center is worth the investment. These drives would be used in a 6 drive RAID array on a Lenovo RD120. The RAID controller is an Adaptec 8k (rebranded Lenovo). Just how dangerous of an approach is this and what can be done to mitigate these dangers? | A few thoughts; SSDs have 'overcommit' memory. This is the memory used in place of cells 'damaged' by writing. Low end SSDs may only have 7% of overcommit space; mid-range around 28%; and enterprise disks as much as 400%. Consider this factor. How much will you be writing to them per day? Even middle-of-the-range SSDs such as those based on Sandforce's 1200 chips rarely appreciate more than around 35GB of writes per day before seriously cutting into the overcommitted memory. Usually, day 1 of a new SSD is full of writing, whether that's OS or data. If you have significantly more than >35GB of writes on day one, consider copying it across in batches to give the SSD some 'tidy up time' between batches. Without TRIM support, random write performance can drop by up to 75% within weeks if there's a lot of writing during that period - if you can, use an OS that supports TRIM The internal garbage collection processes that modern SSDs perform is very specifically done during quiet periods, and it stops on activity. This isn't a problem for a desktop PC where the disk could be quiet for 60% of its usual 8 hour duty cycle, but you run a 24hr service... when will this process get a chance to run? It's usually buried deep in specs but like cheapo 'regular' disks, inexpensive SSDs are also only expected to have a duty cycle of around 30%. You'll be using them for almost 100% of the time - this will affect your MTBF rate. While SSDs don't suffer the same mechanical problems regular disks do, they do have single and multiple-bit errors - so strongly consider RAIDing them even though the instinct is not to. Obviously it'll impact on all that lovely random write speed you just bought but consider it anyway. It's still SATA not SAS, so your queue management won't be as good in a server environment, but then again the extra performance boost will be quite dramatic. Good luck - just don't 'fry' them with writes :) | {
"source": [
"https://serverfault.com/questions/229833",
"https://serverfault.com",
"https://serverfault.com/users/5880/"
]
} |
229,945 | What are the differences between HAProxy and Nginx when it comes to their abilities as a reverse proxy? | HAProxy is really just a load balancer/reverse proxy. Nginx is a Webserver that can also function as a reverse proxy. Here are some differences: HAProxy: Does TCP as well as HTTP proxying (SSL added from 1.5-dev12) More rate limiting options The author answers questions here on Server Fault ;-) Nginx : Supports SSL directly Is also a caching server At Stack Overflow we mainly use HAProxy with nginx for SSL offloading so HAProxy is my recommendation. | {
"source": [
"https://serverfault.com/questions/229945",
"https://serverfault.com",
"https://serverfault.com/users/26763/"
]
} |
230,370 | I wish to fetch content from a PHP script on my server two times a day, altering a query variable lang to set what language we want, and save this content in two language specific files. This is my crontab: */15 * * * * ~root/apache.sh > /var/log/checkapache.log
10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv"
11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en" The problem is that only the first wget command line is being executed (or to be precise: the only file that is being written is /path/to/file-sv.sql ). If I switch the second and the third row, /path/to/file-en.sql gets written instead. The first line always runs as expected, no matter where it is. I then tried using lynx -dump "http://mydomain.com/path/?lang=xx" > /path/to/file-xx.sql to no avail; still only the first lynx line executed successfully. Even mixing wget and lynx did not change this! Getting kinda desperate! Am I missing something? There are thousands of articles on crontab (combined with) wget or lynx, but all seems to cover basic setups and syntax. Does anyone got a clue of what I am doing wrong? Thanks, Alexander | Try adding newline at the end of your crontab. | {
"source": [
"https://serverfault.com/questions/230370",
"https://serverfault.com",
"https://serverfault.com/users/69267/"
]
} |
230,495 | Here is an example from my top: Cpu(s): 6.0%us, 3.0%sy, 0.0%ni, 78.7%id, 0.0%wa, 0.0%hi, 0.3%si, 12.0%st I am trying to figure out the significance of the %st field. I read that it means steal cpu and it represents time spent by the hypervisor, but I want to know what that actually means to me. Does it mean I may be on a busy physical server and someone else is using too much CPU on the server and they are taking from my VM? If I am using EBS could it be related to handling EBS I/O at the hypervisor level? Is it related to things running on my VM or is it completely unaffected by me? | The Steal percentage (documented in the mpstat man-page) is indeed the hypervisor telling your VM that it can't have CPU resources the VM would otherwise use. This percentage is regulated in part by Amazon's CPU limiting, and VM load on that specific host. I/O load is monitored through the %io stat. You will see this most often on their t class of instances that use a CPU credit model for regulating performance. If you're seeing high percentages, chances are good you're running out of CPU credits. | {
"source": [
"https://serverfault.com/questions/230495",
"https://serverfault.com",
"https://serverfault.com/users/69314/"
]
} |
230,551 | By default MySQL InnoDB stores all tables of all DBs in one global file. You can change this by setting innodb_file_per_table in the config, which then creates one data file for each table. I am wondering why innodb_file_per_table is not enabled by default. Are there downsides to using it? | I have the complete answer for this one. Once innodb_file_per_table is put in place, and new InnoDB tables can be shrunk using ALTER TABLE <innodb-table-name> ENGINE=InnoDB'; This will shrink new .ibd files GUARANTEED. If you run ALTER TABLE <innodb-table-name> ENGINE=InnoDB'; on an InnoDB table created before you used innodb_file_per_table, it will yank the data and indexes for that table out of the ibdata1 file and store it in a .ibd file, This will leave a permanent pigeon whole in the ibdata1 that can never be reused. The ibdata1 file normally houses four types of information Table Data Table Indexes MVCC (Multiversioning Concurrency Control) Data Rollback Segments Undo Space Table Metadata (Data Dictionary) Double Write Buffer (background writing to prevent reliance on OS caching) Insert Buffer (managing changes to non-unique secondary indexes) See the Pictorial Representation of ibdata1 Here is the guaranteed way to shrink the ibdata1 file pretty much forever... STEP 01) MySQLDump all databases into a SQL text file (call it SQLData.sql) STEP 02) Drop all databases (except mysql, information_schema and performance_schema schemas) STEP 03) Shutdown mysql STEP 04) Add the following lines to /etc/my.cnf [mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
innodb_data_file_path=ibdata1:10M:autoextend Sidenote: Whatever your set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size. STEP 05) Delete ibdata1, ib_logfile0 and ib_logfile1 ( see update below before deleting! ) At this point, there should only be the mysql schema in /var/lib/mysql STEP 06) Restart mysql This will recreate ibdata1 at 10MB (do not configure the option) , ib_logfile0 and ib_logfile1 at 1G each STEP 07) Reload SQLData.sql into mysql ibdata1 will grow but only contain table metadata and intermittent MVCC data. Each InnoDB table will exist outside of ibdata1 Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb , you will see two files representing the table mytable.frm (Storage Engine Header) mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable ) ibdata1 will never contain InnoDB data and Indexes anymore. With the innodb_file_per_table option in /etc/my.cnf , you can run OPTIMIZE TABLE mydb.mytable OR ALTER TABLE mydb.mytable ENGINE=InnoDB; and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink. I have done this numerous times in my career as a MySQL DBA without so much as a single problem thereafter. In fact, the first time I did this, I collapsed a 50GB ibdata1 file into 50MB. Give it a try. If you have further questions on this, email me. Trust me. This will work in the short term and over the long haul. UPDATE 2013-07-02 15:08 EDT There is a caveat I have in this regard that I updated in other posts of mine but I missed this: I am updating my answer a little more with innodb_fast_shutdown because I used to restart mysql and stop mysql to do this. Now, this one-step is vital because every transaction uncommitted may have other moving parts within and outside of the InnoDB Transaction Logs ( See InnoDB Infrastructure ). Please note that setting innodb_fast_shutdown to 2 would clean the logs out as well but more moving parts still exist and gets picked on Crash Recovery during mysqld's startup. Setting of 0 is best. | {
"source": [
"https://serverfault.com/questions/230551",
"https://serverfault.com",
"https://serverfault.com/users/52811/"
]
} |
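A condensed shell sketch of the dump-and-reload cycle described in the answer above, assuming a CentOS-style SysV service named mysqld, credentials supplied via ~/.my.cnf, and a scratch file /root/SQLData.sql (all placeholder choices); rehearse it on a non-production copy first:
# STEP 01: dump everything, including routines and triggers
mysqldump --all-databases --routines --triggers > /root/SQLData.sql
# STEP 02: drop the application databases here, leaving only the system schemas
# STEP 03/04: stop MySQL and add innodb_file_per_table (and the other settings) to /etc/my.cnf
service mysqld stop
# STEP 05: remove the old shared tablespace and log files
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
# STEP 06/07: restart (recreates ibdata1 and the log files) and reload the dump
service mysqld start
mysql < /root/SQLData.sql
# Confirm the per-table setting is active
mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"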
230,749 | How can I set up an nginx proxy_pass directive that will also include HTTP Basic authentication information sent to the proxy host? This is an example of the URL I need to proxy to: http://username:[email protected]/export?uuid=1234567890 The end goal is to allow one server to present files from another server (the one we're proxying to) without exposing the URI of the proxy server. I have this working 90% correctly now from following the Nginx config found here: http://kovyrin.net/2010/07/24/nginx-fu-x-accel-redirect-remote/ I just need to add in the HTTP Basic authentication to send to the proxy server. | I got this working with alvosu's answer, but I had to put the word "Basic" inside the quotes, in front of the base64 string, so it looked like this: proxy_set_header Authorization "Basic dGVzdHN0cmluZw=="; | {
"source": [
"https://serverfault.com/questions/230749",
"https://serverfault.com",
"https://serverfault.com/users/15905/"
]
} |
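If you would rather derive the base64 value than copy the example string above, a quick way on the shell (username:password is a placeholder for the real credentials):
# -n matters: a trailing newline would corrupt the encoded value
echo -n 'username:password' | base64    # prints dXNlcm5hbWU6cGFzc3dvcmQ=
The output is what goes after the word Basic in the proxy_set_header directive, and the credentials can then be dropped from the proxied URL itself.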
231,161 | I have a couple of networking components in my rack that take giant AC adapters ("power bricks") that don't fit neatly into my rackmount PDU. I have one "thingy" that is shown below, and I need to buy a few more. But I have no idea what I'm searching for because I don't know what the "thingy" is called. Yes, this drawing is terrible. I would ask my 4-year-old to draw it for me because she's a better artist, but she's taking a nap. | I would just call it a "very short extension cord", and in fact a Google search for "short extension cord" turns up lots of results of exactly what you're looking for. E.g., these , which have a pass-through plug. | {
"source": [
"https://serverfault.com/questions/231161",
"https://serverfault.com",
"https://serverfault.com/users/475/"
]
} |
231,220 | I have cron jobs set up to run daily on my Ubuntu server, e.g. 0 4 * * * command They are running, except they are running 8 hours early. When setting up the server, it was originally set to UTC time. I ran sudo dpkg-reconfigure tzdata to set the server to CST, which is 6 hours behind UTC. Interestingly, I am in PST which is 8 hours behind UTC, but I don't see how the server could know that. If I run the command date, it shows the time in CST. There must be some place that the time is configured wrong. Where can I look to solve this? | Did you remember to restart cron after changing your time zone? If not, cron may still have its old notion of the time zone from when it was originally started. While not strictly necessary, I usually suggest rebooting a machine after changing the time zone -- a server's time zone shouldn't ever change (or at least it should be VERY infrequent), and this guarantees that every program on the server has been restarted and knows about the change :-) | {
"source": [
"https://serverfault.com/questions/231220",
"https://serverfault.com",
"https://serverfault.com/users/59203/"
]
} |
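A short sketch of the fix implied by the answer above, assuming a stock Ubuntu box where cron runs as the cron service and logs to /var/log/syslog (both assumptions):
sudo dpkg-reconfigure tzdata       # pick the intended zone
sudo service cron restart          # make cron drop its old notion of the zone
date                               # should now show the expected local time
grep CRON /var/log/syslog | tail   # subsequent job start times should match the new zone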
231,438 | I know that similar questions have been asked, but the available answers are not very clear, so please bear with me. After setting up a few <VirtualHost>s in Apache, I'd like to configure the _default_ ServerName so that it returns the 404 message. I.e., unless some explicitly available domain is specified in the Host HTTP header, return 404. (Ideally something more direct than pointing to a now-nonexistent directory.) Any help would be greatly appreciated. | Did you try: Redirect 404 /
ErrorDocument 404 "Page Not Found" in the default VirtualHost? | {
"source": [
"https://serverfault.com/questions/231438",
"https://serverfault.com",
"https://serverfault.com/users/59240/"
]
} |
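A slightly fuller sketch of a catch-all default vhost built around those two directives. The file path, site name and Debian-style tooling here are assumptions, and the "000-" prefix is only there so this vhost sorts first and therefore catches any unmatched Host header:
cat > /etc/apache2/sites-available/000-catchall.conf <<'EOF'
<VirtualHost *:80>
    ServerName catchall.invalid
    Redirect 404 /
    ErrorDocument 404 "Page Not Found"
</VirtualHost>
EOF
a2ensite 000-catchall && apache2ctl configtest && service apache2 reload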
231,578 | I have a strange problem here. I just moved from apache + mod_php to nginx + php-fpm. Everything went fine except this one problem. I have a site, let's say example.com. When I access it like example.com?test=get_param $_SERVER['REQUEST_URI'] is /?test=get_param and there is a $_GET['test'] also. But when I access example.com/ajax/search/?search=get_param $_SERVER['REQUEST_URI'] is /ajax/search/?search=get_param yet there is no $_GET['search'] (there is no $_GET array at all). I'm using the Kohana framework, which routes /ajax/search to a controller, but I've put phpinfo() at index.php so I'm checking for $_GET variables before the framework does anything (this means the disappearing GET params aren't the framework's fault). My nginx.conf is like this worker_processes 4;
pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
index index.html index.php;
autoindex on;
autoindex_exact_size off;
include mime.types;
default_type application/octet-stream;
server_names_hash_bucket_size 128;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
error_log logs/error.log debug;
sendfile on;
tcp_nopush on;
tcp_nodelay off;
keepalive_timeout 2;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
include sites-enabled/*;
} and example.conf is like this server {
listen 80;
server_name www.example.com;
rewrite ^ $scheme://example.com$request_uri? permanent;
}
server {
listen 80;
server_name example.com;
root /var/www/example/;
location ~ /\. {
return 404;
}
location / {
try_files $uri $uri/ /index.php;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /usr/local/nginx/conf/fastcgi_params;
}
location ~* ^/(modules|application|system) {
return 403;
}
# serve static files directly
location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
access_log off;
expires 30d;
}
} fastcgi_params is like this fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param PATH_INFO $fastcgi_path_info; What is the problem here? By the way, there are a few more sites on the same server, both Kohana-based and plain PHP, that are working perfectly. | You are not passing GET arguments in your try_files call in the question: location / {
try_files $uri $uri/ /index.php;
} Should be: location / {
try_files $uri $uri/ /index.php?$query_string;
} You can use either $query_string or $args , they are equivalent - except $query_string is readonly (or alternatively, $args can be updated by any other logic you may wish to add) | {
"source": [
"https://serverfault.com/questions/231578",
"https://serverfault.com",
"https://serverfault.com/users/63852/"
]
} |
231,952 | Is there an equivalent of MySQL's SHOW CREATE TABLE in Postgres? Is this possible? If not what is the next best solution? I need the statement because I use it to create the table on an remote server (over WCF). | You can try to trace in the PostgreSQL log file what pg_dump --table table --schema-only really does. Then you can use the same method to write your own sql function. | {
"source": [
"https://serverfault.com/questions/231952",
"https://serverfault.com",
"https://serverfault.com/users/69728/"
]
} |
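Translating that into the usual commands (database and table names below are placeholders):
# Print CREATE TABLE plus indexes and constraints for a single table
pg_dump --schema-only --table=mytable mydb
# Or inspect the definition interactively from psql
psql -d mydb -c '\d+ mytable'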
232,046 | How would I configure my CentOS Linux server to automatically start mysql when the server is started following a shutdown? I'm aware of the init.d path... /etc/rc.d/init.d ...and I can see mysqld in this folder. I believe that placing items (i.e. by symbolic link) in this folder means that they should start on server restart. But this did not happen for me. Background: Our central IT desk restarted our virtualised CentOS servers over the weekend. The server was available following the restart, but the MySQL database had not been restarted as well. Thoughts? | Use chkconfig: chkconfig --level 345 mysqld on
"source": [
"https://serverfault.com/questions/232046",
"https://serverfault.com",
"https://serverfault.com/users/29205/"
]
} |
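To double-check the result on a SysV-init CentOS release (on systemd-based CentOS 7+ the equivalent would be systemctl enable mysqld):
chkconfig --list mysqld   # runlevels 3, 4 and 5 should show as "on"
service mysqld status     # confirm the service is currently running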
232,145 | I'm looking for a command that checks the validity of the Apache server's config files on both Debian and RHEL distros. I need to do this prior to a restart, so there will be no downtime. | Check: http://httpd.apache.org/docs/2.2/programs/apachectl.html apachectl configtest
"source": [
"https://serverfault.com/questions/232145",
"https://serverfault.com",
"https://serverfault.com/users/69481/"
]
} |
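A common way to combine the check with a zero-downtime reload; the control script is exposed as apachectl on RHEL-style systems and as apache2ctl on Debian/Ubuntu:
apachectl configtest && apachectl graceful     # RHEL/CentOS: reload only if the syntax check passes
apache2ctl configtest && apache2ctl graceful   # Debian/Ubuntu equivalent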