Dataset columns: source_id (int64, values 1 to 4.64M); question (string, lengths 0 to 28.4k); response (string, lengths 0 to 28.8k); metadata (dict).
429,775
I have the following CSV file. The file defines which Linux machines exist in the system and their IPs. My goal is to create a hosts file from this file, as in example 1: I need to take the IP address from the CSV file and put it in the first field of the hosts file, then take the matching Linux machine name and put it in the second field. Remark: this should be done with sed, awk, or perl, and I need to put the solution in my bash script. CSV file: machine , VM-LINUX1 , SZ , Phy , 10.213.158.18 , PROXY , VM-LINUX2 , SZ , 10.213.158.19 , OLD HW , VM-LINUX3 , SZ , 10.213.158.20 , , VM-LINUX4 , SZ , Phy , 10.213.158.21 , , VM-LINUX5 , SZ , Phy , OUT , EXT , LAN3 , 10.213.158.22 , INTERNAL , VM-LINUX6 , SZ , Phy , 10.213.158.23 , , server , new HW , VM-LINUX7 , SZ , Phy , 10.213.158.24 , OUT, LAN3 , VM-LINUX8 , SZ , 10.213.158.25 , OLD HW , machine , VM-LINUX9 , SZ , Phy , INT , 10.213.158.26 , LAN2, AN45, , VM-LINUX10 , SZ , Phy , 10.213.158.27 , , VM-LINUX11 , SZ , Phy , LAN5 , 10.213.158.28 ,
example 1 (hosts file):
10.213.158.18 VM-LINUX1
10.213.158.19 VM-LINUX2
10.213.158.20 VM-LINUX3
10.213.158.21 VM-LINUX4
10.213.158.22 VM-LINUX5
10.213.158.23 VM-LINUX6
10.213.158.24 VM-LINUX7
10.213.158.25 VM-LINUX8
10.213.158.26 VM-LINUX9
10.213.158.27 VM-LINUX10
10.213.158.25 VM-MACHINE8
10.213.158.26 STAR9
10.213.158.27 TOP10
10.213.158.28 SERVER11
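One possible awk sketch for the transformation the question describes (the input file name, output file name, and the VM-LINUX* naming pattern are assumptions taken from the question's sample data, not a confirmed solution):

    # pick the field that looks like an IPv4 address and the field that looks
    # like a machine name from each CSV line, then print them hosts-file style
    awk -F',' '{
      ip = ""; name = "";
      for (i = 1; i <= NF; i++) {
        gsub(/^[ \t]+|[ \t]+$/, "", $i)                       # trim spaces around the field
        if ($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) ip = $i  # looks like an IP address
        if ($i ~ /^VM-LINUX[0-9]+$/) name = $i                # looks like a machine name
      }
      if (ip != "" && name != "") print ip "\t" name
    }' machines.csv > hosts.generated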
I had this issue before with a recent version of BIND (9.8.1). The following option solved the problem for me: dnssec-validation no;
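For reference, a minimal sketch of where that option lives (the exact path of named.conf varies by distribution, so treat the file location as an assumption):

    options {
        // ... existing options ...
        dnssec-validation no;
    };

and then reload the configuration, for example with rndc reconfig.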
{ "source": [ "https://serverfault.com/questions/429775", "https://serverfault.com", "https://serverfault.com/users/117906/" ] }
429,937
I have a / partition which contains /var and is too small. I have another existing partition with enough space. Here is my df:
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda1    5,0G  4,5G  289M   95%   /
tmpfs        242M  0     242M   0%    /lib/init/rw
udev         10M   2,7M  7,4M   27%   /dev
tmpfs        242M  0     242M   0%    /dev/shm
/dev/sda2    15G   406M  14G    3%    /home
How can I move the /var folder from sda1 to sda2?
Go into single-user mode, and make sure any process writing to /var is stopped (check with lsof -n | grep /var ). Then:
mkdir -p /home/var
rsync -va /var/. /home/var/.
mv /var /var.old    # you can remove /var.old when you are done to reclaim the space
mkdir -p /var
mount -o bind /home/var /var
Update your /etc/fstab to make the bind mount permanent, e.g.:
/home/var /var none bind 0 0
{ "source": [ "https://serverfault.com/questions/429937", "https://serverfault.com", "https://serverfault.com/users/124061/" ] }
430,059
I want to setup a git server. I have found several how-to's, well detailed. Some describe the installation for a git-server accessible thru Ssh, while others, accessible thru HTTP. ( Others even advise tools like gitolite ). Are there pros or cons choosing over SSH or HTTP? It seems that by HTTP, the file transfer is significantly slower, but I wonder if there are other things to keep in mind. What is the most common way of setting up a git server, if any?
While you're asking for the most common way, I think it's better to look at your situation and remember that one protocol doesn't exclude another - you can add more access protocols later if you need them. Most efficient and fast is the native Git daemon. However, it offers few features: no encryption, no authentication. Ideal for public read-only mirrors of your repositories. If you need performance, also consider installing a recent version rather than the version shipped with your OS. Most compatible is HTTP. Less efficient than native Git, but not that much of a difference either. The most important pro of HTTP is firewall penetration and proxy support. It appears as regular HTTP traffic to most gateways/firewalls. More secure is HTTPS, but inevitably less efficient too. It requires quite some configuration, and you'll also need a trusted TLS certificate. Similar in security, but a more common way, is to use SSH. It is the default if no protocol is specified on the command line. Powered by SSH, it provides strong encryption and both password and key authentication. While unconventional, it is possible to allow anonymous access this way too. My advice would depend on the use case of your repositories: private repositories & a small user group: SSH; public repositories, any number of clones, but a small group of push-privileged users: HTTP and Git (fetch-only) + SSH (+push access); any of the above, but with a large number of push-privileged users: you probably don't understand the philosophy of Git. Some public or corporate networks might block Git and SSH traffic. If you really need to access your repositories from anywhere , consider using both HTTPS and SSH.
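As a hedged illustration of what those access methods look like in practice (host and repository names here are made up):

    # native git protocol: unauthenticated, read-only export of /srv/git
    git daemon --base-path=/srv/git --export-all --reuseaddr
    git clone git://git.example.com/project.git

    # smart HTTP(S) behind a web server
    git clone https://git.example.com/git/project.git

    # SSH, the default transport when no protocol is given
    git clone git@git.example.com:project.git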
{ "source": [ "https://serverfault.com/questions/430059", "https://serverfault.com", "https://serverfault.com/users/70111/" ] }
430,138
I read in one of the VMware KB articles that snapshots will directly affect VM performance. But my team keeps asking me how snapshots can affect performance. I would like to give them a solid reason behind the statement that snapshots are performance killers. Can anyone explain a little bit of the theory about how snapshots actually affect performance? Is it just because the disk I/O rate would be slower?
When you create a snapshot, the original disk image is "frozen" in a consistent state, and all write accesses from then on go to a new differential image. Even worse, as explained here and here , the differential image has the form of a change log that records every change made to a file since the snapshot was taken. This means that read accesses have to read not only the original file, but also all difference data (the original data plus every change made to the original data). The overhead increases even more when you cascade snapshots.
{ "source": [ "https://serverfault.com/questions/430138", "https://serverfault.com", "https://serverfault.com/users/105129/" ] }
430,309
I'm changing my network from having every device on one flat network to using VLANs. My problem is that we already have a lot of devices on this network (192.168.20.0/24). From theory, I read that each VLAN has to be a different subnet, and that I then need to configure virtual interfaces on my Cisco router to handle inter-VLAN routing. 1) How can I segment this network with minimum downtime for the devices already on the network? 2) Can I just create VLANs and leave all these VLANs in the same layer 3 network so that they can go out of the network (I am not too concerned about inter-VLAN routing), or do I have to create subnets, which means reconfiguring the existing devices (something I do not want)?
As Joeqwerty already noted, you're approaching this with an inadequate fundamental understanding, combined with vaguely-defined goals. You are setting yourself up for failure, downtime, and security holes. Rather than just answering your questions as asked I'm going to indulge in a little "vLAN 101" tutorial which might be a bit more useful for you. You seem to have a few fundamental misconceptions about vLAN segmentation and how it fits into network architecture, so let's roll ALLLLLLL the way back to the beginning for a minute: From a network architecture level you can take the very simplistic view that a vLAN is nothing more than a separate switch, not connected to any other switch (vLAN). If you look at vLANs in this way it becomes relatively clear how to use them: when you don't want machines in Group A to be able to talk to machines in Group B you put them in separate vLANs, and force them to traverse a router (ideally one with firewall functionality) to talk to each other. Under nearly all circumstances it's better (and easier) to do this by also putting the machines in different IP networks (subnets) -- machines within a vLAN are in the same subnet, and can chat amongst themselves as much as they want, but if they want to talk to someone outside their vLAN it's also going to be outside their subnet, so they get handed off to their default gateway, which can handle the security concern of who can talk to whom under what circumstances. So, vLAN architecture in 11 easy steps:
1. Figure out which machines form logical groups. These are your vLANs. In a very simple environment this could be Web Servers and Database Servers. In more complex environments you may have lots of groups, and you may combine multiple groups in a single vLAN -- this is an architecture decision you have to make.
2. Figure out an addressing scheme that suits your vLANs. If you're supremely lucky every vLAN will fit into a /24 and you'll be able to build a topology based around that. If you aren't that lucky, figure out which vLANs need bigger (or smaller) blocks.
3. Draw what you have done so far on paper.
4. Figure out which vLANs need to talk to each other. What ports/services should be open between vLANs/networks? What other conditions need to exist for your environment to function?
5. Draw what you came up with on paper. Make sure it's sane, then convert it into firewall/router policy.
6. Draft a firewall/router configuration. Ideally play with it in a test environment.
7. Draw your switch on paper and map which ports will go to which vLANs. It's helpful to physically group connections so that they're in the same logical vLAN, but this isn't strictly necessary.
8. Turn your switch drawing into a switch configuration. Ideally play with it in a test environment.
9. Clean up your drawings on paper. The logical drawing should look somewhat like this: (The image has been shrunk to obscure stuff you don't need to read)
10. Get someone else to look at your design. You can ask on Server Fault, but it's better if someone familiar with your environment looks at it as they're more likely to catch potential breakage.
11. Take a weekend and turn your logical design into a physical reality. (It should go without saying that you should have a rollback plan in case things go horribly wrong, but I'm saying it anyway.)
(If you are VERY good you might be able to skip some of the "Draw it on paper" steps above, but I don't recommend skipping that your first time.)
Re: the two specific questions you asked: 1) How can I segment this network with minimum down time on the devices already on the network? You can't. Breaking your network into vLANs will require an outage window - you will have to reconfigure your switch, move machines into different logical networks, configure routing, probably move some cables around, etc. etc. etc. Plan for an outage starting at 5PM Friday and extending over a weekend, ESPECIALLY if this is your first time designing a properly segmented network - you will spend some time debugging things that break. 2) Can I just create vLANs and leave all these vLANs in the same layer 3 network so that they can go out of the network (I am not too concerned about inter-vLAN routing), or do I have to create subnets which means reconfiguring the existing devices (something I do not want)? Can you? Yes. Will it buy you anything in terms of security? Not really. Will it make the entire project 10 times harder? Absolutely. Should you design a network this way? NO.
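To make the "virtual interfaces on my Cisco router" part of the question concrete, here is a hedged router-on-a-stick sketch in IOS syntax; the interface name, VLAN IDs, and subnets are made-up examples rather than values from the question:

    interface GigabitEthernet0/1.10
     encapsulation dot1Q 10
     ip address 192.168.10.1 255.255.255.0
    !
    interface GigabitEthernet0/1.20
     encapsulation dot1Q 20
     ip address 192.168.20.1 255.255.255.0

The switch port facing the router would need to be a trunk carrying those VLANs.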
{ "source": [ "https://serverfault.com/questions/430309", "https://serverfault.com", "https://serverfault.com/users/125623/" ] }
430,682
I get this warning for several packages every time I install any package or perform apt-get upgrade . Not sure what is causing it; it's a fresh Debian install on my OpenVZ server and I haven't changed any dpkg settings. Here's an example: root@debian:~# apt-get install cowsay Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: filters The following NEW packages will be installed: cowsay 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 21.9 kB of archives. After this operation, 91.1 kB of additional disk space will be used. Get:1 http://ftp.us.debian.org/debian/ unstable/main cowsay all 3.03+dfsg1-4 [21.9 kB] Fetched 21.9 kB in 0s (70.2 kB/s) Selecting previously unselected package cowsay. dpkg: warning: files list file for package 'libssh2-1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libkrb5-3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libwrap0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcap2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpam-ck-connector:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libc6:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libtalloc2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libselinux1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libp11-kit0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libavahi-client3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libbz2-1.0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpcre3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgpm2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgnutls26:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libavahi-common3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcroco3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'liblzma5:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsensors4:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libbsd0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libavahi-common-data:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libss2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libblkid1:amd64' missing; assuming package has no files 
currently installed dpkg: warning: files list file for package 'libslang2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libacl1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcomerr2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libkrb5support0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'e2fslibs:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'librtmp0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libidn11:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpcap0.8:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libattr1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libdevmapper1.02.1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'odbcinst1debian2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libltdl7:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libkeyutils1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcups2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsqlite3-0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libck-connector0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'zlib1g:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libnl1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libfontconfig1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libudev0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsepol1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libmagic1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libk5crypto3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libunistring0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgpg-error0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libusb-0.1-4:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpam0g:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpopt0:amd64' missing; assuming package has no files currently installed dpkg: warning: 
files list file for package 'libgssapi-krb5-2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgeoip1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcurl3-gnutls:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libtasn1-3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libuuid1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgcrypt11:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgdbm3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libdbus-1-3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsysfs2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libfreetype6:amd64' missing; assuming package has no files currently installed (Reading database ... 21908 files and directories currently installed.) Unpacking cowsay (from .../cowsay_3.03+dfsg1-4_all.deb) ... Processing triggers for man-db ... Setting up cowsay (3.03+dfsg1-4) ... root@debian:~# Everything works fine, but these warning messages are pretty annoying. Does anyone know how I can fix this? ls -la /var/lib/dpkg/info | grep libssh : -rw-r--r-- 1 root root 327 Sep 21 15:51 libssh2-1.list -rw-r--r-- 1 root root 359 Aug 15 06:06 libssh2-1.md5sums -rwxr-xr-x 1 root root 135 Aug 15 06:06 libssh2-1.postinst -rwxr-xr-x 1 root root 132 Aug 15 06:06 libssh2-1.postrm -rw-r--r-- 1 root root 20 Aug 15 06:06 libssh2-1.shlibs -rw-r--r-- 1 root root 4377 Aug 15 06:06 libssh2-1.symbols
He fixed it by reinstalling the packages that appeared there. So you might want to try something like this:
for package in $(apt-get upgrade 2>&1 |\
  grep "warning: files list file for package '" |\
  grep -Po "[^'\n ]+'" | grep -Po "[^']+"); do
  apt-get install --reinstall "$package"
done
Copy-paste friendly in one line: for package in $(apt-get upgrade 2>&1 | grep "warning: files list file for package '" | grep -Po "[^'\n ]+'" | grep -Po "[^']+"); do apt-get install --reinstall "$package"; done
Be aware that running this command takes some time, as it cycles through every package. In some cases apt-get upgrade doesn't show the errors, so you can instead reinstall one package that gives the error (for example x) and run:
for package in $(apt-get install --reinstall x 2>&1 |\
  grep "warning: files list file for package '" |\
  grep -Po "[^'\n ]+'" | grep -Po "[^']+"); do
  apt-get install --reinstall "$package"
done
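If you just want to see which packages are affected before reinstalling anything, the same warning text can be summarised first (this mirrors the grep chain above and assumes GNU grep, which Debian ships):

    apt-get upgrade 2>&1 \
      | grep -oP "files list file for package '\K[^']+" \
      | sort -u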
{ "source": [ "https://serverfault.com/questions/430682", "https://serverfault.com", "https://serverfault.com/users/135059/" ] }
430,688
My Netgear router randomly reset itself the other day loosing all of my config settings: DSL details, Firewall rules, the lot! So I set about restoring all of the details manually, but when it came to configuring the firewall I wanted improve the security by explicitly setting 'deny' rules for everything that I figured is 'non-essential', and (although not necessary) whilst I was at it I set explicit 'allow' for the 'essential' protocols. I'll admit now I didn't really know what I was doing and everything was just 'my best guess', but I enabled only DNS, HTTP, HTTPS, FTP, SFTP, TFTP with everything else blocked. This did not work for me as I could not access 99% of web sites (although strangely Google worked!), so I played around a bit more and found that (oddly) if I disabled just the explicit 'allow' rules then everything worked fine, for browsing anyway. Today I came to work on some web-sites via FTP (edit: I use FileZilla on Linux) and just could not get a consistent connection, it kept dropping out after a few files or being blocked by the server or simply not connecting. It would authenticate okay but then stop when retrieving the initial directory listing! e.g.: Status: Delaying connection for 1 second due to previously failed connection attempt... Status: Resolving address of ftp.domain.co.uk Status: Resolving address of ftp.domain.co.uk Status: Connecting to 123.123.123.123:21... Status: Connecting to 123.123.123.123:21... Status: Connection established, waiting for welcome message... Status: Connection established, waiting for welcome message... Response: 421 Too many connections (8) from this IP Error: Could not connect to server Status: Delaying connection for 5 seconds due to previously failed connection attempt... Response: 421 Too many connections (8) from this IP Error: Could not connect to server Status: Delaying connection for 5 seconds due to previously failed connection attempt... I've checked and re-checked the FTP settings (they worked before anyway), I have Googled the I.T. out of the various protocols that I have blocked in the fire-wall but none seem essential to FTP (other than FTP/SFTP etc. which I have passively enabled). I'm (clearly) no server engineer, or protocols / fire-wall expert so I was hoping that some one could maybe shed some light on why my FTP is failing. I've been wondering if I ought to be allowing BGP, BOOTP and/or IDENT (or any others)? What other protocols are required for FTP? Thanks in advance!
He fixed it reinstalling the files that appeared there. So you might want to try something like this: for package in $(apt-get upgrade 2>&1 |\ grep "warning: files list file for package '" |\ grep -Po "[^'\n ]+'" | grep -Po "[^']+"); do apt-get install --reinstall "$package"; done Copy-paste friendly in one line: for package in $(apt-get upgrade 2>&1 | grep "warning: files list file for package '" | grep -Po "[^'\n ]+'" | grep -Po "[^']+"); do apt-get install --reinstall "$package"; done Be aware, that running this command takes some time , as we cycle through every package. In some cases apt upgrade doesn't show the errors therefore you can reinstall one package (for example x) which gives the error and execute like this: for package in $(apt-get install --reinstall x 2>&1 |\ grep "warning: files list file for package '" |\ grep -Po "[^'\n ]+'" | grep -Po "[^']+"); do apt-get install --reinstall "$package"; done
{ "source": [ "https://serverfault.com/questions/430688", "https://serverfault.com", "https://serverfault.com/users/90200/" ] }
430,895
I'm looking for a log file or any service to report the latest login attempts that have failed due to username/password mismatch. Are there any such utilities available for CentOS? (built-in is preferred) My second question, and more generally, I need a log file of penetration attempts to my server. Ideally, this log should contain all attempts including logins, httpd activities, and other conventional open ports.
In Linux, the last command shows successful login attempts and displays session information (pts, source, date and length). The lastb command records all bad login attempts. Both share the same man page, but the difference is that last reads the binary /var/log/wtmp file, and lastb reads the /var/log/btmp file by default. How far back these files go depends on your log rotation schedule, but it should span a few weeks. Most distributions rotate /var/log/wtmp monthly, so you can read a previous record, usually kept as /var/log/wtmp.1, by specifying the file with the -f parameter: last -f /var/log/wtmp.1
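A few hedged examples of putting this to use; the /var/log/secure path is the CentOS default for sshd/authentication messages and covers the "penetration attempts" part of the question:

    lastb -n 20                                # last 20 failed logins (reading btmp needs root)
    last -n 20                                 # last 20 successful logins
    grep 'Failed password' /var/log/secure     # SSH authentication failures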
{ "source": [ "https://serverfault.com/questions/430895", "https://serverfault.com", "https://serverfault.com/users/128297/" ] }
430,901
Using the Disk2VHD utility I converted my bare-metal OS into a Hyper-V VHD - http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx - and ended up with a huge 190GB VHD file. Apart from performance issues, this VHD worked fine as a guest when hosted on Windows Server 2008 R2 Hyper-V. Having realized the need to keep only system files and application installations on the VHD, I deleted most of the junk data from it, and now it contains only 20-25 GB. But I am not able to shrink the VHD. Having done some research, I came to know that this is a limitation of .VHD files. Subsequently I followed these two steps using the Edit Virtual Hard Disk Wizard on a Windows 2012 box: Convert from VHD to VHDX (took close to 3 hrs.) Compact (another 4 hrs.) This did not shrink the VHDX either. Does Hyper-V not provide proper support for handling large VHDs or VHDXs whose size is in the range of 200GB?
In Linux, the last command shows successful login attempts and displays session information (pts, source, date and length). The lastb command records all bad login attempts. Both share the same man page, but the difference is that last reads the binary /var/log/wtmp file, and lastb reads the /var/log/btmp file by default. The range of these files depends on your log rotation schedule, but it should span a few weeks. Most distributions will rotate /var/log/wtmp monthly, so you can read a previous record, usually listed as /var/log/wtmp.1 by specifying the file with the -f parameter... last -f /var/log/wtmp.1
{ "source": [ "https://serverfault.com/questions/430901", "https://serverfault.com", "https://serverfault.com/users/137130/" ] }
430,970
Is it possible to set a CNAME record at the top of a domain? (i.e. @ CNAME www , @ CNAME foobar.com. , etc.) My ISP says that it's only possible to use CNAMEs for subdomains, but I've read somewhere else that it should be possible, even if not recommended.
Not possible - this would conflict with the SOA and NS records at the domain root. From RFC 1912, section 2.4: "A CNAME record is not allowed to coexist with any other data."
{ "source": [ "https://serverfault.com/questions/430970", "https://serverfault.com", "https://serverfault.com/users/43579/" ] }
430,974
Which one of these two files should I use to configure Apache? The httpd.conf is empty, while apache2.conf is not. It confuses me!
The httpd.conf is designed for user configurations. You really should not edit apache2.conf, as it may be updated by future upgrades. An additional option is to just put your custom configuration into /etc/apache2/conf.d; all files in this directory are included as well.
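A hedged sketch of the conf.d approach on the Apache 2.2 layout described above (the directive and file name are just examples):

    echo 'ServerTokens Prod' | sudo tee /etc/apache2/conf.d/local.conf
    sudo apache2ctl configtest && sudo service apache2 reload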
{ "source": [ "https://serverfault.com/questions/430974", "https://serverfault.com", "https://serverfault.com/users/39101/" ] }
431,080
I want dig to show only the answer to my query. Normally, it prints out a lot of additional info like this:
;; <<>> DiG 9.7.3 <<>> google.de
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55839
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;google.de. IN A
;; ANSWER SECTION:
google.de. 208 IN A 173.194.69.94
;; Query time: 0 msec
;; SERVER: 213.133.99.99#53(213.133.99.99)
;; WHEN: Sun Sep 23 10:02:34 2012
;; MSG SIZE rcvd: 43
I want this to be reduced to just the answer section. dig has a lot of options; a good one I found was +noall +answer
; <<>> DiG 9.7.3 <<>> google.de +noall +answer
;; global options: +cmd
google.de. 145 IN A 173.194.69.94
It leaves out most of the stuff, but still shows this options line. Any ideas on how to remove it using dig options? I sure could cut it out using other tools, but an option with dig itself would be the cleanest and nicest.
I am not sure why you are getting comments in the output. That is the correct set of options for the behaviour you want. Here are the same options with the same version of dig: $ dig -version DiG 9.7.3 $ dig +noall +answer google.de google.de. 55 IN A 173.194.44.216 google.de. 55 IN A 173.194.44.223 google.de. 55 IN A 173.194.44.215 $
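If you want even less output, dig also has the +short flag, which prints just the record data, one entry per line:

    dig +short google.de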
{ "source": [ "https://serverfault.com/questions/431080", "https://serverfault.com", "https://serverfault.com/users/125240/" ] }
431,167
I got a string like the following: test.de. 1547 IN SOA ns1.test.de. dnsmaster.test.de. 2012090701 900 1000 6000 600 Now I want to replace all the tabs/spaces in between the records with just a single space so I can easily use it with cut -d " " I tried the following: sed "s/[\t[:space:]]+/[:space:]/g" and various variations but couldn't get it working. Any ideas?
Use sed -e "s/[[:space:]]\+/ /g" Here's an explanation:
[           # start of character class
[:space:]   # The POSIX character class for whitespace characters. It's
            # functionally identical to [ \t\r\n\v\f] which matches a space,
            # tab, carriage return, newline, vertical tab, or form feed. See
            # https://en.wikipedia.org/wiki/Regular_expression#POSIX_character_classes
]           # end of character class
\+          # one or more of the previous item (anything matched in the brackets).
For your replacement, you only want to insert a space. [:space:] won't work there since that's an abbreviation for a character class and the regex engine wouldn't know what character to put there. The + must be escaped in the regex because with sed's regex engine + is a normal character whereas \+ is a metacharacter for 'one or more'. On page 86 of Mastering Regular Expressions , Jeffrey Friedl mentions in a footnote that ed and grep used escaped parentheses because "Ken Thompson felt regular expressions would be used to work primarily with C code, where needing to match raw parentheses would be more common than backreferencing." I assume that he felt the same way about the plus sign, hence the need to escape it to use it as a metacharacter. It's easy to get tripped up by this. In sed's default (basic) regex you'll need to escape + , ? , | , ( , and ) . Or use -r for extended regex, where those are metacharacters without the backslash (then it looks like sed -r -e "s/[[:space:]]+/ /g" or sed -re "s/[[:space:]]+/ /g" )
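As a usage sketch that ties this back to the question's SOA line and the cut -d " " goal (after squeezing, field 5 is the primary nameserver; test.de is the question's example domain):

    dig +noall +answer test.de SOA | sed -e 's/[[:space:]]\+/ /g' | cut -d ' ' -f 5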
{ "source": [ "https://serverfault.com/questions/431167", "https://serverfault.com", "https://serverfault.com/users/125240/" ] }
431,170
In many development scenarios I needed to see how my application - web or desktop - would behave when the internet connection is very slow, around 10-15 KBps. Is there a way to slow down the internet speed in Ubuntu/Windows/Mac?
Use sed -e "s/[[:space:]]\+/ /g" Here's an explanation: [ # start of character class [:space:] # The POSIX character class for whitespace characters. It's # functionally identical to [ \t\r\n\v\f] which matches a space, # tab, carriage return, newline, vertical tab, or form feed. See # https://en.wikipedia.org/wiki/Regular_expression#POSIX_character_classes ] # end of character class \+ # one or more of the previous item (anything matched in the brackets). For your replacement, you only want to insert a space. [:space:] won't work there since that's an abbreviation for a character class and the regex engine wouldn't know what character to put there. The + must be escaped in the regex because with sed's regex engine + is a normal character whereas \+ is a metacharacter for 'one or more'. On page 86 of Mastering Regular Expressions , Jeffrey Friedl mentions in a footnote that ed and grep used escaped parentheses because "Ken Thompson felt regular expressions would be used to work primarily with C code, where needing to match raw parentheses would be more common than backreferencing." I assume that he felt the same way about the plus sign, hence the need to escape it to use it as a metacharacter. It's easy to get tripped up by this. In sed you'll need to escape + , ? , | , ( , and ) . or use -r to use extended regex (then it looks like sed -r -e "s/[[:space:]]\+/ /g" or sed -re "s/[[:space:]]\+/ /g"
{ "source": [ "https://serverfault.com/questions/431170", "https://serverfault.com", "https://serverfault.com/users/62766/" ] }
431,838
I'm in the market for a new storage solution. While researching various specs, one of my coworkers said that some RAID controllers can synchronize HDD rotation so that all drives' sector/block 0 passes under the read head at the same time. I searched online but have not been able to find information proving/disproving this claim.
RAID controllers did not (and could not) synchronize disk spindles, but it was an option on some drives. Given a set of identical drives with spindle sync connectors you could ensure a set of disks were all synchronized. I happened to own some Seagate Elite 3 (ancient, obsolete SCSI-2 drives) which I remembered having such a connector so I found the Seagate ST43400N/ND Elite 3 user guide which has this handy illustration in Figure 1 (note connector second from the left): Figure 14 (not shown here) illustrates how to connect the drives together: Synchronizing the spindle The spindle sync feature makes it possible to synchronize the spindle rotation of a group of disc drives. This reduces the latency normally encountered when the initiator switches between multiple disc drives. Figure 14 shows two system configurations. In one type of system, one of the disc drives in the system provides the reference clock. In the other type, an external signal source provides the reference clock.
{ "source": [ "https://serverfault.com/questions/431838", "https://serverfault.com", "https://serverfault.com/users/87065/" ] }
431,840
With 4 servers running OpenSuse, is it possible to cluster them all together to run websites? What is involved in clustering, and is anything special needed to cluster the machines together so they work as one? Do you need a special OS?
RAID controllers did not (and could not) synchronize disk spindles, but it was an option on some drives. Given a set of identical drives with spindle sync connectors you could ensure a set of disks were all synchronized. I happened to own some Seagate Elite 3 (ancient, obsolete SCSI-2 drives) which I remembered having such a connector so I found the Seagate ST43400N/ND Elite 3 user guide which has this handy illustration in Figure 1 (note connector second from the left): Figure 14 (not shown here) illustrates how to connect the drives together: Synchronizing the spindle The spindle sync feature makes it possible to synchronize the spindle rotation of a group of disc drives. This reduces the latency normally encountered when the initiator switches between multiple disc drives. Figure 14 shows two system configurations. In one type of system, one of the disc drives in the system provides the reference clock. In the other type, an external signal source provides the reference clock.
{ "source": [ "https://serverfault.com/questions/431840", "https://serverfault.com", "https://serverfault.com/users/137299/" ] }
432,322
How to pause execution for a while in a Windows batch file between a command and the next one?
The correct way to sleep in a batch file is to use the timeout command, introduced in Windows 2000. To wait somewhere between 29 and 30 seconds : timeout /t 30 The timeout would get interrupted if the user hits any key; however, the command also accepts the optional switch /nobreak , which effectively ignores anything the user may press, except an explicit CTRL-C : timeout /t 30 /nobreak Additionally, if you don't want the command to print its countdown on the screen, you can redirect its output to NUL : timeout /t 30 /nobreak > NUL
{ "source": [ "https://serverfault.com/questions/432322", "https://serverfault.com", "https://serverfault.com/users/6352/" ] }
432,617
I have installed a fresh copy of Windows Server 2012 and when I go to Control Panel > Appearance > Display > Color and Appearance it states " This page is not available in this edition of Windows ". The version I installed is the latest from MSDN subscriber downloads and is listed under Computer Properties as "Windows Server 2012 Standard". I can change the desktop background color, but not the colors of the window borders. The only "schemes" available are "Windows Basic" and then 4 even uglier "High Contrast" schemes. It's not a huge deal, but looking at the ugly baby blue window borders all the time is giving me a headache. Why would such a simple setting "not be available"?
You'll need to enable the "Desktop Experience" feature to get the desktop parts (color schemes, 3d graphics, windows media player etc). We do this on our terminal servers. You might have to force users into using a defined style - this can be done via the local group policy or in a regular domain based GPO. Below screenshot comes from here .
{ "source": [ "https://serverfault.com/questions/432617", "https://serverfault.com", "https://serverfault.com/users/51899/" ] }
432,959
If I make an analogy with the hosting of a web server, I would say that git's data should be in /var/git, so my git repository would be in /var/git/myrepo. Q: Is that the right guess?
There is no right or wrong answer here, except the one dictated by your own personal religion and the contents of the hier(7) manpage on your system (typical Linux hier manpage; typical BSD hier manpage). /var/git/* seems reasonable to me personally. That's where I keep mine.
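A minimal sketch of setting that up (the repository, user, and server names are assumptions):

    sudo mkdir -p /var/git
    sudo git init --bare /var/git/myrepo.git
    # then, from a workstation:
    git clone user@server:/var/git/myrepo.git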
{ "source": [ "https://serverfault.com/questions/432959", "https://serverfault.com", "https://serverfault.com/users/129642/" ] }
433,024
I'm currently receiving a fairly large HTTP flood right now, and it's causing my nginx reverse proxy to produce a 502 Bad Gateway. I have a frontend server running nginx as a proxy to my backend server, but it's just getting a bunch of connect() failed (110: Connection timed out) while connecting to upstream errors. Tons of them. If I bypass the proxy server to connect to the backend, I can run the site just fine, so I know it's in the reverse proxy somewhere. However, I have no idea how to determine why it's timing out. Any help? running nginx 1.2.3 on CentOS 6.2
I'm assuming you've already jacked your Nginx error logging level up to debug. If not, start there. Your best bet is probably going to be using strace to view the system calls being made by Nginx. In particular, you'll want to pay attention to connect() calls, and keep an eye on the return codes of these ( man 2 connect can be your friend here). Once you have that information, you can better make an educated guess about whether the issue is confined to your frontend proxy, or has something to do with the interactions between the proxy and backend application server.
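A hedged example of what that strace invocation could look like (the pgrep pattern assumes the default nginx worker process title):

    # as root: follow one worker and watch its connect() calls with timestamps
    strace -f -tt -e trace=connect -p "$(pgrep -f 'nginx: worker process' | head -n 1)"

A connect() that ends in ETIMEDOUT matches the 110 error in the log, while ECONNREFUSED would point at the backend not listening at all.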
{ "source": [ "https://serverfault.com/questions/433024", "https://serverfault.com", "https://serverfault.com/users/33982/" ] }
433,029
We have a domain "muzzard.com" which has nameservers ns0 and ns1 I'd like to add a delegation aws.muzzard.com and have the nameservers for that delegation in there e.g. ns0.aws.muzzard.com etc. When I go through the new delegation wizard it asks for the FQDN's of the nameservers for the delegation.... which don't exist! This must be possible.. What gives?
I'm assuming you've already jacked your Nginx error logging level up to debug. If not, start there. Your best bet is probably going to be using strace to view the system calls being made by Nginx. In particular, you'll want to pay attention to connect() calls, and keep an eye on the return codes of these ( man 2 connect can be your friend here). Once you have that information, you can better make an educated guess about whether the issue is confined to your frontend proxy, or has something to do with the interactions between the proxy and backend application server.
{ "source": [ "https://serverfault.com/questions/433029", "https://serverfault.com", "https://serverfault.com/users/128203/" ] }
433,265
I have a PHP script that creates a directory and outputs an image to the directory. This was working just fine under Apache but we recently decided to switch to NGINX to make more use of our limited RAM. I'm using the PHP mkdir() command to create the directory: mkdir(dirname($path['image']['server']), 0755, true); After the switch to NGINX, I'm getting the following warning: Warning: mkdir(): Permission denied in ... I've already checked all the permissions of the parent directories, so I've determined that I probably need to change the NGINX or PHP-FPM 'user' but I'm not sure how to do that (I never had to specify user permissions for APACHE). I can't seem to find much information on this. Any help would be great! (Note: Besides this little hang-up, the switch to NGINX has been pretty seamless; I'm using it for the first time and it literally only took about 10 minutes to get up and running with NGINX. Now I'm just ironing out the kinks.)
Run nginx & php-fpm as www:www.
1. Nginx: edit nginx.conf and set the user directive to: user www www; If the master process is run as root, then nginx will setuid()/setgid() to USER/GROUP. If GROUP is not specified, then nginx uses the same name as USER. By default it's the nobody user and the nobody or nogroup group, or the --user=USER and --group=GROUP from the ./configure script.
2. PHP-FPM: edit php-fpm.conf and set user and group to www: [www] user=www group=www user - Unix user of processes. Default "www-data" group - Unix group of processes. Default "www-data"
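If the www user does not exist yet, a hedged sketch of creating it and giving it ownership of the web root (the path and the php-fpm service name are assumptions and vary by distribution):

    groupadd www
    useradd -g www -s /sbin/nologin -d /var/www www
    chown -R www:www /var/www
    service nginx reload && service php-fpm restart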
{ "source": [ "https://serverfault.com/questions/433265", "https://serverfault.com", "https://serverfault.com/users/127008/" ] }
433,295
When I type something like sudo apt-get install firefox , everything work until it asks me: After this operation, 77 MB of additional disk space will be used. Do you want to continue [Y/n]? Y Then error messages are displayed: Failed to fetch: <URL> My iptables rules are as follows: -P INPUT DROP -P OUTPUT DROP -P FORWARD DROP -A INPUT -i lo -j ACCEPT -A OUTPUT -o lo -j ACCEPT -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT -A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT What should I add to allow apt-get to download updates? Thanks
apt-get almost always downloads over HTTP but may also use FTP, so the short answer is probably to allow outbound HTTP connections... and also DNS, of course. The configuration you have now disallows all outgoing network traffic (the ESTABLISHED rule you have on the OUTPUT chain isn't effective since no sessions will ever get established). Do you need to allow ONLY apt-get updates while still disallowing everything else? iptables is probably the wrong tool for that job, as it isn't really going to interpret URLs and allow HTTP transfers selectively. You'd want to use an HTTP proxy server for that job. You can use a simpler setup that will permit apt-get downloads, but be aware that this also permits all other outgoing DNS and HTTP connections, which may not be what you want.
iptables -F OUTPUT    # remove your existing OUTPUT rule which becomes redundant
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT
If your APT sources include HTTPS or FTP sources, or HTTP sources on ports other than 80, you'll have to add those ports too. Next, you will have to permit the return traffic. You can do that with this single rule that permits any established connection:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
(It is safe to allow all inbound established connections when using connection tracking, because only connections that you have otherwise allowed will get to the ESTABLISHED state.)
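Note that rules added at the command line are not persistent across reboots; a hedged way to keep them on a Debian/Ubuntu system (the file path is a common convention, not mandatory):

    iptables-save > /etc/iptables.rules
    # restore them at boot, e.g. from an if-up hook or via the iptables-persistent package:
    iptables-restore < /etc/iptables.rules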
{ "source": [ "https://serverfault.com/questions/433295", "https://serverfault.com", "https://serverfault.com/users/137658/" ] }
433,765
I have a Windows server that will sometimes reboot into safe mode after updates. I'm working on that issue but what I'd really like to know is how can I check to see if Windows is running in safe mode or not. Ideally I would like to incorporate it into a script that would send a passive check to our Nagios box with the status. Is there some environmental variable I can use or some way to get this information via the command line?
I think this does what you are looking for PS C:\> gwmi win32_computersystem | select BootupState BootupState ----------- Normal boot http://msdn.microsoft.com/en-us/library/windows/desktop/aa394102%28v=vs.85%29.aspx Possible return values: Normal boot Fail-safe boot Fail-safe with network boot
{ "source": [ "https://serverfault.com/questions/433765", "https://serverfault.com", "https://serverfault.com/users/67923/" ] }
433,986
I have a production server with 16GB of RAM that came with a 32-bit CentOS installation. The website hosted on this server is getting increasing amounts of traffic every day, which has led to some MySQL performance issues. I ran mysqltuner.pl and got the following messages: [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** Can I survive with the 32-bit OS, or do I have to install the 64-bit version?
You can survive just fine with the 32 bit CentOS install. But, like the warning says, using a 32 bit OS means that MySQL can't actually use all (or even most of) the RAM installed in the system. Seems like a waste to me. If the hardware supports 64-bit, I'd certainly replace the 32 bit OS with a 64 bit one, yeah. You'd probably want to do some testing first, and/or use a second server to find out what's going to break when you switch OSes, because something always does. Strictly speaking, you don't need to install a 64 bit OS, but you definitely should. And probably sooner, rather than later, before the RAM limitations of a 32 bit OS start causing you problems.
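Two quick, hedged checks before committing to the reinstall (these only confirm capability, not that the switch will be painless):

    uname -m                    # i686 = a 32-bit kernel is running, x86_64 = already 64-bit
    grep -cw lm /proc/cpuinfo   # non-zero: the CPU advertises 64-bit "long mode" support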
{ "source": [ "https://serverfault.com/questions/433986", "https://serverfault.com", "https://serverfault.com/users/88353/" ] }
434,064
I'm wondering what the correct way is to move a VM between two KVM hosts without using any kind of shared storage. Would copying the disk files and the XML dump from the source KVM machine to the destination one suffice? If so, what commands need to be run to import the VM on the destination? The OS is Ubuntu on both the Dom0's and DomU. Thanks in advance
1. Copy the VM's disks from /var/lib/libvirt/images on the source host to the same dir on the destination host.
2. On the source host, run virsh dumpxml VMNAME > domxml.xml and copy this xml to the destination host.
3. On the destination host, run virsh define domxml.xml
4. Start the VM.
If the disk location differs, you need to edit the xml's devices/disk node to point to the image on the destination host. If the VM is attached to custom defined networks, you'll need to either edit them out of the xml on the destination host or redefine them as well:
On the source machine: virsh net-dumpxml NETNAME > netxml.xml and copy netxml.xml to the target machine.
On the target machine: virsh net-define netxml.xml && virsh net-start NETNAME && virsh net-autostart NETNAME
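A hedged end-to-end sketch of those steps (hostnames, the image file name, and VMNAME are placeholders, and the VM is assumed to be shut down while its disk is copied):

    # on the source host
    scp /var/lib/libvirt/images/VMNAME.img root@desthost:/var/lib/libvirt/images/
    virsh dumpxml VMNAME > VMNAME.xml
    scp VMNAME.xml root@desthost:

    # on the destination host
    virsh define VMNAME.xml
    virsh start VMNAME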
{ "source": [ "https://serverfault.com/questions/434064", "https://serverfault.com", "https://serverfault.com/users/137232/" ] }
434,321
As root, I added some scripts inside /etc/profile.d to execute at startup time. But when will these scripts be executed if I log into the system as a non-root user? I want to start the LDAP server at start-up time, independently of which user logs in first. I use CentOS 6.3.
Files in /etc/profile.d/ are run when a user logs in (unless you've modified /etc/profile to not do this) and are generally used to set environment variables.
{ "source": [ "https://serverfault.com/questions/434321", "https://serverfault.com", "https://serverfault.com/users/134613/" ] }
434,581
Can anyone tell me why this is happening? I can resolve a hostname using host and/or nslookup but forward lookups do not work with dig; reverse lookups do: musashixxx@box:~$ host someserver someserver.somenet.internal has address 192.168.0.252 musashixxx@box:~$ host 192.168.0.252 252.0.168.192.in-addr.arpa domain name pointer someserver.somenet.internal. musashixxx@box:~$ nslookup someserver Server: 192.168.0.253 Address: 192.168.0.253#53 Name: someserver.somenet.internal Address: 192.168.0.252 musashixxx@box:~$ nslookup 192.168.0.252 Server: 192.168.0.253 Address: 192.168.0.253#53 252.0.168.192.in-addr.arpa name = someserver.somenet.internal. musashixxx@box:~$ dig someserver ; <<>> DiG 9.8.1-P1 <<>> someserver ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 55306 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;someserver. IN A ;; Query time: 0 msec ;; SERVER: 192.168.0.253#53(192.168.0.253) ;; WHEN: Wed Oct 3 15:47:38 2012 ;; MSG SIZE rcvd: 27 musashixxx@box:~$ dig -x 192.168.0.252 ; <<>> DiG 9.8.1-P1 <<>> -x 192.168.0.252 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28126 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;252.0.168.192.in-addr.arpa. IN PTR ;; ANSWER SECTION: 252.0.168.192.in-addr.arpa. 3600 IN PTR someserver.somenet.internal. ;; Query time: 0 msec ;; SERVER: 192.168.0.253#53(192.168.0.253) ;; WHEN: Wed Oct 3 15:49:11 2012 ;; MSG SIZE rcvd: 86 Here's what my resolv.conf looks like: nameserver 192.168.0.253 search somenet.internal Is this behavior normal? Any thoughts?
It's the default behaviour of dig not to use the search-option. From the manual page: +[no]search Use [do not use] the search list defined by the searchlist or domain directive in resolv.conf (if any). The search list is not used by default. Edit: Just add +search to make it work, like dig +search myhost .
{ "source": [ "https://serverfault.com/questions/434581", "https://serverfault.com", "https://serverfault.com/users/88383/" ] }
434,703
I have a very strange problem with my emails being marked as spam by hotmail. I just have configured Postfix + Dovecot on my server and all works perfectly. I can Send/Receive emails. I only have problems with hotmail accounts, I do not understand the reason, because I also configured: SPF DKIM rDNS My IP is not listed in any backlist, I used: mxtoolbox.com Checking the headers I see that SPF and DKIM pass correctly. I have no problem with GMAIL, YAHOO, and other, but hotmail seems very strict. The only problem I think... could be that my IP had no email traffic yet. I've sent very few emails to hotmail. So, if postfix has no problem, what do I have to do to send emails to hotmail correctly? Because if the only reason is that I had no email traffic yet it means that my first newsletters will be tag as SPAM without no reason. Advice? (An example of email received as SPAM is below) HEADERS: x-store-info:4r51+eLowCe79NzwdU2kRwMf1FfZT+JrxVyutn/pLjoZiDggbl3J7aHGkQoNPd8ZB9iY77nKNhzoKkbFqj2wPQ4Ha91HUDyzG+BsQ2lzn+x/xsXGuDBWhAPIPgrYY3dCiWYILdpiCyM= Authentication-Results: hotmail.com; sender-id=pass (sender IP is 66.85.140.94) [email protected]; dkim=pass header.d=example.net; x-hmca=pass X-SID-PRA: [email protected] X-SID-Result: Pass X-DKIM-Result: Pass X-AUTH-Result: PASS X-Message-Status: n:n X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MjtHRD0yO1NDTD00 X-Message-Info: M98loaK0Lo1j8FOgXol8UFVrP26QMSvVTQXke21+QxXu+DJ5ttCh6cM/eFA+HRgTBFdz52wvmszvfgxVXBCfExvqqIFxcJKaFap8dwTFrYmSiOTK6J40vAbrC+QeYPnMG9Hntes6IFH9T95bydckDQ== Received: from mail.example.net ([66.85.140.94]) by SNT0-MC3-F15.Snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900); Sun, 30 Sep 2012 14:13:33 -0700 Received: from [192.168.1.2] (2-231-150-154.ip207.fastwebnet.it [2.231.150.154]) by mail.example.net (Postfix) with ESMTPA id DD0A3401D9 for <[email protected]>; Sun, 30 Sep 2012 21:13:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.net; s=mail; t=1349039612; bh=qCXqeVFYopgNSxSiqL3ANA5CfkeFw8AlGDFYh/ruUlg=; h=Date:From:To:Subject; b=NIYcYZJ4YitQHGus2ZQV4ErzN+hvFoDWi+M53eJXZSx3o0VamoA8PODMEZlWqvG29 aYQK8DVW140wZ1tmHCvNCIe+KF/FVmRkxtD2aWGVK5OhVNuFv6ldRE7VUDhlPfOvaZ uUqp1QopHJsg8pGDTeifigb58xTa2V4AOac6WY4c= Message-ID: <[email protected]> Date: Sun, 30 Sep 2012 23:13:30 +0200 From: Aziende Mandanti <[email protected]> User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1 MIME-Version: 1.0 To: [email protected] Subject: Registrazione avvenuta con successo Content-Type: text/plain; charset=ISO-8859-15; format=flowed Content-Transfer-Encoding: 8bit Return-Path: [email protected] X-OriginalArrivalTime: 30 Sep 2012 21:13:33.0410 (UTC) FILETIME=[72B24C20:01CD9F50] Gentile Damiano, la registrazione è avvenuta correttamente. Saluti example.net The IP you see in the headers is correct, I only obfuscated the email addresses
Checking the headers I see that SPF and DKIM pass correctly. I have no problem with GMAIL, YAHOO, and other, but hotmail seems very strict. This is correct. Hotmail / outlook.com are insanely strict for .. really no sensible reason at all. You have checked the obvious things: SPF DKIM reverse DNS My IP is not listed in any backlist, I used: mxtoolbox.com The only thing left to do is manually file a request with Microsoft to get your URL listed in their safe senders . I really wish I was kidding, but even after triple checking all our mail settings (same as your above bulleted list), testing successfully on every other mail provider under the sun, etcetera, we had to file a manual Hotmail inclusion request in 2010 before email from Stack Overflow, Super User, Server Fault et al would arrive to Hotmail / outlook.com users. As you can see on Microsoft's Postmaster Troubleshooting page: IPs not previously used to send email typically don’t have any reputation built up in our systems. As a result, emails from new IPs are more likely to experience deliverability issues. Once the IP has built a reputation for not sending spam, Outlook will typically allow for a better email delivery experience. The Improving E-mail Deliverability into Windows Live Hotmail (pdf) document describes this troubleshooting for the "Your e-mail is being delivered to the Junk e-mail Folder" scenario: Too many recipients reported your previous e-mails as spam Too much of your mail is sent to invalid or inactive e-mail addresses Your SenderID record is incorrect or missing None of which applies here to a new mailer anyway, and SenderID / SPF was already checked as valid. So this begs the question, how exactly do you get positive email reputation when all your emails go into the spam folder on day zero? The only way we could get it to work is to .. file for a manual inclusion request. If your email complies with our policies and guidelines and you are still experiencing email delivery problems that are not addressed in the FAQ below, click here to contact support. Once I did this I got a deliverability email which looks automated, but that's good in this case: This mail is to confirm that the IP(s) listed below are being investigated by our automated system. Please note that your ticket number is in the subject line of this mail. 192.168.1.1 Note: Errors are unlikely, however, if an error is indicated, please resubmit the specific IP or IP range. Thank you, Hotmail Deliverability Support Service Additionally, Microsoft recommends that in addition, to adding your new IPs to existing Sender ID records, don’t forget to update your Junk Email Reporting Program (JMRP) account with the new IPs as well. To update or set up a JMRP account, click here .
{ "source": [ "https://serverfault.com/questions/434703", "https://serverfault.com", "https://serverfault.com/users/94979/" ] }
434,717
I am using a .bat file to create a user and password at the Windows operating system level. The issue I am facing is that when I pass EXPIRES:NEVER for the password, the created user doesn't have the "Password never expires" checkbox checked (meaning the password-never-expires setting is not applied for that created user) and the user expires automatically after 90 days. Net User %1 %2 /COMMENT:"%3" /EXPIRES:NEVER /PASSWORDCHG:NO /ADD The above is the main line of code; I pass the user name and password from a text file and run the .bat file.
Add this line to the batch file: WMIC USERACCOUNT WHERE "Name='%1'" SET PasswordExpires=FALSE
{ "source": [ "https://serverfault.com/questions/434717", "https://serverfault.com", "https://serverfault.com/users/61313/" ] }
435,132
On CentOS exists the yum versionlock option, where you can lock a package to a specific version, so it is never upgraded past that. I would like that puppet-server-2.7.19-1 puppet-2.7.19-1 stays on 2.7, and never upgraded to 3.0. Puppet Labs have released 3.0 and put it into the stable repo, so 2.7 will get upgraded to 3.0, which is not backwards compatible. Does Ubuntu have something similar to yum versionlock ?
You can create a file in /etc/apt/preferences and pin packages' version. The format for the file would be somewhat like this: Package: puppet-server Pin: version 2.7* Pin-Priority: 550 See also: Debian documentation | Apt Howto Debian Wiki | Apt preferences manpage of apt_preferences
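To double-check that the pin is actually being honoured, or to hold the installed version outright, something like the following should work (a sketch — the package names follow the example above, and on Ubuntu the master package may be called puppetmaster rather than puppet-server): apt-cache policy puppet puppet-server    # the 2.7.* version should show up as the candidate apt-mark hold puppet puppet-server       # hold the currently installed version # on releases without "apt-mark hold": echo "puppet hold" | dpkg --set-selections echo "puppet-server hold" | dpkg --set-selections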
{ "source": [ "https://serverfault.com/questions/435132", "https://serverfault.com", "https://serverfault.com/users/34187/" ] }
435,256
Picture a scenario where I'm logged into a server (which we'll call "Wallace") from my local machine, and from there I ssh into another server (which we'll call "Gromit"): laptop ---ssh---> Wallace ---ssh---> Gromit Then the ssh session from Wallace to Gromit hangs, and I want to kill it. If I enter ~. to kill ssh, it kills the ssh session from my laptop to Wallace, because the ~ is intercepted by that ssh session, and the . is taken as a command to kill the session. How do I send a command to the ssh session between Wallace and Gromit? How do I kill my "inner" ssh?
Add another tilde (ie, type ~~. ). Each successive tilde is eaten by the outermost ssh session which hasn't yet eaten one, but if the next character is another tilde, it's passed along to the next session in. If, from gromit 1 , you ssh'ed to a third host (let's call it wensleydale), then ~~~. would drop the session to wensleydale and return you to a prompt on gromit. 1 And what a great server that is; how often have I heard a developer remark "cracking host, gromit"?
{ "source": [ "https://serverfault.com/questions/435256", "https://serverfault.com", "https://serverfault.com/users/40159/" ] }
435,827
What is the difference between Buckets and Folders in Amazon S3 ? Is such a thing like Folder exist in Amazon S3 ? or only the S3 clients present Folders to us for better handling ?
Directories don't actually exist within S3 buckets. The entire file structure is actually just one flat single-level container of files. The illusion of directories are actually created based on naming the files names like dirA/dirB/file . Certain S3 tools (Firefox S3 organizer, s3fs, etc.) have taken the extra step of introducing proprietary metadata files to simulate directory nodes for usage in making the tool operate more intuitively. But the bottom line is there are no real sub-directories in a bucket.
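For illustration, listing objects with the AWS CLI (the bucket and key names here are made up) shows that flat key structure directly: aws s3api list-objects --bucket my-bucket --prefix dirA/dirB/ --query 'Contents[].Key' # returns full keys such as "dirA/dirB/file"; there is no separate object for "dirA/" unless some tool explicitly created a placeholder key with that name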
{ "source": [ "https://serverfault.com/questions/435827", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
436,073
I have a Linux server with many 2 TB disks, all currently in a LVM resulting in about 10 TB of space. I use all this space on an ext4 partition, and currently have about 8,8 TB of data. Problem is, I often get errors on my disks, and even if I replace (that is to say, I copy the old disk to a new one with dd then i put the new one in the server) them as soon as errors appear, I often get about 100 MB of corrupted data on it. That makes e2fsck go crazy everytime, and it often takes a week to get the ext4 filesystem in a sane state again. So the question is : What would you recommend me to use as a filesystem on my LVM ? Or what would you recommend me to do instead (I don't really need the LVM) ? Profile of my filesystem : many folder of different total sizes (some totalling 2 TB, some totalling 100 MB) almost 200,000 files with different sizes (3/4 of them about 10 MB, 1/4 between 100 MB and 4 GB; I can't currently get more statistics on files as my ext4 partition is completely wrecked up for some days) many reads but few writes and I need fault tolerance (I stopped using mdadm RAID because it doesn't like having ONE error on the whole disk, and I sometimes have failing disks, that I replace as soon as I can, but that means I can get corrupted data on my filesystem) The major problem are failing disks; I can lose some files, but I can't afford lose everything at the same time. If I continue to use ext4, I heard that I should best try to make smaller filesystems and "merge" them somehow, but I don't know how. I heard btrfs would be nice, but I can't find any clue as to how it manages losing a part of a disk (or a whole disk), when data is NOT replicated ( mkfs.btrfs -d single ?). Any advice on the question will be welcome, thanks in advance !
It's not a file system problem, it's the disks' physical limitations. Here's some data: SATA drives are commonly specified with an unrecoverable read error rate (URE) of 1 in 10^14 bits. That means that roughly 1 unreadable bit per 12TB read will be unrecoverably lost even if the disks work fine. This means that with no RAID you will lose data even if no drive fails - RAID is your only option. If you choose RAID5 (total capacity n-1, where n = number of disks) it's still not enough. With a 10TB RAID5 consisting of 6 x 2TB HDDs you will have a 20% chance of one drive failure per year, and with a single disk failing, due to URE you'll have only about a 50% chance of successfully rebuilding the RAID5 and recovering 100% of your data. Basically, with the high capacity of disks and the relatively high URE you need RAID6 to be secure even against a single disk failure. Read this: http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
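A rough back-of-envelope behind those figures, assuming one unrecoverable read event per 10^14 bits and independent errors: 10^14 bits / 8 ≈ 1.25 x 10^13 bytes ≈ 12.5 TB read per expected URE; rebuilding a 10 TB array reads ≈ 8 x 10^13 bits, so the expected number of UREs is 8 x 10^13 / 10^14 = 0.8, and P(no URE during the rebuild) ≈ e^-0.8 ≈ 0.45 — roughly the 50% quoted above.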
{ "source": [ "https://serverfault.com/questions/436073", "https://serverfault.com", "https://serverfault.com/users/140138/" ] }
436,082
I have installed the pure-ftpd package with PureFTP 1.0.24 on Ubuntu 10.04 using apt-get. Even though, this is the default port range, I've added the file /etc/pure-ftpd/conf/PassivePortRange containing: 30000 50000 This does add the correct option to the command as it is run ( -p 30000:50000 ), but for some reason, I still get connections trying to use ports above 50000. I think the problem is that these are active ftp sessions, but what's the point of specifying a port range if it only works for passive mode? Then I still need to open all the ports in my firewall... Is there a way to specify a port range for all connections (rather than just passive ones)?
It's not file system problem, it's disks' physical limitations. Here's some data: SATA drives are commonly specified with an unrecoverable read error rate (URE) of 10^14. That means that 1 byte per 12TB will be unrecoverably lost even if disks work fine. This means that with no RAID you will lose data even if no drive fails - RAID is your only option. If you choose RAID5 (total capacity n-1, where n = number of disks) it's still not enough. With 10TB RAID5 consisting of 6 x 2TB HDD you will have a 20% chance of one drive failure per year and with a single disk failing, due to URE you'll have 50% chance of successfully rebuilding RAID5 and recovering 100% of your data. Basically with the high capacity of disks and relatively high URE you need RAID6 to be secure even again single disk failure. Read this: http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
{ "source": [ "https://serverfault.com/questions/436082", "https://serverfault.com", "https://serverfault.com/users/84889/" ] }
436,327
I've used greylisting on my servers for many years, but I don't know how effective it is nowadays. Is it still good for fighting spam in 2012? Or is the typical spammer MTA capable of resending greylisted emails now?
I last looked at this quantitatively in July of this year (2012). In July, my mailserver received about 46,000 attempts to deliver mail; of those, about 1,750 returned and were permitted through by the greylisting (and passed valid sender domain, SPF and some other non-content-based tests). Of those, about another 1,500 were filtered by my content-based filtering.. Assuming that those 44,250 emails were spam (since they couldn't pass greylisting, I think that's a fair assumption), if it were not for the greylisting my content-based filtering would have had to deal with 46,000 mails instead of 1,750. A twenty-five-fold increase in load on my content-based filtering would require me to have much beefier CPUs and more memory. That would in turn increase my monthly hosting costs, because of the extra power consumption (and, probably, the size of the server). So in short, the last time I counted, yes, greylisting still made very, very good sense as part of a complete spam-filtering system . I have activated it for clients in the past few weeks, and all are extremely happy with the decrease in load on their content-based filtering systems also. Edit : I note that I haven't answered the question about whether it's becoming less effective over time. When I turned it on, in late 2006, my estimate at that time was that it was filtering out about 95% of the spam. 1,750 as a proportion of 46,000 is about 4%, so my data suggest that it's not become less effective over that time period.
{ "source": [ "https://serverfault.com/questions/436327", "https://serverfault.com", "https://serverfault.com/users/13551/" ] }
436,648
I know that the command ec2-create-image instance-id will create an image of the EC2 instance, create the snapshot files and register it as an AMI. But what is the equivalent command to delete the image, which would delete the associated snapshot files and deregister the AMI?
Updated answer from the aws docs: Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/ . In the navigation bar, verify your region. In the navigation panel, click AMIs. Select the AMI, click Actions, and then click Deregister. When prompted for confirmation, click Continue. In the navigation pane, click Snapshots. Select the snapshot, click Actions, and then click Delete. When prompted for confirmation, click Yes, Delete. Hope this help anyone like me! :D
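Since the question asked for a command, the command-line equivalent is roughly the following (the AMI and snapshot IDs are placeholders — look up the snapshot ID(s) belonging to the AMI first): # old ec2-api-tools (the toolset ec2-create-image comes from): ec2-deregister ami-xxxxxxxx ec2-delete-snapshot snap-xxxxxxxx # newer AWS CLI: aws ec2 describe-images --image-ids ami-xxxxxxxx   # note the snapshot ID(s) in the block device mappings aws ec2 deregister-image --image-id ami-xxxxxxxx aws ec2 delete-snapshot --snapshot-id snap-xxxxxxxx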
{ "source": [ "https://serverfault.com/questions/436648", "https://serverfault.com", "https://serverfault.com/users/111399/" ] }
436,654
I'm developing a small website (Magento eshop) on own Debian virtual server. I'm not very advanced in this topic. Server now works fine, the last remaining problem is SSL access via https protocol. When I access the server via local IP address https://192.168.1.xxx , it works. But when I access it via https://www.mydomain.com , is server unavailable. However http:/ /192.168.1.xxx and http:/ /www.mydomain.com works well. What can be the problem? My config files: ports.conf NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> sites-enabled/magento NameVirtualHost 192.168.1.124:80 <VirtualHost 192.168.1.124:80> ServerName www.mydomain.de ServerAdmin [email protected] DocumentRoot /var/www/magento <Directory /var/www/magento/> Options FollowSymLinks AllowOverride All </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined </VirtualHost> sites-enabled/magento_ssl NameVirtualHost *:443 <VirtualHost *:443> ServerAdmin [email protected] ServerName www.mydomain.de ServerAlias *.mydomain.de SSLEngine On SSLCertificateFile /etc/apache2/ssl/edc.pem DocumentRoot /var/www/magento <Directory /var/www/magento> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> LogLevel warn ErrorLog /var/log/apache2/ssl-error.log CustomLog /var/log/apache2/ssl-access.log combined </VirtualHost>
Updated answer from the aws docs: Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/ . In the navigation bar, verify your region. In the navigation panel, click AMIs. Select the AMI, click Actions, and then click Deregister. When prompted for confirmation, click Continue. In the navigation pane, click Snapshots. Select the snapshot, click Actions, and then click Delete. When prompted for confirmation, click Yes, Delete. Hope this help anyone like me! :D
{ "source": [ "https://serverfault.com/questions/436654", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
437,342
I have a user named hedgehog and I want him to be named squirrel , but I don't want to change his numeric user ID. How can I accomplish this?
Under Linux, the usermod command changes user names. It modifies the system account files to reflect the changes that are specified on the command line. To change just the username: usermod --login new_username old_username To change the username and home directory name: usermod --login new_username --move-home --home path_to_the_new_home_dir old_username You may also want to change the name of the group associated with the user: groupmod --new-name new_username old_username
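Applied to the names from the question (run as root, and make sure the user has no processes running first): usermod --login squirrel hedgehog groupmod --new-name squirrel hedgehog id squirrel    # the numeric uid/gid should be unchanged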
{ "source": [ "https://serverfault.com/questions/437342", "https://serverfault.com", "https://serverfault.com/users/68608/" ] }
438,475
I have 2 servers, each in two separate locations. I need to host an application on one, and the database server on the other. From the app server, if I ping the database server, on average I get about 30ms. My question is: When I query the database from the app; Is it going to take 30 ms + database_server_query_run_time Or; Is it going to take 30 ms + database_server_query_run_time + 30ms I would like to understand this please.
It will usually take more than those two options. Ping measures just the time from client to server and back again (RTT - round trip time). Usually databases use TCP, so you first need to send a SYN packet to start the TCP handshake (to simplify, let's say 15ms* + CPU time), then you receive a SYN/ACK (15ms + CPU time), send back an ACK and a request (at least 15ms + CPU time), then the time for the DB to process the query, and then the time (15ms + CPU) to get the data back, and a bit more to ACK and close the connection. This is of course not counting the authentication (username/password) to the database, and no encryption (SSL handshakes/DH or whatever is needed). *half of a round trip time, assuming the route there and back is symmetrical (half the time to get there, and half to get back... CPU processing time for the ping reply is very short)
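Putting rough numbers on that with the 30 ms round trip from the question (ignoring CPU time and assuming no TLS): a new TCP connection costs ~1 RTT for SYN / SYN-ACK / ACK plus ~1 RTT for query / response, so ≈ 60 ms + database_server_query_run_time; a reused (pooled/persistent) connection costs ≈ 30 ms + database_server_query_run_time. Authentication and SSL on a fresh connection each add further round trips on top of that.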
{ "source": [ "https://serverfault.com/questions/438475", "https://serverfault.com", "https://serverfault.com/users/120023/" ] }
438,907
As I understand, a 10Gb Ethernet card is capable of putting 10Gb every second on (say) a fibre optics cable. Now naively, for this to happen in hardware, one will need a 10GHz clock running the network card. It is possible to half that frequency by clocking on both edges, but 5GHz is still awefully high for transistors to support. For 100Gb Ethernet, 50GHz seems completely unreasonable. What is the clock frequency of clocks running (say) a 10Gb Ethernet card? Are there tricks used to cut down this frequency from the "naive" 10GHz frequency?
You are correct that frequencies that high would be completely unmanageable. Sending one bit per frequency would cause problems for various types of radio transmissions as well. So we have modulation techniques which allow more than one bit to be send. A touch of terminology: baud, most people will remember that term from the days of telephone modems, is the symbol rate at which a communications medium is operating. A symbol can contain more than one bit, so sending multibit symbols allows higher throughput at lower frequencies. 10MbE (10Base-T) used a very simple inverted Manchester encoding, 10 Mbaud, and a single -2.5v/2.5v differential pair for communications in each direction. 100MbE (100Base-TX) used 4B/5B encoding, 125 Mbaud, and a single -1.0/1.0v differential pair for communication in each direction. So 4/5b * 125 MHz = 100Mb in each direction. 1GbE (1000Base-T) uses PAM-5 TCM, the same 125 Mbaud as 100MbE, all four -1.0/1.0v differential pairs for communication in both directions at the same time. The PAM-5 coding allows for 5 states, but the trellis modulation limits each end to 2 at any given time, so 2 bits are sent in each symbol. Thus 125M/s * 4 * 2b = 1Gbps. Side notes: 1GbE uses only a single pair to negotiate the initial connection. If a cable has only this pair working it can lead to an unresponsive NIC that seems to connect. Also, almost all new NICs can negotiate on any of the 4 pairs, thus enabling auto MDI/MDI-X (but this is not a requirement of the spec). 1000Base-T requires Cat5e cabling. 1000Base-TX simplified NICs, but required Cat6 cable; it never got off the ground for various reasons. 10GbE uses PAM-16 DSQ128 coding, 833 Mbaud, 4 pairs as before. The new PAM-16 DSQ-128 with LDPC error correction is sufficiently complicated that I will not try to explain how it works here other than to say it effectively sends 3 bits of information per symbol even over cabling rated for only 500MHz (or less in some circumstances). Thus 833.3 MHz * 4 * 3b = 10Gbps. Side notes: 10GbE requires Cat6a cabling for 100m operation, Cat6 for 55m, and may work with Cat5e for very short cables. Cabling other than Cat6a should be discouraged because of the variation from the 100m standard length. Also, older NICs didn't have the gain necessary to send 10GbE over 100m distances and were limited to shorter cables - see manufacturer for details if you have a first generation 10GbE NIC. 25GbE and 40GbE have a proposed draft standard 802.3bq D3.3 (2016). It has not been updated in almost 6 years. It would allow for 25GBase-T and 40GBase-T operation over 4 Pair Category 8 wire up to 30m. I do not have a copy of the draft, so do not know the specifics. Side notes: Two previous 40GBase-T proposals exist. The first uses the same techniques as 10Gbase-T, but 4x faster, and requiring cabling certified for ~1600MHz. The second uses PAM-32 DSQ-512 and requires cabling at ~1200MHz (the higher complexity would mean relatively expensive NICs). Both are likely to use LDPC to allow the use of slightly underrated cabling. 100GbE has no draft copper standards at this time. Connectors: 100GbE will not use the C8P8 (colloquially RJ-45) connector, but likely a variation of it called GG45, with the 4 pairs at the 4 corners of the connector. There is also an intermediate connector, the ARJ45-HD with pins for both 10MbE-10GbE (RJ-45) and 40GbE-100GbE (GG45). TERA is a competing connector rated for 1000 MHz, it seems unlikely to become the new standard. 
Cabling: Cat7 and Cat7a are cabling standards rated for 600 MHz and 1200 MHz. They were originally called CatF and CatFa. Cat8.1 and Cat8.2 have been proposed with ratings for 1600 and 2000 MHz. There is some debate as to whether there will be a 100GBase-T standard as, with current technology, Cat7a, Cat8.1 and Cat8.2 will only carry such connections 10m, 30m, and 50m respective. Cat7a and up are already dramatically different cables from Cat6a and below, requiring shielding around both individual pairs and the cable as a whole. The testing that suggests these connections are possible does not demonstrate a commercially viable implementation either. There is reasonable speculation that more advanced/sensitive circuits could carry 100GbE at some point in the future, but it's only speculation. Worth mentioning: 10GBase-R, 40GBase-R, and 100GBase-R are a family of fiber specifications for 10, 40, and 100GbE which have all been standardized. These are all available in Short (-SR, 400m), Long (-LR, 10km), Extended (-ER, 40km), Proprietary (-ZR, 80km), and EPON/x (-PR/x, 20km) ranges. They all use a common 64b/66b encoding, 10.3125 GBaud, and simple use more "lanes" for additional capacity (1, 4, and 10 respectively) - lanes being different wavelengths of light on the same fiber cable. A 200GBase proprietary implementation is working it's way to standardization, though with modulated DWDM frequencies and ranges up to 2Mm.
{ "source": [ "https://serverfault.com/questions/438907", "https://serverfault.com", "https://serverfault.com/users/141268/" ] }
439,128
I have a disk, say /dev/sda. Here is fdisk -l: Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0000e4b5 Device Boot Start End Blocks Id System /dev/sda1 * 1 27 209920 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 27 525 4000768 5 Extended Partition 2 does not end on cylinder boundary. /dev/sda5 27 353 2621440 83 Linux /dev/sda6 353 405 416768 83 Linux /dev/sda7 405 490 675840 83 Linux /dev/sda8 490 525 282624 83 Linux I need to make an image to store on our file server for use in flashing other devices we are manufacturing so I only want the used space (only about 4gb). I want to keep the mbr etc... as this device should be boot ready as soon as the copy is finished. Any ideas? I previously had been using dd if=/dev/sda of=[//fileserver/file] , but at that time, my master copy was on a 4gb flash ide.
Back in the day I ran into a similar problem with embedded Linux distributions - get rid of all the junk before compressing the image. dd if=/dev/zero of=asdf.txt . Wait until it dies. Delete asdf.txt. You've just written zeros to all free space on the device. Now take a disk image and run it through gzip. Voila, sparse image. Probably doesn't scale very well and could cause problems if you actually need to write to the disk, but hey. You could take an rsync snapshot of the disk to another volume, zero that, and then take that disk image. Note: Could be hazardous for SSDs, the user should consider this operation before committing.
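A sketch of the whole flow — device, mount point and file names are placeholders, so double-check the device name before running dd: # inside the running system: fill the free space with zeros, then remove the filler dd if=/dev/zero of=/zero.fill bs=1M ; sync ; rm -f /zero.fill # from a rescue/live environment: take a compressed image of the whole disk dd if=/dev/sda bs=1M | gzip -c > /mnt/fileserver/sda.img.gz # restore later with: gunzip -c /mnt/fileserver/sda.img.gz | dd of=/dev/sdX bs=1M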
{ "source": [ "https://serverfault.com/questions/439128", "https://serverfault.com", "https://serverfault.com/users/96087/" ] }
439,129
I'd like to mention that im really new to this so please bear with me. I'm trying to setup a forum software to send emails via postfix but I think my server has the port 25 blocked. I tried running these: works: ping alt2.gmail-smtp-in.l.google.com don't work: telnet alt2.gmail-smtp-in.l.google.com 25 telnet 66.249.93.114 25 tried flushing iptables and then using these rules but didn't work either: sudo iptables --flush sudo iptables -P INPUT ACCEPT sudo iptables -P OUTPUT ACCEPT sudo iptables -P FORWARD ACCEPT sudo iptables -F sudo iptables -X doing a telnet on 25 port to localhost url works but nothing when telnet'ing in none local urls. mail.log: Oct 17 01:20:24 webhost postfix/smtp[3642]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out Oct 17 01:20:24 webhost postfix/smtp[3643]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out Oct 17 01:20:24 webhost postfix/smtp[3642]: 4744380032: to=<[email protected]>, relay=none, delay=2892, delays=2741/0.03/150/0, dsn=4.4.1, status=deferred (connect to alt2.gmail-smtp-in.l.google.com[2607:f$
Back in the day I ran into a similar problem with embedded Linux distributions - get rid of all the junk before compressing the image. dd if=/dev/zero of=asdf.txt . Wait until it dies. Delete asdf.txt. You've just written zeros to all free space on the device. Now take a disk image and run it through gzip. Voila, sparse image. Probably doesn't scale very well and could cause problems if you actually need to write to the disk, but hey. You could take an rsync snapshot of the disk to another volume, zero that, and then take that disk image. Note: Could be hazardous for SSD, user should consider this operation befor committing.
{ "source": [ "https://serverfault.com/questions/439129", "https://serverfault.com", "https://serverfault.com/users/132975/" ] }
439,137
We have a Windows 2003 server DC1 which is our primary DC holding all FSMO roles. It also is a DNS server for our domain domain.local which is an active directory integrated zone. We also have a Windows 2008 DC name DC2 which is a DNS server with ad integrated zone for domain.local. Both zones are set to "replication: All domain controllers in this domain (for windows 2000 compatibility)" All servers have the correct DNS entries etc. However in all dns servers dns event log there are event id 4515 indicating there are duplicate zones in separate directory partitions and only one will be used until the other is removed. And I see these, there is a zone for domain.local under the default naming partition CN=System, CN=MicrosoftDNS, DC=domain.local. As well as the DomainDNSZones partition DC=DomainDNSZones, DC=DOMAIN, DC=local, CN=MicrosoftDNS It seems that the partition in the Default Naming partition is the one which is being used currently. Which one should be in use? How do I make the EventID 4515's go away? How could this have happened? EventID 4515: http://support.microsoft.com/kb/867464
Back in the day I ran into a similar problem with embedded Linux distributions - get rid of all the junk before compressing the image. dd if=/dev/zero of=asdf.txt . Wait until it dies. Delete asdf.txt. You've just written zeros to all free space on the device. Now take a disk image and run it through gzip. Voila, sparse image. Probably doesn't scale very well and could cause problems if you actually need to write to the disk, but hey. You could take an rsync snapshot of the disk to another volume, zero that, and then take that disk image. Note: Could be hazardous for SSD, user should consider this operation befor committing.
{ "source": [ "https://serverfault.com/questions/439137", "https://serverfault.com", "https://serverfault.com/users/121341/" ] }
439,818
We are studying implementing some virtualized servers here, but we don't know what will be better suitable for us. Some folks are saying better have two huge servers, and others are saying have like a ten middle-end servers. We have a legacy Visual Foxpro application, which nowadays run on Dual Xeon E5405 @ 2GHz and 16Gb of RAM. The currently server its getting too slow due the number of active users and process running on it. Virtualizing this server will give us the benefit of an faster disaster recovery. So the question is, having like ten physical servers running at 1.7GHz and 4Gb of RAM, we could virtualize one server into 4 machines, and have one virtualized server running at 6.8GHz and 16Gb of memory? If yes, there is some how if one machine stops, automatically manage this virtual machine to another one, and execute the appropriate maintenance on it, and later back to it again?
Yes, you can combine multiple x86 machines into a larger virtual x86 machine, with ScaleMP . Compatible with Xen and KVM Hypervisors, you can then create VMs that will span multiple physical machines. You could then run a large windows VM within your Xen or KVM hypervisor on top of your ScaleMP cluster. Here's a write up that's a bit easier to read than their website: http://www.readwriteweb.com/solution-series/2011/10/cost-effective-clustering-with.php
{ "source": [ "https://serverfault.com/questions/439818", "https://serverfault.com", "https://serverfault.com/users/125276/" ] }
439,848
I am evaluating a system for a client where many OpenVPN clients connect to a OpenVPN server. "Many" means 50000 - 1000000. Why do I do that? The clients are distributed embedded systems, each sitting behind the system owners dsl router. The server needs to be able to send commands to the clients. My first naive approach is to make the clients connect to the server via an openvpn network. This way, the secure communication tunnel can be used in both directions. This means that all clients are always connected to the server. There are many clients summing up over the years. The question is: does the OpenVPN server explode when reaching a certain number of clients? I am already aware of a maximum TCP connection number limit, therefore (and for other reasons) the VPN would have to use UDP transport. OpenVPN gurus, what is your opinion?
I doubt that a setup that large has ever been attempted before, so you likely will be pushing limits when trying. I could find an article on a VPN deployment for 400 clients but judging from the text, the author just relied on rough estimates about how many clients could be run per CPU and lacked some understanding about how his setup would perform. You would mainly need to consider these two points: The bandwidth your data transfers are going to use would need encryption / decryption at the VPN server side, consuming CPU resources OpenVPN client connections consume both, memory and CPU resources on the server even when no data is transferred Any decent PC hardware available today should easily saturate a Gigabit link with Blowfish or AES-128, even $100 embedded devices are capable of rates near 100 Mbps , so CPU bottlenecks due to bandwidth intensity should not be of any concern. Given the default rekeying interval of 3600 seconds, a number of 1,000,000 clients would mean that the server would need to be able to complete 278 key exchanges per second on average. While a key exchange is a rather CPU-intensive task, you could offload it to dedicated hardware if needed - cryptographic accelerator cards available easily meet and exceed this number of TLS handshakes. And memory restrictions should not bother too much as well - a 64-bit binary should take care of any virtual memory restrictions you would be likely to hit otherwise. But the real beauty with OpenVPN is that you can scale it out quite easily - simply set up an arbitrary number of OpenVPN servers and make sure your clients are using them (e.g. through DNS round-robin), configure a dynamic routing protocol of your choice (typically this would be RIP due to its simplicity) and your infrastructure would be capable of supporting an arbitrary number of clients as long as you've got enough hardware.
{ "source": [ "https://serverfault.com/questions/439848", "https://serverfault.com", "https://serverfault.com/users/11877/" ] }
440,088
I need to backup some data with the "p" option on tar command. The problem is the place I'm going to restore this data will have all the same users, but those users may have different IDs. Does that make any difference to tar or will it restore permissions correctly by user name?
Summing up previous answers and adding some important information: When creating archives, tar will always preserve files' user and group ID, unless told otherwise with --owner=NAME , --group=NAME . But still there will always be a user and group associated with each file. GNU tar, and perhaps other versions of tar , also store the user and group names , unless --numeric-owner is used. bsdtar also stores user and group names by default, but support for --numeric-owner option when creating didn't appear until bsdtar 3.0 (note that bsdtar supported the option when extracting for much longer). When extracting as a regular user , all files will always be owned by the user. And it can't be different, since extracting a file is creating a new file on the filesystem, and a regular user cannot create a file and give ownership to someone else. When extracting as root , tar will by default restore ownership of extracted files, unless --no-same-owner is used, which will give ownership to root himself. In GNU tar, bsdtar, and perhaps other versions of tar , restored ownership is done by user (and group) name , if that information is in the archive and there is a matching user in the destination system. Otherwise, it restores by ID. If --numeric-owner option is provided, user and group names are ignored. Permissions and timestamps are also saved to the archive, and restored by default, unless options --no-same-permissions and/or --touch are used. When extracted by the user, user's umask is subtracted from permissions unless --same-permissions is used. --preserve-permissions and --same-permissions are aliases, and have the same functionality as -p Hope this helps clarify the issue! :)
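As a concrete illustration of those rules (paths are placeholders): # create: tar records owner, group and mode; GNU tar stores both names and numeric ids tar -cf backup.tar /srv/data # extract as root: ownership is restored by user/group *name* where a matching account exists tar -xpf backup.tar # extract as root, but force the stored numeric ids to be used instead of the names tar --numeric-owner -xpf backup.tar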
{ "source": [ "https://serverfault.com/questions/440088", "https://serverfault.com", "https://serverfault.com/users/111346/" ] }
440,159
I assume these are some sort of bots, but would like to know what are they trying to do to my server. The logs in questions are below and the IP address has been changed from the original. 12.34.56.78 - - [18/Oct/2012:16:48:20 +0100] "\x86L\xED\x0C\xB0\x01|\x80Z\xBF\x7F\xBE\xBE" 400 172 "-" "-" 12.34.56.78 - - [18/Oct/2012:16:50:28 +0100] "\x84K\x1D#Z\x99\xA0\xFA0\xDC\xC8_\xF3\xAB1\xE2\x86%4xZ\x04\xA3)\xBCN\x92r*\xAAN\x5CF\x94S\xE3\xAF\x96r]j\xAA\xC1Y_\xAE\xF0p\xE5\xBAQiz\x14\x9F\x92\x0C\xCC\x8Ed\x17N\x08\x05" 400 172 "-" "-" 12.34.56.78 - - [18/Oct/2012:16:58:32 +0100] "g\x82-\x9A\xB8\xF0\xFA\xF4\xAD8\xBA\x8FP\xAD\x0B0\xD3\xB2\xD2\x1D\xFF=\xAB\xDEC\xD5\xCB\x0B*Z^\x187\x9C\xB6\xA6V\xB8-D_\xFE" 400 172 "-" "-" 12.34.56.78 - - [18/Oct/2012:17:06:59 +0100] "\xA61[\xB5\x02*\xCA\xB6\xC6\xDB\x92#o.\xF4Kj'H\xFD>\x0E\x15\x0E\x90\xDF\xD0R>'\xB8A\xAF\xA3\x13\xB3c\xACI\xA0\xAA\xA7\x9C\xCE\xA3\x92\x85\xDA\xAD1\x08\x07\xFC\xBB\x0B\x95\xA8Z\xCA\xA1\xE0\x88\xAEP" 400 172 "-" "-" 12.34.56.78 - - [18/Oct/2012:17:13:53 +0100] "b\xC4\xA24Z\xA2\x95\xEFc\xAF\xF1\x93\xE8\x81\xFD\xB4\xDEo\x92\xC0v\x1Fe\xD8W\x85\xC7O\x9D\x8C\x89<" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:09:56:39 +0100] "\x93d\xD8\x85\xD3f\x182\x94\x10\xE6y\x06\x7F\xE5\x97\xA8S\x8AfZ\x84\x0C\x0F\xFD\x19d*+\x09%\xEC3EG\xDD:Tn\xDA" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:10:07:10 +0100] ">\x92\xD7\x85\xC2\x5C\xDA\x8CJX\xBE\x87\x01\xBA\x09\xADj\xEDT.\x02z\x0B\xCA\x00\xAC\xDC[_;q\xC15\x17\xE9\x0B\x9F\xDA;\xEC\xDA)\xB8\x91\xA2\xB5P\xE9\x81\xF2\xD5\xD3\xC4\xD3" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:10:09:53 +0100] "\x12\x9E>\xFC\xF4\x07,\x9A\xF5G\xB4\xD0\xD4\xF1\xCB9\x9FRl\xB0\xDB\x84a\x90\x7F{\xB1\xA3\xD9-5\xF8\x94~\xCEm\x87\xEC\xB4\xE2s\xBD\xDB@" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:10:24:49 +0100] "\x98\xCA\xD3\x95|&t\x1Cp\x02\xF7\x88m\x08T\xE7tm\x9E\x04\xFB\x85\xB7\x08\xB3\xA0-Z\x03\xD5O\x98\xC6\x0EK|\xA1" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:10:27:58 +0100] "\x11\xE8.^\x0E\x8B}\x81\xAD\xA3^\x9E\xDFg2?@\xCB\x1Ej\xC7h\xB00\xF0\xDC\x92\x9B@\xFD\xBChB\xBF7tF\x17+W\xFFV\x8F" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:10:40:43 +0100] "Ou\xB3\x89\x8DiB\x82\x9D\xE8?wshxLF'\x0F\xB2o\xF6\xCD\xFC\xC2\x82ck\xC4\xF7\x0F\x01\xBC\x8B\xDA\x93|\xEAL\x81\xED`Rbr\x0F\xC1\xC8T\xDE\x07\x91\xF5|J\x5C\xBD70\x22\xD5\xA5p\xF4\xF4\xAA\xC2\xF2a\x19\xFE" 400 172 "-" "-" 12.34.56.78 - - [19/Oct/2012:10:41:29 +0100] "[8]\xCC\x7F\x1E\xA9\xE6f\xD7<\xA9\x18\xD9\xC0\xD0j~O\x90C\x8D]hVz\x84\x94y]\x95{.\x13m_];W1\x16\xEF\xD6\xE2" 400 172 "-" "-" The above is from the same IP address over a period of time. Any insight into this is appreciated.
You are most likely seeing this because you are making an HTTPS request to an HTTP endpoint. For example, you're sending an HTTPS request to port 80 of your web server instead of 443. As a result, the HTTP endpoint gets a bunch of encrypted data, makes no effort to decrypt it (since HTTP is supposed to be plaintext), and so you get a bunch of gibberish in the log file.
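One way to test that theory (the hostname is a placeholder) is to deliberately speak TLS to the plain-HTTP port and watch the access log: curl -vk https://www.example.com:80/ or echo | openssl s_client -connect www.example.com:80 — both send a TLS handshake to port 80; the server replies with a plain-text 400 and logs the binary handshake bytes as a garbled request line, much like the entries above.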
{ "source": [ "https://serverfault.com/questions/440159", "https://serverfault.com", "https://serverfault.com/users/141806/" ] }
440,160
I tried to setup a cron job in my new CentOS VPS server over SSH. I typed in the command crontab -e and I got the following error message:- -bash: anacrontab: command not found Any idea how I can set up cron job in CentOS? ********* CentOS release 5.8 (Final)
You are most likely seeing this because you are making an HTTPS request to an HTTP endpoint. For example, you're sending an HTTPS request to port 80 of your web server instead of 443. As a result, the HTTP endpoint gets a bunch of encrypted data, makes no effort to decrypt it (since HTTP is supposed to be plaintext), and so you get a bunch of gibberish in the log file.
{ "source": [ "https://serverfault.com/questions/440160", "https://serverfault.com", "https://serverfault.com/users/80696/" ] }
440,169
I am currently having an issue with our Domain Controller network environment which is on Hyper V. Basically the machine that was hosting the Hyper V has crashed. The problem is not with hard drive but with the corrupted hyper v file. We have all the vhds and xml configuration files in separate partition. Plan is to make a fresh install of hyper v and reattach all virtual machine. My question is how to do this so that we can retain same DC network? Please let me know if you need further clarification. Thanks,
You are most likely seeing this because you are making an HTTPS request to an HTTP endpoint. For example, you're sending an HTTPS request to port 80 of your web server instead of 443. As a result, the HTTP endpoint gets a bunch of encrypted data, makes no effort to decrypt it (since HTTP is supposed to be plaintext), and so you get a bunch of gibberish in the log file.
{ "source": [ "https://serverfault.com/questions/440169", "https://serverfault.com", "https://serverfault.com/users/141816/" ] }
440,189
I've read conflicting advice on this issue so thought I'd ask here. Should I be running a scheduled defrag within my VM?
Storage folks refer to VMs as I/O blenders. This is because all of the guest's files are typically inside of a "container" like a VMDK. This VMDK is a single file that contains all other files used by the VM. Consider that an 80GB VMDK might not have all block allocated sequentially on the disk - this is even more likely if you're using thin provisioning. By running a defrag inside of the VM, you're not actually making the files sequential on the physical disk, you're making them sequential inside of the container and that container is likely not sequential on the physical disk. Basically, in a lot of cases it's a waste of time and performance gains are very minimal at best.
{ "source": [ "https://serverfault.com/questions/440189", "https://serverfault.com", "https://serverfault.com/users/139468/" ] }
440,200
How do I get details of a process that is taking RAM? From the top command I have found that MySQL is taking too much RAM. How can I know why MySQL is taking far more RAM than its usual behavior? Are there any commands to get more details of the process other than " ps "?
Storage folks refer to VMs as I/O blenders. This is because all of the guest's files are typically inside of a "container" like a VMDK. This VMDK is a single file that contains all other files used by the VM. Consider that an 80GB VMDK might not have all block allocated sequentially on the disk - this is even more likely if you're using thin provisioning. By running a defrag inside of the VM, you're not actually making the files sequential on the physical disk, you're making them sequential inside of the container and that container is likely not sequential on the physical disk. Basically, in a lot of cases it's a waste of time and performance gains are very minimal at best.
{ "source": [ "https://serverfault.com/questions/440200", "https://serverfault.com", "https://serverfault.com/users/140299/" ] }
440,203
We got some replacement drives from HP PN 454273-001 1TB 7.2k drives. We put them into the msa. It completes rebuild but when we run the hp insight diagnostics tests. It comes back as read write error threshold reached. At first we thought it might be just faulty disk. But we now have received three disks and they all exhibits the same behaviour from different slot. The drives that we received is slightly different. The part number is the same but the sticker got an extra 3G on it and they are HP oem branded disks rather than the standard seagate we get normally. They also don't have the normal HP serial number on it so when I logged a call with HP they had trouble identifying the drive but they eventually found it. Is it a compatibility issue? I think we upgraded the firmware on the msa half a year ago.
Storage folks refer to VMs as I/O blenders. This is because all of the guest's files are typically inside of a "container" like a VMDK. This VMDK is a single file that contains all other files used by the VM. Consider that an 80GB VMDK might not have all block allocated sequentially on the disk - this is even more likely if you're using thin provisioning. By running a defrag inside of the VM, you're not actually making the files sequential on the physical disk, you're making them sequential inside of the container and that container is likely not sequential on the physical disk. Basically, in a lot of cases it's a waste of time and performance gains are very minimal at best.
{ "source": [ "https://serverfault.com/questions/440203", "https://serverfault.com", "https://serverfault.com/users/135490/" ] }
440,285
I've done a fresh install of Ubuntu 12.04LTS, and installed the snmpd and snmp packages. If I type: snmpwalk -m ALL -v2c -c public localhost 1.3 I get swathes of errors, of the form: Cannot adopt OID in SQUID-MIB: cacheClients ::= { cacheProtoAggregateStats 15 } Cannot adopt OID in NET-SNMP-EXTEND-MIB: nsExtendLineIndex ::= { nsExtendOutput2Entry 1 } Cannot adopt OID in NET-SNMP-EXTEND-MIB: nsExtendOutLine ::= { nsExtendOutput2Entry 2 } Cannot adopt OID in UCD-SNMP-MIB: laIndex ::= { laEntry 1 } Cannot adopt OID in UCD-SNMP-MIB: laNames ::= { laEntry 2 } Cannot adopt OID in UCD-SNMP-MIB: laLoad ::= { laEntry 3 } Cannot adopt OID in UCD-SNMP-MIB: laConfig ::= { laEntry 4 } Cannot adopt OID in UCD-SNMP-MIB: laLoadInt ::= { laEntry 5 } Cannot adopt OID in UCD-SNMP-MIB: laLoadFloat ::= { laEntry 6 } Cannot adopt OID in UCD-SNMP-MIB: laErrorFlag ::= { laEntry 100 } Cannot adopt OID in UCD-SNMP-MIB: laErrMessage ::= { laEntry 101 } Cannot adopt OID in NET-SNMP-AGENT-MIB: nsNotifyRestart ::= { netSnmpNotifications 3 } Cannot adopt OID in NET-SNMP-AGENT-MIB: nsNotifyShutdown ::= { netSnmpNotifications 2 } Cannot adopt OID in NET-SNMP-AGENT-MIB: nsNotifyStart ::= { netSnmpNotifications 1 } There a literally hundreds of these. If snmp doesn't even like the distro-included MIBs, what chance to I have of getting my own used? (I get the same form of error with my own MIB, on a different machine, which is why I set up a clean install to test the distro's sanity.) Do other distros have this issue? Is there something obvious that I am overlooking here?
apt-get install snmp-mibs-downloader The above command downloads various non-free MIBs which the free MIBs (included with distro) require to work. There are still a handful of errors, after installing this non-free package, but the snmpwalk now works.
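On a stock Ubuntu install (12.04 in the question) the client config usually also ships with MIB loading switched off, so — assuming the default /etc/snmp/snmp.conf — the full fix looks roughly like: sudo apt-get install snmp-mibs-downloader # comment out the "mibs :" line that suppresses MIB loading, if present sudo sed -i 's/^mibs :/# mibs :/' /etc/snmp/snmp.conf snmpwalk -m ALL -v2c -c public localhost 1.3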
{ "source": [ "https://serverfault.com/questions/440285", "https://serverfault.com", "https://serverfault.com/users/20520/" ] }
440,926
I want to do something like this: watch tail -f | wc -l #=> 43 #=> 56 #=> 61 #=> 44 #=> ... It counts new lines of tail each second / Linux, CentOS. To be clearer, I have got something like this: tail -f /var/log/my_process/*.log | grep error I am reading some error messages, and now I want to count them: how many ~ errors I get in a second. So one line in the log is one error in the process.
I've recently discovered pv, and it's really cool, you could do something like tail -f logfile | pv -i2 -ltr > /dev/null -i2 = count every 2 seconds -l = count lines -t = print time -r = show rate
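Combined with the grep from the question (log path as in the question; --line-buffered keeps the counts updating promptly): tail -f /var/log/my_process/*.log | grep --line-buffered error | pv -i1 -ltr > /dev/null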
{ "source": [ "https://serverfault.com/questions/440926", "https://serverfault.com", "https://serverfault.com/users/45696/" ] }
442,088
In centos how do you answer yes automatically for yum install so that it is an unassisted install?
You can use: yum -y install packagename The "-y" implies "yes".
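To make unattended behaviour the default for every run rather than per command, yum also honours an assumeyes setting in its config: yum -y install packagename # or permanently: echo "assumeyes=1" >> /etc/yum.conf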
{ "source": [ "https://serverfault.com/questions/442088", "https://serverfault.com", "https://serverfault.com/users/141931/" ] }
442,102
On my Apache 2.4.2 server with a standard mod_php Prefork setup these are my server-status results Current Time: Wednesday, 24-Oct-2012 19:36:24 CDT Restart Time: Wednesday, 24-Oct-2012 01:27:30 CDT Parent Server Config. Generation: 1 Parent Server MPM Generation: 0 Server uptime: 18 hours 8 minutes 54 seconds Total accesses: 14304233 - Total Traffic: 342.3 GB CPU Usage: u12584.6 s721.93 cu.66 cs3.43 - 20.4% CPU load 219 requests/sec - 5.4 MB/second - 25.1 kB/request 507 requests currently being processed, 355 idle workers ______KKKKR_K______W_KKC___CKK_K_K_W__CC_KKK_KK._K_K_KK._KKKK_K_ K_____KK_KKKK_K_KK__K___KK_K___K_____CKKK_WK_K_____KCKK__K___K_K K_CK_K_K_____K__KKKK_K__K___K_KK_K_K_KKKCK____________KK_CK__KKK __C_KKKKKKK___CK___C_KKK_K__C__K_CK____KKK__K__K__K_K__KK_CK_K__ _KKKKK_K_W__KK______K___K__W___C_K__K____KKKKKKKK.KKKKKKKCK_K___ _C_KK_K_WK__K_KK__K__RK_KK___K____K_KK_K_K___RKC_KKKK___KKKC_K_W _C_KK_KK__W____KC__KKK__KKK___K___KKK_KK_K_KKW__K_KR_KK_KK__KKK_ R__KKK__KKKKKK__K_KKKKK_K__K_K___KKW_________KK_K___KKK___KK.K_C KKKKKKW_____K__K_KKC_KCKK_K_KK_K__KK__K___K__KK_KK__________KK__ __K___KK_K__K_C_KK_K___KK__KK__K__KCK_K__KK_________K_K_KK__.K__ K_CKK.CCRW__KKKKKKKKKKKC__W____K___KWK_KK_KKC______.K_K_KK_KKKC_ __KKK_W_KCKKK_K_K____CCCK__KC_KKKK_K____K_CK_K____K__K____KKK_KK KK___K_K_K__KW__KCKKKK____WKWK__K_KKRKK__C_K_KK_KK_K__KKCC_K__C_ KK_K___K_KK______K_____CKK_K_______KK_CKCK__KKKKK____K__K..K____ __KKWK_KW__KKK__K_KKK___K_KK_KKK__KK___KK___KK_KK___KK____KKWKKC KK_KKKK_................................` When I switch to a PHP-FPM setup with the Event MPM with no other variables changes, my requests/sec plummet and overall apache response is garbage. Current Time: Wednesday, 24-Oct-2012 19:51:21 CDT Restart Time: Wednesday, 24-Oct-2012 19:48:03 CDT Parent Server Config. Generation: 1 Parent Server MPM Generation: 0 Server uptime: 3 minutes 18 seconds Total accesses: 18720 - Total Traffic: 307.1 MB CPU Usage: u16.57 s4.74 cu0 cs0 - 10.8% CPU load 94.5 requests/sec - 1.6 MB/second - 16.8 kB/request 15 requests currently being processed, 49 idle workers PID Connections Threads Async connections total accepting busy idle writing keep-alive closing 11701 114 no 10 22 0 66 38 11702 134 no 5 27 0 81 48 Sum 248 15 49 0 147 86 __R_R__W___RRW________RR__R___W_W_______W_____W_____________R_R_ Is there any obvious reason anyone could think of why this would be the case. I can provide any other additional stats or server setup info to help out. Ive tried tweaking everything up and down and nothing really helps get the PHP-FPM setup anywhere near a baseic prefork/mod-php setup. Thanks!
You can use: yum -y install packagename The "-y" implies "yes".
{ "source": [ "https://serverfault.com/questions/442102", "https://serverfault.com", "https://serverfault.com/users/139250/" ] }
442,611
I'm unsure as to why I'm getting the following error when apache is rebooted: Invalid command 'VirtualDocumentRoot', perhaps misspelled or defined by a module not included in the server configuration Action 'start' failed. The snippet it is referring to is this: <VirtualHost *:80> ServerAdmin [email protected] VirtualDocumentRoot /local/www/staging/%1 ServerAlias *.staging.mydomain.com </VirtualHost> I assumed it was a misspelling as it said, but it was copied directly from another server of mine. It works perfect there. Any ideas?
The documentation suggests that the directive is provided by the module vhost_alias. You should ensure that you have the LoadModule vhost_alias_module modules/mod_vhost_alias.so configuration directive in the configuration file of the server where it doesn't work.
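On Debian/Ubuntu-style layouts (an assumption — the distribution isn't stated) the module is normally enabled with a2enmod rather than by editing LoadModule lines by hand: a2enmod vhost_alias service apache2 restart        # or: apachectl -k graceful apache2ctl -M | grep -i vhost  # confirm vhost_alias_module is loaded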
{ "source": [ "https://serverfault.com/questions/442611", "https://serverfault.com", "https://serverfault.com/users/122892/" ] }
442,933
I have an existing SSH key (public and private), that was created with ssh-keygen. How can I add a comment to this existing key?
Just add a space after the key and put in the comment, e.g.: ssh-dss AAAAB3NzaC1kc3MAAACBAN+NX/rmUkRW7Xn7faglC/pxqbVIohbcVOt41VThMYORtMQr QSqMZugxew2s9iX4qRowHWLBRci6404nSydLiDe1q6/NmpK+oQ8zD1yXekl+fruBAYeno7f6dM7c 2swwwXY6knp4umXkLItxIUki6SXM0WfabJ8BwuNDyA8IrbFAAAAFQCynEN3MYXbs4AA7E/1I03jb B1rewAAAIAztzZUygrUI8XX6eE4zEHdTbv89AHYsAsf7fSAWnPxWc63dV0P5lCPNk58nze6+N+MD X7ZQADT6710fvbOmEFLciTwBGHHLxIV+1iTApJSsQp9T+pdkbFzBZ+mqQamZpSN1hC8fXe/Uty0D SbhnQ1qanwrOdKP1JV7DUgzehSfAAAAIEAwAyNYxUsGil46gZQea6sfhUnrBwyM6JnEbA6ogfGdS T2TDn1U5rfTV9UuNHzfoZ4CplVHclXyUPPhbKqcedpuRPJhHN/lp5MH7Q2tI/UxHvmePNHrXKk86 XYt7RzKHjWbHRxf84GIyTlKa8yfNfFlf9oNXdtBXcsJjHIvNsBk= ThisIsAComment The man page for sshd has a section on the authorized_keys format, where it states that the comment extends to the end of the line. While I haven't tried it, you should be able to put spaces into the comment.
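ssh-keygen also has a -c option for changing the comment stored in an existing key pair, although support depends on the key type and file format of your OpenSSH version, so treat this as something to test first: ssh-keygen -c -C "new comment" -f ~/.ssh/id_rsa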
{ "source": [ "https://serverfault.com/questions/442933", "https://serverfault.com", "https://serverfault.com/users/1032/" ] }
443,038
I've seen many resources explaining how to set up a server's firewall to allow incoming and outgoing traffic on HTTP standard ports ( 80 and 443 ), but I can't figure out why I would need either of them. Do I need to unblock both for a "regular" web site to work? For file uploads to work? Are there situations where it would be advisable to unblock one and leave the other blocked? Sorry if that's a basic question, but I couldn't find it explained anywhere (also I'm not a native english speaker). I know in a "regular" web site the client is always the one who initiates a request, so I'm assuming a web server must accept incoming traffic on those ports, and my common sense tells me the server is allowed to send a response without unblocking anything else (otherwise it wouldn't make sense to have two types of rules). Is that correct? But what is an outgoing web (service) traffic, and what would be its use? AFAIK if the server wanted to initiate a connection with another machine, the specific port that matters is the one in the other end (i.e. the destination port would be 80 ), on its end any free port could be used (the source port would be random). I can open HTTP requests from my server (using wget for instance) without unblocking anything. So I'm assuming my concepts of "incoming" and "outgoing" are wrong somehow.
"Incoming" and "outgoing" are from the perspective of the machine in question. "Incoming" refers to packets which originate elsewhere and arrive at the machine, while "outgoing" refers to packets which originate at the machine and arrive elsewhere. If you refer to your web server, it mostly accepts incoming connections to its web service, and only occasionally (or maybe never) makes outgoing connections. If you refer to your web client, it mostly makes outgoing connections to other services, and only occasionally (or maybe never) accepts incoming connections. Clear as mud now?
{ "source": [ "https://serverfault.com/questions/443038", "https://serverfault.com", "https://serverfault.com/users/106111/" ] }
443,344
I've done a fair bit of programming in C#, but then I've also written a lot of T-SQL scripts. C# requires semicolons, and T-SQL and PowerShell they're optional. What do you do for PowerShell? Why? My gut feel is to include semicolons but I don't know why.
Powershell primarily uses new lines as statement seperators , but semicolons may be used for multiple statements on a single line.
{ "source": [ "https://serverfault.com/questions/443344", "https://serverfault.com", "https://serverfault.com/users/20142/" ] }
443,949
Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD . So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a host header, like this: curl IP_ADDRESS -H 'Host: DOMAIN.TLD' . In my understanding, these two commands create the exact same HTTP request. The only difference is that in the latter one I take out the DNS lookup part from cURL and do this manually (please correct me if I'm wrong). All well so far. But now I want to do the same for an HTTPS url. Again, I could test it like this curl https://DOMAIN.TLD . But I want to specify the IP manually, so I run curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD' . Now I get a cURL error: curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'. I can of course get around this by telling cURL not to care about the certificate (the "-k" option) but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
Think I found a solution going through the cURL manual: curl https://DOMAIN.EXAMPLE --resolve 'DOMAIN.EXAMPLE:443:192.0.2.17' Added in [curl] 7.21.3. Removal support added in 7.42.0. from CURLOPT_RESOLVE explained
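Applied to the placeholders used in the question (the port in --resolve has to match the one being connected to, 443 here): curl --resolve 'DOMAIN.TLD:443:IP_ADDRESS' https://DOMAIN.TLD/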
{ "source": [ "https://serverfault.com/questions/443949", "https://serverfault.com", "https://serverfault.com/users/125287/" ] }
443,952
I'm running a Nginx 1.2.4 webserver here, and I'm behind a proxy of my hoster to prevent ddos attacks. The downside of being behind this proxy is that I need to get the REAL IP information from an extra header. In PHP it works great by doing $_SERVER[HTTP_X_REAL_IP] for example. Now before I was behind this proxy of my hoster I had a very effective way of blocking certain IP's by doing this: include /etc/nginx/block.conf and to allow/deny IP's there. But now due to the proxy, Nginx sees all traffic coming from 1 IP. Is there a way I can get Nginx to read the IP's like how PHP does, with the X-REAL-IP header?
Think I found a solution going through the cURL manual: curl https://DOMAIN.EXAMPLE --resolve 'DOMAIN.EXAMPLE:443:192.0.2.17' Added in [curl] 7.21.3. Removal support added in 7.42.0. from CURLOPT_RESOLVE explained
{ "source": [ "https://serverfault.com/questions/443952", "https://serverfault.com", "https://serverfault.com/users/68748/" ] }
444,111
I am doing some distributed work with Rackspace cloud servers and I am using BitTorrent to distribute my files. It works surprisingly well. However, distributing the torrent files themselves is not so nice. How would you go about doing that? Right now I just scp the torrent files to the servers, and of course I could write a script that copies them to sqrt(n) servers, instructing each to again copy to sqrt(n), but that's a pain to work with.
Not knowing what exactly your problem is, I can recommend pscp from parallel-ssh as a tool to upload small files to multiple servers. You prepare a list of servers to upload to and let it know what to take locally and where to put it remotely. For example: $ pscp -h list-of-servers file.torrent /tmp/ [1] 02:11:22 [SUCCESS] 10.0.0.21 [2] 02:11:22 [SUCCESS] 10.0.0.20 [3] 02:11:22 [SUCCESS] 10.0.0.45 [4] 02:11:22 [SUCCESS] 10.0.0.19 [5] 02:11:22 [SUCCESS] 10.0.0.2 [6] 02:11:22 [SUCCESS] 10.0.0.5 [7] 02:11:25 [FAILURE] 10.0.0.3 Exited with error code 1
{ "source": [ "https://serverfault.com/questions/444111", "https://serverfault.com", "https://serverfault.com/users/64874/" ] }
444,114
I've made several backups of older 2000 and XP machines I've phased out over the last year or so. These were done with 'Seagate DiskWizard' from Hiren's Boot CD (Acronis True Image Home (v9.5 I think)) and produced .tib files which I've archived and kept. One of my users (several months later) decides she is missing some internet favourites and program settings from her old install (or has removed them since my migration and wants them back). I have downloaded a trial of 'Acronis Backup & Recovery 11.5' with the intensions of converting the .tib files to .vhd and booting them when required (probably with Windows Virtual PC). Several places on the Acronis site mention converting easily, but I'm unable to see the option in B&R 11.5. Going a different route , B&R 11.5 does not identify the .tib files I have when pointed at the location, reporting 'There are no items to show in this view.' for both 'Data' and 'Archive' views in the Recovery 'Data Selection'. Any pointers?
Not knowing what exactly your problem is, I can recommend pscp from parallel-ssh as a tool to upload small files to multiple servers. You prepare a list of servers to upload to and let it know what to take locally and where to put it remotely. For example: $ pscp -h list-of-servers file.torrent /tmp/ [1] 02:11:22 [SUCCESS] 10.0.0.21 [2] 02:11:22 [SUCCESS] 10.0.0.20 [3] 02:11:22 [SUCCESS] 10.0.0.45 [4] 02:11:22 [SUCCESS] 10.0.0.19 [5] 02:11:22 [SUCCESS] 10.0.0.2 [6] 02:11:22 [SUCCESS] 10.0.0.5 [7] 02:11:25 [FAILURE] 10.0.0.3 Exited with error code 1
{ "source": [ "https://serverfault.com/questions/444114", "https://serverfault.com", "https://serverfault.com/users/106401/" ] }
444,232
I would like to isolate processes using lxc-execute. Is it possible to set bandwidth, cpu and memory limit? I had a look in the man of lxc.conf but I did not find it exhaustive.
First of all i would like you to understand Cgroups that are a part of the LXC utility. when you have a container, you would obviously want to ensure that the various containers you have running done starve any other container or process within. With this in mind, the nice guy of the LXC project a.k.a Daniel Lezcano integrated cgroups with the container technology he was creating i.e. LXC. Now if you want to assign resource usage, you will need to look into configuring your CGROUP. Cgroups allow you to allocate resources—such as CPU time, system memory, network bandwidth, or combinations of these resources—among user-defined groups of tasks (processes) running on a system. You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system. The cgconfig ( control group config) service can be configured to start up at boot time and reestablish your predefined cgroups, thus making them persistent across reboots. Cgroups can have multiple hierarchies because each hierarchy is attached to one or more subsystems (also known as resources controllers or controllers). This will then create multiple trees which are unconnected. There are nine subsystems available. blkio sets limits on input/output access on block devices cpu scheduler for cgroup task access to the CPU cpuacct generate reports for CPU use and cgroup cpuset assign CPUs and memory to a cgroup devices manage access to devices by tasks freezer suspend/resume tasks memory limit memory net_cls tag network packets to allow Linux traffic controller to identify task traffic ns namespace We can list the subsystems we have in our kernel by the command : lssubsys –am lxc-cgroup get or set value from the control group associated with the container name. Manage the control group associated with a container. example usage: lxc-cgroup -n foo cpuset.cpus "0,3" assign the processors 0 and 3 to the container. Now, i have in my opinion answered your original question. But let me add a bit of the parameters that might be useful to you for configuring your container for using lxc. there are condensed form of the documentation of resource control by redhat BLKIO Modifiable Parameters: blkio.reset_stats : any int to reset the statistics of BLKIO blkio.weight : 100 - 1000 (relative proportion of block I/O access) blkio.weight_device : major, minor , weight 100 - 1000 blkio.time : major, minor and time (device type and node numbers and length of access in milli seconds) blkio.throttle.read_bps_device : major, minor specifies the upper limit on the number of read operations a device can perform. The rate of the read operations is specified in bytes per second. blkio.throttle.read_iops_device :major, minor and operations_per_second specifies the upper limit on the number of read operations a device can perform blkio.throttle.write_bps_device : major, minor and bytes_per_second (bytes per second) blkio.throttle.write_iops_device : major, minor and operations_per_second CFS Modifiable Parameters: cpu.cfs_period_us : specifies a period of time in microseconds for how regularly a cgroup's access to CPU resources should be reallocated. If tasks in a cgroup should be able to access a single CPU for 0.2 seconds out of every 1 second, set cpu.cfs_quota_us to 200000 and cpu.cfs_period_us to 1000000. cpu.cfs_quota_us : total amount of time in microseconds that all tasks in a cgroup can run during one period. Once limit has reached, they are not allowed to run beyond that. 
cpu.shares : contains an integer value that specifies the relative share of CPU time available to tasks in a cgroup. Note: For example, tasks in two cgroups that have cpu.shares set to 1 will receive equal CPU time, but tasks in a cgroup that has cpu.shares set to 2 receive twice the CPU time of tasks in a cgroup where cpu.shares is set to 1. Note that shares of CPU time are distributed per CPU. If one cgroup is limited to 25% of CPU and another cgroup is limited to 75% of CPU, on a multi-core system, both cgroups will use 100% of two different CPUs. RT Modifiable Parameters: cpu.rt_period_us : time in microseconds for how regularly a cgroups access to CPU resources should be reallocated. cpu.rt_runtime_us : same as above. CPUset : cpuset subsystem assigns individual CPUs and memory nodes to cgroups. Note: here some parameters are mandatory Mandatory: cpuset.cpus : specifies the CPUs that tasks in this cgroup are permitted to access. This is a comma-separated list in ASCII format, with dashes (" -") to represent ranges. For example 0-2,16 represents CPUs 0, 1, 2, and 16. cpuset.mems : specifies the memory nodes that tasks in this cgroup are permitted to access. same as above format Optional: cpuset.cpu_exclusive : contains a flag ( 0 or 1) that specifies whether cpusets other than this one and its parents and children can share the CPUs specified for this cpuset. By default ( 0), CPUs are not allocated exclusively to one cpuset. cpuset.mem_exclusive : contains a flag ( 0 or 1) that specifies whether other cpusets can share the memory nodes specified for this cpuset. By default ( 0), memory nodes are not allocated exclusively to one cpuset. Reserving memory nodes for the exclusive use of a cpuset ( 1) is functionally the same as enabling a memory hardwall with the cpuset.mem_hardwall parameter. cpuset.mem_hardwall : contains a flag ( 0 or 1) that specifies whether kernel allocations of memory page and buffer data should be restricted to the memory nodes specified for this cpuset. By default ( 0), page and buffer data is shared across processes belonging to multiple users. With a hardwall enabled ( 1), each tasks' user allocation can be kept separate. cpuset.memory_pressure_enabled : contains a flag ( 0 or 1) that specifies whether the system should compute the memory pressure created by the processes in this cgroup cpuset.memory_spread_page : contains a flag ( 0 or 1) that specifies whether file system buffers should be spread evenly across the memory nodes allocated to this cpuset. By default ( 0), no attempt is made to spread memory pages for these buffers evenly, and buffers are placed on the same node on which the process that created them is running. cpuset.memory_spread_slab : contains a flag ( 0 or 1) that specifies whether kernel slab caches for file input/output operations should be spread evenly across the cpuset. By default ( 0), no attempt is made to spread kernel slab caches evenly, and slab caches are placed on the same node on which the process that created them is running. cpuset.sched_load_balance : contains a flag ( 0 or 1) that specifies whether the kernel will balance loads across the CPUs in this cpuset. By default ( 1), the kernel balances loads by moving processes from overloaded CPUs to less heavily used CPUs. Devices: The devices subsystem allows or denies access to devices by tasks in a cgroup. devices.allow : specifies devices to which tasks in a cgroup have access. Each entry has four fields: type, major, minor, and access. 
type can be of following three values: a - applies to all devices b - block devices c - character devices access is a sequence of one or more letters: r read from device w write to device m create device files that do not yet exist devices.deny : similar syntax as above devices.list : reports devices for which access control has been set for tasks in this cgroup Memory: The memory subsystem generates automatic reports on memory resources used by the tasks in a cgroup, and sets limits on memory use by those tasks Memory modifiable parameters: memory.limit_in_bytes : sets the maximum amount of user memory. can use suffixes like K for kilo and M for mega etc. This only limits the groups lower in the heirarchy. i.e. root cgroup cannot be limited memory.memsw.limit_in_bytes : sets the maximum amount for the sum of memory and swap usage. again this cannot limit the root cgroup. Note: memory.limit_in_bytes should always be set before memory.memsw.limit_in_bytes because only after limit, can swp limit be set memory.force_empty : when set to 0, empties memory of all pages used by tasks in this cgroup memory.swappiness : sets the tendency of the kernel to swap out process memory used by tasks in this cgroup instead of reclaiming pages from the page cache. he default value is 60. Values lower than 60 decrease the kernel's tendency to swap out process memory, values greater than 60 increase the kernel's tendency to swap out process memory, and values greater than 100 permit the kernel to swap out pages that are part of the address space of the processes in this cgroup. Note: Swappiness can only be asssigned to leaf groups in the cgroups architecture. i.e if any cgroup has a child cgroup, we cannot set the swappiness for that memory.oom_control : contains a flag ( 0 or 1) that enables or disables the Out of Memory killer for a cgroup. If enabled ( 0), tasks that attempt to consume more memory than they are allowed are immediately killed by the OOM killer. net_cls: The net_cls subsystem tags network packets with a class identifier (classid) that allows the Linux traffic controller ( tc) to identify packets originating from a particular cgroup. The traffic controller can be configured to assign different priorities to packets from different cgroups. net_cls.classid : 0XAAAABBBB AAAA = major number (hex) BBBB = minor number (hex) net_cls.classid contains a single value that indicates a traffic control handle. The value of classid read from the net_cls.classid file is presented in the decimal format while the value to be written to the file is expected in the hexadecimal format. e.g. 0X100001 = 10:1 net_prio : The Network Priority ( net_prio) subsystem provides a way to dynamically set the priority of network traffic per each network interface for applications within various cgroups. A network's priority is a number assigned to network traffic and used internally by the system and network devices. Network priority is used to differentiate packets that are sent, queued, or dropped. traffic controller (tc) is responsible to set the networks priority. net_prio.ifpriomap : networkinterface , priority (/cgroup/net_prio/iscsi/net_prio.ifpriomap) Contents of the net_prio.ifpriomap file can be modified by echoing a string into the file using the above format, for example: ~]# echo "eth0 5" > /cgroup/net_prio/iscsi/net_prio.ifpriomap
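To tie this back to lxc-execute specifically, here is a hedged sketch of how a few of these cgroup limits might be expressed in an LXC container config file; the values and container name are placeholders, and key names can differ slightly between LXC versions:
# fragment of an LXC config file, e.g. /var/lib/lxc/foo/config
lxc.cgroup.cpuset.cpus = 0,3                    # pin the container to CPUs 0 and 3
lxc.cgroup.cpu.shares = 512                     # relative CPU weight (default 1024)
lxc.cgroup.memory.limit_in_bytes = 256M         # cap user memory at 256 MB
lxc.cgroup.memory.memsw.limit_in_bytes = 512M   # cap memory + swap (needs swap accounting)
lxc.cgroup.blkio.weight = 300                   # relative block I/O weight (100-1000)
With a config like that in place, something along the lines of lxc-execute -n foo -f /var/lib/lxc/foo/config -- /bin/bash should start the process inside those limits, and lxc-cgroup can then be used as shown above to adjust them at runtime.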
{ "source": [ "https://serverfault.com/questions/444232", "https://serverfault.com", "https://serverfault.com/users/18459/" ] }
444,286
So the release of Windows Server 2012 has removed a lot of the old Remote Desktop related configuration utilities. In particular, there is no more Remote Desktop Session Host Configuration utility that gave you access to the RDP-Tcp properties dialog that let you configure a custom certificate for the RDSH to use. In its place is a nice new consolidated GUI that is part of the overall "edit deployment properties" workflow in the new Server Manager. The catch is that you only get access to that workflow if you have the Remote Desktop Services role installed (as far as I can tell). This seems like a bit of an oversight on Microsoft's part. How can we configure a custom SSL certificate for RDP on Windows Server 2012 when it's running in the default Remote Administration mode without needlessly installing the Remote Desktop Services role?
It turns out that much of the configuration data for RDSH is stored in the Win32_TSGeneralSetting class in WMI in the root\cimv2\TerminalServices namespace. The configured certificate for a given connection is referenced by the Thumbprint value of that certificate on a property called SSLCertificateSHA1Hash . UPDATE: Here's a generalized Powershell solution that grabs and sets the thumbprint of the first SSL cert in the computer's personal store. If your system has multiple certs, you should add a -Filter option to the gci command to make sure you reference the correct cert. I've left my original answer intact below this for reference. # get a reference to the config instance $tsgs = gwmi -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'" # grab the thumbprint of the first SSL cert in the computer store $thumb = (gci -path cert:/LocalMachine/My | select -first 1).Thumbprint # set the new thumbprint value swmi -path $tsgs.__path -argument @{SSLCertificateSHA1Hash="$thumb"} In order to get the thumbprint value Open the properties dialog for your certificate and select the Details tab Scroll down to the Thumbprint field and copy the space delimited hex string into something like Notepad Remove all the spaces from the string. You'll also want to watch out for and remove a non-ascii character that sometimes gets copied just before the first character in the string. It's not visible in Notepad. This is the value you need to set in WMI. It should look something like this: 1ea1fd5b25b8c327be2c4e4852263efdb4d16af4 . Now that you have the thumbprint value, here's a one-liner you can use to set the value using wmic: wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="THUMBPRINT" Or if PowerShell is your thing, you can use this instead: $path = (Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'").__path Set-WmiInstance -Path $path -argument @{SSLCertificateSHA1Hash="THUMBPRINT"} Note: the certificate must be in the 'Personal' Certificate Store for the Computer account.
{ "source": [ "https://serverfault.com/questions/444286", "https://serverfault.com", "https://serverfault.com/users/1584/" ] }
444,554
When I run netstat there are some entries such as TCP [::]:8010 computername LISTENING What does that mean? It is impossible to search for...
:: can be used once in an IPv6 address to replace a consecutive run of blocks of zeroes. It can be any length of zeroes as long as it is greater than a single block. All zeroes in a single block can be represented by :0: instead of writing out all four zeroes. In this case, it means all zeroes, or the IPv6 equivalent of the IPv4 0.0.0.0. As an example of something that is not all zeroes: fe80:0000:0000:0000:34cb:9850:4868:9d2c Which is properly "reduced" to: fe80::34cb:9850:4868:9d2c As an example, it can also be written as: fe80:0:0:0:34cb:9850:4868:9d2c but that is far less common than just "double coloning" it.
{ "source": [ "https://serverfault.com/questions/444554", "https://serverfault.com", "https://serverfault.com/users/116691/" ] }
444,867
I have a directory called "members" and under it there are folders/files. How can I recursively set all the current folders/files and any future ones created there to by default have 775 permissions and belong to owner/group nobody/admin respectively? I enabled ACL, mounted, but can't seem to get the setfacl command to do this properly. Any idea how to accomplish this?
I actually found something that so far does what I asked for, sharing here so anyone who runs into this issue can try out this solution: sudo setfacl -Rdm g:groupnamehere:rwx /base/path/members/ sudo setfacl -Rm g:groupnamehere:rwx /base/path/members/ R is recursive, which means everything under that directory will have the rule applied to it. d is default, which means for all future items created under that directory, have these rules apply by default. m is needed to add/modify rules. The first command, is for new items (hence the d), the second command, is for old/existing items under the folder. Hope this helps someone out as this stuff is a bit complicated and not very intuitive.
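To verify the result, getfacl can dump both the effective and the default (inherited) entries; a quick example, with the group name and path still as placeholders:
getfacl /base/path/members/
# expected output includes lines such as:
#   group:groupnamehere:rwx
#   default:group:groupnamehere:rwx
# the "default:" entries are what new files and directories will inherit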
{ "source": [ "https://serverfault.com/questions/444867", "https://serverfault.com", "https://serverfault.com/users/143836/" ] }
444,871
Using a stand alone Windows Server 2012 Standard edition (no Active Directory), I succeded to add a new rule to allow sql server connection, by entering the .exe file When I edited the rule property to specify the port number, text zones for ports are greyed and no way to write in them. Is there a new feature in Windows Server 2012 that by default disables port numbers editing ? Any advices please ?
I actually found something that so far does what I asked for, sharing here so anyone who runs into this issue can try out this solution: sudo setfacl -Rdm g:groupnamehere:rwx /base/path/members/ sudo setfacl -Rm g:groupnamehere:rwx /base/path/members/ R is recursive, which means everything under that directory will have the rule applied to it. d is default, which means for all future items created under that directory, have these rules apply by default. m is needed to add/modify rules. The first command, is for new items (hence the d), the second command, is for old/existing items under the folder. Hope this helps someone out as this stuff is a bit complicated and not very intuitive.
{ "source": [ "https://serverfault.com/questions/444871", "https://serverfault.com", "https://serverfault.com/users/140342/" ] }
444,965
real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs, both sharing the same domain name, but different backend web server pools. http://domain/web1 http://domain/web2 I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg frontend webVIP_80 mode http bind :80 #acl routing to backend acl web1_path path_beg /web1 acl web2_path path_beg /web2 #which backend use_backend webfarm1 if web1_path use_backend webfarm2 if web2_path default_backend webfarm1 backend webfarm1 mode http reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2 balance roundrobin option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms backend webfarm2 mode http reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2 balance roundrobin option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms If I go to http://domain/web1 or http://domain/web2 I see it in the error logs that the request on a server in each backend that the requst is for the resource /web1 or /web2 respectively. Therefore I believe there to be something wrong with my regular expression, even though I copied and pasted it from the Documentation. http://code.google.com/p/haproxy-docs/wiki/reqrep Summary: I'm trying to route traffic based on URI, however I want HAProxy to strip the URI when it sends the request to the backend pool. Thank you! -Jim
You have this: reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2 I think you want this: reqrep ^([^\ ]*\ /)web1[/]?(.*) \1\2 The difference being that the second one will work if the / after webN is omitted. In answer to your comment below, going in to detail about how the expressions above work is more effort than I can give. However, maybe this will help. Everything before /web1 is "capturing" everything that comes before web1 in the request string. So usually that would be GET or POST. The (.*) "captures" everything after web1, including nothing if there is nothing. The next part ( \1\2 ) says what to do with those captured parts. It says to form a string composed of \1 (the first captured part) and \2 (followed by the second captured part). Since web1 is never captured, it's not assembled in to the final output.
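Putting that together with the config from the question, one backend would end up looking roughly like this (only the reqrep line changes; everything else is the original):
backend webfarm1
    mode http
    reqrep ^([^\ ]*\ /)web1[/]?(.*) \1\2
    balance roundrobin
    option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
    server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
    server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms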
{ "source": [ "https://serverfault.com/questions/444965", "https://serverfault.com", "https://serverfault.com/users/140021/" ] }
445,333
It would be something similar to top , where you see your cpu processes in real time. I'm not looking for a GUI like Wireshark to do it.
iftop is cool and lightweight. ntop is even cooler but web-based and uses a daemon.
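For completeness, a couple of usage examples; the interface name and the Debian-style install command are assumptions about the local setup:
apt-get install iftop
iftop -i eth0          # live per-connection bandwidth on eth0
iftop -i eth0 -P       # same, but also show port numbers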
{ "source": [ "https://serverfault.com/questions/445333", "https://serverfault.com", "https://serverfault.com/users/144032/" ] }
446,379
I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
Honestly your best approach here is to either fix the power-cuts, or deploy a different system in a better location. Yes there are systems such as redis which will store data in an append-only-log for replay, but you risk corruption at lower levels - e.g. if your filesystem is scrambled then the data on disk is potentially at risk. I appreciate any improvement would be useful to you, but really the problem is not one that can be solved given the scenario you've outlined.
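If the redis route mentioned above is explored, the relevant knobs are its append-only-file settings; a hedged sketch of the redis.conf directives involved (whether this survives your failure mode still depends on how the filesystem copes with the power loss):
# redis.conf fragment
appendonly yes          # enable the append-only log
appendfsync always      # fsync after every write: slowest, most durable
# appendfsync everysec  # alternative: fsync once per second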
{ "source": [ "https://serverfault.com/questions/446379", "https://serverfault.com", "https://serverfault.com/users/144472/" ] }
447,028
I want to clone a repo in a non-interactive way. When cloning, git asks to confirm the host's fingerprint: The authenticity of host 'bitbucket.org (207.223.240.182)' can't be established. RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40. Are you sure you want to continue connecting (yes/no)? no How do I force "yes" every time this question pops up? I tried using yes yes | git clone ... , but it doesn't work. EDIT: Here's a solution: Can I automatically add a new host to known_hosts? (adds entries to known_hosts with ssh-keyscan).
I don't think that is the best solution, but it was a solution for me. ANSWER: Adding the domainnames to the known_hosts file using the ssh-keyscan command solved the issue: ssh-keyscan <enter_domainname_e.g._github.com> >> ~/.ssh/known_hosts
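An alternative (or complement) is to relax host-key checking for just that host in the SSH client config, so git is never prompted; a small example, with the obvious caveat that it weakens protection against man-in-the-middle attacks for that host:
# ~/.ssh/config
Host bitbucket.org
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null   # optional: do not record the key at all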
{ "source": [ "https://serverfault.com/questions/447028", "https://serverfault.com", "https://serverfault.com/users/144671/" ] }
447,092
All of my servers are currently flooded by salt water. Is it possible for each platter in a multi-platter drive to be separated, cleaned, imaged, and merged into a new virtual drive for data recovery?
There are already many companies providing services for this, for example 24HourData (their site has a list of drives they support ). While you figure out your next steps keep these things in mind (from Top Tips for Liquid Damaged Data Storage Devices ) Do not try to power on a flood damaged hard drive bad things happen to good hard drives when this is done! Strange as it may seem, in most cases it is not wise to allow your device to dry off . Sealing your water damaged media in an air tight container and getting it to a data recovery company is definitely the best option if you need your data back. Do not presume that only the electronics will be affected and attempt to swap PCBs or components - doing so may prove dangerous to yourself and your drive. Don't open up your hard drive under any circumstances (unless of course you don't mind losing you data). If you have a backup copy of your data and therefore don't need a data recovery service, backup the backup, they say that bad luck comes in threes! Highlights in these points are mine.
{ "source": [ "https://serverfault.com/questions/447092", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
447,258
I've got a wildcard ssl certificate and I'm trying to redirect all non-ssl traffic to ssl. Currently I'm using the following for redirecting the non-subdomained url, which is working fine. server { listen 80; server_name mydomain.com; #Rewrite all nonssl requests to ssl. rewrite ^ https://$server_name$request_uri? permanent; } When I do the same thing for *.mydomain.com it logically redirects to https://%2A.mydomain.com/ How do you redirect all subdomains to their https equivalent?
That's all... server { listen 80; server_name *.mydomain.com; #Rewrite all nonssl requests to ssl. return 301 https://$host$request_uri; }
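For completeness, the matching HTTPS side would look something like the block below; the certificate paths are placeholders, and the wildcard certificate obviously has to cover *.mydomain.com:
server {
    listen 443 ssl;
    server_name *.mydomain.com mydomain.com;
    ssl_certificate     /etc/nginx/ssl/wildcard.mydomain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.mydomain.com.key;
    # ... the rest of the site config ...
}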
{ "source": [ "https://serverfault.com/questions/447258", "https://serverfault.com", "https://serverfault.com/users/14356/" ] }
447,871
I have the following nginx config, e.g. server { listen 80; server_name example.com allow 127.0.0.0/8; When I restart, it warn me: Restarting nginx: nginx: [warn] server name "127.0.0.0/8" has suspicious symbols in /etc/nginx/sites-enabled/xxx Any idea?
I guess you are missing the ; at the end of the server_name directive so it interprets the allow line as part of the server name. server { listen 80; server_name example.com; allow 127.0.0.0/8;
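A quick way to catch this class of mistake before restarting is nginx's built-in config test; for example (paths and service name may differ per distro):
nginx -t                     # parse and validate the configuration
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful
service nginx reload         # only reload once the test passes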
{ "source": [ "https://serverfault.com/questions/447871", "https://serverfault.com", "https://serverfault.com/users/50774/" ] }
447,896
We have a Cisco ASA 5505 with firmware ASA9.0(1) and ASDM 7.0(2). It is configured with a public ip address, and when trying to reach it from the outside by HTTPS for AnyConnect VPN, we get the following log output: 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Built inbound TCP connection 2889 for outside:<client-ip>/51000 (<client-ip>/51000) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Built inbound TCP connection 2890 for outside:<client-ip>/50999 (<client-ip>/50999) to identity:<asa-ip>/443 (<asa-ip>/443) 6 Nov 12 2012 07:01:40 <client-ip> 51000 <asa-ip> 443 Teardown TCP connection 2889 for outside:<client-ip>/51000 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency 6 Nov 12 2012 07:01:40 <client-ip> 50999 <asa-ip> 443 Teardown TCP connection 2890 for outside:<client-ip>/50999 to identity:<asa-ip>/443 duration 0:00:00 bytes 0 No valid adjacency We finished the startup wizard and the anyconnect vpn wizard and here is the resulting configuration: Cryptochecksum: 12262d68 23b0d136 bb55644a 9c08f86b : Saved : Written by enable_15 at 07:08:30.519 UTC Mon Nov 12 2012 ! ASA Version 9.0(1) ! hostname vpn domain-name office.<redacted>.com enable password <redacted> encrypted passwd <redacted> encrypted names ip local pool vpn-pool 192.168.67.2-192.168.67.253 mask 255.255.255.0 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.68.250 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address <redacted> 255.255.255.248 ! ftp mode passive dns server-group DefaultDNS domain-name office.<redacted>.com object network obj_any subnet 0.0.0.0 0.0.0.0 pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu inside 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 no arp permit-nonconnected ! 
object network obj_any nat (inside,outside) dynamic interface timeout xlate 3:00:00 timeout pat-xlate 0:00:30 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy user-identity default-domain LOCAL http server enable http 192.168.68.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart crypto ipsec ikev2 ipsec-proposal DES protocol esp encryption des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal 3DES protocol esp encryption 3des protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES protocol esp encryption aes protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES192 protocol esp encryption aes-192 protocol esp integrity sha-1 md5 crypto ipsec ikev2 ipsec-proposal AES256 protocol esp encryption aes-256 protocol esp integrity sha-1 md5 crypto ipsec security-association pmtu-aging infinite crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set ikev2 ipsec-proposal AES256 AES192 AES 3DES DES crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto map inside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map inside_map interface inside crypto ca trustpoint _SmartCallHome_ServerCA crl configure crypto ca trustpoint ASDM_TrustPoint0 enrollment self subject-name CN=vpn proxy-ldc-issuer crl configure crypto ca trustpool policy crypto ca certificate chain _SmartCallHome_ServerCA certificate ca 6ecc7aa5a7032009b8cebcf4e952d491 <redacted> quit crypto ca certificate chain ASDM_TrustPoint0 certificate f678a050 <redacted> quit crypto ikev2 policy 1 encryption aes-256 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 10 encryption aes-192 integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 20 encryption aes integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 30 encryption 3des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 policy 40 encryption des integrity sha group 5 2 prf sha lifetime seconds 86400 crypto ikev2 enable outside client-services port 443 crypto ikev2 remote-access trustpoint ASDM_TrustPoint0 telnet timeout 5 ssh 192.168.68.0 255.255.255.0 inside ssh timeout 5 console timeout 0 vpn-addr-assign local reuse-delay 60 dhcpd auto_config outside ! dhcpd address 192.168.68.254-192.168.68.254 inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ssl trust-point ASDM_TrustPoint0 inside ssl trust-point ASDM_TrustPoint0 outside webvpn enable outside enable inside anyconnect image disk0:/anyconnect-win-3.1.01065-k9.pkg 1 anyconnect image disk0:/anyconnect-linux-3.1.01065-k9.pkg 2 anyconnect image disk0:/anyconnect-macosx-i386-3.1.01065-k9.pkg 3 anyconnect profiles GM-AnyConnect_client_profile disk0:/GM-AnyConnect_client_profile.xml anyconnect enable tunnel-group-list enable group-policy GroupPolicy_GM-AnyConnect internal group-policy GroupPolicy_GM-AnyConnect attributes wins-server none dns-server value 192.168.68.254 vpn-tunnel-protocol ikev2 ssl-client default-domain value office.<redacted>.com webvpn anyconnect profiles value GM-AnyConnect_client_profile type user username <redacted> password <redacted> encrypted tunnel-group GM-AnyConnect type remote-access tunnel-group GM-AnyConnect general-attributes address-pool vpn-pool default-group-policy GroupPolicy_GM-AnyConnect tunnel-group GM-AnyConnect webvpn-attributes group-alias GM-AnyConnect enable ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global prompt hostname context call-home reporting anonymous Cryptochecksum:12262d6823b0d136bb55644a9c08f86b : end Clearly we are missing something, but the question is, what?
I guess you are missing the ; at the end of the server_name directive so it interprets the allow line as part of the server name. server { listen 80; server_name example.com; allow 127.0.0.0/8;
{ "source": [ "https://serverfault.com/questions/447896", "https://serverfault.com", "https://serverfault.com/users/63608/" ] }
448,541
Possible Duplicate: How can I know when my computer is pinged? I'm using Linux. I would like to know how to tell who is pinging my computer. I have seen this similar question using Windows , but I'm not sure it applies to me.
It looks like you're asking how to see who's pinging you, right? One quick and dirty way would be using tcpdump to simply monitor all incoming ICMP echo requests: sudo tcpdump -i ethX icmp and icmp[icmptype]=icmp-echo where ethX is the name of the adapter you're interested in listening to. Note that tcpdump will resolve hostnames by default, so you might need to add the -n option to get IPs instead. (This is, by the way, basically identical to the instructions given in the question you linked, though they are for Wireshark, a related but separate tool.)
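If something longer-running than a live tcpdump session is wanted, a hedged alternative is to have the kernel log inbound echo requests via iptables (rule position and the log file location depend on the existing firewall and syslog setup):
# log every inbound ICMP echo request to the kernel log / syslog
iptables -A INPUT -p icmp --icmp-type echo-request -j LOG --log-prefix "PING from: "
# then watch for the entries, e.g.:
tail -f /var/log/kern.log      # Debian/Ubuntu; /var/log/messages on RHEL-alikes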
{ "source": [ "https://serverfault.com/questions/448541", "https://serverfault.com", "https://serverfault.com/users/145442/" ] }
448,550
We have HP DL360 G7 server with one cpu and 16G 12G ram. We plan to add another cpu. So, we need also ram for second cpu. Is there any negative performance impact if we add different size of ram to second cpu? for example 20G ? Current ram configuration:
The HP ProLiant DL360 G7 server (and other Nehalem-and-newer CPU systems) has a set of memory DIMM population guidelines. Can you share what's currently populated and what your final RAM amount/goal is? This is documented primarily in the Quickspecs for the system, but I'll try to give some specific guidelines. HP also has an interactive Memory Configuration Tool to help step you through the process and your options. Here's a technical deep-dive on the Nehalem/Westmere CPU architecture that explains the memory side of things. At present, you have 9 available DIMM slots out of the 18 slots on the server. You can only use half of them because the server only has one CPU installed. Installing the additional CPU opens the other 9 slots up for use. Performance is maximized if you balance across each CPU's DIMM banks. E.g. results are best if an equal amount of RAM is assigned to both CPUs. The other critical rules are: Do not mix Unbuffered memory (UDIMMs) with Registered memory (RDIMMs) Do not install DIMMs if the corresponding processor is not installed To maximize performance, balance the total memory capacity between all installed processors Populate DIMMs from heaviest load (quad-rank) to lightest load (single-rank) within a channel There are also memory channel population tips that affect bus speed. E.g. using 3 to 6 DIMMs per CPU is going to be faster than running with all 18 slots populated. RAM can be seen by both CPUs, but you have to populate on both sides if you have two CPUs. Reply back with your setup, and we can help optimize...
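Before ordering parts, it is worth dumping what is actually populated today; one quick way is dmidecode (HP's hpasmcli can report similar detail if the ProLiant support pack is installed):
# list every DIMM slot, its size and speed as seen by the firmware
dmidecode -t memory | egrep -i 'locator|size|speed'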
{ "source": [ "https://serverfault.com/questions/448550", "https://serverfault.com", "https://serverfault.com/users/50152/" ] }
448,563
We have a multi-tenant email relay set up that has a transport map file that looks like this: domain1.com smtp:mail.domain1.com domain2.com smtp:mail.domain2.com domain3.com smtp:mail.domain3.com [etc] In the event mail.domain1.com is down, email for domain1.com will be held by the postfix server until mail.domain1.com starts responding again. However we have a customer who has a backup DSL line on their site, an their email server is also available over this. How can I tell the transport to failover to a different host if the first is unavailable? Clarification I think there is some confusion over the purpose of this setup. This postfix server is an inbound mail relay for clients who do not have AV and Spam protection on site. It is one of a pair, which are configured as the 2 MX records for these customers. They receive and clean email before forwarding it on to their local mail servers, as well as acting as a buffer in case of an outage on their end. These customers don't generally have multiple on site mail servers, they are too small hence this service. What they do often have though is a secondary connection, eg fibre and DSL, so I'd like to be able to direct the onward SMTP to their second connection should the first be unreachable.
The HP ProLiant DL360 G7 server (and other Nehalem-and-newer CPU systems) have a set of memory DIMM population guidelines. Can you share what's currently populated and what your final RAM amount/goal is? This is documented primarily in the Quickspecs for the system, but I'll try to give some specific guidelines. HP also has an interactive Memory Configuration Tool to help step you through the process and your options. Here's a technical deep-dive on the Nehalem/Westmere CPU architecture that explains the memory side of things. At present, you have 9 available DIMM slots out of the 18 slots on the server. You can only use half of them because the server only has one CPU installed. Installing the additional CPU opens the other 9 slots up for use. Performance is maximized if you balance across each CPU's DIMM banks. E.g. results are best if an equal amount of RAM is assigned to both CPUs. The other critical rules are: Do not mix Unbuffered memory (UDIMMs) with Registered memory (RDIMMs) Do not install DIMMs if the corresponding processor is not installed To maximize performance, balance the total memory capacity between all installed processors Populate DIMMs from heaviest load (quad-rank) to lightest load (single-rank) within a channel There are also memory channel population tips that affect bus speed. E.g. using 3 to 6 DIMMs per CPU is going to be faster than running with all 18 slots populated. RAM can be seen by both CPUs, but you have to popular on both sides if you have two CPUs. Reply back with your setup, and we can help optimize...
{ "source": [ "https://serverfault.com/questions/448563", "https://serverfault.com", "https://serverfault.com/users/87855/" ] }
448,647
I'm pretty new to Debian, and I'm trying to set up a server. I have created a user who can only access his folder /home/username (and its subdirectories). Now I want to use that user for the webserver I set up, and I have given him access to /var/www, but I can't see /var/www through sftp, so I created a symbolic link like this: root@server:/home/username# ln -s /var/www www root@server:/home/username# cd www root@server:/home/username/www# chown username:username * Now, with FileZilla, I can see the www folder like this: But when I try to open it, I get this: What am I doing wrong?
It's likely the SFTP is being chrooted, so that the directory /var/www is not available to the user in the chroot jail. Look in /etc/ssh/sshd_config and examine the sftp directives. Do you see something like: Match group sftp ChrootDirectory /home/%u AllowTcpForwarding no ForceCommand internal-sftp The sshd_config man page is here . Basically, once the user is in /home/username in SFTP, that directory becomes / and references outside of /home/username are not available. In fact, a symlink like ln -s /var/www /home/username/www will look like you're trying to reach /home/username/var/www (i.e., /home/username is now / so any link that references /var/www must also be a subdirectory of /home/username in the context of the chroot). As a solution, you can turn off the chroot (but this will have other security implications, mainly with SFTP users having full rein over your filesystem). You can do a loop mount of /var/www into /home/username/www (something like mount --bind /var/www /home/username/www (check your documentation for mount ) which should work as you'd expect under chroot). You can also muck with the sshd_config file to exclude that one particular user from chroot (though, again, with security implications). I would try the bind mount first.
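For the bind-mount route, a short sketch of the two pieces involved (the paths match the question; adjust to taste):
# one-off:
mount --bind /var/www /home/username/www
# to make it survive reboots, an /etc/fstab line along these lines:
/var/www   /home/username/www   none   bind   0 0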
{ "source": [ "https://serverfault.com/questions/448647", "https://serverfault.com", "https://serverfault.com/users/145487/" ] }
449,048
"Can we upgrade our existing production EL5 servers to EL6?" A simple-sounding request from two customers with completely different environments prompted my usual best-practices answer of "yes, but it will require a coordinated rebuild of all of your systems "... Both clients feel that a complete rebuild of their systems is an unacceptable option for downtime and resource reasons... When asked why it was necessary to fully reinstall the systems, I didn't have a good answer beyond, "that's the way it is..." I'm not trying to elicit responses about configuration management ("Puppetize everything " doesn't always apply ) or how the clients should have planned better. This is a real-world example of environments that have grown and thrived in a production capacity, but don't see a clean path to move to the next version of their OS. Environment A: Non-profit organization with 40 x Red Hat Enterprise Linux 5.4 and 5.5 web, database servers and mail servers, running a Java web application stack, software load balancers and Postgres databases. All systems are virtualized on two VMWare vSphere clusters in different locations, each with HA, DRS, etc. Environment B: High-frequency financial trading firm with 200 x CentOS 5.x systems in multiple co-location facilities running production trading operations, supporting in-house development and back-office functions. The trading servers are running on bare-metal commodity server hardware. They have numerous sysctl.conf , rtctl , interrupt binding and driver tweaks in place to lower messaging latency. Some have custom and/or realtime kernels. The developer workstations are also running a similar version(s) of CentOS. In both cases, the environments are running well as-is. The desire to upgrade comes from a need for a newer application or feature available in EL6. For the non-profit firm, it's tied to Apache, the kernel and some things that will make the developers happy. In the trading firm, it's about some enhancements in the kernel, networking stack and GLIBC, which will make the developers happy. Both are things that can't be easily packaged or updated without drastically altering the operating system . As a systems engineer, I appreciate that Red Hat recommends full rebuilds when moving between major version releases. A clean start forces you to refactor and pay attention to configs along the way. Being sensitive to business needs of clients, I wonder why this needs to be such an onerous task . The RPM packaging system is more than capable of handling in-place upgrades, but it's the little details that get you: /boot requiring more space, new default filesystems, RPM possibly breaking mid-upgrade, deprecated and defunct packages... What's the answer here? Other distributions (.deb-based, Arch and Gentoo) seem to have this ability or a better path. Let's say we find the downtime to accomplish this task the right way: What should these clients do to avoid the same problem when EL7 is released and stabilizes? Or is this a case where people need to resign themselves to full rebuilds every few years? This seems to have gotten worse as Enterprise Linux has evolved... Or am I just imagining that? Has this dissuaded anyone from using Red Hat and derivative operating systems? I suppose there's the configuration management angle, but most Puppet installations I see do not translate well into environments with highly-customized application servers ( Environment B could have a single server whose ifconfig output looks like this ). 
I'd be interested in hearing suggestions on how configuration management can be used to help organizations get across the RHEL major version bump, though.
(Author's Note: This answer refers to RHEL 6 and prior versions. RHEL 7 now has a fully supported upgrade path from RHEL 6, the details of which are at the end.) To start, I should note that there are two ways to do the in-place upgrade: Drop in the installation DVD (or use the DVD image via iLO/iDRAC), boot from it and choose Upgrade, e.g. linux upgradeany . Update the redhat-release RPM manually, run yum distro-sync (this is oversimplified a bit) and reboot. Method 1 is merely unsupported. Method 2 is for Real Cowboys. In addition to the recommended fresh installs, I have done both of these... Do I need support? Support has two complementary meanings in our world. The first is that a product has a given feature (e.g. "Postfix supports SMTP"). The second is that the vendor will talk to you about it. Which definition is meant is not always clear from context. To accomplish a task, you obviously need support in the first sense. Where vendor support comes in is to assist you in resolving issues and giving the vendor feedback as to what features need to exist or be improved. Many sites pay a fortune for vendor support when they have the in-house expertise to resolve any issues that may arise, faster and even cheaper than the vendor could. Whether to buy vendor support is ultimately a business decision you will have to make (or advise management on). Why not do an in-place upgrade? This is what Red Hat says about it : Red Hat does not support in-place upgrades between any major versions of Red Hat Enterprise Linux. A major version is denoted by a whole number version change. For example, Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6 are both major versions of Red Hat Enterprise Linux. In-place upgrades across major releases do not preserve all system settings, services or custom configurations. Consequently, Red Hat strongly recommends fresh installations when upgrading from one major version to another. They further warn: However, note the following limitations before you choose to upgrade your system: Individual package configuration files may or may not work after performing an upgrade due to changes in various configuration file formats or layouts. If you have one of Red Hat's layered products (such as the Cluster Suite) installed, it may need to be manually upgraded after the Red Hat Enterprise Linux upgrade has been completed. Third party or ISV applications may not work correctly following the upgrade. Of course, they then describe how to do an in-place upgrade via method 1, just in case you really want to do it. The feature exists and Red Hat puts development time into it, so it is supported in that the feature exists. But if something goes wrong, Red Hat will tell you to install fresh; they will not provide vendor support for things that break as a result of the upgrade. For the record, I've never actually had a problem with an in-place upgrade of a RHEL/CentOS or Fedora system that I couldn't resolve myself. The typical problems come from renamed packages, third party repositories and the occasional version mismatch between the i386 and x86_64 architectures of a package. The installer is a bit better at handling these than yum , I think. How should I upgrade? I generally warn people that they should plan on a maintenance window every 3-4 years to update RHEL systems from one major version to the next. While upgrades generally go smoothly, the unexpected can always happen. 
For both of your environments, I expect an in-place upgrade would work, though I strongly recommend testing it thoroughly first. P2V a representative sample of the servers and run through the in-place upgrade on the virtual systems to see what problems you're going to run into. You can then plan the actual production upgrade based on better knowledge of what will happen. For a large deployment such as you have here, consider using Limoncelli's "one-some-many" approach. Upgrade one machine, see what problems occur, solve them, then use lessons learned when upgrading a small batch of machines, repeat the lessons learned thing, then when you believe you have all the kinks worked out, upgrade large batches of them. At a time like this, I also recommend taking a long hard look at your application deployment process. If it isn't sufficiently automated that you can kick it off with a single command and be reasonably sure that the app will be deployed correctly, then perhaps the developers need to get to work on that. Having such a deployment process would make it much easier to do a fresh installation of the newer version of EL and then deploy onto it. Will switching distributions help? Debian-based distributions do have a supported in-place upgrade method, and it mostly works, but it is not immune from problems. Lots of things broke for people upgrading from Ubuntu 10.04 LTS to 12.04 LTS via the supported method, for instance. It's not clear that Debian or Canonical are putting a sufficient amount of development time into "supporting" this feature, i.e., making sure it works. And you still actually have to buy vendor support for this distribution if you want someone to hold your hand. So I doubt you will gain much from switching to such a distribution. You may gain by switching to a rolling-release distribution such as Gentoo or Arch. However, this also doesn't make you immune to problems; it just means you have to deal with the upgrade problems continuously over the life of the server (e.g. whenever you or the developers decide to update something on the system), rather than all at once at a well-planned distribution upgrade time. You also have no vendor to provide support. What does the future hold? The Fedora Project is working on a tool to improve in-place upgrades. They had a tool called preupgrade which was abandoned and replaced with a new tool called fedup beginning with Fedora 18 . This was added to RHEL7 and now in-place upgrades have full support , at least from RHEL 6 to RHEL 7 . From my own experience I can say that while fedup still has some kinks , it is shaping up to be a very useful tool. CentOS is also experimenting with a rolling-release type of repository , but it only applies between minor versions (e.g. 6.3-6.4).
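For lab/P2V testing only, a very rough sketch of what "method 2" above tends to look like; the release RPM name is a placeholder, the distro-sync step assumes a yum version that supports it, and nothing here should be pointed at a production box without the testing regime described above:
# on a disposable P2V clone of the server:
rpm -Uvh centos-release-6-*.rpm   # placeholder: the new major release's release RPM(s)
yum clean all
yum distro-sync                   # pull every installed package to the new release's version
reboot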
{ "source": [ "https://serverfault.com/questions/449048", "https://serverfault.com", "https://serverfault.com/users/13325/" ] }
449,296
This is a canonical question about how Unix operating systems report memory usage. Similar Questions: Server refuses to use swap partition Memory Usage in LINUX I have production server that is running Debian 6.0.6 Squeeze #uname -a Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux Every day cron executes backup script as root: #crontab -e 0 5 * * * /root/sites_backup.sh > /dev/null 2>&1 #nano /root/sites_backup.sh #!/bin/bash str=`date +%Y-%m-%d-%H-%M-%S` tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz cd /home/backups/sites/ sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS cd ~ Everything works perfectly, but I notice that Munin's memory graph shows increase of cache and buffers after backup. Then I just download backup files and delete them. After deletion Munin's memory graph returns cache and buffers to the state that was before backup. Here's Munin graph: Externally hosted image was a dead link.
This is the same "problem" as from Server refuses to use swap partition and a few other similar questions on this site. ( High Memory Usage on Linux Server , Memory Usage in LINUX , Web Server Running Low in Memory , etc.) Pay attention to the fact that the memory consumption is from cache . This means it's keeping a file in memory. Cached memory is "free" memory. Instead of leaving the block of memory empty, your OS is keeping recently read files in that space. If an application does need that memory, it will be used by the application. Until then, it stands a chance to save your from having to read a file from the disk again if it is frequently referenced. According to this graph, your effective memory consumption hasn't changed at all for the entire duration of the graph.
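A quick way to see this on the box itself is free; on a procps of that era the row to look at is the one labelled -/+ buffers/cache, which is the memory figure with cache excluded (exact output format varies by procps version):
free -m
# the "Mem:" row counts cached file data as used;
# the "-/+ buffers/cache:" row shows what applications are really using,
# which is the number that should stay flat before and after the backup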
{ "source": [ "https://serverfault.com/questions/449296", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
449,395
There's a lot of contradictory information about Unix server partitioning out on the internet, so I need some advice on how to proceed. So far, on the servers in our test environment I didn't really care about partitioning and I configured a single monolithic / plus a swap partition. This partitioning scheme doesn't seem like a good idea for our production servers. I have found a good starting point here, but it seems very vague on the details. Basically I have a server on which I will be running a basic LAMP stack (Apache, PHP, and MySQL). It will have to handle file uploads (up to 2GB). The system has a 2TB RAID 1 array. I plan to set: / 100GB, /var 1000GB (apache files and mysql files will be here), /tmp 800GB (handles the php tmp file), /home 96GB, swap 4GB. Does this sound sane, or am I over-complicating things?
One thing to keep in mind when laying out your partitions is failure modes. Typically that question is of the form: "What happens when partition x fills up?" Dearest voretaq7 brought up the situation with a full / causing any number of difficult-to-diagnose issues. Let's look at some more specific situations. What happens if the partition storing your logs is full? You lose auditing/reporting data, and a full log partition is sometimes used by attackers to hide their activity. In some cases your system will not authenticate new users if it can't record their login event. What happens on an RPM based system when /var is full? The package manager will not install or update packages and, depending on your configuration, may fail silently. Filling up a partition is easy, especially when a user is capable of writing to it. For fun, run this command and see how quickly you can make a pretty big file: cat /dev/zero > zerofile . It goes beyond filling up partitions as well: when you place locations on different mount points, you can also customize their mount options. What happens when /dev/ is not mounted with noexec ? Since /dev is typically assumed to be maintained by the OS and to contain only devices, it was frequently (and sometimes still is) used to hide malicious programs. Leaving off noexec allows you to launch binaries stored there. For all these reasons, and more, many hardening guides will discuss partitioning as one of the first steps to be performed. In fact, if you are building a new server, how to partition the disk is very nearly the first thing you have to decide on, and often the most difficult to change later. There exists a group called the Center for Internet Security that produces gobs of easy-to-read configuration guides. You can likely find a guide for your specific operating system and see what specifics they recommend. If we look at RedHat Enterprise Linux 6, the recommended partitioning scheme is this:
# Mount point      Mount options
/tmp               nodev,nosuid,noexec
/var
/var/tmp           bind (/tmp)
/var/log
/var/log/audit
/home              nodev
/dev/shm           nodev,nosuid,noexec
The principle behind all of these changes is to prevent the partitions from impacting each other and/or to limit what can be done on a specific partition. Take the options for /tmp for example. What that says is that no device nodes can be created there, no programs can be executed from there, and the set-uid bit can't be set on anything. By its very nature, /tmp is almost always world writable and is often a special type of filesystem that only exists in memory. This means that an attacker could use it as an easy staging point to drop and execute malicious code; then crashing (or simply rebooting) the system will wipe clean all the evidence. Since /tmp doesn't require any of that functionality, we can easily disable those features and prevent that situation. The log storage places, /var/log and /var/log/audit , are carved off to help buffer them from resource exhaustion. Additionally, auditd can perform some special things (typically in higher security environments) when its log storage begins to fill up. By placing it on its own partition, this resource detection performs better. To be more verbose, and quote mount(8) , this is exactly what the above used options are:
noexec - Do not allow direct execution of any binaries on the mounted file system. (Until recently it was possible to run binaries anyway using a command like /lib/ld*.so /mnt/binary. This trick fails since Linux 2.4.25 / 2.6.0.)
nodev - Do not interpret character or block special devices on the file system.
nosuid - Do not allow set-user-identifier or set-group-identifier bits to take effect. (This seems safe, but is in fact rather unsafe if you have suidperl(1) installed.)
From a security perspective these are very good options to know, since they'll allow you to put protections on the filesystem itself. In a highly secure environment you may even add the noexec option to /home . It'll make it harder for your standard user to write shell scripts for processing data, say analyzing log files, but it will also prevent them from executing a binary that will elevate privileges. Also, keep in mind that the root user's default home directory is /root . This means it will be in the / filesystem, not in /home . Exactly how much you give to each partition can vary greatly depending on the system's workload. A typical server that I've managed will rarely require human interaction, and as such the /home partition doesn't need to be very big at all. The same applies to /var since it tends to store rather ephemeral data that gets created and deleted frequently. However, a web server typically uses /var/www as its playground, meaning that either that needs to be on a separate partition as well or /var/ needs to be made big. In the past I have recommended the following as baselines.
# Mount Point     Min Size (MB)   Max Size (MB)
/                 4000            8000
/home             1000            4000
/tmp              1000            2000
/var              2000            4000
swap              1000            2000
/var/log/audit    250
These need to be reviewed and adjusted according to the system's purpose, and how your environment operates. I would also recommend using LVM and against allocating the entire disk. This will allow you to easily grow, or add, partitions if such things are required.
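To make the mount options concrete, here is a minimal /etc/fstab sketch under the assumptions above. The volume group and logical volume names (vg00, lv_tmp, and so on) are hypothetical placeholders, not something from this answer; adjust them to your own LVM layout, and only keep the options that match your hardening guide.
/dev/mapper/vg00-lv_root   /               ext4    defaults                      1 1
/dev/mapper/vg00-lv_tmp    /tmp            ext4    defaults,nodev,nosuid,noexec  1 2
/dev/mapper/vg00-lv_var    /var            ext4    defaults                      1 2
/dev/mapper/vg00-lv_log    /var/log        ext4    defaults                      1 2
/dev/mapper/vg00-lv_audit  /var/log/audit  ext4    defaults                      1 2
/dev/mapper/vg00-lv_home   /home           ext4    defaults,nodev                1 2
/tmp                       /var/tmp        none    bind                          0 0
tmpfs                      /dev/shm        tmpfs   defaults,nodev,nosuid,noexec  0 0
After editing, mount -a (or a reboot) applies the options, and mount | grep -E '/tmp|/home' is a quick way to confirm they took effect.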
{ "source": [ "https://serverfault.com/questions/449395", "https://serverfault.com", "https://serverfault.com/users/145978/" ] }
449,416
This might seem a stupid question, but why do Ethernet cables have 8 wires? Cat5 cabling was only using 4 of the 8 wires, so only 4 are actually 'needed'. Why not 12 or 16 wires?
This is an interesting question since I've never seen anything that authoritatively states the design decisions behind that choice. Everything that I've come across, whether on the Interwebs or from conversation with people smarter than me in this area, seems to indicate two possibilities: future proofing and extra shielding.
Future Proofing
By the time of the Cat5 spec we had seen the explosion of data cable runs. Telephone had been using Cat3, or something similar, for some time, serial connections had been run throughout university campuses, ThickNet had spidered its way around, ThinNet had started to see significant use in microcomputer labs and in some cases offices. It was obvious that networking computing equipment was the wave of the future. We had also learned the terrible costs of changing out cabling to meet the demands of longer segments or higher speeds. Let's face it, replacing cabling is a nightmarish chore and expensive. The notion of limiting this cost by developing a cable that could be run, and left in place for some length of time, was definitely an appealing one. So forward thinking engineers, who were probably tired of replacing wiring, could easily have found it worthwhile to design extra pairs into the spec. After all, this was at a time when the price of bulk copper was relatively low. Which is more expensive - adding 4 extra wires or having a team of people remove old wiring and add new?
Extra Shielding
Since typical Cat5 is UTP (unshielded twisted pair) it does not contain the extra grounded foil to slough off extraneous electro-magnetic interference. It has been described to me that, when properly grounded, the unused wires will help buffer the in-use pairs in a similar, albeit less effective, way to actual shielding. This could have been an important feature in the long runs and (electrically) noisy environments we were accustomed to running cabling through at the time. To me the future proofing argument is the most compelling.
{ "source": [ "https://serverfault.com/questions/449416", "https://serverfault.com", "https://serverfault.com/users/145489/" ] }
449,651
This is a Canonical Question about using cron & crontab. You have been directed here because the community is fairly sure that the answer to your question can be found below. If your question is not answered below then the answers will help you gather information that will help the community help you. This information should be edited into your original question. The answer for ' Why is my crontab not working, and how can I troubleshoot it? ' can be seen below. This addresses the cron system with the crontab highlighted.
How to fix all of your crontab related woes/problems (Linux) This is a community wiki; if you notice anything incorrect with this answer or have additional information then please edit it. First, basic terminology: cron(8) is the daemon that executes scheduled commands. crontab(1) is the program used to modify user crontab(5) files. crontab(5) is a per user file that contains instructions for cron(8). Next, education about cron: Every user on a system may have their own crontab file. The location of the root and user crontab files are system dependent but they are generally below /var/spool/cron . There is a system-wide /etc/crontab file, and the /etc/cron.d directory may contain crontab fragments which are also read and actioned by cron. Some Linux distributions (e.g., Red Hat) also have /etc/cron.{hourly,daily,weekly,monthly} which are directories, scripts inside which will be executed every hour/day/week/month, with root privilege. root can always use the crontab command; regular users may or may not be granted access. When you edit the crontab file with the command crontab -e and save it, crond checks it for basic validity but does not guarantee your crontab file is correctly formed. There is a file called cron.deny which will specify which users cannot use cron. The cron.deny file location is system dependent and can be deleted, which will allow all users to use cron. If the computer is not powered on or the crond daemon is not running, and the date/time for a command to run has passed, crond will not catch up and run the jobs that were missed. crontab particulars, how to formulate a command: A crontab command is represented by a single line. You cannot use \ to extend a command over multiple lines. The hash ( # ) sign represents a comment, which means anything on that line is ignored by cron. Leading whitespace and blank lines are ignored. Be VERY careful when using the percent ( % ) sign in your command. Unless they are escaped ( \% ) they are converted into newlines, and everything after the first non-escaped % is passed to your command on stdin. There are two formats for crontab files:
User crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7)
# |  |  |  |  |
# *  *  *  *  *  command to be executed
System wide /etc/crontab and /etc/cron.d fragments
# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7)
# |  |  |  |  |
# *  *  *  *  *  user-name  command to be executed
Notice that the latter requires a user-name. The command will be run as the named user. The first 5 fields of the line represent the time(s) when the command should be run. You can use numbers or, where applicable, day/month names in the time specification. The fields are separated by spaces or tabs. A comma ( , ) is used to specify a list, e.g. 1,4,6,8 which means run at 1,4,6,8. Ranges are specified with a dash ( - ) and may be combined with lists, e.g. 1-3,9-12 which means between 1 and 3 then between 9 and 12. The / character can be used to introduce a step, e.g. 2/5 which means starting at 2 then every 5 (2,7,12,17,22...). They do not wrap past the end. An asterisk ( * ) in a field signifies the entire range for that field (e.g. 0-59 for the minute field).
Ranges and steps can be combined, e.g. */2 signifies starting at the minimum for the relevant field then every 2, e.g. 0 for minutes (0,2...58), 1 for months (1,3 ... 11) etc.
Debugging cron commands
Check the mail! By default cron will mail any output from the command to the user it is running the command as. If there is no output there will be no mail. If you want cron to send mail to a different account then you can set the MAILTO environment variable in the crontab file, e.g.
MAILTO=[email protected]
1 2 * * * /path/to/your/command
Capture the output yourself. You can redirect stdout and stderr to a file. The exact syntax for capturing output may vary depending on what shell cron is using. Here are two examples which save all output to a file at /tmp/mycommand.log :
1 2 * * * /path/to/your/command &>/tmp/mycommand.log
1 2 * * * /path/to/your/command >/tmp/mycommand.log 2>&1
Look at the logs. Cron logs its actions via syslog, which (depending on your setup) often goes to /var/log/cron or /var/log/syslog . If required you can filter the cron statements with e.g.
grep CRON /var/log/syslog
Now that we've gone over the basics of cron, where the files are and how to use them, let's look at some common problems.
Check that cron is running. If cron isn't running then your commands won't be scheduled ...
ps -ef | grep cron | grep -v grep
should get you something like
root 1224 1 0 Nov16 ? 00:00:03 cron
or
root 2018 1 0 Nov14 ? 00:00:06 crond
If not, restart it:
/sbin/service cron start
or
/sbin/service crond start
There may be other methods; use what your distro provides.
cron runs your command in a restricted environment. What environment variables are available is likely to be very limited. Typically, you'll only get a few variables defined, such as $LOGNAME , $HOME , and $PATH . Of particular note, the PATH is restricted to /bin:/usr/bin . The vast majority of "my cron script doesn't work" problems are caused by this restrictive path . If your command is in a different location you can solve this in a couple of ways: Provide the full path to your command.
1 2 * * * /path/to/your/command
Provide a suitable PATH in the crontab file
PATH=/bin:/usr/bin:/path/to/something/else
1 2 * * * command
If your command requires other environment variables you can define them in the crontab file too.
cron runs your command with cwd == $HOME. Regardless of where the program you execute resides on the filesystem, the current working directory of the program when cron runs it will be the user's home directory. If you access files in your program, you'll need to take this into account if you use relative paths, or (preferably) just use fully-qualified paths everywhere, and save everyone a whole lot of confusion.
The last command in my crontab doesn't run. Cron generally requires that commands are terminated with a new line. Edit your crontab; go to the end of the line which contains the last command and insert a new line (press enter).
Check the crontab format. You can't use a user-formatted crontab for /etc/crontab or the fragments in /etc/cron.d , and vice versa. A user-formatted crontab does not include a username in the 6th position of a row, while a system-formatted crontab includes the username and runs the command as that user.
I put a file in /etc/cron.{hourly,daily,weekly,monthly} and it doesn't run. Check that the filename doesn't have an extension (see run-parts ). Ensure the file has execute permissions. Tell the system what to use when executing your script (e.g. put #!/bin/sh at the top).
Cron date related bugs. If your date was recently changed by a user or system update, timezone or other, then cron will start behaving erratically and exhibit bizarre bugs, sometimes working, sometimes not. This is cron's attempt to "do what you want" when the time changes out from underneath it. The "minute" field will become ineffective after the hour is changed. In this scenario, only asterisks would be accepted. Restart cron and try it again without connecting to the internet (so the date doesn't have a chance to reset to one of the time servers).
Percent signs, again. To emphasise the advice about percent signs, here's an example of what cron does with them:
# cron entry
* * * * * cat >$HOME/cron.out%foo%bar%baz
will create the ~/cron.out file containing the 3 lines
foo
bar
baz
This is particularly intrusive when using the date command. Be sure to escape the percent signs
* * * * * /path/to/command --day "$(date "+\%Y\%m\%d")"
How to use sudo in cron jobs: when running as a non-root user, crontab -e will open the user's crontab, while sudo crontab -e will open the root user's crontab. It's not recommended to run sudo commands in a cron job, so if you're trying to run a sudo command in a user's cron, try moving that command to root's cron and remove sudo from the command.
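One more debugging trick that often pays off, offered here as a generic sketch rather than part of the original answer: have cron dump the environment it actually runs with, and compare it against your interactive shell. The file path below is just an example; remove the temporary entry once you have what you need.
# temporary crontab entry to capture cron's environment
* * * * * env > /tmp/cron-env.txt 2>&1
# then, from a normal shell, compare the two environments
diff <(sort /tmp/cron-env.txt) <(env | sort)
Differences in PATH, HOME, or shell-specific variables usually point straight at why a command works interactively but fails under cron.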
{ "source": [ "https://serverfault.com/questions/449651", "https://serverfault.com", "https://serverfault.com/users/127242/" ] }
450,389
I'm wondering how to safely remove a domain user profile from a computer that is a part of a domain. I don't want to delete the account from the domain itself, I just need to remove the profile from this computer, to do some cleanup. I'm currently on a Vista Business computer, but we also have Win XP Pro and Win 7 Pro.
Method 1 (easy and safe)
Open up "Control Panel | System and Security | System". In the dialog click on "Advanced system settings" (requires Admin rights). The "System Properties" dialog will be displayed. Make sure you are on the "Advanced" tab. In the "User Profiles" section click on "Settings". The "User Profiles" dialog is displayed. Select the account. Hit Delete.
Method 2 (slight variation of method 1)
Start | Run sysdm.cpl , switch to the "Advanced" tab. In the "User Profiles" section click on "Settings". The "User Profiles" dialog is displayed. Select the account. Hit Delete. A greyed out button possibly means that the registry hive has not been released by the operating system, as pointed out by @joeqwerty in the comments.
Method 3 (manual and prone to errors)
Delete the C:\Users\[ACCOUNT] directory. That leaves some registry entries behind that have to be manually deleted as follows. Open Regedit with Administrator permissions (Run as Administrator). Select the HKEY_USERS branch. Search for the domain account without the domain (e.g. if the login is DOMAIN\ACCOUNT then search for ACCOUNT). Keep on searching until the status bar shows Computer\HKEY_USERS\[SID]\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders . There should be a large list of your ACCOUNT's folders, e.g. C:\Users\ACCOUNT\Desktop . You are in the right HKEY_USERS\[SID]\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders branch if the ACCOUNT in "Shell Folders" matches the ACCOUNT you just manually deleted from the C:\Users\[ACCOUNT] directory. This branch [SID] can be exported and/or deleted to clean up the last of the user profile.
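A scriptable alternative, added here as a hedged suggestion rather than part of the original answer: on Windows Vista SP1/7 and later, the Win32_UserProfile WMI class exposes the same profile list that the GUI uses, so PowerShell can remove the directory and the registry entries in one go. The path C:\Users\ACCOUNT below is a placeholder; run the listing command first and make sure the filter only matches the profile you intend to delete (and that the profile is not currently loaded).
# List profiles, their folders, and whether they are loaded
Get-WmiObject Win32_UserProfile | Select-Object LocalPath, SID, Loaded
# Remove the matching, unloaded profile
Get-WmiObject Win32_UserProfile | Where-Object { $_.LocalPath -eq 'C:\Users\ACCOUNT' -and -not $_.Loaded } | Remove-WmiObject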
{ "source": [ "https://serverfault.com/questions/450389", "https://serverfault.com", "https://serverfault.com/users/18026/" ] }
450,628
I recently installed Apache 2.4 on my local machine, together with PHP 5.4.8 using PHP-FPM. Everything went quite smoothly (after a while...) but there is still a strange error: I configured Apache for PHP-FPM like this: <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 </VirtualHost> It works, for example if I call http://localhost/info.php I get the correct phpinfo() (it is just a test file). If I call a directory however, I get a 404 with body File not found. and in the error log: [Tue Nov 20 21:27:25.191625 2012] [proxy_fcgi:error] [pid 28997] [client ::1:57204] AH01071: Got error 'Primary script unknown\n' Update I now tried doing the proxying with mod_rewrite: <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 [L,P] </VirtualHost> But the problem is: it is always redirecting, because on http://localhost/ automatically http://localhost/index.php is requested, because of DirectoryIndex index.php index.html Update 2 Ok, so I think "maybe check whether there is a file to give to the proxy first: <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 [L,P] </VirtualHost> Now the complete rewriting does not work anymore... Update 3 Now I have this solution: <VirtualHost *:80> ServerName localhost DocumentRoot "/Users/apfelbox/WebServer" RewriteEngine on RewriteCond /Users/apfelbox/WebServer/%{REQUEST_FILENAME} -f RewriteRule ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/Users/apfelbox/WebServer/$1 [L,P] </VirtualHost> First check, that there is a file to pass to PHP-FPM (with the full and absolute path) and then do the rewriting. This does not work when using URL rewriting inside a subdirectory, also it fails for URLs like http://localhost/index.php/test/ So back to square one. Any ideas?
After hours of searching and reading Apache documentation I've come up with a solution that allows using the pool, and also allows the Rewrite directive in .htaccess to work even when the URL contains .php files.
<VirtualHost ...>
    ...
    # This is to forward all PHP to php-fpm.
    <FilesMatch \.php$>
        SetHandler "proxy:unix:/path/to/socket.sock|fcgi://unique-domain-name-string/"
    </FilesMatch>
    # Set some proxy properties (the string "unique-domain-name-string" should match
    # the one set in the FilesMatch directive).
    <Proxy fcgi://unique-domain-name-string>
        ProxySet connectiontimeout=5 timeout=240
    </Proxy>
    # If the php file doesn't exist, disable the proxy handler.
    # This will allow .htaccess rewrite rules to work and
    # the client will see the default 404 page of Apache.
    RewriteCond %{REQUEST_FILENAME} \.php$
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} !-f
    RewriteRule (.*) - [H=text/html]
</VirtualHost>
As per Apache documentation, the SetHandler proxy parameter requires Apache HTTP Server 2.4.10 or later. I hope that this solution will help you too.
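As a hedged aside (not from the original answer): this configuration relies on mod_proxy, mod_proxy_fcgi and mod_rewrite being loaded, and on an Apache new enough for the proxy SetHandler syntax. On Debian/Ubuntu-style layouts the modules can be enabled roughly like this; on other layouts the equivalent LoadModule lines go in httpd.conf.
a2enmod proxy proxy_fcgi rewrite
apachectl configtest && service apache2 restart
# confirm the server is 2.4.10 or later for SetHandler "proxy:..."
apachectl -v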
{ "source": [ "https://serverfault.com/questions/450628", "https://serverfault.com", "https://serverfault.com/users/112126/" ] }
450,796
(This is a problem with ssh, not gitolite) I've configured gitolite on my home server (ubuntu 12.04 server, open-ssh). I want an special identityfile to administer the repositories, so I need to access throught ssh to my own host ussing two different identity keys. This is the content of my .ssh/config file: Host gitadmin.gammu.com User git IdentityFile /home/alvaro/.ssh/id_gitolite_mantra Host git.gammu.com User git IdentityFile /home/alvaro/.ssh/id_alvaro_mantra This is the content of my hosts file: # Git 127.0.0.1 gitadmin.gammu.com 127.0.0.1 git.gammu.com So I should be able to communicate with gitolite this way to access with the "normal" account: $ssh git.gammu.com and this way to access with the administrative account: $ssh gitadmin.gammu.com When I try to access with the normal account, all is ok: alvaro@mantra:~/.ssh$ ssh git.gammu.com PTY allocation request failed on channel 0 hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5 the gitolite config gives you the following access: @R_ @W_ testing Connection to git.gammu.com closed. When I do the same with the administrative account: alvaro@mantra:~$ ssh gitadmin.gammu.com PTY allocation request failed on channel 0 hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5 the gitolite config gives you the following access: @R_ @W_ testing Connection to gitadmin.gammu.com closed. It should show the administrative repository. If I launch ssh with verbose option: ssh -vvv gitadmin.gammu.com ... debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7f7cb6c0fbc0) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7f7cb6c044d0) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 ... It's offering the key id_alvaro_mantra, and it shouldn't!! The same happens when I specify the key with the -i option: ssh -i /home/alvaro/.ssh/id_gitolite_mantra -vvv gitadmin.gammu.com ... 
debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7fa365237f90) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365230550) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365231050) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26 debug3: sign_and_send_pubkey: RSA 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26 debug1: Authentication succeeded (publickey). ... What's happening? I'm missing something, but I can't find what. These are the contents of my home dir: -rw-rw-r-- 1 alvaro alvaro 395 nov 14 18:00 authorized_keys -rw-rw-r-- 1 alvaro alvaro 326 nov 21 10:21 config -rw------- 1 alvaro alvaro 137 nov 20 20:26 environment -rw------- 1 alvaro alvaro 1766 nov 20 21:41 id_alvaromaceda.es -rw-r--r-- 1 alvaro alvaro 404 nov 20 21:41 id_alvaromaceda.es.pub -rw------- 1 alvaro alvaro 1766 nov 14 17:59 id_alvaro_mantra -rw-r--r-- 1 alvaro alvaro 395 nov 14 17:59 id_alvaro_mantra.pub -rw------- 1 alvaro alvaro 771 nov 14 18:03 id_developer_mantra -rw------- 1 alvaro alvaro 1679 nov 20 12:37 id_dos_pruebasgit -rw-r--r-- 1 alvaro alvaro 395 nov 20 12:37 id_dos_pruebasgit.pub -rw------- 1 alvaro alvaro 1679 nov 20 12:46 id_gitolite_mantra -rw-r--r-- 1 alvaro alvaro 397 nov 20 12:46 id_gitolite_mantra.pub -rw------- 1 alvaro alvaro 1675 nov 20 21:44 id_gitpruebas.es -rw-r--r-- 1 alvaro alvaro 408 nov 20 21:44 id_gitpruebas.es.pub -rw------- 1 alvaro alvaro 1679 nov 20 12:34 id_uno_pruebasgit -rw-r--r-- 1 alvaro alvaro 395 nov 20 12:34 id_uno_pruebasgit.pub -rw-r--r-- 1 alvaro alvaro 2434 nov 21 10:11 known_hosts There are a bunch of other keys which aren't offered... why id_alvaro_mantra is offered and not the other keys? I can't understand. I need some help, don't know where to look....
This is expected behaviour according to the manpage of ssh_config :
IdentityFile
    Specifies a file from which the user's DSA, ECDSA or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa and ~/.ssh/id_rsa for protocol version 2. Additionally, any identities represented by the authentication agent will be used for authentication. [...]
    It is possible to have multiple identity files specified in configuration files; all these identities will be tried in sequence. Multiple IdentityFile directives will add to the list of identities tried (this behaviour differs from that of other configuration directives).
Basically, specifying IdentityFile s just adds keys to the list of identities the SSH agent has already presented to the client. Try overriding this behaviour with this at the bottom of your .ssh/config file:
Host *
    IdentitiesOnly yes
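If you would rather not change the behaviour for every host, the same option can be set per host. This is a sketch reusing the host aliases and key paths from the question; it is equivalent in effect but scoped to those entries only.
Host gitadmin.gammu.com
    User git
    IdentityFile /home/alvaro/.ssh/id_gitolite_mantra
    IdentitiesOnly yes

Host git.gammu.com
    User git
    IdentityFile /home/alvaro/.ssh/id_alvaro_mantra
    IdentitiesOnly yes
With IdentitiesOnly set, ssh offers only the IdentityFile configured for that host instead of everything the agent holds.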
{ "source": [ "https://serverfault.com/questions/450796", "https://serverfault.com", "https://serverfault.com/users/146430/" ] }
450,940
I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the metadata Website Redirect Location to handle the old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution that I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html So I created an empty file named solution inside my bucket with the correct metadata: Website Redirect Location = /product.html The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly from the S3 domain. I have also set up a CloudFront distribution based on the website bucket. And when I try to access through my distribution, the redirect does not work, i.e.: http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead. So my question is how to keep the redirection through the CloudFront distribution? Or any idea on how to handle the redirection without hurting SEO? Thanks
I ran into this problem recently and I found a workaround that seemed to work. I created a Cloudfront distribution with a custom origin pointing to the S3 static website hostname instead of the bucket hostname. In the OP's case, the desired origin would be: mysite.s3-website-us-east-1.amazonaws.com Hitting a Cloudfront distribution just using the bucket as the origin does not work because the bucket does not actually serve redirects. It only serves files and stores metadata. Hope that helps.
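A quick way to see the difference, offered as an illustrative sketch (the hostnames are the placeholders from the question): the website endpoint answers with a 301 and a Location header, while the plain bucket endpoint just returns the empty object. Once the distribution's origin is the website endpoint, the CloudFront URL should show the same 301.
# Website endpoint serves the redirect
curl -sI http://mysite.s3-website-us-east-1.amazonaws.com/solution | grep -Ei '^(HTTP|Location)'
# After pointing the distribution at that origin, repeat the check through CloudFront
curl -sI http://xxxx123.cloudfront.net/solution | grep -Ei '^(HTTP|Location)'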
{ "source": [ "https://serverfault.com/questions/450940", "https://serverfault.com", "https://serverfault.com/users/143492/" ] }
451,381
I have a firewall/router (not doing NAT). I've googled and seen conflicting answers. It seems UDP 500 is the common one. But the others are confusing. 1701, 4500. And some say I need to also allow gre 50, or 47, or 50 & 51. Ok, which ports are the correct ones for IPSec/L2TP to work in a routed environment without NAT? i.e. I want to use the built in windows client to connect to a VPN behind this router/firewall. Perhaps a good answer here is to specify which ports to open for different situations. I think this would be useful for many people.
Here are the ports and protocols:
Protocol: UDP, port 500 (for IKE, to manage encryption keys)
Protocol: UDP, port 4500 (for IPSEC NAT-Traversal mode)
Protocol: ESP, value 50 (for IPSEC)
Protocol: AH, value 51 (for IPSEC)
Also, Port 1701 is used by the L2TP Server, but connections should not be allowed inbound to it from outside. There is a special firewall rule to allow only IPSEC secured traffic inbound on this port. If using IPTABLES, and your L2TP server sits directly on the internet, then the rules you need are:
iptables -A INPUT -i $EXT_NIC -p udp --dport 500 -j ACCEPT
iptables -A INPUT -i $EXT_NIC -p udp --dport 4500 -j ACCEPT
iptables -A INPUT -i $EXT_NIC -p 50 -j ACCEPT
iptables -A INPUT -i $EXT_NIC -p 51 -j ACCEPT
iptables -A INPUT -i $EXT_NIC -p udp -m policy --dir in --pol ipsec -m udp --dport 1701 -j ACCEPT
Where $EXT_NIC is your external network interface card name, e.g. ppp0.
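To confirm traffic is actually arriving on those ports and protocols, a capture filter such as the one below can help. This is a generic debugging sketch, not part of the original answer, and eth0 is a placeholder for your external interface.
tcpdump -ni eth0 'udp port 500 or udp port 4500 or ip proto 50 or ip proto 51'
If you see UDP 500/4500 but never any ESP (protocol 50), that usually points at a firewall in the path dropping the IPsec payload rather than a port problem on your side.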
{ "source": [ "https://serverfault.com/questions/451381", "https://serverfault.com", "https://serverfault.com/users/14631/" ] }
451,387
This is more of a curiosity then a real problem, I am just to lazy to reboot or log off my laptop. I have connected to a network share on a Windows server with domain credentials from a non-domain Windows 7 machine, I didn't mark the option to remember the password. The share is let's say \\10.10.10.10\folder . I have changed the password for that domain account in the meantime, and now when I try to access that share I get the following error: Logon failure: unknown user name or bad password I have tried the following on the client side: deleting cached credentials in Credential Manager running net use delete running net session \\ip.of.the.server /delete gives me "A session does not exist with that computer name." running net use \\10.10.10.10\folder /u:DOMAIN\USER password gives me "The command completed successfully.", but I still get the same unknown user name or bad password when trying to access the share from Windows Explorer mapping the share as a network drive from GUI, but then I get The network folder specified is currently mapped using a different user name and password. To connect using a different user name and password, first disconnect any existing mappings to this network share. running net use to see connections, I get that there are no connections in the list killing explorer.exe and starting it again. I have tried the following on the server side: going to Computer Management > Shared folders > Sessions to kill the session with my username rebooting the server I have managed to access the share using the domain name instead of the IP address, but I am curios. Does anybody know any way how to delete the cached credentials in this case? Where are credentials cached when you don't mark the remember password option when accessing the share, they are not shown in Credential Manager and there is no mapping shown when you run net use.
NOT FOR WINDOWS 10 (this answer is for WINDOWS 7)
To delete all network authentication:
C:\> net use * /d
To view current network connections:
C:\> net use
IMPORTANT NOTE: I tested this on Windows 7 SP1 64-bit and it works. After running the command, you need to go to Task Manager and end the explorer.exe process, then reopen it. To reopen it, go to Run and enter explorer.exe . Now the connection information for the session is fully cleared.
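A related place to check, added as a hedged suggestion rather than part of the original answer: credentials that were saved explicitly live in the Credential Manager store and can be listed and removed from the command line with cmdkey. The target name shown by /list (an IP address, a hostname, or a Domain:target=... entry) is what you pass to /delete.
cmdkey /list
cmdkey /delete:10.10.10.10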
{ "source": [ "https://serverfault.com/questions/451387", "https://serverfault.com", "https://serverfault.com/users/137553/" ] }
451,528
On my Debian machine I deleted /bin/bash by accident. Is there a way to get it back without reinstalling the machine? If it helps: I'm still logged in. I guess once I'm out I cannot log in, since it's my login shell.
ln -s /bin/sh /bin/bash
apt-get install --reinstall bash
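A brief note on why this works, plus a verification step that is not part of the original answer: the temporary symlink gives anything hard-coded to /bin/bash (including your login shell) something to execute until the package reinstall restores the real binary. Afterwards you can confirm the symlink was replaced:
ls -l /bin/bash      # should be a regular file again, not a link to /bin/sh
bash --version       # sanity check that the restored binary runs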
{ "source": [ "https://serverfault.com/questions/451528", "https://serverfault.com", "https://serverfault.com/users/76180/" ] }
451,601
I read the man page of ip and still do not understand what src is, and I could not find much documentation. Please, if you can explain it thoroughly or point to some link, that would make a good answer.
When adding a route to a multihomed host, you might want to have control over the source IP address your host is sending from when starting communications using this route. This is what src is for. A short example: you have a host with two interfaces and the IP addresses 192.168.1.123/24 and 10.45.22.12/24. You are adding a route to 78.22.45.0/24 via 10.45.22.1 and want to make sure you are not sending to 78.22.45.0/24 using the 192.168.1.123 address (maybe because the network 78.22.45.0/24 has no route back to 192.168.1.0/24 or because you do not want your traffic to take this route for one reason or the other): ip route add 78.22.45.0/24 via 10.45.22.1 src 10.45.22.12 Note that the src you are giving would only affect the traffic originating at your very host. If a foreign packet is being routed, it obviously would already have a source IP address so it would be passed on unaltered (unless you are using NAT of course, but this is an entirely different matter). Also, this setting might be overridden by a process specifically choosing to bind to a specific address instead of using the defaults when initiating connections (rather rare).
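To see which source address the kernel would actually pick for a given destination — useful for checking that a src hint took effect — ip route get prints the chosen route, including the src field. The addresses below reuse the example from this answer, and the interface name in the sample output is hypothetical.
ip route add 78.22.45.0/24 via 10.45.22.1 src 10.45.22.12
ip route get 78.22.45.7
# 78.22.45.7 via 10.45.22.1 dev eth1 src 10.45.22.12 ...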
{ "source": [ "https://serverfault.com/questions/451601", "https://serverfault.com", "https://serverfault.com/users/146775/" ] }
452,268
The hosts file on Windows computers is used to bind certain name strings to specific IP addresses to override other name resolution methods. Often, one decides to change the hosts file, and discovers that the changes refuse to take effect, or that even old entries of the hosts file are ignored thereafter. A number of "gotcha" mistakes can cause this, and it can be frustrating to figure out which one. When faced with the problem of Windows ignoring a hosts file, what is a comprehensive troubleshoot protocol that may be followed? This question has duplicates on SO, such as HOSTS file being ignored However, these tend to deal with a specific case, and once whatever mistake the OP made is found out, the discussion is over. If you don't happen to have made the same error, such a discussion isn't very useful. So I thought it would be more helpful to have a general protocol for resolving all hosts-related issues that would cover all cases.
Based on my own experience and what I encountered while Googling, here are some things to try: 1. Did you check that it works correctly? Changes to hosts should take effect immediately, but Windows caches name resolution data so for some time the old records may be used. Open a command line (Windows+R, cmd , Enter) and type: ipconfig /flushdns To drop the old data. To check if it works, use (assuming you have an ipv4 entry in your hosts for www.example.com , or an ipv6 entry in your hosts for ipv6.example.com): ping -4 www.example.com -n 1 ping -6 www.example.com -n 1 And see if it uses the correct IP. If yes, your hosts file is fine and the problem is elsewhere. Also, you can reset the NetBios cache with (open the console as an admin or it will fail): nbtstat -R You can check the current data in the DNS cache with: ipconfig /displaydns | more NB: nslookup does not look at the hosts file. See NSLOOKUP and NBLOOKUP give one IP address; PING finds another 2. Basics Is your hosts file named correctly? It should be hosts and not host , etc. Is the extension correct? It should have no extension ( hosts not hosts.txt ) - be careful if you have configured windows to hide known extensions, check the properties to be sure: The correct hosts file's type will show up as just "File". Did you follow the correct syntax ? Did you accidentally prefix lines with a hash ( # ) which indicates comments? Did you take care of all variants ( www.example.com and example.com - safest to just add both)? 3. Whitespace The format for each line is IP address , then a horizontal tab (escape code \t , ASCII HT , hex 0x09 ) or a single space (hex 0x20 ), then the host name, ie. www.example.com , then finally a carriage return followed by a line feed, (escape codes \r\n , ASCII CRLF , hex 0x0d 0x0a ). Sample entries, using Unicode control pictures to indicate control characters. (Don't copy and paste these into your hosts file!) 192.0.2.1␉www.example.com␍␊ 2001:db8:8:4::2␉ipv6.example.com␍␊ The individual bytes may be viewed in Notepad++ with the hex editor plugin . Notepad++ will also show special characters (View -> Show Symbol) so you can easily inspect the number and kind of whitespace characters. If you copied and pasted hosts entries from somewhere, you may end up with multiple spaces. In theory hosts supports multiple spaces separating the two columns, but it's another thing to try if nothing else works. To be on the safe side, make sure all lines in your hosts file either use tabs or spaces, not both. Lastly, terminate the file with a blank line. 4. Registry Key There is a registry key specifying the location of the hosts file. Supposedly, Windows doesn't actually support putting the hosts file in other locations, but you might want to check. The key is: \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath The entry should be: %SystemRoot%\System32\drivers\etc Or, in a Command Prompt window, type: reg query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -v DataBasePath which should display something similar to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters DataBasePath REG_EXPAND_SZ %SystemRoot%\System32\drivers\etc 5. Permissions Sometimes there are issues with permissions on the file, the file attributes, and similar things. To recreate the file with default permissions: Create a new text file on your desktop. Copy and paste the contents of your current hosts file into this file in Notepad. 
Save the new text file and rename it to hosts . Copy ( do not move ) the file to your %SystemRoot%\System32\drivers\etc directory, and overwrite the old file. Last point is important: Copying works, moving doesn't. The local Users account must be able to read the hosts file . To make sure (in Windows 7): Navigate to %SystemRoot%\System32\drivers\etc in Windows Explorer. If you can't see the hosts file, ensure you can see hidden and system files . Right-click on the hosts file and select Properties from the context menu. In the hosts Properties window, click on the Security tab. Examine the list of names in the Group or user names: box. If %COMPUTERNAME%\Users is present, click on it to view permissions. If Users is not present, or is present but does not have Read permission, click Edit... . If Users is not present, click Add... , type Users , click Check Names , and click OK or press Enter. Select Users , and ensure Read & execute is checked in the Allow column. Click OK. If a Windows Security alert box pops up, choose Yes to continue. Click OK to close the hosts Properties window. Go up to section 1 of this answer and follow the directions to check if it's working now. Or, in a Command Prompt window, type: icacls %SystemRoot%\System32\drivers\etc\hosts which should display something like: C:\WINDOWS\System32\drivers\etc\hosts NT AUTHORITY\SYSTEM:(F) NT AUTHORITY\SYSTEM:(I)(F) BUILTIN\Administrators:(I)(F) BUILTIN\Users:(I)(RX) APPLICATION PACKAGE AUTHORITY\ALL APPLICATION PACKAGES:(I)(RX) APPLICATION PACKAGE AUTHORITY\ALL RESTRICTED APPLICATION PACKAGES:(I)(RX) You should see an (R) after BUILTIN\Users . 6. Encoding The hosts file should encoded in ANSI or UTF-8 without BOM. You can do this with File -> Save As. 7. Proxies If you have a proxy configured, it may bypass the hosts file. The solution is to not use the proxy, or configure it to not do this. To check, go to your Internet Explorer -> Internet Options -> Connections -> LAN settings. If everything is blank and "Automatically detect settings" is checked, you aren't using a proxy. If you rely on a proxy to access the web and therefore don't want to disable it, you can add exceptions by going to Internet Explorer -> Internet Options -> Connections -> LAN settings -> Proxy Server / Advanced. Then add your exceptions to the Exceptions text box. e.g. localhost;127.0.0.1;*.dev 8. DNS address (This may also resolve proxy issues.) Go to your network connections properties, then TCP/IP settings, and change the first DNS server to 127.0.0.1 (localhost). The second should probably be your actual DNS's IP. This is not necessary for the hosts file to work , but it may help in your case if something is configured strangely. 9. .local addresses If you are using a .local domain entry in the form of myhost.local and it gets ignored please try the following: x.x.x.x myhost.local www.myhost.local even if the www.myhost.local does not exist. Windows somehow does not append its workgroup or localdomain. 10. Line / count limits (added to this answer to make it visible as it's been mentioned a few times) Windows hosts file seems to have a line or host limit. If you have more than 150 characters on a line or more than 8 hosts entries for an IP create a new line e.g. instead of: 1.2.3.4 host1.com host2.com host3.com host4.com host5.com host6.com host7.com host8.com host9.com Try this: 1.2.3.4 host1.com host2.com host3.com host4.com host5.com 1.2.3.4 host6.com host7.com host8.com host9.com
{ "source": [ "https://serverfault.com/questions/452268", "https://serverfault.com", "https://serverfault.com/users/147056/" ] }
452,735
I have two nodes on a wireless network. Node A is streaming data to node B. Most of the time it works fine, but sometimes there is packet loss and the stream is interrupted. To improve performance and reduce packet loss, should I move node A to be closer to node B, or move node A to be closer to the base station ?
Move it closer to the Base station. Everything you send in typical wifi links goes to/from the base station. Ad-hoc connections are different, but not many use those. Really, though, I expect your problem has to do with interference. That is much more likely to be the problem than distance. Here's the kicker: that interference may be your own signal. With wifi, you may have a base station can do a hypothetical 65 Mbit connection. Unfortunately, that is not 65 Mbit for each node: that is 65 Mbit total , shared among not only nodes A and B, but also any other clients on that same channel in the same area. Worse, let's say one of your nodes is only able to get an 18 Mbit signal, and is actively using 3Mbit of that signal. That use scales proportionally to the max theoretical number for the base station. The client is using air time , not bandwidth, and so 3 Mbit of a total available 18 Mbit (one sixth) means it's using one sixth of the total theoretical 65 Mbit supported by the base station, or about 11 Mbit worth of air time. This leaves at most 54 Mbit for all other clients combined on the same channel in the same area. Worse than that, you can even get interference from devices on different channels , because the channel frequency ranges overlap (this is why 2.4Ghz radios should only ever use channels 1, 6, or 11 in the US). In your situation, when A streams to B, you must upload the data to the base station, which must then resend it to B. That means you cut your available wireless bandwidth in half, because you're having to share. If A is also downloading its data for the stream from the internet you take a share away again, and you're down to one third of the original total. We also need to account for command and control information from the protocols used that must be transmitted. Worse than that, the bandwidth is not shared perfectly. Different nodes can try to send at the same time, resulting in collisions. When that happens, all colliding nodes must re-send the packet. As the traffic increases, the number of collisions increase. As the number of collisions increase, the amount of data needed to be re-transmitted increases, and the odds of additional collisions go even higher. This doesn't even begin to account for other interference sources like cordless phones, video game controllers, microwave ovens, wireless keyboards/mice, running water, etc. In the end, you may only have a small fraction of the original and reported 65 Mbit actually usable. Newer 5Ghz radios can help with this, but it's not a panacea; if you're sharing a base station, you're still sharing a single channel and still sharing your theoretical max among all clients of that base station. If you really want good performance here, go wired or go home. Wired connections can fix the issues described above in three ways: they can provide a connection that is switched , full-duplex , and that is almost completely insusceptible to outside interference. Switched means that if each node has a 100 Mbit connection to the base, that is 100 Mbit devoted exclusively to that node. If two nodes try to send at the same time, the base is able to hold packets from one and forward them when the line is clear, reducing collisions and therefore reducing the need to re-transmit the same data. Full-duplex means that nodes are able to both send and receive at the same time... again, reducing collisions. 
Here, node A could be downloading stream data from the internet at the same time that it is sending it back towards B, with no interference or collisions. In this case, because of all the re-transmission of the same data, you might see dramatic performance improvement if even one of nodes A or B has a wired connection. A recent example where I'm at is that we deployed iPads to all the faculty this term at the college where I work. To support these devices, during the trial we deployed a few AppleTV devices to classrooms and connected them to the projector to support AirPlay mirroring from an iPad to the front of the classroom. We learned from this that leaving both the AppleTV and the iPad wireless did not work well, especially as we may have two instructors in neighboring rooms both wanting to do mirroring. The solution for us was to install software on the PCs in each room to support AirPlay mirroring to the PC, which is wired. We had to make some network changes so the classroom PCs were on the same subnet as iPads, but the result is much more reliable and with much better video quality.
{ "source": [ "https://serverfault.com/questions/452735", "https://serverfault.com", "https://serverfault.com/users/147273/" ] }
452,767
Anyone know if a setting exists to allow a non-admin user to shutdown a server? Obviously I can set the "Allow Server to shutdown without logon" GPO but that is not quite the same thing. I am looking for a way to properly assign the shutdown right to a particular user if possible.
You can assign this in either a GPO or Local Security Policy. The setting that you're looking for is in Computer Configuration > Windows Settings > Security Settings > Local Policies > User Rights Assignment > Shutdown the system
{ "source": [ "https://serverfault.com/questions/452767", "https://serverfault.com", "https://serverfault.com/users/123729/" ] }
452,935
How to check the LDAP connection from a client to server. I'm working on the LDAP authentication and this client desktop needs to authenticate via a LDAP server. I can SSH to the LDAP server using LDAP user but When in desktop login prompt, I can't login. It says Authentication failure. Client machine has Cent OS 6.3 and LDAP server has Cent OS 5.5 LDAP software is Openldap. LDAP servers logs doesn't even show any messages. So, how to test whether the client can successfully connect to LDAP or not.
Use ldapsearch. It will return an error if you cannot query the LDAP Server. The syntax for using ldapsearch: ldapsearch -x -LLL -h [host] -D [user] -w [password] -b [base DN] -s sub "([filter])" [attribute list] A simple example $ ldapsearch -x -LLL -h host.example.com -D user -w password -b"dc=ad,dc=example,dc=com" -s sub "(objectClass=user)" givenName Please see this link: http://randomerror.wordpress.com/2009/10/16/quick-tip-how-to-search-in-windows-active-directory-from-linux-with-ldapsearch/ Edit : It seems you don't have pam configured corectlly for gdm/xdm here is an example how to do it: http://pastebin.com/TDK4KWRV
{ "source": [ "https://serverfault.com/questions/452935", "https://serverfault.com", "https://serverfault.com/users/147369/" ] }
453,185
I am running a fresh install of Linux Mint Nadia (14). I am following the instructions on Vagrant Getting Started but have gotten stuck on the Provisioning . It seems the Vagrant box cannot connect outside and so I can't install anything using either Chef or Puppet. In the basic Vagrant resolve.conf contains nameserver 10.0.2.3 . But with that set I can't ping us.archive.ubuntu.com . If I change it to 8.8.8.8 then I can ping us.archive.ubuntu.com but it does not stay set, and after a reboot it changes back to 10.0.2.3 - so provisioning fails again. Ideally I would like for 10.0.2.3 to work on my setup. Failing that I would like a way to permanently change resolv.conf so that I can do provisioning.
You can work around this issue in one of two ways, both of which are in the VirtualBox manual : Enabling DNS proxy in NAT mode The NAT engine by default offers the same DNS servers to the guest that are configured on the host. In some scenarios, it can be desirable to hide the DNS server IPs from the guest, for example when this information can change on the host due to expiring DHCP leases. In this case, you can tell the NAT engine to act as DNS proxy using the following command: VBoxManage modifyvm "VM name" --natdnsproxy1 on Using the host's resolver as a DNS proxy in NAT mode For resolving network names, the DHCP server of the NAT engine offers a list of registered DNS servers of the host. If for some reason you need to hide this DNS server list and use the host's resolver settings, thereby forcing the VirtualBox NAT engine to intercept DNS requests and forward them to host's resolver, use the following command: VBoxManage modifyvm "VM name" --natdnshostresolver1 on Note that this setting is similar to the DNS proxy mode, however whereas the proxy mode just forwards DNS requests to the appropriate servers, the resolver mode will interpret the DNS requests and use the host's DNS API to query the information and return it to the guest.
{ "source": [ "https://serverfault.com/questions/453185", "https://serverfault.com", "https://serverfault.com/users/34871/" ] }