Dataset schema: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
125,467
I want a secure mail solution, as I am looking to move away from Google and other parties that look into my private data. How much of a PITA is it to set up my own mail server? Should I go for an external provider with a good privacy policy and encrypted data instead? I have a VPS running Debian (with a dedicated IP + reverse DNS), and I'm a fairly capable Linux administrator, having set up a couple of web servers and home networks, and having looked over the shoulders of sysadmins at work. The security I currently have on the VPS is limited to iptables and installing/running the bare minimum of what I need (currently basically irssi and lighttpd). When setting up a mail server, is there a lot of stuff to take into consideration? Will my outgoing mail be marked as spam on other servers if I don't implement a number of solutions? Will reliable spam filtering be difficult to set up? Can I easily encrypt the stored mail?
I run several mail servers of varying sizes ranging from my own for two users to hundreds of IMAP mailboxes. My opinion of email can be summed up by telling you that I am planning to decommission my own private mail server and move to Gmail for my domain. The main reason I want to be rid of this responsibility is spam. It is compute- and resource-expensive to filter inbound spam with any kind of effectiveness. It takes time and effort on my part to maintain the spam filtering to ensure that we are as up-to-date as possible with the techniques being used by the spammers. And then there are times when your tools seem to be actively mis-maintained by the maintainers, such as when SpamAssassin started marking up everything with a date in 2010 or later because it was impossibly far in the future. Greylisting works much of the time too, but some relay systems just can't deal with it properly -- and even though greylisting is legitimate, dealing with the broken systems is your problem. Using blacklists can skim much of it off, but inevitably someone finds a blacklisted host that they want to receive mail from. If you run a mail server, blacklisting is always your problem. You get blacklisted so your users can't mail out? That's your problem. Especially when the blacklist is some penny-ante ISP in Southern Wisconsin which is blacklisting you because ten years ago your IP block was used by some fly-by-night DSL provider and not the backbone provider it is today. Or they insist that they have to run a "relay test" on your server before they'll de-list you, even though the IP that is in their list is an outbound-only IP and doesn't accept email from the internet at large. Someone trying to email one of your users gets blacklisted so they can't mail you? That's your problem. The email is always of earth-shattering importance and it is up to you to create an exception to let their email in. Secondary-MXing is broken. Spammers just beat up on that, and your system gets to accept, then scan and possibly bounce, drop, or false-negative it into your users' mailboxes. Frankly I never secondary-MX anymore because if my primaries are offline for longer than it takes email to die then I've got bigger problems (probably headed by the need to find a new job). Then there are the RFC-nazis. You'll get blacklisted if you are not strictly RFC compliant. And then you'll get shouted down by people who hate the fact that your anti-spam chooses to bounce rather than just drop, meaning the innocent people whose addresses were forged into the headers get buried in the backscatter. Email used to be interesting and fun. Now it's just one long, slow, hard kick in the nuts (pardon my colloquialism).
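For a sense of what the inbound-filtering plumbing involves, here is a minimal sketch for Postfix (an assumption -- the question doesn't name an MTA; zen.spamhaus.org and the postgrey port 10023 are common defaults, not requirements):
    # main.cf (Postfix), illustrative only
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        reject_rbl_client zen.spamhaus.org,
        check_policy_service inet:127.0.0.1:10023
Even a short chain like this already combines DNS blocklists and greylisting, and each piece needs the kind of ongoing care described above.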
{ "source": [ "https://serverfault.com/questions/125467", "https://serverfault.com", "https://serverfault.com/users/38232/" ] }
125,514
I am installing drivers for a printer, and I have a choice of either PCL (5 or 6) or PostScript drivers. Which one would you recommend, and why? The printer is an HP LaserJet 2605dn; the OS is Windows 7 (x64). Do you have a rule of thumb for this sort of thing? Or is it pretty much 'see what works'? Thanks
It's so amazing and horrifying when a thread like this has all sorts of non-knowledge and non-answers flowing in it and no answer gets it right. First I'll give my own answer then I'll explain where the previous posters are wrong. You should go with PCL 6. Here's why: You don't need PostScript. If you did need it you would know it and you wouldn't be asking this question. PostScript is more problematic than is PCL, so if you don't need it it's better avoided. It's more problematic in these ways and more: harder to find drivers (for a Win ME computer for example), more resource hungry (both on the printer, the workstation, and the network), HP's PostScript drivers are going to be much buggier than their PCL drivers, the quality of HP's PostScript emulation (that is, a third-party clone of Adobe's PostScript program) is highly questionable whereas the PCL is an HP product and therefore a better risk, PostScript tends to throw obscure errors when printing and requires obscure expertise to troubleshoot (very frustrating)-PCL does this less, PostScript tends to run the printer out of memory easier, PostScript drivers offer lots of obscure settings that are useful only to industry pros (like color separations, e.g.) and will only confuse normal people and give them more ways to cause themselves problems, and on difficult prints PostScript will often be slower. All that off the top of my head. PCL6 is a powerful page description language and will do anything you ever need to do. Quality is not an issue, PCL works fine and can print the same vector graphics and vector fonts as can PostScript. Photos and other bit mapped graphics are outside the realm of PostScript's power and thus the two languages will print them the same, except that PostScript will render the photo in text and blow up its binary size, thus taking longer to download it to the printer (it has to do this because PostScript is a language of text, there is nothing binary there. Everything is rendered into text characters). PostScript offers many advantages, but mostly to printing industry pros. An example is that if you want to print something on a super-high resolution image setter at some local high end printing shop they will likely accept the file only in Adobe Photoshop or PostScript formats, thus if you are using the PostScript driver you have a way to make such a file. However, PDF format can be used now in many situations where PostScript was formerly required. PostScript drivers do tend to offer more features than the PCL driver and some may be useful to you (like Booklet printing e.g.) but at this late date and age it's more likely that the PCL driver offers everything you would ever need, and the PostScript driver may not offer much at all extra that you could use.
{ "source": [ "https://serverfault.com/questions/125514", "https://serverfault.com", "https://serverfault.com/users/19829/" ] }
125,607
I have a server with apache and I recently installed mod_security2 because I get attacked a lot by this: My apache version is apache v2.2.3 and I use mod_security2.c These were the entries from the error log: [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) Here are the errors from the access_log: 202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" I tried configuring mod_security2 like this: SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" The thing in mod_security2 is that SecFilterSelective cannot be used; it gives me errors. Instead I use a rule like this: SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" Even this does not work. I don't know what to do anymore. Anyone have any advice? Update 1 I see that nobody can solve this problem using mod_security. So far using iptables seems like the best option to do this, but I think the file will become extremely large because the IP changes several times a day. I came up with 2 other solutions; can someone comment on whether they are good or not. The first solution that comes to my mind is excluding these attacks from my apache error logs. This will make it easier for me to spot other urgent errors as they occur, without having to sift through a long log. The second option is better, I think, and that is blocking hosts that are not sent in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block the hosts that are not in the correct form. Update 2 After going through the answers I came to the following conclusions. Having custom logging for apache will consume some unnecessary resources, and if there really is a problem you probably will want to look at the full log without anything missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs. Using filters for your logs is a good approach for this. Final thoughts on the subject The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries. It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large.
Preventing this from happening in any way will cost you resources, and it is a good practice not to waste your resources on unimportant stuff. The solution I use now is Linux logwatch. It sends me summaries of the logs, and they are filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.
From your error log they are sending an HTTP/1.1 request without the Host: portion of the request. From what I read, Apache replies with a 400 (bad request) error to this request, before handing over to mod_security. So, it doesn't look like your rules will be processed. (Apache deals with it before it ever needs to hand over to mod_security.) Try it yourself: telnet hostname 80 GET /blahblahblah.html HTTP/1.1 (enter) (enter) You should get the 400 error and see the same error in your logs. This is a bad request and Apache is giving the correct answer. A proper request should look like: GET /blahblahblah.html HTTP/1.1 Host: blah.com A workaround for this issue could be to patch mod_uniqueid to generate a unique ID even for a failed request, so that Apache passes the request on to its request handlers. The following URL is a discussion about this workaround, and includes a patch for mod_uniqueid you could use: http://marc.info/?l=mod-security-users&m=123300133603876&w=2 I couldn't find any other solutions for it, and I wonder whether a solution is actually required.
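If the goal is mainly to keep this noise out of the logs and off the server, another commonly used approach -- a sketch, assuming iptables with the string match module available -- is to drop the scan before Apache ever sees it:
    iptables -I INPUT -p tcp --dport 80 -m string --algo bm --string "GET /w00tw00t.at.ISC.SANS.DFind" -j DROP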
{ "source": [ "https://serverfault.com/questions/125607", "https://serverfault.com", "https://serverfault.com/users/26204/" ] }
125,615
I'm running Windows Server 2008 as admin and I tried to set ExecutionPolicy as Remotesigned for PowerShell v2 like this: Set-ExecutionPolicy RemoteSigned But I got this error: Set-ExecutionPolicy : Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft .PowerShell' is denied. At line:1 char:20 + Set-ExecutionPolicy <<<< RemoteSigned + CategoryInfo : NotSpecified: (:) [Set-ExecutionPolicy], UnauthorizedAccessException + FullyQualifiedErrorId : System.UnauthorizedAccessException,Microsoft.PowerShell.Commands.SetExecutionPolicyComma nd How to fix this?
Right click on Powershell shortcut and choose 'Run as Administrator'
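If elevating isn't convenient, the policy can instead be set for the current user only, which writes to HKCU rather than the HKLM key named in the error message (a standard parameter since PowerShell 2.0):
    Set-ExecutionPolicy RemoteSigned -Scope CurrentUser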
{ "source": [ "https://serverfault.com/questions/125615", "https://serverfault.com", "https://serverfault.com/users/36019/" ] }
125,865
I want to secure a file upload directory on my server as described beautifully here, but I have one problem before I can follow these instructions. I don't know what user Apache is running as. I've found a suggestion that you can look in httpd.conf and there will be a "User" line, but there is no such line in my httpd.conf file, so I guess Apache is running as the default user. I can't find out what that is, though. So, my questions are: (1) how do I find out what the default user is? (2) do I need to change the default user? (3) if the answer is yes and I change the default user by editing httpd.conf, is it likely to screw anything up? Thanks!
ps aux | egrep '(apache|httpd)' typically will show what apache is running as. Usually you do not need to change the default user, "nobody" or "apache" are typically fine users. As long as it's not "root" ;) edit: more accurate command for catching apache binaries too
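Another way to check is to look for the User directive in the configuration itself (file locations are distro-dependent; these are common defaults, and on Debian/Ubuntu the value usually comes from APACHE_RUN_USER in the envvars file):
    grep -i '^ *user' /etc/httpd/conf/httpd.conf /etc/apache2/apache2.conf 2>/dev/null
    grep APACHE_RUN_USER /etc/apache2/envvars 2>/dev/null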
{ "source": [ "https://serverfault.com/questions/125865", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
126,009
Just learned about the screen command on linux - it is genius. I love it. However, the actual terminal/prompt in screen looks and behaves differently than my standard bash prompt. That is, the colors aren't the same, tab completion doesn't seem to work, etc. Is there a way I can tell screen to behave just like a normal (at least, normal as in what I am used to) bash prompt ? Additional Information I am connecting via ssh from a Mac (Terminal) to a headless linux box (Ubuntu). After logging in, I have TERM=xterm-color and when I run screen I have TERM=screen . Am going to try the suggestions below to see if I can change the $TERM value first.
Thanks to this post, what I did was add one line to ~/.screenrc: # ~/.screenrc defshell -bash # the leading dash makes it a login shell Then things in your ~/.bashrc, /etc/bashrc, etc. should get run.
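If colors still look off inside screen, the terminal type screen advertises can also be set in ~/.screenrc -- this assumes the screen-256color terminfo entry exists on the box (check with infocmp screen-256color):
    term screen-256color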
{ "source": [ "https://serverfault.com/questions/126009", "https://serverfault.com", "https://serverfault.com/users/38707/" ] }
126,072
Is it possible for a web server to select an SSL certificate to use based on the host header of the incoming connection, or is that information only available after the SSL connection is established? That is, can my web server listen on port 443 and use the foo.com certificate if https://foo.com is requested, and the bar.com certificate if https://bar.com is requested, or am I trying to do something impossible because the server has to establish an SSL connection before it knows what the client wants?
Historically, your first statement is accurate. Now, there are multiple options: A wildcard certificate, if the sites are all subdomains of the same domain. A SAN/UCC certificate, which lists alternative names so that one certificate can cover several hostnames. SNI, where the client sends the desired hostname during the TLS handshake itself, so the server can pick the right certificate before any HTTP (including the Host header) is exchanged. Support for SNI is more limited, however, as it is newer. This has been answered numerous times on ServerFault by myself and others. I'd suggest searching for further details unless you have a specific question.
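For concreteness, the SNI case looks roughly like this in Apache (assuming Apache 2.2.12+ built against OpenSSL 0.9.8j+; the certificate paths are placeholders, and clients that don't send SNI simply get the first vhost's certificate):
    NameVirtualHost *:443
    <VirtualHost *:443>
        ServerName foo.com
        SSLEngine on
        SSLCertificateFile /etc/ssl/foo.com.crt
        SSLCertificateKeyFile /etc/ssl/foo.com.key
    </VirtualHost>
    <VirtualHost *:443>
        ServerName bar.com
        SSLEngine on
        SSLCertificateFile /etc/ssl/bar.com.crt
        SSLCertificateKeyFile /etc/ssl/bar.com.key
    </VirtualHost>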
{ "source": [ "https://serverfault.com/questions/126072", "https://serverfault.com", "https://serverfault.com/users/11495/" ] }
126,407
I have a file that I am trying to read by using tail -f. I was wondering if there was a way to have the terminal output an actual line break instead of the \n character.
tail -f file | sed 's/\\n/\n/g'
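A quick illustration, plus a portability note: the \n in the sed replacement is honored by GNU sed, while on OS X's BSD sed an awk equivalent is safer (both lines below are a sketch using a made-up file name):
    printf 'first\\nsecond\n' >> file                    # appends the literal characters \n between the words
    tail -f file | awk '{ gsub(/\\n/, "\n"); print }'    # prints "first" and "second" on separate lines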
{ "source": [ "https://serverfault.com/questions/126407", "https://serverfault.com", "https://serverfault.com/users/18207/" ] }
126,413
Background flushing on Linux happens when either too much written data is pending (adjustable via /proc/sys/vm/dirty_background_ratio) or a timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs). Unless another limit is being hit (/proc/sys/vm/dirty_ratio), more written data may be cached. Further writes will block. In theory, this should create a background process writing out dirty pages without disturbing other processes. In practice, it does disturb any process doing uncached reading or synchronous writing. Badly. This is because the background flush actually writes at 100% device speed and any other device requests at this time will be delayed (because all queues and write-caches on the road are filled). Is there a way to limit the amount of requests per second the flushing process performs, or otherwise effectively prioritize other device I/O?
After lots of benchmarking with sysbench, I come to this conclusion: To survive (performance-wise) a situation where an evil copy process floods dirty pages and hardware write-cache is present (possibly also without that) and synchronous reads or writes per second (IOPS) are critical just dump all elevators, queues and dirty page caches. The correct place for dirty pages is in the RAM of that hardware write-cache. Adjust dirty_ratio (or new dirty_bytes) as low as possible, but keep an eye on sequential throughput. In my particular case, 15 MB were optimum ( echo 15000000 > dirty_bytes ). This is more a hack than a solution because gigabytes of RAM are now used for read caching only instead of dirty cache. For dirty cache to work out well in this situation, the Linux kernel background flusher would need to average at what speed the underlying device accepts requests and adjust background flushing accordingly. Not easy. Specifications and benchmarks for comparison: Tested while dd 'ing zeros to disk, sysbench showed huge success , boosting 10 threads fsync writes at 16 kB from 33 to 700 IOPS (idle limit: 1500 IOPS) and single thread from 8 to 400 IOPS. Without load, IOPS were unaffected (~1500) and throughput slightly reduced (from 251 MB/s to 216 MB/s). dd call: dd if=/dev/zero of=dumpfile bs=1024 count=20485672 for sysbench, the test_file.0 was prepared to be unsparse with: dd if=/dev/zero of=test_file.0 bs=1024 count=10485672 sysbench call for 10 threads: sysbench --test=fileio --file-num=1 --num-threads=10 --file-total-size=10G --file-fsync-all=on --file-test-mode=rndwr --max-time=30 --file-block-size=16384 --max-requests=0 run sysbench call for one thread: sysbench --test=fileio --file-num=1 --num-threads=1 --file-total-size=10G --file-fsync-all=on --file-test-mode=rndwr --max-time=30 --file-block-size=16384 --max-requests=0 run Smaller block sizes showed even more drastic numbers. --file-block-size=4096 with 1 GB dirty_bytes: sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 1 Extra file open flags: 0 1 files, 10Gb each 10Gb total file size Block size 4Kb Number of random requests for random IO: 0 Read/Write ratio for combined random IO test: 1.50 Calling fsync() after each write operation. Using synchronous I/O mode Doing random write test Threads started! Time limit exceeded, exiting... Done. Operations performed: 0 Read, 30 Write, 30 Other = 60 Total Read 0b Written 120Kb Total transferred 120Kb (3.939Kb/sec) 0.98 Requests/sec executed Test execution summary: total time: 30.4642s total number of events: 30 total time taken by event execution: 30.4639 per-request statistics: min: 94.36ms avg: 1015.46ms max: 1591.95ms approx. 95 percentile: 1591.30ms Threads fairness: events (avg/stddev): 30.0000/0.00 execution time (avg/stddev): 30.4639/0.00 --file-block-size=4096 with 15 MB dirty_bytes: sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 1 Extra file open flags: 0 1 files, 10Gb each 10Gb total file size Block size 4Kb Number of random requests for random IO: 0 Read/Write ratio for combined random IO test: 1.50 Calling fsync() after each write operation. Using synchronous I/O mode Doing random write test Threads started! Time limit exceeded, exiting... Done. 
Operations performed: 0 Read, 13524 Write, 13524 Other = 27048 Total Read 0b Written 52.828Mb Total transferred 52.828Mb (1.7608Mb/sec) 450.75 Requests/sec executed Test execution summary: total time: 30.0032s total number of events: 13524 total time taken by event execution: 29.9921 per-request statistics: min: 0.10ms avg: 2.22ms max: 145.75ms approx. 95 percentile: 12.35ms Threads fairness: events (avg/stddev): 13524.0000/0.00 execution time (avg/stddev): 29.9921/0.00 --file-block-size=4096 with 15 MB dirty_bytes on idle system: sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 1 Extra file open flags: 0 1 files, 10Gb each 10Gb total file size Block size 4Kb Number of random requests for random IO: 0 Read/Write ratio for combined random IO test: 1.50 Calling fsync() after each write operation. Using synchronous I/O mode Doing random write test Threads started! Time limit exceeded, exiting... Done. Operations performed: 0 Read, 43801 Write, 43801 Other = 87602 Total Read 0b Written 171.1Mb Total transferred 171.1Mb (5.7032Mb/sec) 1460.02 Requests/sec executed Test execution summary: total time: 30.0004s total number of events: 43801 total time taken by event execution: 29.9662 per-request statistics: min: 0.10ms avg: 0.68ms max: 275.50ms approx. 95 percentile: 3.28ms Threads fairness: events (avg/stddev): 43801.0000/0.00 execution time (avg/stddev): 29.9662/0.00 Test-System: Adaptec 5405Z (that's 512 MB write-cache with protection) Intel Xeon L5520 6 GiB RAM @ 1066 MHz Motherboard Supermicro X8DTN (5520 chipset) 12 Seagate Barracuda 1 TB disks 10 in Linux software RAID 10 Kernel 2.6.32 Filesystem xfs Debian unstable In summary, I am now sure this configuration will perform well in idle, high load and even full load situations for database traffic that otherwise would have been starved by sequential traffic. Sequential throughput is higher than two gigabit links can deliver anyway, so no problem reducing it a bit.
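To make that setting survive a reboot (rather than echoing into /proc at every boot), the usual place is /etc/sysctl.conf -- the 15 MB figure is just what happened to be optimal on this particular hardware:
    vm.dirty_bytes = 15000000
and apply it without rebooting with sysctl -p.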
{ "source": [ "https://serverfault.com/questions/126413", "https://serverfault.com", "https://serverfault.com/users/38826/" ] }
126,502
I'm trying to find a way to get my own PID from a command prompt (for later use in bat scripts). So far the only useful way I found was to use getpids.exe from here : http://www.scheibli.com/projects/getpids/index.html , but I'm looking for a command that's "built in" to Windows. I'm looking for a "bullet proof" way. No assumptions about my process being the only cmd.exe or anything.
Since none of the other solutions are bulletproof and built in, I figured I'd offer the following solution, but note that you'll need to parse/save the results somehow: title mycmd tasklist /v /fo csv | findstr /i "mycmd"
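A sketch of doing that parsing inside a .bat file (the title string is arbitrary; with /fo csv the PID is the second comma-separated field, and the ~ in %%~a strips its surrounding quotes):
    @echo off
    set "UNIQ=mypid_%RANDOM%%RANDOM%"
    title %UNIQ%
    for /f "tokens=2 delims=," %%a in ('tasklist /v /fo csv ^| findstr /i "%UNIQ%"') do set "MYPID=%%~a"
    echo My PID is %MYPID%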
{ "source": [ "https://serverfault.com/questions/126502", "https://serverfault.com", "https://serverfault.com/users/36342/" ] }
126,554
I know that a dedicated IP is needed for setting up SSL. What happens if we add SSL for domains sharing an IP ? (Namevirtualhost)
I think it is a good idea to explain what the problem really is with virtual hosts and SSL/TLS. When you connect to an Apache server over HTTP you send a set of HTTP headers along. They look like this: GET /index.html HTTP/1.1 Host: www.nice-puppies.com If you have virtual hosting, Apache will look at the Host field, then fetch the right index.html for you. The problem is when you add SSL/TLS. The server sets up the encryption before you ever send your HTTP request. Therefore the server doesn't know whether you are going to www.nice-puppies.com or www.evil-haxxor.com until after the authentication/encryption is completed. The server cannot guess (sending the wrong certificate gives you a nasty error message). One solution is a wildcard certificate (as mentioned above), which is valid for *.nice-puppies.com. That way you can use the same cert for multiple subdomains, but you can't have a *.com certificate (okay, you can, but it would be very bad for everybody else), so in general you will need a separate IP for each HTTPS domain.
{ "source": [ "https://serverfault.com/questions/126554", "https://serverfault.com", "https://serverfault.com/users/8888/" ] }
126,920
I'm trying to install a package from a Debian repository. Installing it manually with dpkg fails because of missing or incomplete dependencies. This got me wondering: is it a mistake to just add the Debian repository to my apt sources? To be more specific, I'm trying to install Guake (the console wrapper), version 0.4.1, which resolves a transparency issue I'm having.
It's a bad idea to install binary packages from Debian on Ubuntu. But it's a good idea to install packages from source ! So here's how: It's not that hard. Here's how to do it (instructions taken from my old note at http://www.asheesh.org/note/backporting-with-apt-src.html ): Step 1: Make sure you have an appropriate deb-src line Backporting is the process of taking source packages and compiling them on your Debian(-like) system. The easiest way to find Debian "source packages" is the same way you find Debian "binary packages": apt-get and its configuration. Make sure you have this line in /etc/apt/sources.list: deb-src http://ftp.debian.org/debian/ unstable main APT provides a command "apt-get source" that looks in these deb-src lines (rather than plain binary deb lines) and downloads source packages. In this tutorial, you'll use "apt-src" which is a convenient wrapper around "apt-get source". Step 2 apt-get update Step 3 sudo aptitude install apt-src apt-src is a helper program that makes compiling source packages easy. It's not necessary, but it prevents you from having to type too many commands. Step 4 apt-src -bi install $package If you wanted to install 'alpine', run this: apt-src -bi install alpine The "b" stands for "build", the "i" stands for "install the resulting package", and the word "install" means "download the source for alpine as found in a Debian source line from sources.list". apt-src will "install" the source into the current directory, make sure you have all the required packages to build the package (a process called "satisfying the build dependencies"), build it, and install the resulting .debs.
{ "source": [ "https://serverfault.com/questions/126920", "https://serverfault.com", "https://serverfault.com/users/4179/" ] }
126,922
Here is my setup: - win2003 server (ISA installed) with 3 NICs: 1) internal network 2) ISP 1 (default) network (DHCP enabled) 3) ISP 2 (backup) network (DHCP enabled) - several "normal" PCs within the internal net - one "special" PC within the internal net Both ISP 1 and ISP 2 provide access to the internet and their resources thru their VPN connections. The goal is to enable all "normal" PCs to use the internet from ISP_1's VPN connection, while the "special" PC should use only ISP_2's VPN connection. Furthermore, all "normal" and "special" PCs should have access to several servers accessible only thru ISP_2's VPN connection. I have some thoughts on how to achieve this, but I want to be certain, because everything should be configured as quickly as possible, avoiding significant downtime. UPD: any ideas on how to solve this if there were no ISA? windows-server-2003 isa routing vpn
{ "source": [ "https://serverfault.com/questions/126922", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
127,021
I'm not clear about the difference between a load balancer and a reverse proxy. They both seem to have the same behavior: distributing incoming requests to backend servers.
Your confusion is reasonable - they are often the same thing. But not always. When you refer to a load balancer you are referring to a very specific thing - a server or device that balances inbound requests across two or more web servers to spread the load. A reverse proxy, however, typically has any number of features: (1) load balancing: as discussed above; (2) caching: it can cache content from the web server(s) behind it and thereby reduce the load on the web server(s) and return some static content back to the requester without having to get the data from the web server(s); (3) security: it can protect the web server(s) by preventing direct access from the internet; it might do this through simple means by just obfuscating the web server(s), or it may have some more active components that actually review inbound requests looking for malicious code; (4) SSL acceleration: when SSL is used, it may serve as a termination point for those SSL sessions so that the workload of dealing with the encryption is offloaded from the web server(s). I think this covers most of it but there are probably a few other features I've missed. Certainly it isn't uncommon to see a device or piece of software marketed as a load balancer/reverse proxy because the features are so commonly bundled together.
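As a concrete illustration of how one piece of software can wear several of those hats at once, here is a rough nginx sketch (nginx chosen purely as an example; all names, addresses and paths are made up):
    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;
    upstream app_servers {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;           # load balancing across two backends
    }
    server {
        listen 443 ssl;                  # SSL termination at the proxy
        ssl_certificate     /etc/nginx/example.crt;
        ssl_certificate_key /etc/nginx/example.key;
        location / {
            proxy_pass http://app_servers;
            proxy_cache appcache;        # serve repeat requests from the cache
        }
    }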
{ "source": [ "https://serverfault.com/questions/127021", "https://serverfault.com", "https://serverfault.com/users/37942/" ] }
127,708
I'm trying to configure mercurial access using Apache http. It requires authentication. My /etc/apache2/sites-enabled/mercurial looks like this: NameVirtualHost *:8080 <VirtualHost *:8080> UseCanonicalName Off ServerAdmin webmaster@localhost AddHandler cgi-script .cgi ScriptAliasMatch ^(.*) /usr/lib/cgi-bin/hgwebdir.cgi/$1 </VirtualHost> Every tutorial I read on the internet tells me to insert these lines: AuthType Basic AuthUserFile /usr/local/etc/httpd/users But when I do it I get the following error: # /etc/init.d/apache2 reload Syntax error on line 8 of /etc/apache2/sites-enabled/mercurial: AuthType not allowed here My distro is a customized Ubuntu called Turnkey Linux Redmine
You should place this inside a Location directive:
<VirtualHost *:8080>
    <Location />
        # the / has to be there, otherwise Apache startup fails
        Deny from all
        #Allow from    (you may set an IP here to allow access without a password)
        AuthUserFile /usr/local/etc/httpd/users
        AuthName authorization
        AuthType Basic
        # Satisfy Any means either a listed IP or a valid password is enough;
        # use Satisfy All if IPs are specified and you require IP + password
        Satisfy Any
        Require valid-user
    </Location>
    ...
</VirtualHost>
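The password file referenced by AuthUserFile is created with htpasswd; the -c flag creates the file, so use it only for the first user (user names here are placeholders):
    htpasswd -c /usr/local/etc/httpd/users firstuser
    htpasswd /usr/local/etc/httpd/users nextuser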
{ "source": [ "https://serverfault.com/questions/127708", "https://serverfault.com", "https://serverfault.com/users/958/" ] }
127,794
Quick question - I run two linux boxes, one my own desktop and the other my VPS. For security reasons on the VPS end I opted for socket connections to MySQL ( /var/run/mysqld/mysql.sock ). I know I can tunnel like this: ssh -L 3307:127.0.0.1:3306 [email protected] if I set up the remote sql server to listen on some port, but what I want to know is can I do something like: ssh -L /path/to/myremotesqlserver.sock:/var/run/mysqld/mysql.sock thereby tunnelling two sockets, as opposed to two ports? A perfectly acceptable solution would also be to forward a local port to the remote socket file, but where possible I'm trying not to have tcp servers running on the remote box. (and yes, I know tcp would be easier).
Although at the time the question was asked this was impossible, it is possible nowadays. You can do both UNIX=>TCP and UNIX=>UNIX forwarding. For example: ssh \ -R/var/run/mysql.sock:/var/run/mysql.sock \ -R127.0.0.1:3306:/var/run/mysql.sock \ somehost This has been possible since OpenSSH 6.7.
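For the direction the question actually asks about (a local socket that tunnels to the remote server's MySQL socket), the equivalent local forward -- also OpenSSH 6.7+; user, host and the local socket path are placeholders -- would be something like:
    ssh -o StreamLocalBindUnlink=yes -L /tmp/remote-mysql.sock:/var/run/mysqld/mysql.sock user@remotehost
    mysql --socket=/tmp/remote-mysql.sock -u dbuser -p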
{ "source": [ "https://serverfault.com/questions/127794", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
127,904
Is it possible to make xargs use only newline as separator? (in bash on Linux and OS X if that matters) I know -0 can be used, but it's PITA as not every command supports NUL-delimited output.
Something along the lines of alias myxargs='perl -p -e "s/\n/\0/;" | xargs -0' cat nonzerofile | myxargs command should work.
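Two simpler variants, depending on which xargs is installed: GNU xargs (Linux) understands -d directly, and a tr-based pipe is portable to OS X's BSD xargs as well:
    xargs -d '\n' command < nonzerofile              # GNU xargs only
    tr '\n' '\0' < nonzerofile | xargs -0 command    # portable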
{ "source": [ "https://serverfault.com/questions/127904", "https://serverfault.com", "https://serverfault.com/users/6521/" ] }
128,069
I have started using git for deployment of websites for testing. How do I prevent apache from serving the .git directory contents? I tried <Directorymatch "^/.*/\.svn/"> Order deny,allow Deny from all </Directorymatch> with no success. I know that I can create a .htaccess file in each .git directory and deny access, but I wanted something I could put into the main config file that makes this global across all websites.
It's not working because you have 'svn' instead of 'git' in the rule. All you have to do is to replace the 'svn' with 'git'. <Directorymatch "^/.*/\.git/"> Order deny,allow Deny from all </Directorymatch>
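Two related variants, in case they fit better: on Apache 2.4 the Order/Deny pair is replaced by Require, and a RedirectMatch can hide the directory with a 404 instead of a 403:
    <DirectoryMatch "/\.git">
        Require all denied
    </DirectoryMatch>
    RedirectMatch 404 /\.git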
{ "source": [ "https://serverfault.com/questions/128069", "https://serverfault.com", "https://serverfault.com/users/9006/" ] }
128,071
I'm using netcat as a backend to shovel data back and forth for a program I'm making. I tested my program on the local network, and once it worked I thought it would be a matter of simply forwarding a port from my router to have my program work over the internet. Alas! This seems not to be the case. If I start netcat listening on port 6666 with: nc -vv -l -p 6666 , then go to 127.0.0.1:6666 in a browser, as expected I see a HTTP GET request come through netcat (and my browser sits waiting in vain). If I go to my.external.ip.address:6666 , however, nothing comes through at all and the browser displays 'could not connect to my.external.ip.address:6666 '. I know that the port is correctly forwarded, as www.canyouseeme.org says port 6666 is open (and when netcat is not listening, that its closed). If I run netcat with -g my.adslmodem's.local.address to set the gateway address, I get the same behavior. Am I using this command line option correctly? Any insight as to what I'm doing wrong?
{ "source": [ "https://serverfault.com/questions/128071", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
128,072
I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem; the service, on start-up, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output: [ Mar 30 14:59:54 Enabled. ] [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ] Starting server... [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ] Running the server directly on the command line is fine, and AFACS there are no errors being encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems, and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
{ "source": [ "https://serverfault.com/questions/128072", "https://serverfault.com", "https://serverfault.com/users/8305/" ] }
128,096
Why do cat5 cables sometimes have the fluffy fiber bit in them?
Actually it's for pulling the outer shielding away from the inner wires. When you're punching down the cable you pull the fiber string down from the top of the cable and it makes a nice split in the outer shielding that allows you to pull the outer shielding down to cut it off without damaging the inner wires. Here's a video that shows the process: http://www.youtube.com/watch?v=sHy8mtW9eak at 1:23
{ "source": [ "https://serverfault.com/questions/128096", "https://serverfault.com", "https://serverfault.com/users/36858/" ] }
128,132
We are experimenting with running an OpenVPN server for our business. One question I can't seem to find the answer to is this: When we generate keys for one of our users for them to use at home, can they use the same keys on their home laptop as well as their home desktop? Or do we need to generate separate keys for each user's client machine?
It's a simple key management issue. There is nothing technically that stops you from using the same key from several locations. You can even use them at the same time. However, using the same key for multiple systems makes a revocation more painful. It also limits what user tracking you can do. Letting a user use the same key from all his systems is a common setup, and what I would recommend. If the users have root access it's pretty hard to prevent them from moving the keys anyway. Just make sure you don't fall in the trap of using a single key for all your users. That hurts when somebody forgets a laptop in china.
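For reference, with the stock easy-rsa 2.x tooling revoking a departing or compromised user's key is typically along these lines (paths and script names vary between easy-rsa versions; easy-rsa 3 uses "easyrsa revoke" and "easyrsa gen-crl" instead):
    cd /etc/openvpn/easy-rsa
    . ./vars
    ./revoke-full baduser        # adds the certificate to keys/crl.pem
    # then reference the generated keys/crl.pem from the server config with the crl-verify directive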
{ "source": [ "https://serverfault.com/questions/128132", "https://serverfault.com", "https://serverfault.com/users/21307/" ] }
128,242
I've set up subversion and apache on my server. If I browse to it through my webbrowser it works fine ( http://svn.host.com/reposname ). However, if I do a checkout on my machine I get the following error: Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate I checked apache's error log, but it doesn't say anything. (it does now - see edit) My repositories are stored under: /var/www/svn/repos/ My website is stored under: /var/www/vhosts/x/... Here's the conf file for the subdomain: <Location /> DAV svn SVNParentPath /var/www/svn/repos/ AuthType Basic AuthName "Authorization Realm" AuthUserFile /var/www/svn/auth/svn.htpasswd Require valid-user </Location> Authentication works fine. Does anyone know what might be causing this? -- Edit So I restarted apache (again) and tried it again and now it give me an error message, but it doesn't really help. Anyone have an idea what it means? [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0] [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001] -- Edit 2 If I do svn info it doesn't give anything usefull: [root@server domain.com]# svn info http://svn.domain.com/repos/ Username: username Password for 'username': svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate I also tried doing a local checkout ( svn checkout file:///var/www/svn/repos/reposname ) and that works fine (also adding / commiting works fine). So it seems is has something to do with apache. Some other information: I'm running CentOs 5.3 Plesk 9.3 Subversion, version 1.6.9 (r901367) -- Edit 3 I tried moving the repositories, but it didn't make any difference. selinux is disabled so that isn't it either.
I had this recently... but it turned out I had forgotten the url :) One thing you must do is ensure your svn Location does not overlap any apache-servable websites. ie, if you set your DocumentRoot to be /www, and your svn Location to be /www/svn... then you're in trouble - Apache won't know what its supposed to be served with (ie the svn special handlers, or a straight http handler). See the FAQ entry for this .
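In practice that means giving the DAV block its own path rather than the web root -- a sketch reusing the paths from the question:
    <Location /svn>
        DAV svn
        SVNParentPath /var/www/svn/repos/
        AuthType Basic
        AuthName "Authorization Realm"
        AuthUserFile /var/www/svn/auth/svn.htpasswd
        Require valid-user
    </Location>
The checkout URL then becomes, for example, http://svn.host.com/svn/reposname, and the ordinary document tree no longer overlaps it.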
{ "source": [ "https://serverfault.com/questions/128242", "https://serverfault.com", "https://serverfault.com/users/1822/" ] }
128,284
I'm investigating an issue with DB connections being left open indefinitely, causing problems on the DB server. How do I see currently open connections to a PostgreSQL server, particularly those using a specific database? Ideally I'd like to see what command is executing there as well. Basically, I'm looking for something equivalent to the "Current Activity" view in MSSQL.
OK, got it from someone else. This query should do the trick: select * from pg_stat_activity where datname = 'mydatabasename';
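To also see what each connection is currently doing, the same view exposes the running statement -- note the column names: on PostgreSQL 9.2 and later they are pid, state and query, while older releases call them procpid and current_query and have no state column:
    select pid, usename, client_addr, state, query from pg_stat_activity where datname = 'mydatabasename';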
{ "source": [ "https://serverfault.com/questions/128284", "https://serverfault.com", "https://serverfault.com/users/2519/" ] }
128,962
I'm setting up a LAMP server and need to prevent SSH/FTP/etc. brute-force logon attempts from succeeding. I've seen many recommendations for both denyhosts and fail2ban, but few comparisons of the two. I also read that an IPTables rule can fill the same function. Why would I choose one of these methods over another? How do people on serverfault handle this problem?
IIRC, DenyHosts will only watch your SSH service. If you need it to protect other services as well, Fail2ban is definitely a better choice. It is configurable to watch nearly any service if you are willing to tweak its configuration, but that shouldn't be necessary as the newer versions of Fail2ban include rulesets which are suitable for many popular server daemons. Using fail2ban over a simple iptables rate limit has the advantage of completely blocking an attacker for a specified amount of time, instead of simply reducing how quickly he can hammer your server. I've used fail2ban with great results on a number of production servers and have never seen one of those servers breached by a brute force attack since I've started using it.
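As a sketch of how little configuration the SSH case needs, this is roughly what enabling the stock jail looks like in /etc/fail2ban/jail.local on Debian/Ubuntu-style installs (section and option names follow fail2ban 0.8.x; newer versions name the section [sshd], and the log path and ban time are just examples):
    [ssh]
    enabled  = true
    port     = ssh
    filter   = sshd
    logpath  = /var/log/auth.log
    maxretry = 5
    bantime  = 600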
{ "source": [ "https://serverfault.com/questions/128962", "https://serverfault.com", "https://serverfault.com/users/23202/" ] }
129,086
How can I start/stop the iptables service on Ubuntu? I have tried service iptables stop but it is giving "unrecognized service" . Why is it doing so? Is there any other method?
I don't know about "Ubuntu", but in Linux generally, "iptables" isn't a service - it's a command to manipulate the netfilter kernel firewall. You can "disable" (or stop) the firewall by setting the default policies on all standard chains to "ACCEPT", and flushing the rules. iptables -P INPUT ACCEPT iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -F (You may need to flush other tables, too, such as "nat", if you've used them) The following article on the Ubuntu website describes setting up iptables for use with NetworkManager: https://help.ubuntu.com/community/IptablesHowTo
{ "source": [ "https://serverfault.com/questions/129086", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
129,354
I went to my zoneedit.com account, changed mydomain.com to point to a different IP. But the changes haven't taken effect yet. Is this because my ISP's DNS is caching?
Yes. Your ISP is almost certainly caching DNS settings for some period of time. They are supposed to refresh the records when the TTL expires. Unfortunately there are a large number of ISPs that seem to ignore TTLs altogether in their DNS caching schemes. If you happen to be on one of those ISPs it could be hours or even days before they respect the new records, even if you have a very low TTL set.
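You can compare what your resolver still has cached against what the authoritative servers now say with dig (the nameserver below is a placeholder for one of your zone's authoritative servers):
    dig +noall +answer mydomain.com @ns1.your-dns-provider.example   # authoritative answer, new IP
    dig +noall +answer mydomain.com                                  # your configured resolver; watch the TTL column count down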
{ "source": [ "https://serverfault.com/questions/129354", "https://serverfault.com", "https://serverfault.com/users/81082/" ] }
129,380
Last week I had a question here about suexec / suphp but I tried to accomplish too much. I'm going to narrow the scope a bit and try again. I'd like to configure a LAMP server to host multiple clients. I'd like it to seem (from the client's viewpoint) just like any other shared hosting environment. Web sites in their home directory, no need to muck around with file ownerships to get pages served, etc. It would seem that a configuration that involves suexec and suphp is the way to go(?) I'm specifically looking for a current/modern guide on how to accomplish this (I'll be using CentOS if it matters) and I'm afraid I need more than a link to Apache docs. Are there any good How-To's out there? The few I've found have been pretty out of date, but it is quite possible my search was weak.
{ "source": [ "https://serverfault.com/questions/129380", "https://serverfault.com", "https://serverfault.com/users/1936/" ] }
129,503
Can you think of any linux command-line method for saving the certificate presented by a HTTPS server? Something along the lines of having curl/wget/openssl make a SSL connection and save the cert rather than the HTTP response content. The gui equivalent to what I'm looking for would be to browse to the HTTPS site, double-click on the browser "secure site" icon, and export the cert. Except the goal here is to do it non-interactively. Thanks, Jim
Something like: openssl s_client -servername remote.server.net -connect remote.server.net:443 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >/path/to/certificate.pem That's what I use with fetchmail to retrieve the certificate of an SSL capable IMAP or POP3 server (except obviously I don't use port 443) (Note that "redundant" -servername parameter is necessary to make openssl do a request with SNI support.)
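To inspect what was saved (subject, expiry dates, SANs and so on):
    openssl x509 -in /path/to/certificate.pem -noout -text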
{ "source": [ "https://serverfault.com/questions/129503", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
129,507
I am running an ASP.NET 3.5 website on IIS 6 with Server 2003. Whenever I modify any of the ASPX files, any page on the site then takes about 2-3 minutes before it starts to load. Even the smallest modification causes this to happen. Why is this?
{ "source": [ "https://serverfault.com/questions/129507", "https://serverfault.com", "https://serverfault.com/users/39750/" ] }
129,633
What would be the crontab entry look like for a job that runs on the first day of every third month?
The following will run script on the 1st of Jan, Apr, Jul and Oct at 03:30 30 03 01 Jan,Apr,Jul,Oct * /path/to/script Alternatively, but less obvious 30 03 01 */3 * /path/to/script Will run every three months at 03:30 on the 1st of Jan,Apr,Jul and Oct.
{ "source": [ "https://serverfault.com/questions/129633", "https://serverfault.com", "https://serverfault.com/users/17209/" ] }
129,635
I have a free domain running at x10hosting (x10.bz), and I want to find out the IP Address of my MySQL host for it, so I can contact the MySQL database from another host. I've already added that host to the access list, but now I need to find out the IP Address of the MySQL host. How can I find this out? x10 is using cPanel X and PHPMyAdmin.
The SQL query SHOW VARIABLES WHERE Variable_name = 'hostname' will show you the hostname of the MySQL server which you can easily resolve to its IP address. SHOW VARIABLES WHERE Variable_name = 'port' Will give you the port number. You can find details about this in MySQL's manual: 12.4.5.41. SHOW VARIABLES Syntax and 5.1.4. Server System Variables
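A shorter equivalent that returns both values in one statement (the @@hostname variable requires MySQL 5.0.38 or later); the hostname it returns can then be resolved to an IP with host or nslookup from any shell:
    SELECT @@hostname, @@port;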
{ "source": [ "https://serverfault.com/questions/129635", "https://serverfault.com", "https://serverfault.com/users/39787/" ] }
129,935
Is there any reason to have 2 NICs on a server BESIDES the following cases? You need to connect to 2 different physical networks Redundancy (1 NIC fails, so you use the other) Are there any other reasons?
(2a). Load balancing. (3). Separation of traffic (i.e. you could have a combo web/database server, same network, put all web traffic on one NIC, db traffic on the other, makes it easier to calculate loads for traffic types). This also makes it easier to split the two later on, nobody has to change connection strings.
{ "source": [ "https://serverfault.com/questions/129935", "https://serverfault.com", "https://serverfault.com/users/23007/" ] }
130,300
I am currently developing a project which is mission-critical. The actual domain name is registered with 1 & 1 and I plan on purchasing DynDNS Custom DNS service (which has 5 different geographical locations for DNS) and then another secondary DNS service to make sure my DNS is as failover safe as possible. Does it matter that the registration is with 1 & 1 - are they a weak link in the chain? All I really use them for is to say that DynDNS is my primary DNS nameserver and then my secondary DNS is my other nameserver. I can transfer the registration to DynDNS - Im just not sure if it really matters or not. Thanks
Your registrar is, IMHO, the least of your concerns. Your actual DNS provider (the folks who host your nameserver) is probably worth a little consideration, but it's still down in the noise compared to the rest of what you need to do to really reach 99.9999% availability. Six Nines availability (99.9999) means less than one minute (actually exactly 31.536 seconds) of downtime in a year. If you're really intending to reach that level of availability and not just blowing smoke you should be concentrating more on your (distributed, redundant) network & server infrastructure, and at that point you can really host your own DNS servers in your multiple (geographically and topologically distributed) datacenters :-) Just my $3.50...
{ "source": [ "https://serverfault.com/questions/130300", "https://serverfault.com", "https://serverfault.com/users/35204/" ] }
130,306
Hey everyone, I have an old server that doesn't boot. I don't know the version of unix installed, but I see SCO UNIX. It stops with that error: UX:init: ERROR: Cannot create /var/adm/utmp or /var/adm/utmpx UX:init: ERROR: failed write of utmpx entry: " " UX:init: ERROR: failed write of utmpx entry: " " UX:init: INFO: SINGLE USER MODE After that message, it just stops. I cannot write or press anything. Even CTRL + ALT + DEL does not work. I cannot get into the system. I have tried booting with a DamnSmallLinux LiveCD but it does not recognize the file system on HDA. Is there a way to either log in as root or bypass this error? Thanks.
{ "source": [ "https://serverfault.com/questions/130306", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
130,482
I have Ubuntu 9.10 installed with sshd and I can successfully connect to it using login and password. I have configured an RSA key login and now have "Server refused our key" as expected. Ok, now I want to check sshd log in order to figure out a problem. I have examined /etc/ssh/sshd_config and it have SyslogFacility AUTH LogLevel INFO Ok. I'm looking at /var/log/auth.log and... it's empty O_O. Changing Loglevel to VERBOSE helps nothing - auth.log is still empty. Any hints how I can check sshd log?
Creating an answer based on the comments above, credit to @Prof. Moriarty and @Eye of Hell SSH auth failures are logged here /var/log/auth.log The following should give you only ssh related log lines: grep 'sshd' /var/log/auth.log To be on the safe side, get the last few hundred lines and then search (because if the log file is too large, grep on the whole file would consume more system resources, not to mention will take longer to run) View sshd entries in the last 500 lines of the log: tail -n 500 /var/log/auth.log | grep 'sshd' or to follow the log output as you test: tail -f -n 500 /var/log/auth.log | grep 'sshd'
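When the specific symptom is "Server refused our key", the client side is usually more informative than auth.log; -vvv is a standard OpenSSH client option that shows which keys are offered and why they are rejected:
    ssh -vvv user@yourserver
And if auth.log stays empty even with LogLevel VERBOSE, check that the syslog daemon itself is running and restart it (rsyslog on recent Ubuntu releases, sysklogd on older ones), e.g. /etc/init.d/rsyslog restart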
{ "source": [ "https://serverfault.com/questions/130482", "https://serverfault.com", "https://serverfault.com/users/4634/" ] }
130,543
We're on a corporate network thats running active directory and we'd like to test out some LDAP stuff (active directory membership provider, actually) and so far, none of us can figure out what our LDAP connection string is. Does anyone know how we can go about finding it? The only thing we know is the domain that we're on.
The ASP.NET Active Directory Membership Provider does an authenticated bind to the Active Directory using a specified username, password, and "connection string". The connection string is made up of the LDAP server's name and the fully-qualified path of the container object where the specified user is located. The connection string begins with the URI LDAP://. For the server name, you can use the name of a domain controller in that domain -- let's say "dc1.corp.domain.com". That gives us LDAP://dc1.corp.domain.com/ thus far. The next bit is the fully qualified path of the container object where the binding user is located. Let's say you're using the "Administrator" account and your domain's name is "corp.domain.com". The "Administrator" account is in a container named "Users" located one level below the root of the domain. Thus, the fully qualified DN of the "Users" container would be: CN=Users,DC=corp,DC=domain,DC=com. If the user you're binding with is in an OU, instead of a container, the path would include "OU=ou-name". So, an account in an OU named Service Accounts that's a sub-OU of an OU named Corp Objects that's a sub-OU of a domain named corp.domain.com would have a fully-qualified path of OU=Service Accounts,OU=Corp Objects,DC=corp,DC=domain,DC=com. Combine the LDAP://dc1.corp.domain.com/ with the fully qualified path to the container where the binding user is located (like, say, LDAP://dc1.corp.domain.com/OU=Service Accounts,OU=Corp Objects,DC=corp,DC=domain,DC=com) and you've got your "connection string". (You can use the domain's name in the connection string as opposed to the name of a domain controller. The difference is that the domain's name will resolve to the IP address of any domain controller in the domain. That can be both good and bad. You're not reliant on any single domain controller to be up and running for the membership provider to work, but if the name happens to resolve to, say, a DC in a remote location with spotty network connectivity then you may have problems with the membership provider working.)
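Put together for the ASP.NET membership provider, this ends up in web.config roughly as follows (a sketch reusing the illustrative names above; the service account and password are placeholders, not anything you can copy verbatim):
    <connectionStrings>
      <add name="ADService" connectionString="LDAP://dc1.corp.domain.com/OU=Service Accounts,OU=Corp Objects,DC=corp,DC=domain,DC=com" />
    </connectionStrings>
    <system.web>
      <membership defaultProvider="ADMembership">
        <providers>
          <add name="ADMembership"
               type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
               connectionStringName="ADService"
               connectionUsername="CORP\serviceaccount"
               connectionPassword="placeholder-password" />
        </providers>
      </membership>
    </system.web>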
{ "source": [ "https://serverfault.com/questions/130543", "https://serverfault.com", "https://serverfault.com/users/18317/" ] }
131,105
In the configuration I have set up, I wish to allow Samba and Apache to access /var/www. I am able to set a context to allow Samba access, but then httpd doesn't have access. Setting setenforce to 0 eliminates the issues, so I know that it is SELinux. In addition: How can I view the context of a folder, and can a folder have multiple contexts? (CentOS)
First off, you can view the context of something with ls using ls -Z [root@servername www]# ls -dZ /var/www drwxr-xr-x root root system_u:object_r:httpd_sys_content_t /var/www Second, there are two options for giving Samba and Apache access to the same directory. The simple way is to just allow samba read/write access everywhere with: setsebool -P samba_export_all_rw 1 It's simple, easy, and doesn't mess with any weird properties of SELinux. If you're concerned with Samba having full access to all directories and only want to change /var/www, try: chcon -t public_content_rw_t /var/www setsebool -P allow_smbd_anon_write 1 setsebool -P allow_httpd_anon_write 1 This will allow both Samba and Apache write access to any directories with the public_content_rw_t context. Note that chcon is only modifying /var/www. Any new directories created under /var/www will be public_content_rw_t, but not existing directories like /var/www/html or /var/www/manual. If you want to change everything, add an -R to chcon: chcon -R -t public_content_rw_t /var/www You can look through this CentOS wiki page to get hints on other SELinux booleans.
{ "source": [ "https://serverfault.com/questions/131105", "https://serverfault.com", "https://serverfault.com/users/39985/" ] }
131,107
I have two servers (one LAMP, one Windows) and one website with an associated blog. I'm running the main site on the Windows server, and the blog on the LAMP server, using Wordpress. The main site is accessed at http://folketsting.dk (it's in Danish -- sorry), the blog is accessed at http://blog.folketsting.dk (this link is bad, read on). The main site works fine. The blog works, except for the frontpage. Example of working post: http://blog.folketsting.dk/2009/10/09/ftlive/ . The frontpage of the blog ( http://blog.folketsting.dk ) shows html from http://folketsting.dk however (except for the css and javascript). In fact, any other URL than the frontpage "works", and gets served by Wordpress e.g. http://blog.folketsting.dk/foo . I cannot -- for the life of me -- understand how the LAMP server running http://blog.folketsting.dk manages to serve up content generated by the Windows server running http://folketsting.dk . Looking at the response headers at http://blog.folketsting.dk , it's evident that the content originates from Apache, not IIS. I'm pretty sure it's not a DNS-issue, since the problem is evident even when accessing the raw IP, eg. http://130.226.142.141/ vs. http://130.226.142.141/foo . I'm thinking it's a bad config in Apache... any clues? UPDATE: As requested, here's the apache conf file for the non-working site. Incidentally, another Wordpress blog is running on the server (though not on a subdomain), and it is not exhibiting this quirk. <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName blog.folketsting.dk ServerAlias blog.folketsting.dk DocumentRoot /var/www/blog.folketsting.dk <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost>
{ "source": [ "https://serverfault.com/questions/131107", "https://serverfault.com", "https://serverfault.com/users/275/" ] }
131,474
I don't understand the difference between break and last (flags of rewrite). The documentation is rather abstruse. I've tried to switch between the two in some of my configs, but I couldn't spot any difference in behavior. Can someone please explain these flags in more detail? Preferably with an example that shows different behavior when flipping one flag to another.
You may have different sets of rewrite rules for different locations. When the rewrite module meets last , it stops processing the current set and the rewritten request is passed once again to find the appropriate location (and the new set of rewriting rules). If the rule ends with break , the rewriting also stops, but the rewritten request is not passed to another location. That is, if there are two locations, loc1 and loc2, and there's a rewriting rule in loc1 that changes loc1 to loc2 AND ends with last , the request will be rewritten and passed to location loc2. If the rule ends with break , it will stay in location loc1.
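A minimal sketch to illustrate that difference (the location names and roots are made up):

  location /loc1/ {
      # "last": rewriting stops, location matching runs again, the request lands in /loc2/
      rewrite ^/loc1/(.*)$ /loc2/$1 last;
  }

  location /loc2/ {
      root /var/www/other;
  }

  location /loc3/ {
      # "break": the URI is rewritten to /loc2/..., but the request stays in /loc3/,
      # so it is served from /var/www/main/loc2/... rather than being re-matched
      rewrite ^/loc3/(.*)$ /loc2/$1 break;
      root /var/www/main;
  }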
{ "source": [ "https://serverfault.com/questions/131474", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
131,558
I'm trying to figure out if CentOS is legal (or simply grey). Here's what makes me wonder: They seem to go to great pains not to mention that they are based on Red Hat. In the FAQ they mention a policy about using the Red Hat trademark, but the link no longer exists. When installing it, it's not hard to find a lot of Red Hat code. I don't bother much with the Linux world anymore, but I had a client that was wondering about it, as his auditors picked up on it and wanted to know where his license was.
In a word.. yes http://www.centos.org/modules/news/article.php?storyid=66 in particular, the little snippit below directly from Redhat's legal team (bold by me): We understand that you are distributing, on your web site located at http://www.centos.org , CentOS Enterprise class Linux software that was developed using Red Hat's open source software. While Red Hat permits others to redistribute the software that constitutes Red Hat Linux , Red Hat does not authorize any person to use the RED HAT marks in association with such redistribution in any fashion, except by express agreement. So, basically, anyone has the right to redistribute the software (ie:linux) but cannot use the RedHat name/logo/etc with their distro. Which is why CentOS has removed the logos/name/etc. You can also find more info here: http://www.redhat.com/f/pdf/corp/trademark1.pdf which contains the section titled: "Guidelines For Marketing Software Products Containing Unmodified Red Hat® Linux® Software"
{ "source": [ "https://serverfault.com/questions/131558", "https://serverfault.com", "https://serverfault.com/users/3528/" ] }
131,627
We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently. Now, when Exchange tries to deliver mail to the client's server, it logs the following: A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail' could not be established because the validation of the Transport Layer Security (TLS) certificate for ourclient.com failed with status 'UntrustedRoot. Contact the administrator of ourclient.com to resolve the problem, or remove the domain from the domain-secured list. Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best. The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority. Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
You can use OpenSSL. If you have to check the certificate with STARTTLS , then just do openssl s_client -connect mail.example.com:25 -starttls smtp or for a standard secure smtp port: openssl s_client -connect mail.example.com:465
{ "source": [ "https://serverfault.com/questions/131627", "https://serverfault.com", "https://serverfault.com/users/28400/" ] }
131,816
I did download RubyStack 2.0.3 for VMWare (Ubuntu 9.10) but I cannot download anything on it! It appears that all basic utilities are missing/screwed: bitnami@linux:/var/tmp$ wget -bash: wget: command not found bitnami@linux:/var/tmp$ curl curl: error while loading shared libraries: libcurl.so.4: cannot open shared obj ect file: No such file or directory bitnami@linux:/var/tmp$ man wget -bash: man: command not found bitnami@linux:/var/tmp$ sudo apt-get install wget [sudo] password for bitnami: Reading package lists… Done Building dependency tree Reading state information… Done E: Couldn’t find package wget Any ideas how can I download anything on this machine? (I don't have physical access to it) UPDATE You gotta be kidding me... bitnami@linux:~$ ftp -bash: ftp: command not found bitnami@linux:~$ smbclient -bash: smbclient: command not found
I use Debian, not Ubuntu, but the method should be the same. First, try: sudo bash apt-get update apt-get -f install apt-get install wget If that fails: cat /etc/apt/sources.list Make note of the URL prefix after deb apt-cache show wget look for: Filename: pool/main/w/wget/wget_1.12-1.1_i386.deb Grab that in your local browser, assembling the URL from the prefix in /etc/apt/sources.list and the filename from apt-cache show. scp the file to your machine, then dpkg -i wget_whatever.deb If /etc/apt/sources.list is not set up correctly, try tekhammer's suggestion and then rerun apt-get update.
{ "source": [ "https://serverfault.com/questions/131816", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
131,852
Is there any command to find out if Apache is running or not, and on which port, other than looking at the ports.conf files? When I try the netstat command, Apache does not appear in the output, but when I use the apache2 restart command it says restart OK, so I don't know where it is running.
lsof -i lists open ports and the corresponding applications. For a general check if an app is running you could just use ps aux | grep apache2
{ "source": [ "https://serverfault.com/questions/131852", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
131,872
I have a huge pcap file (generated by tcpdump). When I try to open it in wireshark, the program just gets unresponsive. Is there a way to split a file in set of smaller ones to open them one by one? The traffic captured in a file is generated by two programs on two servers, so I can't split the file using tcpdump 'host' or 'port' filters. I've also tried linux 'split' command :-) but with no luck. Wireshark wouldn't recognize the format.
You can use tcpdump itself with the -C, -r and -w options: tcpdump -r old_file -w new_files -C 10 The "-C" option specifies the size (in millions of bytes) at which to split the output files. E.g. in the above case each new file will be about 10 million bytes.
{ "source": [ "https://serverfault.com/questions/131872", "https://serverfault.com", "https://serverfault.com/users/40428/" ] }
131,888
Although EngineX is running, monit can't seem to figure it out. Here's my monit log: [PDT Apr 13 02:19:19] error : HTTP error: Server returned status 400 [PDT Apr 13 02:19:19] error : 'nginx' failed protocol test [HTTP] at INET[localhost:80] via TCP [PDT Apr 13 02:19:19] info : 'nginx' trying to restart [PDT Apr 13 02:19:19] info : 'nginx' stop: /etc/init.d/nginx [PDT Apr 13 02:19:20] info : 'nginx' start: /etc/init.d/nginx The monitrc file contains the following configuration: if failed port 80 protocol http and request '/ping.txt' # check for response with timeout 20 seconds then restart I can access the file through lynx http://localhost:80/ping.txt without any problems. Why would monit have trouble requesting the file when nginx is running just fine?
{ "source": [ "https://serverfault.com/questions/131888", "https://serverfault.com", "https://serverfault.com/users/28207/" ] }
131,890
I would like to know if it's possible to operate different databases on different filesystem locations. Background: we are a hosting service, which hosts MySQL, web, and SMTP for its customers, but all our services (sql, smtp, http) are located in different places. We are going to assign a single logical volume to a customer, which will accommodate the customer's mail, webpages and (hopefully) SQL database. Web pages and mail are already covered, but I am not able to find a configuration setting which would enable me to specify the location of a database (the directory where MySQL stores the DB). Let me please highlight: the target here is to relocate different databases to different locations in the filesystem, not to move them from one (single) place to another (single) place. Also please do not bother answering with soft and hard symbolic links. ;) Thanks
{ "source": [ "https://serverfault.com/questions/131890", "https://serverfault.com", "https://serverfault.com/users/13364/" ] }
131,893
I have an Ubuntu Server 9.10 box with sshd configured. I have two computers with Windows 7 Professional and PuTTY installed. A day ago, both computers were able to connect to the Ubuntu server both via putty and plink. I have installed sun-java6-jre on the Ubuntu server, and now have a weird problem. The first Windows 7 computer can still connect with both the putty GUI and command-line plink . The second computer can connect via the putty GUI, but if I issue the plink command that works perfectly on the first computer: plink www.hostname.tk -i c:\users\username\documents\key\private.ppk I get a login prompt, enter the same username as on the first computer, and receive the following weird error message: bash: www.hostname.tk: command not found I can't see any difference between my Windows 7 computers :(. The ppk key used is the same (I copied it multiple times both ways). The hostname and username are the same. Anyone have any ideas why this happens and what I can do to troubleshoot and fix it? Updated: Log that plink -v displays: Offered public key Offer of public key accepted Authenticating with public key "imported-openssh-key" Access granted Opened channel for session Started a shell/command bash: www.hostname.tk: command not found Server sent command exit status 127 Disconnected: All channels closed
{ "source": [ "https://serverfault.com/questions/131893", "https://serverfault.com", "https://serverfault.com/users/4634/" ] }
131,942
Surprisingly, it's been tough for me to find the command(s) to do this. Does anyone know how to add a group? Thanks! Or do something like this: # create the MySQL group dscl . create /Groups/mysql # give it some group id dscl . create /Groups/mysql gid 296
I've used these to add dba group: sudo dscl . -create /groups/dba sudo dscl . -append /groups/dba gid 4200 sudo dscl . -append /groups/dba passwd "*"
{ "source": [ "https://serverfault.com/questions/131942", "https://serverfault.com", "https://serverfault.com/users/25834/" ] }
132,352
Can someone explain to me what the exact difference is between named and BIND?
Nothing :) http://en.wikipedia.org/wiki/BIND named is simply the name of the name server daemon that the BIND package provides; BIND is the software suite, and named is its daemon.
{ "source": [ "https://serverfault.com/questions/132352", "https://serverfault.com", "https://serverfault.com/users/26204/" ] }
132,405
A server allows SSH connections, but not using public key authentication. It's not within my power to change this at the moment (due to technical difficulties, not organizational) but I will get on it as soon as possible! What I need now is to execute commands on the server using plain old account+password authentication from a script . That is, I need to do it in a non-interactive way. Is it possible? And how do I do it? The client which will be executing the script runs Ubuntu Server 8.04. The server runs Cygwin and OpenSSH.
There is a linux utility called sshpass . It allows you to do exactly what you want and will take a server password either as a command line argument, or from a file (i prefer this way, so i do not have my server password show up in shell history) and you use it like so: sshpass -f file_with_password ssh user@server ls -la This will ssh into a server and run ls -la . One thing, however, you have to manually ssh into a server first (if you haven't done so already), so the server gets added to your ~/.ssh/known_hosts . If you don't do that, sshpass will not work.
{ "source": [ "https://serverfault.com/questions/132405", "https://serverfault.com", "https://serverfault.com/users/32224/" ] }
132,409
I am having a hard time finding help desk software that allows for drop down hyperlink selection during ticket creation. The situation is that we do external support for client systems and connect via remotely anywhere or logmein. Right now we use a poorly modified php based system that has a customer drop down menu and then a site drop down list that is then parsed by a bit of java script which opens a url. What I am looking for is the ability to store customer site URL information in the database and during the creation of a ticket be able to select the customer name and then select the site there by placing the corresponding site URL in the ticket. The support tech will then be able to click on this link to access the customer's site. Has anyone used or seen help desk software with this feature?
{ "source": [ "https://serverfault.com/questions/132409", "https://serverfault.com", "https://serverfault.com/users/20106/" ] }
132,551
I have a Ubuntu 9.10 server. I have installed apache2 and php5 using the apt-get commands. How does one install php extensions? Are there commands like apt-get to get them? Or should I manually look for the files on the php website and set them up in the php.ini? More specifically, I need mcrypt, curl and gd. Thanks
All you need to do is: sudo apt-get install php5-mcrypt php5-curl php5-gd If you need to check what is installed php-wise you can: dpkg --list | grep php EDIT: Removed sudo in the command above as it's not needed with dpkg --list.
{ "source": [ "https://serverfault.com/questions/132551", "https://serverfault.com", "https://serverfault.com/users/24213/" ] }
132,657
Running CentOS 5.4 Why do I have route to 169.254.0.0 although it does not appear in Network > Ethernet Device > Route configuration dialog? Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.0 * 255.255.255.0 U 0 0 0 eth2 169.254.0.0 * 255.255.0.0 U 0 0 0 eth2 default 192.168.1.1 0.0.0.0 UG 0 0 0 eth2
From this article on the Red Hat Knowledgebase: How do I disable the zeroconf route so that the system will boot without the 169.254.0.0 / 255.255.0.0 route? Symptom: Every time the system boots, the zeroconf route (169.254.0.0) is enabled. You manually disable it by turning off the firewall and removing the route with 169.254.0.0 / 255.255.0.0 using the route command. Example output of the route with the zeroconf route enabled would look similar to the following: # route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.15.50.0 * 255.255.252.0 U 0 0 0 eth0 169.254.0.0 * 255.255.0.0 U 0 0 0 eth0 Solution: To disable the zeroconf route during system boot, edit the /etc/sysconfig/network file and add the following NOZEROCONF value to the end of the file: NETWORKING=YES HOSTNAME=localhost.localdomain NOZEROCONF=yes
{ "source": [ "https://serverfault.com/questions/132657", "https://serverfault.com", "https://serverfault.com/users/40645/" ] }
132,708
How do I find files not belonging to a particular group? find /home -group NOT test
find /home -not -group test or find /home ! -group test The exclamation inverts the match. From man find : ! expr True if expr is false. This character will also usually need protection from interpretation by the shell. -not expr Same as ! expr, but not POSIX compliant. If you want the group it does belong to in the output: find /home ! -group test -printf "%p:%g\n" ./lots/573:root ... Some more information on using find: How do I master the UNIX find command?
{ "source": [ "https://serverfault.com/questions/132708", "https://serverfault.com", "https://serverfault.com/users/40639/" ] }
132,805
For as long as I can remember, I've always used the IP 4.2.2.2 when testing network connectivity using ping . What is significant about this IP, and when did this practice start?
I ping it because it has always been up, and is easy to remember when DNS isn't working. But you might want to read this for more information: http://www.tummy.com/Community/Articles/famous-dns-server/ .
{ "source": [ "https://serverfault.com/questions/132805", "https://serverfault.com", "https://serverfault.com/users/2979/" ] }
132,963
I have a command I am running that produces a ton of output, and I want to silence the output without writing to a file. I have used the following to send all output to a file: command > out.txt 2>&1 ... but again I don't want any file output: command > /dev/null 2>&1 I have used command > /dev/null on my CentOS box before, but I can't find a similar thing for Windows.
You want command > nul 2>&1 .
{ "source": [ "https://serverfault.com/questions/132963", "https://serverfault.com", "https://serverfault.com/users/26581/" ] }
132,970
Here's my situation: I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via ssh . The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first ssh command run against a new virtual instance always comes up with an interactive prompt: The authenticity of host '[hostname] ([IP address])' can't be established. RSA key fingerprint is [key fingerprint]. Are you sure you want to continue connecting (yes/no)? Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image ? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
Set the StrictHostKeyChecking option to no , either in the config file or via -o : ssh -o StrictHostKeyChecking=no [email protected]
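If you would rather keep strict host key checking enabled, a different approach (not implied by the answer above, just a common alternative) is to pre-populate known_hosts from the client before the first connection, for example:

  ssh-keyscan -t rsa newhost.example.com >> ~/.ssh/known_hosts

Since you control the VM image, you could also bake a fixed host key pair into the image and ship the corresponding public key into the clients' known_hosts ahead of time.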
{ "source": [ "https://serverfault.com/questions/132970", "https://serverfault.com", "https://serverfault.com/users/1101/" ] }
132,980
I have newly installed lighttpd on Ubuntu 9.10. At first it showed the default page; then I changed the permissions of the /var/www/ directory to 777 and now it's saying 403 Forbidden. My php-cgi -v: PHP 5.2.10-2ubuntu6.4 with Suhosin-Patch 0.9.7 (cgi-fcgi) (built: Jan 6 2010 22:34:28) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies php -v: PHP 5.2.10-2ubuntu6.4 with Suhosin-Patch 0.9.7 (cli) (built: J 6) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies and I have added these lines to the lighttpd.conf file: fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket" ))) Still getting the same error....
{ "source": [ "https://serverfault.com/questions/132980", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
133,058
In the beginning of a crontab file you could use the MAILTO instruction to indicate you want the output to be sent as an e-mail to an e-mail address. I would like to send the output to multiple addresses. Is it possible (and how) to specify multiple addresses?
It may differ depending exactly which cron daemon package you use, but this is from the manpage of Vixie Cron on Ubuntu Hardy: If MAILTO is defined (and non-empty), mail is sent to the user so named. MAILTO may also be used to direct mail to multiple recipients by separating recipient users with a comma. If MAILTO is defined but empty (MAILTO=""), no mail will be sent. Otherwise mail is sent to the owner of the crontab. If you're not using Vixie Cron, or aren't sure, try the manual page for the crontab file: man 5 crontab Example MAILTO="[email protected],[email protected]"
{ "source": [ "https://serverfault.com/questions/133058", "https://serverfault.com", "https://serverfault.com/users/4801/" ] }
133,692
I guess everyone knows the useful Linux command line utilities head and tail . head allows you to print the first X lines of a file, tail does the same but prints the end of the file. What is a good command to print the middle of a file? Something like middle --start 10000000 --count 20 (print the 10’000’000th till the 10’000’010th lines). I'm looking for something that will deal with large files efficiently. I tried tail -n 10000000 | head 10 and it's horrifically slow.
sed -n '10000000,10000020p' filename You might be able to speed that up a little like this: sed -n '10000000,10000020p; 10000021q' filename In those commands, the option -n causes sed to "suppress automatic printing of pattern space". The p command "print[s] the current pattern space" and the q command "Immediately quit[s] the sed script without processing any more input..." The quotes are from the sed man page . By the way, your command tail -n 10000000 filename | head -n 10 starts at the ten millionth line from the end of the file, while your "middle" command would seem to start at the ten millionth from the beginning which would be equivalent to: head -n 10000010 filename | tail -n 10 The problem is that for unsorted files with variable length lines any process is going to have to go through the file counting newlines. There's no way to shortcut that. If, however, the file is sorted (a log file with timestamps, for example) or has fixed length lines, then you can seek into the file based on a byte position. In the log file example, you could do a binary search for a range of times as my Python script here * does. In the case of the fixed record length file, it's really easy. You just seek linelength * linecount characters into the file. * I keep meaning to post yet another update to that script. Maybe I'll get around to it one of these days.
{ "source": [ "https://serverfault.com/questions/133692", "https://serverfault.com", "https://serverfault.com/users/4801/" ] }
134,183
I have an Apache web server that needs to reverse proxy a site. So example.com/test/ or example.com/test pull from the same other webserver. I have setup a reverse proxy for the one without the trailing slash like this: ProxyPass /test http://othersite.com/test ProxyPassReverse /test http://othersite.com/test But it doesn't work with a trailing slash. Any Ideas? I have tried redirecting from /test/ to /test with no luck. Thanks.
Have you tried to rewrite the url? RewriteEngine on RewriteRule ^/test$ /test/ [R] ProxyRequests Off ProxyPreserveHost On ProxyPass /test/ http://othersite.com/test/ ProxyPassReverse /test/ http://othersite.com/test/
{ "source": [ "https://serverfault.com/questions/134183", "https://serverfault.com", "https://serverfault.com/users/41063/" ] }
134,190
I ran the RTM version of TFS2010 on my server that has TFS 2010 RC on it and it gave me this error message. alt text http://xs.to/image-766A_4BCE01EC.jpg I thought you could upgrade from the RC to the final version. Was I wrong?
{ "source": [ "https://serverfault.com/questions/134190", "https://serverfault.com", "https://serverfault.com/users/3299/" ] }
134,467
From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over a standard ethernet port, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over just running NFS on servers?
A NAS (Networked Attached Storage) is a device serving files via the network. One protocol to accomplish this is NFS. So a NAS can use the NFS protocol (or another protocol). So a Linux server providing NFS exports is, in effect, a NAS device. Is there a good reason to pick a NAS filer over just running NFS on servers? The appliance has the advantage of being pre-packaged and ready out of the box and probably has a web gui to make changes a little more admin friendly. A disadvantage to the appliance is that recovery of the data could be more difficult, if you get in that spot, as the underlying filesystem could be proprietary.
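For illustration, the "Linux server as a NAS" case really is just an export plus the NFS services; a minimal, hypothetical example:

  # /etc/exports on the file server
  /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

  # apply the export
  exportfs -ra

  # on a client
  mount -t nfs fileserver:/srv/share /mnt/share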
{ "source": [ "https://serverfault.com/questions/134467", "https://serverfault.com", "https://serverfault.com/users/2321/" ] }
135,507
I've just run the following in bash: uniq .bash_history > .bash_history and my history file ended up completely empty. I guess I need a way to read the whole file before writing to it. How is that done? PS: I obviously thought of using a temporary file, but I'm looking for a more elegant solution.
I recommend using sponge from moreutils . From the manpage: DESCRIPTION sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before opening the output file. This allows for constructing pipelines that read from and write to the same file. To apply this to your problem, try: uniq .bash_history | sponge .bash_history
{ "source": [ "https://serverfault.com/questions/135507", "https://serverfault.com", "https://serverfault.com/users/41191/" ] }
135,618
Rsync over ssh, works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, provides the following error: rsync -av /source ssh user@remotehost:/target/ protocol version mismatch -- is your shell clean? (see the rsync man page for an explanation) rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6] Here's the relevant section from the rsync man page: This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this: ssh remotehost /bin/true > out.dat then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most com‐ mon cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins. Trying this on my system produced the following in out.dat: ssh-dummy-shell: Command not allowed. As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using fuse with sshfs - however it is extremely slow, and not fit for production use. Is there any chance of getting rsync sftp to work?
Unfortunately not directly. rsync requires a clean link with a shell that will allow it to start the remote copy of rsync , when run this way. If you have some way of running long-lived listening processes on the host you could try starting rsync manually listening for connections on a non-privileged port, but most techniques for doing that would require proper shell access via SSH too, and it relies on the host's firewall arrangements letting connections in on the port you chose (and the host having rsync installed in the first place). Running rsync as a publicly addressable service (rather than indirectly via SSH or similar) is not generally recommended for non-public data though. If your host allows scripting in PHP or similar and does not have it locked down so extra processes cannot be exec'd by user scripts, then you could try starting rsync in listening mode that way. If your end is connectible (you are running SSH accessible to the outside world) you could try this in reverse - have a script run rsync on the server but instead of listening for incoming connections have it contact your local service and sync that way. This still relies on rsync actually being installed on the host which is not a given, or that you can upload a working copy, but does not have the security implications of running an rsync daemon in a publicly addressable fashion and talking to it over an unencrypted channel. Messing around as described above may be against the host's policies though, even if it works at all, and could get you kicked off. You are better off asking if a full shell can be enabled for that account and either abandoning rsync for that host or abandoning that host and moving elsewhere if they will not do that.
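If you do find a way to start a long-lived process on the host, the "rsync listening on a non-privileged port" idea mentioned above would look roughly like this. The paths, module name, user and port are made up, it still assumes rsync is installed on the host, and (as noted above) the traffic is unencrypted:

  # rsyncd.conf on the remote host
  port = 8730
  [backup]
      path = /home/user/target
      read only = no
      auth users = backupuser
      secrets file = /home/user/rsyncd.secrets

  # start the daemon (no root needed for a high port)
  rsync --daemon --config=/home/user/rsyncd.conf

  # from your side
  rsync -av /source/ rsync://backupuser@remotehost:8730/backup/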
{ "source": [ "https://serverfault.com/questions/135618", "https://serverfault.com", "https://serverfault.com/users/1134/" ] }
135,867
How do you grant access to network resources to the LocalSystem (NT AUTHORITY\SYSTEM) account? Background When accessing the network, the LocalSystem account acts as the computer on the network : LocalSystem Account The LocalSystem account is a predefined local account used by the service control manager. ...and acts as the computer on the network. Or to say the same thing again: The LocalSystem account acts as the computer on the network : When a service runs under the LocalSystem account on a computer that is a domain member, the service has whatever network access is granted to the computer account, or to any groups of which the computer account is a member. How does one grant a "computer" access to a shared folder and files? Note : Computer accounts typically have few privileges and do not belong to groups. So how would I grant a computer access to one of my shares, considering that "Everyone" already has access? Note : in a workgroup:
| Account        | Presents credentials |
|----------------|----------------------|
| LocalSystem    | Machine$             |
| LocalService   | Anonymous            |
| NetworkService | Machine$             |
In a domain environment, you can grant access rights to computer accounts; this applies to processes running on those computers as LocalSystem or NetworkService (but not LocalService , which presents anonymous credentials on the network) when they connect to remote systems. So, if you have a computer called MANGO , you'll have an Active Directory computer account called MANGO$ , which you can grant permissions to. Note : You can't do any of this in a workgroup environment; this applies only to domains.
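As a concrete, hypothetical example, granting the computer account of a machine called MANGO in the CORP domain modify rights on a shared folder from the command line might look like this (the domain, path and share name are placeholders):

  rem NTFS permissions on the folder
  icacls "D:\Shares\Data" /grant "CORP\MANGO$":(OI)(CI)M

  rem share-level permissions when (re)creating the share
  net share Data=D:\Shares\Data /GRANT:"CORP\MANGO$",CHANGE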
{ "source": [ "https://serverfault.com/questions/135867", "https://serverfault.com", "https://serverfault.com/users/4822/" ] }
135,906
When do entries in cron.daily (and .weekly and .hourly ) run, and is it configurable? I haven't found a definitive answer to this, and am hoping there is one. I'm running RHEL5 and CentOS 4, but for other distros/platforms would be great, too.
For the distributions you mention: On CentOS 5.4 (Should be same for RHEL5) grep run-parts /etc/crontab 01 * * * * root run-parts /etc/cron.hourly 02 4 * * * root run-parts /etc/cron.daily 22 4 * * 0 root run-parts /etc/cron.weekly 42 4 1 * * root run-parts /etc/cron.monthly So cron.daily runs at 04:02am. Same on CentOS 4.8
{ "source": [ "https://serverfault.com/questions/135906", "https://serverfault.com", "https://serverfault.com/users/2321/" ] }
136,306
What is the IIS7 default time for HTTP keepAlive?
The default connection timeout in IIS7 is 2 minutes. Click on your web site in IIS Mgr, click Advanced Settings, and expand Connection Limits. The Connection Timeout (Seconds) setting is what governs this. If IIS doesn't receive activity on a connection for this duration then it will time the connection out. This is regardless of whether or not the connection was requested as a keep-alive. You will, of course, have to have keep-alives enabled for this to be a "keep-alive timeout". Keep-alive is enabled by default in IIS. You can also set it for the site in the applicationHost.config file using the <limits> and the connectionTimeout attribute. <limits connectionTimeout="00:02:00" /> This will set the timeout value to 2 minutes.
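If you prefer scripting this over the GUI, the same siteDefaults value can be set with appcmd; the property path below is from memory, so verify it against appcmd's help before relying on it:

  %windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/sites /siteDefaults.limits.connectionTimeout:00:02:00 /commit:apphost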
{ "source": [ "https://serverfault.com/questions/136306", "https://serverfault.com", "https://serverfault.com/users/41615/" ] }
136,461
I went to /var/log/cron but this file is empty. How do I check whether crontab is enabled and running properly on Ubuntu? Thanks
Modify the rsyslog config: open /etc/rsyslog.d/50-default.conf and remove the # before cron.* Restart the rsyslog service: sudo service rsyslog restart Restart the cron service: service cron restart Now you can check the cron log in /var/log/cron.log
{ "source": [ "https://serverfault.com/questions/136461", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
136,861
I am using rsync for backups from a remote FTP server to my local computer. I read on the internet that rsnapshot is better. I just want to know which is used in production environments.
rsnapshot uses rsync and cp -al to keep a historical archive with minimal extra storage. in short: there's the 'last' copy, let's call it back-0 the previous copies are called back-1, back-2.... each copy 'seems' to be a full complete copy, but in fact any unchanged file is stored only once. it appears in several directories using hard links. the process is simple, let's say there are currently 4 copies, back-0 through back-3. when rsnapshot is invoked, it: deletes the oldest copy: back-3 ( rm -r back-3 ) renames back-2 to back-3 ( mv back-2 back-3 ) renames back-1 to back-2 ( mv back-1 back-2 ) makes a 'link mirror' from back-0 to back-1 ( cp -al back-0 back-1 ) this creates the back-1 directory but instead of copying each file from back-0 to back-1, it creates a hardlink; in effect, a second reference to the same file. this second name is just as valid as the first one, and the file's data won't be removed from the disk until both names are deleted. performs an rsync from the original storage to back-0. since the previous backup was still on back-0, this rsync is very fast (even on remote links, since it transfers only changes). a file that was changed since the previous backup is replaced on back-0 but not on back-1, breaking the link between them, so now you keep both versions. an unchanged file stays shared between both directories and won't require extra storage to keep the previous copies consistent. once you get familiar with the procedure, you'll find it very handy. it's not complex at all, sometimes i do it manually to keep sporadic 'previous versions' at interesting points of time (just before an important upgrade, just after installing and configuring a system, etc)
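Put together, the cycle described above boils down to something like this simplified shell sketch of what rsnapshot automates (not rsnapshot's actual code):

  rm -rf back-3                          # drop the oldest copy
  mv back-2 back-3
  mv back-1 back-2
  cp -al back-0 back-1                   # hard-link "mirror" of the newest copy
  rsync -a --delete /original/ back-0/   # refresh the newest copy in place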
{ "source": [ "https://serverfault.com/questions/136861", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
137,292
Puppet requires certificates between the client (puppet) being managed and the server (puppetmaster). You can run manually on the client and then go onto the server to sign the certificate, but how do you automate this process for clusters / cloud machines?
On the server (puppetmaster) run: puppetca --generate <NAME> Then copy the following from the server onto the client: /var/lib/puppet/ssl/certs/<NAME>.pem /var/lib/puppet/ssl/certs/ca.pem /var/lib/puppet/ssl/private_keys/<NAME>.pem If you wish to sign <NAME> as something other than the hostname use: puppetd --fqdn=<NAME> And add to /etc/puppet/puppet.conf if running the daemon [puppetd] certname=<NAME>
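If copying certificates around is still too manual for cluster or cloud use, another option (a different technique from the one above, and one that will happily sign anything matching the pattern, so treat it as a convenience/security trade-off) is autosigning on the puppetmaster:

  # /etc/puppet/autosign.conf on the puppetmaster
  *.internal.example.com

New clients whose certnames match the pattern then get their certificate requests signed automatically on their first run.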
{ "source": [ "https://serverfault.com/questions/137292", "https://serverfault.com", "https://serverfault.com/users/7594/" ] }
137,348
At the moment we're trying to decide whether to move our datacenter from the west coast to the east coast. However, I am seeing some disturbing latency numbers from my west coast location to the east coast. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes: West coast to east coast: 215 ms latency, 46 ms transfer time, 261 ms total West coast to west coast: 114 ms latency, 41 ms transfer time, 155 ms total It makes sense that Corvallis, OR is geographically closer to my location in Berkeley, CA so I expect the connection to be a bit faster.. but I'm seeing an increase in latency of +100ms when I perform the same test to the NYC server. That seems .. excessive to me. Particularly since the time spent transferring the actual data only increased 10%, yet the latency increased 100%! That feels... wrong... to me. I found a few links here that were helpful (through Google no less!) ... Does routing distance affect performance significantly? How does geography affect network latency? Latency in Internet connections from Europe to USA ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
Speed of Light: As an interesting academic point, you are not going to beat the speed of light. This link works out Stanford to Boston at ~40ms best possible time. When this person did the calculation he decided the internet operates at "within a factor of two of the speed of light", so there is about ~85ms transfer time. TCP Window Size: If you are having transfer speed issues you may need to increase the TCP receive window size. You might also need to enable window scaling if this is a high bandwidth connection with high latency (called a "Long Fat Pipe"). So if you are transferring a large file, you need to have a big enough receive window to fill the pipe without having to wait for window updates. I went into some detail on how to calculate that in my answer Tuning an Elephant . Geography and Latency: A failing point of some CDNs (Content Distribution Networks) is that they equate latency and geography. Google did a lot of research with their network and found flaws in this; they published the results in the white paper Moving Beyond End-to-End Path Information to Optimize CDN Performance : First, even though most clients are served by a geographically nearby CDN node, a sizeable fraction of clients experience latencies several tens of milliseconds higher than other clients in the same region. Second, we find that queueing delays often override the benefits of a client interacting with a nearby server. BGP Peerings: Also, if you start to study BGP (the core internet routing protocol) and how ISPs choose peerings, you will find it is often more about finances and politics, so you might not always get the 'best' route to certain geographic locations depending on your ISP. You can look at how your IP is connected to other ISPs (Autonomous Systems) using a looking glass router . You can also use a special whois service : whois -h v4-peer.whois.cymru.com "69.59.196.212" PEER_AS | IP | AS Name 25899 | 69.59.196.212 | LSNET - LS Networks 32869 | 69.59.196.212 | SILVERSTAR-NET - Silver Star Telecom, LLC It is also fun to explore these peerings with a GUI tool like linkrank , which gives you a picture of the internet around you.
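On the Linux side, the window scaling and receive window sizing mentioned above are controlled by sysctls; a hedged starting point (the buffer values are illustrative, not a tuned recommendation) looks like:

  sysctl -w net.ipv4.tcp_window_scaling=1
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"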
{ "source": [ "https://serverfault.com/questions/137348", "https://serverfault.com", "https://serverfault.com/users/1/" ] }
137,468
I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios sever. This send_nsca script also prints a single status line to STDOUT, indicating success or failure. 0 * * * * root /usr/local/nagios/sbin/nsca_check_disk This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam. forwarded nsca_check_disk: 1 data packet(s) sent to host successfully. I'm looking for a logging method which: Doesn't spam the messages to email or the console Don't create yet another krufty logfile which requires cleanup months or years later. Capture the log information somewhere, so it can be viewed later if desired. Works on most unixes Fits into an existing log infrastructure. Uses common syslog conventions like 'facility' and 'priority' Can work with third party scripts which don't always do logging internally.
In the process of writing this question, I answered myself. So I'll answer myself "Jeopardy-style". This expands on the answer provided by Dennis Williamson. The following will send any Cron output to /usr/bin/logger (including stderr, which is converted to stdout using 2>&1 ), which will send to syslog, with a 'tag' of nsca_check_disk . Syslog handles it from there. Since these systems (CentOS and FreeBSD) already have built-in log rotation mechanisms, I don't need to worry about a log like /var/log/mycustom.log filling up a disk. */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk /var/log/messages now has one additional message which says this: Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully. I like /usr/bin/logger , because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation and do it well.
{ "source": [ "https://serverfault.com/questions/137468", "https://serverfault.com", "https://serverfault.com/users/36178/" ] }
137,591
Is there any way to prevent local delivery on a Postfix server? Ideally, I want to avoid local delivery for some domains, because this Postfix server is a Google Apps backup one.
In order for postfix to know not to deliver mail for a domain locally, you will need to make changes to a few (if relevant to your setup) config variables in main.cf - from the official postfix docs, you'd need to make sure you remove all domains you don't want to be treated as local from the following variables: mydestination: this usually contains the list of domains delivered locally local_recipient_maps: lookup table containing local recipient addresses local_transport: default transport for local mail - change if inet_interfaces or proxy_interfaces match the destination of a mail virtual_mailbox_domains: same as mydestination, if you're making use of it Beyond that, I recommend you: use postconf at the command line to get quick access to the current values in postfix configuration variables ( man postconf for more detail) visit the postconf/main.cf info page on the official postfix site for all the details Yes, postfix can be complicated - but that's the beauty of its configurable nature. Hope this helps!
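For the backup-MX-for-Google-Apps case specifically, the end result in main.cf would look something like the sketch below (example.com and the lookup table path are placeholders):

  # do not list the backed-up domain in mydestination
  mydestination = localhost.$mydomain, localhost

  # accept mail for it as a relay domain instead of delivering locally
  relay_domains = example.com
  relay_recipient_maps = hash:/etc/postfix/relay_recipients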
{ "source": [ "https://serverfault.com/questions/137591", "https://serverfault.com", "https://serverfault.com/users/17558/" ] }
137,605
For some directories the meaning is easy to understand: /usr, /bin ... But for the next ones, I have no idea: /etc, /opt. opt for optional? etc for electronic t...... configuration (no idea for the t). I would like to know what these abbreviations mean.
Strangely enough, /usr is often expanded as "Unix System Resources", though that is really a backronym; historically it held the users' home directories. "The "etc" in "/etc/bin" really does stand for "etcetera." In early Unix systems, the most important directory was the "bin" directory (short for "binaries" -- compiled programs), and "etc" was for trivial stuff like startup, shutdown and admin. The list of things you need for running Linux is: a program binary, etcetera, etcetera -- in other words, a sole vital item, plus some less important bits and pieces. Today, "etc" holds system-wide configuration files that you'd almost never do without -- hardly unimportant." -- http://searchenterpriselinux.techtarget.com/tip/0,289483,sid39_gci1098161,00.html
{ "source": [ "https://serverfault.com/questions/137605", "https://serverfault.com", "https://serverfault.com/users/6343/" ] }
137,728
I've got an Ubuntu server that boots up in text mode. It rarely has a screen or keyboard attached to it, but when I do attach a screen, I usually have to attach a keyboard too, because the darn console mode screen saver will be on and I'll need to hit a key to see what's going on. I'm aware that the setterm command can disable this, but it's a per-session thing. How can I make it so the machine never ever blanks the screen in text mode, even when it's first booted up and sitting at the login prompt?
In Ubuntu 12.10 and earlier the console-tools package allows console options to be controlled. To turn off screen blanking and powerdown, set BLANK_TIME and POWERDOWN_TIME to 0 in /etc/console-tools/config . If you'd prefer not to modify the config file, the same effect can be achieved by creating a new file in /etc/console-tools/config.d containing the following: BLANK_TIME=0 POWERDOWN_TIME=0 The name of the file in config.d must consist entirely of upper and lower case letters, digits, underscores, and hyphens.
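Two other approaches, offered only as alternatives if the console-tools config above is not available on your release, are the consoleblank kernel parameter or running setterm at boot. Exact details vary by release, so treat this as a sketch:

  # kernel command line (e.g. via GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then update-grub)
  consoleblank=0

  # or in /etc/rc.local, pointing setterm's output at the console
  TERM=linux setterm -blank 0 -powerdown 0 > /dev/console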
{ "source": [ "https://serverfault.com/questions/137728", "https://serverfault.com", "https://serverfault.com/users/1691/" ] }
137,907
I need to restrict access to any files or subdirs in the directory "testdir". My conf: ... location ~* ^.+\.(jpg|txt)$ { root /var/www/site; } location /testdir { deny all; return 404; } ... With my configuration there are no restrictions on the jpg or txt files under /testdir. How can I do this?
to restrict access to multiple directories in nginx in one location entry do ... location ~ /(dir1|dir2|dir3) { deny all; return 404; } ...
{ "source": [ "https://serverfault.com/questions/137907", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
137,965
I'm trying to import a gzipped SQL file into mysql directly. Is this the right way? mysql -uroot -ppassword mydb > myfile.sql.gz
zcat /path/to/file.sql.gz | mysql -u 'root' -p your_database > will write the output of the mysql command on stdout into the file myfile.sql.gz which is most probably not what you want. Additionally, this command will prompt you for the password of the MySQL user "root".
{ "source": [ "https://serverfault.com/questions/137965", "https://serverfault.com", "https://serverfault.com/users/81082/" ] }
138,297
I am aware of using lsof for checking the files currently accessed by a process. Does there exist a way to see all files that an application opens in its lifetime?
Using the strace command it might be possible with something like: strace -e trace=open program [arguments]
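In practice it also helps to follow child processes and write the trace to a file, then filter out the opens that failed; something along these lines:

  strace -f -e trace=open -o /tmp/opens.log program [arguments]
  grep -v ENOENT /tmp/opens.log    # drop files the program looked for but never found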
{ "source": [ "https://serverfault.com/questions/138297", "https://serverfault.com", "https://serverfault.com/users/261/" ] }
138,325
I'm certainly trying to achieve something weird here, but I want to fake the date locally for a shell session on GNU/Linux. I need to black-box test how a program behaves at different dates, and modifying the system-wide date can have unwanted side effects (cron jobs, messed up logs, etc). Any ideas ?
You can just use executable faketime (from ubuntu repositories sudo apt-get install faketime ) by: faketime -f "-15d" date Or even fake time in whole shell by faketime -f "-15d" bash -l
{ "source": [ "https://serverfault.com/questions/138325", "https://serverfault.com", "https://serverfault.com/users/48053/" ] }
138,427
I am running top to monitor my server performance and 2 of my java processes show virtual memory of up to 800MB-1GB. Is that a bad thing? What does virtual memory mean? And oh btw, I have swap of 1GB and it shows 0% used. So I am confused. Java process = 1 Tomcat server + my own java daemon Server = Ubuntu 9.10 (karmic)
Virtual memory isn't even necessarily memory. For example, if a process memory-maps a large file, the file is actually stored on disk, but it still takes up "address space" in the process. Address space (ie. virtual memory in the process list) doesn't cost anything; it's not real. What's real is the RSS (RES) column, which is resident memory. That's how much of your actual memory a process is occupying. But even that isn't the whole answer. If a process calls fork(), it splits into two parts, and both of them initially share all their RSS. So even if RSS was initially 1 GB, the result after forking would be two processes, each with an RSS of 1 GB, but you'd still only be using 1 GB of memory. Confused yet? Here's what you really need to know: use the free command and check the results before and after starting your program (on the +/- buffers/cache line). That difference is how much new memory your newly-started program used.
{ "source": [ "https://serverfault.com/questions/138427", "https://serverfault.com", "https://serverfault.com/users/42159/" ] }
138,949
My company runs an internal DNS for mycompany.example There is a machine on the network that I need to find, but I’ve forgotten its name. If I could see a list, it would probably jog my memory. How can I list all of the domain records for mycompany.example ?
Answer The short answer to your specific question of listing CNAMEs is that you cannot without permission to do zone transfers (see How to list all CNAME records for a given domain? ). That said, if your company's DNS server still supports the ANY query, you can use dig to list the other records by doing: dig +noall +answer +multiline yourdomain.yourtld any These ... +noall +answer +multiline ... are strictly optional and are simply output formatting flags to make the output more easily human readable (see dig man page ). Example $ dig +noall +answer +multiline bad.horse any Returns: bad.horse. 7200 IN A 162.252.205.157 bad.horse. 7200 IN CAA 0 issue "letsencrypt.org" bad.horse. 7200 IN CAA 0 iodef "mailto:[email protected]" bad.horse. 7200 IN MX 10 mx.sandwich.net. bad.horse. 7200 IN NS a.sn1.us. bad.horse. 7200 IN NS b.sn1.us. bad.horse. 7200 IN SOA a.sn1.us. n.sn1.us. ( 2017032202 ; serial 1200 ; refresh (20 minutes) 180 ; retry (3 minutes) 1209600 ; expire (2 weeks) 60 ; minimum (1 minute) ) Caveats (RFC8482) Note that, since around 2019 , most public DNS servers have stopped answering most DNS ANY queries usefully. For background on that, see: RFC8482 - Saying goodbye to ANY If ANY queries do not enumerate multiple records, the only option is to request each record type (e.g. A, CNAME, or MX) individually.
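If you do have zone transfer rights on the internal DNS server (not unusual on a corporate LAN), listing everything is a single AXFR query; ns1.mycompany.example below is a placeholder for one of your authoritative name servers:

  dig @ns1.mycompany.example mycompany.example axfr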
{ "source": [ "https://serverfault.com/questions/138949", "https://serverfault.com", "https://serverfault.com/users/10500/" ] }
138,951
I was just poking around in /usr/bin and I found an ELF binary file called [ . /usr/bin/[ . I have never heard of this file and my first thought was that it was a clever way of hiding a program, possibly a trojan. However it's present on all my CentOS servers and seems to have no manual entry. I can hazard a guess as to what it is but I was looking for a more authoritative answer...
It's an alternative form of the 'test' command, mostly used in scripts, e.g.: if [ "$VAR" ]; then echo "$VAR exists!"; fi (when invoked as [ , it requires a closing ] as its last argument).
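A rough way to convince yourself it really is just a program (and that your shell normally uses its own builtin copy instead of the binary):
    type [                                                   # bash answers: "[ is a shell builtin"
    /usr/bin/[ "$USER" = root ] && echo "running as root"    # calling the binary in /usr/bin directly
    man test                                                 # the manual entry lives under "test"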
{ "source": [ "https://serverfault.com/questions/138951", "https://serverfault.com", "https://serverfault.com/users/11086/" ] }
139,077
Let's say I'm using Amazon's EC2 load balancer. I have it hooked up to two instances (excuse me if my terminology isn't correct). What happens if the load balancer fails? Do both instances fail to work now?
Typically load balancers are clustered together into a high-availability pair. If one load balancer fails, the secondary picks up the failure and becomes active. They have a heartbeat link between them that monitors status. If all load balancers fail (or are accidentally misconfigured), servers down-stream are knocked offline until the problem is resolved, or you manually route around them.
{ "source": [ "https://serverfault.com/questions/139077", "https://serverfault.com", "https://serverfault.com/users/16396/" ] }
139,081
I am running nfs-kernel-server to access my files on my Linux machine (Ubuntu, /share). The disk I have been using is full, so I have added a new disk and mounted it at /share/data. My other PC mounts the /share folder at /mnt/nfs, but it cannot see the contents of /mnt/nfs/data. I have tried adding /share/data to /etc/exports, but it did not help. What do I do? PS: I am looking for a solution other than explicitly mounting /share/data on the second PC.
By default NFS does not export filesystems that are mounted underneath an exported directory, which is why the client sees an empty /mnt/nfs/data. Give /share/data its own entry in /etc/exports and add the nohide option to it (or add crossmnt to the /share export, which has the same effect for everything mounted below it), then re-export with exportfs -ra and remount on the client. The alternative, which you said you want to avoid, is to export /share/data normally and mount it explicitly on the client as a second NFS mount.
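A sketch of what the /etc/exports entries could look like (the subnet and option set are illustrative; adjust to your network):
    /share       192.168.1.0/24(rw,sync,no_subtree_check,crossmnt)
    /share/data  192.168.1.0/24(rw,sync,no_subtree_check,nohide)
After editing, run exportfs -ra on the server and remount /mnt/nfs on the client.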
{ "source": [ "https://serverfault.com/questions/139081", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
139,126
On a Linux server, I need to find all files with a certain file extension in the current directory and all sub-directories. Previously, I have always used the following command: find . -type f | grep -i *.php However, it doesn't find hidden files, for example .myhiddenphpfile.php . The following finds the hidden php files, but not the non-hidden ones: find . -type f | grep -i \.*.php How can I find both the hidden and non-hidden php files in the same command?
find . -type f -name '*.php' Unlike shell globbing, find's -name pattern gives no special treatment to a leading dot, so this matches hidden files such as .myhiddenphpfile.php as well as the regular ones.
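Since the grep in the question used -i, note that -name is case-sensitive; if you also want files like Index.PHP, GNU find has a case-insensitive variant:
    find . -type f -iname '*.php'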
{ "source": [ "https://serverfault.com/questions/139126", "https://serverfault.com", "https://serverfault.com/users/21415/" ] }
139,156
I'm trying to set up password-less login with ssh on Ubuntu Server, but I keep getting: Agent admitted failure to sign using the key and prompt for password. I have generated new rsa keys. Before the system reboot it worked just fine. All the links lead me to this bug , but nothing works. SSH Agent is still not running. How to fix that? Maybe the files need specific permissions?
Just run ssh-add followed by the path to your key on the client (your PC).
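A minimal sequence, assuming your key lives at the default ~/.ssh/id_rsa path (adjust as needed):
    eval "$(ssh-agent -s)"     # start an agent if none is running
    ssh-add ~/.ssh/id_rsa      # add your private key to it
    ssh-add -l                 # confirm the agent now holds the identity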
{ "source": [ "https://serverfault.com/questions/139156", "https://serverfault.com", "https://serverfault.com/users/36982/" ] }
139,323
Is there a secret way to bind MySQL to more than one IP address? As far as I can see the bind-address parameter in the my.cnf does not support more than one IP and you can't have it more than once.
No, there wasn't when this was originally written. In that case you can comment out the bind-address in my.cnf so MySQL listens on all interfaces: #skip-networking #bind-address = 127.0.0.1 If you want only 2 IPs, you will then have to use a firewall. For MySQL version 8.0.13 and above, you can specify a list of comma-separated IP addresses: bind-address = 10.0.0.1,10.0.1.1,10.0.2.1 (see the relevant MySQL documentation). Remember to restart your MySQL instance after changing the config file.
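If you go the comment-out-and-firewall route, an iptables sketch for restricting port 3306 to two addresses (the IPs below are placeholders):
    iptables -A INPUT -p tcp --dport 3306 -s 10.0.0.1 -j ACCEPT
    iptables -A INPUT -p tcp --dport 3306 -s 10.0.1.1 -j ACCEPT
    iptables -A INPUT -p tcp --dport 3306 -j DROP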
{ "source": [ "https://serverfault.com/questions/139323", "https://serverfault.com", "https://serverfault.com/users/1385/" ] }
139,451
I'm about to install Leiningen, which is a useful Bash script for the Clojure programming language. The problem is, I'm not sure where it is appropriate to put an executable script on a Linux system so that it's permanently and stably available. I don't think that anywhere in /home makes sense, but I don't know which directories are supposed to be used for that. /usr/share?
(Note: ~ translates as /home/user in this post) Personally, I put all of my custom-made system scripts in /usr/local/bin and all of my personal bash scripts in ~/bin . Very few programs I install place themselves in the /usr/local/bin directory, so it's not very cluttered, and it was already in the $PATH variable on most of my machines. To add /usr/local/bin to your system path (if it's not already there) add this to /etc/profile : PATH=$PATH:/usr/local/bin export PATH To add ~/bin to your user's path add this to ~/.bash_profile : PATH=$PATH:$HOME/bin export PATH Sometimes the default .bash_profile file will have an if statement that automatically adds ~/bin to $PATH if it exists, so create ~/bin and open a new terminal to see if yours already does this.
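For reference, the stanza that stock profile files often ship with looks roughly like this (taken from the usual Ubuntu/Debian skeleton ~/.profile, so treat the exact wording as approximate):
    # set PATH so it includes user's private bin if it exists
    if [ -d "$HOME/bin" ] ; then
        PATH="$HOME/bin:$PATH"
    fi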
{ "source": [ "https://serverfault.com/questions/139451", "https://serverfault.com", "https://serverfault.com/users/17925/" ] }
139,628
On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass-virtual-hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config: NameVirtualHost 10.10.0.205 <VirtualHost 10.10.0.205> ServerName *.test VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> <VirtualHost 10.10.0.205> ServerName *.dev VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> A hostname such as www.domain.com.dev , correctly resolves to 10.10.0.205, but always selects the top vhost, instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible and must I use another IP for each TLD? apachectl -S outputs (trimmed): 10.10.0.205:* is a NameVirtualHost default server *.test port * namevhost *.test port * namevhost *.dev
Use ServerAlias rather than ServerName alone; ServerName must be a single concrete hostname, and wildcards are only honoured in ServerAlias: ServerName somename.dev ServerAlias *.dev
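Put together with the mass-vhost setup from the question, each block would look something like this (somename.dev is just a dummy canonical name; any hostname you control will do), and the same pattern applies to the *.test block:
    <VirtualHost 10.10.0.205>
        ServerName somename.dev
        ServerAlias *.dev
        VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        CustomLog /var/log/apache2/access.log vhost_combined
    </VirtualHost>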
{ "source": [ "https://serverfault.com/questions/139628", "https://serverfault.com", "https://serverfault.com/users/9082/" ] }
139,632
I'm running a Rails stack on Ubuntu. When I call ps -AF , I get a descriptive process name set by the Apache module like 00:00:43 Rails: /var/www... which is really helpful in diagnosing load issues. But when I call top , the same process shows up simply as ruby Is there any way to get the ps -AF process name in top ?
While top is running, you can press c to toggle between showing the process name and the command line. To remember the toggle state for next time, press W to save the current configuration to ~/.toprc .
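You can also flip that toggle when you start it; per the top man page, the -c option starts top with the remembered command-line/program-name state reversed:
    top -c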
{ "source": [ "https://serverfault.com/questions/139632", "https://serverfault.com", "https://serverfault.com/users/13821/" ] }
139,728
I want to download the ssl certificate from, say https://www.google.com , using wget or any other commands. Any unix command line? wget or openssl?
In order to download the certificate, you need to use the client built into openssl like so: echo -n | openssl s_client -connect $HOST:$PORTNUMBER -servername $SERVERNAME \ | openssl x509 > /tmp/$SERVERNAME.cert That will save the certificate to /tmp/$SERVERNAME.cert . The -servername option is used to select the correct certificate when multiple are presented, in the case of SNI. You can use -showcerts if you want to download all the certificates in the chain, but if you just want the server certificate there is no need to specify it. Note that the openssl x509 at the end keeps only the first certificate, so if you do use -showcerts and want the whole chain, pipe through sed -n '/-----BEGIN/,/-----END/p' instead of openssl x509 . The echo -n closes s_client's input so that the connection is released once the handshake is done, and openssl x509 removes information about the certificate chain and connection details; this is the preferred format to import the certificate into other keystores.
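To sanity-check what you saved, you can inspect the file afterwards:
    openssl x509 -in /tmp/$SERVERNAME.cert -noout -subject -issuer -dates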
{ "source": [ "https://serverfault.com/questions/139728", "https://serverfault.com", "https://serverfault.com/users/35955/" ] }
139,730
I am looking for an alternative to Cisco (too expensive for me!) for semi-pro use (at home, but with advanced features, as I'm studying IT) and in small/medium enterprises. I think I will choose between Linksys (including Cisco Small Business), Netgear and D-Link, but I've never really used these products; what I need is a manufacturer that makes "almost" all types of networking equipment (like Cisco but cheaper). Here are my needs: I need almost all my products to be rackable. I need a good warranty (Netgear lifetime warranty rulez!). I need a "unified" network environment. I made a little comparison of the characteristics that interest me after hours of searching the Internet (based on results found on many websites): (Prices are based on the ldlc-pro.com French website) Hotline/Support Quality: Netgear: Not so bad. Linksys: Not so bad. D-Link: Poor! Most common warranty: Netgear: Unlimited lifetime warranty! Linksys: Limited 3-year warranty. D-Link: Limited 5-year warranty (unlimited in the US, but I'm in France :(...) VPN protocols compatible with routers in endpoint mode: Netgear: Only IPSEC :( Linksys: IPSEC, PPTP, L2TP. D-Link: IPSEC, PPTP, L2TP. Cheapest 8-port Gb switch: Netgear: 30€, Linksys: 47€, D-Link: 30€. Cheapest 48-port + 1Gb uplink(s) managed switch: Netgear: 263€, Linksys: 630€, D-Link: 600€. Cheapest VPN router: Netgear: 100€, Linksys: 80€, D-Link: 60€. Cheapest rackable switch: Netgear: 50€, Linksys: 87€, D-Link: 50€. Cheapest rackable and managed switch: Netgear: 120€, Linksys: 370€, D-Link: 171€. Netgear and D-Link are in the same price range, whereas Linksys is more expensive. I've searched for some other criteria (the full comparison is here, in French with shop/source links: http://forums.jeuxonline.info/showthread.php?t=1072280 ) and made a final score for each manufacturer: SCORE including IP camera sub-score: Netgear: 6.2/10, Linksys: 7.3/10, D-Link: 7.0/10. SCORE excluding IP camera sub-score: Netgear: 6.9/10, Linksys: 7.0/10, D-Link: 6.7/10. In both cases, Linksys wins. So here is my little comparison, but because I've never really used this equipment, I need your help to make a decision on which manufacturer to choose for both my personal and corporate use. So here are the questions: Which manufacturer do you recommend (not Cisco, except Small Business)? Why? Have you called the customer support call center of one of these manufacturers? How was it? Have you had problems or bad experiences with this equipment? Any other advice? ;) Thank you!
{ "source": [ "https://serverfault.com/questions/139730", "https://serverfault.com", "https://serverfault.com/users/35545/" ] }
139,731
Windows Server 2003, IIS6. We're trying to deploy a non-MVC ASP.NET web application as a subdirectory of an MVC application. However the ASP.NET application in the subdirectory is failing with the message "Could not load file or assembly 'System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified." which is bizarre because it's not an MVC application.
The child application inherits the parent MVC application's web.config settings, including its references to the System.Web.Mvc assembly, which the child then fails to resolve. The usual fix is to stop that inheritance: in the parent's web.config, wrap the <system.web> section in <location path="." inheritInChildApplications="false"> so child applications don't pick it up. Alternatively, in the child application's web.config you can explicitly remove or clear the inherited entries (for example the System.Web.Mvc entry under compilation/assemblies and any MVC-specific httpModules).
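A sketch of the parent web.config change (element contents elided; exactly which sections you wrap depends on what your MVC app configures):
    <configuration>
      <location path="." inheritInChildApplications="false">
        <system.web>
          <!-- the existing compilation, pages, httpModules, ... sections move in here -->
        </system.web>
      </location>
    </configuration>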
{ "source": [ "https://serverfault.com/questions/139731", "https://serverfault.com", "https://serverfault.com/users/10344/" ] }
139,870
Like most sysadmins I use openssh all the time. I have about a dozen ssh keys, I like to have a different ssh key for each host. However this causes a problem when I am connecting to a host for the first time, and all I have is a password. I want to just connect to the host using a password, no ssh key in this case. However the ssh client will offer all the public keys in my ~/.ssh/ (I know this from looking at the output of ssh -v ). Since I have so many, I will get disconnected for too many authentication failures. Is there some way to tell my ssh client to not offer all the ssh keys?
This is expected behaviour according to the man page of ssh_config : IdentityFile Specifies a file from which the user's DSA, ECDSA or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa and ~/.ssh/id_rsa for protocol version 2. Additionally, any identities represented by the authentication agent will be used for authentication. [...] It is possible to have multiple identity files specified in configuration files; all these identities will be tried in sequence. Multiple IdentityFile directives will add to the list of identities tried (this behaviour differs from that of other configuration directives). Basically, specifying IdentityFile just adds keys to the list of identities the SSH agent has already presented to the client. Try overriding this behaviour with this at the bottom of your .ssh/config file: Host * IdentitiesOnly yes You can also override this setting at the host level, e.g.: Host foo User bar IdentityFile /path/to/key IdentitiesOnly yes
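For the specific first-connection case in the question you can also bypass keys entirely for a single connection; both of these are standard ssh client options:
    ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password user@newhost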
{ "source": [ "https://serverfault.com/questions/139870", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
139,896
Numerous times I have come across the expression SASL/GSSAPI. I have searched Google many times, but I simply do not understand what it is and how it relates to Kerberos. Does anybody have a simple explanation of this?
SASL stands for Simple Authentication and Security Layer; it's a framework that allows developers to implement different authentication mechanisms, and allows clients and servers to negotiate a mutually acceptable mechanism for each connection (rather than hard-coding or pre-configuring them). GSSAPI stands for Generic Security Services Application Program Interface; it is usually made available as one of the mechanisms that SASL can use. It is itself another framework for developing and implementing various authentication mechanisms. These mechanisms include Kerberos, NTLM, and SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism): a GSSAPI pseudo-mechanism which allows GSSAPI-compatible clients to negotiate which GSSAPI mechanism they want to use. Here's an example to help make this a little clearer (brutally simplified for clarity's sake): Client connects to server and says, "I support SASL! How should I authenticate myself?" Server receives the connection and responds, "I also support SASL, and can use these mechanisms, in descending order of preference: GSSAPI, CRAM-MD5, PLAIN." Client responds, "Of the choices, I'd like to use GSSAPI." Server responds "GSSAPI? Capital. I support Kerberos and NTLM." Client responds "Let's use Kerberos. Here's my encrypted ticket etc. etc."
{ "source": [ "https://serverfault.com/questions/139896", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
140,149
I was watching the funny server type from http://www.reddit.com with curl -I http://www.reddit.com when I guessed that curl -X HEAD http://www.reddit.com would do the same. But, in fact, it doesn't. I'm curious about why. This is what I observe running the two commands: curl -I : works as expected, outputs the header and exists. curl -X HEAD : does not show anything and seems to wait for user input. But, sniffing with tshark I see the second command actually sends the same HTML query and receives the correct answer, but it does not show it and it doesn't close the connection. curl -I 0.000000 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=47267342 TSER=0 WS=6 0.045392 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=2552532839 TSER=47267342 WS=1 0.045441 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=47267353 TSER=2552532839 0.045623 333.33.33.33 -> 213.248.111.106 HTTP HEAD / HTTP/1.1 0.091665 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=2552532886 TSER=47267353 0.861782 213.248.111.106 -> 333.33.33.33 HTTP HTTP/1.1 200 OK 0.861830 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47267557 TSER=2552533656 0.862127 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [FIN, ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47267557 TSER=2552533656 0.910810 213.248.111.106 -> 333.33.33.33 TCP http > 59675 [FIN, ACK] Seq=321 Ack=156 Win=6432 Len=0 TSV=2552533705 TSER=47267557 0.910880 333.33.33.33 -> 213.248.111.106 TCP 59675 > http [ACK] Seq=156 Ack=322 Win=6912 Len=0 TSV=47267570 TSER=2552533705 curl -X HEAD 34.106389 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=47275868 TSER=0 WS=6 34.149507 213.248.111.90 -> 333.33.33.33 TCP http > 51690 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=3920268348 TSER=47275868 WS=1 34.149560 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=47275879 TSER=3920268348 34.149646 333.33.33.33 -> 213.248.111.90 HTTP HEAD / HTTP/1.1 34.191484 213.248.111.90 -> 333.33.33.33 TCP http > 51690 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=3920268390 TSER=47275879 34.192657 213.248.111.90 -> 333.33.33.33 TCP [TCP Dup ACK 15#1] http > 51690 [ACK] Seq=1 Ack=155 Win=6432 Len=0 TSV=3920268390 TSER=47275879 34.823399 213.248.111.90 -> 333.33.33.33 HTTP HTTP/1.1 200 OK 34.823453 333.33.33.33 -> 213.248.111.90 TCP 51690 > http [ACK] Seq=155 Ack=321 Win=6912 Len=0 TSV=47276048 TSER=3920269022 Any idea about why this difference in behaviour?
It seems the difference has to do with the Content-Length header and how it is treated by both commands. But before going into that, curl -X HEAD does not give any output because, by default, curl does not print headers if the -i switch is not provided (not needed on -I though). In any case, curl -I is the proper way to fetch the headers. It just asks for the headers and closes the connection. On the other hand curl -X HEAD -i will wait for the transmission of the number of bytes stated by Content-Length . If no Content-Length is specified, I guess it will wait for some data or for that particular header. Some examples that show this behaviour: $ curl -X HEAD -i http://www.elpais.es HTTP/1.1 301 Moved Permanently Server: AkamaiGHost Content-Length: 0 Location: http://www.elpais.com/ Date: Wed, 12 May 2010 06:35:57 GMT Connection: keep-alive Because Content-Length is 0, in this case both commands behave the same. And the connection is closed afterwards. $ curl -X HEAD -i http://slashdot.org HTTP/1.1 200 OK Server: Apache/1.3.41 (Unix) mod_perl/1.31-rc4 SLASH_LOG_DATA: shtml X-Powered-By: Slash 2.005001296 X-Bender: Since I love you all so much, I'd like to give everyone hugs. X-XRDS-Location: http://slashdot.org/slashdot.xrds Cache-Control: no-cache Pragma: no-cache Content-Type: text/html; charset=iso-8859-1 Content-Length: 115224 Date: Wed, 12 May 2010 06:37:20 GMT X-Varnish: 1649060825 1649060810 Age: 1 Connection: keep-alive curl: (18) transfer closed with 115224 bytes remaining to read In this case, there seems to be a timeout (probably by Varnish), so curl protests that the connection was closed before having received the Content-Length number of bytes.
{ "source": [ "https://serverfault.com/questions/140149", "https://serverfault.com", "https://serverfault.com/users/9404/" ] }
140,354
I am trying to write some custom messages in my dmesg output. I tried: logger "Hello" but this does not work. It exits without error, but no "Hello" appears int the output of: dmesg I am using a Fedora 9, and it seems that there is no syslogd/klogd daemon running. However, all my kernel messages are succesfully written in the dmesg buffer. Any idea?
You can, as root, write to /dev/kmsg to print to the kernel message buffer: fixnum:~# echo Some message > /dev/kmsg fixnum:~# dmesg | tail -n1 [28078118.692242] Some message I've tested this on my server and an embedded Linux device, and it works on both, so I'm just going to assume it works pretty much everywhere.
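The kernel also understands an optional syslog-style priority prefix on what you write (per the dev-kmsg ABI documentation), which is handy if you want your marker recorded at a particular log level:
    echo "<4>deploy: switching to new release" > /dev/kmsg   # <4> corresponds to KERN_WARNING
    dmesg | tail -n1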
{ "source": [ "https://serverfault.com/questions/140354", "https://serverfault.com", "https://serverfault.com/users/39204/" ] }
140,622
I want connections coming in on ppp0 on port 8001 to be routed to 192.168.1.200 on eth0 on port 8080. I've got these two rules -A PREROUTING -p tcp -m tcp --dport 8001 -j DNAT --to-destination 192.168.1.200:8080 -A FORWARD -m state -p tcp -d 192.168.1.200 --dport 8080 --state NEW,ESTABLISHED,RELATED -j ACCEPT and it doesn't work. What am I missing?
First of all, you should check whether forwarding is allowed at all: cat /proc/sys/net/ipv4/conf/ppp0/forwarding cat /proc/sys/net/ipv4/conf/eth0/forwarding If both return 1, it's OK. If not, do the following: echo '1' | sudo tee /proc/sys/net/ipv4/conf/ppp0/forwarding echo '1' | sudo tee /proc/sys/net/ipv4/conf/eth0/forwarding Second, DNAT can only be applied in the nat table, so your rule needs the table specified as well ( -t nat ): iptables -t nat -A PREROUTING -p tcp -i ppp0 --dport 8001 -j DNAT --to-destination 192.168.1.200:8080 iptables -A FORWARD -p tcp -d 192.168.1.200 --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT Both rules apply only to TCP traffic (if you want to handle UDP as well, you need to add similar rules with -p udp ). Last but not least is the routing configuration. Type: ip route and check that 192.168.1.0/24 is among the returned routing entries.
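Those per-interface forwarding flags are reset on reboot; a common way to make forwarding permanent (a sketch using the standard Debian/Ubuntu paths) is via sysctl:
    sudo sysctl -w net.ipv4.ip_forward=1                              # enable globally right now
    echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf     # persist across reboots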
{ "source": [ "https://serverfault.com/questions/140622", "https://serverfault.com", "https://serverfault.com/users/7315/" ] }