Columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
195,611
I have some subdomains I want to redirect to specific ports on the same server. Say I have dev.mydomain.com: I want dev.mydomain.com to transparently redirect to mydomain.com:8080, and I want to preserve the original sub-domain name in the URL of the browser. How do I do this with Apache 2.2? I have Apache 2.2 running on the default port 80. I can't figure out the right configuration to get this to happen. I have already set up dev.mydomain.com to resolve in DNS to mydomain.com. This is for an intranet development server that has a non-routable IP address, so I am not so concerned about exploits and security that would compromise a publicly facing server.
Solution Here is what I finally came up with after being set in the right direction by Miles Erickson. I wanted the address bar to reflect the original subdomain/domain of the request and not the redirected server and port, but he put me on the right path to Google up a solution using VirtualHost and I finally found a solution that included the use of mod_proxy . First, make sure mod_proxy is enabled: sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod proxy_balancer sudo a2enmod lbmethod_byrequests sudo systemctl restart apache2 Next, add the following to your site config (e.g., /etc/apache2/sites-available/000-default.conf ): <VirtualHost *:80> ServerAdmin [email protected] ServerName dev.mydomain.com ProxyPreserveHost On # setup the proxy <Proxy *> Order allow,deny Allow from all </Proxy> ProxyPass / http://localhost:8888/ ProxyPassReverse / http://localhost:8888/ </VirtualHost>
{ "source": [ "https://serverfault.com/questions/195611", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
195,847
I'm new to Linux, trying Ubuntu 10.04, preconfigured by host. When I log in (SSH) using the preconfigured account, the shell prompt is: user@hostname:~$ The up arrow scrolls through the list of recent shell commands, and tab-completion works as expected. However, when I create an account and log in (SSH) using that account, the shell prompt is simply "$", and the up arrow just prints a control character (^[[A). Can anyone tell me how to get my prompt, tab-completion, and up-arrow behavior set up? The fact that I'm getting control characters when I up-arrow makes me think that my account (or session) is in some mode I'm unfamiliar with. I know there is tab-completion code stored in bashrc that I can uncomment, but that doesn't seem to have any effect, even after logging out and logging back in. Am I in some strange mode when I SSH in with the new account, or are there just some session/account settings I need to set up, and can find information for anywhere on the Internet if I just knew what to look for?
My first guess is that the default shell is sh rather than Bash. Use chsh to change it. You will need to log out and log back in to make the change take effect.
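A minimal illustration of the suggested fix, assuming the new account is named "newuser" and that bash lives at /bin/bash (typical on Ubuntu 10.04); both names are placeholders:
# check which shell the account currently uses
grep newuser /etc/passwd
# change the login shell to bash (run as root, or as the user with their password)
chsh -s /bin/bash newuser
After logging out and back in, the prompt, history and tab-completion set up in ~/.bashrc should behave like the preconfigured account.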
{ "source": [ "https://serverfault.com/questions/195847", "https://serverfault.com", "https://serverfault.com/users/58574/" ] }
196,131
I'm launching a simple rsync command between two servers. Both servers have two eth interfaces in bonding. When I send a big file from one server to the other with rsync I reach a 130M/s transfer rate. But, and here is the problem, when I send a directory with lots of small files the transfer is 1M/s at its best. I've checked both CPU loads (8-CPU i7), and they are at 10% maximum. Assuming that what slows the transfer down is the opening/closing of the files, and that this 'theoretically' happens on the CPU, I figured this could be easily tuned. But I do not know how to tune it. Any tip on how to make rsync use all CPUs?
Your problem has (almost) nothing to do with the CPU. Transferring big files is usually fast, since it can be done with sequential I/O. Transferring lots of small files requires tons of horsepower on the storage side of things, since it requires random I/O. Low seek times, fast hard drives, lots of cache and a filesystem designed for a huge number of files are a must. The CPU does not help there, at least not much, just as you are observing. The CPUs and OS are just waiting for disk I/O to finish. All that a faster CPU / more cores can do is end up waiting for I/O faster. :-)
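As a hedged aside, one way to confirm that the bottleneck is the disks rather than the CPU is to watch per-device statistics while the small-file transfer runs; this sketch assumes the sysstat package (which provides iostat) is installed:
# extended per-device statistics, refreshed every second
iostat -x 1
# high %util and await on the data disk, combined with low CPU usage, points to random-I/O saturation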
{ "source": [ "https://serverfault.com/questions/196131", "https://serverfault.com", "https://serverfault.com/users/24812/" ] }
196,160
On MS SQL Server, during the night we like to update the statistics, but this process takes a long time and runs well into the morning. Is it okay to sample by rows where the number of rows is 100,000+? Or will this give the wrong statistics?
{ "source": [ "https://serverfault.com/questions/196160", "https://serverfault.com", "https://serverfault.com/users/43988/" ] }
196,301
How do you disable CPU power management scaling in Windows Server 2008 R2? After setting the Control Panel, Power Management plan to performance and then rebooting -- CPUID's Cpu-Z still shows the clock speed being scaled.
There are 3 Main BIOS settings in the Dell R710 that control this under Power Management: OS Control sets the CPU power to OS DBPM, the fan power to Minimum Power, and the memory power to Maximum Performance. In this setting, all processor performance information is passed from the system BIOS to the operating system for control. The operating system sets the processor performance based on processor utilization. Active Power Controller sets the CPU power to System DBPM , the fan power to Minimum Power, and the memory power to Maximum Performance. The BIOS sets the processor performance based on processor utilization. Maximum Performance sets all fields to Maximum Performance. Source: http://support.dell.com/support/systemsinfo/document.aspx?c=us&cs=555&l=en&s=biz&~file=/systems/pet410/en/hom/html/syssetup.htm We had it set to "System DBPM" so it was ignoring the OS settings. It is worth noting that this was digging into why some of our full text SQL queries were taking so long. After this change we observed that these queries dropped from an average of 1285 ms to 335 ms .
{ "source": [ "https://serverfault.com/questions/196301", "https://serverfault.com", "https://serverfault.com/users/2561/" ] }
196,734
I don't know much about bash. My instructor asked me to make a cat script, observe the output, and then explain what the > operator is and what the difference is between the > and >> operators. I am unable to find any explanations. Could you help?
The > sign is used for redirecting the output of a program to something other than stdout (standard output, which is the terminal by default). The >> appends to a file or creates the file if it doesn't exist. The > overwrites the file if it exists or creates it if it doesn't exist. In either case, the output of the program is stored in the file whose name is provided after the redirection operator. Examples: $ ls > allmyfiles.txt creates the file "allmyfiles.txt" and fills it with the directory listing from the ls command $ echo "End of directory listing" >> allmyfiles.txt adds "End of directory listing" to the end of the file "allmyfiles.txt" $ > newzerobytefile creates a new zero byte file with the name "newzerobytefile" or overwrites an existing file of the same name (making it zero bytes in size)
{ "source": [ "https://serverfault.com/questions/196734", "https://serverfault.com", "https://serverfault.com/users/58802/" ] }
196,929
I have configured Apache to send back a 200 response without serving any file with this configuration line Redirect 200 /hello Can I do this with Nginx? I don't want to serve a file, I just want the server to respond with a 200 (I'm just logging the request). I know I can add an index file and achieve the same thing, but doing it in the config means there's one less thing that can go wrong.
Yes, you can location / { return 200 'gangnam style!'; # because default content-type is application/octet-stream, # browser will offer to "save the file"... # if you want to see reply in browser, uncomment next line # add_header Content-Type text/plain; }
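A quick way to confirm the block behaves as intended, assuming the server answers on localhost and /logme is a hypothetical path being logged, is something like:
# print only the HTTP status code of the reply
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/logme
# expected output: 200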
{ "source": [ "https://serverfault.com/questions/196929", "https://serverfault.com", "https://serverfault.com/users/937/" ] }
196,931
When downloading files from our web server, files will often only download part of the way, then end as if they finished downloading, leaving a partial file. Has anyone heard of this issue or know a possible way to fix it? For example, if you start downloading a 100MB file it may download ~36MB, then finish (no error, it just finishes as if the file was completed). When you try to open the file, of course, it's corrupted or has some error saying that the file isn't all there. We've verified the files on the server are good by copying them back from the server directly and working with them. It seems to be related to how many users are downloading a file. When tested on files not being downloaded by anyone it happens less, and when multiple users download files simultaneously from different sites it happens much more often. We've tested from multiple sites and it occurs seemingly everywhere. The server is running Windows 2000 IIS; unfortunately it cannot be upgraded anytime soon due to funding/red tape issues.
{ "source": [ "https://serverfault.com/questions/196931", "https://serverfault.com", "https://serverfault.com/users/48456/" ] }
196,957
I have Zend Server installed and noticed something like the following was added to my httpd.conf file: <Location /ZendServer> Order Allow,Deny Allow from 127.0.0.1 </Location> Alias /ZendServer "C:\Program Files\Zend\ZendServer\GUI\html" <Directory "C:\Program Files\Zend\ZendServer\GUI\html"> AllowOverride All </Directory> But I can't seem to understand the difference between Location and Directory . I changed to something like the following, which makes more sense to me, and it still works: <Location /ZendServer> AllowOverride All Order Allow,Deny Allow from 127.0.0.1 </Location> Alias /ZendServer "C:\Program Files\Zend\ZendServer\GUI\html" Can I keep my changes or should I put it back the way it was?
Directory directive works only for filesystem objects (e.g. /var/www/mypage, C:\www\mypage), while Location directive works only for URLs (the part after your site domain name, e.g. www.mypage.com/mylocation). The usage is straightforward - you would use Location if you need to fine tune access rights by an URL, and you would use Directory if you need to control access rights to a directory (and its subdirectories) in the filesystem.
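To illustrate the distinction, here is a minimal sketch using the directory from the question plus a hypothetical URL: Directory matches a place on disk, while Location matches a URL path regardless of whether anything on disk corresponds to it (the /server-status example assumes mod_status is loaded).
<Directory "C:\Program Files\Zend\ZendServer\GUI\html">
    # applies to anything served from this directory on disk
    AllowOverride All
</Directory>
<Location /server-status>
    # applies to this URL only; there is no matching file on disk
    SetHandler server-status
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Location>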
{ "source": [ "https://serverfault.com/questions/196957", "https://serverfault.com", "https://serverfault.com/users/31964/" ] }
197,123
What's the best way of getting only the final match of a regular expression in a file using grep? Also, is it possible to begin grepping from the end of the file instead of the beginning and stop when it finds the first match?
You could try grep pattern file | tail -1 or tac file | grep pattern | head -1 or tac file | grep -m1 pattern
{ "source": [ "https://serverfault.com/questions/197123", "https://serverfault.com", "https://serverfault.com/users/41630/" ] }
197,340
I'd like to allow one of my users to execute commands as another user on my Ubuntu Lucid server. I'm struggling to find the syntax for the sudoers file to do this. Say I'm connecting to the box with a user called 'ludo', and I want ludo to be able to execute commands as the 'django' user, e.g.: sudo -u django I'd like to be able to execute /any/ commands as the django user, and without prompting for a password. All the examples I find are for a restricted subset. I did attempt something but got a syntax error upon exiting visudo, so I bottled it. Thanks :)
You can put the user to run as in parentheses before the command list: ludo ALL = (django) NOPASSWD: ALL
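A quick sanity check after editing, offered as a sketch: visudo -c validates the sudoers syntax, and the second command should print "django" without asking ludo for a password.
sudo visudo -c
sudo -u django whoami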
{ "source": [ "https://serverfault.com/questions/197340", "https://serverfault.com", "https://serverfault.com/users/58436/" ] }
198,002
I'm trying to grant all privileges on all tables of a given database to a new postgres user (not the owner). It seems that GRANT ALL PRIVILEGES ON DATABASE my_db TO new_user; does not do that. After running said command successfully (as the postgres user), I get the following as new_user: $ psql -d my_db my_db => SELECT * FROM a_table_in_my_db; ERROR: permission denied for relation a_table_in_my_db Two questions: 1) What does the command above do, then, if not granting all permissions on all tables on my_db? 2) What's the proper way to grant all permissions on all tables to a user? (including on all tables created in the future)
The answers to your questions come from the online PostgreSQL 8.4 docs . GRANT ALL PRIVILEGES ON DATABASE grants the CREATE , CONNECT , and TEMPORARY privileges on a database to a role (users are properly referred to as roles ). None of those privileges actually permits a role to read data from a table; SELECT privilege on the table is required for that. I'm not sure there is a "proper" way to grant all privileges on all tables to a role. The best way to ensure a given role has all privileges on a table is to ensure that the role owns the table. By default, every newly created object is owned by the role that created it, so if you want a role to have all privileges on a table, use that role to create it. PostgreSQL 9.0 introduces the following syntax that is almost what you want: GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO new_user; The rub is that if you create tables in schemas outside the default "public" schema, this GRANT won't apply to them. If you do use non-public schemas, you'll have to GRANT the privileges to those schemas separately.
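For the non-public schema case mentioned above, a sketch of the 9.0+ commands might look like this (the schema name "reporting" is just an example). ALTER DEFAULT PRIVILEGES, also new in 9.0, covers tables created in that schema in the future, though note it only applies to objects created by the role that runs the statement (or by a role named with FOR ROLE).
GRANT USAGE ON SCHEMA reporting TO new_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA reporting TO new_user;
-- applies to tables created in this schema from now on
ALTER DEFAULT PRIVILEGES IN SCHEMA reporting GRANT ALL ON TABLES TO new_user;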
{ "source": [ "https://serverfault.com/questions/198002", "https://serverfault.com", "https://serverfault.com/users/6027/" ] }
198,014
On remote linux-based devices, I thought I'd use logrotate to manage any core files our appliance may create. But it seems logrotate considers every core file a unique file since the filename includes the PID. This breaks the way logrotate normally rotates files. E.g.: core_123 core_222 core_555 Instead of seeing these as 3 variations of the same file, it sees this as 3 unique files. So if I had rotate 50 in /etc/logrotate.d/core , it would be willing to rotate through 50 different core_123 files, and 50 different core_222 files, etc., resulting in potentially hundreds or thousands of files. Instead, I want to ensure that logrotate manages a maximum of 50 core_* files. This is the exact logrotate file I was trying to make work: /mycores/core_* { compress daily maxage 28 missingok nocreate nodelaycompress olddir /mycores/old rotate 50 } I suspect this isn't possible with logrotate, but I figured I'd post on serverfault just in case I missed something in the documentation.
{ "source": [ "https://serverfault.com/questions/198014", "https://serverfault.com", "https://serverfault.com/users/17502/" ] }
198,055
I am playing around with Amazon EC2 and have (finally) managed to SSH into the box from my home machine. Now I want to connect from my work machine but neglected to copy the key pair on a USB key. Is there a way of downloading an existing key pair WITHOUT dropping the instance? Thanks
As far as I know, private key can only be retrieved at the time you create the keypair (via EC2 web management console or via API commandline ). So you have to save the private key somewhere and be able to retrieve it at work in order to connect to the instance via SSH, since keypairs' public keys are automatically installed on EC2 servers when you launch them. Hope that helps.
{ "source": [ "https://serverfault.com/questions/198055", "https://serverfault.com", "https://serverfault.com/users/27327/" ] }
198,058
Have a strange problem with my wired network interface. Here goes: I plug in the cable. Both diodes light up (one green / one orange) and dmesg gives [ 66.847512] tg3: eth0: Link is up at 1000 Mbps, full duplex. [ 66.847516] tg3: eth0: Flow control is off for TX and off for RX. nm-applet (network-management gnome applet) icon starts spinning, but gives up after a while. I terminate the nm-applet and try dhclient eth0 instead. This gives: $ sudo dhclient eth0 Internet Systems Consortium DHCP Client V3.1.2 Copyright 2004-2008 Internet Systems Consortium. All rights reserved. For info, please visit http://www.isc.org/sw/dhcp/ Listening on LPF/eth0/00:16:d3:30:9e:73 Sending on LPF/eth0/00:16:d3:30:9e:73 Sending on Socket/fallback DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 4 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 14 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 13 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 13 ... I thought it might be a hardware problem so I booted on a BSD usb-stick and I can ping fine back and forth. Back in Linux I tried with a USB-ethernet dongle, same result. Tried three different ethernet-connections. Same issue everywhere. This is what ifconfig eth0 gives when I've plugged in the cable: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:16:d3:30:9e:73 inet6 addr: 2001:6b0:1:1de0:216:d3ff:fe30:9e73/64 Scope:Global inet6 addr: fe80::216:d3ff:fe30:9e73/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1056 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:244082 (244.0 KB) TX bytes:4998 (4.9 KB) Interrupt:16 What could the problem be? Update: Some extra information that may or may not be useful: $ sudo mii-tool -v eth0: negotiated 1000baseT-FD flow-control, link ok product info: vendor 00:08:18, model 24 rev 0 basic mode: autonegotiation enabled basic status: autonegotiation complete, link ok capabilities: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD advertising: 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD ...and... $ sudo ethtool eth0 Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on Supports Wake-on: g Wake-on: g Current message level: 0x000000ff (255) Link detected: yes PS: Wireless works just fine.
{ "source": [ "https://serverfault.com/questions/198058", "https://serverfault.com", "https://serverfault.com/users/42849/" ] }
198,203
Judging by the timestamps on my systems, logrotate does its daily log rotation when logrotate is run by cron. However, if I run it earlier than that it doesn't rotate the files. How does logrotate know if should rotate them or not, does it keep a history or perhaps use timestamps?
I believe it's the content of the state file, which in my case is /var/lib/logrotate.status . Each file has one line, which is the date on which it was last rotated; if you run logrotate on a date when a given file is due for rotation, based on the number of days between the current date and the date in the file (1 for daily, 7 for weekly, etc.), the file will be rotated. logrotate doesn't seem to care at what time of day it's run; even if it usually runs at 2355, if you were to run it at 0130 instead, it would still rotate files marked daily and last done yesterday; but having done so it would put today's date into the state file (against any rotated files), so a second run at 2355 would do nothing.
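For illustration only, the state file on many systems looks roughly like the sketch below (exact header, paths and dates will differ); removing or editing an entry makes logrotate treat that file as never having been rotated:
logrotate state -- version 2
"/var/log/syslog" 2010-11-8
"/var/log/apache2/access.log" 2010-11-7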
{ "source": [ "https://serverfault.com/questions/198203", "https://serverfault.com", "https://serverfault.com/users/2561/" ] }
199,434
I'm trying to verify that HTTP persistent connections are being used during communication with a Tomcat webserver I've got running. Currently, I can retrieve a resource on my server from a browser (e.g. Chrome) and verify using netstat that the connection is established: # visit http://server:8080/path/to/resource in Chrome [server:/tmp]$ netstat -a ... tcp 0 0 server.mydomain:webcache client.mydomain:55502 ESTABLISHED However, if I use curl, I never see the connection on the server in netstat. [client:/tmp]$ curl --keepalive-time 60 --keepalive http://server:8080/path/to/resource ... [server:/tmp]$ netstat -a # no connection exists for client.mydomain I've also tried using the following curl command: curl -H "Keep-Alive: 60" -H "Connection: keep-alive" http://server:8080/path/to/resource Here's my client machine's curl version: [server:/tmp]$ curl -V curl 7.19.5 (x86_64-unknown-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 libssh2/1.1 Protocols: tftp ftp telnet dict http file https ftps scp sftp Features: IDN IPv6 Largefile NTLM SSL libz How do I get curl to use a persistent/keepalive connection? I've done quite a bit of Googling on the subject, but with no success. It should be noted that I've also used links on the client machine to retrieve the resource, and that does give me an ESTABLISHED connection on the server. Let me know if I need to provide more information.
curl already uses keepalive by default. As an example: curl -v http://www.google.com http://www.google.com Produces the following: * About to connect() to www.google.com port 80 (#0) * Trying 74.125.39.99... connected * Connected to www.google.com (74.125.39.99) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: www.google.com > Accept: */* > < HTTP/1.1 302 Found < Location: http://www.google.ch/ < Cache-Control: private < Content-Type: text/html; charset=UTF-8 < Set-Cookie: PREF=ID=0dd153a227433b2f:FF=0:TM=1289232886:LM=1289232886:S=VoXSLP8XWvjzNcFj; expires=Wed, 07-Nov-2012 16:14:46 GMT; path=/; domain=.google.com < Set-Cookie: NID=40=sOJuv6mxhQgqXkVEOzBwpUFU3YLPQYf4HRcySE1veCBV5cPtP3OiLPKqvRxL10VLiFETGz7cu25pD_EoUq1f_CkNwOna-xRcFFsCokiFqIbGPrb6DmUO7XhcpMYOt3dB; expires=Tue, 10-May-2011 16:14:46 GMT; path=/; domain=.google.com; HttpOnly < Date: Mon, 08 Nov 2010 16:14:46 GMT < Server: gws < Content-Length: 218 < X-XSS-Protection: 1; mode=block < <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> <TITLE>302 Moved</TITLE></HEAD><BODY> <H1>302 Moved</H1> The document has moved <A HREF="http://www.google.ch/">here</A>. </BODY></HTML> * Connection #0 to host www.google.com left intact * Re-using existing connection! (#0) with host www.google.com * Connected to www.google.com (74.125.39.99) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: www.google.com > Accept: */* > < HTTP/1.1 302 Found < Location: http://www.google.ch/ < Cache-Control: private < Content-Type: text/html; charset=UTF-8 < Set-Cookie: PREF=ID=8b531815cdfef717:FF=0:TM=1289232886:LM=1289232886:S=ifbAe1QBX915QGHr; expires=Wed, 07-Nov-2012 16:14:46 GMT; path=/; domain=.google.com < Set-Cookie: NID=40=Rk86FyMCV3LzorQ1Ph8g1TV3f-h41NA-9fP6l7G-441pLEiciG9k8L4faOGC0VI6a8RafpukiDvaNvJqy8wExED9-Irzs7VdUQYwI8bCF2Kc2ivskb6KDRDkWzMxW_xG; expires=Tue, 10-May-2011 16:14:46 GMT; path=/; domain=.google.com; HttpOnly < Date: Mon, 08 Nov 2010 16:14:46 GMT < Server: gws < Content-Length: 218 < X-XSS-Protection: 1; mode=block < <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> <TITLE>302 Moved</TITLE></HEAD><BODY> <H1>302 Moved</H1> The document has moved <A HREF="http://www.google.ch/">here</A>. </BODY></HTML> * Connection #0 to host www.google.com left intact * Closing connection #0 This snippet: * Connection #0 to host www.google.com left intact * Re-using existing connection! (#0) with host www.google.com Indicates it re-used the same connection. Use the same " curl -v http://my.server/url1 http://my.server/url2 " invocation against your server and check that you see the same message. Consider using tcpdump instead of netstat to see how the packets are handled. netstat will only give you a momentary glimpse of what's happening, whereas with tcpdump you'll see every single packet involved. Another option is Wireshark.
{ "source": [ "https://serverfault.com/questions/199434", "https://serverfault.com", "https://serverfault.com/users/13035/" ] }
199,697
I am setting up a mongoDB replica set and one of the first things I am supposed to do is turn off atime on the file system. After researching this a bit, I am not opposed to doing this, but I have to ask, what uses atime? I have searched the interwebs and found very little in the way of "this application or this process uses atime and you would be stupid if you turned it off" kinds of warnings, but being a paranoid person by nature, I have to wonder. So, what uses atime, and if I turn it off, what could break?
mutt , an email client, uses file access times to monitor for new mail arriving on an mbox-formatted mailbox. Apparently, this problem is not serious, and is easy to work around . Other than that, it is difficult to find examples of things that break on noatime . I run a number of Linux servers with noatime on all filesystems, and I can't recall ever having seen any problems attributable to noatime . If you are concerned about using noatime in general, you could devote a separate filesystem for your mongoDB stuff, and mount only that filesystem with noatime . EDIT I found an interesting blog at kerneltrap.org that quotes some discussions between Linux developers (Linus Torvalds, Ingo Molnar, Alan Cox, and others) on the topic of atime . In Ingo's second email, he says this: ... i've got no real complaint about ext3 - with the obligatory qualification that "noatime,nodiratime" in /etc/fstab is a must. This speeds up things very visibly - especially when lots of files are accessed. It's kind of weird that every Linux desktop and server is hurt by a noticeable IO performance slowdown due to the constant atime updates, while there's just two real users of it: tmpwatch [which can be configured to use ctime so it's not a big issue] and some backup tools. (Ok, and mail-notify too i guess.) Out of tens of thousands of applications. So for most file workloads we give Windows a 20%-30% performance edge, for almost nothing.
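If you go the dedicated-filesystem route, a sketch of the relevant /etc/fstab line might look like this; the device, mount point and filesystem type below are only placeholders:
/dev/sdb1   /var/lib/mongodb   ext3   defaults,noatime   0   2
# apply to an already-mounted filesystem without rebooting
mount -o remount,noatime /var/lib/mongodb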
{ "source": [ "https://serverfault.com/questions/199697", "https://serverfault.com", "https://serverfault.com/users/46020/" ] }
199,743
Our production server is running CentOS release 5.2 (Final). How do I see/get/list all the dependencies of an already installed RPM package? For example: SQLite v3.3.6 is already installed in the server. I want to see all the dependencies of this particular package. Here is the output of the command: rpm -qa |grep sqlite python-sqlite-1.1.7-1.2.1 sqlite-3.3.6-2 sqlite-3.3.6-2 Also, why it is listing 2 entries of sqlite-3.3.6-2 here?
rpm -q --requires somepackagehere One is the i?86 package, the other is the x86_64 package.
{ "source": [ "https://serverfault.com/questions/199743", "https://serverfault.com", "https://serverfault.com/users/35997/" ] }
199,921
I am trying to delete a directory recursively with rm -Force -Recurse somedirectory , I get several "The directory is not empty" errors. If I retry the same command , it succeeds. Example: PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime\_svn: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (RunTime:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (Data:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (FileHelpers.Tests:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit\_svn: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (nunit:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (Libs:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers: The directory is not empty. At line:1 char:3 + rm <<<< -Force -Recurse .\FileHelpers + CategoryInfo : WriteError: (I:\Documents an...net\FileHelpers:DirectoryInfo) [Remove-Item], IOException + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers PS I:\Documents and Settings\m\My Documents\prg\net> Of course, this doesn't happen always . 
Also, it doesn't happen only with _svn directories, and I don't have a TortoiseSVN cache or anything like that so nothing is blocking the directory. Any ideas?
help Remove-Item says: The Recurse parameter in this cmdlet does not work properly. and Because the Recurse parameter in this cmdlet is faulty, the command uses the Get-Childitem cmdlet to get the desire d files, and it uses the pipeline operator to pass them to the Remove-Item cmdlet. and proposes this alternative as an example: get-childitem * -include *.csv -recurse | remove-item So you should pipe get-childitem -recurse into remove-item .
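Applied to the directory from the question, the documented workaround would look roughly like this (a sketch; the second command removes the then-empty top-level folder):
# remove the contents first, then the folder itself
Get-ChildItem .\FileHelpers -Recurse | Remove-Item -Force -Recurse
Remove-Item .\FileHelpers -Force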
{ "source": [ "https://serverfault.com/questions/199921", "https://serverfault.com", "https://serverfault.com/users/10710/" ] }
200,249
I have two physical servers in my home network, Linux ( 192.168.8.x ) and Windows Server 2008 ( 192.168.8.y ). The Linux server is accessible from outside by ssh on a non-standard port (say 23008). How do I establish a permanent RDP tunnel through ssh on the Linux box? I know that I can use PuTTY on the outside machine, but I don't know how to set up sshd on the Linux box correctly. Thanks for any hints!
Assuming your linux box is accessible from the internet at 1.2.3.4 on port 23008, on an external system I would do: external% ssh -p 23008 -L 13389:192.168.8.y:3389 [email protected] I'd then connect to the port-forwarded RDP system with external% rdesktop localhost:13389 If your external box isn't a linux box, there will be equivalent commands for the tools you have; the idea is still the same: to forward external's port 13389 to 192.168.8.y's port 3389, then use external's RDP client to connect to localhost:13389 . You refer to setting up the linux box's sshd correctly, but unless you've reconfigured it, the standard sshd setup is likely to support this just fine.
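Since the question asks for a permanent tunnel, one common approach, offered here only as a sketch, is to let autossh restart the forward if it drops; this assumes the separate autossh package is installed on the external machine, and user@linux-server stands for whatever login you already use over ssh:
autossh -M 0 -f -N -p 23008 \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -L 13389:192.168.8.y:3389 user@linux-server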
{ "source": [ "https://serverfault.com/questions/200249", "https://serverfault.com", "https://serverfault.com/users/55493/" ] }
200,263
I have a project for a cyber coffee shop; I have 10 PCs that I need to reinitialize at boot (by downloading the image and booting from it). I'll be using Linux for the image deployment server and am thinking of a 'multicast'-capable switch. What software and hardware do you recommend?
{ "source": [ "https://serverfault.com/questions/200263", "https://serverfault.com", "https://serverfault.com/users/22376/" ] }
200,268
I have an application in Linux which will log any error in a particular format. Is there any log-analysis or log-monitoring software in Linux which I can configure according to my log format so that in case of an error it will send me an alert?
{ "source": [ "https://serverfault.com/questions/200268", "https://serverfault.com", "https://serverfault.com/users/10303/" ] }
200,468
I am trying to write a script that lists all the hosts on my LAN (there a about 20 of them) and writes the ping status next to each host. I have the DHCP leases file, so I have all the IPs (say, 10.0.0.1, 10.0.0.2, etc.), all I need is the ping status for each host. So, my script launches a single ping for each host: ping -c 1 10.0.0.1 Unfortunately, when a host is offline, the ping takes a long time to timeout. I checked man ping , there seem to be two options to set the timeout delay: -w deadline and -W timeout . I think I'm interested in the latter. So I tried this: ping -c 1 -W 1 10.0.0.1 But waiting one second per offline host is still too long. I tried to set it to below a second, but it does not seem to take the parameter into account at all: ping -c 1 -W 0.1 10.0.0.1 # timeout option is ignored, apparently Is there a way to set the timeout to a lower value? If not, are there any alternatives? Edit The O.S. is Debian Lenny. The hosts I am trying to ping are actually access points. They are on the same vlan and subnet as the users (for simplicity of deployment and replacement). This is why I do not want to scan all the subnet (with a ping -b for example). Edit #2 I accepted the fping solution (thanks for all the other answers). This command does exactly what I was looking for: fping -c1 -t500 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 This command takes at most 500ms to complete, and gives me the ping status of all the hosts at once: 10.0.0.1 : [0], 84 bytes, 5.71 ms (5.71 avg, 0% loss) 10.0.0.2 : [0], 84 bytes, 7.95 ms (7.95 avg, 0% loss) 10.0.0.3 : [0], 84 bytes, 16.1 ms (16.1 avg, 0% loss) 10.0.0.4 : [0], 84 bytes, 48.0 ms (48.0 avg, 0% loss) 10.0.0.1 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 5.71/5.71/5.71 10.0.0.2 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 7.95/7.95/7.95 10.0.0.3 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 16.1/16.1/16.1 10.0.0.4 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 48.0/48.0/48.0 On Debian Lenny, installation is trivial: aptitude update aptitude install fping
fping might be a better tool than the stock ping you are using. What OS are you on? "fping differs from ping in that you can specify any number of targets on the command line, or specify a file containing the lists of targets to ping." "Instead of sending to one target until it times out or replies, fping will send out a ping packet and move on to the next target in a round-robin fashion." "Unlike ping, fping is meant to be used in scripts, so its output is designed to be easy to parse." Example: fping -c1 -t500 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 -c Number of request packets to send to each target. -t Initial target timeout in milliseconds
{ "source": [ "https://serverfault.com/questions/200468", "https://serverfault.com", "https://serverfault.com/users/34850/" ] }
200,635
I currently have this snippet: # flush all chains iptables -F iptables -t nat -F iptables -t mangle -F # delete all chains iptables -X Is there a possibility that some impervious rule will stay alive after running this? The idea is to have a completely clean iptables config, that can be easily replaced by new ruleset (nevermind routes/ifconfig's parameters).
To answer your question succinctly, no: there would not be any "leftover" rules after flushing every table. In the interest of being thorough however, you may want to set the policy for the built-in INPUT and FORWARD chains to ACCEPT , as well: iptables -P INPUT ACCEPT iptables -P FORWARD ACCEPT iptables -P OUTPUT ACCEPT iptables -t nat -F iptables -t mangle -F iptables -F iptables -X Clear ip6tables rules: ip6tables -P INPUT ACCEPT ip6tables -P FORWARD ACCEPT ip6tables -P OUTPUT ACCEPT ip6tables -t nat -F ip6tables -t mangle -F ip6tables -F ip6tables -X ...and that should do it. iptables -nvL should produce this (or very similar) output: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination
{ "source": [ "https://serverfault.com/questions/200635", "https://serverfault.com", "https://serverfault.com/users/54228/" ] }
200,832
So, we host a geoservice webserver thing at the office. Someone apparently broke into this box (probably via ftp or ssh) and put some kind of IRC-managed rootkit thing on it. Now I'm trying to clean the whole thing up. I found the pid of the process that tries to connect via IRC, but I can't figure out which process invoked it (I've already looked with ps, pstree and lsof). The process is a perl script owned by the www user, but ps aux | grep displays a fake file path in the last column. Is there another way to trace that pid and catch the invoker? Forgot to mention: the kernel is 2.6.23, which is exploitable to become root, but I can't touch this machine too much, so I can't upgrade the kernel. EDIT: lsof might help:
lsof -p 9481
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
perl 9481 www cwd DIR 8,2 608 2 /
perl 9481 www rtd DIR 8,2 608 2 /
perl 9481 www txt REG 8,2 1168928 38385 /usr/bin/perl5.8.8
perl 9481 www mem REG 8,2 135348 23286 /lib64/ld-2.5.so
perl 9481 www mem REG 8,2 103711 23295 /lib64/libnsl-2.5.so
perl 9481 www mem REG 8,2 19112 23292 /lib64/libdl-2.5.so
perl 9481 www mem REG 8,2 586243 23293 /lib64/libm-2.5.so
perl 9481 www mem REG 8,2 27041 23291 /lib64/libcrypt-2.5.so
perl 9481 www mem REG 8,2 14262 23307 /lib64/libutil-2.5.so
perl 9481 www mem REG 8,2 128642 23303 /lib64/libpthread-2.5.so
perl 9481 www mem REG 8,2 1602809 23289 /lib64/libc-2.5.so
perl 9481 www mem REG 8,2 19256 38662 /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/IO/IO.so
perl 9481 www mem REG 8,2 21328 38877 /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/auto/Socket/Socket.so
perl 9481 www mem REG 8,2 52512 23298 /lib64/libnss_files-2.5.so
perl 9481 www 0r FIFO 0,5 1068892 pipe
perl 9481 www 1w FIFO 0,5 1071920 pipe
perl 9481 www 2w FIFO 0,5 1068894 pipe
perl 9481 www 3u IPv4 130646198 TCP 192.168.90.7:60321->www.****.net:ircd (SYN_SENT)
If I can give you any advice, it is to stop wasting your time cleaning up. Make an image of the OS for forensic work later, and just reinstall the server. Sorry, but it's the only secure way to recover from being rootkitted. Later you can examine the image to work out why it happened. From my own personal experience, I did this, and later found an internal user who had an SSH key affected by the OpenSSL flaw from 2008. I hope that clears things up. Note: If you are going to image/backup the server before reinstalling, be very careful how you do this. As @dfranke said, boot from a trusted medium to make the backup. You shouldn't connect to other machines from a rooted server, as good rootkits are known to be able to spread through trusted sessions such as SSH.
{ "source": [ "https://serverfault.com/questions/200832", "https://serverfault.com", "https://serverfault.com/users/59977/" ] }
200,949
Is there a way in Unix to see the biggest directories on disk? I need to know why I'm almost out of space on the server, and I don't know where most of the space is being used. Thanks.
Try: du --max-depth=7 /* | sort -n - it won't just tell you directories, and there will be duplicates, but it will list everything 7 levels deep and sort them by size order.
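A variation that is sometimes easier to read, offered as a sketch and assuming GNU coreutils, looks at one level per pass and stays on a single filesystem:
# summarize the first level only, stay on this filesystem, biggest last
du -x --max-depth=1 / | sort -n
# then descend into whichever directory turns out to be the largest
du -x --max-depth=1 /var | sort -n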
{ "source": [ "https://serverfault.com/questions/200949", "https://serverfault.com", "https://serverfault.com/users/43312/" ] }
201,298
PLEASE NOTE: I'm not interested in making this into a flame war! I understand that many people have strongly-held beliefs about this subject, in no small part because they've put a lot of effort into their firewalling solutions, and also because they've been indoctrinated to believe in their necessity. However, I'm looking for answers from people who are experts in security. I believe that this is an important question, and the answer will benefit more than just myself and the company I work for. I've been running our server network for several years without a compromise, without any firewalling at all. None of the security compromises that we have had could have been prevented with a firewall. I guess I've been working here too long, because when I say "servers", I always mean "services offered to the public", not "secret internal billing databases". As such, any rules we would have in any firewalls would have to allow access to the whole Internet. Also, our public-access servers are all in a dedicated datacenter separate from our office. Someone else asked a similar question, and my answer was voted into negative numbers. This leads me to believe that either the people voting it down didn't really understand my answer, or I don't understand security enough to be doing what I'm currently doing. This is my approach to server security: Follow my operating system's security guidelines before connecting my server to the Internet. Use TCP wrappers to restrict access to SSH (and other management services) to a small number of IP addresses. Monitor the state of this server with Munin . And fix the egregious security problems inherent to Munin-node in its default configuration. Nmap my new server (also before connecting my server to the Internet). If I were to firewall this server, this should be the exact set of ports incoming connections should be restricted to. Install the server in the server room and give it a public IP address. Keep the system secure by using my operating system's security updates feature. My philosophy (and the basis of the question) is that strong host-based security removes the necessity of a firewall. The overall security philosophy says that strong host-based security is still required even if you have a firewall (see security guidelines ). The reason for this is that a firewall that forwards public services to a server enables an attacker just as much as no firewall at all. It is the service itself that is vulnerable, and since offering that service to the entire Internet is a requirement of its operation, restricting access to it is not the point. If there are ports available on the server that do not need to be accessed by the whole Internet, then that software needed to be shut down in step 1, and was verified by step 4. Should an attacker successfully break into the server through vulnerable software and open a port themselves, the attacker can (and do) just as easily defeat any firewall by making an outbound connection on a random port instead. The point of security isn't to defend yourself after a successful attack - that's already proven to be impossible - it's to keep the attackers out in the first place. It's been suggested that there are other security considerations besides open ports - but to me that just sounds like defending one's faith. Any operating system/TCP stack vulnerabilities should be equally vulnerable whether or not a firewall exists - based on the fact that ports are being forwarded directly to that operating system/TCP stack. 
Likewise, running your firewall on the server itself as opposed to having it on the router (or worse, in both places) seems to be adding unnecessary layers of complexity. I understand the philosophy "security comes in layers", but there comes a point where it's like building a roof by stacking X number of layers of plywood on top of each other and then drilling a hole through all of them. Another layer of plywood isn't going to stop the leaks through that hole you're making on purpose. To be honest, the only way I see a firewall being any use for servers is if it has dynamic rules preventing all connections to all servers from known attackers - like the RBLs for spam (which coincidentally, is pretty much what our mail server does). Unfortunately, I can't find any firewalls that do that. The next best thing is an IDS server, but that assumes that the attacker doesn't attack your real servers first, and that attackers bother to probe your entire network before attacking. Besides, these have been known to produce large numbers of false positives.
Advantages of firewall: You can filter outbound traffic. Layer 7 firewalls (IPS) can protect against known application vulnerabilities. You can block a certain IP address range and/or port centrally rather than trying to ensure that there is no service listening on that port on each individual machine or denying access using TCP Wrappers . Firewalls can help if you have to deal with less security aware users/administrators as they would provide second line of defence. Without them one has to be absolutely sure that hosts are secure, which requires good security understanding from all administrators. Firewall logs would provide central logs and help in detecting vertical scans. Firewall logs can help in determining whether some user/client is trying to connect to same port of all your servers periodically. To do this without a firewall one would have to combine logs from various servers/hosts to get a centralized view. Firewalls also come with anti-spam / anti-virus modules which also add to protection. OS independent security. Based on host OS, different techniques / methods are required to make the host secure. For example, TCP Wrappers may not be available on Windows machines. Above all this if you do not have firewall and system is compromised then how would you detect it? Trying to run some command 'ps', 'netstat', etc. on local system can't be trusted as those binaries can be replaced. 'nmap' from a remote system is not guaranteed protection as an attacker can ensure that root-kit accepts connections only from selected source IP address(es) at selected times. Hardware firewalls help in such scenarios as it is extremely difficult to change firewall OS/files as compared to host OS/files. Disadvantages of firewall: People feel that firewall will take care of security and do not update systems regularly and stop unwanted services. They cost. Sometimes yearly license fee needs to be paid. Especially if the firewall has anti-virus and anti-spam modules. Additional single point of failure. If all traffic passes through a firewall and the firewall fails then network would stop. We can have redundant firewalls, but then previous point on cost gets further amplified. Stateful tracking provides no value on public-facing systems that accept all incoming connections. Stateful firewalls are a massive bottleneck during a DDoS attack and are often the first thing to fail, because they attempt to hold state and inspect all incoming connections. Firewalls cannot see inside encrypted traffic. Since all traffic should be encrypted end-to-end, most firewalls add little value in front of public servers. Some next-generation firewalls can be given private keys to terminate TLS and see inside the traffic, however this increases the firewall's susceptibility to DDoS even more, and breaks the end-to-end security model of TLS. Operating systems and applications are patched against vulnerabilities much more quickly than firewalls. Firewall vendors often sit on known issues for years without patching, and patching a firewall cluster typically requires downtime for many services and outbound connections. Firewalls are far from perfect, and many are notoriously buggy. Firewalls are just software running on some form of operating system, perhaps with an extra ASIC or FPGA in addition to a (usually slow) CPU. Firewalls have bugs, but they seem to provide few tools to address them. Therefore firewalls add complexity and an additional source of hard-to-diagnose errors to an application stack.
{ "source": [ "https://serverfault.com/questions/201298", "https://serverfault.com", "https://serverfault.com/users/10118/" ] }
201,709
I need to do this: On linux, we have to find a few dynamic libraries which are not on a standard location. We have to set $LD_LIBRARY_PATH to /path/to/sdk/lib How can I do that in Ubuntu 10.10?
To define this variable, simply use (on the shell prompt): export LD_LIBRARY_PATH="/path/to/sdk/lib" To make it permanent, you can edit the ldconfig files. First, create a new file such as: sudo vi /etc/ld.so.conf.d/your_lib.conf Second, add the path in the created file /path/to/sdk/lib Finally, run ldconfig to update the cache. sudo ldconfig
{ "source": [ "https://serverfault.com/questions/201709", "https://serverfault.com", "https://serverfault.com/users/13951/" ] }
201,814
Is there any command line or php script which returns the memcached total memory usage?
As Mike said, you can look at the line including the "STAT bytes" to see memory usage: $ echo "stats" | nc -w 1 <host> <port> | awk '$2 == "bytes" { print $2" "$3 }'
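Since the question also asks about PHP, a sketch using the pecl Memcache extension (assuming memcached listens on 127.0.0.1:11211) could look like this; 'bytes' is the same counter the stats command reports:
<?php
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);
$stats = $mc->getStats();
// current memory usage vs. configured limit, in bytes
echo $stats['bytes'] . ' of ' . $stats['limit_maxbytes'] . " bytes used\n";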
{ "source": [ "https://serverfault.com/questions/201814", "https://serverfault.com", "https://serverfault.com/users/60261/" ] }
202,000
I used MySQLTuner which pointed out some tables were fragmented. I used mysqlcheck --optimize -A to optimize all tables. It fixed some tables but MySQLTuner still finds 19 tables fragmented. how can I see which tables are in need of defragmenting? Maybe OPTIMIZE TABLE will work where mysqlcheck didn't? Or what else should I try?
The short answer: select ENGINE, TABLE_NAME, round(DATA_LENGTH/1024/1024) as data_length, round(INDEX_LENGTH/1024/1024) as index_length, round(DATA_FREE/1024/1024) as data_free from information_schema.tables where DATA_FREE > 0; The "you must know" answer: first of all, you must understand that MySQL tables get fragmented when a row is updated, so it's a normal situation. When a table is created, let's say imported using a dump with data, all rows are stored with no fragmentation in many fixed-size pages. When you update a variable-length row, the page containing this row is divided in two or more pages to store the changes, and these new two (or more) pages contain blank spaces filling the unused space. This does not impact performance, unless of course the fragmentation grows too much. What is too much fragmentation? Well, let's see the query you're looking for: select ENGINE, TABLE_NAME, round(DATA_LENGTH/1024/1024) as data_length, round(INDEX_LENGTH/1024/1024) as index_length, round(DATA_FREE/1024/1024) as data_free from information_schema.tables where DATA_FREE > 0; DATA_LENGTH and INDEX_LENGTH are the space your data and indexes are using, and DATA_FREE is the total number of bytes unused in all the table pages (fragmentation). Here's an example of a real production table: | ENGINE | TABLE_NAME | data_length | index_length | data_free | | InnoDB | comments | 896 | 316 | 5 | In this case we have a table using (896 + 316) = 1212 MB, with a free space of 5 MB. This means a "ratio of fragmentation" of 5/1212 = 0.0041, which is a really low "fragmentation ratio". I've been working with tables with a ratio near 0.2 (meaning 20% blank space) and never noticed a slowdown on queries; even if I optimize the table, the performance is the same. But applying OPTIMIZE TABLE on an 800MB table takes a lot of time and blocks the table for several minutes, which is impracticable in production. So, if you consider what you win in performance versus the time spent optimizing a table, I prefer NOT to OPTIMIZE. If you think it's better for storage, check your ratio and see how much space you can save by optimizing. It's usually not too much, so I prefer NOT to OPTIMIZE. And if you do optimize, the next update will create blank spaces by splitting a page in two or more. But it's faster to update a fragmented table than a non-fragmented one, because if the table is fragmented an update on a row will not necessarily split a page. I hope this helps you.
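To answer the "maybe OPTIMIZE TABLE will work" part of the question directly, a sketch for a single table (the name mydb.comments is just an example) would be one of the following; on InnoDB both end up rebuilding the table, and both lock it while they run:
OPTIMIZE TABLE mydb.comments;
-- equivalent rebuild for InnoDB tables
ALTER TABLE mydb.comments ENGINE=InnoDB;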
{ "source": [ "https://serverfault.com/questions/202000", "https://serverfault.com", "https://serverfault.com/users/58683/" ] }
202,313
My lab is in the process of setting up a small server that holds data (mostly video and image data, plus a few documents) for the project our group is working on at a moment in time. Historically, after a research project ends, the data haphazardly ends up being archived in one hard drive, or a big pile of DVDs (or CDs in the olden days), and/or some of the video ended up in Sony DV cassettes or even VHS tapes (this lab has been active since the early '90s), OR a mixture of all the above... Question: What is the best way for (1) consolidating them ALL into the same format AND storage medium, and (2) what's the best medium for long term archiving of such data for very occasional access (say, 30+ years?)? Unfortunately we don't have enterprise level budget (we are just a ~10 people lab), so can't do things that costs hundreds of thousands of dollars. Thanks! P.S. Considering our old video and images are of smaller resolution, but recent ones are huge, I think we are talking about 30~40 TB for the really old data, another 10~20 TB for recent data, then yearly additions of about 5 TB.
Unfortunately, there is no best way for you. 30 year archival of digital media is a very hard problem and takes routine investment. About the only formats guaranteed to be readable in 30 years are ASCII and UTF8, which are not video formats. Storage formats change, the 8 track reel-to-reel tapes we were using 30 years ago are nigh impossible to read these days even though the data is still on the tape (there is an interesting story about NASA rebuilding a 40 year old tape drive to get at some newly recovered/discovered Apollo data tapes). Your best bet is to commit to periodic, I'd say every 5 years, assessments of your archival environment with sufficient budget to bring old formats into newer formats. You probably know better than I do, but the video landscape is changing rapidly. Realtime online editing is now possible, where it was only doable on seriously good kit even 10 years ago. Who knows how things will look 30 years hence. Set your archival window for 5 years. In the immediate term a largish storage array should suffice ( big and slow 50TB disk can be had for under $70K, possibly well under. An LTO5 tape drive and 50 tapes (well over 50TB worth) can be had for less than $15K. What format you store your video in is up to you. Start finding and converting all of your older stuff into this new storage. At the end of 5 years, do another full assessment of your archival environment. What formats are you using? What are newer formats? What codecs seem to be dead ends, and what media do you have stored encoded that way? Decide how you're going to migrate to newer storage methods (data formats, disk/tape/something-else), and spend appropriately. Repeat 6 times. That should get you to 30 years.
{ "source": [ "https://serverfault.com/questions/202313", "https://serverfault.com", "https://serverfault.com/users/39687/" ] }
203,355
I'm trying to understand why one would add a batterypack to a raid card. It seems to me like if power goes down, running just the raid card is going to do little good: without power for HDs and motherboard, writing in-memory data isn't going to work anyway, right? In addition, doesn't having a UPS facilitate this?
It allows the RAID card to remember what is in its buffers (that hasn't been synced to disk). It's very important for people who need high data integrity, or to save your DB from certain types of corruption. (Basically, what's on disk is on disk - so that's safe. The problem is when the OS thinks something is on disk but it's actually not, and is only sitting in a RAID card buffer.) When the server starts up again, obviously those buffers get flushed to the disks, so you have a point-in-time correlation between your disks and OS (otherwise you will just lose information - like a few database records, which you will never know about). A UPS helps, sure, but it's not safe enough. Every decent RAID card should have a BBU (Battery Backed Unit).
{ "source": [ "https://serverfault.com/questions/203355", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
203,550
I'm running a LAMP server on Fedora 13 that's working fine; however, I just added an ".htaccess" file to my current site's docroot folder that is being completely ignored. I've tried half a dozen different tests, including this one: RewriteEngine on RewriteBase / RewriteRule ^.*$ index.php But images and all other pages load fine, and non-existent files still 404. I also tried this: order deny,allow deny from all But every page still loads just fine. Again the .htaccess file is simply ignored 100%. We put our virtualhost records in /etc/httpd/conf.d/virtual.conf. It looks like this: NameVirtualHost * <VirtualHost *> ServerName intranet DocumentRoot /var/www/default <Directory "/var/www/default"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *> ServerName ourwebsite.com DocumentRoot /var/www/html/ourwebsite.com/docroot <Directory "/var/www/html/ourwebsite.com/docroot"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> What else could be causing our server to completely IGNORE the .htaccess file?? Edit: I changed the .htaccess file to above to better demonstrate that my changes are being ignored. Note that I tried the exact same .htaccess file on the production server and it worked fine. Edit 2: OK, I have new information! Just for testing purposes, I went through and temporarily changed EVERY "AllowOverride" directive to AllowOverride All . I figured out that the very first Directory entry seems to overpower all others: <Directory /> Options FollowSymLinks AllowOverride None </Directory> When I changed that to AllowOverride All , my .htaccess files begin taking effect. It's like all the other AllowOverride All directives in my config files are being ignored! What Gives??
Unbelievable. Remember how I said this is a development server? Yeah.. well here's what my virtual host entry REALLY looks like: <VirtualHost *> ServerName dev.ourwebsite.com DocumentRoot /var/www/html/dev.ourwebsite.com/docroot <Directory "/var/www/html/ourwebsite.com/docroot"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> Do you see it? Well I didn't. I FORGOT to change my "Directory" entry to dev.ourwebsite.com instead of ourwebsite.com -- and that made all the difference. I just assumed that Apache would have thrown an error if the directory didn't exist; but that only applies to the DocumentRoot directive. <Directory> is match-based -- meaning it applies the rules if it matches the incoming request, but otherwise, it doesn't care if you tell it to AllowOverride on magic unicorns. Let this be a lesson to any others who come looking -- when all else fails, consider the almighty Typo.
{ "source": [ "https://serverfault.com/questions/203550", "https://serverfault.com", "https://serverfault.com/users/34350/" ] }
203,567
We have a number of users who have MP3s in their home directories which are stored on our centralized file server. This has a negative effect on how long our backups take, how much drive space we need to have, etc. I thought about sending e-mails out for people to remove it with a notice that it would be deleted by a certain day, but I don't feel that this is the right way to go about this. Many of these employees have music because it helps them work more efficiently, and they don't have quantities that are excessive, but the amount in sum across all the employees is still significant. I have come up with a couple of ideas but each has its own problems: Idea : Allow users to stream music instead of storing it Problem : Takes up too much bandwidth Idea : Move all the music to the users' local machines Problem : This would take significant effort on my department's part and we would then be responsible for doing things like redirecting the default directories for iTunes on people's computers so that data is stored locally Idea : Encourage people to purchase their own portable MP3 players by leveraging our corporate discount to offer employees discounted players Problem : Some of our users listen to podcasts, something that I have found extremely beneficial in my job, and may not have a computer at home to synchronize with What are some good ways to handle respecting our users and getting the productivity and morale benefits that music affords without having to store users' music on our file server?
I'd be tempted to ask senior management just to send out a "remove and don't do it again" email - then you can do a monthly scan and give the management a list of those still doing it. It's not a technical issue so don't make it one.
{ "source": [ "https://serverfault.com/questions/203567", "https://serverfault.com", "https://serverfault.com/users/12890/" ] }
203,613
Remote users connect to several services at our main office over the internet using SSH. The SSH password is synchronized with their LAN A/D account. Would having users bring a copy of an SSH key home using something like a CD or a piece of paper be more secure than having users type in their password? Or should I require both? My question could probably be re-formulated as follows: is there any vulnerability in SSH's password authentication protocol? It's this message that makes me doubt: The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established. RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:. Are you sure you want to continue connecting (yes/no)?
The message you are seeing is a separate issue. It is asking you to confirm that the host you're connecting to is really the one you expect it to be. From the server, you can get the fingerprint by running ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub . Then, when you're connecting remotely for the first time, you can make sure that the fingerprints match. Host keys, seen in action here, address the problem of man in the middle attacks — perhaps DNS has been subverted, and you're connecting to a competitor's machine instead of your own. That machine gathers your credentials and transparently forwards your connection to the real server, stealing your information without you knowing. Making sure the host key matches prevents this from happening unless the attacker has actually stolen your server's public key. A problem remains, though — how do you know which key is right? After the first connection, the public key is stored in your ~/.ssh/known_hosts file, so subsequent connections are fine. But the first time, you either need some out-of-band way of getting the fingerprint, or else you follow the "TOFU" model: trust-on-first-use. But none of this has anything to do with passwords vs. keys, except that both keys and passwords could be stolen via this attack -- in a sense, it's the vulnerability you're asking for. There's (at least) three reasons passwords are worse than keys: They can be brute-forced. A typical user-selected 8-character password has around 30 bits of guessing-entropy. An ssh public/private key pair is 1024 bits or larger. It's effectively impossible to brute-force an ssh key, but automated password guessing happens all the time. They can be dumb. Users routinely select horrible passwords, even with restrictions in place, and they tend to use harder passwords in multiple places. This obviously makes attacks easier. Passwords can be stolen remotely. When you're using SSH, the password is encrypted on the wire, but it's very common for the ssh server to be replaced on compromised systems with one that logs all passwords. With keys, the private key stays on the local system and is never sent at all, and so it can't be stolen without actually compromising the client machine. Additionally, ssh keys offer convenience when used with something like ssh-agent — you get the hassle-free operation of connecting without re-authenticating each time, while still maintaining a reasonable degree of security. There's no significant advantage in asking for both, as someone who has enough access to steal the private key can fairly easily steal the user's password as well. If you need more security than this, consider looking into a two-factor authentication system like RSA SecurID or WiKID .
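For completeness, a rough sketch of how key-based logins usually get set up (user and host names are placeholders):
  ssh-keygen -t rsa -b 2048                 # creates ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public)
  ssh-copy-id user@gateway.example.com      # appends the public key to ~/.ssh/authorized_keys on the server
  ssh user@gateway.example.com              # now authenticates with the key, optionally cached by ssh-agent
Protect the private key with a passphrase so a stolen laptop doesn't immediately equal a stolen credential.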
{ "source": [ "https://serverfault.com/questions/203613", "https://serverfault.com", "https://serverfault.com/users/2050/" ] }
203,685
As a concrete example I want to be able to take a particular tool that isn't installed (say nslookup) and be able to tell which package I need to install when the following fails: apt-get install nslookup E: Unable to locate package nslookup Obviously I can google to find the answer for a specific package (dnsutils) but I want to know how to find it myself.
There are two ways I know of to do this: host ~ # apt-file update host ~ # apt-file search nslookup dnsutils: /usr/bin/nslookup dnsutils: /usr/share/man/man1/nslookup.1.gz gajim: /usr/share/gajim/src/common/nslookup.py kaptain: /usr/share/kaptain/nslookup.kaptn kvirc2-data: /usr/share/kvirc2/help/en/nslookup.kvihelp libgnet2.0-0: /usr/share/doc/libgnet2.0-0/examples/dnslookup.c.gz manpages-ja: /usr/share/man/ja/man8/nslookup.8.gz procmail-lib: /usr/share/procmail-lib/pm-janslookup.rc rbot: /usr/share/rbot/plugins/nslookup.rb scrollz: /usr/share/scrollz/help/nslookup zsh: /usr/share/zsh/4.3.4/functions/Completion/Unix/_nslookup zsh: /usr/share/zsh/4.3.4/functions/Misc/nslookup zsh-beta: /usr/share/zsh-beta/functions/Completion/Unix/_nslookup zsh-beta: /usr/share/zsh-beta/functions/Misc/nslookup and... host ~ # apt-cache search nslookup host - utility for querying DNS servers dnsutils - Clients provided with BIND
{ "source": [ "https://serverfault.com/questions/203685", "https://serverfault.com", "https://serverfault.com/users/15994/" ] }
203,988
I don't understand how to allow syntax highlighting in files I'm editing via vi when using sudo. When I sudo vi <filename> the terminal is only black & white of my terminal settings. In vi if I enter :syntax on nothing changes. When I vi <filename> all the syntax is properly colored. Using RHEL 5.4, relevant env: LS_COLORS=no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:bd=40; 33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01; 32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01; 32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01; 31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01; 31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01; 35:*.png=01;35:*.tif=01;35: SHELL=/bin/bash TERM=xterm-color
Lark's answer is probably the most likely. You may not wish to change your root account's vi to vim permanently: if your resources are low, vi is almost guaranteed to always work; I'm not so sure about vim. You are probably using (color) /usr/bin/vim under your normal user and /bin/vi under sudo or root. You can check by using: which vi once under your normal user, and again via sudo: sudo which vi . Do a man which if you need more details.
{ "source": [ "https://serverfault.com/questions/203988", "https://serverfault.com", "https://serverfault.com/users/60886/" ] }
204,025
I've seen people recommend combining all of these in a flow, but they seem to have lots of overlapping features so I'd like to dig in to why you might want to pass through 3 different programs before hitting your actual web server. nginx: ssl: yes compress: yes cache: yes backend pool: yes varnish: ssl: no (stunnel?) compress: ? cache: yes (primary feature) backend pool: yes haproxy: ssl: no (stunnel) compress: ? cache: no backend pool: yes (primary feature) Is the intent of chaining all of these in front of your main web servers just to gain some of their primary feature benefits? It seems quite fragile to have so many daemons stream together doing similar things. What is your deployment and ordering preference and why?
Simply put.. HaProxy is the best opensource loadbalancer on the market. Varnish is the best opensource static file cacher on the market. Nginx is the best opensource webserver on the market. (of course this is my and many other peoples opinion) But generally, not all queries go through the entire stack. Everything goes through haproxy and nginx/multiple nginx's. The only difference is you "bolt" on varnish for static requests. any request is loadbalanced for redundancy and throughput (good, that's scalable redundancy) any request for static files is first hitting the varnish cache (good, that's fast) any dynamic request goes direct to the backend (great, varnish doesn't get used) Overall, this model fits a scalable and growing architecture (take haproxy out if you don't have multiple servers) Hope this helps :D Note: I'll actually also introduce Pound for SSL queries as well :D You can have a server dedicated to decrypting SSL requests, and passing out standard requests to the backend stack :D (It makes the whole stack run quicker and simpler)
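Purely as an illustration of the load-balancing layer, a minimal HAProxy sketch (all names and addresses here are made up; a real config will need tuning):
  defaults
      mode http
      timeout connect 5s
      timeout client 30s
      timeout server 30s
  frontend http-in
      bind *:80
      default_backend webfarm
  backend webfarm
      balance roundrobin
      server web1 10.0.0.11:80 check
      server web2 10.0.0.12:80 check
In the full stack described above, the servers in the backend would be your Varnish/nginx instances rather than the app servers directly.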
{ "source": [ "https://serverfault.com/questions/204025", "https://serverfault.com", "https://serverfault.com/users/5893/" ] }
204,150
Periodically I notice PowerShell seems to take forever to finish doing whatever it is I told it to do until it occurs to me to "wake it up" by pressing enter. This is not the fault of any one process as best I can tell, as I have even run custom apps that just log their output to the screen every few seconds and even in these cases, PowerShell will stop doing anything after a while until I "give it a kick" by pressing enter. Any ideas what might be causing this?
If the QuickEdit Mode and\or Insert options are checked within the console\window properties, and you click within the console, it will pause the output. If those options are not checked, the output can't be paused by clicking within the console. To get to these settings, right-click on the PowerShell-Logo in the top-left of your terminal window, then select 'Properties' (at least that's one way to do it)
{ "source": [ "https://serverfault.com/questions/204150", "https://serverfault.com", "https://serverfault.com/users/12248/" ] }
204,265
I have a centos server and I want to run a job on it at 11PM every 2 days, how do I do that?
You can use the following cron arrangement. The fields denote (from left-to-right): Minute, Hour, Day of Month, Month, Day of Week. The "*/2" in the Day of Month field means "every two days". 0 23 */2 * * insert_your_script_here.sh
{ "source": [ "https://serverfault.com/questions/204265", "https://serverfault.com", "https://serverfault.com/users/48055/" ] }
204,303
I want to copy about 200 directories & subdirectories from one location to another but I don't want to copy the thousands of files within those directories. I am on Linux. Note: I don't have enough space to copy everything then delete all the files.
Just found this: rsync -a -f"+ */" -f"- *" source/ destination/ http://psung.blogspot.com/2008/05/copying-directory-trees-with-rsync.html
{ "source": [ "https://serverfault.com/questions/204303", "https://serverfault.com", "https://serverfault.com/users/3255/" ] }
204,417
I'm trying to view the Shutdown Event Tracker logs in the Event Viewer on Windows Server 2008 R2, but I can't find the messages that I supplied when I previously restarted the server. Where in the Event Viewer can I see these logs?
Open event viewer. Expand windows logs. Click system, then either find or filter for event ID 1074. And you will see all your shut down logs.
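If you'd rather do it from PowerShell, something along these lines should pull the same events (a sketch; adjust the formatting to taste):
  Get-WinEvent -FilterHashtable @{LogName='System'; Id=1074} | Format-List TimeCreated, Message
The Message field contains the comment you typed into the Shutdown Event Tracker dialog.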
{ "source": [ "https://serverfault.com/questions/204417", "https://serverfault.com", "https://serverfault.com/users/42449/" ] }
204,893
I am trying to compile Node.js on Amazon EC2, but I can't even install "build essential". Where's the problem? Thanks. sudo yum install build-essential Loaded plugins: fastestmirror, security Loading mirror speeds from cached hostfile (...) No package build-essential available. Error: Nothing to do ./configure Checking for program g++ or c++ : not found Checking for program icpc : not found Checking for program c++ : not found error: could not configure a cxx compiler! could not configure a cxx compiler!
build-essential is a package that resides in aptitude (Debian), not in Yum (RHEL). Maybe you should rephrase your question to provide more information about the core issue--i.e., installing EC2 tools? The (rough) equivalent of the build-essential meta-package for yum is: yum install make glibc-devel gcc patch
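On RHEL/CentOS there is also a package group that pulls in roughly the same toolchain as build-essential; if your repositories expose it, this one-liner may be all you need:
  sudo yum groupinstall "Development Tools"
That gives you gcc, g++, make, patch and friends in one go.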
{ "source": [ "https://serverfault.com/questions/204893", "https://serverfault.com", "https://serverfault.com/users/61179/" ] }
205,094
I have an EC2 instance running, and it belongs to a security group. If I add a new allowed connection to that security group through AWS Management Console, should that change be effective immediately ? Or perhaps only after restart of the instance? In my case, I'm trying to allow access to PostgreSQL's default port (tcp 5432 5432 0.0.0.0/0), and I'm not sure if it's the EC2 firewall or PostgreSQL's settings that are refusing the connection.
Seems like yes (quoting AWS documentation ): You can modify rules for a group at any time. The new rules are automatically enforced for all running instances and instances launched in the future. A simple test of disallowing access to a certain (previously accessible) port also confirmed this.
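If you want to script the rule change instead of clicking through the console, a rough sketch with the AWS command-line tools (the group name is a placeholder):
  aws ec2 authorize-security-group-ingress --group-name my-db-group --protocol tcp --port 5432 --cidr 0.0.0.0/0
The rule takes effect on running instances just as it does when added via the console.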
{ "source": [ "https://serverfault.com/questions/205094", "https://serverfault.com", "https://serverfault.com/users/1746/" ] }
205,097
I have written a script in the monit interface for the Webmin service. I can execute the process, but I am unable to restart the service. check process webmin with pidfile /var/webmin/miniserv.pid start = "/etc/init.d /webmin start" stop = "/etc/init.d /webmin stop" if failed host in1.miracletel.com port 10000 then restart if 5 restarts within 5 cycles then timeout #if changed pid 2 times within 2 cycles then alert Would you please look into this and let me know whether I can consider the service configuration correct or not?
Seems like yes (quoting AWS documentation ): You can modify rules for a group at any time. The new rules are automatically enforced for all running instances and instances launched in the future. A simple test of disallowing access to a certain (previously accessible) port also confirmed this.
{ "source": [ "https://serverfault.com/questions/205097", "https://serverfault.com", "https://serverfault.com/users/61227/" ] }
205,498
I want to start a process (e.g. myCommand) and get its PID (to allow me to kill it later). I tried ps and filtering by name, but I cannot distinguish the process by the name myCommand: ps ux | awk '/<myCommand>/ {print $2}' because process names are not unique. I can run the process with: myCommand & and I found that I can get this PID by: echo $! Is there any simpler solution? I would be happy to execute myCommand and get its PID as the result of a one-line command.
What can be simpler than echo $! ? As one line: myCommand & echo $!
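A small usage sketch, capturing the PID right away so it can be signalled later (myCommand is of course a placeholder):
  myCommand &          # start it in the background
  pid=$!               # grab the PID immediately
  echo "started with PID $pid"
  # ... later ...
  kill "$pid"
Note that $! always refers to the most recently backgrounded job, so save it straight away.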
{ "source": [ "https://serverfault.com/questions/205498", "https://serverfault.com", "https://serverfault.com/users/61345/" ] }
206,042
Today, one of our developers had his laptop stolen from his house. Apparently, he had a full svn checkout of the company's source code, as well as a full copy of the SQL database. This is one massive reason why I'm personally against allowing company work on personal laptops. However, even if this had been a company owned laptop, we'd still have the same problem, although we would be in a slightly stronger position to enforce encryption (WDE) on the whole disk. Questions are these: What does your company do about company data on non company owned hardware? Is WDE a sensible solution? Does it produce a lot of overhead on reads/writes? Other than changing passwords for things that were stored/accessed from there, is there anything else you can suggest?
The problem is that allowing people to do unpaid overtime on their own kit is very cheap, so managers aren't so willing to stop it; but they will of course be happy to blame IT when there's a leak... Only a strongly enforced policy is going to prevent this. It's down to management where they want to strike the balance, but it's very much a people problem. I've tested WDE (Truecrypt) on laptops with admin-level workloads and it's really not that bad, performance-wise; the I/O hit is negligible. I've several developers keeping ~20GB working copies on it, too. It's not a 'solution' in itself (it won't stop the data being slurped off an unsecured machine while it's booted, for instance), but it certainly closes a lot of doors. How about a blanket ban on all externally held data, followed by some investment in remote desktop services, a decent VPN and the bandwidth to support it. That way all code stays inside the office; the users get a session with local network access to resources; and home machines just become dumb terminals. It won't suit all environments (intermittent access or high latency might be a deal-breaker in your case) but it's worth considering if home working is important to the company.
{ "source": [ "https://serverfault.com/questions/206042", "https://serverfault.com", "https://serverfault.com/users/16732/" ] }
206,544
I put "exit" in my .bashrc file. I don't have physical access to the machine so to connect to it I use ssh. I don't have root privileges. Every time I connect to the server, the connection automatically closes. So far, I've tried: Overwriting .bashrc with scp and sftp. The connection closes before I can do anything. Using a few different GUI programs to access ssh (connection closes) Overwriting the file with ftp. (can't use ftp) From my home computer $ ssh host "bash --noprofile --norc" (connection closes) $ ssh host "mv .bashrc bashrc_temp" (connection closes) $ ssh host "rm .bashrc" (same thing) $ ssh host -t (connection closes) Is there anything I can do to disable .bashrc or maybe overwrite the file before .bashrc is sourced? UPDATE @ring0 I tried your suggestion, but no luck. The bashrc file still runs first. Another thing I tried was logging in with another account and sudo editing the .bashrc, but I don't have sudo privileges on this account. I guess I'll contact the admin. EDIT @shellholic I can't believe it, but this approach worked! Even though "exit" occurs within the first few lines (composed only of a few if blocks and export statements) in the .bashrc file, I still managed to Ctrl-c interrupt it successfully within twenty tries (took about 3 minutes). I removed the offending line in the .bashrc and everything is in working order again.
you can try to abort (ctrl+C) before the exit part of your .bashrc is executed. I tried by adding the following at the top of a testuser's bashrc, it works, it's just a matter of timing. Very easy in my case: sleep 3 echo "Too late... bye" exit 0
{ "source": [ "https://serverfault.com/questions/206544", "https://serverfault.com", "https://serverfault.com/users/61720/" ] }
206,560
I have a MySQL-MMM cluster with three database servers (two masters and one slave). Recently replication was broken by someone directly inserting to the slave database servers. After I discovered this I reestablished replication from the db1 system to the db2 and db3 systems. Replication is now running and mmm_control show is showing the servers as all online: [root@host ~]# mmm_control show db1(10.1.0.21) master/ONLINE. Roles: reader(10.1.0.31), writer(10.1.0.30) db2(10.1.0.22) master/ONLINE. Roles: reader(10.1.0.32) db3(10.1.0.23) slave/ONLINE. Roles: reader(10.1.0.33) However when I check all of the status checks, I see that db1 has broken replication: [root@host ~]# mmm_control checks all db2 ping [last change: 2010/11/24 03:57:48] OK db2 mysql [last change: 2010/11/27 03:21:42] OK db2 rep_threads [last change: 2010/11/27 03:23:19] OK db2 rep_backlog [last change: 2010/11/24 03:57:48] OK: Backlog is null db3 ping [last change: 2010/11/24 03:58:15] OK db3 mysql [last change: 2010/11/27 03:19:21] OK db3 rep_threads [last change: 2010/11/27 03:23:06] OK db3 rep_backlog [last change: 2010/11/24 03:58:23] OK: Backlog is null db1 ping [last change: 2010/11/24 03:57:48] OK db1 mysql [last change: 2010/11/27 03:22:27] OK db1 rep_threads [last change: 2010/11/27 02:14:46] ERROR: Replication is broken db1 rep_backlog [last change: 2010/11/24 03:58:00] OK: Backlog is null What do I need to do to fix replication for db1 since it appears that the databases are in sync?
you can try to abort (ctrl+C) before the exit part of your .bashrc is executed. I tried by adding the following at the top of a testuser's bashrc, it works, it's just a matter of timing. Very easy in my case: sleep 3 echo "Too late... bye" exit 0
{ "source": [ "https://serverfault.com/questions/206560", "https://serverfault.com", "https://serverfault.com/users/8639/" ] }
206,561
I'm setting up a new web/database server that will perform a lot of read/write operations. So I want to use a RAID controller and use RAID 10. I need some help to decide what kind of hard drives I should get? VelociRaptor? SAS? (Is it worth the cost?)
you can try to abort (ctrl+C) before the exit part of your .bashrc is executed. I tried by adding the following at the top of a testuser's bashrc, it works, it's just a matter of timing. Very easy in my case: sleep 3 echo "Too late... bye" exit 0
{ "source": [ "https://serverfault.com/questions/206561", "https://serverfault.com", "https://serverfault.com/users/61705/" ] }
206,738
I get this error every few minutes when using mod_proxy as a reverse proxy to a SOAP web service. There are probably 3 or 4 requests going per second, so we're talking around 1 or 2 out of every thousand that have this error. [Tue Nov 23 11:44:14 2010] [error] [client 172.16.1.31] (20014)Internal error: proxy: error reading status line from remote server soap1.server:8888 [Tue Nov 23 11:44:14 2010] [error] [client 172.16.1.31] proxy: Error reading from remote server returned by /someapp/path/to/web/service This causes the request to fail. If I have the client connect directly to the soap server without using the proxy, success is 100%, so the problem appears to be in the proxy. The configuration looks like this. The purpose is to switch to a backup server if the primary one is unavailable: <Proxy balancer://apicluster> BalancerMember http://soap1.server:8888 lbset=0 BalancerMember http://soap2.server:8888 lbset=1 </Proxy> ProxyPass /someapp balancer://apicluster/someapp ProxyPassReverse / balancer://apicluster/someapp Has anyone run into this and found a fix? There are some mentions in bug reports but no solutions. The only thing that may be unusual is that the client request could be 100MB or larger, so the request could take a little longer than you'd expect for a SOAP call.
In case someone else runs into this. This is a bug in mod_proxy that can be avoided by putting these lines in your httpd.conf: SetEnv force-proxy-request-1.0 1 SetEnv proxy-nokeepalive 1 https://issues.apache.org/bugzilla/show_bug.cgi?id=37770 For info on what these variables do see the mod_proxy documentation . They have a specific section, Protocol Adjustment, that addresses these variables.
{ "source": [ "https://serverfault.com/questions/206738", "https://serverfault.com", "https://serverfault.com/users/61787/" ] }
206,745
We have two NAS boxes as our storage, with data sync between them. The IPs for the NAS boxes are as follows: 10.10.0.5 and 10.10.0.6. We want to create a scenario such that when a client machine requests data from the servers, the request is routed to one of the servers automatically (in a load-balancing manner), and if either of the NAS boxes is down, the request should be forwarded to the other one that is alive. How should we go about this? Please shed some light on this topic. Edit: they are custom-built NAS boxes running FreeNAS, with SMB for file sharing, and the client side is a mix of Linux and Windows systems. I have not yet got a solution for this - anybody out there to help?
In case someone else runs into this. This is a bug in mod_proxy that can be avoided by putting these lines in your httpd.conf: SetEnv force-proxy-request-1.0 1 SetEnv proxy-nokeepalive 1 https://issues.apache.org/bugzilla/show_bug.cgi?id=37770 For info on what these variables do see the mod_proxy documentation . They have a specific section, Protocol Adjustment, that addresses these variables.
{ "source": [ "https://serverfault.com/questions/206745", "https://serverfault.com", "https://serverfault.com/users/61792/" ] }
207,115
I'm a developer at my organization, but I've been tasked with resetting the passwords on 10k e-mail users in an OU in Active Directory. I was given the proper permissions, then sent the following TechNet article , but I'm not sure where I'd run this or how exactly it works. I apologize if this question is too vague, but I wasn't sure where else I could ask (I'd ask a sysadmin at my organization, but it'd take a while for response). Could someone give me a rundown of what exactly this cmdlet does and how I'd go about executing this?
Much easier than that. Install the (depending on your flavour of your workstation OS) Remote Server Administration Tools so you get the AD DS tools. Don't forget to go into your Windows Features in Control Panel to enable the correct toolsets. Once you've done that, the following command will achieve your desired result: DSQUERY user "OU=myOU,OU=myUsers,DC=myDomain,DC=loc" -limit 0 | DSMOD user -pwd <insert new password here> ~ Replace "OU=myOU,OU=myUsers,DC=myDomain,DC=loc" with the distinguishedName of the OU containing the users to be changed
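If the TechNet article you were pointed at is about the Active Directory module for PowerShell (available from Server 2008 R2), a rough equivalent would be the sketch below - the OU path and password are placeholders, and it's worth testing against a handful of accounts first:
  Import-Module ActiveDirectory
  $new = ConvertTo-SecureString "N3w-P@ssw0rd!" -AsPlainText -Force
  Get-ADUser -SearchBase "OU=myOU,OU=myUsers,DC=myDomain,DC=loc" -Filter * | Set-ADAccountPassword -Reset -NewPassword $new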
{ "source": [ "https://serverfault.com/questions/207115", "https://serverfault.com", "https://serverfault.com/users/5363/" ] }
207,375
I realise this is very subjective and dependent on a number of variables, but I'm wondering what steps most folks go through when they need to diagnose packet loss on a given system?
I am a network engineer, so I'll describe this from my perspective. For me, diagnosing packet loss usually starts with "it's not working very well". From there, I usually try to find kit as close to both ends of the communication (typically, a workstation in an office and a server somewhere) and ping as close to the other end as possible (ideally the "remote end-point", but sometimes there are firewalls I can't send pings through, so I will have to settle for a LAN interface on a router) and see if I can see any loss. If I can see loss, it's usually a case of "not enough bandwidth" or "link with issues" somewhere in-between, so find the route through the network and start from the middle; that usually gives you one end or the other. If I cannot see loss, the next two steps tend to be "send more pings" or "send larger pings". If that doesn't give an indication of what the problem is, it's time to start looking at QoS policies and interface statistics through the whole path between the end-points. If that doesn't find anything, it's time to start questioning your assumptions: are you actually suffering from packet loss? The only sure way of finding that out is to do simultaneous captures on both ends, either by using WireShark (or equivalent) on the hosts or by hooking up sniffer machines (probably using WireShark or similar) via network taps. Then comes the fun of comparing the two packet captures... Sometimes, what is attributed to "packet loss" is simply something on the server side being noticeably slower (like, say, moving the database from "on the same LAN" to "20 ms away" and using queries that require an awful lot of back-and-forth between the front-end and the database).
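To make the "send more pings / send larger pings" step concrete, a couple of example invocations (the address is a placeholder, and mtr may need installing):
  ping -c 1000 -i 0.2 10.0.0.1                 # more pings: 1000 probes at 5 per second
  ping -c 100 -s 1400 10.0.0.1                 # larger pings: roughly 1400-byte payloads
  mtr --report --report-cycles 100 10.0.0.1    # per-hop loss along the path
Bear in mind that loss shown only at intermediate hops can simply be routers rate-limiting ICMP; loss that persists through to the final hop is the interesting part.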
{ "source": [ "https://serverfault.com/questions/207375", "https://serverfault.com", "https://serverfault.com/users/19134/" ] }
207,474
I just installed a new gigabit network interface card (NIC) in Linux. How do I tell if it is really set to gigabit speeds? I see ethtool has an option to set the speed, but I can't seem to figure out how to report its current speed.
Just use a command like: ethtool eth0 to get the needed info. Ex: $ sudo ethtool eth0 | grep Speed Speed: 1000Mb/s
{ "source": [ "https://serverfault.com/questions/207474", "https://serverfault.com", "https://serverfault.com/users/13219/" ] }
207,510
I'm running CentOS on a storage server that has to do file sharing with Windows machines. The SMB version is smbd 3.5.5-68.fc13. I'm getting a lot of error messages in /var/log/messages regarding failed attempts to connect to a CUPS server. They look like this: Nov 30 18:49:34 big03 smbd[9927]: [2010/11/30 18:49:34.850620, 0] printing/print_cups.c:108(cups_connect) Nov 30 18:49:34 big03 smbd[9927]: Unable to connect to CUPS server localhost:631 - Connection refused I understand that the issue is generated by the fact that SMB comes with printer sharing support, but I'm really not interested in that. I just want to disable the feature to get rid of the messages. Any idea how I can do that?
Commenting out the printers section actually does nothing, add this to your smb.conf: load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes (spoolss is not a typo)
{ "source": [ "https://serverfault.com/questions/207510", "https://serverfault.com", "https://serverfault.com/users/16330/" ] }
207,619
I'm looking for an smtp service that essentially obeys the RFC, except rather than sending mail it simply logs to a file [date] sent mail to <address> Or whatever. I can bash this together with the bare minimum of functionality I need in python in about half an hour I reckon but if there's an existing project that works better I'd rather use that. The reason for needing it is debugging an app that keeps sending 7* the amount of mail it's supposed to. EDIT: And already asked: https://stackoverflow.com/questions/1006650/dummy-smtp-server-for-testing-apps-that-send-email
If you have python lying around this will write the SMTP conversation to stdout. sudo python -m smtpd -n -c DebuggingServer localhost:25 http://docs.python.org/library/smtpd.html#debuggingserver-objects
{ "source": [ "https://serverfault.com/questions/207619", "https://serverfault.com", "https://serverfault.com/users/27697/" ] }
207,620
Dumb question: Is there an equivalent of iptables on Windows? Could I install one via cygwin? The real question: how can I accomplish on Windows what I can accomplish via iptables? Just looking for basic firewall functionality (e.g. blocking certain IP addresses)
One way would be with the netsh command: netsh firewall (deprecated after XP and 2003) netsh advfirewall (Vista, 7, and 2008)
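For the specific case of blocking an IP address, a sketch using the newer syntax on Vista/2008 and later (rule name and address are placeholders):
  netsh advfirewall firewall add rule name="Block bad host" dir=in action=block remoteip=203.0.113.5
  netsh advfirewall firewall show rule name="Block bad host"
  netsh advfirewall firewall delete rule name="Block bad host"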
{ "source": [ "https://serverfault.com/questions/207620", "https://serverfault.com", "https://serverfault.com/users/57189/" ] }
207,683
netcat -ul -p2115 fails with a usage statement. What am I doing wrong?
To quote the nc man page : -l Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored. The key here is that -p cannot be combined with the -l flag. When using the -l flag, any ports specified in the positional arguments are used. So instead, you could use the following: netcat -ul 2115
{ "source": [ "https://serverfault.com/questions/207683", "https://serverfault.com", "https://serverfault.com/users/62105/" ] }
208,006
Is it possible to get logrotate to consider logfiles in a directory and all its subdirectories? (i.e. without explicitly listing the subdirectories.)
How deep do your subdirectories go? /var/log/basedir/*.log /var/log/basedir/*/*.log { daily rotate 5 } Will rotate all .log files in basedir/ as well as all .log files in any direct child of basedir. If you also need to go 1 level deeper just add another /var/log/basedir/*/*/*.log until you have each level covered. This can be tested by using a separate logrotate config file which contains a constraint that will not be met (a high minsize) and then running log rotate yourself in verbose mode logrotate -d testconfig.conf the -d flag will list each log file it is considering to rotate.
{ "source": [ "https://serverfault.com/questions/208006", "https://serverfault.com", "https://serverfault.com/users/14573/" ] }
208,265
Try executing the following under a bash shell echo "Reboot your instance!" On my installation: root@domU-12-31-39-04-11-83:/usr/local/bin# bash --version GNU bash, version 4.1.5(1)-release (i686-pc-linux-gnu) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. root@domU-12-31-39-04-11-83:/usr/local/bin# uname -a Linux domU-12-31-39-04-11-83 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:57:40 UTC 2010 i686 GNU/Linux root@domU-12-31-39-04-11-83:/usr/local/bin# echo "Reboot your instance!" -bash: !": event not found Can anyone please explain what is "bash events?" I've never heard this concept before. Also, how should I output "!" at the end of the sentence?
You can turn off history substitution using set +H .
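Two other common workarounds, if you'd rather not change shell options:
  echo 'Reboot your instance!'        # single quotes disable history expansion entirely
  echo "Reboot your instance"'!'      # or keep double quotes and append the ! in single quotes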
{ "source": [ "https://serverfault.com/questions/208265", "https://serverfault.com", "https://serverfault.com/users/20186/" ] }
208,277
I have two domains, www.exampledomain.com and www.sampledomain.com, on a Debian server. On the first domain [www.exampledomain.com] I am running one application on port 80. Now I have installed a second app, for which I want to use the second domain [www.sampledomain.com]. The second app is running on port 8080, so the current URL is www.sampledomain.com:8080. Now the problems: 1.) I am not able to make www.sampledomain.com:8080 appear as just www.sampledomain.com; I don't have any idea how to do that. 2.) I also want to create dynamic host names like [www.username.sampledomain.com, www.username2.sampledomain.com] which should redirect to www.sampledomain.com only. Is there any configuration file for this, or do I have to install some app? Thanks in advance.
You can turn off history substitution using set +H .
{ "source": [ "https://serverfault.com/questions/208277", "https://serverfault.com", "https://serverfault.com/users/62298/" ] }
208,300
I currently have two CentOS servers. I need to know how and what the quickest way would be to "tar" up the images directory and SCP it over? Is that the quickest way that I just suggested, because tarring is taking forever... I ran the command: tar cvf imagesbackup.tar images And I was going to just scp it over. Let me know if there is a quicker way. I have remote/SSH access to both machines.
Instead of using tar to write to your local disk, you can write directly to the remote server over the network using ssh. server1$ tar -zc ./path | ssh server2 "cat > ~/file.tar.gz" Any string that follows your "ssh" command will be run on the remote server instead of the interactive logon. You can pipe input/output to and from those remote commands through SSH as if they were local. Putting the command in quotes avoids any confusion, especially when using redirection. Or, you can extract the tar file on the other server directly: server1$ tar -zc ./path | ssh server2 "tar -zx -C /destination" Note the seldom-used -C option. It means "change to this directory first before doing anything." Or, perhaps you want to "pull" from the destination server: server2$ tar -zx -C /destination < <(ssh server1 "tar -zc -C /srcdir ./path") Note that the <(cmd) construct is new to bash and doesn't work on older systems. It runs a program and sends the output to a pipe, and substitutes that pipe into the command as if it was a file. I could just have easily have written the above as follows: server2$ tar -zx -C /destination -f <(ssh server1 "tar -zc -C /srcdir ./path") Or as follows: server2$ ssh server1 "tar -zc -C /srcdir ./path" | tar -zx -C /destination Or, you can save yourself some grief and just use rsync: server1$ rsync -az ./path server2:/destination/ Finally, remember that compressing the data before transfer will reduce your bandwidth, but on a very fast connection, it may actually make the operation take more time . This is because your computer may not be able to compress fast enough to keep up: if compressing 100MB takes longer than it would take to send 100MB, then it's faster to send it uncompressed. Alternately, you may want to consider piping to gzip yourself (rather than using the -z option) so that you can specify a compression level. It's been my experience that on fast network connections with compressible data, using gzip at level 2 or 3 (the default is 6) gives the best overall throughput in most cases. Like so: server1$ tar -c ./path | gzip -2 | ssh server2 "cat > ~/file.tar.gz"
{ "source": [ "https://serverfault.com/questions/208300", "https://serverfault.com", "https://serverfault.com/users/62254/" ] }
208,347
On a linux box, how do I list all users that possess identical privilege to the superuser (and even better, all users in general along with if they are able to escalate their privilege to that level or not)?
Don't forget to change the root password. If any user has UID 0 besides root, they shouldn't. Bad idea. To check: grep 'x:0:' /etc/passwd Again, you shouldn't do this but to check if the user is a member of the root group: grep root /etc/group To see if anyone can execute commands as root, check sudoers: cat /etc/sudoers To check for SUID bit, which allows programs to be executed with root privileges: find / -perm -04000
{ "source": [ "https://serverfault.com/questions/208347", "https://serverfault.com", "https://serverfault.com/users/62322/" ] }
208,445
Amazon RDS has a metric for 'freeable memory'. It appears to go up & down in a sawtooth pattern. This leads me to believe that it's memory that is being used by MySQL for caching and that when the cache expires, more freeable memory appears. Any definitive documentation would be great.
It includes cached memory and memory used for buffers (besides what's really free/unused). They'll all be freed if an application requests more memory than what's free.
{ "source": [ "https://serverfault.com/questions/208445", "https://serverfault.com", "https://serverfault.com/users/9185/" ] }
208,522
NAT options on domestic routers often come configured as strict . What does this mean? What do moderate or open do? Port-forwarding/DMZ access works properly on strict so why bother with the other two? A look through the router suggests this affects the firewall . When spending a large amount of your time securing networks using Cisco/iptables such a limp non-descriptive answer is nothing but infuriating and leaves no clues as to what effect upon a firewall this has. Please can someone shed some light.
It's important first to know how Network Address Translation (NAT) works. You establish a connection to a server on the internet. In reality you send packets to your router, going out from your computer on some randomly chosen port: Your computer Router ╔════════════╗ ╔═══════════╗ ║ ║ ║ ║ ║ port 31746 ╫====>╫ ║ ║ ║ ║ ║ ╚════════════╝ ╚═══════════╝ Your router, in turn, establishes a connection to the server you want to talk to. It talks out it's own randomly chosen port: Router www.google.com ╔═══════════╗ ╔════════════════╗ ║ ║ ║ ║ ║ port 21283╫====>╫ port 80 ║ ║ ║ ║ ║ ╚═══════════╝ ╚════════════════╝ When Google's webserver sends you back information, it is actually sending it back to your router (since your router is the guy actually on the Internet): Router www.google.com ╔═══════════╗ ╔════════════════╗ ║ ║ ║ ║ ║ port 21283╫˂====╫ port 80 ║ ║ ║ ║ ║ ╚═══════════╝ ╚════════════════╝ A packet arrives at your router, on port 21283 from www.google.com . What should the router do with it? In this case the router has kept a record of you, and it knows that any traffic arriving on port 21283 from the Internet should go to your PC. So the router will relay the packet to your computer: Your computer Router ╔════════════╗ ╔═══════════╗ ║ ║ ║ ║ ║ port 31746 ╫<════╫ ║ ║ ║ ║ ║ ╚════════════╝ ╚═══════════╝ Open NAT (aka Full cone NAT, aka the good , right , and correct one) In open NAT, any machine on the internet can send traffic to your router's port 21283 , and the packet will be sent back to you: Your computer Router ╔════════════╗ ╔═══════════╗ ╭www.google.com:80 ║ ║ ║ ║ ├www.google.com:443 ║ port 31746 ╫<════╫ port 21283╫<════╡serverfault.com:80 ║ ║ ║ ║ ├fbi.gov:32188 ╚════════════╝ ╚═══════════╝ ╰botnet.cn:11288 Moderate NAT (aka Restricted Cone NAT) Moderate NAT is where your router will only accept traffic from the same host , but will allow it to come from any port : Your computer Router ╔════════════╗ ╔═══════════╗ ║ ║ ║ ║ ╭www.google.com:80 ║ port 31746 ╫<════╣ port 21283╫<════╡www.google.com:443 ║ ║ ║ ║ (rejected) serverfault.com:80 ╚════════════╝ ╚═══════════╝ (rejected) fbi.gov:32188 (rejected) botnet.cn:11288 Closed NAT (aka Port-restricted cone NAT) Closed NAT is more restrictive. It won't allow anything in unless it came from the original host and port that you originally communicated with, i.e. www.google port 80 : Your computer Router ╔════════════╗ ╔═══════════╗ ╭www.google.com:80 ║ ║ ║ ║ ┆ (rejected) www.google.com:443 ║ port 31746 ╫<════╫ port 21283╫<════╛ (rejected) serverfault.com:80 ║ ║ ║ ║ (rejected) fbi.gov:32188 ╚════════════╝ ╚═══════════╝ (rejected) botnet.cn:11288 Teredo, X-Box Live, NAT Microsoft's book Writing Secure Code has some other definitions of the different types of NAT. It is written in the context of NAT for use by Teredo; the IPv6 transition technology: Full cone: A full-cone NAT establishes an external UDP port when sending an outbound packet and will forward traffic sent to that port from any IP address and any port back to the originating port on the internal system. Restricted cone: This type of NAT maintains some level of state and requires that replies come from the same IP address as the initial request was sent to. Port-restricted cone: Replies must come from the same IP address and port as the request. Symmetric: In addition to the requirements for a port-restricted code NAT, the symmetric NAT will create a new mapping of internal IP address and port to external IP address and port for traffic sent to every individual external host. 
Some newer NAT devices can also appear to be port restricted under some conditions and symmetric under others: In particular, we found that many NAT have a 5th strategy, "port conservation." Basically, they will try to keep the same port number inside and outside, unless it is already used for another connection, in which case they pick a different one either sequentially (from a global variable) or randombly. These NATs appear typically "port restricted" during the tests, but behave as "symmetric" under load. (Huitema, personal communication) If you're interested in the details, consult RFC 3489 (Rosenberg et al. 2003). Remember: if anyone tries to tell you that Full-code NAT / Open NAT is a security issue, tell them they don't know what they're talking about. NAT is not a security boundary - that is what a firewall is. Anyone using NAT as a security boundary is simply wrong. See also Wikipedia: Network address translation Strict, Moderate, and Open NAT Error: Your NAT type is set to strict (or moderate) RFC 3489 - STUN - Simple Traversal of User Datagram Protocol (UDP) Through Network Address Translators (NATs)
{ "source": [ "https://serverfault.com/questions/208522", "https://serverfault.com", "https://serverfault.com/users/24516/" ] }
208,693
I have been reading about KVM and QEMU for some time. As of now I have a clear understanding of what they do. KVM supports hardware virtualization to provide near-native performance to the guest operating systems. On the other hand, QEMU emulates the target operating system. What I am confused about is at what level these two coordinate. For example: Who manages the sharing of RAM and/or memory? Who schedules I/O operations?
Qemu : QEmu is a complete and standalone software of its own. You use it to emulate machines, it is very flexible and portable. Mainly it works by a special 'recompiler' that transforms binary code written for a given processor into another one (say, to run MIPS code on a PPC mac, or ARM in an x86 PC). To emulate more than just the processor, Qemu includes a long list of peripheral emulators: disk, network, VGA, PCI, USB, serial/parallel ports, etc. KQemu : In the specific case where both source and target are the same architecture (like the common case of x86 on x86), it still has to parse the code to remove any 'privileged instructions' and replace them with context switches. To make it as efficient as possible on x86 Linux, there's a kernel module called KQemu that handles this. Being a kernel module, KQemu is able to execute most code unchanged, replacing only the lowest-level ring0-only instructions. In that case, userspace Qemu still allocates all the RAM for the emulated machine, and loads the code. The difference is that instead of recompiling the code, it calls KQemu to scan/patch/execute it. All the peripheral hardware emulation is done in Qemu. This is a lot faster than plain Qemu because most code is unchanged, but still has to transform ring0 code (most of the code in the VM's kernel), so performance still suffers. KVM : KVM is a couple of things: first it is a Linux kernel module—now included in mainline—that switches the processor into a new 'guest' state. The guest state has its own set of ring states, but privileged ring0 instructions fall back to the hypervisor code. Since it is a new processor mode of execution, the code doesn't have to be modified in any way. Apart from the processor state switching, the kernel module also handles a few low-level parts of the emulation like the MMU registers (used to handle VM) and some parts of the PCI emulated hardware. Second, KVM is a fork of the Qemu executable. Both teams work actively to keep differences at a minimum, and there are advances in reducing it. Eventually, the goal is that Qemu should work anywhere, and if a KVM kernel module is available, it could be automatically used. But for the foreseeable future, the Qemu team focuses on hardware emulation and portability, while KVM folks focus on the kernel module (sometimes moving small parts of the emulation there, if it improves performance), and interfacing with the rest of the userspace code. The kvm-qemu executable works like normal Qemu: allocates RAM, loads the code, and instead of recompiling it, or calling KQemu, it spawns a thread (this is important). The thread calls the KVM kernel module to switch to guest mode and proceeds to execute the VM code. On a privileged instruction, it switches back to the KVM kernel module, which, if necessary, signals the Qemu thread to handle most of the hardware emulation. One of the nice things of this architecture is that the guest code is emulated in a posix thread which you can manage with normal Linux tools. If you want a VM with 2 or 4 cores, kvm-qemu creates 2 or 4 threads, each of them calls the KVM kernel module to start executing. The concurrency—if you have enough real cores—or scheduling—if not—is managed by the normal Linux scheduler, keeping code small and surprises limited.
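If you want to check whether the KVM side is actually in use on a given host, a quick sketch:
  egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU exposes hardware virtualization
  lsmod | grep kvm                      # expect kvm plus kvm_intel or kvm_amd to be loaded
  ls -l /dev/kvm                        # the device node the qemu-kvm process opens to reach the kernel module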
{ "source": [ "https://serverfault.com/questions/208693", "https://serverfault.com", "https://serverfault.com/users/39612/" ] }
209,203
I know that the ip tool lets you bind multiple addresses to an interface (eg, http://www.linuxplanet.com/linuxplanet/tutorials/6553/1/ ). Right now, though, I'm trying to build something on top of IPv6, and it would be really useful to have an entire block of addresses (say, a /64) available, so that programs could pick any address from the range and bind to that. Needless to say, attaching every IP from this range to an interface would take a while. Does Linux support binding a whole block of addresses to an interface?
Linux 2.6.37 and above supports this via a feature called AnyIP . For instance if I run ip route add local 2001:db8::/32 dev lo on an Ubuntu 11.04 machine it will accept connections on any address in the 2001:db8::/32 network.
{ "source": [ "https://serverfault.com/questions/209203", "https://serverfault.com", "https://serverfault.com/users/62581/" ] }
209,461
I would be thankful if someone who understands how LVM works could give me a rough estimate of how much slower using LVM (with a software RAID1) will be. (What I do not want to know is how much slower LVM will be if the LVM volume is currently in snapshot mode doing copy-on-write.) I only need some rough estimate of how much LVM will slow down reads and writes in a normal operation scenario. Any links are also very much appreciated; I was not able to find any good performance benchmarks on this question.
LVM is fairly lightweight for just normal volumes (without snapshots, for example). It's really just a table lookup in a fairly small table that block X is actually block Y on device Z. I've never done any benchmarking, but I've never noticed any performance differences between LVM and just using the raw device. It's some small extra CPU overhead on the disc I/O, so I really wouldn't expect much difference. My gut reaction is that the reason there are no benchmarks is that there just isn't that much overhead in LVM. The convenience of LVM, and being able to slice and dice and add more drives, IMHO, far outweighs what little (if any) performance difference there may be.
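If you want to convince yourself on your own hardware, a quick-and-dirty read comparison is easy to sketch; the device names below are hypothetical, and dd is only a crude sequential test (a tool like fio gives far more meaningful numbers):
# sequential read from the raw MD device vs. a logical volume sitting on top of it
dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
dd if=/dev/vg0/somelv of=/dev/null bs=1M count=4096 iflag=direct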
{ "source": [ "https://serverfault.com/questions/209461", "https://serverfault.com", "https://serverfault.com/users/61570/" ] }
209,599
I've got a couple of Linux virtual machines with bridged interfaces, and I'd like the IP address of the machine to show up after the machine boots (on the login screen, where it usually shows the release and kernel). From what I can tell the message is picked up from /etc/issue, but I'm not sure how and when to write to it.
On CentOS 7 and Debian 8 (and maybe other as well), just append the following line to /etc/issue My IP address: \4 and that will resolve to the machine's IPv4 address. If you have multiple network interfaces and you want to pick one specific, you can specify it with My IP address: \4{eth0} Check man getty for a list of supported escape sequences on your distribution.
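As a small sketch, appending that line from a shell looks something like this (single quotes keep the backslash escape intact; back up the file first if you care about the stock contents):
# add the escape sequence to /etc/issue; getty expands it at the next login prompt
echo 'My IP address: \4{eth0}' | sudo tee -a /etc/issue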
{ "source": [ "https://serverfault.com/questions/209599", "https://serverfault.com", "https://serverfault.com/users/3761/" ] }
209,888
From time to time, I have to perform several large migration changes on data files on my server, and I'm looking for a good way to do this. I was thinking about using rsync to duplicate my directory structure starting at the root data folder, creating hard links to all the original files (some of them are rather big), so that I can overwrite in the destination tree only the files that need migrating. In the end, I can safely switch from the old files to the new files with two mv operations. However, I can't seem to get rsync to do this. I tried rsync -a --link-dest=$DATA $DATA $DATA/../upgrade_tmp but instead of creating hard links to files, rsync copies them entirely. Is there a problem using the same source and link-dest directory?
rsync is a powerful tool, but it is, unfortunately, strangely picky about some of its pathnames. If $DATA is an absolute path (i.e. it begins with a / ), then the correct command-line to use is: rsync -a --link-dest=$DATA $DATA/ $DATA/../upgrade_tmp [Now, just a brief aside about rsync 's strangeness. Note the trailing / added to the source argument. This tells rsync to work with the contents of the source directory, rather than with the source directory itself. (I'm assuming that $DATA doesn't already contain a trailing / .) In this case, we want to work with the contents, so we add the trailing / .] If, on the other hand, $DATA is a relative path (i.e. it does not begin with a / ), then Sean R's comment about --link-dest is bang on: The link-dest path is interpreted relative to the destination path, so you would use the following: rsync -a --link-dest=../`basename $DATA` $DATA/ $DATA/../upgrade_tmp EDIT One final note, it turns out that the second rsync command-line I gave should work regardless of whether $DATA is an absolute path, since basename doesn't care whether a path is absolute or relative.
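For completeness, the "two mv operations" switch-over mentioned in the question might look roughly like this; the names are illustrative, and you should make sure nothing is writing to $DATA during the swap:
# swap the upgraded tree into place, keeping the old one around for rollback
PARENT=$(dirname "$DATA")
mv "$DATA" "$DATA.old"
mv "$PARENT/upgrade_tmp" "$DATA"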
{ "source": [ "https://serverfault.com/questions/209888", "https://serverfault.com", "https://serverfault.com/users/62700/" ] }
210,034
I was presented with an argument to the tune of "you don't need a strong MySQL user password because in order to use it, they'd already have access to your server." We're talking about a 4-digit password that is a standard English dictionary word, on a live business website. Without influencing the answers with my own knowledge and experience, I'd like to show them some responses from a disinterested third-party source. Anyone care to chime in on this one? Programming / practical answers would be appreciated.
Whoever was making this argument seems to be saying "Once someone has their foot in the door, you might as well give them complete access". By that logic, a firewall negates the need for all passwords on your internal network. Strong passwords are one step towards limiting the damage done by network intrusion. There's no reason to throw your hands up in defeat just because one small part of your network was compromised.
{ "source": [ "https://serverfault.com/questions/210034", "https://serverfault.com", "https://serverfault.com/users/26174/" ] }
210,408
We have a server running on Amazon EC2. SSH is on the standard port (22). I placed my public key in the <username>/.ssh/authorized_keys file. The fun thing is that yesterday it was working great! But today, I don't know what happened! I just can't log in. ssh -vvvv servername is stuck on debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY I got someone to confirm that my public key is there. I added a new public key from another computer (Windows 7 + PuTTY) and I was able to log in. This other computer with Win7 is on the same LAN, which means that the external IP is the same. My private key works for other servers, but not with this one.
Change the network interface MTU to solve it. This is a bug for ubuntu 14.04. This worked for me: sudo ip li set mtu 1200 dev wlan0 OR sudo ifconfig wlan0 mtu 1200 ssh fails to connect to VPN host - hangs at 'expecting SSH2_MSG_KEX_ECDH_REPLY'
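If the lower MTU fixes it and you are on an ifupdown-based system (as Ubuntu 14.04 is), you can make it persistent; the interface name and value are just placeholders for whatever worked interactively:
# /etc/network/interfaces
auto wlan0
iface wlan0 inet dhcp
    post-up /sbin/ip link set dev $IFACE mtu 1200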
{ "source": [ "https://serverfault.com/questions/210408", "https://serverfault.com", "https://serverfault.com/users/61470/" ] }
211,005
Can anyone clarify the differences between the --checksum and --ignore-times options of rsync? My understanding is as follows: --checksum If the file size and time match, it will do a checksum at both ends to see if the files are really identical. --ignore-times 'Transfer' every file, regardless of whether file time is same at both ends. Since it will still use the delta-transfer algorithm, if a file actually is identical, nothing gets transferred. That's the technical difference, but as far as I can tell, they are semantically the same thing. So, what I'm wondering is: What is the practical difference between the two options? In what cases would you use one rather than the other? Is there any performance difference between them?
Normally, rsync skips files when the files have identical sizes and times on the source and destination sides. This is a heuristic which is usually a good idea, as it prevents rsync from having to examine the contents of files that are very likely identical on the source and destination sides. --ignore-times tells rsync to turn off the file-times-and-sizes heuristic, and thus unconditionally transfer ALL files from source to destination. rsync will then proceed to read every file on the source side, since it will need to either use its delta-transfer algorithm, or simply send every file in its entirety, depending on whether the --whole-file option was specified. --checksum also modifies the file-times-and-sizes heuristic, but here it ignores times and examines only sizes. Files on the source and destination sides that differ in size are transferred, since they are obviously different. Files with the same size are checksummed (with MD5 in rsync version 3.0.0+, or with MD4 in earlier versions), and those found to have differing sums are also transferred. In cases where the source and destination sides are mostly the same, --checksum will result in most files being checksummed on both sides. This could take a long time, but the upshot is that the barest minimum of data will actually be transferred over the wire, especially if the delta-transfer algorithm is used. Of course, this is only a win if you have very slow networks, and/or a very fast CPU. --ignore-times , on the other hand, will send more data over the network, and it will cause all source files to be read, but at least it will not impose the additional burden of computing many cryptographically-strong hashsums on the source and destination CPUs. I would expect this option to perform better than --checksum when your networks are fast, and/or your CPUs relatively slow. I think I would only ever use --checksum or --ignore-times if I were transferring files to a destination where it was suspected that the contents of some files were corrupted, but whose modification times were not changed. I can't really think of any other good reason to use either option, although there are probably other use-cases.
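To make the contrast concrete, the two invocations differ only in which heuristic they disable; paths and host are placeholders:
# re-examine everything, deciding what to send by checksumming same-sized files
rsync -av --checksum /src/ user@dest:/dst/
# skip the quick-check entirely and hand every file to the delta-transfer algorithm
rsync -av --ignore-times /src/ user@dest:/dst/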
{ "source": [ "https://serverfault.com/questions/211005", "https://serverfault.com", "https://serverfault.com/users/14636/" ] }
211,525
I have a problem deploying a Django app using Gunicorn and Supervisor. While I can make Gunicorn serve my app (by setting the proper PYTHONPATH and running the appropriate command, the one from the supervisord config), I can't make supervisor run it. It just won't see my app. I don't know how to check whether the config file is OK. Here's what supervisorctl says: # supervisorctl start myapp_live myapp_live: ERROR (no such process) I'm running it on Ubuntu 10.04 with the following config: File /home/myapp/live/deploy/supervisord_live.ini: [program:myapp_live] command=/usr/local/bin/gunicorn_django --log-file /home/myapp/logs/gunicorn_live.log --log-level info --workers 2 -t 120 -b 127.0.0.1:10000 -p deploy/gunicorn_live.pid webapp/settings_live.py directory=/home/myapp/live environment=PYTHONPATH='/home/myapp/live/eco/lib' user=myapp autostart=true autorestart=true In /etc/supervisor/supervisord.conf, at the end of the file, there is: [include] files = /etc/supervisor/conf.d/*.conf and here's a symlink to my config file: # ls -la /etc/supervisor/conf.d lrwxrwxrwx 1 root root 48 Dec 4 18:02 myapp-live.conf -> /home/myapp/live/deploy/supervisord_live.ini Everything looks fine to me, but supervisorctl just keeps saying myapp_live: ERROR (no such process) . Any solution for this?
The correct answer is that supervisor requires you to re-read and update when you place a new configuration file. Restarting is not the answer, as that will affect other services. Try: supervisorctl reread supervisorctl update
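After the reread/update, something along these lines is a quick way to confirm that supervisor actually picked the program up (the program name matches the one in the question):
supervisorctl status myapp_live        # should show RUNNING, or at least no "no such process"
supervisorctl tail myapp_live stderr   # peek at the process's stderr if it is crash-looping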
{ "source": [ "https://serverfault.com/questions/211525", "https://serverfault.com", "https://serverfault.com/users/44567/" ] }
211,536
I want to redirect all traffic from port 443 to the internal port 8080. I'm using this config for iptables: iptables -t nat -I PREROUTING --source 0/0 --destination 0/0 -p tcp \ --dport 443 -j REDIRECT --to-ports 8080 This works for all external clients. But if I try to access port 443 from the same machine, I get a connection refused error. wget https://localhost How can I extend the iptables rule to redirect local traffic too?
PREROUTING isn't used by the loopback interface, you need to also add an OUTPUT rule: iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8080 iptables -t nat -I OUTPUT -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8080
{ "source": [ "https://serverfault.com/questions/211536", "https://serverfault.com", "https://serverfault.com/users/63307/" ] }
212,093
Ubuntu Server 10.04.1 x86 I've got a machine with an FCGI HTTP service behind nginx, that serves a lot of small HTTP requests to a lot of different clients. (About 230 requests per second in the peak hours, average response size with headers is 650 bytes, several million different clients per day.) As a result, I have a lot of sockets hanging in TIME_WAIT (the graph was captured with the TCP settings below): I'd like to reduce the number of sockets. What can I do besides this? $ cat /proc/sys/net/ipv4/tcp_fin_timeout 1 $ cat /proc/sys/net/ipv4/tcp_tw_recycle 1 $ cat /proc/sys/net/ipv4/tcp_tw_reuse 1 Update: some details on the actual service layout on the machine: client -----TCP-socket--> nginx (load balancer reverse proxy) -----TCP-socket--> nginx (worker) --domain-socket--> fcgi-software --single-persistent-TCP-socket--> Redis --single-persistent-TCP-socket--> MySQL (other machine) I probably should switch the load-balancer --> worker connection to domain sockets as well, but the issue with TIME_WAIT sockets would remain — I plan to add a second worker on a separate machine soon. Won't be able to use domain sockets in that case.
One thing you should do to start is to fix the net.ipv4.tcp_fin_timeout=1 . That is way too low; you should probably not take it much lower than 30. Since this is behind nginx: does that mean nginx is acting as a reverse proxy? If that is the case then your connections are 2x (one to the client, one to your web servers). Do you know which end these sockets belong to? Update: fin_timeout is how long they stay in FIN-WAIT-2 (From networking/ip-sysctl.txt in the kernel documentation): tcp_fin_timeout - INTEGER Time to hold socket in state FIN-WAIT-2, if it was closed by our side. Peer can be broken and never close its side, or even died unexpectedly. Default value is 60sec. Usual value used in 2.2 was 180 seconds, you may restore it, but remember that if your machine is even underloaded WEB server, you risk to overflow memory with kilotons of dead sockets, FIN-WAIT-2 sockets are less dangerous than FIN-WAIT-1, because they eat maximum 1.5K of memory, but they tend to live longer. Cf. tcp_max_orphans. I think you maybe just have to let Linux keep the TIME_WAIT socket number up against what looks like maybe a 32k cap on them, and this is where Linux recycles them. This 32k is alluded to in this link : Also, I find the /proc/sys/net/ipv4/tcp_max_tw_buckets confusing. Although the default is set at 180000, I see a TCP disruption when I have 32K TIME_WAIT sockets on my system, regardless of the max tw buckets. This link also suggests that the TIME_WAIT state is 60 seconds and can not be tuned via proc. Random fun fact: You can see the timers on the timewait sockets with netstat, for each socket, with netstat -on | grep TIME_WAIT | less Reuse Vs Recycle: These are kind of interesting; it reads like reuse enables the reuse of TIME_WAIT sockets, and recycle puts it into TURBO mode: tcp_tw_recycle - BOOLEAN Enable fast recycling TIME-WAIT sockets. Default value is 0. It should not be changed without advice/request of technical experts. tcp_tw_reuse - BOOLEAN Allow to reuse TIME-WAIT sockets for new connections when it is safe from protocol viewpoint. Default value is 0. It should not be changed without advice/request of technical experts. I wouldn't recommend using net.ipv4.tcp_tw_recycle as it causes problems with NAT clients . Maybe you might try not having both of those switched on and see what effect it has (Try one at a time and see how they work on their own)? I would use netstat -n | grep TIME_WAIT | wc -l for faster feedback than Munin.
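For reference, one way this advice might translate into commands (the value is an example, and on more recent systems ss is the usual replacement for netstat):
# raise fin_timeout back to something sane
sysctl -w net.ipv4.tcp_fin_timeout=30
# count sockets currently in TIME-WAIT (subtract one for the header line)
ss -tan state time-wait | wc -l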
{ "source": [ "https://serverfault.com/questions/212093", "https://serverfault.com", "https://serverfault.com/users/1355/" ] }
212,107
How can I tell what version of SharePoint a site is using, without being able to see the admin panel? Is there anything, perhaps in the source of the pages, that would give me a clue?
For sites that haven't been customised much, you can tell a lot from the design. (Screenshots in the original answer: a typical default 2003 site, a default 2007 page, a 2010 page, and a 2013 page.) The tab style is generally a give-away of the version in use. If you want to know the sub-version, you'll have to ask the site admin. There are probably also some clues in the dress of the corporate drones in the revolting stock images ;-)
{ "source": [ "https://serverfault.com/questions/212107", "https://serverfault.com", "https://serverfault.com/users/1834/" ] }
212,178
I have a remote partition that I have mounted locally using NFS. 'mount' gives 192.168.3.1:/mnt/storage-pools/ on /pools type nfs (rw,addr=192.168.3.1) On the server I have in exports: /mnt/storage-pools *(rw,insecure,sync,no_subtree_check) Then I try touch /pools/test1 ls -lah -rw-r--r-- 1 65534 65534 0 Dec 13 20:56 test1 chown root.root test1 chown: changing ownership of `test1': Operation not permitted What am I missing? Pulling my hair out.
By default the root_squash export option is turned on, therefore NFS does not allow a root user from the client to perform operations as root on the server, instead mapping it to the user/group id specified by anonuid and anongid options (default=65534). This is configurable in /etc/exports together with other export options.
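If you really do want root on the client to act as root on the export (which has obvious security implications anywhere except a trusted network), the export line from the question could, for example, be changed like this and then re-exported:
# /etc/exports on the server
/mnt/storage-pools *(rw,insecure,sync,no_subtree_check,no_root_squash)
# re-read the exports table
exportfs -ra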
{ "source": [ "https://serverfault.com/questions/212178", "https://serverfault.com", "https://serverfault.com/users/22376/" ] }
212,180
We have a problem with one of our clients where the default printer he uses isn't loaded by the time the app that he uses over terminal services is launched. If you check again a minute later the printer is there but the app reads the default printer at the time of launch. Is it possible to get the login process to delay till all printers are loaded? The user is in a remote location so we have no direct access to the printers.
{ "source": [ "https://serverfault.com/questions/212180", "https://serverfault.com", "https://serverfault.com/users/63508/" ] }
212,269
This is a Canonical Question about Securing a LAMP stack What are the absolute guidelines for securing a LAMP server?
David's answer is a good baseline of the general principles of server hardening. As David indicated, this is a huge question. The specific techniques you take could depend highly on your environment and how your server will be used. Warning: this can take a lot of work in a test environment to build out and get done right, followed by a lot of work to integrate into your production environment and, more importantly, business process. First, however, check to see if your organization has any hardening policies, as those might be the most directly relevant. If not, depending on your role, this might be a great time to build them out. I would also recommend tackling each component separately from the bottom up. The L There are lots of good guides available to help you out. This list may or may not help you depending on your distribution. Center for Internet Security Benchmarks - Distribution specific for the major flavors CentOS Hardening HowTo - Follows the CIS RHEL5 guide closely, but is a much easier read NIST SP800-123 - Guide to General Server Security NSA Hardening Factsheets - Not as recently updated as CIS, but still mostly applicable Tiger - Live System Security Auditing Software The A Apache can be fun to secure. I find it easier to harden the OS and maintain usability than either Apache or PHP. Apache Server Hardening - This question on the IT Security sister site has lots of good information. Center for Internet Security Benchmarks - Again, Apache benchmarks. Apache Security Tips - Straight from the Apache project, it looks like it covers the basics DISA Hardening Checklist - Checklist from the DoD Information Assurance guys The M Center for Internet Security Benchmarks - Again, but for MySQL benchmarks OWASP MySQL Hardening General Security Guidelines - Basic checklist from the project devs The P This runs headlong into the whole idea of Secure Programming Practices, which is an entire discipline of its own. SANS and OWASP have a ridiculous amount of information on the subject, so I won't try to replicate it here. I will focus on the runtime configuration and let your developers worry about the rest. Sometimes the 'P' in LAMP refers to Perl, but usually PHP. I am assuming the latter. Hardening PHP - Some minor discussion, also on the IT Security SE site. Hardened PHP Project - Main project that produces Suhosin , an attempt to patch the PHP application to protect against certain types of attacks. Hardening PHP With Suhosin - A brief HowTo specifically for Suhosin Hardening PHP from php.ini - Short, but not bad discussion on some of the security related runtime options
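As a purely illustrative sample of the kind of runtime settings those PHP and Apache guides discuss (not a complete policy, and availability of individual options varies by version):
# php.ini - hide version info and disable a few commonly abused functions
expose_php = Off
display_errors = Off
disable_functions = exec,passthru,shell_exec,system
# Apache httpd.conf - don't advertise detailed version information
ServerTokens Prod
ServerSignature Off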
{ "source": [ "https://serverfault.com/questions/212269", "https://serverfault.com", "https://serverfault.com/users/63532/" ] }
212,439
I have a directory of many files, something like 50,000 PDFs and other files on a server. I need to move specific ones to another directory. I can generate a list of the files that need to be moved, either in CSV or any other text format. What I need to do is run a bash script and move or copy the files that are listed in the text file to another directory. Is there an easy way of doing this? Any suggestions or resources would be greatly appreciated.
rsync has several options that can take a list of files to process( --files-from , --include-from , etc.). For example, this will do the trick: rsync -a /source/directory --files-from=/full/path/to/listfile /destination/directory
{ "source": [ "https://serverfault.com/questions/212439", "https://serverfault.com", "https://serverfault.com/users/39464/" ] }
212,546
I'm trying to build a simple checklist to determine the quality of a datacenter... where and what should I look for, and how can I determine whether what the owners say (e.g. "our UPS keeps the data center up for 100 days without power") is true or not? What are typical signs of good or bad data centers?
Here is a list of questions I made for myself last time I went datacenter shopping: Explain what it would take for sprinklers to go off on our equipment. What will remote hands be willing to do? For example, install hard drives, rotate tapes… Are your remote hands available 24/7/365? What is the average wait time for them to get to the cage after filing a ticket (and how are tickets entered)? Are you on multiple grids? Do you have raised floor cooling? How many datacenters do you operate besides this one? How long can the datacenter run on backup power? Can we have equipment delivered directly to the datacenter? Is there a delivery dock and free, close, and available parking? If we have a vendor come to the datacenter, do we need to accompany them? What ambient temperature and humidity is maintained? How many ISP choices are there? Have any of your customers ever lost power for any amount of time in the history of the datacenter? How long has this datacenter been in operation? What access controls are in place to both the floor and equipment? If you visit several of them and ask these questions, then between the price, your impressions from the visit, and their answers, it will probably be clear which one you want. Make sure you always visit them, and visit a good number of them.
{ "source": [ "https://serverfault.com/questions/212546", "https://serverfault.com", "https://serverfault.com/users/62919/" ] }
213,185
For me, I run "killall nginx" and start it by "sbin/nginx", anyone has a better restart script? BTW: I install nginx from source, i do not find 'service nginx' command or /etc/init.d/nginx
The nginx package supplies a /etc/init.d/nginx script that provides the usual start|stop|restart|reload ... functionality. /etc/init.d/nginx restart will restart nginx as will service nginx restart Edit Here is a link to a script you can use as /etc/init.d/nginx.
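Since the question mentions a from-source install with no init script, it is worth noting that the nginx binary itself can signal the running master process; the paths below assume the default source-install prefix and may differ on your build:
# graceful reload of configuration without dropping connections (recent nginx versions)
/usr/local/nginx/sbin/nginx -s reload
# equivalent: send SIGHUP to the master process recorded in the pid file
kill -HUP $(cat /usr/local/nginx/logs/nginx.pid)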
{ "source": [ "https://serverfault.com/questions/213185", "https://serverfault.com", "https://serverfault.com/users/55582/" ] }
213,224
On a FreeBSD system (8.1), I am looking for instructions on how to check the running version of OpenSSH, and also instructions on the best way to download and install an update of OpenSSH
Run sshd -V or ssh -V and they'll return the version and usage information. Note: These are capital "V" now; when I originally wrote this answer they were lower case. There's a dozen ways to upgrade. pkg_add -r openssh-portable cd /usr/ports/security/openssh && make install clean portupgrade security/openssh-portable part of the makeworld/buildworld process freebsd-update and the list goes on... I'm not aware of any issues with the 5.2p1 version that shipped with 8.1-RELEASE. I have seen hoax e-mails flying around for over a year now announcing the imminent release of a zero day hack (note that it's been a year and a half since release, so 'zero' day was a heck of a long time ago).
{ "source": [ "https://serverfault.com/questions/213224", "https://serverfault.com", "https://serverfault.com/users/35134/" ] }
213,422
We have a XAMPP Apache development web server set up with virtual hosts, and want to stop search engines from crawling all our sites. This is easily done with a robots.txt file. However, we'd rather not include a disallow robots.txt in every vhost and then have to remove it when we go live with the site on another server. Is there a way with an apache config file to rewrite all requests for robots.txt on all vhosts to a single robots.txt file? If so, could you give me an example? I think it would be something like this: RewriteEngine On RewriteRule .*robots\.txt$ C:\xampp\vhosts\override-robots.txt [L] Thanks!
Apache mod_alias is designed for this and available from the core Apache system, and can be set in one place with almost no processing overhead, unlike mod_rewrite. Alias /robots.txt C:/xampp/vhosts/override-robots.txt With that line in the apache2.conf file, outside all the vhost's, http://example.com/robots.txt - on any website it serves, will output the given file.
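And for completeness, the single override file itself just needs the standard disallow-everything stanza, for example:
# C:/xampp/vhosts/override-robots.txt
User-agent: *
Disallow: /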
{ "source": [ "https://serverfault.com/questions/213422", "https://serverfault.com", "https://serverfault.com/users/63891/" ] }
213,765
I just tried to access a folder like so: \\somecomputeronmynetwork\somelocation$ When going to this location I'm prompted for a user name and password. I put one in, and it let me in fine. Now I need to remove that login, so I can try a different user name and password. What's the easiest way to do this?
Open a command prompt or from start/run type: net use \\somecomputeronmynetwork\somelocation$ /delete You can also use the following command to list "remembered" connections: net use
{ "source": [ "https://serverfault.com/questions/213765", "https://serverfault.com", "https://serverfault.com/users/710/" ] }
214,054
How can I activate / install mod_headers on my server?
I'm taking a leap of faith assuming you are talking about a Debian/Ubuntu server here, as Redhat/Fedora/Centos have it installed and enabled as part of the default httpd installation. On Debian/Ubuntu, you can enable mod_headers (it should be already installed as part of the apache2 installation), by running: a2enmod headers apache2 -k graceful
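Once the module is loaded, a quick way to confirm it works is to set a test header somewhere in your Apache configuration (any site config, or an .htaccess file where overrides are allowed); the header chosen here is just an example:
# requires mod_headers
Header set X-Content-Type-Options "nosniff"
# then check a response for the header
curl -I http://localhost/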
{ "source": [ "https://serverfault.com/questions/214054", "https://serverfault.com", "https://serverfault.com/users/49166/" ] }
214,242
I don't want anyone to be able to detect that I'm using NGINX or even Ubuntu from the internet. There are tools out there (such as BuiltWith) which scan servers to detect what tools they're using. Also, some cracking tools might help with detecting. What's the closest I can get to hiding all this info from the outside?
You can stop it outputting the version of Nginx and OS by adding server_tokens off; to a http , server , or location context. Or if you want to remove the Server header completely, you need to compile Nginx with the Headers More module in, as the header is hard coded in the Nginx source, and this module allows changing any http headers. more_clear_headers Server; However, there are many hidden ways servers perform by accident via their implementation which may help identify the system. e.g. How it responds to a bad SSL request. I don't see a practical way of preventing this. Some of the things I might suggest: change error templates block all ports except the services needed
{ "source": [ "https://serverfault.com/questions/214242", "https://serverfault.com", "https://serverfault.com/users/31454/" ] }
214,254
I'd like to copy backup archives from a remote server to my client machine. In the past, I've installed an FTP server on the remote machine and directed local server backups to dump into that directory. I'd then FTP in from my client machine. Just wondering if there is a simpler way to do this using Win 7 (Client) Win Server 2008? Robocopy? RDC command line options? For example, I can easily remote desktop in and drag the files from the server to my local machine. If there is an easy command line way to do this, then I don't have to setup an FTP server which is ideal. Thanks.
{ "source": [ "https://serverfault.com/questions/214254", "https://serverfault.com", "https://serverfault.com/users/13688/" ] }
214,312
I set up haproxy logging via rsyslogd using the tips from this article , and everything seems to be working fine. The log files get the log messages. However, every log message from haproxy also shows up at /var/log/syslog . This means that once the server goes live, the syslog will be quite useless, as it will be run over with haproxy log messages. I would like to filter out those messages from /var/log/syslog . After going over the rsyslogd documentation, I tried to change the file /etc/rsyslog.d/50-default.conf thus: *.*;auth,authpriv.none;haproxy.none -/var/log/syslog I simply added the ;haproxy.none part. After restarting rsyslogd it stopped working completely until I reverted my changes. What am I doing wrong?
You could also do the following which will make it so they don't go in any other logs: local0.* -/var/log/haproxy.log & ~ The & ~ means not to put what matched in the above line anywhere else for the rest of the rules.
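For context, this assumes haproxy is already pointed at a local syslog listener, as in the linked article; a typical (purely illustrative) pairing looks like:
# haproxy.cfg - global section
log 127.0.0.1 local0
# rsyslog - accept UDP syslog on loopback (legacy directive syntax)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514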
{ "source": [ "https://serverfault.com/questions/214312", "https://serverfault.com", "https://serverfault.com/users/1726/" ] }
214,512
This is a Canonical Question about Apache's mod_rewrite. Changing a request URL or redirecting users to a different URL than the one they originally requested is done using mod_rewrite. This includes such things as: Changing HTTP to HTTPS (or the other way around) Changing a request to a page which no longer exists to a new replacement. Modifying a URL format (such as ?id=3433 to /id/3433 ) Presenting a different page based on the browser, based on the referrer, based on anything possible under the moon and sun. Anything you want to do to mess around with URLs. Everything You Ever Wanted to Know about Mod_Rewrite Rules but Were Afraid to Ask! How can I become an expert at writing mod_rewrite rules? What is the fundamental format and structure of mod_rewrite rules? What form/flavor of regular expressions do I need to have a solid grasp of? What are the most common mistakes/pitfalls when writing rewrite rules? What is a good method for testing and verifying mod_rewrite rules? Are there SEO or performance implications of mod_rewrite rules I should be aware of? Are there common situations where mod_rewrite might seem like the right tool for the job but isn't? What are some common examples? A place to test your rules The htaccess tester web site is a great place to play around with your rules and test them. It even shows the debug output so you can see what matched and what did not.
mod_rewrite syntax order mod_rewrite has some specific ordering rules that affect processing. Before anything gets done, the RewriteEngine On directive needs to be given as this turns on mod_rewrite processing. This should be before any other rewrite directives. RewriteCond preceding RewriteRule makes that ONE rule subject to the conditional. Any following RewriteRules will be processed as if they were not subject to conditionals. RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^/blog/(.*)\.html /blog/$1.sf.html In this simple case, if the HTTP referrer is from serverfault.com, redirect blog requests to special serverfault pages (we're just that special). However, if the above block had an extra RewriteRule line: RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^/blog/(.*)\.html /blog/$1.sf.html RewriteRule ^/blog/(.*)\.jpg /blog/$1.sf.jpg All .jpg files would go to the special serverfault pages, not just the ones with a referrer indicating it came from here. This is clearly not the intent of how these rules are written. It could be done with multiple RewriteCond rules: RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^/blog/(.*)\.html /blog/$1.sf.html RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^/blog/(.*)\.jpg /blog/$1.sf.jpg But it probably should be done with some trickier replacement syntax. RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^/blog/(.*)\.(html|jpg) /blog/$1.sf.$2 The more complex RewriteRule contains the conditionals for processing. The last parenthetical, (html|jpg), tells RewriteRule to match either html or jpg, and to represent the matched string as $2 in the rewritten string. This is logically identical to the previous block, with two RewriteCond/RewriteRule pairs; it just does it on two lines instead of four. Multiple RewriteCond lines are implicitly ANDed, and can be explicitly ORed. To handle referrers from both ServerFault and Super User (explicit OR): RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) [OR] RewriteCond %{HTTP_REFERER} ^https?://superuser\.com(/|$) RewriteRule ^/blog/(.*)\.(html|jpg) /blog/$1.sf.$2 To serve ServerFault-referred pages with Chrome browsers (implicit AND): RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteCond %{HTTP_USER_AGENT} ^Mozilla.*Chrome.*$ RewriteRule ^/blog/(.*)\.(html|jpg) /blog/$1.sf.$2 RewriteBase is also order specific as it specifies how following RewriteRule directives handle their processing. It is very useful in .htaccess files. If used, it should be the first directive under "RewriteEngine on" in an .htaccess file. Take this example: RewriteEngine On RewriteBase /blog RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^(.*)\.(html|jpg) $1.sf.$2 This is telling mod_rewrite that this particular URL it is currently handling was arrived at by way of http://example.com/blog/ instead of the physical directory path (/home/$Username/public_html/blog) and to treat it accordingly. Because of this, the RewriteRule considers its string-start to be after the "/blog" in the URL. Here is the same thing written two different ways.
One with RewriteBase, the other without: RewriteEngine On ##Example 1: No RewriteBase## RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule /home/assdr/public_html/blog/(.*)\.(html|jpg) $1.sf.$2 ##Example 2: With RewriteBase## RewriteBase /blog RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^(.*)\.(html|jpg) $1.sf.$2 As you can see, RewriteBase allows rewrite rules to leverage the web-site path to content rather than the web-server, which can make them more intelligible to those who edit such files. Also, they can make the directives shorter, which has an aesthetic appeal. RewriteRule matching syntax RewriteRule itself has a complex syntax for matching strings. I'll cover the flags (things like [PT]) in another section. Because Sysadmins learn by example more often than by reading a man-page I'll give examples and explain what they do. RewriteRule ^/blog/(.*)$ /newblog/$1 The .* construct matches any single character ( . ) zero or more times ( * ). Enclosing it in parentheses tells it to provide the string that was matched as the $1 variable. RewriteRule ^/blog/.*/(.*)$ /newblog/$1 In this case, the first .* was NOT enclosed in parens so isn't provided to the rewritten string. This rule removes a directory level on the new blog-site. (/blog/2009/sample.html becomes /newblog/sample.html). RewriteRule ^/blog/(2008|2009)/(.*)$ /newblog/$2 In this case, the first parenthesis expression sets up a matching group. This becomes $1, which is not needed and therefore not used in the rewritten string. RewriteRule ^/blog/(2008|2009)/(.*)$ /newblog/$1/$2 In this case, we use $1 in the rewritten string. RewriteRule ^/blog/(20[0-9][0-9])/(.*)$ /newblog/$1/$2 This rule uses a special bracket syntax that specifies a character range. [0-9] matches the numerals 0 through 9. This specific rule will handle years from 2000 to 2099. RewriteRule ^/blog/(20[0-9]{2})/(.*)$ /newblog/$1/$2 This does the same thing as the previous rule, but the {2} portion tells it to match the previous character (a bracket expression in this case) two times. RewriteRule ^/blog/([0-9]{4})/([a-z]*)\.html /newblog/$1/$2.shtml This case will match any lower-case letter in the second matching expression, and do so for as many characters as it can. The \. construct tells it to treat the period as an actual period, not the special character it is in previous examples. It will break if the file-name has dashes in it, though. RewriteRule ^/blog/([0-9]{4})/([-a-z]*)\.html /newblog/$1/$2.shtml This traps file-names with dashes in them. However, as - is a special character in bracket expressions, it has to be the first character in the expression. RewriteRule ^/blog/([0-9]{4})/([-0-9a-zA-Z]*)\.html /newblog/$1/$2.shtml This version traps any file name with letters, numbers or the - character in the file-name. This is how you specify multiple character sets in a bracket expression. RewriteRule flags The flags on rewrite rules have a host of special meanings and use cases. RewriteRule ^/blog/([0-9]{4})/([-a-z]*)\.html /newblog/$1/$2.shtml [L] The flag is the [L] at the end of the above expression. Multiple flags can be used, separated by commas. The linked documentation describes each one, but here they are anyway: L = Last. Stop processing RewriteRules once this one matches. Order counts! C = Chain. Continue processing the next RewriteRule. If this rule doesn't match, then the next rule won't be executed. More on this later. E = Set environmental variable.
Apache has various environmental variables that can affect web-server behavior. F = Forbidden. Returns a 403-Forbidden error if this rule matches. G = Gone. Returns a 410-Gone error if this rule matches. H = Handler. Forces the request to be handled as if it were the specified MIME-type. N = Next. Forces the rule to start over again and re-match. BE CAREFUL! Loops can result. NC = No case. Allows jpg to match both jpg and JPG. NE = No escape. Prevents the rewriting of special characters (. ? # & etc) into their hex-code equivalents. NS = No subrequests. If you're using server-side-includes, this will prevent matches to the included files. P = Proxy. Forces the rule to be handled by mod_proxy. Transparently provide content from other servers, because your web-server fetches it and re-serves it. This is a dangerous flag, as a poorly written one will turn your web-server into an open-proxy and That is Bad. PT = Pass Through. Take into account Alias statements in RewriteRule matching. QSA = QSAppend. When the original string contains a query ( http://example.com/thing?asp=foo ) append the original query string to the rewritten string. Normally it would be discarded. Important for dynamic content. R = Redirect. Provide an HTTP redirect to the specified URL. Can also provide exact redirect code [R=303]. Very similar to RedirectMatch , which is faster and should be used when possible. S = Skip. Skip this rule. T = Type. Specify the mime-type of the returned content. Very similar to the AddType directive. You know how I said that RewriteCond applies to one and only one rule? Well, you can get around that by chaining. RewriteEngine On RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) RewriteRule ^/blog/(.*)\.html /blog/$1.sf.html [C] RewriteRule ^/blog/(.*)\.jpg /blog/$1.sf.jpg Because the first RewriteRule has the Chain flag, the second rewrite-rule will execute when the first does, which is when the previous RewriteCond rule is matched. Handy if Apache regular-expressions make your brain hurt. However, the all-in-one-line method I point to in the first section is faster from an optimization point of view. RewriteRule ^/blog/([0-9]{4})/([-0-9a-zA-Z]*)\.html /newblog/$1/$2.shtml This can be made simpler through flags: RewriteRule ^/blog/([0-9]{4})/([-0-9a-z]*)\.html /newblog/$1/$2.shtml [NC] Also, some flags also apply to RewriteCond. Notably, NoCase. RewriteCond %{HTTP_REFERER} ^https?://serverfault\.com(/|$) [NC] Will match "ServerFault.com"
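On the question of testing and verifying rules: besides the online tester linked in the question, Apache can log its rewrite decisions, though the directives differ by version; treat these as examples to adapt (the log path is illustrative, and the 2.2 directives only work in server-level config, not .htaccess):
# Apache 2.2
RewriteLog /var/log/apache2/rewrite.log
RewriteLogLevel 3
# Apache 2.4 - RewriteLog was removed; use the per-module log level instead
LogLevel alert rewrite:trace3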
{ "source": [ "https://serverfault.com/questions/214512", "https://serverfault.com", "https://serverfault.com/users/2561/" ] }
214,605
I've got a ton of processes running in the background to try and get enough entropy, but I am still failing. **We need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy. Not enough random bytes available. Please do some other work to give the OS a chance to collect more entropy! (Need 210 more bytes)** I need a method of generating the key that works, because what I'm trying to do is apparently failing.
Have you had a look at RNG? Fedora/Rh/Centos types: sudo yum install rng-tools On deb types: sudo apt-get install rng-tools to set it up. Then run sudo rngd -r /dev/urandom before generating the keys. Reference: http://it.toolbox.com/blogs/lim/how-to-generate-enough-entropy-for-gpg-key-generation-process-on-fedora-linux-38022
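To see whether any of this is helping, you can watch the kernel's entropy estimate while the key generation runs; values persistently in the low hundreds are roughly where gpg starts to stall:
# current estimate of available entropy, in bits
cat /proc/sys/kernel/random/entropy_avail
# watch it refill after starting rngd
watch -n1 cat /proc/sys/kernel/random/entropy_avail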
{ "source": [ "https://serverfault.com/questions/214605", "https://serverfault.com", "https://serverfault.com/users/63998/" ] }
214,816
I'm trying to connect to a Windows server from my Mac using RDC2.1 for Mac. The problem is the server I need to connect to is guarded by the evil dragon - IP-based access control on a completely separate network. I have an IP I can get in on, but it's at my office (i.e. a completely separate network). Because that network isn't set up for VPN, I've set up a SOCKS proxy through an SSH tunnel (which is all working fine). (SSH proxy) Me (on my Mac) ----------> Office Linux box ----> Windows server (home network) (office network) (other network) From my Linux server in my office (the SSH server) I can telnet to port 3389 on the Windows server, no problem. But from my Mac I can't get so much as a squeak out of it. Any ideas?
You don't need a SOCKS proxy for this; simple SSH port forwarding will work. For example, there's a server at my office I frequently need to access, which we'll call server.example.com . I can't connect to it directly, but I can ssh to myofficemachine.example.com . So I do this: ssh -L 3389:server.example.com:3389 myofficemachine.example.com And then I point my local Remote Desktop client to localhost . This works great, and my setup is almost identical to yours -- a Mac at home, a Linux box at my office, and a Windows server on another work network.
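If you do this often, the same forward can live in ~/.ssh/config so a plain ssh invocation sets it up; the hostnames below simply mirror the example above:
Host officebox
    HostName myofficemachine.example.com
    LocalForward 3389 server.example.com:3389
Then running "ssh officebox" opens the tunnel, and the Remote Desktop client still points at localhost.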
{ "source": [ "https://serverfault.com/questions/214816", "https://serverfault.com", "https://serverfault.com/users/30792/" ] }
215,007
Let's say I just have an ip address for a server and I don't have a domain with it (it's just a database server, so it doesn't need a domain). I don't want to have to remember the ip address every time, so is there a way I could still use the syntax like ssh username@database or something?
If you only want the name for ssh and ssh only, you can add a name to your ssh config in ~/.ssh/config As an example, your config file could look like this: Host database HostName <real IP address or hostname here> User username Then you can type ssh database on the command line and ssh will automatically do ssh [email protected] for you.
{ "source": [ "https://serverfault.com/questions/215007", "https://serverfault.com", "https://serverfault.com/users/41287/" ] }
215,405
This is a Canonical Question about Licensing. Questions on licensing are off-topic on Server Fault. If your question has been closed as a duplicate of this question, then this is because we want to help you understand why licence questions are off topic rather than just telling you "it just is". In all likelihood, this question will not address your question directly, it was not meant to. I have a question regarding software licensing. Can the Server Fault community please help with the following: How many licenses do I need? Is this licensing configuration valid? What CALs do I need to be properly licensed? Can I run this product in a virtual environment? Can I downgrade this product to an earlier version? Am I entitled to feature X with license Y ?
Licensing is a hard and absolutely vendor-specific problem. Not only that, many vendors, especially the larger ones like Microsoft, have multiple types of licensing regimes that change pricing based on: How large you are (either by seat count, annual revenue, or a combination of both) What industry you're in (non-profits, education, government, enterprise, large corporation, small corporation, and SMB are all discrete licensing categories with their own quirks) How much licensing you purchase (volume discounts vary) What kind of contract you buy (monthly subscription versus one time payout) Where you are located or where the licenses will be deployed Whether you're buying off a master contract, or are making your own. Especially for the larger companies like Microsoft, licensing is its own career-track and one that more and more often is not found in the SysAdmin/DevOps/SRE office. It is found in either your Purchasing office, your value-added-reseller's office, or the LargeCorp's sales office. In our company we have one person who specializes in purchasing IT license-bearing software. She handles Microsoft and Adobe licensing, as well as a host of other complex entities like ESRI, MatLab, Apple, Novell, and AutoCAD. It is her entire job to know these things and we've made significant savings because she can focus her whole effort into figuring the fiddly bits out. It has saved us a lot of money. She is neither Server person or Desktop person. When license-servers need setting up I do that, but she provides the license keys that goes into them and all of the legal mucking about that does into obtaining them in the first place. She's a licensing person , and is mostly the kind of person who can answer these questions. I'm not, neither are people like me. But even she would be hard pressed to answer questions for a 30 person small business, since she spends her entire day enmeshed in a large, public (and therefore governmental) higher-ed organization that has completely different licensing options. So you're a sysadmin, you're still told to fix a licensing problem, and you don't have that wonderful licensing professional I rhapsodized about. What do you do? Ask the company that makes what you want to buy . They'll at least give you some idea what market-segment they think your organization belongs in. You may even be able to buy from them directly. Do this especially if your question is on appropriate usage of licenses. If at all possible, ask for written answer — it's not unheard of for the sales people of larger software vendors to be lost in licensing details too. Ask a value-added-reseller of some kind , preferably one you already have a relationship with. They deal with this a lot more than you do, and likely have such a licensing professional. The majors like CDW have whole departments dedicated to this. They will keep you inside your market segment in ways you won't even notice (this is a good thing). Don't ask your peers , they likely don't know either. This is why your question got closed as a duplicate of this one.
{ "source": [ "https://serverfault.com/questions/215405", "https://serverfault.com", "https://serverfault.com/users/33884/" ] }
215,606
I am using Windows and have been given a .cer file. How can I view the details of it?
OpenSSL will allow you to look at it if it is installed on your system, using the OpenSSL x509 tool . openssl x509 -noout -text -in 'cerfile.cer'; The format of the .CER file might require that you specify a different encoding format to be explicitly called out. openssl x509 -inform pem -noout -text -in 'cerfile.cer'; or openssl x509 -inform der -noout -text -in 'cerfile.cer'; On Windows systems you can right click the .cer file and select Open. That will then let you view most of the meta data. On Windows you run Windows certificate manager program using certmgr.msc command in the run window. Then you can import your certificates and view details.
{ "source": [ "https://serverfault.com/questions/215606", "https://serverfault.com", "https://serverfault.com/users/26257/" ] }
215,756
I am looking for a way to push configuration from one central machine to several remote machines without the need to install anything on the remote machines. The aim is to do something like you would find with tools like cfengine , but on a set of machines that don't have agents set up. This might actually be a good technique of setting up cfagent on a set of existing remote machines.
You can pass a script and have it execute ephemerally by piping it in and executing a shell. e.g. echo "ls -l; echo 'Hello World'" | ssh me@myserver /bin/bash Naturally, the "ls -l; echo 'Hello World'" part could be replaced with a bash script stored in a file on the local machine. e.g. cat script.sh | ssh me@myserver /bin/bash Cheers!
{ "source": [ "https://serverfault.com/questions/215756", "https://serverfault.com", "https://serverfault.com/users/2374/" ] }
216,252
I've been trying to follow a few basic tutorials explaining how to get Apache up and running (on Ubuntu, running on Amazon). I've mostly come up blank, because all the tutorials told me to configure httpd.conf (to add DocumentRoot, etc.). I've now stumbled across one tutorial that told me to add site configurations to the sites-available directory (under /etc/apache2), and then symlink to them from sites-enabled. Configuring this way seems to work. But now I'm confused - how am I supposed to configure Apache? Most tutorials still seem to say that I should be using httpd.conf. Which one should I be using? What's the difference? Why are all the tutorials "wrong" (if they are)?
The sites-available method is generally considered the "Debian Way": "main" config in /etc/apache2/apache2.conf "user" config in /etc/apache2/httpd.conf vhosts in /etc/apache2/sites-available files (one per file, typically); you might want to number them, e.g. 00-domain.com, 01-otherdomain.com ports ( Listen directives) in /etc/apache2/ports.conf mods in /etc/apache2/mods-available You can manipulate these with symlinks or with the a2 series of commands: a2ensite/a2dissite <site_config_filename> a2enmod/a2dismod <module_name> Depending on personal preference, you can restart Apache using apachectl , /etc/init.d/apache2 (start|stop|reload|restart) , or service apache2 (start|stop|reload|restart) An example where you would use httpd.conf instead of a vhost entry would be a global redirect or rewrite rule. Other tidbits -- generally, you should leave apache2.conf alone, and make sure you set up a consistent naming scheme for vhosts in the sites-available directory.
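To make the sites-available workflow concrete, a minimal vhost file might look like the sketch below (names and paths are placeholders); drop it into /etc/apache2/sites-available/, enable it with a2ensite, and reload:
# /etc/apache2/sites-available/example.com (a .conf suffix is required on newer releases)
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
    ErrorLog /var/log/apache2/example.com-error.log
    CustomLog /var/log/apache2/example.com-access.log combined
</VirtualHost>
# enable and reload
a2ensite example.com && service apache2 reload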
{ "source": [ "https://serverfault.com/questions/216252", "https://serverfault.com", "https://serverfault.com/users/20675/" ] }