source_id | question | response | metadata |
---|---|---|---|
232,201 | I am using Ubuntu 10.10. I am generally good with computers but mostly with Windows, I'm not very familiar with Ubuntu. I'm trying to setup a website and I'm talking with a friend I have who works for a school IT department who is giving me advice. He told me that I need to open a command-prompt and type in sudo /etc/init.d/apache2 start but when I do all I see is sudo: /etc/init.d/apache2: command not found . I told my friend the error and he said that my file /dev/null was full, so I tried to find it but it's a hidden file. I know how to view hidden files in Windows but not in Ubuntu. My friend is offline now so I'm hoping someone can tell me how to delete my /dev/null ? Thanks! | It's not possible to empty /dev/null, that doesn't make sense. Your friend is joking with you. However it sounds like you don't have apache2 installed. You should be able to install it with sudo aptitude install apache2 | {
"source": [
"https://serverfault.com/questions/232201",
"https://serverfault.com",
"https://serverfault.com/users/69794/"
]
} |
232,416 | In a corporate environment, should developers have admin rights on their computer? Why? Technological environment: Windows 7 Visual Studio 2008 & 2010 SQL Server | Should they? That's up to the corporation. Personally I think it's fine as long as there are some understood rules. Being admin on your box is a privilege NOT a right. Catching viruses on multiple occasions will get the right revoked Disabling corporate agents will get the right revoked - AV/inventory/software deployment/etc Basically if you do something that risks the network will get the right revoked Any tools you install must not be made a dependency of your project without getting them on the officially approved list. Ask nicely don't come crashing in the day of the deploy and demand $random_library be installed on all servers with no testing For anything outside of normal applications installed everywhere else support will be best effort. Help desk and/or sysadmins will not spend 5 hours trying to debug why you have dll conflicts. | {
"source": [
"https://serverfault.com/questions/232416",
"https://serverfault.com",
"https://serverfault.com/users/69846/"
]
} |
232,511 | I want to archive files (with tar) which are below 3 MB in size. But I also want to retain the directories in which those files exist. (so I cannot use find command). I just want to avoid the files which are above 3 MB in size. How can this be done? | Simpler than you think: $ tar cf small-archive.tar /big/tree --exclude-from <(find /big/tree -size +3M) On a semi-related note (relating to your statement that you can't use find) to get a listing of all files (including directories) under a path minus files larger than 3MiB, use: $ find . -size -3M -o -type d You could then do: $ tar cf small-archive.tar --no-recursion --files-from <(find /big/tree -size -3M -o -type d) But I'd prefer the first one as it's simpler, clearly expresses what you want and will lead to less surprises. | {
"source": [
"https://serverfault.com/questions/232511",
"https://serverfault.com",
"https://serverfault.com/users/59291/"
]
} |
232,525 | I have file servers which are used to store files. Files might reside there for a week, or for a year. Unfortunately, when I remove files from the server, df command doesn't reflect the freed up space. So eventually, the server gets filled up ( df shows 99%), and my script doesn't send any more files there, except there might be a few dozen GB of free space on there. I got noatime flag on the mounted partitions if that makes any difference. | Deleting the filename doesn't actually delete the file. Some other process is holding the file open, causing it to not be deleted; restart or kill that process to release the file. Use lsof +L1 to find out which process is using a deleted (unlinked) file. | {
"source": [
"https://serverfault.com/questions/232525",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
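For the lsof +L1 answer above (question 232,525), a minimal sketch of the check-and-release cycle; the PID and the daemon name are hypothetical examples, not from the original post:

sudo lsof +L1                  # list processes holding open handles to unlinked (deleted) files
# Suppose the output shows rsyslogd (PID 1234) holding a huge deleted log file:
sudo kill -HUP 1234            # many daemons reopen their log files on SIGHUP
df -h /                        # the freed space should now be reported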
232,642 | I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (ie not that interesting) Clearly on here people are forever asking about configuring firewalls and such like. I use a bunch of approaches to secure the servers, for example (but not restricted to) ssh on non standard ports; no password typing, only known ssh keys from known ips for login etc https, and restricted shells (rssh) generally only from known keys/ips servers are minimal, up to date and patched regularly use things like rkhunter, cfengine, lynis denyhosts etc for monitoring I have extensive experience of unix sys admin. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install stuff like firewalls: iptables etc. Put aside for a moment the issues of physical security of the VPS. Q? I can't decide whether I am being naive or the incremental protection a fw might offer is worth the effort of learning / installing and the additional complexity (packages, config files, possible support etc) on the servers. To date (touch wood) I've never had any problems with security but I am not complacent about it either. | I note that you've done a great job tying down several different daemons, and from what you've said I think it unlikely that you'll expose yourself to trouble through those services you have already secured. This still leaves you in a "everything is permitted except that which I have forbidden" state, and you can't get out of that state by hunting down daemon after daemon and securing them one by one. A firewall configured to DENY ANY ANY by default moves you to a "everything is forbidden except that which is permitted" mode of operation, and I have found over many years that they're better. Right now, given a legitimate user with a legitimate shell on your system, she could decide to run some local unprivileged daemon for proxying web requests for the internet, or start file sharing on port 4662, or accidentally open up a listener by using -g with ssh port tunneling, not understanding what it does; or a sendmail install could leave you running an MUA on port 587 which was improperly configured despite all the work you'd done on securing the MTA sendail on port 25; or a hundred and one things could happen that bypass your careful and thoughtful security simply because they weren't around when you were thinking carefully about what to forbid. Do you see my point? At the moment, you've put a lot of effort into securing all the things you know about, and it sounds like they won't bite you. What may bite you is the things you don't know about, or that aren't even there, right now. A firewall which defaults to DENY ANY ANY is the sysadmin way of saying that if something new comes along and opens up a network listener on this server, noone will be able to talk to it until I have given explicit permission . | {
"source": [
"https://serverfault.com/questions/232642",
"https://serverfault.com",
"https://serverfault.com/users/69920/"
]
} |
232,762 | I was checking a Linux box and found a perl process running and taking a good share of CPU usage. With top, I could only see perl as the process name. When I pressed c to view the command line, it showed /var/spool/mail, which does not make sense, since this is a directory. My questions are: 1) Why did this happen? How could this perl process mask its command line?
2) What is the most reliable way of finding out where and how a process was started? Thanks! | In most cases just running ps is usually sufficient, along with your favorite flags to enable wide output. I lean towards ps -feww , but the other suggestions here will work. Note that if a program was started out of someone's $PATH , you're only going to see the executable name, not the full path. For example, try this: $ lftp &
$ ps -feww | grep ftp
lars 9600 9504 0 11:30 pts/10 00:00:00 lftp
lars 9620 9504 0 11:31 pts/10 00:00:00 grep ftp It's important to note that the information visible in ps can be completely overwritten by the running program. For example, this code: #include <string.h>
#include <unistd.h>
int main (int argc, char **argv) {
memset(argv[0], ' ', strlen(argv[0]));
strcpy(argv[0], "foobar");
sleep(30);
return(0);
} If I compile this into a file called "myprogram" and run it: $ gcc -o myprogram myprogram.c
$ ./myprogram &
[1] 10201 And then run ps , I'll see a different process name: $ ps -f -p 10201
UID PID PPID C STIME TTY TIME CMD
lars 10201 9734 0 11:37 pts/10 00:00:00 foobar You can also look directly at /proc/<pid>/exe , which may be a symlink to the appropriate executable. In the above example, this gives you much more useful information than ps : $ls -l /proc/9600/exe
lrwxrwxrwx. 1 lars lars 0 Feb 8 11:31 /proc/9600/exe -> /usr/bin/lftp | {
"source": [
"https://serverfault.com/questions/232762",
"https://serverfault.com",
"https://serverfault.com/users/58461/"
]
} |
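As a follow-up to the /proc/<pid>/exe tip in the answer above (question 232,762), a few more /proc entries that are hard for a process to disguise; the PID is the hypothetical one from the example:

pid=10201
ls -l /proc/$pid/exe    # symlink to the executable actually running, whatever ps claims
ls -l /proc/$pid/cwd    # the directory the process is currently working in
ls -l /proc/$pid/fd     # open file descriptors: logs, sockets, pipes (root may be needed for other users' processes)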
233,567 | I want to transfer lots of files/folders from Windows to Linux using Rsync. On linux server(destination), I want the file permission set to 644, and folder permission set to 755. If possible, I want the ownership set to root.root for all the files/folders. I have tried -p option, but it doesn't work. Thank you for any help. | You can set the perms using the --chmod parameter e.g. --chmod=Du=rwx,Dgo=rx,Fu=rw,Fog=r will force the permissions to be set to 755 for D irectories and 644 for F iles. | {
"source": [
"https://serverfault.com/questions/233567",
"https://serverfault.com",
"https://serverfault.com/users/26731/"
]
} |
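Putting the --chmod mapping above (question 233,567) into a full command, with the root ownership the question asked for. This is only a sketch: the paths are placeholders, --chown requires rsync 3.1 or later, and changing ownership only works when the receiving rsync runs as root:

rsync -rtvzog \
  --chmod=Du=rwx,Dgo=rx,Fu=rw,Fog=r \
  --chown=root:root \
  /local/site/ root@linuxserver:/srv/data/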
233,623 | As stated in rsync 's man page, the -a (archive) switch is equivalent to -rlptgoD . However, I have a situation where I don't want symbolic links retained. Is there any way to keep using the -a switch and prevent copying of symbolic links? I could write -rptgoD every time, but it's a bit long. | Try the following: rsync -a --no-links ... or, the slightly shorter: rsync -a --no-l ... Note that the --no-links / --no-l switch must come after the -a switch on the command line, otherwise the --links implied by -a is turned back on again. | {
"source": [
"https://serverfault.com/questions/233623",
"https://serverfault.com",
"https://serverfault.com/users/21430/"
]
} |
234,215 | I have a folder that contains files for a static website like: /site/index.html
/site/css/css.css
/site/js/js.js
/site/images/... If I update something on my laptop, I want a single command to send the files off to my ubuntu server. I don't want to setup FTP on it if I don't have too, wondering if scp would be able to handle this? | The command scp -r source user@target:dest will walk all subdirectories of source and copy them. However, scp behaves like cp and always copies files, even if it is the same on both source and destination. [See here for a workaround.] As this is a static website, you are most likely only making updates, not re-creating the whole thing, so you will probably find things move along faster if you use rsync over ssh instead of scp . Probably something like rsync -av -e ssh source user@target:dest ...to get started. If you are doing this across a LAN, I would personally use the options -avW instead for rsync . Rsync also gives you the ability to duplicate deletions in your source; so if you remove a file from your tree, you can run rsync as above, and include the flag --delete and it will remove the same file from the destination side. | {
"source": [
"https://serverfault.com/questions/234215",
"https://serverfault.com",
"https://serverfault.com/users/9900/"
]
} |
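A concrete one-shot deploy combining the rsync suggestions above (question 234,215); the host and paths are placeholders, and the trailing slash on the source makes rsync copy the directory's contents rather than the directory itself:

rsync -avz --delete -e ssh /site/ user@yourserver:/var/www/site/
# -a keeps permissions/times, -z compresses in transit, --delete mirrors local deletions to the server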
234,223 | I wonder if it is tied to my NIC at all or if the OS or driver intercepts and immediately returns data sent to the loopback address? Does the signal actually travel to my NIC then the NIC returns it? | All 127.xx.xx.xx traffic never hits the physical network, it gets processed by a loop back adapter in the kernel. | {
"source": [
"https://serverfault.com/questions/234223",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
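One way to see the loopback behaviour described above (question 234,223) for yourself: sniff the loopback device and the physical NIC while pinging 127.0.0.1. The interface name eth0 is an assumption; substitute your own:

sudo tcpdump -ni lo icmp     # terminal 1: the echo request/reply pairs appear here
sudo tcpdump -ni eth0 icmp   # terminal 2: stays silent, the packets never reach the NIC
ping -c 3 127.0.0.1          # terminal 3: generate the traffic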
234,311 | Is there a command to test whether jumbo frames are actually working? i.e. some sort of "ping" that reports whether or not the packet was broken up along the way. I've an ESXi host with an Ubuntu VM which mounts a Dell MD3000i via iSCSI. I suspect jumbo frames are not enabled on the switch, and can't easily get admin access to it. I have the option to connect the disk array directly to the ESXi host, but would like some way of confirming that jumbo frames are a problem first. | Enabling Jumbo Frames means allowing a larger Maximum Transmission Unit (MTU), usually by setting the MTU to 9000. To verify this has worked you can use ping in windows with the -l flag to set the packet size, and the -f flag to set Don't Fragment flag in the packet. ping my.test.host -f -l 8972 If the packet gets fragmented you will see Packet needs to be fragmented but DF set in place of what you would normally see. For Linux, the ping command uses different flags. -s sets the packet size, and -M do sets Do Not Fragment. So the above command would be: ping my.test.host -M do -s 8972 By adjusting the packet size, you can figure out what the mtu for the link is. This will represent the lowest mtu allowed by any device in the path, which could be your switch, your computer, target or anything else inbetween. This won't by itself tell you where the lowest MTU is - you may be able to work that out by running the test to different devices in the path, but there could always be transparent routers that limit the MTU but don't show up for traceroute . Note there is an overhead of 28 bytes for the ICMP headers, so the MTU is 28 bytes larger than the figure you establish through the method above. So to check for MTU of 9000, you actually need to set your ping packet size to 9000-28 = 8972. Update I found some resources which will specifically figure out the MTU across the path between the host and the target: For Windows mturoute For *nix tracepath or traceroute --mtu And some more discussion on finding the MTU of a path . | {
"source": [
"https://serverfault.com/questions/234311",
"https://serverfault.com",
"https://serverfault.com/users/4935/"
]
} |
234,322 | i have a pc with dual boot - "windows 7 and ubuntu 10.10". is it possible to run my windows 7 with virtual box or vmware workstation/player through my ubuntu login.
thank you. | | {
"source": [
"https://serverfault.com/questions/234322",
"https://serverfault.com",
"https://serverfault.com/users/43969/"
]
} |
235,139 | I'm using RHEL 5.6 and unzip-5.52-3.el5. I'm trying to unzip a big file, but I get the error: unzip -o test.zip -d unzip/
error: Zip file too big (greater than 4294959102 bytes)
Archive: test.zip
warning [test.zip]: 4294967296 extra bytes at beginning or within zipfile Is there another program that can work with large zip files or do I have to wait until unzip 6 comes to RHEL? (might be years!) Thanks | If you've got Java on the box, you can use jar xf test.zip | {
"source": [
"https://serverfault.com/questions/235139",
"https://serverfault.com",
"https://serverfault.com/users/53148/"
]
} |
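One wrinkle with the jar workaround above (question 235,139): classic jar has no equivalent of unzip's -d option and always extracts into the current directory, so change into the target first:

mkdir -p unzip
cd unzip
jar xf ../test.zip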
235,154 | We have deployed our rails application on on Nginx and passenger. Intermittently, pages of application get loaded partially. There is no error in application log, but the Nginx error log shows the following: 2011/02/14 05:49:34 [crit] 25389#0: *645 open() "/opt/nginx/proxy_temp/2/02/0000000022"
failed (13: Permission denied) while reading upstream, client: x.x.x.x,
server: y.y.y.y, request: "GET /signup/procedures?count=0 HTTP/1.1",
upstream: "passenger:unix:/passenger_helper_server:", host: "y.y.y.y",
referrer: "http://y.y.y.y/signup/procedures" | I had the same problem on an NGINX/PHP-FPM setup (php-fpm=improved fcgi for php). You can find out which user the nginx processes are running as ps aux | grep "nginx: worker process" And then check out if the permissions in your proxy files are correct ls -l /opt/nginx/proxy_temp/ In my case, nginx was running as www-data and two of the directories in my proxy directory belonged to root. I don't know how it happened yet, but I fixed it by doing (as root) chown www-data.www-data /opt/nginx/proxy_temp | {
"source": [
"https://serverfault.com/questions/235154",
"https://serverfault.com",
"https://serverfault.com/users/68613/"
]
} |
235,184 | We switched from PostgreSQL 8.3 to 9.0. Perhaps it's a new feature or perhaps just a configuration change, but now when output from commands (like, \d tablename ) exceeds visible vertical space, psql seem to pipe the output through something similar to less . I could not find a way to turn this behaviour off. Any advice? Thanks. P.S. I'm scrolling the buffer using PuTTY's Shift+PgUp/PgDn so I don't need psql's paging. Plus, when I press q in the psql's paging, its output disappears from the screen entirely (just like after running less in bash), which is wrong from the general use-cases point of view. | TL;DR: \pset pager 0 From the \pset section of the psql manual : pager Controls use of a pager program for query and psql help output. If the environment variable PAGER is set, the output is piped to the specified program. Otherwise a platform-dependent default (such as more) is used. When the pager option is off, the pager program is not used. When the pager option is on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. The pager option can also be set to always, which causes the pager to be used for all terminal output regardless of whether it fits on the screen. \pset pager without a value toggles pager use on and off. | {
"source": [
"https://serverfault.com/questions/235184",
"https://serverfault.com",
"https://serverfault.com/users/68280/"
]
} |
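To make the \pset setting above (question 235,184) stick across sessions, or to neutralise the pager for a single invocation, something like the following should work (the database name is a placeholder):

echo '\pset pager off' >> ~/.psqlrc   # psql reads ~/.psqlrc at startup
PAGER=cat psql -d mydb                # or override the pager for just this session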
235,307 | I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2: HEAD http://xqwvykjfei/ HTTP/1.1
Host: xqwvykjfei
Proxy-Connection: keep-alive
Content-Length: 0
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The response to this request is as follows: HTTP/1.1 502 Fiddler - DNS Lookup Failed
Content-Type: text/html
Connection: close
Timestamp: 08:15:45.283
Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run virus scan (Trend Micro) and HiJackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign, or indicative of a bigger problem. Thanks. | This is actually legitimate behaviour. Some ISPs improperly respond to DNS queries to non-existent domains with an A record to a page that they control, usually with advertising, as a "did you mean?" kind of thing, instead of passing NXDOMAIN as the RFC requires. To combat this, Chrome makes several HEAD requests to domains which cannot exist to check how the DNS servers resolve them. If they return A records, Chrome knows to perform a search query for the host instead of obeying the DNS record so that you are not affected by the ISPs improper behaviour. [1] | {
"source": [
"https://serverfault.com/questions/235307",
"https://serverfault.com",
"https://serverfault.com/users/5162/"
]
} |
235,648 | How can I use X-Forwarded-For headers (my proxy IP is 10.1.1.x) to allow an HTTP query? | You can use SetEnvIf and Allow: <Location "/only_proxy/">
SetEnvIf X-Forwarded-For ^10\.1\.1\. proxy_env
Order allow,deny
Satisfy Any
Allow from env=proxy_env
</Location> | {
"source": [
"https://serverfault.com/questions/235648",
"https://serverfault.com",
"https://serverfault.com/users/54662/"
]
} |
235,669 | I have installed RabbitMQ on a Debian Linux Squeeze machine, and I would like it to only listen to the localhost interface. I have added RABBITMQ_NODE_IP_ADDRESS=127.0.0.1 to my /etc/rabbitmq/rabbitmq.conf file, and that makes it bind to only the localhost interface when listening on the amqp port (5672). However, it still binds to all interfaces when listening on ports epmd (4369) and 43380: # lsof -n -a -i -urabbitmq
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
epmd 7353 rabbitmq 3u IPv4 1177662 0t0 TCP *:epmd (LISTEN)
epmd 7353 rabbitmq 5u IPv4 1177714 0t0 TCP 127.0.0.1:epmd->127.0.0.1:50877 (ESTABLISHED)
beam.smp 7365 rabbitmq 10u IPv4 1177711 0t0 TCP *:43380 (LISTEN)
beam.smp 7365 rabbitmq 11u IPv4 1177713 0t0 TCP 127.0.0.1:50877->127.0.0.1:epmd (ESTABLISHED)
beam.smp 7365 rabbitmq 19u IPv4 1177728 0t0 TCP 127.0.0.1:amqp (LISTEN) How do I prevent this? Do I have to set up iptables, or are there additional RabbitMQ configuration options that will make it do what I want? | Putting the following in /etc/rabbitmq/rabbitmq-env.conf will make RabbitMQ and epmd listen on only localhost: export RABBITMQ_NODENAME=rabbit@localhost
export RABBITMQ_NODE_IP_ADDRESS=127.0.0.1
export ERL_EPMD_ADDRESS=127.0.0.1 It takes a bit more work to configure Erlang to only use localhost for the higher numbered port (which is used for clustering nodes as far as I can tell). If you don't care about clustering and just want Rabbit to be run fully locally then you can pass Erlang a kernel option for it to only use the loopback interface. To do so, create a new file in /etc/rabbitmq/ - I'll call it rabbit.config . In this file we'll put the Erlang option that we need to load on run time. [{kernel,[{inet_dist_use_interface,{127,0,0,1}}]}]. If you're using the management plugin and also want to limit that to localhost, you'll need to configure its ports separately, making the rabbit.config include this: [
{rabbitmq_management, [
{listener, [{port, 15672}, {ip, "127.0.0.1"}]}
]},
{kernel, [
{inet_dist_use_interface,{127,0,0,1}}
]}
]. (Note RabbitMQ leaves epmd running when it shuts down, so if you want to block off Erlang's clustering port, you will need to restart epmd separately from Rabbit.) Next we need to have RabbitMQ load this at startup. Open up /etc/rabbitmq/rabbitmq.conf again and put the following at the top: export RABBITMQ_CONFIG_FILE="/etc/rabbitmq/rabbit" This loads that config file when the rabbit server is started and will pass the options to Erlang. You should now have all Erlang/RabbitMQ processes listening only on localhost! This can be checked with netstat -ntlap EDIT : In older versions of RabbitMQ, the configuration file is : /etc/rabbitmq/rabbitmq.conf . However, this file has been replaced by the rabbit-env.conf file. | {
"source": [
"https://serverfault.com/questions/235669",
"https://serverfault.com",
"https://serverfault.com/users/2188/"
]
} |
237,557 | I have created a new EC2 instance. It got assigned the default security group. I want to change that security group. How? | Unless the instance is in a VPC, security groups can only be chosen before you start your instance for the first time. Only VPC instances can change security group. For information on VPC see here . | {
"source": [
"https://serverfault.com/questions/237557",
"https://serverfault.com",
"https://serverfault.com/users/35042/"
]
} |
238,033 | We're quite interested in exploring the possibility of using SSD drives in a server environment. However, one thing that we need to establish is expected drive longevity. According to this article manufacturer's are reporting drive endurance in terms of 'total bytes written' (TBW). E.g. from that article a Crucial C400 SSD is rated at 72TB TBW. Do any scripts/tools exist under the Linux ecosystem to help us measure TBW? (and then make a more educated decision on the feasibility of using SSD drives) | Another possibility is to look at /proc/diskstats . It's not persistent across reboots, but it has data for every block device. Probably most interesting to you is field 10, which contains the total number of sectors written. On a system with scsi disks with a sector size of 512 bytes, you could run awk '/sd/ {print $3"\t"$10 / 2 / 1024}' /proc/diskstats to see how many megabytes were written to each device. The output will look like sda 728.759 sda1 79.0908 sda2 649.668 | {
"source": [
"https://serverfault.com/questions/238033",
"https://serverfault.com",
"https://serverfault.com/users/48987/"
]
} |
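Related to the TBW question above (238,033): many SSDs also expose a lifetime-writes counter over SMART, which, unlike /proc/diskstats, survives reboots. The attribute name and its units vary by vendor, so the grep pattern below is only a starting point:

sudo smartctl -A /dev/sda | grep -iE 'total_lbas_written|total.*written|wear'
# Total_LBAs_Written is often counted in 512-byte sectors, but check your vendor's documentation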
238,100 | What does ::1 mean? I'm trying to find out my IP and the result is ::1. | It's the loopback address in IPv6, equal to 127.0.0.1 in IPv4.
"source": [
"https://serverfault.com/questions/238100",
"https://serverfault.com",
"https://serverfault.com/users/45338/"
]
} |
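A quick way to confirm that ::1 behaves exactly like its IPv4 counterpart, as the answer above (question 238,100) says:

ping6 -c 3 ::1          # IPv6 loopback (plain `ping ::1` works on newer iputils)
ping -c 3 127.0.0.1     # IPv4 loopback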
238,191 | I've installed some things manually in the past and would like to weed out all related files. So, I need a way to automatically find all the files (in /usr, for example) that are not included in any of the packages currently installed on the Debian system. However, I would also need to filter out the files that are created during package installation (by dpkg post-install scripts and similar things). | Use the cruft package: cruft is a program to look over the system for anything that shouldn't
be there, but is; or for anything that should be there, but isn't. | {
"source": [
"https://serverfault.com/questions/238191",
"https://serverfault.com",
"https://serverfault.com/users/71560/"
]
} |
238,417 | This is a software design question I used to work on the following rule for speed cache memory > memory > disk > network With each step being 5-10 times the previous step (e.g. cache memory is 10 times faster than main memory). Now, it seems that gigabit ethernet has latency less than local disk. So, maybe operations to read out of a large remote in-memory DB are faster than local disk reads. This feels like heresy to an old timer like me. (I just spent some time building a local cache on disk to avoid having to do network round trips - hence my question) Does anybody have any experience / numbers / advice in this area? And yes I know that the only real way to find out is to build and measure, but I was wondering about the general rule. edit : This is the interesting data from the top answer: Round trip within same datacenter 500,000 ns Disk seek 10,000,000 ns This is a shock for me; my mental model is that a network round trip is inherently slow. And its not - its 10x faster than a disk 'round trip'. Jeff attwood posted this v good blog on the topic http://blog.codinghorror.com/the-infinite-space-between-words/ | Here are some numbers that you are probably looking for, as quoted by Jeff Dean, a Google Fellow: Numbers Everyone Should Know L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns
Mutex lock/unlock 100 ns (25)
Main memory reference 100 ns
Compress 1K bytes with Zippy 10,000 ns (3,000)
Send 2K bytes over 1 Gbps network 20,000 ns
Read 1 MB sequentially from memory 250,000 ns
Round trip within same datacenter 500,000 ns
Disk seek 10,000,000 ns
Read 1 MB sequentially from network 10,000,000 ns
Read 1 MB sequentially from disk 30,000,000 ns (20,000,000)
Send packet CA->Netherlands->CA 150,000,000 ns It's from his presentation titled Designs, Lessons and Advice from Building Large Distributed Systems and you can get it here: Dr Jeff Dean Keynote PDF or on slideshare.net The talk was given at Large-Scale Distributed Systems and Middleware (LADIS) 2009 . Other Info Google Pro Tip: Use Back-Of-The-Envelope-Calculations To Choose The Best Design Stanford 295 Talk Software Engineering Advice from Building Large-Scale Distributed Systems It's said that gcc -O4 emails your code to Jeff Dean for a rewrite. | {
"source": [
"https://serverfault.com/questions/238417",
"https://serverfault.com",
"https://serverfault.com/users/43397/"
]
} |
238,427 | I have set up a mercurial repo to be served using apache+wsgi+hgweb on OS X. It is now completely open to anyone who stumbles upon my server on the correct port number.. How can I set it up so that only people with a username+password pair that I approve can pull and/or push from the repo? I know how to very easily achieve this using ssh, but in this specific case the requirement is that the solution doesn't require defining full fledged user accounts on the machine for each person whom I'd like to give access to the repo. | | {
"source": [
"https://serverfault.com/questions/238427",
"https://serverfault.com",
"https://serverfault.com/users/38982/"
]
} |
238,563 | Im currently using ufw to enforce some basic firewall rules. Is it possible to also use ufw to do port forwarding? Specifically im wanting to forward incoming traffic to my server (same machine running ufw) on port 80 to port 8080. (http traffic forwarded to tomcat) Th | Let's say you want to forward requests going to 80 to a server listening on port 8080. Note that you will need to make sure port 8080 is allowed, otherwise ufw will block the requests that are redirected to 8080. sudo ufw allow 8080/tcp There are no ufw commands for setting up the port forwards, so it must be done via configuraton files. Add the lines below to /etc/ufw/before.rules , before the filter section, right at the top of the file: *nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
COMMIT Then restart and enable ufw to start on boot: sudo ufw enable | {
"source": [
"https://serverfault.com/questions/238563",
"https://serverfault.com",
"https://serverfault.com/users/71677/"
]
} |
238,567 | I have a simple question for which I have not been able to find a simple answer. Here's the scenario: Server 1 : WHM/cPanel with multiple accounts/dbs Server 2 : WHM/cPanel with multiple accounts/dbs Server 3 : Heartbeat for Server 1. (Switches the routes to Server 2 in case Server 1 is unavailable.) Question : How do I synchronize Server 1 and Server 2? I can sync the DNS settings using the cluster features but what about the files/dbs? I'm prepared to get my hands wet with some coding but I have no idea how to go about this. Regards,
Nauman. | | {
"source": [
"https://serverfault.com/questions/238567",
"https://serverfault.com",
"https://serverfault.com/users/69030/"
]
} |
238,708 | I use puppet to install a current JDK and tomcat. package {
[ "openjdk-6-jdk", "openjdk-6-doc", "openjdk-6-jre",
"tomcat6", "tomcat6-admin", "tomcat6-common", "tomcat6-docs",
"tomcat6-user" ]:
ensure => present,
} Now I'd like to add JAVA_HOME="/usr/lib/java"
export JAVA_HOME to /etc/profile , just to get this out of the way. I haven't found a straightforward answer in the docs, yet. Is there a recommended way to do this? In general, how do I tell puppet to place this file there or modify that file? I'm using puppet for a single node (in standalone mode) just to try it out and to keep a log of the server setup . | Add a file to /etc/profile.d/ with the suffix .sh . It will be sourced as part of /etc/profile in Red Hat and Debian and derivatives, can't say on other distros. Generally speaking, if at all possible, it's better to add snippets rather than replace distributed files as it tends to be more future safe. So in puppet, the following would do: file { "/etc/profile.d/set_java_home.sh":
ensure => present,
source => ...[whatever's appropriate for your setup]...,
...
} This what you're looking for or do you need more detail? | {
"source": [
"https://serverfault.com/questions/238708",
"https://serverfault.com",
"https://serverfault.com/users/4346/"
]
} |
238,962 | I have a directory that is showing up with the permission mask drwsrwsr-x . When I try to reset the permissions to 755 the S still remains. What is the "s" and why cant I change the permissions back to 775 ( drwxrwxr-x )? | The s you are seeing in the "execute" position in the user and group column are the SetUID (Set User ID on Execution) and SetGID (Set Group ID on execution) bits. Unix file permissions are actually a 4-digit octal number SUGO S controls the SetUID (4), SetGID (2) and "Sticky" (1) bits U controls Read(4)/Write(2)/Execute(1) bits for the file owner G controls the Read/Write/Execute bits for the file's group O controls the Read/Write/Execute bits for everyone else. You can remove the setuid bits from your directory with chmod ug-s directory , or chmod 0755 directory For more information see the man page for chmod , and this Wikipedia page about the SetUID bit . | {
"source": [
"https://serverfault.com/questions/238962",
"https://serverfault.com",
"https://serverfault.com/users/52106/"
]
} |
239,205 | I am looking for something that resembles packages.debian.org Debian Package Browser only for CentOS 5 and/or RHEL 5 [Red Hat Enterprise Linux]. | Per my comment, I don't believe there is an equivalent to the "packages.debian.org" central package archive (with web interface) in CentOS. It's something I think is really missing! | {
"source": [
"https://serverfault.com/questions/239205",
"https://serverfault.com",
"https://serverfault.com/users/60317/"
]
} |
239,496 | I have a series of piped greps, awks and seds which produce a list of numbers, one on each line. Something like this: 1.13
3.59
1.23 How can i pipe this to something which will output the average, max, and min? | Since you're already using awk blahblahblah | awk '{if(min==""){min=max=$1}; if($1>max) {max=$1}; if($1<min) {min=$1}; total+=$1; count+=1} END {print total/count, max, min}' | {
"source": [
"https://serverfault.com/questions/239496",
"https://serverfault.com",
"https://serverfault.com/users/32417/"
]
} |
239,719 | I've seen a lot of datacenters pictures and it seems that the owners prefer to build them over a wide area instead of building them using taller buildings. Why? | Because they don't need to be located somewhere land/real estate is expensive. Tall buildings are cost effective when the expense of the structure is less than the cost of the footprint. | {
"source": [
"https://serverfault.com/questions/239719",
"https://serverfault.com",
"https://serverfault.com/users/56623/"
]
} |
239,749 | I've successfully setup HAProxy in front of an HTTP server which I have no control over . Is it possible to configure HAProxy to add Simple HTTP Authentication to all sites, bearing in mind I can't configure this on the backend? Thanks, Lars | I had to do this today myself (because IIS 7.5 bizarrely doesn't actually support authenticating against anything but Windows user accounts or AD!)... Here's all the code userlist UsersFor_AcmeCorp
user joebloggs insecure-password letmein
backend HttpServers
.. normal backend stuff goes here as usual ..
acl AuthOkay_AcmeCorp http_auth(UsersFor_AcmeCorp)
http-request auth realm AcmeCorp if !AuthOkay_AcmeCorp I documented it a bit better here: http://nbevans.wordpress.com/2011/03/03/cultural-learnings-of-ha-proxy-for-make-benefit/ | {
"source": [
"https://serverfault.com/questions/239749",
"https://serverfault.com",
"https://serverfault.com/users/72048/"
]
} |
239,808 | Currently I am reading SSD reviews and I wonder how much exactly I will benefit if I move the 24 GB swap from 7200rpm HDD to SSD. Does anyone implemented swap space on SSD? Is this generally good idea? On a side note: I read that ext4 has much better performance if the journal is on SSD. Anyone with such a setup? Thanks! Edit: Here I will answer the questions posted:
Occasionally, relatively rare I am hitting the swap. I know what the swap is for and that is better to get more RAM. When the server begins to swap its performance degrades (not a surprise). The idea is if I have few memory hungry processes running, to improve the overall system performance at that time, using SSD for swap, instead of slower rotational media. At the end - I want to be able to login faster and check the server state during swapping, instead of waiting on the login prompt. And of what I see SSD is cheaper per GB than RAM. Would I have better server performance during swapping (as rare it is) using SSD compared to HDD? Where 10k or 15k rpm HDDs would rate in this scenario? Thank you all for your quick and prompt answers! | Are you hitting swap? Generally, the better solution is to avoid that entirely, or at least make it so that things which are swapped out are genuinely not in active use, so that the speed doesn't matter. Put your money into more RAM. This is particularly true because while high-end SSD drives may improve performance, cheap ones are very troublesome in this regard. There is a great article on this week's Linux Weekly News which I highly recommend reading: http://lwn.net/Articles/428584/ . The summary is that cheap drives are very, very sensitive to access patterns, and Linux isn't currently designed to match that well. Worse, the drives don't really expose that information in a useful way, so Linux can't necessarily do the right thing. The best best is to use them with their pre-existing FAT32 filesystems, which are factory-configured to match the drive's expectations. Or else you should buy expensive high-performance SSDs — but only when you're already maxed out on RAM. (And really, at that point, you might strongly consider just getting a newer server which supports more RAM.) | {
"source": [
"https://serverfault.com/questions/239808",
"https://serverfault.com",
"https://serverfault.com/users/69956/"
]
} |
240,015 | I'd like to for once leave SELinux running on a server for the alleged increased security. I usually disable SELinux to get anything to work. How do I tell SELinux to allow MySQL connections? The most I've found in the documentation is this line from mysql.com: If you are running under Linux and Security-Enhanced Linux (SELinux) is enabled, make sure you have disabled SELinux protection for the mysqld process. wow ... that's really helpful. | To check SELinux sestatus To see what flags are set on httpd processes getsebool -a | grep httpd To allow Apache to connect to remote database through SELinux setsebool httpd_can_network_connect_db 1 Use -P option makes the change permanent. Without this option, the boolean would be reset to 0 at reboot. setsebool -P httpd_can_network_connect_db 1 | {
"source": [
"https://serverfault.com/questions/240015",
"https://serverfault.com",
"https://serverfault.com/users/84800/"
]
} |
240,155 | When I try to exit from my Linux server I get the message: There are stopped jobs. : Is there a single command to kill these? | To quickly kill all the stopped jobs under the bash, enter: kill -9 `jobs -ps` jobs -ps lists the process IDs ( -p ) of the stopped ( -s ) jobs. kill -9 `jobs -ps` sends SIGKILL signals to all of them. | {
"source": [
"https://serverfault.com/questions/240155",
"https://serverfault.com",
"https://serverfault.com/users/26257/"
]
} |
240,181 | I am configuring apache and by default I have the directories /etc/httpd/conf and /etc/httpd/conf.d . What is the difference? Also you can see the suffix in many other directories like /etc/init.d , /etc/cron.d , etc... | "d" stands for directory and such a directory is a collection of configuration files which are often fragments that are included in the main configuration file. The point is to compartmentalize configuration concerns to increase maintainability. When you have a distinction such as /etc/httpd/conf vs /etc/httpd/conf.d , it is usually the case that /etc/httpd/conf contains various different kinds of configuration files, while a .d directory contains multiple instances of the same configuration file type (such as "modules to load", "sites to enable" etc), and the administrator can add and remove as needed. | {
"source": [
"https://serverfault.com/questions/240181",
"https://serverfault.com",
"https://serverfault.com/users/62738/"
]
} |
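For the conf vs conf.d answer above (question 240,181), you can see the glue between the two directories in the main Apache config; on a stock CentOS install it is a single Include line:

grep -n '^Include' /etc/httpd/conf/httpd.conf
# typically prints something like:
# Include conf.d/*.conf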
240,476 | I am using nginx/0.7.68, running on CentOS, with the following configuration: server {
listen 80;
server_name ***;
index index.html index.htm index.php default.html default.htm default.php;
location / {
root /***;
proxy_pass http://***:8888;
index index.html index.htm;
}
# where *** is my variables The proxy_pass is to a DNS record whose IP changes frequently. Nginx caches the outdated IP address, resulting in a request to the wrong IP address. How can I stop nginx from caching the IP address, when it is outdated? | Accepted answer didn't work for me on nginx/1.4.2. Using a variable in proxy_pass forces re-resolution of the DNS names because NGINX treats variables differently to static configuration. From the NGINX proxy_pass documentation : Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver. For example: server {
...
resolver 127.0.0.1;
set $backend "http://dynamic.example.com:80";
proxy_pass $backend;
...
} Note: A resolver (i.e. the name server to use) MUST be available and configured for this to work (and entries inside a /etc/hosts file won't be used in a lookup). By default, version 1.1.9 or later versions of NGINX cache answers using the TTL value of a response and an optional valid parameter allows the cache time to be overridden: resolver 127.0.0.1 [::1]:5353 valid=30s; Before version 1.1.9, tuning of caching time was not possible, and nginx always cached answers for the duration of 5 minutes. . | {
"source": [
"https://serverfault.com/questions/240476",
"https://serverfault.com",
"https://serverfault.com/users/67012/"
]
} |
240,496 | Can you provide instructions on how to install ab on a Fedora distro with or without installing the Apache web server? With yum or compiling from source. | Install apr-util (needed to run ab): yum install apr-util Install yum-utils: yum install yum-utils Download httpd and extract ab: mkdir ~/httpd
cd ~/httpd
yumdownloader httpd
rpm2cpio httpd-2.2.3-43.el5.centos.3.i386.rpm | cpio -idmv
mv usr/bin/ab /usr/bin/ab
cd ~
rm -rf ~/httpd Run ab: ab http://google.ru/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0 | {
"source": [
"https://serverfault.com/questions/240496",
"https://serverfault.com",
"https://serverfault.com/users/52704/"
]
} |
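On more recent Fedora/RHEL releases the extraction dance above (question 240,496) is unnecessary, because ab ships in a sub-package separate from the web server:

yum install httpd-tools      # dnf install httpd-tools on current Fedora
ab -n 100 -c 10 http://example.com/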
240,788 | Strangely I can't find it anywhere, but when I send the output of strace to a file like this: strace foo.exe |& tee foo.log the output is too short; how can I make the width longer? | The "-s" option under Linux, from the "strace" package, will let you specify the width: -s strsize Specify the maximum string size to print (the
default is 32). Note that filenames are not consid-
ered strings and are always printed in full. | {
"source": [
"https://serverfault.com/questions/240788",
"https://serverfault.com",
"https://serverfault.com/users/61104/"
]
} |
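Combining the -s option above (question 240,788) with the original command, and writing the trace to a file instead of teeing stderr; the 4096 limit is just an example value:

strace -s 4096 -f -o foo.log ./foo.exe   # -f also follows child processes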
240,813 | All I know about the difference between them is that varchar has a length limit and text does not. The documentation does not mention this. Is that really the only difference? Are there no performance considerations, etc.? | The background of this is: The old Postgres system used the PostQUEL language and used a data type named text (because someone thought that was a good name for a type that stores text). Then, Postgres was converted to use SQL as its language. To achieve SQL compatibility, instead of renaming the text type, a new type varchar was added. But both types use the same C routines internally.
"source": [
"https://serverfault.com/questions/240813",
"https://serverfault.com",
"https://serverfault.com/users/46527/"
]
} |
240,897 | I'm trying to connect to an NFS folder on my dev server. The owner of the folder on the dev server is darren and group darren. When I export and mount it to my Mac using the Disk Utility it mounts, but then when I try to open the folder is says I do not have permissions. I have set rw, sync, and no_subtree_check. The user on the Mac is darren with a bunch of groups. Do I need to have the same group and user set to access the folder? | NFS is built on top of RPC authentication. With NFS version 3, the most common authentication mechanism is AUTH_UNIX. The user id and group id of the client system are sent in each RPC call, and the permissions these IDs have on the file being accessed are checked on the server. For this to work, the UID and GIDs must be the same on the server and the clients. However, you can force all access to occur as a single user and group by combining the all_squash, anonuid, and anongid export options. all_squash will map all UIDs and GIDs to the anonymous user, and anonuid and anongid set the UID and GID of the anonymous user. For example, if your UID and GID on your dev server are both 1001, you could export your home directory with a line like /home/darren 192.168.1.1/24(rw,all_squash,anonuid=1001,anongid=1001) I'm less familiar with NFS version 4, but I think you can set up rpc.idmapd on the clients to alter the uid and gid they send to the server. | {
"source": [
"https://serverfault.com/questions/240897",
"https://serverfault.com",
"https://serverfault.com/users/54780/"
]
} |
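After adding an /etc/exports line like the one above (question 240,897), re-export and verify the active options on the server side:

sudo exportfs -ra   # re-read /etc/exports
sudo exportfs -v    # list each export with its effective options (all_squash, anonuid, ...)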
240,920 | How can I enable non-free packages on Debian? I want to install Sun's Java JDK but it's not available to me. | Open up /etc/apt/sources.list , and you should see lines like the following (URLs will likely vary): deb http://http.us.debian.org/debian stable main contrib Simply add non-free to the respective URLs you wish to use, i.e.: deb http://http.us.debian.org/debian stable main contrib non-free Running apt-get update will update your local repo with the package listing. | {
"source": [
"https://serverfault.com/questions/240920",
"https://serverfault.com",
"https://serverfault.com/users/72395/"
]
} |
241,154 | How can I run a command in bash without saving it in history? | Add a space before the command. Commands starting with a space are not put in the history: root@ubuntu-1010-server-01:~# echo foo
foo
root@ubuntu-1010-server-01:~# history
1 echo foo
2 history
root@ubuntu-1010-server-01:~# echo bar
bar
root@ubuntu-1010-server-01:~# history
1 echo foo
2 history man bash HISTCONTROL
A colon-separated list of values controlling how commands are
saved on the history list. If the list of values includes
ignorespace, lines which begin with a space character are not
saved in the history list. A value of ignoredups causes lines
matching the previous history entry to not be saved. A value
of ignoreboth is shorthand for ignorespace and ignoredups. A
value of erasedups causes all previous lines matching the cur‐
rent line to be removed from the history list before that line
is saved. Any value not in the above list is ignored. If
HISTCONTROL is unset, or does not include a valid value, all
lines read by the shell parser are saved on the history list,
subject to the value of HISTIGNORE. The second and subsequent
lines of a multi-line compound command are not tested, and are
added to the history regardless of the value of HISTCONTROL. | {
"source": [
"https://serverfault.com/questions/241154",
"https://serverfault.com",
"https://serverfault.com/users/62120/"
]
} |
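Since the leading-space trick above (question 241,154) only works when HISTCONTROL includes ignorespace, it is worth setting that explicitly in your shell startup file:

echo 'export HISTCONTROL=ignoreboth' >> ~/.bashrc   # ignoreboth = ignorespace + ignoredups
source ~/.bashrc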
241,162 | I am trying to use nmap to scan the computers on my network for conficker. I am using Windows. What do I put in the target for all computer on a given subnet? It is running: nmap -T4 -A -v I put in 192.168.1.0 and 192.168.1.255 but it did not work. Thank you | | {
"source": [
"https://serverfault.com/questions/241162",
"https://serverfault.com",
"https://serverfault.com/users/15827/"
]
} |
241,588 | How to automate SSH login with password?
I'm configuring my test VM, so heavy security is not considered. SSH chosen for acceptable security with minimal configuration. ex) echo password | ssh id@server This doesn't work. I remember I did this with some tricks somebody guided me, but I can't remember now the trick I used... | Don't use a password. Generate a passphrase-less SSH key and push it to your VM. If you already have an SSH key, you can skip this step…
Just hit Enter for the key and both passphrases: $ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub. Copy your keys to the target server: $ ssh-copy-id id@server
id@server's password: Now try logging into the machine, with ssh 'id@server' , and check-in: .ssh/authorized_keys Note: If you don't have .ssh dir and authorized_keys file, you need to create it first to make sure we haven’t added extra keys that you weren’t expecting. Finally, check to log in… $ ssh id@server
id@server:~$ You may also want to look into using ssh-agent if you want to try keeping your keys protected with a passphrase. | {
"source": [
"https://serverfault.com/questions/241588",
"https://serverfault.com",
"https://serverfault.com/users/46527/"
]
} |
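If you genuinely need to feed the password itself non-interactively on a throwaway test VM, which is what the echo pipe in question 241,588 was attempting, sshpass can do it; the key-based setup above remains the better answer:

sudo apt-get install sshpass            # package name on Debian/Ubuntu
sshpass -p 'yourpassword' ssh id@server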
241,721 | Is there any way supervisord can automatically restart a failed/exited/terminated job and send me a notification email with a dump of the last x lines of log file? | There is a plugin called superlance. You install it with pip install superlance or download it at: http://pypi.python.org/pypi/superlance The next thing you do is you go into your supervisord.conf and add the following lines: [eventlistener:crashmail]
command=/usr/local/bin/crashmail -a -m [email protected]
events=PROCESS_STATE This should be followed by a "supervisorctl update". When a process "exits" you will now get a notification sent to [email protected]. If you only want to listen to some selected apps you can exchange the -a for a -p program1 or if it is a group group1:program2 One example would be [eventlistener:crashmail]
command=/usr/local/bin/crashmail -p program1 -p group1:program2 -m [email protected]
events=PROCESS_STATE Regarding the automatic restart:
you should make sure that autorestart is set to true (it is set to unexpected by default). This way the package will be restarted 3 times. If after that it still exits, it gives up, but you can change that with startretries . Example program: [program:cat]
command=/bin/cat
autorestart=true
startretries=10 | {
"source": [
"https://serverfault.com/questions/241721",
"https://serverfault.com",
"https://serverfault.com/users/1127/"
]
} |
241,959 | My guess is this defaults to Bash, but I would like to know for sure. Thanks. | The default shell in BusyBox is ash .
"source": [
"https://serverfault.com/questions/241959",
"https://serverfault.com",
"https://serverfault.com/users/6477/"
]
} |
242,176 | Is there a way to do a remote "ls" much like "scp" does a remote copy in a standard linux shell? | You could always do this: ssh user@host ls -l /some/directory That will SSH to the host, run ls, dump the output back to you and immediately disconnect. | {
"source": [
"https://serverfault.com/questions/242176",
"https://serverfault.com",
"https://serverfault.com/users/26257/"
]
} |
242,391 | I am looking into implementing SSH tunneling as a cheap VPN solution for outside users to access Intranet-only facing web applications. I currently am using Ubuntu Server 10.04.1 64 bit with OpenSSH installed. I am using Putty on Windows boxes to create a tunnel on a local port to my ssh server. start putty -D 9999 mysshserver.com -N I then use tell Firefox to use a SOCKS proxy on localhost:9999. The -N flag will disable the interactive shell from the client side. Is there a way to do this on the server side? Besides disabling root access, using rsa key authentication, and changing the default port; are there any other obvious security practices I should follow for this purpose? My goal is to simply be able to tunnel web traffic. | After four years this answer deserved an update. While originally I used authorized_keys myself and would probably use it still in some select cases, you can also use the central sshd_config server configuration file. sshd_config You can designate (for your particular use case) a group, such as proxy-only or Match individual users. In sshd_config . This is done after the global settings and revokes, repeats or refines some of the settings given in the global settings. Note: some of the syntax/directives used in sshd_config(5) are documented in the man page for ssh_config(5) . In particular make sure to read the PATTERNS section of ssh_config(5) . For a group this means your Match block would begin like this: Match group proxy-only You can Match the following criteria: User , Group , Host , LocalAddress , LocalPort and Address . To match several criteria simply comma-separate the criteria-pattern pairs ( group proxy-only above). Inside such a block, which is traditionally indented accordingly for brevity (but needn't to), you can then declare the settings you want to apply for the user group without having to edit every single authorized_keys file for members of that group. The no-pty setting from authorized_keys would be mirrored by a PermitTTY no setting and command="/sbin/nologin" would become ForceCommand /sbin/nologin . Additionally you can also set more settings to satisfy an admin's paranoia, such as chroot -ing the user into his home folder and would end up with something like this: Match group proxy-only
PermitTTY no
ForceCommand /sbin/nologin
ChrootDirectory %h
# Optionally enable these by un-commenting the needed line
# AllowTcpForwarding no
# GatewayPorts yes
# KbdInteractiveAuthentication no
# PasswordAuthentication no
# PubkeyAuthentication yes
# PermitRootLogin no (check yourself whether you need or want the commented out lines and uncomment as needed) The %h is a token that is substituted by the user's home directory ( %u would yield the user name and %% a percent sign). I've found ChrootDirectory particularly useful to confine my sftp-only users: Match group sftp-only
X11Forwarding no
AllowTcpForwarding no
ChrootDirectory %h
ForceCommand internal-sftp
PasswordAuthentication no Please mind that only certain directives can be used in a Match block. Consult the man page sshd_config(5) for details (search for Match ). authorized_keys NB: the part below this remark was my original answer. Meanwhile - but it also depends on the features of your exact sshd version - I would go for the method described above in most cases. Yes you can, as fine-grained as you can assign public keys. In addition to nologin as recommended by ajdecon, I would suggest setting the following in front of the key entry in authorized_keys : no-pty ssh-rsa ... The no pty tells the server-side that no pseudo-terminal should be allocated for that key. You can also force the execution of something like nologin for a particular key by prepending this: command="/sbin/nologin",no-pty ssh-rsa ... | {
"source": [
"https://serverfault.com/questions/242391",
"https://serverfault.com",
"https://serverfault.com/users/20859/"
]
} |
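For completeness, the OpenSSH client equivalent of the PuTTY command in the question (a dynamic SOCKS proxy with no interactive shell; the hostname comes from the question, the user name is a placeholder) pairs naturally with a restrictive Match block like the one above:
    ssh -N -D 9999 [email protected]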
242,402 | Intermittently, and not at the same time, computers connected to the network will redirect to Stop-online-piracy.com (you may not want to go there). The domain responds with the source: <div id="Message">
ERROR: This web site encountered an error (40100), please notify your Technical Support.</div> just an error, nothing hidden there. It is privately registered. Seems like a sham trying to impersonate a Cease & Desist. A Google search of the domain yields no references. The other strange part is that right before it happened the internet connection got really slow AND the wifi signal bar went down. The signal bar could just be Windows 7 folding connection speed into the task-bar icon (or it could indicate something really interesting...). The wifi is using WPA2 if that matters. Not many computers are plugged into Ethernet, so it's unknown whether or not it affects them. edit:
When I said it wasn't at the same time, I meant that not exactly the same, but all within a few minutes of each other. I has subsided now. Also is there anything I can do if/when it happens again? It seems like it comes in waves. | After four years this answer deserved an update. While originally I used authorized_keys myself and would probably use it still in some select cases, you can also use the central sshd_config server configuration file. sshd_config You can designate (for your particular use case) a group, such as proxy-only or Match individual users. In sshd_config . This is done after the global settings and revokes, repeats or refines some of the settings given in the global settings. Note: some of the syntax/directives used in sshd_config(5) are documented in the man page for ssh_config(5) . In particular make sure to read the PATTERNS section of ssh_config(5) . For a group this means your Match block would begin like this: Match group proxy-only You can Match the following criteria: User , Group , Host , LocalAddress , LocalPort and Address . To match several criteria simply comma-separate the criteria-pattern pairs ( group proxy-only above). Inside such a block, which is traditionally indented accordingly for brevity (but needn't to), you can then declare the settings you want to apply for the user group without having to edit every single authorized_keys file for members of that group. The no-pty setting from authorized_keys would be mirrored by a PermitTTY no setting and command="/sbin/nologin" would become ForceCommand /sbin/nologin . Additionally you can also set more settings to satisfy an admin's paranoia, such as chroot -ing the user into his home folder and would end up with something like this: Match group proxy-only
PermitTTY no
ForceCommand /sbin/nologin
ChrootDirectory %h
# Optionally enable these by un-commenting the needed line
# AllowTcpForwarding no
# GatewayPorts yes
# KbdInteractiveAuthentication no
# PasswordAuthentication no
# PubkeyAuthentication yes
# PermitRootLogin no (check yourself whether you need or want the commented out lines and uncomment as needed) The %h is a token that is substituted by the user's home directory ( %u would yield the user name and %% a percent sign). I've found ChrootDirectory particularly useful to confine my sftp-only users: Match group sftp-only
X11Forwarding no
AllowTcpForwarding no
ChrootDirectory %h
ForceCommand internal-sftp
PasswordAuthentication no Please mind that only certain directives can be used in a Match block. Consult the man page sshd_config(5) for details (search for Match ). authorized_keys NB: the part below this remark was my original answer. Meanwhile - but it also depends on the features of your exact sshd version - I would go for the method described above in most cases. Yes you can, as fine-grained as you can assign public keys. In addition to nologin as recommended by ajdecon, I would suggest setting the following in front of the key entry in authorized_keys : no-pty ssh-rsa ... The no pty tells the server-side that no pseudo-terminal should be allocated for that key. You can also force the execution of something like nologin for a particular key by prepending this: command="/sbin/nologin",no-pty ssh-rsa ... | {
"source": [
"https://serverfault.com/questions/242402",
"https://serverfault.com",
"https://serverfault.com/users/72842/"
]
} |
242,650 | I'm trying to set up a basic virtual host to proxy all requests to test.local to a WEBrick server I have running on 127.0.0.1:8080 while keeping all requests to localhost going to my static files in /var/www. I'm running Ubuntu 10.04. I have libapache2-mod-proxy-html installed and I have the module enabled with a2enmod proxy. I also have my virtual host enabled. However, whenever I go to test.local I always get a cryptic 500 server error and all my logs are telling me is: [Thu Mar 03 01:43:10 2011] [warn] proxy: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule. Here's my virtual host: <VirtualHost test.local:80>
LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
ServerAdmin webmaster@localhost
ServerName test.local
ProxyPreserveHost On
# prevents this folder from being proxied
ProxyPass /static !
DocumentRoot /var/www
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
<Proxy *>
Order allow,deny
Allow from all
</Proxy>
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
ErrorLog /var/log/apache2/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog /var/log/apache2/access.log combined and here's my settings for mod_proxy: <IfModule mod_proxy.c>
#turning ProxyRequests on and allowing proxying from all may allow
#spammers to use your proxy to send email.
ProxyRequests Off
<Proxy *>
# default settings
#AddDefaultCharset off
#Order deny,allow
#Deny from all
##Allow from .example.com
AddDefaultCharset off
Order allow,deny
Allow from all
</Proxy>
# Enable/disable the handling of HTTP/1.1 "Via:" headers.
# ("Full" adds the server version; "Block" removes all outgoing Via: headers)
# Set to one of: Off | On | Full | Block
ProxyVia On
</IfModule> Does anybody know what I'm doing wrong? Thanks | Looks like you're not loading the mod_proxy_http module (which is needed to proxy to HTTP servers). I don't have Ubuntu 10.04 in front of me, but IIRC it's something like: sudo a2enmod proxy_http | {
"source": [
"https://serverfault.com/questions/242650",
"https://serverfault.com",
"https://serverfault.com/users/72913/"
]
} |
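On a stock Ubuntu 10.04 box the usual sequence is roughly the following; the LoadModule line can then be dropped from the vhost, since a2enmod manages module loading:
    sudo a2enmod proxy proxy_http
    sudo /etc/init.d/apache2 restart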
242,825 | Is it possible to restore a duplicity backup from a certain time in the past? For example, if I'm making daily incremental backups, is there a way to restore a backup from three days ago? | The -t argument will tell duplicity from what time to restore. duplicity -t 3D --file-to-restore FILENAME scp://[email protected]/some_dir /home/me/restored_file Will restore FILENAME from 3 days ago.
If you don't do daily backups and specify a day for which no backup exists, the restore command will pick the backup closest to that date. | {
"source": [
"https://serverfault.com/questions/242825",
"https://serverfault.com",
"https://serverfault.com/users/70997/"
]
} |
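Besides intervals like 3D, the -t option also accepts absolute dates, so a whole-backup restore as of a given day looks something like this (same backend URL as above; the target directory is just an example):
    duplicity restore -t 2011-03-01 scp://[email protected]/some_dir /home/me/restored_dir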
243,023 | I'd like to tail a file but only output lines that have a certain string in them. Is this possible? | use grep. Its built just for that purpose. To find lines from a tail of /var/log/syslog that have "cron" in them, just run: tail -f /var/log/syslog | grep cron And since it accepts anything over stdin, you can use it on the output of any other command as well, by piping in the same way as above (using the | symbol). | {
"source": [
"https://serverfault.com/questions/243023",
"https://serverfault.com",
"https://serverfault.com/users/39877/"
]
} |
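Two grep flags that often help in this kind of tail -f pipeline: -i for case-insensitive matching and --line-buffered so matches appear immediately when the output is piped onward:
    tail -f /var/log/syslog | grep -i --line-buffered cron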
243,297 | Is it possible to install PHP5 without installing apache, in Ubuntu? If so, how? | $ sudo apt-get install php5-cli Should do it. | {
"source": [
"https://serverfault.com/questions/243297",
"https://serverfault.com",
"https://serverfault.com/users/50774/"
]
} |
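To confirm afterwards that the CLI works without Apache:
    php -v
    php -r 'echo "PHP CLI is working\n";'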
243,316 | It appears that ICANN is allowing the creation of top level domains . Instead of 'registering' a domain, you would essentially be signing up to be a registrar (you'd be giving out registrations on your TLD). How do they decide whether to accept/reject applications? (i.e. is notability a requirement precluding .michael for instance) Can an existing business register a TLD, or is it only a more general organization (i.e. "the museum society" instead of the "NYC Natural History Museum") How much does it cost? | There are various considerations for accepting an application, covered in the guidebook (PDF) . Part of the process will involve the application going through several panels, including: String Similarity Panel – assesses whether a proposed gTLD string is likely to result in user confusion due to similarity with any reserved name, any existing TLD, any requested IDN ccTLD, or any new gTLD string applied for in the current application round. This occurs during the String Similarity review in Initial Evaluation. The panel may also review IDN tables submitted by applicants as part of its work. DNS Stability Panel – reviews each applied-for string to determine whether the proposed string might adversely affect the security or stability of the DNS. This occurs during the DNS Stability String Review in Initial Evaluation. Geographical Names Panel – reviews each application to determine whether the applied-for gTLD represents a geographic name, as defined in the Applicant Guidebook. In the event that the string represents a geographic name and requires government support, the panel will review and verify that the documentation provided with the application is from the relevant governments or public authorities and is authentic. Technical Evaluation Panel – reviews the technical components of each application against the criteria in the Applicant Guidebook, along with proposed registry operations, in order to determine whether the applicant is technically and operationally capable of operating a gTLD registry as proposed in the application. This occurs during the Technical/Operational Reviews in Initial Evaluation, and may also occur in Extended Evaluation if necessary and if elected by the applicant. Financial Evaluation Panel – reviews each application against the relevant business, financial and organizational criteria contained in the Applicant Guidebook, to determine whether the applicant is financially capable of maintaining a gTLD registry as proposed in the application. This occurs during the Financial Review in Initial Evaluation, and may also occur in Extended Evaluation if necessary and if elected by the applicant. Registry Services Panel – reviews the proposed registry services in the application to determine if any registry services pose a risk of a meaningful adverse impact on security or stability. This occurs, if applicable, during the Extended Evaluation period. Anyone can register a TLD, though you'd better own the trademark, and others can file objections to your TLD application. Also, you'll have to be able to operate as a registry. The costs start at $185,000. From the FAQ to which you linked: 7.2 How much is the evaluation fee? The evaluation fee is estimated at US$185,000. Applicants will be required to pay a US$5,000 deposit fee per application request slot when registering. The US$5,000 will be credited against the evaluation fee. Other fees may apply depending on the specific application path. 
See the section 1.5 of the Applicant Guidebook for details about the methods of payment, additional fees and refund schedules. 7.3 Are there any additional costs I should be aware of in applying for a new gTLD? Yes. Applicants may be required to pay additional fees in certain cases where specialized process steps are applicable, and should expect to account for their own business start up costs. See Section 1.5.2 of the Applicant Guidebook. 7.5 Are there any ongoing fees once a gTLD is approved by ICANN? Yes. Once an application has successfully passed all the evaluation steps, the applicant is required to sign a New gTLD Agreement (also called Registry Agreement) with ICANN. Under the agreement, there are two fees: (a) a fixed fee of US$6,250 per calendar quarter; (b) and a transaction fee of US$0.25. The latter does not apply until and unless more than 50,000 domain names are registered in the gTLD. | {
"source": [
"https://serverfault.com/questions/243316",
"https://serverfault.com",
"https://serverfault.com/users/821/"
]
} |
243,318 | My servers have been crashing recently. I am running two nginx servers on one shared server using Ruby Version Manager to tackle gem dependencies. Everything was going fairly smooth after I setup an .rvmrc to toggle calls made from the application. But once every couple days it will crash. I think the reason may be that I'm pulling code, or restarting the other box. Not entirely sure. I went into the logs and found this, and found a really really strange link as the "referrer". No idea was a "referrer" is, and it definately has nothing to do with my site www.truejersey.com. I have no idea what these logs mean so just a simple explanation will suffice for an answer. Thanks so much! 2011/03/04 10:11:38 [info] 25504#0: *20008271 client closed prematurely connection, so upstream connection is closed too (104: Connection reset by peer) while sending request to upstream, client: 194.65.234.120, server: true.shadyfront.webfactional.com, request: "GET /pages/aboutjersey/photos/thumbs/nj-gazette.jpg HTTP/1.1", upstream: "http://127.0.0.1:11363/pages/aboutjersey/photos/thumbs/nj-gazette.jpg", host: "www.truejersey.com", referrer: "http://www.portalentretextos.com.br/colunas/recontando-estorias-do-dominio-publico/e-o-demonio-de-nova-jersey-o-decimo-terceiro-filho-de-deborah-leeds,236,4485.html"
2011/03/04 10:22:02 [info] 25503#0: *20018714 client 207.46.204.197 closed keepalive connection (104: Connection reset by peer)
2011/03/04 10:22:40 [info] 25503#0: *20019126 client 207.46.204.197 closed keepalive connection (104: Connection reset by peer)
2011/03/04 10:26:09 [info] 25503#0: *20022733 client 65.52.110.26 closed keepalive connection (104: Connection reset by peer)
2011/03/04 10:38:46 [error] 25503#0: *20034686 connect() failed (111: Connection refused) while connecting to upstream, client: 2.80.170.148, server: true.shadyfront.webfactional.com, request: "GET /pages/aboutjersey/photos/thumbs/nj-gazette.jpg HTTP/1.1", upstream: "http://127.0.0.1:11363/pages/aboutjersey/photos/thumbs/nj-gazette.jpg", host: "www.truejersey.com", referrer: "http://www.portalentretextos.com.br/colunas/recontando-estorias-do-dominio-publico/e-o-demonio-de-nova-jersey-o-decimo-terceiro-filho-de-deborah-leeds,236,4485.html"
2011/03/04 10:39:48 [error] 25503#0: *20035361 connect() failed (111: Connection refused) while connecting to upstream, client: 68.204.64.69, server: true.shadyfront.webfactional.com, request: "GET /pages/aboutjersey/photos/thumbs/tat6.jpg HTTP/1.1", upstream: "http://127.0.0.1:11363/pages/aboutjersey/photos/thumbs/tat6.jpg", host: "www.truejersey.com", referrer: "http://matthewraphaelmelvin.blogspot.com/2011/02/jersey-tattoos.html"
2011/03/04 10:39:48 [error] 25503#0: *20035371 connect() failed (111: Connection refused) while connecting to upstream, client: 68.204.64.69, server: true.shadyfront.webfactional.com, request: "GET /pages/aboutjersey/photos/thumbs/tat8.jpg HTTP/1.1", upstream: "http://127.0.0.1:11363/pages/aboutjersey/photos/thumbs/tat8.jpg", host: "www.truejersey.com", referrer: "http://matthewraphaelmelvin.blogspot.com/2011/02/jersey-tattoos.html"
2011/03/04 10:40:00 [error] 25503#0: *20035641 connect() failed (111: Connection refused) while connecting to upstream, client: 2.80.170.148, server: true.shadyfront.webfactional.com, request: "GET /pages/aboutjersey/photos/thumbs/nj-gazette.jpg HTTP/1.1", upstream: http://127.0.0.1:11363/pages/aboutjersey/photos/thumbs/nj-gazette.jpg", host: "www.truejersey.com", referrer: "http://www.portalentretextos.com.br/colunas/recontando-estorias-do-dominio-publico/e-o-demonio-de-nova-jersey-o-decimo-terceiro-filho-de-deborah-leeds,236,4485.html" | There are various considerations for accepting an application, covered in the guidebook (PDF) . Part of the process will involve the application going through several panels, including: String Similarity Panel – assesses whether a proposed gTLD string is likely to result in user confusion due to similarity with any reserved name, any existing TLD, any requested IDN ccTLD, or any new gTLD string applied for in the current application round. This occurs during the String Similarity review in Initial Evaluation. The panel may also review IDN tables submitted by applicants as part of its work. DNS Stability Panel – reviews each applied-for string to determine whether the proposed string might adversely affect the security or stability of the DNS. This occurs during the DNS Stability String Review in Initial Evaluation. Geographical Names Panel – reviews each application to determine whether the applied-for gTLD represents a geographic name, as defined in the Applicant Guidebook. In the event that the string represents a geographic name and requires government support, the panel will review and verify that the documentation provided with the application is from the relevant governments or public authorities and is authentic. Technical Evaluation Panel – reviews the technical components of each application against the criteria in the Applicant Guidebook, along with proposed registry operations, in order to determine whether the applicant is technically and operationally capable of operating a gTLD registry as proposed in the application. This occurs during the Technical/Operational Reviews in Initial Evaluation, and may also occur in Extended Evaluation if necessary and if elected by the applicant. Financial Evaluation Panel – reviews each application against the relevant business, financial and organizational criteria contained in the Applicant Guidebook, to determine whether the applicant is financially capable of maintaining a gTLD registry as proposed in the application. This occurs during the Financial Review in Initial Evaluation, and may also occur in Extended Evaluation if necessary and if elected by the applicant. Registry Services Panel – reviews the proposed registry services in the application to determine if any registry services pose a risk of a meaningful adverse impact on security or stability. This occurs, if applicable, during the Extended Evaluation period. Anyone can register a TLD, though you'd better own the trademark, and others can file objections to your TLD application. Also, you'll have to be able to operate as a registry. The costs start at $185,000. From the FAQ to which you linked: 7.2 How much is the evaluation fee? The evaluation fee is estimated at US$185,000. Applicants will be required to pay a US$5,000 deposit fee per application request slot when registering. The US$5,000 will be credited against the evaluation fee. Other fees may apply depending on the specific application path. 
See the section 1.5 of the Applicant Guidebook for details about the methods of payment, additional fees and refund schedules. 7.3 Are there any additional costs I should be aware of in applying for a new gTLD? Yes. Applicants may be required to pay additional fees in certain cases where specialized process steps are applicable, and should expect to account for their own business start up costs. See Section 1.5.2 of the Applicant Guidebook. 7.5 Are there any ongoing fees once a gTLD is approved by ICANN? Yes. Once an application has successfully passed all the evaluation steps, the applicant is required to sign a New gTLD Agreement (also called Registry Agreement) with ICANN. Under the agreement, there are two fees: (a) a fixed fee of US$6,250 per calendar quarter; (b) and a transaction fee of US$0.25. The latter does not apply until and unless more than 50,000 domain names are registered in the gTLD. | {
"source": [
"https://serverfault.com/questions/243318",
"https://serverfault.com",
"https://serverfault.com/users/50751/"
]
} |
243,343 | I have Ubuntu 10.10 Server installed on a single-board machine in a semi-embedded environment; no keyboard or screen, just SSH access to it. So it's really frustrating when it occasionally boots up and gets stuck on the GRUB menu, waiting for a keystroke to select the first option. How do I configure GRUB to under no circumstances wait for a keystroke? Update #1: There is no menu.lst, since this is GRUB 2. But I do have an /etc/default/grub which is like so: GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=2
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="" Update #2: I figured it out. On boots which follow unsuccessful boots, GRUB disables its own timeout. Since showing the menu makes a boot unsuccessful, this is an inescapable loop. This behaviour may be disabled by editing the /etc/grub.d/00_header file, and changing the make_timeout function: make_timeout ()
{
echo "set timeout=0"
} Now exit and re-run the grub configuration updater script: sudo update-grub2 It makes no sense to me that this behaviour would be the default for Ubuntu Server, a product intended for machines accessed by console. | For Ubuntu 12.04 LTS there is a specific option that can be set in /etc/default/grub . For example, if you want to have a 2 seconds timeout (thus avoiding hangs for unattended reboots) just add the following line in /etc/default/grub : GRUB_RECORDFAIL_TIMEOUT=2 Remember to run update-grub after that... | {
"source": [
"https://serverfault.com/questions/243343",
"https://serverfault.com",
"https://serverfault.com/users/73134/"
]
} |
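Putting the question and answer together, on a release that supports the option (12.04 LTS and later, per the answer above) /etc/default/grub on such a headless box might contain something like this:
    # /etc/default/grub
    GRUB_DEFAULT=0
    GRUB_TIMEOUT=2
    GRUB_RECORDFAIL_TIMEOUT=2
    # then regenerate grub.cfg:
    sudo update-grub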
243,816 | I want to set up an FTP server that allows only certain users, so with vsftpd, I add in vsftpd.conf : local_enable=YES
user_config_dir=/etc/vsftpd_user_conf In /etc/vsftpd_user_conf for the unix user foo I set in a file foo: local_root=/home/foo/ftpdir
anon_world_readable_only=NO
write_enable=YES
anon_upload_enable=YES
anon_mkdir_write_enable=YES
anon_other_write_enable=YES
virtual_use_local_privs=YES
local_umask=022 ... and I launch vsftpd. I can log in to FTP with user foo. However, I can also log in with other unix users! How can I disable the other unix users? | In vsftpd.conf add: userlist_enable=YES userlist_file=/etc/vsftpd.userlist userlist_deny=NO Edit the file to contain one username per row. | {
"source": [
"https://serverfault.com/questions/243816",
"https://serverfault.com",
"https://serverfault.com/users/73284/"
]
} |
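A minimal way to create the user list referenced above and apply it (the service command may differ slightly between distributions):
    echo foo > /etc/vsftpd.userlist
    service vsftpd restart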
244,263 | I have Postfix setup on my server so that I can send outgoing mail using the command-line: mail -s "Subject" [email protected] Is this using Sendmail or Postfix ? Is " Sendmail " just a software category or a distinct program ? If something is " Sendmail-ready " does that mean it will work with Postfix ? Everything I've read online seems to use these two terms interchangeably. | Sendmail is a different (and much older) program from Postfix. However for every mail server to succeed in the Unix environment, a sendmail binary (with some of the expected command line options) must be provided. EDIT: See for example the manual page for the sendmail program provided by Postfix | {
"source": [
"https://serverfault.com/questions/244263",
"https://serverfault.com",
"https://serverfault.com/users/32759/"
]
} |
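On a Debian/Ubuntu box you can check which package actually provides the sendmail binary; with Postfix installed this typically reports the postfix package:
    dpkg -S /usr/sbin/sendmail
    postconf mail_version    # only present when Postfix is installed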
244,294 | I learned to use gnu-screen and have been using it for the past several weeks. I got a grip of the basics now, and would be very helpful if I get these queries clarified: How do I rename / reorder / move windows in Linux screen utility? For eg., after a few days of use, I come to a state where the window-numbers are 2, 3, 6, 8. Now I want a new-window to be created at #9, or #7. Is this possible? If yes, how? Also, is it possible to 'move' the window #6 to #4 or #7? If yes, how? Scrolling shortcut: Scroll takes Ctrl-A + Esc, and then Ctrl-u / Ctrl-d. Is there a way to map PgUp / PgDn to do these directly? Or, is there a way to map some key (like F5) to take me to copy mode, and then PgUp / PgDn for scrolling? I have enabled "caption always", so the current window title is displayed always at the bottom. Is there a way to display the current window's log-file-name and logging status (on/off) in the same caption bar? | You can renumber the current window with ctrl+a :number x where x is a numeric argument. You can rename the current window with ctrl+a A | {
"source": [
"https://serverfault.com/questions/244294",
"https://serverfault.com",
"https://serverfault.com/users/73442/"
]
} |
244,614 | I just checked my server's /var/log/auth.log and found that I'm getting over 500 failed password/break-in attempt notifications per day! My site is small, and its URL is obscure. Is this normal? Should I be taking any measures? | In today's internet this is quite normal sadly. There are hordes of botnets trying to login to each server they find in whole IP networks. Typically, they use simple dictionary attacks on well-known accounts (like root or certain applications accounts). The attack targets are not found via Google or DNS entries, but the attackers just try every IP address in a certain subnet (e.g. of known root-server hosting companies). So it doesn't matter that your URL (hence the DNS entry) is rather obscure. That's why it is so important to: disallow root-login in SSH ( howto ) use strong passwords everywhere (also in your web applications) for SSH, use public-key authentication if possible and disable password-auth completely ( howto ) Additionally, you can install fail2ban which will scan the authlog and if it finds a certain amount of failed login attempts from an IP, it will proceed to add that IP to /etc/hosts.deny or iptables/netfilter in order to lock out the attacker for a few minutes. In addition to the SSH attacks, it is also becoming common to scan your webserver for vulnerable web-applications (some blogging apps, CMSs, phpmyadmin, etc.). So make sure to keep those up-to-date and securely configured too! | {
"source": [
"https://serverfault.com/questions/244614",
"https://serverfault.com",
"https://serverfault.com/users/55155/"
]
} |
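A compact version of the hardening steps listed above for a Debian/Ubuntu-style system: edit /etc/ssh/sshd_config, reload sshd, then add fail2ban.
    # in /etc/ssh/sshd_config
    PermitRootLogin no
    PasswordAuthentication no
    # then
    sudo /etc/init.d/ssh reload
    sudo apt-get install fail2ban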
244,665 | I have a database A. It has some data in it. I created a backup of A as an A.bak file. Then I created a new empty database B, and tried to restore B from A.bak . But SQL Server gives me the following error: The file 'C:\SQL Directory\DATA\A.mdf' cannot be overwritten. It is being used by database 'A'. But if I delete A from SQL Server, the restore is OK. I don't understand why SQL Server needs to write to the original database file while restoring from a separate backup file? Thanks~ | If you restore a database, SQL Server will, by default, attempt to restore all the data and log files to their original locations. Since those original locations are still in use by the original database ("A"), the restore fails. You need to use the WITH MOVE clause to specify new locations for all the files in the database. RESTORE DATABASE B FROM DISK = 'A.bak'
WITH MOVE 'DataFileLogicalName' TO 'C:\SQL Directory\DATA\B.mdf',
MOVE 'LogFileLogicalName' TO 'C:\SQL Directory\DATA\B.ldf',
REPLACE --Needed if database B already exists Something like that anyway. Use RESTORE FILELISTONLY FROM DISK... to see the logical filenames in the backup if necessary. | {
"source": [
"https://serverfault.com/questions/244665",
"https://serverfault.com",
"https://serverfault.com/users/36036/"
]
} |
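To find the logical file names used in the WITH MOVE clause above, point FILELISTONLY at the same backup:
    RESTORE FILELISTONLY FROM DISK = 'A.bak';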
244,767 | I want iptables to filter only one interface, eth0, which is facing WAN. How can this be done? And I want to keep ftp and ssh ports open on eth0. | So for all interfaces but one you want to accept all traffic, and on eth0 you want to drop all incoming traffic except ftp and ssh. First, we could set a policy of accepting all traffic by default. iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT Then, we could reset your firewall rules. iptables -F Now we could say that we want to allow incoming traffic on eth0 that is a part of a connection we already allowed. iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT Also that we want to allow incoming ssh connections on eth0. iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT But that anything else incoming on eth0 should be dropped. iptables -A INPUT -i eth0 -j DROP For slightly more depth see this CentOS wiki entry . FTP is a trickier than ssh since it can use a random port, so see this previous question . | {
"source": [
"https://serverfault.com/questions/244767",
"https://serverfault.com",
"https://serverfault.com/users/59291/"
]
} |
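The FTP part of the question needs a little extra because of FTP's separate data connection; one common approach (the helper module is ip_conntrack_ftp on older kernels, nf_conntrack_ftp on newer ones) is:
    modprobe nf_conntrack_ftp
    iptables -A INPUT -i eth0 -p tcp --dport 21 -j ACCEPT
The ESTABLISHED,RELATED rule shown in the answer then lets the related data connections through.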
244,917 | I need to search our mail logs for a specific e-mail address. We keep a current file named maillog as well as a week's worth of .bz2 files in the same folder. Currently, I'm running the following commands to search for the file: grep [email protected] maillog
bzgrep [email protected] *.bz2 Is there a way combine the grep and bzgrep commands into a single output? That way, I could pipe the combined results to a single e-mail or a single file. | Another way is { grep ...; bzgrep ...;} >file && has the difficulty that the bzgrep wouldn't be run if the grep failed. Note the mandatory space after the opening curly brace and semicolon after the last command. Alternatively, you can use the subshell syntax (parentheses instead of curly braces), which isn't as picky: (grep ...; bzgrep ...) >file | {
"source": [
"https://serverfault.com/questions/244917",
"https://serverfault.com",
"https://serverfault.com/users/55007/"
]
} |
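Applied to the exact case in the question, that looks something like:
    { grep [email protected] maillog; bzgrep [email protected] *.bz2; } > /tmp/results.txt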
245,136 | Is there some easy way to find out the MAC address of all machines on my network, rather than SSHing into each one and running ifconfig | grep HWaddr ? If there are 300 machines on the network I really need an easier solution. | You can use nmap to run a ping scan. nmap -sP 192.168.254.*
Starting Nmap 5.00 ( http://nmap.org ) at 2011-03-09 11:32 GMT
Host xyzzy.lan (192.168.254.189) is up (0.00022s latency).
MAC Address: 00:0C:29:5B:A5:E0 (VMware)
Host plugh.lan (192.168.254.196) is up (0.00014s latency).
MAC Address: 00:0C:29:2E:78:F1 (VMware)
Host foo.lan (192.168.254.200) is up.
Host bar.lan (192.168.254.207) is up (0.00013s latency).
MAC Address: 00:0C:29:2D:94:A0 (VMware)
Nmap done: 256 IP addresses (4 hosts up) scanned in 3.41 seconds Edit: A sed script to filter the output to IP -> MAC - put this in a file. /^Host.*latency.*/{
$!N
/MAC Address/{
s/.*(\(.*\)) .*MAC Address: \(.*\) .*/\1 -> \2/
}
}
/[Nn]map/d
s/^Host .*is up/& but MAC Address cannot be found/ and use it like this nmap -sP 192.168.254.0/20 | sed -f sedscript
192.168.254.189 -> 00:0C:29:5B:A5:E0
192.168.254.196 -> 00:0C:29:2E:78:F1
Host foo.lan (192.168.254.200) is up but MAC Address cannot be found.
192.168.254.207 -> 00:0C:29:2D:94:A0 | {
"source": [
"https://serverfault.com/questions/245136",
"https://serverfault.com",
"https://serverfault.com/users/68579/"
]
} |
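Note that nmap only reports MAC addresses when run as root on the local subnet. Two other quick options on the same LAN segment:
    arp-scan --localnet    # if the arp-scan package is installed
    arp -n                 # whatever is already in the local ARP cache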
245,139 | I want to write a batch file and we want to use if-then-else and for loop statements, but I don't know it and its option. Please can anyone help me? | You can use nmap to run a ping scan. nmap -sP 192.168.254.*
Starting Nmap 5.00 ( http://nmap.org ) at 2011-03-09 11:32 GMT
Host xyzzy.lan (192.168.254.189) is up (0.00022s latency).
MAC Address: 00:0C:29:5B:A5:E0 (VMware)
Host plugh.lan (192.168.254.196) is up (0.00014s latency).
MAC Address: 00:0C:29:2E:78:F1 (VMware)
Host foo.lan (192.168.254.200) is up.
Host bar.lan (192.168.254.207) is up (0.00013s latency).
MAC Address: 00:0C:29:2D:94:A0 (VMware)
Nmap done: 256 IP addresses (4 hosts up) scanned in 3.41 seconds Edit: A sed script to filter the output to IP -> MAC - put this in a file. /^Host.*latency.*/{
$!N
/MAC Address/{
s/.*(\(.*\)) .*MAC Address: \(.*\) .*/\1 -> \2/
}
}
/[Nn]map/d
s/^Host .*is up/& but MAC Address cannot be found/ and use it like this nmap -sP 192.168.254.0/20 | sed -f sedscript
192.168.254.189 -> 00:0C:29:5B:A5:E0
192.168.254.196 -> 00:0C:29:2E:78:F1
Host foo.lan (192.168.254.200) is up but MAC Address cannot be found.
192.168.254.207 -> 00:0C:29:2D:94:A0 | {
"source": [
"https://serverfault.com/questions/245139",
"https://serverfault.com",
"https://serverfault.com/users/52749/"
]
} |
245,151 | We want to be able to create FTP users and assign rights to specific folders and revoke them again remotely by means of an automated process (running a script, calling a webservice or similar). What solutions can you suggest? OS is undecided. Windows and Unix are both fine. | You can use nmap to run a ping scan. nmap -sP 192.168.254.*
Starting Nmap 5.00 ( http://nmap.org ) at 2011-03-09 11:32 GMT
Host xyzzy.lan (192.168.254.189) is up (0.00022s latency).
MAC Address: 00:0C:29:5B:A5:E0 (VMware)
Host plugh.lan (192.168.254.196) is up (0.00014s latency).
MAC Address: 00:0C:29:2E:78:F1 (VMware)
Host foo.lan (192.168.254.200) is up.
Host bar.lan (192.168.254.207) is up (0.00013s latency).
MAC Address: 00:0C:29:2D:94:A0 (VMware)
Nmap done: 256 IP addresses (4 hosts up) scanned in 3.41 seconds Edit: A sed script to filter the output to IP -> MAC - put this in a file. /^Host.*latency.*/{
$!N
/MAC Address/{
s/.*(\(.*\)) .*MAC Address: \(.*\) .*/\1 -> \2/
}
}
/[Nn]map/d
s/^Host .*is up/& but MAC Address cannot be found/ and use it like this nmap -sP 192.168.254.0/20 | sed -f sedscript
192.168.254.189 -> 00:0C:29:5B:A5:E0
192.168.254.196 -> 00:0C:29:2E:78:F1
Host foo.lan (192.168.254.200) is up but MAC Address cannot be found.
192.168.254.207 -> 00:0C:29:2D:94:A0 | {
"source": [
"https://serverfault.com/questions/245151",
"https://serverfault.com",
"https://serverfault.com/users/61992/"
]
} |
245,393 | I'm trying to create a script to execute an exe on shutdown in order to install sp1. my script goes something like (not actual bat script). If installed GOTO END
Install.exe
END: My problem is that when it runs, it starts the installer and then the script finishes, because the installer is a different process; the shutdown then kills the install process before it completes (at least, that's what I think is happening). Is there any way to tell the script to wait until the process it started completes, and only then shut down? | Try running START /WAIT Install.exe | {
"source": [
"https://serverfault.com/questions/245393",
"https://serverfault.com",
"https://serverfault.com/users/47325/"
]
} |
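A rough sketch of the whole batch file described in the question; the flag-file test is purely a placeholder for whatever "is SP1 already installed" check applies:
    if exist "%windir%\sp1_installed.flag" goto END
    start /wait Install.exe
    :END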
245,396 | Lately on a shared host, the filesystem containing my home folder was mounted read-only for 45 minutes to 1 hour. The technical support did not know about the outage, evaded direct questions. After a bit more than three days I obtained the answer: There are many explanations, but in
the most times this caused by server
issue on filesystem level. I am somehow not pleased by this in-depth analysis, as my normal work environment runs on a RAID1 ( mdadm ) and I never encountered such issues. The shared host system is supposed to be a RAID1, and I became aware of the issue as a cronjob running uptime every 15 minutes sent me email about it. I would really like to know, what you, the more experienced, think of this. | Try running START /WAIT Install.exe | {
"source": [
"https://serverfault.com/questions/245396",
"https://serverfault.com",
"https://serverfault.com/users/56129/"
]
} |
245,401 | I am seeing a very severe clock drift on my Xen HVM VPS, rented from a hosting provider, so I don't have access to the dom0 system. I continuously run ntpd, but the clock drifts by as much as 30 seconds in 5 minutes and NTP cannot keep up. Has anyone experienced this? Here are some details: $ dmesg | grep clock
[ 0.160000] Measured 347 cycles TSC warp between CPUs, turning off TSC clock.
[ 0.396000] * this clock source is slow. Consider trying other clock sources
[ 0.550448] Switching to clocksource acpi_pm
[ 0.653135] rtc_cmos 00:05: setting system clock to 2011-03-09
02:45:40 UTC (1299638740)
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
acpi_pm
$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
acpi_pm | Try running START /WAIT Install.exe | {
"source": [
"https://serverfault.com/questions/245401",
"https://serverfault.com",
"https://serverfault.com/users/73711/"
]
} |
245,774 | I have a directory that contains symbolic links to other directories located on different media on my system: /opt/lun1/2011
/opt/lun1/2010
/opt/lun2/2009
/opt/lun2/2008
/opt/lun3/2007 But the symbolic links show up as: /files/2011
/files/2010
/files/2009
/files/2008
/files/2007 How can I perform an rsync that follows the symbolic links? e.g.: rsync -XXX /files/ user@server:/files/ | The -L flag to rsync will sync the contents of files or directories linked to, rather than the symbolic link. | {
"source": [
"https://serverfault.com/questions/245774",
"https://serverfault.com",
"https://serverfault.com/users/39040/"
]
} |
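Applied to the layout in the question, that would be along the lines of:
    rsync -avL /files/ user@server:/files/
(-a for the usual archive options, -v for verbose output, -L to copy the linked-to directories rather than the symlinks themselves.)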
245,803 | Are there any tools out there that allow you to check a box (or something like that) and say that you want to subject a given email for approval. Then, once the email is approved, it goes to the original intended recipient? The idea would be that certain emails (but not all) require approval before being sent. Absent a solution, people have to get emails approved and then forward email out to the original intend recipient. | The -L flag to rsync will sync the contents of files or directories linked to, rather than the symbolic link. | {
"source": [
"https://serverfault.com/questions/245803",
"https://serverfault.com",
"https://serverfault.com/users/64560/"
]
} |
246,003 | I am running CentOS 5.5 with the stock Apache httpd-2.2.3. I have enabled mod_status at the Location /server-status. I would like to allow access to this single Location in the following way: Deny from all Allow from the subnet 192.168.16.0/24 Deny from a the IP 192.168.16.100, which is within the 192.168.16.0/24 subnet. 1 & 2 are easy. However, since I "Allow from 192.168.16.0/24", is it possible to Deny from 192.168.16.100? I tried to add a Deny statement for 192.168.16.100 but it doesn't work. Here is the relevant config: <Location /server-status>
SetHandler server-status
Order Allow,Deny
Deny from all
Deny from 192.168.16.100 # This does not deny access from 192.168.16.100
Allow from 192.168.16.0/24
</Location> Or: <Location /server-status>
SetHandler server-status
Order Allow,Deny
Deny from all
Deny from 192.168.16.100 # This does not deny access from 192.168.16.100
Allow from 192.168.16.0/24
</Location> However, this doesn't prevent access to this particular page, as demonstrated in the Access logs: www.example.org 192.168.16.100 - - [11/Mar/2011:16:01:14 -0800] "GET /server-status HTTP/1.1" 200 9966 "-" " According to the manual for mod_authz_host : Allow,Deny First, all Allow directives are evaluated; at least one must match, or the request is rejected. Next, all Deny directives are evaluated. If any matches, the request is rejected The IP address matches the Deny directive, so shouldn't the request be rejected? According to the table on the mod_authz_host page, this IP address should "Match both Allow & Deny", and thus the "Final match controls: Denied" rule should apply. Match Allow,Deny result Deny,Allow result
Match Allow only Request allowed Request allowed
Match Deny only Request denied Request denied
No match Default to second directive: Denied Default to second directive: Allowed
Match both Allow & Deny Final match controls: Denied Final match controls: Allowed | I haven't tested, but I think you are almost there. <Location /server-status>
SetHandler server-status
Order Allow,Deny
Deny from 192.168.16.100
Allow from 192.168.16.0/24
</Location> Deny from all is not needed. In fact it will screw up because everything will match all , and thus denied (and I think Apache is trying to be smart and do something stupid). I have always found Apache's Order , Allow and Deny directives confusing, so always visualize things in a table (taken from the docs ): Match | Allow,Deny result | Deny,Allow result
-------------------------------------------------------
Allow only | Allowed | Allowed
Deny only | Denied | Denied
No match | Default: Denied | Default: Allowed
Match both | Final match: Denied | Final match: Allowed With the above settings: Requests from 192.168.16.100 get "Match both" and thus denied. Requests from 192.168.16.12 get "Allow only" and thus allowed. Requests from 123.123.123.123 get "No match" and thus denied. | {
"source": [
"https://serverfault.com/questions/246003",
"https://serverfault.com",
"https://serverfault.com/users/36178/"
]
} |
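For reference, if this setup ever moves to Apache 2.4 (where Order/Allow/Deny are replaced by mod_authz_core), the equivalent would look roughly like:
    <Location /server-status>
        SetHandler server-status
        <RequireAll>
            Require ip 192.168.16.0/24
            Require not ip 192.168.16.100
        </RequireAll>
    </Location>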
246,347 | I understand that socat is described as a "more advanced" version of netcat, but what is the actual difference?
Would it be correct to say that everything you can do in netcat you can also do in socat? What about the opposite (everything you can do with socat can also be done in netcat)? | socat can do serial line stuff, netcat cannot. socat also offers fairly advanced functionality, like having a single listener serve multiple clients on one port (fork), or reusing connections. | {
"source": [
"https://serverfault.com/questions/246347",
"https://serverfault.com",
"https://serverfault.com/users/37767/"
]
} |
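Two small illustrations of those points (device path and ports are only examples):
    socat - /dev/ttyS0,raw,echo=0,b115200                    # talk to a serial line
    socat TCP-LISTEN:8080,fork,reuseaddr TCP:localhost:80    # one listener, many clients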
246,445 | I want apache to do this > mydomain.com:80 --- opens var/www1
mydomain.com:81 --- opens var/www2
mydomain.com:82 --- opens var/www3 Problem is I don't know if those ports are open on Linux (how do I check?) And if they're not, how do I open them in the firewall and get apache to listen? I tried doing this > iptables -A RH-Firewall-1-INPUT -m NEW -m tcp -p tcp --dport 81 -j ACCEPT
iptables v1.3.5: Couldn't load match `NEW':/lib64/iptables/libipt_NEW.so: cannot open shared object file: No such file or directory and I checked the ports... looks like httpd is listening... but I don't know why I can't hit my URL > netstat -tulpn | less
tcp 0 0 :::80 :::* LISTEN 6840/httpd
tcp 0 0 :::81 :::* LISTEN 6840/httpd
tcp 0 0 :::82 :::* LISTEN 6840/httpd | To expand on Jeff's answer you'll need something like this in your apache configuration Listen 80
Listen 81
Listen 82
# Listen for virtual host requests on all IP addresses
NameVirtualHost *:80
<VirtualHost *:80>
DocumentRoot /var/www1
ServerName www.example1.com
</VirtualHost>
NameVirtualHost *:81
<VirtualHost *:81>
DocumentRoot /var/www2
ServerName www.example2.org
</VirtualHost>
NameVirtualHost *:82
<VirtualHost *:82>
DocumentRoot /var/www3
ServerName www.example3.org
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/246445",
"https://serverfault.com",
"https://serverfault.com/users/21343/"
]
} |
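On the firewall side of that question, the failing rule just needs the state match spelled out; on CentOS the usual pattern is to insert into the RH-Firewall-1-INPUT chain and then save:
    iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 81 -j ACCEPT
    iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 82 -j ACCEPT
    service iptables save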
246,448 | I have a Windows Server 2008 server with SQL Server 2008 R2 Evaluation edition. I am trying to upgrade to SQL Server 2008 R2 Workgroup edition and having a problem. When installing I get the error: Rule "SQL Server 2008 edition upgrade" failed. The Selected SQL Server instance does not meet upgrade matrix requirements.' According to the Microsoft Upgrade Matrix I needed to uninstall Analysis Services (which I did), but I still get the same problem. Any help would be appreciated! | To expand on Jeff's answer you'll need something like this in your apache configuration Listen 80
Listen 81
Listen 82
# Listen for virtual host requests on all IP addresses
NameVirtualHost *:80
<VirtualHost *:80>
DocumentRoot /var/www1
ServerName www.example1.com
</VirtualHost>
NameVirtualHost *:81
<VirtualHost *:81>
DocumentRoot /var/www2
ServerName www.example2.org
</VirtualHost>
NameVirtualHost *:82
<VirtualHost *:82>
DocumentRoot /var/www3
ServerName www.example3.org
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/246448",
"https://serverfault.com",
"https://serverfault.com/users/25144/"
]
} |
246,468 | It's been a few days now that I have gotten problems and problems with my network connection, though my other computer works fine! The problematic computer is operating with (PC2) Windows 7 Professional The running-fine computer is running (PC1) Windows Vista Business The problem occured when I turned off my Cisco Catalyst 3500 series switch so that I can move it to another location (5 feet away). I then recoeecnted the Ethernet cables and my cable modem to the switch. The PC1 then reinitialized the network, found the settings and reconnected properly! Internet is then running just fine as if I had never touched it in my life! Great! I can't tell the same with PC2, sadly. It began to identify the network, then set it to Unidentified network over and over again though I have tried solving with the troubleshooter. Here's what I did so far: I read this linked in answer article to know what's NLA. What settings does Windows use to determine network location? I have also read this: LAN - Unidentified Network (No Network Access) on Windows 7 I have also tried: Setting the DHCP Broadcast Flag to 0 so no more broadcast; Running a Powershell script that was suppoed to set all Unidentified network to private; Went and change manually Unidentified network setting from "unknown" to "private"; Forced ipconfig /release , typed in a fake IP in my Local Connection, saved the seetings, then clicked the "Obtain an IP automatically option, and ipconfig /renew ; Setting proper DNS IP though PC1 is set to obtain them automatically (and works, not PC2); Disabled the adapter, uninstalled and reinstalled it, and enabled it; To stop the NLA and related services with no luck (didn't solve my problem); Rolledback to previous working settings/configuration with a restore point, and it netiehr solved the problem. Many of the above-mentioned tries come from ServerFault.com and I simply can't find the questions anymore, so please believe me, I DID search! I've been working for about 6 hours on the problem, that is, probably becuase I am no network nor system administrator. Any one who has something relevant is welcome to answer and help me. I'm desperate. =( I just want it to work! While trying the ipconfig thing, it said that the DHCP request timed out. I tried to ping some other IP addresses as well, and it seems PC2 is totally lost. I can't find a solution, I have done everything I could, I knew, and learned for the last two days. Thanks for your help! | To expand on Jeff's answer you'll need something like this in your apache configuration Listen 80
Listen 81
Listen 82
# Listen for virtual host requests on all IP addresses
NameVirtualHost *:80
<VirtualHost *:80>
DocumentRoot /var/www1
ServerName www.example1.com
</VirtualHost>
NameVirtualHost *:81
<VirtualHost *:81>
DocumentRoot /var/www2
ServerName www.example2.org
</VirtualHost>
NameVirtualHost *:82
<VirtualHost *:82>
DocumentRoot /var/www3
ServerName www.example3.org
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/246468",
"https://serverfault.com",
"https://serverfault.com/users/34168/"
]
} |
246,483 | Im a noob at iptables, and have recently setup a new server ans used webmin to tell iptables to allow incomming port 80, 443, and 22. However with iptables enabled the server can no longer ping external servers or do dns lookups. What do I need to change in iptables to allow such things? Thanks! | To expand on Jeff's answer you'll need something like this in your apache configuration Listen 80
Listen 81
Listen 82
# Listen for virtual host requests on all IP addresses
NameVirtualHost *:80
<VirtualHost *:80>
DocumentRoot /var/www1
ServerName www.example1.com
</VirtualHost>
NameVirtualHost *:81
<VirtualHost *:81>
DocumentRoot /var/www2
ServerName www.example2.org
</VirtualHost>
NameVirtualHost *:82
<VirtualHost *:82>
DocumentRoot /var/www3
ServerName www.example3.org
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/246483",
"https://serverfault.com",
"https://serverfault.com/users/13571/"
]
} |
246,491 | I run a shared hosting provider and we're looking to move to a High Availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many many customers, all of whom have configured their Wordpress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remove server. I would hate to have to go through manually and change the connection statement in all of our clients' sites. What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names and usernames and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to updated SQLRelay's config file every time. Does anyone have an idea on how to do this? Alternatively, I'd accept idea on how to handle this at the PHP level. (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost) Thanks,
Kevin | To expand on Jeff's answer you'll need something like this in your apache configuration Listen 80
Listen 81
Listen 82
# Listen for virtual host requests on all IP addresses
NameVirtualHost *:80
<VirtualHost *:80>
DocumentRoot /var/www1
ServerName www.example1.com
</VirtualHost>
NameVirtualHost *:81
<VirtualHost *:81>
DocumentRoot /var/www2
ServerName www.example2.org
</VirtualHost>
NameVirtualHost *:82
<VirtualHost *:82>
DocumentRoot /var/www3
ServerName www.example3.org
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/246491",
"https://serverfault.com",
"https://serverfault.com/users/74179/"
]
} |
246,764 | Ubuntu 10.04.2 nginx 0.7.65 I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump whole HTTP request data for such queries. (I.e. dump all request headers and body somewhere I can read them.) Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by the means of nginx? Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it on low level (say, with tcpdump ) and filtering it out later. I think it would be much easier to filter good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with bogus traffic only. And I do not want to channel bogus traffic to another box just to be able to capture it there with tcpdump . Update 2: To give a bit more details, bogus request have parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed not to have this parameter ever. If I can filter by this in tcpdump or ngrep somehow — no problem, I'll use these. | Adjust the number of pre/post lines (-B and -A args) as needed: tcpdump -n -S -s 0 -A 'tcp dst port 80' | grep -B3 -A10 "GET /url" This lets you get the HTTP requests you want, on the box, without generating a huge PCAP file that you have to offload somewhere else. Keep in mind, that the BPF filter is never exact, if there are a large number of packets flowing through any box, BPF can and will drop packets. | {
"source": [
"https://serverfault.com/questions/246764",
"https://serverfault.com",
"https://serverfault.com/users/1355/"
]
} |
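Since the bogus requests can be spotted by a query-string parameter, ngrep (mentioned in the update) gives an even more direct filter than raw tcpdump for this case:
    ngrep -q -W byline 'foo=' 'tcp dst port 80'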
246,840 | I have plenty of empty directories and I wonder if there's a way to display actual directory sizes (after some sort of scan maybe) in MC | Press Ctrl + Space on the current directory and this will show its size. Also, you can use visual select to apply this command to multiple items. If you press Ctrl + Space on the .. directory, mc will calculate the size of all subdirectories in the current dir. To see the total size of all directories, you can select them all using the Insert key and see the total size at the bottom of the panel | {
"source": [
"https://serverfault.com/questions/246840",
"https://serverfault.com",
"https://serverfault.com/users/74295/"
]
} |
247,043 | If I put a script in /etc/cron.daily on CentOS, what user will it run as? Do they all run as root or as the owner? | They all run as root . If you need otherwise, use su in the script or add a crontab entry to the user's crontab ( man crontab ) or the system-wide crontab ( /etc/crontab on CentOS). | {
"source": [
"https://serverfault.com/questions/247043",
"https://serverfault.com",
"https://serverfault.com/users/54114/"
]
} |
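If part of a cron.daily script should not run as root, one simple pattern inside the script is (the user name and path are placeholders):
    su -s /bin/sh -c '/usr/local/bin/nightly-task.sh' someuser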
247,049 | I've got a problem with my debian server.
Probably there is some vulnerable script at my web-serser, which is running from www-data user.
I also have samba with winbind installed, and samba is joined to windows domain. So, probably this vulnerable script allows hacker to bruteforce out domain controller through winbind unix domain socket. Actually I have lots of such lines at netstat -a output: unix 3 [ ] STREAM CONNECTED 509027 /var/run/samba/winbindd_privileged/pipe And our DC logs contain lots of recorded authentication attems from root or guest accounts. How can I restrict my apaches access to winbind?
I had an idea to use some kind of firewall for IPC sockets. Is it possible? | They all run as root . If you need otherwise, use su in the script or add a crontab entry to the user's crontab ( man crontab ) or the system-wide crontab (whose location I couldn't tell you on CentOS). | {
"source": [
"https://serverfault.com/questions/247049",
"https://serverfault.com",
"https://serverfault.com/users/74372/"
]
} |
247,053 | We're planning to migrate from SBS 2003 to 2011 in a month or so, and I want to try out the upgrade process before I do it on the real server. I haven't bought the 2011 licence yet, but noticed that there is a trial version at Technet. Is the trial version complete enough to try out a migration? I also don't have a spare 2003 licence, is there a trial version of 2003 still availible? And in that case, is it upgradeable? If not, any other way of trying it out? I suppose I could use the existing licence, but that isn't really ok, is it? I'm not good enough at legalese to understand the EULAs, maybe it is considered fair use (if there is such a thing)? Any good advice in general in trying these things? I intend to do it virtually since I don't have heaps of spare hardware, but that shouldn't be an issue, right? | They all run as root . If you need otherwise, use su in the script or add a crontab entry to the user's crontab ( man crontab ) or the system-wide crontab (whose location I couldn't tell you on CentOS). | {
"source": [
"https://serverfault.com/questions/247053",
"https://serverfault.com",
"https://serverfault.com/users/38771/"
]
} |
247,061 | If you have db names like test_db1
test_db2 phpmyadmin would create a group test . The group appears in the left side-bar with related databases displayed under it. How can I disable this feature? | I know this is something old -- so I'm posting here just in case someone stumbles in the same question. In the latest versions of phpMyAdmin the setting to flip is NavigationTreeEnableGrouping, i.e. in config.inc.php set: $cfg['NavigationTreeEnableGrouping'] = false; p.s. on localhost, config.inc.php is placed in /phpmyadminX.XX/ folder Or, for a "current session only", you can do it from phpmyadmin STARTPAGE: click settings -> navigation panel -> uncheck "group items in tree" | {
"source": [
"https://serverfault.com/questions/247061",
"https://serverfault.com",
"https://serverfault.com/users/34662/"
]
} |
247,062 | So I have IIS installed at work and use the following URL to test my asp site: http://localhost/mysite/index.asp I'm on the road and need to work on it. I have all the site files, but I don't have an XP disc so I installed IIS express. All the files and folder locations and names are the same, but the URL ( http://localhost/mysite/index.asp ) no longer works. What do I do? | I know this is something old -- so I'm posting here just in case someone stumbles in the same question. In the latest versions of phpMyAdmin the setting to flip is NavigationTreeEnableGrouping, i.e. in config.inc.php set: $cfg['NavigationTreeEnableGrouping'] = false; p.s. on localhost, config.inc.php is placed in /phpmyadminX.XX/ folder Or, for a "current session only", you can do it from phpmyadmin STARTPAGE: click settings -> navigation panel -> uncheck "group items in tree" | {
"source": [
"https://serverfault.com/questions/247062",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
247,176 | I have struggled throughout the years to get a solid understanding of iptables. Any time I try and read through the man pages my eyes start to glaze over. I have a service that I only want to allow the localhost to have access to. What terms (or configuration, if someone is feeling generous) should I Google for to allow only the localhost to have access to a given port? | If by service you mean a specific port, then the following two lines should work. Change the "25" to whatever port you're trying to restrict. iptables -A INPUT -p tcp -s localhost --dport 25 -j ACCEPT
iptables -A INPUT -p tcp --dport 25 -j DROP | {
"source": [
"https://serverfault.com/questions/247176",
"https://serverfault.com",
"https://serverfault.com/users/74411/"
]
} |
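Because -A appends, the ACCEPT rule above has to be added before the DROP rule, since iptables matches rules top to bottom. A quick way to verify the resulting order (standard iptables, default filter table assumed):
iptables -L INPUT -n -v --line-numbers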
247,425 | In IIS7, there are numerous things you can do that seem to restart the website. I am unclear about exactly how they are all related. run iisreset from the command line refresh a website recycle an app pool restart a website Can someone explain exactly what each one does please? | iisreset will stop and start the World Wide Web Publishing Service. This, of course, applies to all of your application pools. I'm sure you noticed a process being created for each application pool. This process will handle requests for all websites associated with it. When you recycle an application pool , IIS will create a new process (keeping the old one) to serve requests. Then it tries to move all requests on the new process. After a timeout the old process will be killed automatically. You usually recycle your application pool to get rid of leaked memory (you might have a problem in your application if this needs to be a regular operation, even though it is recommended to have a scheduled recycle). As for restarting a website , it just stops and restarts serving requests for that particular website. It will continue to serve other websites on the same app pool with no interruptions. If you have a session oriented application, all of the above will cause loss of session objects. Refreshing a website has no effect on the service/process/website and is merely a UI command to refresh the treeview (maybe you added a directory you don't see in the management console). | {
"source": [
"https://serverfault.com/questions/247425",
"https://serverfault.com",
"https://serverfault.com/users/2889/"
]
} |
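For reference, the recycle/stop/start actions described above can also be scripted with IIS7's appcmd tool; the pool and site names below are placeholders:
rem recycle a single application pool (spins up a new worker process)
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MyAppPool"
rem stop and start one website without touching the others
%windir%\system32\inetsrv\appcmd stop site /site.name:"MySite"
%windir%\system32\inetsrv\appcmd start site /site.name:"MySite"
rem restart everything (same effect as running iisreset from the answer)
iisreset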
247,671 | I'm trying to determine why a Nagios host check is failing (hostnames and IPs have been changed to protect the guilty): : jmglov@laurana; host www.foo.com
;; connection timed out; no servers could be reached
: jmglov@laurana; for ns in `grep -o '\([0-9]\+[.]\)\{3\}[0-9]\+$' /etc/resolv.conf`; do ping -qc 1 $ns; done
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
--- 192.168.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 10.911/10.911/10.911/0.000 ms
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms So I know that my nameservers are reachable, meaning that some nameserver along the delegation path to the authoritative nameserver for my host is not responding. Is there an easy way to determine which nameserver this is (basically a traceroute for DNS)? | Does this do the job for you? dig +trace google.com From the man page: +[no]trace Toggle tracing of the delegation path from the root name
servers for the name being looked up. Tracing is disabled by default. When tracing is enabled, dig makes iterative queries to resolve the name being looked up. It will follow referrals from the root servers, showing the answer from each server that was used to resolve the lookup. | {
"source": [
"https://serverfault.com/questions/247671",
"https://serverfault.com",
"https://serverfault.com/users/70239/"
]
} |
248,841 | I have searched a lot with google. It is only documented how to enable IP Forwarding in the linux kernel but not WHEN and WHY I have to enable it. I would be thankful for advice. When and why do I have to enable it? (For example when I install a bridge?, or when using iptables? or when using route add ?) (Good links are appreciated, but I was not able to found one. ) Thank you very much! Jan | IP forwarding should be enabled when you want the system to act as a router, that is transfer IP packets from one network to another. In the simplest case, consider a server with two physical ethernet ports which is meant to connect to two different networks (say your internal network and the outside world as provided by a DSL modem). If you just connect and configure those two interfaces, the system can communicate on either network. However, packets from one network cannot travel to the other network, because forwarding is not enabled. Consider the specific example of 'route add'. If you have two network interfaces, you will add a minimum of two routes, one for each interface. When the kernel considers where to send a network packet, it will pick the most specific applicable route and then send it along to that interface. However, if forwarding is turned off, the kernel will first check to see which interface the packet came from. If it didn't come from the same interface, the kernel will discard it. EDIT : First note that you can use a router without having two physical network interfaces. For example if you are using VLANs , your server can transfer IP packets between vlans but only have one physical network interface. This is called a one-armed router . However for the simplest case yes you can say that if you only have one physical network interface then you don't need to enable IP forwarding. IP forwarding involves transferring packets between network interfaces (real or virtual) so I think that if you had two interfaces on the same network, you would have to enable ip forwarding to allow packets to move between the interfaces. However since the interfaces are already on the same network, it doesn't seem to make a lot of sense to transfer packets between them. | {
"source": [
"https://serverfault.com/questions/248841",
"https://serverfault.com",
"https://serverfault.com/users/74772/"
]
} |
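For completeness, the forwarding switch on Linux is just a sysctl; a quick sketch of turning it on (these are the standard paths, nothing specific to the asker's setup):
sysctl -w net.ipv4.ip_forward=1          # takes effect immediately, lost on reboot
# equivalent: echo 1 > /proc/sys/net/ipv4/ip_forward
# to make it permanent, put "net.ipv4.ip_forward = 1" in /etc/sysctl.conf
# and reload with: sysctl -p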
249,109 | I would like haproxy to use its own 503 document page when a backend server sends an HTTP 503 code. Is it possible? I have seen something like "monitor fail" conditions but don't know how to add it to the frontend. | You can use the errorfile directive and then a custom .http text file. So for example: errorfile 503 /etc/haproxy/errors/503-mycustom.http Content of the file would then be something like: HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html>
<head>
<title>RARRR!!!!!</title>
</head>
<body style="font-family:Arial,Helvetica,sans-serif;">
<div style="margin: 0 auto; width: 960px;">
<h2 >RAWR RAWR RAWR</h2>
</div>
</body>
</html> The errorfile directive can be specific to a backend as well. The "errorfile" setting cannot be used to change a response sent by HAProxy if nodes are online. This setting only affects HAProxy when all nodes are offline. It is important to understand that
this keyword is not meant to rewrite errors returned by the server, but errors detected and returned by HAProxy. This is why the list of supported errors is limited to a small set. | {
"source": [
"https://serverfault.com/questions/249109",
"https://serverfault.com",
"https://serverfault.com/users/57327/"
]
} |
249,316 | I have haproxy running as my load-balancer and from the stats web interface that comes with haproxy, I can put a web server into maintenance mode (and bring it back out again) - which is great! However, I also want to be able to perform that same action from the command line (for use in some automated deployment workflows). Is this possible, and if so how? Many thanks | Update (28 Aug 2012): I now tend to use haproxyctl nowadays, which utilizes the methods described below. I've fixed it after a little more research, for anyone else with the same issue:- You can add a unix socket in your config, then interact with that ( here are the possible commands ). To set up: sudo nano /etc/haproxy/haproxy.cfg In your 'global' section add in: stats socket /etc/haproxy/haproxysock level admin Restart your haproxy daemon (e.g. sudo service haproxy restart ) Now you need socat (if you don't have it, just apt-get install socat on Ubuntu). Now all you need to do is fire off this command to take down a node: echo "disable server yourbackendname/yourservername" | socat stdio /etc/haproxy/haproxysock To bring it back up replace disable with enable in the command above. | {
"source": [
"https://serverfault.com/questions/249316",
"https://serverfault.com",
"https://serverfault.com/users/74875/"
]
} |
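The same admin socket can also be queried read-only, which is handy for checking a server's state before and after disabling it. This assumes the socket path used in the answer above:
echo "show stat" | socat stdio /etc/haproxy/haproxysock
# CSV output; the status column shows UP, DOWN or MAINT for each backend server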
249,340 | In order to compile Nginx I need to install openssl and openssl-dev (I'm following a book guide). So I'm doing this: sudo apt-get install openssl openssl-dev But I get an error telling me that it's impossible to find openssl-dev .
Also, after some googling, it seems that libssl-dev is equivalent to openssl-dev; is that true? ( apt-get found libssl-dev on my server) Here is my server version: 2.6.32-22-server Any help welcome! | Yes, you are right. It is libssl-dev | {
"source": [
"https://serverfault.com/questions/249340",
"https://serverfault.com",
"https://serverfault.com/users/75103/"
]
} |
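So, following the answer, the install line on Debian/Ubuntu becomes (package names as they appear in the standard repositories):
sudo apt-get install openssl libssl-dev
# building Nginx from source usually also wants libpcre3-dev and zlib1g-dev,
# but check the book's prerequisite list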
249,483 | Is there a shell command to see the headers of a HTTP request? For example, I would like to know what the headers retrieved from www.example.com/test.php are How can I do this? | In order to retrieve only the header, give this a try: curl -I http://www.example.com/test.php From the man page: -I/--head (HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature
the command HEAD which this uses to get nothing but the header
of a document. When used on a FTP or FILE file, curl displays
the file size and last modification time only. | {
"source": [
"https://serverfault.com/questions/249483",
"https://serverfault.com",
"https://serverfault.com/users/75141/"
]
} |
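If curl is not available, wget can show the same response headers; this is just an alternative, not part of the original answer:
wget --server-response --spider http://www.example.com/test.php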
249,671 | How do I switch on PAM debugging in Debian Squeeze at the admin level? I have checked every resource I was able to find. Google, manpages, whatever. The only thing I haven't tried yet (I simply not dare to, did I mention that I hate PAM?) is digging into the PAM's library source. I tried to google for a solution, nothing. What I found so far: http://www.bitbull.ch/wiki/index.php/Pam_debugging_funktion ( /etc/pam_debug ) and http://nixdoc.net/man-pages/HP-UX/man4/pam.conf.4.html ( debug option on PAM entries in /etc/pam.d/ ). Nope, does not work. No PAM output, nothing, absolute silence. While searching for a solution I even followed links to Pam, that are gas stations here in Germany. Well, yes, perhaps in all those billion of hits might hiding a clue, but shoot me I'd be dead before I discover. Rest is FYI: What problem did I have? After upgrading to Debian Squeeze something got weird (well, hey, it once was, uh, what was right over the Etch .. ah, yes, Woody). So it's probably not Debian's fault, just a long lived screwed up setup. I immediately had the impression it has to do something with PAM, but I really did not know what's going on. I was completely in the dark, left alone, helpless as a baby, YKWIM. Some ssh logins worked, some not. It was kind of funny. No clues in ssh -v , no clues in /var/log/* , nothing. Just "auth succeeded" or "auth fail", sometimes the same user logging in parallely succeeded with one session and failed with the other, at the same time. And nothing you really can get hold of. After digging trainloads of other options I was able to find out. There is nullok and nullok_secure , a Debian special. Something screwed with /etc/securetty and depending on the tty (which is somewhat random) a login was rejected or not. REALLY NICE, phew! The fix was easy and everything's now fine again. However this left me with the question, how to debug such a mess in future. It's not the first time PAM drives me nuts. So I would like to see a final solution. Final as in "solved", not final as in "armageddon". Thanks. Ah, BTW, this again strengthened my belief in that it's good to hate PAM since it came up. Did I mention that I do? | A couple of things for you to try: Did you enable logging of debug messages in syslog? cp /etc/syslog.conf /etc/syslog.conf.original
vi /etc/syslog.conf Add the following line: *.debug /var/log/debug.log Exit with :wq! . touch /var/log/debug.log
service syslog restart You can enable debugging for all modules like so: touch /etc/pam_debug OR you can enable debugging only for the modules you're interested in by adding "debug" to the end of the relevant lines in /etc/pam.d/system-auth or the other /etc/pam.d/* files: login auth required pam_unix.so debug Then debugging messages should start appearing in /var/log/debug.log . Hope this helps you out! | {
"source": [
"https://serverfault.com/questions/249671",
"https://serverfault.com",
"https://serverfault.com/users/59497/"
]
} |
249,719 | We've started to receive bounced spam messages and the sender is one of our email addresses. We know that we don't send spam from that address. We've tried changing the password but we're still receiving these bounced emails. Note: This email account is not configured in an email client. We only access it through the browser, using HTTPS. How do we prevent spammers from using our email address to send spam? I've googled around and almost every website says that forging the "From" address is pretty easy using an email client and it is impossible to stop these kinds of spammers. Note: We're using the email functionality in a shared hosting account, not hosting an email server ourselves. Even the tech support says Not a whole lot we can do about stopping that . | Short Answer: You can't. For more info, this gives a basic explanation as to why. This shows how easy it is to do. It's just the nature of SMTP, it's insecure! Just because an email appears to come from somebody, it doesn't mean it did. | {
"source": [
"https://serverfault.com/questions/249719",
"https://serverfault.com",
"https://serverfault.com/users/56623/"
]
} |
249,952 | I've just installed Nginx on my server and am extremely happy with the results, however I still cannot figure out how to insert wildcard virtual hosts. This is the [directory] structure I'd like: -- public_html (example.com)
---subdomain 1 (x.example.com)
---subdomain 2 (y.example.com) As you can see it's pretty basic, however, I'd like the ability to add domains by simply adding an A record for a new subdomain, which will instantly point to the subdirectory of the same name under public_html. There's stuff on the web, however I haven't come across something exactly like this. Any help would be greatly appreciated. | I shall show you. The configuration file server {
server_name example.com www.example.com;
root www/pub;
}
server {
server_name ~^(.*)\.example\.com$ ;
root www/pub/$1;
} Test files We have two test files: $ cat www/pub/index.html
COMMON
$ cat www/pub/t/index.html
T Testing Static server names: $ curl -i -H 'Host: example.com' http://localhost/
HTTP/1.1 200 OK
Server: nginx/0.8.54
Date: Wed, 23 Mar 2011 08:00:42 GMT
Content-Type: text/html
Content-Length: 7
Last-Modified: Wed, 23 Mar 2011 07:56:24 GMT
Connection: keep-alive
Accept-Ranges: bytes
COMMON
$ curl -i -H 'Host: www.example.com' http://localhost/
HTTP/1.1 200 OK
Server: nginx/0.8.54
Date: Wed, 23 Mar 2011 08:00:48 GMT
Content-Type: text/html
Content-Length: 7
Last-Modified: Wed, 23 Mar 2011 07:56:24 GMT
Connection: keep-alive
Accept-Ranges: bytes
COMMON And regexp server name: $ curl -i -H 'Host: t.example.com' http://localhost/
HTTP/1.1 200 OK
Server: nginx/0.8.54
Date: Wed, 23 Mar 2011 08:00:54 GMT
Content-Type: text/html
Content-Length: 2
Last-Modified: Wed, 23 Mar 2011 07:56:40 GMT
Connection: keep-alive
Accept-Ranges: bytes
T | {
"source": [
"https://serverfault.com/questions/249952",
"https://serverfault.com",
"https://serverfault.com/users/75275/"
]
} |
249,956 | My /var/log/apache2/error.log is filling up about 1GB per hour with PHP Notice errors. I've tried adding this:
/apache2.conf php_value error_log none And in my /cgi/php.ini: error_reporting = E_ERROR
display_errors = On
display_startup_errors = Off
log_errors = Off PHP is running through fcgi.
Even though display errors is ON, it is NOT displaying errors. Is there a separate config file I should be editing? OS: Ubuntu Linux 10.04
PHP: 5.3.2
Apache: 2.2.14 | | {
"source": [
"https://serverfault.com/questions/249956",
"https://serverfault.com",
"https://serverfault.com/users/36766/"
]
} |
250,224 | How do I make apt-get ignore some dependencies? For example, I wanted to install mailx so I can use it to send email from cron scripts/report-generating tools. However, installing mailx also installs exim4 and a whole bunch of dependencies (I already have Postfix installed) I don't really need and which I guess mailx can also live without. How do I ignore some dependencies but still use apt-get since it's a nice tool? | Simple and easy solution: just specify the unwanted packages with an extra - after each of them. Example without the - switch: root@debian:~# apt-get install bsd-mailx
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
exim4-base exim4-config exim4-daemon-light liblockfile-bin liblockfile1
[...] Example using the switch to avoid installing exim4-base . Notice the - at the end: root@debian:~# apt-get install bsd-mailx exim4-base-
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'exim4-base' is not installed, so not removed
The following extra packages will be installed:
liblockfile-bin liblockfile1 ssmtp
[...] As you can see, apt-get does not try anymore to install the exim4-base package, and it does not try to install its various dependencies ( exim4-config etc). And if you were wrong and needed that exim4-base dependency after all, you can just apt-get install it later! | {
"source": [
"https://serverfault.com/questions/250224",
"https://serverfault.com",
"https://serverfault.com/users/37828/"
]
} |
250,294 | What is the best way to kill zombie processes and D-state processes with a single command? | Double tap. Actually, reboot. There's no real way to easily get rid of a zombie, but there's really no reason to, because a zombie isn't taking up resources on the computer; it's an orphaned entry in a process table. Init is supposed to collect it but something went wrong with the process. http://en.wikipedia.org/wiki/Zombie_process Perhaps you're asking because there's a worse problem... are you getting a boatload of zombies roaming your process table? That usually means a bug in the program or a problem with a configuration. You shouldn't have a huge number of zombies on the system. One or two I don't worry about. If you have fifty of them from Apache or some other daemon, you probably have a problem. But that's not directly related to your question... | {
"source": [
"https://serverfault.com/questions/250294",
"https://serverfault.com",
"https://serverfault.com/users/55371/"
]
} |
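To see whether zombies are actually accumulating and which parent process is failing to reap them, something like this works with the standard procps ps:
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'   # zombies plus the PPID of the parent that should reap them
# the entry disappears once the parent calls wait() or the parent itself exits,
# so restarting the parent process is the usual cleanup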