source_id | question | response | metadata |
---|---|---|---|
58,401 | We're using Amazon EC2 and we want to keep track of instances. Is the Amazon EC2 instance-id unique forever? i.e. If a VM has an instance id of i-12345678, is there a guarantee that when that instance terminates, that instance id won't ever be used again? | I asked Amazon, and this was their answer: "Instance ids are unique. You'll never receive a duplicate id. However, the current format of the instance id is an implementation detail that is subject to change. If you use the instance id as a string, you should be fine." It's important to note that you will never receive the same ID twice. However, since you can't connect to other people's instances, this will probably be sufficient. | {
"source": [
"https://serverfault.com/questions/58401",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
58,762 | I want to redirect only my root to another URL, but maintain all the /sub/directories where they belong (and redirect). Example: mysite.com/1 redirects to somewhere
mysite.com/admin opens a page. I want mysite.com/ to redirect to mysecondsite.com, and only this, with a 301 redirect using .htaccess | Try this: RewriteEngine on
RewriteCond %{HTTP_HOST} mysite\.com [NC]
RewriteCond %{REQUEST_URI} ^/$
RewriteRule ^(.*)$ http://mysecondsite.com/ [L,R=301] If you don't need to check for the old domain (for example, if the directory where your .htaccess is placed is only used by the old domain) you can remove the second line. | {
"source": [
"https://serverfault.com/questions/58762",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
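If mod_alias is available, a shorter alternative to the rewrite rules above (an assumption, not part of the original answer) is a single RedirectMatch anchored to the bare root path:

```apache
# Redirect only requests for exactly "/" (requires mod_alias)
RedirectMatch 301 ^/$ http://mysecondsite.com/
```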
58,881 | I have a problem with my Ubuntu server. I start a new remote SSH session (from Windows PuTTY or an Ubuntu client) to my server. After a while (I think it is when I hide the console window), my input is not shown in the console. But when I type, for example, "ls", I get the listing. This means that the input was sent, but I don't see it. I can only close the console and start a new SSH session. But the next annoying point is that when I start a new screen and I have this problem there, it doesn't go away after reconnecting. I have to restart the screen bash. Does anyone have an idea what's going wrong? It seems that it is a problem on the server, because I tried with Windows and Linux with the same result. thanks
plucked | This can happen after a program dies, leaving a terminal in an abnormal state.
To fix it temporarily, you "reset" the terminal with: $ reset | {
"source": [
"https://serverfault.com/questions/58881",
"https://serverfault.com",
"https://serverfault.com/users/88427/"
]
} |
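If `reset` itself prints garbage because the terminal is stuck in a raw state, `stty sane` is a standard alternative (not mentioned in the answer above) that restores default line settings without clearing the screen:

```bash
# Restore default terminal line settings
stty sane
```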
59,014 | I need to run a Windows command n times within a bat script file. I know how to do this in various programming languages but cannot manage to get it right on the Windows command line :-( I would expect something like either for(int i = 0; i < 100; i++) {
// do something
} or even this (though not entirely seriously) 1.upto(100, {
// do something
}) Thanks! EDIT I can write a program in java, perl, c or whatever that will generate a bat script that looks like this for %%N in (1 2 3 4 5 6 7 8 9 10 11 12) do echo %%N and so on. Or even "better": echo 1
echo 2
echo 3
echo 4
echo 5
echo 6
echo 7
echo 8
echo 9
echo 10
echo 11
echo 12 and then execute it... But the thing is that I need a concise way to specify a range of numbers to iterate through within the script. Thanks! | You can do it like this: ECHO Start of Loop
FOR /L %i IN (1,1,5) DO (
ECHO %i
) The 1,1,5 is decoded as: (start,step,end) Also note, if you are embedding this in a batch file, you will need to use the double percent sign (%%) to prefix your variables, otherwise the command interpreter will try to evaluate the variable %i prior to running the loop. | {
"source": [
"https://serverfault.com/questions/59014",
"https://serverfault.com",
"https://serverfault.com/users/12665/"
]
} |
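The answer above notes that inside a batch file the loop variable needs a doubled percent sign; a minimal sketch of that batch-file form (the 1,1,5 range is just an example):

```bat
@ECHO OFF
REM In a .bat file the loop variable must be written as %%i
FOR /L %%i IN (1,1,5) DO (
    ECHO %%i
)
```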
59,108 | I have two directories - one from an earlier backup and the second from the newest backup. How do I compare what changes were made to files in the directory from the newest backup on Linux? Also, how do I display changes in, for example, text and PHP files - I'm thinking about something like the revision history on Wikipedia, where you see the old version on one side of the screen and the newest version on the other, with changes highlighted. How do I achieve something like that? edit:
How do I also compare a remote dir with a local one? | From the diff man page: If both from-file and to-file are directories, diff compares corresponding files in both directories,
in alphabetical order; this comparison is not recursive unless the -r or --recursive option is given.
diff never compares the actual contents of a directory as if it were a file. The file that is fully
specified may not be standard input, because standard input is nameless and the notion of ‘‘file with
the same name’’ does not apply. So to compare directories: diff --brief -r dir1 dir2 To compare files side by side: diff --side-by-side file1 file2 | {
"source": [
"https://serverfault.com/questions/59108",
"https://serverfault.com",
"https://serverfault.com/users/12427/"
]
} |
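The edit in the question (comparing a remote directory with a local one) isn't covered by the answer above; one common approach, offered here as an assumption rather than part of the original answer, is an rsync dry run over SSH:

```bash
# List differences between a local and a remote directory without copying anything
# (user, host and paths are placeholders)
rsync -avn --delete /path/to/local/dir/ user@remote:/path/to/remote/dir/
```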
59,140 | How do I diff files/folders across machines provided that the only connectivity available is ssh? | You can do it with Bash's process substitution : diff foo <(ssh myServer 'cat foo') Or, if both are on remote servers: diff <(ssh myServer1 'cat foo') <(ssh myServer2 'cat foo') | {
"source": [
"https://serverfault.com/questions/59140",
"https://serverfault.com",
"https://serverfault.com/users/18479/"
]
} |
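The same process-substitution trick extends to whole directories if you compare recursive listings (names only, not file contents); a small sketch assuming the same myServer host as above:

```bash
# Compare local and remote directory listings by name
diff <(ls -R /some/dir) <(ssh myServer 'ls -R /some/dir')
```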
59,262 | Is there a way to make bash display stderr messages in red color? | command 2> >(while read line; do echo -e "\e[01;31m$line\e[0m" >&2; done) | {
"source": [
"https://serverfault.com/questions/59262",
"https://serverfault.com",
"https://serverfault.com/users/12097/"
]
} |
59,602 | I can't send out emails, need to look into the logs, but where is the log? | Where are the logs? The default location depends on your Linux/Unix system, but the most common places are /var/log/maillog /var/log/mail.log /var/adm/maillog /var/adm/syslog/mail.log If it's not there, look up /etc/syslog.conf . You should see something like this mail.* -/var/log/maillog sendmail writes logs to the mail facility of syslog. Therefore, which file it gets written to depends on how syslog was configured. If your system uses syslog-ng (instead of the more "traditional" syslog ), then you'll have to look up your syslog-ng.conf file. You should see something like this: # This files are the log come from the mail subsystem.
#
destination mail { file("/var/log/mail.log"); };
destination maillog { file("/var/log/maillog"); };
destination mailinfo { file("/var/log/mail.info"); };
destination mailwarn { file("/var/log/mail.warn"); };
destination mailerr { file("/var/log/mail.err"); }; Unable to send out emails? One of the most common reasons I've seen for a freshly installed sendmail not being able to send out emails is the DAEMON_OPTIONS being set to listen only on 127.0.0.1 See /etc/mail/sendmail.mc dnl #
dnl # The following causes sendmail to only listen on the IPv4 loopback address
dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
dnl # address restriction to accept email from the internet or intranet.
dnl #
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl If that's your case, remove the "Addr=127.0.0.1" part, rebuild your conf file and you're good to go! DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
[root@server]$ m4 sendmail.mc > /etc/sendmail.cf
[root@server]$ /etc/init.d/sendmail restart If you've been making changes to /etc/sendmail.cf manually thus far (instead of the *.m4 file) you can make similar changes in /etc/sendmail.cf. The offending line will look like this: O DaemonPortOptions=Port=smtp,Addr=127.0.0.1, Name=MTA Change it to: O DaemonPortOptions=Port=smtp, Name=MTA | {
"source": [
"https://serverfault.com/questions/59602",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
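Once you know which file syslog writes to, it helps to follow it live while re-sending a test message; a small sketch (the exact path depends on your distribution, as discussed above):

```bash
# Watch the mail log while retrying the send
tail -f /var/log/maillog    # or /var/log/mail.log on Debian/Ubuntu-style systems
```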
59,735 | Has anyone faced the error "The RPC server is unavailable" when connecting the Disk Management MMC console to Hyper-V Server R2? The servers are in the same AD domain and the MMC console is enabled on Hyper-V. Thanks | You need to change the firewall rules on both machines (NOT only the Hyper-V Server). Run this command on both machines: netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes | {
"source": [
"https://serverfault.com/questions/59735",
"https://serverfault.com",
"https://serverfault.com/users/11590/"
]
} |
59,838 | I find it tedious to have to back up databases every week. And I also think weekly backups should be turned into daily backups. If I had to do that, I wouldn't want to do it manually. What's the best way to automate the backing-up of PostgreSQL databases daily? | The same as you do for any other repetitive task that can be automated - you write a script to do the backup, and then set up a cron job to run it. A script like the following, for instance: (Note: it has to be run as the postgres user, or any other user with the same privs) #! /bin/bash
# backup-postgresql.sh
# by Craig Sanders <[email protected]>
# This script is public domain. feel free to use or modify
# as you like.
DUMPALL='/usr/bin/pg_dumpall'
PGDUMP='/usr/bin/pg_dump'
PSQL='/usr/bin/psql'
# directory to save backups in, must be rwx by postgres user
BASE_DIR='/var/backups/postgres'
YMD=$(date "+%Y-%m-%d")
DIR="$BASE_DIR/$YMD"
mkdir -p "$DIR"
cd "$DIR"
# get list of databases in the system, exclude the template dbs
DBS=( $($PSQL --list --tuples-only |
awk '!/template[01]/ && $1 != "|" {print $1}') )
# first dump entire postgres database, including pg_shadow etc.
$DUMPALL --column-inserts | gzip -9 > "$DIR/db.out.gz"
# next dump globals (roles and tablespaces) only
$DUMPALL --globals-only | gzip -9 > "$DIR/globals.gz"
# now loop through each individual database and backup the
# schema and data separately
for database in "${DBS[@]}" ; do
SCHEMA="$DIR/$database.schema.gz"
DATA="$DIR/$database.data.gz"
INSERTS="$DIR/$database.inserts.gz"
# export data from postgres databases to plain text:
# dump schema
$PGDUMP --create --clean --schema-only "$database" |
gzip -9 > "$SCHEMA"
# dump data
$PGDUMP --disable-triggers --data-only "$database" |
gzip -9 > "$DATA"
# dump data as column inserts for a last resort backup
$PGDUMP --disable-triggers --data-only --column-inserts \
"$database" | gzip -9 > "$INSERTS"
done
# delete backup files older than 30 days
echo deleting old backup files:
find "$BASE_DIR/" -mindepth 1 -type d -mtime +30 -print0 |
xargs -0r rm -rfv EDIT : pg_dumpall -D switch (line 27) is deprecated, now replaced with --column-inserts https://wiki.postgresql.org/wiki/Deprecated_Features | {
"source": [
"https://serverfault.com/questions/59838",
"https://serverfault.com",
"https://serverfault.com/users/14037/"
]
} |
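The answer mentions a cron job but doesn't show one; a minimal sketch, assuming the script above is saved as /usr/local/bin/backup-postgresql.sh and should run daily at 02:00:

```bash
# Run as root: edit the postgres user's crontab...
sudo crontab -u postgres -e
# ...and add this line to run the backup daily at 02:00:
#   0 2 * * * /usr/local/bin/backup-postgresql.sh
```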
60,500 | I would like to create a new user in an existing PostgreSQL database on an Ubuntu machine. I want to grant this user read-only access to all the tables. How do I do it? Do I need to create a new user on Ubuntu, too? Thanks, Udi | Reference taken from this article! Script to create a read-only user: CREATE ROLE Read_Only_User WITH LOGIN PASSWORD 'Test1234'
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION VALID UNTIL 'infinity'; Assign permission to this read only user: GRANT CONNECT ON DATABASE YourDatabaseName TO Read_Only_User;
GRANT USAGE ON SCHEMA public TO Read_Only_User;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO Read_Only_User;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO Read_Only_User; | {
"source": [
"https://serverfault.com/questions/60500",
"https://serverfault.com",
"https://serverfault.com/users/10904/"
]
} |
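One caveat the answer doesn't mention: those GRANTs only cover objects that already exist. On PostgreSQL 9.0+ you can also set default privileges so future tables are covered; a sketch (run as the user who will create the tables), shown here via psql as an assumption rather than part of the original answer:

```bash
# Apply SELECT automatically to tables created in "public" from now on
psql -d YourDatabaseName -c 'ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO Read_Only_User;'
```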
60,508 | Is there a one-liner that grants SELECT permissions to a new user in PostgreSQL? Something that would implement the following pseudo-code: GRANT SELECT ON TABLE * TO my_new_user; | I thought it might be helpful to mention that, as of 9.0, Postgres does have the syntax to grant privileges on all tables (as well as other objects) in a schema: GRANT SELECT ON ALL TABLES IN SCHEMA public TO user;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO user; Here's the link . | {
"source": [
"https://serverfault.com/questions/60508",
"https://serverfault.com",
"https://serverfault.com/users/10904/"
]
} |
60,553 | From reading, it seems like DNS failover is not recommended just because DNS wasn't designed for it. But if you have two webservers on different subnets hosting redundant content, what other methods are there to ensure that all traffic gets routed to the live server if one server goes down? To me it seems like DNS failover is the only failover option here, but the consensus is it's not a good option. Yet services like DNSmadeeasy.com provide it, so there must be merit to it. Any comments? | By 'DNS failover' I take it you mean DNS Round Robin combined with some monitoring, i.e. publishing multiple IP addresses for a DNS hostname, and removing a dead address when monitoring detects that a server is down. This can be workable for small, less trafficked websites. By design, when you answer a DNS request you also provide a Time To Live (TTL) for the response you hand out. In other words, you're telling other DNS servers and caches "you may store this answer and use it for x minutes before checking back with me". The drawbacks come from this: With DNS failover, an unknown percentage of your users will have your DNS data cached with varying amounts of TTL left. Until the TTL expires these may connect to the dead server. There are faster ways of completing failover than this. Because of the above, you're inclined to set the TTL quite low, say 5-10 minutes. But setting it higher gives a (very small) performance benefit, and may help your DNS propagation work reliably even if there is a short glitch in network traffic. So using DNS-based failover goes against high TTLs, but high TTLs are a part of DNS and can be useful. The more common methods of getting good uptime involve: Placing servers together on the same LAN. Place the LAN in a datacenter with highly available power and network planes. Use an HTTP load balancer to spread load and fail over on individual server failures. Get the level of redundancy / expected uptime you require for your firewalls, load balancers and switches. Have a communication strategy in place for full-datacenter failures, and the occasional failure of a switch / database server / other resource that cannot easily be mirrored. A very small minority of web sites use multi-datacenter setups, with 'geo-balancing' between datacenters. | {
"source": [
"https://serverfault.com/questions/60553",
"https://serverfault.com",
"https://serverfault.com/users/12007/"
]
} |
60,658 | In a similar vein to this question , how would I do a schema-only dump in PostgreSQL? | pg_dump --schema-only | {
"source": [
"https://serverfault.com/questions/60658",
"https://serverfault.com",
"https://serverfault.com/users/2321/"
]
} |
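A usage sketch, with a placeholder database name and output file:

```bash
# Dump only the schema (no data) of one database to a file
pg_dump --schema-only mydatabase > mydatabase-schema.sql
```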
60,711 | I try to chown the owner of a file to root, but I can't. I'm doing this as root. I get the following message: chown: changing ownership of `ps': Operation not permitted | The immutable attribute might be set on the file. Remove it with chattr -i <file> | {
"source": [
"https://serverfault.com/questions/60711",
"https://serverfault.com",
"https://serverfault.com/users/1519/"
]
} |
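To confirm that the immutable bit is what's blocking the chown before removing it, you can list the file's attributes first (a small addition to the answer above):

```bash
# An 'i' in the attribute flags means the file is immutable
lsattr ps

# Then clear the flag and retry the chown
chattr -i ps
chown root ps
```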
61,321 | I have an alias that passes in some parameters to a tool that I use often. Sometimes I run as myself, sometimes under sudo. Unfortunately, of course, sudo doesn't recognise the alias. Does anyone have a hint on how to pass the alias through? In this case, I have a bunch of options for perl when I'm debugging: alias pd='perl -Ilib -I/home/myuser/lib -d' Sometimes, I have to debug my tools as root, so, instead of running: pd ./mytool --some params I need to run it under sudo. I've tried many ways: sudo eval $(alias pd)\; pd ./mytool --some params
sudo $(alias pd)\; pd ./mytool --some params
sudo bash -c "$(alias pd)\; pd ./mytool --some params"
sudo bash -c "$(alias pd); pd ./mytool --some params"
sudo bash -c eval\ "$(alias pd)\; pd ./mytool --some params"
sudo bash -c eval\ "'$(alias pd)\; pd ./mytool --some params'" I was hoping for a nice, concise way to ensure that my current pd alias was fully used (in case I need to tweak it later), though some of my attempts weren't concise at all. My last resort is to put it into a shell script and put that somewhere that sudo will be able to find. But aliases are soooo handy sometimes, so it is a last resort. | A very elegant solution can be found in the Archlinux-Wiki: alias sudo='sudo '
# whitespace ---^ works to pass all aliases to sudo . Source: http://wiki.archlinux.org/index.php/Sudo#Passing_aliases | {
"source": [
"https://serverfault.com/questions/61321",
"https://serverfault.com",
"https://serverfault.com/users/19088/"
]
} |
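Putting it together with the alias from the question, a quick usage sketch (bash; the trailing space in the sudo alias is what makes the word after sudo eligible for alias expansion):

```bash
alias sudo='sudo '
alias pd='perl -Ilib -I/home/myuser/lib -d'

# Now the pd alias expands even under sudo
sudo pd ./mytool --some params
```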
61,400 | How can I tell if grub is installed on a disk, and if it is, what settings it has (notably, what it has for the root parameter)? I need to check a lot of disks in software RAID1 arrays to make sure both disks have grub installed, with the grub on each disk having the appropriate root value. | This is a simple way to tell if GRUB is installed. If it doesn't work, your file command's database is likely out of date and you can either update its database or use an alternate method from another answer. You can use file to identify GRUB in an MBR. e.g. # file -s /dev/sda
/dev/sda: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3
, stage2 address 0x2000, stage2 segment 0x200; partition 1:
ID=0xfd, starthead 1, startsector 63, 1044162 sectors; partition
2: ID=0x82, starthead 0, startsector 1044225, 1028160 sectors;
partition 3: ID=0xfd, starthead 0, startsector 2072385,
1951447680 sectors, code offset 0x48 The root= parameter is not stored in the MBR, that's stored in GRUB's menu.lst file which is stored on a file-system (typically in the /boot/grub directory of the root fs or the grub directory of the /boot filesystem - but not always, it could be anywhere). You'll have to parse the output of file above, determine which disk/partition the menu.lst file is on, mount it, read it in and parse it. You'll also want to read in the grub/default file to figure out which grub menu entry is the default, because that's probably the one that has the root= parameter that you're most interested in. | {
"source": [
"https://serverfault.com/questions/61400",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
]
} |
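Since the question is about checking many RAID1 members, a small loop over the candidate disks (device names are placeholders) built on the file -s check above:

```bash
# Report which disks have a GRUB stage1 in their MBR (run as root)
for d in /dev/sda /dev/sdb; do
    if file -s "$d" | grep -q 'GRand Unified Bootloader'; then
        echo "$d: GRUB found"
    else
        echo "$d: no GRUB in MBR"
    fi
done
```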
61,510 | I have a server that has a really high load. Nothing is jumping out at me in terms of CPU usage, and it's not swapping. I think it's because some processes are waiting for disk IO, and I want to see what's waiting. Is there any program that'll show me what processes are waiting for IO? I know about iotop but that shows what's currently doing IO. Or is this a silly question? (If so explain how :) ) | You can use an I/O monitor like iotop, but it will show you only processes or threads with current I/O operations. If you need to browse processes waiting for I/O, use watch to monitor processes with STAT flag 'D' like below: watch -n 1 "(ps aux | awk '\$8 ~ /D/ { print \$0 }')" | {
"source": [
"https://serverfault.com/questions/61510",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
61,659 | If I run a program from the shell, and it segfaults: $ buggy_program
Segmentation fault It will tell me that; however, is there a way to get programs to print a backtrace, perhaps by running something like this: $ print_backtrace_if_segfault buggy_program
Segfault in main.c:35
(rest of the backtrace) I'd also rather not use strace or ltrace for that kind of information, as they'll print either way... | There might be a better way, but this kind of automates it. Put the following in ~/backtrace : backtrace
quit Put this in a script called seg_wrapper.sh in a directory in your path: #!/bin/bash
ulimit -c unlimited
"$@"
if [[ $? -eq 139 ]]; then
gdb -q $1 core -x ~/backtrace
fi The ulimit command makes it so the core is dumped. "$@" are the arguments given to the script, so it would be your program and its arguments. $? holds the exit status, 139 seems to be the default exit status for my machine for a segfault. For gdb , -q means quiet (no intro message), and -x tells gdb to execute commands in the file given to it. Usage So to use it you would just: seg_wrapper.sh ./mycommand and its arguments Update You could also write a signal handler that does this, see this link . | {
"source": [
"https://serverfault.com/questions/61659",
"https://serverfault.com",
"https://serverfault.com/users/1843/"
]
} |
61,823 | I'm not too familiar with RAID arrays, but I plan on making a RAID 5 array for a fileserver. However, once I get the RAID running for a while, I have plans to move it to another machine (with completely different hardware). Is it possible to move an array from machine to machine without having to break the array and place the data on it again? | If you have a dedicated RAID controller that plugs into a PCI port, then you should be fine. All of the RAID data will be stored on the controller, with matching meta-data on the drives. Then you can just move the whole thing into another server. Some controllers will even let you shuffle the drives around so that they don't need to go back in the same order that they came out in (particularly useful when you have 14 disks). If you are using software-based RAID (i.e. in Windows or Linux), then this too can be transported between machines. With Windows, when you put all the new disks in, it will ask you to import them and they should just start running without a hitch. With Linux I don't know the procedure but I suspect it would be something similar. If you are using an on-board RAID controller, this is where things can get tricky. You have specified that you will be moving between different hardware, so if you were moving from say an Adaptec RAID controller to a 3Ware controller, then the chances of survival are minimal. If both the boards have the same brand of controller, they may be able to read the meta-data off the disks and re-create the array. If you're VERY brave, you can create a new array on the new controller, and make sure that you use the exact same settings as the previous controller used (same stripe size, etc), and when it asks you if you want to initialise the array, say no, and hope for the best. I've had this work with a RAID0 and a RAID10, but never with a RAID5. So the short answer is - if you want to be able to move it around easily, invest a hundred bucks into a proper RAID controller and just move the whole thing over in one hit. | {
"source": [
"https://serverfault.com/questions/61823",
"https://serverfault.com",
"https://serverfault.com/users/16625/"
]
} |
61,915 | I've got a script that SSHes several servers using public key authentication. One of the servers has stopped letting the script log in due to a configuration issue, which means that the script gets stuck with a "Password:" prompt, which it obviously cannot answer, so it doesn't even try the rest of the servers in the list. Is there a way to tell the ssh client not to prompt for a password if key authentication fails, but instead to just report an error connecting and let my script carry on? | For OpenSSH there is BatchMode, which in addition to disabling password prompting, should disable querying for passphrase(s) for keys. BatchMode If set to “yes”, passphrase/password querying will be disabled.
This option is useful in scripts and other batch jobs where no
user is present to supply the password. The argument must be
“yes” or “no”. The default is “no”. Sample usage: ssh -oBatchMode=yes -l <user> <host> <dostuff> | {
"source": [
"https://serverfault.com/questions/61915",
"https://serverfault.com",
"https://serverfault.com/users/2284/"
]
} |
62,026 | After I have installed a package by yum (with multiple repositories configured), how can I find from which repository it has been installed? If I run yum info package-name (or yum list package-name ), I can only see that the package is "installed". | With yum-utils installed, repoquery will provide the information you seek (here 'epel' being the repository). $ repoquery -i cherokee
Name : cherokee
Version : 0.99.49
Release : 1.el5
Architecture: i386
Size : 8495964
Packager : Fedora Project
Group : Applications/Internet
URL : http://www.cherokee-project.com/
Repository : epel
Summary : Flexible and Fast Webserver
Description :
Cherokee is a very fast, flexible and easy to configure Web Server. It supports
the widespread technologies nowadays: FastCGI, SCGI, PHP, CGI, TLS and SSL
encrypted connections, Virtual hosts, Authentication, on the fly encoding,
Apache compatible log files, and much more. | {
"source": [
"https://serverfault.com/questions/62026",
"https://serverfault.com",
"https://serverfault.com/users/13376/"
]
} |
62,240 | I've been using a spreadsheet to keep track of domain names. Is there a web service anywhere that maintains a domain name database and tracks all the domains we own? The most important feature is that it would have to remind me when it's time to renew, but it would also keep track of all the registrars in my life. | Domain Monitor will do it. The only problem being that although they will send you e-mail alerts when your domain status changes, they aren't quite as fast as I would prefer. I was trying to watch a domain I wanted to buy, and the alert they sent came about 8 or 9 hours after the status actually changed. Domain Monitoring with Domain Monitor & more - DomainTools But at least it's free, and it's pretty handy. Also, they claim that they don't sell your "lookups" to squatters (meaning, when you check to see if xyz.com is available, they won't sell that info to a squatter if you don't buy it in 24 hours like many domain services apparently do). | {
"source": [
"https://serverfault.com/questions/62240",
"https://serverfault.com",
"https://serverfault.com/users/4/"
]
} |
62,316 | Is there a standard way to list the parameter values of a loaded Linux module? I'm essentially probing for another answer to this Linux kernel module parameters question , because the module I'm interested in doesn't have a /sys/modules/<module_name>/parameters interface. | You can do it by using this simple one way command, which uses the /proc/modules and /sys virtual filesystems: cat /proc/modules | cut -f 1 -d " " | while read module; do \
echo "Module: $module"; \
if [ -d "/sys/module/$module/parameters" ]; then \
ls /sys/module/$module/parameters/ | while read parameter; do \
echo -n "Parameter: $parameter --> "; \
cat /sys/module/$module/parameters/$parameter; \
done; \
fi; \
echo; \
done You will obtain an output like this: ...
...
Module: vboxnetadp
Module: vboxnetflt
Module: vboxdrv
Parameter: force_async_tsc --> 0
Module: binfmt_misc
Module: uinput
Module: fuse
Parameter: max_user_bgreq --> 2047
Parameter: max_user_congthresh --> 2047
Module: md_mod
Parameter: new_array --> cat: /sys/module/md_mod/parameters/new_array: Permission denied
Parameter: start_dirty_degraded --> 0
Parameter: start_ro --> 0
Module: loop
Parameter: max_loop --> 0
Parameter: max_part --> 0
Module: kvm_intel
Parameter: emulate_invalid_guest_state --> N
Parameter: ept --> Y
Parameter: fasteoi --> Y
Parameter: flexpriority --> Y
Parameter: nested --> N
Parameter: ple_gap --> 0
Parameter: ple_window --> 4096
Parameter: unrestricted_guest --> Y
Parameter: vmm_exclusive --> Y
Parameter: vpid --> Y
Parameter: yield_on_hlt --> Y
Module: kvm
Parameter: allow_unsafe_assigned_interrupts --> N
Parameter: ignore_msrs --> N
Parameter: min_timer_period_us --> 500
Module: tpm_infineon
Module: joydev
Module: snd_hda_codec_hdmi
Parameter: static_hdmi_pcm --> N
...
... Hope this helps. | {
"source": [
"https://serverfault.com/questions/62316",
"https://serverfault.com",
"https://serverfault.com/users/100/"
]
} |
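If you only care about a single module, you can skip the loop and read its parameters directly (kvm_intel is just an example module name):

```bash
# Print "file:value" for every parameter of one module
grep -H . /sys/module/kvm_intel/parameters/* 2>/dev/null
```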
62,411 | I need to get a list of human readable du output. However, du does not have a "sort by size" option, and piping to sort doesn't work with the human readable flag. For example, running: du | sort -n -r Outputs a sorted disk usage by size (descending): du |sort -n -r
65108 .
61508 ./dir3
2056 ./dir4
1032 ./dir1
508 ./dir2 However, running it with the human readable flag, does not sort properly: du -h | sort -n -r
508K ./dir2
64M .
61M ./dir3
2.1M ./dir4
1.1M ./dir1 Does anyone know of a way to sort du -h by size? | As of GNU coreutils 7.5 released in August 2009, sort allows a -h parameter, which allows numeric suffixes of the kind produced by du -h : du -hs * | sort -h If you are using a sort that does not support -h , you can install GNU Coreutils. E.g. on an older Mac OS X: brew install coreutils
du -hs * | gsort -h From sort manual : -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G) | {
"source": [
"https://serverfault.com/questions/62411",
"https://serverfault.com",
"https://serverfault.com/users/1134/"
]
} |
62,433 | I am administering a public web forum based on Invision Power Board v2.3.6. Registrations are already filtered thanks to ReCaptcha, but I still have loads of people that seem to register manually to post 1 or 2 spam messages and never look back. Validating the registration upfront does not seem very tractable because I can't distinguish between regular users and spammers. The ideal solution would be to put newly registered users on probation to let them post freely, but to keep the posts invisible as long as they have not been manually validated. Then, the manual validation would directly upgrade genuine posters to regular users (posts being directly visible, no moderation). IPB provides a complex policy system, so I think such a scheme (or something equivalent) is possible. Any ideas on this matter? | As of GNU coreutils 7.5 released in August 2009, sort allows a -h parameter, which allows numeric suffixes of the kind produced by du -h : du -hs * | sort -h If you are using a sort that does not support -h , you can install GNU Coreutils. E.g. on an older Mac OS X: brew install coreutils
du -hs * | gsort -h From sort manual : -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G) | {
"source": [
"https://serverfault.com/questions/62433",
"https://serverfault.com",
"https://serverfault.com/users/4930/"
]
} |
62,435 | Can anyone recommend (or recommend avoiding) a source for asset labels? I'd like foil, potentially with a color logo if possible. So if you've had something like that printed before, please speak up. Google returns far too many results to be manageable. | As of GNU coreutils 7.5 released in August 2009, sort allows a -h parameter, which allows numeric suffixes of the kind produced by du -h : du -hs * | sort -h If you are using a sort that does not support -h , you can install GNU Coreutils. E.g. on an older Mac OS X: brew install coreutils
du -hs * | gsort -h From sort manual : -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G) | {
"source": [
"https://serverfault.com/questions/62435",
"https://serverfault.com",
"https://serverfault.com/users/11087/"
]
} |
62,496 | Is there any standard or convention for where SSL certificates and associated private keys should go on the UNIX/Linux filesystem? | For system-wide use, OpenSSL should provide you /etc/ssl/certs and /etc/ssl/private . The latter of which will be restricted 700 to root:root . If you have an application that doesn’t perform initial privilege separation from root , then it might suit you to locate them somewhere local to the application with the relevantly restricted ownership and permissions. | {
"source": [
"https://serverfault.com/questions/62496",
"https://serverfault.com",
"https://serverfault.com/users/224/"
]
} |
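A sketch of dropping a certificate and key into those system-wide locations with appropriately tight permissions (file names are placeholders; exact ownership and group conventions vary by distribution):

```bash
sudo cp example.com.crt /etc/ssl/certs/
sudo cp example.com.key /etc/ssl/private/
sudo chown root:root /etc/ssl/private/example.com.key
sudo chmod 600 /etc/ssl/private/example.com.key
```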
62,578 | How do I get a list of drive letters and their associated labels on a windows system through a bat file? | This will get most of it: Net Use If you have any drives mapped via subst you would also need to get those: Subst For completeness, you would do it like this in Powershell (if you are on windows 7 or have installed it): gwmi win32_LogicalDisk -filter DriveType=4 You can also do it from the command prompt or a batch file using WMI like this: wmic logicaldisk get caption,providername,drivetype,volumename | {
"source": [
"https://serverfault.com/questions/62578",
"https://serverfault.com",
"https://serverfault.com/users/12890/"
]
} |
62,687 | I'm pretty impressed with Splunk , especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate, searching. It's a great product. However, the cost just way too high to consider for full production use for our company. All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the Event log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working ok -- that's saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collect via syslog/Snare), and a dedicated product for our custom apps (like Log4Net Dashboard ). Would using a simple syslog server such as Kiwi, sent to SQL Server (perhaps with fulltext enabled) work? I'd hope the cost should be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.) Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple firewalls and switches. 90% is Windows. | Note : This is all regarding Linux and free software , as that's what I mostly use, but you should be fine with a syslog client on Windows to send the logs to a Linux syslog server. Logging to an SQL server: With only ~30 machines, you should be fine with pretty much any centralised syslog-alike and an SQL backend. I use syslog-ng and MySQL on Linux for this very thing. Pretty frontends for graphing are the main problem -- It seems that there is a lot of hacked-up front-ends which will grab items from the logs and show how many hits, alerts etc but I've not found anything integrated and clean. Admittedly this is the main thing that you're looking for... (If I find anything good then I'll update this section!) Alerting : I use SEC on a Linux server to find bad things happening in the logs and alert me via various methods. It's incredibly flexible and not as clicky as Splunk. There's a nice tutorial here which guides through a lot of the possible features. I also use Nagios for graphs of various stats and some alerting which I don't get from the logs (such as when services are down etc). This can be easily customized to add graphs of anything you like. I have added graphs of items such as the number of hits made to an http server, by having the agent use the check_logfiles plugin to count the number of hits in the logs (it saves the position it gets up to for each check period). Overall, it depends on how much your time will cost to set this up , as there are many options which you can use but they aren't as integrated as Splunk and will probably require more effort to get doing what you want. The Nagios graphs are straightforward to set up but don't give you historical data from before you add the graph, whereas with Splunk (and presumably other front-ends) you can look back at the past logs and graph things you've only just thought of to look at from them. 
Note also that the SQL database format and indexing will have a huge effect on the speed of queries, so your idea of fulltext indexing will make a tremendous increase to the speed of searches. I'm not sure if MySQL or PostgreSQL will do something similar. Edit : MySQL will do fulltext indexing, but only on MyISAM tables prior to MySQL 5.6. In 5.6 Support was added for InnoDB . Edit : Postgresql can do full text search of course: http://www.postgresql.org/docs/9.0/static/textsearch.html | {
"source": [
"https://serverfault.com/questions/62687",
"https://serverfault.com",
"https://serverfault.com/users/2258/"
]
} |
62,837 | df only reports the disk free space. How can I get my allowed free space? | Take a look at the quota command here: http://linux.die.net/man/1/quota quota For example: quota -u user1 System response: Disk quotas for user user1 (uid 501):
Filesystem blocks quota limit grace files quota limit grace
/dev/hda6 992 50000 55000 71 10000 11000 quota report Report on all users over quota limits: quota -q Quota summary report: repquota -a Report for user quotas on device /dev/hda5
Block grace time: 7days; Inode grace time: 7days
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 4335200 0 0 181502 0 0
bin -- 15644 0 0 101 0 0
...
user1 -- 1944 0 0 120 0 0 No limits shown with this user as limits are set to 0. | {
"source": [
"https://serverfault.com/questions/62837",
"https://serverfault.com",
"https://serverfault.com/users/19420/"
]
} |
62,841 | Maybe this is a noob question, but usually I don't have to deal with that stuff. I just installed Apache with mod_proxy and I want to use it as a forward HTTP proxy. What I want to do is: if a request of a certain format is executed by a web client, then the request will be sent to another destination URL without the client noticing it. No matter what domain, when the URL contains 'mysign' directly after the domain name it should be sent to the destination I specify in my config. E.g. all of the following requests should be sent to ' http://localhost/mysign/ ' instead of their ordinary destination: http://www.gooogle.com/mysign
http://www.another.com/mysign
http://web.dev.us/mysigne/total?p=2 The following requests should not be tunneled: http://mysign.not.com/test
http://wwww.ende.com/do/not/mysign How can I configure that? Update: I don't want to send a redirect to the client. The client browser shouldn't notice that the response is sent by a different endpoint than expected. | Take a look at the quota command here: http://linux.die.net/man/1/quota quota For example: quota -u user1 System response: Disk quotas for user user1 (uid 501):
Filesystem blocks quota limit grace files quota limit grace
/dev/hda6 992 50000 55000 71 10000 11000 quota report Report on all users over quota limits: quota -q Quota summary report: repquota -a Report for user quotas on device /dev/hda5
Block grace time: 7days; Inode grace time: 7days
Block limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 4335200 0 0 181502 0 0
bin -- 15644 0 0 101 0 0
...
user1 -- 1944 0 0 120 0 0 No limits shown with this user as limits are set to 0. | {
"source": [
"https://serverfault.com/questions/62841",
"https://serverfault.com",
"https://serverfault.com/users/19532/"
]
} |
63,002 | Is there a quick way of deleting all the .pyc files from a tree of directories? | If you've got GNU find then you probably want find <directory name> -name '*.pyc' -delete If you need something portable then you're better off with find <directory name> -name '*.pyc' -exec rm {} \; If speed is a big deal and you've got GNU find and GNU xargs then find <directory name> -name '*.pyc' -print0 | xargs -0 -P <some number greater than 1> rm This is unlikely to give you that much of a speed up however, due to the fact that you'll mostly be waiting on I/O. | {
"source": [
"https://serverfault.com/questions/63002",
"https://serverfault.com",
"https://serverfault.com/users/7355/"
]
} |
63,014 | In Linux, the command ip address add [...] has a scope argument. The man page says that the scope is "the scope of the area where this address is valid". Follows the list of legal scopes: global site link host What does this "area" of "validity" refer to? | from http://linux-ip.net/html/tools-ip-address.html : Scope | Description global | valid everywhere site | valid only within this site (IPv6) link | valid only on this device host | valid only inside this host (machine) Scope is normally determined by the ip utility without explicit use on the command line. (...) The following citations are from the book Understanding Linux network internals
by Christian Benvenuti, O'Reilly: "The scope of a route in Linux is an indicator of the distance to the destination network. The scope of an IP address is an indicator of how far from the local host the address is known, which, to some extent also tells you how far the owner of that address is from the local host (...). Host: An address has a host scope when it is used only to communicate within the host itself. Outside the host this address is not known and cannot be used. An example is the loopback address, 127.0.0.1 Link: An address has a link scope when it is meaningful and can be used only within a LAN. An example is a subnet's broadcast address. Global: An address has global scope when it can be used anywhere. This is the default scope for most addresses. (...)" The main reason to use scopes seems to be that a host with multiple interfaces and addresses has to decide when to use which address. For communication with itself a loopback address (scope host) can be used. With communication elsewhere, a different address has to be selected. | {
"source": [
"https://serverfault.com/questions/63014",
"https://serverfault.com",
"https://serverfault.com/users/5979/"
]
} |
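For reference, the scope is just another argument to ip address add; a sketch with a placeholder address and device:

```bash
# Add an address that is only valid on the local link
ip address add 169.254.10.1/16 dev eth0 scope link
```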
63,383 | I want to use memcached http://www.danga.com/memcached/ I have installed it through yum install memcached But now I need to connect to PHP, and there is an extension named memcache and one named memcached? ARGH http://us3.php.net/manual/en/book.memcache.php http://us3.php.net/manual/en/book.memcached.php Could someone point me in the right direction here.. which one is going to work? Also, do I need to open any ports for it to work even though it's local?
After running it, I try telnet 127.0.0.1 11211 and I get connection refused. | You probably want to see the PHP Client Comparison . Short version: They will both work, and for most cases either one will do just fine. Regarding the other issue: Yes, you should be able to do telnet 127.0.0.1 11211 . Very few firewalls would block localhost from communicating with itself. If you are not able to connect, verify that memcached really is running by doing ps auxwww | grep memcached , which will also show you the command-line arguments used to start memcached. One of the arguments should be -p 11211 or another port number. See man memcached for the meaning of all the possible arguments. | {
"source": [
"https://serverfault.com/questions/63383",
"https://serverfault.com",
"https://serverfault.com/users/18284/"
]
} |
63,403 | At what point is it worth adding a CDN (content delivery network) to your website? Does it make sense to use it for a relatively low-traffic website that's a web application? The clients are all over the USA. Will a CDN even offer a noticeable difference to the end user for my scenario or does it only show effectiveness once you truly have hit scalability levels? Edit: Information about the server setup: currently it's a single ASP.NET instance on a shared hosting environment. What would go into the CDN would be some image files, jQuery-related files (I know Google provides a CDN for the core), CSS files, and probably some moderate-size PDF files. | At what point is it worth adding a CDN (content delivery network) to your website? When one of the following occurs: You're reaching a large, international audience. Careful analysis of your audience shows that many of them are 100 - 300ms Round Trip Time (RTT) away. You do the math, and discover that a large group of your customers are getting a somewhat slow site, due to TCP/IP's so-so performance on links with high bandwidth delay product . You find that you have a lot of requests for mostly static files, i.e. streaming video, audio, PDFs, images etc. In fact, there are so many requests per second that it can't easily be handled by just setting up 2, 3, 4 or more servers dedicated to static file serving. You're a tech geek, and you set up a site using Amazon Cloudfront or Cachefly just for the fun of it. Don't feel bad, I have done it too. I have repeatedly seen articles where SimpleCDN didn't do so great. It is really hard to objectively quantify the performance of the various CDNs, but here is one attempt . Maybe I'm being unfair to SimpleCDN here, but they wouldn't be my first choice. Amazon Cloudfront is pretty consistently good ... not great, but cheap and easy to get started with . Edit: Akamai still seems to be the very best CDN, expensive but so worth it. See SmugMug's recent presentation , slide 7 in the PDF or the more detailed version in the video. I have never worked with Akamai, I have always dismissed them as obviously too expensive for the sites I have worked on. Maybe that is beginning to change, I don't know, but they are trying to lower the barrier to entry to their CDN service. | {
"source": [
"https://serverfault.com/questions/63403",
"https://serverfault.com",
"https://serverfault.com/users/10771/"
]
} |
63,404 | While idly reviewing event logs I saw something new (to me) this morning. Event Type: Information
Event Source: DNS
Event Category: None
Event ID: 5504
Date: 9/8/2009
Time: 8:38:09 AM
User: N/A
Computer: MYSERVER
Description:
The DNS server encountered an invalid domain name in a packet from 72.233.33.107. The packet will be rejected. The event data contains the DNS packet. I got a few mentioning that .107 address and several more for .109 as well. All within about a 5 second span of time. The event data isn't all that helpful (or is it?): Data:
0000: 97 5b 80 05 00 00 00 00 [.....
0008: 00 00 00 00 .... Now I'm curious... how could my internal AD domain server be getting packets from those external address(es)? | At what point is it worth adding a CDN (content delivery network) to your website? When one of the following occurs: You're reaching a large, international audience. Careful analysis of your audience shows that many of them are 100 - 300ms Round Trip Time (RTT) away. You do the math, and discover that a large group of you customers are getting a somewhat slow site, due to TCP/IP's so-so performance on links with high bandwidth delay product . You find that you have a lots of requests for mostly static files, i.e. streaming video, audio, PDFs, images etc. In fact, there are so many requests per second that it can't easily be handled by just setting up 2, 3, 4 or more servers dedicated to static file serving. You're a tech geek, and you set up a site using Amazon Cloudfront or Cachefly just for the fun of it. Don't feel bad, I have done it too. I have repeatedly seen articles where SimpleCDN didn't do so great. It is really hard to objectively quantify the performance of the various CDNs, but here is one attempt . Maybe I'm being unfair to SimpleCDN here, but they wouldn't be my first choice. Amazon Cloudfront is pretty consistenly good ... not great, but cheap and easy to get started with . Edit: Akamai still seems to be the very best CDN, expensive but so worth it. See SmugMugs recent presentation , slide 7 in the PDF or the more detailed version in the video. I have never worked with Akamai, I have always dismissed them as obviously too expensive for the sites I have worked on. Maybe that is beginning to change, I don't know, but they are trying to lower the barrier to entry to their CDN service. | {
"source": [
"https://serverfault.com/questions/63404",
"https://serverfault.com",
"https://serverfault.com/users/1936/"
]
} |
63,705 | How do I pipe the standard error stream without piping the standard out stream? I know this command works, but it also writes the standard out. Command 2>&1 | tee -a $LOG How do I get just the standard error? Note: What I want out of this is to just write the stderr stream to a log and write both stderr and stdout to the console. | To do that, use one extra file descriptor to switch stderr and stdout: find /var/log 3>&1 1>&2 2>&3 | tee foo.file Basically, it works, or at least I think it works, as follows: The re-directions are evaluated left-to-right. 3>&1 Makes a new file descriptor, 3 a duplicate (copy) of fd 1 (stdout). 1>&2 Make stdout (1) a duplicate of fd 2 (stderr) 2>&3 Make fd 2, a duplicate (copy) of 3, which was previously made a copy of stdout. So now stderr and stdout are switched. | tee foo.file tee duplicates file descriptor 1 which was made into stderr. | {
"source": [
"https://serverfault.com/questions/63705",
"https://serverfault.com",
"https://serverfault.com/users/1834/"
]
} |
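The note in the question (log only stderr, but show both streams on the console) can also be met without swapping descriptors, using process substitution in bash; this is an alternative sketch, not part of the answer above:

```bash
# stderr goes to the log (appended) and back to the console; stdout is untouched
Command 2> >(tee -a "$LOG" >&2)
```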
63,764 | I created some users with: $ useradd john and I forgot to specify the parameter -m to create the home directory and to have the skeleton files copied to each user. Now I want to do that, and I don't want to recreate all users (there must be an easier way). So, is there any way to create the user directories and copy the skeleton files? I thought about creating the directories, chowning them to the corresponding user, copying all the skeleton files and chowning them to the corresponding user. But if there's a command like useradd -m that doesn't create the user again, but creates the directories, it'd be better. | Also you can use mkhomedir_helper Usage: /sbin/mkhomedir_helper <username> [<umask> [<skeldir>]] | {
"source": [
"https://serverfault.com/questions/63764",
"https://serverfault.com",
"https://serverfault.com/users/4156/"
]
} |
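Since the question is about several users created without -m, a small loop over mkhomedir_helper (run as root; the usernames are examples):

```bash
# Create missing home directories, populated from /etc/skel
for u in john alice bob; do
    mkhomedir_helper "$u"
done
```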
64,239 | So, let's say your server had 6 healthy hard drives. A drive fails (will not mount/detect, drops out of raid with errors) or is failing (SMART getting worse, etc). You need to swap out the bad drive. When you open the case you see.. six identical hard drives. How can you tell which one is no longer healthy/mounting/functioning? System would be linux, most likely ubuntu server, using at most simple software RAID. The hard drives would be SATA and connected directly to the motherboard. (no raid controller) I don't want to randomly disconnect drives until I pick the correct one. The drives all appear identical to me; I imagine there is some common way to identify which drive is which that I am unaware of. Does anyone have any pointers/tips/best practices? Thanks! EDIT: I had wanted this to be 'generalized' in a hand-wavy sort of way, but it just came off as 'incomplete' and 'horrible'. My bad! | I had this exact problem on a (tower) server just like you explain, and it was easy: smartctl will output the serial number of the drive Vendors sometimes ship their own specific tools, like hdparm, that will do the same. So output the serial of the bad drive, and then use a dentist's mirror and a flashlight to find the drive. On a rackmount you'll usually have indicator lights like other people have said, but I bet the same would apply. | {
"source": [
"https://serverfault.com/questions/64239",
"https://serverfault.com",
"https://serverfault.com/users/4401/"
]
} |
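The answer mentions that smartctl prints the serial number but doesn't show the command; a sketch (run as root; the device names are examples) that maps each Linux device to the serial printed on the drive label:

```bash
# Match Linux device names to the serial numbers printed on the drives
for d in /dev/sd{a..f}; do
    printf '%s: ' "$d"
    smartctl -i "$d" | grep -i 'serial number'
done
```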
64,484 | I know that TLS is essentially a newer version of SSL, and that it generally supports transitioning a connection from unsecured to secured (commonly through a STARTTLS command). What I don't understand is why TLS is important to an IT Professional, and why given the choice I would pick one over the other. Is TLS really just a newer version, and if so is it a compatible protocol? As an IT Professional: When do I use which? When do I not use which? | Short answer: SSL is the precursor to TLS. SSL was a proprietary protocol developed by Netscape Communications, later standardised within IETF and renamed as TLS.
In short, the versions go in this order: SSLv2, SSLv3, TLSv1.0, TLSv1.1 and TLSv1.2. Contrarily to a relatively wide-spread belief, this is not at all about having to run a service on a distinct port with SSL and being able to on the same port as the plain-text variant with TLS.
Both SSL and TLS can be used for the two approaches. This is about the difference between SSL/TLS upon connection (sometimes referred to as "implicit SSL/TLS") and SSL/TLS after an command was issued at the protocol level, typically STARTTLS (sometimes referred to as "explicit SSL/TLS"). The key word in STARTTLS is "START", not TLS. It's a message, at the application protocol level, to indicate there needs to be a switch to SSL/TLS, if it hasn't been initiated before any application protocol exchange. Using either modes should be equivalent, provided the client is configured to expect SSL/TLS one way or another, so as not to be downgraded to a plain-text connection. Longer answer: SSL v.s. TLS As far as I'm aware, SSLv1 never left the labs. SSLv2 and SSLv3 were protocols developed
by Netscape. SSLv2 has been considered insecure for a while, since it's prone to downgrade attacks.
SSLv3 internally uses (3,0) as its version number (within the ClientHello message). TLS is the result of the standardisation as a more open protocol within IETF.
(I think I have read somewhere, perhaps in E. Rescorla's book, that the name had been chosen in such
a way that all participants were equally unhappy with it, so as not to favour a particular company:
this is quite common practice in standards bodies.) Those interested in how the transition was made can read the SSL-Talk List FAQ ; there are multiple copies of this document around, but most links (to netscape.com ) are outdated. TLS uses very similar messages (sufficiently different to make the protocols incompatible, although it's possible to negotiate a common version ). The TLS 1.0 , 1.1 and 1.2 ClientHello messages use (3,1) , (3,2) , (3,3) to indicate the version number, which clearly shows the continuation from SSL. There are more details about the protocol differences in this answer .
your users to have clients that support these versions. As usual, it's always a risk-assessment exercise (preferably backed with a business case if appropriate).
This being said, cut off SSLv2 anyway. In addition, note that the security provided by SSL/TLS isn't just about which version you use, it's also about proper configuration: it's certainly preferable to use SSLv3 with a strong cipher suite than TLSv1.0 with a with a weak (or anonymous/null-encryption) cipher suite.
Some cipher suites, considered too weak, have been explicitly forbidden by newer versions of TLS. The tables in the Java 7 SunJSSE provider (and their footnotes) can be of interest if you want more details. It would be preferable to use TLS 1.1 at least, but not all clients support them yet unfortunately (e.g. Java 6). When using a version under 1.1, it's certainly worth looking into mitigating the BEAST vulnerability . I'd generally recommend Eric Rescorla's book - SSL and TLS: Designing and Building Secure Systems, Addison-Wesley, 2001 ISBN 0-201-61598-3 to people who really want more details. Implicit v.s. Explicit SSL/TLS There is a myth saying that TLS allows you to use the same port whereas SSL can't. That's just not true (and I'll leave port unification out for this discussion).
Unfortunately, this myth seems to have been propagated to users by the fact that some applications like MS Outlook sometimes offer a choice between SSL and TLS in their configuration options when they actually mean a choice between implicit and explicit SSL/TLS. (There are SSL/TLS experts at Microsoft, but it seems they weren't involved in the Outlook UI.) I think the reason why this confusion happen is because of the STARTTLS mode. Some people seem that have understood this as STARTTLS = TLS, but this is not the case.
The key word in STARTTLS is "START", not TLS. Why this wasn't called STARTSSL or STARTSSLORTLS is because these extensions were specified within IETF, which only used names used in its specifications (assuming that the TLS name would eventually be the only standing, I guess). SSL on the same port as the plain-text service: HTTPS proxy. Nowadays, most HTTPS servers can handle TLS, but a few years ago, most people were using SSLv3 for HTTPS. HTTPS (strictly speaking, standardised as HTTP over TLS ) normally establishes the SSL/TLS connection upon TCP connection, and then exchanges HTTP message over the SSL/TLS layer.
There is an exception to this when using an HTTP proxy in between. In this case, the client connects to the HTTP proxy in clear (typically on port 3128), then issues the CONNECT HTTP command and, provided the response was successful, initiates the SSL/TLS handshake by sending a ClientHello message. All of this happens on the same port as far as the connection between browser and proxy is concerned (obviously not between proxy and the target server: it's not even the same machine). This works just fine with SSLv3. Many of us in situations behind a proxy will have used this against servers that didn't support at least TLS 1.0. SSL on the same port as the plain-text service: e-mail. This one is clearly out of specifications, but in practice, it often works.
Strictly speaking the specifications talk about switching to TLS (not SSL) after using the STARTTLS command. In practice, SSL often works too (just like the "HTTP over TLS" spec also encompasses
using SSL instead of TLS too).
You can try it by yourself. Assuming you have an SMTP or IMAP server that supports STARTTLS,
use Thunderbird, go into the preferences, advanced options, config editor and turn off security.enable_tls .
Many servers will still accept the connection, simply because their implementations delegate the SSL/TLS layer to an SSL/TLS library which will generally be able to handle SSL and TLS in the same way, unless configured not to do so. As the OpenLDAP FAQ puts it, " While the mechanism is designed for use with TLSv1, most implementations will fallback to SSLv3 (and SSLv2) if necessary. ". If you're not sure, check with a tool like Wireshark. TLS on a distinct port. Many clients can use TLS 1.0 (at least) for protocols where the secured variant is on a different port. Obviously, there are a number of browsers and web servers that support TLS 1.0 (or above) for HTTPS. Similarly, SMTPS, IMAPS, POPS and LDAPS can use TLS too. They're not limited to SSL. When do I use which? When do I not use which? Between explicit and implicit SSL/TLS, it doesn't really matter. What matters is that your client knows what to expect and is configured appropriately to do so. More importantly, it should be configured to reject plain text connections when it's expecting an SSL/TLS connection, whether it's implicit or explicit . The main difference between explicit and implicit SSL/TLS will be in the clarity of the configuration settings. For example, for LDAP, if the client is an Apache Httpd server ( mod_ldap -- its documentation also mislabels the difference between SSL and TLS, unfortunately), you can use implicit SSL/TLS by using an ldaps:// URL (e.g. AuthLDAPURL ldaps://127.0.0.1/dc=example,dc=com?uid?one ) or use explicit SSL/TLS by using an extra parameter (e.g. AuthLDAPURL ldap://127.0.0.1/dc=example,dc=com?uid?one TLS ). There is perhaps generally speaking a slightly lesser risk when specifying the security protocol in the URL scheme ( https , ldaps , ...) than when expecting the client to configure an additional setting to enable SSL/TLS, because they may forget. This is arguable.
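In the same spirit as the Wireshark suggestion above, openssl s_client is a quick way to check by hand whether a given server offers implicit SSL/TLS, explicit STARTTLS, or both (the host names below are placeholders, not taken from the original answer): openssl s_client -connect mail.example.com:993
openssl s_client -connect mail.example.com:143 -starttls imap The first command expects the handshake immediately (implicit mode on the IMAPS port); the second connects in plain text and only issues STARTTLS afterwards (explicit mode on the plain IMAP port).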
There may also be issues in the correctness of the implementation of one versus the other too (for example, I think the Java LDAP client doesn't support host name verification when using ldaps:// , when it should, whereas it is supported with ldap:// + StartTLS). In doubt, and to be compatible with more clients if possible, it doesn't seem to do any harm to offer both services when the server supports it (your server will just be listening on two ports at the same time). Many server implementations for protocols that can be used with either modes will support both. It is the responsibility of the client not to let itself being downgraded to a plain-text connection. As a server administrator, there is nothing you can technically do on your side to prevent downgrade attacks (apart from requiring a client-certificate perhaps). The client must check that SSL/TLS is enabled, whether it's upon connection or after a STARTTLS -like command. In the same way as a browser shouldn't let itself be redirected from https:// to http:// , a client for a protocol that support STARTTLS should make sure the response was positive and the SSL/TLS connection was enabled before proceeding any further. An active MITM attacker could otherwise easily downgrade either connections. For example, older versions of Thunderbird had a bad option for this called "Use TLS, if available" , which essentially implied that if a MITM attacker was able to alter the server messages so that it didn't advertise support for STARTTLS, the client would silently let itself be downgraded to a plain-text connection. (This insecure option is no longer available in Thunderbird.) | {
"source": [
"https://serverfault.com/questions/64484",
"https://serverfault.com",
"https://serverfault.com/users/14037/"
]
} |
64,656 | Is it possible to use variables in Apache config files? For example, when I'm setting up a site with Django+WSGI, the config file might look like: <Directory /path/to/foo/>
Order allow,deny
Allow from all
</Directory>
Alias /foo/static /path/to/foo/static
WSGIScriptAlias /foo /path/to/foo/run_wsgi And I'd like to turn the '/path/to/foo' into a variable so it only needs to be defined in one place. Something like: Variable FOO /path/to/foo
… Thanks! | You could use mod_macro , which has been included in Apache httpd since version 2.4 Before that it had to be installed separately, see mod_macro . For example on Debian: apt-get install libapache2-mod-macro; a2enmod macro . Example configuration /etc/apache2/conf.d/vhost.macro <Macro VHost $host $port>
<VirtualHost $host:$port>
ServerName $host
DocumentRoot /var/vhosts/$host
<Directory /var/vhosts/$host>
# do something here...
</Directory>
</VirtualHost>
</Macro> /etc/apache2/sites-available/vhost.mysite.com Use VHost vhost.mysite.com 80 | {
"source": [
"https://serverfault.com/questions/64656",
"https://serverfault.com",
"https://serverfault.com/users/1299/"
]
} |
64,657 | I set up IIS 7 on my computer and have a demo site which works fine when I go to http://localhost/ or http://<my-ip> /. However users on the intranet (my network) cannot access my site. (when they type in http://<my-internal-ip>/ ) Should they be able to? If not, how can I make this possible. Thanks for any help/explanations,
Andrew | You could use mod_macro , which has been included in Apache httpd since version 2.4 Before that it had to be installed separately, see mod_macro . For example on Debian: apt-get install libapache2-mod-macro; a2enmod macro . Example configuration /etc/apache2/conf.d/vhost.macro <Macro VHost $host $port>
<VirtualHost $host:$port>
ServerName $host
DocumentRoot /var/vhosts/$host
<Directory /var/vhosts/$host>
# do something here...
</Directory>
</VirtualHost>
</Macro> /etc/apache2/sites-available/vhost.mysite.com Use VHost vhost.mysite.com 80 | {
"source": [
"https://serverfault.com/questions/64657",
"https://serverfault.com",
"https://serverfault.com/users/62980/"
]
} |
64,753 | Is it possible to install the SMTP server that you can install in Windows Server 2008 in Windows 7? Or something similar? I'm developing an application that will make use of it and I want to be able to test it and try it locally. | Since I needed this only for development, I ended up using smtp4dev , which is exactly what you need when developing an application that sends emails. The project description: Dummy SMTP server that sits in the
system tray and does not deliver the
received messages. The received
messages can be quickly viewed, saved
and the source/structure inspected.
Useful for testing/debugging software
that generates email. | {
"source": [
"https://serverfault.com/questions/64753",
"https://serverfault.com",
"https://serverfault.com/users/2563/"
]
} |
65,199 | Is it possible to alias a hostname in Linux? It has been asked by jmillikin at various Ubuntu forums as follows: Is it possible to create a hostname alias? Sort of like /etc/hosts,
but with other hostnames rather than IP addresses. So that with some
file like this, you could ping "fakehost1", and it would be re-mapped
to "realhost", and then "realhost" would be resolved to an IP address. # Real host # Aliases
realhost fakehost1 fakehost2 fakehost3 Somebody has answered about ssh, but not about ping, etc. My main
purpose is to use it as an alias for a Subversion server. In my case, realhost
is under a dynamic IP address. So, the "/etc/hosts" alias doesn't work. I want to
access my Subversion server as svn://my_svnserver/my_repos instead of svn://realhost/my_repos . | For those who don't have an account on the forums (or don't wish to login): if your main issue is not to ping but to ssh, you can create/edit your
~/.ssh/config adding lines like these: Host fakehost1
Hostname real-hostname
Host fakehost2
Hostname real-hostname2
Host fakehost3
Hostname real-hostname3 | {
"source": [
"https://serverfault.com/questions/65199",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
65,265 | A user's account keeps getting locked out in Active Directory. It's probably caused by an app that's using Windows authentication to connect to SQL Server. Is there a way to find out which app is causing it and why the app might be causing failed login attempts? | Have a look at the Account Lockout and Management Tools available on the Microsoft Download Center. Specifically LockoutStatus.exe and EventCombMT.exe. You might not be able to exactly pinpoint where the lockout is coming from but you should be able to narrow it down quite a bit to make it easier to see. Here are a couple more Technet articles that might help: Maintaining and Monitoring Account Lockout Account Lockout Tools (description of the tools in the download linked to above) Using the checked Netlogon.dll to track account lockouts Enabling debug logging for the Net Logon service | {
"source": [
"https://serverfault.com/questions/65265",
"https://serverfault.com",
"https://serverfault.com/users/16854/"
]
} |
65,329 | I just discovered that IIS builds up logs indefinitely and there don't appear to be any IIS settings that will automatically clean out old log files. What is the best way to keep my IIS logs under control so that they don't fill the entire hard drive? | You'll have to run a scheduled task to do it. Here's a Powershell script that should work. set-location c:\windows\system32\Logfiles\W3SVC1\ -ErrorAction Stop
foreach ($File in Get-ChildItem -Filter *.log) {
if ($File.LastWriteTime -lt (Get-Date).AddDays(-30)) {
del $File
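# Tip (not part of the original answer): append -WhatIf to the del line above
# to preview which log files would be removed before scheduling the task.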
}
} This should purge anything that was last modified more than 30 days ago. Change the path in the first line to wherever your log files are stored. Also change the -30 to however long you want to retain the files. -30 means you will delete anything older than 30 days. You can have a look at this article that shows different properties for the FileInfo object if you don't want to use LastWriteTime. | {
"source": [
"https://serverfault.com/questions/65329",
"https://serverfault.com",
"https://serverfault.com/users/1047/"
]
} |
65,616 | Has anyone seen this before? I've got a raid 5 mounted on my server and for whatever reason it started showing this: jason@box2:/mnt/raid1/cra$ ls -alh
ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
total 0
drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
drwxr-xr-x 3 root root 16 2009-08-14 17:15 ..
?????????? ? ? ? ? ? 257ad35ee0b12a714530c30dccf9210f
drwxr-xr-x 3 root root 57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
?????????? ? ? ? ? ? e6eacc985fea729b2d5bc74078632738 The md5 strings are actual directory names and not part of the error. The question marks are odd, and any directory with a question mark throws an io error when you attempt to use/delete/etc it. I was unable to umount the drive due to "busy". Rebooting the server "fixed" it but it was throwing some raid errors on shutdown. I have configured two raid 5 arrays and both started doing this on random files. Both are using the following config: mkfs.xfs -l size=128m -d agcount=32
mount -t xfs -o noatime,logbufs=8 Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives and that was suggested as a possible issue. Could this be the culprit? | I had a similar problem because my directory had read (r) but not execute (x) rights.
My directory listing showed: myname@srv:/home$ ls -l service/mail/
ls: cannot access service/mail/001_SERVICE INBOX: Permission denied
total 0
-????????? ? ? ? ? ? 001_SERVICE INBOX
d????????? ? ? ? ? ? 01_CURRENT SERVICE The mail directory had the r bit set, but not the x that you need for listing or search and access.
Doing sudo chmod -R g+x mail solved this problem. | {
"source": [
"https://serverfault.com/questions/65616",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
65,712 | I have one server which I'm hosting a handful of sites on. Currently, each site has it's domain hosted by an independent provider and each has an A record pointing to the server's IP address. But if I want to change the server in the future, I will have to go back an update each IP address in each DNS record. Is it possible to use a CNAME record on each domain to point to another domain that I control directly? This is so I can update the IP address in 1 place myself and not have to get all these other DNS providers to update their records separately? | That's exactly the point of a CNAME. A CNAME does not need to point to a DNS in the same zone, it can point to any DNS name registered with any nameserver. What it means for your clients is an additional DNS lookup on the NS for the other host, but that's a tiny price to pay for the majority of websites on the internet. | {
"source": [
"https://serverfault.com/questions/65712",
"https://serverfault.com",
"https://serverfault.com/users/80/"
]
} |
65,718 | An interesting question. I have logged into a Linux (most likely SuSE) host. Is there some way that I can tell programmatically that I am a VM host or not? Also assume that the vmtools are not installed. | Use standard Linux tools to inspect the hardware on the system. cat /proc/scsi/scsi or ethtool -i eth0 or dmidecode | grep -i vmware If the output of these commands shows hardware with a manufacturer name of "VMWare", you're on a VMWare VM. Multiple commands are provided here because system configurations and tools differ. | {
"source": [
"https://serverfault.com/questions/65718",
"https://serverfault.com",
"https://serverfault.com/users/1592/"
]
} |
66,138 | I seem to be running into a little bit of a problem understanding how to get this to work. I have a new server I'm building sitting behind the office NAT at work, its reverse dns maps to office.mydomain.com , but I want the machine to be ns2.mydomain.com for the sake of puppet. nodes.pp snippet: node 'ns2.mydomain.com' inherits basenode {
info('ns2.mydomain.com')
}
node 'office.mydomain.com' inherits basenode {
info('office.mydomain.com')
} And my 'puppet.conf' on the client: [main]
#was node_name=ns2.mydomain.com
#was fqdn=ns2.mydomain.com
certname=ns2.mydomain.com
node_name=cert My syslog on the server reports: Sep 16 22:59:12 support puppetmasterd[2800]: Host is missing hostname and/or domain: office.mydomain.com
Sep 16 22:59:12 support puppetmasterd[2800]: (Scope(Node[office.mydomain.com])) office.mydomain.com
Sep 16 22:59:12 support puppetmasterd[2800]: Compiled catalog for office.mydomain.com in 0.03 seconds
Sep 16 22:59:12 support puppetmasterd[2800]: Caching catalog for ns2.mydomain.com How can I make it grab the config for ns2.mydomain.com without doing something like this: node 'ns2.mydomain.com' inherits basenode {
info('ns2.mydomain.com')
}
node 'office.mydomain.com' inherits 'ns2.mydomain.com' {
info('office.mydomain.com')
} UPDATE : This problem seems to be causing other issues as well. For instance if I info("$fqdn") while the machine is sitting behind office.mydomain.com the fqdn fact is empty, as well as the $operatingsystem . Its almost like the facts aren't being discovered properly. Is there perhaps a NAT issue? Are there any suggestions for tracking down this cause of this problem? | Aaah, Puppet hostname detection. What a nightmare... By default, what name will be used to find which node definition to use is based on the contents of the fqdn fact. What that actually maps to is dependent on a few different things, and yes, reverse DNS is one of them -- and it's preferred over the machine's own hostname! However, this name (usually) only applies at certificate generation time. You're actually misusing the node_name variable -- it should be set to one of 'cert' or 'facter'. The fqdn parameter is also deprecated. What you actually want to do is set the certname parameter on the client to the node name you want to use, and then set node_name to cert (or just leave it out, since cert is the default). This will take the node name from the CN of the certificate that the client presents, and the certname parameter makes sure that's set to something reasonable rather than whatever facter decides to come up with on it's own. Unfortunately, since you've already got "wrong" certs created, you'll need to regenerate those certs ( rm -rf /var/lib/puppet/ssl on the client and re-run Puppet) after you've setup the config, so that the right certs get created and used. If this all sounds a little complicated, you're right -- it is. Welcome to Puppet. | {
"source": [
"https://serverfault.com/questions/66138",
"https://serverfault.com",
"https://serverfault.com/users/12185/"
]
} |
66,347 | I am working on a tiny little PHP project for a friend of mine, and I have a WAMP environment setup for local development. I remember the days when the response from my local Apache 2.2 was immediate. Alas, now that I got back from a long, long holiday, I find the responses from localhost painfully slow. It takes around 5 seconds to get a 300B HTML page served out. When I look at the task manager, the httpd processes (2) are using up 0% of the CPU and overall my computer is not under load (0-2% CPU usage). Why is the latency so high? Is there any Apache setting that I could tweak to perhaps make its thread run with a higher priority or something? It seems like it's simply sleeping before it's serving out the response. | The issue was with Apache's main settings file httpd.conf . I found this: There are three ways to set up PHP to work with Apache 2.x on Windows. You can run PHP as a handler, as a CGI, or under FastCGI. [Source] And so I went into the Apache's settings and saw where the problem was: I had it set up as CGI, instead of loading it as a module. This caused php-cgi.exe to start up and shut down every time I made a request. This was slowing my localhost development down. I changed the settings to load PHP as an Apache MODULE and now it all works perfectly. :) To load the PHP module for Apache 2.x: 1) insert following lines into httpd.conf LoadModule php5_module "c:/php/php5apache2.dll" AddHandler application/x-httpd-php .php (p.s. change C:/php to your path. Also, change php5apache**.dll to your existing file name) 2) To limit PHP execution only for .php files, add this in httpd.conf : <FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch> 3) set path of php.ini in httpd.conf (if after restart you get error, then remove this line again) PHPIniDir "C:/php" Thank you all for your efforts. | {
"source": [
"https://serverfault.com/questions/66347",
"https://serverfault.com",
"https://serverfault.com/users/20522/"
]
} |
66,349 | I know they're defined in /etc/resolv.conf , but what if it's not there? And more specifically, how do you find the DNS server returned by DHCP? In GNOME you can use the NetworkManager applet to see the primary DNS for any connection, so how would you do the same from the command line? | Usually dhclient.leases file is located at /var/lib/dhcp3/dhclient.leases , type the following command: less /var/lib/dhcp3/dhclient.leases OR cat /var/lib/dhcp3/dhclient.leases OR You can just use grep command to get DHCP server address, enter: grep dhcp-server-identifier /var/lib/dhcp3/dhclient.leases OR dhclient eth0 | {
"source": [
"https://serverfault.com/questions/66349",
"https://serverfault.com",
"https://serverfault.com/users/942/"
]
} |
66,360 | We've been running Cisco and dell layer 3 switches. The former are expensive and reliable, the latter a lot cheaper and fraught with issues. Anyone has positive experience with the core Force10 switches (and edge switches as well)? | Usually dhclient.leases file is located at /var/lib/dhcp3/dhclient.leases , type the following command: less /var/lib/dhcp3/dhclient.leases OR cat /var/lib/dhcp3/dhclient.leases OR You can just use grep command to get DHCP server address, enter: grep dhcp-server-identifier /var/lib/dhcp3/dhclient.leases OR dhclient eth0 | {
"source": [
"https://serverfault.com/questions/66360",
"https://serverfault.com",
"https://serverfault.com/users/4242/"
]
} |
66,363 | I need to troubleshoot some problems related to environment variables on a Unix system. On Windows, I can use a tool such as ProcessExplorer to select particular a process and view values of each environment variable. How can I accomplish the same thing on Unix? echoing and env cmd just show values at present time, but I want to view what values the running process is using currently. | cat /proc/<pid>/environ If you want to have pid(s) of a given running executable you can, among a number of other possibilities, use pidof : AlberT$ pidof sshd
30690 6512 EDIT : I fully agree with Dennis Williamson's and Teddy's comments on achieving a more readable output.
My solution is the following: tr '\0' '\n' < /proc/<pid>/environ | {
"source": [
"https://serverfault.com/questions/66363",
"https://serverfault.com",
"https://serverfault.com/users/20056/"
]
} |
66,369 | My company has quite a few servers, and the applications that run on these servers are spread over many machines. Does anyone know of a program that will visualize these relationships in a MS Visio type manner? I looked into Spice-works, The Dude and a handful of other solutions, none will automate the process. | cat /proc/<pid>/environ If you want to have pid(s) of a given running executable you can, among a number of other possibilities, use pidof : AlberT$ pidof sshd
30690 6512 EDIT : I totally quote Dennis Williamson and Teddy comments to achieve a more readable output.
My solution is the following: tr '\0' '\n' < /proc/<pid>/environ | {
"source": [
"https://serverfault.com/questions/66369",
"https://serverfault.com",
"https://serverfault.com/users/18611/"
]
} |
66,587 | Does anyone know of a simple one liner to read the first line of a file in bash? | read -r FIRSTLINE < filename Same result as the other answers but faster because it doesn't spawn any process, as "read" is a built-in bash command. | {
"source": [
"https://serverfault.com/questions/66587",
"https://serverfault.com",
"https://serverfault.com/users/20114/"
]
} |
66,986 | I am logging into a server which has an ssh banner set. I would like to suppress it (especially for non-interactive use). I do not have access to the server sshd_config . The best solution I have found so far is to set the LogLevel ERROR option on the client. The problem is that this will suppress any other INFO level messages, which I don't necessarily want to hide (search the OpenSSH source for logit for examples). I could also use ssh -q but that will suppress even more. Are there any other more specific solutions? | AFAIK, " ssh -q " or " LogLevel QUIET " in ~/.ssh/config are the "traditional" ways to silence the banner. So you already have a "better" compromise with " LogLevel ERROR ". A more specific solutions would be to use a custom patched version of the ssh client, if this is an option. | {
"source": [
"https://serverfault.com/questions/66986",
"https://serverfault.com",
"https://serverfault.com/users/20695/"
]
} |
67,316 | I want to rewrite all http requests on my web server to be https requests. I started with the following: server {
listen 80;
location / {
rewrite ^(.*) https://mysite.com$1 permanent;
}
... One problem is that this strips away any subdomain information (e.g., node1.mysite.com/folder); how could I rewrite the above to reroute everything to https and maintain the sub-domain? | Correct way in new versions of nginx It turns out my first answer to this question was correct at a certain time, but it turned into another pitfall - to stay up to date, please check Taxing rewrite pitfalls . I have been corrected by many SE users, so the credit goes to them, but more importantly, here is the correct code: server {
listen 80;
server_name my.domain.com;
return 301 https://$server_name$request_uri;
}
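# Note (not from the original answer): if this block must answer for several
# names (e.g. server_name mysite.com *.mysite.com;), use
# return 301 https://$host$request_uri; instead, so the requested sub-domain is kept.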
server {
listen 443 ssl;
server_name my.domain.com;
# add Strict-Transport-Security to prevent man in the middle attacks
add_header Strict-Transport-Security "max-age=31536000" always;
[....]
} | {
"source": [
"https://serverfault.com/questions/67316",
"https://serverfault.com",
"https://serverfault.com/users/2476/"
]
} |
67,504 | What are the most likely causes of signal 11, also known as "segmentation fault"? | Signal 11 (SIGSEGV, also known as segmentation violation) means that the program accessed a memory location that was not assigned to it. That's usually a bug in a program. So if you're writing your own program, that's the most likely cause. It can also commonly occur with some hardware malfunctions. | {
"source": [
"https://serverfault.com/questions/67504",
"https://serverfault.com",
"https://serverfault.com/users/20765/"
]
} |
67,513 | How can I rename a SQL Server 2008 instance without reinstalling? For example, if the db is referenced as "MySQLServer\MSSQL2008", how can I rename to "MySQLServer\SQL2008"? | I don't think it is possible to rename without installing. There are traces left to the name in a few internal databases such as replication and you may find errors later on. If you can, unless you have more than one instance, you are best off reinstalling and then importing all your databases again. | {
"source": [
"https://serverfault.com/questions/67513",
"https://serverfault.com",
"https://serverfault.com/users/13401/"
]
} |
67,692 | This came up in a comment to another question and I'd love it if someone could explain the reasons for this to me. I suggested having Apache log the errors for a given VHost to a user's home directory. This was shot down because it was insecure. Why? I asked for clarification in a reply comment but all I got was that it's insecure to have root writing in a folder not owned by root. Again, could someone explain? Thanks, Bart. | Because an evil user can maliciously try to point the file root is writing to a different location .
This is not so simple, but really possible. As an example, if a user found a way to make a symlink from the supposed Apache log to, say, /etc/shadow , you'd suddenly have an unusable system: Apache ( root ) would overwrite your users' credentials, making the system faulty. ln -s /etc/shadow /home/eviluser/access.log If the access.log file is not writable by the user it can be difficult to hijack, but avoiding the possibility altogether is better! A possibility could be to use logrotate to do the job, creating the link to a file that does not exist yet but that logrotate will overwrite as soon as the log grows: ln -s /etc/shadow /home/eviluser/access.log.1 Note : The symlink method is only one of the possible attacks, given as a proof of concept. Security has to be approached with a White List mindset , not by blacklisting what we know to be an issue. | {
"source": [
"https://serverfault.com/questions/67692",
"https://serverfault.com",
"https://serverfault.com/users/4997/"
]
} |
67,706 | Yeah, I can fire up a VM or remote into something and try the password...I know...but is there a tool or script that will simulate a login just enough to confirm or deny that the password is correct? Scenario: A server service account's password is "forgotten"...but we think we know what it is. I'd like to pass the credentials to something and have it kick back with "correct password" or "incorrect password". I even thought about a drive mapping script with that user account and password being passed to see if it mapped the drive successfully or not but got lost in the logic of making it work correctly...something like: -Script asks for username via msgbox
-script asks for password via msgbox
-script tries to map a drive to a common share that everyone has access to
-script unmaps drive if successful
-script returns popup msgbox stating "Correct Password" or else "Incorrect Password" Any help is appreciated...you'd think this would be a rare occurrence not requiring a tool to support it but...well.... | runas /u:yourdomain\a_test_user notepad.exe The utility will prompt for the password, if the right password has been provided, notepad will launch, if not it will produce error 1326: the username or password is incorrect | {
"source": [
"https://serverfault.com/questions/67706",
"https://serverfault.com",
"https://serverfault.com/users/7861/"
]
} |
67,712 | I'm looking for a CMS that can do the following two things: Reusable, named blocks (modules) of text that can be inserted in the content (i.e. articles, posts - the terminology differes between various CMS-es). The obvious example is a header/footer block, but I want to build all content from such blocks, so any solution that can only place blocks on the sidebars etc. is inadequate. Variable substitution (extrapolation). The blocks described above would serve as templates, and would contain stretches of text with "inline" variables. For example, "Click here to download {app-name}". The CMS would look up the declaration of "app-name" in the db and replace it with the actual value on the fly. I am aware of TextPattern , which does the blocks quite nicely (they're called forms in TP), but not the substitution. Its management of static pages seemed somewhat limited, too. Is there anything else? Must be non-commercial (this is for a hobby programming site), and ideally be php/mysql or php with flat files. (I am trying to escape Joomla, which for all the complexity does little to actually reduce the time spent on maintaining content - in fact, hand-crafting HTML would sometimes be faster.) | runas /u:yourdomain\a_test_user notepad.exe The utility will prompt for the password, if the right password has been provided, notepad will launch, if not it will produce error 1326: the username or password is incorrect | {
"source": [
"https://serverfault.com/questions/67712",
"https://serverfault.com",
"https://serverfault.com/users/5436/"
]
} |
67,759 | I am using a linux server which has 128GB of memory and 24 cores. I use top to see how much it is used. Its output is pasted at the end of the post. Here are two questions: (1) I see that each of the running processes occupies a very small percentage of memory (%MEM no more than 0.2%, and most just 0.0%), but how the total memory is almost used as in the fourth line of output ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers")? The sum of used percentage of memory over all processes seems unlikely to achieve almost 100%, doesn't it? (2) how to understand the load average on the first line ("load average: 14.04, 14.02, 14.00")? Thanks and regards! Edit: Thanks! I also really like to hear some rough numbers based on used percentage of memory to determine if a server is heavily loaded, since I once became the one who cramed the server without understanding the current load. Is swap regarded as almost the same as memory? For example, when memory and swap are almost of same size, if the memory is almost running out but the swap is still largely free, may I just view it as if the used percentage of memory + swap is still not high and run other new processes? How would you consider together CPU or memory (or memory + swap) usage? Do you become worried if either of them reaches too high or both? Output of top: $ top top - 12:45:33 up 19 days, 23:11, 18 users, load average: 14.04, 14.02, 14.00
Tasks: 484 total, 12 running, 472 sleeping, 0 stopped, 0 zombie
Cpu(s): 36.7%us, 19.7%sy, 0.0%ni, 43.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers
Swap: 63111312k total, 500556k used, 62610756k free, 124437752k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6529 sanchez 18 -2 1075m 219m 13m S 100 0.2 13760:23 MATLAB
13210 timothy 18 -2 48336 37m 1216 R 100 0.0 3:56.75 absurdity
13888 timothy 18 -2 48336 37m 1204 R 100 0.0 2:04.89 absurdity
14542 timothy 18 -2 48336 37m 1196 R 100 0.0 1:08.34 absurdity
14544 timothy 18 -2 2888 2076 400 R 100 0.0 1:06.14 gatherData
6183 sanchez 18 -2 1133m 195m 13m S 100 0.2 13676:04 MATLAB
6795 sanchez 18 -2 1079m 210m 13m S 100 0.2 13734:26 MATLAB
10178 timothy 18 -2 48336 37m 1204 R 100 0.0 11:33.93 absurdity
12438 timothy 18 -2 48336 37m 1216 R 100 0.0 5:38.17 absurdity
13661 timothy 18 -2 48336 37m 1216 R 100 0.0 2:44.13 absurdity
14098 timothy 18 -2 48336 37m 1204 R 100 0.0 1:58.31 absurdity
14335 timothy 18 -2 48336 37m 1196 R 100 0.0 1:08.93 absurdity
14765 timothy 18 -2 48336 37m 1196 R 99 0.0 0:32.57 absurdity
13445 timothy 18 -2 48336 37m 1216 R 99 0.0 3:01.37 absurdity
28990 root 20 0 0 0 0 S 2 0.0 65:50.21 pdflush
12141 tim 18 -2 19380 1660 1024 R 1 0.0 0:04.04 top
1240 root 15 -5 0 0 0 S 0 0.0 16:07.11 kjournald
9019 root 20 0 296m 4460 2616 S 0 0.0 82:19.51 kdm_greet
1 root 20 0 4028 728 592 S 0 0.0 0:03.11 init
2 root 15 -5 0 0 0 S 0 0.0 0:00.00 kthreadd
3 root RT -5 0 0 0 S 0 0.0 0:01.01 migration/0
4 root 15 -5 0 0 0 S 0 0.0 0:08.13 ksoftirqd/0
5 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/0
6 root RT -5 0 0 0 S 0 0.0 17:27.31 migration/1
7 root 15 -5 0 0 0 S 0 0.0 0:01.21 ksoftirqd/1
8 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/1
9 root RT -5 0 0 0 S 0 0.0 10:02.56 migration/2
10 root 15 -5 0 0 0 S 0 0.0 0:00.34 ksoftirqd/2
11 root RT -5 0 0 0 S 0 0.0 0:00.00 watchdog/2
12 root RT -5 0 0 0 S 0 0.0 4:29.53 migration/3
13 root 15 -5 0 0 0 S 0 0.0 0:00.34 ksoftirqd/3 | (1) I see that each of the running processes occupies a very small percentage of memory (%MEM no more than 0.2%, and most just 0.0%), but how the total memory is almost used as in the fourth line of output ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers")? The sum of used percentage of memory over all processes seems unlikely to achieve almost 100%, doesn't it? To see how much memory you are currently using, run free -m . It will provide output like: total used free shared buffers cached
Mem: 2012 1923 88 0 91 515
-/+ buffers/cache: 1316 695
Swap: 3153 256 2896 The top row 'used' (1923) value will almost always nearly match the top row mem value (2012). Since Linux likes to use any spare memory to cache disk blocks (515). The key used figure to look at is the buffers/cache row used value (1316). This is how much space your applications are currently using. For best performance, this number should be less than your total (2012) memory. To prevent out of memory errors, it needs to be less than the total memory (2012) and swap space (3153). If you wish to quickly see how much memory is free look at the buffers/cache row free value (695). This is the total memory (2012)- the actual used (1316). (2012 - 1316 = 696, not 695, this will just be a rounding issue) (2) how to understand the load average on the first line ("load average: 14.04, 14.02, 14.00")? This article on load average uses a nice traffic analogy and is the best one I've found so far: Understanding Linux CPU Load - when should you be worried? . In your case, as people pointed out: On multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00, on a dual-core, 4.00 on a quad-core, etc. So, with a load average of 14.00 and 24 cores, your server is far from being overloaded. | {
"source": [
"https://serverfault.com/questions/67759",
"https://serverfault.com",
"https://serverfault.com/users/16981/"
]
} |
68,684 | How can I get diff to show only added and deleted lines? If diff can't do it, what tool can? | Try comm Another way to look at it: Show lines that only exist in file a: (i.e. what was deleted from a) comm -23 a b Show lines that only exist in file b: (i.e. what was added to b) comm -13 a b Show lines that only exist in one file or the other: (but not both) comm -3 a b | sed 's/^\t//' (Warning: If file a has lines that start with TAB, it (the first TAB) will be removed from the output.) Sorted files only NOTE: Both files need to be sorted for comm to work properly. If they aren't already sorted, you should sort them: sort <a >a.sorted
sort <b >b.sorted
comm -12 a.sorted b.sorted If the files are extremely long, this may be quite a burden as it requires an extra copy and therefore twice as much disk space. | {
"source": [
"https://serverfault.com/questions/68684",
"https://serverfault.com",
"https://serverfault.com/users/1834/"
]
} |
68,753 | If you have 5 web servers behind a load balancer (such as haproxy) and they are serving up content for the same domain, do you need SSL certificates for all the servers, or can you use the same certificate on each server? I know you can put all SSL requests on a specific server, but that requires distributed session info and hoping it doesn't come to that. | If you have 5 web servers behind a load balancer (...)
do you need SSL certificates for all the servers, It depends. If you do your load balancing on the TCP or IP layer (OSI layer 4/3, a.k.a L4, L3), then yes, all HTTP servers will need to have the SSL certificate installed. If you load balance on the HTTPS layer (L7), then you'd commonly install the certificate on the load balancer alone, and use plain un-encrypted HTTP over the local network between the load balancer and the webservers (for best performance on the web servers). If you have a large installation, then you may be doing Internet -> L3 load balancing -> layer of L7 SSL concentrators -> load balancers -> layer of L7 HTTP application servers... Willy Tarreau, the author of HAProxy, has a really nice overview of the canonical ways of load balancing HTTP/HTTPS . If you install a certificate on each server, then be sure to get a certificate that supports this. Normally certificates can be installed on multiple servers, as long as the servers all serve traffic for one Fully Qualified Domain Name only. But verify what you're buying, certificate issuers can have a confusing product portfolio... | {
"source": [
"https://serverfault.com/questions/68753",
"https://serverfault.com",
"https://serverfault.com/users/20343/"
]
} |
68,883 | I would like to open a discussion that would accumulate your Linux command line (CLI) best practices and tips. I've searched for such a discussion to share the below comment but haven't found one, hence this post. I hope we all could learn from this. You are welcome to share your Bash tips, grep, sed, AWK, /proc and all other related Linux/Unix system administration, shell programming best practices for the benefit of us all. | Use screen , a free terminal multiplexer developed by the GNU Project that will allow you to have several terminals in one. You can start a session and your terminals will be saved even when you connection is lost, so you can resume later or from home. | {
"source": [
"https://serverfault.com/questions/68883",
"https://serverfault.com",
"https://serverfault.com/users/20186/"
]
} |
69,183 | We currently have our DNS SOA record set to the following for stackoverflow.com: primary name server = ns1.p19.dynect.net
serial = 2009090909
refresh = 3600 (1 hour)
retry = 600 (10 mins)
expire = 604800 (7 days)
default TTL = 60 (1 min) Are there better choices for our refresh / retry / expire / default TTL for a site like stackoverflow.com which receives close to 1M pageviews per day? | The actual traffic rate to the site is irrelevant. All of those settings (except for "default TTL") only affect how frequently your domain's secondary DNS servers poll the primary DNS server for updates. If your zone only changes infrequently (which I believe yours does) then your value for "refresh" is currently a bit on the low side. Typically the primary should send a NOTIFY message to each of the secondaries whenever there's an update at which point the secondaries grab the zone file immediately. These days the "refresh / retry / expire" mechanism is only a backstop to that. In any event, it's likely that your DNS provider is automatically syncing changes to all of the relevant DNS servers on the fly without using DNS's built-in synchronisation mechanisms so the actual values are probably irrelevant. Note that the "default TTL" field no longer means what it says. The real default TTL is set (in BIND at least) with the $TTL directive, and that's only used when there isn't an explicit TTL set on each record. The "default TTL" field's meaning was changed in RFC 2308 and it's actually a hint for negative caching . If your server returns a negative response (e.g. NXDOMAIN or NODATA ) it's how long the remote server should wait before trying again. The current value is a bit on the low side, but there's no harm leaving it as is. It's often ignored anyway. | {
"source": [
"https://serverfault.com/questions/69183",
"https://serverfault.com",
"https://serverfault.com/users/2/"
]
} |
69,283 | While trying to search for a simple pattern "hello" in a file, all the following forms of grep work: grep hello file1 grep 'hello' file1 grep "hello" file1 Is there a specific case where one of the above forms works but the others do not?
Does it make any difference if I use one in place of another? | This is actually dependent on your shell. Quotes (either kind) are primarily meant to deal with whitespace. For instance, the following: grep hello world file1 will look for the word "hello" in files called "world" and "file1", while grep "hello world" file1 will look for "hello world" in file1. The choice between single or double quotes is only important if the search string contains variables or other items that you expect to be evaluated. With single quotes, the string is taken literally and no expansion takes place. With double quotes, variables are expanded. For example (with a Bourne-derived shell such as Bash or ZSH): VAR="serverfault"
grep '$VAR' file1
grep "$VAR" file1 The first grep will look for the literal string "$VAR1" in file1. The second will expand the "$VAR" variable and look for the string "serverfault" in file1. | {
"source": [
"https://serverfault.com/questions/69283",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
69,510 | My system admin gave me a file with iptables rules. What command do I type in to load this? I watched him do it before, and he did it in 1 line!
Something like...iptables > thefile.dat ???? | My system admin gave me a file with iptables rules. What command do I type in to load this? iptables-restore < file-with-iptables-rules.txt I watched him do it before, and he did it in 1 line! Something like...iptables > thefile.dat ???? iptables-save > file-with-iptables-rules.txt | {
"source": [
"https://serverfault.com/questions/69510",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
69,612 | What difference does the 'Rank' of DIMMs make to server memory? For example, when looking at server configurations I see the following being offered for the same server: 2GB (1x2GB) Single Rank PC3-10600 CL9 ECC DDR3-1333 VLP RDIMM
2GB (1x2GB) Dual Rank PC3-10600 CL9 ECC DDR3-1333 VLP RDIMM Given the option of Single Rank vs. Dual Rank or Dual Rank vs. Quad Rank is one always: Faster? Cheaper? Higher Bandwidth? Here's what IBM has to say (page 7) on the subject, at least regarding their HS22s: It is important to ensure that DIMMs
with appropriate number of ranks are
populated in each channel for optimal
performance. Whenever possible, it is
recommended to use dual-rank DIMMs in
the system. Dual-rank DIMMs offer
better interleaving and hence better
performance than single-rank DIMMs. For instance, a system populated with
six 2GB dual-rank DIMMs outperforms a
system populated with six 2GB
single-rank DIMMs by 7% for
SPECjbb2005. Dual-rank DIMMs are also
better than quad-rank DIMMs because
quad-rank DIMMs will cause the memory
speed to be down-clocked. Another important guideline is to
populate equivalent ranks per channel.
For instance, mixing one single-rank
DIMM and one dual-rank DIMM in a
channel should be avoided. Ultimately, the effect of the number of memory ranks is specific per server/chipset. For example, on IBM's x3850X5 servers more ranks is better (see §3.8.4): With the Xeon 7500/6500 processors in the x3850 X5, having more ranks gives better
performance. The reason is because of the addressing scheme, which can extend the pages
across ranks thereby making the pages effectively larger and therefore more page-hit cycles. | Wikipedia has a fairly good explanation of rank ( link ). I'd say RamCity (a vendor for Kingston memory) has a more succint explanation on ranks ( link ): A memory rank is, simply put, a block
or area of data that is created using
some or all the memory chips on a
memory module. A rank must be 64 bits of data wide;
on memory modules which support Error
Correction Code (ECC), the 64-bit wide
data area requires an 8-bit wide ECC
area for a total width of 72 bits.
Depending on how memory modules are
engineered, they can contain one, two,
or four areas of 64-bit wide data
areas (or 72-bit wide areas, where 72
bits = 64 data bits and 8 ECC bits). The article goes on mentioning price variation: Why do the single- and dual-rank
memory modules vary in price? In general, single-rank memory modules
are built using x4 (“By 4”) DRAM chips
and are more expensive than dual-rank
memory modules (which are built using
x8 DRAM chips); both module types have
the same number of chips but the x4
DRAMs are more expensive than x8
DRAMs. Dual-rank memory modules may
limit future upgradeability and
capacity of servers when using PC2700
or PC2-3200 memory. This tradeoff
between memory cost and capacity is
important to consider when purchasing
memory modules for Intel
Lindenhurst-based servers. In terms of performance, I'd refer to wikipedia: The ranks cannot be accessed
simultaneously as they share the same
data path. So to sum up everything, it appears that ranks have more to do with density and pricing than actual performance. Granted, I'm working off of generalized statements from a vendor and wikipedia, I don't think most people put much effort into researching ranks. All that matters (for most server admins) is that RAM have matching ranks. I don't think it's an actual specification or requirement but it helps keep some consistency and keeps memory interchangeable within a number of similar servers. Keep in mind that most servers are upgradeable and RAM density has a large part in factor. It's best (albeit more expensive) to get the more dense RAM for servers to make room for future upgrades. | {
"source": [
"https://serverfault.com/questions/69612",
"https://serverfault.com",
"https://serverfault.com/users/2101/"
]
} |
69,836 | In Windows System, there is this file at C:\WINDOWS\system32\drivers\etc\hosts . This file allows us to default a specific IP address to a host name. The issue now is whether I can set multiple IP addresses to a host name. For example, can I do something like this: 192.168.244.128 gateway.net
192.168.226.129 gateway.net And expect that the browser can resolve to both of them, see which one will work and thus point at that one? If not, is there any other way to get the behavior I want? Note: I am deploying this app in my own local area network, so there is no need for internet. | Normally you would not use hosts to do this, but your DNS. Most DNS servers will provide what's called a "Round Robin" if you assign multiple A records to the one name in the zone. What it would do then is: the first request that comes through would receive 192.168.244.128 , the next would receive 192.168.226.129 , and so on and so forth. However, by design, your local machine will cache its DNS resolution, and will usually use the same IP address over and over, until it expires (Time To Live, TTL). | {
"source": [
"https://serverfault.com/questions/69836",
"https://serverfault.com",
"https://serverfault.com/users/1605/"
]
} |
69,847 | I have a script running under a non-root user which, under certain conditions, should restart apache httpd. What would be the simplest way for me to allow the user to do that? I'm using Ubuntu Server 8.04 LTS. | Short answer: Using visudo , add the following to your sudoers file, replacing username with the proper username: username ALL = /etc/init.d/apache2 If you want to not have to type in a password before you do this, use the following: username ALL = NOPASSWD: /etc/init.d/apache2 After this, the 'username' user can execute sudo /etc/init.d/apache2 start (or stop, restart,etc) Long answer:
You'll likely want to setup a separate user for this if you haven't already, and then configure the /etc/sudoers file to allow a user or group to execute the command you want. For example, to allow the user 'ben' to execute all commands as root prompting for a password, you would do the following: ben ALL= ALL To allow 'ben' to execute only one command (like say, rm ), you would do the following: ben ALL= /bin/rm If you are running a script as a user and don't want to prompt for a password, you'll want to use the 'NOPASSWD' option like so: ben ALL=NOPASSWD: /bin/commandname options You can do the same thing for groups by prefixing group names with a percentage sign, like so: %supportstaff ALL= NOPASSWD: /bin/commandname | {
"source": [
"https://serverfault.com/questions/69847",
"https://serverfault.com",
"https://serverfault.com/users/1726/"
]
} |
69,870 | Multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability. When 1 IP goes down clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true: When the traffic is HTTP then, most of the HTML browsers are able to automatically try the next A record if the previous is down, without a new DNS look-up. Read here chapter 3.1 and here . When multiple data centers are involved then, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino Edit: Off course each data center has a local Load Balancer with hot spare. It's OK to sacrifice session affinity for an instant fail-over. AFAIK the only way for a DNS to suggest a data center instead of another is to reply with just the IP (or IPs) associated to that data center. If the data center becomes unreachable then all those IP are also unreachables. This means that, even if smart HTML browsers are able to instantly try another A record , all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume DNS automatically suggests to a new data center when one fail). So, "smart DNS" cannot assure instant fail-over. Conversely a DNS round-robin permits it. When one data center fail, the smart HTML browsers (most of them) instantly try the other cached A records jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT but seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers. Edit 2: Some people suggest TCP Anycast as a definitive solution. In this paper (chapter 6) is explained that Anycast fail-over is related to BGP convergence. For this reason Anycast can employ from 15 minutes to 20 seconds to complete.
20 seconds are possible on networks where the topology was optimized for this.
Probably only CDN operators can grant such fast fail-overs. Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double-check) and: The only CDN using TCP Anycast seems to be CacheFly; other operators like CDN networks and BitGravity use CacheFly. It seems that their edges cannot be used as reverse proxies; therefore, they cannot be used to grant instant failover. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records.
From traceroutes seems that the returned IPs are on the same data center. So, I'm puzzled on how they can offer a 100% SLA when one data center goes down. | When I use the term "DNS Round Robin" I generally mean in in the sense of the "cheap load balancing technique" as OP describes it. But that's not the only way DNS can be used for global high availability. Most of the time, it's just hard for people with different (technology) backgrounds to communicate well. The best load balancing technique (if money is not a problem) is generally considered to be: A Anycast'ed global network of 'intelligent' DNS servers, and a set of globally spread out datacenters, where each DNS node implements Split Horizon DNS, and monitoring of availability and traffic flows are available to the 'intelligent' DNS nodes in some fashion, so that the user DNS request flows to the nearest DNS server via IP Anycast , and this DNS server hands out a low-TTL A Record / set of A Records for the nearest / best datacenter for this end user via 'intelligent' split horizon DNS. Using anycast for DNS is generally fine, because DNS responses are stateless and almost extremely short. So if the BGP routes change it's highly unlikely to interrupt a DNS query. Anycast is less suited for the longer and stateful HTTP conversations, thus this system uses split horizon DNS. A HTTP session between a client and server is kept to one datacenter; it generally cannot fail over to another datacenter without breaking the session. As I indicated with "set of A Records" what I would call 'DNS Round Robin' can be used together with the setup above. It is typically used to spread the traffic load over multiple highly available load balancers in each datacenter (so that you can get better redundancy, use smaller/cheaper load balancers, not overwhelm the Unix network buffers of a single host server, etc). So, is it true that, with multiple data centers
and HTTP traffic, the use of DNS RR is the ONLY
way to assure high availability? No it's not true, not if by 'DNS Round Robin' we simply mean handing out multiple A records for a domain. But it's true that clever use of DNS is a critical component in any global high availability system. The above illustrates one common (often best) way to go. Edit: The Google paper "Moving Beyond End-to-End Path Information to Optimize CDN Performance" seems to me to be state-of-the-art in global load distribution for best end-user performance. Edit 2: I read the article "Why DNS Based .. GSLB .. Doesn't Work" that OP linked to, and it is a good overview -- I recommend looking at it. Read it from the top. In the section "The solution to the browser caching issue" it advocates DNS responses with multiple A Records pointing to multiple datacenters as the only possible solution for instantaneous fail over. In the section "Watering it down" near the bottom, it expands on the obvious, that sending multiple A Records is uncool if they point to datacenters on multiple continents, because the client will connect at random and thus quite often get a 'slow' DC on another continent. Thus for this to work really well, multiple datacenters on each continent are needed. This is a different solution than my steps 1 - 6. I can't provide a perfect answer on this, I think a DNS specialist from the likes of Akamai or Google is needed, because much of this boils down to practical know-how on the limitations of deployed DNS caches and browsers today. AFAIK, my steps 1-6 are what Akamai does with their DNS (can anyone confirm this?). My feeling -- coming from having worked as a PM on mobile browser portals (cell phones) -- is that the diversity and level of total brokeness of the browsers out there is incredible. I personally would not trust a HA solution that requires the end user terminal to 'do the right thing'; thus I believe that global instantaneous fail over without breaking a session isn't feasible today. I think my steps 1-6 above are the best that are available with commodity technology. This solution does not have instantaneous fail over. I'd love for one of those DNS specialists from Akamai, Google etc to come around and prove me wrong. :-) | {
"source": [
"https://serverfault.com/questions/69870",
"https://serverfault.com",
"https://serverfault.com/users/21200/"
]
} |
69,983 | It seems like a simple thing to find the answer to. But I can't seem to find it in the Hyper-V doc. I'm sure it's there somewhere. Simple question: when I am in Hyper-V Manager there are two options that seem similar, but I am sure they are different: "Turn Off..." and "Shut Down..." What do they each do? My gut tells me that "Turn Off..." is like pulling the plug on a physical machine, whereas "Shut Down..." sends a shut down message to the guest. Is that correct? In both cases, is the VM no longer running and using memory and CPU resources on the host? | You are correct in what your gut tells you. | {
"source": [
"https://serverfault.com/questions/69983",
"https://serverfault.com",
"https://serverfault.com/users/21608/"
]
} |
69,988 | We have a server that we host web solutions on; they are updated on the server with CVS. About 4 people need access to the server and the ability to update the web solutions through CVS. When I check out a web solution, the CVS/Root is set to :ext:USERNAME@ADDRESS:CVS-PATH - which is fine for as long as I use CVS to update. But if another user (different USERNAME) makes a CVS update, it tries to update with my username, for which the other user doesn't know the password. I would like to "force" the CVS root to be something different for each user, but unfortunately the file CVS/Root overrides the environment variable CVSROOT. Is there another way to override it, so each user gets to update using their own login? Hope someone can help me in the right direction :) | You are correct in what your gut tells you. | {
"source": [
"https://serverfault.com/questions/69988",
"https://serverfault.com",
"https://serverfault.com/users/13662/"
]
} |
69,990 | Currently we are using BitDefender for Mail Servers to scan for spam and viruses and to do content filtering. We chose BitDefender as it receives all incoming emails and forwards them to our internal Windows IIS SMTP service. BitDefender also protects our SMTP server from being used as a spam relay, as it only allows certain IPs to send through it. The question is: are there any alternatives to BitDefender for mail servers? | You are correct in what your gut tells you. | {
"source": [
"https://serverfault.com/questions/69990",
"https://serverfault.com",
"https://serverfault.com/users/13589/"
]
} |
70,889 | How can I have wget print errors, but nothing otherwise? In the default behavior, it shows a progress bar and lots of other output. The --no-verbose option still prints one line per downloaded file, which I don't want. The --quiet option causes it to be totally quiet: even in the case of an error, it doesn't print anything. Is there a mode in which it prints errors, but nothing else? | There are very good answers in this question, be sure to check them out, but what I've done is this: wget [wget options] 2>&1 | grep -i "failed\|error" | {
"source": [
"https://serverfault.com/questions/70889",
"https://serverfault.com",
"https://serverfault.com/users/2563/"
]
} |
71,043 | I set up a wildcard SSL certificate from GoDaddy on Apache2. Whenever the server restarts, it asks for the passphrase for the SSL certificate's private key. What's the best way to remove this obstacle to restarts? When the log-rotation restart occurs in the middle of the night, the server doesn't come back up, and I get an unhappy client call in the morning, as it is a shared server. | To make Apache receive the passphrase every time it restarts, add this to the httpd.conf:
SSLPassPhraseDialog exec:/path/to/passphrase-file
In your passphrase-file:
#!/bin/sh
echo "passphrase"
and make the passphrase-file executable: chmod +x passphrase-file | {
"source": [
"https://serverfault.com/questions/71043",
"https://serverfault.com",
"https://serverfault.com/users/8867/"
]
} |
71,285 | What command can I use to strip color-code escape sequences from a text file? Ideally something I can pipe through. If I have a file with a bunch of coloured text, rainbow.txt, what goes in the gap: cat rainbow.txt | *something* > plain.txt I'm working in bash on CentOS 4.4. | Try: sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mK]//g" | {
"source": [
"https://serverfault.com/questions/71285",
"https://serverfault.com",
"https://serverfault.com/users/4020/"
]
} |
71,596 | Does anybody know of any useful Tools for cleaning out Active Directory on a Server 2003 domain? I want to clean up old computers, etc, and prefer a free tool. I have a lot of devices that I know don't exist and I want to clean them prior to building up a new DC. | You have all the tools you need on your domain controller already. The main command you want to use is DSQUERY . To find objects that have been inactive for 52 weeks, open a CMD window and type : DSQUERY computer -inactive 52
DSQUERY user -inactive 52 You can also search for stale passwords using the -stalepwd <num of days> switch instead of -inactive. You could also search for disabled accounts by using the -disabled switch. If you want to take it to the next level, you can have it automatically move the objects into an OU of your choice (where you can then analyse what's there before you take any further action) by piping the results to the DSMOVE command like so:
DSQUERY computer -inactive 52 | DSMOVE -newparent <distinguished name of target OU>
Edit: Here are all the built-in DS commands to experiment with:
dsadd /? - help for adding objects.
dsget /? - help for displaying objects.
dsmod /? - help for modifying objects.
dsmove /? - help for moving objects.
dsquery /? - help for finding objects matching search criteria.
dsrm /? - help for deleting objects. | {
"source": [
"https://serverfault.com/questions/71596",
"https://serverfault.com",
"https://serverfault.com/users/4573/"
]
} |
72,356 | Many people (including the Securing Debian Manual ) recommend mounting /tmp with the noexec,nodev,nosuid set of options. This is generally presented as one element of a 'defense-in-depth' strategy, by preventing the escalation of an attack that lets someone write a file, or an attack by a user with a legitimate account but no other writable space. Over time, however, I've encountered arguments (most prominently by Debian/Ubuntu Developer Colin Watson) that noexec is a useless measure, for a couple potential reasons: The user can run /lib/ld-linux.so <binary> in an attempt to get the same effect. The user can still run system-provided interpreters on scripts that can't be run directly Given these arguments, the potential need for more configuration (e.g. debconf likes an executable temporary directory), and the potential loss of convenience, is this a worthwhile security measure? What other holes do you know of that enable circumvention? | Here are the arguments for utility I've come up with so far: Modern kernels fix the /lib/ld-linux.so hole, so that it won't be able to map executable pages from a noexec filesystem. The interpreters point is certainly still a concern, though I think less of one than people might claim. The reasoning I can come up with is that there have been numerous privilege escalation vulnerabilities that relied on making particular malformed syscalls. Without an attacker providing a binary, it would be much harder to make evil syscalls. Also, script interpreters should be unprivileged (I know this has historically sometimes not been the case, such as with an suid perl), and so would need their own vulnerability to be useful in an attack. Apparently, it is possible to use Python, at least, to run some exploits. Many 'canned' exploits may try to write and run executables in /tmp , and so noexec reduces the probability of falling to a scripted attack (say in the window between vulnerability disclosure and patch installation). Thus, there's still a security benefit to mounting /tmp with noexec . As described in Debian's bug tracker , setting APT::ExtractTemplates::TempDir in apt.conf to a directory that is not noexec and accessible to root would obviate the debconf concern. | {
"source": [
"https://serverfault.com/questions/72356",
"https://serverfault.com",
"https://serverfault.com/users/6139/"
]
} |
72,359 | I have an extremely bizarre issue on one of SBS2003 servers, belonging to my client. Server cannot access some URLs. I can access simple sites, such as google, but every time I'm trying to go to msn.com, or mozypro.com the site will timeout. With that said, server is responsive, it will access network, open files. Other computers on the same network will access all internet URLs fine. In attempt to fix the issue, I did following things: uninstalled IE enchanced security tried using Chrome instead of IE disabled firewall made sure there is no IP blocks plugged in a separate USB ethernet card to rule out ethernet issue any more ideas?.. | Here are the arguments for utility I've come up with so far: Modern kernels fix the /lib/ld-linux.so hole, so that it won't be able to map executable pages from a noexec filesystem. The interpreters point is certainly still a concern, though I think less of one than people might claim. The reasoning I can come up with is that there have been numerous privilege escalation vulnerabilities that relied on making particular malformed syscalls. Without an attacker providing a binary, it would be much harder to make evil syscalls. Also, script interpreters should be unprivileged (I know this has historically sometimes not been the case, such as with an suid perl), and so would need their own vulnerability to be useful in an attack. Apparently, it is possible to use Python, at least, to run some exploits. Many 'canned' exploits may try to write and run executables in /tmp , and so noexec reduces the probability of falling to a scripted attack (say in the window between vulnerability disclosure and patch installation). Thus, there's still a security benefit to mounting /tmp with noexec . As described in Debian's bug tracker , setting APT::ExtractTemplates::TempDir in apt.conf to a directory that is not noexec and accessible to root would obviate the debconf concern. | {
"source": [
"https://serverfault.com/questions/72359",
"https://serverfault.com",
"https://serverfault.com/users/18819/"
]
} |
72,417 | On a Linux system, is there any way to use nohup when the process being nohup'ed requires input, such as an rsync command that needs a password to be entered but will then run happily on its own? | If the command doesn't have to be scripted, you can do it this way: run it in the foreground, pause it (CTRL+Z), disown it so that it won't be closed when you close your shell (disown -h %jobid), and resume the job in the background (bg %jobid). | {
"source": [
"https://serverfault.com/questions/72417",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
]
} |
72,476 | I need to write some complex XML to a variable inside a bash script. The XML needs to be readable inside the bash script as this is where the XML fragment will live; it's not being read from another file or source. So my question is this: if I have a long string which I want to be human readable inside my bash script, what is the best way to go about it? Ideally I want to: not have to escape any of the characters, have it break across multiple lines making it human readable, and keep its indentation. Can this be done with EOF or something? Could anyone give me an example? e.g. String = <<EOF
<?xml version="1.0" encoding='UTF-8'?>
<painting>
<img src="madonna.jpg" alt='Foligno Madonna, by Raphael'/>
<caption>This is Raphael's "Foligno" Madonna, painted in
<date>1511</date>-<date>1512</date>.</caption>
</painting>
EOF | This will put your text into your variable without needing to escape the quotes. It will also handle unbalanced quotes (apostrophes, i.e. ' ). Putting quotes around the sentinel (EOF) prevents the text from undergoing parameter expansion. The -d'' causes it to read multiple lines (ignore newlines). read is a Bash built-in so it doesn't require calling an external command such as cat . IFS='' read -r -d '' String <<"EOF"
<?xml version="1.0" encoding='UTF-8'?>
<painting>
<img src="madonna.jpg" alt='Foligno Madonna, by Raphael'/>
<caption>This is Raphael's "Foligno" Madonna, painted in
<date>1511</date>-<date>1512</date>.</caption>
</painting>
EOF | {
"source": [
"https://serverfault.com/questions/72476",
"https://serverfault.com",
"https://serverfault.com/users/20114/"
]
} |
72,561 | I'm using 64-bit TortoiseSVN on a 64-bit Windows 7 Professional. Every so often a checkout or update will fail with an error message like the following. Error: Can't move
Error: '[...]\\.svn\tmp\entries'
Error: to
Error: '[...]\\.svn\entries':
Error: The file or directory is corrupted and unreadable. Then CHKDSK runs after reboot, which makes me nervous. Why might this be happening, and how can I avoid it? | This is a known bug in Windows 7, slated to be fixed in SP 1: http://subversion.wandisco.com/blogs/windows-7-bogus-errorfilecorrupt-error-.html There is now a hotfix available: http://support.microsoft.com/kb/982927/en-us http://support.microsoft.com/kb/2498472/en-us | {
"source": [
"https://serverfault.com/questions/72561",
"https://serverfault.com",
"https://serverfault.com/users/920/"
]
} |
72,744 | Looking for something like this? Any ideas? cmd | prepend "[ERRORS] "
[ERROR] line1 text
[ERROR] line2 text
[ERROR] line3 text
... etc | cmd | while read line; do echo "[ERROR] $line"; done has the advantage of only using bash builtins so fewer processes will be created/destroyed so it should be a touch faster than awk or sed. @tzrik points out that it might also make a nice bash function. Defining it like: function prepend() { while read line; do echo "${1}${line}"; done; } would allow it to be used like: cmd | prepend "[ERROR] " | {
"source": [
"https://serverfault.com/questions/72744",
"https://serverfault.com",
"https://serverfault.com/users/14645/"
]
} |
72,767 | Every tech conference I've ever been to, and I've been to a lot, has had absolutely abysmal Wi-Fi and Internet access. Sometimes it's the DHCP server running out of addresses. Sometimes the backhaul is clearly inadequate. Sometimes there's one router for a ballroom with 3000 people. But it's always SOMETHING. It never works. What are some of the best practices for conference organizers? What questions should they ask the conference venue or ISP to know, in advance, if the Wi-Fi is going to work? What are the most common causes of crappy Wi-Fi at conferences? Are they avoidable, or is Wi-Fi simply not an adequate technology for large conferences? | (For those that are interested, I have finally written up my 2009 report on the wireless at PyCon ). I have done the wireless for the PyCon conference most of the years since we moved from George Washington University into hotels, so I have some ideas about this, which have been proven in battle -- though only with around a thousand users. One thing I hear a lot of people talking about in this discussion is "open air coverage in a ballroom". One theory I operate under is that the ballroom is NOT an open air environment. Human bodies soak up 802.11b/g and 802.11a quite nicely. Here are some of my thoughts, but more details are available in my conference reports if you search google for "pycon wireless" -- the tummy.com links are what you want. I use just the non-overlapping channels, and spread the APs out. For 802.11b/g, I run the radios at the lowest power settings. For 802.11a I run it at the higest power setting because we have so many channels. I try to keep the APs fairly low, so the bodies can help reduce interference between APs on the same channel. I set all the APs to the same ESSID so that people can "roam" to different APs as loads (number of associated clients) go up or coverage goes down (more people coming in, etc). Lots and lots of APs. The first year we had the hotel do the networking, they eventually brought in 6 APs, but they had started with only a couple. Despite that we had told them that we would be heavily using their wireless. But we also had other problems like the DHCP server giving out leases with a gateway in a different network than the address. (Calls to support resulted in "I'll just reboot everything."). We are running relatively inexpensive D-Link dual-radio APs, costing around $100 or $200 each. We just haven't really had the budget to buy 20 to 40 of the $600+ high end APs. These D-Link APs have worked surprisingly well. In 2009 we had a hell of a problem with netbooks. Something about the radios in these just stinks for use at this sort of conference. I've heard reports of people putting Intel wireless cards in the Netbooks and getting much better performance. At PyCon 2009, my netbook couldn't get a reliable connection after the conference started, but my ThinkPad had no problems. I heard similar reports from people with Mac and other "real" laptops, but the cheapest hardware just was not working. I have NOT done anything with directional antennas. I was expecting to need them, but so far it has worked out well enough. Most hotels cannot provide enough bandwidth. Don't worry though, there are lots of terrestrial wireless providers that can provide 100mbps. I'm not talking about the places that run 802.11g from some tower, but people with real, serious radios and backhaul to cope with it. 
Over the last several years we haven't really had much in the way of wired ports, mostly because of budget and volunteer effort required to cable all those locations. In 2010 we expect to have quite a few wired ports. I like the idea of wiring every seat for wired, but I would doubt we'll be able to cover even 10% simply due to the effort required to wire and maintain such a network. Getting people off the wireless is great. Getting people off the 802.11b frequencies is good as well. Most people talking about since Joel has brought it up have been saying things like "3 non-overlapping channels", which is true for the 2.4GHz spectrum. However, we have seen a HUGE move towards the 5.2GHz spectrum. The first year I ran the network (2006?), we had around 25% usage. In 2008 we had over 60% in 5.2GHz. So, yes, running wireless with thousands of people requires some thought. But, giving it some thought seems to have resulted in a fairly high level of satisfaction. Sean | {
"source": [
"https://serverfault.com/questions/72767",
"https://serverfault.com",
"https://serverfault.com/users/4/"
]
} |
72,780 | I'm going to upgrade from Ubuntu 9.04 to Ubuntu 9.10 in a near future and wanted to know if I should migrate my existing ext3 partitions to ext4 during the process and, if yes, why? | (For those that are interested, I have finally written up my 2009 report on the wireless at PyCon ). I have done the wireless for the PyCon conference most of the years since we moved from George Washington University into hotels, so I have some ideas about this, which have been proven in battle -- though only with around a thousand users. One thing I hear a lot of people talking about in this discussion is "open air coverage in a ballroom". One theory I operate under is that the ballroom is NOT an open air environment. Human bodies soak up 802.11b/g and 802.11a quite nicely. Here are some of my thoughts, but more details are available in my conference reports if you search google for "pycon wireless" -- the tummy.com links are what you want. I use just the non-overlapping channels, and spread the APs out. For 802.11b/g, I run the radios at the lowest power settings. For 802.11a I run it at the higest power setting because we have so many channels. I try to keep the APs fairly low, so the bodies can help reduce interference between APs on the same channel. I set all the APs to the same ESSID so that people can "roam" to different APs as loads (number of associated clients) go up or coverage goes down (more people coming in, etc). Lots and lots of APs. The first year we had the hotel do the networking, they eventually brought in 6 APs, but they had started with only a couple. Despite that we had told them that we would be heavily using their wireless. But we also had other problems like the DHCP server giving out leases with a gateway in a different network than the address. (Calls to support resulted in "I'll just reboot everything."). We are running relatively inexpensive D-Link dual-radio APs, costing around $100 or $200 each. We just haven't really had the budget to buy 20 to 40 of the $600+ high end APs. These D-Link APs have worked surprisingly well. In 2009 we had a hell of a problem with netbooks. Something about the radios in these just stinks for use at this sort of conference. I've heard reports of people putting Intel wireless cards in the Netbooks and getting much better performance. At PyCon 2009, my netbook couldn't get a reliable connection after the conference started, but my ThinkPad had no problems. I heard similar reports from people with Mac and other "real" laptops, but the cheapest hardware just was not working. I have NOT done anything with directional antennas. I was expecting to need them, but so far it has worked out well enough. Most hotels cannot provide enough bandwidth. Don't worry though, there are lots of terrestrial wireless providers that can provide 100mbps. I'm not talking about the places that run 802.11g from some tower, but people with real, serious radios and backhaul to cope with it. Over the last several years we haven't really had much in the way of wired ports, mostly because of budget and volunteer effort required to cable all those locations. In 2010 we expect to have quite a few wired ports. I like the idea of wiring every seat for wired, but I would doubt we'll be able to cover even 10% simply due to the effort required to wire and maintain such a network. Getting people off the wireless is great. Getting people off the 802.11b frequencies is good as well. 
Most people talking about since Joel has brought it up have been saying things like "3 non-overlapping channels", which is true for the 2.4GHz spectrum. However, we have seen a HUGE move towards the 5.2GHz spectrum. The first year I ran the network (2006?), we had around 25% usage. In 2008 we had over 60% in 5.2GHz. So, yes, running wireless with thousands of people requires some thought. But, giving it some thought seems to have resulted in a fairly high level of satisfaction. Sean | {
"source": [
"https://serverfault.com/questions/72780",
"https://serverfault.com",
"https://serverfault.com/users/19695/"
]
} |
73,051 | A few hours ago my root partition filled up, I moved files away from it and df reports: # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda1 183G 174G 0 100% / So there should be 9GB free, but avail reports 0 and Use is still at 100%. I tested as root, e.g. # echo test >a ; cat a
test it works as expected; however as a normal user, I still get the error: $ echo test >a ; cat a
bash: echo: write error: No space left on device The root home directory where I conducted the positive test and my home directory are on the same partition. The fstab entry is: /dev/hda1 / ext3 noatime,defaults,errors=remount-ro 0 1 | Most filesystems reserve a certain percentage for root, so you can still log in as root and resolve out-of-disk-space issues. Usually this is 5%. 9GB is roughly 5% of 183GB, so this would make sense. You can see how much is reserved using tune2fs: # tune2fs -l /dev/sda1 | grep -i reserved
Reserved block count: 936488
Reserved GDT blocks: 1019
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root) You can modify it using # tune2fs -m 3 /dev/sda1
tune2fs 1.41.9 (22-Aug-2009)
Setting reserved blocks percentage to 3% (561893 blocks) On modern large drives 5% is probably a little excessive, and you probably want to set it lower. You don't want to set it to zero. | {
"source": [
"https://serverfault.com/questions/73051",
"https://serverfault.com",
"https://serverfault.com/users/677/"
]
} |
73,084 | I want to create user accounts named after a domain name. adduser complains that the usernames need to match the NAME_REGEX regular expression. adduser: Please enter a username matching the regular expression configured
via the NAME_REGEX configuration variable. Use the `--force-badname'
option to relax this check or reconfigure NAME_REGEX. I can add the users using useradd without complaint.
Is there a reason that I shouldn't modify the regular expression to allow . , - and _ ? What characters will cause problems and shouldn't be allowed in usernames? This is the default NAME_REGEX . NAME_REGEX="^[a-z][-a-z0-9]*\$" | My advice to you is to follow the standard recommended by the default NAME_REGEX. You can actually put nearly anything in a user name under *NIX but you may encounter odd problems with library code that makes assumptions. Case in point: https://web.archive.org/web/20170928165345/http://blog.endpoint.com/2008/08/on-valid-unix-usernames-and-ones-sanity.html My question to you: do you have a lot of domain names that would collide with each other if you stripped out the unusual punctuation? For example, do you have both "QUALITY-ASSURANCE" and QUALITYASSURANCE" as domain names? If not, you could simply adopt a policy of stripping out the unusual characters and using what's left as the user name. Also, you could use the "real name" section of the GECOS field in the /etc/passwd information to store the original, unmodified domain name, and scripts could extract it pretty easily. | {
"source": [
"https://serverfault.com/questions/73084",
"https://serverfault.com",
"https://serverfault.com/users/1482/"
]
} |
73,163 | I'm using the wget program, but I want it not to save the html file I'm downloading. I want it to be discarded after it is received. How do I do that? | You can redirect the output of wget to /dev/null (or NUL on Windows): wget http://www.example.com -O /dev/null The file won't be written to disk, but it will be downloaded. | {
"source": [
"https://serverfault.com/questions/73163",
"https://serverfault.com",
"https://serverfault.com/users/20346/"
]
} |
73,250 | I have a box on Linode that's going through weird behavior. Every now and then CPU and disk I/O will shoot to 100% and the server becomes unresponsive and has to be rebooted. I'd like to investigate what's going on, but I don't know how to find out who's responsible for all that CPU and I/O. I'm running Gentoo with kernel 2.6.18. | You could try something like this: while true; do ps -eo pcpu,pid,user,args | sort -k1 -rn | head -10 >> logfile.txt; printf "\n" >> logfile.txt; sleep 3; done That would show you the top ten processes in terms of CPU usage. You can change the number of processes shown by changing the 10 in "head -10" to a different number, and how often it updates by changing the 3 in "sleep 3" or taking out the "sleep 3" part entirely. | {
"source": [
"https://serverfault.com/questions/73250",
"https://serverfault.com",
"https://serverfault.com/users/15227/"
]
} |
73,319 | I'm wondering if there is a way to log commands received by the server. It can be all SSH commands, as long as it includes information on commands related to file transfer. I'm having issues with an SFTP client and the creator is asking for logs, but I am unable to find any existing logs. I'm looking to log on both or either CentOS or OS X (although I suspect if it's possible, it'd be similar on both). | OpenSSH versions 4.4p1 and up (which should include the latest version with CentOS 5) have SFTP logging capability built in - you just need to configure it. Find this in your sshd_config (in centos, file /etc/ssh/sshd_config ): Subsystem sftp /usr/libexec/openssh/sftp-server and change it to: Subsystem sftp /usr/libexec/openssh/sftp-server -l INFO INFO is just one level of detail over what you're seeing by default - it provides detailed information regarding file transfers, permission changes, etc. If you need more info, you can adjust the log level accordingly. The various levels (in order of detail) are: QUIET, FATAL, ERROR, INFO, VERBOSE, DEBUG, DEBUG1, DEBUG2, and DEBUG3 Anything over VERBOSE is probably more information than you're looking for, but it might be useful. Finally restart the SSH service to update the changes (centos): systemctl restart sshd | {
"source": [
"https://serverfault.com/questions/73319",
"https://serverfault.com",
"https://serverfault.com/users/25/"
]
} |
73,327 | I am running SQL server 2008 developer edition on windows vista home premium. I created a reporting services project that was built successfully in BIDS. When I try to deploy it it gives the following error: Error rsAccessDenied : The permissions granted to user 'COMP\MYSELF' are insufficient for performing this operation. The MYSELF account is the only account on the system. It has administrator rights. The reporting service is running with the LocalSystem service account. If I log in with the MYSELF account into reportmanager, I cannot see the site settings tab. Without the site settings tab, how do I add or change the roles for MYSELF account. In summary, please help me to open the reportmanager in the browser with the site settings link so that I can change the role of the user account. | OpenSSH versions 4.4p1 and up (which should include the latest version with CentOS 5) have SFTP logging capability built in - you just need to configure it. Find this in your sshd_config (in centos, file /etc/ssh/sshd_config ): Subsystem sftp /usr/libexec/openssh/sftp-server and change it to: Subsystem sftp /usr/libexec/openssh/sftp-server -l INFO INFO is just one level of detail over what you're seeing by default - it provides detailed information regarding file transfers, permission changes, etc. If you need more info, you can adjust the log level accordingly. The various levels (in order of detail) are: QUIET, FATAL, ERROR, INFO, VERBOSE, DEBUG, DEBUG1, DEBUG2, and DEBUG3 Anything over VERBOSE is probably more information than you're looking for, but it might be useful. Finally restart the SSH service to update the changes (centos): systemctl restart sshd | {
"source": [
"https://serverfault.com/questions/73327",
"https://serverfault.com",
"https://serverfault.com/users/16231/"
]
} |
73,628 | Is there a way to find the fully qualified domain name of a Windows XP box? Being unfamiliar with Windows I would describe what I'm looking for as the equivalent of the command hostname --fqdn available in Linux. | You can find it in the system properties ("Computer name" tab). With the command line, you can run IPCONFIG /ALL and have a look at the "Host name" and "Primary DNS suffix" fields. | {
"source": [
"https://serverfault.com/questions/73628",
"https://serverfault.com",
"https://serverfault.com/users/22725/"
]
} |
73,812 | In any default installation, Apache 2 comes with keepAlive off, but looking at another server, the keepAlive module was turned on. So, how do I know if keepAlive is right for me? Where can I find some good examples about configure this? | There are 2 good answers already, but the perhaps most important real-life issue is not mentioned yet. First off, the OP might want to read the 2 preceding answers and this little blog post to understand what keepalives are. (The author doesn't elaborate on the part about TCPI/IP getting "faster" the longer the connection is open. It is true, longer-lasting connections benefit from IP window scaling , but the effect isn't significant unless the files are large, or the bandwith-delay product is unusually large.) The big argument against HTTP Keepalive when using Apache is that it blocks Apache processes. I.e. a client using keepalives will prevent 'his' Apache process from serving any other clients, until the client closes the connection or the timeout is reached. In the same span of time, this Apache instance could have served many other connections. Now, a very common Apache configuration is the Prefork MPM and a PHP / Perl / Python interpreter, and application code in the mentioned language. In this case each Apache process is "heavy" in the sense that it occupies several megabytes of RAM (Apache linked with interpreter and application code). This, together with the blocking of each keepalive'd Apache instance, is inefficient. A common workaround is to use 2 Apache servers (both on the same physical server, or on 2 servers, as needed) with different configurations: one "heavy" with mod_php (or whatever programming language is used) for dynamic content, with keepalives off . one "lightweight" with a minimal set of modules, for serving static content (image, css, js etc), with keepalives on . You can then expand on this separation of dynamic and static content when needed , for example by: using an event-driven server for static content, such as nginx . using a CDN for static content (could do all static content serving for you) implementing caching of static and/or dynamic content Another approach with regards to avoid blocking Apache is to use a load balancer with smarter connection handling, such as Perlbal . .. and much more. :-) | {
"source": [
"https://serverfault.com/questions/73812",
"https://serverfault.com",
"https://serverfault.com/users/4884/"
]
} |
74,023 | I want to allow all LAN traffic to my Ubuntu server. I have read the documentation and see the command, but when I try to edit the command for my IP range I get an error. How can I allow all traffic starting at 192.168.15.0 - 192.168.15.255? sudo ufw allow from 192.168.15.0/255
ERROR: Bad source address It seems like the 15 (third octet) is causing the error. Almost like UFW does not expect a LAN to have a unique IP set. Thank you | sudo ufw allow from 192.168.15.0/24 | {
"source": [
"https://serverfault.com/questions/74023",
"https://serverfault.com",
"https://serverfault.com/users/9388/"
]
} |
74,042 | I'm looking for a command line tool which gets an IP address and returns the host name, for Windows. | The command you are looking for is called nslookup; it works fine for reverse lookups IFF someone has configured a reverse zone file, which they don't always do. | {
"source": [
"https://serverfault.com/questions/74042",
"https://serverfault.com",
"https://serverfault.com/users/22839/"
]
} |