source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
91,712 | We are trying to move all our websites we host to CNAMES as we are planning on moving servers in the new year and would like the ability to move some clients to one server and other clients somewhere else. We were planning on giving clients a unique CNAME which we can then change at a later date. (We have other reasons for doing this now but that is the main one) We have been testing out this theory with a few of our own domains and it seemed to be fine. However when checking the MX records on a domain I got the CNAME value back rather than the MX record. Sadly all of these domains are done via control panels, but I am guessing they are just writing zone files for me. I want to create 2 CNAMEs for the company.com company.com. IN CNAME client.dns.ourserver.com
www IN CNAME client.dns.ourserver.com The MX record is something like the following: company.com IN MX 10 mail.company.com We have an A record for mail.company.com Doing: host -t mx company.com Returns the CNAME value rather than the mx record. Is this expected behaviour? I have managed to get the above configuration working with the 123-reg.co.uk control panel, but not sure if that is more luck than anything. | This is a common error. You cannot use a CNAME RR for your root domain (e.g. company.com) and define additional resource records for the same zone. See Why can't I create a CNAME record for the root record? and RFC1034 section 3.6.2 for details: If a CNAME RR is present at a node, no
other data should be present; this
ensures that the data for a canonical
name and its aliases cannot be
different. | {
"source": [
"https://serverfault.com/questions/91712",
"https://serverfault.com",
"https://serverfault.com/users/25999/"
]
} |
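The layout below is a minimal sketch of the fix implied by the answer above: keep an A record (not a CNAME) at the zone apex so the MX record can coexist with it. The hostnames are the question's placeholders and the 192.0.2.x addresses come from the documentation range, so substitute real values.

```bash
# Zone records that avoid the apex-CNAME conflict (illustrative values only):
#   company.com.       IN A      192.0.2.10                      ; apex stays an A record
#   www.company.com.   IN CNAME  client.dns.ourserver.com.
#   company.com.       IN MX 10  mail.company.com.
#   mail.company.com.  IN A      192.0.2.20
# Verify that the MX lookup no longer returns the CNAME:
host -t mx company.com
host -t a company.com
```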
92,181 | I'm using the Postfix mail server and I have 6 IPs available. I'd like to use another IP for the Postfix mail server for sending mail than the web server uses. How can I do this? My postfix version is 2.3.3. For example:
main IP: 66.66.66.66
other IP: 66.66.66.67 | You want smtp_bind_address=66.66.66.67 and inet_interfaces=all or inet_interfaces=eth(whatever) that 66.66.66.67 is on. Make that change, then stop/start postfix. You can't just reload if you're changing inet_interfaces | {
"source": [
"https://serverfault.com/questions/92181",
"https://serverfault.com",
"https://serverfault.com/users/28449/"
]
} |
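A non-interactive way to apply the change described above, assuming a stock Postfix layout; the IP is the question's example, and `postconf -e` simply rewrites main.cf for you.

```bash
postconf -e 'smtp_bind_address = 66.66.66.67'
postconf -e 'inet_interfaces = all'
# A full stop/start (not just a reload) is needed when inet_interfaces changes:
postfix stop
postfix start
```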
92,575 | I've searched for details on how to do this but I've been unsuccessful - I wondered if someone could offer up some advice. So, let's say I have 2 network cards (LAN and 3G in my instance), both assigned dynamic IP addresses. The LAN interface is my corporate LAN, and I'd like to use the 3G interface for all other access (ie, t'internet!). I have little networking experience, but my feeling is that I should be able to make the 3G card the default gateway, and then force all traffic for a set of known subnets through the LAN interface. Here's a route print ===========================================================================
Interface List
40...........................Vodafone Mobile Connect
12...00 16 cf 87 71 22 ......Dell Wireless 1500 Draft 802.11n WLAN Mini-Card
11...00 15 c5 58 47 24 ......Broadcom NetXtreme 57xx Gigabit Controller
24...00 50 56 c0 00 01 ......VMware Virtual Ethernet Adapter for VMnet1
25...00 50 56 c0 00 08 ......VMware Virtual Ethernet Adapter for VMnet8
1...........................Software Loopback Interface 1
26...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter
13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
21...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
23...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #4
28...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #6
===========================================================================
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 10.183.148.5 10.183.148.157 4235
0.0.0.0 0.0.0.0 10.183.148.6 10.183.148.157 4235
0.0.0.0 0.0.0.0 10.183.148.7 10.183.148.157 4235
0.0.0.0 0.0.0.0 On-link 10.57.175.79 31
10.57.175.79 255.255.255.255 On-link 10.57.175.79 286
10.183.148.0 255.255.255.0 On-link 10.183.148.157 4491
10.183.148.157 255.255.255.255 On-link 10.183.148.157 4491
10.183.148.255 255.255.255.255 On-link 10.183.148.157 4491
127.0.0.0 255.0.0.0 On-link 127.0.0.1 4531
127.0.0.1 255.255.255.255 On-link 127.0.0.1 4531
127.255.255.255 255.255.255.255 On-link 127.0.0.1 4531
169.254.0.0 255.255.0.0 On-link 10.183.148.157 4511
169.254.255.255 255.255.255.255 On-link 10.183.148.157 4491
192.168.6.0 255.255.255.0 On-link 192.168.6.1 4501
192.168.6.1 255.255.255.255 On-link 192.168.6.1 4501
192.168.6.255 255.255.255.255 On-link 192.168.6.1 4501
192.168.73.0 255.255.255.0 On-link 192.168.73.1 4501
192.168.73.1 255.255.255.255 On-link 192.168.73.1 4501
192.168.73.255 255.255.255.255 On-link 192.168.73.1 4501
224.0.0.0 240.0.0.0 On-link 127.0.0.1 4531
224.0.0.0 240.0.0.0 On-link 10.183.148.157 4492
224.0.0.0 240.0.0.0 On-link 192.168.6.1 4502
224.0.0.0 240.0.0.0 On-link 192.168.73.1 4502
224.0.0.0 240.0.0.0 On-link 10.57.175.79 31
255.255.255.255 255.255.255.255 On-link 127.0.0.1 4531
255.255.255.255 255.255.255.255 On-link 10.183.148.157 4491
255.255.255.255 255.255.255.255 On-link 192.168.6.1 4501
255.255.255.255 255.255.255.255 On-link 192.168.73.1 4501
255.255.255.255 255.255.255.255 On-link 10.57.175.79 286
===========================================================================
Persistent Routes:
None So, interface 40 is my 3G card, and interface 11 is my LAN card. You can see that (I think) I have two default routes currently but the 3G wins because of the lower metric? I need to force all 10.183.x.x traffic over the LAN interface. | The command you're looking for is route add: route | Microsoft Docs For your setup, I think the syntax is: route add 10.183.0.0 mask 255.255.0.0 10.183.148.5 This will send all the traffic for 10.183.x.x to the next hop address of 10.183.148.5, which your system already knows is off of your Ethernet NIC, and any traffic that doesn't match a route will be grabbed by your default route and head through your 3G connection. It also looks like your network assigns multiple routers, so you might want to double it up and add the routes for 10.183.148.6 and .7 as well. You might need to be careful if your network has stuff not in the 10.183 range; you may need to add more routes. You may also be able to get away with routing all of 10.0.0.0/8 to your corporate network, since Windows will have a more specific route, but I'm not 100% sure on that since your 3G card is giving you an IP in the 10.x.x.x range. | {
"source": [
"https://serverfault.com/questions/92575",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
92,932 | I have a few Ubuntu servers (8.10, 9.10) that are set to automatically install security updates. Sometimes these updates require a restart of the system, and this string is shown in motd : *** System restart required *** To get a notice about these, I plan to write a Nagios test to monitor if the server is in need of a reboot. So, my question: Is there a better way than parsing /etc/motd to find out if a reboot is needed? | Check for the presence of /var/run/reboot-required . | {
"source": [
"https://serverfault.com/questions/92932",
"https://serverfault.com",
"https://serverfault.com/users/7152/"
]
} |
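A minimal Nagios-style plugin sketch for the check discussed above; the exit codes follow the usual plugin convention (0 = OK, 2 = CRITICAL) and the path is the one named in the answer.

```bash
#!/bin/sh
# Report CRITICAL when Ubuntu flags that a reboot is pending.
if [ -f /var/run/reboot-required ]; then
    echo "CRITICAL: system restart required"
    exit 2
else
    echo "OK: no reboot pending"
    exit 0
fi
```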
92,937 | I am using PHP FastCGI SAPI on my web hosting environment to run PHP applications. To spawn FCGI processes I use spawn-fcgi helper program. My problem is whenever I make a change to php.ini file, I have to kill and respawn each FastCGI server for the new configuration to take effect. Is there a way to reload PHP configuration(ie. php.ini directives) without respawning each FastCGI server? I try sending hangup signal (ie. kill -HUP PHPCGIPID ) to the servers but this will result in termination of the servers. | Check for the presence of /var/run/reboot-required . | {
"source": [
"https://serverfault.com/questions/92937",
"https://serverfault.com",
"https://serverfault.com/users/28703/"
]
} |
92,981 | Using Postfix and custom transports I can manage delivery speeds depending on the recipient's domain. (For example, I send max one message per second to *@hotmail.com) I also use similar rules to block bad destinations (htmail.com is blocked right away, avoiding many loops in the queue). However, I'd like to temporarily suspend mail delivery to a destination for 24 or 48 hours (mails to *@gmail.com suspended, everything else delivered). Messages would queue up during this time, and would be delivered only when I want by changing the config. Does anyone know how to do that ? Thanks | Put messages in a HOLD state /etc/postfix/main.cf: smtpd_recipient_restrictions =
...
check_recipient_access hash:/etc/postfix/hold /etc/postfix/hold: gmail.com HOLD
blah.com HOLD Make sure you run postmap hash:/etc/postfix/hold whenever you update the file. If you want to release all messages on hold, use postsuper : # postsuper -H ALL | {
"source": [
"https://serverfault.com/questions/92981",
"https://serverfault.com",
"https://serverfault.com/users/1801/"
]
} |
93,407 | I'm looking for a program that turns an ASCII string into something like the "ascii art" below: .-"^`\ /`^"-.
.' ___\ /___ `.
/ /.---. .---.\ \
| // '-. ___________________________ .-' \\ |
| ;| \/--------------------------// |; |
\ || |\_) Red Hat (_/| || /
\ | \ . \ ; | Enterprise Linux || ; / . / | /
'\_\ \\ \ \ \ | ||/ / / // /_/'
\\ \ \ \| Server Release 5.3 |/ / / //
`'-\_\_\ Codename Tikanga /_/_/-'`
'--------------------------' I don't have a matching example but I would like the string be turned into some multi line text, like: __ __
/ | / |
| | | |
| |--| |
| |--| |
| | | |
|_/ |_/ for the letter H and so on... I would like to use this to show certain warning messages, for example when the user is about to run a script that will delete the production database and so on... Thanks! | $ figlet you want figlet
_ __ _ _ _
_ _ ___ _ _ __ ____ _ _ __ | |_ / _(_) __ _| | ___| |_
| | | |/ _ \| | | | \ \ /\ / / _` | '_ \| __| | |_| |/ _` | |/ _ \ __|
| |_| | (_) | |_| | \ V V / (_| | | | | |_ | _| | (_| | | __/ |_
\__, |\___/ \__,_| \_/\_/ \__,_|_| |_|\__| |_| |_|\__, |_|\___|\__|
|___/ |___/ | {
"source": [
"https://serverfault.com/questions/93407",
"https://serverfault.com",
"https://serverfault.com/users/12665/"
]
} |
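A small illustration of using figlet for the "dangerous script" warning the question describes, assuming the figlet package is installed; the prompt text and confirmation word are invented.

```bash
#!/bin/bash
figlet "WARNING"
echo "This will DROP the production database."
read -r -p "Type 'yes' to continue: " answer
[ "$answer" = "yes" ] || exit 1
# ...destructive commands go here...
```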
93,717 | I'm using both IPv6 and IPv4 in a LAN network containing Slackware 13.0 boxes. How can I set IPv4 as preferred protocol on the workstations in this network? I want to use IPv6 either explicitly or when there are only AAAA records available. For example, if I try to open http://ipv6.org/ from Firefox, I will always connect via IPv6. The situation is the same with other applications. I tried creating /etc/gai.conf and adding the following to it: precedence ::ffff:0:0/96 100 This should control the behavior of getaddrinfo(3) at least in Debian, but it didn't help on Slackware. Any ideas will be appreciated. Thanks in advance! | According to the man page, inserting a precedence value in gai.conf disables the all the other default rules. Try setting all the rules as listed in RFC 3484 (10.3): Prefix Precedence Label
::1/128 50 0
::/0 40 1
2002::/16 30 2
::/96 20 3
::ffff:0:0/96 100 4 | {
"source": [
"https://serverfault.com/questions/93717",
"https://serverfault.com",
"https://serverfault.com/users/28648/"
]
} |
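A sketch of the /etc/gai.conf the answer suggests: restate all of the RFC 3484 default precedence rules and raise only the IPv4-mapped prefix so IPv4 is preferred. Whether glibc on a given Slackware release honours gai.conf depends on the glibc version, so treat this as an assumption to verify.

```bash
cat > /etc/gai.conf <<'EOF'
precedence ::1/128       50
precedence ::/0          40
precedence 2002::/16     30
precedence ::/96         20
precedence ::ffff:0:0/96 100
EOF
```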
93,729 | Is there any technical difference between these 2 editions or is it just how they are licenced? | Here you can find a comprehensive comparison : http://msdn.microsoft.com/en-us/library/cc645993%28SQL.105%29.aspx Please note: the Web edition does come with SQL Profiler and management studio | {
"source": [
"https://serverfault.com/questions/93729",
"https://serverfault.com",
"https://serverfault.com/users/12685/"
]
} |
94,351 | I just want to pause everything. Don't execute anything listed on crontab -l . | First, back up the crontab: crontab -l > my_cron_backup.txt Then you can empty it: crontab -r To restore: crontab my_cron_backup.txt
crontab -l This works only for the crontab of the user who runs these commands, but it does not empty/restore crontabs of other users. My other answer is about suspending launches from all the users. | {
"source": [
"https://serverfault.com/questions/94351",
"https://serverfault.com",
"https://serverfault.com/users/81082/"
]
} |
94,361 | Is there any 3rd party software that you can install on your Windows server that filters spam from your mail server? (Before the user downloads via POP3.) | First, back up the crontab: crontab -l > my_cron_backup.txt Then you can empty it: crontab -r To restore: crontab my_cron_backup.txt
crontab -l This works only for the crontab of the user who runs these commands, but it does not empty/restore crontabs of other users. My other answer is about suspending launches from all the users. | {
"source": [
"https://serverfault.com/questions/94361",
"https://serverfault.com",
"https://serverfault.com/users/2659/"
]
} |
94,503 | So let's say one typoed something in their .bashrc that prevents him (or her) from logging in via ssh (i.e. the ssh login exits because of the error in the file). Is there any way that person could login without executing it (or .bashrc since the one runs the other), or otherwise delete/rename/invalidate the file? Suppose you don't have physical access to the machine, and this is the only user account with the ability to ssh in. For Reference: .bash_profile includes .bashrc : [[ -f ~/.bashrc ]] && . ~/.bashrc Edit: Things I have tried: ssh user@host "rm ~/.bashrc"
scp nothing user@host:/RAID/home/tom/.bashrc
ssh user@host "/bin/bash --norc" All give the error: /RAID/home/tom/.bashrc: line 16: /usr/local/bin/file: No such file or directory
/RAID/home/tom/.bashrc: line 16: exec: /usr/local/bin/file: cannot execute: No such file or directory | ssh -t username@hostname /bin/sh works for me. | {
"source": [
"https://serverfault.com/questions/94503",
"https://serverfault.com",
"https://serverfault.com/users/401/"
]
} |
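Once the `ssh -t ... /bin/sh` trick above gives you a shell that skipped the broken startup files, you can move the bad file aside and log in normally; the user, host, and paths follow the question.

```bash
ssh -t user@host /bin/sh
# then, inside that shell:
#   mv ~/.bashrc ~/.bashrc.broken
#   exit
```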
94,845 | We're subscribed to MSDN at work and I've noticed that among the various windows "editions" there are downloads such as "Windows 7 Ultimate N" and "Windows 7 Ultimate N and KN". What are these versions and what is the difference between "regular" windows 7 and these version? | N is made for the EU market and does not include Windows Media Player. KN is made for the Korean Market and does not include Windows Media Player or an Instant Messenger. VL are volume license editions for business enterprise customers and uses MAK (Multiple Activation Keys) or KMS (Key Management Server) to activate. Everything else is exactly the same as retail editions except for those changes above. | {
"source": [
"https://serverfault.com/questions/94845",
"https://serverfault.com",
"https://serverfault.com/users/934/"
]
} |
95,036 | I'm having a DNS resolving issue that is affecting the performance of my locally hosted web site when browse it on my local machine. If I attach my network's DNS suffix to my local machine name when I go to the URL in my browser, the site has terrible load times (100+ times slower) than without the DNS suffix. I thought I could fix this by using my hosts file to avoid the need for a lookup. I added an entry to my hosts file like this 127.0.0.1 myMachine.MyDnsSuffix But this didn't change the load times, even after a reboot. Although it is not important to resolve this specific problem, I would really like to know why this happens. Also, when I run nslookup on the domain myMachine.MyDnsSuffix , I notice it uses my network's DNS server to find the IP. Could this be related to my problem or am I just mis-understanding how nslookup works? | I believe nslookup is used to test a DNS server itself, as opposed to utilizing your HOSTS file. http://support.microsoft.com/kb/200525 seems to indicate as much. Try just a simple ping. Does ping myMachine.MyDnsSuffix resolve to the loopback address you have specified in your HOSTS file? | {
"source": [
"https://serverfault.com/questions/95036",
"https://serverfault.com",
"https://serverfault.com/users/10973/"
]
} |
95,342 | A Debian Stable (5.0.3) server is running ntpd , and connected to the internet. Still, the system clock is about 5 minutes wrong. $ /etc/init.d/ntp status
NTP server is running.. Relevant parts (I think) of /etc/ntp.conf : driftfile /var/lib/ntp/ntp.drift
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
server 0.europe.pool.ntp.org
server 1.europe.pool.ntp.org
server 2.europe.pool.ntp.org
server 3.europe.pool.ntp.org I know NTP doesn't necessarily bring the clock in time immediately. Still, how many hours or days you need to wait in order to reasonably expect that NTP has done its job and synced the clock? Am I missing some other configuration file or option, or just doing something wrong? Is ntp (instead of e.g. ntpdate ) the right tool for this? Is there any quick way to check if configuration is correct and whether the chosen NTP servers return the correct time? Edit : output of ntpq -p is: remote refid st t when poll reach delay offset jitter
==============================================================================
ns1.nexellent.n .INIT. 16 u - 1024 0 0.000 0.000 0.000
dnscache-madrid .INIT. 16 u - 1024 0 0.000 0.000 0.000
sinister.wzw.tu .INIT. 16 u - 1024 0 0.000 0.000 0.000
dnscache-frankf .INIT. 16 u - 1024 0 0.000 0.000 0.000 Edit 2 : Turns out ntpdate -u 0.europe.pool.ntp.org command ( suggested by brent ) returns 17 Dec 17:37:29 ntpdate[14195]: no server suitable for synchronization found ...even though on other machines that command works fine. So we'll be looking at network/firewall settings for this particular server (which is in a different network, accessed over VPN). Resolution : The culprit wasn't local firewall on our server, but firewall settings somewhere in the surrounding network. So we asked the server hosting provider to allow NTP for our machines, and now it works fine. For example, ntpq -p now returns: remote refid st t when poll reach delay offset jitter
==============================================================================
ns1.eunet.fi 192.36.144.23 2 u 10 64 1 1.043 0.258 0.001
ns2.eunet.fi 62.142.10.44 2 u 9 64 1 0.671 0.135 0.001
ns3.eunet.fi 62.142.10.44 2 u 8 64 1 0.750 0.277 0.001 (We also switched to eunet.fi servers recommenced by the hosting company, but that is beside the point.) The commands in brent's answer were helpful because they made me realise the problem was in network access to the NTP servers, not in NTP configuration itself. Thanks everyone! | Stop ntpd, run ntpdate -u 0.europe.pool.ntp.org 3 times, start ntpd, check up on ntpq -p , delay, offset and jitter should be non-zero. | {
"source": [
"https://serverfault.com/questions/95342",
"https://serverfault.com",
"https://serverfault.com/users/1746/"
]
} |
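The resync sequence from the accepted answer as a single run, plus the query that confirms it worked; the init script path and pool hostname are the ones used in the question, and UDP port 123 must be reachable (the real culprit in this case was an upstream firewall).

```bash
/etc/init.d/ntp stop
ntpdate -u 0.europe.pool.ntp.org   # repeat two or three times
/etc/init.d/ntp start
ntpq -p                            # delay/offset/jitter should now be non-zero
```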
95,404 | Sometimes I forget how the exact syntax of a CMD command looks and then I would like to search my own CMD history. Clearly, within the same session, you can browse it with the up and down arrow keys but what about the history of former CMD sessions? Is there a file, a log the history gets written to or does it all go to digital Nirvana? Thanks! | No, Windows command prompt history can't be saved when a session ends. | {
"source": [
"https://serverfault.com/questions/95404",
"https://serverfault.com",
"https://serverfault.com/users/12665/"
]
} |
95,431 | In a PowerShell script, how can I check if I'm running with administrator privileges? | $currentPrincipal = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
$currentPrincipal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator) (from Command line safety tricks ) | {
"source": [
"https://serverfault.com/questions/95431",
"https://serverfault.com",
"https://serverfault.com/users/10359/"
]
} |
95,444 | I have a project that will generate a huge number of images. Around 1,000,000 to start. They are not large images so I will store them all on one machine at start. How do you recommend storing these images efficiently? (NTFS file system currently) I am considering a naming scheme... to start, all the images will have an incremental name from 1 up
I hope this will help me sort them later if needed, and throw them in different folders. What would be a better naming scheme: a/b/c/0 ... z/z/z/999 or a/b/c/000 ... z/z/z/999 Any ideas on this? | I'd recommend using a regular file system instead of databases. Using a file system is easier than a database: you can use normal tools to access files, file systems are designed for this kind of usage, etc. NTFS should work just fine as a storage system. Do not store the actual path in the database. Better to store the image's sequence number in the database and have a function that can generate the path from the sequence number, e.g.: File path = generatePathFromSequenceNumber(sequenceNumber) (a shell sketch of this function follows this row). It is easier to handle if you need to change the directory structure somehow. Maybe you need to move the images to a different location, maybe you run out of space and you start storing some of the images on disk A and some on disk B, etc. It is easier to change one function than to change paths in the database. I would use this kind of algorithm for generating the directory structure: First pad your sequence number with leading zeroes until you have at least a 12-digit string. This is the name for your file. You may want to add a suffix: 12345 -> 000000012345.jpg Then split the string into 2- or 3-character blocks where each block denotes a directory level. Have a fixed number of directory levels (for example 3): 000000012345 -> 000/000/012 Store the file under the generated directory: Thus the full path and filename for the file with sequence id 12345 is 000/000/012/000000012345.jpg For a file with sequence id 12345678901234 the path would be 123/456/789/12345678901234.jpg Some things to consider about directory structures and file storage: The above algorithm gives you a system where every leaf directory has a maximum of 1000 files (if you have fewer than 1 000 000 000 000 files in total). There may be limits on how many files and subdirectories a directory can contain; for example, the ext3 file system on Linux has a limit of 31998 sub-directories per directory. Normal tools (WinZip, Windows Explorer, command line, bash shell, etc.) may not work very well if you have a large number of files per directory (> 1000). The directory structure itself will take some disk space, so you do not want too many directories. With the above structure you can always find the correct path for the image file by just looking at the filename, if you happen to mess up your directory structures. If you need to access files from several machines, consider sharing the files via a network file system. The above directory structure will not work if you delete a lot of files. It leaves "holes" in the directory structure. But since you are not deleting any files it should be ok. | {
"source": [
"https://serverfault.com/questions/95444",
"https://serverfault.com",
"https://serverfault.com/users/4287/"
]
} |
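A shell sketch of the generatePathFromSequenceNumber() idea from the answer above: zero-pad the id to at least 12 digits and split the first nine characters into three directory levels. The function name and the .jpg suffix are illustrative.

```bash
generate_path_from_sequence_number() {
    local name
    name=$(printf '%012d' "$1")
    printf '%s/%s/%s/%s.jpg\n' "${name:0:3}" "${name:3:3}" "${name:6:3}" "$name"
}

generate_path_from_sequence_number 12345
# -> 000/000/012/000000012345.jpg
```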
95,643 | How can I see when a process started, assuming I know the pid. (On Linux) | If you want only the start time, you can select the field and suppress the header by doing this: ps -p YOURPID -o lstart= the output will look like this: Mon Dec 14 17:17:16 2009 which is ctime(3) format and you can parse it to split out the relevant parts. Other start fields such as start , stime , bsdstart and start_time age the time (after 24 hours only the date is shown, for example). You can, however, use them directly for recently started processes without further parsing: ps -p YOURPID -o stime= which would output something like: 09:26 | {
"source": [
"https://serverfault.com/questions/95643",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
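If you need the start time as an epoch value or an age in seconds, GNU date can parse the ctime-style string that lstart prints; the PID here is a placeholder.

```bash
start=$(ps -p 1234 -o lstart=)
date -d "$start" +%s                                  # start time as epoch seconds
echo $(( $(date +%s) - $(date -d "$start" +%s) ))     # process age in seconds
```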
95,686 | I have a bat file on windows that execute a procdump operation. The issue with the batch file is that I need to cd to the batch file directory first before executing the job, or else the script won't work. How to change to the current batch file directory? I tried the following code in my procdump.bat : cd "%~dp"
procdump -h devenv.exe mydump.txt But it failed, the error message is: The following usage of the path
operator in batch-parameter
substitution is invalid: %~dp" For valid formats type CALL /? or FOR
/? Edit: The answer provided is working, but there is only one catch: if my current directory is different than the batch file directory, then I would get a "The system cannot find the path specified". Anyone has any ideas? | Ok, I think I found here what you mean with %~dp . I think what you really want to do is this: cd /D "%~dp0" (!) But note that this will still not give you the right behaviour when you're trying to execute your batch while the current directory is on another drive as cd doesn't change the active drive. Edit : Apparently (thanks @Yoopergeek ) you can add the /D parameter to the cd command to let it also change the active drive. | {
"source": [
"https://serverfault.com/questions/95686",
"https://serverfault.com",
"https://serverfault.com/users/1605/"
]
} |
96,245 | I recently discovered the 'moreutils' package in Debian (and Ubuntu) . It's a collection of convenient unix tools. One of the commands is 'pee'. The man page says: pee is like tee but for pipes. However it's a short man page, I have filed a bug about it . Does anyone know what it does, how to use it, why one would use it? | Here's what you can do with pee: seq 5 -1 1 > file
cat file |pee 'sort -u > sorted' 'sort -R > unsorted' So pee works with shell pipes instead of files. bash doesn't need pee, it can open shell commands as files: cat file |tee >(sort -u > sorted) >(sort -R > unsorted) | {
"source": [
"https://serverfault.com/questions/96245",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
96,265 | I already googled and read the "to-puppet-or-to-chef-that-is-the-question" article. I'm interested in use cases, real world implementations in which people had choosen one or the other on real problems bases. I'm particularly interested in integration with cobbler issues ( I know puppet is much a standard approach in this direction ); as anybody any experience in cobbler-chef integration ? Thanks in advance | To be honest, I think this comes down to simple viewpoint: Chef seems more of an imperative, programmatic solution, the usage of ruby as the language instantly makes me hope somebody ported it to python, as is the way of the world with all of ruby's ideas. That's not what you want for this sort of thing though. You want to speak to the void where the system will be and declare: "Upon port 80 summon from the north the daemon named nginx. His task is to serve." "A user should exist, his name should be chiggsy and he should be one of the mighty in the group of wheel," "Raise up a wall of fire, thin in the places 80,443,8080" And so on, although perhaps in language less flowery. Puppet supports that paradigm better IMO. I'd have used either one, I had no preference but when it came down to it, declarative suited me better. Puppet. | {
"source": [
"https://serverfault.com/questions/96265",
"https://serverfault.com",
"https://serverfault.com/users/12431/"
]
} |
96,272 | I suspect my server has a huge load of http requests from its clients.
I want to measure the volume of http traffic.
How can I do it with Wireshark?
Or probably there is an alternative solution using another tool? This is how a single http request/response traffic looks in Wireshark.
The ping is generated by the WinAPI function ::InternetCheckConnection(). (Screenshot: http://yowindow.com/shared/ping.png) Thanks! | Ping packets should use an ICMP type of 8 (echo) or 0 (echo reply), so you could use a capture filter of: icmp and a display filter of: icmp.type == 8 || icmp.type == 0 For HTTP, you can use a capture filter of: tcp port 80 or a display filter of: tcp.port == 80 or: http Note that a filter of http is not equivalent to the other two, which will include handshake and termination packets. If you want to measure the number of connections rather than the amount of data, you can limit the capture or display filters to one side of the communication. For example, to capture only packets sent to port 80, use: tcp dst port 80 Couple that with an http display filter, or use: tcp.dstport == 80 && http For more on capture filters, read " Filtering while capturing " from the Wireshark user guide, the capture filters page on the Wireshark wiki, or the pcap-filter(7) man page. For display filters, try the display filters page on the Wireshark wiki. The "Filter Expression" dialog box can help you build display filters. | {
"source": [
"https://serverfault.com/questions/96272",
"https://serverfault.com",
"https://serverfault.com/users/22468/"
]
} |
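For rough volume numbers without clicking through the GUI, tshark (Wireshark's console tool) can apply the same capture filter and print periodic byte/packet totals; the interface name and durations are assumptions to adjust.

```bash
tshark -i eth0 -f "tcp port 80" -q -z io,stat,60 -a duration:300
```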
96,401 | To start off, this is not about loading data from within MySQL itself, but using the command-line tool "mysqlimport". I am using it to load a CSV directly into a table and need to see the warnings it has generated. I cannot seem to get warnings to display with verbose nor debugging turned on. Any ideas? (MySQL 5.0.5) | It's not possible with mysqlimport, however as an alternative you can do the following: mysql --execute="LOAD DATA LOCAL INFILE '$WORKDIR/$table.csv' INTO TABLE $table FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' IGNORE 1 LINES (listOfColumnNames); SHOW WARNINGS" Replace listOfColumnNames with an appropriate seperated list of columns. The magic is (as Eduard previously mentioned) is to execute the LOAD DATA INFILE and the SHOW WARNINGS commands together in the same session, as mysqlimport doesn't provide a way to get at the warnings directly. | {
"source": [
"https://serverfault.com/questions/96401",
"https://serverfault.com",
"https://serverfault.com/users/29774/"
]
} |
96,416 | I run a lot of open source applications including java and tomcat. It seems like most instructions have my applications running from the /var directory. But every once in a while, I also see the /opt directory. While I'm at it, I also see /usr/local/ and even /etc as well. When should I install applications in one folder or the other? Are there pros and cons of each one? Does it have to do with the flavor history (Solaris vs Linux or Red Hat vs Ubuntu)? | The standard for these issues is the Filesystem Hierarchy Standard . It's a rather big document. Basically (and very roughly), the standard paths on Linux are: /bin & /sbin are for vital programs for the OS, sbin being for administrators only ; /usr/bin & /usr/sbin are for not vital programs, sbin being for administrators only ; /var is for living data for programs. It can be cache data, spool data, temporary data (unless it's in /tmp , which is wiped at every reboot), etc. ; /usr/local is for locally installed programs. Typically, it hosts programs that follow the standards but were not packaged for the OS, but rather installed manually by the administrator (using for example ./configure && make && make install ) as well as administrator scripts ; /opt is for programs that are not packaged and don't follow the standards. You'd just put all the libraries there together with the program. It's often a quick & dirty solution, but it can also be used for programs that are made by yourself and for which you wish to have a specific path. You can make your own path (e.g. /opt/yourcompany ) within it, and in this case you are encouraged to register it as part of the standard paths ; /etc should not contain programs, but rather configurations. If your programs are specific to the services provided by the service, /srv can also be a good location for them. For example, I prefer to use /srv/www for websites rather than /var/www to make sure the directory will only contain data I added myself, and nothing that comes from software packages. There are some differences between distributions. For example, RedHat systems use libexec directories when Debian/Ubuntu systems don't. The FHS is mostly used by Linux distributions (I actually don't know any other OS that really complies to it). Other Unix systems don't follow it. For example, BSD systems tend to use /usr/local for packaged programs, which is not the case for Linux. Solaris has very different standard paths. I strongly encourage you to read the FHS document I linked above if you wish to know more about this. | {
"source": [
"https://serverfault.com/questions/96416",
"https://serverfault.com",
"https://serverfault.com/users/15463/"
]
} |
96,810 | Consider a Win 2008 SP2 machine with IIS7. The task is to apply a certificate and host name to the one and only Site on this machine. The site's host headers need to be abc.123.example.com The first step was installing the .pfx to the Personal Store, which was successful. IIS7 finds the cert as available, but won't allow the entry of a host name. The host name textbox is ALWAYS disabled/greyed out, even before selecting my cert. I've even deleted the default port 80 binding. Question: how can I set a host name for this site?
Is it a matter of this cert being a wildcard cert?
I understand that the SSL request comes into the web server, and the host header in the packet is encrypted. Why then would IIS6 allow the host header to be specified, but IIS7 not? Update: The cert isn't part of the problem. I've created a new Site on the machine, and when choosing https binding, the host name textbox is disabled. | You can't do it from the UI, you have to do it from the command line. Here's a nice walk through of the process: http://www.sslshopper.com/article-ssl-host-headers-in-iis-7.html | {
"source": [
"https://serverfault.com/questions/96810",
"https://serverfault.com",
"https://serverfault.com/users/658/"
]
} |
96,815 | I'm trying to add a column to a table with much data in SQL Server 2005, using SSMS. So I browse to the table, select Modify, and add the new column. Then, when I press Save, I get the following warning: Saving Definition Changes to tables
with large amounts of data could take
a considerable amount of time. While
changes are being saved, table data
will not be accessible I'm OK with that, the DB is offline and I have all the time in the world, so I press Yes. However, the operation then proceeds to time out after about 30 seconds with this message: Unable to modify table. Timeout
expired. The timeout period elapsed
prior to completion of the operation
or the server is not responding. Then, when I press OK: User canceled out of save dialog (MS
Visual Database Tools) I don't get that. I have set the execution timeout to 0 (infinite) both in the SSMS connection dialog and under Tools -> Options -> Query Execution -> SQL Server. What is the point of setting an execution timeout if it's just ignored? Does anyone know what timeout value is being used here, and how I can set it? | Sounds like a timeout setting. So your SSMS thinks it takes too long and cancels the connection for you. The SQL Server rolls back. But there is help. You are not the first person to encounter this. See here . For everybody who doesn't want to click the link, here is the prize-winning answer: After hitting the same error, I
stumbled upon the correct setting. In the Management Studio, from the
Tools menu, select Options, then click
"Designers". There is an option called
"Override connection string time-out
value for table designer updates:" In
the "Transaction time-out after:" box,
you will see the magic 30 seconds | {
"source": [
"https://serverfault.com/questions/96815",
"https://serverfault.com",
"https://serverfault.com/users/787/"
]
} |
96,964 | How do I get a list of files that were or will-be installed when I apt-get a package? Conversely, can I find what package(s) caused a particular file to be installed? | Note: in the following commands, a command beginning with 'root#' means it needs to be run as root. To find which files were installed by a package, use dpkg -L : $ dpkg -L $package apt-file can tell you which files will be installed by a package before installing it: root# apt-get install apt-file
root# apt-file update
$ apt-file list $package Or if you have the package as a .deb file locally already, you can run dpkg on it: $ dpkg --contents $package.deb To find which package provides a file that is already on your system, use: $ dpkg -S /path/to/file To find which package provides a file that is not currently on your system, use apt-file again: $ apt-file search /path/to/file | {
"source": [
"https://serverfault.com/questions/96964",
"https://serverfault.com",
"https://serverfault.com/users/29972/"
]
} |
97,270 | We're currently using Nagios to monitor about 20 Linux machines (services and functional links). I just find out about Munin and I wonder if this is a Nagios replacement, or it can be used together with Nagios? I don't want to spend hours setting it up, just to discover that I already have all that functionality with Nagios. I'd especially appreciate if someone who used both programs can give some insight about your experience. Which is better for which task and what do you recommend to use? Note: we also used Cacti for some time. The main problem we have with Nagios is that setup takes too long and isn't very straightforward. | Munin and Nagios are really different tools. From the official Munin website : Munin is a networked resource monitoring tool that can help analyze
resource trends and "what just happened to kill our performance?"
problems. It is designed to be very plug and play. A default
installation provides a lot of graphs with almost no work. Nagios is a monitoring (alerting) tool. Munin could be considered a replacement for Cacti. We use both of them: Nagios and Munin. Nagios tells us in real time if something is wrong: like a web server being down, a high database load average, etc. Using Munin you can see the trends and the history of why that happened. | {
"source": [
"https://serverfault.com/questions/97270",
"https://serverfault.com",
"https://serverfault.com/users/8741/"
]
} |
97,763 | How would I be able to compress subdirectories into separate archives? Example: directory
subdir1
subdir2 Should create subdir1(.tar).gz and subdir2(.tar).gz | This small script seems to be your best option, given your requirements: cd directory
for dir in */
do
base=$(basename "$dir")
tar -czf "${base}.tar.gz" "$dir"
done It properly handles directories with spaces in their names. | {
"source": [
"https://serverfault.com/questions/97763",
"https://serverfault.com",
"https://serverfault.com/users/28449/"
]
} |
98,268 | I realise this may be a stupid question for some, but it's something I've always wondered about. Let's say we have two gigabit switches and all of the devices on the network are also gigabit. If 10 computers connected to switch A need to transfer large amounts of data to a server on Switch B (at the same time), is the maximum transfer speed of each connection limited by the bandwidth of the connection between the two switches? In other words, would each computer only be able to transfer at a speed of one gigabit divided by the 10 machines trying to use the "bridge" between switches? If so, are there any workarounds so that every device can use its maximum speed from point to point? | Yes. Using single cables to "cascade" multiple Ethernet switches together does create bottlenecks. Whether or not those bottlenecks are actually causing poor performance, however, can only be determined by monitoring the traffic on those links. (You really should be monitoring your per-port traffic statistics. This is yet one more reason why that's a good idea.) An Ethernet switch has a limited, but typically very large, internal bandwidth to perform its work within. This is referred to as the switching fabric bandwidth and can be quite large, today, on even very low-end gigabit Ethernet switches (a Dell PowerConnect 6248, for example, has a 184 Gbps switching fabric). Keeping traffic flowing between ports on the same switch typically means (with modern 24 and 48 port Ethernet switches) that the switch itself will not "block" frames flowing at full wire speed between connected devices. Invariably, though, you'll need more ports than a single switch can provide. When you cascade (or, as some would say, "heap") switches with crossover cables you're not extending the switching fabric from the switches into each other. You're certainly connecting the switches, and traffic will flow, but only at the bandwidth provided by the ports connecting the switches. If there's more traffic that needs to flow from one switch to another than the single connection cable can support, frames will be dropped. Stacking connectors are typically used to provide higher speed switch-to-switch interconnects. In this way you can connect multiple switches with a much less restrictive switch-to-switch bandwidth limitation. (Using the Dell PowerConnect 6200 series again as an example, their stack connections are limited in length to under 0.5 meters, but operate at 40 Gbps.) This still doesn't extend the switching fabric, but it typically offers vastly improved performance as compared to a single cascaded connection between switches. There were some switches (Intel 500 Series 10/100 switches come to mind) that actually extended the switching fabric between switches via stack connectors, but I don't know of any that have such a capability today. One option that other posters have mentioned is using link aggregation mechanisms to "bond" multiple ports together. This uses more ports on each switch, but can increase switch-to-switch bandwidth. Beware that different link aggregation protocols use different algorithms to "balance" traffic across the links in the aggregation group, and you need to monitor the traffic counters on the individual interfaces in the aggregation group to ensure that balancing is really occurring. (Typically some kind of hash of the source / destination addresses is used to achieve a "balancing" effect. 
This is done so that Ethernet frames arrive in the same order since frames between a single source and destination will always move across the same interfaces, and has the added benefit of not requiring queuing or monitoring of traffic flows on the aggregation group member ports.) All of this concern about port-to-port switching bandwidth is one argument for using chassis-based switches. All the linecards in, for example, a Cisco Catalyst 6513 switch, share the same switching fabric (though some line cards may, themselves, have an independent fabric). You can jam a lot of ports into that chassis and get more port-to-port bandwidth than you could in a cascaded or even stacked discrete switch configuration. | {
"source": [
"https://serverfault.com/questions/98268",
"https://serverfault.com",
"https://serverfault.com/users/15623/"
]
} |
98,289 | Trying to SSH using a user account; root account works but I am specifying a private key. User account simply gives "Permission denied (publickey,gssapi-with-mic) without prompting me for my password at all. How can I fix this so I can log in with a password, and NOT a key? I don't want to use a private key for this right now, but a regular account. | The server has setting PasswordAuthentication no Change it to yes and after a restart you'll be able to use password authentication. | {
"source": [
"https://serverfault.com/questions/98289",
"https://serverfault.com",
"https://serverfault.com/users/5169/"
]
} |
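One way to script the change described above; the sed pattern assumes the directive is currently an uncommented `PasswordAuthentication no`, and the service name varies by distribution (ssh, sshd, or a systemctl unit).

```bash
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart
```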
98,487 | Is there any practical difference between using ln -s or mount --bind ? I want to move some folders to another partition, without changing their daemon setting, and wonder what approach I should take. I prefer ln -s as it requires minimum setup (no /etc/fstab modifications), but perhaps there is a reason why it's not common? | Hell yes. If you execute the ln -s you create a symbolic link, which is an inode pointing to a certain filesystem object, which is why symlinks can traverse filesystems and hard links cannot: hard links do not have their own inode. If you mount a filesystem with --bind , you create a second mountpoint for a device or filesystem. If you envision a symlink as a redirect, then envision a --bind mounted filesystem as creating another gateway to data. Symlinks and bind mounts are a whole different ballgame. The --bind mount seems a bit more robust to me and it probably is a bit faster than working with a symlink. On the other hand, there are no serious drawbacks to using the symlink, as the performance hit will be small (if it at all exists). Edit : I've been thinking about this, and the performance hit might be a bit bigger than I originally thought. If you have an application that reads a lot of different files, then every new file that is opened will require an extra read. Some research here suggests that my assumption is correct, so if you have an IO heavy application running there, consider the --bind option to mount above the symlink solution. The reason it is not common, is probably the fact that a symlink is visible in an ls , whereas a bind mount is only visible when looking at /proc/mounts or /etc/mtab (which is what the mount command does, if it is executed without parameters). Other than that, I don't think there are any issues. I'd be interested if there are, though. Addition : another issue with ln -s is that for some applications, when the path gets dereferenced, it may cause the application to balk if it "expects" certain items to be in specific places. | {
"source": [
"https://serverfault.com/questions/98487",
"https://serverfault.com",
"https://serverfault.com/users/13323/"
]
} |
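The two approaches from the answer side by side, using invented paths (/data is the new partition); only the fstab line makes the bind mount survive a reboot.

```bash
mv /var/lib/myapp /data/myapp && mkdir /var/lib/myapp
# 1) symlink (drop the empty placeholder directory first):
#    rmdir /var/lib/myapp && ln -s /data/myapp /var/lib/myapp
# 2) bind mount, plus an fstab entry so it persists:
mount --bind /data/myapp /var/lib/myapp
echo '/data/myapp /var/lib/myapp none bind 0 0' >> /etc/fstab
```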
98,745 | I'm trying to syncronize files from a remote server that is not reliable, meaning the connection tends to fail "randomly" with rsync: connection unexpectedly closed Rsync is called with --partial, so I'd like to be able to call rsync in a loop until files are fully transfered. There doesn't seem to be a flag to tell rsync to retry. What would be the best way to script it? A bash for loop? | If you are syncing everything in one sync, call rsync in a loop until rsync gives you a successful return code. Something like: RC=1
while [[ $RC -ne 0 ]]
do
rsync -a .....
RC=$?
done This will loop, calling rsync, until it gives a return code of 0. You may want to add a sleep in there to keep from DOSing your server. | {
"source": [
"https://serverfault.com/questions/98745",
"https://serverfault.com",
"https://serverfault.com/users/3761/"
]
} |
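A variant of the loop above with a retry cap and a pause between attempts, so a permanently dead server doesn't spin forever; the retry count, sleep interval, and paths are arbitrary.

```bash
MAX_RETRIES=20
i=0
until rsync -a --partial user@remote:/src/ /dest/; do
    i=$((i+1))
    [ "$i" -ge "$MAX_RETRIES" ] && { echo "giving up after $i attempts" >&2; exit 1; }
    sleep 30
done
```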
98,749 | Okies, I have successfully installed the XAMPP and added virtual hosts and am able to make database calls and stuff. The problem I am facing is while trying to enable the memcache module.
Currently trying to configure using these links. h??p://theindexer.wordpress.com/2008/06/02/installing-a-lamp-stack-on-linux-using-xampp-for-linux/ h??p://theindexer.wordpress.com/2008/06/11/installing-xdebug-on-xampp-for-linux/ http://lynxbites.blogspot.com/2009/09/steps-to-install-memcache.html The problem I am facing is while starting the phpize from /opt/lampp/bin/phpize I am getting the following error. Cannot find config.m4.
Make sure that you run '/opt/lampp/bin/phpize' in the top level source directory of the module Can any one tell me wat to do for this error and if anyone has any useful links for configuring memcache on linux using XAMPP please paste here. Thanks. | If you are syncing everything in one sync, call rsync in a loop until rsync gives you a successful return code. Something like: RC=1
while [[ $RC -ne 0 ]]
do
rsync -a .....
RC=$?
done This will loop, calling rsync, until it gives a return code of 0. You may want to add a sleep in there to keep from DOSing your server. | {
"source": [
"https://serverfault.com/questions/98749",
"https://serverfault.com",
"https://serverfault.com/users/30545/"
]
} |
98,900 | On ubuntu server, I've noticed more than once now that after adding a user to a group that user doesn't have group permissions until I reboot the system. For example: User 'hudson' needs permission to read directory 'root:shadow /etc/shadow'
So I add hudson to the shadow group. hudson still cannot read. So, I 'sudo shutdown -h -r now' and when the system comes up again user hudson can read. Is a reboot required or is there a better way to get permissions applied after adding the user to the group? | I was looking for a solution, came across this post, and then later found one! I'd thought I'd actually offer a solution so others can benefit. Logging in and out is so 1995. Taken from: https://arkaitzj.wordpress.com/2010/03/08/linux-add-user-to-a-group-without-logout/ So if you needed to get permissions for the cdrom group you just added your user to: newgrp cdrom for example So the steps would be: #adduser my_user cdrom and then $newgrp cdrom I've confirmed that it works. A simple $groups check from the CLI shows the user is in the group. And a quick execution with needed privileges from that group works. No need to kill your windows and login and logout! Hope that helps others! Additional Information (based on jytou's helpful comment): "[This] solution will work only for the current opened shell. If you have another shell open, you'll need to use the same command to take the changes into account." | {
"source": [
"https://serverfault.com/questions/98900",
"https://serverfault.com",
"https://serverfault.com/users/14909/"
]
} |
98,951 | Does HTTPS use TCP or UDP? | HTTPS can run over any reliable stream transport protocol. Normally that's TCP, but it could also be SCTP. It is NOT expected to run over UDP, which is an unreliable datagram protocol (in fact, while that's not its official name, that's a good way to remember what it is). The IANA assignment for UDP is historical; at the time, nearly every protocol was assigned both the TCP and UDP port numbers, even if it was expected that it would only ever use one. There has been discussion of merging the port number registries, and only ever assigning one port to one protocol from here on. That is to make it easier to deploy future transport protocols that would otherwise need their own registries. I'm not aware of how that discussion concluded. | {
"source": [
"https://serverfault.com/questions/98951",
"https://serverfault.com",
"https://serverfault.com/users/23004/"
]
} |
99,787 | I added a new linux user by doing a useradd -d /var/www/mywebsite.com -m newuser
passwd newuser I tested the account by logging into the server with the following command ssh [email protected] After login, the shell doesn't let me do tab autocomplete. For example, I would type /var/www/myweb{tab}, but the tab button only enters a space into the shell. Also, pressing the up and down arrow keys does not give me the most recent shell commands entered. Everything works perfectly when I ssh login as root. But it doesn't work when I ssh login as newuser. Did I miss something? Thanks | Check what shell 'newuser' is using. Make sure it's one that actually supports tab completion (like bash or zsh). You can determine what shell the user is using using the following command # getent passwd rodjek
rodjek:x:1001:1001:x:/home/rodjek:/bin/zsh You can change the users shell using the chsh command # chsh -s /bin/bash rodjek | {
"source": [
"https://serverfault.com/questions/99787",
"https://serverfault.com",
"https://serverfault.com/users/14896/"
]
} |
100,064 | Say you own a abcd.com and you only want to use it to send and receive email via [email protected] . You don't want to provide any kind of website. Can you set up the DNS records to include an "MX" record and no "A" record? Is this enough for sending and receiving email to work? Is this valid in terms of whatever standard defines these things? Edit: To clarify, the mail server (terminology?) would not be hosted on abcd.com or *.abcd.com | As long as the system pointed at by the MX record has an A record itself, then yes. For example: example.com can have a MX record pointing at mail.otherdomain.com . As long as the name mail.otherdomain.com itself is resolvable to an IP address, this is a valid configuration for example.com . Strictly speaking, mail.otherdomain.com should be an A record with the IP address in order to be RFC-compliant. But this A record will be in the otherdomain.com domain, not in example.com . Addressing your example, in order for [email protected] to be a valid email address, mail.otherdomain.com needs to be configured to handle inbound mail for [email protected] . | {
"source": [
"https://serverfault.com/questions/100064",
"https://serverfault.com",
"https://serverfault.com/users/921/"
]
} |
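A few sanity checks for an MX-only domain, using the question's placeholder names; dig could equally be host or nslookup.

```bash
dig +short MX abcd.com              # e.g. "10 mail.otherdomain.com."
dig +short A  abcd.com              # may return nothing - fine if no website is served
dig +short A  mail.otherdomain.com  # must resolve, or mail delivery will fail
```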
100,490 | Some of my coworkers were surprised when I told them that I can back up an SQL Server database while it's still running and wondered how that's possible. I know that SQL Server is capable of backing up a database while it is still online but I'm not sure how to explain why it's possible. My question is what effect does this have on the database? If data is modified (by an insert, update, or delete) while the backup is running, will the backup contain those changes or will it be added to the database afterwards? I'm assuming that the log file plays an important role here but I'm not quite sure how. edit: Just as a note, my case involves backing up the databases using SQL Server Agent and the effects of database modifications during this process. | Full backup contains both the data and log. For data, it simply copies each page of the database into the backup, as is at the moment it reads the page. It then appends into the backup media all the 'relevant' log. This includes, at the very least, all the log between the LSN at the start of the backup operation and the LSN at the end of the backup operation. In reality there is more log usually, as it has to include all active transactions at the start of backup and log needed by replication. See Debunking a couple of myths around full database backups . When the database is restored, all the data pages are copied out into the database files, then all the log pages are copied out into the log file(s). The database is inconsistent at this moment, since it contains data page images that may be out of sync with one another. But now a normal recovery is run. Since the log contains all the log during the backup, at the end of the recovery the database is consistent. | {
"source": [
"https://serverfault.com/questions/100490",
"https://serverfault.com",
"https://serverfault.com/users/5257/"
]
} |
100,707 | Are there any practical benefits in using rsyncd compared to rsync over ssh? Does it really increase speed, stability, anything? | I think the big difference is that if you're using rsyncd on the server end, instead of rsync over ssh , the server already knows what it has, so building the file lists to determine what needs to be transferred is much simpler. It won't make a difference if you're just pushing around a few files, but if you're making, for example, CPAN available over rsync, you don't want to have to build the file list on the source side every time. | {
"source": [
"https://serverfault.com/questions/100707",
"https://serverfault.com",
"https://serverfault.com/users/12097/"
]
} |
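A minimal illustration of the daemon setup the answer refers to; the module name and paths are invented, and the client pulls over the rsync protocol instead of ssh.

```bash
# /etc/rsyncd.conf on the server:
#   [mirror]
#       path = /srv/mirror
#       read only = yes
#
# Client side:
rsync -av rsync://server.example.com/mirror/ /local/mirror/
```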
100,731 | I'm trying to connect to a SQL Server database and I get this error: Database 'XXX' is in transition. Try the statement later. I cancelled a long query earlier today but for some reason I can't get the database to get back up. Is there anything I can do? | This can happen sometimes if you try to take a DB offline or perform certain other operations and they fail. Sometimes the lock can be cleared if you close the SSMS instance that attempted the operation, then reopen it. Close and reopen any SSMS instances attached to the server. It can also occur if you try to take the DB offline while a long query is running. Check the activity monitor and try killing any long-running queries, if applicable and safe. If neither of the above works, close all SSMS instances, then restart SQL through the SQL Server Configuration Manager. Usually that will cure it, although the DB may be in recovery mode at first. | {
"source": [
"https://serverfault.com/questions/100731",
"https://serverfault.com",
"https://serverfault.com/users/29910/"
]
} |
100,978 | I'm working on getting some servers running in the EC2 environment and I'm noticing some errors with ntpd trying to sync (using CentOS). I was reading on this site and the impression I get is that I don't need to run ntpd since EC2 is Xen and the host takes care of the time for the virtual servers. http://support.ntp.org/bin/view/Support/KnownOsIssues Is this accurate or do I need to figure out how to get around the error I'm having? cap_set_proc() failed to drop root privileges It looks like it involves building a new kernel and other stuff I'd rather not do if I don't have to. | Yes, you need to run ntpd. My clock was 18.5 seconds off on an EC2 micro instance (running Ubuntu UEC Maverick) with 5 days uptime. After shutting down and starting again, it was back to normal, so there seems to be some kind of drift. This is despite /sys/devices/system/clocksource/clocksource0/current_clocksource saying xen , by the way. I'm not sure why it's not working. Installing the ntp package has solved the problem for me. The clock stays accurate, and there's nothing suspicious in the syslog that might indicate a conflict with Xen's clock synchronization. (It uses ntp.ubuntu.com as its server. I'm not sure if there's an NTP server in the AWS network that I could use instead, but the Ubuntu server will do nicely for now.) Update: I've recently observed that on my (newer?) instances the clock stays accurate automatically, without ntp running. Judging by the comments, this doesn't seem to be the case for everyone though, so it's probably still best to use ntp just in case. | {
"source": [
"https://serverfault.com/questions/100978",
"https://serverfault.com",
"https://serverfault.com/users/2315/"
]
} |
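A hedged checklist for the clock-drift issue above; the paths are standard Linux, but the package command assumes a Debian/Ubuntu image (use yum on CentOS):
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource   # often reports xen on EC2 guests
    sudo ntpdate -q pool.ntp.org     # query-only: shows the current offset without touching the clock
    sudo apt-get install ntp         # install and start ntpd so the clock stays disciplined
    ntpq -p                          # list peers, offsets and jitter once ntpd is running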
101,053 | We have a set of shared, static content that we serve up between our websites at http://sstatic.net . Unfortunately, this content is not currently load balanced at all -- it's served from a single server. If that server has problems, all the sites that rely on it are effectively down because the shared resources are essential shared javascript libraries and images. We are looking at ways to load balance the static content on this server, to avoid the single server dependency. I realize that round-robin DNS is, at best, a low end (some might even say ghetto ) solution, but I can't help wondering -- is round robin DNS a "good enough" solution for basic load balancing of static content? There is some discussion of this in the [dns] [load-balancing] tags, and I've read through some great posts on the topic. I am aware of the common downsides of DNS load balancing through multiple round-robin A records: there's typically no heartbeats or failure detection with DNS records, so if a given server in the rotation goes down, its A record must manually be removed from the DNS entries the time to live (TTL) must necessarily be set quite low for this to work at all, since DNS entries are cached aggressively throughout the internet the client computers are responsible for seeing that there are multiple A records and picking the correct one But, is round robin DNS good enough as a starter, better than nothing, "while we research and implement better alternatives" form of load balancing for our static content? Or is DNS round robin pretty much worthless under any circumstances? | Jeff, I disagree, load balancing does not imply redundancy, it's quite the opposite in fact. The more servers you have, the more likely you'll have a failure at a given instant. That's why redundancy IS mandatory when doing load balancing, but unfortunately there are a lot of solutions which only provide load balancing without performing any health check, resulting in a less reliable service. DNS roundrobin is excellent to increase capacity, by distributing the load across multiple points (potentially geographically distributed). But it does not provide fail-over. You must first describe what type of failure you are trying to cover. A server failure must be covered locally using a standard IP address takeover mechanism (VRRP, CARP, ...). A switch failure is covered by resilient links on the server to two switches. A WAN link failure can be covered by a multi-link setup between you and your provider, using either a routing protocol or a layer2 solution (eg: multi-link PPP). A site failure should be covered by BGP : your IP addresses are replicated over multiple sites and you announce them to the net only where they are available. From your question, it seems that you only need to provide a server fail-over solution, which is the easiest solution since it does not involve any hardware nor contract with any ISP. You just have to setup the appropriate software on your server for that, and it's by far the cheapest and most reliable solution. You asked "what if an haproxy machine fails ?". It's the same. All people I know who use haproxy for load balancing and high availability have two machines and run either ucarp, keepalived or heartbeat on them to ensure that one of them is always available. Hoping this helps! | {
"source": [
"https://serverfault.com/questions/101053",
"https://serverfault.com",
"https://serverfault.com/users/1/"
]
} |
101,063 | I used Virtual PC to create a virtual machine with configuration: CPU: Intel Pentium E6300 2.8Ghz overclocked to 3.66Ghz (which supports VT technology) RAM: 600MB I thought it was enough for me to install SQL Server 2005 Enterprise on it. But the installer still tells me that the virtual machine does not meet the hardware requirements. Did I forget something about this? | | {
"source": [
"https://serverfault.com/questions/101063",
"https://serverfault.com",
"https://serverfault.com/users/7381/"
]
} |
101,916 | The OOM killer on Linux wreaks havoc with various applications every so often, and it appears that not much is really done on the kernel development side to improve this. Would it not be better, as a best practice when setting up a new server , to reverse the default on the memory overcommitting, that is, turn it off ( vm.overcommit_memory=2 ) unless you know you want it on for your particular use? And what would those use cases be where you know you want the overcommitting on? As a bonus, since the behavior in case of vm.overcommit_memory=2 depends on vm.overcommit_ratio and swap space, what would be a good rule of thumb for sizing the latter two so that this whole setup keeps working reasonably? | An interesting analogy (from http://lwn.net/Articles/104179/ ): An aircraft company discovered that it
was cheaper to fly its planes with
less fuel on board. The planes would
be lighter and use less fuel and money
was saved. On rare occasions however
the amount of fuel was insufficient,
and the plane would crash. This
problem was solved by the engineers of
the company by the development of a
special OOF (out-of-fuel) mechanism.
In emergency cases a passenger was
selected and thrown out of the plane.
(When necessary, the procedure was
repeated.) A large body of theory was
developed and many publications were
devoted to the problem of properly
selecting the victim to be ejected.
Should the victim be chosen at random?
Or should one choose the heaviest
person? Or the oldest? Should
passengers pay in order not to be
ejected, so that the victim would be
the poorest on board? And if for
example the heaviest person was
chosen, should there be a special
exception in case that was the pilot?
Should first class passengers be
exempted? Now that the OOF mechanism
existed, it would be activated every
now and then, and eject passengers
even when there was no fuel shortage.
The engineers are still studying
precisely how this malfunction is
caused. | {
"source": [
"https://serverfault.com/questions/101916",
"https://serverfault.com",
"https://serverfault.com/users/10293/"
]
} |
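For reference against the discussion above, this is roughly how the overcommit knobs are inspected and changed; the values shown are illustrative, not recommendations:
    cat /proc/sys/vm/overcommit_memory      # 0 = heuristic (default), 1 = always overcommit, 2 = strict accounting
    sysctl -w vm.overcommit_memory=2        # turn overcommitting off, as discussed in the question
    sysctl -w vm.overcommit_ratio=80        # with mode 2: CommitLimit = swap + 80% of physical RAM
    grep -i commit /proc/meminfo            # CommitLimit vs Committed_AS shows how close you are to refused allocations
    # append the vm.* lines to /etc/sysctl.conf to make them survive a reboot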
102,032 | I'm working on a homework assignment for my college course.
The task is to fetch web pages on HTTPS using nc (netcat) . To fetch a page over HTTP, I can simply do the following: cat request.txt | nc -w 5 <someserver> 80 In request.txt I have an HTTP 1.1 request GET / HTTP/1.1
Host: <someserver> Now... this works perfectly fine. The challenge, however, is to fetch a web page that uses HTTPS. I get a page certificate like this, and this is the point at which I'm currently stuck: openssl s_client -connect <someserver>:443 | nc doesn't do https. openssl s_client is as close as you'll get. Do something like this: $ cat request.txt | openssl s_client -connect server:443 | {
"source": [
"https://serverfault.com/questions/102032",
"https://serverfault.com",
"https://serverfault.com/users/31519/"
]
} |
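A slightly fuller form of the command in the answer above, for the homework setup described; someserver is still a placeholder and request.txt is the same file from the question:
    # adding a "Connection: close" header to request.txt makes the server hang up when it is done
    openssl s_client -quiet -connect someserver:443 < request.txt
    # optional: -servername someserver for SNI-based virtual hosts, -crlf if the file has bare LF line endings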
102,098 | I am playing around with PowerShell scripts and they're working great. However, I am wondering if there is any way to also show all the commands that were run, just as if you were manually typing them in yourself. This would be similar to "echo on" in batch files. I looked at the PowerShell command-line arguments, the cmdlets, but I didn't find anything obvious. Thanks! | The following command will output each line of script to Write-Debug- Set-PSDebug -Trace 1 From man Set-PSDebug When the Trace parameter is set to 1, each line of script is traced as
it is executed. When the parameter is set to 2, variable assignments,
function calls, and script calls are also traced. If the Step
parameter is specified, you are prompted before each line of the
script is executed. | {
"source": [
"https://serverfault.com/questions/102098",
"https://serverfault.com",
"https://serverfault.com/users/31060/"
]
} |
102,110 | Problem: trying to install RPMs for Red Hat EL, php-mssql. There does not appear to be a free open-source option for connecting to a MSFT SQL Server database. Has anyone had any luck with this? | | {
"source": [
"https://serverfault.com/questions/102110",
"https://serverfault.com",
"https://serverfault.com/users/7203/"
]
} |
102,114 | We have an ancient FTP server that runs Server 2000, and when our users use IE to log in they are presented with the following message: To view this FTP site in Windows Explorer, click Page, and then click Open FTP Site in Windows Explorer. The problem is that with the upgrade to IE 7, "Page" has been replaced by "View". Does anyone know a way to get into the default page and edit its settings to update it? | | {
"source": [
"https://serverfault.com/questions/102114",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
102,416 | I want to forward requests from 192.168.99.100:80 to 127.0.0.1:8000 . This is how I'd do it in linux using iptables : iptables -t nat -A OUTPUT -p tcp --dport 80 -d 192.168.99.100 -j DNAT --to-destination 127.0.0.1:8000 How do I do the same thing in MacOS X? I tried out a combination of ipfw commands without much success: ipfw add fwd 127.0.0.1,8000 tcp from any to 192.168.99.100 80 (Success for me is pointing a browser at http://192.168.99.100 and getting a response back from a development server that I have running on localhost:8000 ) | So I found out a way to do this. I'm not sure if it's the preferred way but it works! At your favourite shell: sudo ifconfig lo0 10.0.0.1 alias
sudo ipfw add fwd 127.0.0.1,9090 tcp from me to 10.0.0.1 dst-port 80 (The alias to lo0 seems to be the missing part) If you'd like a (fake) domain to point to this new alias then make sure /etc/hosts contains the line: 10.0.0.1 www.your-domain.com | {
"source": [
"https://serverfault.com/questions/102416",
"https://serverfault.com",
"https://serverfault.com/users/31571/"
]
} |
102,569 | According to a guide on the Linux directory structure , /usr/ is for application files, and /var/ is for files that change (I assume this means "files that belong to the applications"). Is this correct? If this is the case then I'm a little torn between using either. A website is an application (if it's dynamic, so to speak), but in other cases it is just a collection of files used by Apache. The default www dir lives in /var/www/ , so should we follow suit by using /var/websites/ (or something similar), or choose /usr/websites/ since they could be applications? This is a very trivial question, but it's bugging me nonetheless. For our case, I'm leaning toward /usr/web or something like that, since our websites are all applications. Update: This is for our company websites; it's not a shared hosting server, so we don't need to worry about separating them in /home/ or anything like that. | According to the FHS , /usr is for shareable, read-only data - not where you want to put the website. This is where you should put your code (for example Fedora does this for Wordpress). See also the web assets packaging guide for Fedora. /var is "variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files." -- better, but still not quite right -- but a lot of systems will use /var/www , so even if you're wrong to put it there you're in good company. /srv is for "site-specific data which is served by this system." -- which seems like a good match, but is much less common than /var/www . The other common place to put the site files is under /home -- by creating a special user called website or such, then placing the files inside that user's homedir (e.g., /home/website ). | {
"source": [
"https://serverfault.com/questions/102569",
"https://serverfault.com",
"https://serverfault.com/users/17170/"
]
} |
102,745 | I was wondering if there is an easy way to trigger an e-mail alert on Windows Server 2008 when any logical disk partitions become low on space. I have 2 SQL servers that have come close to running out of disk space because of the DB log files. Thanks,
Ryan | One simple way to get Windows Server 2008 to send low disk space e-mail alerts is to use Task Scheduler and the System Log. If the free space falls below the percentage specified in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\ DiskSpaceThreshold , an event is recorded in the System Log that can trigger a task to send an e-mail message. Open Task Scheduler and create a new task. Enter a name for the task, select "Run whether user is logged on or not", and check "Do not store password." Add a new trigger on the Triggers tab. Select "On an event" in the "Begin the task" box. Set Log to "System", Source to "srv", and Event ID to "2013". Add a new action on the Actions tab. Set Action to "Send an e-mail" and fill in the rest of the settings appropriately. To configure when the low disk space event is recorded in the System Log, open the Registry Editor, navigate to HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters and add a DWORD value named “DiskSpaceThreshold”, setting it to the desired percentage. When the entry does not exist, the default value is 10. | {
"source": [
"https://serverfault.com/questions/102745",
"https://serverfault.com",
"https://serverfault.com/users/3066/"
]
} |
102,879 | When a DNS server is looking up an IP address for a client, and it receives a list of multiple DNS servers to query, how does it choose one? Similarly, when a DNS client receives a list of multiple IP addresses for a FQDN, how does it choose one? Is it implementation specific, or is it covered in an RFC? | A DNS server resolving a query may prioritize the order in which it uses the listed servers based on historical response time data (RFC1035 section 7.2). It may also prioritize by closer subnet (I have seen this in an RFC but don't recall which). If no history or subnet priority is available, it may choose at random, or simply pick the first one. I have seen DNS server implementations doing various combinations of the above. A client program picking an IP address from a list (of A/AAAA records) will generally try the addresses in the order they were returned by the DNS server (round robin).
If the client cannot connect to the first IP address returned, it should try the second and so on. For example all major browsers do this, however many other Internet client programs "forget" this step and fail if they cannot connect to the first IP address. | {
"source": [
"https://serverfault.com/questions/102879",
"https://serverfault.com",
"https://serverfault.com/users/2452/"
]
} |
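A way to watch the behaviour described above from a shell; example.com stands in for any name that has several A records published:
    dig +short example.com A        # prints every A record the resolver handed back
    dig example.com A               # full answer with TTLs; run it a few times and the order will usually rotate
    host example.com                # the same list via the host utility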
102,887 | I have an SMTP Event Sink to process incoming SMTP email messages to perform special processing. Under IIS 6/SMTP, this event sink runs as expected. Under IIS 7/SMTP, it does not appear to run, even though it appears to register successfully, as shown below: c:\Program Files\Kryptiq Corporation\GW\Bin>regsvr32 SpoolFilter.dll
c:\Program Files\Kryptiq Corporation\GW\Bin>smtp_sink_register.bat
c:\Program Files\Kryptiq Corporation\GW\Bin>cscript smtpreg.vbs /add 1 OnArrival
KryptiqSpoolFilter SpoolFilter.FilterObject "mail from=*"
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.
Binding Display Name Specified: KryptiqSpoolFilter
Assigning priority (24575 in 32767)
** SUCCESS **
Registered Binding:
Event Name :SMTP Transport OnSubmission
Display Name:KryptiqSpoolFilter
Binding GUID:{C12ECB83-BF0A-46B4-823D-8C4D212F5238}
ProgID :SpoolFilter.FilterObject
Rule :mail from=*
Priority :24575 (0 - 32767, default: 24575)
ComCatID :{FF3CAA23-00B9-11d2-9DFB-00C04FA322BA} How can I debug this event sink and figure out why it is not processing any email that lands in the SMTP pickup directory, and instead the email passes through untouched? Are there IIS 7 requirements for SMTP Event Sinks that are different from IIS 6, such as new permissions? | | {
"source": [
"https://serverfault.com/questions/102887",
"https://serverfault.com",
"https://serverfault.com/users/31794/"
]
} |
102,932 | We just got our new server(s) up and we're running CentOS on them all. After successfully installing Ruby Enterprise Edition, I would now like to add the REE /bin (located at /usr/lib/ruby-enterprise/bin ) directory to make it the default Ruby interpreter on the server. I have tried the following, which only adds it to the current shell session: export PATH=/usr/lib/ruby-enterprise/bin:$PATH What would be the correct approach to permanently adding this directory to $PATH for all users ? I'm currently logged in as root . | It's not a good idea to edit /etc/profile for things like this, because you'll lose all your changes whenever CentOS publishes an update for this file. This is exactly what /etc/profile.d is for: echo 'pathmunge /usr/lib/ruby-enterprise/bin' > /etc/profile.d/ree.sh
chmod +x /etc/profile.d/ree.sh Log back in and enjoy your (safely) updated $PATH : echo $PATH
/usr/lib/ruby-enterprise/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
which ruby
/usr/lib/ruby-enterprise/bin/ruby Instead of logging back in, you could reload the profile: . /etc/profile This will update the $PATH variable. | {
"source": [
"https://serverfault.com/questions/102932",
"https://serverfault.com",
"https://serverfault.com/users/31814/"
]
} |
103,263 | I'm having a bit of an issue here. Bear with me as this may be a case of "not asking the right question". Background: Using Apple Mail. Want to encrypt/decrypt email but GPGMail (and apparently PGP) isn't supported with Snow Leopard. Basically I need to create an S/MIME certificate for use in email encryption. I don't want, nor do I care for a Certificate Authority. I simply want a quick-and-dirty certificate. Is this even possible (using OPENSSL, etc) or does the whole process hinge on a higher authority forcing me to either set up a full-scale CA or deal with a company (e.g. Verisign, Thawte) for a cert? My criteria are instant gratification, and free. Best. | Yeah, it sucks that Apple Mail does not support GPG. :-( I wish it did because I prefer GPG encrypted e-mail too. I also agree that information surrounding S/MIME and generating your own e-mail certificates is hard to come by. I found Paul Bramscher's webpage has a good description of how to create your own Certificate Authority certificate. I don't pretend to fully understand the certificate process, but this is what I've been able to piece together. You should consult the openssl manpage for more detailed information about each of the commands shown below. Create Certificate Authority The first step is to create your own Certificate Authority (CA). The commands are … # openssl genrsa -des3 -out ca.key 4096
# openssl req -new -x509 -days 365 -key ca.key -out ca.crt and follow the prompts. You will need to issue your CA's certificate (ie the content of ca.crt ) to each and every recipient of your encrypted e-mail. The recipients will have to install and trust your CA certificate so that your encrypted e-mail will be trusted. The installation will vary for each mail client used. In your case, you will need to add your CA's certificate to your Apple Keychain. There are lots of posts on the web about how to import and trust a CA certificate in the Apple Keychain. Create Personal E-Mail Certificate Request You now need to create a certificate request. Create one for each e-mail address you wish to send e-mail from. Execute the following commands … # openssl genrsa -des3 -out humble_coder.key 4096
# openssl req -new -key humble_coder.key -out humble_coder.csr and follow the prompts. Certificate Authority Signs Your Certificate Request Your personal certificate needs to be signed by your CA. In this case, you! # openssl x509 -req -days 365 -in humble_coder.csr -CA ca.crt -CAkey ca.key \
-set_serial 1 -out humble_coder.crt -setalias "Humble Coder's E-Mail Certificate" \
-addtrust emailProtection \
-addreject clientAuth -addreject serverAuth -trustout The output is your signed certificate. Prepare Your Certificate for Importing into Your Mail Application You need to convert your certificate from .crt (PEM format, I think) to .p12 (PCKS12 format). # openssl pkcs12 -export -in humble_coder.crt -inkey humble_coder.key \
-out humble_coder.p12 You can now import your *.p12* formatted certificate into your mail client. In your case, import the *.p12* file into the Apple Keychain. Once the certificate is installed correctly, Apple Mail will start using your certificate. There is an Easier Way Of course, once you've created your own CA there's an easier way of managing certificates created by your own Certificate Authority. openssl comes with a script named … # /usr/lib/ssl/misc/CA.pl which simplifies the process of being your own Certificate Authority. There's even a man page for CA.pl! | {
"source": [
"https://serverfault.com/questions/103263",
"https://serverfault.com",
"https://serverfault.com/users/17351/"
]
} |
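Assuming the file names used in the walkthrough above, a few sanity checks before importing the certificate into a keychain or mail client:
    openssl verify -CAfile ca.crt humble_coder.crt          # should answer: humble_coder.crt: OK
    openssl x509 -in humble_coder.crt -noout -subject -dates -purpose   # who it is for, validity window, allowed usages
    openssl pkcs12 -info -in humble_coder.p12 -noout        # inspects the bundle; prompts for the export password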
103,359 | In Java it is possible to create a random UUID : UUID uuid = UUID.randomUUID(); How to do this in Bash? | See the uuidgen program which is part of the e2fsprogs package. According to this , libuuid is now part of util-linux and the inclusion in e2fsprogs is being phased out. However, on new Ubuntu systems, uuidgen is now in the uuid-runtime package. To create a uuid and save it in a variable: uuid=$(uuidgen) On my Ubuntu system, the alpha characters are output as lower case and on my OS X system, they are output as upper case (thanks to David for pointing this out in a comment). To switch to all upper case (after generating it as above): uuid=${uuid^^} To switch to all lower case: uuid=${uuid,,} If, for example, you have two UUIDs and you want to compare them in Bash, ignoring their case, you can do a tolower() style comparison like this: if [[ ${uuid1,,} == ${uuid2,,} ]] | {
"source": [
"https://serverfault.com/questions/103359",
"https://serverfault.com",
"https://serverfault.com/users/12665/"
]
} |
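If uuidgen happens not to be installed, the Linux kernel can supply one directly; a small fallback sketch:
    uuid=$(cat /proc/sys/kernel/random/uuid)   # random (version 4) UUID straight from the kernel
    echo "$uuid"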
103,371 | I am using Windows Server 2008 on drive C, and on drive D I have all my virtual machine folders. I also have two more internal SATA drives. I want to back up the whole HD, i.e. drives C and D. I like the way VMware Workstation takes snapshots of a VM. Is there any software or utility with which I can make snapshots or backups just like VMware, so that I can restore to any point in the past? The best thing I like is that, e.g., if I want to test some software, I first take a snapshot of the current state, and if something goes wrong I go back to that snapshot. Is there anything like that for the actual OS, just like VMware does for a VM? I should be able to save the state if I want to test something. | | {
"source": [
"https://serverfault.com/questions/103371",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
103,412 | When I'm working locally, I don't really need to enter my password to access my database. I changed my root password when I first installed MySQL, but I don't know how to change my password back. What should I do? | To change the root password to newpassword : mysqladmin -u root -p'oldpassword' password 'newpassword' To change it so root doesn't require a password: mysqladmin -u root -p'oldpassword' password '' Note: I think it matters that there isn't a space between the -p and 'oldpassword' but I may be wrong about that | {
"source": [
"https://serverfault.com/questions/103412",
"https://serverfault.com",
"https://serverfault.com/users/10126/"
]
} |
103,426 | I keep getting this warning when I (re)start Apache. * Restarting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName ... waiting apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName [ OK ] This is the content of my etc/hosts file: #127.0.0.1 hpdtp-ubuntu910
#testproject.localhost localhost.localdomain localhost
#127.0.1.1 hpdtp-ubuntu910
127.0.0.1 localhost
127.0.0.1 testproject.localhost
127.0.1.1 hpdtp-ubuntu910
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts This is the content of my /etc/apache2/sites-enabled/000-default file: <VirtualHost *:80>
ServerName testproject.localhost
DocumentRoot "/home/morpheous/work/websites/testproject/web"
DirectoryIndex index.php
<Directory "/home/morpheous/work/websites/testproject/web">
AllowOverride All
Allow from All
</Directory>
Alias /sf /lib/vendor/symfony/symfony-1.3.2/data/web/sf
<Directory "/lib/vendor/symfony/symfony-1.3.2/data/web/sf">
AllowOverride All
Allow from All
</Directory>
</VirtualHost> When I go to http://testproject.localhost , I get a blank page. Can anyone spot what I am doing wrong? | By default Ubuntu doesn't specify a ServerName in the Apache configuration, because it doesn't know what the name of your server is. It tries a reverse lookup on your IP address, which returns nothing, so it just has to use the IP address as the ServerName . To fix it, either add a ServerName directive outside of any virtual host - e.g. in /etc/apache2/httpd.conf , or set up a reverse DNS response for your primary IP address - in this case, 127.0.1.1 It's perfectly fine to ignore it also. | {
"source": [
"https://serverfault.com/questions/103426",
"https://serverfault.com",
"https://serverfault.com/users/35402/"
]
} |
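On a Debian/Ubuntu layout like the one in the question, the usual way to silence the warning looks roughly like this; the file name fqdn is arbitrary:
    echo 'ServerName localhost' | sudo tee /etc/apache2/conf.d/fqdn
    sudo apache2ctl configtest        # should now report Syntax OK with no FQDN warning
    sudo /etc/init.d/apache2 reload
    # on newer releases the drop-in goes in /etc/apache2/conf-available/ and is enabled with a2enconf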
103,501 | From my script output I want to capture ALL the log data, including error messages, and redirect it all to a log file. I have a script like the one below: #!/bin/bash
(
echo " `date` : part 1 - start "
ssh -f [email protected] 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
echo " `date` : sleep 120"
sleep 120
echo " `date` : part 2 - start"
ssh [email protected] 'bash /www/htdocs/server.com/scripts/part2.sh logout exit'
echo " `date` : part 3 - start"
ssh [email protected] 'bash /www/htdocs/server.com/scripts/part3.sh logout exit'
echo " `date` : END"
) | tee -a /home/scripts/cron/logs I want to see all actions in the file /home/scripts/cron/logs, but I only see what I put after the echo commands. How can I check in the logs that the SSH commands were successful? I need to gather all the logs, so I can monitor the result of every command in my script and better analyse what's going on when the script fails. | I generally put something similar to the following at the beginning of every script (especially if it'll run as a daemon): #!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out': Explanation: exec 3>&1 4>&2 Saves file descriptors so they can be restored to whatever they were before redirection or used themselves to output to whatever they were before the following redirect. trap 'exec 2>&4 1>&3' 0 1 2 3 Restore file descriptors for particular signals. Not generally necessary since they should be restored when the sub-shell exits. exec 1>log.out 2>&1 Redirect stdout to file log.out then redirect stderr to stdout . Note that the order is important when you want them going to the same file. stdout must be redirected before stderr is redirected to stdout . From then on, to see output on the console (maybe), you can simply redirect to &3 . For example, echo "$(date) : part 1 - start" >&3 will go to wherever stdout was directed, presumably the console, prior to executing line 3 above. | {
"source": [
"https://serverfault.com/questions/103501",
"https://serverfault.com",
"https://serverfault.com/users/26292/"
]
} |
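A related sketch for the logging question above, if the output should reach both the console and the log file at the same time (bash process substitution, so it needs #!/bin/bash rather than /bin/sh):
    #!/bin/bash
    exec > >(tee -a /home/scripts/cron/logs) 2>&1
    # from here on, stdout and stderr of every command, including the ssh calls, go to the log and the screen
    echo "$(date) : part 1 - start"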
103,626 | I'd like to set some Linux services to non-standard ports - what's the highest valid port number? | (2^16)-1, or 0-65,535 (the -1 is because port 0 is reserved and unavailable). (edited because o_O Tync reminded me that we can't use port 0, and Steve Folly reminded me that you asked for the highest port, not the number of ports) But you're probably going about this the wrong way. There are people who argue for and against non-standard ports. I say they're irrelevant except to the most casual scanner, and the most casual scanner can be kept at bay by using up-to-date software and proper firewall techniques, along with strong passwords. In other words, security best practices. | {
"source": [
"https://serverfault.com/questions/103626",
"https://serverfault.com",
"https://serverfault.com/users/32024/"
]
} |
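The upper bound itself is fixed, but on Linux the ephemeral range the kernel picks local ports from is tunable, which is often what matters when choosing non-standard service ports; a sketch:
    sysctl net.ipv4.ip_local_port_range          # e.g. "32768 61000" on many kernels
    ss -lntu                                     # see which ports are already listening before picking one
    sudo sysctl -w net.ipv4.ip_local_port_range='1024 65535'   # widen it if an application needs more ephemeral ports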
104,131 | Occasionally I come across servers (Windows 2003 and 2008) with high processor % interrupt time. Is there a way to see what program or device is causing the interrupts? | After digging through the documentation (based on the other answers here), this is the process I ended up using: Capture the ETW log of the problem The easiest way to do this is using the Windows Performance Recorder . I'm not sure when it first appeared, but seems to be built in on recent versions of Windows. Set the profile to CPU usage . or, using an elevated command prompt, navigate to the folder which contains it and use the command-line tool xperf: xperf -on base+interrupt+dpc Note, you will need to close Process Monitor or any other app which uses ETW or you will get the following error: xperf: error: NT Kernel Logger: Cannot create a file when that file already exists. (0xb7). Stop tracing / save the log xperf -d interrupt_trace.etl Open the trace in Windows Performance Analyzer (part of Windows Performance Toolkit); some places mention using xperfview instead. Expand Computation -> CPU Usage (Sampled) -> DPC and ISR Usage by Module, Stack , right-click and add graph to analysis view This pointed right to the driver in question. In this case, HDAudBus.sys is using a constant 10.82% of my cpu via interrupts, which is exactly what Process Explorer was showing me. | {
"source": [
"https://serverfault.com/questions/104131",
"https://serverfault.com",
"https://serverfault.com/users/28743/"
]
} |
104,154 | Is this for security reason, or performance reason? | It's actually neither of those reasons. If it had to be one of those two options, you might argue that it's security. However, using duplicate-cn alone does not make your VPN any less secure. There are two reasons that I know. The first is a concern about managing the credentials used to authenticate on the VPN--if many clients use the same certificate, then revoking that certificate also revokes access for all clients that use it, which may or may not be desirable. Also, it is common for a client device to roam and initiate connections from a range of public addresses--in those cases it is more likely desired for that device to retain the same address on the VPN despite the roaming, which requires there to be no more than one connection per client certificate. A valid use case for duplicate-cn might be where your client devices do not roam and you don't care to control access on a client-by-client basis and your higher priority is not spending too much time managing keys and certificates. I believe the basis of their recommendation is the fact that such cases are in the minority and also that most people don't understand security, much less PKI-based security and they don't want to muddy the waters for such people. | {
"source": [
"https://serverfault.com/questions/104154",
"https://serverfault.com",
"https://serverfault.com/users/29150/"
]
} |
104,160 | I'd like to know if any certificates support a double wildcard like *.*.example.com ? I've just been on the phone with my current SSL provider (register.com) and the girl there said they don't offer anything like that and that she didn't think it was possible anyway. Can anyone tell me if this is possible, and if browsers support this? | RFC2818 states: If more than one identity of a given
type is present in the certificate
(e.g., more than one dNSName name, a
match in any one of the set is
considered acceptable.) Names may
contain the wildcard character * which
is considered to match any single
domain name component or component
fragment. E.g., *.a.com matches
foo.a.com but not bar.foo.a.com.
f*.com matches foo.com but not
bar.com. Internet Explorer behaves in the way outlined by the RFC, where each level needs its own wildcarded certificate. Firefox is happy with a single *.domain.com where * matches anything in front of domain.com, including other.levels.domain.com, but will also handle the *.*.domain.com types as well. So, to answer your question: it is possible, and supported by browsers. | {
"source": [
"https://serverfault.com/questions/104160",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
104,618 | I'm trying to do a mysqldump on a Windows server and I get the following error message : mysqldump: Got error: 23: Out of resources when opening file '.\db\sometable.MYD' (Errcode: 24) when using LOCK TABLES Here's the command I'm running : mysqldump -u user -p"pass" --lock-tables --default-character-set=latin1 -e --quick databasename > "query.sql" Restarting the mysql service didn't help. I always get the message for the same table. I've tried reducing the table_cache and max_connections variables from 64 to 32 and 30 to 10 respectively but I still get the error only this time for a different table (and from now on the error message is always mentionning the second table). The same script is running on a dozen other Windows servers having the same database without problems. All databases have 85 tables. | According to here - "OS error code 24: Too many open files" which lines up with the more general error 23 "Out of resources". So it seems as though you are running out of file handles. This is usually server-end setting/problem, either in MySQL, or in the OS itself. Perhaps check/adjust the --open-files-limit setting in MySQL itself and see if that helps. Also, perhaps try running the dump, while no one else is using the DB, with the --single-transaction setting instead of --Lock-File , as several people suggest this will work one table at a time instead of opening them all at once (therefore using less file handles). Beyond that you'll probably have to find a root cause as to why this particular server is running out of resources. Which would probably involve troubleshooting by disabling as many services/processes as possible and see if the dump goes through. Then figure out from there who the culprit is that's eating too many resources and perhaps not freeing them correctly. | {
"source": [
"https://serverfault.com/questions/104618",
"https://serverfault.com",
"https://serverfault.com/users/17032/"
]
} |
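A hedged version of the two suggestions above; --single-transaction only gives a consistent dump for InnoDB tables, and the user and database names are placeholders:
    mysql -u user -p -e "SHOW VARIABLES LIKE 'open_files_limit'"    # what the server is actually allowed to open
    mysqldump -u user -p --single-transaction --skip-lock-tables \
      --default-character-set=latin1 -e --quick databasename > query.sql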
104,923 | I have a Linux From Scratch LiveCD running in a qemu VM.
I'm using this command to create an hda disk for qemu: qemu-img.exe create -f qcow2 base-linux.img 5G Then I run my vm: qemu.exe -m 1024 -boot d -cdrom lfslivecd-x86-6.3-r2145.iso -hda base-linux.img After booting I try this command: parted /dev/hda unit GB mkpartfs primary ext3 0 5 And it gives me the 'unrecognised disk label' error. I'm using parted 1.9.0 and have no idea how to fix it. | You probably need to make a label on the disk first. Try just running parted manually: parted /dev/hda
unit GB
mklabel msdos
mkpartfs primary ext3 0 5 | {
"source": [
"https://serverfault.com/questions/104923",
"https://serverfault.com",
"https://serverfault.com/users/7413/"
]
} |
104,986 | Given the current structure of a directory entry on a ext4 file system on Ubuntu, what is the maximum number of files a file system can contain? What is the general method of calculating the maximum number of files a file system can contain? | Ext4 has a theoretical limit of 4 billion files, which is restricted by the size of inode number it uses to identify each file (ext4 uses 32-bit inode numbers). However, as John says, ext4 allocates inode tables statically, so the actual limit is set when the filesystem is created. The df command shows you a count of free inodes on your filesystem: $ df -i
Filesystem iused ifree %iused Mounted on
/dev/disk0s3 55253386 66810480 45% /
/dev/disk1s3 55258045 66805821 45% /Volumes/Clone Ext4 also supports an unlimited number of sub-directories per directory, though it may default to a limit of 64,000. This is configurable -- see the ext4 article at Kernel Newbies . For more information, see The new ext4 filesystem: current status and future plans from the 2007 Linux Symposium. | {
"source": [
"https://serverfault.com/questions/104986",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
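Since the inode table is sized when the filesystem is created, the practical knobs are at mkfs time; a sketch with a placeholder device (mkfs destroys whatever is on it):
    df -i                                          # used/free inode counts per mounted filesystem
    tune2fs -l /dev/sdb1 | grep -i 'inode count'   # total inodes on an existing ext2/3/4 filesystem
    mkfs.ext4 -N 20000000 /dev/sdb1                # ask for roughly 20 million inodes explicitly
    mkfs.ext4 -i 4096 /dev/sdb1                    # or: allocate one inode per 4096 bytes of disk space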
105,061 | If I have multiple hosts configured on one machine ( a la apache's VirtualHosts), how can I do a lookup on the IP and find all domains configured to reach it? For example, I have several web and email domains hooked-to my server. How can I find all domains that point to it? Is it even possible? I have DNS A entries for all the domains I own, plus I know some friends' domains point to my server. What I'd like to see is if folks I don't know about are pointing there, too. (Or if someone has repointed their domain elsewhere, and I can delete their 'old' website from my server.) | Not really, no. This is all about the difference between forward and reverse DNS lookups. A forward lookup is the standard name->IP lookup. So, you would have to know all the names in advance. What you want is to do an IP->name lookup, but somehow get all the names you've applied in your Apache config and in DNS as A records (or CNAMES or whatever). What you will probably find is that doing a reverse lookup (e.g. dig @nameserver $ip -x) will return the hostname given to that IP by the people who own that netblock, which could be your ISP. It might have a name like 45-23-45-231.big-isp.com, which doesn't mean a whole lot to you. And crucially, there is only one reverse record, but potentially many forward ones. I suppose it boils down to the question - how does the reverse zone know about any of the records in the forward zone? In most setups, the forward zone is made available to the customer to make changes to, but the reverse zone is maintained by the owners of the netblock. The two systems don't need to know anything about each other to function. | {
"source": [
"https://serverfault.com/questions/105061",
"https://serverfault.com",
"https://serverfault.com/users/2321/"
]
} |
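The DNS side can't be enumerated, but the server side of the question above can at least be audited: list what the web server is configured to answer for, and check the one PTR record for the address. Paths assume a Debian-style Apache, and 203.0.113.10 is a placeholder IP:
    apachectl -S                                    # dumps every VirtualHost with its ServerName
    grep -RiE 'Server(Name|Alias)' /etc/apache2/    # or /etc/httpd/ on Red Hat-style systems
    dig -x 203.0.113.10 +short                      # the single reverse (PTR) record for the IP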
105,287 | I'm after a second opinion; and apologies if this has already been answered (point me in the right direction). Different factions within a project I'm on are engaged in a holy war between virtual vs physical servers. We're implementing a COTS IBM document management system (DB2, etc). General wisdom is that we should virtualise everything, and our vendor partner supports this view; some of the propeller heads at work are against this, particularly for the central metadata server (basically a big DB2 database). My problem is that I come from a developer background (I know squat), so an independent view would be welcome. What's the skinny on virtual vs physical? When should you - or should you not - virtualise? General advantages / disadvantages, etc. My starter for 10 - shoot me down... Virtual: Good for DR (you can setup a new instance on a different VM Server if the one your on fails, i.e: the physical box your running on) Bad for certain database senarios? Slight performance hit (not sure of specifics) | Broadly speaking if the virtualisation platform you currently run fully supports the guest OS you're intending to run, virtualisation is a good move. There are some use-cases that warrant more careful inspection: Terminal Services (or services with very high user-concurrency) Funky flavours of Linux Database or Email servers Servers with unusual peripheral attachments Servers with unique/very high resource requirements In your specific case, look at the number of concurrent users your system will need to support, and the kind of physical hardware specs you'd need to run it as a physical machine. If it requires a 4-processor, quad-core beast with 32Gb of RAM and a local 6-disk SAS drive stripe, it's not a good candidate for virtualisation. If it has high requirements on any one of those aspects (e.g. just needs an ultra-fast disk) it's in the 'maybe' pile and needs a round of testing before making the decision. If the database would run fine on a basic 1 or 2 processor server with a modest amount of ram (under 8Gb) and disk throughput isn't excessive, virtualise it. If the choice you're making is between purchasing brand new hardware for the system, or virtualising onto your existing VM infrastructure, then virtualise it first and migrate to a physical server only if required. The hallmark of a well planned server is that you can easily re-build it again on-demand ;) | {
"source": [
"https://serverfault.com/questions/105287",
"https://serverfault.com",
"https://serverfault.com/users/32543/"
]
} |
105,386 | I am rsyncing a few directories. I have a bash terminal open and am executing something like this: for DIR in * ; do rsync -a $DIR example.com:somewhere/ ; done However if I want to stop the whole things, I press Control-C. That stops the rsync, but then it keeps going to the next one. In this case I realize what has happened and then just press Control-C like a madman until things work again. Is there some way to 'fix' this. I want it so if I have a loop like that, and press Control-C, that it will return me to my bash shell. | for DIR in * ; do rsync -a $DIR example.com:somewhere/ || break; done This will also exit the loop if an individual rsync run fails for some reason. | {
"source": [
"https://serverfault.com/questions/105386",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
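Another option for the Ctrl-C question above, when the loop lives in a script: trap SIGINT yourself so the shell abandons the loop instead of moving on to the next rsync:
    trap 'echo interrupted; exit 130' INT
    for DIR in * ; do rsync -a "$DIR" example.com:somewhere/ ; done
    # rsync itself exits non-zero (code 20) when killed by a signal, which is why the || break variant also works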
105,398 | I have a shared internet connection on my network which I currently manage using Smoothwall Express. I want to be able to allocate each of my housemates a certain amount of bandwidth per month. My ISP charges me per MB, so I want to extend that charge to those that are using it, while not allowing them to rip me off. The best way to do this, I think, is to have them pay for a certain amount, and then allow them to use that much. When they've used their quota, they must be completely blocked until I allocate more bandwidth to them. Is Smoothwall Express sufficient for this? What plug-ins do I need? If it can't do it, what can? | | {
"source": [
"https://serverfault.com/questions/105398",
"https://serverfault.com",
"https://serverfault.com/users/32584/"
]
} |
105,432 | Much is made of the fact that VMWare's ESXi hypervisor is "free" As best I can tell, you can install the hypervisor on a host for "free". Because ESXi does not have a built in management console, you need a program, of some sort, to connect to the ESXi hosts to "manage" them. By "manage" I mean, start, stop, install, reboot and backup vms. If you install the free ESXi on a host and connect to it via a web browser, you are prompted to download vSphere to manage the host. OK, but vSphere is, as best I can tell, not free. When you install it you are continuously reminded that you have only 60 days to evaluate vSphere. My question is this: Is there a completely free management tool for ESXi hosts that enables one to: Create VMs Modify VMs settings (memory etc.) Power VMs on and off Backup the VM (via any means) Resore a VM from a backup Failing that, without licensing something from VMWare, is there any tool that will let you manage your hosts after the 60 day evaluation period of vSphere ends? I have not found a straightforward explanation of this on VMWare's web site. Does anyone out there know the answer (even better if you can point me to a clear explanation on VMWare's website...) | You have to pay for vSphere with its various modules and extra features but not to use the vSphere Client to connect to a free ESXi. I think where you may be getting the license message from is although ESXi is free, you still need to request a free license key from VMWare. Login to your ESXi box with vSphere Client and go to Configuration -> Licensed Features -> Edit. If you are set to evaluation mode, that is what you are getting the license warning from. VMWare should have emailed you a license key when you signed up on their website to download ESXi. If not, you can go through the download steps again and the license key should be on one of the pages. For me, if I go to https://www.vmware.com/products/esxi/ hit Download, login with my free VMWare account, then on the page with all of the download links, at the top of the list is my ESXi License. The reason you are seeing the license message about vSphere is that in the Evaluation mode, some of the extra features that are only available with vSphere are enabled, once you enter a free ESXi license, those will be disabled and you won't get prompted anymore. Also, you can use the vCenter Converter in the standalone mode (runs off of your workstation) for free with ESXi. This tool is immensely useful for moving VMs on and off of ESXi. http://www.vmware.com/products/converter/ . | {
"source": [
"https://serverfault.com/questions/105432",
"https://serverfault.com",
"https://serverfault.com/users/30244/"
]
} |
105,535 | How can I give a user full permissions on a specified directory in Linux? | Depends what you mean by 'full permissions'. If you want a user to have full read and write access to all files and directories in that directory, then this will help: chown -R username directory
chmod -R u+rX directory The first command makes the user own the directory.
The second command gives them read and execute permissions. The r gives read permission, the X gives 'execute' permission to directories, and not files. | {
"source": [
"https://serverfault.com/questions/105535",
"https://serverfault.com",
"https://serverfault.com/users/44511/"
]
} |
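If changing the owner (as above) is not acceptable, POSIX ACLs are an alternative; this assumes the filesystem is mounted with ACL support and the acl tools are installed:
    setfacl -R -m u:username:rwX /path/to/dir       # grant one extra user read/write, execute only on directories
    setfacl -R -d -m u:username:rwX /path/to/dir    # default ACL so files created later inherit the same access
    getfacl /path/to/dir                            # review what is now in effect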
105,633 | Is there any way to mount a remote CIFS/SMB/SAMBA share as a folder/directory and not as a drive letter. For example, I want this map: \\Server\ShareName -> C:\Folder\ShareName Instead of the usual map like this: \\Server\ShareName -> Z:\ The server is Linux/Samba and the client is Windows 7 Professional 64-bit. The closest I've found is being able to mount a local volume as a subfolder using the Windows disk manager, but it doesn't appear to handle remote CIFS shares (see http://support.microsoft.com/kb/307889 ). | Just to map a network share directory you would use this command: net use \\Server\ShareName\Directory This mapping would: not be persistent would have to be established and authenticated at user login you would access the share using the UNC path, and not a local drive letter If you want to access the network share through a location on your local C: drive, you'll want to set up a symbolic link: mklink /d C:\Folder\ShareName \\Server\ShareName\Directory Now when you navigate to C:\Folder\Share you'll see the contents of \\\Server\Sharename\Directory . You'll still need to provide authentication for the resource with something like net use (or just be logged into a domain account on a domain system that has access) otherwise the link will probably error out angrily. | {
"source": [
"https://serverfault.com/questions/105633",
"https://serverfault.com",
"https://serverfault.com/users/32654/"
]
} |
105,838 | ls prints differently depending on whether the output is to a terminal or to something else. e.g.: $ ls .
file1 file2
$ ls . | head
file1
file2 Is there some way to make ls print out on one line as if it's to a terminal when it's not. There's a -C argument that sorta does that, but it will split it into several lines. $ ls
file1 file10 file11 file12 file13 file14 file15 file16 file17 file18 file19 file2 file3 file4 file5 file6 file7 file8 file9
$ ls -C . | head
file1 file11 file13 file15 file17 file19 file3 file5 file7 file9
file10 file12 file14 file16 file18 file2 file4 file6 file8 The reason I want to do this is that I want to monitor the files in a directory that were changing quickly. I had constructed this simple command line: while [[ true ]] ; do ls ; done | uniq The uniq prevents it from spamming my terminal and only showing changes. However it was printing it all on differnet lines, which was making the uniq useless, and increasing the amount of noise. In theory one could use watch for this, but I wanted to see a file as soon as it appeared/disappeared. This is the final solution: while [[ true ]] ; do ls | tr '\n' ' ' ; done | uniq | i don't know of a switch which could do that, but you can pipe your output through tr to do it: ls | tr "\n" " " | <whatever you like> | {
"source": [
"https://serverfault.com/questions/105838",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
106,160 | We recently had a little problem with networking where multiple servers would intermittently lose network connectivity in a fairly painful-to-resolve way (required hard reboot). This has been going on for about two weeks, seemingly at random, on different servers. No particular pattern that we could discern to it. After some digging into it, we saw that the switch was reporting 100 Mbps for the problem port: This sounds remarkably like what happened in the Joel Spolsky article Five Whys Michael spent some time doing a post-mortem, and discovered that the problem was a simple configuration problem on the switch. There are several possible speeds that a switch can use to communicate (10, 100, or 1000 megabits/second). You can either set the speed manually, or you can let the switch automatically negotiate the highest speed that both sides can work with. The switch that failed had been set to autonegotiate. This usually works, but not always, and on the morning of January 10th, it didn’t. We have now disabled auto-negotiate on our network hardware and set it to a fixed rate of 1000 Mbps (gigabit). My questions to those with more server hardware networking expertise: How common are auto-negotiate problems with modern networking hardware? Is it considered good, standard networking practice to disable auto-negotiate and set fixed speeds when setting up networking? | I have yet to see a problem with auto-negotiation of network speeds that isn't caused by either (a) a mismatch of manual on one end of the link and auto on the other or (b) a failing component of the link (cable, port, etc). This depends on the admin, but my experience has shown me that if you manually specify the link speeds and duplex settings, than you are bound to run into speed mismatches. Why? Because it is nearly impossible to document the various connections between switches and servers and then follow that documentation when making changes. Most failures I have seen are because of 1(a) and you only get in to that situation when you start manually setting speed/duplex settings. As mention in the Cisco documentation : If you disable autonegotiation, it hides link drops and other physical layer problems. Only disable autonegotiation to end-devices, such as older Gigabit NICs that do not support Gigabit autonegotiation. Do not disable autonegotiation between switches unless absolutely required, as physical layer problems can go undetected and result in spanning tree loops. Unless you are prepared to setup a change management system for network changes that requires the verification of speed/duplex (and don't forget flow control) or are willing to deal with occasional mismatches that come from manually specifying these settings on all network devices, then stick with the default configuration of auto/auto. In the future, consider monitoring the errors on the switch ports with MRTG so you can spot these issues before you have a problem. Edit: I do see a lot of people referencing negotiation failures on old equipment. Yes this was an issue a long time ago when the standards were being created and not all devices followed them. Are your NICs and switches less than 10 years old? If so, then this won't be an issue. | {
"source": [
"https://serverfault.com/questions/106160",
"https://serverfault.com",
"https://serverfault.com/users/1/"
]
} |
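On the Linux side, a few commands that help spot the duplex and negotiation mismatches described above; eth0 is a placeholder interface name and the statistic names vary by NIC driver:
    ethtool eth0                                   # current Speed, Duplex and whether Auto-negotiation is on
    ethtool -S eth0 | grep -iE 'err|drop|colli'    # counters that climb steadily on a mismatched link
    ip -s link show eth0                           # kernel-side RX/TX error and drop counters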
106,398 | How do I tell lsof I need to list only physical files (not sockets, not TCP/IP connections, only physical files)? | Just looked through some man pages, it appears you use the command: sudo lsof / This will list all open files in the / directory, which is everything on a Linux filesystem. Just tested and it shows only REG and DIR. More examples: lsof -a -d 0-999 -c <command name> /
lsof -a -d 0-999 -p <pid> / 0-999 limits it to files with a file descriptor number. | {
"source": [
"https://serverfault.com/questions/106398",
"https://serverfault.com",
"https://serverfault.com/users/20381/"
]
} |
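Building on the lsof answer above, you can narrow the output to regular files only by filtering on the TYPE column — a rough sketch, assuming the default lsof column layout where TYPE is the fifth field:

# Keep the header row plus rows whose TYPE column is REG (regular files)
sudo lsof / | awk 'NR == 1 || $5 == "REG"'
# Or restrict lsof itself to one process (AND the selections with -a) and count its open regular files
sudo lsof -a -p <pid> / | awk '$5 == "REG"' | wc -l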
106,402 | We have inherited a legacy application which runs under DOS 6.2 and the Phar-Lap DOS extender (if anyone is old enough to remember that). It also uses up to 6 serial ports (16550) which are expected to exist at fixed port addresses & IRQs. There are still many of these systems in the field but the PCs are starting to fail and finding compatible motherboards is impossible due to the hard-coded IRQs used. (The software installs interrupt handlers for the COM ports with COM3 to COM6 expected to use port/IRQ combinations of 3E8/10, 280/11, 2A0/12 & 3A8/15. With modern motherboards, this is a problem). Does anyone know of any virtualization technology which allows you run-up a DOS guest on a host with 6 COM ports whose physical resources are mapped to the guest as above? The display requirement is VGA 640X480 and there is no network requirement. | Just looked through some man pages, it appears you use the command: sudo lsof / This will list all open files in the / directory, which is everything on a Linux filesystem. Just tested and it shows only REG and DIR. More examples: lsof -a -d 0-999 -c <command name> /
lsof -a -d 0-999 -p <pid> / 0-999 limits it to files with a file descriptor number. | {
"source": [
"https://serverfault.com/questions/106402",
"https://serverfault.com",
"https://serverfault.com/users/6962/"
]
} |
106,537 | We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link , apparently has to do with low default settings for net.ipv4.tcp_mem . So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968
new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route
hash entries regardless of how new/old they are. In our environment this is
generally bad. The CPU will be busy rebuilding thousands of entries per
second every time the cache is cleared. However we set this to run once a
day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals , rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel
will accept before it starts expiring route hash entries. This will help
maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse , so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance? | I never ever encountered this issue. However, you should probably increase your hash table width in order to reduce its depth. Using "dmesg", you'll see how many entries you currently have: $ dmesg | grep '^IP route'
IP route cache hash table entries: 32768 (order: 5, 131072 bytes) You can change this value with the kernel boot command line parameter rhash_entries . First try it by hand then add it to your lilo.conf or grub.conf . For example: kernel vmlinux rhash_entries=131072 It is possible that you have a very limited hash table because you have assigned little memory to your HAProxy VM (the route hash size is adjusted depending on total RAM). Concerning tcp_mem , be careful. Your initial settings make me think you were running with 1 GB of RAM, 1/3 of which could be allocated to TCP sockets. Now you've allocated 367872 * 4096 bytes = 1.5 GB of RAM to TCP sockets. You should be very careful not to run out of memory. A rule of thumb is to allocate 1/3 of the memory to HAProxy and another 1/3 to the TCP stack and the last 1/3 to the rest of the system. I suspect that your "out of socket memory" message comes from default settings in tcp_rmem and tcp_wmem . By default you have 64 kB allocated on output for each socket and 87 kB on input. This means a total of 300 kB for a proxied connection, just for socket buffers. Add to that 16 or 32 kB for HAProxy, and you see that with 1 GB of RAM you'll only support 3000 connections. By changing the default settings of tcp_rmem and tcp_wmem (middle param), you can get a lot lower on memory. I get good results with values as low as 4096 for the write buffer, and 7300 or 16060 in tcp_rmem (5 or 11 TCP segments). You can change those settings without restarting, however they will only apply to new connections. If you prefer not to touch your sysctls too much, the latest HAProxy, 1.4-dev8, allows you to tweak those parameters from the global configuration, and per side (client or server). I am hoping this helps! | {
"source": [
"https://serverfault.com/questions/106537",
"https://serverfault.com",
"https://serverfault.com/users/1/"
]
} |
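For reference, the tcp_rmem/tcp_wmem changes described in the answer above can be tested live with sysctl and then persisted — an illustrative sketch only: the minimum/default values come from the answer, while the maximums shown are placeholders, not recommendations for every workload.

# Apply immediately (affects new connections only)
sudo sysctl -w net.ipv4.tcp_rmem="4096 16060 262144"
sudo sysctl -w net.ipv4.tcp_wmem="4096 4096 262144"
# Persist across reboots
echo 'net.ipv4.tcp_rmem = 4096 16060 262144' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem = 4096 4096 262144' | sudo tee -a /etc/sysctl.conf
# rhash_entries is a boot parameter, so it goes on the kernel command line, e.g. in the boot loader config:
#   kernel vmlinux rhash_entries=131072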
106,595 | Usually after dumping a MySQL database with the mysqldump command I immediately tar/gzip the resultant file. I'm looking for a way to do this in one command: So from this: mysqldump dbname -u root -p > dbname.sql
tar czvf dbname.sql.tgz dbname.sql
rm dbname.sql To something like this: mysqldump dbname -u root -p > some wizardry > dbname.sql.tgz Or even better (since I'm usually scp'ing the dump file to another server): mysqldump dbname -u root -p > send dbname.sql.tgz to user@host I'm running bash on debian. | mysqldump --opt <database> | gzip -c | ssh user@wherever 'cat > /tmp/yourfile.sql.gz' You can't use tar in a pipe like this, and you don't need it anyway, as you're only outputting a single file. tar is only useful if you have multiple files. | {
"source": [
"https://serverfault.com/questions/106595",
"https://serverfault.com",
"https://serverfault.com/users/27889/"
]
} |
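Extending that pipeline, here is a hedged sketch of a dated remote backup plus the matching restore; the database name, remote user/host and /backups path are all placeholders:

# Dump, compress and ship in one pipeline, with the date in the remote filename
mysqldump --opt dbname -u root -p | gzip -c | ssh user@host "cat > /backups/dbname-$(date +%F).sql.gz"
# Restoring later works the same way in reverse
ssh user@host "cat /backups/dbname-2010-01-27.sql.gz" | gunzip -c | mysql -u root -p dbname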
106,596 | I am writing a script to copy the linux user passwords to samba on ubuntu server 10.04. I am using samba 3 with the tdbsam backend. 1) How do I (if possible) copy accounts (user/password) from linux to samba using a shell script? 2) How do I find out in my script if a certain user is in the samba user db and has a password and is activated? I need this as my script is run more than once, and on subsequent runs I would need to find out if the user is already present. I would not copy or set the password or activate the account if unnecessary. This is the head of my config: [global]
workgroup = WORKGROUP
server string = %h server
security = SHARE
obey pam restrictions = Yes
pam password change = no
passdb backend = tdbsam
unix password sync = no
syslog = 0
log file = /var/log/samba/log.%m
max log size = 1000
dns proxy = No
panic action = /usr/share/samba/panic-action %d
encrypt passwords = true
invalid users = root
hosts allow = 192.168.0.1/24 | mysqldump --opt <database> | gzip -c | ssh user@wherever 'cat > /tmp/yourfile.sql.gz' You can't use tar in a pipe like this, and you don't need it anyway, as you're only outputting a single file. tar is only useful if you have multiple files. | {
"source": [
"https://serverfault.com/questions/106596",
"https://serverfault.com",
"https://serverfault.com/users/12096/"
]
} |
106,722 | How do I set the shell that is used when a user SSHs to a server. For example I can't stand BASH and need to use ZSH, how do I make it so ZSH is loaded along with my profile ( .zsh_profile ) when I ssh to the machine. I dont want to have to pass a bunch of parameters with ssh either, can't I set the default shell? | Assuming you're running on Linux, you can use the chsh command. chsh -s /bin/ksh foo
chsh -s /bin/bash username | {
"source": [
"https://serverfault.com/questions/106722",
"https://serverfault.com",
"https://serverfault.com/users/27785/"
]
} |
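Two small checks worth doing around chsh — a sketch assuming a typical Linux layout, with username as a placeholder: the new shell must be listed in /etc/shells, and getent confirms what the account will actually get at the next login.

# The target shell has to appear here, otherwise chsh will normally refuse it for non-root users
cat /etc/shells
# Change the login shell (changing another user's shell requires root), then confirm the last field
sudo chsh -s /bin/zsh username
getent passwd username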
106,882 | How can I have it where I have one IP address that sits on the Internet but many web names? For example, when a hosting company has a shared IP but I get unlimited domain names (along with everyone else on that box). I have a box on the Internet but I want to point to another machine that holds a different website when someone types in the different www...(it's sitting right next to it in just a different box). Is that all subdomaining? Thank you. I am the hosting company | It's part of the HTTP 1.1 protocol. Specifically, the HTTP 1.1 protocol includes a header called "host:" which specifies which web site on a particular server the client is attempting to access. So, if snoopy.net and woodstock.org both share 192.0.32.10 and your browser is trying to get content from http://snoopy.net/doghouse the specific http request would look like: GET /doghouse HTTP/1.1
Host: snoopy.net If the desired url is http://woodstock.org/seeds the request would look like GET /seeds HTTP/1.1
Host: woodstock.org In both cases, there would be a tcp socket between your computer and port 80 of the server. The server would know to get content from /var/www/snoopy.net or /var/www/woodstock.org/ based on the Host header. There would be other headers for cookies and other stuff like browser type and allowed content, but the "Host" header specifically is what allows the web server to know which virtual web site is desired. There's more in the RFC2616 . This is also why https sites must ** have their own IP address -- the ssl key exchange and certificate verification take place prior to the http transaction, so the http server won't know to give out the certificate for "woodstock.org" or "snoopy.net" when it receives an https connection on port 443 of 192.0.32.10. edit ** in the comments Grawity points out that there are extensions to SSL in the TLS spec that allow the server to know which web site the user is attempting to access, and that most modern web browsers have these extensions, so must is a bit too strong. | {
"source": [
"https://serverfault.com/questions/106882",
"https://serverfault.com",
"https://serverfault.com/users/15827/"
]
} |
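You can watch the Host header selecting the site yourself — a minimal sketch reusing the illustrative names from the answer above (snoopy.net, woodstock.org and 192.0.32.10 are examples, not real targets):

# Same IP and port, different Host header, different site served
curl -H "Host: snoopy.net" http://192.0.32.10/doghouse
curl -H "Host: woodstock.org" http://192.0.32.10/seeds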
107,187 | If I have a server A into which I can login with my ssh key and I have the ability to "sudo su - otheruser", I lose key forwarding, because the env variables are removed and the socket is only readable by my original user. Is there a way I can bridge the key forwarding through the "sudo su - otheruser", so I can do stuff on a server B with my forwarded key (git clone and rsync in my case)? The only way I can think of is adding my key to authorized_keys of otheruser and "ssh otheruser@localhost", but that's cumbersome to do for every user and server combination I may have. In short: $ sudo -HE ssh user@host
(success)
$ sudo -HE -u otheruser ssh user@host
Permission denied (publickey). | As you mentioned, the environment variables are removed by sudo , for security reasons. But fortunately sudo is quite configurable: you can tell it precisely which environment variables you want to keep thanks to the env_keep configuration option in /etc/sudoers . For agent forwarding, you need to keep the SSH_AUTH_SOCK environment variable. To do so, simply edit your /etc/sudoers configuration file (always using visudo ) and set the env_keep option to the appropriate users. If you want this option to be set for all users, use the Defaults line like this: Defaults env_keep+=SSH_AUTH_SOCK man sudoers for more details. You should now be able to do something like this (provided user1 's public key is present in ~/.ssh/authorized_keys in user1@serverA and user2@serverB , and serverA 's /etc/sudoers file is setup as indicated above): user1@mymachine> eval `ssh-agent` # starts ssh-agent
user1@mymachine> ssh-add # add user1's key to agent (requires pwd)
user1@mymachine> ssh -A serverA # no pwd required + agent forwarding activated
user1@serverA> sudo su - user2 # sudo keeps agent forwarding active :-)
user2@serverA> ssh serverB # goto user2@serverB w/o typing pwd again...
user2@serverB> # ...because forwarding still works | {
"source": [
"https://serverfault.com/questions/107187",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
107,433 | I don't quite understand the theory behind keeping public keys on the server. In the lockbox analogy of public/private keys, to unlock Alice's box, Alice holds the private key while the public key is distributed to Bob. It would seem that the server plays the role of lockbox, so why does it hold the public key? | Keep in mind that the server DOES have a private and public key which is completely separate from the keypair you generate as a user. The private key for the server is usually stored with the server configuration and the public key is transmitted by the server when you attempt to connect. Your client compares the server's public key against your known_hosts file. If used properly, this prevents MITM attacks. You have the private key for your personal account. The server needs your public key so that it can verify that your private key for the account you are trying to use is authorized. So, using your example: both Bob and Alice have private keys and public keys. The public keys, which have been shared beforehand or as part of the connection, are used to verify that the data encrypted by the private keys is legitimate. If the client doesn't have the public key, or has a different public key, you will get a scary warning. If the server doesn't have the client's public key, you will not be allowed in. | {
"source": [
"https://serverfault.com/questions/107433",
"https://serverfault.com",
"https://serverfault.com/users/33209/"
]
} |
107,437 | We currently have a qmail set up as an email gateway for incoming mail, and from there it is routed to a different internal mail server based on smtproutes. Is there a way that we can have one domain forwarded from the qmail 'gateway' to multiple internal servers? For example, when someone will send an email to [email protected], it gets routed to our qmail server, and from there a copy is sent to both mailserver1.example.com and mailserver2.example.com - both of which have the same list of users and both of which think that they are the mailserver for example.com Thank you. | Keep in mind that the server DOES have a private and public key which is completely separate from the keypair you generate as a user. The private key for the server is usually stored with the server configuration and the public key is transmitted by the server when you attempted to connect. You client compares the server's public key against your known_hosts file. If used properly, this prevents MITM attacks. You have the private key for your personal account. The server needs your public key so that it can verify that your private key for the account you are trying to use is authorized. So using your example. Both Bob and Alice have private keys and public keys. The public keys which have been shared before hand or as part of the connection are used to verify the data encrypted by the private keys is legitimate. If the client doesn't have the public key, or has a different public key you will get a scary warning. If the server doesn't have the clients public key, you will not be allowed in. | {
"source": [
"https://serverfault.com/questions/107437",
"https://serverfault.com",
"https://serverfault.com/users/22276/"
]
} |
107,546 | On my Ubuntu system, I have this line in /etc/fstab: myserver:/home/me /mnt/me nfs rsize=8192,wsize=8192,timeo=14,intr When I do sudo mount -a I get: mount.nfs: access denied by server while mounting myserver:/home/me How can I diagnose this problem? The nfs server is also Ubuntu. Additional details: I am able to mount this nfs share from other Ubuntu clients on the same network with no problem. However, the problematic client is different in that it is running inside VirtualBox on a Windows system. I can ping "myserver" fine from the problematic client. EDIT: /etc/exports on "myserver": /home/me *(rw,all_squash,async,no_subtree_check,anonuid=1000,anongid=1000) /etc/hosts.allow and /etc/hosts.deny on "myserver" are both all comments. And keep in mind, that I can connect fine from other clients on the same network. | Found it! One of the logs had the line: refused mount request from 192.168.1.108 for /home/me (/home/me): illegal port 64112 I googled and found that since the port is over 1024 I needed to add the "insecure" option to the relevant line in /etc/exports on the server. Once I did that (and ran exportfs -r), the mount -a on the client worked. | {
"source": [
"https://serverfault.com/questions/107546",
"https://serverfault.com",
"https://serverfault.com/users/8625/"
]
} |
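For anyone hitting the same "illegal port" refusal, the fix amounts to adding the insecure option to the export and re-exporting — a sketch based on the exports line from the question above:

# /etc/exports on the server: allow client source ports above 1023
/home/me *(rw,all_squash,async,no_subtree_check,insecure,anonuid=1000,anongid=1000)
# Re-read the exports table without restarting the NFS server
sudo exportfs -r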
107,608 | It's very weird but when setting a git repository and creating a post-receive hook with: echo "--initializing hook--"
cd ~/websites/testing
echo "--prepare update--"
git pull
echo "--update completed--" the hook runs indeed, but it never manage to run git pull properly: 6bfa32c..71c3d2a master -> master
--initializing hook--
--prepare update--
fatal: Not a git repository: '.'
Failed to find a valid git directory.
--update completed-- so I'm asking myself now: how is it possible to make the hook update the clone with post-receive? In this case the user running the processes is the same, and everything is inside the user's folder, so I really don't understand... because if I go manually into cd ~/websites/testing
git pull it works without any problem... any help on that would be pretty much appreciated Thanks a lot | While the hook is running, GIT_DIR and (if the worktree was defined explicitly) GIT_WORK_TREE are set. That means your pull won't run with the second repository in the directory you changed to. Try git --git-dir ~/websites/testing/.git --work-tree ~/websites/testing pull ; or unset git's repo-local environment with this: unset $(git rev-parse --local-env-vars) More info on these environment variables in man 1 git . | {
"source": [
"https://serverfault.com/questions/107608",
"https://serverfault.com",
"https://serverfault.com/users/33238/"
]
} |
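Putting the answer above together, a complete post-receive hook might look like the sketch below; the ~/websites/testing checkout path comes from the question and the error handling is minimal:

#!/bin/sh
echo "--initializing hook--"
# Clear GIT_DIR/GIT_WORK_TREE that git sets for hooks, so the pull operates on the target repo
unset $(git rev-parse --local-env-vars)
cd ~/websites/testing || exit 1
echo "--prepare update--"
git pull
echo "--update completed--"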
108,080 | How can I list all available versions of a specific package? I know that with apt-get install myPackage=1.2.3 a specific version can be installed.
And with apt-show-versions -a myPackage I get a list of versions that are known to the system. But how do I get a list of all available versions?
I think that isn't possible using the apt tools because they are
restricted to configured repositories. So what is the way to go? Some web repositories? What is the recommendation for Ubuntu 8.04? | Try with apt-cache madison myPackage Quote from the man page: It displays available versions of a package in a tabular format. | {
"source": [
"https://serverfault.com/questions/108080",
"https://serverfault.com",
"https://serverfault.com/users/33401/"
]
} |
108,154 | After calling pushd / popd in bash, it will print the current directory stack. Is there any way to prevent this behaviour, so that it will act 'quietly'? This sort of noise in a command is uncommon in unix tools. | I think this sort of "noise" is not uncommon, that's why you often do this: pushd > /dev/null | {
"source": [
"https://serverfault.com/questions/108154",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
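If you want pushd/popd to be quiet everywhere rather than redirecting each call, a common trick is to wrap them in shell functions — a small sketch for ~/.bashrc:

# Silence the directory-stack printout while keeping normal pushd/popd behaviour
pushd () { command pushd "$@" > /dev/null; }
popd ()  { command popd  "$@" > /dev/null; }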
108,193 | Is there a way to mount a network location so that it appears as a local physical disk? e.g. \\computer\share as D: (not a network drive) | Yes, this is possible in Windows Vista and in Windows 7. Open the Command Prompt as an administrator. Then type the following command: mklink /D C:\LinkName \\NetworkLocation\LocationName This will create a "symbolic link" on Drive C called LinkName , which will link to LocationName on \\NetworkLocation . Windows will, of course, know that this is a symbolic link, but will treat it as if it were a folder on the local drive. All applications will treat this symbolic link as a local resource. Hope this helps. | {
"source": [
"https://serverfault.com/questions/108193",
"https://serverfault.com",
"https://serverfault.com/users/33211/"
]
} |
108,261 | Apache has a graceful option which can scan for modification in http.conf without restarting Apache. What about nginx? | nginx supports the following signals : TERM, INT - Quick shutdown
QUIT - Graceful shutdown
HUP - Configuration reload: Start the new worker processes with a new configuration, Gracefully shutdown the old worker processes
USR1 - Reopen the log files
USR2 - Upgrade Executable on the fly
WINCH - Gracefully shutdown the worker processes HUP is what you are looking for, so sudo kill -HUP pid (nginx pid) source : http://nginx.org/en/docs/control.html | {
"source": [
"https://serverfault.com/questions/108261",
"https://serverfault.com",
"https://serverfault.com/users/31877/"
]
} |
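Two equivalent ways to send that HUP in practice — a sketch assuming the stock pid file location, which varies by distribution and build:

# Always test the configuration first
sudo nginx -t
# Ask the master process to reload its configuration
sudo kill -HUP $(cat /var/run/nginx.pid)
# Newer nginx builds can send the same signal via the binary itself
sudo nginx -s reload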
108,396 | We had someone steal some files before quitting and it has eventually come down to a lawsuit. I've now been provided with a cd of files and I have to "prove" that they are our files by matching them to our files from our own file server. I don't know if this is just for our lawyer or evidence for court or both. I also realize that I am not an impartial 3rd party. In thinking how to "prove" these files came from our servers we realized I also have to prove we had the files before receiving the cd. My boss took screen shots of our explorer windows of the files in question with creation dates and file names showing and emailed them to our lawyer the day before we received the cd. I would have liked to have provided md5sums but I wasn't involved in that part of the process. My first thoughts were to use the unix diff program and give console shell output. I also thought I could couple it with the md5 sums of both our files and their files. Both of these can easily be faked. I'm at a loss of what I actually should provide and then again at a loss on how to provide an auditable trail to reproduce my findings, so if it does need to be proved by a 3rd party it can be. Does anyone have any experience with this? Facts about the case: The files came from a Windows 2003 file server. The incident happened over a year ago and the files haven't been modified since before the incident. | The technical issues are pretty straightforward. Using a combination of SHA and MD5 hashes is pretty typical in the forensics industry. If you're talking about text files that might've been modified-- say source code files, etc, then performing some type of structured "diff" would be pretty common. I can't cite cases, but there's definitely precedent out there re: the "stolen" file being a derivative work of the "original". Chain-of-custody issues are a LOT more of a worry to you than proving that the files match. I'd talk to your attorney about what they're looking for, and would strongly consider getting in touch with an attorney experienced with this type of litigation or a computer forensics professional and get their advice on the best way to proceed so that you don't blow your case. If you actually received a copy of the files I hope you did a good job of maintaining a chain-of-custody. If I were the opposing counsel I'd argue that you received the CD and used it as the source material to produce the "original" files that were "stolen". I'd have kept that CD of "copied" files far, far away from the "originals" and had an independent party perform "diffs" of the files. | {
"source": [
"https://serverfault.com/questions/108396",
"https://serverfault.com",
"https://serverfault.com/users/966/"
]
} |
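If, after taking legal advice, you do need to produce hash evidence as described in the answer above, the mechanics are simple — an illustrative sketch only, with placeholder paths, and no substitute for a proper chain of custody:

# Hash both file sets (server originals and the CD copy) and compare just the digests
( cd /srv/fileserver/project && sha256sum * | sort ) > originals.sha256
( cd /mnt/cdrom/files && sha256sum * | sort ) > cd-copy.sha256
# No output from diff means every matching filename has identical content
diff originals.sha256 cd-copy.sha256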
108,866 | I want to install the administration tools on a Windows Server 2008 (R1) machine. On Windows 2003 you installed adminpak.msi, but I can't find such a file for 2008. Is this a "feature" in Server Manager? If so what is it named? ---UPDATE---
So I drilled into the server Features list and I have "Remote Server Administration Tools" but it only includes File Services, Print Services and Web Server. This is a member server in a domain but not a domain controller. It is Windows 2008 (original) not R2. Still, why can't it run AD users and computers from this machine? | From Server Manager (available under Administrative Tools), go to "Features", then "Add Features". Windows Server 2008 Standard Instructions: Expand: Remote Server Administration Tools Role Administration Tools Active Directory Domain Services Tools Then check Active Directory Domain Controller Tools . Windows Server 2008 R2 Instructions: Expand: Remote Server Administration Tools Role Administration Tools AD DS and AD LDS Tools AD DS Tools Then check AD DS Snap-Ins and Command-Line Tools . Feature Includes: Active Directory Users and Computers Active Directory Domains and Trusts Active Directory Sites and Services | {
"source": [
"https://serverfault.com/questions/108866",
"https://serverfault.com",
"https://serverfault.com/users/33171/"
]
} |
109,154 | Is there a way to know if the Windows machine I'm working on is virtual or physical?
(I'm connecting to the machine with RDP. If it is a virtual machine, it is hosted on and handled by VMware.) | If it's Windows, just have a look at the hardware screens. It'll have a billion and five VMWare-branded virtual devices. | {
"source": [
"https://serverfault.com/questions/109154",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
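A slightly more scriptable check than eyeballing Device Manager is to ask WMI for the machine's manufacturer and model over that same RDP session — a sketch using Windows commands; on a VMware guest the manufacturer field typically reads "VMware, Inc.":

wmic computersystem get manufacturer,model
systeminfo | findstr /i /c:"System Manufacturer" /c:"System Model"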
109,362 | We have a web application (developed by a third party) that runs on Tomcat. We have been getting very bad performance from the application. The application developer is claiming that it is an Industry Best Practice to restart web servers every night, to free up all memory usage and start over. From the customer perspective that alleviates their issue of the site crashing during the day, but from a SysAdmin perspective it is an awful solution. We host 20 of these applications in different servers for different clients, and the coordination of making sure that all are being restarted every night just seems wrong. | This is certainly not a best practice. While it is good to restart your servers periodically just to make sure that everything comes up correctly, needing to restart nightly points to a very serious memory leak in the application. | {
"source": [
"https://serverfault.com/questions/109362",
"https://serverfault.com",
"https://serverfault.com/users/4310/"
]
} |
109,800 | This is a Canonical Question about Hosting multiple SSL websites on the same IP. I was under the impression that each SSL Certificate required its own unique IP Address/Port combination. But the answer to a previous question I posted is at odds with this claim. Using information from that Question, I was able to get multiple SSL certificates to work on the same IP address and on port 443. I am very confused as to why this works given the assumption above and reinforced by others that each SSL domain website on the same server requires its own IP/Port. I am suspicious that I did something wrong. Can multiple SSL Certificates be used this way? | For the most up-to-date information on Apache and SNI, including additional HTTP-Specific RFCs, please refer to the Apache Wiki. FYI: "Multiple (different) SSL certificates on one IP" is brought to you by the magic of TLS Upgrading.
It works with newer Apache servers (2.2.x) and reasonably recent browsers (don't know versions off the top of my head). RFC 2817 (upgrading to TLS within HTTP/1.1) has the gory details, but basically it works for a lot of people (if not the majority). You can reproduce the old funky behavior with openssl's s_client command (or any "old enough" browser) though. Edit to add: apparently curl can show you what's happening here better than openssl: SSLv3 mikeg@flexo% curl -v -v -v -3 https://www.yummyskin.com
* About to connect() to www.yummyskin.com port 443 (#0)
* Trying 69.164.214.79... connected
* Connected to www.yummyskin.com (69.164.214.79) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using DHE-RSA-AES256-SHA
* Server certificate:
* subject: serialNumber=wq8O9mhOSp9fY9JcmaJUrFNWWrANURzJ; C=CA;
O=staging.bossystem.org; OU=GT07932874;
OU=See www.rapidssl.com/resources/cps (c)10;
OU=Domain Control Validated - RapidSSL(R);
CN=staging.bossystem.org
* start date: 2010-02-03 18:53:53 GMT
* expire date: 2011-02-06 13:21:08 GMT
* SSL: certificate subject name 'staging.bossystem.org'
does not match target host name 'www.yummyskin.com'
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
curl: (51) SSL: certificate subject name 'staging.bossystem.org'
does not match target host name 'www.yummyskin.com' TLSv1 mikeg@flexo% curl -v -v -v -1 https://www.yummyskin.com
* About to connect() to www.yummyskin.com port 443 (#0)
* Trying 69.164.214.79... connected
* Connected to www.yummyskin.com (69.164.214.79) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using DHE-RSA-AES256-SHA
* Server certificate:
* subject: C=CA; O=www.yummyskin.com; OU=GT13670640;
OU=See www.rapidssl.com/resources/cps (c)09;
OU=Domain Control Validated - RapidSSL(R);
CN=www.yummyskin.com
* start date: 2009-04-24 15:48:15 GMT
* expire date: 2010-04-25 15:48:15 GMT
* common name: www.yummyskin.com (matched)
* issuer: C=US; O=Equifax Secure Inc.; CN=Equifax Secure Global eBusiness CA-1
* SSL certificate verify ok. | {
"source": [
"https://serverfault.com/questions/109800",
"https://serverfault.com",
"https://serverfault.com/users/14896/"
]
} |
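To see which certificate a server hands out for a given hostname on a shared IP, openssl's s_client can send the SNI name explicitly — a sketch with placeholder address and hostname:

# Without -servername: you get the default certificate configured for that IP
openssl s_client -connect 192.0.2.10:443 < /dev/null 2>/dev/null | openssl x509 -noout -subject
# With -servername: an SNI-aware server returns the certificate for that specific virtual host
openssl s_client -connect 192.0.2.10:443 -servername www.example.org < /dev/null 2>/dev/null | openssl x509 -noout -subject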
109,811 | I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only pc on a home network. My original plan was to install one virtual PC for family browsing, another for remote web based server administration and ban browser use from the host Windows 7 o/s. The idea was that I could recover to a fresh VHD image once a week to eliminate any build up of malware inside the browser VMs. However now I am looking for alternative solutions since the Intel Atom cpu does not have hardware VT support which Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host o/s wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery would be browser bookmarks but these could be exported to remote service. Edit 1: I am thinking the o/s reset could be done via some disk image recovery process. Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome O/S. I have just seen a video of the Google Chrome o/s booting off a usb pen drive in seconds. | For the most up-to-date information on Apache and SNI, including additional HTTP-Specific RFCs, please refer to the Apache Wiki FYsI: "Multiple (different) SSL certificates on one IP" is brought to you by the magic of TLS Upgrading.
It works with newer Apache servers (2.2.x) and reasonably recent browsers (don't know versions off the top of my head). RFC 2817 (upgrading to TLS within HTTP/1.1) has the gory details, but basically it works for a lot of people (if not the majority). You can reproduce the old funky behavior with openssl's s_client command (or any "old enough" browser) though. Edit to add: apparently curl can show you what's happening here better than openssl: SSLv3 mikeg@flexo% curl -v -v -v -3 https://www.yummyskin.com
* About to connect() to www.yummyskin.com port 443 (#0)
* Trying 69.164.214.79... connected
* Connected to www.yummyskin.com (69.164.214.79) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using DHE-RSA-AES256-SHA
* Server certificate:
* subject: serialNumber=wq8O9mhOSp9fY9JcmaJUrFNWWrANURzJ; C=CA;
O=staging.bossystem.org; OU=GT07932874;
OU=See www.rapidssl.com/resources/cps (c)10;
OU=Domain Control Validated - RapidSSL(R);
CN=staging.bossystem.org
* start date: 2010-02-03 18:53:53 GMT
* expire date: 2011-02-06 13:21:08 GMT
* SSL: certificate subject name 'staging.bossystem.org'
does not match target host name 'www.yummyskin.com'
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
curl: (51) SSL: certificate subject name 'staging.bossystem.org'
does not match target host name 'www.yummyskin.com' TLSv1 mikeg@flexo% curl -v -v -v -1 https://www.yummyskin.com
* About to connect() to www.yummyskin.com port 443 (#0)
* Trying 69.164.214.79... connected
* Connected to www.yummyskin.com (69.164.214.79) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /usr/local/share/certs/ca-root-nss.crt
CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using DHE-RSA-AES256-SHA
* Server certificate:
* subject: C=CA; O=www.yummyskin.com; OU=GT13670640;
OU=See www.rapidssl.com/resources/cps (c)09;
OU=Domain Control Validated - RapidSSL(R);
CN=www.yummyskin.com
* start date: 2009-04-24 15:48:15 GMT
* expire date: 2010-04-25 15:48:15 GMT
* common name: www.yummyskin.com (matched)
* issuer: C=US; O=Equifax Secure Inc.; CN=Equifax Secure Global eBusiness CA-1
* SSL certificate verify ok. | {
"source": [
"https://serverfault.com/questions/109811",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |