source_id (int64, 1-4.64M) | question (string, 0-28.4k) | response (string, 0-28.8k) | metadata (dict)
---|---|---|---|
453,617 | This is a canonical question about the use of a *AMP stack. I recently had a talk with some experienced people and they suggested to me not to use a WAMP stack, and instead install apache, mysql and php separately. I don't understand why they have suggested this, though, so can anyone tell me? Is there a particular disadvantage of WAMP, or a particular advantage to installing all of them separately? Since a WAMP stack itself is composed of apache, mysql and php, then what's the difference between using the WAMP stack and installing them all separately? | Since a WAMP stack itself is composed of apache, mysql and php, then what's the difference between using the WAMP stack and installing them all separately? There are many differences, though the three most troubling ones are: insecure configuration, difficulty and lag in upgrades, and non-standard configs/binary locations. To expand on #1: WAMP, MAMP, LAMPP, XAMPP, etc. are designed to be one-click stack installers that make it easy for developers to get to work quickly and with the least resistance possible. As such, many of the configuration values are intentionally left in a very insecure state. This is OK for development work, but incredibly stupid to do in production. Then, for #2, OS vendors make it very easy to keep your LAMP stack upgraded with the most recent feature updates and security patches. When their packages get released to their official repos, they've been through much testing and the chances of them breaking anything on your system are fairly low. The vast majority of the time, you're able to upgrade everything with a single command. Finally, #3: one-click installers place their files in very non-standard locations. As such, when you (or anyone else) go to troubleshoot things, you're left searching all over your filesystem for, say, your php.ini file. When you install a LAMP stack from your distribution's package repo, everything will be in an expected, well-known location. | {
"source": [
"https://serverfault.com/questions/453617",
"https://serverfault.com",
"https://serverfault.com/users/121407/"
]
} |
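To make the package-management point in the entry above concrete, here is a minimal sketch of setting up and updating a LAMP stack from a distribution's official repositories. It assumes a Debian/Ubuntu host with apt; the package names (apache2, mysql-server, php5) are era-typical examples and differ on other distributions.

# install the stack from the official repos
sudo apt-get update
sudo apt-get install apache2 mysql-server php5 php5-mysql

# later: pull in tested feature and security updates for everything in one command
sudo apt-get update && sudo apt-get upgrade

# configs and binaries end up in well-known locations, for example:
ls /etc/apache2/apache2.conf /etc/php5/apache2/php.ini /etc/mysql/my.cnf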
453,680 | In the past, all of our servers have automatically shown command arguments passed to rake when we view them in top. For example: But on this particular server, we get this instead (picture is top running, showing the rake command, but not showing any of the arguments that had been passed to rake): Both servers are running Ubuntu (though the server without rake commands is a newer flavor of Ubuntu). Both run rake through Ruby Enterprise Edition (as powered by rvm). I can't seem to find any documentation on how top chooses what to show in the "command" column, other than the obvious "more data/less data" toggle (all screenshots are shown with the extra data enabled). Has anyone encountered anything similar to this? | Use top -c to make top show arguments. Alternatively, just press c in a running top to toggle this. | {
"source": [
"https://serverfault.com/questions/453680",
"https://serverfault.com",
"https://serverfault.com/users/147689/"
]
} |
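As a small follow-up to the entry above: besides pressing c inside a running top, the same result can be had at start-up, and ps can be used to double-check what the full command line really is. Both are standard procps commands; the rake pattern is just the example from the question.

# start top with full command lines shown (same as toggling with 'c')
top -c

# cross-check with ps, which always has the full argument list available
ps -eo pid,user,args | grep [r]ake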
453,811 | I have a symfony2 application on my Ubuntu machine. Symfony has plenty of useful console commands (like php app/console cache:clear or php app/console assets:install web ). The problem is: if I run them as the root user, the newly generated files will have root:root user/group, and if I access my website I get errors (because apache cannot read/modify these files -> they should have www-data:www-data ). Running chown www-data:www-data solves the problem, but running it every time I clear my cache is not a solution. How can I configure PHP CLI to always run as the www-data user/group? Or how can I run a command as a different user (being root, run it as www-data)? | Run a command as another user once: sudo -u www-data php script.php This should work if you have sudo installed and are root (or another user that is allowed to do that; see the sudo group, man sudoers and visudo ). For reusability, add an alias. Place this in your .bashrc , .profile or similar (and reload the shell to make it effective): alias phpwww='sudo -u www-data php' You can then type phpwww script.php and it will actually execute sudo -u www-data php script.php for you. For other, more complex and error-prone ways, read on. As for always running php as www-data , there are several possibilities. You could create a simple wrapper shellscript. If /usr/bin/php is only a soft-link to /usr/bin/php5 or similar, that makes it simpler. Just replace the soft-link (NOT the file php5 ) with a script like this: #!/bin/sh
sudo -u www-data php5 "$@"
exit $? That's not tested though. Also be aware that this will ALWAYS try to run php5 as user www-data , even if the user may not be root and may not have permission to do so. And it may also not be what you really want. Some installed services may run into problems when trying to execute php. A (possibly better) solution to only apply that to root may be to leave the soft-link /usr/bin/php alone and place the script in /root/bin instead. Then add that folder to PATH via .bashrc , .profile or similar. If you have /etc/skel/.profile , that may point out how that is done: # set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
PATH="$HOME/bin:$PATH"
fi Once this is in your .bashrc , .profile or similar, every new shell you open will allow you to directly execute any executables (+x) in $HOME/bin ( /root/bin for root). Hint: You may want to name the wrapper script something like phpwww so you explicitly specify php script.php or phpwww script.php to decide if you want regular or sudo'ed php. | {
"source": [
"https://serverfault.com/questions/453811",
"https://serverfault.com",
"https://serverfault.com/users/80772/"
]
} |
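As a follow-up to the wrapper-script idea in the entry above, here is a slightly more compact variant. It is an untested sketch that assumes the real interpreter is /usr/bin/php5; exec replaces the wrapper process instead of returning, and "$@" preserves quoted arguments.

#!/bin/sh
# phpwww: run PHP as the web server user; assumes /usr/bin/php5 is the real binary
exec sudo -u www-data /usr/bin/php5 "$@"

Saved as, say, /root/bin/phpwww and made executable with chmod +x, it can be called as phpwww app/console cache:clear, matching the alias approach described above.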
453,824 | Possible Duplicate: SysAdmin & Developer: Responsibilities Suppose, I have 20 servers: We keep data in Linux servers Developers often need to Login to the server to debug some issue Sometimes they have to access user data and run through the app in production to replicate a problem that was not reproducible in test environment What are the best practices for this situation? | Run a command as another user once: sudo -u www-data php script.php This should work if you have sudo installed and are root (or another user that is allowed to do that; see the sudo group, man sudoers and visudo ). For reusability, add an alias. Place this in your .bashrc , .profile or similar (and reload the shell to make it effective): alias phpwww='sudo -u www-data php' You can then type phpwww script.php and it will actually execute sudo -u www-data php script.php for you. For other, more complex and error-prone ways, read on. As for always running php as www-data , there are several possiblities. You could create a simple wrapper shellscript. If /usr/bin/php is only a soft-link to /usr/bin/php5 or similar, that makes it simpler. Just replace the soft-link (NOT the file php5 ) with a script like this: #!/bin/sh
sudo -u www-data php5 "$@"
exit $? That's not tested though. Also be aware that this will ALWAYS try to run php5 as user www-data , even if the user may not be root and may not have permission to do so. And it may also not be what you really want. Some installed services may run into problems when trying to execute php. A (possibly better) solution to only apply that to root may be to leave the soft-link /usr/bin/php alone and place the script in /root/bin instead. Then add that folder to PATH via .bashrc , .profile or similar. If you have /etc/skel/.profile , that may point out how that is done: # set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
PATH="$HOME/bin:$PATH"
fi Once this is in your .bashrc , .profile or similar, every new shell you open will allow you to directly execute any executables (+x) in $HOME/bin ( /root/bin for root). Hint: You may want to name the wrapper script something like phpwww so you explicitly specify php script.php or phpwww script.php to decide if you want regular or sudo'ed php. | {
"source": [
"https://serverfault.com/questions/453824",
"https://serverfault.com",
"https://serverfault.com/users/2172/"
]
} |
454,135 | I'm sorry but I'm stuck figuring out a problem I'm facing here. I removed the AD feature from Server Manager and after rebooting, my Server 2012 GUI wasn't there anymore. There's only a command prompt to deal with. I tried to re-enable the GUI based on threads I've found. I ran SConfig, but option no. 12, which is to restore the GUI, is not there. I tried running PowerShell but it stated "powershell is not recognized as internal or external...". I changed my path to c:\windows\system32\windowspowershell\v1.0 and tried running PowerShell just to find the same error message. So how can I re-enable the GUI feature of my Server 2012? | Is Explorer simply not starting? Have you tried typing explorer.exe in the command prompt window? I guess this isn't the case, as you wouldn't normally get a command prompt when logging in. It sounds like somehow the shell has been removed, effectively giving you a Server Core install, in which case try issuing the following from the command prompt. This should re-enable the shell if it has been somehow disabled. Dism /online /enable-feature /featurename:Server-Gui-Mgmt /featurename:Server-Gui-Shell /featurename:ServerCore-FullServer | {
"source": [
"https://serverfault.com/questions/454135",
"https://serverfault.com",
"https://serverfault.com/users/147864/"
]
} |
454,332 | In my organisation we are thinking about buying blade servers - instead of rack servers. Of course technology vendors also make them sound very nice. A concern, that I read very often in different forums, is, that there is a theoretical possibility of the server chassis going down - which would in consequence take all the blades down. That is due to shared infrastructure. My reaction on this probability would be to have redundancy and by two chassis instead of one (very costly of course). Some people (including e.g. HP Vendors) try to convince us, that the chassis is very very unlikely to fail, due to many redundancies (redundant power supply, etc.). Another concern on my side is, that if something goes down, spare parts might be required - which is difficult in our location (Ethiopia). So I would ask to experienced administrators, that have managed blade server: What is your experience? Do they go down as a whole - and what is the sensible shared infrastructure, that might fail? That question could be extended to shared storage. Again I would say, that we need two storage units instead of only one - and again the vendors say, that this things are so rock solid, that no failure is expected. Well - I can hardly believe, that such a critical infrastructure can be very reliable without redundancy - but maybe you can tell me, whether you have successfull blade-based projects, that work without redundancy in its core parts (chassis, storage...) At the moment, we look at HP - as IBM looks much too expensive. | There's a low probability of complete chassis failure... You'll likely encounter issues in your facility before sustaining a full failure of a blade enclosure. My experience is primarily with HP C7000 and HP C3000 blade enclosures. I've also managed Dell and Supermicro blade solutions. Vendor matters a bit. But in summary, the HP gear has been stellar, Dell has been fine, and Supermicro was lacking in quality, resiliency and was just poorly-designed. I've never experienced failures on the HP and Dell side. The Supermicro did have serious outages, forcing us to abandon the platform. On the HP's and Dells, I've never encountered a full chassis failure. I've had thermal events. The air-conditioning failed at a co-location facility sending temperatures to 115°F/46°C for 10 hours. Power surges and line failures: Losing one side of an A/B feed. Individual power supply failures. There are usually six power supplies in my blade setups, so there's ample warning and redundancy. Individual blade server failures. One server's issues do not affect the others in the enclosure. An in-chassis fire ... I've seen a variety of environments and have had the benefit of installing in ideal data center conditions, as well as some rougher locations. On the HP C7000 and C3000 side, the main thing to consider is that the chassis is entirely modular. The components are designed minimize the impact of a component failure affecting the entire unit. Think of it like this... The main C7000 chassis is comprised of front, (passive) midplane and backplane assemblies. The structural enclosure simply holds the front and rear components together and supports the systems' weight. Nearly every part can be replaced... believe me, I've disassembled many. The main redundancies are in fan/cooling, power and networking an management. The management processors ( HP's Onboard Administrator ) can be paired for redundancy, however the servers can run without them. Fully-populated enclosure - front view. 
The six power supplies at the bottom run the full depth of the chassis and connect to a modular power backplane assembly at the rear of the enclosure. Power supply modes are configurable: e.g. 3+3 or n+1. So the enclosure definitely has power redundancy. Fully-populated enclosure - rear view. The Virtual Connect networking modules in the rear have an internal cross-connect, so I can lose one side or the other and still maintain network connectivity to the servers. There are six hot-swappable power supplies and ten hot-swappable fans. Empty enclosure - front view. Note that there's really nothing to this part of the enclosure. All connections are passed-through to the modular midplane. Midplane assembly removed. Note the six power feeds for the midplane assembly at the bottom. Midplane assembly. This is where the magic happens. Note the 16 separate downplane connections: one for each of the blade servers. I've had individual server sockets/bays fail without killing the entire enclosure or affecting the other servers. Power supply backplane(s). 3ø unit below standard single-phase module. I changed power distribution at my data center and simply swapped the power supply backplane to deal with the new method of power delivery Chassis connector damage. This particular enclosure was dropped during assembly, breaking the pins off of a ribbon connector. This went unnoticed for days, resulting in the running blade chassis catching FIRE... Here are the charred remains of the midplane ribbon cable. This controlled some of the chassis temperature and environment monitoring. The blade servers within continued to run without incident. The affected parts were replaced at my leisure during scheduled downtime, and all was well. | {
"source": [
"https://serverfault.com/questions/454332",
"https://serverfault.com",
"https://serverfault.com/users/147949/"
]
} |
454,977 | I host my site at domain.com . My DNS entries in Route53 are as follows: domain.com A xxx.xxx.xxx.xxx 300
domain.com NS stuff.awsdns-47.org 172800
domain.com SOA stuff.awsdns-47.org 900 I would like to redirect traffic from www.domain.com to domain.com , as currently this just returns a 404. This question on SO suggested a PTR record, and I added that: www.domain.com PTR domain.com 300 but it didn't work. What should I be doing? | PTR is for setting up reverse IP lookups, and it's not something you should care about. Remove it. What you need is a CNAME for www: www.domain.com CNAME domain.com 300 | {
"source": [
"https://serverfault.com/questions/454977",
"https://serverfault.com",
"https://serverfault.com/users/51639/"
]
} |
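Once the CNAME from the entry above is in place in Route53, dig (from the dnsutils/bind-utils package) is a quick way to verify it from the command line; domain.com stands in for the real zone name.

# confirm the www record is a CNAME pointing at the bare domain
dig +short www.domain.com CNAME

# confirm the address the browser will ultimately use
dig +short www.domain.com A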
455,799 | I have a rewrite in my ngix conf file that works properly except it seems to include the location block as part of the $uri variable. I only want the path after the location block. My current config code is: location /cargo {
try_files $uri $uri/ /cargo/index.php?_REWRITE_COMMAND=$uri&args;
} Using an example url of http://localhost/cargo/testpage the redirect works, however the value of the "_REWRITE_COMMAND" parameter received by my php file is "/cargo/testpage". I need to strip off the location block and just have "testpage" as the $uri I am pretty sure there is a regex syntax to split the $uri and assign it to a new variable using $1 $2 etc, but I can't find any example to do just a variable assignment using a regex that is not part of a rewrite statement. I've been looking and trying for hours and I just can't seem to get past this last step. I also know I could just strip this out on the application code, but the reason I want to try to fix it in the nginx conf is for compatibility reasons as it also runs on Apache. I also should say that I have figured out a really hacky way to do it, but it involves an "if" statement to check for file existance and the documentation specifically says not to do it that way. | Looking around I would guess that using a regexp location with captures is the easiest. Adapting your example I end up with: location ~ ^/cargo(.*) {
try_files $1 $1/ /cargo/index.php?_REWRITE_COMMAND=$1&args;
} | {
"source": [
"https://serverfault.com/questions/455799",
"https://serverfault.com",
"https://serverfault.com/users/148597/"
]
} |
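A quick way to check the capturing location in the entry above, assuming the backend logs or echoes the _REWRITE_COMMAND parameter it receives; the URL is the same example used in the question.

# request through nginx; with the capture in place, index.php should now
# receive _REWRITE_COMMAND=/testpage rather than /cargo/testpage
curl -v 'http://localhost/cargo/testpage'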
456,415 | One of my client sites called to ask me to change the subnet masks of the Linux servers I manage there while they re-IP/change the netmask of their network based on a 10.0.0.x scheme. "Can you change the Linux server netmasks from 255.255.255.0 to 255.240.0.0?" You mean, 255.255.240.0? "No, 255.240.0.0." Are you sure you need that many IP addresses? "Yeah, we never want to run out of IP addresses." A quick check against the Subnet Cheat Sheet shows: a 255.255.255.0 netmask, a /24 provides 256 hosts. It's clear to see that an organization can exhaust that number of IP addresses. a 255.240.0.0 netmask, a /12 provides 1,048,576 hosts. This is a small < 200-user site. I doubt that they'd allocate more than 400 IP addresses, ever... Maybe 500, but at that point, more subnets/VLANs should be established. I suggested something that provides fewer hosts, like a /22 or /21 (1024 and 2048 hosts, respectively), but was unable to give a specific reason against using the /12 subnet. Is there anything this customer should be concerned about? Are there any specific reasons they shouldn't use such an incredibly large mask in their environment? | As stated in other answers, having too many hosts in the broadcast domain can really start to make broadcasts a mess. They'll need a lot of expansion in the subnet before it becomes a potential problem. Future growth planning becomes a mess. Adding extra sites with their own IP space gets difficult when you've already laid a needlessly huge footprint down in the available space. Internal network security boundaries become impossible. Assigning different subnets to different groups of users and splitting up low security servers/high security servers/restricted management interfaces of servers/storage/network devices goes out the window. Any ol' user's laptop that picked up a virus at home can ARP poison the network and take the servers down or man-in-the-middle them. You have no way to keep a compromised device away from sensitive network locations, like out-of-band management interfaces of servers. A typo in an innocent reconfig of network settings can potentially IP conflict with any other device on the network. If they're not planning on growing in any way that would ever require more subnets, and not planning on ever adding any complexity or security to their network, then it's fine, since it's effectively identical to their current network configuration -- but if they're asking for this, they're obviously planning on expanding. Needless at best, and seriously bad idea at worst. | {
"source": [
"https://serverfault.com/questions/456415",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
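To put numbers behind the host-count argument in the entry above, ipcalc makes the difference between the suggested /22 and the requested /12 easy to see. The 10.0.0.0 base comes from the question; ipcalc is packaged by most distributions, though the Debian and Red Hat versions format their output differently.

# a /22 gives roughly 1022 usable hosts -- ample for a <200-user site
ipcalc 10.0.0.0/22

# a /12 puts over a million addresses into a single broadcast domain
ipcalc 10.0.0.0/12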
457,301 | We have a NAS server at the company I work for that is being used for storing photography sessions. Each session is approximately 100gb. Over the last couple of years this server has accumulated 10+ TB of data, and we are increasing the amount of photoshoots exponentially. I estimate that by the end of next year we will have 20+ TB stored on this NAS. We are currently backing this server up to tape using LTO-5 tapes with Symantec BackupExec. Since the size of this server has grown, full backups of this server are not completing overnight. Does anyone have any suggestion on how to backup this amount of data? Should we be backing it up to tape? Are there any other options which may be better? | You need to take a step back and stop thinking "I've got 20TB on my NAS I need to back up!" and develop a storage strategy that takes into account the nature of your data: Where is it coming from and how much new data are you getting? (you've got this in your question) How is the data used once you have it? Are people editing the pictures? Do you keep the originals and generate edited versions? How long do you need to keep all the data? Are people still making changes to pictures from 2 years ago? Depending on the answers to the last two questions, you probably need more of a Archiving System than a radically different backup system. Data that is static (e.g. 2 year old pictures that you retain "just in case") doesn't need to be backed up every night, or even every week, it needs to be archived. What you actually do might be more complex, but conceptually, all the old pictures can be written off to tape (multiple copies!) and not backed up any more. Based on your comments, some additional thoughts: Since you keep the originals of each shoot untouched and work on a copy, and assuming that at least some of the original pictures are duds, you might be able to cut the amount of data that needs to be backed up in half. If you still can't finish a full backup within whatever window of time you have, a common way to speed things up is to do a disk-to-disk backup first and then later copy the backup set off to tape. | {
"source": [
"https://serverfault.com/questions/457301",
"https://serverfault.com",
"https://serverfault.com/users/140438/"
]
} |
457,340 | Is it possible to log all IP addresses that trying to connect or connected to port "5901" in Linux Debian? How can i do that? | You could do it using iptables iptables -I INPUT -p tcp -m tcp --dport 5901 -m state --state NEW -j LOG --log-level 1 --log-prefix "New Connection " This will log new tcp connections on port 5901 to /var/log/syslog and /var/log/kernel.log like this Dec 12 07:52:48 u-10-04 kernel: [591690.935432] New Connection IN=eth0 OUT= MAC=00:0c:29:2e:78:f1:00:0c:29:eb:43:22:08:00 SRC=192.168.254.181 DST=192.168.254.196 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=40815 DF PROTO=TCP SPT=36972 DPT=5901 WINDOW=14600 RES=0x00 SYN URGP=0 | {
"source": [
"https://serverfault.com/questions/457340",
"https://serverfault.com",
"https://serverfault.com/users/148918/"
]
} |
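Two small additions to the rule in the entry above, assuming the same Debian-style syslog layout: watching matches as they arrive, and optionally rate-limiting the rule so a port scan cannot flood the logs.

# follow new-connection log entries live
tail -f /var/log/syslog | grep "New Connection"

# the same rule, but limited to at most 6 log lines per minute
iptables -I INPUT -p tcp --dport 5901 -m state --state NEW -m limit --limit 6/min -j LOG --log-prefix "New Connection "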
457,550 | I tried to apply a patch to my file with the following command: patch -p0 < foo.patch I got the following output: bash: patch: command not found I have a CentOS 5.x server. Please advise what to do in this case. | All you need to do is sudo yum install patch | {
"source": [
"https://serverfault.com/questions/457550",
"https://serverfault.com",
"https://serverfault.com/users/142268/"
]
} |
457,718 | Using Powershell v2.0 I want to delete any files older than X days: $backups = Get-ChildItem -Path $Backuppath |
Where-Object {($_.lastwritetime -lt (Get-Date).addDays(-$DaysKeep)) -and (-not $_.PSIsContainer) -and ($_.Name -like "backup*")}
foreach ($file in $backups)
{
Remove-Item $file.FullName;
} However, when $backups is empty I get: Remove-Item : Cannot bind argument to parameter 'Path' because it is null. I've tried: Protecting the foreach with if (!$backups) Protecting the Remove-Item with if (Test-Path $file -PathType Leaf) Protecting the Remove-Item with if ([IO.File]::Exists($file.FullName) -ne $true) None of these seem to work, what if the recommended way of preventing a foreach loop from being entered if the list is empty? | With Powershell 3 the foreach statement does not iterate over $null and the issue described by OP no longer occurs. From the Windows PowerShell Blog post New V3 Language Features : ForEach statement does not iterate over $null In PowerShell V2.0, people were often surprised by: PS> foreach ($i in $null) { 'got here' } got here This situation often comes up when a cmdlet doesn’t return any objects. In PowerShell V3.0, you don’t need to add an if statement to avoid iterating over $null. We take care of that for you. For PowerShell $PSVersionTable.PSVersion.Major -le 2 see the following for original answer. You have two options, I mostly use the second. Check $backups for not $null . A simple If around the loop can check for not $null if ( $backups -ne $null ) {
foreach ($file in $backups) {
Remove-Item $file.FullName;
}
} Or Initialize $backups as a null array. This avoids the ambiguity of the "iterate empty array" issue you asked about in your last question . $backups = @()
# $backups is now a null value array
foreach ( $file in $backups ) {
# this is not reached.
Remove-Item $file.FullName
} Sorry, I neglected to provide an example integrating your code. Note the Get-ChildItem cmdlet wrapped in the array. This would also work with functions which could return a $null . $backups = @(
Get-ChildItem -Path $Backuppath |
Where-Object { ($_.lastwritetime -lt (Get-Date).addDays(-$DaysKeep)) -and (-not $_.PSIsContainer) -and ($_.Name -like "backup*") }
)
foreach ($file in $backups) {
Remove-Item $file.FullName
} | {
"source": [
"https://serverfault.com/questions/457718",
"https://serverfault.com",
"https://serverfault.com/users/139468/"
]
} |
457,796 | I was wondering if it's at all possible to make a Dell Powerconnect 2848 switch show when running internal traceroutes. This would help with diagnosing issues and make it far easier to see where issues occur. According to the datasheet , this particular switch is Layer 2 and 3 aware. I'm not completely sure what that means. Is this possible? | No. The hops shown by traceroute show the path that an IP packet follows on a routed (layer 3) network. Routers will show up, and switches will not. Switches are by their nature a layer 2 device: they receive and forward Ethernet frames, using the destination MAC address to determine the correct destination port. Some switches are also able to function as routers. We call such devices "layer 3 switches." Even a layer 3 switch will not necessarily show up on a traceroute, because much of the traffic passing through such a switch is layer 2 traffic within its own subnet. In any event, the PowerConnect 2848 is not a layer 3 switch. It is "layer 3 aware" for QoS purposes only. | {
"source": [
"https://serverfault.com/questions/457796",
"https://serverfault.com",
"https://serverfault.com/users/41698/"
]
} |
458,553 | I'm trying to install bash as the default shell on a ARM Linux running on an embedded device (Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms: 1) Root has /bin/bash as default shell, and can log in normally via SSH: $ grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash
$ ssh root@NAS
root@NAS's password:
Last login: Sun Dec 16 14:06:56 2012 from desktop
# 2) joeuser has /bin/bash as default shell, and receives "Permission denied" when trying to log in via SSH: $ grep joeuser /etc/passwd
joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash
$ ssh joeuser@localhost
joeuser@NAS's password:
Last login: Sun Dec 16 14:07:22 2012 from desktop
Permission denied, please try again.
Connection to localhost closed. 3) changing joeuser's shell back to /bin/sh: $ grep joeuser /etc/passwd
joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh
$ ssh joeuser@localhost
Last login: Sun Dec 16 15:50:52 2012 from localhost
$ To make things even more strange, I can log in as joeuser using /bin/bash using the serial console (!). Also a su - joeuser as root works fine, so the bash binary itself is working fine. In an act of despair, I changed joeuser's uid to 0 on /etc/passwd, but also didn't work, so it doesn't seem to be something permission related. Seems that bash is doing some extra checking that sshd didn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking - or terminal emulation - that is triggering the SIGCHLD, but only when called via ssh. I already went through every single item on sshd_config, and also put SSHD in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config : LogLevel DEBUG
LoginGraceTime 2m
PermitRootLogin yes
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
ChallengeResponseAuthentication no
UsePAM yes
AllowTcpForwarding no
ChrootDirectory none
Subsystem sftp internal-sftp -f DAEMON -u 000 And here's the output from /usr/syno/sbin/sshd -d , showing the failed attempt of joeuser trying to log in, with /bin/bash as the shell: debug1: Config token is loglevel
debug1: Config token is logingracetime
debug1: Config token is permitrootlogin
debug1: Config token is rsaauthentication
debug1: Config token is pubkeyauthentication
debug1: Config token is authorizedkeysfile
debug1: Config token is challengeresponseauthentication
debug1: Config token is usepam
debug1: Config token is allowtcpforwarding
debug1: Config token is chrootdirectory
debug1: Config token is subsystem
debug1: HPN Buffer Size: 87380
debug1: sshd version OpenSSH_5.8p1-hpn13v11
debug1: read PEM private key done: type RSA
debug1: private host key: #0 type 1 RSA
debug1: read PEM private key done: type DSA
debug1: private host key: #1 type 2 DSA
debug1: read PEM private key done: type ECDSA
debug1: private host key: #2 type 3 ECDSA
debug1: rexec_argv[0]='/usr/syno/sbin/sshd'
debug1: rexec_argv[1]='-d'
Set /proc/self/oom_adj from 0 to -17
debug1: Bind to port 22 on ::.
debug1: Server TCP RWIN socket size: 87380
debug1: HPN Buffer Size: 87380
Server listening on :: port 22.
debug1: Bind to port 22 on 0.0.0.0.
debug1: Server TCP RWIN socket size: 87380
debug1: HPN Buffer Size: 87380
Server listening on 0.0.0.0 port 22.
debug1: Server will not fork when running in debugging mode.
debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9
debug1: inetd sockets after dupping: 4, 4
Connection from 127.0.0.1 port 52212
debug1: HPN Disabled: 0, HPN Buffer Size: 87380
debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11
SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11
debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11
debug1: permanently_set_uid: 1024/100
debug1: MYFLAG IS 1
debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: AUTH STATE IS 0
debug1: REQUESTED ENC.NAME is 'aes128-ctr'
debug1: kex: client->server aes128-ctr hmac-md5 none
SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none
debug1: REQUESTED ENC.NAME is 'aes128-ctr'
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: expecting SSH2_MSG_KEX_ECDH_INIT
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: KEX done
debug1: userauth-request for user joeuser service ssh-connection method none
SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser
debug1: attempt 0 failures 0
debug1: Config token is loglevel
debug1: Config token is logingracetime
debug1: Config token is permitrootlogin
debug1: Config token is rsaauthentication
debug1: Config token is pubkeyauthentication
debug1: Config token is authorizedkeysfile
debug1: Config token is challengeresponseauthentication
debug1: Config token is usepam
debug1: Config token is allowtcpforwarding
debug1: Config token is chrootdirectory
debug1: Config token is subsystem
debug1: PAM: initializing for "joeuser"
debug1: PAM: setting PAM_RHOST to "localhost"
debug1: PAM: setting PAM_TTY to "ssh"
debug1: userauth-request for user joeuser service ssh-connection method password
debug1: attempt 1 failures 0
debug1: do_pam_account: called
Accepted password for joeuser from 127.0.0.1 port 52212 ssh2
debug1: monitor_child_preauth: joeuser has been authenticated by privileged process
debug1: PAM: establishing credentials
User child is on pid 9129
debug1: Entering interactive session for SSH2.
debug1: server_init_dispatch_20
debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384
debug1: input_session_request
debug1: channel 0: new [server-session]
debug1: session_new: session 0
debug1: session_open: channel 0
debug1: session_open: session 0: link with channel 0
debug1: server_input_channel_open: confirm session
debug1: server_input_global_request: rtype [email protected] want_reply 0
debug1: server_input_channel_req: channel 0 request pty-req reply 1
debug1: session_by_channel: session 0 channel 0
debug1: session_input_channel_req: session 0 req pty-req
debug1: Allocating pty.
debug1: session_new: session 0
debug1: session_pty_req: session 0 alloc /dev/pts/1
debug1: server_input_channel_req: channel 0 request shell reply 1
debug1: session_by_channel: session 0 channel 0
debug1: session_input_channel_req: session 0 req shell
debug1: Setting controlling tty using TIOCSCTTY.
debug1: Received SIGCHLD.
debug1: session_by_pid: pid 9130
debug1: session_exit_message: session 0 channel 0 pid 9130
debug1: session_exit_message: release channel 0
debug1: session_by_tty: session 0 tty /dev/pts/1
debug1: session_pty_cleanup: session 0 release /dev/pts/1
Received disconnect from 127.0.0.1: 11: disconnected by user
debug1: do_cleanup
debug1: do_cleanup
debug1: PAM: cleanup
debug1: PAM: closing session
debug1: PAM: deleting credentials Here you have the full output of sshd -dd, together with ssh -vv . Bash: # bash --version
GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi)
Copyright (C) 2007 Free Software Foundation, Inc. The bash binary was cross compiled from source. I also tried using a pre-compiled binary from the Optware distribution , but had the exact same problem. I checked for missing shared libraries using objdump -x , but they're all there. Any ideas what could be causing this " Permission denied, please try again. "? I'm almost diving in the bash source code to investigate, but trying to avoid hours chasing something that may be silly. EDIT: adding more information about bash and the system $ ls -la /bin/bash
-rwxr-xr-x 1 root root 724676 Dec 15 23:57 /bin/bash
$ file /bin/bash
/bin/bash: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.14, stripped
$ uname -a
Linux NAS 2.6.32.12 #2661 Mon Nov 12 23:10:15 CST 2012 armv5tel GNU/Linux synology_88f6282_212+
$ grep bash /etc/shells
/bin/bash
/bin/bash2 | For future reference: after way too many hours researching and debugging this issue, I finally discovered the root cause. The OpenSSH version used by Synology is a highly customized version, that does not behave like the original code. It has lots of hacks and ad-hoc customizations - e.g., additional checking before accepting a login to see if the SSH service is enabled within the web interface, or stripping special chars (;, |, ') from rsync commands, or... wait for it... avoiding regular users to use a shell different than /bin/sh or /bin/ash . Yeah, hard coded within the binary. Here's the piece of code from OpenSSH 5.8p1, as distributed by Synology on their source code (DSM4.1 - branch 2636) , file session.c : void do_child(Session *s, const char *command)
{
...
#ifdef MY_ABC_HERE
char szValue[8];
int RunSSH = 0;
SSH_CMD SSHCmd = REQ_UNKNOWN;
if (1 == GetKeyValue("/etc/synoinfo.conf", "runssh", szValue, sizeof(szValue))) {
if (strcasecmp(szValue, "yes") == 0) {
RunSSH = 1;
}
}
if (IsSFTPReq(command)){
SSHCmd = REQ_SFTP;
} else if (IsRsyncReq(command)){
SSHCmd = REQ_RSYNC;
} else if (IsTimebkpRequest(command)){
SSHCmd = REQ_TIMEBKP;
} else if (RunSSH && IsAllowShell(pw)){
SSHCmd = REQ_SHELL;
} else {
goto Err;
}
if (REQ_RSYNC == SSHCmd) {
pw = SYNOChgValForRsync(pw);
}
if (!SSHCanLogin(SSHCmd, pw)) {
goto Err;
}
goto Pass;
Err:
fprintf(stderr, "Permission denied, please try again.\n");
exit(1);
Pass:
#endif /* MY_ABC_HERE */
...
} As you can imagine, the IsAllowShell(pw) was the culprit: static int IsAllowShell(const struct passwd *pw)
{
struct passwd *pUnPrivilege = NULL;
char *szUserName = NULL;
if (!pw || !pw->pw_name) {
return 0;
}
szUserName = pw->pw_name;
if(!strcmp(szUserName, "root") || !strcmp(szUserName, "admin")){
return 1;
}
if (NULL != (pUnPrivilege = getpwnam(szUserName))){
if (!strcmp(pUnPrivilege->pw_shell, "/bin/sh") ||
!strcmp(pUnPrivilege->pw_shell, "/bin/ash")) {
return 1;
}
}
return 0;
} No wonder why I was experiencing such an odd behavior. Only shells /bin/sh and /bin/ash would be accepted for users different than root or admin . And this regardless of the uid (I had tested also making joeuser uid=0, and it didn't work. Now it's obvious why). Once identified the cause, the fix was easy: just remove the call to IsAllowShell() . It took me a while to get the right configuration to cross-compile openssh and all its dependencies, but it worked well in the end. If anyone is interested in doing the same (or trying to cross-compile other kernel modules or binaries for Synology), here's my version of Makefile . It was tested with OpenSSH-5.8p1 source , and works well with models running Marvell Kirkwood mv6281/mv6282 CPU (like DS212+). I used a host running Ubuntu 12.10 x64. Bottom line: bad practice, terrible code, and a great example of what not to do. I understand sometimes OEMs need to develop special customizations, but they should think twice before digging too deep. Not only this results in unmaintainable code for them, but also creates all sorts of unforeseen issues down the road. Thankfully GPL exist to keep them honest - and open. | {
"source": [
"https://serverfault.com/questions/458553",
"https://serverfault.com",
"https://serverfault.com/users/98715/"
]
} |
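For anyone who would rather not rebuild the patched Synology sshd described in the entry above, a commonly used workaround is to leave the login shell set to /bin/sh, which the modified sshd accepts, and switch to bash from the user's profile. Treat this as an untested sketch; behaviour may vary between DSM versions and it assumes a bash binary already exists at /bin/bash.

# kept in ~joeuser/.profile while /etc/passwd still lists /bin/sh as the shell
if [ -x /bin/bash ]; then
    export SHELL=/bin/bash
    exec /bin/bash
fi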
458,628 | With the servers that mount Infiniband cards, when I use the ifconfig command, I get this warning: Ifconfig uses the ioctl access method to get the full address
information, which limits hardware addresses to 8 bytes.
Because Infiniband address has 20 bytes, only the first 8 bytes
are displayed correctly.
Ifconfig is obsolete! For replacement check ip. Should I quit using ifconfig ? Is it deprecated in favor of the ip command? Or will it be updated in the near future? Note: This question and answers are in regards to GNU/Linux's "major" distributions. It should not be assumed that the information applies to all distributions, and especially not other OSes. | Quoting Thomas Pircher 's website ( cc-by-sa ): ifconfig vs ip The command /bin/ip has been around for some time now. But people continue using the older command /sbin/ifconfig . Let's be clear: ifconfig will not quickly go away, but its newer version, ip , is more powerful and will eventually replace it. The man page of ip may look intimidating at first, but once you get familiar with the command syntax, it is an easy read. This page will not introduce the new features of ip. It rather features a side-by-side comparison if ifconfig and ip to get a quick overview of the command syntax. Show network devices and configuration ifconfig
ip addr show
ip link show Enable a network interface ifconfig eth0 up
ip link set eth0 up A network interface is disabled in a similar way: ifconfig eth0 down
ip link set eth0 down | {
"source": [
"https://serverfault.com/questions/458628",
"https://serverfault.com",
"https://serverfault.com/users/93952/"
]
} |
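Extending the side-by-side list in the entry above with one more everyday task, address assignment; eth0 and the RFC 1918 address are placeholders.

# old style: assign an address with ifconfig
ifconfig eth0 192.168.1.10 netmask 255.255.255.0

# new style: the ip equivalent, in CIDR notation
ip addr add 192.168.1.10/24 dev eth0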
458,645 | I am getting the following error when i try to connect to a remote host. Can't connect to X11 Windows server using "0:0" as the value of the display variable. How do i get past it? Other info: I am running a windows machine and the host is unix based. I have Xming installed. I have given the display command as export DISPLAY=0:0 I have checked X11 option in SSH. | Quoting Thomas Pircher 's website ( cc-by-sa ): ifconfig vs ip The command /bin/ip has been around for some time now. But people continue using the older command /sbin/ifconfig . Let's be clear: ifconfig will not quickly go away, but its newer version, ip , is more powerful and will eventually replace it. The man page of ip may look intimidating at first, but once you get familiar with the command syntax, it is an easy read. This page will not introduce the new features of ip. It rather features a side-by-side comparison if ifconfig and ip to get a quick overview of the command syntax. Show network devices and configuration ifconfig
ip addr show
ip link show Enable a network interface ifconfig eth0 up
ip link set eth0 up A network interface is disabled in a similar way: ifconfig eth0 down
ip link set eth0 down | {
"source": [
"https://serverfault.com/questions/458645",
"https://serverfault.com",
"https://serverfault.com/users/149925/"
]
} |
458,759 | I'm posting this as a BIG CAVEAT to everyone. I know it's not a standard Q&A, but I think this is something every Windows admin should know. There is a very real risk of falling into Big Troubles. Microsoft has recently released Windows Management Framework 3.0 for Windows Server 2008 and Windows Server 2008 R2 systems, which includes some nice things native to Windows Server 2012 (like PowerShell 3.0) and lots of improvements to WMI, WinRM and other management technologies. Windows Update is advertising it as an optional update. Should I install it on my servers? Update: As of 2012-12-19 Microsoft has removed the update from Windows Update after major compatibility issues with various products (including the ones being discussed here) have been reported by multiple users. | Short answer: NO , unless you really need it and you really know what you are doing. WMF 3.0 is known to be not compatible at all with Exchange Server (both 2007 and 2010) , at least until further updates are released for these products; also, although this is not yet officially documented, it has been found to wreak havoc on SharePoint 2010 , and to break Small Business Server 2008/2011 . I've also personally experienced it completely and utterly destroying System Center Configuration Manager 2012 , and breaking both the setup and the Configuration Manager for SQL Server 2008 R2 , which, after its installation, started failing with loud complaints about the WMI service not being available (although it was actually running fine). Last but not least, once WMF 3.0 is installed, it can become very hard to remove it, because its uninstaller has quite a real chance of failing, leaving your servers in an inconsistent state which usually requires a full O.S. reinstall to get them up and running again. Be very, very , very careful with this update. Update: apart from known compatibility issues with various programs, it looks like installing WMF 3.0 can (sometimes? often? always?) completely destroy WMI . Well, this sure explains why nothing seems to work anymore after installing it... | {
"source": [
"https://serverfault.com/questions/458759",
"https://serverfault.com",
"https://serverfault.com/users/6352/"
]
} |
458,958 | I have a lot of subdomains in the main domain xxx.zzz So, for this domain, I can have aaa.xxx.zzz
bbb.xxx.zzz
ccc.xxx.zzz
ddd.xxx.zzz
eee.xxx.zzz
....ETC.... Instead of adding each subdomain to the hosts file, I would like to add only the main domain xxx.zzz and then be able to access all the subdomains. I have tried with *.xxx.zzz but apparently, this will not work (Linux or Windows). Any idea is welcome. Thank you very much. | Wildcards don't work in hosts files. You either have to write them all: w.x.y.z example.com foo.example.com bar.example.com baz.example.com or set up proper DNS | {
"source": [
"https://serverfault.com/questions/458958",
"https://serverfault.com",
"https://serverfault.com/users/831451/"
]
} |
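If "proper DNS" in the entry above sounds heavier than needed for a development box, a lightweight alternative is a local dnsmasq instance, which does support wildcards. The snippet below is a sketch: xxx.zzz is the placeholder domain from the question and 192.168.1.50 stands in for the real address; the machine's resolver must then point at 127.0.0.1.

# /etc/dnsmasq.conf -- resolve xxx.zzz and every subdomain of it to one address
address=/xxx.zzz/192.168.1.50

# apply the change
sudo service dnsmasq restart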
459,083 | How do I get the ldapsearch on Scientific Linux? I am trying to find the ldapsearch client for Scientific Linux but cannot find how to install the client in order to do LDAP queries. | Use yum whatprovides to see what package provides a file. The following was run on SL6.x: $ yum whatprovides */ldapsearch
...
openldap-clients-2.4.23-15.el6.x86_64 : LDAP client utilities
Matched from:
Filename : /usr/bin/ldapsearch | {
"source": [
"https://serverfault.com/questions/459083",
"https://serverfault.com",
"https://serverfault.com/users/147779/"
]
} |
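Once the package from the entry above is installed, a first query could look like the sketch below; the server URI, base DN and filter are placeholders to be replaced with real values.

# install the client utilities, then run a simple (-x) search against a server
sudo yum install openldap-clients
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jdoe)"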
459,229 | Can I put shell commands in the /etc/motd login banner file? I have tried: $(uptime) and `uptime` Is this possible? | /etc/motd is only read and not executed, so technically speaking, you cannot put shell commands in there. However, it's possible to execute a shell script at login time that will have the same result. This is usually achieved by adapting the /etc/profile script that is executed each time a user logs in. A useful practice is to put the command you want to be executed in a script named /etc/motd.sh and call this script from /etc/profile , usually at about the end of it. | {
"source": [
"https://serverfault.com/questions/459229",
"https://serverfault.com",
"https://serverfault.com/users/65061/"
]
} |
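A minimal sketch of the approach described in the entry above, assuming a Bourne-compatible /etc/profile and that dynamic output such as uptime is what is wanted at login.

#!/bin/sh
# /etc/motd.sh -- prints dynamic login information; make it executable with chmod +x
echo "Welcome to $(hostname)"
uptime

# added near the end of /etc/profile:
[ -x /etc/motd.sh ] && /etc/motd.sh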
459,369 | When I browse to this URL: http://localhost:8080/foo/%5B-%5D server ( nc -l 8080 ) receives it as-is: GET /foo/%5B-%5D HTTP/1.1 However when I proxy this application via nginx (1.1.19): location /foo {
proxy_pass http://localhost:8080/foo;
} The same request routed through nginx port is forwarded with path decoded: GET /foo/[-] HTTP/1.1 Decoded square brackets in the GET path are causing the errors in the target server ( HTTP Status 400 - Illegal character in path... ) as they arrive un-escaped. Is there a way to disable URL decoding or encode it back so that the target server gets the exact same path when routed through nginx? Some clever URL rewrite rule? | Quoting Valentin V. Bartenev (who should get the full credit for this answer): A quote from documentation : If proxy_pass is specified with URI , when passing a request to the server, part of a normalized request URI matching the location is replaced by a URI specified in the directive If proxy_pass is specified without URI , a request URI is passed to the server in the same form as sent by a client when processing an original request The correct configuration in your case would be: location /foo {
proxy_pass http://localhost:8080;
} | {
"source": [
"https://serverfault.com/questions/459369",
"https://serverfault.com",
"https://serverfault.com/users/72302/"
]
} |
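To confirm the fix in the entry above, the test from the question can be repeated through the proxy: with nc listening on the backend port, the encoded brackets should now arrive untouched. Port 80 on localhost is assumed for the nginx side.

# backend: capture exactly what nginx forwards
nc -l 8080

# client: request the encoded path through nginx
curl -v 'http://localhost/foo/%5B-%5D'
# expected request line on the nc side: GET /foo/%5B-%5D HTTP/1.1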
459,728 | I'm running PHP-FPM and Nginx, occasionally, for whatever reason, I have to reboot the server. Once the server is running again, the nginx service automatically starts, however, PHP-FPM does not. This can be seen when I run the command sudo /etc/init.d/php-fpm restart immediately after a reboot and get the result: $ sudo /etc/init.d/php-fpm restart
Stopping php-fpm: [FAILED]
Starting php-fpm: [ OK ] Is this expected behaviour? What is the best way to make PHP-FPM automatically start? Is there a config option anywhere, or do I have to add the command to one of the Linux startup scripts? Thanks. | So set it up to start at boot: chkconfig php-fpm on | {
"source": [
"https://serverfault.com/questions/459728",
"https://serverfault.com",
"https://serverfault.com/users/148734/"
]
} |
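Following on from the entry above, it is worth verifying the runlevels afterwards; on newer systemd-based distributions the systemctl form applies instead. Which command is right depends on the init system, so treat the pair below as alternatives rather than a sequence.

# SysV init: enable at boot and verify
chkconfig php-fpm on
chkconfig --list php-fpm

# systemd equivalent on newer distributions
systemctl enable php-fpm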
459,744 | I recently found a DoS Defense setting in my DrayTek Vigor 2830 router, which is disabled as default. I'm running a very small server on this network and I take it very serious to have the server up and running 24/7. I'm a bit unsure if the DoS Defense could cause me any kind of problems. I haven't experienced any DoS attacks yet, but I would like to avoid possible attacks. Is there any reason not to enable the DoS Defense setting? | It means the router has to maintain additional state and do additional work on each packet. And how can it really help in the case of a DoS? All it can do is drop a packet that you have already received. Since you've already received it, it has already done the damage by consuming your inbound Internet bandwidth. | {
"source": [
"https://serverfault.com/questions/459744",
"https://serverfault.com",
"https://serverfault.com/users/150441/"
]
} |
459,757 | I have some servers on the public network of my isp, say 192.168.2.0/24 . Now my provider gives me additional ip addresses, but unfortunately not in a subsequent range of my first network, say 192.168.4.0/24 . I configure the new servers with the new ip range and now they communicate over the default gateway between the subnets, although they are on the same physical network. I add a route to the second network on each server, so they can send packets directly to each other. But if I look in ifconfig, of course I still see only one configured subnet. Are there any downsides of this configuration? What is the difference to the case where I had the consecutive networks 192.168.2.0/24 and 192.168.3.0/24 and could just configure all interfaces in ifconfig with 192.168.2.0/23 and avoid the extra route? I could imagine broadcast behavior is maybe different. If i broadcast a network it would only go to half of my servers in one subnet. Additions: As I read the first answers, I think my question was maybe no clear enough. The servers are all supposed to be on the public network, I do not want to hide them behind some router. They all also have an internal network connection where most traffic is going over. I was just wondering how you would configure multiple subnets on the same network interface and what the difference is between a setup with consecutive subnets and one with non-consecutive subnets. For me the servers are all in the same public network. It is just that the IP assignment of my provider and the configuration options I see in Linux do not really allow me to configure the servers as such. I have to make the separation between both subnets. I can add additional routes, but will it be the same as if I had one consecutive IP range for all servers? | It means the router has to maintain additional state and do additional work on each packet. And how can it really help in the case of a DoS? All it can do is drop a packet that you have already received. Since you've already received it, it has already done the damage by consuming your inbound Internet bandwidth. | {
"source": [
"https://serverfault.com/questions/459757",
"https://serverfault.com",
"https://serverfault.com/users/150401/"
]
} |
460,755 | I have a micro instance on the Amazon EC2 cloud. Although the instance is small and has very low CPU and RAM usage, it
generates a lot of content, so it can be considered like a web server
serving a small number of static files (not of a big size) to many clients. From a technical point of view there is no problem for such an instance to handle the load,
serving many MBs per second. What I'm concerned about is whether there are bandwidth limitations imposed by Amazon itself. Many VPS service providers limit bandwidth to, let's say, 10MB/s; are there such limits at Amazon and, if there are, what are they? I couldn't find any reference. | Remembered that I had bookmarked a similar post a while back, and Cyberx86 posted an excellent answer with benchmark tests :) Serverfault answer Edit From what I've been able to find on the AWS forums - It doesn't seem like the support people from Amazon want to answer that question. Their advice is to test it with an external source: AWS forum post from 2012 Older posts ( post1 , post2 ) refer to transfer speeds in relation to instance size. The 2nd one mentions that the data was a part of the AWS documentation but later it was replaced with stuff about I/O. small 250 mbps large 500 mbps xlarge 1000 mbps These numbers seem to fit with the benchmarks you can find on google. So sadly - I don't think you can find transfer speeds on their site anymore. | {
"source": [
"https://serverfault.com/questions/460755",
"https://serverfault.com",
"https://serverfault.com/users/60717/"
]
} |
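In line with Amazon's advice quoted in the entry above to measure throughput yourself, a simple iperf test between the instance and an external host gives a concrete number. iperf must be installed on both ends, and the hostname below is a placeholder.

# on the external test host
iperf -s

# on the EC2 instance: run a 30-second test against it
iperf -c test-host.example.com -t 30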
460,876 | After recently upgrading Apache2 to version 2.2.31 I found a strange behaviour in SSL VirtualHost setup. A few of the website I'm hosting were showing the certificate for the default host even if the client was Server Name Identification aware, and this happened only with a few of them. This shows as the common Firefox's/Chrome's passport-warning about you being possibly scammed if you're browsing your home banking, but that simply was not the case. To be clear, if server host.hostingdomain.org has its own SSL, attempting to access https://www.hostedsite.org reports certificate for host.hostingdomain.org , but a few https://www.hostedsite.me reported the correct certificate. All sites are hosted on the same IP address, on port 443. The truth is that VirtualHosting works on the HTTP side and redirects SNI-aware clients to SSL automatically, so it's backward compatible with SNI-unaware clients. Examining error logs for the offending VirtualHosts shown the following text [Tue Dec 25 16:02:45 2012] [error] Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile] (/path/to/www.site.org.conf:20) and in fact the vhost was correctly configured with SSLCertificateFile. The question is obvious: how to fix that? | It happens that it could be a bug in the most recent version of Apache. Solution 1: downgrade to the latest stable Solution 2: edit listen.conf Replace Listen *:443 (or Listen 443 according to your setup) with Listen *:443 http Credit | {
"source": [
"https://serverfault.com/questions/460876",
"https://serverfault.com",
"https://serverfault.com/users/64579/"
]
} |
460,880 | In Godaddy's Zone File Editor, I can edit A/CNAME etc. including NS record. I am wondering if I change NS to another Nameserver, will my records, like A/CNAME, be deleted right away? If I have more than two dns records, and they are mixed by different DNS servers,
for example,
ns1.abc.com
ns2.abc.com
ns1.efg.com
ns2.efg.com
will that cause any issue? Thanks! | It happens that it could be a bug in the most recent version of Apache. Solution 1: downgrade to the latest stable Solution 2: edit listen.conf Replace Listen *:443 (or Listen 443 according to your setup) with Listen *:443 http Credit | {
"source": [
"https://serverfault.com/questions/460880",
"https://serverfault.com",
"https://serverfault.com/users/81392/"
]
} |
461,271 | There is a lot of community feeling about what Linux distributions are appropriate for production server environments and which aren't, however, a lot of this feeling seems religiously based, and seldom presented with supporting evidence. Assuming that we were trying to select a Linux distribution to standardize on (because we have an interest in keeping our environments as homogeneous as possible), what criteria are important, and how do you make determinations about how well different distributions meet those criteria? | I currently work in an environment that has used Linux for more than a decade. Everybody in the office uses different distros on their desktops as well as the servers. As such, the choices of distribution tend to revolve around a number of things in no particular order: History - Obviously systems like RedHat and Debian have been around for a long time. As such, the adage "if it ain't broke, don't fix it" can be used for these. Upgrading becomes easier if the software is supported well on a distro. Familiarity - Similar to History, however we all have our favourites. I cut my teeth on Debian, and migrated to Ubuntu (a hard decision at the time because I tend to commit to a community). Conversely, it's a pain to have to remember how to do things on a dozen different distros (not to mention the scratch-built ones). Support - I migrated to Ubuntu mainly because I appreciated what they were doing as far as offering paid support. That was a selling point if ever a client had a concern about running a system long-term. Similar to RedHat's approach (but RPM hell was going on at the time). We have a number of RedHat servers for this reason also. Dependencies - Some softwares are easier to use on some distros simply because the dependent packages are more easily obtainable or buildable. As example of this would be oVirt on RedHat. There are no packages for some softwares on some distros. And you could compile it, but why would you if the package was right there on another distro? Granularity - Distros like Gentoo offer finer control over versioning and software-switch granularity. Other distros have "pinning" in various forms, but that's still not as controllable or reliable. Binding - While it's possible to compile from source on most distros, some distros are better at it than others. This can have an effect, say, if your project patches existing libraries for extended functionality. Prettiness - Some distros are just better-looking. Every geek knows it's just fluff (and you could probably get away with doing it as a web app these days) but some clients are wowed by this stuff, and we all know it. Stability - Some distros stream "stable" versions of software as opposed to "testing", "experimental", etc. This can mean alot if you know that the version you're building on will eventually reach a consensus on stability. You may develop on "experimental" knowing that by the time your project is finished it will have reached "stable" and be good to rely on. Package management - If you're developing something on a daily basis, and it's going to go out to 1000s of machines in one hit, then you probably want something that makes it easy to build, maintain, and track packages across those systems. Consistency - This is more an argument for the same distro. Less mistakes get made (and less errors in security) when people can focus on one distro as opposed to several. 
Predictable release schedule - If you want to be sure that your software stays supported, planned upgrades offer a certain type of stability. Security - Some distros have active security teams whose job it is to respond immediately to genuine security risks in any approved package. Those are just a few things that come off the top of my head regarding reasons why each system was chosen. I don't see any one guiding light or preference of one distro over another in this decision. Diversity and choice can be great and offer you some really good options to get a project started quickly, but it's also the noose that can hang you. Make sure you think ahead of what you're going to need. Plan what the system's needs are as well as when the system is going to be upgraded or retired. Don't assume you'll always be the one maintaining it. | {
"source": [
"https://serverfault.com/questions/461271",
"https://serverfault.com",
"https://serverfault.com/users/14858/"
]
} |
461,385 | I have a volume group (VG) that contains two physical volumes (PV).
Several logical volumes (LV) in the VG are likely to use extents on both PVs. Is there a way to tell which LVs occupy space on which PVs? | The pvdisplay command has a -m option to show the mapping of physical extents to logical volumes and logical extents. I have set up the following situation on a test machine: 3 disks of 1GB each added to the system and used as physical volumes for vg_test 6 logical volumes made with various sizes (ranging from 300M to 1.1G) so that they are spread over the physical volumes Running pvdisplay -m on this machine results in the following output: [root@centos6 ~]# pvdisplay -m
--- Physical volume ---
PV Name /dev/sdb
VG Name vg_test
PV Size 1.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 255
Free PE 5
Allocated PE 250
PV UUID eR2ko2-aKRf-uCfq-O2L0-z6em-ZYT5-23YhKb
--- Physical Segments ---
Physical extent 0 to 74:
Logical volume /dev/vg_test/one
Logical extents 0 to 74
Physical extent 75 to 149:
Logical volume /dev/vg_test/two
Logical extents 0 to 74
Physical extent 150 to 249:
Logical volume /dev/vg_test/four
Logical extents 0 to 99
Physical extent 250 to 254:
FREE
--- Physical volume ---
PV Name /dev/sdc
VG Name vg_test
PV Size 1.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 255
Free PE 10
Allocated PE 245
PV UUID rByjXK-NA6D-ifnY-lKdF-eFWg-Ndou-psGJUq
--- Physical Segments ---
Physical extent 0 to 124:
Logical volume /dev/vg_test/three
Logical extents 0 to 124
Physical extent 125 to 224:
Logical volume /dev/vg_test/five
Logical extents 0 to 99
Physical extent 225 to 244:
Logical volume /dev/vg_test/six
Logical extents 255 to 274
Physical extent 245 to 254:
FREE
--- Physical volume ---
PV Name /dev/sdd
VG Name vg_test
PV Size 1.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 255
Free PE 0
Allocated PE 255
PV UUID TCJnZM-0ss9-o5gY-lgD3-7Kq6-18IH-sN04To
--- Physical Segments ---
Physical extent 0 to 254:
Logical volume /dev/vg_test/six
Logical extents 0 to 254 As you can see, you get a nice overview of where the extents for each of the 6 logical volumes are. | {
"source": [
"https://serverfault.com/questions/461385",
"https://serverfault.com",
"https://serverfault.com/users/18782/"
]
} |
461,394 | Router ip: 192.168.3.1 Windows server: 192.168.3.50 (WLan) and 192.168.2.1 (LAN) PCs: 192.168.2.x I am able to access 192.168.3.50 from laptop (in 3.x network) but unable to access through other interface 192.168.2.1 and its lan pcs 192.168.2.x. I added a route in my router with Destination LAN NET 192.168.2.0, Subnet Mask 255.255.255.0 and Gateway 192.168.3.50 but still unable to access. Do I need to add a route in my Windows server as well? If so what will be the route like? EDIT : - Server and laptop connected to Router through wireless. - All the PCs are connected to the server through a switch. - Server is able to access the internet through WLAN interface (192.168.3.50) - Server and PCs are communicating through LAN interface (192.168.2.x) - I am able to ping 192.168.3.50 from router but not 192.168.2.1 Problem - 3.x machines (in my case laptop) is not able to access 2.x machines. Not even 192.168.2.1 (which is another ip for server). ping 192.168.3.50 works not 192.168.2.1 As @0wn3r suggested I changed the mask to 255.255.0.0 in router and server but still the same problem. | | {
"source": [
"https://serverfault.com/questions/461394",
"https://serverfault.com",
"https://serverfault.com/users/150672/"
]
} |
462,178 | Let's say someone is on the same network as me and spoofs their MAC address to match mine: Is this possible? Can two or more clients with the same MAC address be on the same network at the same time and stay consistently connected? When this happens, will I end up getting deauthenticated and kicked off the network if duplicate MAC addresses aren't allowed on the same network? If duplicate MAC addresses are allowed, what kind of behavior might I encounter? Collisions, race conditions, etc.? | It's possible for two hosts to have the same MAC, due to spoofing, a mistake during manufacturing, or willful negligence on the part of the manufacturer. So, 1) In general, an Ethernet switch keeps a table of which MAC addresses are attached to which ports. It bases this table on the source address of frames it receives during the normal operation of the network. Upon receiving any frame, the source MAC is read and compared with the current switching table, and then added alongside whichever switchport it was received on. So if there are two hosts, both with the same MAC address, then the switch will update it's MAC table every time it receives a frame from either host. The reachability of either host will flap on and off and be inconsistent. 2) Short answer: no. Duplicate MAC addresses will not trigger any sort of security problem in an unmanaged switch (a switch without configuration software), or a managed switch (like most Cisco/HP/Junipers) that has not been configured for port security. Managed switches will give you a warning printed in the console terminal if they detect a duplicate MAC (a MAC that 'exists' on multiple switchports), but by default they won't "do anything" about it AFAIK. If you want to use port security options on a managed switch, you can do stuff like only allow 1 MAC address per switchport. The MAC address will be learned dynamically by the switch (like it usually learns MACs), but the difference is that once it is learned, it is bound to that switchport. Then, if the switch receives frames from a duplicate MAC on another switchport, it can place that port into a disabled state (shut it down.) You mentioned deauthentication in your question. The port security feature of some switches is different that "deauthentication"-- it is deauthorization. They are similar but the difference is important; look up authentication vs. authorization. 3) Duplicate MACs will not cause collisions. Collisions are the result of a shared electrical bus. It is more of a race condition, although I haven't heard it referred to like that before. Remember, duplicate MACs are "allowed", as far as any out-of-the-box Ethernet switch is concerned-- they just cause a problem that will interrupt network connectivity to each host in question. The problem is a constantly changing switching table. | {
"source": [
"https://serverfault.com/questions/462178",
"https://serverfault.com",
"https://serverfault.com/users/71373/"
]
} |
462,664 | When sending emails from Amazon SES, gmail shows "sent via amazonses.com". How do I remove this? According to Google, I'm a sender and I don't want my recipients to see the "via" link. What can I do?
Gmail checks whether emails are correctly authenticated. If your messages are sent by a bulk mailing vendor or by third-party affiliates, please publish an SPF record2 that includes the IPs of the vendor or affiliates which send your messages and sign your messages with a DKIM3 signature that is associated with your domain. I have added both SPF and DKIM records. When looking at the original email, it shows both passed. Received-SPF: pass Authentication-Results: mx.google.com; spf=pass ...; dkim=pass ... Any ideas? | Once DKIM was setup (for help, see this guide ) and verified successfully on my domain I still had to enable it in the AWS console at SES -> Domains -> DKIM Once that was done mails to Gmail no longer show up with the via bounces address. You can see it still shows as mailed by: amazonses.com when you view details of the sender but that's OK since it's true. Importantly, it shows as signed by our domain. Hope that it's as simple as enabling DKIM for you. | {
"source": [
"https://serverfault.com/questions/462664",
"https://serverfault.com",
"https://serverfault.com/users/71804/"
]
} |
462,739 | I have a directory named: -2 I want to cd into it but the cd complains: bash: cd: -2: invalid option With no success, I've tried: cd "-2"
cd '-2'
cd \-2 Any solution? Edit: no file browsers like mc, etc. available on the server. | At least two ways: Use the -- argument. cd -- -2 This uses a convention common to GNU tools which is to not treat anything that appears after -- as a command line option. As a commenter noted, this convention is also defined in the POSIX standard : Default Behavior: When this section is listed as "None.", it means that the implementation need not support any options. Standard utilities that do not accept options, but that do accept operands, shall recognize "--" as a first argument to be discarded. The requirement for recognizing "--" is because conforming applications need a way to shield their operands from any arbitrary options that the implementation may provide as an extension. For example, if the standard utility foo is listed as taking no options, and the application needed to give it a pathname with a leading hyphen, it could safely do it as: foo -- -myfile and avoid any problems with -m used as an extension. as well as : Guideline 10: The argument -- should be accepted as a delimiter indicating the end of options. Any following arguments should be treated as operands, even if they begin with the '-' character. The -- argument should not be used as an option or as an operand. Specify the path explicitly: cd ./-2 This specifies the path explicitly naming the current directory ( . ) as the starting point. cd $(pwd)/-2
cd /absolute/path/to/-2 These are variations on the above. Any number of such variations may be possible; I'll leave it as an exercise to the reader to discover all of them. | {
"source": [
"https://serverfault.com/questions/462739",
"https://serverfault.com",
"https://serverfault.com/users/151851/"
]
} |
462,903 | I would like to run some scripts on hosts which are EC2 instances but I don't know how to be sure that the host is really an EC2 instance. I have made some tests, but this is not sufficient: Test that binary ec2_userdata is available (but this will not always be true) Test availability of " http://169.254.169.254/latest/meta-data " (but will this be always true ? and what is this "magical IP" ?) | First, I felt the need to post a new answer because of the following subtle problems with the existing answers, and after receiving a question about my comment on @qwertzguy's answer . Here are the problems with the current answers: The accepted answer from @MatthieuCerda definitely does not work reliably, at least not on any VPC instances I checked against. (On my instances, I get a VPC name for hostname -d , which is used for internal DNS, not anything with "amazonaws.com" in it.) The highest-voted answer from @qwertzguy does not work on new m5 or c5 instances , which do not have this file. Amazon neglects to document this behavior change AFAIK, although the doc page on this subject does say "... If /sys/hypervisor/uuid exists ...". I asked AWS support whether this change was intentional, see below †. The answer from @Jer does not necessarily work everywhere because the instance-data.ec2.internal DNS lookup may not work. On an Ubuntu EC2 VPC instance I just tested on, I see: $ curl http://instance-data.ec2.internal
curl: (6) Could not resolve host: instance-data.ec2.internal which would cause code relying on this method to falsely conclude it is not on EC2! The answer to use dmidecode from @tamale may work, but relies on you a.) having dmidecode available on your instance, and b.) having root or sudo password-less ability from within your code. The answer to check /sys/devices/virtual/dmi/id/bios_version from @spkane is dangerously misleading! I checked one Ubuntu 14.04 m5 instance, and got a bios_version of 1.0 . This file is not documented at all on Amazon's doc , so I would really not rely on it. The first part of the answer from @Chris-Montanaro to check an unreliable 3rd-party URL and use whois on the result is problematic on several levels. Note the URL suggested in that answer is a 404 page right now! Even if you did find a 3rd-party service that did work, it would be comparatively very slow (compared to checking a file locally) and possibly run into rate-limiting issues or network issues, or possibly your EC2 instance doesn't even have outside network access. The second suggestion in the answer from @Chris-Montanaro to check http://169.254.169.254/ is a little better, but another commenter notes that other cloud providers make this instance metadata URL available, so you have to be careful to avoid false positives. Also it will still be much slower than a local file, I have seen this check be especially slow (several seconds to return) on heavily loaded instances. Also, you should remember to pass a -m or --max-time argument to curl to avoid it hanging for a very long time, especially on a non-EC2 instance where this address may lead to nowhere and hang (as in @algal's answer ). Also, I don't see that anyone has mentioned Amazon's documented fallback of checking for the (possible) file /sys/devices/virtual/dmi/id/product_uuid . Who knew that determining whether you are running on EC2 could be so complicated?! OK, now that we have (most) of the problems with listed approaches listed, here is a suggested bash snippet to check whether you are running on EC2. I think this should work generally on almost any Linux instances, Windows instances are an exercise for the reader. #!/bin/bash
# This first, simple check will work for many older instance types.
if [ -f /sys/hypervisor/uuid ]; then
# File should be readable by non-root users.
if [ `head -c 3 /sys/hypervisor/uuid` == "ec2" ]; then
echo yes
else
echo no
fi
# This check will work on newer m5/c5 instances, but only if you have root!
elif [ -r /sys/devices/virtual/dmi/id/product_uuid ]; then
# If the file exists AND is readable by us, we can rely on it.
if [ `head -c 3 /sys/devices/virtual/dmi/id/product_uuid` == "EC2" ]; then
echo yes
else
echo no
fi
else
# Fallback check of http://169.254.169.254/. If we wanted to be REALLY
# authoritative, we could follow Amazon's suggestions for cryptographically
# verifying their signature, see here:
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html
# but this is almost certainly overkill for this purpose (and the above
# checks of "EC2" prefixes have a higher false positive potential, anyway).
if $(curl -s -m 5 http://169.254.169.254/latest/dynamic/instance-identity/document | grep -q availabilityZone) ; then
echo yes
else
echo no
fi
fi Obviously, you could expand this with even more fallback checks, and include paranoia about handling e.g. a false positive from /sys/hypervisor/uuid happening to start with "ec2" by chance and so on. But this is a good-enough solution for illustration purposes and probably nearly all non-pathological use-cases. [†] Got back this explanation from AWS support about the change for c5/m5 instances: The C5 and M5 instances use a new hypervisor stack and the associated kernel drivers do not create files in sysfs (which is mounted at /sys) as the Xen drivers used by the other/older instance types do . The best way to detect whether the operating system is running on an EC2 instance is to account for the different possibilities listed in the documentation you linked . | {
"source": [
"https://serverfault.com/questions/462903",
"https://serverfault.com",
"https://serverfault.com/users/151544/"
]
} |
463,366 | So, say I get disconnected from an SSH-session after I've started rsync or cp or any other command that can be long running. Does that command keep running until it's finished after I get disconnected or does it just get killed? Always wondered this. | Edit for 2016: This Q&A predates the systemd v230 debacle . As of systemd v230, the new default is to kill all children of a terminating login session, regardless of what historically valid precautions were taken to prevent this. The behavior can be changed by setting KillUserProcesses=no in /etc/systemd/logind.conf , or circumvented using the systemd-specific mechanisms for starting a daemon in userspace. Those mechanisms are outside the scope of this question. The text below describes how things have traditionally worked in UNIX designspace for longer than Linux has existed. They will get killed, but not necessarily immediately. It depends on how long it takes for the SSH daemon to decide that your connection is dead. What follows is a longer explanation that will help you understand how it actually works. When you logged in, the SSH daemon allocated a pseudo-terminal for you and attached it to your user's configured login shell. This is called the controlling terminal. Every program you start normally at that point, no matter how many layers of shells deep, will ultimately trace its ancestry back to that shell. You can observe this with the pstree command. When the SSH daemon process associated with your connection decides that your connection is dead, it sends a hangup signal ( SIGHUP ) to the login shell. This notifies the shell that you've vanished and that it should begin cleaning up after itself. What happens at this point is shell specific (search its documentation page for "HUP"), but for the most part it will start sending SIGHUP to running jobs associated with it before terminating. Each of those processes, in turn, will do whatever they're configured to do on receipt of that signal. Usually that means terminating. If those jobs have jobs of their own, the signal will often get passed along as well. The processes that survive a hangup of the controlling terminal are ones that either disassociated themselves from having a terminal (daemon processes that you started inside of it), or ones that were invoked with a prefixed nohup command. (i.e. "don't hang up on this") Daemons interpret the HUP signal differently; since they do not have a controlling terminal and do not automatically receive a HUP signal, it is instead repurposed as a manual request from the administrator to reload the configuration. Ironically this means that most admins don't learn the "hangup" usage of this signal for non-daemons until much, much later. That's why you're reading this! Terminal multiplexers are a common way of keeping your shell environment intact between disconnections. They allow you to detach from your shell processes in a way that you can reattach to them later, regardless of whether that disconnection was accidental or deliberate. tmux and screen are the more popular ones; syntax for using them is beyond the scope of your question, but they're worth looking into. It was requested that I elaborate on how long it takes for the SSH daemon to decide that your connection is dead. This is a behavior which is specific to every implementation of a SSH daemon, but you can count on all of them to terminate when either side resets the TCP connection. 
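To make the nohup and terminal-multiplexer options above concrete, here is a minimal sketch (the script and session names are made up for illustration):
nohup ./long-job.sh > long-job.log 2>&1 &   # the job keeps running and ignores the hangup
tmux new -s work                            # or: start a named tmux session and run the job in it
# detach with Ctrl-b d, disconnect, log back in later, then:
tmux attach -t work
With nohup the job simply ignores the hangup signal; with tmux or screen the job's controlling terminal belongs to the multiplexer rather than to your SSH login session, so the hangup never reaches it.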
That reset, and the resulting termination, will happen quickly if the server attempts to write to the socket and the TCP packets are not acknowledged, or slowly if nothing is attempting to write to the PTY. In this particular context, the factors most likely to trigger a write are: A process (typically the one in the foreground) attempting to write to the PTY on the server side. (server->client) The user attempting to write to the PTY on the client side. (client->server) Keepalives of any sort. These are usually not enabled by default, either by the client or the server, and there are typically two flavors: application level and TCP based (i.e. SO_KEEPALIVE ). Keepalives amount to either the server or the client infrequently sending packets to the other side, even when nothing would otherwise have a reason to write to the socket. While this is typically intended to skirt firewalls that time out connections too quickly, it has the added side effect of causing the sender to notice when the other side isn't responding that much more quickly. The usual rules for TCP sessions apply here: if there is an interruption in connectivity between the client and server, but neither side attempts to send a packet during the problem, the connection will survive provided that both sides are responsive afterwards and receiving the expected TCP sequence numbers. If one side has decided that the socket is dead, the effects are typically immediate: the sshd process will send HUP and self-terminate (as described earlier), or the client will notify the user of the detected problem. It's worth noting that just because one side thinks the other is dead does not mean that the other has been notified of this. The orphaned side of the connection will typically remain open until either it attempts to write to it and times out, or receives a TCP reset from the other side. (if connectivity was available at the time) The cleanup described in this answer only happens once the server has noticed. | {
"source": [
"https://serverfault.com/questions/463366",
"https://serverfault.com",
"https://serverfault.com/users/44138/"
]
} |
463,550 | How do you change the name and description of a security group in AWS EC2? My security group is named quick-start-1 (the default) and I want to change it to " HTTP, HTTPS and Limited SSH ". | It's not possible to rename a security group, by GUI or by API. For VPC EC2 instances You can dynamically assign security groups assigned to VPC EC2 instances. Create a new SG with the desired name and the same rules. EC2 classic instances It's not possible to change the security group that is assigned to EC2 classic instances. If you must change the security group for an EC2 classic instance, then you need to: Create an AMI from your instance, then Launch a new copy of your instance from the AMI created in step #1, selecting the new security group at launch time. | {
"source": [
"https://serverfault.com/questions/463550",
"https://serverfault.com",
"https://serverfault.com/users/150075/"
]
} |
463,604 | I am trying to install .Net 3.5 on Windows Server 2012 and it constantly keeps failing. I am using "Add or Remove Features" and my Internet is already there. I've read that if alternate source couldn't be found, the installer tries to download online and installs it from there. However, it's not working. This is the screenshot that I keep seeing: Please suggest what am I missing? Edit: I already tried using dism.exe /online /enable-feature /featurename:NetFX3 /Source:D:\sources\sxs /all but I do not have the source disk with me. I want to download it online. | This behavior can also be caused by a system administrator who
configures the computer to use Windows Server Update Services (WSUS)
instead of the Microsoft Windows Update server for servicing. http://support.microsoft.com/kb/2734782 This worked for me. Windows has to download the 3.5 installation files, but the server is configured not to use Windows Update (common for managed servers), but WSUS. The above article describes how to fix this. In a nutshell: Start the Local Group Policy Editor or Group Policy Management Console ( WIN + R and type gpedit.msc ). Expand Computer Configuration, expand Administrative Templates, and then select System. Open the Specify settings for optional component installation and component repair Group Policy setting, and then select Enabled. Select the Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS) checkbox. Make sure Windows Updates Service is set to Manual or Automatic to apply this fix. Our default images are set to disabled, and the issue continued until that seemingly obvious change was made. | {
"source": [
"https://serverfault.com/questions/463604",
"https://serverfault.com",
"https://serverfault.com/users/124019/"
]
} |
463,615 | I have a Dell T300 with a Perc 6i, to which 3 500Gb drives are connected. The are assembled into a raid 5 ~1000Gb VD. This setup is working fine, however system is filling up and I would like to increase capacity. My idea was something along the following lines: Add a new 2Tb drive as a hot spare to the array Remove one of the 500Gb drives, so the 2Tb drive will replace it Once reconstruction is finished, repeat the process until there only are 2Tb drives in the array Expand the array size to fill the 2Tb drives, making the VD about 4Tb. However, looking in the docs, I cannot find any information about point 4. So is the perc 6i able to expand a raid5 array? How? Or should I proceed completely differently to achieve my goal? | | {
"source": [
"https://serverfault.com/questions/463615",
"https://serverfault.com",
"https://serverfault.com/users/152327/"
]
} |
463,811 | I'm trying to clone/pull a repository in another PC using Ubuntu Quantal. I have done this on Windows before but I don't know what is the problem on ubuntu. I tried these: git clone file:////pc-name/repo/repository.git
git clone file:////192.168.100.18/repo/repository.git
git clone file:////user:pass@pc-name/repo/repository.git
git clone smb://c-pc/repo/repository.git
git clone //192.168.100.18/repo/repository.git Always I got: Cloning into 'intranet'...
fatal: '//c-pc/repo/repository.git' does not appear to be a git repository
fatal: The remote end hung up unexpectedly or fatal: repository '//192.168.100.18/repo/repository.git' does not exist More: The other PC has username and password Is not networking issue, I can access and ping it. I just installed git doing apt-get install git (dependencies installed) I'm running git from the terminal (I'm not using git-shell) What is causing this and how to fix this? Any help would be great! UPDATE I have cloned the repo on Windows using git clone //192.168.100.18/repo/intranet.git without problems. So, the repo is accessible and exist! Maybe the problem is due user credentials? | It depends on how you have your server configured to serve content. If over ssh: git clone [email protected]:repo/repository.git or if a webserver is providing the content (http or https) https://[email protected]/repo/repository.git or if available via a file path: git clone file://path/to/repo or if the server is running the git daemon: git clone git://192.168.100.18/repo | {
"source": [
"https://serverfault.com/questions/463811",
"https://serverfault.com",
"https://serverfault.com/users/144709/"
]
} |
463,993 | I use Fedora 17, and when I set up nginx with uwsgi using a unix domain socket, placing the socket in a directory with proper permissions works fine, but placing the socket in /tmp causes this nginx error: connect() to unix:/tmp/MySite.sock failed (2: No such file or directory) while connecting to upstream The file does exist and has read/write permission for the nginx user. But what causes this error? It's really driving me crazy; can somebody figure it out? | You can't place sockets intended for interprocess communication in /tmp . For security reasons, recent versions of Fedora use namespaced temporary directories , meaning every service sees a completely different /tmp and can only see its own files in that directory. To resolve the issue, place the socket in a different directory, such as /run (formerly known as /var/run ). | {
"source": [
"https://serverfault.com/questions/463993",
"https://serverfault.com",
"https://serverfault.com/users/152377/"
]
} |
464,018 | Is there a way to run a PowerShell prompt with elevated privileges from a command line in Server 2012? The problem is this is 'Minimal Server Interface' mode without the full server GUI installed, so I can run PowerShell only from either the command prompt or from Server Manager. I am actually trying to run the command:
Enable-ServerManagerStandardUserRemoting
but although this appears to work it does not add the user in question to the various groups as it is supposed to do. I suspect it is not working properly because I am not running it from a fully elevated powershell prompt, just a standard prompt but as Administrator. Thanks,
Nick | Sure... works on Windows 7+, too. Open Powershell first: Type PowerShell to enter a PowerShell session. Once in the session: Type Start-Process PowerShell -Verb RunAs and press Enter. That will open a new Powershell process as Administrator. ------- OR ------- To do it all with only one line from the command prompt, just type: powershell -Command "Start-Process PowerShell -Verb RunAs" | {
"source": [
"https://serverfault.com/questions/464018",
"https://serverfault.com",
"https://serverfault.com/users/123729/"
]
} |
465,473 | I have a domain controller with Windows Server 2012 on it. After updates, the server does not reboot immediately. However if I remote into the server I will be presented with a countdown for a reboot. The only options are to restart now or to close the notification. However the countdown still continues and the server eventually reboots without my permission. How can I stop this from occurring? | There is a Local Group Policies you can set to disable the automatic restarts. This should only be done on Windows Servers assuming a sysadmin is going to RDP into the server on a regular schedule and install updates and restart the server (see Patch Tuesday ). Press Windows Key+R to open the run prompt. Type "gpedit.msc" and press enter. In the "Local Group Policy Editor", navigate to Computer Configuration > Administrative Templates > Windows Components > Windows Update. Enable the "Configure Automatic Updates" policy and set it to "2". Enable the "No auto-restart with logged on users for scheduled automatic updates installations" policy. | {
"source": [
"https://serverfault.com/questions/465473",
"https://serverfault.com",
"https://serverfault.com/users/68204/"
]
} |
465,511 | I can't seem to find a consensus on what the differences are between the two. Roaming Profiles, Folder redirection or... both is one example. The top answer doesn't answer the question as to what data isn't shared if not using roaming profiles. What is the difference between roaming profile and folder redirection? What data "roams" with roaming profiles that doesn't roam with folder redirection? Why is it a bad idea to redirect AppData? What are the consequences of not redirecting this folder should a user log onto the domain with a different machine? Thanks for any insight. | What is the difference between roaming profile and folder redirection? At the most basic level, a Windows user profile is the entirety of the directories and files within the directories that contain user-specific data (a very basic way to look at it is the profile is anything and everything contained within the c:\users\username directory) as well as the various registry entries that contain user specific settings within the HKCU registry hive. A pure roaming profiles implementation will COPY the data from the entire user profile from a fileshare to a system on user logon and copy data for the entire user profile back to the fileshare on logoff. In cases where a user who has roaming profiles enabled logins to multiple systems and makes conflicting changes to the same file in their profile, the last logoff/write will win. As users start saving things to their my documents folder, saving pictures off their camera, uploading their iTunes libraries (these things never happen in an enterprise environment, right? :), the size of the user profile data being copied back and forth can start to cause long delays and increase the time it takes during both user login and user logoff. What data "roams" with roaming profiles that doesn't roam with folder redirection? Folder redirection provides a mechanism to point specific folders (My Docs/AppData/Pictures/etc) within the user profile to a fileshare. If a user logins into multiple systems and has folder redirection applied on all systems, his My documents on all systems would point back to the same fileshare location regardless of which machine he logs into. Note that the use of badly written applications that hard code a path (as opposed to reading the registry or querying windows for the proper location) into their application may NOT work correctly with folder redirection. Data that "roams" with roaming profiles would include such things like Outlook profile Settings, Desktop wallpaper settings, screen saver settings, explorer view settings, installed/default printers, etc..). Folder redirection would not account for these things as it does not account for any data contained in folders that cannot be redirected (appdata\local, etc), or account for any settings contained in the HKCU registry hive. Why is it a bad idea to redirect AppData? What are the consequences of not redirecting this folder should a user log onto the domain with a different machine? First, a note, that only the Appdata\Roaming folder is redirected. The Appdata\Local and Appdata\LocalLow folders are not redirected. Redirecting the AppData folder is a mixed bag and the user experience depends largely on the applications being used. In a redirected folder solution, all the I/O to the Appdata\Roaming folder can cause performance issues (impacting file servers, network, and the system being used) with folder redirection as it would need to read/write that data over the network to the fileshare. 
In addition, if an application is being used on multiple systems and requires a file lock on the same file, folder redirection may not work, as there is only a single copy on the file server that can be accessed and locked. All that being said, you start with application profiling, and unless there are serious indications of possible performance issues, I usually would recommend starting with redirecting AppData and watching for performance issues. There are some tools (Citrix Profile Manager and other profile management tools) that provide methods to be more granular in the folders being copied vs redirected within AppData. | {
"source": [
"https://serverfault.com/questions/465511",
"https://serverfault.com",
"https://serverfault.com/users/38936/"
]
} |
465,523 | I recently switch from a Time Warner 10/Mbps cable connection to a Verizon 75/Mbps fiber optics connection (FIOS) and have noticed some interesting things speed-wise. I use a program called GrabIt to download items from newsgroups which basically just downloads multiple parts in parallel. I've noticed that with my new FIOS connection when I have 50 files downloading in parallel my speed maxes out at around 10 megabyes per second, which is very fast and perfectly fine. But once it is downloading only one part the speed will drop to around 200 kilobytes per second, which is very very slow to say the least. Previously with my cable connection the number of parts being downloaded never affected the rate at which the download was occurring. Any idea what might cause this? | | {
"source": [
"https://serverfault.com/questions/465523",
"https://serverfault.com",
"https://serverfault.com/users/89547/"
]
} |
465,528 | I've got a batch machines to update this week, but I'm not quite confident with our established procedure. It basically runs like so for each machine: Mount a shared directory particular to the OS version/bitness, ie: mount -t cifs //server/share/rhel5.3-64/ /mnt/updates/ yum update --downloadonly --downloaddir=/mnt/updates/ yum update /mnt/updates/*.rpm We use the mount to reduce the amount of network bandwidth we use up, but since each machine might have wildly different package sets installed there will be packages included in the 'update' command that are not even present on the system, as well as there being multiple older versions of certain packages. Is this a problem? Will yum skip/remove any unnecessary/obsolete packages before applying the changes? edit After reading @aaron-copley's response I decided to do a bit of testing. I logged onto the server, mounted the share, ran yum update --downloadonly --downloaddir=/mnt/updates/ , unmounted the share, did a yum clean all , remounted, and re-ran the command. Nothing downloaded. [yay] I deleted an rpm, ran the command again, and only that one package downloaded. [also yay] I mounted the share on another box running the same RHEL version, ran yum update --downloadonly --downloaddir=/mnt/updates/ , and even though it's marked 221 packages for download, it's only downloading the 30 that were not already in the share. [super yay] As a bonus, yum also lists the packages that are already downloaded in bold. | | {
"source": [
"https://serverfault.com/questions/465528",
"https://serverfault.com",
"https://serverfault.com/users/140324/"
]
} |
465,572 | I have set up a few websites on IIS8 all using the same wildcard SSL certificate. Some of the sites need to be accessible to older browsers and operating systems, therefore I cannot use the "Require Server Name Indication" option. Since SNI is not supported by all devices, IIS is showing the following alert: "No default SSL site has been created. To support browsers without SNI capabilities, it is recommended to create a default SSL site." How do I create a default SSL site? The closest article I found is not very clear, and I have the feeling that there must be an easier solution. Server details: Windows Server 2012, IIS8, One external IP address | You could choose any of the websites hosted in IIS and uncheck SNI (Server Name Indication) there. Check this below | {
"source": [
"https://serverfault.com/questions/465572",
"https://serverfault.com",
"https://serverfault.com/users/99115/"
]
} |
465,574 | We are planning to perform an Rolling Pool Upgrade using the "automatic method" described in the Citrix XenServer ® 6.1.0 Installation Guide (at http://support.citrix.com/servlet/KbServlet/download/32308-102-691301/installation.pdf ) In order to use the "Automatic Mode" (to avoid installing with media at each host), we attempted to set up a local HTTP repository (or mirror) with the contents of the ISO. We chose this method because we have no NFS or FTP services in place, currently. Because it was handy, I added a virtual directory (named "media") to an existing IIS web instance and enabled Directory Browsing (mostly for troubleshooting). Then I extracted the ISO into a sub-directory (named "xenserver-6.1") and verified the directory listing matched the contents of the ISO. At this point I thought I was ready and I performed a test install using HTTP as the method and the address (" http://servername/media/xenserver-6/1 "). When this test failed, I began researching the required contents of the directory and several other theories. | | {
"source": [
"https://serverfault.com/questions/465574",
"https://serverfault.com",
"https://serverfault.com/users/82419/"
]
} |
465,607 | I can't notice any difference if in my config file I set fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Or: fastcgi_param SCRIPT_FILENAME $request_filename; What do they do respectively? Is one of the two better than the other? Thanks in advance. | Here's what the documentation says: $request_filename This variable is equal to path to the file for the current request, formed from directives root or alias and URI request; $document_root This variable is equal to the value of directive root for the current request; $fastcgi_script_name This variable is equal to the URI request or, if if the URI concludes with a forward slash, then the URI request plus the name of the index file given by fastcgi_index. It is possible to use this variable in place of both SCRIPT_FILENAME and PATH_TRANSLATED, utilized, in particular, for determining the name of the script in PHP. As written here, there's at least a difference when using fastcgi_index or fastcgi_split_path_info . Maybe there are more ... that's what I know of right now. Example You get the request /info/ and have the following configuration: fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /home/www/scripts/php$fastcgi_script_name; SCRIPT_FILENAME would equal /home/www/scripts/php/info/index.php , but using $request_filename it would just be /home/www/scripts/php/info/ . The configuration of fastcgi_split_path_info is important as well. See here for further help: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_split_path_info | {
"source": [
"https://serverfault.com/questions/465607",
"https://serverfault.com",
"https://serverfault.com/users/123313/"
]
} |
465,722 | I'm in the process of installing a client's ASP.net site on a Windows Server 2003 box running Sql 2008. The site uses Report Viewer 2012, but when I attempted to install it on the server, I got the message "CLR Types for Sql Server 2012" were missing. Does anyone know if it be possible to install the 2012 CLR types alongside SQL 2008, and without Sql 2012? Many thanks. | “Microsoft® System CLR Types for Microsoft® SQL Server® 2012” can be downloaded at Microsoft® SQL Server® 2012 Feature Pack (direct links: X86 version , X64 version ). And it seems to have fixed the installation problem for us; the server we needed to install the Report Viewer 2012 on does not have SQL Server installed at all. | {
"source": [
"https://serverfault.com/questions/465722",
"https://serverfault.com",
"https://serverfault.com/users/143073/"
]
} |
465,799 | If I do this: */9 * * * * /path/to/wotnot At what times will the task run in two hours, starting at 09h00 Is it A: 09h00
09h09
09h18
09h27
09h36
09h45
09h54
10h03
10h12
10h21
10h30
10h39
10h48
10h57 or B: 09h00
09h09
09h18
09h27
09h36
09h45
09h54
10h00
10h09
10h18
10h27
10h36
10h45
10h54 | When looking at a range, you interpret it within only that column, so '*/9' within the minutes column means "list every minute, then select every ninth value". This selection resets at the top of the hour, so you restart at xx:00, xx:09, xx:18, etc every hour. It can also be read as "every nine minutes of every hour", implying the reset at the top of the hour. So the actual behavior you'll see corresponds to option B. | {
"source": [
"https://serverfault.com/questions/465799",
"https://serverfault.com",
"https://serverfault.com/users/60719/"
]
} |
465,833 | Can someone please tell me where to find the SSHD log on RedHat and SELinux.... I would like to view the log to see who is logging into my account.. | Login records are usually in /var/log/secure. I don't think there is a log specific to the SSH daemon process, unless you've broken it out from other syslog messages. | {
"source": [
"https://serverfault.com/questions/465833",
"https://serverfault.com",
"https://serverfault.com/users/150591/"
]
} |
466,118 | I'm running this command in a bash shell on Ubuntu 12.04.1 LTS. I'm attempting to remove both the [ and ] characters in one fell swoop, i.e. without having to pipe to sed a second time. I know square brackets have special meaning in a regex so I'm escaping them by prepending with a backslash. The result I was expecting is just the string 123 but the square brackets remain and I'd love to know why! ~$ echo '[123]' | sed 's/[\[\]]//'
[123] | This is easy, if you follow the manual carefully: all members inside a character class lose special meaning (with a few exceptions). And ] loses its special meaning if it is placed first in the list. Try: $ echo '[123]' | sed 's/[][]//g'
123
$ This says: inside the outer [ brackets ], replace any of the included characters, namely: ] and [ replace any of them by the empty string — hence the empty replacement string // , replace them everywhere ( globally ) — hence the final g . Again, ] must be first in the class whenever it is included. | {
"source": [
"https://serverfault.com/questions/466118",
"https://serverfault.com",
"https://serverfault.com/users/57254/"
]
} |
466,155 | Many applications allow me to connect to Mysql using a username, password host and port. Some allow me to configure a socket instead of the host:port . Is there any clear benefit of one over the other? I can imagine that a socket only works when MySQL is on the same machine. Is that so? And if so, are there benefits over using that socket instead of connecting to localhost:3306 ? I am not too familiar with the ins- and outs of networking and sockets, so maybe I am completely missing some crucial information and my question is just plain stupid; if so, could you explain what I am missing? | Well, it's simple. Socket is a file based communication, and you can't access the socket from another machine. On the other hand, ports are open to the world (depends on configuration) and you can access the mysql from other machine using host+port combination. Also, as much I understand sockets, they are just combination of host+port, just in the file format. So, I don't see any clear benefit in using any of them (as much my knowledge goes). Though I personally prefer using host+port, as my code becomes more flexible, as I can move it to the other machine, without changing much. Copy pasting from some old post : Unix sockets are a little bit faster as you don't have the
tcp-overhead. If you realize this performance loss is a question of
server load. If you don't have very high server load you won't
recognize it. If you use Jails (FreeBSD) or some other virtualisation technology to
separate the e.g. MySQL-Server from the Webserver, you often use the
tcp/ip setup instead of sockets. The firewall rules need to restrict
the access though. You need to find out if your system is under heavy load so that a
socket is a must or you can focus on a nice system design (separating
services), then a tcp/ip solution would be better. So make a long answer short: Yes, there is a performance difference, sockets are faster. If you are
not suffering high server load, just choose what fits better to your
system's design. | {
"source": [
"https://serverfault.com/questions/466155",
"https://serverfault.com",
"https://serverfault.com/users/60697/"
]
} |
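To make the socket-versus-TCP distinction above concrete, here is how the stock mysql client can be forced onto one transport or the other (the socket path and the appuser account are assumptions; check the socket setting in your my.cnf):
mysql --protocol=SOCKET --socket=/var/run/mysqld/mysqld.sock -u appuser -p   # local only, no TCP involved
mysql --protocol=TCP --host=127.0.0.1 --port=3306 -u appuser -p              # TCP, also works from remote hosts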
466,266 | We are in the process of moving a website from a machine with Windows Server 2008 R2/IIS 7.5 to a machine with Windows Server 2012/IIS 8.0 as we want to take advantage of the new SNI feature. This website has an SSL through Go-Daddy, so we went through their site to re-key the SSL for this new server and download the corresponding files and followed their instructions found here for IIS 7.0 since they don't have any available for IIS 8.0. The problem that we are experiencing is that when we try to "Complete the Certificate Request" in IIS, it gives us an error message of "Failed to Remove Certificate" - we are not sure what certificate it is trying to remove. In comparing them to Microsoft's instructions found here , we noticed during the import process when following Go-Daddy's instructions, it wants you to import the certificate into the "Intermediate Certification Authorities" directory which then places it in the Personal certificate store - but Microsoft's instructions say to import the certificate into the new Web Hosting certificate store. Not sure if this may be part of the issue... - UPDATE We thought maybe it was something to do with the certificate GoDaddy was issuing so we bought a brand new certificate for a different website from Thawte - however we are still getting the same error of "Failed to Remove Certificate". | I ran into the same issue with a GoDaddy SSL certificate on Windows 2012 / IIS 8. What worked in my case, after getting the "Failed to Remove Certificate" error, was this: I have tried adding it again, this time getting an "Access Denied" error. I have also tried adding it to the "Personal" store instead of "Web Hosting" but same "Access Denied" error appeared so I went back to the Certificates snap-in using MMC and found the certificate was already there - under Certificates (Local Computer) / Personal Instead of doing the export/import thing that Scott suggested, I simply tried dragging the certificate down to Certificates (Local Computer) / Web Hosting node - and, surprisingly, it worked After doing the above, I went back to IIS Manager and was able to use the certificate in the site bindings right away | {
"source": [
"https://serverfault.com/questions/466266",
"https://serverfault.com",
"https://serverfault.com/users/153022/"
]
} |
466,683 | SSL certificates by default have line breaks after 67 characters. I'm trying to create SSL certificate files using Chef. Essentially I want to create the entire certificate file from a string variable without any line breaks. I've tried this a few times to no avail (Apache complains about not being able to find certificate). I don't see why line breaks in an SSL cert would be necessary. Any ideas if it's possible to have a cert without any line breaks in the file? | No, the certificate won't be handled properly without the line breaks - software will fail to parse it. If you're trying to pass it in a string, why not just include them in it? ( \n ) | {
"source": [
"https://serverfault.com/questions/466683",
"https://serverfault.com",
"https://serverfault.com/users/75925/"
]
} |
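Following on from the answer above, a common trick for turning a multi-line PEM file into a single string with literal \n sequences (for example, to drop into a Chef attribute or template variable) is an awk one-liner; the filename is a placeholder:
awk 'NF {sub(/\r/, ""); printf "%s\\n", $0}' mycert.pem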
467,756 | I am using iptables rules to filter and manipulate packets on my Ubuntu server,
but I cannot understand the mangle table. Quoting from this iptables tutorial : This table should as we've already noted mainly be used for mangling packets. In other words, you may freely use the mangle matches etc that could be used to change TOS (Type Of Service) fields and so on. You are strongly advised not to use this table for any filtering; nor will any DNAT, SNAT or Masquerading work in this table. Can anybody describe to me the mangle table, and provide some examples to understand when I should use it? | Further to the other good answers, I recently had to use the mangle table to adjust for MTU (maximum transmission unit) discrepancies caused by traffic being brought through PPPoE, PPP, and ATM, each of which adds overhead that reduces the payload available for IP from the usual 1500 bytes of an Ethernet frame. Systems on each end of the pipe, as is normal, would have their MTU at the regular default of 1500 and so they would try to send IP frames that large. Since the actual payload size available was smaller, this would have caused packet fragmentation, except that often the sender will request that packets not be fragmented, and as such they end up getting dropped entirely. In an ideal world, path MTU discovery would have allowed the endpoints to adjust their MTU down as needed, but this discovery depends upon ICMP, and networks outside of my control were often configured to drop ICMP for security reasons. The only choice was to use packet mangling in my router in order to modify TCP SYN packets to lower the maximum segment size at the transport layer: iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452 This sort of thing is messy and ideally should be avoided, but I had no other options and this did solve the problem. Hope these examples help, as well as the man page. | {
"source": [
"https://serverfault.com/questions/467756",
"https://serverfault.com",
"https://serverfault.com/users/152439/"
]
} |
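Beyond the TCPMSS clamp shown in the answer above, the more everyday use of the mangle table is tagging packets, e.g. setting a DSCP class or a firewall mark; the values below are arbitrary examples, not a recommendation:
iptables -t mangle -A OUTPUT -p tcp --dport 22 -j DSCP --set-dscp-class cs4   # prioritise outbound SSH at the DSCP level
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 0x1     # mark HTTP so a later routing rule or tc filter can match it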
467,778 | I have a domain, example.co.uk , and it has its nameservers registered with the registrar as current-nameserver.co.uk . It has a TXT record with value ORIGINAL . I decide to change DNS providers to new-nameserver.co.uk . As a test, I set the TXT record to have the value NEW . The idea to test the new server is lookup the TXT record and see what is returned, However, I try: dig @new-nameserver.co.uk example.co.uk TXT Despite trying numerous combinations of command, the value ORIGINAL is always returned. Why is this? And how can I preventa DNS server from providing an authoritative answer, as it appears it is aware that is not part of the normal chain as not registered with the parent nameserver. Is there a command line option available, or is overiding the root nameserver (as in Testing nameserver configuration using it ) the only option? | Further to the other good answers, I recently had to use the mangle table to adjust for MTU (maximum transmission unit) discrepancies caused by traffic being brought through PPPoE, PPP, and ATM, each of which adds overhead that reduces the payload available for IP from the usual 1500 bytes of an Ethernet frame. Systems on each end of the pipe, as is normal, would have their MTU at the regular default of 1500 and so they would try to send IP frames that large. Since the actual payload size available was smaller, this would have caused packet fragmentation, except that often the sender will request that packets not be fragmented, and as such they end up getting dropped entirely. In an ideal world, path MTU discovery would have allowed the endpoints to adjust their MTU down as needed, but this discovery depends upon ICMP, and networks outside of my control were often configured to drop ICMP for security reasons. The only choice was to use packet mangling in my router in order to modify TCP SYN packets to lower the maximum segment size at the transport layer: iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452 This sort of thing is messy and ideally should be avoided, but I had no other options and this did solve the problem. Hope these examples help, as well as the man page. | {
"source": [
"https://serverfault.com/questions/467778",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
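For the nameserver-testing question above, a hedged sketch of the usual approach: query the candidate server directly and disable recursion, so you only see what that server itself is authoritative for (hostnames match the question's placeholders):
dig +norecurse @new-nameserver.co.uk example.co.uk TXT       # what the new provider will serve
dig +norecurse @current-nameserver.co.uk example.co.uk TXT   # what the currently delegated server serves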
469,094 | We're trying to run a fairly straightforward setup on Amazon EC2 - several HTTP servers sitting behind an Amazon Elastic Load Balancer (ELB). Our domain is managed in Route53, and we have a CNAME record set up to point to the ELB. We've experienced some issues where some - but not all - locations are intermittently unable to connect to the load balancer; it seems that this may be the resolution of the ELB's domain name. Amazon support advised us that the underlying Elastic IP of the load balancer has been changing, and that the problem is that some ISPs' DNS servers do not honour the TTL. We're not satisfied with this explanation, because we replicated the problem using Amazon's own DNS servers from an EC2 instance, as well as on local ISPs in Australia and via Google's DNS server ( 8.8.8.8 ). Amazon also confirmed that during the period where we noticed down time from some locations, traffic passing through the ELB was down significantly - so the problem is not with our endpoints. Interestingly, the domain seems to resolve to the correct IP on the servers that cannot connect - but the attempt to establish a TCP connection fails. All the instances attached to the ELB have been healthy at all times. They're all Does anyone know how we might go about diagnosing this problem more deeply? Has anyone else experienced this problem with the Elastic Load Balancer? Thanks, | I found this question while Googling for how to diagnose Amazon Elastic Load Balancers (ELBs) and I want to answer it for anyone else like me who has had this trouble without much guidance. ELB Properties ELBs have some interesting properties. For instance: ELBs are made up of 1 or more nodes These nodes are published as A records for the ELB name These nodes can fail, or be shut down, and connections will not be closed gracefully It often requires a good relationship with Amazon support ($$$) to get someone to dig into ELB problems NOTE: Another interesting property but slightly less pertinent is that ELBs were not designed to handle sudden spikes of traffic. They typically require 15 minutes of heavy traffic before they will scale up or they can be pre-warmed on request via a support ticket Troubleshooting ELBs (manually) Update: AWS has since migrated all ELBs to use Route 53 for DNS. In addition, all ELBs now have a all.$elb_name record that will return the full list of nodes for the ELB. For example, if your ELB name is elb-123456789.us-east-1.elb.amazonaws.com , then you would get the full list of nodes by doing something like dig all.elb-123456789.us-east-1.elb.amazonaws.com . For IPv6 nodes, all.ipv6.$elb_name also works. In addition, Route 53 is able to return up to 4KB of data still using UDP, so using the +tcp flag may not be necessary. Knowing this, you can do a little bit of troubleshooting on your own. First, resolve the ELB name to a list of nodes (as A records): $ dig @ns-942.amazon.com +tcp elb-123456789.us-east-1.elb.amazonaws.com ANY The tcp flag is suggested as your ELB could have too many records to fit inside of a single UDP packet. I'm also told, but haven't personally confirmed, that Amazon will only display up to 6 nodes unless you perform an ANY query. Running this command will give you output that looks something like this (trimmed for brevity): ;; ANSWER SECTION:
elb-123456789.us-east-1.elb.amazonaws.com. 60 IN SOA ns-942.amazon.com. root.amazon.com. 1376719867 3600 900 7776000 60
elb-123456789.us-east-1.elb.amazonaws.com. 600 IN NS ns-942.amazon.com.
elb-123456789.us-east-1.elb.amazonaws.com. 60 IN A 54.243.63.96
elb-123456789.us-east-1.elb.amazonaws.com. 60 IN A 23.21.73.53 Now, for each of the A records use e.g. curl to test a connection to the ELB. Of course, you also want to isolate your test to just the ELB without connecting to your backends. One final property and little known fact about ELBs: The maximum size of the request method (verb) that can be sent through an ELB is 127 characters . Any larger and the ELB will reply with an HTTP 405 - Method not allowed . This means that we can take advantage of this behavior to test only that the ELB is responding: $ curl -X $(python -c 'print "A" * 128') -i http://ip.of.individual.node
HTTP/1.1 405 METHOD_NOT_ALLOWED
Content-Length: 0
Connection: Close If you see HTTP/1.1 405 METHOD_NOT_ALLOWED then the ELB is responding successfully. You might also want to adjust curl's timeouts to values that are acceptable to you. Troubleshooting ELBs using elbping Of course, doing this can get pretty tedious so I've built a tool to automate this called elbping . It's available as a ruby gem, so if you have rubygems then you can install it by simply doing: $ gem install elbping Now you can run: $ elbping -c 4 http://elb-123456789.us-east-1.elb.amazonaws.com
Response from 54.243.63.96: code=405 time=210 ms
Response from 23.21.73.53: code=405 time=189 ms
Response from 54.243.63.96: code=405 time=191 ms
Response from 23.21.73.53: code=405 time=188 ms
Response from 54.243.63.96: code=405 time=190 ms
Response from 23.21.73.53: code=405 time=192 ms
Response from 54.243.63.96: code=405 time=187 ms
Response from 23.21.73.53: code=405 time=189 ms
--- 54.243.63.96 statistics ---
4 requests, 4 responses, 0% loss
min/avg/max = 187/163/210 ms
--- 23.21.73.53 statistics ---
4 requests, 4 responses, 0% loss
min/avg/max = 188/189/192 ms
--- total statistics ---
8 requests, 8 responses, 0% loss
min/avg/max = 188/189/192 ms Remember, if you see code=405 then that means that the ELB is responding. Next Steps Whichever method you choose, you will at least know if your ELB's nodes are responding or not. Armed with this knowledge, you can either turn your focus to troubleshooting other parts of your stack or be able to make a pretty reasonable case to AWS that something is wrong. Hope this helps! | {
"source": [
"https://serverfault.com/questions/469094",
"https://serverfault.com",
"https://serverfault.com/users/154424/"
]
} |
469,247 | sleep is a very popular command and we can start sleep from 1 second: # wait one second please
sleep 1 But what is the alternative if I need to wait only 0.1 second, or something between 0.1 and 1 second? Remark: on Linux or OS X, sleep 0.XXX works fine, but on Solaris, sleep 0.1 or sleep 0.01 gives an "illegal syntax" error. | The documentation for the sleep command from coreutils says: Historical implementations of sleep have required that number be an
integer, and only accepted a single argument without a suffix.
However, GNU sleep accepts arbitrary floating point numbers. See Floating point . Hence you can use sleep 0.1 , sleep 1.0e-1 and similar arguments. | {
"source": [
"https://serverfault.com/questions/469247",
"https://serverfault.com",
"https://serverfault.com/users/117906/"
]
} |
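Where the GNU extension described above is unavailable (the Solaris case in the question), a commonly used portable fallback is to let perl do the waiting; this is a sketch assuming perl is installed:
perl -e 'select(undef, undef, undef, 0.1)'   # sleep for 0.1 seconds without relying on GNU sleep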
469,824 | I am developing and I need to access https://localhost . I know the certificate will not match. I just want curl to ignore that. Currently it gives me the following error message: curl: (51) SSL peer certificate or SSH remote key was not OK Is it possible to tell curl to perform the access anyway? | Yeah, you can do that. From curl --help or man curl : -k, --insecure (SSL) This option explicitly allows curl to perform "insecure" SSL
connections and transfers. All SSL connections are attempted to be
made secure by using the CA certificate bundle installed by default.
This makes all connections considered "insecure" fail unless -k,
--insecure is used. See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html | {
"source": [
"https://serverfault.com/questions/469824",
"https://serverfault.com",
"https://serverfault.com/users/91978/"
]
} |
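A minimal usage example of the flag described above, against the local HTTPS endpoint from the question:
curl -k https://localhost/
curl --insecure https://localhost/   # long form, equivalent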
469,993 | Basically I need for DNS to respond with different CNAMES depending if the request was made for HTTPS or HTTP object. s.test.com -> IF(https) RESPONSE special.domain.com ELSE simple.domain.com Is it possible? What other possible ways to do that? | This isn't possible with DNS. The DNS request is completely independent of the reason for the request. For this to be possible, the entire caching system for DNS would have to be scrapped. DNS would also have to be rewritten every time a new scheme was invented. What are you trying to do? There might be a better way to solve your actual problem. | {
"source": [
"https://serverfault.com/questions/469993",
"https://serverfault.com",
"https://serverfault.com/users/89005/"
]
} |
470,287 | The newest Fedora has firewalld as its new firewall application. I liked the old iptables services. I want them back but have no idea how to do that. I have tried: systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl enable iptables.service
systemctl enable ip6tables.service
systemctl start iptables.service
systemctl start ip6tables.service But it does not work! I didn't find any help on the wiki or Google. Disabling firewalld works OK, but when I'm trying to enable iptables.service I get: systemctl enable iptables.service
Failed to issue method call: No such file or directory | Make sure you have the iptables-services package installed. This legacy package provides the systemd scripts for the previous iptables invocation. This package is not always installed, depending on your installation choices when you installed (or upgraded). yum install iptables-services And of course, if possible, you should use the new firewalld system. It should only be necessary to revert to the old system if firewalld fails to provide a feature you need. | {
"source": [
"https://serverfault.com/questions/470287",
"https://serverfault.com",
"https://serverfault.com/users/71114/"
]
} |
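To round out the answer above, the usual sequence after installing iptables-services looks roughly like this (a sketch; on these systems the persisted rules live in /etc/sysconfig/iptables):
yum install iptables-services
systemctl stop firewalld
systemctl mask firewalld      # prevent firewalld from being started again by accident
systemctl enable iptables
systemctl start iptables
service iptables save         # write the current rules to /etc/sysconfig/iptables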
470,497 | I have been learning about spanning tree protocol (STP/RSTP/MSTP) and was wondering, once I turn it on and it's protecting against for example network loops, how do I know there is a network loop? I suppose in most cases it would be obvious, because the room the loop is in would be down, but what if there is no complaint? It seems like I would still want a way to know, that there is a network issue such as this. Maybe the device sending some kind of alert, or maybe someone has to check a log or something else occasionally? | You watch your switch logs for spanning tree events, or configure your switches to send SNMP traps when STP shuts down a port. | {
"source": [
"https://serverfault.com/questions/470497",
"https://serverfault.com",
"https://serverfault.com/users/41580/"
]
} |
470,534 | I'm looking to purchase a rack/enclosure for some servers. As this is my first time shopping for this type of equipment, I need to know what to look for. I'm asking not only about how to buy the rack, but also what accessories will I need? What can I do without? What should I look for in terms of delivery and assembly? What do I need to prepare for in terms of power? Cooling inside the cabinet? Anything else I might be overlooking? I'll share some back story in case anyone finds it helpful, but really generic answers for anyone who is shopping for rack equipment is helpful. Where I'm at we used to have mainly tower servers... even a number of glorified desktops that lived in our server room. We do have a couple two-poster racks for switches, but everything sits in a wooden bench made of 2x4's and plywood that's been here much longer than I have. Over the last three years I've been able to virtualize the desktop "servers" and as items have come up for refresh I've purchased rackmount servers, with rails. I put the servers on their sides in the old space, and set the rails aside in the store room, biding my time. We are now (at last!) to the point where in the next six months I'll be down to only a single tower server, and it just happens to be 19 inches high, so I'm thinking 1U shelf. Everything else except the the UPS equipment should mount in a rack. In anticipation of that event, I'm looking to spec out a server rack enclosure to purchase and install this summer. I want to be get preliminary shopping done over the next month or two so I can get it put into the budget for next fiscal year, in time to actually execute the project late this summer. For size, we should fit comfortably inside a single 42U height-wise, even including mounting our existing switches, so I'm pretty sure one rack will handle it. ... I just need to know what to look for in that rack. | It's really difficult to buy an unsuitable rack today if you're purchasing new. I usually encounter generic, custom or unbranded racks , APC Netshelter , Dell and HP (10642 G1 and G2 models) in the field. They've all been solid and have handled the systems and equipment I've needed to mount within them. The basics: Begin with your servers. Why not match up with the equipment you're currently using? If you're on Dell servers, you know Dell racks will work. Same for HP. That's a very good starting point. In addition to manufacturer compatibility, you'll want a four-post, square-hole rack . At this juncture, you should not consider anything else for housing servers. That means NO round-hole threaded or unthreaded round-hole racks! Other answers here elaborate on the reasons why, but again, it's difficult to end up with a round-hole rack if purchasing something new today. Do you want sides and doors? Sides are optional in situations where you make want to bay/bind two similar racks together side-by-side. Also, if you need easy access to systems and cabling, you may want to omit doors/side panels. Is there a security concern or a need to restrict physical access to the systems? If so, the doors and side panels are worth getting. Networking gear could mean switches, or switches and patch panels... While both of those work best in 2-post telco or relay racks . They can live happily in a 4-post rack, but would ideally be mounted in the rear rails versus the front of the rack. Think about were your infrastructure cabling will terminate. E.g. do you want to drape fixed wiring inside the cabinet? 
You want to absolutely avoid this scenario . You'll want shelves for any non-rackmount equipment (tape drives, Cisco 1700 routers, KVM switches, etc). Try to buy shelves purpose-built for the chosen rack. (e.g. HP shelves with HP racks... APC with APC...) I don't usually bother with in-rack cooling. Proper rackmount servers and equipment pull cool air in from the front and exhaust hot air out the rear. A top-mount fan really hasn't been particularly useful in my environments. I skip the dedicated KVM/monitor/keyboard in racks these days, opting for a more portable "crash cart" or a laptop-KVM adapter . Ensure depth is appropriate for your preferred power distribution choice. APC racks accommodate their PDU's quite well. Full rack enclosures are often delivered whole, palletized with casters . There isn't any assembly really required, other than removing them from the pallet anchors. Open-frame racks may be shipped in one or more flat boxes and require assembly. Do you want a permanent installation, or do you want casters in the bottom to allow the rack to be moved and repositioned easily? Outside of those items, you'll be well-served with most recent APC (universally-accepted, very adjustable), HP or even Dell racks. Quite a few data centers use APC racks, or at least use similar designs since they're in the business of accommodating a wide variety of customer equipment. You can also opt for an open-frame 4-post rack if this is a permanent installation. Rack Solutions is also a great (re)source and can answer any specific questions or give you some design ideas. Examples: Custom rack : Permanent installation. No casters, front view with shelves, side panels and door attached. HP 10642 G1 rack : Movable rack on casters with side panels installed, door removed, fully loaded with big servers, UPS, CRT monitor (LOLWUT?), KVM, 1U pull-out keyboard/mouse and two shelves. Dell rack : Rear view. Split rear door for clearance and ease of access (versus a one-hinge full door). Side-mounted cable management loops. Custom rack : Rear of a custom data center rack enclosure with patch panels and networking facing the rear. Proper clearance for PDU's and server rails . Open-frame 4-post rack No doors, side panels or casters. A skeleton rack, essentially. Good for airflow or easy access to cabling. | {
"source": [
"https://serverfault.com/questions/470534",
"https://serverfault.com",
"https://serverfault.com/users/2869/"
]
} |
470,650 | Yesterday I upgraded my Fedora box to the latest version, and with that I also upgraded Samba, which is now Samba 4. I used to access the shares from any computer at home without a user/password, but now there seems to be something wrong with the configuration. Here is my smb.conf: [global]
workgroup = mygroup
server string = Samba Server Version %v
netbios name = HOME-WS
log file = /var/log/samba/log.%m
max log size = 50
guest ok = yes
security = share
[Media]
path = /mnt/Media
read only = yes
browseable = yes
guest ok = yes
guest only = yes
[Music]
path = /mnt/Music
read only = yes
browseable = yes
guest ok = yes
guest only = yes Looking at the logs, there is a warning related to the security parameter: WARNING: Ignoring invalid value 'share' for parameter 'security' Does that mean that Samba 4 has finally removed 'share' as an option? Is there any alternative to it, so that I can configure shares without passwords? | If you follow the FAQ link from JasonAzze, you will see there is a "map to guest" line which is also required, so you need both of these lines: security = user
map to guest = Bad Password I had the same problem as the OP, and I have tested that this solution works on Fedora 18 | {
"source": [
"https://serverfault.com/questions/470650",
"https://serverfault.com",
"https://serverfault.com/users/62861/"
]
} |
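After making the changes described above, it is worth validating and reloading the configuration; a short sketch (the service name may be smb, smbd or samba depending on the distribution):
testparm -s               # parse smb.conf and report any problems
systemctl restart smb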
470,755 | It is company policy for admins to login to the servers via a personal username, and then run sudo -i to become root. Upon running sudo -i , sudo will create an environmental variable called SUDO_USER , which contains the original user's username. Is there a way to log ALL commands within syslog with something akin to the following syntax: ${TIME/DATE STAMP}: [${REAL_USER}|${SUDO_USER}]: ${CMD} An example entry would be: Sat Jan 19 22:28:46 CST 2013: [root|ksoviero]: yum install random-pkg Obviously it doesn't have to be exactly the above syntax, it just has to include a minimum of the real user (eg. root), the sudo user (eg. ksoviero), and the full command that was run (eg. yum install random-pkg). I've already tried snoopy , but it did not include the SUDO_USER variable. | Update :
2 more things that have popped up in the comments and in follow-up questions: Using auditd this way will dramatically increase your log volume, especially if the system is heavily in use via commandline. Adjust your log retention policy. Auditd logs on the host where they are created are just as secure as other files on the same box. Forward your logs to a remote log collection server like ELK or Graylog to preserve your logs' integrity. Plus, adding to the point above, it allows to more aggressively delete old logs. As was suggested by Michael Hampton, auditd is the correct tool for the job here. I tested this on an Ubuntu 12.10 installation, so your mileage may vary on other systems. Install auditd : apt-get install auditd Add these 2 lines to /etc/audit/audit.rules : -a exit,always -F arch=b64 -F euid=0 -S execve
-a exit,always -F arch=b32 -F euid=0 -S execve These will track all commands run by root ( euid=0 ). Why two rules? The execve syscall must be tracked in both 32 and 64 bit code. To get rid of auid=4294967295 messages in logs, add audit=1 to the kernel's cmdline (by editing /etc/default/grub ) Place the line session required pam_loginuid.so in all PAM config files that are relevant to login ( /etc/pam.d/{login,kdm,sshd} ), but not in the files that are relevant to su or sudo .
This will allow auditd to get the calling user's uid correctly when calling sudo or su . Restart your system now. Let's login and run some commands: $ id -u
1000
$ sudo ls /
bin boot data dev etc home initrd.img initrd.img.old lib lib32 lib64 lost+found media mnt opt proc root run sbin scratch selinux srv sys tmp usr var vmlinuz vmlinuz.old
$ sudo su -
# ls /etc
[...] This will yield something like this in /var/log/audit/auditd.log : ----
time->Mon Feb 4 09:57:06 2013
type=PATH msg=audit(1359968226.239:576): item=1 name=(null) inode=668682 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1359968226.239:576): item=0 name="/bin/ls" inode=2117 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1359968226.239:576): cwd="/home/user"
type=EXECVE msg=audit(1359968226.239:576): argc=2 a0="ls" a1="/"
type=SYSCALL msg=audit(1359968226.239:576): arch=c000003e syscall=59 success=yes exit=0 a0=10cfc48 a1=10d07c8 a2=10d5750 a3=7fff2eb2d1f0 items=2 ppid=26569 pid=26570 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="ls" exe="/bin/ls" key=(null)
----
time->Mon Feb 4 09:57:06 2013
type=PATH msg=audit(1359968226.231:575): item=1 name=(null) inode=668682 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1359968226.231:575): item=0 name="/usr/bin/sudo" inode=530900 dev=08:01 mode=0104755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1359968226.231:575): cwd="/home/user"
type=BPRM_FCAPS msg=audit(1359968226.231:575): fver=0 fp=0000000000000000 fi=0000000000000000 fe=0 old_pp=0000000000000000 old_pi=0000000000000000 old_pe=0000000000000000 new_pp=ffffffffffffffff new_pi=0000000000000000 new_pe=ffffffffffffffff
type=EXECVE msg=audit(1359968226.231:575): argc=3 a0="sudo" a1="ls" a2="/"
type=SYSCALL msg=audit(1359968226.231:575): arch=c000003e syscall=59 success=yes exit=0 a0=7fff327ecab0 a1=7fd330e1b958 a2=17cc8d0 a3=7fff327ec670 items=2 ppid=3933 pid=26569 auid=1000 uid=1000 gid=1000 euid=0 suid=0 fsuid=0 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="sudo" exe="/usr/bin/sudo" key=(null)
----
time->Mon Feb 4 09:57:09 2013
type=PATH msg=audit(1359968229.523:578): item=1 name=(null) inode=668682 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1359968229.523:578): item=0 name="/bin/su" inode=44 dev=08:01 mode=0104755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1359968229.523:578): cwd="/home/user"
type=EXECVE msg=audit(1359968229.523:578): argc=2 a0="su" a1="-"
type=SYSCALL msg=audit(1359968229.523:578): arch=c000003e syscall=59 success=yes exit=0 a0=1ceec48 a1=1cef7c8 a2=1cf4750 a3=7fff083bd920 items=2 ppid=26611 pid=26612 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="su" exe="/bin/su" key=(null)
----
time->Mon Feb 4 09:57:09 2013
type=PATH msg=audit(1359968229.519:577): item=1 name=(null) inode=668682 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1359968229.519:577): item=0 name="/usr/bin/sudo" inode=530900 dev=08:01 mode=0104755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1359968229.519:577): cwd="/home/user"
type=BPRM_FCAPS msg=audit(1359968229.519:577): fver=0 fp=0000000000000000 fi=0000000000000000 fe=0 old_pp=0000000000000000 old_pi=0000000000000000 old_pe=0000000000000000 new_pp=ffffffffffffffff new_pi=0000000000000000 new_pe=ffffffffffffffff
type=EXECVE msg=audit(1359968229.519:577): argc=3 a0="sudo" a1="su" a2="-"
type=SYSCALL msg=audit(1359968229.519:577): arch=c000003e syscall=59 success=yes exit=0 a0=7fff327ecab0 a1=7fd330e1b958 a2=17cc8d0 a3=7fff327ec670 items=2 ppid=3933 pid=26611 auid=1000 uid=1000 gid=1000 euid=0 suid=0 fsuid=0 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm="sudo" exe="/usr/bin/sudo" key=(null)
----
time->Mon Feb 4 09:57:09 2013
type=PATH msg=audit(1359968229.543:585): item=1 name=(null) inode=668682 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1359968229.543:585): item=0 name="/bin/bash" inode=6941 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1359968229.543:585): cwd="/root"
type=EXECVE msg=audit(1359968229.543:585): argc=1 a0="-su"
type=SYSCALL msg=audit(1359968229.543:585): arch=c000003e syscall=59 success=yes exit=0 a0=13695a0 a1=7fffce08a3e0 a2=135a030 a3=7fffce08c200 items=2 ppid=26612 pid=26622 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="bash" exe="/bin/bash" key=(null)
----
time->Mon Feb 4 09:57:11 2013
type=PATH msg=audit(1359968231.663:594): item=1 name=(null) inode=668682 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1359968231.663:594): item=0 name="/bin/ls" inode=2117 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1359968231.663:594): cwd="/root"
type=EXECVE msg=audit(1359968231.663:594): argc=3 a0="ls" a1="--color=auto" a2="/etc"
type=SYSCALL msg=audit(1359968231.663:594): arch=c000003e syscall=59 success=yes exit=0 a0=7fff8c709950 a1=7f91a12149d8 a2=1194c50 a3=7fff8c709510 items=2 ppid=26622 pid=26661 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=1 comm="ls" exe="/bin/ls" key=(null) The auid column contains the calling user's uid , which allows you filter for commands run by this user with ausearch -ua 1000 This will even list commands the user ran as root. Sources: http://www.woitasen.com.ar/2011/11/auditing-user-actions-after-sudo/ http://linux.die.net/man/8/pam_loginuid http://linux.die.net/man/8/auditd | {
"source": [
"https://serverfault.com/questions/470755",
"https://serverfault.com",
"https://serverfault.com/users/63619/"
]
} |
471,224 | In Windows 2003 and 2008 we had Terminal Services Manager (see screenshot below). However in Windows server 2012 it's gone . Does anyone know how to access the list of currently remotely logged on users in Windows 2012 through a similar tool or some other way? (I tried connecting to Windows 2012 from Windows 2008, that's why you see "win2012" in the TSM groups list. But that didn't quite work, and that's not a solution to my problem either. I was just trying to somehow manage remote users on the Win 2012 server.) | Yep, tsadmin is gone. Kinda' sucks. There's RDMS through Server Manager and the Remote Desktop Powershell cmdlets ( get-command *RD* ), but those both require that a full Remote Desktop Services deployment exist on that server. Those don't work on servers without RDS deployments or on workstations. You can use Task Manager... or, if you want something command-line, you could use this utility that I wrote specifically for this: users.exe Oh and there's also quser.exe that Microsoft already wrote, but my utility does a little extra that quser doesn't do. | {
"source": [
"https://serverfault.com/questions/471224",
"https://serverfault.com",
"https://serverfault.com/users/3903/"
]
} |
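As a small illustration of the built-in alternative mentioned above, quser can also be pointed at a remote host; the server name here is a placeholder:
quser
quser /server:rds01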
471,289 | Is it necessary to generate the CSR (Certificate Signing Request) on the same machine that will host my web application and SSL certificate? This page on SSL Shopper says so, but I'm not sure if that's true, because it would mean I'd have to buy a separate SSL certificate for each server in my cluster. What is a CSR? A CSR or Certificate Signing request is a block of
encrypted text that is generated on the server that the certificate
will be used on. | No. It is not necessary to generate the CSR on the machine that you want to host the resulting certificate on. The CSR does need to be generated either using the existing private key that the certificate will be eventually paired with or its matching private key is generated as part of the CSR creation process. What's important is not so much the originating host but that the private key and resulting public key are a matching pair. | {
"source": [
"https://serverfault.com/questions/471289",
"https://serverfault.com",
"https://serverfault.com/users/33958/"
]
} |
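To make the answer above concrete: the key and CSR can be produced on any machine, and you can later confirm that a key and certificate belong together by comparing their moduli (filenames are placeholders):
openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr   # generate a private key and CSR anywhere
openssl rsa -noout -modulus -in example.key | openssl md5
openssl req -noout -modulus -in example.csr | openssl md5
openssl x509 -noout -modulus -in example.crt | openssl md5   # all three hashes must match for a matching key/CSR/certificate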
471,327 | I cloned a server, so the clone and the original have the same RSA key fingerprint. It seems to be defined in /etc/ssh/ssh_host_rsa_key.pub . What is the correct way to change that? Thanks. | Follow these steps to regenerate the OpenSSH host keys. Delete the old SSH host keys: rm /etc/ssh/ssh_host_* Reconfigure the OpenSSH server: dpkg-reconfigure openssh-server Update all SSH clients' ~/.ssh/known_hosts files. Reference | {
"source": [
"https://serverfault.com/questions/471327",
"https://serverfault.com",
"https://serverfault.com/users/61071/"
]
} |
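On systems without the Debian-style dpkg-reconfigure step used above, OpenSSH itself can regenerate missing host keys; a sketch:
rm /etc/ssh/ssh_host_*
ssh-keygen -A               # recreates every missing host key type with default settings
systemctl restart sshd      # service name and init system vary by distribution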
471,355 | I've been having a pretty annoying problem with a website I work on, and I'm having difficulty saying whether the problem lies with PHP, Apache or MySQL. System setup : cloud hosted solution (moved from dedicated servers last year) with two VMs: Apache VM and MySQL VM. The Apache VM has 1 core (2GHz),4GB RAM, the MySQL VM has two of the same core, 8GB RAM. The site doesn't get a large volume of traffic, due to its nature. Problem : when viewing an account report, sometimes the page times out and fails to load. The page runs a lot of queries, and returns quite a lot of data (mostly text, still <1MB), so my first thought was a problem with MySQL. I've monitored the server during these time-outs, and nothing stands out. I've also run the queries isolated (both direct to DB and through a test page), and they run fairly quickly. Apache also shows nothing out of the ordinary, and I never get PHP timeouts or memory errors. I've also run this on local systems, without experiencing the same issue (though these systems obviously have no competition, unlike the live box. The strangest thing is that when I get this problem on one browser (say, Firefox), I can't load any other pages on the site through Firefox, but I can through another browser (say, Chrome). It suggests there's some kind of connection or queue issue with the server and that session? Can anybody give me any idea what they think could cause something like this? Or is there any more info I can give you to help? Thanks | Follow these steps to regenerate OpenSSH Host Keys Delete old ssh host keys: rm /etc/ssh/ssh_host_* Reconfigure OpenSSH Server: dpkg-reconfigure openssh-server Update all ssh client(s) ~/.ssh/known_hosts files Reference | {
"source": [
"https://serverfault.com/questions/471355",
"https://serverfault.com",
"https://serverfault.com/users/155520/"
]
} |
471,412 | Trying to generate a key for a server. gpg --gen-key We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy. and it just hangs there. There is another error: can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory which seems to go away after: gpg-agent --daemon GPG_AGENT_INFO=/tmp/gpg-4c5hyT/S.gpg-agent:1397:1; export GPG_AGENT_INFO; #GPG_AGENT_INFO=/tmp/gpg-4c5hyT/S.gpg-agent:1397:1; export GPG_AGENT_INFO;
gpg --gen-key
... but again, it hangs at "...gain enough entropy". There are no "++++++++++++++++++++++++++++++++++++++++++"'s which from forum posts looks like should be expected as the key is generated. I have tried reinstalling the package, but seemingly everything depends on gpg. I've read other people having problems with this on centos 6 too (whereas centos 5 works fine). There is nothing remarkable in /var/log/* . Any ideas on where to go from here? Thanks. | When the gpg --gen-key command hangs like this, log in to another shell and perform the following command: dd if=/dev/sda of=/dev/zero (This command basically reads from your hard drive and discards the output, because writing to /dev/zero will do nothing.) After a few seconds / minutes, the key generation command should complete. | {
"source": [
"https://serverfault.com/questions/471412",
"https://serverfault.com",
"https://serverfault.com/users/38936/"
]
} |
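An alternative to the disk-reading trick above that is often used on headless CentOS boxes is feeding the kernel entropy pool with rngd; note that pointing it at /dev/urandom trades entropy quality for convenience, so treat this as a sketch:
yum install rng-tools
rngd -r /dev/urandom
cat /proc/sys/kernel/random/entropy_avail   # check how much entropy is now available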
472,030 | I want to be sure in what order services are started during boot process in Debian based systems (Debian Squeeze in particular). | In short: ls /etc/rc*.d This shows you what starts at which runlevel, and within each level the order is determined by the number after the letter (K is Kill, S is start). You can configure what starts at each runlevel with sysv-rc-conf, which is installable with apt. e.g. on my system apache2 is symlinked in rc5.d as "S20apache2". A link in the same directory with S19 would start before it, something with S21 would start after it. Further reading: http://wiki.debian.org/RunLevel http://www.debian.org/doc/manuals/debian-reference/ch03.en.html#_the_meaning_of_the_runlevel | {
"source": [
"https://serverfault.com/questions/472030",
"https://serverfault.com",
"https://serverfault.com/users/155872/"
]
} |
472,145 | I have apache2 installed on Amazon Linux AMI release 2012.03. I'm able to start it manually just fine, without any errors using /etc/init.d/httpd start . However, it doesn't start automatically when the machine is booted up. It appears that everything is configured properly in my rc*.d directories. Here's the result of find /etc/rc.d -name "*httpd*" | xargs ls -l : -rwxr-xr-x 1 root root 3371 Feb 16 2012 /etc/rc.d/init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc0.d/K15httpd -> ../init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc1.d/K15httpd -> ../init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc2.d/K15httpd -> ../init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc3.d/K15httpd -> ../init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc4.d/K15httpd -> ../init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc5.d/K15httpd -> ../init.d/httpd
lrwxrwxrwx 1 root root 15 Apr 14 2012 /etc/rc.d/rc6.d/K15httpd -> ../init.d/httpd I understand that I can put the /etc/init.d/httpd start command into /etc/rc.local , but isn't that a workaround? Why isn't it starting automatically? Other stuff in the rc*.d directories starts just fine on bootup (mongod, postfix, etc). Thanks! | Use chkconfig to manage the runlevels under which you want this service to start. Usually chkconfig httpd on does the job. | {
"source": [
"https://serverfault.com/questions/472145",
"https://serverfault.com",
"https://serverfault.com/users/89239/"
]
} |
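To confirm that the change from the answer above took effect, list the runlevels for the service (2 through 5 should show as on):
chkconfig --list httpd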
472,158 | I have a fiber connection with 5 public IPs. The IPs are like this: 200.195.169.xxx/29 The Internet link is connected to ether2. One of the public IPs is used for the LAN.
ether1, ether3, ether4 and ether5 belong to my LAN and have the following IPs:
192.168.55.1/24 I have another computer running Apache, connected to ether6.
What I want to do is assign one of my public IPs to this server, without NAT. I hope you understand. What should I do? Thanks. Edit: I need this because I will install cPanel on the computer used as the server, and cPanel won't run well behind NAT. | Use chkconfig to manage the runlevels under which you want this service to start. Usually chkconfig httpd on does the job. | {
"source": [
"https://serverfault.com/questions/472158",
"https://serverfault.com",
"https://serverfault.com/users/56589/"
]
} |
472,779 | I'm running a Debian Squeeze web server with nginx, and I can't get SSL to work. The error I get isn't from the web server itself, but from the client web browser: a "Connection timeout" error. I have purchased an SSL certificate from StartSSL, and when that didn't work, I tried generating my own just to troubleshoot. Both yielded the same error, neither worked, and my nginx log isn't showing anything. My SSL config looks like this: server {
listen 443 default_server ssl;
server_name tarror.org www.tarror.org;
ssl on;
ssl_certificate /srv/ssl/nginx.pem;
ssl_certificate_key /srv/ssl/nginx.key;
root /wdata/tarror.org;
index index.php index.htm index.html;
location ~ .php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /wdata/tarror.org$fastcgi_script_name;
include fastcgi_params;
}
} | Open port 443 in your web server's firewall. | {
"source": [
"https://serverfault.com/questions/472779",
"https://serverfault.com",
"https://serverfault.com/users/156260/"
]
} |
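A sketch of what opening port 443 typically means on an iptables-based Debian box like the one in the question; adapt it to your existing rule set and persistence mechanism (e.g. the iptables-persistent package):
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
iptables-save > /etc/iptables/rules.v4   # path used by iptables-persistent; an assumption here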
472,955 | I want Upstart to do two things: (1) stop trying to respawn a failed process so quickly, and (2) never give up trying to respawn. In an ideal world, Upstart would try to restart a dead process after 1s, then double that delay on each attempt, until it reached an hour. Is something like this possible? | The Upstart Cookbook recommends a post-stop delay ( http://upstart.ubuntu.com/cookbook/#delay-respawn-of-a-job ). Use the respawn stanza without arguments and it will continue trying forever: respawn
post-stop exec sleep 5 (I got this from this Ask Ubuntu question ) To add the exponential delay part, I'd try working with an environment variable in the post-stop script, I think something like: env SLEEP_TIME=1
post-stop script
sleep $SLEEP_TIME
NEW_SLEEP_TIME=`expr 2 \* $SLEEP_TIME`
if [ $NEW_SLEEP_TIME -ge 60 ]; then
NEW_SLEEP_TIME=60
fi
initctl set-env SLEEP_TIME=$NEW_SLEEP_TIME
end script ** EDIT ** To apply the delay only when respawning, avoiding the delay on a real stop, use the following, which checks whether the current goal is "stop" or not: env SLEEP_TIME=1
post-stop script
goal=`initctl status $UPSTART_JOB | awk '{print $2}' | cut -d '/' -f 1`
if [ $goal != "stop" ]; then
sleep $SLEEP_TIME
NEW_SLEEP_TIME=`expr 2 \* $SLEEP_TIME`
if [ $NEW_SLEEP_TIME -ge 60 ]; then
NEW_SLEEP_TIME=60
fi
initctl set-env SLEEP_TIME=$NEW_SLEEP_TIME
fi
end script | {
"source": [
"https://serverfault.com/questions/472955",
"https://serverfault.com",
"https://serverfault.com/users/20520/"
]
} |
472,960 | I have the following problem with yum: $ yum
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:
No module named cElementTree
Please install a package which provides this module, or
verify that the module is installed correctly.
It's possible that the above module doesn't match the
current version of Python, which is:
2.4.3 (#1, Feb 22 2012, 16:06:13)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-52)]
If you cannot solve this problem yourself, please go to
the yum faq at:
http://wiki.linux.duke.edu/YumFaq My OS: CentOS release 5.8 (Final) When trying to install python-elementtree manually from RPM package, it gives the following error: $ sudo rpm -i http://mirror.centos.org/centos-5/5/os/i386/CentOS/python-elementtree-1.2.6-5.i386.rpm
package python-elementtree-1.2.6-7.el4.rf.i386 (which is newer than python-elementtree-1.2.6-5.i386) is already installed
$ sudo rpm -i http://mirror.centos.org/centos-5/5/os/i386/CentOS/python-elementtree-1.2.6-5.i386.rpm Related links: http://www.rickrodriguezjr.com/wordpress/archives/183 http://www.webhostingtalk.com/showthread.php?t=936132 http://www.clearfoundation.com/component/option,com_kunena/Itemid,232/catid,26/func,view/id,45278/ http://pingd.org/2012/no-module-named-celementtree-yum-update-error.html http://forums.contribs.org/index.php?topic=49189.0 http://www.centos.org/modules/newbb/viewtopic.php?topic_id=3401 | The Upstart Cookbook recommends a post-stop delay ( http://upstart.ubuntu.com/cookbook/#delay-respawn-of-a-job ). Use the respawn stanza without arguments and it will continue trying forever: respawn
post-stop exec sleep 5 (I got this from this Ask Ubuntu question ) To add the exponential delay part, I'd try working with an environment variable in the post-stop script, I think something like: env SLEEP_TIME=1
post-stop script
sleep $SLEEP_TIME
NEW_SLEEP_TIME=`expr 2 \* $SLEEP_TIME`
if [ $NEW_SLEEP_TIME -ge 60 ]; then
NEW_SLEEP_TIME=60
fi
initctl set-env SLEEP_TIME=$NEW_SLEEP_TIME
end script ** EDIT ** To apply the delay only when respawning, avoiding the delay on a real stop, use the following, which checks whether the current goal is "stop" or not: env SLEEP_TIME=1
post-stop script
goal=`initctl status $UPSTART_JOB | awk '{print $2}' | cut -d '/' -f 1`
if [ $goal != "stop" ]; then
sleep $SLEEP_TIME
NEW_SLEEP_TIME=`expr 2 \* $SLEEP_TIME`
if [ $NEW_SLEEP_TIME -ge 60 ]; then
NEW_SLEEP_TIME=60
fi
initctl set-env SLEEP_TIME=$NEW_SLEEP_TIME
fi
end script | {
"source": [
"https://serverfault.com/questions/472960",
"https://serverfault.com",
"https://serverfault.com/users/130437/"
]
} |
473,763 | Dealing with hundreds of RHEL servers, how can we maintain local root accounts and network user accounts? Is there an active directory type solution that manages these from a central location? | One central component of Active Directory is LDAP, which is available on Linux in the form of OpenLDAP and 389DS (and some others). Also, the other major component Kerberos is available in the form of MIT Kerberos and Heimdal . Finally, you can even connect your machines to AD. | {
"source": [
"https://serverfault.com/questions/473763",
"https://serverfault.com",
"https://serverfault.com/users/90718/"
]
} |
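A hedged sketch of what joining RHEL machines to Active Directory typically looks like with realmd and SSSD on RHEL 7 and later (older releases usually use SSSD with a manual Kerberos/LDAP configuration or winbind instead; the domain and user are placeholders):
yum install realmd sssd adcli oddjob oddjob-mkhomedir samba-common-tools
realm discover example.com
realm join --user=Administrator example.com
id someuser@example.com    # verify that directory lookups work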
473,905 | I am trying to count the number of SQL queries per second from a log file, and I want to do it in real time by piping stdout from grep into some command. (I am doing some performance testing.) I could write it myself, but thought for sure this would exist. I looked at wc but didn't see an option to allow this. I could also use it to count requests per second by piping a tail from the access log. | pv is your command! Pipe Viewer prints stats about the data passing through it, and can run anywhere in your pipeline, since it pipes stdin directly over to stdout. For example: tail -f /var/log/nginx/access.log | pv --line-mode --rate > /dev/null The pv command prints to stderr the current number of lines per second (the default is bytes per second), which, for this particular data source (Nginx's default log file), equates to incoming web requests per second. I only care about the counts, so I pipe stdout into /dev/null . There are also options like: -b (total number of lines), --average-rate (average rate since starting), and --timer (tracks how long the pipe has been going). If you don't say --line-mode , it'll count bytes, which is probably not what you want for server logs, but could be handy elsewhere. Final note: ... | pv -lb > file.txt is a lot like ... | tee file.txt | awk '{printf "\r%lu", NR}' , which is also handy for counting lines, but the pv call is way shorter, though the output is not quite as exciting — pv updates every second by default, while that awk command updates continuously. | {
"source": [
"https://serverfault.com/questions/473905",
"https://serverfault.com",
"https://serverfault.com/users/155822/"
]
} |
473,929 | I have Cygwin with SSH server installed (Windows 7). After setting up I can login locally using "ssh localhost -l [myUsername]". I input my password. Success. However, trying to SSH to the machine remotely from a different machine connects, but ALWAYS rejects the password with "permission Denied". There is no connectivity problem, obviously I'm connecting. Firewall settings are all OFF. Why is that happening? | pv is your command! P ipe V iewer prints stats about the data passing through it, and can run anywhere in your pipeline, since it pipes stdin directly over to stdout. For example: tail -f /var/log/nginx/access.log | pv --line-mode --rate > /dev/null The pv command prints to stderr the current number of lines per second (the default is bytes per second), which, for this particular data source (Nginx's default log file), equates to incoming web requests per second. I only care about the counts, so I pipe stdout into /dev/null . There are also options like: -b (total number of lines), --average-rate (average rate since starting), and --timer (tracks how long the pipe has been going). If you don't say --line-mode , it'll count bytes, which is probably not what you want for server logs, but could be handy elsewhere. Final note: ... | pv -lb > file.txt is a lot like ... | tee file.txt | awk '{printf "\r%lu", NR}' , which is also handy for counting lines, but the pv call is way shorter, though the output is not quite as exciting — pv updates every second by default, while that awk command updates continuously. | {
"source": [
"https://serverfault.com/questions/473929",
"https://serverfault.com",
"https://serverfault.com/users/68513/"
]
} |
474,361 | So I installed mailutils (apt-get install mailutils) and when I did a nice little setup screen popped up and started asking me questions. I guess I screwed up and cancelled out before I had all the data I need to configure. Anyhow, how do I get it to rerun that setup script? PBI | You can try with dpkg-reconfigure -plow <PACKAGE> This will ask again the configuration questions about the package. It may ask you to reconfigure related packages as well. | {
"source": [
"https://serverfault.com/questions/474361",
"https://serverfault.com",
"https://serverfault.com/users/151648/"
]
} |
474,367 | Is the best way to get all my EC2 Web Server Instance Apache Logs in one place, to create a "LogServer" EC2 Micro instance and point all the Apache Configs to it? Not sure how to do this and wondering if anyone can offer help on doing this. Do I need to do this via Syslog? | You can try with dpkg-reconfigure -plow <PACKAGE> This will ask again the configuration questions about the package. It may ask you to reconfigure related packages as well. | {
"source": [
"https://serverfault.com/questions/474367",
"https://serverfault.com",
"https://serverfault.com/users/142429/"
]
} |
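For the log-centralisation question above, one common approach is to hand Apache's access log to syslog and forward it to a central host; a sketch, with the tag, facility and log-server address all being placeholders:
# in the Apache vhost on each web server:
CustomLog "|/usr/bin/logger -t apache-access -p local6.info" combined
# in /etc/rsyslog.d/apache.conf on each web server, forward that facility to the log host:
local6.* @logs.example.internal:514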
474,494 | I have a fresh VPS installation from my provider (Windows 2008 R2, IIS 7.5). The default web site works fine: http://5.9.251.167/ I created a new website, and binding for http://new.ianquigley.com On that server and everywhere else that Domain maps to the IP address. (ping new.ianquigley.com for example). I created the sub folder c:\inetpub\wwwroot\com.ianquigley and created an HTML file index.html with the content " <html>cake</html> " The default document for the web site is index.html On the server, in Chrome I browse to http://new.ianquigley.com/index.html and get a 404 Error. The page says; HTTP Error 404.0 - Not Found
The resource you are looking for has been removed, had its name changed,
or is temporarily unavailable.
Detail:
Module: IIS Web Core
Notification: HttpRequestHandler
Handler: StaticFile
Error Code: 0x80007002
Request URL: http://new.ianquigley.com/index.html
Physical path: c:\inetpub\wwwroot\com.ianquigley\index.html
Logon Method: Anonymous
Logon User: Anonymous
Failed Request Log: c:\inetpub\logs\FailedRequestLog The Physical Path does exist. The folder wwwroot and com.ianquigley both have "Everyone" and "Read" permission. The c:\inetpub\wwwroot\logfiles\w3svc2\u_ex130201 file contains the request for the index.html with the 404 error code. update (from comment below) I created c:\cake with "Everyone" "Full Control" permissions. Moved my index.html file in there and changed the mapping in IIS. Checking the page in the browser on the server again gives me the same as above except Physical Path is c:\cake\index.html update 2 The default web site (which works fine/can read from disk) runs in the "DefaultAppPool", which originally used the account "ApplicationPoolIdentity". The new website also uses this same app pool. I've tried changing the account to; NetworkService, LocalService and LocalSytem (refreshing the app pool each time).. still no joy! W3SVC2 log #Software: Microsoft Internet Information Services 7.5
#Version: 1.0
#Date: 2013-02-02 20:00:02
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
2013-02-02 20:00:02 5.9.251.167 GET /index.html - 80 - 5.9.251.167 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.17+(KHTML,+like+Gecko)+Chrome/24.0.1312.57+Safari/537.17 404 0 2 1151 The sc-win32-status: 2 means "file not found", so this is probably simply a file access permission sort of problem. If it is, why can the default web site read from the c:\inetpub\wwwroot folder successfully, but not from a sub-folder with the same permissions? Right now I'm completely stumped. | Facepalm. A new VPS box means default behaviour, i.e. "Hide extensions for known file types" was enabled. When I turned that off, I discovered my file was actually named index.html.txt . Renamed it to index.html and problem solved. | {
"source": [
"https://serverfault.com/questions/474494",
"https://serverfault.com",
"https://serverfault.com/users/9660/"
]
} |
474,882 | I have a website that needs to use Basic Authentication. I want to host it on a Windows Server 2012 box. But in IIS in the Authentication page for the web site, Basic Authentication is not available. How do I enable Basic Authentication for IIS 8 in Windows Server 2012? | Open an elevated command-prompt or PowerShell: dism /online /enable-feature /featurename:IIS-BasicAuthentication Now you will have the Basic Authentication option in the Authentication page in IIS Manager. This works in Server Core and is much quicker than using the GUI. | {
"source": [
"https://serverfault.com/questions/474882",
"https://serverfault.com",
"https://serverfault.com/users/12878/"
]
} |
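An equivalent of the dism command above, using the Server Manager PowerShell module that ships with Server 2012:
Install-WindowsFeature Web-Basic-Auth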
475,173 | Is there a command that lists all enabled Apache modules? | To list the loaded Apache modules use: apachectl -M or: apachectl -t -D DUMP_MODULES or, on RHEL, CentOS and Fedora: httpd -M For more options see man apachectl . All of this can be found with a little Google searching. | {
"source": [
"https://serverfault.com/questions/475173",
"https://serverfault.com",
"https://serverfault.com/users/24363/"
]
} |
475,468 | You add a rule like this: ufw allow 22/tcp The rule is saved, and is applied even after reboot. But it's not written anywhere in /etc/ufw . Where is it saved to? (Ubuntu, using ufw as pre-installed.) | In my Ubuntu 11 server, the firewall rules are saved in /lib/ufw/user.rules | {
"source": [
"https://serverfault.com/questions/475468",
"https://serverfault.com",
"https://serverfault.com/users/68259/"
]
} |
475,635 | ntpd listens on numerous interfaces by default; I only want it to listen on 127.0.0.1:123, since only the localhost needs to sync its time. How do I do that? I tried editing /etc/default/ntp on Debian Wheezy:
NTPD_OPTS='-4 -I 127.0.0.1'
But it still listens globally on 0.0.0.0:123. Any ideas? | Remove all -I or --interface options from /etc/default/ntp and insert the following into your /etc/ntp.conf :
interface ignore wildcard
interface listen 127.0.0.1
interface listen ::1
# NOTE: if you want to update your time using remote machines,
# add at least one remote interface address:
#interface listen 2001:db8::1
#interface listen 192.0.2.1
An excerpt from the ntpd(1) manual page about the -i option: This option also implies not opening other addresses, except
wildcard and localhost. Please consider using the configuration file
interface command, which is more versatile. See also the Debian manual page (I could not find it in the Arch Linux one) for ntp.conf(5).
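After restarting the daemon (on Debian, something like service ntp restart), a quick way to confirm which addresses ntpd is actually bound to (assuming the ss utility is available) is:
ss -lun | grep ':123'
Only the loopback addresses should remain in the output.
| {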
"source": [
"https://serverfault.com/questions/475635",
"https://serverfault.com",
"https://serverfault.com/users/91589/"
]
} |
475,642 | Being new to Linux I have followed this tutorial to set up a mail server: https://www.digitalocean.com/community/articles/how-to-install-postfix-on-centos-6 Everything is working correctly however I am sending mail from: [email protected] I want mail just being sent from [email protected], however when I change this section: myhostname = mail.example.com
mydomain = example.com to myhostname = example.com
mydomain = example.com Mail is not received. :( What is causing this ? Also, is there a way to change mail being sent from root to another prefix? Thanks chaps. | Remove all -I or --interface options from /etc/default/ntp and insert the following into your /etc/ntp.conf : interface ignore wildcard
interface listen 127.0.0.1
interface listen ::1
# NOTE: if you want to update your time using remote machines,
# add at least one remote interface address:
#interface listen 2001:db8::1
#interface listen 192.0.2.1 An excerpt from the ntpd(1) manual page about the -i option: This option also implies not opening other addresses, except
wildcard and localhost. Please consider using the configuration file
interface command, which is more versatile. See also the Debian manual page (I could not find it in Arch Linux one) of ntp.conf(5) . | {
"source": [
"https://serverfault.com/questions/475642",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
475,849 | For my new server, I want to setup a proper backup solution. I've found a great setup that will do twice-daily incremental backups via Dropbox. I plan on backing up my various databases, the webroot directory, the /etc directory/repository, and /var/log. What else do I need to know to do a proper backup, and what is the standard setup here to ensure you can quickly restore from a backup in the case of a system failure? I'm thinking of using Puppet, as it describes how the system should be. My restore procedure would look like this: Install Puppet Run my puppet config Restore my backups from Dropbox (Should I create a script to do this? Probably) This should also let me create a clone of my production server for use in dev environments, correct? Am I missing anything of importance? | We build backup systems for one purpose: To enable restores. Nobody cares about backups; they care about restores. There are three reasons one might need to restore file(s): Accidental file deletion, hardware failure, or archival/legal reasons. A "complete" backup system would enable you to restore files in all of these scenarios. For accidental file deletion, things like Dropbox and RAID fail because they simply reflect all changes made to the filesystem, and a deleted file is gone in these scenarios. Your backup system should be able to restore a file to a recent point in time fairly quickly; preferably the restore would complete within seconds to minutes. For hardware failure, you should use solutions such as RAID and other high-availability approaches when possible to ensure that your service remains up and running, as a full restore of a system can take hours or possibly days due to the necessity of reading and writing to (relatively) slow media. Finally archives, or full backups (or equivalent) of the systems at a specific point in time, can serve restores in both legal and disaster recovery scenarios. These would typically be stored off-site, in case a stray meteor turns your data center into a smoking crater... Your complete backup system should be able to support restores for any of these three types, with varying levels of service (SLA). For instance, you may decide that a deleted file may be restored with one business day granularity for the last six months and one month granularity for the last three years; and that a disk failure should be capable of being restored within four hours with no more than two business days of data loss. The backup system must be able to implement the SLA in a backup schedule. Your backup system must be fully automated . This cannot be stressed enough. If the backups aren't fully automated, they simply won't happen. Your backup system must be capable of fully automated backups, out of the box, with little or no special configuration or scripting required. You must periodically test restores. Any backup system is utterly useless if restoring from backup fails to work. I think most of us have horror stories along these lines. Your backup system must be able to restore single files or whole systems within the SLA you're implementing. You must purchase backup media on an ongoing basis. Whether you're just doing on-site tape backup or going whole hog with off-site cloud backup, make sure you have it in the budget to pay for the gigabytes (or terabytes!) of space you will need. 
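As a concrete illustration of the "fully automated" point above, even a minimal nightly cron job that copies the important paths to another host shows the shape of the idea; the paths, user and host below are placeholder assumptions, not a recommendation for a complete backup system:
# /etc/cron.d/nightly-backup (sketch only)
30 2 * * * root rsync -a /etc /var/www /var/log backup@backuphost:/backups/myserver/
A real deployment would add database dumps, versioned retention (e.g. with a snapshotting tool) and, above all, scheduled test restores.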
This has been a very brief summary of a portion of Chapter 26 of The Practice of System and Network Administration, Second Edition , which anyone who is or aspires to be a system administrator should own, read, and memorize. I've glossed over a lot of things that don't necessarily apply to your particular situation or that don't make sense in a small environment such as the one you've described. Nevertheless it should be a reasonable description of the features that your "complete" backup system should have, as well as why they're necessary. | {
"source": [
"https://serverfault.com/questions/475849",
"https://serverfault.com",
"https://serverfault.com/users/57232/"
]
} |
475,851 | Is it possible to redirect (via a Rewrite Cond?) users to another URL for help if an index.html or index.php file doesn't exist within their home directory? Thanks
Greg | We build backup systems for one purpose: To enable restores. Nobody cares about backups; they care about restores. There are three reasons one might need to restore file(s): Accidental file deletion, hardware failure, or archival/legal reasons. A "complete" backup system would enable you to restore files in all of these scenarios. For accidental file deletion, things like Dropbox and RAID fail because they simply reflect all changes made to the filesystem, and a deleted file is gone in these scenarios. Your backup system should be able to restore a file to a recent point in time fairly quickly; preferably the restore would complete within seconds to minutes. For hardware failure, you should use solutions such as RAID and other high-availability approaches when possible to ensure that your service remains up and running, as a full restore of a system can take hours or possibly days due to the necessity of reading and writing to (relatively) slow media. Finally archives, or full backups (or equivalent) of the systems at a specific point in time, can serve restores in both legal and disaster recovery scenarios. These would typically be stored off-site, in case a stray meteor turns your data center into a smoking crater... Your complete backup system should be able to support restores for any of these three types, with varying levels of service (SLA). For instance, you may decide that a deleted file may be restored with one business day granularity for the last six months and one month granularity for the last three years; and that a disk failure should be capable of being restored within four hours with no more than two business days of data loss. The backup system must be able to implement the SLA in a backup schedule. Your backup system must be fully automated . This cannot be stressed enough. If the backups aren't fully automated, they simply won't happen. Your backup system must be capable of fully automated backups, out of the box, with little or no special configuration or scripting required. You must periodically test restores. Any backup system is utterly useless if restoring from backup fails to work. I think most of us have horror stories along these lines. Your backup system must be able to restore single files or whole systems within the SLA you're implementing. You must purchase backup media on an ongoing basis. Whether you're just doing on-site tape backup or going whole hog with off-site cloud backup, make sure you have it in the budget to pay for the gigabytes (or terabytes!) of space you will need. This has been a very brief summary of a portion of Chapter 26 of The Practice of System and Network Administration, Second Edition , which anyone who is or aspires to be a system administrator should own, read, and memorize. I've glossed over a lot of things that don't necessarily apply to your particular situation or that don't make sense in a small environment such as the one you've described. Nevertheless it should be a reasonable description of the features that your "complete" backup system should have, as well as why they're necessary. | {
"source": [
"https://serverfault.com/questions/475851",
"https://serverfault.com",
"https://serverfault.com/users/157756/"
]
} |
475,925 | Connecting from a Windows 7 PC via SSH to an Ubuntu server using PuTTY , I get some screen errors, i.e. it:
"Double-draws" the selection inside Midnight Commander (MC).
Draws other characters, such as line elements, as the wrong characters (e.g. "â" instead of "|").
I connected to the same Ubuntu server with a terminal and SSH from Mac OS X and do not get this screen garbling (i.e. everything looks and works correctly). I've already tried playing with the font settings inside PuTTY, changing the font from Courier New to Consolas, but without luck. My question therefore is: how do I configure PuTTY to correctly display special characters and not double-draw/overwrite screen lines? | You almost certainly have set the wrong character set in your PuTTY settings . Verify the character set on the remote system by running the command:
locale
This should return something like:
LANG=de_DE.UTF-8
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
LC_COLLATE="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_PAPER="de_DE.UTF-8"
LC_NAME="de_DE.UTF-8"
LC_ADDRESS="de_DE.UTF-8"
LC_TELEPHONE="de_DE.UTF-8"
LC_MEASUREMENT="de_DE.UTF-8"
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL=
So check your PuTTY settings under Translation and ensure that you have UTF-8 set as the character set. You may need to tweak the line-drawing setting as well, but that is probably not necessary.
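If locale instead reports only POSIX/C values, the mismatch is on the server side; on Debian/Ubuntu you can generate and activate a UTF-8 locale (the locale name here is just an example) with:
sudo locale-gen en_US.UTF-8
sudo update-locale LANG=en_US.UTF-8
Then log out, log back in and re-run locale to confirm.
| {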
"source": [
"https://serverfault.com/questions/475925",
"https://serverfault.com",
"https://serverfault.com/users/54658/"
]
} |
476,576 | I've just finished reading over this great thread explaining the different SSL formats. Now I'm essentially looking for the opposite of How to split a PEM file . There are 4 files I want to consolidate, originally created for Apache; I'm looking at the files specified by SSLCertificateFile, SSLCertificateKeyFile, SSLCertificateChainFile and SSLCACertificateFile. What I'm mostly curious about is the order of the files in the consolidated derivative; is that important? E.g. if I were to just cat them together, in the order they appear above, into a .pem , would it be valid, or should they be ordered a specific way? FYI, I'm doing this for the sake of using these certs as a combined single .pem in SimpleSAMLphp . | The order does matter, according to RFC 4346 . Here is a quote taken directly from the RFC:
certificate_list
This is a sequence (chain) of X.509v3 certificates. The sender's
certificate must come first in the list. Each following
certificate must directly certify the one preceding it. Because
certificate validation requires that root keys be distributed
independently, the self-signed certificate that specifies the root
certificate authority may optionally be omitted from the chain,
under the assumption that the remote end must already possess it
in order to validate it in any case.
Based on this information, the server certificate should come first, followed by any intermediate certs, and finally the trusted root authority certificate (if self-signed). I could not find any information on the private key, but I think that should not matter, because a private key in PEM is easy to identify: it starts and ends with the text below, which has the keyword PRIVATE in it.
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
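To actually build the combined file, a plain concatenation in the order described above is usually all that is needed; the file names below are placeholders for whatever your SSLCertificateFile, SSLCertificateChainFile, SSLCACertificateFile and SSLCertificateKeyFile point to:
cat server.crt intermediate.crt root-ca.crt server.key > combined.pem
Some consumers expect the private key first or in a separate file, so check the documentation of the software that will read the bundle (SimpleSAMLphp in this case) if it rejects the result.
| {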
"source": [
"https://serverfault.com/questions/476576",
"https://serverfault.com",
"https://serverfault.com/users/95641/"
]
} |
476,610 | Some time ago there was a thread about exactly the same problem: Can't create symlinks in virtualbox shared folders . Now it's closed (why?). So I'm starting a new one, because I've got this issue now and cannot find a solution. Short issue description: when attempting to create/place a symlink in a shared folder, an error occurs:
root@devmv:/var/www/sandbox/zf1sandbox# ln -s /lib/ZendFramework/ZF1 ZF1
ln: creating symbolic link `ZF1': Protocol error I've already tried to activate the symlinks for my shared folder "workspace" in different ways: C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/var/www 1
C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/var/www/ 1
C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/workspace 1
C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/workspace/ 1
C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/M:\workspace 1
C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/M:\workspace\ 1 I don't get errors like C:\Windows\system32>VBoxManage setextradata devvm VBoxInternal2/SharedFoldersEnableSymlinksCreate/workspace 1
VBoxManage.exe: error: Failed to create the VirtualBox object!
VBoxManage.exe: error: Code CO_E_SERVER_EXEC_FAILURE (0x80080005) - Server execution failed (extended info not available)
VBoxManage.exe: error: Most likely, the VirtualBox COM server is not running or failed to start.
C:\Windows\system32>VBoxManage setextradata "Dev VM" VBoxInternal2/SharedFoldersEnableSymlinksCreate/workspace 1
VBoxManage.exe: error: Failed to create the VirtualBox object!
VBoxManage.exe: error: Code CO_E_SERVER_EXEC_FAILURE (0x80080005) - Server execution failed (extended info not available)
VBoxManage.exe: error: Most likely, the VirtualBox COM server is not running or failed to start.
But it is still not working. I've also installed the Oracle VM VirtualBox Extension Pack (can be downloaded here ). But it simply doesn't want to work. It would be great if someone could help. Thanks. System parameters:
Host: Windows 7 64Bit
Guest: Debian 6.0.6 64Bit
VirtualBox: 4.2.6
EDIT: Some additional information:
C:\Windows\system32>VBoxManage getextradata "Dev VM" enumerate
Key: GUI/LastCloseAction, Value: shutdown
Key: GUI/LastGuestSizeHint, Value: 720,400
Key: GUI/LastNormalWindowPosition, Value: 390,158,1424,819,max
Key: GUI/LastScaleWindowPosition, Value: 640,345,640,480,max
Key: GUI/MiniToolBarAlignment, Value: bottom
Key: GUI/SaveMountedAtRuntime, Value: yes
Key: GUI/ShowMiniToolBar, Value: yes
Key: VBoxInternal2/SharedFoldersEnableSymlinksCreate/M:\workspace, Value: 1
Key: VBoxInternal2/SharedFoldersEnableSymlinksCreate/M:\workspace\, Value: 1
Key: VBoxInternal2/SharedFoldersEnableSymlinksCreate/var/www, Value: 1
Key: VBoxInternal2/SharedFoldersEnableSymlinksCreate/var/www/, Value: 1
Key: VBoxInternal2/SharedFoldersEnableSymlinksCreate/workspace, Value: 1
Key: VBoxInternal2/SharedFoldersEnableSymlinksCreate/workspace/, Value: 1
So the config changes have been saved, but they don't work. | It works! On Windows, by default only administrators can create symlinks. When I start VirtualBox as administrator, I can create symlinks without any problems. In order to be able to create symlinks without starting VirtualBox as admin, you need to grant this permission to your user/usergroup. Here is a short how-to. The only problem is that I have not found a way to grant the create-symlink permission to admin users; I don't know whether that is possible. | {
"source": [
"https://serverfault.com/questions/476610",
"https://serverfault.com",
"https://serverfault.com/users/158057/"
]
} |
476,612 | I am trying to set up printers for users, and the print server is in a forest with a trust relationship. Users are all on Windows 7, and the print server is Server 2008 R2 Standard. DomainA contains the print server
DomainB contains the users When users or admins in DomainB attempt to add printers from the DomainA print server, they get a generic error that says "Windows cannot connect to the printer. Access is denied" I have added DomainB users to the DomainA printer security w/ print rights, still getting the same error. I've even tried creating a Domain Local group in DomainA, and added users from DomainB, and it still fails whether I'm using a standard user or a domain admin in DomainB. When adding the printer via IP, it works, but that's not running through the print server and isn't an acceptable solution in our environment. What do I need to do to get this cross-forest printing working? ADDITIONAL INFO FROM TESTING:
DomainB user is able to browse file shares on the DomainA print server, but adding printers flags the error.
DomainB user was able to add certain HP/Brother printers, but Ricoh and Canon printers fail. All the printers they were able to add were printers whose drivers are included by default in Win7. This seems to occur only when the print driver needs to be downloaded from the print server. Possible share missing or with wrong permissions? | It works! On Windows by default only administrators can create symlinks. When I start VirtualBox as administrator, I can create symlinks without any problems. In order to be able to create symlinks without starting the VB as admin, you need to set this permission for your user/usergroup. Here is a short how-to. The only problem is -- I have not found a way to permit creating of symlinks to admin-users. I don't know, whether it's possible. | {
"source": [
"https://serverfault.com/questions/476612",
"https://serverfault.com",
"https://serverfault.com/users/75882/"
]
} |
477,448 | I have a simple webserver (Debian 6.0 x86, DirectAdmin with 1 GB of memory and still 10 GB of free space, MySQL version 5.5.9), however the MySQL server keeps crashing and I need to kill all MySQL processes to be able to restart it again. /var/log/mysql-error.log output:
130210 21:04:26 InnoDB: Using Linux native AIO
130210 21:04:34 InnoDB: Initializing buffer pool, size = 128.0M
130210 21:05:42 InnoDB: Completed initialization of buffer pool
130210 21:05:48 InnoDB: Initializing buffer pool, size = 128.0M
130210 21:06:22 InnoDB: Initializing buffer pool, size = 128.0M
130210 21:06:27 mysqld_safe mysqld from pid file /usr/local/mysql/data/website.pid ended
130210 21:06:29 mysqld_safe mysqld from pid file /usr/local/mysql/data/website.pid ended
130210 21:07:22 InnoDB: Completed initialization of buffer pool
130210 21:07:51 mysqld_safe mysqld from pid file /usr/local/mysql/data/website.pid ended
130210 21:08:33 InnoDB: Completed initialization of buffer pool
130210 21:12:03 [Note] Plugin 'FEDERATED' is disabled.
130210 21:12:47 InnoDB: The InnoDB memory heap is disabled
130210 21:12:47 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
130210 21:12:47 InnoDB: Compressed tables use zlib 1.2.3
130210 21:12:47 InnoDB: Using Linux native AIO
130210 21:13:11 InnoDB: highest supported file format is Barracuda.
130210 21:13:23 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
130210 21:14:05 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
130210 21:17:53 InnoDB: Unable to open the first data file
InnoDB: Error in opening ./ibdata1
130210 21:17:53 InnoDB: Operating system error number 11 in a file operation. I have found a topic on the mySQL website here however there's no solution for it. Any ideas anyone? | another approach from one comment in the same blog: this helped me: lsof -i:3306 Then kill it (the process number) kill -9 PROCESS e.g. kill -9 13498 Then try to restart MySQL again. via http://www.webhostingtalk.com/archive/index.php/t-1070293.html | {
"source": [
"https://serverfault.com/questions/477448",
"https://serverfault.com",
"https://serverfault.com/users/94574/"
]
} |
477,488 | I am working with some Amazon EC2 servers that are up and running, and I need to SSH into the servers. I don't have any keys that were generated when the servers were first set up (someone else did it long before I got here). Can I still get into the servers without the key files? FWIW I've tried a lot of things to SSH into the box so far, including generating new key pairs in the EC2 dashboard, and nothing seems to be working. This Amazon AWS support post and this answer seem to indicate that I'm out of luck unless I want to make an AMI of my current server and then use it to instantiate a whole new EC2 server instance (just to get the .pem file generated at that time). Is that really the only way I can get into the box at this point?! | In short: Yes, you can, but not without some work. You'll need to do the following. (For these steps, assume that the machine you're having trouble connecting to is called server-01.) First, before starting these steps, take a snapshot of your server. Then:
1. Start a new, temporary instance. Call it server-02.
2. Stop server-01. Don't terminate it, just stop it.
3. Un-attach the root ( / ) EBS volume from server-01, and attach it to server-02 as, say, /dev/sdb .
4. Sign into server-02, and run: $ mkdir /mnt/temp && mount /dev/sdb /mnt/temp . This will mount server-01's root partition within the (temporary) server-02.
5. Now you should be able to: $ vi /home/<user>/.ssh/authorized_keys and copy/paste in your public key. When you've done that, save and close the file.
6. Now run: $ cd / && umount /mnt/temp to unmount server-01's root partition from server-02.
7. Now just un-attach that volume from server-02, attach it back to server-01, and then start server-01. When it starts up, you should be able to ssh in again.
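If you prefer to script the volume shuffle instead of clicking through the console, the equivalent AWS CLI calls look roughly like this (the instance and volume IDs are placeholders you would replace with your own):
aws ec2 stop-instances --instance-ids i-aaaaaaaa
aws ec2 detach-volume --volume-id vol-bbbbbbbb
aws ec2 attach-volume --volume-id vol-bbbbbbbb --instance-id i-cccccccc --device /dev/sdb
Afterwards, detach the volume again, re-attach it to server-01 and bring it back up with aws ec2 start-instances .
| {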
"source": [
"https://serverfault.com/questions/477488",
"https://serverfault.com",
"https://serverfault.com/users/90246/"
]
} |
477,498 | In my web-server configuration troubleshooting I went through a process of uninstalling apache2 and reinstalling it. I accidentally removed the file /etc/init.d/apache2 which is responsible for starting and stopping the apache service. I've tried reinstalling the apache2.2-common package which is supposed to place the file, but it is still not present. How can I do a complete reinstall of Apache? The standard apt-get remove and apt-get install doesn't give me that file back. | In short: Yes, you can, but not without some work. You'll need to do the following: (For these steps, assume that the machine you're having trouble connecting to is called server-01.) First, before starting these steps, take a snapshot of your server. Start a new, temporary instance. Call it server-02. Stop server-01. Don't terminate it, just stop it. Un-attach the root ( / ) EBS volume from server-01, and attach it to server-02 as, say /dev/sdb . Sign into server-02, and run: $ mkdir /mnt/temp && mount /dev/sdb /mnt/temp . This will mount server-01's root partition within the (temporary) server-02. Now you should be able to: $ vi /home/<user>/.ssh/authorized_keys and copy/paste in your public key. When you've done that, save and close the file. Now run: $ cd / && umount /mnt/temp to umount server-01's root partition from server-02. Now, just un-attach that volume from server-02, attach it back to server-01, and then start server-01. When it starts up, you should be able to ssh in again. | {
"source": [
"https://serverfault.com/questions/477498",
"https://serverfault.com",
"https://serverfault.com/users/85427/"
]
} |
477,503 | I have an array which gets filled with different error messages as my script runs. I need a way to check whether it is empty or not at the end of the script, and take a specific action if it is. I have already tried treating it like a normal VAR and using -z to check it, but that does not seem to work. Is there a way to check if an array is empty or not in Bash? | Supposing your array is $errors , just check to see if the count of elements is zero.
if [ ${#errors[@]} -eq 0 ]; then
echo "No errors, hooray"
else
echo "Oops, something went wrong..."
fi
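For completeness, this is roughly how such an array would be filled earlier in the script; the checks and messages are made-up examples, not part of the original question:
errors=()
[ -f /etc/myapp.conf ] || errors+=("config file missing")
some_command || errors+=("some_command failed")
Each errors+=("...") call appends one element, so the count check above reports exactly how many problems were recorded.
| {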
"source": [
"https://serverfault.com/questions/477503",
"https://serverfault.com",
"https://serverfault.com/users/157322/"
]
} |
477,519 | I have been using ipfilter in the past. Here is what I used there:
map-block tun0 192.168.1.0/24 -> 20.20.20.0/24
Some remote applications require that multiple connections all come from the same IP address. So I want to tell iptables to use a static mapping between internal and external IP addresses, so that a given host always uses the same external IP address (i.e. use some magic to choose a port). How do I do this? | Supposing your array is $errors , just check to see if the count of elements is zero. if [ ${#errors[@]} -eq 0 ]; then
echo "No errors, hooray"
else
echo "Oops, something went wrong..."
fi | {
"source": [
"https://serverfault.com/questions/477519",
"https://serverfault.com",
"https://serverfault.com/users/38876/"
]
} |