source_id | question | response | metadata
---|---|---|---|
556,435 | I am not familiar with Active Directory and have very limited knowledge of firewalls. I am trying to understand how 'trusts' work. Say there is a situation like the following: There are 2 separate networks. An external trust is to be given from a domain in network1 to another domain in network2. Wouldn't the users in the domain in network1 need to go past the firewall in network2 first? How does the setup work? | The main difference is the route for 0.0.0.0/0 in the associated route table. A private subnet sets that route to a NAT gateway/instance. Private subnet instances only need a private IP, and internet traffic is routed through the NAT in the public subnet. You could also have no route to 0.0.0.0/0 to make it a truly private subnet with no internet access in or out. A public subnet routes 0.0.0.0/0 through an Internet Gateway (IGW). Instances in a public subnet require public IPs to talk to the internet. The warning appears even for private subnets, but the instance is only accessible inside your VPC. | {
"source": [
"https://serverfault.com/questions/556435",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
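The response above describes the public/private subnet split entirely in terms of where the 0.0.0.0/0 route points. A minimal AWS CLI sketch of that difference, with hypothetical route table, internet gateway, and NAT gateway IDs:

```
# Public subnet: default route points at an Internet Gateway (hypothetical IDs)
aws ec2 create-route --route-table-id rtb-1111aaaa \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-2222bbbb

# Private subnet: default route points at a NAT gateway instead
aws ec2 create-route --route-table-id rtb-3333cccc \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-4444dddd

# Inspect what an existing route table currently does with 0.0.0.0/0
aws ec2 describe-route-tables --route-table-ids rtb-1111aaaa \
    --query 'RouteTables[].Routes[]'
```

Omitting the 0.0.0.0/0 route entirely gives the fully isolated subnet the response mentions.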
556,621 | We've been running a couple websites off Amazons AWS infrastructure for about two years now and as of about two days ago the webserver started to go down once or twice a day with the only error I can find being: HTTP/1.1 503 Service Unavailable: Back-end server is at capacity No alarms (CPU/Disk IO/DB Conn) are being triggered by CloudWatch. I tried going to the site via the elastic IP to skip the ELB and got this: HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying. I don't see anything out of the ordinary in the apache logs and verified that they were being properly rotated. I have no problems accessing the machine when it's "down" via SSH and looking at the process list I see 151 apache2 processes that appear normal to me. Restarting apache temporarily fixes the problem. This machine operates as just a webserver behind an ELB. Any suggestions would be greatly appreciated. CPU Utilization
Average: 7.45%, Minimum: 0.00%, Maximum: 25.82% Memory Utilization
Average: 11.04%, Minimum: 8.76%, Maximum: 13.84% Swap Utilization
Average: N/A, Minimum: N/A, Maximum: N/A Disk Space Utilization for /dev/xvda1 mounted on /
Average: 62.18%, Minimum: 53.39%, Maximum: 65.49% Let me clarify I think the issue is with the individual EC2 instance and not the ELB I just didn't want to rule that out even though I was unable to reach the elastic IP. I suspect ELB is just returning the results of hitting the actual EC2 instance. Update: 2014-08-26
I should have updated this sooner but the "fix" was to take a snapshot of the "bad" instance and start the resulting AMI. It hasn't gone down since then. I did look at the health check when I was still experiencing issues and could get to the health check page ( curl http://localhost/page.html ) even when I was getting capacity issues from the load balancer. I'm not convinced it was a health check issue but since no one, including Amazon, can provide a better answer I'm marking it as the answer. Thank you. Update: 2015-05-06
I thought I'd come back here and say that part of the issue I now firmly believe was the health check settings. I don't want to rule out their being an issue with the AMI because it definitely got better after the replacement AMI was launched but I found out that our health checks were different for each load balancer and that the one that was having the most trouble had a really aggressive unhealthy threshold and response timeout. Our traffic tends to spike unpredictably and I think between the aggressive health check settings and the spikes in traffic it was a perfect storm. In diagnosing the issue I was focused on the fact that I could reach the health check endpoint at the moment but it is possible the health check had failed because of latency and then we had a high healthy threshold (for that particular ELB) so it would take while to see the instance as being healthy again. | You will get a "Back-end server is at capacity" when the ELB load balancer performs its health checks and receives a "page not found" (or other simple error) due to a mis-configuration (typically with the NameVirtual host). Try grepping the log files folder using the "ELB-HealthChecker" user agent. e.g. grep ELB-HealthChecker /var/log/httpd/* This will typically give you a 4x or 5x error which is easily fixed. e.g. Flooding, MaxClients etc is giving the problem way too much credit. FYI Amazon: Why not show the returned response from request? Even a status code would help. | {
"source": [
"https://serverfault.com/questions/556621",
"https://serverfault.com",
"https://serverfault.com/users/135584/"
]
} |
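A short sketch of the diagnostic path the response suggests: grep for the health checker's user agent, then compare what you find against the ELB's configured health check. The load balancer name below is hypothetical, and the aws-cli calls assume the classic ELB API:

```
# What is the ELB health checker actually requesting, and what status does it get back?
grep ELB-HealthChecker /var/log/httpd/* | tail

# Review the health check target, interval, timeout and thresholds on the ELB itself
aws elb describe-load-balancers --load-balancer-names my-elb \
    --query 'LoadBalancerDescriptions[].HealthCheck'

# And which instances the ELB currently marks as out of service
aws elb describe-instance-health --load-balancer-name my-elb
```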
556,625 | We are experiencing an issue with our SonicWall NSA 2400 Firewall. We have a secondary gateway over IPSec setup in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then sometimes will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to primary when the connection is restored. Policy Type: Site to Site Auth Method: IKE using Preshared Secret IKE Phase 1 proposal: Exchange: Main Mode DH Group: Group 1 Encrypt: AES-256 Auth: SHA1 Lifetime: 3600 (seconds) Phase 2 proposal: Protocol: ESP Encrypt: AES-256 Auth: SHA1 Lifetime: 900 (seconds) Keep Alive is enabled, and Preempt Secondary Gateway is enabled at 120 second interval. | You will get a "Back-end server is at capacity" when the ELB load balancer performs its health checks and receives a "page not found" (or other simple error) due to a mis-configuration (typically with the NameVirtual host). Try grepping the log files folder using the "ELB-HealthChecker" user agent. e.g. grep ELB-HealthChecker /var/log/httpd/* This will typically give you a 4x or 5x error which is easily fixed. e.g. Flooding, MaxClients etc is giving the problem way too much credit. FYI Amazon: Why not show the returned response from request? Even a status code would help. | {
"source": [
"https://serverfault.com/questions/556625",
"https://serverfault.com",
"https://serverfault.com/users/200261/"
]
} |
557,233 | Hopefully you guys can help me with a proxy problem I have. What I already have: I have set up an Apache HTTP reverse proxy to proxy requests from *.proxy.domain to *.intern.domain. The Apache server is the only way to reach my internal webapplications from an external network. Example: app.proxy.domain -> app.intern.domain
mail.proxy.domain -> mail.intern.domain This is all working great, but I have the following problem. Problem I want to proxy the following requests: app.proxy.domain -> app.internal.domain
app-dev.proxy.domain -> app-dev.internal.domain This is no problem, but unfortunately the app-dev server runs an exact copy of the app server's webapplication, and this webapplication only responds to its hostname (app.intern.domain). So what I need to do is proxy the following: app.proxy.domain -> app.internal.domain (10.0.1.1)
app-dev.proxy.domain -> app.internal.domain (10.0.1.2) I can do the second thing, by adding "10.0.1.2 app.internal.domain" in /etc/hosts, but that also means that app.proxy.domain will land on the dev-server. I am searching for an option, to set the /etc/hosts entry only inside the vhost configuration file for app-dev.proxy.domain, so that every other vhost config will just use DNS for app.intern.domain. Thoughts... Is there a way to tell apache config, to ProxyPass / http://10.0.1.2/ but send app.intern.domain as hostname? Editing the dev-servers webapplication to listen to app-dev is no option, since it is supposed to be an exact copy (not my decision...) Thanks! | Possibly you could use mod_headers in conjunction with mod_proxy. I haven't tested it though. So for your app-dev vhost you could have: RequestHeader set Host "app.internal.domain" and then you would add: ProxyPreserveHost On | {
"source": [
"https://serverfault.com/questions/557233",
"https://serverfault.com",
"https://serverfault.com/users/117583/"
]
} |
558,283 | I installed apache2 on ubuntu 13.10.
If I try to restart it using sudo /etc/init.d/apache2 restart I get this message: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message So I read that I should edit my httpd.conf file. But, since I can't find it in /etc/apache2/ folder, I tried to locate it using this command: /usr/sbin/apache2 -V But the output I get is this: [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
[Fri Nov 29 17:35:43.942560 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_PID_FILE} is not defined
[Fri Nov 29 17:35:43.942602 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_USER} is not defined
[Fri Nov 29 17:35:43.942613 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
[Fri Nov 29 17:35:43.942627 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
[Fri Nov 29 17:35:43.947913 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
[Fri Nov 29 17:35:43.948051 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
[Fri Nov 29 17:35:43.948075 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf:
Invalid Mutex directory in argument file:${APACHE_LOCK_DIR} Line 74 of /etc/apache2/apache2.conf is this: Mutex file:${APACHE_LOCK_DIR} default I gave a look at my /etc/apache2/envvar file, but I don't know what to do with it. What should I do? | [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined This message is displayed because you directly executed the apache2 binary.
In Ubuntu/Debian the Apache config relies on the envvars file, which is only activated if you start Apache with the init script or apachectl. Your original problem is that you have no proper hostname (FQDN) for your machine. If you can't change it, set the ServerName directive in /etc/apache2/apache2.conf to localhost or your preferred FQDN. | {
"source": [
"https://serverfault.com/questions/558283",
"https://serverfault.com",
"https://serverfault.com/users/200993/"
]
} |
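To make the point about the envvars file concrete, here is a sketch of how to query and fix the config on Debian/Ubuntu without hitting the undefined-variable warnings (the ServerName value is just an example):

```
# apache2ctl sources /etc/apache2/envvars before invoking the binary
sudo apache2ctl -V

# Or load the environment yourself if you must run the binary directly
. /etc/apache2/envvars && /usr/sbin/apache2 -V

# Suppress AH00558 by defining ServerName globally, then test and restart
echo "ServerName localhost" | sudo tee -a /etc/apache2/apache2.conf
sudo apache2ctl configtest
sudo service apache2 restart
```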
558,354 | Using the time C function (number of seconds since the Epoch) shows that the time on my current CentOS 6 server is about 7 hours behind compared to another server with the correct time. How can I correct the system clock? I don't think it's drift because I just setup this server a few weeks ago, but it could be. I setup ntpd but it's not helping, maybe because the time difference is too much. | The simple answer is "set the date manually", which you need to do, but to prevent this occurring again, there is more that you should do. Ensure that the system timezone configuration is in a sane state. Unless there is a very strong reason not to do so (such as software compatibility issues), server clocks should always run on UTC time. If you decide not to use UTC, choose a timezone by running tzselect . A timezone will be printed on screen which you will use below. An example would be Europe/Moscow . Otherwise use UTC as the timezone below. Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
Europe/Moscow Set the system clock to your desired timezone by the following steps: Replace the contents of /etc/sysconfig/clock with the following: ZONE="<timezone>"
UTC=true For example: ZONE="Europe/Moscow"
UTC=true Note that UTC=true should be set here, even if you don't use UTC as your timezone. This refers to the server's hardware clock, which should always be UTC regardless of your chosen system timezone. Replace the /etc/localtime file with a link to the selected timezone: # ln -snf /usr/share/zoneinfo/<timezone> /etc/localtime For example: # ln -snf /usr/share/zoneinfo/Europe/Moscow /etc/localtime
# ln -snf /usr/share/zoneinfo/UTC /etc/localtime Set the clock manually to the current time. Sync the system clock to the current time: # ntpd -g -q Check that the time appears correct: # date Sync the server's hardware clock to the system clock: # hwclock -wu Restart the computer. Restarting is necessary because all system services must be restarted to pick up the corrected time and timezone, and the server's hardware clock needs to be tested (e.g. for a faulty battery). After restart, check to see that the system shows the correct time and that ntpd is running properly. | {
"source": [
"https://serverfault.com/questions/558354",
"https://serverfault.com",
"https://serverfault.com/users/145377/"
]
} |
558,731 | I have installed Webuzo on my unmanaged VPS. I am not able to install any applications, since it is giving me errors such "Unable to connect to MySQL server". But through terminal, the MySQL status is running. Can anybody help how to troubleshoot? | Many ways to do it - in your terminal: sudo service mysql status or ps aux | grep mysql What you're facing is probably authentication failure or database misspell. Did you try logging in with same creds via Terminal? mysql -u <username> -p <database-name> Hope it helps :) | {
"source": [
"https://serverfault.com/questions/558731",
"https://serverfault.com",
"https://serverfault.com/users/201221/"
]
} |
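Building on the response, a common cause of "MySQL is running but the app cannot connect" is a socket-versus-TCP or credentials mismatch; a quick sketch for checking both (the user and database names are placeholders for whatever the application is configured with):

```
# Is mysqld listening on TCP, a unix socket, or both?
sudo netstat -tlnp | grep mysqld
mysqladmin -u root -p status

# Test the exact credentials the application uses, over TCP and over the socket
mysql -h 127.0.0.1 -P 3306 -u appuser -p appdb -e 'SELECT 1;'
mysql -u appuser -p appdb -e 'SELECT 1;'
```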
558,739 | 2 weeks ago one of our web servers' CPU usage started bottlenecking a lot at 100% for long periods of time when an application runs and requests "sqlserver.exe". When this app runs and does its integration (~5 mins) the CPU shoots up and locks the webpage. Background: Qquest Software Clock Server Version 1.2.20.0 The off-the-shelf, unsupported version of the app's database is on the same partition as the OS and SQL Server (DB size 17.19 MB and 0.78 MB
Free) SQL Server version 8.0 MS Server 2003 SP 2 ( 51.01 GB Capacity, 3.34 GB Free) 1536 MB in memory 3.00 GHz Intel Xeon X5365 What I've done so far: Just added an extra processor to the server (it now has 2 CPUs). Just added extra memory (it now has 192.84 MB). And yes, did a reboot. Could so little free space be my problem, without shutting down the server and moving the app to a new home? Seems like the problem stopped after the upgrade, but throwing hardware at the problem is not the answer for bad system architecture. So I guess the new question is now: how to isolate problem applications from using all of your system resources? | Many ways to do it - in your terminal: sudo service mysql status or ps aux | grep mysql What you're facing is probably authentication failure or database misspell. Did you try logging in with same creds via Terminal? mysql -u <username> -p <database-name> Hope it helps :) | {
"source": [
"https://serverfault.com/questions/558739",
"https://serverfault.com",
"https://serverfault.com/users/125413/"
]
} |
558,936 | I keep getting answers like: yum list installed | grep bind or rpm -qa | grep bind But that is not accurate as I'm getting a list of few other bind packages like these: bind-utils-9.8.2-0.17.rc1.el6_4.5.x86_64
rpcbind-0.2.0-11.el6.x86_64
bind-libs-9.8.2-0.17.rc1.el6_4.5.x86_64
samba-winbind-3.6.9-151.el6.x86_64
samba-winbind-clients-3.6.9-151.el6.x86_64
ypbind-1.20.4-30.el6.x86_64 That is not what I wanted. Instead I want to accurately check whether the core bind package has been installed. E.g. bind.x86_64 32:9.8.2-0.17.rc1.el6_4.6 I was hoping for something like: yum check installed bind But hopefully someone could shed some light. | Have you tried this? $ yum list installed bind | {
"source": [
"https://serverfault.com/questions/558936",
"https://serverfault.com",
"https://serverfault.com/users/111014/"
]
} |
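For use in scripts, rpm's exit status gives a clean installed/not-installed signal for the exact package name, alongside the yum command in the answer:

```
# Non-zero exit status when the package is absent; no pattern matching involved
rpm -q bind && echo "bind is installed" || echo "bind is not installed"

# The yum equivalent, restricted to the exact package name
yum list installed bind
```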
559,053 | I only want to exec following command when file (/usr/local/bin/papply) does not exist. not sure what to put there. exec { 'git add url':
command =>'git remote add origin https://github.com/testing/puppet.git',
require => Exec['git init'],
cwd => '/home/vagrant/django',
user => 'vagrant',
onlyif => "not sure what to put here"
} | Have you tried this? onlyif => "test ! -f /usr/local/bin/papply" Not sure if Puppet can use the '!' character Perhaps a better alternaltive: creates => '/usr/local/bin/papply' even if i don't like the fact that the command doesn't really creates the file | {
"source": [
"https://serverfault.com/questions/559053",
"https://serverfault.com",
"https://serverfault.com/users/61752/"
]
} |
559,065 | I'm having trouble coming to a conclusion on my answer. I'm in charge of the development of a potentially large site for Australia. The task now is choosing a server. We will need a powerful server to cater for the system being built with all its features. Personally I've had better experience with hosts outside of Australia. Also, the cost of services in Australia is considerably higher than in other parts of the world. Hence my question is: does the distance from a server to a client machine really matter when serving a website? From what I have read it does not make a huge impact. Considering that we would need 24-hour support, we could host anywhere. Also, by sourcing externally from Australia we can get more for our money and hence afford a server of increased power. The site will run on a .com.au so it will need to point to that domain.
Do we need to host in Australia? | Yes, it does matter. We run a .com.au SaaS application, and latency is quite important. It is physically impossible to get information from the United States to Australia in under 200ms, but we have a typical latency of 20-50ms from our datacenter in North Sydney to most of the east coast on Australia. Yes, it's expensive to lease servers and datacenter space in Australia - but it's also worth it. I strongly suggest building these costs into your business plan. Even if you start small with AWS's new Sydney datacenter, and then scale out to your own co-located hardware later on, your heaviest customers will thank you. (actually, no, they won't thank you, they will bitch and moan about everything, having no idea that you've cut latency by 150ms, but it would be worse if your server was elsewhere). As a caveat though, it does matter what you're doing. If this is a blog, or even something like Server Fault, it's not a big deal. We're used to the internet being slow here anyway (have you used the internet in the US? Page loads are not even in the same ballpark as here). So if you're doing SaaS or something with a lot of synchronous calls, or sending/receiving lots of small pieces of data (like polling for status updates) then it's a fairly big deal. And if you're running a terminal server or something in real-time then it's a huge deal. But if you're mostly running non-realtime things, then it might not be such a big deal for you. Best thing to do would be to try it - set it up on two servers, one here and one in the US, and give the two sites to the same person. Don't tell them anything about the change in location, and ask them to tell you if one feels more responsive than the other. Repeat that a few times and you'll have your conclusion. | {
"source": [
"https://serverfault.com/questions/559065",
"https://serverfault.com",
"https://serverfault.com/users/121853/"
]
} |
559,200 | Recently, My SSH log summaries for my Ubuntu 12.04 servers in Logwatch have started showing entries for "11: Normal Shutdown, Thank you for playing [preauth]" along with the "11: Bye Bye [preauth]" and "11: disconnected by user" messages they had been showing previously. I have not seen this message in my logs before the past few weeks, nor have I seen it on my older servers which are stuck on Ubuntu 10.04. I have googled this message and can't find any clear explanations there either. The IPs attempting to login and receiving this message are random hack attempts, and judging from the preauth I assume (hope) they are not successful, but I would like to know exactly what this message means and how it differs from others to be sure. EDIT for additional information: My servers have password authentication and root authentication both disabled | When the ssh client does a "normal" connection shutdown, it sends a packet with a message in it. When the ssh daemon gets such a packet when it's not expecting it -- in this case, before the user managed to authenticate -- it logs the message. (Older versions of OpenSSH did not do this.) So your surmise is exactly correct: it's a side effect of a brute-force ssh password-guessing attack. You should probably be running something like fail2ban or sshguard to block these in iptables; even if you think everything is correctly configured to disallow passwords, it's well to have a second layer of defense. | {
"source": [
"https://serverfault.com/questions/559200",
"https://serverfault.com",
"https://serverfault.com/users/171743/"
]
} |
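To see how widespread the password guessing is, the preauth lines can be summarized by source address. A small sketch assuming the Ubuntu default log location, plus a check that fail2ban is active if you take the answer's advice and install it:

```
# Count the source addresses behind the preauth disconnect messages
sudo grep 'preauth' /var/log/auth.log \
    | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
    | sort | uniq -c | sort -rn | head

# After installing fail2ban, confirm its jails are running
sudo fail2ban-client status
```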
559,211 | Our software stack requires some specific versions of RPM packages. Unfortunately some of these packages become deprecated over time and get removed from their repos since their maintainers don't keep archives (EPEL, Percona, ...). It is a problem in configuration management. We want to make sure we provision a new machine with the same software the old ones have (we use Puppet). I guess the standard solution is hosting these packages in a private YUM repo we set up for our own. This is beneficial for packages we need to compile from source, too. My question is: do you know of any "proxy tool" to a Yum repo server so that every agent downloads packages from there and the repo server downloads packages from the external, original sources then caches them locally? (In case they disappear from the original repo) An analogy from Java world would be Archiva, which is a Maven repo server, but also can be used to proxy requests to public repos and cache them locally. OS: Centos 6.4 Thank you | When the ssh client does a "normal" connection shutdown, it sends a packet with a message in it. When the ssh daemon gets such a packet when it's not expecting it -- in this case, before the user managed to authenticate -- it logs the message. (Older versions of OpenSSH did not do this.) So your surmise is exactly correct: it's a side effect of a brute-force ssh password-guessing attack. You should probably be running something like fail2ban or sshguard to block these in iptables; even if you think everything is correctly configured to disallow passwords, it's well to have a second layer of defense. | {
"source": [
"https://serverfault.com/questions/559211",
"https://serverfault.com",
"https://serverfault.com/users/63524/"
]
} |
559,288 | Windows Server 2008 R2 (fully patched) I'm trying to run a scheduled task to move a specific type of files from C:\Windows\Temp to E:\Foo_blah_blah_blah_blah\Foo2 and for some reason am getting the following error: Task Scheduler failed to start instance "{fe0f148a-cece-44a0-a4d1-914aaf21daa8}" of "\Move Temp Files" task for user "FOOBOX\Administrator". Additional Data: Error Value: 2147942402 Any idea why this is happening? Additional details: The task is configured to run as an account that has authority to move the file. The task is configured to run whether user is logged on or not. It fails for both scenarios - same errors. The task is configured to run for the local OS (Windows Server 2008) The command is broken up into two parts. Program/script: move Add Arguments: C:\Windows\Temp\*.foo E:\Foo_blah_blah_blah_blah\Foo2\ If I run this same command move C:\Windows\Temp\*.foo E:\Foo_blah_blah_blah_blah\Foo2\ from the windows command prompt, it works fine. What am I missing? | As Ryan Ries pointed out, 2147942402 translates to "File not Found" - which is a very appropriate response. Try and press Win+R, put in "move" and press enter - that's the interactive equivalent of what your task is failing to do. The reason is that MOVE is not a program, but a native command in cmd . It should be: Program: "cmd.exe" Arguments: "/c move C:\Windows\Temp\*.foo E:\Foo_blah_blah_blah_blah\Foo2\" | {
"source": [
"https://serverfault.com/questions/559288",
"https://serverfault.com",
"https://serverfault.com/users/21875/"
]
} |
559,885 | Is there a way to temporarily ignore my ~/.ssh/known_hosts file? mbp:~ alexus$ ssh 10.52.11.171
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx.
Please contact your system administrator.
Add correct host key in /Users/alexus/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/alexus/.ssh/known_hosts:155
RSA host key for 10.52.11.171 has changed and you have requested strict checking.
Host key verification failed.
mbp:~ alexus$ NOTE: from a few answers/comments I realize that my question is a bit misleading; in short, this is expected behavior, so it's normal, and in my case there is a valid reason why I want to ignore it. | You can use ssh -o StrictHostKeyChecking=no to turn off checking known_hosts momentarily. But I'd advise against this. You should really check why the host key has changed. Another option is to add a specific entry to your ~/.ssh/config for the host in question. This might be a valid approach if you have a certain host which generates new host keys every time it reboots and it gets rebooted for a valid reason several times a day. Host <your problematic host>
StrictHostKeyChecking no | {
"source": [
"https://serverfault.com/questions/559885",
"https://serverfault.com",
"https://serverfault.com/users/10683/"
]
} |
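Two related one-liners that complement the answer: a genuinely one-off connection that neither checks nor records the key, and the cleaner fix of removing the stale entry once you have verified the key change was legitimate:

```
# Ignore and do not save the host key for this single connection
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null 10.52.11.171

# If the key change is expected, drop the old entry instead of bypassing the check
ssh-keygen -R 10.52.11.171
```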
560,081 | What do people use when Telnet is not installed to check a port is open and reachable? E.g. I used to use the technique of telnet <destination> and know it was there, even if telnet could not interact with the system on the other end. With Windows 2008 telnet is not installed so I've been a bit lost. So what can I use instead. And something if its not there in Linux or Solaris, too please. I am a consultant who works on different sites. For a number of reasons (access rights, change control times, if I install it someone uses it next year we have some liability, etc) I cannot install on someone else's server. But a USB or other self contained, non-installed tool would be wonderful ... | Use Powershell like a boss Basic code $ipaddress = "4.2.2.1"
$port = 53
$connection = New-Object System.Net.Sockets.TcpClient($ipaddress, $port)
if ($connection.Connected) {
Write-Host "Success"
}
else {
Write-Host "Failed"
} One Liner PS C:\> test-netconnection -ComputerName 4.2.2.1 -Port 53 Turn it into a cmdlet [CmdletBinding()]
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$ip,
[Parameter(Mandatory=$True,Position=2)]
[int]$port
)
$connection = New-Object System.Net.Sockets.TcpClient($ip, $port)
if ($connection.Connected) {
Return "Connection Success"
}
else {
Return "Connection Failed"
} Save as a script and use all the time Then you use the command in your powershell or cmd prompt like so: PS C:\> telnet.ps1 -ip 8.8.8.8 -port 53 or PS C:\> telnet.ps1 8.8.8.8 53 | {
"source": [
"https://serverfault.com/questions/560081",
"https://serverfault.com",
"https://serverfault.com/users/2589/"
]
} |
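The question also asks for something that works on Linux and Solaris without installing anything; a sketch using bash's built-in /dev/tcp redirection, with netcat as an alternative where it happens to be present:

```
# Pure bash: open a TCP connection via the /dev/tcp pseudo-device
# (timeout is GNU coreutils; drop it if the box does not have it, at the cost of a longer hang)
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/8.8.8.8/53' \
    && echo "port open" || echo "port closed or filtered"

# If netcat is already installed, -z scans without sending any data
nc -zv 8.8.8.8 53
```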
560,089 | I'm trying to copy some files to a folder where there are already older versions of those files with the robocopy tool. And in order to differentiate versions, I'd like to set the timestamp to "now" in the output folder when the file is copied over. Basically it would be like copying and "touch"-ing the file. However, as easy as it looks, it seems that there is no option to do that. I understood that when using the D (data) flag on /COPY , the T (timestamp) flag is automatically added, so the source timestamp is kept. Is there something I missed in the options? Or is there any other alternative? Thanks for any help. Julien. | Use Powershell like a boss Basic code $ipaddress = "4.2.2.1"
$port = 53
$connection = New-Object System.Net.Sockets.TcpClient($ipaddress, $port)
if ($connection.Connected) {
Write-Host "Success"
}
else {
Write-Host "Failed"
} One Liner PS C:\> test-netconnection -ComputerName 4.2.2.1 -Port 53 Turn it into a cmdlet [CmdletBinding()]
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$ip,
[Parameter(Mandatory=$True,Position=2)]
[int]$port
)
$connection = New-Object System.Net.Sockets.TcpClient($ip, $port)
if ($connection.Connected) {
Return "Connection Success"
}
else {
Return "Connection Failed"
} Save as a script and use all the time Then you use the command in your powershell or cmd prompt like so: PS C:\> telnet.ps1 -ip 8.8.8.8 -port 53 or PS C:\> telnet.ps1 8.8.8.8 53 | {
"source": [
"https://serverfault.com/questions/560089",
"https://serverfault.com",
"https://serverfault.com/users/201909/"
]
} |
560,106 | I would like to use ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host ansible -i ansible_hosts host1 --sudo -K # + commands ... My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K , I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting: host1 | ...
host2 | FAILED => Incorrect sudo password
host3 | FAILED => Incorrect sudo password
host4 | FAILED => Incorrect sudo password
host5 | FAILED => Incorrect sudo password Research so far: a StackOverflow question with one incorrect answer ("use -K ") and one response by the author saying "Found out I needed passwordless sudo" the Ansible docs , which say "Use of passwordless sudo makes things easier to automate, but it’s not required ." (emphasis mine) this security StackExchange question which takes it as read that NOPASSWD is required article "Scalable and Understandable Provisioning..." which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password" article "Basic Ansible Playbooks" , which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don’t" My thoughts exactly, but then how to extend beyond a single server? ansible issue #1227 , "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time." So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers , reusing password across hosts or enabling root SSH login all seem rather drastic reductions in security. | You've certainly done your research... From all of my experience with ansible what you're looking to accomplish, isn't supported. As you mentioned, ansible states that it does not require passwordless sudo, and you are correct, it does not. But I have yet to see any method of using multiple sudo passwords within ansible, without of course running multiple configs. So, I can't offer the exact solution you are looking for, but you did ask... "So... how are people using Ansible in situations like these? Setting
NOPASSWD in /etc/sudoers, reusing password across hosts or enabling
root SSH login all seem rather drastic reductions in security." I can give you one view on that. My use case is 1k nodes in multiple data centers supporting a global SaaS firm in which I have to design/implement some insanely tight security controls due to the nature of our business. Security is always balancing act, more usability less security, this process is no different if you are running 10 servers or 1,000 or 100,000. You are absolutely correct not to use root logins either via password or ssh keys. In fact, root login should be disabled entirely if the servers have a network cable plugged into them. Lets talk about password reuse, in a large enterprise, is it reasonable to ask sysadmins to have different passwords on each node? for a couple nodes, perhaps, but my admins/engineers would mutiny if they had to have different passwords on 1000 nodes. Implementing that would be near impossible as well, each user would have to store there own passwords somewhere, hopefully a keypass, not a spreadsheet. And every time you put a password in a location where it can be pulled out in plain text, you have greatly decreased your security. I would much rather them know, by heart, one or two really strong passwords than have to consult a keypass file every time they needed to log into or invoke sudo on a machine. So password resuse and standardization is something that is completely acceptable and standard even in a secure environment. Otherwise ldap, keystone, and other directory services wouldn't need to exist. When we move to automated users, ssh keys work great to get you in, but you still need to get through sudo. Your choices are a standardized password for the automated user (which is acceptable in many cases) or to enable NOPASSWD as you've pointed out. Most automated users only execute a few commands, so it's quite possible and certainly desirable to enable NOPASSWD, but only for pre-approved commands. I'd suggest using your configuration management (ansible in this case) to manage your sudoers file so that you can easily update the password-less commands list. Now, there are some steps you can take once you start scaling to further isolate risk. While we have 1000 or so nodes, not all of them are 'production' servers, some are test environments, etc. Not all admins can access production servers, those than can though use their same SSO user/pass|key as they would elsewhere. But automated users are a bit more secure, for instance an automated tool that non-production admins can access has a user & credentials that cannot be used in production. If you want to launch ansible on all nodes, you'd have to do it in two batches, once for non-production and once for production. We also use puppet though, since it's an enforcing configuration management tool, so most changes to all environments would get pushed out through it. Obviously, if that feature request you cited gets reopened/completed, what you're looking to do would be entirely supported. Even then though, security is a process of risk assessment and compromise. If you only have a few nodes that you can remember the passwords for without resorting to a post-it note, separate passwords would be slightly more secure. But for most of us, it's not a feasible option. | {
"source": [
"https://serverfault.com/questions/560106",
"https://serverfault.com",
"https://serverfault.com/users/47979/"
]
} |
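A sketch of the "NOPASSWD only for pre-approved commands" approach the answer recommends, using a hypothetical automation account and command. The point is that the entry lives in its own sudoers.d file, so a configuration management tool can own and update it:

```
# Allow the automation account to run exactly one command without a password
echo 'ansible ALL=(ALL) NOPASSWD: /usr/bin/apt-get update' \
    | sudo tee /etc/sudoers.d/ansible
sudo chmod 0440 /etc/sudoers.d/ansible

# Always syntax-check sudoers changes before closing your session
sudo visudo -c
```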
560,337 | I have an instance named dev-server-03 . Now how can I search all dev-server-* instances from command line? I am using aws cli tool. | Assuming that you are using the convention of putting the name of the instance in a tag with the key of "Name" (this is what the AWS Console does when you enter a name), then you can use the --filters option to list those instances with aws-cli: aws ec2 describe-instances --filters 'Name=tag:Name,Values=dev-server-*' If you just wanted the instance ids of those instances, you could use: aws ec2 describe-instances --filters 'Name=tag:Name,Values=dev-server-*' \
--output text --query 'Reservations[*].Instances[*].InstanceId' Note: --query may require a recent version of aws-cli but it's worth getting. | {
"source": [
"https://serverfault.com/questions/560337",
"https://serverfault.com",
"https://serverfault.com/users/105041/"
]
} |
560,505 | How can I suppress giving a reason for shutdown on a Windows Server host? Specifically, on 2008 R2, but all versions back to 2003 and up to 2012 would be appreciated. | You will need to modify the group policy that is applied to the servers. Open up the Group Policy Management Console and navigate to Computer Configuration >> Administrative Templates >> System and select "Display Shutdown Event Tracker." Disable that option. | {
"source": [
"https://serverfault.com/questions/560505",
"https://serverfault.com",
"https://serverfault.com/users/2321/"
]
} |
560,552 | I have a domain with godaddy: example.com I have an ec2 load balancer pointing to an ec2 instance. I would like to example.com to point to my load balanced instance. I first added a www cname record for my elb DNS. Then I forwarded example.com to www.example.com What do I put in the A Name record on godaddy? | You can't. ELB provides one -- or more -- IP addresses, hiding behind the CNAME you are using with www record, and these addresses are not static, so you can't create an A record at the top ("apex") of your domain and point to the addresses... along with that, a CNAME at the apex of a domain is not a valid DNS configuration. So there isn't directly a way to do this. You can either use Go Daddy's web site forwarding feature to redirect example.com requests to www.example.com, which will cause the browser to change its address bar value from example.com to www.example.com and then send traffic to the ELB (via the www CNAME)... or you can move the DNS from Go Daddy to Amazon's Route 53 service, which has another feature that operates similarly to a CNAME but is implemented differently, consistent with the rules established in RFC-1912 . They call these ALIAS records. An Alias record in Route 53 is a pointer to internal configuration within Route 53 that allows that service to look up and return an appropriate A-record for the service to which the Alias record is pointing. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingAliasRRSets.html | {
"source": [
"https://serverfault.com/questions/560552",
"https://serverfault.com",
"https://serverfault.com/users/4807/"
]
} |
560,632 | I have an nginx reverse-proxy which proxies requests from an outer amazon ELB to internal ELBs. I have 6 backend instances that handles the requests. The site-enabled configs looks like this, but there are different port numbers and proxy_pass. Everything else is identical: server {
listen 3000;
location / {
proxy_pass http://internal-prod732r8-PrivateE-1GJ070M0745TT-348518554.eu-west-1.elb.amazonaws.com:3000;
include /etc/nginx/proxy.conf;
} } Once about every 24h one of the configurations stops working. All other proxies works fine. If i restart nginx all configurations works again. There is nothing in error.log, nothing weird in access log, syslog or dmesg. Is this something known? Have i done something wrong with my proxy configs? Are there any other logs i can look in? | The answer to this question is that ELBs sometimes change ip adresses and nginx does name resolving during start. To fix this there is always a DNS server in your VPC at 0.2. So if the local ip CIDR is 10.0.0.0/16 the DNS server is at 10.0.0.2. Add this to the nginx config. resolver 10.0.0.2 valid=10s; The proxy_pass also needs to be defined as a variable otherwise nginx will only resolve it once. So based on the configuration above this is the correct config: server {
listen 3000;
location / {
resolver 10.0.0.2 valid=10s;
set $backend "http://internal-prod732r8-PrivateE-1GJ070M0745TT-348518554.eu-west-1.elb.amazonaws.com:3000"
proxy_pass $backend;
include /etc/nginx/proxy.conf;
}
} | {
"source": [
"https://serverfault.com/questions/560632",
"https://serverfault.com",
"https://serverfault.com/users/202172/"
]
} |
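One way to see the underlying behaviour for yourself: resolve the ELB name repeatedly (from inside the VPC, since the name in the question is internal) and watch the addresses rotate, which is exactly what a resolve-once-at-startup nginx misses:

```
# The A records behind an ELB name change over time
dig +short internal-prod732r8-PrivateE-1GJ070M0745TT-348518554.eu-west-1.elb.amazonaws.com

# Re-run every 30 seconds and compare
watch -n 30 'dig +short internal-prod732r8-PrivateE-1GJ070M0745TT-348518554.eu-west-1.elb.amazonaws.com'
```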
560,978 | My haproxy instance serves 2 domains (mostly to avoid XSS on the main site). The rules look something like this bind :443 ssl crt /etc/ssl/haproxy.pem
acl is_static hdr_end(Host) -i example.com
acl is_api hdr_end(Host) -i api.example.com
acl is_files hdr_end(Host) -i example.io
redirect scheme https if !{ ssl_fc } is_static is_api Now SSL uses /etc/ssl/haproxy.pem as the default cert, which is the certificate for example.com and not example.io . How can I specify certs for multiple domain names? | You can concatenate all your certificates into files say haproxy1.pem and haproxy2.pem or you can specify a directory containing all your pem files. cat cert1.pem key1.pem > haproxy1.pem
cat cert2.pem key2.pem > haproxy2.pem As per the haproxy docs Then on the config use something like this: defaults
log 127.0.0.1 local0
option tcplog
frontend ft_test
mode http
bind 0.0.0.0:443 ssl crt /certs/haproxy1.pem crt /certs/haproxy2.pem
use_backend bk_cert1 if { ssl_fc_sni my.example.com } # content switching based on SNI
use_backend bk_cert2 if { ssl_fc_sni my.example.org } # content switching based on SNI
backend bk_cert1
mode http
server srv1 <ip-address2>:80
backend bk_cert2
mode http
server srv2 <ip-address3>:80 Read more about SNI Keep in mind that SSL support is in development staging for haproxy and also that it apparently has considerable performance hit. There are other solutions talked about in this thread: https://stackoverflow.com/questions/10684484/haproxy-with-multiple-https-sites Hope this helps. | {
"source": [
"https://serverfault.com/questions/560978",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
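To verify that haproxy hands out the right certificate per SNI name after a change like this, openssl can request each hostname explicitly; the frontend address below is a placeholder:

```
# Which certificate subject comes back for each SNI name?
openssl s_client -connect haproxy.example.com:443 -servername my.example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject
openssl s_client -connect haproxy.example.com:443 -servername my.example.org </dev/null 2>/dev/null \
    | openssl x509 -noout -subject
```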
560,979 | I am trying to figure out how best to use Nginx as proxy for serving PHP (via PHP5-FPM), Python (via gunicorn) and NodeJS. My current default file in the sites-available directory is copied below. Should I be attempting to configure multiple servers or make other changes in order to enable this functionality? Thanks in advance. Update: Currently, with the current config, Nginx is serving as a proxy to NodeJS application. However, it is no longer serving PHP content anymore. Should I be using a different server in the default file and if so, should I be able to use the same listening port but just use a different server_name and use the location tag to differentiate between the requests? I am trying to route certain URL requests to a PHP application (in /var/www - I switched from /usr/share/nginx) as well as to Python and Nodejs backends. One thought that I have not implemented is to try multiple upstream and have the PHP setup in the main server - would that work i.e. have one upstream for NodeJS, one for Python and then the server for PHP. upstream test {
server 0.0.0.0:3002;
keepalive 500;
}
server {
listen 81 default_server;
listen [::]:81 default_server; ##remove this?
root /var/www/; ##switched from /usr/share/nginx
index index.php index.html index.htm;
server_name localhost;
location / {
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Nginx-Proxy true;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_pass http://0.0.0.0:3002;
}
location /doc/ {
alias /usr/share/doc/;
autoindex on;
allow 127.0.0.1;
allow ::1;
deny all;
}
# Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
location /RequestDenied {
proxy_pass http://127.0.0.1:4242;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# fastcgi_pass 127.0.0.1:9000;
# # With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
server {
listen 82;
root /var/www/;
index index.php index.html index.htm;
server_name php;
location ~ /testPHP { //testPHP is part of URL/directory name in /var/www/
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
} | You can concatenate all your certificates into files say haproxy1.pem and haproxy2.pem or you can specify a directory containing all your pem files. cat cert1.pem key1.pem > haproxy1.pem
cat cert2.pem key2.pem > haproxy2.pem As per the haproxy docs Then on the config use something like this: defaults
log 127.0.0.1 local0
option tcplog
frontend ft_test
mode http
bind 0.0.0.0:443 ssl crt /certs/haproxy1.pem crt /certs/haproxy2.pem
use_backend bk_cert1 if { ssl_fc_sni my.example.com } # content switching based on SNI
use_backend bk_cert2 if { ssl_fc_sni my.example.org } # content switching based on SNI
backend bk_cert1
mode http
server srv1 <ip-address2>:80
backend bk_cert2
mode http
server srv2 <ip-address3>:80 Read more about SNI Keep in mind that SSL support is in development staging for haproxy and also that it apparently has considerable performance hit. There are other solutions talked about in this thread: https://stackoverflow.com/questions/10684484/haproxy-with-multiple-https-sites Hope this helps. | {
"source": [
"https://serverfault.com/questions/560979",
"https://serverfault.com",
"https://serverfault.com/users/149198/"
]
} |
561,107 | Is there a way on Linux to get statistics about the various reasons packets were dropped? On all network interfaces (openSUSE 12.3) on several servers, ifconfig and netstat -i are reporting dropped packets at the reception. When I do a tcpdump , the number of dropped packets stop increasing, meaning that the interfaces queues are not full and dropping the data. So there must be other reasons why this is happening (e.g. multicast pkts received whereas the interface is not part of this multicast group). Where can I find such information? (/proc? /sys? some logs?) Example of statistics (merge of the /sys/class/net/<dev>/statistics and ethtool output): alloc_rx_buff_failed: 0
collisions: 0
dropped_smbus: 0
multicast: 1644
rx_align_errors: 0
rx_broadcast: 23626
rx_bytes: 1897203
rx_compressed: 0
rx_crc_errors: 0
rx_csum_offload_errors: 0
rx_csum_offload_good: 0
rx_dropped: 4738
rx_errors: 0
rx_fifo_errors: 0
rx_flow_control_xoff: 0
rx_flow_control_xon: 0
rx_frame_errors: 0
rx_length_errors: 0
rx_long_byte_count: 1998731
rx_long_length_errors: 0
rx_missed_errors: 0
rx_multicast: 1644
rx_no_buffer_count: 0
rx_over_errors: 0
rx_packets: 25382
rx_short_length_errors: 0
rx_smbus: 0
tx_aborted_errors: 0
tx_abort_late_coll: 0
tx_broadcast: 7
tx_bytes: 11300
tx_carrier_errors: 0
tx_compressed: 0
tx_deferred_ok: 0
tx_dropped: 0
tx_errors: 0
tx_fifo_errors: 0
tx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_heartbeat_errors: 0
tx_multicast: 43
tx_multi_coll_ok: 0
tx_packets: 63
tx_restart_queue: 0
tx_single_coll_ok: 0
tx_smbus: 0
tx_tcp_seg_failed: 0
tx_tcp_seg_good: 0
tx_timeout_count: 0
tx_window_errors: 0 | Try /sys/class/net/eth0/statistics/ (i.e. for eth0 ), it's not perfect but it breaks down errors by transmit/receive and by carrier, window, fifo, crc, frame, length (and a few more) types of errors. Drops are not the same as "ignored", netstat show interface level statistics, a multicast packet ignored by a higher level (layer 3, the IP stack) won't show as a drop (though it might show up as "filtered" on some NIC stats). Statistics may be complicated somewhat by various offload features. You can get more stats if you have ethtool : # ethtool -S eth0
rx_packets: 60666755
tx_packets: 2206194
rx_bytes: 6630349870
tx_bytes: 815877983
rx_broadcast: 58230114
tx_broadcast: 9307
rx_multicast: 8406
tx_multicast: 17
rx_errors: 0
tx_errors: 0
tx_dropped: 0
multicast: 8406
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
[...] Some statistics depend on the NIC driver, as will the exact meaning. The above is from an Intel e1000 . Having looked at handful of drivers, some collect many more statistics than others (the stats available to ethtool tend to be kept in separate source file, e.g. drivers/net/ethernet/intel/e1000/e1000_ethtool.c , if you need to rummage). ethtool -i eth0 will show the driver details, the output of lspci -v should be more detailed, though with a bit of clutter too. Update In tg3.c function tg3_rx() there's only one place that looks likely with a tp->rx_dropped++ , but the code is littered with goto s, so there are several other causes than the obvious, i.e. anything with goto drop_it or goto drop_it_no_recycle .
(Note that the drop counter is one of the few maintained by the driver, the rest are maintained by the device itself.) The driver source I have to hand is 3.123. My best guess is this code: if (len > (tp->dev->mtu + ETH_HLEN) &&
skb->protocol != htons(ETH_P_8021Q)) {
dev_kfree_skb(skb);
goto drop_it_no_recycle;
} Check the MTU, possible causes are jumbo frames, or slightly oversized ethernet frames to allow for encapsulation. I cannot explain why tcpdump might change the behaviour, it's not known to change the interface MTU. Note also that you may "see" packets larger then the MTU with tcpdump if TSO / LRO is enabled ( explanation ). | {
"source": [
"https://serverfault.com/questions/561107",
"https://serverfault.com",
"https://serverfault.com/users/67419/"
]
} |
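When hunting for the condition that increments rx_dropped, it helps to watch the counters live and note which ones move together; a small sketch, assuming the interface is eth0:

```
# Re-read the NIC statistics every second and highlight whichever counters change
watch -d -n 1 'ethtool -S eth0 | grep -Ei "drop|miss|err|length"'

# Driver-independent per-interface counters, plus the configured MTU
ip -s link show eth0
```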
561,413 | In Windows Server 2012 R2 I am unable to find the option to reassign the drive letter for a CD/DVD drive where the disks are normally managed. So I can reassign for hard drives, but not for optical drives? What gives? Since compmgmt.msc runs the Server Manager, I am wondering which method I am supposed to use. So how can I reassign the drive letter for optical drives without going through hoops on Windows Server 2012 R2? | Run diskmgmt.msc just like previous versions of Windows. | {
"source": [
"https://serverfault.com/questions/561413",
"https://serverfault.com",
"https://serverfault.com/users/71790/"
]
} |
561,892 | I have a reverse proxy setup as follows in Apache: Server A with address www.example.com/folder is the reverse proxy server. It maps to: Server B with address test.madeupurl.com This kind of works. But the problem I have is, on www.example.com/folder, all of the relative links are of the forms www.example.com/css/examplefilename.css rather than www.example.com/folder/css/examplefilename.css How do I fix this? So far my reverse proxy has this on Server A (www.example.com): <Location /folder>
ProxyPass http://test.madeupurl.com
ProxyPassReverse http://test.madeupurl.com
</Location> | The Apache ProxyPassRewrite does not rewrite the response bodies received from http://test.example.com , only headers (like redirects to a 404 page and such). A number of alternatives: One ) Rewrite the internal app to use relative paths instead of absolute. i.e. ../css/style.css instead of /css/style.css Two ) Redeploy the internal app in a the same subdirectory /folder rather than in the root of test.example.com. Three ) One and two are often unlikely to happen... If you're lucky the internal app only uses two or three subdirectories and those are unused on your main site , simply write a bunch of ProxyPass lines: # Expose Internal App to the internet.
ProxyPass /externalpath/ http://test.example.com/
ProxyPassReverse /externalpath/ http://test.example.com/
# Internal app uses a bunch of absolute paths.
ProxyPass /css/ http://test.example.com/css/
ProxyPassReverse /css/ http://test.example.com/css/
ProxyPass /icons/ http://test.example.com/icons/
ProxyPassReverse /icons/ http://test.example.com/icons/ Four ) Create a separate subdomain for the internal app and simply reverse proxy everything: <VirtualHost *:80>
ServerName app.example.com/
# Expose Internal App to the internet.
ProxyPass / http://test.internal.example.com/
ProxyPassReverse / http://test.internal.example.com/
</VirtualHost> Five ) Sometimes developers are completely clueless and have their applications not only generate absolute URL's but even include the hostname part in their URL's and the resulting HTML code looks like this: <img src=http://test.example.com/icons/logo.png> . A ) You can use combo solution of a split horizon DNS and scenario 4. Both internal and external users use the test.example.com, but your internal DNS points directly to the ip-address of test.example.com's server. For external users the public record for test.example.com points to the ip-address of your public webserver www.example.com and you can then use solution 4. B ) You can actually get apache to to not only proxy requests to test.example.com, but also rewrite the response body before it will be transmitted to your users. (Normally a proxy only rewrites HTTP headers/responses). mod_substitute in apache 2.2. I haven't tested if it stacks well with mod_proxy, but maybe the following works: <Location /folder/>
ProxyPass http://test.example.com/
ProxyPassReverse http://test.example.com/
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|test.example.com/|www.example.com/folder/|i"
</Location> | {
"source": [
"https://serverfault.com/questions/561892",
"https://serverfault.com",
"https://serverfault.com/users/166820/"
]
} |
561,900 | https://access.redhat.com/site/security/updates/backporting/?sc_cid=3093 http://froginapan.blogspot.com/2012/07/redhats-backporting-activity.html As the above two links suggest, only security patches and certain selected "new" features and functionality is also backported. My question is specifically regarding qemu-kvm and libvirt. If somebody knows for sure how advanced/ improved these packages are as compared to their "original" release versions, please share that. rpm -q --changelog qemu-kvm ...shows many backports/patches, but I don't know if new features are being added consistently as well.. May be someone who is following qemu-kvm development more closely would know... | The Apache ProxyPassRewrite does not rewrite the response bodies received from http://test.example.com , only headers (like redirects to a 404 page and such). A number of alternatives: One ) Rewrite the internal app to use relative paths instead of absolute. i.e. ../css/style.css instead of /css/style.css Two ) Redeploy the internal app in a the same subdirectory /folder rather than in the root of test.example.com. Three ) One and two are often unlikely to happen... If you're lucky the internal app only uses two or three subdirectories and those are unused on your main site , simply write a bunch of ProxyPass lines: # Expose Internal App to the internet.
ProxyPass /externalpath/ http://test.example.com/
ProxyPassReverse /externalpath/ http://test.example.com/
# Internal app uses a bunch of absolute paths.
ProxyPass /css/ http://test.example.com/css/
ProxyPassReverse /css/ http://test.example.com/css/
ProxyPass /icons/ http://test.example.com/icons/
ProxyPassReverse /icons/ http://test.example.com/icons/ Four ) Create a separate subdomain for the internal app and simply reverse proxy everything: <VirtualHost *:80>
ServerName app.example.com/
# Expose Internal App to the internet.
ProxyPass / http://test.internal.example.com/
ProxyPassReverse / http://test.internal.example.com/
</VirtualHost> Five ) Sometimes developers are completely clueless and have their applications not only generate absolute URL's but even include the hostname part in their URL's and the resulting HTML code looks like this: <img src=http://test.example.com/icons/logo.png> . A ) You can use combo solution of a split horizon DNS and scenario 4. Both internal and external users use the test.example.com, but your internal DNS points directly to the ip-address of test.example.com's server. For external users the public record for test.example.com points to the ip-address of your public webserver www.example.com and you can then use solution 4. B ) You can actually get apache to to not only proxy requests to test.example.com, but also rewrite the response body before it will be transmitted to your users. (Normally a proxy only rewrites HTTP headers/responses). mod_substitute in apache 2.2. I haven't tested if it stacks well with mod_proxy, but maybe the following works: <Location /folder/>
ProxyPass http://test.example.com/
ProxyPassReverse http://test.example.com/
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|test.example.com/|www.example.com/folder/|i"
</Location> | {
"source": [
"https://serverfault.com/questions/561900",
"https://serverfault.com",
"https://serverfault.com/users/182381/"
]
} |
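For scenario B above to work, the proxy and substitution modules have to be loaded first. A minimal sketch, assuming a stock Apache 2.2 layout (module paths differ per distribution, and on Debian/Ubuntu the a2enmod helper does the same job):
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule substitute_module modules/mod_substitute.so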
562,079 | Currently I'm using the plain vanilla linux configuration for resolv.conf ... something like: nameserver 123.123.123.123
nameserver 8.8.8.8 When 123.123.123.123 goes down, DNS queries become impossibly slow; I'm assuming that Linux retries the first one each time. Is there a way to get Linux to be smarter about this? Health checks or something? Or do I misunderstand how resolv.conf should work? | In addition to ewwhite's awesome response, an addendum. You can add this line to /etc/resolv.conf : options timeout:1 attempts:1 rotate The defaults are timeout:5 attempts:2 . What happens is that the resolver library will try the nameservers listed in /etc/resolv.conf from top to bottom if no rotate option is present. If rotate is present, it does a round-robin selection. If the resolver reaches the bottom of the list and no server responds within X seconds (where X is the timeout parameter), it will repeat the whole round-robin selection another Y-1 times (where Y is the value of attempts). However, a bit of caution: avoid dig and friends for testing these resolv.conf options, as they bypass the resolver library and ask the nameservers directly. getent hosts is the better command to use. Note that anything that uses the glibc resolver has to obey the /etc/resolv.conf file. | {
"source": [
"https://serverfault.com/questions/562079",
"https://serverfault.com",
"https://serverfault.com/users/107347/"
]
} |
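Putting the answer above together, a sketch of the resulting /etc/resolv.conf (the nameserver addresses are the ones from the question):
options timeout:1 attempts:1 rotate
nameserver 123.123.123.123
nameserver 8.8.8.8
To test through the glibc resolver path (instead of dig, which bypasses it): getent hosts www.example.com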
562,564 | In practically every example of ZFS usage that I've seen online (including several questions here), the zpool is named "tank". Why? Is there some sort of significance to the name or is it just that the original documentation used "tank" so that's what everyone else uses, too? If you have more than one zpool on a system, is it common to have one of them named "tank" or is "tank" only a convention for single-pool systems? | I was confused by this at the beginning as well. Since the ZFS is referring to 'Storage pools', the author created the nickname 'Tank' as in a 'Tank of water' or a 'Fish tank'. It is a bit of a play on words since the English words 'Pool' and 'Tank' both refer to large containers of water. Some people find it confusing at first. Here is an old example from the Sun Solaris 11 documentation from 2004 : Create a ZFS storage pool. The following example illustrates how to create a simple mirrored
storage pool named tank and a ZFS file system named tank in one
command. Assume that the whole disks /dev/dsk/c1t0d0 and
/dev/dsk/c2t0d0 are available for use. # zpool create tank mirror c1t0d0 c2t0d0 The term is not referring to a 'Tank' like a Battle Tank, or the term 'Tank' in gaming. If I find time, I can dig up the authoritative source of the person who created that term. I believe the term was coined by Jeff Bonwick , Team Lead for the ZFS team while at Sun. | {
"source": [
"https://serverfault.com/questions/562564",
"https://serverfault.com",
"https://serverfault.com/users/119616/"
]
} |
562,756 | I have a running web-application at http://example.com/ , and want to "mount" another application, on a separate server on http://example.com/en . Upstream servers and proxy_pass seem to work, but for one issue: upstream luscious {
server lixxxx.members.linode.com:9001;
}
server {
root /var/www/example.com/current/public/;
server_name example.com;
location /en {
proxy_pass http://luscious;
}
} When opening example.com/en , my upstream application returns 404 not found /en . This makes sense, as the upstream does not have the path /en . Is proxy_pass the right solution? Should I rewrite "upstream" so it listens on /en instead, as its root path? Or is there a directive that allows me to rewrite the path passed along to the upstream? | This is likely the most efficient way to do what you want, without the use of any regular expressions: location = /en {
return 302 /en/;
}
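# Illustration, not part of the original answer: because proxy_pass below ends
# with a trailing slash, nginx replaces the matched /en/ prefix, so a request
# for /en/foo reaches the upstream as /foo. With "proxy_pass http://luscious;"
# (no trailing slash) the upstream would receive /en/foo unchanged, which is
# what produced the 404 described in the question.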
location /en/ {
proxy_pass http://luscious/; # note the trailing slash here, it matters!
} | {
"source": [
"https://serverfault.com/questions/562756",
"https://serverfault.com",
"https://serverfault.com/users/60697/"
]
} |
563,033 | The command iptables no longer recognizes one of the most commonly used options when defining rules: --dport . I get this error: [root@dragonweyr /home/calyodelphi]# iptables -A INPUT --dport 7777 -j ACCEPT_TCP_UDP
iptables v1.4.7: unknown option `--dport'
Try `iptables -h' or 'iptables --help' for more information. The add rule command above is just an example for enabling Terraria connections. Here's what I currently have as a barebones iptables configuration ( listiptables is aliased to iptables -L -v --line-numbers ), and it's obvious that --dport has worked in the past: root@dragonweyr /home/calyodelphi]# listiptables
Chain INPUT (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 39 4368 ACCEPT all -- lo any anywhere anywhere
2 114 10257 ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
3 1 64 ACCEPT tcp -- eth1 any anywhere anywhere tcp dpt:EtherNet/IP-1
4 72 11610 ACCEPT all -- eth1 any anywhere anywhere
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 91 packets, 10045 bytes)
num pkts bytes target prot opt in out source destination
Chain ACCEPT_TCP_UDP (0 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- any any anywhere anywhere I'm also trying to define a custom chain (inspired by this question ) to accept tcp & udp connections so that I don't have to define two rules for everything that I want to enable tcp and udp for (such as a Minecraft or Terraria server, or another service entirely). But even this doesn't work: [root@dragonweyr /home/calyodelphi]# iptables -P ACCEPT_TCP_UDP DROP
iptables: Bad built-in chain name. This is getting to be very frustrating, in polite terms (the amount of cussing involved with this would make a sailor tell me to watch my mouth). My Google-fu is terrible, so I've yet to find a working solution for any of this. I'm running CentOS 6.5 on the router. Any help and pointers that you guys can offer would be awesome. EDIT: Bonus question: I'm also planning to configure port forwarding as well. Is it still necessary to set rules to accept incoming connections over specific ports? | First give a -p option like -p tcp or -p udp . Examples: iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j DROP iptables -A INPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT You could also try -p all but I've never done that and don't find too much support for it in the examples. | {
"source": [
"https://serverfault.com/questions/563033",
"https://serverfault.com",
"https://serverfault.com/users/203347/"
]
} |
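Applied to the Terraria example from the question, a sketch with two explicit rules (port 7777 is the asker's; plain ACCEPT is used here instead of the custom ACCEPT_TCP_UDP chain):
iptables -A INPUT -p tcp --dport 7777 -j ACCEPT
iptables -A INPUT -p udp --dport 7777 -j ACCEPT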
563,038 | I need to compile apache from source but my previous version was downloaded via yum.
But I need to find the options that were previously used to compile apache so I can use the same options when I recompile. Also, I want to ensure my config files are kept intact when I recompile. Note : I tried the following command but the output didn't help much: [root@test httpd-2.2.4]# yumdownloader --source httpd
Loaded plugins: fastestmirror
Repository c5-testing is listed more than once in the configuration
Loading mirror speeds from cached hostfile
* base: mirrors.coreix.net
* epel: mirrors.coreix.net
* extras: centos.hyve.com
* fc6-base: ftp-stud.hs-esslingen.de
* rpmforge: www.mirrorservice.org
* updates: centos.hyve.com
drivesrvr | 951 B 00:00
Enabling epel-source repository
No source RPM found for httpd-2.2.3-5.x86_64
No source RPM found for httpd-2.2.3-83.el5.centos.x86_64
No source RPM found for httpd-2.2.3-82.el5.centos.x86_64
No source RPM found for httpd-2.2.26-1.el5.x86_64
Nothing to download Any ideas ? | First give a -p option like -p tcp or -p udp . Examples: iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j DROP iptables -A INPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT You could also try -p all but I've never done that and don't find too much support for it in the examples. | {
"source": [
"https://serverfault.com/questions/563038",
"https://serverfault.com",
"https://serverfault.com/users/131641/"
]
} |
563,714 | On cPanel when I am logged in as root and type "mysql" without hostname and password it gives me direct access to mysql root user. I would like to do this for one of my non-cpanel server where the linux root user gets password less logon to mysql root user in the same way as it does on cPanel. Is this possible ? | The easiest way to do this is to use a client section of the ~/.my.cnf file, and add the credentials there. [client]
user=root
password=somepassword
... it's a good idea to make that file readable only by root too. | {
"source": [
"https://serverfault.com/questions/563714",
"https://serverfault.com",
"https://serverfault.com/users/156625/"
]
} |
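A short sketch of creating the file and locking it down as suggested, assuming root's home directory is /root (replace the placeholder password):
cat > /root/.my.cnf <<'EOF'
[client]
user=root
password=somepassword
EOF
chmod 600 /root/.my.cnf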
563,815 | When I installed OpenVAS, I was prompted for a password, however the prompt errored out. I have installed OpenVAS and it is working properly, however I cant get in as admin (I created a new user and that works fine). I've tried googling how to reset admin password, recover admin password, change the access of a user to admin, but to no avail. Greenbone Security Assistant is version 4.0.
OpenVAS is version 6 I believe (I just installed it today) Host OS is Kali Linux. | Try this: openvasmd --user=admin --new-password=new_password | {
"source": [
"https://serverfault.com/questions/563815",
"https://serverfault.com",
"https://serverfault.com/users/186349/"
]
} |
563,824 | I've been going mental over this. How can I redirect a certain url like /2013/04/test.html to /test in nginx? I have tried this: but doesn't work: server {
location /2013/05/test.html {
return 301 http://$server_name/test;
}
} I've performed some tests - for some reason, any url with no .html extension in the location part of the config line will redirect properly, but as soon as I place .html in the location, kaboom, it stops working. Any idea why this is? Thank you! | Try this: openvasmd --user=admin --new-password=new_password | {
"source": [
"https://serverfault.com/questions/563824",
"https://serverfault.com",
"https://serverfault.com/users/190786/"
]
} |
563,872 | My system is running CentOS 6.4 with apache2.2.15. SElinux is enforcing and I'm trying to connect to a local instance of redis through my python/wsgi app. I get Error 13, Permission denied. I could fix this via the command: setsebool -P httpd_can_network_connect However, I don't exactly want httpd to be able to connect to all tcp ports. How can I specify which ports/networks httpd is allowed to connect to? If I could make a module to allow httpd to connect to port 6379 ( redis ) or any tcp on 127.0.0.1, that would be preferable. Not sure why my paranoia is so strong on this, but hey... Anyone know? | By default, the SELinux policy will only allow services access to recognized ports associated with those services: # semanage port -l | egrep '(^http_port_t|6379)'
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000
# curl http://localhost/redis.php
Cannot connect to redis server.
- add Redis port (6379) to SELinux policy:
# semanage port -a -t http_port_t -p tcp 6379
# semanage port -l | egrep '(^http_port_t|6379)'
http_port_t tcp 6379, 80, 81, 443, 488, 8008, 8009, 8443, 9000
# curl http://localhost/redis.php
Connected successfully. You can also install setroubleshoot-server RPM and run: sealert -a /var/log/audit/audit.log - it will give you a nice report with useful suggestions (including command above). PHP script to test connection: # cat redis.php
<?php
$redis=new Redis();
$connected= $redis->connect('127.0.0.1', 6379);
if(!$connected) {
die( "Cannot connect to redis server.\n" );
}
echo "Connected successfully.\n";
?> | {
"source": [
"https://serverfault.com/questions/563872",
"https://serverfault.com",
"https://serverfault.com/users/203754/"
]
} |
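One hedge worth adding: on newer SELinux policy versions port 6379 may already be labeled (for example as redis_port_t), in which case the -a form fails with an "already defined" error; modifying the existing definition is the usual workaround:
# semanage port -m -t http_port_t -p tcp 6379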
564,068 | I have a continuous OOM & panic situation that remains unresolved. I am not sure the system fills up all the RAM (36GB). Why did this system trigger this OOM situation? I suspect it is related to the lowmem zone in 32-bit Linux systems. How can I analyze the logs from the kernel panic and oom-killer? Best Regards, Kernel 3.10.24 Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0
Dec 27 09:19:05 2013 kernel: : [277622.359069] squid cpuset=/ mems_allowed=0
Dec 27 09:19:05 2013 kernel: : [277622.359074] CPU: 9 PID: 15533 Comm: squid Not tainted 3.10.24-1.lsg #1
Dec 27 09:19:05 2013 kernel: : [277622.359076] Hardware name: Intel Thurley/Greencity, BIOS 080016 10/05/2011
Dec 27 09:19:05 2013 kernel: : [277622.359078] 00000003 e377b280 e03c3c38 c06472d6 e03c3c98 c04d2d96 c0a68f84 e377b580
Dec 27 09:19:05 2013 kernel: : [277622.359089] 000042d0 00000003 00000000 e03c3c64 c04abbda e42bd318 00000000 e03c3cf4
Dec 27 09:19:05 2013 kernel: : [277622.359096] 000042d0 00000001 00000247 00000000 e03c3c94 c04d3d5f 00000001 00000042
Dec 27 09:19:05 2013 kernel: : [277622.359105] Call Trace:
Dec 27 09:19:05 2013 kernel: : [277622.359116] [<c06472d6>] dump_stack+0x16/0x20
Dec 27 09:19:05 2013 kernel: : [277622.359121] [<c04d2d96>] dump_header+0x66/0x1c0
Dec 27 09:19:05 2013 kernel: : [277622.359127] [<c04abbda>] ? __delayacct_freepages_end+0x3a/0x40
Dec 27 09:19:05 2013 kernel: : [277622.359131] [<c04d3d5f>] ? zone_watermark_ok+0x2f/0x40
Dec 27 09:19:05 2013 kernel: : [277622.359135] [<c04d2f27>] check_panic_on_oom+0x37/0x60
Dec 27 09:19:05 2013 kernel: : [277622.359138] [<c04d36d2>] out_of_memory+0x92/0x250
Dec 27 09:19:05 2013 kernel: : [277622.359144] [<c04dd1fa>] ? wakeup_kswapd+0xda/0x120
Dec 27 09:19:05 2013 kernel: : [277622.359148] [<c04d6cee>] __alloc_pages_nodemask+0x68e/0x6a0
Dec 27 09:19:05 2013 kernel: : [277622.359155] [<c0801c1e>] sk_page_frag_refill+0x7e/0x120
Dec 27 09:19:05 2013 kernel: : [277622.359160] [<c084b8c7>] tcp_sendmsg+0x387/0xbf0
Dec 27 09:19:05 2013 kernel: : [277622.359166] [<c0469a2f>] ? put_prev_task_fair+0x1f/0x350
Dec 27 09:19:05 2013 kernel: : [277622.359173] [<c0ba7d8b>] ? longrun_init+0x2b/0x30
Dec 27 09:19:05 2013 kernel: : [277622.359177] [<c084b540>] ? tcp_tso_segment+0x380/0x380
Dec 27 09:19:05 2013 kernel: : [277622.359182] [<c086d0da>] inet_sendmsg+0x4a/0xa0
Dec 27 09:19:05 2013 kernel: : [277622.359186] [<c07ff3a6>] sock_aio_write+0x116/0x130
Dec 27 09:19:05 2013 kernel: : [277622.359191] [<c0457acc>] ? hrtimer_try_to_cancel+0x3c/0xb0
Dec 27 09:19:05 2013 kernel: : [277622.359197] [<c050b208>] do_sync_write+0x68/0xa0
Dec 27 09:19:05 2013 kernel: : [277622.359202] [<c050caa0>] vfs_write+0x190/0x1b0
Dec 27 09:19:05 2013 kernel: : [277622.359206] [<c050cbb3>] SyS_write+0x53/0x80
Dec 27 09:19:05 2013 kernel: : [277622.359211] [<c08f72ba>] sysenter_do_call+0x12/0x22
Dec 27 09:19:05 2013 kernel: : [277622.359213] Mem-Info:
Dec 27 09:19:05 2013 kernel: : [277622.359215] DMA per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359218] CPU 0: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359220] CPU 1: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359222] CPU 2: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359224] CPU 3: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359226] CPU 4: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359228] CPU 5: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359230] CPU 6: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359232] CPU 7: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359234] CPU 8: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359236] CPU 9: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359238] CPU 10: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359240] CPU 11: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359242] CPU 12: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359244] CPU 13: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359246] CPU 14: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359248] CPU 15: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359250] CPU 16: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359253] CPU 17: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359255] CPU 18: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359258] CPU 19: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359260] CPU 20: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359262] CPU 21: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359264] CPU 22: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359266] CPU 23: hi: 0, btch: 1 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359268] Normal per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359270] CPU 0: hi: 186, btch: 31 usd: 34
Dec 27 09:19:05 2013 kernel: : [277622.359272] CPU 1: hi: 186, btch: 31 usd: 72
Dec 27 09:19:05 2013 kernel: : [277622.359274] CPU 2: hi: 186, btch: 31 usd: 40
Dec 27 09:19:05 2013 kernel: : [277622.359276] CPU 3: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359279] CPU 4: hi: 186, btch: 31 usd: 39
Dec 27 09:19:05 2013 kernel: : [277622.359281] CPU 5: hi: 186, btch: 31 usd: 49
Dec 27 09:19:05 2013 kernel: : [277622.359283] CPU 6: hi: 186, btch: 31 usd: 50
Dec 27 09:19:05 2013 kernel: : [277622.359285] CPU 7: hi: 186, btch: 31 usd: 25
Dec 27 09:19:05 2013 kernel: : [277622.359286] CPU 8: hi: 186, btch: 31 usd: 42
Dec 27 09:19:05 2013 kernel: : [277622.359289] CPU 9: hi: 186, btch: 31 usd: 39
Dec 27 09:19:05 2013 kernel: : [277622.359290] CPU 10: hi: 186, btch: 31 usd: 155
Dec 27 09:19:05 2013 kernel: : [277622.359293] CPU 11: hi: 186, btch: 31 usd: 56
Dec 27 09:19:05 2013 kernel: : [277622.359295] CPU 12: hi: 186, btch: 31 usd: 2
Dec 27 09:19:05 2013 kernel: : [277622.359297] CPU 13: hi: 186, btch: 31 usd: 162
Dec 27 09:19:05 2013 kernel: : [277622.359299] CPU 14: hi: 186, btch: 31 usd: 67
Dec 27 09:19:05 2013 kernel: : [277622.359301] CPU 15: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359303] CPU 16: hi: 186, btch: 31 usd: 68
Dec 27 09:19:05 2013 kernel: : [277622.359305] CPU 17: hi: 186, btch: 31 usd: 38
Dec 27 09:19:05 2013 kernel: : [277622.359307] CPU 18: hi: 186, btch: 31 usd: 56
Dec 27 09:19:05 2013 kernel: : [277622.359308] CPU 19: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359310] CPU 20: hi: 186, btch: 31 usd: 54
Dec 27 09:19:05 2013 kernel: : [277622.359312] CPU 21: hi: 186, btch: 31 usd: 35
Dec 27 09:19:05 2013 kernel: : [277622.359314] CPU 22: hi: 186, btch: 31 usd: 2
Dec 27 09:19:05 2013 kernel: : [277622.359316] CPU 23: hi: 186, btch: 31 usd: 60
Dec 27 09:19:05 2013 kernel: : [277622.359318] HighMem per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359320] CPU 0: hi: 186, btch: 31 usd: 32
Dec 27 09:19:05 2013 kernel: : [277622.359322] CPU 1: hi: 186, btch: 31 usd: 52
Dec 27 09:19:05 2013 kernel: : [277622.359324] CPU 2: hi: 186, btch: 31 usd: 9
Dec 27 09:19:05 2013 kernel: : [277622.359326] CPU 3: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359328] CPU 4: hi: 186, btch: 31 usd: 125
Dec 27 09:19:05 2013 kernel: : [277622.359330] CPU 5: hi: 186, btch: 31 usd: 116
Dec 27 09:19:05 2013 kernel: : [277622.359332] CPU 6: hi: 186, btch: 31 usd: 126
Dec 27 09:19:05 2013 kernel: : [277622.359333] CPU 7: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359336] CPU 8: hi: 186, btch: 31 usd: 79
Dec 27 09:19:05 2013 kernel: : [277622.359338] CPU 9: hi: 186, btch: 31 usd: 34
Dec 27 09:19:05 2013 kernel: : [277622.359340] CPU 10: hi: 186, btch: 31 usd: 111
Dec 27 09:19:05 2013 kernel: : [277622.359341] CPU 11: hi: 186, btch: 31 usd: 144
Dec 27 09:19:05 2013 kernel: : [277622.359343] CPU 12: hi: 186, btch: 31 usd: 15
Dec 27 09:19:05 2013 kernel: : [277622.359345] CPU 13: hi: 186, btch: 31 usd: 166
Dec 27 09:19:05 2013 kernel: : [277622.359347] CPU 14: hi: 186, btch: 31 usd: 185
Dec 27 09:19:05 2013 kernel: : [277622.359349] CPU 15: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359351] CPU 16: hi: 186, btch: 31 usd: 58
Dec 27 09:19:05 2013 kernel: : [277622.359353] CPU 17: hi: 186, btch: 31 usd: 122
Dec 27 09:19:05 2013 kernel: : [277622.359356] CPU 18: hi: 186, btch: 31 usd: 170
Dec 27 09:19:05 2013 kernel: : [277622.359358] CPU 19: hi: 186, btch: 31 usd: 0
Dec 27 09:19:05 2013 kernel: : [277622.359360] CPU 20: hi: 186, btch: 31 usd: 30
Dec 27 09:19:05 2013 kernel: : [277622.359362] CPU 21: hi: 186, btch: 31 usd: 33
Dec 27 09:19:05 2013 kernel: : [277622.359364] CPU 22: hi: 186, btch: 31 usd: 28
Dec 27 09:19:05 2013 kernel: : [277622.359366] CPU 23: hi: 186, btch: 31 usd: 44
Dec 27 09:19:05 2013 kernel: : [277622.359371] active_anon:658515 inactive_anon:54399 isolated_anon:0
Dec 27 09:19:05 2013 kernel: : [277622.359371] active_file:1172176 inactive_file:323606 isolated_file:0
Dec 27 09:19:05 2013 kernel: : [277622.359371] unevictable:0 dirty:0 writeback:0 unstable:0
Dec 27 09:19:05 2013 kernel: : [277622.359371] free:6911872 slab_reclaimable:29430 slab_unreclaimable:34726
Dec 27 09:19:05 2013 kernel: : [277622.359371] mapped:45784 shmem:9850 pagetables:107714 bounce:0
Dec 27 09:19:05 2013 kernel: : [277622.359371] free_cma:0
Dec 27 09:19:05 2013 kernel: : [277622.359382] DMA free:2332kB min:36kB low:44kB high:52kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:6960kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:288kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359384] lowmem_reserve[]: 0 573 36539 36539
Dec 27 09:19:05 2013 kernel: : [277622.359393] Normal free:114488kB min:3044kB low:3804kB high:4564kB active_anon:0kB inactive_anon:0kB active_file:252kB inactive_file:256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:894968kB managed:587540kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:117712kB slab_unreclaimable:138616kB kernel_stack:11976kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:982 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359395] lowmem_reserve[]: 0 0 287725 287725
Dec 27 09:19:05 2013 kernel: : [277622.359404] HighMem free:27530668kB min:512kB low:48272kB high:96036kB active_anon:2634060kB inactive_anon:217596kB active_file:4688452kB inactive_file:1294168kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:36828872kB managed:36828872kB mlocked:0kB dirty:0kB writeback:0kB mapped:183132kB shmem:39400kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:430856kB unstable:0kB bounce:367564104kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Dec 27 09:19:05 2013 kernel: : [277622.359406] lowmem_reserve[]: 0 0 0 0
Dec 27 09:19:05 2013 kernel: : [277622.359410] DMA: 3*4kB (U) 2*8kB (U) 4*16kB (U) 5*32kB (U) 2*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (R) 0*4096kB = 2428kB
Dec 27 09:19:05 2013 kernel: : [277622.359422] Normal: 5360*4kB (UEM) 3667*8kB (UEM) 3964*16kB (UEMR) 13*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 115000kB
Dec 27 09:19:05 2013 kernel: : [277622.359435] HighMem: 6672*4kB (M) 74585*8kB (UM) 40828*16kB (UM) 17275*32kB (UM) 3314*64kB (UM) 1126*128kB (UM) 992*256kB (UM) 585*512kB (UM) 225*1024kB (UM) 78*2048kB (UMR) 5957*4096kB (UM) = 27529128kB
Dec 27 09:19:05 2013 kernel: : [277622.359452] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Dec 27 09:19:05 2013 kernel: : [277622.359454] 1505509 total pagecache pages
Dec 27 09:19:05 2013 kernel: : [277622.359457] 4 pages in swap cache
Dec 27 09:19:05 2013 kernel: : [277622.359459] Swap cache stats: add 13, delete 9, find 0/0
Dec 27 09:19:05 2013 kernel: : [277622.359460] Free swap = 35318812kB
Dec 27 09:19:05 2013 kernel: : [277622.359462] Total swap = 35318864kB
Dec 27 09:19:05 2013 kernel: : [277622.450529] 9699327 pages RAM
Dec 27 09:19:05 2013 kernel: : [277622.450532] 9471490 pages HighMem
Dec 27 09:19:05 2013 kernel: : [277622.450533] 342749 pages reserved
Dec 27 09:19:05 2013 kernel: : [277622.450534] 2864256 pages shared
Dec 27 09:19:05 2013 kernel: : [277622.450535] 1501243 pages non-shared
Dec 27 09:19:05 2013 kernel: : [277622.450538] Kernel panic - not syncing: Out of memory: system-wide panic_on_oom is enabled
Dec 27 09:19:05 2013 kernel: : [277622.450538] and # cat /proc/meminfo
MemTotal: 37426312 kB
MemFree: 28328992 kB
Buffers: 94728 kB
Cached: 6216068 kB
SwapCached: 0 kB
Active: 6958572 kB
Inactive: 1815380 kB
Active(anon): 2329152 kB
Inactive(anon): 170252 kB
Active(file): 4629420 kB
Inactive(file): 1645128 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal: 36828872 kB
HighFree: 28076144 kB
LowTotal: 597440 kB
LowFree: 252848 kB
SwapTotal: 35318864 kB
SwapFree: 35318860 kB
Dirty: 0 kB
Writeback: 8 kB
AnonPages: 2463512 kB
Mapped: 162296 kB
Shmem: 36332 kB
Slab: 208676 kB
SReclaimable: 120872 kB
SUnreclaim: 87804 kB
KernelStack: 6320 kB
PageTables: 42280 kB
NFS_Unstable: 0 kB
Bounce: 124 kB
WritebackTmp: 0 kB
CommitLimit: 54032020 kB
Committed_AS: 3191916 kB
VmallocTotal: 122880 kB
VmallocUsed: 27088 kB
VmallocChunk: 29312 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10232 kB
DirectMap2M: 901120 kB sysctl: vm.oom_dump_tasks = 0
vm.oom_kill_allocating_task = 1
vm.panic_on_oom = 1
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 0
vm.highmem_is_dirtyable = 0
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 32 32
vm.max_map_count = 65530
vm.min_free_kbytes = 3084
vm.mmap_min_addr = 4096
vm.nr_hugepages = 0
vm.nr_overcommit_hugepages = 0
vm.nr_pdflush_threads = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 30
vm.user_reserve_kbytes = 131072
vm.vdso_enabled = 1
vm.vfs_cache_pressure = 100 and # ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 292370
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 36728
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 292370
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited | A 'sledgehammer' approach though would be to upgrade to a 64bit O/S (this is 32bit) because the layout of the zones is done differently. OK, so here I will attempt to answer why you have experienced an OOM here. There are a number of factors at play here. The order size of the request and how the kernel treats certain order sizes. The zone being selected. The watermarks this zone uses. Fragmentation in the zone. If you look at the OOM itself, there is clearly lots of free memory available but OOM-killer was invoked? Why? The order size of the request and how the kernel treats certain order sizes The kernel allocates memory by order. An 'order' is a region of contiguous RAM which must be satisfied for the request to work. Orders are arranged by orders of magnitude (thus the name order) using the algorithm 2^(ORDER + 12) . So, order 0 is 4096, order 1 is 8192, order 2 is 16384 so on and so forth. The kernel has a hard coded value of what is considers a 'high order' (> PAGE_ALLOC_COSTLY_ORDER ). This is order 4 and above (64kb or above is a high order). High orders are satisfied for page allocations differently from low orders. A high order allocation if it fails to grab the memory, on modern kernels will. Try to run memory the compaction routine to defragment the memory. Never call OOM-killer to satisfy the request. Your order size is listed here Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0 Order 3 is the highest of the low-order requests and (as you see) invokes OOM-killer in an attempt to satisfy it. Note that most userspace allocations don't use high-order requests. Typically its the kernel that requires contiguous regions of memory. An exception to this may be when userspace is using hugepages - but that isn't the case here. In your case the order 3 allocation is called by the kernel wanting to queue a packet into the network stack - requiring a 32kb allocation to do so. The zone being selected. The kernel divides your memory regions into zones. This chopping up is done because on x86 certain regions of memory are only addressable by certain hardware. Older hardware may only be able to address memory in the 'DMA' zone for example. When we want to allocate some memory, first a zone is chosen and only the free memory from this zone is accounted for when making an allocation decision. Whilst I'm not completely up to knowledge on the zone selection algorithm, the typical use-case is never to allocate from DMA, but to usually select the lowest addressable zone that could satisfy the request. Lots of zone information is spat out during OOM which can also be gleaned from /proc/zoneinfo . Dec 27 09:19:05 2013 kernel: : [277622.359382] DMA free:2332kB min:36kB low:44kB high:52kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:6960kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:288kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359393] Normal free:114488kB min:3044kB low:3804kB high:4564kB active_anon:0kB inactive_anon:0kB active_file:252kB inactive_file:256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:894968kB managed:587540kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:117712kB slab_unreclaimable:138616kB kernel_stack:11976kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:982 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359404] HighMem free:27530668kB min:512kB low:48272kB high:96036kB active_anon:2634060kB inactive_anon:217596kB active_file:4688452kB inactive_file:1294168kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:36828872kB managed:36828872kB mlocked:0kB dirty:0kB writeback:0kB mapped:183132kB shmem:39400kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:430856kB unstable:0kB bounce:367564104kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no The zones you have, DMA, Normal and HighMem indicate a 32-bit platform, because the HighMem zone is non-existent on 64bit. Also on 64bit systems Normal is mapped to 4GB and beyond whereas on 32bit it maps up to 896Mb (although, in your case the kernel reports only managing a smaller portion than this:- managed:587540kB .) Its possible to tell where this allocation came from by looking at the first line again, gfp_mask=0x42d0 tells us what type of allocation was done. The last byte (0) tells us that this is a allocation from the normal zone. The gfp meanings are located in include/linux/gfp.h . The watermarks this zone uses. When memory is low, actions to reclaim it are specified by the watermark. They show up here: min:3044kB low:3804kB high:4564kB . If free memory reaches 'low', then swapping will occur until we pass the 'high' threshold. If memory reaches 'min', we need to kill stuff in order to free up memory via the OOM-killer. Fragmentation in the zone. In order to see whether a request for a specific order of memory can be satisfied, the kernel accounts for how many free pages and available of each order. This is readable in /proc/buddyinfo . OOM-killer reports additionally spit out the buddyinfo too as seen here: Normal: 5360*4kB (UEM) 3667*8kB (UEM) 3964*16kB (UEMR) 13*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 115000kB For a memory allocation to be satisfied there must be free memory available in the order size requested or a higher allocation. Having lots and lots of free data in the low orders and none in the higher orders means your memory is fragmented. If you get a very high order allocation its possible (even with lots of free memory) for it to not be satisfied due to there being no high-order pages available. The kernel can defragment memory (this is called memory compaction) by moving lots of low order pages around so they don't leave gaps in the addressable ram space. OOM-killer was invoked? Why? So, if we take these things into account, we can say the following; A 32kB contiguous allocation was attempted. From the normal zone. There was enough free memory in the zone selected. There was order 3, 5 and 6 memory available 13*32kB (MR) 1*128kB (R) 1*256kB (R) So, if there was free memory, other orders could satisfy the request. what happened? Well, there is more to allocating from an order than just checking the amount of free memory available for that order or higher. The kernel effectively subtracts memory from all lower orders from the total free line and then performs the min watermark check on what is left. What happens in your case is to check our free memory for that zone we must do. 115000 - (5360*4) - (3667*8) - (3964*16) = 800 This amount of free memory is checked against the min watermark, which is 3044. Thus, technically speaking -- you have no free memory left to do the allocation you requested. And this is why you invoked OOM-killer. Fixing There are two fixes. 
Upgrading to 64bit changes your zone partitioning such that 'Normal' is 4GB up to 36GB, so you won't end up 'defaulting' your memory allocation into a zone which can get so heavily fragmented. It's not that you have more addressable memory that fixes this problem (because you're using PAE already), merely that the zone you select from has more addressable memory. The second way (which I have never tested) is to try to get the kernel to more aggressively compact your memory. If you change the value of vm.extfrag_threshold from 500 to 100, it's more likely to compact memory in an attempt to honour a high-order allocation. Although, I have never messed with this value before - it will also depend on what your fragmentation index is, which is available in /sys/kernel/debug/extfrag/extfrag_index . I don't have a box at the moment with a new enough kernel to see what that shows to offer more than this. Alternatively you could run some sort of cron job (this is horribly, horribly ugly) to manually compact memory yourself by writing into /proc/sys/vm/compact_memory . In all honesty though, I don't think there really is a way to tune the system to avoid this problem -- it's the nature of the memory allocator to work this way. Changing the architecture of the platform you use is probably the only fundamentally resolvable solution. | {
"source": [
"https://serverfault.com/questions/564068",
"https://serverfault.com",
"https://serverfault.com/users/78983/"
]
} |
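A sketch of the second fix above expressed as commands, using the values suggested in the answer (these are tunables to experiment with, not a guaranteed cure; the last path assumes debugfs is mounted):
sysctl -w vm.extfrag_threshold=100
echo 1 > /proc/sys/vm/compact_memory
cat /sys/kernel/debug/extfrag/extfrag_index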
564,127 | I have two locations in nginx config that work: location ^~ /media/ {
proxy_pass http://backend.example.com;
}
location ^~ /static/ {
proxy_pass http://backend.example.com;
} How can I combine these two into one location? What I have done already: I tried this suggestion location ~ ^/(static|media)/ {
proxy_pass http://backend.example.com;
} but it doesn't work for me. Also, when I don't use backends, the following config is functioning properly: location ~ ^/(static|media)/ {
root /home/project_root;
} update (some strings from the log) xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /content/11160/ HTTP/1.1" 200 5310 "-" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36 OPR/18.0.1284.68"
xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /static/font-awesome/css/font-awesome.min.css HTTP/1.1" 404 200 "http://www.example.com/content/11160/" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome$
xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /static/bootstrap/css/bootstrap.min.css HTTP/1.1" 404 200 "http://www.example.com/content/11160/" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.$
xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /static/css/custom.css HTTP/1.1" 404 200 "http://www.example.com/content/11160/" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/53$
xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /static/colorbox/colorbox.css HTTP/1.1" 404 200 "http://www.example.com/content/11160/" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Sa$
xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /static/colorbox/jquery.colorbox-min.js HTTP/1.1" 404 200 "http://www.example.com/content/11160/" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.$
xx.xx.xx.xx - - [31/Dec/2013:13:48:18 +0000] "GET /static/js/scripts.js HTTP/1.1" 404 200 "http://www.example.com/content/11160/" "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537$ SOLUTION Actually my solution does work fine: location ~ ^/(static|media)/ {
root /home/project_root;
} and the issue has nothing to do with backends. As Guido Vaccarella correctly noticed it just followed after another location ~ ... that matched, so that my location ~ ... had no chance to run. | According to nginx documentation : Then regular expressions are checked, in the order of their appearance
in the configuration file. The search of regular expressions
terminates on the first match, and the corresponding configuration is
used. In your configuration, the following location is defined before the one with the proxy_pass and it matches the request of js and css files under static : location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
} Unfortunately the "log_not_found off" clause disables the logging for any file-not-found error related to this location, that's why your error_log is empty! You can try to comment out this location or move it after the location with the proxy_pass (if you need it for other files not in static / media ). | {
"source": [
"https://serverfault.com/questions/564127",
"https://serverfault.com",
"https://serverfault.com/users/187118/"
]
} |
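A sketch of the reordering suggested above, with the paths from the question: regex locations are evaluated in their order of appearance, so the combined block must come before the extension-matching one (or that one must be removed or limited so it no longer shadows /static/ and /media/):
location ~ ^/(static|media)/ {
root /home/project_root;
}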
564,385 | I'm trying to set the owner and group via rsync and it doesn't seem to be working. This is the command: sudo rsync -rlptDvz --owner=cmsseren --group=cmsseren /home/serena/public_html/ -e ssh root@ip:/home/cmsseren/public_html2/ The files sync correctly but it doesn't seem to change the owner and group. | Version 3.1.0 of rsync introduced the --usermap and --groupmap options mentioned by Thomas, but also the convenience option --chown , which works well for your scenario. --chown=USER:GROUP
This option forces all files to be owned by USER with group GROUP.
This is a simpler interface than using --usermap and --groupmap directly,
but it is implemented using those options internally, so you cannot mix them.
If either the USER or GROUP is empty, no mapping for the omitted user/group will
occur. If GROUP is empty, the trailing colon may be omitted, but if USER is
empty, a leading colon must be supplied.
If you specify "--chown=foo:bar, this is exactly the same as specifying
"--usermap=*:foo --groupmap=*:bar", only easier. Also, the -o and -g options are required. Excluding them will fail to update their respective attribute, but produce no error. rsync -og --chown=cmsseren:cmsseren [src] [dest] This is mentioned indirectly in the manpage , which
states that the --chown option "is implemented using --usermap and --groupmap internally", and: For the --usermap option to have any effect, the -o ( --owner ) option must be used (or implied),
and the receiver will need to be running as a super-user (see also the --fake-super option). For the --groupmap option to have any effect, the -g ( --groups ) option must be used (or implied),
and the receiver will need to have permissions to set that group. | {
"source": [
"https://serverfault.com/questions/564385",
"https://serverfault.com",
"https://serverfault.com/users/204012/"
]
} |
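Applied to the command from the question, a sketch (this assumes rsync 3.1.0 or newer on both ends, since --chown needs it; note the added -o and -g flags):
sudo rsync -rlptDvzog --chown=cmsseren:cmsseren -e ssh /home/serena/public_html/ root@ip:/home/cmsseren/public_html2/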
565,339 | I want to stop Nginx but it fails like this. $ sudo service nginx stop
Stopping nginx: [FAILED] And nginx.conf that defines place of nginx.pid have a line. # /etc/nginx/nginx.conf
pid /var/run/nginx.pid; But there is no nginx.pid in the directory /var/run/ . locate nginx.pid shows this output. /var/run/nginx.pid
/var/run/nginx.pid.oldbin But after updatedb there is no match for the search.
I'm using nginx/1.4.4 in CentOS release 6.5 (Final) . What should I do to stop the nginx daemon? Edit 2014/01/07 This is output of ps -ef | grep nginx , it seems nginx daemon is still running. ironsand 17065 16933 0 15:55 pts/0 00:00:00 grep --color nginx
root 19506 1 0 2013 ? 00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
ironsand 19507 19506 0 2013 ? 00:00:25 nginx: worker process And sudo service nginx restart gives this error. I think nginx fails to start because old one still alive. And /var/log/nginx/error.log-2014017 contains also this error. Stopping nginx: [FAILED]
Starting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] still could not bind()
[FAILED] | I recommend stopping nginx by killing its master process first. nginx was probably not shut down properly, which may be why it can't be stopped using the init script. ps -ef |grep nginx This will show you the PID of the nginx master process. Like you mentioned above: root 19506 1 0 2013 ? 00:00:00 nginx: master process
/usr/sbin/nginx -c /etc/nginx/nginx.conf Kill it using kill -9 19506 Verify once again whether any nginx process is still running or port 80 is still occupied. If you see any process bound to port 80, identify its PID and check whether it can be killed. ps -ef |grep nginx netstat -tulpn |grep 80 Make sure the filesystem is fine and you can read/write to the /var filesystem. Then start nginx: service nginx start | {
"source": [
"https://serverfault.com/questions/565339",
"https://serverfault.com",
"https://serverfault.com/users/195107/"
]
} |
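The sequence from the answer, condensed into commands (the PID is the one shown by ps; kill -QUIT is the gentler signal to try before resorting to -9):
ps -ef | grep '[n]ginx'
kill -QUIT 19506
ps -ef | grep '[n]ginx'
netstat -tulpn | grep ':80 '
service nginx start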
565,539 | We have 16GB of memory in our server and I noticed that around 10GB is marked as "standby" memory in the Resource Monitor. Do I need to worry about the big amount of standby memory? Is there a way to control this amount? Is there a way to find out what is in "standby"? It appears that "standby" is still considered as "available" on the Resource Monitor, so it might not be an issue. | It is just cached data that may be released when another app demands memory. Here is good description from Investigate memory usage with Windows 7 Resource Monitor : quote from the link: Standby The Standby list, which is shown in blue, contains pages that have
been removed from process working sets but are still linked to their
respective working sets. As such, Standby list is essentially a cache .
However, memory pages in the Standby list are prioritized in a range
of 0-7, with 7 being the highest. Essentially, a page related to a
high-priority process will receive a high-priority level in the
Standby list. For example, processes that are Shareable will be a high priority and
pages associated with these Shareable processes will have the highest
priority in the Standby list. Now, if a process needs a page that is associated with the process and
that page is now in the Standby list, the memory manager immediately
returns the page to that process' working set. However, all pages on
the Standby list are available for memory allocation requests from any
process. When a process requests additional memory and there is not
enough memory in the Free list, the memory manager checks the page's
priority and will take a page with a low priority from the Standby
list, initialize it, and allocate it to that process. | {
"source": [
"https://serverfault.com/questions/565539",
"https://serverfault.com",
"https://serverfault.com/users/204637/"
]
} |
565,542 | I would like to configure my httpd.conf to see the client IP address and some more information.
this is what I have: LogLevel warn
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
CustomLog "logs/access_log" combined env=!forwarded
CustomLog "logs/access_log" proxy env=forwarded
CustomLog /var/log/httpd/access_log combined env=!dontlog all I can see when I do tail -f /var/log/httpd/access.log is: combined
combined all the time.
what am I doing wrong? Thanks!
Dotan. | It is just cached data that may be released when another app demands memory. Here is good description from Investigate memory usage with Windows 7 Resource Monitor : quote from the link: Standby The Standby list, which is shown in blue, contains pages that have
been removed from process working sets but are still linked to their
respective working sets. As such, Standby list is essentially a cache .
However, memory pages in the Standby list are prioritized in a range
of 0-7, with 7 being the highest. Essentially, a page related to a
high-priority process will receive a high-priority level in the
Standby list. For example, processes that are Shareable will be a high priority and
pages associated with these Shareable processes will have the highest
priority in the Standby list. Now, if a process needs a page that is associated with the process and
that page is now in the Standby list, the memory manager immediately
returns the page to that process' working set. However, all pages on
the Standby list are available for memory allocation requests from any
process. When a process requests additional memory and there is not
enough memory in the Free list, the memory manager checks the page's
priority and will take a page with a low priority from the Standby
list, initialize it, and allocate it to that process. | {
"source": [
"https://serverfault.com/questions/565542",
"https://serverfault.com",
"https://serverfault.com/users/80829/"
]
} |
565,546 | I am trying to install / configure mod-sec using this tutorial , which uses the OWASP ModSecurity Core Rule Set . However when I go to restart apache, I get the following error: Syntax error on line 53 of /etc/modsecurity/base_rules/modsecurity_crs_20_protocol_violations.conf:
Error parsing actions: Unknown action: ver
Action 'configtest' failed.
The Apache error log may have more information.
...fail! This is the block of code it is having trouble with: (specifically ver:'OWASP_CRS/2.2.9',\ ) SecRule REQUEST_LINE "!^(?i:(?:[a-z]{3,10}\s+(?:\w{3,7}?://[\w\-\./]*(?::\d+)?)?/[^?#]*(?:\?[^#\s]*)?(?:#[\S]*)?|connect (?:\d{1,3}\.){3}\d{1,3}\.?(?::\d+)?|options \*)\s+[\w\./]+|get /[^?#]*(?:\?[^#\s]*)?(?:#[\S]*)?)$"\
"msg:'Invalid HTTP Request Line',\
severity:'4',\
id:'960911',\
ver:'OWASP_CRS/2.2.9',\
rev:'2',\
maturity:'9',\
accuracy:'9',\
logdata:'%{request_line}',\
phase:1,\
block,\
t:none,\
tag:'OWASP_CRS/PROTOCOL_VIOLATION/INVALID_REQ',\
tag:'CAPEC-272',\
setvar:'tx.msg=%{rule.msg}',\
setvar:tx.anomaly_score=+%{tx.notice_anomaly_score},\
setvar:'tx.%{rule.id}-OWASP_CRS/PROTOCOL_VIOLATION/INVALID_REQ-%{matched_var_name}=%{matched_var}'" I have installed modsec Version: 2.6.3-1ubuntu0.2 so I beleve it should work with the OWASP ModSecurity Core Rule Set Any ideas on how to get it working? Thanks in advance! | It is just cached data that may be released when another app demands memory. Here is good description from Investigate memory usage with Windows 7 Resource Monitor : quote from the link: Standby The Standby list, which is shown in blue, contains pages that have
been removed from process working sets but are still linked to their
respective working sets. As such, Standby list is essentially a cache .
However, memory pages in the Standby list are prioritized in a range
of 0-7, with 7 being the highest. Essentially, a page related to a
high-priority process will receive a high-priority level in the
Standby list. For example, processes that are Shareable will be a high priority and
pages associated with these Shareable processes will have the highest
priority in the Standby list. Now, if a process needs a page that is associated with the process and
that page is now in the Standby list, the memory manager immediately
returns the page to that process' working set. However, all pages on
the Standby list are available for memory allocation requests from any
process. When a process requests additional memory and there is not
enough memory in the Free list, the memory manager checks the page's
priority and will take a page with a low priority from the Standby
list, initialize it, and allocate it to that process. | {
"source": [
"https://serverfault.com/questions/565546",
"https://serverfault.com",
"https://serverfault.com/users/156809/"
]
} |
565,743 | My website gets thousands of hits daily from different IPs trying to access: /php-myadmin/
/myadmin/
/mysql/ ...and thousands of other variations. None of these directories exist, I don't even have phpmyadmin on my server. I don't think any of these attempts have been successful, however they must be taking their toll on the server's resources and wasting bandwidth, so I would like to stop them if possible. I've blocked a handful of these IPs but they keep coming back with fresh IPs, is there any way I can prevent this more permanently? | Don't worry about it. Serving a 404 is a tiny, tiny, tiny amount of work for a web server to do. You could probably serve ten 404's a second using a 486. The bandwidth per 404 is negligible; a tiny GET request and a tiny 404 response. Seriously; don't worry about it. This is just part and parcel of running a server on the internet. | {
"source": [
"https://serverfault.com/questions/565743",
"https://serverfault.com",
"https://serverfault.com/users/120569/"
]
} |
566,426 | I am creating a websocket server which will live on ws.mysite.example . I want the web socket server to be SSL encrypted as well as domain.example to be SSL encrypted. Do I need to purchase a new certificate for each subdomain I create? Do I need a dedicated IP address for each subdomain I create? I will likely have more than one subdomain. I am using NGINX and Gunicorn running on Ubuntu. | I'll answer this in two steps... Do You Need an SSL Cert for Each Subdomain ? Yes and No, it depends. Your standard SSL certificate will be for single domain, say www.domain.example . There are different types of certs you can aside from the standard single domain cert: wildcard and multi domain certs. A wild card cert will be issued for something like *.domain.example and clients will treat this as valid for any domain that ends with domain.example , such as www.domain.example or ws.domain.example . A multi domain cert is valid for a predefined list of domain names. It does this by using the Subject Alternative Name field of the cert. For example, you could tell an CA that you want a multi domain cert for domain.example and ws.mysite.example . This would allow it to be used for both domain names. If neither of these options work for you, then you would need to have two different SSL certs. Do I Need a Dedicated IP for Each Subdomain ? Again, this is a yes and no...it all depends on your web/application server. I am a Windows guy, so I will answer with IIS examples. If you are running IIS7 or older, then you are forced to bind SSL certs to an IP and you can not have multiple certs assigned to a single IP. This causes you to need to have a different IP for each subdomain if you are using a dedicated SSL cert for each subdomain. If you are using a multi domain cert or a wildcard cert, then you can get away with the single IP as you only have one SSL cert to begin with. If you are running IIS8 or later, then the same applies. However, IIS8+ includes support for something called Server Name Indication (SNI). SNI allows you to bind an SSL cert to a hostname, not to an IP. So the hostname (Server Name) that is used to make the request is used to indicate which SSL cert that IIS should use to for the request. If you use a single IP, then you can configure websites to respond to requests for specific hostnames. I know that Apache and Tomcat also have support for SNI, but I am not familiar them enough to know what versions support it. Bottom Line Depending on your application/web server and what type of SSL certs you are able to obtain will dictate your options. | {
"source": [
"https://serverfault.com/questions/566426",
"https://serverfault.com",
"https://serverfault.com/users/162287/"
]
} |
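Since the question mentions nginx: with SNI-capable nginx and clients, each name can get its own certificate on a single IP. A minimal sketch using the hostnames from the question and placeholder certificate paths:
server {
listen 443 ssl;
server_name domain.example;
ssl_certificate /etc/ssl/certs/domain.example.crt;
ssl_certificate_key /etc/ssl/private/domain.example.key;
}
server {
listen 443 ssl;
server_name ws.mysite.example;
ssl_certificate /etc/ssl/certs/ws.mysite.example.crt;
ssl_certificate_key /etc/ssl/private/ws.mysite.example.key;
}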
566,503 | I’m connecting over the web to a remote Windows Server 2012 R2 via Remote Desktop Connection for administration needs. It is a single web and database server without an AD etc. I’m not talking about Remote Desktop Services / Terminal Server, just the simple Remote Desktop feature activated through Control Panel > System > Remote Settings.
The server will automatically create a self-signed certificate to encrypt the connection and the Remote Desktop Connection client will show a certificate error due to the untrusted CA. I have a CA signed certificate issued to the FQDN of this server and valid for server authentication (I’m using it for MSSQL Server remote access). I’d like to use that one for RDP connections too. All tutorials (like this question ) I’ve found so far describe the process for the Remote Desktop Services or Terminal Service. I have found this question stating a wmic command to set a certificate, but I don't want to try setting some values when I don't know what exactly I'm doing.
What I have done is adding it to the Remote Desktop Certificates of Local Computer where the auto generated self-signed is located too. Is that possible? If yes, what do I have to do? Thanks! | The question you found that mentions using wmic to set the certificate thumbprint value should work without any additional feature installation. I asked and answered a similar question here with a little more detail. It also has a PowerShell equivalent for the wmic command. But I'll add some more explanation here as well. Since you're already using this certificate for MSSQL SSL, I assume it's already installed into one of the certificate stores on the system. If you installed it in the context of a service account that MSSQL is running as, you might also need to install it into the Personal or Remote Desktop store for the "Local Computer" as well. Once it's in there, you just need to update the SSLCertificateSHA1Hash value in Win32_TSGeneralSetting to point to it using one of the commands in my previous question . If you want to check what the value is currently set to and compare it to the self-signed certificate, you can change the wmic command to the following. You can also use this to validate that the new thumbprint value you tried to set is correct. wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Get SSLCertificateSHA1Hash The output should look something like this: | {
"source": [
"https://serverfault.com/questions/566503",
"https://serverfault.com",
"https://serverfault.com/users/205109/"
]
} |
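For completeness, the matching Set form of that wmic command looks like the sketch below; the thumbprint is a placeholder, so paste your certificate's SHA1 thumbprint with spaces removed and run it from an elevated prompt:
wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="0123456789abcdef0123456789abcdef01234567"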
567,075 | When AWS documentation and pricing refer to "usage" does this simply mean "if the instance is on" instead of "if the instance is on and doing work ". E.g., if I had an EC2 instance running but it was idle (CPU=0%), I assume I still get charged for that hour's usage. In this case, if I had an EC2 instance which hosted a website (which should be accessible 24/7), it would make sense to purchase a Reserved Instance. Then, if I had to bring other instance online to share the load, those would (likely) best be served as On-Demand Instances. Is my understanding correct? | Yes, your understanding is correct. There's no AWS charging based on CPU usage -- you pay the same for an instance whether its CPU usage is 0% or 100%. | {
"source": [
"https://serverfault.com/questions/567075",
"https://serverfault.com",
"https://serverfault.com/users/70361/"
]
} |
567,320 | I want to know the difference between "_default_:*" and "*:*" in the VirtualHost context. <VirtualHost _default_:*>
#...
ServerName host.example.com
#...
</VirtualHost>
<VirtualHost *:*>
#...
ServerName host.example.com
#...
</VirtualHost> I don't know the difference between them or the purpose of each. Thanks | The solution is in the Apache 2.2 documentation for the VirtualHost directive : Syntax: ...
(...)
Addr can be: The IP address of the virtual host; A fully qualified domain name for the IP address of the virtual host (not recommended); The character * , which is used only in combination with NameVirtualHost * to match all IP addresses; or The string _default_ , which is used only with IP virtual hosting to catch unmatched IP addresses. Two ways of handling Virtualhosts exists, Name based virtualhosting and IP based Virtualhosting. With named based virtualhosts you have a list of virtualhosts, each one managing one or several domain names, and each one associated with a couple listening IP:port . * is a special value which mean all IPs on this host . The default virtualHost is the first declared one on this list for each given listening address. With IP based VirtualHosts the ServerName directive of the VirtualHost is not used, the important information is the listening IP (and port), and the default VirtualHost is the first one matching the IP handling the incoming request.. So with a named based virtualhosting configuration: <Virtualhost *:80> with ServerName foo.com means "on all IPs managed on this host", "on port 80", "if the request host header is foo.com" I'll use this virtualhost <Virtualhost *:*> with Servername foo.com means "on all IPs managed on this host", "on all ports", "if the request host header is foo.com" I'll use this virtualhost <Virtualhost 10.0.0.2:*> with Servername foo.com means "for request incoming from my network interface 10.0.0.2", "on all ports", "if the request host header is foo.com" I'll use this virtualhost <Virtualhost _default_:*> with Servername foo.com : should not be used with name based virtualhosting And on an IP based Virtualhosting: <Virtualhost 10.0.0.2:*> means "I'll use this virtualhost for request coming on my 10.0.0.2 interface" <Virtualhost _default_:443> means "I'll use this virtualhost for all other network interface on my host for request coming on port 443" <Virtualhost _default_:*> means "I'll use this virtualhost for all other network interface on my host, if it is not matched by a previous rule, and if the request host header is not matched by a named based virtualhost" So it's all about defining a catch-all Virtualhost. Documentation adds: When using IP-based virtual hosting, the special name _default_ can be specified in which case this virtual host will match any IP address that is not explicitly listed in another virtual host. In the absence of any _default_ virtual host the "main" server config, consisting of all those definitions outside any VirtualHost section, is used when no IP-match occurs. (But note that any IP address that matches a NameVirtualHost directive will use neither the "main" server config nor the _default_ virtual host. See the name-based virtual hosting documentation for further details.) So after all theses things it becomes quite "clear" that mixing IP-based and name-based virtualhosting could become a mess. With Apache 2.2 Name based virtualhosting was used only if NameVirtualhost <something> was used. But with the new Apache 2.4 version theses things are really easier to understand, no NameVirtualhost declaration. The NameVirtualHost directive no longer has any effect, other than to emit a warning. Any address/port combination appearing in multiple virtual hosts is implicitly treated as a name-based virtual host . No more complex thoughs, even the documentation is now simplier: The character *, which acts as a wildcard and matches any IP address. 
The string _default_ , which is an alias for * So with apache 2.4 the answer is, it's the same thing . | {
"source": [
"https://serverfault.com/questions/567320",
"https://serverfault.com",
"https://serverfault.com/users/195770/"
]
} |
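A minimal sketch of the Apache 2.4 behaviour described in the answer above: with name-based virtual hosting, the first <VirtualHost> declared for a given address:port pair acts as the catch-all, so an explicit _default_ block is no longer needed. The ServerName values and paths are placeholders.

# Declared first, so this block catches any request whose Host header
# matches no other ServerName on *:80
<VirtualHost *:80>
    ServerName catchall.example.com
    DocumentRoot /var/www/default
</VirtualHost>

<VirtualHost *:80>
    ServerName host.example.com
    DocumentRoot /var/www/host
</VirtualHost>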
567,474 | As you're probably aware, by default when you install a package on a Debian or Ubuntu based system, if the package contains a service, that service will generally be enabled and started automatically when you install the package. This is a problem for me. I've found myself needing to manage templates for building LXC containers. There are several containers, each corresponding to a Debian or Ubuntu release. (There are also Red Hat-based containers, but they aren't relevant here.) /var/lib/libvirt/filesystems/debian6_template
/var/lib/libvirt/filesystems/debian7_template
/var/lib/libvirt/filesystems/ubuntu1004_template
/var/lib/libvirt/filesystems/ubuntu1204_template Occasionally I will find that the templates have a missing package or need some other change, so I will chroot into them to install the package. Unfortunately when I do that, I wind up with several copies of the package's service running! By way of example, I found the templates didn't have a syslog daemon, so I installed one: for template in /var/lib/libvirt/filesystems/{debian,ubuntu}*_template; do
chroot $template apt-get install rsyslog
done And promptly wound up with four copies of rsyslog running. Not to mention two copies of exim4. Oops! I read somewhere (though I can't find it again now) that it's not supposed to start services when running in a chroot, but that clearly isn't happening here. One potentially viable nasty hack calls for temporarily replacing the various commands which actually start services, such as start-stop-daemon and initctl , though this is a lot more work than I really wanted to do. If I have no other choice, though... The ideal solution here would be for Debian-based systems to stop doing this crap, but failing that, perhaps an obscure or undocumented command line option for apt-get ? In case it wasn't clear, I really want to keep anything related to managing the templates outside the templates, if possible. | For debian you can do this with policy-rc.d . Here's one explanation : A package’s maintainer scripts are supposed to only interface with the init system by means of invoke-rc.d, update-rc.d and the LSB init script headers...
invoke-rc.d will, before taking its action, check whether
/usr/sbin/policy-rc.d is executable, will call it with the respective
service name and the current runlevel number on its command line and
act according to its exit code. For example, a return value of 101
will prevent the planned action from being taken. This includes the
automated start of the service upon package installation as well as
the stop of the service on package removal and reduces the
stop-upgrade-restart ritual during package upgrades to just performing
the upgrade which might leave the old version of the service running Since you don't want any services to ever start, your policy-rc.d script can be simply #!/bin/sh
exit 101 This is the technique used by tools like pbuilder and Docker's mkimage/debootstrap . Unfortunately, this technique does not work with Ubuntu chroots. Packages that integrate with the upstart init system call /usr/sbin/initctl instead of invoke-rc.d during installation, and initctl doesn't consult policy-rc.d. According to upstart's author the workaround is to replace /sbin/initctl with a symlink to /bin/true in a chroot. You can see this in mkimage-debootstrap as well, they do dpkg-divert --local --rename --add /sbin/initctl
ln -sf /bin/true sbin/initctl | {
"source": [
"https://serverfault.com/questions/567474",
"https://serverfault.com",
"https://serverfault.com/users/126632/"
]
} |
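Putting the two workarounds from the answer above together for the template loop in the question, the sequence might look roughly like this. It is an untested sketch: the policy-rc.d body and the initctl diversion come from the answer, while the clean-up steps at the end are an assumption about how you would want the templates left afterwards.

for template in /var/lib/libvirt/filesystems/{debian,ubuntu}*_template; do
    # stop invoke-rc.d from starting services inside the chroot
    printf '#!/bin/sh\nexit 101\n' > "$template/usr/sbin/policy-rc.d"
    chmod +x "$template/usr/sbin/policy-rc.d"

    # Ubuntu templates: upstart's initctl ignores policy-rc.d, so divert it
    chroot "$template" dpkg-divert --local --rename --add /sbin/initctl
    chroot "$template" ln -sf /bin/true /sbin/initctl

    chroot "$template" apt-get install -y rsyslog

    # undo the changes so the template behaves normally when booted
    rm -f "$template/usr/sbin/policy-rc.d"
    chroot "$template" rm -f /sbin/initctl
    chroot "$template" dpkg-divert --local --rename --remove /sbin/initctl
done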
567,775 | Hopefully , we all know what the recommendations for naming an Active Directory forest are , and they're pretty simple. Namely, it can be summed up in a single sentence. Use a subdomain of an existing, registered domain name, and pick one that's not going to be used externally. For example, if I were to incorporate and register the hopelessn00b.com domain, my internal AD forest should be named internal.hopelessn00b.com or ad.hopelessn00b.com or corp.hopelessn00b.com . There are overwhelmingly compelling reasons to avoid using "fake" tlds or single-label domain names , but I'm having a hard time finding similarly compelling reasons to avoid using the root domain ( hopelessn00b.com ) as my domain name and use a subdomain such as corp.hopelessn00b.com instead. Really, the only justification I can seem to find is that accessing the external website from internal requires an A name DNS record and typing www. in front of the website name in a browser, which is pretty "meh" as far as problems go. So, what am I missing? Why is it so much better to use ad.hopelessn00b.com as my Active Directory forest name over hopelessn00b.com ? Just for the record, it's really my employer that needs convincing - the boss man is back-peddling, and after giving me the go ahead to create a new AD forest named corp.hopelessn00b'semployer.com for our internal network, he's wanting to stick with an AD forest named hopelessn00b'semployer.com (the same as our externally registered domain). I'm hoping that I can get some compelling reason or reasons that the best practice is the better option, so I can convince him of that... because it seems easier than rage quitting and/or finding a new job, at least for the moment. Right now, "Microsoft best practices" and internally accessing the public website for our company don't seem to be cutting it, and I'm really , really , really hoping someone here has something more convincing. | So much rep to be had. Come to me precious. Ok, so it's pretty well documented by Microsoft that you shouldn't use split-horizon, or a made up TLD as you've linked to many times (shout out to my blog!). There are a few reasons for this. The www problem that you've pointed out above. Annoying, but not a deal breaker. It forces you to maintain duplicate records for all public-facing servers that are also accessible internally, not just www . mail.hopelessnoob.com is a common example. In an ideal scenario, you'd have a separate perimeter network for things like mail.hopelessnoob.com or publicwebservice.hopelessnoob.com . With some configurations, like an ASA with Internal and External interfaces , you either need inside-inside NAT or split-horizon DNS anyway but for larger organizations with a legitimate perimeter network where your web-facing resources aren't behind a hairpin NAT boundary - this causes unnecessary work. Imagine this scenario - You're hopelessnoob.com internally and externally. You have a corporation that you're partnering with called example.com and they do the same thing - split horizon internally with their AD and with their publicly accessible DNS namespace. Now, you configure a site-to-site VPN and want internal authentication for the trust to traverse the tunnel while having access to their external public resources to go out over the Internet. It's next to impossible without unbelievably complicated policy routing or holding your own copy of their internal DNS zone - now you've just created an additional set of DNS records to maintain. 
So you have to deal with hairpinning at your end and their end, policy routing/NAT, and all kinds of other trickery. (I was actually in this situation with an AD that I inherited). If you ever deploy DirectAccess , it drastically simplifies your name resolution policies - this is likely also true for other split-tunnel VPN technologies as well. Some of these are edge cases, some are not, but they're all easily avoided. If you have the ability to do this from the beginning, might as well do it the right way so that you don't run into one of these in a decade. | {
"source": [
"https://serverfault.com/questions/567775",
"https://serverfault.com",
"https://serverfault.com/users/118258/"
]
} |
567,874 | I have had reports of my remote workstation freezing for several months, and it turns out that this is happening: User goes to print something to PDF (or save it). The file dialog box comes up asking where they want the file to go. They click something else, or the dialog otherwise somehow ends up behind something. They sit and stare at the PDF software that won't do anything, because it's waiting for them. They decide the "computer" is "frozen" and call in to have it restarted, which my other (Non IT) staff just does. They complain to me that the computer is slow and keeps freezing. This seems to be happening a lot . We are a bookkeeping company and do a lot of printing / PDF work. I've tried the human approach, which would be educating the users. No luck. I don't think they are going to get it. How can we fix this? Is there a way to make Windows (or Acrobat, if you know anything about that - it's my very favorite program) just put the file somewhere by default to prevent the user from having to deal with the file dialog? This is a Windows 7 x64 computer, accessed remotely via Remote Desktop Connection . | They decide the "computer" is "frozen" and call in to have it restarted, which my other staff just does. Here is your problem. This is not a technical fault, so don't try and implement a technical solution. Instead, you should implement a process whereby every call or ticket for this type of problem actually gets troubleshooted before any action is taken. People tend to stop making silly mistakes when you make them fix it themselves. If a user calls in with this problem - just ask them if they have any open dialogue windows, or if they have tried pressing ALT+TAB. Make a wiki item on your help page with some simple troubleshooting steps that the user can take. That way when they call in with this problem, you can ask if they've checked the "My computer is frozen" troubleshooting guide. | {
"source": [
"https://serverfault.com/questions/567874",
"https://serverfault.com",
"https://serverfault.com/users/8913/"
]
} |
567,879 | Unfortunately, one of the forms on my website was used to send completely senseless spam mails. I found out through http://mxtoolbox.com that my IP address is listed on two blacklists, so I want to set up a new IP address. However, I don't know whether only IP addresses get blacklisted or domains as well. Can anyone help? Many greetings | They decide the "computer" is "frozen" and call in to have it restarted, which my other staff just does. Here is your problem. This is not a technical fault, so don't try and implement a technical solution. Instead, you should implement a process whereby every call or ticket for this type of problem actually gets troubleshooted before any action is taken. People tend to stop making silly mistakes when you make them fix it themselves. If a user calls in with this problem - just ask them if they have any open dialogue windows, or if they have tried pressing ALT+TAB. Make a wiki item on your help page with some simple troubleshooting steps that the user can take. That way when they call in with this problem, you can ask if they've checked the "My computer is frozen" troubleshooting guide. | {
"source": [
"https://serverfault.com/questions/567879",
"https://serverfault.com",
"https://serverfault.com/users/205222/"
]
} |
568,627 | I have a program that should behave differently if it is being run under "sudo". Is there a way it can find out if it was run under sudo? Update: Someone asked why would I want to do this. In this case, on a Mac using MacPorts there is output that tells you to cut-and-paste a particular command. If the MacPorts command was run with "sudo", it should include sudo in the sample command: $ sudo port selfupdate
---> Updating MacPorts base sources using rsync
MacPorts base version 2.2.1 installed,
MacPorts base version 2.2.1 downloaded.
---> Updating the ports tree
---> MacPorts base is already the latest version
The ports tree has been updated. To upgrade your installed ports, you should run
port upgrade outdated
^^^^^^^^^ it would be really sweet if it output "sudo port upgrade outdated" instead. It would be even better if it just did it for you :-) | Yes, there are 4 environment variables set when a program is running under sudo: $ sudo env |grep SUDO
SUDO_COMMAND=/usr/bin/env
SUDO_USER=tal
SUDO_UID=501
SUDO_GID=20 Note that these can be faked by simply setting them. Don't trust them for anything critical. For example: In this program we need to tell the user to run some other program. If the current one was run with sudo, the other one will be too. #!/bin/bash
echo 'Thank you for running me.'
if [[ $(id -u) == 0 ]]; then
if [[ -z "$SUDO_COMMAND" ]]; then
echo 'Please now run: next_command'
else
echo 'Please now run: sudo next_command'
fi
else echo 'Error: Sadly you need to run me as root.'
exit 1
fi Note that it only tests for a SUDO_* variable if it can first prove that it is running as root. Even then it only uses it to change some helpful text. | {
"source": [
"https://serverfault.com/questions/568627",
"https://serverfault.com",
"https://serverfault.com/users/6472/"
]
} |
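Applied to the MacPorts-style hint in the question above, the same check can be used to build the suggested command. This is a small sketch, not actual MacPorts code:

# prepend "sudo " to the hint only when we are root via sudo
prefix=""
if [ "$(id -u)" -eq 0 ] && [ -n "$SUDO_USER" ]; then
    prefix="sudo "
fi
echo "To upgrade your installed ports, you should run"
echo "  ${prefix}port upgrade outdated"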
568,638 | My website loads more slowly than I think it should, due to a few of the assets taking an absurdly long time to download from the server. I've been trying to track down the cause of this. I'm about 95% sure it is a networking issue, not an Apache issue, due to the tests I've done (see below). Here's a screenshot from Firefox's network inspector . Note that the stuck assets are usually some of these images, but it has occurred on other assets like Javascript files, etc. Hypothesis and Question My current theory is that our colo's bandwidth limit is causing packet loss when the browser downloads resources in parallel, perhaps momentarily above the bandwidth limit.
Is this a sensible theory? Is there anything we can change apart from requesting more bandwidth, even though we don't use most of the bandwidth most of the time? Or, is there some other avenue I need to be researching? Configuration Apache 2.4.3 on Fedora 18, CPU and memory with lots of available capacity. Gigabit Ethernet to a switch to a 4 or 5 Mbps uplink via the colocation facility. It isn't a very high traffic site. Rarely more than a couple visitors at once. Tests I've Done traceroute to the server is fine. traceroute from the server to, say, our office does stop after 8 or so hops. I'm hypothesizing that this is due to traceroute traffic getting blocked somewhere (since things like wget —see below— and ssh seem to largely work fine), but I can provide more details if this is pertinent. strace on Apache indicated that the server immediately was serving up the entire image without delay. tcpdump / wireshark showed that the image data was sent immediately, but then, later, some packets were retransmitted. One trace in particular showed that the final packet of the asset was transmitted immediately by the server, retransmitted several times, but the original packet was the one the browser finally received. While I could sometimes reproduce the problem downloading the page via wget , it didn't happen as regularly as it did in the browser. My hypothesis is that this is because wget doesn't parallelize downloads. Testing with iperf was interesting. Using iperf 's UDP mode, I found that I had next to no packet loss at speeds up to about 4 Mbps. Above that, I began seeing ~10% packet loss. Similarly, in TCP mode, small numbers of parallel connections split the bandwidth sensibly between them. On the other hand 6 or more parallel connections started getting a "sawtooth" bandwidth pattern, where a connection would sometimes have bandwidth and not other times. I'd be happy to provide more details on any of these, but I didn't want to crowd this post with details not pertinent. I'm hardly knowledgeable enough in networking to know what information is useful and what isn't. :-D Any pointers to other good network-troubleshooting tools would be swell. EDIT 1: Clarified my near-certainty that Apache isn't to blame, but rather networking something-or-other. EDIT 2: I tried iperf between this server and another of ours on the same gigabit switch, and got a pretty consistent 940+ Mbits/s. I think that rules out most of the hardware problems or duplex mismatches on our end, except perhaps the uplink. EDIT 3: While the specifics are very different, this post about a TCP incast problem sounds very similar, in terms of having high-bandwidth traffic shuffled down a narrow pipe in small bursts and losing packets. I need to read it in more detail to see if any of the specifics apply to our situation. | Yes, there are 4 environment variables set when a program is running under sudo: $ sudo env |grep SUDO
SUDO_COMMAND=/usr/bin/env
SUDO_USER=tal
SUDO_UID=501
SUDO_GID=20 Note that these can be faked by simply setting them. Don't trust them for anything critical. For example: In this program we need to tell the user to run some other program. If the current one was run with sudo, the other one will be too. #!/bin/bash
echo 'Thank you for running me.'
if [[ $(id -u) == 0 ]]; then
if [[ -z "$SUDO_COMMAND" ]]; then
echo 'Please now run: next_command'
else
echo 'Please now run: sudo next_command'
fi
else echo 'Error: Sadly you need to run me as root.'
exit 1
fi Note that it only tests for a SUDO_* variable if it can first prove that it is running as root. Even then it only uses it to change some helpful text. | {
"source": [
"https://serverfault.com/questions/568638",
"https://serverfault.com",
"https://serverfault.com/users/206207/"
]
} |
568,640 | I have a few servers running Ubuntu Server 12.04.4 LTS, and they have all had an intermittent problem with uploading files from my Windows development machine. Occasionally when an upload is started (via SFTP) the upload starts in the client, the file is created server side, then it times out. The file on the server remains at 0kb. It can be deleted or overwritten, but once this has occurred once, each subsequent file upload results in the same problem for a period of time, sometimes 5 minutes, sometimes hours. Downloads work normally. File size doesn't seem to matter (1kb or 50mb), different SFTP clients result in the same error. Pulling my hair out over this one, and all my searching has not turned up an answer. Update: Using PHPStorm, I am still running into this same issue, but it gives a little more information. The upload progress bar completes, it spins for a while, then it says: Failed to transfer file 'filename.ext': could not close the output stream for file "sftp://host.tld/filename.ext". I tried turning off the firewall on the server, thinking maybe it was getting in the way ( sudo ufm disable ) to no effect. Update 2 (2014-07-29) I have found that if I connect to an encrypted VPN, I never have this issue, and it fixes the issue if I enable it after having problems without it. This leads me to think that this is somehow connected to my ISP? Is this at all a possibility? The only difference in the traffic is that it is encrypted to the VPN source, which is NOT internal to the server (so the server is still seeing it as external traffic). | Yes, there are 4 environment variables set when a program is running under sudo: $ sudo env |grep SUDO
SUDO_COMMAND=/usr/bin/env
SUDO_USER=tal
SUDO_UID=501
SUDO_GID=20 Note that these can be faked by simply setting them. Don't trust them for anything critical. For example: In this program we need to tell the user to run some other program. If the current one was run with sudo, the other one will be too. #!/bin/bash
echo 'Thank you for running me.'
if [[ $(id -u) == 0 ]]; then
if [[ -z "$SUDO_COMMAND" ]]; then
echo 'Please now run: next_command'
else
echo 'Please now run: sudo next_command'
fi
else echo 'Error: Sadly you need to run me as root.'
exit 1
fi Note that it only tests for a SUDO_* variable if it can first prove that it is running as root. Even then it only uses it to change some helpful text. | {
"source": [
"https://serverfault.com/questions/568640",
"https://serverfault.com",
"https://serverfault.com/users/187607/"
]
} |
569,730 | Taking a spin off of this question: Do I really need MS Active Directory? in a new direction for 2014. Taking into account a basic Windows infrastructure: domain controllers Exchange 2007/2010/2013 Sharepoint SQL File Servers / Print Servers AD Integrated DNS AD authenticated 3rd party devices (let's say 802.1X for networking and maybe some content-filtering, etc.) AD/LDAP authenticated "administrative" functions on IT apps/hardware/etc. perhaps some KMS stuff throw in a CA if you'd like home grown apps 3rd party in-house apps Now, let's rip it all out and decide we are going to the cloud. We've contracted to move Exchange/Sharepoint/File Services to Office 365. SQL will now be hosted as well on something like Azure. We've gotten away from the need for AD-DNS and simply run everything via a simple Windows DNS server. We still need 802.1X and would like SSO if possible to our various cloud apps. Home grown and 3rd party in-house apps would likely stay, but have the ability to use internal user databases instead of AD authentication The question is...do we really need Active Directory at all? Or more to the point, AD on-premise or even hosted via Azure or similar (ADFS) or running ADDS on a hosted VM through Azure or similar. Could/Should we look to something else like a 3rd party SSO option such as http://www.onelogin.com/partners/app-partners/office-365/ or similar that can provide SSO functionality even if it is as simple as LastPass or similar for each user? What kind of legitimate needs does AD fulfill if everything else in the cloud? Could a MS-centric infrastructure get away with not having AD at all if they move everything that previously relied on AD to SaaS offerings that didn't rely on AD authentication? | I've managed large numbers of workstations without AD. I had power tools (Altiris Deployment Solution), but it still hurt in certain situations: Security auditor comes in and says that our default workstation password policy isn't good enough. In order to change password complexity and expiration, etc., on 5,000 machines, we had to write a (nontrivial) script and schedule that to run on all machines. (Good luck catching the laptops, by the way!) Mapping department printers. Sure, we could use the IP number. That means that if Department A and Department B get into a printer war, the remedy involves staking out the printer and then following the offender back to their workstation to remove the printer from their workstation. (I suppose you could buy print management software instead.) Also, how did that printer end up on their workstation in the first place if they're not supposed to use it, and how will you prevent it from ending up there again? There are registry keys for WSUS, so you technically don't need AD for patch management. However, if you include those registry keys in the image, you need to make sure and delete a couple of keys (SusClientID and PingID) or else they will never get updates ever. Or, to be more specific and accurate, only one of them will get updates. Software installs. You can do these with power tools (LANdesk, Altiris, etc.), but that's extra money. "Poison" printer drivers. I've seen a couple of these. The best remedy was a print queue with an updated driver. Windows 7 printing would have epic tantrums unless we set allowed forest/allowed hosts in point and print restrictions. Perhaps this wouldn't be a big deal if all printers were ip-only, as long as User1 never wants to use User2's local printer. 
Without AD, our techs had to either use gpedit on the workstation or on the master image. You're assuming cloud Exchange, but I'm also going to add that email migrations and other large infrastructural changes without AD are painful on the client end. I scripted the "remove software from old failed migration/add workstation to AD/migrate user's profile from local to domain/demote user from admin to power user/make changes to firewall" jobs and ran them through Altiris. (The Microsoft consultants were suggesting we hire temps with thumb drives until I showed them my kung-fu.) Also, there are software vendors who look at you like you have three heads when you tell them you have workgroups rather than domains. Altiris runs in workgroups, but your desktop techs are never allowed to change their passwords, for example. (Okay, okay. They can change their password. But they also have to swing by your cube and type their new password into the server, or tell you what their new password is.) What I'm getting at is: You can manage lots of workstations without AD, but you may need to buy replacement software, and even with nice software you'll run into painful things. | {
"source": [
"https://serverfault.com/questions/569730",
"https://serverfault.com",
"https://serverfault.com/users/7861/"
]
} |
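To make the scripting pain in the answer above concrete, these are the kinds of commands such scripts typically wrap: local password policy via net accounts, and clearing the duplicated WSUS client identity the answer mentions. Treat the exact registry value names as assumptions to verify against your Windows version.

rem local password policy on a workgroup machine (no GPO available)
net accounts /maxpwage:90 /minpwlen:8 /uniquepw:5

rem clear a cloned WSUS client identity baked into the image
net stop wuauserv
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientId /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientIdValidation /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v PingID /f
net start wuauserv
wuauclt /resetauthorization /detectnow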
570,024 | First the specific problem:
In linux, I use zcat to list a .zip file. In osx, zcat seems to automatically append .Z to the file name. Various people suggest replacing zcat with gzcat; however, gzcat complains that the file is not in gzip format! 'file ' shows this:
...Zip archive data, at least v2.0 to extract So neither zcat nor gzcat will work in osx; what can I do? I have a medium-sized script in bash which uses zcat/gzcat, sed, awk and other basic utilities to process a number of files. I'd like to duplicate that environment on my osx laptop so I can work offline. Any general suggestions on how I can avoid such pain? I expect this is a fairly routine workflow, so it must have been sorted out by others. | You are right. It's annoying behavior. $ zcat foo.txt.gz
zcat: can't stat: foo.txt.gz (foo.txt.gz.Z): No such file or directory Try this: $ zcat < foo.txt.gz
asdfadsf | {
"source": [
"https://serverfault.com/questions/570024",
"https://serverfault.com",
"https://serverfault.com/users/23398/"
]
} |
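One low-effort way to keep the script in the question above portable between Linux and OS X is to stop relying on zcat's suffix handling altogether. A sketch, assuming the inputs really are gzip files (for genuine multi-member .zip archives, unzip -p would be the equivalent):

# gunzip -c reads the named file and writes to stdout on both platforms
zcat() {
    gunzip -c "$@"
}

zcat foo.txt.gz | sed 's/foo/bar/' | awk '{ print $1 }'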
570,126 | I use the GitLab API to fetch a list of the projects I have access to (URL /api/v3/projects/all?private_token=xxx ), but there are 6-7 projects that are not included in the list for some reason. EDIT: My user is an administrator, and I want to list all projects like the /projects/all URL indicates. I have access to the projects just fine using git itself and the GitLab web interface. Any suggestions why the projects wouldn't be shown in the list from the API? All the projects missing are newer than the others. I have tried refreshing my API token; no change. Versions: GitLab 6.4.3
GitLab Shell 1.8.0
GitLab API v3
Ruby 2.0.0p353
Rails 4.0.2 | I just tested this and it looks like the GitLab API response is using pagination. According to the documentation ( http://api.gitlab.org ), the default number of results per page is set to 20 and the starting page is 1. To adjust the maximum results per page, you need to use the per_page variable in the HTTP request line. You can change the page number by using page as well, if you have more repositories than the maximum value of per_page . You can specify a maximum per_page value of 100. For example, you request may look like: https://git.example.com/api/v3/projects/all?page=1&per_page=100&private_token=abc123 The page and per_page variables are not required as they have default values, so you do not need to include either if you don't want to. Hopefully this solves your problem. | {
"source": [
"https://serverfault.com/questions/570126",
"https://serverfault.com",
"https://serverfault.com/users/54162/"
]
} |
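A rough shell sketch of walking the pages described in the answer above until an empty page comes back; the host and token are the placeholders from the question:

page=1
while :; do
    out=$(curl -s "https://git.example.com/api/v3/projects/all?private_token=xxx&per_page=100&page=$page")
    # GitLab returns an empty JSON array once you run past the last page
    [ "$out" = "[]" ] && break
    echo "$out"
    page=$((page + 1))
done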
570,255 | I have an Ubuntu server where I'm automounting an external hard drive each boot. To do this, I've created an empty folder on the root partition, and the drive gets mounted "inside" this folder. However, what if I perform a backup to this path when the drive isn't properly mounted? The backup would instead fill up my root partition! I can ensure that the drive is mounted each time by performing: sudo mount -a ... before each backup. However, what are the best practices to ensure that data is never written to the empty mount-folder (except when the external hard drive is truly mounted)? Can this be solved without scripting? Say with permissions for example? What are the best practices? | I go a step further and always set the attributes of my mountpoint directories to immutable using chattr . This is accomplished with chattr +i /mountpoint (with the mount unmounted). This would error-out on new write activity and also protects the mount point in other situations. But I suppose you could use the mountpoint command , too ;) | {
"source": [
"https://serverfault.com/questions/570255",
"https://serverfault.com",
"https://serverfault.com/users/35720/"
]
} |
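Combined with the chattr trick above, the backup job itself can refuse to run when the drive is absent. A minimal sketch, assuming the mount point is /mnt/external and the backup uses rsync:

#!/bin/bash
BACKUP_DIR=/mnt/external   # hypothetical mount point

# mountpoint exits non-zero if nothing is mounted there
if ! mountpoint -q "$BACKUP_DIR"; then
    echo "External drive is not mounted at $BACKUP_DIR, aborting backup" >&2
    exit 1
fi

rsync -a /home/ "$BACKUP_DIR/home-backup/"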
570,288 | I just installed an SSL Certificate on my server. It then set up a redirect for all traffic on my domain on Port 80 to redirect it to Port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache Virtual Hosts file with something like this... RewriteEngine on
ReWriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L] My question is, are there any drawbacks to using SSL? Since this is not a 301 Redirect, will I lose link juice/ranking in search engines by switching to https ? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE Strangely Bing creates this screenshot from my site now that it is using HTTPS everywhere... | The [R] flag on its own is a 302 redirection ( Moved Temporarily ). If you really want people using the HTTPS version of your site (hint: you do), then you should be using [R=301] for a permanent redirect: RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L] A 301 keeps all your google-fu and hard-earned pageranks intact . Make sure mod_rewrite is enabled: a2enmod rewrite To answer your exact question: Is it bad to redirect http to https? Hell no. It's very good. | {
"source": [
"https://serverfault.com/questions/570288",
"https://serverfault.com",
"https://serverfault.com/users/13943/"
]
} |
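For completeness, the same permanent redirect can be achieved without mod_rewrite by giving port 80 its own small virtual host; a sketch with a placeholder name:

<VirtualHost *:80>
    ServerName example.com
    # mod_alias issues a 301 to the HTTPS site
    Redirect permanent / https://example.com/
</VirtualHost>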
570,289 | I'm using ubuntu 12.04 and trying to get things secured. I'm still pretty new to Linux so I'm not quite sure how to interpret this. I logged into my root account using mysql -u root -p and then to see all of the users I typed SELECT User FROM mysql.user; which showed the following +------------------+
| User |
+------------------+
| root |
| root |
| |
| root |
| |
| Testing |
| debian-sys-maint |
| phpmyadmin |
| root |
+------------------+ I logged into phpmyadmin to check out what each of the root accounts for and noticed they all have different hosts. Localhost, 127.0.0.1, ::1 and another IP address. Is it necessary to keep all of these? I currently SSH into my server (using a key pair) and then access MySQL through the terminal or through PHPMyadmin directly from my URL, so I'm pretty sure I just access it through the localhost root account and none of the others. If I change my root password will all of the other MySQL root accounts change (from the different hosts)? What would you guys do in this situation to make it more secure? Here's what I was thinking of doing, but maybe there's a better way. I was going to change the MySQL root user's password to something long and random (and write it down), and create another account with a shorter password for everyday management. For the record, I have already restricted IP access to PHPMyAdmin and made an alias, I just want to do everything I can to prevent some jerk from trying to get a hold of it. | The [R] flag on its own is a 302 redirection ( Moved Temporarily ). If you really want people using the HTTPS version of your site (hint: you do), then you should be using [R=301] for a permanent redirect: RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L] A 301 keeps all your google-fu and hard-earned pageranks intact . Make sure mod_rewrite is enabled: a2enmod rewrite To answer your exact question: Is it bad to redirect http to https? Hell no. It's very good. | {
"source": [
"https://serverfault.com/questions/570289",
"https://serverfault.com",
"https://serverfault.com/users/206099/"
]
} |
570,302 | I have an HP Proliant SL250s Gen8 (right-hand) server which is apparently not able to boot from the network. If I configure the system to PXE boot in the BIOS, or if I hit F12, it does start up the Intel boot agent (for netboot) and claims to be attempting to DHCP. However, the system does not boot. In addition, running tcpdump -i eth0 -s 1500 port bootps or port bootpc on the PXE server during boot does not show any DHCP requests from the server MAC address. I have confirmed that a second server, using the same switch port and cable, is capable of booting using my PXE server. When booting the second server, the above tcpdump command successfully captures the DHCP request. I haven't worked with HP servers much before. Is there a "make it boot" button in the BIOS somewhere that I've missed? ;-) | The [R] flag on its own is a 302 redirection ( Moved Temporarily ). If you really want people using the HTTPS version of your site (hint: you do), then you should be using [R=301] for a permanent redirect: RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L] A 301 keeps all your google-fu and hard-earned pageranks intact . Make sure mod_rewrite is enabled: a2enmod rewrite To answer your exact question: Is it bad to redirect http to https? Hell no. It's very good. | {
"source": [
"https://serverfault.com/questions/570302",
"https://serverfault.com",
"https://serverfault.com/users/57056/"
]
} |
570,306 | I'm currently trying to set up Master-Master MySQL replication between 2 servers, but I'm encountering an unusual issue. I'm getting this error on my MySQL log: Slave: Table 'phpmyadmin.pma_recent' doesn't exist Error_code: 1146 And I'm seeing that this table 'pma_recent' indeed doesn't exist on this particular server, but on the other server I'm setting up this replication with it does. Any ideas as to what I should be doing here? Should I be looking into adding this table on the server getting the error, or removing it on the other? | The [R] flag on its own is a 302 redirection ( Moved Temporarily ). If you really want people using the HTTPS version of your site (hint: you do), then you should be using [R=301] for a permanent redirect: RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L] A 301 keeps all your google-fu and hard-earned pageranks intact . Make sure mod_rewrite is enabled: a2enmod rewrite To answer your exact question: Is it bad to redirect http to https? Hell no. It's very good. | {
"source": [
"https://serverfault.com/questions/570306",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
570,316 | I am bringing up an OpenVPN server that will support multiple clients on a private subnet. On that private subnet the connecting clients will get IP addresses such as 10.8.0.10, 10.8.0.11, etc. One of the facilities I need is for the clients to be able to find each other. Is there any easy and generally accepted way for a client to see the list of IP addresses the server has assigned to all clients? I don't need DNS names or anything like that. | In the OpenVPN server configuration file, a prerequisite is the following directive: # Uncomment this directive to allow different
# clients to be able to "see" each other.
# By default, clients will only see the server.
# To force clients to only see the server, you
# will also need to appropriately firewall the
# server's TUN/TAP interface.
client-to-client To facilitate the clients finding each other easily I would suggest dynamic DNS as the (just about) always present enterprise solution. To present a list of active clients you could perhaps either: find a way of distributing or making available the openvpn-status.log to the clients? distribute ping scripts or similar to clients, perhaps doing a reverse dns lookup for every live host? have the clients register/deregister themselves in a custom db or file upon connection and have a scavenging mechanism of some sort. This alternative seems like totally reinventing the wheel, but no doubt it would be a fun way of spending an hour which adds nothing to the world of IT at large. | {
"source": [
"https://serverfault.com/questions/570316",
"https://serverfault.com",
"https://serverfault.com/users/97933/"
]
} |
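For the "register/deregister on connect" idea in the answer above, OpenVPN exposes the client name and its VPN address to connect/disconnect hooks through environment variables. A rough, untested sketch (the file paths are assumptions):

# server.conf
script-security 2
client-connect    /etc/openvpn/register-client.sh
client-disconnect /etc/openvpn/unregister-client.sh

# /etc/openvpn/register-client.sh
#!/bin/sh
echo "$ifconfig_pool_remote_ip $common_name" >> /etc/openvpn/connected-clients.txt

# /etc/openvpn/unregister-client.sh
#!/bin/sh
grep -v " $common_name$" /etc/openvpn/connected-clients.txt > /tmp/clients.$$ \
    && mv /tmp/clients.$$ /etc/openvpn/connected-clients.txt

Clients could then fetch connected-clients.txt over a small HTTP endpoint or a shared mount to see who is online.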
570,785 | Since Windows 8.1 doesn't allow system-wide "Windows XP style" high DPI support, how can I make the Microsoft Management Console apps (mmc.exe) high-DPI aware? There is no "Troubleshoot compatibility" context menu item for it. | The Compatibility tab is hidden for system files, so to replicate the functionality of the "Disable display scaling on high DPI settings" checkbox you would add the following to the registry: Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Windows\\System32\\mmc.exe"="~ HIGHDPIAWARE" This has the added benefit of making all MMC snap-ins like Group Policy Editor also use native scaling instead of the blurry rasterized version. You can save that as .reg file and import it, or use paste the following command into the Run dialog: reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" /v "C:\Windows\System32\mmc.exe" /f /t REG_SZ /d "~ HIGHDPIAWARE" If you find yourself using that workaround often you may want to add it to the right click context menu for .exe files. You can also add it to .msi files since the Compatibility tab is missing for those files as well: Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
@="Disable DP&I Scaling"
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi\command]
@="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul"
"IsolatedCommand"="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul"
[-HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
@="Disable DP&I Scaling"
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi\command]
@="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul"
"IsolatedCommand"="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul" Since the "Run as Administrator" and "Disable DPI scaling" settings are stored together, invoking that command on a file already set to run as admin will clear that flag and set the DPI scaling flag instead. That only affects files you've manually checked the box for, not those with the correct requestedExecutionLevel in their manifest. Just for reference, when both are checked the string is "~ RUNASADMIN HIGHDPIAWARE" but I wouldn't put that into a context menu option since it's already available for one-time use on the context menu and it's not a good idea to make the administrator token necessary so easily. If you want the option to disable DPI scaling for executable and installer files in a specific folder you can use the following .reg import: Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
@="Disable DP&I Scaling"
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi\command]
@="cmd /c @start /min cmd /c for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="cmd /c @start /min cmd /c for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\"" Using that option on a root-level folder like Program Files is a bad idea because you'll create hundreds of registry entries. But for some cases it's essential, particularly for Process Explorer and the rest of the Sysinternals utilities, or the Nirsoft utilities, all of which run great with DPI scaling disabled but don't have the option explicitly specified in their manifests. The last batch of code uses the internal start command to get the command prompt window out of the way as quickly as possible and keep it minimized as it parses the contents of the folder. The @ symbol is used to prevent echoing back the command in the output, and nul redirection is used to hide the output "The operation completed successfully." for each entry since it never changes. If you happen to have the excellent nircmd tool you can hide the brief flash of the command prompt window entirely: Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
@="Disable DP&I scaling"
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi\command]
@="nircmd.exe execmd reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="nircmd.exe execmd reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
[-HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
@="Disable DP&I scaling"
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi\command]
@="nircmd.exe execmd reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="nircmd.exe execmd @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
[-HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
@="Disable DP&I scaling"
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi\command]
@="nircmd.exe execmd for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="nircmd.exe execmd for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\"" If nircmd.exe is not in your path you can either add its location above or add its folder to your path in the System Environment Variables dialog. To bring up that window you can use the command rundll32 sysdm.cpl,EditEnvironmentVariables The argument could be made that it would be more elegant to add the registry keys by creating a .reg file at runtime and importing it silently with the undocumented reg import /s option. But in my experience, writing any files at runtime raises all sorts of alarms with security products like COMODO Internet Securita, its equivalent versions from Panda, Norton, etc. and anything based on a HIPS model. I don't see a need to do that when the above works just fine, especially if you're using this on multiple computers or sharing it and don't want to create a false alarm for someone else. However if you're already using nircmd, it would make sense to use its regsetval command instead of reg add for the .exe and .msi shell extensions. The folder option would still need to iterate over the directory listing to add each entry so it won't work for those. PowerShell and VBScript are options but their availability depends on the version of Windows and a host of other variables. From a security standpoint, VBScript has a reputation as an exploit vector particularly when downloaded from the internet or shared on a network, and PS1 scripts won't run at all without explicitly setting PowerShell's execution policy to allow remote signed scripts. Let me know if you notice anything odd when using that code as it's still a work in progress. That being said it should make configuring Windows 8.1's DPI settings much easier. | {
"source": [
"https://serverfault.com/questions/570785",
"https://serverfault.com",
"https://serverfault.com/users/20939/"
]
} |
570,818 | When browsing to a page on the site, the page is downloaded instead of being executed.
Nginx Config - user www-data www-data;
worker_processes 4;
events {
worker_connections 2048;
}
http {
include mime.types;
default_type application/octet-stream;
access_log off;
sendfile on;
tcp_nopush on;
tcp_nodelay off;
keepalive_timeout 10;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server {
listen 80;
server_name localhost;
location / {
root /home/bil/public_html/webiste.net/public/;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
include /usr/local/nginx/sites-enabled/*;
} Virtual Site config - server {
listen 80;
server_name localhost;
rewrite ^/(.*) http://webiste.net/$1 permanent;
}
server {
listen 80;
server_name localhost;
access_log /home/bil/public_html/webiste.net/log/access.log;
error_log /home/bil/public_html/webiste.net/log/error.log;
location / {
root /home/bil/public_html/webiste.net/public/;
index index.php index.html;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
include /usr/local/nginx/conf/fastcgi_params;
fastcgi_param SCRIPT_FILENAME /home/bil/public_html/webiste.net/public/$fastcgi_script_name;
}
}` | The Compatibility tab is hidden for system files, so to replicate the functionality of the "Disable display scaling on high DPI settings" checkbox you would add the following to the registry: Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Windows\\System32\\mmc.exe"="~ HIGHDPIAWARE" This has the added benefit of making all MMC snap-ins like Group Policy Editor also use native scaling instead of the blurry rasterized version. You can save that as .reg file and import it, or use paste the following command into the Run dialog: reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" /v "C:\Windows\System32\mmc.exe" /f /t REG_SZ /d "~ HIGHDPIAWARE" If you find yourself using that workaround often you may want to add it to the right click context menu for .exe files. You can also add it to .msi files since the Compatibility tab is missing for those files as well: Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
@="Disable DP&I Scaling"
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi\command]
@="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul"
"IsolatedCommand"="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul"
[-HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
@="Disable DP&I Scaling"
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi\command]
@="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul"
"IsolatedCommand"="cmd /c @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\">nul" Since the "Run as Administrator" and "Disable DPI scaling" settings are stored together, invoking that command on a file already set to run as admin will clear that flag and set the DPI scaling flag instead. That only affects files you've manually checked the box for, not those with the correct requestedExecutionLevel in their manifest. Just for reference, when both are checked the string is "~ RUNASADMIN HIGHDPIAWARE" but I wouldn't put that into a context menu option since it's already available for one-time use on the context menu and it's not a good idea to make the administrator token necessary so easily. If you want the option to disable DPI scaling for executable and installer files in a specific folder you can use the following .reg import: Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
@="Disable DP&I Scaling"
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi\command]
@="cmd /c @start /min cmd /c for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="cmd /c @start /min cmd /c for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\"" Using that option on a root-level folder like Program Files is a bad idea because you'll create hundreds of registry entries. But for some cases it's essential, particularly for Process Explorer and the rest of the Sysinternals utilities, or the Nirsoft utilities, all of which run great with DPI scaling disabled but don't have the option explicitly specified in their manifests. The last batch of code uses the internal start command to get the command prompt window out of the way as quickly as possible and keep it minimized as it parses the contents of the folder. The @ symbol is used to prevent echoing back the command in the output, and nul redirection is used to hide the output "The operation completed successfully." for each entry since it never changes. If you happen to have the excellent nircmd tool you can hide the brief flash of the command prompt window entirely: Windows Registry Editor Version 5.00
[-HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi]
@="Disable DP&I scaling"
[HKEY_CLASSES_ROOT\exefile\shell\disabledpi\command]
@="nircmd.exe execmd reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="nircmd.exe execmd reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
[-HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi]
@="Disable DP&I scaling"
[HKEY_CLASSES_ROOT\Msi.Package\shell\disabledpi\command]
@="nircmd.exe execmd reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="nircmd.exe execmd @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%1\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
[-HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi]
@="Disable DP&I scaling"
[HKEY_CLASSES_ROOT\Directory\shell\disabledpi\command]
@="nircmd.exe execmd for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\""
"IsolatedCommand"="nircmd.exe execmd for /f \"usebackq delims=\" %%i in (`dir /b /s \"%1\\*.exe\" \"%1\\*.msi\"`) do @reg add \"HKCU\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers\" /v \"%%i\" /f /t REG_SZ /d \"~ HIGHDPIAWARE\"" If nircmd.exe is not in your path you can either add its location above or add its folder to your path in the System Environment Variables dialog. To bring up that window you can use the command rundll32 sysdm.cpl,EditEnvironmentVariables The argument could be made that it would be more elegant to add the registry keys by creating a .reg file at runtime and importing it silently with the undocumented reg import /s option. But in my experience, writing any files at runtime raises all sorts of alarms with security products like COMODO Internet Securita, its equivalent versions from Panda, Norton, etc. and anything based on a HIPS model. I don't see a need to do that when the above works just fine, especially if you're using this on multiple computers or sharing it and don't want to create a false alarm for someone else. However if you're already using nircmd, it would make sense to use its regsetval command instead of reg add for the .exe and .msi shell extensions. The folder option would still need to iterate over the directory listing to add each entry so it won't work for those. PowerShell and VBScript are options but their availability depends on the version of Windows and a host of other variables. From a security standpoint, VBScript has a reputation as an exploit vector particularly when downloaded from the internet or shared on a network, and PS1 scripts won't run at all without explicitly setting PowerShell's execution policy to allow remote signed scripts. Let me know if you notice anything odd when using that code as it's still a work in progress. That being said it should make configuring Windows 8.1's DPI settings much easier. | {
"source": [
"https://serverfault.com/questions/570818",
"https://serverfault.com",
"https://serverfault.com/users/134397/"
]
} |
571,000 | I want to remove my files securely without worrying about anyone restoring them, I know I can use shred but it takes too long even with -n 1 so I thought maybe if I remove files and write on the disk using dd, filling up all available space I won't have to use shred right? I believe dd will be faster than shred especially that my files almost fill up most of the available space in the disk. So does filling up the disk using dd guarantee that my files will be securely removed? | This will go against most conventional wisdom on the Internet, but here we go... If this is a modern rotating disk, a simple pass of dd with /dev/zero is enough to foil almost any attempt at data recovery, even from a professional data recovery house. It might be possible to extract some data with extremely expensive specialized equipment (e.g. a government lab), but that is out of reach of pretty much anyone that isn't willing to spend $millions on you. (Note this will not comply with any official-sounding government standards for data disposal, but it works.) The problem with most of the wisdom you read on the Internet about this topic, is that it is more urban legend than actual fact. If you look for an actual source on this topic, most people refer back to a paper that was published in 1996 , and was referring to MFM/RLL drives (pre-IDE). Additionally, most of the government standards for data destruction that people refer to are decades old. The logic behind multiple passes to erase data boils down to the idea that residual information can linger in the space between sectors on a platter . On older drives, the density of sectors was relatively low, and there was lots of empty space on the platters where this residual data could linger. Since 1996, hard drive capacities have increased by orders of magnitude, while platter size has remained the same. There simply is not that much empty space in a platter for data to linger anymore. If there was usable extra space in the platters, drive manufacturers would be using it and selling you a higher-capacity disk. The wisdom of these secure erase standards has been picked apart , and papers have been published that say a single pass is enough . A few years ago, someone issued the Great Zero Challenge , where someone overwrote a drive with dd and /dev/zero, and issued an open challenge for someone to extract the data. There were no takers as I recall. (Disclaimer: The original web site for this challenge is gone now.) But what about Solid State Drives? Because of the flash wear leveling, bad sector remapping, and garbage collection, and additional "hidden capacity", traditional overwrite methods may not actually overwrite the data (although it will appear overwritten to the host PC). A single pass of dd with /dev/zero will stop a casual user from reading back any data from the SSD. However, a dedicated attacker with a logic analyzer can crack open the drive and extract data from the flash chips inside. This problem was identified a while ago. So, a command called Secure Erase was added to the ATA standard. The firmware in the drive will securely erase all of the flash cells. Most modern SSDs will support this command. I believe it also works with rotating drives. Note that this command can sometimes be tricky for an end user to access. You typically need a special utility to use it, some BIOSes implement a "security freeze" that can get in the way. Check with the SSD manufacturer for a utility. If they do not have one, there are 3rd party ones that may work.
Note that some people have raised concerns about the reliability of the secure erase functionality built into the drive firmware. There was a paper published in 2011 that showed some drives will leave data behind after a secure erase. Note that SSD firmware has advanced quite a bit since then. If secure erase is an important function to you, I would recommend purchasing drives from a top-tier manufacturer, preferably something in their server/datacenter line (where buggy firmware is less likely to be tolerated). If the above makes you nervous about data remaining on the drive, your next best option is to fill the drives with random data multiple times, as this will hopefully take care of overwriting the excess hidden capacity in the SSD, but you cannot be absolutely sure without knowledge of the internal workings of the firmware. This will also shorten the lifespan of the SSD. What you should take away from this: Overwriting a drive with dd and /dev/zero or the single pass option in DBAN is enough to stop most people from getting your data (SSD or Rotating). If you have a rotating drive, you can use a multi-pass erasure method. It won't hurt anything, but it will take longer. If you have a recent-vintage SSD from a reputable manufacturer, you should use the ATA Secure Erase Command, preferably using a manufacturer-supplied utility. If ATA Secure Erase is not supported by your drive (or known to be buggy), multi-pass erasure is your next best option. If you are required to erase the drive to a certain standard (e.g. you have a contract that says the data shall be erased per DoD 5220.22-M), just do it and don't argue with whether or not it is necessary. Nothing beats physical destruction. If the data on the drive is so sensitive that its value exceeds the cash value of the drive itself, you should physically destroy it (use a hammer, vise, drill press, or get creative). If you are really paranoid, make sure the remains of the drive are scattered over a wide area (e.g. multiple public trash cans in multiple parts of the city). | {
"source": [
"https://serverfault.com/questions/571000",
"https://serverfault.com",
"https://serverfault.com/users/164359/"
]
} |
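As a practical footnote to the answer above, here is a minimal sketch of both options on Linux. The device name /dev/sdX is a placeholder and both operations are destructive, so double-check it first; the hdparm sequence only works while the drive reports "not frozen", and "Eins" is just an arbitrary temporary password.
# single zeroing pass over the whole device (enough for a rotating disk)
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
# ATA Secure Erase on an SSD via hdparm
hdparm -I /dev/sdX | grep -i frozen          # should say "not frozen"
hdparm --user-master u --security-set-pass Eins /dev/sdX
hdparm --user-master u --security-erase Eins /dev/sdX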
571,592 | I'm developing a website for managing OpenVPN users with the Django framework. But I need to know: is there any way to extract active users from OpenVPN? My server is running Ubuntu 12.04. | There should be a status log you can look at to show you; mine is, for example: cat /etc/openvpn/openvpn-status.log EDIT: As an alternative, adding the flag --management IP port [pw-file] or adding that same directive to your server.conf , for example: management localhost 7505 This would allow you to telnet to that port and offer you a list of commands to run: telnet localhost 7505 help | {
"source": [
"https://serverfault.com/questions/571592",
"https://serverfault.com",
"https://serverfault.com/users/207135/"
]
} |
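Following up on the answer above, two quick ways to pull the list of connected users from a shell. The status-log path and the management port are the defaults used in the answer and may differ on your install, and the awk line assumes the version 1 status-file layout, so treat both as a starting sketch rather than a drop-in solution.
# print the common name of every connected client from the status log
awk -F',' '/^Common Name/{grab=1; next} /^ROUTING TABLE/{grab=0} grab{print $1}' /etc/openvpn/openvpn-status.log
# or query the management interface directly (requires "management localhost 7505" in server.conf)
printf 'status\nquit\n' | nc 127.0.0.1 7505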
571,732 | I have been asked to find out when a user has logged on to the system in the last week. Now the audit logs in Windows should contain all the info I need. I think if I search for Event ID 4624 (Logon Success) with a specific AD user and Logon Type 2 (Interactive Logon) that it should give me the information I need, but for the life of me I cannot figure out how to actually filter the Event Log to get this information. Is it possible inside of the Event Viewer or do you need to use an external tool to parse it to this level? I found http://nerdsknowbest.blogspot.com.au/2013/03/filter-security-event-logs-by-user-in.html which seemed to be part of what I needed. I modified it slightly to only give me the last 7 days worth. Below is the XML I tried. <QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">*[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) <= 604800000]]]</Select>
<Select Path="Security">*[EventData[Data[@Name='Logon Type']='2']]</Select>
<Select Path="Security">*[EventData[Data[@Name='subjectUsername']='Domain\Username']]</Select>
</Query>
</QueryList> It only gave me the last 7 days, but the rest of it did not work. Can anyone assist me with this? EDIT Thanks to the suggestions of Lucky Luke I have been making progress. The below is my current query, although as I will explain it isn't returning any results. <QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">
*[System[(EventID='4624')]
and
System[TimeCreated[timediff(@SystemTime) <= 604800000]]
and
EventData[Data[@Name='TargetUserName']='john.doe']
and
EventData[Data[@Name='LogonType']='2']
]
</Select>
</Query>
</QueryList> As I mentioned, it wasn't returning any results so I have been messing with it a bit. I can get it to produce the results correctly until I add in the LogonType line. After that, it returns no results. Any idea why this might be? EDIT 2 I updated the LogonType line to the following: EventData[Data[@Name='LogonType'] and (Data='2' or Data='7')] This should capture Workstation Logons as well as Workstation Unlocks, but I still get nothing. I then modify it to search for other Logon Types like 3, or 8 which it finds plenty of. This leads me to believe that the query works correctly, but for some reason there are no entries in the Event Logs with Logon Type equalling 2 and this makes no sense to me. Is it possible to turn this off? | You're on the right track - one of the mistakes in your query is the space in 'Logon Type', it should just be 'LogonType'. I pasted a query below that I have just verified works. It's a bit simplified but you get the idea. It shows you all 4624 events with logon type 2, from user 'john.doe'. <QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">
*[
EventData[Data[@Name='LogonType']='2']
and
EventData[Data[@Name='TargetUserName']='john.doe']
and
System[(EventID='4624')]
]
</Select>
</Query>
</QueryList> You can find out more about XML queries in the event viewer here: http://blogs.technet.com/b/askds/archive/2011/09/26/advanced-xml-filtering-in-the-windows-event-viewer.aspx . You can query events from the command line with wevtutil.exe: http://technet.microsoft.com/en-us/magazine/dd310329.aspx . | {
"source": [
"https://serverfault.com/questions/571732",
"https://serverfault.com",
"https://serverfault.com/users/207864/"
]
} |
573,336 | I've been testing a minimal Fedora install. To check the path for interpreters like python or node, I normally use which . I notice which isn't installed by default. I could add the package, but I wonder if there's a shell builtin that can be used to perform this common task. I'm using bash 4.2. | You can use type , which is a Bash builtin: $ type -P which
which is /usr/bin/which For documentation, see help [t]ype , which refers to the type section in the bash man page. ( help type prints the help pages for two builtins which start with the string "type", one of which is obsolete and completely unrelated to this.) | {
"source": [
"https://serverfault.com/questions/573336",
"https://serverfault.com",
"https://serverfault.com/users/22842/"
]
} |
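Two related builtins are worth knowing alongside type on a minimal install: command -v is the POSIX-specified equivalent (so it also works in plain sh scripts), and hash manages bash's lookup cache. The program names below are just examples.
command -v python            # prints the path, alias or function that would run
type -P node || echo "node is not in PATH"
hash -r                      # forget cached locations after installing a new binary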
573,340 | We're currently using an Apache 2.4 as load-balancer for a multi-master cluster of three Tomcat servers. We can access the internal IP address of this load-balancer with a web browser and change the settings on the fly (e.g. http://xx.xx.xx.xx/balancer-manager ). This works and we're pretty happy with it. Enabled modules: proxy.load (proxy.conf used to configure) proxy_ajp.load proxy_balancer.load (proxy_balancer.conf used to configure) But now we will host several (three in this setup) virtual hosts on this Apache, each representing a cluster of three Tomcats itself. Each is accessible via a URL like http://customer.company.tld/app/ui . So the path is the same for each cluster! Now we're facing two problems: The page balancer-manager is only reachable via its virtual host. Therefore we have to access the balancer-manager via a randomly chosen internal hostname (added to ServerAlias) and also add this hostname to the /etc/hosts of our internal computers to be able to use it. Furthermore we have to do this for each virtual host (~cluster). But we want a single balancer-manager page which presents all virtual hosts and the clusters behind them. Here is some sample configuration: /etc/apache2/site-enabled/foo # we have actually three of them: foo, bar, baz <VirtualHost *:80>
ServerAdmin webmaster@localhost
ServerName customer-foo.company.tld
ServerAlias customer-foo-balancer customer-foo.company.tld www.customer-foo.company.tld
DocumentRoot /var/www
<Directory />
Options FollowSymLinks
AllowOverride None
Order allow,deny
deny from all
</Directory>
<Directory /var/app/app_static>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
<Directory /var/www/app_static_res>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
# load balancer
<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
# allows for internal access
Allow from 127.0.0.1 ::1 10.1.21.81 10.1.4.9
Satisfy all
</Location>
ProxyRequests Off
ProxyVia Off
ProxyPreserveHost On
<Proxy balancer://foo>
ProxySet failonstatus=503
BalancerMember ajp://10.171.23.120:8010/app lbset=0 route=foo001 loadfactor=40
BalancerMember ajp://10.171.23.121:8010/app lbset=0 route=foo002 loadfactor=40
BalancerMember ajp://10.171.23.122:8010/app lbset=0 route=foo003 loadfactor=20
</Proxy>
<Proxy balancer://fooservice>
ProxySet failonstatus=503
BalancerMember ajp://10.171.23.120:8011/wcs_service route=foo001 loadfactor=40
BalancerMember ajp://10.171.23.121:8011/wcs_service route=foo002 loadfactor=40
BalancerMember ajp://10.171.23.122:8011/wcs_service route=foo003 loadfactor=20
</Proxy>
ProxyPass /app balancer://foo stickysession=JSESSIONID|jsessionid nofailover=Off scolonpathdelim=On
ProxyPassReverse /app balancer://foo
ProxyPass /app_service balancer://fooservice stickysession=JSESSIONID|jsessionid nofailover=Off scolonpathdelim=On
ProxyPassReverse /app_service balancer://fooservice
LogLevel warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost> In the past we configured the balancing inside the separate files proxy.conf and proxy_balancer.conf, which allowed access via the internal IP and showed all clusters and cluster members on a single page.
But this won't work anymore: the configuration for the proxies seems to accept only paths (e.g. app and app_service ), not URLs or hostnames. We can't and won't change the paths. Therefore we moved the proxy configuration inside the virtual hosts. Thanks for your help! | You can use type , which is a Bash builtin: $ type -P which
which is /usr/bin/which For documentation, see help [t]ype , which refers to the type section in the bash man page. ( help type prints the help pages for two builtins which start with the string "type", one of which is obsolete and completely unrelated to this.) | {
"source": [
"https://serverfault.com/questions/573340",
"https://serverfault.com",
"https://serverfault.com/users/126002/"
]
} |
573,342 | I need to upgrade a bunch of PCs that serve as client workstations. The old PCs are running Windows XP, while the new ones have Windows 7 installed on them. I need to make sure that every personal document the users have on the old PC gets migrated to the new one. Is there any way of automating this migration process? | You can use type , which is a Bash builtin: $ type -P which
which is /usr/bin/which For documentation, see help [t]ype , which refers to the type section in the bash man page. ( help type prints the help pages for two builtins which start with the string "type", one of which is obsolete and completely unrelated to this.) | {
"source": [
"https://serverfault.com/questions/573342",
"https://serverfault.com",
"https://serverfault.com/users/208216/"
]
} |
573,378 | I'm following the CoreOS Docker Documentation and it mentions starting containers with commands like: docker run someImageName /bin/somebinary Where someImageName is an image. When /bin/somebinary exits, the image will no longer be running. I would simply like to run an image, without specifying any binaries to run. Instead, I simply want to run the services (eg, systemd / sysvinit) that are normally run inside the images OS . This seems like the most common thing anyone would ever want to do with Docker, but trying to run an image without a command returns: 2014/02/05 14:49:19 Error: create: No command specified How can I start a Docker container and run a full OS, rather than specifying a command ? | As documented here, you simply run /sbin/init as the command just like any other unix booting from single user to multi-user mode. https://stackoverflow.com/questions/19332662/start-full-container-in-docker Containers can be full blown OS's, they just don't have to be (neither do VMs for that matter, it's just more complicated to configure and manage). I would say the whole point of Docker is to make application containers easy, so that you only have to configure an app, not the whole OS. | {
"source": [
"https://serverfault.com/questions/573378",
"https://serverfault.com",
"https://serverfault.com/users/22842/"
]
} |
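A minimal illustration of the approach described above — the image name is a placeholder, and depending on the init system inside the image you may need extra options (for systemd-based images people commonly add --privileged or mount /sys/fs/cgroup), so treat this as a sketch rather than a recipe.
# run the image's own init as PID 1 instead of a single binary
docker run -d --name fullos myimage /sbin/init
docker ps                    # the container stays up because init never exits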
573,392 | When running this command: $ sudo rsync -r --delete --force --checksum --exclude=uploads /data/prep/* /data/app/ I'm getting the following output: cannot delete non-empty directory: html/js/ckeditor/_source/plugins/uicolor/yui
cannot delete non-empty directory: html/js/ckeditor/_source/plugins/uicolor/yui
cannot delete non-empty directory: html/js/ckeditor/_source/plugins/uicolor
cannot delete non-empty directory: html/js/ckeditor/_source/plugins/uicolor
cannot delete non-empty directory: html/js/ckeditor/_source/plugins
cannot delete non-empty directory: html/js/ckeditor/_source/plugins
cannot delete non-empty directory: html/js/ckeditor/_source
cannot delete non-empty directory: html/js/ckeditor/_samples
cannot delete non-empty directory: html/js/ckeditor/plugins/uicolor/yui
cannot delete non-empty directory: html/js/ckeditor/plugins/uicolor/yui
cannot delete non-empty directory: html/js/ckeditor/plugins/uicolor From reading the man rsync it was my impression that the --force option would tell rsync to delete these non-empty directories, which is the desired result. Ref: --force force deletion of dirs even if not empty How can I modify the command to delete the non-empty directories? I'm using rsync version 3.0.8, on Gentoo Base System release 2.0.3, in case that's relevant. Update: Added sudo to the command to make it clear that this is not a file permissions issue. | Did you try to add --delete-excluded ? If you delete a directory in your excluded folders on the "remote" side, rsync --delete will not delete the excluded folder on your "local" site. | {
"source": [
"https://serverfault.com/questions/573392",
"https://serverfault.com",
"https://serverfault.com/users/208232/"
]
} |
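Applied to the command from the question above, the fix would look roughly like this. Note that --delete-excluded also removes anything on the destination that matches an --exclude pattern, so make sure that is really what you want for the uploads directory, and using a trailing slash on the source instead of /* lets rsync handle the whole tree (including dotfiles) as one transfer.
sudo rsync -r --delete --delete-excluded --force --checksum --exclude=uploads /data/prep/ /data/app/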
573,406 | I use samba (3.4) and OpenLDAP (2.4) for file sharing and directory service. Some days ago, I noticed that samba has a problem connecting to the LDAP service, while beforehand it was working properly. But still the shared directories were accessible. After restarting the smbd service, they are not accessible anymore. So, there are lots of necessary files unavailable for people in the network!
Any help? The samba log file has reported: [2014/01/30 11:21:59, 1] lib/smbldap.c:1265(another_ldap_try)
Connection to LDAP server failed for the 1 try!
[2014/01/30 11:22:00, 1] lib/smbldap.c:1265(another_ldap_try)
Connection to LDAP server failed for the 2 try!
[2014/01/30 11:22:01, 1] lib/smbldap.c:1265(another_ldap_try)
Connection to LDAP server failed for the 3 try!
[2014/01/30 11:22:02, 1] lib/smbldap.c:1265(another_ldap_try)
.
. (the number is just counted up)
.
Connection to LDAP server failed for the 13 try!
[2014/01/30 11:22:12, 1] lib/smbldap.c:1265(another_ldap_try)
Connection to LDAP server failed for the 14 try!
[2014/01/30 11:22:13, 1] lib/smbldap.c:1265(another_ldap_try)
Connection to LDAP server failed for the 15 try!
[2014/01/30 11:22:14, 0] smbd/server.c:1201(main)
ERROR: failed to setup guest info.
[2014/01/30 11:22:14, 0] smbd/server.c:1069(main)
smbd version 3.4.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2009
[2014/01/30 11:22:14, 0] smbd/server.c:1115(main)
standard input is not a socket, assuming -D option
[2014/01/30 11:22:14, 0] lib/smbldap.c:1086(smbldap_connect_system)
failed to bind to server ldapi:/// with dn="uid=samba,ou=system-accounts,dc=example,dc=com" Error: Invalid credentials
(unknown) some corresponding system logs: Jan 30 14:27:34 atom-lan slapd[2320]: connection_read(24): no connection!
Jan 30 14:27:35 atom-lan winbindd[2872]: [2014/01/30 14:27:35, 0] lib/smbldap.c:656(smbldap_store_state)
Jan 30 14:27:35 atom-lan winbindd[2872]: PANIC: assert failed at lib/smbldap.c(656): tmp_ldap_state == smbldap_state
.
. (repetition)
.
Jan 30 14:42:55 atom-lan init: smbd main process (5977) terminated with status 255
Jan 30 14:42:55 atom-lan init: smbd main process ended, respawning
Jan 30 14:42:55 atom-lan smbd[6037]: [2014/01/30 14:42:55, 0] smbd/server.c:1115(main)
Jan 30 14:42:55 atom-lan smbd[6037]: standard input is not a socket, assuming -D option
Jan 30 14:42:55 atom-lan smbd[6037]: [2014/01/30 14:42:55, 0] lib/smbldap.c:1086(smbldap_connect_system)
Jan 30 14:42:55 atom-lan smbd[6037]: failed to bind to server ldapi:/// with dn="uid=samba,ou=system-accounts,dc=example,dc=com" Error: Invalid credentials In case more information is needed, please just comment it and I will update the question. | Did you try to add --delete-excluded ? If you delete a directory in your excluded folders on the "remote" side, rsync --delete will not delete the excluded folder on your "local" site. | {
"source": [
"https://serverfault.com/questions/573406",
"https://serverfault.com",
"https://serverfault.com/users/208198/"
]
} |
573,681 | I have a client whose workforce is comprised entirely of remote employees using a mix of Apple and Windows 7 PCs/laptops. The users don't authenticate against a domain at the moment, but the organization would like to move in that direction for several reasons. These are company-owned machines, and the firm seeks to have some control over account deactivation, group policy and some light data-loss prevention (disable remote media, USB, etc.) They are concerned that requiring VPN authentication in order to access AD would be cumbersome, especially at the intersection of a terminated employee and cached credentials on a remote machine. Most services in the organization are Google-based (mail, file, chat, etc.) so the only domain services are DNS and the auth for their Cisco ASA VPN. The customer would like to understand why it is not acceptable to expose their domain controllers to the public. In addition, what is a more acceptable domain structure for a distributed remote workforce? Edit: Centrify is in use for a handful of Mac clients. | I'm posting this as answer mainly because everyone has their own "educated opinion" based on experience, 3rd party info, hearsay, and tribal knowledge within IT, but this is more a list of citations and readings "directly" from Microsoft. I used quotes because I'm sure they don't properly filter all opinions made by their employees, but this should prove helpful nonetheless if you are after authoritative references direct from Microsoft. BTW, I also think it is VERY EASY to say DOMAIN CONTROLLER == ACTIVE DIRECTORY, which isn't quite the case. AD FS proxies and other means (forms based auth for OWA, EAS, etc.) offer a way to "expose" AD itself to the web to allow clients to at least attempt to authenticate via AD without exposing the DCs themselves. Go on someone's OWA site and attempt to login and AD will get the request for authentication on a backend DC, so AD is technically "exposed"...but is secured via SSL and proxied through an Exchange server. Citation #1 Guidelines for Deploying Windows Server Active Directory on Windows Azure Virtual Machines Before you go "Azure isn't AD"...you CAN deploy ADDS on an Azure VM. But to quote the relevant bits: Never expose STSs directly to the Internet. As a security best practice, place STS instances behind a firewall and
connect them to your corporate network to prevent exposure to the
Internet. This is important because the STS role issues security
tokens. As a result, they should be treated with the same level of
protection as a domain controller. If an STS is compromised, malicious
users have the ability to issue access tokens potentially containing
claims of their choosing to relying party applications and other STSs
in trusting organizations. ergo...don't expose domain controllers directly to the internet. Citation #2 Active Directory - The UnicodePwd Mystery of AD LDS Exposing a domain controller to the Internet is normally a bad
practice, whether that exposure comes directly from the production
environment or through a perimeter network. The natural alternative is
to place a Windows Server 2008 server with Active Directory
Lightweight Directory Services (AD LDS) role running in the perimeter
network. Citation #3 - not from MS...but useful still in looking ahead Active Directory-as-a-Service? Azure, Intune hinting at a cloud-hosted AD future In the end, there is no great "short" answer which meets the goals of
ridding the office of the AD server in exchange for an Azure
alternative. While Microsoft is being complacent in allowing customers
to host Active Directory Domain Services on Server 2012 and 2008 R2
boxes in Azure, their usefulness is only as good as the VPN
connectivity you can muster for your staff. DirectAccess, while a very
promising technology, has its hands tied due to its own unfortunate
limitations. Citation #4 Deploy AD DS or AD FS and Office 365 with single sign-on and Windows Azure Virtual Machines Domain controllers and AD FS servers should never be exposed directly
to the Internet and should only be reachable through VPN | {
"source": [
"https://serverfault.com/questions/573681",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
573,946 | In older Linux systems, the logger command can be used to send a log message to syslog. Reading where does logger log its messages to in Arch Linux? , it seems that syslog messages and the logger command line app only talk to the systemd journal if a socket for message forwarding is set up . So what's the modern equivalent of the logger command? How can I send a message directly to the systemd journal from the command line? | systemd-cat is the equivalent to logger: echo 'hello' | systemd-cat In another terminal, running journalctl -f : Feb 07 13:38:33 localhost.localdomain cat[15162]: hello Priorities are specified just by part of the string: echo 'hello' | systemd-cat -p info
echo 'hello' | systemd-cat -p warning
echo 'hello' | systemd-cat -p emerg Warnings are bold, emergencies are bold and red. Scary stuff. You can also use an 'identifier' which is arbitrary, to specify the app name. These are like syslog's old facilities, but you're not stuck with ancient things like lpr uucp nntp or the ever-descriptive local0 through local7 . echo 'hello' | systemd-cat -t someapp -p emerg Is logged as: Feb 07 13:48:56 localhost.localdomain someapp[15278]: hello | {
"source": [
"https://serverfault.com/questions/573946",
"https://serverfault.com",
"https://serverfault.com/users/22842/"
]
} |
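Two more patterns that follow naturally from the answer above: systemd-cat can wrap a command directly so its stdout and stderr land in the journal under an identifier, and journalctl can filter on that identifier afterwards. The script path and identifier are placeholders.
# run a job with its output captured in the journal
systemd-cat -t backup-job -p info /usr/local/bin/run-backup.sh
# later: show only that identifier at warning level or above, following new entries
journalctl -t backup-job -p warning -f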
574,072 | I need this for load balancing. For example, I've two azure storage accounts (say a and b ) and the blob addresses for those are a.blob.core.windows.net and b.blob.core.windows.net . Both of them store identical data. Now I need to provide a single external name (say example.com ) which points to both the storage accounts and should work in round robin. This can be achieved if I create two CNAME entries in DNS as following and it resolves to one of them in round robing. example.com CNAME a.blob.core.windows.net example.com CNAME b.blob.core.windows.net But I can not create two CNAME records for a single name in Windows DNS server. So is it ever possible? | Multiple CNAME records for the same fully-qualified domain name is a violation of the specs for DNS. Some versions of BIND would allow you to do this (some only if you specified the multiple-cnames yes option) and would round-robin load-balance between then but it's not technically legal. There are not supposed to be resource records (RRs) with the same name as a CNAME and, to pick nits, that would include multiple identical CNAMEs. Quoth RFC 1034, Section 3.6.2: If a CNAME RR is present at a node, no other data should be present;
this ensures that the data for a canonical name and its aliases cannot
be different. This rule also insures that a cached CNAME can be used
without checking with an authoritative server for other RR types. The letter-of-the RFC method to handle what you're doing would be with a single CNAME referring to a load-balanced "A" record. | {
"source": [
"https://serverfault.com/questions/574072",
"https://serverfault.com",
"https://serverfault.com/users/208588/"
]
} |
574,121 | We've set up an L2TP VPN server with this tutorial , and everything works like a charm. The only issue is we don't want clients to route all traffic through this VPN, only a particular subnet, e.g. 10.0.0.0/20. On Mac, we need to set the route manually using a command, but for mobile devices, it seems there is no way to do so? So, is it possible to configure the client automatically for the subnet "10.0.0.0/20"? | OK, this question is asked over and over again over the Internet and most of the time there is a (semi-) incorrect answer that you cannot do what was described in the original post. Let me clarify it once and for all :) The short answer is L2TP (and PPTP for that matter) do not have facilities to do route pushes inside the protocol, but it can be achieved outside the protocol. Since L2TP is a Microsoft invention, the best source of information is their technical documentation (and they are quite good at it, by the way). The technical description of what I am going to explain down below can be found at VPN Addressing and Routing .
The keywords for setting everything up properly (if you are going to do your own research) are: DHCPINFORM and "classless static routes". First of all, how it works: a client connects to the VPN server after successful authentication a secure tunnel is established the client uses a DHCPINFORM message after the connection to request the DHCP Classless Static Routes option. This DHCP option contains a set of routes that are automatically added to the routing table of the requesting client ( I slavishly copy-and-pasted this line directly from Microsoft documentation :) ) the VPN server replies to that message with appropriate set of routes Well, there is a caveat: there is RFC-3442 describing "DHCP Classless Static Routes" and there it states that the code for this option is 121. Microsoft decided to re-invent the wheel (as always) and uses code 249 for this option. Hence, to support a wider range of clients we need to respond back with both codes I am going to describe a typical configuration using Linux box as the VPN server (you can configure MS servers using the link to the Microsoft documentation). To configure routes on the clients we will need the following ingredients: L2TP/IPSEC (or PPTP) = for example, accel-ppp is a nice open source L2TP/PPTP server DHCP server = there are many, but I am going to describe dnsmasq's configuration The following is a dump of a working accel-ppp configuration. I am providing it in its entirety, otherwise it would be difficult to explain what goes where. If you already have your VPN working you may skip this configuration file and concentrate on the DHCP configuration described below. [root@vpn ~]# cat /opt/accel-ppp/config/accel-ppp.conf
[modules]
log_syslog
pptp
l2tp
auth_mschap_v2
ippool
sigchld
chap-secrets
logwtmp
[core]
log-error=/var/log/accel-ppp/core.log
thread-count=4
[ppp]
verbose=1
min-mtu=1280
mtu=1400
mru=1400
check-ip=1
single-session=replace
mppe=require
ipv4=require
ipv6=deny
ipv6-intf-id=0:0:0:1
ipv6-peer-intf-id=0:0:0:2
ipv6-accept-peer-intf-id=1
[lcp]
lcp-echo-interval=30
lcp-echo-failure=3
[auth]
#any-login=0
#noauth=0
[pptp]
echo-interval=30
echo-failure=3
verbose=1
[l2tp]
host-name=access-vpn
verbose=1
[dns]
dns1=192.168.70.251
dns2=192.168.70.252
[client-ip-range]
disable
[ip-pool]
gw-ip-address=192.168.99.254
192.168.99.1-253
[log]
log-file=/var/log/accel-ppp/accel-ppp.log
log-emerg=/var/log/accel-ppp/emerg.log
log-fail-file=/var/log/accel-ppp/auth-fail.log
log-debug=/var/log/accel-ppp/debug.log
copy=1
level=3
[chap-secrets]
gw-ip-address=192.168.99.254
chap-secrets=/etc/ppp/chap-secrets
[cli]
telnet=127.0.0.1:2000
tcp=127.0.0.1:2001
[root@vpn ~]#
=== At this point our clients can connect via L2TP (or PPTP) and communicate with the VPN server. So, the only missing part is a DHCP server that is listening on the created tunnels and that responds back with the necessary information. Below is an excerpt from the dnsmasq configuration file (I am providing DHCP related options only): [root@vpn ~]# grep -E '^dhcp' /etc/dnsmasq.conf
dhcp-range=192.168.99.254,static
dhcp-option=option:router
dhcp-option=121,192.168.70.0/24,192.168.99.254,192.168.75.0/24,192.168.99.254,10.0.0.0/24,192.168.99.254
dhcp-option=249,192.168.70.0/24,192.168.99.254,192.168.75.0/24,192.168.99.254,10.0.0.0/24,192.168.99.254
dhcp-option=vendor:MSFT,2,1i
[root@vpn ~]# In the above excerpt we are pushing routes 192.168.70.0/24, 192.168.75.0/24, and 10.0.0.0/24 via 192.168.99.254 (the VPN server). Finally, if you sniff the network traffic (e.g. on the VPN server) you will see something like the following for the response on the DHCPINFORM message: 19:54:46.716113 IP (tos 0x0, ttl 64, id 10142, offset 0, flags [none], proto UDP (17), length 333)
192.168.99.254.67 > 192.168.99.153.68: BOOTP/DHCP, Reply, length 305, htype 8, hlen 6, xid 0xa27cfc5f, secs 1536, Flags [none]
Client-IP 192.168.99.153
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message Option 53, length 1: ACK
Server-ID Option 54, length 4: 192.168.99.254
Domain-Name Option 15, length 18: "vpn.server.tld"
Classless-Static-Route-Microsoft Option 249, length 24: (192.168.70.0/24:192.168.99.254),(192.168.75.0/24:192.168.99.254),(10.0.0.0/24:192.168.99.254)
Vendor-Option Option 43, length 7: 2.4.0.0.0.1.255 P.S. I almost forgot an essential part required for the successful use of the above configuration. Well, it was described in the Microsoft docs I referred to, but who read the documentation? :) OK, clients should be configured without 'Use default gateway' on the VPN connection (on Windows it is in connection's properties -> Networking -> Internet Protocol Version 4 (TCP/IPv4) -> Properties -> Advanced -> IP Settings). On some clients there is also an option called 'Disable class based route addition' - it must be unset since it explicitly disables the functionality we are trying to implement. | {
"source": [
"https://serverfault.com/questions/574121",
"https://serverfault.com",
"https://serverfault.com/users/52746/"
]
} |
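Two quick checks that help when reproducing the setup above; the interface name ppp0 and the config path are examples and should be adjusted to your own values.
# syntax-check the dnsmasq configuration before reloading it
dnsmasq --test -C /etc/dnsmasq.conf
# on the VPN server: watch for the DHCPINFORM/ACK exchange when a client connects
tcpdump -ni ppp0 -v udp port 67 or udp port 68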
574,437 | First of all, let me state that this is not my idea and I don't want to discuss whether such an action is reasonable. However, for a company, is there a way to prevent employees to access public cloud services? In particular, they should not be able to upload files to any place on the web. Blocking HTTPS might be a first, simple, but very radical solution. Using a blacklist of IP addresses wouldn't suffice either. Probably, some kind of software is needed to filter the traffic on a content level. A proxy might be helpful, to be able to filter HTTPS traffic. Theses are my thoughts so far. What do you think? Any ideas? | You basically have three options here. 1. Disconnect your office/users from the internet If they can't get to "the public cloud," they can't upload anything to it. 2. Compile a blacklist of specific services you're worried about users accessing. This is going to be absolutely massive if it's meant to be even remotely effective. Tech-savvy users will always be able to find a way around it - I can connect to my computer from anywhere in the world with an internet connection, so... good luck blocking me, for example. 3. Do something more reasonable/recognize the limits of technology. This isn't your idea, but generally, if you provide management with the pitfalls and expense of implementing a solution like this, they'll be more open to better approaches. Sometimes this is a compliance thing, or "just for appearances," and they're happy with just blocking the most popular services Sometimes they genuinely don't understand how insane their request is, and need you to tell them in terms they can understand. Had a client once, when I was working for an computer security vendor, who wanted us to provide a way to stop employees from leaking confidential information with our AV agent. I whipped out my smartphone, took a picture of my screen, and asked him how he could possibly prevent that, or even writing the information down on a piece of paper. Use the news and recent events in your explanation - if the Army couldn't stop Manning, and the NSA couldn't stop Snowden, what makes you think we can do it, and how much money do you think even trying will cost? | {
"source": [
"https://serverfault.com/questions/574437",
"https://serverfault.com",
"https://serverfault.com/users/192311/"
]
} |
575,050 | I typically like to set up separate logins for myself, one with regular user permissions, and a separate one for administrative tasks. For example, if the domain was XXXX, I'd set up a XXXX\bpeikes and a XXXX\adminbp account. I've always done it because frankly I don't trust myself to be logged in as an adminstrator, but in every place that I've worked, the system administrators seem to just add their usual accounts to the Domain Admins group. Are there any best practices? I've seen an article from MS which does appear to say that you should use Run As, and not login as an admin, but they don't give an example of an implementation and I've never seen anyone else do it. | "Best Practice" typically dictates LPU (least privileged user)...but you are correct (as is ETL and Joe so +1) that people rarely follow this model. Most recommendations are to do as you say...create 2 accounts and not share those accounts with others. One account shouldn't have admin rights on even the local workstation you are using in theory, but again who follows that rule, especially with UAC these days (which in theory should be enabled). There are multiple factors in why you want to go this route though. You have to factor security, convenience, corp policy, regulatory restrictions (if any), risk, etc. Keeping the Domain Admins and Administrators domain level groups nice and clean with minimal accounts is always a good idea. But don't simply share common domain admin accounts if you can avoid it. Otherwise there's a risk of someone doing something and then finger pointing between sysadmins of "it wasn't me that used that account". Better to have individual accounts or use something like CyberArk EPA to audit it correctly. Also on these lines, your Schema Admins group should always be EMPTY unless you are making a change to the schema and then you put the account in, make the change, and remove the account. The same could be said for Enterprise Admins especially in a single domain model. You should also NOT allow privileged accounts to VPN into the network. Use a normal account and then elevate as required once inside. Finally, you should use SCOM or Netwrix or some other method for auditing any privileged group and notify the appropriate group in IT whenever any of these group's members have changed. This will give you a heads up to say "wait a minute, why is so and so suddenly a Domain Admin?" etc. At the end of the day there's a reason it's called "Best Practice" and not "Only Practice"...there are acceptable choices made by IT groups based on their own needs and philosophies on this. Some (like Joe said) are simply lazy...while others simply don't care because they aren't interested in plugging one security hole when there are hundreds already and daily fires to fight. However, now that you've read all of this, consider yourself one of the ones that will fight the good fight and do what you can to keep things secure. :) References: http://www.microsoft.com/en-us/download/details.aspx?id=4868 http://technet.microsoft.com/en-us/library/cc700846.aspx http://technet.microsoft.com/en-us/library/bb456992.aspx | {
"source": [
"https://serverfault.com/questions/575050",
"https://serverfault.com",
"https://serverfault.com/users/83465/"
]
} |
575,112 | Apologies if this is not the right place for asking this question. I regularly need to ssh to different servers.
Now, from my home machine (Linux Mint), when I connect via ssh, after some time of inactivity, my ssh shell freezes, and there's no way to get it back. The only thing I can do is '~.', which at least gives me my initiating shell back. When I log in from other locations to the same servers there's no issue.
Could that be a problem with my ISP?
How can I investigate further on this one? It's really annoying, as I have to re-establish ssh connections after the freeze, navigate back to where I was and resume work. Thanks | Your NAT is dropping your TCP socket after a period of inactivity. Your ssh client can optionally send periodic noops to the server, thereby eliminating this problem. To do this, add this to your ~/.ssh/config : Host *
ServerAliveInterval 60 Alternatively, re-configure your NAT to not expire items out of its state table as quickly as it is now. In addition to the above, you should be using a terminal multiplexer for your sessions - something like GNU Screen or tmux. With either of those, you can recover your session in the event of getting disconnected. | {
"source": [
"https://serverfault.com/questions/575112",
"https://serverfault.com",
"https://serverfault.com/users/38760/"
]
} |
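The keep-alive from the answer above can also be tested ad hoc without editing the config file, and pairing it with tmux covers the case where the connection drops anyway. The hostname and session name are placeholders.
# one-off keep-alive for a single session
ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 user@server.example.com
# keep work inside tmux so a dropped connection is recoverable with the same command
ssh -t user@server.example.com 'tmux new -A -s work'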
575,163 | I need to backup data and config files on this server, daily. I need to keep: daily backups for a week weekly backups for a month monthly backups for a year yearly backups after that All of this accomplished via a shell script run daily from cron. This is how the backup files should look after 10 years of running: blog-20050103.tar.bz2
blog-20060102.tar.bz2
blog-20070101.tar.bz2
blog-20080107.tar.bz2
blog-20090105.tar.bz2
blog-20100104.tar.bz2
blog-20110103.tar.bz2
blog-20120102.tar.bz2
blog-20130107.tar.bz2
blog-20130902.tar.bz2
blog-20131007.tar.bz2
blog-20131104.tar.bz2
blog-20131202.tar.bz2
blog-20140106.tar.bz2
blog-20140203.tar.bz2
blog-20140303.tar.bz2
blog-20140407.tar.bz2
blog-20140505.tar.bz2
blog-20140602.tar.bz2
blog-20140707.tar.bz2
blog-20140728.tar.bz2
blog-20140804.tar.bz2
blog-20140811.tar.bz2
blog-20140816.tar.bz2
blog-20140817.tar.bz2
blog-20140818.tar.bz2
blog-20140819.tar.bz2
blog-20140820.tar.bz2
blog-20140821.tar.bz2
blog-20140822.tar.bz2 | You are seriously over-engineering this. Badly. Here's some pseudocode: Every day: make a backup, put into daily directory remove everything but the last 7 daily backups Every week: make a backup, put into weekly directory remove everything but the last 5 weekly backups Every month: make a backup, put into monthly directory remove everything but the last 12 monthly backups Every year: make a backup, put into yearly directory The amount of logic you have to implement is about the same, eh? KISS. This looks easier: s3cmd ls s3://backup-bucket/daily/ | \
awk '$1 < "'$(date +%F -d '1 week ago')'" {print $4;}' | \
xargs --no-run-if-empty s3cmd del Or, by file count instead of age: s3cmd ls s3://backup-bucket/daily/ | \
awk '$1 != "DIR"' | \
sort -r | \
awk 'NR > 7 {print $4;}' | \
xargs --no-run-if-empty s3cmd del | {
"source": [
"https://serverfault.com/questions/575163",
"https://serverfault.com",
"https://serverfault.com/users/24406/"
]
} |
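For a purely local variant of the same idea, here is a rough sketch of the daily job; paths and the blog name are placeholders, and the weekly, monthly and yearly jobs would be the same two lines with their own directory and retention count.
#!/bin/sh
# daily: make a backup, then keep only the newest 7 files in the daily directory
tar -cjf /backups/daily/blog-$(date +%Y%m%d).tar.bz2 /var/www/blog /etc/blog
ls -1t /backups/daily/blog-*.tar.bz2 | tail -n +8 | xargs -r rm -f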
575,239 | Ubuntu precise (12.04.1 LTS) I'm rather new to PEAR. I installed PEAR. Then, using pear I installed phpdoc. It seems to be working great except for the graphing functions. I ran this command: /var/www/site5 $ phpdoc -f models/classes.php -t ./docs/classes
Collecting files .. OK
Initializing parser .. OK
Parsing files
Parsing /var/www/site5/models/classes.php
Storing cache in "/var/www/site5/docs/classes" .. OK
Load cache .. 0.026s
Preparing template "clean" .. 0.069s
Preparing 15 transformations .. 0.000s
Build "elements" index .. 0.017s
Replace textual FQCNs with object aliases .. 0.151s
Build "packages" index .. 0.015s
Collect all markers embedded in tags .. 0.015s
Build "namespaces" index and add namespaces to "elements" .. 0.004s
Transform analyzed project into artifacts .. Unable to
find the `dot` command of the GraphViz package. Is GraphViz correctly installed
and present in your path? 12.465s
Analyze results and write report to log .. 0.004s
$ I realized that in my apache virtual host for this site I had this line: php_value include_path ".:/var/www/site5/includes" And so I thought maybe that was preventing inclusion of other directories... ? So I tried changing the line to this: php_value include_path ".:/var/www/site5/includes:/usr/lib/php:/usr/share/php" That didn't work either, so I finally commented out the line , but still the same error. In case this helps, inside of /usr/share , I ran this command: /usr/share$ find -name "*GraphViz*"
./php/phpDocumentor/vendor/phpdocumentor/graphviz/src/phpDocumentor/GraphViz
./php/phpDocumentor/vendor/phpdocumentor/graphviz/tests/phpDocumentor/GraphViz
./php/Image/GraphViz.php
./php/test/Image_GraphViz
./php/data/phpDocumentor/features/generate-documentation/graphs/GenerateClassDia
gramUsingGraphViz.feature
/usr/share$ I don't see why this is not working. Thanks for your help. | I had this problem when generating PHPDoc, during the "Transform analyzed project into artifacts"-phase. I solved this problem by executing the following command sudo apt-get install graphviz | {
"source": [
"https://serverfault.com/questions/575239",
"https://serverfault.com",
"https://serverfault.com/users/94276/"
]
} |
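After installing the package it is worth confirming that phpDocumentor can actually find the binary — it only needs dot somewhere on the PATH:
sudo apt-get install graphviz
which dot        # should print something like /usr/bin/dot
dot -V           # prints the GraphViz version (to stderr)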
575,667 | In Active Directory if you want to prevent a user from logging in you can either disable their account or simply reset their password. However, if you have a user who is already logged in to a workstation and you need to prevent them from accessing any resources as quickly as possible - how do you do it? I speak of an emergency situation in which a worker is fired with immediate effect and there is risk of them wreaking havoc if they are not locked out of the network immediately. A few days ago I've been faced with a similar case. At first I was not sure how to act. Preventing user access to network shares is easy but this is not enough. Eventually, I switched the target computer off with the Stop-Computer -ComputerName <name> -Force PowerShell cmdlet and in my case this solved the issue. However, in some cases this might not be the best choice, say if the user you need to cut off is logged in on several workstations or on a computer which provides an important service and you just cannot switch it off. What is the best possible solution to remotely force an immediate user logoff from all workstations? Is this even possible in Active Directory? | Best solution: A security guard escort the person out... Second best solution: First, check the session number with qwinsta: QWINSTA /server:computername Write down the session ID. Then use the logoff command: LOGOFF sessionID /server:computername. C:\>qwinsta /?
Display information about Remote Desktop Sessions.
QUERY SESSION [sessionname | username | sessionid]
[/SERVER:servername] [/MODE] [/FLOW] [/CONNECT] [/COUNTER] [/VM]
sessionname Identifies the session named sessionname.
username Identifies the session with user username.
sessionid Identifies the session with ID sessionid.
/SERVER:servername The server to be queried (default is current).
/MODE Display current line settings.
/FLOW Display current flow control settings.
/CONNECT Display current connect settings.
/COUNTER Display current Remote Desktop Services counters information.
/VM Display information about sessions within virtual machines.
C:\>logoff /?
Terminates a session.
LOGOFF [sessionname | sessionid] [/SERVER:servername] [/V] [/VM]
sessionname The name of the session.
sessionid The ID of the session.
/SERVER:servername Specifies the Remote Desktop server containing the user
session to log off (default is current).
/V Displays information about the actions performed.
/VM Logs off a session on server or within virtual machine. The unique ID of the session needs to be specified. I wrote a rudimentary batch script for that. It requires some unixtools in the path as well as psexec . @ECHO OFF
:: Script to log a user off a remote machine
::
:: Param 1: The machine
:: Param 2: The username
psexec \\%1 qwinsta | grep %2 | sed 's/console//' | awk '{print $2}' > %tmp%\sessionid.txt
set /p sessionid=< %tmp%\sessionid.txt
del /q %tmp%\sessionid.txt
psexec \\%1 logoff %sessionid% /v | {
"source": [
"https://serverfault.com/questions/575667",
"https://serverfault.com",
"https://serverfault.com/users/201188/"
]
} |
575,952 | If I have a single domain with visitors from both USA and Europe and also have 2 servers, one in USA and one in UK, how can I force users from USA to go the USA server and visitors from UK to go to the UK server in order to reduce the ping of visitors? First of all is this possible? And why companies like google have a different domain for each country? | And why companies like google have a different domain for each country? Because it makes it easier to have SEPARATE CONTENT for every country. Content should be static - so if you want English and for example Spanish pages to be indexed, they must have separate url's. One way is example.com/en - the other is en.example.com . The later scales better. First of all is this possible? Not for you. You need a provider that supports anycast routing. To do it yourself you need your own internationally routed IP addresses - which are impossible to get for a normal user as the smallest block assigned is more than 4000 addresses (which you must USE) and the costs are high. If you would get one you would get routing as an AS (Autonomous System) and just publish routes going to the closest server. So, not for you. But some hosts may support it. CDN's do it - so you can definitely move your static stuff off to a content delivery network. What you can do is country prefixes, and then redirect to them from the main domain. | {
"source": [
"https://serverfault.com/questions/575952",
"https://serverfault.com",
"https://serverfault.com/users/204420/"
]
} |
576,461 | I want to redirect all requests from example.com to www.example.com . Preferably, this should happen at DNS level. I tried using PTR records, but that simply fails, returning a 404. wwww.example.com is an ALIAS for an Elastic Load Balancer. What’s the simplest way to achieve this? | If you're already using Route 53, you can use their proprietary alias "record" to solve this problem. With standard DNS, you cannot do this at all and you have to have a web site send a 301 redirect. Of course, you still need to send the 301 redirects or deal with the fact that some requests will come in without the www (though you should send 301s for SEO reasons). Probably the easiest way to do this is to set up an S3 bucket with the name of the naked domain and configure the bucket properties to redirect from example.com to www.example.com, and then in Route 53 create an alias for the naked domain name that points to that S3 bucket. From the Comments To enhance the answer, here is what we did to get this working: Set up bucket - doesn't matter what its name is and must allow public. In bucket, click properties and click static website hosting. Click redirect all requests to another host name and enter the site you want traffic to go to. Copy the endpoint of the bucket name and go to the hosted zone in the Route53 console and add a CNAME with Alias No to the url that you need to be redirected from and paste the bucket endpoint as its value. | {
"source": [
"https://serverfault.com/questions/576461",
"https://serverfault.com",
"https://serverfault.com/users/154847/"
]
} |
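For reference, the bucket redirect described in the comments above can also be set up from the command line with the AWS CLI. The bucket and hostnames are examples, the JSON follows the S3 website-configuration format as I recall it (so verify against current docs), and the Route 53 alias record still has to be created separately as described above.
aws s3 mb s3://example.com
aws s3api put-bucket-website --bucket example.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.example.com","Protocol":"http"}}'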
576,472 | I've got an application which requires data recording in a outdoor environment, and I am interested in the reliability of SSDs vs HDD when placed in a cold (down to -20) and hot (+50) ambient environments. Intuition leads me to believe SSDs will be more reliable, with the possible exception of high temperatures. Air conditioning enclosures is not an option. Does anyone have any information on disk reliability in these situations? | Look for an industrial or ruggedized SSD for this application. A good example of a proper product spec. http://www.pretec.com/products/ssd-series/item/sata-ssd-series/a5000-industrial-grade .Standard 2.5" SATA III SSD, compatible with SATA III/II/I interface
.Capacity: 32GB ~ 256GB
.Data transfer rate: Up to 490 MB/s
.Built-in ECC (Error Correction Code) function
.Support ATA-8 command and SMART function
.Temperature
I. Operating Temperature: 0℃ ~ +70℃
II. Extended Temperature: -40℃ ~ +85℃
III. Storage Temperature: -55℃ ~ +95℃ | {
"source": [
"https://serverfault.com/questions/576472",
"https://serverfault.com",
"https://serverfault.com/users/209869/"
]
} |
576,490 | I'm using Docker to deploy some services on a CentOS 6.4 server, and I'm trying to figure out how to properly backup data they generate. For example, one of the services is a web application where users can upload files. For this container, I have a /files volume which I want to backup. Host mounts looks like they are somewhat frowned upon, because such mount is in no way portable — as said in this blog post and the docker documentation for volumes . I know from the same blog post that I don't need a host mount to access the files in a volume, I can use docker inspect to find out where the files are. But here's my problem: I was thinking about backing up just the dockerfiles needed to build the containers and the volumes associated with them. In the likely event that I have to restore everything from the backup, how would I go about knowing which volume directory corresponds to which container? Rebuilding the container causes the id and the volume path to change, so I would need some extra information to match them. What else, if anything, should I backup to be able to actually restore everything? | You're right. Since you can have multiple containers with volumes on their own, you need to keep track which volume corresponds to which container.
How to do that depends on your setup: I use the name -data for the data container, so it's obvious to which container an image belongs. That way it can be backed up like this: VOLUME=`docker inspect $NAME-data | jq '.[0].Volumes["/path/in/container"]'`
tar -C $VOLUME . -czvf $NAME.tar.gz Now you just need to rebuild your image and recreate your data container: cat $NAME.tar.gz | docker run -name $NAME-data -v /path/in/container \
-i busybox tar -C /path/in/container -xzf - So this means you need to backup: Dockerfile volume volume path in container name of the container the volume belongs to Update: In the meantime I created a tool to backup containers and their volume(s) (container(s)): https://github.com/discordianfish/docker-backup and a backup image that can create backups and push them to s3: https://github.com/discordianfish/docker-lloyd
"source": [
"https://serverfault.com/questions/576490",
"https://serverfault.com",
"https://serverfault.com/users/131581/"
]
} |
576,831 | When you are registering a DLL on old machines (Windows XP), regsvr32 always says that the registration was successful. This happens even if the user doesn't have permission to register. With the name of the dll, is there a command that I can run at the command line to verify if a DLL is installed? | I've found this link: How can I tell whether a DLL has been registered? : Given that DLL registration can encompass arbitrary operations, there
is no general-purpose way of determining whether registration has
taken place for an arbitrary DLL. To determine whether a DLL has been registered, you need to bring in
domain-specific knowledge. If you know that a DLL registers a COM
object with a particular CLSID, you can check whether that CLSID is
indeed registered. OK, it is impossible, but DLLs usually register themselves by creating an entry in the registry. A workaround is to: First you have to discover the COM GUID of the DLL. If you have one machine where it is already registered, you can: Open regedit and search for your DLL filename If it is registered, you will find filename under a key that is under the TypeLib. The key will look like: {9F3DBFEE-FD77-4774-868B-65F75E7DB7C2} Now that you know the DLL GUID, you can search for it with this command in a DOS prompt: reg query HKCR\CLSID | find /i "{9F3DBFEE-FD77-4774-868B-65F75E7DB7C2}" A better answer would allow me to find the GUID directly from the file before it was registered. At least this way you can create a script to install and verify if it was successfully installed. | {
"source": [
"https://serverfault.com/questions/576831",
"https://serverfault.com",
"https://serverfault.com/users/22369/"
]
} |
576,834 | I'm running an email server as part of a research study. We have subjects connecting and sending email from a variety of clients. One function of this server is to communicate with said subjects - this means sending out regular emails. In the last few weeks it seems that nearly all of these emails are being marked as spam by the big providers (Gmail, Hotmail, Yahoo, etc). As a result I'm looking into how to mark these messages as safe. One suggestion that comes up from searching is to use DKIM. I can set this up and try it, but it certainly won't be in place for all of our existing clients. If I implement it, will it block all emails from our server that aren't set up to use it? FWIW: Postfix, CentOS 6.4 x64 | I've found this link: How can I tell whether a DLL has been registered? : Given that DLL registration can encompass arbitrary operations, there
is no general-purpose way of determining whether registration has
taken place for an arbitrary DLL. To determine whether a DLL has been registered, you need to bring in
domain-specific knowledge. If you know that a DLL registers a COM
object with a particular CLSID, you can check whether that CLSID is
indeed registered. OK, it is impossible, but DLLs usually register themselves creating an entry in the register. A workaround is to: First you have to discover the COM GUID of the DLL. If you have one machine where it is already registered, you can: Open regedit and search for your DLL filename If it is registered, you will find filename under a key that is under the TypeLib. The key will look like: {9F3DBFEE-FD77-4774-868B-65F75E7DB7C2} Now that you know the DLL GUID, you can search for it with this command in a DOS prompt: reg query HKCR\CLSID | find /i "{9F3DBFEE-FD77-4774-868B-65F75E7DB7C3}" A better answer would allow me to find the GUID directly from the file before it was registered. At least this way you can create a script to install and verify if it was successfully installed. | {
"source": [
"https://serverfault.com/questions/576834",
"https://serverfault.com",
"https://serverfault.com/users/72780/"
]
} |
576,898 | I scrubbed my pool today, and after the scrub finished, I noticed there was an error that corrupted a file. I didn't care about the file, so I deleted it. Unfortunately, the error remains (now referenced by a hex ID and not a filename), and I don't know how to clear it. Should I be worried? Am I not really free of this error just yet? Can I clear the error? If the file is gone, I don't really want to see this error in the future. For reference, here are the commands I issued and the output, with annotations: Checking status kevin@atlas:~$ sudo zpool status -v
pool: zstorage
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: zfsonlinux.org/msg/ZFS-8000-8A
scan: scrub repaired 1.81M in 7h19m with 1 errors on Wed Feb 19 10:04:44 2014
config:
NAME STATE READ WRITE CKSUM
zstorage ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ata-WDC_WD30EZRX-00DC0B0_WD-WCC1T1735698 ONLINE 0 0 0
ata-WDC_WD30EZRX-00DC0B0_WD-WMC1T0506289 ONLINE 0 0 0
ata-WDC_WD30EZRX-00MMMB0_WD-WCAWZ2711600 ONLINE 0 0 0
errors: Permanent errors have been detected in the following files:
/zstorage/owncloud/kevin/files/Archives/Music/Kev Rev 7/graveyard/Old/Four Tet/Pause/03 Harmony One.mp3 Switching to root and deleting the file - I don't need it kevin@atlas:~$ sudo -i
root@atlas:~# cd /zstorage/owncloud/kevin/files/Archives/Music/Kev\ Rev\ 7/graveyard/Old/Four\ Tet/Pause/
root@atlas:/zstorage/owncloud/kevin/files/Archives/Music/Kev Rev 7/graveyard/Old/Four Tet/Pause# rm 03\ Harmony\ One.mp3 Checking status again root@atlas:/zstorage/owncloud/kevin/files/Archives/Music/Kev Rev 7/graveyard/Old/Four Tet/Pause# zpool status -v
pool: zstorage
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: zfsonlinux.org/msg/ZFS-8000-8A
scan: scrub repaired 1.81M in 7h19m with 1 errors on Wed Feb 19 10:04:44 2014
config:
NAME STATE READ WRITE CKSUM
zstorage ONLINE 0 0 1
raidz1-0 ONLINE 0 0 2
ata-WDC_WD30EZRX-00DC0B0_WD-WCC1T1735698 ONLINE 0 0 0
ata-WDC_WD30EZRX-00DC0B0_WD-WMC1T0506289 ONLINE 0 0 0
ata-WDC_WD30EZRX-00MMMB0_WD-WCAWZ2711600 ONLINE 0 0 0
errors: Permanent errors have been detected in the following files:
zstorage:<0x9f115> Uh oh. Maybe I can clear the error? root@atlas:/zstorage/owncloud/kevin/files/Archives/Music/Kev Rev 7/graveyard/Old/Four Tet/Pause# zpool clear zstorage
root@atlas:/zstorage/owncloud/kevin/files/Archives/Music/Kev Rev 7/graveyard/Old/Four Tet/Pause# zpool status -v
pool: zstorage
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: zfsonlinux.org/msg/ZFS-8000-8A
scan: scrub repaired 1.81M in 7h19m with 1 errors on Wed Feb 19 10:04:44 2014
config:
NAME STATE READ WRITE CKSUM
zstorage ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ata-WDC_WD30EZRX-00DC0B0_WD-WCC1T1735698 ONLINE 0 0 0
ata-WDC_WD30EZRX-00DC0B0_WD-WMC1T0506289 ONLINE 0 0 0
ata-WDC_WD30EZRX-00MMMB0_WD-WCAWZ2711600 ONLINE 0 0 0
errors: Permanent errors have been detected in the following files:
zstorage:<0x9f115> This doesn't look good! | Scrub your pool again (if you haven't already): zpool scrub zstorage That error is telling you that inode <0x9f115> is corrupt (deleting the file broke the filename->inode mapping, so it's just reporting the inode now). Either something still has the file open or the metadata just needs to be cleaned up (which a scrub should do). To clear the error if a scrub won't, you need to get down and dirty with zdb, which is not publicly documented by Oracle (and poorly documented elsewhere) - and at any rate that probably indicates something more fundamentally wrong. | {
"source": [
"https://serverfault.com/questions/576898",
"https://serverfault.com",
"https://serverfault.com/users/203684/"
]
} |
577,134 | I've just installed Postfix on my Ubuntu, on a local network. On this network, I have an Exchange Server (using domain mail.example.com ).
I had a problem sending an email to a local address: [email protected] : relay=none, delay=0.01, delays=0.01/0/0/0, dsn=4.3.5, status=deferred
(Host or domain name not found. Name service error for name=example.com
type=AAAA: Host found but no data record of requested type) I solved this problem using relay_domain in my Postfix main.cf : relay_domains = example.com
transport_maps = hash:/etc/postfix/transport And in my /etc/postfix/transport : example.com smtp:[mail.example.com] Now I can send mails on @example.com , and I have tested some majors webmails (Gmail, Yahoo, Hotmail...). It works. But why I got this error on my adresses @example.com ? How can I be sure I never find this error on another domain? My Postfix configuration is: postconf -n
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
config_directory = /etc/postfix
inet_interfaces = all
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
mydestination = SRVWEB, localhost.localdomain, localhost
myhostname = SRVWEB
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
myorigin = /etc/mailname
readme_directory = no
recipient_delimiter = +
relay_domains = example.com
relayhost =
smtp_generic_maps = hash:/etc/postfix/generic
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_use_tls = yes
transport_maps = hash:/etc/postfix/transport | Your server is trying to use IPv6 when sending the mail. Since the mail.example.com doesn't have an AAAA-record (which is the same as an A-record, but for IPv6), that isn't working. If you want Postfix to never use IPv6, you can change that in the config file, as explained in the postconf(5) man page: When IPv6 support is enabled via the inet_protocols parameter, Post-
fix will do DNS type AAAA record lookups.
When both IPv4 and IPv6 support are enabled, the Postfix SMTP client
will attempt to connect via IPv6 before attempting to use IPv4.
Examples:
inet_protocols = ipv4
inet_protocols = all (DEFAULT)
inet_protocols = ipv6
inet_protocols = ipv4, ipv6 If you want to change it for this domain only, change your transport map to read example.com smtp-ipv4:[mail.example.com] | {
"source": [
"https://serverfault.com/questions/577134",
"https://serverfault.com",
"https://serverfault.com/users/208502/"
]
} |
577,135 | We have our email hosted at Google Apps, and have our DNS servers for the domain setup at Namecheap. About a month ago our website went down (not a big deal, since it's most just a contact info page), but we were also unable to receive email for several hours. I narrowed down the cause to the DNS server at Namecheap. I contacted them via live chat and they said they were working on mitigating a DoS attack. The DNS servers came back up not too much longer. Today, they are being hit by another DoS attack. This one is a big one (the previous one only seemed to effect a few people). We cannot receive ANY emails right now. Our TTL on our MX servers is set to about 1 hour (I can't verify since all of Namecheap is down right now). Would setting a longer TTL help mitigate future problems like this? Thanks! | Your server is trying to use IPv6 when sending the mail. Since the mail.example.com doesn't have an AAAA-record (which is the same as an A-record, but for IPv6), that isn't working. If you want Postfix to never use IPv6, you can change that in the config file, as explained in the postconf(5) man page: When IPv6 support is enabled via the inet_protocols parameter, Post-
fix will do DNS type AAAA record lookups.
When both IPv4 and IPv6 support are enabled, the Postfix SMTP client
will attempt to connect via IPv6 before attempting to use IPv4.
Examples:
inet_protocols = ipv4
inet_protocols = all (DEFAULT)
inet_protocols = ipv6
inet_protocols = ipv4, ipv6 If you want to change it for this domain only, change your transport map to read example.com smtp-ipv4:[mail.domain.com] | {
"source": [
"https://serverfault.com/questions/577135",
"https://serverfault.com",
"https://serverfault.com/users/210200/"
]
} |
577,144 | I am designing a system where remote devices securely send status updates to a central logging server for aggregation. On the server I am using the Redis + Logstash + Elasticsearch solution. The data being sent to the server is sensitive and must be encrypted. I am struggling to find an efficient and secure means to 'LPUSH' the logs to the Redis list. The devices are currently capable of sending the following Redis command directly to port 6379. "*3\r\n$5\r\nLPUSH\r\n$3\r\nkey\r\n$5\r\nvalue\r\n" The correct key and list entry are created within Redis on the server. The next step is to put redis behind a firewall and encrypt the packets. My current attempt was to use Apache as a reverse proxy. A device would make a 2-way SSL connection to Apache and then reverse proxy the decreypted information to port 6379 using the loopback interface. The 2-way SSL connection is made without problems and a message is forwarded to Redis. Unfortunately it is not the message the device sent. tcpdump tells me the following... tcpdump -nnXvv -i lo host localhost and port 6379 127.0.0.1.48916 > 127.0.0.1.6379: Flags [P.], cksum 0xfeab (incorrect -> 0x9415),
seq 1:132, ack 1, win 1025, options [nop,nop,TS val 299310518 ecr 299310518],
length 131
0x0000: 4500 00b7 12b7 4000 4006 2988 7f00 0001 E.....@.@.).....
0x0010: 7f00 0001 bf14 18eb ce9c 0f04 e920 abec ................
0x0020: 8018 0401 feab 0000 xxxx xxxx xxxx xxxx ................
0x0030: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx ....*3./.HTTP/1.
0x0040: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx 1..Host:.localho
0x0050: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx st:6379..X-Forwa
0x0060: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx rded-For:.xx.xxx
0x0070: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx .xxx.xxx..X-Forw
0x0080: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx arded-Server:.xx
0x0090: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx .xxx.xxx.xx..Con
0x00a0: xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx nection:.Keep-Al
0x00b0: xxxx xxxx xxxx xx ive.... As can be seen in the ASCII translation, Apache is truncating the message at the first CRLF after *3 and appending HTTP header information for forwarding, as it is suppose to. Of course Redis is replying with an error as the message is no longer formatted using the Redis Serialization Protocol (RESP). 1) Is there a way to configure Apache to blindly forward the raw TCP packets? 2) If not, is there a standard open source solution to this problem? Thank you for your time! | Your server is trying to use IPv6 when sending the mail. Since the mail.example.com doesn't have an AAAA-record (which is the same as an A-record, but for IPv6), that isn't working. If you want Postfix to never use IPv6, you can change that in the config file, as explained in the postconf(5) man page: When IPv6 support is enabled via the inet_protocols parameter, Post-
fix will do DNS type AAAA record lookups.
When both IPv4 and IPv6 support are enabled, the Postfix SMTP client
will attempt to connect via IPv6 before attempting to use IPv4.
Examples:
inet_protocols = ipv4
inet_protocols = all (DEFAULT)
inet_protocols = ipv6
inet_protocols = ipv4, ipv6 If you want to change it for this domain only, change your transport map to read example.com smtp-ipv4:[mail.domain.com] | {
"source": [
"https://serverfault.com/questions/577144",
"https://serverfault.com",
"https://serverfault.com/users/134556/"
]
} |
577,370 | I have a docker container running Nginx that links to another docker container. The host name and IP address of the second container are loaded into the Nginx container as environment variables on startup, but are not known before then (they're dynamic). I want my nginx.conf to use these values - e.g. upstream gunicorn {
server $APP_HOST_NAME:$APP_HOST_PORT;
} How can I get environment variables into the Nginx configuration on startup? EDIT 1 This is the entire file, after the suggested answer below: env APP_WEB_1_PORT_5000_TCP_ADDR;
# Nginx host configuration for django_app
# Django app is served by Gunicorn, running under port 5000 (via Foreman)
upstream gunicorn {
server $ENV{"APP_WEB_1_PORT_5000_TCP_ADDR"}:5000;
}
server {
listen 80;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location /static/ {
alias /app/static/;
}
location /media/ {
alias /app/media/;
}
location / {
proxy_pass http://gunicorn;
}
} Reloading nginx then errors: $ nginx -s reload
nginx: [emerg] unknown directive "env" in /etc/nginx/sites-enabled/default:1 EDIT 2: more details Current environment variables root@87ede56e0b11:/# env | grep APP_WEB_1
APP_WEB_1_NAME=/furious_turing/app_web_1
APP_WEB_1_PORT=tcp://172.17.0.63:5000
APP_WEB_1_PORT_5000_TCP=tcp://172.17.0.63:5000
APP_WEB_1_PORT_5000_TCP_PROTO=tcp
APP_WEB_1_PORT_5000_TCP_PORT=5000
APP_WEB_1_PORT_5000_TCP_ADDR=172.17.0.63 Root nginx.conf: root@87ede56e0b11:/# head /etc/nginx/nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
env APP_WEB_1_PORT_5000_TCP_ADDR; Site nginx configuration: root@87ede56e0b11:/# head /etc/nginx/sites-available/default
# Django app is served by Gunicorn, running under port 5000 (via Foreman)
upstream gunicorn {
server $ENV{"APP_WEB_1_PORT_5000_TCP_ADDR"}:5000;
}
server {
listen 80; Reload nginx configuration: root@87ede56e0b11:/# nginx -s reload
nginx: [emerg] directive "server" is not terminated by ";" in /etc/nginx/sites-enabled/default:3 | From the official Nginx docker file: Using environment variables in nginx configuration: Out-of-the-box, Nginx doesn't support using environment variables
inside most configuration blocks. But envsubst may be used as a
workaround if you need to generate your nginx configuration
dynamically before nginx starts. Here is an example using docker-compose.yml: image: nginx
volumes:
- ./mysite.template:/etc/nginx/conf.d/mysite.template
ports:
- "8080:80"
environment:
- NGINX_HOST=foobar.com
- NGINX_PORT=80
command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'" The mysite.template file may then contain variable references like
this: listen ${NGINX_PORT}; Update: But note that this also substitutes Nginx's own variables, so a line like this: proxy_set_header X-Forwarded-Host $host; gets damaged to: proxy_set_header X-Forwarded-Host ; So, to prevent that, I use this trick: I have a script to run Nginx that is used in the docker-compose file as the command option for the Nginx server; I named it run_nginx.sh: #!/usr/bin/env bash
export DOLLAR='$'
envsubst < nginx.conf.template > /etc/nginx/nginx.conf
nginx -g "daemon off;" And because of the new DOLLAR variable defined in the run_nginx.sh script, lines in my nginx.conf.template file that use Nginx's own variables now look like this: proxy_set_header X-Forwarded-Host ${DOLLAR}host; And lines that use my own defined variables look like this: server_name ${WEB_DOMAIN} www.${WEB_DOMAIN}; Also here, there is my real use case for that. | {
"source": [
"https://serverfault.com/questions/577370",
"https://serverfault.com",
"https://serverfault.com/users/45578/"
]
} |
577,378 | I have a load balancer configured to forward port 443 to port 80 on the EC2 servers, and with an intercepting proxy like Burp Suite I can edit the request. How can I configure the ELB to avoid this type of attack? For example, when I access the script /userprofile/Get.php, sending the user_id param by POST, with Burp Suite I can modify this user_id to another one. | From the official Nginx docker file: Using environment variables in nginx configuration: Out-of-the-box, Nginx doesn't support using environment variables
inside most configuration blocks. But envsubst may be used as a
workaround if you need to generate your nginx configuration
dynamically before nginx starts. Here is an example using docker-compose.yml: image: nginx
volumes:
- ./mysite.template:/etc/nginx/conf.d/mysite.template
ports:
- "8080:80"
environment:
- NGINX_HOST=foobar.com
- NGINX_PORT=80
command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'" The mysite.template file may then contain variable references like
this : listen ${NGINX_PORT}; Update: But you know this caused to its Nginx variables like this: proxy_set_header X-Forwarded-Host $host; damaged to: proxy_set_header X-Forwarded-Host ; So, to prevent that, i use this trick: I have a script to run Nginx, that used on the docker-compose file as command option for Nginx server, i named it run_nginx.sh : #!/usr/bin/env bash
export DOLLAR='$'
envsubst < nginx.conf.template > /etc/nginx/nginx.conf
nginx -g "daemon off;" And because of defined new DOLLAR variable on run_nginx.sh script, now content of my nginx.conf.template file for Nginx itself variable is like this: proxy_set_header X-Forwarded-Host ${DOLLAR}host; And for my defined variable is like this: server_name ${WEB_DOMAIN} www.${WEB_DOMAIN}; Also here , there is my real use case for that. | {
"source": [
"https://serverfault.com/questions/577378",
"https://serverfault.com",
"https://serverfault.com/users/210331/"
]
} |
577,387 | I want to install PHP5 on an internal production server running Ubuntu 12.04 LTS. When I try to use apt-get to install it, it lists a multitude of dependencies and recommends running apt-get -f install . When I run that I get this returned: Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
linux-headers-3.2.0-59 linux-headers-3.2.0-59-generic linux-headers-server linux-image-3.2.0-59-generic linux-image-server linux-server
Suggested packages:
fdutils linux-doc-3.2.0 linux-source-3.2.0 linux-tools
The following NEW packages will be installed:
linux-headers-3.2.0-59 linux-headers-3.2.0-59-generic linux-image-3.2.0-59-generic
The following packages will be upgraded:
linux-headers-server linux-image-server linux-server
3 upgraded, 3 newly installed, 0 to remove and 379 not upgraded.
3 not fully installed or removed.
Need to get 51.4 MB of archives.
After this operation, 218 MB of additional disk space will be used.
Do you want to continue [Y/n]? n Is this a safe upgrade to do on a production machine? I know apt-get dist-upgrade can be pretty major and break things. Is this a minor upgrade or major? Thanks ---Update 1--- The /boot partition is full, not allowing me to run apt-get -f install . When attempting to run this script from ubuntugenius dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge I get the following: rgs sudo apt-get -y purge
[sudo] password for tech:
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
linux-headers-server : Depends: linux-headers-3.2.0-59-generic but it is not going to be installed
linux-server : Depends: linux-headers-server (= 3.2.0.37.44) but 3.2.0.59.70 is to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I feel like I'm stuck in a bit of a loop now with a full /boot that won't let me run repairs, yet I can't purge /boot without running the repairs. --- Update 2 ---- I have successfully cleared out some space in /boot only now to get the following error: Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
linux-server
The following packages will be upgraded:
linux-server
1 upgraded, 0 newly installed, 0 to remove and 379 not upgraded.
1 not fully installed or removed.
Need to get 0 B/1,732 B of archives.
After this operation, 1,024 B of additional disk space will be used.
Do you want to continue [Y/n]? y
dpkg: dependency problems prevent configuration of linux-server:
linux-server depends on linux-image-server (= 3.2.0.37.44); however:
Version of linux-image-server on system is 3.2.0.59.70.
linux-server depends on linux-headers-server (= 3.2.0.37.44); however:
Version of linux-headers-server on system is 3.2.0.59.70.
dpkg: error processing linux-server (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
linux-server
E: Sub-process /usr/bin/dpkg returned an error code (1) | From the official Nginx docker file: Using environment variables in nginx configuration: Out-of-the-box, Nginx doesn't support using environment variables
inside most configuration blocks. But envsubst may be used as a
workaround if you need to generate your nginx configuration
dynamically before nginx starts. Here is an example using docker-compose.yml: image: nginx
volumes:
- ./mysite.template:/etc/nginx/conf.d/mysite.template
ports:
- "8080:80"
environment:
- NGINX_HOST=foobar.com
- NGINX_PORT=80
command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'" The mysite.template file may then contain variable references like
this : listen ${NGINX_PORT}; Update: But you know this caused to its Nginx variables like this: proxy_set_header X-Forwarded-Host $host; damaged to: proxy_set_header X-Forwarded-Host ; So, to prevent that, i use this trick: I have a script to run Nginx, that used on the docker-compose file as command option for Nginx server, i named it run_nginx.sh : #!/usr/bin/env bash
export DOLLAR='$'
envsubst < nginx.conf.template > /etc/nginx/nginx.conf
nginx -g "daemon off;" And because of defined new DOLLAR variable on run_nginx.sh script, now content of my nginx.conf.template file for Nginx itself variable is like this: proxy_set_header X-Forwarded-Host ${DOLLAR}host; And for my defined variable is like this: server_name ${WEB_DOMAIN} www.${WEB_DOMAIN}; Also here , there is my real use case for that. | {
"source": [
"https://serverfault.com/questions/577387",
"https://serverfault.com",
"https://serverfault.com/users/98263/"
]
} |
577,942 | Is there a better way to install only the required dependencies of a package, instead of installing it directly with apt-get (or any other frontend of dpkg) and then immediately removing it while leaving its dependencies in place? | apt-cache depends pkgname will show a package's dependencies. If you want it all in one command, you could do worse than: apt-get install `apt-cache depends pkgname | awk '/Depends:/{print $2}'` | {
"source": [
"https://serverfault.com/questions/577942",
"https://serverfault.com",
"https://serverfault.com/users/127819/"
]
} |