source_id | question | response | metadata
---|---|---|---|
531,751 | On a multihomed Linux machine, how can I find out what network interface will be used to send a packet to a specific host? I need to do this programmatically and I'd rather not parse and interpret the routing table myself. | Use ip route for this. For instance: ip route show to match 198.252.206.16 | {
"source": [
"https://serverfault.com/questions/531751",
"https://serverfault.com",
"https://serverfault.com/users/4160/"
]
} |
531,818 | I understand that changing the host name/computer name for a Windows based machine (Client or Server) requires a reboot. I realize that once a computer name is changed, a pending machine name is stored in the system registry and is applied on next boot. I was not able to find much technical documentation on exactly WHY this was required. Can someone please point me in the right direction on the technical challenges behind why a host name change is not complete without a system reboot ? | You don't have to restart immediately after renaming the machine... just don't expect every service and function to pick up on the new name. Things such as system environment variables are read once, at system startup. If you change those variables, the various components of Windows won't pick up on them until they're restarted. Some of those components are so deeply integrated with the OS, you might as well restart the entire OS. Many Windows services and applications, both those written by Microsoft as well as 3rd party, that use the system's hostname, typically only read it once at startup and never read it again. I know if I was writing an application that read the machine's hostname, it would be pretty silly of me to periodically poll for it just in case it changed. In an operating system such as Linux, you see the same thing. You can change the hostname without rebooting, but you do have to restart some very basic components of the system in order to get them to pick up on the new name. Linux is more modular than Windows, although Windows has come a long way in terms of modularity. One way to detect whether a Windows system is pending a computer rename operation is to check the registry. If the contents of HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName and HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ActiveComputerName are not the same, that means the system has a pending rename operation that will complete the next time the system reboots. | {
"source": [
"https://serverfault.com/questions/531818",
"https://serverfault.com",
"https://serverfault.com/users/17751/"
]
} |
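The registry check at the end of that answer can be scripted. Here is a small sketch using Python's built-in winreg module (Windows only); it assumes both keys expose the name in a value called "ComputerName", which matches the standard layout of those keys.

```python
"""Detect a pending Windows computer rename (sketch, Windows only).

Compares the two registry keys mentioned in the answer above; if they
differ, a rename is waiting for the next reboot.
"""
import winreg

BASE = r"SYSTEM\CurrentControlSet\Control\ComputerName"

def read_name(subkey: str) -> str:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, rf"{BASE}\{subkey}") as key:
        value, _value_type = winreg.QueryValueEx(key, "ComputerName")
        return value

if __name__ == "__main__":
    pending = read_name("ComputerName")
    active = read_name("ActiveComputerName")
    if pending != active:
        print(f"Rename pending: {active!r} -> {pending!r} (takes effect after reboot)")
    else:
        print(f"No rename pending; hostname is {active!r}")
```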
531,891 | I am investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewriting my scripts, I took a sample and attempted to deploy it. It appears to deploy fine, but I have seen a failure message after what looks like a series of successful steps: » vagrant provision ~/vm/blvagrant 1 ↵
[default] Running provisioner: ansible...
PLAY [web-servers] ************************************************************
GATHERING FACTS ***************************************************************
ok: [192.168.9.149]
TASK: [install python-software-properties] ************************************
ok: [192.168.9.149] => {"changed": false, "item": ""}
TASK: [add nginx ppa if it ubuntu 10.04 and up] *******************************
ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"}
TASK: [update apt repo] *******************************************************
ok: [192.168.9.149] => {"changed": false, "item": ""}
TASK: [install nginx] *********************************************************
ok: [192.168.9.149] => {"changed": false, "item": ""}
TASK: [copy fixed init for nginx] *********************************************
ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0}
TASK: [service nginx] *********************************************************
ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"}
TASK: [write nginx.conf] ******************************************************
ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0}
PLAY RECAP ********************************************************************
192.168.9.149 : ok=8 changed=0 unreachable=0 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again. How do I go about getting additional debug information? I've already added ansible.verbose = true to my vagrant config which results in the dictionaries being displayed within the output above. | You can also add this into your Vagrantfile: ansible.verbose = "vvv" this would need to go where you're kicking off the provisioning, like this: config.vm.provision "ansible" do |ansible|
ansible.verbose = "vvv"
end This sets the verbose option of ansible: -v, --verbose verbose mode (-vvv for more, -vvvv to enable
connection debugging) Setting this to vvvv (four v's) is useful for debugging SSH connection errors - but it creates a huge amount of debug output, so only use four v's if you're having connection problems. | {
"source": [
"https://serverfault.com/questions/531891",
"https://serverfault.com",
"https://serverfault.com/users/1655/"
]
} |
531,941 | This is a Canonical Question about DoS and DDoS mitigation. I found a massive traffic spike on a website that I host today; I am getting thousands of connections a second and I see I'm using all 100Mbps of my available bandwidth. Nobody can access my site because all the requests time out, and I can't even log into the server because SSH times out too! This has happened a couple times before, and each time it's lasted a couple hours and gone away on its own. Occasionally, my website has another distinct but related problem: my server's load average (which is usually around .25) rockets up to 20 or more and nobody can access my site just the same as the other case. It also goes away after a few hours. Restarting my server doesn't help; what can I do to make my site accessible again, and what is happening? Relatedly, I found once that for a day or two, every time I started my service, it got a connection from a particular IP address and then crashed. As soon as I started it up again, this happened again and it crashed again. How is that similar, and what can I do about it? | You are experiencing a denial of service attack. If you see traffic coming from multiple networks (different IPs on different subnets) you've got a distributed denial of service (DDoS); if it's all coming from the same place you have a plain old DoS. It can be helpful to check, if you are able; use netstat to check. This might be hard to do, though. Denial of service usually falls into a couple categories: traffic-based, and load-based. The last item (with the crashing service) is exploit-based DoS and is quite different. If you're trying to pin down what type of attack is happening, you may want to capture some traffic (using wireshark, tcpdump, or libpcap). You should, if possible, but also be aware that you will probably capture quite a lot of traffic. As often as not, these will come from botnets (networks of compromised hosts under the central control of some attacker, whose bidding they will do). This is a good way for the attacker to (very cheaply) acquire the upstream bandwidth of lots of different hosts on different networks to attack you with, while covering their tracks. The Low Orbit Ion Cannon is one example of a botnet (despite being voluntary instead of malware-derived); Zeus is a more typical one. Traffic-based If you're under a traffic-based DoS, you're finding that there is just so much traffic coming to your server that its connection to the Internet is completely saturated. There is a high packet loss rate when pinging your server from elsewhere, and (depending on routing methods in use) sometimes you're also seeing really high latency (the ping is high). This kind of attack is usually a DDoS. While this is a really "loud" attack, and it's obvious what is going on, it's hard for a server administrator to mitigate (and basically impossible for a user of shared hosting to mitigate). You're going to need help from your ISP; let them know you're under a DDoS and they might be able to help. However, most ISPs and transit providers will proactively realize what is going on and publish a blackhole route for your server. What this means is that they publish a route to your server with as little cost as possible, via 0.0.0.0 : they make traffic to your server no longer routeable on the Internet. These routes are typically /32s and eventually they are removed. This doesn't help you at all; the purpose is to protect the ISP's network from the deluge. 
For the duration, your server will effectively lose Internet access. The only way your ISP (or you, if you have your own AS) is going to be able to help is if they are using intelligent traffic shapers that can detect and rate-limit probable DDoS traffic. Not everyone has this technology. However, if the traffic is coming from one or two networks, or one host, they might also be able to block the traffic ahead of you. In short, there is very little you can do about this problem. The best long-term solution is to host your services in many different locations on the Internet which would have to be DDoSed individually and simultaneously, making the DDoS much more expensive. Strategies for this depend on the service you need to protect; DNS can be protected with multiple authoritative nameservers, SMTP with backup MX records and mail exchangers, and HTTP with round-robin DNS or multihoming (but some degradation might be noticeable for the duration anyway). Load balancers are rarely an effective solution to this problem, because the load balancer itself is subject to the same problem and merely creates a bottleneck. IPTables or other firewall rules will not help because the problem is that your pipe is saturated. Once the connections are seen by your firewall, it is already too late ; the bandwidth into your site has been consumed. It doesn't matter what you do with the connections; the attack is mitigated or finished when the amount of incoming traffic goes back down to normal. If you are able to do so, consider using a content distribution network (CDN) like Akamai, Limelight and CDN77, or use a DDoS scrubbing service like CloudFlare or Prolexic. These services take active measures to mitigate these types of attacks, and also have so much available bandwidth in so many different places that flooding them is not generally feasible. If you decide to use CloudFlare (or any other CDN/proxy) remember to hide your server's IP. If an attacker finds out the IP, he can again DDoS your server directly, bypassing CloudFlare. To hide the IP, your server should never communicate directly with other servers/users unless they are safe. For example your server should not send emails directly to users. This doesn't apply if you host all your content on the CDN and don't have a server of your own. Also, some VPS and hosting providers are better at mitigating these attacks than others. In general, the larger they are, the better they will be at this; a provider which is very well-peered and has lots of bandwidth will be naturally more resilient, and one with an active and fully staffed network operations team will be able to react more quickly. Load-based When you are experiencing a load-based DDoS, you notice that the load average is abnormally high (or CPU, RAM, or disk usage, depending on your platform and the specifics). Although the server doesn't appear to be doing anything useful, it is very busy. Often, there will be copious amounts of entries in the logs indicating unusual conditions. More often than not this is coming from a lot of different places and is a DDoS, but that isn't necessarily the case. There don't even have to be a lot of different hosts . This attack is based on making your service do a lot of expensive stuff. 
This could be something like opening a gargantuan number of TCP connections and forcing you to maintain state for them, or uploading excessively large or numerous files to your service, or perhaps doing really expensive searches, or really doing anything that is expensive to handle. The traffic is within the limits of what you planned for and can take on, but the types of requests being made are too expensive to handle so many of . Firstly, that this type of attack is possible is often indicative of a configuration issue or bug in your service. For instance, you may have overly verbose logging turned on, and may be storing logs on something that's very slow to write to. If someone realizes this and does a lot of something which causes you to write copious amounts of logs to disk, your server will slow to a crawl. Your software might also be doing something extremely inefficient for certain input cases; the causes are as numerous as there are programs, but two examples would be a situation that causes your service to not close a session that is otherwise finished, and a situation that causes it to spawn a child process and leave it. If you end up with tens of thousands of open connections with state to keep track of, or tens of thousands of child processes, you'll run into trouble. The first thing you might be able to do is use a firewall to drop the traffic . This isn't always possible, but if there is a characteristic you can find in the incoming traffic (tcpdump can be nice for this if the traffic is light), you can drop it at the firewall and it will no longer cause trouble. The other thing to do is to fix the bug in your service (get in touch with the vendor and be prepared for a long support experience). However, if it's a configuration issue, start there . Turn down logging on production systems to a reasonable level (depending on the program this is usually the default, and will usually involve making sure "debug" and "verbose" levels of logging are off; if everything a user does is logged in exact and fine detail, your logging is too verbose). Additionally, check child process and request limits , possibly throttle incoming requests, connections per IP, and the number of allowed child processes, as applicable. It goes without saying that the better configured and better provisioned your server is, the harder this type of attack will be. Avoid being stingy with RAM and CPU in particular. Ensure your connections to things like backend databases and disk storage are fast and reliable. Exploit-based If your service mysteriously crashes extremely quickly after being brought up, particularly if you can establish a pattern of requests that precede the crash and the request is atypical or doesn't match expected use patterns, you might be experiencing an exploit-based DoS. This can come from as few as just one host (with pretty much any type of internet connection), or many hosts. This is similar to a load-based DoS in many respects, and has basically the same causes and mitigations. The difference is merely that in this case, the bug doesn't cause your server to be wasteful, but to die. The attacker is usually exploiting a remote crash vulnerability, such as garbled input that causes a null-dereference or something in your service. Handle this similarly to an unauthorized remote access attack. Firewall against the originating hosts and type of traffic if they can be pinned down. Use validating reverse proxies if applicable. 
Gather forensic evidence (try and capture some of the traffic), file a bug ticket with the vendor, and consider filing an abuse complaint (or legal complaint) against the origin too. These attacks are fairly cheap to mount, if an exploit can be found, and they can be very potent, but also relatively easy to track down and stop. However, techniques that are useful against traffic-based DDoS are generally useless against exploit-based DoS. | {
"source": [
"https://serverfault.com/questions/531941",
"https://serverfault.com",
"https://serverfault.com/users/126699/"
]
} |
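To act on the "use netstat to check" suggestion in that answer, here is a rough Python sketch that counts connections per remote address, which helps distinguish a single-source DoS from a distributed one. It assumes the net-tools `netstat` binary is installed and that the foreign address is the fifth column of its output, as in typical Linux formatting.

```python
"""Rough triage helper: how many connections does each remote IP hold? (sketch)

Parses `netstat -nt` output. Lots of connections from one address suggests a
plain DoS; a wide spread of addresses suggests a DDoS or simply heavy
legitimate load.
"""
import subprocess
from collections import Counter

def connections_per_ip() -> Counter:
    out = subprocess.check_output(["netstat", "-nt"], text=True)
    counts = Counter()
    for line in out.splitlines():
        fields = line.split()
        # Data rows start with "tcp"/"tcp6"; field 5 is the foreign address.
        if len(fields) >= 5 and fields[0].startswith("tcp"):
            remote = fields[4].rsplit(":", 1)[0]  # strip the port
            counts[remote] += 1
    return counts

if __name__ == "__main__":
    for ip, n in connections_per_ip().most_common(20):
        print(f"{n:6d}  {ip}")
```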
532,086 | Recently I was at a local user group meeting where the presenter noted that the maximum throughput of the NTFS IO stack was 1 GBps. He substantiated his claim by simultaneously copying two large files from the same logical volume to different logical volumes (i.e. [a] is the source, [b] is destination 1 and [c] is destination 2) and noting the transfer rates hovering around 500 MBps. He repeated this test a few times and noted that the underlying storage subsystem was flash (to make sure we didn't suspect slow storage). I've been trying to verify this assertion but cannot find anything documented. I suspect that I'm searching for the wrong search terms ("1GBps NTFS throughput", "NTFS throughput maximum"). I'm interested in whether or not the IO stack is actually limited to 1GBps throughput. EDIT To clarify: I do not believe the presenter intended to imply that NTFS was intentionally limited (and I'm sorry if I implied that as well). I think it was implied that it was a function of the design of the filesystem. | Even assuming you meant GBps and not Gbps ... I am unaware of any filesystem that has an actual throughput limit . Filesystems are simply structures around how to store and retrieve files. They use metadata, structure, naming conventions, security conventions, etc. but the actual throughput limitations are defined by the underlying hardware itself (typically a combination of lots of hardware involved). Comparing different filesystems and how they affect performance of the underlying hardware can be done, but again that isn't a limitation directly imposed by the filesystem but more of a "variable" in the overall performance of the system. Choosing to deploy one filesystem over another is typically related to what the underlying OS is, what the server/application is going to be, what the underlying hardware is, and soft factors such as the admin's areas of expertise and familiarity. ================================================================================== TECHNICAL RESOURCES AND CITATIONS Optimizing NTFS NTFS Performance Factors You determine many of the factors that affect an NTFS volumes'
performance. You choose important elements such as an NTFS volume's
type (e.g., SCSI, or IDE), speed (e.g., the disks' rpm speed), and the
number of disks the volume contains. In addition to these important
components, the following factors significantly influence an NTFS
volume's performance: The cluster and allocation unit size The location and fragmentation level of frequently accessed files, such as the Master File Table (MFT), directories, special files
containing NTFS metadata, the paging file, and commonly used user data
files Whether you create the NTFS volume from scratch or convert it from an existing FAT volume Whether the volume uses NTFS compression Whether you disable unnecessary NTFS behaviors Using faster disks and more drives in multidisk volumes is an obvious
way to improve performance. The other performance improvement methods
are more obscure and relate to the details of an NTFS volume's
configuration. Scalability and Performance in Modern File Systems Unfortunately, it is impossible to do direct performance comparisons
of the file systems under discussion since they are not all available
on the same platform. Further, since available data is necessarily
from differing hardware platforms, it is difficult to distinguish the
performance characteristics of the file system from that of the
hardware platform on which it is running. NTFS Optimization New white paper providing guidance for sizing NTFS volumes What's new in NTFS Configuring NTFS file system for performance https://superuser.com/questions/411720/how-does-ntfs-compression-affect-performance Best practices for NTFS compression in Windows | {
"source": [
"https://serverfault.com/questions/532086",
"https://serverfault.com",
"https://serverfault.com/users/43822/"
]
} |
532,106 | Suppose I have the user id of a user in Active Directory. I'd like to get a list of all AD groups in which that user is currently a member of. How can I do this from the Windows command line? I've tried the following: dsget user "DC=jxd123" -memberof Error: dsquery failed:'-memberof' is an unknown parameter.
type dsquery /? for help. | You can do this in PowerShell pretty easily. I'm sure you can do it with the ds tools too, but they're old and crusty and PowerShell should be used for everything possible nowadays. Import-Module ActiveDirectory
(Get-ADUser userName -Properties MemberOf | Select-Object MemberOf).MemberOf Shorter version (Get-ADUser userName -Properties MemberOf).MemberOf | {
"source": [
"https://serverfault.com/questions/532106",
"https://serverfault.com",
"https://serverfault.com/users/82603/"
]
} |
532,559 | I would like to have a count down of 5 minutes, updating every second and showing the result on the same line. Is this even possible with Bash scripting? | This works from Bash shell: secs=$((5 * 60))
while [ $secs -gt 0 ]; do
echo -ne "$secs\033[0K\r"
sleep 1
: $((secs--))
done The escape sequence \033[0K erases from the cursor to the end of the line, which cleans up any characters left over from previous output, and \r is a carriage return which moves the cursor to the beginning of the line. There is a nice thread about this feature at stackoverflow.com. You can add your own commands or whatever else you need in the while loop. If you need something more specific, please provide more details. | {
"source": [
"https://serverfault.com/questions/532559",
"https://serverfault.com",
"https://serverfault.com/users/186433/"
]
} |
532,675 | With CloudWatch monitoring script (mon-put-instance-data.pl) it's possible to specify a IAM role name to provide AWS credentials (--aws-iam-role=VALUE). I'm creating a IAM role for this purpose (to run mon-put-instance-data.pl on an AWS instance), but which permissions / policies should I give to this role?? Thank you for your help | The Amazon CloudWatch Monitoring Scripts for Linux are comprised of two Perl scripts, both using one Perl module - a short peek into the source reveals the following AWS API actions being used: CloudWatchClient.pm - DescribeTags mon-get-instance-stats.pl - GetMetricStatistics , ListMetrics mon-put-instance-data.pl - PutMetricData With this information you can assemble your IAM policy , e.g. via the AWS policy generator - an all encompassing policy would be: {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"cloudwatch:PutMetricData",
"ec2:DescribeTags"
],
"Effect": "Allow",
"Resource": "*"
}
]
} Of course you can drop cloudwatch:GetMetricStatistics cloudwatch:ListMetrics when just using mon-put-instance-data.pl - please note that I haven't actually tested the code though. | {
"source": [
"https://serverfault.com/questions/532675",
"https://serverfault.com",
"https://serverfault.com/users/186507/"
]
} |
532,681 | I have a number of EC2 servers on AWS running apache behind a load balancer (ELB). Every now and then some IP address abuses the API hosted on the EC2 servers and causes a denial of service. I have no access to the load balancer so I need to block access at the server's level. I changed the apache access log to display IP's based on the X-Forwarded-For header provided by the load balancer (otherwise it just displays the load balancer's IP), so I can identify these IP's and block them (again by specifying the X-Forwarded-For) with something like: <Directory api_dir>
SetEnvIF X-FORWARDED-FOR "1.1.1.1" DenyIP
Order allow,deny
allow from all
deny from env=DenyIP
</Directory> However, this still means that I need to manually handle every attack, and my server suffers some downtime as a result. What is the recommended way to automatically block attacks of repeated HTTP calls, based not on IP but on the Forwarded-For header coming from the load balancer. | The Amazon CloudWatch Monitoring Scripts for Linux are comprised of two Perl scripts, both using one Perl module - a short peek into the source reveals the following AWS API actions being used: CloudWatchClient.pm - DescribeTags mon-get-instance-stats.pl - GetMetricStatistics , ListMetrics mon-put-instance-data.pl - PutMetricData With this information you can assemble your IAM policy , e.g. via the AWS policy generator - an all encompassing policy would be: {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"cloudwatch:PutMetricData",
"ec2:DescribeTags"
],
"Effect": "Allow",
"Resource": "*"
}
]
} Of course you can drop cloudwatch:GetMetricStatistics cloudwatch:ListMetrics when just using mon-put-instance-data.pl - please note that I haven't actually tested the code though. | {
"source": [
"https://serverfault.com/questions/532681",
"https://serverfault.com",
"https://serverfault.com/users/113867/"
]
} |
I am new to PowerShell, but I've been reading manuals and have practiced a little bit.
My objective is to list all users in all Security Groups under a specified path.
I have found the way to do it: get-adgroup -Filter * -SearchBase "OU=Groups,DC=corp,DC=ourcompany,DC=Com" | %{Get-ADGroupMember $_.name} | ft name But the problem is I do not see the group name. All I get is a bunch of users.
It would be nice if someone could tell me how to display the group name before all the members of this group get listed.
Thanks. | Gimme the codes! powers, activate! $Groups = Get-ADGroup -Properties * -Filter * -SearchBase "OU=Groups,DC=corp,DC=ourcompany,DC=Com"
Foreach($G In $Groups)
{
Write-Host $G.Name
Write-Host "-------------"
$G.Members
} The point being, just take your time and break it out into steps. I know that it's fun to try to get everything and the kitchen sink to fit into a one-liner with Powershell, but it's by no means required. A few notes: You don't need to do Get-ADGroupMember if you collect the Members property in the initial Get-ADGroup Cmdlet. The good thing about this is that it halves the amount of calls you have to make to AD, which should make your script run faster, and it eases the burden on the domain controller. $G.Members will display all members of the group $G... in Powershell 3. In Powershell 2, you might still need to put another Foreach inside the Foreach there to enumerate through the group members. ( Yo dawg, I heard you like loops... ) I use Write-Host here, which is gross. You should never really use Write-Host . Instead, you should be building and outputting objects, not text, but that was a whole other topic and I was too lazy to do that for this answer. | {
"source": [
"https://serverfault.com/questions/532945",
"https://serverfault.com",
"https://serverfault.com/users/107384/"
]
} |
532,995 | I am running Windows 8 64bit and running Hyper-V VM's. On my host machine I want to map a drive to the VM using the following command. net use * \\10.10.18.104\c$
Drive Y: is now connected to \\10.10.18.104\c$. When I look in File Explorer I don't see a Y: drive. However, in the command window I can change directories using these commands. Y:
dir and all the files are displayed. Why does this not show up in File Explorer? This seems to be intermittent as some days the drive does show up in File Explorer. Thanks for the help | I'm going to venture a guess that you're running the command prompt as another user or as "Administrator" and you have UAC enabled. When this happens, you've mapped the drive under a different user context than the one your Explorer window is running in. Launch a command prompt without doing Run As Administrator or Run As a Different User and you should be fine. | {
"source": [
"https://serverfault.com/questions/532995",
"https://serverfault.com",
"https://serverfault.com/users/36501/"
]
} |
533,513 | Since ifconfig is apparently being deprecated in major Linux distributions, I thought I'd learn something about the ip tool that's supposed to be used instead of ifconfig. And here I ran into a problem: when run on its own, ifconfig shows the number of bytes received/transmitted on each interface besides other info. I couldn't find a way to get this from ip. Is there no such function in this tool? What other built-in tools could I use for getting those stats? | Another option is to use the /proc filesystem. The /proc/net/dev file contains statistics about the configured network interfaces. Each line is dedicated to one network interface and it contains statistics for receive and transmit. The statistics include metrics such as the total number of received/transmitted bytes, packets, drops, errors and so on. cat /proc/net/dev
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo: 29846937 129576 0 0 0 0 0 0 29846937 129576 0 0 0 0 0 0
wlan0: 9467393340 8027251 0 0 0 0 0 0 2559312961 5896509 0 0 0 0 0 0 Or you can try the netstat command which can display all network interfaces and related statistics: netstat -i
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
lo 65536 0 130435 0 0 0 130435 0 0 0 LRU
wlan0 1492 0 8028018 0 0 0 5897361 0 0 0 BMRU | {
"source": [
"https://serverfault.com/questions/533513",
"https://serverfault.com",
"https://serverfault.com/users/186981/"
]
} |
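Since the answer points at /proc/net/dev, here is a small Python sketch that reads the per-interface byte counters directly from that file. The column positions (received bytes first, transmitted bytes ninth) are taken from the header shown in the answer above.

```python
"""Read per-interface RX/TX byte counters straight from /proc/net/dev (sketch).

Layout as shown above: after the two header lines, each row is
"iface: <16 counters>", where counter 1 is received bytes and counter 9 is
transmitted bytes.
"""

def interface_bytes(path="/proc/net/dev"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            iface, counters = line.split(":", 1)
            values = counters.split()
            stats[iface.strip()] = {
                "rx_bytes": int(values[0]),
                "tx_bytes": int(values[8]),
            }
    return stats

if __name__ == "__main__":
    for iface, s in interface_bytes().items():
        print(f"{iface:>8}: {s['rx_bytes']} bytes in, {s['tx_bytes']} bytes out")
```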
533,611 | If there is a limit on the number of ports one machine can have and a socket can only bind to an unused port number, how do servers experiencing extremely high amounts (more than the max port number) of requests handle this? Is it just done by making the system distributed, i.e., many servers on many machines? | You misunderstand port numbers: a server listens only on one port and can have large numbers of open sockets from clients connecting to that one port. On the TCP level the tuple (source ip, source port, destination ip, destination port) must be unique for each simultaneous connection. That means a single client cannot open more than 65535 simultaneous connections to a single server. But a server can (theoretically) serve 65535 simultaneous connections per client . So in practice the server is only limited by how much CPU power, memory etc. it has to serve requests, not by the number of TCP connections to the server. | {
"source": [
"https://serverfault.com/questions/533611",
"https://serverfault.com",
"https://serverfault.com/users/187034/"
]
} |
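A small Python sketch illustrating that answer's point: one listening port, many simultaneous sockets, each distinguished by the (source IP, source port, destination IP, destination port) tuple rather than by a separate server port.

```python
"""One listening port serves many sockets (illustrative sketch).

Opens a listener on an ephemeral port, connects a handful of local clients,
and prints the 4-tuple of each accepted connection: the server side is the
same (ip, port) every time; only the client's ephemeral port differs.
"""
import socket

server = socket.socket()
server.bind(("127.0.0.1", 0))          # 0 = let the OS pick an ephemeral port
server.listen(5)
print("listening on", server.getsockname())

clients, accepted = [], []
for _ in range(5):
    c = socket.create_connection(server.getsockname())
    clients.append(c)
    conn, peer = server.accept()
    accepted.append(conn)
    # Same local (ip, port) for every connection; unique remote port per client.
    print("tuple:", conn.getsockname() + peer)

for s in clients + accepted + [server]:
    s.close()
```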
533,877 | I have read a lot of information about planning RAM requirements for ZFS deduplication. I've just upgraded my file server's RAM to support some very limited dedupe on ZFS zvols which I cannot use snapshots and clones on (as they're zvols formatted as a different filesystem) yet will contain much duplicated data. I want to make sure that the new RAM I added will support the limited deduplication I intend to be doing. In planning, my numbers look good but I want to be sure. How can I tell the current size of the ZFS dedupe tables (DDTs) on my live system? I read this mailing list thread but I'm unclear on how they're getting to those numbers. (I can post the output of zdb tank if necessary but I'm looking for a generic answer which can help others) | You can use the zpool status -D poolname command. The output would look similar to: root@san1:/volumes# zpool status -D vol1
pool: vol1
state: ONLINE
scan: scrub repaired 0 in 4h38m with 0 errors on Sun Mar 24 13:16:12 2013
DDT entries 2459286, size 481 on disk, 392 in core
bucket allocated referenced
______ ______________________________ ______________________________
refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
1 2.23M 35.6G 19.0G 19.0G 2.23M 35.6G 19.0G 19.0G
2 112K 1.75G 1005M 1005M 240K 3.75G 2.09G 2.09G
4 8.03K 129M 73.8M 73.8M 35.4K 566M 324M 324M
8 434 6.78M 3.16M 3.16M 4.61K 73.8M 35.4M 35.4M
16 119 1.86M 811K 811K 2.33K 37.3M 15.3M 15.3M
32 24 384K 34.5K 34.5K 1.13K 18.1M 1.51M 1.51M
64 19 304K 19K 19K 1.63K 26.1M 1.63M 1.63M
128 7 112K 7K 7K 1.26K 20.1M 1.26M 1.26M
256 3 48K 3K 3K 1012 15.8M 1012K 1012K
512 3 48K 3K 3K 2.01K 32.1M 2.01M 2.01M
1K 2 32K 2K 2K 2.61K 41.7M 2.61M 2.61M
2K 1 16K 1K 1K 2.31K 36.9M 2.31M 2.31M
Total 2.35M 37.5G 20.1G 20.1G 2.51M 40.2G 21.5G 21.5G The important fields are the Total allocated blocks and the Total referenced blocks. In the example above, I have a low deduplication ratio. 40.2G is stored on disk in 37.5G of space. Or 2.51 million blocks in 2.35 million blocks' worth of space. To get the actual size of the table, see: DDT entries 2459286, size 481 on disk, 392 in core 2459286*392=964040112 bytes Divide by 1024 and 1024 to get: 919.3MB in RAM. | {
"source": [
"https://serverfault.com/questions/533877",
"https://serverfault.com",
"https://serverfault.com/users/11086/"
]
} |
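The arithmetic at the end of that answer generalizes to a one-liner; here is a tiny Python sketch that turns the "DDT entries N, size X on disk, Y in core" line into an in-RAM estimate. The numbers below are the ones quoted in the answer.

```python
"""Turn zpool's "DDT entries N, ... Y in core" line into a RAM estimate (sketch).

Reproduces the arithmetic in the answer above: in-core DDT size =
number of entries * per-entry in-core size.
"""

def ddt_ram_mib(entries: int, core_bytes_per_entry: int) -> float:
    return entries * core_bytes_per_entry / 1024 / 1024

if __name__ == "__main__":
    # Figures from the answer: "DDT entries 2459286, size 481 on disk, 392 in core"
    print(f"{ddt_ram_mib(2459286, 392):.1f} MiB of RAM")  # roughly the 919.3 MB quoted above
```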
533,986 | I followed the instructions to share my AMI with a specific account here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html I made sure the account number is correct, and when I log into the target account, I'm unable to find the shared AMI anywhere in my EC2 console ( Under Images -> AMIs ) no matter what I try to filter by. How can I find the shared AMI? | I'm not sure about shared AMIs, but many things in EC2 are segmented by region and you have to select the correct region to see them. You can select the region in the upper right-hand corner of the screen. | {
"source": [
"https://serverfault.com/questions/533986",
"https://serverfault.com",
"https://serverfault.com/users/20077/"
]
} |
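Because AMIs are region-scoped, it can also help to sweep regions programmatically rather than clicking through the console. Below is a hedged boto3 sketch (boto3 and working credentials in the target account are assumed, and the region list is a placeholder) that lists images shared with you in each region you name; ExecutableUsers=["self"] asks for images you have been granted explicit launch permission on, which is how a privately shared AMI shows up.

```python
"""Look for AMIs shared with this account across several regions (sketch).

Assumes boto3 is installed and the credentials of the *target* account are
configured; adjust REGIONS to the regions you actually use.
"""
import boto3

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)
    images = ec2.describe_images(ExecutableUsers=["self"])["Images"]
    for image in images:
        print(region, image["ImageId"], image.get("Name", "<unnamed>"))
    if not images:
        print(region, "- nothing shared with this account here")
```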
534,229 | I have set up a tunnel via autossh. This works: autossh -M 33201 -N -i myIdFile -R 33101:localhost:22 [email protected] I would like to run autossh in background. Seems easy using the -f option. This does not work, however: autossh -f -M 33201 -N -i myIdFile -R 33101:localhost:22 [email protected] Autossh runs in the background fine, but the ssh connection seems to fail every time. In /var/syslog I see multiple occurences of: autossh[3420]: ssh exited with error status 255; restarting ssh What am I doing wrong? A wild guess is it has something to do with the authentication via key file. How can I debug this (adding -v to the ssh options does not seem to log anywhere). Edit: I got some ssh logs using the -y option /usr/bin/ssh[3484]: debug1: Next authentication method: publickey
/usr/bin/ssh[3484]: debug1: Trying private key: /home/myuser/.ssh/id_rsa
/usr/bin/ssh[3484]: debug1: Trying private key: /home/myuser/.ssh/id_dsa
/usr/bin/ssh[3484]: debug1: Trying private key: /home/myuser/.ssh/id_ecdsa
/usr/bin/ssh[3484]: debug1: No more authentication methods to try.
/usr/bin/ssh[3484]: fatal: Permission denied (publickey).
autossh[3469]: ssh exited with error status 255; restarting ssh So it seems autossh does not accept my identity file ( -i myIdFile ) when using the -f option. Why is that? (autossh 1.4c on Raspbian) | It seems like when autossh drops to the background (-f option) it is changing the working directory, meaning relative paths do not work any longer. Or more specifically: by entering the absolute path of your id file you will probably succeed. I re-created the scenario by creating a key with no password at a non-default location: ~/$ mkdir test
~/$ cd test
~/test$ ssh-keygen -f test_id_rsa I simply hit enter twice to generate a key that is not protected by a password. I copied the new key to my server (which allows password authentication currently): ~/test$ ssh-copy-id -i test_id_rsa user@server First I confirmed the key was working with regular ssh, then using autossh like you: ~/test$ ssh -i test_id_rsa user@server
~/test$ autossh -M 13000 -N -i test_id_rsa user@server
^C They both worked fine, so I recreated the problem you had: ~/test$ autossh -f -M 13000 -N -i test_id_rsa user@server This did not work and the following was written to /var/log/syslog : autossh[2406]: ssh exited prematurely with status 255; autossh exiting By changing the path of the keyfile to be absolute, it worked though: ~/test$ autossh -f -M 13000 -N -i /home/user/test/test_id_rsa user@server No errors in /var/log/syslog . | {
"source": [
"https://serverfault.com/questions/534229",
"https://serverfault.com",
"https://serverfault.com/users/156029/"
]
} |
534,236 | I am migrating a bunch of websites to new servers and one of them has config options I have not encountered before, specifically: <IfModule mod_jrun.c>
JRunConfig Serverstore /usr/local/jrun4/lib/wsconfig/1/somewebsite.com-store
JRunConfig Bootstrap 127.0.0.1:9009
</IfModule> From some google searches it appears to be a ColdFusion configuration option, HOWEVER there are no cfm files in the website, so what do I need to install to move forward with the migration? | It seems like when autossh drops to the background (-f option) it is changing the working directory, meaning relative paths do not work any longer. Or more specifically: by entering the absolute path of your id file you will probably succeed. I re-created the scenario by creating a key with no password at a non-default location: ~/$ mkdir test
~/$ cd test
~/test$ ssh-keygen -f test_id_rsa I simply hit enter twice to generate a key that is not protected by a password. I copied the new key to my server (which allows password authentication currently): ~/test$ ssh-copy-id -i test_id_rsa user@server First I confirmed the key was working with regular ssh, then using autossh like you: ~/test$ ssh -i test_id_rsa user@server
~/test$ autossh -M 13000 -N -i test_id_rsa user@server
^C They both worked fine, so I recreated the problem you had: ~/test$ autossh -f -M 13000 -N -i test_id_rsa user@server This did not work and the following was written to /var/log/syslog : autossh[2406]: ssh exited prematurely with status 255; autossh exiting By changing the path of the keyfile to be absolute, it worked though: ~/test$ autossh -f -M 13000 -N -i /home/user/test/test_id_rsa user@server No errors in /var/log/syslog . | {
"source": [
"https://serverfault.com/questions/534236",
"https://serverfault.com",
"https://serverfault.com/users/187405/"
]
} |
534,449 | Out of curiosity, when your shell character set breaks from doing something like cat /dev/urandom is there a way to fix that in place? | Try one of these: stty sane or reset If both don't work, or your terminal is so messed up that you can't even enter commands, then it is best to close the terminal and start a new one. Note that stty sane is defined by POSIX whereas reset is not. That means on some systems there might not be a reset or it might do something completely different, like resetting the entire system. I have not yet encountered a system without reset . For more background information read "The Linux keyboard and console HOWTO" chapter "Resetting your terminal" . | {
"source": [
"https://serverfault.com/questions/534449",
"https://serverfault.com",
"https://serverfault.com/users/187528/"
]
} |
534,497 | I was trying to setup nginx to run with one of my rails apps, when having a look at output for ps -e | grep nginx , I realised nginx worker processes run with user nobody. Is there a reason why they are not running as www-data ? | Is there a reason why they are not running as www-data ? Yes. You most likely haven't specified the user in your nginx config . User Directive: http://nginx.org/en/docs/ngx_core_module.html#user syntax: user user [group];
default:
user nobody nobody;
context: main How to run nginx as a particular user? You can specify the user/group that nginx runs as, in the nginx config. This is an example of what an nginx config might look like (notice the user directive): pid /path/to/nginx.pid;
user www-data www-data;
worker_processes 1;
events {
worker_connections 1024; # usually 1024 is a good default
}
http {
# more code goes here
} Simply update your config and then reload or restart nginx and you should be good to go. Of course you should choose the user that works best for your system, in Debian/Ubuntu there's a www-data by default, so that's a sensible choice. | {
"source": [
"https://serverfault.com/questions/534497",
"https://serverfault.com",
"https://serverfault.com/users/169708/"
]
} |
534,507 | When mounting /vagrant over NFS, a changed file on the host is not refresh on the guest if the size doesn't changes. Quick update/typo are not immediately reflected unless I make enough modification for the size to be different. I've tried to set lookupcache=none but apart from making everything slower, nothing change. I'm using OSX ML as host and Arch Linux as guest. NFS is v3 (because of OSX). | This was bugging me for months, and I finally found a fix, if you're using Sublime Text (I'm on ST3). Check to see if it's using atomic saves — they were causing this issue for me. To your Preferences.sublime-settings file, ( Sublime Text > Preferences > Settings- User ) add this: {
"atomic_save": false
} This fixed the cached file-size NFS issue for us. Still unsure whether the root issue is in the OS X NFS daemon or the Ubuntu client (my money's on OS X). | {
"source": [
"https://serverfault.com/questions/534507",
"https://serverfault.com",
"https://serverfault.com/users/59871/"
]
} |
535,028 | I'm using ubuntu 12.04. I'm using ssh for connecting to many servers daily, so I put their parameters in .ssh/config file; like this : Host server1
User tux
Port 2202
HostName xxx.x.xx.x I know we should use key pairs to ensure security; however, sometimes we can't add a public key to the remote machine (e.g. a public SSH server which accepts a password and executes a specific command, or a user without a home directory). So, is there a way to put passwords in this file, for each connection? Then when the server asks for a password, the terminal would supply it, so I need not type the password each time. | No, there is no method to specify or provide the password on the command line in a non-interactive manner for ssh authentication using an OpenSSH built-in mechanism. At least not one that I know of. You could hardcode your password into an expect script, but it is not a good solution either. You definitely would want to use key pairs for passwordless authentication as Michael stated; in the end, a private key is pretty much a big password in a file. | {
"source": [
"https://serverfault.com/questions/535028",
"https://serverfault.com",
"https://serverfault.com/users/76183/"
]
} |
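If you do go down the expect route that answer mentions, the same caveat applies: the password ends up in plain text, so key pairs remain the better option. For completeness, a minimal Python sketch using the third-party pexpect package (assumed installed); the host, port, user and prompt pattern below are placeholders taken from the question's config.

```python
"""Scripted password entry via pexpect (sketch; same caveat as the answer:
the password sits in plain text, so prefer key pairs whenever you can).

Assumes the third-party `pexpect` package is installed; host, user and the
prompt pattern below are placeholders.
"""
import pexpect

PASSWORD = "change-me"          # better: read from a prompt or a secrets store

child = pexpect.spawn("ssh -p 2202 tux@xxx.x.xx.x", timeout=30)
child.expect("[Pp]assword:")    # wait for the password prompt
child.sendline(PASSWORD)
child.interact()                # hand the session back to the user
```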
535,412 | I've been using SSH tunnel for a while on Windows (using Putty). On Windows with putty, it is always fine, but on mac or cygwin, it sometimes prompts the warning message: open failed: administratively prohibited: open failed | I believe you have disabled TCP forwarding on the server. In your server /etc/ssh/sshd_config make sure that the following line is either not present or commented, otherwise comment it. AllowTcpForwarding no | {
"source": [
"https://serverfault.com/questions/535412",
"https://serverfault.com",
"https://serverfault.com/users/187653/"
]
} |
535,436 | I have a configuration like this:
- nginx port 80
- varnish (3.0.4) port 6081
- apache port 8080 Nginx takes the request and passes it to Varnish which then check the cache and then either returns the response from cache or passes the request to Apache.
In Apache I've disabled mod_deflate so the output is not gzipped.
Inside Varnish I've enabled ESI for all requests like this: sub vcl_fetch {
set beresp.do_esi = true;
} And my test file (test.php) looks like this: Current time is: <esi:include src="/date.php" /> The date.php: <?php
echo date('H:i:s'); But Varnish is not processing the ESI include. In varnishlog I get this error: 11 ESI_xmlerror c No ESI processing, first char not '<' Response headers from test.php: Accept-Ranges:bytes
Age:3
Connection:keep-alive
Content-Length:51
Content-Type:text/html
Date:Sun, 01 Sep 2013 11:51:57 GMT
Server:nginx
Surrogate-Control:"ESI/1.0"
Via:1.1 varnish
X-Powered-By:PHP/5.4.15-1~precise+1
X-Varnish:1236304062 1236304061 And the html output: Current time is: <esi:include src="/name.php" /> So you can see ESI is not processed. What am I doing wrong? | I believe you have disabled TCP forwarding on the server. In your server /etc/ssh/sshd_config make sure that the following line is either not present or commented, otherwise comment it. AllowTcpForwarding no | {
"source": [
"https://serverfault.com/questions/535436",
"https://serverfault.com",
"https://serverfault.com/users/68593/"
]
} |
535,631 | I see no option to export a backup of the settings for a domain. Maybe I could save the results of public DNS queries with dig, but I wonder whether someone knows a better way. | Yes, there is a friendlier way. I suggest using the cli53 tool, https://github.com/barnybug/cli53 After you set it up, just try cli53 export --full sciworth.com and you get the exported zone in BIND format. | {
"source": [
"https://serverfault.com/questions/535631",
"https://serverfault.com",
"https://serverfault.com/users/187214/"
]
} |
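If you would rather not install another tool, a similar export can be approximated with boto3. The sketch below (boto3 installed and credentialed is assumed) dumps every record set of each hosted zone as JSON rather than BIND zone-file format, and it ignores pagination for brevity; this is a swapped-in alternative to the cli53 approach recommended above, not a replacement for it.

```python
"""Dump Route 53 record sets with boto3 instead of cli53 (sketch).

Assumes boto3 is installed and AWS credentials are configured. Output is
JSON, not BIND format, and zones with more than one page of records would
need a paginator.
"""
import json
import boto3

r53 = boto3.client("route53")

for zone in r53.list_hosted_zones()["HostedZones"]:
    records = r53.list_resource_record_sets(HostedZoneId=zone["Id"])
    print(f"; zone {zone['Name']}")
    print(json.dumps(records["ResourceRecordSets"], indent=2))
```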
536,360 | This might be a pedestrian question but what is the difference between a "Floating IP" address and a "Virtual IP" address? Are they synonyms? | To me, the terms mean different things. A floating IP address is used to support failover in a high-availability cluster. The cluster is configured such that only the active member of the cluster "owns" or responds to that IP address at any given time. Should the active member fail, then "ownership" of the floating IP address would be transferred to a standby member to promote it as the new active member. Specifically, the member to be promoted issues a gratuitous ARP, announcing the new MAC address–to–IP address association. A virtual IP address refers to the IP address of a virtual server, and is a more nebulous term. With F5 load balancers, for example , the virtual servers are the services (websites, etc.) you want to host. More concretely, suppose you have a pair of load balancers in an active-standby cluster. For each interface or VLAN, the load balancers would each have a self IP address, as well as a floating IP address that is shared between both members. When the load balancer relays incoming requests to the back-end nodes, it uses the floating IP address as the source address, so if the load balancer dies, its partner will be able to take over and receive the response. Each website or other service being hosted on the load balancers would have its own IP address, which you could call a "virtual" IP address. (You could say that these virtual IPs "float" as well, since control of them would transfer to the standby node in the event of a failover.) | {
"source": [
"https://serverfault.com/questions/536360",
"https://serverfault.com",
"https://serverfault.com/users/9830/"
]
} |
536,576 | What I want to do is: When someone visits http://localhost/route/abc the server responds exactly the same as http://localhost:9000/abc Now I configure my Nginx server like this: location /route {
proxy_pass http://127.0.0.1:9000;
} The HTTP request is dispatched to port 9000 correctly, but the path it receives is http://localhost:9000/route/abc, not http://localhost:9000/abc. Any suggestions? | I hate the subtlety here, but try adding a / at the end of 9000 like below. It will no longer append "route" to the forwarded request. location /route {
proxy_pass http://127.0.0.1:9000/;
} | {
"source": [
"https://serverfault.com/questions/536576",
"https://serverfault.com",
"https://serverfault.com/users/160485/"
]
} |
537,060 | How do I see stdout for ansible-playbook commands? -v only shows ansible output, not the individual commands. It would be great if I could figure out how to do this immediately, so if something fails or hangs I can see why. e.g. - name: print to stdout
action: command echo "hello" would print TASK: [print variable] ********************************************************
hello | I think you can register the result to a variable, then print with debug. - name: print to stdout
command: echo "hello"
register: hello
- debug: msg="{{ hello.stdout }}"
- debug: msg="{{ hello.stderr }}" | {
"source": [
"https://serverfault.com/questions/537060",
"https://serverfault.com",
"https://serverfault.com/users/188942/"
]
} |
537,134 | So when I'm locally testing things such as Ajax in apps I'm writing, I often like to add a delay in server-side scripts using a sleep statement. It helps simulate slow connections etc. Is there a way to specify a similar delay behaviour directly in Nginx config that would work for the flat HTML files it's serving? I'm aware you can do a similar delay simulation at the network level (see here) but it seems pretty messy and has never worked very well for me. | You should try the echo module. https://www.nginx.com/resources/wiki/modules/echo https://github.com/openresty/echo-nginx-module#readme | {
"source": [
"https://serverfault.com/questions/537134",
"https://serverfault.com",
"https://serverfault.com/users/137724/"
]
} |
537,269 | I'm trying to set up nginx as a reverse proxy, with a large number of backend servers. I'd like to start up the backends on-demand (on the first request that comes in), so I have a control process (controlled by HTTP requests) which starts up the backend depending on the request it receives. My problem is configuring nginx to do it. Here's what I have so far: server {
listen 80;
server_name $DOMAINS;
location / {
# redirect to named location
#error_page 418 = @backend;
#return 418; # doesn't work - error_page doesn't work after redirect
try_files /nonexisting-file @backend;
}
location @backend {
proxy_pass http://$BACKEND-IP;
error_page 502 @handle_502; # Backend server down? Try to start it
}
location @handle_502 { # What to do when the backend server is not up
# Ping our control server to start the backend
proxy_pass http://127.0.0.1:82;
# Look at the status codes returned from control server
proxy_intercept_errors on;
# Fallback to error page if control server is down
error_page 502 /fatal_error.html;
# Fallback to error page if control server ran into an error
error_page 503 /fatal_error.html;
# Control server started backend successfully, retry the backend
# Let's use HTTP 451 to communicate a successful backend startup
error_page 451 @backend;
}
location = /fatal_error.html {
# Error page shown when control server is down too
root /home/nginx/www;
internal;
}
} This doesn't work - nginx seems to ignore any status codes returned from the control server. None of the error_page directives in the @handle_502 location work, and the 451 code gets sent as-is to the client. I gave up trying to use internal nginx redirection for this, and tried modifying the control server to emit a 307 redirect to the same location (so that the client would retry the same request, but now with the backend server started up). However, now nginx is stupidly overwriting the status code with the one it got from the backend request attempt (502), despite that the control server is sending a "Location" header. I finally got it "working" by changing the error_page line to error_page 502 =307 @handle_502; , thus forcing all control server replies to be sent back to the client with a 307 code. This is very hacky and undesirable, because 1) there is no control over what nginx should do next depending on the control server's response (ideally we only want to retry the backend only if the control server reports success), and 2) not all HTTP clients support HTTP redirects (e.g. curl users and libcurl-using applications need to enable following redirects explicitly). What's the proper way to get nginx to try to proxy to upstream server A, then B, then A again (ideally, only when B returns a specific status code)? | Key points: Don't bother with upstream blocks for failover, if pinging one server will bring another one up - there's no way to tell nginx (at least, not the FOSS version) that the first server is up again. nginx will try the servers in order on the first request, but not follow-up requests, despite any backup , weight or fail_timeout settings. You must enable recursive_error_pages when implementing failover using error_page and named locations. Enable proxy_intercept_errors to handle error codes sent from the upstream server. The = syntax (e.g. error_page 502 = @handle_502; ) is required to correctly handle error codes in the named location. If = is not used, nginx will use the error code from the previous block. Here is a summary: server {
listen ...;
server_name $DOMAINS;
recursive_error_pages on;
# First, try "Upstream A"
location / {
error_page 418 = @backend;
return 418;
}
# Define "Upstream A"
location @backend {
proxy_pass http://$IP:81;
proxy_set_header X-Real-IP $remote_addr;
# Add your proxy_* options here
}
# On error, go to "Upstream B"
error_page 502 @handle_502;
# Fallback static error page, in case "Upstream B" fails
root /home/nginx/www;
location = /_static_error.html {
internal;
}
# Define "Upstream B"
location @handle_502 { # What to do when the backend server is not up
proxy_pass ...;
# Add your proxy_* options here
proxy_intercept_errors on; # Look at the error codes returned from "Upstream B"
error_page 502 /_static_error.html; # Fallback to error page if "Upstream B" is down
error_page 451 = @backend; # Try "Upstream A" again
}
} Original answer / research log follow: Here's a better workaround I found, which is an improvement since it doesn't require a client redirect: upstream aba {
server $BACKEND-IP;
server 127.0.0.1:82 backup;
server $BACKEND-IP backup;
}
...
location / {
proxy_pass http://aba;
proxy_next_upstream error http_502;
} Then, just get the control server to return 502 on "success" and hope that code is never returned by backends. Update: nginx keeps marking the first entry in the upstream block as down, so it does not try the servers in order on successive requests. I've tried adding weight=1000000000 fail_timeout=1 to the first entry with no effect. So far I have not found any solution which does not involve a client redirect. Edit: One more thing I wish I knew - to get the error status from the error_page handler, use this syntax: error_page 502 = @handle_502; - that equals sign will cause nginx to get the error status from the handler. Edit: And I got it working! In addition to the error_page fix above, all that was needed was enabling recursive_error_pages ! | {
"source": [
"https://serverfault.com/questions/537269",
"https://serverfault.com",
"https://serverfault.com/users/25229/"
]
} |
537,343 | Here is the error I'm getting: Reloading nginx configuration: nginx: [emerg]
SSL_CTX_use_certificate_chain_file("/path/to/cert.pem") failed (SSL:
error:02001002:system library:fopen:No such file or directory
error:20074002:BIO routines:FILE_CTRL:system lib error:140DC002:SSL
routines:SSL_CTX_use_certificate_chain_file:system lib)
nginx: configuration file /etc/nginx/nginx.conf test failed I'm 100% sure that the file is at that location but Nginx seems to think that it's not there.
I merged the domain.crt and intermediate.crt manually in that order.
I've been scratching my head over this one all day. I hope someone has seen this error and has a solution. (And as a side note, it's not a pasting error that the file location is shown only once and not again after 'no such file or directory'.) | Are you sure that the Nginx user has access to the directory? Also check the permissions of the .pem file; if Nginx cannot access it, it can show as 'no such file or directory'. If the permissions are right, you might check the actual path again. In what you pasted (I know you removed the directory) there is no leading /, which could be the problem. EDIT Try moving your SSL setup into the following structure (as well as changing the nginx.conf to reflect it): sudo mkdir /etc/nginx/ssl
sudo chown -R root:root /etc/nginx/ssl
sudo chmod -R 600 /etc/nginx/ssl Nginx could be failing on your .pem because the permissions are too open (need source to verify that Nginx does this) but the above setup should work fine. | {
"source": [
"https://serverfault.com/questions/537343",
"https://serverfault.com",
"https://serverfault.com/users/189117/"
]
} |
537,384 | This is a Canonical Question about choosing a network switch for a datacentre When shopping for a networking switch that's going to be going into the top of a datacentre rack, what specific things should I be looking for? i.e. What makes a $3,000 Cisco switch that requires annual maintenance a smarter buy than a $300 Netgear Smart switch with a lifetime warranty? | Context is everything... There's no blanket answer. If you're trying to ask: "what differentiates an expensive switch from a low-end switch?" or "is there a reliability difference between a high-end switch and an inexpensive switch?" The answers are "feature-set" and "maybe" , respectively... I've used a $40,000 switch for the specific purpose of connecting two 10GbE WAN connections in a data center cabinet. I've also seen $100 unmanaged Netgear FS524 switches run the "core" of a $400 million/year company for 9 years, with no reliability issues... "You're only using me for my 10GbE ports, routing capabilities and good looks..." - Cisco 4900M. If you're looking for a rule or general advice that can be applied across the board, there are more questions that deserve attention: What type(s) of systems are co-located in the data center facility? - This is basic. Running a pair of web servers at a cheap colo is different than managing a full application stack or virtualization cluster in a high-end facility. What is the purpose of the switch? - As above, if there are throughput, latency, buffer or other performance considerations, that's going to drive the type of equipment you use. And there are definitely switch attributes that impact the success of deployments for iSCSI, VoIP, PoE, low-latency and streaming technologies. What interconnects are required? - These days, this may determine the class and tier of switch more than anything else. People want 10GbE connectivity for storage and other network workloads. Below a certain price threshold, you simply won't find that type of connectivity. Fiber? SFP+? Compatible with Copper DAC? Dedicated stacking links? HDMI?!? How complex is the network? - Will these switches link back to a core? Are they the core? What's their place in the overall design? In my work environment, we use lower-end Layer-2 access switches that offload the heavy lifting to a central core switching/routing infrastructure. Power - Depending on the colo/facility, power constraints, etc., redundant power supplies are a nice option. But they're not a requirement. I rarely see switch power supplies fail. But it's possible to keep a cold-spare and copies of configurations handy, too. Redundant power supplies often push devices into a much higher price bracket. Cooling - Fan design, hot-pluggability and the option to control switch airflow are nice features. What resiliency and redundancy options do you need? - Chassis switches, modular switches, stacked switches and standalone devices can all have different levels of resiliency. But I think their feature sets and other network design considerations tend to be more important. Warranty and support - I don't buy Cisco SmartNet often enough... But the product is so ubiquitous that finding technical resources/parts/support hasn't been difficult. I think the HP ProCurve Lifetime Warranty is often overlooked. For something like Netgear, I don't know that they would provide good technical support. As stated earlier, if the cost is low enough to afford cold-spare units, you can self-support on the hardware side. | {
"source": [
"https://serverfault.com/questions/537384",
"https://serverfault.com",
"https://serverfault.com/users/7709/"
]
} |
537,421 | I sometimes hear my colleagues talking about IDRAC, IPMI and ILO , when restarting a server. It seems that those terms are often misused. For instance, is there a difference between saying that you connect to IPMI and IDRAC ? If I understand correctly, IDRAC and ILO are IPMI-based tools, implemented by Dell and HP respectively, that bring additional functionality? | These are all forms of out-of-band management . IPMI is a standard. DRAC is a proprietary offering from Dell. ILO is the HP ProLiant variant. ILOM for Sun/Oracle. In some cases, you may hear the terms used interchangeably. The proprietary lights-out management solutions provide more integration with the hardware and often have nicer features (monitoring, logging, access) than a generic IPMI implementation. | {
"source": [
"https://serverfault.com/questions/537421",
"https://serverfault.com",
"https://serverfault.com/users/121770/"
]
} |
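A quick illustration of the vendor-neutral IPMI layer mentioned in the answer above, using the standard ipmitool client; the BMC host name and credentials below are placeholders, not values from the question:
ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power status
ipmitool -I lanplus -H bmc.example.com -U admin -P secret sel list
The same commands work against an iDRAC, iLO or ILOM as long as IPMI-over-LAN is enabled on the controller.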
537,568 | I'm new to RHEL. Trying to install software this morning and running into road blocks. Is it required to have a subscription to download packages via yum on RHEL? I'm coming across different sources on the net, some make it sound like yes, you need a subscription , others making it sound like no, a subscription is only required for support . In either case I'm stuck unable to install software ATM, because the machines I'm on don't have the subscription registered. Is there a way to install RHEL software without registering a subscription? If so, how? | Yes, you have to have an active RHEL subscription to download packages from RHEL's repositories. If your machine has never been subscribed, or the subscription is expired, you will not be able to use any of the repositories provided by RHEL. Red Hat states , in relevant part: If you choose to let all your subscriptions expire and have no other active subscriptions in your organization, you retain the right to use the software, but your entire environment will no longer receive any of the subscription benefits, including: The latest certified software versions. Security errata or bug fixes. And further : Entering a Red Hat Enterprise Linux 5 subscription number lets the installer: Access the full set of supported packages included with the subscription at install time. Automatically register the system to all Red Hat Network (RHN) channels included with the subscription at install time. Many other examples can be found... You can still use third party repositories; however, they often depend on packages in the base repositories provided by RHEL, and thus many packages will fail to install if those dependencies can't be satisfied. The only way to install base packages without a subscription is to get them off the installation media. If you're unable or unwilling to purchase a Red Hat subscription, consider migrating to CentOS to avoid the problem. | {
"source": [
"https://serverfault.com/questions/537568",
"https://serverfault.com",
"https://serverfault.com/users/95641/"
]
} |
537,829 | I'm working on a website we maintain, and I use Capistrano to deploy. I've kind of inherited the stuff, so I'm not the one who set everything up. When I deploy to the server, it fails and nothing is updated. Since file permissions usually are the culprit of it failing, in my experience, I checked them for the folder I'm deploying to, and I saw something I haven't seen before: drwxrwsr-x+ . I don't know what that ending plus sign is or what it does; I assumed it was CentOS' way of denoting the sticky bit, but when I ran sudo chmod -t shared , it was still there, so I guess it must not be the sticky bit. Can someone who knows more about Linux tell me what the ending "+" is in that list of permissions? | From info ls , under the What information is listed? section, regarding the output produced by -l : A file with any other combination of alternate access methods is
marked with a '+' character. Generally, it means it has an ACL set. | {
"source": [
"https://serverfault.com/questions/537829",
"https://serverfault.com",
"https://serverfault.com/users/165765/"
]
} |
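As a follow-up sketch to the answer above, the ACL behind the trailing '+' can be inspected and changed with the acl utilities; the directory and user names here are assumptions for illustration only.
getfacl shared                    # show the ACL entries that the '+' refers to
setfacl -m u:deploy:rwx shared    # grant one extra user rwx through an ACL entry
setfacl -b shared                 # strip all extended ACL entries again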
538,037 | I'm not sure why it isn't starting or why it's preventing me from connecting; I get this error: sshd.service - OpenSSH Daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled)
Active: failed (Result: start-limit) since Wed 2013-09-11 08:45:13 BST; 1min 21s ago
Process: 701 ExecStart=/usr/bin/sshd -D (code=exited, status=1/FAILURE)
Sep 11 08:45:13 alarmpi systemd[1]: sshd.service: main process exited, code=exited, status=1/FAILURE
Sep 11 08:45:13 alarmpi systemd[1]: Unit sshd.service entered failed state.
Sep 11 08:45:13 alarmpi systemd[1]: sshd.service holdoff time over, scheduling restart.
Sep 11 08:45:13 alarmpi systemd[1]: Stopping OpenSSH Daemon...
Sep 11 08:45:13 alarmpi systemd[1]: Starting OpenSSH Daemon...
Sep 11 08:45:13 alarmpi systemd[1]: sshd.service start request repeated too quickly, refusing to start.
Sep 11 08:45:13 alarmpi systemd[1]: Failed to start OpenSSH Daemon.
Sep 11 08:45:13 alarmpi systemd[1]: Unit sshd.service entered failed state. On the advice of #amrith I ran sshd -t , which indicated that the host key had not been generated. I generated it using ssh-keygen -A on the advice given in this forum, but running systemctl status showed that the daemon is still not running. I've attached the error below; sadly I don't know how to fix it. Re-running sshd -t gives no messages now. sshd.service - OpenSSH Daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled)
Active: inactive (dead)
Sep 11 12:04:42 alarmpi systemd[1]: Started OpenSSH Daemon.
Sep 11 12:04:42 alarmpi sshd[289]: fatal: Cannot bind any address.
Sep 11 12:04:42 alarmpi systemd[1]: sshd.service: main process exited, code=exited, status=255/n/a
Sep 11 12:04:42 alarmpi systemd[1]: Unit sshd.service entered failed state.
Sep 11 12:04:42 alarmpi systemd[1]: sshd.service holdoff time over, scheduling restart.
Sep 11 12:04:42 alarmpi systemd[1]: Stopping OpenSSH Daemon...
Sep 11 12:04:42 alarmpi systemd[1]: Starting OpenSSH Daemon...
Sep 11 12:04:42 alarmpi systemd[1]: sshd.service start request repeated too quickly, refusing to start.
Sep 11 12:04:42 alarmpi systemd[1]: Failed to start OpenSSH Daemon.
Sep 11 12:04:42 alarmpi systemd[1]: Unit sshd.service entered failed state. | Try the sshd test mode. It may point you to a reason for failure: $ sshd -t Refer to the test mode documentation here . | {
"source": [
"https://serverfault.com/questions/538037",
"https://serverfault.com",
"https://serverfault.com/users/189468/"
]
} |
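A minimal way to act on the sshd -t advice above when the test mode stays silent: run a one-off daemon in debug mode in the foreground. The alternative port is only an assumption to avoid clashing with anything already bound to port 22.
/usr/bin/sshd -t
/usr/bin/sshd -d -p 2222
The debug output usually states directly why the daemon exits, for example a missing host key or an address it cannot bind.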
538,045 | When I execute: ssh root@myVPS I'm getting the following warning: Warning: the RSA host key for 'myVPS' differs from the key for the IP address 'xxx.xx.xxx.xx'
Offending key for IP in /home/manolo/.ssh/known_hosts:1
Matching host key in /home/manolo/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? and if I type "yes" it works fine, but I don't know why this warning is thrown.
Any suggestion of why it is thrown and how to avoid it? | Most likely, you'll have reinstalled your VPS at some point and kept the host name and/or IP address. When reinstalling, the host key of the VPS got regenerated, and since it differs from the one in your ~/.ssh/known_hosts , the warning gets displayed so you can detect the problem. This is done to prevent you from connecting to an entirely different system that replaces the legitimate host, e.g. to collect passwords. If something like that happened, just remove the offending key from your known_hosts file and everything is fine, but if you are not aware of such a reinstall, you have to investigate further to understand why the key differs. | {
"source": [
"https://serverfault.com/questions/538045",
"https://serverfault.com",
"https://serverfault.com/users/186279/"
]
} |
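Removing the offending key mentioned in the answer above does not have to be done by editing known_hosts by hand; ssh-keygen can delete the entries for a host name or address (names reused from the question):
ssh-keygen -R myVPS
ssh-keygen -R xxx.xx.xxx.xx
After that, reconnect and accept the new key only once you have verified its fingerprint against the server.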
538,383 | In the last days I have set up some Linux system with LDAP authentication and everything works fine, but there's still something I can't really understand regarding NSS and PAM, also after a lot of research. Citing: NSS allows administrators to specify a list of sources where authentication files, host names and other information will be stored and searched for and PAM is a set of libraries that provide a configurable authentication platform for applications and the underlying operating system What I don't understand is how PAM and NSS work and interact together. In this book the architecture is explained pretty well: I configure PAM to use pam_ldap for LDAP accounts and pam_unix for local accounts, then I configure nsswitch.conf to fetch information from local files and LDAP. If I have understood correctly LDAP is used twice: first by pam_ldap and then by NSS which is itself called from pam_unix . Is that right? Is LDAP really used twice? But why do I need to configure both NSS and PAM? My explanation is that PAM performs different tasks than NSS and it is used by other programs. But, then, it should be possible to use only NSS or only PAM, as I have read in this page . So I experimented a bit and I have first tried to delete LDAP from the nsswitch.conf (and the authentication stopped to work as if only pam_ldap is not enough to do the job). Then I re-enabled LDAP in NSS and I deleted it from the PAM configuration (this time everything worked fine, as if pam_ldap is useless and NSS is enough to authenticate a user). Is there anyone who can help me to clarify this? Many thanks in advance. UPDATE I've just tried something now. I removed again all the pam_ldap entries in all pam configuration fields and I have also removed shadow: ldap from nsswitch.conf . As now in all the system there are only the lines: passwd: ldap files and group: ldap files in nsswitch.conf . Well... the login with LDAP users works perfectly, those two lines (plus /etc/ldap.conf ) are enough to configure LDAP auth. From my knowledge PAM in independent from NSS, but my tests showed it's not. So I ask myself is it possible to completely disable NSS and use only PAM? | It helps to break things down like this in your head: NSS - A module based system for controlling how various OS-level databases are assembled in memory. This includes (but is not limited to) passwd , group , shadow (this is important to note), and hosts . UID lookups use the passwd database, and GID lookups use the group database. PAM - A module based system for allowing service based authentication and accounting. Unlike NSS, you are not extending existing databases; PAM modules can use whatever logic they like, though shell logins still depend on the passwd and group databases of NSS. (you always need UID/GID lookups) The important difference is that PAM does nothing on its own. If an application does not link against the PAM library and make calls to it, PAM will never get used. NSS is core to the operating system, and the databases are fairly ubiquitous to normal operation of the OS. Now that we have that out of the way, here's the curve ball: while pam_ldap is the popular way to authenticate against LDAP, it's not the only way. If shadow is pointing at the ldap service within /etc/nsswitch.conf , any authentication that runs against the shadow database will succeed if the attributes for those shadow field mappings (particularly the encrypted password field) are present in LDAP and would permit login. 
This in turn means that pam_unix.so can potentially result in authentication against LDAP, as it authenticates against the shadow database. (which is managed by NSS, and may be pointing at LDAP) If a PAM module performs calls against a daemon that in turn queries the LDAP database (say, pam_sss.so , which hooks sssd ), it's possible that LDAP will be referenced. | {
"source": [
"https://serverfault.com/questions/538383",
"https://serverfault.com",
"https://serverfault.com/users/183076/"
]
} |
538,466 | I am a web developer, but I am also interested in a few administrative tasks. Hence, the new move from pure administration to dev-ops comes handy for me. Anyway, I have some problems to put a few things into a relationship. Maybe there isn't any, so I wanted to ask for help to clarify. Basically, what I want to put into relation is four types of software (from my understanding). The exact products don't matter, you can place any similar software as an alternative: Vagrant: From my understanding is to automate creation and management of VMs: Setting them up, starting and stopping them. This can be done using a local VM or remote, e.g. on a cloud platform. Docker: A "lightweight VM", based on a few Linux kernel concepts, which can be used to run processes in isolation, e.g. in a shared web hosting environment. Chef: A tool to setup and configure an operating system, e.g. inside a VM. OpenStack: A tool that allows you to build your own private cloud, hence comparable to something such as AWS. Question #1: Are my explanations right, or am I wrong with some (or all) of these consumptions? Question #2: How could I mix all those tools? Would that make any sense? In my imagination and from my point of understanding, you could go and use OpenStack to build your own cloud, use Vagrant to manage the VMs run in the cloud, use Chef to setup these VMs and finally use Docker to run processes inside the VMs. Is this correct? And if so, can you give me an advice in how to start using all this (it's quite a lot at the same time, and I don't know yet where to start)? | Let's use their respective web pages to find out what are all these projects about. I'll change the order in which you listed, though: Chef : Chef is an automation platform that transforms infrastructure into code. This is a configuration management software . Most of them use the same paradigm: they allow you to define the state you want a machine to be, with regards to configuration files, software installed, users, groups and many other resource types. Most of them also provide functionality to push changes onto specific machines, a process usually called orchestration . Vagrant : Create and configure lightweight, reproducible, and portable development environments. It provides a reproducible way to generate fully virtualized machines using either Oracle's VirtualBox or VMWare technology as providers . Vagrant can coordinate with a configuration management software to continue the process of installation where the operating system's installer finishes. This is known as provisioning . Docker : An open source project to pack, ship and run any application as a lightweight container The functionality of this software somewhat overlaps with that of Vagrant, in which it provides the means to define operating systems installations, but greatly differs in the technology used for this purpose. Docker uses Linux containers , which are not virtual machines per se, but isolated processes running in isolated filesystems. Docker can also use a configuration management system to provision the containers. OpenStack : Open source software for building private and public clouds. While it is true that OpenStack can be deployed on a single machine , such deployment is purely for proof-of-concept, probably not very functional due to resource constraints. The primary target for OpenStack installations are bare metal multi-node environments, where the different components can be used in dedicated hardware to achieve better results. 
A key functionality of OpenStack is its support for many virtualization technologies, from fully virtualized (VirtualBox, VMWare), to paravirtualized (KVM/Qemu) and also containers (LXC) and even User Mode Linux (UML) . I've tried to present these products as components of an specific architecture. From my point of view, it makes sense to first be able to define your needs with regards to the environment you need (Chef, Puppet, Ansible, ...), then be able to deploy it in a controlled fashion (Vagrant, Docker, ...) and finally scale it to global size if needs be. How much of all this functionality you need should be defined in the scope of your project. Also note I've over-simplified mostly all technical explanations. Please use the referenced links for detailed information. | {
"source": [
"https://serverfault.com/questions/538466",
"https://serverfault.com",
"https://serverfault.com/users/189707/"
]
} |
538,490 | Is there a configuration that changes the directory where apache web server temporarily places uploaded files? I have access to httpd/conf.d I'm on a machine where /tmp is very size constrained and have a requirement to allow file uploads that are larger than the available space on /tmp. Environment: fedora 18, apache web server 2.4.6-2, passenger and ruby on rails. EDIT: there's some discussion around the office that it's passenger (because this is a ruby on rails app) not apache that determines the location of the temporary file upload. I'm going under the assumption that it's apache but please correct me if I'm wrong. | Let's use their respective web pages to find out what are all these projects about. I'll change the order in which you listed, though: Chef : Chef is an automation platform that transforms infrastructure into code. This is a configuration management software . Most of them use the same paradigm: they allow you to define the state you want a machine to be, with regards to configuration files, software installed, users, groups and many other resource types. Most of them also provide functionality to push changes onto specific machines, a process usually called orchestration . Vagrant : Create and configure lightweight, reproducible, and portable development environments. It provides a reproducible way to generate fully virtualized machines using either Oracle's VirtualBox or VMWare technology as providers . Vagrant can coordinate with a configuration management software to continue the process of installation where the operating system's installer finishes. This is known as provisioning . Docker : An open source project to pack, ship and run any application as a lightweight container The functionality of this software somewhat overlaps with that of Vagrant, in which it provides the means to define operating systems installations, but greatly differs in the technology used for this purpose. Docker uses Linux containers , which are not virtual machines per se, but isolated processes running in isolated filesystems. Docker can also use a configuration management system to provision the containers. OpenStack : Open source software for building private and public clouds. While it is true that OpenStack can be deployed on a single machine , such deployment is purely for proof-of-concept, probably not very functional due to resource constraints. The primary target for OpenStack installations are bare metal multi-node environments, where the different components can be used in dedicated hardware to achieve better results. A key functionality of OpenStack is its support for many virtualization technologies, from fully virtualized (VirtualBox, VMWare), to paravirtualized (KVM/Qemu) and also containers (LXC) and even User Mode Linux (UML) . I've tried to present these products as components of an specific architecture. From my point of view, it makes sense to first be able to define your needs with regards to the environment you need (Chef, Puppet, Ansible, ...), then be able to deploy it in a controlled fashion (Vagrant, Docker, ...) and finally scale it to global size if needs be. How much of all this functionality you need should be defined in the scope of your project. Also note I've over-simplified mostly all technical explanations. Please use the referenced links for detailed information. | {
"source": [
"https://serverfault.com/questions/538490",
"https://serverfault.com",
"https://serverfault.com/users/92010/"
]
} |
538,767 | I would like to rsync folders from a specific date onward.
For example, I want to rsync the folders that were created from 3 days ago onward (and of course 2 days ago, one day ago, etc.).
I know I need to use find and rsync, but I'm not sure how.
Any ideas?
Thanks!
Dotan. | rsync --progress --files-from=<(find /src_path -mtime -3 -type f -exec basename {} \;) /src_path/ /dst_path | {
"source": [
"https://serverfault.com/questions/538767",
"https://serverfault.com",
"https://serverfault.com/users/80829/"
]
} |
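A variant of the accepted command, assuming GNU find and bash process substitution: feeding rsync paths relative to the source directory keeps the directory structure intact instead of flattening everything with basename.
rsync -a --progress --files-from=<(cd /src_path && find . -mtime -3 -type f) /src_path/ /dst_path/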
538,897 | What does the ServerAliveCountMax in SSH actually do? I am trying to ensure that when I connect to my server via SSH that the connection remains open for a long period of time instead of the connection dying after a short period of inactivity. This is the example Host *
ServerAliveInterval 60
ServerAliveCountMax 2 I've heard from one source that the above setting will always send a response to the server every 60 seconds so long as the server receives that response. However if for whatever reason the response doesn't go through to the server, it will try and send another message. If that message fails too, then it will close the connection. (I feel this is wrong) The second and third source however say something different. They claim that a message will be sent to the server every 60 seconds if there is a period of inactivity, but it will only send through 2 requests and then it will close the connection. So what exactly does ServerAliveCountMax do? | Your feeling that "this is wrong" is correct. See the man page : ServerAliveCountMax
Sets the number of server alive messages (see below) which may be
sent without ssh(1) receiving any messages back from the server.
If this threshold is reached while server alive messages are
being sent, ssh will disconnect from the server, terminating the
session. It is important to note that the use of server alive
messages is very different from TCPKeepAlive (below). The server
alive messages are sent through the encrypted channel and there‐
fore will not be spoofable. The TCP keepalive option enabled by
TCPKeepAlive is spoofable. The server alive mechanism is valu‐
able when the client or server depend on knowing when a connec‐
tion has become inactive.
The default value is 3. If, for example, ServerAliveInterval
(see below) is set to 15 and ServerAliveCountMax is left at the
default, if the server becomes unresponsive, ssh will disconnect
after approximately 45 seconds. This option applies to protocol
version 2 only.
ServerAliveInterval
Sets a timeout interval in seconds after which if no data has
been received from the server, ssh(1) will send a message through
the encrypted channel to request a response from the server. The
default is 0, indicating that these messages will not be sent to
the server. This option applies to protocol version 2 only. | {
"source": [
"https://serverfault.com/questions/538897",
"https://serverfault.com",
"https://serverfault.com/users/188995/"
]
} |
539,665 | Does Ubuntu Linux cron (Vixie cron?) support setting a timeout for its jobs? Specifically, the process would be killed after X seconds unless it finishes successfully by then. I have had some cases where tasks have been hanging due to network connectivity and various issues. The processes are left hanging forever unless you manually clean up and kill them. | On Ubuntu the command timeout exists, which is part of coreutils. You can use it like this to set a timeout. Older versions of Debian/Ubuntu didn't build and include this command, but there is a comparable timeout package that you can install. # puppet shouldn't take more than 40 minutes!
47 * * * * root /usr/bin/timeout 2400 /usr/bin/puppet agent ... | {
"source": [
"https://serverfault.com/questions/539665",
"https://serverfault.com",
"https://serverfault.com/users/74975/"
]
} |
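If a hung job ignores the default TERM signal, timeout can follow up with KILL after a grace period; the 60-second grace value below is only an example:
47 * * * * root /usr/bin/timeout -k 60 2400 /usr/bin/puppet agent ...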
540,004 | On a VM I am initializing I am able to log in as one non-root user ( admin ) but not another ( tbbscraper ) over SSH with public key authentication. The only error message I can find in any log file is Sep 18 17:21:04 [REDACTED] sshd[18942]: fatal: Access denied for user tbbscraper by PAM account configuration [preauth] On the client side, the syndrome is $ ssh -v -i [REDACTED] tbbscraper@[REDACTED]
...
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering public key: [REDACTED]
debug1: Authentications that can continue: publickey
debug1: Trying private key: [REDACTED]
debug1: read PEM private key done: type RSA
Connection closed by [REDACTED] Changing 'tbbscraper' to 'admin' allows a successful login: debug1: Authentication succeeded (publickey). appears instead of the "Connection closed" message. This doesn't seem to be a permissions problem... # for x in admin tbbscraper
> do ls -adl /home/$x /home/$x/.ssh /home/$x/.ssh/authorized_keys
> done
drwxr-xr-x 3 admin admin 4096 Sep 18 17:19 /home/admin
drwx------ 2 admin admin 4096 Sep 18 16:53 /home/admin/.ssh
-rw------- 1 admin admin 398 Sep 18 17:19 /home/admin/.ssh/authorized_keys
drwxr-xr-x 3 tbbscraper tbbscraper 4096 Sep 18 17:18 /home/tbbscraper
drwx------ 2 tbbscraper tbbscraper 4096 Sep 18 17:18 /home/tbbscraper/.ssh
-rw------- 1 tbbscraper tbbscraper 398 Sep 18 17:18 /home/tbbscraper/.ssh/authorized_keys
# cmp /home/{admin,tbbscraper}/.ssh/authorized_keys ; echo $?
0 ... nor a PAM-level access control problem ... # egrep -v '^(#|$)' /etc/security/*.conf
# ... so none of the existing answers to similar questions would seem to apply. The only other piece of evidence I've got is: root@[REDACTED] # su - admin
admin@[REDACTED] $ but root@[REDACTED] # su - tbbscraper
su: Authentication failure
(Ignored)
tbbscraper@[REDACTED] $ which suggests some larger-scale PAM issue, but I can't find anything obviously wrong with the stuff in /etc/pam.d . Any ideas? The VM is an EC2 instance, OS is Debian 7.1 (Amazon's off-the-shelf AMI). | After all that, it turns out to have been a one-character typo in /etc/shadow . Spot the difference: admin:!:15891:0:99999:7:::
tbbscraper:!::15966:0:99999:7::: That's right, there are two colons after the exclamation point on the tbbscraper line. That shoves all the fields over one and makes PAM think that the account expired on January 8, 1970. | {
"source": [
"https://serverfault.com/questions/540004",
"https://serverfault.com",
"https://serverfault.com/users/112625/"
]
} |
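Two quick checks that would have caught this sort of /etc/shadow damage, using the account name from the question:
pwck -r /etc/passwd /etc/shadow    # read-only consistency check of the password files
chage -l tbbscraper                # prints the ageing/expiry fields as PAM will interpret them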
540,328 | foreman can read .env files and set environment variables from the contents, and then run a program e.g. foreman run -e vars.env myprogram ...but it does a lot of other things (and is primarily concerned with starting things using its Procfile format). Is there a simpler (Linux/Unix) tool that's just focussed on reading .env files and executing a command with the new environment? Example environment file (from http://ddollar.github.io/foreman/#ENVIRONMENT ): FOO=bar
BAZ=qux | You can source the environment file in the active shell and run the program: sh -ac ' . ./.env; /usr/local/bin/someprogram' The -a switch exports all variables, so that they are available to the program. | {
"source": [
"https://serverfault.com/questions/540328",
"https://serverfault.com",
"https://serverfault.com/users/21602/"
]
} |
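Two other common shell-only patterns for the same job, assuming a plain KEY=value file with no spaces or quoting tricks:
set -a; . ./vars.env; set +a; myprogram
env $(grep -v '^#' vars.env | xargs) myprogram
The first exports everything the file defines before starting the program; the second builds the environment on the command line without touching the calling shell.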
540,492 | Disclaimer: I'm pretty novice at sysadmin stuff. I'm trying to set up port forwarding in an AWS EC2 instance, this has to be done in the command-line because I don't want to go in and edit anything, it has to be automatic (it's part of a build process). sudo echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf Permission denied The weird thing is I've been (successfully) using sudo for pretty much every command that required su privileges. If I do sudo su before the command (trying it out by hand in an ssh session), then it works. Reasons behind this? Possible solutions that don't involve sudo su or manual edits? | You can't use sudo to affect output redirection; > and >> (and, for completeness, < ) are effected with the privilege of the calling user, because redirection is done by the calling shell, not the called subprocess. Either do cp /etc/sysctl.conf /tmp/
echo "net.ipv4.ip_forward = 1" >> /tmp/sysctl.conf
sudo cp /tmp/sysctl.conf /etc/ or sudo /bin/su -c "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf" | {
"source": [
"https://serverfault.com/questions/540492",
"https://serverfault.com",
"https://serverfault.com/users/58313/"
]
} |
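Another widely used pattern for the question above: let a root-owned tee do the writing, since it is only the redirection that needs the elevated privilege.
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p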
540,537 | I'm installing an nginx ssl proxy on my Fedora server. I've created a cert and key pair under /etc/nginx. They look like this: ls -l /etc/nginx/
total 84
...
-rw-r--r--. 1 root root 1346 Sep 20 12:11 demo.crt
-rw-r--r--. 1 root root 1679 Sep 20 12:11 demo.key
... As root, I'm trying to start the nginx service: systemctl start nginx.service I get the following error: nginx[30854]: nginx: [emerg]
SSL_CTX_use_certificate_chain_file("/etc/nginx/demo.crt") failed (SSL: error:0200100D:system library:fopen:Permission denied...e:system lib)
nginx[30854]: nginx: configuration file /etc/nginx/nginx.conf test failed Is there something wrong with the permissions on these files? | You probably have SELinux in enforcing mode (the default for Fedora): sestatus -v If this is the case, check the audit logs; you should find the access error: ausearch -m avc -ts today | audit2allow You also probably moved the file instead of copying it, so the security context of the file might be wrong. ls -lrtZ /etc/nginx/demo.* and correct it if needed: restorecon -v -R /etc/nginx | {
"source": [
"https://serverfault.com/questions/540537",
"https://serverfault.com",
"https://serverfault.com/users/190825/"
]
} |
540,548 | Our uncle let us fix their HR System (Human Resource System), and we gathered these problems:
(The problems to be presented must be only fixed by automation(computer program)) They have a very old hardware (since 1999) for the system, the Server is running Windows NT 4.0 Server and to be able to access it they have a client, Windows NT running in a virtual machine, with HOST OS of windows 98 (since 1999). They have a very old printer (since 1999) attached to the client computer. The model/brand is Kyo Cera. The printer has a parallel connector. Our real problems are: 1.1 Does Windows NT 4.0 compatible to newer computer builds, so we can just install the OS in there? 1.2 If not, what causes it? Is there a limitation for the OS to be installed? 1.3 Is the printer compatible to newer computer builds together with the client OS? . 1.4 For illustration purposes of our propose solution:
We may assume that: we now bought: -new computer with windows 7 OS -new printer(usb)
. . Is it possible to have/ run properly and smoothly a Windows NT 4.0(client) in a virtual machine like "Virtual Box" with a HOST OS of windows 7? 1.4.1 Can the virtual OS access the windows NT 4.0 Server and read/write data to it? Can the virtual OS access the new printer and be able to print?
. . *we can't find many information regarding this topic many don't exist/deleted. | You probably have SELinux in enforcing mode (the default for Fedora): sestatus -v If this is the case, check the audit logs, you should find the access error: ausearch -m avc -ts today | audit2allow You also probably moved the filed instead of copying it, so the security context of the file might be wrong. ls -lrtZ /etc/nginx/demo.* and correct it if needed: restorecon -v -R /etc/nginx | {
"source": [
"https://serverfault.com/questions/540548",
"https://serverfault.com",
"https://serverfault.com/users/190823/"
]
} |
540,821 | Background I had a small logrotate mishap... Logrotate would rotate the archived logs by mistake, causing a quadratic growth of files in my /var/log/ . And by the time I caught wind that something was awry, /var/log/ already contained a few million files ... I managed to (after some hair loss and find/sed/grep magic) remove all offending files and fix my logrotate config. And thought all was well... Problem Whenever I ls / du -hs or otherwise list the contents of /var/log/ (which now contains 80mb of archives/logs and at most a few hundred files) the process doing that hangs for a good minute or two. I do believe this is somehow related to the logrotate mishap but I'm not certain; it could be something else. Anyway, I'm at a loss as to where to start debugging or looking for a fix for this. Please help :3 Other info uname -a
Linux xxx 3.3.8-gentoo #18 SMP Sat Sep 21 22:44:40 CEST 2013 x86_64 Intel(R)
Core(TM)2 CPU 4400 @ 2.00GHz GenuineIntel GNU/Linux
cat /proc/meminfo
MemTotal: 2051552 kB
MemFree: 75612 kB
Buffers: 9016 kB
Cached: 1740608 kB
SwapCached: 0 kB
CFQ IO scheduler + SLUB allocator I thought this: How many files in a directory is too many? (Downloading data from net) was related but I don't have the files left anymore. Edit The problem persists even after a call to init 1 so I think it is safe to assume there is no other process to blame but the FS. Solution (as applied from accepted answer) init 1
mv /var/log /var/log1
mkdir /var/log
chmod --reference=/var/log1 /var/log
chown --reference=/var/log1 /var/log
tar -C /var/log1 -cvp . | tar -C /var/log -xvp
rm -rf /var/log1
init 5 | Directories only ever grow in size, not shrink. Try moving all those files out into a temporary directory (like log2), then rmdir the old directory and rename the temp one as the new permanent one. | {
"source": [
"https://serverfault.com/questions/540821",
"https://serverfault.com",
"https://serverfault.com/users/190996/"
]
} |
540,828 | I have created my RDS instance before creating my Elastic Beanstalk environment. The two are working together with no problem, but I'd like them to be linked together, and have the RDS parameters accessible via the RDS_* environment variables. The Elastic Beanstalk configuration page says: Although the first link creates an RDS instance in-place and links it to the current environment, the second link just redirects to this documentation page, which unfortunately only explains how to create a new RDS instance, but not how to link an existing one. How can I associate an existing RDS instance to my Elastic Beanstalk environment? | Answer from the AWS support : In order to associate an existing database to an EB Environment you have to take a snapshot of it via the Management Console and then choose "create a new RDS database" under the Data Layer. There does not appear to be a way to associate a running RDS instance to an existing EB Environment without launching a new one from a snapshot due to the way the RDS instance is tied into the Beanstalk environment's underlying Cloudformation stack. If you take a snapshot of your current RDS instance you can start it anew in EB if you wish. If you want the RDS instance to exist outside of the environment you can simply provide the connection parameters as environment variables via the EB Console: Configuration -> Web Layer -> Software Configuration. Then, you can read the environment variable via PHP . | {
"source": [
"https://serverfault.com/questions/540828",
"https://serverfault.com",
"https://serverfault.com/users/83039/"
]
} |
541,243 | I'm new to Linux and I have a server with four network cards. I have to identify which physical network interface is assigned to the names eth0, eth1, eth2 and eth3. I have to disconnect the cable from eth2 and do not know which network card it is. Thanks | You can use ethtool. ethtool -p ethX [N] ethX – network interface name [N] – number of seconds to blink Example: ethtool -p eth2 15 This will blink the network interface eth2 for 15 seconds, so you can see which physical network card is eth2 | {
"source": [
"https://serverfault.com/questions/541243",
"https://serverfault.com",
"https://serverfault.com/users/191253/"
]
} |
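If the adapter or driver does not support the identify blink, matching MAC addresses against the stickers on the cards is a workable fallback:
ethtool -P eth2    # print the permanent hardware (MAC) address of eth2
ip link            # list every interface together with its MAC address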
542,910 | How can I add a user to additional groups with Ansible? For example, I would like to add a user to the sudo group without replacing the user's existing set of groups. | According to the User module you can use this: - name: Adding user {{ user }}
user: name={{ user }}
group={{ user }}
shell=/bin/bash
password=${password}
groups=sudo
append=yes You can just add the groups=groupname and append=yes to add them to an existing user when you're creating them | {
"source": [
"https://serverfault.com/questions/542910",
"https://serverfault.com",
"https://serverfault.com/users/25526/"
]
} |
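A minimal standalone task for exactly the case in the question, written in the same inline style as the answer above; the user name deploy is a placeholder:
- name: Add existing user to the sudo group
  user: name=deploy groups=sudo append=yes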
543,999 | I have a server which was working ok until 3rd Oct 2013 at 10:50am when it began to intermittently return "502 Bad Gateway" errors to the client. Approximately 4 out of 5 browser requests succeed but about 1 in 5 fail with a 502. The nginx error log contains many hundreds of these errors; 2013/10/05 06:28:17 [error] 3111#0: *54528 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 66.249.66.75, server: www.bec-components.co.uk request: ""GET /?_n=Fridgefreezer/Hotpoint/8591P;_i=x8078 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.bec-components.co.uk" However the PHP error log does not contain any matching errors. Is there a way to get PHP to give me more info about why it is resetting the connection? This is nginx.conf ; user www-data;
worker_processes 4;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 30;
tcp_nodelay on;
client_max_body_size 100m;
gzip on;
gzip_types text/plain application/xml text/javascript application/x-javascript text/css;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
include /gvol/sites/*/nginx.conf;
} And this is the .conf for this site; server {
server_name www.bec-components.co.uk bec3.uk.to bec4.uk.to bec.home;
root /gvol/sites/bec/www/;
index index.php index.html;
location ~ \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires 2592000; # 30 days
log_not_found off;
}
## Trigger client to download instead of display '.xml' files.
location ~ \.xml$ {
add_header Content-disposition "attachment; filename=$1";
}
location ~ \.php$ {
fastcgi_read_timeout 3600;
include /etc/nginx/fastcgi_params;
keepalive_timeout 0;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
}
}
## bec-components.co.uk ##
server {
server_name bec-components.co.uk;
rewrite ^/(.*) http://www.bec-components.co.uk$1 permanent;
} | I'd always trust what my webservers are telling me: 502 Bad Gateway. What is the uptime of your fastcgi/nginx process? Do you monitor network connections? Can you confirm/deny a change in visitor count around that day? What does it mean: your fastcgi process is not accessible by nginx; either too slow or not responding at all. Bad gateway means: nginx cannot fastcgi_pass to that defined resource 127.0.0.1:9000 at that very specific moment. Your initial error log tells it all: recv() failed
-> nginx failed
(104: Connection reset by peer) while reading response header from upstream,
-> no complete answer, or no answer at all
upstream: "fastcgi://127.0.0.1:9000",
-> who is he, who failed??? from my limited pov i'd suggest: restart your fastcgi_process / server check your access-log enable debug-log | {
"source": [
"https://serverfault.com/questions/543999",
"https://serverfault.com",
"https://serverfault.com/users/80757/"
]
} |
544,009 | When I run apache on port 80 it works fine. But if I change the port then it's accepting connection from localhost only. Somebody please help me figure out what is the problem. My iptables -L result Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:1032 Chain FORWARD (policy ACCEPT)
target prot opt source destination Chain OUTPUT (policy ACCEPT)
target prot opt source destination | i'd always trust if my webservers are telling me: 502 Bad Gateway what is the uptime of your fastcgi/nginx - process? do you monitor network-connections? can you confirm/deny a change of visitors-count around that day? what does it mean: you fastcgi-process is not accessible by nginx; either to slow or not corresponding at all. bad gateway means: nginx cannot fastcgi_pass to that defined ressource 127.0.0.1:9000; at that very specific moment . your inital error-logs tells it all: . recv() failed
-> nginx failed
(104: Connection reset by peer) while reading response header from upstream,
-> no complete answer, or no answer at all
upstream: "fastcgi://127.0.0.1:9000",
-> who is he, who failed??? from my limited pov i'd suggest: restart your fastcgi_process / server check your access-log enable debug-log | {
"source": [
"https://serverfault.com/questions/544009",
"https://serverfault.com",
"https://serverfault.com/users/192780/"
]
} |
544,171 | My MongoDB database was running into problems under load, with the following errors spamming the logs: [initandlisten] pthread_create failed: errno:11 Resource temporarily unavailable
[initandlisten] can't create new thread, closing connection I've come to the conclusion that I need to raise the "ulimit -u" or "Max processes" setting which were at 1024, and the usage could have been exceeding that given the web frontends launched (not sure how to check this). I edited /etc/security/limits.conf to add the last two lines (the first two were already there): * soft nofile 350000
* hard nofile 350000
* soft nproc 30000
* hard nproc 30000 Then I rebooted the system (BTW should I have done that, or should a mongod service restart be enough?) After reboot, reviewing the process limits for mongod process it seems the soft limit has been ignored: $ cat /proc/2207/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 1024 30000 processes
Max open files 350000 350000 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 273757 273757 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
$ whoami
mongod
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 273757
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 350000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited I expected that "Max processes" both hard and soft limits will be at 30000 as per the /etc/security/limits.conf file, but only the hard one is. What am I doing wrong? I'm running Amazon Linux on AWS EC2. bash-4.1$ cat /etc/*-release
Amazon Linux AMI release 2012.09 | Check the file /etc/security/limits.d/90-nproc.conf as this is likely overriding your settings. I wrote about this exact same issue last year http://scott.cm/max-processes-1024-limits-conf/ | {
"source": [
"https://serverfault.com/questions/544171",
"https://serverfault.com",
"https://serverfault.com/users/177356/"
]
} |
544,619 | When setting up a network in need of static IP addresses I've come across at least two ways of doing so. A: Router/Gateway In particular, on my Buffalo router, under DHCP Server, there is the option to add multiple MAC addresses and the ability to assign them an IP address. B: On the Device itself Configuring the static IP on the device itself via the network adapter settings on Windows. Which makes more sense to use in what situation? Is one better/worse than the other? | The main advantage of using DHCP reservations is that the assignment of a "static" IP address is managed centrally. This can be helpful for example if you are often rebuilding a particular computer or constantly changing the OS or if setting a "static" IP address is cumbersome (DirectTV DVR for example). Using DHCP reservations is also handy if you ever need to migrate to a new subnet. In most cases then you just need to change the subnet on the router\DHCP server and all the clients will automatically be updated to the new subnet. Lastly, using DHCP reservations is nice because you have a central place that you can go and lookup the IP address of a machine, provided the router\DHCP server allows you to note a name in addition to the IP address and MAC Address. The down side to DHCP Reservations is that you have to know the MAC address, not a huge deal, but depending on the Router\DHCP Server and the computers OS it may be more time consuming that just setting a static address on the machine. | {
"source": [
"https://serverfault.com/questions/544619",
"https://serverfault.com",
"https://serverfault.com/users/83619/"
]
} |
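For completeness, on a network where the DHCP server is ISC dhcpd rather than a consumer router, the reservation discussed above is a small host block; the MAC and IP here are made up:
host printer1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.1.50;
}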
544,779 | Because of Ubuntu updating, I made the mistake of upgrading Apache 2.2 to 2.4, and many things went wrong. I have no idea how to specify the version after apt-get remove apache2 . apt-get install apache2 always installs 2.4. How can I do it? | You need to do the following: apt-cache showpkg <packagename> The above command will display a list of available versions for this package. Then select the needed version and do the following: apt-get install <packagename>=<complete version name> Example: apt-cache showpkg apache2
apt-get install apache2=2.2.14-5ubuntu8.7 | {
"source": [
"https://serverfault.com/questions/544779",
"https://serverfault.com",
"https://serverfault.com/users/185832/"
]
} |
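Two related commands that are often easier to read than showpkg, plus a pin so a later upgrade does not pull 2.4 back in; the version string is the one from the answer:
apt-cache policy apache2
apt-cache madison apache2
apt-get install apache2=2.2.14-5ubuntu8.7
echo "apache2 hold" | dpkg --set-selections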
544,850 | I need to periodically give temporary and limited access to various directories on a CentOS linux server that has vsftp installed. I've created a user using useradd [user_name] and given them a password using passwd [password] . I've created a directory in /var/ftp and then I bind this to the directory that I wish to limit access to. What else do I need to specifically do to ensure that when this user logs into FTP, they only have access to this directory please? | Complete answer that solved my question for any others that are after a step by step walkthrough... Install vsftpd using this as a guide . Create user with useradd [user_name] . Create user's password with passwd [user_name] . (You'll be prompted to specify the password). Create FTP directory in /var/ftp and then bind to the 'home' directory you wish to specify for this user with mount --bind /var/www/vhosts/domain.com/ /var/ftp/custom_name/ . Change user's home directory with usermod -d /var/ftp/custom_name/ user_name In /etc/vsftpd/vsftpd.conf , ensure all all of the following are set:- chroot_local_user=YES chroot_list_enable=YES chroot_list_file=/etc/vsftpd.chroot_list Only list users in the vsftpd.chroot_list file if you want them to have full access to anywhere on the server. By not listing them in this file, you're saying restrict all vsftpd users to their specified home directory. In other words (for reference):- means that by default, ALL users get chrooted except users in the file... chroot_local_user=YES chroot_list_enable=YES means that by default, ONLY users in the file get chrooted... chroot_local_user=NO chroot_list_enable=YES | {
"source": [
"https://serverfault.com/questions/544850",
"https://serverfault.com",
"https://serverfault.com/users/177943/"
]
} |
544,863 | I was told that every time I refresh our web site, either individual pages or the entire site, I should first stop the application pool, update my website file or files, then start the application pool. My web site files consists, of HTML, JS, ASPX, INC, GIF, JPEG, CONFIG, etcetera. I'm asking because I believe I have updated my site without stopping the application pool and starting it, and also by stopping and starting it, just trying to find out what the correct approach should be. | Complete answer that solved my question for any others that are after a step by step walkthrough... Install vsftpd using this as a guide . Create user with useradd [user_name] . Create user's password with passwd [user_name] . (You'll be prompted to specify the password). Create FTP directory in /var/ftp and then bind to the 'home' directory you wish to specify for this user with mount --bind /var/www/vhosts/domain.com/ /var/ftp/custom_name/ . Change user's home directory with usermod -d /var/ftp/custom_name/ user_name In /etc/vsftpd/vsftpd.conf , ensure all all of the following are set:- chroot_local_user=YES chroot_list_enable=YES chroot_list_file=/etc/vsftpd.chroot_list Only list users in the vsftpd.chroot_list file if you want them to have full access to anywhere on the server. By not listing them in this file, you're saying restrict all vsftpd users to their specified home directory. In other words (for reference):- means that by default, ALL users get chrooted except users in the file... chroot_local_user=YES chroot_list_enable=YES means that by default, ONLY users in the file get chrooted... chroot_local_user=NO chroot_list_enable=YES | {
"source": [
"https://serverfault.com/questions/544863",
"https://serverfault.com",
"https://serverfault.com/users/193318/"
]
} |
545,546 | I am using the "aws ec2 run-instances" command (from the AWS Command Line Interface (CLI) ) to launch an Amazon EC2 instance. I want to set an IAM role for the EC2 instance I am launching. The IAM role is configured and I can use it successfully when launching an instance from the AWS web UI. But when I try to do this using that command, and the "--iam-instance-profile" option, it failed. Doing "aws ec2 run-instances help" shows Arn= and Name= subfields for the value. When I try to look up the Arn using "aws iam list-instance-profiles" it gives this error message: A client error (AccessDenied) occurred: User:
arn:aws:sts::xxxxxxxxxxxx:assumed-role/shell/i-15c2766d is not
authorized to perform: iam:ListInstanceProfiles on resource:
arn:aws:iam::xxxxxxxxxxxx:instance-profile/ (where xxxxxxxxxxxx is my AWS 12-digit account number) I looked up the Arn string via the web UI and used that via "--iam-instance-profile Arn=arn:aws:iam::xxxxxxxxxxxx:instance-profile/shell" on the run-instances command, and that failed with: A client error (UnauthorizedOperation) occurred: You are not
authorized to perform this operation. If I leave off the "--iam-instance-profile" option entirely, the instance will launch but it will not have the IAM role setting I need. So the permission seems to have something to do with using "--iam-instance-profile" or accessing IAM data. I repeated several times in case of AWS glitches (they happen sometimes) and no success. I suspected that perhaps there is a restriction that an instance with an IAM role is not allowed to launch an instance with a more powerful IAM role. But in this case, the instance I am doing the command in has the same IAM role that I am trying to use. named "shell" (though I also tried using another one, no luck). Is setting an IAM role not even permitted from an instance (via its
IAM role credentials)? Is there some higher IAM role permission needed to use IAM roles,
than is needed for just launching a plain instance? Is "--iam-instance-profile" the appropriate way to specify an IAM
role? Do I need to use a subset of the Arn string, or format it in some other way? Is it possible to set up an IAM role that can do any IAM role
accesses (maybe a "Super Root IAM" ... making up this name)? FYI, everything involves Linux running on the instances. Also, I am running all this from an instance because I could not get these tools installed on my desktop. That and I do not want to put my IAM user credentials on any AWS storage as advised by AWS here . after answered: I did not mention the launching instance permission of "PowerUserAccess" (vs. "AdministratorAccess") because I did not realize additional access was needed at the time the question was asked. I assumed that the IAM role was "information" attached to the launch. But it really is more than that. It is a granting of permission. | Update Mike Pope has published a nice article about Granting Permission to Launch EC2 Instances with IAM Roles (PassRole Permission) on the AWS Security Blog , which explains the subject matter from an AWS point of view. Initial Answer Skaperen's answer is partially correct (+1), but slightly imprecise/misleading as follows (the explanation seems a bit too complex for a comment, hence this separate answer): To launch an EC2 instance with an IAM role requires administrative access to the IAM facility. This is correct as such and points towards the underlying problem, but the required administrative permissions are rather limited, so the following conclusion ... Because IAM roles grant permissions, there is clearly a security issue to be addressed. You would not want IAM roles being a means to allow permission escalation. ... is a bit misleading, insofar the potential security issue can be properly addressed. The subject matter is addressed in Granting Applications that Run on Amazon EC2 Instances Access to AWS Resources : You can use IAM roles to manage credentials for applications that run
on Amazon EC2 instances. When you use roles, you don't have to
distribute AWS credentials to Amazon EC2 instances. Instead, you can
create a role with the permissions that applications will need when
they run on Amazon EC2 and make calls to other AWS resources. When
developers launch an Amazon EC2 instance, they can specify the role
you created to associate with the instance. Applications that run on
the instance can then use the role credentials to sign requests. Now, within the use case at hand the mentioned developers [that] launch an Amazon EC2 instance are in fact EC2 instances themselves, which appears to yield the catch 22 security issue Skaperen outlined. That's not really the case though as illustrated by the sample policy in section Permissions Required for Using Roles with Amazon EC2 : {
"Version": "2012-10-17",
"Statement": [{
"Effect":"Allow",
"Action":"iam:PassRole",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"iam:ListInstanceProfiles",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"ec2:*",
"Resource":"*"
}]
} So iam:PassRole is in fact the only IAM permission required, and while technically of administrative nature, this isn't that far reaching - of course, the sample policy above would still allow to escalate permissions by means of listing and in turn passing any available role, but this can be prevented by specifying only those roles that are desired/safe to pass for the use case at hand - this is outlined in section Restricting Which Roles Can Be Passed to Amazon EC2 Instances (Using PassRole) : You can use the PassRole permission to prevent users from passing a
role to Amazon EC2 that has more permissions than the user has already
been granted, and then running applications under the elevated
privileges for that role. In the role policy, allow the PassRole
action and specify a resource (such as
arn:aws:iam::111122223333:role/ec2Roles/*) to indicate that only a
specific role or set of roles can be passed to an Amazon EC2 instance. The respective sample policy illustrates exactly matches the use case at hand, i.e. grants permission to launch an instance with a role by using the Amazon EC2 API : {
"Version": "2012-10-17",
"Statement": [{
"Effect":"Allow",
"Action":"ec2:RunInstances",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":"iam:PassRole",
"Resource":"arn:aws:iam::123456789012:role/Get-pics"
}]
} | {
"source": [
"https://serverfault.com/questions/545546",
"https://serverfault.com",
"https://serverfault.com/users/98296/"
]
} |
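For completeness, the CLI call that the PassRole policy above is meant to permit looks roughly like this; the AMI ID and instance type are placeholders, and shell is the instance profile name used in the question:
aws ec2 run-instances --image-id ami-12345678 --instance-type t1.micro --iam-instance-profile Name=shell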
545,555 | I connect to a Linux machine (CentOS 6.4) using PuTTY . Apart from the fact that I can set PuTTY to only use one type of protocol, how can I find the current SSH connection's version (SSH1 or SSH2)? | Once you are in, you say: ssh -v localhost and it will tell you the exact version of the server. | {
"source": [
"https://serverfault.com/questions/545555",
"https://serverfault.com",
"https://serverfault.com/users/193704/"
]
} |
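The version can also be read from the server banner without logging in; host is a placeholder:
ssh -v user@host 2>&1 | grep -i "remote protocol version"
nc host 22
The nc variant simply prints the banner, e.g. SSH-2.0-OpenSSH_5.3, where the leading 2.0 (or 1.99/1.5) identifies the protocol version offered.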
545,579 | I have installed Hyper-V Server 2012 R2 on a server that had Hyper-V Server 2012. When I did this, the standard Windows.old folder was created. I now would like to remove that folder safely. The standard way to do that with a full GUI would be to use Disk Cleanup, but of course I don't have that option on Hyper-V Server. Is there a formal way to remove that folder in this scenario? I know if this was Server Core I could install the full GUI including Desktop Experience, but that would be a lot of nonsense just to cleanly remove a folder. My primary reason for asking, as opposed to just doing rmdir /s or some such, is that the Windows.old folder has a lot of junctions, and I don't want to break anything in the production OS copy as part of doing this. | I first tried to copy and run cleanmgr.exe (Disk Cleanup tool), but it has too many dependencies on DLLs which are not present in Core/Hyper-V Server. So instead I deleted the directory manually. First I removed all junction points and symbolic links. To do this I used junction.exe from SysInternals. Copy the exe into a directory in your path. I ran it to get a list of all junctions: c:\tools\junction.exe -s -q C:\windows.old > %temp%\junc.txt I opened a PowerShell: start powershell.exe and ran the following script to find the relevant lines and execute junction.exe again: foreach ($line in [System.IO.File]::ReadLines("$env:temp\junc.txt"))
{
if ($line -match "^\\\\")
{
$file = $line -replace "(: JUNCTION)|(: SYMBOLIC LINK)",""
& c:\tools\junction.exe -d "$file"
}
} This removed all junction points and the single symbolic link on my system. Back in cmd.exe, I then executed three commands to clear permissions and delete all files: takeown /F C:\windows.old /R /D Y
cacls C:\windows.old /T /G Everyone:F
rd /s /q C:\windows.old In my test, I installed a new Hyper-V server 2012, then upgraded to 2012 R2, Windows.old is now gone and the system is running fine with all old junction targets intact. | {
"source": [
"https://serverfault.com/questions/545579",
"https://serverfault.com",
"https://serverfault.com/users/27123/"
]
} |
545,622 | Syslog, auth.log, kern.log and messages log files are not updated anymore after upgrading to Debian Wheezy (Debian Squeeze was previously running). How could I fix it? | I figured the exact issue has been encountered by other Debian users ( http://forums.debian.net/viewtopic.php?f=5&t=104049 ). To restore logging, one just needs to reinstall a syslog daemon (similar to the one that had been removed during upgrade), for example: apt-get install inetutils-syslogd | {
"source": [
"https://serverfault.com/questions/545622",
"https://serverfault.com",
"https://serverfault.com/users/57823/"
]
} |
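A short sketch for checking that logging is back after reinstalling a syslog daemon as described above; the rsyslog line is an alternative assumption, not part of the original answer.
apt-get install inetutils-syslogd     # as suggested above
# apt-get install rsyslog             # alternative: the usual Debian default

# Send a test message and confirm it shows up again:
logger -p user.notice "logging test after wheezy upgrade"
tail -n 5 /var/log/syslog /var/log/messages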
546,012 | Amazon EC2 won't let me delete a security group, complaining that the group still has dependencies. How Can I find what those dependencies are? aws ec2 describe-security-groups doesn't say. | Paste the security group ID in the "Network Interfaces" section of EC2. This will find usage across EC2, EB, RDS, ELB. CLI: aws ec2 describe-network-interfaces --filters Name=group-id,Values=sg-123abc45 | {
"source": [
"https://serverfault.com/questions/546012",
"https://serverfault.com",
"https://serverfault.com/users/14645/"
]
} |
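A slightly expanded, hedged version of the CLI lookup above that also shows what each matching network interface is attached to; the group ID is a placeholder.
SG=sg-123abc45
aws ec2 describe-network-interfaces \
    --filters Name=group-id,Values="$SG" \
    --query 'NetworkInterfaces[].{ENI:NetworkInterfaceId,Desc:Description,Instance:Attachment.InstanceId}' \
    --output table

# A group can also be "in use" because another group's rules reference it:
aws ec2 describe-security-groups \
    --filters Name=ip-permission.group-id,Values="$SG" \
    --query 'SecurityGroups[].GroupId'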
546,175 | Is it possible to receive a notification on the console when a package containing a file that is controlled by puppet is about to change that file? Meaning, in yum when doing yum update, is it possible to inject a custom warning? | Yum supports plugins, so it's entirely possible to write a plugin that reads the cached puppet manifest and warns when a transaction will overwrite a puppet-controlled file. I'm not aware of an existing plugin that does this, but I just wrote one myself as I like the idea. The plugin checks all newly installed/upgraded/downgraded packages, tells you which puppet-managed files it will overwrite and asks for a confirmation to do so. [root@camel ~]# yum update pam
Loaded plugins: puppet, security
Skipping security plugin, no data
Setting up Update Process
Resolving Dependencies
Skipping security plugin, no data
--> Running transaction check
---> Package pam.i386 0:0.99.6.2-12.el5 set to be updated
---> Package pam.x86_64 0:0.99.6.2-12.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================================
Updating:
pam i386 0.99.6.2-12.el5 base 983 k
pam x86_64 0.99.6.2-12.el5 base 982 k
Transaction Summary
===============================================================================================================================================================
Install 0 Package(s)
Upgrade 2 Package(s)
Total download size: 1.9 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): pam-0.99.6.2-12.el5.x86_64.rpm | 982 kB 00:00
(2/2): pam-0.99.6.2-12.el5.i386.rpm | 983 kB 00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 8.7 MB/s | 1.9 MB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing pam-0.99.6.2-12.el5.i386 overwrites puppet-managed file /etc/pam.d/system-auth
Installing pam-0.99.6.2-12.el5.i386 overwrites puppet-managed file /etc/security/access.conf
Installing pam-0.99.6.2-12.el5.i386 overwrites puppet-managed file /etc/security/limits.conf
Installing pam-0.99.6.2-12.el5.x86_64 overwrites puppet-managed file /etc/pam.d/system-auth
Installing pam-0.99.6.2-12.el5.x86_64 overwrites puppet-managed file /etc/security/access.conf
Installing pam-0.99.6.2-12.el5.x86_64 overwrites puppet-managed file /etc/security/limits.conf
Is this ok [y/N]: n
Aborting
[root@camel ~]# yum update pam
Loaded plugins: puppet, security
Skipping security plugin, no data
Setting up Update Process
Resolving Dependencies
Skipping security plugin, no data
--> Running transaction check
---> Package pam.i386 0:0.99.6.2-12.el5 set to be updated
---> Package pam.x86_64 0:0.99.6.2-12.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================================
Updating:
pam i386 0.99.6.2-12.el5 base 983 k
pam x86_64 0.99.6.2-12.el5 base 982 k
Transaction Summary
===============================================================================================================================================================
Install 0 Package(s)
Upgrade 2 Package(s)
Total size: 1.9 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing pam-0.99.6.2-12.el5.i386 overwrites puppet-managed file /etc/pam.d/system-auth
Installing pam-0.99.6.2-12.el5.i386 overwrites puppet-managed file /etc/security/access.conf
Installing pam-0.99.6.2-12.el5.i386 overwrites puppet-managed file /etc/security/limits.conf
Installing pam-0.99.6.2-12.el5.x86_64 overwrites puppet-managed file /etc/pam.d/system-auth
Installing pam-0.99.6.2-12.el5.x86_64 overwrites puppet-managed file /etc/security/access.conf
Installing pam-0.99.6.2-12.el5.x86_64 overwrites puppet-managed file /etc/security/limits.conf
Is this ok [y/N]: y
Updating : pam 1/4
Updating : pam 2/4
Cleanup : pam 3/4
Cleanup : pam 4/4
Updated:
pam.i386 0:0.99.6.2-12.el5 pam.x86_64 0:0.99.6.2-12.el5
Complete! The plugin itself can be found in my github hacks repository . Nov. 8 2013 update: As hinted at in the comments, I've now turned this into a larger project to improve the interaction between Yum and Puppet. You can find it on GitHub . | {
"source": [
"https://serverfault.com/questions/546175",
"https://serverfault.com",
"https://serverfault.com/users/26514/"
]
} |
546,342 | Is there a limit of characters a command in a crontab could be ? I have a crontab with a 178 characters command and it seems to be truncated at 164 when executed. I can tell this number from the e-mail I receive and from the vi colors changing from that point. So, is it an "official" limitation ? I can't find any documentation about this. | Wow, I found what my problem is and it had nothing to do with line length. It turns out that my command had a % (percent sign) in it, which has a special meaning in crontab. It is used to input text to STDIN (see Why is my crontab not working, and how can I troubleshoot it? ). So I had to escape it. My command which was: gzip -c /path/to/a/file > /backup/dir/file-$(date +%F_%T).gz becomes gzip -c /path/to/a/file > /backup/dir/file-$(date +\%F_\%T).gz | {
"source": [
"https://serverfault.com/questions/546342",
"https://serverfault.com",
"https://serverfault.com/users/121519/"
]
} |
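Two hedged ways of applying the fix above in practice; the schedule shown is just an example.
# Escape the % signs directly in the crontab entry:
0 2 * * * gzip -c /path/to/a/file > /backup/dir/file-$(date +\%F_\%T).gz

# ...or put the command in a small script so cron never sees a literal %:
#   /usr/local/bin/backup-file.sh contains:
#     #!/bin/sh
#     gzip -c /path/to/a/file > /backup/dir/file-$(date +%F_%T).gz
0 2 * * * /usr/local/bin/backup-file.sh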
546,349 | I've been searching around but couldn't find a straight answer, if someone could please clarify this, would be greatly appreciated, thanks! location ~ \.php$ {
try_files $uri = 404;
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
} OR/AND? upstream php {
server unix:/run/php-fpm/php-fpm.sock;
} Thanks! | location is used to match expressions and create rules for them. upstream defines a named group of servers that can be referenced elsewhere in the configuration. In your example, this means that if you want an equivalent for location ~ \.php$ {
try_files $uri = 404;
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
} , you would need upstream php {
server unix:/run/php-fpm/php-fpm.sock;
}
location ~ \.php$ {
try_files $uri = 404;
fastcgi_pass php;
fastcgi_index index.php;
include fastcgi.conf;
} The benefit of the upstream block is that you can configure more than one server/port/service as upstream and distribute the traffic on them, for example like this: upstream php {
server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
server 192.68.1.2 weight=5;
server unix:/run/php-fpm/php-fpm.sock;
} You can find more information about this in the nginx documentation: http://nginx.org/en/docs/http/ngx_http_upstream_module.html | {
"source": [
"https://serverfault.com/questions/546349",
"https://serverfault.com",
"https://serverfault.com/users/40916/"
]
} |
546,358 | I have a server with a non-paged memory issue. Usage slowly climbs until it is exhausted and the server stops serving web pages as IIS cant get enough non paged memory. This a 32bit windows 2003 server. Task manager shows no suspicious activity and all the running processes there are consuming 'normal' amounts of NP memory and they all stay rock steady over time. The tag showing all the usage is 'Even' which is for the Event Viewer according to the tag list. No other warnings or errors are showing up in the event logs except when the NP memory is exhausted and IIS starts to complain. Server runs MSSQL, IIS and hMailserver, nothing else. Anyone have any ideas or seen this before..? I'd have somewhere to go if it was a tag associated with a driver like a network card or something but Event Viewer, where do I go with that! Poolmon output for EVEN Tag Type Allocs Frees Diff Bytes Per Alloc
Even NonP 65563201 ( 948) 64585254 ( 861) 980124 47049280 ( -2384) 48 Thanks | location is used to match expressions and create rules for them. upstream defines servers that can be referenced to. In your example this means if you want to get an equivalent for location ~ \.php$ {
try_files $uri = 404;
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
} , you would need upstream php {
server unix:/run/php-fpm/php-fpm.sock;
}
location ~ \.php$ {
try_files $uri = 404;
fastcgi_pass php;
fastcgi_index index.php;
include fastcgi.conf;
} The benefit of the upstream block is that you can configure more than one server/port/service as upstream and distribute the traffic on them, for example like this: upstream php {
server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
server 192.68.1.2 weight=5;
server unix:/run/php-fpm/php-fpm.sock;
} You can find more information about this in the nginx documentation: http://nginx.org/en/docs/http/ngx_http_upstream_module.html | {
"source": [
"https://serverfault.com/questions/546358",
"https://serverfault.com",
"https://serverfault.com/users/195146/"
]
} |
547,544 | I have a set of 100 log files, compressed using gzip. I need to find all lines matching a given expression. I'd use grep, but of course, that's a bit of a nightmare because I'll have to unzip all files, one by one, grep them and delete the unzipped version, because they wouldn't all fit on my server if they were all unzipped. Does anyone have a little trick for getting that done quickly? | You might have a look at zgrep . >$ zgrep -h
grep through gzip files
usage: zgrep [grep_options] pattern [files] | {
"source": [
"https://serverfault.com/questions/547544",
"https://serverfault.com",
"https://serverfault.com/users/16493/"
]
} |
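A couple of hedged usage examples for zgrep as suggested above; paths and patterns are placeholders.
# Search all compressed logs in place, printing the file name with each match:
zgrep -H 'some expression' /var/log/myapp/*.gz

# zcat piped into grep does the same job if zgrep is not installed:
zcat /var/log/myapp/*.gz | grep -c 'some expression'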
548,212 | I'm logged into a Linux server. I think it's a Red Hat distribution. The commands a2ensite and a2dissite are not available.
In the /etc/httpd directory, I don't see any mention of sites-enabled or sites-available . I'm pretty sure the site is currently executing the directives of /etc/httpd/conf.d/ssl.conf . I would like to do a a2dissite ssl , then reload the Web Server. How do I achieve this? | a2ensite etc. are commands available in Debian-based systems that are not available in RH-based distributions. What they do is manage symbolic links from configuration file parts in /etc/apache2/sites-available and mods-available to /etc/apache2/sites-enabled and so on. E.g. if you have a vhost defined in a config file /etc/apache2/sites-available/example.com , a2ensite example.com would create a symlink to this file in /etc/apache2/sites-enabled and reload the apache config. The main Apache config file contains lines that include every file in /etc/apache2/sites-enabled and thus, they get incorporated into the runtime config. It's quite easy to mimic this structure in RHEL. Add two directories in /etc/httpd/ named sites-enabled and sites-available and add your vhosts into files in sites-available . After that, add a line include ../sites-enabled to /etc/httpd/conf/httpd.conf . You can now create symlinks to sites-enabled and then reload the config with service httpd reload or apachectl . | {
"source": [
"https://serverfault.com/questions/548212",
"https://serverfault.com",
"https://serverfault.com/users/14896/"
]
} |
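A condensed sketch of the RHEL setup described above; the vhost file name is a placeholder and the Include path is resolved relative to the ServerRoot (/etc/httpd) on a stock install.
mkdir -p /etc/httpd/sites-available /etc/httpd/sites-enabled
echo 'Include sites-enabled/*.conf' >> /etc/httpd/conf/httpd.conf

# the equivalent of "a2ensite example.com":
ln -s /etc/httpd/sites-available/example.com.conf /etc/httpd/sites-enabled/
service httpd reload

# the equivalent of "a2dissite example.com":
rm /etc/httpd/sites-enabled/example.com.conf && service httpd reload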
548,228 | I'm planning to use haproxy to proxy a tls service that IS NOT http at the backend. Is there any reason why you'd use stud as the front facing app vs haproxy then stud? i.e. 1) INTERNET -> stud -> haproxy -> tls service over 2) INTERNET -> haproxy -> stud -> tls service | a2ensite etc. are commands available in Debian-based systems and that are not available in RH-based distributions. What they do is to manage symbolic links from configuration file parts in /etc/apache2/sites-available and mods-available to /etc/apache2/sites-enabled and so on. E.g. if you have a vhost defined in a config file /etc/apache2/sites-avaible/example.com , a2ensite example.com would create a symlink to this file in /etc/apache2/sites-enabled and reload the apache config. The main Apache config file contains lines that include every file in /etc/apache2/sites-enabled and thus, they get incorporated into the runtime config. It's quite easy to mimic this structure in RHEL. Add two directories in /etc/httpd/ named sites-enabled and sites-available and add your vhosts into files in sites-available . After that, add a line include ../sites-enabled to /etc/httpd/conf/httpd.conf . You can now create symlinks to sites-enabled and then reload the config with service httpd reload or apachectl . | {
"source": [
"https://serverfault.com/questions/548228",
"https://serverfault.com",
"https://serverfault.com/users/14631/"
]
} |
548,237 | This is on a fresh computer (super computer actually). It got to me with 15T on the home mount and 50G on the root. I tried allocating 7T to root and resizing (since I'm putting a local yum repo on this machine as it has no internet access nor will it ever). I tried following the instructions here: Centos 6.3 disk space allocation but something went wrong and the home won't mount again. Instead I get from dmesg | tail: EXT4-fs (dm-2): bad geometry: block count 4294967295 exceeds size of device (1342177280 blocks) df -h nets this output: Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 7.0T 3.6G 6.6T 1% /
tmpfs 190G 216K 190G 1% /dev/shm
/dev/sda1 485M 38M 422M 9% /boot I didn't have any files on /dev/mapper/VolGroup-lv_home. Will simply running mke2fs fix it to be mountable? What sort of options should I run it with. I've never resized volumes before or used mke2fs. I don't want to make this mess worse. | a2ensite etc. are commands available in Debian-based systems and that are not available in RH-based distributions. What they do is to manage symbolic links from configuration file parts in /etc/apache2/sites-available and mods-available to /etc/apache2/sites-enabled and so on. E.g. if you have a vhost defined in a config file /etc/apache2/sites-avaible/example.com , a2ensite example.com would create a symlink to this file in /etc/apache2/sites-enabled and reload the apache config. The main Apache config file contains lines that include every file in /etc/apache2/sites-enabled and thus, they get incorporated into the runtime config. It's quite easy to mimic this structure in RHEL. Add two directories in /etc/httpd/ named sites-enabled and sites-available and add your vhosts into files in sites-available . After that, add a line include ../sites-enabled to /etc/httpd/conf/httpd.conf . You can now create symlinks to sites-enabled and then reload the config with service httpd reload or apachectl . | {
"source": [
"https://serverfault.com/questions/548237",
"https://serverfault.com",
"https://serverfault.com/users/77753/"
]
} |
548,537 | I get the following error when I run bower: bower ESUDO Cannot be run with sudo Thing is, I'm not running bower with sudo. The command I run is: bower install foo or bower search cats I am logged in as root to an Ubuntu 12.04 server but I am not using sudo. What gives? How do I get bower working? | I had the same problem. All you have to do is add --allow-root to your command. See this issue. | {
"source": [
"https://serverfault.com/questions/548537",
"https://serverfault.com",
"https://serverfault.com/users/29690/"
]
} |
548,591 | I'm sure this has been asked before, but I can't find a solution that works. A website has switched CMS services, but has the same domain, how do I set up an nginx rewrite for a single page? E.g. Old Page http://sitedomain.co.uk/content/unique-page-name New page http://sitedomain.co.uk/new-name/unique-page-name Please note , I don't want everything within the content page to be redirected, but literally just the url mentioned above. I have about 9 redirects to set up, non of which fit in a pattern. Thanks! Edit: I found this solution, which seems to be working, except for the fact that it redirects without a slash: if ( $request_filename ~ content/unique-page-name/ ) {
rewrite ^ http://sitedomain.co.uk/new-name/unique-page-name/? permanent;
} But this redirects to: http://sitedomain.co.uknew-name/unique-page-name/ | Direct quote from Pitfalls and Common Mistakes: Taxing Rewrites : By using the return directive we can completely avoid evaluation of regular expression. Please use return instead of rewrite for permanent redirects. Here's my approach to this use-case... location = /content/unique-page-name {
return 301 /new-name/unique-page-name;
} | {
"source": [
"https://serverfault.com/questions/548591",
"https://serverfault.com",
"https://serverfault.com/users/148734/"
]
} |
548,736 | I have a Ubuntu 12.04 server which sometimes dies completely - no SSH, no ping, nothing until it is physically rebooted. After the reboot, I see in syslog that the oom-killer killed, well, pretty much everything. There's a lot of detailed memory usage information in them. How do I read these logs to see what caused the OOM issue? The server has far more memory than it needs, so it shouldn't be running out of memory. Oct 25 07:28:04 nldedip4k031 kernel: [87946.529511] oom_kill_process: 9 callbacks suppressed
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529514] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529516] irqbalance cpuset=/ mems_allowed=0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529518] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529519] Call Trace:
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529525] [] dump_header.isra.6+0x85/0xc0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529528] [] oom_kill_process+0x5c/0x80
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529530] [] out_of_memory+0xc5/0x1c0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529532] [] __alloc_pages_nodemask+0x72c/0x740
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529535] [] __get_free_pages+0x1c/0x30
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529537] [] get_zeroed_page+0x12/0x20
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529541] [] fill_read_buffer.isra.8+0xaa/0xd0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529543] [] sysfs_read_file+0x7d/0x90
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529546] [] vfs_read+0x8c/0x160
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529548] [] ? fill_read_buffer.isra.8+0xd0/0xd0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529550] [] sys_read+0x3d/0x70
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529554] [] sysenter_do_call+0x12/0x28
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529555] Mem-Info:
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529556] DMA per-cpu:
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529557] CPU 0: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529558] CPU 1: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529560] CPU 2: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529561] CPU 3: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529562] CPU 4: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529563] CPU 5: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529564] CPU 6: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529565] CPU 7: hi: 0, btch: 1 usd: 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529566] Normal per-cpu:
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529567] CPU 0: hi: 186, btch: 31 usd: 179
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529568] CPU 1: hi: 186, btch: 31 usd: 182
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529569] CPU 2: hi: 186, btch: 31 usd: 132
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529570] CPU 3: hi: 186, btch: 31 usd: 175
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529571] CPU 4: hi: 186, btch: 31 usd: 91
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529572] CPU 5: hi: 186, btch: 31 usd: 173
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529573] CPU 6: hi: 186, btch: 31 usd: 159
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529574] CPU 7: hi: 186, btch: 31 usd: 164
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529575] HighMem per-cpu:
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529576] CPU 0: hi: 186, btch: 31 usd: 165
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529577] CPU 1: hi: 186, btch: 31 usd: 183
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529578] CPU 2: hi: 186, btch: 31 usd: 185
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529579] CPU 3: hi: 186, btch: 31 usd: 138
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529580] CPU 4: hi: 186, btch: 31 usd: 155
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529581] CPU 5: hi: 186, btch: 31 usd: 104
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529582] CPU 6: hi: 186, btch: 31 usd: 133
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529583] CPU 7: hi: 186, btch: 31 usd: 170
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529586] active_anon:5523 inactive_anon:354 isolated_anon:0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529586] active_file:2815 inactive_file:6849119 isolated_file:0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529587] unevictable:0 dirty:449 writeback:10 unstable:0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529587] free:1304125 slab_reclaimable:104672 slab_unreclaimable:3419
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529588] mapped:2661 shmem:138 pagetables:313 bounce:0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529591] DMA free:4252kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:4kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15756kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:11564kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1 all_unreclaimable? yes
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529594] lowmem_reserve[]: 0 869 32460 32460
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529599] Normal free:44052kB min:44216kB low:55268kB high:66324kB active_anon:0kB inactive_anon:0kB active_file:616kB inactive_file:568kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:407124kB slab_unreclaimable:13672kB kernel_stack:992kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:2083 all_unreclaimable? yes
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529602] lowmem_reserve[]: 0 0 252733 252733
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529606] HighMem free:5168196kB min:512kB low:402312kB high:804112kB active_anon:22092kB inactive_anon:1416kB active_file:10640kB inactive_file:27395920kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32349872kB mlocked:0kB dirty:1796kB writeback:40kB mapped:10640kB shmem:552kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529609] lowmem_reserve[]: 0 0 0 0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529611] DMA: 6*4kB 6*8kB 6*16kB 5*32kB 5*64kB 4*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4232kB
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529616] Normal: 297*4kB 180*8kB 119*16kB 73*32kB 67*64kB 47*128kB 35*256kB 13*512kB 5*1024kB 1*2048kB 1*4096kB = 44052kB
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529622] HighMem: 1*4kB 6*8kB 27*16kB 11*32kB 2*64kB 1*128kB 0*256kB 0*512kB 4*1024kB 1*2048kB 1260*4096kB = 5168196kB
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529627] 6852076 total pagecache pages
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529628] 0 pages in swap cache
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529629] Swap cache stats: add 0, delete 0, find 0/0
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529630] Free swap = 3998716kB
Oct 25 07:28:04 nldedip4k031 kernel: [87946.529631] Total swap = 3998716kB
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571914] 8437743 pages RAM
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571916] 8209409 pages HighMem
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571917] 159556 pages reserved
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571917] 6862034 pages shared
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571918] 123540 pages non-shared
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571919] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571927] [ 421] 0 421 709 152 3 0 0 upstart-udev-br
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571929] [ 429] 0 429 773 326 5 -17 -1000 udevd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571931] [ 567] 0 567 772 224 4 -17 -1000 udevd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571932] [ 568] 0 568 772 231 7 -17 -1000 udevd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571934] [ 764] 0 764 712 103 1 0 0 upstart-socket-
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571936] [ 772] 103 772 815 164 5 0 0 dbus-daemon
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571938] [ 785] 0 785 1671 600 1 -17 -1000 sshd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571940] [ 809] 101 809 7766 380 1 0 0 rsyslogd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571942] [ 869] 0 869 1158 213 3 0 0 getty
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571943] [ 873] 0 873 1158 214 6 0 0 getty
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571945] [ 911] 0 911 1158 215 3 0 0 getty
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571947] [ 912] 0 912 1158 214 2 0 0 getty
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571949] [ 914] 0 914 1158 213 1 0 0 getty
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571950] [ 916] 0 916 618 86 1 0 0 atd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571952] [ 917] 0 917 655 226 3 0 0 cron
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571954] [ 948] 0 948 902 159 3 0 0 irqbalance
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571956] [ 993] 0 993 1145 363 3 0 0 master
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571957] [ 1002] 104 1002 1162 333 1 0 0 qmgr
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571959] [ 1016] 0 1016 730 149 2 0 0 mdadm
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571961] [ 1057] 0 1057 6066 2160 3 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571963] [ 1086] 0 1086 1158 213 3 0 0 getty
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571965] [ 1088] 33 1088 6191 1517 0 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571967] [ 1089] 33 1089 6191 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571969] [ 1090] 33 1090 6175 1451 3 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571971] [ 1091] 33 1091 6191 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571972] [ 1092] 33 1092 6191 1451 0 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571974] [ 1109] 33 1109 6191 1517 0 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571976] [ 1151] 33 1151 6191 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571978] [ 1201] 104 1201 1803 652 1 0 0 tlsmgr
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571980] [ 2475] 0 2475 2435 812 0 0 0 sshd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571982] [ 2494] 0 2494 1745 839 1 0 0 bash
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571984] [ 2573] 0 2573 3394 1689 0 0 0 sshd
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571986] [ 2589] 0 2589 5014 457 3 0 0 rsync
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571988] [ 2590] 0 2590 7970 522 1 0 0 rsync
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571990] [ 2652] 104 2652 1150 326 5 0 0 pickup
Oct 25 07:28:04 nldedip4k031 kernel: [87946.571992] Out of memory: Kill process 421 (upstart-udev-br) score 1 or sacrifice child
Oct 25 07:28:04 nldedip4k031 kernel: [87946.572407] Killed process 421 (upstart-udev-br) total-vm:2836kB, anon-rss:156kB, file-rss:452kB
Oct 25 07:28:04 nldedip4k031 kernel: [87946.573107] init: upstart-udev-bridge main process (421) killed by KILL signal
Oct 25 07:28:04 nldedip4k031 kernel: [87946.573126] init: upstart-udev-bridge main process ended, respawning
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461570] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461573] irqbalance cpuset=/ mems_allowed=0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461576] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461578] Call Trace:
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461585] [] dump_header.isra.6+0x85/0xc0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461588] [] oom_kill_process+0x5c/0x80
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461591] [] out_of_memory+0xc5/0x1c0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461595] [] __alloc_pages_nodemask+0x72c/0x740
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461599] [] __get_free_pages+0x1c/0x30
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461602] [] get_zeroed_page+0x12/0x20
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461606] [] fill_read_buffer.isra.8+0xaa/0xd0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461609] [] sysfs_read_file+0x7d/0x90
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461613] [] vfs_read+0x8c/0x160
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461616] [] ? fill_read_buffer.isra.8+0xd0/0xd0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461619] [] sys_read+0x3d/0x70
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461624] [] sysenter_do_call+0x12/0x28
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461626] Mem-Info:
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461628] DMA per-cpu:
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461629] CPU 0: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461631] CPU 1: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461633] CPU 2: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461634] CPU 3: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461636] CPU 4: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461638] CPU 5: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461639] CPU 6: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461641] CPU 7: hi: 0, btch: 1 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461642] Normal per-cpu:
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461644] CPU 0: hi: 186, btch: 31 usd: 61
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461646] CPU 1: hi: 186, btch: 31 usd: 49
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461647] CPU 2: hi: 186, btch: 31 usd: 8
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461649] CPU 3: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461651] CPU 4: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461652] CPU 5: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461654] CPU 6: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461656] CPU 7: hi: 186, btch: 31 usd: 30
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461657] HighMem per-cpu:
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461658] CPU 0: hi: 186, btch: 31 usd: 4
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461660] CPU 1: hi: 186, btch: 31 usd: 204
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461662] CPU 2: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461663] CPU 3: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461665] CPU 4: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461667] CPU 5: hi: 186, btch: 31 usd: 31
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461668] CPU 6: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461670] CPU 7: hi: 186, btch: 31 usd: 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461674] active_anon:5441 inactive_anon:412 isolated_anon:0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461674] active_file:2668 inactive_file:6922842 isolated_file:0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461675] unevictable:0 dirty:836 writeback:0 unstable:0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461676] free:1231664 slab_reclaimable:105781 slab_unreclaimable:3399
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461677] mapped:2649 shmem:138 pagetables:313 bounce:0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461682] DMA free:4248kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15756kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:11560kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5687 all_unreclaimable? yes
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461686] lowmem_reserve[]: 0 869 32460 32460
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461693] Normal free:44184kB min:44216kB low:55268kB high:66324kB active_anon:0kB inactive_anon:0kB active_file:20kB inactive_file:1096kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:4kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:411564kB slab_unreclaimable:13592kB kernel_stack:992kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1816 all_unreclaimable? yes
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461697] lowmem_reserve[]: 0 0 252733 252733
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461703] HighMem free:4878224kB min:512kB low:402312kB high:804112kB active_anon:21764kB inactive_anon:1648kB active_file:10652kB inactive_file:27690268kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32349872kB mlocked:0kB dirty:3340kB writeback:0kB mapped:10592kB shmem:552kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461708] lowmem_reserve[]: 0 0 0 0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461711] DMA: 8*4kB 7*8kB 6*16kB 5*32kB 5*64kB 4*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4248kB
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461719] Normal: 272*4kB 178*8kB 76*16kB 52*32kB 42*64kB 36*128kB 23*256kB 20*512kB 7*1024kB 2*2048kB 1*4096kB = 44176kB
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461727] HighMem: 1*4kB 45*8kB 31*16kB 24*32kB 5*64kB 3*128kB 1*256kB 2*512kB 4*1024kB 2*2048kB 1188*4096kB = 4877852kB
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461736] 6925679 total pagecache pages
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461737] 0 pages in swap cache
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461739] Swap cache stats: add 0, delete 0, find 0/0
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461740] Free swap = 3998716kB
Oct 25 07:28:34 nldedip4k031 kernel: [87976.461741] Total swap = 3998716kB
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524951] 8437743 pages RAM
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524953] 8209409 pages HighMem
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524954] 159556 pages reserved
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524955] 6936141 pages shared
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524956] 124602 pages non-shared
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524957] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524966] [ 429] 0 429 773 326 5 -17 -1000 udevd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524968] [ 567] 0 567 772 224 4 -17 -1000 udevd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524971] [ 568] 0 568 772 231 7 -17 -1000 udevd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524973] [ 764] 0 764 712 103 3 0 0 upstart-socket-
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524976] [ 772] 103 772 815 164 2 0 0 dbus-daemon
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524979] [ 785] 0 785 1671 600 1 -17 -1000 sshd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524981] [ 809] 101 809 7766 380 1 0 0 rsyslogd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524983] [ 869] 0 869 1158 213 3 0 0 getty
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524986] [ 873] 0 873 1158 214 6 0 0 getty
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524988] [ 911] 0 911 1158 215 3 0 0 getty
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524990] [ 912] 0 912 1158 214 2 0 0 getty
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524992] [ 914] 0 914 1158 213 1 0 0 getty
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524995] [ 916] 0 916 618 86 1 0 0 atd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524997] [ 917] 0 917 655 226 3 0 0 cron
Oct 25 07:28:34 nldedip4k031 kernel: [87976.524999] [ 948] 0 948 902 159 5 0 0 irqbalance
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525002] [ 993] 0 993 1145 363 3 0 0 master
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525004] [ 1002] 104 1002 1162 333 1 0 0 qmgr
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525007] [ 1016] 0 1016 730 149 2 0 0 mdadm
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525009] [ 1057] 0 1057 6066 2160 3 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525012] [ 1086] 0 1086 1158 213 3 0 0 getty
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525014] [ 1088] 33 1088 6191 1517 0 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525017] [ 1089] 33 1089 6191 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525019] [ 1090] 33 1090 6175 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525021] [ 1091] 33 1091 6191 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525024] [ 1092] 33 1092 6191 1451 0 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525026] [ 1109] 33 1109 6191 1517 0 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525029] [ 1151] 33 1151 6191 1451 1 0 0 /usr/sbin/apach
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525031] [ 1201] 104 1201 1803 652 1 0 0 tlsmgr
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525033] [ 2475] 0 2475 2435 812 0 0 0 sshd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525036] [ 2494] 0 2494 1745 839 1 0 0 bash
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525038] [ 2573] 0 2573 3394 1689 3 0 0 sshd
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525040] [ 2589] 0 2589 5014 457 3 0 0 rsync
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525043] [ 2590] 0 2590 7970 522 1 0 0 rsync
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525045] [ 2652] 104 2652 1150 326 5 0 0 pickup
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525048] [ 2847] 0 2847 709 89 0 0 0 upstart-udev-br
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525050] Out of memory: Kill process 764 (upstart-socket-) score 1 or sacrifice child
Oct 25 07:28:34 nldedip4k031 kernel: [87976.525484] Killed process 764 (upstart-socket-) total-vm:2848kB, anon-rss:204kB, file-rss:208kB
Oct 25 07:28:34 nldedip4k031 kernel: [87976.526161] init: upstart-socket-bridge main process (764) killed by KILL signal
Oct 25 07:28:34 nldedip4k031 kernel: [87976.526180] init: upstart-socket-bridge main process ended, respawning
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439671] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439674] irqbalance cpuset=/ mems_allowed=0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439676] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439678] Call Trace:
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439684] [] dump_header.isra.6+0x85/0xc0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439686] [] oom_kill_process+0x5c/0x80
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439688] [] out_of_memory+0xc5/0x1c0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439691] [] __alloc_pages_nodemask+0x72c/0x740
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439694] [] __get_free_pages+0x1c/0x30
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439696] [] get_zeroed_page+0x12/0x20
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439699] [] fill_read_buffer.isra.8+0xaa/0xd0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439702] [] sysfs_read_file+0x7d/0x90
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439704] [] vfs_read+0x8c/0x160
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439707] [] ? fill_read_buffer.isra.8+0xd0/0xd0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439709] [] sys_read+0x3d/0x70
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439712] [] sysenter_do_call+0x12/0x28
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439714] Mem-Info:
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439714] DMA per-cpu:
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439716] CPU 0: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439717] CPU 1: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439718] CPU 2: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439719] CPU 3: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439720] CPU 4: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439721] CPU 5: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439722] CPU 6: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439723] CPU 7: hi: 0, btch: 1 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439724] Normal per-cpu:
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439725] CPU 0: hi: 186, btch: 31 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439726] CPU 1: hi: 186, btch: 31 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439727] CPU 2: hi: 186, btch: 31 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439728] CPU 3: hi: 186, btch: 31 usd: 0
Oct 25 07:28:44 nldedip4k031 kernel: [87986.439729] CPU 4: hi: 186, btch: 31 usd: 0
Oct 25 07:33:48 nldedip4k031 kernel: imklog 5.8.6, log source = /proc/kmsg started.
Oct 25 07:33:48 nldedip4k031 rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="2880" x-info="http://www.rsyslog.com"] start
Oct 25 07:33:48 nldedip4k031 rsyslogd: rsyslogd's groupid changed to 103
Oct 25 07:33:48 nldedip4k031 rsyslogd: rsyslogd's userid changed to 101
Oct 25 07:33:48 nldedip4k031 rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ] | The OOM killer suggests that in fact, you've run out of memory. If you say it's got more memory than it needs then maybe some system event is creating a memory leak somewhere, but the OOM killer will not tell why there is a memory leak, only that it's run out of memory and now tries to kill the least important things (based on oom_score ). And if the case is that there is a memory leak, then maybe the oom-killer will only kill procs so that the rogue one can allocate more and more memory. So what I would do in this case is: configure kdump , which will create a crash dump vmcore after a kernel panic (it's described more here ), and set the vm.panic_on_oom=1 kernel parameter, which will cause a kernel panic should the machine run out of memory. Next time you get a panic, you can open up the vmcore file created by kdump, and look at the process table, and it will reveal the culprit. | {
"source": [
"https://serverfault.com/questions/548736",
"https://serverfault.com",
"https://serverfault.com/users/108667/"
]
} |
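A hedged sketch of the two steps from the answer above on Ubuntu 12.04; the package and tool names are the usual Ubuntu ones and should be verified for your release.
# 1. Configure kdump so a vmcore is written when the kernel panics
apt-get install linux-crashdump      # pulls in kdump-tools and a crash kernel
# review /etc/default/kdump-tools, then reboot so the crash kernel is loaded
kdump-config show                    # should report it is ready to kdump

# 2. Make an out-of-memory condition panic (and therefore dump)
sysctl -w vm.panic_on_oom=1
echo 'vm.panic_on_oom = 1' >> /etc/sysctl.conf   # persist across reboots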
548,888 | This is a Canonical Question about solving IPv4 subnet conflicts between a VPN client's local network and one across the VPN link from it. After connecting to a remote location via OpenVPN, clients try to access a server on a network that exists on a subnet such as 192.0.2.0/24. However, sometimes, the network on the client's LAN has the same subnet address: 192.0.2.0/24. Clients are unable to connect to the remote server via typing in its IP because of this conflict. They are unable to even access the public internet while connected to the VPN. The problem is that this subnet 192.0.2.0/24 needs to be routed by the VPN, but it also needs to be routed as the client's LAN. Does anyone know how to mitigate this issue? I have access to the OpenVPN server. | It is possible to solve this using NAT; it's just not very elegant. So under the assumption you couldn't solve this by having internal nets which have so uncommon network numbers as to never actually come into conflict, here's the principle: As both the local and remote subnet have identical network numbers, traffic from your client will never realize it has to go through the tunnel gateway to reach its destination. And even if we imagine it could, the situation would be the same for the remote host as it is about to send an answer. So stay with me and pretend that as of yet, there are no side issues as I write that for full connectivity, you would need to NAT both ends inside the tunnel so as to differentiate the hosts and allow for routing. Making some nets up here: Your office network uses 192.0.2.0/24 Your remote office uses 192.0.2.0/24 Your office network VPN gateway hides 192.0.2.0/24 hosts behind the NATed network number 198.51.100.0/24 Your remote office network VPN gateway hides 192.0.2.0/24 hosts behind the NATed network number 203.0.113.0/24 So inside the VPN tunnel, the office hosts are now 198.51.100.x and remote office hosts are 203.0.113.x. Let's furthermore pretend all hosts are mapped 1:1 in the NAT of their respective VPN gateways. An example: Your office network host 192.0.2.5/24 is statically mapped as 198.51.100.5/24 in the office vpn gateway NAT Your remote office network host 192.0.2.5/24 is statically mapped as 203.0.113.5/24 in the remote office vpn gateway NAT So when host 192.0.2.5/24 in the remote office wants to connect to the host with the same ip in the office network, it needs to do so using the address 198.51.100.5/24 as destination. The following happens: At the remote office, host 198.51.100.5 is a remote destination reached through the VPN and routed there. At the remote office, host 192.0.2.5 is masqueraded as 203.0.113.5 as the packet passes the NAT function. At the office, host 198.51.100.5 is translated to 192.0.2.5 as the packet passes the NAT function. At the office, return traffic to host 203.0.113.5 goes through the same process in the reverse direction. So whilst there is a solution, there are a number of issues which must be addressed for this to work in practice: The masqueraded IP must be used for remote connectivity; DNS gets complex. This is because endpoints must have a unique IP address, as viewed from the connecting host. A NAT function must be implemented both ends as part of the VPN solution. Statically mapping hosts is a must for reachability from the other end. If traffic is unidirectional, only the receiving end needs static mapping of all involved hosts; the client can get away with being dynamically NATed if desirable. 
If traffic is bidirectional, both ends need static mapping of all involved hosts. Internet connectivity must not be impaired regardless of split- or non-split VPN. If you can't map 1-to-1 it gets messy; careful bookkeeping is a necessity. Naturally one runs the risk of using NAT addresses which also turn out to be duplicates :-) So solving this needs careful design. If your remote office really consists of road warriors you add a layer of problems in that: they never know beforehand when they end up on overlapping net ids. the remote office gateway NAT would need to be implemented on their laptops. the office gateway would need two VPNs, one NAT-free and one NATed, to cover both scenarios. Otherwise, in the event someone were to pick one of the subnets you chose for the NAT method, things wouldn't work . Depending on your VPN client you might be able to automatically select one VPN or the other depending on the network address of the local segment. Observe that all mentioning of NAT in this context denotes a NAT function which so to speak takes place within the tunnel perspective. Processwise, the static NAT mapping must be done before the packet "enters" the tunnel, i.e. before it is encapsulated in the transport packet which is to take it across the internet to the other VPN gateway. This means that one must not confuse the public ip addresses of the VPN gateways (and which in practice may also be NAT:ed, but then wholly outside the perspective of transport to the remote site through VPN) with the unique private addresses used as masquerades for the duplicate private addresses. If this abstraction is difficult to picture, an illustration of how NAT may be physically separated from the VPN gateway for this purpose is made here: Using NAT in Overlapping Networks . Condensing the same picture to a logical separation inside one machine, capable of performing both the NAT and VPN gateway functionality, is simply taking the same example one step further, but does place greater emphasis on the capabilities of the software at hand. Hacking it together with for example OpenVPN and iptables and posting the solution here would be a worthy challenge. Softwarewise it certainly is possible: PIX/ASA 7.x and Later: LAN-to-LAN IPsec VPN with Overlapping Networks Configuration Example and: Configuring an IPSec Tunnel Between Routers with Duplicate LAN Subnets The actual implementation therefore depends on a lot of factors, the operating systems involved, associated software and its possibilities not the least. But it certainly is doable. You would need to think and experiment a bit. I learned this from Cisco as seen by the links. | {
"source": [
"https://serverfault.com/questions/548888",
"https://serverfault.com",
"https://serverfault.com/users/196357/"
]
} |
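Since the answer above leaves the OpenVPN-plus-iptables wiring as an exercise, here is a rough, untested sketch of the 1:1 mapping using the iptables NETMAP target and the example prefixes from the answer; tun0 is an assumed tunnel interface name.
# Office VPN gateway: present local 192.0.2.0/24 as 198.51.100.0/24 inside the tunnel
iptables -t nat -A PREROUTING  -i tun0 -d 198.51.100.0/24 -j NETMAP --to 192.0.2.0/24
iptables -t nat -A POSTROUTING -o tun0 -s 192.0.2.0/24 -j NETMAP --to 198.51.100.0/24

# Remote office gateway: same idea with its own masquerade prefix
iptables -t nat -A PREROUTING  -i tun0 -d 203.0.113.0/24 -j NETMAP --to 192.0.2.0/24
iptables -t nat -A POSTROUTING -o tun0 -s 192.0.2.0/24 -j NETMAP --to 203.0.113.0/24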
549,075 | I have a PEM file which I add to a running ssh-agent: $ file query.pem
query.pem: PEM RSA private key
$ ssh-add ./query.pem
Identity added: ./query.pem (./query.pem)
$ ssh-add -l | grep query
2048 ef:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX ./query.pem (RSA) How can I get the key's fingerprint (which I see in ssh-agent) directly from the file? I know ssh-keygen -l -f some_key works for "normal" ssh keys, but not for PEM files. If I try ssh-keygen on the .pem file, I get: $ ssh-keygen -l -f ./query.pem
key_read: uudecode PRIVATE KEY----- failed
key_read: uudecode PRIVATE KEY----- failed
./query.pem is not a public key file. This key starts with: -----BEGIN RSA PRIVATE KEY-----
MIIEp.... etc. as opposed to a "regular" private key, which looks like: -----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,E15F2.... etc. | If you want to retrieve the fingerprint of your lost public key file, you can recover it from the private key file : $ ssh-keygen -yf path/to/private_key_file > path/to/store/public_key_file Then you are able to ascertain the public fingerprint: $ ssh-keygen -lf path/to/store/public_key_file
2048 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX user@host (RSA) On some newer systems, this prints the SHA256 fingerprint of the key. You can print the MD5 fingerprint of the key (the colon form) using option -E : $ ssh-keygen -E md5 -lf path/to/store/public_key_file
2048 MD5:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx user@host (RSA) Or as one command line : $ ssh-keygen -yf /etc/ssh/ssh_host_ecdsa_key | ssh-keygen -E md5 -lf -
2048 MD5:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx user@host (RSA) | {
"source": [
"https://serverfault.com/questions/549075",
"https://serverfault.com",
"https://serverfault.com/users/45379/"
]
} |
549,200 | I recently pushed a major update to a site and I'm having an issue where some people can't log in because their browser is loading old javascript files. Some of the things I have done include: Cache busting all javascript files Set sendfile off in nginx.conf Set expires 1s in mysite.conf Explicitly set Cache-Control header: add_header Cache-Control no-cache; Below are my conf files for nginx. Any help would be much appreciated. /etc/nginx/sites-enabled/mysite.conf proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m;
server {
listen 80;
server_name mysite.com;
return 301 https://www.mysite.com$request_uri;
}
server {
# listen for connections on all hostname/IP and at TCP port 80
listen *:80;
# name-based virtual hosting
server_name www.mysite.com;
# location of the web root for all static files (this should be changed for local development)
root /var/mysite.com/static;
# redirect http requests to https
if ($http_x_forwarded_proto = "http") {
rewrite ^/(.*)$ https://www.mysite.com/$1 permanent;
}
# error pages
error_page 403 /errors/403.html;
error_page 404 /errors/404.html;
error_page 408 /errors/408.html;
error_page 500 502 503 504 /errors/500.html;
# error and access out
error_log /var/log/nginx/error.mysite.log;
access_log /var/log/nginx/access.mysite.log;
# use Nginx's gzip static module
gzip_static on;
gzip_types application/x-javascript text/css;
location / {
# redefine and add some request header lines which will be passed along to the node server
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
# set the address of the node proxied server
proxy_pass http://127.0.0.1:9001;
# forbid all proxy_redirect directives at this level
proxy_redirect off;
}
# do a regular expression match for any files ending in the list of extensions
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|xml|html|htm)$ {
# clear all access_log directives for the current level
access_log off;
add_header Cache-Control no-cache;
# set the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years
expires 1s;
}
} /etc/nginx/nginx.conf user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile off;
tcp_nopush off;
tcp_nodelay off;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
} | Have you tried manually deleting everything that is in your cache? This is typically /var/cache/nginx . I believe that having add_header Cache-Control no-cache; set should keep things from being cached, but perhaps you have something that was cached in there before you set that? | {
"source": [
"https://serverfault.com/questions/549200",
"https://serverfault.com",
"https://serverfault.com/users/95568/"
]
} |
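A shell sketch of the cache-clearing suggestion in the answer to 549,200 above. The cache path comes from the proxy_cache_path line in the question; the test URL and asset path are placeholders, so adjust both to the real site.
# stop nginx so no worker holds cache files open, purge the proxy cache, then start it again
sudo service nginx stop
sudo rm -rf /var/cache/nginx/*
sudo service nginx start
# confirm that a static asset now returns the intended caching headers
curl -sI https://www.mysite.com/js/app.js | grep -iE 'cache-control|expires'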
549,253 | An SD (SDHC) card installed in an HP ProLiant DL380p Gen8 server running VMware ESXi just failed :( I encountered some ominous looking messages on the vCenter console and in the HP ProLiant ILO event log... Lost connectivity to the device ... backing the boot filesystem. As a
result, host configuration changes will not be saved to persistent
storage. Embedded Flash/SD-CARD: Error writing media 0, physical block 848880:
Stack Exception. VMware advocates the use of USB and SD (SDHC) boot devices for ESXi. It was one of the main reasons the smaller footprint ESXi was developed (versus the older ESX). I've spent much time highlighting the differences between ESXi's installable and embedded modes to coworkers and clients. However, these failures do seem to happen. In this case, this is my third instance. Luckily, this is a vSphere cluster with SAN storage. What steps should be taken to remediate this failure? | Here's the process I used to resolve this: VMware ESXi can be installed in an embedded mode or an installable mode. As outlined here , the installation mode is determined by the destination media and the size of the volume available to the ESXi installer. USB, SDHC or any device less than 5GB in size: Embedded Hard drives/volumes greater than or equal to 5GB in size: Installable One of the unique attributes of running ESXi in embedded mode is that the OS is loaded into RAM and only touches the USB/SD device hourly during normal operation. In my situation, the system continued to operate, even with a failed SDHC device. The error message I received in the vCenter interface indicated that configuration changes would not be saved , but the cluster was still usable. I left the system in this state for several days until I could get to the datacenter to replace the SD card. With regard to steps to take following a failure of a USB or SD device, it is important to extract and save a copy of your host's settings!! This is easily accomplished via PowerCLI or the vSphere CLI . I used PowerCLI running from the vCenter server: Get-VMHostFirmware -VMHost 10.10.8.22 -BackupConfiguration -DestinationPath C:\Users\ewwhite\Downloads Following that, I evacuated all virtual machines from the affected host and placed it in maintenance mode. The host was then shut down, the SDHC card replaced with a new device, and I installed ESXi again. Once the host was up again with a fresh ESXi install, I made the bare minimum configuration changes needed to make the host visible on the network; set IP information, vlan info and password. I reloaded the saved configuration to the host via PowerCLI... For this step, I used: Set-VMHostFirmware -VMHost 10.10.8.22 -Restore -SourcePath c:\Users\ewwhite\configBundle-10.10.8.22.tgz -HostUser root -HostPassword YoMama!! Restoring the configuration forces a host reboot. Once up again, I was able to issue a reconnect to rejoin the host in vCenter and exit maintenance mode. If PowerCLI not available, the ESXi shell commands look like: vim-cmd hostsvc/firmware/backup_config This produces a web link that you'll be able to browse to and download a tarball of the host's configuration. You can SCP a configuration file to a host and use the following to restore the settings. vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz That's all! | {
"source": [
"https://serverfault.com/questions/549253",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
549,297 | I have a Windows Server 2012 VM running on Windows Azure. I want to enable the ability for 2 simultaneous administrative sessions over Remote Desktop. This is permitted under the EULA for Windows Server 2012. This is not the same thing as the fully-blown Terminal Services (Remote Desktop Services) feature . In Windows Server 2000 and 2003, multiple concurrent sessions (up to a limit of 2, plus the root /console session) were enabled by default (such that logging-in via RDP without logging-out first would create a new session rather than reconnecting to the old session). In Server 2008 and later it uses single-sessions by default, as this simplifies administration (as most people want to connect to old sessions). In Windows Server 2008 R2, you can add the MMC snap-ins for Remote Desktop Host Configuration which allows you to re-enable concurrent sessions. However, in Server 2012, after adding the Remote Administration snap-ins from Server Manager it seems the Remote Desktop Host Configuration snap-in has been removed. How can I re-enable the multiple concurrent sessions for Remote Desktop for Administration in Windows Server 2012? | There is no more /console RDP switch since Windows Vista. Yes, the Remote Desktop Services mmc snapins that you were used to in 2008 have been removed. A Windows license grants you two "administrative" simultaneous remote desktop sessions before you need to install the Remote Desktop Services role with CALs. There is no "2 administrative connections +1 console (which would make 3 simultaneous interactive sessions)" though. It's just two. You can use the /admin switch with the Remote Desktop Client to avoid using up CALs when the RDS Session Host role is installed, but you can only have two admin connections at a time regardless. From this Microsoft article which does a great job of explaining: At any point in time, there can be two active remote administration sessions. To start a remote administration session, you must be a member of the Administrators group on the server to which you are connecting. To RDP to a Windows Server 2012 VM hosted on Azure, you need to ensure that you have opened the endpoint in the Azure portal (think of it like a firewall ACL) in Azure, and also make sure RDP (port 3389-in) is allowed through the Windows Firewall as well. Then you need to make sure you're logging in with a user account who has 'Remote Desktop Users' privileges or better. Next, disable the setting Restrict Remote Desktop Services users to a single Remote Desktop Services session by using the Group Policy Object Editor MMC-snapin to edit your Local Policy. It's under Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections . Run gpupdate after you make changes to the policy to apply them immediately. I have a Server 2012 VM hosted on Azure, and I just followed the above steps, and now I am logged in twice, interactively, as the same user. | {
"source": [
"https://serverfault.com/questions/549297",
"https://serverfault.com",
"https://serverfault.com/users/20707/"
]
} |
549,298 | I created an RSA keypair for an SSL certificate and stored the private key in /etc/ssl/private/server.key . Unfortunately this was the only copy of the private key that I had. Then I accidentally overwrote the file on disk (yes, I know). Apache is still running and still serving SSL requests, leading me to believe that there may be hope in recovering the private key. (Perhaps there is a symbolic link somewhere in /proc or something?) This server is running Ubuntu 12.04 LTS. | SUCCESS! I was able to retrieve the private key. But it wasn't easy. Here's what you need to do: Make sure you do not restart the server or Apache. The game is over at that point. That also means making sure that no monitoring services restart Apache. Grab this file - source code for a tool named passe-partout . Extract the source code and adjust line 9 of Makefile.main to read: $(CC) $(CFLAGS) -o $@ $(OBJS) $(LDFLAGS) (Notice that the $(OBJS) and $(LDFLAGS) are reversed in order.) Run ./build.sh . Grab the PID of Apache using: service apache2 status Run the passe-partout command as root: sudo passe-partout [PID] ...where [PID] is the value you retrieved in step #5. If the program succeeds, your current directory will have a bunch of extra keys: you@server:~# ls
id_rsa-0.key id_rsa-1.key id_rsa-2.key If all went well (and hopefully it did), one of those keys is the one you need. However, if you had more than one certificate/keyfile in use, then you need to figure out which one it is. Here's how you do that: First grab a copy of the certificate that matches the signed key. Assuming the file is named server.crt , run the following command: openssl x509 -noout -modulus -in server.crt | openssl md5 This will output a value that you will need to match against each of the keys. For each key, run the following command: openssl rsa -noout -modulus -in id_rsa-0.key | openssl md5 If one of them matches, you've found the key. Credit: this article pointed me to passe-partout. | {
"source": [
"https://serverfault.com/questions/549298",
"https://serverfault.com",
"https://serverfault.com/users/30745/"
]
} |
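The modulus comparison at the end of the answer to 549,298 can be looped over every key that passe-partout recovered; server.crt and the id_rsa-*.key names follow the answer's own example.
# fingerprint of the certificate's modulus
cert_md5=$(openssl x509 -noout -modulus -in server.crt | openssl md5)
# compare each recovered key against it and report the one that matches
for key in id_rsa-*.key; do
    key_md5=$(openssl rsa -noout -modulus -in "$key" | openssl md5)
    [ "$key_md5" = "$cert_md5" ] && echo "match: $key"
done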
549,305 | I have a script that issues a lot of commands. I wish to save the output of this script (Errors and stdout) to a file, but I also want the actual commands that were executed to be saved to the file as well (as displayed by bash -x) I've tried: bash -x script.sh > log.txt to no avail Any help is appreciated. Thanks | SUCCESS! I was able to retrieve the private key. But it wasn't easy. Here's what you need to do: Make sure you do not restart the server or Apache. The game is over at that point. That also means making sure that no monitoring services restart Apache. Grab this file - source code for a tool named passe-partout . Extract the source code and adjust line 9 of Makefile.main to read: $(CC) $(CFLAGS) -o $@ $(OBJS) $(LDFLAGS) (Notice that the $(OBJS) and $(LDFLAGS) are reversed in order.) Run ./build.sh . Grab the PID of Apache using: service apache2 status Run the passe-partout command as root: sudo passe-partout [PID] ...where [PID] is the value you retrieved in step #5. If the program succeeds, your current directory will have a bunch of extra keys: you@server:~# ls
id_rsa-0.key id_rsa-1.key id_rsa-2.key If all went well (and hopefully it did), one of those keys is the one you need. However, if you had more than one certificate/keyfile in use, then you need to figure out which one it is. Here's how you do that: First grab a copy of the certificate that matches the signed key. Assuming the file is named server.crt , run the following command: openssl x509 -noout -modulus -in server.crt | openssl md5 This will output a value that you will need to match against each of the keys. For each key, run the following command: openssl rsa -noout -modulus -in id_rsa-0.key | openssl md5 If one of them matches, you've found the key. Credit: this article pointed me to passe-partout. | {
"source": [
"https://serverfault.com/questions/549305",
"https://serverfault.com",
"https://serverfault.com/users/46466/"
]
} |
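For the tracing question in 549,305 above: bash -x writes the executed commands to stderr, so stderr has to be captured alongside stdout to get both the commands and their output in one file. A minimal sketch, reusing the script.sh and log.txt names from the question:
# trace output (stderr) and normal output (stdout) into the same log
bash -x script.sh > log.txt 2>&1
# or keep everything visible on the terminal while logging
bash -x script.sh 2>&1 | tee log.txt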
549,332 | I am trying to set robots.txt for all virtual hosts under nginx http server.
I was able to do it in Apache by putting the following in main httpd.conf : <Location "/robots.txt">
SetHandler None
</Location>
Alias /robots.txt /var/www/html/robots.txt I tried doing something similar with nginx by adding the lines given below (a) within nginx.conf and (b) as include conf.d/robots.conf location ^~ /robots.txt {
alias /var/www/html/robots.txt;
} I have tried with '=' and even put it in one of the virtual host to test it. Nothing seemed to work. What am I missing here? Is there another way to achieve this? | You can set the contents of the robots.txt file directly in the nginx config: location = /robots.txt { return 200 "User-agent: *\nDisallow: /\n"; } It is also possible to add the correct Content-Type: location = /robots.txt {
add_header Content-Type text/plain;
return 200 "User-agent: *\nDisallow: /\n";
} | {
"source": [
"https://serverfault.com/questions/549332",
"https://serverfault.com",
"https://serverfault.com/users/172701/"
]
} |
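A quick check that the location block from the answer to 549,332 is actually being served for every virtual host; the host names below are placeholders.
# the body should be the literal rules, and the type should be text/plain
for host in www.example.com blog.example.com; do
    curl -s  "http://$host/robots.txt"
    curl -sI "http://$host/robots.txt" | grep -i content-type
done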
549,336 | I have a small network set up like this: I have a Pfsense for connecting my servers to the WAN, they are using NAT from the LAN -> WAN. I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers. They connect through the WAN IP to the OVPN interface. The LAN interface also servers as the gateway for the servers to get internet connection and has an IP of 10.25.255.254 The OVPN Interface and the LAN interface are bridged in BR0 Server A has an IP of 10.25.255.1 and is able to connect the internet Client A is connecting through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within the 10.25.0.0/16 for VPN clients) Firewall currently allows any-any connection OVPN towards LAN and vice versa Currently when I connect, all routes seem fine on the client side: Destination Gateway Genmask Flags Metric Ref Use Iface
300.300.300.300 0.0.0.0 255.255.255.0 U 0 0 0 eth0
10.25.0.0 10.25.255.254 255.255.0.0 UG 0 0 0 tap0
10.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tap0
0.0.0.0 300.300.300.300 0.0.0.0 UG 0 0 0 eth0 I can ping the LAN interface: root@server:# ping 10.25.255.254
PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data.
64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms
64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms
64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms
64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms
64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms
64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms But I can't ping past the LAN interface: root@server:# ping 10.25.255.1
PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data.
From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1)
From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1) I ran a tcpdump on my em1 interface (LAN interface which has the IP of 10.25.255.254) tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes
08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64
08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64
08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64
08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64 So traffic is reaching the LAN interface, it's also passed to the host and the host replies. But the traffic is not put on the LAN interface. | You can set the contents of the robots.txt file directly in the nginx config: location = /robots.txt { return 200 "User-agent: *\nDisallow: /\n"; } It is also possible to add the correct Content-Type: location = /robots.txt {
add_header Content-Type text/plain;
return 200 "User-agent: *\nDisallow: /\n";
} | {
"source": [
"https://serverfault.com/questions/549336",
"https://serverfault.com",
"https://serverfault.com/users/86280/"
]
} |
549,517 | I've just update my Apache server to Apache/2.4.6 which is running under Ubuntu 13.04. I used to have a vhost file that had the following: <Directory "/home/john/development/foobar/web">
AllowOverride All
</Directory> But when I ran that I got a "Forbidden. You don't have permission to access /" After doing a little bit of googling I found out that to get my site working again I needed to add the following line "Require all granted" so that my vhost looked like this: <Directory "/home/john/development/foobar/web">
AllowOverride All
Require all granted
</Directory> I want to know if this is "safe" and does not bring in any security issues. I read on Apache's page that this "mimics the functionality the was previously provided by the 'Allow from all' and 'Deny from all' directives. This provider can take one of two arguments which are 'granted' or 'denied'. The following examples will grant or deny access to all requests." But it didn't say if this was a security issue of some sort or why we now have to do it when in the past you did not have to. | The access control configuration changed in 2.4, and old configurations aren't compatible without some changes. See here . If your old config was Allow from all (no IP addresses blocked from accessing the service), then Require all granted is the new functional equivilent. | {
"source": [
"https://serverfault.com/questions/549517",
"https://serverfault.com",
"https://serverfault.com/users/188995/"
]
} |
549,603 | How Can I stop the email notifications. I am setting up a new server and getting tons of notifications. Wants to disable them for time being. | Click on "Process Info" in the left nav pane, and then "Disable Notifications", then "Commit". | {
"source": [
"https://serverfault.com/questions/549603",
"https://serverfault.com",
"https://serverfault.com/users/54280/"
]
} |
549,608 | We have a WAMP based server set up. php.ini is set up with the following: session.gc_maxlifetime = 60*60*12
session.save_path = "d:/wamp/tmp" The problem we are facing is that session files inside the tmp folder are being sporadically deleted and we can't tell why. Sessions will last anything from about 10 minutes to 40 minutes, when they should be lasting 12 hours. This is a virtual host environment, but none of the code we use in these sites overrides this setting (with ini_set , apache config PHP values or otherwise) so we can't see why they are being deleted. There are also no scheduled tasks deleting the files. Is there a way to successfully figure out why gc_maxlifetime is being ignored? For the record, I changed one of our sites to use session_save_path('D:/wamp/tmptmp'); temporarily just to double check it was the garbage collection, and session files remain in there untouched - though admittedly this doesn't give really many more clues. | Click on "Process Info" in the left nav pane, and then "Disable Notifications", then "Commit". | {
"source": [
"https://serverfault.com/questions/549608",
"https://serverfault.com",
"https://serverfault.com/users/106816/"
]
} |
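One diagnostic step for the session question in 549,608 above: check the value PHP actually ends up with, since php.ini values are not guaranteed to be evaluated as arithmetic, so 60*60*12 may not become 43200. Quoting below is for a POSIX shell; on the Windows host use double quotes around the -r argument, and note that the Apache module can load a different php.ini than the CLI (phpinfo() shows the module's values).
# effective session settings as the PHP CLI sees them
php -r 'var_dump(ini_get("session.gc_maxlifetime"), ini_get("session.save_path"));'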
550,276 | Recently, I have encountered a problem of limiting Internet Access to specific programs. Could anybody recommend a good way of doing that, without using any particular software? | The solution for me happened to be straight forward. Create, validate new group ; add required users to this group: Create: groupadd no-internet Validate: grep no-internet /etc/group Add user: useradd -g no-internet username Note: If you're modifying already existing user you should run: usermod -a -G no-internet userName check with : sudo groups userName Create a script in your path and make it executable: Create: nano /home/username/.local/bin/no-internet Executable: chmod 755 /home/username/.local/bin/no-internet Content: #!/bin/bash
sg no-internet "$@" Add iptables rule for dropping network activity for group no-internet : iptables -I OUTPUT 1 -m owner --gid-owner no-internet -j DROP Note: Don't forget to make the changes permanent, so it would be applied automatically after reboot . Doing it, depends on your Linux distribution. Check it, for example on Firefox by running: no-internet "firefox" In case you would want to make an exception and allow a program to access local network : iptables -A OUTPUT -m owner --gid-owner no-internet -d 192.168.1.0/24 -j ACCEPT iptables -A OUTPUT -m owner --gid-owner no-internet -d 127.0.0.0/8 -j ACCEPT iptables -A OUTPUT -m owner --gid-owner no-internet -j DROP NOTE: In case of spawning the rules will be maintained. For example, if you run a program with no-internet rule and that program will open browser window, still the rules will be applied. | {
"source": [
"https://serverfault.com/questions/550276",
"https://serverfault.com",
"https://serverfault.com/users/131830/"
]
} |
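To keep the owner-match rule from the answer to 550,276 across reboots, one option is the iptables-persistent package; this assumes a Debian/Ubuntu system, which the question does not state, and other distributions have their own save mechanisms.
# save the current ruleset, including the --gid-owner rule, so it is restored at boot
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'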
550,297 | I added a 10GB NIC to a SQL server which is connected over to a backend storage using ISCSI. I would like to force traffic going to a certain IP address/host to use the 10gb NIC, while all other traffic should continue to use the 1GB NIC. The 10gb nic is configured using a private network. So far I have added a entry in the host file to the host I want to go over the private network and when I ping the host, it does return the private IP, but I'm still finding traffic going to the 1gb pipe. How can I force all traffic to this host to use the 10gb interface? Would the best approach be a static route? route print
Dest NetMask Gw Interface Metric
0.0.0.0 0.0.0.0 160.205.31.254 160.205.31.26 266
0.0.0.0 0.0.0.0 160.205.31.254 172.31.33.72 266
10gb NIC
IP 172.31.33.72
mask 255.255.255.0
GW 160.205.31.254
1gb NIC
IP 160.205.31.26
mask 255.255.255.0
gw 160.205.31.254 I want all traffic to 160.205.32.16 to use out the 10GB NIC. | The solution for me happened to be straight forward. Create, validate new group ; add required users to this group: Create: groupadd no-internet Validate: grep no-internet /etc/group Add user: useradd -g no-internet username Note: If you're modifying already existing user you should run: usermod -a -G no-internet userName check with : sudo groups userName Create a script in your path and make it executable: Create: nano /home/username/.local/bin/no-internet Executable: chmod 755 /home/username/.local/bin/no-internet Content: #!/bin/bash
sg no-internet "$@" Add iptables rule for dropping network activity for group no-internet : iptables -I OUTPUT 1 -m owner --gid-owner no-internet -j DROP Note: Don't forget to make the changes permanent, so it would be applied automatically after reboot . Doing it, depends on your Linux distribution. Check it, for example on Firefox by running: no-internet "firefox" In case you would want to make an exception and allow a program to access local network : iptables -A OUTPUT -m owner --gid-owner no-internet -d 192.168.1.0/24 -j ACCEPT iptables -A OUTPUT -m owner --gid-owner no-internet -d 127.0.0.0/8 -j ACCEPT iptables -A OUTPUT -m owner --gid-owner no-internet -j DROP NOTE: In case of spawning the rules will be maintained. For example, if you run a program with no-internet rule and that program will open browser window, still the rules will be applied. | {
"source": [
"https://serverfault.com/questions/550297",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
550,763 | IBM still develop and sell tape drives today. The capacity of them seems to be on a par with today's hard drives, but the search time and transfer rate are both significantly lower than that of hard drives. So when is tape drives preferable to hard drives (or SSDs) today? | For me, the single biggest argument in favour of tape is that doubling your storage capacity is cheap. That is, to go from 1TB of HDD storage to 2TB is the same as going from nothing to that first TB. With tape, you pay a large premium for the drive, but storage after that is comparitively cheap. You don't have to have lengthy budget meetings about increasing the size of the storage NAS by 15TB, you just order another box of LTO5s. (Chopper makes a valid point about compulsory labels, but tape labels are in a standard format, and there are free software solutions to printing your own onto label stock.) Tapes are much easier to ship, and easier to store, than HDD and HDD-like media. They're more resistant to shocks, and their temperature tolerances are higher. They also benefit from the existence of autoloaders. This allows you to spread a large dump over multiple storage containers, which means you don't have to worry about how to break up your backups. While it's perfectly possible to make an autoloader for HDD-type media, I've never seen one, and I suspect the lack of standardisation in physical package size will make it difficult to bring one to market at a reasonable price. Your point about transfer rates is valid, but in the context of backups it's of minimal import. The time required to back up a 1TB file system to anything is large enough that you shouldn't be doing it on a live file system; and if you're dumping a snapshot to tape, who cares if it takes an extra hour or two? Search times are an equally minor concern, because all decent backup software maintains indices, so one can generally go straight to the relevant portion of the tape to restore a file. Edit : after an incident earlier this week, one more advantage of tape has struck me most forcefully. A client got infected with ransomware, which promptly encrypted several hundred gigabytes of their main corporate file server. Online backups are all very well, but any system that can write to those backups can rewrite or erase them as well - even if you would rather it hadn't. That certainly doesn't argue against all HDD-based backup systems, but it is a weakness in the simple " let's just have a big NAS and do all our backups there via cron " approach. My client has tape storage, by the way - so apart from a couple of lost days, no harm was done. | {
"source": [
"https://serverfault.com/questions/550763",
"https://serverfault.com",
"https://serverfault.com/users/74246/"
]
} |
554,359 | I have several PostgreSQL 9.2 installations where the timezone used by PostgreSQL is GMT, despite the entire system being "Europe/Vienna". I double-checked that postgresql.conf does not contain timezone setting, so according to the documentation it should fallback to the system's timezone. However, # su -s /bin/bash postgres -c "psql mydb"
mydb=# show timezone;
TimeZone
----------
GMT
(1 row)
mydb=# select now();
now
-------------------------------
2013-11-12 08:14:21.697622+00
(1 row) Any hints, where the GMT timezone could come from? The system user does not have TZ set and the /etc/timezone and /etc/timeinfo seem to be configured correctly. # cat /etc/timezone
Europe/Vienna
# date
Tue Nov 12 09:15:42 CET 2013 Any hints are appreciated, thanks in advance! | The default value for the TimeZone setting has changed on release 9.2: 9.1 TimeZone : (..) If not explicitly set, the server initializes this variable to the
time zone specified by its system environment. (...) 9.2 TimeZone : (...) The built-in default is GMT, but that is typically overridden in
postgresql.conf; initdb will install a setting there corresponding to
its system environment. (...) Which means that prior to version 9.2 the default value at postgresql.conf should be set during initdb phase. If you overridden that value (probably copying the old postgresql.conf while upgrading from older versions) PostgreSQL will use the "GMT" value as default. The solution for your case is quite simple, just change the TimeZone setting on postgresql.conf to the value you want: TimeZone = 'Europe/Vienna' After that you need to reload the service: # su - postgres -c "psql mydb -c 'SELECT pg_reload_conf()'" Then all fields stored as timestamp with time zone (or timestamptz ) will be shown correctly from now on. But you will have to correct by hand all (update) the fields stored as timestamp without time zone (or timestamp ). A tip I give to everyone upgrading PostgreSQL is not to copy the old postgresql.conf to the new cluster (notice I'm not sure if it what you did, but I saw this very same problem a lot because of that). Just get the one generated by initdb and add the modifications (a diff tool may be handful to this task). | {
"source": [
"https://serverfault.com/questions/554359",
"https://serverfault.com",
"https://serverfault.com/users/2191/"
]
} |
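A compact verification sequence for the fix in 554,359, assuming the postgres OS user and the mydb database from the question; run it after setting timezone = 'Europe/Vienna' in postgresql.conf.
# reload the configuration and confirm what the server now reports
sudo -u postgres psql mydb -c 'SELECT pg_reload_conf()'
sudo -u postgres psql mydb -c 'SHOW timezone'
sudo -u postgres psql mydb -c 'SELECT now()'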
554,374 | Title is quite explanatory, but I have just deleted an s3 bucket as it was in the wrong region and am wanting to recreate it in the correct region with same name as the just deleted one. Is there any documentation of this or user experience? | The S3 docs used to say: When you delete a bucket, there may be a delay of up to one hour before the bucket name is available for reuse in a new region or by a new bucket owner. If you re-create the bucket in the same region or with the same bucket owner, there is no delay. But now they just say: ... it might take some time before the name can be reused ... | {
"source": [
"https://serverfault.com/questions/554374",
"https://serverfault.com",
"https://serverfault.com/users/138894/"
]
} |
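Once the name is released, the bucket from 554,374 can be recreated in the intended region from the CLI; the bucket name and region below are placeholders.
# will keep failing until the deleted name has fully freed up
aws s3 mb s3://my-bucket-name --region eu-west-1
# confirm where the new bucket actually lives
aws s3api get-bucket-location --bucket my-bucket-name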
554,378 | How can I tell if I have Windows 2012 R2 or just "2012"? I can't seem to see any signs of "R2" anywhere yet I ordered a (remote) server with this installed and I want to verify I have the correct edition installed before I go any further with the server build. | Drop into a command prompt and issue either of the following commands; systeminfo | findstr OS Or winver You can then use this table to determine the version; Operating System Version Which shows: Operating system Version number
Windows 8.1 6.3*
Windows Server 2012 R2 6.3*
Windows 8 6.2
Windows Server 2012 6.2
Windows 7 6.1
Windows Server 2008 R2 6.1
Windows Server 2008 6.0
Windows Vista 6.0
Windows Server 2003 R2 5.2
Windows Server 2003 5.2
Windows XP 64-Bit Edition 5.2
Windows XP 5.1
Windows 2000 5.0 Based on your comment, it would appear you're running Windows Server 2012 as opposed to Windows Server 2012 R2 | {
"source": [
"https://serverfault.com/questions/554378",
"https://serverfault.com",
"https://serverfault.com/users/39043/"
]
} |
554,520 | SMTP allows for multiple FROM addresses on the body (not the envelope) according to the RFCs. Has this feature ever been used for a legitimate purpose? Is it safe to discard messages that have multiple FROM addresses? | RFC 822 actually gives an example of this usage. It required (Section 4.4) that the Sender: header be present when it was used. A.2.7. Agent for member of a committee
George's secretary sends out a message which was authored
jointly by all the members of a committee. Note that the name
of the committee cannot be specified, since <group> names are
not permitted in the From field.
From: Jones@Host,
Smith@Other-Host,
Doe@Somewhere-Else
Sender: Secy@SHost RFC 2822 , which obsoleted it, continued to explicitly allow this particular construction (Section 3.6.2). from = "From:" mailbox-list CRLF
mailbox-list = (mailbox *("," mailbox)) / obs-mbox-list In the current standard, RFC 5322 , this is unchanged, and multiple addresses are still explicitly allowed (Section 3.6.2). The from field consists of the field name "From" and a comma-
separated list of one or more mailbox specifications. If the from
field contains more than one mailbox specification in the mailbox-
list, then the sender field, containing the field name "Sender" and a
single mailbox specification, MUST appear in the message. Was it ever useful? Yes, and it still is, for exactly the sort of scenario shown in the ancient example. Messages with multiple authors are supposed to have all of them listed in the From: header, with the Sender: set to the person who actually hit Send in their email program. The originator fields indicate the mailbox(es) of the source of the
message. The "From:" field specifies the author(s) of the message,
that is, the mailbox(es) of the person(s) or system(s) responsible
for the writing of the message. The "Sender:" field specifies the
mailbox of the agent responsible for the actual transmission of the
message. For example, if a secretary were to send a message for
another person, the mailbox of the secretary would appear in the
"Sender:" field and the mailbox of the actual author would appear in
the "From:" field. If the originator of the message can be indicated
by a single mailbox and the author and transmitter are identical, the
"Sender:" field SHOULD NOT be used. Otherwise, both fields SHOULD
appear. In practice on the public Internet, messages in which this is done are uncommon, though they do occur especially in enterprise and academic environments where it's much more common for one person to send email on behalf of another, or of a group. I've never actually seen spam that does this (and got through all my other controls). I would generally consider it unsafe to discard or raise the spam score of such a message. | {
"source": [
"https://serverfault.com/questions/554520",
"https://serverfault.com",
"https://serverfault.com/users/51457/"
]
} |
555,953 | I've been trying to move an existing db from MySQL running on EC2 to a new Amazon RDS instance (an experiment to see if we can move across). So far, it's not going well. I'm stuck at the initial import before setting up replication (instructions here ). I've prepared the RDS instance as described and can connect to it from the EC2 instance using mysql. I ran the mysqldump command as: mysqldump --master-data --databases db1 db2 > dump.sql Then attempted to upload it to RDS with: mysql -h RDSHost -P 3306 -u rdsuser --password=rdspassword < dump.sql The first problem was at line 22 of the dump: CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=106; This line caused error ERROR 1227 (42000) at line 22: Access denied; you need (at least one of) the SUPER privilege(s) for this operation . No problem, just commented out that line and hope to fix it later via mysql.rds_set_external_master(). Retried the upload, and got a very similar error: ERROR 1227 (42000) at line 7844: Access denied; you need (at least one of) the SUPER privilege(s) for this operation . The section around line 7844 looks like this: /*!50001 CREATE ALGORITHM=UNDEFINED */
/*!50013 DEFINER=`dev`@`localhost` SQL SECURITY DEFINER */
/*!50001 VIEW `jos_contributor_ids_view` AS select `jos_resource_contributors_view`.`uidNumber` AS `uidNumber` from `jos_resource_contributors_view` union select `jos_wiki_contributors_view`.`uidNumber` AS `uidNumber` from `jos_wiki_contributors_view` */; By commenting out the first 2 lines and adding a 'CREATE' to the third,I was able to get past this one. But there are tons of sections like this. Is there some way round this without all the editing? Like an option to mysqldump to not produce anything which needs SUPER privileges? It seems like lots of people have had similar problems, like having to run sed against the output of mysqldump / mysqlbinlog! I'm going to post on the AWS forum too - really I think RDS should have a more tolerant way of importing from mysqldump, or a specific tool which can be run against an existing db to create a dump which is complaint with RDS security. Just wondered if anyone had any other recipes or tricks which might help here. Thanks, Dave | You do likely need log_bin_trust_function_creators = 1 on RDS but that isn't the issue, here. You can specify a DEFINER value other than your own account only if you have the SUPER privilege. — http://dev.mysql.com/doc/refman/5.6/en/stored-programs-security.html When a stored program (proc, function, event, or trigger) is running, everything it does has the permissions of the user who defined it, or of the user explicitly stated with a DEFINER declaration. This allows, among other things, for stored programs to permit other users to do things to data they don't directly have permission to manipulate, as long as they have permission to use the stored program itself. It would be a serious vulnerability, then, if a non- SUPER user could create a procedure with an arbitrary definer, because the user could escalate his or her privileges at will. This is also true of views, of course, when the definer security context is used, as in the example you posted. One of the biggest complaints I have with RDS is that you can't have SUPER ... and now it can be one of yours, too :) because that fact is the cause of the problem you are having. Of course, if I were running a managed MySQL service, I wouldn't give anybody SUPER , either, so their security model makes sense, even if it is sometimes unwieldy. If all of your objects have the same definer, a workaround would be to restore the dump using that account instead of the one you're using now, but that seems unlikely. Deleting just the line with the DEFINER declaration should make the dumpfile work in cases where it appears on a line by itself, or you could use sed or perl to modify the file... an idea that I already know you're not fond of, but it really is a nice thing about MySQL that such hackery is quite legitimate, and not really all that far afield from the kinds of things I have to do as a DBA even in a non-RDS envirnoment. perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' < oldfile.sql > newfile.sql ...possibly not the answer you hoped for, but you could run that against your dumpfile and should end up with a slightly more usable file. | {
"source": [
"https://serverfault.com/questions/555953",
"https://serverfault.com",
"https://serverfault.com/users/188683/"
]
} |
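The perl filter from the answer to 555,953 can sit directly in the dump-and-load pipeline, so no edited intermediate file is needed; hosts and credentials below are placeholders, and the CHANGE MASTER statement still needs separate handling (mysqldump --master-data=2 writes it as a comment rather than an executable statement).
mysqldump --databases db1 db2 \
  | perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' \
  | mysql -h my-rds-endpoint.rds.amazonaws.com -u rdsuser -p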
556,077 | I'm really flailing around in AWS trying to figure out what I'm missing here. I'd like to make it so that an IAM user can download files from an S3 bucket - without just making the files totally public - but I'm getting access denied. If anyone can spot what's off I'll be stoked. What I've done so far: Created a user called my-user (for sake of example) Generated access keys for the user and put them in ~/.aws on an EC2 instance Created a bucket policy that I'd hoped grants access for my-user Ran the command aws s3 cp --profile my-user s3://my-bucket/thing.zip . Bucket policy: {
"Id": "Policy1384791162970",
"Statement": [
{
"Sid": "Stmt1384791151633",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-bucket/*",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/my-user"
}
}
]
} The result is A client error (AccessDenied) occurred: Access Denied although I can download using the same command and the default (root account?) access keys. I've tried adding a user policy as well. While I don't know why it would be necessary I thought it wouldn't hurt, so I attached this to my-user. {
"Statement": [
{
"Sid": "Stmt1384889624746",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
} Same results. | I was struggling with this, too, but I found an answer over here https://stackoverflow.com/a/17162973/1750869 that helped resolve this issue for me. Reposting answer below. You don't have to open permissions to everyone. Use the below Bucket policies on source and destination for copying from a bucket in one account to another using an IAM user Bucket to Copy from – SourceBucket Bucket to Copy to – DestinationBucket Source AWS Account ID - XXXX–XXXX-XXXX Source IAM User - src–iam-user The below policy means – the IAM user - XXXX–XXXX-XXXX:src–iam-user has s3:ListBucket and s3:GetObject privileges on SourceBucket/* and s3:ListBucket and s3:PutObject privileges on DestinationBucket/* On the SourceBucket the policy should be like: {
"Id": "Policy1357935677554",
"Statement": [
{
"Sid": "Stmt1357935647218",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::SourceBucket",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
},
{
"Sid": "Stmt1357935676138",
"Action": ["s3:GetObject"],
"Effect": "Allow",
"Resource": "arn:aws:s3::: SourceBucket/*",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
}
]
} On the DestinationBucket the policy should be: {
"Id": "Policy1357935677554",
"Statement": [
{
"Sid": "Stmt1357935647218",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3::: DestinationBucket",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
},
{
"Sid": "Stmt1357935676138",
"Action": ["s3:PutObject"],
"Effect": "Allow",
"Resource": "arn:aws:s3::: DestinationBucket/*",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
}
]
} command to be run is s3cmd cp s3://SourceBucket/File1 s3://DestinationBucket/File1 | {
"source": [
"https://serverfault.com/questions/556077",
"https://serverfault.com",
"https://serverfault.com/users/115375/"
]
} |
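The bucket policies from the answer to 556,077 can also be applied from the CLI instead of the console; the JSON file names below are placeholders for files containing the policies shown above.
# attach each policy to its bucket
aws s3api put-bucket-policy --bucket SourceBucket      --policy file://source-policy.json
aws s3api put-bucket-policy --bucket DestinationBucket --policy file://destination-policy.json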
556,091 | The university I work for allows students to register their bikes for indoor storage during the winter. This is done through a website. At the end of the process, a DYMO label printer connected to the network is supposed to print out a label that will be stuck to the bike. Here is the process: User and bike data is entered into a web form by a staff member. (this works) User and bike data is stored in an Oracle database. (this works) PHP running on the web server saves the user and bike data into a CSV file. (this works) PHP running on the web server calls a VBScript. (this works) The VBScript opens a Word document that loads the CSV data and prints a label. (problem) Now, the VBScript works correctly. If I manually run the VBScript, it will open Word, load the CSV data, print a label, and close Word. Likewise, if I add a bit of code to the end of the VBScript to write a .txt file (for testing purposes) the text file gets written whether I run the script manually or allow it to be run by the website. As such, I suspect there is a permissions problem preventing the VBScript from accessing Word and/or the printer when run from the web. Any suggestions on how to solve this problem? The web server is Windows Server 2003 running XAMPP. If it helps, here is the line in PHP that calls the VBScript: exec('wscript "D:\CSWebHousing\wwwroot\portal2\bikes\testcode.vbs"'); Here is the relevant portion of the VBScript: Sub TestCode
Set ws = WScript.CreateObject("WScript.Shell")
OFFICE_PATH = "C:\path_to\Office12"
file_to_open = CHR(34) & "D:\path_to\Label.doc" & CHR(34)
ws.Run CHR(34)& OFFICE_PATH & "\winword.exe" & CHR(34) & file_to_open, 0, false
'These lines tab and enter past a dialog box
intTime = 3000
Wscript.Sleep(intTime)
ws.Sendkeys "{TAB}"
intTime = 500
Wscript.Sleep(intTime)
ws.Sendkeys "{TAB}"
intTime = 500
Wscript.Sleep(intTime)
ws.Sendkeys "{ENTER}"
intTime = 4000
Wscript.Sleep(intTime)
ws.Sendkeys "%fx"
End Sub I think it is a permissions problem, but any ideas would be appreciated. Thank you. | I was struggling with this, too, but I found an answer over here https://stackoverflow.com/a/17162973/1750869 that helped resolve this issue for me. Reposting answer below. You don't have to open permissions to everyone. Use the below Bucket policies on source and destination for copying from a bucket in one account to another using an IAM user Bucket to Copy from – SourceBucket Bucket to Copy to – DestinationBucket Source AWS Account ID - XXXX–XXXX-XXXX Source IAM User - src–iam-user The below policy means – the IAM user - XXXX–XXXX-XXXX:src–iam-user has s3:ListBucket and s3:GetObject privileges on SourceBucket/* and s3:ListBucket and s3:PutObject privileges on DestinationBucket/* On the SourceBucket the policy should be like: {
"Id": "Policy1357935677554",
"Statement": [
{
"Sid": "Stmt1357935647218",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::SourceBucket",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
},
{
"Sid": "Stmt1357935676138",
"Action": ["s3:GetObject"],
"Effect": "Allow",
"Resource": "arn:aws:s3::: SourceBucket/*",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
}
]
} On the DestinationBucket the policy should be: {
"Id": "Policy1357935677554",
"Statement": [
{
"Sid": "Stmt1357935647218",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3::: DestinationBucket",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
},
{
"Sid": "Stmt1357935676138",
"Action": ["s3:PutObject"],
"Effect": "Allow",
"Resource": "arn:aws:s3::: DestinationBucket/*",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src–iam-user"}
}
]
} command to be run is s3cmd cp s3://SourceBucket/File1 s3://DestinationBucket/File1 | {
"source": [
"https://serverfault.com/questions/556091",
"https://serverfault.com",
"https://serverfault.com/users/199915/"
]
} |
556,363 | When I launch a server with a security group that allows all traffic into my private subnet, it displays a warning that it may be open to the world. If it is a private subnet, how can that be? | The main difference is the route for 0.0.0.0/0 in the associated route table. A private subnet sets that route to a NAT gateway/instance. Private subnet instances only need a private ip and internet traffic is routed through the NAT in the public subnet. You could also have no route to 0.0.0.0/0 to make it a truly private subnet with no internet access in or out. A public subnet routes 0.0.0.0/0 through an Internet Gateway (igw). Instances in a public subnet require public IPs to talk to the internet. The warning appears even for private subnets, but the instance is only accessible inside your vpc. | {
"source": [
"https://serverfault.com/questions/556363",
"https://serverfault.com",
"https://serverfault.com/users/192084/"
]
} |
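A quick way to apply the answer to 556,363 in practice: look at the 0.0.0.0/0 target in the route table associated with the subnet. The subnet ID below is a placeholder.
# an igw-... target means a public subnet; a NAT gateway/instance target, or no 0.0.0.0/0 route at all, means private
aws ec2 describe-route-tables \
    --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
    | grep -A 2 '"DestinationCidrBlock": "0.0.0.0/0"'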
556,369 | I've got a brand new installation of CentOS minimal, and have installed Samba as follows: yum install krb5-workstation samba Firstly, have I got all the necessary packages to become a domain member? The above command also installs for dependencies: libtalloc libtdb samba-common samba-winbind samba-winbind-clients In my smb.conf I have the lines: template shell = /bin/bash
template homedir = /home/%D/%U I've joined to the domain with: net ads join -U <admin> I can now use getent passwd and see AD users as well as local users, but all the AD accounts have shell listed as /bin/false . They do correctly have home directories as /home/<DOMAIN>/<username> , though. What could be causing this behaviour? All AD users currently get logged out on authentication! | The main difference is the route for 0.0.0.0/0 in the associated route table. A private subnet sets that route to a NAT gateway/instance. Private subnet instances only need a private ip and internet traffic is routed through the NAT in the public subnet. You could also have no route to 0.0.0.0/0 to make it a truly private subnet with no internet access in or out. A public subnet routes 0.0.0.0/0 through an Internet Gateway (igw). Instances in a public subnet require public IPs to talk to the internet. The warning appears even for private subnets, but the instance is only accessible inside your vpc. | {
"source": [
"https://serverfault.com/questions/556369",
"https://serverfault.com",
"https://serverfault.com/users/61566/"
]
} |
556,388 | I have been archiving mailboxes on our Exchange 2010 server and subsequently deleting large numbers of messages from nearly all mailboxes by setting retention periods on them. I would like to know how much of the database is now just whitespace so that I can gauge how much space will be freed up by defragging it using ESEUTIL. So, I run: Get-MailboxDatabase -Status | ft Name,DatabaseSize,AvailableNewMailboxSpace But the columns that are returned for both DatabaseSize and AvailableNewMailboxSpace are blank. I have tried specifying the database using the "-Identity" parameter, but the result is the same. Am I omitting something necessary? | The main difference is the route for 0.0.0.0/0 in the associated route table. A private subnet sets that route to a NAT gateway/instance. Private subnet instances only need a private ip and internet traffic is routed through the NAT in the public subnet. You could also have no route to 0.0.0.0/0 to make it a truly private subnet with no internet access in or out. A public subnet routes 0.0.0.0/0 through an Internet Gateway (igw). Instances in a public subnet require public IPs to talk to the internet. The warning appears even for private subnets, but the instance is only accessible inside your vpc. | {
"source": [
"https://serverfault.com/questions/556388",
"https://serverfault.com",
"https://serverfault.com/users/184167/"
]
} |
556,391 | How can I monitor when a file is changed, and specifically who changed it (or perhaps what cron job changed it) on a RedHat machine without installing additional software? I am a system admin for the machine, but for reasons I wont go into here, I cannot install software on the box. Is there any built in functionality or a python or bash script I can run? I've found many options for file monitoring software, but I can't install anything on the server. | The main difference is the route for 0.0.0.0/0 in the associated route table. A private subnet sets that route to a NAT gateway/instance. Private subnet instances only need a private ip and internet traffic is routed through the NAT in the public subnet. You could also have no route to 0.0.0.0/0 to make it a truly private subnet with no internet access in or out. A public subnet routes 0.0.0.0/0 through an Internet Gateway (igw). Instances in a public subnet require public IPs to talk to the internet. The warning appears even for private subnets, but the instance is only accessible inside your vpc. | {
"source": [
"https://serverfault.com/questions/556391",
"https://serverfault.com",
"https://serverfault.com/users/186349/"
]
} |
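For the monitoring question in 556,391 above: the audit subsystem is part of the RHEL base system, so a watch can usually be added without installing anything; the path below is a placeholder. A minimal sketch:
# record writes and attribute changes to the file, tagged with a searchable key
auditctl -w /etc/myapp/config.ini -p wa -k cfg-watch
# later, show who or what touched it (interpreted uids, executable and pid included)
ausearch -k cfg-watch -i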