Columns: source_id: int64 (1 to 4.64M); question: string (lengths 0 to 28.4k); response: string (lengths 0 to 28.8k); metadata: dict
648,855
Does Windows Server 2012 R2 have native SFTP support? I see an "FTP Server" role, but it doesn't say whether this includes SFTP.
Microsoft IIS server does not support SFTP (or SSH) at all, on any version of IIS or Windows. IIS supports secure FTP (FTPS or FTP over TLS/SSL) though. It's a different (incompatible) protocol than SFTP, but most "FTP" clients support both SFTP and FTPS. When setting up an FTPS server, make sure you disable plain (unencrypted) FTP! See (my) guide on Installing Secure FTP Server on Windows using IIS . Microsoft recently released OpenSSH for Windows ( Releases and Downloads ). On Windows 10 version 1803 or newer, you already have OpenSSH built-in. On older versions of Windows 10, it can be installed as an optional Windows feature. It can also be manually installed on older versions of Windows. I have prepared a guide for setting up SSH/SFTP server on Windows using this Microsoft build of OpenSSH .
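A quick way to tell the two protocols apart from any client machine (the host and user names below are placeholders, not values from the question): SFTP runs over SSH, while FTPS is plain FTP upgraded to TLS.
sftp user@ftp.example.com                                   # SFTP answers on the SSH port (22 by default)
openssl s_client -connect ftp.example.com:21 -starttls ftp  # explicit FTPS: FTP on port 21 upgraded via AUTH TLS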
{ "source": [ "https://serverfault.com/questions/648855", "https://serverfault.com", "https://serverfault.com/users/256924/" ] }
648,875
I was not exactly sure how to best word my question above. I am trying not to ask a subjective question, but I really want a little advice from someone who knows more about this, and I have a few different questions. I am planning a new network topology and upgrades to the current network setup and began to look into firewalls. The biggest question I have right now (since I am very inexperienced with networking) is whether or not connecting a firewall to the router is necessary. The current setup is "Modem, Router, PCs (some are wired, some are wireless)". We currently only have the Windows 7 firewalls running on each individual machine, and we are looking for a better way to protect all systems. The software firewall seems a lot easier for me to manage, as I am the only IT person in the building and I have no experience with a hardware firewall. I am thinking about a setup as follows: "Modem, router, firewall, switch, server and PCs". As I was looking into the SonicWall tz105, it began to look very complex and complicated, especially for me since I haven't done anything with firewalls. We have about 15 PCs in the building where we are looking to implement this new network, and eventually we will want to connect to one server from multiple locations. Although this can be a subjective question, is the initial setup of the firewall relatively simple if you have minimal knowledge about how everything works? We are looking for a "minimum" firewall that protects all of the PCs; we aren't looking to do anything "too fancy", but the capabilities of the firewall I looked at scare me away a little bit. Dell offers a nice demo online where I played around some with the settings of the SonicWall. So, to recap: should I look into getting a hardware firewall such as the SonicWall tz105, or do I rely on the router and Windows firewall to protect all of the systems? Is a firewall something that I can set up in, say, an hour or so and have a full internet connection with the firewall monitoring and protecting, or is there a lot more that I need to set up before I can communicate with the internet? I know these questions are frowned upon, but please bear with me; I don't know how else to find this information.
{ "source": [ "https://serverfault.com/questions/648875", "https://serverfault.com", "https://serverfault.com/users/251425/" ] }
649,151
I'm trying to get these 2 location directives working in Nginx but I'm getting some errors back when starting Nginx. location ~ ^/smx/(test|production) { proxy_pass http://localhost:8181/cxf; } location ~ ^/es/(test|production) { proxy_pass http://localhost:9200/; } This is the error I'm receiving: nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block Does it sound familiar to anyone? What am I missing here?
A small addition to the great answer from Xaviar: if you are not so well acquainted with nginx, there's an important difference depending on whether you add a slash to the end of the proxy_pass directive. The following does not work: location ~* ^/dir/ { rewrite ^/dir/(.*) /$1 break; proxy_pass http://backend/; } but this one does: location ~* ^/dir/ { rewrite ^/dir/(.*) /$1 break; proxy_pass http://backend; } The difference is the / at the end of the proxy_pass directive.
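A hedged way to exercise a change like this (the paths, hostname, and log location below are illustrative, not from the thread): validate the configuration, reload, then hit the location and watch the log.
sudo nginx -t && sudo nginx -s reload      # syntax-check before reloading
curl -i http://localhost/dir/some/page     # with no trailing slash on proxy_pass, the rewritten /some/page should reach the backend
sudo tail -f /var/log/nginx/access.log     # confirm the request arrived and what status it got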
{ "source": [ "https://serverfault.com/questions/649151", "https://serverfault.com", "https://serverfault.com/users/110615/" ] }
649,532
I'm writing a Graphite/Diamond collector measuring network latency. Specifically it should measure the time it takes to open up a connection to a port on a remote server. And it needs to work via script, i.e. no humans on the keyboard running this interactively. To give an example with a fictional parameter to telnet which would do exactly what I need. time telnet --close-connection-immediately-after-established somehost.somedomain.com 1234 Trying somehost.somedomain.com... Connected to somehost.somedomain.com. Escape character is '^]'. Connection closed automatically after established. real 0m3.978s user 0m0.001s sys 0m0.003s Basically, what's a command-line way to output 3.978 given the example above using only builtin tools? You could wrap the telnet in an expect script I suppose and have it issue: ^] close ... but that seems rather ugly. Any ideas?
What about : time nc -zw30 <host> <port>
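If the number needs to be captured by a collector rather than read off the terminal, bash's time output can be redirected. A minimal sketch, using the question's placeholder host/port and a Graphite-plaintext-style output line:
host=somehost.somedomain.com; port=1234
seconds=$( { TIMEFORMAT=%R; time nc -zw30 "$host" "$port" 2>/dev/null; } 2>&1 )   # bash's time keyword writes to stderr
echo "network.connect_time.${host//./_} ${seconds} $(date +%s)"                   # the metric path is just an example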
{ "source": [ "https://serverfault.com/questions/649532", "https://serverfault.com", "https://serverfault.com/users/208119/" ] }
649,647
On my local network there are (among others) 5 machines (running Debian Jessie or Arch) wirelessly connected to a Netgear WNDR4000 router. Below is a graph of the ping times to the router from each of the machines, collected over a period of around half an hour. Observations: When things are going well, the ping times are all below 3ms (under 1ms for two of the machines, including the problem machine, purple). At irregular intervals (of the order of 100s), three of these machines (red, green, purple) suffer degradation of ping times, while the other two appear unaffected. The degradation periods coincide for all 3 machines. The degradation for purple is two orders of magnitude more severe than for green and red, with ping times typically reaching over 20000ms for purple and 200ms for red and green. If purple is physically moved nearer the router, the degradation completely disappears for purple while continuing as before for both red and green. Red is 3m away and in direct line of sight from the base station; purple's usual location is about 10m away without direct line of sight. This makes network access on purple intolerably slow (when it is in its normal location). Can you suggest how to go about diagnosing and fixing the problem?
Clearly you have an interference problem. Interference can come from passive elements like aluminum wall studs or thick floors, but those are not likely to show the periodic pattern you see. So something electric or electronic is periodically emitting. Finding it may be expensive or tough, but you have a few options. Graph more. It would be nice to make sure that this isn't a bandwidth limitation of your own creation. Looking at the bandwidth consumed on your uplink and each of the clients might lead to a similar looking graph and the source of your problem. Upgrade firmware. Being on the latest firmware could get you past a bug in their code causing this. Plus it helps reduce the chances of your router being remotely compromised. Mitigate it. Get a wifi analyzer for your phone. Look particularly at who is using which bands when purple is pokey. You may find that switching from channel 7 to 1 or 14 takes care of your problem. The analyzer should show you how the channels spread out into each other, so if you are in a really busy area, going for the in-betweens of 4 or 11 would let you reduce the congestion. Migrate. Can you move the WAP to a different location? Moving it just a foot can significantly affect propagation within a building. Ground it. Make sure all of your AV equipment (TVs, stereos, etc.) is properly grounded. Find it. You could use a spectrum analyzer and a directional antenna to find the emitter, but the spectrum analyzer is big bucks. If you know a ham (an amateur radio operator), they may have this gear lying around already; offer them food and don't be surprised if more than one shows up. The ham will also know the FCC regs inside and out, which will be great if you actually find the source. Without bribing anyone, you could try switching things off and seeing if it gets better, but it might not be your stuff causing the problem. When dealing with VCRs and TVs you may need to completely unplug them. Good luck.
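For the "graph more" step, even a crude timestamped ping log from each machine is enough to plot later; the router address below is an assumption.
while sleep 5; do
  rtt=$(ping -c1 -W2 192.168.1.1 | sed -n 's/.*time=\([0-9.]*\).*/\1/p')
  echo "$(date +%s) ${rtt:-timeout}"
done >> ping_router.log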
{ "source": [ "https://serverfault.com/questions/649647", "https://serverfault.com", "https://serverfault.com/users/257425/" ] }
649,990
Is there a way to create SSL cert requests by specifying all the required parameters on the initial command? I am writing a CLI-based web server control panel and I would like to avoid the use of expect when executing openssl if possible. This is a typical way to create a cert request: $ openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout foobar.com.key -out foobar.com.csr Generating a 2048 bit RSA private key .................................................+++ ........................................+++ writing new private key to 'foobar.com.key' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:New Sweden Locality Name (eg, city) []:Stockholm Organization Name (eg, company) [Internet Widgits Pty Ltd]:Scandanavian Ventures, Inc. Organizational Unit Name (eg, section) []: Common Name (e.g. server FQDN or YOUR name) []:foobar.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:FooBar I am hoping to see something like this: (unworking example) $ openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout foobar.com.key -out foobar.com.csr \ -Country US \ -State "New Sweden" \ -Locality Stockholm \ -Organization "Scandanavian Ventures, Inc." \ -CommonName foobar.com \ -EmailAddress [email protected] \ -Company FooBar The fine man page had nothing to say on the matter, nor was I able to find anything via Google. Must SSL cert request generation be an interactive process, or is there some way to specify all the parameters in a single command? This is on a Debian-derived Linux distro running openssl 1.0.1 .
You are missing two parts: the subject line, which can be passed as -subj "/C=US/ST=New Sweden/L=Stockholm/O=.../OU=.../CN=.../emailAddress=..." replacing ... with your values (X= being the X.509 codes: O = Organisation, OU = Organisational Unit, etc.), and the password values, which can be passed as -passout pass:client11 -passin pass:client11 to give an output/input password. My calls for a new key look like: openssl genrsa -aes256 -out lib/client1.key -passout pass:client11 1024 openssl rsa -in lib/client1.key -passin pass:client11 -out lib/client1-nokey.key openssl req -new -key lib/client1.key -passin pass:client11 -out lib/client1.csr -subj "/C=US/ST=New Sweden/L=Stockholm/O=.../OU=.../CN=.../emailAddress=..."
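Putting that together for the DN fields asked about in the question, a single non-interactive call might look like the following; the e-mail address is a made-up placeholder (it is redacted above), and -nodes skips the key passphrase.
openssl req -new -newkey rsa:2048 -nodes -sha256 \
  -keyout foobar.com.key -out foobar.com.csr \
  -subj "/C=US/ST=New Sweden/L=Stockholm/O=Scandanavian Ventures, Inc./CN=foobar.com/emailAddress=hostmaster@example.com"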
{ "source": [ "https://serverfault.com/questions/649990", "https://serverfault.com", "https://serverfault.com/users/91213/" ] }
650,117
I have a couple of API endpoints that I want to serve from under a single location of /api with subpaths going to different endpoints. Specifically, I want webdis to be available at /api and a proprietary API available at /api/mypath . I'm not worried about clashes with the webdis API because I am using subpaths which are unlikely to clash with redis command names, and also have full control over the design of the API to avoid clashes. Here's the config file from my test server that I have been hacking on: server { listen 80; server_name localhost; server_name 192.168.3.90; server_name 127.0.0.1; location / { root /home/me/src/phoenix/ui; index index.html; } # temporary hardcoded workaround location = /api/mypath/about { proxy_pass http://localhost:3936/v1/about; } location /api { rewrite ^/api/(.*)$ /$1 break; proxy_pass http://localhost:7379/; } # tried this but it gives "not found" error #location ^~ /api/mypath/ { # rewrite ^/api/mypath/(.*)$ /$1 break; # proxy_pass http://localhost:3936/v1/; #} # #location ^~ /api { # rewrite ^/api/(.*)$ /$1 break; # proxy_pass http://localhost:7379/; #} } How can I change my workaround so that any requests to /api/mypath/* will go to the endpoint at port 3936, and everything else to port 7379?
You don't need rewrite for this. server { ... location ^~ /api/ { proxy_pass http://localhost:7379/; } location ^~ /api/mypath/ { proxy_pass http://localhost:3936/v1/; } } According to the nginx documentation A location can either be defined by a prefix string, or by a regular expression. Regular expressions are specified with the preceding ~* modifier (for case-insensitive matching), or the ~ modifier (for case-sensitive matching). To find location matching a given request, nginx first checks locations defined using the prefix strings (prefix locations). Among them, the location with the longest matching prefix is selected and remembered. Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used. If no match with a regular expression is found then the configuration of the prefix location remembered earlier is used. If the longest matching prefix location has the ^~ modifier then regular expressions are not checked. Therefore any request that begins with /api/mypath/ will always be served by the second block since that's the longest matching prefix location. Any request that begins with /api/ not immediately followed by mypath/ will always be served by the first block, since the second block doesn't match, therefore making the first block the longest matching prefix location.
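A quick hedged sanity check after reloading (the URLs and expected backends follow the question's setup; the exact status codes depend on the applications behind the proxies):
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/api/mypath/about   # longest matching prefix: proxied to localhost:3936/v1/about
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/api/ping           # only /api/ matches: proxied to localhost:7379/ping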
{ "source": [ "https://serverfault.com/questions/650117", "https://serverfault.com", "https://serverfault.com/users/42456/" ] }
650,339
I want to keep our SSL key for our website confidential. It's stored on 2 USB sticks, one in a safe deposit box and one I keep secure. And then I'm the only one who applies it to the web server, so that it is totally secure. Except... On IIS at least, you can export the key. So anyone who's an admin can then get a copy of the key. Is there any way around this? Or, by definition, do all admins have full access to all keys? Update: I do have sysadmins I fully trust. What led to this is that one of them quit (they had an hour commute to our company, a 5 minute commute to the new one). While I trust this individual, just as we disable their Active Directory account when someone leaves, I thought we should have a way to ensure they don't retain the ability to use our SSL key. And what struck me as easiest is if I'm the only one who has it. Our cert expires in January, so this was the time to change the practice if we could. Based on the answers it looks like we cannot. So this leads to a new question: when someone who has access to the cert leaves, is it standard practice to get a new cert and have the existing one revoked? Or, if the person who left is trustworthy, do we continue with the cert we have?
A person with administrative (or often even physical) access to a server is going to be able to extract the private key. Whether through exporting, memory sniffing, or other such trickery. Your administrators have access to the private keys of your web servers. Accept this as fact, and work around that. If your sysadmins aren't trustworthy, then you may need better sysadmins or at least fewer sysadmins with access to the web servers. If it's a matter of management security paranoia, then there may be a deeper issue regarding their ability to trust a sysadmin. This isn't to say that you should just let everybody have access to the private key. There should always be a need for access before access is granted. With that in mind, are you going to take extreme measures to make sure that a sysadmin with full control of a website can not export the private key, but can still manipulate the website itself in any number of nearly untraceable ways? We're back to trust here, and I think that's the core of the problem that needs to be addressed.
{ "source": [ "https://serverfault.com/questions/650339", "https://serverfault.com", "https://serverfault.com/users/237816/" ] }
650,652
With respect to domain-joined Windows 8/8.1 Pro machines, what features of Windows will not work unless a Microsoft Account is used? Also, aside from "feature loss," are there any gotchas to not using a Microsoft Account in a domain environment? This is a Server 2008 R2-based Active Directory environment. No Office 365.
features of Windows will not work unless a Microsoft Account is used? None. You only lose features of various software, and 99% of that is conveniences that nobody will miss. You lose the "Store" completely, and applications like "Weather" need to have the location configured (they can't just pull that data from your account). Are there any gotchas to not using a Microsoft Account in a domain environment? Nothing specific to the domain or security. We disable the Store and login with MS Accounts anyway, just so people don't wander into anything: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options\Accounts: Block Microsoft accounts Computer Configuration\Policies\Administrative Templates\Windows Components\Store\Turn off the Store application If you disable the Store, but do not uninstall all of the apps, you will likely want to install the updates for those apps (I'm not aware of any recent security problems, but it's Microsoft software). Microsoft provides a full list of built-in App updates that can be imported into a WSUS server (or you can extract the MSI installer from the cabinet file and deploy it however you like).
{ "source": [ "https://serverfault.com/questions/650652", "https://serverfault.com", "https://serverfault.com/users/205065/" ] }
652,237
I have a strange idea - let multiple people/organizations host the same application, and let all their nodes be accessible via a single domain name. That's in order to have, let's say, a really distributed social network where usability is not sacrificed (i.e. users don't have to remember different provider URLs and then, when one provider goes down, switch to another one). To achieve that, I thought a DNS record with multiple IPs could be used. So, how many IPs can a single DNS A record hold? This answer says it's around 30, but the use case there is different. For the above scenario I wouldn't care if a given ISP caches only 30, as long as another ISP caches another 30, and so on.
Disclaimer: No offense, but this is a really bad idea. I do not recommend that anyone do this in real life. But if you give a bored IT guy a lab, funny things will happen! For this experiment, I used a Microsoft DNS server running on Server 2012 R2. Because of the complications of hosting a DNS zone in Active Directory, I created a new primary zone named testing.com that is not AD-integrated. Using this script: $Count = 1 for ($x = 1; $x -lt 256; $x++) { for ($y = 1; $y -lt 256; $y++) { for ($z = 1; $z -lt 256; $z++) { Write-Host "1.$x.$y.$z`t( $Count )" $Count++ dnscmd . /RecordAdd testing.com testing A 1.$x.$y.$z } } } I proceeded to create, without error, 65025 host records for the name testing.testing.com. , with literally every IPv4 address from 1.1.1.1 to 1.1.255.255. Then, I wanted to make sure that I could break through 65536 (2^16 bit) total number of A records without error, and I could, so I assume I probably could have gone all the way to 16581375 (1.1.1.1 to 1.255.255.255,) but I didn't want to sit here and watch this script run all night. So I think it's safe to say that there's no practical limit to the number of A records you can add to a zone for the same name with different IPs on your server. But will it actually work from a client's perspective? Here is what I get from my client as viewed by Wireshark: (Open the image in a new browser tab for full size.) As you can see, when I use nslookup or ping from my client, it automatically issues two queries - one UDP and one TCP. As you already know, the most I can cram into a UDP datagram is 512 bytes, so once that limit is exceeded (like 20-30 IP addresses,) one must use TCP instead. But even with TCP, I only get a very small subset of A records for testing.testing.com. 1000 records were returned per TCP query. The list of A records rotates by 1 properly with each successive query, exactly like how you would expect round-robin DNS to work. It would take millions of queries to round robin through all of these. I don't see how this is going to help you make your massively scalable, resilient social media network, but there's your answer nevertheless. Edit: In your follow-up comment, you ask why I think this is generally a bad idea. Let's say I am an average internet user, and I would like to connect to your service. I type www.bozho.biz into my web browser. The DNS client on my computer gets back 1000 records. Well, bad luck, the first 30 records in the list are non-responsive because the list of A records isn't kept up to date, or maybe there's a large-scale outage affecting a chunk of the internet. Let's say my web browser has a time-out of 5 seconds per IP before it moves on and tries the next one. So now I am sitting here staring at a spinning hourglass for 2 and a half minutes waiting for your site to load. Ain't nobody got time for that. And I'm just assuming that my web browser or whatever application I use to access your service is even going to attempt more than the first 4 or 5 IP addresses. It probably won't. If you used automatic scavenging and allow non-validated or anonymous updates to the DNS zone in the hopes of keeping the list of A records fresh... just imagine how insecure that would be! Even if you engineered some system where the clients needed a client TLS certificate that they got from you beforehand in order to update the zone, one compromised client anywhere on the planet is going to start a botnet and destroy your service. Traditional DNS is precariously insecure as it is, without crowd-sourcing it. 
Humongous bandwidth usage and waste. If every DNS query requires 32 kilobytes or more of bandwidth, that's not going to scale well at all. DNS round-robin is no substitute for proper load balancing. It provides no way to recover from one node going down or becoming unavailable in the middle of things. Are you going to instruct your users to do an ipconfig/flushdns if the node they were connected to goes down? These sorts of issues have already been solved by things like GSLB and Anycast. Etc.
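If you want to see the truncation behaviour described above for yourself, dig makes the UDP-versus-TCP difference visible. The record name comes from the answer's lab; @ns1.testing.com is a placeholder for whatever server hosts the zone.
dig @ns1.testing.com testing.testing.com A +noedns +ignore | grep -c '^testing'   # classic 512-byte UDP reply, truncated
dig @ns1.testing.com testing.testing.com A +tcp | grep -c '^testing'              # the TCP reply carries far more records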
{ "source": [ "https://serverfault.com/questions/652237", "https://serverfault.com", "https://serverfault.com/users/33148/" ] }
652,592
My server had some problem with the ssh and I couldn't upload files of size > 10 KB, as scp would hang during the copy. I found a solution for this problem here , and I was changing the MTU, when I accidentally did sudo ip link set eth0 mtu 0 . No one can ssh into the server now. What should I do?
Connect to the console and change the MTU back. If you've not got console access, then reboot the server.
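From the console the fix is a one-liner; 1500 is the usual Ethernet default, so adjust it if the interface was deliberately using something else.
ip link set eth0 mtu 1500   # run as root on the console (the broken MTU is why SSH can't get in)
ip link show eth0           # confirm the MTU is back to a sane value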
{ "source": [ "https://serverfault.com/questions/652592", "https://serverfault.com", "https://serverfault.com/users/127130/" ] }
652,764
I've sent an important email that the recipient claims they never received. They say that they asked their IT team to check whether the email was received by their server. According to them, the email never reached their server. They also don't accept the possibility that the email was received and marked as spam. Shouldn't I receive an error message if the email wasn't delivered? Is there any way for me to check whether they are telling the truth (it sounds very fishy to me)? Thank you.
You can absolutely see in the postfix logs where an email was sent, and whether it was accepted. Here's an example log entry from my mail server which indicates that the message was successfully sent to the Google SMTP servers. Dec 15 14:21:43 ebony postfix/smtp[2422]: D05BB1D872: to=, relay=gmail-smtp-in.l.google.com[74.125.201.27]:25, delay=1.4, delays=0.08/0.01/0.59/0.74, dsn=2.0.0, status=sent (250 2.0.0 OK 1418674912 h96si7402391iod.11 - gsmtp) What this doesn't show is what the server did with the email after it was accepted, but this entry alone is enough for you to tell the remote IT dept that your mail was in fact delivered and you can give them the Message ID and the response from their server (in parentheses at the end) to provide evidence! Good luck.
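A hedged way to pull the relevant lines out of your own logs; the recipient address is a placeholder, and the log path is /var/log/mail.log on Debian/Ubuntu or /var/log/maillog on RHEL-style systems.
grep 'to=<recipient@example.com>' /var/log/mail.log | grep -E 'status=(sent|bounced|deferred)'
grep 'D05BB1D872' /var/log/mail.log   # or follow a single message by the queue ID shown in the example above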
{ "source": [ "https://serverfault.com/questions/652764", "https://serverfault.com", "https://serverfault.com/users/245326/" ] }
652,793
I'm trying to install IIS Advanced Logging via downloaded 64 bit msi, but noninteractively from the command line. Supposedly AdvancedLogging64.msi /quiet should do the trick, but it seems to be doing nothing. I suspect this is because it wants the user to accept the license. Is there a command line flag to force this through?
{ "source": [ "https://serverfault.com/questions/652793", "https://serverfault.com", "https://serverfault.com/users/98556/" ] }
652,810
I'm trying to correlate volumes (as enumerated from win32_volume for those where DriveType = 3 ) back to win32_physicaldisk instances. Everything that I've seen in my research points to the answer being "it's not possible", but then again, I didn't read the entire Internet. :) I'm currently getting the information out of diskpart, but am running into limitations with that approach. As a bonus, if the answer is "no, and here's the reason why", that would be useful, too.
{ "source": [ "https://serverfault.com/questions/652810", "https://serverfault.com", "https://serverfault.com/users/69392/" ] }
653,203
I stumbled across this problem when trying to create new FTP users for vsftpd. Upon creating a new user with the following command and attempting login with FileZilla, I would get an "incorrect password" error. useradd f -p pass -d /home/f -s /bin/false After doing this, /etc/shadow contains f:pass:1111:0:99:2::: Once I run the following command and provide the same password pass: passwd f /etc/shadow contains f:$1$U1c5vVwg$x5TVDDDmhi0a7RWFer6Jn1:1111:0:99:2::: It appears that encryption happens when I run passwd, but not with useradd. Importantly, after doing this I am able to log in to FTP with the exact same credentials. I am using CentOS 5.11, vsftpd for FTP, and FileZilla for FTP access. /var/log/secure contains: Dec 17 useradd[644]: new group: name=f, GID=511 Dec 17 useradd[644]: new user: name=f, UID=511, GID=511, home=/home/f, shell=/bin/false Why does it not work when I pass -p pass to useradd? What do I need to do to make it work?
That is working as intended. If you want to set a password using the useradd command, you are supposed to give a hashed version of the password to useradd . The string pass does satisfy the format criteria for the hashed password field in /etc/shadow , but no actual password hashes to that string. The result is that for all intents and purposes, that account will behave as having a password, but any password you try to use to access it will be rejected as not being the correct password. See man useradd or the useradd documentation : -p , --password PASSWORD The encrypted password, as returned by crypt(3) . The default is to disable the password. Note: This option is not recommended because the password (or encrypted password) will be visible by users listing the processes. You should make sure the password respects the system's password policy.
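In practice that means either hashing the password before handing it to useradd, or setting it afterwards. A sketch using the question's throw-away credentials (openssl passwd -1 produces the same $1$ MD5-crypt format as the /etc/shadow entry shown above; stronger schemes exist):
useradd f -d /home/f -s /bin/false -p "$(openssl passwd -1 'pass')"   # pre-hash the password for -p
echo 'f:pass' | chpasswd                                              # or create the user first and set the password non-interactively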
{ "source": [ "https://serverfault.com/questions/653203", "https://serverfault.com", "https://serverfault.com/users/260066/" ] }
653,340
I've been reading some EBS docs and they are talking about "I/O credit balance" How can I view my current (or historical) credit balance? Each volume receives an initial I/O credit balance of 5,400,000 I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes. This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications. Volumes earn I/O credits every second at a base performance rate of 3 IOPS per GiB of volume size. For example, a 100 GiB General Purpose (SSD) volume has a base performance of 300 IOPS.
AWS recently added a Burst Balance metric to monitor your credit balance. The metric ranges from 0-100% and shows how far your volume is from the full 5.4 million credit balance (see the AWS blog post about it). This is available for EC2 gp2 volumes as well as RDS gp2 volumes. To view it for EC2 EBS volumes, go to CloudWatch -> Metrics -> EBS -> BurstBalance. To view it for RDS instances, go to CloudWatch -> Metrics -> RDS -> BurstBalance.
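The same metric can be read from the AWS CLI if you want to script or alert on it; the volume ID and time window below are placeholders.
aws cloudwatch get-metric-statistics --namespace AWS/EBS --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time 2016-01-01T00:00:00Z --end-time 2016-01-01T06:00:00Z \
  --period 300 --statistics Average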
{ "source": [ "https://serverfault.com/questions/653340", "https://serverfault.com", "https://serverfault.com/users/124775/" ] }
653,342
I already saw this thread , but it didn't answer my question because it has been left for dead. As the title says, when I log into my VPS with putty, everything works fine. But when connecting with FileZilla through SFTP, I always get an error : Authentication failed, cannot establish connection to the server (roughly translated). I am using the right settings in FileZilla because I only got this error 3 days ago and it used to work fine before : SFTP through port 22. Here is an iptables -L : (TL;DR : accept everything in and out on ports 20, 21 and 22, and passive inbound connections on ports 1024+) Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:ftp ctstate ESTABLISHED /* Allow ftp connections on port 21 */ ACCEPT tcp -- anywhere anywhere tcp dpt:ftp-data ctstate RELATED,ESTABLISHED /* Allow ftp connections on port 20 */ ACCEPT tcp -- anywhere anywhere tcp spts:1024:65535 dpts:1024:65535 ctstate ESTABLISHED /* Allow passive inbound connections */ ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ctstate ESTABLISHED /* Allow ftp connections on port 22 */ Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:ftp ctstate NEW,ESTABLISHED /* Allow ftp connections on port 21 */ ACCEPT tcp -- anywhere anywhere tcp dpt:ftp-data ctstate ESTABLISHED /* Allow ftp connections on port 20 */ ACCEPT tcp -- anywhere anywhere tcp spts:1024:65535 dpts:1024:65535 ctstate RELATED,ESTABLISHED /* Allow passive inbound connections */ ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ctstate ESTABLISHED /* Allow ftp connections on port 22 */ I did set this manually in case that was the source of my problems, but nothing changed. I also set PasswordAuthentication yes and LogLevel DEBUG as the previous thread suggested as well, but nothing changed neither after restarting sshd. Here is what I get in /var/log/auth.log when I try to connect with FileZilla : literally nothing related to SFTP login. It only contains stuff about me doing sudo s to access the file. I don't know whether it comes from FileZilla because auth.log shows nothing related to SFTP connection, or it comes from sshd configuration just ignoring SFTP requests. I can't seem to find anything to help me, do you have any suggestions ? Thank you for your time reading this.
{ "source": [ "https://serverfault.com/questions/653342", "https://serverfault.com", "https://serverfault.com/users/250583/" ] }
653,361
We are seeing some very strange behaviour from our Cisco ASA 5505 running 9.1(2) We have a SIP PBX inside our network. It's got a bit of an odd configuration, it listens for inbound SIP requests for our trunk on UDP/60052 . So in our ASA, I have a Port Forwarding from UDP/5060 to UDP/65002 on our outside interface. 90% of this time, this works just fine. However, the remaining 10% of the time, and so far it appears to be randomly, the ASA decides not to do anything with the incoming UDP/5060 traffic. A packet capture on the ASA shows the SIP INVITE request hitting the outside interface, but it never reaches the egress interface. We are not using any SIP inspection, as the internal server uses STUN to remap its SIP headers. NAT Rule: nat (outside,any) source static obj_any obj_any destination static interface Swyx service Swyx-5060-Service-UDP Swyx-SIP-65002-UDP no-proxy-arp description BC Swyx 5060 > 65002 ACL: object-group service DM_INLINE_SERVICE_3 service-object object Swyx-RTP-55000 service-object object Swyx-SIP-65002-UDP service-object object RTP access-list outside_access_in_1 extended permit object-group DM_INLINE_SERVICE_3 any object Swyx log critical What have I missed? Is there any known bugs in this version of the ASA?
{ "source": [ "https://serverfault.com/questions/653361", "https://serverfault.com", "https://serverfault.com/users/7709/" ] }
653,399
I want to display a banner (welcome message) for SSH users, with a specific welcome message for each user.
You did not specify, what SSH server are you using. I'm assuming OpenSSH. Note that the SSH banner and the MOTD are two different things. While almost indistinguishable in an SSH terminal, they have a different behavior, for example, in an SFTP client. The MOTD is just a text printed on an interactive terminal. So, it won't (and cannot) be sent to SFTP clients, for example (more about that later). The MOTD is hard-coded to the /etc/motd in OpenSSH. You can turn it on/off globally only, using the PrintMotd directive. On some Linux systems, however, the PrintMotd is always off and the MOTD is printed by the PAM stack instead (using the pam_motd module). In this case you can turn it off via the /etc/pam.d/sshd or specify a custom motd= path as a module parameter. The SSH banner is a special SSH 2.0 feature, sent in a specific SSH packet (SSH2_MSG_USERAUTH_BANNER). As such, even non-terminal clients, like SFTP clients, can process it and display to user. See how the banner displays in WinSCP SFTP/SCP client for example. The SSH banner is configurable per user (or group or other criteria) in the sshd_config using the Banner and the Match directives : Match User username1 Banner /etc/banner_user1 Match User username2 Banner /etc/banner_user2 See also Disable ssh banner for specific users or ips . Of course, you can also use a custom implementation for the message/banner. Simply print a message selected using your custom logic from a global profile script. As with the MOTD, this won't work for non-interactive sessions (the SFTP and alike). More importantly, not only it won't work, you need to make sure that you print the message for an interactive terminal only. What OpenSSH does automatically for the /etc/motd . Either use a global profile script that executes for an interactive terminal only, or print the message conditionally based on value of the TERM environment variable. If you print the message for non-interactive session, you break any client that uses a strict protocol, such as the SFTP or the SCP, as the client will try to interpret your text message as a protocol message, failing badly. See for example description of such issue in documentation of WinSCP SFTP/SCP client . (I'm the author of WinSCP)
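After editing sshd_config it may help to validate the file and then look at what an unauthenticated client actually receives; username1 is the placeholder from the Match example above.
sshd -t                                                    # syntax-check sshd_config (run as root) before restarting
ssh -o PreferredAuthentications=none username1@localhost   # the per-user banner is sent before authentication, so it shows up even though the login fails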
{ "source": [ "https://serverfault.com/questions/653399", "https://serverfault.com", "https://serverfault.com/users/260277/" ] }
653,664
A few days ago Gmail suddenly decided to stop sending mails to my mailserver. I am using Postfix and Dovecot with an paid SSL Certificate running on Debian 7 with everything updated. My mail.log shows the following error: Dec 19 11:09:11 server postfix/smtpd[19878]: initializing the server-side TLS engine Dec 19 11:09:11 server postfix/tlsmgr[19880]: open smtpd TLS cache btree:/var/lib/postfix/smtpd_scache Dec 19 11:09:11 server postfix/tlsmgr[19880]: tlsmgr_cache_run_event: start TLS smtpd session cache cleanup Dec 19 11:09:11 server postfix/smtpd[19878]: connect from mail-wi0-x230.google.com[2a00:1450:400c:c05::230] Dec 19 11:09:11 server postfix/smtpd[19878]: setting up TLS connection from mail-wi0-x230.google.com[2a00:1450:400c:c05::230] Dec 19 11:09:11 server postfix/smtpd[19878]: mail-wi0-x230.google.com[2a00:1450:400c:c05::230]: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STR ENGTH:!aNULL:!DES:!3DES:!MD5:!DES+MD5:!RC4:!RC4-MD5" Dec 19 11:09:11 server postfix/smtpd[19878]: SSL_accept:before/accept initialization Dec 19 11:09:11 server postfix/smtpd[19878]: SSL_accept:error in unknown state Dec 19 11:09:11 server postfix/smtpd[19878]: SSL_accept error from mail-wi0-x230.google.com[2a00:1450:400c:c05::230]: -1 Dec 19 11:09:11 server postfix/smtpd[19878]: warning: TLS library problem: 19878:error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol:s23_srvr.c:647: Dec 19 11:09:11 server postfix/smtpd[19878]: lost connection after STARTTLS from mail-wi0-x230.google.com[2a00:1450:400c:c05::230] Dec 19 11:09:11 server postfix/smtpd[19878]: disconnect from mail-wi0-x230.google.com[2a00:1450:400c:c05::230] excerpts from my postfix main.cf : smtpd_use_tls=yes smtpd_tls_security_level = may smtpd_tls_auth_only = yes smtpd_tls_CAfile = path to CA Bundle smtpd_tls_cert_file= path to cert (pem) smtpd_tls_key_file=path to key (pem) smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_tls_mandatory_protocols = !SSLv2,!SSLv3,!TLSv1,!TLSv1.1 smtpd_tls_exclude_ciphers = aNULL, DES, 3DES, MD5, DES+MD5, RC4, RC4-MD5 smtpd_tls_protocols=!SSLv2,!TLSv1,!TLSv1.1,!SSLv3 smtpd_tls_mandatory_ciphers = medium smtpd_tls_received_header = yes tls_preempt_cipherlist = yes tls_medium_cipherlist = AES256+EECDH:AES256+EDH I don't know where the problem is, because I regularly receive mails from others. There are no errors connecting to port 25 via telnet or port 465 via openssl Addition: I got this mail in return from Google: Delivery to the following recipient failed permanently: <removed> Technical details of permanent failure: TLS Negotiation failed ----- Original message ----- [...] Maybe it's an issue with my cipherlist? Answer to masegaloeh's question: openssl s_client -connect localhost:25 -starttls smtp CONNECTED(00000003) depth=3 C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root verify error:num=19:self signed certificate in certificate chain verify return:0 --- Certificate chain [...] --- Server certificate -----BEGIN CERTIFICATE----- [...] --- No client certificate CA names sent --- SSL handshake has read 6267 bytes and written 477 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: [...] Session-ID-ctx: Master-Key: [...] 
Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 3600 (seconds) TLS session ticket: [...] Compression: 1 (zlib compression) Start Time: 1418986680 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- 250 DSN Update 1: Reissued my SSL certificate. Generated everything as follows: openssl req -nodes -newkey rsa:2048 -keyout myserver.key -out server.csr -sha256 I then created a new file consisting of the crt and the key; after this I created the CA bundle: cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > bundle.crt Added everything to my dovecot and postfix config and restarted both services. Google still fails to send mails to my server, resulting in "TLS Negotiation failed". I tried another mail provider (web.de) and the mail gets sent. web.de log: Dec 19 17:33:15 server postfix/smtpd[14105]: connect from mout.web.de[212.227.15.3] Dec 19 17:33:15 server postfix/smtpd[14105]: setting up TLS connection from mout.web.de[212.227.15.3] Dec 19 17:33:15 server postfix/smtpd[14105]: mout.web.de[212.227.15.3]: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STRENGTH" Dec 19 17:33:15 server postfix/smtpd[14105]: mout.web.de[212.227.15.3]: save session EA1635ED786AFC2D9C7AB43EF43620A1D9092DC640FDE21C01E7BA25981D2445&s=smtp&l=268439647 to smtpd cache Dec 19 17:33:15 server postfix/tlsmgr[14107]: put smtpd session id=EA1635ED786AFC2D9C7AB43EF43620A1D9092DC640FDE21C01E7BA25981D2445&s=smtp&l=268439647 [data 127 bytes] Dec 19 17:33:15 server postfix/tlsmgr[14107]: write smtpd TLS cache entry EA1635ED786AFC2D9C7AB43EF43620A1D9092DC640FDE21C01E7BA25981D2445&s=smtp&l=268439647: time=1419006795 [data 127 bytes] Dec 19 17:33:15 server postfix/smtpd[14105]: Anonymous TLS connection established from mout.web.de[212.227.15.3]: TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits) Solution: After enabling TLSv1 and TLSv1.1 in the smtpd_(mandatory)_protocols section, everything works fine. Thanks masegaloeh! Dec 20 11:44:46 server postfix/smtpd[31966]: initializing the server-side TLS engine Dec 20 11:44:46 server postfix/tlsmgr[31968]: open smtpd TLS cache btree:/var/lib/postfix/smtpd_scache Dec 20 11:44:46 server postfix/tlsmgr[31968]: tlsmgr_cache_run_event: start TLS smtpd session cache cleanup Dec 20 11:44:46 server postfix/smtpd[31966]: connect from mail-wi0-x235.google.com[2a00:1450:400c:c05::235] Dec 20 11:44:46 server postfix/smtpd[31966]: setting up TLS connection from mail-wi0-x235.google.com[2a00:1450:400c:c05::235] Dec 20 11:44:46 server postfix/smtpd[31966]: mail-wi0-x235.google.com[2a00:1450:400c:c05::235]: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STRENGTH" Dec 20 11:44:46 server postfix/smtpd[31966]: Anonymous TLS connection established from mail-wi0-x235.google.com[2a00:1450:400c:c05::235]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)
TL;DR: Your TLS protocol settings are too strict because you only allow TLSv1.2 connections. smtpd_tls_mandatory_protocols = !SSLv2,!SSLv3,!TLSv1,!TLSv1.1 smtpd_tls_protocols=!SSLv2,!TLSv1,!TLSv1.1,!SSLv3 And GMAIL sends email to your server with the TLSv1 protocol. That's why the TLS negotiation fails. The obvious solution is to allow the TLSv1 and TLSv1.1 protocols while still disabling the (insecure) SSLv2 and SSLv3 protocols. Explanation: I can reproduce your case, failing to receive email from GMAIL and FACEBOOK over STARTTLS. Why does only GMAIL fail to send email to my server? This is the maillog snippet when GMAIL sends email: Dec 19 23:37:47 tls postfix/smtpd[3876]: initializing the server-side TLS engine Dec 19 23:37:47 tls postfix/smtpd[3876]: connect from mail-wg0-f47.google.com[74.125.82.47] Dec 19 23:37:48 tls postfix/smtpd[3876]: setting up TLS connection from mail-wg0-f47.google.com[74.125.82.47] Dec 19 23:37:48 tls postfix/smtpd[3876]: mail-wg0-f47.google.com[74.125.82.47]: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STRENGTH:!aNULL:!DES:!3DES:!MD5:!DES+MD5:!RC4:!RC4-MD5" Dec 19 23:37:48 tls postfix/smtpd[3876]: SSL_accept:before/accept initialization Dec 19 23:37:48 tls postfix/smtpd[3876]: SSL_accept:error in unknown state Dec 19 23:37:48 tls postfix/smtpd[3876]: SSL_accept error from mail-wg0-f47.google.com[74.125.82.47]: -1 Dec 19 23:37:48 tls postfix/smtpd[3876]: warning: TLS library problem: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol:s23_srvr.c:647: Dec 19 23:37:48 tls postfix/smtpd[3876]: lost connection after STARTTLS from mail-wg0-f47.google.com[74.125.82.47] Dec 19 23:37:48 tls postfix/smtpd[3876]: disconnect from mail-wg0-f47.google.com[74.125.82.47] And this is the maillog snippet when FACEBOOK sends email: Dec 19 23:11:14 tls postfix/smtpd[3844]: initializing the server-side TLS engine Dec 19 23:11:14 tls postfix/tlsmgr[3846]: open smtpd TLS cache btree:/var/lib/postfix/smtpd_scache Dec 19 23:11:14 tls postfix/tlsmgr[3846]: tlsmgr_cache_run_event: start TLS smtpd session cache cleanup Dec 19 23:11:14 tls postfix/smtpd[3844]: connect from outcampmail003.ash2.facebook.com[66.220.155.162] Dec 19 23:11:14 tls postfix/smtpd[3844]: setting up TLS connection from outcampmail003.ash2.facebook.com[66.220.155.162] Dec 19 23:11:14 tls postfix/smtpd[3844]: outcampmail003.ash2.facebook.com[66.220.155.162]: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STRENGTH:!aNULL:!DES:!3DES:!MD5:!DES+MD5:!RC4:!RC4-MD5" Dec 19 23:11:14 tls postfix/smtpd[3844]: SSL_accept:before/accept initialization Dec 19 23:11:15 tls postfix/smtpd[3844]: SSL_accept:error in unknown state Dec 19 23:11:15 tls postfix/smtpd[3844]: SSL_accept error from outcampmail003.ash2.facebook.com[66.220.155.162]: -1 Dec 19 23:11:15 tls postfix/smtpd[3844]: warning: TLS library problem: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol:s23_srvr.c:647: Dec 19 23:11:15 tls postfix/smtpd[3844]: lost connection after STARTTLS from outcampmail003.ash2.facebook.com[66.220.155.162] Dec 19 23:11:15 tls postfix/smtpd[3844]: disconnect from outcampmail003.ash2.facebook.com[66.220.155.162] Dec 19 23:11:16 tls postfix/smtpd[3844]: connect from outcampmail004.ash2.facebook.com[66.220.155.163] Dec 19 23:11:17 tls postfix/smtpd[3844]: 962C281443: client=outcampmail004.ash2.facebook.com[66.220.155.163] Dec 19 23:11:18 tls postfix/cleanup[3849]: 962C281443: message-id=<[email protected]> Dec 19 23:11:18 tls postfix/qmgr[3843]: 962C281443: from=<[email protected]>, size=18002, nrcpt=1 (queue active) Dec 19 23:11:18 tls
postfix/local[3850]: 962C281443: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.6, delays=1.5/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox) Dec 19 23:11:18 tls postfix/qmgr[3843]: 962C281443: removed Dec 19 23:11:24 tls postfix/smtpd[3844]: disconnect from outcampmail004.ash2.facebook.com[66.220.155.163] Some analysis: In the first snippet, GMAIL tries to send email over STARTTLS. During the TLS negotiation an error occurs, so the GMAIL server disconnects. We will discuss why the error occurs below. In the second snippet, FACEBOOK also fails to send email over STARTTLS. As a fallback, FACEBOOK resends the email in plain-text mode. In this case our server happily accepts it. So, that explains why only GMAIL fails to send email to your server. GMAIL doesn't have a mechanism to fall back if TLS negotiation fails. Other mail servers may use a fallback mechanism to ensure the email delivery succeeds. Why the TLS negotiation error occurs: I spotted an interesting line in the web.de maillog Dec 19 17:33:15 foxdev postfix/smtpd[14105]: Anonymous TLS connection established from mout.web.de[212.227.15.3]: TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits) and found that you specify this configuration in main.cf: smtpd_tls_mandatory_protocols = !SSLv2,!SSLv3,!TLSv1,!TLSv1.1 smtpd_tls_protocols=!SSLv2,!TLSv1,!TLSv1.1,!SSLv3 That means your server only accepts a TLS connection when TLSv1.2 is used. With anything other than TLSv1.2, your server will complain about a TLS negotiation error. If I change smtpd_tls_(mandatory_)protocols to !SSLv2,!SSLv3,!TLSv1, the error still occurs. That means GMAIL and FACEBOOK attempt to contact your mail server with protocols other than TLSv1.1 and TLSv1.2. If I change smtpd_tls_(mandatory_)protocols to !SSLv2,!SSLv3, the TLS negotiation succeeds. That confirms that GMAIL and FACEBOOK contact your server with the TLSv1 protocol: Dec 20 00:21:46 tls postfix/smtpd[4261]: Anonymous TLS connection established from outmail038.prn2.facebook.com[66.220.144.165]: TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits) Dec 20 00:23:00 tls postfix/smtpd[4261]: Anonymous TLS connection established from mail-wi0-f174.google.com[209.85.212.174]: TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits) Other folks in the FreeBSD forum also confirm this behavior. Solution: The obvious solution is to enable TLSv1 and TLSv1.1 in your Postfix. This ensures that a mail server without a fallback mechanism - like GMAIL - can still communicate with your server. I don't know your reason for disabling TLSv1 and TLSv1.1 support and leaving only the TLSv1.2 protocol. On a web server whose users run only modern browsers, disabling TLSv1 can be acceptable, because only older browsers are limited to TLSv1.
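A hedged sketch of applying and verifying that change on the mail server (the hostname is a placeholder; note that some newer OpenSSL client builds ship with TLSv1 disabled, which would make the first probe fail for an unrelated reason):
postconf -e 'smtpd_tls_protocols = !SSLv2, !SSLv3'
postconf -e 'smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3'
postfix reload
openssl s_client -connect mail.example.com:25 -starttls smtp -tls1   </dev/null   # should now negotiate TLSv1
openssl s_client -connect mail.example.com:25 -starttls smtp -tls1_2 </dev/null   # TLSv1.2 should still work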
{ "source": [ "https://serverfault.com/questions/653664", "https://serverfault.com", "https://serverfault.com/users/212232/" ] }
653,792
In short: Would like a way to do SSH key authentication via LDAP. Problem: We use LDAP (slapd) for directory services and we've recently moved to using our own AMI for building instances. The reason the AMI bit is important is that, ideally , we would like to be able to login with SSH via key authentication as soon as the instance is running and not have to wait for our somewhat slow configuration management tool to kickoff a script to add the correct keys to the instance. The ideal scenario is that, when adding a user to LDAP we add their key as well and they'd be immediately be able to login. Key authentication is a must because password-based login is both less secure and bothersome. I've read this question which suggests there's a patch for OpenSSH called OpenSSH-lpk to do this but this is no longer needed with OpenSSH server >= 6.2 Added a sshd_config(5) option AuthorizedKeysCommand to support fetching authorized_keys from a command in addition to (or instead of) from the filesystem. The command is run under an account specified by an AuthorizedKeysCommandUser sshd_config(5) option How can I configure OpenSSH and LDAP to implement this?
Update LDAP to include the OpenSSH-LPK schema We first need to update LDAP with a schema to add the sshPublicKey attribute for users: dn: cn=openssh-lpk,cn=schema,cn=config objectClass: olcSchemaConfig cn: openssh-lpk olcAttributeTypes: ( 1.3.6.1.4.1.24552.500.1.1.1.13 NAME 'sshPublicKey' DESC 'MANDATORY: OpenSSH Public key' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 ) olcObjectClasses: ( 1.3.6.1.4.1.24552.500.1.1.2.0 NAME 'ldapPublicKey' SUP top AUXILIARY DESC 'MANDATORY: OpenSSH LPK objectclass' MAY ( sshPublicKey $ uid ) ) Create a script that queries LDAP for a user's public key: The script should output the public keys for that user, example: ldapsearch '(&(objectClass=posixAccount)(uid='"$1"'))' 'sshPublicKey' | sed -n '/^ /{H;d};/sshPublicKey:/x;$g;s/\n *//g;s/sshPublicKey: //gp' Update sshd_config to point to the script from the previous step AuthorizedKeysCommand /path/to/script AuthorizedKeysCommandUser nobody Bonus : Update sshd_config to allow password authentication from internal RFC1918 networks as seen in this question: Only allow password authentication to SSH server from internal network Useful links: https://github.com/AndriiGrytsenko/openssh-ldap-publickey Private key authentication with pam_ldap EDIT: Added user nobody as suggested TRS-80
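For completeness, attaching a key when you add (or update) a user is a small LDIF change; the DN, admin bind DN, and the key itself are placeholders here.
cat > add-sshkey.ldif <<'EOF'
dn: uid=jdoe,ou=People,dc=example,dc=com
changetype: modify
add: objectClass
objectClass: ldapPublicKey
-
add: sshPublicKey
sshPublicKey: ssh-ed25519 AAAA...truncated... jdoe@laptop
EOF
ldapmodify -x -D 'cn=admin,dc=example,dc=com' -W -f add-sshkey.ldif
sudo -u nobody /path/to/script jdoe   # the AuthorizedKeysCommand helper should now print that key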
{ "source": [ "https://serverfault.com/questions/653792", "https://serverfault.com", "https://serverfault.com/users/130936/" ] }
653,812
I've searched for a viable answer to this question, and most of the answers include advice on why to not do it. However, here's the scenario, and what makes it necessary: I have a console app, and in each user's .profile, there is a startup command for the app, and directly after the command that starts it up, there's an "exit" command, which logs them out of the system. I only want them to be able to access this console app through the interface provided by it. Upon startup, the app presents the user with a list of clients that can be accessed through the app, with each client having their own data directory. Users are granted access to only the clients that they will need access to. Now here's the problem: If I give the users SSH access, they will also be able to log in using an SFTP client, which will give them direct access to the data directories for the app, which is VERY undesirable, since that will also give them access to the data directories to which they should not have access. This was such a simple thing to do when using a telnet/FTP combination, but now that I want to give the users access from anywhere on the internet, I haven't been able to find a way to shut them out of SFTP, while still allowing them access to the shell where they can run the app.
Edit: In case it's not obvious, the following answer isn't intended as a secure method of preventing SFTP from being used by anyone with shell access to the server. It's just an answer that explains how to disable it from external visibility. For a discussion about user level security, see answers from @cpast and @Aleksi Torhamo. If security is your focus, this answer is not the proper one. If simple service visibiliy is your focus, then this is your answer. We now continue to the original answer: Comment out sftp support in sshd_config (and of course restart sshd ): #Subsystem sftp /usr/lib/openssh/sftp-server
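If the real goal is the user-level restriction discussed in the edit above (keeping users inside the console app rather than merely hiding the subsystem), a common pattern is ForceCommand in a Match block; the group name and application path below are assumptions.
cat >> /etc/ssh/sshd_config <<'EOF'
Match Group consoleusers
    ForceCommand /usr/local/bin/consoleapp
    AllowTcpForwarding no
    X11Forwarding no
EOF
sshd -t && service ssh restart   # validate, then restart (the service may be named sshd on other distros)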
{ "source": [ "https://serverfault.com/questions/653812", "https://serverfault.com", "https://serverfault.com/users/260604/" ] }
654,428
I'm aware of "round robin DNS" load balancing, but how can a single IP address be load balanced? Google's DNS servers for example, 8.8.8.8 and 8.8.4.4 . Wikipedia's load balancing article states: For Internet services, the load balancer is usually a software program that is listening on the port where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer. ..which seems reasonable when used with round robin DNS, however for the likes of Google's DNS servers this doesn't seem like a very redundant or capable setup.
http://en.wikipedia.org/wiki/Anycast Anycast is a network addressing and routing methodology in which datagrams from a single sender are routed to the topologically nearest node in a group of potential receivers, though it may be sent to several nodes, all identified by the same destination address. ... Nearly all Internet root nameservers are implemented as clusters of hosts using anycast addressing. 12 of the 13 root servers A-M exist in multiple locations, with 11 on multiple continents. (Root server H exists in two U.S. locations. Root server B exists in a single, unspecified location.) The 12 servers with multiple locations use anycast address announcements to provide a decentralized service. This has accelerated the deployment of physical (rather than logical) root servers outside the United States. RFC 3258 documents the use of anycast addressing to provide authoritative DNS services. Many commercial DNS providers have switched to an IP anycast environment to increase query performance, redundancy, and to implement load balancing.
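A quick way to see anycast in action from the command line; the root-server identity query below is a well-known convention, but whether a given operator answers it is up to them, so treat the exact output as illustrative:

# Ask an anycast root server which instance actually answered (CHAOS-class identity query)
dig @k.root-servers.net hostname.bind chaos txt +short

# Repeating the same query from vantage points in different regions will usually
# return a different instance name, even though the IP address is identical.

# A traceroute to an anycast address from two locations typically also ends at
# different physical nodes:
traceroute 8.8.8.8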
{ "source": [ "https://serverfault.com/questions/654428", "https://serverfault.com", "https://serverfault.com/users/143929/" ] }
654,430
I would like to setup a subdomain on my VPS. I have read a tutorial( http://crm.vpscheap.net/knowledgebase.php?action=displayarticle&id=10 ), but I think it isn't for subdomains. This section should be used only for one subdomain of my domain: ; ; BIND data file for linuxconfig.org ; $TTL 3h @ IN SOA ns1.linuxconfig.org. admin.linuxconfig.org. ( 1 ; Serial 3h ; Refresh after 3 hours 1h ; Retry after 1 hour 1w ; Expire after 1 week 1h ) ; Negative caching TTL of 1 day ; @ IN NS ns1.linuxconfig.org. @ IN NS ns2.linuxconfig.org. linuxconfig.org. IN MX 10 mail.linuxconfig.org. linuxconfig.org. IN A 192.168.0.10 ns1 IN A 192.168.0.10 ns2 IN A 192.168.0.11 www IN CNAME linuxconfig.org. mail IN A 192.168.0.10 ftp IN CNAME linuxconfig.org.
{ "source": [ "https://serverfault.com/questions/654430", "https://serverfault.com", "https://serverfault.com/users/253282/" ] }
654,773
I just took two university courses on computer security and internet programming. I was thinking about this the other day: Web cache proxy servers cache popular content from servers on the web. This is useful, for example, if your company has a 1 Gbps network connection internally (including a web cache proxy server), but only a 100 Mbps connection to the internet. The web cache proxy server can serve cached content much more quickly to other computers on the local network. Now consider TLS-encrypted connections. Can encrypted content be cached in any useful way? There's a great initiative from letsencrypt.org aiming to make all internet traffic encrypted over SSL by default. They are doing this by making it really easy, automated, and free to obtain SSL certificates for your site (starting summer 2015). Considering current yearly costs for SSL certs, FREE is really attractive. My question is: will HTTPS traffic eventually make web cache proxy servers obsolete? If so, what toll will this take on the load of global internet traffic?
Yes, HTTPS will put a damper on network caching, specifically because caching HTTPS requires doing a man-in-the-middle-type attack - replacing the SSL certificate with that of the cache server. That certificate has to be generated on the fly and signed by a local authority. In a corporate environment you can make all PCs trust your cache server's certificates, but other machines will give certificate errors - which they should, since a malicious cache could easily modify the pages. I suspect that sites that use large amounts of bandwidth, like video streaming, will still send content over regular HTTP specifically so it can be cached. But for many sites better security outweighs the increase in bandwidth.
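To make the "trust a local authority" step concrete, a hedged sketch of generating a caching proxy's CA certificate and trusting it on a Debian/Ubuntu client; the on-the-fly per-site certificates are issued by the intercepting proxy software itself, so this only illustrates the trust relationship, and the names are assumptions:

# On the proxy: create a local CA key and certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout cacheCA.key -out cacheCA.crt -subj "/CN=Corp Cache CA"

# On each managed client: add the CA to the system trust store
sudo cp cacheCA.crt /usr/local/share/ca-certificates/cacheCA.crt
sudo update-ca-certificates

# Unmanaged clients that do not trust cacheCA.crt will (correctly) see certificate errors.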
{ "source": [ "https://serverfault.com/questions/654773", "https://serverfault.com", "https://serverfault.com/users/184943/" ] }
655,067
I set up Nginx on a Linux Azure VM. Is it possible to make nginx listen on different ports so that when I change the port number, the content is different? I found there would be a collision if I created two or more HTTP ports on the VM. Can anyone help me with that?
Yes, it is. What you probably want is multiple "server" stanzas, each with a different port, but possibly (probably?) the same server_name, serving the "different" content appropriately within each one, maybe with a different document root in each server. Full documentation is here: http://nginx.org/en/docs/http/server_names.html Example: server { listen 80; server_name example.org www.example.org; root /var/www/port80/ } server { listen 81; server_name example.org www.example.org; root /var/www/port81/ }
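A quick sanity check after adding the second server block (assuming content has been placed in the two document roots shown above); on an Azure VM you may also need to open the extra port in the network security group or endpoint configuration:

# Validate the configuration and reload nginx
nginx -t && nginx -s reload

# Each port should now serve its own document root
curl -s http://localhost:80/ | head
curl -s http://localhost:81/ | head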
{ "source": [ "https://serverfault.com/questions/655067", "https://serverfault.com", "https://serverfault.com/users/260308/" ] }
655,090
Three machines in the production environment had some hardware issues and were decommissioned. The infrastructure team has reinstalled them and gave them the same hostnames and IP addresses. The aim is to run Puppet on these systems so these can be commissioned again. Attempt 1) The old Puppet certificates were removed from the Puppetmaster by issuing the following commands: puppet cert revoke grb16.company.com puppet cert clean grb16.company.com 2) Once the old certificate was removed, a new certificate request was created by issuing the following command from one of the reinstalled nodes: [root@grb16 ~]# puppet agent -t Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml Info: Creating a new SSL certificate request for grb16.company.com Info: Certificate Request fingerprint (SHA256): 6F:2D:1D:71:67:18:99:86:2C:22:A1:14:80:55:34:35:FD:20:88:1F:36:ED:A7:7B:2A:12:09:4D:F8:EC:BF:6D Exiting; no certificate found and waitforcert is disabled [root@grb16 ~]# 3) Once the certificate request was visible on the Puppetmaster, the following command was issued to sign the certificate request: [root@foreman ~]# puppet cert sign grb16.company.com Notice: Signed certificate request for grb16.company.com Notice: Removing file Puppet::SSL::CertificateRequest grb16.company.com at '/var/lib/puppet/ssl/ca/requests/grb16.company.com.pem' [root@foreman ~]# Problem Once the certificate request has been signed and a Puppet run has been started the following error is thrown: [root@grb16 ~]# puppet agent -t Info: Caching certificate for grb16.company.com Error: Could not request certificate: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Exiting; failed to retrieve certificate and waitforcert is disabled [root@grb16 ~]# Running Puppet for the second time results in: [root@grb16 ~]# puppet agent -t Warning: Unable to fetch my node definition, but the agent run will continue: Warning: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Info: Retrieving pluginfacts Error: /File[/var/lib/puppet/facts.d]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet://foreman.company.com/pluginfacts: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Wrapped exception: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Error: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve file metadata for puppet://foreman.company.com/plugins: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Wrapped exception: SSL_connect 
returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Error: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] Warning: Not using cache on failed catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [CRL is not yet valid for /CN=Puppet CA: foreman.company.com] [root@grb16 ~]# Analysis In order to solve the issue, the error message were investigated and it looks like that the problem is SSL or Puppet related. Perhaps one of these packages has been installed incorrectly or a wrong version has been installed on the reinstalled node. Puppet [root@grb16 ~]# yum list installed |grep puppet facter.x86_64 1:2.3.0-1.el6 @puppetlabs_6_products hiera.noarch 1.3.4-1.el6 @puppetlabs_6_products puppet.noarch 3.7.3-1.el6 @puppetlabs_6_products puppetlabs-release.noarch 6-11 @puppetlabs_6_products ruby-augeas.x86_64 0.4.1-3.el6 @puppetlabs_6_deps ruby-shadow.x86_64 1:2.2.0-2.el6 @puppetlabs_6_deps rubygem-json.x86_64 1.5.5-3.el6 @puppetlabs_6_deps SSL [root@grb16 ~]# yum list installed |grep ssl nss_compat_ossl.x86_64 0.9.6-1.el6 @anaconda-CentOS-201410241409.x86_64/6.6 openssl.x86_64 1.0.1e-30.el6_6.4 openssl-devel.x86_64 1.0.1e-30.el6_6.4 [root@grb16 ~]# No discrepancies were found between the SSL and Puppet packages that are installed on various servers. The systems that have not been decommissioned or reinstalled are still able to run Puppet. The issue is restricted to the reinstalled server. Note that Puppet has not been run on the other two reinstalled servers. What is causing this issue and how to solve it?
Concise answer The issue CRL is not yet valid for indicates that the time between the Puppet-agent and the Puppetmaster is out of sync . Sync the time (NTP). Remove the certificate from the Puppet-agent and Puppetmaster as well and run Puppet on the agent. Comprehensive answer CRL is not yet valid for resides in the following snippet. The following test code snippet describes what causes the issue: it 'includes the CRL issuer in the verify error message' do crl = OpenSSL::X509::CRL.new crl.issuer = OpenSSL::X509::Name.new([['CN','Puppet CA: puppetmaster.example.com']]) crl.last_update = Time.now + 24 * 60 * 60 ssl_context.stubs(:current_crl).returns(crl) subject.call(false, ssl_context) expect(subject.verify_errors).to eq(["CRL is not yet valid for /CN=Puppet CA: puppetmaster.example.com"]) end ssl_context let(:ssl_context) do mock('OpenSSL::X509::StoreContext') end subject subject do described_class.new(ssl_configuration, ssl_host) end The code includes snippets from the OpenSSL::X509::CRL class. issuer=(p1) static VALUE ossl_x509crl_set_issuer(VALUE self, VALUE issuer) { X509_CRL *crl; GetX509CRL(self, crl); if (!X509_CRL_set_issuer_name(crl, GetX509NamePtr(issuer))) { /* DUPs name */ ossl_raise(eX509CRLError, NULL); } return issuer; } last_update=(p1) static VALUE ossl_x509crl_set_last_update(VALUE self, VALUE time) { X509_CRL *crl; time_t sec; sec = time_to_time_t(time); GetX509CRL(self, crl); if (!X509_time_adj(crl->crl->lastUpdate, 0, &sec)) { ossl_raise(eX509CRLError, NULL); } return time; } The last_updated time will be the current time plus an additional day and will be passed to the subject function that calls the call function that resides in the default_validator class . class Puppet::SSL::Validator::DefaultValidator #< class Puppet::SSL::Validator attr_reader :peer_certs attr_reader :verify_errors attr_reader :ssl_configuration FIVE_MINUTES_AS_SECONDS = 5 * 60 def initialize( ssl_configuration = Puppet::SSL::Configuration.new( Puppet[:localcacert], { :ca_auth_file => Puppet[:ssl_client_ca_auth] }), ssl_host = Puppet::SSL::Host.localhost) reset! @ssl_configuration = ssl_configuration @ssl_host = ssl_host end def call(preverify_ok, store_context) if preverify_ok ... else ... crl = store_context.current_crl if crl if crl.last_update && crl.last_update < Time.now + FIVE_MINUTES_AS_SECONDS ... else @verify_errors << "#{error_string} for #{crl.issuer}" end ... end end end If preverify_ok is false the else clause is applicable. As if crl.last_update && crl.last_update < Time.now + FIVE_MINUTES_AS_SECONDS results in false because the time has been stubbed with an additional day the else statement will be applicable. The evaluation of @verify_errors << "#{error_string} for #{crl.issuer}" results in CRL is not yet valid for /CN=Puppet CA: puppetmaster.example.com . In order to solve the issue: Sync the time between the Puppet-agent and the Puppetmaster. Does the NTP server run (well) on both nodes? Remove or rename the complete ssl folder ( /var/lib/puppet/ssl ) from the agent. Revoke the cert from the master by issuing sudo puppet cert clean <fqdn-puppet-agent> Sign the cert if autosign is disabled Run puppet on the agent In conclusion, the time on Puppet-agents and Puppetmaster should be synced all the time. Exceeding the maximum allowed deviation of 5 minutes will cause the issue.
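The recovery steps above, collected into one hedged shell sketch; the hostname is the one from this question and the ssl path is the Puppet 3 layout used here, so adjust both for other setups:

# 1. Check that agent and master agree on the time (NTP must be healthy on both)
date -u            # run on both hosts and compare
ntpstat            # or: chronyc tracking

# 2. On the agent: throw away the old certificates and keys
mv /var/lib/puppet/ssl /var/lib/puppet/ssl.bak

# 3. On the master: clean the old cert for this node
puppet cert clean grb16.company.com

# 4. On the agent: request a new cert, then sign it on the master if autosign is off
puppet agent -t
puppet cert sign grb16.company.com   # run on the master

# 5. On the agent: run again and it should fetch a catalog
puppet agent -t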
{ "source": [ "https://serverfault.com/questions/655090", "https://serverfault.com", "https://serverfault.com/users/109833/" ] }
655,616
I'm having some very strange behavior in Tomcat 7 on Ubuntu 14.04. I created a new VPS, installed default-jdk and other simple stuff. Downloaded and unpacked Tomcat 7. Checked that it runs on [myIP]:8080 , and saw Tomcat's index page. Once I rebooted the VPS, I started Tomcat again, and... there is no response on [myIP]:8080 . Not even an error. When I checked the logs, I saw that Tomcat just hangs in state of deployment on the first webapp. My logs : Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server version: Apache Tomcat/7.0.57 Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server built: Nov 3 2014 08:39:16 UTC Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server number: 7.0.57.0 Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: OS Name: Linux Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: OS Version: 3.13.0-37-generic Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Architecture: amd64 Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JAVA_HOME: /usr/lib/jvm/java-7-openjdk-amd64/jre Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JVM Version: 1.7.0_65-b32 Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JVM Vendor: Oracle Corporation Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: CATALINA_BASE: /opt/tomcat Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: CATALINA_HOME: /opt/tomcat Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.endorsed.dirs=/opt/tomcat/endorsed Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Dcatalina.base=/opt/tomcat Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Dcatalina.home=/opt/tomcat Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.io.tmpdir=/opt/tomcat/temp Dec 31, 2014 9:06:04 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib Dec 31, 2014 9:06:04 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:06:04 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 2327 ms Dec 31, 2014 9:06:04 AM org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina Dec 31, 2014 9:06:04 AM org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache 
Tomcat/7.0.57 Dec 31, 2014 9:06:04 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/host-manager Dec 31, 2014 9:11:09 AM org.apache.catalina.util.SessionIdGenerator createSecureRandom INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [303,104] milliseconds. Dec 31, 2014 9:11:09 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deployment of web application directory /opt/tomcat/webapps/host-manager has finished in 304,682 ms Dec 31, 2014 9:11:09 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/manager Dec 31, 2014 9:11:09 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deployment of web application directory /opt/tomcat/webapps/manager has finished in 271 ms Dec 31, 2014 9:11:09 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/docs Dec 31, 2014 9:11:09 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deployment of web application directory /opt/tomcat/webapps/docs has finished in 205 ms Dec 31, 2014 9:11:09 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/examples Dec 31, 2014 9:11:11 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deployment of web application directory /opt/tomcat/webapps/examples has finished in 1,422 ms Dec 31, 2014 9:11:11 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/ROOT Dec 31, 2014 9:11:11 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deployment of web application directory /opt/tomcat/webapps/ROOT has finished in 177 ms Dec 31, 2014 9:11:11 AM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:11:11 AM org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:11:11 AM org.apache.catalina.startup.Catalina start INFO: Server startup in 306957 ms Dec 31, 2014 9:17:35 AM org.apache.coyote.AbstractProtocol pause INFO: Pausing ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:17:35 AM org.apache.coyote.AbstractProtocol pause INFO: Pausing ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:17:35 AM org.apache.catalina.core.StandardService stopInternal INFO: Stopping service Catalina Dec 31, 2014 9:17:36 AM org.apache.coyote.AbstractProtocol stop INFO: Stopping ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:17:36 AM org.apache.coyote.AbstractProtocol stop INFO: Stopping ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:17:36 AM org.apache.coyote.AbstractProtocol destroy INFO: Destroying ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:17:36 AM org.apache.coyote.AbstractProtocol destroy INFO: Destroying ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server version: Apache Tomcat/7.0.57 Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server built: Nov 3 2014 08:39:16 UTC Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server number: 7.0.57.0 Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: OS Name: Linux Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: OS Version: 3.13.0-37-generic Dec 31, 2014 9:20:01 AM 
org.apache.catalina.startup.VersionLoggerListener log INFO: Architecture: amd64 Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JAVA_HOME: /usr/lib/jvm/java-7-openjdk-amd64/jre Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JVM Version: 1.7.0_65-b32 Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JVM Vendor: Oracle Corporation Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: CATALINA_BASE: /opt/tomcat Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: CATALINA_HOME: /opt/tomcat Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.endorsed.dirs=/opt/tomcat/endorsed Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Dcatalina.base=/opt/tomcat Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Dcatalina.home=/opt/tomcat Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.io.tmpdir=/opt/tomcat/temp Dec 31, 2014 9:20:01 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib Dec 31, 2014 9:20:01 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:20:01 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:20:01 AM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1536 ms Dec 31, 2014 9:20:02 AM org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina Dec 31, 2014 9:20:02 AM org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.57 Dec 31, 2014 9:20:02 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/host-manager Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server version: Apache Tomcat/7.0.57 Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server built: Nov 3 2014 08:39:16 UTC Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Server number: 7.0.57.0 Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: OS Name: Linux Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: OS Version: 3.13.0-37-generic Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Architecture: amd64 Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JAVA_HOME: /usr/lib/jvm/java-7-openjdk-amd64/jre Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log 
INFO: JVM Version: 1.7.0_65-b32 Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: JVM Vendor: Oracle Corporation Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: CATALINA_BASE: /opt/tomcat Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: CATALINA_HOME: /opt/tomcat Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.endorsed.dirs=/opt/tomcat/endorsed Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Dcatalina.base=/opt/tomcat Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Dcatalina.home=/opt/tomcat Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.VersionLoggerListener log INFO: Command line argument: -Djava.io.tmpdir=/opt/tomcat/temp Dec 31, 2014 9:33:38 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib Dec 31, 2014 9:33:38 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] Dec 31, 2014 9:33:38 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] Dec 31, 2014 9:33:38 AM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 2495 ms Dec 31, 2014 9:33:39 AM org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina Dec 31, 2014 9:33:39 AM org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.57 Dec 31, 2014 9:33:39 AM org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /opt/tomcat/webapps/host-manager Up until line 74, this is a normal server startup. All standard webapps were deployed (lines 48-68), but after that it just hangs. So, I stopped the server and rebooted my system. Started tomcat again, and the miracle begins. The next session is in lines 89-136, and you can see there there is no entry for Server startup in xxxx ms . It just hangs in deployment and this situation repeats all the time. What could be causing this? I've spent hours struggling with this problem and I'm going crazy and getting nowhere.
A possible problem is Tomcat waiting for entropy to build up. Take a few thread dumps with jstack to see who's waiting on what. Tomcat 7+ relies heavily on the SecureRandom class to provide random values for its session IDs, among other things. Depending on your JRE, this can cause delays during startup if the entropy source used to initialize SecureRandom is short of entropy. If the problem is entropy, there is a way to configure the JRE to use a non-blocking entropy source by setting the following system property: -Djava.security.egd=file:/dev/./urandom See this related discussion for more details.
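A hedged sketch of confirming the diagnosis and applying the fix; setenv.sh is the conventional place for extra JVM options in a stand-alone Tomcat like the one in the question (CATALINA_HOME is /opt/tomcat here), and the PID and thread name are placeholders that may vary:

# While it hangs, look for a startup thread blocked in SecureRandom
# (on Tomcat 7 it is typically named localhost-startStop-1)
jstack <tomcat-pid> | grep -A15 "localhost-startStop"

# How much entropy does the kernel have? Values near zero mean blocking reads on /dev/random
cat /proc/sys/kernel/random/entropy_avail

# Apply the non-blocking entropy source via $CATALINA_HOME/bin/setenv.sh
cat >> /opt/tomcat/bin/setenv.sh <<'EOF'
CATALINA_OPTS="$CATALINA_OPTS -Djava.security.egd=file:/dev/./urandom"
EOF

# Alternatively, an entropy daemon such as haveged can keep the pool filled.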
{ "source": [ "https://serverfault.com/questions/655616", "https://serverfault.com", "https://serverfault.com/users/261920/" ] }
656,079
I created a basic test PostgreSQL RDS instance in a VPC that has a single public subnet and that should be available to connect over the public internet. It uses the default security group, which is open for port 5432. When I try to connect, it fails. I must be missing something very straightforward -- but I'm pretty lost on this. Here're the database settings, note that it's marked as Publicly Accessible : Here're the security group settings, note it's wide open (affirmed in the RDS settings above by the green "authorized" hint next to the endpoint): Here's the command I'm trying to use to connect: psql --host=myinstance.xxxxxxxxxx.us-east-1.rds.amazonaws.com \ --port=5432 --username=masteruser --password --dbname=testdb And this is the result I'm getting when trying to connect from a Yosemite MacBook Pro (note, it's resolving to a 54.* ip address): psql: could not connect to server: Operation timed out Is the server running on host "myinstance.xxxxxxxxxx.us-east-1.rds.amazonaws.com" (54.xxx.xxx.xxx) and accepting TCP/IP connections on port 5432? I do not have any kind of firewall enabled, and am able to connect to public PostgreSQL instances on other providers (e.g. Heroku). Any troubleshooting tips would be much appreciated, since I'm pretty much at a loss here. Update Per comment, here are the inbound ACL rules for the Default VPC:
The issue was that the inbound rule in the Security Group specified a security group as the source. Changing it to a CIDR that included my IP address fixed the issue. Open the database security group in AWS, choose "Edit inbound rules" and then "Add rule". There is a "My IP" option in the dropdown menu; select that option to auto-populate the rule with your computer's public IP address in CIDR notation.
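The same change can be made from the AWS CLI; the security group ID below is a placeholder, and checkip.amazonaws.com is just one way to discover your current public address:

# Find your current public IP and allow it into the database security group on 5432
MYIP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 \
  --cidr "${MYIP}/32"

# Then retry the connection
psql --host=myinstance.xxxxxxxxxx.us-east-1.rds.amazonaws.com --port=5432 \
     --username=masteruser --password --dbname=testdb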
{ "source": [ "https://serverfault.com/questions/656079", "https://serverfault.com", "https://serverfault.com/users/86294/" ] }
656,331
Note: before you jump in too fast, yes I read linuxatemyram.com ! I have a server with 64GB RAM. free -m says that my RAM is full, and it's not because of disk caching: total used free shared buffers cached Mem: 64458 64117 340 201 67 331 -/+ buffers/cache: 63719 739 Swap: 1532 383 1149 However, top ordered by memory usage does not add up to 64GB: KiB Mem: 66005116 total, 65652464 used, 352652 free, 67512 buffers KiB Swap: 1569780 total, 392656 used, 1177124 free. 337464 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6258 mysql 20 0 38.665g 0.033t 4924 S 1.3 54.3 482:26.21 mysqld 2293 root 20 0 165896 102116 101964 S 0.0 0.2 0:43.53 systemd-journal 4909 root 20 0 377548 57840 57548 S 0.0 0.1 0:18.47 rsyslogd 26639 www 20 0 650076 53348 32968 S 0.0 0.1 11:32.27 php-fpm 26640 www 20 0 648344 51912 32984 S 0.0 0.1 11:37.43 php-fpm 26642 www 20 0 648600 51472 32580 S 0.0 0.1 11:37.16 php-fpm 26669 www 20 0 648148 50696 31988 S 0.0 0.1 11:35.24 php-fpm 26643 www 20 0 648452 50616 31628 S 0.0 0.1 11:36.19 php-fpm 26641 www 20 0 648620 50496 31340 S 0.0 0.1 11:36.51 php-fpm 28121 www 20 0 648620 48820 29660 S 0.0 0.1 11:35.75 php-fpm 27231 www 20 0 647508 48804 30760 S 0.0 0.1 11:35.61 php-fpm 28029 www 20 0 648044 48752 30172 S 0.0 0.1 11:37.20 php-fpm 28117 www 20 0 647868 48700 30296 S 0.0 0.1 11:36.45 php-fpm 28122 www 20 0 648340 48568 29676 S 0.0 0.1 11:35.73 php-fpm 8569 www 20 0 649028 40268 20704 S 0.0 0.1 11:31.50 php-fpm 10126 www 20 0 648432 39420 20700 S 0.0 0.1 9:58.52 php-fpm 22386 www 20 0 647996 39400 20868 S 0.0 0.1 11:25.00 php-fpm 9643 www 20 0 647976 39220 20704 S 0.0 0.1 11:29.23 php-fpm 23077 www 20 0 647852 39084 20692 S 0.0 0.1 11:11.80 php-fpm 10139 www 20 0 647580 38808 20692 S 0.0 0.1 9:59.94 php-fpm 6326 www 20 0 647368 38396 20696 S 0.7 0.1 8:32.34 php-fpm 4727 www 20 0 646128 37304 20692 S 0.0 0.1 8:30.20 php-fpm 5459 www 20 0 645988 37156 20688 S 0.0 0.1 7:15.13 php-fpm 2173 www 20 0 645240 36408 20684 S 0.0 0.1 4:39.13 php-fpm 20752 www 20 0 644536 35428 20680 S 0.0 0.1 4:29.78 php-fpm 5396 www 20 0 644468 35324 20692 S 0.0 0.1 4:14.65 php-fpm 17558 www 20 0 642668 33816 20740 S 0.0 0.1 1:28.34 php-fpm 28133 www 20 0 642780 33636 20704 S 0.0 0.1 0:49.88 php-fpm 10925 www 20 0 479584 29264 11212 S 3.0 0.0 0:00.09 php 26632 root 20 0 552136 26072 19468 S 0.0 0.0 0:37.74 php-fpm 4946 named 20 0 697996 18748 2104 S 0.0 0.0 3:46.96 named 15609 apache 20 0 2137056 8120 1592 S 0.0 0.0 0:56.18 httpd 8584 root 20 0 133432 4864 3700 S 0.0 0.0 0:00.08 sshd MySQL uses 54.3% alone, this is perfectly normal as it has an innodb_buffer_pool_size of 32G . The other processes' memory usage add up to roughly 2.8%, that's a total of 57.1%. Where are the 32% remaining? 
Edit: contents of /proc/meminfo : MemTotal: 66005116 kB MemFree: 353272 kB Buffers: 66328 kB Cached: 736620 kB SwapCached: 11348 kB Active: 34396680 kB Inactive: 2651132 kB Active(anon): 34223240 kB Inactive(anon): 2228020 kB Active(file): 173440 kB Inactive(file): 423112 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 1569780 kB SwapFree: 1177448 kB Dirty: 328 kB Writeback: 0 kB AnonPages: 36234364 kB Mapped: 125208 kB Shmem: 206396 kB Slab: 28058904 kB SReclaimable: 28010224 kB SUnreclaim: 48680 kB KernelStack: 2760 kB PageTables: 94780 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 34572336 kB Committed_AS: 38572348 kB VmallocTotal: 34359738367 kB VmallocUsed: 382304 kB VmallocChunk: 34359353572 kB HardwareCorrupted: 0 kB DirectMap4k: 9000 kB DirectMap2M: 2054144 kB DirectMap1G: 67108864 kB Output of slabtop : Active / Total Objects (% used) : 147380425 / 147413026 (100.0%) Active / Total Slabs (% used) : 7005839 / 7005839 (100.0%) Active / Total Caches (% used) : 71 / 144 (49.3%) Active / Total Size (% used) : 27615020.12K / 27627490.91K (100.0%) Minimum / Average / Maximum Object : 0.01K / 0.19K / 16.12K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 146851887 146851887 12% 0.19K 6992947 21 27971788K dentry 124936 124936 100% 0.07K 2231 56 8924K Acpi-ParseExt 105144 105144 100% 0.10K 2696 39 10784K buffer_head 49920 49172 98% 0.06K 780 64 3120K kmalloc-64 29916 29916 100% 0.11K 831 36 3324K sysfs_dir_cache 29856 29661 99% 0.12K 933 32 3732K kmalloc-128 21450 21128 98% 0.18K 975 22 3900K vm_area_struct 19328 19328 100% 0.03K 151 128 604K kmalloc-32 18258 13383 73% 0.93K 537 34 17184K ext4_inode_cache 17952 11651 64% 0.04K 176 102 704K ext4_extent_status 16828 6513 38% 0.55K 601 28 9616K radix_tree_node 14400 13996 97% 0.06K 225 64 900K anon_vma 11645 7903 67% 0.05K 137 85 548K shared_policy_node 10710 7006 65% 0.19K 510 21 2040K kmalloc-192 10608 10608 100% 0.04K 104 102 416K Acpi-Namespace 9728 9728 100% 0.01K 19 512 76K kmalloc-8 ...
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 146851887 146851887 12% 0.19K 6992947 21 27971788K dentry You say it's not because of disk caching, but clearly it is. My bet is that you have code that makes lots of fetches for files that do not exist and you get a ton of negative caching. Linux will remove these entries if it's under memory pressure, so it may not be anything to worry about. For example, like in this NSS issue .
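A few hedged commands to confirm that the "missing" memory is reclaimable dentry cache and to find the code doing the failed lookups; the PID is a placeholder for whatever process you suspect (php-fpm, mysqld, a cron job, ...):

# The slab cache is counted in SReclaimable, not in "cached", which is why free looks scary
grep -E 'Slab|SReclaimable' /proc/meminfo
slabtop -o | head -20

# Watch a suspect process for lookups of files that don't exist (negative dentries)
strace -f -e trace=file -p <pid> 2>&1 | grep ENOENT | head

# Proof that the memory is reclaimable: drop dentries/inodes (safe, but costs warm caches)
sync && echo 2 > /proc/sys/vm/drop_caches
grep SReclaimable /proc/meminfo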
{ "source": [ "https://serverfault.com/questions/656331", "https://serverfault.com", "https://serverfault.com/users/83039/" ] }
656,728
Some remodeling was done while I was away from work, and I'm left with several unlabeled Cat5 cables that are terminated with RJ45 keystone modules. On the other end, all of the cables are terminated at the patch panels in the server room. Some of the cables are connected to the switches and are active data lines. I can verify the lines are active by plugging a laptop into the keystone modules: the DHCP server assigns an address and I'm in. During my attempts to trace the lines using a toner and probe, I noticed that the lines that are active have a very weak tone signal. It is so weak that I can't trace it. The other lines, however, I was able to trace without any issues. Does anyone know why the signal is so weak? Is it because the line is active and data packets are watering down the toner's signal?
During my attempts to trace the lines using a toner and probe, I noticed that the lines that are active have a very weak tone signal. It is so weak, I can't trace it. The other lines, however, I was able to trace without any issues. Anyone know why the signal is so weak? Is it because the line is active and data packets are watering down the toner's signal? Most of the cheap tone generators/tracers won't be able to properly tone out a cable that is "active" (plugged into a switch with an active connection). Some of the nicer ones will, such as the Fluke IntelliTone Pro 200 . From their site: Modern network devices use aggressive termination schemes for cables connected to their ports. While this termination reduces noise and crosstalk in the cable, it can also absorb an analog toner signal, making the connected cable impossible to detect with an analog audio probe. Otherwise, you could see if your toner allows the generator to change cable pairs (some with wire clips would allow this, but you'd have to cut and reterminate the cable after). My suggestions for the ones that you want to trace that are plugged in if you don't have a more advanced toner like the Fluke one would be to plug your laptop in and out while a co-worker watches the front of the switch. Simple, effective. For tracing a live server back to the switch, like Joe said, if it's a managed switch just check which port it is connected to in the mac forwarding table. If it isn't a managed switch then figure out some downtime and do like I said above. :)
{ "source": [ "https://serverfault.com/questions/656728", "https://serverfault.com", "https://serverfault.com/users/221885/" ] }
657,967
Just recently, I had a developer accidentally try to restore a database to production, when he should have been restoring to a staging copy. It's easy to do, given that the db names are similar, i.e., CustomerName_Staging versus CustomerName_Production. Ideally, I'd have these on entirely separate boxes, but that is cost prohibitive, and, strictly speaking, it doesn't prevent the same thing from happening if the user connects to the wrong box. This is not a security problem, per se - it was the correct user working with the staging database, and if there is work to be done on the production database, it would be him as well. I'd love to have a deployment officer to separate out those concerns, but the team isn't big enough for that. I'd love to hear some advice in terms of practice, configuration and controls on how to prevent this.
If this is something you see yourself doing often, automate it. And since you're both developers, writing some code should be in your wheelhouse. :) Seriously though... by automating it, you can do things like: Verify that you're restoring on the correct server (i.e. no dev -> prod restores) Verify that it's the right "type" of database (in your case, "staging" and "production") Figure out what backup(s) to restore automatically by looking at the backup tables in msdb Et cetera. You're only limited by your imagination.
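A hedged sketch of the kind of guard rail being suggested here, written as a thin wrapper around sqlcmd; the naming convention (_Production suffix), the arguments and the restore statement are assumptions to illustrate the idea, not a complete restore tool:

#!/usr/bin/env bash
# restore_db.sh <server> <database> <backup_file> [--force]
set -euo pipefail
SERVER="$1"; DB="$2"; BACKUP="$3"; FORCE="${4:-}"

# Refuse to touch anything that looks like production unless explicitly forced
if [[ "$DB" == *_Production && "$FORCE" != "--force" ]]; then
  echo "Refusing to restore over production database '$DB' (use --force to override)" >&2
  exit 1
fi

sqlcmd -S "$SERVER" -Q "RESTORE DATABASE [$DB] FROM DISK = N'$BACKUP' WITH REPLACE"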
{ "source": [ "https://serverfault.com/questions/657967", "https://serverfault.com", "https://serverfault.com/users/34964/" ] }
658,071
When hosting a new service these days, what would be the best decision: IPv4 or IPv6? If we decide to launch it on an IPv4 address: How easy/difficult is it to get an IPv4 address (considering they are getting exhausted soon)? Can it be ported easily to IPv6 in the future? How will existing IPv6 users be able to communicate with it? If we decide to launch it on an IPv6 address: How will existing IPv4 users be able to communicate with it?
IPv4 and IPv6 are separate protocols that don't talk to each other. You'll have to support both protocols for now. Getting IPv4 addresses is getting more difficult and expensive, but you'll have to make your service available over it because not all users will have IPv6. On the other side there will be users who don't have full IPv4 anymore. They might have to share their IPv4 address with many others, they only have IPv6 and need a translation service to reach IPv4 services etc. For those users and for future users you want to offer your service over IPv6 so that they can reach it in the most optimal way. And hopefully in the not-so-distant future everybody will have IPv6 and we can get rid of IPv4 and the hacks and costs required to keep it working. One way you could start your new service is to build everything for IPv6-only and put a translator (SIIT-DC or reverse proxy) next to it to translate incoming requests over IPv4 to IPv6. You'll be able to handle both protocols for now, and it will also be easy to clean up and remove the obsolete IPv4 stuff later. This strategy is especially useful if your service runs on a cluster of servers. The whole cluster can run IPv6-only and you need only one IPv4 address on your translator. It's easier to only have to maintain one protocol on the majority of your machines and requiring less IPv4 addresses can also save you money. That's why companies like Facebook are doing this as well.
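A quick way to check whether a service is actually reachable over both protocols; example.com is a placeholder, and the IPv6 tests obviously require IPv6 connectivity on the client:

# Does the name publish both address families?
dig +short A www.example.com
dig +short AAAA www.example.com

# Can the service be reached over each protocol?
curl -4 -sI https://www.example.com | head -1
curl -6 -sI https://www.example.com | head -1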
{ "source": [ "https://serverfault.com/questions/658071", "https://serverfault.com", "https://serverfault.com/users/162007/" ] }
658,085
In an organization where the hardware maintenance team is separated from the OS platform and operations team, 3Ware's RAID controllers have been in use together with the 3DM2 web service opened up to the hardware maintenance team for RAID device management. This allowed the hardware maintenance team to do the basic tasks like swapping drives, reconfiguring arrays or maintenance runs without bothering the platform operations team and, most importantly, without having local logon accounts to the operating systems: As the 3Ware RAID controllers are being phased out throughout the organization and replaced by LSI models, there is a need to have a similar facility for the new controllers which also would support the OSes in use (Windows Server 2008 R2 - 2012 R2/ SLES 11 - 12, CentOS 6). I know about local management facilities like MegaCLI, StorCLI or the Storage Manager (which is only available for Windows), but all of them require local interactive logons. The SNMP agent seems rather dated, also I have been unable to find a straightforward way to make use of SNMP for anything but monitoring purposes. So is there anything available to fill the management gap?
{ "source": [ "https://serverfault.com/questions/658085", "https://serverfault.com", "https://serverfault.com/users/76595/" ] }
658,367
I have php-fpm in a docker container and in the Dockerfile I edit the fpm config file ( /etc/php5/fpm/pool.d/www.conf ) to set up access logs to go to /var/log/fpm-access.log and error logs to go to /var/log/fpm-php.www.log : # Do some php-fpm config # Redirect worker stdout and stderr into main error log # Activate the fpm access log # Enable display errors # Enable the error log RUN sed -i '/^;catch_workers_output/ccatch_workers_output = yes' /etc/php5/fpm/pool.d/www.conf && \ sed -i '/^;access.log/caccess.log = /var/log/fpm-access.log' /etc/php5/fpm/pool.d/www.conf && \ sed -i '/^;php_flag\[display_errors\]/cphp_flag[display_errors] = off' /etc/php5/fpm/pool.d/www.conf && \ sed -i '/^;php_admin_value\[error_log\]/cphp_admin_value[error_log] = /var/log/fpm-php.www.log' /etc/php5/fpm/pool.d/www.conf && \ sed -i '/^;php_admin_flag\[log_errors\]/cphp_admin_flag[log_errors] = on' /etc/php5/fpm/pool.d/www.conf This works fine - I can get a shell into the container to see the logs. But... it is not best-practice. The problem is when I try to use the docker log collector - I need php-fpm to log to stdout or stderr so docker can capture them and provide them to the docker logs command. I tried to do this in the Dockerfile (which is a idea I copied from the official nginx docker image ): # Redirect fpm logs to stdout and stderr so they are forwarded to the docker log collector RUN ln -sf /dev/stdout /var/log/fpm-access.log && \ ln -sf /dev/stderr /var/log/fpm-php.www.log This is not working - no access logs are seen from docker logs - I'm trying to figure out why? Did anyone else that uses fpm in docker manage to get logging working to the docker log collector?
OK, the way to do this is to send the error and the access logs to the following path: /proc/self/fd/2 In the php-fpm configuration (the pool config for access.log, the global php-fpm config for error_log) add: access.log = /proc/self/fd/2 error_log = /proc/self/fd/2 NOTE: access.log is the correct directive name; see https://www.php.net/manual/en/install.fpm.configuration.php
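Applied to the setup in the question, a hedged sketch of the two sed lines that could go in a Dockerfile RUN step like the ones already shown; the paths are the Debian/Ubuntu php5 layout from the question, so adjust them for other versions:

# Point the fpm access log (pool config) and error log (global config) at the
# container's stderr so `docker logs` picks them up
sed -i '/^;*access.log/c\access.log = /proc/self/fd/2' /etc/php5/fpm/pool.d/www.conf
sed -i '/^;*error_log/c\error_log = /proc/self/fd/2' /etc/php5/fpm/php-fpm.conf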
{ "source": [ "https://serverfault.com/questions/658367", "https://serverfault.com", "https://serverfault.com/users/21415/" ] }
658,433
For the past week I've been getting a huge stream of traffic from a wide range of Chinese IP addresses. This traffic appears to be from normal people and their HTTP requests indicate that they think I'm: Facebook The Pirate Bay various BitTorrent trackers, porn sites All of which sounds like things people would use a VPN for. Or things that would make Great Wall of China angry. User-agents include web browsers, Android, iOS, FBiOSSDK, Bittorrent. The IP addresses are normal commercial Chinese providers. I have Nginx returning 444 if the host is incorrect or the user agent is obviously wrong: ## Deny illegal Host headers if ($host !~* ^({{ www_domain }})$ ) { return 444; } ## block bad agents if ($http_user_agent ~* FBiOSSDK|ExchangeWebServices|Bittorrent) { return 444; } I can handle the load now, but there were some bursts of up to 2k/minute. I want to find out why they are coming to me and stop it. We also have legitimate CN traffic, so banning 1/6th of planet earth is not an option. It is possible that its malicious and even personal, but it may just be a misconfigured DNS over there. My theory is that its a misconfigured DNS server or possibly some VPN services that people are using to get around Great Fire Wall. Given a client IP address: 183.36.131.137 - - [05/Jan/2015:04:44:12 -0500] "GET /announce?info_hash=%3E%F3%0B%907%7F%9D%E1%C1%CB%BAiF%D8C%DE%27vG%A9&peer_id=%2DSD0100%2D%96%8B%C0%3B%86n%8El%C5L%11%13&ip=183.36.131.137&port=11794&uploaded=4689970239&downloaded=4689970239&left=0&numwant=200&key=9085&compact=1 HTTP/1.0" 444 0 "-" "Bittorrent" I can know: descr: CHINANET Guangdong province network descr: Data Communication Division descr: China Telecom How can I find out what DNS server those customers are using ? Is there anyway to determine if an HTTP request is coming from a VPN ? What is really going on here ?
There is one theoretical way of determining the DNS resolver of your clients, but it's quite advanced and I don't know of any off-the-shelf software that will do it for you. You will have to run an authoritative DNS server for this in addition to your nginx. In case the HTTP Host header is incorrect, serve an error document and include in it a request to a dynamically created, unique FQDN for each and every request, which you log to a database, e.g. http://e2665feebe35bc97aff1b329c87b87e7.example.com/img.png As long as China's Great Firewall doesn't fiddle with that request and the client requests the document from that unique FQDN+URI, each request will result in a new DNS lookup to your authoritative DNS for example.com, where you can log the IP of the DNS resolver and later correlate it with your dynamically generated URIs.
{ "source": [ "https://serverfault.com/questions/658433", "https://serverfault.com", "https://serverfault.com/users/52828/" ] }
659,452
I have several servers from the HP DL360 line (generations 5-8). Each of these servers has two power supplies installed. The 2 power supplies in each server are fed from different circuits. My question is will the power draw be roughly balanced between these two circuits, or do the servers consider one power supply to be a "primary" and the other to be a "backup" with a smaller power draw?
I'm going to give an HP Proliant-specific answer here, since the OP is asking about the HP product line. Let's use the example of an HP DL360p Gen8 (also applies to G6, G7 and Gen9 servers): You have the option to configure the Redundant Power Supply Mode in the HP Rom-Based Setup Utility (press F9 at boot) with: Balanced Mode High Efficiency Mode Of the two main options, there's some granularity in the High-Efficiency Mode: High Efficiency Mode provides the most power efficient operation with redundant power supplies by keeping one power supply in standby mode at lower power usage levels. Balanced Mode shares the power equally between both power supplies. In addition, the "Auto" mode chooses between one PSU or the other as primary, depending on the serial number. It's a way to randomize the distribution in a datacenter situation with multiple servers. Also see: Maxing out both PDUs in a rack with redundant power An example of the Balanced Mode on a busy server: hpasmcli> SHOW POWERSUPPLY Power supply #1 Present : Yes Redundant: Yes Condition: Ok Hotplug : Supported Power : 110 Watts Power supply #2 Present : Yes Redundant: Yes Condition: Ok Hotplug : Supported Power : 95 Watts An example of the High Efficiency Mode on an idle server: hpasmcli> SHOW POWERSUPPLY Power supply #1 Present : Yes Redundant: Yes Condition: Ok Hotplug : Supported Power : 50 Watts Power supply #2 Present : Yes Redundant: Yes Condition: Ok Hotplug : Supported Power : 20 Watts An example of the High Efficiency Mode on a busy server: hpasmcli> SHOW POWERSUPPLY Power supply #1 Present : Yes Redundant: Yes Condition: Ok Hotplug : Supported Power : 90 Watts Power supply #2 Present : Yes Redundant: Yes Condition: Ok Hotplug : Supported Power : 20 Watts Detailing the relative efficiency of single PSU, load-balanced PSUs and High Efficiency mode on a 750W power supply.
{ "source": [ "https://serverfault.com/questions/659452", "https://serverfault.com", "https://serverfault.com/users/5318/" ] }
660,007
On a domain, in the DNS settings, there is an SRV record named: _autodiscover._tcp and its value is: 0 10 443 autodiscover.*hostname*.net. What is it and what does it do? I am migrating websites to a new server and I need to know how this will work with the new server on a different host.
SRV DNS records allow the use of DNS for publishing services and service discovery. Their main use is to allow services to run easily on non-standard ports and to reduce the configuration burden when setting up clients. An SRV record has the following form: _Service._Protocol.Name. TTL Class SRV Priority Weight Port Target Service : the symbolic name of the service. Protocol : the transport protocol of the service; this is usually either TCP or UDP. Name : the domain name, terminated with a . , for which this record is valid - often omitted in DNS shorthand, which then defaults to the zone name. TTL : standard DNS time-to-live field. Class : standard DNS class field (this is always IN for Internet). Priority : the priority of the target host; a lower value means more preferred. Weight : a relative weight for records with the same priority. Port : the TCP or UDP port on which the service is to be found. Target : the canonical hostname of the machine providing the service. Yours appears to be an example of an autodiscovery service :) pointing to TCP port 443 on the aptly named host autodiscover.*hostname*.net. One such autodiscovery service seems to be used in automatically configuring MS Outlook, but that may not be the only use case.
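You can query the record directly to see those fields; the domain below is a placeholder for the zone being migrated:

# Priority, weight, port and target, in that order
dig +short SRV _autodiscover._tcp.example.com
# example output: 0 10 443 autodiscover.example.com.

# nslookup equivalent, which also works on Windows
nslookup -type=SRV _autodiscover._tcp.example.com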
{ "source": [ "https://serverfault.com/questions/660007", "https://serverfault.com", "https://serverfault.com/users/265320/" ] }
660,160
Why are there two ways to set up SFTP with OpenSSH, and when should I use which? Is there any difference between them? I mean, the first one uses a lib from OpenSSH and the second one says "use the internal", so is it also OpenSSH? Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp
Both sftp-server and internal-sftp are part of OpenSSH. The sftp-server is a standalone binary. The internal-sftp is just a configuration keyword that tells sshd to use the SFTP server code built-into the sshd , instead of running another process (what would typically be the sftp-server ). The internal-sftp was added much later (OpenSSH 4.9p1 in 2008?) than the standalone sftp-server binary. But it is used in the default configuration file now. The sftp-server , which is now redundant and is kept probably for backward compatibility. I believe there's no reason to use the sftp-server for new installations. From a functional point of view, the sftp-server and internal-sftp are almost identical. They are built from the same source code. The main advantage of the internal-sftp is, that it requires no support files when used with ChrootDirectory directive . Quotes from the sshd_config(5) man page : For Subsystem directive : The command sftp-server implements the SFTP file transfer subsystem. Alternately the name internal-sftp implements an in-process SFTP server. This may simplify configurations using ChrootDirectory to force a different filesystem root on clients. For ForceCommand directive : Specifying a command of internal-sftp will force the use of an in-process SFTP server that requires no support files when used with ChrootDirectory . For ChrootDirectory directive : The ChrootDirectory must contain the necessary files and directories to support the user's session. For an interactive session this requires at least a shell, typically sh , and basic /dev nodes such as null , zero , stdin , stdout , stderr , and tty devices. For file transfer sessions using SFTP no additional configuration of the environment is necessary if the in-process sftp-server is used, though sessions which use logging may require /dev/log inside the chroot directory on some operating systems (see sftp-server for details). Another advantage of the internal-sftp is a performance, as it's not necessary to run a new sub-process for it. It may seem that the sshd could automatically use the internal-sftp , when it encounters the sftp-server , as the functionality is identical and the internal-sftp has even the above advantages. But there are edge cases, where there are differences. Few examples: Administrator may rely on a login shell configuration to prevent certain users from logging in. Switching to the internal-sftp would bypass the restriction, as the login shell is no longer involved. Using the sftp-server binary (being a standalone process) you can use some hacks, like running the SFTP under sudo . For SSH-1 (if anyone is still using it), Subsystem directive is not involved at all. An SFTP client using SSH-1 tells the server explicitly, what binary the server should run. So legacy SSH-1 SFTP clients have the sftp-server name hard-coded.
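A couple of hedged commands for checking which variant a server is using and switching to the in-process server; the config path is the usual /etc/ssh/sshd_config and the restart command varies by distro:

# What is configured, and what is sshd actually running with?
grep -i '^subsystem' /etc/ssh/sshd_config
sshd -T | grep -i subsystem

# Switch to the in-process SFTP server, then validate and restart
sed -i 's|^Subsystem[[:space:]]\+sftp.*|Subsystem sftp internal-sftp|' /etc/ssh/sshd_config
sshd -t && systemctl restart sshd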
{ "source": [ "https://serverfault.com/questions/660160", "https://serverfault.com", "https://serverfault.com/users/185743/" ] }
660,210
I Can't start CentOS 7 "network" service after disabling and removing "NetworkManager" service. When I check the network service status, it comes up with the following error: #systemctl status network.service network.service - LSB: Bring up/down networking Loaded: loaded (/etc/rc.d/init.d/network) Active: failed (Result: exit-code) since Fri 2015-01-16 22:30:46 GMT; 38s ago Process: 4857 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE) Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain network[4857]: RTNETLINK answers: File exists Jan 16 22:30:46 localhost.localdomain systemd[1]: network.service: control process exited, code=exited status=1 Jan 16 22:30:46 localhost.localdomain systemd[1]: Failed to start LSB: Bring up/down networking. Jan 16 22:30:46 localhost.localdomain systemd[1]: Unit network.service entered failed state. In earlier CenOS it didn't seem to give any problems when switching from the "NetworkManager" service to the network service. Any ideas as to what causes the problem and how to fix it? Note: I used yum erase to remove network manage service. Here is additional info as asked: /etc/sysconfig/network-script/ifcfg-enp8s0 TYPE=Ethernet BOOTPROTO=dhcp DEFROUTE=yes IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=yes IPV6_DEFROUTE=yes IPV6_FAILURE_FATAL=no NAME=enp8s0 UUID=453a07fe-1b07-4f29-bc32-f2168e50706a ONBOOT=yes HWADDR=XXXXXXXXXXX MACADDR=XXXXXXXXXX PEERDNS=yes PEERROUTES=yes IPV6_PEERDNS=yes IPV6_PEERROUTES=yes /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 /etc/resolv.conf ; generated by /usr/sbin/dhclient-script search customer.marples.midcity.lan nameserver 10.241.128.1
In Centos7.0 disabling NetworkManager will leave a dhcp client running configured for NetworkManager. This causes the error message RTNETLINK answers: File exists when the network service is started. The stale dhclient process has the additional "benefit" that when the lease expires your dhclient will choke, since it cannot reach NetWorkManager, thus removing your IP address. If you grep for it, you will see that it points to a NetWorkManager configuration file. [root@host ~]# ps -ef | grep dhc root 1865 792 0 Apr28 ? 00:00:00 /sbin/dhclient -d -sf \ /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eno1.pid -lf\ /var/lib/NetworkManager/dhclient-c96e56d3-a4c9-4a87-85ca-93dc0ca268f2-eno1.lease\ -cf /var/lib/NetworkManager/dhclient-eno1.conf eno1 So what you can do is kill the dhclient and only then start your network service.
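Roughly, the cleanup described above could look like this (the PID and the interface name eno1 come from the example output; substitute your own, e.g. enp8s0 from the question):
# find the dhclient that NetworkManager left behind
ps -ef | grep dhclient
# kill it, using the PID you actually see
kill 1865
# then bring networking up the classic way
systemctl start network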
{ "source": [ "https://serverfault.com/questions/660210", "https://serverfault.com", "https://serverfault.com/users/212687/" ] }
660,422
Why is it that Kosovo still hasn't got its own ccTLD? Kosovo has been (semi-)independent from Serbia (former Yugoslavia) since 2008. Montenegro has been independent since 2006. Montenegro has had the .me domain since its year of independence. Even Palestine (which isn't fully recognized) has its own ccTLD.
ICANN explains this pretty well in their blog ( https://www.icann.org/news/blog/abkhazia-kosovo-south-ossetia-transnistria-my-oh-my ): As at this time, Abkhazia, Kosovo, Transnistria, Somaliland, South Ossetia and others are not in the ISO 3166-1 standard, so ICANN is not in a position to grant any corresponding country-code domain for them. By strictly adhering to the ISO 3166-1 standard, we ensure that ICANN remains neutral by relying upon a widely recognised and impartial international standard.
{ "source": [ "https://serverfault.com/questions/660422", "https://serverfault.com", "https://serverfault.com/users/265637/" ] }
660,451
Running a web-service that binds to port 80 usually doesn't require sudoer privileges. Since ports 80/443 are system ports, meaning they can only be used by privileged users, how come those services are still able to bind to these ports?
There are basically two different approaches: Initially start running as root, bind to the privileged port, and then drop down to an unprivileged user. inetd, or xinetd runs privileged, and forwards the requests to web server running unprivileged.
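To make the first approach concrete, here is a small, hypothetical Python sketch (not part of the original answer) that binds port 80 as root and then drops to an unprivileged account; the account name nobody is an assumption and error handling is omitted:
import os
import pwd
import socket

# must be started as root so the privileged port can be bound
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 80))
sock.listen(5)

# drop privileges now that the socket is already bound
unpriv = pwd.getpwnam("nobody")
os.setgid(unpriv.pw_gid)
os.setuid(unpriv.pw_uid)

# from here on the process serves requests without root rights
conn, addr = sock.accept()
conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
conn.close()
On Linux there is also a third route, granting the binary the cap_net_bind_service capability with setcap, though the answer above does not cover it.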
{ "source": [ "https://serverfault.com/questions/660451", "https://serverfault.com", "https://serverfault.com/users/142028/" ] }
661,000
How can I determine if a Windows 2003 server is still being used by anyone/thing, and if it is, what it is being used for? I'm drawing a blank on what else to check other than event viewer to see what accounts are connecting to the server.
This is not a dumb question, it's a great question and I'm glad that you're asking. Human processes Make sure that you've reviewed all documentation, talked to the greybeards, and have sign-off from someone from the business. Technical processes Get a complete backup; mark the media for long-term archival. Run a connection monitor or packet sniffer for a period of time to see what connections are still being made. Inspect the services to see if anything sounds important/familiar. Cutting the cord Better idea than powering off - unplug the network cable for a few days. If it's an old physical machine, you don't want to risk the situation where you need to power it back up but the disk spindles are frozen. Leave them spinning. Source of authority - I spent over a year decommissioning old servers for a Fortune 25 pharma company. This was the process, and it worked.
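For the "connection monitor" step on a Windows 2003 box, a few built-in commands give a quick first look (illustrative, not from the original answer):
rem list open connections with the owning process IDs
netstat -ano
rem map those PIDs to services
tasklist /svc
rem show SMB sessions opened from other machines
net session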
{ "source": [ "https://serverfault.com/questions/661000", "https://serverfault.com", "https://serverfault.com/users/266022/" ] }
661,857
A coworker just demonstrated to me that accounts in our test AD was able to authenticate when replacing every a character in their samAccountName with Danish character å (ASCII 134 / å ). E.g. the user <domain>\aaa can authenticate as ååå . I tried reproducing this in a freshly provisioned W2K12R2 AD (single server, all standard values), and it works there too. I created an account aaa (never touching the letter å in the process, so that nothing contains å ) and ran: PS C:\Users\Administrator> runas /user:ååå notepad Enter the password for ååå: Attempting to start notepad as user "DEV-DLI\ååå" ... PS C:\Users\Administrator> which caused notepad to start, running as aaa . The same seems to hold true for o and Danish character ø , while the last Danish special char æ does not seem to correspond to any other character. With user aaa in AD, trying to create a user with samAccountName ååå will fail, informing you that The user logon name you have chosen is already in use (...) . I have googled like a madman, but have been unable to find out what is going on. Does anyone have any hints as to why this works?
This is by design. In short, Active Directory maps the accented/diacritical characters to their "simple" form. Please see the following Microsoft Support article. Windows logon behavior if your user name contains characters that have accents or other diacritical marks (Dead link) (Live version archived here ): If your user name in the Active Directory directory service contains one or more characters that have accents or other diacritical marks, you may find that you do not have to use the diacritical mark as you type your user name to log on to Windows. You can log on by using the simple form of the character or characters. For example, if your user name in Active Directory is jésush, you can type jesush in the User name box in the Log On to Windows dialog box to log on to Windows. This behavior occurs so that in situations when you have to log on to Windows from a computer where the preferred keyboard mapping is not installed, you can still log on to Windows by using your user name without the diacritical marks.
{ "source": [ "https://serverfault.com/questions/661857", "https://serverfault.com", "https://serverfault.com/users/266637/" ] }
661,909
I have a Docker container with software installed and configured. No program in it is supposed to run all the time. What I want is the ability to start a command on demand, depending on external events, like: docker exec mysupercont /path/to/mycommand -bla -for and docker exec mysupercont /path/to/myothercommand But "exec" is impossible when the container is stopped, and this container also holds some "working" data that those commands use, so I can't use docker run ... each time, because that recreates the container from the image and destroys my data. What is the "right" and "best" way to keep such a container running? Which command should I start inside it?
You do not need to perform docker run each time. docker run is actually a sequence of two commands: "create" and "start". When you run the container, you must specify " -it ": -i, --interactive=false Keep STDIN open even if not attached -t, --tty=false Allocate a pseudo-TTY Example: docker run -it debian:stable bash The container keeps running until the command specified at startup (in my example, bash) finishes. For example, once you type "exit", the container stops: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1329c99a831b debian:stable "bash" 51 seconds ago Exited (0) 1 seconds ago goofy_bardeen Now you can start it again: docker start 1329c99a831b The container is started and again executes the command "bash". Connect to this "bash" session with the command docker attach 1329c99a831b To sum up: you have to understand the difference between run and start for a container. Also, look at the documentation for the role of the " -i ", " -t " and " -d " parameters of "run".
{ "source": [ "https://serverfault.com/questions/661909", "https://serverfault.com", "https://serverfault.com/users/90115/" ] }
661,913
I am in the midst of migrating to a Hyper-V 2012 R2 cluster and am currently configuring the first host to connect to the iSCSI CSV via MPIO. I "think" I have everything working properly but when I try to create a log file to verify via the MPIO properties (GUI) or invoke the "mpclaim -v" powershell command (or command prompt) while elevated as Admin, I get the following errors; GUI - Failed to probe MPIO storage configuration. Access is denied. Elevated Powershell/CMD - File creation failed. C:\Windows\System32\MPIO_Configuration.log. Error 5 Failed to write MPIO configuration to file. Access is denied. I have only been able to locate one article relating to the same problem but the solution was not applicable to me. Someone made a reference to " Local Security Policy/Public Key Policies/Encrypting File System/Properties/Certificates" and to allow something there but when I go there on the local machine, there are no keys or anything.. just a message that says "No Encrypting File System Policies Defined". Here's the article; https://social.technet.microsoft.com/Forums/windowsserver/en-US/6526b8c8-0fa9-4b47-9c31-3463896ffd51/access-denied-trying-to-capture-mpio-config?forum=winserverfiles Anyone have any insight into this? Thank you in advance!
{ "source": [ "https://serverfault.com/questions/661913", "https://serverfault.com", "https://serverfault.com/users/253657/" ] }
661,978
In Chrome, clicking on the green HTTPS lock icon opens a window with the certificate details: When I tried the same with cURL, I got only some of the information: $ curl -vvI https://gnupg.org * Rebuilt URL to: https://gnupg.org/ * Hostname was NOT found in DNS cache * Trying 217.69.76.60... * Connected to gnupg.org (217.69.76.60) port 443 (#0) * TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA * Server certificate: gnupg.org * Server certificate: Gandi Standard SSL CA * Server certificate: UTN-USERFirst-Hardware > HEAD / HTTP/1.1 > User-Agent: curl/7.37.1 > Host: gnupg.org > Accept: */* Any idea how to get the full certificate information form a command line tool (cURL or other)?
You should be able to use OpenSSL for your purpose: echo | openssl s_client -showcerts -servername gnupg.org -connect gnupg.org:443 2>/dev/null | openssl x509 -inform pem -noout -text That command connects to the desired website and pipes the certificate in PEM format on to another openssl command that reads and parses the details. (Note that "redundant" -servername parameter is necessary to make openssl do a request with SNI support.)
{ "source": [ "https://serverfault.com/questions/661978", "https://serverfault.com", "https://serverfault.com/users/10904/" ] }
661,993
I Have SQL server 2012 standard 64bit with 64GB RAM and 40 CPU cores. In the server properties, I can see that all the RAM is listed in the memory field on the general tab. Underneath the Memory tab, I have set the minimum to 50,000 MB and the maximum is still the default value. Both RAMMap and Task Manager show MSSQL using 27GB and having 37GB available. What's the deal? Why isn't sqlserver and reporting server using all of the RAM when I run a large report?
{ "source": [ "https://serverfault.com/questions/661993", "https://serverfault.com", "https://serverfault.com/users/164160/" ] }
662,382
I have a Spanish website and I do not allow people from non-European countries to register and log in. Some time ago I started to receive messages from users who cannot log in. When I ask for their IP address, they tell me something like: 66.249.93.202. It's a Google IP address. How do they end up with it on their mobile phones? What do they have to do to use their real IP address?
What you're seeing is the Google proxy address. Mobile users with a Chrome browser (on either Android or iOS) who have the bandwidth management features turned on will often be seen as using one of these addresses as the requester, as described here. In essence, the data you're serving is being requested by the Google Data Compression Proxy, optimized, and sent back to the end user. As for what they have to do to use their real IP address: they shouldn't be doing anything differently. You can check the x-forwarded-for header as explained in the previously linked documentation.
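If the site sits behind nginx, one possible way to honour that header is the realip module; treat this as a sketch, and note that the network below is only a placeholder for Google's published proxy ranges:
# http or server block
set_real_ip_from 66.249.80.0/20;   # example range only - check Google's current list
real_ip_header X-Forwarded-For;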
{ "source": [ "https://serverfault.com/questions/662382", "https://serverfault.com", "https://serverfault.com/users/121769/" ] }
662,443
I am trying to run a specific Ansible task as a different user than the one who is running the playbook. My .yml file looks like this: --- - hosts: staging_servers tasks: - name: check user remote_user: someusername shell: whoami Running this task shows me that whoami command returns a different user than I defined in the task (precisely, returns the user which is defined in hosts file called ubuntu ). I also tried to define the task like this: --- - hosts: staging_servers tasks: - name: check user sudo: yes sudo_user: someusername shell: whoami but then I get ' Missing sudo password ' error, although there is a line in sudoers file which says someusername ALL=(ALL) NOPASSWD:ALL and issuing commands with sudo on remote machine as someusername doesn't ask me for a password. So, how can I run the specific task as a different user which is not the user defined in hosts file or root himself?
You're misunderstanding both settings there: remote_user is an Ansible setting that controls the SSH user Ansible is using to connect: ssh ${REMOTE_USER}@remotehost someusername ALL=(ALL) NOPASSWD:ALL is a sudo configuration that allows the user someusername to execute all commands in any host without a password. It does not allow anyone to issue commands as someusername though. Ideally, you would login directly as the right user and that's what remote_user is all about. But usually you are only able to login as an administrative user (say, ubuntu ) and have to sudo commands as another user (let's say scrapy ). Then you should leave remote_user to the user that logs in and the add the following ansible properties to the job: - name: log in as ubuntu and do something as scrapy remote_user: ubuntu sudo: true sudo_user: scrapy shell: do-something.sh
{ "source": [ "https://serverfault.com/questions/662443", "https://serverfault.com", "https://serverfault.com/users/116272/" ] }
663,112
I understand you should not point an MX record at an IP address directly, but should instead point it to an A record, which, in turn, points to the IP address of your mail server. But, in principle, why is this required?
The whole idea behind the MX record is to specify a host or hosts which can accept mail for a domain. As specified in RFC 1035 , the MX record contains a domain name. It must therefore point to a host which itself can be resolved in the DNS. An IP address could not be used as it would be interpreted as an unqualified domain name, which cannot be resolved. The reasons for this in the 1980s, when the specs were originally written, are almost the same as the reasons for it today: A host may be connected to multiple networks and use multiple protocols. Back in the 80s, it was not uncommon to have mail gateways which connected both to the (relatively new) Internet which used TCP/IP and to other legacy networks, which often used other protocols. Specifying MX in this way allowed for DNS records which could identify how to reach such a host on a network other than the Internet, such as Chaosnet . In practice, though, this almost never happened; virtually everyone re-engineered their networks to become part of the Internet instead. Today, the situation is that a host may be reached by multiple protocols (IPv4 and IPv6) and by multiple IP addresses in each protocol. A single MX record can't possibly list more than one address, so the only option is to point to a host, where all of that host's addresses can then be looked up. (As a performance optimization, the DNS server will send along the address records for the host in the response additional section if it has authoritative records for them, saving a round trip.) There is also the situation that arises when your mail exchangers are provided by a third party (e.g. Google Apps or Office 365). You point your MX records to their hostnames, but it may occur that the service provider needs to change the mail servers' IP addresses. Since you have pointed to a host, the service provider can do this transparently and you don't have to make any changes to your records.
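A hypothetical zone fragment (the names and addresses are documentation examples, not anyone's real setup) shows the indirection at work - the MX points at names, and the names carry the addresses:
example.com.        IN MX 10 mail1.example.com.
example.com.        IN MX 20 mail2.example.com.
mail1.example.com.  IN A    192.0.2.10
mail1.example.com.  IN AAAA 2001:db8::10
mail2.example.com.  IN A    192.0.2.20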
{ "source": [ "https://serverfault.com/questions/663112", "https://serverfault.com", "https://serverfault.com/users/211049/" ] }
663,113
I am using puttysc to authenticate to a remote Linux server with my smart card . But as I understand, this isn't true PKI authentication - puttysc just unlocks the public key and matches it to a user account on the Linux server. Is there a way that I can use puttysc along with pam_pkcs11 to perform true PKI authentication? I know that you can use PAM along with the pam_pkcs11 module to require true PKI authentication. I just don't know how to use the two (puttysc and PAM with pam_pkcs11) together.
{ "source": [ "https://serverfault.com/questions/663113", "https://serverfault.com", "https://serverfault.com/users/267494/" ] }
664,643
I tried sudo yum update but it just keeps java "1.7.0_75". I need 1.8 for it to work with another application but can't figure out how to upgrade it. Do I need to manually install it somehow? There's not much information on this on the internet as far as I can see. Specs: java version "1.7.0_75" OpenJDK Runtime Environment (amzn-2.5.4.0.53.amzn1-x86_64 u75-b13) OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode) When I try update now: [ec2-________]$ sudo yum update Loaded plugins: priorities, update-motd, upgrade-helper amzn-main/latest | 2.1 kB 00:00 amzn-updates/latest | 2.3 kB 00:00 No packages marked for update Is there anything else I need to do? Thanks.
To remove java 1.7 and install java 1.8: sudo yum install java-1.8.0 sudo yum remove java-1.7.0-openjdk
{ "source": [ "https://serverfault.com/questions/664643", "https://serverfault.com", "https://serverfault.com/users/268481/" ] }
664,999
We have a slightly older Docker server running on RHEL 6.6. It's not well-supported by our operations team right now, so we can't upgrade easily. Right now it runs Docker 1.3.2 from an EPEL repo. If I ssh in it does everything that I need for proofs-of-concept that will hopefully help me push management to improve the infrastructure support for Docker down the road. I set it up to listen on TCP/TLS, and I'm able to connect to it, but it refuses to run commands given by my local docker client. $ docker version Client version: 1.4.1 Client API version: 1.16 Go version (client): go1.4 Git commit (client): 5bc2ff8 OS/Arch (client): darwin/amd64 FATA[0000] Error response from daemon: client and server don't have same version (client : 1.16, server: 1.15) I know the connection itself works because fig works: $ cat > fig.yml test: image: busybox $ fig run --rm test sh / # hostname -f 084f75fb59d4 Is there some way I can tell the newer docker client to use the older docker API version until I can access to a newer docker host?
Since Docker 1.10.0, there's an option for overriding the API Version used for Docker client communication with Docker engine. Just by using the DOCKER_API_VERSION environment variable. Ex.: $ docker version Client: Version: 1.10.0 API version: 1.22 Go version: go1.5.3 Git commit: 590d510 Built: Fri Feb 5 08:21:41 UTC 2016 OS/Arch: darwin/amd64 Error response from daemon: client is newer than server (client API version: 1.22, server API version: 1.21) $ DOCKER_API_VERSION=1.21 docker version Client: Version: 1.10.0 API version: 1.21 Go version: go1.5.3 Git commit: 590d510 Built: Fri Feb 5 08:21:41 UTC 2016 OS/Arch: darwin/amd64 Server: Version: 1.9.1 API version: 1.21 Go version: go1.4.3 Git commit: a34a1d5 Built: Fri Nov 20 17:56:04 UTC 2015 OS/Arch: linux/amd64 Reference: https://docs.docker.com/engine/reference/commandline/cli/#environment-variables EDIT Since Docker 1.13, CLI has an improved backwards compatibility. According to https://blog.docker.com/2017/01/whats-new-in-docker-1-13 : Starting with 1.13, newer CLIs can talk to older daemons. We’re also adding feature negotiation so that proper errors are returned if a new client is attempting to use features not supported in an older daemon. This greatly improves interoperability and makes it much simpler to manage Docker installs with different versions from the same machine.
{ "source": [ "https://serverfault.com/questions/664999", "https://serverfault.com", "https://serverfault.com/users/72728/" ] }
665,030
I have a problem with my server. Once in a while the CPU spikes to 100%, the server crashes, and it then needs to be rebooted. I've tried to go through my current logs, but no luck. I would like to log every time a process uses more than, say, 40% of my CPU. I've read about a sar solution here: sar -u 1 0 but that doesn't have a threshold for what to log. Ideally this would be something that runs continuously so that, after the next time the server crashes, I can see what actually caused it! Specs: Ubuntu 12.04 LTS CPU: 1 Core(s) 2048MB CPU Flags: advcpu,acpi,pae,virtblk,virtio
{ "source": [ "https://serverfault.com/questions/665030", "https://serverfault.com", "https://serverfault.com/users/211220/" ] }
665,034
I am trying to give multiple users full privileges to do anything they want in a specific directory, and to make sure no one can take ownership of any file or folder inside that directory. For example, let's say the directory I want to grant others full privileges on is /opt; if I set the permissions on the opt directory to 777 or 7777 and user1 creates a new text file, he becomes the owner of that text file and no one else has full privileges to edit it. So I created a cron job to change the ownership and permissions back to the defaults (owner root, permission level 777) every 10 seconds, but that created a problem for me: users now cannot replace or copy sub-directories (permission denied error). What did I do wrong here?
{ "source": [ "https://serverfault.com/questions/665034", "https://serverfault.com", "https://serverfault.com/users/254965/" ] }
666,013
I "Detach Volume" and "Attach Volume" again. After that I want "Instance Start" but I get immediately message Error starting instances Invalid value 'i-{id}' for instanceId. Instance does not have a volume attached at root (/dev/sda1) Q so where the error occured?
The answer is very easy: when you "Attach Volume" again, set the parameter Device: /dev/sda1 Warning! If you don't have your own "Elastic Network Interface (ENI)", be aware that the address may change on "Instance Start". Thanks M. Brown
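For reference, the same attachment can be done from the AWS CLI; the volume and instance IDs below are placeholders:
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sda1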
{ "source": [ "https://serverfault.com/questions/666013", "https://serverfault.com", "https://serverfault.com/users/175486/" ] }
666,149
In my Dockerfile I have the following 'COPY" statement: # Copy app code COPY /srv/visitor /srv/visitor It should go without saying that in my host system, under the "/srv/visitor" directory, there is indeed my source code: [root@V12 visitor]# ls /srv/visitor/ Dockerfile package.json visitor.js Now, when I try to build an image using this Dockerfile it hangs at the step when the "COPY" is supposed to happen: Step 10 : COPY /srv/visitor /srv/visitor INFO[0155] srv/visitor: no such file or directory It says that there is no such directory, but there clearly is. Any ideas? UPDATE 1: It has been pointed to me that I was mistaken, in the way I understood build context. The suggestion amounted to changing the "COPY" statement to this: COPY . /srv/visitor The problem is that I had it this way, and the build process halted at the very next step: RUN npm install It said something along the lines of "no package.json file found", when there clearly is one. UPDATE 2: I tried running it with this change in the Dockerfile: COPY source /srv/visitor/ It halted when trying to run npm: Step 12 : RUN npm install ---> Running in ae5e2a993e11 npm ERR! install Couldn't read dependencies npm ERR! Linux 3.18.5-1-ARCH npm ERR! argv "/usr/bin/node" "/usr/sbin/npm" "install" npm ERR! node v0.10.36 npm ERR! npm v2.5.0 npm ERR! path /package.json npm ERR! code ENOPACKAGEJSON npm ERR! errno 34 npm ERR! package.json ENOENT, open '/package.json' npm ERR! package.json This is most likely not a problem with npm itself. npm ERR! package.json npm can't find a package.json file in your current directory. npm ERR! Please include the following file with any support request: npm ERR! /npm-debug.log INFO[0171] The command [/bin/sh -c npm install] returned a non-zero code: 34 So, has the copy been performed? If yes, why is npm unable to find package.json?
From the documentation : The <src> path must be inside the context of the build ; you cannot COPY ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon. When you use /srv/visitor you are using an absolute path outside of the build context even if it's actually the current directory. You better organize your build context like this : ├── /srv/visitor │   ├── Dockerfile │   └── resources │   ├── visitor.json │   ├── visitor.js And use : COPY resources /srv/visitor/ Note: docker build - < Dockerfile does not have any context. Hence use, docker build .
{ "source": [ "https://serverfault.com/questions/666149", "https://serverfault.com", "https://serverfault.com/users/181235/" ] }
666,564
I am trying to do a git pull/push using Ansible. I am running Ansible on one server and want to automate or orchestrate a git pull/push on a remote host. Since I didn't find a module to do this on the Ansible documentation website, I decided to go the script route using the script module. The problem is that Ansible hangs when it gets to running the git pull called in the script. Does anyone know how to run git pull/push using Ansible? Thanks
Ansible's Git Module will do this for you as far as the "pull" is concerned, just make sure that the user that is running the command has key-based access to the git repo. You can specify the user that the command runs as by adding the "sudo_user" parameter to your task: - name: Get stuff from git git: repo: [email protected]:you/your-git-repo.git dest: /opt/git-stuff sudo_user: <your user that has the ssh key> See https://docs.ansible.com/playbooks_intro.html for more information on using sudo_user.
{ "source": [ "https://serverfault.com/questions/666564", "https://serverfault.com", "https://serverfault.com/users/85042/" ] }
667,042
I'm a small company on not much budget providing websites and databases for charity and not-for-profit clients. I have a few Debian Linux VPS servers and ensure I have daily backups to a different VPS than the one the service is hosted on. Recently one of my hosting companies told me two drives failed simultaneously and so that data was lost forever. Stuff happens, they said sorry, what else could they do? But it made me wonder about cost-effective ways to basically get a VPS up again in the event of a hardware or other host-related failure. Currently I would have to Spin up a new VPS Get the last day's backup (which includes databases, web root and website-specific config) over onto the VPS, and configure it like the last one etc. Update DNS and wait for it to propagate. It would probably take a day or so achieve this, with the DNS propagation being a big unknown, although I have the TTL set quite low (hour or so). Some hosts provide snapshots which can be used to replicate a set up to a new VPS, but there's still the IP and this doesn't help in the case that the host company cancels/suspends an account outright (I've been reading about this behaviour from certain hosting providers and it's scared me! I'm not doing anything spammy/dodgy and keep a close eye on security, but I realise that they literally have the power to do this and I'm quite risk averse). Is this, combined with choosing reputable hosts, the best I can do without going for an incredibly expensive solution?
For me, choosing reputable hosts and doing regular backups - both of which you seem to be doing already - is about as well as you can do without starting to think about business continuity planning, high-availability setups, SLAs, and so on. I tell people that you get 99% uptime for free (ie, without spending anything extra on high availability). That's about three and half days downtime a year. Every extra 9 on that uptime increases the cost by somewhere between three and ten times. If people aren't ready to pay that kind of money, it is in my opinion a mistake to mislead them into thinking they can get any extra protection of any significance.
{ "source": [ "https://serverfault.com/questions/667042", "https://serverfault.com", "https://serverfault.com/users/96883/" ] }
667,062
Somehow, one of our old Server 2008 (not R2) boxes has developed a seemingly infinitely-recursing folder. This is playing havoc with our backups, as the backup agent tries to recurse down into the folder and never returns. The folder structure looks something like: C:\Storage\Folder1 C:\Storage\Folder1\Folder1 C:\Storage\Folder1\Folder1\Folder1 C:\Storage\Folder1\Folder1\Folder1\Folder1 ... and so on. It's like one of those Mandelbrot sets we all used to play with in the 90's. I've tried: Deleting it from Explorer. Yeah, I'm an optimist. RMDIR C:\Storage\Folder1 /Q/S - this returns The directory is not empty ROBOCOPY C:\temp\EmptyDirectory C:\Storage\Folder1 /PURGE - this spins through the folders for a couple of minutes before robocopy.exe crashes. Can anyone suggest a way to kill this folder off for good?
Thanks to everyone for the useful advice. Straying well into StackOverflow territory, I've solved the problem by knocking up this snippet of C# code. It uses the Delimon.Win32.IO library that specifically addresses issues accessing long file paths. Just in case this can help someone else out, here's the code - it got through the ~1600 levels of recursion I'd somehow been stuck with and took around 20 minutes to remove them all. using System; using Delimon.Win32.IO; namespace ConsoleApplication1 { class Program { private static int level; static void Main(string[] args) { // Call the method to delete the directory structure RecursiveDelete(new DirectoryInfo(@"\\server\\c$\\storage\\folder1")); } // This deletes a particular folder, and recurses back to itself if it finds any subfolders public static void RecursiveDelete(DirectoryInfo Dir) { level++; Console.WriteLine("Now at level " +level); if (!Dir.Exists) return; // In any subdirectory ... foreach (var dir in Dir.GetDirectories()) { // Call this method again, starting at the subdirectory RecursiveDelete(dir); } // Finally, delete the directory, and any files below it Dir.Delete(true); Console.WriteLine("Deleting directory at level " + level); level--; } } }
{ "source": [ "https://serverfault.com/questions/667062", "https://serverfault.com", "https://serverfault.com/users/103974/" ] }
667,076
I have 4 servers with Debian Wheezy OS. I have Apticron installed, which informs me about updates. Debian updates are released so often that by the time I finish updating the last of the 4 servers, I get a new email about new updates on the first server. I try to update all servers when I get a notification, but I never know whether the servers need to be rebooted. I have read that if the directory /var/run contains the file reboot-required, I have to reboot the server. But I have never seen this file in /var/run. How can I know when a reboot is required? I don't want to reboot my servers every time I install new updates if it's not needed. I understand that if I update PHP or MySQL, etc., I don't need to reboot the server, but updates usually contain many lib... packages. Below are the 9 updates I have received this week: krb5-locales 1.10.1+dfsg-5+deb7u3 libdbus-1-3 1.6.8-1+deb7u6 libgssapi-krb5-2 1.10.1+dfsg-5+deb7u3 libk5crypto3 1.10.1+dfsg-5+deb7u3 libkrb5-3 1.10.1+dfsg-5+deb7u3 libkrb5support0 1.10.1+dfsg-5+deb7u3 libruby1.8 1.8.7.358-7.1+deb7u2 libxml2 2.8.0+dfsg1-7+wheezy3 ruby1.8 1.8.7.358-7.1+deb7u2 I have no idea what libkrb, libgssapi, etc. are. How can I detect whether a reboot is needed?
Installing the debian-goodies package provides checkrestart. It shows which processes are still using old versions of previously installed libraries. If you cannot get every process off that list, a reboot might be needed. Installing the needrestart package might help as well, as pointed out in this post's comments; its description in the Debian package search is "check which daemons need to be restarted after library upgrades". In general, consider rebooting after kernel updates (as pointed out by YuKYuk)!
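In practice that boils down to something like the following (run as root or with sudo; output will of course differ per host):
apt-get install debian-goodies
checkrestart
# or, alternatively
apt-get install needrestart
needrestart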
{ "source": [ "https://serverfault.com/questions/667076", "https://serverfault.com", "https://serverfault.com/users/121769/" ] }
667,078
We are running into a strange behavior where we see high CPU utilization but quite low load average. The behavior is best illustrated by the following graphs from our monitoring system. At about 11:57 the CPU utilization goes from 25% to 75%. The load average is not significantly changed. We run servers with 12 cores with 2 hyper threads each. The OS sees this as 24 CPUs. The CPU utilization data is collected by running /usr/bin/mpstat 60 1 each minute. The data for the all row and the %usr column is shown in the chart above. I am certain this does show the average per CPU data, not the "stacked" utilization. While we see 75% utilization in the chart we see a process showing to use about 2000% "stacked" CPU in top . The load average figure is taken from /proc/loadavg each minute. uname -a gives: Linux ab04 2.6.32-279.el6.x86_64 #1 SMP Wed Jun 13 18:24:36 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux Linux dist is Red Hat Enterprise Linux Server release 6.3 (Santiago) We run a couple of Java web applications under fairly heavy load on the machines, think 100 requests/s per machine. If I interpret the CPU utilization data correctly, when we have 75% CPU utilization it means that our CPUs are executing a process 75% of the time, on average. However, if our CPUs are busy 75% of the time, shouldn't we see higher load average? How could the CPUs be 75% busy while we only have 2-4 jobs in the run queue? Are we interpreting our data correctly? What can cause this behavior?
On Linux at least, the load average and CPU utilization are actually two different things. Load average is a measurement of how many tasks are waiting in a kernel run queue (not just CPU time but also disk activity) over a period of time. CPU utilization is a measure of how busy the CPU is right now. The most load that a single CPU thread pegged at 100% for one minute can "contribute" to the 1 minute load average is 1. A 4 core CPU with hyperthreading (8 virtual cores) all at 100% for 1 minute would contribute 8 to the 1 minute load average. Often times these two numbers have patterns that correlate to each other, but you can't think of them as the same. You can have a high load with nearly 0% CPU utilization (such as when you have a lot of IO data stuck in a wait state) and you can have a load of 1 and 100% CPU, when you have a single threaded process running full tilt. Also for short periods of time you can see the CPU at close to 100% but the load is still below 1 because the average metrics haven't "caught up" yet. I've seen a server have a load of over 15,000 (yes really that's not a typo) and a CPU % of close to 0%. It happened because a Samba share was having issues and lots and lots of clients started getting stuck in an IO wait state. Chances are if you are seeing a regular high load number with no corresponding CPU activity, you are having a storage problem of some kind. On virtual machines this can also mean that there are other VMs heavily competing for storage resources on the same VM host. High load is also not necessarily a bad thing, most of the time it just means the system is being utilized to it's fullest capacity or maybe is beyond it's capability to keep up (if the load number is higher than the number of processor cores). At a place I used to be a sysadmin, they had someone who watched the load average on their primary system closer than Nagios did. When the load was high, they would call me 24/7 faster than you could say SMTP. Most of the time nothing was actually wrong, but they associated the load number with something being wrong and watched it like a hawk. After checking, my response was usually that the system was just doing it's job. Of course this was the same place where the load got up over 15000 (not the same server though) so sometimes it does mean something is wrong. You have to consider the purpose of your system. If it's a workhorse, then expect the load to be naturally high.
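A quick way to put the two numbers side by side on a Linux host, for anyone who wants to check their own box, is simply:
cat /proc/loadavg   # 1/5/15-minute load averages plus run-queue counters
nproc               # number of CPU threads to compare the load against
mpstat 60 1         # one-minute average CPU utilisation (sysstat package)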
{ "source": [ "https://serverfault.com/questions/667078", "https://serverfault.com", "https://serverfault.com/users/58185/" ] }
667,798
I want to restrict the access for some VHosts so that only 127.0.0.1 can access it. I always used something like this to bind the VHost to the localhost and not the external IP: server { listen 127.0.0.1; server_name myvhost.local; location / { .... } } But I noticed that some tutorials also include explicit allow directives for the localhost and expicitly deny all others: server { listen 127.0.0.1; server_name myvhost.local; location / { allow 127.0.0.1; deny all; ... } } Are these allow / deny directives really needed when I already listen only at 127.0.0.1?
The listen directive tells the operating system on what interface the web server binds itself. So, when you look at netstat -a after starting nginx, you will see that nginx listens only on 127.0.0.1 IP port 80, which means that the nginx server cannot be reached via any other interface. Binding to a specific IP address works in a lower level in the actual network stack than the allow / deny directives inside nginx configuration. This means that you don't need separate allow / deny directives inside your configuration with your use case, because the connections are limited lower in the network stack. If you specify listen 80; only, and use allow / deny directives, then nginx will send a HTTP error code to the client, tellng that access is denied. With the listen 127.0.0.1; case, the browser cannot connect to the server at all, because there is no TCP port open for the browser to connect to.
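You can confirm which address nginx actually bound to straight from the host, for example with:
ss -tlnp | grep nginx
# or, on older systems
netstat -tlnp | grep nginx
With listen 127.0.0.1; the socket shows up on 127.0.0.1:80 rather than 0.0.0.0:80.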
{ "source": [ "https://serverfault.com/questions/667798", "https://serverfault.com", "https://serverfault.com/users/126131/" ] }
667,803
I have a small AMI EC2 instance at AWS. I had to reboot yesterday, and after the reboot I couldn't access the websites hosted on the EC2 instance, while there was no problem at all over SSH. Things I checked and found to be okay: Network rules seem ok: allow all from 0.0.0.0/0 (inbound and outbound). IPTables seems ok - only allow rules. I tried associating another Elastic IP, but to no avail. I can't telnet to the public IP address either. I believe it's some misconfiguration on my part. I would be grateful if someone could point me to some troubleshooting steps. Thanks in advance.
{ "source": [ "https://serverfault.com/questions/667803", "https://serverfault.com", "https://serverfault.com/users/270553/" ] }
670,171
For a while now I've been trying to figure out why quite a few of our business-critical systems are getting reports of "slowness" ranging from mild to extreme. I've recently turned my eye to the VMware environment where all the servers in question are hosted. I recently downloaded and installed the trial for the Veeam VMware management pack for SCOM 2012, but I'm having a hard time beliving (and so is my boss) the numbers that it is reporting to me. To try to convince my boss that the numbers it's telling me are true I started looking into the VMware client itself to verify the results. I've looked at this VMware KB article ; specifically for the definition of Co-Stop which is defined as: Amount of time a MP virtual machine was ready to run, but incurred delay due to co-vCPU scheduling contention Which I am translating to The guest OS needs time from the host but has to wait for resources to become available and therefore can be considered "unresponsive" Does this translation seem correct? If so, here is where I have a hard time beliving what I am seeing: The host that contains the majority of the VMs that are "slow" is currently showing a CPU Co-stop average of 127,835.94 milliseconds! Does this mean that on average the VMs on this host have to wait 2+ minutes for CPU time??? This host does have two 4 core CPU's on it and it has 1x8 CPU guest and 14x4 CPU guests.
You state in the comments you have a dual quad-core ESXi host, and you're running one 8vCPU VM, and fourteen 4vCPU VMs. If this was my environment, I would consider that to be grossly over-provisioned. I would at most put four to six 4vCPU guests on that hardware. (This is assuming that the VMs in question have load that requires them to have that high of a vCPU count.) I'm assuming you don't know the golden rule... with VMware you should never assign a VM more cores than it needs. Reason? VMware uses somewhat strict co-scheduling that makes it hard for VMs to get CPU time unless there are as many cores available as the VM is assigned. Meaning, a 4vCPU VM cannot perform 1 unit of work unless there are 4 physical cores open at the same moment. In other words, it's architecturally better to have a 1vCPU VM with 90% CPU load, then to have a 2vCPU VM with 45% load per core. So...ALWAYS create VMs with a minimum of vCPUs, and only add them when it's determined to be necessary. For your situation, use Veeam to monitor CPU usage on your guests. Reduce vCPU count on as many as possible. I would be willing to bet that you could drop to 2vCPU on almost all your existing 4vCPU guests. Granted, if all these VMs actually have the CPU load to require the vCPU count they have, then you simply need to buy additional hardware.
{ "source": [ "https://serverfault.com/questions/670171", "https://serverfault.com", "https://serverfault.com/users/156842/" ] }
670,725
My customer uses a self-signed certificate for an application to work. For it to work, I have to install the root certificate they used to sign the certificate. Is it possible to configure a root certificate so that it only validates certificates for one domain?
As a rule of thumb: No , implied in trusting the customer's CA certificate is the trust in every certificate signed by that CA. I don't know of any applications/libraries that have an easy option that allows you as the end-user to select that you'll trust your customers or any other CA certificate only for certain (sub-) domains i.e. only for *.example.com and *.example.org and nothing else. Mozilla has a similar concern about currently trusted government sponsored CA's as an open attention point and for instance Chrome has extra checks built in for accessing Google sites, which was how the rogue *.google.com certificate and the compromise of the Diginotar CA became public. But even if you don't trust the CA, you can still import/trust a specific server certificate signed by that CA, which will prevent SSL warnings for the hostnames in that certificate. That should make your application work without errors or complaints. Exceptions: A very underused option of the X.509v3 PKI standard is the Name Constraints extension, which allows a CA certificate to contain white- and blacklists of domain name patterns it is authorized to issue certificates for. You might be lucky and your customer has restrained themselves when they set up their PKI infrastructure and included that Name constraint in their CA certificate. Then you can import their CA certificate directly and know that it can only validate a limited range of domain names.
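For completeness, this is roughly how such a constraint is expressed when generating a CA certificate with OpenSSL; the extension section below is a hypothetical openssl.cnf fragment and the domains are placeholders:
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.example.com, permitted;DNS:.example.org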
{ "source": [ "https://serverfault.com/questions/670725", "https://serverfault.com", "https://serverfault.com/users/90447/" ] }
671,347
In my office's building there is a stone-age LAN rack with some Ethernet ports I've never seen before. I need to find the name of these ports, if they have one, and then buy some cables or adapters. Unfortunately, I'm not allowed to dismantle the whole thing and connect the cables to a normal RJ45 rack. All the cables connected to the front of the rack have an RJ45 male connector on the other end. On the rack I can read AT&T 110DW2-100. I checked the cables; no hints on them. Here you can see a pic of the ports and some cables connected to the switch: Does anyone know the name of these ports?
It's just a 110 wiring block. More or less a type of punch down block. (According to a quick Googling, that wiring block is generally Cat5e compliant these days, so you could use it for a 100Mbit network connection). Unlike a patch panel, which has a set number of jacks pre-sized and pre-wired to a certain standard (like RJ45 jacks or RJ12 jacks or whatever other standard), it's manufactured with exposed wire pairs so that it can be used for different standards easily (which is why telcos use them over patch panels that are fabricated for a specific standard only). They could use that block for any number of different types of data connections, instead of being restricted to one. The drawback, which you note, is that they won't take a standard RJ45 connector, and require those odd plugs instead. Being for a 110 wiring block, they take 110 plugs. Though, actually, like a punch down panel, you could strip one end of your cable and attach the individual wires to the individual slots on the wiring block as well, and it would work. Here's an install guide for a 110 wiring block I found (with pictures) that might help give you a better sense of what that thing is - a standard plastic block with a bunch of wires connected to it... a description which would also apply just as accurately to the RJ-45 patch panels you're more familiar with.
{ "source": [ "https://serverfault.com/questions/671347", "https://serverfault.com", "https://serverfault.com/users/162992/" ] }
671,412
I've heard rumors of bad things happening to database and mail servers if you change the system time while they are running. However, I'm having a hard time finding any concrete information on actual risks. I have a production Postgres 9.3 server running on a Debian Wheezy host and the time is off by 367 seconds. Can I just run ntpdate or start openntp while Postgres is running, or is that likely to cause an issue? If so, what is a safer method of correcting the time? Are there other services that are more sensitive to a change in system time? Maybe mail servers (exim, sendmail, etc) or message queues (activemq, rabbitmq, zeromq, etc)?
Databases don't like backward steps in time, so you don't want to start with the default behavior of jumping the time. Adding the -x option to the command line will slew the time if the offset is less than 600 seconds (10 minutes). At maximum slew rate it will take about a day and a half to adjust the clock by a minute. This is a slow but safe way to adjust the time. Before running ntp to adjust the time, you may want to start ntp with an option like -g 2 to verify how large an offset it is detecting. This will set the panic offset to 2 seconds, which should be relatively safe. An alternative option I have used, before this option was available, was to write a loop that reset the clock back by a fraction of a second every minute or so. If you check to ensure the reset won't change the second, this is likely safe. If you use timestamps heavily, you may have out-of-sequence records. A common option is to shut down the server long enough that there is no backward movement of the clock. ntp or ntpdate can be configured to jump the clock to the correct time at start up. This should be done before the database is started.
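On a Debian/Ubuntu system the slewing behaviour can be made persistent through the daemon's defaults file; this is a sketch and the exact file name may vary between releases:
# /etc/default/ntp
NTPD_OPTS='-g -x'
A harmless way to see how far off the clock currently is, without touching it, is ntpdate -q pool.ntp.org.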
{ "source": [ "https://serverfault.com/questions/671412", "https://serverfault.com", "https://serverfault.com/users/217589/" ] }
671,422
On a Windows machine running Windows Update via the built-in service (not GPO), I would like to have it automatically restart every morning at 5:30AM, only when required by WU. The event log entry for a restart required by WU is as follows: I can schedule a task with a trigger of 5:30AM every day. I can schedule a task with a trigger that looks for the event above. But I cannot create a task that only runs when both triggers are satisfied, or create an event log trigger that delays action until 5:30AM after the event is detected, or create an event log trigger that runs itself at 5:30AM and checks to see if the event happened in the prior 24 hours. How can I create a task that only runs at 5:30 every day after the event is logged?
{ "source": [ "https://serverfault.com/questions/671422", "https://serverfault.com", "https://serverfault.com/users/34560/" ] }
671,481
I would like to install Java on one of our servers, but I am reluctant due to Oracle's bundling of an Ask.com toolbar and some virus scanner. I've read that the Java updater even installs these during important security fixes if they are missing, and the toolbar install has a 10-minute delay built in so you can't immediately remove it if you realize you installed it by accident. There is no need or desire to have even the Java browser plugin installed; I just want a nice clean JRE install. I've noticed some applications such as Atlassian Stash install their own JRE; is there some automated installer that I can't find? Can I just copy the JRE directory to my server from one of these?
The off-line installers at http://oracle.com/technetwork/java/javase/downloads/index.html do not include bundled software.
{ "source": [ "https://serverfault.com/questions/671481", "https://serverfault.com", "https://serverfault.com/users/126045/" ] }
671,513
I want to know if there is any way other than using Linux bridges to interconnect interfaces from two virtual machines. I am trying to run a private spanning tree implementation in the virtual machines, and the underlying Linux bridge which connects both virtual machines is dropping the BPDUs. VirtualBox solves the issue by providing an internal-network option. Is there any similar option if I use KVM? Update-01: Enabling STP would end up creating a topology containing 3 bridges (2 VMs and 1 Linux bridge connecting both VMs) instead of 2 bridges (2 VMs).
{ "source": [ "https://serverfault.com/questions/671513", "https://serverfault.com", "https://serverfault.com/users/22302/" ] }
672,270
When using the ssh or ftp commands from the Bash shell, does the server that I am connecting to learn of the domain name used? I understand that the domain name is locally translated into an IP address via DNS. In HTTP, after that happens, the server is told the original domain name as well in order to serve the correct page, or to present the correct TLS cert (SNI). host serverfault.com GET / Does a similar phenomenon happen when connecting to ssh or ftp ? I ask because I am trying to ssh into a server (GoDaddy webhosting) which expects a domain name, but is not letting me in when I try to connect via user@IPaddress as the DNS is not yet moved to the GoDaddy IP address.
No, the SSH clients do not pass the DNS name you connected to on to the server. As you said correctly, the name is resolved locally to the IP address. It looks like I was wrong about FTP. See the other answer for details.
{ "source": [ "https://serverfault.com/questions/672270", "https://serverfault.com", "https://serverfault.com/users/91213/" ] }
672,346
I'm trying to have the following commands be auto-executed when I log in to my server via ssh: ssh-agent /bin/bash ssh-add ~/.ssh/id_rsa My ssh key has a passphrase and I'm fine with entering it once per login. I tried putting this in my .bashrc file; however, I believe that ssh-agent starts a new bash session. When I try to log in after having this in my .bashrc, it gets stuck, and I have to type 'exit' to then see the 'enter passphrase to unlock key' prompt. Any other suggestions? The server is running Ubuntu LTS.
You can try adding this: eval $(ssh-agent -s) ssh-add ~/.ssh/id_rsa This way the ssh-agent does not start a new shell, it just launches itself in the background and spits out the shell commands to set the appropriate environment variables. As said in the comment, maybe you do not want to run the agent at all on the remote host, but rather on the box you are working from, and use ssh -A remote-host to forward the services of your local ssh agent to the remote-host. For security reasons you should only use agent forwarding with hosts run by trustworthy people, but it is better than running a complete agent remotely any time.
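If you do decide to keep an agent on the remote host, a small guard in ~/.bashrc avoids spawning a fresh agent on every login; this is only a sketch of that idea, and agents started this way keep running after logout unless you kill them (for example with ssh-agent -k from ~/.bash_logout).

    # ~/.bashrc on the remote host: start an agent only if this session doesn't already have one
    if [ -z "$SSH_AUTH_SOCK" ]; then
        eval "$(ssh-agent -s)" > /dev/null
        ssh-add ~/.ssh/id_rsa    # you are asked for the passphrase once per new agent
    fi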
{ "source": [ "https://serverfault.com/questions/672346", "https://serverfault.com", "https://serverfault.com/users/3772/" ] }
672,369
It is a bit hard to describe but I will do my best... I have an internet application which is implemented in IIS 6.0 as a virtual directory (I'll call it 'ItsMyParty') in the format of https://www.app.com.au/ItsMyParty Using the example above, www.app.com.au is the parent internet web site and 'ItsMyParty' is the virtual directory. Now the challenge for me is to 'split' 'ItsMyParty' out and run it as a separate web site (not a virtual directory, and on a different server from www.app.com.au; both www.app.com.au and ItsMyParty are going to migrate to a W2k12 server); but we still want the users to use the same link ' https://www.app.com.au/ItsMyParty ' to access 'ItsMyParty'. I was told I might be able to do some tricks on the DNS server to achieve this. Does anyone have any suggestions on how to do this? Thanks in advance. WM
{ "source": [ "https://serverfault.com/questions/672369", "https://serverfault.com", "https://serverfault.com/users/267340/" ] }
674,874
I'm starting to use RHEL7 and learning a little about the changes that come with systemd. Is there a way to perform /sbin/service iptables save in firewalld? $ /sbin/service iptables save The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl. The closest parallel I can find from the Documentation is --reload : Reload the firewall without loosing state information: $ firewall-cmd --reload But it doesn't explicitly say if it's saving or not.
The version of firewalld in RHEL 7.0 has no "save" script and no way to copy the running firewall configuration to the permanent configuration. You save a firewall change with firewalld by adding --permanent to the command line making the change. Without it, any change you make is temporary and will be lost when the system restarts. For example: firewall-cmd --add-service=http # Running config firewall-cmd --add-service=http --permanent # Startup config Later (post-RHEL 7) versions of firewalld do include a way to save the running configuration, and this is available now in Fedora and in RHEL 7.1 . In this case the command is simply: firewall-cmd --runtime-to-permanent
{ "source": [ "https://serverfault.com/questions/674874", "https://serverfault.com", "https://serverfault.com/users/207193/" ] }
674,911
I am currently transferring a new customer onto my virtual private server from their old host. They have an existing SSL certificate, but it expires next month, so I don't think it is worth the hassle of getting the details from the old host. Would there be any issue with purchasing a new SSL certificate (even if it's from the same certificate authority) whilst they have an existing, unexpired one?
You can just request a new certificate, and run both certificates at the same time. In fact this is quite common for applications that need to be allowed to run without downtime, that require new certificates. If you install both on one server, the old certificate will be ignored, in favor of the new one. Datasprings has a nice write-up about certificate renewal.
{ "source": [ "https://serverfault.com/questions/674911", "https://serverfault.com", "https://serverfault.com/users/275897/" ] }
674,974
What is the procedure for mounting a VirtualBox shared folder in Linux? I tried variations of the following mount command but I keep getting protocol error or other mount errors. sudo mount -t vboxsf share /home/toto
Ok, this was a little confusing for me, but I finally realized what was happening. So I decided to give my 2 cents in hopes that it will be clearer for others, and for myself if I forget sometime in the future : ). I was not using the name of the share I created in the VM settings; instead I used share or vb_share when the name of my share was wd, so this had me confused for a minute. First add your share directory in the VM's settings in VirtualBox: Whatever you name your share here will be the name you will need to use when mounting in the VM guest OS, i.e. I named mine "wd" for my Western Digital Passport drive. Next, on the guest OS, make a directory to use for your mount, preferably in your home directory. mkdir share Next open the terminal and copy and paste the following or type it in. You can enable the shared clipboard under Devices -> Shared Clipboard -> Bidirectional sudo mount -t vboxsf wd ~/share/ You should now be able to copy files between OS's using the folder "share" in your home directory. Hope this helps!
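One common follow-up problem is that the mount above ends up owned by root, so your normal desktop user cannot write to it. Passing uid/gid mount options usually fixes that; this is just a sketch, so check your own numeric uid/gid with the id command and keep whatever share name you chose in VirtualBox.

    # Mount the "wd" share so it is writable by the desktop user (uid/gid 1000 in this example)
    sudo mount -t vboxsf -o uid=1000,gid=1000 wd ~/share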
{ "source": [ "https://serverfault.com/questions/674974", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
674,985
My organization recently purchased a HP DesignJet T3500 printer with the HP Designjet PostScript/PDF Upgrade Kit . HP offers 3 specific drivers in 2 packages . Which driver do I use? When I run the Add Printer wizard and I select Have Disk , I get the following options: The HP-GL/2 Driver Package contains two drivers "HP Designjet T3500 HPGL2" "HP Designjet T3500 ps HPGL2" The PostScript Driver Package contains only the PostScript driver. "HP Designjet T3500 PS3" My understanding is HP-GL/2 printing language is used for plotters (it fits our intentions as AutoCAD users) and that PostScript is for high levels of control and detailed publishing. My questions revolve around the HP Designjet T3500ps HPGL2 driver. It gives me the impression that this driver is capable of using multiple print languages. My immediate options/questions are: Which driver do I install in the client machines? Do I install both the HPGL2 and PS3 drivers with the instruction to users that HPGL2 is for CAD and PS3 is for PDFs/Documents? OR Do I install only the mysterious "HP Designjet T3500 ps HPGL2"? Why? : What is the difference between the HP Designjet T3500ps HPGL2 and HP Designjet T3500 HPGL2 drivers? Are Window's print drivers capable of using multiple description languages for output? If so, how could such a driver distinguish the type of content it is printing? Does it, for example, send this .docx to the PostScript interpreter and this .dwg to the HPGL2 interpreter?
{ "source": [ "https://serverfault.com/questions/674985", "https://serverfault.com", "https://serverfault.com/users/171397/" ] }
675,090
I've noticed that some domains have a TXT record with the form ms=msXXXXXXXX , where each X is decimal digit. For example ms=ms97284866 What is this kind of TXT record used for?
They are usually used by automated validation procedures whose purpose is to detect whether you are the rightful owner of a domain; they will ask you to create a TXT record with a specific text string in the domain DNS zone, and then check if the requested record is actually there; if you were able to create it, it's safe to assume you own (or at least manage) that domain. A record in the form ms=msXXXXXXXX is typical of the procedure used for domain validation by Microsoft Office 365 .
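If you want to see which TXT records (including validation tokens like these) a domain is currently publishing, a quick check with dig works; example.com is just a placeholder here.

    # List all TXT records for the zone; tokens such as ms=msXXXXXXXX show up in this output
    dig +short TXT example.com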
{ "source": [ "https://serverfault.com/questions/675090", "https://serverfault.com", "https://serverfault.com/users/128340/" ] }
675,553
I am trying to make my outgoing and incoming traffic look as close to legitimate SSL traffic as possible. Is there a way to run DPI on my own traffic to ensure it looks like SSL traffic and not OpenVPN traffic? And based on my config setup, does all traffic use port 443, which is the SSL port? My configuration is as follows: STUNNEL on laptop: [openvpn] # Set sTunnel to be in client mode (defaults to server) client = yes # Port to locally connect to accept = 127.0.0.1:1194 # Remote server for sTunnel to connect to connect = REMOTE_SERVER_IP:443 OPENVPN CONFIG ON laptop: client dev tun proto tcp remote 127.0.0.1 1194 resolv-retry infinite nobind tun-mtu 1500 tun-mtu-extra 32 mssfix 1450 persist-key persist-tun STUNNEL CONFIG ON SERVER: sslVersion = all options = NO_SSLv2 ;chroot = /var/lib/stunnel4/ ; PID is created inside the chroot jail pid = /stunnel4.pid ; Debugging stuff (may be useful for troubleshooting) debug = 7 output = /var/log/stunnel4/stunnel4.log setuid = root setgid = root socket = l:TCP_NODELAY=1 socket = r:TCP_NODELAY=1 compression = zlib [openvpn] accept = REMOTE_SERVER_IP:443 connect = REMOTE_SERVER_IP:11440 cert=/etc/stunnel/server.pem key=/etc/stunnel/server.key OPENVPN CONFIG on server: local REMOTE_SERVER_IP port 11440 proto tcp
OpenVPN over TLS Your VPN is using TCP as a transport protocol. The stunnel instance is used to encapsulate the content of the TCP stream in TLS/TCP. You get this protocol stack: [IP ]<------------------------>[IP ] [OpenVPN]<------------------------>[OpenVPN] [TLS ]<~~~~~>[TLS] [TCP ]<->[TCP ]<----->[TCP]<->[TCP ] [IP ]<->[IP ]<----->[IP ]<->[IP ] [ ] [ ] [ ] [ ] Server stunnel stunnel Client Between the stunnel instances you have this protocol stack on the wire: [IP ] [OpenVPN ] [TLS ] [TCP(443)] [IP ] [... ] As the TLS encrypts its payload, an attacker can only see: [??? ] [TLS ] [TCP(443)] [IP ] [... ] So yes, it is plain TLS traffic (it could be HTTP/TLS, SMTP/TLS, POP/TLS or anything else for someone looking at the traffic but it looks a lot like HTTP/TLS as the TCP port 443 is used). You can check this by using wireshark: record the traffic between the stunnel instances. In the wireshark UI (right button on a packet of the stream), you can ask wireshark to interpret the traffic as TLS: it will recognise it as TLS traffic (you will see the different TLS messages but not the payload of the TLS session). You might want to use SNI in the client in order to look like what a modern browser would do. You might want to use ALPN as well but stunnel currently does not handle that. OpenVPN with builtin TLS In comparison, if you are using OpenVPN, you will have something like this: [IP ] [OpenVPN ] [TCP ] [IP ] [... ] Which looks like this: [??? ] [OpenVPN ] [TCP ] [IP ] [... ] The builtin TLS layer does not encapsulate the (IP, Ethernet) packets but is only used for setting up the session and authenticating: [TLS ] [OpenVPN ] [TCP ] [IP ] [... ] In this case, your traffic does not look like a plain TLS traffic but is obviously OpenVPN. If you interpret this traffic as OpenVPN in wireshark, you will recognise the OpenVPN messages and inside of them the TLS messages (but not the payload). Warning You should be aware that if a passive attacker will not be able to tell that your remote server is in fact an OpenVPN server, an active attacker will be able to find this out: simply by connecting to your server over TLS, he will be able to confirm that it is not a HTTP/TLS server. By trying to speak the OpenVPN protocol, he will be able to detect that your server is a OpenVPN/TLS server. OpenVPN over TLS with client authentication It you are worried about this you could enable TLS client authentication: an attacker will not be able to initiate a working TLS session and will not be able to guess which payload is encapsulated over TLS. Warning: * I'm not talking about the builtin TLS support in OpenVPN (see above for en explanation about why it won't help you). Multiplexed OpenVPN/TLS and HTTP/TLS Another solution is to serve both HTTP and OpenVPN over the TLS session. sslh can be used to automatically detect the payload of the protocol and dispatch either to a plain HTTP/TCP server or you OpenVPN/TCP server. The server will look like standard HTTP/TLS server but someone trying to speak OpenVPN/TLS with this server will be able to detect that it is in fact a OpenVPN/TLS server as well. either OpenVPN/TCP or HTTP/TCP [1].---------. .------.HTTP/TCP.-------------. -->| stunnel |---->| sslh |------->| HTTP server | '---------' '------'| '-------------' | .----------------. 
'------>| OpenVPN server | OpenVPN/TCP'----------------' [1]= Either OpenVPN/TLS/TCP or HTTP/TLS/TCP OpenVPN over HTTP CONNECT over TLS Another solution is to use a standard HTTP/TLS server and use HTTP CONNECT/TLS to connect to the OpenVPN server: it will look like a standard HTTP server. You can even require authentication of client in order to authorise the HTTP CONNECT request (squid should be able to do this). OpenVPN has an option to use a HTTP Proxy: http-proxy proxy.example.com You should be able to combine this with a stunnel instance connecting to a remote HTTPS PROXY: http-proxy 127.0.0.1 8443 remote vpn.example.com Which would implement this protocol stack: [IP ]<------------------------>[IP ] [OpenVPN]<------------------------>[OpenVPN] [HTTP ]<------------->[HTTP ] [TLS ]<~~~~~>[TLS] [TCP ]<->[TCP ]<----->[TCP]<->[TCP ] [IP ]<->[IP ]<----->[IP ]<->[IP ] [ ] [ ] [ ] [ ] Server HTTPS PROXY stunnel Client
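For the multiplexing option, a minimal sslh invocation could look roughly like the sketch below. The ports, the loopback addresses and the idea of placing sslh behind the stunnel instance (so it probes the already-decrypted stream) are assumptions for illustration; check the sslh man page for the exact options your version supports.

    # stunnel terminates TLS on :443 and forwards the decrypted stream to sslh on 127.0.0.1:4443;
    # sslh probes the payload and hands it to the matching backend
    sslh --user sslh --listen 127.0.0.1:4443 \
         --openvpn 127.0.0.1:11440 \
         --http 127.0.0.1:8080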
{ "source": [ "https://serverfault.com/questions/675553", "https://serverfault.com", "https://serverfault.com/users/185651/" ] }
676,221
My understanding is that mv dir1/file1 dir2/ is atomic, Is mv dir1/* dir2/ also atomic? As an example, assume there are 10 files in dir1 that are 10GB each.
Let's start with the statement that mv is not always atomic. Let's also identify that atomicity refers to file contents, not to the file name. For any individual file, the move or rename performed by mv is atomic provided that the file is moved within the same filesystem. The atomicity does not guarantee that the file is only in one place or another; it is quite possible that the file could be present in the filesystem in both places simultaneously for "a short time". What atomicity does guarantee, when offered, is that the file contents are instantaneously available completely and not partially. You can imagine that mv in such situations could have been implemented with ln followed by rm . mv is most definitely not atomic when the move that it performs is from one filesystem to another, or when a remote filesystem cannot implement the mv operation locally. In these instances mv could be said to be implemented by the equivalent of a cp followed by rm . Now, moving on to the question of atomicity across multiple files. mv is at best atomic only per file, so if you have a number of files to move together, the implementation is such that they will be moved one at a time. If you like, mv file1 dir; mv file2 dir; mv file3 dir . If you really need a group of files to appear in a destination simultaneously, consider putting them in a directory and moving that directory. This single object (the directory) can be moved atomically.
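If you need a whole set of files to appear at once, the directory trick from the last paragraph looks roughly like this; the paths are invented for the example and GNU mv is assumed for the -T flag.

    # Stage the files in a temporary directory on the SAME filesystem as the target,
    # then rename the directory: a single rename(), so readers never see a half-populated set
    mkdir /data/incoming.tmp
    cp /staging/file1 /staging/file2 /staging/file3 /data/incoming.tmp/
    mv -T /data/incoming.tmp /data/incoming   # -T stops mv from dropping the temp dir inside an existing /data/incoming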
{ "source": [ "https://serverfault.com/questions/676221", "https://serverfault.com", "https://serverfault.com/users/276822/" ] }
676,328
I want to give non-sudo access to a non-root user on my machine. There is a user dns-manager, whose only role is to run all the BIND commands (rndc, dnssec-keygen, etc.). Now every time he has to run a command, he types sudo rndc reload Is there a way I can get rid of this sudo, but only for a particular set of commands (and only for dns-manager)?
If I understand your comments correctly, the issue here is that the command will be issued through a connection that does not have any ability to enter the password that sudo defaults to requesting. Also, in many OS distributions, sudo will default to requiring a TTY - which this program may not have. However, sudo is able to have a very fine-grained permissions structure, making it possible to allow one or more users to issue one particular command without password and TTY. Below, I'll show three ways to configure this for your needs. Whichever one you choose, the user will now be able to issue the command sudo rndc reload without having to enter a password. (Also, this may be unnecessary, but... please remember to make a backup copy of your sudoers file before editing it, to keep a shell where you're root open in case you need to revert to the backup, and to edit it using visudo instead of sudo vi /etc/sudoers . Hopefully these precautions will be unnecessary, but... better to have them and not need them than the reverse!) 1. If you don't want to require a TTY for any requests The easiest way to get rid of the TTY requirements (if one exists) is to make sure that the line beginning with Defaults in /etc/sudoers does not contain the word requiretty - instead, it should contain !requiretty . However, if you do this, it means that no sudo command will require a tty! You will also need to add the line rndcuser ALL = (root) NOPASSWD: /path/to/rndc reload, /path/to/dnssec-keygen, /path/to/other/program 2. If you want to require a TTY for all users except this one This can be done by setting a default for this one user, like this: Defaults:rndcuser !requiretty rndcuser ALL = (root) NOPASSWD: /path/to/rndc reload, /path/to/dnssec-keygen, /path/to/other/program 3. If you want to requre a TTY for all commands except this one command by this one user This is a bit more complex, due to the syntax of the sudoers file. You'd need to create a command alias for the command, and then set a default for that command alias, like so: Cmnd_Alias RNDC_CMD = /path/to/rndc reload, /path/to/dnssec-keygen, /path/to/other/program Defaults!RNDC_CMD !requiretty rndcuser ALL = (root) NOPASSWD: RNDC_CMD
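On most distributions you can also keep such a rule in its own drop-in file and have visudo syntax-check it before it takes effect; the rndc path below is an assumption, so confirm it first with which rndc.

    # Create the rule as a drop-in, then let visudo validate it
    echo 'dns-manager ALL = (root) NOPASSWD: /usr/sbin/rndc reload' | sudo tee /etc/sudoers.d/dns-manager
    sudo chmod 0440 /etc/sudoers.d/dns-manager
    sudo visudo -cf /etc/sudoers.d/dns-manager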
{ "source": [ "https://serverfault.com/questions/676328", "https://serverfault.com", "https://serverfault.com/users/245881/" ] }
677,683
I am working on a playbook to join linux systems to Active Directory. I can't seem to find a way to convert the value of ansible_hostname to uppercase. One of the commands I need to run requires the hostname to be supplied in uppercase.
As Hector Valverde mentioned, it seems to be {{ ansible_hostname|upper }} ...rather than "uppercase"
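A quick way to try the filter from the shell is an ad-hoc debug call; inventory_hostname is used here only because it is always defined without fact gathering, and in your playbook the same | upper filter applies to ansible_hostname.

    # Ad-hoc check of the Jinja2 upper filter (should print LOCALHOST)
    ansible localhost -m debug -a "msg={{ inventory_hostname | upper }}"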
{ "source": [ "https://serverfault.com/questions/677683", "https://serverfault.com", "https://serverfault.com/users/233361/" ] }
678,024
Is there a way to reboot a Linux system (Debian in particular) without rebooting the hardware? I have a RAID controller that takes a bit to get itself running before the OS starts up, and I would like it if there was a way to quickly reboot the Linux OS without having to go through the whole reboot process of restarting the RAID controller, etc.
I use kexec-reboot on nearly all of my production systems. It works incredibly well, allowing me to bypass the long POST time on HP ProLiant servers and reduce the boot cycle from 5 minutes to ~45 seconds. See: https://github.com/error10/kexec-reboot The only caveat is that it doesn't seem to work on RHEL/CentOS 6.x systems booting UEFI. But most sane OS/hardware combinations work.
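If you would rather not pull in the helper script, the underlying steps look roughly like this; the kernel and initrd paths follow Debian/Ubuntu naming and are assumptions, so adjust them for your distribution.

    # Load the currently running kernel and initrd, reusing the current command line
    sudo kexec -l /boot/vmlinuz-$(uname -r) \
               --initrd=/boot/initrd.img-$(uname -r) \
               --append="$(cat /proc/cmdline)"

    # Let the init system stop services cleanly, then jump straight into the loaded kernel,
    # skipping firmware POST and the RAID controller's own initialisation
    sudo systemctl kexec
    # (on non-systemd hosts the kexec-tools package can hook this into the normal reboot command)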
{ "source": [ "https://serverfault.com/questions/678024", "https://serverfault.com", "https://serverfault.com/users/120769/" ] }
680,268
I have one ssh server that I seldom connect to and which requires me to use a different user than the one I use to log in to my system. When I just execute ssh example.com then ssh will automatically use my default user [email protected] to connect. This will not work, because the server expects me to connect with a special user [email protected] . Additionally, the server has a strong blocking and banning policy, so when I use the wrong password a few times, I am automatically blocked for half an hour or even banned. Because I connect so seldom, I tend to forget that I need to use a special user to connect, and it can be very annoying if I am blocked for half an hour or even banned. What I am looking for is a way to configure my local ssh client and tell it: "Whenever I am connecting to example.com, I want you to automatically connect with user xyz12345 and not with my current user." Is something like this possible?
You can set it up in your ssh client config. Add to your .ssh/config Host example.com User xyz12345 From man ssh_config : User Specifies the user to log in as. This can be useful when a dif- ferent user name is used on different machines. This saves the trouble of having to remember to give the user name on the com- mand line.
{ "source": [ "https://serverfault.com/questions/680268", "https://serverfault.com", "https://serverfault.com/users/120377/" ] }
680,780
On a networked Linux machine, I would like to restrict the set of addresses in the "public" zone (firewalld concept) that are allowed to reach it. So the end result would be that no other machine can access any port or protocol, except those explicitly allowed, sort of a mix of --add-rich-rule='rule family="ipv4" source not address="192.168.56.120" drop' --add-rich-rule='rule family="ipv4" source not address="192.168.56.105" drop' The problem above is that this is not a real list; it will block everything, since if it's one address, it's blocked by not being the same as the other, generating an accidental "drop all" effect. How would I "unblock" a specific non-contiguous set? Does source accept a list of addresses? I have not seen anything in my look at the docs or Google results so far. EDIT: I just created this: # firewall-cmd --zone=encrypt --list-all encrypt (active) interfaces: eth1 sources: 192.168.56.120 services: ssh ports: 6000/tcp masquerade: no forward-ports: icmp-blocks: rich rules: But I can still reach port 6000 from .123; my intention was that if a source is not listed, it should not be able to reach any service or port.
The rich rules aren't necessary at all. If you want to restrict a zone to a specific set of IPs, simply define those IPs as sources for the zone itself (and remove any interface definition that may be present, as they override source IPs). You probably don't want to do this to the "public" zone, though, since that's semantically meant for public facing services to be open to the world. Instead, try using a different zone such as "internal" for mostly trusted IP addresses to access potentially sensitive services such as sshd. (You can also create your own zones.) Warning: don't mistake the special "trusted" zone with the normal "internal" zone. Any sources added to the "trusted" zone will be allowed through on all ports; adding services to "trusted" zone is allowed but it doesn't make any sense to do so. firewall-cmd --zone=internal --add-service=ssh firewall-cmd --zone=internal --add-source=192.168.56.105/32 firewall-cmd --zone=internal --add-source=192.168.56.120/32 firewall-cmd --zone=public --remove-service=ssh The result of this will be a "internal" zone which permits access to ssh, but only from the two given IP addresses. To make it persistent, re-run each command with --permanent appended, or better, by using firewall-cmd --runtime-to-permanent .
{ "source": [ "https://serverfault.com/questions/680780", "https://serverfault.com", "https://serverfault.com/users/144691/" ] }
680,844
At my current workplace, I look after two VMware host machines, an OpenBSD physical machine, three Debian VM's, and six Windows Server VM's (2008/2012). I'm considering implementing a configuration management tool such as Puppet or Chef. Is this reasonable, or will the overhead of learning the tool outweigh the benefits? Where is the tipping point between manageability & implementation cost?
IMHO it's worth learning even if you're only managing a single server. Yes, there will be a learning curve. Yes, you will get frustrated. For those costs, though, you will be paid back in multiples through reliable, consistent, one-click deployments, version-controlled server configuration, ease of setting up test/dev environments, etc. In addition to the benefits to your current job, being able to add a CM system to your resume is a big win. Modern sysadmins are now expected to have at least exposure to a config management system, if not proficiency. (Sidenote: consider Ansible as well. It's my preferred CM, and is very easy to get up and running with - much easier than either Puppet or Chef. Additionally, Windows support in Ansible is coming along nicely.)
{ "source": [ "https://serverfault.com/questions/680844", "https://serverfault.com", "https://serverfault.com/users/279129/" ] }
681,832
I am setting up a MySQL server and want Ansible to set the mysql-root password during installation. With the help of the internet I came up with this solution: - name: Set MySQL root password before installing debconf: name='mysql-server' question='mysql-server/root_password' value='{{mysql_root_pwd | quote}}' vtype='password' - name: Confirm MySQL root password before installing debconf: name='mysql-server' question='mysql-server/root_password_again' value='{{mysql_root_pwd | quote}}' vtype='password' - name: Install Mysql apt: pkg=mysql-server state=latest mysql_root_pwd is a variable loaded from the Ansible Vault. This runs fine, but now on the server there are many lines in the log: Apr 10 14:39:59 servername ansible-debconf: Invoked with value=THEPASSWORD vtype=password question=mysql-server/root_password name=mysql-server unseen=None Apr 10 14:39:59 servername ansible-debconf: Invoked with value=THEPASSWORD vtype=password question=mysql-server/root_password_again name=mysql-server unseen=None How can I stop Ansible from writing clear text passwords to the logfiles?
To prevent a task with confidential information from being logged in syslog or elsewhere, set no_log: true on the task: - name: secret stuff command: "echo {{secret_root_password}} | sudo su -" no_log: true The running of the task will still be logged, but with few details. Also, the module used has to support no_log , so test custom modules. See the Ansible FAQ for further details. It can be applied to an entire playbook; however, the output gets a little nasty with " censored! " messages.
{ "source": [ "https://serverfault.com/questions/681832", "https://serverfault.com", "https://serverfault.com/users/269064/" ] }
682,708
I want to use the AWS S3 cli to copy a full directory structure to an S3 bucket. So far, everything I've tried copies the files to the bucket, but the directory structure is collapsed. (to say it another way, each file is copied into the root directory of the bucket) The command I use is: aws s3 cp --recursive ./logdata/ s3://bucketname/ I've also tried leaving off the trailing slash on my source designation (ie, the copy from argument). I've also used a wildcard to designate all files ... each thing I try simply copies the log files into the root directory of the bucket.
I believe sync is the method you want. Try this instead: aws s3 sync ./logdata s3://bucketname/
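A couple of variations that often come in handy, shown only as a sketch with placeholder bucket and filter names:

    # Preview what would be transferred, preserving the ./logdata directory layout
    aws s3 sync ./logdata s3://bucketname/logdata/ --dryrun

    # Skip temp files and remove objects that no longer exist locally
    aws s3 sync ./logdata s3://bucketname/logdata/ --exclude "*.tmp" --delete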
{ "source": [ "https://serverfault.com/questions/682708", "https://serverfault.com", "https://serverfault.com/users/266814/" ] }
682,756
We have a server where one of our engineers misconfigured the subnet and now we are locked out of this server and the only access that I know of would work is a serial console from IDC (it means asking an IDC engineer to help us with this). What has been misconfigured: address 192.168.1.9 # Original address there was netmask 255.255.255.254 # Misconfigured, originally should've been .240 Out of curiosity - is there a way to avoid calling IDC and somehow connect to this host over SSH (then we can fix the configuration)?
You need to be able to log in on another host on the same network segment. Some of the ways to get access to the misconfigured host requires root on the intermediate host, but there also is one easy way to get access without needing root on the intermediate host. The easy way to access the host using IPv6 ssh -o ProxyCommand='ssh -W [fe80::42:ff:fe:42%%eth0]:%p user@intermediate-host' root@target-server The following example values in above command need to be substituted with correct values for your use case: fe80::42:ff:fe:42 , eth0 , user , intermediate-host , and target-server . Detailed explanation of how it works ProxyCommand is an ssh feature to use when you cannot open a TCP connection directly to the target host. The argument to ProxyCommand is a command whose stdin/stdout to use instead of a TCP connection. -W is used to open a single port forwarding and connect it to stdin/stdout. This fits nicely together with ProxyCommand . fe80::42:ff:fe:42%%eth0 is the link-local address of the target host. Notice that due to ProxyCommand using % as escape character, the typed ssh command must use %% in that location. You can find all link-local addresses on the segment by running ssh user@intermediate-host ping6 -nc2 ff02::1%eth0 . Using IPv6 link-local addresses for this purpose is usually the easiest way because it is enabled by default on all modern systems, and link-local addresses keep working even if both IPv4 and IPv6 stacks are severely misconfigured. Falling back to IPv4 If IPv6 is completely disabled on the misconfigured host (absolutely not recommended), then you may have to resort to using IPv4. Since IPv4 doesn't have link-local addresses the way IPv6 does then accessing the misconfigured host using IPv4 gets more complicated and need root access on the intermediate host. If the misconfigured host was still able to use its default gateway, you would be able to access it from outside. Possibly the misconfigured netmask also broke the default gateway due to the stack refusing to use a gateway outside of the prefix covered by the netmask. If this is indeed the case, the misconfigured host will only be able to communicate with 192.168.1.8 because that's the only other IP address in the subnet currently accessible to this misconfigured host. If you have a login on 192.168.1.8, you might just be able to ssh from there to 192.168.1.9. If 192.168.1.8 is currently unassigned you can temporarily assign it to any host on the segment on which you have root access.
{ "source": [ "https://serverfault.com/questions/682756", "https://serverfault.com", "https://serverfault.com/users/146727/" ] }
682,757
If I take a snapshot in VMware/VirtualBox, for example, and soon after I delete a 5GB file, the snapshot delta file will grow by 5GB with the deleted file's content to allow it to be recovered by restoring the snapshot. Suppose that later on, rather than restoring the snapshot, I choose to delete the snapshot so it merges the delta file with the base disk. As I understand it, this will bring the base disk in sync by replaying/merging the create/modify transactions from the snapshot. What is the logic for the 5GB file that resides in the snapshot? It won't be recreated on the base disk during the merge. How does it know to skip over this file during the merge? Does it just check whether the inode exists? Thanks fLo
{ "source": [ "https://serverfault.com/questions/682757", "https://serverfault.com", "https://serverfault.com/users/58161/" ] }
683,152
I'm on Windows Server 2012, Active Directory is on and working. All the project we manage have 2 dedicated groups, one for managers with access to all related files (including invoices, timetables and whatever they need to manage the project, or at least I guess, it could be a bunch of animated gifs for all I know) and one for the people that actually work on the project with access to only the files of the project itself. I need to let some project managers control the membership of the groups that allow file access to their projects. They should not be able to edit any other aspect of the group. And ideally it should be using a GUI of some kind, because it will be hard enough to explain it that way, but worst case scenario I can script one. I added the managing group to the "Managed By" tab of the managed group, with "Manager can update membership list" enabled, and this looked easy enough. But.. Should I let the managing group let see the whole user list? If so, how? How and where should the managing group members log in to edit the group membership?
You can specify the managedBy attribute, and check the box for "Manager can update membership list". (This grants write permission for the Member attribute.) The person(s) who need to edit the group may be able to do it with the DSQuery widget, for which you can create the following shortcut: rundll32 dsquery,OpenQueryWindow They can search for the group as with AD Users and Computers, then edit the properties, and Add members. It may be possible to do this with Outlook (if the group is mail-enabled), but that can be more fragile if you have a multiple domain environment.
{ "source": [ "https://serverfault.com/questions/683152", "https://serverfault.com", "https://serverfault.com/users/281988/" ] }
683,243
I am trying to use Postfix on a local Ubuntu 12.04 machine with ZoneMinder . I installed the Postfix package and its dependencies from Ubuntu Desktop. Now if I try to send email with the following command it works fine: echo "This is the body of the email" | mail -s "This is the subject line" [email protected] Then if an alarm from ZoneMinder sends an email I get the following Apr 16 17:05:18 ubuntu postfix/local[11541]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory and if I run postqueue -q I get queued emails with (alias database unavailable) A09B4A40C16 422 Thu Apr 16 16:59:37 [email protected] (alias database unavailable) [email protected] I tried to set ownership to postfix as suggested in another post with the following sudo chown postfix:postfix -R /var/lib/postfix and restarted Postfix, but it didn't help. The main.cf has the following smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = ubuntu alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = meridianozero.net, localhost, localhost.localdomain, localhost mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all What should I check?
This is because you have alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases The hash: means, that you must have a database file containing the hashes, as described in Postfix lookup table types : An indexed file type based on hashing. This is available only on systems with support for Berkeley DB databases. Public database files are created with the postmap(1) or postalias(1) command, and private databases are maintained by Postfix daemons. The database name as used in "hash:table" is the database file name without the ".db" suffix. Therefore, as described in the documentation of alias_maps : If you change the alias database, run postalias /etc/aliases (or wherever your system stores the mail alias file), or simply run newaliases to build the necessary DBM or DB file. This will build the /etc/aliases.db file from information in /etc/aliases . Naturally you must run either of these commands also during initial setup.
{ "source": [ "https://serverfault.com/questions/683243", "https://serverfault.com", "https://serverfault.com/users/282062/" ] }
683,538
$ sudo docker run --rm ubuntu:14.04 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.17.42.1 0.0.0.0 UG 0 0 0 eth0 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 Doesn't this mean that 127.0.0.0/8 is routed towards the gateway of 172.17.42.1 and not the loopback device ?
The route command is deprecated, and should not be used anymore. The new way is to use the iproute set of commands, which are all invoked with ip followed by an object. For example: $ ip route show default via 192.168.1.254 dev eth0 192.168.0.0/23 dev eth0 proto kernel scope link src 192.168.1.27 Now, I hear you say, this is basically the same info! Yes, but this isn't the whole story. Before the routing tables (yes, plural) comes the rule table: $ ip rule show 0: from all lookup local 32766: from all lookup main 32767: from all lookup default The routing table we were looking at before is the main routing table. Your question concerns the local routing table, which contains all routes relating to local connections. This table can be shown as follows: $ ip ro sh table local broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 192.168.0.0 dev eth0 proto kernel scope link src 192.168.1.27 local 192.168.1.27 dev eth0 proto kernel scope host src 192.168.1.27 broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.27 (You can abbreviate ip options / parameters as long as they're still unique, hence ip ro sh is the same as ip route show .) Here you can see the loopback routes. You can do all sorts of wonderful things with this policy-based routing , I recommend you read Policy Routing with Linux by Matthew G. Marsh for all the info you'll ever need.
{ "source": [ "https://serverfault.com/questions/683538", "https://serverfault.com", "https://serverfault.com/users/282258/" ] }
683,605
Where do Docker containers get their time information? I've created some containers from the basic ubuntu:trusty image, and when I run them and request 'date', I get UTC time. For a while I got around this by doing the following in my Dockerfile: RUN sudo echo "America/Los_Angeles" > /etc/timezone However, for some reason that stopped working. Searching online I saw the below suggested: docker run -v /etc/timezone:/etc/timezone [image-name] Both of these methods set /etc/timezone correctly, yet date still reports UTC: $ cat /etc/timezone America/Los_Angeles $ date Tue Apr 14 23:46:51 UTC 2015 Anyone know what gives?
The secret here is that dpkg-reconfigure tzdata simply creates /etc/localtime as a copy, hardlink or symlink (a symlink is preferred) to a file in /usr/share/zoneinfo . So it is possible to do this entirely from your Dockerfile. Consider: ENV TZ=America/Los_Angeles RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone And as a bonus, TZ will be set correctly in the container as well. This is also distribution-agnostic, so it works with pretty much any Linux. Note: if you are using an alpine based image you have to install the tzdata first. (see this issue here ) Looks like this: RUN apk add --no-cache tzdata ENV TZ America/Los_Angeles
{ "source": [ "https://serverfault.com/questions/683605", "https://serverfault.com", "https://serverfault.com/users/278285/" ] }
683,910
The official Docker documentation mentions that I need to run docker rm -v containername to specifically remove a data volume. But what do you do if you already removed all the containers referencing the specific data volume ?
Before version 1.9, Docker didn't provide any way to remove dangling volumes. If such volumes are taking too much disk space and you want to take matters into your own hands though, you can manually delete the volumes by first identifying the ones which are in use. You can run docker inspect -f '{{ .Volumes }}' containername to find the location in the file system of the volumes in use, and then delete everything except those. If you have lots of containers, you can run for x in $(docker ps -qa | sed '1d'); do docker inspect -f '{{ .Volumes }}' ${x}; done to loop through the containers and list the volumes. Better yet, you can use the Python script here , the prerequisite is to install the python API client for Docker pip install docker-py
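If you have since moved to Docker 1.9 or later, the volume subcommand makes this much simpler, since dangling volumes can be listed and removed directly:

    # List volumes that no container references, then remove them
    docker volume ls -qf dangling=true
    docker volume rm $(docker volume ls -qf dangling=true)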
{ "source": [ "https://serverfault.com/questions/683910", "https://serverfault.com", "https://serverfault.com/users/34814/" ] }
683,911
I am trying to put a hard limit on CPU usage for a dd command . I have created the following unit file [Unit] Description=Virtual Distributed Ethernet [Service] ExecStart=/usr/bin/ddcommand CPUQuota=10% [Install] WantedBy=multi-user.target which calls the following simple script #!/bin/sh dd if=/dev/zero of=/dev/null bs=1024k As I have seen in this guide , the CPU usage for my dd service should not exceed 10%. But when I run the systemd-cgtop command the usage is about 70-75% . Any ideas what I am doing wrong and how I can fix it? When I execute systemctl show dd I get the following results regarding CPU CPUShares=18446744073709551615 StartupCPUShares=18446744073709551615 CPUQuotaPerSecUSec=100ms LimitCPU=18446744073709551615
The Good Your solution is correct and should actually be quite future-proof: using systemd to control the service's cgroup settings, e.g. CPUQuota. [Unit] Description=Virtual Distributed Ethernet [Service] ExecStart=/usr/bin/ddcommand CPUQuota=10% [Install] WantedBy=multi-user.target See man systemd.resource-control for more useful cgroup settings in systemd. The Bad There are two caveats to this though, which I ( and possibly a few others ) stumbled upon. Those caveats are really difficult to track down as there does not seem to be much easily findable information about this, which is the main reason for this answer. Caveat 1: The CPUQuota setting is only available since systemd 213, see https://github.com/systemd/systemd/blob/master/NEWS * The CFS CPU quota cgroup attribute is now exposed for services. The new CPUQuota= switch has been added for this which takes a percentage value. Setting this will have the result that a service may never get more CPU time than the specified percentage, even if the machine is otherwise idle. This is for example an issue with Debian Jessie which only comes with systemd 208. As an alternative one could configure cpu.cfs_period_us and cpu.cfs_quota_us manually using cgcreate and cgset from the cgroup-bin package, e.g. sudo cgcreate -g cpu:/cpulimited sudo cgset -r cpu.cfs_period_us=50000 cpulimited sudo cgset -r cpu.cfs_quota_us=10000 cpulimited sudo cgexec -g cpu:cpulimited /usr/bin/ddcommand Caveat 2 For the settings cpu.cfs_period_us and cpu.cfs_quota_us to be available, the kernel needs to be compiled with the config flag CONFIG_CFS_BANDWIDTH . Sadly the 3.16.x kernel for Debian Jessie is not compiled with this flag by default, see this feature request . This will be available in Debian Stretch though. One could also use the kernel from jessie-backports , which should have the flag enabled. I hope this answer helps a few people with the same issue as me... PS: An easy way to test whether CPUQuota is working in your environment is: $ apt-get install stress $ systemd-run -p CPUQuota=25% --slice=stress -- stress -c <your cpu count> and watch with top or htop ; the load should be spread (evenly) across all CPUs/cores, summing up to 25%. Alternative As an alternative tool one could use cpulimit , which should be available in most distros, e.g. $ apt-get install cpulimit $ cpulimit -l 10 -P /usr/bin/ddcommand It works by sending SIGSTOP and SIGCONT to the attached command to pause and resume its operation. AFAIK it was difficult to control multiple separate/stand-alone processes simultaneously with this, i.e. to group them together , but there might also be a solution for this...
{ "source": [ "https://serverfault.com/questions/683911", "https://serverfault.com", "https://serverfault.com/users/282544/" ] }
684,339
I know wa (in top ) measures the CPU time spent waiting for I/O. Many articles say that. But I am confused, based on 2 points of knowledge: if a process uses a system call to read the disk, the process is blocked. If a process is blocked, it cannot be scheduled to run on the CPU. Right? It seems there is no time for the CPU to spend waiting on I/O... What happens? If you can recommend some books or articles for further reading, so much the better.
The CPU idle status is divided into two different "sub"-states: iowait and idle . If the CPU is idle, the kernel then determines if there is at least one I/O currently in progress to either a local disk or a remotely mounted disk (NFS) which had been initiated from that CPU. If there is, then the CPU is in the iowait state. If there is no I/O in progress that was initiated from that CPU, the CPU is in the idle state. So, iowait is the percentage of time the CPU is idle AND there is at least one I/O in progress initiated from that CPU. The iowait counter indicates that the system can handle more computational work. Just because a CPU is in iowait state does not mean that it can't run other threads or processes on that CPU. So, iowait is simply a form of idle time.
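You can watch this behaviour yourself by generating some disk I/O and keeping an eye on the wa column; a small sketch, where the test file path is arbitrary and iostat (from the sysstat package) would show the same thing as %iowait.

    # Generate sustained writes that bypass the page cache, so the CPU genuinely waits on the disk
    dd if=/dev/zero of=/tmp/iowait-test bs=1M count=2048 oflag=direct &

    # The wa column rises while dd runs, even though the CPU is otherwise idle
    vmstat 1 10
    rm -f /tmp/iowait-test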
{ "source": [ "https://serverfault.com/questions/684339", "https://serverfault.com", "https://serverfault.com/users/282844/" ] }
684,424
Currently on my Apache 2 (Apache 2.4.7 to be exact) on Ubuntu 14.04, I have this setting: /etc/apache2/mods-enabled/mpm_prefork.conf <IfModule mpm_prefork_module> StartServers 20 MinSpareServers 100 MaxSpareServers 250 MaxRequestWorkers 150 MaxConnectionsPerChild 0 </IfModule> The server is an 8GB (RAM) Amazon server that does nothing more than load up a three-page signup form for some Google ad campaigns. I found a script called apachetuneit.sh on the web, but then after a while Apache was reporting this error: [Tue Apr 21 16:45:42.227935 2015] [mpm_prefork:error] [pid 1134] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting How can I judge how to set these settings? I am asking specifically only for how to tune Apache 2.4 and nothing else. This is why this question is different from this question.
Recognize that Ubuntu 14.04 uses Apache 2 with PHP running through an mpm_prefork module, of which an editable file is in /etc/apache2/mods-enabled/mpm_prefork.conf. Also, recognize that starting in Apache 2.4, MaxClients is now renamed as MaxRequestWorkers , and so any documentation regarding MaxClients needs to be switched to MaxRequestWorkers. Stop the Apache web service with the following command, temporarily: sudo service apache2 stop Wait 5 seconds and then run the following command to find out how much virtual memory you have on the server that is free: sudo free -ht Read the Mem: line and look at the free column. Consider this as the amount of RAM that you can dedicate to Apache, although I usually like to deduct 2GB on a beefier server (as in > 4GB), or 1GB on a lighter server. So, if the free column said I had 13GB free, I would recommend giving Apache 11GB. That's a baseline. If we encounter any database issue in the logs occasionally (as in like 3 times in the logs over a 3 day period) that it needs more memory, then we might consider that we only had 10GB to play with instead of 11GB (in this case). If we encounter in the Apache logs that the server needs more MaxRequestWorkers, then that's a separate issue I'll address below. Start the Apache web server. sudo service apache2 start Open like 10 browser tabs, connect to some of your longer or slower-loading pages from your website, and refresh like 3-4 times on each tab. After doing that, rapidly now run the following command: sudo ps -ylC apache2 | awk '{x += $8;y += 1} END {print "Apache Memory Usage (MB): "x/1024; print "Average Process Size (MB): "x/((y-1)*1024)}' Run it like 5 times rapidly. Look at the Average Process Size value and average that value out among the 5 times you ran that. Now do the following math, and be sure to convert GB to MB as necessary so that all the numbers are in MB values. So, either multiply times 1024 or divide by 1024, depending on which way you need to go. MaxRequestWorkers = Baseline Free (with buffer space) / Avg Process Size For example, I had a 14GB server, but when Apache was stopped the server showed it used 1GB RAM in idle. I then provide another 1GB in some extra buffer space for the OS in case it needs it. That means I would have a Baseline Free of 12GB. Now I must convert it from GB to MB, and so I multiply 12 x 1024 and get 12288. The 12288 MB is my Baseline Free value. In my case I saw that the Average Process Size was 21MB. So, I take 12288 / 21 and I get approximately 585. Now, it's common that sysops round down this value, and so I got 580. Edit the file /etc/apache2/mods-enabled/mpm_prefork.conf and consider setting it to the following defaults, replacing XXX with your MaxRequestWorkers calculation: `<IfModule mpm_prefork_module>` StartServers 2 MinSpareServers 2 MaxSpareServers 5 MaxRequestWorkers XXX ServerLimit XXX MaxConnectionsPerChild 0 </IfModule> Note that you may not see the ServerLimit parameter there. Add it. This parameter defaults to 256 if not present, but needs to be the same value as MaxRequestWorkers or you'll get an error. Another critical factor in your Apache configuration is the /etc/apache2/apache2.conf file with the Timeout variable and is measured in seconds. This is how long you can send or receive from the server before it times out. You have to also keep in mind a file upload or file download, such as if you have a website where people can upload or download CSV or other large files, for instance. 
And you need to keep in mind a busy database server and where you might need to provide some time before pages timeout. The smaller you make that Timeout variable, the more available the web server is to receive new connections. Note, however, that setting this value too low may cause havoc with PHP session variables, although not with browser session-based cookies. So, for instance, a value of 300 (5 minutes) might be good for a web server that relies on PHP session variables for web app workflow instead of browser session cookies. A value of 45 might be good for a web server that serves up nothing more than static advertising landing pages, but would be terrible for a server that needs to use PHP session variables a great deal. So, edit the Timeout parameter in this file to the amount you need. This may take some testing with all your web pages to see if the value is too low. It's probably a good idea, however, to not set it higher than 300 unless you're seeing problems in large file uploads or large file downloads. Now restart your Apache web service. If you did something wrong, Apache will likely tell you about it the moment you start it again, and you can rectify it. sudo service apache2 restart Now repeat the 10 tab browser trick that you did previously, and see if you encounter Apache configuration errors in the Apache web server error log: sudo tail -f /var/log/apache2/error.log ...press CTRL+C to get out of that, should you want. Look for a complaint about needing MaxRequestWorkers (and recently since you restarted the web server). If you see that even with an optimal MaxRequestWorkers setting, then you're likely needing more firepower for your websites or web applications. Consider these options: Using a CDN for large file downloads, images, and scripts. Using a caching service like CloudFlare or others. Redoing your website or web application strategy to use multiple web servers acting as one "web app" behind a load balancer. Adding more RAM to the server, and thus doing this calculation all over again. Now that the Apache server is tuned, it's sort of baseline tuned. You'll need to check on it over the course of 2-3 weeks and look for MaxRequestWorker issues in the Apache error logs. From that, you can make a decision on optimization (see step 10). You can also install Munin with apt on Ubuntu and look at the Apache performance over time and plot an idea of growth before you decide you need to do anything about the amount of traffic the web server is handling.
{ "source": [ "https://serverfault.com/questions/684424", "https://serverfault.com", "https://serverfault.com/users/36671/" ] }