source_id | question | response | metadata |
---|---|---|---|
626,711 | If I have a script that I need to run against multiple computers, or with multiple different arguments, how can I execute it in parallel, without having to incur the overhead of spawning a new PSJob with Start-Job? As an example, I want to re-sync the time on all domain members, like so: $computers = Get-ADComputer -filter * |Select-Object -ExpandProperty dnsHostName
$creds = Get-Credential domain\user
foreach($computer in $computers)
{
$session = New-PSSession -ComputerName $computer -Credential $creds
Invoke-Command -Session $session -ScriptBlock { w32tm /resync /nowait /rediscover }
} But I don't want to wait for each PSSession to connect and invoke the command. How can this be done in parallel, without Jobs? | Update - While this answer explains the process and mechanics of PowerShell runspaces and how they can help you multi-thread non-sequential workloads, fellow PowerShell aficionado Warren 'Cookie Monster' F has gone the extra mile and incorporated these same concepts into a single tool called Invoke-Parallel - it does what I describe below, and he has since expanded it with optional switches for logging and prepared session state including imported modules, really cool stuff - I strongly recommend you check it out before building your own shiny solution! With Parallel Runspace execution: Reducing inescapable waiting time In the original specific case, the executable invoked has a /nowait option which prevents blocking the invoking thread while the job (in this case, time re-synchronization) finishes on its own. This greatly reduces the overall execution time from the issuer's perspective, but connecting to each machine is still done in sequential order. Connecting to thousands of clients in sequence may take a long time depending on the number of machines that are for one reason or another inaccessible, due to an accumulation of timeout waits. To get around having to queue up all subsequent connections in case of a single or a few consecutive timeouts, we can dispatch the job of connecting and invoking commands to separate PowerShell Runspaces, executing in parallel. What is a Runspace? A Runspace is the virtual container in which your PowerShell code executes, and represents/holds the Environment from the perspective of a PowerShell statement/command. In broad terms, 1 Runspace = 1 thread of execution, so all we need to "multi-thread" our PowerShell script is a collection of Runspaces that can then in turn execute in parallel. Like the original problem, the job of invoking commands in multiple runspaces can be broken down into: Creating a RunspacePool Assigning a PowerShell script or an equivalent piece of executable code to the RunspacePool Invoking the code asynchronously (i.e. not having to wait for the code to return) RunspacePool template PowerShell has a type accelerator called [RunspaceFactory] that will assist us in the creation of runspace components - let's put it to work. 1. Create a RunspacePool and Open() it: $RunspacePool = [runspacefactory]::CreateRunspacePool(1,8)
$RunspacePool.Open() The two arguments passed to CreateRunspacePool(), 1 and 8, are the minimum and maximum number of runspaces allowed to execute at any given time, giving us an effective maximum degree of parallelism of 8. 2. Create an instance of PowerShell, attach some executable code to it and assign it to our RunspacePool: An instance of PowerShell is not the same as the powershell.exe process (which is really a Host application), but an internal runtime object representing the PowerShell code to execute. We can use the [powershell] type accelerator to create a new PowerShell instance within PowerShell: $Code = {
param($Credentials,$ComputerName)
$session = New-PSSession -ComputerName $ComputerName -Credential $Credentials
Invoke-Command -Session $session -ScriptBlock {w32tm /resync /nowait /rediscover}
}
$PSinstance = [powershell]::Create().AddScript($Code).AddArgument($creds).AddArgument("computer1.domain.tld")
$PSinstance.RunspacePool = $RunspacePool 3. Invoke the PowerShell instance asynchronously using APM: Using what is known in .NET development terminology as the Asynchronous Programming Model, we can split the invocation of a command into a Begin method, for giving a "green light" to execute the code, and an End method to collect the results. Since in this case we are not really interested in any feedback (we don't wait for the output from w32tm anyway), we can make do by simply calling the first method $PSinstance.BeginInvoke() Wrapping it up in a RunspacePool Using the above technique, we can wrap the sequential iterations of creating new connections and invoking the remote command in a parallel execution flow: $ComputerNames = Get-ADComputer -filter * -Properties dnsHostName |select -Expand dnsHostName
$Code = {
param($Credentials,$ComputerName)
$session = New-PSSession -ComputerName $ComputerName -Credential $Credentials
Invoke-Command -Session $session -ScriptBlock {w32tm /resync /nowait /rediscover}
}
$creds = Get-Credential domain\user
$rsPool = [runspacefactory]::CreateRunspacePool(1,8)
$rsPool.Open()
foreach($ComputerName in $ComputerNames)
{
$PSinstance = [powershell]::Create().AddScript($Code).AddArgument($creds).AddArgument($ComputerName)
$PSinstance.RunspacePool = $rsPool
$PSinstance.BeginInvoke()
} Assuming that the CPU has the capacity to execute all 8 runspaces at once, we should be able to see that the execution time is greatly reduced, but at the cost of readability of the script due to the rather "advanced" methods used. Determining the optimum degree of parallelism: We could easily create a RunspacePool that allows for the execution of 100 runspaces at the same time: [runspacefactory]::CreateRunspacePool(1,100) But at the end of the day, it all comes down to how many units of execution our local CPU can handle. In other words, as long as your code is executing, it does not make sense to allow more runspaces than you have logical processors to dispatch execution of code to. Thanks to WMI, this threshold is fairly easy to determine: $NumberOfLogicalProcessors = (Get-WmiObject Win32_Processor).NumberOfLogicalProcessors
[runspacefactory]::CreateRunspacePool(1,$NumberOfLogicalProcessors) If, on the other hand, the code you are executing itself incurs a lot of wait time due to external factors like network latency, you can still benefit from running more simultaneous runspaces than you have logical processors, so you'd probably want to test a range of possible maximum runspace counts to find the break-even point: foreach($n in ($NumberOfLogicalProcessors..($NumberOfLogicalProcessors*3)))
{
Write-Host "$n: " -NoNewLine
(Measure-Command {
$Computers = Get-ADComputer -filter * -Properties dnsHostName |select -Expand dnsHostName -First 100
...
[runspacefactory]::CreateRunspacePool(1,$n)
...
}).TotalSeconds
} | {
"source": [
"https://serverfault.com/questions/626711",
"https://serverfault.com",
"https://serverfault.com/users/105072/"
]
} |
626,803 | Forgive me if I'm missing something obvious here..... but why do most Linux server distros come with both Dovecot AND Postfix (or sendmail)? As far as I'm aware all three of them are Mail Transfer Agents, with Dovecot having a 'secondary' function of being a Mail Delivery Agent... Is Dovecot just not a very good MTA? Or is there some other reason why you'd want to use a combination of the above instead of a single program that seemingly does everything? | The MTA is the service that routes messages from one region to another. You drop the letter in the public submission box and the MTA passes it to the city where the recipient lives. Then the local delivery agent (LDA) delivers the letter to the recipient's residence. And then the recipient fetches the letter from his personal POP/IMAP mailbox and reads it with a MUA. Email simply resembles the good old classic mail service. When you get the similarity, you'll get the meaning of each service. Maybe that helps. MTA: LDA: POP/IMAP: | {
"source": [
"https://serverfault.com/questions/626803",
"https://serverfault.com",
"https://serverfault.com/users/241232/"
]
} |
626,922 | So... likely I'm an idiot, but I'm stuck. I just set up CentOS 7 on DigitalOcean and I can't seem to get the MariaDB/MySQL server running. Some output [root@hostname ~]# yum list installed |grep maria
mariadb.x86_64 1:5.5.37-1.el7_0 @updates
mariadb-libs.x86_64 1:5.5.37-1.el7_0 @updates
mariadb-server.x86_64 1:5.5.37-1.el7_0 @updates So it's installed, can we at least see the client? [root@hostname ~]# which mysql
/bin/mysql Let's try and start the server, just for fun [root@hostname ~]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
Failed to issue method call: Unit mysqld.service failed to load: No such file or directory.
[root@hostname ~]# mysqld
-bash: mysqld: command not found
[root@hostname ~]# mysql.server start
-bash: mysql.server: command not found
[root@hostname ~]# And this is where I get lost. Looking at what is actually installed, there is no server/daemon [root@hostname ~]# ls -la /bin/my*
-rwxr-xr-x 1 root root 3419136 Jun 24 10:27 /bin/myisamchk
-rwxr-xr-x 1 root root 3290760 Jun 24 10:27 /bin/myisam_ftdump
-rwxr-xr-x 1 root root 3277032 Jun 24 10:27 /bin/myisamlog
-rwxr-xr-x 1 root root 3320200 Jun 24 10:27 /bin/myisampack
-rwxr-xr-x 1 root root 2914904 Jun 24 10:27 /bin/my_print_defaults
-rwxr-xr-x 1 root root 3533016 Jun 24 10:27 /bin/mysql
-rwxr-xr-x 1 root root 111587 Jun 24 10:24 /bin/mysqlaccess
-rwxr-xr-x 1 root root 3089712 Jun 24 10:27 /bin/mysqladmin
-rwxr-xr-x 1 root root 3253112 Jun 24 10:27 /bin/mysqlbinlog
lrwxrwxrwx 1 root root 26 Sep 8 03:06 /bin/mysqlbug -> /etc/alternatives/mysqlbug
-rwxr-xr-x 1 root root 3090832 Jun 24 10:27 /bin/mysqlcheck
-rwxr-xr-x 1 root root 4247 Jun 24 10:24 /bin/mysql_convert_table_format
-rwxr-xr-x 1 root root 24558 Jun 24 10:24 /bin/mysqld_multi
-rwxr-xr-x 1 root root 27313 Jun 24 10:24 /bin/mysqld_safe
-rwxr-xr-x 1 root root 3173968 Jun 24 10:27 /bin/mysqldump
-rwxr-xr-x 1 root root 7913 Jun 24 10:24 /bin/mysqldumpslow
-rwxr-xr-x 1 root root 3315 Jun 24 10:24 /bin/mysql_find_rows
-rwxr-xr-x 1 root root 1261 Jun 24 10:24 /bin/mysql_fix_extensions
-rwxr-xr-x 1 root root 34826 Jun 24 10:24 /bin/mysqlhotcopy
-rwxr-xr-x 1 root root 3082072 Jun 24 10:27 /bin/mysqlimport
-rwxr-xr-x 1 root root 16204 Jun 24 10:24 /bin/mysql_install_db
-rwxr-xr-x 1 root root 2923136 Jun 24 10:27 /bin/mysql_plugin
-rwxr-xr-x 1 root root 11578 Jun 24 10:24 /bin/mysql_secure_installation
-rwxr-xr-x 1 root root 17473 Jun 24 10:24 /bin/mysql_setpermission
-rwxr-xr-x 1 root root 3084760 Jun 24 10:27 /bin/mysqlshow
-rwxr-xr-x 1 root root 3104240 Jun 24 10:27 /bin/mysqlslap
-rwxr-xr-x 1 root root 3442464 Jun 24 10:27 /bin/mysqltest
-rwxr-xr-x 1 root root 2918416 Jun 24 10:27 /bin/mysql_tzinfo_to_sql
-rwxr-xr-x 1 root root 2995400 Jun 24 10:27 /bin/mysql_upgrade
-rwxr-xr-x 1 root root 2913960 Jun 24 10:27 /bin/mysql_waitpid
-rwxr-xr-x 1 root root 3888 Jun 24 10:24 /bin/mysql_zap Anyone care to point out what I'm doing wrong here? | Should anyone stumble across this, I found the solution here: https://ask.fedoraproject.org/en/question/43459/how-to-start-mysql-mysql-isnt-starting/ Repost below: To start MariaDB on Fedora 20, execute the following command: systemctl start mariadb.service To autostart MariaDB on Fedora 20, execute the following command: systemctl enable mariadb.service After you have started MariaDB (do this only once), execute the following command: /usr/bin/mysql_secure_installation | {
"source": [
"https://serverfault.com/questions/626922",
"https://serverfault.com",
"https://serverfault.com/users/79524/"
]
} |
627,169 | So, this is the situation. It seems we need to have an open TCP port 5432 to the world, where a customer has access to his PostgreSQL database. For obvious reasons, we can't say just "no", only as a last-last resort. What are the biggest troubles? How can I defend our infrastructure? Anyways: why shouldn't it be opened to the world? I think, maybe it is more secure than some 20 year old, unmaintained FTP server. P.S. VPN isn't ok. Some encryption maybe (if I can give him a JDBC connection URL which works ). | Require SSL, keep SELinux turned on, monitor the logs, and use a current PostgreSQL version . Server side Require SSL In postgresql.conf set ssl=on and make sure you have your keyfile and certfile installed appropriately (see the docs and the comments in postgresql.conf ). You might need to buy a certificate from a CA if you want to have it trusted by clients without special setup on the client. In pg_hba.conf use something like: hostssl theuser thedatabase 1.2.3.4/32 md5 ... possibly with "all" for user and/or database, and possibly with a wider source IP address filter. Limit users who can log in, deny remote superuser login Don't allow "all" for users if possible; you don't want to permit superuser logins remotely if you can avoid the need for it. Limit rights of users Restrict the rights of the user(s) that can log in. Don't give them CREATEDB or CREATEUSER rights. REVOKE the CONNECT right from PUBLIC on all your databases, then give it back to only the users/roles that should be able to access that database. (Group users into roles and grant rights to roles, rather than directly to individual users). Make sure users with remote access can only connect to the DBs they need, and only have rights to the schemas, tables, and columns within that they actually need. This is good practice for local users too, it's just sensible security. Client setup In PgJDBC, pass the parameter ssl=true : To instruct the JDBC driver to try and establish a SSL connection you must add the connection URL parameter ssl=true. ... and install the server certificate in the client's truststore, or use a server certificate that's trusted by one of the CAs in Java's built-in truststore if you don't want the user to have to install the cert. Ongoing action Now make sure you keep PostgreSQL up to date . PostgreSQL has only had a couple of pre-auth security holes, but that's more than zero, so stay up to date. You should anyway, bugfixes are nice things to have. Add a firewall in front if there are large netblocks/regions you know you don't ever need access from. Log connections and disconnections (see postgresql.conf ). Log queries if practical. Run an intrusion detection system or fail2ban or similar in front if practical. For fail2ban with postgres, there is a convenient how-to here Monitor the log files. Bonus paranoia Extra steps to think about... Require client certificates If you want, you can also use pg_hba.conf to require that the client present an X.509 client certificate trusted by the server. It doesn't need to use the same CA as the server cert, you can do this with a homebrew openssl CA. A JDBC user needs to import the client certificate into their Java Keystore with keytool and possibly configure some JSSE system properties to point Java at their keystore, so it's not totally transparent. 
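To make the client side concrete, here is a minimal sketch of that setup; the host name, database name, keystore path, alias and application name below are placeholders for illustration, not values taken from the question:
# One-time client setup: import the server certificate into a Java truststore
# (keytool ships with the JDK; the alias and paths here are hypothetical)
keytool -importcert -alias pg-server -file server.crt -keystore /path/to/truststore.jks
# Start the JDBC client with that truststore; the URL carries ssl=true as described above
# (host, port, database and jar names are placeholders)
java -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar client-app.jar "jdbc:postgresql://db.example.com:5432/thedatabase?ssl=true"
The same JSSE system properties can also be set from inside the application instead of on the command line, if that is more convenient for the customer.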
Quarantine the instance If you want to be really paranoid, run the instance for the client in a separate container / VM, or at least under a different user account, with just the database(s) they require. That way, if they compromise the PostgreSQL instance, they won't get any further. Use SELinux I shouldn't have to say this, but ... Run a machine with SELinux support like RHEL 6 or 7, and don't turn SELinux off or set it to permissive mode. Keep it in enforcing mode. Use a non-default port Security by obscurity alone is stupidity. Security that uses a little obscurity once you've done the sensible stuff probably won't hurt. Run Pg on a non-default port to make life a little harder for automated attackers. Put a proxy in front You can also run PgBouncer or PgPool-II in front of PostgreSQL, acting as a connection pool and proxy. That way you can let the proxy handle SSL, not the real database host. The proxy can be on a separate VM or machine. Use of connection pooling proxies is generally a good idea with PostgreSQL anyway, unless the client app already has a built-in pool. Most Java application servers, Rails, etc. have built-in pooling. Even then, a server-side pooling proxy is at worst harmless. | {
"source": [
"https://serverfault.com/questions/627169",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
627,238 | What I'd like to do is to set the guests' network configuration (IP address, subnet, gateway, broadcast address) from the host system. The network setup in use is bridge mode. How can I configure the network from the host rather than configuring the client itself to a static network configuration? If I execute: virsh edit vm1 there is a <network> block as well and I tried to configure the network interface from there, but unfortunately the guest VM doesn't seem to use it and as such is offline to the network (since it uses automatic network configuration only)... Guest VMs are both Linux and Windows based. Any help would be highly appreciated. | If you don't want to do any configuration inside the guest, then the only option is a DHCP server that hands out static IP addresses. If you use bridge mode, that will probably be some external DHCP server. Consult its manual to find out how to serve static leases. But at least in the forward modes nat or route, you could use libvirt's built-in dnsmasq (more recent versions of libvirtd support dnsmasq's "dhcp-hostsfile" option). Here is how: First, find out the MAC addresses of the VMs you want to assign static IP addresses: virsh dumpxml $VM_NAME | grep 'mac address' Then edit the network: virsh net-list
virsh net-edit $NETWORK_NAME # Probably "default" Find the <dhcp> section, restrict the dynamic range and add host entries for your VMs <dhcp>
<range start='192.168.122.100' end='192.168.122.254'/>
<host mac='52:54:00:6c:3c:01' name='vm1' ip='192.168.122.11'/>
<host mac='52:54:00:6c:3c:02' name='vm2' ip='192.168.122.12'/>
<host mac='52:54:00:6c:3c:03' name='vm3' ip='192.168.122.12'/>
</dhcp> Then, reboot your VM (or restart its DHCP client, e.g. ifdown eth0; ifup eth0 ) Update: I see there are reports that the change might not get into effect after "virsh net-edit". In that case, try this after the edit: virsh net-destroy $NETWORK_NAME
virsh net-start $NETWORK_NAME ... and restart the VM's DHCP client. If that still doesn't work, you might have to stop the libvirtd service, kill any dnsmasq processes that are still alive, then start the libvirtd service again. Note: There is no way the KVM host could force a VM with an unknown OS and unknown config to use a certain network configuration. But if you know that the VM uses a certain network config protocol - say DHCP - you can use that. This is what this post assumes. Some OSes (e.g. some Linux distros) also allow passing network config options into the guest, e.g. via the kernel command line. But that is very specific to the OS, and I see no advantage over the DHCP method. | {
"source": [
"https://serverfault.com/questions/627238",
"https://serverfault.com",
"https://serverfault.com/users/99077/"
]
} |
627,371 | On login to an EC2 (Ubuntu) instance, I see *** /dev/xvda1 should be checked for errors *** I can't fsck /dev/xvda1 because it is mounted, and sudo umount /dev/xvda1 fails because it is in use. lsof shows jbd2/xvda 172 root cwd DIR 202,1 4096 2 /
jbd2/xvda 172 root rtd DIR 202,1 4096 2 /
jbd2/xvda 172 root txt unknown /proc/172/exe and kill -SIGKILL 172 is ineffective. What to do? | Most Linuxes these days should perform a forced fsck at boot time when the file /forcefsck is present on the system.
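As a sanity check after the reboot suggested below, you can confirm the check actually ran by inspecting the filesystem's bookkeeping; this assumes an ext2/3/4 filesystem on the device named in the question:
# tune2fs is part of e2fsprogs; a completed boot-time fsck updates the
# "Last checked" timestamp, so a fresh date here means the check really ran
sudo tune2fs -l /dev/xvda1 | grep -E 'Last checked|Mount count'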
If you are at liberty to reboot the VM, run touch /forcefsck and then reboot at your convenience. | {
"source": [
"https://serverfault.com/questions/627371",
"https://serverfault.com",
"https://serverfault.com/users/241621/"
]
} |
627,870 | SBS 2008 running Exchange 2007 and IIS6.0 CompanyA has two other companies that operate under the same roof. To accommodate email, we have 3 Exchange accounts per user to manage this. All users use their CompanyA account to log into the domain. CORP\user [email protected] CORP\user-companyb [email protected] <-- only used for email CORP\user-companyc [email protected] <-- only used for email Email works fine internally and via OWA. The problem exist when setting up Outlook for remote users who need access to companyB and companyC emails, Outlook pops up the certificate error. The SSL cert SAN has the following DNS names: webmail.companyA.com www.webmail.companyA.com CORP-SBS CORP-SBS.local autdiscover.companyA.com I was told by the users who access companyC email address remotely that this never used to happen before. This started with the CEO changed DNS providers on his own and in the process the original DNS settings were lost. He mentioned something about an SRV record being created which corrected this issue but that's about it. Looking for guidance on how to properly address this. | This issue is most likely caused by Outlook's Autodiscover service, part of the Outlook Anywhere functionality. Autodiscover provides various information to the end-user's Outlook client on the various services offered by Exchange and where these can be located; this is used for a variety of purposes: Autoconfiguration of Outlook profiles on first-run of Outlook, which can configure an Exchange account using only the user's email address and password, since the other information is automatically located and retrieved. Dynamic location of web-based services accessed by the Outlook client, including the out-of-office assistant, Unified Messaging functionality, location of the Exchange Control Panel (ECP) and so forth. This is Microsoft's proprietary implementation of RFC 6186 . Unfortunately, they didn't really follow the recommendations of that RFC in Outlook Anywhere's design, but that is perhaps to be expected since Exchange and RPC over HTTPS functionality is not a traditional IMAP/SMTP server. How does Autodiscover work (for external* users)? Autodiscover communicates with a web service on a Client Access Server (in this case, all roles are on the SBS server) at the path /Autodiscover/Autodiscover.xml , rooted off its default web site. To locate the FQDN of the server to communicate with, it removes the user portion of the email address, leaving the domain (i.e. @companyB.com). It attempts to communicate with Autodiscover using each of the following URLs, in turn: https://companyB.com/Autodiscover/Autodiscover.xml https://autodiscover.companyB.com/Autodiscover/Autodiscover.xml If these fail, it will attempt a non-secure connection by disabling SSL and attempting to communicate on port 80 (HTTP), typically after prompting the user to confirm this is an acceptable action (a flawed option in my opinion, since clueless users will typically approve this and risk sending credentials over plain text -- and clueless sysadmins who don't require secured communication of credentials and business-sensitive data are a risk to business continuity). Finally, a follow-on check is made using a service record (SRV) in the DNS, which exists at a well-known location off the companyB.com namespace and can redirect Outlook to the proper URL where the server is listening. What can go wrong? 
One of several issues can arise in this process: No DNS entries Typically, the root of the domain ( companyB.com ) might not resolve to a host record in the DNS. Improper DNS configuration (or a conscious decision not to expose the Outlook Anywhere service) might mean the autodiscover.companyB.com record does not exist either. In these cases, there is no major issue; Outlook simply continues to communicate with Exchange using the last known configuration, and may be degraded with respect to certain web-based functions for which it needs to retrieve URLs via Autodiscover (such as the out-of-office assistant). A workaround is to use Outlook Web Access to access such functions. Automatic configuration of Exchange accounts in new Outlook profiles is also not automated, and requires manual configuration of the RPC over HTTPS settings. However, this will not cause the issue you describe. Faulty SSL certificates It is entirely possible that the URL Outlook uses to attempt to contact the Exchange Server resolves to a host, which may or may not be a Client Access Server. If Outlook can communicate with that server on port 443, certificates will of course be exchanged to set up a secure channel between Outlook and the remote server. If the URL Outlook believes it is talking to is not listed on that certificate -- be it as the common name or a subject alternative name (SAN) -- this will elicit Outlook to present the dialog you describe in your initial post. This can happen for several reasons, all down to how DNS is configured and how the URLs I described above are checked by Outlook: If the https://companyB.com/ ... URL resolves to a host record, and the web server at that address listens on port 443, and it has an SSL certificate which does not list companyB.com in the common name or Subject Alternative Name, then the issue will occur. It matters not whether the host is an Exchange Server or not; it might be a web server hosting a company website which is not properly configured. Corrige either: Disable the host record at the root of the companyB.com zone (requiring visitors to the website or other service to enter www.companyB.com , or equivalent; or Disable access to the machine at companyB.com on port 443, causing Outlook to reject the companyB.com URL before certificates are exchanged and move on; or Fix the certificate at companyB.com to ensure companyB.com is listed on that certificate, and that attempts to visit https://companyB.com in a standard browser do not fail. The above applies regardless of whether companyB.com resolves to the Exchange Server; if Outlook can communicate with it, it will later discover that the /Autodiscover/Autodiscover.xml path yields an HTTP 404 error (does not exist) and move on. If the https://autodiscover.companyB.com/ ... URL resolves to the Exchange Server (or any other server) but, again, autodiscover.companyB.com is not listed as the common name or a subject alternative name, you will observe this behaviour. It can be fixed as above by fixing the certificate, or as you rightly indicate, you can use a SRV record to redirect Outlook to a URL which is listed on the certificate and which Outlook can communicate with. Your probable fix to this issue In this case, the typical fix is to do the latter; create SRV records in the new DNS provider to ensure Outlook is redirected to autodiscover.companyA.com , which (any other issues aside) will work successfully since it is listed on the certificate as a SAN. 
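Concretely, the record described in the steps below usually takes the following shape (shown in zone-file syntax; 0 0 443 are the conventional priority, weight and port values, and the host names are the ones from this example, so substitute your own):
; SRV record pointing companyB.com Autodiscover lookups at companyA's published name
_autodiscover._tcp.companyB.com. 3600 IN SRV 0 0 443 autodiscover.companyA.com.
Once the record is live, a query such as nslookup -type=SRV _autodiscover._tcp.companyB.com (or the equivalent dig query) should return the target host above, which is a quick way to verify what clients will see.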
For this to work, you need to: Configure an _autodiscover._tcp.companyB.com SRV record in accordance with the documentation . Delete the autodiscover.companyB.com host record, if it exists, to prevent Outlook resolving this and attempting to reach Autodiscover in that way. Also resolve any issues with HTTPS access to https://companyB.com as above, since Outlook will enumerate the URLs derived from the user's email address before falling over to the SRV record approach. *How does Autodiscover work (for internal, domain-joined clients)? I add this merely for completeness, as it is another common reason for these certificate prompts to occur. On a domain-joined client, when it is local to the Exchange environment (i.e. on the internal LAN), the above techniques are not used. Instead, Outlook communicates directly with a Service Connection Point in Active Directory (listed in Exchange Client Access settings), which lists the URL where Outlook can locate the Autodiscover service. It is common for certificate warnings to occur in these circumstances, because: The default URL configured for this purpose refers to the internal URL of Exchange, which is often dissimilar from the public URL. SSL certificates may not list the internal URL on them. At present, yours does, but this may become an issue in the future for Active Directory domains which use .local and similar non-global gTLD domain name suffixes, since a decision by ICANN prohibits SSL certificates for such domains being issued post-2016. The internal address might not resolve to the proper server. In this case, the matter is resolved by correcting the recorded URL to refer to the proper, external address (listed in the certificate), by running the Set-ClientAccessServer cmdlet with the -AutodiscoverServiceInternalUri switch. Parties doing this typically also configure split-horizon DNS , either because they are required to do so by their network configuration and/or for continuity of resolution in the event of an upstream resolver/connection outage. | {
"source": [
"https://serverfault.com/questions/627870",
"https://serverfault.com",
"https://serverfault.com/users/201758/"
]
} |
627,903 | There has been a lot of talking about a security issue relative to the cgi.fix_pathinfo PHP option used with Nginx (usually PHP-FPM, fast CGI). As a result, the default nginx configuration file used to say: # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini However, now, the "official" Nginx wiki states that PATH_INFO can be handled correctly without disabling the above PHP option. So what? Questions Can you explain clearly what does cgi.fix_pathinfo do? ( official doc just says : "For more information on PATH_INFO, see the CGI specs") What will PHP really do with these PATH_INFO and SCRIPT_FILENAME variables? Why and how can it be dangerous with Nginx? ( detailed examples) Does the issue still exist in recent versions of these programs? Is Apache vulnerable? I'm trying to understand the issue at each step. For example, I don't understand why using the php-fpm Unix socket could avoid this problem. | TL;DR - the fix (which you may not even need) is VERY SIMPLE and at the end of this answer. I'll try to address your specific questions, but your misunderstanding of what PATH_INFO is makes the questions themselves a little bit wrong. First question should be "What is this path info business?" Path info is stuff after the script in a URI (should start with a forward slash, but ends before the query arguments, which start with a ? ). The last paragraph in the overview section of the Wikipedia article about CGI sums it up nicely. Below the PATH_INFO is "/THIS/IS/PATH/INFO": http://example.com/path/to/script.php/THIS/IS/PATH/INFO?query_args=foo Your next question should have been: "How does PHP determine what PATH_INFO and SCRIPT_FILENAME are?" Earlier versions of PHP were naive and technically didn't even support PATH_INFO , so what was supposed to be PATH_INFO was munged onto SCRIPT_FILENAME which, yes, is broken in many cases. I don't have an old enough version of PHP to test with, but I believe it saw SCRIPT_FILENAME as the whole shebang: "/path/to/script.php/THIS/IS/PATH/INFO" in the above example (prefixed with the docroot as usual). With cgi.fix_pathinfo enabled, PHP now correctly finds "/THIS/IS/PATH/INFO" for the above example and puts it into PATH_INFO and SCRIPT_FILENAME gets just the part that points to the script being requested (prefixed with the docroot of course). Note: when PHP got around to actually supporting PATH_INFO , they had to add a configuration setting for the new feature so people running scripts that depended on the old behavior could run new PHP versions. That's why there's even a configuration switch for it. It should have been built-in (with the "dangerous" behavior) from the start. But how does PHP know what part is the script and what it path info? What if the URI is something like: http://example.com/path/to/script.php/THIS/IS/PATH/INFO.php?q=foo That can be a complex question in some environments. What happens in PHP is that it finds the first part of the URI path that does not correspond to anything under the server's docroot. For this example, it sees that on your server you don't have "/docroot/path/to/script.php/THIS" but you most certainly do have "/docroot/path/to/script.php" so now the SCRIPT_FILENAME has been determined and PATH_INFO gets the rest. 
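To make that splitting concrete, for the example URI above (and a docroot of /docroot) the values PHP ends up with look roughly like this:
Request URI:     /path/to/script.php/THIS/IS/PATH/INFO.php?q=foo
SCRIPT_FILENAME: /docroot/path/to/script.php    (the part that maps to a real file under the docroot)
PATH_INFO:       /THIS/IS/PATH/INFO.php         (the remainder)
QUERY_STRING:    q=foo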
So now the good example of the danger that is nicely detailed in the Nginx docs and in Hrvoje Špoljar's answer (you can't be fussy about such a clear example) becomes even more clear: given Hrvoje's example (" http://example.com/foo.jpg/nonexistent.php "), PHP sees a file on your docroot "/foo.jpg" but it does not see anything called "/foo.jpg/nonexistent.php" so SCRIPT_FILENAME gets "/foo.jpg" (again, prefixed with docroot) and PATH_INFO gets "/nonexistent.php". Why and how it can be dangerous should now be clear: The web server really isn't at fault - it's merely proxying the URI to PHP, which innocently finds that "foo.jpg" actually contains PHP content, so it executes it (now you've been pwned!). This is NOT particular to Nginx per se. The REAL problem is that you let untrusted content be uploaded somewhere without sanitizing and you allow other arbitrary requests to the same location, which PHP happily executes when it can. Nginx and Apache could be built or configured to prevent requests using this trickery, and there are plenty of examples for how to do that, including in user2372674's answer . This blog article explains the problem nicely, but it's missing the right solution. However, the best solution is to just make sure PHP-FPM is configured correctly so that it will never execute a file unless it ends with ".php". It's worth noting that recent versions of PHP-FPM (~5.3.9+?) have this as default, so this danger isn't so much problem any more. The Solution If you have a recent version of PHP-FPM (~5.3.9+?), then you need to do nothing, as the safe behaviour below is already the default. Otherwise, find php-fpm's www.conf file (maybe /etc/php-fpm.d/www.conf , depends on your system). Make sure you have this: security.limit_extensions = .php Again, that's default in many places these days. Note that this doesn't prevent an attacker from uploading a ".php" file to a WordPress uploads folder and executing that using the same technique. You still need to have good security for your applications. | {
"source": [
"https://serverfault.com/questions/627903",
"https://serverfault.com",
"https://serverfault.com/users/158888/"
]
} |
627,910 | I would like to allow all users (or maybe only some of them but all would already be good) to be able to start/stop a specific systemd service. I found the following solution #include <stdlib.h>
#include <unistd.h>
int main(void)
{
execl("/usr/bin/systemctl", "systemctl", "start", "myService", NULL);
return(EXIT_SUCCESS);
}
gcc allow.c -o allow
chown root:root allow
chmod 4755 allow That seems to do the trick but I am wondering if this is acceptable security-wise or if there are other options (I am using CentOS 7) Update : this is bad practice. "sudo" is your best friend here since the command has specific non variable arguments it can be allowed in the sudoers file. | | {
"source": [
"https://serverfault.com/questions/627910",
"https://serverfault.com",
"https://serverfault.com/users/164582/"
]
} |
628,610 | I have successfully increased the nofile and nproc value for the local users, but I couldn't find a proper solution for the processes launched by systemd. Adding max_open_files to the MariaDB configuration doesn't help. su - mysql to change the limit manually doesn't work either (This account is currently not available). /etc/security/limits.conf * soft nofile 102400
* hard nofile 102400
* soft nproc 10240
* hard nproc 10240 /etc/security/limits.d/20-nproc.conf (no other files present in the directory) * soft nofile 102400
* hard nofile 102400
* soft nproc 10240
* hard nproc 10240 /etc/sysctl.conf fs.file-max = 2097152 /etc/pam.d/system-auth #%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 1000 quiet
account required pam_permit.so
password requisite pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so /etc/pam.d/systemd-user #%PAM-1.0
# Used by systemd when launching systemd user instances.
account include system-auth
session include system-auth
auth required pam_deny.so
password required pam_deny.so /var/log/mariadb/mariadb.log [Warning] Changed limits: max_open_files: 1024 max_connections: 32 table_cache: 491 /proc/mysql_pid/limits Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 30216 30216 processes
Max open files 1024 4096 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 30216 30216 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us It is interesting that different processes (users) have different Max open files number: mysql - 1024 4096
apache - 1024 4096
postfix - 4096 4096 | systemd completely ignores /etc/security/limits*. If you are using an RPM that auto-squashes its systemd service file on update, you'll want to file a PR to ask them to mark those files as 'noreplace'. You need to update the .service file /usr/lib/systemd/system/<servicename>.service [Unit]
Description=Some Daemon
After=syslog.target network.target
[Service]
Type=notify
LimitNOFILE=49152
ExecStart=/usr/sbin/somedaemon
[Install]
WantedBy=multi-user.target sickill pointed out that you can also override the package-installed values (found in the above file) by adding them to /etc/systemd/system/<servicename>.d/override.conf [Service]
LimitNOFILE=49152 This provides the added bonus of system-specific settings that aren't in danger of being overwritten on package update. Then issue the command: systemctl daemon-reload | {
"source": [
"https://serverfault.com/questions/628610",
"https://serverfault.com",
"https://serverfault.com/users/196053/"
]
} |
628,921 | I have got a file myfile-privkey.pem . How do I check if the private key file is password protected using ssh-keygen? | ssh-keygen -y -f myfile-privkey.pem If the key is password protected, you will see a "password:" prompt. The flags in this command are: -y Read private key file and print public key.
-f Filename of the key file. As extra guidance, always double-check any command that someone, especially someone online, tells you to run when dealing with your private keys. | {
"source": [
"https://serverfault.com/questions/628921",
"https://serverfault.com",
"https://serverfault.com/users/185710/"
]
} |
628,989 | I am using Ansible and I have this configuration in my inventory/all: [master]
192.168.1.10 ansible_connection=ssh ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
[slave]
192.168.1.11 ansible_connection=ssh ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
192.168.1.12 ansible_connection=ssh ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
[app]
192.168.1.13 ansible_connection=ssh ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
[all:children]
master
slave I don't want to repeat all the parameters for each new instance. How can I configure them just in one place? Is there any file with these parameters? | You can add the following section to your inventory file: [all:vars]
ansible_connection=ssh
ansible_user=vagrant
ansible_ssh_pass=vagrant Note: Before Ansible 2.0 ansible_user was ansible_ssh_user . | {
"source": [
"https://serverfault.com/questions/628989",
"https://serverfault.com",
"https://serverfault.com/users/228940/"
]
} |
629,045 | With the following Nginx config: server {
listen 80;
listen [::]:80 default_server ipv6only=on;
server_name isitmaintained.com;
...
}
server {
listen 178.62.136.230:80;
server_name 178.62.136.230;
add_header X-Frame-Options "SAMEORIGIN";
return 301 $scheme://isitmaintained.com$request_uri;
} I am trying to redirect http://178.62.136.230/ to http://isitmaintained.com/ but when I deploy this config I end up with a Redirect loop or both of those links. What am I doing wrong? | Try this on the second block: server {
listen 80;
server_name 178.62.136.230;
return 302 $scheme://google.com$request_uri;
} The problem is that the second server block's listen directive is more specific than the first server block's, therefore it takes precedence. And since the second block is the only virtual host for that listen specification, it is always used. Note: 301 will add a permanent redirect. Use 302 for testing. | {
"source": [
"https://serverfault.com/questions/629045",
"https://serverfault.com",
"https://serverfault.com/users/37449/"
]
} |
629,083 | Without getting to far into the weeds, Nginx is forcing my hand in order to accomplish some magic with vhosts and the map directive. Is there an elegant (relative) solution to sharing a variable across multiple define calls, which allows each define call to append it's data to the global variable? In software this would be known as a singleton. -- The weeds -- Nginx has a map directive which dictates which upstream server pool the request should be passed to, like so: map $http_host $upstream_pool {
hostnames;
blog.example.com blog_example_com;
repo.example.com repo_example_com;
default example_com;
} As you can see, any requests for blog.example.com will be passed to the blog_example_com upstream server pool (by way of proxy_pass ). The problem is the map directive syntax is that it can only be included within the main http block (nginx.conf), whereas vhost specific directives such as upstream and location can be included in the server block of a vhost config. My nodes.pp manifest looks something like this: service-a-1.example.com inherits project_dev {
nginx::vhost { 'mothership': }
nginx::vhost { 'mothership_blog': }
nginx::vhost { 'repo': }
} As you can see, after a successful puppet run, I should end up with 3 distinct vhost config files in /etc/nginx/vhost.d/ dir. The problem I am having is that in order for the map directive to work, I need to know which vhosts were loaded, so I can add their respective upstream ids to the map directive, which I have defined in the primary config: /etc/nginx/nginx.conf within the http block (of which there can only be one). -- What I have tried - - I have a global.pp file which does some "bootstrapping" and in this file I added a $singleton = '' syntax, then in the nginx::vhost define, I added this syntax: $tpl_upstream_pool_labels = inline_template("<% tpl_upstream_pools.keys.sort.each do |role| %><%= role %>_<%= tpl_domain_primary %>_<%= tpl_domain_top_level %>|<% end %>")
$singleton = "${singleton}${tpl_upstream_pool_labels}"
notify { "\n--------------------- Nginx::Conf::Vhost::Touch | Timestamp: ${timestamp} | Pool labels: ${singleton} -------------------------\n": } Which should result in a pipe delimited list of upstream ids. As mentioned earlier, in the nodes.pp manifest, I make three calls to nginx::vhost, and would expect the $singleton global variable to be appended for every call, however it is not, it only contains the last call's data. I also tried to hack my way around this by writing a temp file like so: $temp_file_upstream_pool_labels_uri = "/tmp/puppet_known_upstreams_${timestamp}.txt"
exec { "event_record_known_upstream_${name}" :
command => "touch ${temp_file_upstream_pool_labels_uri} && echo ${tpl_upstream_pool_labels} >> ${temp_file_upstream_pool_labels_uri}",
provider => 'shell'
} Then in the nginx::conf::touch define, where the primary config nginx.conf is to be written by puppet, I tried this: $temp_file_upstream_pool_labels_uri = "/tmp/puppet_known_upstreams_${timestamp}.txt"
$contents = file($temp_file_upstream_pool_labels_uri) Which should, in theory, load the contents of the file into the $contents variable. But when I run puppet using this approach I get an error that the file does not exist. I ensured that the nginx::conf::touch call is not made until after all the vhosts were considered, but still to no avail. | | {
"source": [
"https://serverfault.com/questions/629083",
"https://serverfault.com",
"https://serverfault.com/users/87836/"
]
} |
629,439 | We will be running CentOS 7 on our new server. We have 6 x 300GB drives in raid6 internal to the server. (Storage is largely external in the form of a 40TB raid box.) The internal volume comes to about 1.3TB if formatted as a single volume. Our sysadmin thinks it is a really bad idea to install the OS on one big 1.3TB partition. I am a biologist. We constantly install new software to run and test, most of which lands in /usr/local. However, because we have about 12 non-computer savvy biologists using the system, we also collect a lot cruft in /home as well. Our last server had a 200GB partition for /, and after 2.5 years it was 90% full. I don't want that to happen again, but I also don't want to go against expert advice! How can we best use the 1.3TB available to make sure that space is available when and where it's needed but not create a maintenance nightmare for the sysadmin?? | The primary (historical) reasons for partitioning are: to separate the operating system from your user and application data . Until the release of RHEL 7 there was no supported upgrade path and a major version upgrade would require a re-install and then having for instance /home and other (application) data on separate partitions (or LVM volumes) allows you to easily preserve the user data and application data and wipe the OS partition(s). Users can't log in properly and your system starts to fail in interesting ways when you completely run out of disk space. Multiple partitions allow you to assign hard reserved disk space for the OS and keep that separate from the area's where users and/or specific applications are allowed to write (eg /home /tmp/ /var/tmp/ /var/spool/ /oradata/ etc.) , mitigating operational risk of badly behaved users and/or applications. Quota. Disk quota allow the administrator to prevent an individual user of using up all available space, disrupting service to all other users of the system. Individual disk quota is assigned per file system, so a single partition and thus a single file-system means only 1 disk quotum. Multiple (LVM) partitions means multiple file-systems allowing for more granular quota management. Depending on you usage scenario you may want for instance allow each user 10 GB in their home directory, 2TB in the /data directory on the external storage array and set up a large shared scratch area where anyone can dump datasets too large for their home directory and where the policy becomes "full is full" but when that happens nothing breaks either. Providing dedicated IO paths . You may have a combination of SSD's and spinning disks and would do well to address them differently. Not so much an issue in a general purpose server, but quite common in database setups is to also assign certain spindles (disks) to different purposes to prevent IO contention, e.g. seperate disk for the transaction logs, separate disks for actual database data and separate disks for temp space. . Boot You may have a need for a separate /boot partition. Historically to address BIOS problems with booting beyond the 1024 cylinder limit, nowadays more often a requirement to support encrypted volumes, to support certain RAID controllers, HBA's that don't support booting from SAN or file-systems not immediately supported by the installer etc. Tuning You may have a need for different tuning options or even completely different file-systems. 
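As a concrete sketch of the grow-as-needed LVM layout recommended in the next paragraph (the device, volume group and logical volume names here are made up; lvextend's -r flag grows the filesystem together with the volume):
# Initial layout: one big physical volume, deliberately small logical volumes,
# and the rest of the space left unallocated for later
pvcreate /dev/sda3
vgcreate vg_system /dev/sda3
lvcreate -n lv_home -L 100G vg_system
mkfs.xfs /dev/vg_system/lv_home
# Two years later, when /home starts filling up, grow it online:
lvextend -r -L +200G /dev/vg_system/lv_home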
If you use hard partitions you more or less have to get it right at install time and then a single large partition isn't the worst, but it does come with some of the restrictions above. Typically I recommend partitioning your main volume as a single large Linux LVM physical volume and then creating logical volumes that fit your current needs, leaving the remainder of your disk space unassigned until needed. You can then expand those volumes and their file-systems as needed (which is a trivial operation that can be done on a live system), or create additional ones as well. Shrinking LVM volumes is trivial but often shrinking the file-systems on them is not supported very well and should probably be avoided. | {
"source": [
"https://serverfault.com/questions/629439",
"https://serverfault.com",
"https://serverfault.com/users/242954/"
]
} |
629,440 | I have a VPS (WHM/cPanel) where i keep our clients projects. Some of these projects are well known frameworks and some of them are custom PHP/MySQL codes. At certain times i am noticing high loading but i can't really find where is the cause. I am using top -c to check the top processes and have also installed Munin on WHM. I would like to ask if there is a certain way to monitor in real time the causes of the high load. At the time of high load, i am following these steps: Check global traffic and system resources Check Apache/MySQL/PHP logs Check which project causes the high load (usually from top ) Go on a full stack trace of the code causing the high load Is there a software that can do all of that in a central place? Is this the right way? What do you do in these situations? | The primary (historical) reasons for partitioning are: to separate the operating system from your user and application data . Until the release of RHEL 7 there was no supported upgrade path and a major version upgrade would require a re-install and then having for instance /home and other (application) data on separate partitions (or LVM volumes) allows you to easily preserve the user data and application data and wipe the OS partition(s). Users can't log in properly and your system starts to fail in interesting ways when you completely run out of disk space. Multiple partitions allow you to assign hard reserved disk space for the OS and keep that separate from the area's where users and/or specific applications are allowed to write (eg /home /tmp/ /var/tmp/ /var/spool/ /oradata/ etc.) , mitigating operational risk of badly behaved users and/or applications. Quota. Disk quota allow the administrator to prevent an individual user of using up all available space, disrupting service to all other users of the system. Individual disk quota is assigned per file system, so a single partition and thus a single file-system means only 1 disk quotum. Multiple (LVM) partitions means multiple file-systems allowing for more granular quota management. Depending on you usage scenario you may want for instance allow each user 10 GB in their home directory, 2TB in the /data directory on the external storage array and set up a large shared scratch area where anyone can dump datasets too large for their home directory and where the policy becomes "full is full" but when that happens nothing breaks either. Providing dedicated IO paths . You may have a combination of SSD's and spinning disks and would do well to address them differently. Not so much an issue in a general purpose server, but quite common in database setups is to also assign certain spindles (disks) to different purposes to prevent IO contention, e.g. seperate disk for the transaction logs, separate disks for actual database data and separate disks for temp space. . Boot You may have a need for a separate /boot partition. Historically to address BIOS problems with booting beyond the 1024 cylinder limit, nowadays more often a requirement to support encrypted volumes, to support certain RAID controllers, HBA's that don't support booting from SAN or file-systems not immediately supported by the installer etc. Tuning You may have a need for different tuning options or even completely different file-systems. If you use hard partitions you more or less have to get it right at install time and then a single large partition isn't the worst, but it does come with some of the restrictions above. 
Typically I recommend partitioning your main volume as a single large Linux LVM physical volume and then creating logical volumes that fit your current needs; leave the remainder of your disk space unassigned until needed. You can then expand those volumes and their file-systems as needed (which is a trivial operation that can be done on a live system), or create additional ones as well. Shrinking LVM volumes is trivial, but shrinking the file-systems on them is often not supported very well and should probably be avoided. | {
"source": [
"https://serverfault.com/questions/629440",
"https://serverfault.com",
"https://serverfault.com/users/203220/"
]
} |
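To make the LVM recommendation in the preceding answer concrete, here is a minimal sketch of growing a logical volume and its filesystem on a live system. The volume group and logical volume names (vg0, lv_data) and the ext4 filesystem are assumptions for illustration; adjust them to your layout.
# Check current layout and free space in the volume group.
lvs
vgs
# Grow the logical volume by 10 GiB out of the unassigned space.
lvextend -L +10G /dev/vg0/lv_data
# Grow the ext4 filesystem online to fill the enlarged volume.
resize2fs /dev/vg0/lv_data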
629,528 | We have a network setup for a demo, which lasts about 15 minutes. Our DHCP server is configured to assign ~ 100 addresses (the maximum number of simultaneous connections of our AP) ... but since people might come and go very quickly we need to keep the lease time very short in order to free the IP addresses and allow other people to connect. Initially I wanted to go for a lease time as short as 25 seconds, considering that the demo is quite short, and to be sure that no IP will be "abusively" reserved by the DHCP server ...
However, I am afraid of several things. First , the impact on the load of the network. Second , I have read here and there that there might be some "weird" issues with time leases below 1 minute (e.g. What is a good DHCP lease timeout configuration ). Does somebody know what can be the different problems with using such a short time lease? What is the impact on the network? What would be a short but safe lease duration to use? | With a very low lease time you will see an increase of network traffic, particularly broadcast traffic as the "discover" and "offer" phases of DHCP are layer 2 broadcasts. How much of an issue this is depends on many factors such as the size and complexity of the network, latency, performance of the DHCP server, etc. Keep in mind DHCP clients do not wait until their lease is expired to try to renew it. So if you gave me a 60-second lease I'll be talking to the DHCP server (potentially) every 30 seconds to renew it. As for "weird" issues, anything goes. Different DHCP clients will behave differently. Some may handle it fine, some may have problems renewing so often and fail. Perhaps there are clients which get a lease and simply sleep for a certain period of time then check if they need to renew or toss the address if it expired. If the sleep is longer than the lease then the system will keep the IP longer than it is allowed to. I haven't seen that issue before but I have seen things like the IP a client requests in the "request" phase being different than the one the server gave it in the "offer" phase but the server actually gave the client the "request" IP, which was already in use. Never under-estimate how poorly software can be written. | {
"source": [
"https://serverfault.com/questions/629528",
"https://serverfault.com",
"https://serverfault.com/users/195306/"
]
} |
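One way to gauge the extra broadcast chatter described in the preceding answer is simply to watch DHCP traffic during the demo. The interface name eth0 is an assumption; use whichever interface faces the access point.
# Show every DISCOVER/OFFER/REQUEST/ACK on the wire so you can see how often
# clients actually renew with a short lease.
tcpdump -n -i eth0 port 67 or port 68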
629,534 | I have a VoIP box setup with Asterisk and using chan_dongle to provide me with an inbound GSM trunk as well as a couple of DID SIP trunks with local numbers. I would like to be able to have the following call-flow: Person ring the GSM mobile number, get's a voice prompt to state they should wait. I call one of the SIP trunk numbers registered on the PBX as an inbound route. The PBX joins the waiting call to the newly established call, so I can now speak to the person who called me. At some point I would like to add SMS to the mix to send me a message when someone is waiting, but this is phase 2. Can this be done? Many thanks. | With a very low lease time you will see an increase of network traffic, particularly broadcast traffic as the "discover" and "offer" phases of DHCP are layer 2 broadcasts. How much of an issue this is depends on many factors such as the size and complexity of the network, latency, performance of the DHCP server, etc. Keep in mind DHCP clients do not wait until their lease is expired to try to renew it. So if you gave me a 60-second lease I'll be talking to the DHCP server (potentially) every 30 seconds to renew it. As for "weird" issues, anything goes. Different DHCP clients will behave differently. Some may handle it fine, some may have problems renewing so often and fail. Perhaps there are clients which get a lease and simply sleep for a certain period of time then check if they need to renew or toss the address if it expired. If the sleep is longer than the lease then the system will keep the IP longer than it is allowed to. I haven't seen that issue before but I have seen things like the IP a client requests in the "request" phase being different than the one the server gave it in the "offer" phase but the server actually gave the client the "request" IP, which was already in use. Never under-estimate how poorly software can be written. | {
"source": [
"https://serverfault.com/questions/629534",
"https://serverfault.com",
"https://serverfault.com/users/99864/"
]
} |
629,786 | I have to reuse an old rack and servers. The server is 74 cm deep and the rack around 65 cm. With a few tricks I'd be able to mount everything I need if I leave the back and front doors open. I thought about more dust, more humidity, all the danger of having a door permanently open that cannot be closed (a person closing and breaking it), and physical access to the servers granted to anyone able to enter the room. I am generally against working this way, but I would like to hear your thoughts and get a complete list of technical reasons that describe what is bad about the decision to mount anyway. | Recycle an old server rack? Arguments against Unsafe : someone could injure themselves by colliding with a server that should not be there. The doors are now more of a hindrance than a help and should be removed. No doors = no security. Defeats the purpose of having a rack. Maintenance of older racks is more time-consuming than more modern racks. Looks ( and is ) unprofessional and does not give a good impression. You should use up-to-date standardized equipment so you can add and remove servers seamlessly. Relocating an open rack to another room or site will require more time and effort. Keep dust out and servers live longer. Arguments for (devil's advocate) An old rack is better than no rack. No doors = better airflow (for old racks anyway) :D Will the old servers even fit into a new rack? The old servers and, presumably, operating systems must serve some useful function that may not be smoothly migrated to new hardware. Even if there is more dust the old servers are probably not that valuable and cheaper to replace one at a time. There may be plans to replace them in the next couple of years anyway. The server room itself should be secure. There is no money left in this year's budget (or other cashflow issues). Perhaps the old rack is mostly unused and space is cheap. Mounting safely inside the rack To make it possible to close the doors consider mounting the server in one of the following arrangements (assuming the old server is 4u): vertically at the front side of the rack with the front on top so drive bays are accessible if this is more important Real-estate Penalty: 13u = 16 modern servers that could fit where that old server goes and 13u are wasted. vertically at the rear side of the rack with the back on top so cabling is more accessible Real-estate Penalty: 13u My favourite: diagonally and proudly show it off. Real-estate Penalty: 7u (approx.) This may increase your chances of getting a new rack sooner (but I would not like to have to look at that every day!). | {
"source": [
"https://serverfault.com/questions/629786",
"https://serverfault.com",
"https://serverfault.com/users/60311/"
]
} |
629,883 | I'm using Ubuntu 14.04.1 (with OpenSSH 6.6 and libpam-google-authenticator 20130529-2). I'm trying to set up SSH logins where the public key authenticates (without a password) and a user is prompted for a code from Google's Authenticator. Following/adapting these instructions has gotten me a password prompt as well as a Google Auth prompt: https://scottlinux.com/2013/06/02/use-google-authenticator-for-two-factor-ssh-authentication-in-linux/ http://www.howtogeek.com/121650/how-to-secure-ssh-with-google-authenticators-two-factor-authentication/ https://wiki.archlinux.org/index.php/Google_Authenticator and https://wiki.archlinux.org/index.php/SSH_keys#Two-factor_authentication_and_public_keys https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-two-factor-authentication I've installed the package, edited my /etc/ssh/sshd_config and /etc/pam.d/ssh files In /etc/ssh/sshd_config : ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
UsePAM yes and at the bottom of /etc/pam.d/ssh : auth required pam_google_authenticator.so nullok # (I want to give everyone a chance to set up their 2FA before removing "nullok") I know PAM is order dependent, but is sshd_config also? What am I doing wrong? Any help would be appreciated. | Have got it working well, first did: apt-get install libpam-google-authenticator In /etc/pam.d/sshd I have changed/added the following lines (at the top): # @include common-auth
auth required pam_google_authenticator.so And in /etc/ssh/sshd_config : ChallengeResponseAuthentication yes
UsePAM yes
AuthenticationMethods publickey,keyboard-interactive
PasswordAuthentication no Works well and I now receive a "Verification code" prompt after authentication with public key. I am not sure how I would allow authentication with password+token OR key+token, as I have now effectively removed the password authentication method from PAM. Using Ubuntu 14.04.1 LTS (GNU/Linux 3.8.0-19-generic x86_64) with ssh -v : OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014 | {
"source": [
"https://serverfault.com/questions/629883",
"https://serverfault.com",
"https://serverfault.com/users/235535/"
]
} |
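As a companion to the sshd and PAM settings in the preceding answer, each user still needs a secret in ~/.google_authenticator before the verification prompt will accept codes. The flag combination below is one reasonable, commonly used set, not the only valid one.
# Run as the user who will log in: time-based codes, no code reuse,
# at most 3 logins per 30 seconds, and a window of 3 codes for clock skew.
google-authenticator -t -d -f -r 3 -R 30 -w 3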
630,022 | I have been creating AWS VPCs and I am wondering if there is a recommended CIDR value when creating VPCs. What are the factors that I must consider when choosing a CIDR and does the CIDR value affect the performance of the network? | I would recommend the following considerations: If you creating an IPSEC connection between your corporate LAN and your VPC, use a CIDR that is different than that on your corporate LAN. This will prevent routing overlaps and create an identity distinction for reference. For very large networks, use at least different 16-bit masks in different regions eg eu-west-1 10.1.0.0/16
us-east-1 10.2.0.0/16
us-west-1 10.3.0.0/16 For smaller networks, use a 24-bit mask in different regions eg eu-west-1 10.0.1.0/24
us-east-1 10.0.2.0/24
us-west-1 10.0.3.0/24 Consider making a distinction between private and public subnets, eg private 10.0.1.0/24 (3rd byte < 129)
public 10.0.129.0/24 (3rd byte > 128) Don't over-allocate address space to subnets, eg eu-west-1 10.0.1.0/26
eu-west-1 10.0.1.64/26
eu-west-1 10.0.1.128/26
eu-west-1 10.0.1.192/26
(62 hosts per subnet) Don't under-allocate either. If you use a load of Elastic Load Balancers, remember that they will also consume available ip addresses on your subnets. This is a particularly true if you use ElasticBeanstalk. | {
"source": [
"https://serverfault.com/questions/630022",
"https://serverfault.com",
"https://serverfault.com/users/109363/"
]
} |
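If you drive this from the AWS CLI, the CIDR choices in the preceding answer translate directly into the create calls. The region, the VPC ID and the exact ranges below are placeholders for illustration.
# Create a VPC for eu-west-1 and carve one private and one public subnet out of it;
# vpc-11111111 stands for the VpcId returned by the first call.
aws ec2 create-vpc --region eu-west-1 --cidr-block 10.1.0.0/16
aws ec2 create-subnet --region eu-west-1 --vpc-id vpc-11111111 --cidr-block 10.1.1.0/24
aws ec2 create-subnet --region eu-west-1 --vpc-id vpc-11111111 --cidr-block 10.1.129.0/24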
630,042 | What techniques do you typically use, when you start a gig at a complex environment and no one can tell you what servers are out there, and documentation doesn't exist? I typically start off with getting access to one system, and a few URLs, then drill down and start manually enumerating the networks using nmap, and a fair amount of manual searching. How do the rest of you handle this challenge? | I would recommend the following considerations: If you creating an IPSEC connection between your corporate LAN and your VPC, use a CIDR that is different than that on your corporate LAN. This will prevent routing overlaps and create an identity distinction for reference. For very large networks, use at least different 16-bit masks in different regions eg eu-west-1 10.1.0.0/16
us-east-1 10.2.0.0/16
us-west-1 10.3.0.0/16 For smaller networks, use a 24-bit mask in different regions eg eu-west-1 10.0.1.0/24
us-east-1 10.0.2.0/24
us-west-1 10.0.3.0/24 Consider making a distinction between private and public subnets, eg private 10.0.1.0/24 (3rd byte < 129)
public 10.0.129.0/24 (3rd byte > 128) Don't over-allocate address space to subnets, eg eu-west-1 10.0.1.0/26
eu-west-1 10.0.1.64/26
eu-west-1 10.0.1.128/26
eu-west-1 10.0.1.192/26
(62 hosts per subnet) Don't under-allocate either. If you use a load of Elastic Load Balancers, remember that they will also consume available ip addresses on your subnets. This is a particularly true if you use ElasticBeanstalk. | {
"source": [
"https://serverfault.com/questions/630042",
"https://serverfault.com",
"https://serverfault.com/users/243367/"
]
} |
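The question above falls back on nmap for manual enumeration; a common first pass is a ping sweep of each candidate subnet followed by a service and OS scan of whatever answers. The 10.0.0.0/24 range is only an example.
# Find live hosts, save their addresses, then fingerprint the responders.
nmap -sn 10.0.0.0/24 -oG - | awk '/Up$/{print $2}' > live-hosts.txt
nmap -sV -O -iL live-hosts.txt -oA inventory-scan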
630,043 | What happens when someone gets access to your DNS control and sets a TTL of 100 years on your domain, while pointing it's IP to some obscure website? (and you discover it too late of course) | Ryan has provided an excellent answer to one interpretation of your question. Given our target audience however, and the situation of the people most likely to stumble upon the question, I'm going to answer a different one. What does a company do when a bad TTL makes it out into the wild? You have a few options here. First and foremost though, you need to identify the problem vector and eliminate it. Trying to contain the damage is pointless when you have no control over the problem repeating itself. Wait. If it's not a crucial record, you can probably wait it out. As Ryan has covered, the "maximum damage" is not 68 years, but in practice most likely to be 7 days. This is the most common default for the maximum life of a positive cache entry (BIND, JunOS, etc.). Even in cases where this is not accurate, one would hope the server is receiving routine security updates that force a process restart. Speaking as the operator of several large clusters I do not find it likely that a MSO would set this to a larger value on purpose: it only serves to generate more external inquiries (which we hate). You may have to move on to the next steps for companies using less popular software, or operators who hate themselves. Annoy DNS cache operators. If you need to get record cleared from cache ASAP, your only real choice is to start reaching out to the largest providers of recursive DNS you can think of and work your way down. Some of these companies are likely to ignore you: either they think your company is too small for their customers to care about, or they institute cache purging policies of their own to minimize the number of support calls they have to deal with. In the latter case, they will probably shrug and let the problem take care of itself at the scheduled time. Your company did create this problem for itself, after all. Get ISP customers to annoy their ISP for you. If it's been a few days and a large ISP is ignoring the cached record, try to get one of their customers to complain and generate a ticket internal to that company. This is harder for them to ignore, but it will not win you any favors with their ops team as from their perspective you did this to yourself. If this is a repeat occurrence, they will probably start canceling these tickets just to spite you. Advise your partners to bypass the DNS record. If it's a mission critical DNS record consumed by your partners and none of the above options are acceptable (i.e. you are bleeding revenue by the minute), your company has no choice but to work with its partners to bypass the problem. If they do not control their local cache, this is usually this is accomplished by inserting entries into the hosts table of the effected systems as it avoids the need to modify the programs that are using the DNS record. This is only viable if the revenue loss is tied to a select few companies consuming the data. In all other cases you're stuck with the first three options. | {
"source": [
"https://serverfault.com/questions/630043",
"https://serverfault.com",
"https://serverfault.com/users/123361/"
]
} |
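While waiting a bad record out, you can check how long a particular resolver will keep serving it: the TTL column of a cached answer counts down toward expiry. The resolver address and domain below are placeholders.
# Query the same resolver twice; a cached answer shows a decreasing TTL in the
# second column, which is how much longer that resolver will hold the record.
dig @8.8.8.8 example.com A +noall +answer
sleep 30
dig @8.8.8.8 example.com A +noall +answer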
630,157 | In the Nginx configuration, when you want to limit the request processing rate by using the limit_req_zone / limit_req instructions , I don't really understand the use of the nodelay option. In my understanding, it terminates the requests above the defined rate without delaying them. So it seems equivalent to burst=0 .
That is why I don't understand the following example : limit_req zone=one burst=5 nodelay; burst defines the number of requests which could be delayed, so what is the point of defining burst if the nodelay option is present? | I find the limit_req documentation clear enough. burst is documented this way: Excessive requests are delayed until their number exceeds the maximum burst size [...] nodelay is documented this way: If delaying of excessive requests while requests are being limited is not desired, the parameter nodelay should be used Requests are limited to fit the defined rate. If requests come in at a higher rate, no more than the defined number of requests per time unit will be served. You then need to decide what to do with those other requests. By default (no burst , no nodelay ), requests are denied with an HTTP 503 error. With burst , you stack the defined number of requests in a waiting queue, but you do not process them quicker than the defined requests-per-time-unit rate. With burst and nodelay , the queue won't be waiting and request bursts will get processed immediately . | {
"source": [
"https://serverfault.com/questions/630157",
"https://serverfault.com",
"https://serverfault.com/users/112404/"
]
} |
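A quick way to observe the behaviour described in the preceding answer is to fire a short burst of requests at a rate-limited location and watch the status codes: with nodelay the requests beyond the burst are rejected immediately with 503, while without it they are queued and served at the configured rate. The URL is a placeholder for a location covered by limit_req.
# Send 10 rapid requests and print only the HTTP status codes.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost/limited/
done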
630,253 | I'm having some odd issues with my ansible box(vagrant). Everything worked yesterday and my playbook worked fine. Today, ansible hangs on "gathering facts"? Here is the verbose output: <5.xxx.xxx.xxx> ESTABLISH CONNECTION FOR USER: deploy
<5.xxx.xxx.xxx> REMOTE_MODULE setup
<5.xxx.xxx.xxx> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-
o', 'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-s
sh-%h-%p-%r', '-o', 'Port=2221', '-o', 'KbdInteractiveAuthentication=no', '-o',
'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o
', 'PasswordAuthentication=no', '-o', 'User=deploy', '-o', 'ConnectTimeout=10',
'5.xxx.xxx.xxx', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1411372677
.18-251130781588968 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1411372677.18-2
51130781588968 && echo $HOME/.ansible/tmp/ansible-tmp-1411372677.18-251130781588
968'"] | I was having a similar issue with Ansible ping on Vagrant, it just suddenly stuck for no reason and has previously worked absolutely fine. Unlike any other issue like ssh or connective issue, it just forever die with no timeout. One thing I did to resolve this issue is to clean ~/.ansible directory and it just works again. I can't find out why, but it did get resolved. If you got change to have it again try clean the ~/.ansible folder before you refresh your Vagrant. | {
"source": [
"https://serverfault.com/questions/630253",
"https://serverfault.com",
"https://serverfault.com/users/202233/"
]
} |
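Since clearing ~/.ansible cured the hang in the preceding answer, the likely culprit is a stale SSH ControlPersist socket under ~/.ansible/cp (the ControlPath visible in the verbose output above). A minimal recovery sequence, assuming an inventory file named hosts, might be:
# Remove stale control sockets left by earlier runs, then retest connectivity
# with maximum verbosity to see where it stalls, if at all.
rm -rf ~/.ansible/cp
ansible all -i hosts -m ping -vvvv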
630,262 | Is there a way to snoop (on Solaris) SSL headers ( I don't actually need to capture SSL data ) so that I can ensure SSL is not blocked by any firewalls before entering my server? | I was having a similar issue with Ansible ping on Vagrant: it just suddenly got stuck for no reason, having previously worked absolutely fine. Unlike an ssh or connectivity issue, it just hangs forever with no timeout. One thing I did to resolve this was to clean out the ~/.ansible directory, and it just works again. I can't find out why, but it did get resolved. If you get the chance to reproduce it, try cleaning the ~/.ansible folder before you refresh your Vagrant box. | {
"source": [
"https://serverfault.com/questions/630262",
"https://serverfault.com",
"https://serverfault.com/users/208487/"
]
} |
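For the Solaris question above, a capture filtered to the TLS port is usually enough to confirm that handshakes reach the server, and completing a handshake from outside proves no firewall is dropping them. The interface and host names are placeholders.
# On the Solaris box: capture only traffic to or from the HTTPS port.
snoop -d e1000g0 port 443
# From a client outside the firewall: attempt a TLS handshake; a certificate
# dump means the traffic got through.
openssl s_client -connect myserver.example.com:443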
630,631 | NOTE: I have read probably up to 50 different pages describing how to setup public Samba share in the span of 2 YEARS and nothing ever worked for me. I don't know how much RTFM I need to set this stuff. I need/want to setup a completely open public file share on my home server for two workstations. Setup is as follows: Server : Debian Wheezy sudo smbd --version gives me Version 3.6.6 . 2 local partitions which I want to share, formatted in NTFS due to being old and taken from Windows machine. I cannot format them to ext* FS because they have a lot of data I cannot (yet) move anywhere else. machine named "homeserv" for lack of originality. Client : Debian Testing (Jessie) Windows 7 (2 different machines). In fact, my machine is Debian/Windows dualboot, and my wife's machine is Windows only. My smb.conf after distillation looks as follows ( verbatim , nothing else is there): [global]
workgroup = WORKGROUP
security = user
map to guest = Bad User
[disk1]
comment = Disk 1 on 400GB HDD
path = /media/disk1
browsable = yes
guest ok = yes
read only = no
create mask = 0755
[disk2]
comment = Disk 2 on 400GB HDD
path = /media/disk2
browsable = yes
guest ok = yes
read only = no
create mask = 0755 On both client machines, in both Debian and Windows I get the same result: login/password dialog. NO COMBINATION of security = user , map to guest = Bad user , security = share , guest ok = yes and such helped. Windows 7 shows login/password dialog right after I click on the shared machine in network neighborhood. smb://homeserv/ file path in Debian (in any file browser) shows me two folders: disk1 and disk2 , as intended, by trying to open them bring the login/password dialog. So, what I lack in the scheme to NOT HAVE to enter login/password? This is usability question, I will not create a user-based authentication for file junkyard. | OK, I have found an answer myself. As this is absolutely not obvious from the docs and HOWTOs and whatever, the reason this thing asks for password is because it cannot map guest user to the owner of the directory being shared . I have NTFS partitions which I need to mount RW so I used the following setup in my /etc/fstab : /dev/sdb1 /media/disk1 ntfs defaults,noexec,noatime,relatime,utf8,uid=1000,gid=1000 0 2
/dev/sdb2 /media/disk2 ntfs defaults,noexec,noatime,relatime,utf8,uid=1000,gid=1000 0 2 The most important pieces of config are uid and gid (maybe only uid , don't know).
They are set to the UID and GID of the user jonnie set up on the server (obviously not root). So, when ntfs-3g mounts these disks, everything will be owned by him. After that, I added this user to the Samba password database (or maybe created a new identical one, I don't care): # smbpasswd -a jonnie It asked for a password, and I entered the same one as for the main system. After that, I added the force user and force group settings to the smb.conf : [global]
workgroup = WORKGROUP
netbios name = HOMESERV
security = user
map to guest = Bad User
[disk1]
comment = Disk 1 on 400GB HDD
path = /media/disk1
browsable = yes
guest ok = yes
read only = no
create mask = 666
directory mask = 777
force user = jonnie
force group = jonnie
[disk2]
comment = Disk 2 on 400GB HDD
path = /media/disk2
browsable = yes
guest ok = yes
read only = no
create mask = 666
directory mask = 777
force user = jonnie
force group = jonnie So, most important piece of config relevant to me was force user . Courtesy of the Samba HOWTO | {
"source": [
"https://serverfault.com/questions/630631",
"https://serverfault.com",
"https://serverfault.com/users/122148/"
]
} |
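After changes like the ones in the preceding answer, two quick checks save a lot of guesswork: testparm validates smb.conf and shows the effective settings, and an anonymous smbclient connection confirms that guest access really works. The host and share names follow the example above.
# Validate the configuration and dump the settings Samba will actually use.
testparm -s
# List the shares and open disk1 as a guest (no password).
smbclient -L //homeserv -N
smbclient //homeserv/disk1 -N -c 'ls'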
631,257 | How can I ensure my Bash installation is not vulnerable to the ShellShock bug anymore after the updates? | To check for the CVE-2014-6271 vulnerability env x='() { :;}; echo vulnerable' bash -c "echo this is a test" it should NOT echo back the word vulnerable. To check for the CVE-2014-7169 vulnerability (warning: if yours fails it will make or overwrite a file called /tmp/echo that you can delete after, and need to delete before testing again ) cd /tmp; env X='() { (a)=>\' bash -c "echo date"; cat echo it should say the word date then complain with a message like cat: echo: No such file or directory . If instead it tells you what the current datetime is then your system is vulnerable. To check for CVE-2014-7186 bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack" it should NOT echo back the text CVE-2014-7186 vulnerable, redir_stack . To check for CVE-2014-7187 (for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno" it should NOT echo back the text CVE-2014-7187 vulnerable, word_lineno . To check for CVE-2014-6277. I'm not 100% sure on this one as it seems to rely on a partially patched system that I no longer have access to. env HTTP_COOKIE="() { x() { _; }; x() { _; } <<`perl -e '{print "A"x1000}'`; }" bash -c "echo testing CVE-2014-6277" A pass result on this one is it ONLY echoing back the text testing CVE-2014-6277 . If it runs perl or if it complains that perl is not installed that is definitely a fail. I'm not sure on any other failure characteristics as I no longer have any unpatched systems. To check for CVE-2014-6278. Again, I'm not 100% sure on if this test as I no longer have any unpatched systems. env HTTP_COOKIE='() { _; } >_[$($())] { echo hi mom; id; }' bash -c "echo testing CVE-2014-6278" A pass for this test is that it should ONLY echo back the text testing CVE-2014-6278 . If yours echoes back hi mom anywhere that is definitely a fail. | {
"source": [
"https://serverfault.com/questions/631257",
"https://serverfault.com",
"https://serverfault.com/users/45673/"
]
} |
631,381 | I can not update Bash on a Debian 6.0 (Squeeze) server to get rid of the discovered vulnerability: bash --version
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)
apt-get update
apt-get install bash
Reading package lists... Done
Building dependency tree
Reading state information... Done
bash is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 7 not upgraded. Can I use Squeeze-LTS for this server just to update Bash? After one week I will be on another server, so I will not make any other updates. uname -m
x86_64
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 6.0.5 (squeeze)
Release: 6.0.5
Codename: squeeze | You must use the squeeze-lts repository in order to continue receiving updates to Debian Squeeze To add this repository, edit /etc/apt/sources.list and add the line deb http://ftp.us.debian.org/debian squeeze-lts main non-free contrib (you can remove non-free and contrib if desired) Note that as of this instant, squeeze-lts only has the updated bash for the original CVE-2014-6271 but has not yet updated to fix the new CVE-2014-7169 . To update only bash, after running apt-get update use apt-get install bash to install just bash, instead of a complete upgrade. | {
"source": [
"https://serverfault.com/questions/631381",
"https://serverfault.com",
"https://serverfault.com/users/242607/"
]
} |
632,122 | I'll start by admitting I'm pretty new to Docker and I may be approaching this problem from the wrong set of assumptions... let me know if that's the case. I've seen lots of discussion of how Docker is useful for deployment but no examples of how that's actually done. Here's the way I thought it would work: create the data container to hold some persistent data on machine A create the application container which uses volumes from the data container do some work, potentially changing the data in the data container stop the application container commit & tag the data container push the data container to a (private) repository pull & run the image from step 6 on machine B pick up where you left off on machine B The key step here is step 5, which I thought would save the current state (including the contents of the file system). You could then push that state to a repository & pull it from somewhere else, giving you a new container that is essentially identical to the original. But it doesn't seem to work that way. What I find is that either step 5 doesn't do what I think it does or step 7 (pulling & running the image) "resets" the container to it's initial state. I've put together a set of three Docker images and containers to test this: a data container, a writer which writes a random string into a file in the data container every 30 s, and a reader which simply echo es the value in the data container file and exits. Data container Created with docker run \
--name datatest_data \
-v /datafolder \
myrepository:5000/datatest-data:latest Dockerfile: FROM ubuntu:trusty
# make the data folder
#
RUN mkdir /datafolder
# write something to the data file
#
RUN echo "no data here!" > /datafolder/data.txt
# expose the data folder
#
VOLUME /datafolder Writer Created with docker run \
--rm \
--name datatest_write \
--volumes-from datatest_data \
myrepository:5000/datatest-write:latest Dockerfile: FROM ubuntu:trusty
# Add script
#
ADD run.sh /usr/local/sbin/run.sh
RUN chmod 755 /usr/local/sbin/*.sh
CMD ["/usr/local/sbin/run.sh"] run.sh #!/bin/bash
while :
do
sleep 30s
NEW_STRING=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
echo "$NEW_STRING" >> /datafolder/data.txt
date >> /datafolder/data.txt
echo "wrote '$NEW_STRING' to file"
done This script writes a random string and the date/time to /datafolder/data.txt in the data container. Reader Created with docker run \
--rm \
--name datatest_read \
--volumes-from datatest_data \
myrepository:5000/datatest-read:latest Dockerfile: FROM ubuntu:trusty
# Add scripts
ADD run.sh /run.sh
RUN chmod 0777 /run.sh
CMD ["/run.sh"] run.sh: #!/bin/bash
echo "reading..."
echo "-----"
cat /datafolder/data.txt
echo "-----" When I build & run these containers, they run fine and work the way I expect: Stop & Start on the development machine: create the data container run the writer run the reader immediately, see the "no data here!" message wait a while run the reader, see the random string stop the writer restart the writer run the reader, see the same random string But committing & pushing do not do what I expect: create the data container run the writer run the reader immediately, see the "no data here!" message wait a while run the reader, see the random string stop the writer commit & tag the data container with docker commit datatest_data myrepository:5000/datatest-data:latest push to the repository delete all the containers & recreate them At this point, I would expect to run the reader & see the same random string, since the data container has been committed, pushed to the repository, and then recreated from the same image in the repository. However, what I actually see is the "no data here!" message. Can someone explain where I'm going wrong here? Or, alternatively, point me to an example of how deployment is done with Docker? | You got an assumption wrong about how volumes work in docker. I'll try to explain how volumes relates to docker containers and docker images and hopefully differences between data volumes and data volume containers will become clear. First let's recall a few definitions Docker images Docker images are essentially a union filesystem + metadata. You can inspect the content of docker image union filesystem with the docker export command, and you can inspect a docker image metadata with the docker inspect command. Data volumes from the Docker user guide : A data volume is a specially-designated directory within one or more containers that bypasses the Union File System to provide several useful features for persistent or shared data. It is important to note here that a given volume (as the directory or file that contains data) is reusable only if it exists at least one docker container using it. Docker images don't have volumes, they only have metadata which eventually tells where volumes would be mounted on the union filesystem. Data volumes aren't either part of docker containers union filesystem, so where are they? under /var/lib/docker/volumes on the docker host (while containers are stored under /var/lib/docker/containers ). Data volume containers That special type of container has nothing special. They are just stopped containers using a data volume with the sole and unique goal of having at least one container using that data volume. Remember, as soon as the last container (running or stopped) using a given data volume is deleted, that volume will become unreachable through the docker run --volumes-from option. Working with data volume containers How to create a data volume container The image used to create a data volume container has no importance as such a container can remain stopped and still fill its purpose. So to create a data container named datatest_data for a volume in /datafolder you only need to run: docker run --name datatest_data --volume /datafolder busybox true Here base is the image name (a conveniently small one) and true is a command we provide just to avoid seeing the docker daemon complain about a missing command. Anyway after you have a stopped container named datatest_data with the sole purpose of allowing you to reach that volume with the --volumes-from option of the docker run command. 
How to read from a data volume container I know two ways of reading a data volume: the first one is through a container. If you cannot have a shell into an existing container to access that data volume, you can run a new container with the --volumes-from option for the sole purpose of reading that data. For instance: docker run --rm --volumes-from datatest_data busybox cat /datafolder/data.txt The other way is to copy the volume from the /var/lib/docker/volumes folder. You can discover the name of the volume in that folder by inspecting the metadata of one of the container using the volume. See this answer for details. Working with volumes (since Docker 1.9.0) How to create a volume (since Docker 1.9.0) Docker 1.9.0 introduced a new command docker volume which allows to create volumes : docker volume create --name hello How to read from a volume (since Docker 1.9.0) Let say you created a volume named hello with docker volume create --name hello , you can mount it in a container with the -v option : docker run -v hello:/data busybox ls /data About committing & pushing containers It should now be clear that since data volumes aren't part of a container (the union filesystem), committing a container to produce a new docker image won't persist any data that would be in a data volume. Making backups of data volumes The docker user guide has a nice article about making backups of data volumes . Good article reagarding volumes: http://container42.com/2014/11/03/docker-indepth-volumes/ | {
"source": [
"https://serverfault.com/questions/632122",
"https://serverfault.com",
"https://serverfault.com/users/245703/"
]
} |
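To go with the backup pointer at the end of the preceding answer, the usual pattern is a throwaway container that mounts the data container's volume plus a host directory and tars one into the other. The names and paths follow the datatest example; treat this as a sketch rather than the only way to do it.
# Back up the /datafolder volume of datatest_data into ./datafolder.tar on the host.
docker run --rm --volumes-from datatest_data -v "$(pwd)":/backup busybox \
    tar cvf /backup/datafolder.tar /datafolder
# Restore it later into another container that uses the same volume.
docker run --rm --volumes-from datatest_data -v "$(pwd)":/backup busybox \
    sh -c 'cd / && tar xvf /backup/datafolder.tar'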
632,905 | I tried mounting an existing EBS Storage (which has data) to an instance, but it keeps throwing this error. mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so. The storage details are: ec2-user@ip ~]$ sudo parted -l
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
128 1049kB 2097kB 1049kB BIOS Boot Partition bios_grub
1 2097kB 8590MB 8588MB ext4 Linux
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 16.1GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
128 1049kB 2097kB 1049kB BIOS Boot Partition bios_grub
1 2097kB 16.1GB 16.1GB ext4 Linux dmesg | tail shows the following details [ec2-user@ip- ~]$ dmesg | tail
[ 2.593163] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[ 2.625565] evbug: Connected device: input0 (AT Translated Set 2 keyboard at isa0060/serio0/input0)
[ 2.625568] evbug: Connected device: input2 (Power Button at LNXPWRBN/button/input0)
[ 2.625570] evbug: Connected device: input3 (Sleep Button at LNXSLPBN/button/input0)
[ 3.657958] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[ 3.664979] evbug: Connected device: input4 (ImExPS/2 Generic Explorer Mouse at isa0060/serio1/input0)
[ 5.731219] EXT4-fs (xvda1): re-mounted. Opts: (null)
[ 5.938276] NET: Registered protocol family 10
[ 11.720921] audit: type=1305 audit(1412199137.191:2): audit_pid=2080 old=0 auid=4294967295 ses=4294967295 res=1
[ 101.024164] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ec2-user@ip- ~]$ | Looks like you have partitioned that block device. In this case, you need to mount /dev/xvdf1 , not just /dev/xvdf . | {
"source": [
"https://serverfault.com/questions/632905",
"https://serverfault.com",
"https://serverfault.com/users/246176/"
]
} |
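Following on from the answer above, the partition can be confirmed and mounted like this; the /data mount point is arbitrary.
# Show the disk and its partitions; xvdf1 should appear nested under xvdf.
lsblk
# Create a mount point and mount the partition, not the whole device.
sudo mkdir -p /data
sudo mount /dev/xvdf1 /data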
632,908 | From the Wireshark webpage : The Target MAC block contains 16 duplications of the IEEE address of the target, with no breaks or interruptions. Is there any specific reason for the 16 duplications? | In my opinion, the value has to be exactly 16. Magic Packet Technology ( whitepaper , publication #20213) was developed between AMD and Hewlett Packard circa 1995. From page 2: "Since an Ethernet controller already has built-in address matching
circuitry..." they propose reusing it, adding a counter "to count
up the 16 duplications of the IEEE address". They reason that WOL should be trivial to add, while leaving the actual implementation wide open. This doesn't appear to be historically arbitrary ("Oh, 16 looks long enough"), because: Build on what you have / what you know. By example, let's assume we like powers of 2 and therefore hex digits. Conveniently, a hex digit (4 bits) holds positive values from 0-15. Our processor checks all math and sets an overflow "flag" if we try to add 1 to an already "max" value (like 15). Because that's pretty common, we may even have a special instruction for overflow conditions, so in pseudocode: Initialize a single counter that holds values from 0-15.
Set it to 0.
Watch the network. When I see the signal:
Loop:
Do I see my address at the right spot?
Yes: Add 1 to counter.
Did I just overflow? (15+1 = 0?)
Yes: Jump out of loop to "wake up" code.
...otherwise
Loop again. Chip signal lines. AMD's reference to "circuitry" leads to the deep end, so all you really need to know is that we can imagine a simple case where a "bit set to 1" corresponds to a "high" voltage somewhere in a chip, visible at a "pin". Arduinos are a good example: set a memory bit to 1, and Arduino sets an output pin "high". This voltage change is often demoed by driving LEDs, but through the magic of transistors it can automatically activate, interrupt or "wake up" other circuits or chips. Let's assume a more natural hex representation (two hex digits, like FF, often seen in IP, masks and MAC addresses) and tie "output pin 5" of our Arduino to "bit position 5" in our counter: Memory Value Event
0000 0000 00 Nothing, so keep adding 1...
0000 1111 0F Nothing, but add 1...
0001 0000 10 Arduino pin 5 high. New voltage interrupts other circuits. Because the memory location is tied to that pin, it's elegant and all hardware: just keep adding 1, no need to interfere with driver or BIOS developer code. You're just a circuit maker anyway. You'll provide a pin that goes high, to be consumed by other chipmaker's silicon, which is what everyone's doing. In the real world it's a little more complicated (for example, the ENC28J60 spec lays it out in horrifying detail), but that's the gist. After this, human obviousness seems more a side effect than the goal. For computers, 4 copies of your MAC should suffice, but now that counter won't overflow and it's no longer dead simple. So it seems more likely that the goal was to get it implemented by as many silicon, driver, and BIOS designers as possible, and 16 gives everyone a choice between "overflow" AND direct signaling, without re-architecting and retooling. Playing devil's advocate for human detection, what about the next higher number with the same flexibility: 256? That doesn't work: The data segment alone produces a WOL packet that's larger than an Ethernet frame ( at the time ) could be. So to me this means that 16 is the only value the WOL segment can be. | {
"source": [
"https://serverfault.com/questions/632908",
"https://serverfault.com",
"https://serverfault.com/users/246181/"
]
} |
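If you want to see the frame described above (6 bytes of 0xFF followed by 16 copies of the MAC) on the wire, the common helper tools will build and send it for you. The MAC address and interface are placeholders.
# wakeonlan assembles the 102-byte payload and sends it as a UDP broadcast,
# port 9 by default.
wakeonlan 00:11:22:33:44:55
# Watch it arrive on the target's network segment.
tcpdump -i eth0 -X udp port 9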
632,993 | When installing Linux VMs in a virtualized environment (ESXi in my case), are there any compelling reasons to partition the disks (when using ext4) rather than just adding separate disks for each mount point? The only one I can see is that it makes it somewhat easier to see if there's data present on a disk with e.g. fdisk. On the other hand, I can see some good reasons for not using partitions (for other than /boot, obviously). Much easier to extend disks. It's just to increase disk size for the VM (typically in VCenter), then rescan the device in VM, and resize the file system online. No more issues with aligning partitions with underlying LUNs. I have not found much on this topic around. Have I missed something important? | This is an interesting question... I don't think there's a definitive answer, but I can give some historical context on how best-practices surrounding this topic may have changed over time. I've had to support thousands of Linux VMs deployed in various forms across VMware environments since 2007. My approach to deployment has evolved, and I've had the unique ( sometimes unfortunate ) experience of inheriting and refactoring systems built by other engineers. The old days... Back in the day (2007), my early VMware systems were partitioned just like my bare metal systems. On the VMware side, I was using split 2GB thick files to comprise the VM's data, and didn't even think about the notion of multiple VMDKs, because I was just happy that virtualization could even work! Virtual Infrastructure... By ESX 3.5 and the early ESX/ESXi 4.x releases (2009-2011), I was using Linux, partitioned as normal atop monolithic Thick provisioned VMDK files. Having to preallocate storage forced me to think about Linux design in a similar manner as I would with real hardware. I was creating 36GB, 72GB, 146GB VMDK's for the operating system, partitioning the usual /, /boot, /usr, /var, /tmp, then adding another VMDK for the "data" or "growth" partition (whether that be /home, /opt or something application-specific). Again, the sweet-spot in physical hard disk sizes during this era was 146GB, and since preallocation was a requirement (unless using NFS), I needed to be conservative with space. The advent of thin provisioning VMware developed better features around Thin provisioning in later ESXi 4.x releases, and this changed how I began to install new systems. With the full feature set being added in 5.0/5.1, a new type of flexibility allowed more creative designs. Mind you, this was keeping pace with increased capabilities on virtual machines, in terms of how many vCPUS and how much RAM could be committed to individual VMs. More types of servers and applications could be virtualized than in the past. This is right as computing environments were starting to go completely virtual. LVM is awful... By the time full hot-add functionality at the VM level was in place and common (2011-2012), I was working with a firm that strove to maintain uptime for their clients' VMs at any cost ( stupid ). So this included online VMware CPU/RAM increases and risky LVM disk resizing on existing VMDKs. Most Linux systems in this environment were single VMDK setups with ext3 partitions on top of LVM. This was terrible because the LVM layer added complexity and unnecessary risk to operations. Running out of space in /usr, for instance, could result in a chain of bad decisions that eventually meant restoring a system from backups... This was partially process and culture-related, but still... 
Partition snobbery... I took this opportunity to try to change this. I'm a bit of a partition-snob in Linux and feel that filesystems should be separated for monitoring and operational needs. I also dislike LVM, especially with VMware and the ability to do what you're asking about. So I expanded the addition of VMDK files to partitions that could potentially grow. /opt, /var, /home could get their own virtual machine files if needed. And those would be raw disks. Sometimes this was an easier method to expand particular undersized partition on the fly. Obamacare... With the onboarding of a very high-profile client , I was tasked with the design of the Linux VM reference template that would be used to create their extremely visible application environment. The security requirements of the application required a unique set of mounts , so worked with the developers to try to cram the non-growth partitions onto one VMDK, and then add separate VMDKs for each mount that had growth potential or had specific requirements (encryption, auditing, etc.) So, in the end, these VMs were comprised of 5 or more VMDKs, but provided the best flexibility for future resizing and protection of data. What I do today... Today, my general design for Linux and traditional filesystems is OS on one thin VMDK (partitioned), and discrete VMDKs for anything else. I'll hot-add as necessary. For advanced filesystems like ZFS, it's one VMDK for the OS, and another VMDK that serves as a ZFS zpool and can be resized, carved into additional ZFS filesystems, etc. | {
"source": [
"https://serverfault.com/questions/632993",
"https://serverfault.com",
"https://serverfault.com/users/246233/"
]
} |
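The hot-add workflow mentioned in the preceding answer usually needs a nudge inside the guest before a new or grown virtual disk becomes visible. The device names below are examples, and the resize2fs call assumes an ext4 filesystem written directly onto the disk with no partition table.
# Make the guest notice a newly hot-added virtual disk.
echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan
# After growing an existing virtual disk in vCenter, rescan it and grow the filesystem.
echo 1 | sudo tee /sys/class/block/sdb/device/rescan
sudo resize2fs /dev/sdb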
633,067 | What is a good way to automatically start docker containers when the system boots up? Is there a preferred way to do this on Ubuntu 14.04? I've used supervisord in the past to auto start web apps. But that doesn't feel like the right thing for Docker. | Apparently, the current method to auto-start Docker containers ( from Docker 1.2 ) is to use restart policies . This will control how Docker should handle starting of the container upon startup and re-starting of the container when it exits. I've used the 'always' option so far, and can confirm that it makes Docker auto-start the container at system boot: sudo docker run --restart=always -d myimage Documentation Excerpt Restart Policies Using the --restart flag on Docker run you can
specify a restart policy for how a container should or should not be
restarted on exit. no - Do not restart the container when it exits. on-failure - Restart the container only if it exits with a non zero
exit status. always - Always restart the container regardless of the exit status. You can also specify the maximum amount of times Docker will try to
restart the container when using the on-failure policy. The default is
that Docker will try forever to restart the container. $ sudo docker run --restart=always redis This will run the redis
container with a restart policy of always so that if the container
exits, Docker will restart it. $ sudo docker run --restart=on-failure:10 redis This will run the
redis container with a restart policy of on-failure and a maximum
restart count of 10. If the redis container exits with a non-zero exit
status more than 10 times in a row Docker will abort trying to restart
the container. Providing a maximum restart limit is only valid for the
on-failure policy. | {
"source": [
"https://serverfault.com/questions/633067",
"https://serverfault.com",
"https://serverfault.com/users/65584/"
]
} |
633,087 | A lot of people are stating that the ifconfig command is deprecated in favor of the ip one (on Linux at least). This is often used as an argument to switch from ifconfig to ip (see some comments and answers of Should I quit using Ifconfig? ). Where can we find a statement about that (i.e. where is it stated that ifconfig won't be supported in the future)? | The official statement regarding the plans to obsolete net-tools was made on the debian-devel mailing list in early 2009 by one of the net-tools maintainers. True to their statement, net-tools has been hardly maintained at all since that time. Luk Claes and me, as the current maintainers of net-tools, we've been
thinking about it's future. Net-tools has been a core part of Debian and
any other linux based distro for many years, but it's showing its age. It doesnt support many of the modern features of the linux kernel, the
interface is far from optimal and difficult to use in automatisation,
and also, it hasn't got much love in the last years. On the other side, the iproute suite, introduced around the 2.2 kernel
line, has both a much better and consistent interface, is more powerful,
and is almost ten years old, so nobody would say it's untested. Hence, our plans are to replace net-tools completely with iproute, maybe
leading the route for other distributions to follow. Of course, most
people and tools use and remember the venerable old interface, so the
first step would be to write wrappers, trying to be compatible with
net-tools. At the same time, we believe that most packages using net-tools should
be patched to use iproute instead, while others can continue using the
wrappers for some time. The ifupdown package is obviously the first
candidate, but it seems that a version using iproute has been available
in experimental since 2007. The idea to write wrappers was eventually abandoned as unworkable, and nearly all Linux distributions have switched to iproute2 since then. | {
"source": [
"https://serverfault.com/questions/633087",
"https://serverfault.com",
"https://serverfault.com/users/246294/"
]
} |
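For anyone making the switch discussed above, the day-to-day ifconfig and route invocations map onto iproute2 roughly as follows; the interface name and address are examples.
# ifconfig -a                                      ->  show interfaces and addresses
ip addr show
# ifconfig eth0 up                                 ->  bring an interface up
ip link set eth0 up
# ifconfig eth0 192.0.2.10 netmask 255.255.255.0   ->  assign an address
ip addr add 192.0.2.10/24 dev eth0
# route -n                                         ->  show the routing table
ip route show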
633,148 | So when I run this in Fedora I'm seeing this: $ ls hmm_data/indivA12_AATAAG/refs/par1/
2R-orths.alleles 2R-ref.alleles
$ ls hmm_data/indivA12_AATAAG/refs/par1/ | grep -F '-ref.alleles'
2R-ref.alleles But when I run on Ubuntu (same data) I don't get any results from the grep: $ ls hmm_data/indivA12_AATAAG/refs/par1/
2R-orths.alleles 2R-ref.alleles
$ ls hmm_data/indivA12_AATAAG/refs/par1/ | grep -F '-ref.alleles' Any ideas what could be going on? How can I come up with something that will work the same on both systems? | grep -F '-ref.alleles' is equivalent to: grep -F -ref.alleles (none of the characters between the apostrophes are shell metacharacters, so quoting them has no effect.) This is in turn equivalent to: grep -F -r -e f.alleles by normal parsing of - prefixed options. The -e option takes an argument, but -F and -r don't. Since you didn't specify any files to grep, the default behavior is to act on stdin... except that the -r option makes no sense so it defaults to searching . (the current directory) recursively instead and ignores stdin. In some versions. You need to use the -- "no more options" indicator before a regexp that starts with - as in grep -F -- -ref.alleles I tracked down the point where the behavior of -r with no file arguments changed. It was in version 2.11, released March 2, 2012. See the release announcement. The git commits which affected the behavior are this one and this one . If you run grep --version on your two machines, I'm sure you'll find that one of them is on the wrong side of 2.11 | {
"source": [
"https://serverfault.com/questions/633148",
"https://serverfault.com",
"https://serverfault.com/users/33492/"
]
} |
633,161 | One of our DBAs created the following crontab entry to run a backup every 3 hours starting 6:30 AM to midnight every day: 30 6-24/3 * * * (path to backup script) Cron took the entry but did not run the backup as expected. I am not a sysadmin or a Linux expert. My analysis is that the entry should have been (since hour never equals 24): 30 6-23/3 * * * (path to backup script) Which entry is correct? If the first entry is wrong, why did crontab allow the DBA to create the entry? It should have thrown an error. If I wanted to create an entry to run backup every 3 hours starting 6:30 AM to 11:30 PM, how would I create a crontab entry? There is no example which shows a time range which ends on half hour. Edit : the system is Oracle Enterprise Linux 5. The script only ran once before 2300. I will do some more testing and post back. I am still confused how cron accepted illegal value for hour (24). This is weird. If I create an entry like: 30 6-24/3 * * * /bin/ls I cannot save crontab. I get an error that hour is bad. If I create an entry like: 30 6-24/4 * * * /bin/ls I can save the entry. It does not make sense. The hour is still bad and is accepted. Is this a bug or an expected behavior? MadHatter: Please feel free to change the title of the post and file a bug. As I said, I am not a Linux expert, just someone that co-workers discuss technical issues with. I really like this site. The contributors are knowledgeable and willing to help. Thanks,
Arun | grep -F '-ref.alleles' is equivalent to: grep -F -ref.alleles (none of the characters between the apostrophes are shell metacharacters, so quoting them has no effect.) This is in turn equivalent to: grep -F -r -e f.alleles by normal parsing of - prefixed options. The -e option takes an argument, but -F and -r don't. Since you didn't specify any files to grep, the default behavior is to act on stdin... except that the -r option makes no sense so it defaults to searching . (the current directory) recursively instead and ignores stdin. In some versions. You need to use the -- "no more options" indicator before a regexp that starts with - as in grep -F -- -ref.alleles I tracked down the point where the behavior of -r with no file arguments changed. It was in version 2.11, released March 2, 2012. See the release announcement. The git commits which affected the behavior are this one and this one . If you run grep --version on your two machines, I'm sure you'll find that one of them is on the wrong side of 2.11 | {
"source": [
"https://serverfault.com/questions/633161",
"https://serverfault.com",
"https://serverfault.com/users/246339/"
]
} |
633,264 | Today is Friday, October 3, 2014, 3:58 AM. I want to schedule a cron job to run at the following dates: Saturday, October 4, 2014 8:00 AM Saturday, October 18, 2014 8:00 AM Saturday, November 1, 2014 8:00 AM
...
... So every 2 weeks , on Saturday, at 8 o'clock. | 0 8 * * 6 test $((10#$(date +\%W)\%2)) -eq 1 && yourCommand date +%W : week number of the year with Monday as the first day of the week; today is week 39 10#$(date +%W) : convert the date +%W output to a decimal number and avoid shell base-parsing confusion $((39%2)) : modulo operation: result is 0 (even week number) or 1 (odd week number); this week the result is 1, next week 0 test 1 -eq 1 : arithmetic test (equal), in this case the result is boolean true && yourCommand : Boolean AND: run yourCommand only if the result of the previous command was boolean true Note that the year can get two odd weeks: 53 (this year) and 1 (next year) | {
"source": [
"https://serverfault.com/questions/633264",
"https://serverfault.com",
"https://serverfault.com/users/68124/"
]
} |
633,394 | I installed the LDAP development headers: apt-get install libldb-dev This added a few ldap headers: root@crunchbang:/usr/include# ls -la ldap*
-rw-r--r-- 1 root root 9466 Apr 23 2013 ldap_cdefs.h
-rw-r--r-- 1 root root 1814 Apr 23 2013 ldap_features.h
-rw-r--r-- 1 root root 65213 Apr 23 2013 ldap.h
-rw-r--r-- 1 root root 9450 Apr 23 2013 ldap_schema.h
-rw-r--r-- 1 root root 3468 Apr 23 2013 ldap_utf8.h When I configure and reference the directory: ./configure --with-ldap=/usr/include I get this error: ...
checking for LDAP support... yes
checking for LDAP Cyrus SASL support... no
checking size of long int... 4
configure: error: Cannot find ldap libraries in /usr/include. | I ran into this issue trying to get PHP extensions involved in a Docker container. Here is what I had to do: apt-get install libldb-dev libldap2-dev ln -s /usr/lib/x86_64-linux-gnu/libldap.so /usr/lib/libldap.so \
&& ln -s /usr/lib/x86_64-linux-gnu/liblber.so /usr/lib/liblber.so | {
"source": [
"https://serverfault.com/questions/633394",
"https://serverfault.com",
"https://serverfault.com/users/42819/"
]
} |
633,421 | We have a server with 4 physical processors (individual chips) installed and have installed and activated Win2012 Standard edition on it, on bare metal, i.e. no virtualization.
When I look at the task manager, it only shows two physical processors in the 'Cores' field (# of cores/processor X # of procs seen). I do understand that ONE Win 2012 Standard license only supports up to 2 physical processors as mentioned here http://download.microsoft.com/download/4/D/B/4DB352D1-C610-466A-9AAF-EEF4F4CFFF27/WS2012_Licensing-Pricing_FAQ.pdf What I want to know is: how can I make the same installation (same instance of Win 2012 Std) see, recognize, and utilize all 4 processors? Is it even possible? Does it mean adding another license key to the installation? If so, how can I do that? | I ran into this issue trying to get PHP extensions involved in a Docker container. Here is what I had to do: apt-get install libldb-dev libldap2-dev ln -s /usr/lib/x86_64-linux-gnu/libldap.so /usr/lib/libldap.so \
&& ln -s /usr/lib/x86_64-linux-gnu/liblber.so /usr/lib/liblber.so | {
"source": [
"https://serverfault.com/questions/633421",
"https://serverfault.com",
"https://serverfault.com/users/246514/"
]
} |
634,197 | I have an NAS server with 4x 2TB WD RE4-GP drives in a RAID10 configuration (4TB usable). I'm running out of space (<1TB usable space left). I have $0 to spend on bigger/more drives/enclosures. I like what I've read about the data-integrity features of ZFS, which - on their own - are enough for me to switch from my existing XFS (software) RAID10. Then I read about ZFS's superior implementation of RAID5, so I thought I might even get up to 2TB more usable space in the bargain using RAIDZ-1. However, I keep reading more and more posts saying pretty much to just never use RAIDZ-1. Only RAIDZ-2+ is reliable enough to handle "real world" drive failures. Of course, in my case, RAIDZ-2 doesn't make any sense. It'd be much better to use two mirrored vdevs in a single pool (RAID10). Am I crazy wanting to use RAIDZ-1 for 4x 2TB drives? Should I just use a pool of two mirrored vdevs (essentially RAID10) and hope the compression gives me enough extra space? Either way, I plan on using compression. I only have 8GB of RAM (maxed), so dedup isn't an option. This will be on a FreeNAS server (about to replace the current Ubuntu OS) to avoid the stability issues of ZFS-on-Linux. | Before we go into specifics, consider your use case. Are you storing photos, MP3's and DVD rips? If so, you might not care whether you permanently lose a single block from the array. On the other hand, if it's important data, this might be a disaster. The statement that RAIDZ-1 is "not good enough for real world failures" is because you are likely to have a latent media error on one of your surviving disks when reconstruction time comes. The same logic applies to RAID5. ZFS mitigates this failure to some extent. If a RAID5 device can't be reconstructed, you are pretty much out of luck; copy your (remaining) data off and rebuild from scratch. With ZFS, on the other hand, it will reconstruct all but the bad chunk, and let the administrator "clear" the errors. You'll lose a file/portion of a file, but you won't lose the entire array. And, of course, ZFS's parity checking means that you will be reliably informed that there's an error. Otherwise, I believe it's possible (although unlikely) that multiple errors will result in a rebuild apparently succeeding, but giving you back bad data. Since ZFS is a " Rampant Layering Violation ," it also knows which areas don't have data on them, and can skip them in the rebuild. So if your array is half empty you're half as likely to have a rebuild error. You can reduce the likelihood of these kinds of rebuild errors on any RAID level by doing regular "zpool scrubs" or "mdadm checks" of your array. There are similar commands/processes for other RAID's; e.g., LSI/dell PERC raid cards call this "patrol read." These go read everything, which may help the disk drives find failing sectors, and reassign them, before they become permanent. If they are permanent, the RAID system (ZFS/md/raid card/whatever) can rebuild the data from parity. Even if you use RAIDZ2 or RAID6, regular scrubs are important. One final note - RAID of any sort is not a substitute for backups - it won't protect you against accidental deletion, ransomware, etc. Although regular ZFS snapshots can be part of a backup strategy. | {
"source": [
"https://serverfault.com/questions/634197",
"https://serverfault.com",
"https://serverfault.com/users/7559/"
]
} |
634,216 | I need to create an MSI package for silent distribution across my network, and I need to convert
Google Chrome and other software to MSI packages that are compatible with SCCM or any other
third party tools? Anyone have experience or can recommend a good MSI packager? Mike W. | Before we go into specifics, consider your use case. Are you storing photos, MP3's and DVD rips? If so, you might not care whether you permanently lose a single block from the array. On the other hand, if it's important data, this might be a disaster. The statement that RAIDZ-1 is "not good enough for real world failures" is because you are likely to have a latent media error on one of your surviving disks when reconstruction time comes. The same logic applies to RAID5. ZFS mitigates this failure to some extent. If a RAID5 device can't be reconstructed, you are pretty much out of luck; copy your (remaining) data off and rebuild from scratch. With ZFS, on the other hand, it will reconstruct all but the bad chunk, and let the administrator "clear" the errors. You'll lose a file/portion of a file, but you won't lose the entire array. And, of course, ZFS's parity checking means that you will be reliably informed that there's an error. Otherwise, I believe it's possible (although unlikely) that multiple errors will result in a rebuild apparently succeeding, but giving you back bad data. Since ZFS is a " Rampant Layering Violation ," it also knows which areas don't have data on them, and can skip them in the rebuild. So if your array is half empty you're half as likely to have a rebuild error. You can reduce the likelihood of these kinds of rebuild errors on any RAID level by doing regular "zpool scrubs" or "mdadm checks" of your array. There are similar commands/processes for other RAID's; e.g., LSI/dell PERC raid cards call this "patrol read." These go read everything, which may help the disk drives find failing sectors, and reassign them, before they become permanent. If they are permanent, the RAID system (ZFS/md/raid card/whatever) can rebuild the data from parity. Even if you use RAIDZ2 or RAID6, regular scrubs are important. One final note - RAID of any sort is not a substitute for backups - it won't protect you against accidental deletion, ransomware, etc. Although regular ZFS snapshots can be part of a backup strategy. | {
"source": [
"https://serverfault.com/questions/634216",
"https://serverfault.com",
"https://serverfault.com/users/246980/"
]
} |
634,793 | This is a Canonical Question about securing public DNS resolvers Open DNS servers seem pretty neat and convenient, as they provide IP addresses that we can use consistently across our company regardless of where they are located. Google and OpenDNS provide this functionality, but I'm not sure that I want these companies to have access to our DNS queries. I want to set up something like this for use by our company, but I hear a lot about this being dangerous practice (particularly in regards to amplification attacks ) and I want to make sure that we do this right. What things do I need to keep in mind when building this type of environment? | There are a few things you need to understand going into this: This is a network engineering problem. Most of the people who are looking to set up this type of environment are system administrators. That's cool, I'm a system administrator too! Part of the job is understanding where your responsibilities end and someone else's begins, and believe me, this is not a problem system administrators can solve on their own. Here's why: UDP is a stateless protocol. There is no client handshake. Queries against a DNS server are an unauthenticated two-step transaction (query, reply). There is no way for the server to know whether the source IP is spoofed before it replies. By the time a query has reached the server, it is already too late to prevent a spoofed UDP packet. Spoofing can only be prevented by a practice known as ingress filtering , a topic which is covered by documents BCP 38 and BCP 84 . These are implemented by the networking devices sitting in front of your DNS server. We can't give you a walkthrough on how to set up your datacenter from end to end, or how to implement these best practices. These things are very specific to your own needs. Q&A format just doesn't work for this, and this site is not intended to be a substitute for hiring professional people to do professional work. Do not assume that your billion dollar too-big-to-fail company implements ingress filtering correctly. This is not a best practice. The best practice is not to do this. It's very easy to set up an internet facing DNS resolver. It takes far less research to set one up than to understand the risks involved in doing so. This is one of those cases where good intentions inadvertently enable the misdeeds (and suffering) of others. If your DNS server will respond to any source IP address it sees, you're running an open resolver. These are constantly being leveraged in amplification attacks against innocent parties. New system administrators are standing up open resolvers every day , making it lucrative for malicious individuals to scan for them constantly. There isn't really a question whether or not your open resolver is going to be used in an attack: as of 2015, it's pretty much a given. It may not be immediate, but it's going to happen for sure. Even if you apply an ACL using your DNS software (i.e. BIND), all this does is limit which spoofed DNS packets your server will reply to. It's important to understand that your DNS infrastructure can be used not only to attack the devices in the ACL, but any networking devices between your DNS server and the devices it will respond for. If you don't own the datacenter, that's a problem for more than just you. Google and OpenDNS do this, so why can't I? Sometimes it's necessary to weigh enthusiasm against reality. 
Here are some hard questions to ask yourself: Is this something you want to set up on a whim, or is this something you have a few million dollars to invest in doing it right? Do you have a dedicated security team? Dedicated abuse team? Do both of them have the cycles to deal with abuse of your new infrastructure, and complaints that you'll get from external parties? Do you have a legal team? When all of this is said and done, will all of this effort even remotely begin to pay for itself, turn a profit for the company, or exceed the monetary value of dealing with the inconvenience that led you in this direction? In closing, I know this thread is Q&A is kind of a letdown for most of you who are being linked to it. Serverfault is here for providing answers, and an answer of "this is a bad idea, don't do it" isn't usually perceived as very helpful. Some problems are much more complicated than they appear to be at the outset, and this is one of them. If you want to try to make this work, you can still ask us for help as you try to implement this kind of solution. The main thing to realize is that the problem is too big by itself for the answer to be provided in convenient Q&A format. You need to have invested a significant amount of time researching the topic already, and approach us with specific logic problems that you've encountered during your implementation. The purpose of this Q&A is to give you a better understanding of the larger picture, and help you understand why we can't answer a question as broad as this one. Help us keep the internet safe! :) | {
"source": [
"https://serverfault.com/questions/634793",
"https://serverfault.com",
"https://serverfault.com/users/152073/"
]
} |
634,800 | The following shell script works only for the first server and does not loop to the next.
I tried 0< before the ssh command but it still does not return to the shell script once connected. #!/bin/sh
while read IP
do
ssh [email protected] " ssh root@$IP 'ls -lht /log/cdr-csv/ ' " > /tmp/$IP.txt
done << here_doc
18.17.6.19
18.17.10.24
here_doc How do I run the same command on the second server 18.17.10.24 ? | There are a few things you need to understand going into this: This is a network engineering problem. Most of the people who are looking to set up this type of environment are system administrators. That's cool, I'm a system administrator too! Part of the job is understanding where your responsibilities end and someone else's begins, and believe me, this is not a problem system administrators can solve on their own. Here's why: UDP is a stateless protocol. There is no client handshake. Queries against a DNS server are an unauthenticated two-step transaction (query, reply). There is no way for the server to know whether the source IP is spoofed before it replies. By the time a query has reached the server, it is already too late to prevent a spoofed UDP packet. Spoofing can only be prevented by a practice known as ingress filtering , a topic which is covered by documents BCP 38 and BCP 84 . These are implemented by the networking devices sitting in front of your DNS server. We can't give you a walkthrough on how to set up your datacenter from end to end, or how to implement these best practices. These things are very specific to your own needs. Q&A format just doesn't work for this, and this site is not intended to be a substitute for hiring professional people to do professional work. Do not assume that your billion dollar too-big-to-fail company implements ingress filtering correctly. This is not a best practice. The best practice is not to do this. It's very easy to set up an internet facing DNS resolver. It takes far less research to set one up than to understand the risks involved in doing so. This is one of those cases where good intentions inadvertently enable the misdeeds (and suffering) of others. If your DNS server will respond to any source IP address it sees, you're running an open resolver. These are constantly being leveraged in amplification attacks against innocent parties. New system administrators are standing up open resolvers every day , making it lucrative for malicious individuals to scan for them constantly. There isn't really a question whether or not your open resolver is going to be used in an attack: as of 2015, it's pretty much a given. It may not be immediate, but it's going to happen for sure. Even if you apply an ACL using your DNS software (i.e. BIND), all this does is limit which spoofed DNS packets your server will reply to. It's important to understand that your DNS infrastructure can be used not only to attack the devices in the ACL, but any networking devices between your DNS server and the devices it will respond for. If you don't own the datacenter, that's a problem for more than just you. Google and OpenDNS do this, so why can't I? Sometimes it's necessary to weigh enthusiasm against reality. Here are some hard questions to ask yourself: Is this something you want to set up on a whim, or is this something you have a few million dollars to invest in doing it right? Do you have a dedicated security team? Dedicated abuse team? Do both of them have the cycles to deal with abuse of your new infrastructure, and complaints that you'll get from external parties? Do you have a legal team? When all of this is said and done, will all of this effort even remotely begin to pay for itself, turn a profit for the company, or exceed the monetary value of dealing with the inconvenience that led you in this direction? 
In closing, I know this thread is Q&A is kind of a letdown for most of you who are being linked to it. Serverfault is here for providing answers, and an answer of "this is a bad idea, don't do it" isn't usually perceived as very helpful. Some problems are much more complicated than they appear to be at the outset, and this is one of them. If you want to try to make this work, you can still ask us for help as you try to implement this kind of solution. The main thing to realize is that the problem is too big by itself for the answer to be provided in convenient Q&A format. You need to have invested a significant amount of time researching the topic already, and approach us with specific logic problems that you've encountered during your implementation. The purpose of this Q&A is to give you a better understanding of the larger picture, and help you understand why we can't answer a question as broad as this one. Help us keep the internet safe! :) | {
"source": [
"https://serverfault.com/questions/634800",
"https://serverfault.com",
"https://serverfault.com/users/16842/"
]
} |
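(For the script in the question above, a commonly cited fix is to stop ssh from consuming the rest of the here-document on stdin, e.g. with the -n flag; a sketch follows, with user@jumphost standing in for the redacted account.)
#!/bin/sh
while read IP
do
    # -n redirects ssh's stdin from /dev/null, so the ssh command cannot
    # swallow the remaining lines of the here-document feeding the loop
    ssh -n user@jumphost "ssh root@$IP 'ls -lht /log/cdr-csv/'" > /tmp/$IP.txt
done << here_doc
18.17.6.19
18.17.10.24
here_doc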
634,883 | I need to run an application from a specific directory. $ sudo docker run -P ubuntu/decomposer 'cd /local/deploy/decomposer; ./decomposer-4-15-2014'
2014/10/09 21:30:03 exec: "cd /local/deploy/decomposer; ./decomposer-4-15-2014": stat cd /local/deploy/decomposer; ./decomposer-4-15-2014: no such file or directory That directory definitely exists, and if I connect to docker by running bash interactively I can run the above command. $ sudo docker run -i -t ubuntu/decomposer /bin/bash
# cd /local/deploy/decomposer; ./decomposer-4-15-2014 I can run my program by specifying the full path, but then it crashes as it expects to be launched from the current directory. What can I do? | Pass your command as an argument to /bin/sh like this: sudo docker run -P ubuntu/decomposer /bin/sh -c 'cd /local/deploy/decomposer; ./decomposer-4-15-2014' | {
"source": [
"https://serverfault.com/questions/634883",
"https://serverfault.com",
"https://serverfault.com/users/73963/"
]
} |
634,894 | I upgraded a cluster from 8.3 to 8.4. Things seemed to be fine so I dropped the 8.3 cluster. Then noticed an error and found that the archive_command was still pointing to the 8.3 data directory archive_command = 'cp "%p" /var/lib/postgresql/8.3/main/wal_archives/"%f"' I changed this to 8.4 but now continually get the following errors in the log file and process list for one specific file 2014-10-09 17:02:12 CDT DETAIL: The failed archive command was: cp "pg_xlog/000000010000000000000012" /var/lib/postgresql/8.4/main/wal_archives/"000000010000000000000012"
cp: cannot create regular file `/var/lib/postgresql/8.4/main/wal_archives/000000010000000000000012': No such file or directory
4122 ? Ss 0:00 \_ postgres: archiver process failed on 000000010000000000000012 I'm not sure the best way to recover from this. As far as I can tell the database is fully functional | Pass your command as an argument to /bin/sh like this: sudo docker run -P ubuntu/decomposer /bin/sh -c 'cd /local/deploy/decomposer; ./decomposer-4-15-2014' | {
"source": [
"https://serverfault.com/questions/634894",
"https://serverfault.com",
"https://serverfault.com/users/98938/"
]
} |
635,139 | I am trying to create an e-mail alert on SSH root login, so I had to install ssmtp and the mail utility. Then I configured the ssmtp.conf file as follows: # Config file for sSMTP sendmail
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
#root=postmaster
#Adding email id to receive system information
root = [email protected]
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
#mailhub=mail
mailhub = smtp.gmail.com:587
[email protected]
AuthPass=plaintext password
UseTLS=YES
UseSTARTTLS=YES
# Where will the mail seem to come from?
rewriteDomain=gmail.com
# The full hostname
hostname = mailserver
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES as well as revaliases as follows: # Format: local_account:outgoing_address:mailhub
# Example: root:[email protected]:mailhub.your.domain[:port]
root:[email protected]:smtp.gmail.com:25 and I am getting this error: send-mail: Authorization failed (534 5.7.14 https://support.google.com/mail/bin/answer.py?answer=78754 ni5sm3908366pbc.83 - gsmtp)
Can't send mail: sendmail process failed with error code 1 but it didn't work.
Please help me sort this out. | It may take more than one step to fix this issue. Take the step mentioned earlier. Log into your Google email account and then go to this link: https://www.google.com/settings/security/lesssecureapps and set "Access for less secure apps" to ON. Test to see if your issue is resolved. If it isn't resolved, as it wasn't for me, continue to Step #2. Go to https://support.google.com/accounts/answer/6009563 (Titled: "Password incorrect error"). This page says "There are several reasons why you might see a “Password incorrect” error (aka 534-5.7.14) when signing in to Google using third-party apps. In some cases even if you type your password correctly." This page gives 4 suggestions of things to try. For me, the first suggestion worked: Go to https://g.co/allowaccess from a different device you have previously used to access your Google account and follow the instructions. Try signing in again from the blocked app. There were three more suggestions on the page given in step #2 but I didn't try them because after going to the redacted link and following the instructions, everything began to work as it should. | {
"source": [
"https://serverfault.com/questions/635139",
"https://serverfault.com",
"https://serverfault.com/users/247555/"
]
} |
636,621 | I've seen http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/ , which describes the rationale for consistent/predictable device naming, and then the rules by which device names are generated : * Two character prefixes based on the type of interface:
* en -- ethernet
* sl -- serial line IP (slip)
* wl -- wlan
* ww -- wwan
*
* Type of names:
* b<number> -- BCMA bus core number
* ccw<name> -- CCW bus group name
* o<index> -- on-board device index number
* s<slot>[f<function>][d<dev_port>] -- hotplug slot index number
* x<MAC> -- MAC address
* [P<domain>]p<bus>s<slot>[f<function>][d<dev_port>]
* -- PCI geographical location
* [P<domain>]p<bus>s<slot>[f<function>][u<port>][..][c<config>][i<interface>]
* -- USB port number chain So let's say I've got device eno16777736 : why is it called that? It's an ethernet card, I got that. But how can I back into the rest of this interface's name myself? I examined /sys/class/net/eno16777736 , and saw: eno16777736 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:01.0/net/eno16777736 Not sure how to interpret this either, or whether I can use this information to get to eno16777736 . Update So the 16777736 is the device's acpi_index . Per https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci : What: /sys/bus/pci/devices/.../acpi_index
Date: July 2010
Contact: Narendra K <[email protected]>, [email protected]
Description:
Reading this attribute will provide the firmware
given instance (ACPI _DSM instance number) of the PCI device.
The attribute will be created only if the firmware has given
an instance number to the PCI device. ACPI _DSM instance number
will be given priority if the system firmware provides SMBIOS
type 41 device type instance also. And, indeed: core@localhost /sys/devices/pci0000:00/0000:00:11.0/0000:02:01.0 $ find . -type f | xargs grep 1677 2> /dev/null
./net/eno16777736/uevent:INTERFACE=eno16777736
./acpi_index:16777736 Further, to reconcile output from ifconfig or ip link and your devices in lspci : $ ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.0.37 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::20c:29ff:fe70:c039 prefixlen 64 scopeid 0x20<link>
inet6 2601:a:7c0:66:20c:29ff:fe70:c039 prefixlen 64 scopeid 0x0<global>
ether 00:0c:29:70:c0:39 txqueuelen 1000 (Ethernet)
RX packets 326 bytes 37358 (36.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 172 bytes 45999 (44.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 19 base 0x2000 Notice the "device interrupt 19". And from lspci -v , which has "IRQ 19": 02:01.0 Ethernet controller: Advanced Micro Devices, Inc. [AMD] 79c970 [PCnet32 LANCE] (rev 10)
Subsystem: Advanced Micro Devices, Inc. [AMD] PCnet - Fast 79C971
Physical Slot: 33
Flags: bus master, medium devsel, latency 64, IRQ 19
I/O ports at 2000 [size=128]
[virtual] Expansion ROM at fd500000 [disabled] [size=64K]
Kernel driver in use: pcnet32 Here you also see "Physical Slot 33", and indeed, sometimes VMWare boots VMs that get ens33 as the interface name. So, it's unclear why other times it chooses eno16777736. But the 16777736 comes from the acpi_index , and the 33 comes from the PCI slot. | en for Ethernet o for onboard 16777736 is the index of the device as provided by the firmware (BIOS/EFI). It would have been logical to start the index at 1 . Either that, or you have sensible firmware and over 16 million onboard devices! But more likely, you're seeing the issue raised (but not answered) on VMware Community - it seems that the number comes from a possible negative overflow on acpi_index . You can view similar info from udev for your system with: udevadm info --name=/dev/eno16777736 --attribute-walk ; a short sysfs sketch follows this entry. | {
"source": [
"https://serverfault.com/questions/636621",
"https://serverfault.com",
"https://serverfault.com/users/145069/"
]
} |
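(A short sketch for inspecting the attributes that feed into such a name, using the interface from the question above; the name will differ on other systems.)
# Firmware-provided onboard index - the 16777736 part of eno16777736
cat /sys/class/net/eno16777736/device/acpi_index
# PCI location of the device - the basis for enp*/ens*-style names
readlink /sys/class/net/eno16777736/device
# Full set of udev attributes for the interface
udevadm info --attribute-walk --path=/sys/class/net/eno16777736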
636,790 | We have an application server that sometimes hangs. We suspect it is due to a bad request from a client. Can nginx log the complete request/response (like fiddler captures) to files, so we can see the requests that were sent before the hang? (We probably need to avoid pcap and that approach and do it all in nginx) If nginx is not the right tool for this, what (other than a network analyzer) might be? | To get the request body sent by visitors, use client_body_in_file_only on; and log the "temporary" file it's written to by appending the $request_body_file variable to the log format. "Temporary" files will be located in the client_temp directory by default. You can also log request headers with $http_<header> and sent headers with $sent_http_<header> . If you have the request body and headers, you should be able to replay the request and get the response your visitor had. Something like gor is also worth considering, so you can replay the traffic in another environment where you can let nginx write these temporary files without causing IO issues in production (nginx won't purge them when the value is on, which is why they're not that "temporary" in this case). | {
"source": [
"https://serverfault.com/questions/636790",
"https://serverfault.com",
"https://serverfault.com/users/13716/"
]
} |
637,102 | AWS EC2 offers two types of virtualization of Ubuntu Linux EC2 machines - PV and HVM. PV: HVM: What is the difference between these types? | Amazon runs on Xen, which provides Para-virtualization (PV) or Hardware-assisted virtualization (HVM). Para-virtualization used to be the recommended choice, as it gave you better performance (with a much closer integration to the virtualization host, through patched specialized kernels/drivers on both the host and the guest). Hardware-assisted virtualization uses the capabilities provided by modern hardware, and it doesn't require any kind of custom kernel or patches. Recent benchmarks have shown that HVM is actually faster on certain workloads. | {
"source": [
"https://serverfault.com/questions/637102",
"https://serverfault.com",
"https://serverfault.com/users/10904/"
]
} |
637,207 | How do I patch CVE-2014-3566 on a Windows Server 2012 system running IIS? Is there a patch in Windows Update, or do I have to do a registry change to disable SSL 3.0 ? | There is no "patch". It's a vulnerability in the protocol, not a bug in the implementation. In Windows Server 2003 to 2012 R2 the SSL / TLS protocols are controlled by flags in the registry set at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols . To disable SSLv3, which the POODLE vulnerability is concerned with, create a subkey at the above location (if it's not already present) named SSL 3.0 and, under that, a subkey named Server (if it's not already present). At this location ( HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server ) create a DWORD value named Enabled and leave it set at 0 . Disabling SSL 2.0, which you should also be doing, is done the same way, except that you'll be using a key named SSL 2.0 in the above registry path. I haven't tested all versions, but I think it's probably safe to assume that a reboot is necessary for this change to take effect. | {
"source": [
"https://serverfault.com/questions/637207",
"https://serverfault.com",
"https://serverfault.com/users/122927/"
]
} |
637,237 | I have a REST API behind an nginx proxy. Proxying works fine, however I am unable to cache any responses. Any help would be much appreciated: Nginx config: worker_processes 10;
error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;
pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
proxy_cache_path /path/to/cache/dir keys_zone=one:60m;
proxy_cache_methods GET HEAD POST;
upstream backend {
server server1 backup;
server server2 weight=5;
}
access_log logs/access.log;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 7076;
server_name localhost;
#charset koi8-r;
access_log logs/host.access.log;
location / {
add_header 'Access-Control-Allow-Origin' *;
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Content-Type,Accept';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
proxy_cache one;
proxy_cache_key $host$uri$is_args$args;
add_header X-Proxy-Cache $upstream_cache_status;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_ignore_headers Cache-Control;
proxy_hide_header Cache-Control;
proxy_hide_header Set-Cookie;
proxy_pass http://backend;
}
}
} No matter what I have tried, the Proxy-Cache always comes back as a MISS: Request Headers are: Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:nginxserver:portnumber
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36 Reponse Headers are: Access-Control-Allow-Credentials:true
Access-Control-Allow-Headers:Content-Type,Accept
Access-Control-Allow-Methods:GET, POST, OPTIONS
Access-Control-Allow-Origin:*
Connection:keep-alive
Content-Type:text/plain;charset=UTF-8
Date:Wed, 15 Oct 2014 16:30:18 GMT
Server:nginx/1.7.4
Transfer-Encoding:chunked
X-Proxy-Cache:MISS My suspicion is that it's something with the client headers, but even if I issue the call via curl and check out the headers, there is no response. Thanks in advance | You didn't tell NGINX how long the response is valid and should be served from cache. This must be specified with the proxy_cache_valid directive. proxy_cache one;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid 200 10m; But this won't work for POST requests, because the cache key cannot distinguish two POST requests to the same URL whose bodies differ. So you will need to adjust the cache key to $host$request_uri|$request_body . You will have to monitor the cache size ( proxy_cache_path parameter max_size ) and the proxy response buffer proxy_buffer_size so they suit your needs. | {
"source": [
"https://serverfault.com/questions/637237",
"https://serverfault.com",
"https://serverfault.com/users/249260/"
]
} |
637,549 | I've got a VM running CentOS 6 (64bit) and I'm attempting to add the EPEL repo like usual to install various packages as I do quite regularly. Today, I'm experiencing some strange errors yet I'm doing absolutely nothing differently. I'm adding EPEL like so: # wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm Yet when I try running yum for anything, I'm getting this error: [root@core /]# yum list Loaded plugins: fastestmirror Determining fastest mirrors Error: Cannot retrieve metalink for repository: epel. Please verify its path and try again Any ideas? I'm stumped! | The correct fix is to update your SSL certificates. sudo yum upgrade ca-certificates --disablerepo=epel You need to disable the epel repo so that this command will succeed. After you update your certificates you can use yum normally as EPEL will work again. | {
"source": [
"https://serverfault.com/questions/637549",
"https://serverfault.com",
"https://serverfault.com/users/249455/"
]
} |
637,706 | I've been reading all day about the Poodle vulnerability and I am now a bit confused about the trade-off between security and revenue. If I disable SSL V3 on the server (SSL V2 and V3 will both be disabled for Apache), clients (browsers) that don't support any protocol but SSL V3 will not be able to connect to the server over HTTPS. So it's a situation where both client and server must communicate with TLS 1.1, 1.2 and so on. If one of them uses SSL V3 and the other does not support lower versions, then what happens?
No connection to SSL. I've seen few updates made to Firefox, perhaps they have disabled the SSL V3 in that what we usually have to do in options. This will force all the connection to lower versions and TLS But is disabling SSL V3 really a solution for this problem ? | First, let's clear things up a bit: TLS superseded SSL. TLS 1.0 came after and is an update to SSL 3.0. TLS 1.2 > TLS 1.1 > TLS 1.0 > SSL 3.0 > SSL 2.0 > SSL 1.0 SSL versions prior to 3.0 have had known severe security vulnerabilities for a while and are disabled/not supported by modern clients and servers. SSL 3.0 will likely go the same way soon. Of currently-used protocols, "Poodle" most severely affects SSL 3.0, where there is no way to mitigate. There is a similar attack against some TLS 1.0 and 1.1 implementations that the spec allows - make sure your software is up to date. Now, the reason "Poodle" is a risk even with modern clients and servers is due to clients' implementation of a fallback mechanism. Not all servers will support the latest versions, so clients will try each version in order from most to least recent (TLS 1.2, TLS 1.1, TLS 1.0, SSL 3.0) until it finds one that the server supports. This happens before encrypted communication begins, so a man-in-the-middle (MITM) attacker is able to force the browser to fall back to an older version even if the server supports a higher one. This is known as a protocol downgrade attack. Specifically, in the case of "Poodle", as long as both the client and server support SSL 3.0, a MITM attacker is able to force the use of this protocol. So when you disable SSL 3.0, this has two effects: Clients that support higher versions cannot be tricked into falling back to the vulnerable version ( TLS Fallback SCSV is a new proposed mechanism to prevent a protocol downgrade attack, but not all clients and servers support it yet). This is the reason you want to disable SSL 3.0. The vast majority of your clients likely fall into this category, and this is beneficial. Clients that do not support TLS at all (as others have mentioned, IE6 on XP is pretty much the only one still used for HTTPS) will not be able to connect through an encrypted connection at all. This is likely a minor portion of your userbase, and it's not worth sacrificing the security of the majority who are up-to-date to cater to this minority. | {
"source": [
"https://serverfault.com/questions/637706",
"https://serverfault.com",
"https://serverfault.com/users/120179/"
]
} |
637,996 | On our docker implementation on GCE, we are running out of space on the root file system. Since the images themselves are stored on a separate 1TB volume, they shouldn't be the problem. One candidate is the centralized logfiles that Docker itself stores (a json file somewhere?); does anyone know where those files are located, and how we can logrotate/truncate them? | First, I'm using docker 1.1.2 for both client and server; this answer may be obsolete for newer versions of docker as docker evolves quickly. Location of the file Find your docker directory. On systems that use apt/Debian-style packaging, the package installed from the docker repository https://get.docker.com/ubuntu uses /var/lib/docker . Chances are that directory is in the same place on other systems (can't confirm). Under containers/**CONTAINER_ID** you'll find info about the container.
In the file **CONTAINER_ID**-json.log in that folder, you'll find all the logs for that container. It may look like a json file, but it's not: it's a stream of json structures, one per line, each containing one log line (each line ends with a } and the next one starts with a { , so it's not valid json as a whole). Example location:
- /var/lib/docker/containers/05b6053c41a2130afd6fc3b158bda4e605b6053c41a2130afd6fc3b158bda4e6/05b6053c41a2130afd6fc3b158bda4e605b6053c41a2130afd6fc3b158bda4e6-json.log Editing/Altering that file I suggest using that path to see whether or not it's the reason you're running out of space, but not to logrotate the files directly. I would rather make sure the container doesn't log too many lines (by using a CMD in the dockerfile that redirects the output of your process either to a file in a volume or to /dev/null - with logs enabled via configuration - and I would then logrotate the log files with another container). | {
"source": [
"https://serverfault.com/questions/637996",
"https://serverfault.com",
"https://serverfault.com/users/187194/"
]
} |
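(A small helper in the spirit of the answer above for finding and sizing a container's JSON log; it assumes the default /var/lib/docker root and a docker version whose inspect output exposes the Id field, and mycontainer is a placeholder name.)
#!/bin/bash
# Resolve a container name to its full ID
id=$(docker inspect --format '{{.Id}}' mycontainer)
# Path of the per-container JSON log under the default docker root
log="/var/lib/docker/containers/$id/$id-json.log"
# Show how much space the log is actually taking
ls -lh "$log"
du -sh "$log"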
638,152 | I have a server with multiple domains. How can I clear all Postfix queue messages for a specific domain? | UPDATE 2021-04-18: mailq | tail -n +2 | grep -v '^ *(' | awk 'BEGIN { RS = "" } { if ($8 ~ /@example\.com/ && $9 == "") print $1 }' | tr -d '*!' | postsuper -d - Whereas $7 = sender, $8 = recipient1, $9 = recipient2. You can also adapt the rule for other recipients ( $9 ) to your needs. The command is based on an example from the postsuper manpage, which shows an example command matching a full recipient mail address: mailq | tail -n +2 | grep -v '^ *(' | awk 'BEGIN { RS = "" } { if ($8 == "[email protected]" && $9 == "") print $1 }' | tr -d '*!' | postsuper -d - Old content: This command deletes all mails sent from or to addresses that end with @example.com : sudo mailq | tail -n +2 | awk 'BEGIN { RS = "" } /@example\.com$/ { print $1 }' | tr -d '*!' | sudo postsuper -d - | {
"source": [
"https://serverfault.com/questions/638152",
"https://serverfault.com",
"https://serverfault.com/users/249837/"
]
} |
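(Before running the destructive command above, a dry run that only prints the queue IDs the filter would delete can be useful; example.com is a placeholder domain.)
# List matching queue IDs without deleting anything
mailq | tail -n +2 | grep -v '^ *(' | \
  awk 'BEGIN { RS = "" } { if ($8 ~ /@example\.com/ && $9 == "") print $1 }' | \
  tr -d '*!'
# If the list looks right, append "| postsuper -d -" as in the answer above
# to actually remove those messages from the queue.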
638,260 | Is it valid for a hostname to start with a digit? e.g. 8server From reading RFC 1123 it would appear that this is a valid hostname. However, I'm not clear on whether a hostname can only start with a digit when there is a suffix e.g. 8server.com The origin of this question is that InternetDomainName.isValid("8server"); in the Google Guava library ( Javadoc ) rejects the input. I also posted a specific question on the Guava Discuss group. | RFC 1123 relaxes a constraint of RFC 952 which specifies a legacy of the Hostname Server Protocol (described in RFC 953 ) replaced by DNS.
So a fully numeric hostname would be valid per these RFCs. RFC 1123 itself discusses consequences when it comes to IP versus hostname parsing : If a dotted-decimal number can be entered without such
identifying delimiters, then a full syntactic check must be
made, because a segment of a host domain name is now allowed
to begin with a digit and could legally be entirely numeric (see Section 6.1.2.4). However, a valid host name can never
have the dotted-decimal form #.#.#.#, since at least the
highest-level component label will be alphabetic. However, RFC 1178 provided guidelines for choosing a valid hostname because of implementation issues. A lot of implementations don't handle numeric hostnames well and try to parse them as IPs unless they contain at least one non-numeric character, no matter where it appears. Also, you will find that implementations don't always honor other original constraints of RFC 952, allowing for instance the hostname to end with a minus sign or a period. DNS preserved these original specifications for hostnames and added support for underscores ( RFC 2782 ). Update As requested in comments, clarification for the sentence: However, a valid host name can never have the dotted-decimal form #.#.#.#, since at least the highest-level component label will be alphabetic . This means the top level domain name must be alphabetic , thus the fully qualified hostname can never be confused with an IPv4 address. This idea has been clarified by RFC 3696 for DNS and changed to not all-numeric . Note the slight difference. | {
"source": [
"https://serverfault.com/questions/638260",
"https://serverfault.com",
"https://serverfault.com/users/1135/"
]
} |
638,367 | I've seen various config examples for handling dual-stack IPv4 and IPv6 virtual hosts on nginx. Many suggest this pattern: listen 80;
listen [::]:80 ipv6only=on; As far as I can see, this achieves exactly the same thing as: listen [::]:80 ipv6only=off; Why would you use the former? The only reason I can think of is if you need additional params that are specific to each protocol, for example if you only wanted to set deferred on IPv4. | That probably is about the only reason you would use the former construct, these days. The reason you're seeing this is probably that the default of ipv6only changed in nginx 1.3.4. Prior to that, it defaulted to off ; in newer versions it defaults to on . This happens to interact with the IPV6_V6ONLY socket option on Linux, and similar options on other operating systems, whose defaults aren't necessarily predictable. Thus the former construct was required pre-1.3.4 to ensure that you were actually listening for connections on both IPv4 and IPv6. The change to the nginx default for ipv6only ensures that the operating system default for dual stack sockets is irrelevant. Now, nginx either explicitly binds to IPv4, IPv6, or both, never depending on the OS to create a dual stack socket by default. Indeed, my standard nginx configs for pre-1.3.4 have the first configuration, and post-1.3.4 all have the second configuration. Though, since binding a dual stack socket is a Linux-only thing, my current configurations now look more like the first example, but without ipv6only set, to wit: listen [::]:80;
listen 80; | {
"source": [
"https://serverfault.com/questions/638367",
"https://serverfault.com",
"https://serverfault.com/users/42272/"
]
} |
638,600 | I've tried yes | ssh [email protected] to try to accept the RSA key fingerprint, but I am still prompted to confirm that I want to connect. Is there a way to make this automatic? | OpenSSH 7.6 introduced a new StrictHostKeyChecking=accept-new setting for exactly this purpose: ssh(1): expand the StrictHostKeyChecking option with two new
settings. The first "accept-new" will automatically accept
hitherto-unseen keys but will refuse connections for changed or
invalid hostkeys. This is a safer subset of the current behaviour
of StrictHostKeyChecking=no. The second setting "n", is a synonym
for the current behaviour of StrictHostKeyChecking=no: accept new
host keys, and continue connection for hosts with incorrect
hostkeys. A future release will change the meaning of
StrictHostKeyChecking=no to the behaviour of "accept-new". ( OpenSSH 7.6 Release notes ) | {
"source": [
"https://serverfault.com/questions/638600",
"https://serverfault.com",
"https://serverfault.com/users/243828/"
]
} |
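(With OpenSSH 7.6 or newer, the setting from the release notes above can be used directly; user@host.example.com is a placeholder.)
# One-off: accept keys from hosts never seen before, but still refuse
# to connect if a previously recorded host key has changed
ssh -o StrictHostKeyChecking=accept-new user@host.example.com
# Persistent variant for ~/.ssh/config:
#   Host *.example.com
#       StrictHostKeyChecking accept-new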
638,691 | I'm on CentOS 5.9. I'd like to determine from the Linux shell if a remote web server specifically supports TLS 1.2 (as opposed to TLS 1.0). Is there an easy way to check for that? I'm not seeing a related option in openssl, but perhaps I'm overlooking something. | You should use openssl s_client, and the option you are looking for is -tls1_2. An example command would be: openssl s_client -connect google.com:443 -tls1_2 If you get the certificate chain and the handshake, you know the system in question supports TLS 1.2. If you don't see the certificate chain, and something similar to "handshake error", you know it does not support TLS 1.2. You can also test for TLS 1 or TLS 1.1 with -tls1 or -tls1_1 respectively. | {
"source": [
"https://serverfault.com/questions/638691",
"https://serverfault.com",
"https://serverfault.com/users/21875/"
]
} |
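(A sketch that loops over the protocol options mentioned above; note that the openssl 0.9.8 shipped with CentOS 5.9 predates -tls1_1/-tls1_2, so this needs a reasonably recent openssl, and the host name is a placeholder.)
host=example.com
for proto in tls1 tls1_1 tls1_2; do
    # Force a single protocol version; a failed handshake means the
    # server does not offer that version
    if echo | openssl s_client -connect "$host:443" -"$proto" > /dev/null 2>&1; then
        echo "$proto: supported"
    else
        echo "$proto: not supported"
    fi
done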
638,701 | In May AWS introduced the ability to tag elastic beanstalk environments . We can't figure out how to tag an elastic beanstalk environment when we create it using "eb start." We use the "eb command line interface" to create our environments. Using this mechanism you pass configuration parameters through the command line or using an .elasticbeanstalk/optionsettings. environment_name file. Anyone figure out how to tag an environment using an optionsettings file? If not, does anyone know a way to tag an environment after it has been created? | You should use openssl s_client, and the option you are looking for is -tls1_2. An example command would be: openssl s_client -connect google.com:443 -tls1_2 If you get the certificate chain and the handshake you know the system in question supports TLS 1.2. If you see don't see the certificate chain, and something similar to "handshake error" you know it does not support TLS 1.2. You can also test for TLS 1 or TLS 1.1 with -tls1 or tls1_1 respectively. | {
"source": [
"https://serverfault.com/questions/638701",
"https://serverfault.com",
"https://serverfault.com/users/238408/"
]
} |
639,052 | Users logged in on my Linux server should be able to ssh to a specific remote machine with a default account.
The authentication on the remote machine uses public key, so on the server the corresponding private key is available. I don't want the server users to actually be able to read the private key. Basically, the fact that they have access to the server allows them the ssh right, and removing them from the server should also disallow connection to the remote machine. How can I allow users to open an ssh connection without giving them read access to the private key? My thoughts so far: obviously the ssh executable must be able to read the private key, so it must run under another user on the server which has those right. Once the ssh connection is established, I can then "forward" it to the user so that he can enter commands and interact with the remote machine. Is this a good approach? How should I implement the forward? How can the user initiate the connection (that is, the execution of the ssh by the user which has read rights on the key)? Is there a security loophole? - if the users can execute an ssh as another user, can they then do everything that other user could (including, reading the private key)? | That is one of the reasons sudo exists. Simply allow your users to run 1 single command with only the pre-authorized command-line options and most obvious circumventions are solved. e.g. #/etc/sudoers
%users ALL = (some_uid) NOPASSWD: /usr/bin/ssh -i /home/some_uid/.ssh/remote-host.key username@remotehost sets up sudo so all members of the group users can run the ssh command as user some_uid without entering their own password (or that of the some_uid account) when they run: sudo -u some_uid /usr/bin/ssh -i /home/some_uid/.ssh/remote-host.key username@remotehost Remove the NOPASSWD: option to force users to enter their own passwords before logging in to the remote host. Possibly set up an alias or wrapper script as a convenience for your users, because sudo is quite picky about using the correct arguments; a minimal wrapper is sketched after this entry. | {
"source": [
"https://serverfault.com/questions/639052",
"https://serverfault.com",
"https://serverfault.com/users/131264/"
]
} |
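(A minimal wrapper script of the kind suggested above; the path /usr/local/bin/remote-shell is arbitrary, and the ssh arguments must match the sudoers entry exactly or sudo will refuse to run it.)
#!/bin/sh
# /usr/local/bin/remote-shell - convenience wrapper for the one
# pre-authorized command from the sudoers example above
exec sudo -u some_uid /usr/bin/ssh -i /home/some_uid/.ssh/remote-host.key username@remotehost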
639,058 | In SQL Activity Monitor, I see a constant 35% to 40% CPU usage in the Processor Time scrolling graph.
How can I tell what Processes within SQL are responsible for that CPU usage? | That is one of the reasons sudo exists. Simply allow your users to run 1 single command with only the pre-authorized command-line options and most obvious circumventions are solved. e.g. #/etc/sudoers
%users ALL = (some_uid) NOPASSWD: /usr/bin/ssh -i /home/some_uid/.ssh/remote-host.key username@remotehost sets up sudo so all members of the group users can run the ssh command as user some_uid without entering their own password (or that of the some_uid account) when they run: sudo -u some_uid /usr/bin/ssh -i /home/some_uid/.ssh/remote-host.key username@remotehost Remove the NOPASSWD: option to force that users enter their own passwords before logging in to the remote-host. Possibly set up an alias or wrapper script as a convenience for your users because sudo is quite picky about using the correct arguments. | {
"source": [
"https://serverfault.com/questions/639058",
"https://serverfault.com",
"https://serverfault.com/users/192937/"
]
} |
639,061 | I'm getting lots of network unreachable lines in my CentOS messages log file. It seems named can't resolve certain addresses, and I have no idea why my server has to resolve them in the first place. Could anyone tell me the origin of such errors? Am I under attack? Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving './DNSKEY/IN': 2001:503:ba3e::2:30#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving './NS/IN': 2001:503:ba3e::2:30#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:500:48::1#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:4f8:0:2::19#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/A/IN': 2001:500:2f::f#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/AAAA/IN': 2001:500:2f::f#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/A/IN': 2001:500:1::803f:235#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/AAAA/IN': 2001:500:1::803f:235#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/A/IN': 2001:503:c27::2:30#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/AAAA/IN': 2001:503:c27::2:30#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns.isc.afilias-nst.info/A/IN': 2001:500:1a::1#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:4f8:0:2::20#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:500:60::29#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns1.isc.ultradns.net/A/IN': 2001:7fd::1#53
Oct 23 11:39:03 server named[1585]: error (network unreachable) resolving 'ns1.isc.ultradns.net/AAAA/IN': 2001:7fd::1#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'ns2.isc.ultradns.net/A/IN': 2610:a1:1014::e8#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.org/A/IN': 2001:500:e::1#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.org/AAAA/IN': 2001:500:e::1#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.org/A/IN': 2001:500:40::1#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.org/AAAA/IN': 2001:500:40::1#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.org/AAAA/IN': 2001:502:4612::e8#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.info/AAAA/IN': 2610:a1:1016::e8#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.info/A/IN': 2610:a1:1016::e8#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.co.uk/AAAA/IN': 2610:a1:1017::e8#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.biz/A/IN': 2610:a1:1015::e8#53
Oct 23 11:39:04 server named[1585]: error (network unreachable) resolving 'pdns196.ultradns.com/AAAA/IN': 2001:502:f3ff::e8#53
Oct 23 11:39:04 server named[1585]: client 93.113.174.225#46368: query (cache) 'adobe.com/A/IN' denied
Oct 23 11:39:04 server named[1585]: client 93.113.174.225#23736: query (cache) 'adobe.com/A/IN' denied
Oct 23 11:39:04 server lfd[1196]: SYSLOG check [Lga6AZUNsgZGaVQX] By the way, my named.conf's options are as below if they are of any help: options {
//listen-on port 53 { 127.0.0.1; };
//listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
//allow-query { localhost; };
allow-recursion { localnets; };
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
}; Please help! | All of the addresses are IPv6. It seems to be an IPv6 issue; you probably have no IPv6 networking configured. Disable IPv6 support in Bind: Edit /etc/sysconfig/named and set: OPTIONS="-4" Then restart bind: service named restart (from http://crashmag.net/disable-ipv6-lookups-with-bind-on-rhel-or-centos ) Are you under attack? I don't think you've been compromised. Those messages can be normal depending on what services you are running (anyhow, any server is always under some attempt of attack; people scan the internet trying exploits on every server). | {
"source": [
"https://serverfault.com/questions/639061",
"https://serverfault.com",
"https://serverfault.com/users/236061/"
]
} |
639,079 | I have a server with dedicated IP address A and a server with dynamic IP address B (routing via no-ip.org). A uploads a backup to B via sshpass: export SSHPASS=***
sshpass -e sftp **@** << !
[..]
put [..]
bye
! Every time now, the following happens on (A): Warning: Permanently added the ECDSA host key for IP address '[...]' to the list of known hosts. I have a feeling that this might not be a safe method to transfer the backup data (tar file).
Is it possible for someone to intercept the backup? Also, shouldn’t I remove the IP from the list of known hosts again afterwards? The backup is run every day. Sounds like a long list of known hosts that are just dynamic! | All of the addresses are IPv6. Seems an IPv6 issue, you probably have no IPv6 networking configured. Disable IPv6 suport in Bind: Edit /etc/sysconfig/named and set: OPTIONS="-4" Then restart bind: service named restart (from http://crashmag.net/disable-ipv6-lookups-with-bind-on-rhel-or-centos ) Are you under attack? I don't think you've been compromised. Those messages can be normal depending on what services you are running (anyhow, any server is always under some attempt of attack, people scans the internet trying exploits on every server). | {
"source": [
"https://serverfault.com/questions/639079",
"https://serverfault.com",
"https://serverfault.com/users/207575/"
]
} |
639,083 | I've the following iptables configuration: Chain INPUT (policy DROP 11 packets, 604 bytes)
num pkts bytes target prot opt in out source destination
1 127 11093 BLACKLIST all -- * * 0.0.0.0/0 0.0.0.0/0
2 127 11093 UNCLEAN all -- * * 0.0.0.0/0 0.0.0.0/0
3 115 10437 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
4 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
... followed by rules for state=NEW for running services like SSH The chain BLACKLIST blocks some source-IPs.
The chain UNCLEAN drops packets with unclean TCP flags.
I'd like to move the ACCEPT state RELATED,ESTABLISHED rule (currently rule 3) as far to the top as possible for best response behaviour - but without losing security. As far as I know I can move the BLACKLIST two positions down, because it's sufficient to check it only for state=NEW . Once established, a connection has already passed the BLACKLIST check, and therefore the ACCEPT state RELATED,ESTABLISHED rule can be positioned before the BLACKLIST rule. Right? Would you suggest moving the other rules? (i.e. lo to top or something like that) | All of the addresses are IPv6. Seems an IPv6 issue, you probably have no IPv6 networking configured. Disable IPv6 support in Bind: Edit /etc/sysconfig/named and set: OPTIONS="-4" Then restart bind: service named restart (from http://crashmag.net/disable-ipv6-lookups-with-bind-on-rhel-or-centos ) Are you under attack? I don't think you've been compromised. Those messages can be normal depending on what services you are running (anyhow, any server is always under some attempt of attack, people scan the internet trying exploits on every server). | {
"source": [
"https://serverfault.com/questions/639083",
"https://serverfault.com",
"https://serverfault.com/users/127649/"
]
} |
639,088 | After a restart of one of our servers (a Windows Server 2012 R2), all private connections become public and vice versa ( this user had the same problem ). Stuff like pinging and iSCSI stopped working, and after some investigation it turned out this was the cause. The problem is that I don't know how to make them private again. Left-clicking the network icon in the tray shows the "modern" sidebar, but it only shows a list of connections, and right-clicking them doesn't show any options. What could be the problem, and is there a way to change these settings? I have to make one of the connections public (Internet access), and two of them private (backbone). | Powershell. Here is an example of changing the network profile of a network interface called Ethernet1 from whatever it is now to "Private." I got this info from Get-Help Set-NetConnectionProfile -Full . PS C:\>$Profile = Get-NetConnectionProfile -InterfaceAlias Ethernet1
PS C:\>$Profile.NetworkCategory = "Private"
PS C:\>Set-NetConnectionProfile -InputObject $Profile Documentation: https://docs.microsoft.com/en-us/powershell/module/netconnection/set-netconnectionprofile?view=winserver2012r2-ps | {
"source": [
"https://serverfault.com/questions/639088",
"https://serverfault.com",
"https://serverfault.com/users/48320/"
]
} |
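On builds where the cmdlet accepts it, the same change can usually be made in a single call; Ethernet1 is just the example alias carried over from above:
PS C:\>Set-NetConnectionProfile -InterfaceAlias Ethernet1 -NetworkCategory Private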
639,128 | I have an nginx instance that is set to log access to /var/log/nginx/access.log and errors to /var/log/nginx/errors.log, but as soon as logrotate runs each week, the file gets moved to *.log.1 and the new *.log file gets created, but nginx continues to log to the log.1 file instead of the new .log file (and nothing gets gzipped). The first time I noticed this, it had been 3 weeks since the log rotation and the log was getting huge. Running kill -HUP `cat /run/nginx.pid` made nginx start logging to the right place again, but the problem started again the next week. The more important reason this is frustrating is that I have the logs set to upload to Loggly via rsyslog, and when nginx stops logging to the file I have rsyslog polling, then things stop uploading and I don't get any alerts. I suspect it has something to do with restarting nginx, or reloading the config, because it didn't start until I had made a config change and reloaded the config in a way that I thought was normal. I tried running kill -USR1 `cat /run/nginx.pid` but the files continued to get logged to the wrong location until I ran kill -HUP `cat /run/nginx.pid` , which I already know does not solve the problem. Any idea of what's going on? I admit I'm no expert on logrotate or nginx administration, but my Googles have failed me on this one. Here is my nginx logrotate script, and let me know if there's anything else you might want to see. The nginx.conf has nothing special in it with regard to logging, other than defining the output locations. /var/log/nginx/*.log {
weekly
missingok
rotate 52
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi \
endscript
postrotate
[ -s /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
endscript
} EDIT: I think I found the problem. Here is the output of running the logrotate in debug mode: $ sudo logrotate --force -d /etc/logrotate.d/nginx
reading config file /etc/logrotate.d/nginx
Handling 1 logs
rotating pattern: /var/log/nginx/*.log forced from command line (52 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/nginx/access.log
log needs rotating
considering log /var/log/nginx/error.log
log needs rotating
rotating log /var/log/nginx/access.log, log->rotateCount is 52
dateext suffix '-20141023'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
previous log /var/log/nginx/access.log.1 does not exist
renaming /var/log/nginx/access.log.52.gz to /var/log/nginx/access.log.53.gz (rotatecount 52, logstart 1, i 52),
renaming /var/log/nginx/access.log.51.gz to /var/log/nginx/access.log.52.gz (rotatecount 52, logstart 1, i 51),
renaming /var/log/nginx/access.log.50.gz to /var/log/nginx/access.log.51.gz (rotatecount 52, logstart 1, i 50),
renaming /var/log/nginx/access.log.49.gz to /var/log/nginx/access.log.50.gz (rotatecount 52, logstart 1, i 49),
renaming /var/log/nginx/access.log.48.gz to /var/log/nginx/access.log.49.gz (rotatecount 52, logstart 1, i 48),
renaming /var/log/nginx/access.log.47.gz to /var/log/nginx/access.log.48.gz (rotatecount 52, logstart 1, i 47),
renaming /var/log/nginx/access.log.46.gz to /var/log/nginx/access.log.47.gz (rotatecount 52, logstart 1, i 46),
renaming /var/log/nginx/access.log.45.gz to /var/log/nginx/access.log.46.gz (rotatecount 52, logstart 1, i 45),
renaming /var/log/nginx/access.log.44.gz to /var/log/nginx/access.log.45.gz (rotatecount 52, logstart 1, i 44),
renaming /var/log/nginx/access.log.43.gz to /var/log/nginx/access.log.44.gz (rotatecount 52, logstart 1, i 43),
renaming /var/log/nginx/access.log.42.gz to /var/log/nginx/access.log.43.gz (rotatecount 52, logstart 1, i 42),
renaming /var/log/nginx/access.log.41.gz to /var/log/nginx/access.log.42.gz (rotatecount 52, logstart 1, i 41),
renaming /var/log/nginx/access.log.40.gz to /var/log/nginx/access.log.41.gz (rotatecount 52, logstart 1, i 40),
renaming /var/log/nginx/access.log.39.gz to /var/log/nginx/access.log.40.gz (rotatecount 52, logstart 1, i 39),
renaming /var/log/nginx/access.log.38.gz to /var/log/nginx/access.log.39.gz (rotatecount 52, logstart 1, i 38),
renaming /var/log/nginx/access.log.37.gz to /var/log/nginx/access.log.38.gz (rotatecount 52, logstart 1, i 37),
renaming /var/log/nginx/access.log.36.gz to /var/log/nginx/access.log.37.gz (rotatecount 52, logstart 1, i 36),
renaming /var/log/nginx/access.log.35.gz to /var/log/nginx/access.log.36.gz (rotatecount 52, logstart 1, i 35),
renaming /var/log/nginx/access.log.34.gz to /var/log/nginx/access.log.35.gz (rotatecount 52, logstart 1, i 34),
renaming /var/log/nginx/access.log.33.gz to /var/log/nginx/access.log.34.gz (rotatecount 52, logstart 1, i 33),
renaming /var/log/nginx/access.log.32.gz to /var/log/nginx/access.log.33.gz (rotatecount 52, logstart 1, i 32),
renaming /var/log/nginx/access.log.31.gz to /var/log/nginx/access.log.32.gz (rotatecount 52, logstart 1, i 31),
renaming /var/log/nginx/access.log.30.gz to /var/log/nginx/access.log.31.gz (rotatecount 52, logstart 1, i 30),
renaming /var/log/nginx/access.log.29.gz to /var/log/nginx/access.log.30.gz (rotatecount 52, logstart 1, i 29),
renaming /var/log/nginx/access.log.28.gz to /var/log/nginx/access.log.29.gz (rotatecount 52, logstart 1, i 28),
renaming /var/log/nginx/access.log.27.gz to /var/log/nginx/access.log.28.gz (rotatecount 52, logstart 1, i 27),
renaming /var/log/nginx/access.log.26.gz to /var/log/nginx/access.log.27.gz (rotatecount 52, logstart 1, i 26),
renaming /var/log/nginx/access.log.25.gz to /var/log/nginx/access.log.26.gz (rotatecount 52, logstart 1, i 25),
renaming /var/log/nginx/access.log.24.gz to /var/log/nginx/access.log.25.gz (rotatecount 52, logstart 1, i 24),
renaming /var/log/nginx/access.log.23.gz to /var/log/nginx/access.log.24.gz (rotatecount 52, logstart 1, i 23),
renaming /var/log/nginx/access.log.22.gz to /var/log/nginx/access.log.23.gz (rotatecount 52, logstart 1, i 22),
renaming /var/log/nginx/access.log.21.gz to /var/log/nginx/access.log.22.gz (rotatecount 52, logstart 1, i 21),
renaming /var/log/nginx/access.log.20.gz to /var/log/nginx/access.log.21.gz (rotatecount 52, logstart 1, i 20),
renaming /var/log/nginx/access.log.19.gz to /var/log/nginx/access.log.20.gz (rotatecount 52, logstart 1, i 19),
renaming /var/log/nginx/access.log.18.gz to /var/log/nginx/access.log.19.gz (rotatecount 52, logstart 1, i 18),
renaming /var/log/nginx/access.log.17.gz to /var/log/nginx/access.log.18.gz (rotatecount 52, logstart 1, i 17),
renaming /var/log/nginx/access.log.16.gz to /var/log/nginx/access.log.17.gz (rotatecount 52, logstart 1, i 16),
renaming /var/log/nginx/access.log.15.gz to /var/log/nginx/access.log.16.gz (rotatecount 52, logstart 1, i 15),
renaming /var/log/nginx/access.log.14.gz to /var/log/nginx/access.log.15.gz (rotatecount 52, logstart 1, i 14),
renaming /var/log/nginx/access.log.13.gz to /var/log/nginx/access.log.14.gz (rotatecount 52, logstart 1, i 13),
renaming /var/log/nginx/access.log.12.gz to /var/log/nginx/access.log.13.gz (rotatecount 52, logstart 1, i 12),
renaming /var/log/nginx/access.log.11.gz to /var/log/nginx/access.log.12.gz (rotatecount 52, logstart 1, i 11),
renaming /var/log/nginx/access.log.10.gz to /var/log/nginx/access.log.11.gz (rotatecount 52, logstart 1, i 10),
renaming /var/log/nginx/access.log.9.gz to /var/log/nginx/access.log.10.gz (rotatecount 52, logstart 1, i 9),
renaming /var/log/nginx/access.log.8.gz to /var/log/nginx/access.log.9.gz (rotatecount 52, logstart 1, i 8),
renaming /var/log/nginx/access.log.7.gz to /var/log/nginx/access.log.8.gz (rotatecount 52, logstart 1, i 7),
renaming /var/log/nginx/access.log.6.gz to /var/log/nginx/access.log.7.gz (rotatecount 52, logstart 1, i 6),
renaming /var/log/nginx/access.log.5.gz to /var/log/nginx/access.log.6.gz (rotatecount 52, logstart 1, i 5),
renaming /var/log/nginx/access.log.4.gz to /var/log/nginx/access.log.5.gz (rotatecount 52, logstart 1, i 4),
renaming /var/log/nginx/access.log.3.gz to /var/log/nginx/access.log.4.gz (rotatecount 52, logstart 1, i 3),
renaming /var/log/nginx/access.log.2.gz to /var/log/nginx/access.log.3.gz (rotatecount 52, logstart 1, i 2),
renaming /var/log/nginx/access.log.1.gz to /var/log/nginx/access.log.2.gz (rotatecount 52, logstart 1, i 1),
renaming /var/log/nginx/access.log.0.gz to /var/log/nginx/access.log.1.gz (rotatecount 52, logstart 1, i 0),
rotating log /var/log/nginx/error.log, log->rotateCount is 52
dateext suffix '-20141023'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
previous log /var/log/nginx/error.log.1 does not exist
renaming /var/log/nginx/error.log.52.gz to /var/log/nginx/error.log.53.gz (rotatecount 52, logstart 1, i 52),
renaming /var/log/nginx/error.log.51.gz to /var/log/nginx/error.log.52.gz (rotatecount 52, logstart 1, i 51),
renaming /var/log/nginx/error.log.50.gz to /var/log/nginx/error.log.51.gz (rotatecount 52, logstart 1, i 50),
renaming /var/log/nginx/error.log.49.gz to /var/log/nginx/error.log.50.gz (rotatecount 52, logstart 1, i 49),
renaming /var/log/nginx/error.log.48.gz to /var/log/nginx/error.log.49.gz (rotatecount 52, logstart 1, i 48),
renaming /var/log/nginx/error.log.47.gz to /var/log/nginx/error.log.48.gz (rotatecount 52, logstart 1, i 47),
renaming /var/log/nginx/error.log.46.gz to /var/log/nginx/error.log.47.gz (rotatecount 52, logstart 1, i 46),
renaming /var/log/nginx/error.log.45.gz to /var/log/nginx/error.log.46.gz (rotatecount 52, logstart 1, i 45),
renaming /var/log/nginx/error.log.44.gz to /var/log/nginx/error.log.45.gz (rotatecount 52, logstart 1, i 44),
renaming /var/log/nginx/error.log.43.gz to /var/log/nginx/error.log.44.gz (rotatecount 52, logstart 1, i 43),
renaming /var/log/nginx/error.log.42.gz to /var/log/nginx/error.log.43.gz (rotatecount 52, logstart 1, i 42),
renaming /var/log/nginx/error.log.41.gz to /var/log/nginx/error.log.42.gz (rotatecount 52, logstart 1, i 41),
renaming /var/log/nginx/error.log.40.gz to /var/log/nginx/error.log.41.gz (rotatecount 52, logstart 1, i 40),
renaming /var/log/nginx/error.log.39.gz to /var/log/nginx/error.log.40.gz (rotatecount 52, logstart 1, i 39),
renaming /var/log/nginx/error.log.38.gz to /var/log/nginx/error.log.39.gz (rotatecount 52, logstart 1, i 38),
renaming /var/log/nginx/error.log.37.gz to /var/log/nginx/error.log.38.gz (rotatecount 52, logstart 1, i 37),
renaming /var/log/nginx/error.log.36.gz to /var/log/nginx/error.log.37.gz (rotatecount 52, logstart 1, i 36),
renaming /var/log/nginx/error.log.35.gz to /var/log/nginx/error.log.36.gz (rotatecount 52, logstart 1, i 35),
renaming /var/log/nginx/error.log.34.gz to /var/log/nginx/error.log.35.gz (rotatecount 52, logstart 1, i 34),
renaming /var/log/nginx/error.log.33.gz to /var/log/nginx/error.log.34.gz (rotatecount 52, logstart 1, i 33),
renaming /var/log/nginx/error.log.32.gz to /var/log/nginx/error.log.33.gz (rotatecount 52, logstart 1, i 32),
renaming /var/log/nginx/error.log.31.gz to /var/log/nginx/error.log.32.gz (rotatecount 52, logstart 1, i 31),
renaming /var/log/nginx/error.log.30.gz to /var/log/nginx/error.log.31.gz (rotatecount 52, logstart 1, i 30),
renaming /var/log/nginx/error.log.29.gz to /var/log/nginx/error.log.30.gz (rotatecount 52, logstart 1, i 29),
renaming /var/log/nginx/error.log.28.gz to /var/log/nginx/error.log.29.gz (rotatecount 52, logstart 1, i 28),
renaming /var/log/nginx/error.log.27.gz to /var/log/nginx/error.log.28.gz (rotatecount 52, logstart 1, i 27),
renaming /var/log/nginx/error.log.26.gz to /var/log/nginx/error.log.27.gz (rotatecount 52, logstart 1, i 26),
renaming /var/log/nginx/error.log.25.gz to /var/log/nginx/error.log.26.gz (rotatecount 52, logstart 1, i 25),
renaming /var/log/nginx/error.log.24.gz to /var/log/nginx/error.log.25.gz (rotatecount 52, logstart 1, i 24),
renaming /var/log/nginx/error.log.23.gz to /var/log/nginx/error.log.24.gz (rotatecount 52, logstart 1, i 23),
renaming /var/log/nginx/error.log.22.gz to /var/log/nginx/error.log.23.gz (rotatecount 52, logstart 1, i 22),
renaming /var/log/nginx/error.log.21.gz to /var/log/nginx/error.log.22.gz (rotatecount 52, logstart 1, i 21),
renaming /var/log/nginx/error.log.20.gz to /var/log/nginx/error.log.21.gz (rotatecount 52, logstart 1, i 20),
renaming /var/log/nginx/error.log.19.gz to /var/log/nginx/error.log.20.gz (rotatecount 52, logstart 1, i 19),
renaming /var/log/nginx/error.log.18.gz to /var/log/nginx/error.log.19.gz (rotatecount 52, logstart 1, i 18),
renaming /var/log/nginx/error.log.17.gz to /var/log/nginx/error.log.18.gz (rotatecount 52, logstart 1, i 17),
renaming /var/log/nginx/error.log.16.gz to /var/log/nginx/error.log.17.gz (rotatecount 52, logstart 1, i 16),
renaming /var/log/nginx/error.log.15.gz to /var/log/nginx/error.log.16.gz (rotatecount 52, logstart 1, i 15),
renaming /var/log/nginx/error.log.14.gz to /var/log/nginx/error.log.15.gz (rotatecount 52, logstart 1, i 14),
renaming /var/log/nginx/error.log.13.gz to /var/log/nginx/error.log.14.gz (rotatecount 52, logstart 1, i 13),
renaming /var/log/nginx/error.log.12.gz to /var/log/nginx/error.log.13.gz (rotatecount 52, logstart 1, i 12),
renaming /var/log/nginx/error.log.11.gz to /var/log/nginx/error.log.12.gz (rotatecount 52, logstart 1, i 11),
renaming /var/log/nginx/error.log.10.gz to /var/log/nginx/error.log.11.gz (rotatecount 52, logstart 1, i 10),
renaming /var/log/nginx/error.log.9.gz to /var/log/nginx/error.log.10.gz (rotatecount 52, logstart 1, i 9),
renaming /var/log/nginx/error.log.8.gz to /var/log/nginx/error.log.9.gz (rotatecount 52, logstart 1, i 8),
renaming /var/log/nginx/error.log.7.gz to /var/log/nginx/error.log.8.gz (rotatecount 52, logstart 1, i 7),
renaming /var/log/nginx/error.log.6.gz to /var/log/nginx/error.log.7.gz (rotatecount 52, logstart 1, i 6),
renaming /var/log/nginx/error.log.5.gz to /var/log/nginx/error.log.6.gz (rotatecount 52, logstart 1, i 5),
renaming /var/log/nginx/error.log.4.gz to /var/log/nginx/error.log.5.gz (rotatecount 52, logstart 1, i 4),
renaming /var/log/nginx/error.log.3.gz to /var/log/nginx/error.log.4.gz (rotatecount 52, logstart 1, i 3),
renaming /var/log/nginx/error.log.2.gz to /var/log/nginx/error.log.3.gz (rotatecount 52, logstart 1, i 2),
renaming /var/log/nginx/error.log.1.gz to /var/log/nginx/error.log.2.gz (rotatecount 52, logstart 1, i 1),
renaming /var/log/nginx/error.log.0.gz to /var/log/nginx/error.log.1.gz (rotatecount 52, logstart 1, i 0),
running prerotate script
running script with arg /var/log/nginx/*.log : "
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi \
"
renaming /var/log/nginx/access.log to /var/log/nginx/access.log.1
creating new /var/log/nginx/access.log mode = 0640 uid = 33 gid = 4
renaming /var/log/nginx/error.log to /var/log/nginx/error.log.1
creating new /var/log/nginx/error.log mode = 0640 uid = 33 gid = 4
running postrotate script
running script with arg /var/log/nginx/*.log : "
[ -s /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
"
removing old log /var/log/nginx/access.log.53.gz
error: error opening /var/log/nginx/access.log.53.gz: No such file or directory However, there are only archives up to about *.log.8.gz, so logrotate fails when it tries to interact with /var/log/nginx/access.log.53.gz . Why on earth is it trying to do that? I suppose I need to touch fake files to fill it out? This seems wrong somehow. | Bah, I finally found the answer after a long time digging. The problem, in my case wasn't that logrotate was failing. That error message is fine, and doesn't actually stop logrotate. The problem was that nginx was not releasing the file handle to the log file upon receiving the -USR1 signal from kill . Long story short, the reason it was not reloading the log files was because the /var/log/nginx folder was not owned by the same user as the nginx worker processes (owned by www-data, running under web). I have no idea how that changed (perhaps because this server was remade recently), but changing the folder to be owned by the same user as the nginx worker processes (and fixing the logrotate file to make new logs as web) fixed the issue. | {
"source": [
"https://serverfault.com/questions/639128",
"https://serverfault.com",
"https://serverfault.com/users/43645/"
]
} |
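A sketch of the fix described in the answer above, assuming (as there) that the nginx workers run as a user called web while the log directory was owned by www-data:
chown web:web /var/log/nginx                      # hand the log directory to the worker user
# in /etc/logrotate.d/nginx, have new files created for that user as well:
#   create 0640 web adm
kill -USR1 "$(cat /run/nginx.pid)"                # confirm nginx now reopens its logs on USR1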
639,537 | I'm just getting up to speed on AWS and had a question about using an existing EBS volume as a boot device for an EC2 instance. It looks like a lot of the instances create an EBS volume for their boot devices. In the situation where the EBS volume has been setup so that it is not deleted when the instance is terminated, is it possible to use that EBS volume as the boot/root device for a new instance? For example say I have an instance using an EBS volume as the root device that is running on a hypervisor that crashes. Can I boot another instance using that EBS volume? I can see that you could take a snapshot of the EBS volume and then create an AMI from that snapshot. So I guess that is one way to get it back, but I was curious if there was a more direct way? I realize that ideally instances are throw away, but I'm just curious from a learning PoV. Thanks,
Joe | EBS volumes can be attached to and detached from EC2 instances. If you have an EC2 instance that crashes for some reason, you can move the root volume to another EC2 instance. Launch a new EC2 instance. Stop that EC2 instance. Detach the root volume from the new instance. Make note of the device name that it was attached as (such as /dev/sda1). Detach the root volume from the original instance. Attach the root volume from the original instance to the new instance, using the same device name (such as /dev/sda1). Start your new instance. Technically, it can be done. However, you may encounter the same problem that you had with the original EC2 instance since you're booting from the original root volume. Another thing you can do is to attach the original root volume as a non-root volume on your new EC2 instance, such as /dev/sdb1. If you do this, you can examine the data on the volume to determine the cause of the crash and perhaps fix it. One more thing: while you can make an AMI image out of an EBS snapshot, you can also make AMI images directly from the EC2 instance instead. As a process, this is often simpler. | {
"source": [
"https://serverfault.com/questions/639537",
"https://serverfault.com",
"https://serverfault.com/users/250765/"
]
} |
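The detach/attach flow above can also be scripted with the AWS CLI; a sketch in which the volume IDs, instance ID, and device name are placeholders (the original instance must also be stopped before its root volume can be detached):
aws ec2 stop-instances --instance-ids i-0newinstance
aws ec2 detach-volume --volume-id vol-0newroot
aws ec2 detach-volume --volume-id vol-0oldroot
aws ec2 attach-volume --volume-id vol-0oldroot --instance-id i-0newinstance --device /dev/sda1
aws ec2 start-instances --instance-ids i-0newinstance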
639,891 | I changed my nameserver and host company for my domain 30 hours ago. Now, DNS propagation checks indicate that the correct nameserver is recognized worldwide. However, browsers on my own machines produce the old site. I tried multiple browsers and multiple devices (Ubuntu and Android), including some that never accessed the site, to make sure that the problem is not caused by DNS caching in the browser or in the machines. Using Hola or Tor as a proxy from other countries, I correctly get the new site. More strangely, some of the browsers occasionally shift between producing the one site or the other. I suspect my ISP's DNS is giving crazy results, but how could I diagnose that? Also, strangely, monitor.us is showing the site as going down, then up, several times a day, when as far as I can tell that is not happening. (It is a basic Wordpress site with, for now, no traffic.) That would suggest that monitor.us is also getting strange DNS values. How can I diagnose this? | The output of dig any joshuafox.com shows that the TTL for your domain is 604800 seconds, or one week. That is an unusually high value and you might want to change it. Expect your new configuration to be fully propagated by the end of the week. | {
"source": [
"https://serverfault.com/questions/639891",
"https://serverfault.com",
"https://serverfault.com/users/58862/"
]
} |
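To see what a particular resolver is actually handing out (and how much cached TTL it has left), it helps to query the ISP resolver and a public one side by side; the domain is the one from the answer above:
dig +noall +answer joshuafox.com A              # uses the resolvers from /etc/resolv.conf
dig +noall +answer joshuafox.com A @8.8.8.8     # asks Google's public resolver directly
dig +noall +answer joshuafox.com NS @8.8.8.8    # confirm which nameservers are being returned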
640,130 | I have an Ansible play for PGBouncer that displays some output from a stats module built into PGBouncer. My issue is that when Ansible prints the output to the terminal, it mangles the newlines. Instead of seeing ----------
| OUTPUT |
---------- I see ----------\n| OUTPUT |\n---------- Does anyone know how to get Ansible to "pretty print" the output? | If you want more human-friendly output, define: ANSIBLE_STDOUT_CALLBACK=debug This will make Ansible use the debug output module (previously named human_log ) which, despite its unfortunate name, is less verbose and much easier for humans to read. If you get an error that this module is not available, upgrade Ansible, or add this module locally if you cannot upgrade; it will work with older versions of Ansible like 2.0 or probably even 1.9. Another option to configure this is to add stdout_callback = debug to your ansible.cfg | {
"source": [
"https://serverfault.com/questions/640130",
"https://serverfault.com",
"https://serverfault.com/users/10407/"
]
} |
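A minimal sketch of the two ways mentioned above to switch the callback (the playbook name is a placeholder):
ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook site.yml    # one-off, via environment variable
# or permanently in ansible.cfg:
#   [defaults]
#   stdout_callback = debug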
640,976 | I keep getting this error in nginx/error.log and it's driving me nuts: 8096 worker_connections exceed open file resource limit: 1024 I've tried everything I can think of and can't figure out what is limiting nginx here. Can you tell me what I am missing? nginx.conf has this: worker_processes 4;
events {
worker_connections 8096;
multi_accept on;
use epoll;
} I changed my system's Ulimit in security/limits.conf like this: # This is added for Open File Limit Increase
* hard nofile 199680
* soft nofile 65535
root hard nofile 65536
root soft nofile 32768
# This is added for Nginx User
nginx hard nofile 199680
nginx soft nofile 65535 It was still showing the error. So I also tried editing /etc/default/nginx and added this line: ULIMIT="-n 65535" It is still showing the same error. Can't figure out what is limiting the nginx worker connection to only 1024. Can you point me out? I've got Debian 7 + nginx | Set worker_rlimit_nofile 65535; in nginx.conf within the main context. | {
"source": [
"https://serverfault.com/questions/640976",
"https://serverfault.com",
"https://serverfault.com/users/247421/"
]
} |
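Applied to the configuration in the question, the directive belongs in the main context, outside the events block; a sketch:
worker_processes 4;
worker_rlimit_nofile 65535;
events {
    worker_connections 8096;
    multi_accept on;
    use epoll;
}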
641,264 | I'm trying to add a file to a Docker image built from the official tomcat image. That image does not seem to have root rights, as I'm logged in as user tomcat if I run bash: docker run -it tomcat /bin/bash
tomcat@06359f7cc4db:/usr/local/tomcat$ If I instruct a Dockerfile to copy a file to that container, the file has permissions 644 and the owner is root . As far as I understand, that seems to be reasonable as all commands in the Dockerfile are run as root. However, if I try to change ownership of that file to tomcat:tomcat , I get an Operation not permitted error. Why can't I change the permissions of a file copied to that image? How it can be reproduced: mkdir docker-addfilepermission
cd docker-addfilepermission
touch test.txt
echo 'FROM tomcat
COPY test.txt /usr/local/tomcat/webapps/
RUN chown tomcat:tomcat /usr/local/tomcat/webapps/test.txt' > Dockerfile
docker build . The output of docker build . : Sending build context to Docker daemon 3.072 kB
Sending build context to Docker daemon
Step 0 : FROM tomcat
---> 44859847ef64
Step 1 : COPY test.txt /usr/local/tomcat/webapps/
---> Using cache
---> a2ccb92480a4
Step 2 : RUN chown tomcat:tomcat /usr/local/tomcat/webapps/test.txt
---> Running in 208e7ff0ec8f
chown: changing ownership of '/usr/local/tomcat/webapps/test.txt': Operation not permitted
2014/11/01 00:30:33 The command [/bin/sh -c chown tomcat:tomcat /usr/local/tomcat/webapps/test.txt] returned a non-zero code: 1 | There is likely a way to view and change the Dockerfile for tomcat, but I can't figure it out after a few minutes. My inelegant solution is to add this line before the chown: USER root If you want to de-elevate the privileges after (which is recommended) you could add this line: USER tomcat Alternately, work with an image that has no software installed so you can begin your Dockerfile as root and install tomcat and all that. It's actually odd they change that in their image from my experience. It makes sense to allow the intended end user to set the USER directive as they see fit. | {
"source": [
"https://serverfault.com/questions/641264",
"https://serverfault.com",
"https://serverfault.com/users/212724/"
]
} |
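Putting that workaround into the reproduction case from the question gives a Dockerfile along these lines; dropping back to the tomcat user afterwards is optional but keeps the container unprivileged at runtime:
FROM tomcat
COPY test.txt /usr/local/tomcat/webapps/
USER root
RUN chown tomcat:tomcat /usr/local/tomcat/webapps/test.txt
USER tomcat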
641,266 | I have www.mydomain.com pointed at an Azure Website. www.mydomain.com --- CNAME --- mydomain.azurewebsites.net When I visit www.mydomain.com , everything works fine. This is good. Problem is, mydomain.com doesn't work. Azure only allows the www subdomain. In some nameservers, I use a FWD Record to forward the root to the www, and this works fine. My current name server (zoneedit.com) does not have this FWD Record. Is there a DNS Record that we can use to forward the root domain to the www subdomain? | Unfortunately, this is a well-known shortcoming of the DNS protocol. There is no record type defined within the DNS standards that will allow you to alias the apex of a domain. Many people assume that CNAME records can be used to accomplish this but there are technical reasons why they cannot . Many DNS providers implement custom (read: fake) DNS record types to try and address this shortcoming. Behind the scenes, these fake records implement custom behavior in that company's software using a combination of synthesized A records and webserver redirection to accomplish your desired goal. FWD is one of these, much like the WebForward that Michael directed you to in the comments. | {
"source": [
"https://serverfault.com/questions/641266",
"https://serverfault.com",
"https://serverfault.com/users/179158/"
]
} |
641,453 | I created a VM via Bitnami in Google Compute Engine. Previously, I was able to ssh via the Bitnami web interface. I tried to ssh via the terminal on my Mac but kept getting the Permission denied (publickey) error. I then deleted all keys on the server and my Mac and downloaded the pem file from Bitnami and used the -i option to connect, but the problem still persists. ssh -i bitnami-gce.pem [email protected] -v Complete debug info: OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: Connecting to 1xx.1xx.5x.1xx [1xx.1xx.5x.1xx] port 22.
debug1: Connection established.
debug1: identity file bitnami-gce.pem type -1
debug1: identity file bitnami-gce.pem-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Debian-4~bpo70+1
debug1: match: OpenSSH_6.6.1p1 Debian-4~bpo70+1 pat OpenSSH*
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr [email protected] none
debug1: kex: client->server aes128-ctr [email protected] none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA <RSA KEY>
debug1: Host '1xx.1xx.5x.1xx' is known and matches the RSA host key.
debug1: Found key in /Users/xxx/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: bitnami-gce.pem
debug1: read PEM private key done: type RSA
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey). I am unable to ssh to the host. So can't send any keys to server now. How to resolve this? Edit: I tried to ssh via Google web console and I could do it. Can anyone tell me the exact steps to ssh from anywhere? I prefer the simple username and password way, how to configure it that way? | After I was able to ssh via Google web console, I did the following steps to resolve this: Generate ssh key using ssh-keygen Copy the key.pub file contents Append the contents to ~/.ssh/authorized_keys file sudo nano ~/.ssh/authorized_keys | {
"source": [
"https://serverfault.com/questions/641453",
"https://serverfault.com",
"https://serverfault.com/users/195536/"
]
} |
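A sketch of those steps end to end; the key file name, user, and instance address are placeholders:
# on the laptop
ssh-keygen -t rsa -f ~/.ssh/gce_key
cat ~/.ssh/gce_key.pub                       # copy this single line
# on the instance, inside the Google web console SSH session
echo "ssh-rsa AAAA... user@laptop" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# back on the laptop
ssh -i ~/.ssh/gce_key user@<instance-public-ip>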
641,726 | The Mean Time Between Failures , or MTBF, for this SSD is listed as 1,500,000 hours. That is a lot of hours. 1,500,000 hours is roughly 170 years. Since the invention of this particular SSD is post-Civil War, how do they know what the MTBF is? A couple of options that make sense to me: Newegg just has a typo The definition of mean time between failures is not what I think it is They are using some type of statistical extrapolation to estimate what the MTBF would be Question: How is the Mean Time Between Failures (MTBF) obtained for SSD/HDDs? | Drive manufacturers specify the reliability of their products in terms of two related metrics: the annualized failure rate (AFR), which is the percentage of disk drives in a population that fail in a test scaled to a per-year estimate; and the mean time to failure (MTTF). The AFR of a new product is typically estimated based on accelerated life and stress tests or based on field data from earlier products. The MTTF is estimated as the number of power on hours per year divided by the AFR. A common assumption for drives in servers is that they are powered on 100% of the time. http://www.cs.cmu.edu/~bianca/fast/ MTTF of 1.5 million hours sounds somewhat plausible. That would roughly be a test with 1000 drives running for 6 months and 3 drives failing. The AFR would be (2 × 3 failures over 6 months)/(1000 drives) = 0.6% annually, and the MTTF = 1 yr/0.6% = 1,460,967 hours or 167 years. A different way to look at that number is when you have 167 drives and leave them running for a year the manufacturer claims that on average you'll see one drive fail. But I expect that is simply the constant "random" mechanical/electronic failure rate. Assuming that failure rates follow the bathtub curve , as mentioned in the comments,
the manufacturer's marketing team can massage the reliability numbers a bit, for instance by not including DOAs (dead on arrival, units that passed quality control but fail when the end-user installs them) and stretching the DOA definition to also exclude those in the early failure spike. And because testing isn't performed long enough you won't see age effects either. I think the warranty period is a better indication for how long a manufacturer really expects an SSD to last! That definitely won't be measured in decades or centuries... Associated with the MTBF is the reliability associated with the finite number of write cycles NAND cells can support. A common metric is the total write capacity, usually in TB. In addition to other performance requirements that is one big limiter. To allow a more convenient comparison between different makes and differently sized drives the write endurance is often converted to daily write capacity as a fraction of the disk capacity. Assuming that a drive is rated to live as long as it's under warranty: a 100 GB SSD may have a 3 year warranty and a write
capacity 50 TB: 50 TB
--------------------- = 0.46 drive per day write capacity.
3 * 365 days * 100 GB The higher that number, the more suited the disk is for write intensive IO. At the moment (end of 2014) value server line SSD's have a value of 0.3-0.8 drive/day, mid-range is increasing steadily from 1-5 and high-end seems to sky-rocket with write endurance levels of up to 25 * the drive capacity per day for 3-5 years. Some real world tests show that sometimes the vendor claims can be massively exceeded, but driving equipment way past the vendor limits isn't always an enterprise consideration... Instead buy correctly spec'd drives for your purposes. | {
"source": [
"https://serverfault.com/questions/641726",
"https://serverfault.com",
"https://serverfault.com/users/205338/"
]
} |
641,728 | I'm pretty new to linux, and I have been struggling with a problem for the past week or so... I am trying to set up a cluster of LXC containers on a workstation (host) which has IP 192.168.10.33 and connects to a gateway with IP 192.168.10.1. The LXC nodes are by default connected to the lxcbr0 bridge with IP 10.0.3.1, and the containers have IPs between 10.0.3.111 and 10.0.3.120. I can ping each container from the host, and I can ping the bridge (10.0.3.1) from the containers, as well as the host IP (eth0, 192.168.10.33), but I can't reach the gateway (192.168.10.1).
I have read a multitude of posts and man pages about networking, iptables and routing, but nothing has worked so far (defining default gw, ip forwarding...) If i configure the lxcbr0 bridge to be at 192.168.10.33 (the host IP) and my containers to take IP on the same IP range (192.168.10.111 to 120), then it works fine. I would like to understand how I am supposed to bridge 2 networks with different IP ranges as mentioned (bridging 192.168.10.0/24 with 10.0.3.0/24) ??? (as a disclaimer, i disabled firewall and anything that could prevent reaching the gateway in the first place, i can reach it from the host) any insight to point me in the right direction would be appreciated.
Thank you | Drive manufacturers specify the reliability of their products in terms of two related metrics: the annualized failure rate (AFR), which is the percentage of disk drives in a population that fail in a test scaled to a per year estimation; and the mean time to failure (MTTF). The AFR of a new product is typically estimated based on accelerated life and stress tests or based on field data from earlier products. The MTTF is estimated as the number of power on hours per year divided by the AFR. A common assumption for drives in servers is that they are powered on 100% of the time. http://www.cs.cmu.edu/~bianca/fast/ MTTF of 1.5 million hours sounds somewhat plausible. That would roughly be a test with 1000 drives running for 6 months and 3 drives failing. The AFR would be (2* 6 months * 3)/(1000 drives)=0.6% annually and the MTTF = 1yr/0.6%=1,460,967 hours or 167 years. A different way to look at that number is when you have 167 drives and leave them running for a year the manufacturer claims that on average you'll see one drive fail. But I expect that is simply the constant "random" mechanical/electronic failure rate. Assuming that failure rates follow the bathtub curve , as mentioned in the comments,
the manufacturer's marketing team can massage the reliability numbers a bit, for instance by not including DOAs (dead on arrival, units that passed quality control but fail when the end-user installs them) and stretching the DOA definition to also exclude those in the early failure spike. And because testing isn't performed long enough you won't see age effects either. I think the warranty period is a better indication for how long a manufacturer really expects an SSD to last! That definitely won't be measured in decades or centuries... Associated with the MTBF is the reliability associated with the finite number of write cycles NAND cells can support. A common metric is the total write capacity, usually in TB. In addition to other performance requirements that is one big limiter. To allow a more convenient comparison between different makes and differently sized drives the write endurance is often converted to daily write capacity as a fraction of the disk capacity. Assuming that a drive is rated to live as long as it's under warranty: a 100 GB SSD may have a 3 year warranty and a write
capacity 50 TB: 50 TB
--------------------- = 0.46 drive per day write capacity.
3 * 365 days * 100 GB The higher that number, the more suited the disk is for write intensive IO. At the moment (end of 2014) value server line SSD's have a value of 0.3-0.8 drive/day, mid-range is increasing steadily from 1-5 and high-end seems to sky-rocket with write endurance levels of up to 25 * the drive capacity per day for 3-5 years. Some real world tests show that sometimes the vendor claims can be massively exceeded, but driving equipment way past the vendor limits isn't always an enterprise consideration... Instead buy correctly spec'd drives for your purposes. | {
"source": [
"https://serverfault.com/questions/641728",
"https://serverfault.com",
"https://serverfault.com/users/252130/"
]
} |
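For the routing question itself (containers on 10.0.3.0/24 reaching 192.168.10.1 and beyond), the two ranges are normally joined by NAT on the host rather than by bridging them together; a minimal sketch of the pieces to check, assuming eth0 is the host's interface on 192.168.10.0/24:
sysctl -w net.ipv4.ip_forward=1                                                       # on the host
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -o eth0 -j MASQUERADE  # masquerade container traffic
ip route add default via 10.0.3.1                                                     # inside a container, if no default route exists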
641,899 | I recently checked one of our redis processes to see what ulimits were applied, using: cat /proc/<redis-pid>/limits And was surprised to learn that it was at the low default value: Limit Soft Limit Hard Limit
Max open files 4016 4016 I was surprised, because we have the following configured: # /etc/sysctl.conf
fs.file-max = 100000 . # /etc/security/limits.conf
* soft nofile 100000
* hard nofile 100000 . # /etc/ssh/sshd_config
UsePAM yes . # /etc/pam.d/sshd
session required pam_limits.so Can anyone tell me why the increased ulimit is not being applied to the running redis process? The redis process is running as the user 'redis', and the server has been rebooted since the limits were increased. We are on Debian Squeeze. | In Linux, resource limits can be set in various locations based on the type of requirement: the /etc/security/limits.conf file, the /etc/sysctl.conf file, and the ulimit command. /etc/security/limits.conf is part of pam_limits, so the limits set in this file are read by the pam_limits module during login sessions. The login session can be over ssh or through a terminal. pam_limits will not affect daemon processes, as mentioned here . /etc/sysctl.conf is a system-wide global configuration; we cannot set user-specific configuration there. It sets the maximum amount of a resource that can be used by all users/processes put together. The ulimit command is used to set the limits of the shell. So when a limit is set with ulimit in a shell, any process spawned from that shell gets the same value, because a child process inherits its parent process's properties. For your case, since redis is started as part of init, none of the above will help you directly. The proper way of doing this is to use the ulimit command to set the new value in the init script itself, like below in the script:
if start-stop-daemon --start --quiet --umask 007 --pidfile $PIDFILE --chuid redis:redis --exec $DAEMON -- $DAEMON_ARGS. There is already a bug filed in wishlist to add ulimit feature to start-stop-daemon . Also check in redis configuration if there is any way of providing limits there. | {
"source": [
"https://serverfault.com/questions/641899",
"https://serverfault.com",
"https://serverfault.com/users/52811/"
]
} |
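After adding the ulimit line to the init script and restarting, the effective limit can be confirmed the same way as in the question; the service and process names are assumptions:
service redis-server restart
grep "Max open files" /proc/"$(pgrep -x redis-server | head -n1)"/limits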
642,315 | I was reading some TechNet articles as well as this one regarding the differences between the way VMware and Hyper-V do CPU scheduling. I was wondering if I could get some objective info on this. It would seem that the gang scheduling used by VMware is a HUGE disadvantage, but I don't want to just drink the Kool-Aid. Does it seriously impact performance, or do the latest iterations of VMware's hypervisors resolve this? Edit: When I say disadvantage I mean relative to Hyper-V's "free processor scheduling" or however KVM does it. The material I was reading didn't say there were any problems with "free processor scheduling" that are avoided with gang scheduling. | Like chanting Bloody Mary into a darkly-lit bathroom mirror, let's see if we can get Jake Oshins to show up... Gang scheduling is also referred to as co-scheduling. I think VMware prefers the term co-scheduling to gang scheduling. In ESX versions prior to version 3.x, VMware used "strict" co-scheduling, which had the synchronization drawbacks. In ESX 3.x and above, VMware switched to "relaxed" co-scheduling. Relaxed co-scheduling replaced the strict co-scheduling in ESX 3.x and
has been refined in subsequent releases to achieve better CPU
utilization and to support wide multiprocessor virtual machines.
Relaxed co-scheduling has a few distinctive properties compared to the
strict co-scheduling algorithm. Most important of all, while in the
strict co-scheduling algorithm, the existence of a lagging vCPU causes
the entire virtual machine to be co-stopped. In the relaxed
co-scheduling algorithm, a leading vCPU decides whether it should
co-stop itself based on the skew against the slowest sibling vCPU. If
the skew is greater than a threshold, the leading vCPU co-stops
itself. Note that a lagging vCPU is one that makes significantly less
progress than the fastest sibling vCPU, while a leading vCPU is one
that makes significantly more progress than the slowest sibling vCPU.
By tracking the slowest sibling vCPU, it is now possible for each vCPU
to make its own co-scheduling decision independently. Like co-stop,
the co-start decision is also made individually. Once the slowest
sibling vCPU starts progressing, the co-stopped vCPUs are eligible to
co-start and can be scheduled depending on pCPU availability. This
solves the CPU fragmentation problem in the strict co-scheduling
algorithm by not requiring a group of vCPUs to be scheduled together.
In the previous example of the 4- vCPU virtual machine, the virtual
machine can make forward progress even if there is only one idle pCPU
available. This significantly improves CPU utilization. The above snippet is from VMware's own documentation . So VMware is not using strict gang scheduling anymore. I would treat documentation directly from the vendor as being more authoritative. The only thing that will give you hard numbers is a benchmark, and it will be entirely dependent on the kinds of code that the CPUs are running. But I can tell you that if VMware was at such a disadvantage, then they would not still have lion's share of the virtualization market. | {
"source": [
"https://serverfault.com/questions/642315",
"https://serverfault.com",
"https://serverfault.com/users/152514/"
]
} |
642,327 | In the openvpn server's config file I have the line server 192.168.20.0 255.255.255.0 . This will cause tun0 to be created with inet addr:192.168.20.**1** P-t-P:192.168.20.**2** Mask:255.255.255.255 What do I need to do to change that into inet addr:192.168.20.**11** P-t-P:192.168.20.**12** Mask:255.255.255.255 I tried adding ifconfig 192.168.20.11 192.168.20.12 to the config file, but this doesn't work. Solution Following Zoredache's advice (RTFM 'man openvpn') Instead of using server 192.168.20.0 255.255.255.0 I'm now using mode server
tls-server
ifconfig 192.168.20.11 192.168.20.12
ifconfig-pool 192.168.20.4 192.168.20.8
route 192.168.20.0 255.255.255.0
push "route 192.168.20.0 255.255.255.0" and in the client's ccd file ifconfig-push 192.168.20.1 192.168.20.2 This works. | Like chanting Bloody Mary into a darkly-lit bathroom mirror, let's see if we can get Jake Oshins to show up... Gang scheduling is also referred to as co-scheduling. I think VMware prefers the term co-scheduling to gang scheduling. In ESX versions prior to version 3.x, VMware used "strict" co-scheduling, which had the synchronization drawbacks. In ESX 3.x and above, VMware switched to "relaxed" co-scheduling. Relaxed co-scheduling replaced the strict co-scheduling in ESX 3.x and
has been refined in subsequent releases to achieve better CPU
utilization and to support wide multiprocessor virtual machines.
Relaxed co-scheduling has a few distinctive properties compared to the
strict co-scheduling algorithm. Most important of all, while in the
strict co-scheduling algorithm, the existence of a lagging vCPU causes
the entire virtual machine to be co-stopped. In the relaxed
co-scheduling algorithm, a leading vCPU decides whether it should
co-stop itself based on the skew against the slowest sibling vCPU. If
the skew is greater than a threshold, the leading vCPU co-stops
itself. Note that a lagging vCPU is one that makes significantly less
progress than the fastest sibling vCPU, while a leading vCPU is one
that makes significantly more progress than the slowest sibling vCPU.
By tracking the slowest sibling vCPU, it is now possible for each vCPU
to make its own co-scheduling decision independently. Like co-stop,
the co-start decision is also made individually. Once the slowest
sibling vCPU starts progressing, the co-stopped vCPUs are eligible to
co-start and can be scheduled depending on pCPU availability. This
solves the CPU fragmentation problem in the strict co-scheduling
algorithm by not requiring a group of vCPUs to be scheduled together.
In the previous example of the 4- vCPU virtual machine, the virtual
machine can make forward progress even if there is only one idle pCPU
available. This significantly improves CPU utilization. The above snippet is from VMware's own documentation . So VMware is not using strict gang scheduling anymore. I would treat documentation directly from the vendor as being more authoritative. The only thing that will give you hard numbers is a benchmark, and it will be entirely dependent on the kinds of code that the CPUs are running. But I can tell you that if VMware was at such a disadvantage, then they would not still have lion's share of the virtualization market. | {
"source": [
"https://serverfault.com/questions/642327",
"https://serverfault.com",
"https://serverfault.com/users/112823/"
]
} |
642,981 | I'm running into a problem with my Docker containers on Ubuntu 14.04 LTS.
Docker worked fine for two days, and then suddenly I lost all network connectivity inside my containers. The error output below initially led me to believe it was because apt-get is trying to resolve the DNS via IPv6. I disabled IPv6 on my host machine, removed all images, pulled the base ubuntu image, and still ran into the problem. I changed my /etc/resolv.conf nameservers from my local DNS server to Google's public DNS servers (8.8.8.8 and 8.8.4.4) and still have no luck. I also set the DNS to Google in the DOCKER_OPTS of /etc/default/docker and restarted docker. I also tried pulling coreos, and yum could not resolve DNS either. It's weird because while DNS does not work, I still get a response when I ping the same update servers that apt-get can't resolve. I'm not behind a proxy, I'm on a very standard local network, and this version of Ubuntu is up to date and fresh (I installed it two days ago to be closer to docker). I've thoroughly researched this through other posts on stackoverflow and github issues, but haven't found any resolution. I'm out of ideas as to how to solve this problem, can anyone help? Error Message ➜ arthouse git:(docker) ✗ docker build --no-cache .
Sending build context to Docker daemon 51.03 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:14.04
---> 5506de2b643b
Step 1 : RUN apt-get update
---> Running in 845ae6abd1e0
Err http://archive.ubuntu.com trusty InRelease
Err http://archive.ubuntu.com trusty-updates InRelease
Err http://archive.ubuntu.com trusty-security InRelease
Err http://archive.ubuntu.com trusty-proposed InRelease
Err http://archive.ubuntu.com trusty Release.gpg
Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-updates Release.gpg
Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-security Release.gpg
Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-proposed Release.gpg
Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
Reading package lists...
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-proposed/InRelease
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty-proposed/Release.gpg Cannot initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
W: Some index files failed to download. They have been ignored, or old ones used instead. Container IFCONFIG/PING ➜ code docker run -it ubuntu /bin/bash
root@7bc182bf87bb:/# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:04
inet addr:172.17.0.4 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:4/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:7 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:738 (738.0 B) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@7bc182bf87bb:/# ping google.com
PING google.com (74.125.226.0) 56(84) bytes of data.
64 bytes from lga15s42-in-f0.1e100.net (74.125.226.0): icmp_seq=1 ttl=56 time=12.3 ms
--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 12.367/12.367/12.367/0.000 ms
root@7bc182bf87bb:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=44 time=21.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=44 time=21.7 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=44 time=21.7 ms Also, apt-get update fails when I force IPv4: root@6d925cdf84ad:/# sudo apt-get update -o Acquire::ForceIPv4=true
Err http://archive.ubuntu.com trusty InRelease
Err http://archive.ubuntu.com trusty-updates InRelease
Err http://archive.ubuntu.com trusty-security InRelease
Err http://archive.ubuntu.com trusty-proposed InRelease
Err http://archive.ubuntu.com trusty Release.gpg
Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.153 80]
Err http://archive.ubuntu.com trusty-updates Release.gpg
Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.153 80]
Err http://archive.ubuntu.com trusty-security Release.gpg
Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.153 80]
Err http://archive.ubuntu.com trusty-proposed Release.gpg
Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.153 80]
Reading package lists... Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease | Woo, I found a post on github that solved my problem. After Steve K. pointed out that it wasn't actually a DNS issue and was a connectivity issue, I was able to find a post on github that described how to fix this problem. Apparently the docker0 network bridge was hung up. Installing bridge-utils and running the following got my Docker in working order: apt-get install bridge-utils
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
service docker restart | {
"source": [
"https://serverfault.com/questions/642981",
"https://serverfault.com",
"https://serverfault.com/users/130312/"
]
} |
643,647 | I have created a VPC in AWS with a public subnet and a private subnet. The private subnet does not have direct access to the external network. So, there is a NAT server in the public subnet which forwards all outbound traffic from the private subnet to the outside network. Currently, I can SSH from the public subnet to the private subnet, and also SSH from the NAT to the private subnet.
However, what I want is SSH from any machine (home laptop, office machine and mobile) to instances in the private subnet. From some research I have done, I should be able to set up the NAT box to forward SSH to instances in the private subnet, but I have had no luck with this. Can anyone list what I need to set up to make this possible? Naming: laptop (any device outside the VPC), nat (the NAT server in the public subnet), destination (the server in the private subnet which I want to connect to). Not sure whether the following are limitations or not: The "destination" does not have a public IP, only a subnet IP, for example 10.0.0.1
The "destination" cannot connect to "nat" via nat's public IP.
There are several "destination" servers, do I need to setup one for each? Thanks | You can set up a bastion host to connect to any instance within your VPC: http://blogs.aws.amazon.com/security/post/Tx3N8GFK85UN1G6/Securely-connect-to-Linux-instances-running-in-a-private-Amazon-VPC You can choose to launch a new instance that will function as a bastion host, or use your existing NAT instance as a bastion. If you create a new instance, as an overview, you will: 1) create a security group for your bastion host that will allow SSH access from your laptop (note this security group for step 4) 2) launch a separate instance (bastion) in a public subnet in your VPC 3) give that bastion host a public IP either at launch or by assigning an Elastic IP 4) update the security groups of each of your instances that don't have a public IP to allow SSH access from the bastion host. This can be done using the bastion host's security group ID (sg-#####). 5) use SSH agent forwarding (ssh -A user@publicIPofBastion) to connect first to the bastion, and then once in the bastion,SSH into any internal instance (ssh user@private-IP-of-Internal-Instance). Agent forwarding takes care of forwarding your private key so it doesn't have to be stored on the bastion instance ( never store private keys on any instance!! ) The AWS blog post above should be able to provide some nitty gritty regarding the process. I've also included the below in case you wanted extra details about bastion hosts: Concept of Bastion Hosts: http://en.m.wikipedia.org/wiki/Bastion_host If you need clarification, feel free to comment. | {
"source": [
"https://serverfault.com/questions/643647",
"https://serverfault.com",
"https://serverfault.com/users/87216/"
]
} |
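Rather than typing ssh -A for every hop, the bastion can be captured in ~/.ssh/config on the laptop; the addresses, user, and key path below are made-up placeholders:
Host bastion
    HostName 203.0.113.10                 # public IP of the bastion/NAT instance
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem
Host 10.0.*
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem
    ProxyCommand ssh -W %h:%p bastion
With that in place, ssh 10.0.0.5 from the laptop tunnels transparently through the bastion.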
644,082 | I maintain a flock of EC2 servers with ansible. The servers are regularly updated and upgraded using the apt module . When I manually tried to upgrade a server, I received the following message: $ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
linux-headers-3.13.0-29 linux-headers-3.13.0-29-generic
linux-headers-3.13.0-32 linux-headers-3.13.0-32-generic
linux-image-3.13.0-29-generic linux-image-3.13.0-32-generic
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Is there a way to run sudo apt-get autoremove with ansible? | Support for the apt-get option --auto-remove is now built into Ansible's apt (option autoremove ) as of version 2.1 Official documentation is at http://docs.ansible.com/ansible/apt_module.html - name: Remove dependencies that are no longer required
apt:
autoremove: yes The merge happened here . Note that autoclean is also available as of 2.4 | {
"source": [
"https://serverfault.com/questions/644082",
"https://serverfault.com",
"https://serverfault.com/users/10904/"
]
} |
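In practice this combines naturally with the routine update/upgrade task; a sketch using the same apt module (the task name and upgrade strategy are arbitrary, and autoremove still needs Ansible 2.1+ as noted above):
- name: Upgrade packages and prune unused dependencies
  apt:
    update_cache: yes
    upgrade: dist
    autoremove: yes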
644,085 | I've managed to set up postfix and dovecot with self-signed certificate on my server. I can send and receive email using telnet command there. Now I want to connect to my mail server from a Thunderbird client on my laptop but it fails and here's the output of /var/log/mail.log : postfix/submission/smtpd[11560]: connect from unknown[95.134.50.75]
postfix/submission/smtpd[11439]: SSL_accept error from unknown[95.134.50.75]: lost connection
postfix/submission/smtpd[11439]: lost connection after CONNECT from unknown[95.134.50.75] Here's a part of /etc/postfix/master.cf that I've changed on setup: # ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
smtp inet n - - - - smtpd
smtps inet n - - - - smtpd
#smtp inet n - - - 1 postscreen
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
submission inet n - - - - smtpd
-o syslog_name=postfix/submission
-o smtpd_tls_wrappermode=yes
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_sasl_type=dovecot
-o smtpd_sasl_path=private/auth And here's my /etc/postfix/main.cf : myhostname = mail.myserver.com
myorigin = /etc/mailname
mydestination = mail.myserver.com, myserver.com, localhost, localhost.localdomain
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
smtpd_tls_cert_file=/etc/ssl/certs/mailcert.pem
smtpd_tls_key_file=/etc/ssl/private/mail.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_tls_protocols = !SSLv2, !SSLv3
smtpd_tls_security_level = may
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtpd_tls_loglevel = 1
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
inet_protocols = all Also, not sure if this can help but both telnet localhost 25 and telnet localhost 465 work on server but only telnet myserver.com 465 works from my laptop, when I try port 25 it says telnet: Unable to connect to remote host: Connection timed out . ufw is inactive on server. What should I do to fix it? | Port 465 is for SMTPS, it uses SSL immediately when establishing the connection and then uses the same SMTP protocol as normally found on port 25 after the secure connection is established. You test from the commandline with: openssl s_client -connect smtp.example.com:465 Using telnet to connect to port 465 will result in an error message in the log files because the SSL protocol isn't used. Just for completeness: to test TLS on the normal SMTP port, TCP/25 openssl s_client -starttls smtp -connect smtp.example.com:25 | {
"source": [
"https://serverfault.com/questions/644085",
"https://serverfault.com",
"https://serverfault.com/users/127952/"
]
} |
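Building on the openssl commands in the answer above, a quick way to check which certificate the server actually presents on each port (the hostname is a placeholder):

# Implicit TLS on 465: print subject, issuer and validity of the served certificate
openssl s_client -connect mail.myserver.com:465 -servername mail.myserver.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates

# STARTTLS on 25 (or on 587, if submission is moved off wrappermode)
openssl s_client -starttls smtp -connect mail.myserver.com:25 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates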
644,180 | I just received a Vagrantfile and a post-install bash script. The Vagrantfile downloads standard Ubuntu from Ubuntu Cloud, but I found something in the bash script. A few lines of the script read as: apt-get update -qq > /dev/null
apt-get -qq -y install apache2 > /dev/null I tried to search the internet for what -qq stands for in a shell script but didn't find any mention of it, so I am asking here if anyone knows what it stands for. AFAIK > /dev/null means the ongoing output is not printed to the screen, and for that it doesn't require the -qq flag. So, I am really curious to know. | The -qq is a flag to apt-get to make it less noisy. -qq No output except for errors You are correct about the >/dev/null. By redirecting all of STDOUT, the -qq becomes redundant. | {
"source": [
"https://serverfault.com/questions/644180",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
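A small illustration of the distinction: -qq only quietens apt-get's normal output, while the redirection discards stdout entirely; in both cases errors still arrive on stderr (the package name is just an example):

# Quiet run: progress and package lists are suppressed, errors still print
apt-get -qq update
apt-get -qq -y install apache2

# Same visible effect via redirection; adding -qq on top changes nothing you can see
apt-get update > /dev/null
apt-get -y install apache2 > /dev/null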
644,306 | I have already re-read the docs on this as well as other posts here and this is still very unclear to me. I have been testing various things to understand the difference between alias_maps and virtual_alias_maps and I don't see the use of these 2 separate settings in postfix. This is what I found so far (Note - I am using postfix in the same server as my web server as null client to send emails only) : 1) /etc/aliases file: root: [email protected] When I add the above to the alias_maps , I noticed that some services like fail2ban are able to pick this and it sends root emails to the alias email addresses mentioned. However, I also noticed that some other services (like mail command) does not respect this and tries to send the email directly to [email protected] which does not exist (I think its the postfix myorigin setting that is adding the @mydomain.com). To fix this I then added the virtual_alias_maps 2) /etc/postfix/virtual root [email protected] When the above is added, all services uses this virtual aliases email. I also noticed that once I add the above, even fail2ban begins to ignore my initial settings in /etc/aliases/ file and starts to follow the email address given in virtual file. Now this has confused me even more - Why do we need /etc/aliases/ when having the email inside virtual aliases map seems to override it? What is the purpose of having these 2 separate aliases mapping and when do we decide when to use what? Why did fail2ban (which is configured to email to root@localhost ) first follow email address given in alias_maps (/etc/aliases/) and later decides to ignore that once virtual_alias_maps was added? Why doesn't all services read email aliases mentioned in /etc/aliases and they only work when the email aliases are added in virtual alias map? I have spend several hours since yesterday and still unsure. Can someone help me clear my confusion? EDIT: This is the mail log when email is sent to root using mail root command. The aliases email for root is mentioned in /etc/aliases/. But mail does not work until I move this root aliases email from aliases_maps to virtual_aliases_maps Log when root email alias is mentioned in /etc/aliases/ : Nov 14 16:39:27 Debian postfix/pickup[4339]: 0F12643432: uid=0 from=<root>
Nov 14 16:39:27 Debian postfix/cleanup[4495]: 0F12643432: message-id=<[email protected]>
Nov 14 16:39:27 Debian postfix/qmgr[4338]: 0F12643432: from=<[email protected]>, size=517, nrcpt=1 (queue active)
Nov 14 16:39:27 Debian postfix/error[4496]: 0F12643432: to=<[email protected]>, orig_to=<root>, relay=none, delay=0.04, delays=0.03/0/0/0.01, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to domainname.com[128.199.147.136]:25: Connection refused) This is the log after the email aliases for root is moved from /etc/aliases/ to /etc/postfix/virtual where the email delivery is successful after the change: Nov 14 16:44:58 Debian postfix/pickup[4545]: ADD9A43436: uid=0 from=<root>
Nov 14 16:44:58 Debian postfix/cleanup[4563]: ADD9A43436: message-id=<[email protected]>
Nov 14 16:44:58 Debian postfix/qmgr[4544]: ADD9A43436: from=<[email protected]>, size=453, nrcpt=1 (queue active)
Nov 14 16:45:00 Debian postfix/smtp[4551]: ADD9A43436: to=<[email protected]>, orig_to=<root>, relay=somesite.com[108.160.157.120]:25, delay=1.9, delays=0.03/0/0.97/0.88, dsn=2.0.0, status=sent (250 OK id=1XpEqC-0002ry-9s)
Nov 14 16:45:00 Debian postfix/qmgr[4544]: ADD9A43436: removed | Some background Postfix inherited some features from older sendmail like milter and aliases. The file /etc/aliases is part of aliases inheritance and implemented by alias_maps . On the other side, postfix has virtual_maps / virtual_alias_maps for handle email aliasing. So what's the difference between them? Parameter alias_maps Used only for local(8) delivery According to address class in postfix , email will delivery by local(8) if the recipient domain names are listed in the mydestination The lookup input was only local parts from full email addres (e.g myuser from [email protected]). It discard domain parts of recipient. The lookup result can contains one or more of the following: email address : email will forwarded to email address /file/name : email will be appended to /file/name |command : mail piped to the command :include:/file/name : include alias from /file/name Parameter virtual_alias_maps Used by virtual(5) delivery Always invoked first time before any other address classes. It doesn't care whether the recipient domain was listed in mydestination , virtual_mailbox_domains or other places. It will override the address/alias defined in other places. The lookup input has some format user@domain : it will match user@domain literally user : it will match user @site when site is equal to $myorigin , when site is listed in $mydestination , or when it is listed in $inet_interfaces or $proxy_interfaces . This functionality overlaps with functionality of the local aliases(5) database. @domain : it will match any email intended for domain regardless of local parts The lookup result must be valid email address user without domain. Postfix will append $myorigin if append_at_myorigin set yes Why do we need /etc/aliases when having the email inside virtual aliases map seems to override it? As you can see above, alias_maps (/etc/aliases) has some additional features (beside forwarding) like piping to command. In contrast with virtual_alias_maps that just forwards emails. What is the purpose of having these 2 separate aliases mapping and when do we decide when to use what? The alias_maps drawback is that you cannot differentiate if the original recipient has [email protected] or [email protected] . Both will be mapped to root entry in alias_maps . In other words, you can define different forwarding address with virtual_alias_maps . Why did fail2ban (which is configured to email to root@localhost) first follow email address given in alias_maps (/etc/aliases/) and later decides to ignore that once virtual_alias_maps was added? Before virtual_alias_maps added : root@localhost was aliased by alias_maps because localhost was listed in mydestination . After virtual_alias_maps defined : The entry root (in virtual_alias_maps) doesn't have domain parts and localhost was listed in mydestination , so it will match root [email protected] . Why doesn't all services read email aliases mentioned in /etc/aliases and they only work when the email aliases are added in virtual alias map? Command mail root will send email to root. Because lacks of domain parts, postfix trivial-rewrite will append myorigin to domain parts. So, mail will be send to root@myorigin . Before virtual_alias_maps added : Unfortunately, myorigin isn't listed in mydestination , so it won't be processed by alias_maps . 
After virtual_alias_maps added : The entry root (in virtual_alias_maps) doesn't have domain parts and myorigin (obviously) same as myorigin , so it will match root [email protected] . | {
"source": [
"https://serverfault.com/questions/644306",
"https://serverfault.com",
"https://serverfault.com/users/247421/"
]
} |
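A few lookup commands that make the behaviour described above easy to verify on the server itself; the table locations are the common Debian defaults and may differ on other systems:

# Which tables and domains postfix is actually configured with
postconf alias_maps virtual_alias_maps mydestination myorigin

# Literal key lookup in the local(8) alias database (local part only)
postalias -q root hash:/etc/aliases

# Literal key lookup in the virtual(5) alias table
postmap -q root hash:/etc/postfix/virtual

# Rebuild the indexed .db files after editing either source file
newaliases
postmap /etc/postfix/virtual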
644,892 | I am trying to optimize my nginx configs, so it would be possible to set one variable, and all location paths would update automatically. I have four lines in question: server_name php.domain.com;
root /srv/web/vhosts/php/web;
error_log /srv/web/vhosts/php/logs/error.log;
access_log /srv/web/vhosts/php/logs/access.log; What I would like to achieve is to set one variable (in this case 'php') and include it to config. set $variable "php";
server_name $variable.domain.com;
root /srv/web/vhosts/$variable/web;
error_log /srv/web/vhosts/$variable/logs/error.log;
access_log /srv/web/vhosts/$variable/logs/access.log; However it seems that nginx ignores variables in this config. Am I doing something wrong, or is it not possible to use variables in location paths? | Variables can't be declared just anywhere, nor used in every directive. The documentation of the set directive says: Syntax: set $variable value;
Default: —
Context: server, location, if The immediate consequence is that you can't use custom variables in an http block. Update: after a discussion and experiments with AlexeyTen in this chatroom, the picture is more nuanced. access_log can contain variables, with restrictions; among them are the lack of buffering and the fact that the leading slash must not come from a variable. error_log won't work with variables at all. The root directive can contain variables. The server_name directive only allows the strict $hostname value as a variable-like notation. | {
"source": [
"https://serverfault.com/questions/644892",
"https://serverfault.com",
"https://serverfault.com/users/254268/"
]
} |
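Given those restrictions, a common workaround is to stop relying on runtime variables and render each vhost file from a template instead; a sketch using envsubst, with placeholder site names and the assumption that your nginx.conf includes /etc/nginx/conf.d/*.conf:

# One-off template with ${VHOST} markers (the single-quoted heredoc keeps them literal)
cat > vhost.template <<'EOF'
server {
    listen      80;
    server_name ${VHOST}.domain.com;
    root        /srv/web/vhosts/${VHOST}/web;
    error_log   /srv/web/vhosts/${VHOST}/logs/error.log;
    access_log  /srv/web/vhosts/${VHOST}/logs/access.log;
}
EOF

# Render one config per site, then reload nginx
for site in php ruby python; do
  VHOST="$site" envsubst '${VHOST}' < vhost.template > "/etc/nginx/conf.d/$site.conf"
done
nginx -t && nginx -s reload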
645,359 | We've got an AWS CloudFormation template for creating some EC2 instances. Some of those, however, require a specific PrivateIpAddress and I'm struggling to figure out how to incorporate that into the template. For now I've got a template parameter PrivateIP and am creating a Condition RequestedPrivateIP. So far so good. However I can't figure out how to incorporate it into the AWS::EC2::Instance resource specification. I tried this: "PrivateIpAddress": {
"Fn::If": [ "RequestedPrivateIP",
{ "Ref": "PrivateIP" },
"" <-- This doesn't work
]
}, But that fails when RequestedPrivateIP is false with CREATE_FAILED AWS::EC2::Instance NodeInstance Invalid addresses: [] Any idea how to optionally assign a static private IP and, if not specified, leave it to AWS to set a dynamic one? | I would change the structure to: "PrivateIpAddress": {
"Fn::If": [ "RequestedPrivateIP",
{ "Ref": "PrivateIP" },
{"Ref" : "AWS::NoValue" }
]
} the AWS::NoValue is there to give you the else option for your if statement. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html | {
"source": [
"https://serverfault.com/questions/645359",
"https://serverfault.com",
"https://serverfault.com/users/122588/"
]
} |
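A hedged sketch of driving both cases from the AWS CLI, assuming the template is saved as node.json and that the RequestedPrivateIP condition tests whether the PrivateIP parameter is non-empty (stack and parameter names are placeholders):

# Pin the instance to a specific private IP
aws cloudformation create-stack \
  --stack-name node-static \
  --template-body file://node.json \
  --parameters ParameterKey=PrivateIP,ParameterValue=10.0.1.50

# Leave the parameter empty: the condition is false, Fn::If resolves to
# AWS::NoValue and the PrivateIpAddress property is omitted entirely
aws cloudformation create-stack \
  --stack-name node-dynamic \
  --template-body file://node.json \
  --parameters ParameterKey=PrivateIP,ParameterValue=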
646,342 | In order to prevent referrer spam, my nginx.conf contains a section like this: if ($http_referer ~* spamdomain1\.com) {
return 444;
}
if ($http_referer ~* spamdomain2\.com) {
return 444;
}
if ($http_referer ~* spamdomain3\.com) {
return 444;
} These rules tell nginx just to close the connection if the user has one of these referrers set. Is there a more elegant way to do this? Can I define a list of these domains and then say something like, “If the referrer is in this list then return 444”? | I would try a map : map $http_referer $bad_referer {
default 0;
"~spamdomain1.com" 1;
"~spamdomain2.com" 1;
"~spamdomain3.com" 1;
} Then use it like so: if ($bad_referer) {
return 444;
} | {
"source": [
"https://serverfault.com/questions/646342",
"https://serverfault.com",
"https://serverfault.com/users/197517/"
]
} |
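The map can be exercised from outside with a forged Referer header; because return 444 makes nginx drop the connection without answering, curl reports an empty reply rather than a status code (the server name is a placeholder):

# Normal request: prints the usual status code, e.g. 200
curl -sS -o /dev/null -w '%{http_code}\n' http://www.example.com/

# Forged spam referrer: nginx closes the connection, curl exits with
# error 52 (empty reply from server) and the status placeholder shows 000
curl -sS -o /dev/null -w '%{http_code}\n' \
     --referer 'http://spamdomain1.com/some-page' http://www.example.com/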
646,578 | I just tried to run a test on my hdd and it doesn't want to complete a self test. Here is the result: smartctl --attributes --log=selftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-32-generic] (local build)
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 697
3 Spin_Up_Time 0x0027 206 160 021 Pre-fail Always - 691
4 Start_Stop_Count 0x0032 074 074 000 Old_age Always - 26734
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 28
9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7432
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 097 097 000 Old_age Always - 3186
191 G-Sense_Error_Rate 0x0032 001 001 000 Old_age Always - 20473
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 84
193 Load_Cycle_Count 0x0032 051 051 000 Old_age Always - 447630
194 Temperature_Celsius 0x0022 113 099 000 Old_age Always - 34
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 16
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed: read failure 90% 7432 92290592
# 2 Conveyance offline Completed: read failure 90% 7432 92290596
# 3 Conveyance offline Completed: read failure 90% 7432 92290592
# 4 Short offline Completed: read failure 90% 7431 92290596
# 5 Extended offline Completed: read failure 90% 7431 92290592 So is this disk failing? | Your drive is very happy to do a self-test; from the summary, it has done more than five of them in the past hour. And all of them have failed, early on in the test, with read errors. Yes, this hard drive is failing. As the famous Google Labs report said (though I can't put my hand on a link to it at the moment), if smartctl says your drive is failing, it probably is (I paraphrase). Edit : don't try to save it. Get all the data off it, and replace it. | {
"source": [
"https://serverfault.com/questions/646578",
"https://serverfault.com",
"https://serverfault.com/users/255453/"
]
} |
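If data still has to come off the failing drive before it is replaced, a sequence along these lines is common; device names and target paths are placeholders, the image must live on a different, healthy disk, and ddrescue here is the GNU tool (gddrescue package on Debian/Ubuntu):

# Re-check the attributes that matter most for a dying disk
smartctl -a /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

# First pass: copy everything that reads easily, skip problem areas, keep a map file
ddrescue -n /dev/sda /mnt/backup/sda.img /mnt/backup/sda.mapfile

# Second pass: go back and retry the bad areas a few times
ddrescue -r3 /dev/sda /mnt/backup/sda.img /mnt/backup/sda.mapfile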
647,231 | I'm trying to set up an OpenVPN Access Server in AWS using the Marketplace AMI, but I'm struggling to connect to it. The access server is up and running. I've also added a user with Auto-Login and generated the relevant client config and certificates. I then copied said files down to my machine and tried to connect using openvpn client.ovpn but got the following output and error: Wed Nov 26 12:41:10 2014 OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Feb 4 2014
Wed Nov 26 12:41:10 2014 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
Wed Nov 26 12:41:10 2014 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Nov 26 12:41:10 2014 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Nov 26 12:41:10 2014 Socket Buffers: R=[212992->200000] S=[212992->200000]
Wed Nov 26 12:41:10 2014 UDPv4 link local: [undef]
Wed Nov 26 12:41:10 2014 UDPv4 link remote: [AF_INET]<REMOVED_IP>:1194
Wed Nov 26 12:41:10 2014 TLS: Initial packet from [AF_INET]<REMOVED_IP>:1194, sid=2a06a918 c4ecc6df
Wed Nov 26 12:41:11 2014 VERIFY OK: depth=1, CN=OpenVPN CA
Wed Nov 26 12:41:11 2014 VERIFY OK: nsCertType=SERVER
Wed Nov 26 12:41:11 2014 VERIFY OK: depth=0, CN=OpenVPN Server
Wed Nov 26 12:41:11 2014 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Wed Nov 26 12:41:11 2014 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Nov 26 12:41:11 2014 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Wed Nov 26 12:41:11 2014 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Nov 26 12:41:11 2014 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA
Wed Nov 26 12:41:11 2014 [OpenVPN Server] Peer Connection Initiated with [AF_INET]54.173.232.46:1194
Wed Nov 26 12:41:14 2014 SENT CONTROL [OpenVPN Server]: 'PUSH_REQUEST' (status=1)
Wed Nov 26 12:41:14 2014 PUSH: Received control message: 'PUSH_REPLY,explicit-exit-notify,topology subnet,route-delay 5 30,dhcp-pre-release,dhcp-renew,dhcp-release,route-metric 101,ping 12,ping-restart 50,comp-lzo yes,redirect-private def1,redirect-private bypass-dhcp,redirect-private autolocal,redirect-private bypass-dns,route-gateway 172.16.224.129,route 172.16.1.0 255.255.255.0,route 172.16.224.0 255.255.255.0,block-ipv6,ifconfig 172.16.224.131 255.255.255.128'
Wed Nov 26 12:41:14 2014 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:4: dhcp-pre-release (2.3.2)
Wed Nov 26 12:41:14 2014 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:5: dhcp-renew (2.3.2)
Wed Nov 26 12:41:14 2014 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:6: dhcp-release (2.3.2)
Wed Nov 26 12:41:14 2014 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:18: block-ipv6 (2.3.2)
Wed Nov 26 12:41:14 2014 OPTIONS IMPORT: timers and/or timeouts modified
Wed Nov 26 12:41:14 2014 OPTIONS IMPORT: explicit notify parm(s) modified
Wed Nov 26 12:41:14 2014 OPTIONS IMPORT: LZO parms modified
Wed Nov 26 12:41:14 2014 OPTIONS IMPORT: --ifconfig/up options modified
Wed Nov 26 12:41:14 2014 OPTIONS IMPORT: route options modified
Wed Nov 26 12:41:14 2014 OPTIONS IMPORT: route-related options modified
Wed Nov 26 12:41:14 2014 ROUTE_GATEWAY 192.168.0.1/255.255.255.0 IFACE=wlan0 HWADDR=c4:85:08:c9:14:f4
Wed Nov 26 12:41:14 2014 ERROR: Cannot ioctl TUNSETIFF tun: Operation not permitted (errno=1)
Wed Nov 26 12:41:14 2014 Exiting due to fatal error Any idea what the problem is? I assume it's failing to create the tunnel due to the ERROR line? I'm running server version 2.0.10 and client version, OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Feb 4 2014
Originally developed by James Yonan
Copyright (C) 2002-2010 OpenVPN Technologies, Inc. <[email protected]>
Compile time defines: enable_crypto=yes enable_debug=yes enable_def_auth=yes enable_dependency_tracking=no enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown enable_eurephia=yes enable_fast_install=yes enable_fragment=yes enable_http_proxy=yes enable_iproute2=yes enable_libtool_lock=yes enable_lzo=yes enable_lzo_stub=no enable_maintainer_mode=no enable_management=yes enable_multi=yes enable_multihome=yes enable_pam_dlopen=no enable_password_save=yes enable_pedantic=no enable_pf=yes enable_pkcs11=yes enable_plugin_auth_pam=yes enable_plugin_down_root=yes enable_plugins=yes enable_port_share=yes enable_selinux=no enable_server=yes enable_shared=yes enable_shared_with_static_runtimes=no enable_small=no enable_socks=yes enable_ssl=yes enable_static=yes enable_strict=no enable_strict_options=no enable_systemd=no enable_win32_dll=yes enable_x509_alt_username=yes with_crypto_library=openssl with_gnu_ld=yes with_ifconfig_path=/sbin/ifconfig with_iproute_path=/sbin/ip with_mem_check=no with_plugindir='${prefix}/lib/openvpn' with_route_path=/sbin/route with_sysroot=no | Looks like this is a simple matter of sudo. sudo openvpn client.ovpn worked a treat. | {
"source": [
"https://serverfault.com/questions/647231",
"https://serverfault.com",
"https://serverfault.com/users/4158/"
]
} |
647,539 | I have a dual Opteron server running Linux with libvirt to host several VMs. The VMs work fine and the server processes OK, but I notice one CPU always runs at about 69C (throttles at 70C) and the other runs at about 15C. This doesn't seem normal to me? Shouldn't they both be a little closer in temperature? I'm not sure how to diagnose any further. Maybe there isn't enough thermal paste on one of the CPUs? Edit: The motherboard is ASUS KGPE-D16 and cooled by dual Noctua NH-U9DO fans. Note that I think the temperatures might be degrees above ambient, rather than absolute values? When the server is idling, the CPU temperatures drop to 2C and 13C. I am using the lmsensors configuration from here | The problem ended up being a poorly fitted heatsink. Maybe poorly fitted isn't the right description. Turns out, you have to put thermal paste on the heatsink, not the plastic cover that goes over the heatsink. After removing the plastic cover, the CPU is nice and cool, thanks everyone! | {
"source": [
"https://serverfault.com/questions/647539",
"https://serverfault.com",
"https://serverfault.com/users/11090/"
]
} |
648,140 | I want to discover all of my neighbors that have the IPv6 protocol enabled and are still alive. I tried ip -6 neighbor show but it shows nothing. Can someone recommend a tool and show some examples? Thanks. | Best to ping the special all-nodes link-local multicast address - ff02::1 - and wait for the responses: ~ $ ping6 -I eth0 ff02::1
PING ff02::1(ff02::1) from fe80::a11:96ff:fe04:50cc wlan0: 56 data bytes
64 bytes from fe80::a11:96ff:fe02:50ce: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from fe80::1eaf:f7ff:fe64:ec8e: icmp_seq=1 ttl=64 time=1.82 ms (DUP!)
64 bytes from fe80::6676:baff:feae:8c04: icmp_seq=1 ttl=64 time=4047 ms (DUP!)
64 bytes from fe80::5626:96ff:fede:ae5f: icmp_seq=1 ttl=64 time=4047 ms (DUP!)
64 bytes from fe80::5626:96ff:fede:ae5f: icmp_seq=1 ttl=64 time=3049 ms (DUP!)
64 bytes from fe80::6676:baff:feae:8c04: icmp_seq=1 ttl=64 time=3049 ms (DUP!)
[...]
^C A couple of points here: you must specify the interface: -I eth0 The responses are link-local addresses - they can easily be converted to your global address by replacing the leading fe80: with your subnet's prefix, e.g. with 2001:db8:1234:abcd: if that's your subnet's prefix. See http://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xhtml for some other multicast addresses other than ff02::1 that may be of an interest. | {
"source": [
"https://serverfault.com/questions/648140",
"https://serverfault.com",
"https://serverfault.com/users/256457/"
]
} |
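Tying this back to the original ip -6 neighbor show attempt: the neighbour cache only fills up after some traffic, so ping the multicast address first and then dump the cache (the interface name is a placeholder):

# Populate the neighbour cache, then list who answered
ping6 -c2 -I eth0 ff02::1 > /dev/null
ip -6 neigh show dev eth0

# Same idea for just the routers on the link (all-routers address)
ping6 -c2 -I eth0 ff02::2 > /dev/null
ip -6 neigh show dev eth0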
648,276 | I understand that Linux chooses the most specific route to the destination when it does routing selection . But what about a route's metric? Does it have a higher priority than route's specificity? A reference to the details of the routing selection algorithm used by Linux would also be appreciated. | The routes metric is to set preference among routes with equal specificity. That is true of routing in general (i.e. Cisco, Windows, etc). So the model works like: Find the most specific route (aka the longest prefix match* ) If there are multiple routes with the same specificity, pick the one with the lowest administrative distance (This distinguishes between things like directly attached routes, static routes, and various routing protocols). Within that routing protocol and specific route (if route specificity and administrative distance are the same), chose the route with the lowest metric Note that there are other things that could be going on such a policy based routing that lets you do things like route based on the source IP address. But route specificity, administrative distance, and then metric are what I would consider to be the main three things. *It is called the longest prefix match because a subnet in binary (/24 for example) looks like 11111111.11111111.11111111.00000000 . So a router can just scan the prefix for binary 1s and stop once it hits a zero, and then it has matched the prefix. | {
"source": [
"https://serverfault.com/questions/648276",
"https://serverfault.com",
"https://serverfault.com/users/34662/"
]
} |
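On Linux the selection order is easy to observe with ip route get; a sketch with made-up addresses, run as root, showing the metric breaking the tie between equally specific routes while a more specific route wins regardless of metric:

# Two equally specific default routes: the lower metric is preferred
ip route add default via 192.0.2.1 dev eth0 metric 100
ip route add default via 198.51.100.1 dev eth1 metric 200
ip route get 8.8.8.8        # chosen via 192.0.2.1

# A /32 beats both defaults even with a much worse metric
ip route add 8.8.8.8/32 via 198.51.100.1 dev eth1 metric 500
ip route get 8.8.8.8        # now chosen via 198.51.100.1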
648,287 | I have one server in a cluster that was experiencing a process table leak. Because the developer responsible for the code was unavailable for a few days I increased pid_max on the machine as follows: echo 4194303 > /proc/sys/kernel/pid_max This bought us time until the developer was able to fix his app and stop the leak. However, I now would like to bring the server back inline with others in the cluster. My concern is that there are processes with pids in the 3 million range. If I reduce pid_max to its normal value, what will happen to pids already in the table? Does the system need to be restarted? | The routes metric is to set preference among routes with equal specificity. That is true of routing in general (i.e. Cisco, Windows, etc). So the model works like: Find the most specific route (aka the longest prefix match* ) If there are multiple routes with the same specificity, pick the one with the lowest administrative distance (This distinguishes between things like directly attached routes, static routes, and various routing protocols). Within that routing protocol and specific route (if route specificity and administrative distance are the same), chose the route with the lowest metric Note that there are other things that could be going on such a policy based routing that lets you do things like route based on the source IP address. But route specificity, administrative distance, and then metric are what I would consider to be the main three things. *It is called the longest prefix match because a subnet in binary (/24 for example) looks like 11111111.11111111.11111111.00000000 . So a router can just scan the prefix for binary 1s and stop once it hits a zero, and then it has matched the prefix. | {
"source": [
"https://serverfault.com/questions/648287",
"https://serverfault.com",
"https://serverfault.com/users/217589/"
]
} |
648,355 | We recently bought a wildcard SSL cert for our domain. We converted all of the certs to a Java keystore, but now we are asking ourselves where we should store these for later use. Do people use source control like BitBucket for these types of files, or just generate them every time they're needed, or something else? We're wondering if there's a standard solution or any "best practices" around storing these certificates for future use. | There are multiple solutions: One avenue is a dedicated key vault: either a hardware-based appliance, a hardware security module or a software-based equivalent. Another is to simply revoke the old key and generate a new private/public key-pair when the situation arises. That somewhat shifts the problem from maintaining key security to securing the username/password of the account with the certificate provider and their procedures for re-issue. The advantage there is that most organisations already have a privileged account management solution e.g. 1 2 There are multiple ways of off-line storage, from printing a hard copy of the private and public key-pair including the password (but that will be a female dog to restore) to simply storing them on digital media rated for long-term storage. Really bad places are GitHub, your team wiki or a network share (and you get the idea). Update 2015/4/29: Keywhiz seems like an interesting approach as well. | {
"source": [
"https://serverfault.com/questions/648355",
"https://serverfault.com",
"https://serverfault.com/users/256600/"
]
} |
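If the answer for your environment ends up being plain off-line storage, one low-tech precaution is to encrypt the keystore symmetrically before it is archived anywhere; the file names below are placeholders, and the passphrase then has to be managed on its own:

# Encrypt the keystore with a strong passphrase before archiving
gpg --symmetric --cipher-algo AES256 wildcard-keystore.jks
# produces wildcard-keystore.jks.gpg; archive that, not the plain .jks

# Record a checksum so bit-rot or tampering is detectable later
sha256sum wildcard-keystore.jks.gpg > wildcard-keystore.jks.gpg.sha256

# Recover when needed
gpg --decrypt wildcard-keystore.jks.gpg > wildcard-keystore.jks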
648,448 | Is there a way to provide user-specific passwords for Wi-Fi, so that different users have different passwords? I'd like to provide each user with a different password for my Wi-Fi connection. | What you need is WPA-2 Enterprise , combined with a RADIUS server for authenticating users. If you have an existing Active Directory infrastructure, then you can use the Network Policy Server role in Windows to do the authentication and allow users to log on with their AD username/password. | {
"source": [
"https://serverfault.com/questions/648448",
"https://serverfault.com",
"https://serverfault.com/users/256673/"
]
} |
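If the RADIUS side ends up being FreeRADIUS rather than Windows NPS, per-user credentials can be smoke-tested on the server before the access point is involved; the user name, password and shared secret here are placeholders:

# Throwaway test account, FreeRADIUS 'users' file syntax:
#   alice  Cleartext-Password := "alice-secret"

# Run the daemon in debug mode in one terminal...
freeradius -X          # 'radiusd -X' on some distributions

# ...and exercise the account from another; an Access-Accept reply means
# the username/password check works
radtest alice alice-secret 127.0.0.1 0 testing123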
648,449 | I am very new to VPNs and I am getting errors. I have posted the following lines that I think are the most relevant: Dec 2 08:41:03 racoon: DEBUG: IV freed
Dec 2 08:41:03 racoon: [EUA]: [79.121.213.141] ERROR: failed to pre-process ph2 packet [Check Phase 2 settings, networks] (side: 1, status: 1).
Dec 2 08:41:03 racoon: ERROR: failed to get sainfo.
Dec 2 08:41:03 racoon: ERROR: failed to get sainfo.
Dec 2 08:41:03 racoon: DEBUG: cmpid source: '192.168.10.0/24'
Dec 2 08:41:03 racoon: DEBUG: cmpid target: '79.121.213.141/32'
Dec 2 08:41:03 racoon: DEBUG: check and compare ids : value mismatch (IPv4_subnet)
Dec 2 08:41:03 racoon: DEBUG: cmpid source: '192.168.0.0/24'
Dec 2 08:41:03 racoon: DEBUG: cmpid target: '192.168.0.0/24'
Dec 2 08:41:03 racoon: DEBUG: check and compare ids : values matched (IPv4_subnet)
Dec 2 08:41:03 racoon: DEBUG: evaluating sainfo: loc='192.168.0.0/24', rmt='192.168.10.0/24', peer='ANY', id=1
Dec 2 08:41:03 racoon: DEBUG: getsainfo params: loc='192.168.0.0/24' rmt='79.121.213.141/32' peer='79.121.213.141' client='79.121.213.141' id=1
Dec 2 08:41:03 racoon: DEBUG: 304ccaa9 0176e9fb 71aa4c00 c864b944 24677b49
Dec 2 08:41:03 racoon: DEBUG: HASH computed:
Dec 2 08:41:03 racoon: DEBUG: hmac(hmac_sha1) Can anyone tell me where this is going wrong? I don't think cmpid source and cmpid target should be the same? | What you need is WPA-2 Enterprise , combined with a RADIUS server for authenticating users. If you have an existing Active Directory infrastructure, then you can use the Network Policy Server role in Windows to do the authentication and allow users to log on with their AD username/password. | {
"source": [
"https://serverfault.com/questions/648449",
"https://serverfault.com",
"https://serverfault.com/users/216171/"
]
} |
648,573 | I have a very important file which an application in my workplace uses. I need to make sure it is not deleted whatsoever; how can I do that? | Yes, you can change the file's attributes to make it immutable. The command is: chattr +i filename
"source": [
"https://serverfault.com/questions/648573",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
648,704 | In the Wikipedia page for CPU time , it says The CPU time is measured in clock ticks or seconds. Often, it is
useful to measure CPU time as a percentage of the CPU's capacity,
which is called the CPU usage. I don't understand how a time duration can be replaced by a percentage. When I look at top , doesn't %CPU tell me that MATLAB is using 2.17 of my cores? PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18118 jasl 20 0 9248400 261528 78676 S 217.2 0.1 8:14.75 MATLAB Question In order to better understand what CPU usage is, how do I calculate the CPU usage myself? | CPU time is allocated in discrete time slices (ticks). For a certain number of time slices, the CPU is busy, other times it is not (which is represented by the idle process). In the picture below the CPU is busy for 6 of the 10 CPU slices. 6/10 = .60 = 60% of busy time (and there would therefore be 40% idle time). A percentage is defined as "a number or rate that is expressed as a certain number of parts of something divided into 100 parts". So in this case, those parts are discrete slices of time and the something is busy time slices vs idle time slices -- the rate of busy to idle time slices. Since CPUs operate in GHz (billions of cycles a second). The operating system slices that time in smaller units called ticks. They are not really 1/10 of a second. The tick rate in windows is 10 million ticks in a second and in Linux it is sysconf(_SC_CLK_TCK) (usually 100 ticks per second). In something like top , the busy CPU cycles are then further broken down into percentages of things like user time and system time. In top on Linux and perfmon in Windows, you will often get a display that goes over 100%, that is because the total is 100% * the_number_of_cpu_cores. In an operating system, it is the scheduler's job to allocate these precious slices to processes, so the scheduler is what reports this. | {
"source": [
"https://serverfault.com/questions/648704",
"https://serverfault.com",
"https://serverfault.com/users/208796/"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.