Columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
799,800
I have a front end web server running over HTTPS - this is public facing - i.e. port is open. I also have a backend API server that my webserver makes API requests to - this is public facing and requires authentication - port is open. These 2 servers run over HTTPS. Behind the API server, there are lots of other servers. The API server reverse proxies to these servers. Ports for these other servers are not open to incoming traffic. They can only be talked to via the API server. My Question ... Do the "lots of other servers" need to run over HTTPS or, given that they cannot be accessed externally, can they run over HTTP safely instead? I thought this would be a common question but I could not find an answer to it. Thanks. If this is a dupe please point me to the right answer.
This is a matter of opinion, and also has to do with regulatory issues (if you face any). Even if it's not currently necessary I am a big advocate of keeping the HTTPS enabled between any application level firewalls / load balancers / front end servers and the back end servers. It's one less attack surface. I've contracted with places that needed to convert over as more sensitive information began being passed - it's better to start there. What I generally would suggest is using an internal CA (if available) or self sign (if no internal CA) the back end servers. We'd set the expiration date nice and far into the future to avoid unnecessary changes.
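If it helps, the self-sign route mentioned above can be done in one step along these lines (a rough sketch; the hostname and 10-year validity are only placeholders, not values from the question):
openssl req -x509 -newkey rsa:2048 -nodes -keyout backend.key -out backend.crt -days 3650 -subj "/CN=backend.internal.example"
The backend web server is then pointed at backend.key / backend.crt, and the API server's reverse proxy is configured to trust that certificate (or the internal CA that signed it).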
{ "source": [ "https://serverfault.com/questions/799800", "https://serverfault.com", "https://serverfault.com/users/236988/" ] }
800,628
Several permanent errors were reported on my zpool today. pool: seagate3tb state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see: http://zfsonlinux.org/msg/ZFS-8000-8A scan: none requested config: NAME STATE READ WRITE CKSUM seagate3tb ONLINE 0 0 28 sda ONLINE 0 0 56 errors: Permanent errors have been detected in the following files: /mnt/seagate3tb/Install.iso /mnt/seagate3tb/some-other-file1.txt /mnt/seagate3tb/some-other-file2.txt Edit: I'm not sure if those CKSUM values are accurate. I was redacting data and may have mangled those by mistake. They may have been 0. Unfortunately, I can't find a conclusive answer in my notes and the errors are resolved now so I'm not sure, but everything else is accurate/reflects what zpool was reporting. /mnt/seagate3tb/Install.iso is one example file reported as having a permanent error. Here's where I get confused. If I compare my "permanently errored" Install.iso against a backup of that exact same file on another filesystem, they look identical. shasum "/mnt/seagate3tb/Install.iso" 1ade72fe65902b2a978e5504aaebf9a3a08bc328 /mnt/seagate3tb/Install.iso shasum "/mnt/backup/Install.iso" 1ade72fe65902b2a978e5504aaebf9a3a08bc328 /mnt/backup/Install.iso cmp /mnt/seagate3tb/Install.iso /mnt/backup/Install.iso diff /mnt/seagate3tb/Install.iso /mnt/backup/Install.iso The files seem to be identical. What's more, the file works perfectly fine. If I use it in an application, it behaves like I'd expect it to. As the docs state: Data corruption errors are always fatal. But based on my rudimentary file verifications, I'm not sure I understand the definition of fatal. status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. Maybe I'm missing something, but the file seems perfectly fine as far as I can tell, and doesn't need any restoration nor does it show any corruption, despite the recommendation from ZFS. I've seen other articles with the same error, but I have yet to find an answer to my question. What is the permanent error with the file? Is there some lower level issue with the file that's just not readily apparent to me? If so, why would that not be detected by a shasum as a difference in the file? From a layperson's perspective, I see nothing to indicate any error with this file.
The wording of zpool status is a bit misleading. A permanent error (in this context) indicates that an I/O error has occurred and has been logged to the SPA (Storage Pool Allocator) error log for that pool. This does not necessarily mean there is irrecoverable data corruption. What you should do is run a zpool scrub on the pool. When the scrub completes, the SPA error log will be rotated and will no longer show errors from before the scrub. If the scrub detects no errors then zpool status will no longer show any "permanent" errors. Regarding the documentation, it is saying that only "fatal errors" are logged in this way. A fatal error is an I/O error that could not be automatically corrected by ZFS and therefore was exposed to an application as a failed I/O. By contrast, if the I/O was immediately retried successfully or if the logical I/O was satisfied from a redundant device, it would not be considered a fatal error and therefore would not be logged as a data corruption error. A fatal error does not necessarily mean permanent data loss, it just means that at the time it could not be fixed before it propagated up to the application. For example, a loose cable or a bad controller could cause temporary fatal errors which ZFS would describe as "permanent." Whether it truly is a problem depends on the nature of the I/O and whether the application is capable of recovering from I/O errors. EDIT: Fully agree with @bahamat that you should invest in redundancy as soon as possible.
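To put that into commands (using the pool name from the question):
zpool scrub seagate3tb
zpool status -v seagate3tb    # check again once the scrub has finished
If the scrub completes without finding problems, the "permanent errors" list should no longer appear in the status output.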
{ "source": [ "https://serverfault.com/questions/800628", "https://serverfault.com", "https://serverfault.com/users/373603/" ] }
801,074
I have written a buggy program that has accidentally created about 30M files under /tmp. (The bug was introduced some weeks ago, and it was creating a couple of subdirectories per second.) I could rename /tmp to /tmp2, and now I need to delete the files. The system is FreeBSD 10, the root filesystem is zfs. Meanwhile one of the drives in the mirror failed, and I have replaced it. The mirror has two 120GB SSD disks. Here is the question: replacing the hard drive and resilvering the whole array took less than an hour. Deleting the files in /tmp2 is another story. I have written another program to remove the files, and it can only delete 30-70 subdirectories per second. It will take 2-4 days to delete all the files. How is it possible that resilvering the whole array takes an hour, but deleting from the disk takes 4 days? Why do I have such bad performance? 70 deletions/second seems like very, very bad performance. I could delete the inode for /tmp2 manually, but that will not free up the space, right? Could this be a problem with zfs, the hard drives, or something else?
Deletes in ZFS are expensive. Even more so if you have deduplication enabled on the filesystem (since dereferencing deduped files is expensive). Snapshots could complicate matters too. You may be better off deleting the /tmp directory instead of the data contained within. If /tmp is a ZFS filesystem, delete it and create again.
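As a rough sketch of that last suggestion (the pool/dataset names are placeholders, since the question does not name them):
zfs list                      # check whether /tmp2 is its own dataset, e.g. zroot/tmp2
zfs destroy -r zroot/tmp2     # drops the whole dataset, far faster than removing files one by one
zfs create -o mountpoint=/tmp zroot/tmp
This only applies if /tmp (now /tmp2) really is a separate ZFS dataset rather than a plain directory on the root filesystem.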
{ "source": [ "https://serverfault.com/questions/801074", "https://serverfault.com", "https://serverfault.com/users/170295/" ] }
801,257
I'm setting up regular system maintenance tasks which have to run as root. I plan to use the flavour of cron which comes with Ubuntu 14.04 LTS as the default. I see the previous admin (who since left the company) edited /etc/crontab directly. However I understand another possible approach would be to use crontab -e as root. Are there any compelling arguments to use one or the other, or is it down to preference?
It might be useful to note that jobs in a personal crontab ( crontab -e ) are always executed as their owner, whereas /etc/crontab contains an additional mandatory <user> field allowing an admin to configure the job to run as a non-root user. Editing the system crontab or setting up a personal crontab for root are probably a bit more portable, not specific to certain Linux distributions and arguably more convenient for a person to maintain, with all jobs in a single file, but: Personally I favour a third option: for each scheduled task drop either a file in /etc/cron.d/ with a cron snippet, or an executable (script) in the relevant /etc/cron.{hourly|daily|weekly|monthly} directory. That is easier to script (you can simply create/overwrite/delete such files and you don't have to muck about in the contents of a single crontab file), it works well with configuration management tooling, and it is what package managers are already doing anyway. Jobs/scripts in /etc/cron.{hourly|daily|weekly|monthly} are always executed as root, whereas the cron snippets in /etc/cron.d/ allow both setting a custom schedule as well as running as a different user with that same mandatory <user> field found in /etc/crontab.
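For example, a drop-in file in /etc/cron.d/ looks like this (the schedule, user and script path are of course just placeholders):
# /etc/cron.d/cleanup-tmp
# m h dom mon dow user command
17 3 * * * root /usr/local/sbin/cleanup-tmp.sh
The extra column between the schedule and the command is the mandatory user field mentioned above.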
{ "source": [ "https://serverfault.com/questions/801257", "https://serverfault.com", "https://serverfault.com/users/233584/" ] }
801,350
I have a Redhat server ( Red Hat Enterprise Linux Server release 7.2 (Maipo) ) that resets iptables rules on re/boot. According to the version 6 documentation, I execute: /sbin/service iptables save which returns: The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl. If I understand the message correctly, I attempted the following: sudo systemctl iptables save which returns: Unknown operation 'iptables'. I cannot locate the version 7 documentation on saving iptables specifically, but previous versions support the same command. What command should I run to save the iptables config? For reference, firewalld status: systemctl status firewalld firewalld.service Loaded: not-found (Reason: No such file or directory) Active: inactive (dead)
You should install the iptables-services package. Then service iptables save will work. These commands will also work: # iptables-save > /etc/sysconfig/iptables # ip6tables-save > /etc/sysconfig/ip6tables AFAIK, systemctl doesn't have any option to save iptables-services' configuration. Note: systemctl syntax is as follows: systemctl <operation> <unit>
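Putting it together on RHEL/CentOS 7 (package and unit names as above):
yum install iptables-services
systemctl enable iptables        # load the saved rules at boot
service iptables save            # writes the current rules to /etc/sysconfig/iptables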
{ "source": [ "https://serverfault.com/questions/801350", "https://serverfault.com", "https://serverfault.com/users/195983/" ] }
801,355
I am trying to figure out a way to give IP:PORT/SOMEPATH a DNS entry. For example we have multiple services as the URLs IP:PORT/APP1, IP:PORT/APP2 , etc. Can I use DNS to alias these any way? It looks like A records are for just IP , and SRV records can be used for IP:PORT : http://www.networksolutions.com/support/how-to-manage-advanced-dns-records/ Is this impossible with DNS? I guess the question is, can you alias any valid URL with some type of DNS record? EDIT: this question is related but they specifically asked about CNAME records and I'm asking if there is any record type to achieve this: Can a CNAME DNS record point to a subdirectory
{ "source": [ "https://serverfault.com/questions/801355", "https://serverfault.com", "https://serverfault.com/users/255615/" ] }
801,356
What is the difference between Nginx ~ and ~* regexes? For example: if ($http_referer ~* www.foobar.net) { ... } vs if ($http_referer ~ www.foobar.net) { ... }
~: If a tilde modifier is present, this location will be interpreted as a case-sensitive regular expression match. ~*: If a tilde and asterisk modifier is used, the location block will be interpreted as a case-insensitive regular expression match.
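A small illustration (the pattern and return code are arbitrary, just to show the difference):
location ~ \.png$  { return 204; }   # case-sensitive: matches /logo.png but not /logo.PNG
location ~* \.png$ { return 204; }   # case-insensitive: matches /logo.png, /logo.PNG, /logo.Png, ...
The same distinction applies to the if ($http_referer ~ ...) form in the question: ~ compares case-sensitively, while ~* ignores case.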
{ "source": [ "https://serverfault.com/questions/801356", "https://serverfault.com", "https://serverfault.com/users/173117/" ] }
801,371
I want to fetch all the AD user accounts that are not expired and have a blank password. Until now I have used an LDAP filter that fetches all the expired accounts: (&(objectCategory=person)(objectClass=user) (!accountExpires=9223372036854775807) (!accountExpires=0)) I need to make a filter that fetches all the unexpired accounts that have a blank password. Any suggestions on what modifications I have to make?
{ "source": [ "https://serverfault.com/questions/801371", "https://serverfault.com", "https://serverfault.com/users/374332/" ] }
801,546
By coincidence I looked at my servers ssh log (/var/log/auth.log) and I noticed that someone is constantly trying to gain access: Sep 7 13:03:45 virt01 sshd[14674]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.31.116.42 user=root Sep 7 13:03:48 virt01 sshd[14674]: Failed password for root from 116.31.116.42 port 13423 ssh2 Sep 7 13:03:52 virt01 sshd[14674]: message repeated 2 times: [ Failed password for root from 116.31.116.42 port 13423 ssh2] Sep 7 13:03:52 virt01 sshd[14674]: Received disconnect from 116.31.116.42: 11: [preauth] This happens a few times every minute, and has been going on for a long time without me knowing about it. Question Should I be concerned about this, if yes: What should I do about it?
Unfortunately, this is absolutely normal and something every SSH server experiences. Welcome to the internet. As long as you properly secure your server (e.g. keep it updated, allow only key-based login, disable root SSH access), this shouldn't be a problem, but you can limit this even further with something like fail2ban and other approaches like IP whitelisting, changing ports and stuff like that where possible and appropriate.
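The sshd_config settings behind that advice look roughly like this (edit /etc/ssh/sshd_config, then reload or restart sshd):
PermitRootLogin no
PasswordAuthentication no
Combined with fail2ban's standard sshd jail, this turns the noise in auth.log into a non-event.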
{ "source": [ "https://serverfault.com/questions/801546", "https://serverfault.com", "https://serverfault.com/users/276343/" ] }
802,930
I have set up a working SMTP relay together with MailScanner. This SMTP relay is not — and will not be — able to relay email from the outside , only local email. Is it possible to send a malicious email with the terminal? I have googled around but could not find anything that could answer my question. For example, I want to use: echo "{malicious-string}" | mail [email protected] What could the "{malicious-string}" be?
Use the EICAR test virus. http://www.eicar.org/86-0-Intended-use.html echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' | mail Edit: Be sure to read Michael Hampton's answer as well!
{ "source": [ "https://serverfault.com/questions/802930", "https://serverfault.com", "https://serverfault.com/users/348532/" ] }
803,175
Problem: I have a java process which does not die with either SIGTERM or SIGKILL. logstash 2591 1 99 13:22 ? 00:01:46 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash It respawns every time a signal is received. Sep 15 13:22:17 test init: logstash main process (2546) killed by KILL signal Sep 15 13:22:17 test init: logstash main process ended, respawning It sounds strange, but even if I reboot the server, it still does not die. The process was executed through an init script with the below command: NAME=logstash LS_USER=logstash LS_OPTS="--path.settings=/etc/logstash" LS_PIDFILE=/var/run/$NAME/$NAME.pid LS_STDERR="/var/log/logstash/logstash.stderr" DAEMON="/usr/share/logstash/bin/logstash" runuser -s /bin/sh -c "exec $DAEMON ${LS_OPTS}" ${LS_USER} &>${LS_STDERR} & Is there any way to force-kill this process other than reinstalling the OS? Environment Process : logstash 5.0.0~alpha5 OS : Red Hat Enterprise Linux Server release 6.7 (Santiago) Java version : openjdk version "1.8.0_101" OpenJDK Runtime Environment (build 1.8.0_101-b13) OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode) Server is deployed on Microsoft Azure.
init: logstash main process (2546) killed by KILL signal Actually your process does stop here. init: logstash main process ended, respawning A new logstash process is started by init to replace it. That also shows which control process is responsible for restarting logstash: init. (On RHEL 6 and CentOS that is Upstart.) Your process most likely gets started from either /etc/inittab or a drop-in file in /etc/init/logstash.conf (or similar) and should be controlled with the appropriate tool, initctl, and not with kill. Try initctl list to see if logstash is there. Then initctl stop logstash will stop it. Editing or removing the conf file in /etc/init will allow you to disable it persistently. You might even be able to control the job with the service and chkconfig commands.
{ "source": [ "https://serverfault.com/questions/803175", "https://serverfault.com", "https://serverfault.com/users/316534/" ] }
803,283
I noticed a strange behavior on one machine using Debian that I can't reproduce on another machine running Ubuntu. When listing virsh networks as an ordinary user, it shows an empty list: ~$ virsh net-list --all Name State Autostart Persistent ---------------------------------------------------------- When running the same command with sudo , it shows the default connection: ~$ sudo virsh net-list --all Name State Autostart Persistent ---------------------------------------------------------- default active no yes The permissions on the files themselves seem to be set correctly: ~$ ls -l /etc/libvirt/qemu/networks total 8 drwxr-xr-x 2 root root 4096 Jul 1 18:19 autostart -rw-r--r-- 1 root root 228 Jul 1 18:19 default.xml The user belongs to kvm and libvirtd groups. What is happening? Why can't I list the networks as an ordinary user?
It appears that, if not explicitly stated, the virsh binary uses the 'qemu:///session' URI (at least under debian). Therefore, not only virsh net-list, but practically any command, including virsh list, behaved differently when running with sudo. In other words, virsh net-list was using the user's scope instead of the global one. This makes sense; trying to create the default connection and then starting it led to a “Network is already in use by interface virbr0” error—without knowing it, I was starting a second connection named “default”, while one was already running. The solution is straightforward: virsh --connect qemu:///system net-list does what I was expecting it to do, while: virsh net-list doesn't. Why is the Ubuntu machine not having the issue? According to the documentation: If virsh finds the environment variable VIRSH_DEFAULT_CONNECT_URI set, it will try this URI by default. Use of this environment variable is, however, deprecated now that libvirt supports LIBVIRT_DEFAULT_URI itself. It appears, indeed, that on the Ubuntu machine, the second variable was defined: ubuntu:~$ echo $VIRSH_DEFAULT_CONNECT_URI ubuntu:~$ echo $LIBVIRT_DEFAULT_URI qemu:///system On the Debian machine, on the other hand, none of those variables are set: debian:~$ echo $VIRSH_DEFAULT_CONNECT_URI debian:~$ echo $LIBVIRT_DEFAULT_URI Setting one of those variables to qemu:///system would probably work, but, well, it's easier to specify the connection string directly in the virsh command (at least when writing a script).
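For completeness, the environment-variable route would be something like this in the shell profile (assuming a bash-like shell):
export LIBVIRT_DEFAULT_URI=qemu:///system
After that, a plain virsh net-list behaves the same as the explicit --connect qemu:///system form.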
{ "source": [ "https://serverfault.com/questions/803283", "https://serverfault.com", "https://serverfault.com/users/39827/" ] }
803,413
I have created a second database on my existing Azure SQL server. The first database works fine and I can see it using SSMS. I cannot see the second database in the object explorer. Autocomplete detects that it exists however. Any suggestions?
So for me, it turned out that when I was connecting using SSMS, I had set the database to connect to as my first database by accident - meaning that was all I could see. On connect, go to Options, then check the database you're connecting to. I was using multiple users to connect to that server, and my admin user also ended up being forced to connect to a single database instead of <default>.
{ "source": [ "https://serverfault.com/questions/803413", "https://serverfault.com", "https://serverfault.com/users/67806/" ] }
803,423
My .htaccess is: # nginx configuration location / { if (!-e $request_filename){ rewrite ^(.+)$ /index.php?url=$1 break; } } I have nginx configured like this: server { listen 80; listen 443 ssl http2; server_name prueba_nginx.net; root "/home/vagrant/Code/prueba_nginx/public"; index index.html index.htm index.php; charset utf-8; location / { try_files $uri $uri/ /index.php?$query_string; } location = /favicon.ico { access_log off; log_not_found off; } location = /robots.txt { access_log off; log_not_found off; } access_log off; error_log /var/log/nginx/prueba_nginx.net-error.log error; sendfile off; client_max_body_size 100m; location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors off; fastcgi_buffer_size 16k; fastcgi_buffers 4 16k; fastcgi_connect_timeout 300; fastcgi_send_timeout 300; fastcgi_read_timeout 300; } If I call: http://prueba_nginx.net/a/b/c and I run the php command: echo $_GET['url'] it must return: a/b/c but currently it is returning empty. PHP: echo "\n".'$_GET'."\n"; var_dump($_GET); If I make the test with apache it works perfectly: RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-l RewriteRule ^(.+)$ index.php?url=$1 [QSA,L] What is the problem? Is there an error in the transformation from apache to nginx? Thanks Michael Hampton for your fast answer. I checked $_SERVER['REQUEST_URI'] and it is returning the right value: a/b/c but I am wondering why it is not working with $_GET. I made the changes that you suggested: .htaccess: # nginx configuration #location / { #if (!-e $request_filename){ #rewrite ^(.+)$ /index.php?url=$1 break; #} #} and I changed the nginx file from: location / { try_files $uri $uri/ /index.php?$query_string; } to location / { try_files $uri $uri/ /index.php?url=$uri; } But when I restarted nginx I received this error message: [....] Restarting nginx (via systemctl): nginx.serviceJob for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details. failed! systemctl status nginx.service status nginx.service ● nginx.service - A high performance web server and a reverse proxy server Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Fri 2016-09-16 04:00:27 UTC; 5s ago Process: 2645 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS) Process: 2559 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS) Process: 2696 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE) Main PID: 2562 (code=exited, status=0/SUCCESS) Sep 16 04:00:26 homestead systemd[1]: Starting A high performance web server and a reverse proxy server... Sep 16 04:00:27 homestead nginx[2696]: nginx: [emerg] unknown "url" variable Sep 16 04:00:27 homestead nginx[2696]: nginx: configuration file /etc/nginx/nginx.conf test failed Sep 16 04:00:27 homestead systemd[1]: nginx.service: Control process exited, code=exited status=1 Sep 16 04:00:27 homestead systemd[1]: Failed to start A high performance web server and a reverse proxy server. Sep 16 04:00:27 homestead systemd[1]: nginx.service: Unit entered failed state.
Sep 16 04:00:27 homestead systemd[1]: nginx.service: Failed with result 'exit-code'. journalctl -xe -- Unit nginx.service has begun starting up. Sep 16 04:00:27 homestead nginx[2696]: nginx: [emerg] unknown "url" variable Sep 16 04:00:27 homestead nginx[2696]: nginx: configuration file /etc/nginx/nginx.conf test failed Sep 16 04:00:27 homestead systemd[1]: nginx.service: Control process exited, code=exited status=1 Sep 16 04:00:27 homestead systemd[1]: Failed to start A high performance web server and a reverse proxy server. -- Subject: Unit nginx.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nginx.service has failed. -- -- The result is failed. Sep 16 04:00:27 homestead systemd[1]: nginx.service: Unit entered failed state. Sep 16 04:00:27 homestead systemd[1]: nginx.service: Failed with result 'exit-code'.
{ "source": [ "https://serverfault.com/questions/803423", "https://serverfault.com", "https://serverfault.com/users/376052/" ] }
805,006
I need to capture traffic on a CentOS 5 server which acts as a web proxy with 2 wan interfaces and 1 LAN. In order to troubleshoot a weird proxy problem, I would like to have a capture of a full conversation. Since external connections are balanced between the two WAN interfaces, I wonder if is it possible to capture simultaneously on all interfaces. I have used tcpdump previously but it only admits one interface at a time. I can launch 3 parallel processes to capture on all interfaces but then I end up with 3 different capture files. What is the right way of doing this ?
According to the tcpdump man page: On Linux systems with 2.2 or later kernels, an interface argument of ‘‘any’’ can be used to capture packets from all interfaces. Note that captures on the ‘‘any’’ device will not be done in promiscuous mode. So you should be able to run: tcpdump -i any in order to capture data on all interfaces at the same time into a single capture file.
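To keep that single capture for later analysis, write it to a file, for example:
tcpdump -i any -w /tmp/proxy-capture.pcap
The resulting pcap (the filename is just an example) can then be opened in Wireshark or read back with tcpdump -r.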
{ "source": [ "https://serverfault.com/questions/805006", "https://serverfault.com", "https://serverfault.com/users/377358/" ] }
805,333
Every now and then I get the odd request to provide remote support, troubleshooting and/or performance tuning on Linux systems. Larger companies often already have well established procedures to provide remote access to vendors/suppliers and I only need to comply with those. (For better or for worse.) On the other hand small companies and individuals invariably turn to me to instruct them on what they need to do to set me up. Typically their servers are directly connected to the internet and the existing security measures consist of the defaults for whatever their Linux distribution is. Nearly always I'll need root level access and whoever will be setting up access for me is not an expert sysadmin. I don't want their root password and I'm also pretty sure my actions won't be malicious, but what reasonably simple instructions should I give to: set up an account and securely exchange credentials set up root (sudo) access restrict access to my account provide audit trail (And yes I'm aware and always warn those clients that once I do have admin access hiding any malicious actions is trivial, but let's assume that I have nothing to hide and actively participate in creating an audit trail.) What can be improved on the steps below? My current instruction set: set up an account and securely exchange credentials I provide a password hash and ask that my account is set up with that encrypted password, so we won't need to transmit a clear text password, I'll be the only one that knows the password and we don't start off with a predictable weak password. sudo useradd -p '$1$********' hbruijn I provide a public SSH key (a specific key-pair per client) and ask that they set up my account with that key: sudo su - hbruijn mkdir -p ~/.ssh chmod 0700 ~/.ssh echo 'from="10.80.0.0/14,192.168.1.2" ssh-rsa AAAAB3NzaC1y***...***== hbruijn@serverfault' >> ~/.ssh/authorized_keys chmod 0600 ~/.ssh/authorized_keys set up root (sudo) access I ask the client to set up sudo for me with sudo sudoedit or by using their favourite editor and append to /etc/sudoers: hbruijn ALL=(ALL) ALL restrict access to my account Typically the client still allows password based logins and I ask them to add the following two lines to /etc/ssh/sshd_config to at least restrict my account to SSH keys only: Match user hbruijn PasswordAuthentication no Depending on the client I'll route all my SSH access through a single bastion host to always provide a single static IP-address (for instance 192.168.1.2) and/or provide the IP-address range my ISP uses (for instance 10.80.0.0/14). The client may need to add those to a firewall whitelist if SSH access is restricted (more often than not ssh is unfiltered though). You already saw those IP-addresses as the from= restriction in the ~/.ssh/authorized_keys file that limits the hosts from which my key can be used to access their systems. provide audit trail Until now no client has asked me for that, and I have not done anything specific beyond the following to cover my ass: I try to consistently use sudo with individual commands and try to prevent using sudo -i or sudo su -. I try not to use sudo vim /path/to/file but use sudoedit instead.
By default all the privileged actions will then be logged to syslog (and /var/log/secure): Sep 26 11:00:03 hostname sudo: hbruijn : TTY=pts/0 ; PWD=/home/hbruijn ; USER=jboss ; COMMAND=sudoedit /usr/share/jbossas/domain/configuration/domain.xml Sep 26 11:00:34 hostname sudo: hbruijn : TTY=pts/0 ; PWD=/home/hbruijn ; USER=root ; COMMAND=/usr/bin/tail -n 5 /var/log/messages I have mostly given up on customising my work environments; the only thing I really do is set the following in my ~/.bash_profile to increase the bash history and include time-stamps: export HISTSIZE=99999999999 export HISTFILESIZE=99999999999 export HISTIGNORE="w:ls:ls -lart:dmesg:history:fg" export HISTTIMEFORMAT='%F %H:%M:%S ' shopt -s histappend
The only thing that comes to mind would be to add --expiredate to the adduser call. With that the customer knows that your access will automatically expire at a fixed date. He still needs to trust you as you have root access and still could remove the expire flag.
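With the useradd call from the question, that would look something like this (the date is only an example):
sudo useradd -p '$1$********' -e 2016-12-31 hbruijn
where -e / --expiredate disables the account after the given date.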
{ "source": [ "https://serverfault.com/questions/805333", "https://serverfault.com", "https://serverfault.com/users/37681/" ] }
805,377
This seems like it should be obvious, but I haven't been able to find a way to do it. My basic problem is this: I've got Ruby 1.8.7 installed on a Scientific Linux 6 system (from the base repository). I'm trying to install some gems via gem install , but I'm running into a lot of gems that require ruby 1.9 or better. I can specify individual gem versions via the -v parameter, but gem install appears to always pick the highest version available for any gem dependencies, so even if I restrict the version on the gem I want, my installation will still fail because one of the dependencies will require Ruby 1.9. The dependency trees for some of the gems I want are deep and wide; it would take a lot of time to manually figure out which version of each dependency I need and then install each required gem manually before I can work my way up to the one I want. (This is what dependency management is supposed to solve.) So: is there a way to tell Ruby, "Install gem foo , using only gems that will work with the currently-installed version of Ruby"? (Or even, "Install version x.y.z of gem foo , using only gems that will work with the currently-installed version of Ruby"?) As I mentioned, I happen to be running Ruby 1.8.7 on Scientific Linux 6, but I doubt any solution is going to be that platform-specific.
{ "source": [ "https://serverfault.com/questions/805377", "https://serverfault.com", "https://serverfault.com/users/119616/" ] }
805,389
Most of the time, using one of these two, I can tell which OS is running in my Docker container (alpine, centOS, etc) But this time, I can't tell: bash-4.2$ uname -a Linux 6fe5c6d1451c 2.6.32-504.23.4.el6.x86_64 #1 SMP Tue Jun 9 20:57:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux bash-4.2$ more /etc/issue \S Kernel \r on an \m Any way to get a text version of the OS it is running ?
I like to use Screenfetch . You might want to try that. If you look into the code you can see how it determines the distribution: lsb_release -sirc cat /etc/os-release And to cover CentOS too: cat /etc/issue
{ "source": [ "https://serverfault.com/questions/805389", "https://serverfault.com", "https://serverfault.com/users/113546/" ] }
805,745
Within CentOS-7 does a change in the options within /etc/systemd/system.conf of systemd require a reboot or will "systemctl daemon-reload" suffice?
No, daemon-reload will reload all unit files, not the configuration for systemd itself. However, # systemctl daemon-reexec will re-execute systemd and cause it to digest its new configuration in the process. From the systemctl man page: daemon-reexec Reexecute the systemd manager. This will serialize the manager state, reexecute the process and deserialize the state again. This command is of little use except for debugging and package upgrades. Sometimes, it might be helpful as a heavy-weight daemon-reload. While the daemon is being reexecuted, all sockets systemd listening on behalf of user configuration will stay accessible. When the man page says daemon-reexec is useful for package upgrades, it in large part means that this command executes whatever new binaries there are and re-processes its configs. HOWEVER, the RPM that we use to upgrade systemd already contains a script to do this, so it is usually never needed in the case of a normal upgrade. Or you can reboot. Either will do.
{ "source": [ "https://serverfault.com/questions/805745", "https://serverfault.com", "https://serverfault.com/users/290056/" ] }
806,141
While running the below command openssl s_client -host example.xyz -port 9093 I get the following error: 139810559764296:error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad certificate:s3_pkt.c:1259:SSL alert number 42 39810559764296:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:184: But at the end i get "Verify return code: 0 (ok)" message. My question is what does the above alert signify, and if the SSL was actually successful. SSL handshake has read 6648 bytes and written 354 bytes New, TLSv1/SSLv3, Cipher is AES128-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1.2 Cipher : AES128-SHA Session-ID: xx Session-ID-ctx: Master-Key: xx Key-Arg : None Krb5 Principal: None PSK identity: None PSK identity hint: None Start Time: 1475096098 Timeout : 300 (sec) **Verify return code: 0 (ok)**
"Handshake failure" means the handshake failed, and there is no SSL/TLS connection. You should see that openssl exits to the shell (or CMD etc) and does not wait for input data to be sent to the server. "Verify return code 0" means that no problem was found in the server's certificate, either because it wasn't checked at all or because it was checked and was good (as far as OpenSSL's checks go, which doesn't cover everything); in this case by knowing the protocol we can deduce the latter case applies. Receiving alert bad certificate (code 42) means the server demands you authenticate with a certificate, and you did not do so, and that caused the handshake failure. A few lines before the line SSL handshake has read ... and written ... you should see a line Acceptable client certificate CA names usually followed by several lines identifying CAs, possibly followed by a line beginning Client Certificate Types and maybe some about Requested Signature Algorithms depending on your OpenSSL version and the negotiated protocol. Find a certificate issued by a CA in the 'acceptable' list, or if it was empty look for documentation on or about the server saying which CAs it trusts or contact the server operators or owners and ask them, plus the matching private key , both in PEM format, and specify them with -cert $file -key $file ; if you have both in one file, as is possible with PEM, just use -cert $file . If you have them in a different format, either specify it, or search here and perhaps superuser and security.SX; there are already many Q&As about converting various certificate and privatekey formats. If your cert needs a "chain" or "intermediate" cert (or even more than one) to be verified, as is often the case for a cert from a public CA (versus an inhouse one) depending on how the server is configured, s_client requires a trick: either add the chain cert(s) to your system truststore, or create a local/temporary truststore containing the CA cert(s) you need to verify the server PLUS the chain cert(s) you need to send. If you don't have such a certificate you either need to get one, which is a different question that requires much more detail to answer, or you need to find a way to connect to the server without using certificate authentication; again check the documentation and/or ask the operators/owners. EDIT: It appears from comment you may have the client key and cert chain as well as server anchor(s?) in Java. On checking I don't see a good existing answer fully covering that case, so even though this probably won't search well: # Assume Java keystore is type JKS (the default but not only possibility) # named key.jks and the privatekey entry is named mykey (ditto) # and the verify certs are in trust.jks in entries named trust1 trust2 etc. # convert Java key entry to PKCS12 then PKCS12 to PEM files keytool -importkeystore -srckeystore key.jks -destkeystore key.p12 -deststoretype pkcs12 -srcalias mykey openssl pkcs12 -in key.p12 -nocerts -out key.pem openssl pkcs12 -in key.p12 -nokeys -clcerts -out cert.pem openssl pkcs12 -in key.p12 -nokeys -cacerts -out chain.pem # extract verify certs to individual PEM files # (or if you 'uploaded' PEM files and still have them just use those) keytool -keystore trust.jks -export -alias trust1 -rfc -file trust1.pem keytool -keystore trust.jks -export -alias trust2 -rfc -file trust2.pem ... more if needed ... # combine for s_client cat chain.pem trust*.pem >combined.pem openssl s_client -connect host:port -key key.pem -cert cert.pem -CAfile combined.pem
{ "source": [ "https://serverfault.com/questions/806141", "https://serverfault.com", "https://serverfault.com/users/378319/" ] }
806,469
I am running user-level services in Ubuntu 16.04 LTS. For example, I have my test.service located at ~/.config/systemd/user/test.service . I was able to run the service by doing systemctl --user start test.target However, when I try to read its log using journalctl , I got this error message: journalctl --user -u test.service Hint: You are currently not seeing messages from other users and the system. Users in the 'systemd-journal' group can see all messages. Pass -q to turn off this notice. No journal files were opened due to insufficient permissions. How can I use journalctl for user's specific unit?
On older systemd versions, you'll have to use journalctl --user --user-unit=SERVICENAME (on newer versions journalctl --user -u SERVICENAME will work fine). However, this only works if the Storage directive of the [Journal] section of /etc/systemd/journald.conf is set to persistent (instead of auto or volatile ). Reboot after editing the configuration file and the user will be able to see the journal. More information: https://www.freedesktop.org/software/systemd/man/journald.conf.html https://lists.freedesktop.org/archives/systemd-devel/2016-October/037554.html
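The relevant part of /etc/systemd/journald.conf would then look like this:
[Journal]
Storage=persistent
followed by a reboot (or a restart of systemd-journald), as described above.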
{ "source": [ "https://serverfault.com/questions/806469", "https://serverfault.com", "https://serverfault.com/users/228766/" ] }
806,617
I have a service in the form of a node.js application set up with Systemd on Raspbian Jessie and it is using its own user account. However, I am finding that the service does not run correctly because it does not have the necessary permissions. One of the node modules I installed requires root access. If I run the application manually with sudo everything works fine. Is there a way to tell systemd to run the service with sudo?
tell systemd to run the service with sudo ? sudo has nothing to with it. Typically you instruct systemd to run a service as a specific user/group with a User= and Group= directive in the [Service] section of the unit file. Set those to root (or remove them, as running as root is the default).
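A minimal sketch of such a [Service] section (the ExecStart path is just a placeholder for the node.js app):
[Service]
User=root
Group=root
ExecStart=/usr/bin/node /opt/myapp/server.js
After editing the unit file, run systemctl daemon-reload and restart the service.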
{ "source": [ "https://serverfault.com/questions/806617", "https://serverfault.com", "https://serverfault.com/users/179109/" ] }
807,148
Recently a user unplugged their company PC from the network and used USB tethering with their Android phone to bypass the company network entirely and access the internet. I don't think I need to explain why this is bad. What would be the best way, from a zero-cost (i.e. open source, using scripting and group policy, etc.) and technical standpoint (i.e. HR has already been notified, I don't think that this is a symptom of some sort of deeper underlying corporate culture problem, etc.), to detect and/or prevent something like this from happening again? It would be nice to have a system-wide solution (e.g. by using group policy), but if that is not possible then doing something specific to this person's PC could also be an answer. A few details: The PC is Windows 7 joined to an Active Directory domain, the user has ordinary user privileges (not administrator), there are no wireless capabilities on the PC, disabling USB ports is not an option NOTE: Thank you for the great comments. I added some additional details. I think that there are a lot of reasons why one would want to disallow tethering, but for my particular environment I can think of the following: (1) Anti-virus updates. We have a local anti-virus server that delivers updates to network connected computers. If you are not connected to the network you cannot receive the updates. (2) Software Updates. We have a WSUS server and review each update to approve/disallow. We also deliver updates to other commonly used software programs such as Adobe Reader and Flash via group policy. Computers cannot receive updates if they are not connected to the local network (updating from external update servers is not permitted). (3) Internet filtering. We filter out malicious and, uh, naughty(?) sites. By using a tether you can bypass the filter and access these sites and possibly compromise the security of your computer. More background information: HR was notified already. The person in question is a high level person so it is a little bit tricky. "Making an example" of this employee although tempting would not be a good idea. Our filtering is not severe, I'm guessing that the person may have been looking at naughty sites although there is no direct evidence (cache was cleared). He says he was just charging his phone, but the PC was unplugged from the local network. I'm not looking to get this person in trouble, just possibly prevent something similar from happening again.
You can use Group Policy to prevent the installation of new network devices. You'll find an option in Administrative Templates \ System \ Device Installation \ Device Installation Restrictions \ Prevent installation of devices using drivers that match these driver setup classes. From its description: This policy setting allows you to specify a list of device setup class globally unique identifiers (GUIDs) for device drivers that Windows is prevented from installing. This policy setting takes precedence over any other policy setting that allows Windows to install a device. If you enable this policy setting, Windows is prevented from installing or updating device drivers whose device setup class GUIDs appear in the list you create. If you enable this policy setting on a remote desktop server, the policy setting affects redirection of the specified devices from a remote desktop client to the remote desktop server. Using policy settings here, you can either create a whitelist (which you seem to not want) or a blacklist, either of individual devices or entire classes of devices (such as network adapters). These take effect when a device is removed and reinserted , so it will not affect the NIC built into the machine, provided you don't apply the setting to devices that are already installed. You will need to reference the list of device setup classes to find the class for network adapters, which is {4d36e972-e325-11ce-bfc1-08002be10318} . Add this class to the blacklist, and soon afterward, nobody will be able to use USB network adapters.
{ "source": [ "https://serverfault.com/questions/807148", "https://serverfault.com", "https://serverfault.com/users/240281/" ] }
807,158
For the last couple of hours I've been trying to battle with my server to keep it up during some pretty minor load (50 concurrent users). Spec: 6 CPUs 12GB RAM During this time, memory usage maxed out at 4GB, so no problems there. However, Apache was going insane, kicking up about 20+ running processes and eating up all 6 CPUs (600% CPU usage), bringing the website to a halt. Now, with exactly the same traffic and concurrent users, CPU usage is down to 40% of the available 600% - no changes were made. I cannot for the life of me see why Apache thought it necessary to kick up 20+ running processes then, yet at another time uses 1 or 2 for the same traffic volumes. How can I diagnose what these Apache processes are actually doing? I know I can limit this through MaxClients but that still bottlenecks the server when it's trying to create 20+.
{ "source": [ "https://serverfault.com/questions/807158", "https://serverfault.com", "https://serverfault.com/users/379157/" ] }
807,611
I've got a CentOS server somewhere in the building; I can login into it remotely and VNC, etc. Now I've got to physically move it, and for that I need to physically locate the machine among the lookalikes around the office. What can I do remotely to make the machine visibly or audibly identify itself?
Use IPMI to trigger LEDs, increase fan RPMs or sound the beep/alarm. Take a look at the man page for ipmitool ( https://linux.die.net/man/1/ipmitool ); depending on the server you may be able to set the LEDs, LCD display, or a fan RPM offset (listen for the machine when nobody is in the office). Some other IPMI or BMC interfaces may allow you to sound the beep but this functionality is more platform specific. A powerful workstation or server will sound like someone vacuuming with the fans turned up all the way. EDIT: To use the identifier lights as mentioned in comments, you will however have to make sure that an appropriate IPMI interface is set up; there are several guides and tutorials available, and depending on the OEM there may be proprietary interfaces and management systems like Intel's Data Center Manager ( http://www.intel.com/content/www/us/en/software/intel-dcm-product-detail.html ). I have used this tutorial before but there are others: https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool ipmitool -I <appropriate interface for system> -U<username> chassis identify force should force the ID to an on state; depending on the interface and configuration you may need to specify the authentication type and other command line options.
{ "source": [ "https://serverfault.com/questions/807611", "https://serverfault.com", "https://serverfault.com/users/229191/" ] }
807,648
I have a SCSI enclosure (supermicro BPN-SAS-825TQ) residing on Ubuntu 16.04. Is there an sg_ses (or sg_senddiag or other) command I can send to the enclosure, that will make the Rear LEDs blink, even if there's no disk in the enclosure?
{ "source": [ "https://serverfault.com/questions/807648", "https://serverfault.com", "https://serverfault.com/users/246747/" ] }
807,959
Could someone explain to me the difference between these certificates in a simplified way? I read some articles but it sounds like they do the same job, namely encrypting many domains with one certificate.
SAN (Subject Alternative Name) is part of the X509 certificate spec, where the certificate has a field with a list of alternative names that are also valid for the subject (in addition to the single Common Name / CN). This field and wildcard names are essentially the two ways of using one certificate for multiple names. SNI (Server Name Indication) is a TLS protocol extension that is sort of a TLS protocol equivalent of the HTTP Host-header. When a client sends this, it allows the server to pick the proper certificate to present to the client without having the limitation of using separate IP addresses on the server side (much like how the HTTP Host header is heavily used for plain HTTP). Do note that SNI is not something that is reflected in the certificate and it actually achieves kind of the opposite of what the question asks for; it simplifies having many certificates, not using one certificate for many things. On the other hand, it depends heavily on the situation which path is actually preferable. As an example, what the question asks for is almost assuredly not what you actually want if you need certificates for different entities.
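To see the SAN entries of an existing certificate, something along these lines works:
openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'
SNI, by contrast, is only visible on the wire (in the TLS ClientHello), not in the certificate itself.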
{ "source": [ "https://serverfault.com/questions/807959", "https://serverfault.com", "https://serverfault.com/users/331998/" ] }
808,732
I installed Supervisor (v3.1.2) to manage ElastAlert but when I run supervisorctl it sometimes throws this error: unix:///var/run/supervisor.sock no such file and other times it throws this error: unix:///tmp/supervisor.sock no such file I'll note that it does bring me to the supervisor> prompt, but commands after that are the same errors as above. The /etc/supervisor/supervisor.conf file is configured to use /var/run , which seems at odds with the second error. I created a link to /etc/supervisor.conf , as other help pages suggested this, but it didn't make a difference. Two odd things, when I first installed Supervisor it worked fine, but it was after a reboot this problem started. And the other odd thing is that ElastAlert starts after a reboot, and continues to perform normally. So while it might be having errors it's doing its job. Not a show-stopper, but I would like for this to work properly. Any ideas?
This happens to me when the physical machine reboots. My machines run Ubuntu, ranging from 12.04 to 16.04. I resolve it by restarting supervisor as a service. sudo service supervisor stop sudo service supervisor start (This somehow works a lot better than simply using 'restart') Obviously this is not an ideal fix if you depend on Supervisor to start other programs for you without needing to be restarted after every reboot. I am currently looking into systemd like others suggested. Edit: If you are on Ubuntu 16.04, this answer might fix all your problems as it did mine. You should 'enable' systemd to start supervisord. https://unix.stackexchange.com/a/291098
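On Ubuntu 16.04 the 'enable' step from that linked answer amounts to something like:
sudo systemctl enable supervisor
sudo systemctl start supervisor
so that systemd starts supervisord (and creates its socket) on every boot.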
{ "source": [ "https://serverfault.com/questions/808732", "https://serverfault.com", "https://serverfault.com/users/380517/" ] }
808,765
There are more than 1000 files in a storage server account and when I use curl to display the list of files, the output is limited to only the first 1000. Just to test, I deleted a file showing up in the GET list and then did a GET again; this time curl displayed 1000 files with an additional file from the full list. Is there any way to tell curl to display the full list instead of limiting it to 1000? Edit: curl -X GET -u <user>:<passwd> <url-full-path-of-container> curl version: curl 7.29.0 (x86_64-redhat-linux-gnu) I am not sure whether the problem is on the server side because I don't really have full privileges to check that. Could it be on the server side too?
{ "source": [ "https://serverfault.com/questions/808765", "https://serverfault.com", "https://serverfault.com/users/134962/" ] }
808,788
The Nginx config I have throws 404 for .php like: ## Any other attempt to access PHP files returns a 404. location ~* ^.+\.php$ { return 404; } However I have some index.php file in subfolder that I want to run. The current config is like: location = /sitename/subpage/index.php { fastcgi_pass phpcgi; #where phpcgi is defined to serve the php files } location = /sitename/subpage2/index.php { fastcgi_pass phpcgi; } location = /sitename/subpage3/index.php { fastcgi_pass phpcgi; } it works perfectly, but the issue is duplicate locations and if there are lots of subpages then the config becomes huge. I tried the wildcard like * and some regex, which says the nginx test passed but doesn't load the page i.e. 404. What I tried are: location = /sitename/*/index.php { fastcgi_pass phpcgi; } location ~* ^/sitename/[a-z]/index.php$ { fastcgi_pass phpcgi; } Is there any way I can have some pathname in the location as regex or wildcard?
The = modifier in location block is an exact match, without any wildcards, prefix matching or regular expressions. That's why it doesn't work. On your regex attempt, [a-z] matches a single character between a and z . That's why it doesn't work for you. You need to have your locations set up like the following. Note the order of location statements. nginx picks the first matching regex condition. location ~ ^/sitename/[0-9a-z]+/index.php$ { fastcgi_pass phpcgi; } location ~ \.php$ { return 404; } I use case sensitive matching here ( ~ modifier instead of ~* ). In the first case, I match the first part of path, then one or more number of alphabetic / number characters and then index.php . You can modify the match range, but remember the + for "one or more" repetitions. The second one matches any URI ending with .php . You don't need the extra characters in your version because of the way regular expressions work.
{ "source": [ "https://serverfault.com/questions/808788", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
808,792
I have a number of ready-to-use network appliances such as routers, print servers, L2/L3 network switches, an IP-PBX, access points and many others. These devices are usually managed through a web portal on port 80 (HTTP), which asks for a username and password. With so many devices to manage, I usually keep the default username and password in case I forget them in the future. However, this habit is risky, as it makes it easy for someone to change the configuration. Is there a way to leave the default username/password as-is but prevent other local network users from making changes to the configuration? Can VLAN segmentation help by grouping all those devices in one VLAN? Doing so seems to prevent workstations from other VLANs from accessing the services of those devices. Or maybe a firewall plus VLANs would help?
The = modifier in location block is an exact match, without any wildcards, prefix matching or regular expressions. That's why it doesn't work. On your regex attempt, [a-z] matches a single character between a and z . That's why it doesn't work for you. You need to have your locations set up like the following. Note the order of location statements. nginx picks the first matching regex condition. location ~ ^/sitename/[0-9a-z]+/index.php$ { fastcgi_pass phpcgi; } location ~ \.php$ { return 404; } I use case sensitive matching here ( ~ modifier instead of ~* ). In the first case, I match the first part of path, then one or more number of alphabetic / number characters and then index.php . You can modify the match range, but remember the + for "one or more" repetitions. The second one matches any URI ending with .php . You don't need the extra characters in your version because of the way regular expressions work.
{ "source": [ "https://serverfault.com/questions/808792", "https://serverfault.com", "https://serverfault.com/users/46327/" ] }
809,093
I am running docker on ubuntu 16.04 and would like to view the logs. However, I am unable to view logs after what I am guessing is some sort of rotation or the logs grow to a certain size. I have not made any changes to my journald.conf, so I am using defaults there. There are containers running so the docker log outputs quite a lot of data. Examples of what I am seeing: systemctl docker status confirms service has been active: since Thu 2016-10-13 18:56:28 UTC However, when I run something like: journalctl -u docker.service --since "2016-10-13 22:00" The only output I get is: -- Logs begin at Fri 2016-10-14 01:18:49 UTC, end at Fri 2016-10-14 16:18:25 UTC. -- I can view the logs in that range as expected. My question is: why can I not view older logs with journalctl, and how can I fix this issue so I can view the logs?
The reason this happens is because of defaults on the size of journald files stored. There is more detail about this in the docs . It's worth reading the whole section I have linked to, but the defaults work like so: journald will use 10% of the disk or 4G, whichever is smaller. journald will leave free 15% of the disk or 4G, whichever is larger. For viewing logs from the last boot, assuming you have Storage=persistent in your journald.conf, as the other answer notes, you can use the --boot=-1 flag on journalctl commands to get logs from just the previous boot. In the case of the OP where they were sure the host had not been rebooted, the loss of logs was simply caused by the SystemMaxUse and/or SystemKeepFree defaults. Note: I'm the OP and this question still has upvotes trickling in, so since I've gained more experience with journald (and rtfm) I am posting this here in the hopes it helps others.
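If you want container logs to stick around longer, the knobs live in /etc/systemd/journald.conf. A hedged example — the sizes below are placeholders to tune for your disk, not recommendations:

[Journal]
Storage=persistent
SystemMaxUse=4G
SystemKeepFree=1G

Then restart journald to apply the limits, and check current usage:

sudo systemctl restart systemd-journald
journalctl --disk-usage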
{ "source": [ "https://serverfault.com/questions/809093", "https://serverfault.com", "https://serverfault.com/users/350989/" ] }
809,117
How do I add a password to a shared folder on Windows Server 2012 that grants access to only 2-3 people in the domain?
The reason this happens is because of defaults on the size of journald files stored. There is more detail about this in the docs . It's worth reading the whole section I have linked to, but the defaults work like so: journald will use 10% of the disk or 4G, whichever is smaller. journald will leave free 15% of the disk or 4G, whichever is larger. For viewing logs from the last boot, assuming you have Storage=persistent in your journald.conf, as the other answer notes, you can use the --boot=-1 flag on journalctl commands to get logs from just the previous boot. In the case of the OP where they were sure the host had not been rebooted, the loss of logs was simply caused by the SystemMaxUse and/or SystemKeepFree defaults. Note: I'm the OP and this question still has upvotes trickling in, so since I've gained more experience with journald (and rtfm) I am posting this here in the hopes it helps others.
{ "source": [ "https://serverfault.com/questions/809117", "https://serverfault.com", "https://serverfault.com/users/377009/" ] }
809,632
I have the following Kubernetes Job configuration: --- apiVersion: batch/v1 kind: Job metadata: name: dbload creationTimestamp: spec: template: metadata: name: dbload spec: containers: - name: dbload image: sdvl3prox001:7001/pbench/tdload command: ["/opt/pbench/loadTpcdsData.sh", "qas0063", "dbc", "dbc", "1"] restartPolicy: Never imagePullSecrets: - name: pbenchregkey status: {} When I do kubectl create -f dbload-deployment.yml --record the job and a pod are created, Docker container runs to completion and I get this status: $ kubectl get job dbload NAME DESIRED SUCCESSFUL AGE dbload 1 1 1h $ kubectl get pods -a NAME READY STATUS RESTARTS AGE dbload-0mk0d 0/1 Completed 0 1h This job is one time deal and I need to be able to rerun it. If I attempt to rerun it with kubectl create command I get this error $ kubectl create -f dbload-deployment.yml --record Error from server: error when creating "dbload-deployment.yml": jobs.batch "dbload" already exists Of course I can do kubectl delete job dbload and then run kubectl create but I'm wondering if I can somehow re-awaken the job that already exists?
No. There is definitely no way to rerun a kubernetes job. You need to delete it first.
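A minimal sketch of the delete-and-recreate cycle, reusing the file name from the question; kubectl replace --force is an alternative that does both steps in one go (assuming the Job spec itself is unchanged apart from wanting a fresh run):

kubectl delete job dbload
kubectl create -f dbload-deployment.yml --record
# or, equivalently:
kubectl replace --force -f dbload-deployment.yml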
{ "source": [ "https://serverfault.com/questions/809632", "https://serverfault.com", "https://serverfault.com/users/14578/" ] }
809,633
I am currently on Ubuntu 16.04, and I have noticed slowdowns across the server in general. Upon viewing htop , I noticed that processes with random commands are spawning, while taking the CPU usage with it; Here is the image that shows an offending process. When trying to view which user started the process, the pts shows as a '?' as shown below: # ps -feww | grep netstat root 7444 1 91 01:29 ? 00:01:37 netstat -antop root 13051 1 0 01:31 ? 00:00:00 netstat -antop root 13063 1 0 01:31 ? 00:00:00 netstat -antop I successfully killed the process with signal 9, but after a few seconds, another process with a completely different command pops up, and ran until I killed it. Rebooting the server did not fix this. Would appreciate some advice on this, thanks!
No. There is definitely no way to rerun a kubernetes job. You need to delete it first.
{ "source": [ "https://serverfault.com/questions/809633", "https://serverfault.com", "https://serverfault.com/users/381320/" ] }
809,643
Whenever I run a command like ufw allow 22 , ufw automatically adds the firewall rules to both ipv4 and ipv6. If I want to only open a port on ipv4, is there a way to do so? Something like ufw allow 22 proto ipv4 .
You just have to use the fuller syntax and specify an address (range). For example, allow connections to TCP port 22 on all IPv4 addresses: ufw allow proto tcp to 0.0.0.0/0 port 22
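For completeness, a couple of hedged variations on the same idea — 203.0.113.0/24 below is just a stand-in for whatever source network you actually trust:

# all IPv4 sources, TCP port 22
ufw allow proto tcp to 0.0.0.0/0 port 22
# only from a specific IPv4 network
ufw allow proto tcp from 203.0.113.0/24 to 0.0.0.0/0 port 22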
{ "source": [ "https://serverfault.com/questions/809643", "https://serverfault.com", "https://serverfault.com/users/381326/" ] }
810,636
I have CentOS 7.2 (guest in VirtualBox, vagrant box centos/7 , no GUI). I see there is a nameserver in file: $ cat /etc/resolv.conf # Generated by NetworkManager nameserver 10.0.2.3 But how to add or replace with new one? I have done this manually directly in the network: $ vi /etc/sysconfig/network-scripts/ifcfg-eth0 PEERDS=no DNS1=8.8.4.4 DNS2=8.8.8.8 And it works. But is there any way to do this through nmcli ? P.S. No nmtui installed (in a selected system).
Here is the command to modify an existing connection: nmcli con mod $connectionName ipv4.dns "8.8.8.8 8.8.4.4" The connection name can be found with: nmcli con . In the question's case, it will be "System eth0". If you want to ignore automatically configured nameservers and search domains, i.e. the settings passed from DHCP: nmcli con mod $connectionName ipv4.ignore-auto-dns yes Finally, to enable the changes, bring the connection down then up: nmcli con down $connectionName nmcli con up $connectionName or restart the NetworkManager service (if you don't want to be disconnected): service NetworkManager restart Verify with cat /etc/resolv.conf . You should not edit /etc/resolv.conf manually, as it is generated by the NetworkManager service and is likely to get overwritten at any given time. Useful nmcli manual
{ "source": [ "https://serverfault.com/questions/810636", "https://serverfault.com", "https://serverfault.com/users/237050/" ] }
811,381
Consider a Wi-Fi network with one access point and two clients, operating in marginal conditions due to range, etc. Client 1 is communicating with Client 2. Obviously the Access Point (AP) must be in range of both (assuming no fancy mesh modes, etc.) for the network to be deemed available, but does the data actually travel through it? That is, does the AP receive the packets from one client and rebroadcast them for the other client to pick up, or does Client 2's radio receive the signals directly as they're transmitted from Client 1 and the AP just provides some sort of arbitration and metadata to help them find each other? I'm particularly interested in how the answer to this would affect the case where the two clients are near to each other and have good radio propagation, but the access point is some distance away.
Yes, the communication is traveling through the access point. In this case the AP is functioning exactly like a switch does in a wired network. It is possible to have two devices communicate directly, without an AP. This is known as Ad Hoc networking.
{ "source": [ "https://serverfault.com/questions/811381", "https://serverfault.com", "https://serverfault.com/users/382775/" ] }
811,712
I have read that it's not OK to use .local in a domain, especially with Microsoft Windows servers. I have also read the Windows Active Directory naming best practices article on ServerFault, which was helpful but didn't completely answer my question regarding "local". I was thinking it was somehow a reserved keyword and would present problems. I own the domain keiboom.com and set up my Active Directory domain as local.keiboom.com . Can this create problems?
No, that's fine. The warning is against using domain.local as your AD domain name. local.domain.tld is perfectly acceptable.
{ "source": [ "https://serverfault.com/questions/811712", "https://serverfault.com", "https://serverfault.com/users/380928/" ] }
811,912
Can nginx location blocks match a URL query string? For example, what location block might match HTTP GET request GET /git/sample-repository/info/refs?service=git-receive-pack HTTP/1.1
Can nginx location blocks match a URL query string? Short answer : No. Long answer : There is a workaround if we have only a handful of such location blocks. Here's a sample workaround for 3 location blocks that need to match specific query strings: server { #... common definitions such as server, root location / { error_page 418 = @queryone; error_page 419 = @querytwo; error_page 420 = @querythree; if ( $query_string = "service=git-receive-pack" ) { return 418; } if ( $args ~ "service=git-upload-pack" ) { return 419; } if ( $arg_somerandomfield = "somerandomvaluetomatch" ) { return 420; } # do the remaining stuff # ex: try_files $uri =404; } location @queryone { # do stuff when queryone matches } location @querytwo { # do stuff when querytwo matches } location @querythree { # do stuff when querythree matches } } You may use $query_string, $args or $arg_fieldname. All will do the job. You may know more about error_page in the official docs . Warning: Please be sure not to use the standard HTTP codes .
{ "source": [ "https://serverfault.com/questions/811912", "https://serverfault.com", "https://serverfault.com/users/53995/" ] }
811,928
I have followed several tutorials like this one: https://technet.microsoft.com/en-us/library/gg188595.aspx in an attempt to set up my local ADFS server to give my authenticated application an e-mail claim. However, the only claims I have received from the authenticated application are below: http://www.w3.org/2001/XMLSchema#string http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows http://www.w3.org/2001/XMLSchema#dateTime What I am attempting to do is set up our ADFS server as an IdP: our local app directs the user to it, the user signs in if needed, and then the application verifies authentication and reads the e-mail claim to know who the user is.
Can nginx location blocks match a URL query string? Short answer : No. Long answer : There is a workaround if we have only a handful of such location blocks. Here's a sample workaround for 3 location blocks that need to match specific query strings: server { #... common definitions such as server, root location / { error_page 418 = @queryone; error_page 419 = @querytwo; error_page 420 = @querythree; if ( $query_string = "service=git-receive-pack" ) { return 418; } if ( $args ~ "service=git-upload-pack" ) { return 419; } if ( $arg_somerandomfield = "somerandomvaluetomatch" ) { return 420; } # do the remaining stuff # ex: try_files $uri =404; } location @queryone { # do stuff when queryone matches } location @querytwo { # do stuff when querytwo matches } location @querythree { # do stuff when querythree matches } } You may use $query_string, $args or $arg_fieldname. All will do the job. You may know more about error_page in the official docs . Warning: Please be sure not to use the standard HTTP codes .
{ "source": [ "https://serverfault.com/questions/811928", "https://serverfault.com", "https://serverfault.com/users/128123/" ] }
812,156
I have a Redis cluster with the following nodes: 192.168.0.14:6379 master (slots from 0 to 16383) 192.168.0.15:6379 slave (slots from 0 to 16383) 192.168.0.16:6379 master (without slots) The documentation says that any node can redirect queries to the proper node, but I cannot get requests redirected from the 192.168.0.16:6379 master node. Here is what I tried: 192.168.0.16:6379> set myKey myValue (error) MOVED 16281 192.168.0.14:6379 192.168.0.16:6379> get myKey (error) MOVED 16281 192.168.0.14:6379 It neither writes nor reads. When I try to get "myKey" from 192.168.0.14:6379 it shows the following: 127.0.0.1:6379> get myKey (nil) What is wrong with my requests? I am using Redis server version 3.2.5
The node did redirect you. As the documentation explains, the client is expected to connect to the specified node to retry the request. The server does not do this. If you're using redis-cli , then you must use the -c option if you want it to follow these redirects.
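A quick illustration with the addresses from the question — the -c flag is what makes redis-cli follow MOVED/ASK redirects instead of surfacing them as errors:

redis-cli -c -h 192.168.0.16 -p 6379
# inside the prompt, set/get now transparently hop to 192.168.0.14 when needed:
#   set myKey myValue
#   get myKey

Client libraries have an equivalent cluster mode; a plain, non-cluster-aware connection will always just report the MOVED error and leave the retry to you.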
{ "source": [ "https://serverfault.com/questions/812156", "https://serverfault.com", "https://serverfault.com/users/335036/" ] }
812,489
Today is November 1st 2016, or in (unambiguous) numerals, 2016-11-01. I have a user cron job set up like this: # m h dom mon dow command 33 3 1 */2 * /home/user/... It is supposed to run every other month on the first of the month at 3:33am, no matter what day of the week that is, but for some reason it ran today, even though 11 is not divisible by 2. Can someone explain this to me? Is my assumption of divisibility by 2 wrong? EDIT: I forgot to mention, I am running cron version "3.0pl1-127+deb8u1" on a Debian 8.6 "Jessie" machine.
The / is not an arithmetic expression, instead it describes "step values" over the allowed range of values. So, since months always start with 1 instead of 0 , /2 would mean "take every other value", resulting in (1, 3, 5, 7, 9, 11). This is also decribed in the manual page, although this is not terrible clear and easy to understand: Step values can be used in conjunction with ranges. Following a range with "<number>" specifies skips of the number’s value through the range. For example, "0-23/2" can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is "0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an asterisk, so if you want to say "every two hours", just use "*/2".
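If the intent really was "even-numbered months", the step has to be anchored with an explicit range rather than *. A sketch based on the crontab from the question:

# m h dom mon dow command
33 3 1 2-12/2 * /home/user/...   # Feb, Apr, Jun, Aug, Oct, Dec
33 3 1 1-11/2 * /home/user/...   # Jan, Mar, May, Jul, Sep, Nov (what */2 already does)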
{ "source": [ "https://serverfault.com/questions/812489", "https://serverfault.com", "https://serverfault.com/users/293542/" ] }
812,584
I'm creating a systemd .service file and I need help understanding the difference between Requires= and After= . The man page says that Requires= "Configures requirement dependencies on other units." and After= "Configures ordering dependencies between units." What's the difference?
After= configures service order (do X only after Y), while Requires= state dependencies. If you don't specify an order, a service depending on another would be started at the same time as the one it is depending on. Also, the way I understand it (although I can't test that now and don't find a reference), After= is "loose coupling", which effectively means that a service with an After statement would still run even if the one named in the After= line isn't started at all, while Requires= would prevent its start if the requirement isn't met. Citing https://www.freedesktop.org/software/systemd/man/systemd.unit.html : Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated. This option may be specified more than once or multiple space-separated units may be specified in one option in which case requirement dependencies for all listed names will be created. Note that requirement dependencies do not influence the order in which services are started or stopped. This has to be configured independently with the After= or Before= options. If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated. Often, it is a better choice to use Wants= instead of Requires= in order to achieve a system that is more robust when dealing with failing services. and Before=, After= A space-separated list of unit names. Configures ordering dependencies between units. If a unit foo.service contains a setting Before=bar.service and both units are being started, bar.service's start-up is delayed until foo.service is started up. Note that this setting is independent of and orthogonal to the requirement dependencies as configured by Requires=. It is a common pattern to include a unit name in both the After= and Requires= option, in which case the unit listed will be started before the unit that is configured with these options. This option may be specified more than once, in which case ordering dependencies for all listed names are created. After= is the inverse of Before=, i.e. while After= ensures that the configured unit is started after the listed unit finished starting up, Before= ensures the opposite, i.e. that the configured unit is fully started up before the listed unit is started. Note that when two units with an ordering dependency between them are shut down, the inverse of the start-up order is applied. i.e. if a unit is configured with After= on another unit, the former is stopped before the latter if both are shut down. Given two units with any ordering dependency between them, if one unit is shut down and the other is started up, the shutdown is ordered before the start-up. It doesn't matter if the ordering dependency is After= or Before=. It also doesn't matter which of the two is shut down, as long as one is shut down and the other is started up. The shutdown is ordered before the start-up in all cases. If two units have no ordering dependencies between them, they are shut down or started up simultaneously, and no ordering takes place.
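A minimal illustrative unit showing the two directives used together — foo.service and bar.service are placeholders, not units that exist on your system:

# foo.service
[Unit]
Description=Foo (needs bar up first)
Requires=bar.service
After=bar.service

[Service]
ExecStart=/usr/bin/foo

[Install]
WantedBy=multi-user.target

With only Requires=, both units would be started in parallel; adding After= is what makes foo wait for bar to finish starting. Swap Requires= for Wants= if foo should still start when bar fails.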
{ "source": [ "https://serverfault.com/questions/812584", "https://serverfault.com", "https://serverfault.com/users/6472/" ] }
812,585
So, it recently dawned on me that since I have 3 GPS clocks in my network, I could, technically, give back a little and serve time to the rest of the world. So far I've not seen any downsides to this idea, but I have the following questions: Can I virtualize this? I'm not going to spend money and time on standing up hardware for this, so virtualization is a must. Since the servers will have access to three stratum 1 sources, I can't see how this could be a problem provided the ntpd config is correct. What kind of traffic does a public NTP server (part of pool.ntp.org) normally see? And how big do the VMs need to be? ntpd shouldn't be too resource intensive as far as I can gather, but I'd rather know beforehand. What security aspects are there to this? I'm thinking of just installing ntpd on two VMs in the DMZ, allowing only NTP in through the firewall, and only NTP out from the DMZ to the internal NTP servers. There also seem to be some ntpd settings recommended by the NTP pool page, but are they sufficient? https://www.ntppool.org/join/configuration.html They recommend not having the LOCAL clock driver configured; is this equivalent to removing the LOCAL time source configuration from the config files? Anything else to consider?
Firstly, good for you; it's a helpful and public-spirited thing to do. That said, and given your clarification that you're planning on creating one or more DMZ VMs which will sync to and make publicly-available the time from your three Meinberg GPS-enabled stratum-1 (internal) servers: Edit : Virtualisation comes up for discussion on the pool list from time to time; a recent one was in July 2015, which can be followed starting from this email . Ask Bjørn Hansen, the project lead, did post to the thread , and did not speak out against virtualisation. Clearly a number of pool server operators are virtualising right now, so I don't think anyone will shoot you for it, and as one poster makes clear, if your server(s) are unreliable the pool monitoring system will simply remove them from the pool. KVM seems to be the preferred virtualisation technology; I didn't find anyone specifically using VMWare, so cannot comment on how "honest" a virtualisation that is. Perhaps the best summary on the subject said My pool servers are virtualized with KVM on my very own KVM hosts. Monitoring says, the server is pretty accurate and provides stable time for the last 2-3 years. But I wouldn't setup a pool server on a leased virtual server from another provider. This is the daily average number of distinct clients per second I see on my pool server (which is in the UK, European and global zones) over the past year: This imposes nearly no detectable system load ( ntpd seems to use between 1% and 2% of a CPU, most of the time). Note that, at some point during the year, load briefly peaked at nearly a thousand clients per second (Max: 849.27); I do monitor for excessive load, and the alarms didn't all go off, so I can only note that even that level of load didn't cause problems, albeit briefly. The project-recommended configurations are best-practice, and work for me. I also use iptables to rate-limit clients to two inbound packets in a rolling ten-second window (it's amazing how many rude clients there are out there, who think that they should be free to burst in order to set their own clocks quickly). Or remove any lines referring to server addresses starting with 127.127 . The best practice guidelines also recommend more than three clocks, so you might want to pick a couple of other public servers, or specific pool servers, in addition to your three stratum-1 servers. I'd also note that if you're planning to put both these VMs on the same host hardware, you should probably just run the one, but double the bandwidth declared to the pool (ie, accept twice as many queries as you otherwise would).
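For the rate-limiting point, one common way to do it is with the iptables recent module — a sketch, not necessarily the exact rules used here, with the numbers corresponding to roughly two packets per source in a rolling ten-second window:

iptables -A INPUT -p udp --dport 123 -m recent --name NTP --update --seconds 10 --hitcount 3 -j DROP
iptables -A INPUT -p udp --dport 123 -m recent --name NTP --set -j ACCEPT

Well-behaved pool clients poll far less often than that, so normal traffic is unaffected while bursty clients are quietly dropped.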
{ "source": [ "https://serverfault.com/questions/812585", "https://serverfault.com", "https://serverfault.com/users/335559/" ] }
812,606
I have a site-to-site VPN setup with StrongSwan between AWS and Azure. In AWS we use Route53 in our VPC to map something like production-db.internal.com to the the AWS-provided name for our RDS cluster. How can I set up my DNS servers for the Azure Virtual Network to able to resolve production-db.internal.com to the local ip address of the RDS instance in the AWS VPC?
Firstly, good for you; it's a helpful and public-spirited thing to do. That said, and given your clarification that you're planning on creating one or more DMZ VMs which will sync to and make publicly-available the time from your three Meinberg GPS-enabled stratum-1 (internal) servers: Edit : Virtualisation comes up for discussion on the pool list from time to time; a recent one was in July 2015, which can be followed starting from this email . Ask Bjørn Hansen, the project lead, did post to the thread , and did not speak out against virtualisation. Clearly a number of pool server operators are virtualising right now, so I don't think anyone will shoot you for it, and as one poster makes clear, if your server(s) are unreliable the pool monitoring system will simply remove them from the pool. KVM seems to be the preferred virtualisation technology; I didn't find anyone specifically using VMWare, so cannot comment on how "honest" a virtualisation that is. Perhaps the best summary on the subject said My pool servers are virtualized with KVM on my very own KVM hosts. Monitoring says, the server is pretty accurate and provides stable time for the last 2-3 years. But I wouldn't setup a pool server on a leased virtual server from another provider. This is the daily average number of distinct clients per second I see on my pool server (which is in the UK, European and global zones) over the past year: This imposes nearly no detectable system load ( ntpd seems to use between 1% and 2% of a CPU, most of the time). Note that, at some point during the year, load briefly peaked at nearly a thousand clients per second (Max: 849.27); I do monitor for excessive load, and the alarms didn't all go off, so I can only note that even that level of load didn't cause problems, albeit briefly. The project-recommended configurations are best-practice, and work for me. I also use iptables to rate-limit clients to two inbound packets in a rolling ten-second window (it's amazing how many rude clients there are out there, who think that they should be free to burst in order to set their own clocks quickly). Or remove any lines referring to server addresses starting with 127.127 . The best practice guidelines also recommend more than three clocks, so you might want to pick a couple of other public servers, or specific pool servers, in addition to your three stratum-1 servers. I'd also note that if you're planning to put both these VMs on the same host hardware, you should probably just run the one, but double the bandwidth declared to the pool (ie, accept twice as many queries as you otherwise would).
{ "source": [ "https://serverfault.com/questions/812606", "https://serverfault.com", "https://serverfault.com/users/367578/" ] }
812,719
I am running Arch Linux 4.8.4-1 on a 64bit installation. I installed MariaDB via pacman . When I try to start it with systemctl start mysqld , it gives me Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details. The output of systemctl status mariadb.service is ● mariadb.service - MariaDB database server Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2016-11-02 16:55:12 IST; 3min 6s ago Process: 5123 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE) Process: 5070 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set Process: 5067 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) Main PID: 5123 (code=exited, status=1/FAILURE) Status: "MariaDB server is down" Nov 02 16:55:11 pranav-laptop systemd[1]: Starting MariaDB database server... Nov 02 16:55:12 pranav-laptop mysqld[5123]: 2016-11-02 16:55:12 140082509282496 [Note] /usr/sbin/mysqld (mysqld 10.1.18-MariaDB) starting as process 5 Nov 02 16:55:12 pranav-laptop mysqld[5123]: 2016-11-02 16:55:12 140082509282496 [Warning] Can't create test file /var/lib/mysql/pranav-laptop.lower-te Nov 02 16:55:12 pranav-laptop mysqld[5123]: [90B blob data] Nov 02 16:55:12 pranav-laptop systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE Nov 02 16:55:12 pranav-laptop systemd[1]: Failed to start MariaDB database server. Nov 02 16:55:12 pranav-laptop systemd[1]: mariadb.service: Unit entered failed state. Nov 02 16:55:12 pranav-laptop systemd[1]: mariadb.service: Failed with result 'exit-code'. If I need to post anything else, let me know... UPDATE: After trying Jérémy Munoz's comment, mysql still doesn't start, but I get a different systemctl status mariadb.service ● mariadb.service - MariaDB database server Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2016-11-02 21:03:24 IST; 4min 7s ago Process: 14350 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE) Process: 14296 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl se Process: 14294 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) Main PID: 14350 (code=exited, status=1/FAILURE) Nov 02 21:03:24 pranav-laptop mysqld[14350]: 2016-11-02 21:03:24 140412958252224 [ERROR] Could not open mysql.plugin table. Some plugins may be not lo Nov 02 21:03:24 pranav-laptop mysqld[14350]: 2016-11-02 21:03:24 140412958235392 [Warning] Failed to load slave replication state from table mysql.gti Nov 02 21:03:24 pranav-laptop mysqld[14350]: 2016-11-02 21:03:24 140412362684160 [Note] InnoDB: Dumping buffer pool(s) not yet started Nov 02 21:03:24 pranav-laptop mysqld[14350]: 2016-11-02 21:03:24 140412958252224 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' d Nov 02 21:03:24 pranav-laptop mysqld[14350]: 2016-11-02 21:03:24 140412958252224 [Note] Server socket created on IP: '::'. 
Nov 02 21:03:24 pranav-laptop mysqld[14350]: 2016-11-02 21:03:24 140412958252224 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mys Nov 02 21:03:24 pranav-laptop systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE Nov 02 21:03:24 pranav-laptop systemd[1]: Failed to start MariaDB database server. Nov 02 21:03:24 pranav-laptop systemd[1]: mariadb.service: Unit entered failed state. Nov 02 21:03:24 pranav-laptop systemd[1]: mariadb.service: Failed with result 'exit-code'. UPDATE: After running mysql_install_db , I get this error: FATAL ERROR: Could not find ./bin/my_print_defaults If you compiled from source, you need to either run 'make install' to copy the software into the correct location ready for operation. If you don't want to do a full install, you can use the --srcddir option to only install the mysql database and privilege tables If you are using a binary release, you must either be at the top level of the extracted archive, or pass the --basedir option pointing to that location. The latest information about mysql_install_db is available at https://mariadb.com/kb/en/installing-system-tables-mysql_install_db /etc/mysql/my.cnf
If you don't have any real data in your database, then clear everything in /var/lib/mysql . After that, try running the command mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql again to initialize the database directory.
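Spelled out as a full sequence on a systemd distribution like Arch, and assuming there really is nothing in the datadir worth keeping (the rm step is destructive):

sudo systemctl stop mariadb
sudo rm -rf /var/lib/mysql/*
sudo mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
sudo systemctl start mariadb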
{ "source": [ "https://serverfault.com/questions/812719", "https://serverfault.com", "https://serverfault.com/users/383837/" ] }
812,724
I'm trying to solve an architecture design puzzle: designing an infrastructure that keeps data and servers as secured/hidden as possible. Here are the requirements: *I want to hide the internal design of my infra (several data servers with public and private hosts) *I want to access each service using the same IP address, with the query forwarded to the right server based on something (cookie, URI, port or whatever) *access to the data services must be enforced with SSL/TLS encryption After studying these requirements carefully, I was thinking about using a reverse proxy and granting access to all data services only through the reverse proxy server; another pro of a reverse proxy is that access authentication is enforced in one place along with SSL/TLS encryption, with no need to configure each endpoint separately. My real issue is that I haven't found any reverse proxy that can forward TCP traffic (for example MySQL requests), and the same goes for static load balancing algorithms, which seem to be supported only for HTTP requests (HAProxy, for instance). Any idea how to solve this issue? Thanks to all
If you haven't any real data in your database then clear all in /var/lib/mysql . After that try again to run command mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql to initialize database directory.
{ "source": [ "https://serverfault.com/questions/812724", "https://serverfault.com", "https://serverfault.com/users/383838/" ] }
814,715
After some searching I've come up totally empty handed on if there is any standard (or non-standard for that matter) specification or best practice for specifying the IMAP server for a domain name. I.e. if I have an account such as "[email protected]", and I wish to read my mail via IMAP, is there any DNS record which would indicate to my mail client which mail server it should be contacting? I've never seen anything like this, and virtually all email setup instructions I've seen include an exact host name for IMAP, e.g. "mail.example.com" or "imap.example.com". I guess the assumption is that the employees or other users of example.com can find out what server to use from their administrator. However if example.com were to have thousands of accounts, this would become burdensome. It would seem very useful to just enter your email address "[email protected]" and have it look up the IMAP server name in DNS based on the email's domain name (not dissimilar to how an MX record works for SMTP). Anyone heard of anything like this?
From a DNS perspective you have SRV DNS records, which allow the use of DNS for publishing services and service discovery. Their main use is to allow services to run easily on non-standard ports and to reduce the configuration burden when setting up clients. An SRV record has the following form: _Service._Protocol.Name. TTL Class SRV Priority Weight Port Target and one for IMAP is defined in RFC 6186 and would look like: _imap._tcp.example.com. 3600 IN SRV 0 10 143 my-imap-host.example.com. or _imaps._tcp.example.com. 3600 IN SRV 0 10 993 my-imaps-host.example.com. Most email clients don't specifically look for an IMAP server first though, but use auto discovery to derive email client settings from the email address a user enters. If a user enters [email protected], depending on the client this typically involves either an _autodiscover._tcp.example.com. SRV record (such as used by MS Exchange and Outlook), an actual host called autoconfig.example.com. , or more. A pretty good write-up is found here: https://web.archive.org/web/20210402044628/https://developer.mozilla.org/en-US/docs/Mozilla/Thunderbird/Autoconfiguration
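To check whether a given mail domain actually publishes such records, a quick lookup is enough — example.com is just the placeholder domain from the examples above:

dig +short _imap._tcp.example.com SRV
dig +short _imaps._tcp.example.com SRV
dig +short _submission._tcp.example.com SRV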
{ "source": [ "https://serverfault.com/questions/814715", "https://serverfault.com", "https://serverfault.com/users/187004/" ] }
814,767
I am uploading a 26Gb file, but I am getting: 413 Request Entity Too Large I know, this is related to client_max_body_size , so I have this parameter set to 30000M . location /supercap { root /media/ss/synology_office/server_Seq-Cap/; index index.html; proxy_pass http://api/supercap; } location /supercap/pipe { client_max_body_size 30000M; client_body_buffer_size 200000k; proxy_pass http://api/supercap/pipe; client_body_temp_path /media/ss/synology_office/server_Seq-Cap/tmp_nginx; } But I still get this error when the whole file has been uploaded.
Modify NGINX Configuration File sudo nano /etc/nginx/nginx.conf Search for this variable: client_max_body_size . If you find it, just increase its size to 100M, for example. If it doesn’t exist, then you can add it inside and at the end of http client_max_body_size 100M; Test your nginx config changes. sudo service nginx configtest Restart nginx to apply the changes. sudo service nginx restart Modify PHP.ini File for Upload Limits It’s not needed on all configurations, but you may also have to modify the PHP upload settings as well to ensure that nothing is going out of limit by php configurations. If you are using PHP5-FPM use following command, sudo nano /etc/php5/fpm/php.ini If you are using PHP7.0-FPM use following command, sudo nano /etc/php/7.0/fpm/php.ini Now find following directives one by one upload_max_filesize post_max_size and increase its limit to 100M, by default they are 8M and 2M. upload_max_filesize = 100M post_max_size = 100M Finally save it and restart PHP. PHP5-FPM users use this, sudo service php5-fpm restart PHP7.0-FPM users use this, sudo service php7.0-fpm restart It will work fine !!!
{ "source": [ "https://serverfault.com/questions/814767", "https://serverfault.com", "https://serverfault.com/users/330755/" ] }
815,043
I am cloning a git repository in a script like this: git clone https://user:[email protected]/name/.git This works, but my username and my password! are now stored in the origin url in .git/config . How can I prevent this, but still do this in a script (so no manual password input)?
The method that I use is to actually use a git pull instead of a clone. The script would look like: mkdir repo cd repo git init git config user.email "email" git config user.name "user" git pull https://user:[email protected]/name/repo.git master This will not store your username or password in .git/config . However, unless other steps are taken, the plaintext username and password will be visible while the process is running from commands that show current processes (e.g. ps ). As brought up in the comments, since this method is using HTTPS you must URL-encode any special characters that may appear in your password as well. One further suggestion I would make (if you can't use ssh) is to actually use an OAuth token instead of plaintext username/password as it is slightly more secure. You can generate an OAuth token from your profile settings: https://github.com/settings/tokens . Then using that token the pull command would be git pull https://$OAUTH_TOKEN:[email protected]/name/repo.git master
{ "source": [ "https://serverfault.com/questions/815043", "https://serverfault.com", "https://serverfault.com/users/385784/" ] }
815,538
I have a CentOS 7 system and when I login with putty or ssh there is a long delay before I get the password prompt. I ran ssh -v and I found that it gets up to this: debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received and then it sits there for 1-2 minutes and then this output blasts out: debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information No Kerberos credentials available debug1: Unspecified GSS failure. Minor code may provide more information No Kerberos credentials available debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. Minor code may provide more information No Kerberos credentials available debug1: Next authentication method: publickey debug1: Trying private key: /home/motor/.ssh/id_rsa debug1: Trying private key: /home/motor/.ssh/id_dsa debug1: Trying private key: /home/motor/.ssh/id_ecdsa debug1: Trying private key: /home/motor/.ssh/id_ed25519 debug1: Next authentication method: password And then the password prompt comes out. This happens no matter which user is logging in. It only happens on the 1 system. I have 5 others where it proceeds without the delay. There are no disk or memory or any other errors in the logs. What could be causing it to delay like this? UPDATE: I tried setting GSSAPIAuthentication to no and that did not solve the issue. I ran ssh again, this time with -vvv. This output came out and then it hung: debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/motor/.ssh/id_rsa ((nil)), debug2: key: /home/motor/.ssh/id_dsa ((nil)), debug2: key: /home/motor/.ssh/id_ecdsa ((nil)), debug2: key: /home/motor/.ssh/id_ed25519 ((nil)), After 1-2 minutes this came out: debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/motor/.ssh/id_rsa debug3: no such identity: /home/motor/.ssh/id_rsa: No such file or directory debug1: Trying private key: /home/motor/.ssh/id_dsa debug3: no such identity: /home/motor/.ssh/id_dsa: No such file or directory debug1: Trying private key: /home/motor/.ssh/id_ecdsa debug3: no such identity: /home/motor/.ssh/id_ecdsa: No such file or directory debug1: Trying private key: /home/motor/.ssh/id_ed25519 debug3: no such identity: /home/motor/.ssh/id_ed25519: No such file or directory debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password And then the password prompt.
In your /etc/ssh/sshd_config on the remote server you should change the option GSSAPIAuthentication to no. Restart sshd and you should be good to go. edit: GSSAPI (Generic Security Service Application Programming Interface) is essentially an API that utilises Kerberos libraries to provide strong network encrypton. Unless there is a particular reason why you need GSSAPI enabled this method should resolve the issue you are having. edit2: For clarity, it may also be possible that the reverse DNS check is timing out (specifically checking the connecting hosts' PTR record) . SSH does this check as a matter of course because it acts as a security measure to validate the connecting host. Saying that, the process does not add much in terms of real security because realistically there's a significant proportion of hosts that do not have a PTR anyway. There are three ways to fix this issue: 1). You can amend the sshd_config file to use the UseDNS no parameter. This will stop the reverse DNS lookup. It is safe to do. 2). Add a PTR record in the appropriate DNS system for the host that's slow in connecting. 3). Add a manual entry into the OS hosts file with the relevant entry. Hope that helps!
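Concretely, on the CentOS 7 box from the question this usually comes down to two lines in the server-side config — try GSSAPIAuthentication first, and add UseDNS only if the delay persists:

# /etc/ssh/sshd_config
GSSAPIAuthentication no
UseDNS no

sudo systemctl restart sshd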
{ "source": [ "https://serverfault.com/questions/815538", "https://serverfault.com", "https://serverfault.com/users/357327/" ] }
815,841
I would like to understand if multiple TXT records for the same subdomain are ok or could lead to issues. In particular, we have the requirement for one SPF record and one Google Domain Verification record on the root domain. In AWS Route 53 they explicitly support this in the following way: Enter multiple values on separate lines. Enclose text in quotation marks. Example: "Sample Text Entries" "Enclose entries in quotation marks" This way a single TXT field can contain both the SPF and Google Domain Verification records. When I asked name.com on the other hand they suggested to add two separate TXT records as the Route 53 method is not supported.
The way described is the way you create multiple records on Route 53. Entering two values in the textarea separated by a newline will result in two distinct records in the DNS. This is why Amazon call it a "record set" - it is a set of records.
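As an illustration (the values are made up), the single TXT record set at the zone apex would simply contain both strings, one per line in the Route 53 console:

"v=spf1 include:_spf.example.com ~all"
"google-site-verification=EXAMPLE_VERIFICATION_TOKEN"

A resolver querying TXT for that name gets both strings back in one answer, which is exactly what SPF checkers and Google's verifier expect.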
{ "source": [ "https://serverfault.com/questions/815841", "https://serverfault.com", "https://serverfault.com/users/250867/" ] }
816,820
We have to (want to...) rename our DB Subnet Groups on AWS, so I created a new DB Subnet Group with the same settings as the old one. When I try to switch the group on the "Modify" tab in the AWS GUI and hit apply, AWS returns: You cannot move DB instance XXX to subnet group XXX. The specified DB subnet group and DB instance are in the same VPC. Choose a DB subnet group in different VPC than the specified DB instance and try again. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidVPCNetworkStateFault; Request ID: 7d46c84c-b22a-11e6-be20-b5bb6bd6cc6d) Any suggestions? Or is it just not possible without recreating the whole instance?
I had this same question a few months back, and ended up contacting AWS (I have Enterprise support). This was the result: Unfortunately, moving DB instance subnet group to another subnet group in the same VPC is not supported at this time. I realize our documentation says that it is supported, but that is an error. We are currently working on updating our documentation to reflect this and I apologize for the mis-communication. However, I do have a workaround, you can create a new temporary VPC, update the subnet group to point to that temporary VPC, then once that process completes, change the subnet group to point back to your new subnet group. Alternatively, another way to do it would be to create a database snapshot, and spin up a new instance from the snapshot. You might want to look at both approaches. Both methods will probably cause you some downtime unless you are able to run your application in read only mode for a while, or have a method of replaying transactions on the restored snapshot.
{ "source": [ "https://serverfault.com/questions/816820", "https://serverfault.com", "https://serverfault.com/users/28736/" ] }
817,005
When I run lsb_release on Debian 8, the following error appears: No LSB modules are available. Is there a missing file causing this problem?
As the error message says, the lsb_release command is installed but the LSB modules aren't. Use this command to solve the problem: apt-get install lsb-core I suggest using lsb_release -a instead of lsb_release . It shows more useful output.
{ "source": [ "https://serverfault.com/questions/817005", "https://serverfault.com", "https://serverfault.com/users/381009/" ] }
817,552
I have a drop-in for systemd-machined at the path /etc/systemd/system/systemd-machined.service.d/10-machined-pid-file.conf . when I run systemctl status systemd-machined I do see the lines Drop-In: /etc/systemd/system/systemd-machined.service.d └─10-machined-pid-file.conf However, I do not see a PID file in /var/run/. Which based on my drop-in: [Serivce] PIDFile=/var/run/machined.pid I believe there should not be any issue creating that PID file. Is there something I am missing?
The PIDFile= setting does not create a PID file. That is still up to the service itself to do, the same as it has been for the last 40 years. Rather, this option tells systemd where to find an existing PID file (if any). In most cases it's not required at all, as systemd will keep services in their own cgroups and does not need a PID file to keep track of them. However, systemd will delete a PID file when the service exits, if the service fails to clean up after itself. From the documentation : Takes an absolute file name pointing to the PID file of this daemon. Use of this option is recommended for services where Type= is set to forking . systemd will read the PID of the main process of the daemon after start-up of the service. systemd will not write to the file configured here, although it will remove the file after the service has shut down if it still exists.
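For context, PIDFile= normally appears alongside Type=forking, roughly like this — mydaemon and its paths are placeholders for whatever traditional forking daemon you are wrapping:

[Service]
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/sbin/mydaemon --pidfile /run/mydaemon.pid

The daemon itself writes /run/mydaemon.pid; systemd only reads it to find the main PID, and cleans the file up on shutdown if the daemon leaves it behind.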
{ "source": [ "https://serverfault.com/questions/817552", "https://serverfault.com", "https://serverfault.com/users/295515/" ] }
817,651
I have two AWS accounts. The master account has example.com as a Hosted Zone, which then has a number of record sets (i.e. api.example.com and kibana.example.com). A second account will be managing testing.example.com as a Hosted Zone, with the same set of record sets (i.e. api.testing.example.com and kibana.testing.example.com). How do I tell the master account to refer requests for .testing.example.com down to the child account? I don't want to change the master account as I want to use the same Cloud Formation templates in both 'Live' and 'Test'. I've set the two up as above and it does not work ( api.testing.example.com does not resolve). I've also tried setting the testing.example.com NS record in the master account to the one specified in the child account(1). Alas, this is not something I've done before and Google searches are not returning anything. 1) I messed this up, and this is the answer. See below.
How to I tell the master account to push requests for .testing.example.com down to the child account. The requests are referred, not pushed, but you can achieve the desired outcome by delegating the subdomain to a different set of Route 53 servers from those that host the parent zone. Look at the new hosted zone you created for testing.example.com. This can be in the same AWS account, a different AWS account... any AWS account. There's nothing here that is "account" related. This uses standard DNS configuration. The whole of DNS is a hierarchy. The global root can tell you where to find com , and the com servers can tell you where to find example.com , and it's nothing materially different for example.com to tell you where to find testing.example.com instead of giving you a direct answer. Note the 4 name servers that Route 53 assigned to the testing.example.com hosted zone. Verify that they are all different than the ones assigned to the example.com hosted zone. (For any of them to be the same should be impossible, but verify this.) Now, back in the example.com zone, create a new resource record, with hostname testing , using record type NS , and enter the 4 name servers that Route 53 assigned to testing.example.com , in the box below. Now, when a request for testing.example.com and anything below it arrives at one of the Route 53 servers handling example.com, the reply will not be the answer from testing.example.com -- the reply will provide the requester with the 4 NS records associated with testing.example.com and an answer equivalent to "I don't know, but try asking one of these guys." That's how it's done.
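The resulting delegation in the parent zone ends up looking something like this — the awsdns hostnames are placeholders; use the four name servers Route 53 actually assigned to the testing.example.com hosted zone:

testing.example.com.  172800  IN  NS  ns-111.awsdns-11.com.
testing.example.com.  172800  IN  NS  ns-222.awsdns-22.net.
testing.example.com.  172800  IN  NS  ns-333.awsdns-33.org.
testing.example.com.  172800  IN  NS  ns-444.awsdns-44.co.uk.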
{ "source": [ "https://serverfault.com/questions/817651", "https://serverfault.com", "https://serverfault.com/users/60589/" ] }
817,989
If I use something like Ansible or Puppet, and I only have two servers, is that defeating the purpose of using these products? I thought that if I configured one server, I could use one of these to duplicate it on the other.
Nope, it's not defeating the purpose. I, in fact, use Ansible to set up single servers for hobby/side-project use quite frequently. It allows me to keep a version-controlled, repeatable, self-documenting configuration for the server.
{ "source": [ "https://serverfault.com/questions/817989", "https://serverfault.com", "https://serverfault.com/users/15827/" ] }
817,992
When creating an instance on openstack, it is automatically assigned an IP address on the subnet. I have an instance that has a bad image. The network is configured for the given IP address. Is there a way to change the image of an instance? I have tried rebuilding but the bad image is still there. Thanks I tried running the following: nova --debug boot --flavor 17172145-c56e-4407-8f6b-5273fa19634d --image 41618691-aa09-4cf1-90ba-fdb4a742da87 --access-ip-v4 10.105.5.81 --access-ip-v6 10.105.5.81 --security-groups http_access TestingBoot To get the following error messages returned: DEBUG (shell:984) Not found (HTTP 404) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 982, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 909, in main args.func(self.cs, args) File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 686, in do_boot boot_args, boot_kwargs = _boot(cs, args) File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 281, in _boot image = _find_image(cs, args.image) File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 2350, in _find_image raise exceptions.CommandError(six.text_type(e)) CommandError: Not found (HTTP 404) ERROR (CommandError): Not found (HTTP 404) I also found another error further"up" the debug log: RESP BODY: 404 Not Found The resource could not be found.
Nope, it's not defeating the purpose. I, in fact, use Ansible to set up single servers for hobby/side-project use quite frequently. It allows me to keep a version-controlled, repeatable, self-documenting configuration for the server.
{ "source": [ "https://serverfault.com/questions/817992", "https://serverfault.com", "https://serverfault.com/users/364862/" ] }
817,996
I have a VPS that I am using to host a Wordpress site for which I'd like to restrict access. The end goal is to restrict SSH access to two IPs, and restrict everything inbound but the ports specified in the rules I have below. They don't seem to be saving while rebooting the server (reboot command). The rules seem to apply after being run, but not after server reboot. I have verified they are being written to /etc/sysconfig/iptables . Here is what I am adding to the file at the console: iptables -F; iptables -X; iptables -Z iptables -A INPUT -s X.X.X.X -p tcp --dport 22 -j ACCEPT iptables -A INPUT -s X.X.X.X -p tcp --dport 22 -j ACCEPT iptables -A INPUT -i lo -j ACCEPT iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp --dport 80 -j ACCEPT iptables -A INPUT -p tcp --dport 443 -j ACCEPT iptables -A INPUT -p udp --dport 123 -j ACCEPT iptables -A INPUT -p udp --dport 53 -j ACCEPT iptables -A INPUT -p tcp --dport 53 -j ACCEPT iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -P FORWARD DROP iptables-save > /etc/sysconfig/iptables Here is what is actually being written: # Generated by iptables-save v1.4.7 on Wed Nov 30 16:01:17 2016 *mangle :PREROUTING ACCEPT [572:59036] :INPUT ACCEPT [572:59036] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [486:358945] :POSTROUTING ACCEPT [483:358793] -A PREROUTING -p tcp -m tcp --sport 110 -j TOS --set-tos 0x04/0xff -A PREROUTING -p udp -m udp --sport 110 -j TOS --set-tos 0x04/0xff -A PREROUTING -p tcp -m tcp --sport 1110 -j TOS --set-tos 0x04/0xff -A PREROUTING -p udp -m udp --sport 1110 -j TOS --set-tos 0x04/0xff -A PREROUTING -p tcp -m tcp --sport 465 -j TOS --set-tos 0x04/0xff -A PREROUTING -p udp -m udp --sport 465 -j TOS --set-tos 0x04/0xff -A PREROUTING -p tcp -m tcp --sport 993 -j TOS --set-tos 0x04/0xff -A PREROUTING -p udp -m udp --sport 993 -j TOS --set-tos 0x04/0xff -A PREROUTING -p tcp -m tcp --sport 995 -j TOS --set-tos 0x04/0xff -A PREROUTING -p udp -m udp --sport 995 -j TOS --set-tos 0x04/0xff -A PREROUTING -p tcp -m tcp --sport 20 -j TOS --set-tos 0x08/0xff -A PREROUTING -p udp -m udp --sport 20 -j TOS --set-tos 0x08/0xff -A PREROUTING -p tcp -m tcp --sport 21 -j TOS --set-tos 0x08/0xff -A PREROUTING -p udp -m udp --sport 21 -j TOS --set-tos 0x08/0xff -A PREROUTING -p tcp -m tcp --sport 22 -j TOS --set-tos 0x10/0xff -A PREROUTING -p udp -m udp --sport 22 -j TOS --set-tos 0x10/0xff -A PREROUTING -p tcp -m tcp --sport 25 -j TOS --set-tos 0x10/0xff -A PREROUTING -p udp -m udp --sport 25 -j TOS --set-tos 0x10/0xff -A PREROUTING -p tcp -m tcp --sport 53 -j TOS --set-tos 0x10/0xff -A PREROUTING -p udp -m udp --sport 53 -j TOS --set-tos 0x10/0xff -A PREROUTING -p tcp -m tcp --sport 80 -j TOS --set-tos 0x10/0xff -A PREROUTING -p udp -m udp --sport 80 -j TOS --set-tos 0x10/0xff -A PREROUTING -p tcp -m tcp --sport 443 -j TOS --set-tos 0x10/0xff -A PREROUTING -p udp -m udp --sport 443 -j TOS --set-tos 0x10/0xff -A PREROUTING -p tcp -m tcp --sport 512:65535 -j TOS --set-tos 0x00/0xff -A PREROUTING -p udp -m udp --sport 512:65535 -j TOS --set-tos 0x00/0xff -A POSTROUTING -p tcp -m tcp --dport 110 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p udp -m udp --dport 110 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p tcp -m tcp --dport 1110 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p udp -m udp --dport 1110 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p tcp -m tcp --dport 465 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p udp -m udp --dport 465 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p tcp -m tcp --dport 993 -j TOS --set-tos 
0x04/0xff -A POSTROUTING -p udp -m udp --dport 993 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p tcp -m tcp --dport 995 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p udp -m udp --dport 995 -j TOS --set-tos 0x04/0xff -A POSTROUTING -p tcp -m tcp --dport 20 -j TOS --set-tos 0x08/0xff -A POSTROUTING -p udp -m udp --dport 20 -j TOS --set-tos 0x08/0xff -A POSTROUTING -p tcp -m tcp --dport 21 -j TOS --set-tos 0x08/0xff -A POSTROUTING -p udp -m udp --dport 21 -j TOS --set-tos 0x08/0xff -A POSTROUTING -p tcp -m tcp --dport 22 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p udp -m udp --dport 22 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p tcp -m tcp --dport 25 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p udp -m udp --dport 25 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p tcp -m tcp --dport 53 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p udp -m udp --dport 53 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p tcp -m tcp --dport 80 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p udp -m udp --dport 80 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p tcp -m tcp --dport 443 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p udp -m udp --dport 443 -j TOS --set-tos 0x10/0xff -A POSTROUTING -p tcp -m tcp --dport 512:65535 -j TOS --set-tos 0x00/0xff -A POSTROUTING -p udp -m udp --dport 512:65535 -j TOS --set-tos 0x00/0xff COMMIT # Completed on Wed Nov 30 16:01:17 2016 # Generated by iptables-save v1.4.7 on Wed Nov 30 16:01:17 2016 *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -s X.X.X.X/32 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -s X.X.X.X/32 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT -A INPUT -p udp -m udp --dport 123 -j ACCEPT -A INPUT -p udp -m udp --dport 53 -j ACCEPT -A INPUT -p tcp -m tcp --dport 53 -j ACCEPT COMMIT # Completed on Wed Nov 30 16:01:17 2016 It seems the mangle is being appended prior to the filter. Does this make a difference? Output for iptables -L -v -n is the following: Chain INPUT (policy DROP 37 packets, 1951 bytes) pkts bytes target prot opt in out source destination 1029 86325 ACCEPT tcp -- * * X.X.X.X 0.0.0.0/0 tcp dpt:22 0 0 ACCEPT tcp -- * * X.X.X.X 0.0.0.0/0 tcp dpt:22 632 316K ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 85 40646 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 128 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 9 678 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 2 100 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 1529 packets, 484K bytes) pkts bytes target prot opt in out source destination If I restart the service (but not whole server) with service iptables restart everything sticks and the rules seem to still be in effect. However, if I do a server restart with reboot , everything from /etc/sysconfig/iptables is still there, however the output from iptables -L -v -n is very different now with thousands of lines. My rules also seem to no longer apply, although everything in the file still looks the same. I believe the runlevel is also appropriately set: [root@s1 ~]# chkconfig --list iptables iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off So again, my questions are: 1. 
Why aren't the rules applying after server restart, although they apply and seem to stick after service restart? How can I get them to apply? 2. Does the mangle part in the iptables file make a difference? Is it required in this case? 3. Why is the iptables -L -v -n output different after restart? It only seems to change after server restart and not service restart. Any ideas and help are greatly appreciated.
Nope, it's not defeating the purpose. I, in fact, use Ansible to set up single servers for hobby/side-project use quite frequently. It allows me to keep a version-controlled, repeatable, self-documenting configuration for the server.
{ "source": [ "https://serverfault.com/questions/817996", "https://serverfault.com", "https://serverfault.com/users/273488/" ] }
818,686
I am working on an embedded device that runs FreeBSD and SSH. As you know, sshd likes to randomly generate a set of server keys when it first boots up. The problem is that we will be shipping the product with a read-only sd-card filesystem (non-negotiable). My two options as I see them are: Ship the same sshd server keys on all devices Mount a memory file system and generate the server keys on each boot (slow...) Is it a major security problem to ship the same server keys on all devices? These items will not be directly on the internet. There will occasionally be multiple devices owned by the same person and on the same network. Most of the time the device will not be connected to the internet. Logging in with SSH is not part of normal operation. It is mostly for the convenience of the programmers and technicians. Customers will not be logging in to the device with SSH. What are the ramifications of using the same server keys on multiple hardware devices? PS could someone please create an internet-of-things tag? EDIT : I am talking about installing the same host private keys on all servers (devices). As far as user public/private keys, there are currently no plans to use key based login - it would be password login. Again, same password on all servers (devices). I know that this is probably a bad idea. I'd like to know why precisely it is a bad idea though so I can understand the tradeoffs.
Rather than storing host-specific data such as ssh host keys on the SD card or other read-only media, you can store this in NVRAM, which is what it's for on an embedded system. You'll need to do some custom scripting to store and retrieve the keys at boot time, but the scripts will be exactly the same for every device.
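As a rough sketch of the boot-time scripting involved (and only a sketch: the nvram get / nvram set utility below is a hypothetical stand-in for whatever persistent store your board actually exposes, and the memory-backed /var/run location is an assumption about your image), the idea is to generate the keys once, persist them, and restore them on every later boot:

#!/bin/sh
# Restore-or-generate SSH host keys on a read-only image (sketch only)
KEYDIR=/var/run/ssh                     # memory-backed on this image (assumption)
mkdir -p "$KEYDIR"
for type in rsa ed25519; do
    key="$KEYDIR/ssh_host_${type}_key"
    if nvram get "ssh_host_${type}_key" > "$key" 2>/dev/null && [ -s "$key" ]; then
        chmod 600 "$key"
        ssh-keygen -y -f "$key" > "$key.pub"             # rebuild the public half
    else
        ssh-keygen -q -t "$type" -N "" -f "$key"         # first boot: generate once
        nvram set "ssh_host_${type}_key" "$(cat "$key")" # a real store may need base64 here
    fi
done
# sshd_config on the image then points HostKey at /var/run/ssh/ssh_host_*_key

With something like this in place, every device keeps its own unique keys without ever writing to the SD card.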
{ "source": [ "https://serverfault.com/questions/818686", "https://serverfault.com", "https://serverfault.com/users/196811/" ] }
818,996
I had a port opened up for public use using firewall-cmd, and I wanted to limit this port to a specific IP, which I found the answer for on this SITE . I used the following to open it: $ firewall-cmd --permanent --zone=public --add-port=10050/tcp $ firewall-cmd --reload Now, using the information I found, I wanted to restrict access to this port to a specific IP address. Do I need to first remove this port from public access? Or can I just add the new rule as follows and that will take care of the problem for me? $ firewall-cmd --new-zone=special $ firewall-cmd --permanent --zone=special --add-rich-rule=' rule family="ipv4" source address="123.1.1.1" port protocol="tcp" port="10050" accept' I have tried the following: $ firewall-cmd --zone=public --remove-port=10050/tcp $ firewall-cmd --reload But when I run the following: $ firewall-cmd --list-ports 10050/tcp is still displayed. Please understand I'm not overly familiar with server-side configurations. Solution: Do not forget the --runtime-to-permanent $ firewall-cmd --zone=public --remove-port=10050/tcp $ firewall-cmd --runtime-to-permanent $ firewall-cmd --reload
Solution: Do not forget the --runtime-to-permanent $ firewall-cmd --zone=public --remove-port=10050/tcp $ firewall-cmd --runtime-to-permanent $ firewall-cmd --reload
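For completeness, the whole sequence the question was aiming at looks like this (123.1.1.1 and port 10050 are just the placeholders from the question, and you can stay in the public zone rather than creating a special one):

$ firewall-cmd --zone=public --remove-port=10050/tcp
$ firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="123.1.1.1" port protocol="tcp" port="10050" accept'
$ firewall-cmd --runtime-to-permanent
$ firewall-cmd --reload
$ firewall-cmd --zone=public --list-all    # verify: the open port is gone and the rich rule is listed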
{ "source": [ "https://serverfault.com/questions/818996", "https://serverfault.com", "https://serverfault.com/users/389242/" ] }
819,226
When configuring an application, you can often use /dev/null as config file if you want the application to read an empty file. But, if the application reads a list of files from a directory, you cannot use this trick. You would need to give it an empty directory to read. I was wondering: does Linux have a default empty directory that can be used for such purposes? I know OpenSSH used /var/empty for a while, and I can of course create an empty dir myself, but maybe the FHS has specified a standard directory for this?
The FHS provides no "standard" empty directory. It is common for Linux systems to provide a directory /var/empty , but this directory is not defined in FHS and may not actually be empty. Instead, certain daemons will create their own empty directories in here. For instance, openssh uses the empty directory /var/empty/sshd for privilege separation. If your need for an empty directory is transient, you can create an empty directory yourself, as a subdirectory of /run or /tmp . If you're doing this outside the program, you can use mktemp -d for this, or use the mkdtemp(3) C function inside your program. Though if you always need the empty directory to be present, consider creating one under /var/empty as openssh does. For this use case, creating a directory under /tmp is probably the best fit, though in practice it doesn't matter very much where you put it.
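For the transient case, the shell pattern is short enough to show; some-daemon is just a placeholder for whatever needs the empty directory:

EMPTYDIR=$(mktemp -d)                  # e.g. /tmp/tmp.Xq3rZlb2
some-daemon --empty-dir "$EMPTYDIR"    # placeholder command and option
rmdir "$EMPTYDIR"                      # clean up afterwards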
{ "source": [ "https://serverfault.com/questions/819226", "https://serverfault.com", "https://serverfault.com/users/340217/" ] }
822,035
I am using AWS Certificate Manager for managing SSL. Recently I purchased a wildcard SSL certificate for *.example-private.com. Now I need to deploy that SSL certificate on an enterprise Git instance on AWS. How can I download the SSL certificate from AWS?
You cannot download an SSL certificate from ACM.
{ "source": [ "https://serverfault.com/questions/822035", "https://serverfault.com", "https://serverfault.com/users/167266/" ] }
822,202
I ran a malware scanner on my site, and it marked a bunch of zipped EXE files as potential risk files (these files got uploaded by users). Since I'm able to uncompress the files on my Mac I assume these are real ZIP files and not just something like renamed PHP files. So the ZIP file shouldn't be any risk for my web server, right?
If they are indeed zipped Windows exe files, they should be harmless to your Linux system, unless you have something like Wine in place that could try to execute them. But if they are in your web path, they could be malware and pose a big risk for your web sites' visitors (and you in turn, if you end up being marked as malware source and users get ugly warnings when they try to visit your site).
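If you want to keep an eye on user uploads like these going forward, a periodic ClamAV pass over the upload directory is cheap; the path below is only an example:

freshclam                          # refresh the signature database first
clamscan -r -i /var/www/uploads    # -r recurse, -i report only infected files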
{ "source": [ "https://serverfault.com/questions/822202", "https://serverfault.com", "https://serverfault.com/users/391731/" ] }
822,722
It is my understanding that when Apache receives a request to one of the TCP ports it is listening on (e.g. 80, 443), it will decide which host is being requested by looking at the HTTP Host header. The server will then know which virtual host it should redirect the request to. But how does it work for HTTP over SSL/TLS? Since the whole HTTP request is being encrypted (at least that's what I believe I have read somewhere), the header information can only be read after the server has decrypted the data. But in order to decrypt, it needs to know which key pair to use, as you can have multiple SSL certificates installed on a web server. So how does the server know which key it needs for decryption? My guess: I could imagine that the TLS handshake provides the necessary information. Regarding the "possible duplicate" flag: While I agree that the answers to both the linked question and my own are similar, I must say the question is different. It is not in question whether or how hosting multiple sites with independent SSL certificates is possible. Instead, my question addresses the underlying technical aspect.
Originally, the web server didn't know. This was the reason that you needed a separate IP address for every SSL vhost you wanted to host on the server. This way, the server knew that when a connection came in on IP X, he needed to use the configuration (including certificates) for the associated vhost. This changed with Server Name Indication , a TLS extension that indeed allows a client to indicate the required hostname in the handshaking process. This extension is used in all modern OS, but old browsers or servers don't support it, so if you expect clients to still use IE 6 on WinXP, you would be out of luck.
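You can watch SNI at work from the command line: openssl's s_client puts the requested name in the ClientHello, so the same address and port can return different certificates depending on the name asked for (the IP and hostnames here are placeholders):

openssl s_client -connect 192.0.2.10:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect 192.0.2.10:443 -servername other.example.net </dev/null 2>/dev/null | openssl x509 -noout -subject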
{ "source": [ "https://serverfault.com/questions/822722", "https://serverfault.com", "https://serverfault.com/users/277624/" ] }
823,214
I have received domain abuse notice email from [email protected] . The mail asks to download a Word Document which I believe contains a virus. Dear Domain Owner, Our system has detected that your domain: example.com is being used for spamming and spreading malware recently. You can download the detailed abuse report of your domain along with date/time of incidents. Click Here We have also provided detailed instruction on how to delist your domain from our blacklisting. Please download the report immediately and take proper action within 24 hours otherwise your domain will be suspended permanently. There is also possibility of legal action depend on severity and persistence of your abuse case. Three Simple Steps: Download your abuse report. Check your domain abuse incidents along with date and time. Take few simple steps for prevention and to avoid domain suspension. Click Here to Download your Report Please look into it and contact us. I want to know if I have missed enabling/configuring something on the ISP side or did I leave some email address on my domain because of which these spammers have targeted me as well. Are there ways to protect / configure my domain to avoid these emails?
This is spam at the least - at worst, it's a scam. Do not agree to send a read receipt. Do not download unnecessary content. Do not click links. Do not reply. Do not pass Go... etc. As others have mentioned, protecting your contact details in whois information may help eliminate these emails; I'd also like to add some common signs of spam/scam emails: "Dear <generic name>" Legitimate organisations will attempt to use your real name where they can - for example, by taking it from whois information, or by contacting your domain registrar to obtain your name and contact details. "Click Here" Legitimate (especially formal, like this) emails are drafted by people who are paid to come up with something better than "click this" for link text - "visit our website to retrieve the full report", perhaps. Bad grammar Legitimate emails are written by people who are paid to write good English - and then they're copy-checked before they ever get sent out. "There is also possibility", "depend on severity", and Unnecessary Capital Letters are generally not things that make it into professional communications. Legal threats People are more likely to get scared and do what the scammers want them to if they're threatened with things they don't understand, like legal proceedings - even if the threats aren't actionable. Unreasonable time limits If a legitimate organisation needs to time-limit you, it'll be on a scale of weeks, or multiple days at the very shortest. It's highly unusual to be given just 24 hours to perform some action - unless, of course, someone wants to scare you into taking action without thinking about it first. Unexpected attachment types Attachments such as official reports will usually come as PDFs, or a link to a legitimate webpage. Anything else - an RTF, a Word document, a HTML file, or an executable, should raise a question. None of these are 100% perfect indicators of scams, but each should raise a small flag - and the presence of multiple should make you highly suspicious. If in doubt, verify the organisation is legitimate, look up a contact email address for them ( not the address the email came from, usually), and use that to ask if the email is legitimate. If it is, fine - if it's not, you've done them a favour.
{ "source": [ "https://serverfault.com/questions/823214", "https://serverfault.com", "https://serverfault.com/users/254138/" ] }
823,234
I would like to use the same upstream definition with multiple ports: upstream production { server 10.240.0.26; server 10.240.0.27; } server { listen 80; server_name some.host; location / { proxy_pass http://production:1234; } } server { listen 80; server_name other.host; location / { proxy_pass http://production:4321; } } Using that configuration nginx -t throws: upstream "production" may not have port 1234 and returns with exitcode 1. When I try to define the proxy_pass URL (or parts of) as a variable like: set $upstream_url "http://production:1234" set $upstream_host production; set $upstream_port 1234; proxy_pass $upstream_url; # or proxy_pass http://production:$upstream_port; # or proxy_pass http://$upstream:$upstream_port; Nginx tries to resolve the name of my upstream via resolver: *16 no resolver defined to resolve production, client: ...., server: some.host, request: "GET / HTTP/1.1", host: "some.host" For me, the proxy_pass doc sounds as if exactly that should not happen; A server name, its port and the passed URI can also be specified using variables: proxy_pass http://$host$uri; or even like this: proxy_pass $request; In this case, the server name is searched among the described server groups, and, if not found, is determined using a resolver. Tested with nginx versions: 1.6 1.9 1.10 1.11 Am I missing something? Is there a way to work around this?
You must define port in every server entry in upstream . If you don't nginx will set it to 80 . So server 10.240.0.26; actually means server 10.240.0.26:80; . You could define several upstream blocks though: upstream production_1234 { server 10.240.0.26:1234; server 10.240.0.27:1234; } upstream production_4321 { server 10.240.0.26:4321; server 10.240.0.27:4321; } server { listen 80; server_name some.host; location / { proxy_pass http://production_1234; } } server { listen 80; server_name other.host; location / { proxy_pass http://production_4321; } } Another option is to configure your local DNS to resolve hostname production to several IPs, and in this case nginx will use them all. http://nginx.org/r/proxy_pass : If a domain name resolves to several addresses, all of them will be used in a round-robin fashion. server { listen 80; server_name some.host; location / { proxy_pass http://production:1234; } }
{ "source": [ "https://serverfault.com/questions/823234", "https://serverfault.com", "https://serverfault.com/users/237661/" ] }
823,791
My HP Proliant ML110 G7 has a full-size SD slot on the motherboard. What is its use case? The PDF manual mentions it on page 10: item 17 just to show its placement, but nothing more. In a later revision (Gen 9), it is said the slot is not hot-pluggable. The motherboard also has a USB slot (item 11).
It's for booting a hypervisor or lightweight operating system like VMware ESXi. See: What happens when the USB key or SD card I've installed VMware ESXi on fails?
{ "source": [ "https://serverfault.com/questions/823791", "https://serverfault.com", "https://serverfault.com/users/147163/" ] }
823,956
I am going to introduce Ansible into my data center, and I'm looking for some security best practices on where to locate the control machine and how to manage the SSH keys. Question 1: the control machine We of course need a control machine. The control machine has public SSH keys saved on it. If an attacker has access to the control machine, they potentially have access to the whole data center (or to the servers managed by Ansible). So is it better to have a dedicated control machine in the data center or a remote control machine (like my laptop remotely connected to the data center)? If the best practice is to use my laptop (which could be stolen, of course, but I could have my public keys securely saved online in the cloud or offline on a portable encrypted device), what if I need to use some web interfaces with Ansible, like Ansible Tower, Semaphore, Rundeck or Foreman, which need to be installed on a centralised machine in the data center? How do I secure it and avoid it becoming a "single point of attack"? Question 2: the SSH keys Assume that I need to use Ansible for some tasks which require being executed as root (like installing software packages or something like this). I think the best practice is not to use the root user on controlled servers, but to add a normal user for Ansible with sudo permissions. But, if Ansible needs to perform almost every task, it needs access to every command through sudo. So, what is the best choice: let Ansible use the root user (with its public key saved in ~/.ssh/authorized_keys ) create an unprivileged user dedicated to Ansible with sudo access let the Ansible user run every command through sudo specifying a password (which is unique and needs to be known by every sysadmin who uses Ansible to control those servers) let the Ansible user run every command through sudo without specifying any password any other hints?
The bastion host (the ansible control center) belongs to a separate subnet. It shouldn't be directly accessible from outside, it shouldn't be directly accessible from the managed servers! Your laptop is the least secure device of all. One stupid mail, one stupid flash vulnerability, one stupid guest Wifi and it gets pwned. For servers, don't allow root access via ssh at all. Many audits scoff at this. For ansible, let every admin use their own personal account on each target server, and let them sudo with passwords. This way no password is shared between two people. You can check who did what on each server. It's up to you if personal accounts allow login on password, ssh key only, or require both. To clarify ansible doesn't require to use a single target login name . Each admin could and should have personal target login name. A side note: Try to never create an account called some word (like "ansible" or "admin" or "cluster" or "management" or "operator") if it has a password. The only good name for account that has a password is a name of a human being, like "jkowalski". Only a human being can be responsible for the actions done via the account and responsible for improperly securing their password, "ansible" cannot.
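In Ansible terms that advice needs no special tooling: each admin connects as themselves and escalates with their own sudo password. A hypothetical run (the user name and playbook are placeholders) is simply:

ansible-playbook site.yml -i inventory -u jkowalski --become --ask-become-pass

or set ansible_user per admin in their local configuration and become: true in the play; there is no shared "ansible" account and no shared secret anywhere.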
{ "source": [ "https://serverfault.com/questions/823956", "https://serverfault.com", "https://serverfault.com/users/177397/" ] }
823,965
What am I missing here? I'm trying to enable file auditing so I can see who deleted a file via the security logs in Event Viewer. I created the group policy below: Computer Configuration > Windows Settings > Local Policies/Audit Policy > Audit Object Access, enabled for success and failure. The enabled checkbox is checked for the policy. In the delegation tab, the computer account I'm trying to set this up for has read and apply policy selected, as well as authenticated users. On the folder itself I've enabled auditing for "Everyone" for "Delete subfolders and files" as well as "Delete"; success and failure are set up for these. gpresult shows the policy is applied; not sure if it matters, but gpedit shows the policy is not applied. Where else should this be set?
The bastion host (the ansible control center) belongs to a separate subnet. It shouldn't be directly accessible from outside, it shouldn't be directly accessible from the managed servers! Your laptop is the least secure device of all. One stupid mail, one stupid flash vulnerability, one stupid guest Wifi and it gets pwned. For servers, don't allow root access via ssh at all. Many audits scoff at this. For ansible, let every admin use their own personal account on each target server, and let them sudo with passwords. This way no password is shared between two people. You can check who did what on each server. It's up to you if personal accounts allow login on password, ssh key only, or require both. To clarify ansible doesn't require to use a single target login name . Each admin could and should have personal target login name. A side note: Try to never create an account called some word (like "ansible" or "admin" or "cluster" or "management" or "operator") if it has a password. The only good name for account that has a password is a name of a human being, like "jkowalski". Only a human being can be responsible for the actions done via the account and responsible for improperly securing their password, "ansible" cannot.
{ "source": [ "https://serverfault.com/questions/823965", "https://serverfault.com", "https://serverfault.com/users/372028/" ] }
824,631
Is there any way to set a Docker containers system time dynamically (at run time) without effecting the host machine? Using hwclock --set --date "Sat Aug 17 08:31:24 PDT 2016" gives the following error: hwclock: Cannot access the Hardware Clock via any known method. hwclock: Use the --debug option to see the details of our search for an access method. Using date -s "2 OCT 2006 18:00:00" gives the following error: date: cannot set date: Operation not permitted Use case: I need to test time sensitive software (behaviour depends on the date). Other common use cases: running legacy software with y2k bugs testing software for year-2038 compliance debugging time-related issues, such as expired SSL certificates running software which ceases to run outside a certain timeframe deterministic build processes.
It is possible. The solution is to fake it in the container. This lib intercepts all the system calls programs use to retrieve the current time and date. The implementation is easy. Add functionality to your Dockerfile as appropriate: WORKDIR / RUN git clone https://github.com/wolfcw/libfaketime.git WORKDIR /libfaketime/src RUN make install Remember to set the environment variable LD_PRELOAD before you run the application you want the faked time applied to. Example: CMD ["/bin/sh", "-c", "LD_PRELOAD=/usr/local/lib/faketime/libfaketime.so.1 FAKETIME_NO_CACHE=1 python /srv/intercept/manage.py runserver 0.0.0.0:3000"] You can now dynamically change the server's time: Example: import os from datetime import datetime def set_time(request): print(datetime.today()) os.environ["FAKETIME"] = "2020-01-01" # Note: time of type string must be in the format "YYYY-MM-DD hh:mm:ss" or "+15d" print(datetime.today())
{ "source": [ "https://serverfault.com/questions/824631", "https://serverfault.com", "https://serverfault.com/users/276343/" ] }
824,975
I'm trying to list services on my CentOS image running in Docker using systemctl list-units but I get this error message: Failed to get D-Bus connection: Operation not permitted Any suggestions what the problem might be?
My guess is that you're running a non-privileged container. systemd requires CAP_SYS_ADMIN capability but Docker drops that capability in non-privileged containers, in order to add more security. systemd also requires RO access to the cgroup file system within a container. You can add it with -v /sys/fs/cgroup:/sys/fs/cgroup:ro So, here are a few steps on how to run CentOS with systemd inside a Docker container: Pull centos image Set up a Dockerfile like the one below: FROM centos MAINTAINER "Yourname" <[email protected]> ENV container docker RUN yum -y update; yum clean all RUN yum -y install systemd; yum clean all; \ (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \ rm -f /lib/systemd/system/multi-user.target.wants/*;\ rm -f /etc/systemd/system/*.wants/*;\ rm -f /lib/systemd/system/local-fs.target.wants/*; \ rm -f /lib/systemd/system/sockets.target.wants/*udev*; \ rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \ rm -f /lib/systemd/system/basic.target.wants/*;\ rm -f /lib/systemd/system/anaconda.target.wants/*; VOLUME [ "/sys/fs/cgroup" ] CMD ["/usr/sbin/init"] Build it - docker build -t centos7-systemd - < mydockerfile Run a container with docker run --rm --privileged -ti -e container=docker -v /sys/fs/cgroup:/sys/fs/cgroup centos7-systemd /usr/sbin/init You should have systemd in your container.
{ "source": [ "https://serverfault.com/questions/824975", "https://serverfault.com", "https://serverfault.com/users/202047/" ] }
825,034
Aside from one type of disk bottlenecking the other, are there any other problems with mixing SSD models in RAID? My problem is, I need to upgrade the storage in a server with 4x Samsung 845DC EVO 960GB in RAID10. These drives are not available anymore, so my options are to either use some newer comparable SSD's or to replace the array altogether.
The single biggest thing that crosses my mind isn't SSD-specific: that the biggest danger with RAID is that all the devices in any given RAID are often purchased from the same manufacturer, at the same time, and therefore tend to get to the far end of the bathtub curve and start dying at about the same time. In that sense, buying from different vendors is not only not a bad idea, but best practice. You don't say whether you're doing hardware or software RAID. If it's hardware, you have the issue of whether the new models are supported by the controller, both from a hardware support contract standpoint and an " it's too new for me to talk to / my programmer told me not to talk to you " standpoint. Either of those would be a reason not to do it. There is also the issue of capacity: if you're adding devices that are smaller than your existing ones, even if by only a few sectors, this will not go well. Check the absolute raw capacity to ensure it's greater than or equal to the devices you're already using. But assuming you can get past those issues, I think it's generally a good idea to do what you're planning.
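If you do mix models, a quick sanity check of the raw sizes before adding a drive avoids the too-small-by-a-few-sectors trap (device names are examples):

lsblk -b -d -o NAME,SIZE,MODEL     # raw size in bytes, one line per physical disk
blockdev --getsize64 /dev/sdb      # exact byte count of a candidate drive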
{ "source": [ "https://serverfault.com/questions/825034", "https://serverfault.com", "https://serverfault.com/users/257295/" ] }
825,911
I have a production host, below: The system is using 1GB of swap, while maintaining nearly 40GB of free, unused memory space. Should I be concerned about this, or is it mostly normal?
This is not a problem and is likely normal. Lots of code (and possibly data) is used very rarely so the system will swap it out to free up memory. Swapping is mostly only a problem if memory is being swapped in and out continuously. It is that kind of activity that kills performance and suggests a problem elsewhere on the system. If you want to monitor your swap activity you can with several utilities but vmstat is usually quite useful e.g. $ vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 348256 73540 274600 0 0 1 9 9 6 2 0 98 0 0 0 0 0 348240 73544 274620 0 0 0 16 28 26 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 29 33 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 21 23 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 24 26 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 23 23 0 0 100 0 0 Ignore the first line as that is activity since the system started. Note the si and so columns under ---swap-- ; they should generally be fairly small figures if not 0 for the majority of the time. Also worth mentioning is that this preemptive swapping can be controlled with a kernel setting. The file at /proc/sys/vm/swappiness contains a number between 0 and 100 that tells the kernel how aggressively to swap out memory. Cat the file to see what this is set to. By default, most Linux distros default this to 60, but if you don't want to see any swapping before memory is exhausted, echo a 0 into the file like this: echo 0 >/proc/sys/vm/swappiness This can be made permanent by adding vm.swappiness = 0 to /etc/sysctl.conf .
{ "source": [ "https://serverfault.com/questions/825911", "https://serverfault.com", "https://serverfault.com/users/299165/" ] }
825,928
The secondary DC did not authenticate users after the primary DC went down, and as soon as the PDC goes down I get this critical error from the secondary DNS on the secondary DC: Event ID 4015 ( The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is . The event data contains the error.)
This is not a problem and is likely normal. Lots of code (and possibly data) is used very rarely so the system will swap it out to free up memory. Swapping is mostly only a problem if memory is being swapped in and out continuously. It is that kind of activity that kills performance and suggests a problem elsewhere on the system. If you want to monitor your swap activity you can with several utilities but vmstat is usually quite useful e.g. $ vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 348256 73540 274600 0 0 1 9 9 6 2 0 98 0 0 0 0 0 348240 73544 274620 0 0 0 16 28 26 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 29 33 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 21 23 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 24 26 0 0 100 0 0 0 0 0 348240 73544 274620 0 0 0 0 23 23 0 0 100 0 0 Ignore the first line as that is activity since the system started. Note the si and so columns under ---swap-- ; they should generally be fairly small figures if not 0 for the majority of the time. Also worth mentioning is that this preemptive swapping can be controlled with a kernel setting. The file at /proc/sys/vm/swappiness contains a number between 0 and 100 that tells the kernel how aggressively to swap out memory. Cat the file to see what this is set to. By default, most Linux distros default this to 60, but if you don't want to see any swapping before memory is exhausted, echo a 0 into the file like this: echo 0 >/proc/sys/vm/swappiness This can be made permanent by adding vm.swappiness = 0 to /etc/sysctl.conf .
{ "source": [ "https://serverfault.com/questions/825928", "https://serverfault.com", "https://serverfault.com/users/394839/" ] }
826,032
I've taken on the task of running a small email server, and the world of spam makes it more challenging for an individual, as many MTAs are highly paranoid about accepting email. I think I've configured nearly everything that could be a problem successfully: A commercial SSL certificate, DKIM, a proper domain, and static IP address. My (piddly) email in fact goes out almost all of the time. But the most paranoid MTA's are still rejecting my email - Craigslist for example - and it appears to be my reverse lookup at fault. I've recently changed my static IP address, and my service with my ISP. When they changed it, I tried to get this configured correctly, but I fear it is not. But I'm not 100% certain what is wrong, or what my reverse record should look like. I especially don't want to approach my ISP with a "Look, I don't know what the problem is, but you need to fix it anyhow" attitude. If there's a problem I want to be able to describe exactly what it is before I get on the phone with the NOC. They don't offer a control panel for this as far as I can tell, so I don't want to try anyone's patience with a bunch of trial and error. OK, the specifics, redacted & fictional, but consistent: Domain: funkeedomain.org Mailserver (DNS MX record): mx.funkeedomain.org Static IP address: 111.222.333.444 Static IP address reversed: 444.333.222.111 FQDN originally requested of the ISP for reverse lookups: main.funkeedomain.org Here's a typical rejection notice from my mail server (hMailServer): Your message did not reach some or all of the intended recipients. Sent: Thu, 12 Jan 2017 11:53:50 -0800 (PST) Subject: Blah blah blah The following recipient(s) could not be reached: [email protected] Error Type: SMTP Remote server (64.235.154.109) issued an error. hMailServer sent: . Remote server replied: 550 permanent failure for one or more recipients ([email protected]:550 Sender IP reverse lookup rejected) hMailServer A commercial email-sending checker tells me: main.funkeedomain.org.333.222.111.in-addr.arpa Failed - No A Record Found in DNS So, fine. What do DNS tools tell me? stew@griffin:~$ host 111.222.333.444 444.333.222.111.in-addr.arpa domain name pointer main.funkeedomain.org.333.222.111.in-addr.arpa. stew@griffin:~$ dig -x 111.222.333.444 ; <<>> DiG 9.10.3-P4-Ubuntu <<>> -x 111.222.333.444 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16150 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4000 ;; QUESTION SECTION: ;444.333.222.111.in-addr.arpa. IN PTR ;; ANSWER SECTION: 444.333.222.111.in-addr.arpa. 86365 IN PTR main.funkeedomain.org.333.222.111.in-addr.arpa. ;; Query time: 0 msec ;; SERVER: 10.0.0.4#53(10.0.0.4) ;; WHEN: Thu Jan 12 19:09:11 PST 2017 ;; MSG SIZE rcvd: 93 From reading examples ( http://www.gettingemaildelivered.com/how-to-set-up-reverse-dns-rdns for instance), my strong impression is that this is wrong, and my reverse record set up by my ISP should be a PTR to "main.funkeedomain.org", NOT "main.funkeedomain.org.333.222.111.in-addr.arpa." Am I right to think this? What should I be expecting in my reverse record if not what I'm finding? Thanks all who responded, and my post-post grammar copy-editor. Both HBruijn and Andrew B's answers were correct, but they appear to want me to select HBruijn's, which is also shorter, and so I have. I had to call no less than five times to get this resolved. 
Having a 100% accurate diagnosis was surely key to me getting this passed blindly up 3 levels of escalation successfully - I was never allowed to talk to the DNS department directly. Thank you all again.
Look at the answer section a little more closely: ;; ANSWER SECTION: 444.333.222.111.in-addr.arpa. 86365 IN PTR main.funkeedomain.org.333.222.111.in-addr.arpa. Specifically, the value of the PTR record: main.funkeedomain.org.333.222.111.in-addr.arpa. Your ISP forgot to add the trailing dot to your FQDN. This is causing the DNS software to helpfully append the name of the zone file to the end of the data. Tell them to look at your reverse DNS record again, mention the trailing dot, and if they have any sense to them they'll know exactly what they did wrong.
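In record terms the difference is a single character; using the question's redacted values, what the ISP published versus what they should publish is:

444.333.222.111.in-addr.arpa.  IN  PTR  main.funkeedomain.org.333.222.111.in-addr.arpa.   ; current (no trailing dot on the data)
444.333.222.111.in-addr.arpa.  IN  PTR  main.funkeedomain.org.                            ; correct

Once fixed, host 111.222.333.444 should return just main.funkeedomain.org.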
{ "source": [ "https://serverfault.com/questions/826032", "https://serverfault.com", "https://serverfault.com/users/185122/" ] }
827,474
This is sort of a general question about training spamassassin. I have a newly set up mailserver which filters incoming mail through spamassassin. I recently got a flight reservation flagged as spam (score 5) and would like to tell spamassassin it's not spam. (Perhaps doing this would also re-send the mail without the modified spamassassin headers?) I've tried searching around and am only finding stuff about either getting spamassassin to flag messages as spam (and not about fixing false positives), or for people writing emails - how not to be flagged as spam. So in regards to giving spamassassin feedback on wrong calls: Is there a way to do this from within an email client (for example: Thunderbird) Is there a way to do this via the command-line on the mail server? I'd like to make the process as fluid as possible, but whatever gets the job done. Details from SpamAssassin regarding the email: 0.0 FSL_HELO_NON_FQDN_1 No description available. 0.6 HK_RANDOM_ENVFROM Envelope sender username looks random -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at http://www.dnswl.org/, no trust [82.150.225.129 listed in list.dnswl.org] -0.0 RCVD_IN_MSPIKE_H3 RBL: Good reputation (+3) [82.150.225.129 listed in wl.mailspike.net] 0.0 HEADER_FROM_DIFFERENT_DOMAINS From and EnvelopeFrom 2nd level mail domains are different 1.0 SPF_SOFTFAIL SPF: sender does not match SPF record (softfail) 1.6 SUBJ_ALL_CAPS Subject is all capitals 1.1 MIME_HTML_ONLY BODY: Message only has text/html MIME parts 0.7 HTML_IMAGE_ONLY_20 BODY: HTML: images with 1600-2000 bytes of words 0.0 HTML_MESSAGE BODY: HTML included in message -0.0 RCVD_IN_MSPIKE_WL Mailspike good senders 0.0 UNPARSEABLE_RELAY Informational: message has unparseable relay lines 0.0 T_REMOTE_IMAGE Message contains an external image Clearly the main culprits are the all-caps subject line SUBJ_ALL_CAPS and the MIME_HTML_ONLY (I guess, no text alternative). The email was for a flight booking confirmation and the subject looked like this: Subject: JENNINGS/NICHOLAS KOSSOW MR 24 JAN MOF DPS Headers: X-Envelope-From: <[email protected]> X-Envelope-To: <[email protected]> Received: from mail1.amadeus.net (unknown) by 147-49-15-51.rev.cloud.scaleway.com(Postfix 3.1.0/8.13.0) with SMTP id unknown Fri, 20 Jan 2017 07:55:10 +0000 (envelope-from <[email protected]> Received: from obeap115 (nat-dns-mnp.amadeus.net [82.150.225.129]) by mail1.amadeus.net (Postfix) with ESMTP id 3F7A9200042 for <[email protected]>; Fri, 20 Jan 2017 07:55:10 +0000 (GMT) From: [email protected] TO: [email protected] Message-ID: <CTS/GA/C50D54421A07/[email protected]> FND-Request-ID: <CTS/GA/C50D54421A07/[email protected]> Job-ID: 1 Subject: JENNINGS/NICHOLAS KOSSOW MR 24 JAN MOF DPS Date: Fri, 20 Jan 2017 07:55:09 +0000 Content-Type: multipart/mixed; boundary="----=_Part_191904_1900935199.1484898909762" MIME-Version: 1.0
There is both specific and general advice that may be useful in this case. Specific The underlying problem here is that Garuda Airlines, bless their little cotton socks, are sending confirmation emails that bear many of the hallmarks of spam. The subject line is VERY SHOUTY, they send HTML-only emails which contain quite lot of images and very little text, the envelope-sender ( [email protected] ) is pretty clearly a machine-constructed nonce, and the email provider for their (outsourced) confirmation system (amadeus.com) has a useless SPF record (despite all our advice to the contrary , some people mistakenly think there is value in a record that lists some of their sending systems and ends ~all ). There is not much you can do about most of this. If you want to be sure of these getting through, a line in your ~/.spamassassin/user_prefs that says whitelist_from *@amadeus.com will get these messages through to you. Going further and tampering with the weights of the rules that were triggered is probably a bad idea. The SpamAssassin (SA) ruleset is created by filtering a huge weight of spam, and working out what characteristics apply to most of it; you are likely to open your INBOX to a lot more than just Garuda confirmation emails by turning off those rules. General This is exactly the sort of situation the Bayesian engine handles well. It is designed to filter out email that doesn't trigger the other rules but contains stuff you don't want to read, whilst helping through email that does trigger those rules but contains stuff you do want to read. IIRC, the engine won't do anything if you're not training it. The easiest way to train it is to maintain two folders, called (say) spam and ham . Into spam you put copies of email that made it into your INBOX but you didn't want; into ham you put copies of emails that fell foul of SA but you did want, such as this confirmation email. Then nightly (or so) you have a cron job that says sa-learn --spam --mbox mail/spam sa-learn --ham --mbox mail/ham modifying the paths accordingly. Over time, this will teach the engine what you do, and don't, like to read. Since a high Bayesian score can add +4.0 points to an email's SA score, while a low one can subtract 1.9, a well-trained engine can really help SA distinguish what you want to read from what you don't - but you have to put the effort in to teach it .
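As a concrete (and adjustable) shape for that nightly job, a user crontab entry along these lines does the trick; the mbox paths are whatever you chose for your spam and ham folders:

30 2 * * *  sa-learn --spam --mbox $HOME/mail/spam  >/dev/null 2>&1
40 2 * * *  sa-learn --ham  --mbox $HOME/mail/ham   >/dev/null 2>&1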
{ "source": [ "https://serverfault.com/questions/827474", "https://serverfault.com", "https://serverfault.com/users/29315/" ] }
828,130
I realize this looks like a duplicate of at least a few other questions, but I have read them each several times and am still doing something wrong. Following are the contents of my myexample.com nginx config file located in /etc/nginx/sites-available . server { listen 443 ssl; listen [::]:443 ssl; server_name myexample.com www.myexample.com; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; #Configures the publicly served root directory #Configures the index file to be served root /var/www/myexample.com; index index.html index.htm; } It works, when I go to https://myexample.com the content is served and the connection is secure. So this config seems to be good. Now if I change the ssl port to 9443 and reload the nginx config, the config reloads without error, but visiting https://myexample.com shows an error in the browser (This site can’t be reached / myexample.com refused to connect. ERR_CONNECTION_REFUSED) I have tried suggestions and documentation here , here , and here (among others) but I always get the ERR_CONNECTION_REFUSED error. I should note that I can use a non-standard port and then explicitly type that port into the URL, e.g., https://myexample.com:9443 . But I don't want to do that. What I want is for a user to be able to type myexample.com into any browser and have nginx redirect to the secure connection automatically. Again, I have no issues whatsoever when I use the standard 443 SSL port. Edit: I'm using nginx/1.6.2 on debian/jessie
In order to support typing " https://myexample.com " in your browser, and having it handled by the nginx config listening on port 9443, you will need an additional nginx config that still listens on port 443, since that is the IP port to which the browser connects . Thus: server { listen 443 ssl; listen [::]:443 ssl; server_name myexample.com www.myexample.com; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; # Redirect the browser to our port 9443 config return 301 $scheme://myexample.com:9443$request_uri; } server { listen 9443 ssl; listen [::]:9443 ssl; server_name myexample.com www.myexample.com; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; #Configures the publicly served root directory #Configures the index file to be served root /var/www/myexample.com; index index.html index.htm; } Notice that the same certificate/key is needed for both sections, since the certificate is usually tied to the DNS hostname, but not necessarily the port. Hope this helps!
{ "source": [ "https://serverfault.com/questions/828130", "https://serverfault.com", "https://serverfault.com/users/211699/" ] }
828,137
I have an SQL query that outer joins 4 tables from 2 databases. The tables have 30000 rows each. In SQL Server 2014 the query took 0s. After an upgrade to SQL Server 2016 I see a deterioration in every JOIN query I use; the query in question needs 1m30s to complete. The server isn't otherwise in use when I run the tests. How can I find out what is wrong? I tried running SQL Profiler and adding resources to the machine (it's a virtual machine with a whole IBM x3650 to itself).
In order to support typing " https://myexample.com " in your browser, and having it handled by the nginx config listening on port 9443, you will need an additional nginx config that still listens on port 443, since that is the IP port to which the browser connects . Thus: server { listen 443 ssl; listen [::]:443 ssl; server_name myexample.com www.myexample.com; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; # Redirect the browser to our port 9443 config return 301 $scheme://myexample.com:9443$request_uri; } server { listen 9443 ssl; listen [::]:9443 ssl; server_name myexample.com www.myexample.com; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; #Configures the publicly served root directory #Configures the index file to be served root /var/www/myexample.com; index index.html index.htm; } Notice that the same certificate/key is needed for both sections, since the certificate is usually tied to the DNS hostname, but not necessarily the port. Hope this helps!
{ "source": [ "https://serverfault.com/questions/828137", "https://serverfault.com", "https://serverfault.com/users/396675/" ] }
828,141
Can I plug an old HDD (EXT4 partitions) into a new ESXi server and load all the existing virtual machines from there? Is this even possible (mounting my old EXT4 partition inside ESXi 6.5 and loading virtual machines from there)? Or is it really necessary to format my old HDD with the VMFS file system? I'm totally new to ESXi. My company used to virtualize with VMware Workstation for Linux, but today they asked me to migrate our VMs to a new ESXi server.
In order to support typing " https://myexample.com " in your browser, and having it handled by the nginx config listening on port 9443, you will need an additional nginx config that still listens on port 443, since that is the IP port to which the browser connects . Thus: server { listen 443 ssl; listen [::]:443 ssl; server_name myexample.com www.myexample.com; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; # Redirect the browser to our port 9443 config return 301 $scheme://myexample.com:9443$request_uri; } server { listen 9443 ssl; listen [::]:9443 ssl; server_name myexample.com www.myexample.com; ssl_certificate /etc/letsencrypt/live/myexample.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/myexample.com/privkey.pem; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; #Configures the publicly served root directory #Configures the index file to be served root /var/www/myexample.com; index index.html index.htm; } Notice that the same certificate/key is needed for both sections, since the certificate is usually tied to the DNS hostname, but not necessarily the port. Hope this helps!
{ "source": [ "https://serverfault.com/questions/828141", "https://serverfault.com", "https://serverfault.com/users/396659/" ] }
828,364
My company issued laptop is Dell Latitude E7450 running Windows 7 Professional. If I hibernate the system, on boot it displays a blank screen and does not respond to input. Subsequent reboots do not yield different results. My IT department tells me that Windows hibernation is problematic on Dells, and that hibernating destroys the boot partition. They rebuilt the boot-loader, and told me to be very careful not to let the system hibernate again. I am a software engineer, not a system administrator, but something about this doesn't sound right to me. Is my IT department right? Is it a common problem that users brick their systems through hibernation? Or is it something about the image they have created? Interaction with UEFI? Is there a solution?
Nope, it's definitely not trashing the bootloader since hibernation never touches the bootloader with a write. It doesn't have to, since all that's required to boot from a hibernation image is for that image to be present at all. It's always checked for during boot. It's failing to restore from hibernation, and keeps trying to restore every time it boots. This is usually due to a driver issue, and likely can be fixed by making sure all of your drivers are installed and at their latest versions (or at least not ancient versions). One easy way to get around this is to delete c:\hiberfil.sys from the drive (obviously with some live OS, or with the OS drive connected to another working machine). With that file gone, it will no longer try to restore that image to RAM and will continue with its normal boot process. The thing to do here would be to disable hibernation on this laptop entirely, or fix the underlying device reset / driver issue.
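If you take the "disable hibernation entirely" route, one elevated command does it and removes hiberfil.sys as a side effect; the same command with on re-enables it later if the driver issue gets sorted:

powercfg /hibernate off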
{ "source": [ "https://serverfault.com/questions/828364", "https://serverfault.com", "https://serverfault.com/users/396864/" ] }
828,409
I have a basic-ish ELB v2 site. No clustering or anything yet. I'm pretty inexperienced with AWS. My stack is nginx/uwsgi/django + some other services. I was wondering if anyone had thoughts about making a "sorry, the website is currently down..."-style page (custom text I can update planned downtime is a bonus!) whenever there is downtime for whatever reason, and the health of the instance is red. It doesn't seem that Amazon offers this capability - am I missing anything? Is there a way to create a separate, super-tiny instance that is ONLY served if the main one is red, or something? Thanks!
The simple and cool solution here is to put your ELB behind CloudFront. If the origin server (the ELB in this case) throws a 5XX error (or 4XX if you like), CloudFront can return a custom error page, which you can configure CloudFront to fetch from an S3 bucket by creating a second origin pointing to the bucket and creating a cache behavior routing (e.g.) /errors/static/* to the bucket. This works better than Route 53 failover for an important reason... a fatal flaw, if you will... browsers are terrible about caching DNS lookups for far longer than you expect. The DNS TTL isn't relevant. Essentially, once a browser has a DNS entry in hand, it just keeps trying to use it... typically, until all browser windows are closed. So if your site goes down for a visitor who was already on the site, they're unlikely to see the alternate site. Worse, if a visitor hits your site for the first time while it's down, they'll "stick" on the maintenance page until they close all browser windows. If you use failover DNS, that's really only good if the failover target is still your application, maybe just further away. You can turn CloudFront's caching off if you don't need it. You can also configure CloudFront's error caching TTL to a nonzero value if you want it to quit hammering your site while it's down and trying to recover. For a given page that throws an error, it will keep showing the error page and not bother your server with more requests for that page until the Error Caching TTL expires.
{ "source": [ "https://serverfault.com/questions/828409", "https://serverfault.com", "https://serverfault.com/users/396895/" ] }
828,412
I want to use Google Cloud's StackDriver integration for health checks and uptime monitoring. I have a web server running on a google instance, available at foo.mydomain.ai. NB: Port 80 is open to the world, and the tests I did were done from both other Google instances and my home computer. The webserver is a Jetty (Scalatra) instance running within Tomcat 8. I've setup the health check as follows: No matter what I do, I get the error There was an issue connecting to an endpoint of one or more of your resources. This could be due to temporary network issues or trying to connect with a protocol that is not supported by the resource (e.g. trying to connect to an instance though http that does not have a webserver on it) Fetching the same URL with curl gives the proper response: habitats@me:~/foobar curl http://foo.mydomain.ai/health/barservice OK% Fetching using plain GET also works as shown in
The simple and cool solution here is to put your ELB behind CloudFront. If the origin server (the ELB in this case) throws a 5XX error (or 4XX if you like), CloudFront can return a custom error page , which you can configure CloudFront to fetch from an S3 bucket by creating a second origin pointing to the bucket and creating a cache behavior routing (e.g.) /errors/static/* to the bucket. This works better than Route 53 failover for an important reason... a fatal flaw, if you will... browsers are terrible about caching DNS lookups for far longer than you expect. The DNS TTL isn't relevant. Essentially, once a browser has a DNS entry in hand, it just keeps trying to use it... typically, until all browser windows are closed. So if your site goes down for a visitor who was already on the site, they unlikely to see the alternate site. Worse, if a visitor hits your site for the first time while it's down, they'll "stick" on the maintenance page until they close all browser windows. If you use failover DNS, that's really only good if the failover target is still your application, maybe just further away. You can turn CloudFront's caching off if you don't need it. You can also configure CloudFront's error caching TTL to a nonzero value if you want it to quit hammering your site while it's down and trying to recover. For a given page that throws an error, it will keep showing the error page and not bother your server with more requests for that page until the Error CachingTTL expires.
{ "source": [ "https://serverfault.com/questions/828412", "https://serverfault.com", "https://serverfault.com/users/180213/" ] }
828,414
I have a functional password policy in my OpenLDAP server. However, I just tried testing out the pwdInHistory property and it does not seem to prevent me from using the previous password I had just set. For reference, part of my OpenLDAP server's policy is to not allow any cleartext passwords be set, so all of our passwords are set using SSHA from slappasswd. I set pwdInHistory to 3, a relatively short number to test with first. I only bind as the user that is updating their password and not the LDAP root DN. Anyone have any ideas why this is not functioning the way I set up the password policy?
{ "source": [ "https://serverfault.com/questions/828414", "https://serverfault.com", "https://serverfault.com/users/396898/" ] }
828,480
I started working for a company that fired a previous IT worker for leaking data. I can only say the following things: We use a Firebird DB with an application written by another company, Proxmox for virtualization of Windows Server 2008 R2 and SQL Server, a Cloud Core MikroTik router, and a few other MikroTik devices. I am not 100% sure, but is there some quick way to check if there are some backdoors left, without interrupting internal processes and reformatting everything? This previous guy was really good, having written software in C++ and C#. I also know that he did some assembler and cracked a few programs in OllyDbg.
The only way to be absolutely certain is to wipe every system clean and to reinstall from scratch. You will also need to audit all of the locally generated software and configurations to ensure that they do not contain backdoors. This is a non-trivial task which comes with a non-trivial cost. Beyond that, there isn't really much you can do. Obviously, while you're deciding what to do: audit all firewall rules for validity, audit all accounts for validity, audit all sudoers files for validity, and change all passwords and keys. But that is only scratching the surface.
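A few starting points for that interim audit, as a rough sketch for a typical Linux host (paths and log locations vary by distribution, and none of this proves the system is clean; it only surfaces the obvious):
awk -F: '$3 == 0 {print $1}' /etc/passwd             # any UID-0 accounts besides root?
cat /etc/sudoers /etc/sudoers.d/* 2>/dev/null         # unexpected sudo grants?
for h in /root /home/*; do cat "$h/.ssh/authorized_keys" 2>/dev/null; done   # unknown SSH keys?
last -a | head -n 50                                  # recent logins worth explaining
crontab -l; ls /etc/cron.*/                           # scheduled jobs nobody remembers creating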
{ "source": [ "https://serverfault.com/questions/828480", "https://serverfault.com", "https://serverfault.com/users/396971/" ] }
828,888
I am not sure if I've been hacked or not. I tried to log in through SSH and it wouldn't accept my password. Root login is disabled, so I went to rescue mode and turned root login on and was able to log in as root. As root, I tried to change the password of the affected account to the same password with which I had tried to log in before; passwd replied with "password unchanged". I then changed the password to something else and was able to log in, then changed the password back to the original password and I was again able to log in. I checked auth.log for password changes but didn't find anything useful. I also scanned for viruses and rootkits and the server returned this: ClamAV: "/bin/busybox Unix.Trojan.Mirai-5607459-1 FOUND" RKHunter: "/usr/bin/lwp-request Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: a /usr/bin/perl -w script, ASCII text executable Warning: Suspicious file types found in /dev:" It should be noted that my server isn't widely known. I have also changed the SSH port and enabled 2-step verification. I am worried I got hacked and someone is trying to fool me into thinking "everything is fine, don't worry about it".
Like J Rock, I think this is a false positive. I had the same experience. I received an alarm from 6 different, disparate, geographically separated servers in a short time span. 4 of these servers only existed on a private network. The one thing they had in common was a recent daily.cld update. So, after checking for some of the typical heuristics of this trojan without success, I booted a vagrant box with my known clean baseline and ran freshclam. This grabbed "daily.cld is up to date (version: 22950, sigs: 1465879, f-level: 63, builder: neo)". A subsequent clamscan of /bin/busybox returned the same "/bin/busybox Unix.Trojan.Mirai-5607459-1 FOUND" alert as on the original servers. Finally, for good measure, I also spun up a vagrant box from Ubuntu's official box and also got the same "/bin/busybox Unix.Trojan.Mirai-5607459-1 FOUND" (Note: I had to up the memory on this vagrant box from its default 512MB, or clamscan failed with 'killed'.) Full output from the fresh Ubuntu 14.04.5 vagrant box: root@vagrant-ubuntu-trusty-64:~# freshclam ClamAV update process started at Fri Jan 27 03:28:30 2017 main.cvd is up to date (version: 57, sigs: 4218790, f-level: 60, builder: amishhammer) daily.cvd is up to date (version: 22950, sigs: 1465879, f-level: 63, builder: neo) bytecode.cvd is up to date (version: 290, sigs: 55, f-level: 63, builder: neo) root@vagrant-ubuntu-trusty-64:~# clamscan /bin/busybox /bin/busybox: Unix.Trojan.Mirai-5607459-1 FOUND ----------- SCAN SUMMARY ----------- Known viruses: 5679215 Engine version: 0.99.2 Scanned directories: 0 Scanned files: 1 Infected files: 1 Data scanned: 1.84 MB Data read: 1.83 MB (ratio 1.01:1) Time: 7.556 sec (0 m 7 s) root@vagrant-ubuntu-trusty-64:~# So, I also believe this is likely to be a false positive. I will say, rkhunter did not give me the "/usr/bin/lwp-request" warning, so maybe PhysiOS Quantum is having more than one issue. EDIT: I just noticed that I never explicitly said that all of these servers are Ubuntu 14.04. Other versions may vary?
{ "source": [ "https://serverfault.com/questions/828888", "https://serverfault.com", "https://serverfault.com/users/372791/" ] }
829,326
I have an MVC app. I have a controller that, when called, runs a background process to query Active Directory and update the database. http://myapp/BackgroundTask/Run I want to run this on a schedule (daily) without opening a browser. I see that there are a lot of third-party solutions; is there something built in?
Use the Invoke-WebRequest cmdlet from PowerShell. In your task: Action: Start a program Program/script: powershell.exe Arguments: -Command "Invoke-WebRequest http://myapp/BackgroundTask/Run"
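If you prefer to create the task from PowerShell rather than through the Task Scheduler GUI, a rough sketch (the task name and run time are placeholders; the URL is the one from the question):
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-Command "Invoke-WebRequest http://myapp/BackgroundTask/Run"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Daily AD sync' -Action $action -Trigger $trigger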
{ "source": [ "https://serverfault.com/questions/829326", "https://serverfault.com", "https://serverfault.com/users/95097/" ] }
829,593
I need to search something in a huge log-file (over 14 GB). I'm pretty sure it's in the last 4 GB or so. Is there a way to skip the first X GB to speed things up?
I guess you could use tail to only output the last 4GB or so by using the -c switch: -c, --bytes=[+]NUM output the last NUM bytes; or use -c +NUM to output starting with byte NUM of each file You could probably do something with dd too by setting bs=1 and skipping to the offset you want to start at, e.g. dd if=file bs=1024k skip=12g | grep something
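For example, with GNU tail (the log path and pattern are placeholders), scanning roughly the last 4 GB would look like:
tail -c 4G /var/log/huge.log | grep 'something'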
{ "source": [ "https://serverfault.com/questions/829593", "https://serverfault.com", "https://serverfault.com/users/113304/" ] }
830,636
This Friday I saw that I had 2 held back packages for some reason when I ran apt-get upgrade , so naturally I did what any inexperienced sysadmin would do and uninstalled the packages in the hopes that I could simply re-install them and the problem would be solved. Little did I know, I just made the situation worse. When I tried to reinstall openjdk-8-jre-headless , I got this: $ apt-get install openjdk-8-jre-headless Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: openjdk-8-jre-headless : Depends: ca-certificates-java but it is not going to be installed E: Unable to correct problems, you have held broken packages. I tried to upgrade the mentioned package manually, but to no avail. $ apt-get upgrade ca-certificates-java Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... ca-certificates-java is already the newest version. Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Looking around I saw no mention of a solution to this exact error anywhere. I use ElasticSearch on the server, which I restarted, but I should've expected that it wouldn't start up at this point. So now I'm without Java and my users are without search. What is going on and how can I fix this?
This worked for me: apt install -t jessie-backports openjdk-8-jre-headless ca-certificates-java REF: https://unix.stackexchange.com/questions/342403/openjdk-8-jre-headless-depends-ca-certificates-java-but-it-is-not-going-to-be
{ "source": [ "https://serverfault.com/questions/830636", "https://serverfault.com", "https://serverfault.com/users/171050/" ] }
831,217
We are engaging a consultant in India as our Linux administrator. We don't know him well, and he requires root access to all our servers to do his job (including a security audit). What is the best practice for enabling a remote consultant to do such work while protecting ourselves against any malicious activity? Thanks in advance.
Don't. Also, you're in as much danger of ineptitude as malice, from what I've seen of the typical way companies handle this. I'd like to say there are probably great system administrators out there in India, but the way many companies do things is terrible. If you're going through a body shop, you're also likely seeing a pretty big cut go to them, and many of them are unlikely to have properly vetted their employees. I've talked to three, one of whom I worked for, and none of them did any technical interviews. So, if you must hire someone remotely, for god's sake, interview him yourself and make sure he knows his work. System administration is far too important to hand over to someone blindly. Now that I've handled the "ineptitude" part of it: administration is a pretty broad term, and someone with root access can do anything. Now, personally I think creating an account for the admin and giving him the ability to elevate himself through sudo is a better idea (which your config management system should handle if you have many servers). That said, even that relies on a certain amount of trust. There are so many stories out there of the sheer damage a disgruntled sysadmin can do. Change all your passwords? Sure, you could get in eventually, but it's not trivial, and it would probably cost more than you're saving. So, consider a local. If not, consider someone you have vetted yourself and have directly hired.
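A minimal sketch of that "named account plus sudo" approach (the account name is arbitrary, and you may well want to restrict the command list rather than grant ALL):
adduser consultant
visudo -f /etc/sudoers.d/consultant       # add a line such as: consultant ALL=(ALL) ALL
usermod --lock --expiredate 1 consultant  # run this the moment the engagement ends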
{ "source": [ "https://serverfault.com/questions/831217", "https://serverfault.com", "https://serverfault.com/users/399294/" ] }
831,876
I am looking to upgrade servers and am trying to figure out a good plan. We currently have 4 servers: OpenBSD firewall/VPN server FreeNAS backup server (local) that receives ZFS snapshots FreeNAS backup server (remote) that receives ZFS snapshots The workhorse FreeBSD server below. FreeBSD Server ~2010 FreeBSD 8.4, 32GB RAM, dual Xeon E5520 ZFS (8 disks, zraid of disks in mirrored pairs, 8TB) Services: Samba Netatalk (Apple filesharing) Apache (mostly internal, some external-facing sites) MySQL VirtualBox (Windows 2k3 instance) ZFS snapshots My Plan (basic) I am planning a server upgrade that would have us switch from one primary server to two servers that would each take some of the server duties from the list above (and would replicate to each other) so that if one goes down, I could rapidly activate all features on the second. Something like: Server 1: Samba Netatalk (Apple filesharing) VirtualBox (Windows 2k3 instance) ZFS snapshots Server 2: Apache (mostly internal, some external-facing sites) MySQL ZFS snapshots I've only ever run bare metal, and I have no experience with VMs other than running Windows 2k3 on VirtualBox. Should I look at running my server instances as VMs? I thought that might make restoring from a crash easier. In general, does this seem like a good plan? I've been looking at ixSystems servers and Dell rack hardware, if that makes a difference. (I also have never used any rack mount equipment.)
No question, virtualize. The benefits and flexibility afforded by virtualization far outweigh the negligible performance hit. Your plan, though, is sub-optimal, primarily because VirtualBox is a desktop-grade virtualization solution and is not intended for server usage. Here's what I'd suggest: install (free) VMware ESXi on both servers, then create VMs on them as needed. If you don't care for ESXi, then consider Hyper-V or KVM. Leave the host OS/hypervisor as "clean" as possible, responsible only for running your VMs, and create VMs as needed. Don't run any application processes on the host OS. If you have some budget for this, pick up the VMware Essentials Plus bundle, which gets you vCenter; that will allow you to do things like live VM migrations between hosts, centralized management, backups using tools like Veeam, etc. Once you move to a virtualized environment, you'll never go back.
{ "source": [ "https://serverfault.com/questions/831876", "https://serverfault.com", "https://serverfault.com/users/360988/" ] }
832,799
I am deploying a small 3 node cluster and I want to add the public IP addresses as defined in my inventory to the /etc/hosts files of all of the nodes. I am trying to use the following, but it is giving me an error: - name: Add IP address of all hosts to all hosts lineinfile: dest: /etc/hosts line: '{{ hostvars[item]["ansible_host"] }} {{ hostvars[item]["ansible_hostname"] }} {{ hostvars[item]["ansible_nodename"] }}' state: present with_items: groups['all'] The error is: fatal: [app1.domain.com]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.hostvars.HostVars object' has no attribute u"groups['all']"\n\nThe error appears to have been in '/Users/k/Projects/Ansible/roles/common/tasks/main.yml': line 29, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Add IP address of all hosts to all hosts\n ^ here\n"} Any ideas on what I'm missing?
The previous answer simply does not work because it adds a new line for the same host instead of modifying the existing line when an IP Address for a host changes. The following solution takes into account when the ip address changes for a specific server and handles it well by only modifying the line instead of adding duplicate entries. --- - name: Add IP address of all hosts to all hosts lineinfile: dest: /etc/hosts regexp: '.*{{ item }}$' line: "{{ hostvars[item].ansible_host }} {{item}}" state: present when: hostvars[item].ansible_host is defined with_items: "{{ groups.all }}"
{ "source": [ "https://serverfault.com/questions/832799", "https://serverfault.com", "https://serverfault.com/users/112430/" ] }
833,484
I am running an SSH server and I am still using simple password authentication. Everywhere I read about security I am advised to use public-key authentication, but I don't get the advantages. Using keys is, in my eyes, either insecure or a lot of extra work. Of course, if someone tries to brute-force the login to my SSH server, a public key is a lot stronger than any password. But aside from that, it's totally insecure. The advisors mostly argue that you don't have to remember a password. How insecure is that? So if someone hacks into my computer, he doesn't just get my computer, but my server too? If I am using SSH from various different clients, I have to store the private keys on every one of them, which multiplies the possibility that they fall into the wrong hands. I could save them on a USB stick which I carry with me, but it can be lost and the finder would have access to my server. Possibly I am better served with two-factor authentication. Is there any argument I am missing? What is the best way for me?
"...if someone hacks into my computer, he doesn't just get my computer, but my server too?" This is potentially true anyway with keyloggers: as soon as you log into your server from the compromised computer, they get the password. But there are 3 advantages to keys: 1) Cacheable authentication. Enter your passphrase once, carry out multiple ssh commands. This is very useful if you're using something that uses ssh as a transport, like scp, rsync or git. 2) Scalable authentication. Enter your passphrase once, log into multiple machines. The more machines you have, the more useful this is. If you have 100 machines, what do you do? You can't use the same password (unless it's a clone farm), and you can't remember that many. So you'd have to use a password manager, and you're back to a single point of compromise. Effectively the key passphrase is your password manager. 2b) It scales in the other way if you have multiple admins using the same systems, because you can revoke keys from user A without having to tell B, C, D, E, F... that the password has changed. (This can also be done with individual accounts and sudo, but then you have to provision those accounts somehow.) 3) Automation and partial delegation. You can set up SSH to run a particular command when a key connects. This enables an automated process on system A to do something on system B without having full passwordless trust between the two. (It's a replacement for rlogin/rsh, which was hilariously insecure.) Edit: another advantage of public keys rather than passwords is the common scenario where the server is compromised through a vulnerability. In this case, logging in with a password compromises the password immediately. Logging in with a key does not! I would say this is more common than the admin's originating desktop getting compromised.
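As a concrete illustration of point 3 (the script path and key are invented for the example): a line like this in the target account's ~/.ssh/authorized_keys pins that key to a single command and strips the extras:
command="/usr/local/bin/run-backup.sh",no-port-forwarding,no-pty,no-agent-forwarding ssh-ed25519 AAAAC3...example backup@hosta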
{ "source": [ "https://serverfault.com/questions/833484", "https://serverfault.com", "https://serverfault.com/users/370686/" ] }
833,810
I am trying to use a nested loop in terraform. I have two list variables list_of_allowed_accounts and list_of_images , and looking to iterate over list list_of_images and then iterate over list list_of_allowed_accounts . Here is my terraform code. variable "list_of_allowed_accounts" { type = "list" default = ["111111111", "2222222"] } variable "list_of_images" { type = "list" default = ["alpine", "java", "jenkins"] } data "template_file" "ecr_policy_allowed_accounts" { template = "${file("${path.module}/ecr_policy.tpl")}" vars { count = "${length(var.list_of_allowed_accounts)}" account_id = "${element(var.list_of_allowed_accounts, count.index)}" } } resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" { count = "${length(var.list_of_images)}" repository = "${element(aws_ecr_repository.images.*.id, count.index)}" count = "${length(var.list_of_allowed_accounts)}" policy = "${data.template_file.ecr_policy_allowed_accounts.rendered}" } This is a bash equivalent of what I am trying to do. for image in alpine java jenkins do for account_id in 111111111 2222222 do // call template here using variable 'account_id' and 'image' done done
Terraform doesn't have direct support for this sort of nested iteration, but we can fake it with some arithmetic. variable "list_of_allowed_accounts" { type = "list" default = ["1111", "2222"] } variable "list_of_images" { type = "list" default = ["alpine", "java", "jenkins"] } data "template_file" "ecr_policy_allowed_accounts" { count = "${length(var.list_of_allowed_accounts) * length(var.list_of_images)}" template = "${file("${path.module}/ecr_policy.tpl")}" vars { account_id = "${var.list_of_allowed_accounts[count.index / length(var.list_of_images)]}" image = "${var.list_of_images[count.index % length(var.list_of_images)]}" } } resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" { count = "${data.template_file.ecr_policy_allowed_accounts.count}" repository = "${var.list_of_images[count.index % length(var.list_of_images)]}" policy = "${data.template_file.ecr_policy_allowed_accounts.*.rendered[count.index]}" } Since we want to create a policy template for every combination of account and image, the count on the template_file data block is the two multiplied together. We can then use the division and modulo operations to get back from count.index to the separate indices into each list. Since I didn't have a copy of your policy template I just used a placeholder one; this configuration thus gave the following plan: + aws_ecr_respository_policy.repo_policy_allowed_accounts.0 policy: "policy allowing 1111 to access alpine" repository: "alpine" + aws_ecr_respository_policy.repo_policy_allowed_accounts.1 policy: "policy allowing 1111 to access java" repository: "java" + aws_ecr_respository_policy.repo_policy_allowed_accounts.2 policy: "policy allowing 1111 to access jenkins" repository: "jenkins" + aws_ecr_respository_policy.repo_policy_allowed_accounts.3 policy: "policy allowing 2222 to access alpine" repository: "alpine" + aws_ecr_respository_policy.repo_policy_allowed_accounts.4 policy: "policy allowing 2222 to access java" repository: "java" + aws_ecr_respository_policy.repo_policy_allowed_accounts.5 policy: "policy allowing 2222 to access jenkins" repository: "jenkins" Each policy instance applies to a different pair of account id and image, covering all combinations.
{ "source": [ "https://serverfault.com/questions/833810", "https://serverfault.com", "https://serverfault.com/users/213050/" ] }
834,320
I have a Heroku app and I need to set up a domain for it. The common way to set it up is to use a CNAME record to specify that this domain is an alias to <your-domain-name>.herokuapp.com. The thing is, I also want to add Google Webmasters and Yandex.Metrika integrations, and the easiest way is to add two TXT records for the domain. I set it up like this: I need to have 2 TXT records on http://www.cscombo.com, but apparently this won't work because of this: https://stackoverflow.com/questions/34613083/cname-and-txt-record-for-same-subdomain-not-working My current setup isn't working properly, because adding http://www.cscombo.com to Google Webmasters wouldn't work (the TXT record for the www subdomain doesn't exist), and adding http://cscombo.com (the non-www version) will work (the TXT record for this subdomain exists), but this way Google Webmasters won't be able to read both sitemap.txt and robots.txt (because they both redirect to the www version of the site). It's the same story with Yandex.Metrika. So, the question: is there any way to add CNAME and TXT records for the same subdomain?
You can't. As RFC 1034 says in section 3.6.2, "If a CNAME RR is present at a node, no other data should be present." If you want a TXT record for (say) www.example.com, you can't have a CNAME for www.example.com, and will have to find another way to achieve what you want. This may mean monitoring example.herokuapp.com yourself, and when the IP address changes, updating your own A records for www.example.com.
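A sketch of what the zone would hold instead of the CNAME (the IP here is from the documentation range and the TXT value is a placeholder; the A record has to be kept in sync with whatever the Heroku hostname currently resolves to):
www.example.com.   300  IN  A    203.0.113.10
www.example.com.   300  IN  TXT  "google-site-verification=placeholder-token"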
{ "source": [ "https://serverfault.com/questions/834320", "https://serverfault.com", "https://serverfault.com/users/216722/" ] }
834,341
We own a primary domain: businessdts.com. I didn't know if our admins had created a sub-domain I had requested, "BDASERVER.businessdts.com.", so I just tried to connect to it with a browser and got a "not found". Then I pinged that sub-domain and got an IP address that doesn't belong to us: Pinging BDASERVER.businessdts.com [198.105.244.117] with 32 bytes of data Our domain and all sub-domains should have an IP address of [173.203.24.209]. I had the admins check all of our DNS zones and we found no instance of the BDASERVER sub-domain (the admins had not created it yet), nor did we find any instance of the 198.105.244.117 IP address. Doing an IP lookup, we found that 198.105.244.117 belongs to a company called Search Guide Inc. (searchguideinc.com). They appear to be a domain broker of some kind. Am I missing something? How is this BDASERVER sub-domain resolving to an address that is not ours? How does someone hijack a SUB-domain?
There is no record for that subdomain: $ dig BDASERVER.businessdts.com ; <<>> DiG 9.8.3-P1 <<>> BDASERVER.businessdts.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 11871 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;BDASERVER.businessdts.com. IN A ;; AUTHORITY SECTION: businessdts.com. 300 IN SOA ns.rackspace.com. hostmaster.rackspace.com. 1487794151 10800 3600 604800 300 ;; Query time: 86 msec ;; SERVER: 192.168.64.1#53(192.168.64.1) ;; WHEN: Wed Feb 22 21:29:53 2017 ;; MSG SIZE rcvd: 103 It's likely that your ISP's DNS is doing what's referred to as NXDOMAIN hijacking, where they hijack NXDOMAIN DNS replies and instead of replying with a proper NXDOMAIN (as above), they give you the IP address of a "search" page, which typically gets advertisement revenue for them. I'd talk with your ISP and ask that they stop interfering with your traffic. If they refuse, get a better ISP or use a different resolver for your traffic.
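A quick way to confirm it is your resolver and not the zone (8.8.8.8 is used purely as an example of an independent public resolver): query it directly and compare; an honest answer comes back with status: NXDOMAIN and no A record.
dig BDASERVER.businessdts.com @8.8.8.8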
{ "source": [ "https://serverfault.com/questions/834341", "https://serverfault.com", "https://serverfault.com/users/401751/" ] }
836,198
What is the current way of installing Docker on an AWS EC2 instance running the AMI? There has been an announcement of Docker Enterprise Edition, and now I want to know if anything has changed. Until now, I have been using yum install docker and I currently get Docker version 1.12.6, build 7392c3b/1.12.6 (as of 3/3/2017). However, the Docker repository on GitHub tells me that there are already newer releases. I remember the official Docker (package) repository having a package named docker-engine replacing docker some time ago, and now they seem to have split the package up into docker-ce and docker-ee, where e.g. "Docker Community Edition (Docker CE) is not supported on Red Hat Enterprise Linux." [Source] So is it, and will it remain, correct to use the above to get the latest stable Docker version on EC2 instances running the AMI, or do I need to pull the package from somewhere else (and if so, which one, CE or EE)?
To get Docker running on the AWS AMI you should follow the steps below (these all assume you have SSH'd onto the EC2 instance). Update the packages on your instance [ec2-user ~]$ sudo yum update -y Install Docker [ec2-user ~]$ sudo yum install docker -y Start the Docker service [ec2-user ~]$ sudo service docker start Add the ec2-user to the docker group so you can execute Docker commands without using sudo. [ec2-user ~]$ sudo usermod -a -G docker ec2-user You should then be able to run all of the docker commands without requiring sudo. After running the 4th command I did need to log out and log back in for the change to take effect.
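A quick sanity check once you're back in (hello-world is Docker's standard test image):
docker info                   # should print client and server details without sudo
docker run --rm hello-world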
{ "source": [ "https://serverfault.com/questions/836198", "https://serverfault.com", "https://serverfault.com/users/403608/" ] }
836,363
I have a rather weird problem with sudo on Debian 8. Users cannot execute some of commands in /etc/sudoers.d . I use Chef to distribute configurations, so all files are automatically generated. Example: This config works fine root@server:~# cat /etc/sudoers.d/nginx # This file is managed by Chef. # Do NOT modify this file directly. user ALL=(root) NOPASSWD:/usr/sbin/nginx And this fails: root@server:~# cat /etc/sudoers.d/update-rc.d # This file is managed by Chef. # Do NOT modify this file directly. user ALL=(root) NOPASSWD:/usr/sbin/update-rc.d user@www42:~$ sudo update-rc.d [sudo] password for user: Sorry, user user is not allowed to execute '/usr/sbin/update-rc.d' as root on server. What can be wrong? Diagnostics: Mar 5 12:12:51 server sudo: user : command not allowed ; TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/usr/sbin/update-rc.d Mar 5 12:14:25 www42 su[1209]: pam_unix(su:session): session closed for user user root@server:~# sudo --version Sudo version 1.8.10p3 Configure options: --prefix=/usr -v --with-all-insults --with-pam --with-fqdn --with-logging=syslog --with-logfac=authpriv --with-env-editor --with-editor=/usr/bin/editor --with-timeout=15 --with-password-timeout=0 --with-passprompt=[sudo] password for %p: --disable-root-mailer --with-sendmail=/usr/sbin/sendmail --with-rundir=/var/lib/sudo --mandir=/usr/share/man --libexecdir=/usr/lib/sudo --with-sssd --with-sssd-lib=/usr/lib/x86_64-linux-gnu --with-selinux --with-linux-audit Sudoers policy plugin version 1.8.10p3 Sudoers file grammar version 43
The problem is the dot in update-rc.d (in /etc/sudoers.d/update-rc.d); from man sudoers: The #includedir directive can be used to create a sudo.d directory that the system package manager can drop sudoers rules into as part of package installation. For example, given: #includedir /etc/sudoers.d sudo will read each file in /etc/sudoers.d, skipping file names that end in ~ or contain a . character to avoid causing problems with package manager or editor temporary/backup files.
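One way to apply that here (the new name is arbitrary; it just must not contain a dot or end in ~):
mv /etc/sudoers.d/update-rc.d /etc/sudoers.d/update-rc_d
visudo -c    # confirm the whole sudoers configuration, including the renamed file, now parses cleanly
Since the file is generated by Chef, the rename really belongs in the cookbook resource that writes it, rather than being done by hand on each node.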
{ "source": [ "https://serverfault.com/questions/836363", "https://serverfault.com", "https://serverfault.com/users/348621/" ] }
836,376
What can be learned about a 'user' from a failed malicious SSH attempt? User name entered (/var/log/secure) Password entered (if configured, i.e. by using a PAM module) Source IP address (/var/log/secure) Are there any methods of extracting anything else? Whether it's info hidden in log files, random tricks, or from third-party tools, etc.
Well, an item that you haven’t mentioned is the fingerprints of the private keys they tried before entering a password. With OpenSSH, if you set LogLevel VERBOSE in /etc/sshd_config, you get them in the log files. You can check them against the collection of public keys your users have authorized in their profiles, to see if any of those keys have been compromised. In the case that an attacker has got hold of a user’s private key and is looking for the login name, knowing that the key is compromised could prevent the intrusion. Admittedly, it’s rare: whoever owns a private key has probably found out the login name too...
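A rough sketch of that cross-check (the log file name and home directory layout vary by distribution, and 'someuser' is a placeholder):
grep 'Failed publickey' /var/log/auth.log              # fingerprints of keys that were offered and rejected
ssh-keygen -lf /home/someuser/.ssh/authorized_keys     # fingerprints of the keys that user has authorized
Any overlap means that the matching authorized key is in someone else's hands.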
{ "source": [ "https://serverfault.com/questions/836376", "https://serverfault.com", "https://serverfault.com/users/403539/" ] }