source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---
837,450 | I have been trying to ban an IP address in iptables that starts with 047, but it would change it to 039. iptables -v -w -I INPUT 1 -s 047.75.162.122 -j DROP But the IP address would be banned as 39.75.162.122! Why do you think this is happening? | This is what is happening: $ printf "%d\n" 047
39 047 in octal is 39 in decimal. You just need to drop the leading 0 . At a guess, this is happening because something in iptables is splitting IPv4 addresses into 4 decimal numbers so it can convert the IP string representation to a long. But that's conjecture. | {
"source": [
"https://serverfault.com/questions/837450",
"https://serverfault.com",
"https://serverfault.com/users/394885/"
]
} |
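To see the same octal interpretation that bit the iptables rule above, GNU printf at the shell makes it explicit (the second command is the decimal form you actually want): $ printf '%d\n' 047
39
$ printf '%d\n' 47
47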
837,854 | I'm setting up a small business that will be providing internet service for a niche market. We'll be offering fully unrestricted and unmonitored (as much as the law allows - and while we'd rather not we will still have the ability to capture packets if justified) internet access, and I am not sure how should we respond to abuse reports (a Google search didn't find anything relevant). Let's say I get an e-mail about SSH bruteforce coming from one of our customer's IPs. How do I tell whether it's genuine and not a troll (log entries and even .pcaps can be faked)? How do the big ISPs do it (for those that actually care about abuse reports I mean)? Similarly, complaints about spam e-mail, how do I check whether they're genuine before acting upon them? Is this even a problem? Have there been instances where trolls would report someone for allegedly doing bad stuff in hopes of getting them in trouble with their provider? Am I condemned to log every single packet leaving my network or is there an industry standard solution that doesn't go to such extremes? Regards. | Generally speaking, you are acting as a neutral carrier and probably shouldn't be inspecting content. The general process for handling abuse reports is to setup a ticket system or even just a mailbox that picks up for abuse@yourdomain and then forward reports to the end user. I'm saying generally because while I have plenty of specific experience in this area, how it's done at our place won't be exactly how anyone else does it. You need to tailor the approach to the services you offer. That being said, I can give you some advice that's not too specific and forms the basis of how most places handle abuse. I am not a lawyer though and this shouldn't be interpreted as being anyone's opinion but my own, just in case someone is crazy enough to track down who my employer is. Hopefully some of this is helpful though. Basic procedure: You have an abuse email address. Mail comes in to abuse queue Tell the abuse reporter you'll pass it on. Look up which customer is using that IP, forward them the report and ask if they know what's going on. Providing the end user doesn't respond with something really stupid, an instruction along the lines of "Please don't let it happen again" is for the best. There are legitimate projects that trip abuse reports, but mostly these happen because of security researchers, if that's not your niche then you won't need to worry. Go to step 1 and repeat. Most times one loop through is enough. Abuse spoofing is not something I've really seen much of, I mean it happens but it's been really obvious since they're trying to get the person in trouble while legit abuse reports tend to be of the "We don't care why it's happening, just make it stop" kind. Things you should do You'll probably see a few piracy warnings, a bunch of spam reports, the occasional more esoteric warning... Server hosting trends towards a bigger variety, broadband is more piracy, everyone gets spam reports. Forward all of them. Most of the time the customer is going to plead innocence, then either clean up their PC or clean up their act. If they're determined to keep at it they'll probably cover their tracks better. Usually abuse reports are generated in response to actions by compromised machines... the problem children like to make a mess in someone else's front yard so that it doesn't track in to their house and make their parent's unhappy. Assume that the customer isn't intentionally sending out spam. 
Try to give customers the benefit of the doubt the first time they get a report against them. Warnings might take a while to stop if you've got a really prolific spammer, but if you continue to see reports with events after a customer has been warned, or they get a lot of complaints, you might want to consider terminating them for AUP violations. You'll probably realise pretty quickly if someone is faking reports enough to reach that point. Have traffic volume graphing. Most abuse report types (spam, copyright, ddos) will light up a traffic graph... was averaging 40kbit but suddenly jumped to 10mbit and stayed there for hours? Don't do anything until someone complains or it starts to impact customers, but irregular traffic will certainly give you ammo. Things not to do... Don't give out customer information unless someone hands you a court order and you can prove that order is legit. Some abuse reporters will ask for information in the hopes of getting a co-operative provider, but if you turn it over to anyone other than a court then you are probably creating legal issues for yourself. The police are generally not going to email you asking for your customer's billing contact, and even if they did you should still be telling them that you can only provide that information in person and on presentation of an appropriate court order. Don't turn off a customer just because someone contacted your abuse queue and asked you to. If they're reporting abuse you need to get them to provide some kind of evidence which you can act on... I said that abuse report faking wasn't common, I didn't say it didn't happen. How much you see it depends entirely on how much of a target your customer base is. Little old ladies probably aren't going to attract the attention of trolls; twitch streamers on the other hand might. Similarly, don't let abuse reporters bully you... some people can get really threatening and aggressive with their report if you don't instantly obey their orders. Your responsibility as a conduit is to forward on the notices and take timely action if the customer isn't co-operative. You only become responsible if you know the customer is doing something bad and let them continue. Have a sensible (read: not favouring pirates) policy and stick to it, that'll help if anything does go screwy. If you only provide bandwidth and not hosting, you probably aren't responsible for taking down the content unless your customer fails to do so when you ask them. Don't stress out too much. 99.9% of abuse reports at an ISP are really boring procedural stuff that amounts to "I saw this bad thing come from your network, it's probably a compromised machine, please look in to it." In most cases, comparing the reported event time to the traffic graph will tell you the legitimacy of the report. Hostile processes don't send out emails or port scans in ones or twos. One last thing. If you do ever get an abuse case where the police are involved, make sure to ask explicitly what they want you to do for them but don't expect them to have super technical answers. Sometimes the police aren't entirely familiar with the tech involved (I've been told that on one occasion they wanted to visit us to physically seize a VPS, that was fun) but they do have an idea of what they want to accomplish. Exactly what sort of thing they're going to be after depends entirely on exactly what type of services you provide. | {
"source": [
"https://serverfault.com/questions/837854",
"https://serverfault.com",
"https://serverfault.com/users/304910/"
]
} |
837,994 | I understand that SSL certs cannot be signed using SHA-1 anymore. Yet, all CA root certificates are SHA-1 signed (mostly). Does it mean the same algorithm that is no longer trusted for "your grandma's SSL shop" is fine for the uttermost top secured certificate of the world? Am I missing something? (key usage? key size?) | The signatures of the root CA certificates do not matter at all, since there is no need to verify them. They are all self-signed. If you trust a root CA certificate, there’s no need to verify its signature. If you don’t trust it, its signature is worthless for you. Edit: there are some very relevant comments below. I don’t feel comfortable copying or rephrasing them and taking credit for them instead of their authors. But I welcome people to add explanations to this answer. | {
"source": [
"https://serverfault.com/questions/837994",
"https://serverfault.com",
"https://serverfault.com/users/114209/"
]
} |
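For anyone who wants to check what actually signed a given certificate, openssl can print the field directly (the file name below is a placeholder for whichever root or leaf certificate you are inspecting): $ openssl x509 -in root-ca.pem -noout -text | grep 'Signature Algorithm'
On a root certificate this shows the algorithm of the self-signature, which, per the answer above, is not what the client relies on when it trusts the root.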
840,241 | I've spent a bit of time researching this topic and can't seem to find an exact answer, so I'm fairly confident it's not a duplicate, and while my question is based on a security need, I think it's still safe to ask here but let me know if I need to move it to the security community. Essentially, do DNS queries ever use TCP (if so, in what scenario could this occur)? Again, I'm only talking about queries. Is it possible for them to travel over TCP? If domains can only be a max of 253 bytes in length, and UDP packets can be as large as 512 bytes, won't queries always go out as UDP? I didn't think a resolvable query could be large enough to require the use of TCP. If a DNS server ever got a request for a domain larger than 253 bytes, would the server drop it/not try and resolve it? I'm certain I've made some false assumptions here. For some context, I'm working with the security group to onboard DNS queries into their security monitoring tool, and for various reasons we've decided we will capture this traffic via standard packet capture on DNS servers and domain controllers. The core requirement is to capture all DNS queries so they can identify what client attempted to resolve any given domain. Based on this requirement, we aren't concerned with capturing DNS responses or other traffic like zone transfers, which is also driven by the fact that we need to limit log volume as much as possible. As such, we are planning to capture only DNS queries destined for the DNS server and sent over UDP. For more context (kind of question scope creeping here), it's now been brought up that we might need to expand security's visibility so they can monitor for activity like covert channels running over DNS (which would present the need to capture DNS responses as well, and subsequently TCP traffic). But even in that sort of scenario, I thought any outbound DNS traffic would be in the form of lookups/queries, and that these would always be over UDP, even if from a malicious source (because of my reasoning in the first paragraph). So this brings up some additional questions: Wouldn't we at minimum be capturing half of the conversation with the approach I outlined? Or would a client ever send out DNS traffic that isn't in the form of a query? (maybe like some kind of reply to a DNS server's response, and maybe ends up going out over TCP) Can DNS queries be modified to use TCP? Would a DNS server accept and respond to a DNS query coming over TCP? Not sure if it's relevant, but we do limit DNS requests to authorized DNS servers and block all other traffic outbound over port 53. I'm definitely a rookie, so I'm sorry if my question isn't compliant, and let me know how I should modify. | Normal DNS queries use UDP port 53, but when a response is too large to fit in a single UDP message (traditionally more than 512 octets) the server sends back a reply with the 'truncated' flag set, and the client retries the query over TCP port 53 so the entire answer can be transferred. Also, the DNS server binds to port 53, but the query itself originates on a random high-numbered port (49152 or above) sent to port 53. The response will be returned to this same port from port 53. Network Ports Used by DNS | Microsoft Docs So, if you're planning on doing some security snooping on DNS queries from your network, you'll need to take the above into account. As for non-lookup traffic, consider that DNS also uses zone transfers (query type AXFR) to update other DNS servers with new records.
A man in the middle attack can begin with such a transfer, DDOS'ing a Primary name server so that it's too busy to respond to a Secondary asking for updated records. The hacker then spoofs that same Primary to feed 'poisoned' records to the Secondary, that redirect popular DNS domains to compromised hosts. So your security audit should pay close attention to query type AXFR, and your DNS systems should only accept AXFR exchanges from specific IP addresses. SANS Institute InfoSec Reading Room | sans.org | {
"source": [
"https://serverfault.com/questions/840241",
"https://serverfault.com",
"https://serverfault.com/users/407081/"
]
} |
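dig can demonstrate both transports against a resolver you control (the server IP below is a placeholder), which is a quick way to confirm that queries really can arrive over TCP port 53 and not just UDP: $ dig +notcp example.com @192.0.2.53
$ dig +tcp example.com @192.0.2.53
The first form is an ordinary UDP query; the second forces the identical query over TCP, which is also what a client falls back to after receiving a truncated reply.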
840,519 | I'm trying to use a different Storage Pool on KVM in order to store the virtual disks of my VMs and also the ISOs from the operating systems which I'm using. For example: I want to use the directory /media/work/kvm which is mounted over /dev/sda5 , as the default Storage Pool for all future situations. Configuring, creating and starting a new storage pool is pretty easy, but at least in Ubuntu it doesn't matter if I'm selecting an ISO from a different storage pool, Virtual Machine Manager always points me to the default Storage Pool ( /var/cache/libvirt ) as the storage where the virtual disks from my VMs will be created. How can I avoid this? | Before following the steps, be sure that you are running these commands as a normal user and that your user belongs to the group libvirtd (on some systems libvirt ). These are the commands I used: Listing current pools: $ virsh pool-list
Name State Autostart
-------------------------------------------
default active yes Destroying pool: $ virsh pool-destroy default
Pool default destroyed Undefine pool: $ virsh pool-undefine default
Pool default has been undefined Creating a directory to host the new pool (if it does not exist): $ sudo mkdir /media/work/kvm Defining a new pool with name "default": $ virsh pool-define-as --name default --type dir --target /media/work/kvm
Pool default defined Set pool to be started when libvirt daemons starts: $ virsh pool-autostart default
Pool default marked as autostarted Start pool: $ virsh pool-start default
Pool default started Checking pool state: $ virsh pool-list
Name State Autostart
-------------------------------------------
default active yes From now, when creating virtual machines, Virtual Machine Manager will inform you that the *.img file (virtual disk of your VM), will be saved at /media/work/kvm. | {
"source": [
"https://serverfault.com/questions/840519",
"https://serverfault.com",
"https://serverfault.com/users/356214/"
]
} |
840,835 | tldr: How can I get iptables to show just one chain? I can have iptables show just one table, but a table consists of multiple chains. I need to find where in chain INPUT is the last rule (usually but not always the REJECT all rule). I've tried awk and even some grep, but my skill in those must be waning. I've tried using awk to get just one paragraph, but that doesn't seem to work on the output of iptables --line-numbers -n -L -t filter maybe because those blank lines aren't really blank. I am looking for a solution with any normal gnu tools that would be installed on a CentOS 6 minimal environment. | I almost deleted this question. D'oh! From man iptables : -L, --list [chain]
List all rules in the selected chain. | {
"source": [
"https://serverfault.com/questions/840835",
"https://serverfault.com",
"https://serverfault.com/users/217531/"
]
} |
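Building on the -L [chain] form, the original goal (find the last rule in the INPUT chain) can be done with the tools already present on a minimal CentOS install; this is one sketch: $ iptables --line-numbers -n -L INPUT -t filter | tail -n 1
The first field of the printed line is the rule number of the last rule in INPUT.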
840,884 | I've been the target of a brute force attack on two WordPress sites I own. The attacker is using the good old XML-RPC thing to amplify brute-force password attacking. Luckily we have extremely well-generated passwords, so I highly doubt he'll ever get anywhere. I've just been using iptables to block his requests whenever he pops up again (always from the same virtual cloud provider), but I would much rather modify the server so that whenever his IP address requests any page, he gets a response telling him to get a life. Most of the requests are POSTs, so I would ideally just like to modify the response header to include something like "Better luck next time!" or something equally satisfying. Is this possible? I'm far from an expert with Apache, so I'm not certain how difficult this would be to implement. But even if it takes me hours, the satisfaction will be priceless. For reference, I'm running Ubuntu 16.04.2 LTS, with Apache 2.4.18 hosting Wordpress 4.7.3. | Just install fail2ban with the appropriate jail and be done with it. Don't bother to give a custom response, as it's most likely that it will never be seen. | {
"source": [
"https://serverfault.com/questions/840884",
"https://serverfault.com",
"https://serverfault.com/users/256472/"
]
} |
840,996 | I have installed the pimd service by means of apt . This comes with an upstream systemd unit file ( /lib/systemd/system/pimd.service ). I want the service to be restarted when for some reason it gets killed, hence I wish to add the line Restart = always in the unit file. However, I don’t want to modify the upstream unit file. Is there any workaround for this? | You have two options: Copy the unit file from /lib/systemd/system/ to /etc/systemd/system/ . And then make your modifications in /etc/systemd/system/pimd.service to completely override the unit file(s) supplied by the package maintainer. The command systemctl edit --full <service-name> automates this for you. You can alter or add specific configuration settings for a unit, without having to modify unit files by creating .conf files in a drop-in directory /etc/systemd/system/<unit-name>.<unit-type>.d/ i.e. create a /etc/systemd/system/pimd.service.d/restart.conf The command systemctl edit <service-name> performs these steps for you. See man systemd.unit | {
"source": [
"https://serverfault.com/questions/840996",
"https://serverfault.com",
"https://serverfault.com/users/287984/"
]
} |
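For the pimd case above, the drop-in route might look like this (a sketch; override.conf is the file name systemctl chooses, and Restart=always is the setting the asker wanted): $ sudo systemctl edit pimd
[Service]
Restart=always
$ sudo systemctl daemon-reload
$ sudo systemctl restart pimd
The override lands in /etc/systemd/system/pimd.service.d/override.conf and survives upgrades of the packaged unit in /lib/systemd/system/.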
841,099 | I'm running a private game server on a headless linux box. Because I'm not an idiot, said server is running as its own unprivileged user with the bare minimum access rights it needs to download updates and modify the world database. I also created a systemd unit file to properly start, stop and restart the server when needed (for said updates, for example). However, in order to actually call systemctl or service <game> start/stop/restart I still need to log in as either root or a sudo capable user. Is there a way to tell systemd that for the <game> service, unprivileged user gamesrv is permitted to run the start/stop/restart commands? | I can think of two ways to do this: One is by making the service a user service rather than a system service. Instead of creating a system unit, the systemd unit will be placed under the service user's home directory, at $HOME/.config/systemd/user/daemon-name.service . The same user can then manage the service with systemctl --user <action> daemon-name.service . To allow the user unit to start at boot , root must enable linger for the account, i.e. sudo loginctl enable-linger username . The unit must also be WantedBy=default.target . The other way is by allowing the user access to manage the system unit via PolicyKit. This requires systemd 226 or higher (and PolicyKit >= 0.106 for the JavaScript rules.d files – check with pkaction --version ). Note that Debian has deliberately held back PolicyKit to a nearly decade old version 0.105 which does not support this functionality, apparently because of one person's personal opinion , and neither it nor distributions derived from it (like Ubuntu) can use this method. You would create a new PolicyKit configuration file, e.g. /etc/polkit-1/rules.d/57-manage-daemon-name.rules which checks for the attributes you want to permit. For example : // Allow alice to manage example.service;
// fall back to implicit authorization otherwise.
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.systemd1.manage-units" &&
action.lookup("unit") == "example.service" &&
subject.user == "alice") {
return polkit.Result.YES;
}
}); The named user can then manage the named service with systemctl and without using sudo . | {
"source": [
"https://serverfault.com/questions/841099",
"https://serverfault.com",
"https://serverfault.com/users/78565/"
]
} |
841,183 | When I run this command fail2ban-client status sshd I got this: Status for the jail: sshd
|- Filter
| |- Currently failed: 1
| |- Total failed: 81
| `- File list: /var/log/auth.log
`- Actions
|- Currently banned: 2
|- Total banned: 8
`- Banned IP list: 218.65.30.61 116.31.116.7 It only shows two IPs in the banned IP list instead of the 8 that Total banned says. When I run tail -f /var/log/auth.log I get this: Mar 29 11:08:40 DBSERVER sshd[29163]: error: maximum authentication attempts exceeded for root from 218.65.30.61 port 50935 ssh2 [preauth]
Mar 29 11:08:40 DBSERVER sshd[29163]: Disconnecting: Too many authentication failures [preauth]
Mar 29 11:08:40 DBSERVER sshd[29163]: PAM 5 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.65.30.61 user=root
Mar 29 11:08:40 DBSERVER sshd[29163]: PAM service(sshd) ignoring max retries; 6 > 3
Mar 29 11:08:44 DBSERVER sshd[29165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.65.30.61 user=root
Mar 29 11:08:46 DBSERVER sshd[29165]: Failed password for root from 218.65.30.61 port 11857 ssh2
Mar 29 11:09:01 DBSERVER CRON[29172]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 29 11:09:01 DBSERVER CRON[29172]: pam_unix(cron:session): session closed for user root
Mar 29 11:10:01 DBSERVER CRON[29226]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 29 11:10:02 DBSERVER CRON[29226]: pam_unix(cron:session): session closed for user root
Mar 29 11:10:18 DBSERVER sshd[29238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=113.122.43.185 user=root
Mar 29 11:10:20 DBSERVER sshd[29238]: Failed password for root from 113.122.43.185 port 46017 ssh2
Mar 29 11:10:33 DBSERVER sshd[29238]: message repeated 5 times: [ Failed password for root from 113.122.43.185 port 46017 ssh2]
Mar 29 11:10:33 DBSERVER sshd[29238]: error: maximum authentication attempts exceeded for root from 113.122.43.185 port 46017 ssh2 [preauth]
Mar 29 11:10:33 DBSERVER sshd[29238]: Disconnecting: Too many authentication failures [preauth]
Mar 29 11:10:33 DBSERVER sshd[29238]: PAM 5 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=113.122.43.185 user=root
Mar 29 11:10:33 DBSERVER sshd[29238]: PAM service(sshd) ignoring max retries; 6 > 3
Mar 29 11:11:36 DBSERVER sshd[29245]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.31.116.7 user=root
Mar 29 11:11:38 DBSERVER sshd[29245]: Failed password for root from 116.31.116.7 port 24892 ssh2
Mar 29 11:11:43 DBSERVER sshd[29245]: message repeated 2 times: [ Failed password for root from 116.31.116.7 port 24892 ssh2]
Mar 29 11:11:43 DBSERVER sshd[29245]: Received disconnect from 116.31.116.7 port 24892:11: [preauth]
Mar 29 11:11:43 DBSERVER sshd[29245]: Disconnected from 116.31.116.7 port 24892 [preauth]
Mar 29 11:11:43 DBSERVER sshd[29245]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.31.116.7 user=root
Mar 29 11:12:39 DBSERVER sshd[29247]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.31.116.7 user=root
Mar 29 11:12:41 DBSERVER sshd[29247]: Failed password for root from 116.31.116.7 port 26739 ssh2
Mar 29 11:12:45 DBSERVER sshd[29247]: message repeated 2 times: [ Failed password for root from 116.31.116.7 port 26739 ssh2]
Mar 29 11:12:45 DBSERVER sshd[29247]: Received disconnect from 116.31.116.7 port 26739:11: [preauth]
Mar 29 11:12:45 DBSERVER sshd[29247]: Disconnected from 116.31.116.7 port 26739 [preauth]
Mar 29 11:12:45 DBSERVER sshd[29247]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.31.116.7 user=root
Mar 29 11:13:41 DBSERVER sshd[29249]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.31.116.7 user=root
Mar 29 11:13:43 DBSERVER sshd[29249]: Failed password for root from 116.31.116.7 port 27040 ssh2 The banned IPs are still trying. However, when I check with sudo iptables -L INPUT -v -n I get this: Chain INPUT (policy ACCEPT 228 packets, 18000 bytes)
pkts bytes target prot opt in out source destination
6050 435K f2b-sshd tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22 What am I doing wrong here? How can I show the full list of banned IPs? | Please keep in mind that fail2ban bans IPs only temporarily. The best way to have a look at the full list of IPs that have been blocked would be to check the log file: sudo zgrep 'Ban' /var/log/fail2ban.log* Edit : this answer previously searched for 'Ban:' , but even in 2013 the source has no colon ( ref ). The following command can also give you a clean list of input rules: sudo iptables -L INPUT -v -n | less | {
"source": [
"https://serverfault.com/questions/841183",
"https://serverfault.com",
"https://serverfault.com/users/407908/"
]
} |
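Since fail2ban puts its rules in its own chain (f2b-sshd in the iptables output above), listing that chain directly shows the bans that are active right now, complementing the zgrep over the log for historical ones: $ sudo iptables -L f2b-sshd -v -n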
841,954 | A website I frequent has finally decided to enable TLS on their servers, though not to mandate it as a lot of websites out there do. The maintainer claims that TLS must be optional. Why? On my own website I have long set up mandated TLS and HSTS with long periods, and the weaker cipher suites are disabled. Plaintext access is guaranteed to be walled out with an HTTP 301 to the TLS-protected version. Does this affect my website negatively? | In this day and age, TLS + HSTS are markers that your site is managed by professionals who can be trusted to know what they're doing. That is an emerging minimum-standard for trustability, as evidenced by Google stating they'll provide positive ranking for sites that do so. On the other end is maximum compatibility. There are still older clients out there, especially in parts of the world that aren't the United States, Europe, or China. Plain HTTP will always work (though, not always work well ; that's another story). TLS + HSTS: Optimize for search-engine ranking Plain HTTP: Optimize for compatibility Depends on what matters more for you. | {
"source": [
"https://serverfault.com/questions/841954",
"https://serverfault.com",
"https://serverfault.com/users/291626/"
]
} |
842,531 | How can zfs pools be continuously/incrementally backed up offsite? I recognise that send/receive over ssh is one method, however that involves having to manage snapshots manually. There are some tools I have found, however most are no longer supported. The one tool which looks promising is https://github.com/jimsalterjrs/sanoid however I am worried that a not-widely-known tool may do more harm than good in that it may corrupt/delete data. How are continuous/incremental zfs backups performed? | ZFS is an incredible filesystem and solves many of my local and shared data storage needs. While I do like the idea of clustered ZFS wherever possible, sometimes it's not practical, or I need some geographical separation of storage nodes. One of the use cases I have is for high-performance replicated storage on Linux application servers. For example, I support a legacy software product that benefits from low-latency NVMe SSD drives for its data. The application has an application-level mirroring option that can replicate to a secondary server, but it's often inaccurate and is a 10-minute RPO . I've solved this problem by having a secondary server (also running ZFS on similar or dissimilar hardware) that can be local, remote or both. By combining the three utilities detailed below, I've crafted a replication solution that gives me continuous replication, deep snapshot retention and flexible failover options. zfs-auto-snapshot - https://github.com/zfsonlinux/zfs-auto-snapshot Just a handy tool to enable periodic ZFS filesystem level snapshots. I typically run with the following schedule on production volumes: # /etc/cron.d/zfs-auto-snapshot
PATH="/usr/bin:/bin:/usr/sbin:/sbin"
*/5 * * * * root /sbin/zfs-auto-snapshot -q -g --label=frequent --keep=24 //
00 * * * * root /sbin/zfs-auto-snapshot -q -g --label=hourly --keep=24 //
59 23 * * * root /sbin/zfs-auto-snapshot -q -g --label=daily --keep=14 //
59 23 * * 0 root /sbin/zfs-auto-snapshot -q -g --label=weekly --keep=4 //
00 00 1 * * root /sbin/zfs-auto-snapshot -q -g --label=monthly --keep=4 // Syncoid (Sanoid) - https://github.com/jimsalterjrs/sanoid This program can run ad-hoc snap/replication of a ZFS filesystem to a secondary target. I only use the syncoid portion of the product. Assuming server1 and server2 , simple command run from server2 to pull data from server1 : #!/bin/bash
/usr/local/bin/syncoid root@server1:vol1/data vol2/data
exit $? Monit - https://mmonit.com/monit/ Monit is an extremely flexible job scheduler and execution manager. By default, it works on a 30-second interval, but I modify the config to use a 15-second base time cycle. An example config that runs the above replication script every 15 seconds (1 cycle) check program storagesync with path /usr/local/bin/run_storagesync.sh
every 1 cycles
if status != 0 then alert This is simple to automate and add via configuration management. By wrapping the execution of the snapshot/replication in Monit, you get centralized status, job control and alerting (email, SNMP, custom script). The result is that I have servers that have multiple months of monthly snapshots and many points of rollback and retention within: https://pastebin.com/zuNzgi0G - Plus, a continuous rolling 15-second atomic replica: # monit status Program 'storagesync'
status Status ok
monitoring status Monitored
last started Wed, 05 Apr 2017 05:37:59
last exit value 0
data collected Wed, 05 Apr 2017 05:37:59
.
.
.
Program 'storagesync'
status Status ok
monitoring status Monitored
last started Wed, 05 Apr 2017 05:38:59
last exit value 0
data collected Wed, 05 Apr 2017 05:38:59 | {
"source": [
"https://serverfault.com/questions/842531",
"https://serverfault.com",
"https://serverfault.com/users/303223/"
]
} |
843,296 | I am using portainer and am unable to manage remote endpoints. I tried using the command line to connect to remote docker nodes, but got a message Cannot connect to the Docker daemon at tcp://<remote_ip>:<port>. Is the docker daemon running? . Yes, they are running. I have added myself to the docker group and can access docker by SSHing into the nodes. However I cannot access any docker nodes remotely. I modified /etc/default to add / uncomment DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock" I also modified /etc/init.d/docker and /etc/init/docker.conf to include DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock" . I restarted the docker service, logged out and logged in multiple times in the process, but still cannot connect to the remote node. I cannot even connect to the local node by passing the IP. What did I miss out? What configuration in what file exposes the API over TCP? user@hostname:~$ docker -H tcp://<REMOTE_IP>:2375 info
Cannot connect to the Docker daemon at tcp://<REMOTE_IP>:2375. Is the docker daemon running?
user@hostname:~$ docker -H tcp://127.0.0.1:2375 info
Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?
user@hostname:~$ docker -H tcp://<LOCAL_IP>:2375 info
Cannot connect to the Docker daemon at tcp://<LOCAL_IP>:2375. Is the docker daemon running?
user@hostname:~$ Edit: Running ps aux | grep -i docker returns this - root 3581 0.1 0.2 596800 41540 ? Ssl 04:17 0:35 /usr/bin/dockerd -H fd://
root 3588 0.0 0.0 653576 14492 ? Ssl 04:17 0:18 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc | I found a solution thanks to Ivan Krizsan's post . I had to edit /lib/systemd/system/docker.service on my Ubuntu 16.04.2 LTS system to modify the line ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0: then sudo systemctl daemon-reload
sudo systemctl restart docker.service and everything worked :-). The next step is to figure out how to protect the docker daemon from being hijacked. | {
"source": [
"https://serverfault.com/questions/843296",
"https://serverfault.com",
"https://serverfault.com/users/85585/"
]
} |
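A drop-in override achieves the same thing without editing the packaged unit file, so upgrades won't revert it (a sketch; the binary is dockerd on current packages, docker daemon on the older ones discussed here, and exposing 2375 without TLS is unsafe outside a trusted network): $ sudo systemctl edit docker
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
$ sudo systemctl restart docker
The empty ExecStart= line clears the packaged command before the new one is set.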
844,161 | I have managed to create my certificates with LE with no errors, and I have also managed to redirect my traffic from port 80 to port 443. But when I reload my nginx server I am unable to access my website. The Nginx error logs show this line: 4 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 192.168.0.104, server: 0.0.0.0:443 I think this means that it can't find the certificates. I then navigated to the path of the certificates and they are both there, so what could be the problem?
Here is what my Nginx configuration looks like: server {
listen 80;
server_name pumaportal.com www.pumaportal.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name pumaportal.com www.pumaportal.com;
add_header Strict-Transport-Security "max-age=31536000";
ssl_certificate /etc/letsencrypt/live/pumaportal.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/pumaportal.com/privkey.pem;
ssl_stapling on;
ssl_stapling_verify on;
access_log /var/log/nginx/sub.log combined;
location /.well-known {
alias /[MY PATH]/.well-known;
}
location / {
proxy_pass http://localhost:2000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
}
} It all seems pretty straight forward I don't understand where could the problem be. After running nginx -t it all seems ok: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful | My guess is that you have another server listening on port 443.
This server has no ssl_certificate defined, and it's automatically selected (SNI).
Try to delete all symbolic links from /etc/nginx/sites-enabled except this one server you want to make work (if that's possible, otherwise check all of your servers for listening to 443 without being configured correctly). | {
"source": [
"https://serverfault.com/questions/844161",
"https://serverfault.com",
"https://serverfault.com/users/401824/"
]
} |
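A quick way to audit every server block that listens on 443, including ones pulled in from sites-enabled or conf.d, is to dump the fully merged configuration (nginx -T prints it after a syntax check): $ sudo nginx -T | grep -nE 'server_name|listen.*443|ssl_certificate'
Any server listening on 443 that has no ssl_certificate of its own is a candidate for being selected via SNI and producing the error above.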
844,162 | I've configured a dnsmasq service on my server network. I can query it correctly from the local server and from its sibling servers on the network. root@yyy ~# nslookup google.com 172.25.x.xxx
Server: 172.25.x.xxx
Address: 172.25.x.xxx#53
Non-authoritative answer:
Name: google.com
Address: 216.58.210.174 However when I try to query it from my dev machine (range 172.144.x.x) I get no response. Ports are correctly open (tested with nmap and telnet). $ nmap -p 53 172.25.x.xxx -Pn
Starting Nmap 7.01 ( https://nmap.org ) at 2017-04-12 17:54 CEST
Nmap scan report for 172.25.x.xxx
Host is up (0.0018s latency).
PORT STATE SERVICE
53/tcp open domain
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds
$ nslookup google.com 172.25.x.xxx
connection timed out; no servers could be reached It seems like the flag --local-service is enabled however I've set interface and listen-address variables interface=eth0
listen-address=127.0.0.1,172.25.x.xxx I've seen that --local-service is enabled by default but not when using "interface" and "listen-address". Is there any way to check if it is still active? edit Seems it could be a udp/tcp related issue. This query works from my dev network $ dig +tcp +short cnn.com @172.25.7.110
151.101.129.67
151.101.193.67 It seems strange because dev and server machines can "talk" udp. (tested with "netcat -u -l -p 53" on server and "netcat -u 172.25.x.xxx 53" on dev machine) | My guess is that you have another server listening on port 443.
This server has no ssl_certificate defined, and it's automatically selected (SNI).
Try to delete all symbolic links from /etc/nginx/sites-enabled except this one server you want to make work (if that's possible, otherwise check all of your servers for listening to 443 without being configured correctly). | {
"source": [
"https://serverfault.com/questions/844162",
"https://serverfault.com",
"https://serverfault.com/users/286258/"
]
} |
844,283 | Last week I got a call from a scared customer because he thought his website was hacked. When I looked up his website I saw the apache2 default page. That night my server ( Ubuntu 16.04 LTS ) had upgraded and rebooted. Normally when something goes wrong I would've got alerted during the night. This time not, because the monitoring system checks for HTTP status code 200, and the apache2 default page comes with status code 200. What happened is that during startup apache2 was faster to bind to port 80 and 443 than my actual webserver nginx. I did not install apache2 myself. Through aptitude why apache2 I found out the php7.0 package requires it. Simply removing apache2 won't work because apparently php7.0 requires it. Is it somehow possible create a restriction so that only nginx is allowed to bind to port 80 and 443? Other solutions are more than welcome too. | You can't prevent a port from being bound by the wrong service. In your case, just remove apache from autostart and you should be good. For 16.04 and newer: sudo systemctl disable apache2 For older Ubuntu versions: sudo update-rc.d apache2 disable | {
"source": [
"https://serverfault.com/questions/844283",
"https://serverfault.com",
"https://serverfault.com/users/343392/"
]
} |
844,677 | A disaster just occurred to me after I ran the command yum remove python and now I can't boot the server up anymore. How it happened: I tried updating some apps via yum on my CentOS 5 VPS and the command was failing due to some weird python 2.4 error. I noticed that my version of python was old and I tried reinstalling it by first removing it, and so I did yum remove python . After that it asked me something about removing dependencies and it looked like nothing I could miss so I clicked Y . So the aftermath of that was that I was unable to run any command what so ever. I even tried cd /var/www but it said something like " command does not exist in /usr/bin ". When I used tab to see folder navigation suggestions, the file structure still seemed to be there (at least the /var/www bit which is really important to me).
After that I tried restarting the VPS (from the admin panel, since the reboot command did not work) and now it doesn't boot anymore. Now my question is: how can a command like that possibly destroy my server like this? | Frankly, because you did something you didn't fully understand. Python is an essential part of the OS and the things you considered unimportant are very important. Restore from backup. When you removed Python, yum showed you a long list of packages that would also be removed. This list contains such essentials as yum itself, coreutils , net-tools and others. You confirmed to yum that you know what you are doing and want to proceed anyway. The result of this is a non-working system. This shouldn't be surprising. For the record, on newer CentOS versions this isn't possible anymore, as certain packages are now marked as protected and can't be removed, only reinstalled or upgraded. And since CentOS 5 is now EOL anyway, this is a good time to upgrade to a newer version. | {
"source": [
"https://serverfault.com/questions/844677",
"https://serverfault.com",
"https://serverfault.com/users/410793/"
]
} |
844,693 | My project directory structure is as follows: /var/www/mysite/
backend/
frontend/ where frontend/ contains simple html and js files, and backend/ is a wordpress site. I expose wordpress data to a REST api endpoint for the frontend. I want to have mysite.com show the html/js files and all REST api calls are made to mysite.com/api which are the wordpress site files. (so mysite.com/api/wp-admin will also work as normal). I am having trouble configuring nginx to make this possible. This is my current configuration: server {
listen *:80;
server_name mysite.com www.mysite.com;
access_log /var/log/nginx/mysite.access.log;
error_log /var/log/nginx/mysite.error.log;
root /var/www/mysite/frontend;
location / {
index index.html index.htm index.php;
}
location ^~ /api {
root /var/www/mysite/backend;
index index.php;
try_files $uri $uri/ /../backend/index.php?$args;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
#fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}
sendfile off;
} This just downloads the index.php file from wordpress when I try to access the URL mysite.com/api. Any help is appreciated, thanks. | Frankly, because you did something you didn't fully understand. Python is an essential part of the OS and the things you considered unimportant are very important. Restore from backup. When you removed Python, yum showed you a long list of packages that would also be removed. This list contains such essentials as yum itself, coreutils , net-tools and others. You confirmed to yum that you know what you are doing and want to proceed anyway. The result of this is a non-working system. This shouldn't be surprising. For the record, on newer CentOS versions this isn't possible anymore, as certain packages are now marked as protected and can't be removed, only reinstalled or upgraded. And since CentOS 5 is now EOL anyway, this is a good time to upgrade to a newer version. | {
"source": [
"https://serverfault.com/questions/844693",
"https://serverfault.com",
"https://serverfault.com/users/410808/"
]
} |
845,471 | I have a systemd service that displays the following error service start request repeated too quickly, refusing to start I understand that the service is configured to restart on failure and it is restarting again and again. But when exactly does it refuse to restart ?
Is there a limit or number that defines it ? Moreover, what does too quickly exactly mean, is it a limit of number of restarts in a given period of time ? | The default limit is to allow 5 restarts in a 10sec period. If a service goes over that threshold due to the Restart= config option in the service definition, it will not attempt to restart any further. The rates are configured with the StartLimitIntervalSec= and StartLimitBurst= options and the Restart= option controls when SystemD tries to restart a service. More info in man systemd.unit and man systemd.service . Then use systemctl daemon-reload to reload unit configuration. | {
"source": [
"https://serverfault.com/questions/845471",
"https://serverfault.com",
"https://serverfault.com/users/411493/"
]
} |
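As a sketch, the relevant directives could be combined in a unit like this (values are illustrative; on older systemd releases the settings are spelled StartLimitInterval/StartLimitBurst and live in the [Service] section): [Unit]
StartLimitIntervalSec=60
StartLimitBurst=10
[Service]
Restart=on-failure
RestartSec=5
Followed by systemctl daemon-reload so the changed limits take effect.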
845,682 | I am working on a server within my office. The server will eventually be relocated to a data center. I would like to be able to leave the server switched on in my office, which means I would like to be able to protect it from power outages or surges. In the office I only have desktop UPSs. I would like to avoid forking out for an expensive server class UPS. I don't mind if it only has protection for a short time (Even a few minutes would be longer than any likely power drop where I live) The UPS is APC Back-UPS ES 400 (400 VA, 240 Watts) The server is DL 360p Gen 8 (750 watt PSUs) | I would not plug a server with a power supply capable of drawing 750 watts into a UPS which is only rated at 240. The issue isn't really that it's a "server" or "desktop" UPS. You're likely to trigger overload protection and drop your server even if the power input is fine. | {
"source": [
"https://serverfault.com/questions/845682",
"https://serverfault.com",
"https://serverfault.com/users/130557/"
]
} |
845,766 | As of Chrome 58 it no longer accepts self-signed certs that rely on Common Name : https://productforums.google.com/forum/#!topic/chrome/zVo3M8CgKzQ;context-place=topicsearchin/chrome/category $3ACanary%7Csort:relevance%7Cspell:false Instead it requires using Subject Alt Name . I have been previously following this guide on how to generate a self-signed cert: https://devcenter.heroku.com/articles/ssl-certificate-self which worked great because I required the server.crt and server.key files for what I'm doing. I now need to generate new certs that include the SAN however all of my attempts to do so have not worked with Chrome 58. Here is what I've done: I followed the steps on the above mentioned Heroku article to generate the key. I then wrote a new OpenSSL config file: [ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = san
extensions = san
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = Massachusetts
localityName = Boston
organizationName = MyCompany
[ san ]
subjectAltName = DNS:dev.mycompany.com Then generated the server.crt with the following command: openssl req \
-new \
-key server.key \
-out server.csr \
-config config.cnf \
-sha256 \
-days 3650 I'm on a Mac, so I opened the server.crt file with Keychain, added it to my System Certificates. I then set it to Always Trust . With the exception of the config file to set the SAN value these were similar steps I used in prior versions of Chrome to generate and trust the self-signed cert. However, after this I still get the ERR_CERT_COMMON_NAME_INVALID in Chrome 58. | My solution: openssl req \
-newkey rsa:2048 \
-x509 \
-nodes \
-keyout server.key \
-new \
-out server.crt \
-subj /CN=dev.mycompany.com \
-reqexts SAN \
-extensions SAN \
-config <(cat /System/Library/OpenSSL/openssl.cnf \
<(printf '[SAN]\nsubjectAltName=DNS:dev.mycompany.com')) \
-sha256 \
-days 3650 Status: Works for me | {
"source": [
"https://serverfault.com/questions/845766",
"https://serverfault.com",
"https://serverfault.com/users/382271/"
]
} |
846,118 | We have a very small (5 workstation) network with one Windows Server acting as domain controller, DHCP and DNS server.
All devices are connected to a standard switch which in turn is connected to a standard broadband modem. The TCP network settings for each workstation are: 192.168.0.50 is the IP of the DNS server. 192.168.0.1 is the IP of the modem gateway 8.8.8.8 is Google's public DNS server Is this a good plan? Is there any point including the modem's IP in that list?
I've noticed that the Windows DNS server is receiving and caching requests for public websites.
Should the Google DNS server be higher up the list? | Workstations should have your internal DNS server(s) as the only DNS server(s) in TCP/IP configuration PCs pick DNS server from the list and stick to it for some time.
So if by some chance your workstations would pick your modem or Google DNS server, your internal AD domain name resolution would stop working. You can optionally have Google or modem's DNS servers specified as forwarders on your DC's DNS Server. But DNS server on DC could also do all external resolution without any forwarders. Using your ISP's DNS servers as forwarders on internal DNS server might make more sense though. But you don't need to use any forwarders at all | {
"source": [
"https://serverfault.com/questions/846118",
"https://serverfault.com",
"https://serverfault.com/users/327102/"
]
} |
846,441 | I know how to enable/disable lingering with loginctl . But so far I have found no way to query the status of a user. I want to know: is lingering enabled for user foo ? How can I access this information? | You can show a list of lingering users with ls /var/lib/systemd/linger because loginctl enable-linger $USER
loginctl disable-linger $USER do the equivalent of touch /var/lib/systemd/linger/$USER
rm /var/lib/systemd/linger/$USER | {
"source": [
"https://serverfault.com/questions/846441",
"https://serverfault.com",
"https://serverfault.com/users/90324/"
]
} |
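loginctl can also report the flag for a single account, as long as logind currently knows about that user (i.e. the user is logged in or already lingering; otherwise the file check above is the reliable test): $ loginctl show-user foo --property=Linger
Linger=yes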
846,489 | Can X-FORWARDED-FOR contain multiple IP addresses? If so, why? An illustrative example would be great. | Yes, if a request is chained through more than one proxy server, then each proxy should add the IP of the preceding one to the existing X-Forwarded-For header so that the entire chain is preserved. | {
"source": [
"https://serverfault.com/questions/846489",
"https://serverfault.com",
"https://serverfault.com/users/321109/"
]
} |
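For example, a request that went client, then proxy A, then proxy B, then the server could arrive carrying (addresses from the documentation ranges): X-Forwarded-For: 203.0.113.50, 198.51.100.17
Here 203.0.113.50 is the original client and 198.51.100.17 is proxy A, which proxy B appended; the server sees proxy B's address as the actual TCP peer.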
847,435 | I am trying to set up opendkim on Debian stretch but I fail at changing the socket. I want to change the socket to /var/spool/postfix/opendkim/opendkim.sock so I can use it with postfix. I have added Socket local:/var/spool/postfix/opendkim/opendkim.sock to /etc/opendkim.conf and also tried adding SOCKET="local:/var/spool/postfix/opendkim/opendkim.sock to /etc/default/opendkim (which I had to create). No matter what I change or how often I restart opendkim, it always uses /var/run/opendkim/opendkim.sock as its socket. ➜ ~ netstat -a | fgrep LISTEN | grep open
unix 2 [ ACC ] STREAM LISTENING 5534128 /var/run/opendkim/opendkim.sock
➜ ~ sudo systemctl status opendkim.service
● opendkim.service - OpenDKIM DomainKeys Identified Mail (DKIM) Milter
Loaded: loaded (/lib/systemd/system/opendkim.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2017-04-30 12:41:54 CEST; 5min ago
Docs: man:opendkim(8)
man:opendkim.conf(5)
man:opendkim-genkey(8)
man:opendkim-genzone(8)
man:opendkim-testadsp(8)
man:opendkim-testkey
http://www.opendkim.org/docs.html
Process: 25246 ExecStart=/usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock (code=exited, status=0/SUCCESS)
Main PID: 25248 (opendkim)
Tasks: 7 (limit: 4915)
CGroup: /system.slice/opendkim.service
├─25248 /usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock
└─25249 /usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock
Apr 30 12:41:54 vServer systemd[1]: Starting OpenDKIM DomainKeys Identified Mail (DKIM) Milter...
Apr 30 12:41:54 vServer systemd[1]: Started OpenDKIM DomainKeys Identified Mail (DKIM) Milter.
Apr 30 12:41:54 vServer opendkim[25249]: OpenDKIM Filter v2.11.0 starting (args: -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock) What am I doing wrong? (I guess it's my mistake as I can't find anyone else with the same issue) UPDATE: Changing /etc/default/opendkim to SOCKET="inet:8891@localhost" and changing the postfix config to use this socket results in inet:localhost:8891: Connection refused UPDATE2: I have now replaced with the file bundled in the debian stretch package: # Command-line options specified here will override the contents of
# /etc/opendkim.conf. See opendkim(8) for a complete list of options.
#DAEMON_OPTS=""
# Change to /var/spool/postfix/var/run/opendkim to use a Unix socket with
# postfix in a chroot:
RUNDIR=/var/spool/postfix/var/run/opendkim
#RUNDIR=/var/run/opendkim
#
# Uncomment to specify an alternate socket
# Note that setting this will override any Socket value in opendkim.conf
# default:
SOCKET=local:$RUNDIR/opendkim.sock
# listen on all interfaces on port 54321:
#SOCKET=inet:54321
# listen on loopback on port 12345:
#SOCKET=inet:12345@localhost
# listen on 192.0.2.1 on port 12345:
#SOCKET=inet:[email protected]
USER=opendkim
GROUP=opendkim
PIDFILE=$RUNDIR/$NAME.pid
EXTRAAFTER= The includes the following lines where the socket is decided: if [ -f /etc/opendkim.conf ]; then
CONFIG_SOCKET=`awk '$1 == "Socket" { print $2 }' /etc/opendkim.conf`
fi
# This can be set via Socket option in config file, so it's not required
if [ -n "$SOCKET" -a -z "$CONFIG_SOCKET" ]; then
DAEMON_OPTS="-p $SOCKET $DAEMON_OPTS"
fi | I finally found the solution. The /etc/init.d/opendkim doesn't seem to do anything. But instead the servicefile /lib/systemd/system/opendkim.service is used which had the wrong socket hardcoded. But the debian package also seems to include a bash that generates the correct systemd service. So after running /lib/opendkim/opendkim.service.generate
systemctl daemon-reload
service opendkim restart and restarting opendkim the socket file appears in the expected place, which can be verified by calling: tail /var/log/mail.log | grep OpenDKIM Update: It seems there is an debian bugreport about this issue: #861169 Update 2021: As this question is still read quite often, I want to make everyone aware of the recent NEWS entry : [...] We remind users that opendkim is best configured by editing
/etc/opendkim.conf. The legacy defaults file at /etc/default/opendkim is
still available, as is the script /lib/opendkim/opendkim.service.generate.
However, these provide no additional value over the default configuration
file /etc/opendkim.conf. Please take this opportunity to review your
configuration setup. Also beginning with Debian Bullseye the /etc/default/opendkim starts with: # NOTE: This is a legacy configuration file. It is not used by the opendkim
# systemd service. Please use the corresponding configuration parameters in
# /etc/opendkim.conf instead.
#
# Previously, one would edit the default settings here, and then execute
# /lib/opendkim/opendkim.service.generate to generate systemd override files at
# /etc/systemd/system/opendkim.service.d/override.conf and
# /etc/tmpfiles.d/opendkim.conf. While this is still possible, it is now
# recommended to adjust the settings directly in /etc/opendkim.conf. | {
"source": [
"https://serverfault.com/questions/847435",
"https://serverfault.com",
"https://serverfault.com/users/413101/"
]
} |
847,437 | I have a fairly common setup - vSphere and ESXi hosts using FreeNAS as the VM store. The servers can see each other (obviously) so I want to segregate system admin traffic and user traffic onto different VLANs, and restrict the management IPs on both boxes. Configuring management access on ESXi is easy, but I can't figure how to do it on FreeNAS. At the moment the relevant FreeNAS config is that it has one active NIC (10G Chelsio) with IP of say 192.168.1.2, and no VLANs have been set up on the network yet. What I'd like is to do one or more of the following: Create two VLANs, say 1 and 2, with any VLAN able to access sharing services on the sharing ports, but only VLAN 2 able to reach the admin IP/port Create two IPs on the one NIC, say 192.168.1.2 and 192.168.1.3, with only 192.168.1.3 able to reach the management login. Blocking the management access ports (80,443 etc) for VLAN != 2 and/or IP != 192.168.1.3. As FreeNAS isn't a router or firewall it doesn't have much built in to do this, so I'm not sure how to go about doing these things. It can't be uncommon to have it directly connected to the general LAN, so I'm hoping there's a straightforward helpful answer to the above 3 approaches, so I can choose which works best for me and figure out how to combine them if useful. | I finally found the solution. The /etc/init.d/opendkim doesn't seem to do anything. But instead the servicefile /lib/systemd/system/opendkim.service is used which had the wrong socket hardcoded. But the debian package also seems to include a bash that generates the correct systemd service. So after running /lib/opendkim/opendkim.service.generate
systemctl daemon-reload
service opendkim restart and restarting opendkim the socket file appears in the expected place, which can be verified by calling: tail /var/log/mail.log | grep OpenDKIM Update: It seems there is an debian bugreport about this issue: #861169 Update 2021: As this question is still read quite often, I want to make everyone aware of the recent NEWS entry : [...] We remind users that opendkim is best configured by editing
/etc/opendkim.conf. The legacy defaults file at /etc/default/opendkim is
still available, as is the script /lib/opendkim/opendkim.service.generate.
However, these provide no additional value over the default configuration
file /etc/opendkim.conf. Please take this opportunity to review your
configuration setup. Also beginning with Debian Bullseye the /etc/default/opendkim starts with: # NOTE: This is a legacy configuration file. It is not used by the opendkim
# systemd service. Please use the corresponding configuration parameters in
# /etc/opendkim.conf instead.
#
# Previously, one would edit the default settings here, and then execute
# /lib/opendkim/opendkim.service.generate to generate systemd override files at
# /etc/systemd/system/opendkim.service.d/override.conf and
# /etc/tmpfiles.d/opendkim.conf. While this is still possible, it is now
# recommended to adjust the settings directly in /etc/opendkim.conf. | {
"source": [
"https://serverfault.com/questions/847437",
"https://serverfault.com",
"https://serverfault.com/users/278317/"
]
} |
848,168 | How can I store my key pair (typically the id_rsa and id_rsa.pub) in azure key vault.
I want to put the public key in my GIT service and allow a virtual machine to download the private key from Azure key vault -> So that it can access GIT securely. I tried making a pair of PEM files and combining them into a pfx and uploading that as a secret but the file I get back appears to be completely different to either pem file. I also tried manually inputting my secret key into Azure but it turns the newlines into spaces. | You could use Azure CLI to upload id_rsa to Azure Key Vault. azure keyvault secret set --name shui --vault-name shui --file ~/.ssh/id_rsa You could use -h to get help. --file <file-name> the file that contains the secret value to be uploaded; cannot be used along with the --value or --json-value flag You could also download the secret from the key vault. az keyvault secret download --name shui --vault-name shui --file ~/.ssh/id_rsa I compared the keys in my lab. They are the same. | {
"source": [
"https://serverfault.com/questions/848168",
"https://serverfault.com",
"https://serverfault.com/users/158865/"
]
} |
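As a follow-up to the Azure Key Vault answer above, a hedged sketch of pulling the stored private key onto a VM and using it for Git over SSH. The vault name, secret name and repository URL are placeholders, and the VM needs a logged-in Azure CLI (az login or a managed identity). az keyvault secret download --vault-name myvault --name id-rsa --file ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa   # ssh refuses keys that are group/world readable
GIT_SSH_COMMAND='ssh -i ~/.ssh/id_rsa -o IdentitiesOnly=yes' git clone git@github.com:example/repo.git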
848,177 | Does anyone know why I can't disable TLS 1.0 and TLS 1.1 by updating the config to this. SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 After doing this, I reload Apache. I do an SSL scan using the SSL Labs or Comodo SSL tool, and it still says TLS 1.1 and 1.0 are supported. I would like to remove these? | When you have multiple TLS VirtualHosts and use Server Name Indication (SNI) it is an allowed syntax to have an SSLProtocol directive for each VirtualHost, but unless you have IP VirtualHosts in practice the settings from the first occurrence of the SSLProtocol directive are used for the whole server and/or all name-based VirtualHosts supporting TLS 1. So check your main httpd.conf (and all included snippets from for instance conf.d/*.conf and similar includes) for more occurrences of the SSLProtocol directive. Your syntax is correct, although I agree with ezra-s' answer that, when you expand the all shorthand, you can slightly improve upon: SSLProtocol +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2 -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 by simply using: SSLProtocol TLSv1.2 | {
"source": [
"https://serverfault.com/questions/848177",
"https://serverfault.com",
"https://serverfault.com/users/337039/"
]
} |
848,503 | I have a deployment system on my web server, every time an app is deployed it creates a new timestamped directory and symlinks "current" to the new directory. This all workded good and great on apache, but on the new nginx server I've set up, it looks like a script from the "old" deployment is being run instead of the new symlinked one. I have read some tutorials and posts on how to resolve this but there is not much info and nothing seems to work. Here is my vhost file: server {
listen 80;
server_name ~^(www\.)?(?<sname>.+?).testing.domain.com$;
root /var/www/$sname/current/public;
index index.html index.htm index.php;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|exe|html|htm|txt|css|js) {
add_header Cache-Control public;
add_header Cache-Control must-revalidate;
expires 7d;
}
location ~ \.php$ {
#fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
include fastcgi_params;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
}
location ~ /\.ht {
deny all;
}
} and here is my fastcgi_params: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param HTTPS $https if_not_empty;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name; I would really appreciate if someone could help me out with this as at the moment every deployment involves deleting previous deployment. System is Ubuntu 14.04.5 LTS ; PHP 7.1 ; Nginx nginx/1.4.6 (Ubuntu) | Embedded Variables , $realpath_root : an absolute pathname corresponding to the root or alias directive’s value for the current request, with all symbolic links
resolved to real paths. The solution of using $realpath_root instead of $document_root is copy-pasted all around the Q/A sites and forums; it is actually hard to avoid finding it. Yet, I've only seen it well explained once by Rasmus Lerdorf . It's worth sharing as it describes why it works and when it should be used. So, when you deploy via something like Capistrano which does a symlink
swap on the document root, you want all new requests to get the new
files, but you don't want to screw over requests that are currently
executing as the deploy is happening. What you really need to create a
robust deploy environment is to have your web server be in charge of
this. The web server is the piece of the stack that understands when a
new request is starting. The opcode cache is too deep in the stack to
know or care about that. With nginx this is quite simple. Just add this to your config: fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root; This tells nginx to realpath resolve the docroot symlink meaning that
as far as your PHP application knows, the target of the symlink is the
real document_root. Now, once a request starts, nginx will resolve the
symlink as it stands at that point and for the duration of the request
it will use the same docroot directory, even if the symlink switch
happens mid-request. This entirely eliminates the symptoms described
here and it is the correct approach. This isn't something that can be
solved at the opcache level. Kanishk Dudeja had problems with this and added a useful notice: make sure these changes will actually be in final configuration, i.e. after include fastcgi_params; which otherwise overrides them. | {
"source": [
"https://serverfault.com/questions/848503",
"https://serverfault.com",
"https://serverfault.com/users/335698/"
]
} |
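A small companion sketch to the $realpath_root answer above: swapping the current symlink atomically at deploy time, so requests that are mid-flight keep the docroot they resolved. The release path and layout are hypothetical; mv -T is GNU coreutils. new_release=/var/www/app/releases/2017-06-01-120000   # hypothetical new deployment directory
ln -sfn "$new_release" /var/www/app/current.tmp       # build the new link next to the old one
mv -Tf /var/www/app/current.tmp /var/www/app/current  # rename() is atomic on the same filesystem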
848,506 | I'm trying to avoid the problem of making an assumption and asking the wrong question. I'm going to describe the situation I'm in and what the problem is. I presently have a development server which runs development and testing databases. The developers run apache locally but connect to the test server databases so that everyone is on the same page. As time has gone by, the responsibilities of this server have grown. It is often used as a staging area for clients to sign off on changes. Meanwhile, the number of clients and databases has grown. Today, the development server ground to a halt. This was a real disruption to today's workflow for the development team. A mixture of active development and CRON tasks that were set up during testing eventually slowed the server to the point that it was unusable. I believe that the server's disk access was the bottleneck. I've asked for hardware upgrades previously, sadly they have not been forthcoming from management. A hardware upgrade would only do so much, also. I'm wondering if there's a better way to achieve what we're wanting, and want to set up the new server the right way when it arrives. Ideally I'd like a system that it's easy to "stop" sites, including their cron ad making their staging URLs inaccessible, when they're not in active development. Equally important is that I'd need a simple way to "start" them again, preferably via a UI so our non-technical staff can do it. I like the idea of a docker/vm-based solution but I'm not 100% sure how I'd maintain this. The biggest hurdle to using something like this is probably that I'm not sure how I'd be able to set these up for use as a staging environment. | Embedded Variables , $realpath_root : an absolute pathname corresponding to the root or alias directive’s value for the current request, with all symbolic links
resolved to real paths. The solution of using $realpath_root instead of $document_root is copy-pasted all around the Q/A sites and forums; it is actually hard to avoid finding it. Yet, I've only seen it well explained once by Rasmus Lerdorf . It's worth sharing as it describes why it works and when it should be used. So, when you deploy via something like Capistrano which does a symlink
swap on the document root, you want all new requests to get the new
files, but you don't want to screw over requests that are currently
executing as the deploy is happening. What you really need to create a
robust deploy environment is to have your web server be in charge of
this. The web server is the piece of the stack that understands when a
new request is starting. The opcode cache is too deep in the stack to
know or care about that. With nginx this is quite simple. Just add this to your config: fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root; This tells nginx to realpath resolve the docroot symlink meaning that
as far as your PHP application knows, the target of the symlink is the
real document_root. Now, once a request starts, nginx will resolve the
symlink as it stands at that point and for the duration of the request
it will use the same docroot directory, even if the symlink switch
happens mid-request. This entirely eliminates the symptoms described
here and it is the correct approach. This isn't something that can be
solved at the opcache level. Kanishk Dudeja had problems with this and added a useful notice: make sure these changes will actually be in final configuration, i.e. after include fastcgi_params; which otherwise overrides them. | {
"source": [
"https://serverfault.com/questions/848506",
"https://serverfault.com",
"https://serverfault.com/users/210431/"
]
} |
848,580 | What is the simplest way to use the gcloud command line non-interactively with a Service Account outside of GCE? Preferably without littering the file system with credentials files, which is what gcloud auth activate-service-account --key-file=... does. There are many use cases for using gcloud with a service account. For example, on a server, I would like to test that the GOOGLE_APPLICATION_CREDENTIALS is correctly set and has the required permissions before running my application. Or, I would like to run some setup scripts or cron scripts that perform some check with the gcloud command line. Google Cloud libraries (e.g. python , java ) automatically use the environment variable GOOGLE_APPLICATION_CREDENTIALS to authenticate to Google Cloud. But unfortunately, this command line seems to have no effect on gcloud . What is a clean way to use gcloud while leaving the filesystem intact? $ GOOGLE_APPLICATION_CREDENTIALS=/etc/my-service-account-4b4b6e63aaed.json gcloud alpha pubsub topics publish testtopic hello
ERROR: (gcloud.alpha.pubsub.topics.publish) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials, or if you have already logged in with a
different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use. | gcloud generally does not use the GOOGLE_APPLICATION_CREDENTIALS environment variable. It only has some commands to facilitate setting up these application default credentials in gcloud auth application-default [login|revoke|print-access-token...] . By default gcloud stores its configuration in ${HOME}/.config/gcloud. It is possible to override that location by setting the CLOUDSDK_CONFIG environment variable. Also it is possible (though more tedious) to override most settings so that they do not need to be preconfigured via gcloud config set ... and/or gcloud auth activate-service-account . For each setting one can specify an environment variable. For example, the equivalent of the command you tried, using a service account key file, would be: $ CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=/etc/my-service-account-4b4b6e63aaed.json \
gcloud alpha pubsub topics publish testtopic hello Note that this will still cache credentials in CLOUDSDK_CONFIG since it needs to cache the access token, so that it won't have to refresh it on each invocation. For your use case the best option in my view would be: 1) set CLOUDSDK_CONFIG to some temp directory, 2) gcloud auth activate-service-account --key-file=..., 3) use gcloud to do your work, 4) remove the temp CLOUDSDK_CONFIG directory. | {
"source": [
"https://serverfault.com/questions/848580",
"https://serverfault.com",
"https://serverfault.com/users/331760/"
]
} |
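A minimal shell sketch of the temporary-CLOUDSDK_CONFIG workflow suggested in the answer above. The key file path and the example publish command are placeholders taken from the question. export CLOUDSDK_CONFIG="$(mktemp -d)"    # throwaway config dir, nothing lands in ~/.config/gcloud
gcloud auth activate-service-account --key-file=/etc/my-service-account-4b4b6e63aaed.json
gcloud alpha pubsub topics publish testtopic hello   # ...whatever gcloud work you need...
rm -rf "$CLOUDSDK_CONFIG" && unset CLOUDSDK_CONFIG   # clean up the cached credentials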
849,697 | I'm trying to move from self-signed certificates to Let's Encrypt certificates on my nginx webserver. Currently, I redirect all requests to http/80 to https/443 , which uses a self signed certificate I created a while ago. Now - from what I understand Let's Encrypt makes a request to port 80 (as I am using the webroot option of certbot ). These requests are redirected, which renders the certificate generation unsuccessful. I tried to achieve this with the following server block, listening at port 80: server {
listen 80;
server_name sub.domain.tld;
server_tokens off;
location /.well-known {
root /var/www/letsencrypt;
}
location / {
return 301 https://$host$request_uri;
}
} But requests to /.well-known are redirected to https/443 anyways. How can I redirect all requests from http/80 to https/443 , except the requests to /.well-known/ ? | Try this: server {
listen 80;
server_name sub.domain.tld;
server_tokens off;
root /var/www/letsencrypt;
location /.well-known {
try_files $uri $uri/ =404;
}
location / {
return 301 https://$host$request_uri;
}
} Since there was no try_files entry in your virtual server, it didn't know what to do with requests coming to /.well-known . | {
"source": [
"https://serverfault.com/questions/849697",
"https://serverfault.com",
"https://serverfault.com/users/379569/"
]
} |
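To round out the Let's Encrypt answer above, a hedged example of requesting the certificate against that webroot with certbot's webroot plugin; the domain is the placeholder from the question and the exact certbot package name varies by distribution. certbot certonly --webroot -w /var/www/letsencrypt -d sub.domain.tld
nginx -t && systemctl reload nginx   # after issuance, wire the new certificate into the 443 server block and reload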
850,409 | In our small business, we are using about 75 PCs. Servers and desktops/laptops are all up-to-date and are secured using Panda Business Endpoint Protection and Malwarebytes Business Endpoint Security (MBAM + Ant-Exploit). However, in our production-environment we have about 15 Windows XP PCs running. They are connected to the company network. Mainly for SQL-connectivity and logging purposes. They have limited write-access to the servers. The Windows XP PCs are only used for one dedicated (custom) production-application. No office software (email, browsing, office,...).
Furthermore each of these XP-PCs has Panda web access control which does not allow Internet access. The only exceptions are for Windows and Panda Updates. Is it necessary, from security point-of-view, to replace these Windows XP PCs with new PCs? | is it necessary from security point-of-view, to replace these XP-PC's with new PC's. No, it's not necessary to replace the PCs. But it is necessary to upgrade those operating systems (this may also involve replacing those PCs - we don't know. But if they are running specialized hardware, then it may be possible to keep the PC). There are so many real-world stories about supposedly "air-gapped" PCs being infected. This can happen regardless of your operating system, but having a super-old non-updated operating system makes it even more at risk. Especially as it sounds like your computers are protected by a software restriction to block internet access. This is likely easy to bypass. (caveat: I've never heard of this Panda web access control, but it certainly looks like on-host software). The problem you are likely to face is a lack of vendor cooperation. It is possible that vendors refuse to help, want to charge $100,000 for an upgrade, or have plain outright gone bankrupt and the IP thrown away. If this is the case, this is something that the company needs to budget for. If there really is no option but to keep at 16-year-old operating system running unpatched (maybe this is a million dollar CNC lathe or milling machine or MRI), then you need to do some serious hardware-based host isolation. Putting those machines on their own vlan with extremely restrictive firewall rules would be a good start. It would appear that you need some hand-holding in this regard, so how's this: Windows XP is a 16 year old operating system. Sixteen years old . Let that sink in. I would think twice before buying a sixteen year old car, and they still make spare parts for 16 year old cars. There are no 'spare parts' for Windows XP. By the sounds of it, you have poor host isolation. Let's say that something gets inside your network already. By some other means. Someone plugs in an infected USB stick. It's going to scan your interior network and propagate to anything that has a vulnerability it can exploit. A lack of internet access is irrelevant here because the phone call is coming from inside the house This Panda security product looks like it's software-based restrictions. Software can be bypassed, sometimes easily. I bet a decent piece of malware could still get out to the internet if the only thing stopping it is a piece of software running on top of the networking stack. It could just get admin privileges and stop the software or service. So they don't really have no internet access at all. This comes back to host isolation - with proper host isolation you could actually get them off the internet and maybe limit the damage they can do to your network. Honestly though, you shouldn't need to justify replacing these computers and/or operating system. They will be fully depreciated for accounting purposes, they're likely well past the end of any warranty or support from the hardware vendor, they are definitely past any kind of support from Microsoft (even if you wave your titanium American Express in Microsoft's face, they still won't take your money). Any company that is interested in reducing risk and liability would have replaced those machines years ago. There is little to no excuse for keeping workstations around. 
I listed some valid excuses above (if it's totally disconnected completely from any and all networks and lives in a closet and runs the elevator music I might - MIGHT - give it a pass). It sounds like you do not have any valid excuse for leaving them around. Especially now that you are aware that they are there, and you have seen the damage that can occur (I assume you were writing this in response to WannaCry/WannaCrypt). | {
"source": [
"https://serverfault.com/questions/850409",
"https://serverfault.com",
"https://serverfault.com/users/415607/"
]
} |
850,659 | I would like to pick the community's brain regarding linux server security, specifically regarding brute-force attacks and using fail2ban vs custom iptables . There are a few similar questions out there but none of them address the topic to my satisfaction. In short I am trying to determine the best solution to secure linux servers exposed to the internet (running the usual services, ssh, web, mail), from brute-force attacks. I have a decent handle on server security, i.e. locking down ssh by not allowing root or password logins, changing the default port, ensuring software is up to date, checking log files, only allowing certain hosts to access the server and making use of security auditing tools such as Lynis ( https://cisofy.com/lynis/ ), for general security compliance, so this question is not necessarily regarding that although input and advice is always welcome . My question is which solution should I use (fail2ban or iptables), and how should I configure it, or should I use a combination of both to secure against brute-force attacks? There is a interesting response regarding the topic ( Denyhosts vs fail2ban vs iptables- best way to prevent brute force logons? ). The most interesting answer for me personally was ( https://serverfault.com/a/128964 ), and that iptables routing occurs in the kernel as opposed to fail2ban which makes use of user mode tools to parse log files. Fail2ban uses iptables of course, but it still has to parse log files and match a pattern until it performs an action. Does it make sense then to use iptables and use rate-limiting ( https://www.rackaid.com/blog/how-to-block-ssh-brute-force-attacks/ ) to drop requests from an IP for a period of time that makes too many connection attempts during a specific period regardless of what protocol it was attempting to connect to? If so, then there are some interesting thoughts about using drop vs reject for those packets here ( http://www.chiark.greenend.org.uk/~peterb/network/drop-vs-reject ), any thoughts on that? Fail2ban allows for custom configuration in the form of being able to write custom ' rules ' for services that might not be addressed in the default configuration. It is easy to install and setup and is powerful, but could it be an overkill if all I am trying to achieve is to ' block ' an IP from the server if they make 2 failed access attempts on any service/protocol over a x amount of time? The goal here is to open daily logwatch reports and not have to scroll through pages of attempted failed connections to the server. Thanks for taking the time. | should I use fail2ban or iptables? You use fail2ban in addition to a firewall solution, to extend on-demand those existing firewall rules with rules to block the specific ip-addresses of systems that perform undesirable actions on otherwise public services. They work in concert with each other. Simplified: a firewall only sees network connections and packets and can make some sense of the patterns therein but it doesn't have the application level insight to distinguish desired and valid requests from malicious, malformed and undesirable requests. For instance your firewall can't tell the difference between a bunch of HTTP API requests and a number incorrect login attempts caused by brute force password guessing on your Wordpress admin page, to the firewall they both are only TCP connections to port 80 or 443. Fail2ban is a generic and extensible approach to provide that application level insight to your firewall, albeit somewhat indirectly. 
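As a concrete illustration — a minimal sketch only, with arbitrary example thresholds — a jail in /etc/fail2ban/jail.local that watches sshd's log and bans offending addresses through the firewall might look like this: cat >/etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
EOF
systemctl restart fail2ban && fail2ban-client status sshd   # confirm the jail is active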
Frequently applications will register and log malicious, malformed and undesirable requests as such, but only rarely will they have the native ability to prevent further abuse. Although it is slightly decoupled Fail2ban can then act on those logged malicious events and limit the damage and prevent further abuse, typically by dynamically reconfiguring your firewall to deny further access. In other words Fail2ban gives your existing applications, without modifying them, the means to fend off abuse. A different method to provide firewalls with application level insights would be by means of a intrusion detection/prevention system . For instance a webserver is a common public service and in your firewall TCP ports 80 and 443 are open for the internet at large. Typically you don't have any rate-limiting on the HTTP/HTTPS ports because multiple valid users can have a single origin when they are for instance behind a NAT gateway or a web proxy. When you detect undesirable and/or malicious actions towards your webserver you use fail2ban to automate blocking such an offender (either block them completely or by only locking their access to ports 80 & 443). On the other hand SSH access is not a public service, but if you're not in a position to restrict SSH access in your firewall to only white-listed ip-address ranges, rate-limiting incoming connections is one way to slow down brute-force attacks. But your firewall still can't distinguish between user bob successfully logging in 5 times because he's running ansible playbooks and 5 failed attempts to log in as root by a bot. | {
"source": [
"https://serverfault.com/questions/850659",
"https://serverfault.com",
"https://serverfault.com/users/112533/"
]
} |
850,664 | I am using Ubuntu 16LTS and performed a regular upgrade with apt. The upgrade failed with openssh-server and apt complained it could not upgrade because it cannot make a backup of /usr/sbin/sshd I removed openssh-server and tried to reinstall it. No luck and apt comes back with the same message. I remove openssh-server again. When I try to manually change or remove /usr/sbin/sshd I get the message that "Operation is not permitted". I tried to remove the attribute chattr -a -i /usr/sbin/sshd but it keeps responding that the operation not permitted. An other annoying issue is that lsattr does not give back any info. How can I force a delete or move of the /usr/sbin/sshd file? | should I use fail2ban or iptables? You use fail2ban in addition to a firewall solution, to extend on-demand those existing firewall rules with rules to block the specific ip-addresses of systems that perform undesirable actions on otherwise public services. They work in concert with each other. Simplified: a firewall only sees network connections and packets and can make some sense of the patterns therein but it doesn't have the application level insight to distinguish desired and valid requests from malicious, malformed and undesirable requests. For instance your firewall can't tell the difference between a bunch of HTTP API requests and a number incorrect login attempts caused by brute force password guessing on your Wordpress admin page, to the firewall they both are only TCP connections to port 80 or 443. Fail2ban is a generic and extensible approach to provide that application level insight to your firewall, albeit somewhat indirectly. Frequently applications will register and log malicious, malformed and undesirable requests as such, but only rarely will they have the native ability to prevent further abuse. Although it is slightly decoupled Fail2ban can then act on those logged malicious events and limit the damage and prevent further abuse, typically by dynamically reconfiguring your firewall to deny further access. In other words Fail2ban gives your existing applications, without modifying them, the means to fend off abuse. A different method to provide firewalls with application level insights would be by means of a intrusion detection/prevention system . For instance a webserver is a common public service and in your firewall TCP ports 80 and 443 are open for the internet at large. Typically you don't have any rate-limiting on the HTTP/HTTPS ports because multiple valid users can have a single origin when they are for instance behind a NAT gateway or a web proxy. When you detect undesirable and/or malicious actions towards your webserver you use fail2ban to automate blocking such an offender (either block them completely or by only locking their access to ports 80 & 443). On the other hand SSH access is not a public service, but if you're not in a position to restrict SSH access in your firewall to only white-listed ip-address ranges, rate-limiting incoming connections is one way to slow down brute-force attacks. But your firewall still can't distinguish between user bob successfully logging in 5 times because he's running ansible playbooks and 5 failed attempts to log in as root by a bot. | {
"source": [
"https://serverfault.com/questions/850664",
"https://serverfault.com",
"https://serverfault.com/users/415803/"
]
} |
850,703 | What is the proper way to migrate MS Exchange mailboxes to Dovecot? MS Exchange is 2010 and Dovecot with Postfix are on CentOS 7. I'd like to preserve permissions, states (seen/unseen), folders. I've googled a bit and I've found imapcopy and imapsync, is there a better way of doing that? | should I use fail2ban or iptables? You use fail2ban in addition to a firewall solution, to extend on-demand those existing firewall rules with rules to block the specific ip-addresses of systems that perform undesirable actions on otherwise public services. They work in concert with each other. Simplified: a firewall only sees network connections and packets and can make some sense of the patterns therein but it doesn't have the application level insight to distinguish desired and valid requests from malicious, malformed and undesirable requests. For instance your firewall can't tell the difference between a bunch of HTTP API requests and a number incorrect login attempts caused by brute force password guessing on your Wordpress admin page, to the firewall they both are only TCP connections to port 80 or 443. Fail2ban is a generic and extensible approach to provide that application level insight to your firewall, albeit somewhat indirectly. Frequently applications will register and log malicious, malformed and undesirable requests as such, but only rarely will they have the native ability to prevent further abuse. Although it is slightly decoupled Fail2ban can then act on those logged malicious events and limit the damage and prevent further abuse, typically by dynamically reconfiguring your firewall to deny further access. In other words Fail2ban gives your existing applications, without modifying them, the means to fend off abuse. A different method to provide firewalls with application level insights would be by means of a intrusion detection/prevention system . For instance a webserver is a common public service and in your firewall TCP ports 80 and 443 are open for the internet at large. Typically you don't have any rate-limiting on the HTTP/HTTPS ports because multiple valid users can have a single origin when they are for instance behind a NAT gateway or a web proxy. When you detect undesirable and/or malicious actions towards your webserver you use fail2ban to automate blocking such an offender (either block them completely or by only locking their access to ports 80 & 443). On the other hand SSH access is not a public service, but if you're not in a position to restrict SSH access in your firewall to only white-listed ip-address ranges, rate-limiting incoming connections is one way to slow down brute-force attacks. But your firewall still can't distinguish between user bob successfully logging in 5 times because he's running ansible playbooks and 5 failed attempts to log in as root by a bot. | {
"source": [
"https://serverfault.com/questions/850703",
"https://serverfault.com",
"https://serverfault.com/users/129090/"
]
} |
851,322 | I've been asking myself this for a couple of days, and after a bunch of searching I wasn't able to come up with a comprehensible answer, not even a theoretical one that makes sense in my head. I'm playing around with solutions for Mac hosting and I was wondering if I could add Thunderbolt Ethernet cards to the Macs and bond them in VLANs and therefore partly solve bandwidth bottlenecks to the machines in order to increase access speeds to a DB or external storage. For example: plug two Ethernet cards into a Mac Mini, bond them and have a VLAN with 2 Gb/s bandwidth. | Simply put, no, they are different: with a 10 GbE interface, you get a bandwidth of 10 Gb/s even for a single connection; with 10x 1GbE interfaces (and using the 802.3ad protocol), a single connection/session is limited to 1 Gb/s only. On the other hand, you can serve 10 concurrent sessions, each with a bandwidth of 1 Gb/s. In other words, bonding generally does not increase the speed of a single connection. The only exception is Linux bonding type 0 (balance-rr), which sends packets in a round-robin fashion, but it has significant drawbacks and limited scaling. For a practical example, have a look here | {
"source": [
"https://serverfault.com/questions/851322",
"https://serverfault.com",
"https://serverfault.com/users/306037/"
]
} |
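A quick way to observe the single-flow limit described in the bonding answer above is iperf3 between two hosts on the bonded link; the hostname is a placeholder and results depend on the bond's transmit hash policy. iperf3 -c storage-host        # one TCP stream: expect roughly one member link's worth of bandwidth on an 802.3ad bond
iperf3 -c storage-host -P 4   # several parallel streams: the aggregate can exceed a single link, hashing permitting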
851,334 | We're seeing apps like nginx and php-fpm error out occasionally (and temporarily) while opening good files from a connected NFS mount: php-fpm error example: 2017/05/20 22:53:09 [error] 55#0: *6575 FastCGI sent in stderr: "PHP message: PHP Warning: getimagesize(/www/newspaperfoundation.org/html/wp-content/blogs.dir/22/files/2017/05/19-highest-honors-1.jpg): failed to open stream: Input/output error in /www/newspaperfoundation.org/html/wp-content/plugins/mashsharer/includes/header-meta-tags.php on line 271" while reading response header from upstream, client:
192.168.255.34, server: www.dailyrepublic.com, request: "GET /solano-news/fairfield/highest-honors-commends-students-with-4-0-and-higher-grade-point-average/ HTTP/1.1", upstream: "fastcgi://172.17.0.3:9001", host: "www.dailyrepublic.com" nginx error example: 2017/05/20 23:22:32 [crit] 56#0: *712 open() "/www/newspaperfoundation.org/html/wp-content/blogs.dir/24/files/2017/05/Tandem1W-550x550.jpg" failed (5: Input/output error), client: 192.168.255.34, server: www.davisenterprise.com, request: "GET /files/2017/05/Tandem1W-550x550.jpg HTTP/1.1", host: "www.davisenterprise.com", referrer: "http://www.davisenterprise.com/" During a temporary error, I can ls and see the file exists with correct permissions. The image eventually becomes OK after a long while. Other files return OK without input/output errors. There's not much logging I can find to document the issue. But enabling rpcdebug I see a lot of messages like these around the time of errors: May 20 16:10:07 tomentella kernel: NFSD: nfsd4_open filename 19tommeyerW.jpg op_openowner (null)
May 20 16:10:07 tomentella kernel: nfsv4 compound op ffff8806239e5080 opcnt 5 #2: 18: status 10011
May 20 16:10:07 tomentella kernel: nfsv4 compound returned 10011
May 20 16:10:07 tomentella kernel: nfsd_dispatch: vers 4 proc 1
May 20 16:10:07 tomentella kernel: nfsv4 compound op #1/5: 22 (OP_PUTFH)
May 20 16:10:07 tomentella kernel: nfsd: fh_verify(36: 01070001 008c0312 00000000 3c639297 604b0f25 ce691899)
May 20 16:10:07 tomentella kernel: nfsv4 compound op ffff8806239e5080 opcnt 5 #1: 22: status 0
May 20 16:10:07 tomentella kernel: nfsv4 compound op #2/5: 18 (OP_OPEN)
May 20 16:10:07 tomentella kernel: NFSD: nfsd4_open filename 19tommeyerW.jpg op_openowner (null)
May 20 16:10:07 tomentella kernel: nfsv4 compound op ffff8806239e5080 opcnt 5 #2: 18: status 10011
May 20 16:10:07 tomentella kernel: nfsv4 compound returned 10011
May 20 16:10:08 tomentella kernel: nfsd_dispatch: vers 4 proc 1
May 20 16:10:08 tomentella kernel: nfsv4 compound op #1/4: 22 (OP_PUTFH)
May 20 16:10:08 tomentella kernel: nfsd: fh_verify(36: 01070001 008c0312 00000000 3c639297 604b0f25 ce691899)
May 20 16:10:08 tomentella kernel: nfsv4 compound op ffff8806239e5080 opcnt 4 #1: 22: status 0
May 20 16:10:08 tomentella kernel: nfsv4 compound op #2/4: 15 (OP_LOOKUP) In particular, I feel like I only see this message for files that are erroring out: May 20 16:10:07 tomentella kernel: NFSD: nfsd4_open filename 19tommeyerW.jpg op_openowner (null) Any ideas on what might be causing the input/output errors? Client mounts using the following: mount.nfs4 -v -o proto=tcp $NFSMASTERHOST:/srv/data /srv/data Centos 7 with updated packages. The error is "new" with few server changes recently. I think perhaps my recent update to system packages may have been the trigger for this change. Because the problem goes in and out for some images, I'm able to somewhat watch the logs and compare/contrast. Here's an example of it going from OK to bad when grepping on a particular image name: May 20 18:38:37 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null)
May 20 18:38:37 tomentella kernel: NFSD: nfsd4_open_confirm on file Ron-Thomas-web-150x150.jpg
May 20 18:38:37 tomentella kernel: NFSD: nfsd4_close on file Ron-Thomas-web-150x150.jpg
May 20 18:39:08 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null)
May 20 18:39:08 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null)
May 20 18:39:10 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null)
May 20 18:39:10 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null)
May 20 18:39:11 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null)
May 20 18:39:11 tomentella kernel: NFSD: nfsd4_open filename Ron-Thomas-web-150x150.jpg op_openowner (null) Here's nfsstat tomentella ★ ~ $ nfsstat
Server rpc stats:
calls badcalls badclnt badauth xdrcall
94437487 6 6 0 0
Server nfs v4:
null compound
503 0% 94436978 99%
Server nfs v4 operations:
op0-unused op1-unused op2-future access close commit
0 0% 0 0% 0 0% 11213689 3% 2631554 0% 3377 0%
create delegpurge delegreturn getattr getfh link
579 0% 0 0% 0 0% 88581315 31% 32460559 11% 0 0%
lock lockt locku lookup lookup_root nverify
365 0% 0 0% 365 0% 30058556 10% 0 0% 0 0%
open openattr open_conf open_dgrd putfh putpubfh
2771686 0% 0 0% 74326 0% 0 0% 92969992 32% 0 0%
putrootfh read readdir readlink remove rename
2435 0% 1999675 0% 1917567 0% 350 0% 12404 0% 5072 0%
renew restorefh savefh secinfo setattr setcltid
1226801 0% 0 0% 5072 0% 0 0% 18315216 6% 121025 0%
setcltidconf verify write rellockowner bc_ctl bind_conn
121105 0% 0 0% 115189 0% 365 0% 0 0% 0 0%
exchange_id create_ses destroy_ses free_stateid getdirdeleg getdevinfo
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
getdevlist layoutcommit layoutget layoutreturn secinfononam sequence
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
set_ssv test_stateid want_deleg destroy_clid reclaim_comp
0 0% 0 0% 0 0% 0 0% 0 0%
Client rpc stats:
calls retrans authrefrsh
0 0 0 | Simply put, no, they are different: with a 10 GbE interface, you get a bandwidth of 10 Gb/s even for a single connection; with 10x 1GbE interfaces (and using the 802.3ad protocol), a single connection/session is limited to 1 Gb/s only. On the other hand, you can serve 10 concurrent sessions, each with a bandwidth of 1 Gb/s. In other words, bonding generally does not increase the speed of a single connection. The only exception is Linux bonding type 0 (balance-rr), which sends packets in a round-robin fashion, but it has significant drawbacks and limited scaling. For a practical example, have a look here | {
"source": [
"https://serverfault.com/questions/851334",
"https://serverfault.com",
"https://serverfault.com/users/45814/"
]
} |
851,652 | I understand what IOPS and throughput are. Throughput measures data flow as MB/s and IOPS says how many I/O operations are happening per second. What I don't understand is why many storage services just show the IOPS they provide. I really can't see any scenario where I would prefer to know the IOPS instead of the throughput. Why do IOPS matter? Why does AWS mainly show its storage provisions in IOPS? Where are IOPS more relevant than throughput (MB/s)? EDIT: Some people are looking into this question as if I asked what random access is and how it impacts performance or how HDD and SSD work... although I think this information is useful for people new to storage behavior, a lot of focus is being applied to this and it isn't the goal of the question, the question is about "What new piece of information do I get when I see an IOPS number, that I wouldn't get seeing a throughput (MB/s) number?" | This is because sequential throughput is not how most I/O activity occurs. Random read/write operations are more representative of normal system activity, and that's usually bound by IOPS. Streaming porn from one of my servers to our customers (or uploading to our CDN) is more sequential in nature and you'll see the impact of throughput there. But maintaining the database that catalogs the porn and tracks user activity through the site is going to be random in nature, and limited by the number of small I/O operations/second that the underlying storage is capable of. I may need 2,000 IOPS to be able to run the databases at peak usage, but may only see 30MB/s throughput at the disk level because of the type of activity. The disks are capable of 1200MB/s, but the IOPS are the limitation in the environment. This is a way of describing the capacity potential of a storage system. An SSD may have the ability to do 80,000 IOPS and 600MB/s throughput. You can get that throughput with 6 regular 10k SAS disks, but they would only yield around 2,000 IOPS. | {
"source": [
"https://serverfault.com/questions/851652",
"https://serverfault.com",
"https://serverfault.com/users/409749/"
]
} |
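To make the IOPS-versus-throughput distinction above tangible, a hedged fio sketch: small random reads are IOPS-bound while large sequential reads are throughput-bound. The test file, sizes and runtimes are arbitrary. fio --name=randread --filename=/tmp/fio.test --size=1G --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based
fio --name=seqread --filename=/tmp/fio.test --size=1G --direct=1 --ioengine=libaio --rw=read --bs=1M --iodepth=8 --runtime=30 --time_based
# e.g. 2,000 IOPS at 16 KB per I/O is only about 31 MB/s, even on a device rated for far higher sequential throughput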
854,208 | So a while ago I set up a server on AWS, and used their generated SSH key. I saved the key to Lastpass, and have successfully retrieved it from there before, and got it working. However, after trying that again today, I can't get it to work. -rw------- 1 itsgreg users 1674 Jun 6 12:51 key_name I've tried ssh -i key_name , ssh-keygen -f key_name , but nothing works, I always get this error message: Load key "key_name": invalid format Is there any way to fix this? | Check the contents of key_name , if the agent says invalid format , then there's something wrong with the key - like .. are you sure that's the correct key? Even if it's not the private key you need, the ssh agent won't return invalid format if the key is working, you simply won't be able to connect. You might have placed your public key in there, for some reason. Check it! | {
"source": [
"https://serverfault.com/questions/854208",
"https://serverfault.com",
"https://serverfault.com/users/415361/"
]
} |
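For the invalid-format answer above, a couple of quick checks help confirm whether a saved key really is an intact private key; key_name is the file from the question and user@host is a placeholder. head -n1 key_name           # a private key should start with '-----BEGIN ... PRIVATE KEY-----'
ssh-keygen -y -f key_name   # derives the public key and errors out if the private key is malformed
ssh -i key_name -v user@host 2>&1 | grep -i identity   # verify the key is actually being offered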
854,413 | I am trying to set up a ECS but so far I have encountered a few permission issue for which I have created some questions on this forum already. I think I am stuck so far because honestly I cannot find out all these role requirements in one place concisely. It seems like I need to define at least two roles: 1) ECS container http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html 2) ECS task http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html#enable_task_iam_roles Is it correct? Did I miss out anything? Is there any special IAM requirement? | The only necessary role is the Container Instance IAM role . This role allows the ECS agent (running on your EC2 instance) to communicate with Amazon ECS. There are five other roles that you may also find useful, for different purposes: ECS Service-Linked role (SLR) - This role enables Amazon ECS to manage a variety of AWS resources associated with your application on your behalf. When using a Service , this role allows Amazon ECS to manage the load balancer (Classic Load Balancers, Application Load Balancers, and Network Load Balancers) and service discovery (with Route 53 ) associated with your service. When using task networking , this role allows Amazon ECS to attach and detach Elastic Network Interfaces (ENIs) to your tasks. This role is required when using AWS Fargate . Service Scheduler IAM role - Prior to the introduction of the ECS Service-Linked role (SLR), this role was used in conjunction with a Service to enable Amazon ECS to manage the load balancer associated with your service. If you want to use an Elastic Load Balancer (whether a Classic Load Balancer, an Application Load Balancer, or a Network Load Balancer) with your ECS service, you can use this role. Now that the ECS SLR is available you can use either of the two roles, but you may still wish to use this role if you want to restrict the permissions that are granted to Amazon ECS to cover specific load balancer resources. Auto Scaling IAM role - This role is used in conjunction with a Service and allows the Application Auto Scaling service to scale the desired count of your Service in or out. Task IAM role - This role can be used with any Task (including Tasks launched by a Service ). This role is very similar to an EC2 instance profile , but allows you to associate permissions with individual Tasks rather than with the underlying EC2 instance that is hosting those Tasks. If you are running a number of different applications across your ECS cluster with different permissions required, you can use the Task IAM role to grant specific permissions to each Task rather than ensuring that every EC2 instance in your cluster has the combined set of permissions that any application would need. Task execution role - This role is required when using AWS Fargate and replaces the Container Instance IAM role , which is unavailable for the FARGATE launch type. This role enables AWS Fargate to pull your container images from Amazon ECR and to forward your logs to Amazon CloudWatch Logs . This role is also used (on both the Fargate and the EC2 launch types) to enable private registry authentication and secrets from AWS Secrets Manager and AWS Systems Manager Parameter Store . | {
"source": [
"https://serverfault.com/questions/854413",
"https://serverfault.com",
"https://serverfault.com/users/12364/"
]
} |
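A hedged CLI sketch of creating the one required role from the ECS answer above, the container instance role. The role name is arbitrary and the managed policy name should be double-checked against current AWS documentation. cat > ecs-trust.json <<'EOF'
{ "Version": "2012-10-17",
  "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
EOF
aws iam create-role --role-name ecsInstanceRole --assume-role-policy-document file://ecs-trust.json
aws iam attach-role-policy --role-name ecsInstanceRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
aws iam create-instance-profile --instance-profile-name ecsInstanceRole
aws iam add-role-to-instance-profile --instance-profile-name ecsInstanceRole --role-name ecsInstanceRole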
855,094 | Is KVM a type 1 or a type 2 hypervisor? I understand that type 1 hypervisors run on bare metal while type 2 hypervisors are applications running on top of an operating system (such as VMware Workstation). I also understand that the performance difference between type 1 and type 2 clients can be significant. I am confused as to whether KVM is type 1 or 2, as I understand that a desktop environment can be installed in dom0. | KVM is not a clear case as it could be categorized as either one. The KVM kernel module turns the Linux kernel into a type 1 bare-metal hypervisor, while the overall system could be categorized as type 2 because the host OS is still fully functional and the other VMs are standard Linux processes from its perspective. The desktop environment, i.e. the GUI, has little to do with this. It's more clear if we compare this to Hyper-V, where the hypervisor is a distinct layer beneath all the virtual machines: even dom0 is technically just one VM among others, even though it has special privileges and it is the one shown in the console, having a GUI. Therefore, if we stare too much at the appearance, Hyper-V might look like type 2 while it is purely type 1. | {
"source": [
"https://serverfault.com/questions/855094",
"https://serverfault.com",
"https://serverfault.com/users/189822/"
]
} |
855,238 | I was doing some fault finding, and I've discovered two services which should be set to automatic have been set to disabled . What is the best way to find out who did this? It could be someone from my company, or it could be someone client-side. It would be enough to determine the user account. I've had a look in the Windows Event Viewer, but, to be honest, I'm not sure what I'm looking for, and there is a lot to work through. Nothing has jumped out at me, but I suspect it's just that I don't know what I'm looking for. | When the start type of a service is changed, an event is recorded in the system event log , with id 7040 and source Service Control Manager . The user that performed the operation is displayed in the event (obfuscated in the screen shot below). So you have to find those events in your event logs; hopefully you will directly have the user name. If it is a generic user name, such as "administrator", then it's time to stop using generic accounts, and you'll have to correlate the date / time of the event with other info you could get from other logs (like: Microsoft-Windows-TerminalServices-LocalSessionManager/Operational which can give you the source IP of a remote desktop session) | {
"source": [
"https://serverfault.com/questions/855238",
"https://serverfault.com",
"https://serverfault.com/users/288750/"
]
} |
855,241 | We use Veeam 9.5 for backing up an ESX farm. For the Copy Job we plan to switch over from USB devices to NAS with network share, like: \\thenas01\VeeamCopy
\\thenas02\VeeamCopy
\\thenas03\VeeamCopy The NAS will be changed periodically.
At any one time only one NAS will be connected to the network. We could configure three Copy Jobs but will get failed errors for the two missing NAS. Question: Is there a way to configure a single Copy Job which supports this constellation without reporting errors for the missing NAS? | When the start type of a service is changed, an event is recorded in the system event log , with id 7040 and source Service Control Manager . The user that performed the operation is displayed in the event (obfuscated in the screen shot below). So you have to find those events in your event logs; hopefully you will directly have the user name. If it is a generic user name, such as "administrator", then it's time to stop using generic accounts, and you'll have to correlate the date / time of the event with other info you could get from other logs (like: Microsoft-Windows-TerminalServices-LocalSessionManager/Operational which can give you the source IP of a remote desktop session) | {
"source": [
"https://serverfault.com/questions/855241",
"https://serverfault.com",
"https://serverfault.com/users/237665/"
]
} |
855,620 | I sell a product to customers, and as part of this product I have a website where customers can upload data for processing. The data is of considerable size (gigabytes). I am looking to buy extra bandwidth for my customers, and to make the arrangement with their ISPs myself so that the experience is seamless to the end users. Many of my customers are on university or corporate networks where they would be unable to make the arrangement themselves even if they wanted to. The extra bandwidth would apply only to connections to my website, not to the customer's other connections. Basically I am looking for this sort of arrangement: Is this sort of thing possible? Edit : Now that the United States has ended net neutrality, is it possible? | For all intents and purposes, no, this is not possible. Even if it were, the technical and contractual logistics required would cripple your business. Think through this a bit more: Joe user at University signs up for your service. You then approach one of the University's many providers (which one? How might you know what provider Joe user's traffic egresses out of today, let alone tomorrow when things change). So then you have to make agreements with all of their providers. But then you realize that somehow you need to make an addendum to a contract that was made between the provider and the university, without the university's involvement?!? How exactly do you expect that to work? Oh and then, you realize that Joe user's traffic is likely subject to heavy traffic shaping and that any "extra bandwidth" you could procure (if such a thing were even possible) would be pointless due to traffic shaping rules. Even if traffic shaping rules weren't in play, why do you think traffic to/from your site should get special treatment? How do you thing the network people would feel about that? See? It's impossible for many, many reasons. Honestly, though, I think you're proposing a solution for a non-existent problem. If your customers are on university or corporate networks, there is likely plenty of bandwidth to spare. A few gigabytes is not that much data, and is lost in the noise when viewed with all of the other traffic on the network. | {
"source": [
"https://serverfault.com/questions/855620",
"https://serverfault.com",
"https://serverfault.com/users/378538/"
]
} |
856,194 | How can I add a host key to the SSH known_hosts file securely? I'm setting up a development machine, and I want to (e.g.) prevent git from prompting when I clone a repository from github.com using SSH. I know that I can use StrictHostKeyChecking=no (e.g. this answer ), but that's not secure. So far, I've found... GitHub publishes their SSH key fingerprints at GitHub's SSH key fingerprints I can use ssh-keyscan to get the host key for github.com . How do I combine these facts? Given a prepopulated list of fingerprints, how do I verify that the output of ssh-keyscan can be added to the known_hosts file? I guess I'm asking the following: How do I get the fingerprint for a key returned by ssh-keyscan ? Let's assume that I've already been MITM-ed for SSH, but that I can trust the GitHub HTTPS page (because it has a valid certificate chain). That means that I've got some (suspect) SSH host keys (from ssh-keyscan ) and some (trusted) key fingerprints. How do I verify one against the other? Related: how do I hash the host portion of the output from ssh-keyscan ? Or can I mix hashed/unhashed hosts in known_hosts ? | The most important part of "securely" adding a key to the known_hosts file is to get the key fingerprint from the server administrator. The key fingerprint should look something like this: 2048 SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8 github.com (RSA) In the case of GitHub, normally we can't talk directly to an administrator. However, they put the key on their web pages so we can recover the information from there. Manual key installation 1) Take a copy of the key from the server and get its fingerprint. N.B.: Do this before checking the fingerprint. $ ssh-keyscan -t rsa github.com | tee github-key-temp | ssh-keygen -lf -
# github.com:22 SSH-2.0-babeld-f3847d63
2048 SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8 github.com (RSA) 2) Get a copy of the key fingerprint from the server administrator - in this case navigate to the page with the information on github.com Go to github.com Go to the help page (on the menu on the right if logged in; at the bottom of the homepage otherwise). In the Getting Started section go to Connecting to GitHub with SSH Go to Testing your SSH connection Copy the SHA256 fingerprint from that page into your text editor for later use. 3) Compare the keys from the two sources By placing them directly one above the other in a text editor, it is easy to see if something has changed 2048 SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8 github.com (RSA) #key recovered from github website
2048 SHA256:nThbg6kXUpJ3Gl7E1InsaspRomtxdcArLviKaEsTGY8 github.com (RSA) #key recovered with keyscan (Note that the second key has been manipulated, but it looks quite similar to the original - if something like this happens you are under serious attack and should contact a trusted security expert.) If the keys are different abort the procedure and get in touch with a security expert 4) If the keys compare correctly then you should install the key you already downloaded cat github-key-temp >> ~/.ssh/known_hosts Or to install for all users on a system (as root): cat github-key-temp >> /etc/ssh/ssh_known_hosts Automated key installation If you need to add a key during a build process then you should follow steps 1-3 of the manual process above. Having done that, examine the contents of your github-key-temp file and make a script to add those contents to your known hosts file. if ! grep github.com ~/.ssh/known_hosts > /dev/null
then
echo "github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> ~/.ssh/known_hosts
fi You should now get rid of any ssh commands which have StrictHostKeyChecking disabled. | {
"source": [
"https://serverfault.com/questions/856194",
"https://serverfault.com",
"https://serverfault.com/users/7027/"
]
} |
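On the "Related" hashing question raised above, which the answer does not cover: ssh-keygen can hash the hostnames of an existing known_hosts file in place, and hashed and unhashed entries can coexist in the same file. A short sketch: ssh-keygen -H -f ~/.ssh/known_hosts              # hashes all hostnames; the original is kept as known_hosts.old
ssh-keygen -F github.com -f ~/.ssh/known_hosts   # look up an entry even after hashing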
856,425 | I am trying to push a docker image to Google's container registry but keep getting an error about Docker login having failed. I run gcloud docker -- push gcr.io/<my-project-id>/test-image I get back ERROR: Docker CLI operation failed:
Error response from daemon: login attempt to
https://appengine.gcr.io/v2/ failed with status: 404 Not Found
ERROR: (gcloud.docker) Docker login failed. Other gcloud operations that don't go through docker work. I can for example create a cluster via gcloud container clusters create my-cluster . I did play around with a local registry today, not sure if that might have broken things. Thanks! | You just need to disable storing Docker credentials in the macOS keychain in the preferences of Docker for Mac. | {
"source": [
"https://serverfault.com/questions/856425",
"https://serverfault.com",
"https://serverfault.com/users/52797/"
]
} |
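If the Docker for Mac preferences toggle mentioned above is not available, the same effect can usually be achieved by editing the Docker client config by hand; the exact contents of the file depend on your installation, so treat this as a sketch. cat ~/.docker/config.json   # look for a "credsStore": "osxkeychain" entry and remove it
# a minimal config afterwards might look like: { "auths": { "https://gcr.io": {} } }
gcloud docker -- push gcr.io/<my-project-id>/test-image   # then retry the push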
856,446 | I have an internet connection with a static ip from my ISP. I do have mail servers and webservers hosted from it. What i would like to achieve is run couple of nameservers by getting another static ip from my ISP. I have forwarded TCP and UDP ports from my local IP address and the internet connection is being managed by pfsense. The DNS resolver and forwarder service has been disabled. I tried to setup a nameserver by NAT and forwarded PORT 53 for udp & tcp traffic. But still when i try to query a record for a zone on my nameserver using dig externally or internally , i get an error "no servers could be reached". Is there any guide or information that would help me to setup the nameservers behind NAT or help me solve this issue? My ISP has confirmed that they do not have blocks or filters in place. I have also confirmed that no ports are being blocked or filtered from my end too. The name of the nameserver is ns1.sitehosters.in. ETHERNET CONFIG on NS1 auto eth1
iface eth1 inet static
address 192.168.1.12
netmask 255.255.255.0
gateway 192.168.1.1(PFSENSE)
dns-nameservers 8.8.8.8 /etc/bind/named/conf.options options {
directory "/var/cache/bind";
dnssec-validation auto;
auth-nxdomain no;
listen-on-v6 { any; };
}; Named.conf.local file on ns1 nano /etc/bind/named.conf.local
zone "sitehosters.in" {
type master;
allow-transfer {none;};
file"/etc/bind/pri.sitehosters.in"
}; Netstat output from below: tcp 0 0 192.168.1.36:domain . LISTEN 1156/named
tcp 0 0 localhost:domain . LISTEN 1156/named
tcp 0 0 localhost:953 . LISTEN 1156/named
udp 0 0 192.168.1.36:domain . 1156/named
udp 0 0 localhost:domain . 1156/named DNSCHECK at PINGDOM No name servers found at child.
No name servers could be found at the child.
This usually means that the child is not configured to answer queries about the zone. Please find some screenshots of my router config which might help you to point me in the right direction. I use pfsense on a PC which is managing all the internet connection and firewall. When using packet capture on my wan port in pfsense, i get 19:05:02.660753 IP xx.xx.xx.xx.13747 > 8.8.8.8.53: UDP, length 27
19:05:02.669900 IP 8.8.8.8.53 > xx.xx.xx.xx.13747: UDP, length 509
19:05:02.670409 IP xx.xx.xx.xx.63621 > 8.8.8.8.53: UDP, length 44
19:05:02.694125 IP xx.xx.xx.xx.34919 > 8.8.8.8.53: UDP, length 27
19:05:02.704487 IP 8.8.8.8.53 > xx.xx.xx.xx.34919: UDP, length 509
19:05:02.705580 IP xx.xx.xx.xx.11687 > 8.8.8.8.53: UDP, length 44
19:05:02.741893 IP 8.8.8.8.53 > xx.xx.xx.xx.11687: UDP, length 208
19:05:02.741919 IP 8.8.8.8.53 > xx.xx.xx.xx.63621: UDP, length 208
19:13:39.682095 IP 81.143.220.107.51368 > xx.xx.xx.xx.53: tcp 0
19:13:39.682355 IP xx.xx.xx.xx.53 > 81.143.220.107.51368: tcp 0
19:13:39.893583 IP 81.143.220.107.51368 > xx.xx.xx.xx: tcp 0
19:13:39.894893 IP 81.143.220.107.51368 > xx.xx.xx.xx.53: tcp 34
19:13:39.895023 IP xx.xx.xx.xx.53 > 81.143.220.107.51368: tcp 0
19:13:39.895353 IP xx.xx.xx.xx.53 > 81.143.220.107.51368: tcp 155
19:13:40.100199 IP 81.143.220.107.51368 > xx.xx.xx.xx.53: tcp 0
19:13:40.100220 IP 81.143.220.107.51368 > xx.xx.xx.xx.53: tcp 0 The report at intodns.com says DNS servers responded ERROR: One or more of your nameservers did not respond:
The ones that did not respond are: xx.xx.xx.xx | You just need to disable storing docker credentials on macOS keychain on preferences of Docker for Mac. | {
"source": [
"https://serverfault.com/questions/856446",
"https://serverfault.com",
"https://serverfault.com/users/396999/"
]
} |
856,904 | I have a webpage ( https://smartystreets.com/contact ) that uses jQuery to load some SVG files from S3 through the CloudFront CDN. In Chrome I will open an Incognito window as well as the console. Then I will load the page. As the page loads, I will typically get 6 to 8 messages in the console that look similar to this: XMLHttpRequest cannot load
https://d79i1fxsrar4t.cloudfront.net/assets/img/feature-icons/documentation.08e71af6.svg.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'https://smartystreets.com' is therefore not allowed access. If I do a standard reload of the page, even multiple time, I continue to get the same errors. If I do Command+Shift+R then most, and sometimes all, of the images will load without the XMLHttpRequest error. Sometimes even after the images have loaded, I will refresh and one or more of the images will not load and return that XMLHttpRequest error again. I have checked, changed, and re-checked the settings on S3 and Cloudfront. In S3 my CORS configuration looks like this: <?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedOrigin>http://*</AllowedOrigin>
<AllowedOrigin>https://*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration> (Note: initially had only <AllowedOrigin>*</AllowedOrigin> , same problem.) In CloudFront the distribution behavior is set to allow the HTTP Methods: GET, HEAD, OPTIONS . Cached methods are the same. Forward Headers is set to "Whitelist" and that whitelist includes, "Access-Control-Request-Headers, Access-Control-Request-Method, Origin". The fact that it works after a cache-less browser reload seems to indicate that all is well on the S3/CloudFront side, else why would the content be delivered. But then why would the content not be delivered on the initial page-view? I am working in Google Chrome on macOS. Firefox has no problem getting the files every time. Opera NEVER gets the files. Safari will pick up the images after several refreshes. Using curl I do not get any problems: curl -I -H 'Origin: smartystreets.com' https://d79i1fxsrar4t.cloudfront.net/assets/img/phone-icon-outline.dc7e4079.svg
HTTP/1.1 200 OK
Content-Type: image/svg+xml
Content-Length: 508
Connection: keep-alive
Date: Tue, 20 Jun 2017 17:35:57 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
Access-Control-Max-Age: 3000
Last-Modified: Thu, 15 Jun 2017 16:02:19 GMT
ETag: "dc7e4079f937e83291f2174853adb564"
Cache-Control: max-age=31536000
Expires: Wed, 01 Jan 2020 23:59:59 GMT
Accept-Ranges: bytes
Server: AmazonS3
Vary: Origin,Access-Control-Request-Headers,Access-Control-Request-Method
Age: 4373
X-Cache: Hit from cloudfront
Via: 1.1 09fc52f58485a5da8e63d1ea27596895.cloudfront.net (CloudFront)
X-Amz-Cf-Id: wxn_m9meR6yPoyyvj1R7x83pBDPJy1nT7kdMv1aMwXVtHCunT9OC9g== Some have suggested that I delete the CloudFront distribution and recreate it. Seems like a rather harsh and inconvenient fix. What is causing this problem? Update: Adding response headers from an image that failed to load. age:1709
cache-control:max-age=31536000
content-encoding:gzip
content-type:image/svg+xml
date:Tue, 20 Jun 2017 17:27:17 GMT
expires:2020-01-01T23:59:59.999Z
last-modified:Tue, 11 Apr 2017 18:17:41 GMT
server:AmazonS3
status:200
vary:Accept-Encoding
via:1.1 022c901b294fedd7074704d46fce9819.cloudfront.net (CloudFront)
x-amz-cf-id:i0PfeopzJdwhPAKoHpbCTUj1JOMXv4TaBgo7wrQ3TW9Kq_4Bx0k_pQ==
x-cache:Hit from cloudfront | You're making two requests for the same object, one from HTML, one from XHR. The second one fails, because Chrome uses the cached response from the first request, which has no Access-Control-Allow-Origin response header. Why? Chromium bug 409090 Cross-origin request from cache failing after regular request is cached describes this problem, and it's a "won't fix" -- they believe their behavior is correct. Chrome considers the cached response to be usable, apparently because the response didn't include a Vary: Origin header. But S3 does not return Vary: Origin when an object is requested without an Origin: request header, even when CORS is configured on the bucket. Vary: Origin is only sent when an Origin header is present in the request. And CloudFront does not add Vary: Origin even when Origin is whitelisted for forwarding, which should by definition mean that varying the header might modify the response -- that's the reason why you forward and cache against request headers. CloudFront gets a pass, because its response would be correct if S3's were more correct, since CloudFront does return this when it's provided by S3. S3, a little fuzzier. It is not wrong to return Vary: Some-Header when there was no Some-Header in the request. For example, a response that contains Vary: accept-encoding, accept-language indicates that the origin server might have used the request's Accept-Encoding and Accept-Language fields (or lack thereof) as
determining factors while choosing the content for this response. (emphasis added) https://www.rfc-editor.org/rfc/rfc7231#section-7.1.4 Clearly, Vary: Some-Absent-Header is valid, so S3 would be correct if it added Vary: Origin to its response if CORS is configured, since that indeed could vary the response. And, apparently, this would make Chrome do the right thing. Or, if it doesn't do the right thing in this case, it would be violating a MUST NOT . From the same section: An origin server might send Vary with a list of fields for two
purposes: To inform cache recipients that they MUST NOT use this response
to satisfy a later request unless the later request has the same
values for the listed fields as the original request (Section 4.1
of [RFC7234]). In other words, Vary expands the cache key
required to match a new request to the stored cache entry. ... So, S3 really SHOULD be returning Vary: Origin when CORS is configured on the bucket, if Origin is absent from the request, but it doesn't. Still, S3 is not strictly wrong for not returning the header, because it's only a SHOULD , not a MUST . Again, from the same section of RFC-7231: An origin server SHOULD send a Vary header field when its algorithm
for selecting a representation varies based on aspects of the request
message other than the method and request target, ... On the other hand, the argument could be made that Chrome should implicitly know that varying the Origin header should be a cache key because it could change the response in the same way Authorization could change the response. ...unless the variance
cannot be crossed or the origin server has been deliberately
configured to prevent cache transparency. For example, there is no
need to send the Authorization field name in Vary because reuse
across users is constrained by the field definition [...] Similarly, reuse across origins is arguably constrained by the nature of Origin but this argument is not a strong one. tl;dr: You apparently cannot successfully fetch an object from HTML and then successfully fetch it again with as a CORS request with Chrome and S3 (with or without CloudFront), due to peculiarities in the implementations. Workaround: This behavior can be worked-around with CloudFront and Lambda@Edge, using the following code as an Origin Response trigger. This adds Vary: Access-Control-Request-Headers, Access-Control-Request-Method, Origin to any response from S3 that has no Vary header. Otherwise, the Vary header in the response is not modified. 'use strict';
// If the response lacks a Vary: header, fix it in a CloudFront Origin Response trigger.
exports.handler = (event, context, callback) => {
const response = event.Records[0].cf.response;
const headers = response.headers;
if (!headers['vary'])
{
headers['vary'] = [
{ key: 'Vary', value: 'Access-Control-Request-Headers' },
{ key: 'Vary', value: 'Access-Control-Request-Method' },
{ key: 'Vary', value: 'Origin' },
];
}
callback(null, response);
}; Attribution: I am also the author of the original post on the AWS Support forums where this code was initially shared. The Lambda@Edge solution above results in fully correct behavior, but here are two alternatives that you may find useful, depending on your specific needs: Alternative/Hackaround #1: Forge the CORS headers in CloudFront. CloudFront supports custom headers that are added to each request. If you set Origin: on every request, even those that are not cross-origin, this will enable correct behavior in S3. The configuration option is called Custom Origin Headers, with the word "Origin" meaning something entirely different than it means in CORS. Configuring a custom header like this in CloudFront overwrites what is sent in the request with the value specified, or adds it if absent. If you have exactly one origin accessing your content over XHR, e.g. https://example.com , you can add that. Using * is dubious, but might work for other scenarios. Consider the implications carefully. Alternative/Hackaround #2: Use a "dummy" query string parameter that differs for HTML and XHR or is absent from one or the other. These parameters are typically named x-* but should not be x-amz-* . Let's say you make up the name x-request . So <img src="https://dzczcexample.cloudfront.net/image.png?x-request=html"> . When accessing the object from JS, don't add the query parameter. CloudFront is already doing the right thing, by caching different versions of the objects using the Origin header or absence of it as part of the cache key, because you forwarded that header in your cache behavior. The problem is, your browser doesn't know this. This convinces the browser that this is actually a separate object that needs to be requested again, in a CORS context. If you use these alternative suggestions, use one or the other -- not both. | {
"source": [
"https://serverfault.com/questions/856904",
"https://serverfault.com",
"https://serverfault.com/users/105379/"
]
} |
857,973 | So in my code I have a task - name: cool task
shell: 'touch iamnotcool.txt'
when: me.cool is not defined and my vars looks like ---
me:
stumped: yes So when I run the task it comes back with the following error {"failed": true, "msg": "The conditional check 'me.cool' failed. The error was: error while evaluating conditional (me.cool): 'dict object' has no attribute 'cool'. | The syntax you included: when: me.cool is not defined is correct. You can also use not in : when: "'cool' not in me" The problem is that your error message: The conditional check 'me.cool' failed. claims your condition is defined as: when: me.cool So, either there is some bug in the version you use (but you did not share which one it is) and there were known issues , or you did not post the exact task that caused the error.
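For reference, a minimal sketch of the two working spellings of that guard, using the task from the question:
- name: cool task
  shell: 'touch iamnotcool.txt'
  when: me.cool is not defined

- name: cool task (alternative spelling)
  shell: 'touch iamnotcool.txt'
  when: "'cool' not in me"
If either of these still raises the 'dict object' error, the task actually being executed is probably not the one shown in the question. | {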
"source": [
"https://serverfault.com/questions/857973",
"https://serverfault.com",
"https://serverfault.com/users/407790/"
]
} |
858,067 | I have nginx/1.12.0 and as per document it contains stream module. I have installed nginx with the following commands. sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
nginx -v
nginx version: nginx/1.12.0 I tried to add stream directive in nginx.conf : stream {
upstream sys {
server 172.x.x.x:9516;
server 172.x.x.x:9516;
}
server {
listen 9516 udp;
proxy_pass sys;
}
} but when I restart nginx I am getting below error in the nginx logs unknown directive "stream" in /etc/nginx/nginx.conf:86
nginx -V output nginx version: nginx/1.12.0
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp -buffer-size=4 -Wformat -Werror=format-security -fPIC -D_FORTIFY_SOURCE=2' --w ith-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/ var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path =/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/ modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-p ath=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http- scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_m odule --with-http_realip_module --with-http_auth_request_module --with-http_v2 _module --with-http_dav_module --with-http_slice_module --with-threads --with- http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_modul e --with-http_gzip_static_module --with-http_image_filter_module=dynamic --wit h-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with -stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with -mail_ssl_module --add-dynamic-module=/build/nginx-ZgS12K/nginx-1.12.0/debian/ modules/nginx-auth-pam --add-dynamic-module=/build/nginx-ZgS12K/nginx-1.12.0/d ebian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-ZgS12K/ng inx-1.12.0/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-ZgS12K/ nginx-1.12.0/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/ng inx-ZgS12K/nginx-1.12.0/debian/modules/ngx_http_substitutions_filter_module I googled this error and some folks say I have to install/configure this module separately. Some says it comes with nginx 1.12.0 release. Can someone suggest how I can install/configure this module on already installed nginx ? Regards
VG | The stream module is being added as dynamic, as per: --with-stream=dynamic You need it to be 'static' - so load the module directly. To do so, add the following at the very top of your nginx.conf: load_module /usr/lib/nginx/modules/ngx_stream_module.so; Then: nginx -t If all is well: nginx -s reload
service nginx restart Edit: -s signal' Send signal to the master process. The argument signal can be one of: stop, quit, reopen, reload. The following table shows the corresponding system signals.
stop' SIGTERM
quit' SIGQUIT
reopen' SIGUSR1
reload' SIGHUP
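Putting it together, a minimal sketch of how the top of /etc/nginx/nginx.conf could look once the module is loaded statically (the upstream addresses are the placeholders from the question; the rest mirrors the Ubuntu defaults):
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

stream {
    upstream sys {
        server 172.x.x.x:9516;
        server 172.x.x.x:9516;
    }
    server {
        listen 9516 udp;
        proxy_pass sys;
    }
}
load_module has to sit in the main context, outside any block, which is why it goes at the very top; nginx -t will confirm whether the stream directive is now recognised. | {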
"source": [
"https://serverfault.com/questions/858067",
"https://serverfault.com",
"https://serverfault.com/users/419547/"
]
} |
861,517 | I have a CentOS 7 server (CentOS Linux release 7.3.1611 (Core)). When I was updating my server I got an error saying more space was needed. But I have a 20GB disk on the server; when I checked the disk space I saw that only a 4.5GB partition had been created and the remaining 16GB was unallocated free space.
How can I extend the partition into the 16GB of free space? lsblk: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 4.5G 0 part
├─centos-root 253:0 0 4G 0 lvm /
└─centos-swap 253:1 0 512M 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom | There are three steps to make: alter your partition table so sda2 ends at end of disk reread the partition table (will require a reboot) resize your LVM pv using pvresize Step 1 - Partition table Run fdisk /dev/sda .
Issue p to print your current partition table and copy that output to some safe place.
Now issue d followed by 2 to remove the second partition. Issue n to create a new second partition. Make sure the start equals the start of the old second partition from the table you printed earlier. Make sure the end is at the end of the disk (usually the default). Issue t followed by 2 followed by 8e to toggle the partition type of your new second partition to 8e (Linux LVM). Issue p to review your new partition layout and make sure the start of the new second partition is exactly where the old second partition was. If everything looks right, issue w to write the partition table to disk. You will get an error message from partprobe that the partition table couldn't be reread (because the disk is in use). Reboot your system This step is necessary so the partition table gets re-read. Resize the LVM PV After your system rebooted invoke pvresize /dev/sda2 . Your Physical LVM volume will now span the rest of the drive and you can create or extend logical volumes into that space.
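If the goal is then to grow the root filesystem into the newly freed space, a minimal follow-up sketch for the layout shown in the question (assuming the root filesystem is XFS, the CentOS 7 default; use resize2fs instead of xfs_growfs for ext4):
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /
df -h /
Only the root logical volume is extended here; the swap volume is left untouched. | {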
"source": [
"https://serverfault.com/questions/861517",
"https://serverfault.com",
"https://serverfault.com/users/390968/"
]
} |
862,387 | My RDS server comes up with a maximum of 40 connections, as in the following documentation. I am using Magento 1.9, and at some points I reach the maximum number of connections and the website goes out of service. Do you have any recommended way to solve this issue? From my understanding, if I have 2 web servers connecting to an RDS server, then I should have 2 RDS connections, not more. | The AWS RDS max_connections limit is based on the instance type, so you can upgrade your RDS instance or add more replicas. The RDS instance types with their max_connections limits:
t2.micro 66
t2.small 150
m3.medium 296
t2.medium 312
m3.large 609
t2.large 648
m4.large 648
m3.xlarge 1237
r3.large 1258
m4.xlarge 1320
m2.xlarge 1412
m3.2xlarge 2492
r3.xlarge 2540
Reference: max_connections at AWS RDS MySQL Instance Sizes in 2015. Update 2017-07: The current RDS MySQL max_connections setting defaults to {DBInstanceClassMemory/12582880}, so if you use a t2.micro with 512MB RAM, max_connections comes out to (512*1024*1024)/12582880 ~= 40, and so on. Each web server can hold many connections to RDS, depending on the SQL requests issued by that web server.
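A hedged sketch of the two usual follow-ups (the parameter group name my-rds-params is hypothetical, and the group must already be attached to the instance):
-- from any MySQL client, check the effective limit and current usage:
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

# or override the formula-based default via a custom DB parameter group:
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-rds-params \
    --parameters "ParameterName=max_connections,ParameterValue=300,ApplyMethod=immediate"
Raising the value without adding memory can push the instance into swapping, so upgrading the instance class (or adding read replicas) is usually the safer fix. | {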
"source": [
"https://serverfault.com/questions/862387",
"https://serverfault.com",
"https://serverfault.com/users/335025/"
]
} |
865,874 | I usually work with Ubuntu LTS servers which from what I understand symlink /bin/sh to /bin/dash . A lot of other distros though symlink /bin/sh to /bin/bash . From that I understand that if a script uses #!/bin/sh on top it may not run the same way on all servers? Is there a suggested practice on which shell to use for scripts when one wants maximum portability of those scripts between servers? | There are roughly four levels of portability for shell scripts (as far as the shebang line is concerned): Most portable: use a #!/bin/sh shebang and use only the basic shell syntax specified in the POSIX standard . This should work on pretty much any POSIX/unix/linux system. (Well, except Solaris 10 and earlier which had the real legacy Bourne shell, predating POSIX so non compliant, as /bin/sh .) Second most portable: use a #!/bin/bash (or #!/usr/bin/env bash ) shebang line, and stick to bash v3 features. This'll work on any system that has bash (in the expected location). Third most portable: use a #!/bin/bash (or #!/usr/bin/env bash ) shebang line, and use bash v4 features. This'll fail on any system that has bash v3 (e.g. macOS, which has to use it for licensing reasons). Least portable: use a #!/bin/sh shebang and use bash extensions to the POSIX shell syntax. This will fail on any system that has something other than bash for /bin/sh (such as recent Ubuntu versions). Don't ever do this; it's not just a compatibility issue, it's just plain wrong. Unfortunately, it's an error a lot of people make. My recommendation: use the most conservative of the first three that supplies all of the shell features that you need for the script. For max portability, use option #1, but in my experience some bash features (like arrays) are helpful enough that I'll go with #2. The worst thing you can do is #4, using the wrong shebang. If you're not sure what features are basic POSIX and which are bash extensions, either stick with a bash shebang (i.e. option #2), or test the script thoroughly with a very basic shell (like dash on your Ubuntu LTS servers). The Ubuntu wiki has a good list of bashisms to watch out for . There's some very good info about the history and differences between shells in the Unix & Linux question "What does it mean to be sh compatible?" and the Stackoverflow question "Difference between sh and bash" . Also, be aware that the shell isn't the only thing that differs between different systems; if you're used to linux, you're used to the GNU commands, which have a lot of nonstandard extensions you may not find on other unix systems (e.g. bsd, macOS). Unfortunately, there's no simple rule here, you just have to know the range of variation for the commands you're using. One of the nastiest commands in terms of portability is one of the most basic: echo . Any time you use it with any options (e.g. echo -n or echo -e ), or with any escapes (backslashes) in the string to print, different versions will do different things. Any time you want to print a string without a linefeed after it, or with escapes in the string, use printf instead (and learn how it works -- it's more complicated than echo is). The ps command is also a mess . Another general thing-to-watch-for is recent/GNUish extensions to command option syntax: old (standard) command format is that the command is followed by options (with a single dash, and each option is a single letter), followed by command arguments. 
Recent (and often non-portable) variants include long options (usually introduced with -- ), allowing options to come after arguments, and using -- to separate options from arguments.
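As a small illustration of the echo/printf point above (a sketch; the exact behaviour differs per shell, which is precisely the problem):
echo -n "no newline"      # some shells treat -n as an option, others print it literally
echo "a\tb"               # some shells expand \t to a tab, others print it as-is
printf '%s' "no newline"  # portable: no trailing newline, no surprises
printf 'a\tb\n'           # portable: \t is always a tab in the format string
When in doubt, printf with an explicit format string is the safe choice. | {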
"source": [
"https://serverfault.com/questions/865874",
"https://serverfault.com",
"https://serverfault.com/users/1816/"
]
} |
867,322 | Maybe this is a trivial question, but it is not totally clear to me. On one of our servers we have some background processes running which were started with service and some others which were started with systemctl , like this: $ service nginx start
$ systemctl start gunicorn What is the difference between the two commands? Which one is the preferred way to deal with background services? How to configure the preferred command? | service is a "high-level" command used for starting and stopping services on different unixes and linuxes. Depending on the "lower-level" service manager, service redirects to different binaries. For example, on CentOS 7 it redirects to systemctl , while on CentOS 6 it directly calls the corresponding /etc/init.d script. On the other hand, in older Ubuntu releases it redirects to upstart . service is adequate for basic service management, while directly calling systemctl gives greater control options.
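A small sketch of the practical difference on a systemd-based host such as Ubuntu 16.04 or CentOS 7 (the service names are just the ones from the question):
service nginx status         # forwarded to systemd behind the scenes
systemctl status nginx       # native form: unit state, cgroup and recent journal lines
systemctl enable gunicorn    # enabling/disabling at boot is only exposed through systemctl
systemctl cat nginx          # inspect the unit file backing the service
Both start/stop forms end up driving the same systemd unit, so mixing them as in the question is harmless. | {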
"source": [
"https://serverfault.com/questions/867322",
"https://serverfault.com",
"https://serverfault.com/users/210060/"
]
} |
868,281 | I have a Kubernetes cluster set up by kops on Amazon Web Services. I have 2 sites set up. One is secured via SSL/TLS/https and the other is just http. Both are WordPress sites. Domains changed to protect site identity. Ingress config: apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-rules
spec:
tls:
- hosts:
- site1.com
secretName: site1-tls-secret
- hosts:
- www.site1.com
secretName: site1-tls-secret
rules:
- host: site1.com
http:
paths:
- path: /
backend:
serviceName: site1
servicePort: 80
- host: www.site1.com
http:
paths:
- path: /
backend:
serviceName: site1
servicePort: 80
- host: blog.site2.com
http:
paths:
- path: /
backend:
serviceName: site2
servicePort: 80 Ingress Service apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
labels:
app: nginx-ingress
k8s-addon: ingress-nginx.addons.k8s.io
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
selector:
app: nginx-ingress Ingress Deployment: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress
spec:
replicas: 1
template:
metadata:
labels:
app: nginx-ingress
spec:
containers:
- name: nginx-ingress
image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/echoheaders-default
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf Generated nginx.conf daemon off;
worker_processes 1;
pid /run/nginx.pid;
worker_rlimit_nofile 1047552;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
set_real_ip_from 0.0.0.0/0;
real_ip_header proxy_protocol;
real_ip_recursive on;
geoip_country /etc/nginx/GeoIP.dat;
geoip_city /etc/nginx/GeoLiteCity.dat;
geoip_proxy_recursive on;
# lua section to return proper error codes when custom pages are used
lua_package_path '.?.lua;/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
init_by_lua_block {
require("error_page")
}
sendfile on;
aio threads;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
keepalive_requests 100;
client_header_buffer_size 1k;
large_client_header_buffers 4 8k;
client_body_buffer_size 8k;
http2_max_field_size 4k;
http2_max_header_size 16k;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 64;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 64;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
include /etc/nginx/mime.types;
default_type text/html;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip_proxied any;
# Custom headers for response
server_tokens on;
# disable warnings
uninitialized_variable_warn off;
log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 100.64.0.10 valid=30s;
# Retain the default nginx handling of requests without a "Connection" header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
map $http_x_forwarded_port $pass_server_port {
default $http_x_forwarded_port;
'' $server_port;
}
map $http_x_forwarded_for $the_real_ip {
default $http_x_forwarded_for;
'' $proxy_protocol_addr;
}
# map port 442 to 443 for header X-Forwarded-Port
map $pass_server_port $pass_port {
442 443;
default $pass_server_port;
}
# Map a response error watching the header Content-Type
map $http_accept $httpAccept {
default html;
application/json json;
application/xml xml;
text/plain text;
}
map $httpAccept $httpReturnType {
default text/html;
json application/json;
xml application/xml;
text text/plain;
}
# Obtain best http host
map $http_host $this_host {
default $http_host;
'' $host;
}
map $http_x_forwarded_host $best_http_host {
default $http_x_forwarded_host;
'' $this_host;
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets on;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp384r1;
proxy_ssl_session_reuse on;
upstream upstream-default-backend {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
server 100.96.1.49:8080 max_fails=0 fail_timeout=0;
}
upstream default-site1-80 {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
}
upstream default-site2blog-80 {
# Load balance algorithm; empty for round robin, which is the default
least_conn;
server 100.96.2.127:80 max_fails=0 fail_timeout=0;
server 100.96.1.52:80 max_fails=0 fail_timeout=0;
}
server {
server_name _;
listen 80 proxy_protocol default_server reuseport backlog=511;
listen [::]:80 proxy_protocol default_server reuseport backlog=511;
set $proxy_upstream_name "-";
listen 442 proxy_protocol default_server reuseport backlog=511 ssl http2;
listen [::]:442 proxy_protocol default_server reuseport backlog=511 ssl http2;
# PEM sha: ------
ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
location / {
set $proxy_upstream_name "upstream-default-backend";
port_in_redirect off;
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 10s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_pass http://upstream-default-backend;
}
# health checks in cloud providers require the use of port 80
location /healthz {
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
}
server {
server_name blog.site2.com;
listen 80 proxy_protocol;
listen [::]:80 proxy_protocol;
set $proxy_upstream_name "-";
location / {
set $proxy_upstream_name "default-site2blog-80";
port_in_redirect off;
client_max_body_size "20m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 10s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_pass http://default-site2blog-80;
}
}
server {
server_name site1.com;
listen 80 proxy_protocol;
listen [::]:80 proxy_protocol;
set $proxy_upstream_name "-";
listen 442 proxy_protocol ssl http2;
listen [::]:442 proxy_protocol ssl http2;
# PEM sha: ---
ssl_certificate /ingress-controller/ssl/default-site1-tls-secret.pem;
ssl_certificate_key /ingress-controller/ssl/default-site1-tls-secret.pem;
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
location / {
set $proxy_upstream_name "default-site1-80";
# enforce ssl on server side
if ($pass_access_scheme = http) {
return 301 https://$best_http_host$request_uri;
}
port_in_redirect off;
client_max_body_size "20m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 10s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;
proxy_redirect off;
proxy_buffering off;
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_pass http://default-site1-80;
}
}
# default server, used for NGINX healthcheck and access to nginx stats
server {
# Use the port 18080 (random value just to avoid known ports) as default port for nginx.
# Changing this value requires a change in:
# https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/nginx/command.go#L104
listen 18080 default_server reuseport backlog=511;
listen [::]:18080 default_server reuseport backlog=511;
set $proxy_upstream_name "-";
location /healthz {
access_log off;
return 200;
}
location /nginx_status {
set $proxy_upstream_name "internal";
access_log off;
stub_status on;
}
# this location is used to extract nginx metrics
# using prometheus.
# TODO: enable extraction for vts module.
location /internal_nginx_status {
set $proxy_upstream_name "internal";
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
location / {
set $proxy_upstream_name "upstream-default-backend";
proxy_pass http://upstream-default-backend;
}
}
# default server for services without endpoints
server {
listen 8181;
set $proxy_upstream_name "-";
location / {
return 503;
}
}
}
stream {
log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
access_log /var/log/nginx/access.log log_stream;
error_log /var/log/nginx/error.log;
# TCP services
# UDP services
} | It was caused by the Ingress config referencing an incorrect service name. After updating the Ingress reference to the service, I no longer get a 503.
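A hedged way to spot this kind of mismatch (the names are the ones from the question; add -n <namespace> if the objects are not in default):
kubectl describe ingress my-rules     # backends showing <error: endpoints ... not found> point at a bad serviceName
kubectl get svc site1 site2           # these names must match serviceName in the Ingress spec
kubectl get endpoints site1 site2     # empty endpoints mean the Service selector matches no pods
The generated nginx.conf above shows why the symptom is a 503: services that cannot be resolved are mapped to the local 'return 503' server on port 8181. | {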
"source": [
"https://serverfault.com/questions/868281",
"https://serverfault.com",
"https://serverfault.com/users/172369/"
]
} |
868,650 | Over the last 3-4 weeks I have been trying to find a rogue DHCP server on my network but have been stumped! It is offering IP Addresses that do not work with my network, so any device that needs a Dynamic Address is getting one from the Rogue DHCP and then that device stops working. I need help to find and destroy this thing! I think it might be a Trojan of some sort. My Main Router is the only valid DHCP Server and is 192.168.0.1 which offers a range of 192.160.0.150-199, and I have this configured in my AD as Authorized. This ROGUE DHCP claims to be coming from 192.168.0.20 and offering an IP Address in the range of 10.255.255.* which is messing up EVERYTHING on my network unless I assign a static IP Address to it. 192.168.0.20 does not exist on my network. My network is a single AD Server on Windows 2008R2, 3 other physical servers (1-2008R2 and 2 2012R2) about 4 Hypervisor VM's, 3 laptops and a Windows 7 box. I can't ping the rogue 192.160.0.20 IP, and I can't see it in the ARP -A output, so I can't get its MAC address. I'm hoping that someone reading this post has come across this before. | On one of the affected Windows clients start a packet capture (Wireshark, Microsoft Network Monitor, Microsoft Message Analyzer, etc.), then from an elevated command prompt run ipconfig /release . The DHCP client will send a DHCPRELEASE message to the DHCP server that it obtained it's ip address from. This should allow you to obtain the MAC address of the rogue DHCP server, which you can then track down in your switch MAC address table to find out which switch port it's connected to, then trace that switch port to the network jack and the device plugged into it. | {
"source": [
"https://serverfault.com/questions/868650",
"https://serverfault.com",
"https://serverfault.com/users/384781/"
]
} |
868,665 | I currently have my website and email services managed by Office 365. Name servers for my domain are currently pointing at ns1.bdm.microsoftonline.com and ns2.bdm.microsoftonline.com to let Office 365 manage DNS records. I would like to now move my website off of Office 365 and into Squarespace, but keep all other Office 365 services (such as email) running. I would also like to conduct this migration with as little downtime as possible (preferably 0). What is the best way to do this? Microsoft has two help pages for this: https://support.office.com/en-us/article/Update-DNS-records-to-keep-your-website-with-your-current-hosting-provider-2c4cf347-b897-45c1-a71f-210bdc8f1061 https://support.office.com/en-us/article/Create-DNS-records-at-any-DNS-hosting-provider-for-Office-365-7b7b075d-79f9-4e37-8a9e-fb60c1d95166?ui=en-US&rs=en-US&ad=US SquareSpace has one help page: https://support.squarespace.com/hc/en-us/articles/206541867-Using-Office-365-with-your-Squarespace-Domain SquareSpace's documentation points to the Microsoft's second link above, which has instructions to change the DNS settings. From the limited experience I have, changing DNS settings will incur some downtime, which is something I would like to avoid. Therefore, the first link looks promising, but I have not tried it yet. Questions: Is the first link the best approach? Is there a better way? Or is there no way around changing DNS settings and incurring downtime? | On one of the affected Windows clients start a packet capture (Wireshark, Microsoft Network Monitor, Microsoft Message Analyzer, etc.), then from an elevated command prompt run ipconfig /release . The DHCP client will send a DHCPRELEASE message to the DHCP server that it obtained it's ip address from. This should allow you to obtain the MAC address of the rogue DHCP server, which you can then track down in your switch MAC address table to find out which switch port it's connected to, then trace that switch port to the network jack and the device plugged into it. | {
"source": [
"https://serverfault.com/questions/868665",
"https://serverfault.com",
"https://serverfault.com/users/92595/"
]
} |
868,863 | We have a lot of PCs in the company and nobody wants to wipe a multitude of hard drives. We also have many apprentice toolmakers who really want to destroy things. Thus, every couple of months, our apprentices receive two heavy baskets of hard drives to drill through. Some of my coworkers believe that this is absolutely overkill. I, however, believe that not wiping the drives before drilling through them might make some data recoverable. According to this question , wiping with DBAN will make data completely unrecoverable. DBAN is just fine. Here's the dirty little secret--any program that
overwrites every byte of the drive will have wiped everything
permanently. You don't need to do multiple passes with different write
patterns, etc. How about drilling a hole? | Drilling a hole in the drive enclosure which passes through all the platters will make it impossible to run the drive. Most modern HDDs don't have air inside the enclosure, and you've let what was in there escape. You've filled the cavity with tiny pieces of drill swarf, which will be on everything including the platters, and will crash the heads if someone tries to lower them onto the rotating platters. You've also unbalanced the platters, though I don't have an estimate for whether this will be fatal. The drill bit will likely pass through the controller board on the way, which though not fatal will certainly not help anyone trying to hook the drive up. You have not prevented someone from putting the platter under a magnetic force microscope and reading most of the data off that way. We can be fairly sure this is possible, because the SANS paper linked from the linked SF article demonstrates that you can't recover data from a platter with an MFM after a single overwriting pass, and such a test would be completely meaningless if you couldn't recover non-overwritten data using the same procedure. So drilling through the platters will very likely prevent data from being read off the HDD by normal means. It won't prevent much of the data being recoverable by a determined, well-funded opponent. All security is meaningless without a threat model. So decide what you're securing against. If you're worried about someone hooking up your old company HDDs and reading them, after they found them on ebay / the local rubbish dump / the WEEE recycling bin, then drilling is good. Against state-level actors, drilling is probably insufficient. If it helps, I drill most of my old drives, too, because I am worried about casual data leakage, but I doubt the security services are interested in most of my data. For the few drives I have which hold data that Simply Must Not Leak, I encrypt them using passphrases of known strength, and drill them at the end of their lives. | {
"source": [
"https://serverfault.com/questions/868863",
"https://serverfault.com",
"https://serverfault.com/users/424799/"
]
} |
870,095 | This may sound like an odd question, but it's generated some spirited discussion with some of my colleagues. Consider a moderately sized RAID array consisting of something like eight or twelve disks. When buying the initial batch of disks, or buying replacements to enlarge the array or refresh the hardware, there are two broad approaches one could take: Buy all the drives in one order from one vendor, and receive one large box containing all the disks. Order one disk apiece from a variety of vendors, and/or spread out (over a period of days or weeks) several orders of one disk apiece. There's some middle ground, obviously, but these are the main opposing mindsets. I've been genuinely curious which approach is more sensible in terms of reducing the risk of catastrophic failure of the array. (Let's define that as "25% of the disks fail within a time window equal to how long it takes to resilver the array once.") The logic being, if all the disks came from the same place, they might all have the same underlying defects waiting to strike. The same timebomb with the same initial countdown on the clock, if you will. I've collected a couple of the more common pros and cons for each approach, but some of them feel like conjecture and gut instinct instead of hard evidence-based data. Buy all at once, pros Less time spent in research/ordering phase. Minimizes shipping cost if the vendor charges for it. Disks are pretty much guaranteed to have the same firmware version and the same "quirks" in their operational characteristics (temperature, vibration, etc.) Price increases/stock shortages unlikely stall the project midway. Each next disk is on-hand the moment it's required to be installed. Serial numbers are all known upfront, disks can be installed in the enclosure in order of increasing serial number. Seems overly fussy, but some folks seem to value that. (I guess their management interface sorts the disks by serial number instead of hardware port order...?) Buy all at once, cons All disks (probably) came from the same factory, made at the same time, of the same materials. They were stored in the same environment, and subject to the same potential abuses during transit. Any defect or damage present in one is likely present in all. If the drives are being replaced one-at-a-time into an existing array and each new disk needs to be resilvered individually, it could be potentially weeks before the last disk from the order is installed and discovered to be faulty. The return/replacement window with the vendor may expire during this time. Can't take advantage of near-future price decreases that may occur during the project. Buy individually, pros If one disk fails, it shares very little manufacturing/transit history with any of the other disks. If the failure was caused by something in manufacturing or transit, the root cause likely did not occur in any other disk. If a disk is dead on arrival or fails during the first hours of use, that will be detected shortly after the shipment arrives and the return process may go more smoothly. Buy individually, cons Takes a significant amount of time to find enough vendors with agreeable prices. Order tracking, delivery failure, damaged item returns, and other issues can be time-consuming to resolve. Potentially higher shipping costs. A very real possibility exists that a new disk will be required but none will be on-hand, stalling the project. Imagined benefit. 
Regardless of the vendor or date purchased, all the disks came from the same place and are really the same. Manufacturing defects would have been detected by quality control and substandard disks would not have been sold. Shipping damage would have to be so egregious (and plainly visible to the naked eye) that damaged drives would be obvious upon unpacking. If we're going simply by bullet point count, "buy in bulk" wins pretty clearly. But some of the pros are weak, and some of the cons are strong. Many of the bullet points simply state the logical inverse of some of the others. Some of these things may be absurd superstition. But if superstition does a better job at maintaining array integrity, I guess I'd be willing to go along with it. Which group is most sensible here? UPDATE: I have data relevant to this discussion. The last array I personally built (about four years ago) had eight disks. I ordered from one single vendor, but split the purchase into two orders of four disks each, about one month apart. One disk of the array failed within the first hours of running. It was from the first batch, and the return window for that order had closed in the time it took to spin everything up. Four years later, the seven original disks plus one replacement are still running error-free. (knock on wood.) | In practice, people who buy from enterprise vendors (HPE, Dell, etc.) do not worry about this . Drives sourced by these vendors are already spread across multiple manufacturers under the same part number. An HP disk under a particular SKU may be HGST or Seagate or Western Digital. Same HP part number, variation on manufacturer, lot number and firmware You shouldn't try to outsmart/outwit the probability of batch failure, though. You're welcome to try if it gives peace of mind, but it may not be worth the effort. Good practices like clustering, replication and solid backups are the real protection for batch failures. Add hot and cold spares. Monitor your systems closely. Take advantage of smart filesystems like ZFS :) And remember, hard drive failures aren't always mechanical... | {
"source": [
"https://serverfault.com/questions/870095",
"https://serverfault.com",
"https://serverfault.com/users/235161/"
]
} |
870,105 | I'm trying to test a more modular approach to using sieve scripts on my mail server. Previously I used .dovecot.sieve for everything but I now have conflicting requirements and I thought I'd test having a master script and including multiple includes. It is dovecot 2.0.9. I set the following: sieve = ~/.dovecot.sieve
sieve_dir = ~/sieve When I do what I need directly in the .dovecot.sieve file it all works fine.
But I just tried changing it to the following instead: require ["include"];
include :personal "filing";
include :personal "vacation"; And when I compile it I get the following error: # sievec .dovecot.sieve
.dovecot: line 2: error: included personal script 'filing' does not exist.
.dovecot: line 3: error: included personal script 'vacation' does not exist.
.dovecot: error: validation failed. But they do exist: # ls -l sieve total 8
-rwxrwxrwx 1 root root 390 Aug 23 14:53 filing.sieve
-rwxrwxrwx 1 root root 740 Aug 23 15:04 vacation.sieve So why is dovecot not seeing the includes? I have tried chmod 777 in case it's permissions but that didn't help. I've tried absolute path names, and putting the include files in different locations, renaming them to not have the sieve suffix, explicitly calling them WITH the sieve suffix but nothing works. I can't use global because these are account specific requirements. Does anyone have any ideas why the sievec is failing? | In practice, people who buy from enterprise vendors (HPE, Dell, etc.) do not worry about this . Drives sourced by these vendors are already spread across multiple manufacturers under the same part number. An HP disk under a particular SKU may be HGST or Seagate or Western Digital. Same HP part number, variation on manufacturer, lot number and firmware You shouldn't try to outsmart/outwit the probability of batch failure, though. You're welcome to try if it gives peace of mind, but it may not be worth the effort. Good practices like clustering, replication and solid backups are the real protection for batch failures. Add hot and cold spares. Monitor your systems closely. Take advantage of smart filesystems like ZFS :) And remember, hard drive failures aren't always mechanical... | {
"source": [
"https://serverfault.com/questions/870105",
"https://serverfault.com",
"https://serverfault.com/users/282634/"
]
} |
870,568 | This error message shows up when I use ubuntu 16.04 and the latest mysql 5.7.19-0ubuntu0.16.04.1 in a Docker image. What could be done to fix this? To reproduce the error Get the Dockerfile : FROM ubuntu:16.04
RUN apt update
RUN DEBIAN_FRONTEND=noninteractive apt install -y mysql-server (also available here ) Build and run: docker build -t mysqlfail .
docker run -it mysqlfail tail -1 /var/log/mysql/error.log would have been shown the following error log: 2017-08-26T11:48:45.398445Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option. Which was exactly what we wanted: a mysql with no root password set yet. In the past (ubuntu 14.04 / mysql 5.5) a service mysql start was possible. Now if you try this it fails docker run -it mysqlfail service mysql start
* Starting MySQL database server mysqld
No directory, logging in with HOME=/
[fail] and /var/log/mysql/error.log contains a line: 2017-08-26T11:59:57.680618Z 0 [ERROR] Fatal error: Can't open and lock privilege tables: Table storage engine for 'user' doesn't have this option build log (for the complete Dockerfile ) Sending build context to Docker daemon 2.56kB
Step 1/4 : FROM ubuntu:16.04
---> ebcd9d4fca80
...
Step 4/4 : RUN service mysql start
---> Running in 5b899739d90d
* Starting MySQL database server mysqld
...fail!
The command '/bin/sh -c service mysql start' returned a non-zero code: 1 weird continuation After experiments as outlined in my answer attempt , I created a shell script that does a select count(*) query on each table in the mysql space three times in a row (because experiments show that on some tables the query will fail exactly twice :-( ). Then a mysql_upgrade and the service mysql restart is tried. In the Dockerfile the script is made available via COPY mysqltest.sh . The trials with this script give weird/crazy results. For the Docker environment the start still fails [ERROR] Fatal error: Can't open and lock privilege tables: Table storage engine for 'user' doesn't have this option Running the script sh mysqltest.sh root in the docker environment leads to 2017-08-27T09:12:47.021528Z 12 [ERROR] /usr/sbin/mysqld: Table './mysql/db' is marked as crashed and should be repaired 2017-08-27T09:12:47.050141Z 12 [ERROR] Couldn't repair table: mysql.db 2017-08-27T09:12:47.055925Z 13 [ERROR] /usr/sbin/mysqld: Table './mysql/db' is marked as crashed and should be repaired 2017-08-27T09:12:47.407700Z 54 [ERROR] /usr/sbin/mysqld: Table './mysql/proc' is marked as crashed and should be repaired 2017-08-27T09:12:47.433516Z 54 [ERROR] Couldn't repair table: mysql.proc 2017-08-27T09:12:47.440695Z 55 [ERROR] /usr/sbin/mysqld: Table './mysql/proc' is marked as crashed and should be repaired 2017-08-27T09:12:47.769485Z 81 [ERROR] /usr/sbin/mysqld: Table './mysql/tables_priv' is marked as crashed and should be repaired 2017-08-27T09:12:47.792061Z 81 [ERROR] Couldn't repair table: mysql.tables_priv 2017-08-27T09:12:47.798472Z 82 [ERROR] /usr/sbin/mysqld: Table './mysql/tables_priv' is marked as crashed and should be repaired 2017-08-27T09:12:47.893741Z 99 [ERROR] /usr/sbin/mysqld: Table './mysql/user' is marked as crashed and should be repaired 2017-08-27T09:12:47.914288Z 99 [ERROR] Couldn't repair table: mysql.user 2017-08-27T09:12:47.920459Z 100 [ERROR] /usr/sbin/mysqld: Table './mysql/user' is marked as crashed and should be repaired What is going on here to cause this strange behavior? | Ran into the same problem today. I'm running the MySQL service during docker build for the unit tests and upgrading to MySQL CE 5.7.19 from MariaDB broke the build. What did solve the issue for me was running chown -R mysql:mysql /var/lib/mysql /var/run/mysqld each time before starting the mysql service. So my Dockerfile looks like this now: RUN chown -R mysql:mysql /var/lib/mysql /var/run/mysqld && \
service mysql start && \
mvn -q verify site Hope this helps. | {
"source": [
"https://serverfault.com/questions/870568",
"https://serverfault.com",
"https://serverfault.com/users/162693/"
]
} |
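A follow-up sketch for the MySQL-in-Docker answer above: a minimal Dockerfile showing the chown workaround in context. The base image, the apt-get line and the final test query are illustrative assumptions; only the chown-before-service-start step is the fix described in the answer.
FROM ubuntu:16.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
# Re-own the data directory and the socket/pid directory, then start mysqld in the same build layer.
RUN chown -R mysql:mysql /var/lib/mysql /var/run/mysqld && \
    service mysql start && \
    mysql -e 'SELECT 1'   # stand-in for whatever needs a running MySQL at build time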
871,090 | Currently we are running an application in a single docker container. The application needs all sorts of sensitive data to be passed as environment variables. I'm putting those on the run command so they don't end up in the image and then in a repository; however, I end up with a very insecure run command. Now, I understand that docker secrets exist; however, how can I use them without deploying a cluster? Or is there any other way to secure this data? Best Regards, | Yes , you can use secrets if you use a compose file . (You don't need to run a swarm). You use a compose file with docker-compose : there is documentation for "secrets" in a docker-compose.yml file . I switched to docker-compose because I wanted to use secrets. I am happy I did, it seems much cleaner. Each service maps to a container. And if you ever want to switch to running a swarm instead, you are basically already there. Note: Secrets are not loaded into the container's environment, they are mounted to /run/secrets/ Here is an example: 1) Project Structure: |
|--- docker-compose.yml
|--- super_duper_secret.txt 2) docker-compose.yml contents: version: "3.6"
services:
my_service:
image: centos:7
entrypoint: "cat /run/secrets/my_secret"
secrets:
- my_secret
secrets:
my_secret:
file: ./super_duper_secret.txt 3) super_duper_secret.txt contents: Whatever you want to write for a secret really. 4) Run this command from the project's root to see that the container does have access to your secret, (Docker must be running and docker-compose installed): docker-compose up --build my_service You should see your container output your secret. | {
"source": [
"https://serverfault.com/questions/871090",
"https://serverfault.com",
"https://serverfault.com/users/421304/"
]
} |
871,305 | Given an nginx config: proxy_pass http://yahoo.com; How do you get the string " http://yahoo.com " in the access_log? $remote_addr is the ip. | Add $proxy_host and $upstream_addr to your log_format. ^-courtesy of the commenter, Alex. Just adding it here so that folks see it when visiting. | {
"source": [
"https://serverfault.com/questions/871305",
"https://serverfault.com",
"https://serverfault.com/users/75557/"
]
} |
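A minimal sketch building on the log_format answer above; the format name upstreamlog and the log path are assumptions, while $proxy_host and $upstream_addr are the variables named in the answer. The log_format directive belongs in the http context:
log_format upstreamlog '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent proxy_host=$proxy_host upstream=$upstream_addr';
access_log /var/log/nginx/access.log upstreamlog;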
872,302 | So I have a client in my network connected to the router through my computer with arpspoof. When I now want to stop the packet forwarding I execute: iptables -A FORWARD -j REJECT and that works as I expected. But when I try to do something like: iptables -A FORWARD -j ACCEPT I cannot manage to make the packets go through like in the beginning. Am I doing something wrong, or are there any other arguments I should use different from "ACCEPT"? | IPtables has a list of rules, and for each packet, it checks the list of rules in order. Once a rule is found that matches the packet and specifies a policy (ACCEPT, REJECT, DROP), the fate of the matching packet is determined; no more rules are examined. This means that the order in which you run commands is important. When you use iptables -A , you add a rule to the end of the list of rules, so you will end up with a rule list that looks like this: Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
ACCEPT all -- anywhere anywhere Since the REJECT rule comes before the ACCEPT rule, it gets triggered first, and thus forwarding won't happen. You will therefore need to delete the REJECT rule instead of adding an ACCEPT rule. To delete the REJECT rule, run iptables -D FORWARD -j REJECT For more information, read the iptables manpage. | {
"source": [
"https://serverfault.com/questions/872302",
"https://serverfault.com",
"https://serverfault.com/users/434373/"
]
} |
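A small addition to the iptables answer above: when several similar rules exist, it can be easier to delete by position than to repeat the full rule specification. A sketch, assuming the REJECT rule sits at position 1 of the FORWARD chain:
iptables -L FORWARD --line-numbers -n   # list FORWARD rules with their positions
iptables -D FORWARD 1                   # delete the rule at position 1 (the REJECT rule here)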
872,749 | I am running this command: pg_dumpall | bzip2 > cluster-$(date --iso).sql.bz2 It takes too long. I look at the processes with top . The bzip2 process takes about 95% and postgres 5% of one core. The wa entry is low. This means the disk is not the bottleneck. What can I do to increase the performance? Maybe let bzip2 use more cores. The server has 16 cores. Or use an alternative to bzip2? What can I do to increase the performance? | There are many compression algorithms around, and bzip2 is one of the slower ones. Plain gzip tends to be significantly faster, at usually not much worse compression. When speed is the most important, lzop is my favourite. Poor compression, but oh so fast. I decided to have some fun and compare a few algorithms, including their parallel implementations. The input file is the output of the pg_dumpall command on my workstation, a 1913 MB SQL file. The hardware is an older quad-core i5. The times are wall-clock times of just the compression. Parallel implementations are set to use all 4 cores. Table sorted by compression speed. Algorithm Compressed size Compression Decompression
lzop 398MB 20.8% 4.2s 455.6MB/s 3.1s 617.3MB/s
lz4 416MB 21.7% 4.5s 424.2MB/s 1.6s 1181.3MB/s
brotli (q0) 307MB 16.1% 7.3s 262.1MB/s 4.9s 390.5MB/s
brotli (q1) 234MB 12.2% 8.7s 220.0MB/s 4.9s 390.5MB/s
zstd 266MB 13.9% 11.9s 161.1MB/s 3.5s 539.5MB/s
pigz (x4) 232MB 12.1% 13.1s 146.1MB/s 4.2s 455.6MB/s
gzip 232MB 12.1% 39.1s 48.9MB/s 9.2s 208.0MB/s
lbzip2 (x4) 188MB 9.9% 42.0s 45.6MB/s 13.2s 144.9MB/s
pbzip2 (x4) 189MB 9.9% 117.5s 16.3MB/s 20.1s 95.2MB/s
bzip2 189MB 9.9% 273.4s 7.0MB/s 42.8s 44.7MB/s
pixz (x4) 132MB 6.9% 456.3s 4.2MB/s 7.9s 242.2MB/s
xz 132MB 6.9% 1027.8s 1.9MB/s 17.3s 110.6MB/s
brotli (q11) 141MB 7.4% 4979.2s 0.4MB/s 3.6s 531.6MB/s If the 16 cores of your server are idle enough that all can be used for compression, pbzip2 will probably give you a very significant speed-up. But if you need more speed still and can tolerate ~20% larger files, gzip is probably your best bet. Update: I added brotli (see TOOGAM's answer) results to the table. brotli's compression quality setting has a very large impact on compression ratio and speed, so I added three settings ( q0 , q1 , and q11 ). The default is q11 , but it is extremely slow, and still worse than xz . q1 looks very good though; the same compression ratio as gzip , but 4-5 times as fast! Update: Added lbzip2 (see gmatht's comment) and zstd (Johnny's comment) to the table, and sorted it by compression speed. lbzip2 puts the bzip2 family back in the running by compressing three times as fast as pbzip2 with a great compression ratio! zstd also looks reasonable but is beat by brotli (q1) in both ratio and speed. My original conclusion that plain gzip is the best bet is starting to look almost silly. Although for ubiquity, it still can't be beat ;) | {
"source": [
"https://serverfault.com/questions/872749",
"https://serverfault.com",
"https://serverfault.com/users/90324/"
]
} |
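Applying the comparison above back to the original pg_dumpall pipeline, a sketch using parallel compressors; it assumes pigz or lbzip2 is installed and that all 16 cores may be used:
pg_dumpall | pigz -p 16 > cluster-$(date --iso).sql.gz      # gzip-compatible output, 16 threads
pg_dumpall | lbzip2 -n 16 > cluster-$(date --iso).sql.bz2   # keeps the .bz2 format, 16 threads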
873,699 | I have an Amazon (AWS) Aurora DB cluster, and every day, its [Billed] Volume Bytes Used is increasing. I have checked the size of all my tables (in all my databases on that cluster) using the INFORMATION_SCHEMA.TABLES table: SELECT ROUND(SUM(data_length)/1024/1024/1024) AS data_in_gb, ROUND(SUM(index_length)/1024/1024/1024) AS index_in_gb, ROUND(SUM(data_free)/1024/1024/1024) AS free_in_gb FROM INFORMATION_SCHEMA.TABLES;
+------------+-------------+------------+
| data_in_gb | index_in_gb | free_in_gb |
+------------+-------------+------------+
| 30 | 4 | 19 |
+------------+-------------+------------+ Total: 53GB So why an I being billed almost 75GB at this time? I understand that provisioned space can never be freed, in the same way that the ibdata files on a regular MySQL server can never shrink; I'm OK with that. This is documented, and acceptable. My problem is that every day, the space I'm billed increases. And I'm sure I am NOT using 75GB of space temporarily. If I were to do something like that, I'd understand. It's as if the storage space I am freeing, by deleting rows from my tables, or dropping tables, or even dropping databases, is never re-used. I have contacted AWS (premium) support multiple times, and was never able to get a good explanation on why that is. I've received suggestions to run OPTIMIZE TABLE on the tables on which there is a lot of free_space (per the INFORMATION_SCHEMA.TABLES table), or to check the InnoDB history length, to make sure deleted data isn't still kept in the rollback segment (ref: MVCC ), and restart the instance(s) to make sure the rollback segment is emptied. None of those helped. | There are multiple things at play here... Each table is stored in its own tablespace By default, the parameter group for Aurora clusters (named default.aurora5.6 ) defines innodb_file_per_table = ON . That means each table is stored in a separate file, on the Aurora storage cluster. You can see which tablespace is used for each of your tables using this query: SELECT name, space FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES; Note: I have not tried to change innodb_file_per_table to OFF . Maybe that would help..? Storage space freed by deleting tablespaces is NOT re-used Quoting AWS premium support: Due to the unique design of the Aurora Storage engine to increase its performance and fault tolerance Aurora does not have a functionality to defragment file-per-table tablespaces in the same way as standard MySQL. Currently Aurora unfortunately does not have a way to shrink tablespaces as standard MySQL does and all fragmented space are charged because it is included in VolumeBytesUsed. The reason that Aurora cannot reclaim the space of a dropped table in the same way as standard MySQL is that the data for the table is stored in a completely different way to a standard MySQL database with a single storage volume. If you drop a table or row in Aurora the space is not then reclaimed on Auroras cluster volume due to this complicated design. This inability to reclaim small amounts of storage space is a sacrifice made to get the additional performance gains of Auroras cluster storage volume and the greatly improved fault tolerance of Aurora. But there is some obscure way to re-use some of that wasted space... Again, quote AWS premium support: Once your total data set exceeds a certain size (approximately 160 GB) you can begin to reclaim space in 160 GB blocks for re-use e.g. if you have 400 GB in your Aurora cluster volume and DROP 160 GB or more of tables Aurora can then automatically re-use 160 GB of data. However it can be slow to reclaim this space. The reason for the large amount of data required to be freed at once is due to Auroras unique design as an enterprise scale DB engine unlike standard MySQL which cannot be used on this scale. OPTIMIZE TABLE is evil! Because Aurora is based on MySQL 5.6, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE , which rebuilds the table to update index statistics and free unused space in the clustered index. 
Effectively, along with innodb_file_per_table = ON , that means running an OPTIMIZE TABLE creates a new tablespace file, and deletes the old one. Since deleting a tablespace file doesn't free up the storage it was using, that means OPTIMIZE TABLE will always result in more storage being provisioned. Ouch! Ref: https://dev.mysql.com/doc/refman/5.6/en/optimize-table.html#optimize-table-innodb-details Using temporary tables By default, the parameter group for Aurora instances (named default.aurora5.6 ) defines default_tmp_storage_engine = InnoDB . That means every time I am creating a TEMPORARY table, it is stored, along with all my regular tables, on the Aurora storage cluster. That means new space is provisioned to hold those tables, thus increasing the total VolumeBytesUsed. The solution for this is simple enough: change the default_tmp_storage_engine parameter value to MyISAM . This will force Aurora to create the TEMPORARY tables on the instance's local storage. Of note: the instances' local storage is limited; see the Free Local Storage metric on CloudWatch to see how much storage your instances have. Larger (costlier) instances have more local storage. Ref: none yet; the current Amazon Aurora documentation doesn't mention this. I asked the AWS support team to update the documentation, and will update my answer if/once they do. | {
"source": [
"https://serverfault.com/questions/873699",
"https://serverfault.com",
"https://serverfault.com/users/61533/"
]
} |
873,708 | I am trying to configure OpenNMS to receive Syslog messages from an ASA. My syslogd-configuration file looks like so: <configuration
syslog-port="514"
new-suspect-on-message="false"
parser="org.opennms.netmgt.syslogd.CustonSyslogParser"
forwarding-regexp="((.+?) (.*))\r?\n?$"
matching-group-host="2"
matching-group-message="3"
/> The syslog messages arrive in this format: Sep 13 08:36:37 192.168.75.254 %ASA-4-106023: Deny tcp src outside:144.5.5.255/
56607 dst inside:192.168.75.102/23 by access-group "outside_access_in" [0x0, 0x0] With this config, I can get syslog messages into Opennms but they come through as indeterminate. It seems as though this regex cannot parse. When I test this regex in other websites like regex101.com it clearly says that there is not a match. I have created a regex that does match how I need: \b(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s*([\s\S]*) BUT when I add this to the config, I no longer get any Syslog Messages at all. Does anyone have an idea of how I make this happen. I have spent wayyy too much time on this as is. | There are multiple things at play here... Each table is stored in its own tablespace By default, the parameter group for Aurora clusters (named default.aurora5.6 ) defines innodb_file_per_table = ON . That means each table is stored in a separate file, on the Aurora storage cluster. You can see which tablespace is used for each of your tables using this query: SELECT name, space FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES; Note: I have not tried to change innodb_file_per_table to OFF . Maybe that would help..? Storage space freed by deleting tablespaces is NOT re-used Quoting AWS premium support: Due to the unique design of the Aurora Storage engine to increase its performance and fault tolerance Aurora does not have a functionality to defragment file-per-table tablespaces in the same way as standard MySQL. Currently Aurora unfortunately does not have a way to shrink tablespaces as standard MySQL does and all fragmented space are charged because it is included in VolumeBytesUsed. The reason that Aurora cannot reclaim the space of a dropped table in the same way as standard MySQL is that the data for the table is stored in a completely different way to a standard MySQL database with a single storage volume. If you drop a table or row in Aurora the space is not then reclaimed on Auroras cluster volume due to this complicated design. This inability to reclaim small amounts of storage space is a sacrifice made to get the additional performance gains of Auroras cluster storage volume and the greatly improved fault tolerance of Aurora. But there is some obscure way to re-use some of that wasted space... Again, quote AWS premium support: Once your total data set exceeds a certain size (approximately 160 GB) you can begin to reclaim space in 160 GB blocks for re-use e.g. if you have 400 GB in your Aurora cluster volume and DROP 160 GB or more of tables Aurora can then automatically re-use 160 GB of data. However it can be slow to reclaim this space. The reason for the large amount of data required to be freed at once is due to Auroras unique design as an enterprise scale DB engine unlike standard MySQL which cannot be used on this scale. OPTIMIZE TABLE is evil! Because Aurora is based on MySQL 5.6, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE , which rebuilds the table to update index statistics and free unused space in the clustered index. Effectively, along with innodb_file_per_table = ON , that means running an OPTIMIZE TABLE creates a new tablespace file, and deletes the old one. Since deleting a tablespace file doesn't free up the storage it was using, that means OPTIMIZE TABLE will always result in more storage being provisioned. Ouch! Ref: https://dev.mysql.com/doc/refman/5.6/en/optimize-table.html#optimize-table-innodb-details Using temporary tables By default, the parameter group for Aurora instances (named default.aurora5.6 ) defines default_tmp_storage_engine = InnoDB . 
That means every time I am creating a TEMPORARY table, it is stored, along with all my regular tables, on the Aurora storage cluster. That means new space is provisioned to hold those tables, thus increasing the total VolumeBytesUsed. The solution for this is simple enough: change the default_tmp_storage_engine parameter value to MyISAM . This will force Aurora to create the TEMPORARY tables on the instance's local storage. Of note: the instances' local storage is limited; see the Free Local Storage metric on CloudWatch to see how much storage your instances have. Larger (costlier) instances have more local storage. Ref: none yet; the current Amazon Aurora documentation doesn't mention this. I asked the AWS support team to update the documentation, and will update my answer if/once they do. | {
"source": [
"https://serverfault.com/questions/873708",
"https://serverfault.com",
"https://serverfault.com/users/435583/"
]
} |
874,407 | When setting up access control lists, what's the difference between 0.0.0.0/0 and ::/0 ? I'm seeing this for an AWS EC2 instance I'm setting up | 0.0.0.0/0 is the IPv4 everything - all possible IPv4 addresses. ::/0 is the IPv6 equivalent of that. You can, for example, allow IPv4 and disallow IPv6 or vice versa. @kasperd mentions: It should be noted that depending on implementation ::/0 can mean either all IPv6 addresses or all IPv4 and IPv6 addresses. That's because IPv4 addresses can be mapped into IPv6 addresses ::ffff:0:0/96 More info on IPv6 is here . | {
"source": [
"https://serverfault.com/questions/874407",
"https://serverfault.com",
"https://serverfault.com/users/321109/"
]
} |
874,779 | We have the following situation: My machine A gateway machine The target machine I have no root rights on both #2 and #3. I can also not really store information (no more then 200 MiB) on machine #2 (since it is ment to be a gateway into the rest of the network, not more then that). On machine #3 there is a folder, about 3 GiB in size, that I want to copy to local. I cannot SSH from #1 to #3, but I can SSH to #2 and then to #3. It is also not possible to set up a public private keypair between #2 and #3, but there is a keypair installed between #1 and #2. Normally I use the combination of SSH and tar to get this done: ssh name@host "tar cf - folder" > folder.tar But in this case that would require some sort of nesting, and I cannot seem to get this done. So, what would be a good way to get the data from #3 to #1? | You can create an SSH tunnel through machine2 then in another session connect to the tunnel. For example, open two CLI sessions on machine1. In the first session run the following: MACHINE1$ ssh -L 2022:MACHINE3:22 <user>@MACHINE2 In the second session run the following: MACHINE1 $ ssh -p 2022 <user>@localhost What's happening with the first command is a local port (2022 on machine1) is being tunneled to port 22 on machine3 using your SSH connection to machine2. With the second command you are connecting to the newly opened local port (2022) and it's like you're connecting directly to machine3. Now if you want to use your typical file transfer process you could do the following: ssh -p 2022 <user>@localhost "tar cf - /path/to/remote/directory/" > filename.tar Alternatively, you can familiarise yourself with rsync and do something like this instead: rsync -aHSv --progress -e 'ssh -p 2022' <user>@localhost:/path/to/remote/directory/ /path/to/local/directory/ Assuming the end goal isn't to get a tarball. | {
"source": [
"https://serverfault.com/questions/874779",
"https://serverfault.com",
"https://serverfault.com/users/161055/"
]
} |
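A possible shortcut for the tunnelling answer above: OpenSSH 7.3 and newer can express the same hop in a single command with -J (ProxyJump), assuming machine2 allows forwarding; user and host names below are placeholders:
ssh -J <user>@MACHINE2 <user>@MACHINE3 "tar cf - /path/to/remote/directory/" > folder.tar
rsync -aHSv --progress -e 'ssh -J <user>@MACHINE2' <user>@MACHINE3:/path/to/remote/directory/ /path/to/local/directory/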
874,936 | I recently changed my nginx config to redirect all http traffic to https (and all www traffic to no-www). Would it make sense to also add add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; to my server blocks as well? Or is that unneeded since I'm already redirecting all traffic? It would be great to know the pros (and cons, if any). In case relevant, my current virtual host configuration is: server {
server_name example.com www.example.com;
listen 80;
return 301 https://example.com$request_uri;
}
server {
server_name www.example.com;
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/cert_chain.crt;
... other SSL related config ...
return 301 https://example.com$request_uri;
}
server {
server_name example.com;
listen 443 ssl;
... other SSL related config ...
... remaining server configuration ...
} | HSTS tells the browser to always use https, rather than http. Adding that configuration may reduce the need for forwarding from http to https, so it may very slightly increase website performance and very slightly decrease server load. For reference, here are the security headers I use on my Nginx-based websites. I save them to a single file and include it from all servers that need it, including http and https servers. It allows some common resources like Google and Facebook to load. # Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "default-src 'self' www.google-analytics.com ajax.googleapis.com www.google.com google.com gstatic.com www.gstatic.com connect.facebook.net facebook.com;";
add_header X-XSS-Protection "1; mode=block";
add_header Referrer-Policy "origin"; Clarification You still need the http to https redirection in place. | {
"source": [
"https://serverfault.com/questions/874936",
"https://serverfault.com",
"https://serverfault.com/users/321109/"
]
} |
874,954 | I've got two WAN interfaces coming into a Debian 8 VM. WAN 1 - All Internet and local traffic. (0.0.0.0/0) Has a static IP, thus IP, netmask and gateway are fixed values. WAN 2 - Specific private subnet traffic only (10.100.0.0/16). IP obtained via DHCP, can be anywhere in the 10.0.0.0/8 range. I don't have control over WAN2 (The link is supplied by the ISP) so I am faced with a dual gateway situation. Right now, here is how I have it set up. iface eth0 inet static
address 172.16.100.100
netmask 255.255.255.0
gateway 172.16.100.1
iface eth1 inet dhcp I then manually bring up eth1, obtain the DHCP gateway IP, then set a static route for 10.100.0.0/16 manually. This works fine, of course, until the DHCP lease renews, which is about every 4 days. At which point I have to bring down eth1, bring it back up, note the new gateway and set the new static route. I've tried setting a static route to 10.100.0.0/16 via eth1, but without any knowledge of the next-hop gateway IP.. of course that doesn't work. I've also tried several iproute2 setups but it still boils down to knowing the next-hop address it would seem. What i'm trying to solve - How can I set a static route for eth1 given that I have no knowledge of the next-hop address as it constantly changes via DHCP? | HSTS tells the browser to always use https, rather than http. Adding that configuration may reduce the need for forwarding from http to https, so it may very slightly increase website performance and very slightly decrease server load. For reference, here's the security headers I use on my Nginx based websites. I save this to a single file and include it from all servers that need it, including http and https servers. It allows some common resources like Google and Facebook to load. # Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "default-src 'self' www.google-analytics.com ajax.googleapis.com www.google.com google.com gstatic.com www.gstatic.com connect.facebook.net facebook.com;";
add_header X-XSS-Protection "1; mode=block";
add_header Referrer-Policy "origin"; Clarification You still need the http to https redirection in place. | {
"source": [
"https://serverfault.com/questions/874954",
"https://serverfault.com",
"https://serverfault.com/users/436647/"
]
} |
875,140 | I'm writing an nginx config, and I have a fundamental question. What are the differences among: listen 443 ssl; vs listen [::]:443 ssl; vs listen [::]:443 ssl http2; My goal is to secure this web application, but also to remain compatible with old clients. Note: I understand that [::]:443 has to do with ipv6, but does it encompass ipv4 as well in this case? I want to clear up my concepts. | listen 443 ssl : makes nginx listen on all ipv4 addresses on the server, on port 443 ( 0.0.0.0:443 ) while listen [::]:443 ssl : makes nginx listen on all ipv6 addresses on the server, on port 443 ( :::443 ) [::]:443 will not make nginx respond on ipv4 by default, unless you specify the parameter ipv6only=off : listen [::]:443 ipv6only=off; As per the doc : http://nginx.org/en/docs/http/ngx_http_core_module.html#listen ssl : The ssl parameter (0.7.14) allows specifying that all connections
accepted on this port should work in SSL mode. http2 : The http2 parameter (1.9.5) configures the port to accept HTTP/2 connections. This doesn't mean it accepts only HTTP/2 connections. As per RFC7540 A client that makes a request for an "http" URI without prior
knowledge about support for HTTP/2 on the next hop uses the HTTP
Upgrade mechanism. The client does so by making an HTTP/1.1 request
that includes an Upgrade header field with the "h2c" token. A server
that does not support HTTP/2 can respond to the request as though the
Upgrade header field were absent. HTTP/1.1 200 OK Content-Length: 243 Content-Type: text/html A server that supports HTTP/2
accepts the upgrade with a 101 (Switching Protocols) response. After
the empty line that terminates the 101 response, the server can begin
sending HTTP/2 frames. To summarize : A client that does not support HTTP/2 will never ask the server for an
HTTP/2 communication upgrade : the communication between them will be fully
HTTP1/1. A client that supports HTTP/2 will ask the server (using HTTP1/1) for an HTTP/2 upgrade : If the server is HTTP/2 ready, then the server will notify the client as such : the communication between them will be switched to HTTP/2. If the server is not HTTP/2 ready, then the server will ignore the upgrade request and answer with HTTP1/1 : the communication between them will stay fully HTTP1/1. This is perhaps better summarized here : http://qnimate.com/http2-compatibility-with-old-browsers-and-servers/ However the nginx doc states the following about HTTP/2 over TLS :
“Application-Layer Protocol Negotiation” (ALPN) TLS extension support,
which is available only since OpenSSL version 1.0.2. Make sure old clients are compliant with this requirement. | {
"source": [
"https://serverfault.com/questions/875140",
"https://serverfault.com",
"https://serverfault.com/users/321109/"
]
} |
875,229 | I am trying to configure two-way SSL with SSL certs (for server and client) signed by Intermediate CAs. This is what I have done so far following this tutorial . Server - nginx application Nginx is configured with SSL certificate (signed by an Intermediate CA). server {
listen 443;
server_name app-ca.test.com;
ssl on;
ssl_certificate /root/ca/intermediate/certs/app-plus-intermediate.pem;
ssl_certificate_key /root/ca/intermediate/private/app-ca-interm-ca.test.com.key.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# I have also tried adding the Intermediate CA cert in vain
# ssl_client_certificate /root/client_rootca_intermediate.crt;
ssl_client_certificate /root/client_rootca.crt;
ssl_verify_client on;
location / {
root /usr/share/nginx/massl;
index index.html index.htm;
}
} Client - curl or OpenSSL s_client I have a client certificate signed by some other Intermediate CA, which fails with 400 The SSL certificate error I have also tried to pass ( -cert option in openssl command) Client's Intermediate CA and Root CA along with the client certificate in vain. $ cat /root/ca/intermediate/certs/client.cert.pem /root/ca/intermediate/certs/intermediate.cert.pem > /root/ca/intermediate/certs/client_plus_intermediate.cert.pem
$ cat /root/ca/intermediate/certs/client.cert.pem /root/ca/intermediate/certs/intermediate.cert.pem > /root/ca/intermediate/certs/intermediate_plus_client.cert.pem
$ cat /root/ca/intermediate/certs/client.cert.pem /root/ca/intermediate/certs/intermediate.cert.pem /root/ca/certs/ca.cert.pem > /root/ca/intermediate/certs/client_plus_intermediate_plus_ca.cert.pem Short Error Logs <html>
<head><title>400 The SSL certificate error</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
<hr><center>nginx/1.13.5</center>
</body>
</html> Long Error Logs I have shortened the longs hashes for brevity. $ openssl s_client -connect app-ca.test.com:443 -tls1 -key /root/ca/intermediate/private/client.key.pem -cert /root/ca/intermediate/certs/client.cert.pem -CAfile /root/server_rootca.crt -state -debug
CONNECTED(00000003)
SSL_connect:before/connect initialization
write to 0x2239a90 [0x226e3c3] (181 bytes => 181 (0xB5))
0000 - 16 03 01 00 b0 01 00 00-ac 03 01 16 ed fa 81 3e ...............>
0010 - fc 25 c1 55 73 8a ca 5f-d3 56 11 a6 0f 38 6e 3c .%.Us.._.V...8n<
0020 - 52 fb 1f 9b fb 4f 4f 3e-5a fb 82 00 00 64 c0 14 R....OO>Z....d..
0090 - 00 ff 01 00 00 1f 00 0b-00 04 03 00 01 02 00 0a ................
00a0 - 00 0a 00 08 00 17 00 19-00 18 00 16 00 23 00 00 .............#..
00b0 - 00 0f 00 01 01 .....
SSL_connect:SSLv3 write client hello A
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 16 03 01 00 42 ....B
read from 0x2239a90 [0x2269e78] (66 bytes => 66 (0x42))
0000 - 02 00 00 3e 03 01 6f e5-89 1d bd 5a 58 26 d7 11 ...>..o....ZX&..
0010 - 8a 05 fd 2a 04 96 58 2e-2e 19 a7 89 46 a0 5b 21 ...*..X.....F.[!
0020 - c3 90 1c 3e 0b e6 00 c0-14 00 00 16 ff 01 00 01 ...>............
0030 - 00 00 0b 00 04 03 00 01-02 00 23 00 00 00 0f 00 ..........#.....
0040 - 01 01 ..
SSL_connect:SSLv3 read server hello A
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 16 03 01 0c ab .....
read from 0x2239a90 [0x2269e78] (3243 bytes => 3243 (0xCAB))
0000 - 0b 00 0c a7 00 0c a4 00-06 64 30 82 06 60 30 82 .........d0..`0.
0c90 - 5f b6 c7 86 5d 41 b3 fb-9c fe d3 0a 26 01 f9 d9 _...]A......&...
0ca0 - a6 ae 7f ff 4f c7 0b e8-97 b3 1c ....O......
depth=2 C = GB, ST = England, L = Melbourne, O = Alice Ltd, OU = IT Services, CN = server-and-ca.test.com, emailAddress = [email protected]
verify return:1
depth=1 C = GB, ST = England, O = Alice Ltd, OU = Shared Services, CN = server-and-interm-ca.test.com, emailAddress = [email protected]
verify return:1
depth=0 C = US, ST = California, L = Mountain View, O = Alice Ltd, OU = Alice Ltd Web Services, CN = app-ca-interm-ca.test.com, emailAddress = [email protected]
verify return:1
SSL_connect:SSLv3 read server certificate A
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 16 03 01 01 4b ....K
read from 0x2239a90 [0x2269e78] (331 bytes => 331 (0x14B))
0000 - 0c 00 01 47 03 00 17 41-04 13 5d 81 04 36 18 e7 ...G...A..]..6..
0010 - da bf 5e 30 dd d8 ee 77-f9 56 aa 77 8b 9e cd 3e ..^0...w.V.w...>.
0110 - d1 82 65 0f 5d 9c 03 ba-5f 7f 62 33 a8 a6 62 8e ..e.]..._.b3..b.
0120 - f2 5c 03 1d 4d 47 04 16-cb 80 09 39 32 be ca 23 .\..MG.....92..#
0130 - 41 95 36 a6 4b 6b f0 6c-df a5 4b 26 d4 4a c5 f3 A.6.Kk.l..K&.J..
0140 - 99 0d c8 d8 aa 5d f8 88-86 b3 15 .....].....
SSL_connect:SSLv3 read server key exchange A
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 16 03 01 00 bc .....
read from 0x2239a90 [0x2269e78] (188 bytes => 188 (0xBC))
0000 - 0d 00 00 b4 03 01 02 40-00 ae 00 ac 30 81 a9 31 [email protected]
0010 - 0b 30 09 06 03 55 04 06-13 02 47 42 31 10 30 0e .0...U....GB1.0.
0090 - 06 09 2a 86 48 86 f7 0d-01 09 01 16 1b 72 6f 6f ..*.H........roo
00a0 - 74 40 63 6c 69 65 6e 74-2d 61 6e 64 2d 63 61 2e t@client-and-ca.
00b0 - 74 65 73 74 2e 63 6f 6d-0e test.com.
00bc - <SPACES/NULS>
SSL_connect:SSLv3 read server certificate request A
SSL_connect:SSLv3 read server done A
write to 0x2239a90 [0x2273910] (1593 bytes => 1593 (0x639))
0000 - 16 03 01 06 34 0b 00 06-30 00 06 2d 00 06 2a 30 ....4...0..-..*0
0010 - 82 06 26 30 82 04 0e a0-03 02 01 02 02 02 10 00 ..&0............
05f0 - 29 2a 6c 40 d1 ed 8f 6d-15 b2 cd 6a 7b 72 30 91 )*[email protected]{r0.
0600 - ea 29 16 48 f2 11 21 15-3a 50 32 8b 95 87 b8 09 .).H..!.:P2.....
0610 - 11 84 9a a4 d2 b8 46 33-7a a2 79 51 ba 23 8c 96 ......F3z.yQ.#..
0620 - 45 62 2e b9 f5 ea 23 79-53 e0 cb 72 1f e6 19 d4 Eb....#yS..r....
0630 - 75 18 a8 2e 44 2f f3 8b-a7 u...D/...
SSL_connect:SSLv3 write client certificate A
write to 0x2239a90 [0x2273910] (75 bytes => 75 (0x4B))
0000 - 16 03 01 00 46 10 00 00-42 41 04 b9 b3 02 d2 bc ....F...BA......
0010 - e2 8b 49 a7 f6 8c 59 66-fc 0e 39 79 c7 23 34 e9 ..I...Yf..9y.#4.
0020 - 3e 04 98 3a 60 78 1d aa-51 06 46 80 09 10 c4 7e >..:`x..Q.F....~
0030 - a5 e7 05 d1 82 f2 0d bb-9a ca e7 29 01 0b 88 6d ...........)...m
0040 - ed c3 52 73 b1 d4 3a 95-00 e8 ..Rs..:...
004b - <SPACES/NULS>
SSL_connect:SSLv3 write client key exchange A
write to 0x2239a90 [0x2273910] (267 bytes => 267 (0x10B))
0000 - 16 03 01 01 06 0f 00 01-02 01 00 5e 29 8e 7c 69 ...........^).|i
0010 - 1e 10 0d 01 39 35 db 18-7e 4a a7 12 ae 12 7e f0 ....95..~J....~.
0020 - d6 93 c5 0a ba 5d e4 f1-a4 ae 8f c4 7d 52 80 16 .....]......}R..
00f0 - 6f 1f 56 73 bc ab 7f 07-1d f7 b4 ec d7 58 57 cd o.Vs.........XW.
0100 - cd e0 37 b3 58 09 3a 75-93 02 ab ..7.X.:u...
SSL_connect:SSLv3 write certificate verify A
write to 0x2239a90 [0x2273910] (6 bytes => 6 (0x6))
0000 - 14 03 01 00 01 01 ......
SSL_connect:SSLv3 write change cipher spec A
write to 0x2239a90 [0x2273910] (53 bytes => 53 (0x35))
0000 - 16 03 01 00 30 24 90 78-08 d3 10 f3 f8 e3 c8 86 ....0$.x........
0010 - 82 f1 54 d1 38 7b 57 7b-83 a3 49 b9 3b 80 b2 86 ..T.8{W{..I.;...
0020 - 54 74 92 ec 9a a7 e7 28-1a ec 72 4c 64 8e f3 e3 Tt.....(..rLd...
0030 - 08 96 89 2a 03 ...*.
SSL_connect:SSLv3 write finished A
SSL_connect:SSLv3 flush data
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 16 03 01 06 ea .....
read from 0x2239a90 [0x2269e78] (1770 bytes => 1770 (0x6EA))
0000 - 04 00 06 e6 00 00 01 2c-06 e0 09 8d 58 07 45 c9 .......,....X.E.
0010 - 58 49 42 f4 13 00 47 12-be 22 a2 e3 a0 b6 22 bd XIB...G.."....".
06d0 - a1 11 26 db 43 c8 6e 47-2f 40 65 61 e1 4e ef 0a ..&.C.nG/@ea.N..
06e0 - 57 e0 28 19 2d 0d c6 7f-ae 2e W.(.-.....
SSL_connect:SSLv3 read server session ticket A
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 14 03 01 00 01 .....
read from 0x2239a90 [0x2269e78] (1 bytes => 1 (0x1))
0000 - 01 .
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 16 03 01 00 30 ....0
read from 0x2239a90 [0x2269e78] (48 bytes => 48 (0x30))
0000 - 7d 5f 53 a4 5e 85 67 67-8d 6c d6 6e 93 cd c6 75 }_S.^.gg.l.n...u
0010 - c1 83 17 d9 a8 e3 89 23-86 6b 8a 04 2d 46 7e 95 .......#.k..-F~.
0020 - 15 46 a4 ec 73 f3 3d 78-1b 0e 94 62 79 cf 96 3d .F..s.=x...by..=
SSL_connect:SSLv3 read finished A
---
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Alice Ltd/OU=Alice Ltd Web Services/CN=app-ca-interm-ca.test.com/[email protected]
i:/C=GB/ST=England/O=Alice Ltd/OU=Shared Services/CN=server-and-interm-ca.test.com/[email protected]
1 s:/C=GB/ST=England/O=Alice Ltd/OU=Shared Services/CN=server-and-interm-ca.test.com/[email protected]
i:/C=GB/ST=England/L=Melbourne/O=Alice Ltd/OU=IT Services/CN=server-and-ca.test.com/[email protected]
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGYDCCBEigAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwgagxCzAJBgNVBAYTAkdC
MRAwDgYDVQQIDAdFbmdsYW5kMRIwEAYDVQQKDAlBbGljZSBMdGQxGDAWBgNVBAsM
zBcik+fj+MUtDzhEl6EuW1ILjAvt5u4KBxj6d0yAXzleACOYncYWWzMfQdrFmwKh
W2opZQ==
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Alice Ltd/OU=Alice Ltd Web Services/CN=app-ca-interm-ca.test.com/[email protected]
issuer=/C=GB/ST=England/O=Alice Ltd/OU=Shared Services/CN=server-and-interm-ca.test.com/[email protected]
---
Acceptable client certificate CA names
/C=GB/ST=England/L=Sydney/O=Something/OU=Shared Services/CN=client-and-ca.test.com/[email protected]
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 5682 bytes and written 2175 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 2AF7BFD60D3EC4686EAAAE1971FBD8999E65C5C80A32182CB9A668B1411DB09C
Session-ID-ctx:
Master-Key: B3F714B4ACB61C6310311025B25AFBAFA9E9AAEBB5ACD5FEEAE5DCAE2690DECBFA4EC5CBD2C8A50F349F43026CD0C564
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - 09 8d 58 07 45 c9 58 49-42 f4 13 00 47 12 be 22 ..X.E.XIB...G.."
0010 - a2 e3 a0 b6 22 bd 0d 71-c9 46 bd ab 84 85 06 f7 ...."..q.F......
06b0 - 66 76 1f 3e 49 23 dc 2b-be 9e d5 03 b8 a5 a1 7d fv.>I#.+.......}
06c0 - 4d 56 79 3f 81 78 a1 11-26 db 43 c8 6e 47 2f 40 MVy?.x..&.C.nG/@
06d0 - 65 61 e1 4e ef 0a 57 e0-28 19 2d 0d c6 7f ae 2e ea.N..W.(.-.....
Start Time: 1506251677
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
GET / HTTP/1.0
write to 0x2239a90 [0x226e3c6] (90 bytes => 90 (0x5A))
0000 - 17 03 01 00 20 ca 44 95-8c a0 32 52 4d da d8 02 .... .D...2RM...
0010 - db bd 97 88 0e e3 cb b9-9e fb 50 7e 71 24 37 83 ..........P~q$7.
0020 - f8 48 03 a0 a1 17 03 01-00 30 db 99 b2 0c 6c e6 .H.......0....l.
0030 - f4 25 3d 54 2f b1 a3 3c-be 2a 36 94 6c ce 6d 8d .%=T/..<.*6.l.m.
0040 - 3d 54 82 d3 f0 2a 40 3d-fc 3f 1b 3e 4a 40 10 e5 =T...*@=.?.>J@..
0050 - 1d eb ab 00 69 f1 e0 4a-27 47 ....i..J'G
write to 0x2239a90 [0x226e3c6] (74 bytes => 74 (0x4A))
0000 - 17 03 01 00 20 95 06 3d-51 d5 7c c2 05 ef a7 d6 .... ..=Q.|.....
0010 - 2b 25 9c dd ec 5f 7c c0-15 83 c6 ca ea 47 a1 b2 +%..._|......G..
0020 - 82 2d 46 7d 64 17 03 01-00 20 3b 2e 36 63 10 b3 .-F}d.... ;.6c..
0030 - 50 c7 ec 36 a4 27 a0 4d-db bb 83 b5 c6 e8 d5 fa P..6.'.M........
0040 - ca 76 dc e7 63 8f 94 b3-24 3f .v..c...$?
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 17 03 01 01 a0 .....
read from 0x2239a90 [0x2269e78] (416 bytes => 416 (0x1A0))
0000 - a6 8b c1 bb a4 aa 12 2e-81 d9 45 41 74 0e 33 a4 ..........EAt.3.
0190 - 37 be 58 ca 01 80 fc 7c-79 2b 3f 54 a4 cd 4a 07 7.X....|y+?T..J.
HTTP/1.1 400 Bad Request
Server: nginx/1.13.5
Date: Sun, 24 Sep 2017 11:14:49 GMT
Content-Type: text/html
Content-Length: 231
Connection: close
<html>
<head><title>400 The SSL certificate error</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
<hr><center>nginx/1.13.5</center>
</body>
</html>
read from 0x2239a90 [0x2269e73] (5 bytes => 5 (0x5))
0000 - 15 03 01 ...
0005 - <SPACES/NULS>
read from 0x2239a90 [0x2269e78] (32 bytes => 32 (0x20))
0000 - c3 75 ba 40 21 83 f7 0e-11 98 7b 44 84 bb 23 d5 .u.@!.....{D..#.
0010 - 80 32 1e 3e b6 b7 dd 4a-16 09 31 e9 62 a9 cd a3 .2.>...J..1.b...
SSL3 alert read:warning:close notify
closed
write to 0x2239a90 [0x226e3c3] (37 bytes => 37 (0x25))
0000 - 15 03 01 00 20 bd 18 f2-df 1b 84 fc 8e e0 80 a1 .... ...........
0010 - 2f 6f 31 b4 4c fc 1c e5-36 1f c5 fb 5d c0 f8 dc /o1.L...6...]...
0020 - 19 6b 03 c3 2d .k..-
SSL3 alert write:warning:close notify Interestingly, the above command works fine if I use certificates of Client's Root CA or Client's Intermediate CA. | Finally, I have pinned down the root cause of the problem. There were two problems with my setup. a) For two-way SSL, the certificate signed by the Intermediate CA must have clientAuth in extendedKeyUsage (Thanks to @dave_thompson_085) which can be verified by the below command $ openssl x509 -in /path/to/client/cert -noout -purpose | grep 'SSL client :'
SSL client : Yes b) Another thing which was missing was the ssl_verify_depth parameter in the nginx config file, which must be 2 or more. It does not make much sense to make the number bigger than 2 in my case, but it works with any number other than 1 (which is the default value). Interestingly, this is not required in nginx v1.12.X (my colleague with the exact same setup didn't have to specify this). However, it didn't work for me (nginx v1.13.5) until I used this parameter. I can have a sound sleep after 3 days of headbanging. TIP : Don't depend on curl much to troubleshoot two-way SSL issues, try openssl s_client instead. curl can give misleading results sometimes, see this . I too fumbled around for a while in my Ubuntu 16.04 docker container. | {
"source": [
"https://serverfault.com/questions/875229",
"https://serverfault.com",
"https://serverfault.com/users/213050/"
]
} |
875,247 | In Ansible 2.4, the include module is deprecated. In its place, it ships with two replacement modules, import_tasks and include_tasks . But they have very similar descriptions: include_tasks : Includes a file with a list of tasks to be executed in the current playbook. import_tasks : Imports a list of tasks to be added to the current playbook for subsequent execution. When should I use the former, and when should I use the latter? | There's quite a bit about this topic in the documentation: Includes vs. Imports Dynamic vs. Static The main difference is: All import* statements are pre-processed at the time playbooks are parsed. All include* statements are processed as they encountered during the execution of the playbook. So import is static, include is dynamic. From my experience, you should use import when you deal with logical "units". For example, separate long list of tasks into subtask files: main.yml: - import_tasks: prepare_filesystem.yml
- import_tasks: install_prerequisites.yml
- import_tasks: install_application.yml But you would use include to deal with different workflows and take decisions based on some dynamically gathered facts: install_prerequisites: - include_tasks: prerequisites_{{ ansible_os_family | lower }}.yml | {
"source": [
"https://serverfault.com/questions/875247",
"https://serverfault.com",
"https://serverfault.com/users/436935/"
]
} |
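To illustrate the dynamic side of the answer above, a sketch of include_tasks driven by a runtime condition and a loop (the file names and the app_instances variable are assumptions; conditions and loops like these are resolved at execution time, which is what import_tasks, being parsed up front, cannot do):
- include_tasks: "setup_{{ ansible_os_family | lower }}.yml"
  when: ansible_os_family in ['Debian', 'RedHat']
- include_tasks: deploy_instance.yml
  with_items: "{{ app_instances }}"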
875,930 | I have an Ubuntu Linux server allowing password authentication for SSH, and I want to switch it to SSH keys only and disable password login. Before I disable password login, how can I find out which users are still using passwords, and which have switched to key authentication? | You can't do that 100% reliably, but there are two strong indications: First, the presence of a .ssh/authorized_keys file is a hint the user is at least prepared to use key based login Second, in the authentication log file ( /var/log/secure on CentOS, /var/log/auth.log on Debian/Ubuntu), the auth method will be logged: Sep 28 13:44:28 hostname sshd[12084]: Accepted publickey for sven vs Sep 28 13:47:36 hostname sshd[12698]: Accepted password for sven Scan the log for entries with password mentioned to learn who is still using passwords. This will not work with users seldom logging in of course unless you have very long log retention. | {
"source": [
"https://serverfault.com/questions/875930",
"https://serverfault.com",
"https://serverfault.com/users/33449/"
]
} |
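Building on the answer above, a sketch that summarises which users still log in with passwords versus keys; it assumes the Debian/Ubuntu log path and the standard "Accepted password for USER from ..." line format (the username is the 9th whitespace-separated field):
grep 'Accepted password for' /var/log/auth.log | awk '{print $9}' | sort | uniq -c | sort -rn
grep 'Accepted publickey for' /var/log/auth.log | awk '{print $9}' | sort | uniq -c | sort -rn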
875,941 | +----------+
+-------+ Client 1 |
+--------------+ +---------------+ +----------+ +------------+ | +----------+
| Web Server +-------------+ Cisco ASA5585 +---------+ Internet +-----------+ StrongSwan +--------+ IP: 10.2.0.1
+--------------+ +---------------+ +----------+ +------------+ |
| +----------+
+-------+ Client 2 |
Internal Web server External IP: 1.1.1.1 External IP: 2.2.2.2 +----------+
https://some.webservice.net Internal IP: 10.1.0.1 IP: 10.3.0.1
192.168.0.1:443 Clients 1 and 2 are in different /20 subnets and need to access the internal web server on the remote side through the host to host IPSEC VPN tunnel between the StrongSwan server and a remote Cisco ASA device. We don't have any control over the remote side. We have routing in place to allow client 1 and client 2 to reach the StrongSwan server. We have the tunnel established between the StrongSwan server and the Cisco ASA device. We have IP forwarding enabled on the StrongSwan server. I'm trying to find out whether its feasible to use iptunnel to masquerade clients 1 and 2 as the StrongSwan server itself in order to allow them to access the internal web server at the remote side of the tunnel. | You can't do that 100% reliably, but there are two strong indications: First, the presence of a .ssh/authorized_keys file is a hint the user is at least prepared to use key based login Second, in the authentication log file ( /var/log/secure on CentOS, /var/log/auth.log on Debian/Ubuntu), the auth method will be logged: Sep 28 13:44:28 hostname sshd[12084]: Accepted publickey for sven vs Sep 28 13:47:36 hostname sshd[12698]: Accepted password for sven Scan the log for entries with password mentioned to learn who is still using passwords. This will not work with users seldom logging in of course unless you have very long log retention. | {
"source": [
"https://serverfault.com/questions/875941",
"https://serverfault.com",
"https://serverfault.com/users/347402/"
]
} |
876,233 | I have a critical application which is run as a service by systemd. It is set up to restart as soon as there is a failure. How do I send an email if the application restarts? | First you need two files: an executable for sending the mail and a .service for starting the executable. For this example, the executable is just a shell script using sendmail : /usr/local/bin/systemd-email:
#!/bin/bash
/usr/bin/sendmail -t <<ERRMAIL
To: $1
From: systemd <root@$HOSTNAME>
Subject: $2
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
$(systemctl status --full "$2")
ERRMAIL Whatever executable you use, it should probably take at least two arguments as this shell script does: the address to send to and the unit file to get the status of. The .service we create will pass these arguments: /etc/systemd/system/[email protected]:
[Unit]
Description=status email for %i to user
[Service]
Type=oneshot
ExecStart=/usr/local/bin/systemd-email address %i
User=nobody
Group=systemd-journal Where user is the user being emailed and address is that user's email address. Although the recipient is hard-coded, the unit file to report on is passed as an instance parameter, so this one service can send email for many other units. At this point you can start [email protected] to verify that you can receive the emails. Then simply edit the service you want emails for and add OnFailure=status-email-user@%n.service to the [Unit] section. %n passes the unit's name to the template. Source: archlinux wiki: systemd timers MAILTO | {
"source": [
"https://serverfault.com/questions/876233",
"https://serverfault.com",
"https://serverfault.com/users/303223/"
]
} |
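To make the last step of the answer above explicit, a sketch of how a monitored unit would reference the email template; myapp.service and its ExecStart path are placeholder names:
# /etc/systemd/system/myapp.service (excerpt)
[Unit]
Description=My critical application
OnFailure=status-email-user@%n.service
[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure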
876,893 | How do I make a domain record that can change frequently? Let's say example.org points to 203.0.113.0 . Two minutes later it has to point to 198.51.100.0 . It will be normal web sites behind the domain ("normal" only in the the sense of being accessed using common web browsers) but with very short lifespan. Domain will point to an address for 3-4 hours at most before it gets switched or shut down. There is no need to protect the DNS server from frequent queries. My approach would be to set TTL to 60 seconds and simply change the record when switch has to be made. In worst case scenario it would cause a user to wait for max 60 seconds before a new server is accessible. Somehow I don't trust in this... Some ISPs or browsers could ignore or override the TTL, couldn't they? If it's a valid concern what would be a reasonable TTL to be expected? Thank you! | This is called Fast Flux DNS Records . And it's usually how malware authors hide their infrastructure servers. While this will work for your plan, it's not the best plan. You will likely need to have a spare server or more, online, and doing nothing almost all the time. Only when you have an issue with the main server you would switch to the next one. Even if you have a TTL of 1 minute, one record will most likely be valid for more than that: Browser caches Browsers usually cache the DNS records for a variable amount of time. Firefox uses 60 seconds , Chrome uses 60 seconds too, IE 3.x and earlier cached for 24 hours , IE 4.x and above cached for 30 minutes. OS cache Windows will not usually honour the TTL . A TTL for DND is not like the TTL for a IPv4 packet. It's more a indication of freshness than a mandatory refresh. Linux can have nscd configured to set the amount of time the user wants, disregarding DNS TTL. It can cache entries for a week, for example. ISP cache ISPs can (and some will) use aggressive caching for decreasing the traffic. They can not only change TTL, but cache the records and return it to clients without even asking upstream DNS servers. This is more prevalent on mobile ISPs, as they change the TTL so mobile clients don't complain on traffic latency. A load balancer is made to do exactly what you want. With a load balancer in place, you can have 2 or 4 or 10 servers all online at the same time, dividing the load. If one of them goes offline, the service will not be affected. Changing DNS records will have a downtime between the time when the server goes off and the DNS is changed. It will take more than one minute, because you have to detect the downtime, change the records, and wait for them to propagate. So use a load balancer. It's made to do what you want, and you know exact what to expect. A fast flux DNS setup will have mixed and inconsistent results. | {
"source": [
"https://serverfault.com/questions/876893",
"https://serverfault.com",
"https://serverfault.com/users/381135/"
]
} |
877,695 | In Windows 10, the Windows Recovery Environment (WinRE) can be launched by repeatedly cutting power to the computer during the boot sequence. This allows an attacker with physical access to a desktop machine to gain administrative command-line access, at which point they can view and modify files, reset the administrative password using various techniques , and so on. (Note that if you launch WinRE directly, you must provide a local administrative password before it will give you command line access; this does not apply if you launch WinRE by repeatedly interrupting the boot sequence. Microsoft have confirmed that they do not consider this to be a security vulnerability.) In most scenarios this doesn't matter, because an attacker with unrestricted physical access to the machine can usually reset the BIOS password and gain administrative access by booting from removable media. However, for kiosk machines, in teaching labs, and so on, measures are usually taken to restrict physical access by, e.g., padlocking and/or alarming the machines. It would be very inconvenient to have to also try to block user access to both the power button and the wall socket. Supervision (either in person or via surveillance cameras) might be more effective, but someone using this technique would still be far less obvious than, e.g., someone attempting to open the computer case. How can the system administrator prevent WinRE from being used as a back door? Addendum: if you are using BitLocker, you are already partially protected from this technique; the attacker will not be able to read or modify files on the encrypted drive. It would still be possible for the attacker to wipe the disk and install a new operating system, or to use a more sophisticated technique such as a firmware attack. (As far as I am aware firmware attack tools are not yet widely available to casual attackers, so this is probably not an immediate concern.) | You can use reagentc to disable WinRE: reagentc /disable See the Microsoft documentation for additional command-line options. When WinRE is disabled in this way, the startup menus are still available, but the only option that is available is the Startup Settings menu, equivalent to the old F8 startup options. If you are carrying out unattended installations of Windows 10, and want WinRE to be disabled automatically during installation, delete the following file from the install image: \windows\system32\recovery\winre.wim The WinRE infrastructure is still in place (and can be re-enabled later using a copy of winre.wim and the reagentc command line tool) but will be disabled. Note that the Microsoft-Windows-WinRE-RecoveryAgent setting in unattend.xml does not appear to have any effect in Windows 10. (However, this might depend on which version of Windows 10 you are installing; I have only tested it on the LTSB branch of version 1607.) | {
"source": [
"https://serverfault.com/questions/877695",
"https://serverfault.com",
"https://serverfault.com/users/94065/"
]
} |
878,611 | Recently I saw the whois record for google.com , and it has none of the usual information such as the admin's contact details. It is extremely truncated: Domain Name: GOOGLE.COM
Registry Domain ID: 2138514_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.markmonitor.com
Registrar URL: http://www.markmonitor.com
Updated Date: 2011-07-20T16:55:31Z
Creation Date: 1997-09-15T04:00:00Z
Registry Expiry Date: 2020-09-14T04:00:00Z
Registrar: MarkMonitor Inc.
Registrar IANA ID: 292
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.2083895740
Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
Domain Status: serverDeleteProhibited https://icann.org/epp#serverDeleteProhibited
Domain Status: serverTransferProhibited https://icann.org/epp#serverTransferProhibited
Domain Status: serverUpdateProhibited https://icann.org/epp#serverUpdateProhibited
Name Server: NS1.GOOGLE.COM
Name Server: NS2.GOOGLE.COM
Name Server: NS3.GOOGLE.COM
Name Server: NS4.GOOGLE.COM
DNSSEC: unsigned Several other domains such as duolingo.com and even stackexchange.com are the same way. Why are these domains allowed to not have whois information? Is this something that anyone can access, for privacy protection? | Why are these domains allowed to not have whois information? Is this something that anyone can access, for privacy protection? TLDR: It’s not the case that these domains have somehow obtained an exemption from ICANN that allows them to omit certain data from public WHOIS records. It’s more likely the case that the WHOIS record you saw is not displaying the full set of records for google.com (or the other .com domain names). Thick and thin WHOIS lookups WHOIS data for Internet domains can be stored in one of two ways: a thick data store where each TLD registry keeps the complete WHOIS records for each sub-domain of the TLD. a thin model where the TLD registry delegates storage and maintenance of the WHOIS records to the registrar that was used by the registrant to register the domain. The WHOIS Wikipedia article explains the distinction between thick and thin WHOIS lookups and describes thin lookups as A Thin WHOIS server stores only the name of the WHOIS server of the
registrar of a domain, which in turn has the full details on the data being
looked up (such as the .com WHOIS servers, which refer the WHOIS query to
the registrar where the domain was registered). Lookups for .com ICANN has assigned Verisign as the registry to manage the .com domain name. A WHOIS query run on ICANN’s own WHOIS server, whois.iana.org lists whois.verisign-grs.com as the canonical WHOIS server to use for the .com domain. This is the default WHOIS server that is queried by whois clients when looking up details of .com domain names (the results of this query is what’s displayed in your question). As the .com domain uses the thin model, one of the keys (records) returned by a WHOIS lookup for a domain name is Registrar WHOIS Server . This key specifies the domain name of the WHOIS server that is responsible for listing the full details of the domain name in question: Registrar WHOIS Server: whois.markmonitor.com This key tells the whois client that it should actually query whois.markmonitor.com to get the full WHOIS records for the domain in question. It looks like the WHOIS result that you saw was as a result of not following this referral. One reason for not following WHOIS referrals One reason for the whois client to not follow the referral is that earlier this year, ICANN changed the names of keys that registry operators should use. Previous to this change, the name of the key used to specify the delegated server was Whois Server , and the output for google.com would have been: Whois Server: whois.markmonitor.com After domain name registries updated their WHOIS servers, any clients looking for the string, WHOIS Server: (with leading spaces) would not find it – and would thus be unable to determine the name of the registrar’s WHOIS server. Example client fix To reflect ICANN’s recent changes, the code for the Debian whois client was patched this July and released as version 5.2.17. However, (as of October 2017) most Debian-based distributions will still be using the previous code-base so users would have to explicitly provide the name of the responsible WHOIS server, e.g., whois -h whois.markmonitor.com google.com | {
"source": [
"https://serverfault.com/questions/878611",
"https://serverfault.com",
"https://serverfault.com/users/318441/"
]
} |
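As a rough illustration of the two-step thin-model lookup described in the answer above — a sketch only, assuming a standard whois client plus grep and awk; the registry server name and the "Registrar WHOIS Server" label are taken from that answer:

#!/bin/sh
# Follow the .com thin-model referral by hand: ask the registry first,
# then query the registrar's WHOIS server it points to.
domain="google.com"
registrar_server=$(whois -h whois.verisign-grs.com "$domain" \
  | grep -i 'Registrar WHOIS Server:' \
  | head -n 1 \
  | awk '{ print $NF }')
echo "Registrar WHOIS server: $registrar_server"
whois -h "$registrar_server" "$domain"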
879,393 | I've read about what characters usernames may use in Linux here: https://serverfault.com/a/578264/330936 but I would like to know whether there is any problem if I use the at sign "@" in my usernames. I will use it especially for my FTP accounts (I have a simple webserver with CentOS 7). I don't need to be portable to older versions of Linux, nor to other distros (maybe Debian). Is there any problem in using @ in usernames? | I'd say it isn't a good idea. I'd recommend sticking to this simple regex: ([a-z_][a-z0-9_]{0,30}) Check the following links: https://stackoverflow.com/questions/6949667/what-are-the-real-rules-for-linux-usernames-on-centos-6-and-rhel-6 https://unix.stackexchange.com/questions/157426/what-is-the-regex-to-validate-linux-users
"source": [
"https://serverfault.com/questions/879393",
"https://serverfault.com",
"https://serverfault.com/users/330936/"
]
} |
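A quick way to test a candidate name against the regex suggested above before creating the account — a small sketch, with a made-up example name:

name='ftp@example'
# Anchor the suggested pattern so the whole string has to match.
if printf '%s\n' "$name" | grep -Eq '^[a-z_][a-z0-9_]{0,30}$'; then
    echo "'$name' looks like a conventional username"
else
    echo "'$name' falls outside the conventional pattern (the @ is the problem here)"
fi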
881,517 | Since Kubernetes 1.8, it seems I need to disable swap on my nodes (or set --fail-swap-on to false ). I cannot find the technical reason why Kubernetes insists on swap being disabled. Is this for performance reasons? Security reasons? Why is the reason for this not documented? | The idea of Kubernetes is to tightly pack instances to as close to 100% utilized as possible. All deployments should be pinned with CPU/memory limits. So if the scheduler sends a pod to a machine, it should never need to use swap at all. You don't want to swap since it'll slow things down. It's mainly for performance. | {
"source": [
"https://serverfault.com/questions/881517",
"https://serverfault.com",
"https://serverfault.com/users/59848/"
]
} |
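For reference, a sketch of how swap is usually switched off on a node before running kubelet 1.8+, plus pinning a deployment's resources as the answer recommends — the fstab edit and the deployment name are assumptions, so review before applying:

# Turn swap off now and keep it off across reboots.
sudo swapoff -a
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab     # comment out typical swap entries
free -h                                        # Swap totals should read 0
# Pin CPU/memory so the scheduler can pack nodes predictably ("myapp" is illustrative).
kubectl set resources deployment/myapp \
    --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi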
883,073 | I have the latest NGINX from ppa installed on Ubuntu 16.04. nginx version: nginx/1.12.1 From my understanding, it should support stream and UDP load balancing. But I get this error message: nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/conf.d/load-balancer.conf:3 This is my config in /etc/nginx/conf.d/load-balancer.conf stream {
upstream backend {
least_conn;
server 172.31.9.51 fail_timeout=10s;
server 172.31.20.140 fail_timeout=10s;
}
server {
listen 500 udp;
listen 4500 udp;
proxy_pass backend;
proxy_timeout 1s;
proxy_responses 1;
error_log logs/dns.log;
}
} | stream needs to be on the same level as http block so like http { foo }
stream { bar } My guess is that your include for /etc/nginx/conf.d/*.conf is located in the http {} block and not outside of it. Check /etc/nginx/nginx.conf for that include; you may have to add a separate include for the stream section
"source": [
"https://serverfault.com/questions/883073",
"https://serverfault.com",
"https://serverfault.com/users/105362/"
]
} |
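One way to act on that advice — a sketch only; the stream.d directory name is invented, and it assumes the stock nginx.conf includes /etc/nginx/conf.d/*.conf inside its http {} block with nothing after the final closing brace:

# Keep stream configs out of the http-scoped conf.d include.
sudo mkdir -p /etc/nginx/stream.d
sudo mv /etc/nginx/conf.d/load-balancer.conf /etc/nginx/stream.d/
# Add a main-context include at the very end of nginx.conf (outside any block).
echo 'include /etc/nginx/stream.d/*.conf;' | sudo tee -a /etc/nginx/nginx.conf
sudo nginx -t && sudo systemctl reload nginx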
883,083 | (In order to skip the "are you sure" disk-space warning)
I tried apt-get install php && echo Y but it does not work. It still shows the warning. What can I do? Thanks! | stream needs to be on the same level as http block so like http { foo }
stream { bar } My guess is your include for /etc/nginx/conf.d/*.conf is located in the http {} block and not outside of it. Checkout the /etc/nginx/nginx.conf for the include and maybe you have to make a new one for the stream section | {
"source": [
"https://serverfault.com/questions/883083",
"https://serverfault.com",
"https://serverfault.com/users/443788/"
]
} |
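On the prompt itself: && echo Y only runs echo after apt-get has already finished, so it can never feed the question. A sketch of the usual non-interactive options on Debian/Ubuntu:

# Let apt-get answer its own prompts:
sudo apt-get install -y php
# Or pipe affirmative answers into the command rather than after it:
yes | sudo apt-get install php
# Fully unattended (also suppresses debconf questions):
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y php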
883,166 | I would like to ask a question about the fromhost message property. http://www.rsyslog.com/doc/v7-stable/configuration/properties.html I am using rsyslog 7.4.7 on RHEL 7.3 . However, the fromhost message property seems to set the hostname in lowercase letters even though uppercase letters are used for the hostname in /etc/hosts: /etc/hosts [root@RHEL73-1 log]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.12 RHEL73-1
10.0.2.13 RHEL73-test However, when log from remote log is received fromhost is set as lowercase letters . Debug line with all properties:
FROMHOST: 'rhel73-test', fromhost-ip: '10.0.2.13', HOSTNAME: 'RHEL73-2', PRI: 30,
syslogtag 'systemd:', programname: 'systemd', APP-NAME: 'systemd', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Nov 13 20:01:01', STRUCTURED-DATA: '-',
msg: ' Removed slice user-0.slice.'
escaped msg: ' Removed slice user-0.slice.'
inputname: imudp rawmsg: '<30>Nov 13 20:01:01 RHEL73-2 systemd: Removed slice user-0.slice.' Is the resolved hostname set as lowercase or uppercase? The documentation does not seem to mention this behavior... | stream needs to be on the same level as http block so like http { foo }
stream { bar } My guess is your include for /etc/nginx/conf.d/*.conf is located in the http {} block and not outside of it. Checkout the /etc/nginx/nginx.conf for the include and maybe you have to make a new one for the stream section | {
"source": [
"https://serverfault.com/questions/883166",
"https://serverfault.com",
"https://serverfault.com/users/316534/"
]
} |
883,697 | I have been told that you can get a longer lifespan out of an SSD if you buy a bigger-capacity SSD. The reasoning goes that newer SSDs have wear leveling and thus should sustain the same amount of writing whether you spread this writing on the (logical) disk or not. And if you get an SSD that is twice the size of what you need, then you have twice the capacity to do wear leveling on. Is there any truth to that? | This is true, and it was one of the key motivations backing the switch from SLC (fast and durable flash cells, but small capacity) to MLC (slower and less durable flash cells, but bigger capacity). To give you some ballpark numbers (on old 34nm tech): SLC drive: 100K P/E cycles (program-erase cycles) , 100 GB in size, 10 DWPD (Drive Writes Per Day) x 5y, total 1825 TBW (TeraBytes Written); MLC drive: 30K P/E cycles, 200 GB in size, 3 DWPD x 5y, total 1095 TBW. As you can see, while the MLC drive has less than 1/3 the P/E endurance, due to its bigger size, its total endurance (in Terabytes Written) is 60% of the SLC drive (rather than the expected 30%). An even higher endurance can be achieved with sufficient overprovisioning, bringing relative parity between the two disks. That said, SSDs rarely die due to NAND wear. Rather, controller and FTL (flash translation layer) bugs are what kill, or brick, flash-based solid state drives. Choosing an SSD, I would put a priority on these things: capacity: as space is never enough, do not underestimate your needs. Bigger disks are (often) also faster than smaller ones, due to more NAND chips available; power loss protection: if used for synchronous writes, be sure to buy a disk with powerloss-protected writeback caches; vendor track record: if used for enterprise workloads, do not buy "no-name" SSDs or "game oriented" models. Rather, go with a known and reliable vendor, such as Intel, Samsung, and Micron/Crucial.
"source": [
"https://serverfault.com/questions/883697",
"https://serverfault.com",
"https://serverfault.com/users/45704/"
]
} |
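The ballpark figures quoted above follow from simple arithmetic (capacity × drive-writes-per-day × days of warranty); a throwaway check:

# TBW ≈ size_GB * DWPD * 365 * 5 years / 1000 (GB -> TB)
awk 'BEGIN {
    printf "SLC: 100 GB * 10 DWPD * 5y = %.0f TBW\n", 100*10*365*5/1000;
    printf "MLC: 200 GB *  3 DWPD * 5y = %.0f TBW\n", 200*3*365*5/1000;
}'
# Prints 1825 and 1095, matching the answer's numbers.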
883,737 | I need to forward traffic from clients to a VPN server only for a specific subnet, i.e. 10.10.10.0/24. For example, if clients send requests to 123.123.123.123 then they will use their own Internet. If clients send requests to 10.10.10.123 then they will use a VPN connection. Is it possible to configure this with strongswan? Right now all traffic from clients is proxied through the VPN server. Here is my strongswan configuration: config setup
uniqueids=no
charondebug = ike 3, cfg 3
conn %default
dpdaction=clear
dpddelay=35s
dpdtimeout=2000s
keyexchange=ikev2
auto=add
rekey=no
reauth=no
fragmentation=yes
compress=yes
### left - local (server) side
# filename of certificate chain located in /etc/strongswan/ipsec.d/certs/
leftcert=fullchain.pem
leftsendcert=always
leftsubnet=0.0.0.0/0,::/0
### right - remote (client) side
eap_identity=%identity
rightsourceip=10.10.11.0/24,2a00:1450:400c:c05::/112
rightdns=8.8.8.8,2001:4860:4860::8888
conn ikev2-mschapv2
rightauth=eap-mschapv2
conn ikev2-mschapv2-apple
rightauth=eap-mschapv2
leftid=mydomain.com | This is true, and it was one of the key motivation to backing the switch from SLC (fast and durable flash cells, but small capacity) to MLC (slower and less durable flash cells, but bigger capacity). To give you some ballpark numbers (on old 34nm tech): SLC drive: 100K P/E cycles (program-erase cycles) , 100 GB in size, 10 DWPD (Drive Writes Per Day) x 5y, total 1825 TBW (TeraBytes Written); MLC drive: 30K P/E cycles, 200 GB in size, 3 DWPD x 5y, total 1095 TBW. As you can see, while the MLC drive as less than 1/3 the P/E endurance, due to its bigger size, its total endurance (in Terabyte Written) is 60% of the SLC drive (rather than the expected 30%). An even higher endurance can be achieved with sufficient overprovisioning, bringing relative parity between the two disks. That said, SSDs rarely die due to NAND wear. Rather, controller and FLT (flash translation layer) bugs are what kill, or brick, flash-based solid state drives. Choosing an SSD, I would put a priority on these things: capacity: as space is never enough, do not underestimate your needs. Bigger disks are (often) also faster than smaller ones, due to more NAND chips available; power loss protection: if used for synchronous writes, be sure to buy a disk with powerloss protected writeback caches; vendor track record: if used for enterprise workloads, do not buy "no-name" SSD or "game oriented" models. Rather, go with a know and reliable vendor, as Intel, Samsung, and Micron/Crucial. | {
"source": [
"https://serverfault.com/questions/883737",
"https://serverfault.com",
"https://serverfault.com/users/335036/"
]
} |
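On the split-tunnel question itself: with IKEv2 the traffic selector the server proposes comes from leftsubnet, so narrowing it from 0.0.0.0/0 to the target subnet is the usual server-side approach — a hedged sketch, since clients must honour the narrowed selector and some platforms need their own "route only matched traffic" setting as well:

# In /etc/ipsec.conf, advertise only the subnet that should go through the tunnel.
# (The sed below is just a shortcut for editing the file by hand; back it up first.)
sudo cp /etc/ipsec.conf /etc/ipsec.conf.bak
sudo sed -i 's#leftsubnet=0.0.0.0/0,::/0#leftsubnet=10.10.10.0/24#' /etc/ipsec.conf
sudo ipsec reload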
885,080 | I would like to know if there is a GPO or something else that lets me reach the Windows logon screen instead of being connected directly into a session through RDP? | Yes. On the server, you will have to allow RDP sessions with network level authentication disabled (which is in the control panel remote settings), and either your RDP client must be old enough to not support network level authentication (i.e. from WinXP or before) or you have to connect via a .rdp file that contains the option enablecredsspsupport:i:0 . Also some vulnerability scanners will try to connect with network level authentication disabled and take a screenshot of the login screen - which is useful to determine the OS edition, OS language, whether the machine is part of a domain, and (in some cases) some valid user names. | {
"source": [
"https://serverfault.com/questions/885080",
"https://serverfault.com",
"https://serverfault.com/users/332108/"
]
} |
885,117 | How can I disable all services except ssh on modern (systemd based) linux distributions? I need to implement a maintenance mode . All these services need to be down: postgres postfix apache cups cron dovecot But ssh must not be shut down, since it gets used to do tasks during the maintenance mode. Of course I could write a shell script which loops over a list of services which I would like to disable. But this feels like reinventing something which already exists and which I just don't know about yet. | This sounds a lot like runlevels , replaced with targets in Systemd. So, instead of writing a script that starts and stops a list of services, you could create a new maintenance.target containing only the services necessary, like SSH. Of course, SSH is not quite useful without networking, so in this example a simple emergency-net.target is modified to include SSH. [Unit]
Description=Maintenance Mode with Networking and SSH
Requires=maintenance.target systemd-networkd.service sshd.service
After=maintenance.target systemd-networkd.service sshd.service
AllowIsolate=yes Then, you could enter your maintenance mode using # systemctl isolate maintenance.target and back # systemctl isolate multi-user.target | {
"source": [
"https://serverfault.com/questions/885117",
"https://serverfault.com",
"https://serverfault.com/users/90324/"
]
} |
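A sketch of wiring such a target into place — the file name is made up and sysinit.target is substituted as the base dependency, so treat it as a starting point rather than a drop-in copy of the unit above:

sudo tee /etc/systemd/system/maintenance-ssh.target >/dev/null <<'EOF'
[Unit]
Description=Maintenance Mode with Networking and SSH
Requires=sysinit.target systemd-networkd.service sshd.service
After=sysinit.target systemd-networkd.service sshd.service
AllowIsolate=yes
EOF
sudo systemctl daemon-reload
sudo systemctl isolate maintenance-ssh.target   # enter maintenance mode
sudo systemctl isolate multi-user.target        # and back to normal operation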
885,133 | I am working with a website running on IIS8.5 and I am seeing a set of requests with what I will call "WS" request headers showing up in the serverVariables collection as follows: HTTP_WSHOST
HTTP_WSIP
HTTP_WS_IP
HTTP_WS_AUTH
HTTP_WS_VER
HTTP_X_WS_VER
HTTP_X_WS_EP_VER
HTTP_X_WS_AUTH
HTTP_X_WS_TSP_PROTOCOL_VERSION I have done some searching, and all I can come up with is that HTTP_WSHOST and HTTP_WSIP are used by DomainTools crawler, and that in general, they might be related to WebSockets. What are these headers commonly used for, and where might I find specs for each? | This sounds a lot like runlevels , replaced with targets in Systemd. So, instead of writing a script that starts and stop a list of services, you could create a new maintenance.target containing only the services necessary, like SSH. Of course, SSH is not quite useful without networking, so in this example a simple emergency-net.target is modified to include SSH. [Unit]
Description=Maintenance Mode with Networking and SSH
Requires=maintenance.target systemd-networkd.service sshd.service
After=maintenance.target systemd-networkd.service sshd.service
AllowIsolate=yes Then, you could enter your maintenance mode using # systemctl isolate maintenance.target and back # systemctl isolate multi-user.target | {
"source": [
"https://serverfault.com/questions/885133",
"https://serverfault.com",
"https://serverfault.com/users/360456/"
]
} |
885,621 | I'm using Ubuntu 16.04. Step 1) I logged into my root user account. Step 2) I used cd to navigate to a different user account's home directory. Step 3) I typed ls to examine the contents of that directory. Step 4) The contents came back as empty. Step 5) I typed mkdir .ssh to create a directory. Result) mkdir: cannot create directory '.ssh': File exists Question: Why is the directory listed as empty if an .ssh folder exists inside of it? -- update -- I logged into root because this is a test server. I'm repeatedly creating and destroying it. | ls by itself does not show hidden directories (hidden directories and files are ones that start with a . , such as .ssh ) Try using ls -a in the directory. From the ls manpage: -a, --all do not ignore entries starting with . As noted in the comments, "hidden" directories and files are not technically a thing, there is just code built into a lot of common tools that treat . and .. with special meaning, the result being that . is usually considered "hidden" by most tools. The reason I used this term is because it's common to hear it referred to that way. Additionally . and .. usually have special meaning to most filesystems, indicating current directory and parent directory, respectively. | {
"source": [
"https://serverfault.com/questions/885621",
"https://serverfault.com",
"https://serverfault.com/users/432825/"
]
} |
888,281 | When logging in via ssh, it can be seen the following on auth.log: Dec 14 16:29:30 app sshd[22781]: Accepted publickey for dev from XXX.XXX.XX.XXX port XXXXX ssh2: RSA SHA256:pO8i... I've been trying to figure out what is this SHA256 information, but I couldn't find anything that seems to match. First I thought it could be some information from the client (public key, fingerprint, hashed hostname etc) I'm connecting from, but I didn't find anything to confirm, neither at the server side. The closest information I've found is here , but I didn't understand when it says "And here is an example using a key for authentication. It shows the kewy (a misspelling, probably) fingerprint as a SHA256 hash in base64.", since I haven't found a corresponding key fingerprint of any kind. Thank you. | This is the SHA256 hash for the RSA public key which was used to authenticate the SSH session. This is how to verify it: ssh-keygen -lf .ssh/id_rsa.pub Or, to verify without ssh-keygen : Remove the ssh-rsa prefix Decode the key to bytes using base64 Get the SHA256 hash for the key (as bytes, not hex) Encode the bytes using base64 For example: cat .ssh/id_rsa.pub |
awk '{ print $2 }' | # Only the actual key data without prefix or comments
base64 -d | # decode as base64
sha256sum | # SHA256 hash (returns hex)
awk '{ print $1 }' | # only the hex data
xxd -r -p | # hex to bytes
base64 # encode as base64 | {
"source": [
"https://serverfault.com/questions/888281",
"https://serverfault.com",
"https://serverfault.com/users/448458/"
]
} |
888,487 | "We highly recommend that you never grant any kind of public access to your S3 bucket." I have set a very granular public policy (s3:GetObject) for one bucket that I use to host a website. Route53 explicitly supports aliasing a bucket for this purpose. Is this warning just redundant, or am I doing something wrong? | Yes, if you know what you're doing ( edit: and everyone else with access to it does, too...), you can ignore this warning. It exists because even large organizations who should know better have accidentally placed private data into public buckets. Amazon will also send you heads-up emails if you leave buckets public in addition to the in-console warnings. Accenture, Verizon, Viacom, Illinois voter information and military information has all been found inadvertently left open to everyone online due to IT bods misconfiguring their S3 silos. If you are absolutely, 100% certain that everything in the bucket should be public and that no one's going to accidentally put private data in it - a static HTML site's a good example - then by all means, leave it public. | {
"source": [
"https://serverfault.com/questions/888487",
"https://serverfault.com",
"https://serverfault.com/users/448643/"
]
} |
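For context, a read-only policy of the kind described — scoped to s3:GetObject on a single bucket — typically looks like the sketch below; the bucket name is a placeholder:

BUCKET=www.example-site.com
cat > /tmp/public-read.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::${BUCKET}/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket "$BUCKET" --policy file:///tmp/public-read.json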
890,370 | I'm on a red hat 7 machine, and I need to open all ports to a specific IP on the firewall. I tried this command: firewall-cmd --permanent --zone=public --add-rich-rule=' rule family="ipv4" source address="64.39.96.0/20" port protocol="tcp" port="*" accept' But I'm getting an invalid port error for the * Does anyone know and can tell me how to do this correctly? | Use a firewalld zone for this. Zones can be specified either by interface or by source IP address. In fact, by default, a zone which accepts all traffic already exists, and it is named trusted . By default, though, nothing is in this zone. So, you don't even need to create a zone, just add the IP address to the trusted zone. firewall-cmd --zone=trusted --add-source=64.39.96.0/20 In addition to CIDR ranges, you can specify single IP addresses or ipset names prefixed with ipset: . After this, all traffic from the specified addresses will be allowed on any port. Remember to make it permanent , either by repeating the command with --permanent appended, or by running firewall-cmd --runtime-to-permanent . | {
"source": [
"https://serverfault.com/questions/890370",
"https://serverfault.com",
"https://serverfault.com/users/99201/"
]
} |
890,372 | According to this documentation , I should be able to retire an application while it is still deployed, the purpose being that those deployments are preserved.
When I try to retire an application in SCCM v1706, it tells me: "Configuration Manager cannot retire this application because other
applications or task sequences reference it or it is configured as a
deployment." There are three deployments for this application - no task sequences etc refence it. So is the documentation faulty or am I missing something here? edit: as expected from the above error message I am able to retire the application as soon as I delete all deployments for it. So I guess the functionality of retiring applications (in SCCM 2012 this was "enable/disable" iirc) was changed at some point in Current Branch without adapting relevant documentation? edit 2: I also posted this question in the Microsoft-forums and got an answer there which came as close to an explanation as can be for this topic: seems the documentation mentioned above is simply a little vague on the topic of deployments when retiring applications. The answer seems to be that retiring applications is not intended to preserve the deployment configuration, but it rather means that clients currently running that applications are not prompted to uninstall it. | Use a firewalld zone for this. Zones can be specified either by interface or by source IP address. In fact, by default, a zone which accepts all traffic already exists, and it is named trusted . By default, though, nothing is in this zone. So, you don't even need to create a zone, just add the IP address to the trusted zone. firewall-cmd --zone=trusted --add-source=64.39.96.0/20 In addition to CIDR ranges, you can specify single IP addresses or ipset names prefixed with ipset: . After this, all traffic from the specified addresses will be allowed on any port. Remember to make it permanent , either by repeating the command with --permanent appended, or by running firewall-cmd --runtime-to-permanent . | {
"source": [
"https://serverfault.com/questions/890372",
"https://serverfault.com",
"https://serverfault.com/users/264905/"
]
} |
890,381 | Is there an easy way to determine where a specific yum group is sourced? I can query what groups are available using yum grouplist . I can query group information using yum group info $yum_group_name What's troubling me is I can't determine which repository a group is being sourced from. The best I've done is find what repositories hold the group: yum_group_name="....." # or ID
# find all repository identifiers
# perform yum commands with only 1 repository enabled
cat /etc/yum.repos.d/* | grep '\[.*\]' | grep -v '#' | tr -d '[]' | xargs -I {} -t sh -c "yum --disablerepo='*' --enablerepo='{}' group info $yum_group_name 2>&1 | grep 'Group:'"
# subsequently, associate a bareurl to repository identifier Say a group exists in multiple repositories, how do I know which one is used? | Use a firewalld zone for this. Zones can be specified either by interface or by source IP address. In fact, by default, a zone which accepts all traffic already exists, and it is named trusted . By default, though, nothing is in this zone. So, you don't even need to create a zone, just add the IP address to the trusted zone. firewall-cmd --zone=trusted --add-source=64.39.96.0/20 In addition to CIDR ranges, you can specify single IP addresses or ipset names prefixed with ipset: . After this, all traffic from the specified addresses will be allowed on any port. Remember to make it permanent , either by repeating the command with --permanent appended, or by running firewall-cmd --runtime-to-permanent . | {
"source": [
"https://serverfault.com/questions/890381",
"https://serverfault.com",
"https://serverfault.com/users/133650/"
]
} |
890,904 | Like many of us, I spent yesterday updating a whole lot of systems to mitigate the Meltdown and Spectre attacks . As I understand it, it is necessary to install two packages and reboot: kernel-3.10.0-693.11.6.el7.x86_64
microcode_ctl-2.1-22.2.el7.x86_64 I have two CentOS 7 systems on which I've installed these packages and rebooted. According to Red Hat, I can check the status of mitigation by checking these sysctls and ensuring that they are all 1. However, on these systems, they are not all 1: # cat /sys/kernel/debug/x86/pti_enabled
1
# cat /sys/kernel/debug/x86/ibpb_enabled
0
# cat /sys/kernel/debug/x86/ibrs_enabled
0 And I can't set them to 1, either: # echo 1 > /sys/kernel/debug/x86/ibpb_enabled
-bash: echo: write error: No such device
# echo 1 > /sys/kernel/debug/x86/ibrs_enabled
-bash: echo: write error: No such device I confirmed that Intel microcode appears to have loaded on boot: # systemctl status microcode -l
● microcode.service - Load CPU microcode update
Loaded: loaded (/usr/lib/systemd/system/microcode.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2018-01-05 16:42:25 UTC; 9min ago
Process: 30383 ExecStart=/usr/bin/bash -c grep -l GenuineIntel /proc/cpuinfo | xargs grep -l -E "model[[:space:]]*: 79$" > /dev/null || echo 1 > /sys/devices/system/cpu/microcode/reload (code=exited, status=0/SUCCESS)
Main PID: 30383 (code=exited, status=0/SUCCESS)
Jan 05 16:42:25 makrura systemd[1]: Starting Load CPU microcode update...
Jan 05 16:42:25 makrura systemd[1]: Started Load CPU microcode update. Even dmesg seems to have confirmed it: [ 3.245580] microcode: CPU0 sig=0x50662, pf=0x10, revision=0xf
[ 3.245627] microcode: CPU1 sig=0x50662, pf=0x10, revision=0xf
[ 3.245674] microcode: CPU2 sig=0x50662, pf=0x10, revision=0xf
[ 3.245722] microcode: CPU3 sig=0x50662, pf=0x10, revision=0xf
[ 3.245768] microcode: CPU4 sig=0x50662, pf=0x10, revision=0xf
[ 3.245816] microcode: CPU5 sig=0x50662, pf=0x10, revision=0xf
[ 3.245869] microcode: CPU6 sig=0x50662, pf=0x10, revision=0xf
[ 3.245880] microcode: CPU7 sig=0x50662, pf=0x10, revision=0xf
[ 3.245924] microcode: CPU8 sig=0x50662, pf=0x10, revision=0xf
[ 3.245972] microcode: CPU9 sig=0x50662, pf=0x10, revision=0xf
[ 3.245989] microcode: CPU10 sig=0x50662, pf=0x10, revision=0xf
[ 3.246036] microcode: CPU11 sig=0x50662, pf=0x10, revision=0xf
[ 3.246083] microcode: CPU12 sig=0x50662, pf=0x10, revision=0xf
[ 3.246131] microcode: CPU13 sig=0x50662, pf=0x10, revision=0xf
[ 3.246179] microcode: CPU14 sig=0x50662, pf=0x10, revision=0xf
[ 3.246194] microcode: CPU15 sig=0x50662, pf=0x10, revision=0xf
[ 3.246273] microcode: Microcode Update Driver: v2.01 <[email protected]>, Peter Oruba I have an Intel CPU formerly code named Broadwell: processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 86
model name : Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
stepping : 2
microcode : 0xf
cpu MHz : 2499.921
cache size : 12288 KB
physical id : 0
siblings : 16
core id : 7
cpu cores : 8
apicid : 15
initial apicid : 15
fpu : yes
fpu_exception : yes
cpuid level : 20
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
bogomips : 3999.90
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management: The cpuid utility reports: # cpuid -1
Disclaimer: cpuid may not support decoding of all cpuid registers.
CPU:
vendor_id = "GenuineIntel"
version information (1/eax):
processor type = primary processor (0)
family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6)
model = 0x6 (6)
stepping id = 0x2 (2)
extended family = 0x0 (0)
extended model = 0x5 (5)
(simple synth) = Intel Xeon D-1500 (Broadwell-DE V1), 14nm
miscellaneous (1/ebx):
process local APIC physical ID = 0x9 (9)
cpu count = 0x10 (16)
CLFLUSH line size = 0x8 (8)
brand index = 0x0 (0)
brand id = 0x00 (0): unknown
feature information (1/edx):
x87 FPU on chip = true
virtual-8086 mode enhancement = true
debugging extensions = true
page size extensions = true
time stamp counter = true
RDMSR and WRMSR support = true
physical address extensions = true
machine check exception = true
CMPXCHG8B inst. = true
APIC on chip = true
SYSENTER and SYSEXIT = true
memory type range registers = true
PTE global bit = true
machine check architecture = true
conditional move/compare instruction = true
page attribute table = true
page size extension = true
processor serial number = false
CLFLUSH instruction = true
debug store = true
thermal monitor and clock ctrl = true
MMX Technology = true
FXSAVE/FXRSTOR = true
SSE extensions = true
SSE2 extensions = true
self snoop = true
hyper-threading / multi-core supported = true
therm. monitor = true
IA64 = false
pending break event = true
feature information (1/ecx):
PNI/SSE3: Prescott New Instructions = true
PCLMULDQ instruction = true
64-bit debug store = true
MONITOR/MWAIT = true
CPL-qualified debug store = true
VMX: virtual machine extensions = true
SMX: safer mode extensions = true
Enhanced Intel SpeedStep Technology = true
thermal monitor 2 = true
SSSE3 extensions = true
context ID: adaptive or shared L1 data = false
FMA instruction = true
CMPXCHG16B instruction = true
xTPR disable = true
perfmon and debug = true
process context identifiers = true
direct cache access = true
SSE4.1 extensions = true
SSE4.2 extensions = true
extended xAPIC support = true
MOVBE instruction = true
POPCNT instruction = true
time stamp counter deadline = true
AES instruction = true
XSAVE/XSTOR states = true
OS-enabled XSAVE/XSTOR = true
AVX: advanced vector extensions = true
F16C half-precision convert instruction = true
RDRAND instruction = true
hypervisor guest status = false
cache and TLB information (2):
0x63: data TLB: 1G pages, 4-way, 4 entries
0x03: data TLB: 4K pages, 4-way, 64 entries
0x76: instruction TLB: 2M/4M pages, fully, 8 entries
0xff: cache data is in CPUID 4
0xb5: instruction TLB: 4K, 8-way, 64 entries
0xf0: 64 byte prefetching
0xc3: L2 TLB: 4K/2M pages, 6-way, 1536 entries
processor serial number: 0005-0662-0000-0000-0000-0000
deterministic cache parameters (4):
--- cache 0 ---
cache type = data cache (1)
cache level = 0x1 (1)
self-initializing cache level = true
fully associative cache = false
extra threads sharing this cache = 0x1 (1)
extra processor cores on this die = 0x7 (7)
system coherency line size = 0x3f (63)
physical line partitions = 0x0 (0)
ways of associativity = 0x7 (7)
ways of associativity = 0x0 (0)
WBINVD/INVD behavior on lower caches = false
inclusive to lower caches = false
complex cache indexing = false
number of sets - 1 (s) = 63
--- cache 1 ---
cache type = instruction cache (2)
cache level = 0x1 (1)
self-initializing cache level = true
fully associative cache = false
extra threads sharing this cache = 0x1 (1)
extra processor cores on this die = 0x7 (7)
system coherency line size = 0x3f (63)
physical line partitions = 0x0 (0)
ways of associativity = 0x7 (7)
ways of associativity = 0x0 (0)
WBINVD/INVD behavior on lower caches = false
inclusive to lower caches = false
complex cache indexing = false
number of sets - 1 (s) = 63
--- cache 2 ---
cache type = unified cache (3)
cache level = 0x2 (2)
self-initializing cache level = true
fully associative cache = false
extra threads sharing this cache = 0x1 (1)
extra processor cores on this die = 0x7 (7)
system coherency line size = 0x3f (63)
physical line partitions = 0x0 (0)
ways of associativity = 0x7 (7)
ways of associativity = 0x0 (0)
WBINVD/INVD behavior on lower caches = false
inclusive to lower caches = false
complex cache indexing = false
number of sets - 1 (s) = 511
--- cache 3 ---
cache type = unified cache (3)
cache level = 0x3 (3)
self-initializing cache level = true
fully associative cache = false
extra threads sharing this cache = 0xf (15)
extra processor cores on this die = 0x7 (7)
system coherency line size = 0x3f (63)
physical line partitions = 0x0 (0)
ways of associativity = 0xb (11)
ways of associativity = 0x6 (6)
WBINVD/INVD behavior on lower caches = false
inclusive to lower caches = true
complex cache indexing = true
number of sets - 1 (s) = 16383
MONITOR/MWAIT (5):
smallest monitor-line size (bytes) = 0x40 (64)
largest monitor-line size (bytes) = 0x40 (64)
enum of Monitor-MWAIT exts supported = true
supports intrs as break-event for MWAIT = true
number of C0 sub C-states using MWAIT = 0x0 (0)
number of C1 sub C-states using MWAIT = 0x2 (2)
number of C2 sub C-states using MWAIT = 0x1 (1)
number of C3 sub C-states using MWAIT = 0x2 (2)
number of C4 sub C-states using MWAIT = 0x0 (0)
number of C5 sub C-states using MWAIT = 0x0 (0)
number of C6 sub C-states using MWAIT = 0x0 (0)
number of C7 sub C-states using MWAIT = 0x0 (0)
Thermal and Power Management Features (6):
digital thermometer = true
Intel Turbo Boost Technology = true
ARAT always running APIC timer = true
PLN power limit notification = true
ECMD extended clock modulation duty = true
PTM package thermal management = true
HWP base registers = false
HWP notification = false
HWP activity window = false
HWP energy performance preference = false
HWP package level request = false
HDC base registers = false
digital thermometer thresholds = 0x2 (2)
ACNT/MCNT supported performance measure = true
ACNT2 available = false
performance-energy bias capability = true
extended feature flags (7):
FSGSBASE instructions = true
IA32_TSC_ADJUST MSR supported = true
SGX: Software Guard Extensions supported = false
BMI instruction = true
HLE hardware lock elision = true
AVX2: advanced vector extensions 2 = true
FDP_EXCPTN_ONLY = false
SMEP supervisor mode exec protection = true
BMI2 instructions = true
enhanced REP MOVSB/STOSB = true
INVPCID instruction = true
RTM: restricted transactional memory = true
QM: quality of service monitoring = true
deprecated FPU CS/DS = true
intel memory protection extensions = false
PQE: platform quality of service enforce = true
AVX512F: AVX-512 foundation instructions = false
AVX512DQ: double & quadword instructions = false
RDSEED instruction = true
ADX instructions = true
SMAP: supervisor mode access prevention = true
AVX512IFMA: fused multiply add = false
CLFLUSHOPT instruction = false
CLWB instruction = false
Intel processor trace = true
AVX512PF: prefetch instructions = false
AVX512ER: exponent & reciprocal instrs = false
AVX512CD: conflict detection instrs = false
SHA instructions = false
AVX512BW: byte & word instructions = false
AVX512VL: vector length = false
PREFETCHWT1 = false
AVX512VBMI: vector byte manipulation = false
UMIP: user-mode instruction prevention = false
PKU protection keys for user-mode = false
OSPKE CR4.PKE and RDPKRU/WRPKRU = false
BNDLDX/BNDSTX MAWAU value in 64-bit mode = 0x0 (0)
RDPID: read processor D supported = false
SGX_LC: SGX launch config supported = false
AVX512_4VNNIW: neural network instrs = false
AVX512_4FMAPS: multiply acc single prec = false
Direct Cache Access Parameters (9):
PLATFORM_DCA_CAP MSR bits = 1
Architecture Performance Monitoring Features (0xa/eax):
version ID = 0x3 (3)
number of counters per logical processor = 0x4 (4)
bit width of counter = 0x30 (48)
length of EBX bit vector = 0x7 (7)
Architecture Performance Monitoring Features (0xa/ebx):
core cycle event not available = false
instruction retired event not available = false
reference cycles event not available = false
last-level cache ref event not available = false
last-level cache miss event not avail = false
branch inst retired event not available = false
branch mispred retired event not avail = false
Architecture Performance Monitoring Features (0xa/edx):
number of fixed counters = 0x3 (3)
bit width of fixed counters = 0x30 (48)
x2APIC features / processor topology (0xb):
--- level 0 (thread) ---
bits to shift APIC ID to get next = 0x1 (1)
logical processors at this level = 0x2 (2)
level number = 0x0 (0)
level type = thread (1)
extended APIC ID = 9
--- level 1 (core) ---
bits to shift APIC ID to get next = 0x4 (4)
logical processors at this level = 0x10 (16)
level number = 0x1 (1)
level type = core (2)
extended APIC ID = 9
XSAVE features (0xd/0):
XCR0 lower 32 bits valid bit field mask = 0x00000007
XCR0 upper 32 bits valid bit field mask = 0x00000000
XCR0 supported: x87 state = true
XCR0 supported: SSE state = true
XCR0 supported: AVX state = true
XCR0 supported: MPX BNDREGS = false
XCR0 supported: MPX BNDCSR = false
XCR0 supported: AVX-512 opmask = false
XCR0 supported: AVX-512 ZMM_Hi256 = false
XCR0 supported: AVX-512 Hi16_ZMM = false
IA32_XSS supported: PT state = false
XCR0 supported: PKRU state = false
bytes required by fields in XCR0 = 0x00000340 (832)
bytes required by XSAVE/XRSTOR area = 0x00000340 (832)
XSAVE features (0xd/1):
XSAVEOPT instruction = true
XSAVEC instruction = false
XGETBV instruction = false
XSAVES/XRSTORS instructions = false
SAVE area size in bytes = 0x00000000 (0)
IA32_XSS lower 32 bits valid bit field mask = 0x00000000
IA32_XSS upper 32 bits valid bit field mask = 0x00000000
AVX/YMM features (0xd/2):
AVX/YMM save state byte size = 0x00000100 (256)
AVX/YMM save state byte offset = 0x00000240 (576)
supported in IA32_XSS or XCR0 = XCR0 (user state)
64-byte alignment in compacted XSAVE = false
Quality of Service Monitoring Resource Type (0xf/0):
Maximum range of RMID = 63
supports L3 cache QoS monitoring = false
L3 Cache Quality of Service Monitoring (0xf/1):
Conversion factor from IA32_QM_CTR to bytes = 32768
Maximum range of RMID = 63
supports L3 occupancy monitoring = true
supports L3 total bandwidth monitoring = true
supports L3 local bandwidth monitoring = true
Resource Director Technology allocation (0x10/0):
L3 cache allocation technology supported = true
L2 cache allocation technology supported = false
L3 Cache Allocation Technology (0x10/1):
length of capacity bit mask - 1 = 0xb (11)
Bit-granular map of isolation/contention = 0x00000c00
infrequent updates of COS = true
code and data prioritization supported = false
highest COS number supported = 0xb (11)
0x00000011 0x00: eax=0x00000000 ebx=0x00000000 ecx=0x00000000 edx=0x00000000
SGX capability (0x12/0):
SGX1 supported = false
SGX2 supported = false
MISCSELECT.EXINFO supported: #PF & #GP = false
MaxEnclaveSize_Not64 (log2) = 0x0 (0)
MaxEnclaveSize_64 (log2) = 0x0 (0)
0x00000013 0x00: eax=0x00000000 ebx=0x00000000 ecx=0x00000000 edx=0x00000000
Intel Processor Trace (0x14):
IA32_RTIT_CR3_MATCH is accessible = true
configurable PSB & cycle-accurate = false
IP & TraceStop filtering; PT preserve = false
MTC timing packet; suppress COFI-based = false
PTWRITE support = false
power event trace support = false
IA32_RTIT_CTL can enable tracing = true
ToPA can hold many output entries = false
single-range output scheme = false
output to trace transport = false
IP payloads have LIP values & CS = false
extended feature flags (0x80000001/edx):
SYSCALL and SYSRET instructions = true
execution disable = true
1-GB large page support = true
RDTSCP = true
64-bit extensions technology available = true
Intel feature flags (0x80000001/ecx):
LAHF/SAHF supported in 64-bit mode = true
LZCNT advanced bit manipulation = true
3DNow! PREFETCH/PREFETCHW instructions = true
brand = "Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz"
L1 TLB/cache information: 2M/4M pages & L1 TLB (0x80000005/eax):
instruction # entries = 0x0 (0)
instruction associativity = 0x0 (0)
data # entries = 0x0 (0)
data associativity = 0x0 (0)
L1 TLB/cache information: 4K pages & L1 TLB (0x80000005/ebx):
instruction # entries = 0x0 (0)
instruction associativity = 0x0 (0)
data # entries = 0x0 (0)
data associativity = 0x0 (0)
L1 data cache information (0x80000005/ecx):
line size (bytes) = 0x0 (0)
lines per tag = 0x0 (0)
associativity = 0x0 (0)
size (KB) = 0x0 (0)
L1 instruction cache information (0x80000005/edx):
line size (bytes) = 0x0 (0)
lines per tag = 0x0 (0)
associativity = 0x0 (0)
size (KB) = 0x0 (0)
L2 TLB/cache information: 2M/4M pages & L2 TLB (0x80000006/eax):
instruction # entries = 0x0 (0)
instruction associativity = L2 off (0)
data # entries = 0x0 (0)
data associativity = L2 off (0)
L2 TLB/cache information: 4K pages & L2 TLB (0x80000006/ebx):
instruction # entries = 0x0 (0)
instruction associativity = L2 off (0)
data # entries = 0x0 (0)
data associativity = L2 off (0)
L2 unified cache information (0x80000006/ecx):
line size (bytes) = 0x40 (64)
lines per tag = 0x0 (0)
associativity = 8-way (6)
size (KB) = 0x100 (256)
L3 cache information (0x80000006/edx):
line size (bytes) = 0x0 (0)
lines per tag = 0x0 (0)
associativity = L2 off (0)
size (in 512KB units) = 0x0 (0)
Advanced Power Management Features (0x80000007/edx):
temperature sensing diode = false
frequency ID (FID) control = false
voltage ID (VID) control = false
thermal trip (TTP) = false
thermal monitor (TM) = false
software thermal control (STC) = false
100 MHz multiplier control = false
hardware P-State control = false
TscInvariant = true
Physical Address and Linear Address Size (0x80000008/eax):
maximum physical address bits = 0x2e (46)
maximum linear (virtual) address bits = 0x30 (48)
maximum guest physical address bits = 0x0 (0)
Logical CPU cores (0x80000008/ecx):
number of CPU cores - 1 = 0x0 (0)
ApicIdCoreIdSize = 0x0 (0)
(multi-processing synth): multi-core (c=8), hyper-threaded (t=2)
(multi-processing method): Intel leaf 0xb
(APIC widths synth): CORE_width=4 SMT_width=1
(APIC synth): PKG_ID=0 CORE_ID=4 SMT_ID=1
(synth) = Intel Xeon D-1500 (Broadwell-DE V1), 14nm The system is fully up to date: # yum upgrade
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: centos.mirror.colo-serv.net
* epel: mirror.steadfast.net
* extras: centos.mirror.colo-serv.net
* updates: centos.mirror.colo-serv.net
No packages marked for update I feel like I've missed something important, but at this point I really don't know what it could be. What's going on here? How do I get the system fully mitigated? I am also seeing the same behavior on Fedora 27 workstations, a desktop with a Core i7-3770 CPU and a laptop with a Core i7-7500U. | As noted in https://access.redhat.com/articles/3311301 CVE-2017-5715 (variant #2/Spectre) is an indirect branching poisoning attack that can lead to data leakage. This attack allows for a virtualized guest to read memory from the host system. This issue is corrected with microcode, along with kernel and virtualization updates to both guest and host virtualization software. This vulnerability requires both updated microcode and kernel patches. Variant #2 behavior is controlled by the ibrs and ibpb tunables (noibrs/ibrs_enabled and noibpb/ibpb_enabled), which work in conjunction with the microcode ... As noted, installing the microcode update for your hardware, if provided by the hardware vendor, is necessary to protect against variant 2. Please contact your hardware vendor for microcode updates. It seems that you also need a BIOS update to enable the mitigations for CVE-2017-5715. I read this elsewhere too earlier but can't find the reference right now. | {
"source": [
"https://serverfault.com/questions/890904",
"https://serverfault.com",
"https://serverfault.com/users/126632/"
]
} |
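A quick way to re-check status once the kernel, microcode and BIOS updates are in place — a sketch; the sysfs vulnerabilities directory only exists on kernels new enough to expose it, so fall back to the debugfs flags from the question on older builds:

# Newer kernels summarise mitigation state here (one file per issue):
grep -H . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null
# Fall back to the Red Hat debugfs tunables used in the question:
for f in pti_enabled ibpb_enabled ibrs_enabled; do
    printf '%s: ' "$f"; cat /sys/kernel/debug/x86/$f 2>/dev/null || echo 'not present'
done
# Confirm which microcode revision the CPU is actually running:
grep -m1 microcode /proc/cpuinfo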
891,265 | I use current Windows 10 with Powershell 5.1. Often, I want to look up commands I have used in the past to modify and/or re-run them. Inevitably, the commands I'm looking for were run in a previous or different PowerShell window/session. When I hammer the ↑ key, I can browse through many, many commands from many, many sessions, but when I try to search through them using Get-History | Where-Object {$_.CommandLine -Like "*docker cp*"} , I get no results. Basic troubleshooting reveals that Get-History doesn't show anything from previous sessions, as shown by: C:\Users\Me> Get-History
Id CommandLine
-- -----------
1 Get-History | Where-Object {$_.CommandLine -Like "*docker cp*"} How can I search through the previous commands that the ↑ key provides using Get-History or another Cmdlet? | The persistent history you mention is provided by PSReadLine . It is separate from the session-bound Get-History . The history is stored in a file defined by the property (Get-PSReadlineOption).HistorySavePath . View this file with Get-Content (Get-PSReadlineOption).HistorySavePath , or a text editor, etc. Inspect related options with Get-PSReadlineOption . PSReadLine also performs history searches via ctrl + r . Using your provided example: Get-Content (Get-PSReadlineOption).HistorySavePath | ? { $_ -like '*docker cp*' } | {
"source": [
"https://serverfault.com/questions/891265",
"https://serverfault.com",
"https://serverfault.com/users/193388/"
]
} |
891,487 | I was wondering if the following is possible with AWS offerings? https://www.example.com/a/ -> served by Apache on EC2 Instance A https://www.example.com/b/ -> served by Apache on EC2 Instance B To clarify, I do not want files under one directory path to be on the same server instance as files under the other directory path. I understand this may be possible with a proxy of some sort, but is there an easier solution with one of AWS offerings. The EC2 Load Balancer does not seem to allow switching based on directory path. Route 53 works at the DNS level, which does not have path information to return IPs based on that. | Use the AWS Application Load Balancer , which does Path Based Routing . That second link is a tutorial how to do it. In short, you set up your ALB as normal, then follow these steps (copied from the AWS tutorial): On the Listeners tab, use the arrow to view the rules for the listener, and then choose Add rule . Specify the rule as follows: For Target group name , choose the second target group that you created. For Path pattern specify the exact pattern to be used for path-based routing (for example, /img/*). For more information, see Listener Rules. Choose Save . | {
"source": [
"https://serverfault.com/questions/891487",
"https://serverfault.com",
"https://serverfault.com/users/451434/"
]
} |
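The same path rules can be scripted with the CLI — a sketch with placeholder ARNs; each target group would point at the instance serving that path:

LISTENER_ARN=arn:aws:elasticloadbalancing:...   # placeholder: the ALB listener
TG_A_ARN=arn:aws:elasticloadbalancing:...       # placeholder: target group for instance A
TG_B_ARN=arn:aws:elasticloadbalancing:...       # placeholder: target group for instance B
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority 10 \
    --conditions Field=path-pattern,Values='/a/*' \
    --actions Type=forward,TargetGroupArn="$TG_A_ARN"
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority 20 \
    --conditions Field=path-pattern,Values='/b/*' \
    --actions Type=forward,TargetGroupArn="$TG_B_ARN"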