source_id | question | response | metadata
---|---|---|---|
684,523 | I'd like to divert off requests to a particular sub-directory, to another root location. How? My existing block is: server {
listen 80;
server_name www.domain.com;
location / {
root /home/me/Documents/site1;
index index.html;
}
location /petproject {
root /home/me/pet-Project/website;
index index.html;
rewrite ^/petproject(.*)$ /$1;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
} } That is, http://www.domain.com should serve /home/me/Documents/site1/index.html whereas http://www.domain.com/petproject should serve /home/me/pet-Project/website/index.html -- it seems that nginx re-runs all the rules after the replacement, and http://www.domain.com/petproject just serves /home/me/Documents/site1/index.html. | The configuration has a common nginx pitfall: using the root directive inside a location block. Try using this configuration instead of your current location blocks: root /home/me/Documents/site1;
index index.html;
location /petproject {
alias /home/me/pet-Project/website;
} This means that the default directory for your website is /home/me/Documents/site1, and for the /petproject URI, the content is served from the /home/me/pet-Project/website directory. | {
"source": [
"https://serverfault.com/questions/684523",
"https://serverfault.com",
"https://serverfault.com/users/203815/"
]
} |
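A minimal way to verify the root-vs-alias fix after reloading nginx (the domain and directories come from the question; assumes index.html exists in both roots):
  sudo nginx -t && sudo nginx -s reload
  curl -sI http://www.domain.com/ | head -n 1            # expect 200, content from /home/me/Documents/site1
  curl -sI http://www.domain.com/petproject/ | head -n 1 # expect 200, content from /home/me/pet-Project/website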
684,602 | I would like to open port 4567 for the IP address 1.2.3.4 with the firewall-cmd command on a CentOS 7.1 server. How can I achieve this, as the documentation I could find was too specific on this? | Try this command firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="1.2.3.4/32"
port protocol="tcp" port="4567" accept' Check the zone file later to inspect the XML configuration: cat /etc/firewalld/zones/public.xml Reload the firewall: firewall-cmd --reload | {
"source": [
"https://serverfault.com/questions/684602",
"https://serverfault.com",
"https://serverfault.com/users/166219/"
]
} |
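To confirm the rich rule is active after the reload, it can be listed back (a sketch; the zone name public is taken from the answer):
  firewall-cmd --zone=public --list-rich-rules
  # expected output, roughly:
  # rule family="ipv4" source address="1.2.3.4/32" port port="4567" protocol="tcp" accept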
684,688 | Let's say it again, we all make mistakes , and I have just made one. A brief history: I was doing some stuff on a VPS (Debian) I'm renting, when I noticed some strange behaviour. Using the netstat command I saw an non-authorized connection through SSH. I didn't know what to do, so I decided to close his connection using iptables : iptables -A INPUT -p tcp --dport ssh -s IP -j DROP But I am tired, and I wrote iptables -A INPUT -p tcp --dport ssh -j DROP and I kicked myself (and everyone else) out... How do I fix this? | There are several alternatives: See if they have IPMI / "KVM" / console access to the server which lets you control it as if you had a physical keyboard plugged into it. If they don't offer that, see if you can boot the VM to a recovery linux CD (some providers offer this) and then correct the firewall rules that way and then boot it like normal. If you don't have console access, before you boot to recovery or attach the volume to another VM (as in the Amazon case, credit user3550767's answer), you can try Ankh2054's answer of rebooting first if you haven't saved the rules (likely the case since you kicked yourself out before you had a chance to save). Use the control panel or ask someone to power cycle it using a non-graceful reset / poweroff (aka hard reboot or hard shutdown) in case the init script saves the rules automatically when gracefully rebooting (credit @jfalcon, @joshudson). Weigh the drawbacks of this (such as data being written during reboot may be lost and filesystem check may be required on boot so longer boot up time, though that delay may be less than booting to recovery). | {
"source": [
"https://serverfault.com/questions/684688",
"https://serverfault.com",
"https://serverfault.com/users/283111/"
]
} |
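A defensive pattern that avoids this class of lockout in the first place: schedule an automatic flush before testing a new rule, then cancel it once you have confirmed you can still log in. This is a sketch; it assumes the at daemon is installed, and 203.0.113.10 is a placeholder for the attacker's IP:
  echo "iptables -F INPUT" | at now + 10 minutes
  iptables -A INPUT -p tcp --dport ssh -s 203.0.113.10 -j DROP
  # still able to SSH in? then remove the safety net:
  atq       # find the job number
  atrm 1    # '1' is a hypothetical job number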
684,691 | I recently upgraded my Weblogic server to 10.3.6 with java 7. So with that I have TLS1.0 - TLS 1.2 enabled via the setEnv.sh. Some of the ciphers I am using to make sure that they are compatible (supported by Weblogic, FF37, Chrome 44, etc) are as follows: <ciphersuite>TLS_RSA_WITH_3DES_EDE_CBC_SHA</ciphersuite>
<ciphersuite>TLS_RSA_WITH_AES_128_CBC_SHA</ciphersuite>
<ciphersuite>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA</ciphersuite>
<ciphersuite>TLS_RSA_WITH_AES_128_CBC_SHA256</ciphersuite>
<ciphersuite>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256</ciphersuite><ciphersuite>TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA</ciphersuite> This is in config.xml under the ssl tag. I do have JSSE enabled as well to make sure I can get a TLS1.2 connection. The supported cipherlist for Weblogic 10.3.6 found here One issue that I see with SSL Labs is that with these ciphers, I am still possibly vulnerable to POODLE. An Nmap scan gave me this for what the ciphers are: | ssl-enum-ciphers:
| SSLv3:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| compressors:
| NULL
| TLSv1.0:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| compressors:
| NULL
| TLSv1.1:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| compressors:
| NULL
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA256 - strong
| compressors:
| NULL
|_ least strength: strong Before TLS1.1 and TLS1.2 were enabled in setEnv.sh, I did not have this issue, so I am unsure why adding them changed what happened.
Now my question is how do I make sure that I have SSL3 disabled but still able to use some of the CBC ciphers? or have the support I need? EDIT : I know that CBC ciphers are a no bueno kinda thing... I am open for suggestions for ciphers support TLS1.0+ and for a browser as low as IE 8. | There are several alternatives: See if they have IPMI / "KVM" / console access to the server which lets you control it as if you had a physical keyboard plugged into it. If they don't offer that, see if you can boot the VM to a recovery linux CD (some providers offer this) and then correct the firewall rules that way and then boot it like normal. If you don't have console access, before you boot to recovery or attach the volume to another VM (as in the Amazon case, credit user3550767's answer), you can try Ankh2054's answer of rebooting first if you haven't saved the rules (likely the case since you kicked yourself out before you had a chance to save). Use the control panel or ask someone to power cycle it using a non-graceful reset / poweroff (aka hard reboot or hard shutdown) in case the init script saves the rules automatically when gracefully rebooting (credit @jfalcon, @joshudson). Weigh the drawbacks of this (such as data being written during reboot may be lost and filesystem check may be required on boot so longer boot up time, though that delay may be less than booting to recovery). | {
"source": [
"https://serverfault.com/questions/684691",
"https://serverfault.com",
"https://serverfault.com/users/200372/"
]
} |
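After disabling SSLv3 on the Weblogic side, the question's own Nmap check can be re-run to confirm the change took effect (hostname and port are placeholders):
  nmap --script ssl-enum-ciphers -p 443 www.example.com
  # a POODLE-safe configuration should no longer show an "SSLv3:" section in this output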
684,771 | I am running a custom compiled 3.18.9 kernel and I am wondering about the best way to disable swap on the system. I also use init if it makes a difference. Is it enough to comment or remove the swap line in /etc/fstab to prevent swap from working/mounting at boot or should I recompile the kernel without Support for paging of anonymous memory (swap) to be 100% sure it does not get enabled? I run encrypted partitions and want to prevent accidental leakage to the hard disk. My system specifications are also great enough that I can survive in a swap-less environment. | Identify configured swap devices and files with cat /proc/swaps . Turn off all swap devices and files with swapoff -a . Remove any matching reference found in /etc/fstab . Optional: Destroy any swap devices or files found in step 1 to prevent their reuse. Due to your concerns about leaking sensitive information, you may wish to consider performing some sort of secure wipe. man swapoff | {
"source": [
"https://serverfault.com/questions/684771",
"https://serverfault.com",
"https://serverfault.com/users/283167/"
]
} |
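The answer's steps roughly translate to the following shell session (a sketch: the sed expression and the /dev/sda2 swap partition are assumptions, adjust for your fstab layout):
  cat /proc/swaps                                   # step 1: see what is configured
  sudo swapoff -a                                   # step 2: turn it all off
  sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab    # step 3: comment out swap entries (keeps a .bak copy)
  sudo shred -v -n 1 /dev/sda2                      # optional step 4: overwrite the old swap area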
685,289 | I've been reading a lot on RAID controllers/setups and one thing that comes up a lot is how hardware controllers without cache offer the same performance as software RAID. Is this really the case? I always thought that hardware RAID cards would offer better performance even without cache. I mean, you have dedicated hardware to perform the tasks. If that is the case what is the benefit of getting a RAID card that has no cache, something like a LSI 9341-4i that isn't exactly cheap. Also if a performance gain is only possible with cache, is there a cache configuration that writes to disk right away but keeps data in cache for reading operations making a BBU not a priority? | In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice. Long answer: when computing power was limited, hardware RAID cards had the significant advantage to offload parity/syndrome calculation for RAID schemes involving them (RAID 3/4/5, RAID6, ecc). However, with the ever increasing CPU performance, this advantage basically disappeared: even my laptop's ancient CPU (Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance over 3 GB/s per single execution core . The advantage that hardware RAID maintains today is the presence of a power-loss protected DRAM cache, in the form of BBU or NVRAM. This protected cache gives very low latency for random write access (and reads that hit) and basically transforms random writes into sequential writes. A RAID controller without such a cache is near useless . Moreover, some low-end RAID controllers do not only come without a cache, but forcibly disable the disk's private DRAM cache, leading to slower performance than without RAID card at all. An example are DELL's PERC H200 and H300 cards: they totally disable the disk's private cache and (if newer firmware has not changed that) actively forbid to re-activate it. Do yourself a favor and do not, ever, never buy such controllers. While even higher-end controllers often disable disk's private cache, they at least have their own protected cache - making HDD's (but not SSD's!) private cache somewhat redundant. This is not the end, though. Even capable controllers (the one with BBU or NVRAM cache) can give inconsistent results when used with SSD, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing. And while some (most?) controllers let you re-enable disk's private cache (eg: PERC H700/710/710P), if that private cache is volatile you risk to lose data in case of power loss. The exact behavior really is controller and firmware dependent (eg: on a DELL S6/i with 256 MB WB cache and enabled disk's cache , I had no losses during multiple, planned power loss testing), giving uncertainty and much concern. Open source software RAIDs, on the other hand, are much more controllable beasts - their software is not enclosed inside a proprietary firmware, and have well-defined metadata patterns and behaviors. Software RAID make the (right) assumption that disk's private DRAM cache is not protected, but at the same time it is critical for acceptable performance - so rather than disabling it, they use ATA FLUSH / FUA commands to write critical data on stable storage. 
As they often run from the SATA ports attached to the chipset SB, their bandwidth is very good and driver support is excellent. However, if used with mechanical HDDs, synchronized, random write access patterns (eg: databases, virtual machines) will greatly suffer compared to an hardware RAID controller with WB cache. On the other hand, when used with enterprise SSDs (ie: with a powerloss protected write cache), software RAID often excels and give results even higher than hardware RAID cards. Unfortunately consumer SSDs only have volatile write cache, delivering very low IOPS in synchronized write workloads (albeit very fast at reads and async writes). Also consider that software RAIDs are not all created equal. Windows software RAID has a bad reputation, performance wise, and even Storage Space seems not too different. Linux MD Raid is exceptionally fast and versatile, but Linux I/O stack is composed of multiple independent pieces that you need to carefully understand to extract maximum performance. ZFS parity RAID (ZRAID) is extremely advanced but, if not correctly configured, can give you very poor IOPs; mirroring+striping, on the other side, performs quite well. Anyway, it need a fast SLOG device for synchronous write handling (ZIL). Bottom line: if your workloads are not synchronized random write sensitive, you don't need a RAID card if you need a RAID card, do not buy a RAID controller without WB cache if you plan to use SSD, software RAID is preferred but keep in mind that for high synchronized random writes you need a powerloss-protected SSD (ie: Intel S/P/DC, Samsung PM/SM, etc). For pure performance the best choice probably is Linux MD Raid, but nowadays I generally use striped ZFS mirrors. If you can not afford losing half the space due to mirrors and you needs ZFS advanced features, go with ZRAID but carefully think about your VDEVs setup. if you, even using SSD, really need an hardware RAID card, use SSDs with write-protected caches. if you need RAID6 when using normal, mechanical HDDs, consider to buy a fast RAID card with 512 MB (or more) WB cache. RAID6 has a high write performance penalty, and a properly-sized WB cache can at least provide a fast intermediate storage for small synchronous writes (eg: filesystem journal). if you need RAID6 with HDDs but you can't / don't want to buy a hardware RAID card, carefully think about your software RAID setup. For example, a possible solution with Linux MD Raid is to use two arrays: a small RAID10 array for journal writes / DB logs, and a RAID6 array for raw storage (as fileserver). On the other hand, software RAID5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSDs setup. | {
"source": [
"https://serverfault.com/questions/685289",
"https://serverfault.com",
"https://serverfault.com/users/283488/"
]
} |
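Two quick Linux-side checks related to the points above (the device name and the availability of fio are assumptions):
  sudo hdparm -W /dev/sda      # is the disk's volatile write cache enabled?
  sudo hdparm -W0 /dev/sda     # disable it if there is no powerloss protection
  # probe synchronized random-write behaviour, the workload discussed above:
  fio --name=syncwrite --filename=/tmp/fio.test --size=256M --rw=randwrite --bs=4k --fsync=1 --runtime=30 --time_based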
685,302 | I'm doing an unattended installation of Ubuntu-14.04-server with a USB drive on different type of servers (HP Proliant ML110, ML310, ML350). In some cases, the USB drive is incorrectly mounted on /media instead of /cdrom , making the installation process stop with the following message: [ Detect and mount CD-ROM ] Your installation CD-ROM couldn't be mounted. This probably means that the CD-ROM was not in the drive. If so you can insert it an try again. I managed to identify some cases where this error occurs: on the ML110 and ML310: when the hard drive is empty on the ML350 Gen9: even if the hard drive is partitioned. I think it comes from the debian-installer that, at an early stage of the installation, tries to mount a partition from the first drive on /media . And then mounts the USB drive in /cdrom . In the above cases, the hard drive is detected later during the installation process, making the USB drive the first drive and therefore mounting it on /media and not on /cdrom . For the persons for which a manual intervention is not a problem, I found a workaround that I will describe in an answer below. But for an unattended installation, this is not a solution. Can we force the installer to mount the USB drive on a specific mont-point? | In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice. Long answer: when computing power was limited, hardware RAID cards had the significant advantage to offload parity/syndrome calculation for RAID schemes involving them (RAID 3/4/5, RAID6, ecc). However, with the ever increasing CPU performance, this advantage basically disappeared: even my laptop's ancient CPU (Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance over 3 GB/s per single execution core . The advantage that hardware RAID maintains today is the presence of a power-loss protected DRAM cache, in the form of BBU or NVRAM. This protected cache gives very low latency for random write access (and reads that hit) and basically transforms random writes into sequential writes. A RAID controller without such a cache is near useless . Moreover, some low-end RAID controllers do not only come without a cache, but forcibly disable the disk's private DRAM cache, leading to slower performance than without RAID card at all. An example are DELL's PERC H200 and H300 cards: they totally disable the disk's private cache and (if newer firmware has not changed that) actively forbid to re-activate it. Do yourself a favor and do not, ever, never buy such controllers. While even higher-end controllers often disable disk's private cache, they at least have their own protected cache - making HDD's (but not SSD's!) private cache somewhat redundant. This is not the end, though. Even capable controllers (the one with BBU or NVRAM cache) can give inconsistent results when used with SSD, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing. And while some (most?) controllers let you re-enable disk's private cache (eg: PERC H700/710/710P), if that private cache is volatile you risk to lose data in case of power loss. The exact behavior really is controller and firmware dependent (eg: on a DELL S6/i with 256 MB WB cache and enabled disk's cache , I had no losses during multiple, planned power loss testing), giving uncertainty and much concern. 
Open source software RAIDs, on the other hand, are much more controllable beasts - their software is not enclosed inside a proprietary firmware, and have well-defined metadata patterns and behaviors. Software RAID make the (right) assumption that disk's private DRAM cache is not protected, but at the same time it is critical for acceptable performance - so rather than disabling it, they use ATA FLUSH / FUA commands to write critical data on stable storage. As they often run from the SATA ports attached to the chipset SB, their bandwidth is very good and driver support is excellent. However, if used with mechanical HDDs, synchronized, random write access patterns (eg: databases, virtual machines) will greatly suffer compared to an hardware RAID controller with WB cache. On the other hand, when used with enterprise SSDs (ie: with a powerloss protected write cache), software RAID often excels and give results even higher than hardware RAID cards. Unfortunately consumer SSDs only have volatile write cache, delivering very low IOPS in synchronized write workloads (albeit very fast at reads and async writes). Also consider that software RAIDs are not all created equal. Windows software RAID has a bad reputation, performance wise, and even Storage Space seems not too different. Linux MD Raid is exceptionally fast and versatile, but Linux I/O stack is composed of multiple independent pieces that you need to carefully understand to extract maximum performance. ZFS parity RAID (ZRAID) is extremely advanced but, if not correctly configured, can give you very poor IOPs; mirroring+striping, on the other side, performs quite well. Anyway, it need a fast SLOG device for synchronous write handling (ZIL). Bottom line: if your workloads are not synchronized random write sensitive, you don't need a RAID card if you need a RAID card, do not buy a RAID controller without WB cache if you plan to use SSD, software RAID is preferred but keep in mind that for high synchronized random writes you need a powerloss-protected SSD (ie: Intel S/P/DC, Samsung PM/SM, etc). For pure performance the best choice probably is Linux MD Raid, but nowadays I generally use striped ZFS mirrors. If you can not afford losing half the space due to mirrors and you needs ZFS advanced features, go with ZRAID but carefully think about your VDEVs setup. if you, even using SSD, really need an hardware RAID card, use SSDs with write-protected caches. if you need RAID6 when using normal, mechanical HDDs, consider to buy a fast RAID card with 512 MB (or more) WB cache. RAID6 has a high write performance penalty, and a properly-sized WB cache can at least provide a fast intermediate storage for small synchronous writes (eg: filesystem journal). if you need RAID6 with HDDs but you can't / don't want to buy a hardware RAID card, carefully think about your software RAID setup. For example, a possible solution with Linux MD Raid is to use two arrays: a small RAID10 array for journal writes / DB logs, and a RAID6 array for raw storage (as fileserver). On the other hand, software RAID5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSDs setup. | {
"source": [
"https://serverfault.com/questions/685302",
"https://serverfault.com",
"https://serverfault.com/users/283088/"
]
} |
685,360 | I am trying to load balance 3 Apache servers. I am using VMware and I am very new to this kind of thing. Does anybody have any good sources that I can read for a tutorial or could anybody point me in the right direction. I have so far given the 3 servers the following addresses: 192.168.151.12 192.168.151.13 192.168.151.14 I would like to be able to load balance between these 3 addresses. | In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice. Long answer: when computing power was limited, hardware RAID cards had the significant advantage to offload parity/syndrome calculation for RAID schemes involving them (RAID 3/4/5, RAID6, ecc). However, with the ever increasing CPU performance, this advantage basically disappeared: even my laptop's ancient CPU (Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance over 3 GB/s per single execution core . The advantage that hardware RAID maintains today is the presence of a power-loss protected DRAM cache, in the form of BBU or NVRAM. This protected cache gives very low latency for random write access (and reads that hit) and basically transforms random writes into sequential writes. A RAID controller without such a cache is near useless . Moreover, some low-end RAID controllers do not only come without a cache, but forcibly disable the disk's private DRAM cache, leading to slower performance than without RAID card at all. An example are DELL's PERC H200 and H300 cards: they totally disable the disk's private cache and (if newer firmware has not changed that) actively forbid to re-activate it. Do yourself a favor and do not, ever, never buy such controllers. While even higher-end controllers often disable disk's private cache, they at least have their own protected cache - making HDD's (but not SSD's!) private cache somewhat redundant. This is not the end, though. Even capable controllers (the one with BBU or NVRAM cache) can give inconsistent results when used with SSD, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing. And while some (most?) controllers let you re-enable disk's private cache (eg: PERC H700/710/710P), if that private cache is volatile you risk to lose data in case of power loss. The exact behavior really is controller and firmware dependent (eg: on a DELL S6/i with 256 MB WB cache and enabled disk's cache , I had no losses during multiple, planned power loss testing), giving uncertainty and much concern. Open source software RAIDs, on the other hand, are much more controllable beasts - their software is not enclosed inside a proprietary firmware, and have well-defined metadata patterns and behaviors. Software RAID make the (right) assumption that disk's private DRAM cache is not protected, but at the same time it is critical for acceptable performance - so rather than disabling it, they use ATA FLUSH / FUA commands to write critical data on stable storage. As they often run from the SATA ports attached to the chipset SB, their bandwidth is very good and driver support is excellent. However, if used with mechanical HDDs, synchronized, random write access patterns (eg: databases, virtual machines) will greatly suffer compared to an hardware RAID controller with WB cache. 
On the other hand, when used with enterprise SSDs (ie: with a powerloss protected write cache), software RAID often excels and give results even higher than hardware RAID cards. Unfortunately consumer SSDs only have volatile write cache, delivering very low IOPS in synchronized write workloads (albeit very fast at reads and async writes). Also consider that software RAIDs are not all created equal. Windows software RAID has a bad reputation, performance wise, and even Storage Space seems not too different. Linux MD Raid is exceptionally fast and versatile, but Linux I/O stack is composed of multiple independent pieces that you need to carefully understand to extract maximum performance. ZFS parity RAID (ZRAID) is extremely advanced but, if not correctly configured, can give you very poor IOPs; mirroring+striping, on the other side, performs quite well. Anyway, it need a fast SLOG device for synchronous write handling (ZIL). Bottom line: if your workloads are not synchronized random write sensitive, you don't need a RAID card if you need a RAID card, do not buy a RAID controller without WB cache if you plan to use SSD, software RAID is preferred but keep in mind that for high synchronized random writes you need a powerloss-protected SSD (ie: Intel S/P/DC, Samsung PM/SM, etc). For pure performance the best choice probably is Linux MD Raid, but nowadays I generally use striped ZFS mirrors. If you can not afford losing half the space due to mirrors and you needs ZFS advanced features, go with ZRAID but carefully think about your VDEVs setup. if you, even using SSD, really need an hardware RAID card, use SSDs with write-protected caches. if you need RAID6 when using normal, mechanical HDDs, consider to buy a fast RAID card with 512 MB (or more) WB cache. RAID6 has a high write performance penalty, and a properly-sized WB cache can at least provide a fast intermediate storage for small synchronous writes (eg: filesystem journal). if you need RAID6 with HDDs but you can't / don't want to buy a hardware RAID card, carefully think about your software RAID setup. For example, a possible solution with Linux MD Raid is to use two arrays: a small RAID10 array for journal writes / DB logs, and a RAID6 array for raw storage (as fileserver). On the other hand, software RAID5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSDs setup. | {
"source": [
"https://serverfault.com/questions/685360",
"https://serverfault.com",
"https://serverfault.com/users/283539/"
]
} |
685,626 | I have read multiple times (although I can't find it right now) that data centers take great effort to make sure that all server have the exact same time. Including, but not limited to worrying about leap seconds. Why is it so important that servers have the same time? And what are the actual tolerances? | Security In general, timestamps are used in various authentication protocols to help prevent replay attacks , where an attacker can reuse an authentication token he was able to steal (e.g. by sniffing the network). Kerberos authentication does exactly this, for instance. In the version of Kerberos used in Windows, the default tolerance is 5 minutes. This is also used by various one-time password protocols used for two-factor authentication such as Google Authenticator, RSA SecurID, etc. In these cases the tolerance is usually around 30-60 seconds. Without the time being in sync between client and server, it would not be possible to complete authentication. (This restriction is removed in the newest versions of MIT Kerberos, by having the requester and KDC determine the offset between their clocks during authentication, but these changes occurred after Windows Server 2012 R2 and it will be a while before you see it in a Windows version. But some implementations of 2FA will probably always need synchronized clocks.) Administration Having clocks in sync makes it easier to work with disparate systems. For instance, correlating log entries from multiple servers is much easier if all systems have the same time. In these cases you can usually work with a tolerance of 1 second, which NTP will provide, but ideally you want the times to be as closely synchronized as you can afford. PTP, which provides much tighter tolerances, can be much more expensive to implement. | {
"source": [
"https://serverfault.com/questions/685626",
"https://serverfault.com",
"https://serverfault.com/users/11651/"
]
} |
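Checking how far a host's clock has drifted depends on which time daemon is installed; any one of these is a reasonable sanity check:
  ntpq -p              # classic ntpd: peers, offset and jitter in ms
  chronyc tracking     # chrony: current offset and frequency skew
  timedatectl status   # systemd-timesyncd: shows whether the clock is synchronized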
685,673 | (Related to Callbacks or hooks, and reusable series of tasks, in Ansible roles): Is there any better way to append to a list or add a key to a dictionary in Ansible than (ab)using a Jinja2 template expression? I know you can do something like: - name: this is a hack
shell: echo "{% originalvar.append('x') %}New value of originalvar is {{originalvar}}" but is there really no sort of meta task or helper to do this? It feels fragile, seems to be undocumented, and relies on lots of assumptions about how variables work in Ansible. My use case is multiple roles (database server extensions) that each need to supply some configuration to a base role (the database server). It's not as simple as appending a line to the db server config file; each change applies to the same line , e.g. the extensions bdr and pg_stat_statements must both appear on a target line: shared_preload_libaries = 'bdr, pg_stat_statements' Is the Ansible way to do this to just process the config file multiple times (once per extension) with a regexp that extracts the current value, parses it, and then rewrites it? If so, how do you make that idempotent across multiple runs? What if the config is harder than this to parse and it's not as simple as appending another comma-separated value? Think XML config files. | Since Ansible v2.x you can do these: # use case I: appending to LIST variable:
- name: my appender
set_fact:
my_list_var: '{{my_list_var + new_items_list}}'
# use case II: appending to LIST variable one by one:
- name: my appender
set_fact:
my_list_var: '{{my_list_var + [item]}}'
with_items: '{{my_new_items|list}}'
# use case III: appending more keys DICT variable in a "batch":
- name: my appender
set_fact:
my_dict_var: '{{my_dict_var|combine(my_new_keys_in_a_dict)}}'
# use case IV: appending keys DICT variable one by one from tuples
- name: setup list of tuples (for 2.4.x and up)
set_fact:
lot: >
[('key1', 'value1',), ('key2', 'value2',), ..., ('keyN', 'valueN',)],
- name: my appender
set_fact:
my_dict_var: '{{my_dict_var|combine({item[0]: item[1]})}}'
with_items: '{{lot}}'
# use case V: appending keys DICT variable one by one from list of dicts (thanks to @ssc)
- name: add new key / value pairs to dict
set_fact:
my_dict_var: "{{ my_dict_var | combine({item.key: item.value}) }}"
with_items:
- { key: 'key01', value: 'value 01' }
- { key: 'key02', value: 'value 02' }
- { key: 'key03', value: 'value 03' } All the above is documented in: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries | {
"source": [
"https://serverfault.com/questions/685673",
"https://serverfault.com",
"https://serverfault.com/users/102814/"
]
} |
685,697 | Not understanding what is happening when I try to execute two commands at runtime via CMD directive in `Dockerfile. I assumed that this should work: CMD ["/etc/init.d/nullmailer", "start", ";", "/usr/sbin/php5-fpm"] But it's not working. Container has not started. So I had to do it like this: CMD ["sh", "-c", "/etc/init.d/nullmailer start ; /usr/sbin/php5-fpm"] I don't understand. Why is that? Why first line is not the right way? Can somebody explain me these "CMD shell format vs JSON format, etc" stuff. In simple words. Just to note - the same was with command: directive in docker-compose.yml , as expected. | I believe the difference might be because the second command does shell processing while the first does not. Per the official documentation , there are the exec and shell forms. Your first command is an exec form. The exec form does not expand environment variables while the shell form does. It is possible that by using the exec form the command is failing due to its dependence on shell processing. You can check this by running docker logs CONTAINERID Your second command, the shell form, is equivalent to - CMD /etc/init.d/nullmailer start ; /usr/sbin/php5-fpm Excerpts from the documentation - Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME . If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo", "$HOME" ] . | {
"source": [
"https://serverfault.com/questions/685697",
"https://serverfault.com",
"https://serverfault.com/users/69638/"
]
} |
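The shell-vs-exec difference is easy to reproduce outside a Dockerfile; this sketch assumes the alpine image is available locally or pullable:
  docker run --rm alpine sh -c 'echo $HOME'   # shell processing: prints /root
  docker run --rm alpine echo '$HOME'         # no shell in between: prints the literal string $HOME
  docker logs <container-id>                  # as noted above, use this to see why a CMD failed (<container-id> is a placeholder)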
686,286 | I am experiencing extremely slow OpenVPN transfer rates between two servers. For this question, I'll call the servers Server A and Server B. Both Server A and Server B are running CentOS 6.6. Both are located in datacenters with a 100Mbit line and data transfers between the two servers outside of OpenVPN run close to ~88Mbps. However, when I attempt to transfer any files over the OpenVPN connection I've established between Server A and Server B, I get throughput right around 6.5Mbps. Test results from iperf: [ 4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 49184
[ 4] 0.0-10.0 sec 7.38 MBytes 6.19 Mbits/sec
[ 4] 0.0-10.5 sec 7.75 MBytes 6.21 Mbits/sec
[ 5] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 49185
[ 5] 0.0-10.0 sec 7.40 MBytes 6.21 Mbits/sec
[ 5] 0.0-10.4 sec 7.75 MBytes 6.26 Mbits/sec Aside from these OpenVPN iperf tests, both servers are virtually completely idle with zero load. Server A is assigned the IP 10.0.0.1 and it is the OpenVPN server. Server B is assigned the IP 10.0.0.2 and it is the OpenVPN client. The OpenVPN configuration for Server A is as follows: port 1194
proto tcp-server
dev tun0
ifconfig 10.0.0.1 10.0.0.2
secret static.key
comp-lzo
verb 3 The OpenVPN configuration for Server B is as follows: port 1194
proto tcp-client
dev tun0
remote 204.11.60.69
ifconfig 10.0.0.2 10.0.0.1
secret static.key
comp-lzo
verb 3 What I've noticed: 1. My first thought was that I was bottlenecking the CPU on the server. OpenVPN is single-threaded and both of these servers run Intel Xeon L5520 processors which aren't the fastest. However, I ran a top command during one of the iperf tests and pressed 1 to view CPU utilization by core and found that the CPU load was very low on each core: top - 14:32:51 up 13:56, 2 users, load average: 0.22, 0.08, 0.06
Tasks: 257 total, 1 running, 256 sleeping, 0 stopped, 0 zombie
Cpu0 : 2.4%us, 1.4%sy, 0.0%ni, 94.8%id, 0.3%wa, 0.0%hi, 1.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
Cpu3 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu8 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu12 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu13 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu14 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu15 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 946768k total, 633640k used, 313128k free, 68168k buffers
Swap: 4192188k total, 0k used, 4192188k free, 361572k cached 2. Ping times increase considerably over the OpenVPN tunnel while iperf is running. When iperf is not running, ping times over the tunnel are consistently 60ms (normal). But when iperf is running and pushing heavy traffic, ping times become erratic. You can see below how the ping times are stable until the 4th ping when I've started the iperf test: PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=60.1 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=60.1 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=60.2 ms
** iperf test begins **
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=146 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=114 ms
64 bytes from 10.0.0.2: icmp_seq=6 ttl=64 time=85.6 ms
64 bytes from 10.0.0.2: icmp_seq=7 ttl=64 time=176 ms
64 bytes from 10.0.0.2: icmp_seq=8 ttl=64 time=204 ms
64 bytes from 10.0.0.2: icmp_seq=9 ttl=64 time=231 ms
64 bytes from 10.0.0.2: icmp_seq=10 ttl=64 time=197 ms
64 bytes from 10.0.0.2: icmp_seq=11 ttl=64 time=233 ms
64 bytes from 10.0.0.2: icmp_seq=12 ttl=64 time=152 ms
64 bytes from 10.0.0.2: icmp_seq=13 ttl=64 time=216 ms 3. As mentioned above, I ran iperf outside of the OpenVPN tunnel and the throughput was normal -- ~88Mbps consistently. What I've tried: 1. I thought compression might be fouling things up, so I turned off compression by removing comp-lzo from both configs and restarting OpenVPN. No improvement. 2. Even though I previously found that the CPU utilization was low, I thought the default cipher might be a little too intensive for the system to keep up with. So I added cipher RC2-40-CBC to both configs (a very lightweight cipher) and restarted OpenVPN. No improvement. 3. I read on various forum about how tweaking the fragment, mssfix and mtu-tun might help with performance. I played with a few variations as described in this article , but again, no improvement. Any ideas on what could be causing such poor OpenVPN performance? | After a lot of Googling and configuration file tweaks, I found the solution. I'm now getting sustained speeds of 60Mbps and burst up to 80Mbps. It's a bit slower than the transfer rates I receive outside the VPN, but I think this is as good as it'll get. The first step was to set sndbuf 0 and rcvbuf 0 in the OpenVPN configuration for both the server and the client. I made that change after seeing a suggestion to do so on a public forum post (which is an English translation of a Russian original post ) that I'll quote here: It's July, 2004. Usual home internet speed in developed countries is 256-1024 Kbit/s, in less developed countries is 56 Kbit/s. Linux 2.6.7 has been released not a long ago and 2.6.8 where TCP Windows Size Scaling would be enabled by default is released only in a month. OpenVPN is in active development for 3 years already, 2.0 version is almost released.
One of the developers decides to add some code for socket buffer, I think to unify buffer sizes between OSes. In Windows, something goes wrong with adapters' MTU if custom buffers sizes are set, so finally it transformed to the following code: #ifndef WIN32
o->rcvbuf = 65536;
o->sndbuf = 65536;
#endif If you used OpenVPN, you should know that it can work over TCP and UDP. If you set custom TCP socket buffer value as low as 64 KB, TCP Window Size Scaling algorithm can't adjust Window Size to more than 64 KB. What does that mean? That means that if you're connecting to other VPN site over long fat link, i.e. USA to Russia with ping about 100 ms, you can't get speed more than 5.12 Mbit/s with default OpenVPN buffer settings. You need at least 640 KB buffer to get 50 Mbit/s over that link.
UDP would work faster because it doesn't have window size but also won't work very fast. As you already may guess, the latest OpenVPN release still uses 64 KB
socket buffer size. How should we fix this issue? The best way is to
disallow OpenVPN to set custom buffer sizes. You should add the
following code in both server and client config files: sndbuf 0
rcvbuf 0 The author goes on to describe how to push buffer size adjustments to the client if you are not in control of the client config yourself. After I made those changes, my throughput rate bumped up to 20Mbps. I then saw that CPU utilization was a little high on a single core so I removed comp-lzo (compression) from the configuration on both the client and server. Eureka! Transfer speeds jumped up to 60Mbps sustained and 80Mbps burst. I hope this helps someone else resolve their own issues with OpenVPN slowness! | {
"source": [
"https://serverfault.com/questions/686286",
"https://serverfault.com",
"https://serverfault.com/users/115445/"
]
} |
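To confirm the sndbuf/rcvbuf and compression changes actually helped, the same iperf test from the question can be repeated across the tunnel (10.0.0.1 and 10.0.0.2 are the tunnel addresses from the question; the config path is an assumption):
  grep -E 'sndbuf|rcvbuf|comp-lzo' /etc/openvpn/*.conf   # verify what is now configured
  iperf -s                       # on the server, 10.0.0.1
  iperf -c 10.0.0.1 -t 30 -i 5   # on the client: report every 5 s for 30 s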
686,461 | Given two public IP addresses and the knowledge that are both on the same /27 network, would it be possible to determine from a remote location (i.e. different country) if they belong to one or two servers? | No, it's not, in the general case. Some additions, to make our guests from SO happy: There is nothing in the base TCP/IP protocols (e.g. IP/TCP/UDP,ICMP) that is specifically meant to make the distinction asked for in the question. The same is true for many higher level protocols, e.g. HTTP. It is indeed possible to use more or less subtle differences in answer patterns to make a guess about the system. If you have two very different systems, e.g. a Linux and a Windows server, this might be enough to be sure about having two hosts. This will become more difficult the more similar the systems are. Two nodes in a HA web cluster with identical hardware and OS are likely impossible to keep apart this way. I consider this the general case in most scenarios today. Lastly: Are two virtual machines on the same physical box one or two servers? Depending on why you try to differentiate in the first place, this might be important and it's completely impossible to tell on the networking level. | {
"source": [
"https://serverfault.com/questions/686461",
"https://serverfault.com",
"https://serverfault.com/users/35498/"
]
} |
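The "guess from answer patterns" approach mentioned above usually boils down to remote fingerprinting, which is unreliable by design (the addresses below are documentation placeholders):
  sudo nmap -O 198.51.100.10 198.51.100.11            # OS-fingerprint both addresses
  ping -c 3 198.51.100.10 ; ping -c 3 198.51.100.11   # compare TTLs and latency as a weak hint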
686,655 | Our IT created a VM with 2 CPUs allocated rather than the 4 I requested. Their reason is that the VM performs better with 2 CPUs rather than 4 (according to them). The rationale is that the VM hypervisor (VMWare in this case) waits for all the CPUs to be available before engaging any of them. Thus, it takes longer to wait for 4 rather than 2 CPUs. Does this statement make sense? | This used to be true, but is no longer exclusively true. What they are referring to is Strict Co-Scheduling . Most important of all, while in the strict co-scheduling algorithm, the existence of a lagging vCPU causes the
entire virtual machine to be co-stopped. In the relaxed co-scheduling algorithm, a leading vCPU decides whether
it should co-stop itself based on the skew against the slowest sibling vCPU Now, if the host only has 4 threads, then you'd be silly to allocate all of them. If it has two processors and 4 threads per processor, then you might not want to allocate all of the contents of a single processor, as your hypervisor should try to keep vCPUs on the same NUMA node to make memory access faster, and you're making this job more difficult by allocating a whole socket to a single VM (See page 12 of that PDF above). So there are scenarios where fewer vCPUs can perform better than more, but it's not true 100% of the time. All that said and done, I very rarely allocate more than 3 vCPUs per guest. Everyone gets 2 by default, 3 if it's a heavy workload, and 4 for things like SQL Servers or really heavy batch processing VMs, or a terminal server with a lot of users. | {
"source": [
"https://serverfault.com/questions/686655",
"https://serverfault.com",
"https://serverfault.com/users/3025/"
]
} |
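Co-scheduling pressure shows up as CPU ready time on the host and as steal time inside the guest; a rough way to check, assuming ESXi shell access:
  esxtop      # press 'c' for the CPU view and watch the %RDY column per VM
  vmstat 5    # inside a Linux guest: a persistently high 'st' column points the same way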
686,878 | I understand that the purpose of load balancers is to balance load between your servers and keep track of instance health, etc. But what if load balancer itself fails? How do you set up redundant load balancers? (load balancing load balancers?) I could see how DNS health checks could be useful, but there's obviously major latency issues, isn't there? This is assuming that you're not using any third party services like AWS ELB or anything similar. What to do if you're just using say Nginx? | There are couple of ways to achieve HA (high availability) of a Load Balancer - or in that regards any service. Lets assume you have two machines, with IP addresses: 192.168.100.101 192.168.100.102 Users connect to an IP, so what you want to do is separate IP from specific box - eg create virtual IP. That IP will be 192.168.100.100. Now, you can choose HA service which will take care of automatic failover/failback of IP address. Some of the simplest services for unix are (u)carp and keepalived, some of the more complex ones are for example RedHat Cluster Suite or Pacemaker. Lets take keepalived as an example - two keepalived services - each running on its own box - and they communicate together. That communication is often called heartbeat. | VIP | | |
| Box A | ------v^-----------v^---- | Box B |
|  IP1  |                           |  IP2  |
If one keepalived stops responding (either the service goes down for whatever reason, or the box bounces or shuts down), the keepalived on the other box will notice the missed heartbeats, presume the other node is dead, and take failover action. That action in our case will be bringing up the floating IP.
                                    |  VIP  |
------------------  --------------  | Box B |
                                    |  IP2  |
Worst case that can happen in this case is the loss of sessions for clients, but they will be able to reconnect. If you want to avoid that, the two load balancers have to be able to sync session data between them, and if they can do that, users won't notice anything except maybe a short delay. Another pitfall of this setup is split brain - when both boxes are online but the link between them is severed, and both boxes bring up the same IP. This is often resolved through some kind of fencing mechanism (SCSI reservation, IPMI restart, smart PDU power cut, ...), or an odd number of nodes requiring a majority of cluster members to be alive for the service to be started.
|  VIP  |                           |  VIP  |
| Box A |                           | Box B |
|  IP1  |                           |  IP2  |
More complex cluster management software (like Pacemaker) can move a whole service (e.g. stop it on one node and start it on another) - and this is the way HA for services like databases can be achieved. Another possible way - if you are controlling the routers near your load balancers - is to utilize ECMP. This approach also enables you to horizontally scale load balancers.
This works by each of your two boxes talking BGP to your router(s). Each box has to advertise the virtual IP (192.168.100.100), and the router will load balance traffic via ECMP. If a machine dies, it will stop advertising the VIP, which will in turn stop routers from sending traffic to it. The only thing you have to take care of in this setup is to stop advertising the IP if the load balancer itself dies. | {
"source": [
"https://serverfault.com/questions/686878",
"https://serverfault.com",
"https://serverfault.com/users/125082/"
]
} |
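A quick way to watch the keepalived failover described above (the interface name eth0 is an assumption; the VIP 192.168.100.100 comes from the example):
  ip addr show dev eth0 | grep 192.168.100.100   # which box currently holds the VIP?
  sudo systemctl stop keepalived                 # on the active box, to simulate a failure
  ip addr show dev eth0                          # on the other box: the VIP should appear after a few missed heartbeats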
688,658 | I'm working on Ubuntu 14 with the default rsyslog and logrotate utility. In the default rsyslog logrotate /etc/logrotate.d/rsyslog config I see the following: /var/log/syslog
{
rotate 7
daily
missingok
notifempty
delaycompress
compress
postrotate
reload rsyslog >/dev/null 2>&1 || true
endscript
} From what I understand, it is recommended to use copytruncate in all logrotate scenarios, as it doesn't move the current log, but rather truncates the log so any process with an open file handle will be able to keep writing to it. So how come the default configuration uses the rsyslog reload feature instead? | To answer your question, you first need to understand the different trade-offs of reload and copytruncate: reload : the old log file is renamed and the process writing into that log is notified (via Unix signal) to re-create its log file. This is the fastest / lowest overhead method: rename/move operations are very fast and have a constant execution time. Moreover, it is an almost atomic operation: this means that (nearly) no log entry will be lost during the move/reload. On the other hand, you need a process capable of reloading and re-opening its log file. Rsyslog is such a process, so the default logrotate config uses the reload method. Using this mode with rsyslog is strongly recommended by rsyslog upstream. copytruncate : the old log file is copied into an archive file, and then it is truncated to "delete" old log lines. While the truncate operation is very fast, the copy can be quite long (depending on how big your logfile is). Moreover, some log entries can be lost in the time between the copy operation (remember, it can be slow) and the truncate. For these reasons, copytruncate is not used by default for services capable of reloading and recreating their log files. On the other hand, if a server is not capable of reloading/recreating its log files, copytruncate is your safest bet. In other words, it does not require any service-level support. The rsyslog upstream project strongly advises against using this mode. | {
"source": [
"https://serverfault.com/questions/688658",
"https://serverfault.com",
"https://serverfault.com/users/286268/"
]
} |
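To see what a given logrotate config will do, and to exercise it without waiting for the daily run (paths are the ones from the question):
  sudo logrotate -d /etc/logrotate.d/rsyslog   # debug/dry-run: shows what would happen
  sudo logrotate -f /etc/logrotate.d/rsyslog   # force one rotation now
  ls -l /var/log/syslog*                       # a fresh syslog confirms rsyslog reopened its file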
688,837 | I understand that domain name registrars, for each domain they manage, register the authoritative name servers for that domain with its top-level root name server. My question is: how do they do this? Is there a special protocol they use? How do top-level root name servers authenticate queries from registrars to change authoritative name servers for a given domain? Is that even public knowledge? For example, say you own example.com. You want to change the authoritative name servers for it. You give your registrar the addresses of the new name servers. So far, so good. They, in turn, echo that change with the top-level root name server (the one responsible for .com). What protocol is used for the query from your registrar? How does that root name server authenticate it? How does it know it's legit? Migrated from SuperUser ( https://superuser.com/questions/910123/how-do-registrars-register-authoritative-name-servers-with-root-name-servers) | Many registries use the Extensible Provisioning Protocol (EPP) to facilitate their registrar interactions. It's worth noting that this is a whole separate protocol from DNS itself, specifically dealing with name registration and provisioning. It only indirectly populates the relevant zone in DNS. Unless you are either a registry or a registrar it really doesn't matter much what sort of protocols / APIs these parties use but if you do want to read up on it, here are some of the relevant specs for EPP: Extensible Provisioning Protocol
(EPP)
Extensible Provisioning Protocol (EPP) Domain Name Mapping
Extensible Provisioning Protocol (EPP) Host Mapping
Extensible Provisioning Protocol (EPP) Contact Mapping
Extensible Provisioning Protocol (EPP) Transport over TCP
Domain Name System (DNS) Security Extensions Mapping for the Extensible Provisioning Protocol (EPP)
As more of a sidenote, the root servers deal with the root zone (aka `.`); a TLD zone is not the same as the "root". If you register for instance `example.com` through your registrar, nothing changes in the root zone; your delegation is only entered into the `com` zone. | {
"source": [
"https://serverfault.com/questions/688837",
"https://serverfault.com",
"https://serverfault.com/users/286406/"
]
} |
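The end result of an EPP transaction can be observed from the outside with plain DNS queries: the delegation shows up in the TLD zone, not in the root (a.gtld-servers.net and a.root-servers.net are real gTLD/root servers; example.com is a placeholder):
  dig +norecurse @a.gtld-servers.net example.com NS   # the com zone holds the delegation
  dig +norecurse @a.root-servers.net com NS           # the root only delegates 'com' itself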
690,040 | We have a makeshift server room that contains a rack with half a dozen servers and some network equipment. The room is cooled by a dual-hose portable a/c unit that is vented into the attic. At this time the portable a/c unit can not keep up with the heat being generated and the temperature rises to around 81 degrees Fahrenheit before stabilizing. As a side note the servers are currently mounted directly on top of each other in the rack (no space). In my opinion the only way to lower the temperature in the room without getting a larger a/c unit is to reduce the amount of heat being generated. In other words, I need to reduce the number of servers. My buddy contends if we space the servers apart the cooling will be more efficient and result in a lower room temperature. I think my buddy doesn't understand the law of conservation of energy. Please help us settle this dispute. | The airflow in rack servers (and any rackable equipment, actually) is designed to move horizontally, so that they can be rack-mounted on top of each other without any need for wasting rack space. Spacing them vertically would effectively accomplish nothing, and it could even decrease cooling effectiveness, due to how airflow is designed to work inside a rack cabinet (cool air should enter from the front, hot air should exit from the rear, and air should flow through servers, not between them). This doesn't have anything to do with conservation of energy, however; it's just an airflow design issue. About conservation of energy: you are of course absolutely correct; if there are (say) five hot objects giving away heat to a closed room, it doesn't matter at all if they are touching or if they are spaced apart; the amount of heat flowing from them to the room would be exactly the same. | {
"source": [
"https://serverfault.com/questions/690040",
"https://serverfault.com",
"https://serverfault.com/users/287554/"
]
} |
690,155 | I don't want to do the right thing by creating a new systemd script, I just want my old init script to work again now that I've upgraded my system to an OS that's using systemd. I've briefly researched how to convert init scripts and how to write systemd scripts, but I'm sure learning it properly and doing it right would take me several hours. The current situation is: systemctl start solr
Failed to start solr.service: Unit solr.service failed to load: No such file or directory. And: sudo service solr start
Failed to start solr.service: Unit solr.service failed to load: No such file or directory. Right now, I just want to get back to work. What's the path of least resistance to getting this working again? Updates I didn't want to figure this all out – I really didn't – but I have to and I've unearthed my first clue: sudo systemctl enable solr
Synchronizing state for solr.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d solr defaults
insserv: warning: script 'K01solr' missing LSB tags and overrides
insserv: warning: script 'solr' missing LSB tags and overrides
Executing /usr/sbin/update-rc.d solr enable
update-rc.d: error: solr Default-Start contains no runlevels, aborting. The incompatibilities page for systemd says that: LSB header dependency information matters. The SysV implementations on many distributions did not use the dependency information encoded in LSB init script headers, or used them only in very limited ways. Due to that they are often incorrect or incomplete. systemd however fully interprets these headers and follows them closely at runtime I think that means my script won't work until that's fixed. The script in question: #!/bin/sh
# Prerequisites:
# 1. Solr needs to be installed at /usr/local/solr/example
# 2. daemon needs to be installed
# 3. Script needs to be executed by root
# 4. $INSTALL_ROOT must be set
# This script will launch Solr in a mode that will automatically respawn if it
# crashes. Output will be sent to /var/log/solr/solr.log. A pid file will be
# created in the standard location.
start () {
echo -n "Starting solr..."
# Reset ulimit or else get issues with too many open files (https://issues.apache.org/jira/browse/SOLR-4)
ulimit -n 10000
# start daemon
daemon --chdir='/usr/local/solr/example' --command "java -jar -server start.jar -DINSTALL_ROOT=$INSTALL_ROOT" --respawn --output=/var/log/solr/solr.log --name=solr --verbose
RETVAL=$?
if [ $RETVAL = 0 ]
then
echo "done."
else
echo "failed. See error code for more information."
fi
return $RETVAL
}
stop () {
# stop daemon
echo -n "Stopping solr..."
daemon --stop --name=solr --verbose
RETVAL=$?
if [ $RETVAL = 0 ]
then
echo "done."
else
echo "failed. See error code for more information."
fi
return $RETVAL
}
restart () {
daemon --restart --name=solr --verbose
}
status () {
# report on the status of the daemon
daemon --running --verbose --name=solr
return $?
}
case "$1" in
start)
start
;;
status)
status
;;
stop)
stop
;;
restart)
stop
sleep 15
start
;;
*)
echo $"Usage: solr {start|status|stop|restart}"
exit 3
;;
esac
exit $RETVAL | Seriously, a systemd unit file is trivial to write for a service like this...or for most services. This ought to get you about 95% of the way there. Put this in, for example, /etc/systemd/system/solr.service [Unit]
Description=Apache Solr
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
EnvironmentFile=/etc/courtlistener
WorkingDirectory=/usr/local/solr/example
ExecStart=/usr/bin/java -jar -server -Xmx${CL_SOLR_XMX} start.jar -DINSTALL_ROOT=${INSTALL_ROOT}
Restart=on-failure
LimitNOFILE=10000
[Install]
WantedBy=multi-user.target Note the stuff that isn't here, like the log file and such; systemd will automatically capture and log the service output under the service's name. | {
"source": [
"https://serverfault.com/questions/690155",
"https://serverfault.com",
"https://serverfault.com/users/46783/"
]
} |
690,341 | What is the algorithm used to generate etags in Nginx? They look something like "554b73dc-6f0d" now. Are they generated from timestamp only? | From the source code: http://lxr.nginx.org/ident?_i=ngx_http_set_etag 1803 ngx_int_t
1804 ngx_http_set_etag(ngx_http_request_t *r)
1805 {
1806 ngx_table_elt_t *etag;
1807 ngx_http_core_loc_conf_t *clcf;
1808
1809 clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
1810
1811 if (!clcf->etag) {
1812 return NGX_OK;
1813 }
1814
1815 etag = ngx_list_push(&r->headers_out.headers);
1816 if (etag == NULL) {
1817 return NGX_ERROR;
1818 }
1819
1820 etag->hash = 1;
1821 ngx_str_set(&etag->key, "ETag");
1822
1823 etag->value.data = ngx_pnalloc(r->pool, NGX_OFF_T_LEN + NGX_TIME_T_LEN + 3);
1824 if (etag->value.data == NULL) {
1825 etag->hash = 0;
1826 return NGX_ERROR;
1827 }
1828
1829 etag->value.len = ngx_sprintf(etag->value.data, "\"%xT-%xO\"",
1830 r->headers_out.last_modified_time,
1831 r->headers_out.content_length_n)
1832 - etag->value.data;
1833
1834 r->headers_out.etag = etag;
1835
1836 return NGX_OK;
1837 } You can see on lines 1830 and 1831 that the input is the last modified time and the content length. | {
"source": [
"https://serverfault.com/questions/690341",
"https://serverfault.com",
"https://serverfault.com/users/20381/"
]
} |
690,609 | I’ve found diverging instructions on the ’net about this. To recap: SATA with the 4-pin Molex (white) power plug is not hot-pluggable, but either the wide connector or the separate (15-pin power and 7-pin data) connectors are. However, in which order do I plug the cables in? Asrock says to connect first the data cable to the mainboard, then to connect, in this order, the power cable, then the data cable, to the drive. Another hardware guide says to connect the data cable first. It’s surprising that a definitive answer on this is so hard to find. | In the SATA specification this is referred to as hot plug and hot removal and they are two separate events. While the electrical and communication layers support both hot plug and hot removal, check that your drive controller, operating system, and drivers support them. Note that all of the below ONLY applies to host and devices (ie, drive controllers and drives) that BOTH declare they are hot plug capable. If your drive controller has specific instructions, follow them. If not, read on. It doesn't matter which plug to attach first. SATA drives are allowed to be connected to data without power, and to power without data. They are designed so when data is connected without power, some limited drive information can still be obtained (this is mostly used in RAID and backup setups where you want to keep some disks offline to reduce wear and tear, but still need to know what's installed). So if you plug in the power first, the drive turns on, recognizes there's no data cable, and waits for the data cable to be attached. If you plug in the data first, the computer recognizes the drive attachment, and that the drive isn't ready, and waits for the drive to signal that it's available. If you do happen to get a single cable with both power and data, though, you'll find that the data pins are further behind the rest. The pins are staged as follows: Ground and precharge inrush power Power Data This suggests that while the drives and controllers should support plugging either cable in any order, when they have control over how cables are connected they prefer power before data. So if you wanted to be pedantic and prefer one order above the other, your best bet is to follow what they do and connect the data cable last. Note that disconnecting the data first, then the power, when removing the drive will allow the drive to detect the removal, and possibly perform a few last millisecond housekeeping tasks before the power is fully removed. But, again, the specification allows connection in any order, and should work fine in any order. Specification excerpts From SATA revision 3.0 June 2, 2009 Gold Version 4.1.60 hot plug The connection of a SATA device to a host system that is already powered. The SATA device is
already powered or powered upon insertion/connection. See section 7.2.5.1 for details on hot
plug scenarios. You might think the above suggests that power should be applied first or simultaneously, but this is clarified in 7.2.5.1: 7.2.5.1 Hot Plug Overview The purpose of this section is to provide the minimum set of normative requirements necessary
for a Serial ATA Host or Device to be declared as “Hot-Plug Capable”. As there exists various
Hot-Plug events, there are relevant electrical and operational limitations for each of those types of
events. The events are defined below, and the Hot-Plug Capability is further classified into: a) Surprise Hot-Plug capable b) OS-Aware Hot-Plug capable When a Host or Device is declared Hot-Plug Capable without any qualifier, this shall imply that
the SATA interface is Surprise Hot-Plug Capable. For the purposes of this specification, Hot-Plug operations are defined as insertion or removal
operations, between SATA hosts and devices, when either side of the interface is powered. ... Hot-Plug Capable Hosts/Devices shall not suffer any electrical damage, or permanent electrical
degradation, and shall resume compliant Tx/Rx operations after the applicable OOB operations,
following the Hot-Plug Events. Here's the key part of the specification you're interested in. All the following situations shall not damage the device or host, and both the device and host shall resume normal TX/RX communication after any of the following events. While these discuss specific architectures (backplanes, for instance) the drive and host themselves are electrically and otherwise the same - these are merely methods of connection and there's no practical difference between them and your individual cable scenario: Power remains connected while data is plugged/unplugged Asynchronous Signal Hot Plug / Removal: A signal cable is plugged / unplugged at
any time. Power to the Host/Device remains on since it is sourced through an alternate
mechanism, which is not associated with the signal cable. This applies to External
Single-Lane and Multilane Cabled applications. Data is connected where power is not available Unpowered OS-Aware Hot Plug / Removal: This is defined as the insertion / removal of
a Device into / from a backplane connector (combined signal and power) that has power
shutdown. Prior to removal, the Host is placed into a quiescent state (not defined here)
and power is removed from the backplane connector to the Device. After insertion, the
backplane is powered; both the Device and Host initialize and then operate normally.
The mechanism for powering the backplane on/off and transitioning the Host into/out of
the “quiescent” state is not defined here. During OS-Aware events, the Host is powered.
This applies to “Short” and “Long” Backplane applications. There are two other situations here which don't apply to this question. Read the spec for more. However, they do provide the following warning in the specification: NOTE: This does not imply transparent resumption of system-level operation since data may be
lost, the device may have to be re-discovered and initialized, etc. Regardless of the above
definitions, the removal of a device, which is still rotating, is not recommended and should be
prevented by the system designer. In other words, the hot removal capability is the responsibility of the system designer, and they should ensure the drive is stopped before hot removal occurs. You, in this case, are the system designer. If your OS and driver don't have a mechanism to allow you to turn off the drive before unplugging them, then you aren't providing adequate hot removal support, and should not perform hot removals on the system. This is tackled by manufacturers by providing locking or handled drive cages where the lock to remove them tells the OS to perform the drive shutdown, or pulling the handle out a short way does so. The user is then instructed to wait for notification that the drive can be removed (usually an LED on the drive carrier itself). | {
"source": [
"https://serverfault.com/questions/690609",
"https://serverfault.com",
"https://serverfault.com/users/189656/"
]
} |
690,622 | I am using CentOS Linux release 7.0.1406 (Core). The last time I logged in to SSH of the server was on April 20. Everything was working fine. Today I logged in once again to check if anything new in the error.log of my websites. I do it periodically. But today there was a surprise: [root@myserver nginx]# ls -la
total 104840
drwx------ 2 nginx nginx 4096 Apr 30 03:19 .
drwxr-xr-x 7 root root 4096 May 3 03:20 ..
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 access.log
-rw-r--r-- 1 root root 17956729 Apr 30 03:19 access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 awstats.site1.net.access.log
-rw-r--r-- 1 root root 5229 Apr 2 14:21 awstats.site1.net.access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 awstats.site1.net.error.log
-rw-r--r-- 1 root root 4654 Apr 2 14:21 awstats.site1.net.error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 devel.site1.net.access.log
-rw-r--r-- 1 root root 26082 Apr 20 21:12 devel.site1.net.access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 devel.site1.net.error.log
-rw-r--r-- 1 root root 46743 Apr 20 21:14 devel.site1.net.error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 devel.site2.pl.access.log
-rw-r--r-- 1 root root 1652 Apr 24 06:28 devel.site2.pl.access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 devel.site2.pl.error.log
-rw-r--r-- 1 root root 237 Feb 28 21:32 devel.site2.pl.error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 error.log
-rw-r--r-- 1 root root 596623 Apr 30 02:38 error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 site1.net.access.log
-rw-r--r-- 1 root root 83764451 Apr 30 03:18 site1.net.access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 site1.net.error.log
-rw-r--r-- 1 root root 147462 Apr 29 21:36 site1.net.error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 site3.com-access.log
-rw-r--r-- 1 root root 177285 Apr 30 03:14 site3.com-access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 site3.com-error.log
-rw-r--r-- 1 root root 27929 Apr 28 23:16 site3.com-error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 panel.site4.com-access.log
-rw-r--r-- 1 root root 1963 Apr 25 22:22 panel.site4.com-access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 panel.site4.com-error.log
-rw-r--r-- 1 root root 488 Apr 13 14:21 panel.site4.com-error.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 site2.pl.access.log
-rw-r--r-- 1 root root 4485845 Apr 30 03:12 site2.pl.access.log-20150430.gz
-rw-r--r-- 1 web nginx 0 Apr 30 03:19 site2.pl.error.log
-rw-r--r-- 1 root root 61613 Apr 30 01:36 site2.pl.error.log-20150430.gz As you can see, the .log files were 0KB!!! But there was plenty of data there. It just... flew away. I also noticed that with last , there was a strange reboot I was not aware of: reboot system boot 2.6.32-042stab08 Wed Apr 29 20:41 - 15:09 (8+18:27) Now I changed back the owner/group to nginx and it looks like the logs are once again populating. EDIT: Here is my nginx.conf:
user web;
worker_processes 2;
pid /var/run/nginx.pid;
events {
worker_connections 768;
multi_accept on;
}
http {
rewrite_log off;
##
# Basic Settings
##
client_max_body_size 20m;
sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
# log_format main '$remote_addr $host $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" "$request_time"';
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
include /etc/nginx/mime.types;
default_type application/octet-stream;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
fastcgi_buffer_size 16k;
fastcgi_buffers 16 16k;
##
# Logging Settings
##
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
} Here is the output of: ps axu | grep log root 86 0.0 0.0 34636 848 ? Ss Apr29 0:05 /usr/lib/systemd/systemd-logind
root 541 0.0 0.0 9512 588 ? S Apr29 0:01 dovecot/log
mysql 593 0.6 5.7 1675596 179496 ? Sl Apr29 89:36 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
root 30100 0.0 0.0 8988 900 pts/1 S+ 20:31 0:00 grep --color=auto log I have few questions: I don't remember if there were .gz files. But now there are. How/where can I check if there is some rule somewhere that says that it should gzip each logfile? What do you think happened? Is there anything else I can check to find the root cause of that issue Is there a way to prevent such things happening in future? Is there a way to recover the logs that disappeared? | {
"source": [
"https://serverfault.com/questions/690622",
"https://serverfault.com",
"https://serverfault.com/users/209756/"
]
} |
690,855 | I have got the well-known warning message when trying to ssh into a server: $ ssh whateverhost
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxx/xxxxxxx.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/user/.ssh/known_hosts:10
ECDSA host key for ipofmyhost has changed and you have requested strict checking.
Host key verification failed. And I know why: because I changed the IP of that server. But if it weren't so, how could I check the fingerprint for the ECDSA key sent by the remote host? I have tried to do so by: echo -n ipofthehost | sha256sum But I don't get the same fingerprint. I also tried "hostname,ip" kind of like in AWS , but I didn't get any match. If I delete the entry from my known_hosts file and I try to ssh again, it succeeds and tells the following: ECDSA key fingerprint is SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxx/xxxxxxx.
Are you sure you want to continue connecting (yes/no)? So to what is it applying the sha256sum to get the fingerprint and how I could check it? | A public key fingerprint isn't the simple hash of an IP address string. To retrieve a remote host public key you can use ssh-keyscan <IP address> , and then you can use the usual tools to extract its fingerprint ( ssh-keygen -lf <public_key_file> ). Finally you can compare the current fingerprint in your known_hosts file with ssh-keygen -l -F <domain_or_IP_address> . | {
"source": [
"https://serverfault.com/questions/690855",
"https://serverfault.com",
"https://serverfault.com/users/281028/"
]
} |
691,080 | I would like to copy files from a remote directory to a local directory with Ansible but the fetch module allows me to copy only one file. I have many servers from which I need files (same directory on each server) and I don't know how to do this with Ansible. Any ideas? | You will probably need to register the remote content and then loop over it, something like this should work: - shell: (cd /remote; find . -maxdepth 1 -type f) | cut -d'/' -f2
register: files_to_copy
- fetch: src=/remote/{{ item }} dest=/local/
with_items: "{{ files_to_copy.stdout_lines }}" where /remote should be replaced with the directory path on your remote server and /local/ with a directory on your master | {
"source": [
"https://serverfault.com/questions/691080",
"https://serverfault.com",
"https://serverfault.com/users/162630/"
]
} |
691,568 | Note: I've read How Often Do Windows Servers Need to be Restarted? but this question pertains to our Remote Desktop server specifically. We have a Windows Server 2008R2 server - a VMware ESX VM - licensed for Remote Desktop Services, 25 users that also does RRAS (SSTP). On an average weekday, during working hours, there are between 8 and 12 logged-in, active users with an additional 4-6 "disconnected" users. It has a 12 GHz CPU hard reservation and 16 GB RAM, also entirely reserved. The CPU reservation is expandable to 24 GHz max when needed. Many of our users rely exclusively on the server to work. They also complain bitterly about its performance but many are unwilling to change working habits or software to improve its performance. Specifically: Users refuse to log off instead of disconnect Users insist on using Lync 2013 instead of Lync 2010 (Lync 2013 is a notorious resource hog) I cannot overstate the significance of their refusal to log off. Disconnected users continue to hog RAM while disconnected, which means that at any given time, we have up to 16 instances of certain programs running. I've also noticed through experience that leaks/zombies tend to add up the longer a Remote Desktop server has been running. After a reboot the server is fresh and much faster, even when comparing performance after many users have logged in. I've also read that regular reboots can be helpful. So I have proposed regular reboots of the VM - I would like to do it weekly, say on Saturday evening - as I feel these reboots would solve a lot of the problem. I would like to know, if you are a Windows admin, Am I right about the fact that garbage/zombies/leaks accumulate with session time, even after a user disconnects/reconnects? How often do you restart a similarly-utilized Windows Server with Remote Desktop Services? | Generally, I'm opposed to the idea that a Windows server should be rebooted on a regular schedule EXCEPT in relation to TS/RDS servers. We reboot ours every day. It clears up old sessions, releases in-use resources (CPU, RAM, file handles, etc.), so my opinion and suggestion would be that you do configure a daily scheduled reboot of your RDS servers. Note that this answer is only my opinion. There's no statement of fact here. | {
"source": [
"https://serverfault.com/questions/691568",
"https://serverfault.com",
"https://serverfault.com/users/88876/"
]
} |
692,309 | I am a bit confused about syslog, rsyslog and syslog-ng. From where can I get the source code for syslog() ? Is there any difference between rsyslog and rsyslogd? | Basically, they are all the same, in the way they all permit the logging of data from different types of systems in a central repository. But they are three different projects, each project trying to improve the previous one with more reliability and functionality. The Syslog project was the very first project. It started in 1980. It is the root project of the Syslog protocol. At that time Syslog was a very simple protocol. At the beginning it only supported UDP for transport, so it did not guarantee the delivery of messages. Next came syslog-ng in 1998. It extends the basic syslog protocol with new features like: content-based filtering Logging directly into a database TCP for transport TLS encryption Next came Rsyslog in 2004. It extends the syslog protocol with new features like: RELP Protocol support Buffered operation support Let's say that today they are three concurrent projects that have grown separately over their releases, but have also grown in parallel with regard to what the neighbouring projects were doing. I personally think that today syslog-ng is the reference in most cases, as it is the most mature project offering the main features you may need, in addition to an easy and comprehensive setup and configuration. | {
"source": [
"https://serverfault.com/questions/692309",
"https://serverfault.com",
"https://serverfault.com/users/274635/"
]
} |
692,771 | There are many different places where systemd unit files may be placed. Is there a quick and easy way to ask systemd where it read a service’s declaration from, given just the service name? | For units that are defined in actual, static files, this can be seen in systemctl status : $ systemctl status halt-local.service
● halt-local.service - /usr/sbin/halt.local Compatibility
Loaded: loaded (/lib/systemd/system/halt-local.service; static)
Active: inactive (dead) But there are units that are not defined by files, e.g. with systemd-cron installed. These have no useful location listed with status : $ systemctl status cron-jojo-0.timer
● cron-jojo-0.timer - [Cron] "*/10 * * * * ..."
Loaded: loaded (/var/spool/cron/crontabs/jojo)
Active: active (waiting) since Mon 2015-05-18 14:53:01 UTC; 9min ago In either case, though, the FragmentPath field is instructive: $ systemctl show -P FragmentPath cron-daily.service
/lib/systemd/system/cron-daily.service
$ systemctl show -P FragmentPath cron-jojo-0.service
/run/systemd/generator/cron-jojo-0.service
$ systemctl show -P FragmentPath halt-local.service
/lib/systemd/system/halt-local.service | {
"source": [
"https://serverfault.com/questions/692771",
"https://serverfault.com",
"https://serverfault.com/users/127480/"
]
} |
692,981 | When I try to ping the IP address 10.10.208.57 I have no response since nothing exists in the network with that IP address. However, if I try to ping 10.10.208.057 instead (note the leading zero), another IP address responds: root@everest:/root# ping 10.10.208.057
PING 10.10.208.057 (10.10.208.47) 56(84) bytes of data.
64 bytes from 10.10.208.47: icmp_seq=1 ttl=253 time=0.732 ms
64 bytes from 10.10.208.47: icmp_seq=2 ttl=253 time=0.695 ms
64 bytes from 10.10.208.47: icmp_seq=3 ttl=253 time=0.659 ms
64 bytes from 10.10.208.47: icmp_seq=4 ttl=253 time=0.705 ms Considering that 10.10.208.47 is a Lexmark E120n printer, what could be the origin of this strange problem? | That behavior is actually due to a normal feature of ping and has no relation to your actual hardware. Indeed, prefixing the IP address (or part of it) with a leading zero will cause the number to be interpreted as octal . So 057 means 57 in base 8, which is 47 in decimal. Thus ping will send the ICMP query to the machine located at address 10.10.208.47 and therefore get an answer from it. Note that you can also ping addresses in hexadecimal, by using the 0x prefix instead of just 0. Edit : As many comments suggest, this feature is actually not specific to ping and can be found in many CLI tools that manipulate IP addresses. | {
"source": [
"https://serverfault.com/questions/692981",
"https://serverfault.com",
"https://serverfault.com/users/62938/"
]
} |
693,060 | In order to see approximate speeds for tarballing an entire system, and then restoring that system if it was foobar'd, I partially cloned one of our primary systems onto a workstation that, while not integral to our company systems, would be nice to have functioning. I timed creating the tarball of the whole system, and inspected it to make sure it looked good. I then ran rm -rf / --no-preserve-root . I've never had the opportunity to do that before, so it was a lot of fun. At first. When I rebooted the box, nothing showed up. Not a "Dell" logo, not options for the BIOS, nothing. I hooked up the drive to a different box, and found to my chagrin that it had a UEFI partition. I assume that my Command of Death effectively hosed that partition. I hooked up a different, functioning drive to the now defunct workstation, but the workstation still does nothing. Has anyone seen anything like this, or have suggestions as to what to look for? How did running that rm command manage to so royally mess up the entire box? UPDATE: We returned the box to Dell. We weren't able to precisely diagnose if it was a coincidence or the situation as described by dronus . However, I will accept dronus' answer as it describes a possible reason why this would happen. Further, it will caution others against doing the same thing in the future. If anyone finds some record of Dell using buggy UEFI, that would be helpful. | One rare possibility could be that you triggered one of the infamous UEFI bugs that already killed some series of Samsung and Lenovo notebooks. It works like this: UEFI specs propose a non-volatile memory (NVRAM or EEPROM) that can be accessed by the OS to store settings or debugging information. Linux actually uses this feature in case of a kernel panic: If the root filesystem is not trusted anymore (e.g. after an exception in kernel code), it is switched to read-only. Now the UEFI feature can be used, and debugging information is written to the nonvolatile memory. So far, this sounds like a good idea: The data may be retrieved later and used to explore the crash reasons. However, in some lines of buggy UEFI firmware, some management routines of the nonvolatile message memory are broken. Depending on the messages, these firmwares crash upon initialization of the message memory, usually quite early on bootup. They may not even reach VGA initialisation, in which case the machine seems totally bricked. In the cases mentioned above, there was no software solution and the mainboards had to be replaced. Running rm -rf / --no-preserve-root may trigger another kernel bug when traversing and deleting kernel filesystems like /sys , /dev or /proc , that may finally lead to a kernel panic, finally triggering the nonvolatile message memory bug mentioned above. | {
"source": [
"https://serverfault.com/questions/693060",
"https://serverfault.com",
"https://serverfault.com/users/89049/"
]
} |
693,241 | Recently, a new vulnerability in Diffie-Hellman, informally referred to as 'logjam' has been published, for which this page has been put together suggesting how to counter the vulnerability: We have three recommendations for correctly deploying Diffie-Hellman
for TLS: Disable Export Cipher Suites. Even though modern browsers no longer
support export suites, the FREAK and Logjam attacks allow a
man-in-the-middle attacker to trick browsers into using export-grade
cryptography, after which the TLS connection can be decrypted. Export
ciphers are a remnant of 1990s-era policy that prevented strong
cryptographic protocols from being exported from United States. No
modern clients rely on export suites and there is little downside in
disabling them. Deploy (Ephemeral) Elliptic-Curve Diffie-Hellman
(ECDHE). Elliptic-Curve Diffie-Hellman (ECDH) key exchange avoids all
known feasible cryptanalytic attacks, and modern web browsers now
prefer ECDHE over the original, finite field, Diffie-Hellman. The
discrete log algorithms we used to attack standard Diffie-Hellman
groups do not gain as strong of an advantage from precomputation, and
individual servers do not need to generate unique elliptic curves. Generate a Strong, Unique Diffie Hellman Group . A few fixed groups are
used by millions of servers, which makes them an optimal target for
precomputation, and potential eavesdropping. Administrators should
generate unique, 2048-bit or stronger Diffie-Hellman groups using
"safe" primes for each website or server. What are the best-practice steps I should take to secure my server as per the above recommendations? | From the article you linked , there are three recommended steps to protect yourself against this vulnerability. In principle these steps apply to any software you may use with SSL/TLS but here we will deal with the specific steps to apply them to Apache (httpd) since that is the software in question. Disable Export Cipher Suites Dealt with in the configuration changes we'll make in 2. below ( !EXPORT near the end of the SSLCipherSuite line is how we'll disable export cipher suites) Deploy (Ephemeral) Elliptic-Curve Diffie-Hellman (ECDHE) For this, you need to edit a few settings in your Apache config files - namely SSLProtocol , SSLCipherSuite , SSLHonorCipherOrder to have a "best-practices" setup. Something like the following will suffice: SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
SSLHonorCipherOrder on Note: as for which SSLCipherSuite setting to use, this is always changing, and it is a good idea to consult resources such as this one to check for the latest recommended configuration. 3. Generate a Strong, Unique Diffie Hellman Group To do so, you can run openssl dhparam -out dhparams.pem 2048 . Note that this will put significant load on the server whilst the params are generated - you can always get around this potential issue by generating the params on another machine and using scp or similar to transfer them onto the server in question for use. To use these newly-generated dhparams in Apache, from the Apache Documentation : To generate custom DH parameters, use the openssl dhparam command.
Alternatively, you can append the following standard 1024-bit DH
parameters from RFC 2409, section 6.2 to the respective
SSLCertificateFile file : (emphasis mine) which is then followed by a standard 1024-bit DH parameter. From this we can infer that the custom-generated DH parameters may simply be appended to the relevant SSLCertificateFile in question. To do so, run something similar to the following: cat /path/to/custom/dhparam >> /path/to/sslcertfile Alternatively, as per the Apache subsection of the article you originally linked, you may also specify the custom dhparams file you have created if you prefer not to alter the certificate file itself, thusly: SSLOpenSSLConfCmd DHParameters "/path/to/dhparams.pem" in whichever Apache config(s) are relevant to your particular SSL/TLS implementation - generally in conf.d/ssl.conf or conf.d/vhosts.conf but this will differ depending on how you have configured Apache. It is worth noting that, as per this link , Before Apache 2.4.7, the DH parameter is always set to 1024 bits and
is not user configurable. This has been fixed in mod_ssl 2.4.7 that
Red Hat has backported into their RHEL 6 Apache 2.2 distribution with
httpd-2.2.15-32.el6 On Debian Wheezy upgrade apache2 to 2.2.22-13+deb7u4 or later and openssl to 1.0.1e-2+deb7u17. The above SSLCipherSuite does not work perfectly, instead use the following as per this blog : SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-DSS-AES128-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!DHE-RSA-AES128-GCM-SHA256:!DHE-RSA-AES256-GCM-SHA384:!DHE-RSA-AES128-SHA256:!DHE-RSA-AES256-SHA:!DHE-RSA-AES128-SHA:!DHE-RSA-AES256-SHA256:!DHE-RSA-CAMELLIA128-SHA:!DHE-RSA-CAMELLIA256-SHA You should check whether your Apache version is later than these version numbers depending on your distribution, and if not - update it if at all possible. Once you have performed the above steps to update your configuration, and restarted the Apache service to apply the changes, you should check that the configuration is as-desired by running the tests on SSLLabs and on the article related to this particular vulnerability. | {
"source": [
"https://serverfault.com/questions/693241",
"https://serverfault.com",
"https://serverfault.com/users/187011/"
]
} |
694,385 | I've added these lines to /etc/apt/sources.list deb http://packages.dotdeb.org wheezy-php56 all
deb-src http://packages.dotdeb.org wheezy-php56 all But still sudo apt-get update or sudo apt-get upgrade don't touch php. php --version is still PHP 5.4.39-0+deb7u2 (cli) (built: Mar 25 2015 08:33:29) | ( Update )
Try this (Ubuntu): sudo add-apt-repository ppa:ondrej/php -y
sudo apt-get update
sudo apt-get install php5.6-fpm -y (Update) For Debian Wheezy (with sudo) echo "deb http://packages.dotdeb.org wheezy-php56 all" >> /etc/apt/sources.list.d/dotdeb.list
echo "deb-src http://packages.dotdeb.org wheezy-php56 all" >> /etc/apt/sources.list.d/dotdeb.list
wget http://www.dotdeb.org/dotdeb.gpg -O- | apt-key add -
apt-get update
apt-get install php5-cli php5-fpm ….. (or whatever package you might need) (Update 21/06/2017) For Debian 8 (jessie) sudo nano /etc/apt/sources.list Add the following repositories: ...
deb http://mirrors.digitalocean.com/debian jessie main contrib non-free
deb-src http://mirrors.digitalocean.com/debian jessie main contrib non-free
deb http://security.debian.org/ jessie/updates main contrib non-free
deb-src http://security.debian.org/ jessie/updates main contrib non-free
# jessie-updates, previously known as 'volatile'
deb http://mirrors.digitalocean.com/debian jessie-updates main contrib non-free
deb-src http://mirrors.digitalocean.com/debian jessie-updates main contrib non-free Then update your sources: sudo apt-get update Then install the php5-fpm sudo apt-get install php5-fpm | {
"source": [
"https://serverfault.com/questions/694385",
"https://serverfault.com",
"https://serverfault.com/users/68513/"
]
} |
694,818 | I need to have network messages sent when a systemd service I have crashes or is hung (i.e., enters failed state; I monitor for hung by using WatchdogSec=). I noticed that newer systemd have FailureAction=, but then saw that this doesn't allow arbitrary commands, but just rebooting/shutdown. Specifically, I need a way to have one network message sent when systemd detects the program has crashed, and another when it detects it has hung. I'm hoping for a better answer than "parse the logs", and I need something that has a near-instant response time, so I don't think a polling approach is good; it should be something triggered by the event occurring. | systemd units support OnFailure that will activate a unit (or more) when the unit goes to failed. You can put something like OnFailure=notify-failed@%n And then create the [email protected] service where you can use the required specifier (you probably will want at least %i) to launch the script or command that will send notification. You can see a practical example in http://northernlightlabs.se/systemd.status.mail.on.unit.failure | {
"source": [
"https://serverfault.com/questions/694818",
"https://serverfault.com",
"https://serverfault.com/users/195395/"
]
} |
694,841 | I have a terrible problem, my joomla website is being abused to massively send spam. I have no clue on what is actually happening, but my postfix mail queue is constantly filled with thousands of spam mails being send from my server to external mail addresses. As a from-address a randomly created alias on my domainname is being used. To solve this problem I would like my Postfix mail server to only process mail from known mail aliases... just I have no clue on how to achieve this and the technical information I can find about postfix just is to dificult for me to understand. So I was hoping that somebody could tell me how I can configure my postfix mailserver to only process mail (that is from internal to external) for known mail aliases (or at least a list of mail address that can be used in the from-field and all other mail just being rejected). | {
"source": [
"https://serverfault.com/questions/694841",
"https://serverfault.com",
"https://serverfault.com/users/291166/"
]
} |
695,310 | I have the following output from git status , how do I grep for everything after Untracked files : [alexus@wcmisdlin02 Test]$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
#
# new file: app/.gitignore
# new file: app/app.iml
# new file: app/build.gradle
# new file: app/proguard-rules.pro
# new file: app/src/androidTest/java/org/alexus/test/ApplicationTest.java
# new file: app/src/main/AndroidManifest.xml
# new file: app/src/main/java/org/alexus/test/MainActivity.java
# new file: app/src/main/res/layout/activity_main.xml
# new file: app/src/main/res/menu/menu_main.xml
# new file: app/src/main/res/mipmap-hdpi/ic_launcher.png
# new file: app/src/main/res/mipmap-mdpi/ic_launcher.png
# new file: app/src/main/res/mipmap-xhdpi/ic_launcher.png
# new file: app/src/main/res/mipmap-xxhdpi/ic_launcher.png
# new file: app/src/main/res/values-w820dp/dimens.xml
# new file: app/src/main/res/values/dimens.xml
# new file: app/src/main/res/values/strings.xml
# new file: app/src/main/res/values/styles.xml
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# .gitignore
# .idea/
# Test.iml
# build.gradle
# gradle.properties
# gradle/
# gradlew
# gradlew.bat
# settings.gradle
[alexus@wcmisdlin02 Test]$ Like this, but without specifying the number of lines, like the -A parameter in GNU grep : [alexus@wcmisdlin02 Test]$ git status | grep -A100 'Untracked files'
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# .gitignore
# .idea/
# Test.iml
# build.gradle
# gradle.properties
# gradle/
# gradlew
# gradlew.bat
# settings.gradle
[alexus@wcmisdlin02 Test]$ Is there a way to do it? [alexus@wcmisdlin02 Test]$ grep --version
grep (GNU grep) 2.20
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Mike Haertel and others, see <http://git.sv.gnu.org/cgit/grep.git/tree/AUTHORS>.
[alexus@wcmisdlin02 Test]$ | With GNU grep (tested with version 2.6.3): git status | grep -Pzo '.*Untracked files(.*\n)*' Uses -P for Perl regular expressions, -z to also match newline with \n and -o to only print what matches the pattern. The regex explained : First we match any character ( . ) zero or multiple times ( * ) until an occurrence of the string Untracked files . Now, the part inside the brackets (.*\n) matches any character except a newline ( . ) zero or multiple times ( * ) followed by a newline ( \n ). And all that (that's inside the brackets) can occur zero or multiple times; that's the meaning of the last * . It should now match all other lines, after the first occurrence of Untracked files . | {
"source": [
"https://serverfault.com/questions/695310",
"https://serverfault.com",
"https://serverfault.com/users/10683/"
]
} |
695,786 | I try to print the previously registered mosh_version variable using the ansible debug msg command like this: - name: Print mosh version
debug: msg="Mosh Version: {{ mosh_version.stdout }}" It doesn't work and prints the following error: Note: The error may actually appear before this position: line 55, column 27
- name: Print mosh version
debug: msg="Mosh Version: {{ mosh_version.stdout }}"
^
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}" I tried - name: Print mosh version
debug: msg=Mosh Version: "{{ mosh_version.stdout }}" but this will just print "Mosh". What's the best way to get this running? | Try this: - name: Print mosh version
debug: "msg=Mosh Version: '{{ mosh_version.stdout }}'" More info in http://docs.ansible.com/YAMLSyntax.html#gotchas Edited:
Something like this works perfectly for me: - name: Check Ansible version
command: ansible --version
register: ansibleVersion
- name: Print version
debug:
msg: "Ansible Version: {{ ansibleVersion.stdout }}" http://pastie.org/private/cgeqjucn3l5kxhkkyhtpta | {
"source": [
"https://serverfault.com/questions/695786",
"https://serverfault.com",
"https://serverfault.com/users/125240/"
]
} |
695,849 | We have a simple systemd script to start a Minecraft server in a service fashion. The OS is CentOS 7. Here is the script: [Unit]
Description=Minecraft Server
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/root/Minecraft
ExecStart=/bin/java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui
Restart=on-failure
[Install]
WantedBy=multi-user.target Starting the service works fine but when stopping , the service remains in a failed state. See: systemctl status minecraftd.service
minecraftd.service - Minecraft Server
Loaded: loaded (/usr/lib/systemd/system/minecraftd.service; disabled)
Active: active (running) since Mon 2015-06-01 16:00:12 UTC; 18s ago
Main PID: 20975 (java)
CGroup: /system.slice/minecraftd.service
└─20975 /bin/java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui
systemctl stop minecraftd.service
systemctl status minecraftd.service
minecraftd.service - Minecraft Server
Loaded: loaded (/usr/lib/systemd/system/minecraftd.service; disabled)
Active: failed (Result: exit-code) since Mon 2015-06-01 16:01:37 UTC; 3s ago
Process: 20975 ExecStart=/bin/java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui (code=exited, status=143)
Main PID: 20975 (code=exited, status=143) Any idea? Thanks | Exit code 143 means that the program received a SIGTERM signal to instruct it to exit. The JVM catches the signal, does a clean shutdown, i.e. it runs all registered shutdown hooks, but still exits with an exit code of 143. That's just how Java works. You should be able to suppress this by adding the exit code into the unit file as a "success" exit status: [Service]
SuccessExitStatus=143 | {
"source": [
"https://serverfault.com/questions/695849",
"https://serverfault.com",
"https://serverfault.com/users/291837/"
]
} |
696,182 | This is a proposed Canonical Question about understanding and debugging the software firewall on Linux systems. In response to EEAA's answer and @Shog's comment that we need a suitable canonical Q&A for closing common relatively simple questions about iptables. What is a structured method to debug problems with the Linux software firewall, the netfilter packet filtering framework, commonly referred to by the userland interface iptables ? What are common pitfalls, recurring questions and simple or slightly more obscure things to check that an occasional firewall administrator might overlook or otherwise benefit from knowing? Even when you use tooling such as UFW , FirewallD (aka firewall-cmd ), Shorewall or similar you might benefit from looking under the hood without the abstraction layer those tools offer. This question is not intended as a How-To for building firewalls: check the product documentation for that and
for instance contribute recipes to Iptables Tips & Tricks or search the tagged iptables ufw firewalld firewall-cmd questions
for existing frequent and well regarded high-scoring Q&A's. | In general: Viewing and modifying the firewall configuration requires administrator privileges ( root ) as does
opening services in the restricted port number range. That means that you should either be logged in
as root or alternatively use sudo to run the command as root. I'll try to mark such commands with the optional [sudo] . Contents: Order matters or the difference between -I and -A Display the current firewall configuration Interpreting the output of iptables -L -v -n Know your environment The INPUT and FORWARD chains Kernel modules 1. Order matters or the difference between -I and -A The thing to remember is that firewall rules are checked in the order they are listed. The kernel will stop processing the chain when a rule is triggered that will either allow or disallow a packet or connection. I think the most common mistake for novice firewall administrators is that they follow the correct instructions to open a new port, such as the one below: [sudo] iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT and then discover that it won't take effect. The reason for that is that the -A option appends the new rule after all existing rules, and since very often the final rule in the existing firewall was one that blocks all traffic that isn't explicitly allowed, resulting in ...
7 2515K 327M REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
8 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 Or equivalent in iptables-save: ...
iptables -A INPUT -j REJECT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT and the new rule opening TCP port 8080 will never be reached. (as evidenced by the counters stubbornly remaining at 0 packets and zero bytes). By inserting the rule with -I the new rule would have been the first in the chain and will work. 2. Display the current firewall configuration My recommendation for the firewall administrator is to look at the actual configuration the Linux kernel is running, rather
than trying to diagnose firewall issues from user-friendly tools. Often once you understand the underlying issues you can
easily resolve them in a manner supported by those tools. The command [sudo] iptables -L -v -n is your friend (although some people like iptables-save better). Often when discussing configurations it is useful to use the --line-numbers option as well
to number lines. Referring to rule #X makes discussing them somewhat easier. Note: NAT rules are included in the iptables-save output but have to be listed separately by adding the -t nat option, i.e., [sudo] iptables -L -v -n -t nat --line-numbers . Running the command multiple times and checking for incrementing counters can be a useful tool to see if a new rule actually gets triggered. [root@host ~]# iptables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 784K 65M fail2ban-SSH tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
2 2789K 866M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
3 15 1384 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 44295 2346K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
5 40120 2370K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
6 16409 688K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443
7 2515K 327M REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT 25 packets, 1634 bytes)
num pkts bytes target prot opt in out source destination
Chain fail2ban-SSH (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 REJECT all -- * * 117.239.37.150 0.0.0.0/0 reject-with icmp-port-unreachable
2 4 412 REJECT all -- * * 117.253.208.237 0.0.0.0/0 reject-with icmp-port-unreachable Alternatively the output of iptables-save gives a script that can regenerate the above firewall configuration: [root@host ~]# iptables-save
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [441:59938]
:fail2ban-SSH - [0:0]
-A INPUT -p tcp -m tcp --dport 22 -j fail2ban-SSH
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A fail2ban-SSH -s 117.239.37.150/32 -j REJECT --reject-with icmp-port-unreachable
-A fail2ban-SSH -s 117.253.208.237/32 -j REJECT --reject-with icmp-port-unreachable
COMMIT It is a matter of preference which one you'll find easier to understand. 3. Interpreting the output of iptables -L -v -n The Policy sets the default action the chain uses when no explicit rule matches. In the INPUT chain that is set to ACCEPT all traffic. The first rule in the INPUT chain is immediately an interesting one: it sends all traffic (source 0.0.0.0/0 and destination 0.0.0.0/0) destined for TCP port 22 ( tcp dpt:22 ), the default port for SSH, to a custom target ( fail2ban-SSH ).
As the name indicates this rule is maintained by fail2ban (a security product that among other things scans system log files for possible abuse and blocks the IP-address of the abuser). That rule would have been created by an iptables commandline similar to iptables -I INPUT -p tcp -m tcp --dport 22 -j fail2ban-SSH or is found in the output of
iptables-save as -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-SSH . Often you'll find either of those notations in documentation. The counters indicate that this rule has matched 784'000 packets and 65 Megabytes of data. Traffic that matches this first rule is then processed by the fail2ban-SSH chain which, as a non-standard chain, gets listed below the OUTPUT chain. That chain consists of two rules, one for each abuser (source ip-address 117.239.37.150 or 117.253.208.237) that is blocked (with a reject-with icmp-port-unreachable ). -A fail2ban-SSH -s 117.239.37.150/32 -j REJECT --reject-with icmp-port-unreachable
-A fail2ban-SSH -s 117.253.208.237/32 -j REJECT --reject-with icmp-port-unreachable SSH packets that aren't from those blocked hosts are neither allowed nor disallowed yet and, now that the custom chain has been fully traversed, will be checked against the second rule in the INPUT chain. All packets that weren't destined for port 22 passed the first rule in the INPUT chain and will also be evaluated by INPUT rule #2. INPUT rule number 2 means this is intended to be a stateful firewall , which tracks connections. That has some advantages: only the packets for new connections need to be checked against
the full rule-set, but once allowed, additional packets belonging to an established or related connection are accepted without further checking. Input rule #2 matches all open and related connections and packets matching that rule will not need to be evaluated further. Note: rule changes in the configuration of a stateful firewall will only impact new connections, not established connections. In contrast, a simple packet filter tests every packet against the full rule-set, without tracking connection state. In such a firewall no state keywords would be used. INPUT rule #3 is quite boring, all traffic connecting to the loopback ( lo or 127.0.0.1) interface is allowed. INPUT rules 4, 5 and 6 are used to open TCP ports 22, 80 and 443 (the default ports for resp. SSH, HTTP and HTTPS) by granting access to NEW connections
(existing connections are already allowed by INPUT rule 2). In a stateless firewall those rules would appear without the state attributes, but still with the port match: 4 44295 2346K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
5 40120 2370K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
6 16409 688K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 or -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT The final INPUT rule, #7, blocks all traffic that was NOT granted access in INPUT rules 1-6. A fairly common convention: everything not explicitly allowed is denied. In theory this rule could have been omitted by setting the default POLICY to DROP. Always investigate the whole chain. 4. Know your environment 4.1 . The settings in a software firewall won't affect security settings maintained elsewhere in the network,
i.e. despite opening up a network service with iptables the unmodified access control lists on routers or other firewalls in your network may still block traffic... 4.2 . When no service is listening you won't be able to connect and get a connection refused error , regardless of firewall settings. Therefore: Confirm that a service is listening (on the correct network interface/ip-address) and using the port numbers you expect with [sudo] netstat -plnut or alternatively use ss -tnlp . If your services are not yet supposed to be running, emulate a simple listener with for instance netcat: [sudo] nc -l -p 123 or openssl s_server -accept 1234 [options] if you need a TLS/SSL listener (check man s_server for options). Verify that you can connect from the server itself i.e. telnet <IP of Server> 123 or echo "Hello" | nc <IP of Server> 123 or when testing TLS/SSL secured service openssl s_client -connect <IP of Server>:1234 , before trying the same from a remote host. 4.3 . Understand the protocols used by your services. You can't properly enable/disable services you don't sufficiently understand. For instance: is TCP or UDP used or both (as with DNS)? is the service using a fixed default port (for instance something like TCP port 80 for a webserver)? alternatively is a dynamic port number chosen that can vary (i.e. RPC services like classic NFS that register with Portmap)? infamous FTP even uses two ports , both a fixed and a dynamic port number when configured to use passive mode... the service, port and protocol descriptions in /etc/services do not necessarily match with the actual service using a port. 4.4 . The kernel packet filter is not the only thing that may restrict network connectivity: SELinux might also be restricting network services. getenforce will confirm if SELinux is running. Although becoming slightly obscure TCP Wrappers are still a powerful tool to enforce network security. Check with ldd /path/to/service |grep libwrap and the /hosts.[allow|deny] control files. 5. INPUT or FORWARD Chains The concept of chains is more thoroughly explained here but the short of it is: The INPUT chain is where you open and/or close network ports for services running locally, on the host where you issue the iptables commands. The FORWARD chain is where you apply rules to filter traffic that gets forwarded by the kernel to other systems,
actual systems but also Docker containers and virtual guest servers when your Linux machine is acting as a bridge, router, hypervisor and/or does network address translation and port forwarding. A common misconception is that since a docker container or KVM guest runs locally, the filter rules that apply should be in the INPUT chain, but that is usually not the case. 6. Kernel modules Since the packet filter runs within the Linux kernel it can also be compiled as a dynamic module, multiple modules actually. Most distributions include netfilter as modules and the
required netfilter modules will get loaded into the kernel as needed,
but for some modules a firewall administrator will need to manually ensure they get loaded. This primarily concerns the connection tracking modules, such as nf_conntrack_ftp which can be loaded with insmod . The modules currently loaded into the running kernel can be displayed with lsmod . The method to ensure modules are loaded persistently across reboots depends on the Linux distribution. | {
"source": [
"https://serverfault.com/questions/696182",
"https://serverfault.com",
"https://serverfault.com/users/37681/"
]
} |
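A short shell sketch expanding on section 1 of the iptables answer above; the chain position (4) and port (8080) are illustrative assumptions, not values from the original question:
# Insert the new rule as rule #4, i.e. above the final catch-all REJECT:
[sudo] iptables -I INPUT 4 -p tcp --dport 8080 -j ACCEPT
# Verify its position and watch the packet/byte counters increment as traffic arrives:
[sudo] iptables -L INPUT -v -n --line-numbers
# Rules added on the command line are not persistent across reboots; save them with your distribution's mechanism,
# e.g. iptables-save > /etc/iptables/rules.v4 (Debian/Ubuntu with iptables-persistent) or service iptables save (older RHEL/CentOS).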
696,460 | On a database, I can get a list of all the currently running processes, and the sql command that kicked them off. I'd like to do a similar thing on a windows box. I can get the list of processes, but not the command line that kicked them off. My question is: Given a PID on Windows - how do I find the command line instruction that executed it? Assumptions: Windows 7 and equivalent servers | Powershell and WMI. Get-WmiObject Win32_Process | Select ProcessId,CommandLine Or Get-WmiObject -Query "SELECT CommandLine FROM Win32_Process WHERE ProcessID = 3352" Note that you have to have permissions to access this information about a process. So you might have to run the command as admin if the process you want to know about is running in a privileged context. | {
"source": [
"https://serverfault.com/questions/696460",
"https://serverfault.com",
"https://serverfault.com/users/9803/"
]
} |
696,488 | Trying to get a basic Django app running on nginx using UWSGI. I keep getting a 502 error with the error in the subject line. I am doing all of this as root, which I know is bad practice, but I am just practicing. My config file is as follows (it's included in the nginx.conf file): server {
listen 80;
server_name 104.131.133.149; location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/root/headers;
}
location / {
include uwsgi_params;
uwsgi_pass 127.0.0.1:8080;
}
} And my uwsgi file is: [uwsgi]
project = headers
base = /root
chdir = %(base)/%(project)
home = %(base)/Env/%(project)
module = %(project).wsgi:application
master = true
processes = 5
socket = 127.0.0.1:8080
chmod-socket = 666
vacuum = true As far as I can tell I am passing all requests on port 80 (from nginx.conf) upstream to localhost, which is running on my VH, where uwsgi is listening on port 8080. I've tried this with a variety of permissions, including 777. If anyone can point out what I'm doing wrong please let me know. | Powershell and WMI. Get-WmiObject Win32_Process | Select ProcessId,CommandLine Or Get-WmiObject -Query "SELECT CommandLine FROM Win32_Process WHERE ProcessID = 3352" Note that you have to have permissions to access this information about a process. So you might have to run the command as admin if the process you want to know about is running in a privileged context. | {
"source": [
"https://serverfault.com/questions/696488",
"https://serverfault.com",
"https://serverfault.com/users/277872/"
]
} |
697,641 | In the past 2 weeks, our nightly IISReset has not come back up successfully and caused us an outage. We have a Windows Task that runs every night that executes an IISReset, and I'm wondering if this is even necessary anymore? Should I be looking into the functionality in the app pool to restart itself instead? | I wouldn't consider it a good practice. When most people usually set up things like "nightly iisresets" or "nightly reboots," generally, it's because they are running an application that is poorly-written and leaks resources, to the point where the entire system may become unstable unless we restart the application, the service, or even entire system. The thing is, those people are ignoring the actual problem. (Or are unable to fix it.) Fix the application to be stable and not leak resources, and then iisresets or system restarts will no longer be needed or helpful. Unfortunately, this is very prevalent with IIS web apps, to the point where IIS itself is designed around the idea that the applications it runs will be poorly-written and leaky. Otherwise there'd be no need to routinely recycle app pools, etc. So to recap - if nightly iisresets are part of your strategy, it's because you have a poorly-written web application and the ideal thing to do would be to fix your app. (And yes, recycling app pools is better than an iisreset, as you can recycle an app pool without affecting every other website on your server.) Edit: Here's a pretty neat blog post from a guy who basically says the same thing that I did, but he also claims that he always completely disables app pool recycling altogether, instead insisting that his team fix each and every last memory leak, which IMO is a pretty heroic and laudable effort: http://thatextramile.be/blog/2010/06/why-do-we-recycle-our-application-pools/ | {
"source": [
"https://serverfault.com/questions/697641",
"https://serverfault.com",
"https://serverfault.com/users/221509/"
]
} |
698,334 | I have to hand off a Laptop including its hard disc. Since it was not encrypted I wanted to wipe it at least quickly. I know this is not optimal on SSD, but I thought better than just plain readable. Now I am running wipe of a live USB stick and it is painfully slow. I wonder why that is. Of course there is hardly anything happening on the computer besides wiping that device, so I imagine entropy could be low (entropy_avail says it is at 1220). Would it be equally good to just call dd if=/dev/random of=/dev/sda1 bs=1k four times? Or is there a way I can call something that will increase the randomness? Or is the bottle neck somewhere completely different? | Don't attempt to "wipe" an SSD with tools designed for spinning magnetic hard drives. You won't actually destroy all the data , and you'll just reduce the lifetime of the SSD. Instead, use an erase tool specifically designed for SSDs, which can use the drive's internal flash erase (discard) to discard all of the blocks, including the ones you can't access. The SSD vendor usually provides such a tool which is guaranteed to be compatible with that vendor's drives. You can also try doing it yourself with a Secure Erase utility. Programs that do Secure Erase work with both spinning hard drives and SSDs. In addition, a few system BIOSes (mainly in business laptops) have Secure Erase functionality built in. Note that a Secure Erase will take hours on a hard drive, but only seconds on an SSD; on a hard drive every sector must be ovewritten, but on an SSD it will discard all the blocks at once and/or change the drive's internal encryption key. (And note that secure erase did not work properly on some of the earliest generation SSDs; in these cases you should just throw the drive in a crusher.) | {
"source": [
"https://serverfault.com/questions/698334",
"https://serverfault.com",
"https://serverfault.com/users/128737/"
]
} |
698,369 | When describing IPv4 networks, I can use 0.0.0.0/0 or just 0/0 to specify all networks. What is the equivalent notation for IPv6? | The IPv6 equivalent of IPv4's 0.0.0.0 is ::/0 . | {
"source": [
"https://serverfault.com/questions/698369",
"https://serverfault.com",
"https://serverfault.com/users/276831/"
]
} |
699,908 | I successfully installed Postfix on my VPS. I would like to send encrypted email. I installed all certificates and private keys and set my conf file: smtpd_tls_key_file = <path to my private key>
smtpd_tls_cert_file = <path to my cert file>
smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination
smtpd_tls_security_level = encrypt But I do not know what else to do. I mean, how can I check that my emails are being encrypted? I use the php mail() function to send outgoing mails. | When Postfix sends email to another server, it acts as an SMTP client . Therefore you need to refer to the related documentation about the SMTP client and TLS . To activate the TLS encryption feature for the Postfix SMTP client, you need to put this line in main.cf smtp_tls_security_level = may It will put the Postfix SMTP client into opportunistic TLS mode, i.e. the SMTP transaction is encrypted if the STARTTLS ESMTP feature is supported by the server. Otherwise, messages are sent in the clear. To find out whether the SMTP transaction is encrypted or not, increase smtp_tls_loglevel to 1 smtp_tls_loglevel = 1 With this config, Postfix will write a log line like this when the SMTP transaction is encrypted: postfix-2nd/smtp[66563]: Trusted TLS connection established to gmail-smtp-in.l.google.com[74.125.200.27]:25: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits) When you're finished editing the config file, remember to execute postfix reload to make the changes take effect. Note: Your config above only covers the Postfix SMTP server smtpd , the daemon used to receive email. A hands-on way to verify STARTTLS from the command line is sketched right after this entry. | {
"source": [
"https://serverfault.com/questions/699908",
"https://serverfault.com",
"https://serverfault.com/users/88209/"
]
} |
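As a hedged follow-up to the Postfix answer above, STARTTLS can also be verified by hand; mail.example.com is a placeholder and the mail log path varies by distribution:
# Check that the receiving server offers STARTTLS and inspect the certificate it presents:
openssl s_client -starttls smtp -connect mail.example.com:25 < /dev/null
# With smtp_tls_loglevel = 1, watch your own mail log for the TLS lines shown in the answer:
grep -i 'TLS connection established' /var/log/mail.log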
699,912 | I have a condition to filtering IP address from client, and I just have their IP network like this 1.1.0.0/17 . If some client from this network will be filtered as special user on my website. And I also use php as web service language. In case client who have IP 1.1.4.24 will be automatically logged to databases. Can someone tell me some way? | When postfix sends email to other server then postfix will act as SMTP client . Therefore the you need to refer to related document about SMTP client and TLS . To activate TLS encryption feature for postfix SMTP client, you need to put this line in main.cf smtp_tls_security_level = may It will put postfix SMTP client into Opportunistic-TLS-mode, i.e. SMTP transaction is encrypted if the STARTTLS ESMTP feature is supported by the server. Otherwise, messages are sent in the clear. To find out whether SMTP transaction is encrypted or not, increase smtp_tls_loglevel to 1 smtp_tls_loglevel = 1 With this config, postfix will has log line like this SMTP transaction is encrypted. postfix-2nd/smtp[66563]: Trusted TLS connection established to gmail-smtp-in.l.google.com[74.125.200.27]:25: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits) When you're finished editing the config file, then remember to execute: postfix reload To make the changes take effect. Note: Your config above only cover Postfix SMTP server smtpd , a daemon used to receive email. | {
"source": [
"https://serverfault.com/questions/699912",
"https://serverfault.com",
"https://serverfault.com/users/294893/"
]
} |
699,977 | I installed elasticsearch.90.7 with a deb file in ubuntu.
I tried to uninstall elasticsearch.90.7 with this command: sudo apt-get --purge autoremove elasticsearch And then I downloaded elasticsearch-1.6.0.deb to install elasticsearch 1.6. When I run this command to install elasticsearch 1.6 by deb file: dpkg -i elasticsearch-1.6.0.deb It shows me this: Selecting previously unselected package elasticsearch.
(Reading database ... 89826 files and directories currently installed.)
Preparing to unpack elasticsearch-1.6.0.deb ...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Unpacking elasticsearch (1.6.0) ...
Setting up elasticsearch (1.6.0) ...
Processing triggers for ureadahead (0.100.0-16) ... When I start elasticsearch with service elasticsearch start it's starting, but when I run this command: curl http://localhost:9200 It shows this error: curl: (7) Failed to connect to localhost port 9200: Connection refused I think elasticsearch is not installed properly. I want to know what I should do to install elasticsearch properly. | (1) Remove previous versions of ElasticSearch: sudo apt-get --purge autoremove elasticsearch (2) Remove the ElasticSearch directories: sudo rm -rf /var/lib/elasticsearch/
sudo rm -rf /etc/elasticsearch (3) Install ElasticSearch 1.6: sudo dpkg -i elasticsearch-1.6.0.deb (4) Start the service: sudo service elasticsearch start (5) Test if it works: sudo service elasticsearch status
curl -XGET "http://localhost:9200/_cluster/health?pretty=true"
curl "localhost:9200/_nodes/settings?pretty=true" | {
"source": [
"https://serverfault.com/questions/699977",
"https://serverfault.com",
"https://serverfault.com/users/148016/"
]
} |
700,812 | So I've googled quite a bit for this but it appears that my google-fu fails me - apologies if this is a trivial and already answered question, I could not find anything about this I'm trying to diagnose an SSL certificate hostname mismatch. When I visit the url in question, it redirects me to another page that has the correct SSL certificate. However, some clients are reporting that they are receiving an SSL certificate hostname mismatch error. My only assumption is that the redirecting page has the wrong certificate and some clients are letting it slide because it resolves with a new page that has the correct certificate. (The how and why of the issue isn't really the question) The question: From the outside in (aka, as a client in the world) - how would one view the certificate that was delivered by a page that automatically redirects to another page? | Use openssl s_client piped to openssl x509 : $ openssl s_client -connect foo.example.com:443 < /dev/null | openssl x509 -text (Add -servername foo.example.com to the s_client command if the server uses SNI .) The redirection of stdin from /dev/null for the first invocation of openssl will prevent it from hanging waiting for input. | {
"source": [
"https://serverfault.com/questions/700812",
"https://serverfault.com",
"https://serverfault.com/users/155917/"
]
} |
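A small variation on the answer above that prints only the fields usually needed for a hostname-mismatch check; foo.example.com is a placeholder and -servername matters when the server relies on SNI:
# Show just the subject, issuer and validity period of the certificate actually presented:
openssl s_client -connect foo.example.com:443 -servername foo.example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates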
700,862 | Let's say I write a mine.service file. Then I use systemctl enable mine.service . If I later decide to edit mine.service , do I have to tell systemd that mine.service was changed? If so, how do I do that? | After you make changes to your unit file, you should run systemctl daemon-reload , as outlined here . daemon-reload Reload systemd manager configuration. This will rerun all generators (see systemd.generator(7) ), reload all unit files, and recreate the entire dependency tree. While the daemon is being reloaded, all sockets systemd listens on behalf of user configuration will stay accessible. You can then restart (or reload) your service as you desire with systemctl restart your-service-name (daemon-reload won't reload/restart the services themselves, just makes systemd aware of the new configuration) | {
"source": [
"https://serverfault.com/questions/700862",
"https://serverfault.com",
"https://serverfault.com/users/278814/"
]
} |
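A minimal sketch of the workflow from the answer above, using the mine.service name from the question:
sudo systemctl daemon-reload          # make systemd re-read the changed unit file
sudo systemctl restart mine.service   # apply the change to the running service
systemctl cat mine.service            # confirm which unit file(s) systemd is actually using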
701,248 | I'm working in an office where my laptop is internet-connected, but tightly controlled. I am not allowed to install unauthorized software onto it. My development workstation is mine to do with as I please, but it does not have an internet connection. Is there any way for me to download Docker images from the hub as a file that I could then sneaker-net to my dev workstation? Similar to how I can download RPMs or Ruby Gems and burn them to CD? Or is the only way of downloading the images using the 'docker pull' command? | Short: use the save CLI command. https://docs.docker.com/engine/reference/commandline/save/ You can pull the image on a computer that has access to the internet. sudo docker pull ubuntu Then you can save this image to a file: sudo docker save -o ubuntu_image.docker ubuntu Transfer the file to the offline computer (USB/CD/whatever) and load the image from the file: sudo docker load -i ubuntu_image.docker (On older versions this was just docker load image.docker , see comments for more info.) A compressed variant of the same flow is sketched after this entry. | {
"source": [
"https://serverfault.com/questions/701248",
"https://serverfault.com",
"https://serverfault.com/users/230046/"
]
} |
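Building on the docker save/load answer above, the archive can also be compressed on the fly for the transfer; the image and file names are just examples:
# On the internet-connected machine:
sudo docker pull ubuntu
sudo docker save ubuntu | gzip > ubuntu_image.tar.gz
# On the offline workstation, after copying the file across:
gunzip -c ubuntu_image.tar.gz | sudo docker load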
701,254 | I'm running a docker container that expects SSH traffic on port 22. The host machine also expects SSH traffic, but on port 2222. While SSH-ing on port 2222 works without hickups, my SSH client complains that the remote host identification has changed when SSH-ing on port 22. This makes sense, since the docker container has a different identity than the host machine. Is there a way to resolve this? | Short: use the save CLI command. https://docs.docker.com/engine/reference/commandline/save/ You can pull the image on a computer that have access to the internet. sudo docker pull ubuntu Then you can save this image to a file sudo docker save -o ubuntu_image.docker ubuntu Transfer the file on the offline computer (USB/CD/whatever) and load the image from the file: sudo docker load -i ubuntu_image.docker (On older versions this was just docker load image.docker , see comments for more info.) | {
"source": [
"https://serverfault.com/questions/701254",
"https://serverfault.com",
"https://serverfault.com/users/295753/"
]
} |
702,040 | Is it really secure to connect to a server using SSH from hotels during a journey? Server : - CentOS 7 - Authorisation only by RSA key - password auth is denied - Non-standard port Workstation : - Ubuntu 14 - user password - password to use RSA key (standard method) Maybe it will be a good idea to keep half of the private RSA key on a USB stick, and automatically (by script) add this half to ~/.ssh/private_key before connecting? Internet will be through either WIFI in hotels, or cable in a rented apartment. UPD Sorry for being unclear at first. I mean security in two aspects here: Security of just the SSH connection through an untrusted network. Security of a computer with the key necessary for the SSH connection - if it is stolen, how to protect the server... | So, regarding making an SSH connection over an explicitly untrusted network: assuming you already have an ~/.ssh/known_hosts entry for the server from a previous connection, yes, you should be able to connect without worrying about whether the network is safe or not. The same goes if you have some other means of verifying the SSH host key. If you have never connected to the server before, and have no other way of verifying the SSH host key, then you might want to be more careful regarding the network you use to connect. | {
"source": [
"https://serverfault.com/questions/702040",
"https://serverfault.com",
"https://serverfault.com/users/257658/"
]
} |
702,828 | How can I easily see a history of every time my Windows Server has restarted or shutdown and the reason why, including user-initiated, system-initiated, and system crashed? The Windows Event Log is an obvious answer but what is the complete list of events that I should view? I found these posts that partially answer my question: Windows server last reboot time includes several answers that partially address the full restart history View Shutdown Event Tracker logs under Windows Server 2008 R2 includes an additional event id Event Log time when Computer Start up / boot up includes some of the same event ids but those don't cover every scenario AFAIK and the info is hard to understand because it is spread across multiple answers. I have several versions of Windows Server so a solution that works for at least versions 2008, 2008 R2, 2012, and 2012 R2 would be ideal. | The clearest most succinct answer I could find is: How To See PC Startup And Shutdown History In Windows which lists these event ids to monitor (quoted but edited and reformatted from article): Event ID 6005 ( alternate ): “The event log service was started.” This is synonymous to system startup. Event ID 6006 ( alternate ): “The event log service was stopped.” This is synonymous to system shutdown. Event ID 6008 ( alternate ): "The previous system shutdown was unexpected." Records that the system started after it was not shut down properly. Event ID 6009 ( alternate ): Indicates the Windows product name, version, build number, service pack number, and operating system type detected at boot time. Event ID 6013: Displays the uptime of the computer. There is no TechNet page for this id. Add to that a couple more from the Server Fault answers listed in my OP: Event ID 1074 ( alternate ): "The process X has initiated the restart / shutdown of computer on behalf of user Y for the following reason: Z." Indicates that an application or a user initiated a restart or shutdown. Event ID 1076 ( alternate ): "The reason supplied by user X for the last unexpected shutdown of this computer is: Y." Records when the first user with shutdown privileges logs on to the computer after an unexpected restart or shutdown and supplies a reason for the occurrence. Did I miss any? | {
"source": [
"https://serverfault.com/questions/702828",
"https://serverfault.com",
"https://serverfault.com/users/232730/"
]
} |
703,344 | Let's say I want to tag a Docker image, and make a typo. How do I remove the tag without removing the image itself? Neither the manpages nor the Docker documentation mention removing tags. docker tag 0e5574283393 my-imaj
docker tag 0e5574283393 my-image
# docker untag my-imaj # There is no "docker untag"! | If your image is tagged with more than one tag, then docker rmi will remove the tag, but not the image. So in your example ... # docker rmi my-imaj ... will remove that tag and leave the image present with the other correct tag. | {
"source": [
"https://serverfault.com/questions/703344",
"https://serverfault.com",
"https://serverfault.com/users/235448/"
]
} |
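A quick sketch of the clean-up flow from the answer above, reusing the tags from the question:
docker tag 0e5574283393 my-image   # add the correct tag first, so the image always keeps at least one tag
docker rmi my-imaj                 # with another tag present this only untags, it does not delete the image
docker images                      # verify that my-image is still listed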
704,643 | My goal is to limit access to docker containers to just a few public IP addresses. Is there a simple, repeatable process to accomplish my goal? Understanding only the basics of iptables while using Docker's default options, I'm finding it very difficult. I'd like to run a container, make it visible to the public Internet, but only allow connections from select hosts. I would expect to set a default INPUT policy of REJECT and then only allow connections from my hosts. But Docker's NAT rules and chains get in the way and my INPUT rules are ignored. Can somebody provide an example of how to accomplish my goal given the following assumptions? Host public IP 80.80.80.80 on eth0 Host private IP 192.168.1.10 on eth1 docker run -d -p 3306:3306 mysql Block all connection to host/container 3306 except from hosts 4.4.4.4 and 8.8.8.8 I'm happy to bind the container to only the local ip address but would need instructions on how to set up the iptables forwarding rules properly which survive docker process and host restarts. Thanks! | Two things to bear in mind when working with docker's firewall rules: To avoid your rules being clobbered by docker, use the DOCKER-USER chain Docker does the port-mapping in the PREROUTING chain of the nat table. This happens before the filter rules, so --dest and --dport will see the internal IP and port of the container. To access the original destination, you can use -m conntrack --ctorigdstport . For example: iptables -A DOCKER-USER -i eth0 -s 8.8.8.8 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
iptables -A DOCKER-USER -i eth0 -s 4.4.4.4 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j DROP NOTE: Without --ctdir ORIGINAL , this would also match the reply packets coming back for a connection from the container to port 3306 on some other server, which is almost certainly not what you want! You don't strictly need this if like me your first rule is -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT , as that will deal with all the reply packets, but it would be safer to still use --ctdir ORIGINAL anyway. | {
"source": [
"https://serverfault.com/questions/704643",
"https://serverfault.com",
"https://serverfault.com/users/229275/"
]
} |
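Putting the pieces of the answer above together as one hedged sketch (addresses and port come from the question; recent Docker versions pre-create DOCKER-USER with a trailing RETURN rule, so the rules are inserted rather than appended):
iptables -I DOCKER-USER 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -I DOCKER-USER 2 -i eth0 -s 8.8.8.8 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
iptables -I DOCKER-USER 3 -i eth0 -s 4.4.4.4 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
iptables -I DOCKER-USER 4 -i eth0 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j DROP
iptables -L DOCKER-USER -v -n --line-numbers   # confirm the ACCEPTs sit above the final DROP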
705,071 | I'm speaking at a conference next week about some software tools I've created. My laptop will be shown on a projector screen during this presentation. The presentation will be videotaped and posted on youtube. If, for some reason, I have occasion to open and edit my ~/.ssh/known_hosts file during this presentation, should I disconnect the projector while doing so? Is there any security risk to disclosing my known_hosts file? | The known_hosts file contains the trusted public keys for hosts you connected to in the past. These public keys can be obtained simply by trying to connect to these hosts. Therefore it is no security risk per se. But: It contains a history of hosts you connected to. The information may
be used by a potential attacker to footprint organization infrastructure, for example. Also it informs potential attackers that you probably have access to certain hosts and that stealing your laptop will give them access as well. Edit: To avoid showing your known_hosts file, I recommend you use the ssh-keygen utility. ssh-keygen -R ssh1.example.org for example removes the trusted keys for ssh1.example.org from your known_hosts. | {
"source": [
"https://serverfault.com/questions/705071",
"https://serverfault.com",
"https://serverfault.com/users/259315/"
]
} |
705,644 | We have a task that loads some configuration files from an external data source. After the settings are uploaded we would like to be able to restart all the tasks in a service so that the settings propagate to all instances. What's the best way to restart all services? We have a 'workaround' that involves setting the 'number of tasks' to 0 and then back up, but this is definitely not how it's supposed to be done and has downtime. | Using the AWS CLI tool: aws ecs update-service --force-new-deployment --service my-service --cluster cluster-name | {
"source": [
"https://serverfault.com/questions/705644",
"https://serverfault.com",
"https://serverfault.com/users/282283/"
]
} |
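A hedged sketch around the answer above; cluster-name and my-service are placeholders, and the wait subcommand simply blocks until the rollout settles:
aws ecs update-service --cluster cluster-name --service my-service --force-new-deployment
aws ecs wait services-stable --cluster cluster-name --services my-service   # returns once the new tasks are running and steady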
706,336 | I created a key pair using ssh-keygen and get the two clasic id_rsa and id_rsa.pub. I imported the public key into my AWS EC2 account. Now I created a windows instance and to decrypt that instance password, AWS console is asking me for a .pem file. How I can get that .pem file from my two id_rsa and id_rsa.pub files? | According to this , this command can be used: ssh-keygen -f id_rsa -e -m pem This will convert your public key to an OpenSSL compatible format.
Your private key is already in PEM format and can be used as is (as Michael Hampton stated). Double check if AWS isn't asking for a (X.509) certificate in PEM format, which would
be a different thing than your SSH keys. | {
"source": [
"https://serverfault.com/questions/706336",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
706,349 | I installed PHP7 from Remi repo with sudo yum -y install httpd
sudo yum -y install epel-release
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo rpm -Uvh remi-release-6*.rpm
sudo yum -y --enablerepo=remi,remi-test install php70
scl enable php70 'php -v'
sudo ln -s /usr/bin/php70 /usr/bin/php and it is working via CLI. Now I want to make it work with apache but i can't find a so to pass as a second argument to LoadModule LoadModule php7_module unknown_path
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch> Is this the correct approach to make PHP7 work with apache2? | By default "php70" (Software Collection) doesn't install mod_php. yum install php70-php Also check that you don't have any other mod_php installed (such as the one provided by the "php" base package). A short verification sketch follows after this entry. | {
"source": [
"https://serverfault.com/questions/706349",
"https://serverfault.com",
"https://serverfault.com/users/299841/"
]
} |
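To build on the answer above, a rough way to check which PHP module Apache actually loads after installing the collection package (package and module names follow Remi's php70 packaging and are assumptions):
yum install php70-php              # provides the mod_php built for the php70 software collection
httpd -M | grep -i php             # list loaded Apache modules; there should be exactly one php module
yum list installed | grep '^php'   # make sure the base "php" package (the old mod_php) is not installed as well
service httpd restart              # restart Apache after changing modules (systemctl restart httpd on EL7)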
706,438 | What is the difference between the three Nginx variables $host , $http_host , and $server_name ? I have a rewrite rule where I'm not sure which one I should be using: location = /vb/showthread.php {
# /vb/showthread.php?50271-What-s-happening&p=846039
if ($arg_p) {
return 301 $scheme://$host/forum/index.php?posts/$arg_p/;
} I'm looking for an answer that doesn't just say 'use ___ variable in your rewrite rule' but also explains the theoretical differences between them. | You should almost always use $host , as it's the only one guaranteed to have something sensible regardless of how the user-agent behaves, unless you specifically need the semantics of one of the other variables. The difference is explained in the nginx documentation : $host contains "in this order of precedence: host name from the request line, or host name from the 'Host' request header field, or the server name matching a request" $http_host contains the content of the HTTP "Host" header field, if it was present in the request $server_name contains the server_name of the virtual host which processed the request, as it was defined in the nginx configuration. If a server contains multiple server_name s, only the first one will be present in this variable. Since it is legal for user-agents to send the hostname in the request line rather than in a Host: header, though it is rarely done except when connecting to proxies, you have to account for this. You also have to account for the case where the user-agent doesn't send a hostname at all, e.g. ancient HTTP/1.0 requests and modern badly-written software. You might do so by diverting them to a catch-all virtual host which doesn't serve anything, if you are serving multiple web sites, or if you only have a single web site on your server you might process everything through a single virtual host. In the latter case you have to account for this as well. Only the $host variable accounts for all the possible things that a user-agent may do when forming an HTTP request. | {
"source": [
"https://serverfault.com/questions/706438",
"https://serverfault.com",
"https://serverfault.com/users/180974/"
]
} |
706,475 | I have a server that runs Debian and sshd on it, and in case I need to reboot the server my SSH session hangs at client side until TCP timeout. I assume this is because when sshd is being terminated it does not explicitly close open SSH sessions to the host. What should I do to make sshd first disconnect everyone, then terminate itself as normal? So far I don't see a parameter in man sshd_config that's related to shutsown behavior. | When you shutdown or reboot your system, systemd tries to stop all services as fast as it can. That involves bringing down the network and terminating all processes that are still alive -- usually in that order. So when systemd kills the forked SSH processes that are handling your SSH sessions, the network connection is already disabled and they have no way of closing the client connection gracefully. Your first thought might be to just kill all SSH processes as the first step during shutdown, and there are quite a few systemd service files out there that do just that. But there is of course a neater solution (how it's "supposed" to be done): systemd-logind . systemd-logind keeps track of active user sessions (local and SSH ones) and assigns all processes spawned within them to so-called "slices". That way, when the system is shut down, systemd can just SIGTERM everything inside the user slices (which includes the forked SSH process that's handing a particular session) and then continue shutting down services and the network. systemd-logind requires a PAM module to get notified of new user sessions and you'll need dbus to use loginctl to check its status, so install both of those: apt-get install libpam-systemd dbus Be sure your /etc/ssh/sshd_config is actually going to use the module with UsePAM yes . | {
"source": [
"https://serverfault.com/questions/706475",
"https://serverfault.com",
"https://serverfault.com/users/154494/"
]
} |
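A small checklist in shell form for the answer above (Debian/Ubuntu package names assumed):
apt-get install libpam-systemd dbus      # PAM hook so sshd sessions get registered with logind
grep -i '^UsePAM' /etc/ssh/sshd_config   # should print "UsePAM yes"
loginctl list-sessions                   # confirm that new SSH logins now show up as logind sessions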
706,560 | When I launch instance in AWS console I can set "Auto-assign Public IP" to true and newly created instance will be assigned with public IP address from pool. Now assume I have launched instance with this setting set to false and want to assign public IP to this instance. The same public IP as in first case, not Elastic IP. PS I know I can launch new instance and shut down old one. I'm particularly interested in assigning to one already running. | The instance that you launched without a public IP will stay without one as it is only assignable when you launch the instance. Even having a subnet with auto assign public IP switched on will not assign a public IP to your instance if, when you launched the instance you chose not to have a public IP. The only way I know is to select assign a public IP before launching the instance or having the subnet set up to auto assign public IPs which will do that only when you launch a new instance. So to summarize:
It is not possible to assign a public IP after launching that instance unless you use EIPs. | {
"source": [
"https://serverfault.com/questions/706560",
"https://serverfault.com",
"https://serverfault.com/users/236787/"
]
} |
706,819 | I've created a VPC, and inside it an RDS instance.
The RDS instance is publicly accessible and its settings are as follows: RDS settings The security group attached to the RDS instance accepts all traffic: All of my network ACLs accept all traffic.
However, I can't access my instance from a machine outside of my VPC. I get the following error: root@vps151014:~# mysql -h mysql1.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com -P 3306 -u skullberry -p
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql1.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com' (110) If I run the same command from an EC2 inside my VPC, I am able to connect.
I've tried connecting from several machines, all of them without a firewall (i.e. port 3306 open). I'm obviously missing something but everything seems to be configured correctly. What could be the issue? | For an RDS instance in VPC to "publicly" (Internet) accessible, all of the subnets it is attached to must be "public" -- as opposed to "private" -- subnets of the VPC. A public subnet is essentially defined as a subnet that has the Internet Gateway object (igw-xxxxxxxx) as its route to "the Internet," or at least to any Internet destinations you need to access. Typically, this is a destination address of 0.0.0.0/0 . Public subnets must be used for instances (including RDS) that will have an associated public IP address, and should not be used for instances that will not have public IP addresses, since private addresses do not work across the Internet without translation. A private subnet, by contrast, has its routing table configured to reach Internet destinations via another EC2 instance, typically a NAT instance. This shows in the VPC route table associated with that subnet as i-xxxxxxxx, rather than "igw." That machine (which, itself, will actually be on a different subnet than the ones for which it acts as a route destination) serves as a translator, allowing the private-IP-only instances to transparently make outbound Internet requests using the NAT machine's public IP for their Internet needs. Instances with a public IP address cannot interact properly with the Internet if attached to a private subnet. In the specific case, here, the subnets associated with the RDS instance were not really configured as something that could be simply classified as either a private or public subnet, because the subnet had no default route at all. Adding a default route through the "igw" object, or, as OP did, adding a static route to the Internet IP address where connectivity was needed, into the VPC route table for the subnets fixes the connectivity issue. However... If you experience a similar issue, you can't simply change the route table or build new route tables and associate the subnets with them, unless you have nothing else already working correctly on the subnets, because the change could reasonably be expected to break existing connectivity. The correct course, in that case, would be to provision the instances on different subnets with the correct route table entries in place. When setting up a VPC, it's ideal to clearly define the subnet roles and fully provision then with the necessary routes when the VPC is first commissioned. It's also important to remember that the entire VPC "LAN" is a software-defined network. Unlike in a physical network, where the router can become a bottleneck and it's often sensible to place machines with heavy traffic among them on the same subnet... traffic crossing subnets has no performance disadvantage on VPC. Machines should be placed on subnets that are appropriate for the machine's IP addressing needs -- public address, public subnet; no public address, private subnet. More discussion of the logistics of private/public subnets in VPC can be found in Why Do We Need Private Subnet in VPC (at Stack Overflow). | {
"source": [
"https://serverfault.com/questions/706819",
"https://serverfault.com",
"https://serverfault.com/users/300121/"
]
} |
706,833 | I just want to run this idea by some smarter people to make sure I'm not overlooking something obvious: I want to backup my Linux server to S3 using one of the many backup scripts that allow automatic pruning of backups. So my S3 IAM policy will obviously have to give that user GET, PUT, and DELETE permissions. But since the DELETE permission will be there, I need to plan against the worst-case scenario of a hacker getting root access to my server and deleting the backups on S3 using the stored credentials. To eliminate this possibility, I was thinking about the following configuration: Versioning enabled on the bucket (hacker can delete the files but they are only tagged as deleted on S3 and recoverable by me) Lifecycle policy enabled on the bucket to automatically delete old versions (eventually eliminating all versions of the file to minimize storage costs) Then, the only user that would have bucket-deletion or version-deletion permissions would be my main Amazon account user, which I would configure with MFA. Am I missing anything obvious here? I did find this claiming that... You can use the Object Expiration feature on [...] You cannot, however, use it in conjunction with S3 Versioning ...I assume that's obsolete information? In my quick informal experimentation it appears to be possible to use versioning with Object Expiration. Thanks a lot! | For an RDS instance in VPC to "publicly" (Internet) accessible, all of the subnets it is attached to must be "public" -- as opposed to "private" -- subnets of the VPC. A public subnet is essentially defined as a subnet that has the Internet Gateway object (igw-xxxxxxxx) as its route to "the Internet," or at least to any Internet destinations you need to access. Typically, this is a destination address of 0.0.0.0/0 . Public subnets must be used for instances (including RDS) that will have an associated public IP address, and should not be used for instances that will not have public IP addresses, since private addresses do not work across the Internet without translation. A private subnet, by contrast, has its routing table configured to reach Internet destinations via another EC2 instance, typically a NAT instance. This shows in the VPC route table associated with that subnet as i-xxxxxxxx, rather than "igw." That machine (which, itself, will actually be on a different subnet than the ones for which it acts as a route destination) serves as a translator, allowing the private-IP-only instances to transparently make outbound Internet requests using the NAT machine's public IP for their Internet needs. Instances with a public IP address cannot interact properly with the Internet if attached to a private subnet. In the specific case, here, the subnets associated with the RDS instance were not really configured as something that could be simply classified as either a private or public subnet, because the subnet had no default route at all. Adding a default route through the "igw" object, or, as OP did, adding a static route to the Internet IP address where connectivity was needed, into the VPC route table for the subnets fixes the connectivity issue. However... If you experience a similar issue, you can't simply change the route table or build new route tables and associate the subnets with them, unless you have nothing else already working correctly on the subnets, because the change could reasonably be expected to break existing connectivity. 
The correct course, in that case, would be to provision the instances on different subnets with the correct route table entries in place. When setting up a VPC, it's ideal to clearly define the subnet roles and fully provision then with the necessary routes when the VPC is first commissioned. It's also important to remember that the entire VPC "LAN" is a software-defined network. Unlike in a physical network, where the router can become a bottleneck and it's often sensible to place machines with heavy traffic among them on the same subnet... traffic crossing subnets has no performance disadvantage on VPC. Machines should be placed on subnets that are appropriate for the machine's IP addressing needs -- public address, public subnet; no public address, private subnet. More discussion of the logistics of private/public subnets in VPC can be found in Why Do We Need Private Subnet in VPC (at Stack Overflow). | {
"source": [
"https://serverfault.com/questions/706833",
"https://serverfault.com",
"https://serverfault.com/users/179635/"
]
} |
707,228 | What are the pro's and con's of consumer SSDs vs. fast 10-15k spinning drives in a server environment? We cannot use enterprise SSDs in our case as they are prohibitively expensive. Here's some notes about our particular use case: Hypervisor with 5-10 VM's max. No individual VM will be crazy i/o intensive. Internal RAID 10, no SAN/NAS... I know that enterprise SSDs: are rated for longer lifespans and perform more consistently over long periods than consumer SSDs... but does that mean consumer SSDs are completely unsuitable for a server environment, or will they still perform better than fast spinning drives? Since we're protected via RAID/backup, I'm more concerned about performance over lifespan (as long as lifespan isn't expected to be crazy low). | Note: This answer is specific to the server components described in the OP's comment. Compatibility is going to dictate everything here. Dell PERC array controllers are LSI devices. So anything that works on an LSI controller should be okay. Your ability to monitor the health of your RAID array is paramount. Since this is Dell, ensure you have the appropriate agents, alarms and monitoring in place to report on errors from your PERC controller. Don't use RAID5. We don't do that anymore in the sysadmin world . Keep a cold spare handy. You don't necessarily have to go to a consumer disk. There are enterprise SSD drives available at all price points. I urge people to buy SAS SSDs instead of SATA wherever possible. In addition, you can probably find better pricing on the officially supported equipment as well (nobody pays retail). Don't listen to voodoo about rotating SSD drives out to try to outsmart the RAID controller or its wear-leveling algorithms. The use case you've described won't have a significant impact on the life of the disks. Also see: Are SSD drives as reliable as mechanical drives (2013)? | {
"source": [
"https://serverfault.com/questions/707228",
"https://serverfault.com",
"https://serverfault.com/users/66487/"
]
} |
707,377 | On one of my servers I've noticed really delay on SSH logins. Connecting using the ssh -vvv options the delay occurs at debug1: Entering interactive session. extract of connection: debug1: Authentication succeeded (publickey).
Authenticated to IP_REDACTED ([IP_REDACTED]:22).
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug2: callback start
debug2: fd 3 setting TCP_NODELAY
debug3: packet_set_tos: set IP_TOS 0x10
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1 using the method outlined here I generated strace output and noticed the line 14:09:53.676004 ppoll([{fd=5, events=POLLIN}], 1, {24, 999645000}, NULL, 8) = 1 ([{fd=5, revents=POLLIN}], left {0, 0}) <25.020764> which takes 25 seconds. extract of strace output: 14:09:53.675567 clock_gettime(CLOCK_MONOTONIC, {4662549, 999741404}) = 0 <0.000024>
14:09:53.675651 recvmsg(5, {msg_name(0)=NULL, msg_iov(1)=[{"l\4\1\1\n\0\0\0\2\0\0\0\215\0\0\0\1\1o\0\25\0\0\0", 24}], msg_controll
en=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 24 <0.000024>
14:09:53.675744 recvmsg(5, {msg_name(0)=NULL, msg_iov(1)=[{"/org/freedesktop/DBus\0\0\0\2\1s\0\24\0\0\0"..., 146}], msg_controllen
=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 146 <0.000025>
14:09:53.675842 recvmsg(5, 0x7ffe0ff1dfa0, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailab
le) <0.000023>
14:09:53.675925 clock_gettime(CLOCK_MONOTONIC, {4662550, 96075}) = 0 <0.000024>
14:09:53.676004 ppoll([{fd=5, events=POLLIN}], 1, {24, 999645000}, NULL, 8) = 1 ([{fd=5, revents=POLLIN}], left {0, 0}) <25.020764>
14:10:18.696865 recvmsg(5, {msg_name(0)=NULL, msg_iov(1)=[{"l\3\1\0013\0\0\0\3\0\0\0m\0\0\0\6\1s\0\5\0\0\0", 24}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 24 <0.000017>
14:10:18.696944 recvmsg(5, {msg_name(0)=NULL, msg_iov(1)=[{":1.10\0\0\0\4\1s\0#\0\0\0org.freedesktop."..., 155}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 155 <0.000018> I have noticed an entry in the auth logs at the relevant time: Jul 21 14:10:18 click sshd[8165]: pam_systemd(sshd:session): Failed to create session: Activation of org.freedesktop.login1 timed out Not knowing enough about this what is it trying to poll for and why is it now taking 25seconds on this particular server. The journalctl -u systemd-logind command shows Jul 20 11:33:06 click systemd-logind[19415]: Failed to abandon session scope: Transport endpoint is not connected
Jul 21 05:04:54 myhost systemd[1]: Started Login Service.
Jul 21 12:15:30 myhost systemd[1]: Started Login Service.
Jul 21 12:17:04 myhost systemd[1]: Started Login Service.
Jul 21 12:49:55 myhost systemd[1]: Started Login Service.
Jul 21 13:57:05 myhost systemd[1]: Started Login Service.
Jul 21 13:58:49 myhost systemd[1]: Started Login Service.
Jul 21 14:01:55 myhost systemd[1]: Started Login Service.
Jul 21 14:08:32 myhost systemd[1]: Started Login Service.
Jul 21 14:09:53 myhost systemd[1]: Started Login Service.
Jul 21 14:19:08 myhost systemd[1]: Started Login Service.
Jul 21 14:21:26 myhost systemd[1]: Started Login Service.
Jul 21 14:22:37 myhost systemd[1]: Started Login Service.
Jul 21 14:25:20 myhost systemd[1]: Started Login Service.
Jul 21 14:30:27 myhost systemd[1]: Started Login Service.
Jul 21 15:02:56 myhost systemd[1]: Started Login Service. Issuing the command systemctl restart systemd-logind.service fixes it (for now probably). What is the Activation of org.freedesktop.login1 it mentions? Is there a way I can prevent having to restart logind in future? I expect over time I will have this issue with the rest of the servers I manage. Just noticed this starting to happen on another server. $ sudo service systemd-logind status
● systemd-logind.service - Login Service
Loaded: loaded (/lib/systemd/system/systemd-logind.service; static)
Active: active (running) since Tue 2015-06-16 14:10:57 BST; 1 months 12 days ago
Docs: man:systemd-logind.service(8)
man:logind.conf(5)
http://www.freedesktop.org/wiki/Software/systemd/logind
http://www.freedesktop.org/wiki/Software/systemd/multiseat
Main PID: 1701 (systemd-logind)
Status: "Processing requests..."
CGroup: /system.slice/systemd-logind.service
└─1701 /lib/systemd/systemd-logind
Jul 28 13:16:21 myhost systemd[1]: Started Login Service.
Jul 28 13:16:47 myhost systemd[1]: Started Login Service.
Jul 28 16:09:23 myhost systemd[1]: Started Login Service.
Jul 28 16:09:49 myhost systemd[1]: Started Login Service.
Jul 28 16:10:15 myhost systemd[1]: Started Login Service.
Jul 28 16:10:41 myhost systemd[1]: Started Login Service.
Jul 28 22:50:19 myhost systemd[1]: Started Login Service.
Jul 29 05:00:15 myhost systemd[1]: Started Login Service.
Jul 29 11:00:20 myhost systemd[1]: Started Login Service.
Jul 29 11:09:56 myhost systemd[1]: Started Login Service. EDIT - expanded journalctl output. EDIT2 - added systemd-logind status as suggested in comments when noticed this starting on another server. UPDATE - This is starting to happen to the rest of my Jessie servers. Am I the only one experiencing this? There must be some fix other than restarting systemd-logind, has anyone any thoughts? There is a Debian bug report on this 770135 . | This happens when dbus is restarted, but systemd-logind is not restarted. Just do the following: systemctl restart systemd-logind The solution is from here: https://major.io/2015/07/27/very-slow-ssh-logins-on-fedora-22/ | {
"source": [
"https://serverfault.com/questions/707377",
"https://serverfault.com",
"https://serverfault.com/users/300559/"
]
} |
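A follow-up sketch to the answer above for checking whether you are in this situation before restarting anything:
systemctl status dbus systemd-logind          # compare start times: logind started before the last dbus restart is the telltale sign
journalctl -u systemd-logind --since today    # look for 'Failed to ...' messages around the slow logins
systemctl restart systemd-logind              # the fix from the answer above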
707,544 | I am going to retire an old libvirt + KVM server, but I need to preserve the VMs. Unfortunately, the network is down and I cannot create connections to the system. Is there any way to export the VMs using virsh or any other utility? Clarification: I need disks and everything. I will deploy machines to another server. | If you need to back up your VM configuration using virsh, you can use the following command: virsh dumpxml vmname > vmname.xml If you need to move your VM to another server, dump the VM config and transfer the XML. If you are using files as backend storage for the VM, you can copy those files to the other server with scp or rsync. Once you have copied the disk files and the XML, you can define and start the VM with virsh define /tmp/myvm.xml && virsh start myvm A fuller end-to-end sketch follows after this entry. | {
"source": [
"https://serverfault.com/questions/707544",
"https://serverfault.com",
"https://serverfault.com/users/69492/"
]
} |
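Expanding the answer above into a rough end-to-end sketch; newserver and the disk paths are placeholders:
virsh dumpxml vmname > /tmp/vmname.xml   # save the VM configuration
virsh domblklist vmname                  # list the disk image paths the VM uses
virsh shutdown vmname                    # copy disks only while the guest is shut down, to keep them consistent
rsync -av /var/lib/libvirt/images/vmname.qcow2 newserver:/var/lib/libvirt/images/
rsync -av /tmp/vmname.xml newserver:/tmp/
# on the destination host:
virsh define /tmp/vmname.xml && virsh start vmname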
707,955 | My nginx default configuration file is becoming huge. I'd like to split it to smaller config files, each including only one, maximum 4 locations to each file, so that I can enable/disable them quickly. Actual file looks like this: server {
listen 80 default_server;
root /var/www/
location /1 {
config info...;
}
location /2 {
config info....;
}
location /abc {
proxy_pass...;
}
location /xyz {
fastcgi_pass....;
}
location /5678ab {
config info...;
}
location /admin {
config info....;
} now, if I want to split that up to have only a few locations in each file (locations belonging together), what would be a proper way to do it without causing chaos (like declaring root in each file, hence having weird path's that nginx tries to find files) ? | You are probably looking for Nginx's include function: http://nginx.org/en/docs/ngx_core_module.html#include You can use it like this: server {
listen 80;
server_name example.com;
[…]
include conf/location.conf;
} include also accepts wildcards so you could also write include include/*.conf; to include every *.conf file in the directory include . | {
"source": [
"https://serverfault.com/questions/707955",
"https://serverfault.com",
"https://serverfault.com/users/267713/"
]
} |
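One possible layout following the answer above: keep root and index in the main server block, and put one small file per location group in an include directory. The directory name and the /admin snippet are just examples.
sudo mkdir -p /etc/nginx/include
cat <<'EOF' | sudo tee /etc/nginx/include/admin.conf
location /admin {
    # config info....;
}
EOF
sudo nginx -t && sudo nginx -s reload   # validate, then reload; rename a file to *.conf.off to disable it quickly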
708,076 | I was planning to sign my DNS zone with DNSSEC. My zone, the registrar and my DNS server (BIND9) all support DNSSEC. The only one who doesn't support DNSSEC is my secondary nameserver provider (namely buddyns.com ). On their website , they state this regarding DNSSEC: BuddyNS does not support DNSSEC because it exposes to some
vulnerabilities unsuited to a high-volume DNS service. Well, I thought the use of DNSSEC is currently somehow questionable as most resolvers don't check if the records are signed correctly. What I didn't know was that - according to their statement - it seems like providing it would expose security vulnerabilities of some kind. What are those "vulnerabilites"? | DNSSEC has some risks, but they are not directly related to reflection or amplification. The EDNS0 message size expansion is a red herring in this case. Let me explain. Any exchange of packets that does not depend on a previous proof of identity is subject to abuse by DDoS attackers who can use that unauthenticated packet exchange as a reflector, and perhaps also as an amplifier. For example, ICMP (the protocol behind "ping") can be abused in this way. As can the TCP SYN packet, which solicits up to 40 SYN-ACK packets even if the SYN was spoofed to come from some victim who doesn't want those SYN-ACK packets. And of course, all UDP services are vulnerable to this attack, including NTP, SSDP, uPNP, and as noted by other responses here, also including DNS. Most intrusion detection, intrusion prevention, and load balancer appliances are bottlenecks, unable to keep up with "line rate" traffic. Also many routers can't run at line rate, and some switches. These bottlenecks, by being the smallest thing "in the path", and smaller than the links themselves, are the actual target of congestion-based DDoS attacks. If you can keep somebody's firewall busy with attack traffic, then good traffic won't get through, even if the links aren't full. And what slows down a firewall isn't the total number of bits per second (which can be increased by using larger messages, and EDNS0 and DNSSEC will do), but rather the total number of packets per second. There's a lot of urban legend out there about how DNSSEC makes DDoS worse because of DNSSEC's larger message size, and while this makes intuitive sense and "sounds good", it is simply false. But if there were a shred of truth to this legend, the real answer would still lay elsewhere-- [because DNSSEC always uses EDNS0, but EDNS0 can be used without DNSSEC], and many normal non-DNSSEC responses are as large as a DNSSEC response would be. Consider the TXT records used to represent SPF policies or DKIM keys. Or just any large set of address or MX records. In short, no attack requires DNSSEC, and thus any focus on DNSSEC as a DDoS risk is misspent energy. DNSSEC does have risks! It's hard to use, and harder to use correctly. Often it requires a new work flow for zone data changes, registrar management, installation of new server instances. All of that has to be tested and documented, and whenever something breaks that's related to DNS, the DNSSEC technology must be investigated as a possible cause. And the end result if you do everything right will be that, as a zone signer, your own online content and systems will be more fragile to your customers. As a far-end server operator, the result will be, that everyone else's content and systems will be more fragile to you. These risks are often seen to outweigh the benefits, since the only benefit is to protect DNS data from in-flight modification or substitution. That attack is so rare as to not be worth all this effort. We all hope DNSSEC becomes ubiquitous some day, because of the new applications it will enable. But the truth is that today, DNSSEC is all cost, no benefit, and with high risks. 
So if you don't want to use DNSSEC, that's your prerogative, but don't let anyone confuse you that DNSSEC's problem is its role as a DDoS amplifier. DNSSEC has no necessary role as a DDoS amplifier; there are other cheaper better ways to use DNS as a DDoS amplifier. If you don't want to use DNSSEC, let it be because you have not yet drunk the Kool Aid and you want to be a last-mover (later) not a first-mover (now). DNS content servers, sometimes called "authority servers", must be prevented from being abused as DNS reflecting amplifiers, because DNS uses UDP, and because UDP is abusable by spoofed-source packets. The way to secure your DNS content server against this kind of abuse is not to block UDP, nor to force TCP (using the TC=1 trick), nor to block the ANY query, nor to opt out of DNSSEC. None of those things will help you. You need DNS Response Rate Limiting (DNS RRL), a completely free technology which is now present in several open source name servers including BIND, Knot, and NSD. You can't fix the DNS reflection problem with your firewall, because only a content-aware middlebox such as the DNS server itself (with RRL added) knows enough about the request to be able to accurately guess what's an attack and what's not. I want to emphasize, again: DNS RRL is free, and every authority server should run it. In closing, I want to expose my biases. I wrote most of BIND8, I invented EDNS0, and I co-invented DNS RRL. I've been working on DNS since 1988 as a 20-something, and I am now grumpy 50-something, with less and less patience for half-baked solutions to misunderstood problems. Please accept my apologies if this message sounds too much like "hey you kids, get offa my lawn!" | {
"source": [
"https://serverfault.com/questions/708076",
"https://serverfault.com",
"https://serverfault.com/users/301092/"
]
} |
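If you want to see the message-size point from the answer above for yourself, dig makes the comparison easy; example.com, the nameserver and the SOA record type here are placeholders, and the "MSG SIZE rcvd" line is the number to compare.
dig soa example.com @ns1.example.com | grep 'MSG SIZE'            # plain response size
dig soa example.com @ns1.example.com +dnssec | grep 'MSG SIZE'    # same query with DNSSEC records requested
# DNS RRL itself is configured inside the authoritative server (e.g. BIND's rate-limit block), which is not shown here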
708,293 | Is it possible JUST to delete the log files in a directory by using logrotate w/o actually rotating them? We have an app that generates logs in the following format: app.log.DD_MM_YYYY. I am unsuccessful with logrotate having the following config: /opt/log/app/app.log.* {
rotate 0
missingok
nomail
} Can log rotate do this or should I just write a script and place it within cron? Best,
-Iulian | In that case you may want to use postrotate. In the example below, postrotate will delete files that are older than 1 day after the logs have been rotated; feel free to modify it to fit your needs. /opt/log/app/app.log.* {
missingok
nomail
postrotate
/usr/bin/find /opt/log/app/ -name "app.log.*" -type f -mtime +0 -exec rm {} \;
endscript
} | {
"source": [
"https://serverfault.com/questions/708293",
"https://serverfault.com",
"https://serverfault.com/users/261834/"
]
} |
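Since the question also asked about a plain cron job, this is the equivalent standalone cleanup; the 7-day retention, the schedule and the cron.d-style user field are assumptions to adjust.
# /etc/cron.d/app-log-cleanup (hypothetical file)
0 3 * * * root /usr/bin/find /opt/log/app/ -name 'app.log.*' -type f -mtime +7 -exec rm {} \;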
709,005 | A lot of things about SNMP seemed cumbersome to me even 15 years ago. One example is the concept of MIB being a local resource to "make sense" of the otherwise numeric OIDs. Has SNMP been modernized or mutated into something else? Is it still a must-have feature for network equipment? | Sadly, SNMP is still in common usage. Later versions of the protocol have addressed numerous issues in SNMPv1, but those have almost entirely been directed at fixing the security model. As a result, SNMP traffic is now comparitively bloaty, but they have not addressed what I consider to be the glaring shortcoming in SNMP - that data stored in the MIB resides outside the monitoring/monitored device exchange. The separation of the MIB-stored data from that exchange, and the consequent use of numeric OIDs on the wire, made sense in SNMPv1, as it kept most exchanges to a single UDP datagram in each direction. As of v3, it no longer makes any sense, to my mind - but I'm not the IETF. Sadly, SNMP is still a sort of lowest-common-denominator management protocol, and I'm constantly surprised how many devices I see out there where the easiest way to extract monitoring data from them is good old RO-community-string-in-UDP-based SNMPv1. Edit (2018): because it's so germane, I quote from Geoff Huston's excellent article in the August 2018 edition of the Internet Protocol Journal : The Internet converged on using the Simple Network Management
Protocol (SNMP) a quarter of a century ago, and despite its security
weaknesses, its inefficiency, its incredibly irritating use of Abstract
Syntax Notation One (ASN.1), and its use in sustaining some forms
of Distributed Denial-of-Service (DDoS) attacks, it still enjoys widespread use. | {
"source": [
"https://serverfault.com/questions/709005",
"https://serverfault.com",
"https://serverfault.com/users/74548/"
]
} |
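To see the MIB-versus-numeric-OID point from the question in practice, the net-snmp tools show both forms; the device address and community string below are placeholders.
snmpget -v2c -c public 192.0.2.1 sysUpTime.0        # name resolved through the locally installed MIB files
snmpget -v2c -c public -On 192.0.2.1 sysUpTime.0    # -On prints the numeric OID that actually goes on the wire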
709,006 | I am trying to redirect my domain to a new document root if a cookie value is something. I did setup the rewrite and it works fine. RewriteLog "/home/user/dev/logs/rewrite.log"
RewriteLogLevel 3
RewriteCond %{HTTP_COOKIE} !^new_layout_v1_dev$
RewriteCond $1 !^php5.fastcgi [NC]
RewriteRule ^/(.*)$ /home/user/dev/user.dev/htdocs/$1 [C] The problem I have is that the site has a rewrite.conf configuration file to rewrite SEO friendly URL's. When the document root redirect does redirect me to a new document root then it does not load the next rewrite's or the config in the new document root. Errors: The requested URL /testime-pilte-2655600.html was not found on this server. As we use SEO friendly URLs in our webapp and redirect them via rewrite rules to the right php file, then when rewrite.conf is not loaded, the web page will show not found error. So my question is, how could I make document root rewrite work so that it will load the new document root rewrite config as well. | Sadly, SNMP is still in common usage. Later versions of the protocol have addressed numerous issues in SNMPv1, but those have almost entirely been directed at fixing the security model. As a result, SNMP traffic is now comparitively bloaty, but they have not addressed what I consider to be the glaring shortcoming in SNMP - that data stored in the MIB resides outside the monitoring/monitored device exchange. The separation of the MIB-stored data from that exchange, and the consequent use of numeric OIDs on the wire, made sense in SNMPv1, as it kept most exchanges to a single UDP datagram in each direction. As of v3, it no longer makes any sense, to my mind - but I'm not the IETF. Sadly, SNMP is still a sort of lowest-common-denominator management protocol, and I'm constantly surprised how many devices I see out there where the easiest way to extract monitoring data from them is good old RO-community-string-in-UDP-based SNMPv1. Edit (2018): because it's so germane, I quote from Geoff Huston's excellent article in the August 2018 edition of the Internet Protocol Journal : The Internet converged on using the Simple Network Management
Protocol (SNMP) a quarter of a century ago, and despite its security
weaknesses, its inefficiency, its incredibly irritating use of Abstract
Syntax Notation One (ASN.1), and its use in sustaining some forms
of Distributed Denial-of-Service (DDoS) attacks, it still enjoys widespread use. | {
"source": [
"https://serverfault.com/questions/709006",
"https://serverfault.com",
"https://serverfault.com/users/159984/"
]
} |
709,433 | I'd like to enable Git "Push to Deploy" on my CentOS 7 server. Currently I can only get Git 1.8.3.1 via yum. I need a newer version. Do I have to build it from source or is there any repo I can use? I already added EPEL and elrepo but yum still gives me Git 1.8.3.1. | You could use an IUS repository ( https://ius.io/ ) as provided on Git's official site here or here . It contains prebuilt binaries for x86_64 . To do that, run (as root): yum install epel-release
yum remove git
rpm -U https://centos7.iuscommunity.org/ius-release.rpm
yum install git2u ( centos7 can be replaced with centos6 or rhel{6,7} if you are not using CentOS). Note: some users report that there is no more package called git2u . You can also try packages git222 or git224 in that case. Another option would be to use another RPM repository ( i386 & x86_64 ): sudo yum -y install https://packages.endpointdev.com/rhel/7/os/x86_64/endpoint-repo.x86_64.rpm
sudo yum install git Note 2 : as reported by @alaindeseine in the comments, there is an issue accessing https://centos7.iuscommunity.org/ius-release.rpm . In that case use https://repo.ius.io/ius-release-el7.rpm | {
"source": [
"https://serverfault.com/questions/709433",
"https://serverfault.com",
"https://serverfault.com/users/302102/"
]
} |
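Whichever repository you end up using, a quick check that yum actually pulled in a newer build; the package name depends on which repo you chose.
git --version                             # should now report something newer than 1.8.3.1
rpm -q git2u 2>/dev/null || rpm -q git    # see which package currently provides git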
709,694 | Our domain, grahamhancock.com is being wrongly resolved by a few people around the world, but it resolves correctly for most people. When I run through a list of free open DNS providers, about 90% resolve correctly and give information consistent with our zone file. 10%, however, do not, and claim the IP address to be one linked to some Amazon EC2 instance which we've never owned or used ever in the past. Here are some example DNS servers giving the wrong information: dig www.grahamhancock.com @173.84.127.88
dig www.grahamhancock.com @209.222.18.222 How could these servers have the wrong information, and how can we get back control of the situation? Could this be something malicious, or a misconfiguration? We're a 1-million-hits-a-month site, with good search rankings, so we're probably a target for something malicious. The wrong IP address that the erroneous server are returning to some people points to some get-rich-quick site on an AWS EC2 instance. What should we do? | Drifter is correct, you have a nameserver configuration problem. Here's the tail end of the output from dig +trace +additional www.grahamhancock.com : grahamhancock.com. 172800 IN NS ns1.grahamhancock.com.
grahamhancock.com. 172800 IN NS ns2.grahamhancock.com.
grahamhancock.com. 172800 IN NS server.grahamhancock.com.
ns1.grahamhancock.com. 172800 IN A 199.168.117.67
ns2.grahamhancock.com. 172800 IN A 199.168.117.67
server.grahamhancock.com. 172800 IN A 199.168.117.67
;; Received 144 bytes from 192.35.51.30#53(f.gtld-servers.net) in 92 ms
www.grahamhancock.com. 14400 IN CNAME grahamhancock.com.
grahamhancock.com. 14400 IN A 199.168.117.67
grahamhancock.com. 86400 IN NS ns2.grahamhancock.com.com.
grahamhancock.com. 86400 IN NS ns1.grahamhancock.com.com.
;; Received 123 bytes from 199.168.117.67#53(ns2.grahamhancock.com) in 17 ms Your glue records are pointing to an IP address of 199.168.117.67, which returns the correct response. Your zone however is defining nameserver records ending in com.com . If we +trace one of those nameservers instead... com.com. 172800 IN NS ns-180.awsdns-22.com.
com.com. 172800 IN NS ns-895.awsdns-47.net.
com.com. 172800 IN NS ns-1084.awsdns-07.org.
com.com. 172800 IN NS ns-2015.awsdns-59.co.uk.
;; Received 212 bytes from 192.26.92.30#53(c.gtld-servers.net) in 22 ms
ns1.grahamhancock.com.com. 30 IN A 54.201.82.69
com.com. 172800 IN NS ns-1084.awsdns-07.org.
com.com. 172800 IN NS ns-180.awsdns-22.com.
com.com. 172800 IN NS ns-2015.awsdns-59.co.uk.
com.com. 172800 IN NS ns-895.awsdns-47.net.
;; Received 196 bytes from 205.251.195.127#53(ns-895.awsdns-47.net) in 16 ms ...we end up at someone's AWS hosted nameservers. Your problem is something known as a glue record mismatch . Remote nameservers are initially learning about your domain via the glue records, but once those remote servers perform a refresh they end up querying the bogus nameservers that you've defined with an extra .com at the end. This is not your only problem. You are listing the same IP address three times in your glue records, which is extremely volatile. You should always have multiple nameservers, they should never share a subnet or upstream network peer, and they should never be located at the same physical location. As matters currently stand, any brief routing problem between DNS servers and your single server will cause your domain to be temporarily unreachable. Update: This Q&A has been featured on the front page and is getting lots of comments. Unfortunately, that includes people who are just a little too eager to reply to this answer without checking to see if their points have already been addressed in the expanded comments. The detail that most people seem to be overlooking is the comment that I'm quoting here: [...] geo-redundant DNS servers prevent scenarios where a brief routing interruption results in temporary negative caching of nameservers. However brief the negative caching period ends up being, it will almost certainly exceed the amount of time that there was a connectivity interruption. [...] the number of scenarios where lack of DNS geo-redundancy won't create sporadic and difficult to troubleshoot availability problems is exactly zero. If you think my understanding of negative caching of nameservers is wrong, that's open game for discussion, but outside of that you need to bring something to the table other than "it's a small site and who cares if both the website and DNS server are down at the same time". If you're saying this you don't understand the topic nearly as well as you think you do. Second Update: I went ahead and wrote a canonical Q&A that we can link to whenever the single DNS server topic comes up in the future. Hopefully this puts the matter to rest. | {
"source": [
"https://serverfault.com/questions/709694",
"https://serverfault.com",
"https://serverfault.com/users/195966/"
]
} |
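Two quick dig checks, specific to this domain, that expose the mismatch described in the answer above: what the parent .com servers delegate to versus what the zone itself publishes.
dig +norecurse @f.gtld-servers.net grahamhancock.com NS   # delegation and glue held by the .com servers
dig +norecurse @199.168.117.67 grahamhancock.com NS       # NS records the zone itself serves (note the extra .com.com)
# After fixing the zone's NS records, both answers should list the same nameserver names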
709,738 | I have this server (44.44.44.44, for instance) running a webserver. I have routed pollnote.com to the server to access my webserver. Everything works fine. To access the server, I added my Public Key to .ssh/authorized_keys so I can do ssh [email protected] to log in without problems. The issue comes when I try it like this: ssh [email protected] . The terminal just displays nothing, and it waits for me until I decide to abort the command. What do I need to do to access the server using the domain name as reference? UPDATE I should have mentioned, I am accessing the server through CloudFlare. Maybe it is relevant..? data ➜ ~ dig pollnote.com
; <<>> DiG 9.9.5-9ubuntu0.1-Ubuntu <<>> mydomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56675
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;pollnote.com. IN A
;; ANSWER SECTION:
pollnote.com. 299 IN A 104.27.165.70
pollnote.com. 299 IN A 104.27.164.70
;; Query time: 54 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Thu Jul 30 19:12:38 CEST 2015
;; MSG SIZE rcvd: 73 ➜ ~ ssh -vvv [email protected]
OpenSSH_6.7p1 Ubuntu-5ubuntu1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to pollnote.com [104.27.165.70] port 22. | When you connect by IP address, the SSH connection goes directly to your server, but if you use the domain name it goes through Cloudflare's defenses. My suggestion would be to either use direct.pollnote.com (I think CloudFlare creates it automatically, but people often remove it) or add your own alias like ssh.pollnote.com and disable CloudFlare protection on it. | {
"source": [
"https://serverfault.com/questions/709738",
"https://serverfault.com",
"https://serverfault.com/users/75438/"
]
} |
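A quick way to confirm what the answer above describes, using the names from the question; the user name is a placeholder, and direct.pollnote.com only exists if that record hasn't been removed.
dig +short pollnote.com A           # proxied: returns Cloudflare addresses (the 104.27.x.x seen above)
dig +short direct.pollnote.com A    # a grey-clouded record should return the server's real IP (44.44.44.44 in the question)
ssh -v user@direct.pollnote.com     # then SSH against the unproxied name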
709,806 | The system I have an API deployed on EC2 machines on AWS. Incoming HTTPS requests are passed to an elastic load balancer . The load balancer handles the SSL, and passes the request to an Nginx server, that proxies the requests to the specific servers according to the request URL. The pain Nginx machines require a lot of maintenance work, especially when servers IP addresses are changed. Moreover, URL-based proxy routing really seems like a natural continuation of a load balancer. Having a sane web-based or API-based interface to control URL routing would be a tremendous boon. The question Is there any cloud-based routing solution that can proxy HTTP requests by URL schemas, replacing my Nginx machine? | You can use AWS API Gateway ( documentation ). API Gateway helps developers deliver robust, secure and scalable mobile and web application backends. API Gateway allows developers to securely connect mobile and web applications to business logic hosted on AWS Lambda, APIs hosted on Amazon EC2, or other publicly addressable web services hosted inside or outside of AWS . With API Gateway, developers can create and operate APIs for their backend services without developing and maintaining infrastructure to handle authorization and access control, traffic management, monitoring and analytics, version management and software development kit (SDK) generation. API Gateway now supports HTTP Proxy integration for pass-through resources, so you don't need to describe your payload and query params explicitly (which was required previously). | {
"source": [
"https://serverfault.com/questions/709806",
"https://serverfault.com",
"https://serverfault.com/users/10904/"
]
} |
710,076 | I have just installed CentOS 7: [root@new ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core) I am trying to configure the firewall, and I'm told that in CentOS 7 iptables is no longer used, replaced by firewalld. When attempting to run a command to set a firewall rule as such: firewall-cmd --add-port=80/tcp I receive the following message: [root@new ~]# firewall-cmd --add-port=80/tcp
-bash: firewall-cmd: command not found edit : I tried the following command, too: [root@new ~]# firewall-offline-cmd --add-port=80/tcp
-bash: firewall-offline-cmd: command not found without any success. I tried running the following to check that firewalld was installed: [root@new ~]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
firewalld.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead) Following this output, I tried starting firewalld: [root@new ~]# service firewalld start
Redirecting to /bin/systemctl start firewalld.service
Failed to issue method call: Unit firewalld.service failed to load: No such file or directory. Any ideas what is wrong with the CentOS 7 install? This is a clean install on an OpenVZ VPS, I'm yet to make any changes at all. | Two possible options Your PATH does not contain /usr/bin firewall-cmd is not installed yum install firewalld | {
"source": [
"https://serverfault.com/questions/710076",
"https://serverfault.com",
"https://serverfault.com/users/221288/"
]
} |
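A short sequence covering both possibilities from the answer above (note that on some OpenVZ containers firewalld may still refuse to start even once installed).
command -v firewall-cmd || ls -l /usr/bin/firewall-cmd   # is the binary present and on PATH?
rpm -q firewalld                                         # is the package installed at all?
sudo yum install -y firewalld
sudo systemctl enable firewalld && sudo systemctl start firewalld
firewall-cmd --state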
710,931 | I have a list of IP addresses, and I want to find out whether the instances associated with those addresses are still running or terminated. I am launching and terminating a lot of instances on a daily basis, and just want to remove their certificates from the puppetmaster. If there is any alternative method by which I can achieve my goal, I can do that. | aws ec2 describe-instances --filter Name=ip-address,Values=IP_1,..IP_N Should do what you need. Use the filter name of private-ip-address to select using the private address in your VPC. Pipe through something like jq -r '.Reservations[].Instances[] | .InstanceId, .PublicIpAddress' if you want the corresponding InstanceID | {
"source": [
"https://serverfault.com/questions/710931",
"https://serverfault.com",
"https://serverfault.com/users/54280/"
]
} |
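Putting the filter and the output step together: a sketch assuming the AWS CLI is configured, with placeholder IPs, that also prints the instance state so running versus terminated is visible directly.
aws ec2 describe-instances \
    --filters Name=ip-address,Values=198.51.100.10,198.51.100.11 \
    --query 'Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]' \
    --output table
# Note (my addition): terminated instances release their public IPs, so addresses that return nothing here are likely already gone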
711,101 | After another user asked this question: Dropbox on linux server - how to include/exclude folders? My icon is still not working after performing some of the steps listed in other questions and answers: https://askubuntu.com/questions/358913/no-dropbox-icon-in-the-indicator-panel https://askubuntu.com/questions/182567/dropbox-icon-in-tray-is-missing Edit: In case anyone was wondering while reading this post, I was able to get my icon to show up finally by following these links: https://www.reddit.com/r/elementaryos/comments/2ufjsy/dropbox_icon_is_not_visible/ https://github.com/nathandyer/elementary-dropbox-mods The question should still be valid for anyone that wants to do this from the command line. There was an answer for excluding, but not one for including. Is there any way to achieve this? I see the former command listed in the dropbox command's help text, but not anything that could help me with including. Does anybody know how to achieve this? Here is the current help text that I see: Dropbox command-line interface
commands:
Note: use dropbox help <command> to view usage for a specific command.
status get current status of the dropboxd
help provide help
puburl get public url of a file in your dropbox
stop stop dropboxd
running return whether dropbox is running
start start dropboxd
filestatus get current sync status of one or more files
ls list directory contents with current sync status
autostart automatically start dropbox at login
exclude ignores/excludes a directory from syncing
lansync enables or disables LAN sync I also found some official Dropbox help documentation that also strangely only mentions excluding files, without including others from the Dropbox folder that are not currently synced. | To exclude all files/folders: cd to your dropbox folder (usually cd ~/Dropbox ) then type ~/bin/dropbox.py exclude add * This will exclude everything in your dropbox folder from syncing. ( Be careful! This will remove all the files that you synced ) Then, if you want to start syncing the folder "dir", type ~/bin/dropbox.py exclude remove dir Taken from http://www.dropboxwiki.com/tips-and-tricks/using-the-official-dropbox-command-line-interface-cli#comment-1778553228 | {
"source": [
"https://serverfault.com/questions/711101",
"https://serverfault.com",
"https://serverfault.com/users/303367/"
]
} |
711,168 | I'm running Apache2 in a docker container, and want to write nothing to the disk, writing logs to stdout and stderr. I've seen a few different ways to do this ( Supervisord and stdout/stderr , Apache access log to stdout ) but these seem like hacks. Is there no way to do this by default? To be clear, I do not want to tail the log, since that will result in things being written to the disk in the container. | ErrorLog /dev/stderr
TransferLog /dev/stdout works on Ubuntu and CentOS for me | {
"source": [
"https://serverfault.com/questions/711168",
"https://serverfault.com",
"https://serverfault.com/users/7358/"
]
} |
711,529 | I get an email alert whenever there's something to update, and typically do them that day. This tends to happen most days. For whatever reason I've had no alerts since 20th July until today (I believe I did a manual yum update the other day just to check, and sure enough there was nothing to do). Today's update lists a LOT of things. Is this just a backlog from 20th July onwards? Why wasn't yum updating anything in that time? Has there been some major security flaw that has caused everyone to update their software? Or has something gone wrong in my system? Or have the repos been compromised? Thanks ImageMagick.x86_64 6.7.2.7-2.el6 base
ImageMagick-devel.x86_64 6.7.2.7-2.el6 base
abrt.x86_64 2.0.8-34.el6.centos base
abrt-addon-ccpp.x86_64 2.0.8-34.el6.centos base
abrt-addon-kerneloops.x86_64 2.0.8-34.el6.centos base
abrt-addon-python.x86_64 2.0.8-34.el6.centos base
abrt-cli.x86_64 2.0.8-34.el6.centos base
abrt-libs.x86_64 2.0.8-34.el6.centos base
abrt-tui.x86_64 2.0.8-34.el6.centos base
at.x86_64 3.1.10-48.el6 base
augeas-libs.x86_64 1.0.0-10.el6 base
authconfig.x86_64 6.1.12-23.el6 base
b43-openfwwf.noarch 5.2-10.el6 base
bash.x86_64 4.1.2-33.el6 base
bind-libs.x86_64 32:9.8.2-0.37.rc1.el6_7.2 updates
bind-utils.x86_64 32:9.8.2-0.37.rc1.el6_7.2 updates
binutils.x86_64 2.20.51.0.2-5.43.el6 base
biosdevname.x86_64 0.6.2-1.el6 base
centos-release.x86_64 6-7.el6.centos.12.3 base
chkconfig.x86_64 1.3.49.3-5.el6 base
cpp.x86_64 4.4.7-16.el6 base
cpuspeed.x86_64 1:1.5-22.el6 base
cronie.x86_64 1.4.4-15.el6 base
cronie-anacron.x86_64 1.4.4-15.el6 base
cups-libs.x86_64 1:1.4.2-72.el6 base
curl.x86_64 7.19.7-46.el6 base
dejavu-fonts-common.noarch 2.33-1.el6 base
dejavu-lgc-sans-mono-fonts.noarch 2.33-1.el6 base
dejavu-sans-mono-fonts.noarch 2.33-1.el6 base
device-mapper.x86_64 1.02.95-3.el6_7.1 updates
device-mapper-event.x86_64 1.02.95-3.el6_7.1 updates
device-mapper-event-libs.x86_64 1.02.95-3.el6_7.1 updates
device-mapper-libs.x86_64 1.02.95-3.el6_7.1 updates
device-mapper-multipath.x86_64 0.4.9-87.el6 base
device-mapper-multipath-libs.x86_64 0.4.9-87.el6 base
dhclient.x86_64 12:4.1.1-49.P1.el6.centos base
dhcp-common.x86_64 12:4.1.1-49.P1.el6.centos base
dmidecode.x86_64 1:2.12-6.el6 base
dracut.noarch 004-388.el6 base
dracut-kernel.noarch 004-388.el6 base
e2fsprogs.x86_64 1.41.12-22.el6 base
e2fsprogs-libs.x86_64 1.41.12-22.el6 base
efibootmgr.x86_64 0.5.4-13.el6 base
elfutils.x86_64 0.161-3.el6 base
elfutils-libelf.x86_64 0.161-3.el6 base
elfutils-libs.x86_64 0.161-3.el6 base
ethtool.x86_64 2:3.5-6.el6 base
fprintd.x86_64 0.1-22.git04fd09cfa.el6 base
fprintd-pam.x86_64 0.1-22.git04fd09cfa.el6 base
gcc.x86_64 4.4.7-16.el6 base
gdbm.x86_64 1.8.0-38.el6 base
ghostscript.x86_64 8.70-21.el6 base
ghostscript-devel.x86_64 8.70-21.el6 base
glibc.x86_64 2.12-1.166.el6_7.1 updates
glibc-common.x86_64 2.12-1.166.el6_7.1 updates
glibc-devel.x86_64 2.12-1.166.el6_7.1 updates
glibc-headers.x86_64 2.12-1.166.el6_7.1 updates
glusterfs.x86_64 3.6.0.54-1.el6 base
glusterfs-api.x86_64 3.6.0.54-1.el6 base
glusterfs-libs.x86_64 3.6.0.54-1.el6 base
gnutls.x86_64 2.8.5-18.el6 base
gnutls-utils.x86_64 2.8.5-18.el6 base
gpxe-roms-qemu.noarch 0.9.7-6.14.el6 base
grep.x86_64 2.20-3.el6 base
grub.x86_64 1:0.97-94.el6 base
hal-info.noarch 20090716-5.el6 base
httpd.x86_64 2.2.15-45.el6.centos base
httpd-tools.x86_64 2.2.15-45.el6.centos base
hwdata.noarch 0.233-14.1.el6 base
initscripts.x86_64 9.03.49-1.el6.centos base
iproute.x86_64 2.6.32-45.el6 base
iptables.x86_64 1.4.7-16.el6 base
iptables-ipv6.x86_64 1.4.7-16.el6 base
iputils.x86_64 20071127-20.el6 base
irqbalance.x86_64 2:1.0.7-5.el6 base
iscsi-initiator-utils.x86_64 6.2.0.873-14.el6 base
kernel.x86_64 2.6.32-573.1.1.el6 updates
kernel-firmware.noarch 2.6.32-573.1.1.el6 updates
kernel-headers.x86_64 2.6.32-573.1.1.el6 updates
kexec-tools.x86_64 2.0.0-286.el6 base
kpartx.x86_64 0.4.9-87.el6 base
krb5-libs.x86_64 1.10.3-42.el6 base
libX11.x86_64 1.6.0-6.el6 base
libX11-common.noarch 1.6.0-6.el6 base
libX11-devel.x86_64 1.6.0-6.el6 base
libcgroup.x86_64 0.40.rc1-16.el6 base
libcom_err.x86_64 1.41.12-22.el6 base
libcurl.x86_64 7.19.7-46.el6 base
libdrm.x86_64 2.4.59-2.el6 base
libgcc.x86_64 4.4.7-16.el6 base
libgomp.x86_64 4.4.7-16.el6 base
libgudev1.x86_64 147-2.63.el6 base
libpcap.x86_64 14:1.4.0-4.20130826git2dbcaa1.el6 base
libreport.x86_64 2.0.9-24.el6.centos base
libreport-cli.x86_64 2.0.9-24.el6.centos base
libreport-compat.x86_64 2.0.9-24.el6.centos base
libreport-plugin-kerneloops.x86_64 2.0.9-24.el6.centos base
libreport-plugin-logger.x86_64 2.0.9-24.el6.centos base
libreport-plugin-mailx.x86_64 2.0.9-24.el6.centos base
libreport-plugin-reportuploader.x86_64
2.0.9-24.el6.centos base
libreport-plugin-rhtsupport.x86_64 2.0.9-24.el6.centos base
libreport-python.x86_64 2.0.9-24.el6.centos base
libsemanage.x86_64 2.0.43-5.1.el6 base
libss.x86_64 1.41.12-22.el6 base
libstdc++.x86_64 4.4.7-16.el6 base
libudev.x86_64 147-2.63.el6 base
libuser.x86_64 0.56.13-8.el6_7 updates
libvirt.x86_64 0.10.2-54.el6 base
libvirt-client.x86_64 0.10.2-54.el6 base
libvirt-python.x86_64 0.10.2-54.el6 base
libxcb.x86_64 1.9.1-3.el6 base
libxcb-devel.x86_64 1.9.1-3.el6 base
libxml2.x86_64 2.7.6-20.el6 base
libxml2-python.x86_64 2.7.6-20.el6 base
logrotate.x86_64 3.7.8-23.el6 base
lsof.x86_64 4.82-5.el6 base
lvm2.x86_64 2.02.118-3.el6_7.1 updates
lvm2-libs.x86_64 2.02.118-3.el6_7.1 updates
man-pages-overrides.noarch 6.7.5-1.el6 base
mdadm.x86_64 3.3.2-5.el6 base
microcode_ctl.x86_64 1:1.17-20.el6 base
mlocate.x86_64 0.22.2-6.el6 base
module-init-tools.x86_64 3.9-25.el6 base
nc.x86_64 1.84-24.el6 base
ncurses.x86_64 5.7-4.20090207.el6 base
ncurses-base.x86_64 5.7-4.20090207.el6 base
ncurses-libs.x86_64 5.7-4.20090207.el6 base
netcf-libs.x86_64 0.2.4-3.el6 base
nfs-utils.x86_64 1:1.2.3-64.el6 base
nfs-utils-lib.x86_64 1.1.5-11.el6 base
ntp.x86_64 4.2.6p5-5.el6.centos base
ntpdate.x86_64 4.2.6p5-5.el6.centos base
ntsysv.x86_64 1.3.49.3-5.el6 base
numad.x86_64 0.5-12.20150602git.el6 base
openldap.x86_64 2.4.40-5.el6 base
openssh.x86_64 5.3p1-111.el6 base
openssh-clients.x86_64 5.3p1-111.el6 base
openssh-server.x86_64 5.3p1-111.el6 base
openssl.x86_64 1.0.1e-42.el6 base
pam_passwdqc.x86_64 1.0.5-8.el6 base
parted.x86_64 2.1-29.el6 base
pcre.x86_64 7.8-7.el6 base
pcre-devel.x86_64 7.8-7.el6 base
perl.x86_64 4:5.10.1-141.el6 base
perl-CGI.x86_64 3.51-141.el6 base
perl-Compress-Raw-Zlib.x86_64 1:2.021-141.el6 base
perl-Compress-Zlib.x86_64 2.021-141.el6 base
perl-IO-Compress-Base.x86_64 2.021-141.el6 base
perl-IO-Compress-Zlib.x86_64 2.021-141.el6 base
perl-Module-Pluggable.x86_64 1:3.90-141.el6 base
perl-Pod-Escapes.x86_64 1:1.04-141.el6 base
perl-Pod-Simple.x86_64 1:3.13-141.el6 base
perl-Time-HiRes.x86_64 4:1.9721-141.el6 base
perl-libs.x86_64 4:5.10.1-141.el6 base
perl-version.x86_64 3:0.77-141.el6 base
pinentry.x86_64 0.7.6-8.el6 base
policycoreutils.x86_64 2.0.83-24.el6 base
polkit.x86_64 0.96-11.el6 base
procps.x86_64 3.2.8-33.el6 base
pulseaudio-libs.x86_64 0.9.21-21.el6 base
pulseaudio-libs-glib2.x86_64 0.9.21-21.el6 base
python.x86_64 2.6.6-64.el6 base
python-devel.x86_64 2.6.6-64.el6 base
python-libs.x86_64 2.6.6-64.el6 base
python-virtinst.noarch 0.600.0-29.el6 base
qemu-img.x86_64 2:0.12.1.2-2.479.el6 base
qemu-kvm.x86_64 2:0.12.1.2-2.479.el6 base
quota.x86_64 1:3.17-23.el6 base
rng-tools.x86_64 5-1.el6 base
rpm.x86_64 4.8.0-47.el6 base
rpm-libs.x86_64 4.8.0-47.el6 base
rpm-python.x86_64 4.8.0-47.el6 base
seabios.x86_64 0.6.1.2-30.el6 base
selinux-policy.noarch 3.7.19-279.el6 base
selinux-policy-targeted.noarch 3.7.19-279.el6 base
sg3_utils-libs.x86_64 1.28-8.el6 base
sos.noarch 3.2-28.el6.centos base
spice-glib.x86_64 0.26-4.el6 base
spice-gtk.x86_64 0.26-4.el6 base
spice-gtk-python.x86_64 0.26-4.el6 base
spice-server.x86_64 0.12.4-12.el6 base
strace.x86_64 4.8-10.el6 base
sudo.x86_64 1.8.6p3-19.el6 base
systemtap-runtime.x86_64 2.7-2.el6 base
sysvinit-tools.x86_64 2.87-6.dsf.el6 base
tar.x86_64 2:1.23-13.el6 base
tcpdump.x86_64 14:4.0.0-5.20090921gitdf3cb4.2.el6 base
time.x86_64 1.7-38.el6 base
udev.x86_64 147-2.63.el6 base
usbredir.x86_64 0.5.1-2.el6 base
vim-common.x86_64 2:7.4.629-5.el6 base
vim-enhanced.x86_64 2:7.4.629-5.el6 base
vim-minimal.x86_64 2:7.4.629-5.el6 base
virt-manager.x86_64 0.9.0-29.el6 base
wireless-tools.x86_64 1:29-6.el6 base
xorg-x11-drv-ati-firmware.noarch 7.5.99-3.el6 base
yum.noarch 3.2.29-69.el6.centos base
yum-cron.noarch 3.2.29-69.el6.centos base
Obsoleting Packages
yum.noarch 3.2.29-69.el6.centos base
yum-plugin-downloadonly.noarch 1.1.30-30.el6 @base
Error: Package: php56u-pecl-imagick-3.1.2-5.ius.centos6.x86_64 (@ius)
Requires: libMagickWand.so.2()(64bit)
Removing: ImageMagick-6.5.4.7-7.el6_5.x86_64 (@updates)
libMagickWand.so.2()(64bit)
Updated By: ImageMagick-6.7.2.7-2.el6.x86_64 (base)
Not found
Error: Package: php56u-pecl-imagick-3.1.2-5.ius.centos6.x86_64 (@ius)
Requires: libMagickCore.so.2()(64bit)
Removing: ImageMagick-6.5.4.7-7.el6_5.x86_64 (@updates)
libMagickCore.so.2()(64bit)
Updated By: ImageMagick-6.7.2.7-2.el6.x86_64 (base)
Not found
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Updates downloaded, use "yum -C update" manually to install them. | As you can see from the output, the release version has changed from 6.6 to 6.7: centos-release.x86_64 6-7.el6.centos.12.3 base So this is perfectly normal. http://wiki.centos.org/Manuals/ReleaseNotes/CentOS6.7 | {
"source": [
"https://serverfault.com/questions/711529",
"https://serverfault.com",
"https://serverfault.com/users/192663/"
]
} |
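A quick way to confirm the explanation above on your own box before applying the backlog.
cat /etc/centos-release    # should read 6.7 once the update lands
rpm -q centos-release      # shows the packaged release version
# The php56u-pecl-imagick dependency error in the question output appears to be a separate conflict between that IUS package and the newer base ImageMagick, not part of the point release itself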
712,808 | Summary Chrome is reporting ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY when I try and connect to my local web server over HTTPS. I am almost certain this problem has to do with my recent Windows 10 upgrade, but I don't know how to fix it. What worked Here's the chain of events, with me having Windows 8.1 Pro installed at the start: Generated a self-signed certificate intended for use as a trusted root CA using the following command: makecert.exe -pe -ss Root -sr LocalMachine -n "CN=local, OU=development" -r -a sha512 -e 01/01/2020 Generated an application-specific certificate from the trusted root CA: makecert.exe -pe -ss My -sr LocalMachine -n "CN=myapp.local, OU=Development" -is Root -ir LocalMachine -in local -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 -a sha512 -e 01/01/2020 -sky -eku 1.3.6.1.5.5.7.3.1 Added a HOSTS file entry for myapp.local that points to 127.0.0.1 Created an IIS 8.5 application that is bound to the myapp.local domain and listens for HTTPS requests only Assigned the myapp.local certificate to the web site With this setup, I had no trouble accessing my local web site from Chrome without any certificate or security warnings. The browser displayed the green padlock, as expected. What doesn't work Recently, I upgraded to Windows 10. I did not know at the time that Windows 10 ships with IIS 10, which supports HTTP/2. Now, when I try and access my local web sites with Chrome, I receive an ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY error. I should note that the same request sent from Edge does not result in an error and does use HTTP/2 for the connection. A cursory Google search didn't turn up anything promising, except to hint that the problem might be that HTTP/2 or Chrome is strict about what ciphers it will accept in SSL certificates. Thinking it may be an issue with enabled cipher suites in Windows (but not being an expert in such things), I downloaded the latest version of IIS Crypto . I clicked the Best Practices button, clicked Apply, and restarted my machine. IIS Crypto reports these settings as "best practices": Enabled protocols: TLS 1.0, TLS 1.1, TLS 1.2 Enabled ciphers: Triple DES 168, AES 128/128, AES 256/256 Enabled hashes: MD5, SHA, SHA 256, SHA 384, SHA 512 Enabled key exchanges: Diffie-Hellman, PKCS, ECDH SSL cipher suite order: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P284 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P284 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P284 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA I'll also add that the browser application I'm developing does not need to be usable from Windows XP. I know there are some issues about Windows XP not supporting newer protocols. Detailed information about the HTTPS negotiation I decided to use Fiddler to intercept the HTTPS negotiation. Here's what Fiddler reported about the request: Version: 3.3 (TLS/1.2)
Random: 6B 47 6D 2B BC AE 00 F1 1D 41 57 7C 46 DB 35 19 D7 EF A9 2B B1 D0 81 1D 35 0D 75 7E 4C 05 14 B0
"Time": 2/1/1993 9:53:15 AM
SessionID: 98 2F 00 00 15 E7 C5 70 12 70 CD A8 D5 C7 D4 4D ED D8 1F 42 F9 A8 2C E6 67 13 AD C0 47 C1 EA 04
Extensions:
server_name myapp.local
extended_master_secret empty
SessionTicket empty
signature_algs sha512_rsa, sha512_ecdsa, sha384_rsa, sha384_ecdsa, sha256_rsa, sha256_ecdsa, sha224_rsa, sha224_ecdsa, sha1_rsa, sha1_ecdsa
status_request OCSP - Implicit Responder
NextProtocolNego empty
SignedCertTimestamp (RFC6962) empty
ALPN http/1.1, spdy/3.1, h2-14, h2
channel_id(GoogleDraft) empty
ec_point_formats uncompressed [0x0]
elliptic_curves secp256r1 [0x17], secp384r1 [0x18]
Ciphers:
[C02B] TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
[C02F] TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
[009E] TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
[CC14] TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
[CC13] TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
[CC15] TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
[C00A] TLS1_CK_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
[C014] TLS1_CK_ECDHE_RSA_WITH_AES_256_CBC_SHA
[0039] TLS_DHE_RSA_WITH_AES_256_SHA
[C009] TLS1_CK_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
[C013] TLS1_CK_ECDHE_RSA_WITH_AES_128_CBC_SHA
[0033] TLS_DHE_RSA_WITH_AES_128_SHA
[009C] TLS_RSA_WITH_AES_128_GCM_SHA256
[0035] TLS_RSA_AES_256_SHA
[002F] TLS_RSA_AES_128_SHA
[000A] SSL_RSA_WITH_3DES_EDE_SHA
[00FF] TLS_EMPTY_RENEGOTIATION_INFO_SCSV
Compression:
[00] NO_COMPRESSION and the response: Version: 3.3 (TLS/1.2)
SessionID: 98 2F 00 00 15 E7 C5 70 12 70 CD A8 D5 C7 D4 4D ED D8 1F 42 F9 A8 2C E6 67 13 AD C0 47 C1 EA 04
Random: 55 C6 8D BF 78 72 88 41 34 BD B4 B8 DA ED D3 C6 20 5C 46 D6 5A 81 BD 6B FC 36 23 0B 15 21 5C F6
Cipher: TLS_RSA_WITH_AES_128_GCM_SHA256 [0x009C]
CompressionSuite: NO_COMPRESSION [0x00]
Extensions:
ALPN h2
0x0017 empty
renegotiation_info 00
server_name empty What's working Based on Håkan Lindqvist's answer, and the very detailed and apparently-thoroughly-researched answer here , I reconfigured IIS Crypto with the following settings, which eliminated the Chrome error: Enabled protocols: TLS 1.0, TLS 2.0, TLS 3.0 Enabled ciphers: AES 128/128, AES 256/256 Enabled hashes: SHA, SHA 256, SHA 384, SHA 512 Enabled key exchanges: Diffie-Hellman, PKCS, ECDH SSL cipher suite order: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P521 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P521 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P384 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P521 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P521 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P384 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P521 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_CBC_SHA | Http/2 requirements as per https://httpwg.org/specs/rfc7540.html#rfc.section.9.2.2 : 9.2.2 TLS 1.2 Cipher Suites A deployment of HTTP/2 over TLS 1.2 SHOULD NOT use any of the cipher suites that are listed in the cipher suite black list ( Appendix A ). Endpoints MAY choose to generate a connection error (Section 5.4.1) of type INADEQUATE_SECURITY if one of the cipher suites from the black list is negotiated. A deployment that chooses to use a black-listed cipher suite risks triggering a connection error unless the set of potential peers is known to accept that cipher suite. Implementations MUST NOT generate this error in reaction to the negotiation of a cipher suite that is not on the black list. Consequently, when clients offer a cipher suite that is not on the black list, they have to be prepared to use that cipher suite with HTTP/2. The black list includes the cipher suite that TLS 1.2 makes mandatory, which means that TLS 1.2 deployments could have non-intersecting sets of permitted cipher suites. To avoid this problem causing TLS handshake failures, deployments of HTTP/2 that use TLS 1.2 MUST support TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 [TLS-ECDHE] with the P-256 elliptic curve [FIPS186]. Note that clients might advertise support of cipher suites that are on the black list in order to allow for connection to servers that do not support HTTP/2. This allows servers to select HTTP/1.1 with a cipher suite that is on the HTTP/2 black list. However, this can result in HTTP/2 being negotiated with a black-listed cipher suite if the application protocol and cipher suite are independently selected. 
Your negotiated cipher `TLS_RSA_WITH_AES_128_GCM_SHA256` is in the above mentioned (and linked) Http/2 blacklist. I believe you will want to adjust your cipher suites (ordering?) to meet the above requirements. Maybe simply putting TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 with the NIST P-256 elliptic curve (identified as TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256_P256 on Windows) at the top of the list, or at least before anything included in the blacklist? | {
"source": [
"https://serverfault.com/questions/712808",
"https://serverfault.com",
"https://serverfault.com/users/273354/"
]
} |
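One way to verify the negotiated suite from a client (requires OpenSSL 1.0.2+ for ALPN; the hostname is the one from the question).
openssl s_client -connect myapp.local:443 -servername myapp.local -alpn h2 </dev/null 2>/dev/null \
    | grep -E 'ALPN|Cipher'
# After reordering the suites, the Cipher line should show an ECDHE suite (OpenSSL prints it as e.g. ECDHE-RSA-AES128-GCM-SHA256) rather than a plain RSA key-exchange suite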
712,814 | On my CentOS 7 systems, I use tuned-adm to set a profile appropriate to the environment during configuration, but after that, I never subsequently change that profile. It seems that the tuned system spawns a process ( /usr/bin/python -Es /usr/sbin/tuned -l -P ) for dynamic monitoring and adjustment. This process uses noticeably more memory compared to other daemons on my system. I would like to reduce nonessential services on a certain memory-constrained server. If I do not use a profile that involves dynamically adjusting parameters such as power consumption, does the tuned process need to keep running? Can I safely stop the process and have the profile that I originally set up persist from that point on? | Http/2 requirements as per https://httpwg.org/specs/rfc7540.html#rfc.section.9.2.2 : 9.2.2 TLS 1.2 Cipher Suites A deployment of HTTP/2 over TLS 1.2 SHOULD NOT use any of the cipher suites that are listed in the cipher suite black list ( Appendix A ). Endpoints MAY choose to generate a connection error (Section 5.4.1) of type INADEQUATE_SECURITY if one of the cipher suites from the black list is negotiated. A deployment that chooses to use a black-listed cipher suite risks triggering a connection error unless the set of potential peers is known to accept that cipher suite. Implementations MUST NOT generate this error in reaction to the negotiation of a cipher suite that is not on the black list. Consequently, when clients offer a cipher suite that is not on the black list, they have to be prepared to use that cipher suite with HTTP/2. The black list includes the cipher suite that TLS 1.2 makes mandatory, which means that TLS 1.2 deployments could have non-intersecting sets of permitted cipher suites. To avoid this problem causing TLS handshake failures, deployments of HTTP/2 that use TLS 1.2 MUST support TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 [TLS-ECDHE] with the P-256 elliptic curve [FIPS186]. Note that clients might advertise support of cipher suites that are on the black list in order to allow for connection to servers that do not support HTTP/2. This allows servers to select HTTP/1.1 with a cipher suite that is on the HTTP/2 black list. However, this can result in HTTP/2 being negotiated with a black-listed cipher suite if the application protocol and cipher suite are independently selected. Your negotiated cipher `TLS_RSA_WITH_AES_128_GCM_SHA256` is in the above mentioned (and linked) Http/2 blacklist. I believe you will want to adjust your cipher suites (ordering?) to meet the above requirements. Maybe simply putting TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 with the NIST P-256 elliptic curve (identified as TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256_P256 on Windows) at the top of the list, or at least before anything included in the blacklist? | {
"source": [
"https://serverfault.com/questions/712814",
"https://serverfault.com",
"https://serverfault.com/users/221763/"
]
} |
712,820 | These are the steps I've done so far: download spark-1.4.1-bin-hadoop2.6.tgz, unzip it, run .spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh. The master is working but the slave doesn't start. This is the output: [ec2-user@ip-172-31-24-107 ~]$ sudo ./spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/ec2-user/spark-1.4.1-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-ip-172-31-24-107.out
localhost: Permission denied (publickey).
[ec2-user@ip-172-31-24-107 ~]$ This is the secure log Aug 9 00:09:30 ip-172-31-24-107 sudo: ec2-user : TTY=pts/0 ; PWD=/home/ec2-user ; USER=root ; COMMAND=./spark-1.4.1-bin-hadoop2.6/sbin/start-all.sh
Aug 9 00:09:32 ip-172-31-24-107 sshd[4828]: Connection closed by 127.0.0.1 [preauth] I believe the problem is with SSH but I haven't been able to find the solution on google... Any idea how to fix my SSH issue? | Http/2 requirements as per https://httpwg.org/specs/rfc7540.html#rfc.section.9.2.2 : 9.2.2 TLS 1.2 Cipher Suites A deployment of HTTP/2 over TLS 1.2 SHOULD NOT use any of the cipher suites that are listed in the cipher suite black list ( Appendix A ). Endpoints MAY choose to generate a connection error (Section 5.4.1) of type INADEQUATE_SECURITY if one of the cipher suites from the black list is negotiated. A deployment that chooses to use a black-listed cipher suite risks triggering a connection error unless the set of potential peers is known to accept that cipher suite. Implementations MUST NOT generate this error in reaction to the negotiation of a cipher suite that is not on the black list. Consequently, when clients offer a cipher suite that is not on the black list, they have to be prepared to use that cipher suite with HTTP/2. The black list includes the cipher suite that TLS 1.2 makes mandatory, which means that TLS 1.2 deployments could have non-intersecting sets of permitted cipher suites. To avoid this problem causing TLS handshake failures, deployments of HTTP/2 that use TLS 1.2 MUST support TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 [TLS-ECDHE] with the P-256 elliptic curve [FIPS186]. Note that clients might advertise support of cipher suites that are on the black list in order to allow for connection to servers that do not support HTTP/2. This allows servers to select HTTP/1.1 with a cipher suite that is on the HTTP/2 black list. However, this can result in HTTP/2 being negotiated with a black-listed cipher suite if the application protocol and cipher suite are independently selected. Your negotiated cipher `TLS_RSA_WITH_AES_128_GCM_SHA256` is in the above mentioned (and linked) Http/2 blacklist. I believe you will want to adjust your cipher suites (ordering?) to meet the above requirements. Maybe simply putting TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 with the NIST P-256 elliptic curve (identified as TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256_P256 on Windows) at the top of the list, or at least before anything included in the blacklist? | {
"source": [
"https://serverfault.com/questions/712820",
"https://serverfault.com",
"https://serverfault.com/users/209054/"
]
} |
713,148 | I have a reverse proxy set up for access to a third party application located inside an intranet from the internet.
Let's say this application is on the URL: https://internalserver:8080/ (reachable only from the intranet) and the proxy is on: https://proxyserver/ (reachable from any place in the world) The proxy is managed by nginx and is working ok. When the user accesses https://proxyserver/ they get the content of the app at https://internalserver:8080/ . The problem is that the application is writing absolute URLs in the HTML response so, when the user clicks a link to a new page the browser is trying to locate the page with its internal name, e.g. https://internalserver:8080/somepage instead of https://proxyserver/somepage . I know this is a program bug, but I'm not able to modify the program. Can I intercept the response, modify the URLs and send it (modified) to the final client with nginx? Or maybe with another tool? EDIT: I saw this question before, but my case is more specific, the quoted question ask for a generic modification. In that case the fast-cgi ad hoc program is the best solution, what I want is a more specific solution for (I think) a more common scenario. while a fast-cgi program can work I´m looking for a easiest and maybe stronger and proved into the real world, solution for this scenario. | Here is an official Nginx Video on YouTube which demonstrates Inline Content Rewriting. https://youtu.be/7Y7ORypoHhE?t=20m22s Indeed with sub_filter http://nginx.org/en/docs/http/ngx_http_sub_module.html In your case, you're looking at something like: location / {
sub_filter_once off;
sub_filter_types text/html;
sub_filter "https://internalserver:8080" "https://proxyserver";
} | {
"source": [
"https://serverfault.com/questions/713148",
"https://serverfault.com",
"https://serverfault.com/users/304143/"
]
} |
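A quick check that the rewrite is actually applied, using the placeholder hostnames from the question. Note (my addition, not from the answer): sub_filter only operates on uncompressed responses, so if the backend gzips its output you may also need proxy_set_header Accept-Encoding ""; in that location.
curl -sk https://proxyserver/ | grep -c 'internalserver:8080'   # should print 0 once the substitution works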
713,187 | We have some PowerShell scripts used to set up various dev/test/prod environments, and one of them installs and configures IIS. Unfortunately these scripts don't appear to be working under Windows 10 at this time because the Install-WindowsFeature cmdlet is missing. Were these removed intentionally, or is there some hoop I need to jump through to install them that wasn't previously necessary? | While Ryan's answer is correct, I would recommend staying away from the Install-WindowsFeature cmdlets if you want to run your scripts on workstations as well. You will always be dependent on RSAT even though you don't need it otherwise. Just use Enable-WindowsOptionalFeature, which works on servers and workstations. You would need to change your scripts, as the feature names are different too. I wrote a bit about it: Different ways for installing Windows features on the command line | {
"source": [
"https://serverfault.com/questions/713187",
"https://serverfault.com",
"https://serverfault.com/users/280206/"
]
} |
713,211 | I'm having a lot of problems connecting a domain name to an nginx server. Over /etc/nginx/sites-available/ I have a file called rsmweb: server {
# this is only 1 of the many configs I've tried..
listen 80;
server_name rsm.website;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl spdy;
server_name rsm.website;
ssl on;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
..etc and I created a symbolic link to /etc/nginix/sites-enabled/default When I used a similar configuration with a localhost it worked, but now I'm trying to use my domain name and I'm really stuck. In some tries, when I replace the following code with my ip, I can connect by using my ip instead of my domain name. The thing that I can not understand is that if my domain name points to my ip, why it doesn't works? My domain name is configured like this: HOST NAME IP ADDRESS/ URL RECORD TYPE MX PREF TTL
@ 82.216.93.120 A (Address) n/a 1800
www 82.216.93.120 A (Address) n/a 1800 after some reading, I thought that I needed to add my ip/domain over /etc/hosts 82.216.93.120 rsm.website But I found in a blog that if the domain name points to the ip, it is not necessary to do that... I'm really confused and stuck, thanks for any help! | While Ryan's answer is correct, I would recommend to stay away from the Install-WindowsFeature cmdlets if you want to run your scripts on workstations as well. You will always be dependent on RSAT even though you don't need it otherwise. Just use Enable-WindowsOptionalFeature which works on servers and workstations. You would need to change your scripts, the feature names are different too. I wrote a bit about: Different ways for installing Windows features on the command line | {
"source": [
"https://serverfault.com/questions/713211",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
714,890 | In IPv6, you are not supposed to subnet to anything smaller than a /64 (RFC 5375). Among other things, SLAAC does not work with smaller subnets, and apparently also some other features will break. What are the workarounds for situations where ISPs will only give you a single /64 but you need multiple subnets internally? The common advice seems to be to just find another ISP who will hand out a /56 or /48. In some parts of the world, that may work, but in our area (USA), that's not feasible due to a lack of competition. Most of my clients are lucky if they have a single ISP serving their area. Many people here are still on dial-up. My clients won't qualify for their own /48 from ARIN. | If the ISP won't give you more than a /64, then that ISP sucks. If it is any relief I can tell you that I have to deal with ISPs that suck even more than that. Around here it is perfectly normal to take public IPv4 addresses away from customers and put them behind a CGN. And if you ask them for IPv6 addresses, they will tell you that they are not offering IPv6 because there is no shortage of IPv4 addresses yet, and as long as there are servers without IPv6 support they won't offer IPv6 because it is impossible for a dual stack client to connect to an IPv4-only server. If any ISP would give me what you have, I would take it because it sucks less than what I have been able to get so far. Moving forward there are two approaches I recommend that you pursue in parallel. Put pressure on the ISP Put as much pressure on the ISP as you can. That includes contacting other ISPs and possibly switching if any other ISP can offer you a better deal. Make sure that you do test what happens if your router requests a delegated /48, /52, /56, or /60 through DHCPv6 on the WAN. I would test all four prefix lengths just in case the DHCPv6 server for some reason will only hand out a specific prefix length and ignores requests for other prefix lengths. Make the best of what you have Given that you are probably going to have to live with some hacks moving forward, you have to ask yourself which sucks less IPv4 with hacks or IPv6 with hacks. There are a few hacks you can use to stretch a single /64 to a lot of hosts. Turning a link prefix into a routed prefix If you have a single /64 on the WAN link but no prefix routed to your LAN, you can turn that /64 into a routed prefix with a few steps. Configure the WAN interface on your router as a /126 rather than a /64. Install a neighbor advertisement daemon (such as ndppd) on the router to advertise its own MAC address for every address in the /64 except from the 4 addresses in the /126. With those two steps you will have a routed /64 which you can use on your LAN with the exception of the 4 addresses used for the WAN link. A modified version of this hack can share the link /64 across multiple routers. The link prefix will then have to be a bit shorter than /126 to accommodate for an IP address to each router, a /120 would be short enough to allow for up to 254 routers. Each router will obviously only get a prefix which will be longer than /64. I recommend you make the prefix for each router as long as you can while still having enough IP addresses for the LAN on that router. A /112 or /120 for each router would likely be suitable. Each router responds with its own MAC address for neighbor discovery of anything within that router's prefix. 
In this variant each router will have identical prefixes configured on its WAN side and will be responding to neighbor discovery requests for the prefix assigned to its LAN side. Obviously none of the LAN prefixes may overlap each other, and none of them may overlap the prefix you configured on the WAN side. So if the ISP router acting as your gateway is on address 2001:db8::1/64, then you can use 2001:db8::/120 as your WAN and you can assign 2001:db8::1:0/112 to the first router, 2001:db8::2:0/112 to the second router, etc. On the LAN you can stretch a /64 to a lot of hosts either by subnetting or by bridging. You'll have to work out which of the two works best for you. Subnetting If you do subnet the /64 you may as well go to the longest prefixes which still have enough addresses for the hosts you need. Don't subnet into /80 prefixes; rather, go with /116, /120, or /124 per subnet. The features that break when you don't use a /64 are unlikely to matter in such subnets, and by going with /116 or longer you will reduce the impact of certain neighbor discovery DoS attacks (if present in any of your systems). In such a subnetting configuration you will break SLAAC, so you need a DHCPv6 server to respond on each segment and static IPv6 addresses configured on all devices without DHCPv6 support. Bridging Bridging is the other alternative. It essentially means you don't subnet but run your entire LAN as a single IPv6 segment with a /64 prefix. (Should you need to, that /64 can span both LAN and WAN.) IPv6 is designed to allow bridges to recognize which of the bridged networks each anycast address needs to be forwarded to. That way you avoid having to broadcast packets across every physical link on your LAN. Bridges can also apply firewalls and protection against neighbor discovery spoofing on the LAN. With sufficient intelligence on the bridges there is in principle no limit to how many switches you can bridge a single /64 across. | {
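As a rough illustration of the "turn the link /64 into a routed prefix" hack described above, here is a minimal single-router sketch. The prefix 2001:db8:0:1::/64, the interface names eth0/eth1 and the ndppd configuration are illustrative assumptions only; adapt them to the prefix your ISP actually hands out and check the ndppd documentation for the exact syntax.
# enable forwarding and use only a /126 of the delegated /64 on the WAN; the ISP gateway is assumed to be ::1
sysctl -w net.ipv6.conf.all.forwarding=1
ip -6 addr add 2001:db8:0:1::2/126 dev eth0
ip -6 route add default via 2001:db8:0:1::1 dev eth0
# the rest of the /64 lives behind the router on the LAN
ip -6 addr add 2001:db8:0:1::10/64 dev eth1
# /etc/ndppd.conf: answer neighbor solicitations for the whole /64 on the WAN interface
proxy eth0 {
    rule 2001:db8:0:1::/64 {
        static
    }
}
With this in place the ISP gateway still believes the whole /64 is on-link, while the router actually routes it onto the LAN.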
"source": [
"https://serverfault.com/questions/714890",
"https://serverfault.com",
"https://serverfault.com/users/276314/"
]
} |
715,021 | I see patch panels all come rated as CAT5e, or CAT6 (and so on). I understand the difference between CAT5e and CAT6 cables, more stringent interference shielding, etc. What I don't understand is why switches aren't rated as CAT5e or CAT6? Instead, they are just a Gigabit switch. They have a port, doesn't that make the port part of the connection? Shouldn't the port be rated CAT5e or CAT6 as well? Edit: Or to reverse the question: how are patch panel ports so different from switch ports that they require a rating? Answer (for those that don't want to read through all the comments): Because there is quite a bit of wiring between the patch panel jack's front contacts and the termination for the cable on the back, while the switch port's jacks only have 2mm of contact at the front. | CATx are physical cable wiring standards, specifying physical wiring characteristics of the cable, like impedance, number of conductors, twist rate, etc. Switches do not care about the physical properties of the cable. All they care about is whether or not the cable is able to successfully transmit data. It is assumed that cabling used will be within spec for whatever speed/duplex that is required, and depending on what speed uplink is required, there may be any number of physical cabling types that will work fine. An analogy would be high voltage wiring, such as 12-3 romex. That describes the physical properties of the cable, not necessarily what it's going to be used for. 12-3 romex is rated to ~120v @ 15A or thereabouts. The wall sockets that 12-3 romex is terminated in are not called "12-3 romex sockets". They're called NEMA 5-15 sockets. It is assumed that the cabling terminated is up to spec. | {
"source": [
"https://serverfault.com/questions/715021",
"https://serverfault.com",
"https://serverfault.com/users/305474/"
]
} |
715,063 | I'd like to prevent users from deleting files they have uploaded to my sftp server. I know I could set up a solution of my own using inotify/dnotify (or pam hook) and lsof which triggers a script to do something such as chattr +i $filename after a file is uploaded and closed. But I wonder if there is something already available as a feature or a solution already vetted and available of which I'm not aware. The current setup is that I'm using openssh sftp and users are jailed upon connecting. | CATx are physical cable wiring standards, specifying physical wiring characteristics of the cable, like impedance, number of conductors, twist rate, etc. Switches do not care about the physical properties of the cable. All they care about is whether or not the cable is able to successfully transmit data. It is assumed that cabling used will be within spec for whatever speed/duplex that is required, and depending on what speed uplink is required, there may be any number of physical cabling types that will work fine. An analogy would be high voltage wiring, such as 12-3 romex. That describes the physical properties of the cable, not necessarily what it's going to be used for. 12-3 romex is rated to ~120v @ 15A or thereabouts. The wall sockets that 12-3 romex is terminated in are not called "12-3 romex sockets". They're called NEMA 5-15 sockets. It is assumed that the cabling terminated is up to spec. | {
"source": [
"https://serverfault.com/questions/715063",
"https://serverfault.com",
"https://serverfault.com/users/140806/"
]
} |
715,827 | I have the mycert.jks file only.
Now I need to extract and generate the .key and .crt files and use them in the Apache httpd server. SSLCertificateFile /usr/local/apache2/conf/ssl.crt/server.crt
SSLCertificateKeyFile /usr/local/apache2/conf/ssl.key/server.key Can anybody list all the steps to get this done? I searched, but there is no concrete example to follow, only mixed and matched steps. Please suggest! [EDIT] I am getting this error after following the steps from the answer below:
[Fri Aug 21 15:32:03.008511 2015] [ssl:emerg] [pid 14:tid 140151694997376] AH02562: Failed to configure certificate 0.0.0.0:4545:0 (with chain), check /home/certs/smp_c
ert_key_store.crt
[Fri Aug 21 15:32:03.008913 2015] [ssl:emerg] [pid 14:tid 140151694997376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: TRUSTED
CERTIFICATE) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
[Fri Aug 21 15:32:03.008959 2015] [ssl:emerg] [pid 14:tid 140151694997376] SSL Library Error: error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib | .jks is a keystore, which is a Java thing, so use the keytool binary from Java. Export the .crt: keytool -export -alias mydomain -file mydomain.der -keystore mycert.jks Convert the cert to PEM: openssl x509 -inform der -in mydomain.der -out certificate.pem Export the key: keytool -importkeystore -srckeystore mycert.jks -destkeystore keystore.p12 -deststoretype PKCS12 Convert the PKCS12 key to unencrypted PEM: openssl pkcs12 -in keystore.p12 -nodes -nocerts -out mydomain.key Credits: https://security.stackexchange.com/questions/3779/how-can-i-export-my-private-key-from-a-java-keytool-keystore https://stackoverflow.com/questions/2640691/how-to-export-private-key-from-a-keystore-of-self-signed-certificate https://www.sslshopper.com/article-most-common-java-keytool-keystore-commands.html | {
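A quick sanity check before restarting Apache (this assumes an RSA key and the file names from the steps above): the two digests should be identical if the key and certificate belong together.
openssl x509 -noout -modulus -in certificate.pem | openssl md5
openssl rsa -noout -modulus -in mydomain.key | openssl md5
apachectl configtest    # then let Apache validate the SSLCertificateFile/SSLCertificateKeyFile paths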
"source": [
"https://serverfault.com/questions/715827",
"https://serverfault.com",
"https://serverfault.com/users/186114/"
]
} |
717,101 | We have an Intel NUC in my university's language department that will soon host a web application used by faculty and students in the department. The NUC runs Ubuntu (14.10). I'm comfortable with the terminal and SSH-ing into the server, however I find that a lot of tasks that I need to do are just so much easier through screen-sharing (VNC). I suggested to our new technical director that we install VNC on this server to make my life a lot easier (in fact it had VNC installed before he was hired, and then he uninstalled it). However, he replied with the following comment: I would much prefer not to run X or VNC on the server if we can get away with it. It is a server after all. I really don't understand this logic. It isn't hooked up to a monitor; the only access into it through SSH. Is there some miraculous downside to having VNC access to a server that I am unaware of? Obviously you're opening up another port for an attacker; rebuttal: we're behind two university firewalls (the main university network firewall as well as our subnet's own special firewall). VNC would only be able to be accomplished inside our subnet, so I'm at a loss as to why this would be an issue other than "it's another package to maintain", and with Ubuntu's apt package manager that becomes a non-issue. What are the downsides of installing VNC on a server? Edit : this isn't just a web server. It's hosting a number of other applications. Not sure if that makes a difference. | There are a great many reasons: Attack surface: more programs, especially networked ones, means more opportunities for someone to find a bug and get in. Defect surface: as above, but replace "someone" with " Murphy ", and "get in" with "ruin your day". Actually, "ruin your day" probably applies to the previous point, too. System efficiency: X11, and the GUI environments that people tend to run on them, consume a decent amount of RAM, especially on a limited resources system like a NUC. Not running them means more resources for doing useful work. Operator efficiency: GUIs do not lend themselves to scripting and other forms of automation. Clicking on things feels productive, but it's actually about the worst way to do something deeply technical. You'll also find your future employment opportunities severely limited if you can't script and automate away your job -- the industry is going away from GUI admin tools. Heck, even Windows server can be installed GUI-free these days, and if that doesn't make you think about the relative merits of only knowing how to click on things, I really don't know what to say to you. | {
"source": [
"https://serverfault.com/questions/717101",
"https://serverfault.com",
"https://serverfault.com/users/307023/"
]
} |
717,129 | According to my ssh_config file... Configuration data is parsed as follows: command line options user-specific file system-wide file With that said (and yes, I know, I could scour man ssh_config AND man ssh , and (hope) for documented defaults)... how can I "print out" the active configuration, for ALL current settings? For example, something like...
Command Line Options
Port 2210
Host servername
Command Line Configurations
Tunnel Ethernet
Config File
...
SSH Defaults
...
AddressFamily any (???)
BatchMode no
... This would let you know explicitly exactly what is set, and why. I called out AddressFamily specifically, as it is a perfect example of a configuration option with NO documented default value. From man ssh_config ... Specifies which address family to use when connecting. Valid arguments are any , inet (use IPv4 only), or inet6 (use IPv6 only). Ugh! Thanks for any constructive suggestions (not just a bunch of RTFM 's). | There is a -G option in recent OpenSSH, which behaves in a similar way to -T on the server side. -G Causes ssh to print its configuration after evaluating Host and Match blocks and exit. Calling ssh -G host will show the options used for connecting to that specific host, which can be helpful for debugging conditional matches in ssh_config . Also, setting a more verbose log level ( -vvv ) can help with debugging the config parser. | {
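For example, a rough sketch using the host and port from the question (note that -G prints option names in lower case):
ssh -G servername | less    # the full effective configuration, one option per line
ssh -G -p 2210 -o Tunnel=ethernet servername | grep -iE '^(port|tunnel|addressfamily|batchmode) '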
"source": [
"https://serverfault.com/questions/717129",
"https://serverfault.com",
"https://serverfault.com/users/60486/"
]
} |
718,654 | When I'm inside of Linux, I can get the following information from lsblk (irrelevant drives removed from output): NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 298G 0 disk
sdb 8:16 0 2.7T 0 disk When I manually pull the drives out of the server, I can tell I'm physically using the following drives: 0 Seagate 320GB
1 Seagate 320GB
2 Hitachi 1TB
3 Hitachi 1TB
4 Hitachi 1TB
5 Hitachi 1TB
6 [empty]
7 [empty] Because there is more physical storage in the server than available space in Linux, this means I'm obviously using some form of RAID system. With a bit of math, I can often figure out what type of RAID system is being used. Is there a way for me to detect if I'm using hardware RAID from inside of Linux , and figure out all the information about it (such as type of RAID, available drives) without turning off the server, physically pulling the drives out, and reading their labels? Can this information be gathered from inside of Linux, or is the point of hardware RAID to make the underlying system "invisible" to the operating system? | How to get the RAID information is going to depend entirely on the RAID controller you are using. Often, manufacturers will have tools that can be downloaded from their website which can be used to query the RAID controller and get this information. In order to find which RAID controller you are using, try one of the following commands: lspci # lspci -knn | grep 'RAID bus controller'
08:00.0 RAID bus controller [0104]: 3ware Inc 9690SA SAS/SATA-II RAID PCIe [13c1:1005] (rev 01) Here, the information we are looking for is "3ware Inc 9690SA SAS/SATA-II RAID PCIe" . lsscsi The command is not available on Debian and Ubuntu, but a quick sudo apt-get install lsscsi will fetch it from the repos. Note, if you are not using a RAID controller, the manufacturer and model number of your harddrive will show up here instead. # lsscsi
[2:0:0:0] disk AMCC 9690SA-8I DISK 4.08 /dev/sda
[2:0:1:0] disk AMCC 9690SA-8I DISK 4.08 /dev/sdb Here we see the manufacturer is "AMCC" and the model number of the RAID card is "9690SA-8I" . A quick Google search shows that this card is also known as "AMCC 3Ware 9690SA-8I" . lshw A third method (which gives quite a bit of output data) is to use the lshw command. Run lshw -class disk as root to only display the details about harddrives (which includes RAID information). Finding the RAID controller tools Now that we have the manufacturer and model number, it should be possible to find the tools on their website, or at least be able to Google details on how to find and use the tools for that specific controller. If the manufacturer shows up in this list, see these answers for more details on how to get the RAID information for your card: AMCC - 3ware controllers LSI Logic / Symbios Logic Adaptec (some devices) | {
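As a follow-up for the 3ware/AMCC card identified above, the vendor's tw_cli utility can report the RAID level and member drives. A rough sketch; the controller and unit numbers c0/u0 are assumptions, so take them from the tool's own listing:
tw_cli show          # list controllers, e.g. c0
tw_cli /c0 show      # units (RAID type and status) plus the attached physical drives
tw_cli /c0/u0 show   # details for the first RAID unit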
"source": [
"https://serverfault.com/questions/718654",
"https://serverfault.com",
"https://serverfault.com/users/97027/"
]
} |
718,773 | I have a couple of small EC2 instances (t1.micro and t2.micro), one of which was set up using AWS-EB. I'd like to terminate both of them, but whenever I terminate them, they re-appear in my list of running instances a couple of minutes later. How do I fully terminate them? Termination protection is not enabled. | As far as I'm aware, the configuration wizard for AWS-EB automatically configures an EC2 AutoScaling group for you with a default desired running instance count of 1. That is why every time you try to terminate an instance it is relaunched. I would therefore suggest removing the AutoScaling group(s) and probably also any load balancer configurations that you no longer need. These steps will actually be done for you if you terminate the EB application, so there should be no need to do this manually. | {
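If you do want to clean it up by hand, a rough AWS CLI sketch; the group name my-eb-asg is a placeholder, so list your groups first to find the one Elastic Beanstalk created:
aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-eb-asg --min-size 0 --desired-capacity 0    # stop the group relaunching instances
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-eb-asg --force-delete    # remove the group together with its instances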
"source": [
"https://serverfault.com/questions/718773",
"https://serverfault.com",
"https://serverfault.com/users/107415/"
]
} |
721,082 | With the following task - name: synchronising ...
synchronize: src=files/to/synchronize dest=/tmp/1 the files/to/synchronize directory is synchronized, and after it's done there is a /tmp/1/synchronize directory on the target machine. Is it possible to use the synchronize task to recursively synchronize only the contents of the source directory, so that all of its contents end up in /tmp/1 without an extra level of depth? What I've done: I went through the documentation I tried to google I went through the ansible synchronize module source | All you need to do is add a trailing slash to the end of the source path. This will tell Ansible it is the contents of the directory, and not the directory itself, that you want to transfer. This behaviour is identical to that of rsync. | {
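Applied to the task in the question, only the trailing slash on src changes; roughly:
- name: synchronising ...
  synchronize: src=files/to/synchronize/ dest=/tmp/1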
"source": [
"https://serverfault.com/questions/721082",
"https://serverfault.com",
"https://serverfault.com/users/45086/"
]
} |
721,084 | I am using robocopy to copy files from a network drive to a local workstation folder with some options, but I am getting errors 123 and 1314: The filename, directory name, or volume label syntax is incorrect and a required privilege is not held by the client. robocopy "\\xfolder\xyz" "c:\test" /MT:8 /MIR /ZB /COPYALL /R:0 /W:0 /MON:1 /MOT:05 Are the options I am using right, or are there any changes required to avoid these errors? Could someone help me? Thanks in advance. | All you need to do is add a trailing slash to the end of the source path. This will tell Ansible it is the contents of the directory, and not the directory itself, that you want to transfer. This behaviour is identical to that of rsync. | {
"source": [
"https://serverfault.com/questions/721084",
"https://serverfault.com",
"https://serverfault.com/users/310316/"
]
} |
721,087 | I have reconfigured my puppet (v3.6.2) server (RHEL 7.1) into supporting environments as shown below. /etc/puppet
puppet.conf
auth.conf
environments
Project_A
modules
manifests/site.pp
environment.conf
Project_B
modules
manifests/site.pp
environment.conf the environment.conf files consist of modulepath=/etc/puppet/environments/$environment/modules
manifest=/etc/puppet/environments/$environment/manifests/site.pp the site.pp file for each environment consists of include 'nodes.pp'
include 'selinux.pp'
include 'check_mode.pp'
$puppetserver=<SERVER>
Package {
allow_virtual=>true,
} on an agent when I run the command puppet agent --no-daemonize --trace --debug --noop --verbose I get the error Error: Could not retrieve catalog from remote server: Error 400 on server: Could not find class nodes for <'SERVER'> on <'SERVER'> in /var/log/puppet/masterhttp.log i get the error [2015-09-09 15:43:12] <'IP'> - - [2015/09/09:15:43:12 AEST] "POST /Project_A/catalog/<'SERVER'> HTTP/1.1 400 21 Each agent has the same configuration as when puppet had a single environment with the addition of 'environment = 'PROJECT_A' If I change nodes.pp in site.pp from include to import
import 'nodes.pp'
the error changes to Error: Could not retrieve catalog from remote server: Error 400 on server: Could not find class selinux.pp for <'SERVER'> on <'SERVER'> This same structure worked correctly when puppet was configured for a single environment.
Under the single environment everything was configured as such: /etc/puppet
puppet.conf
auth.conf
environments
modules
manifests/site.pp I suspect that I may need to modify my auth.conf file but am at a loss as to what changes are required. Currently the file is the default configuration. I have tried adding path /environments
allow * with no joy and have added to fileserver.conf path /etc/puppet/environments
allow * again with no joy. For the record, the master puppet.conf file is [main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
always_cache_features = true
server = <'PUPPET SERVER'>
environmentpath = $confdir/environments
[master]
ca = true
dns_alt_names = <'SAN DNS ENTRIES'>
certname = <'PUPPET MASTER'>
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
environment = master
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
environment = Project_A The agents use the same configuration file without the [master] Can anyone see where I have made a mistake in my configuration. UPDATE:
I have started the puppetmaster in debug mode and from an agent tried to connect to the server. In the debug output this is what has made me suspect that it is auth.conf Notice: Starting Pppet master version 3.6.2
Debug: Routes Registered
Debug: Route /^\/v2\.0/
Debug: Route /.*/
Debug: Evaluating match for Route /^\/v2\.0/
Debug: Did not match path ("/Project_A/node/<SERVER A>")
Debug: Evaluating match for Route /.*/
Info: access[^/catalog/([^/]+)$]: allowing 'method' find
Info: access[^/catalog/([^/]+)$]: allowing $1 access
Info: access[^/node/([^/]+)$]: allowing 'method' find
Info: access[^/node/([^/]+)$]: allowing $1 access
Info: access[/certificate_revocation_list/ca]: allowing 'method' find
Info: access[/certificate_revocation_list/ca]: allowing * access
Info: access[/^/report/([^/]+)$]: allowing 'method' save
Info: access[/^/report/([^/]+)$]: allowing $1 access
Info: access[/file]: allowing * access
Info: access[/certificate/ca]: adding authentication any
Info: access[/certificate/ca]: adding 'method' find
Info: access[/certificate/ca]: adding * access
Info: access[/certificate/]: adding authentication any
Info: access[/certificate/]: adding 'method' find
Info: access[/certificate/]: adding * access
Info: access[/certificate_request]: adding authentication any
Info: access[/certificate_request]: adding 'method' find
Info: access[/certificate_request]: adding 'method' save
Info: access[/certificate_request]: adding * access
Info: access[/v2.0/environments]: adding 'method' find
Info: access[/v2.0/environments]: adding * access
Info: access[/]: adding authentication any
Info: Inserting dfault '/status' (auth true) ACL
Info: Caching node for <SERVER A>
Debug: Failed to load library 'msgpack' for feature 'msgpack'
Debug: Puppet::Network::Format [msgpack]: feature msgpack is missing
Debug: node supports formats: pson b64_zlib_yaml yaml raw
Debug: Routes Register:
Debug: Routes /^\/v2\.0/
Debug: Route /.*/
Debug: Evaluating match for Route /^\/v2\.0/
Debug: Did not match path ("/Project_A/file_metadatas/plugins")
Debug: Evaluating match for Route /.*/ UPDATE: I have sort of got this working. After rereading the puppetlabs docs on environments it states that there has to be an environment called production. I have thus created /etc/puppet/environments/production
| modules
| manifests
| environment.conf This is configured the same as the other environments although the dirs currently have no files within them. The agent remains the same. Now when I run the agent it runs without errors. The only thing is that it is collecting information from the puppet root
/etc/puppet/modules & /etc/puppet/manifests
and while the agent runs, it doesn't do anything if the host isn't defined in /etc/puppet/manifests/site.pp. In the puppetmaster debug output all references to the host are defined as Project_A and there is the log entry Notice: Compiled catalog for <'SERVER_A'> in environment Project_A in 0.00 seconds From the agent Notice: /Stage/[main]/ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}<md5sum>' to '{md5}<md5sum>'
Info: /Stage/[main]/ntp::Config/File[/etc/ntp.conf]: Scheduling refresh of Service{ntpd} So in summary: the client is being recognised as belonging to environment 'Project_A' on the master, but despite being configured to use the path /etc/puppet/environments/$environment/{modules|manifests/site.pp} in the 'Project_A' environment.conf file, it is actually using /etc/puppet/{modules|manifests/site.pp} | All you need to do is add a trailing slash to the end of the source path. This will tell Ansible it is the contents of the directory, and not the directory itself, that you want to transfer. This behaviour is identical to that of rsync. | {
"source": [
"https://serverfault.com/questions/721087",
"https://serverfault.com",
"https://serverfault.com/users/310318/"
]
} |
721,223 | I'm archiving data from one server to another. Initially I started an rsync job. It took 2 weeks for it to build the file list just for 5 TB of data and another week to transfer 1 TB of data. Then I had to kill the job as we need some down time on the new server. It's been agreed that we will tar it up since we probably won't need to access it again. I was thinking of breaking it into 500 GB chunks. After I tar it, I was going to copy it across through ssh . I was using tar and pigz but it is still too slow. Is there a better way to do it? I think both servers are on Redhat. The old server is Ext4 and the new one is XFS. File sizes range from a few kB to a few MB, and there are 24 million jpegs in 5 TB, so I'm guessing around 60-80 million for 15 TB. edit: After playing with rsync, nc, tar, mbuffer and pigz for a couple of days, the bottleneck is going to be the disk IO, as the data is striped across 500 SAS disks and around 250 million jpegs. However, now I have learnt about all these nice tools that I can use in the future. | I have had very good results using tar , pigz (parallel gzip) and nc . Source machine: tar -cf - -C /path/of/small/files . | pigz | nc -l 9876 Destination machine: To extract: nc source_machine_ip 9876 | pigz -d | tar -xf - -C /put/stuff/here To keep archive: nc source_machine_ip 9876 > smallstuff.tar.gz If you want to see the transfer rate just pipe through pv after pigz -d ! | {
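For the rate display mentioned at the end, a minimal example (assuming pv is installed on the destination machine):
nc source_machine_ip 9876 | pigz -d | pv | tar -xf - -C /put/stuff/here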
"source": [
"https://serverfault.com/questions/721223",
"https://serverfault.com",
"https://serverfault.com/users/135490/"
]
} |
721,963 | I am trying to use journalctl 's pattern matching on SYSLOG_IDENTIFIERS . As an example, I have a ton of messages tagged sshd : $ journalctl -t sshd | wc -l
987 but if I try to use pattern matching to find them: $ journalctl -t 'ssh*'
-- No Entries --
$ journalctl -t 'ssh.*'
-- No Entries -- The journalctl man page says patterns should work, but I can't find anything else about how patterns are used/defined in systemd. $ man journalctl
....
-t, --identifier=SYSLOG_IDENTIFIER|PATTERN
Show messages for the specified syslog identifier SYSLOG_IDENTIFIER,
or for any of the messages with a "SYSLOG_IDENTIFIER" matched by PATTERN. I'm running ArchLinux: $ journalctl --version
systemd 225
+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP
+GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN | This was a doc bug that was closed when the typo in the man page was updated. The bug report led to the following comments in the code : We don't actually accept patterns, hence don't claim so. As a workaround, you may be able to use grep as suggested in the comments to your question. Something like this: journalctl | grep sshd | {
"source": [
"https://serverfault.com/questions/721963",
"https://serverfault.com",
"https://serverfault.com/users/88224/"
]
} |
722,324 | Brand new to ansible - I'm trying to symlink a bunch of files in a src directory to a destination.. Currently: file:
src: /drupal/drush/{{ item.path }}.aliases.drushrc.php
dest: /home/vagrant/.drush/{{ item.dest }}.aliases.drushrc.php
with_items:
- { path: 'new', dest: 'new' }
- { path: 'vmdev', dest: 'vmdev' }
state: link I'm getting the error: fatal: [vmdev] => One or more undefined variables: 'item' is undefined Can somebody point me in the right direction..? Cheers | Your indentation is wrong, with_items should be on the same level as file . This is what you want: file:
src: "/drupal/drush/{{ item.path }}.aliases.drushrc.php"
dest: "/home/vagrant/.drush/{{ item.dest }}.aliases.drushrc.php"
state: link
with_items:
- { path: 'new', dest: 'new' }
- { path: 'vmdev', dest: 'vmdev' } | {
"source": [
"https://serverfault.com/questions/722324",
"https://serverfault.com",
"https://serverfault.com/users/81985/"
]
} |
722,466 | I have installed docker on my debian 7 server using the following command: sudo curl -sSL https://get.docker.com/ | sh I would now like to remove docker; how on earth do I uninstall it? | For older versions of docker installed via curl
sudo rm -rf /var/lib/docker #Removes all data Edit: 05/2018:
For newer versions according to online documentation $ sudo apt-get purge docker-ce To remove images, containers, volumes, or customized configuration files on your host that are not automatically removed $ sudo rm -rf /var/lib/docker | {
"source": [
"https://serverfault.com/questions/722466",
"https://serverfault.com",
"https://serverfault.com/users/113993/"
]
} |