source_id (int64, 1 to 4.64M) | question (string, lengths 0 to 28.4k) | response (string, lengths 0 to 28.8k) | metadata (dict) |
---|---|---|---|
765,521 | I'm configuring an OpenVPN (version 2.3.10) server on a Windows 2012 server but I cannot make it to work. The server is behind a router and I opened the 1194 port and created a rule to forward traffic on this port to the server. Here is the log I see on the server when I try to connect from a client: Mon Mar 21 11:11:47 2016 XX.XX.XX.XX:57804 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:57804, sid=fdf7a7ac 0264c7f3
Mon Mar 21 11:12:38 2016 XX.XX.XX.XX:55938 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:55938, sid=1f242a3f e454a525
Mon Mar 21 11:12:48 2016 XX.XX.XX.XX:57804 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
Mon Mar 21 11:12:48 2016 XX.XX.XX.XX:57804 TLS Error: TLS handshake failed
Mon Mar 21 11:12:48 2016 XX.XX.XX.XX:57804 SIGUSR1[soft,tls-error] received, client-instance restarting Where XX.XX.XX.XX is the ip of the client. So I understand from this that the client at least is able to arrive at the server, so there's no routing or firewall issues. I followed the description provided here Easy Windows Guide Any ideas? | What's interesting is how the port number changes mid-stream: Mon Mar 21 11:11:47 2016 XX.XX.XX.XX: 57804 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:57804, sid=fdf7a7ac 0264c7f3 Mon Mar 21 11:12:38 2016 XX.XX.XX.XX: 55938 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:55938, sid=1f242a3f e454a525 This makes me think that, somewhere between client and server, there is a misbehaving NAT device, a device with very short-lived state table entries, which is changing the source port number that it applies to the client's established stream, causing the server to think that two short-lived communications are in progress, instead of one continuous one. Such devices generally only do this with UDP, so I have advised you to confirm that you are using UDP, and try TCP instead. This you have done, and found that it fixes the problem. The next step is to identify the misbehaving NAT device, hit it with a club hammer, and replace it with one that doesn't make the cardinal mistake of assuming that all UDP communications are ephemeral; but you have indicated that you're happy with changing to TCP as a workaround, and so the matter is concluded. | {
"source": [
"https://serverfault.com/questions/765521",
"https://serverfault.com",
"https://serverfault.com/users/345096/"
]
} |
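The workaround described in the answer above comes down to changing OpenVPN's proto directive on both ends, and switching the router's forwarding rule from UDP 1194 to TCP 1194 to match. A minimal sketch; vpn.example.com stands in for the server's public address, and the file names are the usual defaults rather than anything taken from the question:
# server.ovpn - replace "proto udp" with:
proto tcp
port 1194
# client.ovpn - replace "proto udp" with:
proto tcp
remote vpn.example.com 1194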
765,537 | I'm havingg 100 windows machines which is running behind the firewall. But my monitoring server is running outside the network [ public]. For SNMP , I can put one proxy server inside the internal network and enable only simple and single NAT rule in firewall. Then I would be able to monitor all the 100 windows machine using SNMP. But How can I do the same for WMI . Is there any option available out instead of allowing multiple rule in firewall ?. Since there is no guarantee that only we will be having 100 machines. the count may be get double in future. Any proxy application available to achieve this? | What's interesting is how the port number changes mid-stream: Mon Mar 21 11:11:47 2016 XX.XX.XX.XX: 57804 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:57804, sid=fdf7a7ac 0264c7f3 Mon Mar 21 11:12:38 2016 XX.XX.XX.XX: 55938 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:55938, sid=1f242a3f e454a525 This makes me think that, somewhere between client and server, there is a misbehaving NAT device, a device with very short-lived state table entries, which is changing the source port number that it applies to the client's established stream, causing the server to think that two short-lived communications are in progress, instead of one continuous one. Such devices generally only do this with UDP, so I have advised you to confirm that you are using UDP, and try TCP instead. This you have done, and found that it fixes the problem. The next step is to identify the misbehaving NAT device, hit it with a club hammer, and replace it with one that doesn't make the cardinal mistake of assuming that all UDP communications are ephemeral; but you have indicated that you're happy with changing to TCP as a workaround, and so the matter is concluded. | {
"source": [
"https://serverfault.com/questions/765537",
"https://serverfault.com",
"https://serverfault.com/users/268462/"
]
} |
765,593 | We have a server connected to 2 switches via two NICs. On each NIC are 2 VLANs, management and production. Right now we only have one switch connected, so haven't setup the spanning tree etc. We have LXC installed, and want to bridge (rather than NAT) the XLC containers (so they are on the same subnet as the host). When we try to create a bridge in /etc/network/interfaces on the host ubuntu server, the networking fails to start, and we have to go to the console, remove the edits and reboot (lucky we have LOM cards!) interfaces file: auto em1.3
iface em1.3 inet manual
bond-master bond2
bond-primary em1.3
auto em2.3
iface em2.3 inet manual
bond-master bond2
auto bond2 #Production VLAN
iface bond2 inet static
address 10.100.100.10
netmask 255.255.255.0
gateway 10.100.100.1
dns-nameservers 10.100.10.1
bond-slaves em1.3, em2.3
bond-miimon 100
bond-mode active-backup
dns-nameservers 10.100.100.1
auto br_prod
iface br_prod inet dhcp
bridge_ports bond2
bridge_fd 0
bridge_maxwait 0 When we add that last section (br_prod) the server wont start networking, and we have to use the console. It says "waiting another 60 seconds for networking to start", but doesn't. I also tried adding pre-up ifup bond2
post-down ifup bond2 Tried making it manual. Tried making it static rather than DHCP, supplying appropriate ip/gateway/netmask. No luck. Tried naming it br2 instead of br_prod, tried pre_up post_down, bridge-ports etc. We tried every combination of options, switches and underscores vs dashes. Always same effect - networking wont start (no errors). Any ideas? UPDATE 1 From the answer from electrometro below, I tried this: auto bond1
iface bond1 inet static
address 10.30.30.10
netmask 255.255.255.0
#bond-slaves em1.2, em2.2
bond-slaves none
bond-miimon 100
bond-mode active-backup
up route add -net .....
auto em1.2
iface em1.2 inet manual
bond-master bond1
bond-primary em1.2
auto em2.2
iface em2.2 inet manual
bond-master bond1
bond-primary em1.2
br1
iface br1 inet manual
bridge_ports bond1
bridge_fd 0
bridge_maxwait 0 But get the same problem - networking doesn't start. UPDATE 2 Thanks for the contribution by Oliver. I tried this config, and the networking comes up, I can use ifconf to see the interfaces, but I cant ssh as the routing is not working. basically I cant ping the default gateway using the manually added route. auto em1.2
iface em1.2 inet manual
auto em2.2
iface em2.2 inet manual
auto bond1
iface bond1 inet manual
bond-slaves em1.2 em2.2
bond-mode active-backup
auto br10
iface br10 inet static
address 10.30.30.10
netmask 255.255.255.0
bridge_ports bond1
up route add -net 10.242.1.0/24 gw 10.30.30.1 dev bond1 # also tried dev br10 The reason we are manually setting a gateway is that we have two networks defined: production and management. We have 2 interfaces, each connected to a switch. Each interface carries failover for both networks, and the production network has the default gateway. I am now just trying to get a bridge on the management network as a start. UPDATE 3 In a long line of trial and error I also tried specifying the VLAN: auto em1.2
iface em1.2 inet manual
auto em2.2
iface em2.2 inet manual
auto bond1
iface bond1 inet manual
bond-slaves em1.2 em2.2
bond-mode active-backup
auto br10.2
iface br10.2 inet static
address 10.30.30.10
netmask 255.255.255.0
bridge_ports bond1
up route add -net 10.242.1.0/24 gw 10.30.30.1 dev br10.2 | What's interesting is how the port number changes mid-stream: Mon Mar 21 11:11:47 2016 XX.XX.XX.XX: 57804 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:57804, sid=fdf7a7ac 0264c7f3 Mon Mar 21 11:12:38 2016 XX.XX.XX.XX: 55938 TLS: Initial packet from [AF_INET]XX.XX.XX.XX:55938, sid=1f242a3f e454a525 This makes me think that, somewhere between client and server, there is a misbehaving NAT device, a device with very short-lived state table entries, which is changing the source port number that it applies to the client's established stream, causing the server to think that two short-lived communications are in progress, instead of one continuous one. Such devices generally only do this with UDP, so I have advised you to confirm that you are using UDP, and try TCP instead. This you have done, and found that it fixes the problem. The next step is to identify the misbehaving NAT device, hit it with a club hammer, and replace it with one that doesn't make the cardinal mistake of assuming that all UDP communications are ephemeral; but you have indicated that you're happy with changing to TCP as a workaround, and so the matter is concluded. | {
"source": [
"https://serverfault.com/questions/765593",
"https://serverfault.com",
"https://serverfault.com/users/146910/"
]
} |
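Since the failures described above are silent, it can help to bring the pieces up by hand and watch where they stall before trying more option combinations. A short checklist, assuming Ubuntu 14.04's ifupdown and the interface names from the question (the package names are the usual ones; verify for your release):
sudo apt-get install bridge-utils vlan    # ifupdown's bridge_* and VLAN (.N) stanzas rely on these helper packages
sudo ifup -v bond2                        # -v prints each command ifupdown runs, showing where it hangs
sudo ifup -v br_prod
cat /proc/net/bonding/bond2               # confirm both slaves actually joined the bond
brctl show                                # confirm the bridge exists and contains bond2
ip -d link show bond2                     # detailed link state, including bonding and VLAN info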
765,913 | I am implementing a multi-tenant application where my application hosts and serves technical documentation for a tenant's product. Now, the approach that I was considering was - I host the documentation at docs.<tenant>.mycompany.com and ask my tenant to set up a CNAME DNS record to point docs.tenantcompany.com to docs.<tenant>.mycompany.com . I want the site to be SSL-enabled with my tenant's certificate. I wanted to understand: if my tenant company has a wildcard SSL certificate, will it work with this setup or will a new SSL certificate have to be purchased for docs.tenantcompany.com ? | The certificate name must match what the user entered in the browser, not the 'final' DNS record. If the user enters docs.tenantcompany.com then your SSL certificate has to cover that. If docs.tenantcompany.com is a CNAME to foo.example.com , the certificate does not need to cover foo.example.com , just docs.tenantcompany.com . | {
"source": [
"https://serverfault.com/questions/765913",
"https://serverfault.com",
"https://serverfault.com/users/240865/"
]
} |
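A quick way to verify the answer's point, that what matters is the certificate presented for the name the user typed, is to ask the proxy directly once HTTPS is up. A sketch using openssl, with the hostname from the question:
echo | openssl s_client -connect docs.tenantcompany.com:443 -servername docs.tenantcompany.com 2>/dev/null | openssl x509 -noout -subject
echo | openssl s_client -connect docs.tenantcompany.com:443 -servername docs.tenantcompany.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
A wildcard entry such as *.tenantcompany.com in that output covers docs.tenantcompany.com, so the tenant's existing wildcard certificate can work as long as they give you the certificate and key to install on your proxy.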
766,506 | We're updating our servers from a very out-of-date distro to a modern Debian Jessie based system, including lightdm / xfce, and of course systemd (and udisks2). One sticking point is automounting USB drives. We used to accomplish this with some udev rules. The old rules almost still work - the mount point gets created and the drive is mounted fine, but after a few seconds systemd is doing something that breaks the mount, so subsequent access attempts result in "Transport endpoint is not connected" errors. Manually mounting the drive via the command line works fine. So does letting a file manager (thunar and thunar-volman, which in turn uses udisks2). But those are not viable options - these systems mostly run headless, so thunar isn't normally running. We need to be able to plug in disk drives for unattended cron-based backups. I thought that modifying the udev script to spawn a detached job which waits a few seconds before performing the mount might do the trick, but systemd seems to go out of its way to prevent this - it somehow still waits for the detached job to finish before continuing. Perhaps having the udev script tickle udisks2 somehow is the right approach? I'm at a lose, so any advice greatly appreciated. | After several false starts I figured this out. The key is to add a systemd unit service between udev and a mounting script. (For the record, I was not able to get this working using udisks2 (via something like udisksctl mount -b /dev/sdb1 ) called either directly from a udev rule or from a systemd unit file. There seems to be a race condition and the device node isn't quite ready, resulting in Error looking up object for device /dev/sdb1 . Unfortunate, since udisks2 could take care of all the mount point messyness...) The heavy lifting is done by a shell script, which takes care of creating and removing mount points, and mounting and unmounting the drives. /usr/local/bin/usb-mount.sh #!/bin/bash
# This script is called from our systemd unit file to mount or unmount
# a USB drive.
usage()
{
echo "Usage: $0 {add|remove} device_name (e.g. sdb1)"
exit 1
}
if [[ $# -ne 2 ]]; then
usage
fi
ACTION=$1
DEVBASE=$2
DEVICE="/dev/${DEVBASE}"
# See if this drive is already mounted, and if so where
MOUNT_POINT=$(/bin/mount | /bin/grep ${DEVICE} | /usr/bin/awk '{ print $3 }')
do_mount()
{
if [[ -n ${MOUNT_POINT} ]]; then
echo "Warning: ${DEVICE} is already mounted at ${MOUNT_POINT}"
exit 1
fi
# Get info for this drive: $ID_FS_LABEL, $ID_FS_UUID, and $ID_FS_TYPE
eval $(/sbin/blkid -o udev ${DEVICE})
# Figure out a mount point to use
LABEL=${ID_FS_LABEL}
if [[ -z "${LABEL}" ]]; then
LABEL=${DEVBASE}
elif /bin/grep -q " /media/${LABEL} " /etc/mtab; then
# Already in use, make a unique one
LABEL+="-${DEVBASE}"
fi
MOUNT_POINT="/media/${LABEL}"
echo "Mount point: ${MOUNT_POINT}"
/bin/mkdir -p ${MOUNT_POINT}
# Global mount options
OPTS="rw,relatime"
# File system type specific mount options
if [[ ${ID_FS_TYPE} == "vfat" ]]; then
OPTS+=",users,gid=100,umask=000,shortname=mixed,utf8=1,flush"
fi
if ! /bin/mount -o ${OPTS} ${DEVICE} ${MOUNT_POINT}; then
echo "Error mounting ${DEVICE} (status = $?)"
/bin/rmdir ${MOUNT_POINT}
exit 1
fi
echo "**** Mounted ${DEVICE} at ${MOUNT_POINT} ****"
}
do_unmount()
{
if [[ -z ${MOUNT_POINT} ]]; then
echo "Warning: ${DEVICE} is not mounted"
else
/bin/umount -l ${DEVICE}
echo "**** Unmounted ${DEVICE}"
fi
# Delete all empty dirs in /media that aren't being used as mount
# points. This is kind of overkill, but if the drive was unmounted
# prior to removal we no longer know its mount point, and we don't
# want to leave it orphaned...
for f in /media/* ; do
if [[ -n $(/usr/bin/find "$f" -maxdepth 0 -type d -empty) ]]; then
if ! /bin/grep -q " $f " /etc/mtab; then
echo "**** Removing mount point $f"
/bin/rmdir "$f"
fi
fi
done
}
case "${ACTION}" in
add)
do_mount
;;
remove)
do_unmount
;;
*)
usage
;;
esac The script, in turn, is called by a systemd unit file. We use the "@" filename syntax so we can pass the device name as an argument. /etc/systemd/system/[email protected] [Unit]
Description=Mount USB Drive on %i
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/local/bin/usb-mount.sh add %i
ExecStop=/usr/local/bin/usb-mount.sh remove %i Finally, some udev rules start and stop the systemd unit service on hotplug/unplug: /etc/udev/rules.d/99-local.rules KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="add", RUN+="/bin/systemctl start usb-mount@%k.service"
KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="remove", RUN+="/bin/systemctl stop usb-mount@%k.service" This seems to do the trick! A couple of useful commands for debugging stuff like this: udevadm control -l debug turns on verbose logging to /var/log/syslog so you can see what's happening. udevadm control --reload-rules after you modify files in the
rules.d dir (may not be necessary, but can't hurt...). systemctl daemon-reload after you modify systemd unit files. | {
"source": [
"https://serverfault.com/questions/766506",
"https://serverfault.com",
"https://serverfault.com/users/91917/"
]
} |
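Before wiring in the udev rules, the template unit above can be exercised by hand, which separates mount-script problems from udev and ordering problems. A sketch, assuming the drive shows up as /dev/sdb1:
sudo systemctl start usb-mount@sdb1.service
systemctl status usb-mount@sdb1.service    # the script's echo output appears here
mount | grep /media                        # confirm the mount point exists and is mounted
sudo systemctl stop usb-mount@sdb1.service # should unmount and clean up the mount point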
766,519 | Good morning everyone. I have a weird issue here. I am new to the IT Admin role but every computer we have purchased new since I started is getting the windows 10 notification pop up. I run windows updates before putting it on the domain so it has not linked up to WSUS at the time. My questions are: Is there a way to remove the updates from the computer if they are not in WSUS or do I need to go to every computer? I made sure the computers are communicating with the GPO on the server and Notifications for next version of windows is off yet it is still showing. Any idea why? Thank you | After several false starts I figured this out. The key is to add a systemd unit service between udev and a mounting script. (For the record, I was not able to get this working using udisks2 (via something like udisksctl mount -b /dev/sdb1 ) called either directly from a udev rule or from a systemd unit file. There seems to be a race condition and the device node isn't quite ready, resulting in Error looking up object for device /dev/sdb1 . Unfortunate, since udisks2 could take care of all the mount point messyness...) The heavy lifting is done by a shell script, which takes care of creating and removing mount points, and mounting and unmounting the drives. /usr/local/bin/usb-mount.sh #!/bin/bash
# This script is called from our systemd unit file to mount or unmount
# a USB drive.
usage()
{
echo "Usage: $0 {add|remove} device_name (e.g. sdb1)"
exit 1
}
if [[ $# -ne 2 ]]; then
usage
fi
ACTION=$1
DEVBASE=$2
DEVICE="/dev/${DEVBASE}"
# See if this drive is already mounted, and if so where
MOUNT_POINT=$(/bin/mount | /bin/grep ${DEVICE} | /usr/bin/awk '{ print $3 }')
do_mount()
{
if [[ -n ${MOUNT_POINT} ]]; then
echo "Warning: ${DEVICE} is already mounted at ${MOUNT_POINT}"
exit 1
fi
# Get info for this drive: $ID_FS_LABEL, $ID_FS_UUID, and $ID_FS_TYPE
eval $(/sbin/blkid -o udev ${DEVICE})
# Figure out a mount point to use
LABEL=${ID_FS_LABEL}
if [[ -z "${LABEL}" ]]; then
LABEL=${DEVBASE}
elif /bin/grep -q " /media/${LABEL} " /etc/mtab; then
# Already in use, make a unique one
LABEL+="-${DEVBASE}"
fi
MOUNT_POINT="/media/${LABEL}"
echo "Mount point: ${MOUNT_POINT}"
/bin/mkdir -p ${MOUNT_POINT}
# Global mount options
OPTS="rw,relatime"
# File system type specific mount options
if [[ ${ID_FS_TYPE} == "vfat" ]]; then
OPTS+=",users,gid=100,umask=000,shortname=mixed,utf8=1,flush"
fi
if ! /bin/mount -o ${OPTS} ${DEVICE} ${MOUNT_POINT}; then
echo "Error mounting ${DEVICE} (status = $?)"
/bin/rmdir ${MOUNT_POINT}
exit 1
fi
echo "**** Mounted ${DEVICE} at ${MOUNT_POINT} ****"
}
do_unmount()
{
if [[ -z ${MOUNT_POINT} ]]; then
echo "Warning: ${DEVICE} is not mounted"
else
/bin/umount -l ${DEVICE}
echo "**** Unmounted ${DEVICE}"
fi
# Delete all empty dirs in /media that aren't being used as mount
# points. This is kind of overkill, but if the drive was unmounted
# prior to removal we no longer know its mount point, and we don't
# want to leave it orphaned...
for f in /media/* ; do
if [[ -n $(/usr/bin/find "$f" -maxdepth 0 -type d -empty) ]]; then
if ! /bin/grep -q " $f " /etc/mtab; then
echo "**** Removing mount point $f"
/bin/rmdir "$f"
fi
fi
done
}
case "${ACTION}" in
add)
do_mount
;;
remove)
do_unmount
;;
*)
usage
;;
esac The script, in turn, is called by a systemd unit file. We use the "@" filename syntax so we can pass the device name as an argument. /etc/systemd/system/[email protected] [Unit]
Description=Mount USB Drive on %i
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/usr/local/bin/usb-mount.sh add %i
ExecStop=/usr/local/bin/usb-mount.sh remove %i Finally, some udev rules start and stop the systemd unit service on hotplug/unplug: /etc/udev/rules.d/99-local.rules KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="add", RUN+="/bin/systemctl start usb-mount@%k.service"
KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="remove", RUN+="/bin/systemctl stop usb-mount@%k.service" This seems to do the trick! A couple of useful commands for debugging stuff like this: udevadm control -l debug turns on verbose logging to /var/log/syslog so you can see what's happening. udevadm control --reload-rules after you modify files in the
rules.d dir (may not be necessary, but can't hurt...). systemctl daemon-reload after you modify systemd unit files. | {
"source": [
"https://serverfault.com/questions/766519",
"https://serverfault.com",
"https://serverfault.com/users/256794/"
]
} |
766,890 | I'm unsure if this should be asked here or over on security.stackexchange.com ... Over the Easter long weekend, a small office of ours had a network breach in that an old HP printer was used to print some very offensive antisemitic documents. It appears to have happened to a number of universities in Western cultures all over the world . Anyway... I read that it's actually a pretty basic security exploit with most networked printers. Something to do with TCP port 9100 and access to the internet. I haven't been able to find much info on the specifics of how because everyone seems too concerned with the why. The network setup is pretty simple for the office that was affected. It has 4 PC's, 2 networked printers, an 8-port switch and a residential modem/router running an ADSL2+ connection (with static internet IP and a pretty vanilla configuration). Is the point of weakness in the modem/router or the printer? I've never really considered a printer as a security risk that needs to be configured, so in an effort to protect this office's network, I'd like to understand how the printers were exploited. How can I stop or block the exploit? And check or test for the exploit (or correct block of the exploit) in our other much larger offices? | This attack disproportionately affected universities because, for historical reasons, many universities use public IPv4 addresses for most or all of their network, and for academic reasons have little or no ingress (or egress!) filtering. Thus, many individual devices on a university network can be reached directly from anywhere on the Internet. In your specific case, a small office with an ADSL connection and home/SOHO router and static IP address, it's most likely that someone at the office explicitly forwarded TCP port 9100 from the Internet to the printer. (By default, because NAT is in use, incoming traffic has nowhere to go unless some provision is made to direct it somewhere.) To remediate this, you simply remove the port forwarding rule. In larger offices with proper ingress firewalling, you generally won't have any allow rules for this port at the border, except perhaps for VPN connections if you need people to be able to print over your VPN. To secure the printer/print server itself, use its built in allow list/access control list to specify the range(s) of IP addresses allowed to print to the printer, and deny all other IP addresses. (The linked document also contains other recommendations for securing your printers/print servers, which you should also evaluate.) | {
"source": [
"https://serverfault.com/questions/766890",
"https://serverfault.com",
"https://serverfault.com/users/173948/"
]
} |
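For the "how can I check or test our other offices" part of the question: scan each office's public address from outside its own network for the raw-printing port, and the other common printing ports while you are at it. A sketch using nmap from an Internet-side host; 203.0.113.10 is a placeholder for an office's static IP:
nmap -Pn -p 9100 203.0.113.10          # "open" here means the printer port is reachable from the Internet
nmap -Pn -p 515,631,9100 203.0.113.10  # LPD (515) and IPP (631) are worth checking alongside raw port 9100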
766,902 | I have logged to my debian as root using ssh. Than I have mounted some foler from my NAS using mount -t nfs 192.168.1.222:/nfs /media/nfs . Now I can access all subfolders. Exept /media/nfs/somefolder , bacause this folder available only for admin , not root (permissions was configured using NAS web GUI). How can I open this folder using admin credentials? Thanks for help. | This attack disproportionately affected universities because, for historical reasons, many universities use public IPv4 addresses for most or all of their network, and for academic reasons have little or no ingress (or egress!) filtering. Thus, many individual devices on a university network can be reached directly from anywhere on the Internet. In your specific case, a small office with an ADSL connection and home/SOHO router and static IP address, it's most likely that someone at the office explicitly forwarded TCP port 9100 from the Internet to the printer. (By default, because NAT is in use, incoming traffic has nowhere to go unless some provision is made to direct it somewhere.) To remediate this, you simply remove the port forwarding rule. In larger offices with proper ingress firewalling, you generally won't have any allow rules for this port at the border, except perhaps for VPN connections if you need people to be able to print over your VPN. To secure the printer/print server itself, use its built in allow list/access control list to specify the range(s) of IP addresses allowed to print to the printer, and deny all other IP addresses. (The linked document also contains other recommendations for securing your printers/print servers, which you should also evaluate.) | {
"source": [
"https://serverfault.com/questions/766902",
"https://serverfault.com",
"https://serverfault.com/users/345616/"
]
} |
767,415 | At my organization we have a number of simple-to-use base AMIs for different services such as ECS and Docker. Since many of our projects involve CloudFormation, we're using cfn-bootstrap , which consists of a couple of scripts and a service which run on boot to install certain packages and do certain configuration management tasks for us. On startup of a system, an equivalent of the following script must be executed: #!/bin/bash
# capture stderr only
output="$(cfn-init -s $STACK_NAME -r $RESOURCE_NAME --region $REGION 2>&1 >/dev/null)"
# if it failed, signal to CloudFormation that it failed and include a reason
returncode=$?
if [[ $returncode -ne 0 ]]; then
cfn-signal -e $returncode -r "$output"
exit $returncode
fi
# otherwise, signal success
cfn-signal -s I was thinking of running this as a systemd oneshot service which runs After=network.target and WantedBy=multi-user.target . The only problem is that I'd like my AMI to be flexible and only execute this if a certain file exists. Rather than embedding the above script into the EC2 user data, I can have the user data just define an environment file which defines the variables I need and only run my one-shot service if that environment file exists: #cloud-init
write_files:
- path: /etc/sysconfig/cloudformation
# ...
content: |
CFN_STACK_NAME="stack-name"
CFN_RESOURCE="resource-name"
CFN_REGION="region" Is there a way to make systemd only run a service if a given condition is met? | systemd provides a wide variety of conditions you can test . For instance, you can use ConditionPathExists= to test for the existence of a file. [Unit]
ConditionPathExists=/etc/sysconfig/cloudformation | {
"source": [
"https://serverfault.com/questions/767415",
"https://serverfault.com",
"https://serverfault.com/users/70024/"
]
} |
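Tying the answer back to the question's setup: the condition, the cloud-init-written environment file and the oneshot script fit naturally into a single unit. A sketch only; the script path and the CFN_* variable names are taken from the question and are illustrative:
[Unit]
Description=Run cfn-init/cfn-signal once at boot
After=network.target
ConditionPathExists=/etc/sysconfig/cloudformation
[Service]
Type=oneshot
RemainAfterExit=true
EnvironmentFile=/etc/sysconfig/cloudformation
ExecStart=/usr/local/bin/cfn-bootstrap.sh
[Install]
WantedBy=multi-user.target
When /etc/sysconfig/cloudformation is absent the unit is skipped rather than failed, which gives exactly the "only run this if a certain file exists" behaviour the question asks for.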
767,572 | A week ago I got the following error on my APC Smart-UPS 1000 which I muted. Warning State:
Connect battery
Load: 55%
Batt: 100% Today, I could smell a sort of sulfur/sulphur/rotten egg smell when I came into the office and the UPS is alarming again. There isn't a burning smell. I have vented the office & server room and shutdown the UPS. Got any other advice? UPDATE: This is what I found in the UPS. | To answer the question: This is almost always a lead-acid battery failure causing the battery to vent hydrogen sulfide (H 2 S). The battery needs to be replaced as soon as possible. As an additional note, H 2 S can be extremely dangerous at higher concentrations. If you experience eye irritation or difficulty breathing or your ability to smell the odor deteriorates noticeably, the concentration of the gas is dangerously high and you should see a doctor. At that point, you may need to hire a hazmat cleanup service to remove the battery and clean up the area. Wikipedia says this on H 2 S toxicity: 0.00047 ppm or 0.47 ppb is the odor threshold, the point at which 50% of a human panel can detect the presence of an odor without being able to identify it. 10 ppm is the OSHA permissible exposure limit (PEL) (8 hour time-weighted average). 10–20 ppm is the borderline concentration for eye irritation . 20 ppm is the acceptable ceiling concentration established by OSHA. 50 ppm is the acceptable maximum peak above the ceiling concentration for an 8-hour shift, with a maximum duration of 10 minutes. 50–100 ppm leads to eye damage. At 100–150 ppm the olfactory nerve is paralyzed after a few inhalations, and the sense of smell disappears, often together with awareness of danger . 320–530 ppm leads to pulmonary edema with the possibility of death. 530–1000 ppm causes strong stimulation of the central nervous system and rapid breathing, leading to loss of breathing. 800 ppm is the lethal concentration for 50% of humans for 5 minutes exposure (LC50). Concentrations over 1000 ppm cause immediate collapse with loss of breathing, even after inhalation of a single breath. | {
"source": [
"https://serverfault.com/questions/767572",
"https://serverfault.com",
"https://serverfault.com/users/34396/"
]
} |
767,994 | My understanding was that the primary limitation of running docker on other OSs was the Linux Network containers that made it possible. (Certainly for Macs). Recently Microsoft announced a beta of a Ubuntu linux user mode running natively on Windows 10. This can run binaries compiled in ELF format on Windows (unlike cygwin which requires a compilation.) My question is: Can you run Docker natively on the new Windows 10 (Ubuntu) bash userspace? | You can use Docker Desktop for Windows as the engine and Docker for Linux as the client in WSL on Ubuntu / Debian on Windows. Connect them via TCP. Install Docker Desktop for Windows: https://hub.docker.com/editions/community/docker-ce-desktop-windows If you want to use Windows Containers instead of Linux Containers both type containers can be managed by the Linux docker client in the bash userspace. Since version 17.03.1-ce-win12 (12058) you must check Expose daemon on tcp://localhost:2375 without TLS to allow the Linux Docker client to continue communicating with the Windows Docker daemon by TCP Follow these steps: cd
wget https://download.docker.com/linux/static/stable/`uname -m`/docker-19.03.1.tgz
tar -xzvf docker-*.tgz
cd docker
./docker -H tcp://0.0.0.0:2375 ps or env DOCKER_HOST=tcp://0.0.0.0:2375 ./docker ps To make it permanent: mkdir ~/bin
mv ~/docker/docker ~/bin Add the corresponding variables to .bashrc export DOCKER_HOST=tcp://0.0.0.0:2375
export PATH=$PATH:~/bin Of course, you can install docker-compose sudo -i
curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose Or using python pip sudo apt-get install python-pip bash-completion
sudo pip install docker-compose And Bash completion. The best part: sudo -i
apt-get install bash-completion
curl -L https://raw.githubusercontent.com/docker/docker-ce/master/components/cli/contrib/completion/bash/docker > /etc/bash_completion.d/docker
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose I've tested it using the 2.1.0.1 (37199) version of Docker Desktop using Hyper-V: $ docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: 74b1e89e8a
Built: Thu Jul 25 21:17:37 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:17:52 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
Look both client and server say **OS/Arch: linux/amd64** Volumes Take care when adding volumes. The path C:\dir will be visible as /mnt/c/dir on WSL and as /c/dir/ by docker engine. You can overcome it permanently: sudo bash -c "echo -e '[automount] \nroot = /'>/etc/wsl.conf" You must exit and reload WSL after making the change to wsl.conf so that WSL reads in your changes on launch. UPDATE from: What’s new for the Command Line in Windows 10 version 1803 Unix Sockets Unix Sockets weren't supported on Windows, and now they are! You can also communicate over Unix sockets between Windows and WSL. One of the great things about this is it enables WSL to run the Linux Docker Client to interact with the Docker Daemon running on Windows. UPDATE This script and the use of Unix Sockets was included in Pengwin 's pengwin-setup. Regards | {
"source": [
"https://serverfault.com/questions/767994",
"https://serverfault.com",
"https://serverfault.com/users/9803/"
]
} |
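A quick smoke test of the setup above from the WSL shell, mirroring the DOCKER_HOST value used in the answer:
export DOCKER_HOST=tcp://0.0.0.0:2375
docker version               # Client and Server should both report OS/Arch: linux/amd64
docker run --rm hello-world  # pulls and runs a test container via the Windows-hosted daemon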
768,003 | We observed frequent soft lockup issues on Ubuntu 12.04 (kernel: 3.8.0-29-generic) and found system unresponsive after that. Here are kern.log message just before soft locks occurred. Any help would be highly appreciated. Mar 29 00:12:01 HOST9016 kernel: [387780.959368] BUG: soft lockup - CPU#60 stuck for 23s! [java:113233]
Mar 29 00:12:01 HOST9016 kernel: [387781.007045] BUG: soft lockup - CPU#63 stuck for 23s! [java:113220]
Mar 29 00:12:01 HOST9016 kernel: [387781.007516] Modules linked in: nf_conntrack_ipv6(F) nf_defrag_ipv6(F) ip6table_filter(F) ip6_tables(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_LOG(F) xt_tcpudp(F) xt_conntrack(F) xt_hashlimit(F) iptable_filter(F) ip_tables(F) x_tables(F) vesafb(F) coretemp(F) kvm_intel(F) kvm(F) ghash_clmulni_intel(F) aesni_intel(F) ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) joydev(F) hid_generic(F) gpio_ich(F) microcode(F) psmouse(F) serio_raw(F) usbhid(F) hid(F) hpwdt(F) hpilo(F) lpc_ich(F) ioatdma(F) dca(F) wmi(F) bnep(F) rfcomm(F) bluetooth(F) nfsd(F) nfs_acl(F) auth_rpcgss(F) nfs(F) fscache(F) acpi_power_meter(F) lockd(F) mac_hid(F) sunrpc(F) nf_conntrack_ftp(F) nf_conntrack(F) lp(F) parport(F) tg3(F) ptp(F) pps_core(F) hpsa(F)
Mar 29 00:12:01 HOST9016 kernel: [387781.007520] CPU 63
Mar 29 00:12:01 HOST9016 kernel: [387781.007521] Pid: 113220, comm: java Tainted: GF 3.8.0-29-generic #42~precise1-Ubuntu HP ProLiant DL580 Gen8
Mar 29 00:12:01 HOST9016 kernel: [387781.007530] RIP: 0010:[<ffffffff811674a5>] [<ffffffff811674a5>] change_pte_range+0x205/0x2d0
Mar 29 00:12:01 HOST9016 kernel: [387781.007532] RSP: 0018:ffff883dbc9ffca8 EFLAGS: 00000286
Mar 29 00:12:01 HOST9016 kernel: [387781.007533] RAX: ffffea00f1431600 RBX: ffff883dbc8d4958 RCX: 0600000000080068
Mar 29 00:12:01 HOST9016 kernel: [387781.007960] RDX: 0000000000000000 RSI: 00007f2769b6e000 RDI: 8000003c50c58166
Mar 29 00:12:01 HOST9016 kernel: [387781.007961] RBP: ffff883dbc9ffd48 R08: ffff883dbc8d4958 R09: 0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.007961] R10: 0000000000000004 R11: 0000000000000202 R12: 0000000000000004
Mar 29 00:12:01 HOST9016 kernel: [387781.007962] R13: 0000000000000202 R14: ffffffff81ce6fa0 R15: ffff883dbc9ffc98
Mar 29 00:12:01 HOST9016 kernel: [387781.007964] FS: 00007f22b1059700(0000) GS:ffff881fffa40000(0000) knlGS:0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.007965] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 29 00:12:01 HOST9016 kernel: [387781.007966] CR2: 00007f47ab783028 CR3: 0000001d8f9a3000 CR4: 00000000001407e0
Mar 29 00:12:01 HOST9016 kernel: [387781.007967] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.007968] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Mar 29 00:12:01 HOST9016 kernel: [387781.007969] Process java (pid: 113220, threadinfo ffff883dbc9fe000, task ffff883dbc9045c0)
Mar 29 00:12:01 HOST9016 kernel: [387781.007970] Stack:
Mar 29 00:12:01 HOST9016 kernel: [387781.007985] ffff883dbc9ffd38 ffff881fd06ba940 ffff881fd06ba680 000000007a400000
Mar 29 00:12:01 HOST9016 kernel: [387781.008435] 00007f2689600000 0000000000000001 ffff883dbc8d4958 0000000000000001
Mar 29 00:12:01 HOST9016 kernel: [387781.008445] ffffea017f48e570 8000000000000025 8000003c50c58166 00007f2769c00000
Mar 29 00:12:01 HOST9016 kernel: [387781.008445] Call Trace:
Mar 29 00:12:01 HOST9016 kernel: [387781.008452] [<ffffffff811677ea>] change_protection_range+0x27a/0x410
Mar 29 00:12:01 HOST9016 kernel: [387781.008875] [<ffffffff811679f5>] change_protection+0x75/0xc0
Mar 29 00:12:01 HOST9016 kernel: [387781.008881] [<ffffffff8117baeb>] change_prot_numa+0x1b/0x30
Mar 29 00:12:01 HOST9016 kernel: [387781.008889] [<ffffffff8109544a>] task_numa_work+0x24a/0x320
Mar 29 00:12:01 HOST9016 kernel: [387781.008895] [<ffffffff8107bdc8>] task_work_run+0xc8/0xf0
Mar 29 00:12:01 HOST9016 kernel: [387781.009311] [<ffffffff81014d9a>] do_notify_resume+0xaa/0xc0
Mar 29 00:12:01 HOST9016 kernel: [387781.009318] [<ffffffff816fcb9a>] int_signal+0x12/0x17
Mar 29 00:12:01 HOST9016 kernel: [387781.009738] Code: 0f 84 73 ff ff ff e9 69 ff ff ff 0f 1f 00 48 8b 7d 90 4c 89 f2 4c 89 ee e8 89 54 ff ff 31 d2 48 85 c0 0f 84 34 ff ff ff 48 8b 08 <48> c1 e9 3a 83 bd 7c ff ff ff ff 74 7e 39 8d 7c ff ff ff 0f b6
Mar 29 00:12:01 HOST9016 kernel: [387781.098867] BUG: soft lockup - CPU#69 stuck for 23s! [java:113232]
Mar 29 00:12:01 HOST9016 kernel: [387781.148120] Modules linked in: nf_conntrack_ipv6(F) nf_defrag_ipv6(F) ip6table_filter(F) ip6_tables(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_LOG(F) xt_tcpudp(F) xt_conntrack(F) xt_hashlimit(F) iptable_filter(F) ip_tables(F) x_tables(F) vesafb(F) coretemp(F) kvm_intel(F) kvm(F) ghash_clmulni_intel(F) aesni_intel(F) ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) joydev(F) hid_generic(F) gpio_ich(F) microcode(F) psmouse(F) serio_raw(F) usbhid(F) hid(F) hpwdt(F) hpilo(F) lpc_ich(F) ioatdma(F) dca(F) wmi(F) bnep(F) rfcomm(F) bluetooth(F) nfsd(F) nfs_acl(F) auth_rpcgss(F) nfs(F) fscache(F) acpi_power_meter(F) lockd(F) mac_hid(F) sunrpc(F) nf_conntrack_ftp(F) nf_conntrack(F) lp(F) parport(F) tg3(F) ptp(F) pps_core(F) hpsa(F)
Mar 29 00:12:01 HOST9016 kernel: [387781.150284] CPU 69
Mar 29 00:12:01 HOST9016 kernel: [387781.150288] Pid: 113232, comm: java Tainted: GF 3.8.0-29-generic #42~precise1-Ubuntu HP ProLiant DL580 Gen8
Mar 29 00:12:01 HOST9016 kernel: [387781.150701] RIP: 0010:[<ffffffff811674a5>] [<ffffffff811674a5>] change_pte_range+0x205/0x2d0
Mar 29 00:12:01 HOST9016 kernel: [387781.150706] RSP: 0018:ffff887fcba19ca8 EFLAGS: 00000286
Mar 29 00:12:01 HOST9016 kernel: [387781.151137] RAX: ffffea00f71aee00 RBX: ffff883dbc8d4958 RCX: 0600000000080078
Mar 29 00:12:01 HOST9016 kernel: [387781.151139] RDX: 0000000000000000 RSI: 00007f2a3c820000 RDI: 8000003dc6bb8166
Mar 29 00:12:01 HOST9016 kernel: [387781.151141] RBP: ffff887fcba19d48 R08: ffff883dbc8d4958 R09: 0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.151143] R10: 0000000000000004 R11: 0000000000000293 R12: 0000000000000004
Mar 29 00:12:01 HOST9016 kernel: [387781.151145] R13: 0000000000000293 R14: ffffffff81ce6fa0 R15: ffff887fcba19c98
Mar 29 00:12:01 HOST9016 kernel: [387781.151148] FS: 00007f22829a7700(0000) GS:ffff881fffb00000(0000) knlGS:0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.151151] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 29 00:12:01 HOST9016 kernel: [387781.151153] CR2: 00007f60e5451720 CR3: 0000001d8f9a3000 CR4: 00000000001407e0
Mar 29 00:12:01 HOST9016 kernel: [387781.151154] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.151156] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Mar 29 00:12:01 HOST9016 kernel: [387781.151599] Process java (pid: 113232, threadinfo ffff887fcba18000, task ffff887cfb0345c0)
Mar 29 00:12:01 HOST9016 kernel: [387781.151600] Stack:
Mar 29 00:12:01 HOST9016 kernel: [387781.151602] ffff887fcba19d38 ffff881fd06ba940 0000000000000293 0000000200000004
Mar 29 00:12:01 HOST9016 kernel: [387781.152476] 0000000000000000 ffff883dbc8d4958 ffff883dbc8d4958 0000000000000001
Mar 29 00:12:01 HOST9016 kernel: [387781.152895] ffffea01723ae170 8000000000000025 8000003dc6bb8166 00007f2a3ca00000
Mar 29 00:12:01 HOST9016 kernel: [387781.153738] Call Trace:
Mar 29 00:12:01 HOST9016 kernel: [387781.154157] [<ffffffff811677ea>] change_protection_range+0x27a/0x410
Mar 29 00:12:01 HOST9016 kernel: [387781.154575] [<ffffffff811679f5>] change_protection+0x75/0xc0
Mar 29 00:12:01 HOST9016 kernel: [387781.154992] [<ffffffff8117baeb>] change_prot_numa+0x1b/0x30
Mar 29 00:12:01 HOST9016 kernel: [387781.155001] [<ffffffff8109544a>] task_numa_work+0x24a/0x320
Mar 29 00:12:01 HOST9016 kernel: [387781.155009] [<ffffffff8107bdc8>] task_work_run+0xc8/0xf0
Mar 29 00:12:01 HOST9016 kernel: [387781.155015] [<ffffffff816f254b>] ? __schedule+0x3bb/0x6b0
Mar 29 00:12:01 HOST9016 kernel: [387781.155021] [<ffffffff81014d9a>] do_notify_resume+0xaa/0xc0
Mar 29 00:12:01 HOST9016 kernel: [387781.155448] [<ffffffff816fcb9a>] int_signal+0x12/0x17
Mar 29 00:12:01 HOST9016 kernel: [387781.155450] Code: 0f 84 73 ff ff ff e9 69 ff ff ff 0f 1f 00 48 8b 7d 90 4c 89 f2 4c 89 ee e8 89 54 ff ff 31 d2 48 85 c0 0f 84 34 ff ff ff 48 8b 08 <48> c1 e9 3a 83 bd 7c ff ff ff ff 74 7e 39 8d 7c ff ff ff 0f b6
Mar 29 00:12:01 HOST9016 kernel: [387781.262831] BUG: soft lockup - CPU#79 stuck for 22s! [java:113234]
Mar 29 00:12:01 HOST9016 kernel: [387781.314646] Modules linked in: nf_conntrack_ipv6(F) nf_defrag_ipv6(F) ip6table_filter(F) ip6_tables(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_LOG(F) xt_tcpudp(F) xt_conntrack(F) xt_hashlimit(F) iptable_filter(F) ip_tables(F) x_tables(F) vesafb(F) coretemp(F) kvm_intel(F) kvm(F) ghash_clmulni_intel(F) aesni_intel(F) ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) joydev(F) hid_generic(F) gpio_ich(F) microcode(F) psmouse(F) serio_raw(F) usbhid(F) hid(F) hpwdt(F) hpilo(F) lpc_ich(F) ioatdma(F) dca(F) wmi(F) bnep(F) rfcomm(F) bluetooth(F) nfsd(F) nfs_acl(F) auth_rpcgss(F) nfs(F) fscache(F) acpi_power_meter(F) lockd(F) mac_hid(F) sunrpc(F) nf_conntrack_ftp(F) nf_conntrack(F) lp(F) parport(F) tg3(F) ptp(F) pps_core(F) hpsa(F)
Mar 29 00:12:01 HOST9016 kernel: [387781.319281] CPU 79
Mar 29 00:12:01 HOST9016 kernel: [387781.319285] Pid: 113234, comm: java Tainted: GF 3.8.0-29-generic #42~precise1-Ubuntu HP ProLiant DL580 Gen8
Mar 29 00:12:01 HOST9016 kernel: [387781.319288] RIP: 0010:[<ffffffff8115c93f>] [<ffffffff8115c93f>] vm_normal_page+0x1f/0x80
Mar 29 00:12:01 HOST9016 kernel: [387781.320152] RSP: 0000:ffff887d8ede7c88 EFLAGS: 00000a06
Mar 29 00:12:01 HOST9016 kernel: [387781.320568] RAX: 0070bea105980000 RBX: ffff881fd06ba940 RCX: 0000000000000001
Mar 29 00:12:01 HOST9016 kernel: [387781.320570] RDX: 8000001c2fa84166 RSI: 00007f2b98da6000 RDI: 8000001c2fa84166
Mar 29 00:12:01 HOST9016 kernel: [387781.320572] RBP: ffff887d8ede7c98 R08: ffff883dbc8d4958 R09: 0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.320573] R10: 0000000000000004 R11: 0000000000000202 R12: 000000000000004f
Mar 29 00:12:01 HOST9016 kernel: [387781.320998] R13: ffffffff8104e810 R14: 000000000000003c R15: 0000004fd2942458
Mar 29 00:12:01 HOST9016 kernel: [387781.321001] FS: 00007f22827a5700(0000) GS:ffff883fffa60000(0000) knlGS:0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.321002] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 29 00:12:01 HOST9016 kernel: [387781.321004] CR2: 00007f47b2eb3000 CR3: 0000001d8f9a3000 CR4: 00000000001407e0
Mar 29 00:12:01 HOST9016 kernel: [387781.321419] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 29 00:12:01 HOST9016 kernel: [387781.321421] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Mar 29 00:12:01 HOST9016 kernel: [387781.321819] Process java (pid: 113234, threadinfo ffff887d8ede6000, task ffff887df5799740)
Mar 29 00:12:01 HOST9016 kernel: [387781.321820] Stack:
Mar 29 00:12:01 HOST9016 kernel: [387781.321821] ffff887d8ede7c98 8000001c2fa84166 ffff887d8ede7d48 ffffffff81167497
Mar 29 00:12:01 HOST9016 kernel: [387781.322653] ffff887d8ede7d38 ffff881fd06ba940 0000000000000202 0000000200000004
Mar 29 00:12:01 HOST9016 kernel: [387781.323512] ffff887d8ede7e00 ffff883dbc8d4958 ffff883dbc8d4958 0000000000000001
Mar 29 00:12:01 HOST9016 kernel: [387781.324377] Call Trace:
Mar 29 00:12:01 HOST9016 kernel: [387781.324796] [<ffffffff81167497>] change_pte_range+0x1f7/0x2d0
Mar 29 00:12:01 HOST9016 kernel: [387781.324802] [<ffffffff811677ea>] change_protection_range+0x27a/0x410
Mar 29 00:12:01 HOST9016 kernel: [387781.325225] [<ffffffff811679f5>] change_protection+0x75/0xc0
Mar 29 00:12:01 HOST9016 kernel: [387781.325672] [<ffffffff8117baeb>] change_prot_numa+0x1b/0x30
Mar 29 00:12:01 HOST9016 kernel: [387781.326888] [<ffffffff8109544a>] task_numa_work+0x24a/0x320
Mar 29 00:12:01 HOST9016 kernel: [387781.326900] [<ffffffff8107bdc8>] task_work_run+0xc8/0xf0
Mar 29 00:12:01 HOST9016 kernel: [387781.326912] [<ffffffff81014d9a>] do_notify_resume+0xaa/0xc0
Mar 29 00:12:01 HOST9016 kernel: [387781.327756] [<ffffffff816fcb9a>] int_signal+0x12/0x17
Mar 29 00:12:01 HOST9016 kernel: [387781.327758] Code: 66 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 49 89 f8 48 89 d7 48 89 e5 53 48 83 ec 08 48 89 f8 0f 1f 40 00 48 c1 e0 12 <48> c1 e8 1e f6 c6 02 75 27 48 39 05 19 cd b8 00 72 3f 48 89 c3
Mar 29 06:24:22 HOST9016 kernel: [410090.031877] BUG: soft lockup - CPU#103 stuck for 23s! [java:113233]
Mar 29 06:24:22 HOST9016 kernel: [410090.086169] Modules linked in: nf_conntrack_ipv6(F) nf_defrag_ipv6(F) ip6table_filter(F) ip6_tables(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_LOG(F) xt_tcpudp(F) xt_conntrack(F) xt_hashlimit(F) iptable_filter(F) ip_tables(F) x_tables(F) vesafb(F) coretemp(F) kvm_intel(F) kvm(F) ghash_clmulni_intel(F) aesni_intel(F) ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) joydev(F) hid_generic(F) gpio_ich(F) microcode(F) psmouse(F) serio_raw(F) usbhid(F) hid(F) hpwdt(F) hpilo(F) lpc_ich(F) ioatdma(F) dca(F) wmi(F) bnep(F) rfcomm(F) bluetooth(F) nfsd(F) nfs_acl(F) auth_rpcgss(F) nfs(F) fscache(F) acpi_power_meter(F) lockd(F) mac_hid(F) sunrpc(F) nf_conntrack_ftp(F) nf_conntrack(F) lp(F) parport(F) tg3(F) ptp(F) pps_core(F) hpsa(F) | You can use Docker Desktop for Windows as the engine and Docker for Linux as the client in WSL on Ubuntu / Debian on Windows. Connect them via TCP. Install Docker Desktop for Windows: https://hub.docker.com/editions/community/docker-ce-desktop-windows If you want to use Windows Containers instead of Linux Containers both type containers can be managed by the Linux docker client in the bash userspace. Since version 17.03.1-ce-win12 (12058) you must check Expose daemon on tcp://localhost:2375 without TLS to allow the Linux Docker client to continue communicating with the Windows Docker daemon by TCP Follow these steps: cd
wget https://download.docker.com/linux/static/stable/`uname -m`/docker-19.03.1.tgz
tar -xzvf docker-*.tgz
cd docker
./docker -H tcp://0.0.0.0:2375 ps or env DOCKER_HOST=tcp://0.0.0.0:2375 ./docker ps To make it permanent: mkdir ~/bin
mv ~/docker/docker ~/bin Add the corresponding variables to .bashrc export DOCKER_HOST=tcp://0.0.0.0:2375
export PATH=$PATH:~/bin Of course, you can install docker-compose sudo -i
curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose Or using python pip sudo apt-get install python-pip bash-completion
sudo pip install docker-compose And Bash completion. The best part: sudo -i
apt-get install bash-completion
curl -L https://raw.githubusercontent.com/docker/docker-ce/master/components/cli/contrib/completion/bash/docker > /etc/bash_completion.d/docker
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose I've tested it using the 2.1.0.1 (37199) version of Docker Desktop using Hyper-V: $ docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: 74b1e89e8a
Built: Thu Jul 25 21:17:37 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:17:52 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
Look both client and server say **OS/Arch: linux/amd64** Volumes Take care when adding volumes. The path C:\dir will be visible as /mnt/c/dir on WSL and as /c/dir/ by docker engine. You can overcome it permanently: sudo bash -c "echo -e '[automount] \nroot = /'>/etc/wsl.conf" You must exit and reload WSL after making the change to wsl.conf so that WSL reads in your changes on launch. UPDATE from: What’s new for the Command Line in Windows 10 version 1803 Unix Sockets Unix Sockets weren't supported on Windows, and now they are! You can also communicate over Unix sockets between Windows and WSL. One of the great things about this is it enables WSL to run the Linux Docker Client to interact with the Docker Daemon running on Windows. UPDATE This script and the use of Unix Sockets was included in Pengwin 's pengwin-setup. Regards | {
"source": [
"https://serverfault.com/questions/768003",
"https://serverfault.com",
"https://serverfault.com/users/347128/"
]
} |
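Every stack trace in the question above passes through task_numa_work and change_prot_numa, which is the kernel's automatic NUMA balancing path. As a diagnostic step only, and assuming your kernel build exposes this sysctl (not every 3.8-era build does), you can check whether the feature is active and temporarily switch it off to see whether the lockups stop:
cat /proc/sys/kernel/numa_balancing     # 1 = automatic NUMA balancing enabled; the file may not exist on older kernels
sudo sysctl -w kernel.numa_balancing=0  # temporary setting, lost on reboot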
768,026 | There doesn't seem to be a ready yum package.
So I've downloaded the tarball , but as soon as I ran autoreconf -i , I got the following: configure.ac:14: warning: macro `AM_PROG_AR' not found in library configure.ac:10: error: Autoconf version 2.64 or higher is required configure.ac:10: the top level autom4te: /usr/bin/m4 failed with exit status: 63 aclocal: autom4te failed with exit status: 63 autoreconf: aclocal failed with exit status: 63 So, how can one install jq on RHEL 6.5? Thank you! | As it says on the development page for jq "jq is written in C and has no runtime dependencies". So just download the file and put it in place with the following: wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x ./jq
cp jq /usr/bin | {
"source": [
"https://serverfault.com/questions/768026",
"https://serverfault.com",
"https://serverfault.com/users/132118/"
]
} |
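After copying the binary into place as shown above, a quick check confirms it runs and filters JSON correctly:
jq --version
echo '{"name":"jq","ok":true}' | jq .ok    # prints: true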
768,280 | We roll out Ubuntu 14.04 servers on isolated networks, running ntpd 4.2.6p5, configured to use multiple NTP servers as provided by customers (no access to pool.ntp.org). Our dumb terminal client devices run an old version of BusyBox (1.00-rc2) and ntpclient 2010 from Larry Doolittle. This setup has worked great for years, but recently we've hit a roadblock with a new customer. They provided us with 5 in-house NTP server addresses which seem to work great on their own, as far as ntpdate-debian is concerned on the Linux server. On the BusyBox side however, ntpclient complains with "Dispersion too high". From the debug output, ntpclient gets "1217163.1" from the NTP server but the max value it supports is absolute(65536). $ /usr/sbin/ntpclient -s -i 15 -h 10.17.162.250 -d
Configuration:
-c probe_count 1
-d (debug) 1
-g goodness 0
-h hostname 10.17.162.250
-i interval 15
-l live 0
-p local_port 0
-q min_delay 800.000000
-s set_clock 1
-x cross_check 1
Listening...
Sending ...
recvfrom
packet of length 48 received
Source: INET Port 123 host 10.17.162.250
LI=0 VN=3 Mode=4 Stratum=4 Poll=4 Precision=-20
Delay=60745.2 Dispersion=1346801.8 Refid=10.31.10.21
Reference 3668859928.942079
(sent) 3668859928.708371
Originate 3668859928.708371
Receive 3668859928.963271
Transmit 3668859928.963369
Our recv 3668859928.708371
Total elapsed: 0.00
Server stall: 93.09
Slop: -93.09
Skew: 255443.94
Frequency: 0
day second elapsed stall skew dispersion freq
42463 56728.708 rejected packet: abs(DISP)>65536 These are all devices on the same LAN so frankly I am flabbergasted. Aghast even. Here's the ntpq -pn output from the Ubuntu 14.04 server: user@host:~$ ntpq -pn
remote refid st t when poll reach delay offset jitter
==============================================================================
127.127.1.0 .LOCL. 10 l 1025 64 0 0.000 0.000 0.000
10.17.162.249 10.17.6.10 5 u 23 1024 37 0.865 1381.07 697.260
10.31.10.22 .LOCL. 1 u 1044 1024 17 29.586 -838.06 397.342
10.17.6.10 10.31.10.21 4 u 1065 1024 17 0.366 105.245 402.999
*10.31.10.21 132.246.11.238 3 u 5 1024 37 29.418 794.292 616.796
10.17.6.11 10.31.10.21 4 u 1038 1024 17 0.408 120.030 381.058 My questions are: What is dispersion and what can alter its value? What commands could I run to get more details from the NTP servers? Could the fault lie on the Ubuntu server side, with an improper ntp.conf ? There is nothing special there really. Would switching to chrony change anything in this case? | I see some confusion going on in the answers here. For starters, ntpclient , at least in -s mode, isn't acting as a full NTP client, it's only sending and receiving one packet , so there's no "last 8 packets received". It isn't actually estimating its own dispersion at all. Instead, the value it's printing is the value called "root dispersion" (rootdisp) in the packet returned by the server, which is an estimate of the total amount of error/variance between that server and the correct time. The way this is calculated is pretty simple: every NTP server either gets its time from an external clock (for example a radio or GPS receiver), or from another NTP server. If a server gets its time from an external clock, its root dispersion is the estimated maximum error of that clock. If it gets its time from another NTP server, its root dispersion is that server's root dispersion plus the dispersion added by the network link between them. One point of confusion here is that while ntpq and chrony display dispersion and root dispersion in seconds, which is what people are used to looking to, ntpclient displays it in microseconds . Regardless, a value of 1217163 is still quite high. A good NTP server knows the time within a few milliseconds; a bad one within a few tens or hundreds of milliseconds. Yours is telling you that its time can only be trusted to within +/- 1.2 seconds. You can actually get ntpclient to synchronize to this server anyway by passing the -x 0 or -t option (depending on version of ntpclient), which disables NTP sanity checks. If you only need roughly accurate time (to within a few seconds), that may be good enough. However, ntpclient is being pretty reasonable in refusing to synchronize to such a bad server. Your ntpq output on the ubuntu machine is showing a jitter of hundreds of milliseconds for all of its servers, even though they have low delay, which indicates either a very unreliable network, a conspiracy of all of the servers to provide erratic time, or a basic timekeeping problem on the server itself. It also concerns me that the server 10.31.10.22 is advertising a refid of LOCL (undisciplined local clock) but has a stratum of 1. Usually the local clock is fudged to a stratum of 10 so that it's only used as a last-resort synchronization source to keep a herd from drifting apart. Either 10.31.10.22 is misconfigured and providing bad time to the rest of the network, or it's being disciplined to good time by some program outside of NTP's control, in which case the misconfiguration is simply that it's advertising the LOCL refid; it should be overridden to e.g. GPS or whatever is providing its time. | {
"source": [
"https://serverfault.com/questions/768280",
"https://serverfault.com",
"https://serverfault.com/users/347350/"
]
} |
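On the "what commands could I run to get more details from the NTP servers" part of the question: the root dispersion figure the answer discusses can be read straight from each server's system variables with ntpq. A sketch against addresses taken from the question:
ntpq -c rv 10.17.162.250    # look for rootdelay and rootdisp in the output
ntpq -c rv 10.31.10.22
ntpdate -q 10.17.162.250    # query-only check of offset and delay without setting the clock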
768,509 | Introduction I have a dev server (currently running Ubuntu 14.04 LTS), which I have been using for a while now for hosting various development tools on different ports. Because the ports can be hard to remember I have decided to to use port 80 for all of my services and do the port forwarding internally, based off hostname. Instead of writing domain.com:5432, I can simply access it through sub.domain.com For example the application X, which is using the port 7547 and is running on sub.domain.com has the following nginx configuration: upstream sub {
server 127.0.0.1:7547;
}
server {
listen 80;
server_name sub.domain.com www.sub.domain.com;
access_log /var/log/nginx/sub.log combined;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:7547;
proxy_set_header Authorization "";
}
} The Question Given the current configuration structure, which I have chosen, is it possible to use letsencrypt and run the different services under https? | Yes, you can have nginx proxy requests to HTTP servers, and then itself respond to clients over HTTPS. When doing this, you will want to be sure that the nginx<->proxy connect is unlikely to be sniffed by whoever is your expected attacker. Safe-enough approaches might include: proxying to the same host (as you do) proxying to other hosts behind your firewall Proxying to another host on the public Internet is unlikely to be safe-enough. Here are instructions for obtaining a Let's Encrypt certificate using the same webserver you are using as a proxy. Requesting your initial certificate from Let's Encrypt Modify your server clause to allow the subdirectory .well-known to be served from a local directory, eg: server {
listen 80;
listen [::]:80;
server_name sub.domain.com www.sub.domain.com;
[…]
location /.well-known {
alias /var/www/sub.domain.com/.well-known;
}
location / {
# proxy commands go here
[…]
}
} http://sub.domain.com/.well-known is where the Let's Encrypt servers will look for the answers to the challenges it issues. You can then use the certbot client to request a certificate from Let's Encrypt using the webroot plugin (as root): certbot certonly --webroot -w /var/www/sub.domain.com/ -d sub.domain.com -d www.sub.domain.com Your key, certificate, and certificate chain will now be installed in /etc/letsencrypt/live/sub.domain.com/ Configuring nginx to use your certificate First create a new server clause like this: server {
listen 443 ssl;
listen [::]:443 ssl;
# if you wish, you can use the below lines for listen instead
# which enables HTTP/2
# requires nginx version >= 1.9.5
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
server_name sub.domain.com www.sub.domain.com;
ssl_certificate /etc/letsencrypt/live/sub.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/sub.domain.com/privkey.pem;
# Turn on OCSP stapling as recommended at
# https://letsencrypt.org/docs/integration-guide/
# requires nginx version >= 1.3.7
ssl_stapling on;
ssl_stapling_verify on;
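# Optional hardening (example values only, not from the original configuration;
# check a current guide such as the raymii.org article referenced below before
# adopting them):
# ssl_protocols TLSv1.2;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;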
# Uncomment this line only after testing in browsers,
# as it commits you to continuing to serve your site over HTTPS
# in future
# add_header Strict-Transport-Security "max-age=31536000";
access_log /var/log/nginx/sub.log combined;
# maintain the .well-known directory alias for renewals
location /.well-known {
alias /var/www/sub.domain.com/.well-known;
}
location / {
# proxy commands go here as in your port 80 configuration
[…]
}
} Reload nginx: service nginx reload Verify that HTTPS now works by visiting https://sub.domain.com and https://www.sub.domain.com in your browser (and any other browsers you specifically wish to support) and checking that they don't report certificate errors. Recommended: also review raymii.org: Strong SSL Security on nginx and test your configuration at SSL Labs . (Recommended) Redirect HTTP requests to HTTPS Once you have confirmed that your site works with the https:// version of the URL, rather than have some users served insecure content because they went to http://sub.domain.com , redirect them to the HTTPS version of the site. Replace your entire port 80 server clause with: server {
listen 80;
listen [::]:80;
server_name sub.domain.com www.sub.domain.com;
rewrite ^ https://$host$request_uri? permanent;
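# Note: "return 301 https://$host$request_uri;" is an equivalent and generally
# preferred way to write this redirect in current nginx versions.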
} You should also now uncomment this line in the port 443 configuration, so that browsers remember to not even try the HTTP version of the site: add_header Strict-Transport-Security "max-age=31536000"; Automatically renew your certificate You can use this command (as root) to renew all certificates known to certbot and reload nginx using the new certificate (which will have the same path as your existing certificate): certbot renew --renew-hook "service nginx reload" certbot will only attempt to renew certificates that are more than 60 days old, so it is safe (and recommended!) to run this command very regularly , and automatically if at all possible. Eg, you could put the following command in /etc/crontab : # at 4:47am/pm, renew all Let's Encrypt certificates over 60 days old
47 4,16 * * * root certbot renew --quiet --renew-hook "service nginx reload" You can test renewals with either a dry-run, which will contact Let's Encrypt staging servers to do a real test of contacting your domain, but won't store the resulting certificates: certbot --dry-run renew Or you can force an early renewal with: certbot renew --force-renew --renew-hook "service nginx reload" Note: you can dry run as many times as you like, but real renewals are subject to Let's Encrypt rate limits . | {
"source": [
"https://serverfault.com/questions/768509",
"https://serverfault.com",
"https://serverfault.com/users/347523/"
]
} |
768,901 | I'm particularly interested in this for looking at the output of oneshot services that run on a timer. The --unit flag is close, but it concatenates all the runs of the service together. The most obvious way I can think of would be to filter on PID, but that makes me worry about PID reuse / services that fork, and getting the last PID is pretty inconvenient. Is there some other identifier that corresponds to a single run of a service, that I could use to filter the logs? EDIT: I would happily accept an authoritative "no" if that's the real answer. | Since systemd version 232 , we have the concept of invocation ID. Each time a unit is run, it has a unique 128-bit invocation ID. Unlike MainPID, which can be recycled, or ActiveEnterTimestamp, which can have resolution troubles, it is a failsafe way to get all the logs of a particular systemd unit invocation. To obtain the latest invocation ID of a unit: $ systemctl show --value -p InvocationID openipmi
bd3eb84c3aa74169a3dcad2af183885b To obtain the journal of the latest invocation of, say, openipmi , whether it failed or not, you can use the one liner $ journalctl _SYSTEMD_INVOCATION_ID=`systemctl show -p InvocationID --value openipmi.service`
-- Logs begin at Thu 2018-07-26 12:09:57 IDT, end at Mon 2019-07-08 01:32:50 IDT. --
Jun 21 13:03:13 build03.lbits openipmi[1552]: * Starting ipmi drivers
Jun 21 13:03:13 build03.lbits openipmi[1552]: ...fail!
Jun 21 13:03:13 build03.lbits openipmi[1552]: ...done. (Note that the --value is available since systemd 230 , older than InvocationID ) | {
"source": [
"https://serverfault.com/questions/768901",
"https://serverfault.com",
"https://serverfault.com/users/92283/"
]
} |
769,468 | My hosting provider has recently re-issued and re-installed an SSL certificate for my domain, after they let the old one expire by mistake. I am now able to browse the website over HTTPS again, and so is my host, and so are a number of other users. However, some users (at least a dozen out of hundreds) are still getting Your connection is not secure error messages on different browsers and platforms. (It is proving difficult to diagnose an issue I cannot reproduce.) I understand different browsers use different lists of Certification Authorities (CA.) How come a user running the same version of Firefox as I am (45.0.1 on OS X) is getting a SEC_ERROR_UNKNOWN_ISSUER error (for my site only) while I'm not? What makes it possible? Said user cleared his cache and rebooted his laptop. I ran an SSL check on digicert.com . The result is this: SSL Certificate is not trusted The certificate is not signed by a trusted authority (checking against
Mozilla's root store). If you bought the certificate from a trusted
authority, you probably just need to install one or more Intermediate
certificates. Contact your certificate provider for assistance doing
this for your server platform. How come I am able to connect to the site without an SSL error if this is the case? | The certificate chain of your certificate is incomplete. Most likely your provider failed to install one or more intermediate certificates when installing the new certificate. Such intermediate certificates are usually provided by the SSL authority to provide support for some older browsers and operating systems. That's the reason that, while it works for you, it doesn't work for some of your clients. A really great utility to check for SSL issues with your website is the SSL Server Test by SSL Labs . As you can see in the link above, not only do you have a chain issue here, but the signature algorithm used to create your cert is a weak one, your webserver is still vulnerable to the POODLE attack, and it still supports RC4, which is also considered insecure ... I don't want to say anything against your webserver provider, but in your position I would mail them to fix all these issues ASAP, or change to another provider ...
"source": [
"https://serverfault.com/questions/769468",
"https://serverfault.com",
"https://serverfault.com/users/217251/"
]
} |
770,130 | We have a private debian repository that was set up years ago by an earlier system admin. Packages were signed by the older key, 7610DDDE (which I had to revoke), as shown here for the root user on the repo server. # gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub 1024D/2D230C5F 2006-01-03 [expired: 2007-02-07]
uid Debian Archive Automatic Signing Key (2006) <[email protected]>
pub 1024D/7610DDDE 2006-03-03 [revoked: 2016-03-31]
uid Archive Maintainer <[email protected]>
pub 4096R/DD219672 2016-04-18
uid Archive Maintainer <[email protected]> All commands below are as the root user.
I modified the repository/conf/distributions file to use the new subkey I created explicitly for signing: Architectures: i386 amd64 source
Codename: unstable
Components: main
...
SignWith: DD219672 But when I use dput to update a package I get Could not find any key matching 'DD219672'!
ERROR: Could not finish exporting 'unstable'!
This means that from outside your repository will still look like before (and
should still work if this old state worked), but the changes intended with this
call will not be visible until you call export directly (via reprepro export) And when I run reprepro export directly I get: # reprepro -V export unstable
Exporting unstable...
generating main/Contents-i386...
generating main/Contents-amd64...
Could not find any key matching 'DD219672'!
ERROR: Could not finish exporting 'unstable'! I Googled and found a couple of old threads that indicated a possible problem with reprepro finding the proper gnupg directory...so I tried this with the same results above: # GNUPGHOME=/root/.gnupg reprepro -V export unstable One thread suggested testing the key by signing a dummy file which seemed to work fine...at least it reported no errors and I ended up with a 576 byte bla.gpg file after it was finished. # touch bla
# gpg -u DD219672 --sign bla The reprepro man page also suggests "If there are problems with signing, you can try gpg --list-secret-keys value to see how gpg could interprete the value. If that command does not list any keys or multiple ones, try to find some other value (like the keyid), that gpg can more easily associate with a unique key." So I checked that as well and got: # gpg --list-secret-keys DD219672
sec 4096R/DD219672 2016-04-18
uid Archive Maintainer <[email protected]> And finally I was able to get in touch with the sys admin that first set up our repros and he suggested trying a key without a passphrase. So I generated a new signing key, DD219672, published it, went through the above steps again but with the same result. Today, after more reading and studying man pages and noting that pgp-agent is automatically started when I run reprepro, I decided to chase that for a while. I added a gpg-agent.conf with debug-level 7
log-file /root/gpg.agent.log
debug-all And I can see in the log that gpg-agent is not finding the keys 2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK Pleased to meet you, process 18903
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- RESET
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- OPTION ttyname=/dev/pts/0
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- OPTION ttytype=xterm-256color
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- GETINFO version
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> D 2.1.11
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- OPTION allow-pinentry-notify
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- OPTION agent-awareness=2.1.0
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> OK
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- AGENT_ID
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> ERR 67109139 Unknown IPC command <GPG Agent>
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- HAVEKEY C2C5C59E5E90830F314ABB66997CCFAACC5DEA2F 416E8A33354912FF4843D52AAAD43FBF206252D9 8CE77065EA6F3818A4975072C8341F32CB7B0EF0
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 -> ERR 67108881 No secret key <GPG Agent>
2016-04-18 15:54:00 gpg-agent[15582] DBG: chan_5 <- [eof] I have so far been unable to figure out where gpg-agent is finding the keys it lists in HAVKEY and how to point it in the right direction to find the new key, DD219672, to sign our updated packages. | I had the same problem, and after much frustration finally tracked down what was going on. The reprepro tool uses gpgme, which is based on gnupg2 . A recent release of that changed how the secret key ring is handled: https://www.gnupg.org/faq/whats-new-in-2.1.html gpg used to keep the public key pairs in two files: pubring.gpg and secring.gpg ... With GnuPG 2.1 this changed ... To ease the
migration to the no-secring method, gpg detects the presence of a secring.gpg and converts the keys on-the-fly to the the key store of
gpg-agent (this is the private-keys-v1.d directory below the GnuPG
home directory ( ~/.gnupg )). This is done only once and an existing secring.gpg is then not anymore touched by gpg. This allows
co-existence of older GnuPG versions with GnuPG 2.1. However, any
change to the private keys using the new gpg will not show up when
using pre-2.1 versions of GnuPG and vice versa. Thus, if you create a new key with gpg, gpg2 won't see it, and vice versa. Quick fix that worked for me: gpg --export-secret-keys | gpg2 --import - And if you need to go the other way, of course: gpg2 --export-secret-keys | gpg --import - Depending on your setup, you may also want/need to add --export-secret-subkeys After doing the above, reprepro worked properly with my new key. | {
"source": [
"https://serverfault.com/questions/770130",
"https://serverfault.com",
"https://serverfault.com/users/182884/"
]
} |
770,138 | Got a guy on the other side of Earth who needs a single partition off of a vhd so he can do his job. The entire vhd is over 490 GB. That one partition is about 80. So I want to split the vhd up into its 3 partitions. A vhd file for every partition. How can I do this? | I had the same problem, and after much frustration finally tracked down what was going on. The reprepro tool uses gpgme, which is based on gnupg2 . A recent release of that changed how the secret key ring is handled: https://www.gnupg.org/faq/whats-new-in-2.1.html gpg used to keep the public key pairs in two files: pubring.gpg and secring.gpg ... With GnuPG 2.1 this changed ... To ease the
migration to the no-secring method, gpg detects the presence of a secring.gpg and converts the keys on-the-fly to the the key store of
gpg-agent (this is the private-keys-v1.d directory below the GnuPG
home directory ( ~/.gnupg )). This is done only once and an existing secring.gpg is then not anymore touched by gpg. This allows
co-existence of older GnuPG versions with GnuPG 2.1. However, any
change to the private keys using the new gpg will not show up when
using pre-2.1 versions of GnuPG and vice versa. Thus, if you create a new key with gpg, gpg2 won't see it, and vice versa. Quick fix that worked for me: gpg --export-secret-keys | gpg2 --import - And if you need to go the other way, of course: gpg2 --export-secret-keys | gpg --import - Depending on your setup, you may also want/need to add --export-secret-subkeys After doing the above, reprepro worked properly with my new key. | {
"source": [
"https://serverfault.com/questions/770138",
"https://serverfault.com",
"https://serverfault.com/users/273285/"
]
} |
770,299 | I have just found out that 192.112.36.4 ( g.root-servers.net. ) neither responds to requests, nor responds to pings. . 3600000 NS G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET. 3600000 A 192.112.36.4 I checked http://www.internic.net/domain/named.root , which is an up2date list of root servers and the IP address is correct. I always was under the impression that those root servers are redundant to the point where it is impossible that there is downtime. According to http://root-servers.org there are worldwide six locations where servers are located, so I would assume I am correct with that assumption. My question is if g.root-servers.net. is in any way different from all the others, or special, and if I am supposed to not get a DNS response from it for any reason? | I always was under the impression that those root servers are redundant to the point where it is impossible that there is downtime. According to http://root-servers.org there are worldwide six locations where servers are located, so I would assume I am correct with that assumption. Even were there not an undocumented outage for G, that's an incorrect assumption: The Anycast IP addresses may represent multiple physical sites, but it is undesirable for abuse events in one region to cascade into failures in others . If a site buckles, that traffic is not going to be shifted into another. Shared network links where abuse directed at a root server is present may very well choke before infrastructure closer to a root server does. Lastly, we have the human element. G was down across the board , but there has been no officially disclosed reason for why at this time. A widespread failure of this type typically points at a deliberate action or a catastrophic failure in the central administration. As the users of Serverfault do not represent the administrators of the root servers, your best bet is to watch for an official statement . In the meantime, the link above is sufficient to demonstrate that there was a total outage for G. The internet continued to operate because one root being down doesn't have a significant impact in the larger picture. Update from the DoD NIC: Regarding yesterday's G-root outage:
Like many outages, this one resulted from a series of unfortunate events.
These unfortunate events were operational errors; steps have been taken to
prevent any reoccurrence, and to provide better service in the future. https://lists.dns-oarc.net/pipermail/dns-operations/2016-April/014765.html | {
"source": [
"https://serverfault.com/questions/770299",
"https://serverfault.com",
"https://serverfault.com/users/188837/"
]
} |
770,302 | We just migrated to Amazon AWS. We currently have an EC2 instance that's working well. It's running Nginx in front and Apache in the back-end. That's running well also. All sites are launched properly and include the Cache-Control header for files that are served from the EC2. The problem is with ALL static files we placed in Amazon S3 that are being accessed through the CloudFront CDN . We can access the files fine (and no issue with CORS), but apparently CloudFront doesn't serve files with a Cache-Control header. We want to leverage browser caching. The way I see it, the EC2 instance doesn't play a role here, as the static files are being served directly by S3+CloudFront; the request does not go to the web server on EC2. I'm at a complete loss. Question:
1) How do I set the Cache-Control in this case?
2) Is it possible to set the Cache-Control? From S3 or CloudFront? Note: I've hit a few pages in Google where you can set the Header in S3 for individual objects. That's really not a productive way to do it specially since in my case we are talking of several objects. Thanks! | I've hit a few pages in Google where you can set the Header in S3 for individual objects. That's really not a productive way to do it specially since in my case we are talking of several objects. Well, "productive" or not, that is how it actually is designed to work. CloudFront does not add Cache-Control: headers. CloudFront passes-through (and also respects, unless otherwise configured) the Cache-Control: headers provided by the origin server, which in this case is S3. To get Cache-Control: headers provided by S3 when an object is fetched, they must be provided when the object is uploaded into S3, or added to the object's metadata by a subsequent put+copy operation, which can be used to internally copy an object into itself in S3, modifying the metadata in the process. This is what the console does, behind the scenes, if you edit object metadata. There is also (in case you are wondering) no global setting in S3 to force all objects in a bucket to return these headers -- it's a per-object attribute. Update: Lambda@Edge is a new feature in CloudFront that allows you to fire triggers against requests and/or responses, between viewer and cache and/or cache and origin, running code written in Node.js against a simple request/response object structure exposed by CloudFront. One of the main applications for this feature is manipulating headers... so while the above is still accurate -- CloudFront itself does not add Cache-Control -- it is now possible for a Lambda function to add them to the response that is returned from CloudFront. This example adds Cache-Control: public, max-age=86400 only if there is no Cache-Control header already present on the response. Using this code in an Origin Response trigger would cause it to fire every time CloudFront fetches an object from the origin, and modify the response before CloudFront caches it. 'use strict';
exports.handler = (event, context, callback) => {
const response = event.Records[0].cf.response;
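// Note: in an origin-response trigger, CloudFront exposes headers keyed by
// lowercase header name, each value being an array of {key, value} objects,
// hence the lowercase 'cache-control' lookup below.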
if(!response.headers['cache-control'])
{
response.headers['cache-control'] = [{
key: 'Cache-Control',
value: 'public, max-age=86400'
}];
}
callback(null, response);
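// Note: Lambda@Edge functions must be created in the us-east-1 region and
// published as a numbered version before they can be associated with a distribution.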
}; Update (2018-06-20): Recently, I submitted a feature request to the CloudFront team to allow configuration of static origin response headers as origin attributes, similar to the way static request headers can be added, now... but with a twist, allowing each header to be configured to be added conditionally (only if the origin didn't provide that header in the response) or unconditionally (adding the header and overwriting the header from the origin, if present). With feature requests, you typically don't receive any confirmation of whether they are actually considering implementing the new feature... or even whether they might have already been working on it... it's just announced when they are done. So, I have no idea if these will be implemented. There is an argument to be made that since this capability is already available via Lambda@Edge, there's no need for it in the base functionality... but my counter-argument is that the base functionality is not feature-complete without the ability to do simple, static response header manipulation, and that if this is the only reason a trigger is needed, then requiring Lambda triggers is an unnecessary cost, financially and in added latency (even though neither is necessarily an outlandish cost). | {
"source": [
"https://serverfault.com/questions/770302",
"https://serverfault.com",
"https://serverfault.com/users/334412/"
]
} |
770,686 | I'm trying to deploy some Windows 10 machines at work, and need to remove or disable the pre-installed apps. For some reason, management feels that the Xbox app and Candy Crush Soda Saga (etc.) shouldn't be installed on a corporate workstation. We've tried uninstalling them after the fact, but they show up again for any new users logging in, which isn't acceptable. How do we really get rid of these apps from our corporate Windows 10 image? | The easiest method I've found to actually control a Windows 10 image is to edit it with the Deployment Image Servicing and Management (DISM.exe) tool. In short, you need to: Locate the Windows wim for the image you're deploying. On a Windows 10 installation ISO, for example, the file is: \sources\install.wim Create a directory to temporarily mount the wim in. Mount the wim. Make your changes. For the purposes of removing the pre-installed Windows 10 apps, there are actually three different types we need to deal with here - one classic executable, a bunch of Metro/UWP/Appx applications , and a bunch of installer shortcuts that Windows 10 forces onto the Start Menu. Seems worth pointing out here that you can get a list of appx packages from the mounted WIM with DISM , if you're not sure what changes you wish to make. Commit the changes and unmount the WIM. In more detail: Locate the Windows wim. I'll be downloading the latest 64 bit, Enterprise version of Windows 10 (SW_DVD5_WIN_ENT_10_1511.1_64BIT_English_MLF_X20-93758.ISO) from Micorosoft's volume licensing portal, and mounting the ISO to D: . (Be sure to mount it with read-write access, of course!) This puts the wim file I want to edit at: D:\sources\install.wim . I'll assign that to a PowerShell variable. $wimfile = "D:\sources\install.wim" Create a directory to temporarily mount the wim in. I'll use C:\Temp\W10entDISM , and assign that to a PowerShell variable as well. $mountdir = "C:\Temp\W10entDISM" Mount the wim with DISM . dism.exe /Mount-Image /ImageFile:$wimfile /Index:1 /MountDir:$mountdir Make your changes. For the purposes of removing the pre-installed Windows 10 apps, there are actually three different types we need to deal with here - one classic executable, a bunch of Metro/UWP/Appx applications, and a bunch of installer shortcuts that Windows 10 forces onto the Start Menu. The classic executable, OneDrive Installer Windows 10 has an executable, OneDriveSetup.exe and registry entries to run it automatically, which I'll be eliminating, using the File System Security PowerShell Module and command line registry editor, reg.exe . Of course, this can be done manually or with other command line tools, if preferred. takeown /F $mountdir\Windows\SysWOW64\OneDriveSetup.exe /A Add-NTFSAccess -Path "$($mountdir)\Windows\SysWOW64\onedrivesetup.exe" -Account "BUILTIN\Administrators" -AccessRights FullControl Remove-Item $mountdir\Windows\SysWOW64\onedrivesetup.exe reg load HKEY_LOCAL_MACHINE\WIM $mountdir\Users\Default\ntuser.dat reg delete "HKEY_LOCAL_MACHINE\WIM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v OneDriveSetup /f The installer shortcuts that Windows 10 creates on the Start Menu. These are controlled by a registry key called "CloudContent", which we'll need to create and add a value to disable, when editing an install disc. If dealing with an existing install, the key would already be created. 
reg add HKEY_LOCAL_MACHINE\WIM\SOFTWARE\Policies\Microsoft\Windows\CloudContent reg add HKEY_LOCAL_MACHINE\WIM\SOFTWARE\Policies\Microsoft\Windows\CloudContent /v DisableWindowsConsumerFeatures /t REG_DWORD /d 1 /f reg unload HKEY_LOCAL_MACHINE\WIM The Metro/UWP/Appx applications. We can use the Get-AppxProvisionedPackage cmdlet to view and decide which Appx applications to remove. ( Get-AppxProvisionedPackage -Path $mountdir ) Importantly, not all the pre-installed Appx apps can or should be removed. As of the time of this writing, it is recommended to not uninstall the AppConnector, ConnectivityStore, and WindowsStore (their use can be disabled in other ways, if desired, but actually removing them has been reported to break things and create undesired consequences). Also worth noting that in Windows 10, the Windows Calculator is an Appx package. I've elected to leave those three apps, the Windows Calculator, and the Microsoft Solitaire Collection installed, and remove everything else, so I end up running: dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingNews_4.6.169.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingSports_4.6.169.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingWeather_4.6.169.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingFinance_4.6.169.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.CommsPhone_1.10.15000.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Messaging_1.10.22012.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.MicrosoftOfficeHub_2015.6306.23501.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Office.OneNote_2015.6131.10051.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.SkypeApp_3.2.1.0_neutral_~_kzf8qxf38zg5c dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Windows.Photos_2015.1001.17200.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsCamera_2015.1071.40.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsPhone_2015.1009.10.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsAlarms_2015.1012.20.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:microsoft.windowscommunicationsapps_2015.6308.42271.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsMaps_4.1509.50911.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsSoundRecorder_2015.1012.110.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.XboxApp_2015.930.526.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.ZuneMusic_2019.6.13251.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.ZuneVideo_2019.6.13251.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir 
/Remove-ProvisionedAppxPackage /PackageName:Microsoft.Office.Sway_2015.6216.20251.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.People_2015.1012.106.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Getstarted_2.3.7.0_neutral_~_8wekyb3d8bbwe dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.3DBuilder_10.9.50.0_neutral_~_8wekyb3d8bbwe Commit the changes and unmount the WIM. dism.exe /Unmount-Image /MountDir:$mountdir /commit Just teh codez: $wimfile = "D:\sources\install.wim"
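# Adjust these two paths to match your own ISO mount point and an empty scratch directory with enough free space.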
$mountdir = "C:\Temp\W10entDISM"
dism.exe /Mount-Image /ImageFile:$wimfile /Index:1 /MountDir:$mountdir
# Remove Appx Packages
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingNews_4.6.169.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingSports_4.6.169.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingWeather_4.6.169.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingFinance_4.6.169.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.CommsPhone_1.10.15000.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Messaging_1.10.22012.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.MicrosoftOfficeHub_2015.6306.23501.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Office.OneNote_2015.6131.10051.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.SkypeApp_3.2.1.0_neutral_~_kzf8qxf38zg5c
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Windows.Photos_2015.1001.17200.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsCamera_2015.1071.40.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsPhone_2015.1009.10.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsAlarms_2015.1012.20.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:microsoft.windowscommunicationsapps_2015.6308.42271.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsMaps_4.1509.50911.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.WindowsSoundRecorder_2015.1012.110.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.XboxApp_2015.930.526.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.ZuneMusic_2019.6.13251.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.ZuneVideo_2019.6.13251.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Office.Sway_2015.6216.20251.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.People_2015.1012.106.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.Getstarted_2.3.7.0_neutral_~_8wekyb3d8bbwe
dism.exe /Image:$mountdir /Remove-ProvisionedAppxPackage /PackageName:Microsoft.3DBuilder_10.9.50.0_neutral_~_8wekyb3d8bbwe
# Remove OneDrive Setup
takeown /F $mountdir\Windows\SysWOW64\OneDriveSetup.exe /A
Add-NTFSAccess -Path "$($mountdir)\Windows\SysWOW64\onedrivesetup.exe" -Account "BUILTIN\Administrators" -AccessRights FullControl
Remove-Item $mountdir\Windows\SysWOW64\onedrivesetup.exe
reg load HKEY_LOCAL_MACHINE\WIM $mountdir\Users\Default\ntuser.dat
reg delete "HKEY_LOCAL_MACHINE\WIM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v OneDriveSetup /f
# Remove Cloud Content
reg add HKEY_LOCAL_MACHINE\WIM\SOFTWARE\Policies\Microsoft\Windows\CloudContent
reg add HKEY_LOCAL_MACHINE\WIM\SOFTWARE\Policies\Microsoft\Windows\CloudContent /v DisableWindowsConsumerFeatures /t REG_DWORD /d 1 /f
# Unload, Unmount, Commit
reg unload HKEY_LOCAL_MACHINE\WIM
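# The hive loaded above must be unloaded before the unmount/commit below, otherwise DISM can fail because ntuser.dat is still in use.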
dism.exe /Unmount-Image /MountDir:$mountdir /commit You should now have an ISO and/or wim file that you can use to install Windows 10 without the added crap, or feed into your configuration/deployment management system. A screenclip of the Start Menu from a resulting OS deployment: | {
"source": [
"https://serverfault.com/questions/770686",
"https://serverfault.com",
"https://serverfault.com/users/118258/"
]
} |
770,896 | I have a really weird problem with my DNS. My domain name ( strugee.net ) is unresolvable from some networks, and resolvable from others. For example, on my home network (same network the server's on): % dig strugee.net
; <<>> DiG 9.10.3-P4 <<>> strugee.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10086
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;strugee.net. IN A
;; ANSWER SECTION:
strugee.net. 1800 IN A 216.160.72.225
;; Query time: 186 msec
;; SERVER: 205.171.3.65#53(205.171.3.65)
;; WHEN: Sat Apr 16 15:42:36 PDT 2016
;; MSG SIZE rcvd: 56 However, if I log in to a server I have on Digital Ocean, the domain fails to resolve: % dig strugee.net
; <<>> DiG 9.9.5-9+deb8u3-Debian <<>> strugee.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 58551
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;strugee.net. IN A
;; Query time: 110 msec
;; SERVER: 2001:4860:4860::8844#53(2001:4860:4860::8844)
;; WHEN: Sat Apr 16 18:44:25 EDT 2016
;; MSG SIZE rcvd: 40 But , going directly to the authoritative nameservers works just fine: % dig @dns1.registrar-servers.com strugee.net
; <<>> DiG 9.9.5-9+deb8u3-Debian <<>> @dns1.registrar-servers.com strugee.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30856
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;strugee.net. IN A
;; ANSWER SECTION:
strugee.net. 1800 IN A 216.160.72.225
;; AUTHORITY SECTION:
strugee.net. 1800 IN NS dns3.registrar-servers.com.
strugee.net. 1800 IN NS dns4.registrar-servers.com.
strugee.net. 1800 IN NS dns2.registrar-servers.com.
strugee.net. 1800 IN NS dns1.registrar-servers.com.
strugee.net. 1800 IN NS dns5.registrar-servers.com.
;; Query time: 3 msec
;; SERVER: 216.87.155.33#53(216.87.155.33)
;; WHEN: Sat Apr 16 18:46:36 EDT 2016
;; MSG SIZE rcvd: 172 It's pretty clear that there's a problem with some large network somewhere that's failing to resolve my domain, but I can't seem to figure out where. I skimmed the dig manpage for options that might help, but didn't find anything particularly useful. I'm on Namecheap both as a domain registrar as well as DNS hosting. I have the DNSSEC option turned on. I haven't made any changes to my DNS settings recently. How can I debug this problem and find the offending nameserver? | How can I debug this problem and find the offending nameserver? daxd5 offered some good starting advice, but the only real answer here is that you need to know how to think like a recursive DNS server. Since there are numerous misconfigurations at the authoritative layer that can result in an inconsistent SERVFAIL , you need a DNS professional or online validation tools. Anyway, the goal isn't to cop out of helping you, but I wanted to make sure that you understand that there is no conclusive answer to that question. In your particular case, I noticed that strugee.net appears to be a zone signed with DNSSEC. This is evident from the presence of the DS and RRSIG records in the referral chain: # dig +trace +additional strugee.net
<snip>
strugee.net. 172800 IN NS dns2.registrar-servers.com.
strugee.net. 172800 IN NS dns1.registrar-servers.com.
strugee.net. 172800 IN NS dns3.registrar-servers.com.
strugee.net. 172800 IN NS dns4.registrar-servers.com.
strugee.net. 172800 IN NS dns5.registrar-servers.com.
strugee.net. 86400 IN DS 16517 8 1 B08CDBF73B89CCEB2FD3280087D880F062A454C2
strugee.net. 86400 IN RRSIG DS 8 2 86400 20160423051619 20160416040619 50762 net. w76PbsjxgmKAIzJmklqKN2rofq1e+TfzorN+LBQVO4+1Qs9Gadu1OrPf XXgt/AmelameSMkEOQTVqzriGSB21azTjY/lLXBa553C7fSgNNaEXVaZ xyQ1W/K5OALXzkDLmjcljyEt4GLfcA+M3VsQyuWI4tJOng184rGuVvJO RuI=
dns2.registrar-servers.com. 172800 IN A 216.87.152.33
dns1.registrar-servers.com. 172800 IN A 216.87.155.33
dns3.registrar-servers.com. 172800 IN A 216.87.155.33
dns4.registrar-servers.com. 172800 IN A 216.87.152.33
dns5.registrar-servers.com. 172800 IN A 216.87.155.33
;; Received 435 bytes from 192.41.162.30#53(l.gtld-servers.net) in 30 ms Before we go any further, we need to check whether or not the signing is valid. DNSViz is a tool frequently used for this purpose, and it confirms that there are indeed problems . The angry red in the picture is suggesting that you have a problem, but rather than mousing over everything we can just expand Notices on the left sidebar: RRSIG strugee.net/A alg 8, id 10636: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
RRSIG strugee.net/DNSKEY alg 8, id 16517: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
RRSIG strugee.net/DNSKEY alg 8, id 16517: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
RRSIG strugee.net/MX alg 8, id 10636: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
RRSIG strugee.net/NS alg 8, id 10636: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
RRSIG strugee.net/SOA alg 8, id 10636: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
RRSIG strugee.net/TXT alg 8, id 10636: The Signature Expiration field of the RRSIG RR (2016-04-14 00:00:00+00:00) is 2 days in the past.
net to strugee.net: No valid RRSIGs made by a key corresponding to a DS RR were found covering the DNSKEY RRset, resulting in no secure entry point (SEP) into the zone. (216.87.152.33, 216.87.155.33, UDP_0_EDNS0_32768_4096) The problem is clear: the signature on your zone has expired and the keys need to be refreshed. The reason why you are seeing inconsistent results is because not all recursive servers have DNSSEC validation enabled. Ones which validate are dropping your domain, and for ones which do not it is business as usual. Edit: Comcast's DNS infrastructure is known to implement DNSSEC validation, and as one of their customers I can confirm that I'm seeing a SERVFAIL as well. $ dig @75.75.75.75 strugee.net | grep status
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 2011 | {
"source": [
"https://serverfault.com/questions/770896",
"https://serverfault.com",
"https://serverfault.com/users/167999/"
]
} |
770,968 | Has anyone tried to run an IPv6-only SMTP engine?
Pretty much everybody with any sense has IPv6 configured for major front-end servers.
I was curious if anyone had tried to run an IPv6-only MTA and received any connection errors. Is IPv6-only a viable solution yet?
Can I expect a few lingering connection issues?
Or did a magic fairy come down on the internet and made IPv6-to-IPv4 on port 25 work like magic on a direct connection? | Short answer: it will work, technically, but you will have lots of undeliverable mail. Long answer: Take your SMTP logs. Sed out all the domain names you send mail to. Check if they have IPv6 DNS and MX. Once you get 100% (you won't, not anytime this decade), then you can try if the IPv6 IPs actually work. I don't have any interesting production logs at hand (those I do have don't have enough domains to be of interest), but I took a list of domains offering free e-mail services from https://gist.github.com/tbrianjones/5992856 Out of the 536 first, 173 did not seem to have any MX resolving to an IP, 7 had MXs resolving to IPv4 and IPv6 MX addresses, and the remaining 356 had only IPv4 MXs. Out of domains having MXs, that is less than two percent OK, even before actually trying the IPv6 address to see if it works. Even admitting that the domains in the list are not in any sense the majority of Internet e-mail domains, I do not think that is enough for running a mail server that you actually expect to use. EDIT: since the 536 alphabetically first of a random list of over 3600 free e-mail providers is not very representative, I've checked a few big-name domains, and here are those that did not have IPv6 MXs (remember IPv6-accessible DNS would also be needed): microsoft.com / hotmail.com / outlook.com mail.com gmx.net icloud.com / mac.com comcast.com inbox.com zoho.com aol.com orange.fr twitter.com Do you want to register a domain? godaddy.com networksolutions.com registrar.com Or . . . do you want mail from this site? stackexchange.com (Of course) gmail.com and google.com have IPv6, and so does Facebook.com. For those who are interested, I used an ancestor to this line of bash script: for i in $(cat domains.txt) ; do
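# For each domain: print the domain name, then resolve an A record for every MX
# host and an AAAA record for the domain itself, reducing the answers to v4/v6
# markers (combined into v4+v6 or none further down the pipeline).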
echo $(
echo $i
echo \;
for j in $(dig +short mx $i) ; do
dig +short a $j
dig +short aaaa $i
done \
| sed -r -e 's/[^;:\.]//g' \
-e 's/^:+$/v6/' \
-e 's/^\.+$/v4/' \
| sort -u
)
done \
| sed 's/ v4 v6/ v4+v6/' \
| sed -r 's/^([^;]+); *([^;]*)$/\2;\1/' \
| sed 's/^;/none;/' \
| sort '-t;' -k 1,1 \
| tr ';' '\t' It's certainly improvable, but most of the bizarre things are to make the output prettier. | {
"source": [
"https://serverfault.com/questions/770968",
"https://serverfault.com",
"https://serverfault.com/users/349862/"
]
} |
771,598 | This is based upon this hoax question here. The problem described is having a bash script which contains something to the effect of: rm -rf {pattern1}/{pattern2} ...which if both patterns include one or more empty elements will expand to at least one instance of rm -rf / , assuming that the original command was transcribed correctly and the OP was doing brace expansion rather than parameter expansion . In the OP's explanation of the hoax , he states: The command [...] is harmless but it seems
that almost no one has noticed. The Ansible tool prevents these errors, [...] but [...] no one seemed to
know that, otherwise they would know that what I have described could
not happen. So assuming you have a shell script that emits an rm -rf / command through either brace expansion or parameter expansion, is it true that using Ansible will prevent that command from being executed, and if so, how does it do this? Is executing rm -rf / with root privileges really "harmless" so long as you're using Ansible to do it? | I have virtual machines, let's blow a bunch of them up! For science. [root@diaf ~]# ansible --version
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides First attempt: [root@diaf ~]# cat killme.yml
---
- hosts: localhost
gather_facts: False
tasks:
- name: Die in a fire
command: "rm -rf {x}/{y}"
[root@diaf ~]# ansible-playbook -l localhost -vvv killme.yml
Using /etc/ansible/ansible.cfg as config file
1 plays in killme.yml
PLAY ***************************************************************************
TASK [Die in a fire] ***********************************************************
task path: /root/killme.yml:5
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1461128819.56-86533871334374 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1461128819.56-86533871334374 `" )'
localhost PUT /tmp/tmprogfhZ TO /root/.ansible/tmp/ansible-tmp-1461128819.56-86533871334374/command
localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1461128819.56-86533871334374/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1461128819.56-86533871334374/" > /dev/null 2>&1'
changed: [localhost] => {"changed": true, "cmd": ["rm", "-rf", "{x}/{y}"], "delta": "0:00:00.001844", "end": "2016-04-20 05:06:59.601868", "invocation": {"module_args": {"_raw_params": "rm -rf {x}/{y}", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-04-20 05:06:59.600024", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": ["Consider using file module with state=absent rather than running rm"]}
[WARNING]: Consider using file module with state=absent rather than running rm
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 OK, so command just passes the literals along, and nothing happens. How about our favorite safety bypass, raw ? [root@diaf ~]# cat killme.yml
---
- hosts: localhost
gather_facts: False
tasks:
- name: Die in a fire
raw: "rm -rf {x}/{y}"
[root@diaf ~]# ansible-playbook -l localhost -vvv killme.yml
Using /etc/ansible/ansible.cfg as config file
1 plays in killme.yml
PLAY ***************************************************************************
TASK [Die in a fire] ***********************************************************
task path: /root/killme.yml:5
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC rm -rf {x}/{y}
ok: [localhost] => {"changed": false, "invocation": {"module_args": {"_raw_params": "rm -rf {x}/{y}"}, "module_name": "raw"}, "rc": 0, "stderr": "", "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 No go again! How hard can it possibly be to delete all your files? Oh, but what if they were undefined variables or something? [root@diaf ~]# cat killme.yml
---
- hosts: localhost
gather_facts: False
tasks:
- name: Die in a fire
command: "rm -rf {{x}}/{{y}}"
[root@diaf ~]# ansible-playbook -l localhost -vvv killme.yml
Using /etc/ansible/ansible.cfg as config file
1 plays in killme.yml
PLAY ***************************************************************************
TASK [Die in a fire] ***********************************************************
task path: /root/killme.yml:5
fatal: [localhost]: FAILED! => {"failed": true, "msg": "'x' is undefined"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @killme.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 Well, that didn't work. But what if the variables are defined, but empty? [root@diaf ~]# cat killme.yml
---
- hosts: localhost
gather_facts: False
tasks:
- name: Die in a fire
command: "rm -rf {{x}}/{{y}}"
vars:
x: ""
y: ""
[root@diaf ~]# ansible-playbook -l localhost -vvv killme.yml
Using /etc/ansible/ansible.cfg as config file
1 plays in killme.yml
PLAY ***************************************************************************
TASK [Die in a fire] ***********************************************************
task path: /root/killme.yml:5
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1461129132.63-211170666238105 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1461129132.63-211170666238105 `" )'
localhost PUT /tmp/tmp78m3WM TO /root/.ansible/tmp/ansible-tmp-1461129132.63-211170666238105/command
localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1461129132.63-211170666238105/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1461129132.63-211170666238105/" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["rm", "-rf", "/"], "delta": "0:00:00.001740", "end": "2016-04-20 05:12:12.668616", "failed": true, "invocation": {"module_args": {"_raw_params": "rm -rf /", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 1, "start": "2016-04-20 05:12:12.666876", "stderr": "rm: it is dangerous to operate recursively on ‘/’\nrm: use --no-preserve-root to override this failsafe", "stdout": "", "stdout_lines": [], "warnings": ["Consider using file module with state=absent rather than running rm"]}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @killme.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 Finally, some progress! But it still complains that I didn't use --no-preserve-root . Of course, it also warns me that I should try using the file module and state=absent . Let's see if that works. [root@diaf ~]# cat killme.yml
---
- hosts: localhost
gather_facts: False
tasks:
- name: Die in a fire
file: path="{{x}}/{{y}}" state=absent
vars:
x: ""
y: ""
[root@diaf ~]# ansible-playbook -l localhost -vvv killme.yml
Using /etc/ansible/ansible.cfg as config file
1 plays in killme.yml
PLAY ***************************************************************************
TASK [Die in a fire] ***********************************************************
task path: /root/killme.yml:5
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1461129394.62-191828952911388 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1461129394.62-191828952911388 `" )'
localhost PUT /tmp/tmpUqLzyd TO /root/.ansible/tmp/ansible-tmp-1461129394.62-191828952911388/file
localhost EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1461129394.62-191828952911388/file; rm -rf "/root/.ansible/tmp/ansible-tmp-1461129394.62-191828952911388/" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"backup": null, "content": null, "delimiter": null, "diff_peek": null, "directory_mode": null, "follow": false, "force": false, "group": null, "mode": null, "original_basename": null, "owner": null, "path": "/", "recurse": false, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "state": "absent", "validate": null}, "module_name": "file"}, "msg": "rmtree failed: [Errno 16] Device or resource busy: '/boot'"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @killme.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 Good news, everyone! It started trying to delete all my files! But unfortunately it ran into an error. I'll leave fixing that and getting the playbook to destroy everything using the file module as an exercise to the reader. DO NOT run any playbooks you see beyond this point! You'll see why in a moment. Finally, for the coup de grâce ... [root@diaf ~]# cat killme.yml
---
- hosts: localhost
gather_facts: False
tasks:
- name: Die in a fire
raw: "rm -rf {{x}}/{{y}}"
vars:
x: ""
y: "*"
[root@diaf ~]# ansible-playbook -l localhost -vvv killme.yml
Using /etc/ansible/ansible.cfg as config file
1 plays in killme.yml
PLAY ***************************************************************************
TASK [Die in a fire] ***********************************************************
task path: /root/killme.yml:5
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC rm -rf /*
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ansible/executor/process/result.py", line 102, in run
File "/usr/lib/python2.7/site-packages/ansible/executor/process/result.py", line 76, in _read_worker_result
File "/usr/lib64/python2.7/multiprocessing/queues.py", line 117, in get
ImportError: No module named task_result This VM is an ex-parrot ! Interestingly, the above failed to do anything with command instead of raw . It just printed the same warning about using file with state=absent . I'm going to say that it appears that if you aren't using raw that there is some protection from rm gone amok. You should not rely on this, though. I took a quick look through Ansible's code, and while I found the warning, I did not find anything that would actually suppress running the rm command. | {
"source": [
"https://serverfault.com/questions/771598",
"https://serverfault.com",
"https://serverfault.com/users/84838/"
]
} |
771,921 | The CentOS 7 file system is XFS, and resize2fs doesn't work. I need to shrink /home to 400G and add 100G of space to / . What should I do? # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 50G 341M 100% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 84K 7.8G 1% /dev/shm
tmpfs 7.8G 778M 7.0G 10% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 497M 241M 257M 49% /boot
tmpfs 1.6G 16K 1.6G 1% /run/user/42
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/mapper/centos-home 500G 20G 480G 4% /home The output of lvs, vgs and pvs are: [root@localhost]~# lvs -v
Using logical volume(s) on command line.
LV VG #Seg Attr LSize Maj Min KMaj KMin Pool Origin Data% Meta% Move Cpy%Sync Log Convert LV UUID LProfile
home centos 1 -wi-ao---- 499.38g -1 -1 253 2 4I53D9-7VSm-HN9H-QsSp-FvFU-5R9D-y5VwsN
root centos 1 -wi-ao---- 50.00g -1 -1 253 0 LGRoEL-0EHz-G135-p6vx-Lt2s-RvI5-qdT9Sm
swap centos 1 -wi-ao---- 7.81g -1 -1 253 1 UYB5xP-cEyV-lWvn-blIq-8s13-9kVB-ykjIWI
[root@localhost]~# vgs -v
Using volume group(s) on command line.
VG Attr Ext #PV #LV #SN VSize VFree VG UUID VProfile
centos wz--n- 4.00m 1 3 0 557.26g 64.00m Gd5c08-ujdQ-fsix-o7z6-Wfsv-C0uW-XzDois
[root@localhost]~# pvs -v
Using physical volume(s) on command line.
Found same device /dev/sda2 with same pvid TCmreQr93apETNoTl8bMc54l57FZ5hut
PV VG Fmt Attr PSize PFree DevSize PV UUID
/dev/sda2 centos lvm2 a-- 557.26g 64.00m 557.26g TCmreQ-r93a-pETN-oTl8-bMc5-4l57-FZ5hut
[root@localhost]~# | As others have pointed out, an XFS filesystem cannot be shrunk. So your best bet is to back up /home, remove and recreate its volume at a smaller size, and give the rest to your root (/) volume, just as Koen van der Rijt outlined in his post. • back up the contents of /home tar -czvf /root/home.tgz -C /home . • test the backup tar -tvf /root/home.tgz • unmount home umount /dev/mapper/centos-home • remove the home logical volume lvremove /dev/mapper/centos-home • recreate a new 400GB logical volume for /home, format and mount it lvcreate -L 400GB -n home centos
mkfs.xfs /dev/centos/home
mount /dev/mapper/centos-home • extend your root volume with ALL of the remaining space and resize (-r) the file system while doing so lvextend -r -l +100%FREE /dev/mapper/centos-root • restore your backup tar -xzvf /root/home.tgz -C /home • check /etc/fstab for any mapping of the /home volume. If it is mounted by UUID you should update the UUID portion (since we created a new volume, the UUID has changed). That's it. Hope this helps. Finally, run this to sync the changes: dracut --regenerate-all --force | {
"source": [
"https://serverfault.com/questions/771921",
"https://serverfault.com",
"https://serverfault.com/users/303449/"
]
} |
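A small follow-up sketch for the fstab step in the answer above; the device and mount point are the ones from the question, and the exact editing of /etc/fstab is left to you:
# Show the UUID of the freshly created /home filesystem
blkid /dev/mapper/centos-home
# See what /etc/fstab currently uses for /home
grep '/home' /etc/fstab
# If fstab mounts /home by UUID, replace the old UUID with the new one, then
# verify that everything in fstab still mounts cleanly:
mount -a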
772,227 | I'm building a Docker image for my Symfony app and I need to give the Apache server permission to write to the cache and log folders. #Dockerfile
FROM php:7-apache
RUN apt-get update \
&& apt-get install -y libicu-dev freetds-common freetds-bin unixodbc \
&& docker-php-ext-install intl mbstring \
&& a2enmod rewrite
COPY app/php.ini /usr/local/etc/php/
COPY app/apache2.conf /etc/apache2/apache2.conf
COPY ./ /var/www/html
RUN find /var/www/html/ -type d -exec chmod 755 {} \;
RUN find /var/www/html/ -type f -exec chmod 644 {} \;
RUN chmod -R 777 /var/www/html/app/cache /var/www/html/app/logs When I build this image with docker build -t myname/symfony_apps:latest . and run the container with docker run -p 8080:80 myname/symfony_apps:latest .
The Apache log is flooded with permission denied errors. The strange thing is that I've checked with ls -a and the permissions are fine, and when I run chmod from the container's bash, the Apache permission issues are gone and the app works well. The situation: running the chmod commands from the Dockerfile, permissions are changed but Apache still complains about permission denied; running the same chmod commands with bash inside the container, permissions are changed and my app runs. Any ideas? Am I missing something? Maybe I should add a root user somewhere in the Dockerfile? | I had the same issue, and it seems that there is some bug in docker or overlay2 if directory content is created in one layer and its permissions are changed in another. As a workaround you could copy the sources to a temporary directory: COPY . /src And then move it to /var/www/html and set up permissions (in one RUN command): RUN rm -rf /var/www/html && mv /src /var/www/html &&\
find /var/www/html/ -type d -exec chmod 755 {} \; &&\
find /var/www/html/ -type f -exec chmod 644 {} \; &&\
chmod -R 777 /var/www/html/app/cache /var/www/html/app/logs Also I created GitHub issue . | {
"source": [
"https://serverfault.com/questions/772227",
"https://serverfault.com",
"https://serverfault.com/users/350662/"
]
} |
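To confirm that the workaround from the answer above actually took effect in the final image, a quick check along these lines can help; the image tag and paths are the ones used in the question and can be swapped for your own:
docker build -t myname/symfony_apps:latest .
# Inspect the permissions as they exist inside the built image, without starting Apache:
docker run --rm myname/symfony_apps:latest ls -ld /var/www/html/app/cache /var/www/html/app/logs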
773,492 | PayPal is making upgrades to the SSL certificates on all web and API endpoints. Due to security concerns over advances in computing power, the industry is phasing out 1024-bit SSL certificates (G2) in favor of 2048-bit certificates (G5), and is moving towards a higher strength data encryption algorithm to secure data transmission, SHA-2 (256) over the older SHA-1 algorithm standard. However, we're still using systems that are not compatible with the upgrades, and updating our servers is not an option. So our idea is to proxy (via nginx) the PayPal endpoint, so that PayPal sees the nginx server (which supports the upgrade) hitting that endpoint instead of our old servers. Is this possible? If not, what are the possible options to work around this upgrade? Here is a sample config of the nginx proxy: server {
listen 80;
server_name api.sandbox.paypal.com;
access_log /var/log/nginx/api.sandbox.paypal.com.access.log;
error_log /var/log/nginx/api.sandbox.paypal.com.error.log;
location /nvp {
proxy_pass https://api.sandbox.paypal.com/nvp;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
} | This is less of an upgrade and more of an opportunity to rebuild and refactor. How long have these RHEL4 systems been in production? 2006? 2007? Did your organization ignore the Red Hat lifecycle schedule and warnings about end-of-support periods? Does that mean all of these systems have been running unpatched since the last package releases? Can you give some reason why you're still on RHEL4? That really went end-of-life in 2012. In that period of time, there's been opportunity to simply rebuild. For this particular issue, I think the best approach is to gauge the effort to rebuild onto a more current OS. EL6 or EL7 would be good candidates and would fall under active support. | {
"source": [
"https://serverfault.com/questions/773492",
"https://serverfault.com",
"https://serverfault.com/users/351833/"
]
} |
773,524 | I have created an OpenVPN Access Server on AWS via CloudFormation - all working as expected except bootstrapping. In the user data I have entered some commands to make changes, e.g. enabling Google Authenticator. In addition to this I would also like to create a group, then create users and assign them to this group. Once those are done, I want to assign a role to the group so that it does split tunnelling, i.e. not all traffic goes through the tunnel; I only want to redirect a couple of IPs for this group. I am stuck at the moment on creating groups - I have found the commands at https://evanhoffman.com/2014/07/22/openvpn-cli-cheat-sheet/ and entered them in the bootstrap, but I can't find anything for creating a group or creating an access control rule which allows access to network services on the internet. So does anyone have any experience with the OpenVPN Access Server CLI? How do I create groups and assign users to them for split tunneling? | This is less of an upgrade and more of an opportunity to rebuild and refactor. How long have these RHEL4 systems been in production? 2006? 2007? Did your organization ignore the Red Hat lifecycle schedule and warnings about end-of-support periods? Does that mean all of these systems have been running unpatched since the last package releases? Can you give some reason why you're still on RHEL4? That really went end-of-life in 2012. In that period of time, there's been opportunity to simply rebuild. For this particular issue, I think the best approach is to gauge the effort to rebuild onto a more current OS. EL6 or EL7 would be good candidates and would fall under active support. | {
"source": [
"https://serverfault.com/questions/773524",
"https://serverfault.com",
"https://serverfault.com/users/351873/"
]
} |
773,532 | I'm a student doing an internship at the moment. I mainly develop websites using VirtualBox and Vagrant. I was wondering if I could make the websites I build accessible to everyone on my network. What is the best way to do this? I'm on a Mac, by the way. | This is less of an upgrade and more of an opportunity to rebuild and refactor. How long have these RHEL4 systems been in production? 2006? 2007? Did your organization ignore the Red Hat lifecycle schedule and warnings about end-of-support periods? Does that mean all of these systems have been running unpatched since the last package releases? Can you give some reason why you're still on RHEL4? That really went end-of-life in 2012. In that period of time, there's been opportunity to simply rebuild. For this particular issue, I think the best approach is to gauge the effort to rebuild onto a more current OS. EL6 or EL7 would be good candidates and would fall under active support. | {
"source": [
"https://serverfault.com/questions/773532",
"https://serverfault.com",
"https://serverfault.com/users/348332/"
]
} |
774,388 | I am migrating our app from a cloud server at Rackspace to a dedicated server. I want to bring the application down for ~5 minutes to copy the data from the cloud server to the dedicated server, so I don't want requests going to the old server after I have copied the data. I want to point our DNS record at the new server, but the TTL was set to 24 hours. I have changed it to 300 seconds. Do I need to wait the 24 hours before updating the IP that the domain points to / copying the data? | Anyone who has a cached copy of the domain record will not bother updating it for 24 hours, so yes: if your intent is to have at most a 5-minute window of unavailability, you should wait until all of the outstanding caches have picked up the record with the new 5-minute TTL. | {
"source": [
"https://serverfault.com/questions/774388",
"https://serverfault.com",
"https://serverfault.com/users/172484/"
]
} |
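One hedged way to watch the cutover described in the answer above is to query the record's remaining TTL from a resolver; the domain below is a placeholder:
# The second column is the TTL the resolver still has cached, in seconds.
dig +noall +answer example.com A
# Ask a specific public resolver instead of the local one:
dig @8.8.8.8 +noall +answer example.com A
# Once the old 24-hour entries have aged out everywhere you care about,
# the reported TTLs will be at or below the new 300-second value.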
774,399 | I would like to redirect/forward a URL on my testdomain. The URL is for example: http://www.mydomaintest.com/feed and I would need that to forward to http://www.mydomaintest.com/feed/stories . I have tried the GUI URL Rewrite in the IIS manager but the redirect is not working. This sort of rewrite is very easily done on Linux machines but is giving me serious trouble on IIS. I've also tried modifying the web.config file with no success. I've attempted the method, rewrite rules and redirect rules and nothing is working. Here are all the tutorials I've followed before asking this question: http://www.iis.net/learn/extensions/url-rewrite-module/creating-rewrite-rules-for-the-url-rewrite-module http://knowledge.freshpromo.ca/seo-tools/301-redirect.php https://stackoverflow.com/questions/10399932/setting-up-redirect-in-web-config-file https://www.iis.net/configreference/system.webserver/httpredirect Thank you | Anyone who has a cached copy of the domain record will not bother updating it for 24 hours, so yes if your intent is to have at most a 5 minute window of unavailability you should wait until all of the outstanding caches have updated to live no more than 5 minutes. | {
"source": [
"https://serverfault.com/questions/774399",
"https://serverfault.com",
"https://serverfault.com/users/350906/"
]
} |
774,583 | I've received the error several times on Windows 7 Workstations and Laptops where it loses trust with the domain controller, and I know how to fix it, but why does it do that? | You probably already know this, but bear with me. Computers have passwords in AD, just like users. We don't know our computer's password, and it changes regularly via built-in logic. The short answer is that the computer's password is no longer valid, and therefore AD doesn't trust this machine for logins any more. Why? How? Lots of things cause this. Something interfered with the password change process , or caused the machine to revert to an old password. Possible culprits include: Restoring from backup. Being powered off long enough for the password to expire, followed by network issues. General intermittent network issues with poor timing. Viruses, malware, etc. More things that aren't occurring to me at the moment, probably. I hope that helps. | {
"source": [
"https://serverfault.com/questions/774583",
"https://serverfault.com",
"https://serverfault.com/users/1980/"
]
} |
775,965 | I am trying to follow this tutorial to set up uWSGI with Django and nginx on Ubuntu 16.04 . It all works fine up until the very last step (oh the irony...) where I try to execute this command: sudo service uwsgi start It fails with the following error: Failed to start uwsgi.service: Unit uwsgi.service not found. Others seem to get a similar error: Failed to start uwsgi.service: Unit uwsgi.service failed to load: No such file or directory. The issue appears to be related to the version of Ubuntu. While that tutorial is aimed at Ubuntu 14.04, it seems it will not work for newer versions because in version 15 Ubuntu switched from the upstart init daemon to the systemd init daemon . How can I use systemd to launch uWSGI so that it works with nginx and Django? | The first modification needed is to the /etc/uwsgi/sites/firstsite.ini file. The only change needed is replacing the socket permissions from 664 to 666 . The file would look like this: [uwsgi]
project = firstsite
base = /home/user
chdir = %(base)/%(project)
home = %(base)/Env/%(project)
module = %(project).wsgi:application
master = true
processes = 5
socket = %(base)/%(project)/%(project).sock
chmod-socket = 666
vacuum = true Secondly , as we're using systemd rather than upstart , the following file is not needed and can be removed: /etc/init/uwsgi.conf Third , we create the following systemd script at /etc/systemd/system/uwsgi.service : [Unit]
Description=uWSGI Emperor service
After=syslog.target
[Service]
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target Refresh the state of the systemd init system with this new uWSGI service on board sudo systemctl daemon-reload In order to start the script you'll need to run the following: sudo systemctl start uwsgi In order to start uWSGI on reboot, you will also need: sudo systemctl enable uwsgi You can use the following to check its status: systemctl status uwsgi Some further details can be found here . | {
"source": [
"https://serverfault.com/questions/775965",
"https://serverfault.com",
"https://serverfault.com/users/178267/"
]
} |
776,037 | I am setting up a website and bought the SSL certificate for the domain of the website. When I asked the hosting company why https://www.example.com was refusing connections, they answered that SSL access was configured on port 41696. Of course, https://www.example.com:41696 works as they promised, but that's really not a URL I'd like to use for a customer-facing website. The hosting company also said that they can't change it to 443 even if we get a different package. I have never heard that from any other hosting provider I have worked with. Is there a good reason why they are not letting that happen? Or is there any configuration that I can change on the server that will make it accept HTTPS requests on port 443? | Historically, HTTPS required a dedicated IP per site/certificate , since the browser needs to verify the certificate before sending the Host header. It's possible that your hosting provider uses dedicated ports instead, in order to conserve IPs. Nowadays, however, pretty much all modern browsers support Server Name Indication , which allows virtual hosting multiple HTTPS sites on the same IP and port, so even that isn't a particularly good reason anymore. If this is a shared hosting service, it's unlikely that there are any config changes you can make to make your site be available on the default port. | {
"source": [
"https://serverfault.com/questions/776037",
"https://serverfault.com",
"https://serverfault.com/users/353941/"
]
} |
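If you end up on a host that serves several certificates from one IP and port (the SNI setup mentioned in the answer above), a quick way to see which certificate is presented for a given name is something like the following; the hostname is a placeholder:
# Request the certificate while sending an SNI server name:
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates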
777,299 | I recently had an XFS filesystem become corrupt due to a power failure (CentOS 7 system). The system wouldn't boot properly. I booted from a rescue CD and tried xfs_repair ; it told me to mount the partition to deal with the log. I mounted the partition, and did an ls to verify that yes, it appears to be there. I unmounted the partition and tried xfs_repair again and got the same message. What am I supposed to do in this situation? Is there something wrong with my rescue CD (System Rescue CD, version 4.7.1)? Is there some other procedure I should have used? I ended up simply restoring the system from backups (it was quick and easy in this case), but I'd like to know what to do in the future. | If you're attempting to run xfs_repair , getting the error message that suggests mounting the filesystem to replay the log, and after mounting still receiving the same error message, you may need to perform a forced repair (using the -L flag with xfs_repair ). This option should be a last resort. For example, I'll use a case where I had a corrupt root partition on my CentOS 7 install. When attempting to mount the partition, I continually received the below error message: mount: mount /dev/mapper/centos-root on /mnt/centos-root failed: Structure needs cleaning Unfortunately, forcing a repair would involve zeroing out (destroying) the log before attempting a repair. When using this method, there is a potential for ending up with more corrupt data than initially anticipated; however, we can use the appropriate xfs tools to see what kind of damage may be caused before making any permanent changes. Using xfs_metadump and xfs_mdrestore , you can create a metadata image of the affected partition and perform the forced repair on the image rather than the partition itself. The benefit of this is the ability to see the damage that comes with a forced repair before performing it on the partition. To do this, you'll need a decent-sized USB or external hard drive. Start by mounting the USB drive - my USB was located at /dev/sdb1 , yours may be named differently. mkdir -p /mnt/usb
mount /dev/sdb1 /mnt/usb Once mounted, run xfs_metadump to create a copy of the partition metadata to the USB - again, your affected partition may be different. In this case, I had a corrupt root partition located at /dev/mapper/centos-root : xfs_metadump /dev/mapper/centos-root /mnt/usb/centos-root.metadump Next, you'll want to restore the metadata in to an image so that we can perform a repair and measure the damage. xfs_mdrestore /mnt/usb/centos-root.metadump /mnt/usb/centos-root.img I found that in rescue mode xfs_mdrestore is not available, and instead you'll need to be in rescue mode of a live CentOS CD. Finally, we can perform the repair on the image: xfs_repair -L /mnt/usb/centos-root.img After the repair has completed and you've assessed the output and potential damage, you can determine as to whether you'd like to perform the repair against the partition. To run the repair against the partition, simply run: xfs_repair -L /dev/mapper/centos-root Don't forget to check the other partitions for corruption as well. After the repairs, reboot the system and you should be able to successfully boot. Remember that the -L flag should be used as a last resort where there are no other possible options to repair. I found that these online articles helped: https://web.archive.org/web/20140920034637/http://geekblood.com/2014/08/13/filesystem-corruption-xfs-and-rhelv7/ https://web.archive.org/web/20160319163101/http://oss.sgi.com/archives/xfs/2015-01/msg00503.html http://dhoytt.com/blog/2015/07/26/xfs-filesystem-repair-gets-web-server-back/ | {
"source": [
"https://serverfault.com/questions/777299",
"https://serverfault.com",
"https://serverfault.com/users/2494/"
]
} |
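As a small, non-destructive complement to the procedure above: xfs_repair has a no-modify mode that reports what it would change without touching anything, which is worth running against the image first (the path follows the answer's example):
# Dry run against the restored metadata image - nothing is modified:
xfs_repair -n /mnt/usb/centos-root.img
# Only after reviewing that output (and the -L run on the image)
# would you consider repairing the real device.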
777,749 | On a local development machine, I have a nginx reverse proxy like so: server {
listen 80;
server_name myvirtualhost1.local;
location / {
proxy_pass http://127.0.0.1:8080;
}
}
server {
listen 80;
server_name myvirtualhost2.local;
location / {
proxy_pass http://127.0.0.1:9090;
}
} Yet if I debug my application, the response may be delayed for an infinite amount of time, and after 30 seconds I get a 504 Gateway Time-out as a response. How can I disable the timeout and have my reverse proxy wait forever for a response? And I'd like the setting to be global, so that I do not have to set it for each proxy. | It may not be possible to disable it at all, yet a feasible workaround is to increase the execution time. On an nginx tutorial site , it was written: If you want to increase time-limit for all-sites on your server, you
can edit main nginx.conf file: vim /etc/nginx/nginx.conf Add following in http{..} section http {
fastcgi_read_timeout 300;
proxy_read_timeout 300;
} and reload nginx' config: sudo service nginx reload I have used a rather large value that is unlikely to happen, i.e. 999999 or using time units , to one day via 1d . Beware that setting the value to 0 will cause a gateway timeout error immediately. | {
"source": [
"https://serverfault.com/questions/777749",
"https://serverfault.com",
"https://serverfault.com/users/101107/"
]
} |
777,994 | I'm running LEMP with PHP7.0. I've got this in my server block fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; But when I open the site, it returns a 502 Bad Gateway. Below is the error log. *1 connect() to unix:/var/run/php/php7.0-fpm.sock failed (13: Permission denied) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: example.com, request: "GET / HTTP1.1", upstream: "fsatcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "example.com" It says Permission Denied . What's wrong here? I've checked but I can't seem to find what needs to be given what kind of permission. Thank you. | I got it working. The php user was www-data but the nginx user was nginx . Check php here: /etc/php/7.0/fpm/pool.d/www.conf listen.owner = www-data
listen.group = www-data
listen.mode = 0660 Nginx user was at /etc/nginx/nginx.conf This guided me: https://stackoverflow.com/questions/23443398/nginx-error-connect-to-php5-fpm-sock-failed-13-permission-denied | {
"source": [
"https://serverfault.com/questions/777994",
"https://serverfault.com",
"https://serverfault.com/users/355082/"
]
} |
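A short diagnostic sequence in the spirit of the answer above; the paths and service names match the question's Ubuntu/PHP 7.0 setup and may differ on your system:
# Which user are the nginx workers running as?
ps -o user= -C nginx
# Who owns the PHP-FPM socket, and what are its permissions?
ls -l /var/run/php/php7.0-fpm.sock
# After aligning listen.owner/listen.group in /etc/php/7.0/fpm/pool.d/www.conf
# with the nginx user, restart both services:
systemctl restart php7.0-fpm nginx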
778,676 | I have a script: #!/bin/bash
echo "$(dirname $(readlink -e $1))/$(basename $1)" that sits here: /home/myuser/bin/abspath.sh which has execute permissions. If I run echo $PATH I get the following: /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/myuser/bin I wish to be able to, from any directory, call abspath <some_path_here> and it call my script. I am using bash, what I am doing wrong? | You want to type abspath , but the program is named abspath.sh . The problem is not regarding whether it is in the PATH, but the fact that you are simply not using its name to call it. You have two options: Type abspath.sh instead. Rename the program to abspath . | {
"source": [
"https://serverfault.com/questions/778676",
"https://serverfault.com",
"https://serverfault.com/users/81901/"
]
} |
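Following the second option in the answer above, the rename and a quick sanity check might look like this; the path is the one from the question:
mv ~/bin/abspath.sh ~/bin/abspath
chmod +x ~/bin/abspath      # should already be executable, but it does no harm
hash -r                     # clear bash's cached command lookups
type abspath                # should report ~/bin/abspath
abspath some_file.txt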
778,759 | I have a number of older EBS volumes that are not encrypted. In satisfying new corporate security measures, all data needs to be "encrypted at rest" so I need to convert all of the volumes to be encrypted. What is the best way to accomplish this? | It's possible to copy an unencrypted EBS snapshot to an encrypted EBS snapshot. So the following process can be used: Stop your EC2 instance. Create an EBS snapshot of the volume you want to encrypt. Copy the EBS snapshot, encrypting the copy in the process. Create a new EBS volume from your new encrypted EBS snapshot. The new EBS volume will be encrypted. Detach the original EBS volume and attach your new encrypted EBS volume, making sure to match the device name (/dev/xvda1, etc.) | {
"source": [
"https://serverfault.com/questions/778759",
"https://serverfault.com",
"https://serverfault.com/users/75429/"
]
} |
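The same steps can be scripted with the AWS CLI; this is only a sketch, and every ID below (volume, snapshot, instance, zone, device) is a placeholder you would substitute with your own values:
# 2. Snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-encryption copy"
# 3. Copy the snapshot, encrypting the copy
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --encrypted
# 4. Create a new (encrypted) volume from the encrypted snapshot
aws ec2 create-volume --snapshot-id snap-0fedcba9876543210 --availability-zone us-east-1a
# 5. Swap the volumes on the stopped instance (device name must match the original attachment)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0aaaabbbbccccdddd --instance-id i-0123456789abcdef0 --device /dev/xvda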
779,634 | I had a daemon that needed its own dir in /var/run for its PID file with write permission granted to the daemon's user. I found I could create this dir with these commands: # mkdir /var/run/mydaemon Then I could change its ownership to the user/group under which I wished to run the process: # chown myuser:myuser /var/run/mydaemon But this dir would be GONE whenever I issue a reboot! How do I get this dir to create every time the machine boots? | There are two alternatives to have systemd create directories under /var/run / /run . Typically the easiest is to declare a RuntimeDirectory in the unit file of your service. Example: RuntimeDirectory=foo This will create /var/run/foo for a system unit. (Note: DO NOT provide a full path, just the path under /var/run ) For full docs please see the appropriate entry in systemd.exec docs . For runtime directories that require more complex or different configuration or lifetime guarantees, use tmpfiles.d and
have your package drop a file /usr/lib/tmpfiles.d/mydaemon.conf : #Type Path Mode UID GID Age Argument
d /run/mydaemon 0755 myuser myuser - - See the full tmpfiles.d docs here . | {
"source": [
"https://serverfault.com/questions/779634",
"https://serverfault.com",
"https://serverfault.com/users/250616/"
]
} |
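If you go the tmpfiles.d route described above, the directory can also be created immediately, without waiting for a reboot; the file name matches the answer's example:
# Create everything declared in that tmpfiles.d snippet right now:
systemd-tmpfiles --create /usr/lib/tmpfiles.d/mydaemon.conf
# Verify the result:
ls -ld /run/mydaemon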
779,636 | I recently ran an nmap scan against my server, and I discovered that there was some strange instance of nginx running on port 8088. 8088/tcp open http nginx 1.0.11
| http-methods:
|_ Supported Methods: GET HEAD
|_http-server-header: nginx/1.0.11
|_http-title: Welcome to nginx! I visited the page and it is just the default nginx page (not the same one I have on my main nginx instance on port 80): The weird instance on port 8088 But this is my normal nginx placeholder on port 80: Normal nginx placeholder running on port 80 I noticed that nginx -v shows: nginx version: nginx/1.10.0 (Ubuntu) But as seen earlier, according to nmap, port 8088 is running 1.0.11, not 1.10.0
A quick netstat -tulpn | grep :8088 returned: tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 19003/nginx I didn't find any mention of port 8088 in the entire /etc/nginx directory (yes, I checked everything).
I don't want to kill the process until I know what it is, any ideas? I found something similar here (on server fault). I'm running Ubuntu Server 16.04. | There are two alternatives to have systemd create directories under /var/run / /run . Typically the easiest is to declare a RuntimeDirectory in the unit file of your service. Example: RuntimeDirectory=foo This will create /var/run/foo for a system unit. (Note: DO NOT provide a full path, just the path under /var/run ) For full docs please see the appropriate entry in systemd.exec docs . For runtime directories that require more complex or different configuration or lifetime guarantees, use tmpfiles.d and
have your package drop a file /usr/lib/tmpfiles.d/mydaemon.conf : #Type Path Mode UID GID Age Argument
d /run/mydaemon 0755 myuser myuser - - See the full tmpfiles.d docs here . | {
"source": [
"https://serverfault.com/questions/779636",
"https://serverfault.com",
"https://serverfault.com/users/351188/"
]
} |
782,625 | Here we have some servers and almost every one of them has a dedicated UPS. There are dependencies between them, so they must be switched on in the correct sequence. We are experiencing serious problems with the power supply, so the servers are shut down and then restarted in a random order when power is restored. It is not a problem that the servers were switched off during a blackout; what matters is that they work correctly without any human intervention once power is restored. Our UPSes are quite cheap and the only configuration parameter useful for my goal is power the load xx seconds after power is restored . In theory, by putting the right delays on each UPS I can fix the order of server restart, but I don't trust the UPSes to behave as expected. Is this the right way to go? Do higher-end UPSes give other options to fix the restart sequence? One final note: my UPSes are in the range of 1000 - 2200 VA. | The standard answer for this is "not at all". Fix the software to handle restarts in random order. If you really need SOME servers to start first (example: Active Directory) put them on UPSes that can survive a LOT longer. A low power Atom based server is good enough as an Active Directory controller and will survive a day on a small UPS. Do high level UPSes give other options to fix the restart sequence? No. I would say it is generally assumed programmers are competent enough to work around the issue properly. What you COULD do is: Have servers start "randomly". Except for DHCP / Active Directory there is nothing really demanding an order that can not be fixed. Have a control server start the services on the various machines in the correct order after some time (5 minutes). I would say that this type of setup is a lot more common. I would call any software that REQUIRES servers to start in a particular order (outside of pure infrastructure) broken and unfit for business. Just as a note: our own setup is a low cost 20kVA UPS (low cost because we got one used) for the servers, with a slaved 2000VA UPS for a machine serving as "root" of the network (and backup machine). Slaved means that the UPS is behind the big one - so it only switches to battery when the large one (which lasts between half an hour and 8 hours depending on how much of our computing grid is online) is going into terminal shutdown. | {
"source": [
"https://serverfault.com/questions/782625",
"https://serverfault.com",
"https://serverfault.com/users/179575/"
]
} |
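A very rough sketch of the "control server starts services in order" idea from the answer above; the host names, service name and delays are all made up for illustration and readiness checks should be service-specific:
#!/bin/bash
# Wait a bit after power returns so the hosts have time to boot.
sleep 300
# Start services in dependency order, checking each one before moving on.
for host in db-server app-server web-server; do
    ssh "root@$host" "systemctl start myapp" || { echo "failed on $host" >&2; exit 1; }
    # crude pause between tiers; replace with a real health check
    sleep 30
done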
782,967 | A short introduction to the use case: I am using a docker container to run my go tests using go test ./... . This can be achieved easily using docker exec <container> /bin/sh -c "go test ./..." . Unfortunately go test ./... runs across all subdirectories and I'd like to exclude one (the vendor directory). The advised solution for this is using the following command: go test $(go list ./... | grep -v '<excluded>') , somehow this leaves me with the following result: docker run golang:1.6.2-alpine /bin/sh -c "go test " (I have tested this on both run and exec, but they probably use the same core). When I ssh into the container using docker exec -it <container_id> /bin/sh and run the exact same command, it works like a charm. It seems that executing shell commands through docker exec/run does not support any commands nested with $() ? | Your command may not be working as you expected thanks to a common bash gotcha: docker exec <container> /bin/sh -c "go test $(go list ./... | grep -v '<excluded>')" The command you are trying to run will perform the expansion of the subshell $() on your host because it is inside double quotes. This can be solved by single quoting your command as suggested by @cuonglm in the question comments. docker exec <container> /bin/sh -c 'go test $(go list ./... | grep -v "<excluded>")' EDIT: A little demo [wbarnwell@host ~]$ docker run -it --rm busybox /bin/sh -c '$(whoami)'
/bin/sh: root: not found
[wbarnwell@host ~]$ docker run -it --rm busybox /bin/sh -c "$(whoami)"
/bin/sh: wbarnwell: not found | {
"source": [
"https://serverfault.com/questions/782967",
"https://serverfault.com",
"https://serverfault.com/users/178669/"
]
} |
782,999 | When I run yum update I receive the following error: Repository 'base' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Error: Cannot find a valid baseurl for repo: base I am not behind a proxy. Does anybody know how to fix? | Your command may not be working as you expected thanks to a common bash gotcha: docker exec <container> /bin/sh -c "go test $(go list ./... | grep -v '<excluded>')" The command you are trying to run will perform the expansion of the subshell $() on your host because it is inside double quotes. This can be solved by single quoting your command as suggested by @cuonglm in the question comments. docker exec <container> /bin/sh -c 'go test $(go list ./... | grep -v "<excluded>")' EDIT: A little demo [wbarnwell@host ~]$ docker run -it --rm busybox /bin/sh -c '$(whoami)'
/bin/sh: root: not found
[wbarnwell@host ~]$ docker run -it --rm busybox /bin/sh -c "$(whoami)"
/bin/sh: wbarnwell: not found | {
"source": [
"https://serverfault.com/questions/782999",
"https://serverfault.com",
"https://serverfault.com/users/359771/"
]
} |
783,082 | Many tutorials tell you to configure your ssh server like this: ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no but with this setup you cannot use PAM, and as I plan to use two-factor auth with Google Authenticator (OTP, one-time passwords) I need PAM. So how do I configure a fresh Debian jessie ssh daemon if I want to prevent login with the normal password but still allow the use of PAM? Maybe the exact question is: how do I configure PAM to disallow passwords? Details on PAM Authentication Disabling PAM-based password authentication is rather un-intuitive. It
is needed on pretty much all GNU/Linux distributions (with the notable
exception of Slackware), along with FreeBSD. If you're not careful,
you can have PasswordAuthentication set to 'no' and still login with
just a password through PAM authentication. It turns out that you need
to set 'ChallengeResponseAuthentication' to 'no' in order to truly
disable PAM authentication. The FreeBSD man pages have this to say,
which may help to clarify the situation a bit: Note that if ChallengeResponseAuthentication is 'yes', and the PAM authentication policy for sshd includes pam_unix(8), password
authentication will be allowed through the challenge-response
mechanism regardless of the value of PasswordAuthentication. http://www.unixlore.net/articles/five-minutes-to-more-secure-ssh.html | maybe the exact question is how to configure pam to disallow passwords? Correct. You've already stumbled upon the fact that setting UsePAM no is generally bad advice. Not only does it prevent any form of PAM based authentication, it also disables account and session modules. Access control and session configuration are good things. First, let's build a list of requirements: OTP via pam_google_authenticator.so . This requires UsePAM yes and ChallengeResponseAuthentication yes . You're prompting them for a credential, after all! No other form of password authentication via PAM. This means disabling any auth module that might possibly allow a password to be transmitted via keyboard-interactive logins. (which we have to leave enabled for OTP) Key based authentication. We need to require publickey authentication, and maybe gssapi-with-mic if you have Kerberos configured. Normally, authenticating with a key skips PAM based authentication entirely. This would have stopped us in our tracks with older versions of openssh, but Debian 8 (jessie) supports the AuthenticationMethods directive. This allows us to require multiple authentication methods, but only works with clients implementing SSHv2. sshd config Below are the lines I suggest for /etc/ssh/sshd_config . Make sure you have a way to access this system without sshd in case you break something! # Require local root only
PermitRootLogin no
# Needed for OTP logins
ChallengeResponseAuthentication yes
UsePAM yes
# Not needed for OTP logins
PasswordAuthentication no
# Change to to "yes" if you need Kerberos. If you're unsure, this is a very safe "no".
GSSAPIAuthentication no
# Require an OTP be provided with key based logins
AuthenticationMethods publickey,keyboard-interactive
# Use this instead for Kerberos+pubkey, both with OTP
#
#AuthenticationMethods gssapi-with-mic,keyboard-interactive publickey,keyboard-interactive Don't forget to reload sshd once these changes have been made. PAM config We still have to configure PAM. Assuming a clean install of Debian 8 (per your question): Comment @include common-auth from /etc/pam.d/sshd . Review /etc/pam.d/sshd and confirm that no lines beginning with auth are present. There shouldn't be if this is a clean install, but it's best to be safe. Add an auth entry for pam_google_authenticator.so . Remember that local passwords still work. We did not make any changes that would impact logins via a local console, or prevent users from using passwords to upgrade their privileges via sudo. This was outside the scope of the question. If you decide to take things further, remember that root should be always be permitted to login locally via password. You risk locking yourself out of the system accidentally otherwise. | {
"source": [
"https://serverfault.com/questions/783082",
"https://serverfault.com",
"https://serverfault.com/users/71452/"
]
} |
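To complete the picture from the answer above, each user still has to enroll an OTP token and the PAM line has to be added; a minimal sketch, assuming the stock pam_google_authenticator package on Debian 8 (run the enrollment as the user who will log in):
# As the user - generates ~/.google_authenticator and shows the QR code:
google-authenticator
# As root - add the OTP module to sshd's PAM stack (step 3 in the answer):
echo 'auth required pam_google_authenticator.so' >> /etc/pam.d/sshd
# Reload sshd so the sshd_config changes take effect:
systemctl reload ssh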
783,502 | I am about to use Wireshark for some traffic monitoring on my Windows computer. While working on it, I was wondering how Wireshark manages to catch low level network packets before Windows does. First of all, a network interface on my NIC receives a packet. The NIC then does some initial checks (CRC, right MAC address, ... etc. ). Assuming that the verification was successful, the NIC forwards the packet. But how and where? I understand that drivers are the glue between the NIC and the OS or any other application. I further guess that there's a separate driver for Windows and Wireshark ( WinPcap ?). Otherwise, Wireshark wouldn't be able to receive Ethernet frames. Are there two or more NIC drivers coexisting at the same time? How does the NIC know, which one to use? | The I/O model in Windows is based on a stack of components. Data must flow through the various components of that stack that exists between the physical network card, and the application that will consume the data. Sometimes those various components inspect the data (a TCP packet for example,) as they flow through the stack, and based on the contents of that packet, the data may be altered, or the packet may be discarded entirely. This is a simplified model of the "network stack" that packets flow through in order to get from the application to the wire and vice versa. One of the most interesting components shown in the screenshot above is the WFP (Windows Filtering Platform) Callout API. If we zoomed in on that, it might look something like this: Developers are free to plug in their own modules into the appropriate places in this stack. For instance, antivirus products typically use a "filter driver" that plugs in to this model and inspects network traffic or provides firewall capabilities. The Windows Firewall service also obviously fits in to this model as well. If you wanted to write an application that records network traffic, such as Wireshark, then the appropriate way to do it would be to use a driver of your own, and insert it into the stack as low as possible so that it can detect network packets before your firewall module has a chance to drop them. So there are many "drivers" involved in this process. Many different types of drivers too. Also, other forms of input/output on the system, such as hard disk drive reads and writes, follow very similar models. One other note - WFP callouts are not the only way to insinuate yourself into the network stack. WinPCap as an example, interfaces with NDIS directly with a driver, meaning it has a chance to intercept traffic before any filtering has taken place at all. NDIS Drivers WinPCap References: Next Generation TCP/IP Stack in Vista+ Windows Filtering Platform Architecture | {
"source": [
"https://serverfault.com/questions/783502",
"https://serverfault.com",
"https://serverfault.com/users/350473/"
]
} |
783,556 | I had an argument with a superior about this. Though at first glance the prior user of a laptop only did work in his own documents-folders, should I always install a new OS for the next user or is deleting the old profile enough? The software that is installed is mostly also needed by the next user. I think an install is needed, but except my own argument of viruses and private data, what reasons are there for doing so? At our company it is allowed to use the PC for e.g. private mail, on some PCs are even games installed. We have kinda mobile users, that are often on site at a customer, so I don't really blame them. Also because of that we have a lot of local admins out there. I know both the private use and the availability of local admin-accounts aren't good ideas, but that's how it was handled before I worked here and I can only change this once I am out of traineeship ;) Edit : I think all of the answers posted are relevant, and I also know that a couple of the practices we have at my company aren't the best to begin with (local admin for too many people for example ;). As of now, I think the most usable answer for a discussion would be the one from Ryder. Although the example he gave in his answer may be exaggerated, it has happened before that a former employee forgot private data. I recently found a retail copy of the game Runaway in a old laptop and we had a couple of cases of remaining private images, too. | Absolutely you should. It's not just common sense from a security POV, it should also be practice as matter of business ethics. Let's imagine the following scenario: Alice leaves, and her computer is transferred to Bob. Bob didn't know it, but Alice was into illegal shota porn and left several files tucked away outside of her profile. IT wipes her profile and nothing else, which included only her browsing history and local files. One day, Bob is checking out the bells and whistles on his shiny new work machine, while sitting at a Starbucks™ and sipping at a latte. He stumbles across Alice's cache and innocently clicks on a file that looks strange. Suddenly, every head in the store whips around to watch in horror as Bob's PC flouts several state and federal regulations at full volume. One little girl in the corner starts crying. Bob is mortified. After six months of depression and after having been fired for his unintentional act of public indecency (and possible criminal charges), he finds himself a really crackin' legal team and lays waste to his former employer with an outrageously damaging lawsuit. Alice is in Thailand and escapes extradition. Maybe all this is a little beyond the pale, but it absolutely could happen if you don't take the time to scour through a former employee's every action. Or you could save time, and reinstall from scratch. | {
"source": [
"https://serverfault.com/questions/783556",
"https://serverfault.com",
"https://serverfault.com/users/360206/"
]
} |
783,934 | I have a network bastion which is publicly accessible at example.compute-1.amazonaws.com and a private postgres database instance at postgres.example.us-east-1.rds.amazonaws.com:5432 I can ssh into the bastion using $ ssh -i key.pem [email protected] Then once I'm in the bastion I create a ssh tunnel with: $ ssh -i key.pem -L 5432:postgres.example.us-east-1.rds.amazonaws.com:5432 [email protected] I can then verify that the tunnel works by connecting to the database from the bastion using localhost: $ psql -p 5432 -h localhost -U postgres However, I am unable to connect to the database remotely (without being in the bastion). $ psql -p 5432 -h example.compute-1.amazonaws.com -U postgres
psql: could not connect to server: Connection refused
Is the server running on host "example.compute-1.amazonaws.com" () and accepting
TCP/IP connections on port 5432? I've configured the security group of the bastion to accept inbound traffic on port 5432. Am I using ssh -L correctly? Should I be using it outside the bastion? Any advice would be much appreciated. | When you create an SSH tunnel, it does not expose the opened port to the outside world. The opened port, is only available as localhost . So effectively what you've done is to create a tunnel from your bastion, to your bastion. Instead, what you want to do is create a tunnel from your local computer through your bastion. So, you create your tunnel as part of your connection from your local computer to your bastion . You do not need to create another SSH connection. So, locally, you would execute: $ ssh -i key.pem -L 5432:postgres.example.us-east-1.rds.amazonaws.com:5432 [email protected] Assuming postgres.example.us-east-1.rds.amazonaws.com resolves to the private IP address. Then to connect to your server, still locally, connect as if the server was local: $ psql -p 5432 -h localhost -U postgres Doing this, there's no need to use a prompt on your bastion. | {
"source": [
"https://serverfault.com/questions/783934",
"https://serverfault.com",
"https://serverfault.com/users/360497/"
]
} |
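A small variation on the command in the answer above: the same tunnel can be pushed into the background so your terminal stays free. The flags are standard OpenSSH; the key, hostnames and port are the ones from the question:
# -f: go to background after authentication, -N: no remote command, just the tunnel
ssh -i key.pem -f -N \
    -L 5432:postgres.example.us-east-1.rds.amazonaws.com:5432 \
    [email protected]
# Then, locally:
psql -p 5432 -h localhost -U postgres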
783,935 | I've been banging my head against the bind manual and google for a few hours trying to get this figured out, but I'm not sure where I'm screwing up. I built this on a few local VMs, and the slave talked to the master without a problem. The firewall between these two subnets isn't blocking anything. Both VMs have a permanent firewalld exception to accept UDP port 53 traffic. Any advice would be greatly appreciated. The configuration is set up so that DHCP from two locations updates a master DNS server, and the master then populates a DNS slave. I removed some of the default named.conf text for the sake of space (anything not included is most likely default). This all runs on CentOS 7. Errors when starting Named on slave Jun 14 12:54:07 dns-vm-pa-01 named[26045]: running
Jun 14 12:54:07 dns-vm-pa-01 systemd[1]: Started Berkeley Internet Name Domain (DNS).
Jun 14 12:54:07 dns-vm-pa-01 named[26045]: zone 1.0.10.in-addr.arpa/IN: Transfer started.
Jun 14 12:54:07 dns-vm-pa-01 named[26045]: transfer of '1.0.10.in-addr.arpa/IN' from 10.0.0.5#53: connected using 10.0.1.5#36381
Jun 14 12:54:07 dns-vm-pa-01 named[26045]: transfer of '1.0.10.in-addr.arpa/IN' from 10.0.0.5#53: failed while receiving responses: SERVFAIL
Jun 14 12:54:07 dns-vm-pa-01 named[26045]: transfer of '1.0.10.in-addr.arpa/IN' from 10.0.0.5#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.146 secs (0 bytes/sec)
Jun 14 12:54:08 dns-vm-pa-01 named[26045]: zone int.bubbhashramp.com/IN: Transfer started.
Jun 14 12:54:08 dns-vm-pa-01 named[26045]: transfer of 'int.bubbhashramp.com/IN' from 10.0.0.5#53: connected using 10.0.1.5#36067
Jun 14 12:54:08 dns-vm-pa-01 named[26045]: transfer of 'int.bubbhashramp.com/IN' from 10.0.0.5#53: failed while receiving responses: SERVFAIL
Jun 14 12:54:08 dns-vm-pa-01 named[26045]: transfer of 'int.bubbhashramp.com/IN' from 10.0.0.5#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.155 secs (0 bytes/sec) NetStat Result on Master udp 0 0 10.0.0.5:53 0.0.0.0:* 26141/named Permissions for zone files in /var/named/dynamic/ -rw-r--r--. 1 root named 374 Jun 14 10:43 0.0.10.in-addr.arpa
-rw-r--r--. 1 root named 372 Jun 14 10:04 1.0.10.in-addr.arpa
-rw-r--r--. 1 root named 567 Jun 14 12:31 int.bubbhashramp.com Dig Reply from Master dig @10.0.0.5 vmhost-01.int.bubbhashramp.com
; <<>> DiG 9.8.3-P1 <<>> @10.0.0.5 vmhost-01.int.bubbhashramp.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21900
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;vmhost-01.int.bubbhashramp.com. IN A
;; ANSWER SECTION:
vmhost-01.int.bubbhashramp.com. 10800 IN A 10.0.1.10
;; AUTHORITY SECTION:
int.bubbhashramp.com. 10800 IN NS dns-vm-pa-01.int.bubbhashramp.com.
int.bubbhashramp.com. 10800 IN NS dns-vm-nh-01.int.bubbhashramp.com.
;; ADDITIONAL SECTION:
dns-vm-nh-01.int.bubbhashramp.com. 10800 IN A 10.0.0.5
dns-vm-pa-01.int.bubbhashramp.com. 10800 IN A 10.0.1.5
;; Query time: 55 msec
;; SERVER: 10.0.0.5#53(10.0.0.5)
;; WHEN: Tue Jun 14 13:05:34 2016
;; MSG SIZE rcvd: 146 Master Config key "rndc-key" {
algorithm hmac-md5;
secret "bubbgumpkeys";
};
options {
listen-on port 53 { 10.0.0.5; };
listen-on-v6 port 53 { any; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
allow-transfer { 10.0.0.0/16; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
forwarders {
8.8.8.8;
75.75.75.75;
8.8.4.4;
};
};
zone "int.bubbhashramp.com" {
type master;
file "dynamic/int.bubbhashramp.com";
allow-update { key rndc-key; };
};
zone "1.0.10.in-addr.arpa" {
type master;
file "dynamic/1.0.10.in-addr.arpa";
allow-update { key rndc-key; };
};
zone "0.0.10.in-addr.arpa" {
type master;
file "dynamic/0.0.10.in-addr.arpa";
allow-update { key rndc-key; };
}; Slave Config options {
listen-on port 53 { any; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
recursion no;
dnssec-enable yes;
dnssec-validation yes;
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
forwarders {
8.8.8.8;
75.75.75.75;
8.8.4.4;
};
};
zone "int.bubbhashramp.com" {
type slave;
file "slaves/int.bubbhashramp.com";
masters { 10.0.0.5; };
};
zone "1.0.10.in-addr.arpa" {
type slave;
file "slaves/1.0.10.in-addr.arpa";
masters { 10.0.0.5; };
}; | When you create an SSH tunnel, it does not expose the opened port to the outside world. The opened port, is only available as localhost . So effectively what you've done is to create a tunnel from your bastion, to your bastion. Instead, what you want to do is create a tunnel from your local computer through your bastion. So, you create your tunnel as part of your connection from your local computer to your bastion . You do not need to create another SSH connection. So, locally, you would execute: $ ssh -i key.pem -L 5432:postgres.example.us-east-1.rds.amazonaws.com:5432 [email protected] Assuming postgres.example.us-east-1.rds.amazonaws.com resolves to the private IP address. Then to connect to your server, still locally, connect as if the server was local: $ psql -p 5432 -h localhost -U postgres Doing this, there's no need to use a prompt on your bastion. | {
"source": [
"https://serverfault.com/questions/783935",
"https://serverfault.com",
"https://serverfault.com/users/245943/"
]
} |
785,949 | We use man whatever to get usage and other info regarding the whatever command. When the relevant section of info is found, I'd like to quit the man command with the info left on screen, so I can type the next command with the reference above. But the man command clears the whole screen and restores the old screen content, similar to vim .
Is there a way to achieve this? | I believe this is not so much about man itself but rather about your pager of choice ( PAGER environment variable) combined with the terminal in use. I'm guessing your pager is probably less (typical default pager nowadays and fits with the description). less has an option -X that may get you a behavior along the lines of what you're looking for. -X or --no-init
Disables sending the termcap initialization and deinitialization
strings to the terminal. This is sometimes desirable if the
deinitialization string does something unnecessary, like clear‐
ing the screen. Eg PAGER="less -X" man man could be used for testing it out, and if you find this behavior preferable you might consider setting PAGER to this value permanently. | {
"source": [
"https://serverfault.com/questions/785949",
"https://serverfault.com",
"https://serverfault.com/users/345610/"
]
} |
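If the behaviour suits you, one hedged way to make it permanent for a bash user is simply to export the variable from your shell profile; MANPAGER scopes it to man only, leaving other programs' paging untouched:
echo 'export PAGER="less -X"' >> ~/.bashrc
# Or, to affect only man:
echo 'export MANPAGER="less -X"' >> ~/.bashrc
source ~/.bashrc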
785,954 | Can someone elaborate on what would happen if the sendmail daemon is in the middle of processing a message when it receives a kill -15 request? Would it finish processing whatever message it's working on? Or terminate immediately? I'm trying to determine if sendmail will gracefully end connections when I run service sendmail restart . | I believe this is not so much about man itself but rather about your pager of choice ( PAGER environment variable) combined with the terminal in use. I'm guessing your pager is probably less (typical default pager nowadays and fits with the description). less has an option -X that may get you a behavior along the lines of what you're looking for. -X or --no-init
Disables sending the termcap initialization and deinitialization
strings to the terminal. This is sometimes desirable if the
deinitialization string does something unnecessary, like clear‐
ing the screen. Eg PAGER="less -X" man man could be used for testing it out, and if you find this behavior preferable you might consider setting PAGER to this value permanently. | {
"source": [
"https://serverfault.com/questions/785954",
"https://serverfault.com",
"https://serverfault.com/users/21875/"
]
} |
786,648 | Recently I've seen the root filesystem of a machine in a remote datacenter get remounted read-only as a result of consistency issues. On reboot, this error was shown: UNEXPECTED INCONSISTENCY: RUN fsck MANUALLY (i.e., without -a or -p options) After running fsck as suggested, and accepting the corrections manually with Y , the errors were corrected and the system is now fine. Now, I think it would be interesting if fsck were configured to run and repair everything automatically, since the only alternative in some cases (like this one) is going in person to the remote datacenter and attaching a console to the affected machine. My question is: why does fsck ask for manual intervention by default? How and when would a correction performed by such a program be unsafe? What are the cases where the sysadmin might want to leave a suggested correction aside for some time (to perform some other operations) or abort it altogether? | fsck definitely causes more harm than good if the underlying hardware is somehow damaged; bad CPU, bad RAM, a dying hard drive, a disk controller gone bad... in those cases more corruption is inevitable. If in doubt, it's a good idea to just take an image of the corrupted disk with dd_rescue or some other tool, and then see if you can successfully fix that image. That way you still have the original setup available. | {
"source": [
"https://serverfault.com/questions/786648",
"https://serverfault.com",
"https://serverfault.com/users/207945/"
]
} |
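A sketch of the "image first, repair the copy" idea from the answer above, using GNU ddrescue; the device and paths are placeholders, the target needs enough free space, and e2fsck applies only if the filesystem is ext-family:
# Clone the suspect partition, keeping a map file so the copy can be resumed:
ddrescue /dev/sda2 /mnt/backup/sda2.img /mnt/backup/sda2.map
# Check the copy first, read-only:
e2fsck -n /mnt/backup/sda2.img
# If the proposed repairs look sane, let fsck fix the image, and only then
# decide whether to repair (or restore from) the real disk:
e2fsck -y /mnt/backup/sda2.img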
786,652 | We have an IBM v3700 SAN storage array (36 x 300 GB SAS HDDs) connected to four servers (Windows 2008) via FC. Each server has a few disks allocated in RAID5 mode. There are 8 unused (candidate) disks available in the slots. We want to add 2 disks per server to EXPAND the existing pools. For example, each server has a G: drive and we want to expand the G: drive using these 2 additional disks. What are my best options? How can I add 2 disks to each server's pool? I see it gives me a few RAID options like RAID 0, 10, 5. Is it possible to simply add 2 disks to the existing RAID5 to get maximum space, with RAID failover covered by the existing RAID5 spare? Example: an 8-drive RAID5 is mounted on SERVER1, the volume name is G:, so 2 TB of space is available. Now I want to add the space of 2 disks to it to make it 2.6 TB. Can I add the two disks' space to it? Do I have to select RAID5 for it and then EXPAND the existing G: drive to 2.6 TB? Is that possible? Or what should I do? Please suggest. | fsck definitely causes more harm than good if the underlying hardware is somehow damaged; bad CPU, bad RAM, a dying hard drive, a disk controller gone bad... in those cases more corruption is inevitable. If in doubt, it's a good idea to just take an image of the corrupted disk with dd_rescue or some other tool, and then see if you can successfully fix that image. That way you still have the original setup available. | {
"source": [
"https://serverfault.com/questions/786652",
"https://serverfault.com",
"https://serverfault.com/users/345651/"
]
} |
787,144 | In our organization we have around 500 Red Hat Linux machines. On all the machines we have installed applications and services under /etc/init.d , as well as Oracle RAC servers. We intend to perform yum updates on all machines and then reboot them. So I was wondering which command is safer: reboot or shutdown -r now | For Red Hat systems, there is no functional difference between reboot and shutdown -r now . Do whatever is easier for you. | {
"source": [
"https://serverfault.com/questions/787144",
"https://serverfault.com",
"https://serverfault.com/users/346089/"
]
} |
787,440 | I have a web server in Ireland (Amazon AWS). This server appears fast from Germany (orange line) but slow from the USA (black line). The HTTP request used for the test is the same. I think this is normal. The distance between Ireland and the USA is larger than from Germany to Ireland, but the difference seems too high. Are there other possible reasons, apart from the distance to the server? | Assuming the graph is HTTP request time, it seems fairly reasonable to me. An HTTP request (in the absence of keepalive, fastopen etc.) normally requires at least two round trips. Client sends syn. Server receives syn and sends syn-ack. Client receives syn-ack and sends ack and request. Server sends response. The speed of light in fiber is about 2*10^8 meters per second. According to google the distance from "ireland to the USA" is 6,629 km * which would translate to a round trip time of about 66 ms. But that assumes there are no delays in equipment and that the data route follows the shortest possible path. Practical round trip times are usually 100 to 150 milliseconds between a host in Europe and a host in the USA. As such, an HTTP request time of ~250 ms is perfectly normal. What is a bit more concerning are the spikes in the graph; they suggest network congestion somewhere between the server and the test client. * Obviously it depends on what point in the USA and what point in Ireland, but the point google picked seemed to be somewhere in the middle of the USA and the OP's graph said "us-mid". | {
"source": [
"https://serverfault.com/questions/787440",
"https://serverfault.com",
"https://serverfault.com/users/226126/"
]
} |
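The back-of-the-envelope number in the answer above can be reproduced in one line; the figures are the answer's own (6,629 km one way, roughly 2*10^8 m/s in fibre):
# round trip = 2 * distance / speed, printed in milliseconds
awk 'BEGIN { printf "%.0f ms\n", 2 * 6629 / 200000 * 1000 }'   # -> 66 ms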
787,919 | Nginx worker_connections "sets the maximum number of simultaneous connections that can be opened by a worker process. This number includes all connections (e.g. connections with proxied servers, among others), not only connections with clients. Another consideration is that the actual number of simultaneous connections cannot exceed the current limit on the maximum number of open files". I have few queries around this: What should be the optimal or recommended value for this? What are the downsides of using a high number of worker connections? | Let's take the pragmatic approach. All these limits are things that were hardcoded and designed in the past century when hardware was slow and expensive. We're in 2016 now, an average wall-mart toaster can process more requests than the default values. The default settings are actually dangerous. Having hundreds of users on a website is nothing impressive. worker_process A related setting, let's explain it while we're on the topic. nginx as load balancer: 1 worker for HTTP load balancing. 1 worker per core for HTTPS load balancing. nginx as webservers: This one is tricky. Some applications/frameworks/middleware (e.g. php-fpm) are run outside of nginx. In that case, 1 nginx worker is enough because it's usually the external application that is doing the heavy processing and eating the resources. Also, some applications/frameworks/middleware can only process one request at a time and it is backfiring to overload them. Generally speaking, 1 worker is always a safe bet. Otherwise, you may put one worker per core if you know what you're doing. I'd consider that route to be an optimization and advise proper benchmarking and testing. worker_connections The total amount of connections is worker_process * worker_connections . Half in load balancer mode. Now we're reaching the toaster part. There are many seriously underrated system limits: ulimits is 1k max open files per process on linux (1k soft, 4k hard on some distro) systemd limits is about the same as ulimits. nginx default is 512 connections per worker. There might be more: SELinux, sysctl, supervisord (each distro+version is slightly different) 1k worker_connections The safe default is to put 1k everywhere. It's high enough to be more than most internal and unknown sites will ever encounter. It's low enough to not hit any other system limits. 10k worker_connections It's very common to have thousands of clients, especially for a public website. I stopped counting the amount of websites I've seen went down because of the low defaults. The minimum acceptable for production is 10k. Related system limits must be increased to allow it. There is no such thing as a too-high limit (a limit simply has no effect if there are no users). However a too-low limit is a very real thing that results in rejected users and a dead site. More than 10k 10k is nice and easy. We could set an arbitrary 1000kk limits (it's only a limit after all) but that doesn't make much practical sense, we never get that traffic and couldn't take it anyway. Let's stick to 10k as a reasonable setting. The services which are going for (and can really do) more will require special tuning and benchmarking. Special Scenario: Advanced Usage Sometimes, we know that the server doesn't have much resources and we expect spikes that we can't do much about. We'd rather refuse users than try. In that case, put a reasonable connection limit and configure nice error messages and handling. 
Sometimes, the backend servers are working fine, but only up to some load; anything more and everything goes south quickly. We'd rather slow down than have the servers crash. In that case, configure queuing with strict limits and let nginx buffer all the heat while requests are being drained at a capped pace. | {
"source": [
"https://serverfault.com/questions/787919",
"https://serverfault.com",
"https://serverfault.com/users/363759/"
]
} |
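To illustrate the worker_connections answer above, here is a minimal nginx configuration sketch. The 10k-per-worker figure and the raised file-descriptor limit follow the reasoning in that answer; the proxied backend address is a placeholder, not something from the original post.

# /etc/nginx/nginx.conf -- minimal sketch of the limits discussed above
worker_processes auto;          # roughly one worker per core; 1 is enough when a backend does the heavy lifting
worker_rlimit_nofile 20000;     # keep the per-process open-file limit above worker_connections

events {
    worker_connections 10240;   # ~10k connections per worker, the suggested production floor
}

http {
    # total capacity is roughly worker_processes * worker_connections,
    # halved when nginx proxies, since each client connection needs an upstream connection too
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;   # placeholder backend
        }
    }
}

Remember that ulimit/systemd LimitNOFILE for the nginx service must also be raised to match, or the worker_connections setting cannot take effect.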
788,121 | I have a question about our Exchange Server:
Do you think it is a good idea to refuse incoming external e-mails that have our own domain in the ending? Like external eMail from [email protected] ? Because if it would be from a real sender in our company, the email would never come from outside? If yes, what's the best way of doing this? | Yes, if you know that email for your domain should only be coming from your own server, then you should block any email for that domain originating from a different server. Even if the sender's email client is on another host, they should be logging into your server (or whatever email server you use) to send email. Taking that a step further, you could configure your server to check SPF records. This is how many hosts prevent that sort of email activity. SPF records are a DNS record, a TXT record, which gives rules about which servers are allowed to send email for your domain. How to enable SPF record checking would depend on your email service, and would be beyond the scope of what to cover here. Fortunately, most hosting environments and software will have documentation for working with SPF records. You might want to learn more about SPF in general. Here's the Wikipedia article: https://en.wikipedia.org/wiki/Sender_Policy_Framework | {
"source": [
"https://serverfault.com/questions/788121",
"https://serverfault.com",
"https://serverfault.com/users/363947/"
]
} |
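As a concrete illustration of the SPF approach mentioned in the answer above: a DNS TXT record along these lines tells receiving servers which hosts may send mail for the domain. The domain, IP and include host are placeholders, not values from the question.

; example SPF record -- placeholder values only
example.com.  3600  IN  TXT  "v=spf1 mx ip4:203.0.113.10 include:_spf.mail-provider.example -all"

Here "-all" asks receivers to reject mail from any other source, while "~all" would only mark such mail as suspicious (softfail).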
788,700 | I have been reading the Debian System Administrator's Handbook , and I came across this passage in the gateway section: ...Note that NAT is only relevant for IPv4 and its limited address space;
in IPv6, the wide availability of addresses greatly reduces the
usefulness of NAT by allowing all “internal” addresses to be directly
routable on the Internet (this does not imply that internal machines
are accessible, since intermediary firewalls can filter traffic). That got me thinking... With IPv6 there is still a private range. See: RFC4193 . Are companies really going to set up all their internal machines with public addresses? Is that how IPv6 is intended to work? | Is that how IPv6 is intended to work? In short, yes. One of the primary reasons for increasing the address space so drastically with IPv6 is to get rid of band-aid technologies like NAT and make network routing simpler. But don't confuse the concept of a public address and a publicly accessible host. There will still be "internal" servers that are not Internet accessible even though they have a public address. They'll be protected with firewalls just like they are with IPv4. But it will also be much easier to decide that today's internal-only server needs to open up a specific service to the internet tomorrow. Are companies really going to set up all their internal machines with public addresses? In my opinion, the smart ones will. But as you've probably noticed, it's going to take quite a while. | {
"source": [
"https://serverfault.com/questions/788700",
"https://serverfault.com",
"https://serverfault.com/users/210971/"
]
} |
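A small sketch of the "intermediary firewall" idea from the answer above, for a Linux router doing IPv6 forwarding. The interface names and the documentation prefix 2001:db8::/32 are assumptions for illustration only.

# Allow replies to connections initiated from inside; block unsolicited inbound traffic
ip6tables -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Explicitly open one internal host's HTTPS service to the Internet when needed
ip6tables -A FORWARD -i eth0 -o eth1 -p tcp -d 2001:db8:1::80 --dport 443 -j ACCEPT
# Everything else destined for the internal network is dropped
ip6tables -A FORWARD -i eth0 -o eth1 -j DROP

This keeps internal machines unreachable by default even though they carry globally routable addresses, and opening a service is a one-line policy change rather than a NAT rule.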
788,862 | In my work with servers I have come across configuration files where you must enter the address of an external server. I have seen some people use the server's IP address directly, but I have heard many recommendations to use a hostname / fully qualified domain name (FQDN) instead. Why should I use a hostname instead of the direct IP address? After all, if you use a hostname you need a local DNS server that links each hostname to an IP address. What are the trade-offs between using a hostname and an IP address? | Using an IP address ensures that you are not relying on a DNS server. It also has the benefit of preventing attacks through DNS spoofing. Using a FQDN instead of an IP address means that, if you were to migrate your service to a server with a different IP address, you would be able to simply change the record in DNS rather than try and find everywhere that the IP address is used. This is especially useful when you have many servers and services configured by multiple individuals. | {
"source": [
"https://serverfault.com/questions/788862",
"https://serverfault.com",
"https://serverfault.com/users/338860/"
]
} |
789,396 | I have a public key on a server (host) that I want to transfer to another server (target). The host server has a bunch of keys in its .ssh/ folder; I want to copy just one of them to the target server (it's not id_rsa.pub, so let's call it mykey.rsa.pub). Also, the target server has the host server's key (let's call it hostkey.rsa.pub) in .ssh/authorized_keys, for passwordless ssh. Is it possible to do something like this? ssh-copy-id mykey.rsa.pub -i hostkey.rsa.pub user@target | You can pass ssh options with -o : ssh-copy-id -i mykey.rsa.pub -o "IdentityFile hostkey.rsa" user@target | {
"source": [
"https://serverfault.com/questions/789396",
"https://serverfault.com",
"https://serverfault.com/users/364120/"
]
} |
789,601 | I am using docker-compose. Some commands like up -d service_name or start service_name return right away, and this is pretty useful if you don't want the running containers to depend on the state of the shell, like they do with a regular up service_name . One use case is running it from some kind of continuous integration/delivery server. But this way of running/starting services does not provide any feedback about the actual state of the service afterwards. The Docker Compose CLI reference for the up command does mention the relevant option, but, as of version 1.7.1 , it is mutually exclusive with -d : --abort-on-container-exit Stops all containers if any container was stopped.
*Incompatible with -d.* Can I somehow manually check that the container is indeed working and haven't stopped because of some error? | docker-compose ps -q <service_name> will display the container ID no matter it's running or not, as long as it was created. docker ps shows only those that are actually running. Let's combine these two commands: if [ -z `docker ps -q --no-trunc | grep $(docker-compose ps -q <service_name>)` ]; then
echo "No, it's not running."
else
echo "Yes, it's running."
fi docker ps shows short version of IDs by default, so we need to specify --no-trunc flag. UPDATE : It threw "grep usage" warning if the service was not running. Thanks to @Dzhuneyt, here's the updated answer. if [ -z `docker-compose ps -q <service_name>` ] || [ -z `docker ps -q --no-trunc | grep $(docker-compose ps -q <service_name>)` ]; then
echo "No, it's not running."
else
echo "Yes, it's running."
fi | {
"source": [
"https://serverfault.com/questions/789601",
"https://serverfault.com",
"https://serverfault.com/users/214542/"
]
} |
790,143 | I'm trying to automate the setup of UFW on an Ubuntu 16.04 instance. However, when I type sudo ufw enable I get prompted to enter yes or no. Is there a way to feed it a yes, or to set it to enable automatically without getting stuck at a prompt? | How about: $ echo "y" | sudo ufw enable | {
"source": [
"https://serverfault.com/questions/790143",
"https://serverfault.com",
"https://serverfault.com/users/48833/"
]
} |
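A short sketch of how the echo trick from the answer above might sit inside a provisioning script; the rules themselves are only examples, not part of the original question.

#!/bin/sh
# Non-interactive UFW setup sketch -- the rules below are placeholders
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
# Pipe "y" into the enable prompt so the script never blocks waiting for input
echo "y" | ufw enable
ufw status verbose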
790,168 | I have the following network topology: whenever I turn on my server, the local IP addresses (in the form 10.0.0.X) of the RADIUS server and the IIS server change, or sometimes the port is assigned to the svchost process of Windows. I am a beginner in the networking field. Somehow I was able to make this technology work, but the only problem is the changing IP addresses. I have to visit the client each day to reconfigure the IPs. Please tell me how to assign fixed/static IPs to these two pieces of server software. | How about: $ echo "y" | sudo ufw enable | {
"source": [
"https://serverfault.com/questions/790168",
"https://serverfault.com",
"https://serverfault.com/users/118386/"
]
} |
790,296 | I've installed Google-Authenticator on a CentOS 6.5 machine and configured certain users to provide OTP. While editing /etc/ssh/sshd_config I saw a directive " PermitRootLogin " which is commented out by default. I would like to set " PermitRootLogin no " but to still be able to ssh to the machine as root only from the local network. Is that possible? | Use the Match config parameter in /etc/ssh/sshd_config : # general config
PermitRootLogin no
# the following overrides the general config when conditions are met.
Match Address 192.168.0.*
PermitRootLogin yes See man sshd_config | {
"source": [
"https://serverfault.com/questions/790296",
"https://serverfault.com",
"https://serverfault.com/users/109833/"
]
} |
790,404 | I have been trying to add an exception to SELinux for Apache on port 5000, so I used the command: # semanage port -a -t http_port_t -p tcp 5000 But it returns the error, ValueError: Port tcp/5000 already defined I tried to check if this is so with the command: semanage port -l |grep 5000 which gave the output, http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000 As you can see, 5000 is not on the list. Is there anything obvious I am missing?
Thank you in advance for your effort. So I found that another service had already defined TCP port 5000. But replacing the -a option with -m (modify) added tcp port 5000 to http_port_t. | So I found that another service had already defined TCP port 5000. But replacing the -a option with -m (modify) added tcp port 5000 to http_port_t. So the command that worked was: # semanage port -m -t http_port_t -p tcp 5000 | {
"source": [
"https://serverfault.com/questions/790404",
"https://serverfault.com",
"https://serverfault.com/users/362301/"
]
} |
790,772 | Is this the correct way to set up a cron job for renewal of a Let's Encrypt cert in Apache2?
I use Ubuntu 16.04. @monthly letsencrypt renew && service apache2 reload | Monthly is not frequent enough. This script should run at least weekly, and preferably daily. Remember that certs don't get renewed unless they are near to expiration, and monthly could cause your existing certs to occasionally be expired already before they get renewed. The name of the program is certbot , which was renamed from letsencrypt . If you are still using letsencrypt , you need to update to the current version. Aside from those issues, it's about the same as my cron jobs. 43 6 * * * certbot renew --post-hook "systemctl reload nginx" Note: in 18.04 LTS the letsencrypt package has been (finally) renamed to certbot . It now includes a systemd timer which you can enable to schedule certbot renewals, with systemctl enable certbot.timer and systemctl start certbot.timer . However, Ubuntu did not provide a way to specify hooks. You'll need to set up an override for certbot.service to override ExecStart= with your desired command line, until Canonical fixes this. | {
"source": [
"https://serverfault.com/questions/790772",
"https://serverfault.com",
"https://serverfault.com/users/333652/"
]
} |
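Since the answer above mentions overriding ExecStart= for Ubuntu 18.04's certbot.service, here is a sketch of what such a systemd drop-in might look like. The certbot path and the Apache reload command are assumptions matching the question's setup, not something taken from the packaged unit.

# /etc/systemd/system/certbot.service.d/override.conf  (sketch)
[Service]
# Clear the packaged command first, then run renew with a post-hook that reloads Apache
ExecStart=
ExecStart=/usr/bin/certbot -q renew --post-hook "systemctl reload apache2"

After creating the drop-in, run systemctl daemon-reload so the override is picked up, and keep certbot.timer enabled so renewals are attempted on the daily schedule.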
790,882 | Correction : response time ( %D ) is μs not ms! 1 This doesn't change anything about the weirdness of this pattern but it means that it is practically way less devastating. Why is response time inversely correlated to request frequency? Shouldn't the server respond faster when it is less busy handling requests? Any suggestion how to make Apache "take advantage" of less load? This pattern is periodic. That means it will show up if impressions drop below about 200 requests per minute - which happens (due to natural user-activity) from late night to early morning. The requests are very simple POSTs sending a JSON of less than 1000 characters - this JSON is stored (appended to a text file) - that's it. The reply is just "-". The data shown in the graphs was logged with Apache itself: LogFormat "%{%Y-%m-%d+%H:%M:%S}t %k %D %I %O" performance
CustomLog "/var/log/apache2/performance.log" performance | This is common behavior in data centers. The times your response time is slow corresponds to what is commonly called the Batch Window. This is a period of time when user activity is expected to be low and batch processes can be run. Backups are also done during this period. These activities can strain the resources of server and networks causing performance issues such as you see. There are a few resources that can cause issues: High CPU load. This can cause Apache to wait for a time slice to process the request. High memory usage. This can flush buffers that enable Apache to serve resources without reading them from disk. It can also cause paging/swapping of Apache workers. High disk activity. This can cause disk I/O activity to be queued with corresponding delays in serving content. High network activity. This can cause packets to be queued for transmission, increase retries and otherwise degrade service. I use sar to investigate issued like this. atsar can be used gather sar data into daily data files. These can be examined to see what the system behavior is like during the daytime when performance is normal, and overnight when performance is variable. If you are monitoring the system with munin or some other system that gathers and graphs resource utilization, you may find some indicators there. I still find sar more precise. There are tools like nice and ionice that can be applied to batch processes to minimize their impact. They are only effective for CPU or I/O issues. They are unlikely to resolve issues with Memory or Network activity. Moving backup activity to a separate network can reduce network contention. Some backup software can be configured to limit the bandwidth that will be used. This could resolve or reduce network contention issues. Depending on how the batch processes are triggered you may be able to limit the number of batch processes running in parallel. This may actually improve the performance of the batch processes as they are likely experiencing the same resource contention. | {
"source": [
"https://serverfault.com/questions/790882",
"https://serverfault.com",
"https://serverfault.com/users/75233/"
]
} |
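A few illustrative commands for the investigation approach described in the answer above. Package names, sar data paths and the batch-job path vary by distribution; the ones below assume a Debian/Ubuntu layout and a hypothetical nightly-backup script.

# Install sysstat and enable periodic collection (set ENABLED="true" in /etc/default/sysstat on Debian/Ubuntu)
apt-get install sysstat
# Review yesterday's figures around the slow window
sar -u -f /var/log/sysstat/sa$(date -d yesterday +%d)   # CPU utilisation
sar -r -f /var/log/sysstat/sa$(date -d yesterday +%d)   # memory / paging
sar -b -f /var/log/sysstat/sa$(date -d yesterday +%d)   # disk I/O
# Run a nightly batch job with reduced CPU and disk priority
nice -n 19 ionice -c2 -n7 /usr/local/bin/nightly-backup.sh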
790,900 | Trying to replace all the forwarders with new ones but I can't seem to the pattern matching to work. I can't see the mistake for the life of me: sudo sed -i .bak "s/forwarders {[^]]*}/forwarders { 127.0.0.1 }/g" /etc/named/named.conf | This is common behavior in data centers. The times your response time is slow corresponds to what is commonly called the Batch Window. This is a period of time when user activity is expected to be low and batch processes can be run. Backups are also done during this period. These activities can strain the resources of server and networks causing performance issues such as you see. There are a few resources that can cause issues: High CPU load. This can cause Apache to wait for a time slice to process the request. High memory usage. This can flush buffers that enable Apache to serve resources without reading them from disk. It can also cause paging/swapping of Apache workers. High disk activity. This can cause disk I/O activity to be queued with corresponding delays in serving content. High network activity. This can cause packets to be queued for transmission, increase retries and otherwise degrade service. I use sar to investigate issued like this. atsar can be used gather sar data into daily data files. These can be examined to see what the system behavior is like during the daytime when performance is normal, and overnight when performance is variable. If you are monitoring the system with munin or some other system that gathers and graphs resource utilization, you may find some indicators there. I still find sar more precise. There are tools like nice and ionice that can be applied to batch processes to minimize their impact. They are only effective for CPU or I/O issues. They are unlikely to resolve issues with Memory or Network activity. Moving backup activity to a separate network can reduce network contention. Some backup software can be configured to limit the bandwidth that will be used. This could resolve or reduce network contention issues. Depending on how the batch processes are triggered you may be able to limit the number of batch processes running in parallel. This may actually improve the performance of the batch processes as they are likely experiencing the same resource contention. | {
"source": [
"https://serverfault.com/questions/790900",
"https://serverfault.com",
"https://serverfault.com/users/47674/"
]
} |
791,019 | One of our users has been compiling their own program within their home directory. Normally we don't mind, but this particular program has a memory leak and eats into swap. We have told this user many times not to run the program, and yet she wouldn't listen. Is there a simple way of blocking a certain program from running? | Two ways: Use limits.conf to assign the maximum allotted memory per process for that user Create a cgroup for that user in order to limit their total memory usage More details here: https://unix.stackexchange.com/questions/34334/how-to-create-a-user-with-limited-ram-usage | {
"source": [
"https://serverfault.com/questions/791019",
"https://serverfault.com",
"https://serverfault.com/users/135490/"
]
} |
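To make the two options in the answer above concrete, here is a sketch; the username, UID and the 1 GiB cap are placeholders.

# Option 1: per-process address-space cap via /etc/security/limits.conf (value in KiB, applies to new sessions)
username  hard  as  1048576

# Option 2: total cap for that user's session cgroup via a systemd slice drop-in
# /etc/systemd/system/user-1001.slice.d/override.conf
[Slice]
MemoryMax=1G

MemoryMax= applies on cgroup v2 systems; older cgroup v1 setups use MemoryLimit= instead. Reload systemd (systemctl daemon-reload) after adding the drop-in.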
791,027 | I'm using 503 HTTP Status and a coming soon page for maintenance mode. Is there any way to get HAproxy serving server-side generated 503 page instead of the default blank/unavailable page? I'm using Openshift + HAproxy + Cloudflare + PHP. Thanks in advance. Haproxy config (some comments removed): #---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:5000
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
use_backend static if url_static
default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
server app3 127.0.0.1:5003 check
server app4 127.0.0.1:5004 check | Two ways: Use limits.conf to assign the maximum allotted memory per process for that user Create a cgroup for that user in order to limit their total memory usage More details here: https://unix.stackexchange.com/questions/34334/how-to-create-a-user-with-limited-ram-usage | {
"source": [
"https://serverfault.com/questions/791027",
"https://serverfault.com",
"https://serverfault.com/users/366338/"
]
} |
791,037 | I was trying to install uTorrent on my PHP site and mid-process I was kicked out of SSH and the website went down. libssl.so.1.0.0 and libcrypto.so.1.0.0 were required for uTorrent so to downgrade, I did the following: wget https://www.openssl.org/source/openssl-1.0.0r.tar.gz
cd openssl-1.0.0r
./config shared && make Installation went fine and it replaced my previous version 1.0.1e. But when I ldd, the list shows "No version could be found" for libssl.so.1.0.0 and libcrypto.so.1.0.0. I proceeded to delete both of them from my server hoping to revert the changes and immediately got kicked out of SSH and the site went down. Now I can't connect via SSH, only way is via KVM provided by my host. All commands ie. yum rpm wget etc. returns the following error: error while loading shared libraries libcrypto.so.10: file too short My server is unmanaged, therefore I don't think I have the option of manually reinstalling openSSL's packages on a USB... | Two ways: Use limits.conf to assign the maximum allotted memory per process for that user Create a cgroup for that user in order to limit their total memory usage More details here: https://unix.stackexchange.com/questions/34334/how-to-create-a-user-with-limited-ram-usage | {
"source": [
"https://serverfault.com/questions/791037",
"https://serverfault.com",
"https://serverfault.com/users/366352/"
]
} |
791,362 | We just got a call from a US telephone number (001-, didn't get the rest of it) with an automated voice, stating that someone had sent spam mails from a server located in Berlin. I didn't take the call, my colleague just got the first part of the number and a firm name (Something with ~computer network~ in it)
It didn't seem like a scam or spam; the voice was just informative, with no chance to interact. We did in fact send our last newsletter (1.5 months ago) through a firm located in Berlin. But all recipients opted in to it and would not regard it as spam. We do this ~4 times/year, but it's the first time we didn't send it through our own mail server. So I would like to know who contacted us about it from the US, and whether this is a common way to report spam?
I would definitely prefer an sample e-mail sample to check some things for myself. | That is not a normal way to report spam. In fact, it's utterly bizarre. The generally accepted way to report spam is through the abuse@ email contact of the owner of the IP address which sent the email to you. In the case of email you sent through a firm in Berlin, such email would be directed either to them, or to their Internet service provider or datacenter from which they sent the mail. You would not see it until the firm forwarded it to you for appropriate action (e.g. unsubscribing the user). The other common way spam gets reported is through email feedback loops from large email service providers (e.g. Gmail, Yahoo, Hotmail, Yandex, etc.). When a user clicks the Spam/Junk button in these services, a report is generated. You would have had to opt in to receive these, and you would generally also get them delivered by email. In the old days, before abuse@ was a standardized thing, and before spam was a serious problem, we might pick up the phone and call the phone number listed in the whois record, but (1) it would be a human being calling, not an automated recording, and (2) that hasn't really been done since the late 1990s except for extremely unusual situations. And we'd end up having to forward a copy of the email anyway. I have no idea why you received an automated call, but if they weren't willing to send an email in the usual way, and weren't willing to have a human being talk to you, then I don't see why you should be expected to waste any time on it. | {
"source": [
"https://serverfault.com/questions/791362",
"https://serverfault.com",
"https://serverfault.com/users/215913/"
]
} |
791,713 | I'm guessing there's a difference between my PHP time and the server time. When I check the current time in PHP, it's showing that MST is being used. However, cron jobs aren't running at the correct time. How can I check to see what timezone the server itself is using, not what PHP is set to use? | Cron uses the server's configured timezone (UTC by default), which you can check by typing the date command in a terminal. All timezones are defined in the /usr/share/zoneinfo directory: cd /usr/share/zoneinfo/ When you cd into this directory you will see the names of the different regions and their timezones. Command to change the server timezone: sudo ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime If you live in America > LA you can change your timezone using the above command; change the region and city according to your requirements. Command to check the date and time: date Set the time and date from the command line: date -s "19 APR 2012 11:14:00" | {
"source": [
"https://serverfault.com/questions/791713",
"https://serverfault.com",
"https://serverfault.com/users/366948/"
]
} |
792,486 | Connection to one of my servers using ssh takes more than 20 seconds to initiate. This is not related to LAN or WAN conditions, since connection to itself takes the same (ssh localhost). After connection is finally establised, it is super fast to interract with the server. Using -vvv shows that the connection is stuck after saying "pledge: network". At this point, authentication (here using key) is already done, as visible here : ...
debug1: Authentication succeeded (publickey).
Authenticated to myserver.mydomain.com ([xx.xx.xx.xx]:22).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: pledge: network (...stuck here for 15 to 25 seconds...) debug1: client_input_global_request: rtype [email protected] want_reply 0
debug2: callback start
debug2: fd 3 setting TCP_NODELAY
debug2: client_session2_setup: id 0
... The server is Ubuntu 16.04. It already happened to me in the past with another server (Ubuntu 12.04); I never found the solution and the problem disappeared after a while... sshd_config is the default one provided by Ubuntu. So far I have tried: using -o GSSAPIAuthentication=no in the ssh command; using a password instead of a key; using UsePrivilegeSeparation no instead of yes in sshd_config | This is probably an issue with D-Bus and systemd . If the dbus service is restarted for some reason, you will also need to restart systemd-logind . You can check if this is the issue by opening the ssh daemon log (on Ubuntu it should be /var/log/auth.log ) and checking if it has these lines: sshd[2721]: pam_systemd(sshd:session): Failed to create session: Connection timed out If yes, just restart the systemd-logind service: systemctl restart systemd-logind I had this same issue on CentOS 7, because the messagebus was restarted (which is how the D-Bus service is called on CentOS). | {
"source": [
"https://serverfault.com/questions/792486",
"https://serverfault.com",
"https://serverfault.com/users/366046/"
]
} |
792,572 | I'm thinking about going with a security vendor for hosted sites on my VPS, and I'm having a hard time understanding something. (Yes I know this is OSI terminology, and the sites in question are basic dental and medical practice websites with no eCommerce and no private info (SSN, etc). Their basic plan has a Layer 7 firewall (and I get that that's HTTP, HTTPs, etc), but their advanced plan has layer 3,4 coverage as well (and I get that that is IP and TCP/UDP). 1) What I don't understand is the big picture -- does a Layer 7-only firewall ignore problems with Layer 3/4? Is packet inspection skipped? 2) And if so, how necessary is a layer 3/4 firewall if you already have a layer 7 in place? If there's a book or resource I can read to understand this that would also be great. I want to understand what I'm doing before I make a purchase! | It sounds like you're getting a bit of misleading jargon. The technical definitions for these types of firewalls are: Layer 3 firewalls (i.e. packet filtering firewalls ) filter traffic based solely on source/destination IP, port, and protocol. Layer 4 firewalls do the above, plus add the ability to track active network connections, and allow/deny traffic based on the state of those sessions (i.e. stateful packet inspection ). Layer 7 firewalls (i.e. application gateways ) can do all of the above, plus include the ability to intelligently inspect the contents of those network packets. For instance, a Layer 7 firewall could deny all HTTP POST requests from Chinese IP addresses. This level of granularity comes at a performance cost, though. Since the proper definitions don't line up with their pricing scheme, I think they're using Layer 7 as a (technically incorrect) reference to a software firewall running on your VPS. Think along the lines of iptables or Windows Firewall . Should you pony up the extra fees, they'll put your VPS behind a proper network firewall. Maybe. If they can't be bothered to use proper terminology when describing their VPS solution to potential customers, I'd question their competence in other areas as well. | {
"source": [
"https://serverfault.com/questions/792572",
"https://serverfault.com",
"https://serverfault.com/users/367766/"
]
} |
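To make the distinction in the answer above concrete, compare a layer-3/4 rule with a layer-7 rule. Both snippets are generic illustrations, not the vendor's product, and the addresses and paths are placeholders.

# Layer 3/4: stateful packet filter -- decisions based only on IPs, ports, protocol and connection state
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

# Layer 7: application-aware rule in an nginx front end -- inspects the HTTP request itself
# (only allow non-GET methods on the admin path from a trusted network)
location /admin/ {
    limit_except GET {
        allow 192.0.2.0/24;
        deny  all;
    }
}

The first rule has no idea what is inside the packets it accepts; the second can only exist in something that parses HTTP, which is exactly the extra work (and cost) a layer-7 firewall takes on.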
792,996 | My company's product is essentially a Linux box (Ubuntu) sitting in somebody else's network running our software. Up to now we had less than 25 boxes in the wild and used TeamViewer to manage them. We're now about to ship 1000 of these boxes and TeamViewer is no longer an option. My job is to figure out a way of accessing these boxes and updating the software on them . This solution should be able to punch through firewalls and what have you. I've considered: 1. Home grown solution (e.g. a Linux service) that establishes an SSH reverse tunnel to a server in the cloud, and another service in the cloud that keeps track of those & lets you connect to them. This is obviously labour intensive and frankly speaking feels like reinventing the wheel since so many other companies must have already run across this problem. Even so, I'm not sure we'll do a great job at it. 2. Tools such as puppet, chef or OpenVPN I tried to read as much as possible but I can't seem to penetrate enough through the marketing speak to understand the obvious choice to go with. No one else except us needs to connect to these boxes. Is there anyone with relevant experience that can give me some pointers? | 2022 June - Update If all you need is remote access to the machine, two newer approaches (if you're comfortable with AWS) would be to use one of: AWS SSM AWS VPN That said, I would still opt for a pull mechanism for ensuring updates are deployed. You ideally want to use these direct shells only in case of an emergency. Otherwise, you will (inevitably) end up with a Frankenstein fleet of servers, each with their own funny configuration tweaks that were done manually by someone in a pinch, without documentation. Pull updates, don't push As you scale, it's going to become unfeasible to do push updates to all your products. You'll have to track every single customer, who might each have a different firewall configuration. You'll have to create incoming connections through the customer's firewall, which would require port-forwarding or some other similar mechanism. This is a security risk to your customers Instead, have your products 'pull' their updates periodically, and then you can add extra capacity server-side as you grow. How? This problem has already been solved, as you suggested. Here's several approaches I can think of. using apt : Use the built-in apt system with a custom PPA and sources list. How do I setup a PPA? Con: Unless you use a public hosting service like launchpad, Setting up your own apt PPA + packaging system is not for the faint of heart. using ssh : Generate an SSH public key for each product, and then add that device's key to your update servers. Then, just have your software rsync / scp the files required. Con: Have to track (and backup!) all the public keys for each product you send out. Pro : More secure than a raw download, since the only devices that can access the updates would be those with the public key installed. raw download + signature check : Post a signed update file somewhere (Amazon S3, FTP server, etc) Your product periodically checks for the update file to be changed, and then downloads / verifies the signature. Con : Depending on how you deploy this, the files may be publicly accessible (which may make your product easier to reverse engineer and hack) ansible : Ansible is a great tool for managing system configurations. It's in the realm of puppet / chef, but is agentless (uses python) and designed to be idempotent. 
If deploying your software would require a complicated bash script, I'd use a tool like this to make it less complicated to perform your updates. Of course, there are other ways to do this.. But it brings me to an important point. Sign / validate your updates! No matter what you do, it's imperative that you have a mechanism to ensure that your update hasn't been tampered with. A malicious user could impersonate your update server in any of the above configurations. If you don't validate your update, your box is much easier to hack and get into. A good way to do this is to sign your update files. You'll have to maintain a certificate (or pay someone to do so), but you'll be able to install your fingerprint on each of your devices before you ship them out so that they can reject updates that have been tampered with. Physical Security Of course, if someone has physical access to the customer's deployment, they could easily take over the server. But at least they can't attack the other deployments! Physical security is likely the responsibiltiy of your customer. If you would for a moment, imagine what would happen if you used a
large OpenVPN network for updates... They could then use the
compromised server to attack every instance on the VPN Security Whatever you do, security needs to be built in from the beginning. Don't cut corners here - You'll regret it in the end if you do. Fully securing this update system is out of scope of this post, and I strongly recommend hiring a consultant if you or someone on your team isn't knowledgeable in this area. It's worth every penny. | {
"source": [
"https://serverfault.com/questions/792996",
"https://serverfault.com",
"https://serverfault.com/users/368119/"
]
} |
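A minimal sketch of the "raw download + signature check" option described in the answer above, meant to be run periodically from cron on each box. The URL, install path and key handling are assumptions for illustration only; the vendor's public GPG key is presumed to have been installed before the device shipped.

#!/bin/sh
# Pull-based updater sketch: fetch, verify signature, then apply
set -e
UPDATE_URL="https://updates.example.com/product/latest.tar.gz"   # placeholder
WORKDIR=$(mktemp -d)

curl -fsSL -o "$WORKDIR/update.tar.gz"     "$UPDATE_URL"
curl -fsSL -o "$WORKDIR/update.tar.gz.sig" "$UPDATE_URL.sig"

# Refuse anything that was not signed by the vendor's key
if gpg --verify "$WORKDIR/update.tar.gz.sig" "$WORKDIR/update.tar.gz"; then
    tar -xzf "$WORKDIR/update.tar.gz" -C /opt/product   # placeholder install step
else
    echo "signature check failed, refusing to install" >&2
    exit 1
fi
rm -rf "$WORKDIR"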
793,058 | I am using docker-compose to create mysql container.
I get the host IP 172.21.0.2.
But when I connect to MySQL, I get an error. My docker-compose.yml :
services:
### Mysql container
mysql:
image: mysql:latest
ports:
- "3306:3306"
volumes:
- /var/lib/mysql:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: test_db
MYSQL_USER: test
MYSQL_PASSWORD: test_pass Get my host IP docker inspect db_mysql_1 | grep IPAddress "IPAddress": "172.21.0.2", Access mysql: mysql -h 172.21.0.2 -P 3306 -u root -proot . ERROR 1130 (HY000): Host '172.21.0.1' is not allowed to connect to this MySQL server How can I connect to mysql container? | You can pass an extra environment variable when starting the MySQL container MYSQL_ROOT_HOST=<ip> this will create a root user with permission to login from given IP address. In case where you want to allow login from any IP you can specify MYSQL_ROOT_HOST=% . This will work only for a newly created containers. When spinning new container: docker run --name some-mysql -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest In compose file it would be: version: '2'
services:
### Mysql container
mysql:
image: mysql:latest
ports:
- "3306:3306"
volumes:
- /var/lib/mysql:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: test_db
MYSQL_USER: test
MYSQL_PASSWORD: test_pass
MYSQL_ROOT_HOST: '%' # needs to be enclosed with quotes | {
"source": [
"https://serverfault.com/questions/793058",
"https://serverfault.com",
"https://serverfault.com/users/368174/"
]
} |
793,295 | Windows 8 / 8.1 / 10 has this feature called "Fast Startup" (or "fast boot", "hybrid statup", "hybrid shutdown", and so on...) which doesn't actually shut down the computer when you tell it to do so, instead putting it in a sort of hybernation, in order to speed up boot time. Although this might seem nice at first view, it has several known and ugly side effects: It can seriously screw up on some systems (possibly when using old/incompatible drivers or BIOSes), resulting in a system crash at boot time and a subsequent forced full boot (this I witnessed personally on several different systems... and good luck if you are also using mirrored dynamic disks, which will always undergo a full resync after a system crash). It does hell to the processing of some group policies, which require an actual system restart in order to be applied. Last but not least, it has been known to render Wake-On-Lan unusable; this is the problem I'm currently facing after an upgrade to Windows 10 of several Windows 7 PCs which used to WOL quite fine, and now just don't anymore. For these and other reasons, I'd like to be able to manage Fast Startup using Group Policies; however, the only policy I could find about this ( Computer Configuration\Policies\Administrative Templates\System\Shutdown\Require use of fast startup ) can only be used to force the use of Fast Startup, but not to disable it: its description explicitly states that if you disable or do not configure this policy setting, the local setting is used . Thus, my question: how can I disable Fast Startup using a group policy? | It looks like there is no Administrative Template for managing this setting; as documented, Computer Configuration\Policies\Administrative Templates\System\Shutdown\Require use of fast startup can only be used to enforce it, not to disable it (WTF?!? It's already enabled by default... they could at least have gone a bit further and turn this setting into a true on/off switch!). The only available way to disable Fast Startup (outside of using the GUI) is by setting the following Registry key to 0: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Power\HiberbootEnabled This can be done using Group Policy Preferences and it effectively disables Fast Startup; of course, setting it to 1 would instead enable it. And yes, disabling Fast Startup fixes the problem of Wake-On-Lan not working. | {
"source": [
"https://serverfault.com/questions/793295",
"https://serverfault.com",
"https://serverfault.com/users/6352/"
]
} |
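For reference, the registry value named in the answer above can also be set from an elevated prompt or a startup script, which is equivalent to what the Group Policy Preference deploys:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f

or, in PowerShell:

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Power' -Name HiberbootEnabled -Value 0 -Type DWord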
793,550 | I've seen people use excessive quotes : add_header 'Access-Control-Allow-Origin' '*'; I've seen people use no quotes : add_header Access-Control-Allow-Origin *; Both work fine as far as I know, so when do you actually have to use quotes? | The exact answer is "never". You can either quote or \ -escape some special characters like " " or ";" in strings (characters that would make the meaning of a statement ambiguous), so add_header X-MyHeader "Test String;"; would work like add_header X-MyHeader Test\ String\;; In reality: Just use quotes :) Edit: As some people love to nitpick: The not necessarily complete list of characters that can make a statement ambiguous is according to my understanding of the nginx config syntax: <space> " ' { } ; $ \ and it might be necessary to escape $ and \ even in quoted strings to avoid variable expansion. Unfortunately, I can't find a complete and authoritative list of such characters in the docs. | {
"source": [
"https://serverfault.com/questions/793550",
"https://serverfault.com",
"https://serverfault.com/users/61246/"
]
} |
793,577 | There seems to be some script in the server which is executing as apache user and sending mails. Looking at ps aux output we find that sendmail executable is executed with apache user but we are not able to find specific script which is doing this. What is the ideal way to deal with this kind of situation ? | The exact answer is "never". You can either quote or \ -escape some special characters like " " or ";" in strings (characters that would make the meaning of a statement ambiguous), so add_header X-MyHeader "Test String;"; would work like add_header X-MyHeader Test\ String\;; In reality: Just use quotes :) Edit: As some people love to nitpick: The not necessarily complete list of characters that can make a statement ambiguous is according to my understanding of the nginx config syntax: <space> " ' { } ; $ \ and it might be necessary to escape $ and \ even in quoted strings to avoid variable expansion. Unfortunately, I can't find a complete and authoritative list of such characters in the docs. | {
"source": [
"https://serverfault.com/questions/793577",
"https://serverfault.com",
"https://serverfault.com/users/314228/"
]
} |
793,581 | Wondering if anyone else has done this.. I have two virtualmin servers, web-1 and web-2. I have configured HAProxy on our PFSense firewall to correctly intercept web-(1|2).domain.com and redirect to the correct internal IP on port 10000. web-1.domain.com -> 10.10.10.10:10000
web-2.domain com -> 10.10.10.20:10000 FrontEnd has an SSL redirect and SSL offloading, and the backends are enabled for SSL - this all works fine, and I reach the login page using web-1.domain.com The problem I have is upon logging in to virtualmin - the login script redirects to https://web-1.domain.com:10000/?virtualmin which is blocked by the firewall (I'd rather not have these common ports exposed, hence using HAProxy instead of NAT). If I re-enter web-1.domain.com into the address bar, it redirects to https://web-1.domain.com/?virtualmin and I can then access the backend correctly. Is there a method of removing this rewrite rule from virtualmin/webmin to skip this manual step? Thanks! | The exact answer is "never". You can either quote or \ -escape some special characters like " " or ";" in strings (characters that would make the meaning of a statement ambiguous), so add_header X-MyHeader "Test String;"; would work like add_header X-MyHeader Test\ String\;; In reality: Just use quotes :) Edit: As some people love to nitpick: The not necessarily complete list of characters that can make a statement ambiguous is according to my understanding of the nginx config syntax: <space> " ' { } ; $ \ and it might be necessary to escape $ and \ even in quoted strings to avoid variable expansion. Unfortunately, I can't find a complete and authoritative list of such characters in the docs. | {
"source": [
"https://serverfault.com/questions/793581",
"https://serverfault.com",
"https://serverfault.com/users/189482/"
]
} |
794,783 | I'm following this official Jenkins guide in order to become familiar with the Jenkins Pipeline configuration.
One of the steps there is to create a dumb slave and set it to "Launch slave agents via Java Web Start" but for some reason this option is missing from my configuration, the only other options I have are: I've made sure that /usr/bin/javaws exists on the machine. Any idea how to add it to Jenkins New Node configuration? | This question was asked elsewhere: https://stackoverflow.com/a/38740924 You have to enable the TCP port of JNLP agents to enable this option for slaves. Manage Jenkins > Configure Global Security > TCP port for JNLP agents | {
"source": [
"https://serverfault.com/questions/794783",
"https://serverfault.com",
"https://serverfault.com/users/109833/"
]
} |
796,043 | I have a dead symlink named dead_symlink under the directory /usr/local/bin . When Ansible checks the file it reports that it exists: - stat: "path=/usr/local/bin/dead_symlink"
register: dead_symlink_bin
- debug: var=dead_symlink_bin.stat.exists But when I try to remove it, it reports 'ok' but nothing is happening (the symlink is still there) - name: Remove symlink
file:
path: "path=/usr/local/bin/dead_symlink"
state: absent What am I doing wrong? | You have a synatx error in your task. It should be: - name: Remove symlink
file:
path: "/usr/local/bin/dead_symlink"
state: absent Ansible is probably looking for the path path=/usr/local/bin/dead_symlink and not for /usr/local/bin/dead_symlink . | {
"source": [
"https://serverfault.com/questions/796043",
"https://serverfault.com",
"https://serverfault.com/users/243715/"
]
} |
796,225 | Linux environment: Debian, Ubuntu, CentOS. Goal: test a monitoring program that sets alarms and triggers different alarms at different CPU percentages, e.g. (30-50%), (51-70%) and >90%. So I need a CPU stresser that can simulate a specific CPU percentage per core. stress-ng looks like the most advanced. According to its documentation http://kernel.ubuntu.com/~cking/stress-ng/ it is possible to set load values between 0 and 100%: -l P --cpu-load P load CPU by P %, 0=sleep, 100=full load (see -c) stress-ng -c 1 -p 30 stress-ng: info: [12650] dispatching hogs: 0
I/O-Sync, 1 CPU, 0 VM-mmap, 0 HDD-Write, 0 Fork, 0 Context-switch, 30
Pipe, 0 Cache, 0 Socket, 0 Yield, 0 Fallocate, 0 Flock, 0 Affinity, 0
Timer, 0 Dentry, 0 Urandom, 0 Float, 0 Int, 0 Semaphore, 0 Open, 0
SigQueue, 0 Poll Undesired result: But it doesn't seem to work; all cores are hogged at 100%. Any ideas how to achieve this? | I designed stress-ng so that one can specify 0 for the number of stressor processes to match the number of on-line CPUs, so to load each CPU at say 40%, use stress-ng -c 0 -l 40 | {
"source": [
"https://serverfault.com/questions/796225",
"https://serverfault.com",
"https://serverfault.com/users/196732/"
]
} |
796,330 | I am trying to do a local rsync, from a mount point to a local folder. I need to set the owner, group, and permissions to specific settings. Here is what I am using: rsync -rtlv --chown=process:sambausers --chmod=D770,F770 /mnt/owncloud_mnt/Engineering/ /Drive_D/docs/Engineering_test I end up with permissions 760 on both directories and files, and root:root on ownership (rsync is run as root). What am I doing wrong? TIA | rsync needs to be told that you want to set the permissions and owner/group information. It would be logical to assume that having --chmod or --chown would tell that but they don't. For permissions to propagate you need the --perms or -p flag and for owner/group you need --owner --group or -og flags for the owner/group/permission information to be set. The documentation is a bit unclearly written so it isn't clear how the permissions are handled with different combinations or if existing files are affected. | {
"source": [
"https://serverfault.com/questions/796330",
"https://serverfault.com",
"https://serverfault.com/users/364829/"
]
} |
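Putting the answer above into practice, the question's command would become something like the following untested sketch, reusing the same paths and adding the flags the answer says are required (-p for permissions, -o/-g for owner and group):

# -p keeps the --chmod result; -o and -g (together with --chown) set owner and group
rsync -rtlv -p -o -g --chown=process:sambausers --chmod=D770,F770 \
    /mnt/owncloud_mnt/Engineering/ /Drive_D/docs/Engineering_test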
796,665 | Let's say we're using ext4 (with dir_index enabled) to host around 3M files (with an average of 750KB size) and we need to decide what folder scheme we're going to use. In the first solution , we apply a hash function to the file and use two levels folder (being 1 character for the first level and 2 characters to second level): therefore being the filex.for hash equals to abcde1234 , we'll store it on /path/ a/bc /abcde1234-filex.for. In the second solution , we apply a hash function to the file and use two levels folder (being 2 characters for the first level and 2 characters to second level): therefore being the filex.for hash equals to abcde1234 , we'll store it on /path/ ab/de /abcde1234-filex.for. For the first solution we'll have the following scheme /path/[16 folders]/[256 folders] with an average of 732 files per folder (the last folder, where the file will reside). While on the second solution we'll have /path/[256 folders]/[256 folders] with an average of 45 files per folder . Considering we're going to write/unlink/read files ( but mostly read ) from this scheme a lot (basically the nginx caching system), does it maters, in a performance sense, if we chose one or other solution? Also, what are the tools we could use to check/test this setup? | The reason one would create this sort of directory structure is that filesystems must locate a file within a directory, and the larger the directory is, the slower that operation. How much slower depends on the filesystem design. The ext4 filesystem uses a B-tree to store directory entries. A lookup on this table is expected to take O(log n) time, which most of the time is less than the naive linear table that ext3 and previous filesystems used (and when it isn't, the directory is too small for it to really matter). The XFS filesystem uses a B+tree instead. The advantage of this over a hash table or B-tree is that any node may have multiple children b , where in XFS b varies and can be as high as 254 (or 19 for the root node; and these numbers may be out of date). This gives you a time complexity of O(log b n) , a vast improvement. Either of these filesystems can handle tens of thousands of files in a single directory, with XFS being significantly faster than ext4 on a directory with the same number of inodes. But you probably don't want a single directory with 3M inodes, as even with a B+tree the lookup can take some time. This is what led to creating directories in this manner in the first place. As for your proposed structures, the first option you gave is exactly what is shown in nginx examples. It will perform well on either filesystem, though XFS will still have a bit of an advantage. The second option may perform slightly better or slightly worse, but it will probably be pretty close, even on benchmarks. | {
"source": [
"https://serverfault.com/questions/796665",
"https://serverfault.com",
"https://serverfault.com/users/207382/"
]
} |
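A small sketch of how the first two-level scheme discussed above (1 + 2 hex characters) can be derived from a hash. It is purely illustrative: it hashes the file name for brevity, whereas the question hashes the file itself, and the /path prefix is a placeholder.

#!/bin/sh
# Map a file name to /path/a/bc/<hash>-<name> using the first 3 hex chars of its MD5
name="filex.for"
hash=$(printf '%s' "$name" | md5sum | cut -c1-32)
l1=$(printf '%s' "$hash" | cut -c1)      # first level: 16 directories
l2=$(printf '%s' "$hash" | cut -c2-3)    # second level: 256 directories
mkdir -p "/path/$l1/$l2"
echo "/path/$l1/$l2/$hash-$name"

This keeps any single directory small enough that the per-directory lookup cost described in the answer stays negligible on either filesystem.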
796,684 | During installation of openstack on Ubuntu server 14.04 x64, after I issue the following commands: sudo add-apt-repository ppa:cloud-installer/stable
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt-get install openstack
sudo openstack-install I get error: Top-level container OS did not initialize correctly How can I solve it to install openstack correctly? | The reason one would create this sort of directory structure is that filesystems must locate a file within a directory, and the larger the directory is, the slower that operation. How much slower depends on the filesystem design. The ext4 filesystem uses a B-tree to store directory entries. A lookup on this table is expected to take O(log n) time, which most of the time is less than the naive linear table that ext3 and previous filesystems used (and when it isn't, the directory is too small for it to really matter). The XFS filesystem uses a B+tree instead. The advantage of this over a hash table or B-tree is that any node may have multiple children b , where in XFS b varies and can be as high as 254 (or 19 for the root node; and these numbers may be out of date). This gives you a time complexity of O(log b n) , a vast improvement. Either of these filesystems can handle tens of thousands of files in a single directory, with XFS being significantly faster than ext4 on a directory with the same number of inodes. But you probably don't want a single directory with 3M inodes, as even with a B+tree the lookup can take some time. This is what led to creating directories in this manner in the first place. As for your proposed structures, the first option you gave is exactly what is shown in nginx examples. It will perform well on either filesystem, though XFS will still have a bit of an advantage. The second option may perform slightly better or slightly worse, but it will probably be pretty close, even on benchmarks. | {
"source": [
"https://serverfault.com/questions/796684",
"https://serverfault.com",
"https://serverfault.com/users/282799/"
]
} |
796,762 | I want to create a docker image on top of the mysql one that already contains the necessary schema for my app. I tried adding lines to the Dockerfile that will import my schema as a sql file. I did so as such (my Dockerfile): FROM mysql
ENV MYSQL_ROOT_PASSWORD="bagabu"
ENV MYSQL_DATABASE="imhere"
ADD imhere.sql /tmp/imhere.sql
RUN "mysql -u root --password="bagabu" imhere < /tmp/imhere.sql" To my understanding, that didn't work because the mysql docker image does not contain a mysql client (best practices state "don't add things just because they will be nice to have") (am I wrong about this?) what might be a good way to do this? I have had a few things in mind, but they all seem like messy workarounds. install the mysql client, do what I have to do with it, then remove/purge it. copy the mysql client binary to the image, do what I have to do, then remove it. Create the schema in another sql server and copy the db file themselves directly (this seems very messy and sounds to me like a contaminated pool of problems) Any suggestions? Hopefully in a way that will be easy to maintain later and maybe conform with the best practices as well? | I had to do this for tests purposes. Here's how i did by leveraging the actual MySQL/MariaDB images on dockerhub and the multi-stage build: FROM mariadb:latest as builder
# That file does the DB initialization but also runs mysql daemon, by removing the last line it will only init
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for intialization
ENV MYSQL_ROOT_PASSWORD=root
COPY setup.sql /docker-entrypoint-initdb.d/
# Need to change the datadir to something else that /var/lib/mysql because the parent docker file defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db", "--aria-log-dir-path", "/initialized-db"]
FROM mariadb:latest
COPY --from=builder /initialized-db /var/lib/mysql Full working example here : https://github.com/lindycoder/prepopulated-mysql-container-example | {
"source": [
"https://serverfault.com/questions/796762",
"https://serverfault.com",
"https://serverfault.com/users/145823/"
]
} |
796,776 | I have a CentOS 7 miniPC with wireless and wired network ports.
Description=DHCPv4 Server Daemon
Documentation=man:dhcpd(8) man:dhcpd.conf(5)
Wants=network-online.target
After=network-online.target
After=time-sync.target
[Service]
Type=notify
ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid enp2s0
[Install]
WantedBy=multi-user.target And my /etc/dhcp/dhcpd.conf is as follows: default-lease-time 600;
max-lease-time 7200;
authoritative;
subnet 192.168.100.0 netmask 255.255.255.0 {
option routers 192.168.100.1;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.100.255;
range 192.168.100.10 192.168.100.100;
} When I go to configure and start the service: sudo systemctl --system daemon-reload
sudo systemctl restart dhcpd.service I get this in /var/log/messages : localhost systemd: Starting DHCPv4 Server Daemon...
localhost dhcpd: Internet Systems Consortium DHCP Server 4.2.5
localhost dhcpd: Copyright 2004-2013 Internet Systems Consortium.
localhost dhcpd: All rights reserved.
localhost dhcpd: For info, please visit https://www.isc.org/software/dhcp/
localhost dhcpd: Not searching LDAP since ldap-server, ldap-port and ldap-base-dn were not specified in the config file
localhost dhcpd: Wrote 0 leases to leases file.
localhost dhcpd:
localhost dhcpd: No subnet declaration for enp2s0 (no IPv4 addresses).
localhost dhcpd: ** Ignoring requests on enp2s0. If this is not what
localhost dhcpd: you want, please write a subnet declaration
localhost dhcpd: in your dhcpd.conf file for the network segment
localhost dhcpd: to which interface enp2s0 is attached. **
localhost dhcpd:
localhost dhcpd:
localhost dhcpd: Not configured to listen on any interfaces!
localhost dhcpd:
localhost dhcpd: This version of ISC DHCP is based on the release available
localhost dhcpd: on ftp.isc.org. Features have been added and other changes
localhost dhcpd: have been made to the base software release in order to make
localhost dhcpd: it work better with this distribution.
localhost dhcpd:
localhost dhcpd: Please report for this software via the CentOS Bugs Database:
localhost dhcpd: http://bugs.centos.org/
localhost dhcpd:
localhost dhcpd: exiting.
localhost systemd: dhcpd.service: main process exited, code=exited, status=1/FAILURE
localhost systemd: Failed to start DHCPv4 Server Daemon.
localhost systemd: Unit dhcpd.service entered failed state.
localhost systemd: dhcpd.service failed. Any idea what's going wrong here? Thanks. | I had to do this for test purposes. Here's how I did it, by leveraging the actual MySQL/MariaDB images on dockerhub and the multi-stage build: FROM mariadb:latest as builder
# That file does the DB initialization but also runs mysql daemon, by removing the last line it will only init
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=root
COPY setup.sql /docker-entrypoint-initdb.d/
# Need to change the datadir to something else than /var/lib/mysql because the parent docker file defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db", "--aria-log-dir-path", "/initialized-db"]
FROM mariadb:latest
COPY --from=builder /initialized-db /var/lib/mysql Full working example here : https://github.com/lindycoder/prepopulated-mysql-container-example | {
"source": [
"https://serverfault.com/questions/796776",
"https://serverfault.com",
"https://serverfault.com/users/370483/"
]
} |
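A note on the dhcpd error quoted in the record above: "No subnet declaration for enp2s0 (no IPv4 addresses)" means dhcpd will only serve an interface that already carries an IPv4 address falling inside one of the declared subnets. A hedged sketch of one way to give enp2s0 a static address matching the 192.168.100.0/24 declaration, assuming NetworkManager manages the port (the connection name here is made up):
nmcli con add type ethernet ifname enp2s0 con-name lan-static ipv4.method manual ipv4.addresses 192.168.100.1/24
nmcli con up lan-static
ip -4 addr show dev enp2s0    # confirm 192.168.100.1/24 is present before restarting dhcpd
systemctl restart dhcpd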
796,788 | I'm reading about TCP data flow, Delayed ACK and Nagle's Algorithm. So far I understand that: The Delayed ACK implementation in TCP delays the acknowledgement of received segments to give the application an opportunity to write some data along with the acknowledgement, thus avoiding sending an empty ACK packet that only adds to network congestion. Nagle's Algorithm states that you can't send a small TCP segment while another small segment is still unacknowledged. This avoids loading the traffic with several tinygrams. In some interactive applications, like Rlogin for instance, Nagle's Algorithm and Delayed ACKs can "conflict": Rlogin sends the keyboard input to the server as we type, and some keys (like F1) generate more than one byte (F1 = Escape + left bracket + M). Those bytes can be sent in different segments if they are delivered to TCP one by one. The server doesn't reply with an echo until it has the whole sequence, so all the ACKs would be delayed (expecting some data from the application). The client, on the other hand, would wait for the first byte's acknowledgement before sending the next one (respecting Nagle's Algorithm). This combination ends up resulting in a "laggy" Rlogin. The tcpdump of the F1 and F2 keys being sent over Rlogin is shown below: type F1 key
1 0.0 slip.1023 > vangogh.login: P 1:2(1) ack 2
2 0.250520 (0.2505) vangogh.login > slip.1023: P 2:4(2) ack 2
3 0.251709 (0.0012) slip.1023 > vangogh.login: P 2:4(2) ack 4
4 0.490344 (0.2386) vangogh.login > slip.1023: P 4:6(2) ack 4
5 0.588694 (0.0984) slip.1023 > vangogh.login: . ack 6
type F2 key
6 2.836830 (2.2481) slip.1023 > vangogh.login: P 4:5(1) ack 6
7 3.132388 (0.2956) vangogh.login > slip.1023: P 6:8(2) ack 5
8 3.133573 (0.0012) slip.1023 > vangogh.login: P 5:7(2) ack 8
9 3.370346 (0.2368) vangogh.login > slip.1023: P 8:10(2) ack 7
10 3.388692 (0.0183) slip.1023 > vangogh.login: . ack 10 Now the doubt: even though the page I read states that the server doesn't reply with an echo before it has the whole key sequence, the packets captured through tcpdump show that the keys are being echoed in their respective ACKs (the first reply is 2 bytes long because the echo of ESC is two characters - caret + left bracket). If data is being sent from the application to TCP (the echo response), why are the ACKs being delayed? According to what was stated about the server waiting for the full sequence before echoing it, weren't the ACKs supposed to contain no echo until the last ACK, which would contain the whole sequence's echo? Reference: http://people.na.infn.it/~garufi/didattica/CorsoAcq/Trasp/Lezione9/tcpip_ill/tcp_int.htm | I had to do this for test purposes. Here's how I did it, by leveraging the actual MySQL/MariaDB images on dockerhub and the multi-stage build: FROM mariadb:latest as builder
# That file does the DB initialization but also runs mysql daemon, by removing the last line it will only init
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=root
COPY setup.sql /docker-entrypoint-initdb.d/
# Need to change the datadir to something else than /var/lib/mysql because the parent docker file defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db", "--aria-log-dir-path", "/initialized-db"]
FROM mariadb:latest
COPY --from=builder /initialized-db /var/lib/mysql Full working example here : https://github.com/lindycoder/prepopulated-mysql-container-example | {
"source": [
"https://serverfault.com/questions/796788",
"https://serverfault.com",
"https://serverfault.com/users/370492/"
]
} |
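A small, hedged aside on the trace format in the record above: a similar capture with inter-packet time gaps can be produced with tcpdump's -ttt option. The interface name is an assumption, and 513 is the classic rlogin port; on a modern ssh session the port would differ:
sudo tcpdump -i eth0 -ttt -n 'tcp port 513'    # -ttt prints the delta between packets, as in the excerpt above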
796,792 | I want to do something that seems dead simple, but none of the options I've found are quite right (e.g. Dropbox). The question is: what cloud sync service can I use to sync a folder on my workstation with the filesystem in an EC2 instance? Note these requirements: It must have an unattended/scriptable installation and configuration that happens on init of the EC2 instance (since EC2 instances are ephemeral), and thus it may only depend on EC2 environment variables for any service installation credentials. The service on EC2 needs read-only, recursive synchronization (not plain downloading; there are too many files to simply download a directory archive and expand it periodically). Both workstation and EC2 are syncing with a shared source cloud repository like Dropbox, since that workstation is not always on/publicly accessible. The app on my EC2 instance is nodeJS, for what it's worth! The Dropbox Linux client, for example (or the nodeJS libraries I've found), requires attended installation, visiting a Dropbox URL every time the instance needs to log its Dropbox client in. The same is true for BitTorrent Sync, which requires visiting a localhost URL to link with devices. The same applies even if another tiny EC2 instance is used to sync Dropbox with, for example, Elastic File System: it might be longer-lived, but it is still ephemeral and needs an unattended init-script installation. Thanks in advance. | I had to do this for test purposes. Here's how I did it, by leveraging the actual MySQL/MariaDB images on dockerhub and the multi-stage build: FROM mariadb:latest as builder
# That file does the DB initialization but also runs mysql daemon, by removing the last line it will only init
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=root
COPY setup.sql /docker-entrypoint-initdb.d/
# Need to change the datadir to something else than /var/lib/mysql because the parent docker file defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db", "--aria-log-dir-path", "/initialized-db"]
FROM mariadb:latest
COPY --from=builder /initialized-db /var/lib/mysql Full working example here : https://github.com/lindycoder/prepopulated-mysql-container-example | {
"source": [
"https://serverfault.com/questions/796792",
"https://serverfault.com",
"https://serverfault.com/users/160347/"
]
} |
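The requirements in the record above (unattended install at instance init, credentials supplied via the environment, read-only recursive sync) can be sketched as an EC2 user-data script. rclone is used here purely as an illustration - it is not mentioned in the original question, and the remote name, token variable and paths are assumptions:
#!/bin/bash
curl -fsSL https://rclone.org/install.sh | bash
mkdir -p /root/.config/rclone /srv/shared
# write an rclone remote definition; the Dropbox OAuth token is assumed to be substituted into this script before launch
cat > /root/.config/rclone/rclone.conf <<EOF
[remote]
type = dropbox
token = ${DROPBOX_TOKEN}
EOF
# periodic one-way pull so the instance copy tracks the cloud copy (read-only from the instance's point of view)
echo '*/5 * * * * root rclone sync remote:shared /srv/shared' > /etc/cron.d/rclone-sync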
797,044 | I have been provided with an ssh key by a colleague to add to the authorized_keys file for an account on a linux server so they can access that account. The file looks something like this: ---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20160816"
AAAAB3NzaC1yc2EAAAABJQAAAQEApoYJFnGDNis/2oCT6/h9Lzz2y0BVHLv8joXM
s4SYcYUVwBxNzqJsDWbikBn/h32AC36qAW24Bft+suGMtJGS3oSX53qR7ozsXs/D
lCO5FzRxi4JodStiYaz/pPK24WFOb4sLXr758tz2u+ZP2lfDfzn9nLxregZvO9m+
zpToLCWlXrzjZxDesJOcfh/eszU9KUKXfXn6Jsey7ej8TYqB2DgYCfv8jGm+oLVe
UOLEl7fxzjgcDdiLaXbqq7dFoOsHUABBV6kaXyE9LmkbXZB9lQ==
---- END SSH2 PUBLIC KEY ---- The man page for authorized_keys (well, sshd) makes it clear that the file expects each key to take up a single line. So I guess I need to convert this key to a single-line format? How do I accomplish this? | There is an accepted answer for this question, but I think it's worth noting that there is a way to do this using the ssh-keygen tool rather than sed : ssh-keygen -i -f ssh2.pub > openssh.pub Where ssh2.pub is your existing ssh2 key and openssh.pub will be the key in openssh format. If you just want to copy and paste you can leave out the redirect and use: ssh-keygen -i -f ssh2.pub | {
"source": [
"https://serverfault.com/questions/797044",
"https://serverfault.com",
"https://serverfault.com/users/119270/"
]
} |
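A small usage follow-up to the answer above: the converted key can be appended straight onto the target account's authorized_keys in one step (paths assumed to be the defaults):
ssh-keygen -i -f ssh2.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys    # keep the file's permissions strict or sshd may ignore it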
797,049 | The script on the server is: #!/bin/bash
if [ ! $# == 1 ]; then
echo "Usage check_cluster "
fi;
clu_srv=$1
error="stopped"
error1="disabled"
error2="recoverable"
host1=`sudo /usr/sbin/clustat|grep $1| awk {'print $2'}`
host2=`sudo /usr/sbin/clustat|grep $1| awk {'print $3'}`
service1=`sudo /usr/sbin/clustat|grep $clu_srv| awk {'print $1'}`
if [[ "$host2" == "$error" ]] || [[ "$host2" == "$error1" ]]; then
echo "CRITICAL - Cluster $clu_srv service failover on $host1 and state is '$host2'"
else
echo "OK - Cluster $clu_srv service is on $host1 and state is '$host2'"
fi;
##--EndScript The script receives the argument correctly. When I run it manually on the
server from the command line it returns the correct information, for example: # /usr/local/nagios/libexec/check_rhcs-ERS NFSService
OK - Cluster NFSService service is on NODE1 and state is 'started' But when I run it remotely via check_nrpe with the following command, it shows
incorrect information: # ./check_nrpe -H localhost -c check_rhcs-ERS
OK - Cluster NFSService service is on and state is '' nrpe.cfg: # command[check_rhcs-ERS]=/usr/local/nagios/libexec/check_rhcs-ERS NFSService What is wrong with the script, and how do I fix it? | There is an accepted answer for this question, but I think it's worth noting that there is a way to do this using the ssh-keygen tool rather than sed : ssh-keygen -i -f ssh2.pub > openssh.pub Where ssh2.pub is your existing ssh2 key and openssh.pub will be the key in openssh format. If you just want to copy and paste you can leave out the redirect and use: ssh-keygen -i -f ssh2.pub | {
"source": [
"https://serverfault.com/questions/797049",
"https://serverfault.com",
"https://serverfault.com/users/370694/"
]
} |
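On the empty NRPE output in the record above: a common cause is that the NRPE user cannot run the sudo /usr/sbin/clustat calls inside the plugin non-interactively. A hedged sketch, assuming NRPE runs as the nagios user (check nrpe.cfg for the actual user), edited with visudo -f /etc/sudoers.d/nrpe-clustat:
Defaults:nagios !requiretty
nagios ALL=(root) NOPASSWD: /usr/sbin/clustat
# then reproduce what NRPE actually sees by running the plugin as that user:
sudo -u nagios /usr/local/nagios/libexec/check_rhcs-ERS NFSService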
797,482 | After cloud-init runs a user data script on the first boot of an EC2 instance, a state file is presumably written so that cloud-init won't run the script again on subsequent reboots. There are cases where I'd like to delete this state file so that the user data script will run again. Where is it? | rm /var/lib/cloud/instances/*/sem/config_scripts_user Confirmed working on: CentOS 7.4 Ubuntu 14.04 Ubuntu 16.04 For the sake of completeness, if you have a situation where you care to keep track of the fact/possibility that this AMI [had a parent AMI that ...] and they all ran cloud-init user data, you can delete only the current semaphore. rm /var/lib/cloud/instance/sem/config_scripts_user | {
"source": [
"https://serverfault.com/questions/797482",
"https://serverfault.com",
"https://serverfault.com/users/7243/"
]
} |
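As a hedged alternative to removing the semaphore files by hand, newer cloud-init releases ship a built-in reset command (availability depends on the installed cloud-init version):
sudo cloud-init clean --logs    # clears the /var/lib/cloud state (and logs) so user data runs again on next boot
sudo reboot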
797,519 | Chkdsk is a very manual, technical solution. Hard drive checking in Windows reports a lot of unnecessary info, requiring that you read through the chkdsk report, etc. Also, chkdsk can't fix problems on in-use files (like the Windows drive). It seems like a little thing, but a non-technical user can't understand ChkDsk results and it takes even a technical user a minute to read through the report for each hard drive. Even if I automated Chkdsk with the /x and /f options, I would still have to read the report every day. Is there a tool or built-in service which will: Automatically check all drives for errors Report a simple "no problem" or "problem" Ideally, schedule a repair operation on the next restart. Bonus points: restart the computer to do the repair, and report all of that. | rm /var/lib/cloud/instances/*/sem/config_scripts_user Confirmed working on: CentOS 7.4 Ubuntu 14.04 Ubuntu 16.04 For the sake of completeness, if you have a situation where you care to keep track of the fact/possibility that this AMI [had a parent AMI that ...] and they all ran cloud-init user data, you can delete only the current semaphore. rm /var/lib/cloud/instance/sem/config_scripts_user | {
"source": [
"https://serverfault.com/questions/797519",
"https://serverfault.com",
"https://serverfault.com/users/2181/"
]
} |
797,523 | I want to set up various infrastructure in MS Azure that will then be available to multiple locations that are equipped with Cisco Meraki MX Security Appliances. Unfortunately, the MXs don't yet support route based VPNs, and Azure only supports multiple site to site networks when using route based VPN. I think similar challenges may exist with AWS and other cloud service providers. I think I may be able to work around this limitation using a virtual firewall, such as Cisco ASAv, but I haven't been able to find any documentation or marketing material that makes it clear this is suitable. I know I have done hub/spoke VPN with physical ASAs in the past, but I have no experience with ASAv. Has anyone got any experience doing cloud provider hub with ASAv (or any other virtual firewall) and branch office spoke using firewalls that don't support IKEv2 or route based VPNs, such as Meraki MX, Cisco ASA etc? | rm /var/lib/cloud/instances/*/sem/config_scripts_user Confirmed working on: CentOS 7.4 Ubuntu 14.04 Ubuntu 16.04 For the sake of completeness, if you have a situation where you care to keep track of the fact/possibility that this AMI [had a parent AMI that ...] and they all ran cloud-init user data, you can delete only the current semaphore. rm /var/lib/cloud/instance/sem/config_scripts_user | {
"source": [
"https://serverfault.com/questions/797523",
"https://serverfault.com",
"https://serverfault.com/users/31143/"
]
} |
798,298 | Here's my scenario: I'm a developer that inherited (unbeknownst to me) three servers located within my office. I also inherited the job of being the admin of the servers with a distinct lack of server administration knowledge and google / ServerFault as a reference point. Luckily, I've never actually had to come into contact physically with the machines or address any issues as they've always 'just worked'. All three machines are located within the same data room and serve the following purpose: Machine1 - IIS 8.0 hosting a number of internal applications Machine2 - SQL Server 2008 R2 data store for the internal applications Machine3 - SQL Server 2008 R2 mirror store of Machine2 All three have external hard drives connected that complete back ups frequently. I've been informed that all three need to move from one data room to another within the same premises. I wont be completing the physical moving of the hardware, that'll be handled by a competent mover. Apart from completing a full back up of each, what considerations do I need to make prior to hypothetically flicking the power switch and watching my world move? I'm aware that it's far from ideal having all three located in the same room / premises but that's past the scope of this question. | Genuinely interesting question, well asked :) There's a few things you need to check before this move, some easy, some hard. Power - check that the new room has not only the right amount of power outlets but that they're the right type - as in physical connector type and if the current location allows for different power phases per server to protect against single phase failure then I'd strongly urge you to replicate that also in the new location. Cooling - you need to check that there won't be an immediate or gradual build-up of heat that will lead to overheating and potential server shutdown. You can usually look up the maximum power (in watts) or heat (in BTUs) that each server can draw from the manufacturers website - let your building manager know this and get a written confirmation from them stating that the cooling in that location will cope. Networking - this is the hard one - not only does the same number of ports need to be replicated between old and new location but so does their type, speed and most importantly configuration. This last point is the key - there was a time when almost all ports in a network were pretty much equal - I'm old enough to remember those times! but these days the number of port configurations and the place in the network that any one port can be in is astronomical, you need to make sure that your network people replicated EVERYTHING to be identical from old to new - again get this in writing as this isn't easy. If something goes wrong with this move I'd put money it'll be on the network ports not being identical, it happens all the time. 'Other connections' - do you know if your servers have any other connections than power and networking? perhaps they have Fibre-Channel links to shared storage, KVM links to a shared management screen - again if they do you need to replicate these identically. Other than that feel free to come back here with any more specific questions, and I hope the move goes well. | {
"source": [
"https://serverfault.com/questions/798298",
"https://serverfault.com",
"https://serverfault.com/users/297759/"
]
} |
798,427 | Amazon Web Services (AWS) offers an officially supported Amazon Machine Image (AMI), but it doesn't indicate which Linux distribution it's based upon. Is the official Amazon Linux AMI based on another Linux distribution, and if so, which one? | There's a discussion thread available over on the AWS forums that indicates the officially supported Amazon Linux AMI is not based upon any Linux distribution. Rather, the Amazon Linux AMI is an independently maintained image by Amazon. | {
"source": [
"https://serverfault.com/questions/798427",
"https://serverfault.com",
"https://serverfault.com/users/55005/"
]
} |
799,016 | I have a server that is outside of AWS. I'd like to be able to mount an EFS volume to it, but I am not sure if that is possible. Perhaps if you create a VPC, and you create a tunnel over VPN? Does anybody know if this is possible? | Important updates: In October, 2018, AWS expanded the capabilities of the network technology underpinning EFS so that it now natively works across managed VPN connections and cross-region VPC peering, without resorting to the proxy workaround detailed below. https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-efs-now-supports-aws-vpn-and-inter-region-vpc-peering/ EFS added support for connectivity via AWS Direct Connect circuits in late 2016. https://aws.amazon.com/blogs/aws/amazon-efs-update-on-premises-access-via-direct-connect-vpc/ Comments have raised some interesting issues, since in my initial reading of the question, I may have assumed more familiarity with EFS than you may have. So, first, a bit of background: The "Elastic" in Elastic File System refers primarily to the automatic scaling of storage space and throughput -- not external access flexibility. EFS does not seem to have any meaningful limits on the amount of data you can store. The documented maximum size of any single file on an EFS volume is 52,673,613,135,872 bytes (52 TiB) . Most of the other limits are similarly generous. EFS is particularly "elastic" in the way it is billed. Unlike filesystems on EBS volumes, space is not preallocated on EFS, and you only pay for what you store on an hourly average basis. Your charges grow and shrink (they're "elastic") based on how much you've stored. When you delete files, you stop paying for the space they occupied within an hour. If you store 1 GB for 750 hours (≅1 month) and then delete it, or if you store 375 GB for 2 hours and then delete it, your monthly bill would be the same... $0.30. This is of course quite different than EBS, which will happily bill you $37.50 for storing 375 GB of 0x00 for the remaining hours in the month. S3's storage pricing model much the same as EFS, as billing for storage stops as soon as you delete an object, and the cost is ~1/10 the cost of EFS, but as I and others have mentioned many times, S3 is not a filesystem. Utilities like s3fs-fuse attempt to provide an "impedance bridge" but there are inherent difficulties in trying to treat something that isn't truly a filesystem as though it were (eventual consistency for overwrites being not the least of them). So, if a real "filesystem" is what you need, and it's for an application where access needs to be shared, or the storage needs space required is difficult to determine or you want it to scale on demand, EFS may be useful. And, it looks cool when you have 8.0 EiB of free space. $ df -h | egrep '^Filesystem|efs'
Filesystem Size Used Avail Use% Mounted on
us-west-2a.fs-5ca1ab1e.efs.us-west-2.amazonaws.com:/ 8.0E 121G 8.0E 1% /srv/efs/fs-5ca1ab1e
us-west-2a.fs-acce55ed.efs.us-west-2.amazonaws.com:/ 8.0E 7.2G 8.0E 1% /srv/efs/fs-acce55ed But it is, of course, important to use the storage service most appropriate to your applications. Each of the options has its valid use cases. EFS is probably the most specialized of the storage solutions offered by AWS, having a narrower set of use cases than EBS or S3. But can you use it from outside the VPC? The official answer is No : Mounting a file system over VPC private connectivity mechanisms such as a VPN connection, VPC peering, and AWS Direct Connect is not supported. — http://docs.aws.amazon.com/efs/latest/ug/limits.html EFS is currently limited to only EC2 Linux access only. That too within the VPC. More features would be added soon. You can keep an eye on AWS announcements for new features launched. — https://forums.aws.amazon.com/thread.jspa?messageID=732749 However, the practical answer is Yes , even though this isn't a an officially supported configuration. To make it work, some special steps are required. Each EFS filesystem is assigned endpoint IP addresses in your VPC using elastic network interfaces (ENI), typically one per availability zone, and you want to be sure you mount the one in the availability zone matching the instance, not only for performance reasons, but also because bandwidth charges apply when transporting data across availability zone boundaries. The interesting thing about these ENIs is that they do not appear to use the route tables for the subnets to which they are attached. They seem to be able to respond only to instances inside the VPC, regardless of security group settings (each EFS filesystem has its own security group to control access). Since no external routes are accessible, I can't access the EFS endpoints directly over my hardware VPN... so I turned to my old pal HAProxy, which indeed (as @Tim predicted) is necessary to make this work. It's a straightforward configuration, since EFS uses only TCP port 2049. I'm using HAProxy on a t2.nano (HAProxy is very efficient), with a configuration that looks something like this: listen fs-8d06f00d-us-east-1
bind :2049
mode tcp
option tcplog
timeout tunnel 300000
server fs-8d06f00d-us-east-1b us-east-1b.fs-8d06f00d.efs.us-east-1.amazonaws.com:2049 check inter 60000 fastinter 15000 downinter 5000
server fs-8d06f00d-us-east-1c us-east-1c.fs-8d06f00d.efs.us-east-1.amazonaws.com:2049 check inter 60000 fastinter 15000 downinter 5000 backup
server fs-8d06f00d-us-east-1d us-east-1d.fs-8d06f00d.efs.us-east-1.amazonaws.com:2049 check inter 60000 fastinter 15000 downinter 5000 backup This server is in us-east-1b so it uses the us-east-1b endpoint as primary, the other two as backups if the endpoint in 1b ever fails a health check. If you have a VPN into your VPC, you then mount the volume using the IP address of this proxy instance as the target (instead of using the EFS endpoint directly), and voilà you have mounted the EFS filesystem from outside the VPC. I've mounted it successfully on external Ubuntu machines as well as Solaris¹ servers (where EFS has proven very handy for hastening their decommissioning by making it easier to migrate services away from them). For certain situations, like moving data into AWS or running legacy and cloud systems in parallel on specific data during a migration, EFS seems like a winner. Of course, the legacy systems, having higher round-trip times, will not perform as well as EC2 instances, but that's to be expected -- there aren't exceptions to the laws of physics. In spite of that, EFS and the HAProxy gateway seem to be a stable solution for making it work externally. If you don't have a VPN, then a pair of HAProxy machines, one in AWS and one in your data center, can also tunnel EFS over TLS, establishing an individual TCP connection with the payload wrapped in TLS for transport of each individual EFS connection across the Internet. Not technically a VPN, but encrypted tunneling of connections. This also seems to perform quite well. ¹Solaris 10 is (not surprisingly) somewhat broken by default -- initially, root didn't appear to have have special privileges -- files on the EFS volume created by root are owned by root but can't be chown ed to another user from the Solaris machine ( Operation not permitted ), even though everything works as expected from Ubuntu clients. The solution, in this case, is to defeat the NFS ID mapping daemon on the Solaris machine using svcadm disable svc:/network/nfs/mapid:default . Stopping this service makes everything work as expected. Additionally, the invocation of /usr/sbin/quota on each login needs to be disabled in /etc/profile . There may be better or more correct solutions, but it's Solaris, so I'm not curious enough to investigate. | {
"source": [
"https://serverfault.com/questions/799016",
"https://serverfault.com",
"https://serverfault.com/users/106736/"
]
} |
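To make the proxy arrangement in the record above concrete, a hedged sketch of the client-side mount through the HAProxy instance; the proxy IP and mount point are assumptions:
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 10.0.1.10:/ /mnt/efs    # 10.0.1.10 = the HAProxy instance reachable over the VPN
df -h /mnt/efs    # should show the EFS filesystem with its effectively unlimited size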