source_id: int64 (1 to 4.64M) | question: string, lengths 0 to 28.4k | response: string, lengths 0 to 28.8k | metadata: dict
892,134
I am trying to understand the configuration of nvme. But I do not understand why there are two devices: nvme block and nvme character device: crw------- 1 root root 243, 0 Dec 12 16:09 /dev/nvme0 brw-rw---- 1 root disk 259, 0 Jan 14 01:30 /dev/nvme0n1 What is the purpose of each or when to use them?
The character device /dev/nvme0 is the NVMe device controller, and block devices like /dev/nvme0n1 are the NVMe storage namespaces: the devices you use for actual storage, which will behave essentially as disks. In enterprise-grade hardware, there might be support for several namespaces, thin provisioning within namespaces and other features. For now, you could think of namespaces as a sort of meta-partition with extra features for enterprise use.
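If the nvme-cli package is installed (an assumption, it is not mentioned in the question), you can see the split between controller and namespace directly from the shell:

# list controllers and the namespaces behind them
nvme list
# talk to the controller through the character device
nvme id-ctrl /dev/nvme0
# inspect a namespace through the block device
nvme id-ns /dev/nvme0n1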
{ "source": [ "https://serverfault.com/questions/892134", "https://serverfault.com", "https://serverfault.com/users/292068/" ] }
892,167
I have a server running a WebDAV server. Using nautilus on the GUI of my ubuntu I can connect and read/write files. I have tried on terminal with the following command: sudo mount -t davfs http://<host>:<port>/<sharename>/ <destination> And the result is: /sbin/mount.davfs: mounting failed; the server does not support WebDAV Is there another way to connect?
{ "source": [ "https://serverfault.com/questions/892167", "https://serverfault.com", "https://serverfault.com/users/452045/" ] }
893,066
I have installed redis on an ubuntu 16.04 machine and if I run /usr/local/bin/redis-server /etc/redis/cluster/7000/redis.conf it starts up and I can connect to it without issues. However I want to start it using systemctl start redis , so I have created the following file at /etc/systemd/system/redis7000.service [Unit] Description=Redis In-Memory Data Store After=network.target [Service] User=redis Group=redis ExecStart=/usr/local/bin/redis-server /etc/redis/cluster/7000/redis.conf ExecStop=/usr/local/bin/redis-cli shutdown Restart=always [Install] WantedBy=multi-user.target and the redis config has supervised systemd set which I think looks good, but I get the following errors: Jan 19 14:54:27 ip-172-31-42-18 systemd[1]: Started Redis In-Memory Data Store. Jan 19 14:54:27 ip-172-31-42-18 redis-server[21661]: 21661:C 19 Jan 14:54:27.680 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo Jan 19 14:54:27 ip-172-31-42-18 redis-server[21661]: 21661:C 19 Jan 14:54:27.680 # Redis version=4.0.6, bits=64, commit=00000000, modified=0, pid=21661, just started Jan 19 14:54:27 ip-172-31-42-18 redis-server[21661]: 21661:C 19 Jan 14:54:27.680 # Configuration loaded Jan 19 14:54:27 ip-172-31-42-18 redis-server[21661]: 21661:C 19 Jan 14:54:27.680 # systemd supervision requested, but NOTIFY_SOCKET not found Jan 19 14:54:27 ip-172-31-42-18 systemd[1]: redis7000.service: Main process exited, code=exited, status=1/FAILURE Jan 19 14:54:27 ip-172-31-42-18 systemd[1]: redis7000.service: Unit entered failed state. Jan 19 14:54:27 ip-172-31-42-18 systemd[1]: redis7000.service: Failed with result 'exit-code'. Jan 19 14:54:27 ip-172-31-42-18 systemd[1]: redis7000.service: Service hold-off time over, scheduling restart. Jan 19 14:54:27 ip-172-31-42-18 systemd[1]: Stopped Redis In-Memory Data Store. And I am not even sure what this means, so could someone guide me in the right direction?
To run redis under systemd, you need to set supervised systemd . See the configuration file: # If you run Redis from upstart or systemd, Redis can interact with your # supervision tree. Options: # supervised no - no supervision interaction # supervised upstart - signal upstart by putting Redis into SIGSTOP mode # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET # supervised auto - detect upstart or systemd method based on # UPSTART_JOB or NOTIFY_SOCKET environment variables # Note: these supervision methods only signal "process is ready." # They do not enable continuous liveness pings back to your supervisor. supervised no Needs to be changed to: supervised systemd You can also pass this on the command line, which overrides the setting in redis.conf . Red Hat based systems do this. This also allows for running the same redis instance manually or from systemd without changing the config file. ExecStart=/usr/bin/redis-server /etc/redis.conf --supervised systemd In addition, you also need to tell systemd that redis will be operating in this mode by setting Type=notify in the [Service] section.
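As a sketch only, with the paths taken from the unit file in the question and Type=notify added as described above, the service would look something like this:

[Unit]
Description=Redis In-Memory Data Store
After=network.target

[Service]
Type=notify
User=redis
Group=redis
ExecStart=/usr/local/bin/redis-server /etc/redis/cluster/7000/redis.conf
ExecStop=/usr/local/bin/redis-cli shutdown
Restart=always

[Install]
WantedBy=multi-user.target

Remember to run systemctl daemon-reload after editing the unit, before starting the service again.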
{ "source": [ "https://serverfault.com/questions/893066", "https://serverfault.com", "https://serverfault.com/users/443850/" ] }
893,239
In Linux I sometimes rename files like this: mv dir1/dir2/dir3/file.txt dir1/dir2/dir3/file.txt.old Note that I want to just rename the file, not move it to another directory. Is there a command that would allow me to do a shorthand version of that command? I am thinking something like: mv dir1/dir2/dir3/file.txt file.txt.old or maybe even something like (to just append to the name): mv dir1/dir2/dir3/file.txt {}.old My goal is not to have to specify the complete path again. I know those "examples" I wrote don't work, but it is just an idea of what I want to accomplish. I don't want to have to cd in to the directory.
For a single file, try mv dir1/dir2/dir3/file.{txt,txt.old} where the X{a,b} construct expands to Xa Xb. You can preview the expansion using echo dir1/dir2/dir3/file.{txt,txt.old} to see if it fits your need. Note that for multiple files mv dir1/dir2/dir3/file{1,2,3}.{txt,txt.old} is unlikely to expand to what you want (this expands to a mix of file1.txt file1.txt.old file2.txt ..., which mv cannot handle). As noted in the comments, {txt,txt.old} can be shortened by writing mv dir1/dir2/dir3/file.txt{,.old} , and if the directory names are unambiguous, wildcards can be used: mv *1/*2/d*/file.txt{,.old} . For multiple files, use rename: rename -n 's/\.txt$/.txt.old/' dir1/dir2/dir3/file*.txt and drop -n to perform the actual rename.
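As a quick illustration of the preview step (the output below is simply what brace expansion produces; nothing is renamed yet):

$ echo dir1/dir2/dir3/file.{txt,txt.old}
dir1/dir2/dir3/file.txt dir1/dir2/dir3/file.txt.old
$ echo dir1/dir2/dir3/file.txt{,.old}
dir1/dir2/dir3/file.txt dir1/dir2/dir3/file.txt.old

Both forms hand mv exactly the two arguments from the original command, so the file is renamed in place without retyping the path.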
{ "source": [ "https://serverfault.com/questions/893239", "https://serverfault.com", "https://serverfault.com/users/345965/" ] }
893,315
We want ALL sites on our webserver (IIS 10) to enforce SSL (ie redirect HTTP to HTTPS). We are currently 'Requiring SSL' on each site and setting up a 403 error handler to perform a 302 redirect to the https address for that specific site. This works great. But it's a pain to do for every single site, there's plenty of room for human error. Ideally I'd like to set up a permanent 301 redirect on all HTTP://* to HTTPS://* Is there a simple way to do this in IIS ?
The IIS URL Rewrite Module 2.1 for IIS7+ may be your friend. The module can be downloaded from IIS URL Rewrite . Using the URL Rewrite Module and URL Rewrite Module 2.0 Configuration Reference explain how to use the module. Once the module is installed, you can create a host wide redirect using IIS Manager. Select URL Rewrite , Add Rule(s)... , and Blank rule . Name: Redirect to HTTPS Match URL Requested URL: Matches the Pattern Using: Wildcards Pattern: * Ignore case: Checked Conditions Logical grouping: Match Any Condition input : {HTTPS} Check if input string: Matches the Pattern Pattern: OFF Ignore case: Checked Track capture groups across conditions: Not checked Server Variables Leave blank. Action Action type: Redirect Redirect URL: https://{HTTP_HOST}{REQUEST_URI} Append query string: Not checked Redirect type: Permanent (301) Apply the rule and run IISReset (or click Restart in the IIS Manager) Alternatively, after installing the module you could modify the applicationHost.config file as follows: <system.webServer> <rewrite> <globalRules> <rule name="Redirect to HTTPS" enabled="true" patternSyntax="Wildcard" stopProcessing="true"> <match url="*" ignoreCase="true" negate="false" /> <conditions logicalGrouping="MatchAny" trackAllCaptures="false"> <add input="{HTTPS}" ignoreCase="true" matchType="Pattern" negate="false" pattern="OFF" /> </conditions> <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" appendQueryString="false" redirectType="Permanent" /> </rule> </globalRules> </rewrite> </system.webServer>
{ "source": [ "https://serverfault.com/questions/893315", "https://serverfault.com", "https://serverfault.com/users/327102/" ] }
893,343
I made a typo: $ history 169 9:34 la /usr/local/etc/ 170 9:35 sudo mkdir ^C 171 9:36 sudo mkdir /usr/local/etc/dnsmasq.d Now I have a file that is called ^C (ctrl+C)!! When I use ls I just see a questionmark (probably due to the locale?) % ls -al total 60 drwxr-xr-x 2 root wheel 512 Jan 21 09:35 ? <- this one drwxr-xr-x 5 admin wheel 512 Jan 21 16:24 . drwxr-xr-x 3 root wheel 512 Jan 20 14:29 .. -rw-r--r-- 1 admin nobody 1114 Jan 20 19:10 .cshrc -rw------- 1 admin wheel 6002 Jan 21 15:27 .history -rw-r--r-- 1 admin nobody 182 Jan 20 14:29 .login -rw-r--r-- 1 admin nobody 91 Jan 20 14:29 .login_conf -rw------- 1 admin nobody 301 Jan 20 14:29 .mail_aliases -rw-r--r-- 1 admin nobody 271 Jan 20 19:04 .mailrc -rw-r--r-- 1 admin nobody 726 Jan 20 19:05 .profile -rw------- 1 admin nobody 212 Jan 20 14:29 .rhosts -rw-r--r-- 1 admin nobody 911 Jan 20 19:06 .shrc drwx------ 2 admin nobody 512 Jan 20 15:05 .ssh drwxr-xr-x 2 admin wheel 512 Jan 20 19:08 bin and % ls -i 3611537 ? 3611534 bin I want to remove this file. I try mv and when using tab-completion it shows me: % mv ^C/ bin/ Obviously I can't type a ^C :-/ How do I remove this file?
^V ( ctrl + v ) works as a kind of escape sequence for the next key-press, inserting the associated character instead of performing whatever action would normally be associated with it. Making use of this, ^V^C ( ctrl + v , ctrl + c ) ought to work for entering your difficult filename in the terminal.
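For example, since the entry in question is a directory created by mkdir, something like this ought to remove it (here ^V^C is not two caret characters but the result of pressing ctrl + v and then ctrl + c ; prefix with sudo if the ownership requires it):

% rmdir ^V^C

or, to give it a sane name first and deal with it later:

% mv ^V^C ctrl-c-dir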
{ "source": [ "https://serverfault.com/questions/893343", "https://serverfault.com", "https://serverfault.com/users/324494/" ] }
894,488
For years the press has been writing about the problem that there are now very few IPv4 addresses available. But on the other hand, I'm using a server hosting company which gladly gives out public IPv4 addresses for a small amount of money. And my private internet connection comes with a public IPv4 address. How is that possible? Is the problem as bad as the press wants us to believe?
It's very bad. Here is a list of examples of what I have first-hand experience with consumer ISPs doing to fight the shortage of IPv4 addresses: Repeatedly shuffling around IPv4 blocks between cities, causing brief outages and connection resets for customers. Shortening DHCP lease times from days to minutes. Allowing users to choose if they want network address translation (NAT) on the Customer Premise Equipment (CPE) or not, then retroactively turning it on for everybody anyway. Enabling NAT on CPE for customers who already used the opportunity to opt out of NAT. Reducing the cap on the number of concurrently active media access control (MAC) addresses enforced by CPE. Deploying carrier-grade NAT (CGN) for customers who had a real IP address when they signed up for the service. All of these are reducing the quality of the product the ISP is selling to their customers. The only sensible explanation for why they would be doing this to their customers is shortage of IPv4 addresses. The shortage of IPv4 addresses has led to fragmentation of the address space, which has multiple shortcomings: Administrative overhead which not only costs time and money, but also is error prone and has led to outages. Large usage of content addressable memory (CAM) capacity on backbone routers, which a few years back led to a significant outage across multiple ISPs when it crossed the limit of a particular popular model of routers. Without NAT there is no way we could get by today with the 3700 million routable IPv4 addresses. But NAT is a brittle solution which gives you less reliable connectivity and problems that are difficult to debug. The more layers of NAT the worse it will be. Two decades of hard work has made a single layer of NAT mostly work, but we have already crossed the point where a single layer of NAT was sufficient to work around the shortage of IPv4 addresses.
{ "source": [ "https://serverfault.com/questions/894488", "https://serverfault.com", "https://serverfault.com/users/454030/" ] }
895,088
This article describes how to assign host aliases to pods in Kubernetes; is there any way to do it for a deployment and not for a pod as such? Any other suggestions for adding host entries in Kubernetes to provide a first line of host name resolution (before checking a server like 8.8.8.8) would be welcomed as an answer as well.
Yes, this is possible. All you need to do is follow the same advice you were given for a pod specification, but rather than applying it to a YAML file for pods, you apply it to a YAML file for a deployment. For example, if you are already running a deployment you can edit the current deployment by issuing the following command: $ kubectl edit deployment DEPLOYMENT_NAME This will allow you to access edit mode of the currently running deployment in YAML format. You need to add the 'hostAliases' section in the deployment's 'template: spec' field, which allows you to configure the template for the pod/containers. So to demonstrate this visually, here is the YAML for a deployment I am running in my project that I can edit by running the command I mentioned above: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "6" creationTimestamp: 2018-01-30T14:42:48Z generation: 7 labels: app: nginx-site-app name: nginx-site namespace: default resourceVersion: "778922" selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-site uid: dc4535333d-05cb-11e8-b5c0-7878748e0178 spec: replicas: 1 selector: matchLabels: app: nginx-site-app strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: app: nginx-site-app spec: containers: - image: gcr.io/myprojectid/tuneup-nginx:latest imagePullPolicy: Always name: nginx-container ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: availableReplicas: 1 conditions: - lastTransitionTime: 2018-01-30T14:55:28Z lastUpdateTime: 2018-01-30T14:55:28Z message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available observedGeneration: 7 readyReplicas: 1 replicas: 1 updatedReplicas: 1 If I want to add 'hostAliases' to the pods within this deployment, I need to add this information to the pod template spec section as demonstrated below (notice it is in line with 'containers'; important: there are two 'spec' sections within my file, and I don't want to add it to the first spec section, but rather to the template spec section, which defines the pod template): spec: containers: - image: gcr.io/development-project-192309/tuneup-nginx:latest imagePullPolicy: Always name: nginx-container ports: - containerPort: 80 protocol: TCP hostAliases: - ip: 127.0.0.1 hostnames: - myadded.examplehostname
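Stripped down to just the relevant lines (names reused from the YAML above), the placement looks like this; hostAliases sits at the pod template's spec level, as a sibling of containers:

spec:
  template:
    spec:
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "myadded.examplehostname"
      containers:
      - name: nginx-container
        image: gcr.io/myprojectid/tuneup-nginx:latest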
{ "source": [ "https://serverfault.com/questions/895088", "https://serverfault.com", "https://serverfault.com/users/320719/" ] }
895,646
Clearly my file exists in /usr/bin $ ls /usr/bin/ngrok /usr/bin/ngrok However, when I attempt to chown it I receive an error $ sudo chown my_user:users /usr/bin/ngrok chown: cannot dereference '/usr/bin/ngrok': No such file or directory Further attempts to run it also fail! $ ngrok bash: ngrok: command not found $ sudo /usr/bin/ngrok sudo: /usr/bin/ngrok: command not found What is happening here?
/usr/bin/ngrok will be a symlink that points nowhere (or rather to a non-existing file). Check with ls -l .
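A few standard commands make a dangling link obvious (the actual target shown will be whatever the installer pointed the link at):

$ ls -l /usr/bin/ngrok       # shows the symlink and its target
$ readlink /usr/bin/ngrok    # prints just the target path
$ file /usr/bin/ngrok        # reports a broken symbolic link if the target is missing

Once you know the missing target, either restore that file or replace the symlink with the real binary.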
{ "source": [ "https://serverfault.com/questions/895646", "https://serverfault.com", "https://serverfault.com/users/453947/" ] }
895,746
I need to downgrade PHP on one of my VMs from 7.2 to 7.1 on Ubuntu 16.04. The last time I tried to remove just PHP and replace it with a different version, I had all kinds of issues with Apache and MySQL. Is there a quick way to downgrade PHP from 7.2 to 7.1 without having to fully reinstall and configure Apache (latest version as of this writing) and everything else on the server? I have to downgrade due to bad information I received from a software vendor that claims their application runs on PHP 7.2. Turns out it must have 7.1. I tried looking for info about how to downgrade from 7.2 to 7.1, but only get 'upgrade' results. Thank you for your help.
Below is a description of what I did. I hope this information can help someone else: I installed PHP 7.1 alongside PHP 7.2. I also installed most of the needed extensions for PHP 7.1. I then did a2dismod php7.2 and a2enmod php7.1 so that I could switch over to PHP 7.1 while keeping 7.2 still installed on the server. Most of my sites work after making the switch. The only site that doesn't seem to be working is a Joomla site. The full list of commands I ran is below: sudo add-apt-repository ppa:ondrej/php sudo apt-get update sudo apt-get install php7.1 sudo apt-get install php7.1-cli php7.1-common php7.1-json php7.1-opcache php7.1-mysql php7.1-mbstring php7.1-mcrypt php7.1-zip php7.1-fpm sudo a2dismod php7.2 sudo a2enmod php7.1 sudo service apache2 restart
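One extra step that is often needed, although it is an assumption here since the steps above only switch the Apache module: the php command used on the CLI is selected separately through the alternatives system, so you may also want something like

sudo update-alternatives --set php /usr/bin/php7.1

so that command-line scripts and cron jobs use 7.1 as well.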
{ "source": [ "https://serverfault.com/questions/895746", "https://serverfault.com", "https://serverfault.com/users/379025/" ] }
896,228
As an example, this project offers an *.asc file with a PGP signature to verify the contents of the download (as opposed to a checksum, you can see the empty column): https://ossec.github.io/downloads.html How would I use this file? I tried gpg --verify and other variants, but it seems to be matching the name up to the file, however the filename as it is downloaded is not exactly the same... not sure how it is supposed to work.
Download the key file: wget https://ossec.github.io/files/OSSEC-ARCHIVE-KEY.asc Inspect the key file to confirm it has EE1B0E6B2D8387B7 as its keyid. gpg --keyid-format long --list-options show-keyring OSSEC-ARCHIVE-KEY.asc If correct, then import the key: gpg --import OSSEC-ARCHIVE-KEY.asc Download the software package wget https://github.com/ossec/ossec-hids/archive/2.9.3.tar.gz Download the signature file https://github.com/ossec/ossec-hids/releases/download/2.9.3/ossec-hids-2.9.3.tar.gz.asc Verify it gpg --verify ossec-hids-2.9.3.tar.gz.asc 2.9.3.tar.gz Output gpg: Signature made Sat Dec 23 16:13:01 2017 UTC gpg: using RSA key EE1B0E6B2D8387B7 gpg: Good signature from "Scott R. Shinn <[email protected]>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: B50F B194 7A0A E311 45D0 5FAD EE1B 0E6B 2D83 87B7
{ "source": [ "https://serverfault.com/questions/896228", "https://serverfault.com", "https://serverfault.com/users/450922/" ] }
896,456
I understand that private addresses such as 10.0.0.0/8 , 172.16.0.0/12 and 192.168.0.0/16 are not routable. However, what exactly is stopping these addresses from being routable? Do ISPs implement ACLs that prevent these networks from routing or is it something higher up? Also, is it IANA that created this design?
Private IP addresses are routable, although they are not publicly routed. Basically, a router will route a private address to the private/internal LAN, rather than to the internet. To expand my answer: a router can route a private address to the public side, via its default gateway. However, the packet will be "lost" in transit due to other routers dropping it, or due to the packet's TTL reaching 0. For example, take a look at this (partially obfuscated) traceroute -I -n 192.168.200.1 : [root@myhost ~]# traceroute -I -n 192.168.200.1 traceroute to 192.168.200.1 (192.168.200.1), 30 hops max, 60 byte packets 1 x.x.x.x 0.851 ms 0.841 ms 0.818 ms 2 6x.xx.xx.xx 0.791 ms 0.791 ms 0.849 ms 3 15x.xx.xx.xx 1.350 ms 1.347 ms 1.373 ms 4 15x.x.xx.xx 1.446 ms 1.435 ms 1.428 ms 5 151.6.68.20 2.272 ms 2.266 ms 2.251 ms 6 151.6.0.91 8.818 ms 8.256 ms 8.326 ms 7 * * * 8 * * * 9 * * * 10 * * * ... ... 29 * * * 30 * * * As you can see, the packet is routed to the public internet via the machine's default gateway. However, it is dropped in transit and never reaches any proper destination. After all, private IP ranges (by definition) overlap between customers, so to which of the thousands of 192.168.200.x/24 networks should this packet be routed? An interesting side note: internet providers often use private addresses for their internal routing. If, for example, a private 192.168.200.x/24 range is used for internal routing, the first router/machine with IP 192.168.200.1 will receive but drop the packet, because it was unsolicited. ICMP is an interesting exception, as routers/machines generally reply to unsolicited pings. This means you can sometimes use private address scans to map your ISP's private network.
{ "source": [ "https://serverfault.com/questions/896456", "https://serverfault.com", "https://serverfault.com/users/455861/" ] }
896,673
I have just created an AKS cluster using a standard az aks create ... --ssh-key-value ... . According to https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal , an AKS cluster is created, and because an existing service principal is not specified, a service principal is created for the cluster. Where can I find the created SP? Thanks
As Bruno Faria said, you can find the service principal in Azure Active Directory, under Azure Active Directory -> App registrations -> All apps. You can also use az aks list --resource-group <your-resource-group> to find your service principal. Hope this helps.
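As a hedged sketch using only the CLI (the resource group and cluster name below are placeholders), the application ID of the service principal is exposed under servicePrincipalProfile, so something like

az aks show --resource-group <your-resource-group> --name <your-cluster> --query servicePrincipalProfile.clientId -o tsv

should print the clientId, which you can then look up under App registrations.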
{ "source": [ "https://serverfault.com/questions/896673", "https://serverfault.com", "https://serverfault.com/users/138728/" ] }
896,711
I use Ubuntu 16.04 with Nginx and I've installed Nginx Certbot on my operating system (Ubuntu 16.04) with: apt-get update -y add-apt-repository ppa:certbot/certbot -y apt-get update -y apt-get upgrade python-certbot-nginx -y I set Nginx variables: s_a="/etc/nginx/sites-available" s_e="/etc/nginx/sites-available" I created an app conf based on these variables: sed "s/\${domain}/${1}/g" "~/${repo}/template_nginx_app" > "${s_a}/${domain}.conf" ln -sf ${s_a}/${domain}.conf ${s_e} I created a corresponding SSL certificate with Certbot based on the app conf, this way: certbot --nginx -d ${domain} -d www.${domain} There are cases where an SSL certificate is created in a bad way and one just needs to start over after some configurations. How could I totally remove the SSL certificate (besides removing the app conf ${domain}.conf which was also edited/reconfigured by Certbot)? Is there a fast way to do that directly from Certbot? My desire is that no remnants whatsoever would be left of either the app conf or the certificate. This might be a good way: rm ${s_a}/${domain}.conf && rm ${s_e}/${domain}.conf rm -rf /etc/letsencrypt/{live,renewal,archive}/{${DOMAIN},${DOMAIN}.conf}
Yes, certbot can help you clean up. sudo certbot certificates will list what certbot thinks you have installed. sudo certbot delete will allow you to interactively remove and clean up unwanted / deprecated certificates.
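For the scripted, non-interactive case, certbot delete also accepts the certificate name directly, which fits the setup in the question (reusing its ${domain} variable):

sudo certbot delete --cert-name ${domain}

This removes the matching entries under /etc/letsencrypt/live, /etc/letsencrypt/archive and /etc/letsencrypt/renewal, so only the Nginx app conf is left for you to clean up.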
{ "source": [ "https://serverfault.com/questions/896711", "https://serverfault.com", "https://serverfault.com/users/454858/" ] }
897,781
If E3-1285 v6 supports a maximum of 64 GB of RAM, does using a dual socket motherboard increase max memory? I guess not, but would like to know the reason. My logic says that if RAM is shared, both processors should be able to address all available RAM and thus, it will also be limited to 64GB. Any technical explanation for this?
In modern CPUs the memory controller is integrated directly into the CPU, whereas in former times memory was accessed by the CPU over a bus system. The bus system had the advantage that memory access was uniform, which is still the case in single-socket CPUs. Now, entering dual-socket systems, each CPU has dedicated local memory and the memory of the other CPU can be accessed indirectly over QPI which is in simple words a link between the two CPUs. This is called NUMA ( non-uniform memory access ). Well, putting things together. If you have a second CPU you can increase the total amount of memory of your system, but you also need a CPU that is capable of running in dual-processor mode. IIRC the E3 series is not dual-socket capable, E5 is dual-socket capable and E7 quad-socket capable.
{ "source": [ "https://serverfault.com/questions/897781", "https://serverfault.com", "https://serverfault.com/users/107564/" ] }
897,836
I am working on one Linux box. I want something like this. network--->wlan0---->eth0-->other server. Both wlan0 and eth0 interface reside inside same Linux box. I am using dhcp which is assigning something say 192.168.3.21 to my wlan0 interface. I am assigning static IP say 192.168.3.101 to my eth0 interface and 192.168.3.102 to other server. Now, I want to ping from the network(192.168.3.XX) to other server at the address of 192.168.3.102 and my eth0 at 192.168.3.101. I am unable to do so. I am not even able to ping my other server at 192.168.3.102 from my linux box. I have enabled ip forwarding via "echo 1 > /proc/sys/net/ipv4/ip_forward" command. I have used the following command to enable nat forwarding too. iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A FORWARD -i eth0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE Still I am unable to ping. Please, let me know what I am missing. Any help will be so much appreciated. Here is the output of iptables-save :- # Generated by iptables-save v1.6.0 on Mon Feb 19 10:17:54 2018 *raw :PREROUTING ACCEPT [481:39595] :OUTPUT ACCEPT [325:24634] COMMIT # Completed on Mon Feb 19 10:17:54 2018 # Generated by iptables-save v1.6.0 on Mon Feb 19 10:17:54 2018 *nat :PREROUTING ACCEPT [1:229] :INPUT ACCEPT [1:229] :OUTPUT ACCEPT [1:76] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -o wlan0 -j MASQUERADE -A POSTROUTING -o eth0 -j MASQUERADE COMMIT # Completed on Mon Feb 19 10:17:54 2018 # Generated by iptables-save v1.6.0 on Mon Feb 19 10:17:54 2018 *mangle :PREROUTING ACCEPT [482:39927] :INPUT ACCEPT [474:38801] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [325:24634] :POSTROUTING ACCEPT [325:24634] COMMIT # Completed on Mon Feb 19 10:17:54 2018 # Generated by iptables-save v1.6.0 on Mon Feb 19 10:17:54 2018 *filter :INPUT ACCEPT [63:6229] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1:76] -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT COMMIT # Completed on Mon Feb 19 10:17:54 2018 Here is my output for ip route:- default via 192.168.0.1 dev wlan0 metric 10 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.190 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.235
{ "source": [ "https://serverfault.com/questions/897836", "https://serverfault.com", "https://serverfault.com/users/457178/" ] }
898,173
Context I am running Ubuntu Desktop as my primary machine, which I will call D. I want to connect to server S via ssh, but the firewall is blocking me. I have access to server S, via a very cumbersome path, involving a Windows virtual machine and PuTTY . This makes working with this server extremely annoying: completely different environment, copy/paste does not work, I can not properly use my desktop while being connected to it (Alt-Tab is broken by the Virtual Machine) etc I have verified that I can ssh from server S to my desktop machine D (the opposite from what I need). Could I somehow initiate "port forwarding" or similar from the server, so that I can ssh to the server from my desktop?
You can use the following command to set up an SSH tunnel from the remote server to your local machine: $ ssh -f -N -R 1234:localhost:22 user@your_machine_ip When the tunnel is set up, you can simply ssh to your remote server using the following command: $ ssh -p 1234 user@localhost Please note that you need to set up ssh keys for automatic login (no password prompt). If you want to create the SSH tunnel interactively, you can remove the options -f -N . For more info, man ssh .
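If the tunnel needs to survive network hiccups, one common approach is to let autossh supervise it; this assumes the autossh package is installed on server S and is only a sketch:

autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -R 1234:localhost:22 user@your_machine_ip

The plain ssh command above works the same way; autossh simply restarts it whenever the connection drops.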
{ "source": [ "https://serverfault.com/questions/898173", "https://serverfault.com", "https://serverfault.com/users/91978/" ] }
899,551
http://.................1168951531 Which, when put into chrome, previews to the URL http://69.172.200.235/ , which redirects (by external server response code 3XX ) to www.test.com , but that is outside the scope of what I'm trying to figure out. How does such a weird URL like above resolve to an IP address? Is this a formatting rule?
Chrome is interpreting the number 1168951531 as a decimal number, which when represented in hexadecimal is 45ACC8EB. 45ACC8EB in hex is the same as the dotted decimal 69.172.200.235, when you take each pair of hex digits as one decimal number. 45 -> 69 AC -> 172 C8 -> 200 EB -> 235 Short answer: It's the pure decimal representation of the same IP address.
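You can reproduce the conversion in a shell by treating the number as a 32-bit value and peeling off one octet at a time:

$ n=1168951531; printf '%d.%d.%d.%d\n' $((n>>24&255)) $((n>>16&255)) $((n>>8&255)) $((n&255))
69.172.200.235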
{ "source": [ "https://serverfault.com/questions/899551", "https://serverfault.com", "https://serverfault.com/users/458733/" ] }
899,573
I have 3 servers that I will use for a new Ceph cluster. It's my first Ceph "playground"... Each server has 2x1TB and 6x2TB HDDs connected to two separate 4-channel SAS controllers, each with 1GB cache + BBU, so I plan to optimize those for throughput. The first two disks will be used as a RAID-1 array for the OS and probably journals (still researching on that). Drives 3 to 8 will be exposed as a separate RAID-0 devices in order to utilize the controller caches. I'm confused however about what will be the best tripe size and since I can't change that later without losing data I decided to ask here. Can somebody please explain? The default for the controllers (LSI 9271-4i) is 256k. I see some documents mentioning stripe width (e.g. here ) defaulting to 64kb, but I'm still unsure about that. Interestingly there are no discussions on this topic. Maybe because many people run such setups in JBOD mode or because it just doesn't matter that much... Since this will be my first cluster I will try to stick with the default settings as much as possible.
{ "source": [ "https://serverfault.com/questions/899573", "https://serverfault.com", "https://serverfault.com/users/130407/" ] }
899,577
I'm looking for a way to snapshot/image a Google Cloud instance that uses multiple disks. From what I've found so far, this does not seem to be supported, as I haven't seen a way to create an image that references more than one disk. I'm surprised if this isn't available, as Amazon has had that functionality for a long time. There you can simply issue a command to create an image from an existing instance, and it will automatically snapshot all disks attached to the instance and include them in the image, such that launching a new instance from the image creates all new disks based off the snapshots. Is there a way to do this easily in GCE that I am missing, or does it just require custom scripting?
{ "source": [ "https://serverfault.com/questions/899577", "https://serverfault.com", "https://serverfault.com/users/456788/" ] }
899,584
The list of "Internet accessible URLs required for connectivity to Microsoft Dynamics CRM Online" is comprised almost entirely of Microsoft owned domains. However, I cannot understand why access to the URL https://www.crmdyntint.com is required. Visiting this URL with a web browser just displays a generic domain parking page with ads, which makes me worry about the security of the domain. According to a Whois lookup the domain is owned by a Hong Kong based firm named China Capital . Does anyone know why access to https://www.crmdyntint.com is required by Microsoft Dynamics CRM?
{ "source": [ "https://serverfault.com/questions/899584", "https://serverfault.com", "https://serverfault.com/users/340637/" ] }
899,704
In S3, if I try to create a new bucket I get "Bucket name already exists", but I don't have two visible buckets. Edit: In other words, my bucket's been orphaned; I can't see it to delete it and I cannot recreate it, as per the image:
S3 bucket names are globally unique . This means that if someone else has a bucket of a certain name, you cannot have a bucket with that same name. So if you are trying to create a bucket, and AWS says it already exists, then it already exists, either in your AWS account or someone else's AWS account .
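If you want to check from the CLI whether a name is taken and whether it is yours (the bucket name below is a placeholder), a quick probe is:

aws s3api head-bucket --bucket my-bucket-name

No output means the bucket exists and you can access it, a 403 error means it exists but belongs to someone else (or you lack permission), and a 404 error means the name is currently free.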
{ "source": [ "https://serverfault.com/questions/899704", "https://serverfault.com", "https://serverfault.com/users/458866/" ] }
901,403
I'm running zsh 5.1.1 on Ubuntu 16.04. It seems that ~/.zprofile isn't sourced at login nor new terminal. ~/.zshrc is sourced however. I am running oh-my-zsh . Any ideas on why this is or how I can fix it?
~/.zprofile is only sourced when zsh is run as a login shell, e.g. when logging in on the console or via SSH. It will not be sourced by zsh when opening a new terminal or starting a new zsh session from within a running session. Anything you need in all interactive sessions should be set in ~/.zshrc . Anything you need in all zsh sessions, including scripts, should be set in ~/.zshenv . You can find additional information in the zshall manpage and on this site . ~/.zprofile will (usually) also not be parsed by any other tools. So any environment variables set in ~/.zprofile will usually not be available in an X11 session. If you need some environment variable to be available globally in your session, you might want to have a look at man pam_env .
{ "source": [ "https://serverfault.com/questions/901403", "https://serverfault.com", "https://serverfault.com/users/460421/" ] }
903,253
Managing multiple servers, in excess of 90 currently, with 3 devops via Ansible. All is working great; however, there is a giant security problem right now. Each devop is using their own local ssh key to gain access directly to the servers. Each devop uses a laptop, and each laptop could potentially be compromised, thus opening the entire network of prod servers up to an attack. I am looking for a solution to centrally manage access, and thus block access for any given key. Not dissimilar to how keys are added to bitbucket or github. Off the top of my head I would assume the solution would be a tunnel from one machine, the gateway, to the desired prod server... while passing the gateway the request would pick up a new key and use it to gain access to the prod server. The result would be we can quickly and efficiently kill access for any devop within seconds by just denying access to the gateway. Is this good logic? Has anyone seen a solution out there already to thwart this problem?
That's too complicated (checking if a key has access to a specific prod server). Use the gateway server as a jump host that accepts every valid key (but from which you can easily remove access for a specific key, which removes access to all servers in turn) and then add only the allowed keys to each respective server. After that, make sure you can reach the SSH port of every server only via the jump host. This is the standard approach.
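A rough sketch of the client side (the host names are made up): with OpenSSH 7.3 or newer, each devop can put the gateway in ~/.ssh/config and never connect to prod directly:

Host jump
    HostName gateway.example.com
    User devop1

Host prod-*
    ProxyJump jump

Revoking a devop then means removing their key from the gateway (and from the individual servers on the next config run), while the firewall ensures the prod SSH ports are only reachable from the gateway.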
{ "source": [ "https://serverfault.com/questions/903253", "https://serverfault.com", "https://serverfault.com/users/145179/" ] }
903,679
Is there a reason that the IPv6 standard uses AAAA rather than AA? I cannot find reference to AA or AAA records in DNS. Do the As indicate anything specific?
I take it this is a question specifically about the name of the RR type? It obviously could have had a different name; the name AAAA for IPv6 address records is a reference to an IPv6 address (128 bits) being four times the size of an IPv4 address (32 bits).
{ "source": [ "https://serverfault.com/questions/903679", "https://serverfault.com", "https://serverfault.com/users/339270/" ] }
903,780
I have an Amazon EC2 box. I have installed Apache, MariaDb and PHP on it. Among other things, I want to host a couple of WordPress websites on the EC2. How do I go about installing Certbot on Amazon Linux so that I may issue SSL certificates for the various websites hosted on Apache? I cannot find Amazon Linux listed on Certbot's website , and I read somewhere that Amazon Linux is close to CentOS/RHEL 7 so I picked that and tried to follow the instructions , but I got to sudo yum install certbot-apache and it didn't work, I get: Loaded plugins: langpacks, priorities, update-motd No package certbot-apache available. Error: Nothing to do Any help would be greatly appreciated.
For EC2 running Amazon Linux 2 AMI: Enable EPEL Repo: sudo amazon-linux-extras install epel Install Certbot: sudo yum install certbot-apache
{ "source": [ "https://serverfault.com/questions/903780", "https://serverfault.com", "https://serverfault.com/users/186827/" ] }
903,792
I use Ansible to build configuration files in ini format. When I use the ini_file module with option and value pair it works as expected, for example: - name: Create configuration file ini_file: path: /tmp/test.conf state: present section: lol option: foo value: bar Would result with: [lol] foo = bar However I want a specific section to exist without options in it, like so: - name: Create configuration file ini_file: path: /tmp/test.conf state: present section: lol But all it does is reporting ok on the task and moves on to the next one. When I use verbose mode I can see: ok: [localhost] => {"changed": false, "msg": "OK", "path": "/tmp/test.conf", "state": "absent"} How can I use the module to create option-less sections?
{ "source": [ "https://serverfault.com/questions/903792", "https://serverfault.com", "https://serverfault.com/users/344443/" ] }
904,304
I have a fresh install of latest centos 7 [root@localhost ~]# cat /etc/centos-release CentOS Linux release 7.4.1708 (Core) [root@localhost ~]# I wanted to install something and wget was not installed so when I tried to install wget I saw tha yum is giving error. I saw maybe all the topics about this problem on the internet but no luck I cant find my solution. [root@localhost ~]# yum update Loaded plugins: fastestmirror Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error" http://mirror.centos.org/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" Trying other mirror. One of the configured repositories failed (CentOS-7 - Base), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=base ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable base or subscription-manager repos --disable=base 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=base.skip_if_unavailable=true failure: repodata/repomd.xml from base: [Errno 256] No more mirrors to try. http://mirror.centos.org/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" [root@localhost ~]# So when I list the repos I get this: [root@localhost ~]# yum repolist all Loaded plugins: fastestmirror Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error" http://mirror.centos.org/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" Trying other mirror. http://mirror.centos.org/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" Trying other mirror. Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=centosplus&infra=stock error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error" http://mirror.centos.org/centos/7/centosplus/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" Trying other mirror. 
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=extras&infra=stock error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error" http://mirror.centos.org/centos/7/extras/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" Trying other mirror. Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates&infra=stock error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error" http://mirror.centos.org/centos/7/updates/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error" Trying other mirror. repo id repo name status C7.0.1406-base/x86_64 CentOS-7.0.1406 - Base disabled C7.0.1406-centosplus/x86_64 CentOS-7.0.1406 - CentOSPlus disabled C7.0.1406-extras/x86_64 CentOS-7.0.1406 - Extras disabled C7.0.1406-fasttrack/x86_64 CentOS-7.0.1406 - CentOSPlus disabled C7.0.1406-updates/x86_64 CentOS-7.0.1406 - Updates disabled C7.1.1503-base/x86_64 CentOS-7.1.1503 - Base disabled C7.1.1503-centosplus/x86_64 CentOS-7.1.1503 - CentOSPlus disabled C7.1.1503-extras/x86_64 CentOS-7.1.1503 - Extras disabled C7.1.1503-fasttrack/x86_64 CentOS-7.1.1503 - CentOSPlus disabled C7.1.1503-updates/x86_64 CentOS-7.1.1503 - Updates disabled C7.2.1511-base/x86_64 CentOS-7.2.1511 - Base disabled C7.2.1511-centosplus/x86_64 CentOS-7.2.1511 - CentOSPlus disabled C7.2.1511-extras/x86_64 CentOS-7.2.1511 - Extras disabled C7.2.1511-fasttrack/x86_64 CentOS-7.2.1511 - CentOSPlus disabled C7.2.1511-updates/x86_64 CentOS-7.2.1511 - Updates disabled C7.3.1611-base/x86_64 CentOS-7.3.1611 - Base disabled C7.3.1611-centosplus/x86_64 CentOS-7.3.1611 - CentOSPlus disabled C7.3.1611-extras/x86_64 CentOS-7.3.1611 - Extras disabled C7.3.1611-fasttrack/x86_64 CentOS-7.3.1611 - CentOSPlus disabled C7.3.1611-updates/x86_64 CentOS-7.3.1611 - Updates disabled base/7/x86_64 CentOS-7 - Base enabled: 0 base-debuginfo/x86_64 CentOS-7 - Debuginfo disabled base-source/7 CentOS-7 - Base Sources disabled c7-media CentOS-7 - Media disabled centosplus/7/x86_64 CentOS-7 - Plus enabled: 0 centosplus-source/7 CentOS-7 - Plus Sources disabled cr/7/x86_64 CentOS-7 - cr disabled extras/7/x86_64 CentOS-7 - Extras enabled: 0 extras-source/7 CentOS-7 - Extras Sources disabled fasttrack/7/x86_64 CentOS-7 - fasttrack disabled updates/7/x86_64 CentOS-7 - Updates enabled: 0 updates-source/7 CentOS-7 - Updates Sources disabled repolist: 0 [root@localhost ~]# Im not sure where can be the problem its a fresh install on my vmware/OVH dedicated server. I have another server installed and working fine but this time I got this problem . Any one can help me?I have also tried to enable all disabled lines in etc/yum.repo.d my /etc/yum.repos.d/CentOS-Base.repo # CentOS-Base.repo # # The mirror system uses the connecting IP address of the client and the # update status of each mirror to pick mirrors that are updated to and # geographically close to the client. You should use this for CentOS updates # unless you are manually picking other mirrors. # # If the mirrorlist= does not work for you, as a fall back you can try the # remarked out baseurl= line instead. 
# # [base] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 #released updates [updates] name=CentOS-$releasever - Updates mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 #additional packages that may be useful [extras] name=CentOS-$releasever - Extras mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 #additional packages that extend functionality of existing packages [centosplus] name=CentOS-$releasever - Plus mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/ gpgcheck=1 enabled=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Could not resolve host: mirrorlist.centos.org; Unknown error This indicates that you either (a) don't have a properly configured DNS server or (b) your network configuration isn't correct and you can't connect to a DNS server to check the hostname mirrorlist.centos.org . Try using ping 8.8.8.8 . If this fails, try ping <local-gateway-ip> . If that also fails, your local network configuration is wrong and you'll have to check the configuration. If you can ping 8.8.8.8 , try using host , nslookup or dig to check the DNS settings like host google.com or dig google.com . If these fail, you need to check your DNS settings. Check /etc/resolv.conf to see what's configured. UPDATE Since /etc/resolv.conf is blank, you need to set up a DNS resolver. I would suggest entering the following into the file using nano or vi (or whatever you're comfortable using): nameserver 9.9.9.9 Save this file, then try yum update again. You can also try other DNS hosts if you would rather, such as 8.8.8.8 or 8.8.4.4 or any of the OpenDNS hosts.
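Note that on many CentOS 7 setups /etc/resolv.conf is rewritten by the network service or NetworkManager, so a hand edit may not survive a reboot. A more persistent sketch (the interface name is a guess, check yours with ip a) is to put the DNS servers in the interface config:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DNS1=9.9.9.9
DNS2=8.8.8.8

and then restart networking with systemctl restart network.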
{ "source": [ "https://serverfault.com/questions/904304", "https://serverfault.com", "https://serverfault.com/users/462211/" ] }
906,083
So, I'm trying to get Nexus running based off of this image in Kubernetes, but it's failing with: mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied mkdir: cannot create directory '../sonatype-work/nexus3/tmp': Permission denied Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to No such file or directory From the documentation it says that the process runs with UID 200 and the volume must be mounted with those permissions: A persistent directory, /nexus-data, is used for configuration, logs, and storage. This directory needs to be writable by the Nexus process, which runs as UID 200. I've tried to search through the documentation to find a way to mount the volume with those permissions, however, I couldn't find any way to do it. Does anyone know whether you can specify in the configuration for either the PVC/PV or Deployment what UID to mount the volume with? If so, how?
There is no way to set the UID using the definition of Pod , but Kubernetes preserves the ownership of the sourced volume. So, you can set the UID with an initContainer , which launches before the main container; just add it to the pod template spec of the Deployment : initContainers: - name: volume-mount-hack image: busybox command: ["sh", "-c", "chown -R 200:200 /nexus"] volumeMounts: - name: <your nexus volume> mountPath: /nexus
{ "source": [ "https://serverfault.com/questions/906083", "https://serverfault.com", "https://serverfault.com/users/335416/" ] }
906,108
We're using a 3rd-party service provider to send transactional email. I recently noticed increased failure rates for a given receiving domain. The sends fail with the error "498 No MX for example.com". The sends are retried after a given delay and then usually succeed after a couple retries. But sometimes, they exceed the retry limit and are dropped permanently. I contacted the support of the provider and they told me that this is due to the receiving domain declaring MX from different providers. $ dig mx example.com ;; ANSWER SECTION: example.com. 859 IN MX 25 mail05.example.com. example.com. 859 IN MX 20 mail11.example.net. They are referring to the fact that one MX is using example.com and the other is using example.net and that is apparently bad practice and can lead to the error described above. This is the first time I'm hearing something like that and I would instantly call BS on it, but I thought I'd give them the benefit of the doubt and hear what others have to say on the topic.
They are mostly wrong. It is not a bad practice to have more than one MX, and it's equally not a bad practice to have one or more of them with a hostname in another domain. In fact, it used to be quite common that people would set up their own mailserver in their own domain as their primary MX, and then have their ISP's mailserver as secondary MX. The one tiny part that might conceivably be relevant: if the MX in the other domain doesn't resolve properly, e.g. if the domain example.net is having DNS issues, that particular MX would be unreachable. But that's why you have more than one MX - if one fails, the others will still work. You should respond to the provider and point them at RFC 5321 , section 5.1. It's a bit too long to quote, but the gist of it is that if there's more than one MX, the sender must try at least the first two, and there's no restriction on having them in separate domains.
{ "source": [ "https://serverfault.com/questions/906108", "https://serverfault.com", "https://serverfault.com/users/61246/" ] }
906,972
My environment: # uname -a Linux app11 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64 GNU/Linux # # cat /etc/*release PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" # while trying to run apt-get update , I get bunch of errors: # apt-get update Ign:1 http://deb.debian.org/debian stretch InRelease Hit:2 http://security.debian.org stretch/updates InRelease Hit:3 http://deb.debian.org/debian stretch-updates InRelease Hit:4 http://deb.debian.org/debian stretch-backports InRelease Hit:5 http://deb.debian.org/debian stretch Release Get:6 http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease [6,377 B] Ign:7 https://artifacts.elastic.co/packages/6.x/apt stable InRelease Hit:8 https://artifacts.elastic.co/packages/6.x/apt stable Release Get:9 http://packages.cloud.google.com/apt google-compute-engine-stretch-stable InRelease [3,843 B] Get:10 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-stretch InRelease [3,876 B] Hit:11 https://download.docker.com/linux/debian stretch InRelease Err:6 http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB Err:9 http://packages.cloud.google.com/apt google-compute-engine-stretch-stable InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB Err:10 http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-stretch InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB Fetched 6,377 B in 0s (7,132 B/s) Reading package lists... Done W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt cloud-sdk-stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.cloud.google.com/apt google-compute-engine-stretch-stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. 
GPG error: http://packages.cloud.google.com/apt google-cloud-packages-archive-keyring-stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: Failed to fetch http://packages.cloud.google.com/apt/dists/cloud-sdk-stretch/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: Failed to fetch http://packages.cloud.google.com/apt/dists/google-compute-engine-stretch-stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: Failed to fetch http://packages.cloud.google.com/apt/dists/google-cloud-packages-archive-keyring-stretch/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: Some index files failed to download. They have been ignored, or old ones used instead. # Please advise.
Per Installing Google Cloud SDK | Cloud SDK Documentation - Debian/Ubuntu: curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - OR curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - followed by: sudo apt-get update
{ "source": [ "https://serverfault.com/questions/906972", "https://serverfault.com", "https://serverfault.com/users/10683/" ] }
910,071
I'm trying to re-generate ssh host keys on a handful of remote servers via ansible (and ssh-keygen ), but the files don't seem to be showing up. The playbook runs OK, but the files on the remote are not altered. I need to resort to the echo -e hackery since these remotes are running Ubuntu 14.04 and haven't the correct version of the python-pexpect available (according to ansible). What am I missing? My playbook and output are below: playbook --- - hosts: all become: true gather_facts: false tasks: - name: Generate /etc/ssh/ RSA host key command : echo -e 'y\n'|ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C "" -N "" register: output - debug: var=output.stdout_lines - name: Generate /etc/ssh/ DSA host key command : echo -e 'y\n'|ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C "" -N "" register: output - debug: var=output.stdout_lines - name: Generate /etc/ssh/ ECDSA host key command : echo -e 'y\n'|ssh-keygen -q -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -C "" -N "" register: output - debug: var=output.stdout_lines output $ ansible-playbook ./playbooks/ssh-hostkeys.yml -l myhost.mydom.com, SUDO password: PLAY [all] ********************************************************************************************** TASK [Generate /etc/ssh/ RSA host key] ****************************************************************** changed: [myhost.mydom.com] TASK [debug] ******************************************************************************************** ok: [myhost.mydom.com] => { "output.stdout_lines": [ "y", "|ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C -N " ] } TASK [Generate /etc/ssh/ DSA host key] ****************************************************************** changed: [myhost.mydom.com] TASK [debug] ******************************************************************************************** ok: [myhost.mydom.com] => { "output.stdout_lines": [ "y", "|ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C -N " ] } TASK [Generate /etc/ssh/ ECDSA host key] **************************************************************** changed: [myhost.mydom.com] TASK [debug] ******************************************************************************************** ok: [myhost.mydom.com] => { "output.stdout_lines": [ "y", "|ssh-keygen -q -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -C -N " ] } PLAY RECAP ********************************************************************************************** myhost.mydom.com : ok=6 changed=3 unreachable=0 failed=0
As far as I know, the only reason why you would need to pipe a 'y' to ssh-keygen is if your command is replacing an existing file. In my opinion this is not a good way to do something from a configuration management tool. You should adjust your tasks to make them idempotent. Specifically, if you add creates: filename to your command, then the new keys will only be created when they don't already exist, instead of being replaced each time you run that playbook. --- - hosts: all become: true gather_facts: false tasks: - name: Generate /etc/ssh/ RSA host key command : ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C "" -N "" args: creates: /etc/ssh/ssh_host_rsa_key - name: Generate /etc/ssh/ DSA host key command : ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C "" -N "" args: creates: /etc/ssh/ssh_host_dsa_key - name: Generate /etc/ssh/ ECDSA host key command : ssh-keygen -q -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -C "" -N "" args: creates: /etc/ssh/ssh_host_ecdsa_key If for some reason you wanted to replace those keys, for example because they were too old, you might want to add another task to remove them. Here is a simple delete: - file: state: absent path: "{{item}}" loop: - /etc/ssh/ssh_host_rsa_key - /etc/ssh/ssh_host_dsa_key - /etc/ssh/ssh_host_ecdsa_key If you wanted to delete files generated before a certain time, you could use the stat module to retrieve details about those files, and set up when conditions to selectively remove them if they were older than a certain date, as sketched below.
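A rough sketch of that stat-based approach for one key, removing it if it is older than about 30 days (untested; the threshold is arbitrary, and it assumes gather_facts is enabled so ansible_date_time is available, unlike the playbook above):
- stat:
    path: /etc/ssh/ssh_host_rsa_key
  register: rsa_key

- file:
    path: /etc/ssh/ssh_host_rsa_key
    state: absent
  when: >
    rsa_key.stat.exists and
    (ansible_date_time.epoch | int) - (rsa_key.stat.mtime | int) > 2592000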
{ "source": [ "https://serverfault.com/questions/910071", "https://serverfault.com", "https://serverfault.com/users/151174/" ] }
912,162
Every time I try to make a mysqldump I get the following error: $> mysqldump --single-transaction --host host -u user -p db > db.sql mysqldump: Couldn't execute 'SELECT COLUMN_NAME, JSON_EXTRACT(HISTOGRAM, '$."number-of-buckets-specified"') FROM information_schema.COLUMN_STATISTICS WHERE SCHEMA_NAME = 'db' AND TABLE_NAME = 'Absence';': Unknown table 'COLUMN_STATISTICS' in information_schema (1109) The result is an incomplete dump. The strange thing is that the same command, executed from another host, works without throwing any errors. Has anyone experienced the same problem? I'm using mysql-client 8.0 to access a MySQL 5.7 server - maybe that is the reason?
This is due to a new flag that is enabled by default in mysqldump 8 . You can disable it by adding --column-statistics=0 . The command will be something like: mysqldump --column-statistics=0 --host=<server> --user=<user> --password=<password> Check this link for more information. To disable column statistics by default, you can add [mysqldump] column-statistics=0 to a MySQL config file, go to /etc/my.cnf , ~/.my.cnf , or directly to /etc/mysql/mysql.cnf .
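For reference, the config-file form would look something like this (the file location depends on your setup -- ~/.my.cnf is just the per-user option):
# ~/.my.cnf
[mysqldump]
column-statistics=0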
{ "source": [ "https://serverfault.com/questions/912162", "https://serverfault.com", "https://serverfault.com/users/468134/" ] }
912,433
I've got access to two computers (A and B) on a network. Both have got a static IP address with a subnet mask of 255.255.255.128 (I checked that a DHCP server was not being used). I want to configure multiple IP addresses to the same machine and hence I want to know what all IP addresses are already being used in the subnet. From an earlier question , I tried nmap -sP -PR 172.16.128.* command, but, I'm skeptical about its result as the same command gives different results on my two computers (A and B). On A, the result shows, a list of 8 IP addresses which are (supposedly) already being used, including that of A and B . Nmap done: 256 IP addresses (8 hosts up) scanned in 1.23 seconds But on B, the result is different i.e., Nmap done: 256 IP addresses (0 hosts up) scanned in 0.00 seconds The result on B is not even showing its own IP address as well as the IP address of A! What exactly am I doing wrong here? Is there any foolproof way in Red Hat Linux (RHEL) of discovering all IP addresses being used in the subnet of which my computer is a part of? RHEL: 6.5 Nmap version: 5.51
Any well-behaved device on an Ethernet LAN is free to ignore nearly any traffic, so PINGs, port scans, and the like are all unreliable. Devices are not, however, free to ignore ARP requests , afaik. Given that you specify you're scanning a local network, I find the least-fragile method of doing what you want is to try to connect to a remote address, then look in my ARP cache. Here's a simple, non-filtering device (ie, one which isn't configured to ignore some classes of IP traffic): [me@risby tmp]$ ping -c 1 -W 1 192.168.3.1 PING 192.168.3.1 (192.168.3.1) 56(84) bytes of data. 64 bytes from 192.168.3.1: icmp_seq=1 ttl=64 time=0.351 ms [...] [me@risby tmp]$ arp -a -n|grep -w 192.168.3.1 ? (192.168.3.1) at b8:27:eb:05:f5:71 [ether] on p1p1 Here's a filtering device (one configured with a single line of iptables to ignore all traffic): [me@risby tmp]$ ping -c 1 -W 1 192.168.3.31 [...] 1 packets transmitted, 0 received, 100% packet loss, time 0ms [me@risby tmp]$ arp -a -n|grep -w 192.168.3.31 ? (192.168.3.31) at b8:27:eb:02:e4:46 [ether] on p1p1 Here's a device that's just down; note the lack of a MAC address: [me@risby tmp]$ ping -c 1 -W 1 192.168.3.241 [...] 1 packets transmitted, 0 received, 100% packet loss, time 0ms [me@risby tmp]$ arp -a -n|grep -w 192.168.3.241 ? (192.168.3.241) at <incomplete> on p1p1 This method's not infallible - it misses devices that are turned off, for one thing - but it's the least-dreadful method I've yet tried. Edit : Eric Duminil, yes, it only works on a local network; see paragraph one. Vishal, the methods are functionally identical. Note the text quoted in Leo's answer about nmap : When a privileged user tries to scan targets on a local ethernet network, ARP requests are used unless --send-ip was specified. His method involves less typing. Mine can be done without privilege, and may give you a better understanding of what's actually happening. But the same thing is done on the wire in both cases.
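To apply this across the whole subnet from the question, a quick-and-dirty sweep might look like the following (the 172.16.128.1-126 host range assumes the /25 starts at .0 -- adjust to your actual addressing):
for i in $(seq 1 126); do
    ping -c 1 -W 1 172.16.128.$i > /dev/null 2>&1 &
done
wait
arp -a -n | grep -v incomplete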
{ "source": [ "https://serverfault.com/questions/912433", "https://serverfault.com", "https://serverfault.com/users/468699/" ] }
914,116
Is there a way to stop the Apache server without terminating executing requests, basically a way to tell it - don't accept any more connections and shut down when you finish your current ones?
Yes. apachectl -k graceful-stop https://httpd.apache.org/docs/2.4/stopping.html
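Per that documentation, graceful-stop corresponds to the WINCH signal, so on a host without apachectl you could do something like this instead (the PID file path varies by distro, so treat it as a placeholder):
kill -WINCH $(cat /var/run/httpd/httpd.pid)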
{ "source": [ "https://serverfault.com/questions/914116", "https://serverfault.com", "https://serverfault.com/users/307483/" ] }
916,724
I occasionally get the following 421 error: Misdirected Request The client needs a new connection for this request as the requested host name does not match the Server Name Indication (SNI) in use for this connection. However, refreshing the browser clears the error and the page loads normally. The next time loading the page will not produce and error and as such the pattern seems pretty random. The only pattern I can see is that this may happen When I am redirecting a page using header("Location: " . $url); I have a PositiveSSL Multi-Domain Certificate from Comodo. My servers are Apache on a shared web hosting service so I don't have access to the configuration. I load pages from one domain and within the page are links to a second domain on the certificate. Everything I've read regarding this error seems to point to this problem being related to this being a multi-domain certificate. What I would like to know is if there is anything on the web page (php) coding side of things that can cause this (and can be fixed) or if it is a configuration error or possibly a server error and only my hosting service can fix it. My hosting service has so far been unable to provide anything and requested calling back with the exact time it happens next so they can research it. Any help would be appreciated as I am not overly confident they can figure this out. UPDATE Ok, almost a couple of years later and decided it was time to deal with it. I was able to get most of the problems resolved by removing my static domains which served images and javascript. However, I was still using a second domain for some of this content and Safari in particular was still giving me problems. I did more research and came across another article that talks about it here . Exactly what @Kevin describes. The article confirmed that it happens in Safari. So taking the advice, I set about getting separate certificates for each domain. I am on a shared host (Webhostinghub) and discovered they now offer free SSL (AutoSSL) that auto renews. It sounded to good to be true. They set me up with 5 free certificates. So far so good. I may even try to re-enable the static domains to test. If this all works, I'll save $ to boot as a bonus and let my Comodo certificates expire in July.
This is caused by the following sequence of events: The server and client both support and use HTTP/2. The client requests a page at foo.example.com . During TLS negotiation, the server presents a certificate which is valid for both foo.example.com and bar.example.com (and the client accepts it). This could be done with a wildcard certificate or a SAN certificate. The client reuses the connection to make a request for bar.example.com . The server is unable or unwilling to support cross-domain connection reuse (for example because you configured their SSL differently and Apache wants to force a TLS renegotiation), and serves HTTP 421. The client does not automatically retry with a new connection (see for example Chrome bug #546991 , now fixed). The relevant RfC says that the client MAY retry, not that it SHOULD or MUST. Failing to retry is not particularly user-friendly, but might be desirable for a debugging tool or HTTP library. Event #6 is out of your control, but depending on the server's software, #5 may be fixable. Consult your server's HTTP/2 documentation for more information on how and when it sends HTTP 421. Alternatively, you could issue separate certificates for each domain, but that creates more administrative overhead and may not be worth it. You could also turn off HTTP/2 entirely, but that's probably overkill in most cases.
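As an illustration only (assuming the server is Apache 2.4 with mod_http2 -- the shared-hosting setup in the question may not allow this), the blunt per-vhost workaround is to drop back to HTTP/1.1 so connection reuse across hostnames never happens:
<VirtualHost *:443>
    ServerName www.example.com
    Protocols http/1.1
    # ... SSL and other directives ...
</VirtualHost>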
{ "source": [ "https://serverfault.com/questions/916724", "https://serverfault.com", "https://serverfault.com/users/203564/" ] }
916,941
Due to problems with captive portals and the default Docker IP range I am trying to make Docker use the 198.18.0.0 range, instead of 172.17.0.0, which clashes with the captive portals used on the trains where I live. Following the docs , I created /etc/docker/daemon.json , and put the following in it: { "bip":"198.18.0.0/16" } This worked for docker0, but it seems to not have affected any of the other networks, and using docker compose the first network created is 172.17.0.0, which recreates the clash. What can I do to change the default subnet for all docker networks (preferably without having to state my custom IP range in every compose file)?
It is possible to redefine default range. $ docker -v Docker version 18.06.0-ce, build 0ffa825 Edit or create config file for docker daemon: # nano /etc/docker/daemon.json Add lines: { "default-address-pools": [ {"base":"10.10.0.0/16","size":24} ] } Restart dockerd: # service docker restart Check the result: $ docker network create foo $ docker network inspect foo | grep Subnet "Subnet": "10.10.1.0/24" It works for docker-compose too. More info here https://github.com/moby/moby/pull/29376 (merged)
{ "source": [ "https://serverfault.com/questions/916941", "https://serverfault.com", "https://serverfault.com/users/433132/" ] }
918,335
I would like to avoid backports, they always seem to mess up my packages. So I was thinking tools like conda / virtualenv / maybe even docker can help. What's the most simple / cleanest way to work with python 3.7 on my system?
It would be wise to use pyenv to safely manage multiple versions of Python installed on the same system. Nonetheless, this should get you up and running with Python 3.7.10 on Ubuntu 16.04 # WARNING: As of April 30th 2021, Ubuntu Linux 16.04 LTS will no longer supported # NOTE: It appears that Python 3.7.* has arrived into maintenance mode and will likely # only be getting security updates. See release notes https://www.python.org/downloads/release/python-3710/ # Install requirements sudo apt-get install -y build-essential \ checkinstall \ libreadline-gplv2-dev \ libncursesw5-dev \ libssl-dev \ libsqlite3-dev \ tk-dev \ libgdbm-dev \ libc6-dev \ libbz2-dev \ zlib1g-dev \ openssl \ libffi-dev \ python3-dev \ python3-setuptools \ wget # Prepare to build mkdir /tmp/Python37 mkdir /tmp/Python37/Python-3.7.10 cd /tmp/Python37/ # Pull down Python 3.7.10, build, and install wget https://www.python.org/ftp/python/3.7.10/Python-3.7.10.tar.xz tar xvf Python-3.7.10.tar.xz -C /tmp/Python37 cd /tmp/Python37/Python-3.7.10/ ./configure --enable-optimizations sudo make altinstall Then you would just call Python like so: python3.7 ./yourScript.py This is a screenshot of multiple versions of Python co-existing in a docker container and how they can be distinguished: Pip should have been installed with this installation as well. To install packages use this format: pip3.7 -V
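Since the answer recommends pyenv but doesn't show it, here is a minimal sketch (this uses the standard pyenv-installer one-liner; the build dependencies listed above are still needed):
curl https://pyenv.run | bash
# add the shell init lines the installer prints to ~/.bashrc, open a new shell, then:
pyenv install 3.7.10
pyenv global 3.7.10
python --version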
{ "source": [ "https://serverfault.com/questions/918335", "https://serverfault.com", "https://serverfault.com/users/214924/" ] }
920,461
I have a bash script that generates a self-signed certificate and works perfectly fine: #! /bin/bash # Generate self signed root CA cert openssl req -nodes -x509 -days 358000 -newkey rsa:2048 -keyout ca.key -out ca.crt -subj "/C=IR/ST=TEH/L=Torento/O=CTO/OU=root/CN=es.example.com/[email protected]" # Generate server cert to be signed openssl req -nodes -newkey rsa:2048 -days 358000 -keyout server.key -out server.csr -subj "/C=IR/ST=TEH/L=Torento/O=CTO/OU=server/CN=es.example.com/[email protected]" # Sign the server cert openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt # Create server PEM file cat server.key server.crt > server.pem # Generate client cert to be signed openssl req -nodes -newkey rsa:2048 -days 358000 -keyout client.key -out client.csr -subj "/C=IR/ST=TEH/L=Torento/O=CTO/OU=client/CN=es.example.com/[email protected]" # Sign the client cert openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAserial ca.srl -out client.crt # Create client PEM file cat client.key client.crt > client.pem When I check the expiration time of the generated client.pem , it shows expiration time at 10th of Aug.: $ openssl x509 -enddate -noout -in client.pem notAfter=Aug 10 12:32:07 2018 GMT What is the problem with expiration date?
The validity is set with openssl x509 and not with openssl req . If you put the -days option on the x509 command, it will work. You get the 30/08 because there isn't a -days option to override the default certificate validity of 30 days, as mentioned in the x509 man page: -days arg specifies the number of days to make a certificate valid for. The default is 30 days. Side note: generating a certificate with 358000 days (980 years!) of validity is far too long if you want reasonable security.
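Applied to the script in the question, the two signing steps would become something like this (using a 10-year validity as a saner value, per the side note above):
openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
openssl x509 -req -days 3650 -in client.csr -CA ca.crt -CAkey ca.key -CAserial ca.srl -out client.crt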
{ "source": [ "https://serverfault.com/questions/920461", "https://serverfault.com", "https://serverfault.com/users/121615/" ] }
920,474
How does one automatically check if your Cisco ASA is running the most recent or non-vulnerable version with external monitoring? With SNMP, you can get the version number of an ASA: $ snmpget -v2c -c password 1.2.3.4 iso.3.6.1.2.1.1.1.0 iso.3.6.1.2.1.1.1.0 = STRING: "Cisco Adaptive Security Appliance Version 9.8(2)" But I can find nothing (URL/API/CVE database) to compare this with, or to test if that version has known vulnerabilities. The various Nagios plugins I can find (like check_snmp_checklevel and nm_check_version ) also don't do this. They merely allow verifying against version in a config file. Pages like these have version info, but parsing that is really unreliable of course. The Cisco ASA has a 'check for update' feature which must have some kind of URL it checks, but we don't have the cisco.com account. And I don't know what the URL is, and it's probably https, so sniffing it doesn't help. Having said that, if people know the password protected update URL, I'll gladly take it. Edit: it's even more complicated, because this CVE states that for version 9.8, version 9.8.2.28 is patched. But that patch-level is not visible in SNMP, nor in the GUI under 'About ASA'...
{ "source": [ "https://serverfault.com/questions/920474", "https://serverfault.com", "https://serverfault.com/users/31475/" ] }
920,502
I did a google search on kernel crash dumps, and while I found plenty of information on what they are and how to set them up, I could not seem to get a recommendation on whether they should be enabled or disabled on a production server. From what I understand, kernel crash dumps are mostly useful for developers who are debugging kernels. Would enabling kernel crash dumps provide any value to the average system administrator? And if so, are there downsides to enabling them (e.g., introducing security vulnerabilities or causing thrashing on low memory systems)?
{ "source": [ "https://serverfault.com/questions/920502", "https://serverfault.com", "https://serverfault.com/users/449841/" ] }
920,507
How can I log all executed commands over SSH on the client machine? Let's say I'm making a connection to example.com and type the following commands: $ ls $ touch hello $ mkdir world $ mv hello world/ I want on my client to have all typed commands in a file like this: ls touch hello mkdir world mv hello world/ My goal is if I make a mistake and break the server, I still can search which command goes wrong. I can't use ~/.bash_logout to save commands after disconnection nor ~/.bash_history because it's server-side. I didn't find anything relevant with ~/.ssh/config so I'm asking here. Any idea? EDIT: To be more clear, if I connect from machine A to machine B, I want to have the history file on machine A.
{ "source": [ "https://serverfault.com/questions/920507", "https://serverfault.com", "https://serverfault.com/users/477564/" ] }
920,540
Ok so at work we were planning to scale down the number of nodes in Azure Kubernetes service. Before doing this I wanted to see what would happen if I overloaded the nodes on a test cluster. On a 3 node test cluster I wrote a overload.yaml which spawned 200 wordpress pods kubectl apply -f overload.yaml kubectl get deployments --all-namespaces=true This shows everything looks good, Azure's web portal showed only 30% cpu and ram usage. (It said 200 wordpress pods desired, 200 wordpress pods available, and it showed 8 pods from the kube-system namespace, and showed them all as available) All good so I bumped it up to 300 wordpress replicas. now kubectl get deployments --all-namespaces=true shows 300 wordpress pods desired, 105 wordpress pods available. It showed 0 of 8 kube-system deployments available, later only 2 of 8 restarted, which seems like a really bad thing, Azure's web portal showed 2 nodes were unavailable. az aks browse stopped working kubectl get pods --namespace=kube-system shows status nodelost, unknown, pending, and only 2 running that successfully autohealed. ~An hour later the Azure nodes were replaced based on uptime listed in the Azure web portal. I think they went down only because the kube-system pods went down, which I'm guessing caused them to fail a health check and triggered some auto recovery mechanism. Anyways is there a way to guarantee/reserve resources for deployments in the kube-system namespace? (Or is this a bug in kubernetes or azure?, because it seems like that should be default behavior to give preference to deployments in kube-system namespace) Side note: I did tell the overload.yaml deployment to scale from 300 instances to 1 instance, but the kubernetes system resources deployments availability isn't restored. I tired kubectl delete pods --all --namespace=kube-system to force the kube-system deployment's to redeploy the system pods, that doesn't help either. Waiting 1 hour for azure to detect the nodes are failing healthchecks, and then reprovisioning is a terrible solution. I'd rather prevent it from happening in the first place by a method to guarantee/reserver resources for kube-system. But I'd also be curious to know if anyone knows an alternate way to force redeploy pods beyond deleting pods of a deployment.
{ "source": [ "https://serverfault.com/questions/920540", "https://serverfault.com", "https://serverfault.com/users/235967/" ] }
925,281
I'm trying to add a comment to an existing ufw firewall rule, but I can't find the exact command. I can easily add a new rule with a comment like: sudo ufw allow in on eth0 to any port 80 comment 'test' But how do I add a comment to an existing rule?
If you add exactly the same rule again, the existing rule gets overwritten and the comment is updated. E.g.: recyber@linux:~$ sudo ufw allow from 10.0.0.0/24 to any port 1234 comment "Comment" Rule updated
{ "source": [ "https://serverfault.com/questions/925281", "https://serverfault.com", "https://serverfault.com/users/354676/" ] }
926,974
Let's Encrypt are providing free SSL certificates. Are there any downsides compared to other, paid certificates e.g. AWS Certificate Manager ?
Certificate lifespan Security Shorter lifespan is better. Simply because revocation is mostly theoretical, in practice it cannot be relied on (big weakness in the public PKI ecosystem). Management Without automation: Longer lifespan is more convenient. LE may not be feasible if you, for whatever reason, cannot automate the certificate management With automation: Lifespan doesn't matter. End-user impression End-users are unlikely to have any idea one way or another. Level of verification Security Letsencrypt provides DV level of verification only. Buying a cert you get whatever you pay for (starting at DV, with the same level of assertion as with LE). DV = only domain name control is verified. OV = owner entity (organization) information is verified in addition. EV = more thorough version of OV, which has traditionally been awarded with the "green bar" (but the "green bar" appears to be going away soon). Management When using LE, the work you put in is setting up the necessary automation (in this context, to prove domain control). How much work that is will depend on your environment. When buying a cert the DV/OV/EV level will define how much manual work will be required to get the cert. For DV it typically boils down going through a wizard paying and copy/pasting something or clicking something, for OV and EV you can pretty much count on needing to be contacted separately to do additional steps to confirm your identity. End-user impression End-users probably recognize the current EV "green bar" (which is going away), other than that they don't tend to actually look at the certificate contents. Theoretically, though, it is clearly more helpful with a certificate that states information about the controlling entity. But browsers (or other client applications) need to start actually showing this in a useful way before that has any effect for the typical user. Installation Security It is possible to do things incorrectly in ways that expose private keys or similar. With LE, the provided tooling is set up around reasonable practices. With a person who knows what they are doing, manual steps can obviously also be done securely. Management LE is very much intended to have all processes automated, their service is entirely API-based and the short lifespan also reflects how everything is centered around automation. When buying a cert, even with a CA that provides APIs to regular customers (not really the norm at this point) it will be difficult to properly automate anything other than DV and with DV you are paying for essentially the same thing that LE provides. If you are going for OV or EV levels, you can probably only partially automate the process. End-user impression If the installation is done correctly, the end-user will obviously not know how it was done. The chances of messing things up (eg, forgetting to renew or doing the installation incorrectly when renewing) are less with an automated process. Overall Traditional means of buying certs are particularly useful if you desire OV/EV certs, are not automating certificate management or want certs used in some other context than HTTPS.
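As a concrete example of the automation point: with Let's Encrypt the whole DV issue/renew cycle is typically a couple of commands plus a timer (this assumes certbot with the nginx plugin -- other clients and plugins work similarly):
sudo certbot --nginx -d www.example.com -d example.com
sudo certbot renew --dry-run   # actual renewal usually runs from a distro-provided cron job or systemd timer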
{ "source": [ "https://serverfault.com/questions/926974", "https://serverfault.com", "https://serverfault.com/users/88/" ] }
926,982
I have multiple identical servers behind a NAT. I use the same IP and just change the port to move between the servers via port forwarding. The issue is that I am using self-signed certificates and getting the "not secure" warning. Let's Encrypt won't issue certificates as they only issue them for FQDNs. Is it possible to apply a domain name to a NAT, or to servers behind a NAT, so that I can be issued a certificate?
{ "source": [ "https://serverfault.com/questions/926982", "https://serverfault.com", "https://serverfault.com/users/483793/" ] }
927,956
I have a lot of gz compressed log files which have generic names and I need to check the period of time they reflect. I know about the zcat | head but this works for the beginning of the file only. How can I just get the last line without decompressing the whole file?
If you want lines from the tail-end of a file rather than the head-end, use tail instead of head : $ zcat /var/log/syslog.2.gz | tail -1 Aug 24 07:09:02 myhost rsyslogd: [origin software="rsyslogd" swVersion="8.4.2" x-pid="796" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
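Since the question was about working out the period each rotated log covers, a small loop over head and tail does it (the glob is just an example -- point it at your own files):
for f in /var/log/syslog.*.gz; do
    echo "== $f"
    zcat "$f" | head -1
    zcat "$f" | tail -1
done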
{ "source": [ "https://serverfault.com/questions/927956", "https://serverfault.com", "https://serverfault.com/users/327535/" ] }
928,159
According to this comment by Tom O'Connor (slightly edited below) : You can seriously cheese off a datacentre by putting an UPS inside your own rack. What are the risks to the data center should a customer (somehow) choose to do this?
Pros: None Cons: It interrupts the flow of the Emergency Power Off (EPO) in the datacenter. If there is a life or death emergency in a datacenter, that EPO might be triggered to save someone's life. If your rack has its own UPS, it will violate that EPO order. You will not get extended runtime. Chances are as soon as your upstream UPS switches into battery mode, your UPS will detect the change in sine wave and drop into backup mode as well. You'll violate your warranty and potentially your datacenter's UPS warranty. UPSs are warrantied to be installed in very specific scenarios and power sources. Your UPS-on-UPS is not going to be a supported configuration. (from rexkogitans ) You have to care about your UPS while there is a staff caring for a UPS that may be provided to you nonetheless. So it adds administrating a UPS to your work really unnecessarily.
{ "source": [ "https://serverfault.com/questions/928159", "https://serverfault.com", "https://serverfault.com/users/184613/" ] }
928,376
Background I've been asked to create a systemd script for a new service, foo_daemon , that sometimes gets into a "bad state", and won't die via SIGTERM (likely due to custom signal handler). This is problematic for developers, as they are instructed to start/stop/restart the service via: systemctl start foo_daemon.service systemctl stop foo_daemon.service systemctl restart foo_daemon.service Problem Sometimes, due to foo_daemon getting into a bad state, we have to forcibly kill it via: systemctl kill -s KILL foo_daemon.service Question How can I setup my systemd script for foo_daemon so that, whenever a user attempts to stop/restart the service, systemd will: Attempt a graceful shutdown of foo_daemon via SIGTERM . Give up to 2 seconds for shutdown/termination of foo_daemon to complete. Attempt a forced shutdown of foo_daemon via SIGKILL if the process is still alive (so we don't have a risk of the PID being recycled and systemd issues SIGKILL against the wrong PID). The device we're testing spawns/forks numerous processes rapidly, so there is a rare but very real concern about PID recycling causing a problem. If, in practise, I'm just being paranoid about PID recycling, I'm OK with the script just issuing SIGKILL against the process' PID without being concerned about killing a recycled PID.
systemd already supports this out of the box, and it is enabled by default . The only thing you might want to customize is the timeout, which you can do with TimeoutStopSec= . For example: [Service] TimeoutStopSec=2 Now, systemd will send a SIGTERM, wait two seconds for the service to exit, and if it doesn't, it will send a SIGKILL. If your service is not systemd-aware, you may need to provide the path to its PID file with PIDFile= . Finally, you mentioned that your daemon spawns many processes. In this case, you might wish to set KillMode=control-group and systemd will send signals to all of the processes in the cgroup.
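Putting those pieces together, the [Service] section might look roughly like this (the ExecStart and PIDFile paths are placeholders -- PIDFile is only needed if foo_daemon is not systemd-aware):
[Service]
ExecStart=/usr/local/bin/foo_daemon
PIDFile=/run/foo_daemon.pid
TimeoutStopSec=2
KillMode=control-group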
{ "source": [ "https://serverfault.com/questions/928376", "https://serverfault.com", "https://serverfault.com/users/178289/" ] }
929,229
I have a raspi-type device in a data center, and recently I accidentally fat-fingered and pasted a shutdown command into the wrong terminal on my screen. Is there a way to keep shutdown -r but remove the #poweroff and #shutdown -P -H options? I want to keep the shutdown -r command. I like to put a timer on it in case I manage to freeze the system, or lock myself out with iptables rules. For example: shutdown -r +10
You should be using a systemd-based Linux distribution. In this case, you ought to be able to mask the poweroff target, so that systemd will refuse to execute it (and power off). e.g.: systemctl mask poweroff.target This makes it utterly impossible to shutdown the system, other than by rebooting. See that nothing happens: In this case, this VM's virtual power switch doesn't even work to shutdown the system anymore. But it still reboots perfectly well. To undo the change, of course, just unmask the target. Then you can shutdown the system. systemctl unmask poweroff.target
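Since the question also wanted shutdown -P and -H blocked, masking the halt target alongside poweroff may be worth considering (reboot.target is left untouched, so shutdown -r keeps working):
sudo systemctl mask poweroff.target halt.target
# to undo later:
sudo systemctl unmask poweroff.target halt.target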
{ "source": [ "https://serverfault.com/questions/929229", "https://serverfault.com", "https://serverfault.com/users/486025/" ] }
930,047
I'm playing around with CHEF in a CentOS7 VM, and the script failed due to the issue: systemd[1]: start request repeated too quickly for fail2ban.service I know this is configurable in systemd, but I'd just like to know, for testing purposes, if there is a way to "reset" systemd so I'm allowed to execute start fail2ban service without receiving this error forever. Right now I have to restart the OS so I'm able to execute it. Thanks
If you really have some reason for restarting a service numerous times in a few seconds (or more likely, the service is misconfigured and failing to start) and are running into start limits, you can reset it by using systemctl reset-failed <unit> . systemctl reset-failed fail2ban.service Of course, you should fix whatever you did to the service configuration to cause it to fail to start properly.
{ "source": [ "https://serverfault.com/questions/930047", "https://serverfault.com", "https://serverfault.com/users/293570/" ] }
930,052
In my Python script I needed a fast/efficient way to set a max filesize on a file I'm constantly writing to. Rather than bring the whole thing into py's RAM, I ran this shell command: sed -i '1d' file.csv I monitor the filesize periodically and run the command as needed. Problem is that now if I tail -f file.csv , tail stops tailing the file as soon as sed removes a line from it. Any solution?
{ "source": [ "https://serverfault.com/questions/930052", "https://serverfault.com", "https://serverfault.com/users/241137/" ] }
932,628
Surely I'm not the first one who has tried to serve a domain example.com from example.net/bbb , but I haven't found a solution yet. My NGINX configuration follows the guidelines and looks something like this: server { listen 80; server_name example.net; root /path/to/aaa; location /bbb/ { proxy_pass http://example.com/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location / { try_files $uri $uri/ /index.html; } location ~ \.(svg|ttf|js|css|svgz|eot|otf|woff|jpg|jpeg|gif|png|ico)$ { access_log off; log_not_found off; expires max; } } I can manage to render the root of example.com in example.net/bbb but: ISSUE 1 example.net/bbb/some/path doesn't work as expected and the index.html of example.net is rendered. ISSUE 2 Any asset in example.com/assets gives 404 because the browser looks for example.net/assets . It would be great if I could solve this without placing absolute paths everywhere.
The problem is basically that using a proxy_pass directive won't rewrite HTML code and therefor relative URL's to for instance a img src="/assets/image.png" won't magically change to img src="/bbb/assets/image.png" . I wrote about potential strategies to address that in Apache httpd here and similar solutions are possible for nginx as well: If you have control over example.com and the how the application/content is deployed there, deploy in the same base URI you want to use on example.net for the reverse proxy --> deploy your code in example.com/bbb and then your proxy_pass will become quite an easy as /assets/image.png will have been moved to /bbb/assets/image.png: location /bbb/ { proxy_pass http://example.com/bbb/; If you have control over example.com and the how the application/content is deployed: change to relative paths , i.e. rather than img src="/assets/image.png" refer to img src="./assets/image.png" from a page example.com/index.html and to img src="../../assets/image.png" from a page example.com/some/path/index.html Maybe you're lucky and example.com only uses a few URI paths in the root and non of those are used by example.net, then simply reverse proxy every necessary subdirectory : location /bbb/ { proxy_pass http://example.com/; } location /assets/ { proxy_pass http://example.com/assets/; } location /styles/ { proxy_pass http://example.com/styles/; give up using a example.com as subdirectory on example.net and instead host it on a subdomain of example.net : server { server_name bbb.example.net location / { proxy_pass http://example.com/; } } rewrite the (HTML) content by enabling the nginx ngx_http_sub_module . That will also allow you to rewrite absolute URL's with something similar to: location /bbb/ { sub_filter 'src="/assets/' 'src="/bbb/assets/'; sub_filter 'src="http://example.com/js/' 'src="http://www.example.net/bbb/js/' ; sub_filter_once off; proxy_pass http://example.com/; }
{ "source": [ "https://serverfault.com/questions/932628", "https://serverfault.com", "https://serverfault.com/users/349111/" ] }
933,943
I want to build a low-end 6TB RAID 1 archive on an old PC. MB: Intel d2500hn 64bit CPU: Intel Atom D2500 RAM: 4GB DDR3 533 MHz PSU: Chinese 500W NO GPU 1x Ethernet 1Gbps 2x SATA2 ports 1x PCI port 4x USB 2.0 I want to build a RAID1 archive on Linux (CentOS 7 I think; then I will install all I need, probably ownCloud or something similar), and I will use it on my home local network. Is a $10-20 PCI RAID controller better, or software RAID? If software RAID is better, which should I choose on CentOS? Is it better to put the system on an external USB drive and use the 2 disks on the SATA connectors, or should I put the system on one disk and then create the RAID? If I were to do a 3-disk RAID 5, should I choose a hardware RAID PCI card or simply a PCI SATA controller?
A 10-20$ "hardware" RAID card is nothing more than an opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it. A 200$ RAID card offers proper hardware support (ie: a RoC running another opaque, binary blob which is better and does not run on the main host CPU). I suggest staying away from these cards as well because, lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation. A 300/400$ RAID card offering a powerloss-protected writeback cache is worth buying, but not for a small, Atom-based PC/NAS. In short: I strongly suggest you use Linux software RAID. Another option to seriously consider is a mirrored ZFS setup but, with an Atom CPU and only 4 GB RAM, do not expect high performance. For other information, read here
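A minimal mdadm RAID1 sketch for the two-disk case (the device names are placeholders and mdadm --create is destructive, so double-check them first; /etc/mdadm.conf is the CentOS location):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf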
{ "source": [ "https://serverfault.com/questions/933943", "https://serverfault.com", "https://serverfault.com/users/490533/" ] }
934,336
I've run an image with: 'docker-compose up' With 'docker ps' I get: CREATED STATUS PORTS NAMES 55e1fd18acf1 simpleappnodedocker_web "node app.js" 6 seconds ago Up 6 seconds 0.0.0.0:9000->3000/tcp myapp 9879ff20e241 postgres:9.6 "docker-entrypoint..." 36 hours ago Up 36 hours 0.0.0.0:5432->5432/tcp nd-db I try to run bash to enter the shell, but I get an error. How do I solve this? I think I'm doing something wrong. $docker-compose run myapp /bin/bash ERROR: No such service: myapp docker-compose.yml: version: '2' services: web: container_name: myapp build: . command: node app.js ports: - "9000:3000"
I think you got the relation of docker and docker-compose wrong: docker-compose is a wrapper around docker . To do its job docker-compose needs its config: docker-compose.yaml Spinning your example further: create docker-compose.yaml : version: '2' services: web: container_name: myapp build: . command: node app.js ports: - "9000:3000" use docker-compose to start the container and run a command in the running container: docker-compose up docker-compose exec web /bin/bash docker-compose uses the name of the service - in your case this is web - whereas docker uses the container name - in this case myapp . So to run /bin/bash through docker , you would use the following: docker exec -ti myapp /bin/bash you could remove the container_name from docker-compose.yaml , then the container would be named automatically by docker-compose - similar to the service, but prefixed with the name of the docker-compose stack (the foldername where docker-compose.yaml is located).
{ "source": [ "https://serverfault.com/questions/934336", "https://serverfault.com", "https://serverfault.com/users/352751/" ] }
937,253
I have an EC2 instance with Apache as webserver (and Wildfly as app-server, although I'm not sure it has anything to do with this issue). In front of EC2 I have a load balancer which terminates HTTPS and applies the SSL cert. Both HTTP and HTTPS works fine in Chrome, but unfortunately not in Safari. Accessing http://test.papereed.com works fine, but accessing https://test.papereed.com gives the error "Safari can't open the page. The error is "The operation couldn't be completed. Protocol error" (NSPOSIXErrorDomain:100)" I've looked in /etc/httpd/logs/error_log and /etc/httpd/logs/access_log and also in the Safari console without finding any hint to solving the problem. And that's about how far my knowledge goes :-( Any hints how to trace this issue would be much appreciated.
curl (if compiled with HTTP/2 support) exhibits the same problem but shows the reason: http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [upgrade], value: [h2,h2c] It looks like your server is offering an upgrade to HTTP/2 even though the connection is already using HTTP/2 - which makes no sense. Not only that, it is explicitly forbidden. From RFC 7540 section 8.1.2.2 : An endpoint MUST NOT generate an HTTP/2 message containing connection-specific header fields; any message containing connection-specific header fields MUST be treated as malformed (Section 8.1.2.6).... connection-specific header fields, such as Keep-Alive, Proxy-Connection, Transfer-Encoding, and Upgrade This looks like a bug to me, since Apache should not send this header with HTTP/2. My guess is that you have a configuration like this: Protocols h2 h2c http/1.1 Given that browsers do not support HTTP/2 without TLS anyway, and that no Upgrade header is needed with HTTP/2 over TLS, I recommend that you replace this configuration with Protocols h2 http/1.1 This disables support for the unneeded HTTP/2 without TLS and should hopefully get rid of the Upgrade header, since it is only needed for upgrading from plain HTTP to plain HTTP/2. EDIT: according to the comment by the OP, changing the Protocols configuration did not help. It was necessary to explicitly work around this behavior (i.e. bug) of mod_http2 by deleting the Upgrade header: Header unset Upgrade
{ "source": [ "https://serverfault.com/questions/937253", "https://serverfault.com", "https://serverfault.com/users/331640/" ] }
937,468
I'm having trouble understanding my large S3 bill, and figured I'd ask here before dropping $30 on AWS monthly support. Basically, I have an Amazon EC2 instance that makes API calls to different cryptocurrency exchanges and saves the responses to the instance's disk. Calls are made about every 5 minutes; the response objects are about 100 KB each, are read by an R script, and are appended to a CSV file every ~8 minutes. That CSV file is synchronised to an Amazon S3 bucket about every 15 minutes. The CSV files are usually 10 MB or so, for about 15 cryptocurrencies, every 15 minutes. So looking in the Amazon S3 bucket, there might be 0.5 GB of space used at the most. However, the 'TimedStorage-ByteHours' reads at about 4 TB! Amazon Simple Storage Service TimedStorage-ByteHrs $89.55 $0.000 per GB - storage under the monthly global free tier 5 GB - Mo $0.00 $0.023 per GB - first 50 TB / month of storage used 3,893.399 GB - Mo $89.55 Any ideas?
Most likely you've got S3 Versioning enabled - that means old objects when overwritten with a newer version don't get deleted but are instead hidden in a history. To verify go to the Bucket -> Properties -> Versioning . You can also view the old versions in the browser, like on this screenshot I've got several versions of the 108c05...json file: If you've got versioning enabled but don't want to you can Suspend versioning but be aware that it won't delete the old versions, you'll have to either: use AWS-CLI and some scripting (start with aws s3api list-object-versions ) configure Bucket Lifecycle Policy to expire the old versions. That's done through S3 -> bucket -> Management -> Lifecycle -> Add lifecycle rule and then on the Expiration screen fill these details: Hope that helps :)
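If you'd rather confirm this from the CLI, list-object-versions shows the hidden non-current versions (the bucket name is a placeholder):
aws s3api list-object-versions --bucket my-bucket \
    --query 'Versions[?IsLatest==`false`].[Key, VersionId, Size]' --output table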
{ "source": [ "https://serverfault.com/questions/937468", "https://serverfault.com", "https://serverfault.com/users/482580/" ] }
937,547
I understand the argument regarding larger drives' increased likelihood of experiencing a URE during a rebuild, however I'm not sure what the actual implications are for this. This answer says that the entire rebuild fails, but does this mean that all the data is inaccessible? Why would that be? Surely a single URE from a single sector on the drive would only impact the data related to a few files, at most. Wouldn't the array still be rebuilt, just with some minor corruption to a few files? (I'm specifically interested in ZFS's implementation of RAID5 here, but the logic seems the same for any RAID5 implementation.)
It really depends on the specific RAID implementation: most hardware RAID will abort the reconstruction and some will also mark the array as failed , bringing it down. The rationale is that if an URE happens during a RAID5 rebuild it means some data are lost, so it is better to completely stop the array rather that risking silent data corruption. Note: some hardware RAID (mainly LSI based) will instead puncture the array, allowing the rebuild to proceed while marking the affected sector as unreadable (similar to how Linux software RAID behaves). linux software RAID can be instructed to a) stop the array rebuild (the only behavior of "ancient" MDRAID/kernels builds) or b) continue with the rebuild process marking some LBA as bad/inaccessible. The rationale is that it is better to let the user do his choice: after all, a single URE can be on free space, not affecting data at all (or affecting only unimportant files); ZRAID will show some file as corrupted, but it will continue with the rebuild process (see here for an example). Again, the rationale is that it is better to continue and report back to the user, enabling him to make an informed choice.
{ "source": [ "https://serverfault.com/questions/937547", "https://serverfault.com", "https://serverfault.com/users/273820/" ] }
939,909
I'm trying to create a private key and having an issue. When I use ssh-keygen -t rsa -b 4096 -C "[email protected]" , I get a private key in the following format. -----BEGIN OPENSSH PRIVATE KEY----- uTo43HGophPo5awKC8hoOz4KseENpgHDLxe5UX+amx8YrWvZCvsYRh4/wnwxijYx ... -----END OPENSSH PRIVATE KEY----- And this is not being accepted for an application that I'm trying to use. I'm expecting a key in the following RSA format. -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-128-CBC,25737CC2C70BFABADB1B4598BD8AB9E9 uTo43HGophPo5awKC8hoOz4KseENpgHDLxe5UX+amx8YrWvZCvsYRh4/wnwxijYx ... -----END RSA PRIVATE KEY----- How do I create the correct format? This is weird because every other mac I have creates the correct format, except the one I'm having problem with. I'm on a fresh installed Mac OS Mojave
I faced the same problem recently (after upgrade to mojave 10.14.1), here are 2 possible solutions for this issue. Downgrade your ssh-keygen binary (you can easily get old version from any linux/docker image) OR Add option -m PEM into your ssh-keygen command. For example, you can run ssh-keygen -m PEM -t rsa -b 4096 -C "[email protected]" to force ssh-keygen to export as PEM format. It seems like in the current ssh-keygen version in mojave, the default export format is RFC4716 as mentioned here
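If you've already generated a key in the new format, it can also be converted in place rather than regenerated (this rewrites the private key file, so keep a backup first):
ssh-keygen -p -m PEM -f ~/.ssh/id_rsa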
{ "source": [ "https://serverfault.com/questions/939909", "https://serverfault.com", "https://serverfault.com/users/73080/" ] }
939,961
Due to the recent security findings that most SSDs probably implement encryption in a completely naive and broken way, I want to check which of my BitLocker machines are using hardware encryption and which ones are using software. I found a way to disable the use of hardware encryption, but I can't figure out how to check if I'm using hardware encryption (in which case, I'll have to re-encrypt the drive). How do I do it? I'm aware of manage-bde.exe -status which gives me an output such as: Disk volumes that can be protected with BitLocker Drive Encryption: Volume C: [Windows] [OS Volume] Size: 952.62 GB BitLocker Version: 2.0 Conversion Status: Used Space Only Encrypted Percentage Encrypted: 100.0% Encryption Method: XTS-AES 128 Protection Status: Protection On Lock Status: Unlocked Identification Field: Unknown Key Protectors: TPM Numerical Password but I don't know if the information I want is in this screen.
There exists a pretty new article on MSRC, partially explaining the issue and how to solve it. Thanks @Kevin Microsoft is aware of reports of vulnerabilities in the hardware encryption of certain self-encrypting drives (SEDs). Customers concerned about this issue should consider using the software only encryption provided by BitLocker Drive Encryption™. On Windows computers with self-encrypting drives, BitLocker Drive Encryption™ manages encryption and will use hardware encryption by default. Administrators who want to force software encryption on computers with self-encrypting drives can accomplish this by deploying a Group Policy to override the default behavior. Windows will consult Group Policy to enforce software encryption only at the time of enabling BitLocker. To check the type of drive encryption being used (hardware or software): Run manage-bde.exe -status from elevated command prompt. If none of the drives listed report "Hardware Encryption" for the Encryption Method field, then this device is using software encryption and is not affected by vulnerabilities associated with self-encrypting drive encryption. manage-bde.exe -status should show you if hardware-encryption is used. I don't have a HW encrypted drive ATM, so here is a reference link and the image it contains: The BitLocker UI in Control Panel does not tell you whether hardware encryption is used, but the command line tool manage-bde.exe does when invoked with the parameter status. You can see that hardware encryption is enabled for D: (Samsung SSD 850 Pro) but not for C: (Samsung SSD 840 Pro without support for hardware encryption):
{ "source": [ "https://serverfault.com/questions/939961", "https://serverfault.com", "https://serverfault.com/users/2563/" ] }
940,476
I am fairly new to network administration and therefore am already excited to have successfully set up a DNS record. Now I am a bit confused, because I would like to have this URL: http://www.example.org:8080/fetch/characters/ be actually reached by this http://www.example.org/fetch/characters/ So users can reach the service on port 8080 without having to explicitly set the port. How can I do this? Do I need some special application on my server? Or any redirect stuff to be applied to requests?
DNS records can't point to ports (with a few special-case exceptions that do not apply here). If you have a web service listening on port 8080 and want to reach it without specifying this port, you have 3 options: Make it actually listen on port 80 (or 443 with https). Configure whatever is already listening on port 80 to forward requests to your service on port 8080 (reverse proxy). If you can live with a redirect, use this instead of a proxy, but then your clients will see the :8080 part in their address bars after the redirect.
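For the reverse-proxy option, a minimal nginx sketch, assuming nginx is what already answers on port 80 and the service on port 8080 runs on the same host:

    server {
        listen 80;
        server_name www.example.org;
        location / {
            # hand every request over to the service on port 8080
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }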
{ "source": [ "https://serverfault.com/questions/940476", "https://serverfault.com", "https://serverfault.com/users/496579/" ] }
940,791
If I have an RPM located on a local disk - what is the difference between the following yum commands? sudo yum install /tmp/rpm_name.rpm sudo yum localinstall /tmp/rpm_name.rpm Note: I use RedHat/CentOS 7.
In RHEL 5 and previous versions, yum install only accepted package names from enabled repositories, and did not accept paths to local RPMs; you had to use yum localinstall to install these. In RHEL 6 and later, yum install accepts both package names and local filenames, so localinstall is no longer necessary, but it's included for backward compatibility. In RHEL 8, dnf localinstall is simply an alias for dnf install .
{ "source": [ "https://serverfault.com/questions/940791", "https://serverfault.com", "https://serverfault.com/users/246747/" ] }
941,735
I recently encountered for the first time an A record of the form: https://www.example.com. <TTL> IN A <IP address> As far as I know, this record is deliberate (i.e. not an error). I know that the colon and forward-slash are valid characters for a label, per RFC 2181 , but I don't understand the record's purpose. Does some certificate authority use this form for domain control validation? Does this form protect against some type of exploit? Trap some kind of user error or known issue with software?
The most likely explanation is that a user unfamiliar with DNS tried to configure the DNS records and made a mistake that's glaringly obvious to anyone familiar with DNS, but not to people who aren't. While a DNS label can generally be any arbitrary binary data, you should read the rest of section 11, in particular: Note however, that the various applications that make use of DNS data can have restrictions imposed on what particular values are acceptable in their environment. For example, that any binary label can have an MX record does not imply that any binary name can be used as the host part of an e-mail address. Clients of the DNS can impose whatever restrictions are appropriate to their circumstances on the values they use as keys for DNS lookup requests, and on the values returned by the DNS. If the client has such restrictions, it is solely responsible for validating the data from the DNS to ensure that it conforms before it makes any use of that data. Among other things, this means that the label syntax may be constrained depending on the RR type. As specified in RFC 1123 section 2.1 and RFC 952, Internet host names have such a constrained syntax, in which the colon and slash are not valid.
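You can observe such a record yourself, since dig will happily query the literal owner name, colon and slashes included; the name server below is a placeholder for whichever server is authoritative for the zone:

    dig 'https://www.example.com.' A @ns1.example.com +noall +answer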
{ "source": [ "https://serverfault.com/questions/941735", "https://serverfault.com", "https://serverfault.com/users/494415/" ] }
941,857
microk8s appears to be an easy way to install Kubernetes on Ubuntu. Several places refer to it as an alternative to minikube, which is not aimed at production environments, and this post seems to indicate that it's mostly aimed at development environments. However, I don't see a reason why it's not suitable for production environments. I have two Ubuntu servers and want to install Kubernetes on each while maintaining the legacy applications that also run on these servers. I'm wondering if microk8s is a good choice for this scenario. Is microk8s suitable for production environments, or is it just for development?
Just to update this for 2020 - this is from Canonical: What is MicroK8s? MicroK8s is a powerful, lightweight, reliable production-ready Kubernetes distribution. It is an enterprise grade Kubernetes distribution that has a small disk and memory footprint while offering production grade add-ons out-the-box such as Istio, Knative, Grafana, Cilium and more. Whether you are running a production environment or interested in exploring K8s, MicroK8s serves your needs. So I think it's pretty clear. https://ubuntu.com/blog/introduction-to-microk8s-part-1-2
{ "source": [ "https://serverfault.com/questions/941857", "https://serverfault.com", "https://serverfault.com/users/275351/" ] }
942,430
> host example.com example.com has address 93.184.216.34 example.com has IPv6 address 2606:2800:220:1:248:1893:25c8:1946 I type 93.184.216.34 instead of http://example.com in Chrome. It doesn't load the website. Why?
Because the proper HTTP Host header is often required to actually get the intended site. It's very common to host multiple web sites on the same IP address and distinguish between them based on the HTTP Host header specified by the client (as well as the TLS SNI value nowadays in the case of HTTPS). That is, when you entered http://example.com into your browser the Host header was example.com , but that is not the case when you entered 93.184.216.34 . You reach the same web server in both cases, but you receive different responses (in this particular case 200 vs. 404).
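You can see the effect with curl by supplying or withholding the Host header yourself (93.184.216.34 and example.com are the values from the question):

    curl -sI http://93.184.216.34/                                           # Host header is just the IP; often a 404 or a default site
    curl -sI -H 'Host: example.com' http://93.184.216.34/                    # same IP, but the server can now pick the right site
    curl -sI https://example.com/ --resolve example.com:443:93.184.216.34    # same idea for HTTPS, which also sets the SNI value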
{ "source": [ "https://serverfault.com/questions/942430", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
942,810
Both in server and client config I have set: cipher none auth none Following this advice I am also using UDP port 1195. When I launch server and client I get following warnings: Tue Dec 4 12:58:25 2018 ******* WARNING *******: '--cipher none' was specified. This means NO encryption will be performed and tunnelled data WILL be transmitted in clear text over the network! PLEASE DO RECONSIDER THIS SETTING! Tue Dec 4 12:58:25 2018 ******* WARNING *******: '--auth none' was specified. This means no authentication will be performed on received packets, meaning you CANNOT trust that the data received by the remote side have NOT been manipulated. PLEASE DO RECONSIDER THIS SETTING! ...which is good, but still openvpn is using encryption. I know this, because: 1) I get following message on server side when client connects: Tue Dec 4 12:59:59 2018 client_abc/10.20.73.2:36752 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key Tue Dec 4 12:59:59 2018 client_abc/10.20.73.2:36752 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key 2) I get huuuge CPU load on both sides 3) I see in Wireshark that data is encrypted What else is required to disable encryption?
It looks like you have Negotiable Crypto Parameters (NCP) enabled. You should specify ncp-disable Disable “negotiable crypto parameters”. This completely disables cipher negotiation. When two OpenVPN instances have NCP enabled (default for recent versions) they will negotiate which cipher to use from a set of ciphers defined by ncp-ciphers. The default for that is 'AES-256-GCM:AES-128-GCM' which explains why you see AES-256-GCM on your connection.
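So with OpenVPN 2.4-era options, the relevant part of both the client and server config ends up needing all three directives; a minimal sketch:

    # no confidentiality or integrity on the data channel - test setups only
    cipher none
    auth none
    ncp-disable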
{ "source": [ "https://serverfault.com/questions/942810", "https://serverfault.com", "https://serverfault.com/users/498910/" ] }
942,827
Using Centos7 and vsftpd I would like to chroot the user "testftp" to his home folder /home/testftp. The client uses WinSCP on Windows. The user testftp can reach the server and connects initially to his home folder. However the user is still able to browse higher levels. passwd: testftp:x:1001:1001::/home/testftp/:/bin/bash vsftpd.conf: chroot_local_user=YES chroot_list_enable=YES chroot_list_file=/etc/vsftpd/chroot_list Home Folder testftp: This has been modified with chmod 500 because of this line in the vsftpd.conf (Warning! chroot'ing can be very dangerous. If using chroot, make sure that the user does not have write access to the top level directory within the chroot) dr-x------. 3 testftp testftp 73 Dec 5 10:44 testftp Inside the Home Folder is another folder called ftp: drwx------. 2 > testftp testftp 44 Dec 5 10:52 ftp
It looks like you have Negotiable Crypto Parameters (NCP) enabled. You should specify ncp-disable Disable “negotiable crypto parameters”. This completely disables cipher negotiation. When two OpenVPN instances have NCP enabled (default for recent versions) they will negotiate which cipher to use from a set of ciphers defined by ncp-ciphers. The default for that is 'AES-256-GCM:AES-128-GCM' which explains why you see AES-256-GCM on your connection.
{ "source": [ "https://serverfault.com/questions/942827", "https://serverfault.com", "https://serverfault.com/users/228492/" ] }
942,833
I've changed my VPS recently; the VPS provider told me that he backed up my VPS HDD from the previous server, deployed it on the new server and changed just my IP, but now I have huge traffic and connections from various IPs and variable ports. What do you suggest I do? The OS is Ubuntu, the firewall is UFW and it is on, and I've closed any unused ports. I've been using Cloudflare's DDoS protection and I changed my IP one more time. This huge traffic has made my network very slow: when I try to reach my website it takes several seconds longer to open, and the ping time to the IP has roughly tripled. I've been monitoring traffic with Nethogs. I think they're sending fake TCP SYN packets to my server. The problem still remained even after stopping nginx and gunicorn. I can't ssh to my server even if all of my services have been stopped. Here's a picture of the Nethogs graph. Nethogs log Thanks in advance.
It looks like you have Negotiable Crypto Parameters (NCP) enabled. You should specify ncp-disable Disable “negotiable crypto parameters”. This completely disables cipher negotiation. When two OpenVPN instances have NCP enabled (default for recent versions) they will negotiate which cipher to use from a set of ciphers defined by ncp-ciphers. The default for that is 'AES-256-GCM:AES-128-GCM' which explains why you see AES-256-GCM on your connection.
{ "source": [ "https://serverfault.com/questions/942833", "https://serverfault.com", "https://serverfault.com/users/498932/" ] }
946,882
I'm creating a (local) user for a Windows service to run as. I've got good reasons for not wanting to use NETWORK SERVICE, LOCAL SERVICE, or LOCAL SYSTEM. I create the user via net user foobar "Abcd123!" /add - this works fine. At this point, c:\users\foobar does not exist. If I create the user's home directory, before the user either logs on (or, more pertinently) or the service that the user is for starts up, Windows creates a user-profile next-door called c:\users\foobar-{gibberish/SID/whatever} - this is not a predictable name. I need the user's home directory to contain things like a .ssh directory, a .gitconfig - tools like that (not limited to those tools) that make assumptions that it'll be a person using them, and so user-configuration goes inside ~/... . Usually, tools from a Unix heritage. Actual question So - is there a programmatic (preferably, PowerShell, or out-of-the-box command-line) way to tell Windows to create the user-profile for a local user? Or, any other workarounds? Things I've yet to try: An NSSM start/pre hook that copies files from elsewhere into the user-profile directory that hopefully exists at this point by virtue of Windows starting the service, creating the user-profile then handing control to the NSSM wrapper running the hook before startup. Setting the USERPROFILE environment variable for the service to be somewhere other than the actual user-profile directory. This strikes me as dangerously off-piste but also might work fine. Other context: Windows Server 2016, desktop experience. Can't use Core/Nano. There is no active directory in play. There won't be. These are local users. I'm doing this via Ansible, which is using PowerShell under the hood for Windows things. Specifically the win_user module, with Ansible 2.7.5. I don't want to create a C:\users\default (the equivalent of /etc/skel ), because there are a few different service-users and one size won't fit all. This also doesn't affect when the user-profile is created, just what will be in it when it is. I'm using NSSM to manage the services. Things I've tried starting the service and allowing Windows to create the directory I don't want to do this, because the service requires secrets before starting up, and so if I do this inside my image-baking process I'll then need to clean them up, and also make sure my service doesn't do any work during the baking phase. I want to avoid both of those fiddly bits.
Windows can create a user-profile on-demand, using the CreateProfile API. However, if you don't want to create an executable to perform this operation, you can call the API in PowerShell. Others have already done it: example on github . Relevant part of the code: $methodName = 'UserEnvCP' $script:nativeMethods = @(); Register-NativeMethod "userenv.dll" "int CreateProfile([MarshalAs(UnmanagedType.LPWStr)] string pszUserSid,` [MarshalAs(UnmanagedType.LPWStr)] string pszUserName,` [Out][MarshalAs(UnmanagedType.LPWStr)] StringBuilder pszProfilePath, uint cchProfilePath)"; Add-NativeMethods -typeName $MethodName; $localUser = New-Object System.Security.Principal.NTAccount("$UserName"); $userSID = $localUser.Translate([System.Security.Principal.SecurityIdentifier]); $sb = new-object System.Text.StringBuilder(260); $pathLen = $sb.Capacity; Write-Verbose "Creating user profile for $Username"; try { [UserEnvCP]::CreateProfile($userSID.Value, $Username, $sb, $pathLen) | Out-Null; } catch { Write-Error $_.Exception.Message; break; }
{ "source": [ "https://serverfault.com/questions/946882", "https://serverfault.com", "https://serverfault.com/users/3374/" ] }
947,182
I use SSHFS to mount a remote filesystem on my host and I want to be able to access it from inside a Docker container. I mount the remote filesystem sshfs -o idmap=user,uid=$(id -u),gid=$(id -g) user@remote:directory /path/to/sshfs And, using Docker, I get the following errors depending on me using --mount : docker run -it -v /path/to/sshfs:/target myimage bash docker: Error response from daemon: error while creating mount source path '/path/to/sshfs': mkdir /path/to/sshfs: file exists. or -v : docker run -it --mount src=/path/to/sshfs,target=/target,type=bind myimage bash docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /path/to/sshfs. See 'docker run --help' Is it possible to mount a sshfs mountpoint into a container?
Requires the following steps: uncomment user_allow_other in /etc/fuse.conf unmount the FUSE filesystem remount the FUSE filesystem with sshfs -o allow_other user@.... (making sure to include the -o allow_other option) try starting the container again
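Put together with the paths from the question, the sequence looks roughly like this (the fuse.conf change needs root):

    # 1. allow other users' processes (dockerd, the container) to access the FUSE mount
    sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf   # or edit the file by hand
    # 2. remount with allow_other
    fusermount -u /path/to/sshfs
    sshfs -o allow_other,idmap=user,uid=$(id -u),gid=$(id -g) user@remote:directory /path/to/sshfs
    # 3. start the container again
    docker run -it -v /path/to/sshfs:/target myimage bash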
{ "source": [ "https://serverfault.com/questions/947182", "https://serverfault.com", "https://serverfault.com/users/503349/" ] }
947,207
I have configured 3 CentOS servers for chefworkstation, chefserver and chefclient. Now I want to install nginx using the cookbook. For that I have the script below. package 'nginx' do action :install end This did not work because the epel-release repository was not enabled. Is there a way to enable the EPEL repository before running the above script?
Requires the following steps: uncomment user_allow_other in /etc/fuse.conf unmount the FUSE filesystem remount the FUSE filesystem with sshfs -o allow_other user@.... (making sure to include the -o allow_other option) try starting the container again
{ "source": [ "https://serverfault.com/questions/947207", "https://serverfault.com", "https://serverfault.com/users/500079/" ] }
947,210
I am new to using AWS and web hosting in general. Let's say I currently have a single server running a website, and the traffic starts to grow to the point that I need to use load balancing. Assume my current server is running on an EC2 instance with Ubuntu, and an Apache server with all the website files inside the /var/www/ folder. If I want to add load balancing, do I need to create an EC2 instance with the same website files copied to it? Or do I just need to create an empty EC2 instance, with the rest done automatically? I'm a bit confused as to how it would work.
Requires the following steps: uncomment user_allow_other in /etc/fuse.conf unmount the FUSE filesystem remount the FUSE filesystem with sshfs -o allow_other user@.... (making sure to include the -o allow_other option) try starting the container again
{ "source": [ "https://serverfault.com/questions/947210", "https://serverfault.com", "https://serverfault.com/users/503256/" ] }
948,974
I've configured systemd timesyncd to get its time from an NTP server: /etc/systemd/timesyncd.conf > NTP=ca.pool.ntp.org systemctl restart systemd-timesyncd.service timedatectl set-ntp true The status is the following: $ timedatectl status ... Network time on: yes NTP synchronized: no As the output implies, the time is not synced yet. Can someone please help me out with the following questions? How long will it take for timesyncd to sync with the NTP server? At what intervals does it do that, and where can I check and alter them? In urgent cases: can I only set the time manually, or can I force timesyncd to sync immediately with the NTP server?
To use an actual NTP implementation, you need to install and configure one, chrony or maybe ntpd . Do so if you require any monitoring of time performance. I will assume chrony. Add iburst to your pool or server lines in your config to speed up the initial few packets. It still may take a couple minutes to stabilize, be patient. While editing chrony.conf, review when steps are allowed. For example, makestep 1.0 3 means in the first 3 updates after chronyd is started, an offset greater than 1 second sets the clock immediately. Going back in time is bad for some applications, so large steps often are not allowed once a system is running. On the command line, every variable can be queried. chronyc tracking will show the current offset. Have an idea of what your requirements are, one second accuracy can easily tolerate tens of milliseconds offset. chronyc makestep with no arguments will make the current adjustment immediately. Not necessary usually, there is a corresponding config file directive, and chrony will steadily discipline the clock by itself. makestep on the CLI is for fixing NTP interactively when you don't want to restart chronyd. timesyncd is an SNTP client that can set the time, but not discipline it gradually and continuously, nor filter remote NTP server based on quality . (It also cannot talk to time hardware or PTP, only NTP protocol.) A little better than repeated ntpdate , by which I mean not very good clock. Personally, I replace it on most servers. About the only way to set the time with timesyncd is manually: timedatectl set-time "2019-01-15 00:40:16" . It does not have robust means to discipline and monitor the clock. Basic NTP stats via timedatectl timesync-status are a relatively new thing, I don't think that option is available in Red Hat 7 or Ubuntu 18.04. systemd defines "syncronized" to be if NTP was ever used to tell Linux to adjust the clock. Specifically, if kernel discipline call adjtimex() returned without error, and not the initial state. See the source code, systemd/src/basic/time-util.c.
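A minimal chrony sketch along those lines (the pool name comes from the question; thresholds are examples, not requirements):

    # /etc/chrony.conf
    pool ca.pool.ntp.org iburst
    makestep 1.0 3          # allow stepping the clock only during the first 3 updates

    # after restarting the daemon (unit name is chronyd or chrony depending on distro):
    chronyc tracking        # current offset, stratum, frequency error
    chronyc sources -v      # state of each configured server
    chronyc makestep        # urgent: apply the remaining correction immediately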
{ "source": [ "https://serverfault.com/questions/948974", "https://serverfault.com", "https://serverfault.com/users/468134/" ] }
949,082
We've got a fleet of Nginx servers on Amazon EC2 where we occasionally need to update the configuration files to implement new settings. Currently we have the configurations in a custom AMI and if we need to update we have to rebuild the AMI and then EC2 instances. We've got some helper scripts, but it's still quite an effort to do that. Is there is some better way?
There are a number of concepts that you can leverage. The key to success is automation First option is to keep doing what you're doing now, i.e. rebuild the EC2s with every configuration change . Just in a fully automated way. As you're now doing configuration updates through AMIs you take this one step further and create a pipeline that, upon a configuration file change in some repository, will: Automatically build a new AMI - one of the most popular tools to do that is Packer Automatically rebuild your Nginx fleet - you should already have all the Nginx servers in an Auto-Scaling Group with an Application Load Balancer in front. If you don't you should as it will make the update as simple as updating the ASG Launch Configuration and waiting for the instances to get re-built from the new AMI. Second option is to keep the instances in place and only deploy the configuration files , without rebuilding them. Generally you can treat configuration files as code and deploy your configuration changes the same way you would deploy code releases. AWS has many tools to help with that. AWS Elastic Beanstalk that uses Chef internally and you can script your Nginx updates this way. AWS Code Deploy which is a fully scriptable deployment tool that integrates well with other parts of the AWS Code Suite : Code Commit where you can keep your Nginx configuration files in Git. Code Pipeline that can automatically trigger the deployment whenever a configuration file is updated in Code Commit. Ansible or Puppet which are popular non-AWS tools that can help you keep all the servers configured the same way. Once you're comfortable with automating these Nginx configuration updates you may want to extend the automation to the rest of your infrastructure. There is a great whitepaper Overview of Deployment Options on AWS that will give you a nice overview. I hope that helps :)
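For the first option, the Packer side could look something like the sketch below, as a legacy JSON template; the region, AMI ID, SSH user and file names are placeholders, and the builder still expects AWS credentials from the environment:

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t3.micro",
        "ssh_username": "ec2-user",
        "ami_name": "nginx-config-{{timestamp}}"
      }],
      "provisioners": [
        { "type": "file",  "source": "nginx.conf", "destination": "/tmp/nginx.conf" },
        { "type": "shell", "inline": ["sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf"] }
      ]
    }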
{ "source": [ "https://serverfault.com/questions/949082", "https://serverfault.com", "https://serverfault.com/users/499243/" ] }
949,093
We have a typical Dockerized Node/Express app that is deployed to about 100 machines on Digital Ocean. Currently, the entire deploy - not counting testing - takes about an hour. I am used to deploys that take maybe 10-15 minutes, even for thousands of machines. I am a bit confused about what is going on (their deploy system is rather bespoke) and have begun to gather data. The images are built in the cloud, so it's not something obvious like upload time from someone's laptop. However, that's not my main problem. The main problem is that nobody in this company thinks that one hour is a problematic amount of time for this deploy. (It used to be five hours.) Can you point me to data about what is a reasonable amount of time? UPDATE: as many commenters surmised, originally it was not parallelized. This is the main reason why it used to take five hours. However, now it is parallelized (with a home-grown system running on top of Ansible, which ought to be parallelized already? I don't understand it). And it still takes an hour. My intuition is not that we need to invest lots of engineering-hours optimizing anything, but that we just need to use more standard tools. NOTE: Shaming my co-workers is offtopic. Many of the people here are junior or are just inexperienced, and I am far more senior.
There are a number of concepts that you can leverage. The key to success is automation First option is to keep doing what you're doing now, i.e. rebuild the EC2s with every configuration change . Just in a fully automated way. As you're now doing configuration updates through AMIs you take this one step further and create a pipeline that, upon a configuration file change in some repository, will: Automatically build a new AMI - one of the most popular tools to do that is Packer Automatically rebuild your Nginx fleet - you should already have all the Nginx servers in an Auto-Scaling Group with an Application Load Balancer in front. If you don't you should as it will make the update as simple as updating the ASG Launch Configuration and waiting for the instances to get re-built from the new AMI. Second option is to keep the instances in place and only deploy the configuration files , without rebuilding them. Generally you can treat configuration files as code and deploy your configuration changes the same way you would deploy code releases. AWS has many tools to help with that. AWS Elastic Beanstalk that uses Chef internally and you can script your Nginx updates this way. AWS Code Deploy which is a fully scriptable deployment tool that integrates well with other parts of the AWS Code Suite : Code Commit where you can keep your Nginx configuration files in Git. Code Pipeline that can automatically trigger the deployment whenever a configuration file is updated in Code Commit. Ansible or Puppet which are popular non-AWS tools that can help you keep all the servers configured the same way. Once you're comfortable with automating these Nginx configuration updates you may want to extend the automation to the rest of your infrastructure. There is a great whitepaper Overview of Deployment Options on AWS that will give you a nice overview. I hope that helps :)
{ "source": [ "https://serverfault.com/questions/949093", "https://serverfault.com", "https://serverfault.com/users/505305/" ] }
949,991
I have the following line in the Dockerfile. RUN apt-get install -y tzdata When I run it, it asks for my input. After I provided my input, it hung there. Does anybody know how to solve this problem? Step 25/25 : RUN apt-get install -y tzdata ---> Running in ee47a1beff84 Reading package lists... Building dependency tree... Reading state information... The following NEW packages will be installed: tzdata 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 189 kB of archives. After this operation, 3104 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 tzdata all 2018i-0ubuntu0.18.04 [189 kB] debconf: unable to initialize frontend: Dialog debconf: (TERM is not set, so the dialog frontend is not usable.) debconf: falling back to frontend: Readline debconf: unable to initialize frontend: Readline debconf: (This frontend requires a controlling tty.) debconf: falling back to frontend: Teletype dpkg-preconfigure: unable to re-open stdin: Fetched 189 kB in 1s (219 kB/s) Selecting previously unselected package tzdata. (Reading database ... 25194 files and directories currently installed.) Preparing to unpack .../tzdata_2018i-0ubuntu0.18.04_all.deb ... Unpacking tzdata (2018i-0ubuntu0.18.04) ... Setting up tzdata (2018i-0ubuntu0.18.04) ... debconf: unable to initialize frontend: Dialog debconf: (TERM is not set, so the dialog frontend is not usable.) debconf: falling back to frontend: Readline Configuring tzdata ------------------ Please select the geographic area in which you live. Subsequent configuration questions will narrow this down by presenting a list of cities, representing the time zones in which they are located. 1. Africa 4. Australia 7. Atlantic 10. Pacific 13. Etc 2. America 5. Arctic 8. Europe 11. SystemV 3. Antarctica 6. Asia 9. Indian 12. US Geographic area: ``
One line only: RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
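A slightly fuller Dockerfile sketch of the same idea; using ARG instead of ENV keeps the noninteractive setting out of the final image:

    ARG DEBIAN_FRONTEND=noninteractive
    ENV TZ=Etc/UTC
    RUN apt-get update && apt-get install -y tzdata && rm -rf /var/lib/apt/lists/*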
{ "source": [ "https://serverfault.com/questions/949991", "https://serverfault.com", "https://serverfault.com/users/203658/" ] }
951,217
Is a gateway always a real computer, or just a "logical" entity which can be on any address except the broadcast IP?
Default route (aka gateway address) has to be owned by something that is capable of forwarding packets to the rest of the internet, and which is willing to do so. It doesn't have to be the "principal" IP address of the thing that owns it (whatever that means). It can be a logical address that floats between two or more devices, and in high-availability setups it often is. The only requirement, in order that routing works, is that whatever device currently owns and advertises the address, that device can and will route traffic.
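Concretely, with iproute2 the gateway entry is nothing more than an address on the local subnet that something is expected to answer for and forward from (192.0.2.1 below is a placeholder):

    ip route add default via 192.0.2.1   # works as long as something on the subnet owns 192.0.2.1 and forwards
    ip route show default                # inspect the current default route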
{ "source": [ "https://serverfault.com/questions/951217", "https://serverfault.com", "https://serverfault.com/users/195138/" ] }
953,169
I am trying to figure out something that I just cannot find a good answer to. If I have say a REDIS cache (or some external in-memory cache) sitting in a data center, and an application server sitting in the same data center, what will be the speed of the network connection (latency, throughput) for reading data between these two machines? Will the network "speed", for example, still be at least an order of magnitude higher than the speed of the RAM that is seeking my data out of the cache on REDIS? My ultimate question is -- is having this all sitting in memory on REDIS actually providing any utility? Contrasted with if REDIS was caching this all to an SSD instead? Memory is expensive. If the network is indeed not a bottleneck WITHIN the data center, then the memory has value. Otherwise, it does not. I guess my general question is: despite the vast unknowns in data centers and the inability to generalize as well as the variances, are we talking sufficient orders of magnitude between memory latency in a computer system and even the best networks internal to a DC that the memory's reduced latencies don't provide a significant performance improvement? I get that there are many variables, but how close is it? Is it so close that these variables do matter? For example, to take a hyperbolic stance on it, a tape drive is WAY slower than the network, so tape is not ideal for a cache.
There are several versions of the "latency charts everyone should know" such as: https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html https://gist.github.com/jboner/2841832 https://computers-are-fast.github.io/ The thing is, in reality, there is more than just latency. It's a combination of factors. So, what's the network latency within a data center? Latency, well I would say it's "always" below 1ms. Is it faster than RAM? No. Is it close to RAM? I don't think so. But the question remains, is it relevant. Is that the datum you need to know? Your question make sense to me. As everything has a cost, should you get more RAM so that all the data can stay in RAM or it's ok to read from disk from time to time. Your "assumption" is that if the network latency is higher (slower) than the speed of the SSD, you won't be gaining by having all the data in RAM as you will have the slow on the network. And it would appear so. But, you also have to take into account concurrency. If you receive 1,000 requests for the data at once, can the disk do 1,000 concurrent requests? Of course not, so how long will it take to serve up those 1,000 requests? Compared to RAM? It's hard to boil it down to a single factor such as heavy loads. But yes, if you had a single operation going, the latency of the network is such that you would probably not notice the difference of SSD vs RAM. Just like until 12Gbps disk showed up on the market, a 10Gbps network link would not be overloaded by a single stream as the disk were the bottleneck. But remember that your disk is doing many other things, your process isn't the only process on the machine, your network may carry different things, etc. Also, not all disk activity mean network traffic. The database query coming from an application to the database server is only very minimal network traffic. The response from the database server may be very small (a single number) or very large (thousand of rows with multiple fields). To perform the operation, a server (database server or not) may need to do multiple disk seeks, reads and writes yet only send a very small bit back over the network. It's definitely not one-for-one network-disk-RAM. So far I avoided some details of your question - specifically, the Redis part. Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. - https://redis.io/ OK, so that means everything is in memory. Sorry, this fast SSD drive won't help you here. Redis can persist data to disk, so it can be loaded into RAM after a restart. That's only to not "lose" data or have to repopulate a cold cache after a restart. So in this case, you'll have to use the RAM, no matter what. You'll have to have enough RAM to contain your data set. Not enough RAM and I guess your OS will use swap - probably not a good idea.
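If you want numbers for your own data center instead of generic charts, redis-cli has a latency mode built in; the hostname below is a placeholder:

    redis-cli -h redis.internal.example --latency    # min/avg/max round-trip time to Redis, in milliseconds
    ping -c 20 redis.internal.example                # compare against the raw network round-trip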
{ "source": [ "https://serverfault.com/questions/953169", "https://serverfault.com", "https://serverfault.com/users/357001/" ] }
953,173
I am setting up a DHCP server on RHEL where some entries in the file are generated at a later stage and may be regenerated often. I was looking at the dhcpd config guide and the include ; guideline seems to be the best approach for this. But it looks like the DHCP server doesn't load the external file at all. Here's me dhcpd.conf: default-lease-time 86400; # 24 hours in seconds max-lease-time 604800; # 7 days in seconds authoritative; include "/opt/demo/deploy/extdhcp.conf"; #EXTERNAL FILE subnet 192.200.1.0 netmask 255.255.255.0 { option routers 192.200.1.1; option subnet-mask 255.255.255.0; option broadcast-address 192.200.1.255; host ANSIBLE-01 { hardware ethernet 00:50:56:8c:5e:47; fixed-address 192.200.1.10; } } Here's the external config file: subnet 10.64.0.0 netmask 255.255.255.0 { range 10.64.0.1 10.64.0.100; option routers 10.64.0.254; option subnet-mask 255.255.255.0; option broadcast-address 10.64.0.255; host ILO-1 { hardware ethernet 00:50:56:8c:0e:fd; fixed-address 10.64.0.55; } } This is what I see in the logs, that tells me that the external file hasn't been loaded in dhcpd. 2019-02-09T15:19:07.493576+00:00 dhcp-01.erewhon.com <daemon.err> dhcpd: DHCPDISCOVER from 00:50:56:8c:0e:fd via eth0: network 192.200.1.0/24: no free leases 2019-02-09T15:19:19.671670+00:00 dhcp-01.erewhon.com <daemon.err> dhcpd: message repeated 3 times: [ DHCPDISCOVER from 00:50:56:8c:0e:fd via eth0: network 192.200.1.0/24: no free leases] 2019-02-09T15:19:26.657147+00:00 dhcp-01.erewhon.com <daemon.err> dhcpd: DHCPDISCOVER from 00:50:56:8c:0e:fd via eth0: network 192.200.1.0/24: no free leases 2019-02-09T15:21:04.257982+00:00 dhcp-01.erewhon.com <daemon.err> dhcpd: message repeated 7 times: [ DHCPDISCOVER from 00:50:56:8c:0e:fd via eth0: network 192.200.1.0/24: no free leases] 2019-02-09T15:21:18.419381+00:00 dhcp-01.erewhon.com <daemon.err> dhcpd: DHCPDISCOVER from 00:50:56:8c:0e:fd via eth0: network 192.200.1.0/24: no free leases As you see from the logs, the DHCP seems to connect the MAC to the network defined in dhcpd.conf and not the external file. Is my understanding of the include guideline wrong?
There are several versions of the "latency charts everyone should know" such as: https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html https://gist.github.com/jboner/2841832 https://computers-are-fast.github.io/ The thing is, in reality, there is more than just latency. It's a combination of factors. So, what's the network latency within a data center? Latency, well I would say it's "always" below 1ms. Is it faster than RAM? No. Is it close to RAM? I don't think so. But the question remains, is it relevant. Is that the datum you need to know? Your question make sense to me. As everything has a cost, should you get more RAM so that all the data can stay in RAM or it's ok to read from disk from time to time. Your "assumption" is that if the network latency is higher (slower) than the speed of the SSD, you won't be gaining by having all the data in RAM as you will have the slow on the network. And it would appear so. But, you also have to take into account concurrency. If you receive 1,000 requests for the data at once, can the disk do 1,000 concurrent requests? Of course not, so how long will it take to serve up those 1,000 requests? Compared to RAM? It's hard to boil it down to a single factor such as heavy loads. But yes, if you had a single operation going, the latency of the network is such that you would probably not notice the difference of SSD vs RAM. Just like until 12Gbps disk showed up on the market, a 10Gbps network link would not be overloaded by a single stream as the disk were the bottleneck. But remember that your disk is doing many other things, your process isn't the only process on the machine, your network may carry different things, etc. Also, not all disk activity mean network traffic. The database query coming from an application to the database server is only very minimal network traffic. The response from the database server may be very small (a single number) or very large (thousand of rows with multiple fields). To perform the operation, a server (database server or not) may need to do multiple disk seeks, reads and writes yet only send a very small bit back over the network. It's definitely not one-for-one network-disk-RAM. So far I avoided some details of your question - specifically, the Redis part. Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. - https://redis.io/ OK, so that means everything is in memory. Sorry, this fast SSD drive won't help you here. Redis can persist data to disk, so it can be loaded into RAM after a restart. That's only to not "lose" data or have to repopulate a cold cache after a restart. So in this case, you'll have to use the RAM, no matter what. You'll have to have enough RAM to contain your data set. Not enough RAM and I guess your OS will use swap - probably not a good idea.
{ "source": [ "https://serverfault.com/questions/953173", "https://serverfault.com", "https://serverfault.com/users/509349/" ] }
954,586
After updating my OSX to Mojave, it seems I am no longer able to edit my crontab. Any attempt to do so results in the error message on the title of this question. I tracked crontab to /private/var/at and the permissions are the same as another computer running El Capitan: /private/var/at$ ls -laO total 0 drwxr-xr-x 8 daemon wheel - 256B Feb 18 16:47 ./ drwxr-xr-x 26 root wheel sunlnk 832B Feb 18 16:51 ../ -rw-r--r-- 1 root wheel - 0B Aug 22 22:11 at.deny -rw-r--r-- 1 root wheel compressed 6B Aug 17 2018 cron.deny drwxr-xr-x 2 daemon wheel - 64B Aug 17 2018 jobs/ drwxr-xr-x 2 daemon wheel - 64B Aug 22 22:11 spool/ drwx------ 4 root wheel - 128B Nov 22 12:46 tabs/ drwx------ 2 root wheel - 64B Feb 18 15:04 tmp/ /private/var$ ls -laOd at drwxr-xr-x 8 daemon wheel - 256B Feb 18 16:47 at/ /private$ ls -laOd var drwxr-xr-x 26 root wheel sunlnk 832B Feb 18 16:51 var/ Unlike that computer, any sudo change I try to do below /private/var/at (e.g. sudo touch test ) gets "Operation not permitted". On /private/var and above, i am able to sudo change anything (as in the limited and obvious type of changes i tested inside /private/var/at , not anything ). There is something preventing me from changing the contents of /private/var/at and I think this is what is causing the crontab error message because crontab is not able to write to /private/var/at/tmp and create the tmp crontab file that is reported in the error message. I know crontab is not the preferred method in OSX but that's not the point of this question.
The short answer: Go to System Preferences > Security & Privacy and give Full Disk Access to Terminal. The long answer: Pull down the Apple menu and choose ‘System Preferences’ Choose “Security & Privacy” control panel Now select the “Privacy” tab, then from the left-side menu select “Full Disk Access” Click the lock icon in the lower left corner of the preference panel and authenticate with an admin level login Now click the [+] plus button to add an application with full disk access Navigate to the /Applications/Utilities/ folder and choose “Terminal” to grant Terminal with Full Disk Access privileges Relaunch Terminal, the “Operation not permitted” error messages will be gone
{ "source": [ "https://serverfault.com/questions/954586", "https://serverfault.com", "https://serverfault.com/users/510717/" ] }
955,112
I'm creating new DNS records in our DC (Windows Server 2016) and I bump into zones where there are a lot of records that do not have a regular hostname, only an "@". We are using scopes and policies, new Windows Server 2016 features for DNS configuration. I know that one can use "*" for wildcards in hostnames, but I don't know the meaning of "@".
If the name for a domain (or zone ) is "example.com.", then an @ record indicates that the name for the DNS record is also "example.com." In the GUI for a Microsoft Windows Server DNS Service, this is (or at least has been for a long time) called "Same as parent folder". Normally the name used for a DNS record indicates everything before the name of the zone (commonly called the "domain name"). So if you enter a record named "server01" in a DNS zone called "example.com.", then the full record is "server01.example.com." If you want to enter a record where the full record is just "example.com" (which is necessary for a lot of things, like MX records), then you enter an @ in many DNS systems to tell the DNS server to respond to requests for "example.com." with the data you add to the record in question.
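In BIND-style zone file terms the same idea looks like this (the addresses are documentation placeholders):

    $ORIGIN example.com.
    @         3600  IN  MX  10 mail.example.com.   ; a record for "example.com." itself
    @         3600  IN  A   192.0.2.10
    server01  3600  IN  A   192.0.2.11             ; becomes "server01.example.com."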
{ "source": [ "https://serverfault.com/questions/955112", "https://serverfault.com", "https://serverfault.com/users/162474/" ] }
956,613
I use Windows10 and I need to use a jumphost to get to my Linux servers. Thus I have configured my .ssh\config like so: Host jumphost HostName jumphost.server.local Host server*.server.local ProxyCommand ssh jumphost netcat -w 120 %h %p But when I run ssh server01.server.local -v (dash-v for verbose) I get the following error: OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5 debug1: Reading configuration data C:\\Users\\admin/.ssh/config debug1: C:\\Users\\admin/ssh/config line 70: Applying options for server*.server.local debug1: Executing proxy command: exec ssh jumphost netcat -w 120 server01.server.local 22 CreateProcessW failed error:2 posix_spawn: No such file or directory
As per this bug , the fix is to use a full path. So this is the correct line in the .ssh/config : ProxyCommand C:\Windows\System32\OpenSSH\ssh.exe jumphost netcat -w 120 %h %p For further development see this issue: https://github.com/microsoft/vscode-remote-release/issues/18
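Putting that back into the question's config, the result would look roughly like this; the commented ProxyJump line is an alternative that newer OpenSSH clients (7.3 and later) understand and that needs no netcat on the jump host:

    Host jumphost
      HostName jumphost.server.local

    Host server*.server.local
      ProxyCommand C:\Windows\System32\OpenSSH\ssh.exe jumphost netcat -w 120 %h %p
      # alternative: ProxyJump jumphost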
{ "source": [ "https://serverfault.com/questions/956613", "https://serverfault.com", "https://serverfault.com/users/470174/" ] }
956,623
I am installing my own cluster in order to practice the k8s . I have created cluster on the google cloud. $kubectl get all NAME READY STATUS RESTARTS AGE pod/webapp1-7d67d68676-k9hhl 1/1 Running 0 2h pod/webapp2-64d4844b78-9kln5 1/1 Running 0 2h pod/webapp3-5b8ff7484d-zvcsf 1/1 Running 0 2h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.51.240.1 <none> 443/TCP 3h service/webapp1-svc ClusterIP 10.51.240.184 <none> 80/TCP 2h service/webapp2-svc ClusterIP 10.51.246.184 <none> 80/TCP 2h service/webapp3-svc ClusterIP 10.51.244.85 <none> 80/TCP 2h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/webapp1 1 1 1 1 2h deployment.apps/webapp2 1 1 1 1 2h deployment.apps/webapp3 1 1 1 1 2h NAME DESIRED CURRENT READY AGE replicaset.apps/webapp1-7d67d68676 1 1 1 2h replicaset.apps/webapp2-64d4844b78 1 1 1 2h replicaset.apps/webapp3-5b8ff7484d 1 1 1 2h Proceed installation $ curl https://raw.githubusercontent.com/kubernetes/helm/master/scri pts/get > get_helm.sh % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 7234 100 7234 0 0 21921 0 --:--:-- --:--:-- --:--:-- 22892 sarit_r@gke-singh-default-pool-a69fa545-1sm3 ~ $ chmod 700 get_helm.sh sarit_r@gke-singh-default-pool-a69fa545-1sm3 ~ $ ./get_helm.sh -bash: ./get_helm.sh: Permission denied sudo su to become a root already, but problem still persist. # sh get_helm.sh Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.13.0-linux-amd64.tar.gz Preparing to install helm and tiller into /usr/local/bin cp: cannot create regular file '/usr/local/bin': Read-only file system Failed to install helm For support, go to https://github.com/helm/helm. gke-singh-default-pool-a69fa545-1sm3 /home/sarit_r # id uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),26(tape),27(vide o),1001(chronos-access) master: 1.11.7-gke.4 node: 1.11.7-gke.4 Question: How do I install helm on Google Cluster?
As per this bug , the fix is to use a full path. So this is the correct line in the .ssh/config : ProxyCommand C:\Windows\System32\OpenSSH\ssh.exe jumphost netcat -w 120 %h %p For further development see this issue: https://github.com/microsoft/vscode-remote-release/issues/18
{ "source": [ "https://serverfault.com/questions/956623", "https://serverfault.com", "https://serverfault.com/users/128368/" ] }
956,634
On our dev server we have multiple sites for multiple developers running via vhosts in Apache 2.4.6. We are on CentOS 7. We want to redirect all http://www.site.ext.dev-username.commondomain.ext to https://www.site.ext.dev-username.commondomain.ext . Here, dev-username and site.ext can change depending on dev site and user. We have previously used something like this: <VirtualHost *:80> ServerName www.site.ext.dev-username.commondomain.ext Redirect permanent / https://www.site.ext.dev-username.commondomain.ext/ </VirtualHost> But is it possible to match any (or at least wildcard) ServerName and redirect accordingly, i.e. with a backreference to a regex? Maybe with a DirectoryMatch or something instead of the vhost? I have noticed this in the documentation (for directory and location matching), which is sadly not compatible with my version of Apache: From 2.4.8 onwards, named groups and backreferences are captured and written to the environment with the corresponding name prefixed with "MATCH_" and in upper case. This allows elements of URLs to be referenced from within expressions and modules like mod_rewrite. In order to prevent confusion, numbered (unnamed) backreferences are ignored. Use named groups instead.
As per this bug , the fix is to use a full path. So this is the correct line in the .ssh/config : ProxyCommand C:\Windows\System32\OpenSSH\ssh.exe jumphost netcat -w 120 %h %p For further development see this issue: https://github.com/microsoft/vscode-remote-release/issues/18
{ "source": [ "https://serverfault.com/questions/956634", "https://serverfault.com", "https://serverfault.com/users/106816/" ] }
958,003
I'm trying to write a script which will output the number of upgrade-able packages from apt. However it keeps giving me this warning with it also: # sudo apt update | grep packages | cut -d '.' -f 1 WARNING: apt does not have a stable CLI interface. Use with caution in scripts. All packages are up to date I would like it to just output either: All packages are up to date or 35 packages can be updated Is there any way to disable that warning? I will be using this returned string, along with some extra information, in a Discord notification from a cron job and it messes up my output pretty wickedly. I already looked at these, but none of them worked for me: https://askubuntu.com/questions/49958/how-to-find-the-number-of-packages-needing-update-from-the-command-line https://unix.stackexchange.com/questions/19470/list-available-updates-but-do-not-install-them https://askubuntu.com/questions/269606/apt-get-count-the-number-of-updates-available
First, consider the meaning of the warning you're trying to hide. In theory, apt could change tomorrow to calling them "distributions" instead of "packages" (because it "does not have a stable CLI interface yet") and this would completely break your pipeline. A more likely change would be one which uses the word "packages" in multiple places, causing your pipeline to return extraneous information instead of only the package count you're looking for. But you're probably not too worried about that, and, realistically, there's no reason you should be. The interface has been stable for years and probably isn't changing any time soon. So how do you make that warning go away? In the *nix world, output to the command line is generally of two flavors, stdout (standard output) and stderr (standard error). Well-behaved programs send their normal output to stdout and any warnings or error messages to stderr. So, if you want errors/warnings to disappear, you can usually accomplish this by throwing away any messages on stderr using the output redirection 2>/dev/null . (In English, that's "redirect ( > ) the second output channel ( 2 , which is stderr) to /dev/null (which just throws away everything sent there)". The answer, then, is: $ sudo apt update 2>/dev/null | grep packages | cut -d '.' -f 1 4 packages can be upgraded Side note: In the question, your command is shown as # sudo apt... . The # shell prompt implies that you were probably logged in as root when using that command. If you're already root, you don't need to use sudo . More on the warning you want to ignore (from man apt ): SCRIPT USAGE The apt(8) commandline is designed as a end-user tool and it may change the output between versions. While it tries to not break backward compatibility there is no guarantee for it either. All features of apt(8) are available in apt-cache(8) and apt-get(8) via APT options. Please prefer using these commands in your scripts.
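If you'd rather avoid the warning entirely instead of hiding it, the stable tools the man page points to can produce the same count; for example:

    apt-get -s dist-upgrade | grep -c '^Inst '               # simulate an upgrade and count the packages it would install
    apt list --upgradable 2>/dev/null | grep -c upgradable   # same idea with apt, still discarding stderr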
{ "source": [ "https://serverfault.com/questions/958003", "https://serverfault.com", "https://serverfault.com/users/514085/" ] }
958,009
I am currently dealing with a site which has about 40 users, each with his/her own local Outlook setup on their PCs (currently connecting via POP/IMAP to the mail server). They have asked me to migrate all users to a cloud hosted Exchange service. I know how to manually export their local data into a PST, then re-import it into each Exchange account, but this will be a hugely time-consuming process. Is there any way to automate this, even to the point where each user can do his/her own migration by simply clicking a few buttons or running a script? After I manually create the users in the hosted Exchange environment, the steps that need to happen for each user are: Open Outlook and do a full export to PST file of all existing email/contacts/calendars/notes/etc. Close Outlook, go to "Mail" in Control Panel and create new profile. Connect the new profile to the Exchange server using the user's credentials Launch Outlook using the new profile and import the previous PST file, then wait for it to sync with the server. (Optional) I suppose it would be nice if the autocomplete entries were preserved as well. I'm wondering if PowerShell can reach this level of integration with Outlook. I would be very grateful for suggestions on how to accomplish this, whether it's a script, an application, batch file, etc. Surely this is a somewhat common issue, so I'd think there would be a fairly simple solution.
First, consider the meaning of the warning you're trying to hide. In theory, apt could change tomorrow to calling them "distributions" instead of "packages" (because it "does not have a stable CLI interface yet") and this would completely break your pipeline. A more likely change would be one which uses the word "packages" in multiple places, causing your pipeline to return extraneous information instead of only the package count you're looking for. But you're probably not too worried about that, and, realistically, there's no reason you should be. The interface has been stable for years and probably isn't changing any time soon. So how do you make that warning go away? In the *nix world, output to the command line is generally of two flavors, stdout (standard output) and stderr (standard error). Well-behaved programs send their normal output to stdout and any warnings or error messages to stderr. So, if you want errors/warnings to disappear, you can usually accomplish this by throwing away any messages on stderr using the output redirection 2>/dev/null . (In English, that's "redirect ( > ) the second output channel ( 2 , which is stderr) to /dev/null (which just throws away everything sent there)". The answer, then, is: $ sudo apt update 2>/dev/null | grep packages | cut -d '.' -f 1 4 packages can be upgraded Side note: In the question, your command is shown as # sudo apt... . The # shell prompt implies that you were probably logged in as root when using that command. If you're already root, you don't need to use sudo . More on the warning you want to ignore (from man apt ): SCRIPT USAGE The apt(8) commandline is designed as a end-user tool and it may change the output between versions. While it tries to not break backward compatibility there is no guarantee for it either. All features of apt(8) are available in apt-cache(8) and apt-get(8) via APT options. Please prefer using these commands in your scripts.
{ "source": [ "https://serverfault.com/questions/958009", "https://serverfault.com", "https://serverfault.com/users/143364/" ] }
959,707
Edit: a follow-up question: Restore mongoDB by --repair and WiredTiger . My developer committed a huge mistake and we cannot find our Mongo database anywhere on the server. He logged into the server and saved the following shell script under ~/crontab/mongod_back.sh : #!/bin/sh DUMP=mongodump OUT_DIR=/data/backup/mongod/tmp // temporary directory for backup files TAR_DIR=/data/backup/mongod // final directory for backup files DATE=`date +%Y_%m_%d_%H_%M_%S` // backup files are named with the backup time DB_USER=Guitang // database user DB_PASS=qq■■■■■■■■■■■■■■■■■■■■■ // database user's password DAYS=14 // keep the latest 14 days of backups TARBAK="mongod_bak_$DATE.tar.gz" // backup file naming format cd $OUT_DIR // create the folder rm -rf $OUT_DIR/* // empty the temporary directory mkdir -p $OUT_DIR/$DATE // create this backup's folder $DUMP -d wecard -u $DB_USER -p $DB_PASS -o $OUT_DIR/$DATE // run the backup command tar -zcvf $TAR_DIR/$TAR_BAK $OUT_DIR/$DATE // pack the backup files into the final directory find $TAR_DIR/ -mtime +$DAYS -delete // delete backups older than 14 days And then he ran it and it output permission-denied messages, so he pressed Ctrl+C . The server shut down automatically. He tried to restart it but got a grub error: He contacted AliCloud, and the engineer connected the disk to another working server so that he could check it. It looks like some folders are gone, including /data/ where the mongodb data is! We don't understand how the script could destroy the disk, including /data/ ; and of course, is it possible to get /data/ back? PS: He did not take a snapshot of the disk before. PS2: As people mention "backups" a lot: we have lots of important users and data coming in these 2 days, and the purpose of this action was to back them up (for the first time), yet the data turned out to be entirely deleted.
Easy enough. The // sequence isn't a comment in bash ( # is). The statement OUT_DIR=x // text had no effect* except a cryptic error message. Thus, with the OUT_DIR being an empty string, one of the commands eventually executed was rm -rf /* . Some directories placed directly underneath / weren't removed due to user not having permissions, but it appears that some vital directories were removed. You need to restore from backup. * The peculiar form of bash statement A=b c d e f is roughly similar to: export A=b c d e f unset A A common example: export VISUAL=vi # A standard visual editor to use is `vi` visudo -f dummy_sudoers1 # Starts vi to edit a fake sudo config. Type :q! to exit VISUAL=nano visudo -f dummy_sudoers2 # Starts nano to edit a fake sudo config visudo -f dummy_sudoers3 # Starts vi again (!) And the problematic line of script amounted to this: export OUT_DIR=/data/backup/mongod/tmp // 备份文件临时目录 # shell error as `//` isn't an executable file! unset OUT_DIR
{ "source": [ "https://serverfault.com/questions/959707", "https://serverfault.com", "https://serverfault.com/users/141511/" ] }
959,952
I searched for an answer to this question on serverfault and could not find it. I know it is possible, but I can't remember how to do it. How do I change a Linux host's hostname and get that change to take effect without a reboot? I am using Ubuntu 16 and Ubuntu 18. A big feature of Ubuntu is the graphical desktop and graphical system utilities. However, we are running Ubuntu in our production environment so we chose not to use the graphical desktop or utilities in order not to have those features consume resources we need in our production environment. I know that to rename the host, I edit the files: /etc/hostname /etc/hosts In the /etc/hostname one just replaces the current hostname (soon to be former hostname) with the new hostname. Ubuntu in the /etc/hosts file has the line: 127.0.1.1 your-hostname your-hostname It acts as bootstrapping while your host is booting up and establishing itself within your network. Prior to changing the hostname, your-hostname is the current (soon to be former hostname) and as a part of changing your host's hostname, one replaces that name with the new name. What I am familiar with is executing the above two steps and then rebooting your host. But plenty of times, like with a production server, one would like to execute that rename, but not reboot one's host. How can I change hostname on a host and get that change to take effect without rebooting the host?
You can change the kernel's idea of the hostname on a systemd-based system using the hostnamectl tool. For example: hostnamectl set-hostname whatever You can view the system's current idea of the hostname with: hostnamectl # equivalent to hostnamectl status Keep in mind that this does not change a running process's idea of the hostname. Such a process would have to check the hostname again in order to be updated, and almost no process does. Thus such a process would need to be restarted. In order for every process to begin using the new hostname, they must be restarted. It's generally easier to just reboot the system than to restart every service individually.
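For the concrete rename described in the question, a minimal sequence with no reboot might look like this (old and new names are placeholders; double-check /etc/hosts afterwards if the old name could be a substring of another entry):

OLD=oldname
NEW=newname
sudo hostnamectl set-hostname "$NEW"     # updates the kernel hostname and rewrites /etc/hostname
sudo sed -i "s/$OLD/$NEW/g" /etc/hosts   # fix the 127.0.1.1 bootstrap line mentioned in the question
hostnamectl                              # verify: "Static hostname: newname"
hostname                                 # verify the kernel's current idea of the hostname

Long-running daemons that cached the old name (syslog tags, monitoring agents, and so on) still need to be restarted, as noted above.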
{ "source": [ "https://serverfault.com/questions/959952", "https://serverfault.com", "https://serverfault.com/users/515660/" ] }
962,214
When I try to set root 's password: root@OpenWrt:~# passwd Changing password for root Enter the new password (minimum of 5, maximum of 8 characters) Please use a combination of upper and lower case letters and numbers. It seems the maximum length is 8. If I try to set a password longer than 8, only the first 8 characters are valid. How can I set a longer password for root ? My OpenWrt version: Linux OpenWrt 4.14.108 #0 SMP Wed Mar 27 21:59:03 2019 x86_64 GNU/Linux
This is because DES-based crypt (AKA 'descrypt') truncates passwords at 8 bytes, and only checks the first 8 for the purpose of password verification. That's the answer to your direct question, but here's some general advice implied by your context: Fortunately, from my reading, MD5 in /etc/login.defs is actually md5crypt ($1$), which, while a little outdated and declared deprecated by its author , is still far superior to DES-based crypt (and definitely much better than a raw, unsalted hash like plain MD5! Most unsalted hashes can be cracked on commodity GPUs at rates of billions per second) It looks like SHA256 (actually sha256crypt) and SHA512 (actually sha512crypt) are also there. I would pick one of those instead. If you set your password to password or something under each scheme, you can visually verify whether or not my conclusion that they're the -crypt variants is correct (examples here are taken from the hashcat example hashes , all 'hashcat', some wrapped for readability): Not recommended - unsalted or legacy hash types, much too "fast" (cracking rates) for password storage: MD5 - 8743b52063cd84097a65d1633f5c74f5 SHA256 - 127e6fbfe24a750e72930c220a8e138275656b8e5d8f48a98c3c92df2caba935 SHA512 - 82a9dda829eb7f8ffe9fbe49e45d47d2dad9664fbb7adf72492e3c81ebd3e2 \ 9134d9bc12212bf83c6840f10e8246b9db54a4859b7ccd0123d86e5872c1e5082f descrypt - 48c/R8JAv757A OK - much better than unsalted, no truncation, but no longer sufficiently resistant to brute force on modern hardware: md5crypt - $1$28772684$iEwNOgGugqO9.bIz5sk8k/ Better - relatively modern hashes with large salts and work factors: sha256crypt - $5$rounds=5000$GX7BopJZJxPc/KEK$le16UF8I2Anb.rOrn22AUPWvzUETDGefUmAV8AZkGcD sha512crypt - $6$52450745$k5ka2p8bFuSmoVT1tzOyyuaREkkKBcCNqoDKzYiJL9RaE8yMnPgh2XzzF0NDrUhgrcLwg78xs1w5pJiypEdFX/ Of these, only descrypt truncates at 8. The last two are your best bet. (Side note: the digits-only salts in the md5crypt and sha512crypt examples above are just side effects of how hashcat creates example hashes; real, healthy salts are usually drawn from a much larger keyspace). Note also that I'm only listing the hash types that are supported by /etc/login.defs on this platform. For general use, even sha256crypt and sha512crypt have been superseded - first by bcrypt, and then later by truly parallel-attack-resistant hashes like scrypt and the Argon2 family. (Note, however, that for interactive logins that should complete in under one second, bcrypt is actually more resistant to attack than the latter)
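A quick way to inspect and change what is in use, assuming your image ships /etc/login.defs as described above (some busybox-based builds hard-code the algorithm at compile time instead, in which case these files may not be consulted):

# Which scheme does the current root hash use?
# $1$ = md5crypt, $5$ = sha256crypt, $6$ = sha512crypt, a bare 13-character hash = descrypt
grep '^root:' /etc/shadow | cut -d: -f2

# Switch the default to sha512crypt (add the ENCRYPT_METHOD line if it is missing):
sed -i 's/^ENCRYPT_METHOD.*/ENCRYPT_METHOD SHA512/' /etc/login.defs

# Re-set the password so it is stored under the new scheme, then confirm it starts with $6$:
passwd root
grep '^root:' /etc/shadow | cut -d: -f2 | cut -c1-3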
{ "source": [ "https://serverfault.com/questions/962214", "https://serverfault.com", "https://serverfault.com/users/306531/" ] }
962,223
Good morning everyone. I hope I don't leave anything out in describing this urgent problem of mine, which I have been unable to solve with OpenVPN for days. I have a VPN server on a Raspberry Pi to which I can connect perfectly from PCs and smartphones. However, from the PC in my office I can't connect correctly. Or rather, I connect and the VPN works (I see the clients in my LAN at home), I see the network resources in the office, and ping 8.8.8.8 is OK, but I can't browse! I thought it was a DNS problem, but even putting Google's IP address in the browser (216.58.205.195) doesn't solve it. I would like that, when I connect from the office PC, only the traffic towards 192.168.1.0/24 goes into the VPN and everything else goes out with the office settings. For this I modified the .ovpn file I have in the office, leaving the "redirect-gateway" option active on the server but adding the DNS of my office: 192.168.100.30. Can someone give me a hand? I am in crisis, out of ideas. Thank you. Here is my .ovpn file for modifications: client dev tun proto udp remote xxxxx.myddns.net 1194 resolv-retry infinite nobind persist-key persist-tun key-direction 1 remote-cert-tls server tls-version-min 1.2 verify-x509-name server_xxxxxxxxxxxxxxxxxx name cipher AES-256-CBC auth SHA256 auth-nocache --pull-filter ignore redirect-gateway route 192.168.1.0 255.255.255.0 dhcp-option DNS 192.168.100.30 verb 3
{ "source": [ "https://serverfault.com/questions/962223", "https://serverfault.com", "https://serverfault.com/users/518458/" ] }
963,115
It is fairly standard to receive a significant number of minor hacking attempts each day trying common username/password combinations for services like SSH and SMTP. I've always assumed these attempts use the "small" address space of IPv4 to guess IP addresses. I notice that I get zero hacking attempts on IPv6, despite my domain having AAAA records mirroring every A record and all IPv4 services also being open on IPv6. Assuming a public DNS (AWS Route 53) with an obscure subdomain pointing to a reasonably randomised 64-bit suffix: are IPv6 addresses and/or subdomains remotely discoverable without trying every address in a /64 prefix or every subdomain in a very long list of common names? I am of course aware that crawling the web looking for listed (sub)domain names is simple enough. I'm also aware that machines on the same subnet can use NDP. I'm more interested in whether DNS or the underlying protocols of IPv6 allow remote discovery/listing of unknown domains and addresses.
Malicious bots don't guess IPv4 addresses anymore. They simply try them all. On modern systems this can take as little as a few hours. With IPv6, this is not really possible any longer, as you've surmised. The address space is so much larger that it's not even possible to brute-force scan a single /64 subnet within a human lifetime. Bots will have to get more creative if they are to continue blind scanning on IPv6 as on IPv4, and malicious bot operators will have to get accustomed to waiting far longer between finding any machines, let alone vulnerable ones. Fortunately for the bad guys and unfortunately for everyone else, IPv6 adoption has gone much more slowly than it really should have. IPv6 is 23 years old but has only seen significant adoption in the last five years or so. But everyone is keeping their IPv4 networks active, and extremely few hosts are IPv6-only, so malicious bot operators have had little incentive to make the switch. They probably won't do until there is a significant abandonment of IPv4, which probably won't happen in the next five years. I expect that blind guessing probably won't be productive for malicious bots, when they finally do move to IPv6, so they'll have to move to other means, like brute-forcing DNS names, or targeted brute-forcing of small subsets of each subnet. For instance, a common DHCPv6 server configuration gives out addresses in ::100 through ::1ff by default. That's just 256 addresses to try, out of a whole /64. Reconfiguring the DHCPv6 server to pick addresses from a much larger range mitigates this problem. And using modified EUI-64 addresses for SLAAC reduces the search space to 2 24 multiplied by the number of assigned OUIs. While this is over 100 billion addresses, it's far less than 2 64 . Random bots won't bother to search this space, but state-level malicious actors will, for targeted attacks, especially if they can make educated guesses as to which NICs might be in use, to reduce the search space further. Using RFC 7217 stable privacy addresses for SLAAC is easy (at least on modern operating systems that support it) and mitigates this risk. RFC 7707 describes several other ways in which reconnaissance might be performed in IPv6 networks to locate IPv6 addresses, and how to mitigate against those threats.
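As a rough illustration of those last two mitigations on a typical Linux host (the interface name and the secret are placeholders, and many distributions apply the same settings through NetworkManager or systemd-networkd instead; the DHCPv6 pool change is made on the DHCPv6 server, not here):

# Use RFC 7217 stable privacy interface identifiers for SLAAC instead of
# MAC-derived EUI-64 ones. The secret must be a 128-bit value written like an
# IPv6 address; generate a random one instead of copying this placeholder.
sysctl -w net.ipv6.conf.eth0.stable_secret="2001:db8:4b1d:7e11:90ab:cdef:1234:5678"
sysctl -w net.ipv6.conf.eth0.addr_gen_mode=2   # 2 = derive IDs from stable_secret (RFC 7217)
# Add the same two settings to a file under /etc/sysctl.d/ to persist them;
# they affect addresses generated after the setting is applied.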
{ "source": [ "https://serverfault.com/questions/963115", "https://serverfault.com", "https://serverfault.com/users/94158/" ] }
963,178
I have a linux box (Ubuntu 16.04) whose boot disk is partitioned using MBR. How can I convert it to GPT+UEFI?
Before starting, make sure you have a backup, and make sure to have a linux live boot ready to rescue your system. It's easy to mess this up! Use gdisk to convert the partition table to GPT. gdisk /dev/sda Create the "BIOS boot" partition that GRUB needs. n to create a new partition. Needs to be about 1MB. You can probably squeeze this in from sectors 34-2047. Use L or l to look up the code for "BIOS boot" (ef02). Write the new partition table. w Reload the partition table. partprobe /dev/sda Re-install the GRUB boot loader using the new partition scheme. grub-install /dev/sda Optionally reboot to verify it's working. If you just need GPT and not UEFI, you can stop here. Use gdisk to add an "EFI System" partition (ESP). Officially should be 100-500MB, but mine only used 130kB. Can be anywhere on the disk, so consider putting it at the end if you're using non-resizable media like a physical disk. gdisk /dev/sda and use n to create the partition. Give the ESP a distinctive label without whitespace like EFI-system , because we'll reference the partition label in fstab. c to set the label. Write the partition table. w Reload the partition table. partprobe /dev/sda Build the filesystem for the ESP. mkfs -t vfat -v /dev/disk/by-partlabel/EFI-system Create the ESP mount point. mkdir /boot/efi Add the ESP to /etc/fstab . It should look like this: /dev/disk/by-partlabel/EFI-system /boot/efi vfat defaults 0 2 Mount the ESP. mount /boot/efi Install EFI package on Ubuntu/Debian. apt install grub-efi-amd64 Install the GRUB EFI bootloader. grub-install --target=x86_64-efi /dev/sda Reboot. Change the BIOS from BIOS boot to UEFI boot. Use the one-time boot menu to force boot the disk. You may have to navigate to the disk (Boot from file) -> EFI -> ubuntu -> grubx64.efi . Re-install GRUB's EFI bootloader to update the UEFI boot selector. grub-install Resources: The author of gdisk has a verbose description of MBR, GPT, and UEFI . Clonezilla restore MBR disk to 4TB disk (convert to GPT) -- LINUX (not Windows!) covers the first part of the process.
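A few quick checks to confirm each stage worked (same device names as above; efibootmgr only gives useful output once the machine has actually booted in UEFI mode):

gdisk -l /dev/sda     # should report a valid GPT with a protective MBR, plus EF02 and EF00 partitions
lsblk -f /dev/sda     # the ESP should show up with a vfat filesystem
findmnt /boot/efi     # the ESP should be mounted where grub-install expects it
efibootmgr -v         # after the UEFI reboot: an "ubuntu" entry pointing at ...\grubx64.efi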
{ "source": [ "https://serverfault.com/questions/963178", "https://serverfault.com", "https://serverfault.com/users/114902/" ] }
963,184
I am trying a simple configuration with nginx v1.10.13. The configuration: events { } http { server { listen 80; location = /loc1 { proxy_pass http://192.168.0.5:80/; } } } and I test this configuration with curl --data "param1=XXX" -X POST http://192.168.0.4:80/loc1 I watch for incoming connections on 192.168.0.5 and nothing comes. I really don't understand what is wrong; the configuration is very simple. I match all connections on port 80 and the /loc1 location. So why does my curl command fail? The curl command returns Not found: / ...
{ "source": [ "https://serverfault.com/questions/963184", "https://serverfault.com", "https://serverfault.com/users/519443/" ] }
963,759
This issue is driving me crazy. I run a fresh install of Ubuntu 18.04, with: ufw to manage the firewall, a br0 bridge, lxd and libvirt (KVM). I tried the stock docker.io package and packages from Docker's own deb repository. I want to be able to deploy Docker containers choosing the IP to bind their ports to (e.g. -p 10.58.26.6:98800:98800) and then open the port with UFW. But Docker seems to create iptables rules that perturb the br0 bridge (e.g. the host cannot ping libvirt guests). I have looked all around and cannot find a good, security-aware solution. Manually doing iptables -I FORWARD -i br0 -o br0 -j ACCEPT seems to make everything work. Also, setting "iptables": false for the Docker daemon allows the bridge to behave normally, but breaks the containers' egress network. I found this solution that seemed simple, editing a single UFW file: https://stackoverflow.com/a/51741599/1091772 , but it doesn't work at all. What would be the best-practice, secure way of solving this permanently, surviving reboots? EDIT: I ended up adding -A ufw-before-forward -i br0 -o br0 -j ACCEPT at the end of /etc/ufw/before.rules before the COMMIT. Can I consider this a fix, or does it raise other issues?
The problem, actually a feature: br_netfilter The explanation is that the bridge netfilter code is enabled by Docker for internal container isolation: intended among other usages for stateful bridge firewalling or for leveraging iptables ' matches and targets from the bridge path without having to (or being able to) duplicate them all in ebtables . Quite disregarding network layering, the ethernet bridge code, at network layer 2, now makes upcalls to iptables working at IP level, i.e. network layer 3. It can be enabled only globally before kernel 5.3 (but Docker doesn't handle the new kernel 5.3 features): either for the host and every container, or for none. Once you understand what's going on and know what to look for, adapted choices can be made. The netfilter project describes the various ebtables / iptables interactions when br_netfilter is enabled. Especially of interest is section 7, explaining why some rules without apparent effect are sometimes needed to avoid unintended effects from the bridge path, like using: iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -d 172.16.1.0/24 -j ACCEPT iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -j MASQUERADE to avoid two systems on the same LAN being NATed by... the bridge (see example below). You have a few choices to avoid your problem, but the choice you took is probably the best if you don't want to know all the details nor verify whether some iptables rules (sometimes hidden in other namespaces) would be disrupted: Permanently prevent the br_netfilter module from being loaded. Usually blacklist isn't enough; install must be used. This is a choice prone to issues for applications relying on br_netfilter : obviously Docker, Kubernetes, ... echo install br_netfilter /bin/true > /etc/modprobe.d/disable-br-netfilter.conf Have the module loaded, but disable its effects: same results with regard to Docker. For iptables ' effects that is: sysctl -w net.bridge.bridge-nf-call-iptables=0 If putting this at startup, the module should be loaded first or this toggle won't exist yet. These two previous choices will for sure disrupt the iptables match -m physdev : the xt_physdev module, when itself loaded, auto-loads the br_netfilter module (this would happen even if a rule added from a container triggered the loading). Now that br_netfilter won't be loaded, -m physdev will probably never match. Work around br_netfilter's effect when needed, like OP: add those apparent no-op rules in various chains (PREROUTING, FORWARD, POSTROUTING) as described in section 7 . For example: iptables -t nat -A POSTROUTING -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT iptables -A FORWARD -i br0 -o br0 -j ACCEPT Those rules should never match, because traffic in the same IP LAN is not routed, except for some rare DNAT setups. But thanks to br_netfilter they do match, because they are first called for switched frames ("upgraded" to IP packets) traversing the bridge . Then they are called again for routed packets traversing the router to an unrelated interface (but won't match then). Don't put an IP on the bridge: put that IP on one end of a veth interface with its other end on the bridge: this should ensure that the bridge won't interact with routing, but that's not what most common container/VM products do. You can even hide the bridge in its own isolated network namespace (that would only be helpful if you want to isolate from other ebtables rules this time). Switch everything to nftables which among stated goals will avoid these bridge interaction issues .
For now the bridge firewalling has no stateful support available, it's still WIP but is promised to be cleaner when available, because there won't be any "upcall". You should search what triggers the loading of br_netfilter (eg: -m physdev ) and see if you can avoid it or not, to choose how to proceed. Minimal Docker integration When the breakage happens in the host initial network namespace where Docker is running rather than in a new (eg: container) network namespace, OP's rule should be added to the DOCKER-USER chain rather than alone because Docker usually inserts its own rules before what can be already found. This could even be added in some network startup script. Here's an idempotent method for OP's case. This chain is created by Docker if it didn't exist before, so ignoring a failure makes it work wether started before or after Docker. Likewise -I is needed because Docker (or some versions of it) might append a dummy -j RETURN rule to DOCKER-USER , so -I makes it work whether started before or after Docker. iptables -N DOCKER-USER 2>/dev/null || true iptables -C DOCKER-USER -i br0 -o br0 -j ACCEPT >/dev/null 2>&1 || iptables -I DOCKER-USER -i br0 -o br0 -j ACCEPT Example with network namespaces Let's reproduce some effects using a network namespace. Note that nowhere any ebtables rule will be used. Also note that this example relies on the usual legacy iptables , not iptables over nftables as enabled by default on Debian buster. Let's reproduce a simple case similar with many container usages: a router 192.168.0.1/192.0.2.100 doing NAT with two hosts behind: 192.168.0.101 and 192.168.0.102, linked with a bridge on the router. The two hosts can communicate directly on the same LAN, through the bridge. #!/bin/sh for ns in host1 host2 router; do ip netns del $ns 2>/dev/null || : ip netns add $ns ip -n $ns link set lo up done ip netns exec router sysctl -q -w net.ipv4.conf.default.forwarding=1 ip -n router link add bridge0 type bridge ip -n router link set bridge0 up ip -n router address add 192.168.0.1/24 dev bridge0 for i in 1 2; do ip -n host$i link add eth0 type veth peer netns router port$i ip -n host$i link set eth0 up ip -n host$i address add 192.168.0.10$i/24 dev eth0 ip -n host$i route add default via 192.168.0.1 ip -n router link set port$i up master bridge0 done #to mimic a standard NAT router, iptables rule voluntarily made as it is to show the last "effect" ip -n router link add name eth0 type dummy ip -n router link set eth0 up ip -n router address add 192.0.2.100/24 dev eth0 ip -n router route add default via 192.0.2.1 ip netns exec router iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE Let's load the kernel module br_netfilter (to be sure it won't be later) and disable its effects with the (not-per-namespace) toggle bridge-nf-call-iptables , available only in initial namespace: modprobe br_netfilter sysctl -w net.bridge.bridge-nf-call-iptables=0 Warning: again, this can disrupt iptables rules like -m physdev anywhere on the host or in containers which rely on br_netfilter loaded and enabled. Let's add some icmp ping traffic counters. ip netns exec router iptables -A FORWARD -p icmp --icmp-type echo-request ip netns exec router iptables -A FORWARD -p icmp --icmp-type echo-reply Let's ping: # ip netns exec host1 ping -n -c2 192.168.0.102 PING 192.168.0.102 (192.168.0.102) 56(84) bytes of data. 
64 bytes from 192.168.0.102: icmp_seq=1 ttl=64 time=0.047 ms 64 bytes from 192.168.0.102: icmp_seq=2 ttl=64 time=0.058 ms --- 192.168.0.102 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1017ms rtt min/avg/max/mdev = 0.047/0.052/0.058/0.009 ms The counters won't match: # ip netns exec router iptables -v -S FORWARD -P FORWARD ACCEPT -c 0 0 -A FORWARD -p icmp -m icmp --icmp-type 8 -c 0 0 -A FORWARD -p icmp -m icmp --icmp-type 0 -c 0 0 Let's enable bridge-nf-call-iptables and ping again: # sysctl -w net.bridge.bridge-nf-call-iptables=1 net.bridge.bridge-nf-call-iptables = 1 # ip netns exec host1 ping -n -c2 192.168.0.102 PING 192.168.0.102 (192.168.0.102) 56(84) bytes of data. 64 bytes from 192.168.0.102: icmp_seq=1 ttl=64 time=0.094 ms 64 bytes from 192.168.0.102: icmp_seq=2 ttl=64 time=0.163 ms --- 192.168.0.102 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1006ms rtt min/avg/max/mdev = 0.094/0.128/0.163/0.036 ms This time switched packets got a match in iptables' filter/FORWARD chain: # ip netns exec router iptables -v -S FORWARD -P FORWARD ACCEPT -c 4 336 -A FORWARD -p icmp -m icmp --icmp-type 8 -c 2 168 -A FORWARD -p icmp -m icmp --icmp-type 0 -c 2 168 Let's put a DROP policy (which zeroes the default counters) and try again: # ip netns exec host1 ping -n -c2 192.168.0.102 PING 192.168.0.102 (192.168.0.102) 56(84) bytes of data. --- 192.168.0.102 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1008ms # ip netns exec router iptables -v -S FORWARD -P FORWARD DROP -c 2 168 -A FORWARD -p icmp -m icmp --icmp-type 8 -c 4 336 -A FORWARD -p icmp -m icmp --icmp-type 0 -c 2 168 The bridge code filtered the switched frames/packets via iptables. Let's add the bypass rule (which will zero again the default counters) like in OP and try again: # ip netns exec router iptables -A FORWARD -i bridge0 -o bridge0 -j ACCEPT # ip netns exec host1 ping -n -c2 192.168.0.102 PING 192.168.0.102 (192.168.0.102) 56(84) bytes of data. 64 bytes from 192.168.0.102: icmp_seq=1 ttl=64 time=0.132 ms 64 bytes from 192.168.0.102: icmp_seq=2 ttl=64 time=0.123 ms --- 192.168.0.102 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1024ms rtt min/avg/max/mdev = 0.123/0.127/0.132/0.012 ms # ip netns exec router iptables -v -S FORWARD -P FORWARD DROP -c 0 0 -A FORWARD -p icmp -m icmp --icmp-type 8 -c 6 504 -A FORWARD -p icmp -m icmp --icmp-type 0 -c 4 336 -A FORWARD -i bridge0 -o bridge0 -c 4 336 -j ACCEPT Let's see what is now actually received on host2 during a ping from host1: # ip netns exec host2 tcpdump -l -n -s0 -i eth0 -p icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 02:16:11.068795 IP 192.168.0.1 > 192.168.0.102: ICMP echo request, id 9496, seq 1, length 64 02:16:11.068817 IP 192.168.0.102 > 192.168.0.1: ICMP echo reply, id 9496, seq 1, length 64 02:16:12.088002 IP 192.168.0.1 > 192.168.0.102: ICMP echo request, id 9496, seq 2, length 64 02:16:12.088063 IP 192.168.0.102 > 192.168.0.1: ICMP echo reply, id 9496, seq 2, length 64 ... instead of source 192.168.0.101. The MASQUERADE rule was also called from the bridge path. To avoid this either add (as explained in section 7 's example) an exception rule before, or state a non-bridge outgoing interface, if possible at all (now it's available you can even use -m physdev if it has to be a bridge...). 
Randomly related: LKML/netfilter-dev: br_netfilter: enable in non-initial netns : it would help to enable this feature per namespace rather than globally, thus limiting interactions between hosts and containers. UPDATE: added in Linux kernel 5.3, but not supported by Docker. netfilter-dev: netfilter: physdev: relax br_netfilter dependency : merely attempting to delete a non-existing physdev rule could create problems. (UPDATE: fixed). netfilter-dev: connection tracking support for bridge : WIP bridge netfilter code to prepare stateful bridge firewalling using nftables, this time more elegantly. I think one of the last steps to get rid of iptables ('s kernel side API). UPDATE: added in kernel 5.3, but as long as there's no complete rework in Docker to use these features, doesn't change anything.
{ "source": [ "https://serverfault.com/questions/963759", "https://serverfault.com", "https://serverfault.com/users/165115/" ] }
969,606
I'm looking for a more secure way of doing offsite backups that will also protect my data in the situation where a malicious hacker has gained root access to my server. Even though the chance of that happening is smaller than other kinds of risks if SSH and password security are properly set up and the system is kept properly up to date, the amount of damage that can be permanently done is really high, and therefore I'd like to find a solution to limit that. I've already tried two ways of doing offsite backups: a simple root-writable WebDAV mount (configured in fstab) onto which the backed-up data is copied. Problem: not really an offsite backup, because the connection - and moreover access - to the offsite location is constantly left open as a folder in the filesystem. This is sufficient protection against many kinds of attacks if the mount has limited access privileges (read: root-only access), but doesn't protect against a malicious person with root access. Borg backup through SSH with key authentication. Problem: the connection to that offsite server can be made with the key that's stored on the host if the malicious user has root access to the host. As a solution I'm thinking about these potential approaches, but I don't know how to implement them or with what: Backups can only be written or appended to the destination, but not deleted. The use of backup software that handles the offsite backups and doesn't support mass deletion of the offsite backups from the first host. Solutions that aren't really interesting in my situation: An extra backup job on the offsite host which transfers the backups to a location that isn't accessible by the first host (not possible due to technical limitations). Can anyone give advice on how to implement a proper offsite backup for my case?
All your suggestions currently have one thing in common: the backup source does the backup and has access to the backup destination. Whether you mount the location or use tools like SSH or rsync, the source system somehow has access to the backup. Therefore, a compromise of the server might compromise your backups, too. What if the backup solution has access to the server instead? The backup system can do its job with read-only access, so a compromise of the backup system probably wouldn't compromise the server. Also, the backup system could be dedicated to that purpose alone, making the contents of the backup the only attack vector. That would be very unlikely and would need a really sophisticated attack. To avoid overwriting the backups with tampered or damaged content, do incremental backups that allow you to restore any previous state within the defined retention period.
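One common way to implement this pull model is a dedicated key on the server that is only allowed to run a read-only rsync, which the backup host then uses to pull incremental snapshots. User names, paths and dates below are illustrative; the rrsync helper ships with rsync's documentation/scripts on many distributions and may need to be copied somewhere executable first:

# On the server, ~backupro/.ssh/authorized_keys gets one restricted entry
# (shown here as a comment), so this key can only run a read-only rsync of /srv/data:
#   command="/usr/local/bin/rrsync -ro /srv/data",restrict ssh-ed25519 AAAA... backup-host

# On the backup host, pull a dated snapshot; unchanged files are hard-linked
# against the previous snapshot, so each run only costs the changed data and
# older snapshots stay out of the server's reach entirely:
rsync -a --delete \
  --link-dest=/backups/server1/2019-06-01 \
  backupro@server1:/ /backups/server1/2019-06-02/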
{ "source": [ "https://serverfault.com/questions/969606", "https://serverfault.com", "https://serverfault.com/users/28449/" ] }
970,688
I have a Dell R610 server, which has 6 2.5" drive bays. These all came empty. Generally, when I see pictures, all of the drive bays contain either drives or empty mounts. At first, I thought it would be unnecessary to have unused mounts. However, I occasionally think that issues such as static electricity or dust might cause problems because of the empty space, necessitating empty bays being filled. I have tried several google searches, but I get no results. Can anyone disprove or back up my worries?
It sounds like you bought a repurposed server. The previous owner probably took out their disks and had them destroyed, leaving only empty hot-swap bays. On new servers those are filled with either empty drive trays (and you would place your drive in the tray to populate the slot) or, more likely, with filler blanks (and the vendor sells you drives ready to use as a single unit already attached to their version of the hot-swap tray). Plenty of places sell both filler blanks and drive trays. As far as I know it is not immediately harmful to leave hot-swap bays empty, but it will result in suboptimal airflow and cooling, and you may get some dust build-up in any of the exposed connectors, which might be something to worry about when you do want to populate those empty slots. For aesthetics and airflow, fill them. (And blanks usually cost only a couple of $ € £ )
{ "source": [ "https://serverfault.com/questions/970688", "https://serverfault.com", "https://serverfault.com/users/498933/" ] }
970,717
When I run this command sudo systemctl enable /home/ec2-user/my_custom.service I get Failed to enable unit: Access denied And when I run systemctl enable /home/ec2-user/my_custom.service I get ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-unit-files ==== Authentication is required to manage system service or unit files. Authenticating as: Cloud User (ec2-user) Password: ==== AUTHENTICATION COMPLETE ==== Failed to enable unit: Access denied This user didn't have a password, so I set a new one using sudo passwd ec2-user and then used that password, but I still get the same error. Here is the content of my_custom.service [Unit] Description=go_responder After=network.target [Service] Type=simple User=ec2-user ExecStart=/home/ec2-user/custom_service_executable [Install] WantedBy=default.target
{ "source": [ "https://serverfault.com/questions/970717", "https://serverfault.com", "https://serverfault.com/users/320387/" ] }
971,990
Answers I found so far (e.g. Find out public ip address of the EC2 server ) suggest using wget or curl to reach the server. They are not useful for me because my EC2 instances are not directly reachable from the internet. I have tried aws ec2 --profile prod describe-instances --filters Name=instance-id,Values=i-00914683ababcba7eb1 but there are no IPs in the returned JSON result. Which aws CLI command can I use to retrieve this info?
Generally you can do it with --query filter. If you need the private IP address only: aws --region YOUR-AWS-REGION \ ec2 describe-instances \ --filters \ "Name=instance-state-name,Values=running" \ "Name=instance-id,Values=i-00914683ababcba7eb1" \ --query 'Reservations[*].Instances[*].[PrivateIpAddress]' \ --output text If you need the public ip address only: aws --region YOUR-AWS-REGION \ ec2 describe-instances \ --filters \ "Name=instance-state-name,Values=running" \ "Name=instance-id,Values=i-00914683ababcba7eb1" \ --query 'Reservations[*].Instances[*].[PublicIpAddress]' \ --output text Or you can have both: aws --region YOUR-AWS-REGION \ ec2 describe-instances \ --filters \ "Name=instance-state-name,Values=running" \ "Name=instance-id,Values=i-00914683ababcba7eb1" \ --query 'Reservations[*].Instances[*].[PrivateIpAddress, PublicIpAddress]' \ --output text Of course you can have the output in json format too. Just change --output text to --output json More information about --query filters.
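A small sketch of how the text output is typically consumed in a script (profile, region and instance ID are placeholders):

# Capture the private IP into a shell variable for later use (ssh, scp, inventory, ...):
PRIVATE_IP=$(aws --profile prod --region eu-west-1 \
  ec2 describe-instances \
  --filters "Name=instance-id,Values=i-00914683ababcba7eb1" \
  --query 'Reservations[*].Instances[*].PrivateIpAddress' \
  --output text)
echo "Instance private IP: ${PRIVATE_IP}"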
{ "source": [ "https://serverfault.com/questions/971990", "https://serverfault.com", "https://serverfault.com/users/12364/" ] }
971,998
I have been stuck trying to get my site to work with SSL and a subdomain for a few days now. Have been googling endlessly to no avail. I have a webserver setup on AWS EC2 instance running amazon Linux 2 with apache (httpd). I currently have a letsencrypt wildcard certificate on my server. Got a Virtual host file below enabled (/etc/httpd/sites-enabled): <VirtualHost *:80> ServerName example.com ServerAlias www.example.com DocumentRoot /var/www/example.com ErrorLog /var/www/example.com/error.log CustomLog /var/www/example.com/requests.log combined Redirect permanent / https://example.com/ </VirtualHost> <VirtualHost *:80> ServerName subdomain.example.com DocumentRoot /var/www/subdomain.example.com ErrorLog /var/www/subdomain.example.com/error.log CustomLog /var/www/subdomain.example.com/requests.log combined Redirect permanent / https://subdomain.example.com/ </VirtualHost> <VirtualHost *:443> ServerName example.com ServerAlias www.example.com DocumentRoot /var/www/example.com ErrorLog /var/www/example.com/error.log CustomLog /var/www/example.com/requests.log combined SSLEngine on SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem </VirtualHost> <VirtualHost *:443> ServerName subdomain.example.com DocumentRoot /var/www/subdomain.example.com ErrorLog /var/www/subdomain.example.com/error.log CustomLog /var/www/subdomain.example.com/requests.log combined SSLEngine on SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem </VirtualHost> If I go to https://example.com I get a certificate error But if I go to https://subdomain.example.com it works fine showing the correct letsencrypt certificate. I don't understand why the certificate is not working for both domain and subdomain. Can someone please help me see what the issue might be? Thanks.
{ "source": [ "https://serverfault.com/questions/971998", "https://serverfault.com", "https://serverfault.com/users/528370/" ] }
973,208
On a CentOS 7, I've installed foobar version 2, compiled from sources. How can I make yum aware of that install so it won't install foobar version 1 for dependency? Installation of foobar $ git clone https://example.com/foobar.git [...] $ cd foobar $ make && sudo make install [...] $ foobar --version foobar v2 Installation of a package requiring foobar $ sudo yum install baz [...] ---> Package baz.x86_64 0:3.14.15-9 will be installed --> Processing Dependency: foobar >= 1 for package: baz-3.14.15-9.x86_64 [...] Dependencies Resolved ============================================================== Package Arch Version Repository Size ============================================================== Installing: baz x86_64 3.14.15-9 example 1.1 M Installing for dependencies: foobar x86_64 1.0.0-0.el7 example 4.5 M I'd like yum to know foobar 2 is installed and since baz requires foobar >= 1 or simply foobar , foobar-1.0.0-0.el7.x86_64.rpm should not be installed.
"I've installed foobar version 2, compiled from sources" Take the extra effort when adding custom software to your system and package your additions in a RPM . See Martin Streicher, 2010-01-12, Building and distributing packages , IBM on how to do that. Then install that resulting RPM so it can and will play nice with your package manager's conflict and dependency handling, upgrade, downgrade and removal procedures and security reporting.
{ "source": [ "https://serverfault.com/questions/973208", "https://serverfault.com", "https://serverfault.com/users/328011/" ] }