source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
506,612 | I have user access (no root) into a Linux (Suse) machine where I developed some bash scripts and the corresponding bash autocompletion rules. Since the scripts belong only to my user, I need the completion rules to be "active" only for me (apart from the fact that I have no root write access), so placing my bash_completion script into the /etc/bash_completion.d/ folder is not an option. At the moment I named my file .bash_completion.myscript and source it directly from my .bashrc , but I just wonder if there is any other "standard" way of achieving these results, already considered in the bash implementation. For example, creating a folder /home/myuser/.bash_completion.d/ ? | Use a ~/.bash_completion file. From the Bash Completion FAQ : Q. How can I insert my own local completions without having to reinsert them every time you issue a new release? A. Put them in ~/.bash_completion, which is parsed at the end of the main completion script. See also the next question. Q. I author/maintain package X and would like to maintain my own completion code for this package. Where should I put it to be sure that interactive bash shells will find it and source it? A. Install it in one of the directories pointed to by bash-completion's pkgconfig file variables. There are two alternatives: the recommended one is 'completionsdir' (get it with "pkg-config --variable=completionsdir bash-completion") from which completions are loaded on demand based on invoked commands' names, so be sure to name your completion file accordingly, and to include for example symbolic links in case the file provides completions for more than one command. The other one which is present for backwards compatibility reasons is 'compatdir' (get it with "pkg-config --variable=compatdir bash-completion") from which files are loaded when bash_completion is loaded. | {
"source": [
"https://serverfault.com/questions/506612",
"https://serverfault.com",
"https://serverfault.com/users/158281/"
]
} |
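A minimal sketch of the per-user approach described in that answer, assuming the bash-completion package is installed; `mycmd` and `_mycmd` are hypothetical names standing in for the real script and its completion function:

```bash
# ~/.bash_completion -- parsed by bash-completion after the main completion script.
# "mycmd" and "_mycmd" are placeholder names used for illustration only.
_mycmd() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "start stop status" -- "$cur") )
}
complete -F _mycmd mycmd

# The package-maintainer directories mentioned in the answer can be located with:
#   pkg-config --variable=completionsdir bash-completion
#   pkg-config --variable=compatdir      bash-completion
```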
506,619 | I'm trying to install nginx on wheezy using this code in my deployment script: echo "deb http://nginx.org/packages/debian/ wheezy nginx" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://nginx.org/packages/debian/ wheezy nginx" | sudo tee -a /etc/apt/sources.list
apt-get update
apt-get install -y nginx However it doesn't seem that wheezy is supported by nginx yet. Can anyone tell me how I am meant to be installing nginx on wheezy please? | Use a ~/.bash_completion file. From the Bash Completion FAQ : Q. How can I insert my own local completions without having to reinsert them every time you issue a new release? A. Put them in ~/.bash_completion, which is parsed at the end of the main completion script. See also the next question. Q. I author/maintain package X and would like to maintain my own completion code for this package. Where should I put it to be sure that interactive bash shells will find it and source it? A. Install it in one of the directories pointed to by bash-completion's pkgconfig file variables. There are two alternatives: the recommended one is 'completionsdir' (get it with "pkg-config --variable=completionsdir bash-completion") from which completions are loaded on demand based on invoked commands' names, so be sure to name your completion file accordingly, and to include for example symbolic links in case the file provides completions for more than one command. The other one which is present for backwards compatibility reasons is 'compatdir' (get it with "pkg-config --variable=compatdir bash-completion") from which files are loaded when bash_completion is loaded. | {
"source": [
"https://serverfault.com/questions/506619",
"https://serverfault.com",
"https://serverfault.com/users/139487/"
]
} |
507,521 | SSD drives have been around for several years now. But the issue of reliability still comes up. I guess this is a follow up from this question posted 4 years ago, and last updated in 2011. It's now 2013, has much changed? I guess I'm looking for some real evidence, more than just a gut feel. Maybe you're using them in your DC. What's been your experience? Reliability of ssd drives UPDATE: It's now 2016. I think the answer is probably yes (a pity they still cost more per GB though). This report gives some evidence: Flash Reliability in Production: The Expected and the Unexpected And some interesting data on (consumer) mechanical drives: Backblaze: Hard Drive Data and Stats | This is going to be a function of your workload and the class of drive you purchase... In my server deployments, I have not had a properly-spec'd SSD fail. That's across many different types of drives, applications and workloads. Remember, not all SSDs are the same!! So what does "properly-spec'd" mean? If your question is about SSD use in enterprise and server applications, quite a bit has changed over the past few years since the original question . Here are a few things to consider: Identify your use-case: There are consumer drives, enterprise drives and even ruggedized industrial application SSDs . Don't buy a cheap disk meant for desktop use and run a write-intensive database on it. Many form-factors are available: Today's SSDs can be found in PCIe cards, SATA and SAS 1.8", 2.5", 3.5" and other variants. Use RAID for your servers: You wouldn't depend on a single mechanical drive in a server situation. Why would you do the same for an SSD? Drive composition: There are DRAM-based SSDs, as well as the MLC, eMLC and SLC flash types. The latter have finite lifetimes, but they're well-defined by the manufacturer. e.g. you'll see daily write limits like 5TB/day for 3 years . Drive application matters: Some drives are for general use, while there are others that are read-optimized or write-optimized. DRAM-based drives like the sTec ZeusRAM and DDRDrive won't wear-out. These are ideal for high-write environments and to front slower disks. MLC drives tend to be larger and optimized for reads. SLC drives have a better lifetime than the MLC drives, but enterprise MLC really appears to be good enough for most scenarios. TRIM doesn't seem to matter: Hardware RAID controllers still don't seem to fully support it . And most of the time I use SSDs, it's going to be on a hardware RAID setup. It isn't something I've worried about in my installations. Maybe I should? Endurance: Over-provisioning is common in server-class SSDs. Sometimes this can be done at the firmware level, or just by partitioning the drive the right way. Wear-leveling algorithms are better across the board as well. Some drives even report lifetime and endurance statistics. For example, some of my HP-branded Sandisk enterprise SSDs show 98% life remaining after two years of use. Prices have fallen considerably: SSDs hit the right price:performance ratio for many applications. When performance is really needed, it's rare to default to mechanical drives now. Reputations have been solidified: e.g. Intel is safe but not high-performance. OCZ is unreliable. Sandforce -based drives are good. sTec/STEC is extremely-solid and is the OEM for a lot of high-end array drives. Sandisk /Pliant is similar. OWC has great SSD solutions with a superb warranty for low-impact servers and for workstation/laptop deployment. 
Power-loss protection is important: Look at drives with supercapacitors/supercaps to handle outstanding writes during power events. Some drives boost performance with onboard caches or leverage them to reduce wear. Supercaps ensure that those writes are flushed to stable storage. Hybrid solutions: Hardware RAID controller vendors offer the ability to augment standard disk arrays with SSDs to accelerate reads/writes or serve as intelligent cache. LSI has CacheCade and its Nytro hardware/software offerings. Software and OS-level solutions have also exist to do things like provide local cache on application, database or hypervisor systems. Advanced filesystems like ZFS make very intelligent use of read and write-optimized SSDs; ZFS can be configured to use separate devices for secondary caching and for the intent log, and SSDs are often used in that capacity even for HDD pools. Top-tier flash has arrived: PCIe flash solutions like FusionIO have matured to the point where organizations are comfortable deploying critical applications that rely on the increased performance. Appliance and SAN solutions like RanSan and Violin Memory are still out there as well, with more entrants coming into that space. | {
"source": [
"https://serverfault.com/questions/507521",
"https://serverfault.com",
"https://serverfault.com/users/14631/"
]
} |
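The endurance statistics mentioned above can usually be read with smartmontools; a rough sketch, with the caveat that attribute names vary by SSD vendor, so the grep pattern is only illustrative:

```bash
# Requires the smartmontools package; adjust the device and pattern to your hardware.
smartctl -a /dev/sda | grep -Ei 'wear|life|endurance|percent'
```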
508,026 | I have a command that finds all the PDF files that contain the string "Font" find /Users/me/PDFFiles/ -type f -name "*.pdf" -exec grep -H 'Font' '{}' ';' How can I change this command such that it does the inverse? Finds all PDF files that do not contain the search string "Font" | You want to use the "-L" option of grep : -L, --files-without-match
Only the names of files not containing selected lines are written to standard output. Path-
names are listed once per file searched. If the standard input is searched, the string
``(standard input)'' is written. | {
"source": [
"https://serverfault.com/questions/508026",
"https://serverfault.com",
"https://serverfault.com/users/145544/"
]
} |
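Substituted into the original command, the inverse search could look like the sketch below (same paths as the question; -L prints the names of files with no matching lines):

```bash
find /Users/me/PDFFiles/ -type f -name "*.pdf" -exec grep -L 'Font' '{}' ';'
```

Note that grep scans the raw PDF bytes, so compressed PDF streams may hide the literal string; a tool such as pdfgrep is worth considering if the results look incomplete.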
508,049 | I read an article today describing how a penetration tester was able to demonstrate creating a fake bank account with a $14 million balance. However, one paragraph describing the attack stood out: Then he "flooded" switches -- small boxes that direct data traffic --
to overwhelm the bank's internal network with data. That kind of
attack turns the switch into a "hub" that broadcasts data out
indiscriminately. I'm not familiar with the effect that is described. Is it really possible to force a switch to broadcast traffic to all of its ports by sending massive amounts of traffic? What exactly is going on in this situation? | This is called MAC flooding . A "MAC address" is an Ethernet hardware address. A switch maintains a CAM table that maps MAC addresses to ports. If a switch has to send a packet to a MAC address not in its CAM table, it floods it to all ports just like a hub does. So if you flood a switch with a larger number of MAC addresses, you will force the entries of legitimate MAC addresses out of the CAM table and their traffic will be flooded to all ports. | {
"source": [
"https://serverfault.com/questions/508049",
"https://serverfault.com",
"https://serverfault.com/users/52352/"
]
} |
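For completeness, the flooding described above is typically demonstrated with macof from the dsniff package; a hedged sketch (run it only on a network you are authorized to test, and note that switch-side port security limiting the number of MACs per port is the usual mitigation):

```bash
# Sends frames with random source MAC addresses out eth0, filling the switch CAM table.
macof -i eth0 -n 100000
```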
508,051 | I'm building an EC2 LAMP server for the first time, and so far so good. Except I can't seem to get the require 'vendor/autoload.php'; working right I get this error message whenever I write that line above Warning: require(/home/ec2-user/vendor/autoload.php): failed to open stream: Permission denied in /var/www/html/tables.php on line 6 Fatal error: require(): Failed opening required '/home/ec2-user/vendor/autoload.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/html/tables.php on line 6 I know I have those files. My path to the file is: /home/ec2-user/vendor/composer/autoload.php The files that represent my web page are in /var/www/html/ I can verify both using Filezilla. Do I need to configure permissions, or move the whole vendor folder to a place where it can be accessed? Did I make an error with the path? Thanks in advance. ps aux | grep apache gives me this: I think this means that its running under ec2-user? How do I switch it, then? apache 1511 0.0 1.5 407000 9376 ? S 15:30 0:00 /usr/sbin/httpd
apache 1512 0.0 1.3 407376 8380 ? S 15:30 0:00 /usr/sbin/httpd
apache 1513 0.0 1.5 406996 9368 ? S 15:30 0:00 /usr/sbin/httpd
apache 1514 0.0 1.3 406880 8388 ? S 15:30 0:00 /usr/sbin/httpd
apache 1515 0.0 1.5 406880 9368 ? S 15:30 0:00 /usr/sbin/httpd
apache 1516 0.0 1.3 406880 8320 ? S 15:30 0:00 /usr/sbin/httpd
apache 1517 0.0 1.5 406880 9356 ? S 15:30 0:00 /usr/sbin/httpd
apache 1518 0.0 1.3 406880 8380 ? S 15:30 0:00 /usr/sbin/httpd
ec2-user 2191 0.0 0.1 103416 828 pts/0 S+ 17:45 0:00 grep apache | This is called MAC flooding . A "MAC address" is an Ethernet hardware address. A switch maintains a CAM table that maps MAC addresses to ports. If a switch has to send a packet to a MAC address not in its CAM table, it floods it to all ports just like a hub does. So if you flood a switch with a larger number of MAC addresses, you will force the entries of legitimate MAC addresses out of the CAM table and their traffic will be flooded to all ports. | {
"source": [
"https://serverfault.com/questions/508051",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
508,074 | I've got NUT configured on a few systems in the following way: One host is connected to four UPSes via USB. That host is plugged into two of the four UPSes. Two other hosts are on the same rack, each plugged into one of the remaining UPSes. upsmon on the master only monitors the two UPSes relevant to powering that host, even though upsd is configured to communicate with all of them. upsmon on each of the slaves monitors the appropriate UPS attached to the master. I cannot have each slave directly attached to its UPS. (Those two hosts are VMware ESXis and the only NUT package I've found for them only contains upsmon.) (I also have another rack whose UPS configuration is sufficiently complicated that while I could have a setup where each UPS was directly attached to a host it powered, it makes for simpler cabling to have them all connected to a single host even if it doesn't draw power from all of them.) My question is this: if the power goes out and one of the slaves' UPSes goes into low battery state, what is the best way to have the master power off the UPS as soon as the slave attached to it has shut down? I don't want to just wait for the master to shut down, because that leaves a window of time where the power might return but, because the slave's UPS never powered off, the slave's system will not see a power cycle and, thus, will not turn itself back on. | This is called MAC flooding . A "MAC address" is an Ethernet hardware address. A switch maintains a CAM table that maps MAC addresses to ports. If a switch has to send a packet to a MAC address not in its CAM table, it floods it to all ports just like a hub does. So if you flood a switch with a larger number of MAC addresses, you will force the entries of legitimate MAC addresses out of the CAM table and their traffic will be flooded to all ports. | {
"source": [
"https://serverfault.com/questions/508074",
"https://serverfault.com",
"https://serverfault.com/users/119616/"
]
} |
508,691 | My network is completely locked down except for a few sites which are whitelisted. This is all done through iptables, which looks something like this: # Allow traffic to google.com
iptables -A zone_lan_forward -p tcp -d 1.2.3.0/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d 1.2.3.0/24 -j ACCEPT
iptables -A zone_lan_forward -p tcp -d 11.12.13.0/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d 11.12.13.0/24 -j ACCEPT
iptables -A zone_lan_forward -p tcp -d 101.102.103.0/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d 101.102.103.0/24 -j ACCEPT
... Obviously those addresses are hypothetical, but you get the idea. My firewall is becoming enormous. It would be much easier to maintain if I could just do this: # Allow traffic to google.com
iptables -A zone_lan_forward -p tcp -d google.com -j ACCEPT
iptables -A zone_lan_forward -p udp -d google.com -j ACCEPT I believe this is possible, since man iptables says: Address can be either a network name, a hostname (please note that specifying any name to be resolved with a remote query such as DNS is a really bad idea), a network IP address (with /mask), or a plain IP address. But what I'm concerned about is the part that says "specifying any name to be resolved with... DNS is a really bad idea". Why is this a bad idea? Does it just slow everything down? If I really shouldn't use hostnames in iptables rules, then what should I do to simplify my firewall? | DNS Names are resolved when the rules are added, not, when packets are checked. This violates the expectations most people have. The rule does not get updated to reflect changed DNS results. It is resolved when added and that is it. You will need to either periodically reload rules, or some sites may break. There is a bit of a security issue in that you are basically delegating control of your firewall rules to an external entity. What if your parent DNS server is compromised and returns false data. If your purpose is to block HTTP access, then you are usually far better of setting up a piece of software designed to filter at that level (e.g. squid+squidquard). | {
"source": [
"https://serverfault.com/questions/508691",
"https://serverfault.com",
"https://serverfault.com/users/174220/"
]
} |
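A common workaround for the caveats above is to re-resolve the whitelisted names on a schedule and rebuild a dedicated chain, rather than baking DNS names into permanent rules. A rough, cron-able sketch, where WHITELIST_FWD is a hypothetical chain referenced from zone_lan_forward and dig is assumed to be installed:

```bash
#!/bin/bash
# Rebuild a whitelist chain from freshly resolved addresses.
iptables -F WHITELIST_FWD 2>/dev/null || iptables -N WHITELIST_FWD
for host in google.com example.org; do
    for ip in $(dig +short A "$host" | grep -E '^[0-9.]+$'); do
        iptables -A WHITELIST_FWD -d "$ip" -j ACCEPT
    done
done
```

This still inherits the trust-in-DNS issue the answer mentions, so an HTTP-level filter such as Squid remains the cleaner solution for web whitelisting.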
508,698 | I try to migrate emails from an ice warp server to zimbra and I use imapsync for it. With the following command I get the error message --host1 option must be used, run /usr/bin/imapsync --help for help the command what I try to run imapsync \ --buffersize 8192000 --nosyncacls --subscribe --syncinternaldates \ --host1 myip1--user1 myuser --password1 mypassword --ssl1 --port1 993 \ --host2 myip --user2 myuser --password2 mypassword --ssl1 --port2 993 | DNS Names are resolved when the rules are added, not, when packets are checked. This violates the expectations most people have. The rule does not get updated to reflect changed DNS results. It is resolved when added and that is it. You will need to either periodically reload rules, or some sites may break. There is a bit of a security issue in that you are basically delegating control of your firewall rules to an external entity. What if your parent DNS server is compromised and returns false data. If your purpose is to block HTTP access, then you are usually far better of setting up a piece of software designed to filter at that level (e.g. squid+squidquard). | {
"source": [
"https://serverfault.com/questions/508698",
"https://serverfault.com",
"https://serverfault.com/users/147217/"
]
} |
508,700 | I have a number of iptables rules on my firewall that look like this: iptables -A zone_lan_forward -p tcp -d 1.2.3.0/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d 1.2.3.0/24 -j ACCEPT Is there a shortcut for having two rules - one for tcp and one for udp - for every address? I mean can I do something like this: iptables -A zone_lan_forward -p tcp,udp -d 1.2.3.0/24 -j ACCEPT | Create a new chain which will accept any TCP and UDP packets, and jump to that chain from the individual IP/port permissive rules: iptables -N ACCEPT_TCP_UDP
iptables -A ACCEPT_TCP_UDP -p tcp -j ACCEPT
iptables -A ACCEPT_TCP_UDP -p udp -j ACCEPT
iptables -A zone_lan_forward -d 1.2.3.0/24 -j ACCEPT_TCP_UDP This adds the overhead of a few extra lines, but halves the number of TCP / UDP rules. I would not omit the -p argument, because you're not only opening up the firewall for ICMP, but also any other protocol. From the iptables man page on -p : The specified protocol can be one of tcp, udp, icmp, or all, or it can
be a numeric value, representing one of these protocols or a different
one. A protocol name from /etc/protocols is also allowed. You may not be listening on any protocols except for TCP, UDP, and ICMP right now , but who knows what the future may hold. It would be bad practice to leave the firewall open unnecessarily. Disclaimer: The iptables commands are off the top of my head; I don't have access to a box on which to test them ATM. | {
"source": [
"https://serverfault.com/questions/508700",
"https://serverfault.com",
"https://serverfault.com/users/174220/"
]
} |
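To sanity-check the chain once loaded, listing it should show the two protocol rules and their hit counters; a quick sketch:

```bash
iptables -L ACCEPT_TCP_UDP -n -v                      # the TCP and UDP ACCEPT rules
iptables -S zone_lan_forward | grep ACCEPT_TCP_UDP    # the jump from the forward chain
```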
509,468 | I have a 400GB disk with a 320GB ext4 partition.
I would like to grow the ext4 partition to use the left space (80GB of free space). +--------------------------------+--------+
| ext4 | Free |
+--------------------------------+--------+ How could I do this? I've seen people using resize2fs but I don't understand if it resizes the partition. Another solution would be to use fdisk but I don't want to delete my partition and lose data. How could I simply grow the partition without losing any files? Note: I'm talking about an unmounted data partition without LVM and I have backups, but I'd like to avoid spending time on recovery. | You must begin with the partition unmounted. If you can't unmount it (e.g. it's your root partition or something else the system needs to run), use something like System Rescue CD instead. Run parted, or gparted if you prefer a GUI, and resize the partition to use the extra space. I prefer gparted as it gives you a nice graphical representation, very similar to the one you've drawn in your question. resize2fs /dev/whatever e2fsck /dev/whatever (just to find out whether you are on the safe side) Remount your partition. While I've never seen this fail, do back up your data first! | {
"source": [
"https://serverfault.com/questions/509468",
"https://serverfault.com",
"https://serverfault.com/users/157498/"
]
} |
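A condensed sketch of that sequence for an unmounted data partition, assuming the disk is /dev/sdb and the partition is /dev/sdb1 (substitute your own device names; resizepart needs a reasonably recent parted, otherwise gparted from a rescue environment is the simpler route):

```bash
umount /dev/sdb1                      # must not be mounted (or work from a rescue CD)
parted /dev/sdb resizepart 1 100%     # grow partition 1 to the end of the disk
e2fsck -f /dev/sdb1                   # check the filesystem before resizing
resize2fs /dev/sdb1                   # grow ext4 to fill the enlarged partition
mount /dev/sdb1 /mnt/data             # remount; mount point is illustrative
```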
509,941 | I've been reading the iptables man-page (light bedtime reading) and I came across the 'TTL' target, but it warns: Setting or incrementing the TTL field can potentially be very dangerous and Don't ever set or increment the value on packets that leave your local network! I can see how perhaps decrementing or setting the TTL lower could cause packets to be dropped before reaching the destination, but what effect could incrementing have? | The TTL gets decremented when the packet passes through a router. This makes sure that if the packet is traveling around in circles it will eventually die. The TTL field of an IPv4 packet is an 8-bit field (maximum 255 decimal). So setting it high at the start isn't a big deal, since it can't actually be that large in a well-formed packet (although some things might accept malformed IP packets). However, if something increments it, and the incrementation step is part of the loop, the packet could keep going in circles without ever reaching zero. Over time (could be very short, or a gradual leak), packets could build up in the system containing that loop, causing it to overload. | {
"source": [
"https://serverfault.com/questions/509941",
"https://serverfault.com",
"https://serverfault.com/users/148824/"
]
} |
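For reference, the TTL target the question refers to is only valid in the mangle table; a hedged sketch of its three forms (the increment form is the one the warning is about, for exactly the looping reason described above):

```bash
# Use with care, and never increment TTL on packets leaving your network.
iptables -t mangle -A PREROUTING -i eth0 -j TTL --ttl-set 64   # set to a fixed value
iptables -t mangle -A PREROUTING -i eth0 -j TTL --ttl-dec 1    # decrement by 1
iptables -t mangle -A PREROUTING -i eth0 -j TTL --ttl-inc 1    # increment by 1 (risky)
```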
510,100 | I just installed a fresh new Windows 2012 x64 virtual machine. The very first thing I did was install Google Chrome. When I went to the Dashboard, I get the following error (or it's made to look like an error). So, does anyone know what this error is all about? How can I 'fix' this? Thank you kindly. EDIT: Please don't suggest I uninstall G-Chrome as a fix. | Going through the Google forums, it seems like Google doesn't have any intention of "fixing" this behavior. The issue seems to be that the service starts and then stops quickly, making Windows think it failed, though in reality it checks for updates (and maybe other "stuff") and exits with no errors. Since this is normal behavior, what I did was remove the gupdate service from the Services drop-down in the Services Detail View. This makes Server Manager exclude it. | {
"source": [
"https://serverfault.com/questions/510100",
"https://serverfault.com",
"https://serverfault.com/users/58/"
]
} |
510,278 | I'm using the below command to transfer files across servers: scp -rc blowfish /source/directory/* [email protected]:/destination/directory Is there a way to transfer only modified files, just like the update ( -u ) option of cp ? | rsync is your friend. rsync -ru /source/directory/* [email protected]:/destination/directory If you want it to delete files at the destination that no longer exist at the source, add the --delete option. | {
"source": [
"https://serverfault.com/questions/510278",
"https://serverfault.com",
"https://serverfault.com/users/150250/"
]
} |
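A few commonly combined options, sketched against the same paths as the answer; the dry run (-n) is worth doing first, and a trailing slash on the source copies the directory's contents much like the /* in the question:

```bash
# Preview what would be transferred or deleted, without changing anything
rsync -ruvn --delete /source/directory/ [email protected]:/destination/directory/
# Actual run: recursive, update-only, verbose, compressed, mirroring deletions
rsync -ruvz --delete /source/directory/ [email protected]:/destination/directory/
```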
510,442 | Please let me know if my question does not makes any sense as I am not sure if I am interpreting it correctly from my thoughts due to my lack of technical knowledge on this. If I am using a motherboard which has a connection for a SFF-8087 to 4x cable such as this SFF-8087 to 4x SATA connection. I am still learning about SAS but was told to build a system from a potential employer utilizing these connections. However, I am just not sure I understand the concept on how the system will treat the SATA connections which are going into the SAS port via this cable. Also what would be the advantage of doing it this way as opposed to just connecting the SATA drives directly into the SATA motherboard ports? I believe the built-in SAS connection may be an integrated RAID controller. Although, yes I can just go ahead and connect all the cables that fit I would like to have a better grasp of what I am doing such as: If a motherboard has SAS connections, should I automatically assume it has some type of RAID controller built-in or is this on a case by case basis? Do All RAID controllers have only SAS connections? Even though the SATA drives are connected via a SAS connection, are they still just treated as SATA drives or as SAS technology? | A few items to help clarify SAS technology ... SATA drives can connect to SAS ports. SAS drives cannot connect to SATA ports. Server-class hardware typically uses an embedded RAID controller or a separate RAID controller PCIe device. Most RAID controllers and SAS HBAs will use SAS connections (multilane or 4-lane SAS ports). Internally, these systems will use one of the internal SAS transports ( SFF-8087 or SFF-8484 ) for cabling. 4-lane SAS cables carry FOUR SAS links over the same cable. Enterprise servers will typically have a SAS backplane for hot-swap hard drives. These backplanes can accommodate SAS and SATA disks. The backplanes also provide power to the drives. It doesn't make sense to run SATA cables to hot-swappable hard drives. Instead, the internal SAS cables will link the controller and the backplane. You can mix and match SATA and SAS on the same backplane, but because the protocols are different, bad things. can happen. Internal SAS 4-lane cabling to a backplane Internal SAS breakout cabling to a backplane | {
"source": [
"https://serverfault.com/questions/510442",
"https://serverfault.com",
"https://serverfault.com/users/87445/"
]
} |
511,206 | I have some trouble with Nginx and Jenkins (Hudson). I am trying to use Nginx as a reverse proxy for the Jenkins instance with HTTP Basic Authentication. It works so far, but I have no idea how to pass the header with the authentication username. location / {
auth_basic "Restricted";
auth_basic_user_file /usr/share/nginx/.htpasswd;
sendfile off;
proxy_pass http://192.168.178.102:8080;
proxy_redirect default;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-User $http_authorization;
proxy_max_temp_file_size 0;
#this is the maximum upload size
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
} | Try adding these directives to your location block: proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization; | {
"source": [
"https://serverfault.com/questions/511206",
"https://serverfault.com",
"https://serverfault.com/users/28736/"
]
} |
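A quick way to confirm the header actually reaches the backend is to send an authenticated request through nginx and check what Jenkins sees; a sketch with curl (host name and credentials are placeholders, and /whoAmI/ is Jenkins' built-in identity page):

```bash
# -v shows the request headers being sent; Jenkins should then report the user
curl -v -u myuser:mypassword http://jenkins.example.com/whoAmI/
```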
511,209 | I'm in the process of configuring an Amazon EC2 machine for Java and Django development. The stack we've chosen is nginx as the top-level listener on port 80, passing the requests to the respective applications, through :8080 for Tomcat and through a .sock for uWSGI. I have nginx configured in the following way: server {
listen 80;
server_name test.myproject.com;
rewrite ^/(.*)$ /myproject/$1;
location / {
proxy_pass http://127.0.0.1:8080/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 80;
server_name test.myotherproject.com;
rewrite ^/(.*)$ /myotherproject/$1;
location / {
proxy_pass http://127.0.0.1:8080/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server {
listen 80;
server_name web1.myhost.com.br;
location /static/ {
root /home/ubuntu/projects/hellow/;
}
location / {
uwsgi_pass unix:/var/run/uwsgi.sock;
include uwsgi_params;
uwsgi_param SCRIPT_NAME '';
}
} The first 'server' entry redirect to a tomcat webapp .war and its working flawlessly. the third is redirecting to the uWSGI sock and its also working fine. the second one is the problem. it is configured in the same way as the first, but when I access http://test.myotherproject.com my browser gives me a 404 with the following message: HTTP Status 404 - /myotherproject/myotherproject/user/redirectPermission
type Status report
message /myotherproject/myotherproject/user/redirectPermission
description The requested resource is not available.
Apache Tomcat/7.0.33 /user/redirectPermission is a valid page, and it is the expected behaviour of the application to go there on the first access, to get the user authenticated. The problem is the second /myotherproject/ part that gets appended, and I can't figure out why or where this is happening.
Note that the URL that gives me the 404 above is: http://test.myotherproject.com.br/myotherproject/user/redirectPermission .
Also, Tomcat has no extra configuration in the server.xml file other than changing its port to 8080, and the war is in webapps and is named myotherproject.war (tomcat7.x/webapps/myotherproject.war).
Does anyone know how to configure this setup properly?
Thanks in Advance,
Salvia | Try adding these directives to your location block: proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization; | {
"source": [
"https://serverfault.com/questions/511209",
"https://serverfault.com",
"https://serverfault.com/users/175464/"
]
} |
511,609 | Debian and derivatives (Ubuntu) don't use the php session garbage collector session.gc_probability = 0 instead they use a cron /etc/cron.d/php5 09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete Why Debian has chosen to do this? | Because Debian sets very stringent permissions on /var/lib/php5 (1733, owner root, group root) to prevent PHP session hijacking. Unfortunately, this also prevents the native PHP session garbage collector from working, because it can't see the session files there. The cron job runs as root, which does have sufficient access to see and clean up the session files. Edit : Supporting documentation: The behavior was established in response to bug #267720 . (There used to be comments in the stock php.ini file about this, but I don't see them there now in my wheezy-based PHP install.) | {
"source": [
"https://serverfault.com/questions/511609",
"https://serverfault.com",
"https://serverfault.com/users/109742/"
]
} |
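Both halves of this can be checked quickly on a Debian/Ubuntu box; a small sketch:

```bash
ls -ld /var/lib/php5           # expect mode 1733 (drwx-wx-wt), owned by root:root
cat /etc/cron.d/php5           # the find-based cleanup job quoted above
/usr/lib/php5/maxlifetime      # prints the effective session lifetime in minutes
```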
511,738 | I was following this tut on how to set up an EC2 instance on Ubuntu, but when trying to execute an ssh command against my IP address I had an operation timeout. So I tried to ping it, but had no luck either and
got a request timeout. Any idea what to do to make it work? Status is green on my dashboard. Thanks! | AWS security groups block ICMP (including ping, traceroute, etc.) by default. You need to explicitly enable it. | {
"source": [
"https://serverfault.com/questions/511738",
"https://serverfault.com",
"https://serverfault.com/users/116336/"
]
} |
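With the AWS CLI, allowing ping could look like the sketch below; the group ID is a placeholder, and the same rule can be added in the console under the security group's inbound rules:

```bash
# Allow all ICMP from anywhere -- tighten the CIDR to your own ranges where possible
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol icmp --port -1 \
    --cidr 0.0.0.0/0
```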
511,789 | I get the following error in my log files every time I try to upload a large file. a client request body is buffered to a temporary file /var/lib/nginx/body/0000000001 Although the file uploads successfully, I always get the above error. I increased the client_body_buffer_size to 1000m which is what I expect the largest file uploaded to be. However, this is was just a guess and although I don't get that error anymore I am wondering if this is an appropriate value to set for the client_body_buffer_size ? I would appreciate it if anyone can shed some light on this directive and how it should be used. | This is a warning, not an error. That's why it was prefaced with [warn] in the log. It means that the size of the uploaded file was larger than the in-memory buffer reserved for uploads. The directive client_body_buffer_size controls the size of that buffer. If you can afford to have 1GB of RAM always reserved for the occasional file upload, then that's fine. It's a performance optimization to buffer the upload in RAM rather than in a temporary file on disk, though with such large uploads a couple of extra seconds probably doesn't matter much. If most of your uploads are small, then it's probably a waste. In the end, only you can really make the decision as to what the appropriate size is. | {
"source": [
"https://serverfault.com/questions/511789",
"https://serverfault.com",
"https://serverfault.com/users/3411/"
]
} |
511,812 | I'm trying to install a certificate for my internal certificate server on a series of CentOS systems, and I'm finding the documentation on this to be almost non existent. My end goal is to be able to use git , curl , and others against internal secure servers without errors. On Ubuntu it's simple enough, you throw the certificate in a folder and run a command to generate a series of links to add the CA cert to the certification path. I can not for the life of me find out how to do this on CentOS.. plenty of information is available on trusting random certificates. (To wit: create a symlink in /etc/pki/tls/certs to the PEM encoded cert file, named with the hash of the certificate. Didn't work for my CA, since the aforementioned apps still can't verify a certificate signed by the CA). How do you install a new root CA on a CentOS system? | As of CentOS 6+, there is a tool for this. Per this guide , certificates can be installed first by enabling the system shared CA store: update-ca-trust enable Then placing the certificates to trust as CA's in /etc/pki/ca-trust/source/anchors/ for high priority (non-overridable), or /usr/share/pki/ca-trust-source/ (lower priority, overridable), and finally updating the system store with: update-ca-trust extract Et voila, system tools will now trust those certificates when making secure connections! | {
"source": [
"https://serverfault.com/questions/511812",
"https://serverfault.com",
"https://serverfault.com/users/129773/"
]
} |
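After the extract step, a quick sketch for checking that the CA actually landed in the shared store and that tools honour it (the host name is a placeholder):

```bash
# The consolidated bundle should now include the internal CA
grep -c 'BEGIN CERTIFICATE' /etc/pki/tls/certs/ca-bundle.crt
# curl reads the system bundle, so this should succeed without --insecure
curl -I https://git.internal.example.com/
```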
511,813 | named.conf.local (included in named.conf) zone "foo.com" {
type master;
file "/var/lib/bind/foo.com.hosts";
}; DNS zone $ttl 600
foo.com. IN SOA server.hostname. mail.server.hostname. (
1369844282
600
600
600
600 )
foo.com. IN NS server.hostname.
fake A 99.99.99.99 dig test dig fake.foo.com +trace
[...]
foo.com. 600 IN SOA server.hostname. mail.server.hostname. 1369844282 600 600 600 600 Why is fake not resolved? What am I missing? Some more details: DNS for this server is managed from a domain panel at a hosting provider. On that DNS panel I've set a subdomain as an NS record pointing to the server. Hosting DNS Panel Records @ A 99.99.99.99
www A 99.99.99.99
ftp A 99.99.99.99
beta A 99.99.99.99
_domainkey NS 99.99.99.99 So, when i talk about the fake record, i mean that dig can't resolve fake._domainkey.foo.com because, as already said, answer me with the SOA and not with the record. | As of CentOS 6+, there is a tool for this. Per this guide , certificates can be installed first by enabling the system shared CA store: update-ca-trust enable Then placing the certificates to trust as CA's in /etc/pki/ca-trust/source/anchors/ for high priority (non-overridable), or /usr/share/pki/ca-trust-source/ (lower priority, overridable), and finally updating the system store with: update-ca-trust extract Et voila, system tools will now trust those certificates when making secure connections! | {
"source": [
"https://serverfault.com/questions/511813",
"https://serverfault.com",
"https://serverfault.com/users/116912/"
]
} |
511,818 | Recently ran into a very strange problem. Several applications were having issues communication through our F5 Load-Balancer. When we looked into it we found that the router had an incorrect ARP and MAC-ADDRESS table entry on the Load-Balancer VLAN. Those entries were pointing towards a Windows Server 2008 R2 box instead of the Load-Balancers external interface. Now here is the strange thing. The hardware address in the MAC/ARP table entries did not exist on the Windows 2008 Server but it was very close. The Windows Server was on router interface Gi1/37 (below). The Load-Balancer External Address was 192.168.111.61 and the Windows Server was 192.168.111.125. Two totally different IP addresses in the same /24 subnet. IPConfig on Windows Server Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Intel(R) 82574L Gigabit Network Connect
Physical Address. . . . . . . . . : 00-E0-81-DF-15-FE
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::917f:6781:df6:f724%11(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.111.125(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : fe80::21e:f7ff:fe41:2a80%11
fe80::21e:f7ff:fe41:3540%11
192.168.111.1 MAC Info on Windows Box C:\Users\Administrator>getmac
Physical Address Transport Name
=================== =========================================================
00-E0-81-DF-15-FE \Device\Tcpip_{5BB4FA88-7056-4303-8528-AA2293E4821B}
00-E0-81-DF-15-FD Media disconnected The ARP and MAC ADDRESS entry in the Router Router#sh ip arp 192.168.111.61
Protocol Address Age (min) Hardware Addr Type Interface
Internet 192.168.111.61 1 00e0.81df.15fc ARPA Vlan50
Router#sh mac-address-table addr 00e0.81df.15fc
Legend: * - primary entry
age - seconds since last seen
n/a - not available
vlan mac address type learn age ports
------+----------------+--------+-----+----------+--------------------------
Module 1[FE 1]:
* 50 00e0.81df.15fc dynamic Yes 275 Gi1/37 The last 4 bits on the hardware address although similar were not existing physical hardware addresses on the Windows 2008 Server. Logic dictates that the Windows Server had to have performed some sort of incorrect gratuitous ARP in order to poison the ARP and MAC table on the router. Or it was responding to an ARP request for an IP that it didn't own and a MAC ADDRESS that it didn't own. The second we shut down the Windows 2008 interface and cleared the ARP/MAC tables the problem was solved. For the life of me i am unable to understand how this happened (or why). | As of CentOS 6+, there is a tool for this. Per this guide , certificates can be installed first by enabling the system shared CA store: update-ca-trust enable Then placing the certificates to trust as CA's in /etc/pki/ca-trust/source/anchors/ for high priority (non-overridable), or /usr/share/pki/ca-trust-source/ (lower priority, overridable), and finally updating the system store with: update-ca-trust extract Et voila, system tools will now trust those certificates when making secure connections! | {
"source": [
"https://serverfault.com/questions/511818",
"https://serverfault.com",
"https://serverfault.com/users/83392/"
]
} |
512,231 | On my new test server, which is a Windows Server 2012 core server, I've closed the only open cmd console with the exit command. How do I open another prompt now? Am I going to be forced to reboot the machine? | From the Technet article titled Manage a Server Core Server : If you close all command prompt windows and want to open a new Command
Prompt window, press CTRL+ALT+DELETE, click Start Task Manager, click
More Details, click File, click Run, and then type cmd.exe.
Alternatively, you can log off and log back on. | {
"source": [
"https://serverfault.com/questions/512231",
"https://serverfault.com",
"https://serverfault.com/users/171115/"
]
} |
512,240 | I have an application which is writing to syslog. The messages written to the syslog are for various buckets which need to be filtered out. Every message starts with a bucket number, so the messages are written as: 1: Message for bucket 1
14: Message for bucket 14
123: Message for bucket 123 I want to filter these messages based on the bucket number, which I suppose can be done with a regex. These buckets are numeric and can be in the range 1-999. The output for these buckets should go different files, one for each bucket. For the above example, it should be: /var/log/myapp/1.log
/var/log/myapp/14.log
/var/log/myapp/123.log Can someone help me with how this can be done with rsyslog? | From the Technet article titled Manage a Server Core Server : If you close all command prompt windows and want to open a new Command
Prompt window, press CTRL+ALT+DELETE, click Start Task Manager, click
More Details, click File, click Run, and then type cmd.exe.
Alternatively, you can log off and log back on. | {
"source": [
"https://serverfault.com/questions/512240",
"https://serverfault.com",
"https://serverfault.com/users/119868/"
]
} |
512,333 | CentOS 5.9 For testing purposes, I want my CentOS server to listen on a secondary virtual IP (eth0:0). I'm familiar with nc -l -p <port> but it only listens on the primary. Is there a way I can specify a specific IP for the listener to use? If not, is there another "stock" utility in CentOS 5.9 that can do this? | The syntax depends on the netcat package. netcat-openbsd nc -l 192.168.2.1 3000 netcat-traditional nc -l -p 3000 -s 192.168.2.1 A simple way (at least in bash) for telling them apart in scripts is: if ldd $(type -P nc) | grep -q libbsd; then
nc -l 192.168.2.1 3000
else
nc -l -p 3000 -s 192.168.2.1
fi | {
"source": [
"https://serverfault.com/questions/512333",
"https://serverfault.com",
"https://serverfault.com/users/21875/"
]
} |
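Whichever variant is installed, the bind address can be confirmed from another terminal; a small sketch:

```bash
netstat -ltn | grep ':3000'     # listener should be bound to 192.168.2.1, not 0.0.0.0
nc 192.168.2.1 3000             # connect from a host that can reach the secondary IP
```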
513,381 | For a company with modest virtualization needs - VirtualBox is currently doing fine at hosting a few light servers - what would some of the benefits be of moving to a more robust platform? I'm hoping to shortcut my research a bit - to get a short list of the features enterprise-level virtualization has that VBox and its ilk don't. | The main reasons you'll want to pursue an enterprise-level virtualization solution are mindshare, support, manageability, and feature-set. Mindshare is important because virtualization is an investment in a technology, an investment that requires platform longevity. Nobody wants to be the one who picked the wrong tech solution. So the major players in the space (VMware, Microsoft, Citrix, KVM) all have some momentum behind them. This affects third-party applications and plugins; think of SAN-integration or backup software. More mature virtualization suites have APIs that are leveraged by other products. It's natural that more solutions would be developed for more popular platforms. Support is linked to mindshare. I'm constantly battling bugs and obscure problems with my Citrix Xenserver/Cloudstack solution. Due to mindshare and general knowledge of the solution being an order of magnitude smaller than something like Hyper-V or VMware, I have to rely heavily on Citrix support, bugfixes and trial-and-error to fix problems. Other solutions would have more community forums and of course, more people who've vetted the technology. Manageability and feature-set are key as well. Hypervisors today all provide similar raw capabilities: the ability to host multiple guest virtual machines and different operating systems on physical hardware nodes. It's how well they're packaged together and can be managed that shapes perception of the overall solution. Automation, monitoring, reporting, an ability to troubleshoot performance issues, and ease of installation are some important attributes. Also, any enterprise solution will have some ability to migrate virtual machine guests live between hosts and/or storage. | {
"source": [
"https://serverfault.com/questions/513381",
"https://serverfault.com",
"https://serverfault.com/users/6177/"
]
} |
513,388 | We are doing daily a Windows Server 2008 R2 Backup on a shared folder of a dedicated machine scheduled in the Windows Task Scheduler. To access a shared folder (a Win7 machine) as backup destination we are forced to use local domain admin account for the scheduled task. The problem is that the local domain admin account password changes periodically and the backup stops working till they we update the password on the scheduler. What is the most straightforward way to avoid this? Thanks. | The main reasons you'll want to pursue an enterprise-level virtualization solution are mindshare, support, manageability, and feature-set. Mindshare is important because virtualization is an investment in a technology, an investment that requires platform longevity. Nobody wants to be the one who picked the wrong tech solution. So the major players in the space (VMware, Microsoft, Citrix, KVM) all have some momentum behind them. This affects third-party applications and plugins; think of SAN-integration or backup software. More mature virtualization suites have APIs that are leveraged by other products. It's natural that more solutions would be developed for more popular platforms. Support is linked to mindshare. I'm constantly battling bugs and obscure problems with my Citrix Xenserver/Cloudstack solution. Due to mindshare and general knowledge of the solution being an order of magnitude smaller than something like Hyper-V or VMware, I have to rely heavily on Citrix support, bugfixes and trial-and-error to fix problems. Other solutions would have more community forums and of course, more people who've vetted the technology. Manageability and feature-set are key as well. Hypervisors today all provide similar raw capabilities: the ability to host multiple guest virtual machines and different operating systems on physical hardware nodes. It's how well they're packaged together and can be managed that shapes perception of the overall solution. Automation, monitoring, reporting, an ability to troubleshoot performance issues, and ease of installation are some important attributes. Also, any enterprise solution will have some ability to migrate virtual machine guests live between hosts and/or storage. | {
"source": [
"https://serverfault.com/questions/513388",
"https://serverfault.com",
"https://serverfault.com/users/38557/"
]
} |
513,395 | I've got a Windows 2008 Web Server and have just installed Smartermail Mail Server Software. The server will send messages but the replies bounce back. I have attached my dns entries Ports 25 and 110 are open on the server. Any ideas where to start next. Thanks John | The main reasons you'll want to pursue an enterprise-level virtualization solution are mindshare, support, manageability, and feature-set. Mindshare is important because virtualization is an investment in a technology, an investment that requires platform longevity. Nobody wants to be the one who picked the wrong tech solution. So the major players in the space (VMware, Microsoft, Citrix, KVM) all have some momentum behind them. This affects third-party applications and plugins; think of SAN-integration or backup software. More mature virtualization suites have APIs that are leveraged by other products. It's natural that more solutions would be developed for more popular platforms. Support is linked to mindshare. I'm constantly battling bugs and obscure problems with my Citrix Xenserver/Cloudstack solution. Due to mindshare and general knowledge of the solution being an order of magnitude smaller than something like Hyper-V or VMware, I have to rely heavily on Citrix support, bugfixes and trial-and-error to fix problems. Other solutions would have more community forums and of course, more people who've vetted the technology. Manageability and feature-set are key as well. Hypervisors today all provide similar raw capabilities: the ability to host multiple guest virtual machines and different operating systems on physical hardware nodes. It's how well they're packaged together and can be managed that shapes perception of the overall solution. Automation, monitoring, reporting, an ability to troubleshoot performance issues, and ease of installation are some important attributes. Also, any enterprise solution will have some ability to migrate virtual machine guests live between hosts and/or storage. | {
"source": [
"https://serverfault.com/questions/513395",
"https://serverfault.com",
"https://serverfault.com/users/176613/"
]
} |
513,397 | I'm working at a project with around 7 computers (regular, mid-end PCs) connected to an Ethernet LAN, running Linux and used to run molecular simulation software like PyMOL. There are several users, each of which has his/her $HOME folder. The amount of data stored on those directories is very large, so since PC has an average of two HDD of 1TB each, and only the second one is being used to store $HOMEs, every computer hosts a couple of those folders and is, at the same time, an NFS server and client: A client when a user logs in (all boxes have equal /etc/passwd and /etc/shadow files) and his/her $HOME is not hosted in that computer, then it'll be on other on, mounted via NFS. A server to export the $HOMEs it hosts. When I started working as a (very unexperienced) IT admin at this project, about two weeks ago, I thought that this is wrong, and that the right thing to do is to centralize the storage, in a NAS-way. But we (the project) cannot afford a dedicated NAS device, though we will purchase a few more computers soon. Since, while working, the R/W amount is not that intensive, and the number of PCs on the lab isn't expected do scale, I was wondering if putting all the HDDs in one machine (Core2Quad, or similar) and using it only as NFS file-server is a plausible option. Is it? (First doubt that crossed my mind is that a standard motherboard doesn't have seven or eight SATA plugs..) Thank you | The main reasons you'll want to pursue an enterprise-level virtualization solution are mindshare, support, manageability, and feature-set. Mindshare is important because virtualization is an investment in a technology, an investment that requires platform longevity. Nobody wants to be the one who picked the wrong tech solution. So the major players in the space (VMware, Microsoft, Citrix, KVM) all have some momentum behind them. This affects third-party applications and plugins; think of SAN-integration or backup software. More mature virtualization suites have APIs that are leveraged by other products. It's natural that more solutions would be developed for more popular platforms. Support is linked to mindshare. I'm constantly battling bugs and obscure problems with my Citrix Xenserver/Cloudstack solution. Due to mindshare and general knowledge of the solution being an order of magnitude smaller than something like Hyper-V or VMware, I have to rely heavily on Citrix support, bugfixes and trial-and-error to fix problems. Other solutions would have more community forums and of course, more people who've vetted the technology. Manageability and feature-set are key as well. Hypervisors today all provide similar raw capabilities: the ability to host multiple guest virtual machines and different operating systems on physical hardware nodes. It's how well they're packaged together and can be managed that shapes perception of the overall solution. Automation, monitoring, reporting, an ability to troubleshoot performance issues, and ease of installation are some important attributes. Also, any enterprise solution will have some ability to migrate virtual machine guests live between hosts and/or storage. | {
"source": [
"https://serverfault.com/questions/513397",
"https://serverfault.com",
"https://serverfault.com/users/168946/"
]
} |
513,399 | I've had a working Apache2 with SVN running for a long time. For some other reason I had to do a system upgrade: apt-get upgrade Everything seemed to go OK, except my Apache2 configuration. Now it won't start with DAV: SVN . I noticed that mod_dav_svn and mod_authz_svn were suddenly missing. So I tried to install them: root@kolky:/etc/apache2# apt-get install libapache2-svn
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
libapache2-svn : Depends: apache2.2-common but it is not going to be installed
E: Unable to correct problems, you have held broken packages. I can understand this is not working as my apache version is: root@kolky:/etc/apache2# apache2 -v
Server version: Apache/2.4.4 (Debian)
Server built: May 31 2013 10:04:32
root@kolky:/etc/apache2# svn --version
svn, version 1.7.9 (r1462340)
root@kolky:/etc/apache2# svnadmin --version
svnadmin, version 1.7.9 (r1462340)
root@kolky:/etc/apache2# cat /etc/issue
Debian GNU/Linux jessie/sid \n \l
root@kolky:/etc/apache2# uname -r
2.6.32-5-amd64 Is there a solution to this? Can I run Apache2.4.4 with mod_dav_svn somehow? Or will I have to downgrade my Apache? | The main reasons you'll want to pursue an enterprise-level virtualization solution are mindshare, support, manageability, and feature-set. Mindshare is important because virtualization is an investment in a technology, an investment that requires platform longevity. Nobody wants to be the one who picked the wrong tech solution. So the major players in the space (VMware, Microsoft, Citrix, KVM) all have some momentum behind them. This affects third-party applications and plugins; think of SAN-integration or backup software. More mature virtualization suites have APIs that are leveraged by other products. It's natural that more solutions would be developed for more popular platforms. Support is linked to mindshare. I'm constantly battling bugs and obscure problems with my Citrix Xenserver/Cloudstack solution. Due to mindshare and general knowledge of the solution being an order of magnitude smaller than something like Hyper-V or VMware, I have to rely heavily on Citrix support, bugfixes and trial-and-error to fix problems. Other solutions would have more community forums and of course, more people who've vetted the technology. Manageability and feature-set are key as well. Hypervisors today all provide similar raw capabilities: the ability to host multiple guest virtual machines and different operating systems on physical hardware nodes. It's how well they're packaged together and can be managed that shapes perception of the overall solution. Automation, monitoring, reporting, an ability to troubleshoot performance issues, and ease of installation are some important attributes. Also, any enterprise solution will have some ability to migrate virtual machine guests live between hosts and/or storage. | {
"source": [
"https://serverfault.com/questions/513399",
"https://serverfault.com",
"https://serverfault.com/users/174270/"
]
} |
513,942 | Are there any convenient public, globally routable test addresses for IPv6? Similar to how 8.8.8.8 and 8.8.4.4 tend to get used this way for IPv4? | The shortest I've seen is www.sprint.net ( 2600:: ). | {
"source": [
"https://serverfault.com/questions/513942",
"https://serverfault.com",
"https://serverfault.com/users/2101/"
]
} |
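A quick reachability sketch using that address, alongside Google's public IPv6 resolver, which is another commonly used memorable test target:

```bash
ping6 -c 4 2600::                    # www.sprint.net
ping6 -c 4 2001:4860:4860::8888      # Google Public DNS over IPv6
traceroute6 2600::
```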
513,961 | I have an Ubuntu 12.04.2 LTS server running Apache 2.2.22 with mod_ssl and OpenSSL v1.0.1. In my vhosts config (everything else within which behaves as I would expect), I have the SSLProtocol line with -all +SSLv3 . With that configuration, TLS 1.1 & 1.2 are enabled and work correctly - which is counter-intuitive to me, as I would expect that only SSLv3 would be enabled given that configuration. I can enable/disable TLSv1 just fine with -/+TSLv1 , and it works as expected. But +/-TLSv1.1 and +/-TLSv1.2 are not valid configuration options - so I can't disable them that way. As for why I'd want to do this - I'm dealing with a third party application (which I have no control over) that has some buggy behavior with TLS enabled servers, and I need to completely disable it to move forward. | Intrigued by this bug (and yes, I've been able to reproduce it) I've taken a look at the source code for the latest stable version of mod_ssl and found an explanation. Bear with me, this is gonna get amateur-stack-overflowish: When the SSLProtocol has been parsed, it results in a char looking something like this: 0 1 0 0
^ ^ ^ ^
| | | SSLv1
| | SSLv2
| SSLv3
TLSv1 Upon initiating a new server context, ALL available protocols will be enabled, and the above char is inspected using some nifty bitwise AND operations to determine what protocols should be disabled. In this case, where SSLv3 is the only protocol to have been explicitly enabled, the 3 others will be disabled. OpenSSL supports a protocol setting for TLSv1.1, but since SSLProtocol does not account for these options, it never gets disabled. OpenSSL v1.0.1 has some known issues with TLSv1.2 but if it's supported I suppose the same goes for that as for TLSv1.1; it's not recognized/handled by mod_ssl and thus never disabled. Source Code References for mod_ssl: SSLProtocol gets parsed at line 925 in pkg.sslmod/ssl_engine_config.c The options used in the above function are defined at line 444 in pkg.sslmod/mod_ssl.h All protocols get enabled at line 586 in pkg.sslmod/ssl_engine_init.c whereafter specific protocols get disabled on the subsequent lines How to disable it then? You have a few options: Disable it in the OpenSSL config file with: Protocols All,-TLSv1.1,-TLSv1.2 Rewrite mod_ssl ;-) | {
"source": [
"https://serverfault.com/questions/513961",
"https://serverfault.com",
"https://serverfault.com/users/57706/"
]
} |
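As a follow-up to the protocol answer above, you can verify from the client side which versions a server still accepts with openssl s_client ; this is only a sketch, it assumes an OpenSSL build (1.0.1 or newer) that provides the -ssl3 / -tls1_1 / -tls1_2 options, and the hostname is a placeholder:

    # only the handshakes for protocols you left enabled should succeed
    openssl s_client -connect www.example.org:443 -ssl3   < /dev/null
    openssl s_client -connect www.example.org:443 -tls1   < /dev/null
    openssl s_client -connect www.example.org:443 -tls1_1 < /dev/null
    openssl s_client -connect www.example.org:443 -tls1_2 < /dev/null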
514,091 | Hello, I have just moved servers for my asp.net mvc framework, but now I get the following error message, and to be honest I do not know what is wrong. Module IIS Web Core Notification BeginRequest Handler Not yet determined Error Code 0x80070021 Config Error This configuration section cannot be used at this
path. This happens when the section is locked at a parent level.
Locking is either by default (overrideModeDefault="Deny"), or set
explicitly by a location tag with overrideMode="Deny" or the legacy
allowOverride="false". <?xml version="1.0" encoding="utf-8"?>
<!--
For more information on how to configure your ASP.NET application, please visit
http://go.microsoft.com/fwlink/?LinkId=152368
-->
<configuration>
<configSections>
<!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
<section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</configSections>
<connectionStrings>
<add name="CosplayConnectionString" connectionString="Data Source=sogaard.us;Initial Catalog=NewCosplay;Integrated Security=False;Persist Security Info=True;User ID=XXXXXX;Password=XXXXXX;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
</connectionStrings>
<appSettings>
<add key="MaxImageSize" value="5242880" />
<add key="webpages:Version" value="2.0.0.0" />
<add key="webpages:Enabled" value="false" />
<add key="PreserveLoginUrl" value="true" />
<add key="ClientValidationEnabled" value="true" />
<add key="UnobtrusiveJavaScriptEnabled" value="true" />
<add key="RouteDebugger:Enabled" value="true" />
<add key="RecaptchaPrivateKey" value="6LeAsuASAAAAAKigNk4qtA5iS_E0RPmYTcQM9U4Z" />
<add key="RecaptchaPublicKey" value="6LeAsuASAAAAAO8HMUg9HKihCMRx0s53Dazbpoag" />
</appSettings>
<system.web>
<customErrors mode="Off" />
<httpRuntime targetFramework="4.5" />
<compilation debug="true" targetFramework="4.5" />
<authentication mode="Forms">
<forms loginUrl="~/Account/Login" timeout="2880" />
</authentication>
<pages>
<namespaces>
<add namespace="System.Web.Helpers" />
<add namespace="System.Web.Mvc" />
<add namespace="System.Web.Mvc.Ajax" />
<add namespace="System.Web.Mvc.Html" />
<add namespace="System.Web.Optimization" />
<add namespace="System.Web.Routing" />
<add namespace="System.Web.WebPages" />
<add namespace="Recaptcha" />
</namespaces>
</pages>
<profile defaultProvider="DefaultProfileProvider">
<providers>
<add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
</providers>
</profile>
<membership defaultProvider="DefaultMembershipProvider">
<providers>
<add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
</providers>
</membership>
<roleManager defaultProvider="CosplayRoleProvider" enabled="true" cacheRolesInCookie="true">
<providers>
<clear />
<add name="CosplayRoleProvider" type="Sogaard.us.Cosplay.Library.CosplayRoleProvider, Sogaard.us.Cosplay, Version=1.0.0.0, Culture=neutral" connectionStringName="DefaultConnection" applicationname="Cosplay" />
</providers>
</roleManager>
<sessionState mode="InProc" customProvider="DefaultSessionProvider">
<providers>
<add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
</providers>
</sessionState>
<httpModules></httpModules>
<httpHandlers></httpHandlers>
</system.web>
<system.webServer>
<httpErrors errorMode="Detailed" />
<asp scriptErrorSentToBrowser="true" />
<handlers>
<remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" />
<remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" />
<remove name="ExtensionlessUrlHandler-Integrated-4.0" />
<add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" />
<add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" />
<add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>
<modules runAllManagedModulesForAllRequests="true"></modules>
<validation validateIntegratedModeConfiguration="false" />
</system.webServer>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35" />
<bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
<bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="System.Web.WebPages" publicKeyToken="31bf3856ad364e35" />
<bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="RouteMagic" publicKeyToken="84b59be021aa4cee" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-0.2.2.2" newVersion="0.2.2.2" />
</dependentAssembly>
</assemblyBinding>
</runtime>
<entityFramework>
<defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework" />
</entityFramework>
</configuration> | We had the same error on a brand new server. The reason was not the default IIS security policy, stored in applicationHost.config , as suggested by the other answer (although we checked that). The reason was that we installed IIS without support for ASP.NET (an ASP.NET 4.5 role)! When we installed the missing support for ASP.NET, our application just started with no changes in configuration at all. Conclusion: Double check that you have ASP.NET role installed along with IIS if you get this error. To install the ASP.NET role in Windows Server: Open the add roles and features wizard Check the ASP.NET [your_version] entry under Web Server (IIS) -> Web Server -> Application Development To install the ASP.NET role in a Windows client: Open Turn Windows features on or off wizard Check the ASP.NET [your_version] entry under Internet Information Services -> World Wide Web Services -> Application Development Features | {
"source": [
"https://serverfault.com/questions/514091",
"https://serverfault.com",
"https://serverfault.com/users/40200/"
]
} |
514,118 | I have a server with NFSv4.
I am mounting the contents of a remote user's home folder on the local host.
I am able to read and write contents, but when I check the ownership of files on the mounted volume from the local host, they all belong to the corresponding remote user and group (512).
Is there any way to make it look like they belong to the local user and group (1000) on the local host? /etc/exports on remote host (IP is 192.168.1.110) /home/user512 192.168.1.142(rw,sync,all_squash,anonuid=512,anongid=512) /etc/fstab on local host (IP is 192.168.1.142) 192.168.1.110:/home/user512 /home/localuser/projects/project512 nfs rw,hard,intr,rsize=32768,wsize=32768 0 0 | This is what idmapping is supposed to do. First of all, enable it on the client and the server: # echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping Clean the idmap cache and restart the idmap daemon: # nfsidmap -c
# service rpcidmapd restart Now the server and the client will send string principals like [email protected] instead of numeric IDs. You need to have a bob account on both hosts - client and server. Nevertheless, the numeric IDs can be different. | {
"source": [
"https://serverfault.com/questions/514118",
"https://serverfault.com",
"https://serverfault.com/users/176932/"
]
} |
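One related setting worth checking alongside the idmapping answer above: rpc.idmapd on the client and the server must agree on the NFSv4 domain, otherwise principals typically still map to nobody/nogroup. A minimal sketch, where your.domain is a placeholder value:

    # /etc/idmapd.conf on BOTH hosts
    [General]
    Domain = your.domain

    # flush the cache and restart the daemon after changing it
    nfsidmap -c
    service rpcidmapd restart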
514,123 | Is there any way to check the number of kernel panics that have happened on a system?
If not, any idea on how to add this functionality? I would like to hear an answer for most Unix-like systems :) | This is what idmapping is supposed to do. First of all, enable it on the client and the server: # echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping Clean the idmap cache and restart the idmap daemon: # nfsidmap -c
# service rpcidmapd restart Now the server and the client will send string principals like [email protected] instead of numeric IDs. You need to have a bob account on both hosts - client and server. Nevertheless, the numeric IDs can be different. | {
"source": [
"https://serverfault.com/questions/514123",
"https://serverfault.com",
"https://serverfault.com/users/176982/"
]
} |
514,242 | I am currently running a Centos 6.4 server, with Apache 2.2.15 and mod_wsgi 3.2. The server is hosting a django-based site (django 1.5.1, python 2.6.6). Everything was running fine until I installed scipy 0.12.0 via pip. Now, when I attempt to load the django app, the server does not respond, and it appears that child httpd processes that are spawned hang. Looking through my logs (/var/logs/httpd/error_log, my vhost error.log, and my system logs) yields no errors. If I load my models, etc. via the django manage.py shell, everything works fine, which leads me to believe it is a mod_wsgi issue. Any thoughts on how to start troubleshooting this? | Some third party packages for Python which use C extension modules, and this includes scipy and numpy, will only work in the Python main interpreter and cannot be used in the sub interpreters that mod_wsgi uses by default. The result can be thread deadlock, incorrect behaviour or process crashes. This is detailed in: http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Python_Simplified_GIL_State_API The workaround is to force the WSGI application to run in the main interpreter of the process using: WSGIApplicationGroup %{GLOBAL} If running multiple WSGI applications on the same server, you would want to start investigating using daemon mode because some frameworks don't allow multiple instances to run in the same interpreter. This is the case with Django. Thus use daemon mode so each is in its own process and force each to run in the main interpreter of their respective daemon mode process groups. | {
"source": [
"https://serverfault.com/questions/514242",
"https://serverfault.com",
"https://serverfault.com/users/177041/"
]
} |
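To make the daemon-mode suggestion above concrete, a minimal sketch of the relevant Apache directives follows; the process group name and the wsgi.py path are placeholders rather than values from the original question:

    WSGIDaemonProcess djangosite processes=2 threads=15
    WSGIProcessGroup djangosite
    # run the application in the main interpreter of the daemon processes
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /srv/djangosite/wsgi.py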
514,329 | I'm setting up a VPS with Ruby and Postgres. On my local machine, I have postgresql 9.2.3 (client and server) installed and therefore wanted to install the same on my VPS. Following the instructions of this blog post http://hendrelouw73.wordpress.com/2012/11/14/how-to-install-postgresql-9-1-on-ubuntu-12-10-linux/for installing postgres on ubuntu (with the only difference that I'm trying to install 9.2.3. and he installed 9.1), I did the following sudo apt-get install postgresql-9.2.3
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package postgresql-9.2.3
E: Couldn't find any package by regex 'postgresql-9.2.3' However, as you can see, it couldn't find a package postgresql-9.2.3 . Yet, I have that package installed on my local machine (which I installed on my Mac with Homebrew). Can you help me understand what I'm doing wrong? Update
I also tried to install it leaving off the '3' at the end like you see below but it didn't work as you can see. sudo apt-get install postgresql-9.2
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package postgresql-9.2 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'postgresql-9.2' has no installation candidate Update Ign http://security.ubuntu.com quantal-security InRelease
Ign http://archive.ubuntu.com quantal InRelease
Hit http://security.ubuntu.com quantal-security Release.gpg
Ign http://archive.ubuntu.com quantal-updates InRelease
Hit http://security.ubuntu.com quantal-security Release
Hit http://archive.ubuntu.com quantal Release.gpg
Get:1 http://archive.ubuntu.com quantal-updates Release.gpg [933 B]
Hit http://security.ubuntu.com quantal-security/main i386 Packages
Hit http://archive.ubuntu.com quantal Release
Get:2 http://archive.ubuntu.com quantal-updates Release [49.6 kB]
Hit http://security.ubuntu.com quantal-security/main Translation-en
Hit http://archive.ubuntu.com quantal/main i386 Packages
Hit http://archive.ubuntu.com quantal/universe i386 Packages
Ign http://security.ubuntu.com quantal-security/main Translation-en_US
Hit http://archive.ubuntu.com quantal/main Translation-en
Hit http://archive.ubuntu.com quantal/universe Translation-en
Get:3 http://archive.ubuntu.com quantal-updates/main i386 Packages [259 kB]
Get:4 http://archive.ubuntu.com quantal-updates/universe i386 Packages [192 kB]
Hit http://archive.ubuntu.com quantal-updates/main Translation-en
Hit http://archive.ubuntu.com quantal-updates/universe Translation-en
Ign http://archive.ubuntu.com quantal/main Translation-en_US
Ign http://archive.ubuntu.com quantal/universe Translation-en_US
Ign http://archive.ubuntu.com quantal-updates/main Translation-en_US
Ign http://archive.ubuntu.com quantal-updates/universe Translation-en_US
Fetched 501 kB in 3s (148 kB/s)
Reading package lists... Done
postgresql-9.1 - object-relational SQL database, version 9.1 server
postgresql-9.1-dbg - debug symbols for postgresql-9.1
postgresql-9.1-debversion - Debian version number type for PostgreSQL
postgresql-9.1-ip4r - IPv4 and IPv4 range index types for PostgreSQL 9.1
postgresql-9.1-orafce - Oracle support functions for PostgreSQL 9.1
postgresql-9.1-pgfincore - set of PostgreSQL functions to manage blocks in memory
postgresql-9.1-pgmemcache - PostgreSQL interface to memcached
postgresql-9.1-pgmp - arbitrary precision integers and rationals for PostgreSQL 9.1
postgresql-9.1-pgpool2 - connection pool server and replication proxy for PostgreSQL - modules
postgresql-9.1-pljava-gcj - Java procedural language for PostgreSQL 9.1
postgresql-9.1-pllua - Lua procedural language for PostgreSQL 9.1
postgresql-9.1-plproxy - database partitioning system for PostgreSQL 9.1
postgresql-9.1-plr - Procedural language interface between PostgreSQL and R
postgresql-9.1-plsh - PL/sh procedural language for PostgreSQL 9.1
postgresql-9.1-postgis - Geographic objects support for PostgreSQL 9.1
postgresql-9.1-prefix - Prefix Range module for PostgreSQL
postgresql-9.1-preprepare - Pre Prepare your Statement server side
postgresql-9.1-slony1-2 - replication system for PostgreSQL: PostgreSQL 9.1 server plug-in | In the official Ubuntu repositories only PostgreSQL 9.1 is available. That is why it couldn't be found. In order to get PostgreSQL v9.2 on your VPS using apt you should follow the official PostgreSQL procedure for Ubuntu found here . It consists of adding the official PostgreSQL repository as one of your repository sources: Create the file /etc/apt/sources.list.d/pgdg.list Insert this line deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main Import the repository signing key wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - Refresh your repository cache sudo apt-get update Now you can simply do sudo apt-get install postgresql-9.2 | {
"source": [
"https://serverfault.com/questions/514329",
"https://serverfault.com",
"https://serverfault.com/users/79866/"
]
} |
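The steps from the answer above condensed into a single shell session, assuming Ubuntu 12.04 ("precise") as in the linked procedure - swap the codename for other releases:

    # add the PGDG repository, trust its signing key, then install
    echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install postgresql-9.2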
514,382 | I've been wondering for a while, why does running "echo 'helloworld' | openssl passwd -1 -stdin" yield different results every time? If I put any of the hashes in my /etc/shadow I can use them as my password and log in to my system, how does it work? computer:/ user$ echo 'helloworld' | openssl passwd -1 -stdin
$1$xlm86SKN$vzF1zs3vfjC9zRVI15zFl1
computer:/ user$ echo 'helloworld' | openssl passwd -1 -stdin
$1$/0.20NIp$pd4X9xTZ6sF8ExEGqAXb9/
computer:/ user$ echo 'helloworld' | openssl passwd -1 -stdin
$1$sZ65uxPA$pENwlL.5a.RNVZITN/zNJ1
computer:/ user$ echo 'helloworld' | openssl passwd -1 -stdin
$1$zBFQ0d3Z$SibkYmuJvbmm8O8cNeGMx1
computer:/ user$ echo 'helloworld' | openssl passwd -1 -stdin
$1$PfDyDWER$tWaoTYym8zy38P2ElwoBe/ I would think that because I use this hash to describe to the system what my password should be, I should get the same results every time. Why don't I? | They all have a different salt . A unique salt is chosen each time, as salts should never be reused. Using a unique salt for each password makes them resistant to rainbow table attacks. | {
"source": [
"https://serverfault.com/questions/514382",
"https://serverfault.com",
"https://serverfault.com/users/52378/"
]
} |
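To see the salt explanation above in action, pin the salt with openssl's -salt option and the otherwise-random output becomes reproducible; a small sketch reusing a salt from the question:

    # same password + same salt => the same $1$xlm86SKN$... hash every run
    openssl passwd -1 -salt xlm86SKN helloworld
    openssl passwd -1 -salt xlm86SKN helloworld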
514,870 | The LDAP server is hosted on Solaris. The client is CentOS. OpenLDAP/NSLCD/SSH authentication via LDAP works fine, but I am not able to use the ldapsearch commands to debug LDAP issues. [root@tst-01 ~]# ldapsearch
SASL/EXTERNAL authentication started
ldap_sasl_interactive_bind_s: Unknown authentication method (-6)
additional info: SASL(-4): no mechanism available:
[root@tst-01 ~]# cat /etc/openldap/ldap.conf
TLS_CACERTDIR /etc/openldap/cacerts
URI ldap://ldap1.tst.domain.tld ldap://ldap2.tst.domain.tld
BASE dc=tst,dc=domain,dc=tld
[root@tst-01 ~]# ls -al /etc/openldap/cacerts
total 12
drwxr-xr-x. 2 root root 4096 Jun 6 10:31 .
drwxr-xr-x. 3 root root 4096 Jun 10 10:12 ..
-rw-r--r--. 1 root root 895 Jun 6 10:01 cacert.pem
lrwxrwxrwx. 1 root root 10 Jun 6 10:31 cf848aa4.0 -> cacert.pem
[root@tst-01 ~]# I have tried authentication with a certificate via ldapsearch giving /etc/openldap/cacerts/cacert.pem as a parameter, but it didn't accept this certificate for authentication. | You may wish to turn off SASL and use simple authentication with the "-x" option. For example, a search to find a particular user ldapsearch -x -D "uid=search-user,ou=People,dc=example,dc=com" \
-W -H ldap://ldap.example.com -b "ou=People,dc=example,dc=com" \
-s sub 'uid=test-user' Will find "test-user" by -D - Use bind user "search-user" -W - Prompt for password -H - URL of LDAP server. Non-SSL in this case; use "ldaps://" for SSL -b - The search base -s - Search scope - i.e. base for the base of the tree, one for one level down and sub for recursively searching down the tree (can take a while) Finally the search filter as a non-option argument. In this case we will search for the uid of "test-user" | {
"source": [
"https://serverfault.com/questions/514870",
"https://serverfault.com",
"https://serverfault.com/users/81502/"
]
} |
514,871 | I have 80 nodes, 78 need to have a specific module, except for 2. [root@puppetmaster puppet]# cat hiera.yaml
:backends:
- yaml
:hierarchy:
- environment/%{::environment}/%{::hostname}
- environment/%{::environment}
- common
:logger: console
:yaml:
:datadir: '/etc/puppet/hieradata'
[root@puppetmaster puppet]# cat hieradata/common.yaml
---
classes:
- ldap
- motd
- ntp
- puppet-conf
[root@puppetmaster puppet]# cat hieradata/environment/tst/tst-01.yaml
---
classes:
- puppet-update
- public-keys
[root@puppetmaster puppet]# I want all nodes to have the ldap module, except for the tst-01 and tst-02 server. How do I exclude this module from these 2 servers? A solution would be to use 80 .yaml-files for all nodes and add "- ldap" to 78 of these .yaml-files, but this seems poor design. It would be cleaner to exclude the modules from the inherited list. | You may wish to turn off SASL and use simple authentication with the "-x" option. For example, a search to find a particular user ldapsearch -x -D "uid=search-user,ou=People,dc=example,dc=com" \
-W -H ldap://ldap.example.com -b "ou=People,dc=example,dc=com" \
-s sub 'uid=test-user' Will find "test-user" by -D - Use bind user "search-user" -W - Prompt for password -H - URL of LDAP server. Non-SSL in this case; use "ldaps://" for SSL -b - The search base -s - Search scope - i.e. base for base of tree, one for on level down and sub for recursively searching down the tree (can take a while) Finally the search filter as a non-option argument. In this case we will search for the uid of "test-user" | {
"source": [
"https://serverfault.com/questions/514871",
"https://serverfault.com",
"https://serverfault.com/users/81502/"
]
} |
514,925 | I've got an application that emails users once they have filled in a form. It uses a [email protected] as a from address. The customer wants it to use the email from the form as the from address, which could be anything. I have been told that this is a bad idea due to spoofing/blacklisting and spam. I feel really vague about the exact reason why this is a bad idea, particularly as I've got to try to counsel the client out of this. Can someone explain to me why this is a bad idea? Interestingly the client has used a gmail account as the from address as a demo which not only works fine but has enabled the application to start sending emails (it wouldn't do it before with an email which was [email protected] ). Erm - what is going on? I'm told one thing and the opposite works. Sorry - I know this is basic but I couldn't find anything on a Google search. Largely I think because I'm having trouble even framing the question. EDIT Thank you everyone - great answers. Interestingly the server sending the email and the mailbox that it is going to are both behind the same firewall so the client says they are unconcerned about spam. Oh well. | It is bad practice for several reasons: You are NOT allowed to send mail from a domain you do not own. As such, it could be perceived as an attempt at impersonation. It's a common enough practice used by spammers and, as such, is frequently tagged by spam filters. It is pretty common for well-maintained domains to use SPF or DKIM to protect their reputation and help other systems identify impersonation and spam. You obviously will not be able to add the DKIM mail header or add your SMTP server into the domain's SPF DNS record and so your mail will be (rightly) considered forged and rejected. The proper practice is to use your local domain as sender, possibly using a non-existing address as user name. | {
"source": [
"https://serverfault.com/questions/514925",
"https://serverfault.com",
"https://serverfault.com/users/169618/"
]
} |
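As a small illustration of the SPF point above, the domain owner publishes a TXT record along these lines; the host list is a placeholder and the real policy depends on which servers legitimately send for the domain:

    ; only the domain's MX hosts and one application server may send as @example.com
    example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 -all"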
515,437 | At work, we have two wireless networks (e.g., Work1 Work2); the Work2 is used downstairs and Work1 is used upstairs. However, both are notoriously slow. The connection is better when we are wired in, but unfortunately due to our building being very old and our company growing very fast, most employees are not seated near the walls where the ethernet cables are. I had Cox, our ISP, run a bandwidth utilization test and it doesn't seem like we are capping out on upstream/downstream, which leads me to believe that it's strictly an issue with the wireless networks (which were implemented before I got there). The wireless networks are both Apple Airport Extremes. Is there anything I can do to improve the situation for everyone? Speeds are extremely slow, and sometimes drops out. | I'm going to put this as gently as I can: Wireless networks (802.11) suck . The 2.4GHz band (802.11b, g, and some n devices) is a festering pit of radio noise. Everything from baby monitors to microwave ovens pollute this section of the spectrum, and the wanton proliferation of wireless networks has it so congested that you're frankly lucky to get 1Mbit speeds out of it in some urban areas (in the building my company is in the 2.4GHz band is unusable - average throughput is less than 100Kb/sec). The 5GHz band (802.11a, some n, and the new ac draft standard) is better in terms of interference, but you wind up taking a penalty in overall range (because 5GHz signals get eaten up more readily by the little things people like to have in their buildings, like walls). In both cases you're using a shared medium (wireless frequency) -- this means everyone else's signal is effectively your noise: the more people using the wireless link the worse this gets as devices are fighting a limited slice of frequency spectrum and time. Wireless "range extenders" just make the problem WORSE -- the extender is now chewing up radio spectrum to relay traffic back to the base station (adding more traffic and congestion to the airwaves). For more detail than you probably ever wanted to know about wireless networking, check out the blog posts the Server Fault team did when they were fitting out their office wireless network: A Studied Approach at WiFi - Part 1 A studied Approach at WiFi - Part 2 So what can/should you do? Ideally you should run cables to the high traffic locations, and leave wireless for things that are truly mobile (laptops going to the conference room, cordless VoIP phones, a "guest network", and stuff like that). Like Sirex suggested there are other ways to go about running cable that don't require a major remodel (but please check your local building codes before you start throwing wires through your ceiling). The ideal solution may not be practical, so the next best thing is to build a wireless network with multiple access points that use a wired backhaul to get to your main network. Apple documents how to do this with the Airport Extreme on their site , and you can find similar guidance from other manufacturers. Some other things to bear in mind: One WAP can support about 15-25 users (depending on how heavily they use the network). If you load WAPs/coverage areas above this number your performance will suffer. Your WAPs should have minimal coverage area overlap if possible Remember the signal from one set of devices (WAP+Clients) is just "noise" to the other sets. Cisco has some basic guidelines on setting up a wireless network which make for good reading. 
They also have more advanced documentation, but my Google-Fu is failing me at the moment. | {
"source": [
"https://serverfault.com/questions/515437",
"https://serverfault.com",
"https://serverfault.com/users/82516/"
]
} |
515,604 | I've read Stop ssh login from printing motd from the client? , however my situation is a bit different : I want to keep Banner /path/to/sometxt serverside I would like to pass an option under specific conditions so that Banner is not printed (eg ssh -o "PrintBanner=No" someserver ). Any idea? | There is a LogLevel option: It silences the banner but you're still able to receive errors: $ ssh -o LogLevel=error localhost
Permission denied (publickey). | {
"source": [
"https://serverfault.com/questions/515604",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
515,833 | I extracted a certificate using Chrome's SSL/export command. Then I provided it as input to openvpn - in the config for openvpn: pkcs12 "path/to/pkcs12_container" When calling openvpn ~/openvp_config it asks for a password for the private key (which I entered when exporting using Chrome): Enter Private Key Password:... I want to remove this password request. The question: how to remove the password for the private key from the pkcs12? That is, create a pkcs12 file which doesn't require a password. (It seems that I already somehow did this a year ago, and have now forgotten it. Damn.) | It can be achieved by various openssl calls. PASSWORD is your current password YourPKCSFile is the file you want to convert NewPKCSWithoutPassphraseFile is the target file for the PKCS12 without passphrase First, extract the certificate: $ openssl pkcs12 -clcerts -nokeys -in "YourPKCSFile" \
-out certificate.crt -password pass:PASSWORD -passin pass:PASSWORD Second, the CA certificate: $ openssl pkcs12 -cacerts -nokeys -in "YourPKCSFile" \
-out ca-cert.ca -password pass:PASSWORD -passin pass:PASSWORD Now, the private key: $ openssl pkcs12 -nocerts -in "YourPKCSFile" \
-out private.key -password pass:PASSWORD -passin pass:PASSWORD \
-passout pass:TemporaryPassword Now remove the passphrase: $ openssl rsa -in private.key -out "NewKeyFile.key" \
-passin pass:TemporaryPassword Put things together for the new PKCS-File: $ cat "NewKeyFile.key" \
"certificate.crt" \
"ca-cert.ca" > PEM.pem And create the new file: $ openssl pkcs12 -export -nodes -CAfile ca-cert.ca \
-in PEM.pem -out "NewPKCSWithoutPassphraseFile" Now you have a new PKCS12 key file without passphrase on the private key part. | {
"source": [
"https://serverfault.com/questions/515833",
"https://serverfault.com",
"https://serverfault.com/users/177836/"
]
} |
515,957 | I'm changing my setup from nginx > apache/php to haproxy > nginx > apache/php (using haproxy 1.5-dev18 with ssl support compiled in) Both nginx and haproxy are set up correctly to set the HTTP_X_FORWARDED_PROTO header. However, when nginx gets the ssl traffic from haproxy, it sees the connection as http and sets the header accordingly. How can I set nginx to forward the HTTP_X_FORWARDED_PROTO header if it exists, but otherwise continue setting it based on the connection? | I figured out how to solve this. The problem was that nginx was overwriting the header set by haproxy on this line of my config: proxy_set_header X-Forwarded-Proto $scheme;
default $scheme;
https https;
} and changing the proxy_set_header line to use the new scheme: proxy_set_header X-Forwarded-Proto $thescheme; | {
"source": [
"https://serverfault.com/questions/515957",
"https://serverfault.com",
"https://serverfault.com/users/19880/"
]
} |
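For completeness of the answer above, the haproxy side usually sets the header on its SSL frontend with something like the sketch below; the certificate path and backend name are placeholders, reqadd is what the 1.5-dev series in the question understands, and newer releases would use http-request set-header instead:

    frontend https-in
        bind :443 ssl crt /etc/haproxy/site.pem
        # tag terminated-SSL traffic so the nginx map above can pass it through
        reqadd X-Forwarded-Proto:\ https if { ssl_fc }
        default_backend nginx_servers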
515,966 | We have a number of iptables rules for forwarding connections, which are solid and work well. For example, port 80 forwards to port 8080 on the same machine (the webserver). When a given webserver is restarting, we forward requests to another IP on port 8080 which displays a Maintenance Page. In most cases, this other IP is on a separate server. This all worked perfectly until we installed bridge-utils and changed to using a bridge br0 instead of eth0 as the interface. The reason we have converted to using a bridge interface is to gain access to the MAC SNAT/DNAT capabilities of ebtables. We have no other reason to add a bridge interface on the servers, as they don't actually bridge connections over multiple interfaces. I know this is a strange reason to add a bridge on the servers, but we are using the MAC SNAT/DNAT capabilities in a new project and ebtables seemed to be the safest, fastest and easiest way to go since we are already so familiar with iptables. The problem is, since converting to a br0 interface, iptables PREROUTING forwarding to external servers is no longer working. Internal PREROUTING forwarding works fine (eg: request comes in on port 80, it forwards to port 8080). The OUTPUT chain also works (eg: we can connect outwards from the box via a local destination IP:8080, and the OUTPUT chain maps it to the Maintenance Server IP on a different server, port 8080, and returns a webpage). However, any traffic coming into the box seems to die after the PREROUTING rule if the destination IP is external. Here is an example of our setup: Old Setup: iptables -t nat -A PREROUTING -p tcp --dport 9080 -j DNAT --to-destination $MAINTIP:8080
iptables -a FORWARD --in-interface eth0 -j ACCEPT
iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward New Setup: (old setup in various formats tried as well..., trying to log eth0 and br0 packets) iptables -t nat -A PREROUTING -p tcp --dport 9080 -j DNAT --to-destination $MAINTIP:8080
iptables -a FORWARD --in-interface br0 -j ACCEPT
iptables -t nat -A POSTROUTING --out-interface br0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward Before changing to br0, the client request would go to server A at port 9080, and then be MASQUERADED off to a different server $MAINTIP. As explained above, this works fine if $MAINTIP is on the same machine, but if it's on another server, the packet is never sent to $MAINTIP under the new br0 setup. We want the packets to go out the same interface they came in on, MASQUERADED, as they did before we switched to using a single-NIC bridge (br0/bridge-utils). I've tried adding logging at all stages in iptables. For some reason the iptables TRACE target doesn't work on this setup, so I can't get a TRACE log, but the packet shows up in the PREROUTING table, but seem to be silently dropped after that. I've gone through this excellent document and have a better understanding of the flow between iptables and ebtables: http://ebtables.sourceforge.net/br_fw_ia/br_fw_ia.html From my understanding, it seems that the bridge is not forwarding the packets out the same interface they came in, and is dropping them. If we had a second interface added, I imagine it would be forwarding them out on that interface (the other side of the bridge) - which is the way bridges are meant to work ;-) Is it possible to make this work the way we want it to, and PREROUTE/FORWARD those packets out over the same interface they came in on like we used to? I'm hoping there are some ebtables rules which can work in conjunction with the iptables PREROUTING/FORWARD/POSTROUTING rules to make iptables forwarding work the way it usually does, and to send the routed packets out br0 (eth0) instead of dropping them. Comments, flames, any and all advice welcome! Best Regards,
Neale | I figured out how to solve this. The problem was that nginx was overwriting the header set by haproxy on this line of my config: proxy_set_header X-Forwarded-Proto $scheme; I fixed it by adding in this: map $http_x_forwarded_proto $thescheme {
default $scheme;
https https;
} and changing the proxy_set_header line to use the new scheme: proxy_set_header X-Forwarded-Proto $thescheme; | {
"source": [
"https://serverfault.com/questions/515966",
"https://serverfault.com",
"https://serverfault.com/users/177903/"
]
} |
516,838 | I have an Ubuntu server where I am blocking some IPs with ufw . I enabled logging, but I don't know where to find the logs. Where might the logs be or why might ufw not be logging? | Perform sudo ufw status verbose to see if you're even logging in the first place. If you're not, perform sudo ufw logging on . If it is logging, check /var/log/ for files starting with ufw . For example, sudo ls /var/log/ufw* If you are logging, but there are no /var/log/ufw* files, check to see if rsyslog is running: sudo service rsyslog status . If rsyslog is running, ufw is logging, and there are still no log files, search through common log files for any mention of UFW . For example: grep -i ufw /var/log/syslog and grep -i ufw /var/log/messages as well as grep -i ufw /var/log/kern.log . If you find a ton of ufw messages in the syslog, messages, and kern.log files, then rsyslog might need to be told to log all UFW messages to a separate file. Add the following two lines to the top of /etc/rsyslog.d/50-default.conf : :msg, contains, "UFW" -/var/log/ufw.log
& ~ And you should then have a ufw.log file that contains all ufw messages! NOTE: Check the 50-default.conf file for pre-existing configurations. Make sure to backup the file before saving edits! UPDATE for Ubuntu Server 20.04 and later As of Ubuntu 20.04 and later, the ~ is deprecated and should not be used, as reported in the ubuntu 20.04 system log: rsyslogd: error during parsing file /etc/rsyslog.d/50-default.conf, on or before line 6: invalid character '�' - is there an invalid escape sequence somewhere? [v8.2001.0 try https://www.rsyslog.com/e/2207 ]
Jan 26 12:10:27 ubuntu rsyslogd: warning: ~ action is deprecated, consider using the 'stop' statement instead [v8.2001.0 try https://www.rsyslog.com/e/2307 ] Rather, you should replace it with the word stop . So if you still want to log all ufw messages in rsyslog to a separate ufw.log file then you should add the following to your /etc/rsyslog.d/50-default.conf file: :msg, contains, "UFW" -/var/log/ufw.log
& stop instead of :msg, contains, "UFW" -/var/log/ufw.log
& ~ | {
"source": [
"https://serverfault.com/questions/516838",
"https://serverfault.com",
"https://serverfault.com/users/44784/"
]
} |
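After adding the lines above, rsyslog has to re-read its configuration before the dedicated ufw.log shows up; a quick sketch:

    sudo service rsyslog restart      # or: sudo systemctl restart rsyslog
    sudo tail -f /var/log/ufw.log     # watch blocked/allowed packets arrive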
516,917 | I'm trying to write a function in puppet that will do a fail if the passed directory path does not exist. if File["/some/path"] always returns true, and if defined(File["/some/path"]) only returns true if the resource is defined in puppet, regardless of whether it actually exists. Is there a way to do this with a simple if statement? Thanks | Workaround for this: use onlyif on an exec "test" and require it in your action you want to execute: exec {"check_presence":
command => '/bin/true',
onlyif => '/usr/bin/test -e /path/must/be/available',
}
whatever {"foo...":
.....
require => Exec["check_presence"],
} | {
"source": [
"https://serverfault.com/questions/516917",
"https://serverfault.com",
"https://serverfault.com/users/162906/"
]
} |
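To make the whatever placeholder above concrete, any real resource can depend on the check in the same way - for example a file resource (a sketch with made-up paths):

    file { '/path/must/be/available/managed.conf':
      ensure  => file,
      content => "managed by puppet\n",
      # only applied once the exec has confirmed the path exists
      require => Exec['check_presence'],
    }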
517,190 | My first time using Nginx, but I am more than familiar with Apache and Linux. I am using an existing project and whenever I try to see the index.php I get a 404 File not found. Here is the access.log entry: 2013/06/19 16:23:23 [error] 2216#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.ordercloud.lh" And here is the sites-available file: server {
set $host_path "/home/willem/git/console/www";
access_log /www/logs/console-access.log main;
server_name console.ordercloud;
root $host_path/htdocs;
set $yii_bootstrap "index.php";
charset utf-8;
location / {
index index.html $yii_bootstrap;
try_files $uri $uri/ /$yii_bootstrap?$args;
}
location ~ ^/(protected|framework|themes/\w+/views) {
deny all;
}
#avoid processing of calls to unexisting static files by yii
location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ {
try_files $uri =404;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php {
fastcgi_split_path_info ^(.+\.php)(.*)$;
#let yii catch the calls to unexising PHP files
set $fsn /$yii_bootstrap;
if (-f $document_root$fastcgi_script_name){
set $fsn $fastcgi_script_name;
}
fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fsn;
#PATH_INFO and PATH_TRANSLATED can be omitted, but RFC 3875 specifies them for CGI
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fsn;
}
location ~ /\.ht {
deny all;
}
} My /home/willem/git/console is owned by www-data:www-data (my web user running php etc) and I have given it 777 permissions out of frustration... My best guess is that something is wrong with the config, but I can't figure it out... UPDATE So I moved it to /var/www/ and used a much more basic config: server {
#listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name console.ordercloud;
location / {
root /var/www/console/frontend/www/;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www;
include fastcgi_params;
}
location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ {
try_files $uri =404;
}
location /doc/ {
alias /usr/share/doc/;
autoindex on;
allow 127.0.0.1;
deny all;
}
} Also if I call localhost/console/frontend/www/index.php I get a 500 PHP which means it is serving there. It just isn't serving off console.ordercloud ... | The error message “primary script unknown” is almost always related to a wrongly set SCRIPT_FILENAME in the nginx fastcgi_param directive (or incorrect permissions, see other answers). You’re using an if in the configuration you posted first. Well it should be well known by now that if is evil and often produces problems. Setting the root directive within a location block is bad practice, of course it works. You could try something like the following: server {
location / {
location ~* \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass 127.0.0.1:9000;
try_files $uri @yii =404;
}
}
location @yii {
fastcgi_param SCRIPT_FILENAME $document_root$yii_bootstrap;
}
} Please note that the above configuration is untested. You should execute nginx -t before applying it to check for problems that nginx can detect right away. | {
"source": [
"https://serverfault.com/questions/517190",
"https://serverfault.com",
"https://serverfault.com/users/178505/"
]
} |
517,483 | Here is my htop output: For example, I'm confused by this ruby script: How much physical memory is it using? 3+1+8+51+51 ? 51 ? 51+51 ? | Hide user threads (shift + H) and close the process tree view (F5), then you can sort out the process of your interest by PID and read the RES column (sort by MEM% by pressing shift + M, or F3 to search in cmd line) | {
"source": [
"https://serverfault.com/questions/517483",
"https://serverfault.com",
"https://serverfault.com/users/95509/"
]
} |
517,501 | Can anyone explain whether it is possible for two hostnames to share the same IP address? And what about if one hostname represents more than one IP address, is that possible too? Why? | Assigning more than one IP address to one hostname is also possible: rr.example.com. A 192.0.2.12
rr.example.com. A 192.0.2.23
rr.example.com. A 192.0.2.34
rr.example.com. A 192.0.2.45 When you query a DNS server for rr.example.com you'll get back a list of IP addresses. You can then choose to connect to one of them. Should the first attempt to connect get actively refused, just try the next. Most browsers will follow this flow, as long as the endpoints actively refuse TCP connectivity. Should an endpoint time out, the resource will be treated as unreachable even though not all IPs have been tried. Since most applications (browsers included) are often only interested in 1 IP endpoint at a time and just choose the first available answer, you risk skewing the load between the target servers so that the first server gets all the traffic while the others might be idle. To circumvent this, most DNS servers offer what is known as a Round Robin configuration, making the server alternate the order in which equally matching records are returned. Before load balancers were commonplace, this was an efficient way to load balance and somewhat implement fault-tolerance on network systems. | {
"source": [
"https://serverfault.com/questions/517501",
"https://serverfault.com",
"https://serverfault.com/users/178677/"
]
} |
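A quick way to observe the round robin behaviour described above is to repeat the query with dig and watch the order of the returned addresses rotate (sketch, reusing the example name from the answer):

    # run it a few times; the address order should change between runs
    dig +short rr.example.com A
    dig +short rr.example.com A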
517,518 | We are constructing a new networking infrastructure to replace out 1GbE backbone and have decided upon using 4x Dell PowerConnect 8024F's as our "core" switches. As per the diagram below we have 2x 8024's upstairs and 2x downstairs providing links over MMF fiber for redundancy. Data being transferred is a mixture (70/30) of iSCSI/LAN on separate VLANs. How can we best configure these switches to allow for redundancy and throughput, 2x stacks of 2 8024's or 1x stack of 4 switches and where should LAG be employed? | Assigning more than one IP address to one hostname is also possible: rr.example.com. A 192.0.2.12
rr.example.com. A 192.0.2.23
rr.example.com. A 192.0.2.34
rr.example.com. A 192.0.2.45 When you query a DNS server for rr.example.com you'll get back a list of IP addresses back. You can then choose to connect to one of them. Should the first attempt to connect get actively refused, just try the next. Most browser will follow this flow, as long as the endpoints actively refuse TCP connectivity. Should an endpoint timeout, the ressource will be treated as unreachable even though not all IP's has been tried Since most applications (browsers included) are often only interested in 1 IP endpoint at a time and just choose the first available answer, you risk skewing the load between the target servers so that the first server gets all the traffic while the others might be idle. To circumvent this, most DNS servers offer what is known as a Round Robin configuration, making the server alternate the order in which equally matching records are returned. Before load balancers were commonplace, this was an efficient way to load balance and somewhat implement fault-tolerance on network systems. | {
"source": [
"https://serverfault.com/questions/517518",
"https://serverfault.com",
"https://serverfault.com/users/74265/"
]
} |
518,220 | I have the following code that is working on Nginx to keep the AWS ELB healthcheck happy. map $http_user_agent $ignore {
default 0;
"ELB-HealthChecker/1.0" 1;
}
server {
location / {
if ($ignore) {
access_log off;
return 200;
}
}
} I know the 'IF' is best avoided with Nginx and I wanted to ask if someone would know how to recode this without the 'if'? Thank you. | Don't overcomplicate things. Just point your ELB health checks at a special URL just for them. server {
location /elb-status {
access_log off;
return 200;
}
} | {
"source": [
"https://serverfault.com/questions/518220",
"https://serverfault.com",
"https://serverfault.com/users/138436/"
]
} |
518,239 | Considering the fact that many server-class systems are equipped with ECC RAM , is it necessary or useful to burn-in the memory DIMMs prior to their deployment? I've encountered an environment where all server RAM is placed through a lengthy burn-in/stress-tesing process. This has delayed system deployments on occasion and impacts hardware lead-time. The server hardware is primarily Supermicro , so the RAM is sourced from a variety of vendors; not directly from the manufacturer like a Dell Poweredge or HP ProLiant . Is this a useful exercise? In my past experience, I simply used vendor RAM out of the box. Shouldn't the POST memory tests catch DOA memory? I've responded to ECC errors long before a DIMM actually failed, as the ECC thresholds were usually the trigger for warranty placement. Do you burn-in your RAM? If so, what method(s) do you use to perform the tests? Has it identified any problems ahead of deployment? Has the burn-in process resulted in any additional platform stability versus not performing that step? What do you do when adding RAM to an existing running server? | I found a document by Kingston detailing how they work with Server Memory, I believe that this process would, normally, be the same for most known manufacturers. Memory chips, as well as all semiconductor devices, follow a particular reliability/failure pattern that is known as
the Bathtub Curve: Time is represented on the horizontal axis,
beginning with the factory shipment and continuing
through three distinct time periods: Early Life Failures: Most failures occur during the early usage
period. However, as time goes on, the number of failures diminishes
quickly. The Early Life Failure period, shown in yellow, is
approximately 3 months. Useful Life: During this period, failures are extremely rare. The
useful life period is shown in blue and is estimated to be 20+ years. End-of-Life Failures: Eventually, semiconductor products wear out and
fail. The End-of-Life period is shown in green Now because Kingston noted that high fail-rates would occur the first three months (after these three months the unit is considered good until it's EOL about 15 - 20 years later). They designed a test using a unit called the KT2400 which brutally tests the server memory modules for 24 hours at 100 degrees celsius at high voltage, by which all cells of every DRAM chip is continuously exercised; this high level of stress testing
has the effect of aging the modules by at least three months (as noted before the critical period where most modules show failures). The results were: In March 2004, Kingston began a six-month trial in which 100 percent
of its server memory was tested in the KT2400. Results were closely
monitored to measure the change in failures. In September 2004, after
all the test data was compiled and analyzed, results showed that
failures were reduced by 90 percent. These results exceeded
expectations and represent a significant improvement for a product
line that was already at the top of its class. So why is burning in memory not useful for server memory? Simply, because it's already done by your manufacturer! | {
"source": [
"https://serverfault.com/questions/518239",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
518,240 | A company performs a full backup of its data on a daily basis for disaster recovery purposes. However, their backup process cannot be completed within the assigned backup time window. What would you recommend to this company about how to restructure its backup environment in order to minimize the backup time? We have 4 candidates: 1. Perform LAN based backup 2. Weekly full backup and daily incremental 3. Weekly full backup and daily cumulative 4. Add more ISL to increase bandwidth When comparing incremental backup with cumulative backup, incremental backup time is surely shorter than cumulative backup time. But I don't know whether adding more ISL is allowed in an existing storage system, or whether this operation can really shorten backup time. | I found a document by Kingston detailing how they work with Server Memory, I believe that this process would, normally, be the same for most known manufacturers.
the Bathtub Curve: Time is represented on the horizontal axis,
beginning with the factory shipment and continuing
through three distinct time periods: Early Life Failures: Most failures occur during the early usage
period. However, as time goes on, the number of failures diminishes
quickly. The Early Life Failure period, shown in yellow, is
approximately 3 months. Useful Life: During this period, failures are extremely rare. The
useful life period is shown in blue and is estimated to be 20+ years. End-of-Life Failures: Eventually, semiconductor products wear out and
fail. The End-of-Life period is shown in green Now because Kingston noted that high fail-rates would occur the first three months (after these three months the unit is considered good until it's EOL about 15 - 20 years later). They designed a test using a unit called the KT2400 which brutally tests the server memory modules for 24 hours at 100 degrees celsius at high voltage, by which all cells of every DRAM chip is continuously exercised; this high level of stress testing
has the effect of aging the modules by at least three months (as noted before the critical period where most modules show failures). The results were: In March 2004, Kingston began a six-month trial in which 100 percent
of its server memory was tested in the KT2400. Results were closely
monitored to measure the change in failures. In September 2004, after
all the test data was compiled and analyzed, results showed that
failures were reduced by 90 percent. These results exceeded
expectations and represent a significant improvement for a product
line that was already at the top of its class. So why is burning in memory not useful for server memory? Simply, because it's already done by your manufacturer! | {
"source": [
"https://serverfault.com/questions/518240",
"https://serverfault.com",
"https://serverfault.com/users/179060/"
]
} |
518,355 | We have two Apache servers as front-end and 4 tomcat servers as back-end, configured using the mod_proxy module as a load balancer. Now, we want to exclude a single tomcat URL from the mod_proxy load balancer. Is there any way or rule to exclude it? Proxy Balancer Setting: <Proxy balancer://backend-cluster1>
BalancerMember http://10.0.0.1:8080 loadfactor=1 route=test1 retry=10
BalancerMember http://10.0.0.2:8080 loadfactor=1 route=test2 retry=10
</Proxy> | You exclude paths from mod_proxy with an exclamation mark (!) before your full ProxyPass statement, which your sample is missing - It would look something like ProxyPass /path balancer://backend-cluster1 . Therefore, to exclude a path, add: ProxyPass /my/excluded/path ! before ProxyPass /my balancer://backend-cluster1 | {
"source": [
"https://serverfault.com/questions/518355",
"https://serverfault.com",
"https://serverfault.com/users/103365/"
]
} |
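Putting the exclusion together with the balancer from the question - the ! rule has to come before the broader mapping because mod_proxy checks ProxyPass rules in configuration order; /my/excluded/path is a placeholder:

    <Proxy balancer://backend-cluster1>
        BalancerMember http://10.0.0.1:8080 loadfactor=1 route=test1 retry=10
        BalancerMember http://10.0.0.2:8080 loadfactor=1 route=test2 retry=10
    </Proxy>

    ProxyPass /my/excluded/path !
    ProxyPass / balancer://backend-cluster1/
    ProxyPassReverse / balancer://backend-cluster1/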
518,967 | I'm just wondering if I need to restart my server after editing fstab and mtab. I changed something in these files manually due to a problem with the awstats report. I am using ISPConfig 3 with the help of the tutorial from howtoforge . But due to the removal/deletion of some accounts, the configuration of fstab and mtab got messed up. I also asked this question at the howtoforge forum but until now no one has answered. If you'd like to read my question please visit it here . Update: Here's what happened to my fstab: Before the value was (I omitted the other): /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web1/log none bind,nobootwait 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web2/log none bind,nobootwait 0 0 So I changed it to the correct path: /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web2/log none bind,nobootwait 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web3/log none bind,nobootwait 0 0 I also found mtab to have the same value as above that's why I edited it manually. from: /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web1/log none rw,bind 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web2/log none rw,bind 0 0 to: /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web2/log none rw,bind 0 0
/var/log/ispconfig/httpd/example.com /var/www/clients/client1/web3/log none rw,bind 0 0 I edited those values because the correct paths of mydomain.com and example.com should be under the web2 and web3 folders respectively. As of now the log of example.com is pointed to: /var/www/clients/client1/web2/log when it should be: /var/www/clients/client1/web3/log So I am thinking that this is because of fstab and mtab. Please guide me on how to point the log correctly to its default directory. I explained the scenario one by one at this link . | File /etc/mtab is maintained by the operating system. Don't edit it. File /etc/fstab defines what should be mounted. It is read at system start. When I add an extra disk to a system that should be mounted at system start
I add it to /etc/fstab . To check the correctness of the updated /etc/fstab I use the command mount -a . That reads /etc/fstab just like at system start,
and mounts filesystems that aren't yet mounted. It gives an error when the mountpoint is missing or the device is missing. To answer the question on rebooting: No, there is no need to reboot after editing /etc/fstab .
You can testdrive with mount -a | {
"source": [
"https://serverfault.com/questions/518967",
"https://serverfault.com",
"https://serverfault.com/users/148294/"
]
} |
519,215 | I have often heard it recommended that a user account should be disabled by setting its shell to /bin/false . But, on my existing Linux systems, I see that a great number of existing accounts (all of them service accounts) have a shell of /sbin/nologin instead. I see from the man page that /sbin/nologin prints a message to the user saying the account is disabled, and then exits. Presumably /bin/false would not print anything. I also see that /sbin/nologin is listed in /etc/shells , while /bin/false is not. The man page says that FTP will disable access for users with a shell not listed in /etc/shells and implies that other programs may do the same. Does that mean that somebody could FTP in with an account that has /sbin/nologin as its shell? What is the difference here? Which one of these should I use to disable a user account, and in what circumstances? What other effects does a listing in /etc/shells have? | /bin/false is a utility program, companion to /bin/true , which is useful in some abstract sense to ensure that unix is feature-complete. However, emergent purposes for these programs have been found; consider the BASH statement /some/program || /bin/true , which will always boolean-evaluate to true ( $? = 0 ) no matter the return of /some/program . An emergent use of /bin/false , as you identified, is as a null shell for users not allowed to log in. The system in this case will behave exactly as though the shell failed to run. POSIX (though I may be wrong and it may the the SUS) constrains both these commands to do exactly nothing other than return the appropriate boolean value. /sbin/nologin is a BSD utility which has similar behaviour to /bin/false (returns boolean false), but prints output as well, as /bin/false is prohibited from doing. This is supposed to help the user understand what happened, though in practice many terminal emulators will simply close when the shell terminates, rendering the message all but unreadable anyway in some cases. There is little purpose to listing /sbin/nologin in /etc/shells . The standard effect of /etc/shells is to list the programs permissible for use with chsh when users are changing their own shell (and there is no credible reason to change your own shell to /sbin/nologin ). The superuser can change anyone's shell to anything. However, you may want to list both /sbin/nologin and /bin/false in /etc/rsh , which will prohibit users with these shells from changing their shell using chsh in the unfortunate event that they get a shell. FTP daemons may disallow access to users with a shell not in /etc/shells, or they may use any other logic they wish. Running FTP is to be avoided in any case because sftp (which provides similar functionality) is similar but secure. Some sites use /sbin/nologin to disable shell access while allowing sftp access by putting it in /etc/shells . This may open a backdoor if the user is allowed to create cronjobs. In either case, scp will not operate with an invalid shell. scponly can be used as a shell in this instance. Additionally, the choice of shell affects the operation of su - (AKA su -l ). Particularly, the output of /sbin/nologin will be printed to stdout if it is the shell; this cannot be the case with /bin/false . In either case commands run with su -cl will fail. Finally, the answer: To disable an account, you should do three things. Set the shell to /sbin/nologin Set the password field in /etc/passwd to the appropriate locked value for your UNIX ( ! on Linux, but *LOCKED* on FreeBSD). 
This prevents SSH login with keys unless UsePAM yes is set in the sshd_config . Set the account expiration date to the distant past (e.g., usermod --expiredate 1 ). This step will prevent SSH login with any method if PAM is used to process the login. If it's a service account, it's enough to make sure that it has no SSH authorized keys in its home directory and the first two steps above. If you're worried someone might get an SSH certificate for it or something, you could always list your service accounts and groups in DenyUsers and DenyGroups in sshd_config . | {
"source": [
"https://serverfault.com/questions/519215",
"https://serverfault.com",
"https://serverfault.com/users/126632/"
]
} |
519,429 | Is there a way to forward a range of ports using vagrant 1.2.1 or higher? I know that you can forward any number of ports individually by using config.vm.forward_port 80, 4567 Or, is the answer simply don't use vagrant to do such a thing? | If anyone needs an example of how to do the loop in your Vagrantfile here it is: for i in 64000..65535
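# one forwarded_port entry per port, mapping the host port to the identical guest port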
config.vm.network :forwarded_port, guest: i, host: i
end The above loop will forward all ports between 64000 and 65535 to the exact same port on the guest (note that 64000 and 65535 are inclusive). | {
"source": [
"https://serverfault.com/questions/519429",
"https://serverfault.com",
"https://serverfault.com/users/58560/"
]
} |
519,435 | After freshly installing Ubuntu server 12.04, I did the usual apt-get update / apt-get upgrade , which failed for mysql-server-5.5 : Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.5 (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mysql-server:
mysql-server depends on mysql-server-5.5; however:
Package mysql-server-5.5 is not configured yet.
dpkg: error processing mysql-server (--configure):
dependency problems - leaving unconfigured I tried a wide variety a approaches suggested by googling, which involved various combinations of apt-get remove / purge / install -f / reinstall , etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with mysql, per se (though I will need to figure it out at some time); at this point, my primary concern is that I am unable to apt-get install other packages! What to do? | If anyone needs an example of how to do the loop in your Vagrantfile here it is: for i in 64000..65535
config.vm.network :forwarded_port, guest: i, host: i
end The above loop will forward all ports between 64000 and 65535 to the exact same port on the guest (note that 64000 and 65535 are inclusive). | {
"source": [
"https://serverfault.com/questions/519435",
"https://serverfault.com",
"https://serverfault.com/users/179627/"
]
} |
519,693 | In this compilation of sysadmin horrors , one of the authors writes, as a rule of thumb: Always do a cd . before doing anything. Why would you want to do that? | You don't. At least not just like that. The preceding line in the quoted document is of importance: Set up your prompt to do a pwd everytime you cd. Always do a cd . before doing anything. This way, you as the operator verify your current working dir before doing anything of importance, as it's printed out with each change. cd . doesn't make any sense otherwise. This "verification" is a good thing, and you should adapt a form of it. A more (IMHO) common variation of this theme is to always print out the working dir at the prompt. | {
"source": [
"https://serverfault.com/questions/519693",
"https://serverfault.com",
"https://serverfault.com/users/109728/"
]
} |
519,956 | I manage a server with two-factor authentication. I have to use the Google Authenticator iPhone app to get the 6-digit verification code to enter after entering the normal server password. The setup is described here: http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html I would like a way to get the verification code using just my laptop and not from my iphone. There must be a way to seed a command line app that generates these verification codes and gives you the code for the current 30-second window. Is there a program that can do this? | Yes, oathtool can do this. You'll need to seed it with the shared secret from your server (i.e. save the shared secret and re-use it each time, in this example we'll assume they offered N3V3R G0nn4 G1v3 Y0u Up ). You can install it from the oath-toolkit package. Example usage to generate same code as google authenticator and authy: oathtool -b --totp 'N3V3R G0nn4 G1v3 Y0u Up' | {
"source": [
"https://serverfault.com/questions/519956",
"https://serverfault.com",
"https://serverfault.com/users/68755/"
]
} |
520,195 | It's the following part of a virtual host config that I need further clarification on: <VirtualHost *:80>
# Admin email, Server Name (domain name), and any aliases
ServerAdmin [email protected]
ServerName 141.29.495.999
ServerAlias example.com
... This is and example config, similar to what I currently have (I don't have a domain name at the moment). <VirtualHost *:80> - Allow the following settings for all HTTP requests made on port 80 to IPs that this server can be contacted on. For instance, if the server could be accessed on more than one IP, you could restrict this directive to just one instead of both. ServerName - If the host part of the HTTP request matches this name, then allow the request. Normally this would be a domain name that maps to an IP, but in this case the HTTP request host must match this IP. ServerAlias - Alternate names accepted by the server. The confusing part for me is, in the above scenario, if I set ServerAlias mytestname.com and then made an HTTP request to mytestname.com , there would have to be a DNS record pointing to the server's IP for this to work? In which case, is ServerAlias just basically EXTRA ServerName entries? Say I had a DNS entry such that foobar.com = 141.29.495.999 but then I had ServerName = 141.29.495.999 and ServerAlias was empty, would that mean that although foobar.com gets resolved to the right IP, because there is no reference to accept foobar.com in ServerName or ServerAlias ? Or something. Man I'm confused. | Think of it like this: DNS is the phone directory/yellow pages. When someone wants to call your phone, they can look up your name and get your phone number and call that phone. DNS does the same but for computers - when someone wants to go to www.example.com they ask DNS for the IP address and then they can contact the computer that has that IP address. That is what resolve means. Resolving an IP address has nothing at all to do with Apache; it is strictly a DNS question. The ServerName and ServerAlias is more like a company's internal phone list. Your webserver is the switchboard; it will accept all incoming connections to the server. Then the client/caller will tell them what name they're looking for, and it will look in the Apache configuration for how to handle that name. If the name isn't listed as a ServerName/ServerAlias in the apache configuration, apache will always give them the first VirtualHost listed. Or, if there's no VirtualHost at all, it will give the same content no matter what hostname is given in the request. ETA: So, step by step for a normal connection: You type http://www.example.com into your browser. Your computer asks its DNS resolver which IP address it should use when it wants to talk to www.example.com . Your computer connects to that IP address, and says that it wants to talk to www.example.com (that's the Host: header in HTTP). The webserver looks at its configuration to figure out what to do with a request for content from www.example.com . Any one of the following may happen: www.example.com is listed as a ServerName or ServerAlias for a VirtualHost - if so, then it will use the configuration for that VirtualHost to deliver the content. The server doesn't have any VirtualHosts at all - if so, then it will use the configuration in its httpd.conf to deliver the content. The server has VirtualHosts but www.example.com isn't listed in any of them - if so, the first Virtualhost in the list will be used to deliver the content. | {
"source": [
"https://serverfault.com/questions/520195",
"https://serverfault.com",
"https://serverfault.com/users/180017/"
]
} |
520,244 | In theory browsers do not pass on referer information from HTTPS to HTTP sites. And in my experience this has always been true. But I just found an exception, and I want to understand why it works so I can use it as well. Search for "what is my referer" on https://www.google.ca/ eg: https://www.google.ca/search?q=what+is+my+referer There are a few sites that will show referer. They all seem to "work" when they shouldn't. For example, click the www.whatismyreferer.com one. I get: Your referer:
https://www.google.ca/ Note that sometimes, rarely, I get "no referer" as the result. Go back and click the link again and it'll "work" the next time. This should not happen. www.whatismyreferer.com is a non-HTTPS site. The referer header should not be being passed, but it is. What's going on here, and how can I do the same from my HTTPS site to the HTTP sites I'm linking to? | Looks like it's due to a new <meta> header that Google is using: <meta name="referrer" content="origin"> Specification: https://w3c.github.io/webappsec-referrer-policy/ It's currently only fully supported by a few browsers , so it's not a complete solution, but certainly a start! | {
"source": [
"https://serverfault.com/questions/520244",
"https://serverfault.com",
"https://serverfault.com/users/117613/"
]
} |
520,952 | My 10000+ users network, which spans the whole State and is very complex, has a "strange" addressing scheme. Though our PCs are not directly connected/exposed to the Internet, our network designers assigned IP addresses taking them from a range different the "ordinary" IANA-reserved private IPv4 network ranges (10.0.0.0-10.255.255.255, 172.16.0.0-172.31.255.255, 192.168.0.0-192.168.255.255). Assume that the IP addresses used in our intranet are in the range 20.*.*.* , i.e. addresses that are officially assigned in Internet (and don't belong to us). Can anyone explain the advantages (if any) of this strange choice? | Don't do this if you ever intend to connect the network to the Internet. It's just far too risky. First, you're using blocks of IP address space which belong to someone else. Because of this, you will have difficulty communicating with that other party as your routers may get confused as to whether the traffic should be sent to the other party or your internal network. Along the router confusion line, this is a seriously non-default configuration, and the slightest mistake can result in live traffic with those IP addresses going over the public Internet, or worse, routes being announced to the Internet's default-free zone . Just like when somebody in Pakistan screwed up a router config and caused all of YouTube's traffic to be routed to that country , you could find yourself swamped with the other party's traffic. And many ISPs and peering/transit providers have terms of service which prohibit using others' IP address blocks. If you use other people's IP address blocks, and they leak onto the Internet, you could be nullrouted or depeered or worse. (Interestingly, Apple was one of the first companies to make this mistake; they had to renumber 5000 machines to recover. Their story is mentioned in RFC 1627 .) Since you or your predecessors already did it, your only way forward is to fix the numbering scheme. This is not particularly challenging technically, but it will be very time consuming and require some maintenance windows as well as coordination between the system and network administrators. Hopefully you can finish before something really bad happens. | {
"source": [
"https://serverfault.com/questions/520952",
"https://serverfault.com",
"https://serverfault.com/users/180370/"
]
} |
521,124 | I'm running a production-level Amazon ec2 instance, and I want to close out root privileges to all users. Normally, when one logs in to the instance as ec2-user, the ec2-user immediately gets sudo privileges, which I am trying to do away with in order to ensure security. I was able to set a new password for the root user, and I went into /etc/sudoers to try and remove the ec2-user from sudo privileges, but that user isn't even listed in the file. Does anybody know how I can remove ec2-user from sudo privileges on an Amazon ec2 instance running the default linux installation? | Check /etc/sudoers.d/cloud-init file, ec2-user default user is there, just delete this file. | {
"source": [
"https://serverfault.com/questions/521124",
"https://serverfault.com",
"https://serverfault.com/users/180411/"
]
} |
521,483 | We restrict the running of exe's across the organization. But based on justifications & approvals we add users to (specific) AD groups for 24 hours. Currently the process of removing the users from those AD groups after X hours is manual. I am trying to automate it in some fashion. But I was wondering if there is any native way of handling this within AD 2003. Is writing a script (powershell / vbs) the only way of handling this? | Assuming all your Domain Controllers are Windows Server 2003 or later you can do this with native Active Directory's dynamic objects functionality without any scripting. Let's say that a user account, "Bob", needs to be in the "Accounting" group for 24 hours. Create a "Bob in Accounting 24 Hours" group and specify an entry-TTL for 24 hours (the duration you want the group to remain in the Active Directory) at the time of creation. Add the "Bob in Accounting 24 Hours" as a member of the "Accounting" group Add the "Bob" user account as a member of the "Bob in Accounting 24 Hours" group Upon the "Bob" user account's next logon it will be a member of the "Accounting" group through the nested group membership of the "Bob in Accounting 24 Hours" group into the "Accounting" group. At the end of 24 hours all the domain controllers will garbage-collect the "Bob in Accounting 24 Hours" group and "Bob" will no longer be a member of "Accounting". The trick is that non-dynamic objects cannot be converted to dynamic after their creation. Using group nesting, though, gets you around that limitation in this instance. You'll need to use a tool other than "Active Directory Users and Computers" to create the group because you'll need to set the entry-TTL at the time of the group's creation. The script in this blog entry might be a starting place (it's built to create User objects) or, alternatively, you could just use ldifde or csvde to do the creation, too. | {
"source": [
"https://serverfault.com/questions/521483",
"https://serverfault.com",
"https://serverfault.com/users/66190/"
]
} |
521,782 | I connect to our Microsoft Server 2012 through remote desktop. There are just too many animations (for example Opening and closing windows) etc. How do I turn off all animations? | Control Panel > System > Advanced system settings > Performance > Settings . Select the options that suit you. I normally select the option "Adjust for best performance". | {
"source": [
"https://serverfault.com/questions/521782",
"https://serverfault.com",
"https://serverfault.com/users/11181/"
]
} |
522,377 | I'm working on a Debian server as an inexperienced admin. I need to change the full name of a user (not the login name) provided during adduser USERNAME . How can I do this? I didn't find such an option in usermod ( http://linuxcommand.org/man_pages/usermod8.html ). | The GECOS field in /etc/password can be modified with the chfn(1) command. chfn -f "Joe Blow" jblow | {
"source": [
"https://serverfault.com/questions/522377",
"https://serverfault.com",
"https://serverfault.com/users/181102/"
]
} |
522,396 | I am new to asterisk and before I dive in, I just want to make sure that what I plan to do is possible/correct. My office will run an asterisk server and have both local and remote extensions. We have few people scattered around the US and want something scalable if that number increases. I have installed asterisk as a VM on VMware ESXi 5 but have not done any config. If I understand this correctly, I can get SIP Trunking service (the particular one I was looking at provides 1 DID and 5 ports) and have asterisk use that as the POTS gateway for outgoing calls. This will allow any extension to pick up the next free outgoing line if they want to make a call (right?). Is that a function of the SIP trunk provider or Asterisk? For incoming, we are already using twilio, so I was planning on keeping that since they now have SIP routing. So I assume I can use their call tree and route to my asterisk extensions. Can I duplicate twilio functionality in asterisk?
Thanks! | The GECOS field in /etc/password can be modified with the chfn(1) command. chfn -f "Joe Blow" jblow | {
"source": [
"https://serverfault.com/questions/522396",
"https://serverfault.com",
"https://serverfault.com/users/181112/"
]
} |
522,650 | I'm getting started with puppet on centos and was confused about a few things. First off a man page exists for puppet-master but not for puppetmaster even though the daemon in /etc/init.d is puppetmaster . Running the command $ puppet-master --version returns bash: puppet-master: command not found. How do I tell what version I am running for both the master and the client? | Newer versions of puppet use a slightly different command line. The command you are looking for would be puppet --version , puppet master --version , and puppet agent --version . For versions before 4.0, if puppet was installed as an RPM package you can query the RPM database like rpm -qa | grep puppet . For Debian/Ubuntu/Mint fans, the package query is dpkg -l | grep puppet . Puppetlabs has changed their packaging, and the packaged puppet version is not indicated by the version number of the puppet-agent package. | {
"source": [
"https://serverfault.com/questions/522650",
"https://serverfault.com",
"https://serverfault.com/users/85577/"
]
} |
523,388 | I tried to understand the networking tools on Linux. I am confused now about what I should use to manipulate the static routing: route or ip route ?: route - show / manipulate the IP routing table ip - show / manipulate routing, devices, policy routing and tunnels What is the difference between these two tools? | The iproute2 suite is set to replace the net-tools suite of network configuration tools. There are "synonym" commands that perform similar function in each. While most documentation will refer you to the route command, you'll be ahead of the game to learn ip route since distributions should stop including net-tools at some point. Deprecated Linux networking commands and their replacements | {
"source": [
"https://serverfault.com/questions/523388",
"https://serverfault.com",
"https://serverfault.com/users/137784/"
]
} |
523,804 | In Thunderbird (and I assume in many other clients, too) I have the option to choose between "SSL/TLS" and "STARTTLS". As far as I understand it, "STARTTLS" means in simple words "encrypt if both ends support TLS, otherwise don't encrypt the transfer" . And "SSL/TLS" means in simple words "always encrypt or don't connect at all" . Is this correct? Or in other words: Is STARTTLS less secure than SSL/TLS, because it can fallback to plaintext without notifying me? | The answer, based on the STARTTLS RFC for SMTP ( RFC 3207 ) is: STARTTLS is less secure than TLS. Instead of doing the talking myself, I will allow the RFC to speak for itself, with the four relevant bits highlighted in BOLD : A man-in-the-middle attack can be launched by deleting the "250
STARTTLS" response from the server. This would cause the client not
to try to start a TLS session. Another man-in-the-middle attack is
to allow the server to announce its STARTTLS capability, but to alter
the client's request to start TLS and the server's response. In
order to defend against such attacks both clients and servers MUST be
able to be configured to require successful TLS negotiation of an
appropriate cipher suite for selected hosts before messages can be
successfully transferred. The additional option of using TLS when
possible SHOULD also be provided. An implementation MAY provide the
ability to record that TLS was used in communicating with a given
peer and generating a warning if it is not used in a later session. If the TLS negotiation fails or if the client receives a 454
response, the client has to decide what to do next. There are three
main choices: go ahead with the rest of the SMTP session , [...] As you can see, the RFC itself states (not very clearly, but clearly enough) that there is NOTHING requiring clients to establish a secure connection and inform users if a secure connection failed. It explicitly gives clients the option to silently establish plain-text connections. | {
"source": [
"https://serverfault.com/questions/523804",
"https://serverfault.com",
"https://serverfault.com/users/181851/"
]
} |
523,808 | I want my webserver to speak to the MySQL database server over an SSL connection. The Webserver runs CentOS5, the Database Server runs FreeBSD. The certificates are provided by a intermediate CA DigiCert. MySQL should be using ssl, according to my.cnf : # The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
ssl
ssl-capath = /opt/mysql/pki/CA
ssl-cert = /opt/mysql/pki/server-cert.pem
ssl-key = /opt/mysql/pki/server-key.pem When I start MySQL, the daemon starts without errors. This suggests that the certificate files are all readable. But when I try to connect from the webserver to the database server, I get an error: [root@webserver ~]# mysql -h mysql.example.org -u user -p
ERROR 2026 (HY000): SSL connection error And if I try to debug further with openssl: [root@webserver ~]# openssl s_client -connect mysql.example.org:3306 0>/dev/null
CONNECTED(00000003)
15706:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:588: Is this a valid way to test the SSL connection to a MySQL database server? The SSL23_GET_SERVER_HELLO:unknown protocol message is strange since this typically what you would see if you were speaking SSL on a port intended for non-SSL traffic. This same openssl command seems to work fine with LDAP & HTTP servers: $ openssl s_client -connect ldap.example.org:636 0>/dev/null
CONNECTED(00000003)
depth=2 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
...
$ openssl s_client -connect www.example.org:443 0>/dev/null
CONNECTED(00000003)
depth=0 /DC=org/DC=example/OU=Services/CN=www.example.org | OpenSSL version 1.1.1 (released on 11 Sep 2018) added support for -starttls mysql in commit a2d9cfbac5d87b03496d62079aef01c601193b58 . Unfortunately I cannot find the reference to this new feature in the OpenSSL changelog. I generated the SSL certificates as described in https://dev.mysql.com/doc/refman/5.7/en/creating-ssl-files-using-openssl.html , tried, and it works: $ echo | bin/openssl.Linux.x86_64.static s_client -starttls mysql -connect spx-bionic.censored.com:3306 -CAfile /tmp/ca.pem
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = mysql test CA
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = spx-bionic.censored.com
verify return:1
---
Certificate chain
0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=spx-bionic.censored.com
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=mysql test CA
1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=mysql test CA
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=mysql test CA
---
Server certificate
-----BEGIN CERTIFICATE-----
CENSORED
-----END CERTIFICATE-----
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=spx-bionic.censored.com
issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=mysql test CA
---
No client certificate CA names sent
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-521, 521 bits
---
SSL handshake has read 2599 bytes and written 632 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AD25B7C3018E4715F262188D982AAE141A232712316E0A3292B0C14178E0F505
Session-ID-ctx:
Master-Key: C121967E8FAEC4D0E0157419000660434D415251B0281CCBFC6D7A2AE8B0CC63AEFE22B332E91D31424C1BF03E5AF319
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 7200 (seconds)
TLS session ticket:
0000 - 82 db 03 0f c0 ce f2 26-62 bd 1b 18 71 03 88 db .......&b...q...
0010 - a6 66 7c 71 94 0c d5 ec-96 30 46 53 4a e6 cd 76 .f|q.....0FSJ..v
0020 - 66 b3 22 86 7d 9f 7e 2c-14 1d 66 f2 46 8f d2 d3 f.".}.~,..f.F...
0030 - f7 0a 0b f5 9e 05 97 e1-2b b3 ba 79 78 16 b8 59 ........+..yx..Y
0040 - dc c5 0d a8 de 0b 3a df-4b ec f9 73 3f 4c c3 f1 ......:.K..s?L..
0050 - 86 b6 f7 aa a7 92 84 77-9f 09 b2 cc 5d dd 35 41 .......w....].5A
0060 - 23 5d 77 74 e1 96 91 ac-28 81 aa 83 fe fc d2 3c #]wt....(......<
0070 - f9 23 09 6d 00 e0 da ef-48 69 92 48 54 61 69 e8 .#.m....Hi.HTai.
0080 - 30 0e 1f 49 7d 08 63 9e-91 70 fc 00 9f cd fe 51 0..I}.c..p.....Q
0090 - 66 33 61 24 42 8f c2 16-57 54 48 ec 6a 87 dc 50 f3a$B...WTH.j..P
Start Time: 1537350458
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
DONE There is also -starttls support for postgres and ldap too in OpenSSL 1.1.1. See https://github.com/openssl/openssl/blob/OpenSSL_1_1_1-stable/apps/s_client.c#L815-L831 for the full list. | {
"source": [
"https://serverfault.com/questions/523808",
"https://serverfault.com",
"https://serverfault.com/users/36178/"
]
} |
524,813 | I have an Nginx with a number of enabled server blocks. Each server answers to 1 canonical domain and may forward 1 or more to that canonical URL. I have at least one server (haven't checked all of them yet) where, if I type in a non-existent domain that points to this box, Nginx displays a site of its choosing (always the same site, but not one that I'm after). I've poked around the config file for the site I always land on, but don't see anything obvious that would identify it as any kind of default site and yet there it is, always showing up when I fat finger a URL. Any thoughts on what I should be looking for to track this down? | Add default_server to your listen directive in the server that you want to act as the default. | {
"source": [
"https://serverfault.com/questions/524813",
"https://serverfault.com",
"https://serverfault.com/users/39879/"
]
} |
524,966 | I am getting pages loading with a 500 internal server error, due I believe to a directive that Apache has not been configured to allow. I have AllowOverride set to all, and a .htaccess file, including: <FilesMatch "\.(eot|ico|pdf|flv|jpg|jpeg|png|gif|svg|swf|ttf|woff)$">
Header set Cache-Control "max-age=31536000, public"
Header set Expires "Wed, 23 Apr 2014 17:00:01 UTC"
</FilesMatch> /var/log/apache2/error.log has: [Sat Jul 20 15:12:36 2013] [alert] [client 24.15.83.241] /home/jonathan/.htaccess: Invalid command 'Header', perhaps misspelled or defined by a module not included in the server configuration What do I need to specify so that Apache2 will properly handle the 'Header' directive? | With apache2, just run a2enmod headers and then sudo service apache2 restart and it will install the headers module automatically. | {
"source": [
"https://serverfault.com/questions/524966",
"https://serverfault.com",
"https://serverfault.com/users/53361/"
]
} |
526,205 | I am using my domain example.org in my firm. I can use www.example.org to view my website. If I try http://example.org from outside my firm there is no problem, but if I try it from inside, my Windows DNS servers deliver the IPs of domain controllers. How can I solve this? Can I prevent my DCs from registering as example.org in my DNS, and will this be a problem for my environment? | If you've named your Active Directory example.org then you cannot prevent this. You've gone against Microsoft's best practices for naming an AD and you're seeing one of the symptoms. You have a few choices: (1) Migrate to a properly named AD, something like corp.example.org . (2) Install a web server on each DC and configure it to forward web requests for example.org to www.example.org . This is dirty and shouldn't be done, but it's an option nonetheless. (3) Train your users to go to www.example.org internally. I've blogged about AD naming best practices multiple times and link to official Microsoft sources. You should read them: http://web.archive.org/web/20200214122247/http://www.mdmarra.com/2013/04/best-practices-for-configuring-new.html http://web.archive.org/web/20191201074255/www.mdmarra.com/2012/11/why-you-shouldnt-use-local-in-your.html http://web.archive.org/web/20200122002118/www.mdmarra.com/2013/07/more-documentation-from-microsoft-about.html If you want the short version: Do not create new Active Directory forests with the same name as an
external DNS name. For example, if your Internet DNS URL is http://contoso.com , you must choose a different name for your internal
forest to avoid future compatibility issues. That name should be
unique and unlikely for web traffic. For example: corp.contoso.com. -http://technet.microsoft.com/en-us/library/jj574166.aspx | {
"source": [
"https://serverfault.com/questions/526205",
"https://serverfault.com",
"https://serverfault.com/users/146994/"
]
} |
526,278 | This question was originally asked here: Why do DNS zone files require NS records? To summarise:
"When I go to my registrar and purchase example.com , I will tell my registrar that my nameservers are ns1.example.org and ns2.example.org". But please can somebody clarify the following: After registration, the .com registry will now have a record that tells a resolver needs to visit ns1.example.org or ns2.example.org in order to find out the IP address of example.com. The IP address resides in an A record in a zone file on ns1.example.org and has a identical copy on ns2.example.org. However, inside this file, there must also be 2 NS records which list ns1.example.org and ns2.example.org as the nameservers. But since we are already on one of these servers, this appears do be duplicated information. The answer originally given to the question said the nameservers listed in the zone file are "authoritative". If the nameservers didn't match, then the authoritative nameservers would take precedence. That's all very well and good, but the resolver arrived at the nameserver using the nameservers listed in the .com registry , and if the nameserver's didn't match, then the resolver would be looking for the zone file on the wrong nameserver and wouldn't be able to find it. Or is it a case of the .com registry extracts nameserver information from the zone file ns record? (But then I suppose if you change the ns record the zone file without telling the registry, then it would have no way of knowing where to look.) Thanks | Let's break it down a little. The NS records in the TLD zone (for example, example.com NS ... in com ) are delegation records. The A and AAAA records in the TLD zone (for example, ns1.example.com A ... in com ) are glue records. The NS records in the zone itself (that is, example.com NS ... in example.com ) are authority records. The A and AAAA records in the zone itself ( ns1.example.com A ... in example.com ) are address records, plain and simple. When a (recursive) resolver starts out with no cache of your zone's data and only the root zone cache (which is used to bootstrap the name resolution process), it will first go to . , then com. . The com servers will respond with an authority section response which basically says "I don't know, but look here for someone who does know", same as the servers for . do about com . This query response is not authoritative and does not include a populated answer section. It may also include a so-called additional section which gives the address mappings for any host names the particular server knows about (either from glue records or, in the case of recursive resolvers, from previously cached data). The resolver will take this delegation response, resolve the host name of a NS record if necessary, and proceed to query the DNS server to which authority has been delegated. This process may repeat a number of times if you have a deep delegation hierarchy, but eventually results in a query response with the "authoritative answer" flag set . It's important to note that the resolver (generally, hopefully) won't try to break down the host name being resolved to ask about it piece by piece, but will simply send it in its entirety to the "best" server it knows about. Since the average authoritative name server on the Internet is non-authoritative for the vast majority of valid DNS names, the response will be a non-authoritative delegation response pointing toward some other DNS server. Now, a server doesn't have to be named in the delegation or authority records anywhere to be authoritative for a zone. 
Consider for example the case of a private master server; in that case there exists an authoritative DNS server that only the administrator(s) of the slave DNS servers for the zone are aware of. A DNS server is authoritative for a zone if, through some mechanism, in its opinion it has full and accurate knowledge of the zone in question. A normally authoritative DNS server can, for example, become non-authoritative if the configured master server(s) cannot be reached within the time limit defined as the expiry time in the SOA record. Only authoritative answers should be considered proper query responses; everything else is either a delegation, or an error of some kind. A delegation to a non-authoritative server is called a "lame" delegation, and means the resolver has to backtrack one step and try some other named DNS server. If no authoritative reachable name servers exist in the delegation, then name resolution fails (otherwise, it'll just be slower than normal). This is all important because non-authoritative data mustn't be cached . How could it be, since the non-authoritative server doesn't have the full picture? So the authoritative server must, on its own accord, be able to answer the question "who is supposed to be authoritative, and for what?". That's the information provided by the in-zone NS records. There's a number of edge cases where this can actually make a serious difference, primarily centered around multiple host name labels inside a single zone (probably fairly common e.g. with reverse DNS zones particularly for large dynamic IP ranges) or when the name servers list differs between the parent zone and the zone in question (which most likely is an error, but can be done intentionally as well). You can see how this works in a little more detail using dig and its +norec (don't request recursion) and @ server specifier features. What follows is an illustration of how an actual resolving DNS server works. Query for the A record(s) for unix.stackexchange.com starting at e.g. a.root-servers.net : $ dig unix.stackexchange.com. A @a.root-servers.net. +norec Look closely at the flags as well as the per-section counts. qr is Query Response and aa is Authoritative Answer. Notice that you only get delegated to the com servers. Manually follow that delegation (in real life a recursive resolver would use the IP address from the additional section if provided, or initiate a separate name resolution of one of the named name servers if no IPs are provided in the delegation response, but we'll skip that part and just fall back to the operating system's normal resolver for brevity of the example): $ dig unix.stackexchange.com. A @a.gtld-servers.net. +norec Now you see that stackexchange.com is delegated to (among others) ns1.serverfault.com , and you are still not getting an authoritative answer. Again follow the delegation: $ dig unix.stackexchange.com. A @ns1.serverfault.com. +norec
...
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35713
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 3
;; QUESTION SECTION:
;unix.stackexchange.com. IN A
;; ANSWER SECTION:
unix.stackexchange.com. 300 IN A 198.252.206.16 Bingo! We got an answer, because the aa flag is set, and it happens to contain an IP address just as we'd hoped to find. As an aside, it's worth noting that at least at the time of my writing this post, the delegated-to and the listed-authority name servers lists differ, showing that the two do not need to be identical. What I have exemplified above is basically the work done by any resolver, except any practical resolver will also cache responses along the way so it doesn't have to hit the root servers every time. As you can see from the above example, the delegation and glue records serve a purpose distinct from the authority and address records in the zone itself. A caching, resolving name server will also generally do some sanity checks on the returned data to protect against cache poisoning. For example, it may refuse to cache an answer naming the authoritative servers for com from a source other than one that has already been named by a parent zone as deleged-to for com . The details are server-dependent but the intent is to cache as much as possible while not opening up the barn door of allowing any random name server on the Internet to override delegation records for anything not officially under its "jurisdiction". | {
"source": [
"https://serverfault.com/questions/526278",
"https://serverfault.com",
"https://serverfault.com/users/182765/"
]
} |
526,399 | I was updating the authorized_keys file on my server with the public key for the new laptop I got and I was surprised to discover that the two public keys began the same: # key 1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ....
#
# key 2
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ.... What's the story on AAAAB3... etc? With some searching online, I see that others keys start the same, too. Does it explain the algorithm or version or something? | This is actually a header that defines what kind of key this is. If you check out the Public Key Algorithm section of RFC 4253 we can see that for RSA keys The "ssh-rsa" key format has the following specific encoding: string "ssh-rsa"
mpint e
mpint n Here the 'e' and 'n' parameters form the signature key blob. In fact, if you Base64 decode the string "B3NzaC1yc2E" you will see it translates into ASCII as "ssh-rsa". Presumably the "AAAA" represents some kind of header so the application can know where exactly in the data stream to start processing the key. | {
"source": [
"https://serverfault.com/questions/526399",
"https://serverfault.com",
"https://serverfault.com/users/183118/"
]
} |
526,538 | TL;DR Is there a way via script, powershell, reg delete, via telekinesis, whatever to reset Outlook 2013 as if no profiles ever existed and it was running for the first time ever? Still working through this one but hoping others have insight. SCENARIO Lots of users here have existing Outlook profiles connecting to an on-premise Exchange server. We are in the middle of our migration to Office 365. In order to migrate the user's Outlook you have to either create a new profile in Outlook or delete the old profile completely and then "start fresh". We want our users to start fresh and have the default profile name of "Outlook" for their mail profile (instead of something custom or a 2nd profile like "O365") . This is because our ERP system looks for this profile to send email while in the ERP software. PROBLEM The problem is "starting fresh" isn't really starting fresh. If I manually remove the default profile "Outlook" from the Mail control panel settings, then Outlook starts up without a profile but prompts for a profile name: If I type Outlook as the new Profile name now I get: If I go into REGEDIT and look in: HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles I still see "Outlook" as a profile. I tried doing a Reg DELETE of this key and all sub-keys and while it says "successfully deleted" it doesn't. If I manually delete this profile key I can then start Outlook again and when it prompts for a new Profile name I can put in Outlook and it will take it and let me continue as if it is a new setup of Outlook: It doesn't appear from the command line switches for Outlook 2013 ( found here ) that the /cleanprofile is still around. BOTTOM LINE QUESTION Is there a way via script, powershell, reg delete, via telekinesis, whatever to reset Outlook 2013 as if no profiles ever existed and it was running for the first time ever? | % reg.exe delete HKCU\Software\Microsoft\Office\15.0\Outlook\Profiles\Outlook /f
% reg.exe add HKCU\Software\Microsoft\Office\15.0\Outlook\Profiles\Outlook This will delete the default profile called Outlook, and then recreate it with no settings. Then when you re-run Outlook, it will launch the wizard. | {
"source": [
"https://serverfault.com/questions/526538",
"https://serverfault.com",
"https://serverfault.com/users/7861/"
]
} |
526,726 | I use rsync to backup a directory which is very big, containing many sub-directories and files, so I don't want to see the "incremental file list". I just want to know the summary in the end. If I use the argument -q , nothing is output at all. Can I make rsync output only the summary? | Thanks to a tip by Wayne Davison , I use the --stats option for backup: rsync -a --stats src/ dest/ Nice little summary at the end, e.g. Number of files: 6765
Number of files transferred: 0
Total file size: 709674 bytes
Total transferred file size: 0 bytes
(10 more lines) | {
"source": [
"https://serverfault.com/questions/526726",
"https://serverfault.com",
"https://serverfault.com/users/183286/"
]
} |
526,946 | I've seen documentation for the Full and Linked virtual machine clones, but I can't seem to fully understand when I should use one over another. I see that the full clone creates a copy of the VM, but what about the linked one -- does it mean whatever I do in one VM will be reflected in another? That doesn't make sense to me... Can someone give a couple of specific examples when they'd use full vs. linked VM clones? | The linked clone works similarly to the snapshot technique. Snapshots work in a way that saves disk use: When you create a snapshot and make changes after this (as you work with the system inside the VM), VMware stores only the changed parts of the disk (sectors). Linked VM clones work similarly: If you have a 10 GB disk and create a linked clone, you do not need an additional 10 GB of space - just much less. As you work with the original or cloned VM, only the changes or differences are stored. It is important to note that the cloned VM still depends on the original disk image. But: You can work with both VMs independently - the difference is only internal. You save disk space when using a linked clone - but the performance might also be less in certain cases. If you have lots of very similar VMs to run, it might make sense to use linked clones, as you can save a lot of disk space. If you just clone a VM and then start installing totally different software or data, a copied clone might be better. | {
"source": [
"https://serverfault.com/questions/526946",
"https://serverfault.com",
"https://serverfault.com/users/183407/"
]
} |
526,957 | We use an LDAP-server, but we also use local accounts. [jdoe@tst-03 ~]$ passwd
Changing password for user jdoe.
Changing password for jdoe.
(current) UNIX password:
Enter login(LDAP) password: This is an account that exists locally on the server. Unfortunately when the user tries to change his password, passwd asks him for the LDAP password. How can I allow this user to change his password locally ('/etc/shadow')? It should not be asking for the LDAP password. | The linked clone works similar to the snapshot technique. Snapshots work in a way that saves disk use: When you create a snapshot and make changes after this (as you work with the system inside the VM), VMware stores only the changed parts of the disk (sectors). Linked VM clones work similar: If you have a 10 GB disk and create a linked clone, you do not need additional 10 GB of space - just much less. As you work with the original or cloned VM, only the changes or differences are stores. It is important to note, that the cloned VM still depends on the original disk image. But: You can work with both VMs independently - the differences is only internal. You save disk space when using a linked clone - but the performance might also be less in certain cases. If you have lots of very similar VMs to run, it might make sense to use linked clones, as you can save a lot of disk space. If you just clone a VM and then start installing totally different software or data, a copied clone might be better. | {
"source": [
"https://serverfault.com/questions/526957",
"https://serverfault.com",
"https://serverfault.com/users/81502/"
]
} |
526,985 | Is there a way to auto-fill a field or set of fields for all AD accounts in one go? For example if I want to set address field for all employees or in a specific OU? Or perhaps if I need to set their email field to [email protected] | The linked clone works similar to the snapshot technique. Snapshots work in a way that saves disk use: When you create a snapshot and make changes after this (as you work with the system inside the VM), VMware stores only the changed parts of the disk (sectors). Linked VM clones work similar: If you have a 10 GB disk and create a linked clone, you do not need additional 10 GB of space - just much less. As you work with the original or cloned VM, only the changes or differences are stores. It is important to note, that the cloned VM still depends on the original disk image. But: You can work with both VMs independently - the differences is only internal. You save disk space when using a linked clone - but the performance might also be less in certain cases. If you have lots of very similar VMs to run, it might make sense to use linked clones, as you can save a lot of disk space. If you just clone a VM and then start installing totally different software or data, a copied clone might be better. | {
"source": [
"https://serverfault.com/questions/526985",
"https://serverfault.com",
"https://serverfault.com/users/93816/"
]
} |
526,987 | Recently I got a error message while executing a PHP script from the command line saying: PHP Fatal error: Allowed memory size of 33554432 bytes exhausted
(tried to allocate 40961 bytes) php -i | grep memory shows: memory_limit => -1 => -1
suhosin.memory_limit => 0 => 0 Which should set no limit at all? The server is running PHP 5.3.3-7+squeeze16 with Suhosin-Patch (v0.9.32.1) (cli) How can this be? | The linked clone works similar to the snapshot technique. Snapshots work in a way that saves disk use: When you create a snapshot and make changes after this (as you work with the system inside the VM), VMware stores only the changed parts of the disk (sectors). Linked VM clones work similar: If you have a 10 GB disk and create a linked clone, you do not need additional 10 GB of space - just much less. As you work with the original or cloned VM, only the changes or differences are stores. It is important to note, that the cloned VM still depends on the original disk image. But: You can work with both VMs independently - the differences is only internal. You save disk space when using a linked clone - but the performance might also be less in certain cases. If you have lots of very similar VMs to run, it might make sense to use linked clones, as you can save a lot of disk space. If you just clone a VM and then start installing totally different software or data, a copied clone might be better. | {
"source": [
"https://serverfault.com/questions/526987",
"https://serverfault.com",
"https://serverfault.com/users/33572/"
]
} |
527,156 | If I already have a bunch of virtualhosts, how can I create a virtual host to handle requests that don't match any of the virtualhosts? (i.e. access by IP, another domain linking to IP, .etc .etc) | server_name _; and default_server on the listen configuration are what you are looking for. Example: server {
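# requests whose Host header matches no other server block are handled here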
listen 80 default_server;
server_name _;
root /var/www/default; (or wherever)
} | {
"source": [
"https://serverfault.com/questions/527156",
"https://serverfault.com",
"https://serverfault.com/users/183556/"
]
} |
527,422 | I'm upgrading from MySQL 5.1 to 5.5, running mysql_upgrade and getting this output: # mysql_upgrade
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed Any ideas on where to look for what's happening (or, not happening?) so I can fix whatever is wrong and actually run mysql_upgrade ? Thanks! More output: # mysql_upgrade --verbose
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed
# mysql_upgrade --debug-check --debug-info
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed
# mysql_upgrade --debug-info
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed
User time 0.00, System time 0.00
Maximum resident set size 1260, Integral resident set size 0
Non-physical pagefaults 447, Physical pagefaults 0, Swaps 0
Blocks in 0 out 16, Messages in 0 out 0, Signals 0
Voluntary context switches 9, Involuntary context switches 5
# mysql_upgrade --debug-check
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed After shutting down mysqld --skip-grant-tables via mysqladmin shutdown and restarting mysql via service mysql start , the error log loops through this set of errors over and over: 130730 21:03:27 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist
130730 21:03:27 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
130730 21:03:27 InnoDB: The InnoDB memory heap is disabled
130730 21:03:27 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130730 21:03:27 InnoDB: Compressed tables use zlib 1.2.3.4
130730 21:03:27 InnoDB: Initializing buffer pool, size = 20.0G
130730 21:03:29 InnoDB: Completed initialization of buffer pool
130730 21:03:30 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 588190222435
130730 21:03:30 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 588192055067
130730 21:03:30 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: Last MySQL binlog file position 0 81298895, file name /var/log/mysql/mysql-bin.006008
130730 21:03:33 InnoDB: Waiting for the background threads to start
130730 21:03:34 InnoDB: 5.5.32 started; log sequence number 588192055067
130730 21:03:34 [Note] Recovering after a crash using /var/log/mysql/mysql-bin
130730 21:03:34 [Note] Starting crash recovery...
130730 21:03:34 [Note] Crash recovery finished.
130730 21:03:34 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130730 21:03:34 [Note] - '0.0.0.0' resolves to '0.0.0.0';
130730 21:03:34 [Note] Server socket created on IP: '0.0.0.0'.
130730 21:03:34 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist MySQL log during start up via mysqld_safe --skip-grant-tables 130730 21:19:36 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130730 21:19:36 [Note] Plugin 'FEDERATED' is disabled.
130730 21:19:36 InnoDB: The InnoDB memory heap is disabled
130730 21:19:36 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130730 21:19:36 InnoDB: Compressed tables use zlib 1.2.3.4
130730 21:19:37 InnoDB: Initializing buffer pool, size = 20.0G
130730 21:19:39 InnoDB: Completed initialization of buffer pool
130730 21:19:39 InnoDB: highest supported file format is Barracuda.
130730 21:19:42 InnoDB: Warning: allocated tablespace 566, old maximum was 0
130730 21:19:42 InnoDB: Waiting for the background threads to start
130730 21:19:43 InnoDB: 5.5.32 started; log sequence number 588192055067
130730 21:19:43 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130730 21:19:43 [Note] - '0.0.0.0' resolves to '0.0.0.0';
130730 21:19:43 [Note] Server socket created on IP: '0.0.0.0'.
130730 21:19:43 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without them
130730 21:19:43 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
130730 21:19:43 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure
130730 21:19:43 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.32-0ubuntu0.12.04.1-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
As I understand it, all the table structure/existence issues (as they relate to the mysql system tables) should be corrected by running mysql_upgrade : | I think that it needs username and password: mysql_upgrade -u root -p If I don't pass them, I get your error. Edit : thanks to the comments I now know that there are other reasons, maybe less frequent, but it's best to be aware of them too. So you get that error when: you didn't pass username and password; you passed your credentials, but they were wrong; the MySQL server isn't running; the permission tables are ruined (then you must restart MySQL with mysqld --skip-grant-tables ); or the table mysql.plugin is missing (you'll see an error about that when starting MySQL which suggests to run... mysql_upgrade, and that fails; you probably have some obsolete configuration in my.cnf).
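As a rough illustration of the recovery sequence described above (a sketch only; the service name and the use of --force are assumptions that may not match every setup):
# make sure mysqld is actually running and reachable, then run the upgrade with credentials
service mysql start
mysql_upgrade -u root -p
# if the privilege tables themselves are damaged, temporarily start without grant checks first
mysqld_safe --skip-grant-tables &
mysql_upgrade -u root -p --force
service mysql restart
Once mysql_upgrade completes, restart mysqld normally and the mysql.plugin / mysql.host errors should go away. | {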
"source": [
"https://serverfault.com/questions/527422",
"https://serverfault.com",
"https://serverfault.com/users/45257/"
]
} |
527,630 | I have some experience using Linux but none using nginx. I have been tasked with researching load-balancing options for an application server. I have used apt-get to install nginx and all seems fine. I have a couple of questions. What is the difference between the sites-available folder and the conf.d folder? Both of those folders were INCLUDED in the default configuration setup for nginx. Tutorials use both. What are they for and what is the best practice? What is the sites-enabled folder used for? How do I use it? The default configuration references a www-data user. Do I have to create that user? How do I give that user optimal permissions for running nginx? | The sites-* folders are managed by nginx_ensite and nginx_dissite . For Apache httpd users who find this with a search, the equivalent is a2ensite / a2dissite . The sites-available folder is for storing all of your vhost configurations, whether or not they're currently enabled. The sites-enabled folder contains symlinks to files in the sites-available folder. This allows you to selectively disable vhosts by removing the symlink. conf.d does the job, but you have to move something out of the folder, delete it, or make changes to it when you need to disable something. The sites-* folder abstraction makes things a little more organized and allows you to manage them with separate support scripts. (unless you're like me, and one of many debian admins who just managed the symlinks directly, not knowing about the scripts...)
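A minimal sketch of the manual symlink workflow mentioned above (paths are the Debian/Ubuntu defaults; the vhost file name example.com is an assumption):
# create the vhost in sites-available, then enable it with a symlink
sudo nano /etc/nginx/sites-available/example.com
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
sudo nginx -t && sudo service nginx reload
# disable it again by removing only the symlink
sudo rm /etc/nginx/sites-enabled/example.com
sudo service nginx reload
The file in sites-available stays in place either way, which is the point of the abstraction. | {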
"source": [
"https://serverfault.com/questions/527630",
"https://serverfault.com",
"https://serverfault.com/users/11820/"
]
} |
528,075 | Today I tried this on my machine with OpenSUSE 12.3 (kernel 3.7): # resize2fs /dev/mapper/system-srv 2G
resize2fs 1.42.6 (21-Sep-2012)
Filesystem at /dev/mapper/system-srv is mounted on /srv; on-line resizing required
resize2fs: On-line shrinking not supported
/dev/mapper/system-srv is an EXT4 volume. Is it really unsupported, or am I missing something? | As the message said, you can only grow an ext4 filesystem on-line. If you want to shrink it, you will need to unmount it first. According to the ext4 filesystem maintainer , Ted Ts'o: Sorry, on-line shrinking is not supported.
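For reference, an offline shrink would look roughly like this (a sketch; take a backup first, the device and mount point are the ones from the question, and /srv is assumed to be in fstab):
umount /srv
e2fsck -f /dev/mapper/system-srv      # resize2fs insists on a freshly checked filesystem
resize2fs /dev/mapper/system-srv 2G   # shrinking is only supported while unmounted
mount /srv
If the underlying logical volume should shrink as well, run lvreduce only after resize2fs, and never reduce it below the new filesystem size. | {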
"source": [
"https://serverfault.com/questions/528075",
"https://serverfault.com",
"https://serverfault.com/users/164968/"
]
} |
528,254 | VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Unfortunately, many vendor requirements and people unfamiliar with virtualization request more resources than necessary... I'm interested in quantifying the impact of this decision. Some examples from a "problem" cluster. Resource pool summary - Looks almost 4:1 overcommitted. Note the high amount of ballooned RAM. Resource allocation - The Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions. The real-time memory utilization graph of the top VM in the listing above. 4 vCPU and 64GB RAM allocated. It averages under 9GB use. Summary of the same VM What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they actually need? What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?? "? E.g. do customers need to be educated that VMs are not the same as physical hardware? What specific metric(s) should be used to meter RAM usage. Tracking the peaks of "Active" versus time? Watching "Consumed"? Update: I used vCenter Operations Manager to profile this environment and get some detail on the cluster stats listed above. While things are definitely overcommitted, the VMs are actually so overconfigured with unnecessary RAM that the real (tiny) memory footprint shows no memory contention at the cluster/host level... My takeaway is that VMs should really be right-sized with a little bit of buffer for OS-level caching. Overcommitting out of ignorance or vendor "requirements" leads to the situation presented here. Memory ballooning seems to be bad in every case, as there is a performance impact, so right-sizing can help prevent this. Update 2: Some of these VMs are beginning to crash with: kernel:BUG: soft lockup - CPU#1 stuck for 71s! VMware describes this as a symptom of heavy memory overcommitment . So I guess that answers the question. vCops "Oversized Virtual Machines" report... vCops "Reclaimable Waste" graph... | vSphere's memory management is pretty decent, though the terms used often cause a lot of confusion. In general, memory over-commit should be avoided as it creates exactly this type of problem. However, there are times when it cannot be avoided, so forewarned is forearmed! What are the downsides of overcommitting and over-configuring resources
(specifically RAM) in vSphere environments? The major downside of over-committing resources is that should you have contention, your hosts would be forced to balloon, swap or intelligently schedule/de-duplicate behind the scenes in order to give each VM the RAM it needs. For ballooning, vSphere will inflate a "balloon" of RAM within a chosen VM, then give that ballooned RAM to the guest that needs it. This isn't really "bad" - VMs are stealing each other's RAM, so there's no disk swapping going on - but it could lead to mis-fired alerting and skewed metrics if these rely on analysing the VM's RAM usage, as the RAM won't be marked as "ballooned", just that it's "in use" by the OS. The other feature that vSphere can use is Transparent Page Sharing (TPS) - which is essentially RAM de-duplication. vSphere will periodically scan all allocated RAM, looking for duplicated pages. When found, it will de-duplicate and free up the duplicated pages. Take a look at vSphere's Memory Management whitepaper (PDF) - specifically "Memory Reclamation in ESXi" (page 8) - if you need a more in-depth explanation. Assuming that the VMs can run in less RAM, is it fair to say that
there's overhead to configuring virtual machines with more RAM than
they need? There's no visible overhead - you can allocate 100GB of RAM on a host with 16 GB (however, that doesn't mean you should , for the reasons above). Total memory in use by all of your VMs is the "Active" curve shown in your graphs. Of course, you should never rely on just that figure when calculating how much you would like to overcommit, but if you have historical metrics as you have, you can analyse and work it out based on actual usage. The difference between "Active" and "Consumed" RAM is discussed in this VMWare Community thread . What is the counter-argument to: "if a VM has 16GB of RAM allocated,
but only uses 4GB, what's the problem??" ? E.g. do customers need to be
educated? The short answer to this is yes - customers should always be educated in best practices, regardless of the tools at their disposal. Customers should be educated to size their VMs according to what they use , rather than what they want . A lot of the time, people will over-specify their VMs just because they might need 16 GB of RAM, even if they're historically bumbling along on 2 GB day after day. As a vSphere administrator, you have the knowledge, metrics and power to challenge them and ask them if they actually need the RAM they've allocated. That said, if you combine vSphere's memory management with carefully-controlled overcommit limits, you should rarely have an issue in practice; the likelihood of running out of RAM for an extended period of time is relatively remote. In addition to this, automated vMotion (called Distributed Resource Scheduling by VMware) is essentially a load-balancer for your VMs - if a single VM is becoming a resource hog, DRS should migrate VMs around to make best use of the cluster's resources. What specific metric should be used to meter RAM usage. Tracking the
peaks of "Active" versus time? Mostly covered above - your main concern should be "Active" RAM usage, though you should carefully define your overcommit thresholds so that if you reach a certain ratio ( this is a decent example , though it may be slightly outdated). Typically, I would certainly stay within 120% of total cluster RAM, but it's up to you to decide what ratio you're comfortable with. A few good articles/discussions on memory over-commit: Memory overcommit in production? YES YES YES vSphere - Overcommitting Memory? Memory overcommit in vSphere | {
"source": [
"https://serverfault.com/questions/528254",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
528,742 | I have a domain set up on my hosting account, which is a shared host. It has been doing fine, but as the site becomes more popular the response times are getting slower and slower, and it sometimes gives a 503 error (it's an API, so people are hitting it and need a speedy response time). It's got to the point now where the shared host is buckling. So I have purchased a VPS which should be able to handle the load. My question is, instead of directing all traffic to this VPS, is there a way of distributing it between the two? If I can have 2 A records, how does the browser determine which one it visits first? | Yes you can. It is called round-robin DNS, and the browser just chooses one of them randomly. It is a well-used method of getting cheap load balancing, but if one host goes down, users will still try to access it.
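In zone-file terms, round-robin DNS is just two A records for the same name (a sketch; the name and addresses are placeholders):
www.example.com.    300    IN    A    192.0.2.10    ; shared host
www.example.com.    300    IN    A    192.0.2.20    ; new VPS
Resolvers and browsers will pick either address, so both hosts must be able to serve the API, and a dead host is not removed automatically. | {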
"source": [
"https://serverfault.com/questions/528742",
"https://serverfault.com",
"https://serverfault.com/users/149970/"
]
} |
528,773 | I've got a problem which is "NetworkManager is not updating /etc/resolv.conf after openvpn connection with dns push configured". Here's my openvpn server config: ( I've changed domain name to ABC.COM for security reason ;) ) ########################################
# Sample OpenVPN config file for
# 2.0-style multi-client udp server
#
# Adapted from http://openvpn.sourceforge.net/20notes.html
#
# tun-style tunnel
port 1194
dev tun
# Use "local" to set the source address on multi-homed hosts
#local [IP address]
# TLS parms
tls-server
ca keys/ca.crt
cert keys/static.crt
key keys/static.key
dh keys/dh1024.pem
proto tcp-server
# Tell OpenVPN to be a multi-client udp server
mode server
# The server's virtual endpoints
ifconfig 10.8.0.1 10.8.0.2
# Pool of /30 subnets to be allocated to clients.
# When a client connects, an --ifconfig command
# will be automatically generated and pushed back to
# the client.
ifconfig-pool 10.8.0.4 10.8.0.255
# Push route to client to bind it to our local
# virtual endpoint.
push "route 10.8.0.1 255.255.255.255"
push "dhcp-option DNS 10.8.0.1"
# Push any routes the client needs to get in
# to the local network.
#push "route 192.168.0.0 255.255.255.0"
# Push DHCP options to Windows clients.
push "dhcp-option DOMAIN ABC.COM"
#push "dhcp-option DNS 192.168.0.1"
#push "dhcp-option WINS 192.168.0.1"
# Client should attempt reconnection on link
# failure.
keepalive 10 60
# Delete client instances after some period
# of inactivity.
inactive 600
# Route the --ifconfig pool range into the
# OpenVPN server.
route 10.8.0.0 255.255.255.0
# The server doesn't need privileges
user openvpn
group openvpn
# Keep TUN devices and keys open across restarts.
persist-tun
persist-key
verb 4
As you can see it's basically the sample config with a little tuning. Now.. On my machine (openvpn client), I can see that dns is ok:
{17:12}/etc/NetworkManager ➭ nslookup git.ABC.COM 10.8.0.1
Server: 10.8.0.1
Address: 10.8.0.1#53
Name: git.ABC.COM
Address: 10.8.0.1
{17:18}/etc/NetworkManager ➭ nslookup ABC.COM 10.8.0.1
Server: 10.8.0.1
Address: 10.8.0.1#53
Name: ABC.COM
Address: 18X.XX.XX.71
The openvpn log on the server side says (if I understand correctly) that DNS has been pushed:
openvpn[13257]: TCPv4_SERVER link remote: [AF_INET]83.30.135.214:37658
openvpn[13257]: 83.30.135.214:37658 TLS: Initial packet from [AF_INET]83.30.135.214:37658, sid=3251df51 915772f3
openvpn[13257]: 83.30.135.214:37658 VERIFY OK: depth=1, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, [email protected]
openvpn[13257]: 83.30.135.214:37658 VERIFY OK: depth=0, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, [email protected]
openvpn[13257]: 83.30.135.214:37658 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
openvpn[13257]: 83.30.135.214:37658 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
openvpn[13257]: 83.30.135.214:37658 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
openvpn[13257]: 83.30.135.214:37658 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
openvpn[13257]: 83.30.135.214:37658 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
openvpn[13257]: 83.30.135.214:37658 [jacek] Peer Connection Initiated with [AF_INET]83.30.135.214:37658
openvpn[13257]: jacek/83.30.135.214:37658 MULTI_sva: pool returned IPv4=10.8.0.10, IPv6=(Not enabled)
openvpn[13257]: jacek/83.30.135.214:37658 MULTI: Learn: 10.8.0.10 -> jacek/83.30.135.214:37658
openvpn[13257]: jacek/83.30.135.214:37658 MULTI: primary virtual IP for jacek/83.30.135.214:37658: 10.8.0.10
openvpn[13257]: jacek/83.30.135.214:37658 PUSH: Received control message: 'PUSH_REQUEST'
openvpn[13257]: jacek/83.30.135.214:37658 send_push_reply(): safe_cap=940
openvpn[13257]: jacek/83.30.135.214:37658 SENT CONTROL [jacek]: 'PUSH_REPLY,route 10.8.0.1 255.255.255.255,dhcp-option DNS 10.8.0.1,dhcp-option DOMAIN ABC.COM,ping 10,ping-restart 60,ifconfig 10.8.0.10 10.8.0.9' (status=1)
The openvpn log on my side:
Aug 05 17:13:55 localhost.localdomain openvpn[1198]: TCPv4_CLIENT link remote: [AF_INET]XXX.XX.37.71:1194
Aug 05 17:13:55 localhost.localdomain openvpn[1198]: TLS: Initial packet from [AF_INET]XXX.XX.37.71:1194, sid=89cc981c d57dd826
Aug 05 17:13:56 localhost.localdomain openvpn[1198]: VERIFY OK: depth=1, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, [email protected]
Aug 05 17:13:56 localhost.localdomain openvpn[1198]: VERIFY OK: depth=0, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, [email protected]
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: [static] Peer Connection Initiated with [AF_INET]XXX.XX.37.71:1194
Aug 05 17:14:00 localhost.localdomain openvpn[1198]: SENT CONTROL [static]: 'PUSH_REQUEST' (status=1)
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.1 255.255.255.255,dhcp-option DNS 10.8.0.1,dhcp-option DOMAIN ABC.COM,ping 10,ping-restart 60,ifconfig 10.8.0.10 10.8.0.9'
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: timers and/or timeouts modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: --ifconfig/up options modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: route options modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: ROUTE_GATEWAY 10.123.123.1/255.255.255.0 IFACE=wlan0 HWADDR=44:6d:57:32:81:2e
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: TUN/TAP device tun0 opened
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: TUN/TAP TX queue length set to 100
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: /usr/sbin/ip link set dev tun0 up mtu 1500
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: /usr/sbin/ip addr add dev tun0 local 10.8.0.10 peer 10.8.0.9
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: /usr/sbin/ip route add 10.8.0.1/32 via 10.8.0.9
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: Initialization Sequence Completed
It looks like everything's fine. But. I checked /var/log/messages also... and I found that line:
Aug 5 17:14:01 localhost NetworkManager[761]: <warn> /sys/devices/virtual/net/tun0: couldn't determine device driver; ignoring...
ip a returns:
5: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
link/none
inet 10.8.0.10 peer 10.8.0.9/32 scope global tun0
valid_lft forever preferred_lft forever
route -n returns:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.123.123.1 0.0.0.0 UG 0 0 0 wlan0
10.8.0.1 10.8.0.9 255.255.255.255 UGH 0 0 0 tun0
10.8.0.9 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
10.123.123.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
So basically everything works, except the DNS being pushed... Oh! Right, and my /etc/resolv.conf :
# Generated by NetworkManager
domain home
search home
nameserver 10.123.123.1
Where's the issue? (I have a response from a Windows user with an openvpn client that on his side DNS works fine, so it's an issue on my side. Ok, now I have another response (after I restarted the openvpn service on the server side) - it's not working. I must say that it worked yesterday on my machine too.. so have I screwed up something on the server? What could it be? ) Edit: Okay, I've got another Windows-user response (the same user as before) - it's working now. So.. I guess it was caused by the openvpn restart and some delays with it. I haven't done anything since then. So we're back onto my machine. I also noticed that that weird tun0 message appeared yesterday as well, and yesterday it worked. Or maybe I added the entry to resolv.conf myself? I don't remember.. (damn it) | This works for me: http://www.softwarepassion.com/solving-dns-problems-with-openvpn-on-ubuntu-box/ The important step is adding the following two lines of configuration into your client openvpn config file:
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
Also ensure the resolvconf package is installed on the client, because that update-resolv-conf script depends on it. This works when the connection is started via the openvpn client service or manually from the command line. However, the Ubuntu Network Manager doesn't do this. It's a known open issue: https://bugs.launchpad.net/ubuntu/+source/openvpn/+bug/1211110
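Putting it together, the relevant client config additions would look roughly like this (a sketch; script-security 2 is assumed to be required so OpenVPN is allowed to run external scripts, and the script path is the one shipped by the Debian/Ubuntu openvpn package):
# client .conf / .ovpn additions
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
After reconnecting, resolvconf should rewrite /etc/resolv.conf with the pushed DNS server (10.8.0.1) and search domain. | {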
"source": [
"https://serverfault.com/questions/528773",
"https://serverfault.com",
"https://serverfault.com/users/184066/"
]
} |
529,049 | Sometimes my saltmaster hangs for a while on salt '*' test.ping waiting for downed minions to reply. Is there a way to see a list of connected minions, regardless of whether they respond to test.ping ? | The official answer:
salt-run manage.up
Also useful are:
salt-run manage.status
salt-run manage.down | {
"source": [
"https://serverfault.com/questions/529049",
"https://serverfault.com",
"https://serverfault.com/users/27067/"
]
} |
529,124 | I want to detect if a 2012 server has been set up as a Core install using WMI. An earlier question would seem to indicate that I can get the OperatingSystemSKU from Win32_OperatingSystem . My Windows 2012 Core systems are reporting an OperatingSystemSKU of 7. The article from the other question would seem to indicate that 7 is PRODUCT_STANDARD_SERVER, and if I had a core install I should expect to see a value of 0x0000000D instead, for PRODUCT_STANDARD_SERVER_CORE. What am I missing here? I eventually want to create a policy and use item level targeting to only apply that policy to Windows 2012 Server Core installs.
PS C:\Users\zoredache\Documents> gwmi -Query "select OPeratingSystemSKU,Version,ProductType from Win32_OperatingSystem"
__GENUS : 2
__CLASS : Win32_OperatingSystem
__SUPERCLASS :
__DYNASTY :
__RELPATH : Win32_OperatingSystem=@
__PROPERTY_COUNT : 3
__DERIVATION : {}
__SERVER :
__NAMESPACE :
__PATH :
OperatingSystemSKU : 7
ProductType : 2
Version : 6.2.9200 | In PowerShell: Get-WMIObject Win32_OptionalFeature | where Name -eq 'Server-Gui-Shell' | Select InstallState returns 1 on a full server and 2 on a server core install. Edit: While my answer above is correct, there are two problems with it. First, when using this command on a workstation, it returns nothing, so you have to add an extra check for this. Second, it is slow: when I tried it, it took between 600 and 3500
milliseconds. So the more pragmatic approach is to just check for the existence of a certain file: (Test-Path "$env:windir\explorer.exe") This returns $false for Server Core installations and $true for all others, and it takes one millisecond to execute.
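A small PowerShell sketch that combines both ideas into one check (Test-ServerCore is a hypothetical helper name; Win32_OperatingSystem's ProductType is 1 for workstations and 2 or 3 for servers):
function Test-ServerCore {
    $os = Get-WmiObject Win32_OperatingSystem
    # treat it as Server Core when it is a server OS and the desktop shell binary is absent
    ($os.ProductType -ne 1) -and -not (Test-Path "$env:windir\explorer.exe")
}
if (Test-ServerCore) { 'Server Core' } else { 'Full installation or workstation' }
For the Group Policy scenario in the question, the same explorer.exe test can presumably be expressed as a File Match item in item-level targeting rather than a script. | {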
"source": [
"https://serverfault.com/questions/529124",
"https://serverfault.com",
"https://serverfault.com/users/984/"
]
} |
529,287 | When I run this command rsync -avzp --del -e "ssh -p myport" user@hostname:/var/www/tests /var/www/tests files get synchronized, but instead of saving files in /var/www/tests , Rsync creates one more directory "tests" inside of the existing "tests": /var/www/tests/tests and puts files there. How do I tell Rsync not to create a new directory? | If you don't want another tests directory created, the correct command would be rsync -avzp --del -e "ssh -p myport" user@hostname:/var/www/tests/ /var/www/tests Note the / at the end of user@hostname:/var/www/tests/ .
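The trailing slash is what changes rsync's behaviour, which a quick side-by-side sketch makes clear (paths taken from the question):
# without the trailing slash: copies the directory itself -> /var/www/tests/tests/...
rsync -avzp --del -e "ssh -p myport" user@hostname:/var/www/tests  /var/www/tests
# with the trailing slash: copies the directory's contents -> /var/www/tests/...
rsync -avzp --del -e "ssh -p myport" user@hostname:/var/www/tests/ /var/www/tests
In rsync terms, a source ending in / means "the contents of this directory" rather than the directory itself. | {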
"source": [
"https://serverfault.com/questions/529287",
"https://serverfault.com",
"https://serverfault.com/users/184257/"
]
} |
529,394 | I keep seeing the below error messages in the error log. I can access all of the resources, but I'm unsure as to why the error is flagging. error:
[error] 13368#0: *449 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: myserver.com,
request: "GET /stories/mine HTTP/1.1", upstream:
" http://[::1]:5000/stories/mine ", host: "myserver.com" My Nginx config I'm passing the connection over to a node.js cluster running on port 5000. Can't see what I would have missed? upstream api {
server localhost:5000;
}
server {
listen 80;
server_name myserver.com;
root /home/user/_api;
# Logging
error_log /home/user/log/api.error.log notice;
location / {
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Connection "";
proxy_cache one;
proxy_cache_key sfs$request_uri$scheme;
proxy_pass http://api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
} | Nginx connects to nodejs on the IPv6 loopback [::1]. nodejs is probably just listening on IPv4. Try setting:
upstream api {
server 127.0.0.1:5000;
}
...
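If changing the upstream is not an option, the same mismatch can be fixed from the node.js side by binding explicitly to the IPv4 loopback (a sketch; it assumes an http server object named server, as in a typical cluster worker):
// bind the worker explicitly to IPv4 so nginx's localhost/[::1] resolution no longer matters
server.listen(5000, '127.0.0.1');
Either way, the point is that "localhost" may resolve to ::1 first, while the backend only listens on 127.0.0.1. | {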
"source": [
"https://serverfault.com/questions/529394",
"https://serverfault.com",
"https://serverfault.com/users/184783/"
]
} |
530,415 | In an answer to my previous question I noticed these lines: It's normally this last stage of delegation that is broken with most
home user setups. They have gone through the process of buying a
domain with a registrar/service provider, but have then failed to
configure the domain to point the delegation to their own name
servers. You actually have to tell the registrar where your
nameservers are before they can put glue records in place to get your
step of the delegation to work. What is a DNS Delegation? How does it work? A full explanation for the hypothetical domain abc.com would be helpful. | In physical terms, delegation is very similar to how a manager will delegate responsibility of tasks to his staff. The results are the same, however more than one person was involved in the process. The manager receives the request for work, passes on the responsibility to another member of staff and either the staff member or the manager returns with the work results. This is all on the proviso that the work the staff member does is actually correct and is what the original requester asked for (or that the requester actually asked for something that was valid in the first place!). With DNS delegation, it is pretty similar. When the com name servers are asked for the place to find authority of the zone example.com , they often delegate this work off to separate name servers (in fact in the vast majority of cases, they do in fact delegate the response to other name servers). When you first register a domain, say our example.com domain, this is often done through a third party called a registrar. It is common practice by registrars to put in their name servers for the delegation and to serve a default zone from those name servers. This default zone includes the basic requirements to serve that zone on the internet (the SOA , NS and A records associated to those NS records). Obviously if you yourself want to take control of the authority of the domain, you have to ask the registrar to delegate the domain to your nameserver instead. Different registrars refer to this in process in different ways, 'change nameservers', 'use third party DNS', 'Add Glue records' and so on. The mechanism underneath remains the same. You provide, generally, 2 or more "name server names" (for example ns0.example.com and ns1.example.com ) and the IP addresses at which ns0 and ns1 are. They then process the request and the delegation is pointed away from your registrar to the nameservers you provided. In technical terms, it's at this point you have to ensure your nameservers are up and running, serving the domain example.com , with a minimum of an SOA (start of authority record), 1 or more NS records and the A records (the IPs) that these NS records are resolved from: example.com. IN SOA ns0.example.com. hostmaster.example.com. ( 10 3600 900 604800 7200 )
IN NS ns0.example.com.
IN NS ns1.example.com.
ns0 IN A 192.0.2.8
ns1 IN A 192.0.2.44 (I've picked somewhat arbitrary values for the SOA values, the names for the NS records and the IPs those nameservers resolve to). These will all have to reflect the zone for which you are serving. This DNS service has to be visible from anywhere on the internet, and not be firewalled (that is port 53 udp and tcp inbound have to be allowed). Also your service provider must not block that port either (which some providers do block inbound traffic destined to those ports). Given my original comparison, the com nameservers are the DNS managers, who are delegating the zone example.com to the nameservers (the staff members) to do the work of providing the basic zone information ( SOA , NS , A ). You can also serve any
additional records such as mail server records MX or maybe an A record for your www.example.com address. If that name server doesn't do the work, returns the wrong results, or has a 3rd party (firewall/ISP) blocking the work, you will not have working DNS and the delegation breaks. It also may be worth noting that the domain does NOT have to be delegated to nameservers in the same domain, so ns0.example.net and ns0.example.org could both be valid nameservers which could have example.com delegated to them, provided both those name servers served the example.com domain. The reason that multiple name servers are required is to provide redundancy to the DNS clients, which is important for an internet which doesn't break.
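To watch that delegation happen in practice, dig can walk the chain (a quick sketch using the example names above):
dig +trace example.com NS        # follows the delegation from the root and TLD servers down to ns0/ns1
dig @192.0.2.8 example.com SOA   # queries your own nameserver directly to confirm it answers for the zone
If the trace ends at your NS names and the direct query returns the SOA you configured, the delegation is working. | {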
"source": [
"https://serverfault.com/questions/530415",
"https://serverfault.com",
"https://serverfault.com/users/184987/"
]
} |
531,026 | I removed the side panel of our server rack so I can clean up some of the wiring. Inside I found the PDU was wired like in the picture. Instead of an incoming power cable, there's just 3 uninsulated spade connectors. Is there any reason for wiring it like this instead of using a C19 to appropriate connector for your country cable? Or did whoever do this just not have the right cable handy, and figure this is good enough? Edit: When I say uninsulated, the metal parts of the spade connectors do look completely bare. I haven't looked too closely to see if there's clear insulation or something, but it doesn't look like it, and it's live. Edit 2 Proper cables are being dropshipped and will be installed as soon as they arrive. In the mean time nobody is going to go anywhere near it. I can easily cut power to the PDU so replacement wont be a problem. Edit 3 The cable to the PDU has been replaced with a proper one. No electrician needed. No fires, no injuries. Yay! The faulty cable has been destroyed so nobody ever gets tempted to do something like that again. For extra damage, the cord they attached those spades to was only an 18 gauge cable, not the 14 gauge ones usually used for servers. | That's shockingly bad (sorry!). If that cable gets yanked out by someone, at the very best it will short out and hopefully trip the circuit breaker, possibly taking out power in other racks or even other parts of the building. Worst case, somebody could be killed. Personally I'd do the following: Carefully secure the rack. Lock it up, post a notice to admins to not touch it under any circumstances. Make sure people are aware there is a dangerous electrical fault within the rack. If you have a server failure, so be it, that is too dangerous for anyone but a trained electrician to go near it. The stakes are too high. Call a professional registered electrician to fix that. Plan for downtime whilst the electrician does what's needed. Find out who was responsible for that, and if they are still employed, report them to management, as they should be severely disciplined for putting other people at risk. | {
"source": [
"https://serverfault.com/questions/531026",
"https://serverfault.com",
"https://serverfault.com/users/108667/"
]
} |
531,618 | I'm trying to execute a ping with eth1, but the program uses eth0 (the default network device). Any tips, tricks, or alternate techniques available? | From the manual: -I interface
interface is either an address, or an interface name. If interface is an address, it sets source
address to specified interface address. If interface in an interface name, it sets source inter‐
face to specified interface. For ping6, when doing ping to a link-local scope address, link
specification (by the '%'-notation in destination, or by this option) is required. So, the answer is: ping -I eth1 123.123.123.123 | {
"source": [
"https://serverfault.com/questions/531618",
"https://serverfault.com",
"https://serverfault.com/users/185814/"
]
} |