source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
408,017 | I ran across this page in the Heroku docs... Naked domains, also called bare or apex domains, are configured in DNS via A-records and have serious availability implications when used in highly available environments such as massive on-premise datacenters, cloud infrastructure services, and platforms like Heroku. For maximum scalability and resiliency applications should avoid naked domains and instead rely solely on subdomain-based hostnames. Does anyone here speak Enterprise? What are the "availability implications" they're warning about? (I notice that http://stackoverflow.com works no problem, so evidently there are viable alternate philosophies on this issue.) | What they're talking about is that when you use a CNAME to point to their services (which is only possible on subdomain, not the zone root - it can't coexist with the SOA and NS records that are required on the root of your zone), they can make a change to their own DNS records to work around some kind of availability issue. With a zone root, you must use an A record to point to a specific IP address for the service. If they have an issue with routing, or some kind of denial of service against that specific address, they're not able to update your zone's A record to point to a different IP on the fly; they can update their own, though, and that's what a CNAME allows them to do. This doesn't apply to Stack Exchange because they aren't using a third party's platform; they'll be the ones responding to an availability issue, so whether it's a CNAME or an A makes no difference to them. | {
"source": [
"https://serverfault.com/questions/408017",
"https://serverfault.com",
"https://serverfault.com/users/11478/"
]
} |
408,130 | One of my client's sites received a direct lightning hit last week (coincidentally on Friday the 13th! ). I was remote to the site, but working with someone onsite, I discovered a strange pattern of damage. Both internet links were down, most servers were inaccessible. Much of the damage occurred in the MDF , but one fiber-connected IDF also lost 90% of the ports on a switch stack member. Enough spare switch ports were available to redistribute cabling elsewhere and reprogram, but there was downtime while we chased down affected devices.. This was a new building/warehousing facility and a lot of planning went into the design of the server room. The main server room is run off of an APC SmartUPS RT 8000VA double-conversion online UPS, backed by a generator. There was proper power distribution to all connected equipment. Offsite data replication and systems backups were in place. In all, the damage (that I'm aware of) was: Failed 48-port line card on a Cisco 4507R-E chassis switch . Failed Cisco 2960 switch in a 4-member stack. (oops... loose stacking cable) Several flaky ports on a Cisco 2960 switch. HP ProLiant DL360 G7 motherboard and power supply. Elfiq WAN link balancer. One Multitech fax modem. WiMax/Fixed-wireless internet antenna and power-injector. Numerous PoE connected devices (VoIP phones, Cisco Aironet access points, IP security cameras) Most of the issues were tied to losing an entire switch blade in the Cisco 4507R-E. This contained some of the VMware NFS networking and the uplink to the site's firewall. A VMWare host failed, but HA took care of the VM's once storage networking connectivity was restored. I was forced to reboot/power cycle a number of devices to clear funky power states. So the time to recovery was short, but I'm curious about what lessons should be learned... What additional protections should be implemented to protect equipment in the future? How should I approach warranty and replacement? Cisco and HP are replacing items under contract. The expensive Elfiq WAN link balancer has a blurb on their website that basically said "too bad, use a network surge protector ". (seems like they expect this type of failure) I've been in IT long enough to have encountered electrical storm damage in the past, but with very limited impact; e.g. a cheap PC's network interface or the destruction of mini switches. Is there anything else I can do to detect potentially flaky equipment, or do I simply have to wait for odd behavior to surface? Was this all just bad luck, or something that should be really be accounted for in disaster recovery? With enough $$$, it's possible to build all sorts of redundancies into an environment, but what's a reasonable balance of preventative/thoughtful design and effective use of resources here? | A couple of jobs ago, one of the datacenters for the place I was working for was one floor below a very large aerial. This large, thin, metal item was the tallest thing in the area and was hit by lightning every 18 months or so. The datacenter itself was built around 1980, so I wouldn't call it the most modern thing around, but they had long experience dealing with lightning damage (the serial-comms boards had to be replaced every time , which is a trial if the comms boards are in a system that hasn't had any new parts made in 10 years). One thing that was brought up by the old hands is that all that spurious current can find a way around anything, and can spread in a common ground once it bridges in. And can bridge in from air-gaps. 
Lightning is an exceptional case, where normal safety standards aren't good enough to prevent arcs and will go as far as it has energy. And it has a lot. If there is enough energy it can arc from a suspended-ceiling grid (perhaps one of the suspension wires is hung from a loop with connection to a building girder in the cement) to the top of a 2-post rack and from there into the networking goodies. Like hackers, there is only so much you can do. Your power-feeds all have breakers on them that clamp spurious voltages, but your low-voltage networking gear almost never does and represents a common-path for an extremely energetic current to route. Detecting potentially flaky kit is something that I know how to do in theory, but not in reality. Probably your best bet is to put the suspect gear into an area and deliberately bring the temperature in the room up into the high end of the Operating Range and see what happens. Run some tests, load the heck out of it. Leave it there for a couple days. The added thermal stress over any pre-existing electrical damage may weed out some time-bombs. It definitely did shorten the lifespan of some of your devices, but finding out which ones is hard. Power conditioning circuitry inside power-supplies may have compromised components and be delivering dirty power to the server, something you could only detect through the use of specialized devices designed to test power-supplies. Lightning strikes are not something I've considered for DR outside of having a DC in a facility with a giant lightning rod on the roof . Generically, a strike is one of those things that happen so infrequently it's shuffled under 'act of god' and moved along. But... you've had one now. It shows your facility had the right conditions at least once. It's time to get an assessment for how prone your facility is given the right conditions and plan accordingly. If you're only thinking of the DR impacts of lightning now, I think that's appropriate. | {
"source": [
"https://serverfault.com/questions/408130",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
410,066 | In Linux, How do I display lines that contain a string in a text file, such as: search "my string" file_name How do I make the search case sensitive/insensitive?
And how do I also display the line numbers? Regards | grep -n "my string" file_name will do for your particular query. grep is case sensitive by default; to make it case insensitive, add the -i option. The -n option displays the line numbers. For the myriad other options, I recommend man grep, which covers grep's more advanced pattern-matching capabilities. | {
"source": [
"https://serverfault.com/questions/410066",
"https://serverfault.com",
"https://serverfault.com/users/105220/"
]
} |
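For example, the two options combine directly (the pattern and paths here are just placeholders):

    grep -in "my string" file_name     # case-insensitive match, with line numbers
    grep -rn "my string" /var/log/     # same idea, searching a whole directory recursively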
410,240 | On a Windows platform, is there any command line utility that I can pass a username, password, and domain name to in order to verify the credentials (or possibly get an error that the account is disabled, doesn't exist or has expired)? | You could use the net use command, specifying the username and password on the command line (in the form net use \\unc\path /user:username password ) and check the errorlevel returned to verify whether a credential is valid. The runas command would work, too, except that you're going to have a tougher time testing the output. Testing a credential for the existence of an account would be a matter of using net user or dsquery . The net user command won't tell you if an account is locked out, but querying the lockoutTime attribute of the user account could tell you that. | {
"source": [
"https://serverfault.com/questions/410240",
"https://serverfault.com",
"https://serverfault.com/users/126775/"
]
} |
410,626 | I am reading up on TCP/IP and other related protocols and technologies. MAC addresses are described as being (reasonably :) unique, and as having a large possibility space (several hundred trillions), while also being assigned to all network interfaces. What are the historical and technical reasons why IPv4 or IPv6 addresses are used instead of MAC addresses for internetwork communication? Am I missing something fundamental or is it just a silly reason (e.g. building on top of legacy tech)? | The MAC address might be unique, but there's nothing special about the number that would indicate where it is. MAC 00-00-00-00-00-00 might be on the other side of the planet from 00-00-00-00-00-01 . IP is an arbitrary numbering scheme imposed in a hierarchical fashion on a group of computers to logically distinguish them as a group (that's what a subnet is). Sending messages between those groups is done by routing tables, themselves divided into multiple levels so that we don't have to keep track of every single subnet. For instance, 17.x.x.x is within the Apple network. From there, Apple will know where each of its thousands of subnets are located and how to get to them (nobody else needs to know this information, they just need to know that 17.anything goes to Apple). It's also pretty easy to relate this to another pair of systems. You have a State Issued ID Number, why would you need a mailing address if that ID number is already unique to just you? You need the mailing address because it's an arbitrary system that describes where the unique destination for communications to you should go. | {
"source": [
"https://serverfault.com/questions/410626",
"https://serverfault.com",
"https://serverfault.com/users/53391/"
]
} |
411,280 | I am often dealing with incredibly large log files (>3 GB). I've noticed the performance of less is terrible with these files. Often I want to jump to the middle of the file, but when I tell less to jump forward 15 M lines it takes minutes. The problem, I imagine, is that less needs to scan the file for '\n' characters, but that takes too long. Is there a way to make it just seek to an explicit offset? e.g. seek to byte offset 1.5 billion in the file. This operation should be orders of magnitude faster. If less does not provide such an ability, is there another tool that does? | You can stop less from counting lines like this: less -n To jump to a specific place, like say 50% in: less -n +50p /some/log This was instant for me on a 1.5GB log file. Edit: For a specific byte offset: less -n +500000000P ./blah.log | {
"source": [
"https://serverfault.com/questions/411280",
"https://serverfault.com",
"https://serverfault.com/users/46231/"
]
} |
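If less is still too slow, a hedged alternative is to let tail do the byte seek and page only what follows (the offset is the 1.5 billion bytes from the question):

    tail -c +1500000001 /some/log | less -n    # start reading at byte offset 1.5e9, no line counting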
411,307 | Note: Please read the updated information starting with "EDIT" near the halfway point of this post - the environment and background of this problem has changed I've got a bog standard Debian 6.0 install here that I decided to sidegrade to the Debian Testing repositories. I did this by swapping out the references to the Squeeze repos in my sources.list to use the Testing repos instead. After the package install and a reboot, I get the following error when attempting to su - to another user: root@skaia:~# su joebloggs -
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell If I omit the -, this does not occur. Note that users can become root correctly, this only seems to happen when switching from root to somebody else and using the - to get that user's environment. Google is mostly useless here. The only things I can find are references from 2011 in regards to the sux package, which appear to have been fixed in the mean time. This looks and smells very much like an upgrade error, fixable by tweaking the right package in the right manner. I just have no idea where to start - aside from this, my system works completely normally and as expected. EDIT This is now happening to me on a Debian stable machine as described above. No upgrade or anything this time, just straight up stable. Yup, a year later. Still no idea what the heck the problem is. Here's what it looks like now (not much has changed): bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
terraria@skaianet:~$ tty
/dev/pts/0
terraria@skaianet:~$ ls -l /dev/pts/0
crw--w---- 1 root root 136, 0 Oct 10 19:21 /dev/pts/0
terraria@skaianet:~$ ls -l /dev/pts/
crw--w---- 1 root root 136, 0 Oct 10 19:21 0
crw--w---- 1 root root 136, 2 Sep 22 17:47 2
crw--w---- 1 root root 136, 3 Sep 26 19:30 3
c--------- 1 root root 5, 2 Sep 7 10:50 ptmx An strace generated like this: root@skaianet:~$ strace -f -o tracelog su terraria - ..also turns up some confusing behavior. These messages are rather confusing. Some chosen lines: readlink("/proc/self/fd/0", "/dev/pts/0", 4095) = 10
#Error code 10?
15503 open("/dev/tty", O_RDWR|O_NONBLOCK) = -1 ENXIO (No such device or address)
#Yes there is, and I can interact with it normally
15503 ioctl(255, TIOCGPGRP, [32561]) = -1 ENOTTY (Inappropriate ioctl for device) I've linked the full output of this strace session - all I did was run the su command, then immediately ctrl+d out of the terminal. | su - username is interpreted by your su to mean "run username 's shell as an interactive login shell" su username - is interpreted by your su to mean "run the following non-interactive command ( - ) as username " the latter only worked at all because: your su passes trailing arguments to sh for parsing sh takes - to mean "run as a login shell (read /etc/profile , ...)" But what you're really interested in is: why non-interactive ? Sharing the controlling terminal between the privileged parent and the unprivileged child leaves you vulnerable to " TTY pushback privilege escalation ", aka the TIOCSTI bug, so unless you really need it su detaches from it . When you used the su username - form, su inferred that you didn't need a controlling terminal . Only processes with a controlling terminal can have session leaders which manipulate process groups (do job control); the trace you gave is bash detecting that it can't be a session leader. You mention: Where it gets stranger is that both forms work fine on Ubuntu and CentOS 6, however on vanilla Debian, only the first form works without error. Ignoring variants like sux and sudo , there are at least three [1] versions of su on Linux: coreutils , util-linux and shadow-utils from which Debian's comes. The latter's manpage points out: This version of su has many compilation options, only some of which may be in use at any particular site. and Debian's comes with the flag old_debian_behavior ; other versions may have similar compile-time/runtime options. Another reason for variability might be that there was some debate [2] as to whether su should ever be used to drop privilege this way and whether the TIOCSTI bug is therefore a bug at all (Redhat originally closed it "WONTFIX" ). [1]: Edit: add SimplePAMApps and hardened-shadow to that. [2]: Solar Designer has some (old) opinions there which I think are worth a read. | {
"source": [
"https://serverfault.com/questions/411307",
"https://serverfault.com",
"https://serverfault.com/users/129773/"
]
} |
411,362 | Can't remember where, but I read uWSGI can reload itself like Django development server when a project script is modified. I can't find that in the docs , nor in the internets. How can I do this? I use Ubuntu 12.04 on my working machines and Debian Squeeze on stage & production server, Django 1.4 and uWSGI 1.2. | Reference: http://projects.unbit.it/uwsgi/wiki/Management If you have started uwsgi with the --touch-reload=/path/to/special/file/usually/the.ini option, reloading your uWSGI is a simple matter of touch reloading that file with touch /path/to/special/file/usually/the.ini And if you want the "autoreload" capability, this is the tip that gets this done: http://projects.unbit.it/uwsgi/wiki/TipsAndTricks#uWSGIdjangoautoreloadmode | {
"source": [
"https://serverfault.com/questions/411362",
"https://serverfault.com",
"https://serverfault.com/users/24378/"
]
} |
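A minimal sketch of the same thing from the command line, in case the ini file is not the place you want to manage it (paths and file names are assumptions):

    uwsgi --ini /srv/myproject/uwsgi.ini --touch-reload=/srv/myproject/reload.txt
    # later, trigger a graceful reload:
    touch /srv/myproject/reload.txt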
411,970 | I'm trying to get my Pelican blog working. It uses lftp to transfer the actual blog to ones server, but I always get an error: mirror: Fatal error: Certificate verification: subjectAltName does not match ‘blogname.com’ I think lftp is checking the SSL and the quick setup of Pelican just forgot to include that I don't have SSL on my FTP. This is the code in Pelican's Makefile: ftp_upload: $(OUTPUTDIR)/index.html
lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit" which renders in terminal as: lftp ftp://[email protected] -e "mirror -R /Volumes/HD/Users/me/Test/output /myblog_directory ; quit" What I managed so far is, denying the SSL check by changing the Makefile to: lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no" "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit" Due to my incorrect implementation I get logged in correctly ( lftp [email protected]:~> ) but the one line feature doesn't work anymore and I have to enter the mirror command by hand: mirror -R /Volumes/HD/Users/me/Test/output/ /myblog_directory This works without an error and timeout. The question is how to do this with a one liner. In addition I tried: set ssl:verify-certificate/ftp.myblog.com no This trick to disable certificate verification in lftp: $ cat ~/.lftp/rc
set ssl:verify-certificate no However, it seems there is no "rc" folder in my lftp directory - so this prompt has no chance to work. | From the manpage : -c commands Execute the given commands and exit. Commands can be separated with a semicolon ( ; ), AND ( && ) or OR ( || ). Remember to quote the commands argument properly in the shell. This option must be used alone without other arguments. So you want to specify the commands as a single argument, separated by semicolons: lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no; mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit" You can actually omit the quit command and use -c instead of -e . | {
"source": [
"https://serverfault.com/questions/411970",
"https://serverfault.com",
"https://serverfault.com/users/130068/"
]
} |
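On the asker's side note about the missing "rc": ~/.lftp is a directory and rc is a plain file inside it, so it can simply be created (a sketch; disabling certificate verification is a deliberate security trade-off):

    mkdir -p ~/.lftp
    echo 'set ssl:verify-certificate no' >> ~/.lftp/rc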
412,263 | I lost my domain controller machine, and then added a new domain controller, but with a new domain. How do I remove network machines from the old domain using the command line and add them to the new domain? The machines are running Windows Server 2008 Core (command line only). net computer \\name del works only on a domain controller. sconfig When I try to leave the old domain, the console requests a username and password. I type them, and then get the error "Could not connect to domain" (the old domain controller no longer exists). What should I do? | Try netdom remove computername /Domain:domain /UserD:user /PasswordD:* /Force Type netdom remove /? for the full command usage. The /Force option is what you're looking for. Per the help: Forces the unjoin of the machine from the domain even if the domain is not found or does not contain the matching computer object. To join the members to the new domain: netdom join computername /Domain:domain /UserD:user /PasswordD:* Again type netdom join /? for help with the command usage. | {
"source": [
"https://serverfault.com/questions/412263",
"https://serverfault.com",
"https://serverfault.com/users/130197/"
]
} |
412,284 | I have a client who's been having problems with his site. The server doesn't seem to want to load hes site in certain countries, though other sites are fine. But this site [link removed] only seems to load in the US and Canada. In Europe, the UK, Asia etc, the site seems to be blocked (been like this for a week now). I've looked over the server and it seems fine. Other sites work fine, and the NS are set up properly, pointing to my main server, at http://puu.sh/MIGF Any ideas? | Try netdom remove computername /Domain:domain /UserD:user /PasswordD:* /Force Type netdom remove /? for the full command usage. The /Force option is what you're looking for. Per the help: Forces the unjoin of the machine from the domain even if the domain is not found or does not contain the matching computer object. To join the members to the new domain: netdom join computername /Domain:domain /UserD:user /PasswordD:* Again type netdom join /? for help with the command usage. | {
"source": [
"https://serverfault.com/questions/412284",
"https://serverfault.com",
"https://serverfault.com/users/130203/"
]
} |
412,305 | I have this weird issue that started happening a day or two ago, I don't know the cause. The server this is happening on is running CentOS 6.3 64 bit. For some reason, programs attempting to connect to a webpage of some external webserver instead go to the webserver running on the local machine. For instance, when I try to "yum update", the repo's give 404 messages, and this is in /var/logs/httpd/access_log: xx.xx.xx.xx - - [29/Jul/2012:09:18:34 -0700] "GET /centos/6.3/extras/x86_64/repodata/repomd.xml HTTP/1.1" 404 329
xx.xx.xx.xx - - [29/Jul/2012:09:18:35 -0700] "GET /packages/centos/6/x86_64/repodata/repomd.xml HTTP/1.1" 404 317
xx.xx.xx.xx - - [29/Jul/2012:09:18:35 -0700] "GET /repoforge/redhat/el6/en/x86_64/rpmforge/repodata/repomd.xml HTTP/1.1" 404 337
xx.xx.xx.xx - - [29/Jul/2012:09:18:36 -0700] "GET /centos/6.3/updates/x86_64/repodata/repomd.xml HTTP/1.1" 404 336 The xx.xx.xx.xx is one of the ip's on the local machine. It's not just happening with yum, there is another process running on the machine that goes to an external webpage to just signal a heartbeat request, that also gets redirected to the httpd server for whatever reason. The only thing I could think of was some rule getting added to iptables, I backup up the current rules and then flushed iptables, and the problem still persists. There have also been no recent changes to the httpd configuration or /etc/hosts. | Try netdom remove computername /Domain:domain /UserD:user /PasswordD:* /Force Type netdom remove /? for the full command usage. The /Force option is what you're looking for. Per the help: Forces the unjoin of the machine from the domain even if the domain is not found or does not contain the matching computer object. To join the members to the new domain: netdom join computername /Domain:domain /UserD:user /PasswordD:* Again type netdom join /? for help with the command usage. | {
"source": [
"https://serverfault.com/questions/412305",
"https://serverfault.com",
"https://serverfault.com/users/130214/"
]
} |
413,124 | For some domains nslookup gives me a Non-authoritative answer section. What does this mean? Got answer:
HEADER:
opcode = QUERY, id = 3, rcode = NXDOMAIN
header flags: response, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 1, additional =
QUESTIONS:
www.example.com.SME, type = AAAA, class = IN
AUTHORITY RECORDS:
-> (root)
ttl = 1787 (29 mins 47 secs)
primary name server = a.root-servers.net
responsible mail addr = nstld.verisign-grs.com
------------
Non-authoritative answer:
------------
------------
Name: example.com
Address: 93.184.216.34
Aliases: www.example.com | Basically, it's what the name says it is. An authoritative answer comes from a nameserver that is considered authoritative for the domain which it's returning a record for (one of the nameservers in the list for the domain you did a lookup on), and a non-authoritative answer comes from anywhere else (a nameserver not in the list for the domain you did a lookup on). It's basically a distinction between a nameserver that's an official nameserver for the domain you're querying, and a nameserver that isn't. Nameservers that aren't authoritative are getting their answers second (or third or fourth...) hand - just relaying the information along from somewhere else. So, for example, If I did an nslookup of maps.google.com right now, I would get a response from one of my configured nameservers. (Either from my ISP, or my domain.) It would come back as non-authoritative because neither my ISP's nameservers, nor my own are in the list of nameservers for google.com . They aren't Google's nameservers, so they're not the authoritative source that creates the NS records. The list of authoritative nameservers for Google is below (from whois.internic.net). Domain Name: GOOGLE.COM Registrar: MARKMONITOR INC. Whois Server: whois.markmonitor.com Name Server: NS1.GOOGLE.COM Name Server: NS2.GOOGLE.COM Name Server: NS3.GOOGLE.COM Name Server: NS4.GOOGLE.COM Updated Date: 20-jul-2011 Creation Date: 15-sep-1997 Expiration Date: 14-sep-2020 If I changed my configured DNS server to one of the ones in that list, and then did an nslookup against maps.google.com , I'd get an authoritative answer back. Those servers are the authority, (or source) for what are valid names in Google's domains, and what aren't. All other nameservers, non-authoritative nameservers, get their NS records from the authoritative servers somewhere down the line. | {
"source": [
"https://serverfault.com/questions/413124",
"https://serverfault.com",
"https://serverfault.com/users/128164/"
]
} |
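To see the distinction in practice, point nslookup at one of the domain's own nameservers (hostnames as in the answer; any authoritative NS for the zone works):

    nslookup maps.google.com                   # asks your configured resolver: non-authoritative
    nslookup maps.google.com ns1.google.com    # asks an authoritative server directly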
413,142 | While doing the perl -MCPAN -e 'install Module::Build'; it gives the following error, How can I resolve it ? /usr/bin/perl Build --makefile_env_macros 1
Can't locate Perl/OSType.pm in @INC (@INC contains: t/lib t/bundled lib /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at lib/Module/Build.pm line 13. | Basically, it's what the name says it is. An authoritative answer comes from a nameserver that is considered authoritative for the domain which it's returning a record for (one of the nameservers in the list for the domain you did a lookup on), and a non-authoritative answer comes from anywhere else (a nameserver not in the list for the domain you did a lookup on). It's basically a distinction between a nameserver that's an official nameserver for the domain you're querying, and a nameserver that isn't. Nameservers that aren't authoritative are getting their answers second (or third or fourth...) hand - just relaying the information along from somewhere else. So, for example, If I did an nslookup of maps.google.com right now, I would get a response from one of my configured nameservers. (Either from my ISP, or my domain.) It would come back as non-authoritative because neither my ISP's nameservers, nor my own are in the list of nameservers for google.com . They aren't Google's nameservers, so they're not the authoritative source that creates the NS records. The list of authoritative nameservers for Google is below (from whois.internic.net). Domain Name: GOOGLE.COM Registrar: MARKMONITOR INC. Whois Server: whois.markmonitor.com Name Server: NS1.GOOGLE.COM Name Server: NS2.GOOGLE.COM Name Server: NS3.GOOGLE.COM Name Server: NS4.GOOGLE.COM Updated Date: 20-jul-2011 Creation Date: 15-sep-1997 Expiration Date: 14-sep-2020 If I changed my configured DNS server to one of the ones in that list, and then did an nslookup against maps.google.com , I'd get an authoritative answer back. Those servers are the authority, (or source) for what are valid names in Google's domains, and what aren't. All other nameservers, non-authoritative nameservers, get their NS records from the authoritative servers somewhere down the line. | {
"source": [
"https://serverfault.com/questions/413142",
"https://serverfault.com",
"https://serverfault.com/users/125147/"
]
} |
413,231 | Is there a simple way to get a list of all fingerprints entered in the .ssh/authorized_keys || .ssh/authorized_keys2 file? ssh-keygen -l -f .ssh/authorized_keys will only return fingerprint of first line / entry / publickey hack with awk: awk 'BEGIN {
while (getline < ".ssh/authorized_keys") {
if ($1!~"ssh-(r|d)sa") {continue}
print "Fingerprint for "$3
system("echo " "\""$0"\"> /tmp/authorizedPublicKey.scan; \
ssh-keygen -l -f /tmp/authorizedPublicKey.scan; \
rm /tmp/authorizedPublicKey.scan"
)
}
}' but is there an easier way or ssh command I didn't find? | Here's another hack using plain bash without temporary files: while read l; do
[[ -n $l && ${l###} = $l ]] && ssh-keygen -l -f /dev/stdin <<<$l;
done < .ssh/authorized_keys You can easily make it a function in your .bashrc : function fingerprints() {
local file="${1:-$HOME/.ssh/authorized_keys}"
while read l; do
[[ -n $l && ${l###} = $l ]] && ssh-keygen -l -f /dev/stdin <<<$l
done < "${file}"
} and call it with: $ fingerprints .ssh/authorized_keys | {
"source": [
"https://serverfault.com/questions/413231",
"https://serverfault.com",
"https://serverfault.com/users/78392/"
]
} |
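Note that newer OpenSSH releases (around 7.2 and later, if memory serves) can fingerprint multi-key files directly, which makes the loop unnecessary there:

    ssh-keygen -lf ~/.ssh/authorized_keys    # one fingerprint per key on recent OpenSSH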
413,397 | I have an Arch Linux system with systemd and I've created my own service. The configuration service at /etc/systemd/system/myservice.service looks like this: [Unit]
Description=My Daemon
[Service]
ExecStart=/bin/myforegroundcmd
[Install]
WantedBy=multi-user.target Now I want to have an environment variable set for the /bin/myforegroundcmd . How do I do that? | Times change and so do best practices. The current best way to do this is to run systemctl edit myservice , which will create an override file for you or let you edit an existing one. In normal installations this will create a directory /etc/systemd/system/myservice.service.d , and inside that directory create a file whose name ends in .conf (typically, override.conf ), and in this file you can add to or override any part of the unit shipped by the distribution. For instance, in a file /etc/systemd/system/myservice.service.d/myenv.conf : [Service]
Environment="SECRET=pGNqduRFkB4K9C2vijOmUDa2kPtUhArN"
Environment="ANOTHER_SECRET=JP8YLOc2bsNlrGuD6LVTq7L36obpjzxd" Also note that if the directory exists and is empty, your service will be disabled! If you don't intend to put something in the directory, ensure that it does not exist. For reference, the old way was: The recommended way to do this is to create a file /etc/sysconfig/myservice which contains your variables, and then load them with EnvironmentFile . For complete details, see Fedora's documentation on how to write a systemd script . | {
"source": [
"https://serverfault.com/questions/413397",
"https://serverfault.com",
"https://serverfault.com/users/102960/"
]
} |
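If you prefer to create the drop-in by hand instead of through systemctl edit, a sketch follows (the variable is an example; unlike systemctl edit, manual edits need an explicit daemon-reload, and tee overwrites any existing override.conf):

    sudo mkdir -p /etc/systemd/system/myservice.service.d
    printf '[Service]\nEnvironment="MY_VAR=some-value"\n' | sudo tee /etc/systemd/system/myservice.service.d/override.conf
    sudo systemctl daemon-reload
    sudo systemctl restart myservice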
413,582 | How might one escape the exclamation point in a password: $ mysql -umyuser -pone_@&!two
-bash: !two: event not found Trying the obvious backslash did not help: $ mysql -umyuser -pone_@&\!two
[1] 22242
-bash: !two: command not found
[email protected] [~]# ERROR 1045 (28000): Access denied for user 'myuser'@'localhost' (using password: YES) All my google searches suggest that the backslash would help, but it does not. There is no way to use quotes as suggested in this question . The line will be used in a .bashrc alias. Don't worry, the usernames and passwords shown here are examples only and not used in production! | Use single quotes around the password like this: -p'one_@&!two' To put it in an alias, you'd do something like: alias runmysql='mysql -umyuser -p'\''one_@&!two'\''' | {
"source": [
"https://serverfault.com/questions/413582",
"https://serverfault.com",
"https://serverfault.com/users/91213/"
]
} |
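A hedged alternative that sidesteps shell quoting entirely is a client option file; the location below is the conventional one, and the file must be protected since it holds the password in clear text:

    cat > ~/.my.cnf <<'EOF'
    [client]
    user=myuser
    password="one_@&!two"
    EOF
    chmod 600 ~/.my.cnf
    mysql    # reads the credentials from ~/.my.cnf, no escaping or alias needed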
413,844 | I have a motherboard with only one x16 PCIe slot and no x8 slots. I am buying a NIC with very specific configuration, but it is available for x8 slots only. Can I plug a x8 card in a x16 slot? I have googled this question and this seems quite possible. However, I need answer from an expert. Also, are there any performance implications? | What should be : The PCIe spec states that all slots start at 1x/v1.0 and negotiate how many lanes they can use and what clock speed. It shouldn't matter which supports more lanes/clock, some slots are designed to take larger cards and smaller cards fit in larger slots. Whatever the highest spec both sides can communicate at (both the number of lanes and the clock/version), that is the speed that will be negotiated and used. Endpoints can support 1x, 2x, 4x, 8x, 16x, and 32x, though there are no slots specifically for 2x and 32x. Speed is specified by major version number (2.5, 5.0, 8, 16 GT/s). What really is : Usually what should happen is what actually happens . But there are quite a few boards (especially enthusiast boards) that do not follow spec. Some motherboards will not use anything but a 16x video card in their first PCIe slot. Others will not auto-negotiate correctly (commonly falling back to less lanes - this seems particularly common with 2x cards that negotiate to 1x speed). In server grade hardware these problems are very rare, but it happens. If both the system/motherboard are from the same manufacturer as the card, you should be able to contact their support and find out if it's a supported configuration (if they don't know or can't answer it's a huge redflag and you should consider not buying from them/returning). Also, try searching your particular motherboard and see if anyone has reported a problem. | {
"source": [
"https://serverfault.com/questions/413844",
"https://serverfault.com",
"https://serverfault.com/users/41288/"
]
} |
414,074 | I have an issue with a mount point that was previously configured. It shows the folder, but the mount is missing and holds "?" values for size, permissions, etc. So I tried to remount using cifs and the same command from before: mount -t cifs //nas.domain.local/share /mnt/archive But I get the error: Host is down. If I ping the domain or IP I get a proper resolution and I also connected using smbclient without issue ping nas.domain.local
ping ip
smbclient //nas.domain.local/share I looked around, but can't find a solid answer. Any thoughts? | This could also be because of a protocol mismatch. In 2017 Microsoft patched Windows Server and advised disabling the SMB1 protocol. Since then, mount.cifs might have problems with the protocol negotiation. The error displayed is "Host is down.", but when you debug with: smbclient -L <server_ip> -U <username> -d 256 you will get the error: protocol negotiation failed: NT_STATUS_CONNECTION_RESET To overcome this, use mount or smbclient with a protocol specified. For smbclient: add -m SMB2 (or SMB3 for the newer version of the protocol) smbclient -L <server_ip> -U <username> -m SMB2 Or for mount: add vers=2.0 (or vers=3.0 if you want to use version 3 of the protocol) mount -t cifs //<server_ip>/<share> /mnt/<mountpoint> -o vers=2.0 | {
"source": [
"https://serverfault.com/questions/414074",
"https://serverfault.com",
"https://serverfault.com/users/111911/"
]
} |
414,225 | We have a list of 3000 301 redirects. We need assistance on the following: What would be the best place to put these? It seems putting these 3000 lines inside a vhost in httpd.conf would be a mess. What are the recommended ways to handle thousands of URLs? How much is it going to affect page loading speed and Apache server load? Thanks. | You can use the Include directive in httpd.conf to be able to maintain redirects in another file. But it would not be very efficient, as every request would need to be checked against a lot of regular expressions. Also, a server restart would be required after every change in the file. A better way for so many redirects would be to use a RewriteMap directive of type dbm to declare a map from URIs to redirects. This way it will be efficient, as dbm lookups are very fast, and after a change in the map you would not need to restart the server, as httpd checks the map file's modification time. The rewrite rules would look like this (tested on my Fedora 16 computer): RewriteEngine On
RewriteMap redirects dbm=db:/etc/httpd/conf/redirects.db
RewriteCond ${redirects:$1} !=""
RewriteRule ^(.*)$ ${redirects:$1} [redirect=permanent,last] And dbm map would be created from text map /etc/httpd/conf/redirects.txt looking like this: /foo http://serverfault.com/
/bar/lorem/ipsum/ http://stackoverflow.com/ using a command httxt2dbm -f db -i /etc/httpd/conf/redirects.txt -o /etc/httpd/conf/redirects.db | {
"source": [
"https://serverfault.com/questions/414225",
"https://serverfault.com",
"https://serverfault.com/users/127417/"
]
} |
414,578 | When I'm using certutil it returns this error: certutil: function failed: security library: bad database. For example, I can't list certs or keys. How can I fix this? | If it is a new system, your certificate database might not be initialized. To fix this, perform: mkdir -p $HOME/.pki/nssdb
certutil -d $HOME/.pki/nssdb -N | {
"source": [
"https://serverfault.com/questions/414578",
"https://serverfault.com",
"https://serverfault.com/users/126492/"
]
} |
414,758 | I'm learning my way through configuration management in general and using puppet to implement it in particular, and I'm wondering what aspects of a system, if any, should not be managed with puppet? As an example we usually take for granted that hostnames are already set up before lending the system to puppet's management. Basic IP connectivity, at least on the network used to reach the puppetmaster, has to be working. Using puppet to automatically create dns zone files is tempting, but DNS reverse pointers ought to be already in place before starting up the thing or certificates are going to be funny. So should I leave out IP configuration from puppet? Or should I set it up prior to starting puppet for the first time but manage ip addresses with puppet nonetheless? What about systems with multiple IPs (eg. for WAN, LAN and SAN)? What about IPMI ? You can configure most, if not all, of it with ipmitool , saving you from getting console access (physical, serial-over-lan, remote KVM, whatever) so it could be automated with puppet. But re-checking its state at every puppet agent run doesn't sound cool to me, and basic lights out access to the system is something I'd like to have before doing anything else. Another whole story is about installing updates. I'm not going in this specific point, there are already many questions on SF and many different philosophies between different sysadmins. Myself, I decided to not let puppet update things (eg. only ensure => installed ) and do updates manually as we are already used to, leaving the automation of this task to a later day when we are more confident with puppet (eg. by adding MCollective to the mix). Those were just a couple of examples I got right now on my mind.
Is there any aspect of the system that should be left out of reach from puppet?
Or, said another way, where is the line between what should be set up at provisioning time and "statically" configured in the system, and what is handled through centralized configuration management? | General rule: If you're using configuration management, manage every aspect of the configuration that you can. The more you centralize the easier it will be to scale your environment out. Specific examples (cribbed from the question, all "This is why you want to manage it" narratives): IP Network configuration OK, sure, you configured an address/gateway/NS on the machine before you dropped it in the rack. I mean if you didn't how would you run puppet to do the rest of the config? But say now you add another nameserver to your environment and you need to update all your machines -- Don't you want your configuration management system to do that for you? Or say your company gets acquired, and your new parent company demands that you change from your 192.168.0.0/24 addressing to 10.11.12.0/24 to fit into their numbering system. Or you suddenly get a massive government contract -- Only catch is you have to turn up IPv6 RIGHT FREAKIN' NOW or the deal is blown.... Looks like network configuration is something we'd like to manage... IPMI Configuration Just like with IP addresses, I'm sure you set this up before you put the machine in the rack -- It's just good common sense to enable IPMI, remote console, etc. on any machine that has the capability, and those configurations don't change much... ... Until that hypothetical acquisition I mentioned in IP Configuration above -- The reason you were forced to vacate those 192.168-net addresses is because that's IPMI-land according to your new corporate overlords, and you need to go update all your IPMI cards NOW because they're gonna be trampling on someone's reserved IP space. OK, it's a bit of a stretch here, but like you said - all of it can be managed with ipmitool , so why not have Puppet run the tool and confirm the configuration while it's doing all of its other stuff? I mean it's not going to hurt anything, so we may as well include IPMI too... Updates Software updates are more of a gray area -- In my organization we evaluated puppet for this and found it "sorely lacking", so we use radmind for this purpose. There's no reason Puppet can't call radmind though -- In fact if/when we migrate to Puppet for configuration management that's exactly what's going to happen! The important thing here is to have all your updates installed in a standard way (either standard across the organization, or standard within platforms) -- There's no reason Puppet shouldn't be launching your update process, as long as you've thoroughly tested everything to ensure that Puppet won't mess up anything. There's also no reason why Puppet can't call out to a tool that's better suited for this task if you've determined that Puppet can't do a good job on its own... | {
"source": [
"https://serverfault.com/questions/414758",
"https://serverfault.com",
"https://serverfault.com/users/35909/"
]
} |
414,760 | On my Mac terminal, printing UTF-8 works in general, but the less doesn't work correctly. So this works correctly: $ echo -e '\xe2\x82\xac'
€ but piping it into less gives something like this: $ echo -e '\xe2\x82\xac' | less
<E2><82><AC> How can this be fixed? For diagnostics: I'm using Mac OS 10.6.8. less version 418, Terminal 2.1.2 (273.1). The output of my locale is this: $ locale
LANG="en_US.UTF-8"
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL="C" | Okay, I found the answer after some googling. Apparently, LESSCHARSET needs to be set like this: export LESSCHARSET=utf-8 Now less works fine for me. | {
"source": [
"https://serverfault.com/questions/414760",
"https://serverfault.com",
"https://serverfault.com/users/9474/"
]
} |
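To make the setting persist for future terminal sessions (startup file assumed to be ~/.bash_profile; adjust for your shell):

    echo 'export LESSCHARSET=utf-8' >> ~/.bash_profile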
414,983 | I have an EC2 instance which I created a 500GB EBS volume for. Unfortunately, the EC2 instance shows only 8GB available. I have only one drive, which is right. [root@ip-10-244-134-250 ~]# ls -la /dev/x*
brw-rw---- 1 root disk 202, 1 Aug 7 08:54 /dev/xvda1 But, that drive is only 8GB [root@ip-10-244-134-250 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 1.3G 6.7G 16% /
tmpfs 3.7G 0 3.7G 0% /dev/shm But, fdisk and /proc/partitions both show correct size [root@ip-10-244-134-250 ~]# fdisk -l
Disk /dev/xvda1: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvda1 doesn't contain a valid partition table
[root@ip-10-244-134-250 ~]# cat /proc/partitions
major minor #blocks name
202 1 524288000 xvda1 Any help would be greatly appreciated, thanks. | If the root file system is ext3 or ext4, then run: sudo resize2fs /dev/xvda1 If the root file system is xfs (less common), then run: sudo xfs_growfs / You can omit "sudo" if you are logged in as root. These commands should be run while the system is running and the file system is mounted. It's standard for EBS volumes to not contain a partition table. The EBS volume is generally formatted as a file system in its entirety without partitions. | {
"source": [
"https://serverfault.com/questions/414983",
"https://serverfault.com",
"https://serverfault.com/users/7732/"
]
} |
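If you are unsure which file system the root volume uses, check before picking a resize tool (device name as in the question):

    df -T /                   # shows the type of the mounted root file system
    sudo file -s /dev/xvda1   # inspects the volume directly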
415,040 | I have fail2ban configured like below: block the IP after 3 failed attempts release the IP after a 300 sec timeout This works perfectly and I want to keep it this way, such that a valid user gets a chance to retry the login after the timeout. Now, I want to implement a rule where, if the same IP has been detected as an attacker and blocked/unblocked 5 times, the IP is permanently blocked and never unblocked again. Can this be achieved with fail2ban alone, or do I need to write my own script to do that? I am doing this on CentOS. | Before 0.11, there was no default feature or setting within fail2ban to achieve this. But starting with the upcoming 0.11 release, ban time is automatically calculated and increases exponentially with each new offense, which, in the long term, will mean a more or less permanent block. Until then, your best approach is probably setting up fail2ban to monitor its own log file . It is a two-step process... Step 1 We need to create a filter to check for BANs in the log file (fail2ban's log file) Step 2 We need to define the jail , similar to the following... [fail2ban]
enabled = true
filter = fail2ban
action = iptables-allports[name=fail2ban]
logpath = /path/to/fail2ban.log
# findtime: 1 day
findtime = 86400
# bantime: 1 year
bantime = 31536000 Technically, it is not a permanent block , but only blocks for a year (that we can increase too). Anyway, for your question (Can this be achieved with fail2ban alone or I need to write my own script to do that?)... writing own script might work well. Setting up the script to extract the frequently banned IPs and then putting them into /etc/hosts.deny is what I'd recommend. | {
"source": [
"https://serverfault.com/questions/415040",
"https://serverfault.com",
"https://serverfault.com/users/118591/"
]
} |
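For what it's worth, recent fail2ban packages ship this pattern as the built-in recidive filter and jail, so on a current version enabling it may be all that is needed (the times below are examples in seconds; the reload command depends on your setup):

    printf '[recidive]\nenabled = true\nbantime = 31536000\nfindtime = 86400\n' | sudo tee -a /etc/fail2ban/jail.local
    sudo fail2ban-client reload    # or restart the fail2ban service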
415,188 | A postgres SELECT query ran out of control on our DB server and started eating up tons of memory and swap until the server ran out of memory. I found the particular process via ps aux | grep postgres and ran kill -9 pid . This killed the process and the memory freed up as expected. The rest of the system and postgres queries appeared to be unaffected. This server is running postgres 9.1.3 on SLES 9 SP4. However, one of our developers chewed me out for killing a postgres process with kill -9 , saying that it will take down the entire postgres service. In reality, it did not. I've done this before a handful of times and have not seen any negative side effects. With that said, and after further reading, it looks like kill pid without the flags is the preferred way to kill a runaway postgres process, but per other users in the postgres community, it also sounds like postgres has "gotten better" over the years such that kill -9 on an individual query process/thread is no longer a death sentence. Can someone enlighten me on the proper way to kill a runaway postgres process as well as the how disastrous (or benign) using kill -9 is with Postgres these days? Thanks for the insight. | voretaq7 's answer covers the key points, including the correct way to terminate backends but I'd like to add a little more explanation. kill -9 (ie SIGKILL ) should never, ever, ever be your first-choice default . It should be your last resort when the process doesn't respond to its normal shutdown requests and a SIGTERM ( kill -15 ) has had no effect. That's true of Pg and pretty much everything else. kill -9 gives the killed process no chance to do any cleanup at all. When it comes to PostgreSQL, Pg sees a backed that's terminated by kill -9 as a backed crash . It knows the backend might have corrupted shared memory - because you could've interrupted it half way through writing a page into shm or modifying one, for example - so it terminates and restarts all the other backends when it notices that a backend has suddenly vanished and exited with a non-zero error code. You'll see this reported in the logs. If it appears to do no harm, that because Pg is restarting everything after the crash and your application is recovering from the lost connections cleanly. That doesn't make it a good idea. If nothing else backend crashes are less well tested than the normal-functioning parts of Pg and are much more complicated/varied, so the chances of a bug lurking in backend crash handling and recovery are higher. BTW, if you kill -9 the postmaster then remove postmaster.pid and start it again without making sure every postgres backend is gone, very bad things can happen . This could easily happen if you accidentally killed the postmaster instead of a backend, saw the database had gone down, tried to restart it, removed the "stale" .pid file when the restart failed, and tried to restart it again. That's one of the reasons you should avoid waving kill -9 around Pg, and shouldn't delete postmaster.pid . A demonstration: To see exactly what happens when you kill -9 a backend, try these simple steps. Open two terminals, open psql in each, and in each run SELECT pg_backend_pid(); . In another terminal kill -9 one of the PIDs. Now run SELECT pg_backend_pid(); in both psql sessions again. Notice how they both lost their connections? Session 1, which we killed: $ psql regress
psql (9.1.4)
Type "help" for help.
regress=# select pg_backend_pid();
pg_backend_pid
----------------
6357
(1 row)
[kill -9 of session one happens at this point]
regress=# select pg_backend_pid();
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
regress=# select pg_backend_pid();
pg_backend_pid
----------------
6463
(1 row) Session 2, which was collateral damage: $ psql regress
psql (9.1.4)
Type "help" for help.
regress=# select pg_backend_pid();
pg_backend_pid
----------------
6283
(1 row)
[kill -9 of session one happens at this point]
regress=# select pg_backend_pid();
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
regress=# select pg_backend_pid();
pg_backend_pid
----------------
6464
(1 row) See how both sessions were broken? That's why you don't kill -9 a backend. | {
"source": [
"https://serverfault.com/questions/415188",
"https://serverfault.com",
"https://serverfault.com/users/30415/"
]
} |
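For reference, the supported way to stop a runaway query is from another session using the built-in admin functions (the PID and connection details are placeholders):

    psql -c "SELECT pg_cancel_backend(12345);"      # cancel just the current query, roughly a SIGINT
    psql -c "SELECT pg_terminate_backend(12345);"   # end the whole backend cleanly, roughly a SIGTERM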
415,289 | When I typically update DNS ("A" records) I will allow for an extended period of time for the changes to propagate throughout the root nameservers. Do I need to make this same allowance for updates and changes to CNAME records? | No you don't because DNS records don't propagate. What you do need to allow for is for any cached records to expire, based on the TTL of the record in question. If this is a new record, no caching can have occurred so the new record should be available and should resolve immediately. Additionally, the root servers (first level; .) don't host DNS zones or records for any third level domain names. The root servers know which name servers are responsible for the gTLD zones (second level; .com, .edu, etc.), which in turn know which name servers are responsible for your zone (third level; yourcompany), which in turn hold a copy of your zone file. No other DNS server holds a copy of your zone file or DNS records other than your name servers. . COM YOURCOMPANY | {
"source": [
"https://serverfault.com/questions/415289",
"https://serverfault.com",
"https://serverfault.com/users/102467/"
]
} |
415,323 | Say I need hand write some queries in the console, what's the most efficient way of executing multiline queries like CREATE TABLE statements? I am used to using Microsoft Management Studio, but I now find myself having to learn about PostgreSQL on the fly. | The following will take you to PostgreSQL's interactive terminal: $ psql <your database name> Then enter \e (or \edit ) to open an editor ( vi is default): # \e Write some query: select now(); Finally, save and quit your editor (e.g. :wq in vi ), and psql will run the query you just wrote. To set a different editor, such as vim or nano , set one of the following environment variables: PSQL_EDITOR , EDITOR , VISUAL . For more information, see https://www.postgresql.org/docs/current/app-psql.html and search for \e . | {
"source": [
"https://serverfault.com/questions/415323",
"https://serverfault.com",
"https://serverfault.com/users/81366/"
]
} |
415,458 | I'm trying to recompile PHP, but ./configure fails at: configure: error: Cannot find OpenSSL's <evp.h> I have LibSSL 1.0.0, LibSSL 0.9.8, LibSSL-Dev, and OpenSSL installed. I tried --with-openssl=/usr/include/openssl ; when I try with --with-openssl it tells me: configure: error: Cannot find OpenSSL's libraries Where the **** is the problem? P.S. PHP is 5.2.5, the OS is Ubuntu. | The same issue occurred on Ubuntu 12.04.1 LTS and it was solved by issuing: sudo apt-get install libcurl4-openssl-dev pkg-config | {
"source": [
"https://serverfault.com/questions/415458",
"https://serverfault.com",
"https://serverfault.com/users/75294/"
]
} |
415,533 | I receive Mailer Daemon messages saying certain emails fail. My domain is itaccess.org which is administered by Google apps. Is there any way I can identify who is sending emails from my domain, and how they are doing it without me creating an account for them? Delivered-To: [email protected]
Received: by 10.142.152.34 with SMTP id z34csp12042wfd;
Wed, 8 Aug 2012 07:12:46 -0700 (PDT)
Received: by 10.152.112.34 with SMTP id in2mr18229790lab.6.1344435165782;
Wed, 08 Aug 2012 07:12:45 -0700 (PDT)
Return-Path: <[email protected]>
Received: from smtp-gw.fsdata.se (smtp-gw.fsdata.se. [195.35.82.145])
by mx.google.com with ESMTP id b9si24888989lbg.77.2012.08.08.07.12.44;
Wed, 08 Aug 2012 07:12:45 -0700 (PDT)
Received-SPF: neutral (google.com: 195.35.82.145 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=195.35.82.145;
Authentication-Results: mx.google.com; spf=neutral (google.com: 195.35.82.145 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
Received: from www20.aname.net (www20.aname.net [89.221.250.20])
by smtp-gw.fsdata.se (8.14.3/8.13.8) with ESMTP id q78EChia020085
for <[email protected]>; Wed, 8 Aug 2012 16:12:43 +0200
Received: from www20.aname.net (localhost [127.0.0.1])
by www20.aname.net (8.14.3/8.14.3) with ESMTP id q78ECgQ1013882
for <[email protected]>; Wed, 8 Aug 2012 16:12:42 +0200
Received: (from whao@localhost)
by www20.aname.net (8.14.3/8.12.0/Submit) id q78ECgKn013879;
Wed, 8 Aug 2012 16:12:42 +0200
Date: Wed, 8 Aug 2012 16:12:42 +0200
Message-Id: <[email protected]>
To: [email protected]
References: <20120808171231.CAC5128A79D815BC08430@USER-PC>
In-Reply-To: <20120808171231.CAC5128A79D815BC08430@USER-PC>
X-Loop: [email protected]
From: [email protected]
Subject: whao.se: kontot avstängt - account closed
X-FS-SpamAssassinScore: 1.8
X-FS-SpamAssassinRules: ALL_TRUSTED,DCC_CHECK,FRT_CONTACT,SUBJECT_NEEDS_ENCODING
Detta är ett automatiskt svar från F S Data - http://www.fsdata.se
Kontot för domänen whao.se är tillsvidare avstängt.
För mer information, kontakta [email protected]
Mvh,
/F S Data
-----
This is an automatic reply from F S Data - http://www.fsdata.se
The domain account "whao.se" is closed.
For further information, please contact [email protected]
Best regards,
/F S Data | Since it hasn't been explicitly stated yet, I'll state it. No one's using your domain to send spam. They're using spoofed sender data to generate an email that looks like it's from your domain. It's about as easy as putting a fake return address on a piece of postal mail, so no, there's really no way to stop it. SPF (as suggested) can make it easier for other mail servers to identify email that actually comes from your domain and email that doesn't, but just like you can't stop me from putting your postal address as the return address on all the death threats I mail, you can't stop someone from putting your domain as the reply-to address on their spam. SMTP just wasn't designed to be secure, and it isn't. | {
"source": [
"https://serverfault.com/questions/415533",
"https://serverfault.com",
"https://serverfault.com/users/90736/"
]
} |
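SPF will not stop anyone from forging your address, but it lets receiving servers reject such forgeries. A sketch of what a record for the domain in the question might look like when Google Apps is the only legitimate sender (adjust the include and the ~all/-all policy to your actual setup):

itaccess.org.   IN   TXT   "v=spf1 include:_spf.google.com ~all"

$ dig +short TXT itaccess.org     # check what is actually published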
415,538 | I need a bit of help with an alias on a folder in nginx. I have my folder www/ containing my site example.com and a lot of folders like client0, client1, client2...
I should NOT modify www/example/, but I need example.com/serveur0/ to be mapped to www/client0/. I made an nginx rule like this: location /serveur0/ {
alias /www/client0/;
index index.php
location ~ /serveur0/(.*\.php)$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$1;
include /etc/nginx/fastcgi_params;
}
} and it works perfectly.
But I have some issues when I try to generalize it using a regex. I tried this: location /serveur([0-9]+)$/ {
alias /www/client$1/;
index index.php
location ~ /serveur$1/(.*\.php)$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$1;
include /etc/nginx/fastcgi_params;
}
} And it doesn't work, and I fail to understand why. Could you help me? | | {
"source": [
"https://serverfault.com/questions/415538",
"https://serverfault.com",
"https://serverfault.com/users/131418/"
]
} |
416,205 | I am trying to test whether I can get to a particular port on a remote server (both of which I have access to) through UDP. Both servers are internet facing.
I am using netcat to have a certain port listening. I then use nmap to check for that port to see if it is open, but it doesn't appear to be. Iptables is turned off. Any suggestions why this could be? I am eventually going to setup a VPN tunnel, but because I'm very new to tunnels, I want to make sure I have connectivity on port UDP 1194 before advancing. | There is no such thing as an "open" UDP port, at least not in the sense most people are used to think (which is answering something like "OK, I've accepted your connection"). UDP is session-less, so "a port" (read: the UDP protocol in the operating system IP stack) will never respond "success" on its own. UDP ports only have two states: listening or not. That usually translates to "having a socket open on it by a process" or "not having any socket open". The latter case should be easy to detect since the system should respond with an ICMP Destination Unreachable packet with code=3 (Port unreachable). Unfortunately many firewalls could drop those packets so if you don't get anything back you don't know for sure if the port is in this state or not.
And let's not forget that ICMP is session-less too and doesn't do retransmissions: the Port Unreachable packet could very well be lost somewhere on the net. A UDP port in the "listening" state may not respond at all (the process listening on it just receives the packet and doesn't transmit anything) or it could send something back (if the process does act upon reception and if it acts by responding via UDP to the original sender IP:port). So again, you never know for sure what's the state if you don't get anything back. You say you can have control of the receiving host: that makes you able to construct your own protocol to check UDP port reachability: just put a process on the receiving host that'll listen on the given UDP port and respond back (or send you an email, or just freak out and unlink() everything on the host file system... anything that'll trigger your attention will do). | {
"source": [
"https://serverfault.com/questions/416205",
"https://serverfault.com",
"https://serverfault.com/users/85440/"
]
} |
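A simple way to build the check described above with the tools already mentioned, assuming netcat is installed on both hosts and UDP 1194 is the port of interest:

# on the remote server
$ nc -u -l 1194          # some netcat variants want: nc -u -l -p 1194

# on the client
$ echo ping | nc -u remote.example.com 1194

If "ping" shows up on the server side, UDP 1194 is reachable end to end; if nothing arrives, something in between is dropping it.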
416,236 | Hi, I know this is basic, but can anyone tell me what the default user/group names are for Apache 2 on CentOS 5/6, please? Google is just giving me junk. | apache. You can check with: # egrep -i '^user|^group' /etc/httpd/conf/httpd.conf
User apache
Group apache | {
"source": [
"https://serverfault.com/questions/416236",
"https://serverfault.com",
"https://serverfault.com/users/103543/"
]
} |
416,412 | I am fairly new to server administration, and I have seen a lot of sites recommending to assign sudo privileges to a user created by the root user and giving the root user an insanely long password for security enhancement. If the newly created user can perform the same functions as a root user however, what is the actual benefit of doing this at all? | There are several benefits to using sudo over handing out the root password. In no particular order: You aren't giving out your root password As a general rule, if someone leaves your company and they knew the root password(s) you now have to go change those passwords everywhere. With proper configuration management this is a minor annoyance. Without it it's a huge chore. You aren't giving away the keys to the kingdom sudo allows you to specify a restricted list of commands that users can run, so if you decide that Alice only needs the ability to stop and start Apache, but Bob needs full root rights you can set them up accordingly. You can manage authorization centrally sudo supports LDAP configuration, which means every system in your company can look at a central LDAP server to determine who is allowed to do what. Need to authorize (or de-authorize) someone? Change the sudoers configuration in LDAP and all your systems are updated at once. There's an audit trail With the exception of users that are allowed to do sudo su - , sudo sh , or something equivalent, sudo will produce an audit trail of which user ran what commands. (It will also produce a list of the people who gave themselves an unlogged root shell, so you can point your finger at them and hiss in disapproval.) sudo is good for more than just root Everyone concentrates on sudo as a way to do stuff as the su peruser, but that's not all it's good for. Say Alice is responsible for a particular software build, but Bob should be able to run the build script too. You can give Bob an entry in sudoers that lets him run the build script as Alice's user. (Yes, sure, there are much better ways to deal with this particular case, but the principle of Let user A run a program as user B can be useful...). You also get all the same audit-trail benefits that I mentioned above when you do this... | {
"source": [
"https://serverfault.com/questions/416412",
"https://serverfault.com",
"https://serverfault.com/users/43197/"
]
} |
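A small sudoers sketch of the two scenarios mentioned in the answer (the command paths are made-up examples; add entries with visudo rather than editing /etc/sudoers directly):

# /etc/sudoers.d/examples
alice   ALL = /usr/sbin/apachectl
bob     ALL = (alice) /usr/local/bin/build.sh

With that in place, alice can run sudo apachectl graceful but nothing else as root, and bob can run sudo -u alice /usr/local/bin/build.sh without knowing alice's password.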
416,571 | I'm trying to compile nginx from source with the SSL module enabled. When I run this command: ./configure --with-http_ssl_module it does its usual checks to see if everything is installed correctly, and then this pops up: checking for OpenSSL library ... not found ./configure: error: SSL modules require the OpenSSL library. You can
either do not enable the modules, or install the OpenSSL library into
the system, or build the OpenSSL library statically from the source
with nginx by using --with-openssl= option. I know for a fact that OpenSSL is installed, because when I do openssl version I get OpenSSL 1.0.1 14 Mar 2012 So I'm pretty stumped. I thought maybe OpenSSL isn't installed in its default location, which is why nginx can't find it, but I have no idea where this is as it came pre-installed with the server. How can I find out where this is? The server is running Ubuntu 12.04 LTS. Thanks. | Most likely you're missing the libssl-dev package. But why not save yourself all the trouble and just use a PPA for nginx? | {
"source": [
"https://serverfault.com/questions/416571",
"https://serverfault.com",
"https://serverfault.com/users/131828/"
]
} |
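On Ubuntu 12.04 the files nginx's configure script wants come from libssl-dev, so a source build typically goes:

$ sudo apt-get install libssl-dev
$ ./configure --with-http_ssl_module
$ make && sudo make install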
416,612 | I am configuring SSL for Apache 2 . My system is Ubuntu Server 10.04 LTS . I have the following settings related to SSL in my vhost configuration: SSLEngine On
SSLCertificateKeyFile /etc/ssl/private/server.insecure.key
SSLCertificateFile /etc/ssl/certs/portal.selfsigned.crt (Side note: I am using .insecure for the key file because the file is not passphrase-protected, and I like to clearly see that it is an insecure key file) So, when I restart apache I get the following message: Syntax error on line 39 of /etc/apache2/sites-enabled/500-portal-https:
SSLCertificateKeyFile: file '/etc/ssl/private/server.insecure.key' does not exist or is empty
Error in syntax. Not restarting. But the file is there, and is not empty (actually it contains a private key): sudo ls -l /etc/ssl/private/server.insecure.key
-rw-r----- 1 root www-data 887 2012-08-07 15:14 /etc/ssl/private/server.insecure.key
sudo ls -ld /etc/ssl/private/
drwx--x--- 2 root www-data 4096 2012-08-07 13:02 /etc/ssl/private/ I have tried changing the ownership, using two groups www-data and ssl-cert. I am not sure which is the right one in Ubuntu: by default Ubuntu uses ssl-cert, but on the other hand the apache processes run with user www-data: it is started by user root, but changes to www-data at some point, and I am not sure when the certificates are read. But anyway, changing the group owner has not improved the situation. My questions are: What else could I try to get this working? How can I verify that my keyfile is a valid keyfile? How can I verify that the keyfile and the certificate ( /etc/ssl/certs/portal.selfsigned.crt ) work together? I think that Apache is giving a misleading error message, and I would like to pinpoint the error. | I found the error. It was because I am using a script to set up the certificates, and one of the steps I am performing is apache2ctl configtest . The error was coming from this command, and not from the apache restart, which was what was misleading me. Since I was running the apache2ctl command as a normal user, it had no access to the key files, and thus the error message. The lesson: make sure all your apache commands are run with sudo, even the ones which are only intended for syntax verification ( apache2ctl ), since they also need access to the keys. | {
"source": [
"https://serverfault.com/questions/416612",
"https://serverfault.com",
"https://serverfault.com/users/91978/"
]
} |
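The asker's second and third questions (is the key valid, and do key and certificate belong together?) can be answered with standard OpenSSL checks, run with sudo for the same permission reasons:

$ sudo openssl rsa -check -noout -in /etc/ssl/private/server.insecure.key
$ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/portal.selfsigned.crt | openssl md5
$ sudo openssl rsa -noout -modulus -in /etc/ssl/private/server.insecure.key | openssl md5

If the two modulus hashes match, the key and certificate are a pair.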
416,708 | I'm using OpenVPN through Tunnelblick on MacOS X Lion. I need to set specific DNS (with a local IP, which works only when the VPN is up) for the duration of this VPN session only. I do not have access to the OpenVPN server configuration, only the client config. Also, DNS from the server doesn't work. So it works like this: I connect to the VPN, go to the Network preferences and manually set DNS.
After the VPN is disconnected, I switch back to the default. It works, but it needs to be automatic. After some exploration I found that OpenVPN up- and down-scripts might help me with that. Unfortunately, I haven't found any specific documentation about how exactly it can be done. Can it be done, and if so, how? Any advice would be appreciated! | Try adding: # put the actual DNS server IP here
dhcp-option DNS 10.11.12.13 to your client's config | {
"source": [
"https://serverfault.com/questions/416708",
"https://serverfault.com",
"https://serverfault.com/users/131873/"
]
} |
416,779 | This is a simple issue that we all face and probably resolve manually without giving much thought. As servers change, are re-provisioned, or IP addresses reallocated, we receive the SSH host verification message below. I'm interested in streamlining the workflow to resolve these ssh identification errors. Given the following message, I typically vi /root/.ssh/known_hosts +434 and remove ( dd ) the offending line. I've seen developers/users in other organizations delete their entire known_hosts file out of frustration in seeing this message. While I don't go that far, I know there's a more elegant way to handle this. Tips? [root@xt ~]# ssh las-db1
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
ed:86:a2:c4:cd:9b:c5:7a:b1:2b:cc:42:15:76:8c:56.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:434
RSA host key for las-db1 has changed and you have requested strict checking.
Host key verification failed. | You can use the ssh-keygen command to remove specific entries by host: ssh-keygen -R las-db1 If you don't have that command, you could always use sed: sed -i '/las-db1/d' /root/.ssh/known_hosts | {
"source": [
"https://serverfault.com/questions/416779",
"https://serverfault.com",
"https://serverfault.com/users/13325/"
]
} |
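After removing the stale entry the next connection simply prompts for the new key; if you would rather pre-populate it (and can verify the fingerprint some other way), ssh-keyscan does that:

$ ssh-keygen -R las-db1
$ ssh-keyscan -t rsa las-db1 >> ~/.ssh/known_hosts    # only after checking the fingerprint out of band
$ ssh las-db1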
417,140 | On Ubuntu, I cannot convert certificate using openssl successfully. vagrant@dev:/vagrant/keys$ openssl pkcs7 -print_certs -in a.p7b -out a.cer
unable to load PKCS7 object <blah blah>:PEM
routines:PEM_read_bio:no start line:pem_lib.c:696:Expecting: PKCS7 Have you seen this error before? | Try this: $ openssl pkcs7 -inform der -in a.p7b -out a.cer If it doesn't work, bring the file to a Windows machine and export it following this guide. | {
"source": [
"https://serverfault.com/questions/417140",
"https://serverfault.com",
"https://serverfault.com/users/132002/"
]
} |
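To confirm the conversion produced a usable certificate, the resulting PEM file can be inspected with openssl (this prints the first certificate if the P7B contained a chain):

$ openssl x509 -in a.cer -noout -subject -issuer -dates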
417,173 | What is the best way to turn on HTTP Strict Transport Security on an IIS 7 web server? Can I just go through the GUI and add the proper HTTP response header, or should I be using appcmd, and if so, what switches? | This allows us to handle both the HTTP redirect and add the Strict-Transport-Security header to HTTPS responses with a single IIS site (URL Rewrite module has to be installed): <?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="HTTP to HTTPS redirect" stopProcessing="true">
<match url=".*" />
<conditions>
<add input="{HTTPS}" pattern="off" ignoreCase="true" />
</conditions>
<action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}"
redirectType="Permanent" />
</rule>
</rules>
<outboundRules>
<rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
<match serverVariable="RESPONSE_Strict_Transport_Security"
pattern=".*" />
<conditions>
<add input="{HTTPS}" pattern="on" ignoreCase="true" />
</conditions>
<action type="Rewrite" value="max-age=31536000; includeSubDomains; preload" />
</rule>
</outboundRules>
</rewrite>
</system.webServer>
</configuration> | {
"source": [
"https://serverfault.com/questions/417173",
"https://serverfault.com",
"https://serverfault.com/users/8396/"
]
} |
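Whichever method is used, it is worth confirming from outside that HTTPS responses actually carry the header; the hostname below is a placeholder:

$ curl -sI https://www.example.com/ | grep -i strict-transport-security
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload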
417,178 | I have a domain www.mydomain.com. It is hosted on Apache. When I hit this domain, it takes me to the Apache default page. Now my domain works as follows: www.domain.com/support
www.domain.com/support/admin
www.domain.com/support/staff
www.domain.com/support/support I want it to be www.domain.com
www.domain.com/admin
www.domain.com/staff
www.domain.com/support Support is the name of my application hosted in Apache. Here are my vhost entries: <VirtualHost *:80>
# ServerAdmin [email protected]
DocumentRoot /var/www/vhosts/www.domain.com/
ServerName www.domain.com
ErrorLog logs/www.domain.com-error_log
CustomLog logs/www.domain.com-access_log common
</VirtualHost> What should I edit? How shall I do this? Should I remove the '/support/' thing from my link? | | {
"source": [
"https://serverfault.com/questions/417178",
"https://serverfault.com",
"https://serverfault.com/users/131274/"
]
} |
417,230 | How can I configure the database server on our development server so that new databases are created with the Simple recovery model by default? Currently, if we remember, when creating a database we have to click on the Options tab and select Simple. In a previous version of SQL Server I remember that I could set Simple as the default for new databases. How can this be set for SQL Server 2012? | Change the recovery model of the database named "model". From this MSDN doc : A new database inherits its recovery model from the model database. The default recovery model of the model database depends on the edition of SQL Server. But this can be changed by anyone that has ALTER permission on the database. | {
"source": [
"https://serverfault.com/questions/417230",
"https://serverfault.com",
"https://serverfault.com/users/86499/"
]
} |
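A one-liner that makes the change described above; run it with a login that has ALTER permission on model (the server name is a placeholder):

sqlcmd -S localhost -E -Q "ALTER DATABASE [model] SET RECOVERY SIMPLE"

Every database created after that inherits the Simple recovery model.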
417,241 | Given ANY GitHub repository url string like: git://github.com/some-user/my-repo.git or [email protected]:some-user/my-repo.git or https://github.com/some-user/my-repo.git What is the best way in bash to extract the repository name my-repo from any of the following strings? The solution MUST work for all types of urls specified above. Thanks. | $ url=git://github.com/some-user/my-repo.git
$ basename=$(basename $url)
$ echo $basename
my-repo.git
$ filename=${basename%.*}
$ echo $filename
my-repo
$ extension=${basename##*.}
$ echo $extension
git | {
"source": [
"https://serverfault.com/questions/417241",
"https://serverfault.com",
"https://serverfault.com/users/65061/"
]
} |
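The same steps wrapped in a small function that copes with all three URL styles from the question (strip a trailing .git if present, then take the basename):

repo_name() {
    local url=$1
    basename "${url%.git}"
}

repo_name git://github.com/some-user/my-repo.git      # my-repo
repo_name git@github.com:some-user/my-repo.git        # my-repo
repo_name https://github.com/some-user/my-repo.git    # my-repo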
417,696 | I want to run a cookbook_file resource only if the current environment is "dev". How can this be expressed? The documentation suggests this: In a recipe, a code block like this would be useful: qa_nodes = search(:node,"chef_environment:QA")
qa_nodes.each do |qa_node|
# Do useful specific to qa nodes only
end But I'm not sure that's what I want - the fact it's a loop seems wrong. | Look in the chef_environment Ruby attribute (not a regular Chef attribute) on the node: if node.chef_environment == "dev"
# stuff
end | {
"source": [
"https://serverfault.com/questions/417696",
"https://serverfault.com",
"https://serverfault.com/users/68259/"
]
} |
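Tying that back to the original cookbook_file question, a minimal recipe fragment (the file paths are placeholders):

if node.chef_environment == "dev"
  cookbook_file "/etc/myapp/settings.conf" do
    source "settings-dev.conf"
    mode "0644"
  end
end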
417,810 | I'm not quite clear on the difference between ServerName and ServerAlias. It looks like both of them work as host name settings, except that ServerAlias only works within the <VirtualHost> tag. That is, I can do: ServerName www.domain1.com
ServerName www.domain2.com or: <VirtualHost *:80>
ServerName www.domain1.com
ServerName www.domain2.com
</VirtualHost> and both domains work on the same box. Can I use either ServerName or ServerAlias in this case? | The ServerName directive is Hostname and port that the server uses to identify itself Whilst ServerAlias is Alternate names for a host used when matching requests to name-virtual hosts Given a vhost configured like ...
ServerName example.com
ServerAlias www.example.com foo.example.com *.somewherelse.org
... apache would respond to example.com, www.example.com, foo.example.com, and anything in .somewherelse.org with this VirtualHost | {
"source": [
"https://serverfault.com/questions/417810",
"https://serverfault.com",
"https://serverfault.com/users/108283/"
]
} |
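Two quick ways to see which vhost will answer for a given name, using the example hostnames from the answer; neither requires DNS to be in place:

$ apachectl -S                                  # apache2ctl -S on Debian/Ubuntu; lists the parsed vhosts
$ curl -s -o /dev/null -w "%{http_code}\n" -H "Host: foo.example.com" http://127.0.0.1/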
418,019 | I have a Windows 2008 Enterprise on RAID 10 running Active Directory, Hosted Exchange, and a web server on HyperV VMs. Do I need a virtual RAID for Exchange? If so, why? Edit: thanks everyone for the answer. Very helpful! | Generally no, this is not a good thing to do. Let your underlying storage do the RAID and don't add software RAID unless you have a compelling edge case, and even then you probably should reconsider your design. It will increase overhead, decrease performance, and not add a whole lot of benefit. Software RAID has its place. That place isn't on top of hardware RAID. | {
"source": [
"https://serverfault.com/questions/418019",
"https://serverfault.com",
"https://serverfault.com/users/131154/"
]
} |
418,101 | in Apache on Ubuntu I've set up a vhost, but in the browser I keep getting a "403 Access forbidden" error; the log says " Client denied by server configuration: /home/remix/ ". Looking for the solution online I found many posts about the directory access (Allow from all, etc), but as far as I know I already did that. In httpd-vhosts.conf there is the following code: NameVirtualHost *:80
<VirtualHost *:80>
ServerAdmin [email protected]
DocumentRoot "/opt/lampp/htdocs/"
ServerName localhost
ServerAlias localhost
ErrorLog "logs/dummy-host.example.com-error_log"
CustomLog "logs/dummy-host.example.com-access_log" common
</VirtualHost>
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot "/home/remix/"
ServerName testproject
ServerAlias testproject
<Directory "/home/remix/">
Options Indexes FollowSymLinks Includes ExecCGI
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost> I've also added 127.0.0.1 testproject to the /etc/hosts file. Also, the /home/remix/ folder contains an index.html file and vhosts are enabled in httpd.conf. Is there anything I'm not seeing? Edit: This is the Apache error_log entry: [Sat Aug 18 09:15:32.666938 2012] [authz_core:error] [pid 6587]
[client 127.0.0.1:38873] AH01630: client denied by server configuration: /home/remix/ | Change your authorization configuration: <Directory /home/remix/>
#...
Order allow,deny
Allow from all
</Directory> ...to the Apache 2.4 version of the same. <Directory /home/remix/>
#...
Require all granted
</Directory> Review the upgrading overview document for information on other changes you might need to make - and be aware that most of the config examples and assistance that you find out there on Google (as well as on this site) is referring to 2.2. | {
"source": [
"https://serverfault.com/questions/418101",
"https://serverfault.com",
"https://serverfault.com/users/132354/"
]
} |
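If you are unsure which syntax applies, check the Apache version first and re-test after the change (commands as they would look on Ubuntu):

$ apache2 -v                    # Server version: Apache/2.4.x means use "Require all granted"
$ sudo apache2ctl configtest    # should report Syntax OK
$ sudo service apache2 reload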
418,599 | I realize how some might think this isn't exactly constructive, but I was wondering how come you can't resolve com, org, us, ru, or any other top-level domain? I am taking this as a learning exercise because there might be some holes in my understanding of how DNS works. For example, I tried: nslookup com
Server: dns.server.com
Address: 123.123.123.123
*** dns.server.com cant find com: Non-existent domain I always thought that all other sites under the .com top level domain depended on the existence of an actual domain name called com . At the very least, I thought it kept track of existing domains under the .com domain. What am I missing? | They do depend on com. - but it does not have an A record and you can't look it up like that. Try looking for the NS record instead: nslookup
> set type=NS
> com.
Server: 12.12.12.12
Address: 12.12.12.12#53
Non-authoritative answer:
com nameserver = b.gtld-servers.net.
com nameserver = f.gtld-servers.net.
com nameserver = j.gtld-servers.net.
com nameserver = g.gtld-servers.net.
com nameserver = k.gtld-servers.net.
com nameserver = e.gtld-servers.net.
com nameserver = l.gtld-servers.net.
com nameserver = d.gtld-servers.net.
com nameserver = i.gtld-servers.net.
com nameserver = m.gtld-servers.net.
com nameserver = a.gtld-servers.net.
com nameserver = h.gtld-servers.net.
com nameserver = c.gtld-servers.net.
Authoritative answers can be found from:
b.gtld-servers.net internet address = 192.33.14.30
b.gtld-servers.net has AAAA address 2001:503:231d::2:30
f.gtld-servers.net internet address = 192.35.51.30
j.gtld-servers.net internet address = 192.48.79.30
g.gtld-servers.net internet address = 192.42.93.30
k.gtld-servers.net internet address = 192.52.178.30
e.gtld-servers.net internet address = 192.12.94.30
l.gtld-servers.net internet address = 192.41.162.30
d.gtld-servers.net internet address = 192.31.80.30
i.gtld-servers.net internet address = 192.43.172.30
m.gtld-servers.net internet address = 192.55.83.30
a.gtld-servers.net internet address = 192.5.6.30
a.gtld-servers.net has AAAA address 2001:503:a83e::2:30
h.gtld-servers.net internet address = 192.54.112.30
c.gtld-servers.net internet address = 192.26.92.30 This will give you the gtld-servers which are authoritative for com. and on which you are directed to next set of nameservers for a domain. If you have dig, try dig +trace com. if not, then visit http://www.digwebinterface.com/?hostnames=com.&type=&trace=on&ns=resolver&useresolver=8.8.4.4&nameservers= which will show you the output and the route from root level (.) until the NS that gives you the NXDOMAIN response. | {
"source": [
"https://serverfault.com/questions/418599",
"https://serverfault.com",
"https://serverfault.com/users/81366/"
]
} |
418,600 | We use Exchange 2010 SP2 on a Windows Server 2008 R2 box. Constantly throughout the day people here/outside the office are asked to enter their usernames/passwords. It syncs with the AD account info. I know there's an issue when users are wireless and they unplug the physical LAN. Even though the connection is maintained while it defaults to the physical LAN, it kicks back the username/password prompt. (They all use Outlook 2010.) Sometimes the phones (Droid X/iPhone etc.) prompt as well. Has anyone experienced this issue? | | {
"source": [
"https://serverfault.com/questions/418600",
"https://serverfault.com",
"https://serverfault.com/users/123640/"
]
} |
418,611 | We have an SMTP relay (just an XP box with SMTP) on our network. It's hanging around because some legacy apps used to use it to send emails from code. It hardly gets any traffic; I can see from the SMTP log that it's only used every few days, if that. Before I turn it off, I want to track where the emails are coming from (I can see the originating server, but I want to be able to see the SMTP header to get sender, recipient and, if possible, the body.) How can I do this over a long period of time? I thought about Wireshark, but am planning on leaving it running for a couple of weeks. Is this manageable, or is there a better solution? | | {
"source": [
"https://serverfault.com/questions/418611",
"https://serverfault.com",
"https://serverfault.com/users/42346/"
]
} |
418,709 | I'm currently trying to get nginx to add a header to the response when it is sending some kind of 50* error. I already have an add_header directive on the http block, and that gets respected for all requests except, it seems, errors. I also tried the following in one of my vhosts: location /mediocregopheristhecoolest {
add_header X-Test "blahblahblah";
return 502;
} Going to that page gives me a 502, but no header. Is this simply something nginx doesn't do, or am I doing it wrong? | The documentation states that add_header "Adds the specified field to a response header provided that the response code equals 200, 204, 206, 301, 302, 303, 304, or 307. A value can contain variables." So it doesn't work with a 502. I forgot to add that you can use the third party headers more module to add headers to other codes. You'll probably have to recompile to add it, though. | {
"source": [
"https://serverfault.com/questions/418709",
"https://serverfault.com",
"https://serverfault.com/users/76697/"
]
} |
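Two common ways around that limitation, shown against the location from the question; the first needs the third-party headers-more module compiled in, the second needs nginx 1.7.5 or newer, which post-dates this answer:

location /mediocregopheristhecoolest {
    more_set_headers "X-Test: blahblahblah";
    return 502;
}

# or, on nginx >= 1.7.5
add_header X-Test "blahblahblah" always;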
418,716 | Using two TP-Link WA-5210G, I am trying to create a network bridge between two networks, netA and netB. I want to share my internet connection from netA to netB, using the AP from netB. netA :: Wireless ADSL modem / router : 192.168.1.1
DHCP disabled
Wireless AP in client mode, LAN connected with ADSL router @ 192.168.1.254
netB :: Wireless AP in AP mode : 192.168.1.253
DHCP disabled
Other wireless clients with static manual IP addresses. For the default gateway and DNS at the clients in netB I set the IP of the modem / router, 192.168.1.1, and I have confirmed that a client can ping the router. But still no internet. What remains to be done? | | {
"source": [
"https://serverfault.com/questions/418716",
"https://serverfault.com",
"https://serverfault.com/users/131715/"
]
} |
418,931 | I just set up my SFTP server and it works fine when I use it from my first user account.
I wanted to add a user which we will call 'magnarp'.
At first I did it like this in sshd_config: Subsystem sftp internal-sftp
Match group sftponly
ChrootDirectory /home/%u
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp That worked fine enough, user magnarp went into his home directory.
I then tried to add a symbolic link to it. home$ sudo ln -s /home/DUMP/High\ Defenition/ /home/magnarp/"High Defenition" The symlink worked fine via SSH but not over SFTP. So what I want to do now is to Chroot group sftponly to /home/DUMP
and I did it like this: Match group sftponly
ChrootDirectory /home/DUMP
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp The DUMP folder have permissions as follows. drwxrwxrwx 5 root root 4096 aug 18 02:25 DUMP And this is the error code: Aug 18 16:40:29 nixon-01 sshd[7346]: Connection from 192.168.1.198 port 51354
Aug 18 16:40:30 nixon-01 sshd[7346]: Accepted password for magnarp from 192.168.1.198 port 51354 ssh2
Aug 18 16:40:30 nixon-01 sshd[7346]: pam_unix(sshd:session): session opened for user magnarp by (uid=0)
Aug 18 16:40:30 nixon-01 sshd[7346]: User child is on pid 7467
Aug 18 16:40:30 nixon-01 sshd[7467]: fatal: bad ownership or modes for chroot directory "/home/DUMP"
Aug 18 16:40:30 nixon-01 sshd[7346]: pam_unix(sshd:session): session closed for user magnarp | sshd has a certain level of paranoia when it comes to chroot directories. I do not think this can be disabled (even with StrictModes no ). The chroot directory and all parent directories must be properly set : The chroot directory and all of its parents must not have group or world write capabilities (ie chmod 755 ) The chroot directory and all of its parents must be owned by root. In your case the login error can be fixed with chmod 755 /home/DUMP Your apparent intent to have a world-writable directory that sftpuser can log into and everyone can put files in can be solved by making that directory a subdirectory of /home/DUMP/ | {
"source": [
"https://serverfault.com/questions/418931",
"https://serverfault.com",
"https://serverfault.com/users/132656/"
]
} |
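A concrete way to apply that to the paths in the question: keep the chroot itself root-owned and non-writable, and give the group a writable subdirectory (the files name is just an example):

$ sudo chown root:root /home/DUMP
$ sudo chmod 755 /home/DUMP
$ sudo mkdir -p /home/DUMP/files
$ sudo chown root:sftponly /home/DUMP/files
$ sudo chmod 775 /home/DUMP/files

Users in sftponly then land in /home/DUMP and can write inside files/.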
419,402 | Can somebody point out the similarities and differences between the normal CNAME record and Amazon's Route 53 ALIAS record. ? | Both CNAMEs and alias records provide a level of indirection, i.e. it's a pointer to another location which requires an additional step to find the answer. The difference is who performs this additional step. With CNAME records the additional step is done by the client. The server simply returns the configured value of the CNAME record, and the client is responsible for then looking up that name to find the A/AAAA record. With alias records the additional step is done by the server. The server takes the configured value of the record and actively resolves this to find the A/AAAA record. It then returns this result to the client as an A/AAAA record, and the client doesn't need to do anything to get the final answer. The client doesn't even know that the server did this, it simply sees a plain A/AAAA record. The Route53 documentation has more detail on alias records. At the moment alias records can only point at ELB hostnames or at a hostname in the same zone. | {
"source": [
"https://serverfault.com/questions/419402",
"https://serverfault.com",
"https://serverfault.com/users/95256/"
]
} |
419,407 | This is a Canonical Question about Fighting Spam. Also related: How to stop people from using my domain to send spam? What are SPF records, and how do I configure them? There are so many techniques and so much to know about fighting SPAM. What widely used techniques and technologies are available to Administrator, Domain Owners, and End Users to help keep the junk out of our inboxes? We're looking for an answer that covers different tech from various angles. The accepted answer should include a variety of technologies (eg SPF/SenderID, DomainKeys/DKIM, Graylisting, DNS RBLs, Reputation Services, Filtering Software [SpamAssassin, etc]); best practices (eg mail on Port 25 should never be allowed to relay, Port 587 should be used; etc), terminology (eg, Open Relay, Backscatter, MSA/MTA/MUA, Spam/Ham), and possibly other techniques. | To defeat your enemy, you must know your enemy. What is spam? For our purposes, spam is any unsolicited bulk electronic message. Spam these days is intended to lure unsuspecting users into visiting a (usually shady) web site where they will be asked to buy products, or have malware delivered to their computers, or both. Some spam will deliver malware directly. It may surprise you to learn that the first spam was sent in 1864. It was an advertisement for dental services, sent via Western Union telegram. The word itself is a reference to a scene in Monty Python's Flying Circus . Spam, in this case, does not refer to mailing list traffic a user subscribed to, even if they changed their minds later (or forgot about it) but have not actually unsubscribed yet. Why is spam a problem? Spam is a problem because it works for the spammers . Spam typically generates more than enough sales (or malware delivery, or both) to cover the costs -- to the spammer -- of sending it. The spammer does not consider the costs to the recipient, you and your users. Even when a tiny minority of users receiving spam respond to it, it's enough. So you get to pay the bills for bandwidth, servers, and administrator time to deal with incoming spam. We block spam for these reasons: we don't want to see it, to reduce our costs of handling email, and to make spamming more expensive for the spammers. How does spam work? Spam typically is delivered in different ways from normal, legitimate email. Spammers almost always want to obscure the origin of the email, so a typical spam will contain fake header information. The From: address is usually fake. Some spam includes fake Received: lines in an attempt to disguise the trail. A lot of spam is delivered via open SMTP relays, open proxy servers and botnets. All of these methods make it more difficult to determine who originated the spam. Once in the user's inbox, the purpose of the spam is to entice the user to visit the advertised web site. There, the user will be enticed to make a purchase, or the site will attempt to install malware on the user's computer, or both. Or, the spam will ask the user to open an attachment which contains malware. How do I stop spam? As a system administrator of a mail server, you will configure your mail server and domain to make it more difficult for spammers to deliver their spam to your users. I will be covering issues specifically focused on spam and may skip over things not directly related to spam (such as encryption). Don't run an open relay The big mail server sin is to run an open relay , a SMTP server which will accept mail for any destination and deliver it onward. 
Spammers love open relays because they virtually guarantee delivery. They take on the load of delivering messages (and retrying!) while the spammer does something else. They make spamming cheap . Open relays also contribute to the problem of backscatter. These are messages which were accepted by the relay but then found to be undeliverable. The open relay will then send a bounce message to the From: address which contains a copy of the spam. Configure your mail server to accept incoming mail on port 25 only for your own domain(s). For most mail servers, this is the default behavior, but you at least need to tell the mail server what your domains are. Test your system by sending your SMTP server a mail from outside your network where both the From: and To: addresses are not within your domain. The message should be rejected. (Or, use an online service like MX Toolbox to perform the test, but be aware that some online services will submit your IP address to blacklists if your mail server fails the test.) Reject anything that looks too suspicious Various misconfigurations and errors can be a tip-off that an incoming message is likely to be spam or otherwise illegitimate. Mark as spam or reject messages for which the IP address has no reverse DNS (PTR record). Treat the lack of a PTR record more harshly for IPv4 connections than for IPv6 connections, as many IPv6 addresses do not yet have reverse DNS, and may not for several years, until DNS server software is better able to handle these potentially very large zones. Reject messages for which the domain name in the sender or recipient addresses does not exist. Reject messages which do not use fully qualified domain names for the sender or recipient domains, unless they originate within your domain and are meant to be delivered within your domain (e.g. monitoring services). Reject connections where the other end does not send a HELO / EHLO . Reject connections where the HELO / EHLO is: not a fully qualified domain name and not an IP address blatantly wrong (e.g. your own IP address space) Reject connections which use pipelining without being authorized to do so. Authenticate your users Mail arriving at your servers should be thought of in terms of inbound mail and outbound mail. Inbound mail is any mail arriving at your SMTP server which is ultimately destined for your domain; outbound mail is any mail arriving at your SMTP server which will be transferred elsewhere before being delivered (eg. it's going to another domain). Inbound mail can be handled by your spam filters, and may come from anywhere but must always be destined for your users. This mail can't be authenticated, because it is not possible to give credentials to every site which might send you mail. Outbound mail, that is, mail which will be relayed, must be authenticated. This is the case whether it comes from the Internet or from inside your network (though you should restrict the IP address ranges allowed to use your mailserver if operationally possible); this is because spambots might be running inside your network. So, configure your SMTP server such that mail bound for other networks will be dropped (relay access will be denied) unless that mail is authenticated. Better still, use separate mail servers for inbound and outbound mail, allow no relaying at all for the inbound ones, and allow no unauthenticated access to the outbound ones. 
If your software allows this, you should also filter messages according to the authenticated user; if the from address of the mail does not match the user who authenticated, it should be rejected. Do not silently update the from address; the user should be aware of the configuration error. You should also log the username which is used to send mail, or add an identifying header to it. This way, if abuse does occur, you have evidence and know which account was used to do it. This allows you to isolate compromised accounts and problem users, and is especially valuable for shared hosting providers. Filter traffic You want to be certain that mail leaving your network is actually being sent by your (authenticated) users, not by bots or people from outside. The specifics of how you do this depend on exactly what kind of system you are administering. Generally, blocking egress traffic on ports 25, 465, and 587 (SMTP, SMTP/SSL, and Submission) for everything but your outbound mailservers is a good idea if you are a corporate network. This is so that malware-running bots on your network cannot send spam from your network either to open relays on the Internet or directly to the final MTA for an address. Hotspots are a special case because legitimate mail from them originates from many different domains, but (because of SPF, among other things) a "forced" mailserver is inappropriate and users should be using their own domain's SMTP server to submit mail. This case is much harder, but using a specific public IP or IP range for Internet traffic from these hosts (to protect your site's reputation), throttling SMTP traffic, and deep packet inspection are solutions to consider. Historically, spambots have issued spam mainly on port 25, but nothing prevents them from using port 587 for the same purpose, so changing the port used for inbound mail is of dubious value. However, using port 587 for mail submission is recommended by RFC 2476 , and allows for a separation between mail submission (to the first MTA) and mail transfer (between MTAs) where that is not obvious from network topology; if you require such separation, you should do this. If you are an ISP, VPS host, colocation provider, or similar, or are providing a hotspot for use by visitors, blocking egress SMTP traffic can be problematic for users who are sending mail using their own domains. In all cases except a public hotspot, you should require users who need outbound SMTP access because they are running a mailserver to specifically request it. Let them know that abuse complaints will ultimately result in that access being terminated to protect your reputation. Dynamic IPs, and those used for virtual desktop infrastructure, should never have outbound SMTP access except to the specific mailserver those nodes are expected to use. These types of IPs should also appear on blacklists and you should not attempt to build reputation for them. This is because they are extremely unlikely to be running a legitimate MTA. Consider using SpamAssassin SpamAssassin is a mail filter which can be used to identify spam based on the message headers and content. It uses a rules-based scoring system to determine the likelihood that a message is spam. The higher the score, the more likely the message is spam. SpamAssassin also has a Bayesian engine which can analyze spam and ham (legitimate email) samples fed back into it. Best practice for SpamAssassin is not to reject the mail, but to put it in a Junk or Spam folder. 
MUAs (mail user agents) such as Outlook and Thunderbird can be set up to recognize the headers that SpamAssassin adds to email messages and to file them appropriately. False positives can and do happen, and while they're rare, when it happens to the CEO, you will hear about it. That conversation will go much better if the message was simply delivered to the Junk folder rather than rejected outright. SpamAssassin is almost one-of-a-kind, though a few alternatives exist . Install SpamAssassin and configure automatic update for its rules using sa-update . Consider using custom rules where appropriate. Consider setting up Bayesian filtering . Consider using DNS-based blackhole lists and reputation services DNSBLs (formerly known as RBLs, or realtime blackhole lists) provide lists of IP addresses associated with spam or other malicious activity. These are run by independent third parties based on their own criteria, so research carefully whether the listing and delisting criteria used by a DNSBL is compatible with your organization's need to receive email. For instance, a few DNSBLs have draconian delisting policies which make it very difficult for someone who was accidentally listed to be removed. Others automatically delist after the IP address has not sent spam for a period of time, which is safer. Most DNSBLs are free to use. Reputation services are similar, but claim to provide better results by analyzing more data relevant to any given IP address. Most reputation services require a subscription payment or hardware purchase or both. There are dozens of DNSBLs and reputation services available, though some of the better known and useful ones I use and recommend are: Conservative lists: Spamhaus ZEN Barracuda Reputation Database (no purchase necessary) SpamCop Aggressive lists: UCEPROTECT Backscatterer As mentioned before, many dozens of others are available and may suit your needs. One of my favorite tricks is to look up the IP address which delivered a spam that got through against multiple DNSBLs to see which of them would have rejected it. For each DNSBL and reputation service, examine its policies for listing and delisting of IP addresses and determine whether these are compatible with your organization's needs. Add the DNSBL to your SMTP server when you have decided it is appropriate to use that service. Consider assigning each DNSBL a score and configuring it into SpamAssassin rather than your SMTP server. This reduces the impact of a false positive; such a message would be delivered (possibly to Junk/Spam) instead of bounced. The tradeoff is that you will deliver a lot of spam. Or, reject outright when the IP address is on one of the more conservative lists, and configure the more aggressive lists in SpamAssassin. Use SPF SPF (Sender Policy Framework; RFC 4408 and RFC 6652 ) is a means to prevent email address spoofing by declaring which Internet hosts are authorized to deliver mail for a given domain name. Configure your DNS to declare an SPF record with your authorized outgoing mail servers and -all to reject all others. Configure your mail server to check the SPF records of incoming mail, if they exist, and reject mail which fails SPF validation. Skip this check if the domain does not have SPF records. Investigate DKIM DKIM (DomainKeys Identified Mail; RFC 6376 ) is a method of embedding digital signatures in mail messages which can be verified using public keys published in the DNS. It is patent-encumbered in the US, which has slowed its adoption. 
DKIM signatures can also break if a message is modified in transit (e.g. SMTP servers occasionally may repack MIME messages). Consider signing your outgoing mail with DKIM signatures, but be aware that the signatures may not always verify correctly even on legitimate mail. Consider using greylisting Greylisting is a technique where the SMTP server issues a temporary rejection for an incoming message, rather than a permanent rejection. When the delivery is retried in a few minutes or hours, the SMTP server will then accept the message. Greylisting can stop some spam software which is not robust enough to differentiate between temporary and permanent rejections, but does not help with spam that was sent to an open relay or with more robust spam software. It also introduces delivery delays which users may not always tolerate. Consider using greylisting only in extreme cases, since it is highly disruptive to legitimate email traffic. Consider using nolisting Nolisting is a method of configuring your MX records such that the highest priority (lowest preference number) record does not have a running SMTP server. This relies on the fact that a lot of spam software will only try the first MX record, while legitimate SMTP servers try all MX records in ascending order of preference. Some spam software also attempts to send directly to the lowest priority (highest preference number) MX record in violation of RFC 5321 , so that could also be set to an IP address without an SMTP server. This is reported to be safe, though as with anything, you should test carefully first. Consider setting your highest-priority MX record to point to a host which does not answer on port 25. Consider setting your lowest-priority MX record to point to a host which does not answer on port 25. Consider a spam filtering appliance Place a spam filtering appliance such as Cisco IronPort or Barracuda Spam & Virus Firewall (or other similar appliances) in front of your existing SMTP server to take much of the work out of reducing the spam you receive. These appliances are pre-configured with DNSBLs, reputation services, Bayesian filters and the other features I've covered, and are updated regularly by their manufacturers. Research spam filtering appliance hardware and subscription costs. Consider hosted email services If it's all too much for you (or your overworked IT staff) you can always have a third party service provider handle your email for you. Services such as Google's Postini , Symantec MessageLabs Email Security (or others) will filter messages for you. Some of these services can also handle regulatory and legal requirements. Research hosted email service subscription costs. What guidance should sysadmins give to end users regarding fighting spam? The absolute #1 thing that end users should do to fight spam is: DO NOT RESPOND TO THE SPAM. If it looks funny, don't click the website link and don't open the attachment. No matter how attractive the offer seems. That viagra isn't that cheap, you aren't really going to get naked pictures of anybody, and there is no $15 million dollars in Nigeria or elsewhere except for the money taken from people who did respond to the spam. If you see a spam message, mark it as Junk or Spam depending on your mail client. DO NOT mark a message as Junk/Spam if you actually signed up to receive the messages and just want to stop receiving them. Instead, unsubscribe from the mailing list using the unsubscribe method provided. 
Check your Junk/Spam folder regularly to see if any legitimate messages got through. Mark these as Not Junk/Not Spam and add the sender to your contacts to prevent their messages from being marked as spam in the future. | {
"source": [
"https://serverfault.com/questions/419407",
"https://serverfault.com",
"https://serverfault.com/users/33417/"
]
} |
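As one concrete illustration of several points above (relay only for authenticated users, DNSBL checks at SMTP time), a Postfix restriction list often ends up looking roughly like this; treat it as a sketch to adapt, not a drop-in configuration:

# main.cf
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client bl.spamcop.net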
419,433 | My brain is wrapped around the axle on public and private keys. When you create a cloud server (instance) on Amazon's EC2 service and then want to connect to it via SSH, Amazon requires you to download a private key to make the connection. Doesn't the idea behind public/private keys suggest that Amazon should require you to download a public one? Further, if I set up an SFTP server for a customer to use, should I be installing their key on the server or giving them a key from the server? In either case, should it be a public or private key? | Thinking more deeply about the authentication process, what needs to be kept secret? Amazon knows the public half of the key, and anybody can know the public half. The public half of the keypair, when matched with the private half, denotes that the private half was used to authenticate. Your private key that is provided to you when Amazon generates a keypair for you is only useful if you're the only one that has it. If it's not a secret, then anybody else who knows it can also authenticate to anybody who holds the public half of the keypair. Whoever is being authenticated must hold the private half. It's ok if everybody in the world can authenticate you by holding the public half of the key, but only you should be in control of the private half. | {
"source": [
"https://serverfault.com/questions/419433",
"https://serverfault.com",
"https://serverfault.com/users/101904/"
]
} |
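In day-to-day use the private half Amazon gives you behaves like any other SSH identity file; keep it readable only by you and point ssh at it (the key file name, user and hostname below are placeholders):

$ chmod 400 my-ec2-key.pem
$ ssh -i my-ec2-key.pem ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com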
419,784 | One of my LAMP servers was recently brought down by some kind of script bot looking for exploits. From the looks of it, it was making so many requests a second that it overloaded the RAM on the server and brought my entire site down for an hour. That "attack" all came from a single IP address. So how can I automatically and temporarily block an IP address making too many hits on my LAMP server in a short period of time? What's the best tool for the job, and should I be solving this at the operating system level or via PHP? | Fail2Ban . The gold standard/default solution to this problem on the Linux platform. | {
"source": [
"https://serverfault.com/questions/419784",
"https://serverfault.com",
"https://serverfault.com/users/132964/"
]
} |
419,997 | I'm running SQL Server (2012) on a Hyper-V instance. It has plenty of resources and 25% reserved of the total resources, the VHD is placed on a very fast SSD drive for quick response times. Every now and then when the applications that use the SQL Server haven't been accessed for a while they get the error "The wait operation timed out". When reloading or retrying to access the database it seems to have been "woken up" and is as fast as ever. Is there any way to ensure that this soft sleep mode doesn't occur in this kind of environment? Added Exception Details: System.ComponentModel.Win32Exception: The wait operation timed out | Try to execute this command: exec sp_updatestats It, incredibly, resolved the problem. The code below is the error before the command had been executed. [Win32Exception (0x80004005): The wait operation timed out]
[SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +1742110
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +5279619
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +242
System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +1434
System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() +61
System.Data.SqlClient.SqlDataReader.get_MetaData() +90
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +365
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite) +1355
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite) +175
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +53
System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +134
System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +41
System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) +10
System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +140
System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +316
System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable) +86
System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +1482
System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback) +21
System.Web.UI.WebControls.DataBoundControl.PerformSelect() +138
System.Web.UI.WebControls.BaseDataBoundControl.DataBind() +30
System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() +79
System.Web.UI.WebControls.BaseDataBoundControl.OnPreRender(EventArgs e) +22
System.Web.UI.Control.PreRenderRecursiveInternal() +83
System.Web.UI.Control.PreRenderRecursiveInternal() +155
System.Web.UI.Control.PreRenderRecursiveInternal() +155
System.Web.UI.Control.PreRenderRecursiveInternal() +155
System.Web.UI.Control.PreRenderRecursiveInternal() +155
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +974 | {
"source": [
"https://serverfault.com/questions/419997",
"https://serverfault.com",
"https://serverfault.com/users/38365/"
]
} |
420,158 | I've been having difficulty with connecting to my IPv6 address via rsync. Because the argument for the destination folder is colon-separated, the IPv6 address disrupts this like so: root@fdff::ffff:ffff:ffff:/path/to/dest How do I use rsync with an IPv6 address via SSH? | You'll want to wrap the address in brackets like so: rsync -rtlzv -e ssh /path/to/src 'root@[fdff::ffff:ffff:ffff]':/path/to/dest | {
"source": [
"https://serverfault.com/questions/420158",
"https://serverfault.com",
"https://serverfault.com/users/133102/"
]
} |
420,286 | Is there an easy way out to get the latest PHP? I have tried updating my package but none of it has 5.4.6 yet... if anyone knows on how to do it quickly, can it be shared here? I've tried compiling from the source, but I am constantly getting: configure: error: Cannot find OpenSSL's <evp.h>. In my ./configure I've specified where evp.h is, --with-openssl=/usr/include/openssl \.. , but still it gives me, that error – | Installing PHP 5.4.* on Ubuntu 12.04 Simply add the PPA repository: sudo add-apt-repository ppa:ondrej/php5-oldstable And install it: sudo apt-get update
sudo apt-get install php5 You may need to install add-apt-repository on Ubuntu 12.04. To do so, run the command: sudo apt-get install python-software-properties Other New Versions For PHP 5.5 (currently 5.5.30) add the PPA repository instead: sudo add-apt-repository ppa:ondrej/php5 For PHP 5.6 (currently 5.6.14) add the PPA repository instead: sudo add-apt-repository ppa:ondrej/php5-5.6 | {
"source": [
"https://serverfault.com/questions/420286",
"https://serverfault.com",
"https://serverfault.com/users/79356/"
]
} |
420,351 | I have a number of vhosts, and I'd like to "turn off" the default vhost, either by blank page, error page, or generally whatever is the most efficient use of Nginx's resources, whilst only allowing other vhosts to be access via pre-defined domains. | Define a default_server that returns an HTTP 444 code: server {
listen 80 default_server;
server_name _;
return 444;
} (Returning a 4xx error code means requests can be interpreted by a client as an unsuccessful request, rather than an HTTP 200 Blank Page But Totally Worked Trust Me .) For port 443 / SSL requests, you can use ssl_reject_handshake on (see the sketch below).
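A minimal catch-all HTTPS block could look like this (it assumes nginx 1.19.4 or newer, where ssl_reject_handshake was added; no certificate is needed in this block):
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_reject_handshake on;
}
Clients that hit the server by IP address or an unknown name simply have the TLS handshake refused. | {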
"source": [
"https://serverfault.com/questions/420351",
"https://serverfault.com",
"https://serverfault.com/users/79905/"
]
} |
420,385 | We have an ESXi 4.1 server with 48 GB RAM. For each VM, we are allocating 4GB of memory. Since the server will have 13 virtual machines, my manager thinks this is wrong. I am going to explain to them that ESXi will actually manage memory itself, but they asked me how much memory I allocated for the ESXi server itself. I did not allocate any (I have not even heard of an option for allocating memory for the ESXi server itself). How is memory allocated for ESXi server? How does it over-allocate/distribute RAM among virtual machines without issue? | There is a lot more than just ESXi in question here. Each VM will consume up to 4GBs + "overhead" which is documented here . This depends on the vCPUs + memory allocated. At minimum each VM will use 4261.98 MBs (4096 + 165.98). ESXi's own memory overhead is hardware dependent. The easiest option is to look at the System memory usage in the vSphere client. From memory I recall it is around the 1.3GB mark, but as stated that is very dependent on hardware. Memory Allocation & Overcommitment Explained Note that the hypervisor won't allocate all of that memory upfront , it is dependent on the VM's usage. However, it is worthwhile understanding what will happen should the VMs try to allocate and use all of the memory allocated to them. The maximum your VM + host will try to use will be approximately 55 GBs (mileage may vary): 1.3 GBs used by ESXi, plus 4261.98 MBs * 13 used by the VMs. There is another aspect to take into account and that's memory thresholds.
By default VMware will aim to have 6% free (high memory threshold). So the 55 GBs of used memory needs to be reduced down to ~45 GBs. That means the host will have approximately 10,500 MBs of memory it needs to reclaim back from somewhere should the VMs use the memory they've been allocated. There are three things ESX does to find that additional 10.5 GBs. Memory Reclamation Methods: Transparent Page Sharing, Memory Ballooning, Hypervisor Swapping. You should read and understand Understanding Memory Resource Management
in VMware® ESX™ Server . Depending on a large number of factors, a combination of all three will / could happen on an over committed host. You need to test your environment and monitor these metrics to understand the impact of over committing. Some rough rules that are worth knowing (all in the above paper and other sources): Transparent page sharing does not happen for VMs that use 2/4 MB pages. As you've allocated 4096 MBs to your Windows VMs, they will use the 2/4 MB pages by default (PAE dependent). Only under memory pressure will VMware break the large pages down to 4 KB pages that can be shared. TPS relies on using idle CPU cycles and scanning memory pages at a certain rate. It returns memory relatively slowly (think an hour rather than minutes). So a boot storm means TPS will not help you. From the three, this has the lowest performance impact. More from the document, In hardware-assisted memory virtualization (for example, Intel EPT
Hardware Assist and AMD RVI Hardware Assist [6]) systems, ESX will
automatically back guest physical pages with large host physical pages
(2MB contiguous memory region instead of 4KB for regular pages) for
better performance due to less TLB misses. In such systems, ESX will
not share those large pages because: 1) the probability of finding two
large pages having identical contents is low, and 2) the overhead of
doing a bit-by-bit comparison for a 2MB page is much larger than for a
4KB page. However, ESX still generates hashes for the 4KB pages within
each large page. Since ESX will not swap out large pages, during host
swapping, the large page will be broken into small pages so that these
pre-generated hashes can be used to share the small pages before they
are swapped out. In short, we may not observe any page sharing for
hardware-assisted memory virtualization systems until host memory is
overcommitted. Ballooning kicks in next (thresholds are configurable; by default this is when the host has less than 6% memory free, between the high and soft states). Make sure you install the driver, and watch out for Java and managed applications in general. The OS has no insight into what the garbage collector will do next and it will end up hitting pages that have been swapped to disk. It is not uncommon practice for servers that run Java applications exclusively to disable swap entirely to guarantee that doesn't happen. Have a look at Page 17 of vSphere Memory Management, SPECjbb. Hypervisor swapping , of the three methods, is the only one that guarantees "memory" being available to the hypervisor in a set time. This will be used if 1 & 2 do not give it enough memory to remain under the hard threshold (default of 2% free memory). When you read through the performance metrics (do your own), you'll realise this is the worst performing of the three. Aim to avoid it at all cost as the performance impact will be very noticeable on nearly all applications (a double digit percentage hit). There is one more state to be aware of: low (by default 1%). From the manual, this can drastically cut your performance: In a rare case where host free memory drops below the low threshold,
the hypervisor continues to reclaim memory through swapping and memory
compression, and additionally blocks the execution of all virtual
machines that consume more memory than their target memory
allocations. Summary The key point to stress is it is impossible to predict from the whitepapers how your environment will behave. How much can TPS give you? (Depends on how similar your VMs are with their OS, Service Pack, and running applications) How quickly do your VMs allocate their memory? The quicker they do, the more likely you are to jump to the next threshold before the less impactful memory reclamation scheme succeeds in keeping you in your current threshold. Depending on application, each memory reclamation scheme will have widely varying impact. Test your average scenarios, your 95th percentile scenario, and finally your maximum to understand how your environment will run. Edit 1 Worth adding that with vSphere 4 (or 4.1, can't recall), it is now possible to place the hypervisor swap on local disk but still vMotion the VM. If you're using shared storage I strongly recommend you move the hypervisor swap file to be on local disk by default. This ensures that when one host is under severe memory pressure, it doesn't end up impacting all the other vSphere hosts/VMs on the same shared storage. Edit 2 Based on comments, made the fact that ESX doesn't allocate the memory upfront in bold... Edit 3 Explained a little more about memory thresholds.
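One practical way to watch these reclamation mechanisms on a host is esxtop; the counters named below are the ones from ESX/ESXi 4.x-era builds, so treat this as a sketch:
# interactive: press 'm' for the memory screen and watch per-VM
#   MCTLSZ (balloon size) and SWCUR / SWR/s / SWW/s (swapped memory and swap rates)
esxtop
# batch mode: capture 3 samples, 10 seconds apart, for offline review
esxtop -b -d 10 -n 3 > esxtop-memory.csv
If MCTLSZ stays at zero and the swap counters never move, the host is meeting its targets without reclaiming anything from the VMs. | {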
"source": [
"https://serverfault.com/questions/420385",
"https://serverfault.com",
"https://serverfault.com/users/105129/"
]
} |
420,526 | I've been trying to issue commands using plink to retrieve information from my external server. Note that these plink commands are run from a binary that expects no input from the user. Is there a flag that will allow me to override this error message and continue with program output? The server's host key is not cached in the registry. You
have no guarantee that the server is the computer you
think it is.
The server's rsa2 key fingerprint is:
ssh-rsa 2048 **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**
If you trust this host, enter "y" to add the key to
PuTTY's cache and carry on connecting.
If you want to carry on connecting just once, without
adding the key to the cache, enter "n".
If you do not trust this host, press Return to abandon the
connection.
Store key in cache? (y/n) Thank you! | Try prepending your script with: echo y | plink -ssh root@REMOTE_IP_HERE "exit" This will pipe the y character through stdin to plink when you get the Store key in cache? (y/n) prompt, allowing all further plink commands to pass through without the need of user input. The exit command will close the SSH session after it has been established, allowing the following plink commands to run. Here's an example script which writes the external server's Unix time to a local file: echo y | plink -ssh root@REMOTE_IP_HERE "exit"
plink -ssh root@REMOTE_IP_HERE "date -t" > remote_time.tmp Pipelining Reference : http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-4.html | {
"source": [
"https://serverfault.com/questions/420526",
"https://serverfault.com",
"https://serverfault.com/users/133254/"
]
} |
420,778 | I have pretty good web (dedicated) server with good memory resources: System information
Server load 2.19 (8 CPUs)
Memory Used 29.53% (4,804,144 of 16,267,652)
Swap Used 10.52% (220,612 of 2,097,136) As you can see, my server is using swap when there is plenty of free memory available. Is this normal or is there something wrong with the configuration or the coding ? N.B. My MySQL process is using over 160% of the CPU power for some reason; I don't know why, but I don't have more than 70 simultaneous users ... | This is perfectly normal. At system startup, a number of services start. These services initialize themselves, read in configuration files, create data structures and so on. They use some memory. Many of these services will never run again for the entire time the system is up because you're not using them. Some of them may run in hours, days, or weeks. Yet all this data is in physical memory. Of course, the system can't throw this data away. It can't prove that it will literally never be accessed. One of those services, for example, might be the one that provides you remote access to the box. You may not have used it in a week, but if you do use it, it had better work. But the system knows that it might like to use that physical memory for things like a disk cache or in other ways that will improve performance. So it does opportunistic swapping. When it has nothing better to do, it writes data that hasn't been used in a very long time to disk, using swap space. However, it still keeps the pages in physical memory. So they can still be accessed without having to swap them in. Now, if the system later needs that physical memory for something else, it can simply throw those pages away because it has already written them to swap. This gives the system the best of both worlds. The data is still kept in memory, so it can be accessed without having to read it from disk. But if the system needs that memory for another purpose, it won't have to write it out first. Big win all around. | {
"source": [
"https://serverfault.com/questions/420778",
"https://serverfault.com",
"https://serverfault.com/users/128164/"
]
} |
420,779 | This isn't a directly programming related question, but I wasn't sure where else to ask this and since it is for a technical project I'm working on, I hope it isn't closed down. This project requires users to be able to send text and images via text to a website. I have been told to make use of an SMS gateway that supports MMS (for the image part). What are some reliable MMS gateways, I tried searching but wasn't sure which ones are reliable and which aren't. | | {
"source": [
"https://serverfault.com/questions/420779",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
420,877 | You know, you see pictures like below and sort of chuckle until you actually have to deal with it. I have just inherited something that looks like the picture below. The culture of the organization does not tolerate down time very well, yet I have been tasked to 'clean it up'. The network functions as it is, and there doesn't seem to be rush to get it done, but I will have to tackle the bear at some point. I get the ugly eye when I mention anything about weekends. So my question goes, is there sort of a structured approach to this problem? My Ideas thus far: Label, Label, Label Make up my patch cables of desired length ahead of time Do each subnet at a time (appears that each subnet are for different physical locations) Replace one cable at a time for each subnet It's easier to get forgiveness than permision? | In no particular order here are some suggestions that have been helpful to me over the years- Can any of the equipment in those racks be eliminated, upgraded or consolidated? It's hard to tell what's there, but in my experience these kinds of messes tend to be aggravated by gear that should have been pulled out years ago. Once you've got some idea of the minimum set of equipment then consider how best to lay it out. The criteria here may vary, but grouping by technology type or business function might make sense. Clearly the proximity of high density devices (i.e. switches) and patch panels and such will immediately be apparent. Use cable management!!! There are both horizontal and vertical cable management solutions. Use both - horizontals around patch panels and other significant concentrations, verticals next to switches and to facilitate risers. It's always surprising, but how power cables are routed should be considered. UPS units in the bottom of racks, PDU selection and diversity all need to be considered before pulling a cable. Keep inventory of common cable lengths. It's late at night and you want to go home. A 3' cable is what's necessary but the closest you have handy is 5'. This is how these kinds of messes develop. Documenting is part of the game, but the importance of labeling cannot be overstated. With clear labels and efficient/clean cabling the number of mistakes will be vastly decreased and troubleshooting simplified. Limit who can pull cables!!! Differing styles and degrees of attention to detail can yield chaos pretty quickly. | {
"source": [
"https://serverfault.com/questions/420877",
"https://serverfault.com",
"https://serverfault.com/users/81366/"
]
} |
421,046 | I have protected a web folder with Nginx's Auth_Basic module. The problem is, we can try several passwords until it works (brute force attacks). Is there a way to limit the number of failed re-tries? | As far as I know, the Auth Basic module doesn't support this feature, but you can do this by using Fail2ban . Testing with a non-existent user, you will see something like the following in the error log: 2012/08/25 10:07:01 [error] 5866#0: *1 no user/password was provided for basic authentication, client: 127.0.0.1, server: localhost, request: "GET /pma HTTP/1.1", host: "localhost:81"
2012/08/25 10:07:04 [error] 5866#0: *1 user "ajfkla" was not found in "/etc/nginx/htpasswd", client: 127.0.0.1, server: localhost, request: "GET /pma HTTP/1.1", host: "localhost:81" Then create necessary filter: /etc/fail2ban/filter.d/nginx-auth.conf [Definition]
failregex = no user/password was provided for basic authentication.*client: <HOST>
user .* was not found in.*client: <HOST>
user .* password mismatch.*client: <HOST>
ignoreregex = /etc/fail2ban/jail.conf [nginx-auth]
enabled = true
filter = nginx-auth
action = iptables[name=NoAuthFailures, port=80, protocol=tcp]
logpath = /var/log/nginx*/*error*.log
bantime = 3600 # 1 hour
maxretry = 3 Testing Fail2Ban rules: fail2ban-regex /var/log/nginx/localhost.error_log /etc/fail2ban/filter.d/nginx-auth.conf Failregex
|- Regular expressions:
| [1] no user/password was provided for basic authentication.*client: <HOST>
| [2] user .* was not found in.*client: <HOST>
| [3] user .* password mismatch.*client: <HOST>
|
`- Number of matches:
[1] 1 match(es)
[2] 2 match(es)
[3] 0 match(es)
Ignoreregex
|- Regular expressions:
|
`- Number of matches:
Summary
=======
Addresses found:
[1]
127.0.0.1 (Sat Aug 25 10:07:01 2012)
[2]
127.0.0.1 (Sat Aug 25 10:07:04 2012)
127.0.0.1 (Sat Aug 25 10:07:07 2012)
[3] PS: Since Fail2ban fetches log files to ban, make sure logpath matches with your configuration. | {
"source": [
"https://serverfault.com/questions/421046",
"https://serverfault.com",
"https://serverfault.com/users/80981/"
]
} |
421,161 | Whenever I install vsftpd on centos , I only setup the jail environment for the users and rest is default configuration of vsftpd . I create user and try to connect with filezila ftp client, but I could not connect with passive mode. I always change the transfer settings to active mode to successfully connect to the ftp server otherwise I get Error: Failed to retrieve directory listing So is there a way to change any directive in vsftp.conf file and we can connect with passive mode to the server? | To configure passive mode for vsftpd you need to set some parameters in vsftpd.conf. pasv_enable=Yes
pasv_max_port=10100
pasv_min_port=10090 This enables passive mode and restricts it to using the eleven ports for data connections. This is useful as you need to open these ports on your firewall. iptables -I INPUT -p tcp --destination-port 10090:10100 -j ACCEPT If after testing this all works, then save the state of your firewall with service iptables save which will update the /etc/sysconfig/iptables file. To do this in CentOS 7 you have to use the new firewalld, not iptables: Find your zone: # firewall-cmd --get-active-zones
public
interfaces: eth0 My zone is 'public', so I set my zone to public, add the port range, and after that we reload: # firewall-cmd --permanent --zone=public --add-port=10090-10100/tcp
# firewall-cmd --reload What happens when you make a connection Your client makes a connection to the vsftpd server on port 21. The sever responds to the client telling it which port to connect to from the range specified above. The client makes a data connection on the specified port and the session continues. There is a great explanation of the different ftp modes here. | {
"source": [
"https://serverfault.com/questions/421161",
"https://serverfault.com",
"https://serverfault.com/users/88928/"
]
} |
421,301 | I wanted to try to set the worker processes in nginx, but it throws me this error: nginx: [emerg] "worker_processes" directive is not allowed here in /etc/nginx/sites-enabled/default:1
nginx: configuration file /etc/nginx/nginx.conf test failed here is my code worker_processes 4;
worker_rlimit_nofile 8192;
worker_priority 0;
worker_cpu_affinity
0001 0010 0100 1000;
server {
server_name --.--.--.---;
listen 80;
#root /var/www/devsites/wordpress/;
root /var/www/devsites/trademob/tm-hp-v2/; What can I do to fix this issue? | You said that your error message was: nginx: [emerg] "worker_processes" directive is not allowed here in /etc/nginx/sites-enabled/default:1
nginx: configuration file /etc/nginx/nginx.conf test failed
Place this directive at the top of /etc/nginx/nginx.conf instead of in /etc/nginx/sites-enabled/default . The worker_processes directive is valid only at the top level of the configuration. The same applies to all the other worker_* directives you've used.
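A sketch of the intended layout (values taken from the question; the server_name is a placeholder):
# /etc/nginx/nginx.conf -- main (top-level) context
worker_processes 4;
worker_rlimit_nofile 8192;
events { }
http {
    include /etc/nginx/sites-enabled/*;
}
# /etc/nginx/sites-enabled/default -- server blocks only, no worker_* directives here
server {
    listen 80;
    server_name example.com;
    root /var/www/devsites/trademob/tm-hp-v2/;
}
After moving the worker_* lines, nginx -t should pass again. | {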
"source": [
"https://serverfault.com/questions/421301",
"https://serverfault.com",
"https://serverfault.com/users/133529/"
]
} |
421,310 | I have a webserver and I need to check the number of connections on my server at a given time. I used the following: netstat -anp |grep 80 |wc -l This returned 2542, but from my Google Analytics I know that the number of simultaneous users is not more than 100. Is this correct?
If not, how do I get the active number of connections?
Is this a sign of a DoS attack, and how do I know that? | Try just counting the ESTABLISHED connections: netstat -anp | grep :80 | grep ESTABLISHED | wc -l Also, be careful to include the colon in your port grep statement. Just looking for 80 can lead to erroneous results from pids and other ports that happen to have the characters 80 in their output. | {
"source": [
"https://serverfault.com/questions/421310",
"https://serverfault.com",
"https://serverfault.com/users/128164/"
]
} |
421,445 | My web server (Ubuntu, Nginx) have both IPv4 and IPv6 addresses assigned by the host. For my website, shall I bind it to only an IPv6 address? Is it the standard recommended way? Or, shall I use both IPv4 and IPv6 addresses? | Use both IPv4 and IPv6 You should use both IPv4 and IPv6 addresses. Nearly everyone on the Internet currently has an IPv4 address, or is behind a NAT of some kind, and can access IPv4 resources. However, at the time of writing only about 0.7% 2.3% 3.8% 6.5% 9% 12% 19% 22% 26% 32% 37% of the Internet is IPv6 capable , but that number is steadily growing as IPv6 begins to roll out worldwide. In a very few places, ISPs are providing primarily IPv6 or only IPv6 to residential customers and using large scale NAT, NAT64 or other such solutions for IPv4 connectivity. This number is expected to grow as IPv4 address space is finally exhausted. These users will typically have better performance over IPv6. Where ISPs have deployed large scale NAT to solve IPv4 exhaustion, users stuck with this will suffer reduced reliability of all their Internet connections due to the connection limits inherent in the large scale NAT gateways. For instance, a web page might only load some but not all of its resources , leaving broken icons where images should be, missing styles and scripts, etc. This is similar to connection limit exhaustion on a home router, but affecting all users of the ISP intermittently and seemingly randomly. If you want your site to be reliable for these users, you must serve it via IPv6 (and the ISP must have deployed IPv6). Since IPv6 is where the Internet is going, having your web site IPv6 enabled now puts you ahead of the game and lets you resolve any problems long before they become serious. Configure nginx By default with Linux and nginx, you can bind to both IPv4 and IPv6 at the same time by changing your listen directives to: listen [::]:80;
listen 80; Or, for SSL sites: listen [::]:443 ssl;
listen 443 ssl; | {
"source": [
"https://serverfault.com/questions/421445",
"https://serverfault.com",
"https://serverfault.com/users/80981/"
]
} |
421,776 | We had a disk fail in a server and replaced it before removing the drive from LVM. The server has 4 physical drives (PV's), each with its own volume group (VG). Each VG has 2 or more logical volumes (LV's.) Now LVM is complaining about the missing drive. So we have a VG (vg04) with two LV's that have become orphans that we need to clear out of the system. The problem is every time we run any LVM command we get these 'read failed' errors: # lvscan
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error
# vgreduce vg04 --removemissing --force
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error
Volume group "vg04" not found
# vgchange -a n /dev/vg04
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error
Volume group "vg04" not found
# lvchange -a n /dev/vg04/vz
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error
Volume group "vg04" not found
Skipping volume group vg04 The missing VG and LV's are not important, we just want to remove them. As you can see we've tried all the suggestions made, so far without luck. Output from 'lvm dumpconfig' can be checked at http://pastebin.com/MHiBzrLJ | The solution was to run dmsetup, in this case the two commands dmsetup remove vg04-vz
dmsetup remove vg04-swap Before doing this, I checked with the command 'dmsetup info' that the 'open count' for both LV's was zero. WARNING: dmsetup can wreak serious havoc with your disks, so anyone using this information in the future please make sure you read the man page.
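For reference, the check described above looks like this (the device names match the ones in this question):
dmsetup info vg04-vz | grep -i 'open count'
dmsetup info vg04-swap | grep -i 'open count'
Only proceed with dmsetup remove once both report an open count of 0. | {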
"source": [
"https://serverfault.com/questions/421776",
"https://serverfault.com",
"https://serverfault.com/users/133707/"
]
} |
422,158 | According to the table here , it says that MTU = 1500 bytes and that the payload part is 1500 - 42 bytes or 1458 bytes (<- this is actually wrong!). Now on top of that you have to add IPv4 and UDP headers, which are 28 bytes (20 IP + 8 UDP). That leaves my maximum possible application message to as 1430 bytes! But by looking for this number in the Internet I see 1472 instead. Am I doing this calculation wrong here? All I want to find out is the maximum application message I can send over the wire without the risk of fragmentation. It is definitely not 1500 because that includes the frame headers. Can someone help? The confusion is the PAYLOAD can actually be as large as 1500 bytes and that's the MTU. So now what is the size in-the-wire for a payload of 1500? From that table it can be as big as 1542 bytes. So the maximum app messages I can send is 1472 (1500 - 20 (ip) - 8 (udp)) for a maximum in the wire size of 1542. It amazes me how things can get so complicated when they are actually simple. And I have not clue how someone came up with the number 1518 if the table says 1542. | The diagram on Wikipedia is horrible. Hopefully what I'm about to write is clearer. The maximum payload in 802.3 ethernet is 1500 bytes. This is the data you're trying to send out over the wire (and what the MTU is referring to). [payload] <- 1500 Bytes The payload is encapsulated in an Ethernet Frame (which adds the Source/Destination MAC, VLAN tag, Length, and CRC Checksum. This is a total of 22 bytes of additional "stuff" [SRC+DST+VLAN+LENGTH+[payload]+CRC] <- 1522 Bytes The Frame is transmitted over the wire -- before your ethernet card does that it basically stands up and shouts really loud to make sure nobody else is using the wire (CSMA/CD) -- This is the Preamble and Start-of-Frame delimiter (SFD) -- an additional 8 bytes, so now we have: [Preamble+SFD+[Ethernet Frame]] <- 1530 Bytes Finally when an ethernet transceiver is done sending a frame it is required by 802.3 to transmit 12 bytes of silence ("Interframe Gap") before it's allowed to send its next frame. [Preamble+SFD+[Ethernet Frame]+Silence] <- 1542 bytes transmitted on the wire. The Preamble, SFD and Interframe Gap do not count as part of the frame. They are support structure for the Ethernet protocol itself. The MTU applies to the payload -- it is the largest unit of data you can cram into the packet. Thus an ethernet packet with an MTU of 1500 bytes will actually be a 1522 byte frame, and 1542 bytes on the wire (assuming there's a vLAN tag). So the answer to your question - What is the biggest packet I can send out over 802.3 ethernet without fragmentation? - is 1500 bytes of payload data . HOWEVER the ethernet layer may not be your limiting factor. To discover if something along the way is restricting the MTU to be smaller than 1500 bytes of payload data use one of the following: Windows: ping hostname -f -l sizeofdata (technique John K mentioned) BSD: ping -D -s sizeofdata hostname Linux: ping -M do -s sizeofdata hostname The largest value of sizeofdata that works is the MTU (over the particular path your data is taking). | {
"source": [
"https://serverfault.com/questions/422158",
"https://serverfault.com",
"https://serverfault.com/users/111851/"
]
} |
422,288 | I am new to networking and all this DNS thing. I have the following questions: What is an Authoritative Nameserver ? What is a Recursive Resolver ? Please help/ guide me out on this. I have read Authoritative Nameserver , but I was not able to clearly understand it. Can someone please explain it to me in simple terms? | An authoritative Nameserver is a nameserver (DNS Server) that holds the actual DNS records (A, CNAME, PTR, etc) for a particular domain/ address. A recursive resolver would be a DNS server that queries an authoritative nameserver to resolve a domain/ address. So, for example, if I have a DNS server in my network that holds an A record for foobar.com, my DNS server would be authoritative for the foobar.com domain. If clients needed to access foobar.com, they could query my DNS server and they would get an authoritative response. However, if a client needed to access contoso.com, and they queried my DNS server, it would not have records to resolve that domain. In order for my DNS server to resolve contoso.com, it would need to use recursive lookups (via Forwarders or Root Hints). My DNS server would be set to send queries for domains for which it is not authoritative, to another DNS server. That DNS server would do the same, until the query reached a DNS server that was authoritative for contoso.com. That DNS server would return the proper records, which would be passed all the way back down to the client. This is an oversimplification, as there are other things in play here, like caching records.
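You can see the difference with dig; example.com and Google's 8.8.8.8 resolver are just convenient stand-ins here:
# ask a recursive resolver; it chases the answer down for you
dig @8.8.8.8 example.com A
# find the zone's authoritative nameservers, then ask one of them directly
dig +short NS example.com
dig @a.iana-servers.net example.com A +norecurse
The second query comes back with the aa (authoritative answer) flag set in the header, while the resolver's answer does not. | {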
"source": [
"https://serverfault.com/questions/422288",
"https://serverfault.com",
"https://serverfault.com/users/133889/"
]
} |
420,296 | So I have this SharePoint 2007 site that is basically trash. I'm supposed to just toss it, but I'm in need of copying all of the data in form of traditional files and folders from certain projects. And since the transaction log is full, it's so damn slow. Even opening SharePoint takes up to 15 minutes, or it won't open at all. Copying of files is extremely slow. So I'm in need of a quick fix here. Just to be able to copy out some files and folders. I don't need to fix the problem per se. What can I do to fix it temporarily to be able to copy out the data? | | {
"source": [
"https://serverfault.com/questions/422296",
"https://serverfault.com",
"https://serverfault.com/users/63697/"
]
} |
422,908 | Every time I initiate an ssh connection from my Mac to a Linux (Debian) I do get this warning: No xauth data; using fake authentication data for X11 forwarding. This also happens for tools that are using ssh, like git or mercurial. I just want to make a local change to my system in order to prevent this from appearing. Note: I do have X11 server (XQuartz 2.7.3 (xorg-server 1.12.4)) on my Mac OS X (10.8.1) and it is working properly, I can successfully start clock locally or remotely. | None of the posted solutions worked for me. My client (desktop) system is running macOS 10.12.5 (Sierra). I added -v to the options for the ssh command and it told me, debug1: No xauth program. which means it doesn't have a correct path to the xauth program. (On this version of macOS the path to xauth is nonstandard.) The solution was to add this line to /etc/ssh/ssh_config (may be /etc/ssh/config in some setups) or in ~/.ssh/config (if you don't have admin rights): XAuthLocation /opt/X11/bin/xauth Now the warning message is gone. | {
"source": [
"https://serverfault.com/questions/422908",
"https://serverfault.com",
"https://serverfault.com/users/10361/"
]
} |
422,950 | I want to execute a script every time my server starts up. The problem is that I need to be a certain user to execute the script; if I try to do it as root it can't find certain packages (such as ruby). I try to change to xxx user01. sudo su user01
/etc/init.d/script start This doesn't work however. | Running sudo su user01 in a script does not mean the following commands are sent to the resultant shell. In fact, it likely means a new shell is spawned as user01, which never exits! Two things: You can execute a command as another user either by passing the -c 'command...' argument to su, like su user01 -c '/etc/init.d/script start' . Starting a service that uses /etc/init.d from rc.local isn't the correct thing to do. You want to enable the service at startup using your distribution tools, like chkconfig or update-rc.d . You also don't want jobs in /etc/init.d that shouldn't be started as root . The jobs themselves can feel free to fork to another user account, but should be invoked by root.
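As a sketch of the first option (the script path here is hypothetical), the privilege drop inside an init script or rc.local could look like this:
# run the command as user01 instead of root
su - user01 -c '/usr/local/bin/myscript start'
# on Debian/Ubuntu, init scripts often drop privileges with start-stop-daemon instead
start-stop-daemon --start --chuid user01 --exec /usr/local/bin/myscript
Either way root starts the job but the process runs as user01, and the su - form also loads user01's login environment, which is what makes a user-installed Ruby visible. | {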
"source": [
"https://serverfault.com/questions/422950",
"https://serverfault.com",
"https://serverfault.com/users/123909/"
]
} |
424,452 | We all know how to enable a website using apache on Linux.
I'm pretty sure that we all agree on using the a2ensite command. Unfortunately, there is no default equivalent command that comes with Nginx, but it did happen that I installed some package on ubuntu that allowed me to enable/disable sites and list them. The problem is I don't remember the name of this package. Does anybody know what I'm talking about? Please tell me the name of this package and the command name. | If you have installed the nginx package from the Ubuntu repositories, you will have two directories: /etc/nginx/sites-enabled and /etc/nginx/sites-available . In the main nginx configuration, /etc/nginx/nginx.conf , you have the following line: include /etc/nginx/sites-enabled/*.conf; So basically to list all available virtualhosts, you can run the following command: ls /etc/nginx/sites-available To activate one of them, run the following command: ln -s /etc/nginx/sites-available/www.example.org.conf /etc/nginx/sites-enabled/ The scripts that come with Apache are basically just simple shell wrappers that do something similar to the above. After linking the files, remember to run sudo service nginx reload / service nginx reload
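If you want something that feels like a2ensite / a2dissite , a couple of tiny shell functions (run as root) are enough — this is only a sketch, not a packaged tool:
nginx_ensite() {
    ln -s "/etc/nginx/sites-available/$1" "/etc/nginx/sites-enabled/$1" && service nginx reload
}
nginx_dissite() {
    rm "/etc/nginx/sites-enabled/$1" && service nginx reload
}
Usage is then nginx_ensite www.example.org.conf to switch a site on and nginx_dissite www.example.org.conf to switch it off again. | {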
"source": [
"https://serverfault.com/questions/424452",
"https://serverfault.com",
"https://serverfault.com/users/78777/"
]
} |
424,465 | I want to start over the configuration of my replica set; is it possible? How do I reset it? In the group, people are saying to remove the database content, but is there any workaround? | If you want to keep the data, but start outside a replica set, just restart the mongod process without the --replSet option and on a different port. That will give you a standalone mongod. To be completely sure that the replica set configuration is gone from the instance, make sure that the local.system.replset collection is empty. Once that is done, and you are happy with your standalone instance, you can then restart with a different --replSet argument and go through the replica set configuration process again: http://www.mongodb.org/display/DOCS/Replica+Set+Configuration The other option, as you mention, is to remove all the data files and start completely from scratch.
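A quick sketch of that check with the legacy mongo shell (the port number is just an example for the standalone restart):
# inspect any leftover replica set config
mongo --port 27018 local --eval 'printjson(db.system.replset.find().toArray())'
# and remove it if you want a truly clean slate
mongo --port 27018 local --eval 'db.system.replset.remove({})'
An empty array from the first command means the instance no longer carries any replica set configuration. | {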
"source": [
"https://serverfault.com/questions/424465",
"https://serverfault.com",
"https://serverfault.com/users/50774/"
]
} |
424,486 | In the GUI tool you can get a list of packages with security updates. Can this be done on the command line in Debian or Ubuntu? Normally I might use "apt-get upgrade" which would show me what is being upgraded, but I would like to know which ones are security updates. | apt-get upgrade -s | grep -i security ... is what the Nagios check-apt plugin uses to count pending security updates, which is similar to what you're looking for. | {
"source": [
"https://serverfault.com/questions/424486",
"https://serverfault.com",
"https://serverfault.com/users/27236/"
]
} |
424,678 | I have a virtual machine that recently had its disk image increased from 20GB to 50GB, and fdisk -l verifies that the VM can see this new size. Now I need to resize my root LVM partition to fill the extra 30GB. I've found several articles about resizing LVM, but the few that cover resizing the root partition all claim you need to boot from a LiveCD. Is there any way to do this without taking down the server? The server is critical, so I'd like to minimize downtime. Edit: Output of fdisk -l : [root@fedora-host ~]# sudo fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00097c90
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 83886079 41430016 8e Linux LVM
Disk /dev/mapper/VolGroup-lv_root: 36.1 GB, 36104568832 bytes
255 heads, 63 sectors/track, 4389 cylinders, total 70516736 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_root doesn't contain a valid partition table
Disk /dev/mapper/VolGroup-lv_swap: 6308 MB, 6308233216 bytes
255 heads, 63 sectors/track, 766 cylinders, total 12320768 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap doesn't contain a valid partition table Edit: How do I resize the physical partition? fdisk can see the free space, but I don't know how to resize primary LVM partition to use it. I tried booting into a LiveCD and using parted'd resize command, but all it gives me is the error "Unable to detect file system". I found this guide , which says I need to delete the partition and create a new one with the correct size, but that sounds very dangerous. Final Edit: Parted's resize command is oddly unable to resize LVM partitions. Go figure. Instead, I simply deleted the old partition and created a new one with the new range, as outlined in the link above, and that correctly resized the LVM partition. I then followed the advice below to resize the volumes and filesystems inside the LVM partition. | You can grow a logical volume online. You'd have to unmount it to shrink it (which requires a LiveCD / Rescue Mode.) pvresize /dev/sda2 (assuming your LVM partition is sda2 . Replace as required.) lvextend /dev/mapper/root -l+100%FREE (or, whatever your root logical volume is called.) resize2fs /dev/mapper/root (assuming ext2/3/4) | {
"source": [
"https://serverfault.com/questions/424678",
"https://serverfault.com",
"https://serverfault.com/users/41252/"
]
} |
425,335 | I'm proposing this to be a canonical question about enterprise-level Storage Area Networks. What is a Storage Area Network (SAN), and how does it work? How is it different from a Network Attached Storage (NAS)? What are the use cases compared to direct-attached storage (DAS)? In which way is it better or worse? Why is it so expensive? Should I (or my company) use one? | First of all, for a (broad) comparison of DAS, NAS and SAN storage see here . There are some common misconceptions about the term " SAN ", which means " Storage Area Network " and as such, strictly speaking , refers only to the communication infrastructure connecting storage devices (disk arrays, tape libraries, etc.) and storage users (servers). However, in common practice the term "SAN" is used to refer to two things: A complete storage infrastructure, including all the hardware and software involved in providing shared access to central storage devices from multiple servers. This usage, although not strictly correct, is commonly accepted and what most people refers to when talking about a "SAN". The rest of this answer will focus on it, thus describing every component of an enterprise-level storage infrastructure. A single storage array (see later); as in, "we have a Brand X SAN with 20 TB storage". This usage is fundamentally incorrect, because it doesn't even take into account the real meaning of "SAN" and just assumes it's some form of storage device. A SAN can be composed of very different hardware, but can usually be broken down into various components: Storage Arrays : this is where data is actually stored (and what is erroneously called a "SAN" quite often). They are composed of: Physical Disks: they, of course, archive the data. Enterprise-level disks are used, which means they usually have lower per-disk capacity, but much higher performance and reliability; also, they are a lot more expensive than consumer-class disks. The disks can use a wide range of connections and protocols ( SATA , SAS , FC , etc.) and different storage media ( Solid-State Disks are becoming increasingly common), depending on the specific SAN implementation. Disk Enclosures: this is where the disks are placed. They provide electricity and data connections to them. Storage Controllers/Processors: these manage disk I/O, RAID and caching (the term "controller" or "processor" varies between SAN vendors). Again, enterprise-level controllers are used, so they have much better performance and reliability than consumer-class hardware. They can, and usually are, configured in pair for redundancy. Storage Pools : a storage pool is a bunch of storage space, comprising some (often many) disks in a RAID configuration. It is called a "pool" because sections of it can be allocated, resized and de-allocated on demand, creating LUNs. Logical Unit Numbers (LUNs): a LUN is chunk of space drawn from a storage pool, which is then made available ("presented") to one or more servers. This is seen by the servers as a storage volume, and can be formatted by them using any file system they prefer. Tape Libraries: they can be connected to a SAN and use the same communications technology both for connecting to servers and for direct storage-to-tape backups. 
Communications Network ( the "SAN" proper ): this is what allows the storage users (servers) to access the storage devices (storage array(s), tape libraries, etc.); it is, strictly speaking, the real meaning of the term "Storage Area Network", and the only part of a storage infrastructure that should be defined as such. There really are lots of solutions to connect servers to shared storage devices, but the most common ones are: Fibre Channel : a technology which uses fiber-optics for high-speed connections to shared storage. It includes host bus adapters , fiber-optic cables and FC switches, and can achieve transfer speeds ranging from 1 Gbit to 20 Gbit. Also, multipath I/O can be used to group several physical links together, allowing for higher bandwidth and fault tolerance. iSCSI : an implementation of the SCSI protocol over IP transport. It runs over standard Ethernet hardware, which means it can achieve transfer speeds from 100 Mbit (generally not used for SANs) to 100 Gbit. Multipath I/O can also be used (although the underlying networking layer introduces some additional complexities). Fibre Channel over Ethernet (FCoE) : a technology in-between full FC and iSCSI, which uses Ethernet as the physical layer but FC as the transport protocol, thus avoiding the need for an IP layer in the middle. InfiniBand : a very high-performance connectivity technology, less used and quite expensive, but which can achieve some impressive bandwidth. Host Bus Adapters (HBAs): the adapter cards used by the servers to access the connectivity layer; they can be dedicated adapters (as in FC SANs) or standard Ethernet cards. There are also iSCSI HBAs, which have a standard Ethernet connection, but can handle the iSCSI protocol in hardware, thus relieving the server of some additional load. A SAN provides many additional capabilities over direct-attached (or physically shared) storage: Fault tolerance: high availability is built-in in any enterprise-level SAN, and is handled at all levels, from power supplies in storage arrays to server connections. Disks are more reliable, RAID is used to withstand single-disk (or multiple-disk) failures, redundant controllers are employed, and multipath I/O allows for uninterrupted storage access even in the case of a link failure. Greater storage capacity: SANs can contain many large storage devices, allowing for much greater storage spaces than what a single server could achieve. Dynamic storage management: storage volumes (LUNs) can be created, resized and destroyed on demand; they can be moved from one server to another; allocating additional storage to a server requires only some configurations, as opposed to buying disks and installing them. Performance: a properly-configured SAN, using recent (although expensive) technologies, can achieve really impressive performance, and is designed from the ground up to handle heavy concurrent load from multiple servers. Storage-level replication: two (or more) storage arrays can be configured for synchronous replication, allowing for the complete redirection of server I/O from one to another in fault or disaster scenarios. Storage-level snapshots: most storage arrays allow for taking snapshots of single volumes and/or whole storage pools. Those snapshots can then be restored if needed. 
Storage-level backups: most SANs also allow for performing backups directly from storage arrays to SAN-connected tape libraries, completely bypassing the servers which actually use the data; various techniques are employed to ensure data integrity and consistency. Based on everything above, the benefits of using SANs are obvious; but what about the costs of buying one, and the complexity of managing one? SANs are enterprise-grade hardware (although there can be a business case for small SANs even in small/medium companies); they are of course highly customizable, so can range from "a couple TBs with 1 Gbit iSCSI and somewhat high reliability" to "several hundred TBs with amazing speed, performance and reliability and full synchronous replication to a DR data center"; costs vary accordingly, but are generally higher (as in "total cost", as well as in "cost per gigabyte of space") than other solutions. There is no pricing standard, but it's not uncommon for even small SANs to have price tags in the tens-of-thousands (and even hundreds-of-thousands) dollars range. Designing and implementing a SAN (even more so for a high-end one) requires specific skills, and this kind of job is usually done by highly-specialized people. Day-to-day operations, such as managing LUNs, are considerably easier, but in many companies storage management is anyway handled by a dedicated person or team. Regardless of the above considerations, SANs are the storage solution of choice where high capacity, reliability and performance are required. | {
"source": [
"https://serverfault.com/questions/425335",
"https://serverfault.com",
"https://serverfault.com/users/6352/"
]
} |
425,346 | Trying to access X11 on my CentOS 6 x32 small Linode VPS through SSH Putty/Xming (X11 forwarding enabled in the options). My Windows machine is not the problem, since it works with other CentOS servers. X11Forwarding is enabled in /etc/ssh/sshd_config , but I still can't get X11 forwarding. I'm trying to get xclock to work, but I get a Can't open display :0.0 error. I've also tried different $DISPLAY values like :0 or :10.0 . I've tried MobaXterm, and I get this message when connecting: X11 forwarding request failed on channel 0 | Here (Red Hat login required) is a Tech Brief article from a fellow Red Hat consultant which discusses the minimum packages needed for X-Windows to work over SSH connections. The key points are: 1) Install the following:
xorg-x11-xauth
xorg-x11-fonts-*
xorg-x11-utils
2) Enable the following in the sshd_config file
X11Forwarding yes
3) Use an appropriate X-Server on your desktop | {
"source": [
"https://serverfault.com/questions/425346",
"https://serverfault.com",
"https://serverfault.com/users/127306/"
]
} |
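Putting the three steps of the answer above together, a session might look like the following sketch; the package names are those from the answer, the xclock test mirrors the question, and user@server is a placeholder.
# On the CentOS server
yum install xorg-x11-xauth 'xorg-x11-fonts-*' xorg-x11-utils
# Make sure /etc/ssh/sshd_config contains: X11Forwarding yes
service sshd restart
# From a client running an X server (Xming, MobaXterm, etc.)
ssh -X user@server
xclock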
425,424 | I'm setting up a new web server that hosts a dozen virtual hosts on Ubuntu 12.4 using Apache 2.2.22 with one config file per site. I created all the configuration files all at once and ran a2ensite * to enable them all at once. When I reloaded the configuration it failed and after restarting apache I found the following error message in my error.log: Oops, no RSA or DSA server certificate found for 'server.host.name:0'?! Most of the results for this error message are years old that don't fix the problem or are bugs that have been fixed https://issues.apache.org/bugzilla/show_bug.cgi?id=31709 | From: http://www.clearchain.com/blog/posts/solving-the-apache-ssl-error-oops-no-rsa-or-dsa-server-certificate-found-for-www-somedomain-com0 Summary: This error may also occur if you forget the following line in your VirtualHost section: SSLEngine on | {
"source": [
"https://serverfault.com/questions/425424",
"https://serverfault.com",
"https://serverfault.com/users/10975/"
]
} |
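For context, a minimal HTTPS virtual host with the missing SSLEngine directive in place might look like the sketch below; the certificate and key paths are hypothetical placeholders, and server.host.name is the hostname from the error message.
<VirtualHost *:443>
    ServerName server.host.name
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key
</VirtualHost>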
425,427 | Is there a way to list all domains on an SAN/UCC SSL Certificate (ideally using command line on linux/os x)? Clearly there must be some way to extract the data, since browsers can do it. Unfortunately, I can see the list but can't cut and paste it. | openssl x509 -text < $CERT_FILE
#=>
. . .
DNS: . . .
. . . where $CERT_FILE can have either the .pem or .crt extension. Shell functions for viewing cert. files and checking that a cert. & key file match can be found here . | {
"source": [
"https://serverfault.com/questions/425427",
"https://serverfault.com",
"https://serverfault.com/users/54481/"
]
} |
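To narrow the output of the command above to just the domain list, or to inspect the certificate a remote server actually presents (which is what the browser sees), the following variants may help; example.com is a placeholder, and -servername matters for SNI hosts.
# Local file: show only the Subject Alternative Name entries
openssl x509 -noout -text -in cert.pem | grep -A1 'Subject Alternative Name'
# Remote server: fetch the presented certificate and list its SANs
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'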
426,183 | This is a Canonical Question about IPv6 Subnetting. Related: How does IPv4 Subnetting Work? I know a lot about IPv4 Subnetting , and as I prepare to (deploy|work on) an IPv6 network I need to know how much of this knowledge is transferable and what I still need to learn. IPv6 seems at first glance to be much more complex than IPv4. So I would like to know: IPv6 is 128 bits, so why is /64 the smallest recommended subnet for hosts?
Related to this: Why is it recommended to use /127 for point to point links between routers, and why was it recommended against in the past? Should I change existing router links to use /127? Why would virtual machines be provisioned with less than a /64 worth of addresses? Are there other situations in which I would use a subnet smaller than /64? Can I map directly from IPv4 subnets to IPv6 subnets? For instance, does an IPv4 /24 correspond directly to an IPv6 /56 or /120? My interfaces have several IPv6 addresses. Must the subnet be the same for all of them? Why do I sometimes see a % rather than a / in an IPv6 address and what does it mean? Am I wasting too many subnets? Aren't we just going to run out again? In what other major ways is IPv6 subnetting different from IPv4 subnetting? | The first thing that should be mentioned about IPv6 subnetting is that a different mode of thought is called for. In IPv4 you usually think about how many addresses you have available and how you can allocate enough of them to each end user. In IPv6 you usually think about how many /64 - subnets you have available and how you can allocate them to end users. You almost never worry about how many IP addresses will be used in a given subnet. Except for some special cases like point to point links, each subnet just simply has far more addresses available than it will ever require, so instead you worry only about allocating subnets, not hosts inside them. IPv6 subnets are usually /64 because that is required in order for SLAAC (stateless address auto-configuration) to work. Even where SLAAC is not in use, there may be other reasons to use /64 . For example, there might be some end user devices out there that just assume /64 , or else routing subnets narrower than /64 might be inefficient on some routers because the router implementer has optimized the case of /64 or wider routes in order to save routing table memory. Why is it recommended to use /127 for point to point links? For the specific case of point-to-point links, /127 is recommended instead of /64 in order to avoid a vulnerability where packets addressed to any one of the quadrillions of unused addresses on the subnet cause unwanted neighbour solicitation requests and table entries that could drown a router. Such misaddressed packets may be malicious or accidental. But even if you actually configure a point-to-point link as /127 , some people advocate assigning a whole /64 anyway just to be consistent. Why would virtual machines be provisioned with subnets narrower than /64 ? I don't know specifically why virtual machines would be provisioned with subnets narrower than /64 . Perhaps because a hosting provider assumed that a server was like an end-user and required only a single /64 subnet, not anticipating that the server would actually be a collection of VMs requiring an internal routing topology? It could be done also simply as a matter of making the addressing plan easier to memorize: the host gets PREFIX::/64 , then each VM gets PREFIX:0:NNNN::/96 where NNNN is unique to the VM and the VM can allocate PREFIX:0:NNNN:XXXX:YYYY as it pleases. Can I map directly from IPv4 subnets to IPv6 subnets? For instance, does an IPv4 /24 correspond directly to an IPv6 /56 or /120 ? From a low-level perspective of how addressing and routing works, the prefix length has the same meaning in IPv6 and IPv4. 
On that level, you can make an analogy such as "an IPv4 /16 uses half the bits for the network address and half the bits for the host address, that's like a /64 in IPv6". But this comparison is not really apt. Strong conventions have emerged in IPv6 which make the divisions of network sizes look somewhat more like the old world of classful networks in IPv4. To be sure, IPv6 didn't reintroduce classful addressing in which the most significant few bits of the address force a particular netmask, but what IPv6 does have is certain [ de facto /conventional] standard network sizes: /64 : the basic size of a single subnet: LAN, WAN, block of addresses for web virtual hosts, etc... "Normal" subnets are never expected to be any narrower (longer prefix) than /64 . No subnets are ever expected to be wider (shorter prefix) than /64 since a /64 's worth of host addresses is much more than we can imagine needing. /56 : a block of 256 basic subnets. Even though current policies permit ISPs to hand out blocks as large as /48 to every end user and still consider their address utilisation well justified, some ISPs may (and already do) choose to allocate a /56 to consumer-grade customers as a compromise between allocation lots of subnets for them and address economy. /48 : a block of 65536 basic subnets and the recommended size of block that every ISP customer end site should receive. /32 : the default size of block that most ISPs will receive each time they request more addresses from a regional address registry. Inside service provider and enterprise networks, many more prefix lengths than these 4 can be seen. When looking at the routing tables of routers inside these networks, IPv4 and IPv6 have much in common including most of the way routing works: routes for longer prefixes override covering routes for shorter prefixes, so it is possible to aggregate (make shorter) and drill down (make longer) routes. Like in IPv4, routes can be aggregated or summarized to larger blocks with shorter prefixes in order to minimize the size of routing tables. A different question of mapping between IPv4 and IPv6 would be how to harmonize IPv4 and IPv6 assignments on dual-stack machines so that addressing plans can be readily understood. Far that, there are certainly conventions in common use to do this: embed the IPv4 "subnet number" into a portion of the IPv6 prefix, either with BCD (e.g. 10.0.234.0/24 becomes 2001:db8:abcd:234::/64 ) or binary ( 10.0.234.0/24 becomes 2001:db8:abcd:ea::/64 ). My interfaces have several IPv6 addresses. Must the subnet be the same for all of them? Absolutely not! IPv6 hosts are expected to be able to be multihomed by having several IP addresses simultaneously that come from different subnets, just like IPv4. If they are autoconfigured with SLAAC then the different subnets might have come from router advertisements from different routers. Why do I sometimes see a % rather than a / in an IPv6 address and what does it mean? You would not see one instead of the other. They have different meanings. A slash denotes a prefix (subnet), meaning a block of addresses that all start with the same n bits. An address without a slash is a host address. You may think of such an address as having an implied /128 at the end, meaning all 128 bits are specified. The percent sign accompanies a link-local address. In IPv6, every interface has a link-local address in addition to any other IP addresses it might have. But the thing is, link-local addresses are always, without exception, in the fe80::/10 block. 
But if we attempt to talk to a peer using a link local address and the local host has multiple interfaces, how are we to know which interface to use to talk to this peer? Normally the routing table tells us which interface to use for a particular prefix, but here it will tell us than fe80::/10 is reachable via every interface. The answer is that we must tell it which interface to use using the syntax address%interface . For example, fe80::1234:5678:8765:4321%eth0 . Am I wasting too many subnets? Aren't we just going to run out again? Nobody knows. Who can tell the future? But consider this. In IPv6 the number of available subnets is the square of the number of available individual addresses in IPv4. That's really quite a lot. No, I mean really quite a lot! But still: we are automatically handing out a /32 to any ISP who requests one, we are handing out a /48 to every single ISP customer. Perhaps we're exaggerating and we will squander IPv6 after all. But there is a provision for this: Only one eighth of the IPv6 space has been made available for use so far: 2000::/3 . The idea is that if we make a horrible mess of the first eighth and we have to drastically revise the liberal allocation policies, we get to try 7 more times before we're in trouble. And finally: IPv6 doesn't have to last forever. Perhaps it will have a longer lifetime than IPv4 (an impressive lifetime already and it's not over) but like every technology it will someday stop mattering. We only need to make it until then. | {
"source": [
"https://serverfault.com/questions/426183",
"https://serverfault.com",
"https://serverfault.com/users/126632/"
]
} |
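As a small illustration of the % (zone index) syntax discussed in the answer above, a Linux host might use it as follows; eth0 is a placeholder interface and the address is the example one from the answer.
# Show the link-local (fe80::/10) address on an interface
ip -6 addr show dev eth0
# The zone index after % tells the kernel which interface to use
ping6 fe80::1234:5678:8765:4321%eth0
ssh admin@fe80::1234:5678:8765:4321%eth0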
426,394 | I have two files, id_rsa and id_rsa.pub . What command can be used to validate if they are a valid pair? | I would prefer the ssh-keygen -y -e -f <private key> way instead of the accepted answer of How do you test a public/private DSA keypair? on Stack Overflow. ssh-keygen -y -e -f <private key> takes a private key and prints the corresponding public key which can be directly compared to your available public keys. (Hint: beware of comments or key-options.) (How the hell is it doing that? I can only hope the public key is encoded directly or indirectly in the private key...) I needed this myself and used the following Bash one-liner. It should output nothing if the keys belong together. Apply a little -q to the diff in scripts and diff only sets the return code appropriately. PRIVKEY=id_rsa
TESTKEY=id_rsa.pub
diff <( ssh-keygen -y -e -f "$PRIVKEY" ) <( ssh-keygen -y -e -f "$TESTKEY" ) | {
"source": [
"https://serverfault.com/questions/426394",
"https://serverfault.com",
"https://serverfault.com/users/50774/"
]
} |
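A variation on the same idea that skips the RFC 4716 export format: regenerate the public key from the private key and compare it with the first two fields of the .pub file (ignoring the comment). This is a sketch; it will prompt for a passphrase if the private key is encrypted.
diff <( ssh-keygen -y -f id_rsa ) <( cut -d' ' -f1,2 id_rsa.pub ) && echo "key pair matches" || echo "key pair does NOT match"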
426,726 | HAProxy has the ability to enable HTTP keep-alive on the client side (client <-> HAProxy) but disable it on the server side (HAProxy <-> server). Some of our clients connect to our web service via satellite so the latency is ~600ms and I think that by enabling keep-alive, it will speed things up a bit. Am I right? Is this supported by Nginx?
Is this a widely implemented feature in other software and hardware load balancers?
What else besides HAProxy? | edit: My answer only covers the original unedited question, which was whether this sort of thing is typical in load balancers/reverse proxies. I'm not sure whether nginx/product X has support for this, 99.9% of my reverse proxying experience is with HAproxy. Correct. HTTP Keep-Alive on the client side, but not on the server side. Why? If you break down a few details you can quickly see why this is a benefit. For this example, let's pretend we're loading a page www.example.com and that page includes 3 images, img[1-3].jpg. Browser loading a page, without Keep-Alive Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/" Server sends the HTML content of the URI "/" (which includes HTML tags referencing the 3 images) Server closes the TCP connection Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/img1.jpg" Server sends the image Server closes the TCP connection Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/img2.jpg" Server sends the image Server closes the TCP connection Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/img3.jpg" Server sends the image Server closes the TCP connection Notice that there are 4 seperate TCP sessions established and then closed. Browser loading a page, with Keep-Alive HTTP Keep-Alive allows for a single TCP connection to serve multiple HTTP requests, one after the other. Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/", and also asks the server to make this an HTTP Keep-Alive session. Server sends the HTML content of the URI "/" (which includes HTML tags referencing the 3 images) Server does not close the TCP connection Client does and HTTP GET request for "/img1.jpg" Server sends the image Client does and HTTP GET request for "/img2.jpg" Server sends the image Client does and HTTP GET request for "/img3.jpg" Server sends the image Server closes TCP connection if no more HTTP requests are received within its HTTP Keep-Alive timeout period Notice that with Keep-Alive, only 1 TCP connection is established and eventually closed. Why's Keep-Alive better? To answer this you must understand what it takes to establish a TCP connection between a client and a server. This is called the TCP 3-way handshake. Client sends a SYN(chronise) packet Server sends back a SYN(chronise) ACK(nowledgement), SYN-ACK Client sends an ACK(nowledgement) packet TCP connection is now considered active by both client and server Networks have latency, so each step in the 3-way handshake takes a certain amount of time. Lets say that there's 30ms between the client and server, the back-and-forth sending of IP packets required to establish the TCP connection means that it takes 3 x 30ms = 90ms to establish a TCP connection. This may not sound like much, but if we consider that in our original example, we have to establish 4 separate TCP connections, this becomes 360ms. What if the latency between the client and server is 100ms instead of 30ms? Then our 4 connections are taking 1200ms to establish. Even worse, a typical web page may require far more than just 3 images in order to load, there may be multiple CSS, JavaScript, image or other files that the client needs to request. If the page loads 30 other files and the client-server latency is 100ms, how long do we spend establishing TCP connections? 
To establish 1 TCP connection takes 3 x latency, i.e. 3 x 100ms = 300ms. We must do this 31 times, once for the page, and another 30 times for each other file referenced by the page. 31 x 300ms = 9.3 seconds. 9.3 seconds spent establishing TCP connections to load a webpage which references 30 other files. And that doesn't even count the time spent sending HTTP requests and receiving responses. With HTTP Keep-Alive, we need only establish 1 TCP connection, which takes 300ms. If HTTP Keep-Alive is so great, why not use it on the server side as well? HTTP reverse proxies (like HAproxy) are typically deployed very close to the backend servers they are proxying for. In most cases the latency between the reverse proxy and its backend server/s will be under 1ms, so establishing a TCP connection is much faster than it is between a client. That's only half the reason though. An HTTP server allocates a certain amount of memory for each client connection. With Keep-Alive, it will keep the connection alive, and by extension it'll keep a certain amount of memory in use on the server, until the Keep-Alive timeout is reached, which may be up to 15s, depending on server configuration. So if we consider the effects of using Keep-Alive on the server side of an HTTP reverse proxy, we are increasing the need for memory, but because the latency between the proxy and the server is so low, we get no real benefit from the reduction in time taken for TCP's 3-way handshake, so its typically better to just disable Keep-Alive between the proxy and the web server in this scenario. Disclaimer: yes, this explanation doesn't take into account the fact that browsers typically establish multiple HTTP connections to a server in parallel. However, there is a limit to how many parallel connections a browser will make to the same host, and typically this is still small enough to make keep-alive desirable. | {
"source": [
"https://serverfault.com/questions/426726",
"https://serverfault.com",
"https://serverfault.com/users/92800/"
]
} |
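In HAProxy itself, the behaviour asked about in the question (keep-alive toward clients, a fresh connection toward servers) is what option http-server-close provides; a minimal sketch, with backend names and addresses as placeholders. As for nginx: by default it keeps client connections alive (keepalive_timeout) and closes upstream connections per request unless an upstream keepalive pool is explicitly configured.
defaults
    mode http
    option http-server-close    # keep-alive to clients, close toward servers
    timeout http-keep-alive 10s
backend web_servers
    server web1 10.0.0.11:80 check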
426,736 | I am trying to run iisreset on a windows-xp. It's not a domain machine, just a local workgroup we use for testing. The error I get is : IIS Admin Service is Disabled Just like "IIS Admin Service is Disabled" Error When IIS Admin Is Running with the difference that that Q is for a domain connected computer. - And it's answer is related on Resyncing the domain account. What I tried: uninstall and reinstall IIS, reboot. change the log-on user for the IIS Admin service - to local administrator rather than system To note that the service is off course Enabled, and set to Automatic. If I stop it, and start: iisreset - it immediatly starts, but than I get the error message anyway. I had an issue with the machine clock - fixed. The following A suggests a Group Policy - Are there group policy on NON-Domain computers? IIS Admin Service is disabled EDIT: I tried the suggested: How do I restore the IUSR account used by IIS 6 It did not help. EDIT: When I try to restart/stop/start the IIS Admin from services.msc - it works fine. The problem is from CMD trying to run: iisreset EDIT:
Logging using the Administrator account on the machine, gets the same behavior: IISRESET command, successfully stopping, but fails on starting, with the error:
IIS Admin Service is Disabled. EDIT:
System event error: I need that : http://localhost will work on the PC. Where else should I check? | edit: My answer only covers the original unedited question, which was whether this sort of thing is typical in load balancers/reverse proxies. I'm not sure whether nginx/product X has support for this, 99.9% of my reverse proxying experience is with HAproxy. Correct. HTTP Keep-Alive on the client side, but not on the server side. Why? If you break down a few details you can quickly see why this is a benefit. For this example, let's pretend we're loading a page www.example.com and that page includes 3 images, img[1-3].jpg. Browser loading a page, without Keep-Alive Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/" Server sends the HTML content of the URI "/" (which includes HTML tags referencing the 3 images) Server closes the TCP connection Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/img1.jpg" Server sends the image Server closes the TCP connection Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/img2.jpg" Server sends the image Server closes the TCP connection Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/img3.jpg" Server sends the image Server closes the TCP connection Notice that there are 4 seperate TCP sessions established and then closed. Browser loading a page, with Keep-Alive HTTP Keep-Alive allows for a single TCP connection to serve multiple HTTP requests, one after the other. Client establishes a TCP connection to www.example.com on port 80 Client does an HTTP GET request for "/", and also asks the server to make this an HTTP Keep-Alive session. Server sends the HTML content of the URI "/" (which includes HTML tags referencing the 3 images) Server does not close the TCP connection Client does and HTTP GET request for "/img1.jpg" Server sends the image Client does and HTTP GET request for "/img2.jpg" Server sends the image Client does and HTTP GET request for "/img3.jpg" Server sends the image Server closes TCP connection if no more HTTP requests are received within its HTTP Keep-Alive timeout period Notice that with Keep-Alive, only 1 TCP connection is established and eventually closed. Why's Keep-Alive better? To answer this you must understand what it takes to establish a TCP connection between a client and a server. This is called the TCP 3-way handshake. Client sends a SYN(chronise) packet Server sends back a SYN(chronise) ACK(nowledgement), SYN-ACK Client sends an ACK(nowledgement) packet TCP connection is now considered active by both client and server Networks have latency, so each step in the 3-way handshake takes a certain amount of time. Lets say that there's 30ms between the client and server, the back-and-forth sending of IP packets required to establish the TCP connection means that it takes 3 x 30ms = 90ms to establish a TCP connection. This may not sound like much, but if we consider that in our original example, we have to establish 4 separate TCP connections, this becomes 360ms. What if the latency between the client and server is 100ms instead of 30ms? Then our 4 connections are taking 1200ms to establish. Even worse, a typical web page may require far more than just 3 images in order to load, there may be multiple CSS, JavaScript, image or other files that the client needs to request. 
If the page loads 30 other files and the client-server latency is 100ms, how long do we spend establishing TCP connections? To establish 1 TCP connection takes 3 x latency, i.e. 3 x 100ms = 300ms. We must do this 31 times, once for the page, and another 30 times for each other file referenced by the page. 31 x 300ms = 9.3 seconds. 9.3 seconds spent establishing TCP connections to load a webpage which references 30 other files. And that doesn't even count the time spent sending HTTP requests and receiving responses. With HTTP Keep-Alive, we need only establish 1 TCP connection, which takes 300ms. If HTTP Keep-Alive is so great, why not use it on the server side as well? HTTP reverse proxies (like HAproxy) are typically deployed very close to the backend servers they are proxying for. In most cases the latency between the reverse proxy and its backend server/s will be under 1ms, so establishing a TCP connection is much faster than it is between a client. That's only half the reason though. An HTTP server allocates a certain amount of memory for each client connection. With Keep-Alive, it will keep the connection alive, and by extension it'll keep a certain amount of memory in use on the server, until the Keep-Alive timeout is reached, which may be up to 15s, depending on server configuration. So if we consider the effects of using Keep-Alive on the server side of an HTTP reverse proxy, we are increasing the need for memory, but because the latency between the proxy and the server is so low, we get no real benefit from the reduction in time taken for TCP's 3-way handshake, so its typically better to just disable Keep-Alive between the proxy and the web server in this scenario. Disclaimer: yes, this explanation doesn't take into account the fact that browsers typically establish multiple HTTP connections to a server in parallel. However, there is a limit to how many parallel connections a browser will make to the same host, and typically this is still small enough to make keep-alive desirable. | {
"source": [
"https://serverfault.com/questions/426736",
"https://serverfault.com",
"https://serverfault.com/users/85932/"
]
} |
426,748 | I set up a cron job on my server running RedHat 4.1 to backup MySQL databases, and then upload to Amazon S3. The goal is to drop the .bz2 file in a folder corresponding with the day of the week. However, I'm getting the following error mailed to me by daemon. Cron job: [email protected]
0 4 * * * mysqldump --all-databases -ubackups -pPassword | gzip > all-databases.sql.bz2; s3cmd put all-databases.sql.bz2 s3://backup_exampleserver.com/mysql_backups/`date +%A`/all-databases.sql.bz2 Error message: /bin/sh: -c: line 0: unexpected EOF while looking for matching ``'
/bin/sh: -c: line 1: syntax error: unexpected end of file | You need to escape the percent sign in your command with a backslash: \% , otherwise it is interpreted as the end of command. From crontab (5): The command field (the rest of the line) is the command to be run. The
entire command portion of the line, up to a newline or % character, will
be executed by /bin/sh or by the shell specified in the SHELL variable of
the crontab. Percent signs (‘%’) in the command, unless escaped with a
backslash (‘\’), will be changed into newline characters, and all data
after the first ‘%’ will be sent to the command as standard input. | {
"source": [
"https://serverfault.com/questions/426748",
"https://serverfault.com",
"https://serverfault.com/users/135163/"
]
} |
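Applying the escaping rule from the answer, the crontab entry from the question becomes the line below (everything else unchanged; note that a gzip-compressed dump would more conventionally be named .sql.gz, but the file names are kept as in the question).
0 4 * * * mysqldump --all-databases -ubackups -pPassword | gzip > all-databases.sql.bz2; s3cmd put all-databases.sql.bz2 s3://backup_exampleserver.com/mysql_backups/`date +\%A`/all-databases.sql.bz2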
426,807 | If a DNS server looks up a record and it's missing, it will often "negatively cache" the fact that this record is missing, and not try to look it up again for a while. I don't see anything in the RFC about what the TTL for negative caching should be, so I'm guessing it's somewhat arbitrary. In the real world, how long do these negative records stick around for? | The TTL for negative caching is not arbitrary. It is taken from the SOA record at the top of the zone to which the requested record would have belonged, had it existed. For example:
2012091201 43200 1800 1209600 86400 ) The last value in the SOA record ("86400") is the amount of time clients are asked to cache negative results under example.org. . If a client requests doesnotexist.example.org. , it will cache the result for 86400 seconds. | {
"source": [
"https://serverfault.com/questions/426807",
"https://serverfault.com",
"https://serverfault.com/users/77287/"
]
} |
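To see the value in practice, query the SOA record directly, or ask for a name that does not exist and look at the SOA returned in the authority section (that is what resolvers use for negative caching); example.org is a placeholder.
# The last field of the SOA is the negative-caching TTL
dig +short SOA example.org
# A query for a non-existent name returns that SOA in the AUTHORITY section
dig +noall +authority doesnotexist.example.org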
427,018 | I have always noticed an IP something "169.254.x.x" in my routing table even when I am not connected to any network in my Windows operating system. In Linux, when I list my routing table. $ ip route show I get an entry like 169.254.0.0/16 dev eth0 scope link metric 1000 Can somebody explain me what is this IP address actually. Whether its something like the 127.0.0.0/8 family. Edit : In ec2, each instance can get meta-data regarding their own by making HTTP requests to this IP. $ curl -s http://169.254.169.254/user-data/ So can someone tell me to whom this IP address is actually assigned ? | These are dynamically configured link-local addresses . They are only valid on a single network segment and are not to be routed. Of particular note, 169.254.169.254 is used in AWS , Azure and other cloud computing platforms to host instance metadata service. | {
"source": [
"https://serverfault.com/questions/427018",
"https://serverfault.com",
"https://serverfault.com/users/95256/"
]
} |
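Two quick ways to see link-local addressing in practice; the second only works from inside an EC2 (or similar cloud) instance, and newer instances may additionally require a metadata session token.
# A host that failed to get a DHCP lease often self-assigns an address in 169.254.0.0/16
ip addr show | grep 169.254
# On AWS EC2, the link-local address 169.254.169.254 serves instance metadata
curl -s http://169.254.169.254/latest/meta-data/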
427,262 | I recently received the following message from Google Webmaster Tools: Dear site owner or webmaster of http://gotgenes.com/ , [...] Below are one or more example URLs on your site which may be part of a
phishing attack: http://repair.gotgenes.com/~elmsa/.your-account.php [...] What I don't understand is that I never had a subdomain repair.gotgenes.com, but visiting it in the web browser gives an actual website. My DNS is FreeDNS , which does not list a repair subdomain. My domain name is registered with GoDaddy, and the nameservers are correctly set to NS1.AFRAID.ORG, NS2.AFRAID.ORG, NS3.AFRAID.ORG, and NS4.AFRAID.ORG. I have the following questions: Where is repair.gotgenes.com actually registered? How was it registered? What action can I take to have it removed from DNSs? How can I prevent this from happening in the future? This is pretty disconcerting; I feel like my domain has been hijacked. Any help would be much appreciated. | Sigh. I've had a few clients fall trap to this by using afraid.org as their DNS provider. Because they're free, they allow anyone who wants to to create subdomains off your primary domain, unless you specifically disallow it. You can see here: https://freedns.afraid.org/domain/registry/?sort=5&q=gotgenes&submit=SEARCH that someone has created 79 subdomains off your primary domain. Never. ever. ever. ever. use afraid.org for a website you care about. | {
"source": [
"https://serverfault.com/questions/427262",
"https://serverfault.com",
"https://serverfault.com/users/12533/"
]
} |
428,820 | How to locate large files (> 100 MB) in /home/ for 'cleaning'? It's CentOS 6.x. I tried some commands, but they didn't work. | Find has its own -delete option, so find /home -type f -size +100M -delete should do what you want. Just be careful about where you put the -delete option. Warnings: Don't forget that the find command line is evaluated
as an expression, so putting -delete first will make find try to
delete everything below the starting points you specified. If you want to test this before using it then you need to add -depth as -delete implies it. find /home -type f -size +100M -depth | {
"source": [
"https://serverfault.com/questions/428820",
"https://serverfault.com",
"https://serverfault.com/users/126492/"
]
} |
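Before deleting anything, it is usually worth listing the matches with their sizes first; a couple of variants (the -printf form assumes GNU find, which CentOS has).
# List files over 100 MB with human-readable sizes
find /home -type f -size +100M -exec ls -lh {} +
# Or sort them by size, largest last
find /home -type f -size +100M -printf '%s\t%p\n' | sort -n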
429,299 | This question is from 2012. If you are reading this in 2019 or later, then the answer really is: No. There is no good reason in 2019 to be maintaining 32-bit desktop operating systems. Original question below: Server software has been 64-bit only for a while now (Since Server 2008 R2 for Windows, even earlier for Exchange and Sharepoint) and even Ubuntu are pushing you away from 32-bit versions for their server OSes. But is there any good, quantifiable reason to keep a 32-bit desktop operating system maintained? We're preparing our Windows 8 images for the (unfortunate?) few that will be early adopters. The majority of our desktop computers have 4gb or less of RAM, but I would love to not have to bother supporting a 32-bit flavoured operating system any more. Any reason why I should? | 32-bit can be slightly faster in certain use cases -- the smaller addresses means sightly more compact code, which means greater cache efficiency. In the benchmarks I've seen, that efficiency tends to be be overshadowed by 64-bit's greater computational efficiency in heavy-computation environments. But 32-bit does in fact occasionally win on some benchmarks. YMMV. The age of your software matters, as newer builds take advantage of 64-bit stuff that older builds do not. More compact code means less disk space. Just go download the ISOs for your favorite OS in 64 and 32 -bit flavors to see the difference. It's not trivial. It's also quite a lot more once you uncompress the binaries. As pointed out by OrangeDog : Much of this space consumption comes from the fact that 64-bit OSes ship 32-bit libraries in addition to the 64-bit ones. You still get better compatibility with legacy components and software with 32-bit. This is particularly visible in systems that dynamically compile on the host machine but pull in 3rd-party binary libraries at the same time. Microsoft's .NET framework is a great example of this: while the programs are theoretically architecture-independent, anytime you link to a native binary you tie to one arch or the other. Many developers don't even know this is happening, and ship production components that will fail to run on 64-bit systems without some tweaking to explicitly instruct .NET to run in 32-bit mode. Most people don't know how to do this. As pointed out by Daniel B: Windows .NET development on 64-bit machines leaves you open to a frustrating inconsistency where under certain circumstances exceptions are masked by the OS. Legacy hardware. You can't run a 32-bit driver on a 64-bit kernel. None of this adds up to a show-stopper for most people. Still, you have to decide how these factors affect your environment. | {
"source": [
"https://serverfault.com/questions/429299",
"https://serverfault.com",
"https://serverfault.com/users/7709/"
]
} |
429,392 | Using duplicity to backup a folder on a certain event, how can I get a list of all available backup dates as I don't know in advance when the event occurred?
I want to list the available dates as deja-dup does. Final goal is to restore a certain date from the list. duplicity file:///backup-folder restore-folder --restore-time "yyyy-mm-dd" | The following command is probably what you're looking for: duplicity collection-status file:///backup-folder example: Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Mon Mar 4 10:37:27 2013
Chain end time: Fri Mar 15 15:42:22 2013
Number of contained backup sets: 3
Total number of contained volumes: 10
Type of backup set: Time: Num volumes:
Full Mon Mar 4 10:37:27 2013 8
Incremental Fri Mar 8 15:53:07 2013 1
Incremental Fri Mar 15 15:42:22 2013 1
-------------------------
No orphaned or incomplete backup sets found. | {
"source": [
"https://serverfault.com/questions/429392",
"https://serverfault.com",
"https://serverfault.com/users/107998/"
]
} |
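Once a date has been picked from the collection-status output above, a restore of that point in time (or of a single file) might look like the sketch below, reusing the --restore-time syntax from the question; the paths are placeholders.
# Restore the whole backup as it was on 8 March 2013
duplicity --restore-time "2013-03-08" file:///backup-folder restore-folder
# Restore a single file from that date
duplicity --restore-time "2013-03-08" --file-to-restore path/inside/backup file:///backup-folder restored-file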
429,400 | I was wondering if someone could help me with the following iptables rule: We would like to allow ANY and ALL locally originating (as in, on the server running iptables) traffic. DNS, HTTP, etc... all of it. Any connection initiated by the server running iptables should be allowed. Currently we are using basically OUTPUT default policy, ACCEPT. Is this correct? Inputs are blocked, so I am assuming this means that the connections (except those we allow) cannot be started because they will be dropped before our side can hit the OUTPUT policy? Sorry, my iptables skills are weak ;) Thank you kindly. | You need two rules to do that: iptables -I OUTPUT -o eth0 -d 0.0.0.0/0 -j ACCEPT
iptables -I INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT Some notes. Preexisting rules that you may have may do this already, but look different. This uses -I to force these rules to be first. iptables rules are evaluated top down. The -o and -i flags mean "out" and "in" respectively. Replace eth0 with the proper ethernet interface name. | {
"source": [
"https://serverfault.com/questions/429400",
"https://serverfault.com",
"https://serverfault.com/users/95749/"
]
} |
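Putting the two rules into context, a minimal default-deny ruleset that still allows everything the server itself initiates might look like the sketch below; SSH is kept open so you don't lock yourself out, and the port list should be adjusted to your environment.
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT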
429,426 | Especially with the option to install Server Core in Server 2008 and above, connecting to Windows servers over a CLI is increasingly useful ability, if not one that's very widespread amongst Windows administrators. Practically every Windows GUI management tool has an option to connect to a remote computer, but there is no such option present in the built-in Windows CLI ( cmd.exe ), which gives the initial impression that this might not be possible. Is it possible to remotely management or administer a Windows Server using a CLI? And if so, what options are there to achieve this? | There are several fairly easy options available for remotely managing a remote Windows Server using a command line, including a few native options. Native Options: WinRS/WinRM Windows Remote Shell/Management tool is the easiest way to remotely manage a remote Windows server in a command line utility, and as with most Windows command line utilities, ss64 has a good page on its options and syntax . Although not explicitly stated in the Microsoft documentation, this can be used to launch a remote instance of cmd.exe , which creates an interactive command line on the remote system, rather than as command line option to execute a single command on a remote server. As with: winrs -r:myserver.mydomain.tld cmd This is also the natively-supported option that will probably be most familiar to administrators of other systems (*nix, BSD , etc.) that are primarily CLI -based. PowerShell Hopefully PowerShell needs no introduction, and can be used to manage remote computers from a CLI using WMI (Windows Management Instrumentation). PowerShell remoting allows the execution of Powershell scripts and commands on remote computers. There are a number of good resources on using WMI + PowerShell for remote management, such as The Scripting Guy's blog , the MSDN WMI Reference and ss64.com, which has an index of PowerShell 2.0 commands . Remote Desktop Probably not exactly the first thing to come to mind as a Window CLI option, but of course, using mstsc.exe to connect to a server over Remote Desktop Protocl ( RDP ) does enable the use of a command line on the remote server. Connecting to a Server Core installation over RDP , is actually possible and will give the same interface as connecting to the console - an instance of cmd.exe . This may be somewhat counter-intuitive, as Server Core lacks a desktop, or the other normal Windows shell options, but there's a quick article over at petri.co.il about how to manage Server Core over RDP , should one be so inclined. Popular, Non-Native Options: Even though Windows now provides a few native options for accessing a remote sever over a CLI , this was not always the case, and as a result, a number of fairly popular 3rd party solutions were created. The three most notable are below. Install SSH on your Windows Server If you just must have SSH , that's an option too, and there's a guide on social.technet for how to install OpenSSH on Server 2008. Probably most useful for administrators of other systems (*nix, BSD , etc.) that make heavy use of SSH for this purpose, though there are advantages to even Windows-only administrators for having a single terminal emulator client (like PuTTY ) store a number of target computers and customized (or standardized) settings for each. PSExec The original option for executing remote commands on a Windows box through the Windows CLI , this is part of the excellent SysInternals suite . 
One of the very few "must have" packages for Windows admins, the SysInternals tools were so widely respected and used that SysInternals was bought out by Microsoft, and the tools are now somewhat officially supported by Microsoft. Just as with WinRS/RM, PSExec can be used to issue single commands to a remote server, or to launch an interactive instance of cmd.exe on a remote computer. As with: psexec \\myserver.mydomain.tld cmd As with the other options, there are steps one must take first to ensure PSExec is actually able to connect to the target machine. Add a utilities folder to the server and store its value in the %PATH% system variable As has been noted in the comments, there are many good SysInternals programs that can be executed on the command line and targeted at a remote system, and this is true of more than just SysInternals. Basically, package up a bundle of your favorite Windows utilities (I include the SysInternals Suite, PuTTY, WinDirStat and a bunch of custom scripts I find myself reusing) into a folder that gets pushed to all your servers, and add that folder to the %PATH% environment variable of your systems. Both are easily done through GPO. Obviously, this is useful for more than just managing Windows systems via CLI, but I find it so useful I think it's worth including anyway.
"source": [
"https://serverfault.com/questions/429426",
"https://serverfault.com",
"https://serverfault.com/users/118258/"
]
} |
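For the PowerShell option mentioned in the answer above, remoting (built on WinRM) covers both interactive and scripted management; a short sketch, reusing myserver.mydomain.tld as the placeholder host name and W3SVC purely as an example service.
# One-time setup on the target server (elevated prompt)
Enable-PSRemoting -Force
# Interactive remote session
Enter-PSSession -ComputerName myserver.mydomain.tld
# Run a command or script block remotely
Invoke-Command -ComputerName myserver.mydomain.tld -ScriptBlock { Get-Service W3SVC }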
429,634 | I have an Apache 2.2 server with an SSL certificate hosting several services that should only be accessed using SSL. i.e.: https://myserver.com/topsecret/ should be allowed, while http://myserver.com/topsecret/ should be either denied or, ideally, redirected to https. http://myserver.com/public should not have this restriction, and should work using either http or https. The decision to allow/deny http is made at the top-level directory, and affects all content underneath it. Is there a directive that can be placed in the Apache config to restrict access in this manner? | The SSLRequireSSL directive is what you're looking for. Inside your <VirtualHost> , or at the top level if you're not using virtual hosts: <Directory /topsecret>
SSLRequireSSL
</Directory> Or in .htaccess : SSLRequireSSL | {
"source": [
"https://serverfault.com/questions/429634",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
]
} |
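Since the question also asks about redirecting plain HTTP to HTTPS rather than only denying it, one common approach (assuming mod_rewrite is available) is sketched below; place it in the port-80 virtual host or in the directory's .htaccess.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/?topsecret/(.*) https://%{HTTP_HOST}/topsecret/$1 [R=301,L]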
429,757 | I am experiencing an issue with bind. If I want to resolve any domain name that is in the zone file, it works fine. However, resolution fails when I try to resolve anything that does not belong to the zone file. I know that the DNS servers being forwarded to are working fine, but somehow bind9 fails to use them.
The content of /etc/bind/named.conf.options is: options {
directory "/var/cache/bind";
forwarders {
131.181.127.32;
131.181.59.48;
};
dnssec-validation auto;
auth-nxdomain no; # conform to RFC1035
listen-on-v6 { any; };
}; I have also tried to use only one ip address and it still did not work. also the content of /etc/bind/named.conf is: include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones"; So there is no problem with including options file.
Any recommendations for fixing this problem? | I had this issue before with recent version of Bind (9.8.1). The following option solved the problem for me : dnssec-validation no; | {
"source": [
"https://serverfault.com/questions/429757",
"https://serverfault.com",
"https://serverfault.com/users/135933/"
]
} |
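Applying the accepted fix to the configuration shown in the question, the options block becomes something like the sketch below, followed by a reload and a test query; the forwarder addresses are the ones from the question, and www.example.com is a placeholder. Note that disabling DNSSEC validation trades security for convenience; configuring validation properly is the longer-term fix.
options {
    directory "/var/cache/bind";
    forwarders {
        131.181.127.32;
        131.181.59.48;
    };
    dnssec-validation no;
    auth-nxdomain no; # conform to RFC1035
    listen-on-v6 { any; };
};
# Then reload and test a name outside your zones
rndc reload
dig @127.0.0.1 www.example.com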