source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
292,513 | The man page of logrotate says that: It can be used when some program cannot be told to close its logfile
and thus might continue writing to the previous log file for some
time. I'm confused by this. If a program cannot be told to close its logfile, it will continue to write forever , not for sometime . If the compression is postponed to next rotation cycle, the program continues to write to that file even after the next rotation cycle. How is postponing solving the problem? My understanding is that copytruncate should be used when a program cannot be told to close the logfile. I'm aware that some data written to the logfile gets lost when the copy is in progress. I was looking at the logrotate file for couchdb, and it had both copytruncate and delaycompress options. /usr/local/couchdb-1.0.1/var/log/couchdb/*.log {
weekly
rotate 10
copytruncate
delaycompress
compress
notifempty
missingok
} It looks like there is no point using delaycompress when copytruncate is already there. What am I missing? | Your understanding of copytruncate is correct, but the wording in the manpage for delaycompress is a little misleading. More properly, it should say "when some program cannot be told to immediately close it's logfile" -- for instance, if you're using sharedscripts and the script sends a signal to the process using the log when all the log files have been rotated. | {
"source": [
"https://serverfault.com/questions/292513",
"https://serverfault.com",
"https://serverfault.com/users/72005/"
]
} |
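A minimal sketch of the delaycompress-plus-sharedscripts pattern described in the answer above (the log path, pid file and signalled daemon are illustrative, not taken from the original post):

/var/log/mydaemon/*.log {
    weekly
    rotate 10
    compress
    delaycompress
    sharedscripts
    postrotate
        # Ask the daemon to reopen its logs; it may keep writing to the
        # old file briefly, which is why compression is delayed one cycle.
        /bin/kill -HUP `cat /var/run/mydaemon.pid 2>/dev/null` 2>/dev/null || true
    endscript
}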
293,171 | I'm told that it's possible to make a web application that does not require a login. The user logs in to Windows, which authenticates via an Active Directory (LDAP) Lookup. Then, they should be able to go to my webapp and never see a login prompt. These customers have been referring to this as Single Sign On (perhaps incorrectly and part of my confusion). But, from what I read Single Sign On from the Tomcat docs is: The Single Sign On Valve is utilized when you wish to give users the
ability to sign on to any one of the web applications associated with
your virtual host , and then have their identity recognized by all
other web applications on the same virtual host. This is perfectly clear to me. User has to login once and can access every webapp on an instance of tomcat. But, what I need to do is somehow let them login without ever providing any credentials to my tomcat server. So, in order for this to work I imagine: User makes request for some page Server sees no session token and then request the client for some credentials. The clients browser without any intervention from the user provides some credentials to the server. Then, using those credentials provided by the clients browser it does a lookup in an LDAP. I've seen some examples which use client side certificates... particularly the DoD PKI system which makes some sense to me because in those cases you configure Tomcat to request client side certs , but just logging into windows I don't see how this would work and what information the browser would pass to the server etc. Is this what NTLM is used for? | First of all - and in case other users happen to visit this page - there are only certain authentication methods that allow you to do promptless SSO. These are NTLM and Kerberos . LDAP - on the other hand - will never give you promptless SSO. NTLM is actually NTLMv1 and NTLMv2. These are very different and NTLMv1 is deprecated because of serious security issues. You should shy away from Java authentication solutions that fail to correctly identify if they support NTLMv1 or NTLMv2 because they only use the word "NTLM" in their documentation. Chances are the developer's of said security solution don't know themselves which is all the more reason to look for the fire escape. Contrary to traditional belief both NTLMv1 and NTLMv2 are fully documented by Microsoft but you will still find solutions that claim to have 'reverse engineered' the protocol. It is true that this was needed prior to Microsoft documenting the protocols I believe around 2006 or 2007. Anyway NTLMv1 is a no-no. There's nothing wrong with NTLMv2 per-se but Microsoft has been phasing out NTLM (in any form) in all of its products in favour of Kerberos authentication. NTLMv1 is long dead and NTLMv2 is now only used by Microsoft in cases where no Domain Controller is available. Bottom line: NTLM (in any form) is not really the way forward. We should actually salute Microsoft for taking a standards based approach here. This leaves you with Kerberos. Microsoft has created a protocol for negotiating and transporting authentication information over HTTP. This is known in Microsoft products as " Integrated Windows Authentication " but it has been nailed down as an official standard under the name of SPNEGO . This is what you should be looking for. SPNEGO supports both NTLMv2 and Kerberos as the underlying authentication mechanism but for the above reasons you should be targeting Kerberos rather than NTLMv2. I've successfully integrated several Tomcat applications (running on Linux/Solaris) with Active Directory using the SPNEGO Project at SourceForge . I've found this to be the simplest approach. This gives you promptless SSO similar to what for example a Sharepoint server does. This is most likely what your users will expect when talking about 'SSO'. Getting the Kerberos configuration right, generating keys and setting up 'dummy' accounts in Active Directory can be a hassle but once you get it right it works like a charm. The only thing I do not like about the SPNEGO Project at SourceForge is that I do not understand how often it performs the authentication. 
My nasty suspicion is that it does it for every page view rather than once for each session. Perhaps I'm wrong in this. Anyway: this highlights another thing to consider in SSO solutions: you don't want to implement a solution that 'spams' your identity provider (say Active Directory) with unnecessary requests. | {
"source": [
"https://serverfault.com/questions/293171",
"https://serverfault.com",
"https://serverfault.com/users/4811/"
]
} |
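A rough sketch of the Kerberos plumbing mentioned in the answer above (account, host, principal and realm names are made up for illustration): the keytab is produced on the Active Directory side with ktpass, then verified from the Tomcat host with kinit/klist before touching the webapp at all.

ktpass /princ HTTP/tomcat01.example.com@EXAMPLE.COM /mapuser svc-tomcat@EXAMPLE.COM /ptype KRB5_NT_PRINCIPAL /crypto RC4-HMAC-NT /pass * /out tomcat.keytab
# on the Tomcat host, confirm the keytab actually authenticates
kinit -kt /etc/tomcat.keytab HTTP/tomcat01.example.com@EXAMPLE.COM
klist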
293,217 | A security auditor for our servers has demanded the following within two weeks: A list of current usernames and plain-text passwords for all user accounts on all servers A list of all password changes for the past six months, again in plain-text A list of "every file added to the server from remote devices" in the past six months The public and private keys of any SSH keys An email sent to him every time a user changes their password, containing the plain text password We're running Red Hat Linux 5/6 and CentOS 5 boxes with LDAP authentication. As far as I'm aware, everything on that list is either impossible or incredibly difficult to get, but if I don't provide this information we face losing access to our payments platform and losing income during a transition period as we move to a new service. Any suggestions for how I can solve or fake this information? The only way I can think to get all the plain text passwords, is to get everyone to reset their password and make a note of what they set it to. That doesn't solve the problem of the past six months of password changes, because I can't retroactively log that sort of stuff, the same goes for logging all the remote files. Getting all of the public and private SSH keys is possible (though annoying), since we have just a few users and computers. Unless I've missed an easier way to do this? I have explained to him many times that the things he's asking for are impossible. In response to my concerns, he responded with the following email: I have over 10 years experience in security auditing and a full
understanding of the redhat security methods, so I suggest you check
your facts about what is and isn't possible. You say no company could
possibly have this information but I have performed hundreds of audits
where this information has been readily available. All [generic credit
card processing provider] clients are required to conform with our new
security policies and this audit is intended to ensure those policies
have been implemented* correctly. *The "new security policies" were introduced two weeks before our audit, and the six months historical logging was not required before the policy changes. In short, I need; A way to "fake" six months worth of password changes and make it look valid A way to "fake" six months of inbound file transfers An easy way to collect all the SSH public and private keys being used If we fail the security audit we lose access to our card processing platform (a critical part of our system) and it would take a good two weeks to move somewhere else. How screwed am I? Update 1 (Sat 23rd) Thanks for all your responses, It gives me great relief to know this isn't standard practice. I'm currently planning out my email response to him explaining the situation. As many of you pointed out, we have to comply with PCI which explicitly states we shouldn't have any way to access plain-text passwords. I'll post the email when I've finished writing it. Unfortunately I don't think he's just testing us; these things are in the company's official security policy now. I have, however, set the wheels in motion to move away from them and onto PayPal for the time being. Update 2 (Sat 23rd) This is the email I've drafted out, any suggestions for stuff to add/remove/change? Hi [name], Unfortunately there is no way for us to provide you with some
of the information requested, mainly plain-text passwords, password
history, SSH keys and remote file logs. Not only are these things
technically impossible, but also being able to provide this
information would be both against PCI Standards, and a breach of the
data protection act. To quote the PCI requirements, 8.4 Render all passwords unreadable during transmission and storage on
all system components using strong cryptography. I can provide you
with a list of usernames and hashed passwords used on our system,
copies of the SSH public keys and authorized hosts file (This will
give you enough information to determine the number of unique users
can connect to our servers, and the encryption methods used),
information about our password security requirements and our LDAP
server but this information may not be taken off site. I strongly
suggest you review your audit requirements as there is currently no way
for us to pass this audit while remaining in compliance of PCI and the
Data Protection act. Regards, [me] I will be CC'ing in the company's CTO and our account manager, and I'm hoping the CTO can confirm this information is not available. I will also be contacting the PCI Security Standards Council to explain what he's requiring from us. Update 3 (26th) Here are some emails we exchanged; RE: my first email; As explained, this information should be easily available on any well
maintained system to any competent administrator. Your failure to be
able to provide this information leads me to believe you are aware of
security flaws in your system and are not prepared to reveal them. Our
requests line up with the PCI guidelines and both can be met. Strong
cryptography only means the passwords must be encrypted while the user
is inputting them but then they should be moved to a recoverable format
for later use. I see no data protection issues for these requests, data protection only
applies to consumers not businesses so there should be no issues with this
information. Just, what, I, can't, even... "Strong cryptography only means the passwords must be encrypted while
the user is inputting them but then they should be moved to a
recoverable format for later use." I'm going to frame that and put it on my wall. I got fed up being diplomatic and directed him to this thread to show him the response I got: Providing this information DIRECTLY contradicts several requirements
of the PCI guidelines. The section I quoted even says storage (Implying to where we store the data on the disk). I started a
discussion on ServerFault.com (An on-line community for sys-admin
professionals) which has created a huge response, all suggesting this
information cannot be provided. Feel free to read through yourself https://serverfault.com/questions/293217/ We have finished moving over our system to a new platform and will be
cancelling our account with you within the next day or so but I want
you to realize how ridiculous these requests are, and no company
correctly implementing the PCI guidelines will, or should, be able to
provide this information. I strongly suggest you re-think your
security requirements as none of your customers should be able to
conform to this. (I'd actually forgotten I'd called him an idiot in the title, but as mentioned we'd already moved away from their platform so no real loss.) And in his response, he states that apparently none of you know what you're talking about: I read in detail through those responses and your original post, the
responders all need to get their facts right. I have been in this
industry longer than anyone on that site, getting a list of user
account passwords is incredibly basic, it should be one of the first
things you do when learning how to secure your system and is essential
to the operation of any secure server. If you genuinely lack the
skills to do something this simple I'm going to assume you do not have
PCI installed on your servers as being able to recover this
information is a basic requirement of the software. When dealing with
something such as security you should not be asking these questions on
a public forum if you have no basic knowledge of how it works. I would also like to suggest that any attempt to reveal me, or
[company name] will be considered libel and appropriate legal action
will be taken Key idiotic points if you missed them: He's been a security auditor longer than anyone else on here has (He's either guessing, or stalking you) Being able to get a list of passwords on a UNIX system is 'basic' PCI is now software People shouldn't use forums when they're not sure of security Posing factual information (to which I have email proof) online is libel Excellent. PCI SSC have responded and are investigating him and the company. Our software has now moved onto PayPal so we know it's safe. I'm going to wait for PCI to get back to me first but I'm getting a little worried that they might have been using these security practices internally. If so, I think it is a major concern for us as all our card processing ran through them. If they were doing this internally I think the only responsible thing to do would be to inform our customers. I'm hoping when PCI realize how bad it is they will investigate the entire company and system but I'm not sure. So now we've moved away from their platform, and assuming it will be at least a few days before PCI get back to me, any inventive suggestions for how to troll him a bit? =) Once I've got clearance from my legal guy (I highly doubt any of this is actually libel but I wanted to double check) I'll publish the company name, his name and email, and if you wish you can contact him and explain why you don't understand the basics of Linux security like how to get a list of all the LDAP users passwords. Little update: My "legal guy" has suggested revealing the company would probably cause more problems than needed. I can say though, this is not a major provider, they have less 100 clients using this service. We originally started using them when the site was tiny and running on a little VPS, and we didn't want to go through all the effort of getting PCI (We used to redirect to their frontend, like PayPal Standard). But when we moved to directly processing cards (including getting PCI, and common sense), the devs decided to keep using the same company just a different API. The company is based in the Birmingham, UK area so I'd highly doubt anyone here will be affected. | First, DON'T capitulate. He is not only an idiot but DANGEROUSLY wrong. In fact, releasing this information would violate the PCI standard (which is what I'm assuming the audit is for since it's a payment processor) along with every other standard out there and just plain common sense. It would also expose your company to all sorts of liabilities. The next thing I would do is send an email to your boss saying he needs to get corporate counsel involved to determine the legal exposure the company would be facing by proceeding with this action. This last bit is up to you, but I would contact VISA with this information and get his PCI auditor status pulled. | {
"source": [
"https://serverfault.com/questions/293217",
"https://serverfault.com",
"https://serverfault.com/users/80776/"
]
} |
293,226 | Our Linux systems run logwatch(8) utility by default. On a RedHat/CentOS/SL system, Logwatch is called by the /etc/cron.daily/ cronjob, which then sends a daily email with the results. These emails have a subject like: Subject: Logwatch for $HOSTNAME The problem is that by default these daily emails are too noisy and contain a lot of superfluous information (HTTP errors, daily disk usage, etc) which are already monitored by other services (Nagios, Cacti, central syslog, etc). For 100 systems, the email load is unbearable. People ignore the emails, which means that we may miss problems which are picked up by logwatch. How can I reduce the amount of noise generated by logwatch, but still use logwatch to notify us of significant problems? I'll post my own answer below, but I would like to see what others have done. Note : I have a similar question regarding FreeBSD, at FreeBSD: periodic(8) is too noisy. How can I control the noise level? | Overall, the available documentation for Logwatch lacks adequate explanation and is often far too vague. I pieced together some useful examples, and have reduced the Logwatch noise by over 95%. Here's what I have found. Keep in mind that you can find some Logwatch documentation at /usr/share/doc/logwatch-*/HOWTO-Customize-LogWatch , and it contains a few useful examples. On RHEL/CentOS/SL, the default logwatch configuration is under /usr/share/logwatch/default.conf/logwatch.conf These settings can be overriden by placing your local configuration under /etc/logwatch/conf/logwatch.conf . Place the following in that file to tell logwatch to completely ignore services like 'httpd' and the daily disk usage checks: # Don't spam about the following Services
Service = "-http"
Service = "-zz-disk_space" Sometimes I don't want to completely disable logwatch for a specific service, I just want to fine tune the results to make them less noisy. /usr/share/logwatch/default.conf/services/*.conf contains the default configuration for the services. These parameters can be overridden by placing your local configuration under /etc/logwatch/conf/services/$SERVICE.conf . Unfortunately, logwatch's ability here is limited, and many of the logwatch executables are full of undocumented Perl. Your choice is to replace the executable with something else, or try to override some settings using /etc/logwatch/conf/services . For example, I have a security scanner which runs scans across the network. As the tests run, the security scanner generates many error messages in the application logs. I would like logwatch to ignore errors from my security scanners, but still notify me of attacks from other hosts. This is covered in more detail at Logwatch: Ignore certain IPs for SSH & PAM checks? . To do this, I place the following under /etc/logwatch/conf/services/sshd.conf : # Ignore these hosts
*Remove = 192.168.100.1
*Remove = X.Y.123.123
# Ignore these usernames
*Remove = testuser
# Ignore other noise. Note that we need to escape the ()
*Remove = "pam_succeed_if\(sshd:auth\): error retrieving information about user netscan.* " logwatch also allows you to strip out output from the logwatch emails by placing regular expressions in /etc/logwatch/conf/ignore.conf . HOWTO-Customize-LogWatch says: ignore.conf: This file specifies regular expressions that,
when matched by the output of logwatch, will
suppress the matching line, regardless of which
service is being executed. However, I haven't had much luck with this. My requirements need a conditional statement, which is something like 'If there are security warnings due to my security scanner, then don't print the output. But if there are security warnings from my security scanner and from some bad guys, then print the useful parts-- The header which says "Failed logins from:", the IPs of the bad hosts, but not the IPs of scanners.' Nip it at the source (As suggested by @user48838). These messages are being generated by some application, and then Logwatch is happily spewing the results to you. In these cases, you can modify the application to log less. This isn't always desirable, because sometimes you want the full logs to be sent somewhere (to a Central syslog server, central IDS server, Splunk, Nagios, etc.), but you don't want logwatch to email you about this from every server, every day. | {
"source": [
"https://serverfault.com/questions/293226",
"https://serverfault.com",
"https://serverfault.com/users/36178/"
]
} |
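When tuning these overrides it helps to run logwatch by hand rather than waiting for the nightly cron email; something along these lines (option names as listed in the logwatch man page) prints a single service's report to stdout:

logwatch --service sshd --range today --detail High --print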
294,121 | I'd like to know what the maximum username length is for current GNU/Linux systems, e.g. Ubuntu 11.04. 8 characters appears to be some historical standard, but I've already noticed on my current Ubuntu system that this limit does not apply. | The current limit is 32 characters (according to useradd man page). | {
"source": [
"https://serverfault.com/questions/294121",
"https://serverfault.com",
"https://serverfault.com/users/42893/"
]
} |
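A quick way to confirm the limit on a given system is simply to try it; the test names below are generated on the fly with bash brace expansion.

# 32 characters should be accepted (and is cleaned up again)
sudo useradd "$(printf 'u%.0s' {1..32})" && sudo userdel "$(printf 'u%.0s' {1..32})"
# 33 characters exceeds the limit and should be refused by useradd
sudo useradd "$(printf 'u%.0s' {1..33})"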
294,209 | Recently we had an apache server which was responding very slowly due to SYN flooding. The workaround for this was to enable tcp_syncookies ( net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf ). I posted a question about this here if you want more background. After enabling syncookies we started seeing the following message in /var/log/messages approximately every 60 seconds: [84440.731929] possible SYN flooding on port 80. Sending cookies. Vinko Vrsalovic informed me that this means the syn backlog is getting full, so I raised tcp_max_syn_backlog to 4096. At some point I also lowered tcp_synack_retries to 3 (down from the default of 5) by issuing sysctl -w net.ipv4.tcp_synack_retries=3 . After doing this, the frequency seemed to drop, with the interval of the messages varying between roughly 60 and 180 seconds. Next I issued sysctl -w net.ipv4.tcp_max_syn_backlog=65536 , but am still getting the message in the log. Throughout all this I've been watching the number of connections in SYN_RECV state (by running watch --interval=5 'netstat -tuna |grep "SYN_RECV"|wc -l' ), and it never goes higher than about 240, much much lower than the size of the backlog. Yet I have a Red Hat server which hovers around 512 (limit on this server is the default of 1024). Are there any other tcp settings which would limit the size of the backlog or am I barking up the wrong tree? Should the number of SYN_RECV connections in netstat -tuna correlate to the size of the backlog? Update As best I can tell I'm dealing with legitimate connections here, netstat -tuna|wc -l hovers around 5000. I've been researching this today and found this post from a last.fm employee, which has been rather useful. I've also discovered that the tcp_max_syn_backlog has no effect when syncookies are enabled (as per this link ) So as a next step I set the following in sysctl.conf: net.ipv4.tcp_syn_retries = 3
# default=5
net.ipv4.tcp_synack_retries = 3
# default=5
net.ipv4.tcp_max_syn_backlog = 65536
# default=1024
net.core.wmem_max = 8388608
# default=124928
net.core.rmem_max = 8388608
# default=131071
net.core.somaxconn = 512
# default = 128
net.core.optmem_max = 81920
# default = 20480 I then setup my response time test, ran sysctl -p and disabled syncookies by sysctl -w net.ipv4.tcp_syncookies=0 . After doing this the number of connections in the SYN_RECV state still remained around 220-250, but connections were starting to delay again. Once I noticed these delays I re-enabled syncookies and the delays stopped. I believe what I was seeing was still an improvement from the initial state, however some requests were still delayed which is much worse than having syncookies enabled. So it looks like I'm stuck with them enabled until we can get some more servers online to cope with the load. Even then, I'm not sure I see a valid reason to disable them again as they're only sent (apparently) when the server's buffers get full. But the syn backlog doesn't appear to be full with only ~250 connections in the SYN_RECV state! Is it possible that the SYN flooding message is a red herring and it's something other than the syn_backlog that's filling up? If anyone has any other tuning options I haven't tried yet I'd be more than happy to try them out, but I'm starting to wonder if the syn_backlog setting isn't being applied properly for some reason. | So, this is a neat question. Initially, I was surprised that you saw any connections in SYN_RECV state with SYN cookies enabled. The beauty of SYN cookies is that you can statelessly participate in the in TCP 3-way handshake as a server using cryptography, so I would expect the server not to represent half-open connections at all because that would be the very same state that isn't being kept. In fact, a quick peek at the source (tcp_ipv4.c) shows interesting information about how the kernel implements SYN cookies. Essentially, despite turning them on, the kernel behaves as it would normally until its queue of pending connections is full. This explains your existing list of connections in SYN_RECV state. Only when the queue of pending connections is full, AND another SYN packet (connection attempt) is received, AND it has been more than a minute since the last warning message, does the kernel send the warning message you have seen ("sending cookies"). SYN cookies are sent even when the warning message isn't; the warning message is just to give you a heads up that the issue hasn't gone away. Put another way, if you turn off SYN cookies, the message will go away. That is only going to work out for you if you are no longer being SYN flooded. To address some of the other things you've done: net.ipv4.tcp_synack_retries : Increasing this won't have any positive effect for those incoming connections that are spoofed, nor for any that receive a SYN cookie instead of server-side state (no retries for them either). For incoming spoofed connections, increasing this increases the number of packets you send to a fake address, and possibly the amount of time that that spoofed address stays in your connection table (this could be a significant negative effect). Under normal load / number of incoming connections, the higher this is, the more likely you are to quickly / successfully complete connections over links that drop packets. There are diminishing returns for increasing this. net.ipv4.tcp_syn_retries : Changing this cannot have any effect on inbound connections (it only affects outbound connections) The other variables you mention I haven't researched, but I suspect the answers to your question are pretty much right here. If you aren't being SYN flooded and the machine is responsive to non-HTTP connections (e.g. 
SSH) I think there is probably a network problem, and you should have a network engineer help you look at it. If the machine is generally unresponsive even when you aren't being SYN flooded, it sounds like a serious load problem if it affects the creation of TCP connections (pretty low level and resource non-intensive) | {
"source": [
"https://serverfault.com/questions/294209",
"https://serverfault.com",
"https://serverfault.com/users/78116/"
]
} |
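For this kind of diagnosis the kernel's own listen/SYN counters are often more telling than counting SYN_RECV sockets; a hedged example of what to watch (the exact counter wording varies slightly between kernel versions):

# cumulative listen-queue overflows and dropped SYNs since boot -- run twice and compare
netstat -s | egrep -i 'listen|SYN'
# live count of half-open connections to port 80, as in the question
watch -n 5 'netstat -tan | grep ":80 " | grep -c SYN_RECV'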
294,218 | I have a long running batch process that outputs some debug and process information to stdout.
If I just run from a terminal I can keep track of 'where it is' but then the data gets too much and scrolls off the screen. If I redirect to output to a file '> out.txt' I get the whole output eventually but it is buffered so I can no longer see what it is doing right now. Is there a way to redirect the output but make it not buffer its writes? | You can explicitly set the buffering options of the standard streams using a setvbuf call in C (see this link ), but if you're trying to modify the behaviour of an existing program try stdbuf (part of coreutils starting with version 7.5 apparently). This buffers stdout up to a line: stdbuf -oL command > output This disables stdout buffering altogether: stdbuf -o0 command > output | {
"source": [
"https://serverfault.com/questions/294218",
"https://serverfault.com",
"https://serverfault.com/users/57861/"
]
} |
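A common way to get both things the question asks for — watch the output live and keep a copy on disk — is to pair stdbuf with tee (the command name is a placeholder):

stdbuf -oL ./long_batch_job | tee out.txt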
294,423 | In Apache2 is it possible to set multiple ServerNames in one VHost? I want to setup a "wiki" vhost for an internal wiki. My network has a ".lan" suffix. How do I get Apache to answer both "wiki" and "wiki.lan" on the same vhost? | Use both the ServerName and ServerAlias directives in your virtualhost definition. You would do something like: <VirtualHost *:80>
Servername wiki.lan
ServerAlias wiki
[...]
</Virtualhost> See Apache Docs – ServerAlias Directive . | {
"source": [
"https://serverfault.com/questions/294423",
"https://serverfault.com",
"https://serverfault.com/users/63619/"
]
} |
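To check that both names hit the same vhost without touching DNS, you can force the Host header with curl (the server IP below is illustrative):

curl -s -H 'Host: wiki' http://192.0.2.10/ | head
curl -s -H 'Host: wiki.lan' http://192.0.2.10/ | head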
294,622 | I've puzzled and debated over this for a while, so it's time to ask the community. What is the correct accepted pronunciation of Nagios , or at least the most common pronunciation? This topic is addressed in the project's FAQ , but the linked .mp3 of the pronunciation has been missing for some time. | From the Nagios knowledge base FAQ: I pronounce Nagios as: nah-ghee-ose At least I think that's how I pronounce it (damn phonetic spelling)... The "Na" sounds like "Nah", "gi" sounds like the first part of "geese", and "os" sounds like the last part of "verbose". You can pronounce it however the heck you'd like. Alternative pronounciations vary. One that I liked is "nachos". Mmmmmm.... nachos. Answer originally linked to an mp3 audio file which contained the pronunciation as spoken by Ethan, the author of Nagios. File is no longer hosted by Nagios . | {
"source": [
"https://serverfault.com/questions/294622",
"https://serverfault.com",
"https://serverfault.com/users/66038/"
]
} |
294,635 | So here's the setup. I've got a Rails 3 application deployed to two servers, both
running Apache2, both with identical VirtualHost configs, both operating on Passenger.
There are a few routes in the Rails application that require requests to be done on SSL,
so I've defined those routes with :protocol => 'https as necessary. These two servers are part of a load-balancing pool on our BigIP load balancer, with
one profile setup to handle port 80 traffic, and another to handle port 443 traffic.
We've purchased a cert and we've loaded it onto the BigIP box, as well as setup a
profile for the cert that's assigned to the :443 profile. My Apache configs on each server identically define ServerName , DocumentRoot , SetEnv (for my Rails environment), and all that jazz inside a <VirtualHost *:80 *:443> declaration (note that in mucking with these files, removing the *:443 bit changed
absolutely nothing). There's nothing really out of the ordinary there. When browsing to this site on port 80, traffic passes through just fine and it hits
the Rails application. When browsing to the login page, which requires HTTPS, the
browser will just sit there and try to contact the page. Eventually my browser gives
me a server unexpectedly dropped the connection error. My question is this: how does BigIP send SSL traffic to the servers in its pool, and
how is Apache supposed to recognize that? I don't even get entries in my Apache logs
that the traffic even hits the two backend servers. Is there something I need to modify
with a Passenger configuration somewhere to allow this traffic? If there's more info needed than what I've posted already, let me know and I'll append it
to this question. It appears I'm greener at this kind of stuff than I thought! Also; since I feel really kinda dumb about this stuff, what's a great resource to help me learn about how web servers handle SSL requests? | From the Nagios knowledge base FAQ: I pronounce Nagios as: nah-ghee-ose At least I think that's how I pronounce it (damn phonetic spelling)... The "Na" sounds like "Nah", "gi" sounds like the first part of "geese", and "os" sounds like the last part of "verbose". You can pronounce it however the heck you'd like. Alternative pronounciations vary. One that I liked is "nachos". Mmmmmm.... nachos. Answer originally linked to an mp3 audio file which contained the pronunciation as spoken by Ethan, the author of Nagios. File is no longer hosted by Nagios . | {
"source": [
"https://serverfault.com/questions/294635",
"https://serverfault.com",
"https://serverfault.com/users/71734/"
]
} |
294,645 | I've been learning Rails for the last few days and during this period, I've tested out Heroku and it's great to just do a "git push heroku" and the entire application is up and running. The problem is that I already have a VPS and I'd like the similar deployment method. How would I do this? Which web server is the best to use? My issue isn't performance - I just want fast and easy deployment. Is this even possible? | From the Nagios knowledge base FAQ: I pronounce Nagios as: nah-ghee-ose At least I think that's how I pronounce it (damn phonetic spelling)... The "Na" sounds like "Nah", "gi" sounds like the first part of "geese", and "os" sounds like the last part of "verbose". You can pronounce it however the heck you'd like. Alternative pronounciations vary. One that I liked is "nachos". Mmmmmm.... nachos. Answer originally linked to an mp3 audio file which contained the pronunciation as spoken by Ethan, the author of Nagios. File is no longer hosted by Nagios . | {
"source": [
"https://serverfault.com/questions/294645",
"https://serverfault.com",
"https://serverfault.com/users/88056/"
]
} |
294,661 | I would like a user to have sudo rights (without password check) to a couple of shell scripts under a specific directory (in my case, /usr/local/tomcat7/bin ), and to nowhere else. What's the simplest way to accomplish this? Something like this in /etc/sudoers didn't seem to work: jsmith ALL=(ALL) NOPASSWD: /usr/local/tomcat7/bin | I think you are almost there. put a / at the end of your directory spec jsmith ALL=(ALL) NOPASSWD: /usr/local/tomcat7/bin/ From the sudoers man page A directory is a fully qualified path name ending in a '/'. When you
specify a directory in a Cmnd_List, the user will be able to run any
file within that directory (but not in any subdirectories therein). | {
"source": [
"https://serverfault.com/questions/294661",
"https://serverfault.com",
"https://serverfault.com/users/1746/"
]
} |
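After editing, it is worth validating the syntax and then checking what the rule actually grants; both commands are standard sudo tooling.

visudo -cf /etc/sudoers      # syntax check of the file (run as root)
sudo -l -U jsmith            # list what jsmith may now run without a password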
294,761 | What is LXC? For what it is useful? What are the differences between LXC and common virtualization? | If by "Plain English" you mean untechnical people, the difference can't be explained easily. That hair is too fine to split without very careful consideration. If by "Plain English" you mean managerial types who talk to technical people, and thus have at least a passing understanding of technical topics, I submit the following verbage: It is a different form of virtualization. If you look at VMWare ESXi, that's a full hypervisor running what is called full virtualization. There is a very small layer between the virtualized systems running on top of the hardware. There is full hardware virtualization, where the OS running in the virtual machine is fully independent from the hypervisor itself and is presented with all the hardware it is expecting. Take another step up, and look at something like VMWare Player, Workstation, ESX (not ESXi), or VMWare Server, and you have a full operating system providing the hypervisor role. However, virtual machines are still presented with a full array of virtual hardware. Another approach is para-virtualization, which Xen followed for quite some time. In this form of virtualization, the guest operating system is aware that it is virtualized and has been modified to work in that environment. Sometimes all this needs is special para-virtualization drivers. Other times, outright kernel changes are needed. LXC, or Linux Containers, is yet another step up. In this case it is running multiple instances of the exact same operating system . The kernel may be the same, but multiple userspaces are running for each OS container. Each container may or may not have a different file-system. Containers offer a way to provide strong security separation between processes in a way that isn't available in systems that have the same userspace. Unix-like operating systems have had the 'chroot jail' for quite some time, but it doesn't provide process separation or an ability to limit the resources consumed by processes in the jail. By containerizing such processes, resource usage can be limited, discrete IP addresses can be assigned to them, and security vulnerabilities exploiting userspace are contained from the rest of the system. Where would you use LXC versus some other type of virtualization? It depends, but LXC should provide less virtualization-penalty than any other vitualization method as it is the same kernel mediating all userspace calls rather than a hypervisor pretending to be hardware to a bunch of OS images expecting to talk to physical hardware. So if you have a bunch of processing that needs the same OS version, and can be rebooted at the same time for updates, LXC could provide a low-cost way to run all of that securely and with resource management. | {
"source": [
"https://serverfault.com/questions/294761",
"https://serverfault.com",
"https://serverfault.com/users/83957/"
]
} |
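A minimal sketch of the container workflow described in the answer above, using the classic lxc-* userspace tools (container name, template and the memory limit are illustrative, and template availability differs per distribution):

lxc-create -n web01 -t ubuntu                      # build a container from a distro template
lxc-start -n web01 -d                              # boot it in the background
lxc-cgroup -n web01 memory.limit_in_bytes 512M     # cap its memory via cgroups
lxc-console -n web01                               # attach to its console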
294,787 | What is the command to sync a Windows workstation or server to its configured time source? | As Kyle said w32tm /resync is the modern way to do this. See this Link to Microsoft Knowledgebase (KB 307897) for more information on the w32tm command. There is also net time which is an older version but perhaps easier. | {
"source": [
"https://serverfault.com/questions/294787",
"https://serverfault.com",
"https://serverfault.com/users/2561/"
]
} |
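A slightly fuller sequence, for the case where the time source also needs to be (re)configured before the resync (the peer name is a placeholder):

w32tm /config /syncfromflags:manual /manualpeerlist:"time.example.com" /update
w32tm /resync
w32tm /query /status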
295,285 | this is somewhat of a mystery to me. The only way I can connect to MySQL is if I call it via "127.0.0.1" ... for example, my PHP connect script will NOT work with localhost I'm running Mac OS X Lion, built-in apache2, MySQL, PHP, phpMyAdmin mysqladmin: count 0
debug-check FALSE
debug-info TRUE
force FALSE
compress FALSE
character-sets-dir (No default value)
default-character-set auto
host (No default value)
no-beep FALSE
port 0
relative FALSE
socket (No default value)
sleep 0
ssl FALSE
ssl-ca (No default value)
ssl-capath (No default value)
ssl-cert (No default value)
ssl-cipher (No default value)
ssl-key (No default value)
ssl-verify-server-cert FALSE
user (No default value)
verbose FALSE
vertical FALSE
connect-timeout 43200
shutdown-timeout 3600
plugin-dir (No default value)
default-auth (No default value) | MySQL will try to connect to the unix socket if you tell it to connect to "localhost". If you tell it to connect to 127.0.0.1 you are forcing it to connect to the network socket. So probably you have MySQL configured to only listen to the network socket and not to the file system socket. What exactly is wrong with your unix socket is hard to tell. But I recommend you to read this page on the MySQL reference guide. This should help you. UPDATE:
Based on the updated question: The parameter "socket" should be something like this: "/var/lib/mysql/mysql.sock". This page in the Reference Manual has some more information. Here you have the beginning of my /etc/my.cnf file: [mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock Your file should be similar. Then your problem should be solved. Don't forget to restart the MySQL server before you test it. | {
"source": [
"https://serverfault.com/questions/295285",
"https://serverfault.com",
"https://serverfault.com/users/79621/"
]
} |
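Two quick checks that make the socket-versus-TCP distinction from the answer above visible on the command line:

# the server reports where it expects its unix socket to live
mysql -h 127.0.0.1 -u root -p -e "SHOW VARIABLES LIKE 'socket'"
# force each connection type explicitly and compare behaviour
mysql --protocol=SOCKET -u root -p
mysql --protocol=TCP -h 127.0.0.1 -u root -p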
295,288 | I have an instance of MySQL server. I cannot log in. I did the following: Cleaned out some logs with > xxxx.log Restarted mysql with mysqld restart I did NOT use --user=mysql , but that is in my.cnf Now I cant log in as ANY users. How can I get this up and running the easiest and most correct way, without losing all the logins? | MySQL will try to connect to the unix socket if you tell it to connect to "localhost". If you tell it to connect to 127.0.0.1 you are forcing it to connect to the network socket. So probably you have MySQL configured to only listen to the network socket and not to the file system socket. What exactly is wrong with your unix socket is hard to tell. But I recommend you to read this page on the MySQL reference guide. This should help you. UPDATE:
Based on the updated question: The parameter "socket" should be something like this: "/var/lib/mysql/mysql.sock". This page in the Reference Manual has some more information. Here you have the beginning of my /etc/my.cnf file: [mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock Your file should be similar. Then your problem should be solved. Don't forget to restart the MySQL server before you test it. | {
"source": [
"https://serverfault.com/questions/295288",
"https://serverfault.com",
"https://serverfault.com/users/84345/"
]
} |
295,565 | I am using PSCP to upload some files from Windows to Linux. I can do it fine just uploading one file at a time. But I have some very large directories and I want to upload an entire directory at once. I have tried: pscp -i C:\sitedeploy\abt-keypair.ppk includes\* [email protected]:/usr/local/tomcat/webapps/ROOT/includes/* Throws error: "pscp: remote filespec /usr/local/tomcat/webapps/ROOT/includes/*: not a directory" and pscp -i C:\sitedeploy\abt-keypair.ppk includes\ [email protected]:/usr/local/tomcat/webapps/ROOT/includes/ Throws error: "scp: includes: not a regular file" and pscp -i C:\sitedeploy\abt-keypair.ppk includes [email protected]:/usr/local/tomcat/webapps/ROOT/includes Throws error: "scp: includes: not a regular file" | Two problems: First, the * does not go on the destination side. Second, -r is for copying an entire directory and subdirectories. pscp -i C:\sitedeploy\abt-keypair.ppk includes\* [email protected]:/usr/local/tomcat/webapps/ROOT/includes/ Will copy all of the files in the local includes\ directory to the .../includes/ directory on the server. pscp -r -i C:\sitedeploy\abt-keypair.ppk includes\ [email protected]:/usr/local/tomcat/webapps/ROOT/ Will copy the includes\ directory itself, including all files and subdirectories, to the .../ROOT/ directory on the server (where the contents of the local directory would merge with any existing .../ROOT/includes/ directory. | {
"source": [
"https://serverfault.com/questions/295565",
"https://serverfault.com",
"https://serverfault.com/users/82160/"
]
} |
295,584 | I am getting a memory error in a php cron job: Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 71 bytes) in /opt/matrix/core/lib/DAL/DAL.inc on line 830 The applicable parts of the crontab are: $ sudo crontab -u www-data -l
MAILTO=root
# m h dom mon dow command
*/15 * * * * php /opt/matrix/core/cron/run.php /opt/matrix I am running on Debian Squeeze, fully updated. The obvious solution would be that the cli has a low memory limit (of 64MB). However, /etc/php5/cli/php.ini says it's unlimited. $ cat /etc/php5/cli/php.ini | grep memory_limit
memory_limit = -1 I read somewhere that it could be different for different users, and since the process is running as www-data, i ran: $ sudo -u www-data -s
$ php -i | grep memory_limit
memory_limit => -1 => -1
suhosin.memory_limit => 0 => 0 Even the apache/php.ini has a higher limit than the error is claiming: $ sudo cat /etc/php5/apache2/php.ini | grep memory_limit
memory_limit = 128M What am I missing? Where is this memory limit? | IIRC, an unlimited memory_limit isn't supported by the CLI (I'll try to find a source for this) but for now, try passing it into the command: php -d memory_limit=128M my_script.php UPDATE Apparently I was dreaming about the unlimited memory_limit not being supported for php cli. Regardless, it looks like the value from the ini is ignored. The simplest solution should then be to specifically set it in the php command calling the script. UPDATE2 To answer the question of where the memory limit is coming from, it's most likely being set in the script itself using 'ini_set'. | {
"source": [
"https://serverfault.com/questions/295584",
"https://serverfault.com",
"https://serverfault.com/users/25447/"
]
} |
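Two quick checks that usually settle where the 64M value is coming from (paths follow the question):

# what the CLI really ends up with after all ini files are read
php -r 'var_dump(ini_get("memory_limit"));'
# is the application overriding it at runtime, as suggested in UPDATE2?
grep -rn "ini_set.*memory_limit" /opt/matrix | head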
295,768 | I have two public keys, one for some servers and one for others. How do I specify which key to use when connecting to a server? | Assuming you're on a Unix/Linux environment, you can create or edit the file ~/.ssh/config . That config file allows you to establish the parameters to use for each host; so, for example: Host host1
HostName <hostname_or_ip>
IdentityFile ~/.ssh/identity_file1
Host host2
HostName <hostname_or_ip2>
User differentusername
IdentityFile ~/.ssh/identity_file2 Note that host1 and host2 need not be real hostnames; they are simply labels used to identify a server. Now you can log on to the two hosts with: ssh host1
ssh host2 | {
"source": [
"https://serverfault.com/questions/295768",
"https://serverfault.com",
"https://serverfault.com/users/82219/"
]
} |
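For a one-off connection you can also pass the key explicitly instead of editing the config file (the hostname is illustrative):

ssh -i ~/.ssh/identity_file1 user@host1.example.com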
295,975 | I'm running httpd on linux. I have a folder ( /data/ ) that is not in the apache web directory ( /var/www/html/ ) that I would like users to be able to access from their browser. I don't want to move this folder. How do I make files in this folder accessible to a web browser when the folder is outside the apache web folder? | You can use mod_alias to do this quite simply Alias /data /data/outside/documentroot
<Directory /data/outside/documentroot>
Order allow,deny
Allow from all
</Directory> Would redirect urls like http://example.com/data/file1.dat to the file /data/outside/documentroot/file1.dat | {
"source": [
"https://serverfault.com/questions/295975",
"https://serverfault.com",
"https://serverfault.com/users/87190/"
]
} |
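On Apache 2.4 and later the access-control directives differ; the rough equivalent of the 2.2-style block in the answer above would be:

Alias /data /data/outside/documentroot
<Directory /data/outside/documentroot>
    Require all granted
</Directory>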
295,999 | I want to write a bash or perl script to install a number of packages on my debian based machine. I want it to be something like : aptitude install package1
aptitude install package2 But, I do not know how how to automatically say "yes" through the script at the prompt to confirm you want to install that package. Can someone give me an example in perl and bash ? gratz! | aptitude install -y package1 package2 package3 | {
"source": [
"https://serverfault.com/questions/295999",
"https://serverfault.com",
"https://serverfault.com/users/86280/"
]
} |
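Wrapped into a small script, and made fully non-interactive so that no package's own debconf prompts stall the run (package names are placeholders), it could look like this:

#!/bin/bash
set -e
export DEBIAN_FRONTEND=noninteractive
aptitude -y install package1 package2 package3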
296,156 | HI I am going to install MyBB but I am not sure whether I have installed the correct version of PHP and MySQL. PHP version 5.1.0 or above with XML Extension installed MySQL version 4.0 or above How to check that? Especiall the PHP XML Extension? Is there simpler way than the <?php phpinfo() ?> solution? I am expecting a command line solution. Thanks a lot! | Do it from your command line: php -v
mysql -V and: php -i | grep -i '^libxml' OR Put this in your root directory: <?php
phpinfo();
?> Save it as phpinfo.php and point your browser to it (this could be http://localhost/phpinfo.php ) | {
"source": [
"https://serverfault.com/questions/296156",
"https://serverfault.com",
"https://serverfault.com/users/86954/"
]
} |
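The XML extension specifically can also be checked straight from the command line, without phpinfo():

php -m | grep -i xml
php -r 'var_dump(extension_loaded("xml"));'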
296,552 | How do I measure IOPS of a running Linux server? I know that the theoretical IOPS of a SATA drive is around 90 and enterprise 10k SAS/FC disk is 180. I want to know how much my running system is using currently? Currently I am using iotop and iostat. But both utilities do not give the IOPS number. btw, this question is not a duplicate of this . I am not looking for benchmarking my storage system, but figure out how much IOPS is being used by my current system. | Uhm... iostat on my system shows the IOPS: Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 1.00 64.00 0.00 64 0 Might want to look at upgrading. | {
"source": [
"https://serverfault.com/questions/296552",
"https://serverfault.com",
"https://serverfault.com/users/2622/"
]
} |
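The extended statistics split that figure into reads and writes per second, which is usually what you want to compare against the per-disk estimates in the question:

iostat -dx 5
# r/s + w/s per device is roughly the IOPS currently being serviced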
296,603 | I realized today that I fundamentally don't understand how port communication works. If I fire up an instance of a webserver listening on port 80, it can respond to many requests from many different browser tabs, all communicating over port 80. However, I cannot start up two instances of the server, both listening on port 80, as it results in a port conflict. I've always taken this as a given, (only one process can bind to a specific port at any given time) without ever really thinking it through -- aren't there multiple processes communicating on port 80? (ie., each of the tabs running in the browser?) | Basically, only one process can LISTEN on a port at a time (technically, one socket is dedicated to listening). But, a port can handle many sockets transferring data, a socket is a combination of local IP / port and remote IP address / remote port. In that way, once the server accepts the incoming connection while LISTENing it opens a new socket dedicated to that conversation and hands the processing off to something else, then goes back to LISTENing. More detail here . | {
"source": [
"https://serverfault.com/questions/296603",
"https://serverfault.com",
"https://serverfault.com/users/25972/"
]
} |
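You can see this on the web server itself: one socket sits in LISTEN on port 80, and there is one established socket per browser connection, each distinguished by its remote address and port.

netstat -tan | grep ':80 ' | awk '{print $6}' | sort | uniq -c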
296,970 | We've got seperate environments at my workplace for development, testing, integration, and staging. Within those envs, we've overloaded the hostnames in DNS - e.g. in the dev environment, the primary web machine is called web1.dev.example.com , and in the test environment, the primary web machine is web1.test.example.com . To distinguish between machines in the different environments, I want to customise the bash prompts to display the FQDN rather than just the hostname. Well and good; I should be able to replace \h with \H in $PS1 , right? Hmm. They show the exact same thing. me@web1:~$ hostname
web1
me@web1:~$ hostname -f
web1.dev.example.com
me@web1:~$ export PS1="\[\u@\h: \w\]\$ "
me@web1: ~$ export PS1="\[\u@\H: \w\]\$ "
me@web1: ~$ In /etc/hostname , I've got just the hostname ( web1 ). hostname and hostname -f both return the correct results ("web1" and "web1.test.example.com" respectively), and I've got the correct entries in /etc/hosts . What gives? These are Ubuntu 10.04 hosts, if that makes a difference. | Try using an explicit call to hostname -f to get the fqdn of the system export PS1="\[\u@$(hostname -f): \w\]\$ " e.g. iain$ export PS1="\[\u@$(hostname -f): \w\]\$ "
[email protected]: ~$ EDIT: Further research shows that the contents of /etc/hostname (Ubuntu) and /etc/sysconfig/network (CentOS) are relevant. If the FQDN is in the file then the \H works correctly. The hostname(1) man page for Ubuntu does though say that you shouldn't put the FQDN in /etc/hostname but gives no reason as to why. | {
"source": [
"https://serverfault.com/questions/296970",
"https://serverfault.com",
"https://serverfault.com/users/6565/"
]
} |
297,029 | I have several TBs of very valuable personal data in a zpool which I can not access due to data corruption. The pool was originally set up back in 2009 or so on a FreeBSD 7.2 system running inside a VMWare virtual machine on top of a Ubuntu 8.04 system. The FreeBSD VM is still available and running fine, only the host OS has now changed to Debian 6. The hard drives are made accessible to the guest VM by means of VMWare generic SCSI devices, 12 in total. There are 2 pools: zpool01: 2x 4x 500GB zpool02: 1x 4x 160GB The one that works is empty, the broken one holds all the important data: [user@host~]$ uname -a
FreeBSD host.domain 7.2-RELEASE FreeBSD 7.2-RELEASE #0: \
Fri May 1 07:18:07 UTC 2009 \
[email protected]:/usr/obj/usr/src/sys/GENERIC amd64
[user@host ~]$ dmesg | grep ZFS
WARNING: ZFS is considered to be an experimental feature in FreeBSD.
ZFS filesystem version 6
ZFS storage pool version 6
[user@host ~]$ sudo zpool status
pool: zpool01
state: UNAVAIL
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zpool01 UNAVAIL 0 0 0 insufficient replicas
raidz1 UNAVAIL 0 0 0 corrupted data
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da7 ONLINE 0 0 0
da8 ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
da4 ONLINE 0 0 0
pool: zpool02
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zpool02 ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da9 ONLINE 0 0 0
da10 ONLINE 0 0 0
da11 ONLINE 0 0 0
da12 ONLINE 0 0 0
errors: No known data errors I was able to access the pool a couple of weeks ago. Since then, I had to replace pretty much all of the hardware of the host machine and install several host operating systems. My suspicion is that one of these OS installations wrote a bootloader (or whatever) to one (the first ?) of the 500GB drives and destroyed some zpool metadata (or whatever) - 'or whatever' meaning that this is just a very vague idea and that subject is not exactly my strong side... There is plenty of websites, blogs, mailing lists, etc. about ZFS. I post this question here in the hope that it helps me gather enough information for a sane, structured, controlled, informed, knowledgeable approach to get my data back - and hopefully help someone else out there in the same situation. The first search result when googling for 'zfs recover' is the ZFS Troubleshooting and Data Recovery chapter from Solaris ZFS Administration Guide. In the first ZFS Failure Modes section, it says in the 'Corrupted ZFS Data' paragraph: Data corruption is always permanent and requires special consideration during repair. Even if the underlying devices are repaired or replaced, the original data is lost forever. Somewhat disheartening. However, the second google search result is Max Bruning's weblog and in there, I read Recently, I was sent an email from someone who had 15 years of video and music stored in a 10TB ZFS pool that, after a power failure, became defective. He unfortunately did not have a backup. He was using ZFS version 6 on FreeBSD 7
[...]
After spending about 1 week examining the data on the disk, I was able to restore basically all of it. and As for ZFS losing your data, I doubt it. I suspect your data is there, but you need to find the right way to get at it. (that sounds so much more like something I wanna hear...) First step : What exactly is the problem ? How can I diagnose why exactly the zpool is reported as corrupted ? I see there is zdb which doesn't seem to be officially documented by Sun or Oracle anywhere on the web. From its man page: NAME
zdb - ZFS debugger
SYNOPSIS
zdb pool
DESCRIPTION
The zdb command is used by support engineers to diagnose failures and
gather statistics. Since the ZFS file system is always consistent on
disk and is self-repairing, zdb should only be run under the direction
by a support engineer.
If no arguments are specified, zdb, performs basic consistency checks
on the pool and associated datasets, and report any problems detected.
Any options supported by this command are internal to Sun and subject
to change at any time. Further, Ben Rockwood has posted a detailed article and there is a video of Max Bruning talking about it (and mdb) at the Open Solaris Developer Conference in Prague on June 28, 2008. Running zdb as root on the broken zpool gives the following output: [user@host ~]$ sudo zdb zpool01
version=6
name='zpool01'
state=0
txg=83216
pool_guid=16471197341102820829
hostid=3885370542
hostname='host.domain'
vdev_tree
type='root'
id=0
guid=16471197341102820829
children[0]
type='raidz'
id=0
guid=48739167677596410
nparity=1
metaslab_array=14
metaslab_shift=34
ashift=9
asize=2000412475392
children[0]
type='disk'
id=0
guid=4795262086800816238
path='/dev/da5'
whole_disk=0
DTL=202
children[1]
type='disk'
id=1
guid=16218262712375173260
path='/dev/da6'
whole_disk=0
DTL=201
children[2]
type='disk'
id=2
guid=15597847700365748450
path='/dev/da7'
whole_disk=0
DTL=200
children[3]
type='disk'
id=3
guid=9839399967725049819
path='/dev/da8'
whole_disk=0
DTL=199
children[1]
type='raidz'
id=1
guid=8910308849729789724
nparity=1
metaslab_array=119
metaslab_shift=34
ashift=9
asize=2000412475392
children[0]
type='disk'
id=0
guid=5438331695267373463
path='/dev/da1'
whole_disk=0
DTL=198
children[1]
type='disk'
id=1
guid=2722163893739409369
path='/dev/da2'
whole_disk=0
DTL=197
children[2]
type='disk'
id=2
guid=11729319950433483953
path='/dev/da3'
whole_disk=0
DTL=196
children[3]
type='disk'
id=3
guid=7885201945644860203
path='/dev/da4'
whole_disk=0
DTL=195
zdb: can't open zpool01: Invalid argument I suppose the 'invalid argument' error at the end occurs because the zpool01 does not actually exist: It doesn't occur on the working zpool02, but there doesn't seem to be any further output either... OK, at this stage, it is probably better to post this before the article gets too long. Maybe someone can give me some advice on how to move forward from here and while I'm waiting for a response, I'll watch the video, go through the details of the zdb output above, read Bens article and try to figure out what's what... 20110806-1600+1000 Update 01: I think I have found the root cause: Max Bruning was kind enough to respond to an email of mine very quickly, asking for the output of zdb -lll . On any of the 4 hard drives in the 'good' raidz1 half of the pool, the output is similar to what I posted above. However, on the first 3 of the 4 drives in the 'broken' half, zdb reports failed to unpack label for label 2 and 3. The fourth drive in the pool seems OK, zdb shows all labels. Googling that error message brings up this post . From the first response to that post: With ZFS, that are 4 identical labels on each
physical vdev, in this case a single hard drive.
L0/L1 at the start of the vdev, and
L2/L3 at the end of the vdev. All 8 drives in the pool are of the same model, Seagate Barracuda 500GB . However, I do remember I started the pool with 4 drives, then one of them died and was replaced under warranty by Seagate. Later on, I added another 4 drives. For that reason, the drive and firmware identifiers are different: [user@host ~]$ dmesg | egrep '^da.*?: <'
da0: <VMware, VMware Virtual S 1.0> Fixed Direct Access SCSI-2 device
da1: <ATA ST3500418AS CC37> Fixed Direct Access SCSI-5 device
da2: <ATA ST3500418AS CC37> Fixed Direct Access SCSI-5 device
da3: <ATA ST3500418AS CC37> Fixed Direct Access SCSI-5 device
da4: <ATA ST3500418AS CC37> Fixed Direct Access SCSI-5 device
da5: <ATA ST3500320AS SD15> Fixed Direct Access SCSI-5 device
da6: <ATA ST3500320AS SD15> Fixed Direct Access SCSI-5 device
da7: <ATA ST3500320AS SD15> Fixed Direct Access SCSI-5 device
da8: <ATA ST3500418AS CC35> Fixed Direct Access SCSI-5 device
da9: <ATA SAMSUNG HM160JC AP10> Fixed Direct Access SCSI-5 device
da10: <ATA SAMSUNG HM160JC AP10> Fixed Direct Access SCSI-5 device
da11: <ATA SAMSUNG HM160JC AP10> Fixed Direct Access SCSI-5 device
da12: <ATA SAMSUNG HM160JC AP10> Fixed Direct Access SCSI-5 device I do remember though that all drives had the same size. Looking at the drives now, it shows that the size has changed for three of them, they have shrunk by 2 MB: [user@host ~]$ dmesg | egrep '^da.*?: .*?MB '
da0: 10240MB (20971520 512 byte sectors: 255H 63S/T 1305C)
da1: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da2: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da3: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da4: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da5: 476938MB (976771055 512 byte sectors: 255H 63S/T 60801C) <--
da6: 476938MB (976771055 512 byte sectors: 255H 63S/T 60801C) <--
da7: 476938MB (976771055 512 byte sectors: 255H 63S/T 60801C) <--
da8: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da9: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C)
da10: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C)
da11: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C)
da12: 152627MB (312581808 512 byte sectors: 255H 63S/T 19457C) So by the looks of it, it was not one of the OS installations that 'wrote a bootloader to one of the drives' (as I had assumed before), it was actually the new motherboard (an ASUS P8P67 LE ) creating a 2 MB host protected area at the end of three of the drives which messed up my ZFS metadata. Why did it not create an HPA on all drives ? I believe this is because the HPA creation is only done on older drives with a bug that was fixed later on by a Seagate hard drive BIOS update: When this entire incident began a couple of weeks ago, I ran Seagate's SeaTools to check if there is anything physically wrong with the drives (still on the old hardware) and I got a message telling me that some of my drives need a BIOS update. As I am now trying to reproduce the exact details of that message and the link to the firmware update download, it seems that since the motherboard created the HPA, both SeaTools DOS versions fail to detect the harddrives in question - a quick invalid partition or something similar flashes by when they start, that's it. Ironically, they do find a set of Samsung drives, though. (I've skipped the painful, time-consuming and ultimately fruitless details of screwing around in a FreeDOS shell on a non-networked system.) In the end, I installed Windows 7 on a separate machine in order to run the SeaTools Windows version 1.2.0.5. Just a last remark about DOS SeaTools: Don't bother trying to boot them standalone - instead, invest a couple of minutes and make a bootable USB stick with the awesome Ultimate Boot CD - which apart from DOS SeaTools also gets you many, many other really useful tools. When started, SeaTools for Windows brings up this dialog: The links lead to the Serial Number Checker (which for some reason is protected by a captcha - mine was 'Invasive users') and a knowledge base article about the firmware update. There are probably further links specific to the hard drive model and some downloads and what not, but I won't follow that path for the moment: I won't rush into updating the firmware of three drives at a time that have truncated partitions and are part of a broken storage pool. That's asking for trouble. For starters, the firmware update most likely cannot be undone - and that might irrevocably ruin my chances to get my data back. Therefore, the very first thing I'm going to do next is image the drives and work with the copies, so there's an original to go back to if anything goes wrong. This might introduce an additional complexity, as ZFS will probably notice that drives were swapped (by means of the drive serial number or yet another UUID or whatever), even though it's bit-exact dd copies onto the same hard drive model. Moreover, the zpool is not even live. Boy, this might get tricky. The other option however would be to work with the originals and keep the mirrored drives as backup, but then I'll probably run into the above complexity if something goes wrong with the originals. Naa, not good. In order to clear out the three hard drives that will serve as imaged replacements for the three drives with the buggy BIOS in the broken pool, I need to create some storage space for the stuff that's on there now, so I'll dig deep in the hardware box and assemble a temporary zpool from some old drives - which I can also use to test how ZFS deals with swapping dd'd drives. This might take a while... 20111213-1930+1100 Update 02: This did take a while indeed.
I've spent months with several open computer cases on my desk with various amounts of harddrive stacks hanging out and also slept a few nights with earplugs, because I could not shut down the machine before going to bed as it was running some lengthy critical operation. However, I prevailed at last! :-) I've also learned a lot in the process and I would like to share that knowledge here for anyone in a similar situation. This article is already much longer than anyone with a ZFS file server out of action has the time to read, so I will go into details here and create an answer with the essential findings further below. I dug deep in the obsolete hardware box to assemble enough storage space to move the stuff off the single 500GB drives to which the defective drives were mirrored. I also had to rip a few hard drives out of their USB cases, so I could connect them over SATA directly. There were some more unrelated issues involved and some of the old drives started to fail when I put them back into action, requiring a zpool replace, but I'll skip that. Tip: At some stage, there was a total of about 30 hard drives involved in this. With that much hardware, it is an enormous help to have them stacked properly; cables coming loose or a hard drive falling off your desk surely won't help in the process and might cause further damage to your data integrity. I spent a couple of minutes creating some make-shift cardboard hard drive fixtures which really helped to keep things sorted. Ironically, when I connected the old drives the first time, I realized there's an old zpool on there I must have created for testing with an older version of some, but not all, of the personal data that's gone missing, so while the data loss was somewhat reduced, this meant additional shifting back and forth of files. Finally, I mirrored the problematic drives to backup drives, used those for the zpool and left the original ones disconnected. The backup drives have a newer firmware; at least SeaTools does not report any required firmware updates. I did the mirroring with a simple dd from one device to the other, e.g. sudo dd if=/dev/sda of=/dev/sde I believe ZFS does notice the hardware change (by some hard drive UUID or whatever), but doesn't seem to care. The zpool however was still in the same state, insufficient replicas / corrupted data. As noted in the HPA Wikipedia article mentioned earlier, the presence of a host protected area is reported when Linux boots and can be investigated using hdparm . As far as I know, there is no hdparm tool available on FreeBSD, but by this time I had FreeBSD 8.2 and Debian 6.0 installed as a dual-boot system anyway, so I booted into Linux: user@host:~$ for i in {a..l}; do sudo hdparm -N /dev/sd$i; done
...
/dev/sdd:
max sectors = 976773168/976773168, HPA is disabled
/dev/sde:
max sectors = 976771055/976773168, HPA is enabled
/dev/sdf:
max sectors = 976771055/976773168, HPA is enabled
/dev/sdg:
max sectors = 976771055/976773168, HPA is enabled
/dev/sdh:
max sectors = 976773168/976773168, HPA is disabled
... So the problem obviously was that the new motherboard created a HPA of a couple of megabytes at the end of the drive which 'hid' the upper two ZFS labels, i.e. prevented ZFS from seeing them. Dabbling with the HPA seems a dangerous business. From the hdparm man page, parameter -N: Get/set max visible number of sectors, also known as the Host Protected Area setting.
...
To change the current max (VERY DANGEROUS, DATA LOSS IS EXTREMELY LIKELY), a new value
should be provided (in base10) immediately following the -N option.
This value is specified as a count of sectors, rather than the "max sector address"
of the drive. Drives have the concept of a temporary (volatile) setting which is lost on
the next hardware reset, as well as a more permanent (non-volatile) value which survives
resets and power cycles. By default, -N affects only the temporary (volatile) setting.
To change the permanent (non-volatile) value, prepend a leading p character immediately
before the first digit of the value. Drives are supposed to allow only a single permanent
change per session. A hardware reset (or power cycle) is required before another
permanent -N operation can succeed.
... In my case, the HPA is removed like this: user@host:~$ sudo hdparm -Np976773168 /dev/sde
/dev/sde:
setting max visible sectors to 976773168 (permanent)
max sectors = 976773168/976773168, HPA is disabled and in the same way for the other drives with an HPA. If you get the wrong drive or something about the size parameter you specify is not plausible, hdparm is smart enough to figure: user@host:~$ sudo hdparm -Np976773168 /dev/sdx
/dev/sdx:
setting max visible sectors to 976773168 (permanent)
Use of -Nnnnnn is VERY DANGEROUS.
You have requested reducing the apparent size of the drive.
This is a BAD idea, and can easily destroy all of the drive's contents.
Please supply the --yes-i-know-what-i-am-doing flag if you really want this.
Program aborted. After that, I restarted the FreeBSD 7.2 virtual machine on which the zpool had been originally created and zpool status reported a working pool again. YAY! :-) I exported the pool on the virtual system and re-imported it on the host FreeBSD 8.2 system. Some more major hardware upgrades, another motherboard swap, a ZFS pool update to ZFS 4 / 15, a thorough scrubbing and now my zpool consists of 8x1TB plus 8x500GB raidz2 parts: [user@host ~]$ sudo zpool status
pool: zpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
zpool ONLINE 0 0 0
raidz2 ONLINE 0 0 0
ad0 ONLINE 0 0 0
ad1 ONLINE 0 0 0
ad2 ONLINE 0 0 0
ad3 ONLINE 0 0 0
ad8 ONLINE 0 0 0
ad10 ONLINE 0 0 0
ad14 ONLINE 0 0 0
ad16 ONLINE 0 0 0
raidz2 ONLINE 0 0 0
da0 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da7 ONLINE 0 0 0
errors: No known data errors
[user@host ~]$ df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/label/root 29G 13G 14G 49% /
devfs 1.0K 1.0K 0B 100% /dev
zpool 8.0T 3.6T 4.5T 44% /mnt/zpool As a last word, it seems to me ZFS pools are very, very hard to kill. The guys from Sun who created that system have every reason to call it the last word in filesystems. Respect! | The problem was that the new motherboard's BIOS created a host protected area (HPA) on some of the drives, a small section used by OEMs for system recovery purposes, usually located at the end of the harddrive. ZFS maintains 4 labels with partition meta information and the HPA prevents ZFS from seeing the upper two. Solution: Boot Linux and use hdparm to inspect and remove the HPA. Be very careful, as this can easily destroy your data for good. Consult the article and the hdparm man page (parameter -N) for details. The problem did not occur only with the new motherboard; I had a similar issue when connecting the drives to a SAS controller card. The solution is the same. | {
"source": [
"https://serverfault.com/questions/297029",
"https://serverfault.com",
"https://serverfault.com/users/26210/"
]
} |
297,061 | I am pretty new to awstats and have configured Awstats on my apache webserver to analyze nginx access logs(nginx webserver is for my django app), I am able to take the stats from LogFile=/var/log/nginx/access.log but how do I analyze multiple Logs that are gzip format. Such as access.log.1.gz...access.log.40.gz. I have a lot of logs to analyze. | What you probably want to do here is to analyze all these logfiles once, then keep analyzing only the current logfiles from then on. The simplest thing to do is unzip all those files into a single file, then have awstats run over it once, then point awstats at your access.log file from then on. awstats normally has a script called logresolvemerge.pl, which can read the compressed files, and will merge them appropriately for awstats to do analsyis. To merge all your existing ones, run perl /usr/share/awstats/tools/logresolvemerge.pl /var/log/nginx/access.log* > /tmp/nginx.tmplog This will probably take a while. You can then have awstats run once over this file (set LogFile appropriately). From then on, you should have awstats run over the most recent logfile - which is what your current configuration is doing. Depending on how often you are running awstats vs rotating nginx logfiles, you may want to have it read both the current logfile and the previous one. (eg, if you rotate nginx logfiles every day at 12, but have awstats run every day at 1, then whenever awstats runs the logfile will only contain what's been written since the last rotation). You can use logresolvemerge.pl inside your LogFile command like this: LogFile="/usr/share/awstats/tools/logresolvemerge.pl /var/log/nginx/access.log /var/log/nginx/access.log.1.gz |" This tells awstats to run the logresolvemerge.pl command with the two logfiles as parameters, and awstats will read in the output of that script (that's what the pipe | does) | {
"source": [
"https://serverfault.com/questions/297061",
"https://serverfault.com",
"https://serverfault.com/users/73524/"
]
} |
297,318 | I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web-server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second, that I don't use my host's DNS, and I decide to set up a DNS server on my server. Hypothetical scenario: my server (entire) server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too, considering that even if I had DNS up (it wasn't on the crashed server), it wouldn't be able to forward requests anyway since the server would be down? Is the point of having secondary DNS, to be able to change the IP addresses that your DNS server points to, so if your webserver was down, you could redirect traffic to a backup? How would you switch to the secondary provider, in the event that your main DNS provider becomes unavailable? Is a backup DNS system basically up all the time? How is it configured? Is it just an exact clone of the DNS server you would have on your server? Do they run simultaneously? Hopefully someone can see what I'm hung up on, and provide some guidance. | The major point in having a secondary DNS server is as backup in the event the primary DNS server handling your domain goes down. In this case, your server would be still up, and so without having a backup, nobody could get to your server possibly costing you lots of lost customers (i.e. REAL MONEY). A secondary DNS server is always up, and ready to serve. It can help balance the load on the network as there are now more than one authoritative place to get your information. Updates are generally performed automatically from the master DNS. Thus it is an exact clone of the master. Generally a DNS server contains more information than just a single server, it might contain mail routing information, information for many many hosts, mail spam keys, etc. So resilancy and redundancy are of DEFINITE benefit to domain holders. I hope this helps your understanding. | {
"source": [
"https://serverfault.com/questions/297318",
"https://serverfault.com",
"https://serverfault.com/users/90415/"
]
} |
297,328 | I have a VPS running Windows 2008 Web Edition 32bit.
Currently it is rebooting at random times every day, usually 2-3 times a day for no reason that I can see.
The hosting company assures me that the host server is fine and not causing the reboots on my node, but there's nothing whatsoever in the event viewer that would suggest there is any problem.
Before the server reboots, it generates a crash dump which I've run through the debugging tools - detailed below. Any help or insight, very much appreciated. BugCheck 101, {61, 0, 803cd120, 1} Probably caused by : Unknown_Image ( ANALYSIS_INCONCLUSIVE ) Followup: MachineOwner 0: kd> !analyze -v * Bugcheck Analysis * * CLOCK_WATCHDOG_TIMEOUT (101)
An expected clock interrupt was not received on a secondary processor in an
MP system within the allocated interval. This indicates that the specified
processor is hung and not processing interrupts.
Arguments:
Arg1: 00000061, Clock interrupt time out interval in nominal clock ticks.
Arg2: 00000000, 0.
Arg3: 803cd120, The PRCB address of the hung processor.
Arg4: 00000001, 0. Debugging Details: BUGCHECK_STR: CLOCK_WATCHDOG_TIMEOUT_2_PROC CUSTOMER_CRASH_COUNT: 3 DEFAULT_BUCKET_ID: DRIVER_FAULT_SERVER_MINIDUMP PROCESS_NAME: System CURRENT_IRQL: 1c STACK_TEXT: 8d107a60 81cc88e5 00000101 00000061 00000000 nt!KeBugCheckEx+0x1e
8d107a94 81cca67d 8d107d1b 000000d1 8d107b24 nt!KeUpdateRunTime+0xd5
8d107a94 81c61bae 8d107d1b 000000d1 8d107b24 nt!KeUpdateSystemTime+0xed
8d107b24 81c7894b 8d107b84 00000000 c3e6e000 nt!KeFlushMultipleRangeTb+0x11a
8d107c08 81c543f8 c3e6f000 0001c000 8d107d10 nt!MmSetAddressRangeModified+0x32b
8d107c98 81c554cb 84ea0344 00000000 00000001 nt!CcFlushCache+0x395
8d107cec 81c52645 84e94008 8d107d10 00000000 nt!CcWriteBehind+0x115
8d107d44 81cc2e22 84a6b978 00000000 84e6f840 nt!CcWorkerThread+0x11e
8d107d7c 81df2f7a 84a6b978 a77a75f2 00000000 nt!ExpWorkerThread+0xfd
8d107dc0 81c5befe 81cc2d25 80000000 00000000 nt!PspSystemThreadStartup+0x9d
00000000 00000000 00000000 00000000 00000000 nt!KiThreadStartup+0x16 STACK_COMMAND: kb SYMBOL_NAME: ANALYSIS_INCONCLUSIVE FOLLOWUP_NAME: MachineOwner MODULE_NAME: Unknown_Module IMAGE_NAME: Unknown_Image DEBUG_FLR_IMAGE_TIMESTAMP: 0 FAILURE_BUCKET_ID: CLOCK_WATCHDOG_TIMEOUT_2_PROC_ANALYSIS_INCONCLUSIVE BUCKET_ID: CLOCK_WATCHDOG_TIMEOUT_2_PROC_ANALYSIS_INCONCLUSIVE Followup: MachineOwner | The major point in having a secondary DNS server is as backup in the event the primary DNS server handling your domain goes down. In this case, your server would be still up, and so without having a backup, nobody could get to your server possibly costing you lots of lost customers (i.e. REAL MONEY). A secondary DNS server is always up, and ready to serve. It can help balance the load on the network as there are now more than one authoritative place to get your information. Updates are generally performed automatically from the master DNS. Thus it is an exact clone of the master. Generally a DNS server contains more information than just a single server, it might contain mail routing information, information for many many hosts, mail spam keys, etc. So resilancy and redundancy are of DEFINITE benefit to domain holders. I hope this helps your understanding. | {
"source": [
"https://serverfault.com/questions/297328",
"https://serverfault.com",
"https://serverfault.com/users/54764/"
]
} |
297,595 | Is it possible to do a simple inline style SSH command, for example: ssh [email protected] { cd foo/bar && rm *.foobar } | Should you want to execute cd foo/bar && rm *.foobar on the remote machine, simply do ssh [email protected] 'cd foo/bar && rm *.foobar' and see man ssh ... ssh [-1246AaCfgkMNnqsTtVvXxY] [-b bind_address] [-c cipher_spec] [-D
[bind_address:]port] [-e escape_char] [-F configfile]
[-i identity_file] [-L [bind_address:]port:host:hostport]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port] [-R
[bind_address:]port:host:hostport] [-S ctl_path] [-w tunnel:tunnel
[user@]hostname [command] The parts you want: ssh [user@]hostname [command] | {
"source": [
"https://serverfault.com/questions/297595",
"https://serverfault.com",
"https://serverfault.com/users/72069/"
]
} |
297,756 | Is it possible/recommended for two servers to use the same name servers? For example, if I have two VPS servers; one for business one for personal. Can they both use the same name servers? | If they can both see the name servers in question (i.e., not on an internal network to one of them)...sure. | {
"source": [
"https://serverfault.com/questions/297756",
"https://serverfault.com",
"https://serverfault.com/users/71770/"
]
} |
298,146 | I'm a *.deb guy and I feel quite uncomfortable while managing rpms. I'm used to run apt-get upgrade in my debian based servers for "normal" upgrades and apt-get dist-upgrade for allowing kernel upgrades or allowing new major package versions upgrades. In the CentOS servers I admin, I would like to have a similar feature, however man yum doesn't seem to offer such behaviour. And the differences between yum update and yum upgrade seems to be not what I'm looking for. So far my best approach is to add and remove the following setting in /etc/yum.conf : exclude=kernel* There must be a better approach. Every suggestion will be welcome. EDITED: The yum's man page description of them and the --obsoletes flag is a bit cryptic for me. So let me reword what I understand from it: Do I have to understand that yum update won't install a new kernel because it would mean marking as obsolete the current one? Can I assume that yum upgrade does the same or almost the same than apt-get dist-upgrade ? EDITED 2 What I like best from apt-get upgrade is that it tells me which packages remain retained so I can act accordingly; either with apt-get dist-upgrade or with explicit apt-get install package . So after thinking a bit my best approach at this moment will be: disable the obsoletes setting in yum.conf (as described by Steven Pritchard in his answer ) and run at first yum update . Once all the updates are installed, run a second yum update --obsoletes to check which packages have been retained and act in function of its results. Will that work? | yum update originally just did upgrades of packages to new versions. If, for example, foo-awesome obsoleted foo , yum update wouldn't offer to upgrade from foo to foo-awesome . Adding the --obsoletes flag to yum update made it do the extra checks to also offer that upgrade path. yum upgrade was added as (essentially) an alias for yum --obsoletes update . Since this is the behavior that almost everyone wants all of the time, the configuration option obsoletes=1 was added to the default /etc/yum.conf , making yum update and yum upgrade equivalent on any recent, stock, Fedora/RHEL/CentOS/etc. If you want to avoid kernel updates when you're running yum update , you can just do yum --exclude=kernel* update . If you want automatic updates on, but you want to avoid automatic kernel upgrades, then adding the exclude to yum.conf is probably the right answer. There probably isn't a Right Answer for your question. RHEL and RHEL-based distributions don't have the same philosophy as the Debian developers when it comes to updates, so the tools don't encourage the same sorts of behavior. | {
"source": [
"https://serverfault.com/questions/298146",
"https://serverfault.com",
"https://serverfault.com/users/79393/"
]
} |
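A minimal sketch of the exclude approach from the answer above (added as an illustration; it assumes a stock CentOS/RHEL /etc/yum.conf):
# one-off: skip kernel packages for a single run
yum --exclude='kernel*' update
# permanent: keep kernels back on every run
echo 'exclude=kernel*' >> /etc/yum.conf
# when a kernel update is wanted after all, override the yum.conf exclude for one run
yum --disableexcludes=main update kernel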
299,031 | How does the computer know which device on the network to query? How does the default gateway factor into this? Pretty much, what is the chain of events that occurs when a computer tries to obtain an IP address using DHCP ? The reason I ask is because I'm trying to figure out how to best set up a redundant DHCP server, in case the original fails for whatever reason. | It doesn't know what device to query. Thus it broadcasts its request to the entire subnet. The DHCP server is listening for a certain type of communication and when it hears that specific broadcast it begins the DHCP conversation with the device that broadcasted its request. Take a look at the DORA process for more information. DORA stands for: D iscovery O ffer R equest A cknowledgement As for the gateway, it can come into play only if it explicitly forwards DHCP traffic to another subnet. That feature is usually called DHCP Relay . Otherwise the gateway ignores the broadcast DHCP message just like it ignores every other broadcast message. As for redundant DHCP servers, you'll want to keep them both on the same subnet and probably use the 80/20 rule . You can set up failover if you're using CentOS as your DHCP server . | {
"source": [
"https://serverfault.com/questions/299031",
"https://serverfault.com",
"https://serverfault.com/users/65498/"
]
} |
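For the failover mentioned in the answer above, ISC dhcpd (the usual DHCP server on CentOS) supports a failover pair. A rough dhcpd.conf sketch for the primary follows - all addresses, names and timers are made-up examples, and the secondary needs a matching declaration with "secondary" and without the mclt/split lines:
failover peer "dhcp-ha" {
    primary;                      # the other server declares "secondary"
    address 192.168.1.10;         # this server
    peer address 192.168.1.11;    # the partner server
    port 647;
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;                    # primary only
    split 128;                    # primary only: roughly half the clients each
    load balance max seconds 3;
}
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    pool {
        failover peer "dhcp-ha";
        range 192.168.1.100 192.168.1.200;
    }
}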
299,287 | I need to set up a cron job in cpanel that calls a URL (on the same server) once a week. I was going to use wget but it turns out this is disabled on the shared server being used. Is there an alternative to wget ? I've heard that curl can be used but I don't know how to set that up in a cron command. Also, what's the command to make the cronjob do nothing on completion? Any ideas greatly appreciated! | Instead of using wget, curl works like this: curl --silent http://domain.com/cron.php which will work in the same way as wget. If it's a PHP file you are launching, is there any reason you can't run it via the command-line PHP interpreter, like so: php -q /path/to/cron.php This has the same effect as a webserver request and often will work much faster and without certain timeout restrictions present when called via webserver/curl | {
"source": [
"https://serverfault.com/questions/299287",
"https://serverfault.com",
"https://serverfault.com/users/83950/"
]
} |
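Tying the answer above back to the question - a crontab line that calls the URL once a week and discards all output, so the job "does nothing on completion" and cron sends no mail (schedule and URL are placeholders):
# min hour day-of-month month day-of-week command  (Sunday 03:00 here)
0 3 * * 0 curl --silent http://domain.com/cron.php > /dev/null 2>&1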
299,288 | I've found numerous installation instructions for Node.js but they all seem so complicated -- I'm not a super sys admin but I can get around. I have yum on the system, but I didn't find any node.js packages, and I'm not sure how to compile code on the server or where to put it. | su -
yum install gcc-c++ openssl-devel
cd /usr/local/src
wget http://nodejs.org/dist/node-latest.tar.gz
tar zxvf node-latest.tar.gz
(cd into extracted folder: ex "cd node-v0.10.3")
./configure
make
make install Note that this requires Python 2.6+ to use ./configure above. You can modify the "configure" file to point to python2.7 in line 1 if necessary. To create an RPM package, you can use FPM : # wget http://nodejs.org/dist/node-latest.tar.gz
# tar zxvf node-latest.tar.gz
(cd into extracted folder: ex "cd node-v0.10.3")
# ./configure --prefix=/usr/
# make
# mkdir /tmp/nodejs
# make install DESTDIR=/tmp/nodejs/
# tree -L 3 /tmp/nodejs/
/tmp/nodejs/
└── usr
├── bin
│ ├── node
│ ├── node-waf
│ └── npm -> ../lib/node_modules/npm/bin/npm-cli.js
├── include
│ └── node
├── lib
│ ├── dtrace
│ ├── node
│ └── node_modules
└── share
└── man Now make the nodejs package: # fpm -s dir -t rpm -n nodejs -v 0.8.18 -C /tmp/nodejs/ usr/bin usr/lib Then install and check the version: # rpm -ivh nodejs-0.8.18-1.x86_64.rpm
Preparing... ########################################### [100%]
1:nodejs ########################################### [100%]
# /usr/bin/node --version
v0.8.18 Source: https://github.com/jordansissel/fpm/wiki/PackageMakeInstall | {
"source": [
"https://serverfault.com/questions/299288",
"https://serverfault.com",
"https://serverfault.com/users/21343/"
]
} |
299,291 | I use Microsoft's ALTools on a regular basis to troubleshoot multiple different issues. Lately, however, EventCombMT.exe has been crashing on me during the simplest operations. I noticed that it hasn't been updated for a very long time. Is Microsoft recommending that people use a different set of tools? Is there a better alternative out there? | su -
yum install gcc-c++ openssl-devel
cd /usr/local/src
wget http://nodejs.org/dist/node-latest.tar.gz
tar zxvf node-latest.tar.gz
(cd into extracted folder: ex "cd node-v0.10.3")
./configure
make
make install Note that this requires Python 2.6+ to use ./configure above. You can modify the "configure" file to point to python2.7 in line 1 if necessary. To create an RPM package, you can use FPM : # wget http://nodejs.org/dist/node-latest.tar.gz
# tar zxvf node-latest.tar.gz
(cd into extracted folder: ex "cd node-v0.10.3")
# ./configure --prefix=/usr/
# make
# mkdir /tmp/nodejs
# make install DESTDIR=/tmp/nodejs/
# tree -L 3 /tmp/nodejs/
/tmp/nodejs/
└── usr
├── bin
│ ├── node
│ ├── node-waf
│ └── npm -> ../lib/node_modules/npm/bin/npm-cli.js
├── include
│ └── node
├── lib
│ ├── dtrace
│ ├── node
│ └── node_modules
└── share
└── man Now make the nodejs package: # fpm -s dir -t rpm -n nodejs -v 0.8.18 -C /tmp/nodejs/ usr/bin usr/lib Then install and check the version: # rpm -ivh nodejs-0.8.18-1.x86_64.rpm
Preparing... ########################################### [100%]
1:nodejs ########################################### [100%]
# /usr/bin/node --version
v0.8.18 Source: https://github.com/jordansissel/fpm/wiki/PackageMakeInstall | {
"source": [
"https://serverfault.com/questions/299291",
"https://serverfault.com",
"https://serverfault.com/users/28433/"
]
} |
299,297 | We're looking to set up an e-mail sub-domain for a project, and we need to set up a catch-all e-mail address, so whether people send project updates to project1234, or project4321, it will redirect to the one existing e-mail account. I've set up the sub-domain MX in our public DNS. I've set up the sub-domain in EMC, per this article . We do not have an edge transport server, but the same settings are under Hub Transport, which I thought would be the same. I've set up the catch all e-mail address per this article . The sub-domain works when I send directly to the one existing account with an e-mail address in the sub-domain, and it works if I set up specific aliases on the account, but it's not working as a catch all. When I test from my Gmail to a non-existent address on the sub-domain, it is rejected as an unrecognized recipient. At first I considered that it might be our spam filter (McAfee hosted) blocking these messages. But when I added an alias in Exchange and did not set up the user in McAfee, it still came through, so it really appears to be something misconfigured or missing in Exchange. I set up the Transport rule to be "when a recipient's address matches '@sub.example.com$' copy the message to '[email protected]'" I've also tried "when a recipient's address contains specific words 'sub.example.com'" and any other variation I could think of to get a generic catch all for the sub-domain... nothing has worked so far, except creating an alias (which would defeat the purpose of having a catch all). Does anyone have experience with setting one of these up, and so could provide direction on what I'm missing? P.S. the NDR Diagnostic information for administrators:
Generating server: example.com
[email protected]
#550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##
Original message headers:
Received: from p01c12m115.mxlogic.net (208.65.145.247) by
server.local.example.com (192.168.1.18) with Microsoft SMTP Server
(TLS) id 14.0.722.0; Tue, 9 Aug 2011 11:54:09 -0400
Received: from unknown [74.125.82.170] (EHLO mail-wy0-f170.google.com) by
p01c12m115.mxlogic.net(mxl_mta-6.10.0-2) over TLS secured channel with ESMTP
id f18514e4.0.181220.00-2292.264602.p01c12m115.mxlogic.net (envelope-from
<[email protected]>); Tue, 09 Aug 2011 09:54:08 -0600 (MDT)
Received: by wyf23 with SMTP id 23so97339wyf.29 for
<[email protected]>; Tue, 09 Aug 2011 08:54:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gmail.com; s=gamma;
h=mime-version:date:message-id:subject:from:to:content-type;
bh=+ku4XfmMdO3N03t8z+YI9ApzoPFBdZazI1GqwxB5JPs=;
b=fkzsd9eyTJown62n8alAINYW6arHT/qB6EjoAzlwoDjRvDpgJERLEGVrw3eXwbJDlU
aekvxsWTfizZJGxY4KypkJH1T0tnMCjANscAM3avwld8qVbaGlnxE1wipi3i3Bfgcv1R
l3GNqUqCd0FJIXC02+A2CDkihdxqPM3UKHfwc=
MIME-Version: 1.0
Received: by 10.216.67.8 with SMTP id i8mr1726607wed.61.1312905246774; Tue, 09
Aug 2011 08:54:06 -0700 (PDT)
Received: by 10.216.210.134 with HTTP; Tue, 9 Aug 2011 08:54:06 -0700 (PDT)
Date: Tue, 9 Aug 2011 11:54:06 -0400
Message-ID: <CAE=Hmibpw4TVZ5MnG81qBjrUdPRc93eNhx8ACD71u4rjKo7evw@mail.gmail.com>
Subject: test test test
From: Me <[email protected]>
To: <[email protected]>
Content-Type: multipart/alternative; boundary="000e0ce0cf08db956b04aa149235"
X-Spam: [F=0.2000000000; B=0.500(0); spf=0.500; STSI=0.500(0); STSM=0.500(0); CM=0.500; MH=0.500(2011080922); S=0.200(2010122901); SC=none]
X-MAIL-FROM: <[email protected]>
X-SOURCE-IP: [74.125.82.170]
X-AnalysisOut: [v=1.0 c=1 a=nDghuxUhq_wA:10 a=BLceEmwcHowA:10 a=nS36O97Bj3]
X-AnalysisOut: [wUElCrIrAA:9 a=wPNLvfGTeEIA:10]
Return-Path: [email protected]
Received-SPF: Neutral (server.local.example.com: 208.65.145.247 is
neither permitted nor denied by domain of [email protected])
Final-Recipient: rfc822;[email protected]
Action: failed
Status: 5.1.1
Diagnostic-Code: smtp;550 5.1.1 RESOLVER.ADR.RecipNotFound; not found | su -
yum install gcc-c++ openssl-devel
cd /usr/local/src
wget http://nodejs.org/dist/node-latest.tar.gz
tar zxvf node-latest.tar.gz
(cd into extracted folder: ex "cd node-v0.10.3")
./configure
make
make install Note that this requires Python 2.6+ to use ./configure above. You can modify the "configure" file to point to python2.7 in line 1 if necessary. To create an RPM package, you can use FPM : # wget http://nodejs.org/dist/node-latest.tar.gz
# tar zxvf node-latest.tar.gz
(cd into extracted folder: ex "cd node-v0.10.3")
# ./configure --prefix=/usr/
# make
# mkdir /tmp/nodejs
# make install DESTDIR=/tmp/nodejs/
# tree -L 3 /tmp/nodejs/
/tmp/nodejs/
└── usr
├── bin
│ ├── node
│ ├── node-waf
│ └── npm -> ../lib/node_modules/npm/bin/npm-cli.js
├── include
│ └── node
├── lib
│ ├── dtrace
│ ├── node
│ └── node_modules
└── share
└── man Now make the nodejs package: # fpm -s dir -t rpm -n nodejs -v 0.8.18 -C /tmp/nodejs/ usr/bin usr/lib Then install and check the version: # rpm -ivh nodejs-0.8.18-1.x86_64.rpm
Preparing... ########################################### [100%]
1:nodejs ########################################### [100%]
# /usr/bin/node --version
v0.8.18 Source: https://github.com/jordansissel/fpm/wiki/PackageMakeInstall | {
"source": [
"https://serverfault.com/questions/299297",
"https://serverfault.com",
"https://serverfault.com/users/57461/"
]
} |
299,556 | How do I generate a random MAC address from the Linux command line? I search for a solution that only requires standard tools commonly found on the Linux command line. The MAC address will be used for a guest KVM. | I use macaddr=$(echo $FQDN|md5sum|sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/') The benefit of this method, over a completely random number, is that it's possible to reliably reproduce the MAC address based on the FQDN of the machine, which I find useful sometimes. The 02 for the first octet just sets the "locally assigned" bit, which makes it obvious that it's not a vendor-provided MAC address, and guarantees that you won't collide with a real NIC's MAC address. If you need to generate multiple MAC addresses per host, I used to concatenate the FQDN with the name of the bridge to connect the interface to; this did a good job of spreading things out for different NICs. | {
"source": [
"https://serverfault.com/questions/299556",
"https://serverfault.com",
"https://serverfault.com/users/90881/"
]
} |
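If a fully random address is preferred over the FQDN-derived one in the answer above, a bash-only variant (keeping the locally-administered 02: prefix so it cannot collide with a real NIC) could be:
printf '02:%02x:%02x:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))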
300,260 | Are IP addresses with all zeroes in the first octet valid? For example, can 0.1.2.0/24 be a valid subnet, with network address 0.1.2.0 , broadcast address 0.1.2.255 and an usable address range from 0.1.2.1 to 0.1.2.254 ? It looks like it should be valid, but it doesn't work, at least on Windows systems. If it's not valid, then why? | RFC1122 , Requirements for Internet Hosts -- Communication Layers , says: { <Network-number>, <Host-number> }
(a) { 0, 0 }
This host on this network. MUST NOT be sent, except as
a source address as part of an initialization procedure
by which the host learns its own IP address.
See also Section 3.3.6 for a non-standard use of {0,0}.
(b) { 0, <Host-number> }
Specified host on this network. It MUST NOT be sent,
except as a source address as part of an initialization
procedure by which the host learns its full IP address. | {
"source": [
"https://serverfault.com/questions/300260",
"https://serverfault.com",
"https://serverfault.com/users/6352/"
]
} |
300,316 | I've got a 30K row table. When I run a long, 50-line query on that table, a GROUP function reduces the number of rows to 7K. I want to export the grouped 7K rows as a new table, or save them as a
CSV. When I attempt to export, instead of getting the grouped 7K rows, I get the old, pre-query 30K rows. What am I doing wrong, and what should I be doing? NOTE: I'm not a coder, so I'd really appreciate a solution that just uses the phpMyAdmin GUI. | Execute your SQL query in the SQL tab of phpMyAdmin. After execution, scroll down the page and look for “Query results operations”. Click the “Export” link there and you will get a page that exports the results of the query (not the whole table) in the desired format. | {
"source": [
"https://serverfault.com/questions/300316",
"https://serverfault.com",
"https://serverfault.com/users/91381/"
]
} |
300,319 | Right now mydomain.com is accessible to the world because I made an association at godaddy's configuration page, which tells mydomain.com is located at x.x.x.x. As I'm planning to configure a DNS Server I would like to make mydomain.com accessible to the world throught my DNS Server instead of using the godaddy DNS. After adding an A record to my DNS Server what do I need to do in order to let other DNS Servers and the world know mydomain.com exists and is located at my DNS Server? My machine is located at Amazon EC2. | Execute your sql query in the SQL tab of phpMyAdmin. After execution, scroll down the page and look for “Query results operations” Click “Export” link from the above and you will get the page to export all the results of the queries to desired format. | {
"source": [
"https://serverfault.com/questions/300319",
"https://serverfault.com",
"https://serverfault.com/users/91387/"
]
} |
300,749 | I would like to view what packages are available for update/upgrade without actually changing any files, because there are some packages I wouldn't like to update. Would it then be possible to do apt-get update with exceptions? | I use apt list --upgradable . The next alternative is apt-get --simulate upgrade . (based on @EightBitTony) Here are outputs from different options (hope it helps someone): me@machine:~$ apt list --upgradable
Listing... Done
kubernetes-cni/kubernetes-xenial 0.7.5-00 amd64 [upgradable from: 0.6.0-00]
N: There are 3 additional versions. Please use the '-a' switch to see them. me@machine:~$ apt-get --simulate upgrade
NOTE: This is only a simulation!
apt-get needs root privileges for real execution.
Keep also in mind that locking is deactivated,
so don't depend on the relevance to the real current situation!
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
kubernetes-cni
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Inst kubernetes-cni [0.6.0-00] (0.7.5-00 kubernetes-xenial:kubernetes-xenial [amd64])
Conf kubernetes-cni (0.7.5-00 kubernetes-xenial:kubernetes-xenial [amd64]) me@machine:~$ apt-get -u upgrade --assume-no
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root? me@machine:~$ sudo apt-get -u upgrade --assume-no
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
kubernetes-cni
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,473 kB of archives.
After this operation, 4,278 kB of additional disk space will be used.
Do you want to continue? [Y/n] N
Abort. me@machine:~$ sudo apt-get -u -V upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
kubernetes-cni (0.6.0-00 => 0.7.5-00)
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,473 kB of archives.
After this operation, 4,278 kB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort. | {
"source": [
"https://serverfault.com/questions/300749",
"https://serverfault.com",
"https://serverfault.com/users/86582/"
]
} |
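For the "exceptions" half of the question above - keeping selected packages out of a normal upgrade - a hedged sketch; the package name is only an example, and apt-mark hold needs a reasonably recent apt while the dpkg form works everywhere:
echo "linux-image-2.6-amd64 hold" | sudo dpkg --set-selections   # hold the package back
sudo apt-mark hold linux-image-2.6-amd64                         # equivalent on newer apt
sudo apt-mark unhold linux-image-2.6-amd64                       # release it again
dpkg --get-selections | grep hold                                # list current holds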
300,776 | To clarify: I'm using my public hostname to connect to a MySQL database. The hostname resolves to my server's external IP (e.g. 1.2.3.4). Is the data I'm sending/receiving via the MySQL connection going over the internet at all? Would it be faster to use localhost? Will it take up my server's bandwidth? | If you want to be sure, you can use traceroute 1.2.3.4 . This will list all the routers between the host running the command and the device with 1.2.3.4 IP address. | {
"source": [
"https://serverfault.com/questions/300776",
"https://serverfault.com",
"https://serverfault.com/users/87940/"
]
} |
300,788 | I have backup2l scheduled to take daily backups that are stored on an external server. This works fine, although I would also like to take a full backup of the whole server and store this on my local PC. How can I do this with minimal errors or big downtime? I mean I don't want the database to become corrupt while I am downloading if it changes half way through; do I need to shut down most processes? Is there any advice you can give? I have a fairly default Debian LAMP setup. Also, is it better to zip everything before I download?
Should I take care of security by protecting the data for this one time? | If you want to be sure, you can use traceroute 1.2.3.4 . This will list all the routers between the host running the command and the device with 1.2.3.4 IP address. | {
"source": [
"https://serverfault.com/questions/300788",
"https://serverfault.com",
"https://serverfault.com/users/86582/"
]
} |
300,961 | As far as I know, LVM makes it possible to take snapshots of a volume. There are also a number of file systems (ZFS, Btrfs, reiserfs, ...) which supports snapshots. However, I've never understood the difference between LVM snapshots and file system snapshots. If it's possible to take snapshots with LVM, why does someone take their time to implement it in a file system? Edit: Is any of them preferred in some situations? Why? | Most of these snapshots are copy-on-write snapshots, which are really fast and really cheap (storage-wise) on rarely-updated systems. LVM snapshots are COW snapshots, ZFS/BTRFS both have a COW-mode for snapshots, reiserfs doesn't have snapshots natively, Novell's NSS file-system is also COW, as are Shadow Copy volumes for Windows NTFS volumes. Copy-on-write snapshots take a copy of the metadata of the target volume into the snapshot pool. Then, depending on which mode of COW they're using, they copy data that would be overwritten by new writes to the snapshot pool before writing the new data. ZFS and (eventually if not already there) BTRFS have full-snapshot capabilities, which is useful for snapping onto separate media, which in turn is very handy for sneakernet backup systems using removable media. ZFS doesn't call this a "snapshot" though, they leverage ZFS's ability to use zfs send and zfs recv to copy volumes and snapshots over the network to a remote host (or local array). I prefer filesystem-level snapshot abilities over LVM ones because I better trust the filesystem itself to handle the process cleanly. However, in the lack of direct filesystem support, LVM should work just fine in most cases. COW snapshots are good if you need a point-in-time backup taken really fast for short-term recovery needs. Such as doing a daily, or 4x daily, snap to be kept for a week. This is handy if you need to recover files users accidentally delete, or need to roll-back an entire system to a pre-update config. They can also be used by some backup systems as a fully quiesced filesystem, so backups taken from the snapshot volume don't have to worry about open files getting in the way. The key thing to remember is that the snapshot volumes will be on the same storage as the primary volume, so don't give you anything in case of array failure. FULL snapshots are good if they're taken to removable or remote media of some kind. If you have networked storage, the target could be a different iSCSI or Fibre Channel array than the one the primary storage is hosted in. This gives you some off-array protection for some kinds of faults. If using removable media, such as a 3TB ESATA drive, you can even use it as a simple backup-to-disk system. These snapshots CAN be on different hardware than their COW brothers, so are useful for disaster-resilience. On Full vs COW snapshots. The term 'snapshot' has drifted a bit over the years. This year, I'm pretty sure it means "a Copy-On-Write copy of the original data using block-relocation". By this definition, the "Full" snapshot presented above is not actually a snapshot, it's replication. Some storage vendors have used different definitions of 'snapshot' in the past to describe various block-level operations they perform. Where it gets confusing are systems that use snapshots as part of the replication process. | {
"source": [
"https://serverfault.com/questions/300961",
"https://serverfault.com",
"https://serverfault.com/users/64490/"
]
} |
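A minimal sketch of the two snapshot styles compared above - volume group, pool and host names are made up:
# LVM copy-on-write snapshot; 5G is how much change it can absorb before filling up
lvcreate --snapshot --size 5G --name data_snap /dev/vg0/data
# ... back up from /dev/vg0/data_snap, then discard it
lvremove /dev/vg0/data_snap
# ZFS: cheap local snapshot, plus a full copy to other media or another host
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs recv backup/data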
301,423 | I was using df -h to print out human readable disk usage. I would like to figure out what is taking up so much space. For instance, is there a way to pipe this command so that it prints out files that are larger than 1GB in size? Other ideas? Thanks | I use this one a lot. du -kscx * It can take a while to run, but it'll tell you where the disk space is being used. | {
"source": [
"https://serverfault.com/questions/301423",
"https://serverfault.com",
"https://serverfault.com/users/114227/"
]
} |
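To answer the "larger than 1GB" part of the question literally, GNU find can filter on size directly (-xdev keeps it on one filesystem; sort -h needs reasonably recent coreutils):
find / -xdev -type f -size +1G -exec du -h {} + | sort -h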
301,783 | My local CUPS server is confused about the name of the printer I use. It has two names: hpext and hpext@vm-cups . I can never predict which one is going to work, and if I use the wrong one, jobs just sit in its queue indefinitely. There are no printers listed in /etc/cupsd/cupsd.conf ; instead I have the line BrowsePoll cups.eecs.tufts.edu This server lists only hpext and not hpext@vm-cups . I'm thinking that somehow my local server is confused, and if I can delete the printer from its memory, all will be well. But nowhere in the documentation can I find a command to delete a printer, and the DELETE PRINTER button on the stupid web interface has no effect. What can I do? | lpadmin helps you to manage cups' printers Try man lpadmin I believe what you need is lpadmin -x | {
"source": [
"https://serverfault.com/questions/301783",
"https://serverfault.com",
"https://serverfault.com/users/6116/"
]
} |
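A short example of the lpadmin approach from the answer above - list what CUPS currently knows about, then delete the stale local entry by whatever name shows up; as far as I know, queues that only exist because of BrowsePoll are not local destinations and cannot be removed this way:
lpstat -p                # list the destinations this CUPS server knows about
sudo lpadmin -x hpext    # delete a locally defined destination by name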
301,903 | I have a Linux server configured with Apache. However, I cannot get access to it using a remote computer. I can ssh to the server normally. My iptables rules: Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited netstat -ant Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 SERVERIP:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:45117 0.0.0.0:* LISTEN
tcp 0 196 SERVERIP:22 MyIP:3149 ESTABLISHED
tcp 0 0 :::111 :::* LISTEN
tcp 0 0 :::22 :::* LISTEN
tcp 0 0 :::47193 :::* LISTEN using Curl SERVERIP:80 and curl localhost:80 , both return default page from apache. What could be the problem? | You need to enable access to your server on port 80 as it is currently being blocked by iptables. sudo /sbin/iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT This will insert the rule into your iptables configuration at the start. Once you have done this and tested that it works then you should save the configuration so that it it is used next time the service starts, sudo /sbin/service iptables save this will write the current configuration to /etc/sysconfig/iptables . If you use CentOS 7 then FirewallD is the right way to go: firewall-cmd --zone=public --add-port=80/tcp Verify with your browser that it works, and then: firewall-cmd --zone=public --add-port=80/tcp --permanent firewall-cmd --reload To make changes permanent | {
"source": [
"https://serverfault.com/questions/301903",
"https://serverfault.com",
"https://serverfault.com/users/26631/"
]
} |
302,026 | is there a way to get an overview of users logged in on my server and the ip they are connecting from ? I have the IP already, I want the user that is associated with it :) | w | {
"source": [
"https://serverfault.com/questions/302026",
"https://serverfault.com",
"https://serverfault.com/users/86280/"
]
} |
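A few closely related commands in case plain w is too terse for matching a user to an IP - all standard on a typical Linux box:
w         # logged-in users, the host/IP they come from, and what they are running
who -u    # same user-to-host association, plus idle time and PID
last -i   # recent login history with numeric source IP addresses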
302,122 | I have a partition mounted with mount -t ext3 /dev/sda3 /foo . Each time I reboot, I need to remount. How can I keep this mounted after every reboot? | You need to make an entry in /etc/fstab for the mount, something like: /dev/sda3 /foo ext3 defaults 1 1 For more information see: https://help.ubuntu.com/community/Fstab | {
"source": [
"https://serverfault.com/questions/302122",
"https://serverfault.com",
"https://serverfault.com/users/91857/"
]
} |
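After adding the line to /etc/fstab, it is worth checking that it mounts cleanly without waiting for a reboot:
sudo mount -a   # mounts everything listed in fstab that is not mounted yet; mistakes show up immediately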
302,299 | I am looking for tools for Windows that can act as a reverse-proxy in front of a server to introduce various networking issues like jitter, delays, or packet loss. My preference is a software solution that will work on Windows. Httpd mod_proxy doesn't appear to support such a configuration, and googling for a tool in this category is proving fruitless. | I find clumsy wonderful : http://jagt.github.io/clumsy/index.html clumsy makes your network condition on Windows significantly worse,
but in a managed and interactive manner. | {
"source": [
"https://serverfault.com/questions/302299",
"https://serverfault.com",
"https://serverfault.com/users/7909/"
]
} |
302,505 | Can I tell SSH to send the data only after pressing enter or tab, and not after each individual keypress? | No, because SSH has no way of knowing whether what you're typing would require an enter or tab to action -- if you're trying to go through your command history, for instance, the ^R or up arrows wouldn't be sent by themselves, and that would be... unpleasant. You don't have to wait between each character for it to appear on the screen, though; if you know what you have to type, bash away at it as quick as you like, and the terminal will catch up in about one round-trip time from when you stopped typing, which is about as good as you'll get out of a line-buffered setup anyway (packet loss is different, but it introduces its own interesting quirks). | {
"source": [
"https://serverfault.com/questions/302505",
"https://serverfault.com",
"https://serverfault.com/users/22994/"
]
} |
302,509 | I know about the HttpRewriteModule , but I don't really know how to handle regex and I would need to redirect all URLs within a certain directory to another, specifically: From: example.com/component/tag/whatever To: example.com/tag/whatever Could some one tell me how to do this in Nginx? | Do you mean something like: rewrite ^/component(.*)$ $1 last; | {
"source": [
"https://serverfault.com/questions/302509",
"https://serverfault.com",
"https://serverfault.com/users/49697/"
]
} |
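For context, the rewrite from the answer above sits inside the server block; a sketch follows (server_name is a placeholder, and last can be swapped for permanent if a visible 301 redirect is wanted instead of an internal rewrite):
server {
    listen 80;
    server_name example.com;
    # strip the /component prefix: /component/tag/whatever -> /tag/whatever
    rewrite ^/component(.*)$ $1 last;
    location / {
        # normal handling of the rewritten URI goes here
    }
}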
302,592 | I have a large httpd.conf file, most of which is virtual hosts. Is there a way to make a file, say virtual_hosts.conf, and include it from httpd.conf? I've googled a bit, but can't seem to find much as far as includes, just module loading. | Information on apache httpd.conf files can be found here . Some snippets have been copied from this website to ensure that the information is not lost if the link ever becomes deprecated: Include /usr/local/apache2/conf/ssl.conf
Include /usr/local/apache2/conf/vhosts/*.conf Relative paths: Include conf/ssl.conf
Include conf/vhosts/*.conf Wildcards: Include conf/vhosts/*/*.conf | {
"source": [
"https://serverfault.com/questions/302592",
"https://serverfault.com",
"https://serverfault.com/users/59562/"
]
} |
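Putting the answer above together - a sketch of the split with made-up paths and hostnames (on Apache 2.2 a single NameVirtualHost *:80 line is also needed for name-based hosts):
# in httpd.conf
Include conf/vhosts/*.conf
# conf/vhosts/example.com.conf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
    ErrorLog logs/example.com-error_log
</VirtualHost>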
302,776 | Today, I found my server couldn't work because the disk had filled up. I checked the logs and they had grown enormously; I deleted them so that things could work. Now, with the current logs, I'm seeing a lot of suspicious activity. Mail log: Aug 18 23:09:29 veepiz postfix/smtpd[16724]: match_list_match: unknown: no match
Aug 18 23:09:29 veepiz postfix/smtpd[16904]: match_hostaddr: 61.67.184.122 ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[13321]: input attribute name: nexthop
Aug 18 23:09:29 veepiz postfix/smtpd[12192]: private/rewrite socket: wanted attribute: flags
Aug 18 23:09:29 veepiz postfix/smtpd[12800]: input attribute value: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[17483]: private/anvil: wanted attribute: rate
Aug 18 23:09:29 veepiz postfix/smtpd[12468]: smtp_get: EOF
Aug 18 23:09:29 veepiz postfix/smtpd[17928]: send attr milter_actions = 17
Aug 18 23:09:29 veepiz postfix/smtpd[16135]: generic_checks: name=reject_unauth_destination
Aug 18 23:09:29 veepiz postfix/smtpd[19163]: input attribute value: 7476A1659B3
Aug 18 23:09:29 veepiz postfix/smtpd[14164]: private/rewrite socket: wanted attribute: flags
Aug 18 23:09:29 veepiz postfix/smtpd[19366]: input attribute value: smtp
Aug 18 23:09:29 veepiz postfix/smtpd[15307]: match_hostname: dsl093-059-178.blt1.dsl.speakeasy.net ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtpd[15951]: milter8_connect: milter inet:127.0.0.1:20209 version 2
Aug 18 23:09:29 veepiz postfix/smtpd[15865]: send attr ident = smtp:202.91.239.165
Aug 18 23:09:29 veepiz postfix/smtpd[15569]: ctable_locate: leave existing entry key [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[12901]: disconnect from dsl093-059-178.blt1.dsl.speakeasy.net[66.93.59.178]
Aug 18 23:09:29 veepiz postfix/smtpd[13166]: match_hostaddr: 202.53.71.60 ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtpd[18364]: match_hostname: unknown ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[12205]: input attribute value: 2048
Aug 18 23:09:29 veepiz postfix/smtpd[14859]: match_list_match: unknown: no match
Aug 18 23:09:29 veepiz postfix/smtpd[18082]: generic_checks: name=permit_mynetworks
Aug 18 23:09:29 veepiz opendkim[19722]: OpenDKIM Filter: Unable to create listening socket on conn inet:20209@localhost
Aug 18 23:09:29 veepiz postfix/smtpd[19586]: name_mask: resource
Aug 18 23:09:29 veepiz postfix/smtpd[14764]: match_hostaddr: 122.201.66.80 ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtpd[12265]: input attribute name: count
Aug 18 23:09:29 veepiz postfix/smtpd[19034]: match_hostaddr: 82.71.212.10 ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[18460]: match_hostaddr: 190.146.184.219 ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[17099]: match_hostaddr: 178.83.29.189 ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[17710]: match_hostname: unknown ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[14232]: disconnect event to all milters
Aug 18 23:09:29 veepiz postfix/smtpd[15782]: input attribute name: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[18174]: milter_macro_lookup: "v"
Aug 18 23:09:29 veepiz postfix/smtpd[12122]: send attr sender =
Aug 18 23:09:29 veepiz postfix/smtpd[16633]: match_hostname: unknown ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtpd[15479]: private/rewrite socket: wanted attribute: flags
Aug 18 23:09:29 veepiz postfix/smtpd[13872]: event: SMFIC_CONNECT; macros: j=veepiz.com {daemon_name}=veepiz.com v=Postfix 2.3.3
Aug 18 23:09:29 veepiz postfix/smtpd[15132]: input attribute name: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[16806]: E5A4C1654DE: reject: RCPT from unknown[59.163.57.239]: 554 5.7.1 <[email protected]>: Relay access denied; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<59.163.57.239.static.vsnl.net.in>
Aug 18 23:09:29 veepiz postfix/smtpd[14527]: match_hostname: unknown ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[12222]: match_list_match: gmail.com: no match
Aug 18 23:09:29 veepiz postfix/smtpd[15648]: private/rewrite socket: wanted attribute: address
Aug 18 23:09:29 veepiz postfix/smtpd[13525]: match_string: hotmail.com ~? veepiz.com
Aug 18 23:09:29 veepiz postfix/smtpd[12639]: permit_auth_destination: [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[18793]: milter8_connect: milter inet:127.0.0.1:20209 version 2
Aug 18 23:09:29 veepiz postfix/smtpd[13076]: input attribute name: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[17002]: private/rewrite socket: wanted attribute: (list terminator)
Aug 18 23:09:29 veepiz postfix/smtpd[18678]: generic_checks: name=reject_unauth_destination
Aug 18 23:09:29 veepiz postfix/smtpd[13243]: milter_macro_lookup: "{rcpt_addr}"
Aug 18 23:09:29 veepiz postfix/smtpd[13626]: private/rewrite socket: wanted attribute: (list terminator)
Aug 18 23:09:29 veepiz postfix/smtpd[18566]: match_hostaddr: 112.166.135.242 ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[18913]: public/cleanup socket: wanted attribute: queue_id
Aug 18 23:09:29 veepiz postfix/smtpd[16226]: < unknown[61.19.246.53]: RCPT TO: <[email protected]>
Aug 18 23:09:29 veepiz postfix/smtpd[12213]: ctable_locate: leave existing entry key [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[13785]: match_list_match: 61.133.8.74: no match
Aug 18 23:09:29 veepiz postfix/smtpd[16360]: < unknown[200.68.18.101]: RCPT TO: <[email protected]>
Aug 18 23:09:29 veepiz postfix/smtpd[14682]: send attr ident = smtp:201.236.80.197
Aug 18 23:09:29 veepiz postfix/smtpd[13712]: input attribute value: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[12331]: > unknown[200.6.252.70]: 250 2.0.0 Ok
Aug 18 23:09:29 veepiz postfix/smtpd[17297]: milter8_connect: milter inet:127.0.0.1:20209 version 2
Aug 18 23:09:29 veepiz postfix/smtpd[13946]: report connect to all milters
Aug 18 23:09:29 veepiz postfix/smtpd[12980]: send attr address = [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[15223]: send attr address = [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[16046]: input attribute name: address
Aug 18 23:09:29 veepiz postfix/smtpd[13423]: match_hostaddr: 110.74.129.159 ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[18264]: match_hostaddr: 200.160.111.154 ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[12158]: input attribute name: flags
Aug 18 23:09:29 veepiz postfix/smtpd[14952]: generic_checks: name=permit_mynetworks
Aug 18 23:09:29 veepiz postfix/smtpd[15045]: reply: SMFIR_CONTINUE data 0 bytes
Aug 18 23:09:29 veepiz postfix/smtpd[14014]: ctable_locate: install entry key [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[12165]: match_hostaddr: 189.7.37.81 ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[15390]: < unknown[77.91.195.16]: RSET
Aug 18 23:09:29 veepiz postfix/smtpd[14083]: match_list_match: unknown: no match
Aug 18 23:09:29 veepiz postfix/smtpd[16450]: match_string: gmail.com ~? veepiz.com
Aug 18 23:09:29 veepiz postfix/qmgr[12109]: B868E165652: to=<[email protected]>, relay=none, delay=13716, delays=13522/194/0/0, dsn=4.7.0, status=deferred (delivery temporarily suspended: host mx1.mail.tw.yahoo.com[203.188.197.119] refused to talk to me: 421 4.7.0 [TS01] Messages from 50.57.111.177 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html)
Aug 18 23:09:29 veepiz postfix/smtpd[12150]: permit_mynetworks: ks390655.kimsufi.com 188.165.248.79
Aug 18 23:09:29 veepiz postfix/smtpd[16724]: match_list_match: 208.87.240.34: no match
Aug 18 23:09:29 veepiz postfix/smtpd[16904]: match_list_match: 61-67-184-host122.kbtelecom.net.tw: no match
Aug 18 23:09:29 veepiz postfix/smtpd[12192]: input attribute name: flags
Aug 18 23:09:29 veepiz postfix/smtpd[13321]: input attribute value: gmail.com
Aug 18 23:09:29 veepiz postfix/smtpd[12800]: public/cleanup socket: wanted attribute: (list terminator)
Aug 18 23:09:29 veepiz postfix/smtpd[17483]: input attribute name: rate
Aug 18 23:09:29 veepiz postfix/smtpd[12468]: match_hostname: unknown ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtpd[17928]: send attr milter_events = 0
Aug 18 23:09:29 veepiz postfix/smtpd[16135]: reject_unauth_destination: [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[19163]: public/cleanup socket: wanted attribute: (list terminator)
Aug 18 23:09:29 veepiz postfix/smtpd[14164]: input attribute name: flags
Aug 18 23:09:29 veepiz postfix/smtpd[19366]: private/rewrite socket: wanted attribute: nexthop
Aug 18 23:09:29 veepiz postfix/smtpd[15307]: match_hostaddr: 66.93.59.178 ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtpd[15951]: milter8_connect: events
Aug 18 23:09:29 veepiz postfix/smtpd[15865]: private/anvil: wanted attribute: status
Aug 18 23:09:29 veepiz postfix/smtpd[15569]: NOQUEUE: reject: RCPT from unknown[195.239.156.234]: 554 5.7.1 <[email protected]>: Relay access denied; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<mail.bkrb.ru>
Aug 18 23:09:29 veepiz postfix/smtpd[12901]: master_notify: status 1
Aug 18 23:09:29 veepiz postfix/smtpd[13166]: match_hostname: unknown ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[18364]: match_hostaddr: 190.26.210.23 ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[12205]: private/rewrite socket: wanted attribute: (list terminator)
Aug 18 23:09:29 veepiz postfix/smtpd[14859]: match_list_match: 98.142.210.165: no match
Aug 18 23:09:29 veepiz postfix/smtpd[18082]: permit_mynetworks: unknown 124.95.140.14
Aug 18 23:09:29 veepiz opendkim[19722]: smfi_opensocket() failed
Aug 18 23:09:29 veepiz postfix/smtpd[12713]: < unknown[190.182.52.113]: RCPT TO: <[email protected]>
Aug 18 23:09:29 veepiz postfix/smtpd[19586]: name_mask: software
Aug 18 23:09:29 veepiz postfix/smtpd[14764]: match_hostname: unknown ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[12265]: input attribute value: 1
Aug 18 23:09:29 veepiz postfix/smtpd[19034]: match_list_match: pancake.2280.net: no match
Aug 18 23:09:29 veepiz postfix/smtpd[18460]: match_list_match: unknown: no match
Aug 18 23:09:29 veepiz postfix/smtpd[17099]: match_hostname: 178-83-29-189.dynamic.hispeed.ch ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[17710]: match_hostaddr: 61.155.164.76 ~? 50.57.111.177/32
Aug 18 23:09:29 veepiz postfix/smtpd[15715]: < unknown[202.91.239.165]: RCPT TO: <[email protected]>
Aug 18 23:09:29 veepiz postfix/smtpd[15782]: rewrite_clnt: local: [email protected] -> [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[18174]: milter_macro_lookup: result "Postfix 2.3.3"
Aug 18 23:09:29 veepiz postfix/smtpd[12122]: send attr address = [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[16633]: match_hostaddr: 96.9.160.96 ~? 127.0.0.1/32
Aug 18 23:09:29 veepiz postfix/smtp[19166]: D8DCA164E37: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[74.125.67.27]:25, delay=572, delays=342/214/0.11/16, dsn=5.1.1, status=bounced (host gmail-smtp-in.l.google.com[74.125.67.27] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 l14si8292456ybg.13 (in reply to RCPT TO command))
Aug 18 23:09:29 veepiz postfix/smtpd[14232]: milter8_disc_event: quit milter inet:127.0.0.1:20209
Aug 18 23:09:29 veepiz postfix/smtpd[15479]: input attribute name: flags
Aug 18 23:09:29 veepiz postfix/smtpd[13872]: reply: SMFIR_CONTINUE data 0 bytes
Aug 18 23:09:29 veepiz postfix/smtpd[15132]: resolve_clnt: `' -> `[email protected]' -> transp=`smtp' host=`yahoo.com.tw' rcpt=`[email protected]' flags= class=default
Aug 18 23:09:29 veepiz postfix/smtpd[16806]: generic_checks: name=reject_unauth_destination status=2
Aug 18 23:09:29 veepiz postfix/smtpd[14527]: match_hostaddr: 189.16.128.130 ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[12222]: maps_find: virtual_alias_maps: @gmail.com: not found
Aug 18 23:09:29 veepiz postfix/smtpd[15648]: input attribute name: address
Aug 18 23:09:29 veepiz postfix/smtpd[13525]: match_string: hotmail.com ~? localhost.com
Aug 18 23:09:29 veepiz postfix/smtpd[12639]: ctable_locate: leave existing entry key [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[18793]: milter8_connect: events
Aug 18 23:09:29 veepiz postfix/smtpd[13076]: resolve_clnt: `' -> `[email protected]' -> transp=`relay' host=`hotmail.com' rcpt=`[email protected]' flags= class=relay
Aug 18 23:09:29 veepiz postfix/smtpd[17002]: input attribute name: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[18678]: reject_unauth_destination: [email protected]
Aug 18 23:09:29 veepiz postfix/smtpd[13243]: milter_macro_lookup: result "[email protected]"
Aug 18 23:09:29 veepiz postfix/smtpd[13626]: input attribute name: (end)
Aug 18 23:09:29 veepiz postfix/smtpd[18566]: match_hostname: unknown ~? 10.182.130.68/32
Aug 18 23:09:29 veepiz postfix/smtpd[18913]: input attribute name: queue_id
Aug 18 23:09:29 veepiz postfix/smtpd[16226]: extract_addr: input: <[email protected]>
Aug 18 23:09:29 veepiz postfix/smtpd[12213]: generic_checks: name=reject_unauth_destination status=0
Aug 18 23:09:29 veepiz postfix/smtpd[13785]: send attr request = disconnect
Aug 18 23:09:29 veepiz postfix/smtpd[16360]: extract_addr: input: <[email protected]>
Aug 18 23:09:29 veepiz postfix/smtpd[14682]: private/anvil: wanted attribute: status
Aug 18 23:09:29 veepiz postfix/smtpd[13712]: public/cleanup socket: wanted attribute: (list terminator)
Aug 18 23:09:29 veepiz postfix/smtpd[17297]: milter8_connect: events
Aug 18 23:09:29 veepiz postfix/smtpd[13946]: milter_macro_lookup: "j"
Aug 18 23:09:30 veepiz postfix/smtpd[12980]: private/rewrite socket: wanted attribute: flags
Aug 18 23:09:30 veepiz postfix/smtpd[15223]: private/rewrite socket: wanted attribute: flags
Aug 18 23:09:30 veepiz postfix/smtpd[16046]: input attribute value: [email protected]
Aug 18 23:09:30 veepiz postfix/smtpd[13423]: match_list_match: unknown: no match
Aug 18 23:09:30 veepiz postfix/smtpd[18264]: match_list_match: unknown: no match
Aug 18 23:09:30 veepiz postfix/smtpd[12158]: input attribute value: 0
Aug 18 23:09:30 veepiz postfix/smtpd[14952]: permit_mynetworks: li371-14.members.linode.com 96.126.122.14
Aug 18 23:09:30 veepiz postfix/smtpd[15045]: > unknown[187.105.132.234]: 250 2.1.5 Ok
Aug 18 23:09:30 veepiz postfix/smtpd[14014]: extract_addr: in: <[email protected]>, result: [email protected]
Aug 18 23:09:30 veepiz postfix/smtpd[12165]: match_hostname: unknown ~? 10.182.130.68/32
Aug 18 23:09:30 veepiz postfix/smtpd[15390]: abort all milters
Aug 18 23:09:30 veepiz postfix/smtpd[14083]: match_list_match: 190.147.205.152: no match
Aug 18 23:09:30 veepiz postfix/smtpd[16450]: match_string: gmail.com ~? localhost.com
Aug 18 23:09:30 veepiz postfix/smtpd[12150]: match_hostname: ks390655.kimsufi.com ~? 127.0.0.1/32
Aug 18 23:09:30 veepiz postfix/smtpd[16724]: send attr request = disconnect
Aug 18 23:09:30 veepiz postfix/smtpd[16904]: match_list_match: 61.67.184.122: no match
Aug 18 23:09:30 veepiz postfix/qmgr[12109]: C1E66164A28: to=<[email protected]>, relay=none, delay=79045, delays=78851/194/0/0, dsn=4.7.0, status=deferred (delivery temporarily suspended: host mx1.mail.tw.yahoo.com[203.188.197.119] refused to talk to me: 421 4.7.0 [TS01] Messages from 50.57.111.177 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html)
Aug 18 23:09:30 veepiz postfix/smtpd[12192]: input attribute value: 0
Aug 18 23:09:30 veepiz postfix/smtpd[13321]: private/rewrite socket: wanted attribute: recipient
Aug 18 23:09:30 veepiz postfix/smtpd[12800]: input attribute name: (end)
Aug 18 23:09:30 veepiz postfix/smtpd[17483]: input attribute value: 1
Aug 18 23:09:30 veepiz postfix/smtpd[12468]: match_hostaddr: 46.181.195.57 ~? 127.0.0.1/32
Aug 18 23:09:30 veepiz postfix/smtpd[17928]: send attr milter_non_events = 4294967040
Aug 18 23:09:30 veepiz postfix/smtpd[16135]: permit_auth_destination: [email protected]
Aug 18 23:09:30 veepiz postfix/smtpd[19163]: input attribute name: (end)
Aug 18 23:09:30 veepiz postfix/smtpd[14164]: input attribute value: 4096
Aug 18 23:09:30 veepiz postfix/smtpd[19366]: input attribute name: nexthop
Aug 18 23:09:30 veepiz postfix/smtpd[15307]: match_hostname: dsl093-059-178.blt1.dsl.speakeasy.net ~? 50.57.111.177/32
Aug 18 23:09:30 veepiz postfix/smtpd[15951]: milter8_connect: requests SMFIF_ADDHDRS SMFIF_CHGHDRS
Aug 18 23:09:30 veepiz postfix/smtpd[15865]: input attribute name: status
Aug 18 23:09:30 veepiz postfix/smtpd[15569]: generic_checks: name=reject_unauth_destination status=2
Aug 18 23:09:30 veepiz postfix/smtpd[12901]: connection closed
Aug 18 23:09:30 veepiz postfix/smtpd[13166]: match_hostaddr: 202.53.71.60 ~? 50.57.111.177/32
Aug 18 23:09:30 veepiz postfix/smtpd[18364]: match_hostname: unknown ~? 10.182.130.68/32
Aug 18 23:09:30 veepiz postfix/smtpd[12205]: input attribute name: (end)
Aug 18 23:09:30 veepiz postfix/smtpd[14859]: generic_checks: name=permit_mynetworks status=0
Aug 18 23:09:30 veepiz postfix/smtpd[18082]: match_hostname: unknown ~? 127.0.0.1/32
Aug 18 23:09:30 veepiz opendkim[12241]: exited with status 69, restarting
Aug 18 23:09:30 veepiz postfix/smtpd[12331]: < unknown[200.6.252.70]: MAIL FROM: <[email protected]>
Aug 18 23:09:30 veepiz postfix/smtpd[12713]: extract_addr: input: <[email protected]>
Aug 18 23:09:30 veepiz postfix/smtpd[14764]: match_hostaddr: 122.201.66.80 ~? 50.57.111.177/32
Aug 18 23:09:30 veepiz postfix/smtpd[12265]: private/anvil: wanted attribute: rate
Aug 18 23:09:30 veepiz postfix/smtpd[19034]: match_list_match: 82.71.212.10: no match
Aug 18 23:09:30 veepiz postfix/smtpd[18460]: match_list_match: 190.146.184.219: no match
Aug 18 23:09:30 veepiz postfix/smtpd[19723]: dict_eval: const mail
Aug 18 23:09:30 veepiz postfix/smtpd[17099]: match_hostaddr: 178.83.29.189 ~? 10.182.130.68/32
Aug 18 23:09:30 veepiz postfix/smtpd[17710]: match_hostname: unknown ~? 10.182.130.68/32
Aug 18 23:09:30 veepiz postfix/smtpd[15715]: extract_addr: input: <[email protected]>
Aug 18 23:09:30 veepiz postfix/smtpd[15782]: send attr request = resolve
Aug 18 23:09:30 veepiz postfix/smtpd[18174]: milter8_connect: non-protocol events for protocol version 2: SMFIP_NOUNKNOWN SMFIP_NODATA 0xfffffc00
Aug 18 23:09:30 veepiz postfix/smtpd[12122]: private/rewrite socket: wanted attribute: flags
Aug 18 23:09:30 veepiz postfix/smtpd[16633]: match_hostname: unknown ~? 50.57.111.177/32
Aug 18 23:09:30 veepiz postfix/smtpd[14232]: disconnect from unknown[202.53.71.60]
Aug 18 23:09:30 veepiz postfix/smtpd[15479]: input attribute value: 0
Aug 18 23:09:30 veepiz postfix/smtpd[13872]: > unknown[123.30.186.36]: 220 veepiz.com ESMTP Postfix
Aug 18 23:09:30 veepiz postfix/smtpd[19586]: connect from unknown[196.46.27.11]
Aug 18 23:09:30 veepiz postfix/smtpd[15132]: ctable_locate: install entry key [email protected]
Aug 18 23:09:30 veepiz postfix/smtpd[16806]: > unknown[59.163.57.239]: 554 5.7.1 <[email protected]>: Relay access denied
Aug 18 23:09:30 veepiz postfix/smtpd[14527]: match_list_match: unknown: no match
Aug 18 23:09:30 veepiz postfix/smtpd[12222]: mail_addr_find: [email protected] -> (not found)
Aug 18 23:09:30 veepiz postfix/smtpd[15648]: input attribute value: [email protected] I also keep getting emails like this : Subject: Postfix SMTP server: errors from unknown[81.24.210.138]
From: "Mail Delivery System" <[email protected]>
Date: Thu, August 18, 2011 1:03 pm
To: "Postmaster" <[email protected]>
Priority: Normal
Options: View Full Header | View Printable Version | Download this as a file
Transcript of session follows.
In: RSET
Out: 250 2.0.0 Ok
In: MAIL FROM: <[email protected]>
Out: 250 2.1.0 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 250 2.1.5 Ok
In: RCPT TO: <[email protected]>
Out: 554 5.7.1 <[email protected]>: Relay access denied
In: RSET
Out: 250 2.0.0 Ok
In: MAIL FROM: <[email protected]>
Out: 452 4.3.1 Insufficient system storage
In: RSET
Out: 250 2.0.0 Ok
In: MAIL FROM: <[email protected]>
Out: 452 4.3.1 Insufficient system storage
In: QUIT
Out: 221 2.0.0 Bye I've contacted admins at rackspace but they cannot offer me any help for unmanaged servers. I'm gutted and want to stop this weird activity. Any advice ? | You have an open relay. Change the mynetworks variable to mynetworks = 127.0.0.1 . Reset all passwords (just to make sure). After that do a SMTP check for your server at http://mxtoolbox.com and look if it is still an open relay. By the way reduce logging to the standard value. Another tip: paste the logs directly to this question next time, rewrite your question to plain readable(!) English. And accept answers to your former questions. Edit The logging can be reset to the default by (re)setting debug_peer_level = 2
debug_peer_list = (yes, the last line ends with the equal sign) Edit 2 I forgot to mention the settings in master.cf , where there may be lines ending with smtpd -v or even more than one -v . Remove the -v s. | {
"source": [
"https://serverfault.com/questions/302776",
"https://serverfault.com",
"https://serverfault.com/users/88185/"
]
} |
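A sketch of the /etc/postfix/main.cf lines the answer above refers to, using the suggested values (not copied from the affected server):
# /etc/postfix/main.cf
mynetworks = 127.0.0.1
debug_peer_level = 2
debug_peer_list =
After editing, run postfix reload so the settings take effect, then re-test the server with an external open-relay check such as the one at mxtoolbox.com.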
302,787 | I am trying to configure endpoint machines with a firewall that only allows white-listed traffic, and all other connections are blocked. The client machines are desktops and laptops running Windows 7 (both x86 and x64) using the built-in Windows Firewall with Advanced Security. Every machine is part of a Windows Server 2008 domain, and I am configuring the firewall using Group Policy. I am testing this firewall configuration with a small subset of machines. Right now, I have Windows Firewall configured to block all inbound and outbound traffic that doesn't match an explicit allow rule. Here are the basic communications that are currently enabled: DNS (UDP 53 Out) LDAP (TCP 389 Out, UDP 389 Out) Remote Desktop (TCP 3389 Out) Web Browsing (TCP 80 Out) Preset: Core Networking Preset: Distributed Transaction Coordinator Preset: File and Printer Sharing Preset: Network Discovery Preset: Remote Assistance In addition, I have a few rules defined for the business applications we use. This has been working fairly well, but today I encountered some problems with MSRPC (Microsoft Remote Procedure Call). I open mmc.exe and load the computer management snap-in in order to modify the local administrators group. In the "Select Users, Computers..." window I enter the username, then click "Check Names". It gives me the following error: Windows cannot process the object with the name "Foo Bar" because of the following error:
Access is denied. When I remove the firewall restrictions, it works fine. The traffic being blocked is MSRPC, and it uses a randomly selected port in the range of [49100...65535]. How can I create a rule for Windows Firewall that allows MSRPC traffic without creating an overly broad rule, such as allowing TCP traffic on all ports? | You have an open relay. Change the mynetworks variable to mynetworks = 127.0.0.1 . Reset all passwords (just to make sure). After that do a SMTP check for your server at http://mxtoolbox.com and look if it is still an open relay. By the way reduce logging to the standard value. Another tip: paste the logs directly to this question next time, rewrite your question to plain readable(!) English. And accept answers to your former questions. Edit The logging can be reset to the default by (re)setting debug_peer_level = 2
debug_peer_list = (yes, the last line ends with the equal sign) Edit 2 I forgot to mention the settings in master.cf where there maybe lines ending with smtpd -v or even more than one -v . Remove the -v s. | {
"source": [
"https://serverfault.com/questions/302787",
"https://serverfault.com",
"https://serverfault.com/users/23300/"
]
} |
303,151 | I've been trying to mount an ISO file on Windows Server 2008 without success. I've tried MagicISO, Daemon Tools, and Pismo Disk Mounter, but all of them give me some error or another. I'm guessing this is some security issue, but I'm not sure how to get around it. Has anyone had luck with this? | Why do you need to actually mount the ISO? Instead of installing unnecesasry 3rd party apps on your server, why not just use something like 7-zip to extract the contents of the ISO? | {
"source": [
"https://serverfault.com/questions/303151",
"https://serverfault.com",
"https://serverfault.com/users/92160/"
]
} |
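A hedged example of the extraction approach from the answer above, assuming 7-Zip is installed in its default location (the ISO and output paths are placeholders):
:: extract the ISO's contents, with full paths, into D:\iso-contents
"C:\Program Files\7-Zip\7z.exe" x D:\images\install.iso -oD:\iso-contents
Note that -o takes the output directory with no space after it.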
303,365 | I'm looking for a quick way to compare directory contents. Is it possible to do an md5sum (or equivalent checksum) of an entire directory? Using Ubuntu Linux | Sure - md5sum directory/* If you need something a little more flexible (say, for directory recursion or hash comparison), try md5deep. apt-get install md5deep
md5deep -r directory To compare a directory structure, you can give it a list of hashes to compare against: md5deep -r -s /directory1 > dir1hashes
md5deep -r -X dir1hashes /directory2 This will output all of the files in directory2 that do not match to directory1. This will not show files that have been removed from directory1 or files that have been added to directory2. | {
"source": [
"https://serverfault.com/questions/303365",
"https://serverfault.com",
"https://serverfault.com/users/4935/"
]
} |
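If a single checksum for a whole tree is wanted rather than one line per file, a common pattern is to hash the sorted list of per-file hashes; a sketch, assuming GNU find and coreutils and paths without embedded newlines:
# one digest summarising every file under directory/
find directory -type f -exec md5sum {} + | sort -k 2 | md5sum
The digest changes if either the file contents or the printed paths differ, so run it from the same relative location on both sides when comparing two trees.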
303,716 | There are several command line utilities to resolve host names ( host , dig , nslookup ), however they all use nameservers exclusively, while applications in general look in /etc/hosts first (using gethostbyname I believe). Is there a command line utility to resolve host names that behaves like a usual application, thus looking in /etc/hosts first and only then asking a nameserver? (I am aware that it would probably be like 3 lines of c, but I need it inside of a somewhat portable shell script.) | This is easily achieved with getent : getent hosts 127.0.0.1 getent will do lookups for any type of data configured in nsswitch.conf . | {
"source": [
"https://serverfault.com/questions/303716",
"https://serverfault.com",
"https://serverfault.com/users/79831/"
]
} |
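The lookup order that getent (and ordinary applications calling gethostbyname ) follows comes from the hosts line in /etc/nsswitch.conf ; a typical default looks like this, though the exact contents vary by distribution:
# /etc/nsswitch.conf (excerpt)
hosts: files dns
With files listed first, /etc/hosts is consulted before any nameserver, which is exactly the behaviour the question asks for.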
303,744 | I need to set up my VirtualHost on Apache to serve on both http and https (using standard ports). If I enable the SSL Engine (as per below) I get an error on port 80. The reason is that parts of the site need to be SSL but other parts don't. How can I go about serving both http + https on the site? Here is my virtual host file.... NameVirtualHost *
<VirtualHost *>
ServerAdmin webmaster@localhost
ServerName mysite.co.uk
DocumentRoot /var/www/mysite/public
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/mysite/public>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
ErrorLog /var/log/apache2/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog /var/log/apache2/access.log combined
ServerSignature On
Alias /doc/ "/usr/share/doc/"
<Directory "/usr/share/doc/">
Options Indexes MultiViews FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
Allow from 127.0.0.0/255.0.0.0 ::1/128
</Directory>
#SSL STUFF...
SSLEngine on
SSLCertificateFile /etc/apache2/crts/mysite.crt
SSLCertificateKeyFile /etc/apache2/crts/mysite.key
SSLCertificateChainFile /etc/apache2/crts/DigiCertCA.crt
</VirtualHost> | You can't do this in one virtual host, because Apache needs to know which one's going to talk SSL and which one isn't (sidenote: nginx doesn't have this problem, you can tell it which listen directives relate to SSL; one of the many reasons I love it). The way I manage this in Apache is to put all my non-SSL-related configuration into a separate file, and then have the two vhosts configured next to each other, each including the site-specific configuration file inside the vhost stanza, like this: <VirtualHost 192.0.2.12:80>
Include /etc/apache2/sites/example.com
</VirtualHost>
<VirtualHost 192.0.2.12:443>
SSLEngine On
# etc
Include /etc/apache2/sites/example.com
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/303744",
"https://serverfault.com",
"https://serverfault.com/users/61378/"
]
} |
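A sketch of what the shared include file from the answer ( /etc/apache2/sites/example.com in that example) might contain for the vhost in the question — just the non-SSL directives the port 80 and port 443 blocks have in common; the SSL* lines stay in the 443 vhost only:
# /etc/apache2/sites/example.com
ServerAdmin webmaster@localhost
ServerName mysite.co.uk
DocumentRoot /var/www/mysite/public
<Directory /var/www/mysite/public>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>
ErrorLog /var/log/apache2/error.log
CustomLog /var/log/apache2/access.log combined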
304,022 | I messed up my .bashrc file. How do I get to the default one that was initially created with my home directory? | mv ~/.bashrc ~/.bashrc.messed
cp /etc/skel/.bashrc ~/.bashrc | {
"source": [
"https://serverfault.com/questions/304022",
"https://serverfault.com",
"https://serverfault.com/users/92409/"
]
} |
304,121 | I've got 4 specific files that seem to keep disappearing from a user's home directory. As far as we know, there are no cronjobs or other automated tasks that would be removing them. I've setup auditd on them but the logs aren't really showing anything of interest. I can see our backup utility accessing them every night until the point they aren't there anymore, but nothing else. Is there anything that would be causing those files to be removed that would get around auditd? The files in question are these: /home/username/.bashrc
/home/username/.bash_profile as well as a couple of files in that user's .ssh directory. Copies of these files placed into a subfolder called "keepers" get deleted at the same time as well. Changing the permissions on them to 000 and having them owned by root hasn't helped. I've currently got inotifywait setup to log create,delete,move on that subfolder, so hopefully that will turn up something, although it doesn't log much aside from when it happened, not what caused it. | Solution 1 : systemtap You can use systemtap to show all PIDs that are trying to use unlink() on the inode of .bashrc and .bash_profile files. Install systemtap and the debug symbols for your kernel. Create a file with name unlink.stap with the following content: probe syscall.unlink
{
printf ("%s(%d) unlink (%s) userID(%d)\n", execname(), pid(), argstr, uid())
} Then run it with sudo stap unlink.stap Solution 2 : inotify You can also use inotify to see when the file is deleted. Solution 3 : ftrace Another solution is to use ftrace : trace-cmd record -e \*unlink\* Wait for the file to be deleted, press CTRL+C to stop trace-cmd record ... , then run: trace-cmd report Solution 4 : bpftrace Install bpftrace , then run: bpftrace -e 'BEGIN {printf("PID\tPPID\tCMD\tunlink path\n")} tracepoint:syscalls:sys_enter_unlink* { printf("%d\t%d\t%s\t%s\n", pid, curtask->parent->pid, comm, str(args->pathname)); }' | {
"source": [
"https://serverfault.com/questions/304121",
"https://serverfault.com",
"https://serverfault.com/users/3215/"
]
} |
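For Solution 2, a hedged inotifywait invocation (from the inotify-tools package) that watches the locations mentioned in the question and timestamps each event; it reports what happened and when, but not which process did it:
inotifywait -m --timefmt '%F %T' --format '%T %w%f %e' \
    -e delete,delete_self,move,attrib \
    /home/username /home/username/.ssh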
304,125 | It turns out rsync can't work with a remote server which has a .bashrc file? On the local client I got the following when running rsync: protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(180) [sender=3.0.7] As suggested here , removing the .bashrc on the server solved the problem. How can I solve it without (temporarily) removing the .bashrc file? | You can run into problems if the .bashrc on the remote server outputs anything to the terminal. Rsync may not expect that and may have problems as a result. You can fix this by removing any commands in the .bashrc that output text, or by piping any output to /dev/null. | {
"source": [
"https://serverfault.com/questions/304125",
"https://serverfault.com",
"https://serverfault.com/users/86305/"
]
} |
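An alternative to deleting the output is to guard it so it only runs in interactive shells; a sketch for the top of the remote ~/.bashrc (the echo is just a stand-in for whatever is currently printing):
# ~/.bashrc on the remote server
case $- in
    *i*) ;;      # interactive shell: carry on
    *) return ;; # non-interactive (rsync, scp, sftp): bail out before any output
esac
echo "Welcome to $(hostname)"
With the guard in place, rsync's non-interactive login produces no output and the protocol negotiation succeeds.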
304,354 | I have a question about the difference between 'ntfs' and 'ntfs-3g' in the filesystem type field in the /etc/fstab file. My Linux distribution is Xubuntu; I suppose the answer may well vary between distros. My question is basically which is best to use in which contexts. It seems that most websites tell you to use 'ntfs-3g', which is a FUSE driver for NTFS under linux. From some searching around it seems to be the case that 'ntfs' (without the -3g part) typically refers to a kernel driver, rather than the ntfs-3g userland driver. The only problem with that is that I've been using 'ntfs' in my fstabs rather than 'ntfs-3g', while a check of /proc/filesystems doesn't show any listing for ntfs. Can anyone shed some light on what the precise difference in semantics (if there is any) is between 'ntfs' and 'ntfs-3g'? Is it safe to assume that if mount sees 'ntfs' it will search for a driver which supports that type of filesystem and find the ntfs-3g driver if it's installed? EDIT: I forgot to add that 'ntfs' has worked whenever I've used it -- I was simply curious as to the answer, and I wanted to make sure I wasn't doing something iffy. | They're identical - both use ntfs-3g in (current) Ubuntu; the ntfs utils are just symlinked to ntfs-3g . # which mount.ntfs
/sbin/mount.ntfs
# which mount.ntfs-3g
/sbin/mount.ntfs-3g
# ls /sbin/mount.ntfs* -l
lrwxrwxrwx 1 root root 13 2011-03-01 21:13 /sbin/mount.ntfs -> mount.ntfs-3g
lrwxrwxrwx 1 root root 12 2011-03-01 21:13 /sbin/mount.ntfs-3g -> /bin/ntfs-3g | {
"source": [
"https://serverfault.com/questions/304354",
"https://serverfault.com",
"https://serverfault.com/users/92531/"
]
} |
304,424 | What's the technical limitation preventing us, in the glorious year 2011, from emailing each other 1GB files? Or is it just the main email platforms dragging their feet? If I can set my inbox to grab headers only, and then full attachments if I want them, what is the problem? I feel like email attachment sizes are stuck in 1992... | The problem is this: e-mail (SMTP/POP3/IMAP/what-have-you) is an ancient, simple protocol originally intended for sending plaintext messages in a trusted network. Using it for sending or receiving large amounts of binary data across today's Internet is a bolted-on hack, completely different from the original use case, and it performs rather miserably in this role. When you attach a file to the e-mail, it gets base64-encoded, which increases its size by 1/3. Thus, your 1 GB file becomes another 300 MB larger; also, there is no built-in compression to the download protocol, thus no way to speed up the transfer (and in some cases (SMTP for sending,POP3 for receiving), even no way to resume a broken transfer - connection broke at 1.2 GB? Sorry, you need to re-transmit it all again). Moreover, SMTP is a store-and-forward protocol. Guess what? Yup, that 1.3 GB file needs to be copied across multiple servers; cue unbounded happiness from the mail server admins. This was a problem in the 1990s, when there was no useful alternative (FTP? HTTP/1.0? Puh-leeze); but in the glorious year 2011, with various ways of seamlessly up/downloading data to/from the cloud (e.g. Dropbox, Ubuntu One, Amazon S3, to name the most known), the excuse of "there's no other useful way to do this" is not true any more. Note also that not everyone is on a 100 Mbit link to the Internet - e.g. mobile and smartphone; not every mail client is capable of downloading only the headers (e.g. POP3 is still in much use), and not every user is willing to download the 20 inevitable "look at this funneh 1 GB video" e-mails per week that will appear (people will send as large files as the system will let them; and yes, there is something like FUP with most ISPs). TL;DR : while it would be technically possible to do such things as e-mailing a 1GB file, it would also be technically possible to pound in a nail using a screwdriver - it's just not a good way to do it, as there are tools that are more suitable for such tasks. | {
"source": [
"https://serverfault.com/questions/304424",
"https://serverfault.com",
"https://serverfault.com/users/52514/"
]
} |
304,781 | I'm looking for a list of CIDR blocks for "The Internet", i.e. everything from 0.0.0.0 to 223.255.255.255, excluding RFC1918 address space of 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 (yes, I know there are lots of little nets in there that are special, like 192.0.0.0/24, but I really don't care about them). I figure this list must exist somewhere on the Internet, but my google-fu is failing me, so I figured I'd ask here before generating the range myself. Edit: I forgot a really important part of this question: I need it in the fewest possible number of entries. And if you want to know what I'm doing with this, we are entering static flow-mods to work around some nasty issues in an OpenFlow controlled network that currently has a significant amount of badness happening, and we need to reduce the number of flow entries temporarily so we don't exceed the available space in the flow tables and cause everything to come crashing down for a few minutes while it reestablishes a connection to the controller. | Let me show my working here... You need a minimal number of CIDR blocks to cover: 0.0.0.0-9.255.255.255 11.0.0.0-172.15.255.255 172.32.0.0-192.167.255.255 192.169.0.0-223.255.255.255 To turn these ranges into minimal CIDR blocks, you can just use netmask (the swiss army knife of addressing), like so: $ netmask -c 0.0.0.0:9.255.255.255
0.0.0.0/5
8.0.0.0/7
$ netmask -c 11.0.0.0:172.15.255.255
11.0.0.0/8
12.0.0.0/6
16.0.0.0/4
32.0.0.0/3
64.0.0.0/2
128.0.0.0/3
160.0.0.0/5
168.0.0.0/6
172.0.0.0/12
$ netmask -c 172.32.0.0:192.167.255.255
172.32.0.0/11
172.64.0.0/10
172.128.0.0/9
173.0.0.0/8
174.0.0.0/7
176.0.0.0/4
192.0.0.0/9
192.128.0.0/11
192.160.0.0/13
$ netmask -c 192.169.0.0:223.255.255.255
192.169.0.0/16
192.170.0.0/15
192.172.0.0/14
192.176.0.0/12
192.192.0.0/10
193.0.0.0/8
194.0.0.0/7
196.0.0.0/6
200.0.0.0/5
208.0.0.0/4 Hey presto, Bob's your Auntie's live-in lover. | {
"source": [
"https://serverfault.com/questions/304781",
"https://serverfault.com",
"https://serverfault.com/users/37815/"
]
} |
305,388 | What are some different ways/tools to verify that keep-alive is working on the server from the client's end? | As Ron Garrity says, you can use Curl like this: curl -Iv http://www.aptivate.org 2>&1 | grep -i 'connection #0' And it outputs these two lines if keep-alive is working: * Connection #0 to host www.aptivate.org left intact
* Closing connection #0 And if keep-alive is not working, then it just outputs this line: * Closing connection #0 The output Connection ... left intact proves that the server did not close the connection, and it is available for the client to reuse. It's up to the client to decide whether it actually wants to reuse the connection or not. You can demonstrate it with Curl by listing the same URL twice on the command line curl -Iv http://www.aptivate.org --next http://www.aptivate.org 2>&1 | grep -i '#0' in which case it will give output something like: Re-using existing connection! (#0) with host ... | {
"source": [
"https://serverfault.com/questions/305388",
"https://serverfault.com",
"https://serverfault.com/users/92868/"
]
} |
305,738 | I need to know the login history for a specific user (i.e. login and logout times).
How do I extract this history for a specific date range in Linux? | You can try the last command: last john It prints out the login/out history of user john. Whereas running just last prints out the login/out history of all users. | {
"source": [
"https://serverfault.com/questions/305738",
"https://serverfault.com",
"https://serverfault.com/users/92969/"
]
} |
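For the date-range part of the question, newer util-linux versions of last accept --since and --until (older sysvinit-era versions do not, so check man last on your system); for example:
# logins for user john during August 2011, with full timestamps
last -F --since 2011-08-01 --until 2011-08-31 john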
306,240 | I'm trying to read the log files of a Varnish server in an Ubuntu environment. I have actually never used Varnish before, so I cd to /var/log/varnish, but the folder is empty. It tells me that I have to configure my Varnish server to save logs... is that true? | By default Varnish will not log anywhere; you have to run a command to get it to show logs. You can run the command varnishncsa . For more info on how to use that command to write to a log instead of stdout, see http://www.go2linux.org/linux/2011/05/configure-varnish-logs-varnishnsca-logrotate-and-awstats-1014 | {
"source": [
"https://serverfault.com/questions/306240",
"https://serverfault.com",
"https://serverfault.com/users/73080/"
]
} |
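A hedged example of writing those logs to a file with varnishncsa , daemonized (paths are placeholders; the flags shown exist in the Varnish 2.x/3.x releases current when this was asked):
# NCSA-style access log, appended to a file, running in the background
varnishncsa -a -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid
For the raw, very verbose shared-memory log there is also varnishlog .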
306,246 | I'm trying to set up a MySQL database, and all seems to be working except I cannot get PHP files to communicate with it. Just to be clear, the PHP files are correct, along with the host name, username, and password (I got the files from EasyAPNS). I have all of the ports open and phpMyAdmin does work, and PHP files run. I cannot get the database to work though. When the PHP files from EasyAPNS are run, I get return data stating that "We're having a slight problem with the database". Is there any common extra configuration that I might be forgetting about? Thanks. | by default varnish will not log anywhere.. you have to run a command to get it to show logs You can run the command varnishncsa For more info on how to use that command to write to a log instead of stdout http://www.go2linux.org/linux/2011/05/configure-varnish-logs-varnishnsca-logrotate-and-awstats-1014 | {
"source": [
"https://serverfault.com/questions/306246",
"https://serverfault.com",
"https://serverfault.com/users/91252/"
]
} |
306,345 | In 2004, I set up a small certification authority using OpenSSL on Linux and the simple management scripts provided with OpenVPN. In accordance with the guides I found at the time, I set the validity period for the root CA certificate to 10 years. Since then, I have signed many certificates for OpenVPN tunnels, web sites and e-mail servers, all of which also have a validity period of 10 years (this may have been wrong, but I didn't know better at the time). I have found many guides about setting up a CA, but only very little information about its management, and in particular, about what has to be done when the root CA certificate expires, which will happen some time in 2014. So I have the following questions: Will the certificates that have a validity period extending after the expiry of the root CA certificate become invalid as soon as the latter expires, or will they continue to be valid (because they were signed during the validity period of the CA certificate)? What operations are needed to renew the root CA certificate and ensure a smooth transition over its expiry? Can I somehow re-sign the current root CA certificate with a different validity period, and upload the newly-signed cert to clients so that client certificates remain valid? Or do I need to replace all client certificates with new ones signed by a new root CA certificate? When should the root CA certificate be renewed? Close to expiry, or a reasonable time before expiry? If the renewal of the root CA certificate becomes a major piece of work, what can I do better now to ensure a smoother transition at the next renewal (short of setting the validity period to 100 years, of course)? The situation is made slightly more complicated by the fact that my only access to some of the clients is through an OpenVPN tunnel that uses a certificate signed by the current CA certificate, so if I have to replace all client certs, I will need to copy the new files to the client, restart the tunnel, cross my fingers and hope that it comes up afterwards. | Keeping the same private key on your root CA allows for all certificates to continue to validate successfully against the new root; all that's required of you is to trust the new root. The certificate signing relationship is based on a signature from the private key; keeping the same private key (and, implicitly, the same public key) while generating a new public certificate, with a new validity period and any other new attributes changed as needed, keeps the trust relationship in place. CRLs, too, can continue over from the old cert to the new, as they are, like certificates, signed by the private key. So, let's verify! Make a root CA: openssl req -new -x509 -keyout root.key -out origroot.pem -days 3650 -nodes Generate a child certificate from it: openssl genrsa -out cert.key 1024
openssl req -new -key cert.key -out cert.csr Sign the child cert: openssl x509 -req -in cert.csr -CA origroot.pem -CAkey root.key -create_serial -out cert.pem
rm cert.csr All set there, normal certificate relationship. Let's verify the trust: # openssl verify -CAfile origroot.pem -verbose cert.pem
cert.pem: OK Ok, so, now let's say 10 years passed. Let's generate a new public certificate from the same root private key. openssl req -new -key root.key -out newcsr.csr
openssl x509 -req -days 3650 -in newcsr.csr -signkey root.key -out newroot.pem
rm newcsr.csr And.. did it work? # openssl verify -CAfile newroot.pem -verbose cert.pem
cert.pem: OK But.. why? They're different files, right? # sha1sum newroot.pem
62577e00309e5eacf210d0538cd79c3cdc834020 newroot.pem
# sha1sum origroot.pem
c1d65a6cdfa6fc0e0a800be5edd3ab3b603e1899 origroot.pem Yes, but, that doesn't mean that the new public key doesn't cryptographically match the signature on the certificate. Different serial numbers, same modulus: # openssl x509 -noout -text -in origroot.pem
Serial Number:
c0:67:16:c0:8a:6b:59:1d
...
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:bd:56:b5:26:06:c1:f6:4c:f4:7c:14:2c:0d:dd:
3c:eb:8f:0a:c0:9d:d8:b4:8c:b5:d9:c7:87:4e:25:
8f:7c:92:4d:8f:b3:cc:e9:56:8d:db:f7:fd:d3:57:
1f:17:13:25:e7:3f:79:68:9f:b5:20:c9:ef:2f:3d:
4b:8d:23:fe:52:98:15:53:3a:91:e1:14:05:a7:7a:
9b:20:a9:b2:98:6e:67:36:04:dd:a6:cb:6c:3e:23:
6b:73:5b:f1:dd:9e:70:2b:f7:6e:bd:dc:d1:39:98:
1f:84:2a:ca:6c:ad:99:8a:fa:05:41:68:f8:e4:10:
d7:a3:66:0a:45:bd:0e:cd:9d
# openssl x509 -noout -text -in newroot.pem
Serial Number:
9a:a4:7b:e9:2b:0e:2c:32
...
RSA Public Key: (1024 bit)
Modulus (1024 bit):
00:bd:56:b5:26:06:c1:f6:4c:f4:7c:14:2c:0d:dd:
3c:eb:8f:0a:c0:9d:d8:b4:8c:b5:d9:c7:87:4e:25:
8f:7c:92:4d:8f:b3:cc:e9:56:8d:db:f7:fd:d3:57:
1f:17:13:25:e7:3f:79:68:9f:b5:20:c9:ef:2f:3d:
4b:8d:23:fe:52:98:15:53:3a:91:e1:14:05:a7:7a:
9b:20:a9:b2:98:6e:67:36:04:dd:a6:cb:6c:3e:23:
6b:73:5b:f1:dd:9e:70:2b:f7:6e:bd:dc:d1:39:98:
1f:84:2a:ca:6c:ad:99:8a:fa:05:41:68:f8:e4:10:
d7:a3:66:0a:45:bd:0e:cd:9d Let's go a little further to verify that it's working in real world certificate validation. Fire up an Apache instance, and let's give it a go (debian file structure, adjust as needed): # cp cert.pem /etc/ssl/certs/
# cp origroot.pem /etc/ssl/certs/
# cp newroot.pem /etc/ssl/certs/
# cp cert.key /etc/ssl/private/ We'll set these directives on a VirtualHost listening on 443 - remember, the newroot.pem root certificate didn't even exist when cert.pem was generated and signed. SSLEngine on
SSLCertificateFile /etc/ssl/certs/cert.pem
SSLCertificateKeyFile /etc/ssl/private/cert.key
SSLCertificateChainFile /etc/ssl/certs/newroot.pem Let's check out how openssl sees it: # openssl s_client -showcerts -CAfile newroot.pem -connect localhost:443
Certificate chain
0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=server.lan
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=root
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=root
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=root
-----BEGIN CERTIFICATE-----
MIICHzCCAYgCCQCapHvpKw4sMjANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJB
...
-----END CERTIFICATE-----
(this should match the actual contents of newroot.pem)
...
Verify return code: 0 (ok) Ok, and how about a browser using MS's crypto API? Gotta trust the root, first, then it's all good, with the new root's serial number: And, we should still be working with the old root, too. Switch Apache's config around: SSLEngine on
SSLCertificateFile /etc/ssl/certs/cert.pem
SSLCertificateKeyFile /etc/ssl/private/cert.key
SSLCertificateChainFile /etc/ssl/certs/origroot.pem Do a full restart on Apache, a reload won't switch the certs properly. # openssl s_client -showcerts -CAfile origroot.pem -connect localhost:443
Certificate chain
0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=server.lan
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=root
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=root
i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=root
-----BEGIN CERTIFICATE-----
MIIC3jCCAkegAwIBAgIJAMBnFsCKa1kdMA0GCSqGSIb3DQEBBQUAMFQxCzAJBgNV
...
-----END CERTIFICATE-----
(this should match the actual contents of origroot.pem)
...
Verify return code: 0 (ok) And, with the MS crypto API browser, Apache's presenting the old root, but the new root's still in the computer's trusted root store. It'll automatically find it and validate the cert against the trusted (new) root, despite Apache presenting a different chain (the old root). After stripping the new root from trusted roots and adding the original root cert, all is well: So, that's it! Keep the same private key when you renew, swap in the new trusted root, and it pretty much all just works . Good luck! | {
"source": [
"https://serverfault.com/questions/306345",
"https://serverfault.com",
"https://serverfault.com/users/6195/"
]
} |
306,421 | This is what I'm doing: mysql --host=localhost --port=9999 mysql -u root -p --execute="show tables;" The command works (connecting to port 3306) no matter what I provide in the --port argument. I have two mysql servers running on one machine, and want to connect to the second one by explicitly providing its port number. What's going on? Why does mysql ignore this parameter? | When the localhost parameter is given, MySQL uses sockets. Use 127.0.0.1 instead. | {
"source": [
"https://serverfault.com/questions/306421",
"https://serverfault.com",
"https://serverfault.com/users/44810/"
]
} |
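Two hedged variants of the command from the question that make the client honour --port :
# connect over TCP so the port is used
mysql --host=127.0.0.1 --port=9999 -u root -p --execute="show tables;" mysql
# or keep localhost but force TCP explicitly
mysql --protocol=TCP --host=localhost --port=9999 -u root -p mysql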
306,526 | We have a 16-drive RAID-6 that has three problem drives. Two are already dead, and the third is giving SMART warnings. (Nevermind how it got in such a bad state.) Obviously we want to replace the dead drives before the one that is still working, but is it better to: replace one dead drive, let the RAID rebuild, then replace the other, and let it rebuild again; or replace both drives at once and let it rebuild both in parallel? To put it another way, will we get back to a state of redundancy faster by reintroducing one drive or two? Does rebuilding two drives in parallel slow the rebuild process? In case it matters, the controller is a 3ware 9650SE-16ML. | !!!!! ONE !!!!! Do one at a time, seriously dude, don't think of doing this ANY other way ok. Anything else will test your full system restoration skills. | {
"source": [
"https://serverfault.com/questions/306526",
"https://serverfault.com",
"https://serverfault.com/users/18096/"
]
} |
306,541 | I have some arbitrary number of servers with the same user/pass combination. I want to write a script (that I call once) so that ssh-copy-id user@myserver is called for each server. Since they all have the same user/pass this should be easy but ssh-copy-id wants me to type the password in separately each time which defeats the purpose of my script. There is no option for putting in a password, ie ssh-copy-id -p mypassword user@myserver . How can I write a script that automatically fills in the password field when ssh-copy-id asks for it? | Take a look at sshpass . Place your password in a text file and do something like this: $ sshpass -f password.txt ssh-copy-id user@yourserver | {
"source": [
"https://serverfault.com/questions/306541",
"https://serverfault.com",
"https://serverfault.com/users/48818/"
]
} |
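A sketch of the wrapper script the question asks for, assuming a servers.txt with one hostname per line and the shared password in password.txt (both file names are hypothetical):
#!/bin/bash
# push the same public key to every host listed in servers.txt
while read -r host; do
    sshpass -f password.txt ssh-copy-id "user@$host"
done < servers.txt
Keep the password file readable only by you ( chmod 600 password.txt ) and delete it once the keys are in place.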
306,837 | I've got a list of hundreds of page requests from the same IP and I need to know if these could be requests by different computers. | There is no limit to the number of computers , however there is a limit to the number of simultaneous connections because of the possibility of ephemeral port exhaustion. More computers usually means more connections so there is a practical limit to how many computers will typically share the same IP address. Usually with a very large number of computers, multiple IP addresses will be shared in a pool to be used for NAT. | {
"source": [
"https://serverfault.com/questions/306837",
"https://serverfault.com",
"https://serverfault.com/users/91635/"
]
} |
307,896 | I'm about to implement my own Certification Authority (CA) for internal use only. Now there is a problem: the CA private key should never, ever be exploited. So right now the private key is encrypted. What else could be done to enhance the security of the private key? | I worked at a company where the security of the CA key was critical to the continued success of the business. To this end the key was encrypted using a custom protocol that required at least 2 people to be present with physical tokens plugged into terminals to decrypt it (there were at least 5 of these tokens, any 2 combined would work). The terminals were physically separated from the actual machine with the CA key. The interface that the users had who decrypted it was a VT220 terminal that allowed them to input the decryption tokens and then select what they wanted to 'sign' with the key (never giving them access to the decrypted key). This system meant at least 4 people would have to work together to compromise the key: two token holders, the guy who had access to the data center, and another person who had root access on the server (because the decrypted key was never stored on the server, only in memory, you couldn't just steal the box, and the people with root to this specific server were not allowed DC access). If you are interested in more details on this sort of setup, Bruce Schneier has a great site covering computer security design and implementation: http://www.schneier.com/ He has also published a really good book, Applied Cryptography , that I found helped me understand the fundamentals of systems like this and how to architect more secure infrastructures (readable by people who don't wear pocket protectors): http://www.schneier.com/book-applied.html | {
"source": [
"https://serverfault.com/questions/307896",
"https://serverfault.com",
"https://serverfault.com/users/64660/"
]
} |
308,085 | How to save and exit crontab -e ? i tried every method listed here and none works, i have a centos 5, vi comes by default with yum and i installed nano Solved just changed the default editor export EDITOR=nano and now i can do what I do using nano :) thanks everyone and yes i should learn Vi.. someday!!! | As others have pointed out, the first thing is to make sure you're using an editor you like. We're all admins here, so we all like vi (ducks, runs). export VISUAL=vi
crontab -e (do some edits, finishing with ESCAPE) :wq And crontab -l should now show you your new crontab. If you prefer some other editor, set that in the VISUAL environment variable, and exit it as appropriate. | {
"source": [
"https://serverfault.com/questions/308085",
"https://serverfault.com",
"https://serverfault.com/users/93256/"
]
} |
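To make the nano choice from the question stick, export the variable before running crontab , and put the export in the shell profile so it survives new sessions (most crontab implementations check VISUAL first, then EDITOR ):
export EDITOR=nano
crontab -e
# make it permanent for future shells
echo 'export EDITOR=nano' >> ~/.bashrc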
308,097 | Can I use Fabric to automatically deploy an app on my server every time I push the code to GitHub? (GitHub has the ability to POST to a URL every time I push.) If so, how? | As others have pointed out, the first thing is to make sure you're using an editor you like. We're all admins here, so we all like vi (ducks, runs). export VISUAL=vi
crontab -e (do some edits, finishing with ESCAPE) :wq And crontab -l should now show you your new crontab. If you prefer some other editor, set that in the VISUAL environment variable, and exit it as appropriate. | {
"source": [
"https://serverfault.com/questions/308097",
"https://serverfault.com",
"https://serverfault.com/users/20346/"
]
} |
308,232 | Is there a more efficient way to retrieve the MAC address of a NIC in Linux? This works: ip link show dev eth0 | awk ' /link\/ether/ { print $2 }' but can it be found via something like: cat /sys/net/something | It's at /sys/class/net/eth0/address (or more precisely /sys/devices/pciXXXX:XX/XXXX/net/eth0/address where the XXX is your PCI bus ID, but this varies between systems). (Incidentally, I found this with find /sys -name eth0 and looking at the files in the directories identified.) | {
"source": [
"https://serverfault.com/questions/308232",
"https://serverfault.com",
"https://serverfault.com/users/85230/"
]
} |
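Since the sysfs path is per-interface, the value can be read directly, or looped over for every NIC (interface names will differ between systems):
# MAC address of a single interface
cat /sys/class/net/eth0/address
# MAC addresses of all interfaces
for f in /sys/class/net/*/address; do
    printf '%s %s\n' "$f" "$(cat "$f")"
done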
308,299 | I am using Nginx as my webserver for the first time. I didn't have any trouble to set it up and everything works great. The problem came when the designer asked me if he could send me "the icon in the title bar" to "put it up there". # /opt/nginx/conf/nginx.conf
...
server {
listen 80 ;
server_name *.website.com website.com;
root /home/webuser/sites/website;
} My directory: /home/webuser/sites/website/
|_ index.html
|_ main.css
|_ favicon.ico Is it possible to put a specific favicon.ico to each Virtual Host? Where should you put that file and how can you configure it? EDIT: I just realized that it was a completely different problem. Both answers were right but my problem was the permission. I don't know why the file favicon.ico ended up having permissions 600 and of course the moment I did: chmod +r favicon.ico Worked like a charm. I will leave this here if it happens to someone else. | This is how we do it in our specific vhost config ( sites-available/[vhostconfigfile] ) under the server directive: location = /favicon.ico {
alias /var/www/media/images/favicon.X.ico;
} That way you can put it anywhere you want with no html whatsoever. The ".X." is not required at all, and only denotes that you can change this filename to anything you like. I simply use the ".X." as a placeholder to identify the specific sub domain that I am referencing. Its purely for organization. | {
"source": [
"https://serverfault.com/questions/308299",
"https://serverfault.com",
"https://serverfault.com/users/93761/"
]
} |
309,052 | How can I check if a port is listening on a Linux server? | You can check if a process listens on a TCP or UDP port with netstat -tuplen . To check whether some ports are accessible from the outside (this is probably what you want) you can use a port scanner like Nmap from another system. Running Nmap on the same host you want to check is quite useless for your purpose. | {
"source": [
"https://serverfault.com/questions/309052",
"https://serverfault.com",
"https://serverfault.com/users/86484/"
]
} |
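Hedged examples of both checks from the answer — locally with netstat (or its newer replacement ss ), and remotely with Nmap (hostname and ports are placeholders):
# on the server: list listening sockets and the owning processes
netstat -tuplen
ss -tulnp
# from another machine: check whether the ports are reachable from outside
nmap -p 22,80,443 server.example.com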
309,069 | I'm new to administering a server, and I'm wondering if there is any value in obfuscating the server field for HTTP response headers I'm sending out. Would this prevent hackers from determining which webserver I'm using, and therefore make it more difficult to locate an exploitable crack in my security? | You can check if a process listens on a TCP or UDP port with netstat -tuplen . To check whether some ports are accessible from the outside (this is probably what you want) you can use a port scanner like Nmap from another system. Running Nmap on the same host you want to check is quite useless for your purpose. | {
"source": [
"https://serverfault.com/questions/309069",
"https://serverfault.com",
"https://serverfault.com/users/92486/"
]
} |
309,087 | I am looking at this page ( http://support.microsoft.com/lifecycle/?LN=en-us&x=7&y=17&p1=3198 ) but I can't figure out the answer to this question. Suppose we have a Windows 2003 Server with the latest service pack. Is it still supported? | Yes, but only just: Windows Server 2003 with SP2 is in the extended support phase of the Microsoft lifecycle. Mainstream support ended on July 13, 2010, and extended support runs until July 14, 2015, so the product still receives security updates until that date; non-security hotfixes, however, require a paid Extended Hotfix Support agreement. | {
"source": [
"https://serverfault.com/questions/309087",
"https://serverfault.com",
"https://serverfault.com/users/8625/"
]
} |
309,096 | Issue: Take a list of usernames from a CSV file and recursively remove them from all of the user groups that they are a member of. I would like to limit what groups they are removed from based upon a prefix for the group name. Example: If user_bob (one of the many usernames within the CSV) was a member of the following groups (determined recursively): abc-users, abc-printers, abc-users-limited, xyz-users, xyz-secure, then I would like to have the PowerShell script remove him from all groups that are prefixed with "abc-" and then move on to the next username in the CSV and perform the same process. Notes: I have been searching all over online for examples for something like this and cannot seem to locate any leads. Code snippets or samples that I can begin the process of testing with or putting together into a solution would be most appreciated. I have been looking through the TechNet documentation for the commandlets and have not made much progress. Thanks in advance! | The ActiveDirectory module (included with RSAT on Windows 7 / Server 2008 R2) covers this: read the CSV with Import-Csv, enumerate each user's group memberships with Get-ADPrincipalGroupMembership, filter on the group-name prefix, and call Remove-ADGroupMember for each match. Note that a user can only be removed from groups they are a direct member of, so enumerating direct memberships is all you need even if the membership was discovered recursively. A hedged sketch follows below this entry. | {
"source": [
"https://serverfault.com/questions/309096",
"https://serverfault.com",
"https://serverfault.com/users/60455/"
]
} |
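The sketch promised in the answer above, using the ActiveDirectory module; the CSV path, the Username column name and the abc- prefix are assumptions to adapt:
Import-Module ActiveDirectory
$prefix = 'abc-'
$users  = Import-Csv 'C:\temp\users.csv'   # expects a column named Username
foreach ($u in $users) {
    # only direct memberships can be removed, so these are all we need
    $groups = Get-ADPrincipalGroupMembership -Identity $u.Username |
        Where-Object { $_.Name -like "$prefix*" }
    foreach ($g in $groups) {
        Remove-ADGroupMember -Identity $g -Members $u.Username -Confirm:$false
    }
}
Drop -Confirm:$false while testing if you prefer to approve each removal interactively.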
309,113 | Google did a very thorough study on hard drive failures which found that a significant portion of hard drives fail within the first 3 months of heavy usage. My coworkers and I are thinking we could implement a burn-in process for all our new hard drives that could potentially save us some heartache from losing time on new, untested drives. But before we implement a burn-in process, we would like to get some insight from others who are more experienced: How important is it to burn in a hard drive before you start using it? How do you implement a burn-in process? How long do you burn in a hard drive? What software do you use to burn in drives? How much stress is too much for a burn-in process? EDIT:
Due to the nature of the business, RAIDs are impossible to use most of the time. We have to rely on single drives that get mailed across the nation quite frequently. We back up drives as soon as we can, but we still encounter failure here and there before we get an opportunity to back up data. UPDATE My company has implemented a burn-in process for a while now, and it has proven to be extremely useful. We immediately burn in all new drives that we get in stock, allowing us to find many errors before the warranty expires and before installing them into new computer systems. It has also proven useful to verify that a drive has gone bad. When one of our computers starts encountering errors and a hard drive is the main suspect, we'll rerun the burn-in process on that drive and look at any errors to make sure the drive actually was the problem before starting the RMA process or throwing it in the trash. Our burn-in process is simple. We have a designated Ubuntu system with lots of SATA ports, and we run badblocks in read/write mode with 4 passes on each drive. To simplify things, we wrote a script that prints a "DATA WILL BE DELETED FROM ALL YOUR DRIVES" warning and then runs badblocks on every drive except the system drive. | IMNSHO, you shouldn't be relying on a burn-in process to weed out bad drives and "protect" your data. Developing this procedure and implementing it will take up time that could be better used elsewhere and even if a drive passes burn-in, it may still fail months later. You should be using RAID and backups to protect your data. Once that is in place, let it worry about the drives. Good RAID controllers and storage subsystems will have 'scrubbing' processes that go over the data every so often and ensure everything is good. Once that all is taken care of, there's no need to do disk scrubbing, though as others have mentioned it doesn't hurt to do a system load test to ensure that everything is working as you expect. I wouldn't worry about individual disks at all. As has been mentioned in the comments, it doesn't make a lot of sense to use hard drives for your particular use case. Shipping them around is far more likely to cause data errors that won't be there when you did the burn-in. Tape media is designed to be shipped around. You can get 250MBps (or up to 650MBps compressed) with a single IBM TS1140 drive which should be faster than your hard drive. And bigger as well - a single cartridge can give you up to 4TB (uncompressed). If you don't want to use tape, use SSDs. They can be treated far rougher than HDDs and satisfy all the requirements you've given so far. After all that, here are my answers to your questions: How important is it to burn in a hard drive before you start using it? Not at all. How do you implement a burn-in process? How long do you burn in a hard drive? One or two runs. What software do you use to burn in drives? A simple run of, say, shred and badblocks will do. Check the SMART data afterwards. How much stress is too much for a burn-in process? No stress is too much. You should be able to throw anything at a disk without it blowing up. | {
"source": [
"https://serverfault.com/questions/309113",
"https://serverfault.com",
"https://serverfault.com/users/30256/"
]
} |
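For reference, a destructive badblocks pass plus a SMART check (the core of the burn-in process described in the question's update) looks roughly like this; sdX is a placeholder and the write test destroys all data on the drive:
badblocks -wsv /dev/sdX    # write/verify four patterns across the whole disk
smartctl -a /dev/sdX       # afterwards, check reallocated and pending sector counts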
309,127 | Does anyone know how to install php 5.2.17 on a 64bit centos 6 install? I've got an old legacy system that requires php 5.2.17, but centos 6 only supports php 5.3. I've installed repos such as webtatic, but had no luck at all. Should I rather revert back to centos 5 and install it there? Any ideas, I am out? | PHP 5.2 is end-of-life (5.2.17, from January 2011, was the final release), so you won't find maintained packages for it in the usual CentOS 6 repositories. Realistically you can either compile PHP 5.2.17 from source on the CentOS 6 box (accepting that you then own its maintenance, including any security backports), or keep the legacy application on an older platform for which third-party 5.2 builds still exist. Compiling from source is usually less painful than downgrading the whole operating system, and the sustainable fix is to update the application so it runs on PHP 5.3. | {
"source": [
"https://serverfault.com/questions/309127",
"https://serverfault.com",
"https://serverfault.com/users/73669/"
]
} |
309,171 | I've created an RSA keypair that I used for SSH, and it includes my email address. (At the end of the public key.) I've now changed my email address. Is it possible to change the email address on the key, or is it part of the key and I would have to make a new one? | I've created an RSA keypair that I used for SSH, and it includes my email address. (At the end of the public key.) That part of an ssh key is just a comment. You can change it to anything you want at any time. It doesn't even need to be the same on different servers. You can remove it as well. It is only there to help you or someone else figure out what to delete when you have many keys in an authorized_keys file and you need to revoke or change one of them. ssh-rsa AAAAB3N....NMqKM= this_is_a_comment When I create my keys with ssh-keygen I usually use a command like this to set a different comment. I don't think the username@host is very useful. You can certainly put it whatever comment that you like that will be useful to you and any other admins to help identify who the key belongs to. ssh-keygen ... -C YYYYMMDD_surname_givenname | {
"source": [
"https://serverfault.com/questions/309171",
"https://serverfault.com",
"https://serverfault.com/users/20346/"
]
} |
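Since the comment is just the last whitespace-separated field of the public key line, you can change it with any text editor, or in place with something like the following (the new address is a placeholder, and this assumes the existing comment contains no spaces):
sed -i 's/ [^ ]*$/ new.address@example.com/' ~/.ssh/id_rsa.pub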
309,357 | Just a quick sanity check here. Can you ping a specific port of a machine, and if so, can you provide an example? I'm looking for something like ping ip address portNum . | You can't ping ports, as Ping is using ICMP which is an internet layer protocol that doesn't have ports. Ports belong to the transport layer protocols like TCP and UDP. However, you could use nmap to see whether ports are open or not nmap -p 80 example.com Edit:
As flokra mentioned, nmap is more than just a ping-for-ports-thingy. It's the security auditers and hackers best friend and comes with tons of cool options. Check the doc for all possible flags. | {
"source": [
"https://serverfault.com/questions/309357",
"https://serverfault.com",
"https://serverfault.com/users/259693/"
]
} |
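Besides nmap, two other quick ways to poke a single TCP port; the host and port are examples, and netcat option support varies slightly between builds:
nc -zv example.com 80      # -z only scans, -v reports open/refused
telnet example.com 80      # connects if the port is open; Ctrl-] then quit to exit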
309,515 | How to make wireshark filter POST-requests only? | You can use the following filter: http.request.method == "POST" | {
"source": [
"https://serverfault.com/questions/309515",
"https://serverfault.com",
"https://serverfault.com/users/94146/"
]
} |
309,581 | Why would a heavily disk intensive application run faster on a SAN than on a Physical Disk? I would have expected the Physical disk to be slightly faster but in fact the process ran 100 times faster when it's work drive was set to a partition on the SAN. Our guess is that the SAN is optimised out of the box to be fast whereas the physical disk tuning settings are OS (Solaris) related and have not been touched or the OS patched. During the highest activity the disk I/O was running at 100% and the time to complete a write was over 2 seconds as several processes were writing to the disk at the same time. (FYI the application involved was Informatica PowerCenter) | I'm not at all surprised. SAN arrays typically have a LOT of disks involved. The limiting factor for disk I/O is the speed of the individual disk, and these stack. 6 drives locally in a RAID10 will perform better than 2, and 80 drives on a SAN will perform better than 10 drives locally. There are variables of course, but that's how it's supposed to work. Also, if the SAN has any SSDs involved, things get really zippy. | {
"source": [
"https://serverfault.com/questions/309581",
"https://serverfault.com",
"https://serverfault.com/users/4872/"
]
} |
309,612 | So I found this website that is visitable and does not have a TLD? Anyone got any idea how to do this? | It does have a TLD - in this case the TLD is ac . This is actually a special case. Usually a TLD does not have an A record associated with it: $ host -t A ac.
ac has address 193.223.78.210
$ host -t A com.
com has no A record To get this behaviour, you would have to register your own TLD. | {
"source": [
"https://serverfault.com/questions/309612",
"https://serverfault.com",
"https://serverfault.com/users/68683/"
]
} |
309,622 | This is a Canonical Question about DNS glue records. What exactly (but briefly) is a DNS glue record? Why are they needed and how do they work? | A glue record is a term for a record that's served by a DNS server that's not authoritative for the zone, to avoid a condition of impossible dependencies for a DNS zone. Say I own a DNS zone for example.com . I want to have DNS servers that're hosting the authoritative zone for this domain so that I can actually use it - adding records for the root of the domain, www , mail , etc. So, I put the name servers in the registration to delegate to them - those are always names, so we'll put in ns1.example.com and ns2.example.com . There's the trick. The TLD's servers will delegate to the DNS servers in the whois record - but they're within example.com . They try to find ns1.example.com , ask the .com servers, and get referred back to... ns1.example.com . What glue records do is to allow the TLD's servers to send extra information in their response to the query for the example.com zone - to send the IP address that's configured for the name servers, too. It's not authoritative, but it's a pointer to the authoritative servers, allowing for the loop to be resolved. | {
"source": [
"https://serverfault.com/questions/309622",
"https://serverfault.com",
"https://serverfault.com/users/79496/"
]
} |
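To see a delegation (and any glue) for yourself, ask one of the parent zone's servers directly and look at the ADDITIONAL section of the referral; glue is only present when the name servers sit inside the delegated domain itself. a.gtld-servers.net is one of the .com servers, and example.com stands in for your own domain:
dig +norecurse @a.gtld-servers.net example.com NS
dig +trace example.com     # follows the whole chain from the root, glue included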
309,651 | Is it allowed in DNS to have a CNAME record that points to another CNAME record? The reason we need this is that we have a hostname that we want to be looked up to the IP address of our web server computer. We also have another web server computer stand by that could be activated in case the first one would die. In such a case we would quickly need to point the hostname to the IP address of the stand by web server computer. Unfortunately the hostname resides in a DNS domain where any change would take long time due to manual operation dependent on other sysadmins. But we have another DNS domain where we can perform the changes ourselves quickly. Having CNAME to CNAME chain seems like a possible solution. But is it allowed? Will web browsers understand it? | From RFC 1034 - Domain names - concepts and facilities : Domain names in RRs which point at another name should always point at
the primary name and not the alias. This avoids extra indirections in
accessing information. For example, the address to name RR for the
above host should be: 52.0.0.10.IN-ADDR.ARPA IN PTR C.ISI.EDU rather than pointing at USC-ISIC.ARPA. Of course, by the robustness
principle, domain software should not fail when presented with CNAME
chains or loops; CNAME chains should be followed and CNAME loops
signalled as an error. So yes, it is allowed and properly written software will handle it just OK. CNAME chains aren't however considered good practice and impose an overhead on the infrastructure. | {
"source": [
"https://serverfault.com/questions/309651",
"https://serverfault.com",
"https://serverfault.com/users/90881/"
]
} |
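In zone-file terms the scenario in the question is just two CNAMEs in different zones; all names below are illustrative:
; in the slow-to-change zone
www.example.com.            IN CNAME  www.quickzone.example.net.
; in the zone you control and can edit quickly
www.quickzone.example.net.  IN CNAME  webserver1.example.com.
Resolvers follow the chain until they reach the A record of the active web server, at the cost of one extra lookup; fail over by repointing the second CNAME at the standby machine.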
309,848 | How do I check the Jenkins build status without switching to the browser? If required, I can create a script using the JSON API, but I was wondering if there is already something like this built in. | I couldn't find a built in tool so I made one: #!/usr/bin/python
#
# author: ajs
# license: bsd
# copyright: re2
import json
import sys
import urllib
import urllib2
jenkinsUrl = "https://jenkins.example.com/job/"
if len( sys.argv ) > 1 :
    jobName = sys.argv[1]
    jobNameURL = urllib.quote(jobName)
else :
    sys.exit(1)
try:
    jenkinsStream = urllib2.urlopen( jenkinsUrl + jobNameURL + "/lastBuild/api/json" )
except urllib2.HTTPError, e:
    print "URL Error: " + str(e.code)
    print " (job name [" + jobName + "] probably wrong)"
    sys.exit(2)
try:
    buildStatusJson = json.load( jenkinsStream )
except:
    print "Failed to parse json"
    sys.exit(3)
if buildStatusJson.has_key( "result" ):
    print "[" + jobName + "] build status: " + buildStatusJson["result"]
    if buildStatusJson["result"] != "SUCCESS" :
        exit(4)
else:
    sys.exit(5)
sys.exit(0) | {
"source": [
"https://serverfault.com/questions/309848",
"https://serverfault.com",
"https://serverfault.com/users/89282/"
]
} |
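If all you need is the result string, a shell one-liner against the same JSON endpoint also works; the URL and job name are placeholders, and you may need -u user:apitoken if the instance requires authentication:
curl -s "https://jenkins.example.com/job/myjob/lastBuild/api/json" | grep -o '"result":"[^"]*"'
Note that result is null while a build is still running, in which case the grep prints nothing.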
310,098 | I have a constantly running script that I output to a log file: script.sh >> /var/log/logfile I'd like to add a timestamp before each line that is appended to the log. Like: Sat Sep 10 21:33:06 UTC 2011 The server has booted up. Hmmph. Is there any jujitsu I can use? | You can pipe the script's output through a loop that prefixes the current date and time: ./script.sh | while IFS= read -r line; do printf '%s %s\n' "$(date)" "$line"; done >>/var/log/logfile If you'll be using this a lot, it's easy to make a bash function to handle the loop: adddate() {
while IFS= read -r line; do
printf '%s %s\n' "$(date)" "$line";
done
}
./thisscript.sh | adddate >>/var/log/logfile
./thatscript.sh | adddate >>/var/log/logfile
./theotherscript.sh | adddate >>/var/log/logfile | {
"source": [
"https://serverfault.com/questions/310098",
"https://serverfault.com",
"https://serverfault.com/users/66603/"
]
} |
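If installing a package is an option, the ts utility from moreutils does the same prefixing without a custom loop:
./script.sh | ts >> /var/log/logfile
./script.sh | ts '%Y-%m-%d %H:%M:%S' >> /var/log/logfile   # custom strftime format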
310,300 | I have 2 svn checkouts that someone setup for me. Now I need to check these same files on another computer, but since I didn't check them out initially I don't know the urls to use when running the svn checkout command: svn co WHAT_GOES_HERE? Since these 2 checkouts already exist on one of my servers, is there a way to get the url of the repo from which they were initially checked out from? | You can get the URL of the directory you are in, as well as the Repository Root and other info by running the following command in any of the checked out directories: svn info If you want a command that returns only the URL of the repository, perhaps for use in a script, then you can pass the following parameter: svn info --show-item repos-root-url It is worth noting that --show-item is available in Subversion 1.9+ . In older versions you can use the following snippet the achieve similar result: svn info | grep 'Repository Root' | awk '{print $NF}' | {
"source": [
"https://serverfault.com/questions/310300",
"https://serverfault.com",
"https://serverfault.com/users/94388/"
]
} |
310,530 | I ask this question, because Comodo are telling me that a wildcard certificate for *.example.com will also secure the root domain example.com. So with a single certificate, both my.example.com and example.com are secured without warning from a browser. However, this is not the case with the certificate I've been provided. My sub-domains are secured fine and do not give an error, but the root domain throws up an error in the browser, saying the identity can't be verified. When I compare this certificate to other similar scenarios, I see that in the scenarios that work without error, the Subject Alternative Name (SAN) lists both *.example.com and example.com, whereas the recent certificate from Comodo only lists *.example.com as the Common Name and NOT example.com as the Subject Alternative Name. Can anyone confirm/clarify that the root domain should be listed in SAN details if it is also to be secured correctly? When I read this: http://www.digicert.com/subject-alternative-name.htm It seems that the SAN must list both in order to work as I need it to. What's your experience? Thanks very much. | There's some inconsistency between SSL implementations on how they match wildcards, however you'll need the root as an alternate name for that to work with most clients. For a *.example.com cert:
a.example.com should pass
www.example.com should pass
example.com should not pass
a.b.example.com may pass depending on implementation (but probably not).
Essentially, the standards say that the * should match 1 or more non-dot characters, but some implementations allow a dot. The canonical answer should be in RFC 2818 (HTTP Over TLS) : Matching is performed using the matching rules specified by
[RFC2459]. If more than one identity of a given type is present in
the certificate (e.g., more than one dNSName name, a match in any one
of the set is considered acceptable.) Names may contain the wildcard
character * which is considered to match any single domain name
component or component fragment. E.g., *.a.com matches foo.a.com but
not bar.foo.a.com. f*.com matches foo.com but not bar.com. RFC 2459 says: A "*" wildcard character MAY be used as the left-most name
component in the certificate. For example, *.example.com would
match a.example.com, foo.example.com, etc. but would not match
example.com. If you need a cert to work for example.com, www.example.com and foo.example.com, you need a certificate with subjectAltNames so that you have "example.com" and "*.example.com" (or example.com and all the other names you might need to match). | {
"source": [
"https://serverfault.com/questions/310530",
"https://serverfault.com",
"https://serverfault.com/users/94459/"
]
} |
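To check which names a certificate actually covers, inspect its Subject Alternative Name extension; the file name and hostname are examples:
openssl x509 -in certificate.crt -noout -text | grep -A1 'Subject Alternative Name'
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
The second form pulls the certificate straight from the live server, which is handy for confirming what clients really see.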
310,640 | SNMPd on my CentOS systems is sending log messages to syslog every time it receives a query from my monitoring tools. Is there a way to lower the verbosity of SNMPd? It adds a lot of clutter to the logs. Sep 12 13:05:40 myhost snmpd[7073]: Received SNMP packet(s) from UDP: [ipaddr]:42874
Sep 12 13:05:40 myhost snmpd[7073]: Connection from UDP: [ipaddr]:49272 Thanks! | Check the command that starts snmpd (possibly somewhere /etc/rc.d/ - in Ubuntu it's /etc/defaults/snmpd ) for the logging options: SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -g root 0.0.0.0' Or find it in the ps aux | grep snmpd output. The man page gives the logging options: -Ls FACILITY Log messages via syslog, using the specified facility ('d' for LOG_DAEMON, 'u' for LOG_USER, or '0'-'7' for LOG_LOCAL0 through LOG_LOCAL7).
There are also "upper case" versions of each of these options, which allow the corresponding logging mechanism to be restricted to certain priorities of message. For -LF and -LS the priority specification comes before the file or facility token. The priorities recognised are: 0 or ! for LOG_EMERG,
1 or a for LOG_ALERT,
2 or c for LOG_CRIT,
3 or e for LOG_ERR,
4 or w for LOG_WARNING,
5 or n for LOG_NOTICE,
6 or i for LOG_INFO, and
7 or d for LOG_DEBUG. The default is fairly verbose (only 2 levels below debug): Normal output is (or will be!) logged at a priority level of LOG_NOTICE If you're logging to syslog via LOG_DAEMON (-Lsd), you could reduce it to e.g. LOG_WARNING with -LSwd / -LS4d , or LOG_ERR with -LSed / -LS3d . (Edited to put the options in the right order.) | {
"source": [
"https://serverfault.com/questions/310640",
"https://serverfault.com",
"https://serverfault.com/users/87213/"
]
} |
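On CentOS the daemon options normally live in /etc/sysconfig/snmpd (or /etc/sysconfig/snmpd.options on older releases) rather than in the init script itself; something like the following raises the threshold to LOG_WARNING, but treat the exact path and the other flags as assumptions to verify against your own defaults:
OPTIONS="-LS4d -Lf /dev/null -p /var/run/snmpd.pid"
Then restart the daemon with: service snmpd restart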
311,565 | I have a local PostgreSQL database for development purposes that I don't want to start up every time Windows does - how do I stop it from starting? | If it is running as a Windows service: Start -> Run -> (then type in:) services.msc.
When you see the PostgreSQL service, set its startup type to Manual instead of Automatic. If you do need it again, just fire up services.msc again and click the Start icon/button once you have reselected the PostgreSQL service. | {
"source": [
"https://serverfault.com/questions/311565",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
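The same change can be made from the command line; the service name varies with the PostgreSQL version, so list it first (the names shown are examples):
sc query state= all | findstr /i postgres
sc config "postgresql-x64-9.0" start= demand
net stop "postgresql-x64-9.0"
The first command finds the exact service name, the second sets it to Manual, and the third stops it for the current session.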
311,593 | I'm connecting to a Linux machine through SSH, and I'm trying to run a heavy bash script that makes filesystem operations. It's expected to keep running for hours, but I cannot leave the SSH session open because of internet connections issues I have. I doubt that running the script with the background operator, the ampersand ( & ), will do the trick, because I tried it and later found that process was not completed. How can I logout and keep the process running? | The best method is to start the process in a terminal multiplexer. Alternatively you can make the process not receive the HUP signal. A terminal multiplexer provides "virtual" terminals which run independent from the "real" terminal (actually all terminals today are "virtual" but that is another topic for another day). The virtual terminal will keep running even if your real terminal is closed with your ssh session. All processes started from the virtual terminal will keep running with that virtual terminal. When you reconnect to the server you can reconnect to the virtual terminal and everything will be as if nothing happened, other than the time which passed. Two popular terminal multiplexers are screen and tmux . Screen has a steep learning curve. Here is a good tutorial with diagrams explaining the concept: http://www.ibm.com/developerworks/aix/library/au-gnu_screen/ The HUP signal (or SIGHUP) is sent by the terminal to all its child processes when the terminal is closed. The common action upon receiving SIGHUP is to terminate. Thus when your ssh session gets disconnected all your processes will terminate. To avoid this you can make your processes not receive SIGHUP. Two easy methods to do so are nohup and disown . For more information about how nohup and disown works read this question and answer: https://unix.stackexchange.com/questions/3886/difference-between-nohup-disown-and Note: although the processes will keep running you can no longer interact with them because they are no longer attached to any terminal. This method is mainly useful for long running batch processes which, once started, no longer need any user input. | {
"source": [
"https://serverfault.com/questions/311593",
"https://serverfault.com",
"https://serverfault.com/users/67954/"
]
} |
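Concretely, either of these patterns covers the use case in the question; the script and session names are placeholders:
nohup ./heavy-script.sh > heavy.log 2>&1 &   # fire and forget, immune to hangups
tmux new -s longjob                          # start a session and run the script inside it
tmux attach -t longjob                       # reattach after logging back in (detach with Ctrl-b d)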