source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
329,901 | Suppose I have four computers, Laptop, Server1, Server2, Kerberos server: I log in using PuTTY or SSH from L to S1, giving my username / password From S1 I then SSH to S2. No password is needed as Kerberos authenticates me Describe all the important SSH and KRB5 protocol exchanges: "L sends username to S1", "K sends ... to S1" etc. (This question is intended to be community-edited; please improve it for the non-expert reader .) | First login: L sends username and SSH authentication request to S1 S1 returns available SSH authentication mechanisms, with "password" as one of them L picks "password" and sends the plain password to S1 S1 gives username and password to PAM stack. On S1, PAM (usually pam_krb5 or pam_sss ) requests a TGT (ticket-granting ticket) from the Kerberos KDC. S1 obtains a TGT. Old style (without preauth): S1 sends an AS-REQ and receives a AS-REP containing the TGT. New style (with preauth): S1 uses your password to encrypt the current time stamp, and attaches it to the AS-REQ. The server decrypts the timestamp and verifies that it is within the allowed time skew; if decryption fails, the password is immediately rejected. Otherwise, a TGT is returned in the AS-REP. S1 attempts to decrypt the TGT using a key generated from your password. If the decryption succeeds, the password is accepted as correct. The TGT is stored to a newly created credential cache. (You can inspect the $KRB5CCNAME environment variable to find the ccache, or use klist to list its contents.) S1 uses PAM to perform authorization checks (configuration-dependent) and open the session. If pam_krb5 is called in authorization stage, it checks whether ~/.k5login exists. If it does, it must list the client Kerberos principal. Otherwise, the only allowed principal is username @ DEFAULT-REALM . Second login: S1 sends username and SSH authn request to S2 S2 returns available auth mechs, one of them being "gssapi-with-mic" 1 S1 requests a ticket for host/ s2.example.com @ EXAMPLE.COM , by sending a TGS-REQ with the TGT to the KDC, and receiving a TGS-REP with the service ticket from it. S1 generates an "AP-REQ" (authentication request) and sends it to S2. S2 attempts to decrypt the request. If it succeeds, authentication is done. (PAM is not used for authentication.) Other protocols such as LDAP may choose to encrypt further data transmission with a "session key" that was included with the request; however, SSH has already negotiated its own encryption layer. If authentication succeeds, S2 uses PAM to perform authorization checks and open the session, same as S1. If credential forwarding was enabled and the TGT has the "forwardable" flag, then S1 requests a copy of the user's TGT (with the "forwarded" flag set) and sends it to S2, where it gets stored to a new ccache. This allows recursive Kerberos-authenticated logins. Note that you can obtain TGTs locally as well. On Linux, you can do this using kinit , then connect using ssh -K . For Windows, if you are logged in to a Windows AD domain, Windows does that for you; otherwise, MIT Kerberos can be used. PuTTY 0.61 supports using both Windows (SSPI) and MIT (GSSAPI), although you must enable forwarding (delegation) manually. 1 gssapi-keyex is also possible but was not accepted into official OpenSSH. | {
"source": [
"https://serverfault.com/questions/329901",
"https://serverfault.com",
"https://serverfault.com/users/55866/"
]
} |
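A minimal sketch of the locally-obtained-TGT workflow mentioned at the end of the answer above (the principal and hostname are placeholders):

kinit alice@EXAMPLE.COM    # obtain a TGT; prompts for the Kerberos password
klist                      # inspect the credential cache pointed to by $KRB5CCNAME
ssh -K s1.example.com      # GSSAPI authentication, with forwarding (delegation) of the TGT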
329,906 | I need some help redirecting to a mobile subdomain.
Currently both m.domain.com and domain.com go to /var/www/html/mobile.
Obviously I need domain.com to point to /var/www/html and m.domain.com to /var/www/html/mobile. I have disabled ServerName and my aliases are correct. Currently my virtual hosts are: <VirtualHost *:80>
ServerName DOMAIN.com
DocumentRoot "/var/www/html"
</VirtualHost>
#VirtualHost to redirect m.DOMAIN to mobile directory
<VirtualHost m.DOMAIN.com:80>
ServerName m.DOMAIN.com
DocumentRoot "/var/www/html/mobile"
</VirtualHost> Could someone please point me in the right direction? | First login: L sends username and SSH authentication request to S1 S1 returns available SSH authentication mechanisms, with "password" as one of them L picks "password" and sends the plain password to S1 S1 gives username and password to PAM stack. On S1, PAM (usually pam_krb5 or pam_sss ) requests a TGT (ticket-granting ticket) from the Kerberos KDC. S1 obtains a TGT. Old style (without preauth): S1 sends an AS-REQ and receives a AS-REP containing the TGT. New style (with preauth): S1 uses your password to encrypt the current time stamp, and attaches it to the AS-REQ. The server decrypts the timestamp and verifies that it is within the allowed time skew; if decryption fails, the password is immediately rejected. Otherwise, a TGT is returned in the AS-REP. S1 attempts to decrypt the TGT using a key generated from your password. If the decryption succeeds, the password is accepted as correct. The TGT is stored to a newly created credential cache. (You can inspect the $KRB5CCNAME environment variable to find the ccache, or use klist to list its contents.) S1 uses PAM to perform authorization checks (configuration-dependent) and open the session. If pam_krb5 is called in authorization stage, it checks whether ~/.k5login exists. If it does, it must list the client Kerberos principal. Otherwise, the only allowed principal is username @ DEFAULT-REALM . Second login: S1 sends username and SSH authn request to S2 S2 returns available auth mechs, one of them being "gssapi-with-mic" 1 S1 requests a ticket for host/ s2.example.com @ EXAMPLE.COM , by sending a TGS-REQ with the TGT to the KDC, and receiving a TGS-REP with the service ticket from it. S1 generates an "AP-REQ" (authentication request) and sends it to S2. S2 attempts to decrypt the request. If it succeeds, authentication is done. (PAM is not used for authentication.) Other protocols such as LDAP may choose to encrypt further data transmission with a "session key" that was included with the request; however, SSH has already negotiated its own encryption layer. If authentication succeeds, S2 uses PAM to perform authorization checks and open the session, same as S1. If credential forwarding was enabled and the TGT has the "forwardable" flag, then S1 requests a copy of the user's TGT (with the "forwarded" flag set) and sends it to S2, where it gets stored to a new ccache. This allows recursive Kerberos-authenticated logins. Note that you can obtain TGTs locally as well. On Linux, you can do this using kinit , then connect using ssh -K . For Windows, if you are logged in to a Windows AD domain, Windows does that for you; otherwise, MIT Kerberos can be used. PuTTY 0.61 supports using both Windows (SSPI) and MIT (GSSAPI), although you must enable forwarding (delegation) manually. 1 gssapi-keyex is also possible but was not accepted into official OpenSSH. | {
"source": [
"https://serverfault.com/questions/329906",
"https://serverfault.com",
"https://serverfault.com/users/100616/"
]
} |
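A minimal name-based virtual host sketch for the question above (assumes Apache 2.2, where NameVirtualHost is required; on 2.4 that directive can be omitted):

NameVirtualHost *:80
<VirtualHost *:80>
    ServerName domain.com
    DocumentRoot "/var/www/html"
</VirtualHost>
<VirtualHost *:80>
    ServerName m.domain.com
    DocumentRoot "/var/www/html/mobile"
</VirtualHost>

Using *:80 in both <VirtualHost> openings (rather than a hostname, which Apache resolves to an IP) lets Apache pick the vhost by ServerName.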
329,913 | I have read on several blogs now that one should remove passwords from SSL certificates in order to avoid password prompts during Apache restarts. Is this true, and does this pose any security risks? | Yeah, it will stop the prompts being sent to the terminal when starting a web server. And yes, it does pose a security risk, because where before the certificate was encrypted it is now in plain text. This means it might be possible to steal a completely working certificate from the machine. Whether this poses a significant security risk to you depends on what the repercussions would be if it happened to you and what you gain from doing it this way. If it's more important to you that services should restart gracefully even if unattended than the security of the SSL system overall, then it's a straightforward answer. Personally, I find keeping decrypted copies of SSL certificates overall has more pros than cons for my typical workload; here's why: An attacker would still have a copy of the certificate even if it was encrypted, so it would be your duty to revoke it anyway. These days it's far easier for an attacker to obtain a valid certificate for your site via social engineering than to steal a working copy of one. Certificates naturally expire, making their attack surface limited. Host-based security systems such as traditional permissions and SELinux offer a robust means of protecting certificates on the platform. A certificate isn't the be-all and end-all of a secure system. There are many other aspects to consider, such as the data you store, the media you store it on and the value and/or personal nature of the data. Things that might make me encrypt: You use the certificate to perform mutual authentication. It's a wildcard certificate or a certificate which hosts multiple domains (the losses double, triple, or multiply by however many hosts it can be used for). The certificate is multi-purpose in some other fashion. The certificate's purpose is to ensure the integrity of high-value data (medical records, financial transactions and the like). The other end expects a high degree of trust and/or is reliant on the integrity of your system to make operational decisions. Ultimately, don't rely on others to make security decisions for you. You need to weigh the risks and determine what is best for you and your institution using as much information as possible. | {
"source": [
"https://serverfault.com/questions/329913",
"https://serverfault.com",
"https://serverfault.com/users/51792/"
]
} |
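A commonly used sketch for stripping the passphrase from a private key (file names are placeholders); keep the unencrypted copy readable only by root:

openssl rsa -in server.key -out server.key.nopass    # prompts once for the passphrase
chmod 600 server.key.nopass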
330,069 | The previous SF questions I've seen have lead to answers that produce MD5 hashed password. Does anyone have a suggestion on to produce an SHA-512 hashed password? I'd prefer a one liner instead of a script but, if a script is the only solution, that's fine as well. Update Replacing previous py2 versions with this one: python3 -c "import crypt;print(crypt.crypt(input('clear-text pw: '), crypt.mksalt(crypt.METHOD_SHA512)))" | Edit: Please note this answer is 10+ years old. Here's a one liner: python -c 'import crypt; print crypt.crypt("test", "$6$random_salt")' Python 3.3+ includes mksalt in crypt , which makes it much easier (and more secure) to use: python3 -c 'import crypt; print(crypt.crypt("test", crypt.mksalt(crypt.METHOD_SHA512)))' If you don't provide an argument to crypt.mksalt (it could accept crypt.METHOD_CRYPT , ...MD5 , SHA256 , and SHA512 ), it will use the strongest available. The ID of the hash (number after the first $ ) is related to the method used: 1 -> MD5 2a -> Blowfish (not in mainline glibc; added in some Linux distributions) 5 -> SHA-256 (since glibc 2.7) 6 -> SHA-512 (since glibc 2.7) I'd recommend you look up what salts are and such and as per smallclamgers comment the difference between encryption and hashing. Update 1: The string produced is suitable for shadow and kickstart scripts. Update 2: Warning . If you are using a Mac, see the comment about using this in python on a mac where it doesn't seem to work as expected. On macOS you should not use the versions above, because Python uses the system's version of crypt() which does not behave the same and uses insecure DES encryption . You can use this platform independent one liner (requires passlib – install with pip3 install passlib ): python3 -c 'import passlib.hash; print(passlib.hash.sha512_crypt.hash("test"))' | {
"source": [
"https://serverfault.com/questions/330069",
"https://serverfault.com",
"https://serverfault.com/users/53736/"
]
} |
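As a hedged illustration of the passlib approach, hashing and then verifying the result in one line (requires passlib):

python3 -c 'import passlib.hash as h; d = h.sha512_crypt.hash("test"); print(d); print(h.sha512_crypt.verify("test", d))'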
330,127 | How can you extract only the target dir and not the complete dir tree? compress tar cf /var/www_bak/site.tar /var/www/site extract tar xf /var/www/site.tar -C /tmp This will produce: /tmp/var/www/site How is it possible to avoid the whole dir tree to be created when the file is extracted? What I want it to extract to: /tmp/site | You want to use the --strip-components=NUMBER option of tar : --strip-components=NUMBER
strip NUMBER leading components from file names on extraction Your command would be: tar xf /var/www/site.tar --strip-components=2 -C /tmp | {
"source": [
"https://serverfault.com/questions/330127",
"https://serverfault.com",
"https://serverfault.com/users/82522/"
]
} |
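To check how many leading components an archive actually has before choosing --strip-components, listing its contents is enough (the path follows the question above):

tar tf /var/www_bak/site.tar | head -3
# entries look like var/www/site/... -> two leading components, so --strip-components=2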
330,503 | Is there any chance to skip the known_hosts check without clearing known_hosts or disabling it in ssh.conf? I have access to neither known_hosts nor ssh.conf yet.
I can't find anything suitable in man. | scp is supposed to take the same command line options as ssh , try: -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null Maybe add -q to disable the warnings as well. | {
"source": [
"https://serverfault.com/questions/330503",
"https://serverfault.com",
"https://serverfault.com/users/98811/"
]
} |
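A complete invocation along those lines (the host, user and paths are placeholders):

scp -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null backup.tar.gz user@203.0.113.10:/tmp/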
330,532 | I'm running a Linux instance on EC2 (I have MongoDB and node.js installed) and I'm getting this error: Cannot write: No space left on device I think I've tracked it down to this file, here is the df output Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 1032088 1032088 0 100% / The problem is, I don't know what this file is and I also don't know if this file is even the problem. So my question is: How do I fix the "No space left on device" error? | That file, / is your root directory. If it's the only filesystem you see in df , then it's everything. You have a 1GB filesystem and it's 100% full. You can start to figure out how it's used like this: sudo du -x / | sort -n | tail -40 You can then replace / with the paths that are taking up the most space. (They'll be at the end, thanks to the sort . The command may take awhile.) | {
"source": [
"https://serverfault.com/questions/330532",
"https://serverfault.com",
"https://serverfault.com/users/100801/"
]
} |
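One way to walk the tree one level at a time, assuming GNU coreutils for sort -h; repeat the command on whichever directory turns out to be the largest:

sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -15
sudo du -xh --max-depth=1 /var 2>/dev/null | sort -h | tail -15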
330,776 | What is the proper way to clear the recycle bin for all users in Windows Server 2008 R2? | As far as I can tell, these is no "official" Microsoft supported way of doing this. There are two options. One involves deleting c:\$Recycle.Bin and the other is scripting cleanmgr.exe to run at each user logon. The closest thing to "official" support for deleting c:\$Recycle.bin is from this MS KB , which references XP and Vista, but implies the expected behavior. Immediate deletion If you want this to happen immediately, it seems that you can just run rd /s c:\$Recycle.Bin and Windows should re-create the necessary folders the next time that they are needed. I just tested this quickly and it appears to work, but -obviously- proceed with caution. Recurring logon-scriptable deletion You can do this with the Disk Cleanup tool (cleanmgr.exe). Unfortunately, Microsoft decided to bundle this with the "Desktop Experience" set of features, meaning you'll have to install a bunch of other crap and reboot. The alternative is to grab the following two files and move them to the specified locations per Technet : C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe
C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui Cleanmgr.exe should go in %systemroot%\System32. Cleanmgr.exe.mui should go in %systemroot%\System32\en-US. Running cleanmgr alone won't let you clear everyone's recycle bin, but you can use /sageset and /sagerun to make a logon script that runs for all users via GPO that will clear their recycle bin on the next logon, as described here . It's not the cleanest thing, but it will work. The linked article is for XP, but the syntax is unchanged as of Server 2008 R2. | {
"source": [
"https://serverfault.com/questions/330776",
"https://serverfault.com",
"https://serverfault.com/users/2561/"
]
} |
330,843 | We're planning to host our website for the first time for ourselves. We have currently have a linode of 8 gigs and the memory is going up to 90% most of the time. So I want to move my website to my own server with huge RAM. So this will be first time to manage any physical hardware of a server. So I came across IBM's BladeCenter, found them interesting. So can I just buy the blade and run it? Or do I have to buy the chassis for sure? Also, do I need to buy an UPS? So how hard is it to setup? How about the hard drives? Can I setup them easily? Please advice. | Unless you have a very good reason - density, etc - I would advise against going with blades. A good 2U server from HP or Dell would provide all you need in the way of RAM. I personally prefer HP DL380's - I have several with 72+ GB of RAM. You really need to get a better grip on the fundamentals before you start worrying about what kind of hardware to purchase. You need to have reliable power, cooling, security (locked rack / server room), network access, ram and disk specifications, etc before you start looking at the kind of servers to buy. EDIT - there is no such thing as an all in one guide to servers. I'll provide you with some preliminary stuff to get started. There is a little bit of extra information in here just in case anyone comes across this at a later point in time. This Tech Republic article does well describing the physical requirements you should be thinking about. Your existing bandwidth requirements should be pretty easy to determine. From the tone of your question you have a hosted solution somewhere else, either a VPS or some kind of hosting provider. You should be able to locate your existing utilization data. Expect to provide the same amount of bandwidth or higher for your in house server. The same can be said regarding the amount of disk space you require. You definitely need to have a UPS in place for your server. Without power conditioning in place you are asking for disaster. What happens if the power flickers for one second Friday night? Your website will be out of commission until someone notices Monday morning. Regarding your disks, you need to have RAID in your server. I suggest either RAID 5, RAID 10 or RAID 6, See here and here . Most any modern server provides this capacity. Consult the server manual for how to configure the RAID as it varies widely by manufacturer. There are a couple more advanced points associated with running your own server that should be considered as well. Along with running any server the burden of maintaining backups becomes yours as well. It sounds like this is something that you haven't considered. In this situation you might go with a tape drive attached directly to your server.. In any case it is something you should be thinking about. Any internet facing server creates some risk for the network it sits on. Use a firewall to protect it from most internet traffic. To minimize risk to The use of a DMZ is highly recommended. | {
"source": [
"https://serverfault.com/questions/330843",
"https://serverfault.com",
"https://serverfault.com/users/100907/"
]
} |
331,024 | I know I can list the triggers with \dft . But how can I see one concrete trigger? I want to know details like on which events the trigger is executed, which function is executed and so on. | OK, I found out about it myself. The command \dft doesn't show the triggers itself (as I thought), it shows all trigger-functions (return-type trigger). To see the trigger you can make \dS <tablename> , it shows not only columns of this table, but also all triggers defined on this table. To show the source of the trigger-function (or any function) use \df+ <functionname> . | {
"source": [
"https://serverfault.com/questions/331024",
"https://serverfault.com",
"https://serverfault.com/users/18957/"
]
} |
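An alternative on reasonably recent PostgreSQL versions is to ask the catalog directly for the full trigger definitions; the table name mytable below is a placeholder:

SELECT tgname, pg_get_triggerdef(oid) FROM pg_trigger WHERE tgrelid = 'mytable'::regclass AND NOT tgisinternal;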
331,027 | I can't seem to figure out how to append to the default path in a supervisord program config. I can reset the path: environment=PATH="/home/site/environments/master/bin" But when I try: environment=PATH="/home/site/environments/master/bin:$PATH" I see that supervisord doesn't evaluate $PATH . Google wasn't a big help on this for some reason, I cannot believe I'm the first person to need this. Supervisord must have support for this, any idea what it is? | This feature has been added to Supervisor back in 2014 environment=PATH="/home/site/environments/master/bin:%(ENV_PATH)s" see https://github.com/Supervisor/supervisor/blob/95ca0bb6aec582885453899872c60b4174ccbd58/supervisor/skel/sample.conf#L7 See also https://stackoverflow.com/questions/12900402/supervisor-and-environment-variables | {
"source": [
"https://serverfault.com/questions/331027",
"https://serverfault.com",
"https://serverfault.com/users/13627/"
]
} |
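A minimal sketch of a full program section using that form; the program name, command and paths are hypothetical:

[program:myapp]
command=/home/site/environments/master/bin/gunicorn app:app
directory=/home/site/app
environment=PATH="/home/site/environments/master/bin:%(ENV_PATH)s"
autostart=true
autorestart=true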
331,187 | I know that IIS 7+ uses XML config files instead of the metabase. I also know that if I edit a web.config file for a given site, IIS automagically detects the changes and implements any corresponding config changes. However, does this also apply to the server-level applicationHost.config settings file? (usually located in C:\windows\system32\inetsrv\config ) Specifically, is it safe to edit this file instead of using IIS Manager or the appcmd command line utility? I couldn't find anything in the documentation that said it was okay or not okay to do this. I'm curious because I need to change the bindings for numerous sites from one IP to another. It would be much faster to do a global search and replace the IP address in the config file instead of manually editing a few dozen sites in the GUI. | Also check this answer from here: Cannot manually edit applicationhost.config The answer is simple, if not that obvious: win2008 is 64bit, notepad++ is 32bit. When you navigate to Windows\System32\inetsrv\config using explorer you are using a 64bit program to find the file. When you open the file using using notepad++ you are trying to open it using a 32bit program. The confusion occurs because, rather than telling you that this is what you are doing, windows allows you to open the file but when you save it the file's path is transparently mapped to Windows\SysWOW64\inetsrv\Config. So in practice what happens is you open applicationhost.config using notepad++, make a change, save the file; but rather than overwriting the original you are saving a 32bit copy of it in Windows\SysWOW64\inetsrv\Config, therefore you are not making changes to the version that is actually used by IIS. If you navigate to the Windows\SysWOW64\inetsrv\Config you will find the file you just saved. How to get around this? Simple - use a 64bit text editor, such as the normal notepad that ships with windows. | {
"source": [
"https://serverfault.com/questions/331187",
"https://serverfault.com",
"https://serverfault.com/users/101025/"
]
} |
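For the bulk binding change described in the question, appcmd can script the edit instead of hand-editing applicationHost.config; the site name and addresses below are placeholders, and note that /bindings replaces the site's entire binding list:

%windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /bindings:http/192.0.2.20:80:www.example.com

Repeat (or loop over) this per site, then verify the result with appcmd list sites.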
331,256 | I'm looking for an overly simplified answer to the following question. I'm trying to build a foundational understanding of how Nginx works alongside something like Gunicorn. Do I need both Nginx and something like Gunicorn to deploy Django apps on Nginx? If so, what actually handles the HTTP requests? Ps. I don't want to use Apache and mod_wsgi! | Overly simplified: You need something that executes Python but Python isn't the best at handling all types of requests. [disclaimer: I'm a Gunicorn developer] Less simplified: Regardless of what app server you use (Gunicorn, mod_wsgi, mod_uwsgi, cherrypy) any sort of non-trivial deployment will have something upstream that will handle the requests that your Django app should not be handling. Trivial examples of such requests are serving static assets (images/css/js). This results in two first tiers of the classic "three tier architecture". Ie, the webserver (Nginx in your case) will handle many requests for images and static resources. Requests that need to be dynamically generated will then be passed on to the application server (Gunicorn in your example). (As an aside, the third of the three tiers is the database) Historically speaking, each of these tiers would be hosted on separate machines (and there would most likely be multiple machines in the first two tiers, ie: 5 web servers dispatch requests to two app servers which in turn query a single database). In the modern era we now have applications of all shapes and sizes. Not every weekend project or small business site actually needs the horsepower of multiple machines and will run quite happily on a single box. This has spawned new entries into the array of hosting solutions. Some solutions will marry the app server to the web server (Apache httpd + mod_wsgi, Nginx + mod_uwsgi, etc). And its not at all uncommon to host the database on the same machine as one of these web/app server combinations. Now in the case of Gunicorn, we made a specific decision (copying from Ruby's Unicorn) to keep things separate from Nginx while relying on Nginx's proxying behavior. Specifically, if we can assume that Gunicorn will never read connections directly from the internet, then we don't have to worry about clients that are slow. This means that the processing model for Gunicorn is embarrassingly simple. The separation also allows Gunicorn to be written in pure Python which minimizes the cost of development while not significantly impacting performance. It also allows users the ability to use other proxies (assuming they buffer correctly). As to your second question about what actually handles the HTTP request, the simple answer is Gunicorn. The complete answer is both Nginx and Gunicorn handle the request. Basically, Nginx will receive the request and if it's a dynamic request (generally based on URL patterns) then it will give that request to Gunicorn, which will process it, and then return a response to Nginx which then forwards the response back to the original client. So in closing, yes. You need both Nginx and Gunicorn (or something similar) for a proper Django deployment. If you're specifically looking to host Django with Nginx, then I would investigate Gunicorn, mod_uwsgi, and maybe CherryPy as candidates for the Django side of things. | {
"source": [
"https://serverfault.com/questions/331256",
"https://serverfault.com",
"https://serverfault.com/users/101046/"
]
} |
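A minimal nginx-to-Gunicorn proxy sketch; the port, server_name and static path are assumptions:

server {
    listen 80;
    server_name example.com;

    location /static/ {
        alias /srv/myapp/static/;
    }
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}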
331,499 | Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This ment doing much more ourselves--things like networking, storage and monitoring. As part the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassises, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I , Part II and Part III . We also setup a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault . It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$) , our customers are happy (storage costs are down) , I'm happy (fun, fun, fun) . Until yesterday. Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate, there was not traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over, that went well. Within 20 minuts everything was confirmed to be back up and running perfectly. Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, backup an and running. In /var/syslog I found this scary looking entry: Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline Completed: read failure 90% 6576 3421766910
Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline Completed: read failure 90% 6087 3421766910
Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline Completed: read failure 10% 5901 656821791
Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline Completed: read failure 90% 5818 651637856
Nov 15 06:49:45 umbilo smartd[2827]: So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART Read Erros are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that: Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times. Somehow this fault condition on the disk is locking up the entire array Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The Question(s) How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something? | I hate to say "don't use SATA" in critical production environments, but I've seen this situation quite often. SATA drives are not generally meant for the duty cycle you describe, although you did spec drives specifically rated for 24x7 operation in your setup. My experience has been that SATA drives can fail in unpredictable ways, often times affecting the entire storage array, even when using RAID 1+0, as you've done. Sometimes the drives fail in a manner that can stall the entire bus. One thing to note is whether you're using SAS expanders in your setup. That can make a difference in how the remaining disks are impacted by a drive failure. But it may have made more sense to go with midline/nearline (7200 RPM) SAS drives versus SATA. There's a small price premium over SATA, but the drives will operate/fail more predictably. The error-correction and reporting in the SAS interface/protocol is more robust than the SATA set. So even with drives whose mechanics are the same , the SAS protocol difference may have prevented the pain you experienced during your drive failure. | {
"source": [
"https://serverfault.com/questions/331499",
"https://serverfault.com",
"https://serverfault.com/users/251/"
]
} |
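For 3ware controllers like the one above, smartmontools can query each physical disk behind the card; a hedged example for the disk on port 7 (the device node and port number are assumptions for this setup):

smartctl -a -d 3ware,7 /dev/twa0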
331,531 | I have a set of Nginx servers behind an Amazon ELB load balancer. I am using set_real_ip (from the HttpRealIpModule ) so that I can access the originating client IP address on these servers (for passing through to php-fpm and for use in the HttpGeoIPModule ). It seems that set_real_ip_from in the nginx configuration can only accept an IP address. However, with regard to ELB machines Amazon say: Note: Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an "A" record with any specific IP address. If you want to use a friendly DNS name for your LoadBalancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone. For more information, see the Using Domain Names With Elastic Load Balancing But if I need to input an IP address I can't use a CNAME (either amazon's or my own). Is there a solution to this problem? | If you can guarantee that all requests will be coming from ELB (I'm not familiar with it), you could try: real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0; That should tell nginx to trust an X-Forwarded-For header from anyone. The downside is that if anyone directly accesses your server, they would be able to spoof an X-Forwarded-For header and nginx would use the wrong client ip address. | {
"source": [
"https://serverfault.com/questions/331531",
"https://serverfault.com",
"https://serverfault.com/users/43759/"
]
} |
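If direct access to the instances is already restricted (for example, a security group that only allows the ELB in), the trusted range can be narrowed from 0.0.0.0/0 to the VPC or subnet the ELB lives in; the CIDR below is an assumption:

real_ip_header X-Forwarded-For;
set_real_ip_from 10.0.0.0/8;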
331,587 | I'm changing the way that our DHCP/DNS stuff works at work. Currently we've got 3 DNS servers, and a DHCP box. All of them are VMs. There's a circular dependency where stuff booting requires NFS, which requires DNS. So when we reboot stuff, things might come back subtly broken until the DNS is up, and we restart some services. What I want to do is have a few low power servers, probably dual core Atoms or similar, running from SSDs, so that they boot damn fast. I want to make the whole thing boot as near to instantaneously as possible. Ideally I'd like to use Ubuntu 11.10, or Debian 6 as the OS. I'm not interested in Gentoo or compiling my own kernel. This needs to be reasonably supportable by myself. Other than SSD drives, what other optimization steps can I take to improve boot speed? | Isn't this a situation where you should engineer around the circular dependencies? Set power-on delays in the server BIOS. You have multiple DNS servers, so that's a plus. DNS caching? Would this be as simple as using IP addresses or host files for your NFS or storage network? You didn't mention the particular virtualization technology, but it's possible to set VM boot priority in VMWare, for instance... Is this across multiple host servers? Otherwise, SSD-based boot drives can help. Use a distro with Upstart boot processes. Trim down daemons. | {
"source": [
"https://serverfault.com/questions/331587",
"https://serverfault.com",
"https://serverfault.com/users/16732/"
]
} |
331,591 | All our SVN repositories are hosted on a dedicated machine on which all the developers have access. Every now and then we need to checkout a repository on a machine we don't own or operate ourselves. Currently we all use our own system (SSH) account for this, but instead I would like to use some generic 'checkoutsvn' user that can be used for this. This user is only used for checking out from a repository, but should not be allowed to log in to the system (no shell access). I tried to do this by setting the default shell of that account to /sbin/nologin but then SVN fails, as apparently svn+ssh requires shell access. How do you do this? Is there a good solution for this? | Isn't this a situation where you should engineer around the circular dependencies? Set power-on delays in the server BIOS. You have multiple DNS servers, so that's a plus. DNS caching? Would this be as simple as using IP addresses or host files for your NFS or storage network? You didn't mention the particular virtualization technology, but it's possible to set VM boot priority in VMWare, for instance... Is this across multiple host servers? Otherwise, SSD-based boot drives can help. Use a distro with Upstart boot processes. Trim down daemons. | {
"source": [
"https://serverfault.com/questions/331591",
"https://serverfault.com",
"https://serverfault.com/users/99559/"
]
} |
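One approach commonly used for the SVN question above (not taken from the answer shown): give the shared account a normal shell but force every SSH connection to run svnserve in tunnel mode via an authorized_keys forced command; the repository root and key are placeholders:

# ~checkoutsvn/.ssh/authorized_keys
command="svnserve -t -r /srv/svn",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... developer@laptop

Clients then check out with svn+ssh://checkoutsvn@host/repo, and the account cannot get an interactive shell.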
331,936 | I've noticed that the "preferred" method of setting the system hostname is fundamentally different between Red Hat/CentOS and Debian/Ubuntu systems. CentOS documentation and the RHEL deployment guide say the hostname should be the FQDN : HOSTNAME=<value> , where <value> should be the Fully Qualified Domain
Name (FQDN), such as hostname.example.com , but can be whatever
hostname is necessary. The RHEL install guide is slightly more ambiguous: Setup prompts you to supply a host name for this computer, either as a fully-qualified domain name (FQDN) in the format hostname.domainname or as a short host name in the format hostname . The Debian reference says the hostname should not use the FQDN : 3.5.5. The hostname The kernel maintains the system hostname . The init script in runlevel
S which is symlinked to " /etc/init.d/hostname.sh " sets the system
hostname at boot time (using the hostname command) to the name stored
in " /etc/hostname ". This file should contain only the system hostname,
not a fully qualified domain name. I haven't seen any specific recommendations from IBM about which to use, but some software seems to have a preference. My questions: In a heterogeneous environment, is it better to use the vendor recommendation, or choose one and be consistent across all hosts? What software have you encountered which is sensitive to whether the hostname is set to the FQDN or short name? | I would choose a consistent approach across the entire environment. Both solutions work fine and will remain compatible with most applications. There is a difference in manageability, though. I go with the short name as the HOSTNAME setting, and set the FQDN as the first column in /etc/hosts for the server's IP, followed by the short name. I have not encountered many software packages that enforce or display a preference between the two. I find the short name to be cleaner for some applications, specifically logging. Maybe I've been unlucky in seeing internal domains like server.northside.chicago.rizzomanufacturing.com . Who wants to see that in the logs or a shell prompt ? Sometimes, I'm involved in company acquisitions or restructuring where internal domains and/or subdomains change. I like using the short hostname in these cases because logging, kickstarts, printing, systems monitoring, etc. do not need full reconfiguration to account for the new domain names. A typical RHEL/CentOS server setup for a server named "rizzo" with internal domain "ifp.com", would look like: /etc/sysconfig/network:
HOSTNAME=rizzo
... - /etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.100.13 rizzo.ifp.com rizzo - [root@rizzo ~]# hostname
rizzo - /var/log/messages snippet:
Dec 15 10:10:13 rizzo proftpd[19675]: 172.16.100.13 (::ffff:206.15.236.182[::ffff:206.15.236.182]) - Preparing to
chroot to directory '/app/upload/GREEK'
Dec 15 10:10:51 rizzo proftpd[20660]: 172.16.100.13 (::ffff:12.28.170.2[::ffff:12.28.170.2]) - FTP session opened.
Dec 15 10:10:51 rizzo proftpd[20660]: 172.16.100.13 (::ffff:12.28.170.2[::ffff:12.28.170.2]) - Preparing to chroot
to directory '/app/upload/ftp/SRRID' | {
"source": [
"https://serverfault.com/questions/331936",
"https://serverfault.com",
"https://serverfault.com/users/50996/"
]
} |
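With the short-name-plus-/etc/hosts layout shown above, both forms should resolve correctly; a quick sanity check:

hostname        # rizzo
hostname -f     # rizzo.ifp.com
getent hosts rizzo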
332,019 | I'm using the new AWS GUI for Route 53 to setup my domain records. However, the AWS console won't accept the recommended Google Apps SPF record, v=spf1 include:_spf.google.com ~all (found here ). It keeps giving me an error stating The record set could not be saved because:
- The Value field contains invalid characters or is in an invalid format. This happens when saving as SPF and TXT. Any ideas? | I had to wrap my SPF record in quotation marks for it to work. "v=spf1 include:_spf.google.com ~all" | {
"source": [
"https://serverfault.com/questions/332019",
"https://serverfault.com",
"https://serverfault.com/users/101289/"
]
} |
332,255 | I want to do a complete server backup. I already have my backup script copying all of the html/php files for the web app, and the mysql databases, placing them into a .tar.gz file. How can I add the crontab files to that backup? Whenever I save the crontab, it goes to /tmp folder.. and when I check that folder immediately afterwards, it is empty. | You could just backup the entire /var/spool/cron directory. It contains all crontabs for all users. | {
"source": [
"https://serverfault.com/questions/332255",
"https://serverfault.com",
"https://serverfault.com/users/33108/"
]
} |
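A sketch of adding cron state to the existing backup; the destination paths are assumptions, and on Debian/Ubuntu the user crontabs live in /var/spool/cron/crontabs instead:

crontab -l > /root/backup/root.crontab
tar czf /root/backup/cron.tar.gz /var/spool/cron /etc/crontab /etc/cron.d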
332,372 | I have a few large files that I need to copy from one Linux machine to about 20 other Linux machines, all on the same LAN as quickly as is feasible. What tools/methods would be best for copying these files, noting that this is not going to be a one-time copy. These machines will never be connected to the Internet, and security is not an issue. Update: The reason for my asking this is because (as I understand it) we are currently using scp in serial to copy the files to each of the machines and I have been informed that this is "too slow" and a faster alternative is being sought. According to what I have been told, attempting to parallelize the scp calls simply slows it down further due to hard drive seeks. | BitTorrent. It's how Twitter deploys some things internally. http://engineering.twitter.com/2010/07/murder-fast-datacenter-code-deploys.html (web archive link) | {
"source": [
"https://serverfault.com/questions/332372",
"https://serverfault.com",
"https://serverfault.com/users/94886/"
]
} |
332,848 | I ordered a dedicated server 1 month ago and I want to make sure my server is dedicated and not a VPS or shared server. Are there any tools I can use to verify that my server is running on bare metal and that I am the only user? | First, you should trust your hosting provider. If you think they sold you a VPS, maybe you should reconsider this provider.
Just to make sure you have a dedicated server, you can try this: Does the command esxtop work? This tool is used to check performance on virtual machines. Check the network interfaces. Run the command ifconfig . If you see something like this: venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:99999 errors:0 dropped:0 overruns:0 frame:0
TX packets:99999 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:126223307 (120.3 MiB) TX bytes:2897538 (2.7 MiB)
venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:6x.xxx.xxx.xxx P-t-P:6x.xxx.xxx.xxx Bcast:6x.xxx.xxx.xxx Mask:255.255.255.255
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1400 Metric:1 then you probably have a VPS, since venet0 indicates that this server is an OpenVZ VPS.
Note: This is not 100% foolproof; some VPSes, like Xen, have an eth0. Check devices/system: Run lspci and dmesg as root. If you see something like: VMWare SVGA device
acd0: CDROM <VMware Virtual IDE CDROM Drive/00000001> at ata0-master UDMA33
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device Then you are using a VPS. Check whether certain files exist: If it's a VPS running OpenVZ, it will have a file called /proc/user_beancounters . View http://wiki.openvz.org/Proc/user_beancounters for more details. Check whether /proc/vz or /proc/vz/veinfo exists (for OpenVZ), or /proc/sys/xen, /sys/bus/xen or /proc/xen (for Xen). Check if /proc/self/status has an s_context or VxID field. If any of these files exists, then you have a VPS. IP lookup: You could do a reverse IP lookup to see whether any other websites are hosted on the same IP. Check memory: Run lspci and look for RAM memory: Qumranet, Inc. Virtio memory balloon . If you see it, you have a VPS.
"source": [
"https://serverfault.com/questions/332848",
"https://serverfault.com",
"https://serverfault.com/users/96988/"
]
} |
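On reasonably modern systems, a couple of purpose-built tools (if installed) give a quicker answer: systemd-detect-virt prints none on bare metal, virt-what prints nothing at all, and dmidecode reports the real hardware model rather than a hypervisor product name:

systemd-detect-virt
virt-what                          # run as root
dmidecode -s system-product-name   # run as root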
333,048 | I have a bunch of rewrite rules that I have to port from apache to nginx. It's a rather painful process because I'm not able to see if my rewrite rules and "if" conditions are working as I want them to. Apache did have debugging for its rewrite module. Whats can I do for nginx? | Enable rewrite_log : rewrite_log on; and set debug level in error_log directive: error_log /var/log/nginx/localhost.error_log notice; | {
"source": [
"https://serverfault.com/questions/333048",
"https://serverfault.com",
"https://serverfault.com/users/94896/"
]
} |
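In context, a minimal sketch (the log path and rewrite rule are placeholders); rewrite_log output goes to the error log at notice level:

error_log /var/log/nginx/localhost.error_log notice;
server {
    rewrite_log on;
    rewrite ^/old/(.*)$ /new/$1 permanent;
}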
333,116 | What does mdev mean in ping output (last row below)? me@callisto ~ % ping -c 1 example.org
PING example.org (192.0.43.10) 56(84) bytes of data.
64 bytes from 43-10.any.icann.org (192.0.43.10): icmp_seq=1 ttl=245 time=119 ms
--- example.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 119.242/119.242/119.242/0.000 ms | It's the standard deviation, essentially an average of how far each ping RTT is from the mean RTT. The higher mdev is, the more variable the RTT is (over time). With a high RTT variability, you will have speed issues with bulk transfers (they will take longer than is strictly speaking necessary, as the variability will eventually cause the sender to wait for ACKs) and you will have middling to poor VoIP quality. | {
"source": [
"https://serverfault.com/questions/333116",
"https://serverfault.com",
"https://serverfault.com/users/98724/"
]
} |
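As a worked example, assuming iputils computes mdev as sqrt(mean(rtt²) − mean(rtt)²): three replies of 100 ms, 110 ms and 120 ms give a mean of 110 ms and a mean of squares of about 12166.7, so mdev = sqrt(12166.7 − 12100) ≈ 8.2 ms; three identical replies would give mdev = 0, as in the single-packet output above.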
333,118 | I used the dialog binary tool to create a msgbox on the Linux screen, as in the following example: dialog --colors --title "test" --msgbox "type <ENTER>" 8 50 My question: how do I kill the dialog process in order to clear the screen without the dialog box?
There is no dialog process; I checked with ps -ef.
I also tried dialog --clear but this doesn't clear the screen and the dialog box still exists. Please advise. | It's the standard deviation, essentially an average of how far each ping RTT is from the mean RTT. The higher mdev is, the more variable the RTT is (over time). With a high RTT variability, you will have speed issues with bulk transfers (they will take longer than is strictly speaking necessary, as the variability will eventually cause the sender to wait for ACKs) and you will have middling to poor VoIP quality. | {
"source": [
"https://serverfault.com/questions/333118",
"https://serverfault.com",
"https://serverfault.com/users/90487/"
]
} |
333,321 | I've recently set up my server so that my suPHP 'virtual' users can't be logged into, by following this article. My issue now is that before, when I ran a rake command for my Ruby on Rails application running on the server, I used su to switch to www-data and execute the command from there - obviously I can't do that anymore because of the nologin. So as the root user, how can I execute commands as other users, even if they are nologin? | One way is to launch a shell for that user (explicitly specifying the shell): sudo -u www-data bash This will launch a (bash) shell as the specified user. You can then execute your command(s) and log out (to return to your previous shell). | {
"source": [
"https://serverfault.com/questions/333321",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
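For a one-off command rather than an interactive shell, the same idea works with bash -c; the application path and rake task below are placeholders:

sudo -u www-data -H bash -c 'cd /var/www/app && bundle exec rake db:migrate'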
333,329 | If 3 routers are connected this way together, is the connection between Router1 and Router2 itself forms a network on its own despite having no stations between them? Similarly, would the connection between Router2 and Router3, and Router3 and Router1 form a network without stations between them? In other words, the connection between Router1 and Router2 will require another subnet of IP addresses which is only used for the interfaces connected between Router1 and Router2? Otherwise, what would the IP address of the Interface on the routers that connect between themselves if they don't form an "empty network" between them? | One way is to launch a shell for that user (explicitly specifying the shell): sudo -u www-data bash This will launch a (bash) shell as the specified user. You can then execute your command(s) and logout (to return to your previous shell) | {
"source": [
"https://serverfault.com/questions/333329",
"https://serverfault.com",
"https://serverfault.com/users/99957/"
]
} |
333,340 | I have a website (Alpha) running successfully on an IIS 7.5 webserver running on Windows Server 2008 R2. I basically want to clone Alpha and have a second website Beta, the same as Alpha, but will have somewhat different code. I've created the second website and also created a second Application pool. As far as I can tell, the two application pools are configured the same: auto start, v4.0, Integrated, Identity: ApplicationPoolIdentity. The second website (Beta) doesn't work if I connect it to its own Application Pool, but works fine if I connect it to Alpha's Application Pool. As far as I can remember, I did not do anything special to Alpha's Application Pool. As far as I can tell, the advanced settings are the same for both. The failure Beta has when connected to its own Application Pool is is getting an unhandled exception: Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON' Any ideas on how to compare the two Application Pools, or to debug the overall system would be appreciated. I tried deleting Beta's Application Pool and re-creating it. | One way is to launch a shell for that user (explicitly specifying the shell): sudo -u www-data bash This will launch a (bash) shell as the specified user. You can then execute your command(s) and logout (to return to your previous shell) | {
"source": [
"https://serverfault.com/questions/333340",
"https://serverfault.com",
"https://serverfault.com/users/2489/"
]
} |
333,526 | Update: EMC has dropped our warranty and support, so this is going to be an insurance case. Dell says's that we can get a professional cleaning agency to refurbish the servers and keep our warranty. Cisco says "maybe". HP is still silent :( Final update: EMC turned around and approved cleaning from a certified company. The VNX got shipped back to us today and works just fine. The rest of the server room is also getting cleaned, and our losses are limited to a couple of tape drives. The insurance company picks up the bill for just about everything else. The original question: Here's the story.. The owners of the building we lease office space from decided to do a renovation of the exterior. This involved in some pretty heavy work at the level where our server room is, including exchanging windows wich are fit inside a concrete wall. My red alert went off when I heard that they were going to do the same thing with our server room (yes, our server room has a window. We're a small shop with 3 racks. The window is secured with steel bars.) I explicity told the contractor that they need to put up a temporarily wall between our racks and the original wall - and to make sure that the temporary wall is 100 % air and water-tight. They promised to do so. The temporary wall has a small door in it, so that workers can go in/out through the day (through our server room, wich was the only option....). On several occasions I could find the small door half-way shut while working evenings/nights. I locked the door, and thought that they would hopefully get the point soon and keep the door shut. I even gave a electrician a mouthful when I saw that he didn't close the door properly. By this point - I bet that most of you get a picture of what happened. Yes, they probably left the door open while drilling in the concrete. I present you our 4 weeks old EMC VNX: I'll even put in a little bonus, here is the APC UPS one rack further away from the temporary wall. See the nice little landing strip from my finger? What should I do? The only thing that comes to mind is to either call all our suppliers (EMC, HP, Dell, Cisco) and get them to send technicians to check out all the gear in the server room, or get some kind of certified 3rd-party consulant to check all of it. Would you run production systems on this gear? How long? I should also note that our aircondition isn't exactly enterprise-grade, given the nature of our small room. It's just a single inverter, wich have failed one time before I started working here (failed inverters usually leads to water dripping out). | Get all the technicians in. Make them check/clean all the equipment. Send the bill to the building planner. Really, servers can withstand some level of dust but this is just too much. We clean our servers regularly during downtime with a PC vacuum by 3M . It's a nice thing to have around the office. But for now, start cleaning. The faster you get the dust out of there, the better. Try to keep heatsinks and fans clear of dust. If a heatsink or fan is covered in dust, its ability to dissipate heat is much worse then a clean unit. | {
"source": [
"https://serverfault.com/questions/333526",
"https://serverfault.com",
"https://serverfault.com/users/9140/"
]
} |
333,548 | So I've wondered this for a long time. Where does email sent to *@example.com go? If I accidentally sent sensitive information to *@example.com would some evil person (potentially at the IANA) be able to retrieve it someday? | If you attempt to send an email to *@example.com Your SMTP will check the domain exists. Your SMTP server will lookup for a MX record at example.com . There is none: Your SMTP will fall back on the A record. The IP is 174.137.125.92 (as of today) The IANA has registered the domain, but has not set up a SMTP server listening on port 25 on 174.137.125.92. Then the behaviour depends on your SMTP. Most servers will send you a warning, and try again later. Eventually (usually in 3 days), the SMTP will discard the message and send you a notification of failure. Bottom line : It depends on your own configuration. But if IANA set up a server today, they might be able to receive messages you tried to send 3 days ago. | {
"source": [
"https://serverfault.com/questions/333548",
"https://serverfault.com",
"https://serverfault.com/users/59968/"
]
} |
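You can check the MX and fallback A record behaviour described above yourself:

dig +short MX example.com    # is there an MX record at all?
dig +short A example.com     # the A record an SMTP server would fall back to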
333,816 | Our company has a brand new NAS, and the idea is that we will be able to use it for fast, shared access to our data on our network. It's a fairly simple 2-disk system, but from what I understand, it should reach speeds of about 40mb/s. We have a 100mb/sec network between our PC's and the NAS. However, we're only getting NAS speeds of around 8-10mb/sec. What could the bottleneck be? | You are confusing your units. M = mega m = milli B = byte b = bit When referring to disk usage, we measure throughput in megabytes per second, or MB/s. Notice the capital M for mega and the capital B for bytes. When referring to network performance, we measure throughput in megabits per second, or Mb/s. Notice the lowercase b . A bit is eight times smaller than a byte. You can figure out your 100Mb/s network's maximum theoretical throughput in MB simply by dividing by 8. 100 / 8 = 12.5 . TCP/IP has ~ a 10% overhead, as does Ethernet, so realistically you'll only see about 80% of that at the high end. A little more basic math shows that 12.5 * .8 = 10 . You should expect to be able to write at about 10MB/s over your 100Mb/s network. This lines up perfectly with what you are seeing. tl;dr - Capitalization is important. | {
"source": [
"https://serverfault.com/questions/333816",
"https://serverfault.com",
"https://serverfault.com/users/101878/"
]
} |
333,907 | I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and cpu) resource to use as local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis. So I am trying to change the settings on the Application Pool to make sure that IIS does not recycle the application. So far, I've change the following: Limit Interval under CPU from 5 to 0. Idle Time-out under Process Model from 20 to 0. Regular Time Interval under Recycling from 1740 to 0. Will this be enough? And I have specific questions about the items I changed: What specifically does Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled? What exactly does "recycled" mean? Is the application completely torn down and started up again? What is the difference between "Worker Process shutdown" and "Application Pool recycling"? The documentation for the Idle Time-out under Process Model talks about shutting down the worker process. While the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two. I thought the w3wp.exe is the worker process which runs the application pool. Can someone explain the difference to the application between the two? The reason for having IIS7 and IIS7.5 tags is because the app will run in both and hope the answers are the same between the versions. Image for reference: | Recycling Recycling is usually* where IIS starts up a new process as a container for your application, and then gives the old one up to ShutdownTimeLimit to go away of its own volition before it's killed. *- usually: see DisallowOverlappingRotation / "Disable overlapped recycle" setting It is destructive , in that the original process and all its state information are discarded. Using out-of-process session state (eg, State Server or a database, or even a cookie if your state is tiny) can allow you to work around this. But it is by default overlapped - meaning the duration of an outage is minimized because the new process starts and is hooked up to the request queue, before the old one is told "you have [ ShutdownTimeLimit ] seconds to go away. Please comply." Settings To your question: all the settings on that page control recycling in some way. "Shutdown" might be described as "proactive recycling" - where the process itself decides it's time to go, and exits in an orderly manner. Reactive recycling is where WAS detects a problem and shoots the process (after establishing a suitable replacement W3WP). Now, here's some stuff that can cause recycling of one form or another: an ISAPI deciding it's unhealthy any module crashing idle timeout cpu limiting adjusting App Pool properties as your mum may have screamed at one point: "Stop picking at it, or it'll never get better!" "ping" failure * not actually pinging per se, cos it uses a named pipe - more "life detection" all of the settings in the screenshot above What To Do: Generally: Disable Idle timeouts . 20 minutes of inactivity = boom! Old process gone! New process on the next incoming request. Set that to zero. Disable Regular time interval - the 29 hour default has been described as "insane", "annoying" and "clever" by various parties. Actually, only two of those are true. 
Optionally Turn on DisallowRotationOnConfigChange (above, Disable Reycling for configuration changes ) if you just can't stop playing with it - this allows you to change any app pool setting without it instantly signaling to the worker processes that it needs to be killed. You need to manually recycle the App Pool to get the settings to take effect, which lets you pre-set settings and then use a change window to apply them via your recycle process. As a general principle, leave pinging enabled . That's your safety net. I've seen people turn it off, and then the site hangs indefinitely sometimes, leading to panic... so if the settings are too aggressive for your apparently-very-very-slow-to-respond app, back them off a bit and see what you get, rather than turning it off. (Unless you've got auto-crash-mode dumping set up for hung W3WPs through your own monitoring process) That's enough to cause a well-behaved process to live forever. If it dies, sure, it'll be replaced. If it hangs, pinging should pick that up and a new one should start within 2 minutes (by default; worst-case calc should be: up to ping frequency + ping timeout + startup time limit before requests start working again). CPU limiting isn't normally interesting, because by default it's turned off, and it's also configured to do nothing anyway; if it were configured to kill the process, sure, that'd be a recycling trigger. Leave it off. Note for IIS 8.x, CPU Throttling becomes an option too. An (IIS) AppPool isn't a (.Net) AppDomain (but may contain one/some) But... then we get into .Net land, and App Domain recycling, which can also cause a loss of state. (See: https://blogs.msdn.microsoft.com/tess/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles/ ) Short version, you do that by touching a web.config file in your content folder ( again with the picking! ), or by creating a folder in that folder, or an ASPX file, or.. other things... and that's about as destructive as an App Pool recycle, minus the native-code startup costs (it's purely a managed code (.Net) concept, so only managed code initialization stuff happens here). Antivirus can also trigger this as it scans web.config files, causing a change notification, causing.... | {
"source": [
"https://serverfault.com/questions/333907",
"https://serverfault.com",
"https://serverfault.com/users/3025/"
]
} |
334,029 | this is the result of my traceroute traceroute 211.140.5.120 1 141.1.31.2 (111.1.31.2) 0.397 ms 0.380 ms 0.366 ms
2 141.1.28.38 (111.1.28.38) 3.999 ms 3.971 ms 3.982 ms
3 142.11.124.193 (112.11.124.133) 1.315 ms 1.533 ms 1.455 ms
4 (201.141.0.261) 2.615 ms 2.749 ms 2.572 ms
5 (201.141.0.82) 2.705 ms 2.564 ms 2.680 ms
6 (201.118.231.14) 5.375 ms 5.126 ms 5.252 ms
7 * * *
8 * * *
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * * I want to know what does the *** mean and does the result mean there are really more than 30 hops between my host and the target server ? | All implementations of traceroute rely on ICMP (type 11) packets being sent to the originator. This program attempts trace route by launching UDP probe packets with a small ttl (time to live) then listening for an ICMP "time exceeded" reply from a gateway. It starts
probes with a TTL of one and increases it by one until it gets an ICMP "port unreachable" (which means it reached the host) or hits a maximum (which defaults to 30 hops and can be changed with the -m flag). Three probes (change with the -q flag) are sent at each TTL setting, and a line is printed showing the TTL, the address of the gateway, and the round-trip time of each probe (hence the three values, or three asterisks, per hop). If there is no response within a 5-second timeout interval (changed with the -w flag), a "*" is printed for that probe. So in your case we can conclude that we got a response only up to 201.118.231.14. Beyond that, the routers are not sending back ICMP replies, all the way up to hop 30, which is the maximum time-to-live (maximum number of hops). You can increase the maximum time-to-live using the -m flag. | {
"source": [
"https://serverfault.com/questions/334029",
"https://serverfault.com",
"https://serverfault.com/users/55582/"
]
} |
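Following on from the traceroute answer above, the flags it mentions can be combined; this is a hedged example (the target address is the one from the question, and values are illustrative). Some networks that drop the default UDP probes will still answer ICMP, and mtr, if installed, gives a continuously updated view:
# raise the max TTL to 60, send 1 probe per hop, wait only 2 seconds per probe
traceroute -m 60 -q 1 -w 2 211.140.5.120
# try ICMP echo probes instead of UDP (usually requires root)
traceroute -I 211.140.5.120
# per-hop loss and latency over time, in one report
mtr --report 211.140.5.120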
334,448 | Most guides for OpenSSH configuration advise to disable password authentication in favor of key-based authentication. But in my opinion password authentication has a significant advantage: an ability to connect from absolutely anywhere without a key. If used always with a strong password, this should not be a security risk. Or should it? | There are pros and cons for both password- and key-based authentication. In some cases, for example, key-based authentication is less secure than password authentication. In other cases, it's the password-based setup that's less secure. In some cases, one is more convenient, in others, less. It all boils down to this: When you do key-based authentication, you must secure your key with a passphrase. Unless you have ssh-agent running (ssh-agent frees you from entering your passphrase every time), you've gained nothing in terms of convenience. Security is disputable: the attack vector has now shifted from the server to YOU, or your account, or your personal machine, (...) - those may or may not be easier to break. Think outside of the box when deciding this. Whether you gain or lose in terms of security depends on the rest of your environment and other measures. edit: Oh, just saw that you're talking about a home server. I was in the same situation, "password" or "USB stick with key on it" always with me? I went for the former but changed the SSH listening port to something other than 22. That stops all those lame script kiddies brute-forcing whole network ranges. | {
"source": [
"https://serverfault.com/questions/334448",
"https://serverfault.com",
"https://serverfault.com/users/93517/"
]
} |
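Not part of the accepted answer above, but a sketch of the two hardening steps it mentions (a passphrase-protected key and a non-default listening port); the key filename, hostnames and port number are placeholders:
# generate a key protected by a passphrase (you are prompted for it)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_home
ssh-copy-id -i ~/.ssh/id_rsa_home.pub user@homeserver
# then, in /etc/ssh/sshd_config on the server (restart sshd afterwards):
#   Port 2222
#   PubkeyAuthentication yes
#   PasswordAuthentication no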
334,456 | I have logged in as "userc". I need access to all the files that "usera" has. I have edited the following file, vi /etc/group
usera:x:1000:userb,userc But this does not seem to work.
I am still getting "permission denied" error. If I su to usera then I am able to access those files. What is the best way to have equivalent access to "root" or "usera"? Update: I have tried the options suggested in the answer but I am still getting the following: [root@app company]# cd /opt/company/
[root@app company]# chmod 777 emboss/
[root@app company]# su shantanu
[shantanu@app company]$ whoami
shantanu
[shantanu@app company]$ echo "test" > /opt/company/emboss/todel.txt
bash: /opt/company/emboss/todel.txt: Permission denied
[shantanu@app company]$ sudo echo "test" > /opt/company/emboss/todel.txt
bash: /opt/company/emboss/todel.txt: Permission denied
[shantanu@app company]$ sudo -u usera echo "test" > /opt/company/emboss/todel.txt
bash: /opt/company/emboss/todel.txt: Permission denied | | {
"source": [
"https://serverfault.com/questions/334456",
"https://serverfault.com",
"https://serverfault.com/users/16842/"
]
} |
334,663 | What is the recommended size for a Linux /boot partition? And is it safe to not have a /boot partition? I see some servers don't have a /boot partition while some servers have a 128 MB /boot partition. I am a little confused. Is /boot partition necessary? If it is, how large should it be? | These days, 100 Megabytes or 200 Megabytes is the norm. You do not need to have a /boot partition. However, it's good to have for flexibility reasons (LVM, encryption, BIOS limitations). Edit: The recommended size has been increased to 300MB-500MB. Also see: https://superuser.com/questions/66015/installing-ubuntu-do-i-really-need-a-boot-parition | {
"source": [
"https://serverfault.com/questions/334663",
"https://serverfault.com",
"https://serverfault.com/users/87873/"
]
} |
335,625 | I'm trying to give full access (read, write) to a specific folder to all users on Windows 7. The problem is that I don't know how to do that using icacls. | c:\windows\system32\icacls c:\folder /grant "domain\user":(OI)(CI)M
c:\windows\system32\icacls c:\folder /grant "everyone":(OI)(CI)M
c:\windows\system32\icacls c:\folder /grant "Authenticated Users":(OI)(CI)M Open command window and type c:\windows\system32\icacls /? | {
"source": [
"https://serverfault.com/questions/335625",
"https://serverfault.com",
"https://serverfault.com/users/92366/"
]
} |
335,769 | I would like to have alias ll="ls -l" to be system wide. How is that done on Ubuntu? | Add it to /etc/bash.bashrc (that is Ubuntu's system-wide bashrc; many other distributions use /etc/bashrc instead). This will (or should) get sourced by every user who runs an interactive bash shell. | {
"source": [
"https://serverfault.com/questions/335769",
"https://serverfault.com",
"https://serverfault.com/users/34187/"
]
} |
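An alternative sketch to editing /etc/bash.bashrc directly: a drop-in file keeps local changes separate. The filename is arbitrary, and this assumes a distribution whose /etc/profile sources /etc/profile.d/*.sh (Ubuntu does):
# create a small drop-in that every login shell will source
cat <<'EOF' | sudo tee /etc/profile.d/00-local-aliases.sh
alias ll='ls -l'
EOF
# note: interactive non-login shells (most terminal windows) read
# /etc/bash.bashrc instead, so add the alias there too if you want it everywhere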
336,121 | I am trying to see if a process is running on multiple servers and then format it into a table. get-process -ComputerName server1,server2,server3 -name explorer | Select-Object processname,machinename Thats the easy part - When the process does not exist or if the server is unavailable, powershell outputs a big ugly error, messes up the the table and doesn't continue. Example Get-Process : Couldn't connect to remote machine.At line:1 char:12 + get-process <<<< -ComputerName server1,server2,server3 -name explorer | format-table processname,machinename
+ CategoryInfo : NotSpecified: (:) [Get-Process], InvalidOperatio nException + FullyQualifiedErrorId : System.InvalidOperationException,Microsoft.Power Shell.Commands.GetProcessCommand How do I get around this? If the I would still like to get notified if the process isn't available or Running. | Add -ErrorAction SilentlyContinue to your command. When it's not an error, but an unhandled Exception, you should add -EV Err -EA SilentlyContinue , in order to catch the exception. ( EA is an alias for ErrorAction ) You can then evaluate the error in your script, by having a look at $Err[0] | {
"source": [
"https://serverfault.com/questions/336121",
"https://serverfault.com",
"https://serverfault.com/users/43680/"
]
} |
336,217 | I want to monitor all users' activity on my server. Even when a user executes a shell command from within an editor like vim, I want to
see them in the log file. I have checked the tool acct but it is not listing the complete commands.
(Please correct me if acct has some option that already does this.) Which Linux tool should I be looking at to solve this problem? | Add this line to the PAM config responsible for logins (it's /etc/pam.d/system-auth on Red Hat-based distros): session required pam_tty_audit.so enable=* To find out what was done, you can use: ausearch -ts <some_timestamp> -m tty -i This produces output like this: type=TTY msg=audit(11/30/2011 15:38:39.178:12763684) : tty pid=32377 uid=root
auid=matthew major=136 minor=2 comm=bash data=<up>,<ret> The only downside to this is that the output can be a little difficult to read, but it is much better than most proposed solutions since in theory it can be used to record an entire session, warts and all. Edit: Oh, and you can use aureport to generate a list that can be more helpful. # aureport --tty
...
12. 11/30/2011 15:50:54 12764042 501 ? 4294967295 bash "d",<^D>
13. 11/30/2011 15:52:30 12764112 501 ? 4294967295 bash "aureport --ty",<ret>
14. 11/30/2011 15:52:31 12764114 501 ? 4294967295 bash <up>,<left>,<left>,"t",<ret> | {
"source": [
"https://serverfault.com/questions/336217",
"https://serverfault.com",
"https://serverfault.com/users/10303/"
]
} |
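A couple of hedged variations on the pam_tty_audit answer above; the username is a placeholder and the exact options supported depend on the pam_tty_audit version your distribution ships:
# audit only root's terminals instead of everyone's
# (goes in the same PAM session stack shown in the answer)
#   session required pam_tty_audit.so disable=* enable=root
# search today's TTY records and render them human-readable
ausearch -ts today -m tty -i | less
# summarise per-session keystrokes since midnight
aureport --tty --start today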
336,250 | I am writing a web application that uses .NET Windows Authentication and relies on a user's group membership to Authorize them to various areas of the website. Right now I'm on a dev machine that IS NOT part of a domain and is not using AD, instead I'm just using local user groups. In general this is working fine as is. However, as I test the application I need to add and remove roles in my user account to verify things are working. When I add a role it doesn't seem to propagate until I log out of Windows and login again. Is it possible to force an update to Group membership without having to log off? | taskkill.exe /F /IM explorer.exe
runas /user:%USERDOMAIN%\%USERNAME% explorer.exe This will kill explorer, then reopen with your user account... It will prompt you for your password and that will get you a new token, thereby updating your membership. | {
"source": [
"https://serverfault.com/questions/336250",
"https://serverfault.com",
"https://serverfault.com/users/102621/"
]
} |
336,298 | I have a specific use case where I would really like to be able to change a user's password with a single command with no interactivity. This is being done in a safe fashion (over SSH, and on a system with only one user able to be logged in), so it's fine to expose the new password (and even the old one, if necessary) on the command line. FWIW, it's a Ubuntu system. I just want to avoid having to add something Expect-like to this system for just this one task. | You could use chpasswd . echo user:pass | /usr/sbin/chpasswd | {
"source": [
"https://serverfault.com/questions/336298",
"https://serverfault.com",
"https://serverfault.com/users/11131/"
]
} |
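Building on the chpasswd answer above: if you want to keep the cleartext password off the pipeline and out of shell history, a here-string or a pre-hashed value works. This is a sketch only; 'alice' and the password are placeholders:
# here-string form of the same command; prefix the line with a space to keep
# it out of bash history when HISTCONTROL includes ignorespace
sudo chpasswd <<< 'alice:NewS3cret!'
# or hand usermod a pre-hashed password instead of the cleartext
hash=$(openssl passwd -6 'NewS3cret!')   # -6 needs a recent OpenSSL; older builds only offer -1 (MD5)
sudo usermod -p "$hash" alice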
336,325 | I just signed up with DreamHost VPS, and their sign-up process offered an unique IP address for an additional ~$4/mo. I know what IP addresses are. Why would this uniqueness matter? Visitors are accessing my website via URL addresses anyway. | The big one is that you need a unique IP address for some SSL/TLS implementations. As pointed out in the comments, no version of IE on XP can do this, which is the biggest offender. Also, if you have an application that needs to reference an IP instead of a DNS name, you'd need it since your shared host is likely configured to ignore requests to an IP. | {
"source": [
"https://serverfault.com/questions/336325",
"https://serverfault.com",
"https://serverfault.com/users/61062/"
]
} |
336,617 | I've setup ispconfig3 on my debian six server, and here is a little smtp over ssl: The server is postfix AUTH PLAIN (LOL!)
235 2.7.0 Authentication successful
MAIL FROM: [email protected]
250 2.1.0 Ok
RCPT TO: [email protected]
RENEGOTIATING
depth=0 /C=AU/ST=NSW/L=Sydney/O=Self-Signed Key! Procees with caution!/OU=Web Hosting/[email protected]
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=AU/ST=NSW/L=Sydney/O=Self-Signed Key! Procees with caution!/OU=Web Hosting/[email protected]
verify return:1
DATA
554 5.5.1 Error: no valid recipients but, the thing is, if I just do a vanilla telnet over port 25 I can authenticate and send mail like a madman... hopefully this is enough information! (as opposed to 'mail.app can't handle ssl!') | Pressing "R" in an s_client session causes openssl to renegotiate . Try entering "rcpt to:" instead of "RCPT TO". You might also try tools that are more suited to SMTP-specific testing, such as Tony Finch's smtpc or swaks . | {
"source": [
"https://serverfault.com/questions/336617",
"https://serverfault.com",
"https://serverfault.com/users/102747/"
]
} |
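To expand on the renegotiation point in the answer above: s_client only treats lines beginning with R or Q specially in interactive mode, so that behaviour can be switched off; the hostnames and ports below are placeholders:
# -quiet (or -ign_eof) stops openssl interpreting leading R/Q lines,
# so "RCPT TO:" is passed through to the SMTP server untouched
openssl s_client -connect mail.example.com:465 -crlf -quiet
# swaks drives the whole SMTP transaction for you, including TLS and auth
swaks --to user@example.com --server mail.example.com:587 --tls \
      --auth PLAIN --auth-user someuser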
336,629 | I built a freebsd guest OS for a custom Java application. No Gui, no frills, just SSH and an API to coordinate all the machines. This application is inherently single-threaded, and it is only limited by the CPU speed. Which is the best Virtualization framework for JAVA CPU-bound applications? We started with OpenVZ but we found a nasty bug that causes memory leaks in our app, so now we're looking on what to use next. | | {
"source": [
"https://serverfault.com/questions/336629",
"https://serverfault.com",
"https://serverfault.com/users/44097/"
]
} |
336,630 | The question says it all, I think. I vaguely remember there was an easy way to do this, but don't remember what it was. | It doesn't provide much, but here it is: C:\Windows\system32>fltmc filters
Filter Name Num Instances Altitude Frame
------------------------------ ------------- ------------ -----
MpFilter 12 328000 0
luafv 1 135000 0
FileInfo 12 45000 0
C:\Windows\system32>fltmc volumes
Dos Name Volume Name FileSystem Status
------------------------------ --------------------------------------- ---------- --------
\Device\Mup Remote
C: \Device\HarddiskVolume2 NTFS
D: \Device\HarddiskVolume3 NTFS
\Device\HarddiskVolume1 NTFS
\Device\HarddiskVolumeShadowCopy12 NTFS
E: \Device\HarddiskVolume14 NTFS
\Device\HarddiskVolumeShadowCopy15 NTFS
\Device\HarddiskVolumeShadowCopy17 NTFS
\Device\HarddiskVolumeShadowCopy19 NTFS
\Device\HarddiskVolumeShadowCopy21 NTFS
\Device\HarddiskVolumeShadowCopy23 NTFS
F: \Device\CdRom11 CDFS | {
"source": [
"https://serverfault.com/questions/336630",
"https://serverfault.com",
"https://serverfault.com/users/61119/"
]
} |
336,854 | Is there a way to monitor the traffic (e.g., get a live view of the utilization) over a particular network interface, say eth0? The catch here is that the set of tools on the box is fixed, and is pretty much a stock RHEL deployment, so add-on tools can't be used. Looking for something basic and usually present like iostat here. | The data you want to see shows up in good old ifconfig. watch ifconfig eth0 or to make things stand out better: watch -n 1 -d ifconfig eth0 | {
"source": [
"https://serverfault.com/questions/336854",
"https://serverfault.com",
"https://serverfault.com/users/29918/"
]
} |
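If you want throughput numbers rather than the raw counters the watch ifconfig approach above shows, a stock box can compute the delta from the kernel's per-interface statistics; minimal sketch, with eth0 assumed as the interface name:
IFACE=eth0
STATS=/sys/class/net/$IFACE/statistics
prev_rx=$(cat $STATS/rx_bytes); prev_tx=$(cat $STATS/tx_bytes)
while sleep 1; do
  rx=$(cat $STATS/rx_bytes); tx=$(cat $STATS/tx_bytes)
  echo "rx $(( (rx - prev_rx) / 1024 )) KB/s   tx $(( (tx - prev_tx) / 1024 )) KB/s"
  prev_rx=$rx; prev_tx=$tx
done
If the sysstat package happens to be installed, sar -n DEV 1 gives the same information with no scripting.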
337,082 | I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash). alias rm='rm -i' and --preserve-root by default didn't save me, so are there any automatic safeguards for this? I wasn't root and cancelled the command immediately, but there were some relaxed permissions somewhere or something because I noticed that my Bash prompt broke already. I don't want to rely on permissions and not being root (I could make the same mistake with sudo ), and I don't want to hunt for mysterious bugs because of one missing file somewhere in the system, so, backups and sudo are good, but I would like something better for this specific case. About thinking twice and using the brain. I am using it actually! But I'm using it to solve some complex programming task involving 10 different things. I'm immersed in this task deeply enough, there isn't any brain power left for checking flags and paths, I don't even think in terms of commands and arguments, I think in terms of actions like 'empty current dir', different part of my brain translates them to commands and sometimes it makes mistakes. I want the computer to correct them, at least the dangerous ones. | One of the tricks I follow is to put # in the beginning while using the rm command. root@localhost:~# #rm -rf / This prevents accidental execution of rm on the wrong file/directory. Once verified, remove # from the beginning. This trick works, because in Bash a word beginning with # causes that word and all remaining characters on that line to be ignored. So the command is simply ignored. OR If you want to prevent any important directory, there is one more trick. Create a file named -i in that directory. How can such a odd file be created? Using touch -- -i or touch ./-i Now try rm -rf * : sachin@sachin-ThinkPad-T420:~$ touch {1..4}
sachin@sachin-ThinkPad-T420:~$ touch -- -i
sachin@sachin-ThinkPad-T420:~$ ls
1 2 3 4 -i
sachin@sachin-ThinkPad-T420:~$ rm -rf *
rm: remove regular empty file `1'? n
rm: remove regular empty file `2'? Here the * expands to include -i on the command line, so your command ultimately becomes rm -rf -i . Thus the command will prompt before removal. You can put this file in your / , /home/ , /etc/ , etc. OR Use --preserve-root as an option to rm . In the rm included in newer coreutils packages, this option is the default. --preserve-root
do not remove `/' (default) OR Use safe-rm Excerpt from the web site: Safe-rm is a safety tool intended to prevent the accidental deletion
of important files by replacing /bin/rm with a wrapper, which checks
the given arguments against a configurable blacklist of files and
directories that should never be removed. Users who attempt to delete one of these protected files or
directories will not be able to do so and will be shown a warning
message instead: $ rm -rf /usr
Skipping /usr | {
"source": [
"https://serverfault.com/questions/337082",
"https://serverfault.com",
"https://serverfault.com/users/91493/"
]
} |
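One more guard in the same spirit as the answer above, hedged because it needs GNU coreutils 6.x or later: the -I flag prompts once before any recursive delete or any delete of more than three files, which is far less noisy than -i:
# prompt once for big or recursive deletions, not once per file
alias rm='rm -I --preserve-root'
# dry-run habit: see what the glob actually expands to before deleting it
ls -d ./*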
337,274 | I'm looking for a simple way to SSH from my local machine, A, through a proxy, B, to a destination host, C. The private key that goes with the public key on C is on B, and I can't put that key on my local machine. Any tips? Also, I'd like to be able to do this using ~/.ssh/config. Thanks! | Schematic: ssh ssh
A ------> B ------> C
^ ^
using A's using B's
ssh key ssh key Preconditions: A is running ssh-agent; A can access B ; B can access C ; A 's ssh public key is present in B:~/.ssh/authorized_keys B 's ssh public key is present in C:~/.ssh/authorized_keys In ~/.ssh/config on A , add Host C
ProxyCommand ssh -o 'ForwardAgent yes' B 'ssh-add && nc %h %p' If your ssh private key on B is in a nonstandard location, add its path after ssh-add . You should now be able to access C from A : A$ ssh C
C$ | {
"source": [
"https://serverfault.com/questions/337274",
"https://serverfault.com",
"https://serverfault.com/users/75925/"
]
} |
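A short, hedged checklist for debugging the setup in the answer above (B and C as in the schematic); each command only confirms a precondition and changes nothing:
# 1. the agent on A is running and holds A's key
ssh-add -l
# 2. agent forwarding to B works and B's key can be added to the forwarded agent
ssh -o ForwardAgent=yes B 'ssh-add && ssh-add -l'
# 3. with the ProxyCommand in place, the hop should now succeed
ssh -v C    # -v shows which key is offered on each hop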
337,631 | I ran my crontab job 0 2 */1 * * /aScript >aLog.log 2>&1 as the 'root' user, but I found that the environment it runs in is different from the 'root' user's normal environment, and my scripts therefore behave differently at runtime. An attempted fix was placing export commands in rc.d files, but they still didn't show up!
I end up placing export commands in the aScript itself. My question is that is there a better way to approach this problem? and why env is missing even though it is from the same user 'root' ? (I modifies crontab by running 'crontab -e' from the root) | Cron always runs with a mostly empty environment. HOME , LOGNAME , and SHELL are set; and a very limited PATH . It is therefore advisable to use complete paths to executables, and export any variables you need in your script when using cron . There are several approaches you can use to set your environment variables in cron , but they all amount to setting it in your script. Approach 1: Set each variable you need manually in your script. Approach 2: Source your profile: . $HOME/.bash_profile (or . $HOME/.profile ) (You will usually find that the above file will source other files (e.g. ~/.bashrc --> /etc/bashrc --> /etc/profile.d/* ) - if not, you can source those as well.) Approach 3: Save your environment variables to a file (run as the desired user): env > /path/to/my_env.sh Then import via your cron script: env - `cat /path/to/my_env.sh` /bin/sh Approach 4: In some cases, you can set global cron variables in /etc/default/cron . There is an element of risk to this however, as these will be set for all cron jobs. | {
"source": [
"https://serverfault.com/questions/337631",
"https://serverfault.com",
"https://serverfault.com/users/81562/"
]
} |
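A small sketch combining approaches 1 and 2 from the answer above into a reusable wrapper; the path is an example and the profile files it sources are the usual ones:
#!/bin/sh
# cronwrap.sh - run a command with a login-like environment under cron
. /etc/profile                                   # system-wide variables
[ -f "$HOME/.profile" ] && . "$HOME/.profile"    # per-user variables
exec "$@"
Install it as, say, /usr/local/bin/cronwrap.sh, make it executable, and prefix the cron command with it: 0 2 * * * /usr/local/bin/cronwrap.sh /aScript >aLog.log 2>&1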
337,678 | I need to send a large volume of emails, roughly 60,000 per week. At the moment we outsource this service to a third party, and we expect to double our volume within the next 6 months.
Since the service is starting to be too expensive, I was thinking about setting up our own MTA. Our own SysAdmin told us it is not difficult at all to have our own MTA, but I'm afraid he might have oversimplified this. Is it difficult to handle a MTA? Should I be afraid that my MTA will lose the company mails? Should I stay with a third party service? p.s: The emails have been collected respecting the local legislation on privacy, so no spam. | There should be no problems in doing it yourself, however, you need an experienced sysadmin, or a sysadmin willing to learn something new. It's not as easy as just running another daemon and opening a port in the firewall. I run an MTA for personal projects on a VPS, and while you of course need high availability and be able to handle way more load, the general setup would be pretty much the same. Some general advice: Be sure not end up with an open relay, you'll get blacklisted Read up on how to avoid the dreaded spam folder Make sure the correct MX records are in place Use a subdomain for your send only MTA (mailer.example.com) Use correct mail headers, from: and reply-to: Use DKIM for signing mail (helps avoid spam also) EDIT: I forgot two important points (thanks symcbean): SPF , to restrict mail from your domain to specific IP or ranges Intelligent bounce handling; configured to talk to your mailing list app (removing dead addresses etc.) | {
"source": [
"https://serverfault.com/questions/337678",
"https://serverfault.com",
"https://serverfault.com/users/44097/"
]
} |
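To make the SPF and DKIM bullet points above concrete, here is a hedged sketch; example.com, the IP address and the selector "mail" are placeholders, and the record you actually publish depends on which hosts send for your domain:
# SPF: a TXT record allowing your MX hosts plus one extra sending address
#   example.com.  IN TXT  "v=spf1 mx ip4:192.0.2.10 -all"
# verify what is currently published
dig +short TXT example.com
# verify the DKIM public key for selector "mail"
dig +short TXT mail._domainkey.example.com
# check whether your new MTA's IP is on a common DNSBL (placeholder IP reversed)
host 10.2.0.192.zen.spamhaus.org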
337,818 | I would like to analyze mysql traffic. Right now, all mysql requests are sent to the MySQL unix socket: unix 2 [ ACC ] STREAM LISTENING 3734388 15304/mysqld /var/run/mysqld/mysqld.sock I'm trying to disable that socket to force MySQL to use the network socket instead on the loopback. I tried commenting out all the socket directives in the my.cnf and debian.cnf files and restarted MySQL but it made no difference. How can I disable the MySQL unix socket to force MySQL over the network? additional info: I'm running MySQL 5.1 on ubuntu 10.04 . Precisions on the question Since plenty of people suggested enabling the network socket I would like to clarify my question by pointing out that the bind address was already enabled with bind-address = 127.0.0.1 and that a listening connection is available: tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 15601/mysqld Still I see no connections attempt to 127.0.0.1:3306 coming from my webapp (Drupal website). Updated with the answer It appears indeed that the issue is coming from the mysqli connector that Drupal uses (in .ht_config.php for those who are interested). It was set: mysqli://drupal:***@localhost/drupal , changing localhost to 127.0.0.1 fixed the issue (i.e. Drupal is now making connections to the network socket). | In Linux and other *nixes, MySQL will assume you want to use a socket (Unix domain socket) if you connect to the host "localhost" (which would be the default hostname). You can override this in 3 ways: Specify a different hostname like 127.0.0.1 ( mysql -h 127.0.0.1 ) or your server's real hostname Specify that you want to use TCP and not a socket ( mysql --protocol tcp ) You can also easily make that the default my editing your my.cnf so it has this ([client] means any client: [client]
protocol=tcp You can see the full description of how MySQL decides how to connect here: http://dev.mysql.com/doc/refman/5.5/en/connecting.html | {
"source": [
"https://serverfault.com/questions/337818",
"https://serverfault.com",
"https://serverfault.com/users/64204/"
]
} |
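Once the client is forced onto TCP as described above, you can confirm it and capture the traffic you originally wanted to analyze; a minimal sketch, assuming the loopback interface and the default port 3306:
# confirm the server listens on the loopback TCP socket
netstat -ltnp | grep 3306
# confirm a client really uses TCP rather than the unix socket
mysql --protocol=tcp -h 127.0.0.1 -u root -p -e 'status' | grep -i connection
# capture the MySQL traffic for later analysis
tcpdump -i lo -nn -s 0 -w /tmp/mysql.pcap tcp port 3306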
338,937 | I know that /dev/sda is the raw device, and that /dev/sda1 is the partition or virtual device. But I'm a little confused as to why the sda# only comes up some of the time, or only on certain systems. What causes this to occur? Perhaps the times when the sda# drives don't appear is when it is unpartitioned? Or perhaps it's not the same across hardware? And why can I mount both? (sometimes) Shouldn't the partition be the one mountable? Any resources or color you can give would be greatly appreciated. Thank you in advance. | On a modern system, a partition device will only appear if the partition actually exists. On a disk with an MBR partition table, partition numbers 1 through 4 correspond to the four slots in the partition table, called "primary" partitions. They don't have to be filled sequentially, so it's possible, for example, to have an sda2 but no sda1. Partition numbers 5 and up correspond to "logical drives" in an extended partition, and those are always numbered sequentially, so you can't have an sda6 without having an sda5 too. On a disk with a GPT partition table, there can be many more (typically up to 128) partitions, and all are "primary". So you could have a disk whose only partition is sda9, for example. If the disk has no partition table, then it'll have no partition devices, of course. Older systems — those using a static /dev rather than one managed by udev — will typically have device nodes for all the possible partition numbers, regardless of whether the partitions actually exist. (Trying to open the device file for a nonexistent partition will fail, of course.) It's possible to forego partitioning and put a filesystem directly on a disk. When you mount a block device, the filesystem driver typically looks for a superblock at a predetermined offset from the beginning of the device, and since the beginning of a partition is not the beginning of the disk itself, the superblock for a filesystem in a partition is located at a different place on the disk than the superblock for a filesystem created on the "whole-disk" device. So if the disk used to just have a filesystem, and then it was partitioned and a filesystem was created in a partition, the old superblock might still be there, e.g. in the small gap before the beginning of the first partition. So the disk still appears to have a filesystem on both the raw disk device and on the partition device, because whichever one you try to mount, when the filesystem driver goes looking for the superblock it'll find one. It's not actually safe to mount and use both filesystems, though, since they overlap on the disk. One may have important bookkeeping data in what the other thinks is free space. That's why it's a good idea to zero the beginning of a block device, to remove any unwanted superblocks, when you want to change a raw disk to a partitioned one, or vice versa, or change the type of filesystem used on a partition, etc. | {
"source": [
"https://serverfault.com/questions/338937",
"https://serverfault.com",
"https://serverfault.com/users/103523/"
]
} |
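A few commands that make the situation described above visible, plus the modern way to do the "zero the beginning" cleanup; blkid and wipefs ship with reasonably recent util-linux, and wipefs -a is destructive, so it is left commented out:
# show any filesystem signatures on the whole disk and on the partition
blkid /dev/sda /dev/sda1
# list stale filesystem/RAID signatures without touching them
wipefs /dev/sda
# erase them (DESTRUCTIVE - only when you really mean to re-use the disk)
# wipefs -a /dev/sda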
339,128 | This is a Canonical Question about RAID levels. What are: the RAID levels typically used (including the RAID-Z family)? deployments are they commonly found in? benefits and pitfalls of each? | RAID: Why and When RAID stands for Redundant Array of Independent Disks (some are taught "Inexpensive" to indicate that they are "normal" disks; historically there were internally redundant disks which were very expensive; since those are no longer available the acronym has adapted). At the most general level, a RAID is a group of disks that act on the same reads and writes. SCSI IO is performed on a volume ("LUN"), and these are distributed to the underlying disks in a way that introduces a performance increase and/or a redundancy increase. The performance increase is a function of striping: data is spread across multiple disks to allow reads and writes to use all the disks' IO queues simultaneously. Redundancy is a function of mirroring. Entire disks can be kept as copies, or individual stripes can be written multiple times. Alternatively, in some types of raid, instead of copying data bit for bit, redundancy is gained by creating special stripes that contain parity information, which can be used to recreate any lost data in the event of a hardware failure. There are several configurations that provide different levels of these benefits, which are covered here, and each one has a bias toward performance, or redundancy. An important aspect in evaluating which RAID level will work for you depends on its advantages and hardware requirements (E.g.: number of drives). Another important aspect of most of these types of RAID (0,1,5) is that they do not ensure the integrity of your data, because they are abstracted away from the actual data being stored. So RAID does not protect against corrupted files. If a file is corrupted by any means, the corruption will be mirrored or paritied and committed to the disk regardless. However, RAID-Z does claim to provide file-level integrity of your data . Direct attached RAID: Software and Hardware There are two layers at which RAID can be implemented on direct attached storage: hardware and software. In true hardware RAID solutions, there is a dedicated hardware controller with a processor dedicated to RAID calculations and processing. It also typically has a battery-backed cache module so that data can be written to disk, even after a power failure. This helps to eliminate inconsistencies when systems are not shut down cleanly. Generally speaking, good hardware controllers are better performers than their software counterparts, but they also have a substantial cost and increase complexity. Software RAID typically does not require a controller, since it doesn't use a dedicated RAID processor or a separate cache. Typically these operations are handled directly by the CPU. In modern systems, these calculations consume minimal resources, though some marginal latency is incurred. RAID is handled by either the OS directly, or by a faux controller in the case of FakeRAID . Generally speaking, if someone is going to choose software RAID, they should avoid FakeRAID and use the OS-native package for their system such as Dynamic Disks in Windows, mdadm/LVM in Linux, or ZFS in Solaris, FreeBSD, and other related distributions. FakeRAID use a combination of hardware and software which results in the initial appearance of hardware RAID, but the actual performance of software RAID. 
Additionally it is commonly extremely difficult to move the array to another adapter (should the original fail). Centralized Storage The other place RAID is common is on centralized storage devices, usually called a SAN (Storage Area Network) or a NAS (Network Attached Storage). These devices manage their own storage and allow attached servers to access the storage in various fashions. Since multiple workloads are contained on the same few disks, having a high level of redundancy is generally desirable. The main difference between a NAS and a SAN is block vs. file system level exports. A SAN exports a whole "block device" such as a partition or logical volume (including those built on top of a RAID array). Examples of SANs include Fibre Channel and iSCSI. A NAS exports a "file system" such as a file or folder. Examples of NASs include CIFS/SMB (Windows file sharing) and NFS. RAID 0 Good when: Speed at all costs! Bad when: You care about your data RAID0 (aka Striping) is sometimes referred to as "the amount of data you will have left when a drive fails". It really runs against the grain of "RAID", where the "R" stands for "Redundant". RAID0 takes your block of data, splits it up into as many pieces as you have disks (2 disks → 2 pieces, 3 disks → 3 pieces) and then writes each piece of the data to a separate disk. This means that a single disk failure destroys the entire array (because you have Part 1 and Part 2, but no Part 3), but it provides very fast disk access. It is not often used in production environments, but it could be used in a situation where you have strictly temporary data that can be lost without repercussions. It is used somewhat commonly for caching devices (such as an L2Arc device). The total usable disk space is the sum of all the disks in the array added together (e.g. 3x 1TB disks = 3TB of space). RAID 1 Good when: You have limited number of disks but need redundancy Bad when: You need a lot of storage space RAID 1 (aka Mirroring) takes your data and duplicates it identically on two or more disks (although typically only 2 disks). If more than two disks are used the same information is stored on each disk (they're all identical). It is the only way to ensure data redundancy when you have less than three disks. RAID 1 sometimes improves read performance. Some implementations of RAID 1 will read from both disks to double the read speed. Some will only read from one of the disks, which does not provide any additional speed advantages. Others will read the same data from both disks, ensuring the array's integrity on every read, but this will result in the same read speed as a single disk. It is typically used in small servers that have very little disk expansion, such as 1RU servers that may only have space for two disks or in workstations that require redundancy. Because of its high overhead of "lost" space, it can be cost prohibitive with small-capacity, high-speed (and high-cost) drives, as you need to spend twice as much money to get the same level of usable storage. The total usable disk space is the size of the smallest disk in the array (e.g. 2x 1TB disks = 1TB of space). RAID 1E The 1E RAID level is similar to RAID 1 in that data is always written to (at least) two disks. But unlike RAID1, it allows for an odd number of disks by simply interleaving the data blocks among several disks. Performance characteristics are similar to RAID1, fault tolerance is similar to RAID 10. 
This scheme can be extended to odd numbers of disks more than three (possibly called RAID 10E, though rarely). RAID 10 Good when: You want speed and redundancy Bad when: You can't afford to lose half your disk space RAID 10 is a combination of RAID 1 and RAID 0. The order of the 1 and 0 is very important. Say you have 8 disks, it will create 4 RAID 1 arrays, and then apply a RAID 0 array on top of the 4 RAID 1 arrays. It requires at least 4 disks, and additional disks have to be added in pairs. This means that one disk from each pair can fail. So if you have sets A, B, C and D with disks A1, A2, B1, B2, C1, C2, D1, D2, you can lose one disk from each set (A,B,C or D) and still have a functioning array. However, if you lose two disks from the same set, then the array is totally lost. You can lose up to (but not guaranteed) 50% of the disks. You are guaranteed high speed and high availability in RAID 10. RAID 10 is a very common RAID level, especially with high capacity drives where a single disk failure makes a second disk failure more likely before the RAID array is rebuilt. During recovery, the performance degradation is much lower than its RAID 5 counterpart as it only has to read from one drive to reconstruct the data. The available disk space is 50% of the sum of the total space. (e.g. 8x 1TB drives = 4TB of usable space). If you use different sizes, only the smallest size will be used from each disk. It is worth noting that the Linux kernel's software raid driver called md allows for RAID 10 configurations with an odd amount of drives , i.e. a 3 or 5 disk RAID 10. RAID 01 Good when: never Bad when: always It is the reverse of RAID 10. It creates two RAID 0 arrays, and then puts a RAID 1 over the top. This means that you can lose one disk from each set (A1, A2, A3, A4 or B1, B2, B3, B4). It's very rare to see in commercial applications, but is possible to do with software RAID. To be absolutely clear: If you have a RAID10 array with 8 disks and one dies (we'll call it A1) then you'll have 6 redundant disks and 1 without redundancy. If another disk dies there's a 85% chance your array is still working. If you have a RAID01 array with 8 disks and one dies (we'll call it A1) then you'll have 3 redundant disks and 4 without redundancy. If another disk dies there's a 43% chance your array is still working. It provides no additional speed over RAID 10, but substantially less redundancy and should be avoided at all costs. RAID 5 Good when: You want a balance of redundancy and disk space or have a mostly random read workload Bad when: You have a high random write workload or large drives RAID 5 has been the most commonly-used RAID level for decades. It provides the system performance of all the drives in the array (except for small random writes, which incur a slight overhead). It uses a simple XOR operation to calculate parity. Upon single drive failure, the information can be reconstructed from the remaining drives using the XOR operation on the known data. Unfortunately, in the event of a drive failure, the rebuilding process is very IO-intensive. The larger the drives in the RAID, the longer the rebuild will take, and the higher the chance for a second drive failure. Since large slow drives both have a lot more data to rebuild and a lot less performance to do it with, it is not usually recommended to use RAID 5 with anything 7200 RPM or lower. 
Perhaps the most critical issue with RAID 5 arrays, when used in consumer applications, is that they are almost guaranteed to fail when the total capacity exceeds 12TB. This is because the unrecoverable read error (URE) rate of SATA consumer drives is one per every 10 14 bits, or ~12.5TB. If we take an example of a RAID 5 array with seven 2 TB drives: when a drive fails there are six drives left. In order to rebuild the array the controller needs to read through six drives at 2 TB each. Looking at the figure above it is almost certain another URE will occur before the rebuild has finished. Once that happens the array and all data on it is lost. http://www.zdnet.com/article/why-raid-5-stops-working-in-2009 However the URE/data loss/array failure with RAID 5 issue in consumer drives has been somewhat mitigated by the fact that most hard disk manufacturers have increased their newer drives' URE ratings to one in 10 15 bits. As always, check the specification sheet before buying! https://www.zdnet.com/article/why-raid-5-still-works-usually/ It is also imperative that RAID 5 be put behind a reliable (battery-backed) write cache. This avoids the overhead for small writes, as well as flaky behaviour that can occur upon a failure in the middle of a write. RAID 5 is the most cost-effective solution of adding redundant storage to an array, as it requires the loss of only 1 disk (E.g. 12x 146GB disks = 1606GB of usable space). It requires a minimum of 3 disks. RAID 6 Good when: You want to use RAID 5, but your disks are too large or slow Bad when: You have a high random write workload RAID 6 is similar to RAID 5 but it uses two disks worth of parity instead of just one (the first is XOR, the second is a LSFR), so you can lose two disks from the array with no data loss. The write penalty is higher than RAID 5 and you have one less disk of space. It is worth considering that eventually a RAID 6 array will encounter similar problems as a RAID 5. Larger drives cause larger rebuild times and more latent errors, eventually leading to a failure of the entire array and loss of all data before a rebuild has completed. http://www.zdnet.com/article/why-raid-6-stops-working-in-2019 http://queue.acm.org/detail.cfm?id=1670144 RAID 50 Good when: You have a lot of disks that need to be in a single array and RAID 10 isn't an option because of capacity Bad when: You have so many disks that many simultaneous failures are possible before rebuilds complete, or when you don't have many disks RAID 50 is a nested level, much like RAID 10. It combines two or more RAID 5 arrays and stripes data across them in a RAID 0. This offers both performance and multiple disk redundancy, as long as multiple disks are lost from different RAID 5 arrays. In a RAID 50, disk capacity is n-x, where x is the number of RAID 5s that are striped across. For example, if a simple 6-disk RAID 50, the smallest possible, if you had 6x1TB disks in two RAID 5s that were then striped across to become a RAID 50, you would have 4TB usable storage. RAID 60 Good when: You have a similar use case to RAID 50, but need more redundancy Bad when: You don't have a substantial number of disks in the array RAID 6 is to RAID 60 as RAID 5 is to RAID 50. Essentially, you have more than one RAID 6 that data is then striped across in a RAID 0. This setup allows for up to two members of any individual RAID 6 in the set to fail without data loss. 
Rebuild times for RAID 60 arrays can be substantial, so it's usually a good idea to have one hot-spare for each RAID 6 member in the array. In a RAID 60, disk capacity is n-2x, where x is the number of RAID 6s that are striped across. For example, if a simple 8 disk RAID 60, the smallest possible, if you had 8x1TB disks in two RAID 6s that were then striped across to become a RAID 60, you would have 4TB usable storage. As you can see, this gives the same amount of usable storage that a RAID 10 would give on an 8 member array. While RAID 60 would be slightly more redundant, the rebuild times would be substantially larger. Generally, you want to consider RAID 60 only if you have a large number of disks. RAID-Z Good when: You are using ZFS on a system that supports it Bad when: Performance demands hardware RAID acceleration RAID-Z is a bit complicated to explain since ZFS radically changes how storage and file systems interact. ZFS encompasses the traditional roles of volume management (RAID is a function of a Volume Manager) and file system. Because of this, ZFS can do RAID at the file's storage block level rather than at the volume's strip level. This is exactly what RAID-Z does, write the file's storage blocks across multiple physical drives including a parity block for each set of stripes. An example may make this much more clear. Say you have 3 disks in a ZFS RAID-Z pool, the block size is 4KB. Now you write a file to the system that is exactly 16KB. ZFS will split that into four 4KB blocks (as would a normal operating system); then it will calculate two blocks of parity. Those six blocks will be placed on the drives similar to how RAID-5 would distribute data and parity. This is an improvement over RAID5 in that there was no reading of existing data stripes to calculate the parity. Another example builds on the previous. Say the file was only 4KB. ZFS will still have to build one parity block, but now the write load is reduced to 2 blocks. The third drive will be free to service any other concurrent requests. A similar effect will be seen anytime the file being written is not a multiple of the pool's block size multiplied by the number of drives less one (ie [File Size] <> [Block Size] * [Drives - 1]). ZFS handling both Volume Management and File System also means you don't have to worry about aligning partitions or stripe-block sizes. ZFS handles all that automatically with the recommended configurations. The nature of ZFS counteracts some of the classic RAID-5/6 caveats. All writes in ZFS are done in a copy-on-write fashion; all changed blocks in a write operation are written to a new location on disk, instead of overwriting the existing blocks. If a write fails for any reason, or the system fails mid-write, the write transaction either occurs completely after system recovery (with the help of the ZFS intent log) or does not occur at all, avoiding potential data corruption. Another issue with RAID-5/6 is potential data loss or silent data corruption during rebuilds; regular zpool scrub operations can help to catch data corruption or drive issues before they cause data loss, and checksumming of all data blocks will ensure that all corruption during a rebuild is caught. The main disadvantage to RAID-Z is that it is still software raid (and suffers from the same minor latency incurred by the CPU calculating the write load instead of letting a hardware HBA offload it). This may be resolved in the future by HBAs that support ZFS hardware acceleration. 
Other RAID and Non-Standard Functionality Because there's no central authority enforcing any sort of standard functionality, the various RAID levels have evolved and been standardized by prevalent use. Many vendors have produced products which deviate from the above descriptions. It's also quite common for them to invent some fancy new marketing terminology to describe one of the above concepts (this happens most frequently in the SOHO market). When possible, try to get the vendor to actually describe the functionality of the redundancy mechanism (most will volunteer this information, as there's really no secret sauce anymore). Worth mentioning, there are RAID 5-like implementations which allow you to start an array with only two disks. It would store data on one stripe and parity on the other, similar to RAID 5 above. This would perform like RAID 1 with the extra overhead of the parity calculation. The advantage is that you could add disks to the array by recalculating the parity. | {
"source": [
"https://serverfault.com/questions/339128",
"https://serverfault.com",
"https://serverfault.com/users/10472/"
]
} |
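Not taken from the answer above, but a hedged Linux software-RAID sketch of the RAID 10 layout it recommends, using the md driver mentioned in the RAID 10 section; device names are placeholders:
# 4-disk RAID 10 with the Linux md driver
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat                 # watch the initial sync
mdadm --detail /dev/md0          # layout, state, and which disk is which
# rough usable capacity for RAID 10: (number of disks / 2) * smallest disk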
339,282 | I need a comprehensive and complex set of performance counters in Windows Performance Monitor.
At this point, every time I use Performance Monitor I have to add the counters one by one. Is there any way to save the counter set and load it for later use?
Thank you, | A colleague figured out how to achieve this. Instead of launching Performance Monitor directly: Launch the Microsoft Management Console (mmc.exe) File -> Add/Remove Snap-ins Select Performance Monitor, select Add >, select OK. Add your desired Counters as usual File -> Save As... The resulting .msc file will allow you to restore the Performance Monitor with your saved Counters! | {
"source": [
"https://serverfault.com/questions/339282",
"https://serverfault.com",
"https://serverfault.com/users/103612/"
]
} |
339,426 | Through reasons that don't warrant exhaustive discussion, I find myself in charge of 10 servers: A domain controller-~500 hosts/~350 users IIS web server-This is where we make our money SQL server-The crown jewels Exchange server Linux box for data entry AV server Backup server A few others tossed around The company where I work believes everybody is replaceable and therefore believes they can pay a minimum wage for any position. The IT manager and Sysadmin quit recently and I think I was the only person who did not take a big step backward when the call went out for volunteers. This also explains why someone with my background is in this position. That is the reality, as much as I wish it otherwise. What are the things I should be doing to keep those systems running? There is no written procedure left behind and I crammed the A+ and Network+ certs in the last two months but that leaves me with some theory and no practical experience. I am in the process of teaching myself powershell but from here to there is a long way. I have no scripting or programming experience. What tasks should I be performing? What practices should I implement? I understand I am probably hosed but a lifeline to get me through would be helpful. | Honestly, I would find another job, unless your current task is just to keep everything running until they hire a new SysAdmin. You are being setup for failure. You are doing the job of at least two people if this is all hosted locally and nothing is documented. Don't worry about the scripting or programming anything just yet. Get a handle on keeping everything running. Are you in charge of the corporate firewall too? The quick and dirty daily tasks I see you needing to do are (in no particular order): check the nightly backups check the exchange queues to make sure they are processing check the SQL backups check the AV server for alerts if anything failed to scan or update | {
"source": [
"https://serverfault.com/questions/339426",
"https://serverfault.com",
"https://serverfault.com/users/103654/"
]
} |
339,469 | Let's say we have an SSL certificate for a site. According to a web browser, the certificate expires tomorrow, Dec 10 2011. OK, but that glosses over time zones. When will it expire, exactly? 00:00 local time of the server (e.g. ET) 00:00 local time of the user browsing the site (wherever) 00:00 UTC ? (Context of question: An admin who likes to wait until the last day before expiration, to set up the new cert. Why? To "get the most value out of it", he says. I don't follow that logic, exactly, and probably he should just replace it a few days earlier? But anyway I'm concerned/curious whether the cert may stop working for some/all users, before 00:00 our local time.) | Almost all cert vendors will renew a cert for the additional whole year (or whatever time frame) for a month or so before the previous one expires. So if your cert was good for Dec 10, 2010 to Dec 10, 2011, you can get a new cert in November and it'll be good for Nov 20, 2011 to Dec 10, 2012. That way you don't have to worry about "getting the most value out of it". To answer the question, certs specify the expiry down to the second, and include a time zone. You can feed your public cert through openssl x509 -in Certificate_File.pem -text and it will output the Validity range. The following is from one of my personal websites from last year: Not Before: Apr 20 20:48:59 2010 GMT
Not After : Jun 5 01:52:13 2011 GMT | {
"source": [
"https://serverfault.com/questions/339469",
"https://serverfault.com",
"https://serverfault.com/users/64837/"
]
} |
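Two ways to read the Not After timestamp mentioned above straight from the command line; the hostname is a placeholder:
# from a certificate file
openssl x509 -noout -enddate -in Certificate_File.pem
# from a live server (SNI name included for name-based virtual hosts)
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null \
  | openssl x509 -noout -dates
# exit non-zero if the cert expires within 14 days - handy for monitoring
openssl x509 -noout -checkend $((14*24*3600)) -in Certificate_File.pem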
339,534 | I have just purchased a new server that will be the new primary domain controller. I was wondering if anyone knew any articles or tutorials on how to do this change over? I would imagine it is just simply setting up the role and importing a backup of the Active Directory from the old domain controller. I just want to make sure I'm not missing any crucial tasks in between. | Add new computer to domain Promote system to a domain controller ( dcpromo ) Transfer FSMO roles Verify/Make the new system a Global Catalog . Wait some time for replication to take place. Run dcdiag/ repadmin and so on to make sure everything transferred Demote old system (dcpromo) Double check DNS zones & AD to make sure old system was removed. Migrate any other data or services as needed. Of course you could leave the old system up so you have another spare DC. | {
"source": [
"https://serverfault.com/questions/339534",
"https://serverfault.com",
"https://serverfault.com/users/102789/"
]
} |
339,814 | I was playing around with some variations of date like DATE = $(date) but that didn't work either crontab -e CRONLOG=/tmp/log/crontab.log
DATEVAR=`date +20\%y\%m\%d_\%H\%M\%S`
* * * * * echo $DATEVAR >> /tmp/log/crontab.log
*/2 * * * * echo "$DATEVAR hello" >> ${CRONLOG}
*/1 * * * * echo 'every minute' >> ${CRONLOG} this just outputs the text as is... I want to create a log entry in crontab.log with a timestamp on each update How can I do this on CentOS 6? UPDATE DATEVAR=date +20%y%m%d_%H%M%S
*/1 * * * * /bin/echo [CRON] $($(DATEVAR)) >> /tmp/log/crontab.log rendered only [CRON] and NOTHING when I tried it =/ | Cron is not a shell - it does not parse commands in the same way that a shell does. As such, your variable is assigned as if it was static text. There are three solutions I know of to this problem: Option 1: Use a shell script to generate your command, include whatever variables and logic you want - and call that shell script from cron. * * * * * /path/to/myscript.sh Where myscript.sh : DATEVAR=`date +20\%y\%m\%d_\%H\%M\%S`
echo $DATEVAR >> /tmp/crontab.log Option 2: Include the date command directly in your command, and, since the entire command is passed to the shell, the date will be processed and replaced with an actual date. * * * * * /bin/echo `date +20\%y\%m\%d_\%H\%M\%S` >> /tmp/crontab.log Option 3: Set the string variable in cron, and pass that to your command to be processed (note - the percent signs do not need to be escaped, and the variable itself is wrapped in $() to execute it in a separate shell - backticks should work the same): DATEVAR=date +20%y%m%d_%H%M%S
* * * * * /bin/echo $($DATEVAR) >> /tmp/crontab.log (In all the cases above, you can, of course, use a variable for the log path, instead of 'hard coding' it.) | {
"source": [
"https://serverfault.com/questions/339814",
"https://serverfault.com",
"https://serverfault.com/users/21343/"
]
} |
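One more variation on option 2 above, using $() instead of backticks; note that every % still has to be escaped inside a crontab, which is the detail that usually bites people:
# crontab entry: one timestamped log line per minute
* * * * * echo "$(date '+\%Y-\%m-\%d \%H:\%M:\%S') hello" >> /tmp/log/crontab.log 2>&1
# the same command pasted into an interactive shell must NOT have the
# backslashes: date '+%Y-%m-%d %H:%M:%S'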
339,824 | Here are the rules I set up: iptables -P INPUT DROP
iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 443 -j ACCEPT When I try to make an outgoing connection, it gets blocked. What's missing? Here's the output from iptables -n -L Chain INPUT (policy DROP)
target prot opt source destination
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 source IP range 93.0.0.0-93.255.255.255
tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 state NEW recent: SET name: DEFAULT side: source
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 state NEW recent: UPDATE seconds: 60 hit_count: 15 name: DEFAULT side: source
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination Note.. the first rule is there to block a particular range and the next two are supposed to limit the number of requests anyone can make within 60 seconds. | Cron is not a shell - it does not parse commands in the same way that a shell does. As such, your variable is assigned as if it was static text. There are three solutions I know of to this problem: Option 1: Use a shell script to generate your command, include whatever variables and logic you want - and call that shell script from cron. * * * * * /path/to/myscript.sh Where myscript.sh : DATEVAR=`date +20\%y\%m\%d_\%H\%M\%S`
echo $DATEVAR >> /tmp/crontab.log Option 2: Include the date command directly in your command, and, since the entire command is passed to the shell, the date will be processed and replaced with an actual date. * * * * * /bin/echo `date +20\%y\%m\%d_\%H\%M\%S` >> /tmp/crontab.log Option 3: Set the string variable in cron, and pass that to your command to be processed (note - the percent signs do not need to be escaped, and the variable itself is wrapped in $() to execute it in a separate shell - backticks should work the same): DATEVAR=date +20%y%m%d_%H%M%S
* * * * * /bin/echo $($DATEVAR) >> /tmp/crontab.log (In all the cases above, you can, of course, use a variable for the log path, instead of 'hard coding' it.) | {
"source": [
"https://serverfault.com/questions/339824",
"https://serverfault.com",
"https://serverfault.com/users/28207/"
]
} |
339,968 | When defining and testing new services in nagios I have been restarting nagios, then clicking the service, and rescheduling a check for as soon as possible, then waiting until the check happens. Is there a more efficient way to do this? I'd like to use the command line to run that particular check and get the output. | Sometimes I find it tricky figuring out exactly what a plugin is doing. To figure this out I set nagios into debug mode with the configuration like this. debug_level=2048 With nagios in debug mode I simply tail the debug_log file debug_file=/var/log/nagios3/nagios.debug . Force a check and you will see exactly how the command is being run. I wouldn't leave this setting on normally though, it is very verbose and fills your log file at a rapid rate. | {
"source": [
"https://serverfault.com/questions/339968",
"https://serverfault.com",
"https://serverfault.com/users/67923/"
]
} |
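A sketch of that workflow on a Debian-style Nagios 3 layout, plus running the discovered command by hand as the nagios user; the paths and the plugin shown are examples only:

# 1. Confirm debugging is on, then watch the debug log while forcing a recheck in the UI;
#    the exact plugin command line Nagios runs appears in this file.
grep -E '^debug_(level|file)' /etc/nagios3/nagios.cfg
tail -f /var/log/nagios3/nagios.debug

# 2. Copy that command line and run it as the nagios user to see its output and exit code.
sudo -u nagios /usr/lib/nagios/plugins/check_ping -H 192.0.2.1 -w 100.0,20% -c 500.0,60%
echo "exit code: $?"   # 0=OK 1=WARNING 2=CRITICAL 3=UNKNOWN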
340,307 | One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDOS attack on this sever. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned: Limit the rate of requests/minute (or seconds) from a particular IP address via something like IP tables (or maybe UFW?) Have enough resources to survive such an attack - or - Possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such a high demand) If using mySql, set up mySql connections so that they run sequentially so that slow queries won't bog down the system What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. ps: Notes about monitoring for DDOS would also be welcomed - perhaps with nagios? ;) | A DDOS (or even a DOS), in its essence, is a resource exhaustion. You will never be able to eliminate bottlenecks, as you can only push them farther away. On AWS, you are lucky because the network component is very strong - it would be very surprising to learn that the upstream link was saturated. However, the CPU, as well as disks I/O, are way easier to flood. The best course of action would be by starting some monitoring (local such as SAR, remote with Nagios and/or ScoutApp) and some remote logging facilities (Syslog-ng). With such setup, you will be able to identify which resources get saturated (network socket due to Syn flood ; CPU due to bad SQL queries or crawlers ; ram due to …). Don’t forget to have your log partition (if you don’t have remote logging enable) on an EBS volumes (to later study the logs). If the attack come through the web pages, the access log (or the equivalent) can be very useful. | {
"source": [
"https://serverfault.com/questions/340307",
"https://serverfault.com",
"https://serverfault.com/users/67923/"
]
} |
340,635 | I have opened port 443 through iptables : pkts bytes target prot opt in out source destination
45 2428 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
6 1009 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
141 10788 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
7 1140 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
6 360 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 And it is listening as netstat -a indicates: Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:6311 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 gauss:ssh ommited ESTABLISHED
tcp 0 0 gauss:ssh ommited ESTABLISHED
tcp6 0 0 localhost:8005 [::]:* LISTEN
tcp6 0 0 [::]:8009 [::]:* LISTEN
tcp6 0 0 [::]:www [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
tcp6 0 0 [::]:https [::]:* LISTEN
udp 0 0 *:mdns *:*
udp 0 0 *:52703 *:*
udp6 0 0 [::]:42168 [::]:*
udp6 0 0 [::]:mdns [::]:* However I can't ping port 443: PING 443 (0.0.1.187) 56(124) bytes of data.
^C
--- 443 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6006ms What's going on? | The ping utility does what it's supposed to, hit the ping interface using ICMP, you can't just ping any port you like with it. I'm sure there's a million ways to do it but most people just use 'telnet IP port', i.e. 'telnet 1.2.3.4 25' to test connection. | {
"source": [
"https://serverfault.com/questions/340635",
"https://serverfault.com",
"https://serverfault.com/users/104025/"
]
} |
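Since ICMP ping cannot target a TCP port, a quick sketch of checking the port itself; host and port are placeholders:

# netcat: -z only tests the connection, -v reports success or failure.
nc -zv 192.0.2.10 443

# or the telnet test suggested above; Ctrl-] then "quit" leaves the session.
telnet 192.0.2.10 443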
340,652 | I have a RAID 5 array (with LVM on top) on a CentOS 6 box. The array itself is an LSI StorageTek disk shelf with 14 drives connected to the server with a fibre channel cable. After rebooting the machine the RAID array won't come back up. Disk Utility in Gnome states that it is "Not running, partially assembled". I ran mdadm --assemble --scan, which said:
mdadm: No arrays found in config file or automatically /proc/mdstat says: Personalities : [raid6] [raid5] [raid4]
md127 : inactive sdf1[7] sdi1[4] sde1[8] sdj1[3] sdc1[10] sdg1[6] sdd1[9] sdn1[12] sdb1[11] sdm1[0] sda1[14] sdk1[2]
860171694 blocks super 1.2
unused devices: <none> I'm pretty new to managing RAID arrays on Linux (could you guess?) so I've reached the limit of my very limited knowledge on the subject. I'm optimistically hoping that it's in the process of being rebuilt, but from what I've seen I doubt it. Please can somebody give me a hint on how to fix it? | The ping utility does what it's supposed to, hit the ping interface using ICMP, you can't just ping any port you like with it. I'm sure there's a million ways to do it but most people just use 'telnet IP port', i.e. 'telnet 1.2.3.4 25' to test connection. | {
"source": [
"https://serverfault.com/questions/340652",
"https://serverfault.com",
"https://serverfault.com/users/104029/"
]
} |
340,837 | I'm currently snapshotting my ZFS-based NAS nightly and weekly, a process that has saved my ass a few times. However, while the creation of the snapshot is automatic (from cron), the deletion of old snapshots is still a manual task. Obviously there's a risk that if I get hit by a bus, or the manual task isn't carried out, the NAS will run out of disk space. Does anyone have any good ways / scripts they use to manage the number of snapshots stored on their ZFS systems? Ideally, I'd like a script that iterates through all the snapshots for a given ZFS filesystem and deletes all but the last n snapshots for that filesystem. E.g. I've got two filesystems, one called tank and another called sastank . Snapshots are named with the date on which they were created: sastank@AutoD-2011-12-13 so a simple sort command should list them in order. I'm looking to keep the last 2 week's worth of daily snapshots on tank , but only the last two days worth of snapshots on sastank . | You may find something like this a little simpler zfs list -t snapshot -o name | grep ^tank@Auto | tac | tail -n +16 | xargs -n 1 zfs destroy -r Output the list of the snapshot (names only) with zfs list -t snapshot -o name Filter to keep only the ones that match tank@Auto with grep ^tank@Auto Reverse the list (previously sorted from oldest to newest) with tac Limit output to the 16th oldest result and following with tail -n +16 Then destroy with xargs -n 1 zfs destroy -vr Deleting snapshots in reverse order is supposedly more efficient or sort in reverse order of creation. zfs list -t snapshot -o name -S creation | grep ^tank@Auto | tail -n +16 | xargs -n 1 zfs destroy -vr Test it with ...|xargs -n 1 echo . | {
"source": [
"https://serverfault.com/questions/340837",
"https://serverfault.com",
"https://serverfault.com/users/68794/"
]
} |
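A sketch of the same pipeline wrapped in a small script so each filesystem can keep a different number of snapshots; the dataset names and counts follow the question, and echo is left in as a dry-run guard:

#!/bin/sh
# prune_snaps: destroy all but the newest $2 "AutoD" snapshots of dataset $1.
prune_snaps() {
    zfs list -H -t snapshot -o name -S creation \
        | grep "^${1}@AutoD" \
        | tail -n +$(( $2 + 1 )) \
        | xargs -n 1 echo zfs destroy -r    # drop "echo" once the output looks right
}
prune_snaps tank 14
prune_snaps sastank 2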
340,865 | I have a situation where I want to connect to a Linux machine running VNC (lets call it VNCServer) which is behind two consecutive Linux machines i.e., to ssh into the VNCServer, I have to ssh into Gateway1 from my laptop, then from Gateway1 shell I ssh into the Gateway2 and then from that shell I finally ssh into VNCServer. I cannot change the network design and access flow Laptop-->Gateway1-->Gateway2-->Server. I have no root privileges on Gateway1 and all ports except 22 and 5901 are closed. Is there a way by which I can launch a VNC viewer on my laptop and access the VNCServer? I understand that it might be done using ssh tunneling features and I have putty on my Windows laptop (sorry, no Linux or Cygwin etc. can be installed on the work laptop). Any help will be greatly appreciated as this would make my life so easier! | Putty does support ssh tunnels, if you expand the Connection, SSH tree, you'll see an entry for tunnels. Local tunnels produce a localhost port opening on your windows machine that remotes to the ip address and port you specify. For instance, when I'm trying to RDP to a desktop at my house, I'll generally choose a random local port, something like 7789, then put the local ip address of the desktop (1.2.3.4:3389) as the remote host. Be sure to click "Add", then "Apply." At this point, when you rdp to 127.0.0.1:7789, you'll then connect to 1.2.3.4:3389 over the putty session. This is where the fun comes in. If you then setup a port tunnel on your intermediate box, setting up the local port you specified as the remote port in putty, you can then bounce through your putty, through the intermediate box your final destination. You'll still need to do a few ssh connects, but you'll be able to cross vnc or rdp directly from the windows system once you're set, which is what I believe you're looking to do. EXAMPLE Head over to the tunnels panel in Putty (Connections->SSH->Tunnels accessed either from the context menu if the ssh session is already active, or in the beginning connection screen when just starting putty) Create a tunnel with local source 15900, and remote source 127.0.0.1:15900 Connect (if not already connected) to Gateway1. On Gateway1, ssh -L 127.0.0.1:15900:VNCServerIP:5900 user@Gateway2 Once the ssh to Gateway2 is up, attempt to vnc to 127.0.0.1:15900 -- you should now see the VNC screen on the far side! ADDED BONUS -- not many people know this, but this process can also be used to proxy IPv6/IPv4 traffic as well. SSH doesn't care what protocol it uses for the tunnels, so you can theoretically access IPv6 only hosts from an IPv4 only system, given that the ssh server is dual stack (has both IPv4 and IPv6 addresses.) | {
"source": [
"https://serverfault.com/questions/340865",
"https://serverfault.com",
"https://serverfault.com/users/104112/"
]
} |
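For completeness, a sketch of the same two-hop tunnel with a command-line OpenSSH client instead of PuTTY; the host names and user are placeholders matching the description above:

# Hop 1 (on the laptop): forward local port 15900 to port 15900 on Gateway1.
ssh -L 15900:localhost:15900 user@gateway1

# Hop 2 (inside that Gateway1 session): forward Gateway1's 15900 on to the VNC server.
ssh -L 15900:vncserver:5900 user@gateway2

# Now point the VNC viewer on the laptop at 127.0.0.1:15900.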
341,043 | When I try to start Process Monitor from SysInternals on some 64 bit windows 7 machines,the process fails to start. There is no error message. I double click and nothing happens. Other 64 bit windows 7 computers work fine. Any ideas? | Here is what I found. The 32 bit Procmon.exe contains the 64 bit exe inside it as a binary resource. When the 32 bit exe starts, it extracts the 64 bit version out to a hidden file called Procmon64.exe and then executes that. For some reason this process fails on some Windows 7 installs. I managed to extract the 64 bit exe using Visual Studio 2010. Open Visual Studio and open the Procmon.exe file using the File->Open->File... menu In the resource tree, expand the "BINRES" node Right-click on the 1308 node and select Export... Name the exported resource Procmon-64.exe and save Run the extracted exe Don't name the extracted exe Procmon64.exe (no hyphen) because the 32 bit Procmon will try to delete it if it gets the chance. If you don't have Visual Studio, use a windows executable resource extractor like ResourcesExtract - http://www.nirsoft.net/utils/resources_extract.html | {
"source": [
"https://serverfault.com/questions/341043",
"https://serverfault.com",
"https://serverfault.com/users/2477/"
]
} |
341,045 | I find myself needing to limit the number of requests a particular IP can send for a particular URI (e.g. www.foo.com/somewhere/special ) while leaving the rest of the site unregulated. How would I configure that using the HttpLimitReqModule built into Nginx? | Here is what I found. The 32 bit Procmon.exe contains the 64 bit exe inside it as a binary resource. When the 32 bit exe starts, it extracts the 64 bit version out to a hidden file called Procmon64.exe and then executes that. For some reason this process fails on some Windows 7 installs. I managed to extract the 64 bit exe using Visual Studio 2010. Open Visual Studio and open the Procmon.exe file using the File->Open->File... menu In the resource tree, expand the "BINRES" node Right-click on the 1308 node and select Export... Name the exported resource Procmon-64.exe and save Run the extracted exe Don't name the extracted exe Procmon64.exe (no hyphen) because the 32 bit Procmon will try to delete it if it gets the chance. If you don't have Visual Studio, use a windows executable resource extractor like ResourcesExtract - http://www.nirsoft.net/utils/resources_extract.html | {
"source": [
"https://serverfault.com/questions/341045",
"https://serverfault.com",
"https://serverfault.com/users/46020/"
]
} |
341,143 | I was wondering if there's any way to display some kind of progress info when searching for files in Linux using find . I often find myself searching for files on a big disk and some kind of progress indicator would be very helpful, like a bar or at least the current directory "find" searches in. Are there any scripts that do that, or does find support some hooks? | With this trick you can see the current folder - but no progress bar - sorry. watch readlink -f /proc/$(pidof find)/cwd | {
"source": [
"https://serverfault.com/questions/341143",
"https://serverfault.com",
"https://serverfault.com/users/97969/"
]
} |
341,190 | My situation: Me (localhost) -> Server A (ip: 100.100.100.100) => (Server B (ip: 192.168.25.100), server....) I'm able to SSH into Server A since it has a public IP.
If I then want to connect to Server B, I would SSH to Server B with its IP (192.168.25.100). Example: from my PC: ssh [email protected], then on 100.100.100.100, ssh [email protected]. This would get me to Server B with SSH. What if I want to connect to Server B directly?
How can I do that? Example: from my PC: [email protected]. I have tried the following: ssh -L 22:localhost:22 [email protected] without success. | Your problem is in binding a listener to localhost:22; there's already an sshd listening on that. Tunnelling an ssh connection through an ssh connection is completely lawful, and I do it all the time, but you need to pick unused ports for your forwarding listeners. Try me% ssh [email protected] -L 2201:192.168.25.100:22 then me% ssh localhost -p 2201
"source": [
"https://serverfault.com/questions/341190",
"https://serverfault.com",
"https://serverfault.com/users/104227/"
]
} |
341,196 | I've been asked to compile a list of TCP/UDP and ports they use for the collection of application that comprise our product for a client who wants the information for their firewall. So I fired up TCPView some of the processes are so short lived that I can't register the information fast enough and there is seemingly no way of recording it. I tried Capsa Free which records the connection but doesn't record the process - they seem to expect you to match it up manually via the PID. Is there an application that will record the process, the protocol and teh local port it used for later consumption. I'd rather not manually dig through the source to find which application/dll uses what. | Your problem is in binding a listener to localhost:22; there's already an sshd listening on that. Tunnelling an ssh connection through an ssh connection is completely lawful, and I do it all the time, but you need to pick unused ports for your forwarding listeners. Try me% ssh [email protected] -L 2201:192.168.25.100:22 then me% ssh localhost -p 2201 You should end up on server B (unless something's already bound to me:2201, in which case, pick another port). | {
"source": [
"https://serverfault.com/questions/341196",
"https://serverfault.com",
"https://serverfault.com/users/1379/"
]
} |
341,199 | On a server, install git cd /
git init
git add .
git commit -a -m "Yes, this is server" Then get /.git/ to point to a network drive (SAN, NFS, Samba whatever) or different disk. Use a cron job every hour/day etc. to update the changes. The .git directory would contain a versioned copy of all the server files (excluding the useless/complicated ones like /proc, /dev etc.) For a non-important development server where I don't want the hassle/cost of setting it up on a proper backup system, and where backups would only be for convenience (I.E. we don't need to backup this server but it would save some time if things went wrong), could this be a valid backup solution or will it just fall over in a big pile of poop? | You're not a silly person. Using git as a backup mechanism can be attractive, and despite what other folks have said, git works just fine with binary files. Read this page from the Git Book for more information on this topic. Basically, since git is not using a delta storage mechanism, it doesn't really care what your files look like (but the utility of git diff is pretty low for binary files with a stock configuration). The biggest issue with using git for backup is that it does not preserve most filesystem metadata. Specifically, git does not record: file groups file owners file permissions (other than "is this executable") extended attributes You can solve this by writing tools to record this information explicitly into your repository, but it can be tricky to get this right. A Google search for git backup metadata yields a number of results that appear to be worth reading (including some tools that already attempt to compensate for the issues I've raised here). etckeeper was developed for backing up /etc and solves many of these problems. | {
"source": [
"https://serverfault.com/questions/341199",
"https://serverfault.com",
"https://serverfault.com/users/80776/"
]
} |
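A sketch of one way to compensate for the missing metadata, by dumping owner/group/mode of every tracked file into a file that is committed along with the data (GNU stat syntax; restoring from the dump is left out):

#!/bin/sh
cd / || exit 1
# One line per tracked path: owner, group, octal mode, name.
git ls-files -z | xargs -0 stat -c '%U %G %a %n' > .metadata
git add .metadata
git commit -a -m "backup $(date +%F_%T)"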
341,206 | I need to take different type of backups(code backup, database backup, user uploaded files backup) from amazon instances, I need to make sure this backups are running properly and if any backup fails I need to get mail (failure is may be because of hacks or server failure) What currently I am planning is to send mails after every backup to a mail address, and a script will read that mails, if any mail is missing it will shoot a mail to real email address. I am doing this because I need to maintain different clients web sites backup, can anybody suggests any better way, basically I am not a sys-admin guy I am a developer who is trying to solve my problem, sys-admins can please suggest any tools or better scripts to do this | You're not a silly person. Using git as a backup mechanism can be attractive, and despite what other folks have said, git works just fine with binary files. Read this page from the Git Book for more information on this topic. Basically, since git is not using a delta storage mechanism, it doesn't really care what your files look like (but the utility of git diff is pretty low for binary files with a stock configuration). The biggest issue with using git for backup is that it does not preserve most filesystem metadata. Specifically, git does not record: file groups file owners file permissions (other than "is this executable") extended attributes You can solve this by writing tools to record this information explicitly into your repository, but it can be tricky to get this right. A Google search for git backup metadata yields a number of results that appear to be worth reading (including some tools that already attempt to compensate for the issues I've raised here). etckeeper was developed for backing up /etc and solves many of these problems. | {
"source": [
"https://serverfault.com/questions/341206",
"https://serverfault.com",
"https://serverfault.com/users/94180/"
]
} |
341,207 | Server 2003 with 10 Windows7 client PCs. Server was rebuilt a month ago. Experiencing a dns issue where the clients cant resolve webpages or they are increibly slow to load. Nothing wrong with the broadband, when I point a single PC at the router's dns it resolves webpages fine but back to server ip and it behaves the same way. If I restart the dns service on the server, the problem seems to go away for a bit. Can anyone advise ? The DNS address on the servers lan card is: 127.0.0.1 I also have looked at other customer servers and they dont have reverse lookup zones, but this server has 4 entries so prehaps over configured? | You're not a silly person. Using git as a backup mechanism can be attractive, and despite what other folks have said, git works just fine with binary files. Read this page from the Git Book for more information on this topic. Basically, since git is not using a delta storage mechanism, it doesn't really care what your files look like (but the utility of git diff is pretty low for binary files with a stock configuration). The biggest issue with using git for backup is that it does not preserve most filesystem metadata. Specifically, git does not record: file groups file owners file permissions (other than "is this executable") extended attributes You can solve this by writing tools to record this information explicitly into your repository, but it can be tricky to get this right. A Google search for git backup metadata yields a number of results that appear to be worth reading (including some tools that already attempt to compensate for the issues I've raised here). etckeeper was developed for backing up /etc and solves many of these problems. | {
"source": [
"https://serverfault.com/questions/341207",
"https://serverfault.com",
"https://serverfault.com/users/104232/"
]
} |
341,400 | There's only one SAS, right? Serial attached SCSI? The female connectors: SATA2 looks like: ------------| |------- And SAS looks : ------------+-+------- For me, the SATA2 male connector looks like it has little corners in the middle that would prevent it from sliding into SAS, which doesn't have the little gap to allow the corners in. Is this correct? | It works. SATA discs are compatible with SAS. I have tons of them in SAS backplanes. Work like a charm. | {
"source": [
"https://serverfault.com/questions/341400",
"https://serverfault.com",
"https://serverfault.com/users/102280/"
]
} |
341,804 | I'm new to opening up ports in CentOS. I need to open up tcp port 8080 and have installed/ran nmap to find it is not open already. I've been reading about the iptables command, I have v1.3.5 installed but I really don't know where to start with it regarding opening up this port. I'd appreciate a code sample or at least a link to a guide to opening this port using iptables (or any other good method.) Thank you. | I always like to add a comment and limit scope in my firewall rules. If I was opening up tcp port 8080 from everywhere (no scope limiting needed) for Tomcat I would run the following command iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT -m comment --comment "Tomcat Server port" Then make sure to save your running iptables config so that it goes into effect after the next restart service iptables save Note: you'll need to have the comment module installed for that part to work, probably a good chance that it is if you are running Centos 5 or 6 P.S. If you want to limit scope you can use the -s flag. Here is an example on how to limit traffic to 8080 from the 192.168.1 subnet iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -s 192.168.1.0/24 -j ACCEPT -m comment --comment "Tomcat Server port" | {
"source": [
"https://serverfault.com/questions/341804",
"https://serverfault.com",
"https://serverfault.com/users/102161/"
]
} |
341,821 | My ASA 5510 has the following configuration for an interface. My Ubuntu box (2.6.35) connected to this network will correctly autoconf an IPv6 address, but it will not set a default route. interface Ethernet0/0.10
vlan 10
no shutdown
nameif inside
security-level 100
ip address 172.18.0.1 255.255.254.0
ipv6 address REMOVED:1::1/64
ipv6 nd prefix REMOVED:1::/64
ipv6 nd ra-interval 120
ipv6 enable Thus, ping6 REMOVED:1::1 works fine and if I manually add a default route for IPv6 it works fine. The resulting router advertisement looks like this: 01:06:42.253895 IP6 (class 0xe0, hlim 255, next-header ICMPv6 (58) payload length: 64) fe80::21c:58ff:fed3:ea36 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 64
hop limit 64, Flags [none], pref medium, router lifetime 1800s, reachable time 0s, retrans time 1000s
source link-address option (1), length 8 (1): 00:1c:58:d3:ea:36
0x0000: 001c 58d3 ea36
mtu option (5), length 8 (1): 1500
0x0000: 0000 0000 05dc
prefix info option (3), length 32 (4): REMOVED:1::/64, Flags [onlink, auto], valid time 2592000s, pref. time 604800s
0x0000: 40c0 0027 8d00 0009 3a80 0000 0000 XXXX
0x0010: XXXX XXXX 0001 0000 0000 0000 0000 How come I do not get a default gateway set? | I always like to add a comment and limit scope in my firewall rules. If I was opening up tcp port 8080 from everywhere (no scope limiting needed) for Tomcat I would run the following command iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT -m comment --comment "Tomcat Server port" Then make sure to save your running iptables config so that it goes into effect after the next restart service iptables save Note: you'll need to have the comment module installed for that part to work, probably a good chance that it is if you are running Centos 5 or 6 P.S. If you want to limit scope you can use the -s flag. Here is an example on how to limit traffic to 8080 from the 192.168.1 subnet iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -s 192.168.1.0/24 -j ACCEPT -m comment --comment "Tomcat Server port" | {
"source": [
"https://serverfault.com/questions/341821",
"https://serverfault.com",
"https://serverfault.com/users/67890/"
]
} |
342,228 | I had set up an Ubuntu instance with a Rails package, deployed my app, and it is working fine. But when I try to SSH in, it does not allow me to log in remotely and throws errors like: Host key verification failed. The problem seems to be persistent. I have attached the Elastic IP to that instance and I am not able to see the public DNS. My instance is running in the Singapore region. ssh debug output: OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 46.137.253.231 [46.137.253.231] port 22.
debug1: Connection established.
debug1: identity file st.pem type -1
debug1: identity file st.pem-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-4ubuntu6
debug1: match: OpenSSH_5.5p1 Debian-4ubuntu6 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is.
Please contact your system administrator.
Add correct host key in /home/ubuntu/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/ubuntu/.ssh/known_hosts:1
remove with: ssh-keygen -f "/home/ubuntu/.ssh/known_hosts" -R 46.137.253.231
RSA host key for 46.137.253.231 has changed and you have requested strict checking.
Host key verification failed. | When you connect to a ssh server your ssh client keeps a list of trusted hosts as key-value pairs of IP and ssh server finger print. With ec2 you often reuse the same IP with several server instances which causes conflict. If you have connected to an earlier ec2 instance with this IP, and now connect to a new instance with the same IP your computer will complain of "Host verification failed" as its previously stored pair no longer matches the new pair. The error message tells you how to fix it: Offending RSA key in /home/ubuntu/.ssh/known_hosts:1 remove with: ssh-keygen -f "/home/ubuntu/.ssh/known_hosts" -R 46.137.253.231" Alternative simply open /home/ubuntu/.ssh/known_hosts and delete line 1 (as indicated by the ":1"). You can now connect and receive a new host verification. Please note usually ssh's known_hosts file usually have stored a second line pair for hostname or ip6 value so you might need to remove a couple of lines. Warning: Host verification is important and it is a good reason why you get this warning. Make sure you are expecting host verification to fail. Do not remove the verification key-value pair if not certain. | {
"source": [
"https://serverfault.com/questions/342228",
"https://serverfault.com",
"https://serverfault.com/users/74544/"
]
} |
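A sketch of clearing the stale entry and trusting the new instance's key in one step; the IP comes from the error above, the remote user name and key file are assumptions taken from the question, and the fetched fingerprint should still be verified out-of-band if the host matters:

# Remove every stored key for the reused Elastic IP.
ssh-keygen -R 46.137.253.231

# Fetch and store the new instance's host key, then connect as usual.
ssh-keyscan -t rsa 46.137.253.231 >> ~/.ssh/known_hosts
ssh -i st.pem ubuntu@46.137.253.231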
342,284 | While looking into how to set up some static DNS-SD services in our network, I came across http://www.dns-sd.org/ServerStaticSetup.html , which states that Active Directory's DNS server does not support DNS names with spaces in them. Does anyone know if this is still true (as the page feels rather old)? Update: I'm primarily referring to PTR and SRV records, not A/CNAME records. | A domain name can include any binary octet in the range 0 to 255. However if your AD entries represent host names , then a space is not a valid character. A host name (i.e. a domain name that points to an A or AAAA record) must follow the rules from RFC 1123 , which essentially restricts the legal characters to LDH ("letter digit hyphen"). Hence for other entries it's perfectly possible that MS have misinterpreted the RFCs. They won't be the first, and they certainly won't be the last. References §5.1 of RFC 1035 : Quoting conventions allow arbitrary characters to be
stored in domain names. and §6.1.3.5. of RFC 1123 : The DNS defines domain name syntax very generally -- a string of labels each containing up to 63 8-bit octets, separated by dots and §11 of RFC 2181 : any binary string whatever can be used as the label of any resource record | {
"source": [
"https://serverfault.com/questions/342284",
"https://serverfault.com",
"https://serverfault.com/users/104553/"
]
} |
342,473 | I was wondering if it is a good idea to replace a hard drive in a (fairly) system-critical database server after a certain number of years of use, before it dies. For example, I was thinking of replacing a hard drive after 3 years of use. Since I have many hard drives across servers, I could stagger which hard drives are replaced. Is this a good idea, or do people just wait for the failure? | Google did a study on disk drives and found very little correlation between disk age and failure. SMART tests also do not show failures. My local observations (>500 servers) is similar. I have new disks fail quickly while old ones still chug along. My general rule is if we seen disk issues (SMART or system errors) we replace it immediately. If not, then the drives get cycled out when the server does. Google Study http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf | {
"source": [
"https://serverfault.com/questions/342473",
"https://serverfault.com",
"https://serverfault.com/users/87222/"
]
} |
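A sketch of the kind of SMART spot-check the answer relies on, using smartmontools; the device name is a placeholder:

# Overall health verdict (PASSED/FAILED) as reported by the drive itself.
smartctl -H /dev/sda

# Attributes that tend to precede real trouble.
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'

# Run an extended self-test in the background; read results later with "smartctl -a /dev/sda".
smartctl -t long /dev/sda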
342,626 | I installed postgresql via Homebrew. I have the following issue after upgrading: FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 9.0, which is not compatible with this version 9.1.2. Any tips on how to upgrade? I tried the following: $ pg_upgrade -d /usr/local/var/postgres/ -D /usr/local/var/postgres -b
/usr/local/Cellar/postgresql/9.0.4/bin -B /usr/local/Cellar/postgresql/9.1.2/bin It didn't work. Here's the output. Performing Consistency Checks
Checking current, bin, and data directories ok
Checking cluster versions
This utility can only upgrade to PostgreSQL version 9.1.
Failure, exiting error. | For me on OS X with Homebrew it was like this. Installed new postgres with Homebrew (started getting the error) mv /usr/local/var/postgres /usr/local/var/postgres.old initdb -D /usr/local/var/postgres pg_upgrade -b /usr/local/Cellar/postgresql/9.0.4/bin -B /usr/local/Cellar/postgresql/9.1.2/bin -d /usr/local/var/postgres.old -D /usr/local/var/postgres ./delete_old_cluster.sh (this script is created for you automatically in current dir when you go through above steps) rm delete_old_cluster.sh | {
"source": [
"https://serverfault.com/questions/342626",
"https://serverfault.com",
"https://serverfault.com/users/33304/"
]
} |
343,442 | Recently our infrastructure team told our development team that you do not need a certificate for https. They mentioned that the only benefit of buying a certificate was to give the consumer peace of mind that they are connecting to the correct website. This goes against everything I assumed about https. I read wikipedia and it mentions you need either a trusted certificate or a self signed certificate to configure https. Is it possible to configure IIS to respond to https without any certificate? | No. You must have a certificate. It can be self signed, but there must be a public/private key pair in place to exchange the session symmetric key between server and client to encrypt data. | {
"source": [
"https://serverfault.com/questions/343442",
"https://serverfault.com",
"https://serverfault.com/users/74559/"
]
} |
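Since the answer notes a self-signed certificate is enough to provide the key pair, a sketch of creating one with OpenSSL for test use; file names and the subject are placeholders, and the resulting .pfx is the form the Windows/IIS certificate store imports:

# Self-signed certificate plus private key, valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt -subj "/CN=www.example.com"

# Bundle into PKCS#12 for import (prompts for an export password).
openssl pkcs12 -export -inkey server.key -in server.crt -out server.pfx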
343,705 | I have a script that writes to a few files but I need them to be a specific size. So I'm wondering if there's a way of appending a specific number of null bytes to a file by using standard command line tools (e.g., by copying from /dev/zero )? | truncate is much faster than dd . To grow the file by 10 bytes use: truncate -s +10 file.txt | {
"source": [
"https://serverfault.com/questions/343705",
"https://serverfault.com",
"https://serverfault.com/users/90433/"
]
} |
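A sketch covering both readings of the question (appending N null bytes versus padding to an exact target size); names and sizes are examples:

# Append exactly 10 null bytes:
truncate -s +10 file.txt
# or, if truncate is not available:
dd if=/dev/zero bs=1 count=10 >> file.txt

# Pad (or shrink) the file to exactly 1 MiB, whatever its current length:
truncate -s 1M file.txt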
343,941 | I'm looking for a reliable and up-to-date list of WHOIS Servers to use in a whois script. Since the list changes frequently, it'd be nice if there were a resource I could refer to rather than having to update the script frequently. | There are several well-known ways of locating whois servers for TLDs, the IANA database is probably the closest to what the question asks for, however there are other sources that may be more useful in practice. From IANA (access via whois and http) Browse http://www.iana.org/domains/root/db or search the whois database at whois.iana.org for the TLD. Each entry has a field specifying the whois server. Example: $ whois -h whois.iana.org com
[Querying whois.iana.org]
[whois.iana.org]
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object
domain: COM
organisation: VeriSign Global Registry Services
address: 12061 Bluemont Way
address: Reston Virginia 20190
address: United States
contact: administrative
name: Registry Customer Service
organisation: VeriSign Global Registry Services
address: 12061 Bluemont Way
address: Reston Virginia 20190
address: United States
phone: +1 703 925-6999
fax-no: +1 703 948 3978
e-mail: [email protected]
contact: technical
name: Registry Customer Service
organisation: VeriSign Global Registry Services
address: 12061 Bluemont Way
address: Reston Virginia 20190
address: United States
phone: +1 703 925-6999
fax-no: +1 703 948 3978
e-mail: [email protected]
nserver: A.GTLD-SERVERS.NET 192.5.6.30 2001:503:a83e:0:0:0:2:30
nserver: B.GTLD-SERVERS.NET 192.33.14.30 2001:503:231d:0:0:0:2:30
nserver: C.GTLD-SERVERS.NET 192.26.92.30
nserver: D.GTLD-SERVERS.NET 192.31.80.30
nserver: E.GTLD-SERVERS.NET 192.12.94.30
nserver: F.GTLD-SERVERS.NET 192.35.51.30
nserver: G.GTLD-SERVERS.NET 192.42.93.30
nserver: H.GTLD-SERVERS.NET 192.54.112.30
nserver: I.GTLD-SERVERS.NET 192.43.172.30
nserver: J.GTLD-SERVERS.NET 192.48.79.30
nserver: K.GTLD-SERVERS.NET 192.52.178.30
nserver: L.GTLD-SERVERS.NET 192.41.162.30
nserver: M.GTLD-SERVERS.NET 192.55.83.30
ds-rdata: 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CFC41A5766
whois: whois.verisign-grs.com
status: ACTIVE
remarks: Registration information: http://www.verisign-grs.com
created: 1985-01-01
changed: 2012-02-15
source: IANA
$ From whois-servers.net (access via DNS) The name tld.whois-servers.net is a CNAME to the appropriate whois-server. Somewhat unclear who actually maintains this but it seems pretty popular as it's very easy to use this with pretty much any whois client (and some clients default to using this service). Example: $ dig com.whois-servers.net +noall +answer
; <<>> DiG 9.9.4-P2-RedHat-9.9.4-15.P2.fc20 <<>> com.whois-servers.net +noall +answer
;; global options: +cmd
com.whois-servers.net. 600 IN CNAME whois.verisign-grs.com.
whois.verisign-grs.com. 5 IN A 199.7.55.74
$ From the registry itself (access via DNS) Many registries publish the address of their whois server in DNS directly in the relevant zone as a _nicname._tcp SRV record . Example: $ dig _nicname._tcp.us SRV +noall +answer
; <<>> DiG 9.9.4-P2-RedHat-9.9.4-15.P2.fc20 <<>> _nicname._tcp.us SRV +noall +answer
;; global options: +cmd
_nicname._tcp.us. 518344 IN SRV 0 0 43 whois.nic.us.
$ | {
"source": [
"https://serverfault.com/questions/343941",
"https://serverfault.com",
"https://serverfault.com/users/79267/"
]
} |
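A sketch of scripting the whois-servers.net lookup described above so nothing has to be hard-coded; the domain is an example and two-level TLDs such as co.uk are not handled:

#!/bin/sh
domain=${1:-example.com}
tld=${domain##*.}

# Resolve the registry's whois server for this TLD, then query it directly.
server=$(dig +short "${tld}.whois-servers.net" CNAME | head -n 1 | sed 's/\.$//')
echo "whois server for .${tld}: ${server}"
whois -h "${server}" "${domain}"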
344,295 | I'm aiming to start up a second sshd instance on a non-privileged port (e.g. 2222) with my own configuration file. Obviously, the sshd process can't setuid so logging in as users other than the one who is running the sshd daemon is clearly impossible. However, is it possible to have a working sshd daemon that will work for the currently running user? For my use case, this would be fine. I tried booting up an sshd instance with my own config file and host key and the sshd process starts up (no complaints about not being root, like some commands), however when I try to connect to that port, the sshd process dies. $ /usr/sbin/sshd -dD -h .ssh/id_rsa -p 2222
debug1: sshd version OpenSSH_5.6p1
debug1: read PEM private key done: type RSA
debug1: private host key: #0 type 1 RSA
debug1: setgroups() failed: Operation not permitted
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-dD'
debug1: rexec_argv[2]='-h'
debug1: rexec_argv[3]='.ssh/id_rsa'
debug1: rexec_argv[4]='-p'
debug1: rexec_argv[5]='2222'
debug1: Bind to port 2222 on 0.0.0.0.
Server listening on 0.0.0.0 port 2222.
debug1: Bind to port 2222 on ::.
Server listening on :: port 2222.
debug1: fd 6 clearing O_NONBLOCK
debug1: Server will not fork when running in debugging mode.
debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9
debug1: inetd sockets after dupping: 5, 5
Connection from ::1 port 57670
debug1: Client protocol version 2.0; client software version OpenSSH_5.6
debug1: match: OpenSSH_5.6 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.6
debug1: list_hostkey_types:
No supported key exchange algorithms
debug1: do_cleanup
debug1: do_cleanup
debug1: audit_event: unhandled event 12 The debug1: setgroups() failed: Operation not permitted line obviously sticks out, but it doesn't die until it tries to accept a connection. | After a bit of digging around I figured it out. Start the process with sshd -f ~/.ssh/sshd_config where /.ssh/sshd_config is a new file you created. Among other options (such as a different host key, different port, etc) you need to add the line UsePrivilegeSeparation no . This will prevent the sshd process from trying to do any setuid or setgid calls and allow it to continue running as your user and accept connections as your user. EDIT: A few moments after figuring it out somebody else tweeted this link to me which confirms this is the correct way to do this: http://cygwin.com/ml/cygwin/2008-04/msg00363.html | {
"source": [
"https://serverfault.com/questions/344295",
"https://serverfault.com",
"https://serverfault.com/users/16907/"
]
} |
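A sketch of the unprivileged setup the answer describes, with its own host key plus a per-user config file; the user name, paths and port are placeholders:

# Host key owned by the unprivileged user.
ssh-keygen -t rsa -f ~/.ssh/my_host_key -N ""

# ~/.ssh/sshd_config might contain just:
#   Port 2222
#   HostKey /home/youruser/.ssh/my_host_key
#   PidFile /home/youruser/.ssh/sshd.pid
#   UsePrivilegeSeparation no

# Start with the binary's absolute path (sshd requires one for re-exec) and test.
/usr/sbin/sshd -f ~/.ssh/sshd_config
ssh -p 2222 youruser@localhost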
344,544 | 3-digit: 644
ugo (user group other) 4-digit: 0644
?ugo (??? user group other) What is the first octal digit for in 4-digit octal Unix file permission notation? | From man chmod : A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Any omitted digits are assumed to be leading zeros. The first digit selects the set user ID (4) and set group ID (2) and sticky (1) attributes. What are "set user ID", "set group ID", and "sticky", you ask? setuid/setgid : setuid and setgid (short for "set user ID upon execution" and "set group ID upon execution", respectively) are Unix access rights flags that allow users to run an executable with the permissions of the executable's owner or group. They are often used to allow users on a computer system to run programs with temporarily elevated privileges in order to perform a specific task. While the assumed user id or group id privileges provided are not always elevated, at a minimum they are specific. Also, when applied to a directory, the setuid/setgid cause new files created in the directory to inherit the uid or gid, respectively, of the parent directory. This behavior varies based upon the flavor of unix. For example, linux honors the setgid, but ignores the setuid on directories. And sticky : The most common use of the sticky bit today is on directories. When the sticky bit is set, only the item's owner, the directory's owner, or the superuser can rename or delete files. Without the sticky bit set, any user with write and execute permissions for the directory can rename or delete contained files, regardless of owner. Typically this is set on the /tmp directory to prevent ordinary users from deleting or moving other users' files. | {
"source": [
"https://serverfault.com/questions/344544",
"https://serverfault.com",
"https://serverfault.com/users/53106/"
]
} |
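A short runnable demonstration of what that leading digit controls (GNU stat shown):

mkdir demo && cd demo
touch prog shared && mkdir tmpdir

chmod 4755 prog      # setuid  -> rwsr-xr-x
chmod 2755 shared    # setgid  -> rwxr-sr-x
chmod 1777 tmpdir    # sticky  -> rwxrwxrwt (like /tmp)

# Symbolic and octal forms side by side:
stat -c '%A %a %n' prog shared tmpdir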
344,614 | What I want to do is the following: My domain xy.example.com no longer exists. Thus I want to do a simple redirect to the new domain abc.example.com. It should be a redirect, that also works when someone types in the browser bar http://xy.example.com/team.php - than it shoul redirect to http://abc.example.com/team.php I've already tried a few things, but it didn't really work. What do I have to put in the Apache 2 config? | You can use the RedirectPermanent directive to redirect the client to your new URL. Just create a very simple VirtualHost for the old domain in which you redirect it to the new domain: <VirtualHost *:80>
ServerName xy.example.com
RedirectPermanent / http://abc.example.com/
# optionally add an AccessLog directive for
# logging the requests and do some statistics
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/344614",
"https://serverfault.com",
"https://serverfault.com/users/91589/"
]
} |
344,731 | Currently I have two directories A/ and B/ which are identical in every respect, with the exception of the timestamps. Therefore if I run the command : rsync --dry-run -crvv A/ B/ then all files are marked "uptodate", whereas the command : rsync --dry-run -rvv A/ B/ shows that all files are to be copied over from A/ to B/. My question is this : given that I know the files are identical (in respect to contents), then is there any way (via rsync or otherwise) to set the timestamps for files in B/ to be identical to the timestamps of the files in A/, without copying over all the files from A/ to B/ ? Thanks | Using -t (preserve timestamps) and --size-only will only compare files on size. If the size matches, rsync will not copy the file but since -t is specified, it will update the timestamp on the destination file without recopying it. Make sure to not use -u (update) as this will skip files that already exist and completely skip updating the timestamp. I had the problem of originally not using rsync to copy a bunch of files to a new drive, and therefore the timestamps were updated to current time. I used the command below to sync everything correctly in a decent amount of time: rsync -vrt --size-only /src /dest | {
"source": [
"https://serverfault.com/questions/344731",
"https://serverfault.com",
"https://serverfault.com/users/98196/"
]
} |
345,029 | I intend to use chef or puppet to do administration (I'm thinking more of chef as it's younger and I get a better feeling about it). In both home pages I saw there is an "enterprise edition" that costs money and I don't intend to buy anything. What would I miss in chef / puppet if I don't buy them? What does chef offer that costs money exactly? What does puppet offer that costs money exactly? It was not so clear to me from their web site, as it's kind of obscure. | The paid versions offer more features (i.e., puppet offers an easier way to deploy en mass) and, in many cases most importantly, paid support. When running enterprise servers, having paid support to help you get setup is typically worth it--especially when you run into issues. Chef version comparison Comparison between Puppet and Puppet Enterprise Typically, you won't go wrong with the free versions... its only if you need help getting up and running, or you simply have such a large infrastructure (and little experience with configuration management). | {
"source": [
"https://serverfault.com/questions/345029",
"https://serverfault.com",
"https://serverfault.com/users/50868/"
]
} |
345,341 | A few days ago I found I can no longer create symlinks from Ubuntu in any directories that are shared with the OS X host. ln: creating symbolic link `foo': Read-only file system I'm able to create symlinks in non-shared folders and on OS X directly. I've also tried running disk repair, but no errors were found. Setup: OS X 10.6.6 Ubuntu server 11.04 Virtualbox 4.1.8 | Another workaround is to run the following command on your host: VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARE_NAME 1 Or on Windows VBoxManage.exe setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARE_NAME 1 where VM_NAME is the name of your virtual machine (e.g Ubuntu) and SHARE_NAME the name of your shared directory (without the "sf_" prefix). This will re-enable the previous symlink friendly behavior. Note: On Windows, always restart the Virtual Machine AND VirtualBox GUI. | {
"source": [
"https://serverfault.com/questions/345341",
"https://serverfault.com",
"https://serverfault.com/users/76905/"
]
} |
345,670 | We have several of standard non-managed 3com switches in a network. I thought switches were supposed to only send packages between peers of a connection. However it appears network sniffing software running on a computer attached to one any one of the switches is able to detect traffic (ie youtube video streaming, web pages) of other host computers attached to other switches on the network. Is this even possible or is the network thoroughly broken? | To complete David's answer, a switch learns who is behind a port by looking at the MAC addresses of packets received on that port. When the switch is powered on, it knows nothing. Once device A sends a packet from port 1 to device B, the switch learns that device A is behind port 1, and sends the packet to all ports. Once device B replies to A from port 2, the switch only sends the packet on port 1. This MAC to port relationship is stored in a table in the switch. Of course, many devices can be behind a single port (if a switch is plugged in to the port as an example), so there may be many MAC addresses associated with a single port. This algorithm breaks when the table is not large enough to store all the relationships (not enough memory in the switch). In this case, the switch loses information and begins to send packets to all ports. This can easily be done (now you know how to hack your network) by forging lot of packets with different MAC from a single port. It can also be done by forging a packet with the MAC of the device you want to spy, and the switch will begin sending you the traffic for that device. Managed switches can be configured to accept a single MAC from a port (or a fixed number). If more MACs are found on that port, the switch can shutdown the port to protect the network, or send a log message to the admin. EDIT: About the youtube traffic, the algorithm described above only works on unicast traffic. Ethernet broadcast (ARP as an example), and IP multicast (used sometimes for streaming) are handled differently. I do not know if youtube uses multicast, but it might be a case where you can sniff traffic not belonging to you. About web page traffic, this is strange, as the TCP handshake should have set the MAC to port table correctly. Either the network topology cascades a lot of very cheap switches with small tables that are always full, or somebody is messing with the network. | {
"source": [
"https://serverfault.com/questions/345670",
"https://serverfault.com",
"https://serverfault.com/users/101554/"
]
} |
345,848 | You can use # to comment out individual lines.
Is there a syntax for commenting out entire blocks? I've tried surrounding the block (specifically a <Directory> block) with <IfModule asdfasdf>...</IfModule>, but that didn't work. | I came across this post from a Google search for "Apache block comment". Later, I discovered a non-perl, non-import solution from Apache's core documentation (although I'm sure this is very non-intended practice). From the core documentation for Apache 2.0 http://httpd.apache.org/docs/2.0/mod/core.html , you can see that the tag <IfDefine> will handily ignore statements when the parameter you specify does not exist: <IfDefine IgnoreBlockComment>
...
</IfDefine> So that'll successfully "comment" out the statements in between. | {
"source": [
"https://serverfault.com/questions/345848",
"https://serverfault.com",
"https://serverfault.com/users/88/"
]
} |
346,196 | If I want to allow Windows networked drives between two firewalled computers, do I need to open ports 137-139, or is port 445 sufficient? I have to submit a form and get approval to open firewall ports, and I don't want to ask for more open ports than I need. All of the machines here are Windows XP or later. Note: when I say "Windows networked drives", I'm not entirely sure whether I'm referring to SMB or CIFS, and I'm not entirely clear on the difference between the two protocols. | Ports 137-139 are for NetBios/Name resolution. Without it you will have to access machines by IP address opposed to NetBIOS name. Example \\192.168.1.100\share_name opposed to \\my_file_server\share_name So port 445 is sufficient if you can work with IP addresses only. | {
"source": [
"https://serverfault.com/questions/346196",
"https://serverfault.com",
"https://serverfault.com/users/98105/"
]
} |
346,481 | I want a UDP echo server to get packets, and reply exactly what it has received. How can I simply do this using netcat or socat ? It should stay alive forever and handle packets coming from several hosts. | Another netcat-like tool is the nmap version, ncat , that has lots of built in goodies to simplify things like this. This would work: ncat -e /bin/cat -k -u -l 1235 -e means it executes /bin/cat (to echo back what you type) -k means keep-alive, that it keeps listening after each connection -u means udp -l 1235 means that it listens on port 1235 | {
"source": [
"https://serverfault.com/questions/346481",
"https://serverfault.com",
"https://serverfault.com/users/90670/"
]
} |
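A quick sketch of exercising the echo server from another shell or host; the port matches the answer, and -i is ncat's idle timeout so the scripted client exits after printing the echo:

# Interactive test: each line typed should come straight back.
ncat -u localhost 1235

# Scripted test: send one datagram, give the echo up to 2 idle seconds to arrive.
echo "ping" | ncat -u -i 2 localhost 1235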
346,482 | I have two Amazon EC2 volumes; the default 1 GiB volume I have been using is full.
Now I want to use my second volume, which is 9 GiB.
I used command cat /proc/partitions I got major minor #blocks name 202 1 1048576 xvda1 202 80 9437184 xvdf Then I hit mkfs.ext3 -F /dev/sdf its showing mkfs.ext3: No such file or directory while trying to determine filesystem size then I hit command df and I got Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 1032088 1031280 0 100% / tmpfs 313160 8 313152 1% /lib/init/rw udev 297800 24 297776 1% /dev tmpfs 313160 4 313156 1% /dev/shm overflow 1024 32 992 4% /tmp means still I am unable to use my 9 GiB space Volume. I am conform I have two volume where attachment information is i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached) where only sda1 is using. Any one know how may I use my second volume(sdf). Thx | Another netcat-like tool is the nmap version, ncat , that has lots of built in goodies to simplify things like this. This would work: ncat -e /bin/cat -k -u -l 1235 -e means it executes /bin/cat (to echo back what you type) -k means keep-alive, that it keeps listening after each connection -u means udp -l 1235 means that it listens on port 1235 | {
"source": [
"https://serverfault.com/questions/346482",
"https://serverfault.com",
"https://serverfault.com/users/64714/"
]
} |
346,487 | I'm looking at a Wireshark recording from a device and see something weird. After I completed the DNS resolve transaction (query + response),
I immediately get an HTTP response (200 OK) from the responding server. the site is a standard public site: crl.verisign.net Any ideas what is happening here? | Another netcat-like tool is the nmap version, ncat , that has lots of built in goodies to simplify things like this. This would work: ncat -e /bin/cat -k -u -l 1235 -e means it executes /bin/cat (to echo back what you type) -k means keep-alive, that it keeps listening after each connection -u means udp -l 1235 means that it listens on port 1235 | {
"source": [
"https://serverfault.com/questions/346487",
"https://serverfault.com",
"https://serverfault.com/users/105895/"
]
} |
346,647 | I've managed to locate my install directory for MySQL: /usr/local/mysql/ Where can I find the path to my.cnf to know where I should configure the server? I've tried creating a /etc/my.cnf (as shown below) and it had no effect [mysqld]
#charset
collation_server=utf8_general_ci
character_set_server=utf8
default_character_set=utf8 | As per this article: Running this command from the command line / terminal will show where MySQL will look for the my.cnf file on Linux/BSD/OS X systems: mysql --help | grep "Default options" -A 1 This will output something like this: Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf You can now check for files using the above output at /etc/my.cnf, then /etc/mysql/my.cnf and so on. If there isn't one at one of those locations, you can create one and know MySQL will use it. | {
"source": [
"https://serverfault.com/questions/346647",
"https://serverfault.com",
"https://serverfault.com/users/37222/"
]
} |
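Once a my.cnf has been created in one of those locations and the server restarted, a quick way to confirm it is actually being read (variable names taken from the [mysqld] block in the question):

    mysql -e "SHOW VARIABLES LIKE 'character_set_server'"
    mysql -e "SHOW VARIABLES LIKE 'collation_server'"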
346,866 | I have MySQL installed on an Ubuntu machine. I added this line to /etc/mysql/my.cnf: group_concat_max_len=15360 But it has no effect. Every time I restart MySQL, the value is set to 1,024. I have to manually run SET GLOBAL group_concat_max_len=15360 ...every time I start up MySQL. Why is my.cnf not working the way I thought it should?
Thank you | If you have the setting already in my.cnf or my.cfg, and a restart did not bring about the change you expected, you may just have the setting placed in the wrong location. Make sure the setting is under the [mysqld] group header [mysqld]
group_concat_max_len=15360 then you can restart mysqld without worry. BTW, @gbn may be more correct in this instance because you cannot use commas in the numerical settings for my.cnf (+1 for @gbn) | {
"source": [
"https://serverfault.com/questions/346866",
"https://serverfault.com",
"https://serverfault.com/users/94276/"
]
} |
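To confirm the value survives a restart once it sits under the [mysqld] header (value taken from the question):

    mysql -e "SELECT @@global.group_concat_max_len"    # should print 15360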
347,318 | The usual advice is to add new cron jobs through the command line, but I find it easier (and it gives me better control over current cron tasks) to manually edit the user cron file, such as /var/spool/cron/crontabs/root, in a text editor. Is it dangerous to edit the file in a text editor? The comments in the default file are confusing. The first line says # DO NOT EDIT THIS FILE - edit the master and reinstall. But the fourth line says # Edit this file to introduce tasks to be run by cron. | If you modify the user file under crontabs, it should work. However, there are two issues to take into consideration: If you mistype a cron entry in the file, you will not be warned, as opposed to using the crontab -e command. You cannot edit your user file under crontabs directly without logging in as root or using sudo; you will get a permission-denied error. Edit: One more point to add. When you edit the file directly, you may be warned by the text editor if the file is opened twice (two users accessing the same file). However, the cron list will be overwritten when crontab -e is used from two different shell sessions of the same user. This is another difference. | {
"source": [
"https://serverfault.com/questions/347318",
"https://serverfault.com",
"https://serverfault.com/users/94757/"
]
} |
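If the goal is tighter control over the current entries without touching the spool file by hand, the crontab tool itself can be pointed at a specific user and used for backups (root or sudo is needed for other users' tables):

    sudo crontab -u root -l        # list root's crontab
    sudo crontab -u root -e        # edit it, with a syntax check on save
    crontab -l > my-crontab.bak    # plain-text backup of your own entries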
347,328 | My company is in the process of upgrading all of our users from old Windows XP computers to newer quad-core Win7 computers. This is a good thing - it's long overdue that we upgrade our workstations - but I now spend a ton of time configuring new computers. Is there any way to automate this process? The steps that I go through with just about every computer: Run through the Win7 setup process (we do mostly HPs, so we get the stupid "The computer is personal again" thing). Uninstall bloatware (norton, bing bar, roxio, etc.) Install Updates Add to domain & configure network settings Install Office, and other company-specific applications Configure important shortcuts (Outlook on task bar) There are a couple of other things that I do after that that would be nice to automate, but it's unlikely due to license keys, passwords, etc. Configure Outlook Pull in files/settings with easy transfer wizard Map network drives I know that it's possible to create a complete image of a computer, but how does that work with different hardware/drivers? What about Win7 license keys? If there is a way to make this work, what is the best (preferably free/open source) software out there to do this? | Don't bother with uninstalling or fixing bloatware. Just reimage the computers. In fact it's pretty easy to set up a reference image, sysprep, capture, and deploy it using WDS + MDT. See the aforementioned for various driver packages: trust me, you're not the first person to think of this stuff; it's been solved already. Profiles can be transferred with USMT. Mapped drives are best done with a logon script. Outlook 2007+ with Exchange 2007+ can use Autodiscover. Install updates with WSUS (fully automated at install with a simple script). Keys and activation can be managed with scripts or VAMT. Fair warning that if you don't know about any of this stuff already, you've got one heck of a learning curve to get through and you're way behind the times. If you really only have a handful of computers it probably isn't worth the time to set this stuff up now, but if it's more than a dozen it's worth the time. Also, future hardware refreshes aren't nearly so painful. Bonus: many of these skills allow you to be more efficient in your routine tasks and help prevent problems. | {
"source": [
"https://serverfault.com/questions/347328",
"https://serverfault.com",
"https://serverfault.com/users/82281/"
]
} |
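For the mapped-drives step, a logon script can be as small as a batch file like the sketch below (server and share names are placeholders):

    REM logon.bat - map standard drives at logon
    net use H: \\fileserver\home /persistent:no
    net use S: \\fileserver\shared /persistent:no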