Columns: source_id (int64, range 1 to 4.64M), question (string, length 0 to 28.4k), response (string, length 0 to 28.8k), metadata (dict)
38,130
Situation: We have a DR site that needs to be tested. There is a mix of Linux and Windows hosts on the DR site. In case of disaster, if SERVER in the production site is not available, SERVER_DR on DR is switched on and joined to the same Windows domain. This is done this way in case the primary site is not available, and we don't want to delete the original record for SERVER from AD. The problem I am trying to solve is name resolution in the DR site. Processes and scripts use the DNS name SERVER, so in the DR site requests to SERVER should translate to SERVER_DR. We have no access to do anything on Windows DNS. My idea is to use BIND to solve this problem. Hosts in DR should be able to authenticate with AD. The fact that they don't need to access anything else outside the DR site in the Windows domain should simplify the problem. The services that need to be accessed by DR hosts are mostly file sharing and SQL servers. I believe the SQL servers may be a problem here since they use SPNs. This brings up the idea of using BIND for the our.domain.com zone held by BIND in the DR site; however, I can see a possible problem when Windows DR hosts need to authenticate against AD, since they need to use the underscored records if I remember correctly. We cannot delegate zones from AD due to the reasons I mentioned earlier. Is it worth the trouble to solve this problem? One of my colleagues suggested using a hosts file for each Windows DR host. That seems pretty ugly, but there are not many of them, and my time setting up BIND may be wasted.
I asked this question over on SO and it got moved here. That said I no longer have the ability to edit the question as if I owned it, or even accept the correct answer, but this turned out to be the true reason why and how to solve it: Found here User "rohandhruva" on there gives the right answer: This happens if you change the hostname during the install process. To solve the problem, edit the file /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <ADD_YOURS_HERE> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <ADD_YOURS_HERE>
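As a purely illustrative sketch of that fix (the hostname server_dr.example.com is hypothetical - substitute the name your machine was given during install), the resulting /etc/hosts would look like:
  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 server_dr.example.com server_dr
  ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 server_dr.example.com server_dr
Afterwards, hostname -f should resolve instantly without consulting DNS, which is a quick way to confirm the edit took.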
{ "source": [ "https://serverfault.com/questions/38130", "https://serverfault.com", "https://serverfault.com/users/11059/" ] }
38,222
I'm trying to test my ASP.Net website on localhost and I'm getting this error: HTTP Error 401.3 - Unauthorized You do not have permission to view this directory or page because of the access control list (ACL) configuration or encryption settings for this resource on the Web server. I have the following users on the website application folder, with full read/write permissions: NETWORK SERVICE IIS_IUSRS SYSTEM Administrators Nathan (me) What can I try to fix this?
IIS 7 also creates "IUSR" as the default user to access files via IIS. So make sure IUSR has read access to the files/folders. How to check whether IUSR has read access? Right-click the folder -> Properties -> Security tab. See if IUSR is in the "Group or user names" list. If not, click Edit -> Add -> Advanced -> Find Now -> select IUSR and click OK four times.
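If you would rather grant that permission from the command line than click through the dialogs, a minimal sketch using the built-in icacls tool (the site path is hypothetical - point it at your application folder) is:
  icacls "C:\inetpub\wwwroot\MySite" /grant "IUSR:(OI)(CI)RX"
The (OI)(CI) flags make the read-and-execute grant inherit to subfolders and files; run it from an elevated prompt and then reload the page.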
{ "source": [ "https://serverfault.com/questions/38222", "https://serverfault.com", "https://serverfault.com/users/12248/" ] }
38,236
I was wondering which class would be more efficient: PHP (Glype, PHProxy), CGI (CGIProxy), or javascript based scripts that run on a webserver, or an http proxy run through squid. Assuming neither class was doing any caching, would one or the other be much more efficient at handling web browsing? Thanks!
IIS 7 also creates "IUSR" as the default user to access files via IIS. So make sure IUSR has read access to the files/folders. How to check whether IUSR has read access? Right-click the folder -> Properties -> Security tab. See if IUSR is in the "Group or user names" list. If not, click Edit -> Add -> Advanced -> Find Now -> select IUSR and click OK four times.
{ "source": [ "https://serverfault.com/questions/38236", "https://serverfault.com", "https://serverfault.com/users/12007/" ] }
38,398
My current scenario involves allowing various rules, but I need ftp to be accessible from anywhere. The OS is Cent 5 and I am using VSFTPD. I can't seem to get the syntax correct. All other rules work correctly. ## Filter all previous rules *filter ## Loopback address -A INPUT -i lo -j ACCEPT ## Established inbound rule -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT ## Management ports -A INPUT -s x.x.x.x/24 -p icmp -m icmp --icmp-type any -j ACCEPT -A INPUT -s x.x.x.x/23 -p icmp -m icmp --icmp-type any -j ACCEPT -A INPUT -s x.x.x.x/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -s x.x.x.x/23 -p icmp -m icmp --icmp-type any -j ACCEPT -A INPUT -s x.x.x.x/23 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -i lo -j ACCEPT ## Allow NRPE port (Nagios) -A INPUT -s x.x.x.x -p tcp -m state --state NEW -m tcp --dport 5666 -j ACCEPT -A INPUT -s x.x.x.x -p tcp -m state --state NEW -m tcp --dport 5666 -j ACCEPT ##Allow FTP ## Default rules :INPUT DROP [0:0] :FORWARD DROP :OUTPUT ACCEPT [0:0] COMMIT The following are rules I have tried. ##Allow FTP -A INPUT --dport 21 any -j ACCEPT -A INPUT --dport 20 any -j ACCEPT -A INPUT -p tcp --dport 21 -j ACCEPT -A INPUT -p tcp --dport 20 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 20 -j ACCEPT -A INPUT -p tcp -s 0/0 -d 0/0 --destination-port 20 -j ACCEPT -A INPUT -p tcp -s 0/0 -d 0/0 --destination-port 21 -j ACCEPT -A INPUT -s 0.0.0.0/0 -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT -A INPUT -s 0.0.0.0/0 -p tcp -m state --state NEW -m tcp --dport 20 -j ACCEPT
Here's the document I refer people to so that they can follow the FTP protocol: http://slacksite.com/other/ftp.html To do active-mode FTP, you need to allow incoming connections to TCP port 21 and outgoing connections from port 20. To do passive-mode FTP, you need to allow incoming connections to TCP port 21 and incoming connections to a randomly-generated port on the server computer (necessitating the use of a conntrack module in netfilter). You don't have anything re: your OUTPUT chain in your post, so I'll include that here, too. If your OUTPUT chain is default-drop then this matters. Add these rules to your iptables configuration: iptables -A INPUT -p tcp --dport 21 -j ACCEPT iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT To support passive-mode FTP, then, you need to load the ip_conntrack_ftp module on boot. Uncomment and modify the IPTABLES_MODULES line in the /etc/sysconfig/iptables-config file to read: IPTABLES_MODULES="ip_conntrack_ftp" Save the iptables config and restart iptables. service iptables save service iptables restart To completely rule out VSFTPD as being a problem, stop VSFTPD, verify that it's not listening on port 21 with a "netstat -a", and then run: nc -l 21 This will start netcat listening on port 21 and will echo input to your shell. From another host, TELNET to port 21 of your server and verify that you get a TCP connection and that you see output in the shell when you type in the TELNET connection. Finally, bring VSFTPD back up, verify that it is listening on port 21, and try to connect again. If the connection to netcat worked then your iptables rules are fine. If the connection to VSFTPD doesn't work after netcat does then something is wrong with your VSFTPD configuration.
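Translated into the save-file format your ruleset uses, a sketch of the minimal change (keep it with your other -A INPUT lines, above COMMIT, since your INPUT policy is DROP) would be:
  ##Allow FTP control channel
  -A INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT
plus, in /etc/sysconfig/iptables-config:
  IPTABLES_MODULES="ip_conntrack_ftp"
With ip_conntrack_ftp loaded, passive-mode data connections are classified as RELATED and are already accepted by your existing ESTABLISHED,RELATED rule; and because your OUTPUT policy is ACCEPT, the --sport 20 rule above only becomes necessary if you later tighten that chain.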
{ "source": [ "https://serverfault.com/questions/38398", "https://serverfault.com", "https://serverfault.com/users/3246/" ] }
38,417
Basically I have a remote screen session which I wish to automatically reattach to. Currently I'm doing this with the following command (as an iTerm bookmark, or an alias): ssh host -t screen -x thesessionname This works fine, but if the session dies for whatever reason, I'd like it to be recreated when I next connect. The -R flag for screen is almost perfect: ssh host -t screen -R -S thesessionname ...but if the session is already attached, a second session gets made (as -R simply looks for the first detached session; if none are found, it creates a new one). Is there a way to make the -R flag look for attached sessions also, and only create a new one if thesessionname doesn't exist? If this is not easily doable, how could I automatically recreate the screen session when it dies? Perhaps a script run via cron that looks for the named session, creating it should it not exist?
Tell screen to be a bit more persistent about trying: -D -R Attach here and now. In detail this means: If a session is running, then reattach. If necessary detach and logout remotely first. If it was not running create it and notify the user. This is the author's favorite. So combine the two and you should have your solution ("-DR" is equivalent to "-D -R"): screen -DR <yoursession> Additionally and useful to know, you can view running sessions with: screen -ls
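Folded back into the ssh invocation from the question (keeping -S so a freshly created session also carries the name), a sketch would be:
  ssh host -t 'screen -DR -S thesessionname'
One bookmark now reattaches to the session if it exists - even stealing it if it is attached elsewhere - and recreates it if it has died.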
{ "source": [ "https://serverfault.com/questions/38417", "https://serverfault.com", "https://serverfault.com/users/1070/" ] }
38,549
Possible Duplicate: My server's been hacked EMERGENCY Geez, I'm desperate! A few hours ago our production DB was SQL-injected. I know we have some big holes in the system... because we inherited the website from a guy who built it in classic ASP, and his programming was really awful and insecure. So we spent some time migrating it to ASP.NET (first 1.1, then 2.0 and now 3.5). But it's a big project, and there is still old and insecure code. I'm not going to lie, the project is a mess, I hate it, but it's our most important client (we are just 2 young guys, not a big company). So I know they have injected some JS script references into my whole DB somehow.... It was probably through an old page using concatenated-string SQL queries thrown directly at the DB (because the guy who started the project said "Stored procedures doesn't work"..... so he did the whole site using string concatenation, throwing the queries directly at SQL without doing any safety validation or anything). When we got the project, the client didn't want to spend time redoing the crap that the old guy did. So we had to work with the crappy and insecure code, fixing it while developing new features, because that was what the client wanted... and now that we've been SQL-injected they go crazy, of course. SO.... **Is there any way to check the old SQL queries that have been executed in the last X hours? Something like what SQL Profiler does (but of course we didn't have the profiler open when the attack happened)? Is there a way to find out which page is the vulnerable one? Please help, there are lots of pages. I cannot search through them manually without knowing for sure which one was the page. Also... could there be another way they could inject the DB? Like using an IIS request or JS or something?** I have full Remote Desktop access to the server machine (it is not in a hosted environment) so I can access every file, log, whatever on the server... Please help! PS: Sorry, my English is not so great, and it's worse now that I'm nervous!
EDIT Windows 2003 Server SQL SERVER 2005 ASP .NET 3.5 The script they are throwing is the following DECLARE @S NVARCHAR(4000);SET @S=CAST(0x4400450043004C0041005200450020004000540020007600610072006300680061007200280032003500350029002C0040004300200076006100720063006800610072002800320035003500290020004400450043004C0041005200450020005400610062006C0065005F0043007500720073006F007200200043005500520053004F005200200046004F0052002000730065006C00650063007400200061002E006E0061006D0065002C0062002E006E0061006D0065002000660072006F006D0020007300790073006F0062006A006500630074007300200061002C0073007900730063006F006C0075006D006E00730020006200200077006800650072006500200061002E00690064003D0062002E0069006400200061006E006400200061002E00780074007900700065003D00270075002700200061006E0064002000280062002E00780074007900700065003D003900390020006F007200200062002E00780074007900700065003D003300350020006F007200200062002E00780074007900700065003D0032003300310020006F007200200062002E00780074007900700065003D00310036003700290020004F00500045004E0020005400610062006C0065005F0043007500720073006F00720020004600450054004300480020004E004500580054002000460052004F004D00200020005400610062006C0065005F0043007500720073006F007200200049004E0054004F002000400054002C004000430020005700480049004C004500280040004000460045005400430048005F005300540041005400550053003D0030002900200042004500470049004E00200065007800650063002800270075007000640061007400650020005B0027002B00400054002B0027005D00200073006500740020005B0027002B00400043002B0027005D003D0072007400720069006D00280063006F006E007600650072007400280076006100720063006800610072002C005B0027002B00400043002B0027005D00290029002B00270027003C0073006300720069007000740020007300720063003D0068007400740070003A002F002F006600310079002E0069006E002F006A002E006A0073003E003C002F007300630072006900700074003E0027002700270029004600450054004300480020004E004500580054002000460052004F004D00200020005400610062006C0065005F0043007500720073006F007200200049004E0054004F002000400054002C0040004300200045004E004400200043004C004F005300450020005400610062006C0065005F0043007500720073006F00720020004400450041004C004C004F00430041005400450020005400610062006C0065005F0043007500720073006F007200 AS NVARCHAR(4000));EXEC @S; Which translated to text is: DECLARE @T varchar(255), @C varchar(255) DECLARE Table_Cursor CURSOR FOR select a.name,b.name from sysobjects a,syscolumns b where a.id=b.id and a.xtype='u' and (b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167) OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0) BEGIN exec('update [' + @T + '] set [' + @C + ']=rtrim(convert(varchar,[' + @C + '])) + ''&lt;script src=http://f1y.in/j.js&gt;&lt;/script&gt;''') FETCH NEXT FROM Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor
The first thing to do is not panic. But I see you've skipped that and have decided to The second thing is to take the site down and make sure it's not accessible from the outside until you can figure out what's broken. Start looking at access logs and try to find out what the main problem is. The third thing to do is see if you back up your DB regularly and just do a rollback. You might lose some data - but you'll be in a better spot than you are right now. The fourth thing to do is - DO NOT - give out the URL, because apparently it's insecure.
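Since you have Remote Desktop access, one concrete way to act on "start looking at access logs" is to search the IIS logs for a distinctive fragment of the injected statement. This is only a sketch - the path assumes IIS 6's default log location on Windows 2003 and that query-string logging is enabled, so adjust both to your setup:
  findstr /s /i /m "0x4400450043004C0041" C:\WINDOWS\system32\LogFiles\W3SVC*\*.log
The search string is just a slice of the hex payload you posted; /m lists only the matching log files, and dropping it shows the full request lines, whose URLs point at the vulnerable page(s).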
{ "source": [ "https://serverfault.com/questions/38549", "https://serverfault.com", "https://serverfault.com/users/12326/" ] }
38,564
I have Solaris 10 SPARC running and working very well, but I have a problem with an external SCSI DAT 72 tape drive. It seems the tape drive is manufactured by Sun Microsystems. When I ran mt -f /dev/rmt/0 status it revealed the following output: bash-3.00# mt -f /dev/rmt/0 status /dev/rmt/0: No such file or directory When I ran ls -l it revealed the following output: ls -l /dev/rmt/0 lrwxrwxrwx 1 root root 43 Sep 20 2006 /dev/rmt/0 -> ../../devices/pci@8,600000/scsi@1,1/st@3,0: It seems to me everything is okay: the SCSI cable is connected properly to the tape device and to the server, and the tape drive has a SCSI termination dongle connected properly as well. Any ideas would be a great assist. Thanks in advance.
The first thing to do is not panic. But I see you've skipped that and have decided to The second thing is to take the site down and make sure it's not accessible from the outside until you can figure out what's broken. Start looking at access logs and try to find out what the main problem is. The third thing to do is see if you back up your DB regularly and just do a rollback. You might lose some data - but you'll be in a better spot than you are right now. The fourth thing to do is - DO NOT - give out the URL, because apparently it's insecure.
{ "source": [ "https://serverfault.com/questions/38564", "https://serverfault.com", "https://serverfault.com/users/12320/" ] }
38,626
a simple cat on the pcap file looks terrible: $cat tcp_dump.pcap ?ò????YVJ? JJ ?@@.?E<??@@ ?CA??qe?U????иh? .Ceh?YVJ?? JJ ?@@.?E<??@@ CA??qe?U????еz? .ChV?YVJ$?JJ ?@@.?E<-/@@A?CAͼ?9????F???A&? .Ck??YVJgeJJ@@.?Ӣ#3E<@3{nͼ?9CA??P?ɝ?F???<K? ?ԛ`.Ck??YVJgeBB ?@@.?E4-0@@AFCAͼ?9????F?P?ʀ??? .Ck??ԛ`?YVJ?""@@.?Ӣ#3E?L@3?Iͼ?9CA??P?ʝ?F????? ?ԛ?.Ck?220-rly-da03.mx etc. I tried to make it prettier with: sudo tcpdump -ttttnnr tcp_dump.pcap reading from file tcp_dump.pcap, link-type EN10MB (Ethernet) 2009-07-09 20:57:40.819734 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776168808 0,nop,wscale 5> 2009-07-09 20:57:43.819905 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776169558 0,nop,wscale 5> 2009-07-09 20:57:47.248100 IP 67.23.28.65.42385 > 205.188.159.57.25: S 2644526720:2644526720(0) win 5840 <mss 1460,sackOK,timestamp 776170415 0,nop,wscale 5> 2009-07-09 20:57:47.288103 IP 205.188.159.57.25 > 67.23.28.65.42385: S 1358829769:1358829769(0) ack 2644526721 win 5792 <mss 1460,sackOK,timestamp 4292123488 776170415,nop,wscale 2> 2009-07-09 20:57:47.288103 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 1 win 183 <nop,nop,timestamp 776170425 4292123488> 2009-07-09 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: P 1:481(480) ack 1 win 1448 <nop,nop,timestamp 4292123568 776170425> 2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 481 win 216 <nop,nop,timestamp 776170445 4292123568> 2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: P 1:18(17) ack 481 win 216 <nop,nop,timestamp 776170445 4292123568> 2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: . ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445> 2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: P 481:536(55) ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445> 2009-07-09 20:57:47.404109 IP 67.23.28.65.42385 > 205.188.159.57.25: P 18:44(26) ack 536 win 216 <nop,nop,timestamp 776170454 4292123606> 2009-07-09 20:57:47.444112 IP 205.188.159.57.25 > 67.23.28.65.42385: P 536:581(45) ack 44 win 1448 <nop,nop,timestamp 4292123644 776170454> 2009-07-09 20:57:47.484114 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 581 win 216 <nop,nop,timestamp 776170474 4292123644> 2009-07-09 20:57:47.616121 IP 67.23.28.65.42385 > 205.188.159.57.25: P 44:50(6) ack 581 win 216 <nop,nop,timestamp 776170507 4292123644> 2009-07-09 20:57:47.652123 IP 205.188.159.57.25 > 67.23.28.65.42385: P 581:589(8) ack 50 win 1448 <nop,nop,timestamp 4292123855 776170507> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: . 
ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: P 50:56(6) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: F 56:56(0) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.668124 IP 67.23.28.65.49239 > 216.239.113.101.25: S 2642380481:2642380481(0) win 5840 <mss 1460,sackOK,timestamp 776170520 0,nop,wscale 5> 2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: P 589:618(29) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516> 2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0 2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: F 618:618(0) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516> 2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0 Well...that is much prettier but it doesn't show the actual messages. I can actually extract more information just viewing the RAW file. What is the best ( and preferably easiest) way to just view all the contents of the pcap file? UPDATE Thanks to the responses below, I made some progress. Here is what it looks like now: tcpdump -qns 0 -A -r blah.pcap 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: tcp 480 0x0000: 4500 0214 834c 4000 3306 f649 cdbc 9f39 [email protected] 0x0010: 4317 1c41 0019 a591 50fe 18ca 9da0 4681 C..A....P.....F. 0x0020: 8018 05a8 848f 0000 0101 080a ffd4 9bb0 ................ 0x0030: 2e43 6bb9 3232 302d 726c 792d 6461 3033 .Ck.220-rly-da03 0x0040: 2e6d 782e 616f 6c2e 636f 6d20 4553 4d54 .mx.aol.com.ESMT 0x0050: 5020 6d61 696c 5f72 656c 6179 5f69 6e2d P.mail_relay_in- 0x0060: 6461 3033 2e34 3b20 5468 752c 2030 3920 da03.4;.Thu,.09. 0x0070: 4a75 6c20 3230 3039 2031 363a 3537 3a34 Jul.2009.16:57:4 0x0080: 3720 2d30 3430 300d 0a32 3230 2d41 6d65 7.-0400..220-Ame 0x0090: 7269 6361 204f 6e6c 696e 6520 2841 4f4c rica.Online.(AOL 0x00a0: 2920 616e 6420 6974 7320 6166 6669 6c69 ).and.its.affili 0x00b0: 6174 6564 2063 6f6d 7061 6e69 6573 2064 ated.companies.d etc. This looks good, but it still makes the actual message on the right difficult to read. Is there a way to view those messages in a more friendly way? UPDATE This made it pretty: tcpick -C -yP -r tcp_dump.pcap Thanks!
Wireshark is probably the best, but if you want/need to look at the payload without loading up a GUI you can use the -X or -A options tcpdump -qns 0 -X -r serverfault_request.pcap 14:28:33.800865 IP 10.2.4.243.41997 > 69.59.196.212.80: tcp 1097 0x0000: 4500 047d b9c4 4000 4006 63b2 0a02 04f3 E..}..@[email protected]..... 0x0010: 453b c4d4 a40d 0050 f0d4 4747 f847 3ad5 E;.....P..GG.G:. 0x0020: 8018 f8e0 1d74 0000 0101 080a 0425 4e6d .....t.......%Nm 0x0030: 0382 68a1 4745 5420 2f71 7565 7374 696f ..h.GET./questio 0x0040: 6e73 2048 5454 502f 312e 310d 0a48 6f73 ns.HTTP/1.1..Hos 0x0050: 743a 2073 6572 7665 7266 6175 6c74 2e63 t:.serverfault.c 0x0060: 6f6d 0d0a 5573 6572 2d41 6765 6e74 3a20 om..User-Agent:. 0x0070: 4d6f 7a69 6c6c 612f 352e 3020 2858 3131 Mozilla/5.0.(X11 0x0080: 3b20 553b 204c 696e 7578 2069 3638 363b ;.U;.Linux.i686; tcpdump -qns 0 -A -r serverfault_request.pcap 14:29:33.256929 IP 10.2.4.243.41997 > 69.59.196.212.80: tcp 1097 E..}..@[email protected]. ...E;...^M.P..^w.G.......t..... .%.}..l.GET /questions HTTP/1.1 Host: serverfault.com There are many other tools for reading and getting stats, extracting payloads and so on. A quick look on the number of things that depend on libpcap in the debian package repository gives a list of 50+ tools that can be used to slice, dice, view, and manipulate captures in various ways. For example. tcpick tcpxtract
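If the -A dump is overwhelming because the capture holds several conversations, you can narrow it with a filter expression when reading the file - for example, using the SMTP session visible in your trace:
  tcpdump -qns 0 -A -r tcp_dump.pcap 'tcp port 25 and host 205.188.159.57'
Only that one TCP conversation is printed, which makes the ASCII column far easier to follow; the tcpick -C -yP -r tcp_dump.pcap command you found then colourises and reassembles the same stream.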
{ "source": [ "https://serverfault.com/questions/38626", "https://serverfault.com", "https://serverfault.com/users/3567/" ] }
38,816
Is unlink any faster than rm?
Both are wrappers around the same fundamental function, which is the unlink() system call. To weigh up the differences between the userland utilities: rm(1): More options. More feedback. Sanity checking. A bit slower for single calls as a result of the above. Can be called with multiple arguments at the same time. unlink(1): Less sanity checking. Unable to delete directories. Unable to recurse. Can only take one argument at a time. Marginally leaner for single calls due to its simplicity. Slower when compared with giving rm(1) multiple arguments. You could demonstrate the difference with: $ touch $(seq 1 100) $ unlink $(seq 1 100) unlink: extra operand `2' $ touch $(seq 1 100) $ time rm $(seq 1 100) real 0m0.048s user 0m0.004s sys 0m0.008s $ touch $(seq 1 100) $ time for i in $(seq 1 100); do rm $i; done real 0m0.207s user 0m0.044s sys 0m0.112s $ touch $(seq 1 100) $ time for i in $(seq 1 100); do unlink $i; done real 0m0.167s user 0m0.048s sys 0m0.120s If, however, we're talking about an unadulterated call to the system unlink(2) function, which I now realise is probably not what you're accounting for: you can perform a system unlink() on directories and files alike. But if the directory is a parent to other directories and files, then the link to that parent would be removed, but the children would be left dangling. Which is less than ideal. Edit: Sorry, clarified the difference between unlink(1) and unlink(2). Semantics are still going to differ between platforms.
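If you want to see the shared system call for yourself, a quick sketch on a Linux box with strace installed:
  $ touch testfile; strace -e trace=unlink,unlinkat rm testfile
  $ touch testfile; strace -e trace=unlink,unlinkat unlink testfile
Both traces end in a single unlink()/unlinkat() call (newer coreutils rm uses unlinkat), which is why the measurable difference between the two utilities is only start-up and argument-handling overhead.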
{ "source": [ "https://serverfault.com/questions/38816", "https://serverfault.com", "https://serverfault.com/users/43616/" ] }
39,004
I have the following syntax (which I think is correcT?) but it runs the command every minute! * */4 * * * /cmd.sh
0 0,4,8,12,16,20 * * * /cmd.sh That's probably how I would do it. This will run the job every 4 hours, on the hours of 00:00, 04:00, 08:00, 12:00, 16:00 and 20:00. It's just a slightly more verbose way of writing */4, but it should work the same.
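For completeness, the compact form works too once the minute field is pinned down - the original line fired every minute only because its minute field was still *:
  0 */4 * * * /cmd.sh
Both crontab entries run at the same six times per day; pick whichever you find more readable.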
{ "source": [ "https://serverfault.com/questions/39004", "https://serverfault.com", "https://serverfault.com/users/11224/" ] }
39,027
I need to transfer files from one CentOS server to another. I will transfer 5MB files about every 10 minutes. I do not need encryption. What is an easy way to transfer files fast? Is there something simpler than FTP? Thanks!
rsync I'd use rsync before I used ftp or tftp. More options and (in my experience) more reliable transfer.
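A minimal sketch of what that looks like in practice (the paths, user and host names are hypothetical placeholders):
  rsync -avz --partial /data/outgoing/ backupuser@othercentos:/data/incoming/
Dropped into a cron entry every 10 minutes, it only sends files that are new or have changed, and --partial lets an interrupted 5MB transfer pick up where it left off instead of starting over.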
{ "source": [ "https://serverfault.com/questions/39027", "https://serverfault.com", "https://serverfault.com/users/141516/" ] }
39,071
Since Windows Explorer (since at least Windows XP) has some basic support for ZIP files, it seems like there should be a command-line equivalent, but I can't seem to find any sign of one. Does Windows (XP, Vista, 7, 8, 2003, 2008, 2013) ship with a built-in command-line zip tool, or do I need to stick with third-party tools?
It's not built into Windows, but it's in the Resource Kit Tools as COMPRESS , C:\>compress /? Syntax: COMPRESS [-R] [-D] [-S] [ -Z | -ZX ] Source Destination COMPRESS -R [-D] [-S] [ -Z | -ZX ] Source [Destination] Description: Compresses one or more files. Parameter List: -R Rename compressed files. -D Update compressed files only if out of date. -S Suppress copyright information. -ZX LZX compression. This is default compression. -Z MS-ZIP compression. Source Source file specification. Wildcards may be used. Destination Destination file | path specification. Destination may be a directory. If Source is multiple files and -r is not specified, Destination must be a directory. Examples: COMPRESS temp.txt compressed.txt COMPRESS -R *.* COMPRESS -R *.exe *.dll compressed_dir
{ "source": [ "https://serverfault.com/questions/39071", "https://serverfault.com", "https://serverfault.com/users/1382/" ] }
39,271
Sounds like a dumb question, but I bet a lot of people don't know either. I understand servers, clients, modems, routers, ISPs, etc.; but what I don't understand is what makes up the backbone structure of the internet. I have never seen any clear UML diagram or description of the backbone of the internet. I have heard things about 7 main servers (don't quote me on that), but who owns each server, when were they built, how old are they, and how do they interact? It seems surprisingly hard to find information on this. All my Google searching provides seemingly vague and outdated information. Edit: Sorry if you found this question vague; not only did I write it late last night, but I have a vague understanding of how the backbone of the net works, thus making my question vague.
I'll take a stab at this: First things first, no one owns or controls the internet. Right now there is de facto control provided through the DNS servers, which are what change "www.google.com" into "IP address 123.456.789.000". These DNS "root" servers control the domain name infrastructure that provides the web as many people know it. However, the internet is actually a network of networks controlled by people (hence inter-network). If you imagine that you have a network of computers controlled by a cable provider, another by a telephone provider, network them to a government network, network them to a link to Europe, another to Hawaii, another to Asia, another to Australia, you can see how the internet starts to take shape. Essentially companies, and in some cases countries, will pay to have their internet connected to a link in a network to America. Once these links were established the internet really started to take shape. From a hardware point of view, the internet is built upon IP (not TCP/IP). IP is a system that provides for networking using a shared address space (the familiar www.xxx.yyy.zzz addressing system) with a notion of a "gateway", which means: if I don't know who owns this packet, I'll forward it to someone who does. Essentially a routing network was created which defines which routers control certain IP ranges. That way, if I can digress, for your computer in America to send a packet to a computer in Australia, the following would happen. You send the packet via your modem to your ISP. Your ISP uses its rules to determine it doesn't own the IP of the packet and forwards it to its backbone or a Tier 1 provider. The backbone will determine that this packet is for Australia and sends it to a machine connected to a fibre link, or possibly over a satellite, etc. This process happens in reverse from the backbone, to the ISP, to the local ISP connection, to the modem in the house, to the computer in the house. Now when you realise that the routing rules have redundancy (i.e. you have more than one route to send a packet to Australia, for instance picking a different cable, or using a satellite) you can start to understand how the internet can survive when computers or routers shut down or fail, which is a key part of the infrastructure. So if you have a network that can get packets to anyone who is connected, and anyone can connect by agreement to Tier 1 connections, you can combine the capability to talk to any computer on the network with a protocol to send information, such as HTTP, FTP, SSL etc., and you end up with the internet as it exists. A final word: If you managed to soak all of that in, you can see now that the argument that "everyone should be able to watch YouTube and make VoIP calls without it EVER being throttled" doesn't mesh so well with the fact that the people who are providing the internet have to share it with networks which they don't control. I'm speaking about net neutrality, of course.
{ "source": [ "https://serverfault.com/questions/39271", "https://serverfault.com", "https://serverfault.com/users/49930/" ] }
39,288
We have moved mysql data directory to another disk, so now /var/lib/mysql is just a mount point to another partition. We set the owner of the /var/lib/mysql directory to mysql.mysql . But everytime we mount the partition, the ownership changes to root.root . Because of this, we couldn't create additional MySQL database. Our fstab entry: /dev/mapper/db-db /var/lib/mysql ext3 relatime 0 2 How to change the owner of mount point to user other than root?
You need to change the permissions of the mounted filesystem, not of the mount point when the filesystem is not mounted. So mount /var/lib/mysql then chown mysql.mysql /var/lib/mysql . This will change the permissions of the root of the MySQL DB filesystem. The basic idea is that the filesystem holding the DB needs to be changed, not the mount point, unless its path has some issues, e.g. lib can only be read by root.
{ "source": [ "https://serverfault.com/questions/39288", "https://serverfault.com", "https://serverfault.com/users/10236/" ] }
39,522
When backing up with rsync, How do I keep the full directory structure? For example, the remote server is saturn , and I want to backup saturn 's /home/udi/files/pictures to a local directory named backup . I want to have (locally) backup/home/udi/files/pictures rather than backup/pictures .
Use the -R or --relative option to preserve the full path.
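A sketch matching the example in the question (the extra flags are the usual archive/verbose choices):
  rsync -avR saturn:/home/udi/files/pictures backup/
With --relative the source path is reproduced under the destination, so the files land in backup/home/udi/files/pictures rather than backup/pictures.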
{ "source": [ "https://serverfault.com/questions/39522", "https://serverfault.com", "https://serverfault.com/users/10904/" ] }
39,712
We get the message “TTL expired in transit” when we try to ping to a server in a different network segment. When we run tracert, 4 ip addresses repeat themselves indefinitely: 14 60 ms 59 ms 60 ms xxx.xxx.xxx.2 15 83 ms 81 ms 82 ms xxx.xxx.xxx.128 16 75 ms 80 ms 81 ms xxx.xxx.xxx.249 17 81 ms 78 ms 80 ms xxx.xxx.xxx.250 18 82 ms 80 ms 77 ms xxx.xxx.xxx.2 19 102 ms 101 ms 100 ms xxx.xxx.xxx.128 20 101 ms 100 ms 98 ms xxx.xxx.xxx.249 21 97 ms 98 ms 99 ms xxx.xxx.xxx.250 ... What are the basic steps for troubleshooting this error?
As stated in the other answers, there is a loop in routing that is causing the TTL to expire. Check the routes on the devices whose IP addresses are repeating. On Linux you can use route -n as the root user to see the current routing table. On Windows you can open cmd and use the command route print to see the current routing table. On Cisco managed switches you can use the command show ip route. Using the above commands on all four IPs that are repeating, you should see which routing table is wrong. One of the four devices/hosts involved should ideally route traffic to the destination you are pinging via some other gateway.
{ "source": [ "https://serverfault.com/questions/39712", "https://serverfault.com", "https://serverfault.com/users/416/" ] }
39,733
I have an instance of an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu machine. It works fine from one local Ubuntu machine and also from a laptop. I got the message Permission denied (publickey). when trying to SSH to EC2 from a different local Ubuntu machine. I'm thinking there may be problems with the security settings on the Amazon EC2 instance, which has limited IP access to one instance; or maybe a certificate needs to be regenerated. Does anyone know a solution to the Permission denied error?
The first thing to do in this situation is to use the -v option to ssh, so you can see what types of authentication are tried and what the result is. Does that help enlighten the situation? In the update to your question, you mention "on another local Ubuntu". Have you copied the SSH private key over to the other machine?
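A minimal sketch combining both suggestions (the key path is a placeholder, and the login user varies by AMI - it may be root, ubuntu or ec2-user):
  chmod 600 ~/.ssh/my-ec2-keypair.pem
  ssh -v -i ~/.ssh/my-ec2-keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
The -i flag points ssh at the private half of the keypair the instance was launched with, and the -v output shows whether that key is being offered and rejected or simply never found.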
{ "source": [ "https://serverfault.com/questions/39733", "https://serverfault.com", "https://serverfault.com/users/63066/" ] }
39,828
I'm looking for the best way to send passwords over the internet safely. Options I've looked at are PGP and encrypted RAR files. There are no real parameters other than getting from point a to point b over the internets without too much risk.
PGP or another asymmetric encryption method sounds like the way to go: both sides publish their public keys; sign your message with your own private key; encrypt it with the other party's public key; transmit the file. Only the other party's private key can decrypt the message, and your public key can be used to validate the message => secure & private.
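A hedged GnuPG sketch of exactly that flow (the key file, address and file names are hypothetical):
  gpg --import alice_public.asc
  gpg --sign --encrypt --recipient [email protected] --output passwords.txt.gpg passwords.txt
The recipient then runs gpg --decrypt passwords.txt.gpg, which decrypts with her private key and verifies your signature in one step.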
{ "source": [ "https://serverfault.com/questions/39828", "https://serverfault.com", "https://serverfault.com/users/3528/" ] }
40,071
SSH supports two signature algorithms for key pairs: RSA and DSA. Which is preferred, if any? For RSA, what is the minimum acceptable key length?
RSA is generally preferred (now that the patent issue is over with) because it can go up to 4096 bits, where DSA has to be exactly 1024 bits (in the opinion of ssh-keygen ). 2048 bits is ssh-keygen 's default length for RSA keys, and I don't see any particular reason to use shorter ones. (The minimum possible is 768 bits; whether that's "acceptable" is situational, I suppose.)
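A hedged example of generating such a key (the comment and file name are placeholders):
  ssh-keygen -t rsa -b 4096 -C "you@workstation" -f ~/.ssh/id_rsa
Leaving out -b gives you ssh-keygen's 2048-bit default mentioned above, while ssh-keygen -t dsa (at least in the OpenSSH releases of that era) will only accept 1024-bit keys, which is the limitation being described.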
{ "source": [ "https://serverfault.com/questions/40071", "https://serverfault.com", "https://serverfault.com/users/4881/" ] }
40,144
I'd like to retrieve the absolute file name of the script file that's currently executed. Links should be resolved, too. On Linux, this seems to be done like this: $(readlink -mn "$0") but readlink seems to work very differently on Mac OS X. I've read that this is done using $(realpath $0) in BSD but that doesn't work, either. Mac OS X does not have realpath . Any idea?
I cheat and use perl for this very thing: #!/bin/bash dirname=`perl -e 'use Cwd "abs_path";print abs_path(shift)' $0` echo $dirname You'd think I'd just write the entire script in perl, and often I do, but not always.
{ "source": [ "https://serverfault.com/questions/40144", "https://serverfault.com", "https://serverfault.com/users/10266/" ] }
40,156
How can I ensure that when a new version of a configuration file is downloaded via Puppet from the master repository to one of the managed servers, the relevant service is restarted? Typical scenario: let's say there is a new munin or apache config. The puppet client discovers it, overwrites the local files... and... how do I make sure the service is restarted/reloaded? Thanks a lot!
An alternative to notify is subscribe: file { "/etc/sshd_config": source => "....", } service { sshd: ensure => running, subscribe => File["/etc/sshd_config"], } The difference being that the relationship is described from the other end. For example, you might make apache subscribe to /etc/apache/httpd.conf, but you'd make a vhost file notify apache, as your apache class won't know about every vhost you have. A similar dual-ended situation applies to require and before. It's just a matter of which makes more sense in the particular situation. As Chad mentioned, if you find puppet constantly trying to start your service, then you need to add a pattern parameter, which is a regex to apply against the list of processes. By default puppet will do a stop and start to restart a service. If you add "hasrestart => true", then it will use the command specified in the "restart" parameter to restart the service.
{ "source": [ "https://serverfault.com/questions/40156", "https://serverfault.com", "https://serverfault.com/users/2413/" ] }
40,284
I wonder if there is a way to create a 'virtual file' from a bash output. Example: Let's say I want to email the output of mysqldump as an attachment to an external email address. I can use Mutt to do so. The mutt option I need to use is -a <name of the file I want to attach> . I know I could use a temporary file: mysqldump mysqldumpoptions > /tmp/tempfile && mutt -a /tmp/tempfile [email protected] But I would rather redirect the mysqldump output directly to Mutt instead. Mutt's -a option only accepts a file and not a stream, but maybe there is a way to pass it some kind of virtual file descriptor or something along those lines. Something like: mutt -a $(mysqldump mysqldumpoptions) [email protected] Is it possible? If not, why? This is maybe a silly example and there surely are easier ways to do this, but I hope it explains my question about creating a virtual file from the output of another command.
This is the cleanest way to do what you want: mutt [email protected] -a <(mysqldump mysqldumpoptions) The <() operator is what you were asking for; it creates a FIFO (or /dev/fd) and forks a process and connects stdout to the FIFO. >() does the same, except connects stdin to the FIFO instead. In other words, it does all the mknod stuff for you behind the scenes; or on a modern OS, does it in an even better way. Except, of course, that doesn't work with mutt, it says: /dev/fd/63: unable to attach file. I suspect the problem is that mutt is trying to seek in the file, which you can't do on a pipe of any sort. The seeking is probably something like scanning the file to figure out what MIME type it is and what encodings might work (ie, whether the file is 7bit or 8bit), and then seeking to the beginning of the file to actually encode it into the message. If what you want to send is plain text, you could always do something like this to make it the main contents of the email instead (not ideal, but it actually works): mysqldump mysqldumpoptions | mutt -s "Here's that mysqldump" [email protected]
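If you do hit that mutt limitation and fall back to a temporary file, a slightly safer sketch of the approach from the question (options and address as in the original) cleans up after itself even if mutt fails:
  tmpfile=$(mktemp) || exit 1
  trap 'rm -f "$tmpfile"' EXIT
  mysqldump mysqldumpoptions > "$tmpfile"
  mutt -s "mysqldump" -a "$tmpfile" [email protected] < /dev/null
(Newer mutt releases want a -- between -a and the recipient address.) The trap guarantees the dump never lingers in /tmp.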
{ "source": [ "https://serverfault.com/questions/40284", "https://serverfault.com", "https://serverfault.com/users/12765/" ] }
40,504
I have written a php script to check if there are any new files in a folder and, if any new files exist, upload them to a server. These files can be quite large. I want to run this script frequently-let's say every 5 min-as a scheduled task so that the files get moved to the server as soon as possible. However, once the script is already attempting to upload a file, I do not want it to be run again, as I am afraid the second instance will overwrite the file that is already being uploaded to the server. How can I run the script as a scheduled task unless the script is already running?
Assuming you're just setting the task to "Repeat" in the XP "Scheduled Tasks" system, no further action on your part is needed. Scheduled Tasks won't "Repeat" a task if it's already running. If you want to override that default, you can check the box "If the task is still running, stop it at this time" to cause the task scheduler to kill the last instance before starting a new one (though it sounds like you probably don't want that).
{ "source": [ "https://serverfault.com/questions/40504", "https://serverfault.com", "https://serverfault.com/users/2763/" ] }
40,712
I want to assign my virtual machines MAC addresses so that I can configure DHCP reservations for them so that they always get the same IP address regardless of which host hypervisor they are running on or operating system they are running. What I need to know is what range of MAC addresses can I use without fear that one day some device may be connected to our network with that MAC? I have read the Wikipedia article on MAC addresses and this section seems to indicate that if I create an address with the form 02-XX-XX-XX-XX-XX then it is considered a locally administered address. I would assume this means that no hardware manufacturer would ever use an address starting with 02 so I should be safe to use anything that starts with 02 for my virtual machines? Thanks for the help.
There are actually 4 sets of Locally Administered Address Ranges that can be used on your network without fear of conflict, assuming no one else has assigned these on your network: x2-xx-xx-xx-xx-xx x6-xx-xx-xx-xx-xx xA-xx-xx-xx-xx-xx xE-xx-xx-xx-xx-xx Replacing x with any hex value.
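A throwaway bash one-liner for generating an address in the x2 range (the 02 prefix is one of the locally administered patterns listed above):
  printf '02:%02X:%02X:%02X:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))
Record whatever it prints in both the DHCP reservation and the VM's configuration so the two always agree.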
{ "source": [ "https://serverfault.com/questions/40712", "https://serverfault.com", "https://serverfault.com/users/12890/" ] }
40,764
I've found, a lot of times, that "the big boss" in a company wants to be able to install "anything" and do anything on their computer. Of course we can tell him that this is bad because the IT system administrators lose control over the computer, and so on. Is there any irrefutable argument to convince big bosses that it's better not to have administrator privileges on their desktop computer?
The only time I was even a tiny bit successful on this was a boss who was willing to use run as with alternate credentials if he wanted to install something. I explained that even the sysadmins logged onto systems with normal accounts most of the time and then created him his very own admin account that he was only to use when he wanted to do something special. It was actually very effective, and kept his machine from getting totally screwed up in the two years that I was at the company. This was a relatively savvy CEO who was able to understand the whole run as thing, and I'm sure he had stuff on there I wouldn't have approved, but at least it stopped him from passively screwing stuff up.
{ "source": [ "https://serverfault.com/questions/40764", "https://serverfault.com", "https://serverfault.com/users/5920/" ] }
41,020
I'm trying to figure out how LVM snapshots work so I can implement it on my fileserver but I'm having difficulty finding anything on google that explains how it works, instead of how to use it for a base backup system. From what I've read I think it works something like this: You have an LVM with a primary partition and lots and lots of unallocated freespace not in the partition Then you take a snapshot and mount it on a new Logical Volume. Snapshots are supposed to have changes so this first snapshot would be a whole copy, correct? Then, the next day you take another snapshot (this one's partition size doesn't have to be so big) and mount it. Somehow the LVM keeps track of the snapshots, and doesn't store unchanged bits on the primary volume. Then you decide that you have enough snapshots and get rid of the first one. I have no idea how this works or how that would affect the next snapshot. Can someone correct me where I'm wrong. At best, I'm guessing, I can't find anything on google. vgdiplay obu1:/home/jail/home/qps/backup/D# vgdisplay --- Volume group --- VG Name fileserverLVM System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 3 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 2 Max PV 0 Cur PV 1 Act PV 1 VG Size 931.51 GB PE Size 4.00 MB Total PE 238467 Alloc PE / Size 238336 / 931.00 GB Free PE / Size 131 / 524.00 MB VG UUID qSGaG1-SQYO-D2bm-ohDf-d4eG-oGCY-4jOegU
Why not have a look at the snapshots section of the LVM-HOWTO ? LVM snapshots are your basic "copy on write" snapshot solution. The snapshot is really nothing more than asking the LVM to give you a "pointer" to the current state of the filesystem and to write changes made after the snapshot to a designated area. LVM snapshots "live" inside the volume group hosting the volume subject to the snapshot-- not another volume. Your statement "...lots and lots of unallocated freespace not in the partition" makes it sound like your thinking is that the snapshots "live" outside the volume group subject to snapshot, and that's not accurate. Your volume group lives in a hard disk partition, and the volume being subject to snapshot and any snapshots you've taken live in that volume group. The normal way that LVM snapshots are used is not for long-term storage, but rather to get a consistent "picture" of the filesystem such that a backup can be taken. Once the backup is done, the snapshot is discarded. When you create an LVM snapshot you designate an amount of space to hold any changes made while the snapshot is active. If more changes are made than you've designated space for, the snapshot becomes unusable and must be discarded. You don't want to leave snapshots lying around because (a) they'll fill up and become unusable, and (b) the system's performance is impacted while a snapshot is active-- things get slower. Edit: What Microsoft Volume Shadow Copy Services and LVM snapshots do aren't too tremendously different. Microsoft's solution is a bit more comprehensive (as is typically the case with Microsoft-- for better or for worse their tools and products often seek to solve pretty large problems versus focusing on one thing). VSS is a more comprehensive solution that unifies support for hardware devices that support snapshots and software-based snapshots into a single API. Further, VSS has APIs to allow applications to be made quiescent through the snapshot APIs, whereas LVM snapshots are just concerned with snapshots-- quiescing any applications is your problem (putting databases into "backup" states, etc).
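A hedged sketch of that backup workflow against your volume group (the origin LV name "data" and the sizes are hypothetical; the snapshot only needs to be large enough to absorb writes made while it exists, and your vgdisplay shows roughly 524 MB of free extents to draw from):
  lvcreate --size 500M --snapshot --name nightly_snap /dev/fileserverLVM/data
  mount -o ro /dev/fileserverLVM/nightly_snap /mnt/snap
  # ... run the backup against /mnt/snap ...
  umount /mnt/snap
  lvremove -f /dev/fileserverLVM/nightly_snap
Create, back up, discard - exactly the short-lived usage described above.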
{ "source": [ "https://serverfault.com/questions/41020", "https://serverfault.com", "https://serverfault.com/users/12659/" ] }
41,041
Is there some way to define user specific hosts - like in /etc/hosts? Maybe something like ~/.hosts?
For anything ssh based (including rsync over ssh) you can add entries to your ~/.ssh/config file e.g. Host myhost Hostname myhost.example.com Then ssh myhost will connect you to myhost.example.com
{ "source": [ "https://serverfault.com/questions/41041", "https://serverfault.com", "https://serverfault.com/users/13025/" ] }
41,064
Is there a built-in command line tool that will do reverse DNS look-ups in Windows? I.e., something like <toolname> w.x.y.z => mycomputername I've tried: nslookup : seems to be forward look-up only. host : doesn't exist dig : also doesn't exist. I found " What's the reverse DNS command line utility? " via a search, but this is specifically looking for a *nix utility, not a Windows one.
ping -a w.x.y.z Should resolve the name from the IP address if the reverse lookup zone has been set up properly. If the reverse lookup zone does not have an entry for the record, the -a will just ping without a name.
{ "source": [ "https://serverfault.com/questions/41064", "https://serverfault.com", "https://serverfault.com/users/289/" ] }
41,130
I have a simple scenario. There's an application on ServerA that runs under the built-in Network Service account. It needs to read and write files on a folder share on ServerB. What permissions do I need to set on the folder share on ServerB? I can get it to work by opening the security dialog of the share, adding a new security user, clicking "Object Types" and making sure "Computers" is checked, and then adding ServerA with read/write access. By doing this, what accounts are gaining access to the share? Only Network Service? All local accounts on ServerA? What should I be doing to grant ServerA's Network Service account access to ServerB's share? Note: I know this is similar to this question . However, in my scenario ServerA and ServerB are in the same domain.
The "Share Permissions" can be "Everyone / Full Control"-- only the NTFS permissions really matter. (Cue religious arguments from people who have an unhealthy attachment to "Share Permissions" here...) In the NTFS permissions on the folder on ServerB you could get by with either "DOMAIN\ServerA - Modify" or "DOMAIN\ServerA - Write", depending on whether it needed to be able to modify existing files or not. (Modify is really the preferred because your application may re-open a file after it creates it to write further-- Modify gives it that right, but Write does not.) Only the "SYSTEM" and "Network Service" contexts on ServerA will have access, assuming you name "DOMAIN\ServerA" in the permission. Local user accounts on the ServerA computer are different from the "DOMAIN\ServerA" context (and would have to be named individually if you somehow did want to grant them access). As an aside: Server computer roles change. You may want to create a group in the AD for this role, put ServerA into that group, and grant the group rights. If you ever change ServerA's role and replace it with, say, ServerC, you need only change the group memberships and you never need to touch the folder permission again. A lot of admins think about this kind of thing for users being named in permissions, but they forget that "computers are people too" and their roles sometimes change. Minimizing your work in the future (and your ability to make mistakes) is what being efficient in this game is all about...
{ "source": [ "https://serverfault.com/questions/41130", "https://serverfault.com", "https://serverfault.com/users/657/" ] }
41,523
I am trying to copy all newer JPGs from one folder to another using the following command: cp -u --force /home/oldfolder/*.jpg /home/newfolder/ and I get the following prompt: cp: overwrite `/home/newfolder/4095-181.jpg'? I know the '-u' is working fine, as it is only prompting me on the newer files, but I don't want to get the prompt; I just want it to overwrite them. I have tried --force and -f. Any suggestions? Thanks in advance.
Maybe your cp command is an alias? Try: \cp -uf file folder/
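To confirm that an alias (typically cp -i from the distribution's root profile) is what is intercepting the command, and then bypass it using the paths from the question:
  type cp
  \cp -u -f /home/oldfolder/*.jpg /home/newfolder/
The leading backslash skips the alias for that one invocation; unalias cp would remove it for the rest of the session.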
{ "source": [ "https://serverfault.com/questions/41523", "https://serverfault.com", "https://serverfault.com/users/12228/" ] }
41,841
I've heard often that it is better to su to root rather than log in directly as the root user (and of course people also say that it's even better to use sudo). I've never really understood why one is better than the other(s), insight?
The inference is to only su or sudo when required. Most everyday tasks don't require a root shell. So it is good practice to use an unprivileged shell as your default behaviour and then only elevate to root when you need to perform special tasks. By doing so you are reducing scope for dangerous mistakes (bad scripting, misplaced wildcards, etc) and vulnerabilities from any applications that you use. Especially those which connect to the Internet - see the old adage "Don't IRC as root" . sudo is often recommended because it allows you fine grain and audit the use of such privileges. By observing these practices you are also in a position to disable remote root logins. This increases the bar of entry for any would-be attacker, as they would need to compromise both a regular user account that was a member of the "wheel" group and ideally only authorised by SSH public keys, then the root account itself.
{ "source": [ "https://serverfault.com/questions/41841", "https://serverfault.com", "https://serverfault.com/users/9735/" ] }
41,922
Similar to a http://whatismyip.com lookup. It would obviously need to query a computer out there. Just wondering if anyone had a clever way to do it?
curl http://myip.dnsomatic.com
{ "source": [ "https://serverfault.com/questions/41922", "https://serverfault.com", "https://serverfault.com/users/1576/" ] }
41,964
How can I hide the screen output (printf) of a shell application in Linux?
You can redirect the output of any program so that it won't be seen. $ program > /dev/null This will redirect the standard output - you'll still see any errors $ program &> /dev/null This will redirect all output, including errors.
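One portability note worth a quick example: &> is a bash-ism, so in scripts run by plain /bin/sh the equivalent spelling is
  program > /dev/null 2>&1
which sends standard output to /dev/null and then points standard error at the same place.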
{ "source": [ "https://serverfault.com/questions/41964", "https://serverfault.com", "https://serverfault.com/users/958/" ] }
41,972
What is faster on the same hardware, Xen or KVM? I'm trying to pick-up a virtualization technology to work, which gives the best performance. There are some benchmarks here that I found on the subject: http://virt.kernelnewbies.org/XenVsKVM They show KVM as a winner, with significant difference in performance - which goes against the idea that KVM is a type-2 hypervisor, and by definition it should be slower than Type-1 hypervisors (like Xen) - or at least that what the articles on the web say. Any idea on the subject?
That benchmark is only comparing the speed of the native OS to a single guest OS. It is hardly a real-world test. I don't think I would put much weight on it. Most of the KVM camp argues that Xen requires too many interrupts and hops between kernel and user space, but from most of the more real-world benchmarks that I've seen that hasn't really been realized and Xen seems to be a bit faster than KVM. Sorry I don't have a link to back that up handy. But I will say that KVM is improving fast and seems to be catching up on feature set and stability quickly. As to which approach is better: The Xen camp will argue that a true light-weight hypervisor is required for virtualization to be secure and fast. Xen is also starting to be supported in firmware by some vendors, which is also nice. The KVM camp will argue that KVM is simpler and that Linux is capable of being a good hypervisor. In the end it's still unclear which direction will ultimately win. Xen certainly has a head start and already has a nice market share. But it's not in the mainline kernel, yet. Hopefully that will change soon and there has certainly been a lot of talk about this on the kernel list in the past few months. Red Hat is in the KVM camp now and will be pushing it as the virtualization platform of choice. Red Hat Linux 5.4, which is coming out shortly, will be the first to include it. So that will likely attract shops that haven't rolled out or committed to a virtualization platform yet. As far as tools go, both Xen and KVM use libvirt and QEMU and the tools associated with them. So they share many of the same tools such as virt-manager. We use Xen at work, and it works well for us. But I've been looking into KVM due to some USB forwarding and PCI passthrough issues I've been unable to resolve with Xen. I'm not sure KVM is any better at this, but I guess I'll find out once I try it. One thing I have noticed in researching my USB issues is that KVM's documentation is more accessible and organized compared to Xen's. But there is no perfect virtualization platform so you'll need to figure out what makes sense for you.
{ "source": [ "https://serverfault.com/questions/41972", "https://serverfault.com", "https://serverfault.com/users/13323/" ] }
42,021
How can I ping a certain address and when found, stop pinging. I want to use it in a bash script, so when the host is starting up, the script keeps on pinging and from the moment the host is available, the script continues...
A further simplification of Martynas' answer: until ping -c1 www.google.com >/dev/null 2>&1; do :; done note that ping itself is used as the loop test; as soon as it succeeds, the loop ends. The loop body is empty, with the null command " : " used to prevent a syntax error. Update: I thought of a way to make Control-C exit the ping loop cleanly. This will run the loop in the background, trap the interrupt (Control-C) signal, and kill the background loop if it occurs: ping_cancelled=false # Keep track of whether the loop was cancelled, or succeeded until ping -c1 "$1" >/dev/null 2>&1; do :; done & # The "&" backgrounds it trap "kill $!; ping_cancelled=true" SIGINT wait $! # Wait for the loop to exit, one way or another trap - SIGINT # Remove the trap, now we're done with it echo "Done pinging, cancelled=$ping_cancelled" It's a bit circuitous, but if you want the loop to be cancellable it should do the trick.
{ "source": [ "https://serverfault.com/questions/42021", "https://serverfault.com", "https://serverfault.com/users/317/" ] }
42,174
I'm trying to monitor some web traffic using wireshark. Our web proxy is on port 9191. How can I get the wireshark view to treat port 9191 just like port 80 - ie as HTTP. Just using Decode_As on the menu seems to allow half the conversation but only one side. Any suggestions how to make this a permanent option?
If you go to Edit -> Preferences -> Protocols -> HTTP, you should find a list of ports that are considered to be HTTP. Add port 9191 to that list. I believe you have to re-start Wireshark and re-open your capture file or re-start your capture for this to take effect. This is on the Windows version 1.0.3; it might be slightly different on other platforms. Obviously this isn't a generic way to alter the port to protocol mappings, but the authors of the http decoder seem to have recognized that people run it on many different ports.
{ "source": [ "https://serverfault.com/questions/42174", "https://serverfault.com", "https://serverfault.com/users/2958/" ] }
42,426
In a typical browser, when we set a proxy server, we can define a list of hostnames/IP addresses that are not to use the proxy server. How do we accomplish the same thing when using $http_proxy? I rely on setting $http_proxy to use the proxy server in Chromium on Linux, but there are certain IP addresses on the intranet that I need to bypass the proxy settings for.
Try setting variable named no_proxy in following manner $ export no_proxy=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 But if you do this in the command line, you will have to do it again each time you open a new terminal window. If you want those settings to be persistent, put this very command inside your .profile file under $HOME ( read this answer if you want to understand better what this .profile file is ).
{ "source": [ "https://serverfault.com/questions/42426", "https://serverfault.com", "https://serverfault.com/users/11673/" ] }
42,519
This morning, in order to correct a problem with a name mismatch in the security certificate, I followed the recommended steps from How to fix mail server SSL? , but now, when attempting to send an email from a client (in this case the client is Windows Mail), I receive the following error. The rejected e-mail address was '[email protected]'. Subject 'This is a test. ', Account: 'mail.domain.com', Server: 'mail.domain.com', Protocol: SMTP, Server Response: '554 5.7.1 : Relay access denied', Port: 25, Secure(SSL): No, Server Error: 554, Error Number: 0x800CCC79 Edit : I can still retrieve emails from this account, and I send emails to other accounts at the same domain. I just can't send emails to recipients outside of our domain. I tried disabling TLS altogether but no dice, I still get the same error. When I check file mail.log , I see the following. Jul 18 08:24:41 company imapd: LOGIN, [email protected], ip=[::ffff:111.111.11.11], protocol=IMAP Jul 18 08:24:42 company imapd: DISCONNECTED, [email protected], ip=[::ffff:111.111.11.11], headers=0, body=0, rcvd=83, sent=409, time=1 Jul 18 08:25:19 company postfix/smtpd[29282]: connect from company.university.edu[111.111.11.11] Jul 18 08:25:19 company postfix/smtpd[29282]: NOQUEUE: reject: RCPT from company.university.edu[111.111.11.11]: 554 5.7.1 <[email protected]>: Relay access denied; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<UserPC> Jul 18 08:25:19 company postfix/smtpd[29282]: disconnect from company.university.edu[111.111.11.11] Jul 18 08:25:22 company imapd: DISCONNECTED, [email protected], ip=[::ffff:111.111.11.11], headers=13, body=142579, rcvd=3289, sent=215892, time=79 File main.cf looks like this: # # Postfix MTA Manager Main Configuration File; # # Please do NOT edit this file manually; # # # Postfix directory settings; These are critical for normal Postfix MTA functionallity; # command_directory = /usr/sbin daemon_directory = /usr/lib/postfix program_directory = /usr/lib/postfix # # Some common configuration parameters; # inet_interfaces = all mynetworks = 127.0.0.0/8 mynetworks_style = host myhostname = mail.domain.com mydomain = domain.com myorigin = $mydomain smtpd_banner = $myhostname ESMTP 2.4.7.1 (Debian/GNU) setgid_group = postdrop # # Receiving messages parameters; # mydestination = localhost, company append_dot_mydomain = no append_at_myorigin = yes transport_maps = mysql:/etc/postfix/transport.cf # # Delivering local messages parameters; # mail_spool_directory = /var/spool/mail mailbox_size_limit = 0 mailbox_command = procmail -a "$EXTENSION" biff = no alias_database = hash:/etc/aliases local_recipient_maps = # # Delivering virtual messages parameters; # virtual_mailbox_maps=mysql:/etc/postfix/mysql_virt.cf virtual_uid_maps=mysql:/etc/postfix/uids.cf virtual_gid_maps=mysql:/etc/postfix/gids.cf virtual_mailbox_base=/usr/local/virtual virtual_maps=mysql:/etc/postfix/virtual.cf virtual_mailbox_domains=mysql:/etc/postfix/virtual_domains.cf # # SASL paramters; # smtp_use_tls = yes smtpd_use_tls = yes smtpd_tls_auth_only = yes smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s smtp_tls_CAfile = /etc/postfix/ssl/smptd.pem smtp_tls_cert_file = /etc/postfix/ssl/smptd.crt smtp_tls_key_file = /etc/postfix/ssl/smptd.key smtpd_tls_CAfile = /etc/postfix/ssl/smptd.pem smtpd_tls_cert_file = /etc/postfix/ssl/smptd.crt smtpd_tls_key_file = /etc/postfix/ssl/smptd.key smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = 
broken_sasl_auth_clients = yes smtpd_sender_restrictions = permit_sasl_authenticated permit_mynetworks smtpd_recipient_restrictions = permit_sasl_authenticated check_recipient_access hash:/etc/postfix/filtered_domains permit_mynetworks reject_unauth_destination As a side note, my employer wants to be able to send emails from clients (Thunderbird and Outlook) both from within our local network and outside it.
TLS just enables encryption on the smtp session and doesn't directly affect whether or not Postfix will be allowed to relay a message. The relaying denied message occurs because the smtpd_recipient_restrictions rules was not matched. One of those conditions must be fulfilled to allow the message to go through: smtpd_recipient_restrictions = permit_sasl_authenticated check_recipient_access hash:/etc/postfix/filtered_domains permit_mynetworks reject_unauth_destination To explain those rules: permit_sasl_authenticated permits authenticated senders through SASL. This will be necessary to authenticate users outside of your network which are normally blocked. check_recipient_access This will cause postfix to look in /etc/postfix/filtered_domains for rules based on the recipient address. (Judging by the file name on the file name, it is probably just blocking specific domains... Check to see if gmail.com is listed in there?) permit_mynetworks This will permit hosts by IP address that match IP ranges specified in $mynetworks. In the main.cf you posted, $mynetworks was set to 127.0.0.1, so it will only relay emails generated by the server itself. Based on that configuration, your mail client will need to use SMTP Authentication before being allowed to relay messages. I'm not sure what database SASL is using. That is specified in /usr/lib/sasl2/smtpd.conf Presumably it also uses the same database as your virtual mailboxes, so you should be able enable SMTP authentication in your mail client and be all set.
{ "source": [ "https://serverfault.com/questions/42519", "https://serverfault.com", "https://serverfault.com/users/2882/" ] }
42,531
I checked /var/log and /usr/local/mysql and I can't seem to find the log. I am trying to troubleshoot an error establishing a database connection with a PHP function.
As Chealion mentioned, there are several ways that your mysql could have been installed. Each of which will place your data dir and/or logs in different locations. The following command will give you (and us) a good indication of where to look. ps auxww|grep [m]ysqld # Putting brackets around the first char is a `grep`+`ps` trick # to keep it from matching its own process. # Note: For zsh compatibility put quotes around the grep regex Can you post the result of that command here please? Mine looks like this: _mysql 101 0.0 0.3 112104 13268 ?? S 12:30AM 0:13.20 /opt/local/libexec/mysqld --basedir=/opt/local --datadir=/opt/local/var/db/mysql --user=mysql --pid-file=/opt/local/var/db/mysql/rbronosky-mbp.pid root 76 0.0 0.0 600172 688 ?? S 12:30AM 0:00.02 /bin/sh /opt/local/lib/mysql/bin/mysqld_safe --datadir=/opt/local/var/db/mysql --pid-file=/opt/local/var/db/mysql/rbronosky-mbp.pid From that you can see that my datadir is /opt/local/var/db/mysql (because I installed via MacPorts). Let's take this lesson a bit further... From the first line you can see the my daemon is /opt/local/libexec/mysqld . The mysqld can be called with --verbose --help to get a list of all command line options (and here is the important/valuable part!) followed by the values that would be used if you were launching mysqld instead of just checking the help output. The values are the result of your compile time configuration, my.cnf file, and any command line options. I can exploit this feature to find out EXACTLY where my log files are, like so: /opt/local/libexec/mysqld --verbose --help|grep '^log' Mine looks like this: log /tmp/mysql.log log-bin /tmp/mysql-bin log-bin-index (No default value) log-bin-trust-function-creators FALSE log-bin-trust-routine-creators FALSE log-error /tmp/mysql.error.log log-isam myisam.log log-queries-not-using-indexes FALSE log-short-format FALSE log-slave-updates FALSE log-slow-admin-statements FALSE log-slow-queries (No default value) log-tc tc.log log-tc-size 24576 log-update (No default value) log-warnings 1 LO AND BEHOLD! all of the advice in the world was not going to help me because my log file is kept in a completely non-standard location! I keep mine in /tmp/ because on my laptop, I don't care (actually I prefer) to loose all of my logs on reboot. Let's put it all together and make you a oneliner: $(ps auxww|sed -n '/sed -n/d;/mysqld /{s/.* \([^ ]*mysqld\) .*/\1/;p;}') --verbose --help|grep '^log' Execute that one command and you will get a list of all of the logs for your running instance of mysql. Enjoy! This Bash-Fu brought to you for free by my commitment to all things Open Source.
{ "source": [ "https://serverfault.com/questions/42531", "https://serverfault.com", "https://serverfault.com/users/3567/" ] }
42,571
I'm currently fighting an issue with ASP.Net taking minutes to load a page for the first time. Through playing with settings I've found that disabling "Shutdown worker processes after being idle for (time in minutes)" stops the issue from occurring... I assume the reason it stops my issue from occurring is due to the fact that the worker process does not end and therefore the app pool never needs to recreate itself. Is there any harm in disabling this option? What ramifications could it have?
I highly recommend turning off the idle timeout in most situations. It's the default but it's meant more for bulk hosters that want unused worker processes to be ended so that they can always assume that they won't have all of them running at the same time. However, if you have just a few production app pools on a server but occasionally don't have a visitor in a 20 minute space (i.e. overnight), you don't want your app pool to stop. You likely have enough resources to have all of your app pools running at once. Additionally the default settings of recycling the app pool at 1740 minutes should also be changed. I recommend scheduling it for an off-peak time like 4:00am daily rather than having it at different times each day. More on that here on my website.
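If you'd rather script these changes than click through IIS Manager, appcmd.exe can set both values. This is only a sketch - the app pool name is a placeholder, and you should double-check the property syntax against your IIS 7 build:

    rem disable the idle timeout entirely for a pool called "MyAppPool" (placeholder name)
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
    rem turn off the 1740-minute periodic recycle...
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00
    rem ...and schedule a fixed recycle at 4:00am instead
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /+recycling.periodicRestart.schedule.[value='04:00:00']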
{ "source": [ "https://serverfault.com/questions/42571", "https://serverfault.com", "https://serverfault.com/users/13366/" ] }
42,678
I haven't changed anything related to the DNS entry for serverfault.com , but some users were reporting today that the serverfault.com DNS fails to resolve for them . I ran a justping query and I can sort of confirm this -- serverfault.com dns appears to be failing to resolve in a handful of countries, for no particular reason that I can discern. (also confirmed via What's My DNS which does some worldwide pings in a similar fashion, so it's confirmed as an issue by two different sources.) Why would this be happening, if I haven't touched the DNS for serverfault.com ? our registrar is (gag) GoDaddy, and I use default DNS settings for the most part without incident. Am I doing something wrong? Have the gods of DNS forsaken me? is there anything I can do to fix this? Any way to goose the DNS along, or force the DNS to propagate correctly worldwide? Update: as of Monday at 3:30 am PST, everything looks correct.. JustPing reports site is reachable from all locations. Thank you for the many very informative responses, I learned a lot and will refer to this Q the next time this happens..
This is not directly a DNS problem, it's a network routing problem between some parts of the internet and the DNS servers for serverfault.com. Since the nameservers can't be reached the domain stops resolving. As far as I can tell the routing problem is on the (Global Crossing?) router with IP address 204.245.39.50 . As shown by @radius , packets to ns52 (as used by stackoverflow.com ) pass from here to 208.109.115.121 and from there work correctly. However packets to ns22 go instead to 208.109.115.201 . Since those two addresses are both in the same /24 and the corresponding BGP announcement is also for a /24 this shouldn't happen . I've done traceroutes via my network which ultimately uses MFN Above.net instead of Global Crossing to get to GoDaddy and there's no sign of any routing trickery below the /24 level - both name servers have identical traceroutes from here. The only times I've ever seen something like this it was broken Cisco Express Forwarding (CEF). This is a hardware level cache used to accelerate packet routing. Unfortunately just occasionally it gets out of sync with the real routing table, and tries to forward packets via the wrong interface. CEF entries can go down to the /32 level even if the underlying routing table entry is for a /24 . It's tricky to find these sorts of problems, but once identified they're normally easy to fix. I've e-mailed GC and also tried to speak to them, but they won't create a ticket for non-customers. If any of you are a customer of GC, please try and report this... UPDATE at 10:38 UTC As Jeff has noted the problem has now cleared. Traceroutes to both servers mentioned above now go via the 208.109.115.121 next hop.
{ "source": [ "https://serverfault.com/questions/42678", "https://serverfault.com", "https://serverfault.com/users/1/" ] }
42,789
I have a Windows 2008 server with 8GB of RAM running IIS7 and MySQL. I've been tracking the memory, cpu and disk usage on the server and I found out that MySQL is using only 250MB of RAM, keeping the disks very busy, even though I have plenty of free ram laying around. In SQL Server I can easily set the amount of memory I want it to use, I am looking for the same setting in MySQL. How can I configure MySQL to use more memory and reduce the cpu and disk usage?
table_cache is the most useful configuration directive to change. Each time MySQL accesses a table, it loads the table into the cache. If you've got a high number of tables, it's faster to have them cached. Take a look at your server variables by running: show status; and have a look for the variable open_tables. If this is the same as your table_cache value, and opened_tables keeps going up, then you need to increase the table_cache value in your configuration file. You'll find a balance by experimenting with these variables during peak times. You want to configure it so that at peak times, there is a low number of opened_tables even after the server has been up for a long time. key_buffer_size is also a good variable to experiment with. This variable affects the index buffer size, and making this variable bigger increases MySQL's index handling speed. You can look at the counters with the show status; command again, and compare key_read_requests to key_reads. Ideally, you want the ratio between these two to be as low as possible, and you can do this by increasing the size of key_buffer_size. If you set this variable higher, you will have fewer writes and reads directly to and from the disk, which was your main concern.
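For reference, both settings live in the [mysqld] section of my.cnf (my.ini on Windows). The values below are only illustrative starting points for a box with that much free RAM - tune them against the counters described above rather than copying them blindly:

    [mysqld]
    # MyISAM index buffer - watch the key_reads / key_read_requests ratio
    key_buffer_size = 1024M
    # raise until open_tables stops hitting the ceiling at peak times
    table_cache = 512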
{ "source": [ "https://serverfault.com/questions/42789", "https://serverfault.com", "https://serverfault.com/users/2221/" ] }
42,799
How do I release and renew a DHCP lease from the command line? The environment is Debian, although the answer will apply to all distributions.
You can also use this command: dhclient -r interface Where interface is the device you want to get a new address for. dhclient -r eth0 The -r flag forces dhclient to first release any leases you have, you can then use this command to request a new lease: dhclient eth0 From man dhclient : -r Tell dhclient to release the current lease it has from the server. This is not required by the DHCP protocol, but some ISPs require their clients to notify the server if they wish to release an assigned IP address.
{ "source": [ "https://serverfault.com/questions/42799", "https://serverfault.com", "https://serverfault.com/users/9485/" ] }
43,014
I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync , however I wonder if there's much point, and if I should rather use cp . I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this). As well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk, so I don't have to worry about ssh or network. The reason I'd be tempted away from rsync, is because rsync might do more than I need. rsync checksums files. I don't need that, and am concerned that it might take longer than cp. So what do you reckon, rsync or cp ?
I would use rsync as it means that if it is interrupted for any reason, then you can restart it easily with very little cost. And being rsync, it can even restart part way through a large file. As others mention, it can exclude files easily. The simplest way to preserve most things is to use the -a flag – ‘archive.’ So: rsync -a source dest Although UID/GID and symlinks are preserved by -a (see -lpgo ), your question implies you might want a full copy of the filesystem information; and -a doesn't include hard-links, extended attributes, or ACLs (on Linux) or the above nor resource forks (on OS X.) Thus, for a robust copy of a filesystem, you'll need to include those flags: rsync -aHAX source dest # Linux rsync -aHE source dest # OS X The default cp will start again, though the -u flag will "copy only when the SOURCE file is newer than the destination file or when the destination file is missing" . And the -a (archive) flag will be recursive, not recopy files if you have to restart and preserve permissions. So: cp -au source dest
{ "source": [ "https://serverfault.com/questions/43014", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
43,360
I'm running Cygwin with an SSH deamon on a Windows Server 2008 machine. I was looking at the Event Viewer and noticed as much as 5 to 6 failed login attempts per second (brute force) for the last week or so, from different IPs. How can I autoblock these IPs rather than blocking them one by one manually? Thanks, Ahmad
I wrote a program to block IP addresses like you're asking for a couple of years ago, but did it for a Customer as a work-for-hire. Since I ended up with some "spare" time this evening I opted to re-implement the whole thing from the ground up, write some useful documentation, and generally make it a presentable program. Since I've heard from multiple people that this would be a handy thing to have it seems like it's probably worth the time. Hopefully you, and other members of the community, can get some use out of it. Windows sshd_block sshd_block is a VBScript program that acts as a WMI event sink to receive Windows Event Log entries logged by sshd. It parses these log entries and acts upon them as follows: If the IP address attempts to logon with a username flagged as "ban immediately" the IP address is banned immediately. If the IP address attempts to logon with more frequently than is allowed in a given time period the IP address is banned. The "ban immediately" usernames and thresholds associated with repeated logon attempts are configurable in the "Configuration" section of the script. Default settings are as follows: Ban Immediately Usernames - administrator, root, guest Logon attempts allowed - 5 in 120 seconds (2 minutes) Duration of ban - 300 seconds (5 minutes) Once a second any IP addresses that have been banned for the ban duration are unbanned (by having the black-hole route removed from the routing table). You can download the software here and can browse the archive here . Edit: As of 2010-01-20 I've updated the code to support using the "Advanced Firewall" on Windows Vista / 2008 / 7 / 2008 R2 to perform black-holding of traffic via creating firewall rules (which is much more in line with the behavior of "fail2ban"). I also added some additional matching strings to catch OpenSSH versions that "invalid user" as opposed to "illegal user".
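For what it's worth, the banning step on Server 2008's Advanced Firewall boils down to rules like the following (the IP address and rule name are placeholders, not necessarily what the script itself generates):

    rem block all inbound traffic from an offending address
    netsh advfirewall firewall add rule name="ssh-ban 203.0.113.5" dir=in action=block remoteip=203.0.113.5
    rem lift the ban once the ban duration has expired
    netsh advfirewall firewall delete rule name="ssh-ban 203.0.113.5"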
{ "source": [ "https://serverfault.com/questions/43360", "https://serverfault.com", "https://serverfault.com/users/655/" ] }
43,362
I get this: Macintosh:8.4 TAmoyal$ su Password: su: Sorry Macintosh:8.4 TAmoyal$ I typed in the password I use for sudo. Why won't this work? Thanks!
No need to make up a root password. Try sudo su and type your user password.
{ "source": [ "https://serverfault.com/questions/43362", "https://serverfault.com", "https://serverfault.com/users/3567/" ] }
43,383
I have a rather old server that has 4GB of RAM and it is pretty much serving the same files all day, but it is doing so from the hard drive while 3GBs of RAM are "free". Anyone who has ever tried running a ram-drive can witness that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1GB/4GB so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file reading capabilities by use of RAM? More specifically, I am not looking for a 'hack' here. I want file system calls to serve the files from RAM without needing to create a ram-drive and copy the files there manually. Or at least a script that does this for me. Possible applications here are: web servers with static files that get read a lot, application servers with large libraries, desktop computers with too much RAM. Any ideas? Edit: Found this very informative: The Linux Page Cache and pdflush As Zan pointed out, the memory isn't actually free. What I mean is that it's not being used by applications and I want to control what should be cached in memory.
vmtouch seems like a good tool for the job. Highlights: query how much of a directory is cached query how much of a file is cached (also which pages, graphical representation) load file into cache remove file from cache lock files in cache run as daemon vmtouch manual EDIT: Usage as asked in the question is listed in example 5 on vmtouch Hompage Example 5 Daemonise and lock all files in a directory into physical memory: vmtouch -dl /var/www/htdocs/critical/ EDIT2: As noted in the comments, there is now a git repository available.
{ "source": [ "https://serverfault.com/questions/43383", "https://serverfault.com", "https://serverfault.com/users/1876/" ] }
43,510
I have the cron job as shown below, and wanted it to run every 2 hours, but it keeps running every 2 minutes. Can someone tell me where I'm going wrong? * */2 * * * /path-to-script
An asterisk in the minute (first) field tells it to run every minute, regardless of the other fields. You need to specify an exact minute to run within the hour. Be that on the hour (0), half past (30), etc.. 0 */2 * * * /path-to-script
{ "source": [ "https://serverfault.com/questions/43510", "https://serverfault.com", "https://serverfault.com/users/13756/" ] }
43,692
Roughly how much of a performance hit will https take compared to http for the same page? Suppose I can handle 1000 requests/s for abc.php, how much will it decrease by when accessed through https? I know this might be dependent on hardware, config, OS etc etc but I am just looking for a general rule of thumb/estimate.
For a quick&dirty test (i.e. no optimization whatsoever!) I enabled the simple Ubuntu apache2 default website (which just says "It works!") with both http and https (self-signed certificate) on a local Ubuntu 9.04 VM and ran the apache benchmark " ab " with 10,000 requests (no concurrency). Client and server were on the same machine/VM: Results for http (" ab -n 10000 http://ubuntu904/index.html ") Time taken for tests: 2.664 seconds Requests per second: 3753.69 (#/sec) Time per request: 0.266ms Results for https (" ab -n 10000 https://ubuntu904/index.html "): Time taken for tests: 107.673 seconds Requests per second: 92.87 (#/sec) Time per request: 10.767ms If you take a closer look (e.g. with tcpdump or wireshark) at the tcp/ip communication of a single request you'll see that the http case requires 10 packets between client and server whereas https requires 16: Latency is much higher with https. (More about the importance of latency here ) Adding keep-alive ( ab option -k ) to the test improves the situation because now all requests share the same connection i.e. the SSL overhead is lower - but https is still measurable slower: Results for http with keep-alive (" ab -k -n 10000 http://ubuntu904/index.html ") Time taken for tests: 1.200 seconds Requests per second: 8334.86 (#/sec) Time per request: 0.120ms Results for https with keep-alive (" ab -k -n 10000 https://ubuntu904/index.html "): Time taken for tests: 2.711 seconds Requests per second: 3688.12 (#/sec) Time per request: 0.271ms Conclusion : In this simple testcase https is much slower than http. It's a good idea to enable https support and benchmark your website to see if you want to pay for the https overhead. Use wireshark to get an impression of the SSL overhead.
{ "source": [ "https://serverfault.com/questions/43692", "https://serverfault.com", "https://serverfault.com/users/11224/" ] }
43,940
There are numerous scripts that I have written for my server. Some of them are in my ~/scripts and some of them are in application directories. I am just wondering: is there a directory that you would normally use to keep your shell scripts?
Personal ones for my account, ~/bin . System-wide ones go in /usr/local/bin or /usr/local/sbin as appropriate (scripts which should only be run as root go in sbin , while scripts intended to help ordinary users go in bin ), rolled out via configuration management to ensure that all machines that need them have them (and the latest versions, too).
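If ~/bin isn't already on your PATH (some distributions only add it when the directory exists at login), a couple of lines like these - adjust the shell rc file to taste - sort it out:

    mkdir -p ~/bin
    # in ~/.profile or ~/.bashrc, so new shells pick it up
    export PATH="$HOME/bin:$PATH"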
{ "source": [ "https://serverfault.com/questions/43940", "https://serverfault.com", "https://serverfault.com/users/58032/" ] }
44,257
I'm trying to deploy an MSI via the Group Policy in Active Directory. But these are the errors I'm getting in the System event log after logging in: The assignment of application XStandard from policy install failed. The error was : %%1274 The removal of the assignment of application XStandard from policy install failed. The error was : %%2 Failed to apply changes to software installation settings. The installation of software deployed through Group Policy for this user has been delayed until the next logon because the changes must be applied before the user logon. The error was : %%1274 The Group Policy Client Side Extension Software Installation was unable to apply one or more settings because the changes must be processed before system startup or user logon. The system will wait for Group Policy processing to finish completely before the next startup or logon for this user, and this may result in slow startup and boot performance. When I reboot and log in again I simply get the same messages about needing to perform the update before the next logon. I'm on a Windows Vista 32-bit laptop. I'm rather new to deploying via group policy so what other information would be helpful in determining the issue? I tried a different MSI with the same results. I'm able to install the MSI using the command line and msiexec when logged into the computer, so I know the MSI is working ok at least.
You're seeing the dreaded scourge of asynchronous policy processing. It's not a "feature" (and was default-off in Windows 2000 but default-on in Windows XP and above) and causes exactly what you're seeing -- non-deterministic behaviour with processing some types of GPO settings. In a GPO that applies to that computer, add the following setting: Computer Settings > Administrative Templates > System > Logon > Always wait for the network at computer startup and logon - Enabled. After you set that (and allow the GPO to replicate if you're in a multi-DC environment), do a "gpupdate /force /boot" on the subject PC. It will reboot and you should see the software installation occur. The "Always wait for the network at computer startup and logon" setting slightly slows down the startup and logon because all GPO extensions are allowed to process, but the upside is that all GPO extensions are allowed to process.
{ "source": [ "https://serverfault.com/questions/44257", "https://serverfault.com", "https://serverfault.com/users/4908/" ] }
44,400
What's a good way of running a shell script as a different user. I'm using Debian etch, and I know which user I want to impersonate. If I was doing it manually, I would do: su postgres ./backup_db.sh /tmp/test exit Since I want to automate the process, I need a way to run backup_db.sh as postgres (inheriting the environment, etc) Thanks!
To run your script as another user as one command, run: /bin/su -c "/path/to/backup_db.sh /tmp/test" - postgres Breaking it down: /bin/su : switch user -c "/path/to..." : command to run - : option to su, make it a login session (source profile for the user) postgres : user to become I recommend always using full paths in scripts like this - you can't always guarantee that you'll be in the right directory when you su (maybe someone changed the homedir on you, who knows). I also always use the full path to su (/bin/su) because I'm paranoid. It's possible someone can edit your path and cause you to use a compromised version of su.
{ "source": [ "https://serverfault.com/questions/44400", "https://serverfault.com", "https://serverfault.com/users/9540/" ] }
44,597
Can anyone share their experiences (for example, this was great! This failed miserably!) with using the Hyper-V , ESXi , and XenServer virtualization platforms? Cost? Management? features? Handling load and backups and recovery? And also minimum server requirements? I thought Xen was a free virtualization platform for Linux. Is there a Xen and a separate XenServer platform? Opinions and observations would be appreciated for a test rollout for our organization.
I recently took all three for a spin to run my home network, and the short answer is that it depends on your particular needs. Unless your needs are very specialized (database/Exchange/etc), on modern hardware with virtualization support you will run the guests with negligible performance differences. Given that, I'd suggest looking at features & price. VMware: As you're probably aware VMware is the long-standing king of virtualization. It has the biggest list of compatible guest OSs, and has one significant unique feature - memory overcommit (you can allocate more virtual memory than there is physical memory). If your goal is to consolidate a bunch of small, underutilized servers VMware will likely give you more VMs/host than anything else. The caveat is that if you overcommit and the VMs need more resources, performance tanks. ESX/ESXi also has the smallest list of compatible hardware. If you are looking at a white-box system, check here first. If you have compatible hardware it's fairly easy to install and use. The free version (ESXi) comes with hardly any features, which is fine if you're looking for a few standalone hosts, and the non-free versions are priced out of this world. On a personal note, VMware leaves a nasty taste in my mouth - in my mind they are one of the many companies that resist change & innovation when the very foundation of their business is challenged by the competition. Recently they asked a partner company to remove their product's support for the free version. Microsoft: Hyper-V is a very intriguing option, even more so with the R2 version. I tested Hyper-V Server, which is the free standalone product. I'm a Microsoft fan, and I really wanted to like Hyper-V, primarily because it can run on practically any hardware that has Windows drivers. If you are running in a domain environment and primarily use Windows, Hyper-V should be at the top of your list. When you have the option to buy/use SCVMM it appears to be an even better value. Unlike VMware, the free version comes with a good feature set and is even better in R2, where clustering & live migration are available! Hyper-V runs Windows guests very well, has a small, but growing, list of supported Linux guests, and even unenlightened Linux guests seem to run reasonably well. The story is different if you aren't in a domain environment, as managing the standalone Hyper-V Server is a major pain. Despite all of the goods Microsoft delivered in a v1 product, the management was driving me crazy. Citrix: The end result of my testing was to go with XenServer 5.5. It has IMHO the best set of features and capabilities of the three free offerings. Like VMware it is installed and managed like an appliance rather than an operating system (like Hyper-V). It also has a much larger list of compatible hardware (and I suspect the ability to add drivers if needed). It offers way more features than VMware's free offering, and if you were to upgrade the free version to the paid version, it would cost much, much less. Windows guests are well supported, but Linux guests are, well, not what you'd expect from a Linux-based virtualization platform. Its list of supported Linux guests is quite small compared to VMware and non-supported Linux guests don't seem to run well at all. Ubuntu is noticeably lacking from the list. Overall for home use I felt that it had the best bang for the buck.
{ "source": [ "https://serverfault.com/questions/44597", "https://serverfault.com", "https://serverfault.com/users/13647/" ] }
44,600
Open source Linux network analyzers - which ones are there, and what features do they offer?
What exactly do you need? wireshark - network sniffer/analyzer iftop - bandwidth usage darkstat - traffic analyzer nmap - network port scanner nessus - vulnerability scanner metasploit - penetration testing
{ "source": [ "https://serverfault.com/questions/44600", "https://serverfault.com", "https://serverfault.com/users/10835/" ] }
44,618
I know it's valid to have a DNS A record that's a wildcard (e.g. *.mysite.com). Is it possible/valid/advised to have a wildcard CNAME record?
It is possible to do this. At one point it was up in the air a bit until RFC 4592 clarified that it should be supported. Just because it is possible doesn't mean it is supported by all DNS providers. For example, GoDaddy won't let you set up a wildcard in a CNAME record. In terms of whether it is advisable or not to do this, it depends on your usage. Usually CNAMEs are used for convenience when you are pointing to an "outside" domain name that you don't control the DNS on. For example, let's say you set up a CMS system that allows you to have *.mycms.com as the site name (it uses host headers). You want customers to be able to easily set up *.cms.customer.com, without worrying that you might change your IP address at some point. In that case, you could advise them to set up a wildcard CNAME, *.cms.customer.com, pointing to www.mycms.com. Because wildcard CNAMEs aren't supported by all providers (such as GoDaddy), I wouldn't advise using it in a case where you suggested it for various customers (where you don't know their provider's capabilities).
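In BIND zone-file terms, the customer-side record from that example would look something like this (the names are placeholders); just remember a CNAME can't coexist with other records at the same name, so it can't go at the zone apex:

    ; in the customer.com zone
    *.cms.customer.com.    IN    CNAME    www.mycms.com.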
{ "source": [ "https://serverfault.com/questions/44618", "https://serverfault.com", "https://serverfault.com/users/13406/" ] }
44,628
I have installed Windows 2008 Terminal Services and I want to use the new RemoteApp feature, I have setup a remoteapp but I don't want to use the WebAccess to get to it and I don't want to create an .RDP file. The reason for this is my SSL VPN can connect to Terminal Services but I have to use a name and port for the connection. Any help is much appreciated. Thanks
{ "source": [ "https://serverfault.com/questions/44628", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
44,707
I want to restrict all users on a server to only be able to use SFTP while the members of an admin group should have full SSH access. I found that it is possible to restrict the members of a group by using Match Group and ForceCommand . But I found no logical negation. So I tried to construct it in reverse: # SFTP only, full access only for admin group X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp Match Group admin X11Forwarding yes AllowTcpForwarding yes ForceCommand /usr/local/sbin/ssh-allowcmd.sh and built a script ssh-allowcmd.sh that executes either the given command or /bin/bash for interactive access. Is there a better solution?
If you're using OpenSSH 5.1 or later then it supports Match Group negation . Assuming the defaults are OK for the admin group, then just change everyone else: Match Group *,!admin X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp There's really no reason to rely on third-party shells to do this kind of job with recent OpenSSH releases.
{ "source": [ "https://serverfault.com/questions/44707", "https://serverfault.com", "https://serverfault.com/users/9195/" ] }
44,862
I am hoping that somewhere in Active Directory the "last logged on from [computer]" is written/stored, or there is a log I can parse out? The purpose of wanting to know the last PC logged on from is for offering remote support over the network - our users move around pretty infrequently, but I'd like to know that whatever I'm consulting was updating that morning (when they logged in, presumably) at minimum. I'm also considering login scripts that write the user and computer names to a known location I can reference, but some of our users don't like to logout for 15 days at a time. If there is an elegant solution that uses login scripts, definitely mention it - but if it happens to work for merely unlocking the station, that would be even better!
As part of our logon script I have that information (and more) logged into a hidden share on a server, with one log file per user. A logoff script adds the time the user logged off to the same log file. Easy to set up, no cost, and the information is there in an easy-to-read format.
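A minimal version of that sort of logon script, as a plain batch file assigned through Group Policy (the share path is just an example), can be as simple as:

    rem logon.bat - append who/where/when to a per-user log on a hidden share
    echo %DATE% %TIME% LOGON %USERNAME% %COMPUTERNAME% >> \\server\logonlog$\%USERNAME%.log
    rem logoff.bat - same line with LOGOFF in it, assigned as the logoff script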
{ "source": [ "https://serverfault.com/questions/44862", "https://serverfault.com", "https://serverfault.com/users/7320/" ] }
44,870
This is probably a noob question, but how can I determine if the public SSH key someone gives me has a passphrase or not? We have a situation where I am not generating the SSH keys for users, but I want to make sure every SSH key I put on a server has a passphrase, but I get the feeling the passphrase is only part of the private key. Thanks!
This is not something you can determine from the public half of the key. Even if you could determine it, what's to stop the user from subsequently removing it? When you remove the passphrase from the private side of the key, the public side doesn't change.
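You can convince yourself of that easily: strip the passphrase from a test key and regenerate the public half - it comes out identical either way:

    # remove the passphrase from an existing private key (prompts for the old one)
    ssh-keygen -p -N "" -f ~/.ssh/id_rsa
    # print the public key derived from the private key; compare the output before and after
    ssh-keygen -y -f ~/.ssh/id_rsa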
{ "source": [ "https://serverfault.com/questions/44870", "https://serverfault.com", "https://serverfault.com/users/7221/" ] }
45,042
Based on the descriptions for both the Prefork and Worker MPM, it seems the prefork type is somewhat outdated, but I can't really find a proper comparison of the two types. What i'd like to know: What are the differences between the two versions? What are the (dis-)advantages of each server type? Are there any basic guidelines on which type to choose based on the conditions? Are there any big performance differences between the two?
As the docs say, you should use the prefork MPM if you need to avoid threading for compatibility with non-thread-safe libraries. Typically, any non-trivial Apache module ( mod_php -- or, more precisely, the myriad of extensions and libraries that it links to -- being the canonical example) has some sort of non-thread-safe library (or has non-thread-safe code in it), so unless you're using a pretty stock Apache install, I'd go for the prefork MPM.
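If you're not sure which MPM a given Apache build is using, the binary will tell you (the binary may be called httpd, apache2 or similar depending on the distribution):

    # prints a line like "Server MPM: Prefork"
    httpd -V | grep -i mpm
    # or list compiled-in modules and look for prefork.c or worker.c
    httpd -l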
{ "source": [ "https://serverfault.com/questions/45042", "https://serverfault.com", "https://serverfault.com/users/22/" ] }
45,083
The more I use rsync the more I realise that it's a swiss army knife of file transfer. There are so many options. I recently found out that you can go --remove-source-files and it'll delete a file from the source when it's been copied, which makes it a bit more of a move rather than a copy programme. :) What are your favorite little rsync tips and tricks?
Try to use rsync version 3 if you have to sync many files! V3 builds its file list incrementally and is much faster and uses less memory than version 2. Depending on your platform this can make quite a difference. On OS X, version 2.6.3 would take more than one hour or crash trying to build an index of 5 million files, while the version 3.0.2 I compiled started copying right away.
{ "source": [ "https://serverfault.com/questions/45083", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
45,237
I have a file that was deleted, but is still held open by a program. I found the inode number using lsof. How can I create a hard link back to that inode?
You can't create a link to it, but you can get it back. Let's do an experiment: $ echo blurfl >myfile.txt $ tail -f myfile.txt & $ rm myfile.txt myfile.txt is now gone, but the inode is kept alive by the tail command. To get your file back, first find the PID of the process keeping the inode: $ ps auxw | grep tail sunny 409 0.0 0.0 8532 824 pts/5 S 18:07 0:00 tail -f myfile.txt The PID is 409. chdir to /proc/409/fd/ and list the contents: dr-x------ 2 sunny sunny 0 2009-07-24 18:07:18 . dr-xr-xr-x 7 sunny sunny 0 2009-07-24 18:07:17 .. lrwx------ 1 sunny sunny 64 2009-07-24 18:07:33 0 -> /dev/pts/5 lrwx------ 1 sunny sunny 64 2009-07-24 18:07:33 1 -> /dev/pts/5 lrwx------ 1 sunny sunny 64 2009-07-24 18:07:18 2 -> /dev/pts/5 lr-x------ 1 sunny sunny 64 2009-07-24 18:07:33 3 -> /home/sunny/tmp/myfile.txt (deleted) The /proc/[PID]/fd/ directories contain symlinks to file descriptors of all files the process uses. In this case the symlink "3" points to the deleted file. So, to restore the file, copy the contents to a new file: $ cat 3 >/home/mydir/saved_file.txt
{ "source": [ "https://serverfault.com/questions/45237", "https://serverfault.com", "https://serverfault.com/users/3139/" ] }
45,439
I was wondering if anybody knew what the maximum string length of a wireless network's SSID is, or where I could go to look for that sort of information (a spec of some sort).
According to the documentation of the standard, the length of an SSID should be a maximum of 32 characters (32 octets, normally ASCII letters and digits, though the standard itself doesn't exclude other values). Some access point/router firmware versions use null-terminated strings and accept only 31 characters. The defining paragraph is in the IEEE 802.11 standard document (download link: PDF): Telecommunications and information exchange between systems — Local and metropolitan area networks — Specific requirements — Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications.
{ "source": [ "https://serverfault.com/questions/45439", "https://serverfault.com", "https://serverfault.com/users/1980/" ] }
45,470
If I'm going to make a DNS change to an A record for my domain (changing from one IP to another), how long can I expect until people are moved over to the new info? Is it simply <= the TTL? I know it used to take a while, but in 2009 how long should I expect?
Theoretically everyone should see the updated A record somewhere between instantly and the relevant TTL value. Most registrars set the TTL to 24 hours IIRC (some instead use a lower value like 4 hours), so for 24 hours some people will see the old address and some will see the new one, and by 24 hours after the change everyone should have the new address. If you have access to change the TTL values (i.e. you run your own DNS servers) then you can reduce the TTLs down to something small a day or so before you make your change so the propagation period is much lower. I say "theoretically" above as there will always be some bugs, glitches, and badly configured caches out there that will mean some users will not see the change for longer. This is especially true if you use very small TTLs as there are still some ISPs out there with DNS caches that ignore TTLs below a given value. Another thing to look out for is delays between your registrar's DNS control panel and their DNS servers. For instance I noticed that changes made to domains managed by 123-reg.co.uk can take up to an hour to appear on their DNS servers, which is an extra hour on top of the TTL value that you'll have to account for.
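You can check what TTL is actually being handed out (and watch it count down on a caching resolver) with dig - the names below are placeholders:

    # the TTL is the second column of the answer section
    dig +noall +answer www.example.com A
    # query an authoritative server directly to see the full, un-decremented TTL
    dig +noall +answer www.example.com A @ns1.example.com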
{ "source": [ "https://serverfault.com/questions/45470", "https://serverfault.com", "https://serverfault.com/users/2448/" ] }
45,479
I am running xampp on my PC for a dev server but I would like to make it go on the internet, so I can show certain people work I am doing. Can someone tell me how to do this?
{ "source": [ "https://serverfault.com/questions/45479", "https://serverfault.com", "https://serverfault.com/users/13943/" ] }
45,653
from: http://seclists.org/fulldisclosure/2009/Jul/0388.html If I understand the posts at http://news.ycombinator.com/item?id=723798 correctly, the Matasano guys left sshd internet-accessible - any proposed solutions for this (from a programming point of view)?
How did Matasano get hacked? That's impossible to answer from the information in the post to Full Disclosure. However it's always interesting to speculate, as they do give a little info away - # ./th3_f1n4l_s0lut10n www.matasano.com [-] Connecting to 69.61.87.163:22.. [/] Looking for valid non-root user.. adam ******** R3D4CT3D h4h4h4h4 ******** They run their binary " th3_f1n41_s01ut10n " against Matasano's server, which connects to the ssh port. It finds a valid non-root user through some unknown means, and the rest of the output is redacted. # ./th3_f1n4l_s0lut10n -u adam -t 3 www.matasano.com [*] Connectback listener on 209.112.118.10:3338.. [!] SSH2_MSG_SERVICE_ACCEPT [OpenSSH_4.5p1, OpenSSL 0.9.8g 19 Oct 2007] The binary is run again using the found username, which logs in and connects back to their server on port 3338 (hope that's not registered in their name...). adam_at_www:~$ uname -a Linux www 2.6.20.1-1-686 #1 SMP Sun Mar 4 12:44:55 UTC 2007 i686 GNU/Linux **** h4h4h4hh4h4h4 l3tz us3 m0r3 !0D4Y! H4H4H4H4H4H4H4 **** They could be implying they have a 0-day against this kernel, which is quite old when you consider this company's stock-in-trade. adam_at_www:~$ cd /tmp *********** B0R1NG *********** root_at_www:~# cat /etc/shadow Whoops - all of a sudden the user is now root. They have a local privilege escalation exploit in /tmp that might be the 0-day they referred to. So there are at least two exploits going on here - the OpenSSH exploit to get a valid non-root user on the system, and login as that user, and then the local privilege escalation. Considering that OpenSSH has a few known security issues since version 4.5: From OpenSSH's security page : OpenSSH prior to version 5.2 is vulnerable to the protocol weakness described in CPNI-957037 "Plaintext Recovery Attack Against SSH". However, based on the limited information available it appears that this described attack is infeasible in most circumstances. For more information please refer to the cbc.adv advisory and the OpenSSH 5.2 release notes. OpenSSH 4.9 and newer do not execute ~/.ssh/rc for sessions whose command has been overridden with a sshd_config(5) ForceCommand directive. This was a documented, but unsafe behaviour (described in OpenSSH 4.9 release notes). OpenSSH 4.7 and newer do not fall back to creating trusted X11 authentication cookies when untrusted cookie generation fails (e.g. due to deliberate resource exhaustion), as described in the OpenSSH 4.7 release notes. I guess having this older Linux kernel and older SSH daemon did for them. Also, it was running on their www server, which is available to the Internet, which is quite a confident thing to do in my opinion. The people who broke in obviously wanted to embarrass them. How to prevent these attacks? This could have been prevented by proactive administration - making sure any internet-facing services are patched, and limiting the number of people who can connect rather than allowing people to connect from anywhere. This episode compounds the lesson that secure system administration is hard, and requires dedication from the business to provide time for IT to keep things patched - in reality, not something that happens easily, at least in smaller companies. Using a belt-and-braces approach is best - using public-key authentication, whitelisting on the ssh daemon, two-factor authentication, IP restrictions, and/or putting everything behind the VPN are possible routes to lock it down. I think I know what I'll be doing at work tomorrow. :)
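To make the "belt-and-braces" point above concrete, a few sshd_config lines cover most of that list - the user names are placeholders, and none of this replaces keeping the daemon itself patched:

    # /etc/ssh/sshd_config
    PermitRootLogin no
    PasswordAuthentication no    # public-key authentication only
    AllowUsers alice bob         # whitelist the accounts allowed to log in at all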
{ "source": [ "https://serverfault.com/questions/45653", "https://serverfault.com", "https://serverfault.com/users/14898/" ] }
46,279
According to the Amazon EC2 FAQ , when an instance is terminated the data is gone. What steps can I take to preserve data in the event my instance is rebooted? I've been looking into EBS and S3 - would either of these be useful to store an active database? How often are instances rebooted anyways?
Like others have said, EBS--Elastic Block Storage. I am using it myself now that it is released to the general public. It is better than S3 on multiple points: EBS are fast . Faster than even the local mounts, according to Amazon. EBS mounts as proper devices . Unlike S3, which you'll need custom S3 oject access logic in your code, or middleware (JungleDisk, ElasticDisk, et al) which present their own problems and costs EBS are easy to back up . Amazon give one the ability to take snap shots, which are saved on S3 EBS are portable between instances --volumes can be unmounted from one instance, and attached to another instance EBS devices can even be RAID'ed together for improved reliability My experience with EBS so far has been the most positive thing about AWS I've dealt with to date. Update: While my experience with EBS has been positive, others have had issues. Very specifically EBS do not implement fsync() correctly. Ted Dziuba has some interesting words about this in his blog post Amazon — The Purpose of Pain : Myth 2: Architecture Will Save You from Cloud Failures This gets even more entertaining with Amazon Elastic Block Store, which, as the Reddit administrators have found, will happily accept calls to fsync(), and lie to your face, saying that the data has been written to disk, when it may not have been.
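For reference, creating and attaching an EBS volume with the EC2 API tools goes roughly like this - the size, zone, IDs and mount point are all placeholders:

    # create a 10 GB volume in the same availability zone as the instance
    ec2-create-volume -s 10 -z us-east-1a
    # attach it to the instance as a block device
    ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdf
    # then, on the instance, format and mount it like any other disk
    mkfs.ext3 /dev/sdf
    mount /dev/sdf /data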
{ "source": [ "https://serverfault.com/questions/46279", "https://serverfault.com", "https://serverfault.com/users/548/" ] }
46,381
While I install software from packages (MacPorts / apt-get) where-ever possible, I often find myself needing to compile packages from source. ./configure && make && sudo make install is usually enough, but sometimes it doesn't work - and when it doesn't, I frequently get stuck. This almost always relates to other library dependencies in some way. I'd like to learn the following: How do I figure out what arguments to pass to ./configure ? How shared libraries work under OS X / Linux - where they live on the filesystem, how ./configure && make finds them, what actually happens when they are linked against What are the actual differences between a shared and a statically linked library? Why can't I just statically link everything (RAM and disk space are cheap these days) and hence avoid weird library version conflicts? How can I tell what libraries I have installed, and what versions? How can I install more than one version of a library without breaking my normal system? If I am installing stuff from source on a system that is otherwise managed using packages, what's the cleanest way of doing so? Assuming I manage to compile something fiddly from source, how can I then package that up so other people don't have to jump through the same hoops? Particularly on OS X.... What are the command line tools I need to master to get good at this stuff? Stuff like otool, pkg-config etc. I'm willing to invest quite a bit of time and effort here - I don't necessarily want direct answers to the above questions, I'd much rather get recommendations on books / tutorials / FAQs that I can read which will give me the knowledge I need to understand what's actually going on and hence figure out problems on my own.
I apologise for directly answering everything, but I don't know any useful tutorials, FAQs, etc. Basically what follows is 8 years of making desktop apps (that I help distribute), frustration and googling: 1. How do I figure out what arguments to pass to ./configure? Practice really. Autotools is easy enough as it is consistent. But there's plenty of stuff out there using cmake, or custom build scripts. Generally, you shouldn't have to pass anything to configure, it should figure out if your system can build foo-tool or not. Configure and GNU tools all look in /, /usr and /usr/local for dependencies. If you install anything anywhere else (which makes things painful if the dependency was installed by MacPorts or Fink), you will have to pass a flag to configure or modify the shell's environment to help GNU tools find these dependencies. 2. How shared libraries work under OS X / Linux - where they live on the filesystem, how ./configure && make finds them, what actually happens when they are linked against On Linux they need to be installed to a path that the dynamic linker can find, this is defined by the LD_LIBRARY_PATH environment variable and the contents of /etc/ld.conf. On Mac it is the same for most open source software almost always (unless it is an Xcode Project). Except the env variable is DYLD_LIBRARY_PATH instead. There is a default path that the linker searches for libraries. It is /lib:/usr/lib:/usr/local/lib You can supplement this by using the CPATH variable, or CFLAGS or any number of other environment variables really (conveniently complicated). I suggest CFLAGS like so: export CFLAGS="$CFLAGS -L/new/path" The -L parameter adds to the link path. Modern stuff uses the pkg-config tool. Modern stuff you install also installs a .pc file that describes the library and where it is and how to link to it. This can make life easier. But it doesn't come with OS X 10.5 so you'll have to install that too. Also a lot of basic deps don't support it. The act of linking is just "resolve this function at runtime", really it's a big string table. 3. What are the actual differences between a shared and a statically linked library? Why can't I just statically link everything (RAM and disk space are cheap these days) and hence avoid weird library version conflicts? When you link to a static library file the code becomes part of your application. It would be like if there was one giant .c file for that library and you compiled it into your application. Dynamic libraries have the same code, but when the app is run, the code is loaded into the app at runtime (simplified explanation). You can statically link to everything, however, sadly hardly any build systems make this easy. You'd have to edit build system files manually (eg. Makefile.am, or CMakeLists.txt). However this is probably worth learning if you regularly install things that require different versions of libraries and you are finding installing dependencies in parallel difficult. The trick is to change the link line from -lfoo to -l/path/to/static/foo.a You can probably find and replace. Afterwards check the tool doesn't link to the .so or dylib using ldd foo or otool -L foo Another problem is not all libraries compile to static libraries. Many do. But then MacPorts or Debian may have decided not to ship it. 4. How can I tell what libraries I have installed, and what versions? If you have pkg-config files for those libraries it is easy: pkg-config --list-all Otherwise you often can't easily. The dylib may have a soname (ie. 
foo.0.1.dylib, the soname is 0.1) that is the same as the library's version. However this is not required. The soname is a binary compatibility feature, you have to bump the major part of the soname if you change the format of the functions in the library. So you can get eg. version 14.0.5 soname for a 2.0 library. Although this is not common. I got frustrated with this sort of thing and have developed a solution for this on Mac, and I'm talking about it next. 5. How can I install more than one version of a library without breaking my normal system? My solution to this is here: http://github.com/mxcl/homebrew/ I like installing from source, and wanted a tool that made it easy, but with some package management. So with Homebrew I build, eg. wget myself from source, but make sure to install to a special prefix: /usr/local/Cellar/wget/1.1.4 I then use the homebrew tool to symlink all that into /usr/local, so I still have /usr/local/bin/wget and /usr/local/lib/libwget.dylib Later if I need a different version of wget I can install it in parallel and just change the version that is linked into the /usr/local tree. 6. If I am installing stuff from source on a system that is otherwise managed using packages, what's the cleanest way of doing so? I believe the Homebrew way is cleanest, so use it or do the equivalent. Install to /usr/local/pkgs/name/version and symlink or hard link the rest in. Do use /usr/local. Every build tool that exists searches there for dependencies and headers. Your life will be much easier. 7. Assuming I manage to compile something fiddly from source, how can I then package that up so other people don't have to jump through the same hoops? Particularly on OS X.... If it has no dependencies you can tar up the build directory and give it to someone else who can then do "make install". However you can only do this reliably for the exact same versions of OS X. On Linux it will probably work for similar Linux (eg. Ubuntu) with the same Kernel version and libc minor version. The reason it is not easy to distribute binaries on Unix is because of binary compatibility. The GNU people, and everyone else change their binary interfaces often. Basically don't distribute binaries. Things will probably break in very strange ways. On Mac, the best option is to make a macports package. Everyone uses macports. On Linux there are so many different build systems and combinations, I don't think there is any better advice than to write a blog entry about how you succeeded building x tool in y strange configuration. If you make a package description (for macports or homebrew) then anyone can install that package, and it solves the dependency problems too. However this is often not easy, and it also isn't easy to get your macports recipe included in the main macports tree. Also macports doesn't support exotic installation types, they offer one choice for all packages. One of my future goals with Homebrew is to make it possible to click a link on a website (eg. homebrew://blah) and it will download that Ruby script, install the deps for that package and then build the app. But yeah, not yet done, but not too tricky considering the design I chose. 8. What are the command line tools I need to master to get good at this stuff? Stuff like otool, pkg-config etc. otool is really only useful afterwards. It tells you what the built binary links to. When you are figuring out the dependencies of a tool you have to build, it is useless.
The same is true of pkg-config as you will have already installed the dependency before you can use it. My tool chain is, read the README and INSTALL files, and do a configure --help. Watch the build output to check it is sane. Parse any build errors. Maybe in future, ask on serverfault :)
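To make the pkg-config / otool points concrete, a rough sketch (the library and binary paths here are illustrative examples, not taken from the answer above):
pkg-config --cflags --libs libpng   # print the compiler and linker flags a dependency needs
otool -L /usr/local/bin/wget        # OS X: list the dylibs a built binary links against
ldd /usr/local/bin/wget             # Linux equivalent of otool -L
Feeding the pkg-config output into CFLAGS/LDFLAGS before running configure is usually all it takes once the dependency is installed somewhere standard.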
{ "source": [ "https://serverfault.com/questions/46381", "https://serverfault.com", "https://serverfault.com/users/14512/" ] }
46,545
Some remote SMTP server I am trying to deliver mail to refuses to accept the HELO from my server: 504 5.5.2 <localhost>: Helo command rejected: need fully-qualified hostname Apparently, my Exim4 server sends localhost as its FQDN. Searching the net and a bunch of config files, I have learned that the value sent as FQDN during HELO is drawn from the primary_hostname configuration variable. My question is: what is the correct way to change this variable in a Debian system? I guess I can simply hardcode a value in one of the Exim4 config files, but IMHO it would seem to make more sense if the value automagically corresponded to /etc/mailname or some other centralized name config. I have a feeling that the answer to my question can be found in this text from the Debian wiki : The name used by Exim in EHLO/HELO is pulled from configuration option primary_hostname . Debian's exim4 default configuration does not set primary_hostname . Exim then defaults to uname() to find the host name. If that call only returns one component, gethostbyname() or getipnodebyname() is used to obtain the fully qualified host name. If your Exim HELOs as localhost.localdomain, then you have most probably a misconfigured /etc/hosts created by some versions of the Debian installer. In this case, please fix your /etc/hosts. Unfortunately, I am not familiar enough with Linux server administration to know exactly what all this means :(
Your /etc/hosts file should have at least two records in it. The first record should be of the form: <IP_ADDRESS> <HOST_FQDN> <HOSTNAME> the second one should be of the form: 127.0.0.1 localhost You also need to make sure that your /etc/hostname file contains the server's FQDN, and that running hostname -f returns your servers FQDN. If you make sure all of this is correct, and restart Exim, you should start seeing it HELO properly.
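As a hedged illustration (the name and address below are placeholders, substitute your own), /etc/hosts on such a box might look like:
192.0.2.10    mail.example.com    mail
127.0.0.1     localhost
Afterwards hostname -f should print mail.example.com, and restarting Exim (e.g. invoke-rc.d exim4 restart on Debian) makes it pick up the new name for HELO.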
{ "source": [ "https://serverfault.com/questions/46545", "https://serverfault.com", "https://serverfault.com/users/12695/" ] }
46,614
I am running Apache Tomcat on a Windows 2003 Server and I have data stored in a mySQL database. How can I prevent that a server admin can see any data?
You can't, really. The "Administrator" user on a Windows machine has complete control of that machine. That's "how it is". Someone will probably suggest you encrypt the data in the database. Assuming that the keys for that encryption are located somewhere on that computer (since you'll want the application to have access to them) the "Administrator" can just take that key and decrypt the data. Someone else will suggest that you use some kind of file permissions. That won't work either-- the "Administrator" can just change them. If you can't give your "Administrator" user a limited account with which they can accomplish all their day-to-day activities but otherwise is not an "Administrator", the only answer is "don't store that kind of data there". Any "answer" that involves the "Administrator" user retaining their "Administrator" rights won't give you any real protection. An Edit for JimB's sake: JimB left a few comments that I think deserve a longer response than I can give in a comment, so I'm dropping an edit on here. I answered the poster's question w/ technical accuracy w/ respect to the privileges granted to the Windows "Administrators" group. The poster is certainly free to spend all the time he wants "tweaking" the default security permissions in the operating system to attempt to either (a) strip the "Administrators" group of the root-equivalent privileges (which, I would expect, Microsoft would tell you not to do) or (b) create a lesser-privileged group that could perform all the necessary day-to-day server administration functions but would not otherwise be an "Administrator". Unless the poster's "server admin" needs are very basic, I would guess that the poster is going to end up getting into uncharted and undocumented territory. Maybe the poster needs a "server admin" that can perform only very basic operations to the server computer and the "Administrator" password can be set to an arbitrarily complex string and stored in a locked safe. That's one possible strategy, if the poster's business requirements re: a "server admin" allow such a thing. If the poster's requirements are more complex, I would expect that a LOT of ACLs (in the filesystem, registry, global object manager, and service control manager, at least) would need to be changed to accommodate giving a non-"Administrators" group member a close approximation of the abilities of an "Administrators" group member. The poster would also lose the utility of the well-known BUILTIN\Administrators SID, too. I would be shocked if there aren't some assumptions running pretty deeply into the Windows NT OS about the out-of-the-box privileges assigned to members of the "Administrators" group. Attempting to take away privileges from the BUILTIN\Administrators group is, to my mind, asking for instability and problems with the OS. I've not made any statements about "business policy" enforcing security. I don't know what JimB got out of my post or comments that gave him that idea. Business policy can't change the way that code works, and all my statements relate to how code works. Auditing that a breach occurred doesn't mitigate that the breach occurred. You can know that someone breached confidentiality, but no auditing mechanism can tell you how many or few copies of the confidential bits were made after confidentiality was breached. It's boolean-- either confidentiality has been breached or it hasn't. Auditing can tell you that, and nothing more.
A business can attempt to "enforce security" in all the "business policy" that they would like, but unless that "business policy" is congruous with how the code, and reality, operates it's really pretty meaningless.
{ "source": [ "https://serverfault.com/questions/46614", "https://serverfault.com", "https://serverfault.com/users/14592/" ] }
46,645
Is there a bash command to find the IP address for an Ubuntu box? I need to find the IP address so I can ssh into the machine later.
/sbin/ifconfig -a
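If you want something more script-friendly, these should also work on most modern Ubuntu installs (shown as a sketch; exact output format varies by version):
ip addr show    # all interfaces and their addresses
hostname -I     # just the host's IP address(es), space separated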
{ "source": [ "https://serverfault.com/questions/46645", "https://serverfault.com", "https://serverfault.com/users/13814/" ] }
46,748
I've got an Ubuntu 8.04 LTS server. There are several packages which are "kept back" ( "the following updates have been kept back" ) when I do an apt-get upgrade . It's my understanding that I can do an apt-get dist-upgrade to upgrade these packages, but I have a few concerns: If I do a dist-upgrade , will I be upgrading from 8.04 to higher version (8.10 I guess)? If so, what's the point of 8.04 being "Long Term Support" (LTS)? Is this a "dangerous" process? I'm assuming that packages are kept back because there are new packages that they depend on. Does dist-upgrade simply pull the new packages and do a fairly straightforward upgrade, or are there caveats to look into?
The command apt-get upgrade will not add or remove packages. apt-get dist-upgrade will add or remove packages as required. The command apt-get dist-upgrade will not automatically upgrade you from one release to another unless you have also updated your sources (/etc/apt/sources.list) to point at a newer release. man apt-get upgrade upgrade is used to install the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list. dist-upgrade dist-upgrade, in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages; Are there special concerns to be aware of when doing a dist-upgrade vs upgrade? For the most part I always apt-get dist-upgrade to apply updates to a system. Of course pay attention to exactly what new packages are being added or removed. Frequently this happens when something is being added like a newer kernel that isn't compatible with the previous and you will have to recompile modules. If you have some kernel module you had to build on your own, then you may need to make sure you recompile it for the new kernel. I have a couple systems with network interfaces not supported by the stock kernel that I have to recompile the network driver after each kernel update.
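A cautious sketch of how I'd run this on a remote box (the -s flag only simulates, so you can review exactly what would be added or removed before committing):
apt-get update
apt-get -s dist-upgrade | less   # dry run: look over the NEW and REMOVE lines
apt-get dist-upgrade             # the real run, once you're happy with the plan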
{ "source": [ "https://serverfault.com/questions/46748", "https://serverfault.com", "https://serverfault.com/users/2189/" ] }
46,852
We're using rsnapshot for backups. It keeps lots of snapshots of the backed-up files, but it does delete old ones. This is good. However it's taking about 7 hours to do a rm -rf on a massive directory tree. The filesystem is XFS. I'm not sure how many files are there, but it numbers in the millions probably. Is there any way to speed it up? Is there any command that does the same as rm -rf and doesn't take hours and hours?
No. rm -rf does a recursive depth-first traversal of your filesystem, calling unlink() on every file. The two operations that cause the process to go slowly are opendir() / readdir() and unlink() . opendir() and readdir() are dependent on the number of files in the directory. unlink() is dependent on the size of the file being deleted. The only way to make this go quicker is to either reduce the size and numbers of files (which I suspect is not likely) or change the filesystem to one with better characteristics for those operations. I believe that XFS is good for unlink() on large files, but isn't so good for large directory structures. You might find that ext3+dirindex or reiserfs is quicker. I'm not sure how well JFS fares, but I'm sure there are plenty of benchmarks of different file system performance. Edit: It seems that XFS is terrible at deleting trees, so definitely change your filesystem.
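One practical workaround, if your setup allows it (the paths are hypothetical and ionice needs an I/O scheduler that supports the idle class): rename the doomed tree out of the way so rsnapshot can carry on, then delete it slowly in the background:
mv /backups/daily.6 /backups/daily.6.trash
ionice -c3 nice rm -rf /backups/daily.6.trash &
This doesn't make the unlink() calls any faster, it just moves them off the critical path.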
{ "source": [ "https://serverfault.com/questions/46852", "https://serverfault.com", "https://serverfault.com/users/8950/" ] }
46,960
I've been playing with virtual machines lately, and I wondered if I could run a virtual machine inside a virtual machine. Is this possible? Is it practical?
Nesting VMs is something that has been done for forever on IBM Mainframe hardware. That hardware does lots of stuff to make the process very very efficient. You can have VMs nested to an arbitrary depth and it works very well. PC hardware very recently has kinda made this barely possible. A document on VMware's web site discusses it, but the gist is that you can have VMs nested 2 deep, but only on very modern hardware that supports true hardware virtualization (VT-x or AMD-V), and the second VM depth must be running the older style BT/binary translation style virtualization. There are also severe restrictions on the virtual monitors you're able to run on the inner guest. Needless to say, it's not supported and I'd expect it to be really flakey if you do anything even remotely weird (like Hyper-V under ESX). And performance will not be good, regardless of if it is stable.
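To check whether a Linux host advertises the hardware virtualization support (VT-x/AMD-V) mentioned above, a quick sketch; a non-zero count means the CPU has it, though it can still be disabled in the BIOS:
egrep -c '(vmx|svm)' /proc/cpuinfo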
{ "source": [ "https://serverfault.com/questions/46960", "https://serverfault.com", "https://serverfault.com/users/9230/" ] }
47,003
Through a boneheaded maneuver on my part, I accidentally created a directory called (for instance) -A, and ended up filling it with files. I want to delete said directory. I've tried: rmdir -- -A but it then tells me that the directory still has files in it. And I can't figure out how to cd into the directory to delete said files. What should I do to get rid of this troublesome directory?
Use -- on every command.
$ ls -la
total 32
drwxr-xr-x  3 richard richard  512 Jul 28 15:44 .
drwxr-xr-x  3 root    wheel    512 Jul  6 17:10 ..
$ mkdir -- -A
$ ls -la
total 36
drwxr-xr-x  2 richard richard  512 Jul 28 15:44 -A
drwxr-xr-x  4 richard richard  512 Jul 28 15:44 .
drwxr-xr-x  3 root    wheel    512 Jul  6 17:10 ..
$ cd -- -A
$ ls
$ pwd
/home/richard/-A
$ cd ..
$ rm -rf -- -A
$ ls -la
total 32
drwxr-xr-x  3 richard richard  512 Jul 28 15:44 .
drwxr-xr-x  3 root    wheel    512 Jul  6 17:10 ..
{ "source": [ "https://serverfault.com/questions/47003", "https://serverfault.com", "https://serverfault.com/users/7629/" ] }
47,021
FreeNAS seems like a great product with a full checklist of features, even iSCSI. But how reliable is it? There are a few scary stories about lost data, for example here. Here is another example. If you have used freeNAS for a longer period of time or even in a production setting, please share your experiences, good or bad. It would be great if you could also describe the setup, ie which hardware and features (software raid, zfs, iscsi etc) you are using.
I have been using freenas on a spare machine with 4x 1TB hard drives (2 raid 1's, so 2TB usable). It has been up 24/7 for 6 months. I find it brilliant! I tested many NAS devices and only got a maximum of 10Mb/s on a gigabit port, and that was rare, typically it was around 3-4. My main reason for a device was to save energy, however 2x 2-drive NAS units = more than an 80+% PSU on a celeron system. On freenas, I have a celeron based machine that cost me under £70, and on the internal 100Mb card, I can easily push 70Mb/s on samba. The most expensive part was I bought a 4 drive enclosure to add/remove hard drives easily! Was a bit of a waste of money, but looks cool! I can not complain at all about it and love the system. I did look at openfiler, but it seemed a bit OTT and freenas did what I needed... To the others who recommended it, not saying Openfiler is bad, but freenas suited my needs perfectly, I boot the machine off of a USB stick and it works well... The question was "is FreeNAS reliable" and my answer has to be yes. The system is using software raid and even though the celeron is a single core 64 bit one, even during a raid rebuild + watching a HDTV episode across the network, it never goes above 60% CPU. To get it working, I downloaded the full iso, put a 1GB usb stick in my laptop, used usb pass through on VMware Workstation and booted from the iso. I then used the install option and chose the USB stick. (You can do this on the actual machine and I have since, however this was my first time using it and I couldn't find a blank cd!) I put the usb stick in to the machine and booted. It worked fine first time! Steps to actually get it usable as a NAS were the following:
Go in to disk management and add each of the 4 drives.
Go to format and format all drives to software raid.
Go to software raid and add disks 1 and 2, 3 and 4 to a new raid 1.
Go to format and format both the new raids to the standard os.
Mount both raids.
Set up Samba and choose both of the mount points as shares.
Set up a couple of users.
Then it was accessible over windows by \\ip and using the username and password I chose. I will be looking at openfiler again soon as AD support is lacking a bit, however for a SOHO / domainless environment, you can not go wrong with freenas. edit - Via request - Was too big to fit in comments
{ "source": [ "https://serverfault.com/questions/47021", "https://serverfault.com", "https://serverfault.com/users/4974/" ] }
47,175
Can a DNS record point to an address like my.domain.com/subdir1
DNS records only map IP addresses to hostnames so in a word, no You could, however, use a hostname configuration in your web server to serve a subdirectory when a request comes in. Like having something.domain.com redirect/equate to somethingelse.domain.com/downhere. That would depend on your web server software, not DNS.
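As a hedged example of the web-server side (Apache syntax; the hostnames are placeholders), a virtual host that answers for the extra name and redirects into the subdirectory might look like:
<VirtualHost *:80>
    ServerName something.domain.com
    Redirect permanent / http://my.domain.com/subdir1/
</VirtualHost>
The DNS side then just needs an A or CNAME record for something.domain.com pointing at that web server.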
{ "source": [ "https://serverfault.com/questions/47175", "https://serverfault.com", "https://serverfault.com/users/14691/" ] }
47,269
I feel like this should be a really simple thing to do, but googling and checking SF I didn't see anything. I'm trying to make my Fedora server not respond to pings, how do I do that?
To disable the PING response, add the following line to your init script for the network:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
To reenable the PING response do this:
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
Update: To make the change permanent add the following line to /etc/sysctl.conf :
net.ipv4.icmp_echo_ignore_all=1
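To apply the sysctl.conf change without rebooting (or to toggle it on the fly), something like this should work:
sysctl -w net.ipv4.icmp_echo_ignore_all=1   # take effect immediately
sysctl -p                                   # re-read /etc/sysctl.conf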
{ "source": [ "https://serverfault.com/questions/47269", "https://serverfault.com", "https://serverfault.com/users/134800/" ] }
47,324
I've seen some documentation discussing the use of an unmanaged switch. What is the difference in functionality/performance/etc. between an unmanaged and managed switch?
Unmanaged switches — These switches have no configuration interface or options. They are plug-and-play. They are typically the least expensive switches, found in home, SOHO, or small businesses. They can be desktop or rack mounted. Managed switches — These switches have one or more ways, or interfaces, to modify the operation of the switch. Common management methods include: a serial console or Command Line Interface accessed via telnet or Secure Shell; an embedded Simple Network Management Protocol SNMP agent allowing management from a remote console or management station; a web interface for management from a web browser. Examples of configuration changes that one can do from a managed switch include: enable features such as Spanning Tree Protocol; set port speed; create or modify VLANs, etc. Two sub-classes of managed switches are marketed today: Smart (or intelligent) switches — These are managed switches with a limited set of management features. Likewise "web-managed" switches are switches which fall in a market niche between unmanaged and managed. For a price much lower than a fully managed switch they provide a web interface (and usually no CLI access) and allow configuration of basic settings, such as VLANs, port-speed and duplex.[10] Enterprise Managed (or fully managed) switches - These have a full set of management features, including Command Line Interface, SNMP agent, and web interface. They may have additional features to manipulate configurations, such as the ability to display, modify, backup and restore configurations. Compared with smart switches, enterprise switches have more features that can be customized or optimized, and are generally more expensive than "smart" switches. Enterprise switches are typically found in networks with larger number of switches and connections, where centralized management is a significant savings in administrative time and effort. A Stackable switch is a version of enterprise-managed switch. Source: http://en.wikipedia.org/wiki/Network_switch I would explain in more personal detail, but the wiki explains it pretty well.
{ "source": [ "https://serverfault.com/questions/47324", "https://serverfault.com", "https://serverfault.com/users/1047/" ] }
47,458
I am about to replace an old hardware RAID5 array with a Linux software RAID1 array. I was talking to a friend and he claimed that RAID5 was more robust than RAID1. His claim was that with RAID5, on read the parity data was read to make sure that all the drives were returning the correct data. He further claimed that on RAID1 errors occurring on a drive will go unnoticed because no such checking is done with RAID1. I can see how this could be true, but can also see that it all depends on how the RAID systems in question are implemented. Surely a RAID5 system doesn't have to read and check the parity data on a read and a RAID1 system could just as easily read from all drives on read to check they were all holding the same data and therefore achieve the same level of robustness (with a corresponding loss of performance). So the question is, what do RAID5/RAID1 systems in the real world actually do ? Do RAID5 systems check the parity data on reads ? Are there RAID1 systems that read from all drives and compare the data on read ?
RAID-5 is a fault-tolerance solution, not a data-integrity solution . Remember that RAID stands for Redundant Array of Inexpensive Disks . Disks are the atomic unit of redundancy -- RAID doesn't really care about data. You buy solutions that employ filesystems like WAFL or ZFS to address data redundancy and integrity. The RAID controller (hardware or software) does not verify the parity of blocks at read time. This is a major risk of running RAID-5 -- if you encounter a partial media failure on a drive (a situation where a bad block isn't marked "bad"), you are now in a situation where your data have been silently corrupted. Sun's RAID-Z/ZFS actually provides end-to-end data integrity , and I suspect other filesystems and RAID systems will provide this feature in the future as the number of cores available on CPUs continues to increase. If you're using RAID-5, you're being cheap, in my opinion. RAID 1 performs better, offers greater protection, and doesn't impact production when a drive fails -- for a marginal cost difference.
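If you do end up on Linux software RAID (md), you can at least schedule periodic scrubs that read every member and compare parity/mirror copies, e.g. (md0 is a placeholder for your array):
echo check > /sys/block/md0/md/sync_action   # start a scrub
cat /proc/mdstat                             # watch its progress
cat /sys/block/md0/md/mismatch_cnt           # non-zero suggests silent inconsistencies
This is background verification rather than read-time verification, so it only narrows the window for silent corruption; it doesn't eliminate it.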
{ "source": [ "https://serverfault.com/questions/47458", "https://serverfault.com", "https://serverfault.com/users/858/" ] }
47,466
Having every file be updated just when accessing them sounds like a waste. What's the catch with mounting a file system with the noatime option. What kind of applications/servers relies on the access time?
Consider relatime: If you have a newish install (~2008), you can use the relatime mount option. This is a good compromise for atime I think. From the kerneltrap discussion about implementing this new option: "relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified." This makes it so most of the applications that need atime will still work, but lessens the disk load -- so it is a compromise. This is the default with recent Ubuntu desktop distributions. Regarding noatime and nodiratime: If you are going noatime for files, I wonder if there is a reason not to use nodiratime in addition to noatime so you are not updating the access time on directories as well. The other reason to keep atime enabled which wasn't mentioned is for auditing purposes. But since only the when is kept, not the who, it is probably not that useful for an audit trail. All of these options can be found in 'man 8 mount'.
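A hedged example of what this looks like in /etc/fstab (the device and mount point are placeholders; pick either noatime/nodiratime or relatime, whichever suits you):
/dev/sda1   /srv   ext3   defaults,noatime,nodiratime   0   2
mount -o remount /srv   # apply the new options without rebooting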
{ "source": [ "https://serverfault.com/questions/47466", "https://serverfault.com", "https://serverfault.com/users/10271/" ] }
47,537
I have several web site set up on one IIS 6 server distinguished by Host Header. However, I wish to have one of the sites served by a Linux / Apache server on my network. Do I need to use a reverse proxy add-in for IIS, or is there a simple way to tell IIS to pass on all requests to another server?
For IIS 7.5, Microsoft provides official modules for this! URL Rewrite: http://www.iis.net/download/URLRewrite Reverse proxy (Application Request Routing): http://www.iis.net/download/ApplicationRequestRouting In the site settings you'll get a "URL Rewrite" icon. Open it, right-click the "inbound rules list", select "Add Rule(s)", choose "Reverse proxy", and in that dialog enter the hostname + port to forward to. After adding the rule, opening the edit dialog offers more customizations.
{ "source": [ "https://serverfault.com/questions/47537", "https://serverfault.com", "https://serverfault.com/users/46160/" ] }
47,630
For the purposes of a small business (less than 50 employees) is it generally advisable to hire someone full-time for IT Services or outsource to a separate company? Here are some details about the company in question: Health Services - Generally involves gadgets and hardware not easily maintained 25-50 Employees - Most of which interface with a computer regularly Microsoft - Almost completely windows based Should this business be outsourcing their IT/Infrastructure issues or hiring someone full time? If outsourcing is the answer, what are some good companies that do this sort of thing? What should the requirements be for the third party?
It All Depends(tm). I'm a little bit biased because I am an outsourced IT provider. I have Customers who are larger and smaller than the company you describe-- some have in-house IT staff and others don't. I'll apologize in advance if it sounds like I'm trying to "sell" you in this posting. I'm really not. We only service Customers who we can feasibly visit face-to-face, and your profile says "South Carolina" whereas mine says "Ohio". With that in mind, though, I do think that there is a potential situation that outsourcing may work for you (or, it may not). If you don't get anything else out of my message, take this away: The specific people who are involved, ultimately, will make any arrangement either succeed or fail. If you get an employee or an outsourced provider who genuinely cares about your business, cares about doing you right, and cares about providing the most efficient and cost-effective experience they can, you'll have the best luck. Too often employees see their work as "just a job" and don't give their best effort. Too many outsourcing companies are in the business to make money, potentially at the expense of their Customers. Outsourcing doesn't necessarily mean "a different person every time". My company, for example, has been the same three people since we started in 2004. Our Customers see the same faces on every visit. Your selection of an outsourcing firm makes a big difference on this issue. Going with an outsourcing firm that uses a "fleet" of rotating "technicians" (constantly experiencing turnover) isn't going to give you a good experience. Having a "revolving door" position inside your company will accomplish the same poor results. The labor expense of an employee depends a lot on their level of skill and your job market. In my area, Dayton, OH, US, I'd expect to pay an admin handling, say, server computer hardware, cabling / network infrastructure maintenance, Active Directory, Exchange, PC issues, a company web site (not necessarily the content-- just the hosting relationship), VPN access for remote users, and backup between $35,000.00 and $50,000.00 / year. Benefits and payroll tax are going to be anywhere from 20% to 35% of the salary (depending on your locality), for a "true cost" in dollars of somewhere between $42K and $54K on the low-side. Just remember that an employee costs more than their salary and benefits. You have an opportunity with many outsourced firms to "cross train" multiple "technicians" with your environment. With an employee you're often stuck with nothing when they decide to seek greener pastures. With our particular firm, $42K can buy a substantial quantity of on-site presence and VPN-based incident response. I'd shop around whatever you expect the true cost of an employee to be to local IT outsourcing firms and see what you come up with. Based on the size of the company you're talking about, and bearing in mind that I know nothing about your line-of-business software (the quality of which DRAMATICALLY affects service labor outlay), I could easily see that after getting over the initial setup hurdles (wresting "Administrator" rights away from users, getting servers / backups / email / etc setup in an optimal configuration) you could settle into a recurring service routine of 16 - 20 hours per week, and possibly even less.
The best arrangement that we've worked in, historically, has been reporting to a semi-technical manager who handles the day-to-day admin tasks (password resets, new account creation, etc), and defers higher-level problems to us. This keeps the day-to-day expenses lower, but still assures that the infrastructure is properly installed and maintained. Since we're a "services only" Firm (and don't sell hardware, software licenses, etc), it's usually easier for our Customers to grasp that we really are "on their side" and really are looking out for their best interests. Our business model is to get Customers setup with a solid infrastructure that's setup right from the start, proactively monitored for failures, and ultimately configured with the intention of requiring the least amount of ongoing support labor to "keep running". It's perfectly possible to have a network infrastructure that mostly "takes care of itself" and doesn't need a large amount of daily "care and feeding". Like I said-- I'm probably biased. This is the kind of work that I've done for years and I think it tends to work very well. Ultimately, though, there is no easy answer.
{ "source": [ "https://serverfault.com/questions/47630", "https://serverfault.com", "https://serverfault.com/users/1188/" ] }
47,811
When including a literal quote character inside a quoted string in Powershell, how do I escape the quote character to indicate it is a literal instead of a string delimeter?
From help about_quoting_rules To make double-quotation marks appear in a string, enclose the entire string in single quotation marks. For example: 'As they say, "live and learn."' The output from this command is: As they say, "live and learn." You can also enclose a single-quoted string in a double-quoted string. For example: "As they say, 'live and learn.'" The output from this command is: As they say, 'live and learn.' To force Windows PowerShell to interpret a double quotation mark literally, use a backtick character. This prevents Windows PowerShell from interpreting the quotation mark as a string delimiter. For example: "Use a quotation mark (`") to begin a string." The output from this command is: Use a quotation mark (") to begin a string. Because the contents of single-quoted strings are interpreted literally, you cannot use the backtick character to force a literal character interpretation in a single-quoted string. The use of the backtick character to escape other quotation marks in single quoted strings is not supported in recent versions of PowerShell. In earlier versions of PowerShell the backtick escape character could be used to escape a double quotation mark character within a single quoted string as detailed in the help about_quoting document that is available in those versions of PowerShell.
{ "source": [ "https://serverfault.com/questions/47811", "https://serverfault.com", "https://serverfault.com/users/920/" ] }
47,915
I'm trying to get the default gateway, using the destination 0.0.0.0. I used this command: netstat -rn | grep 0.0.0.0 and it returns this list:
Destination   Gateway        Genmask          Flags  MSS  Window  irtt  Iface
10.9.9.17     0.0.0.0        255.255.255.255  UH     0    0       0     tun0
133.88.0.0    0.0.0.0        255.255.0.0      U      0    0       0     eth0
0.0.0.0       133.88.31.70   0.0.0.0          UG     0    0       0     eth0
My goal here is to ping the default gateway using destination 0.0.0.0; thus, that is "133.88.31.70"; but this one returns a list because of using 'grep'. Question is: How do I get the default gateway only? I will need it for my bash script to identify if net connection is up or not.
DEFAULT_ROUTE=$(ip route show default | awk '/default/ {print $3}')
ping -c 1 $DEFAULT_ROUTE
This should solve your problem.
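And to turn that into the up/down check mentioned in the question, a small sketch:
if ping -c 1 -W 2 "$DEFAULT_ROUTE" > /dev/null 2>&1; then
    echo "gateway reachable"
else
    echo "gateway down"
fi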
{ "source": [ "https://serverfault.com/questions/47915", "https://serverfault.com", "https://serverfault.com/users/14966/" ] }
47,933
The obvious solution produces an exit code of 1:
bash$ rm -rf .*
rm: cannot remove directory `.'
rm: cannot remove directory `..'
bash$ echo $?
1
One possible solution will skip the "." and ".." directories but will only delete files whose names are longer than 3 characters:
bash$ rm -f .??*
rm -rf .[^.] .??*
Should catch all cases. The .??* will only match 3+ character filenames (as explained in previous answer), the .[^.] will catch any two character entries (other than ..).
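If you'd rather not rely on globbing at all, a find-based sketch does the same job (run it from inside the directory whose dot entries you want gone):
find . -mindepth 1 -maxdepth 1 -name '.*' -exec rm -rf {} +
find never emits . or .. itself, so there is nothing special to exclude.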
{ "source": [ "https://serverfault.com/questions/47933", "https://serverfault.com", "https://serverfault.com/users/9635/" ] }
48,053
I can create my own CA and generate a self-signed SSL certificate this way. But what does it take to make the browser show the certificate as being an "Extended Validation SSL certificate"? Can I create one myself and teach my browser to show it as EV?
The way that EV SSL certificates work is to stick an authority-specific OID in the certificate policies extension field of the cert (which is a standard X.509 certificate otherwise). As EK said, the reference OIDs for each authority are shipped as part of the browser's root store of certificates. The user interfaces don't let you add a new CA and say "this is an EV capable CA and the OID is a.b.c.d.e.f". I suppose it might be possible to build an open-source browser from source, adding your own CA's cert along with its EV OID to the root store, but you haven't really achieved much by doing so. The browser would no longer be compliant with the CA/Browser forum EV guidelines (which limit the EV-capable authorities). Wikipedia has more info on EV certificates here: http://en.wikipedia.org/wiki/Extended_Validation_Certificate
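To see that OID for yourself on a real EV certificate, a quick sketch with openssl (the file name is a placeholder; you can also pipe a cert out of openssl s_client):
openssl x509 -in cert.pem -noout -text | grep -A 4 'Certificate Policies'
You'll find the policy OID there, but as described above, putting your own OID in that extension achieves nothing unless the browser's root store lists your CA as EV-capable for it.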
{ "source": [ "https://serverfault.com/questions/48053", "https://serverfault.com", "https://serverfault.com/users/9825/" ] }
48,256
I have been told by colleagues (mainly non-technical) that some of my admin behaviors border on / cross the line between normal and obsessive, which sometimes leads me to wonder how screwed up I really am (read "how screwed up everyone else really is"). What are your obsessive behaviors when it comes to your sysadmin tasks and job functions? What do you do religiously that would make you twitch if you didn't do it or that others just roll their eyes at? I have reasons for my actions. I want to prove to my coworkers that I'm not alone.
There are two levels of obsessive - good obsessive and pointless obsessive . The guy who defrags three times a day is pointless obsessive, because he's not worrying about things that actually matter. The guy who denies user's permission to change the wallpaper on their workstations via Group Policy is pointless obsessive, using his technological advantage to control others to satisfy some ego issue. However... The guy who locks his workstation, enforces a strong password policy, keeps firewall rules tight enough but not insane, audits the infrastructure every so often, etc., is good obsessive. I'd like to also point out that another term for good obsessive is professional . :) Edit: Also, good obsessive provides a strong and solid infrastructure while not inhibiting business needs . I think that's a key difference.
{ "source": [ "https://serverfault.com/questions/48256", "https://serverfault.com", "https://serverfault.com/users/1054/" ] }
48,330
In Linux if you go digging in /proc/<pid>/fd often you'll see output like:
lrwx------ 1 root root 64 Jul 30 15:14 0 -> /dev/null
lrwx------ 1 root root 64 Jul 30 15:14 1 -> /dev/null
l-wx------ 1 root root 64 Jul 30 15:14 10 -> pipe:[90222668]
lr-x------ 1 root root 64 Jul 30 15:14 11 -> pipe:[90222669]
l-wx------ 1 root root 64 Jul 30 15:14 13 -> pipe:[90225058]
lr-x------ 1 root root 64 Jul 30 15:14 14 -> pipe:[90225059]
How do I get more info about the open pipes, such as which process is on the other end?
Similar to other answers, but:
lsof | grep 90222668
That will show you both ends, because both ends share the 'pipe number'.
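Another sketch that works directly off the pipe's inode number without lsof (you'll need root to look inside other users' /proc entries):
find /proc/[0-9]*/fd -lname 'pipe:\[90222668\]' 2>/dev/null
The paths it prints (/proc/<pid>/fd/<n>) tell you which processes hold each end of that pipe.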
{ "source": [ "https://serverfault.com/questions/48330", "https://serverfault.com", "https://serverfault.com/users/5922/" ] }
48,428
This is a canonical question about how to handle email sent from your server being misclassified as spam. For additional information you may find these similar questions helpful: Best Practices for preventing you from looking like a spammer Fighting Spam - What can I do as an: Email Administrator, Domain Owner, or User? Sometimes I want to send newsletters to my customers. The problem is, that some of the emails get caught as spam messages. Mostly by Outlook at the client (even in my own Outlook 2007). Now I want to know what should be done to create "good" emails. I know about reverse lookup etc., but (for example), what about a unsubscribe link with an unique ID? Does that increase a spam rating?
Be sure that your emails don’t look like typical spam emails: don’t insert only a large image; check that the character-set is set correctly; don’t insert “IP-address only” links. Write your communication as you would write a normal email. Make it really easy to unsubscribe or opt-out. Otherwise, your users will unsubscribe by pressing the “spam” button, and that will affect your reputation. On the technical side: if you can choose your SMTP server, be sure it is a “clean” SMTP server. IP addresses of spamming SMTP servers are often blacklisted by other providers. If you don’t know your SMTP servers in advance, it’s a good practice to provide configuration options in your application for controlling batch sizes and delay between batches. Some mail servers don’t accept large sending batches or continuous activity. Use email authentication methods, such as SPF , and DKIM to prove that your emails and your domain name belong together. The nice side-effect is you help in preventing that your email domain is spoofed. Also check your reverse DNS to make sure the IP address of your mail server points to the domain name that you use for sending mail. Make sure that the reply-to address of your emails are a valid, existing addresses. Use the full, real name of the addressee in the To field, not just the email-address (e.g. "John Doe" <[email protected]> ) and monitor your abuse accounts, such as [email protected] and [email protected] .
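A few quick checks you can script (the domain, DKIM selector and IP below are placeholders for your own):
dig +short TXT example.com                       # should include your v=spf1 record
dig +short TXT selector._domainkey.example.com   # your DKIM public key record
dig +short -x 203.0.113.25                       # reverse DNS of your sending IP
The -x answer should match the hostname your mail server announces in HELO/EHLO.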
{ "source": [ "https://serverfault.com/questions/48428", "https://serverfault.com", "https://serverfault.com/users/13589/" ] }
48,455
We are running computing jobs with GridEngine. Every jobs returns 3 different times: Wall clock time User time CPU time What are the differences between these three? Which of these three is most suitable to compare the performance of two applications/scripts
Wall clock time is the actual amount of time taken to perform a job. This is equivalent to timing your job with a stopwatch and the measured time to complete your task can be affected by anything else that the system happens to be doing at the time. User time measures the amount of time the CPU spent running your code. This does not count anything else that might be running, and also does not count CPU time spent in the kernel (such as for file I/O). CPU time measures the total amount of time the CPU spent running your code or anything requested by your code. This includes kernel time. The "User time" measurement is probably the most appropriate for measuring the performance of different jobs, since it will be least affected by other things happening on the system.
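You can see the same three numbers for any command with the shell's time builtin (output format differs a little between bash's builtin and /usr/bin/time):
time ./myjob   # prints real (wall clock), user and sys (kernel CPU) time
Roughly speaking, user + sys corresponds to the CPU time figure and real to the wall clock time reported by GridEngine.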
{ "source": [ "https://serverfault.com/questions/48455", "https://serverfault.com", "https://serverfault.com/users/261/" ] }
48,481
Is there a way to add a directory structure to an SVN repository without adding the files contained in the folders?
So sorry, I should've RTFM ... http://svnbook.red-bean.com/en/1.5/svn.ref.svn.c.add.html You can add a directory without adding its contents: $ svn add --depth=empty otherdir A otherdir Edit: This doesn't work recursively though, is there any way to do that too?
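One hedged way to do it recursively: let find feed every directory (parents before children) to svn add, e.g.
find mytree -type d -not -path '*/.svn*' -exec svn add --depth=empty {} \;
Here mytree is a placeholder for the top of the tree inside your working copy; because find prints a directory before descending into it, each parent is versioned before its children are added.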
{ "source": [ "https://serverfault.com/questions/48481", "https://serverfault.com", "https://serverfault.com/users/15141/" ] }
48,486
I am using RAM for storing some of my database tables and the others are stored on hard disk. Today I came to know that my processes are using swap memory. Now what is swap memory, how can I detect which process is using swap memory, and how can I stop them from using it?
If you run out of physical memory, you use virtual memory, which stores the data in memory on disk. Reading from disk is several orders of magnitude slower than reading from memory, so this slows everything way down. (Exchanging data between real memory and virtual memory is "swapping". The space on disk is "swap space".) If your app is "using swap", then you either need to use less memory or buy more RAM. (Swap is useful because applications that aren't being used can be stored on disk until they are used. Then they can be "paged in" and run normally again. While it is not in memory, though, the OS can use that memory for something else, like disk cache. So it's a very useful feature, but if you don't have enough physical memory to run your program, you definitely need more memory. Fortunately, memory is really really cheap these days.)
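To answer the "which process is using swap" part: on reasonably recent kernels each process reports a VmSwap line in /proc/<pid>/status, so a rough sketch is:
for f in /proc/[0-9]*/status; do
    awk '/^Name:/{n=$2} /^VmSwap:/{if ($2 > 0) print $2, "kB", n}' "$f"
done | sort -rn | head
You can't really "stop" a process from using swap short of adding RAM, tuning vm.swappiness, or (carefully) having the application lock its memory with mlock.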
{ "source": [ "https://serverfault.com/questions/48486", "https://serverfault.com", "https://serverfault.com/users/-1/" ] }
48,582
Using ps, I can see the size, the vsize (same as top's VIRT?), and the rss (same as top's RES?). (One more I see in top is SHR.) Could someone summarize for me what these different fields mean?
In short: Virtual size: is the amount of address space that a process is managing. The virtual address space contains everything that the process can access through pointers (memory address references). For example, if your program gets access to the framebuffer of your video card, that memory is mapped to the process virtual space and receives an address that is stored to a pointer. Memory-mapped files and anonymous mappings are also accounted into the virtual address space size. Pretty much everything is in the virtual size. If you sum up the size of all address ranges listed in /proc/<pid>/maps , it should return you roughly the same value of the virtual size. Resident size: is the amount of memory that belongs specifically to that process that is currently resident in memory. That means, the amount of memory that is not in swap. Note that parts of the process can be in swap memory even when the process is running. The operating system will pull these regions from the swap when the process tries to access it. This should include the heap, the stacks of all threads and other private mappings. If you look in /proc/<pid>/maps , the [stack] , [heap] and other anonymous mappings (those without file paths) are either swapped or accounted in the resident size. Shared size: is the amount of memory that may belong to multiple processes. For example, if you have four instances of the same application loaded in memory, you will have four instances of the heap and at least four stacks, one for each process (this is the resident memory), but you will have only one instance of the binary code of the program and its libraries. This is the shared space. Not only it includes the program binary code and its libraries, but also localization files, read-only program data, SysV and POSIX shared memory segments, semaphores, etc... If you look in /proc/<pid>/maps , most mappings tied to library and program files are shared. Note that VIRT contains the union of RSS and SHR, and will always be greater than any one of them. There may be regions accounted as both RSS and SHR.
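To poke at these numbers for a specific process, a quick sketch (1234 is a placeholder PID):
ps -o pid,vsz,rss,comm -p 1234   # vsz roughly corresponds to VIRT, rss to RES, both in kB
pmap -x 1234                     # per-mapping breakdown of the virtual address space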
{ "source": [ "https://serverfault.com/questions/48582", "https://serverfault.com", "https://serverfault.com/users/1753/" ] }
48,600
I have a Windows service that exits unexpectedly every few days. Is there a simple way to monitor it to make sure it gets restarted quickly if it crashes?
Under the Services application, select the properties of the service in question. View the recovery tab - there are all sorts of options - I'd set First & Second Failure to Restart the Service, Third to run a batch program that BLAT 's out an email with the third failure notification. You should also set the Reset Fail Count to 1 to reset the fail count daily. EDIT: Looks like you can do this via a command line: SC failure w3svc reset= 432000 actions= restart/30000/restart/60000/run/60000 SC failure w3svc command= "MyBatchFile.cmd" Your MyBatchFile.CMD file can look like this: blat - -body "Service W3svc Failed" -subject "SERVICE ERROR" -to [email protected] -server SMTP.Example.com -f [email protected]
{ "source": [ "https://serverfault.com/questions/48600", "https://serverfault.com", "https://serverfault.com/users/1867/" ] }
48,642
i.e., how do I get a full list of hardware components from the command line (on a machine with no window system)? Thank you.
lspci for PCI devices, lsusb for USB devices; lshw works on Debian-based distros and gives a fuller summary. There are plenty of other ways to get more specific hardware details.
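A slightly fuller sketch of the usual suspects (run as root where noted; availability varies by distro):
lshw -short                     # one-line-per-device summary
lspci; lsusb                    # PCI and USB devices
cat /proc/cpuinfo               # CPU details
dmidecode -t system -t memory   # as root: DMI/SMBIOS info on chassis, RAM slots, etc.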
{ "source": [ "https://serverfault.com/questions/48642", "https://serverfault.com", "https://serverfault.com/users/13823/" ] }
48,643
How to connect to a computer that is in Sleep mode over the internet? I am using LogMeIn to connect to another computer offsite. I just installed Windows 7 RC on that system and found that the Sleep mode actually works. Currently LogMeIn does not connect when the system is in Sleep mode or Hibernate mode (that is what their error message displays when you try). Is there a way to get LogMeIn to connect to a system in Sleep mode? Is there other software that gives simliar LogMeIn functionallity (like RDP, etc.) that could be used on Windows 7 instead. I just use LMI for connecting and nothing else (no printing or file transfers). A Non-expensive options (such as free) would be better. I have seen web sites mentioning "Wake on LAN". Does anyone have some good links on how to set this up to be accessed over the internet? Edited: It looks like LogMeIn BETA might be the solution. https://beta.logmein.com/welcome/nextgen/ Has anyone tried this beta yet?
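A hedged sketch of the Wake-on-LAN approach the question mentions (the MAC address is a placeholder, and WoL has to be enabled in the BIOS and on the NIC): have a small always-on device inside the LAN (router, NAS, another PC) send the magic packet, e.g.
wakeonlan 00:11:22:33:44:55     # or, as root: etherwake 00:11:22:33:44:55
Sending the packet straight across the internet is usually the sticking point, since most routers won't forward the broadcast it relies on, which is why it normally gets relayed from something already reachable inside the network.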
{ "source": [ "https://serverfault.com/questions/48643", "https://serverfault.com", "https://serverfault.com/users/3465/" ] }
48,717
We recently began load testing our application and noticed that it ran out of file descriptors after about 24 hours. We are running RHEL 5 on a Dell 1955: CPU: 2 x Dual Core 2.66GHz 4MB 5150 / 1333FSB RAM: 8GB RAM HDD: 2 x 160GB 2.5" SATA Hard Drives I checked the file descriptor limit and it was set at 1024. Considering that our application could potentially have about 1000 incoming connections as well as a 1000 outgoing connections, this seems quite low. Not to mention any actual files that need to be opened. My first thought was to just increase the ulimit -n parameter by a few orders of magnitude and then re-run the test but I wanted to know any potential ramifications of setting this variable too high. Are there any best practices towards setting this other than figuring out how many file descriptors our software can theoretically open?
These limits came from a time when multiple "normal" users (not apps) would share the server, and we needed ways to protect them from using too many resources. They are very low for high performance servers and we generally set them to a very high number (24k or so). If you need higher numbers, you also need to change the sysctl file-max option (generally limited to 40k on Ubuntu and 70k on RHEL). Setting ulimit:
# ulimit -n 99999
Sysctl max files:
# sysctl -w fs.file-max=100000
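To make those limits survive reboots and apply to the service account, a hedged sketch (user name and numbers are placeholders; pam_limits must be enabled for limits.conf to take effect at login). In /etc/security/limits.conf:
appuser  soft  nofile  65536
appuser  hard  nofile  65536
In /etc/sysctl.conf:
fs.file-max = 100000
Then run sysctl -p to load the sysctl value, and log in again (or restart the service) so the new ulimit applies.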
{ "source": [ "https://serverfault.com/questions/48717", "https://serverfault.com", "https://serverfault.com/users/12317/" ] }
48,724
Is there a way to do an apt-get dist-upgrade in Debian that not only automatically answers "yes" to all questions asked, but also uses reasonable defaults as answers to questions that are sophisticated enough to require various interactive dialog boxes to pop up? I'm thinking here of the keymap stuff that shows up when you upgrade libc6 , and kernel image choices. The goal is to be able to remotely initiate a rather large dist-upgrade - even for a machine that is severely behind the times - and not have to babysit it at all, unless something is just horribly, disastrously wrong. Surely this is possible? Thanks in advance!
If you set DEBIAN_FRONTEND=noninteractive (to stop debconf prompts from appearing) and add force-confold and force-confdef to your /etc/dpkg/dpkg.cfg file, you should have a completely noninteractive package installation experience. Any package that still prompts you for information has a release critical bug (and I say that as both an automation junkie and as a Debian developer).
{ "source": [ "https://serverfault.com/questions/48724", "https://serverfault.com", "https://serverfault.com/users/3916/" ] }
48,769
I use bash and I would like to avoid some commands being kept in the history. Is it possible to do that for the next command only? Is it possible to do that for the entire session?
and i just remembered another answer, this one is the actual answer to your question. if you have "ignorespace" in HISTCONTROL, then bash wont remember any line beginning with a space character. it won't appear even in the current shell's history, let alone be saved to $HISTFILE. e.g. I have export HISTCONTROL='ignoreboth:erasedups' in my ~/.bashrc here's the details from the bash man page: HISTCONTROL A colon-separated list of values controlling how commands are saved on the history list. If the list of values includes ignorespace, lines which begin with a space character are not saved in the history list. A value of ignoredups causes lines matching the previous history entry to not be saved. A value of ignoreboth is shorthand for ignorespace and ignoredups. A value of erasedups causes all previous lines matching the current line to be removed from the history list before that line is saved. Any value not in the above list is ignored. If HISTCONTROL is unset, or does not include a valid value, all lines read by the shell parser are saved on the history list, subject to the value of HISTIGNORE. The second and subsequent lines of a multi- line compound command are not tested, and are added to the history regardless of the value of HISTCONTROL.
{ "source": [ "https://serverfault.com/questions/48769", "https://serverfault.com", "https://serverfault.com/users/6343/" ] }