source_id | question | response | metadata
---|---|---|---|
386,918 | Just got a new VPS running Ubuntu 11.04 and tried to update it. I got this error; I get the same error whenever using apt-get. login as: root
[email protected]'s password:
Welcome to Ubuntu 11.04 (GNU/Linux 2.6.38-8-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Last login: Mon May 7 19:55:45 2012 from 108.192.44.54
root@Rx:~# apt-get update
Err http://security.ubuntu.com natty-security InRelease
Err http://archive.ubuntu.com natty InRelease
Err http://security.ubuntu.com natty-security Release.gpg
Temporary failure resolving 'security.ubuntu.com'
Err http://archive.ubuntu.com natty-updates InRelease
Err http://archive.ubuntu.com natty Release.gpg
Temporary failure resolving 'archive.ubuntu.com'
Err http://archive.ubuntu.com natty-updates Release.gpg
Temporary failure resolving 'archive.ubuntu.com'
Reading package lists... Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/natty/InRelease
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/natty-updates/InRelease
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/natty-security/InRelease
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/natty-security/Release.gpg Temporary failure resolving 'security.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/natty/Release.gpg Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/natty-updates/Release.gpg Temporary failure resolving 'archive.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@Rx:~# If needed, here is my /etc/apt/sources.list root@Rx:/etc# more /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu natty main
deb http://archive.ubuntu.com/ubuntu natty-updates main
deb http://security.ubuntu.com/ubuntu natty-security main
deb http://archive.ubuntu.com/ubuntu natty universe
deb http://archive.ubuntu.com/ubuntu natty-updates universe And if needed, I did a ping test: root@Rx:~# ping -n 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=56 time=13.3 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=56 time=13.2 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=56 time=13.4 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=56 time=13.3 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 13.243/13.326/13.428/0.066 ms
root@Rx:~# This is /etc/resolv.conf root@Rx:~# more /etc/resolv.conf
nameserver 199.193.248.1 | The problem is that the DNS server you had originally isn't responding to your queries. You can add another one to the list to check. 8.8.8.8 (provided by Google) is the easiest to remember. Add the line nameserver 8.8.8.8 to your /etc/resolv.conf to query that server. If the original server is one that the VPS provider gave you, you may want to bring this up with their support team - it's possible there's some sort of management tool that depends on it. Other than that, you can use 8.8.8.8 as your primary DNS forever. | {
"source": [
"https://serverfault.com/questions/386918",
"https://serverfault.com",
"https://serverfault.com/users/72171/"
]
} |
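A minimal sketch of the fix described in the answer above; the nameserver value comes from the answer, and host/nslookup are standard verification tools:
# append Google's public resolver as a fallback and re-test name resolution
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
host archive.ubuntu.com   # should now resolve; 'nslookup archive.ubuntu.com' also works
apt-get update            # the 'Temporary failure resolving' errors should be gone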
386,931 | I'm trying to install Z-threads on Ubuntu but am running into a problem. I've installed Z-threads dozens of times but this is the first time this has happened. I usually use these to install it: wget http://voxel.dl.sourceforge.net/sourceforge/zthread/ZThread-2.3.2.tar.gz
tar -xzf ZThread-2.3.2.tar.gz
cd ZThread-2.3.2
./configure CXXFLAGS="-fpermissive" --prefix=/usr/
make && make install It errors on the configure command above: root@Rx:~/ZThread-2.3.2# ./configure CXXFLAGS="-fpermissive" --prefix=/usr/
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking target system type... i686-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
Loading m4 macros from share
checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for style of include used by make... GNU
checking dependency style of g++... gcc3
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking for egrep... grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking for linker option -pthread... no
checking for linker option -lpthread... yes
checking for sched_get_priority_max in -lrt... yes
checking for sched_yield... yes
checking for pthread_yield... yes
checking for pthread_key_create... yes
checking for doxygen... no
detecting for ftime() function
checking sys/time.h usability... yes
checking sys/time.h presence... yes
checking for sys/time.h... yes
checking for _ftime()... no
checking sys/timeb.h usability... yes
checking sys/timeb.h presence... yes
checking for sys/timeb.h... yes
checking for ftime()... yes
checking how to run the C++ preprocessor... g++ -E
checking for ANSI C header files... (cached) yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking for target implementation... compile-time guess
checking for sigsetjmp()... yes
checking for _beginthreadex()... no
checking for a sed that does not truncate output... /bin/sed
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for /usr/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -B
checking whether ln -s works... yes
checking how to recognise dependent libraries... pass_all
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking how to run the C++ preprocessor... g++ -E
checking for g77... no
checking for f77... no
checking for xlf... no
checking for frt... no
checking for pgf77... no
checking for fort77... no
checking for fl32... no
checking for af77... no
checking for f90... no
checking for xlf90... no
checking for pgf90... no
checking for epcf90... no
checking for f95... no
checking for fort... no
checking for xlf95... no
checking for ifc... no
checking for efc... no
checking for pgf95... no
checking for lf95... no
checking for gfortran... no
checking whether we are using the GNU Fortran 77 compiler... no
checking whether accepts -g... no
checking the maximum length of command line arguments... 32768
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking for correct ltmain.sh version... grep: character class syntax is [[:space:]], not [:space:]
no
*** Gentoo sanity check failed! ***
*** libtool.m4 and ltmain.sh have a version mismatch! ***
*** (libtool.m4 = 1.5.10, ltmain.sh = ) ***
Please run:
libtoolize --copy --force
if appropriate, please contact the maintainer of this
package (or your distribution) for help.
root@Rx:~/ZThread-2.3.2# upon running libtoolize --copy --force : root@Rx:~/ZThread-2.3.2# libtoolize --copy --force
libtoolize: putting auxiliary files in `.'.
libtoolize: copying file `./ltmain.sh'
libtoolize: You should add the contents of the following files to `aclocal.m4':
libtoolize: `/usr/share/aclocal/libtool.m4'
libtoolize: `/usr/share/aclocal/ltoptions.m4'
libtoolize: `/usr/share/aclocal/ltversion.m4'
libtoolize: `/usr/share/aclocal/ltsugar.m4'
libtoolize: `/usr/share/aclocal/lt~obsolete.m4'
libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.ac and
libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.
libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
root@Rx:~/ZThread-2.3.2# What is it even suggesting I do? And is it perhaps part of a larger problem? | | {
"source": [
"https://serverfault.com/questions/386931",
"https://serverfault.com",
"https://serverfault.com/users/72171/"
]
} |
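The libtoolize output above already hints at the usual recovery path: regenerate the autotools files so libtool.m4 and ltmain.sh come from the same libtool version. A hedged sketch of that sequence (assuming a stock autotools install; paths and flags per the question):
cd ~/ZThread-2.3.2
libtoolize --copy --force   # refresh ltmain.sh, as the configure check suggested
aclocal                     # rebuild aclocal.m4, pulling in /usr/share/aclocal/libtool.m4 etc.
autoconf                    # regenerate ./configure from the updated macros
./configure CXXFLAGS="-fpermissive" --prefix=/usr/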
386,947 | I have an RDS instance that is costing me a lot of money. From my account activity on amazon I see that the instance has had about 800,000,000 IO requests over the past 7 days. To give you a little perspective, my app only gets about 6,000 unique visits a day and it doesn't make that many database connections. So, what exactly is an IO Request, and why would that number be so unearthly high? I'm willing to do whatever it takes to my app to reduce that cost if necessary, but I'm not sure what's really going on. I would appreciate your thoughts. | | {
"source": [
"https://serverfault.com/questions/386947",
"https://serverfault.com",
"https://serverfault.com/users/120281/"
]
} |
387,121 | Is it possible to disable password SSH access for a user but allow key authentication, on a per-user basis?
I mean, I have a userA whom I don't want to give password-based access, but I want him to use only key authentication to access the server(s).
Thanks | You can add "Match" sections to match on particular users or groups at the bottom of sshd_config, like: Match user stew
PasswordAuthentication no or Match group dumbusers
PasswordAuthentication no | {
"source": [
"https://serverfault.com/questions/387121",
"https://serverfault.com",
"https://serverfault.com/users/66274/"
]
} |
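A sketch of applying the Match approach from the shell; the user name is the one from the question, and the service name varies by distro (ssh on Debian/Ubuntu, sshd elsewhere):
# Match blocks belong at the bottom of sshd_config, so appending works
printf '\nMatch User userA\n    PasswordAuthentication no\n' >> /etc/ssh/sshd_config
sshd -t              # validate the config before reloading
service ssh reload   # apply without dropping existing sessions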
387,268 | Is there any way to see what process(es) caused the most CPU usage? I have AMAZON EC2 Linux which CPU utilization reaches 100 percent and make me to reboot the system. I cannot even login through SSH (Using putty). Is there any way to see what causes such a high CPU usage and which process caused that ? I know about sar and top command but I could not find process execution history anywhere. Here is the image from Amazon EC2 monitoring tool, but I would like to know which process caused that : I have also tried ps -eo pcpu,args | sort -k 1 -r | head -100 but no luck finding such a high CPU usage. | There are a couple of possible ways you can do this.
Note that it's entirely possible that it's many processes in a runaway scenario causing this, not just one. The first way is to set up pidstat to run in the background and produce data. pidstat -u 600 >/var/log/pidstats.log & disown $! This will give you a fairly detailed view of how the system is running at ten-minute intervals. I would suggest this be your first port of call, since it produces the most valuable/reliable data to work with. There is a problem with this, primarily if the box goes into a runaway CPU loop and produces huge load: you're not guaranteed that your actual process will execute in a timely manner during the load (if at all), so you could actually miss the output! The second way to look for this is to enable process accounting. Possibly more of a long-term option. accton on This will enable process accounting (if not already enabled). If it was not running before, it will need time to gather data. Having run for, say, 24 hours, you can then run a command such as this (which will produce output like the following): # sa --percentages --separate-times
108 100.00% 7.84re 100.00% 0.00u 100.00% 0.00s 100.00% 0avio 19803k
2 1.85% 0.00re 0.05% 0.00u 75.00% 0.00s 0.00% 0avio 29328k troff
2 1.85% 0.37re 4.73% 0.00u 25.00% 0.00s 44.44% 0avio 29632k man
7 6.48% 0.00re 0.01% 0.00u 0.00% 0.00s 44.44% 0avio 28400k ps
4 3.70% 0.00re 0.02% 0.00u 0.00% 0.00s 11.11% 0avio 9753k ***other*
26 24.07% 0.08re 1.01% 0.00u 0.00% 0.00s 0.00% 0avio 1130k sa
14 12.96% 0.00re 0.01% 0.00u 0.00% 0.00s 0.00% 0avio 28544k ksmtuned*
14 12.96% 0.00re 0.01% 0.00u 0.00% 0.00s 0.00% 0avio 28096k awk
14 12.96% 0.00re 0.01% 0.00u 0.00% 0.00s 0.00% 0avio 29623k man*
7 6.48% 7.00re 89.26% 0.00u 0.00% 0.00s The columns are ordered as such: Number of calls Percentage of calls Amount of real time spent on all the processes of this type. Percentage. User CPU time Percentage System CPU time. Average IO calls. Percentage Command name What you'll be looking for is the process types that generate the most user/system CPU time. This breaks down the data as the total amount of CPU time (the top row) and then how that CPU time has been split up. Process accounting only accounts properly when it is on when processes spawn, so it's probably best to restart the system after enabling it to ensure all services are being accounted for. This by no means gives you a definite idea of which process is the cause of this problem, but it might give you a good feel. As it could be a 24-hour snapshot, there's a possibility of skewed results, so bear that in mind. It should also always log, since it's a kernel feature and, unlike pidstat, will always produce output even during heavy load. The last option available also uses process accounting, so you can turn it on as above, but then use the program "lastcomm" to produce some statistics of processes executed around the time of the problem, along with CPU statistics for each process. lastcomm | grep "May 8 22:[01234]"
kworker/1:0 F root __ 0.00 secs Tue May 8 22:20
sleep root __ 0.00 secs Tue May 8 22:49
sa root pts/0 0.00 secs Tue May 8 22:49
sa root pts/0 0.00 secs Tue May 8 22:49
sa X root pts/0 0.00 secs Tue May 8 22:49
ksmtuned F root __ 0.00 secs Tue May 8 22:49
awk root __ 0.00 secs Tue May 8 22:49 This might give you some hints too as to what might be causing the problem. | {
"source": [
"https://serverfault.com/questions/387268",
"https://serverfault.com",
"https://serverfault.com/users/120387/"
]
} |
387,547 | I accidentally pressed Ctrl+C during Ubuntu Server's do-release-upgrade process. I'd dropped to a shell to compare a .conf file in /etc/. When I pressed Ctrl-C, it asked whether I wanted to try to reattach to the upgrade process, but it failed to do so. So I quit, and now there's a hanging dpkg process which is holding onto the apt lock. This is a virtualised server with no GUI frontend... Is it possible to recover the upgrade process, or do I have to kill the dpkg process and start again? | I usually do release upgrades over VPN, so I've tried this a few times. Whenever it updates my openvpn package I lose connection, so I reconnect afterwards. do-release-upgrade starts a backup SSH session on port 1022 and a backup screen session. If you do not have screen installed this will NOT be available. You can get the screen session by running: sudo screen -list
There is a screen on:
2953.ubuntu-release-upgrade-screen-window (09/13/2012 04:48:02 AM) (Detached)
1 Socket in /var/run/screen/S-root. Then to reattach do: sudo screen -d -r root/2953.ubuntu-release-upgrade-screen-window Using the previously listed screen after root/ You should be back to where you lost connection. | {
"source": [
"https://serverfault.com/questions/387547",
"https://serverfault.com",
"https://serverfault.com/users/77677/"
]
} |
387,627 | I'm increasingly seeing mobile networking technologies being used to get internet access in areas where it is otherwise not available. While mobile networking is usually not yet viable as the primary internet connection, mobile technology looks like a good option for an emergency fallback. Bandwidth is not the problem: with HSDPA, speeds of several Mbit are possible, which provides a decent uplink. However, I know from personal experience that mobile network internet links (via GPRS, UMTS etc.) have much higher latencies than regular DSL (200-400 ms for UMTS, even more for GPRS). This of course makes them unsuitable for many applications, such as VoIP and teleconferencing. Where does this latency come from? Are there any technologies available that can mitigate this problem, to make UMTS viable for low-latency applications? I assume there must be some inherent technical reason, but what is it? Does it have to do with how data is transmitted over the air? And if it is because of the wireless transmission, why does WLAN have much lower latencies? | The book "High Performance Browser Networking" by Ilya Grigorik answers exactly this. There is a whole chapter (the 7th) dedicated to mobile networks. The book states that the problem with high performance is almost always tied to latency: we usually have plenty of bandwidth, but the protocols get in the way, be it TCP slow start, the Radio Resource Controller (RRC) or suboptimal configurations. If you are experiencing poor latency only on mobile networks, it's down to the way they are designed. There is a table in the book about typical latencies: Table 7-2. Data rates and latency for an active mobile connection Generation | Data rate | Latency
2G | 100–400 Kbit/s | 300–1000 ms
3G | 0.5–5 Mbit/s | 100–500 ms
4G | 1–50 Mbit/s | < 100 ms Although very relevant to latency, TCP characteristics like the three-way handshake or slow start don't really answer the question, as they affect wired connections equally. What really affects latency in mobile networks is the layer under IP. If the layer under IP has a latency of half a second, a TCP connection to a server will take ~1.5 sec (0.5s*3); as you see, the numbers add up pretty quickly. As said before, that's supposing the mobile is not idle. If the handset is idle, it first has to "connect" to the network, which requires negotiating a reservation of resources with the tower (simplified), and that takes between 50-100ms in LTE, up to several seconds in 3G, and more in earlier networks. Figure 7-12. LTE request flow latencies Control plane latency: Fixed, one-time latency cost incurred for RRC negotiation and state transitions: <100 ms for idle to active, and <50 ms for dormant to active. User plane latency: Fixed cost for every application packet transferred between the device and the radio tower: <5 ms. Core network latency: Carrier dependent cost for transporting the packet from the radio tower to the packet gateway: in practice, 30–100 ms. Internet routing latency: Variable latency cost between the carrier's packet gateway and the destination address on the public Internet. In practice, the end-to-end latency of many deployed 4G networks tends to be in the 30–100 ms range once the device is in a connected state. So, you have for one request (Figure 8-2. Components of a "simple" HTTP request): RRC negotiation 50-2500 ms DNS lookup 1 RTT TCP handshake 1 RTT (preexisting connection) or 3 RTT (new connection) TLS handshake 1-2 RTTs HTTP request 1-n RTTs And with real data: Table 8-1. Latency overhead of a single HTTP request | 3G | 4G
Control plane | 200–2,500 ms | 50–100 ms
DNS lookup | 200 ms | 100 ms
TCP handshake | 200 ms | 100 ms
TLS handshake | 200–400 ms | 100–200 ms
HTTP request | 200 ms | 100 ms
Total latency overhead | 200–3500 ms | 100–600 ms In addition, if you have an interactive application that you want to perform moderately well on a mobile network, you can experiment with disabling the Nagle algorithm (the kernel waits for data to coalesce into bigger packets instead of sending multiple smaller packets); look for ways to test it at https://stackoverflow.com/a/17843292/869019 . The entire book can be read for free by everyone at https://hpbn.co/ , sponsored by the Velocity Conference. It is a very highly recommended book: not only for people developing websites, it is useful to everyone who serves bytes over some network to a client. | {
"source": [
"https://serverfault.com/questions/387627",
"https://serverfault.com",
"https://serverfault.com/users/2439/"
]
} |
387,682 | I've heard MS SQL Server takes up as much RAM as it can to cache results. Well, it's not leaving enough bargaining room for our little server's RAM. How do I change the settings to limit the amount of RAM it can use? MS SQL Server running on Windows Server 2008. | From How to configure memory options using SQL Server Management Studio : Use the two server memory options, min server memory and max server memory , to reconfigure the amount of memory (in megabytes) managed by the SQL Server Memory Manager for an instance of SQL Server. In Object Explorer, right-click a server and select Properties . Click the Memory node. Under Server Memory Options , enter the amount that you want for Minimum server memory and Maximum server memory . You can also do it in T-SQL using the following commands (example): exec sp_configure 'max server memory', 1024
reconfigure | {
"source": [
"https://serverfault.com/questions/387682",
"https://serverfault.com",
"https://serverfault.com/users/119541/"
]
} |
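The same change scripted via sqlcmd; note that 'max server memory' is an advanced option, so 'show advanced options' may need to be enabled first (the server name and the 1024 MB value are placeholders):
sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory', 1024; RECONFIGURE;"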
387,859 | I see on http://exchange.nagios.org that there are no plugins to check whether sendmail, xinetd, automount, ypserv, ypbind, mailscanner, mcafee, clamav, samba server, and openvpn are running. Of course all these should be stable programs, but they are critical, so I would like to check that they are running. Question: Does there exist a generic plugin to check for specific processes? | I use the standard NAGIOS check_procs plugin, with the -C flag, shown here being invoked from nrpe.cfg via NRPE: command[check_spamd]=/usr/lib/nagios/plugins/check_procs -c 1: -w 3: -C spamd which will WARN if it doesn't find at least three processes with the executable name (not counting path) spamd , and which will CRIT if it doesn't find at least one. | {
"source": [
"https://serverfault.com/questions/387859",
"https://serverfault.com",
"https://serverfault.com/users/34187/"
]
} |
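On the same pattern, hypothetical nrpe.cfg entries for a few of the daemons listed in the question (the executable names may differ by distro/package):
command[check_clamd]=/usr/lib/nagios/plugins/check_procs -c 1: -C clamd
command[check_smbd]=/usr/lib/nagios/plugins/check_procs -c 1: -C smbd
command[check_openvpn]=/usr/lib/nagios/plugins/check_procs -c 1: -C openvpn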
387,935 | I'm new to working in the shell and the usage of these commands seems arbitrary. Is there a reason one flag has a single dash and another might have a double dash? | A single hyphen can be followed by multiple single-character flags. A double hyphen prefixes a single, multicharacter option. Consider this example: tar -czf In this example, -czf specifies three single-character flags: c , z , and f . Now consider another example: tar --exclude In this case, --exclude specifies a single, multicharacter option named exclude . The double hyphen disambiguates the command-line argument, ensuring that tar interprets it as exclude rather than a combination of e , x , c , l , u , d , and e . | {
"source": [
"https://serverfault.com/questions/387935",
"https://serverfault.com",
"https://serverfault.com/users/11659/"
]
} |
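To make the equivalence concrete, the following two invocations of GNU tar do the same thing; the first uses bundled single-character flags, the second the long spellings:
tar -czf archive.tar.gz mydir/
tar --create --gzip --file=archive.tar.gz mydir/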
388,016 | I am working on automating the creation of subversion repositories and associated websites as described in this blog post I wrote . I am running into issues right around the part where I su to the www-data user to run the following command: svnadmin create /svn/repository There is a check at the beginning of the script that makes sure it is running as root or sudo, and everything after that one command needs to be run as root. Is there a good way to run that one command as www-data and then switch back to root to finish up? | Just use su - www-data -c 'svnadmin create /svn/repository' in your script run by root. So that only this command is run by www-data user. Update for future viewers : In case you receive a "This account is currently not available" error, consider using: su - www-data -s /bin/bash -c 'svnadmin create /svn/repository' ( @Petr 's valuable mention about -s flag to accommodate www-data user's no login policy ) | {
"source": [
"https://serverfault.com/questions/388016",
"https://serverfault.com",
"https://serverfault.com/users/98347/"
]
} |
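A sketch of the whole pattern in a script (paths as in the question; the root check at the top is an assumption about the script described there):
#!/bin/bash
[ "$(id -u)" -eq 0 ] || { echo "must run as root" >&2; exit 1; }
# only this one command runs as www-data
su - www-data -s /bin/bash -c 'svnadmin create /svn/repository'
# execution continues here as root for the rest of the script
id -un   # prints 'root'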
389,032 | Going to cut right to the point on this question, as I'm after as diverse a range of solutions as possible so don't want to affect any opinions with the question too much. Client is a UK based company. Organisation is 95% Windows with AD. They have an IT policy of keeping as little infrastructure "on premises" as possible, as such they have a 1Gbps line to a data centre which houses all server infrastructure. UK branches who can't justify a high speed link run a local server and Windows DFS for fast file access with synchronisation - works fine. This company have decided to open an office in Sydney, Australia. Currently they have 20 people in this office, as well as one-man "presences" around the country. They are having issues with both latency and bandwidth accessing the UK. Typical tests from their office yield usually no greater than 4Mbps and 320ms on a good day. The high latency is preventing use of terminal services. They need access to a lot of the same data as the UK staff. We've had quite a few ideas already, but I'd like thoughts on how the users of ServerFault would solve this problem. Feel free to ask questions :) | Welcome welcome welcome to the world of the Internet in Australia. Even in our largest population center, we can struggle to get 3Mbps downstream on a business-class ADSL2+ connection. Cable penetration is poor in residential areas, and even worse in commercial, so unless you're fortunate you can't get cable internet. And because we're such a sparse population spread over such a massive area (4 million people in an area about the size of New York City?) wireless solutions are just as crappy, and expensive because they don't have 18 million potential customers. I'm in the same situation as you (in case you can't tell), where we have users in a different capital city who have 200ms latency between our terminal servers and their office. Solutions. Well, they're all mucky I'm afraid: DFS. You mention you have DFS in your UK branches already. Can these be extended to your Australian office as well? Depending on the size of the folders, it may be a good idea to load up a 2Tb drive with a copy of the DFS root, air-mail it to Sydney, get them to copy it onto their local server and then set up the DFS to sync the changes between the two. Terminal Services. You're sort of screwed here to be honest. High latency does not play well with real-time applications, and apart from changing the laws of physics, if it takes 300ms for the data to get there and back, it will take at least 300ms to register the mouse click, plus about 5 seconds to render whatever context window it opened. BUT, there are things you can do: In terms of bandwidth, a terminal server session consumes about 30Kbps. This is less than a dial-up modem. Citrix consumes about 20Kbps and reportedly has better functionality for dealing with high latency. Lower the colours to 16-bit. Disable drive and printer redirection. Are you having trouble with the server thinking that the clients no longer exist and terminating their sessions? You can increase the number of "failed" contact attempts it takes to drop a session in the registry at [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters] TcpMaxDataRetransmission ; it's hex so 0000000a is 10. Default is 3. Use QoS heavily on the remote networks. You don't need someone's porn/YouTube stopping people from actually getting work done. Citrix has a product that doesn't actually lower the latency, but greatly lowers its noticeability.
For example, it renders entered text client-side before sending it server-side, so it looks like the text has been entered whilst it's still only half-way to its destination. I forget what it's called. Printing over terminal services sucks. Even in 2008 R2 with EasyPrint, the XPS files can become massive. Look into something like ThinPrint or Screwdrivers if you're going to be doing a lot of printing from the terminal server. Choice of internet provider. Australia has two major international link providers. One of them is owned by Telstra, who for many, many years had a total monopoly on the market. They used to be a government-owned company before they were privatised. They still own all the aging, shitty copper lines to the premises, all the exchanges, most of the equipment inside the exchanges and most of the communication between the exchanges. They also held the rest of the country by the balls when it came to international data exchange. Then a few other companies (mainly iiNet and Internode, if I remember correctly) forked out a shitload of money and got their own international link. Try getting a 2nd line from a different ISP and see how it goes. If one line is with Telstra, try iiNet or Internode. If you're already with iiNet/Internode/Optus, try getting a Telstra link (god, I feel dirty just writing that). Steer clear of the low-budget carriers (Dodo, TPG) as they over-sell their services, and although 1Tb of quota a month sounds great, when their core routers are overloaded because they're just Cisco 800's (ok, that's an exaggeration) then you're never going to get good quality of service. Wait. The Australian government is in the process of rolling out a Fibre to the Premises project called the National Broadband Network . If you're not in one of the planned development areas, then you might be in for a long wait (5+ years). But if the office has not been established (sounds like it has though), then if you can get a convenient location inside the NBN rollout, then that could be worth it (it could be established anywhere from 6 months to 3 years though). 100Mbps fibre terminated at your front door should be a pretty good deal. However, if we have a change of government in the next election (which is highly possible) then you can be assured they will can the NBN and replace it with an LTE wireless network which, whilst reasonable for checking emails on your Blackberry and stalking your ex-girlfriend on Facebook, will not be as amazeballs as the NBN. Apart from all of the above, which are bandaids at best, the other option might be to see if the software they're running can be extended to multiple sites. SQL Merge replication is a common one, but the database and software usually have to be designed to take advantage of it. If you can, then perhaps an always-on merge replication and a local terminal server/application is the way to go. | {
"source": [
"https://serverfault.com/questions/389032",
"https://serverfault.com",
"https://serverfault.com/users/87855/"
]
} |
389,197 | I'm trying to use OpenSSL to connect to an SSL server. When I run: openssl s_client -connect myhost.com:443 The following SSL client configurations work just fine: Windows ( OpenSSL 0.9.83e 23 Feb 2007 ) Linux ( OpenSSL 0.9.8o 01 Jun 2010 ) Linux ( OpenSSL 1.0.0-fips 29 Mar 2010 ) Output from any successful connection looks like this: New, TLSv1/SSLv3, Cipher is DES-CBC3-SHA
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : DES-CBC3-SHA
Session-ID: (hidden)
Session-ID-ctx:
Master-Key: (hidden)
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1337266099
Timeout : 300 (sec)
Verify return code: 0 (ok) However, when I use the client on my Ubuntu 12.04 (w/ OpenSSL 1.0.1 14 Mar 2012), I get an error: CONNECTED(00000003)
...:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: How can I proceed on solving this? All tips are much appreciated! | This looks to be a known issue with Ubuntu's 1.0.1 OpenSSL: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/965371 It doesn't look like a fix is available. If possible you could downgrade to 1.0.0. Try openssl s_client -tls1 -connect myhost.com:443 | {
"source": [
"https://serverfault.com/questions/389197",
"https://serverfault.com",
"https://serverfault.com/users/32615/"
]
} |
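A quick way to confirm the suggested workaround, forcing TLSv1 and printing only the negotiated protocol and cipher (hostname as in the question):
openssl s_client -tls1 -connect myhost.com:443 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'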
389,202 | I have discovered that I can set the TTL in Varnish as follows in my VCL file: sub vcl_fetch {
# 1 minute
set obj.ttl = 1m;
} But what is the default setting (assuming the backend server is setting no cache-control header)? | This is in the default template: sub vcl_fetch {
if (beresp.ttl <= 0s ||
beresp.http.Set-Cookie ||
beresp.http.Vary == "*") {
/*
* Mark as "Hit-For-Pass" for the next 2 minutes
*/
set beresp.ttl = 120 s;
return (hit_for_pass);
}
return (deliver);
} So, 120 seconds. | {
"source": [
"https://serverfault.com/questions/389202",
"https://serverfault.com",
"https://serverfault.com/users/92319/"
]
} |
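If in doubt on a running instance, the effective default TTL can also be inspected as a runtime parameter (assuming Varnish 3.x's management CLI):
varnishadm param.show default_ttl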
389,337 | Seems the Public DNS, e.g. ec2-x-x-x-x.compute-1.amazonaws.com will be changed when you stop or terminate the instance. So this means they have the same life span as the EC2 public IP address, so why should I use this public DNS? They are not easy to remember, and rather meaningless. | The public DNS name (whether elastic IP address or not) is exactly the same as using the public IP address (elastic IP or not) with the following important difference: If you query the public DNS name from outside of EC2, it resolves to the public IP address. If you query the public DNS name from inside of EC2, it resolves to the private IP address. You can use this trick with or without Elastic IP addresses. I recommend using Elastic IP addresses as it keeps the public DNS name the same even after stop/start or moving your service to another EC2 instance. Because of this, you can always use the public DNS name of the Elastic IP address and it will resolve to the internal IP address of the current instance to which the Elastic IP is associated. You can extend this by using a CNAME DNS entry where you map your preferred hostname to the external DNS name of the Elastic IP. Here's an article I wrote about using this feature to save money and speed up network performance with internal EC2 communication without having to keep track of the current internal IP address for each instance on all your other instances: http://alestic.com/2009/06/ec2-elastic-ip-internal Other than this one difference, I agree that you might as well use the public IP address instead of the public DNS name because: You save time by not doing a DNS lookup You avoid any security risks that occasionally arise in the DNS protocol. So I suppose, in reality, there are two more differences right there... | {
"source": [
"https://serverfault.com/questions/389337",
"https://serverfault.com",
"https://serverfault.com/users/50774/"
]
} |
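A quick way to observe the split-horizon behaviour described above (the hostname below is a placeholder in the ec2-x-x-x-x style from the question):
dig +short ec2-203-0-113-10.compute-1.amazonaws.com   # from outside EC2: the public IP
# run the same query on an EC2 instance and it returns the 10.x.x.x private address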
389,650 | Is there a canonical way to find out the last time that yum update was run on a system? Our setup is that we have staging servers that run automatic updates, and provided they don't fall over, we will manually update our production servers about once a month (barring critical updates). (I say manually; ideally I want to manually trigger an update across all of them, but that's another issue.) But you get busy, tasks slip etc. So I want to set up a Nagios check that will start bothering us if we've left it too long. Searching the web hasn't got me very far. Poking around the system, the best thing I've found so far would be something like: grep Updated /var/log/yum.log | tail -1 | cut -d' ' -f 1-2 which gives me something like Mar 12 which I can then convert into a date. There are a few minor complications about whether the date is this year or last year, and I'd also need to check /var/log/yum.log.1 in case of checking immediately after a logrotate. But those are just scripting details. This can of course be 'fooled' by an update to a single package rather than a general update. So is there a more canonical way to see when yum update was run? Edit: I've now written a Nagios NRPE plugin that uses the idea I put forward in the question. You can grab it from https://github.com/aptivate/check_yum_last_update | The yum history option allows the user to view what has happened in past transactions. To make it simpler, you can grep for Update in the yum history output: # yum history
Loaded plugins: fastestmirror, refresh-packagekit
ID | Login user | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
41 | root <root> | 2012-04-27 20:17 | Install | 19
40 | root <root> | 2011-11-20 10:09 | Install | 10
39 | root <root> | 2011-11-20 08:14 | Install | 1 E<
38 | root <root> | 2011-11-19 15:46 | Update | 1 | {
"source": [
"https://serverfault.com/questions/389650",
"https://serverfault.com",
"https://serverfault.com/users/629/"
]
} |
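A hedged one-liner building on the history output above, printing just the date of the most recent Update transaction (column layout as shown above; suitable as input for a Nagios-style check):
yum history 2>/dev/null | awk -F'|' '$4 ~ /Update/ {gsub(/^ +| +$/, "", $3); print $3; exit}'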
389,997 | I am trying to use symbolic links. I did some reading and found the following commands: Creation -> ln -s {/path/to/file-name} {link-name}
Update -> ln -sfn {/path/to/file-name} {link-name}
Deletion -> rm {link-name} Creations and deletions work fine. But updates do not work. After performing this command, the symlink becomes invalid. I have read here and there that it is not possible to update/override a symlink. So there is contradictory information on the net. Who is right? If a symlink can be updated/overridden, how can I achieve this? Update Here is my directory structure: ~/scripts/test/
~/scripts/test/remote_loc/
~/scripts/test/remote_loc/site1/
~/scripts/test/remote_loc/site1/stuff1.txt
~/scripts/test/remote_loc/site2/
~/scripts/test/remote_loc/site2/stuff2.txt
~/scripts/test/remote_loc/site3/
~/scripts/test/remote_loc/site3/stuff3.txt From ~/scripts/test/ , when I perform: ln -s /remote_loc/site1 test_link a test_link is created, and I can ls -l it, but it seems broken (contrary to what I said above in my question). How can I create a link that points across multiple directory levels? | OK, I found my error: one should not put a leading / in the path. With the leading slash the target is absolute, resolved from the filesystem root (and /remote_loc/site1 does not exist there); without it, the target is resolved relative to the directory containing the link. In other words, the commands in my question should be: Creation -> ln -s {path/to/file-name} {link-name}
Update -> ln -sfn {path/to/file-name} {link-name} instead of Creation -> ln -s {/path/to/file-name} {link-name}
Update -> ln -sfn {/path/to/file-name} {link-name} considering my case. | {
"source": [
"https://serverfault.com/questions/389997",
"https://serverfault.com",
"https://serverfault.com/users/66861/"
]
} |
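A sketch of the corrected workflow from the question's directory (relative targets resolve against the directory containing the link):
cd ~/scripts/test
ln -s remote_loc/site1 test_link     # create: resolves to ~/scripts/test/remote_loc/site1
ln -sfn remote_loc/site2 test_link   # update the existing link in place
readlink -f test_link                # verify where the link now points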
390,012 | I have yet to see a system whose default configuration enables MMU and directed I/O virtualization. Often this necessitates rebooting and going into the BIOS to enable it if you want, e.g., 64-bit support on your VMs. Is there some kind of substantial processor overhead that occurs if this is switched on and you're not using virtualization? If not, then what's the reason for it being off by default? | There were some proof-of-concept rootkits like Blue Pill a while back that could own a system with VT on. After this discovery, most vendors began shipping their units with VT disabled as a general security precaution. | {
"source": [
"https://serverfault.com/questions/390012",
"https://serverfault.com",
"https://serverfault.com/users/69693/"
]
} |
390,202 | This message appears when I log in to my machine: There is 1 zombie process. What is it telling me? Is this anything I should worry about? If yes, then what should I do, and how? | There's nothing to worry about: Zombie
process that has completed execution but still has an entry in the
process table, allowing the process that started it to read its exit
status. In the term's colorful metaphor, the child process has died
but has not yet been reaped. When a process ends, all of the memory and resources associated with
it are deallocated so they can be used by other processes. However,
the process's entry in the process table remains. The parent is sent a
SIGCHLD signal indicating that a child has died; the handler for this
signal will typically execute the wait system call, which reads the
exit status and removes the zombie. The zombie's process ID and entry
in the process table can then be reused. However, if a parent ignores
the SIGCHLD, the zombie will be left in the process table. In some
situations this may be desirable, for example if the parent creates
another child process it ensures that it will not be allocated the
same process ID. Source : http://wiki.answers.com/Q/What_is_Zombie_Process_and_Orphan_Process | {
"source": [
"https://serverfault.com/questions/390202",
"https://serverfault.com",
"https://serverfault.com/users/43274/"
]
} |
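If you want to see which process is the zombie and which parent has not reaped it yet, a standard ps invocation works (zombies show a Z state and usually appear as '<defunct>'):
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'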
390,757 | When I open vim for a file like /etc/nginx/sites-available/default , syntax highlighting works fine. But then if I create my own file /etc/nginx/sites-available/myapp , vim does not highlight its syntax. I have to do :setf conf every time. Is there anything I can put in ~/.vimrc to tell vim "if you don't know which syntax to use, just use conf " ? A .vimrc template for a vim noob is also welcome. I'm not using it as an IDE, I use vim mostly for config files only. Note: I'm using Ubuntu 12, in case it matters. | The following line in ~/.vimrc should do this. autocmd BufRead,BufNewFile /etc/nginx/sites-*/* setfiletype conf | {
"source": [
"https://serverfault.com/questions/390757",
"https://serverfault.com",
"https://serverfault.com/users/101521/"
]
} |
390,840 | I recently installed tomcat via an installation script from the apache solr typo3 community and spent the last 3 days trying to figure out why it won't work, until by chance I noticed that when I queried the process listening on the port via " lsof -i ", it was bound to the ipv6 protocol. I have googled everywhere and most say that setting address to 0.0.0.0 in the tomcat connector resolves this issue; others say setting JAVA_OPTS="-Djava.net.preferIPv4Stack=true" . I have tried the former, which doesn't work, but for the latter I am unsure of where to put it. One solution I read somewhere suggested putting it in setenv.sh, but I can't find this file in my tomcat installation. I would appreciate any help regarding this. The tomcat version is 6.x and the OS is Ubuntu 11.10. Thanks | Many suggested updating the catalina.sh startup script. Yes, that solution would work, but the catalina.sh script is not meant to be customized/updated. All changes should go into the customization script instead, i.e. setenv.sh . NOTE: TOMCAT_HOME/bin/setenv.sh doesn't exist by default; you need to create it. Check the catalina.sh script and you will see that the startup script checks whether setenv.sh exists, and executes it if it does. So, I suggest you create a new TOMCAT_HOME/bin/setenv.sh script with a single line: JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true " | {
"source": [
"https://serverfault.com/questions/390840",
"https://serverfault.com",
"https://serverfault.com/users/67144/"
]
} |
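A sketch of creating the file from the shell; the TOMCAT_HOME path below is a placeholder that depends on how Tomcat was installed:
TOMCAT_HOME=/usr/share/tomcat6   # adjust for your installation
cat > "$TOMCAT_HOME/bin/setenv.sh" <<'EOF'
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true"
EOF
chmod +x "$TOMCAT_HOME/bin/setenv.sh"   # then restart Tomcat to apply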
390,988 | In our team we have three seasoned Linux sysadmins having to administer a few dozen Debian servers. Previously we have all worked as root using SSH public key authentication. But we had a discussion on what is the best practice for that scenario and couldn't agree on anything. Everybody's SSH public key is put into ~root/.ssh/authorized_keys2 Advantage: easy to use, SSH agent forwarding works easily, little overhead Disadvantage: missing auditing (you never know which "root" made a change), accidents are more likely Using personalized accounts and sudo That way we would log in with personalized accounts using SSH public keys and use sudo to do single tasks with root permissions. In addition we could give ourselves the "adm" group that allows us to view log files. Advantage: good auditing, sudo prevents us from doing idiotic things too easily Disadvantage: SSH agent forwarding breaks, it's a hassle because barely anything can be done as non-root Using multiple UID 0 users This is a very unique proposal from one of the sysadmins. He suggests creating three users in /etc/passwd all having UID 0 but different login names. He claims that this is not actually forbidden and allows everyone to be UID 0 while still being able to audit. Advantage: SSH agent forwarding works, auditing might work (untested), no sudo hassle Disadvantage: feels pretty dirty - couldn't find it documented anywhere as an allowed way What would you suggest? | The second option is the best one IMHO. Personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred servers and half a dozen system admins; this is how we do it. How does agent forwarding break exactly? Also, if it's such a hassle using sudo in front of every task, you can invoke a sudo shell with sudo -s or switch to a root shell with sudo su - | {
"source": [
"https://serverfault.com/questions/390988",
"https://serverfault.com",
"https://serverfault.com/users/6990/"
]
} |
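A sketch of the moving parts behind the recommended setup (the 'sudo' group name is the Debian convention; adjust names for your distro):
adduser alice && usermod -aG sudo alice   # personal account with sudo rights
# in /etc/ssh/sshd_config, disable direct root logins:
#   PermitRootLogin no
# sudo then logs every command via syslog (e.g. /var/log/auth.log), which provides the auditing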
391,181 | What is the difference between a 302 and 303 response? http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html 10.3.3 302 Found 10.3.4 303 See Other Are these interchangeable, or why would one be used over the other? Could you please provide a use case of when one would be used (and the other would not)? | The description on the page to which you linked seems to be fairly descriptive of their intended purpose: A 302 redirect indicates that the redirect is temporary -- clients should check back at the original URL in future requests. A 303 redirect is meant to redirect a POST request to a GET resource (otherwise, the client assumes that the request method for the new location is the same as for the original resource). If you're redirecting a client as part of your web application but expect them to always start at the web application (for example, a URL shortener), a 302 redirect seems to make sense. A 303 redirect is for use when you are receiving POST data from a client (e.g., a form submission) and you want to redirect them to a new web page to be retrieved using GET instead of POST (e.g., a standard page request). But see this note from the status code definitions -- most clients will do the same thing for either a 302 or 303: Note: RFC 1945 and RFC 2068 specify that the client is not allowed
to change the method on the redirected request. However, most
existing user agent implementations treat 302 as if it were a 303
response, performing a GET on the Location field-value regardless
of the original request method. The status codes 303 and 307 have
been added for servers that wish to make unambiguously clear which
kind of reaction is expected of the client. | {
"source": [
"https://serverfault.com/questions/391181",
"https://serverfault.com",
"https://serverfault.com/users/78503/"
]
} |
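A quick way to observe what a server actually sends: POST to an endpoint and print only the status line and Location header (the URL is a placeholder):
curl -s -o /dev/null -D - -d 'name=value' http://example.com/submit | grep -iE '^(HTTP|Location)'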
391,255 | Exactly what the title says. I'm not having much luck finding the proper documentation to see what -xe does in the following use case: #!/bin/bash -xe What do those parameters do, and where are they documented? | If you read the man page for bash you'll find the following at the top of the OPTIONS section: All of the single-character shell options documented in the
description of the set builtin command can be used as options when the
shell is invoked. In addition, bash interprets the following options
when it is invoked... And if you read the documentation for the set command later on in the man page, you'll find: -e Exit immediately if a pipeline (which may consist of a
single simple command), a subshell command enclosed in parentheses,
or one of the commands executed as part of a command list enclosed by
braces (see SHELL GRAMMAR above) exits with a non-zero status.
-x After expanding each simple command, for command, case
command, select command, or arithmetic for command, display
the expanded value of PS4, followed by the command and its
expanded arguments or associated word list. In other words, -e makes the shell exit immediately whenever
something returns an error (this is often used in shell scripts as a
failsafe mechanism), and -x enables verbose execution of scripts so
that you can see what's happening. | {
"source": [
"https://serverfault.com/questions/391255",
"https://serverfault.com",
"https://serverfault.com/users/19432/"
]
} |
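A tiny demonstration script for the two flags together (the failing command is deliberate):
#!/bin/bash -xe
echo "step 1"   # -x prints each command before running it
false           # exits non-zero, so -e aborts the script here
echo "never reached"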
391,370 | Traditionally, all anti-virus programs and IPS systems work using signature-based techniques. However, this doesn't help much to prevent zero-day attacks. Therefore, what can be done to prevent zero-day attacks? | I think you acknowledge an interesting sys-admin truth there, which is that unless you can reduce the probability of being hacked to zero then eventually, at some point, you are going to get hacked. This is just a basic truth of maths and probability: for any non-zero probability of an event, the event eventually happens... So the 2 golden rules for reducing the impact of this "eventually hacked" event are these: The principle of least privilege You should configure services to run as a user with the least possible rights necessary to complete the service's tasks. This can contain a hacker even after they break into a machine. As an example, a hacker breaking into a system using a zero-day exploit of the Apache webserver service is highly likely to be limited to just the system memory and file resources that can be accessed by that process. The hacker would be able to download your HTML and PHP source files, and probably look into your MySQL database, but they should not be able to get root or extend their intrusion beyond apache-accessible files. Many default Apache webserver installations create the 'apache' user and group by default and you can easily configure the main Apache configuration file (httpd.conf) to run apache using those groups. The principle of separation of privileges If your web site only needs read-only access to the database, then create an account that only has read-only permissions, and only to that database. SELinux is a good choice for creating security contexts; AppArmor is another tool. Bastille was a previous choice for hardening. Reduce the consequence of any attack by separating the power of the service that has been compromised into its own "box". Silver rules are also good: Use the tools available. (It's highly unlikely that you can do as well as the guys who are security experts, so use their talents to protect yourself.) Public key encryption provides excellent security. Use it. Everywhere. Users are idiots: enforce password complexity. Understand why you are making exceptions to the rules above. Review your exceptions regularly. Hold someone to account for failure. It keeps you on your toes. | {
"source": [
"https://serverfault.com/questions/391370",
"https://serverfault.com",
"https://serverfault.com/users/63449/"
]
} |
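As one concrete instance of the separation-of-privileges point above, a hypothetical read-only MySQL account for a site that only reads data (account name, database name and password are placeholders):
mysql -u root -p -e "CREATE USER 'webro'@'localhost' IDENTIFIED BY 'change-me'; GRANT SELECT ON mysite.* TO 'webro'@'localhost';"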
391,457 | I'm working on some basic apache configuration, but I don't understand precisely how apache merges different <Location> sections when several of them match an incoming request's URL. The apache documentation in its "How the sections are merged" chapter is a little bit confusing when it comes to the order/priority of several matching sections of the same type. For example, imagine the following apache configuration (ignore whether the actual contents make sense or not, I'm only interested in the application order of each rule/section): <Location / >
ProxyPass http://backend.com/
Order allow,deny
Satisfy any
</Location>
<Location /sub/foo>
Order allow,deny
</Location>
<Location /sub >
Order deny,allow
Require valid-user
Satisfy all
</Location>
<Location /doesnt/match >
ProxyPass !
</Location> Now if a client makes a request to /sub/foobar , which is the final configuration that will be applied to this request? Is the applied configuration the equivalent of: # All the directives contained in all the matching Locations in declaration order
ProxyPass http://backend.com/
Order allow,deny
Satisfy any
Order allow,deny
Order deny,allow
Require valid-user
Satisfy all or maybe # same as above, but with longest matching path last
ProxyPass http://backend.com/
Order allow,deny
Satisfy any
Order deny,allow
Require valid-user
Satisfy all
Order allow,deny or something completely different. Thanks for your help, I'm really confused. | The order of merging is fairly complicated, and it's easy to be caught out by exceptions... The apache doc is " How the sections are merged " According to that documentation, the order of merging of sections is done by processing all of the matching entries for each match type in the order that they are encountered in the configuration files, with the last match winning (with the exception of <Directory>, which is treated in order of path specificity). The order of types is Directory , DirectoryMatch , Files , and finally Location . Later matches overwrite earlier matches. (*ProxyPass and Alias are treated differently again, see note at end) And there are several important exceptions to these rules that apply to using ProxyPass, and ProxyPass in a <Location> section. (see below) So from your example above, requesting http://somehost.com/sub/foobar with the following config;
ProxyPass http://backend.com/
Order allow,deny
Satisfy any
</Location>
<Location /sub/foo>
Order allow,deny
</Location>
<Location /sub >
Order deny,allow
Require valid-user
Satisfy all
</Location>
<Location /doesnt/match >
ProxyPass !
</Location> It would accumulate the following directives .... ProxyPass http://backend.com/
Order allow,deny
Satisfy any
Order allow,deny
Order deny,allow
Require valid-user
Satisfy all With the later matches eliminating the previous duplicates, resulting in; ProxyPass http://backend.com/
Order deny,allow
Require valid-user
Satisfy all Explanation Later matches overwrite earlier matches with the exception of <Directory> where matches are processed in the order: shortest directory component to longest. So for example, <Directory /var/web/dir> will be processed before <Directory /var/web/dir/subdir> regardless of what order those directives were specified in the config, and the more specific match wins. Any matching Location directive will always override a previously matching Directory directive. The basic idea is that for a request like GET /some/http/request.html internally it will be translated to a location in the filesystem via an Alias , ScriptAlias or for a normal file location under the DocumentRoot for the VirtualHost that it matched. So a request will have the following properties which it uses for matching: Location: /some/http/request.html
File: /var/www/html/mysite/some/http/request.html
Directory: /var/www/html/mysite/some/http Apache will then apply in turn all of the Directory matches, in the order of directory specificity, from the config, and then in turn apply DirectoryMatch , Files , and finally Location matches in the order in which they are encountered. So Location overrides Files , which overrides DirectoryMatch , with paths matching Directory at the lowest priority. Hence in your example above, a request to /sub/foobar would match the first 3 Location in order, hence the last one wins for conflicting directives. (You are right that it is not clear from the docs how some of the edge cases are resolved, its possible that any allow from * type directives would be connected to the associated Order allow,deny , but I didn't test that. Also what happens if you match Satisfy Any but you have previously collected an Allow from * ...) interesting note about ProxyPass and Alias Just to be annoying, ProxyPass and Alias appears to work in the other direction.... ;-) It basically hits the first match, then stops and uses that! Ordering ProxyPass Directives
The configured ProxyPass and ProxyPassMatch rules are
checked in the order of configuration.
The first rule that matches wins. So
usually you should sort conflicting ProxyPass rules starting with the
longest URLs first. Otherwise later rules for longer URLS will be
hidden by any earlier rule which uses a leading substring of the URL.
Note that there is some relation with worker sharing.
For the same reasons exclusions must come before the general
ProxyPass directives. so basically, Alias and ProxyPass directives have to be specified, most specific first; Alias "/foo/bar" "/srv/www/uncommon/bar"
Alias "/foo" "/srv/www/common/foo" and ProxyPass "/special-area" "http://special.example.com" smax=5 max=10
ProxyPass "/" "balancer://mycluster/" stickysession=JSESSIONID|jsessionid nofailover=On However, as @orev has pointed out. You can have a ProxyPass directive in a Location directive, and so a more specific ProxyPass in a Location will beat out any previously found ProxyPass. | {
"source": [
"https://serverfault.com/questions/391457",
"https://serverfault.com",
"https://serverfault.com/users/49950/"
]
} |
391,538 | If a logrotate config is specified with "size" and "daily" parameters, which one takes precedence? Where is this documented? I would like these rotations to occur as a boolean OR operation, i.e., if the logs are a day old they get rotated, OR if they are larger than a certain size they will also get rotated. However, logrotate is currently only using the "size" directive, and appears to be ignoring the "daily" directive. Logrotate is set up to run every hour. OS is linux, Red Hat and Debian derivatives. Also, I am specifying "daily" first, then "size" from the start of the file. Not sure if the order matters, but in any case, one has to come first in the config file... Thanks! | If the size directive is used, logrotate will ignore the daily , weekly , monthly , and yearly directives. This is not clear in the documentation when you execute the man logrotate command. However it can be confirmed in practice, and is mentioned in some arbitrary blog posts such as this one . There is a directive called minsize which according to the logrotate man page is the only size directive that can be used in conjunction with the time ones. However, it's still not what you want. Using minsize with daily essentially says: rotate the logs daily, but only when they are at least #MB in size . To date I have found no way with logrotate to do the condition you require: rotate every day, unless the size exceeds #MB, in which case rotate immediately . I do not think this is supported using only logrotate directives. It might be possible to do with some clever scripting via the script hook directives like prerotate , postrotate , firstaction , and lastaction . Update : As of logrotate 3.8.1, maxsize and timeperiod are supported together, which would be the ideal solution. See the answer to this post: How to rotate log based on an interval unless log exceeds a certain size?
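With logrotate 3.8.1 or newer you can express the OR condition directly. A minimal sketch (the log path is a placeholder):
/var/log/myapp/test.log {
    daily
    maxsize 100M
    # rotates daily, or as soon as the file exceeds 100M, whichever comes first
    rotate 14
    compress
    missingok
}
| {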
"source": [
"https://serverfault.com/questions/391538",
"https://serverfault.com",
"https://serverfault.com/users/107102/"
]
} |
391,554 | I have an nginx web server acting as a reverse proxy to forward requests on to Apache for additional handling (I'm begging you not to ask why). I have a request to which I'm trying to attach a custom header and I'd like for nginx to forward that custom header along to Apache so I can do something with it in an app. I've poked through the HttpProxyModule docs, but they're not very descriptive even if I'm in the right place (it very well could be that I'm not). How can I get nginx to forward an X-CUSTOM-REFERRER header? Moreover, if possible, I'd like it to forward along any custom header that comes in. If the latter is too much to ask, the former would suffice for my current need. As you can see, I'm very new to nginx, so the remedial version would be helpful. Thanks. UPDATE The relevant snippet from my existing config: location / {
proxy_pass http://preview;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Custom-Referrer $x_custom_referrer;
} | The proxy_set_header directive from the HttpProxyModule allows you to do this. For example: proxy_pass http://apachehost;
proxy_set_header X-Custom-Referrer $http_x_custom_referrer; Nginx exposes each incoming request header as an $http_ variable (name lowercased, with dashes turned into underscores), so this echoes the client's X-Custom-Referrer header to the backend. Note that nginx also passes unrecognised request headers through to the upstream by default, so the explicit directive mainly matters if something else in the config clears or overrides the header. | {
"source": [
"https://serverfault.com/questions/391554",
"https://serverfault.com",
"https://serverfault.com/users/39879/"
]
} |
391,555 | I have a network over about 20 computers that need to be maintained from a remote location. Would the best way be to setup software from say LogMeIn on one of the systems and then configure all the other systems to only allow remote access from that one central system? What I need to do is be able to manage all 20 something systems from a remote location with a limited budget so the best idea that came to mind was using a central system with 3rd party software and once logged into it I can use Windows Remote Desktop to access the other systems. | {
"source": [
"https://serverfault.com/questions/391555",
"https://serverfault.com",
"https://serverfault.com/users/112405/"
]
} |
391,621 | I need to, from a bash script, check to see if certain Ruby gems are installed . I thought I could do something like if ! gem list <name>; then do_stuff; fi but testing on the command line using echo $? shows that gem list <name> returns 0 regardless of if name is actually found. Does this mean I have to use grep to filter the output of gem list, or is there a better way I can check to see if a gem is installed? | gem list <name> -i will return the string true if the gem is installed and false otherwise. Also, the return codes are what you would expect. For more information, see gem help list . Edit: @Riateche correctly observed that this might give false positives if you search for a gem name that is a substring of an otherwise installed gem. To avoid this, use a regex syntax: gem list '^<name>$' -i (Example: gem list '^mini$' -i ).
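A sketch of how this fits back into the bash script from the question, with rails as a stand-in gem name:
if gem list '^rails$' -i > /dev/null; then
    do_stuff            # the gem is present; exit status was 0
else
    gem install rails   # exit status was 1, the gem is missing
fi
| {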
"source": [
"https://serverfault.com/questions/391621",
"https://serverfault.com",
"https://serverfault.com/users/121905/"
]
} |
391,735 | I've noticed that nmap only scans a bunch of known ports, and the only way I've managed to check 'em all is to put a "-p 0-65535" in. Why is that? Am I wrong? Is there a more popular way to scan all ports aside from what I've done? | By default, Nmap scans the top 1000 most popular ports, according to the statistics generated from Internet-wide scans and large internal network scans from the summer of 2008. There are a few options that change this: -F reduces the number to 100, -p allows you to specify which ports to scan, and --top-ports lets you specify how many of the most popular ports to scan. This means that the default scan is equivalent to --top-ports 1000 , and -F is the same as --top-ports 100 . These numbers were set in version 4.75, and were a change from the roughly 1700 (TCP) ports that were the default in version 4.68. The purpose was to decrease scanning times while still giving reasonable results. The flexibility of Nmap's command-line options guarantees that you can still scan just about any combination of ports that you want, regardless of the defaults. Scanning all 65536 TCP ports is still possible with -p0- , but it will take a very long time. Scanning all UDP ports with -sU -p0- will take even longer, because of the way that open ports are detected.
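A few concrete invocations for comparison (target.example.com is a placeholder):
nmap target.example.com                    # default: top 1000 TCP ports
nmap -F target.example.com                 # fast scan: top 100 ports
nmap --top-ports 5000 target.example.com   # top 5000 ports
nmap -p0- target.example.com               # every TCP port, 0-65535
| {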
"source": [
"https://serverfault.com/questions/391735",
"https://serverfault.com",
"https://serverfault.com/users/121682/"
]
} |
391,985 | We have been using a company to administer our (small office) IT infrastructure. We don't have complete records of what has been done or hasn't been done and don't know what we need to ask for in order to pick this up ourselves. Is there a good "checklist of things to make sure you get" for people in this situation? (Windows OS product keys, installation media(?), domain controller admin password, etc.) | All passwords (for all devices, applications and accounts). All records relating to software, licensing and media (including purchase order history/proof). All media (including installation media and live data backups). All documentation, including server, hardware, network (including IP addresses) and operating system configurations. Details of processes and procedures (e.g. adding users, creating new mailboxes, etc.) Information on any automated / manually triggered jobs (backups, housekeeping, etc.) Review the documentation beforehand and have them make improvements if you have questions or find missing/old information. Details of any third party contracts they might have taken out for support that you may need to take on / take out yourself (e.g. hardware maintenance). Any physical / VPN / secure access items (badges, keys, tokens, fobs, etc.) Information about telecom accounts Logins to any websites you might need (download software, open support cases, etc.) Account information related to domain name registrations, details of registrars used, etc. Copies of any security certificates, and the relevant key phrases. Ensure the old supplier also destroys their copies. And once this is all done - change all the passwords. | {
"source": [
"https://serverfault.com/questions/391985",
"https://serverfault.com",
"https://serverfault.com/users/99969/"
]
} |
391,995 | I used the mailq command and got a line like, for example: A705238B4C 603953 Wed May 23 11:09:58 [email protected] So now I'm wondering: is there a way I can "read" the actual content of the mail by its id A705238B4C? | The best way is to make use of the postcat command. postcat -q A705238B4C At least on the system I can look at right now, /var/spool/postfix is the master directory. Subdirectories of that which matter include active , deferred , bounce , etc. Queued files may be stored using the full file name ( A705238B4C ) or with some level of hashing depth ( A/7/05238B4C ). | {
"source": [
"https://serverfault.com/questions/391995",
"https://serverfault.com",
"https://serverfault.com/users/80429/"
]
} |
392,216 | In order to assess performance monitoring accuracy on virtualization platforms, the CPU steal time has become an increasingly relevant metric - see EC2 monitoring: the case of stolen CPU for an instructive summary in the context of Amazon EC2 and IBM's paper on CPU time accounting for a more in-depth technical explanation (including illustrations) of the concept: Steal time is the percentage of time a virtual CPU waits for a real
CPU while the hypervisor is servicing another virtual processor. Accordingly, it is exposed in most related Unix/Linux monitoring tools nowadays - see e.g. columns %steal or st in sar or top : st -- Steal Time The amount of CPU 'stolen' from this virtual machine by the hypervisor
for other tasks (such as running another virtual machine). I've been unable to figure out how to capture the same metric on Windows though, is this possible already? (Ideally for the Windows 2008 Server R2 AMIs on EC2 and via a respective Windows Performance Counters of course.) | Edit: Updating on Oct. 1 2013 - Some of my original answer has since become obsolete. I'm not sure if you're still active on this site or that you'll see this, but I wanted you to know that I read this question today and it fascinated me, and so I spent all day (when I should have been working) researching Hyper-V and Windows internals and even digging in to the very concepts of virtualization itself in hopes that I might be ready to answer your question. Let me preface by saying that I am coming from the point of view of Hyper-V as a virtualization platform because that is where I have the most experience. Even though there may be certain tenets of virtualization, as we know it, that cannot be deviated from, Microsoft and VMware and Xen all have different strategies for how they design their hypervisors. That's the first thing that makes your question challenging. You pose your question as if it were hypervisor-agnostic, when in truth it is not. Amazon EC2, for example, uses the Xen hypervisor, and the "CPU Steal Time" metric that you see in the output of a top command issued from within a Linux VM running on that hypervisor is a result of the integration services installed on that guest OS (or virtualization-aware tools on the guest) in conjunction with data provided by that specific hypervisor. First off let me just answer your question straight up: There is no way to see from inside a virtual machine running Windows how much time the processors belonging to the physical machine on which the hypervisor runs spends doing other things, unless the particular virtual tools/services or virtualization-aware tools for your particular hypervisor are installed in the guest VM and the particular hypervisor on which the guest is running exposes that data to the guest. Even a Windows guest running on a Hyper-V hypervisor will not have immediate access to information regarding the time spent that the physical processors on the hypervisor were doing other things. (To quote voretaq7, something that "breaks the fourth wall.") Even though Windows client and server operating systems running as virtualized guests in Hyper-V with the correct integration services/tools installed make use of "enlightenments" (which are literally kernel code alterations made especially for VMs) that significantly increase their performance in using the resources of a physical host, the bottom line is that the hypervisor does not have to give any more information to the guest OS than it wants to. That means the hypervisor does not have to tell a guest VM what else it is doing besides servicing that VM... unless it wants to. And that information about what else the physical processors are doing is necessary for deriving a metric from the perspective of the VM such as "CPU Steal Time: the percentage of time the vCPU waits for a physical CPU." How could the guest OS know that, if it didn't even realize that it was actually virtualized? In other words, without the right integration tools installed on the guest, the guest OS won't even know that its CPU is actually a v CPU. It won't even know that there is another force outside of itself "stealing" CPU cycles from it, therefore that metric will not exist on the guest VM. 
VMware has begun to expose this data to Windows guests as well as of ESXi 5.0. VMware integration tools also need to be updated on the guest. Here is a reference ; they refer to it as "CPU Stolen Time". A hypervisor such as Hyper-V does not give guests direct access to physical resources such as physical processors or processor cores. Instead the hypervisor gives them vDevs - virtual devices - such as vCPUs. A prime example of why: Say a virtual machine guest OS makes the call to flush the TLB (translation look-aside buffer) which is a physical component of a physical CPU. If the guest OS was allowed to clear the entire TLB on a physical processor, that would have negative performance effects for all the other VMs that were also sharing that same physical TLB. In the case of Windows, that call in the guest OS is translated into a "hypercall" or "enlightened" call which is interpreted by the hypervisor so that only the section of the TLB that is relevant to that virtual machine is flushed. (Interestingly, that hints to me that guest VMs that do not have the proper integration tools and/or services could have the ability to impact the performance of all the other VMs on the same host, but that is completely outside the scope of this topic.) All that to say that you can still detect in a Hyper-V host the time that a virtual processor spent waiting for a real processor to become available so that it could scheduled to run. But you can only see that data on a Windows Hyper-V hypervisor. If it is possible to see this in other hypervisors, I urge others to tell us how to see this in that hypervisor and also if it is exposed to the guests. (Edit 10/1/2013 Thank you evilensky for doing just that!) My test machine was Hyper-V Server 2012, which is the free edition of Server 2012 that only runs Core and the Hyper-V role. It's effectively the same as any Windows Server 2012 running Hyper-V. Fire up Perfmon on your parent partition, aka physical host. Load this counter: Hyper-V Hypervisor Virtual Processor\CPU Wait Time Per Dispatch\* You will notice that there will be an instance of that counter for each virtual machine on that hypervisor, as well as _Total. The Microsoft definition of that Perfmon counter is: The average time (in nanoseconds) spent waiting for a virtual processor to be dispatched onto a logical processor. Obviously, you want that number to be as low as possible. For computers, waiting is almost never a good thing. Other performance counters on the hypervisor that you will want to investigate are Hyper-V Hypervisor Root Virtual Processor\% Guest Run Time , % Hypervisor Run Time , and % Total Run Time . These counters provide you with the percentages that could be used to determine facts such as how much time the "real" processors spend doing things other than servicing a VM or all VMs. So in conclusion, the metric that you are looking for in a guest virtual machine depends on the hypervisor that it is running on, whether that hypervisor chooses to provide the data about how it spends its time other than servicing that VM, and if the guest OS has the right virtualization integration tools/services/drivers to be aware enough to realize that the hypervisor is making that data available. I know of no way on a Windows guest, integration tools installed or not, to see how much time, in terms of seconds or percentage, that VM's host has spent servicing it or not servicing it respective to the total physical processor time. 
(Edit 10/1/2013: ESXi 5.0 or better exposes this data to the guest VM through the integration tools. Still nothing on Hyper-V though.)
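If you would rather sample those counters from PowerShell than the Perfmon GUI, a sketch to run on the Hyper-V host (counter paths assume an English-language OS):
Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\CPU Wait Time Per Dispatch'
Get-Counter '\Hyper-V Hypervisor Root Virtual Processor(_Total)\% Total Run Time'
| {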
"source": [
"https://serverfault.com/questions/392216",
"https://serverfault.com",
"https://serverfault.com/users/10305/"
]
} |
392,415 | I need to find all .pem files on my system. Would the following do this? sudo find / -type f -name *.pem If not, how would I write a find command to find every file of the sort? | You're on the right track -- you just need to quote the pattern so that it gets interpreted by find and not by your shell: sudo find / -type f -name '*.pem' | {
"source": [
"https://serverfault.com/questions/392415",
"https://serverfault.com",
"https://serverfault.com/users/78503/"
]
} |
392,433 | How can I find all the SQL servers installed in our network that were not installed by a DBA? Meaning, someone else has installed the SQL server and we need to get the details like SQL server version, instance name and port number so it can be added to our monitoring scripts. | {
"source": [
"https://serverfault.com/questions/392433",
"https://serverfault.com",
"https://serverfault.com/users/122309/"
]
} |
392,561 | I've requested the carpet to be removed from my server room; a disaster was waiting to happen: the air con leaked and the carpet UNDER the rack was soaked. (The air con unit is 0.3 meters from the rear of the servers.) They have asked if we can have some sort of lining thrown over the carpet as it is a rented building. If so, what material or other recommendations do you have? | First of all, carpet in a server room is also very dangerous when it's dry, because moving around on it generates static electricity that is very unsafe for computers. If you can, remove the carpet and buy some kind of special anti-static lining/mat. If you cannot remove the carpet, then buy some anti-static lining and cover the carpet with it. If you have the money and the need, a raised floor will be an even better choice than lining. | {
"source": [
"https://serverfault.com/questions/392561",
"https://serverfault.com",
"https://serverfault.com/users/121246/"
]
} |
393,417 | I created an Amazon Free tier Usage Account. I launched two amazon ec2 instances using the online tool. After that, one instance was created and running while the other was pending, which quickly shifted to the terminated state. In the description it shows State Transition Reason: Server.InternalError: Internal error on launch Is there anywhere I could restart the terminated instance or remove it from the table? It looks very annoying. | Terminated instances will go away after a few hours. There is nothing you can do to manually remove them. Not to worry, you won't get billed for it. | {
"source": [
"https://serverfault.com/questions/393417",
"https://serverfault.com",
"https://serverfault.com/users/115714/"
]
} |
393,532 | I'm using Nginx to serve static files in response to CORS requests using the technique outlined in this question . However, when the file doesn't exist the 404 response does not contain the Access-Control-Allow-Origin: * header and so is blocked by the browser. How can I send Access-Control-Allow-Origin: * on 404 responses? | Even though this was asked long ago: I used to compile nginx with an extra module for this, but with newer versions of nginx I found I don't have to custom compile at all; all I needed was to add the always directive. http://nginx.org/en/docs/http/ngx_http_headers_module.html Syntax: add_header name value [always]; If the always parameter is specified (1.7.5), the header field will be added regardless of the response code. So a tuned version of CORS headers: if ($cors = "trueget") {
# Tells the browser this origin may make cross-origin requests
# (Here, we echo the requesting origin, which matched the whitelist.)
add_header 'Access-Control-Allow-Origin' "$http_origin" always;
# Tells the browser it may show the response, when XmlHttpRequest.withCredentials=true.
add_header 'Access-Control-Allow-Credentials' 'true' always;
}
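For the literal case in the question, a minimal sketch that always sends a wildcard origin, even on 404s (requires nginx 1.7.5+ for always; the location is a placeholder):
location /static/ {
    # "always" makes nginx add the header on 4xx/5xx responses too
    add_header 'Access-Control-Allow-Origin' '*' always;
    try_files $uri =404;
}
| {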
"source": [
"https://serverfault.com/questions/393532",
"https://serverfault.com",
"https://serverfault.com/users/102317/"
]
} |
393,862 | I've got a MySQL Master/Slave setup and I've noticed the following warnings in the mysql log files on both servers: [Warning] IP address 'xxx.xxx.xxx.xxx' could not be resolved: Name or service not known I've checked and the DNS lookups work fine, and most of these IPs are from China. I'm planning to limit access on port 3306 on the firewall; however, could you please help me understand what they are trying to do? Are they just trying to connect to the MySQL server? Where can I look for some more details? Thanks | When you create a MySQL user such as 'someuser'@'example.com', MySQL has to do a reverse lookup on every IP address connecting to it to determine whether they are part of example.com . Of course, there's no restriction on creating reverse lookups, so I can quite happily ask my provider to set the reverse lookup for my IP address to be google.com if I want... or example.com if I happen to know that's what the users in your database have. This won't let me in, as MySQL then does a forward lookup on the returned domain to make sure it matches the same IP address that's connecting. You can switch this off with skip_name_resolve in your my.cnf . There are many good reasons for doing this . The reason you are getting this error is that the IP address in question has no reverse lookup at all. You also have malicious attackers from China trying to brute force their way into your database. That should be your top priority.
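A minimal sketch of turning the lookups off; note that once this is set, any GRANTs defined by hostname (e.g. 'someuser'@'example.com') stop matching, so convert them to IP-based grants first:
# /etc/my.cnf (or /etc/mysql/my.cnf on Debian/Ubuntu)
[mysqld]
skip_name_resolve
| {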
"source": [
"https://serverfault.com/questions/393862",
"https://serverfault.com",
"https://serverfault.com/users/120102/"
]
} |
393,957 | I am setting up an app to be hosted using VMs (probably Amazon, but that is not set in stone) which will require both HTTP load balancing and load balancing a large number (50k or so if possible) of persistent TCP connections. The amount of data is not all that high, but updates are frequent. Right now I am evaluating load balancers and am a bit confused about the architecture of HAProxy. If I use HAProxy to balance the TCP connections, will all the resulting traffic have to flow through the load balancer? If so, would another solution (such as LVS or even nginx_tcp_proxy_module) be a better fit? | HAProxy (like many load balancers) generally maintains two conversations. The proxy has a session (tcp in this case) with the client, and another session with the server. Therefore with proxies you end up seeing 2x the connections on the load balancer. Therefore all traffic flows through the load balancer. When it comes to scaling across multiple load balancers I don't think you need to. But a practical and fairly easy way to do this is use something like keepalived with two floating IPs and round robin DNS between those two IPs. With keepalived, if one of the load balancers goes down the other would hold both IPs, so you get high availability this way (see the keepalived sketch after the HAProxy config below). That being said, I think you will be fine with one active haproxy instance with your load. HAProxy scales very well. As an example, the Stack Exchange network uses web sockets which maintain open TCP connections. While I am posting this we have 143,000 established TCP sockets on a VMware virtual machine with no issues. The CPU usage on the VM is around 7%. With this sort of setup with HAProxy make sure you set maxconn high enough. Here is some example HAProxy config to get you started: frontend fe_websockets
bind 123.123.123.123:80
mode tcp
log global
option tcplog
timeout client 3600s
backlog 4096
maxconn 50000
default_backend be_nywebsockets
backend be_nywebsockets
mode tcp
option log-health-checks
option redispatch
option tcplog
balance roundrobin
server web1 10.0.0.1:1234
server web2 10.0.0.2:1234
timeout connect 1s
timeout queue 5s
timeout server 3600s
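And a hedged sketch of the keepalived side mentioned above (the interface, router IDs and 192.0.2.x VIPs are placeholders). Each box is MASTER for one VIP and BACKUP for the other, so normally each holds one floating IP and takes both if its peer dies:
# /etc/keepalived/keepalived.conf on LB1 (swap states/priorities on LB2)
vrrp_instance VIP_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.0.2.10
    }
}
vrrp_instance VIP_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        192.0.2.11
    }
}
| {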
"source": [
"https://serverfault.com/questions/393957",
"https://serverfault.com",
"https://serverfault.com/users/122875/"
]
} |
394,257 | I seem to have a few installations of postgresql on my machine somehow. I'm not sure if this is a mistake or if Ubuntu for some odd reason duplicates directories and keeps them elsewhere. I have a postgresql directory in /etc one in /usr/lib and one in /opt I'm properly confused at this point. How do I go about deleting the extra ones? Which ones are the extra ones? I also need to make sure that my 'pg' gem in my rails env is pointing towards the correct postgresql db server. Any thoughts on my issue would be huge. | There can be some confusion over what people mean by an installation. The /etc/postgresql/ folder is the config folder for your clusters. The /var/lib/postgresql/ folder is for data. The program binaries for each version are in separate folders usually in /usr/lib/postgresql/ . I really don't know about /opt/postgresql as I don't have that on mine. But /opt is for "optional" binaries, so it's possible that your installation is here instead of /usr/lib/postgresql/ . In short, I think you may just have one installation which has files in multiple locations. If you want to look at what you have installed, this may help: How postgresql is structured: To make things a little clearer postgres is structured as follows: A version literally refers to which version of the postgresql
program binaries. Each installed version may have
a cluster installed under that version. If not then that version is
effectively dormant as it has no data or running server associated. Under each version there may be a number of clusters. You can think
of the cluster as a running postgres server (process). Each cluster
has to have its own port/socket file for clients to connect to. Each
cluster will be managed by a single version. Inside each cluster will be a number of databases. When a client
connects it selects a DB to connect to. It can ask to change which
DB it's connected to without opening a new session, but it can only
ever be connected to one. What have you got installed? To find out which versions are installed you can look to dpkg and apt . You should be able to uninstall versions using apt and dpkg , but be very careful not to do this before you've checked what clusters are under each version. To find out what clusters you have use the command pg_lsclusters . When I call this I get the following, you will get something different: Version Cluster Port Status Owner Data directory Log file
9.1 main 5432 online <unknown> /var/lib/postgres/data/9.1/main /var/log/postgresql/postgresql-9.1-main.log Pay careful attention to the "Status" column. If a cluster is not online then it's just data on disk and is doing nothing. If it is online then it is running. How do you merge clusters? You can copy the content from one cluster to another using the pg_dumpall command to generate a backup and use psql to import it to the cluster you want to keep (see the sketch at the end of this answer). It's worth keeping backups of everything before you start. How do you remove a cluster that is no longer used? Use pg_lsclusters to get the details about the clusters and note the
data directory and log file for those clusters. Use pg_ctlcluster <version> <cluster>
stop to stop the cluster. Remove the data folder and optionally the log file. Finally remove the data and config. The data folder should be /var/lib/postgres/data/<version>/<cluster> but check the output of pg_lsclusters to be sure of this. The config for the cluster: All clusters will have
their own config folder in /etc/postgresql/<version>/<cluster>/ . Why did you get multiple clusters if you never asked for them? Usually you have to specifically request a cluster to be created to get a new one. The only exception to this is when you upgrade a cluster, it will effectively create a new one and leave the old one in place.
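A sketch of the dump-and-restore merge mentioned above, assuming the cluster being retired listens on port 5432 and the one being kept on 5433 (check pg_lsclusters for the real ports):
pg_dumpall -p 5432 > all-clusters.sql      # dump every database from the old cluster
psql -p 5433 -f all-clusters.sql postgres  # replay the dump into the surviving cluster
| {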
"source": [
"https://serverfault.com/questions/394257",
"https://serverfault.com",
"https://serverfault.com/users/123068/"
]
} |
394,590 | I'm unsure about the differences in these storage interfaces. My Dell servers all have SAS RAID controllers in them and they seem to be cross-compatible to an extent. The Ultra-320 SCSI RAID controllers in my old servers were simple enough: One type of interface (SCA) with special drives with special controllers, humming at 10-15K RPM. But these SAS/SATA drives seem like the drives I have in my desktop, only more expensive. Also my old SCSI controllers have their own battery backup and DDR buffer - neither of these things are present on the SAS controllers. What's up with that? "Enterprise" SATA drives are compatible with my SAS RAID controller, but I'd like to know what advantage SAS drives have over SATA drives as they seem to have similar specs (but one is a lot cheaper). Also, how do SSDs fit into this? I remember when RAID controllers required HDDs to spin at the same rate (as if the controller card supplanted the controller in the drive) - so how does that work out now? And what's the deal with Near-line SATA? I apologise about the rambling tone in this message, it's 5am and I haven't slept much. | This has been covered here... See the related links on the right pane of this question. Right now, the market conditions are such that you should try to use SAS disks everywhere you can. Enterprise SAS disks (performance-optimized) are your fastest and most resilient rotating media, available at 10,000 and 15,000 RPM. Nearline or Midline SAS (capacity-optimized) are usually mechanically equivalent to 7,200 RPM SATA disks, but feature a SAS interface and offer the benefits of the SAS protocol . They are available in higher capacities than enterprise SAS disks. They have a slight price premium over the same sized SATA drives. Solid-state disks (SSDs) are a higher tier of storage and shouldn't be compared directly with rotating media. Their main characteristic is higher random read and write performance, but the details are beyond the scope of this question. Also see: SAS or SATA for 3 TB drives? How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? | {
"source": [
"https://serverfault.com/questions/394590",
"https://serverfault.com",
"https://serverfault.com/users/20707/"
]
} |
394,804 | This is a Canonical Question about Active Directory DNS Settings. Related: What is Active Directory Domain Services and how does it work? Assuming an environment with multiple domain controllers (assume that they all run DNS as well): in what order should the DNS servers be listed in the network adapters for each domain controller? Should 127.0.0.1 be used as the primary DNS server for each domain controller? Does it make any difference, if so what versions are affected and how? | According to this link and the Windows Server 2008 R2 Best Practices Analyzer, the loopback address should be in the list, but never as the primary DNS server. In certain situations like a topology change, this could break replication and cause a server to be "on an island" as far as replication is concerned. Say that you have two servers: DC01 (10.1.1.1) and DC02 (10.1.1.2) that are both domain controllers in the same domain and both hold copies of the ADI zones for that domain. They should be configured as follows: DC01
Primary DNS 10.1.1.2
Secondary DNS 127.0.0.1
DC02
Primary DNS 10.1.1.1
Secondary DNS 127.0.0.1 | {
"source": [
"https://serverfault.com/questions/394804",
"https://serverfault.com",
"https://serverfault.com/users/10472/"
]
} |
394,815 | I am running into issues where the CA bundle that has been bundled with my version of cURL is outdated. curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html Reading through the documentation didn't help me because I didn't understand what I needed to do or how to do it. I am running RedHat and need to update the CA bundle. What do I need to do to update my CA bundle on RedHat? | For RHEL 6 or later, you should be using update-ca-trust , as lzap describes in his answer below. --- For older versions of Fedora, CentOS, and Red Hat: Curl uses the system-default CA bundle, which is stored in /etc/pki/tls/certs/ca-bundle.crt . Before you change it, make a copy of that file so that you can restore the system default if you need to. You can simply append new CA certificates to that file, or you can replace the entire bundle. Are you also wondering where to get the certificates? I (and others) recommend curl.se/ca . In one line: curl https://curl.se/ca/cacert.pem -o /etc/pki/tls/certs/ca-bundle.crt Fedora Core 2 location is /usr/share/ssl/certs/ca-bundle.crt . | {
"source": [
"https://serverfault.com/questions/394815",
"https://serverfault.com",
"https://serverfault.com/users/10126/"
]
} |
395,342 | Is there anything that enables a "telnet-like" functionality for UDP? I know the difference between TCP and UDP, and why telnet itself won't work - but I'm wondering if there is something similar to the telnet client, from the end-user perspective. E.g. udp-telnet [ip] [sending-port] [receiving-port] which then prints out whether a packet made it back or not. Having a tool like this would prove helpful for testing out firewall settings for OpenVPN which uses UDP connections. | You can use netcat - just start it, type something inside, and press the return key. nc -u <host> <port> And on the other side you can listen with netcat too (you should see the written text), or just start a tcpdump, and see packets coming in.
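For the listening side, a sketch using 1194 (OpenVPN's default port); the flag syntax differs between netcat variants:
nc -u -l -p 1194    # traditional netcat: -p names the local port
nc -u -l 1194       # OpenBSD netcat: no -p when listening
| {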
"source": [
"https://serverfault.com/questions/395342",
"https://serverfault.com",
"https://serverfault.com/users/78061/"
]
} |
395,990 | Six months ago, in our not-for-profit project we decided to start migrating our system management to a Puppet-controlled environment because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision has been made our IT guys have become a bit too annoyed a bit too often. Their biggest objections are: "We're not programmers, we're sysadmins"; Modules are available online but many differ from one another; wheels are being reinvented too often, how do you decide which one fits the bill; Code in our repo is not transparent enough, to find how something works they have to recurse through manifests and modules they might have even written themselves a while ago; One new daemon requires writing a new module, conventions have to be similar to other modules, a difficult process; "Let's just run it and see how it works" Tons of hardly known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track? I can see why a large organisation would dispatch their sysadmins to Puppet courses to become Puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor? | I started using Puppet ahead of deploying a new infrastructure and simply bought a ( well-regarded ) book on the topic. I don't think most people actually obtain professional Puppet training. I worked on examples until I could mold the process to my environment. This was December 2011, so within a few weeks, I was able to understand the basics and get a production framework in place. I wasn't new to configuration management, having a CFEngine background, but many of your sysadmins' concerns resonate. I made mistakes and had to refactor several times, but I did get things working satisfactorily. A few notes on your points... The traditional systems administration role is changing. Adapt or be left behind. I've been a successful systems engineer, but am having to retool as well (learning Python, for example). The focus on individual servers is diminished as hardware abstraction through virtualization and public and private cloud services gain traction. This means automation of systems tasks and the use of configuration management to wrest control of a larger number of servers. Add DevOps concepts to the mix, and you'll see that the customer/end-user expectations and requirements are changing. Puppet modules available online differ in style and structure and yes, I saw lots of overlap, redundancy and duplicated efforts. One developer I worked with said, "you could have developed your own tools in the time you spent looking online for something that works!" That gave me pause as I realized that Puppet seems to appeal more to developer types than admins looking for a best-practices or the right way approach. Document heavily in order to get a feel for how things are connected. Given the shaky definitions and lack of a standard way of doing things, your configuration management structure really is unique to your environment. That transparency is going to have to be developed within. I'd argue that it's reasonably easy to duplicate a module to accommodate a new daemon or add a service to an existing manifest, depending on how you've organized your servers and roles. I spent a lot of time testing on a single target before pushing changes to larger groups of servers. 
Running puppetd by hand on a representative server allowed me to debug changes and assess their impact. Maybe that's a bit conservative, but it was necessary. I'm not sure how much I'd depend on community modules. I did have to start using Augeas for some work , and lamented the fact that it was a functionality I took for granted in CFEngine. In all, I feel like there isn't a well-defined standard when it comes to Puppet. I had trouble figuring out how to organize directory structure on my Puppetmaster, understanding how to manage certificate signing, getting proper reverse DNS in place everywhere, getting Puppet to scale appropriately for the environment and understanding when to leverage community modules versus building my own. It's a shift in thinking and I see how that would make a sysadmin panic. However, this was also solution built from scratch, so I had the luxury of evaluating tools. The decision to go this way was based on mindshare and the momentum behind Puppet. It was worth the effort to learn something new. Remember, this site is a good resource, too. | {
"source": [
"https://serverfault.com/questions/395990",
"https://serverfault.com",
"https://serverfault.com/users/93703/"
]
} |
396,136 | How can I forward messages from a specific log file like /www/myapp/log/test.log with the rsyslog client to a remote rsyslog server? This log file is outside of the directory /var/log . | Just set up an imfile rule in your /etc/rsyslog.conf #/etc/rsyslog.conf
$ModLoad imfile
$InputFileName /data/mysql/error.log
$InputFileTag mysql-error
$InputFileStateFile stat-mysql-error
$InputFileSeverity error
$InputFileFacility local3
$InputRunFileMonitor
local3.* @@hostname:<portnumber> This watches a file and saves to the local3 facility in syslog. Then you can send all data from the local3 facility to your remote server. You may also want to add the following to your rsyslog conf (usually /etc/rsyslog.d/50-default.conf on Ubuntu) to not save the local3 facility to /var/log/syslog: #/etc/rsyslog.d/50-default.conf
*.*;auth,authpriv.none,local1.none,local2.none,local3.none,local4.none,local5.none,local6.none -/var/log/syslog Additionally, I would encourage some reading from the following rsyslog docs for more advanced filtering: The Property Replacer Filter Conditions | {
"source": [
"https://serverfault.com/questions/396136",
"https://serverfault.com",
"https://serverfault.com/users/21323/"
]
} |
396,595 | My question is in the subject. I have a domain, this is the nginx config for it: server {
listen 80;
server_name connect3.domain.ru www.connect3.domain.ru;
access_log /var/log/nginx/connect3.domain.ru.access.log;
error_log /var/log/nginx/connect3.domain.ru.error.log;
root /home/httpd/vhosts/html;
index index.html index.htm index.php;
location ~* \.(avi|bin|bmp|css|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|js|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|pps|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
root /home/httpd/vhosts/html;
access_log off;
expires 1d;
}
location ~ /\.(git|ht|svn) {
deny all;
}
location / {
#rewrite ^ http://connect2.domain.ru/;
proxy_pass http://127.0.0.1:8080/;
proxy_redirect off;
proxy_hide_header "Cache-Control";
add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
proxy_hide_header "Pragma";
add_header Pragma "no-cache";
expires -1;
add_header Last-Modified $sent_http_Expires;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
} I need to proxy the connect3.domain.ru host to connect2.domain.ru, but without the URL changing in the browser's address bar. My commented-out rewrite line could solve this problem, but it's just a rewrite, so I cannot stay with the same URL. I know that this question is easy, but please help. Thank you. | You set: proxy_set_header Host $host; You want: proxy_set_header Host connect2.domain.ru; | {
"source": [
"https://serverfault.com/questions/396595",
"https://serverfault.com",
"https://serverfault.com/users/72269/"
]
} |
396,722 | At our office, all of our Windows 7 Clients get this error message when we try and RDP to a remote Windows 2008 Server outside of the office: Your system administrator does not allow the user of saved credentials to
log on to the remote computer XXX because its identity is not fully verified.
Please enter new credentials A quick google search leads to some posts they all suggest I edit group policy, etc. I'm under the impression, that the common fix for this, is to follow those instructions on every Windows 7 machine. Is there any way that I can do something via the Active Directory which could update all Windows 7 clients in the office LAN? | If you don't want to change local or server side GPOs: Go to Control Panel -> Credential Manager on the local computer you are trying to connect from . You will see three sections: Windows Credentials Certificate-Based Credentials Generic Credentials Remove the credentials from Windows Credentials and add it to Generic Credentials . | {
"source": [
"https://serverfault.com/questions/396722",
"https://serverfault.com",
"https://serverfault.com/users/58/"
]
} |
396,768 | Recently we had a problem where one of the ext4 file-systems seemed unable to handle a very large number of files, more than 6mln in this case, in spite of having enough space. Is 6mln the max number of files an ext4 file-system can have when formatted with all the default settings? I tried to Google it but didn't get any definitive answer. Can anyone out here shed some light on this please? Cheers!! | There is no default as such for ext4, it depends on the size of the device and the options chosen at creation. You can check the existing limits using tune2fs -l /path/to/device For example, root@xwing:~# tune2fs -l /dev/sda1
tune2fs 1.42 (29-Nov-2011)
Filesystem volume name: <none>
Last mounted on: /
[lots of stuff snipped]
Inode count: 1277952
Free inodes: 1069532
Inodes per group: 8192
Inode blocks per group: 512
[lots of stuff snipped] As per man mkfs.ext4 -i bytes-per-inode Specify the bytes/inode ratio.
mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The
larger the bytes-per-inode ratio, the fewer inodes will be
created. This value generally shouldn't be smaller than the blocksize
of the filesystem, since in that case more inodes would be made than
can ever be used. Be warned that it is not possible to expand the
number of inodes on a filesystem after it is created, so be careful
deciding the correct value for this parameter. | {
"source": [
"https://serverfault.com/questions/396768",
"https://serverfault.com",
"https://serverfault.com/users/74048/"
]
} |
396,958 | I would like to configure a nameserver that will return the same IP address ("A" record) for any arbitrary host name. For example: example.com subdomain.example.com someotherdomain.com anyotherdomain.co.uk should all return the same IP address. Is there a way to do this with BIND? Or is there an alternative to BIND that can do this? | With BIND, you need a fake root zone to do this. In named.conf , put the following: zone "." {
type master;
file "/etc/bind/db.fakeroot";
}; Then, in that db.fakeroot file, you will need something like the following: @ IN SOA ns.domain.com. hostmaster.domain.com. ( 1 3h 1h 1w 1d )
IN NS <ip>
* IN A <ip> With that configuration, BIND will return the same IP address for all A queries. | {
"source": [
"https://serverfault.com/questions/396958",
"https://serverfault.com",
"https://serverfault.com/users/74299/"
]
} |
397,350 | What will happen when an ARP Request packet is sent from router1 to router2 in the following two cases? Will an ARP Reply be generated or the ARP Request packet be dropped? [router1]Intf1(20.0.0.1/24) ======== (40.0.0.1/24)Intf2[router2] [router1]Intf1(20.0.0.1/24) ======== (20.0.0.2/8) Intf2[router2] The topology above has a port "Intf1" on router "router1" connected to a port "Intf2" on another router "router2" via a direct link (eg, a 1 Gbps cable). | ARP only works between devices in the same IP subnet. When device A with IP address A needs to send a packet to device B with IP address B, the first thing it does is consult its routing table to determine if IP address B belongs to a subnet it can directly reach through its network interface(s); if it does, then device A uses ARP to map IP address B to a physical Ethernet address, and then sends an Ethernet frame to that address. But if the two IP addresses are on different subnets, the device will follow a completely different logic: it will look in its routing table for a route to the destination network, and then it will send its packet to the appropriate router (or to its default gateway if no more specific route is present); in this scenario, ARP will be used to find the hardware address of the router , because the destination IP address has already been deemed to not be directly reachable, so the packet must be delivered to a router which can take care of it. | {
"source": [
"https://serverfault.com/questions/397350",
"https://serverfault.com",
"https://serverfault.com/users/124195/"
]
} |
397,420 | I have a home file server running FreeNAS 8. A few days ago I used rsync to upload my entire iTunes library from Mac so that I could load my library over the network instead of off a slow USB drive. This mostly worked, and iTunes runs much better now, but I'm running into issues accessing any songs that have non-ascii characters in it (I first noticed the problem when loading Queensrÿche tracks). The files would show up in the Finder, but any attempt to access them made them vanish until I reconnected to the server. After some research I found out this is because OSX uses a different UTF character order from Linux. OSX filesystems use Unicode Normalization Form D (NFD), where linux uses Form C (NFC). Rsync doesn't convert these forms when it performs the copy from my mac to the server, now when iTunes tries to access a file with a special character over the network, the files on the server have the wrong encoding and afpd reports they don't exist. What is the best way to address this problem? Is it possible to make rsync perform the unicode conversion while uploading the base library to the server? Can I configure afpd to transmit/receive filenames in NFD format? Is there an easy solution to change the filenames on the server? I found some stuff about a program named convmv, but I don't know if I can run that on FreeNAS. | You can use rsync's --iconv option to convert between UTF-8 NFC & NFD, at least if you're on a Mac. There is a special utf-8-mac character set that stands for UTF-8 NFD. So to copy files from your Mac to your NAS, you'd need to run something like: rsync -a --iconv=utf-8-mac,utf-8 localdir/ mynas:remotedir/ This will convert all the local filenames from UTF-8 NFD to UTF-8 NFC on the remote server. The files' contents won't be affected. | {
"source": [
"https://serverfault.com/questions/397420",
"https://serverfault.com",
"https://serverfault.com/users/124222/"
]
} |
397,762 | I need to create folders starting from 00 to 99 (00, 01, 02, 03, etc....) in several hundred places. Is there a single line command that will let me do that? | mulaz's answer is correct, but many people say seq is evil because most shells will let you do the following mkdir {00..99} However in some older versions of bash, 0-9 aren't padded, so you would have to do mkdir 0{0..9} {10..99} | {
"source": [
"https://serverfault.com/questions/397762",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
397,973 | When encrypting a file to send to a collaborator, I see this message: gpg: using subkey XXXX instead of primary key YYYY Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted towards my subkey instead of my primary key. For me, this doesn't appear to be a problem; gpg (1.4.x, macosx) just handles it & moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key. I've tried to do some reading, and I have Michael Lucas's "GPG & PGP" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but I assumed that was about public vs private keys at first. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying, yes, I trust this key. While I was doing that, I noticed the primary & subkeys had different "usage" notes: primary: usage: SCA
subkey: usage: E "E" seems likely to mean "Encryption". But, I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools & techniques for some years now, so why would this only be a problem for me? | Update While the original post below correctly explains why you might want to use separate encryption and signing keys, it does not do a great job answering the question of why you use subkeys instead of the primary key. The Debian Wiki provides a much more thorough answer . To summarize, your primary key is your online identity, and your identity and reputation are built up by having other people vouch for that key being yours by signing it themselves. (You might think of it like being your Twitter handle, and your reputation being your Twitter followers, or you might object to that analogy, but I hope it gives you some sense of why you want to protect it.) So, since your primary key is super important and is built up over years, you want to protect it very much. In particular, you do not want to store it on a computer that might get stolen or hacked into; you want to keep your primary key off-line in a safe place. This, of course, makes your primary key very inconvenient to use. So for day-to-day operations you want to use a key that is not such a big problem to replace if it gets compromised. This is why subkeys were invented. A subkey is still a public/private key pair and is secure as long as only you have the private key. It is, cryptographically, just as secure as your primary key. The difference is that your reputation is only attached to it by your own signature, the signature from your private key. To use the Twitter analogy, the world trusts that you are your Twitter handle because all your followers say so (I know, it doesn't always really work that way, but analogies are hard!), and then with that trust established you can then much more easily convince the world you own your Instagram handle by just tweeting it and people will believe you because the tweet came from your account, which they trust. You still want to keep your subkey safe, but now if it is compromised it is not the huge problem it would be if your primary key were compromised (or, in the analogy, someone hijacked your Twitter account). Now you can just revoke the subkey and issue a new one by signing a revocation certificate and a new subkey and posting them both on your public keyring (like tweeting "hey, my Instagram handle changed, don't use the old one, use this one instead"). This makes keeping your subkey on your laptop computer a more acceptable risk than keeping your primary key on it. TL;DR Subkeys make key management much easier by separating the cryptographic functions of public keys from the trust and identity functions of (primary) public keys. Original post If you look into the details of the math of public-key encryption, you will discover that signing and decrypting are actually identical mathematical operations . Thus in a naïve implementation it is possible to trick somebody into decrypting a message by asking them to sign it. Several things are done in practice to guard against this. The most obvious is that you never sign an actual message, instead you sign a secure hash of the message. Less obviously, but just to be extra safe, you use different keys for signing and encrypting. Also, keeping the encryption key separate allows you to keep the other arguably more important and definitely less frequently used keys off-line and more secure. 
That is the case with the keys you have inspected. By the way the flags mean: e = encrypt/decrypt (decrypt a message you received encrypted for you to read) s = sign (sign data. For example a file or to send signed e-mail) c = certify (sign another key, establishing a trust-relation) a = authentication (log in to SSH with a PGP key; this is relatively new usage) Note that in all cases, "key", means a public & private key pair. This should not be a problem for your friend if he has everything set up correctly, but setting up everything correctly can be more complex than it should be. So it may be that the best solution is for your friend to generate a new public-key that he uses for both signing and encrypting. Like I said, because the primary defense against attack is to only sign a cryptographically secure hash of a message, it is not a serious weakness to have such a key. | {
"source": [
"https://serverfault.com/questions/397973",
"https://serverfault.com",
"https://serverfault.com/users/12212/"
]
} |
398,187 | How to delete all characters in one line after "]" with sed ? I'm trying to grep some file using cat, awk. Now my oneliner returns me something like 121.122.121.111] other characters in logs from sendmail.... :) Now I want to delete everything after the "]" character (including the "]" itself). I want only 121.122.121.111 in my output. I was googling for that particular example of sed but didn't find any help in those examples. | echo "121.122.121.111] other characters in logs from sendmail...." | sed 's/].*//' So if you have a file full of lines like that you can do sed 's/].*//' filename | {
"source": [
"https://serverfault.com/questions/398187",
"https://serverfault.com",
"https://serverfault.com/users/71114/"
]
} |
398,196 | Quite often, when reading about the recommended worker_processes for nginx it is stated that this should be set to the number of cores the server hosting nginx has. We were wondering if we should count the number of HT cores for this as well? Or do we just count the number of true physical cores? Thanks! | {
"source": [
"https://serverfault.com/questions/398196",
"https://serverfault.com",
"https://serverfault.com/users/95749/"
]
} |
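As a reference point for the question above (an assumption on my part, not taken from an original answer): modern nginx versions can size the worker pool automatically, and the count they use is the number of logical CPUs the OS reports, which includes hyper-threaded cores:
# nginx.conf sketch: size workers from the OS-reported CPU count
# (logical cores, so HT threads are counted).
worker_processes auto;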
398,203 | We have a lot of users who complain that their PC is "slow" (we use Windows XP). We usually check startup programs, viruses, fragmentation, disk health and common problems that cause slowness (Symantec AV dropping the disk to 1 MB/s, or a Seagate HD firmware error in certain models), but in those cases the slowness is pretty evident. On the other hand, the most common case is a user complaining about a PC that looks OK to us, even on 6-year-old desktops. People sometimes even complain about the speed of their new quad-core desktops! So, we are asking if there is a way to OBJECTIVELY check that a computer has not dropped in performance, compared with similar machines or previous measurements, especially for office work (I don't think a 3DMark benchmark or similar will help).
The only thing I found that was useful is HDTune, but it only checks hard disk performance. Basically, what we want is something that enables us to say to our users: "See? Your PC is as slow as it was three years ago! Stop complaining! It's all in your head!" | | {
"source": [
"https://serverfault.com/questions/398203",
"https://serverfault.com",
"https://serverfault.com/users/99844/"
]
} |
398,234 | I am using this simple command to monitor connections (to deal with some recent DoS attacks) on my Debian server: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n How do I run it continuously? So it will refresh itself once per minute (or any given amount of time, of course). I tried watch: watch -n 30 "netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n" But it changed the output from a nice list with the number of connections to something like this: 1 tcp 0 10015 [LOCAL IP]
...
1 Proto Recv-Q Send-Q Local Address Foreign Address State
1 Active Internet connections (w/o servers) So external IP is not being displayed. Is there something I missed? This is how the original output looks: 2 [IP ADDRESS]
4 [IP ADDRESS]
4 [IP ADDRESS]
4 [IP ADDRESS]
7 [IP ADDRESS]
16 [IP ADDRESS]
71 [IP ADDRESS] And when I say [LOCAL IP] I mean my machine's IP. When I run it with -c it just freezes. | netstat -c may help you, if I've not misunderstood your problem. -c stands for --continuous. EDIT:
There you go: watch -n 30 "netstat -ntu | awk '{print \$5}' | cut -d: -f1 | sort | uniq -c | sort -n" I've added a \ before the $. | {
"source": [
"https://serverfault.com/questions/398234",
"https://serverfault.com",
"https://serverfault.com/users/124523/"
]
} |
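An equivalent way to sidestep the escaping issue from the answer above is to single-quote the whole command given to watch, so your interactive shell never touches the $5; the awk program then protects its own $ from the sh that watch spawns:
watch -n 30 'netstat -ntu | awk "{print \$5}" | cut -d: -f1 | sort | uniq -c | sort -n'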
398,355 | You can create a user that has privileges like root , and it's home directory will fall under /home/username . Why does root get its own folder at the top level of the file system? Is this just convention, a security concern, or is there a performance-related reason? | One reason: On many systems, /home is on a separate partition (or network share) that might fail to mount and it is a good idea to allow root to login with his usual environment whenever possible. | {
"source": [
"https://serverfault.com/questions/398355",
"https://serverfault.com",
"https://serverfault.com/users/88876/"
]
} |
398,361 | I'm doing some audit automation; this example describes checking the version of Java, even though other programs do the same thing. The output of "java -version" goes to STDERR, which is easily redirected, but I want to send shell errors (for example, when the java binary is missing) to /dev/null. It seems that shell redirection is an all-or-nothing proposition. My most promising attempt so far has been: { /bin/ksh "{java -version 2>&1;}"; } 2>/dev/null ...which properly sends the output of the -version command to STDOUT, but if java isn't there, it sends the shell "not found" error to STDOUT as well. I don't want to see that message. Same behavior with: { /bin/ksh "{java -version 2>&1;}" 2>/dev/null; } Does anyone know a way to limit the scope of redirection so each process gets its own? I'm not limited to ksh, but for environment reasons it's got to be a shell-based one-liner. | | {
"source": [
"https://serverfault.com/questions/398361",
"https://serverfault.com",
"https://serverfault.com/users/124555/"
]
} |
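One hedged sketch for the goal stated above (not taken from an original answer): probe for the binary first, so the shell's own "not found" message is never produced, while the -version output still lands on STDOUT:
# POSIX one-liner: silently skip if java is absent, otherwise print its version.
command -v java >/dev/null 2>&1 && java -version 2>&1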
398,514 | Say I have a command foo which takes a filename argument: foo myfile.txt . Annoyingly, foo doesn't read from standard input. Instead of an actual file, I'd like to pass it the result of another command (in reality, pv , which will cat the file and output a progress meter as a side effect). Is there a way to make this happen? Nothing in my bag of tricks seems to do it. ( foo in this case is a PHP script which I believe processes the file sequentially). I'm using Ubuntu and Bash EDIT
Sorry for the slightly unclear problem description, but here's the answer that does what I want: pv longfile.txt | foo /dev/stdin Very obvious now that I see it. | If I understand what you want to do properly, you can do it with bash's process substitution feature: foo <(somecommand | pv) This does something similar to what the mkfifo -based answers suggest, except that bash handles the details for you (and it winds up passing something like /dev/fd/63 to the command, rather than a regular named pipe). You might also be able to do it even more directly like this: somecommand | pv | foo /dev/stdin | {
"source": [
"https://serverfault.com/questions/398514",
"https://serverfault.com",
"https://serverfault.com/users/68259/"
]
} |
398,779 | We are going through an RFP process of changing hosting companies for most of our servers (~10 fairly powerful workhorses and database servers). When the existing company was picked I wasn't at the company, nor have I worked with hosting companies in the past (Always had hardware on site in previous companies). We will be doing site tours for each of the companies over the next few weeks. What type of things do you normally look for? Questions to ask their on site staff, etc? Anything that can help me evaluate and compare. Most of the hosting companies maintain VMware farms with DR sites connected via fiber. | It's a good thing that you're thinking about what questions to ask your hosting company, but I think you're approaching it backwards. First figure out your requirements , and then ask each company how their infrastructure will meet them . When they're explaining how their infrastructure meets your needs don't be afraid to ask questions , and if you aren't satisfied with the answers you're getting don't be afraid to insist on having someone relatively upper-level give you a good explanation -- You are giving the hosting company good money, and if their sales guy can't explain things to your satisfaction insist on a network engineer or someone from their Datacenter Operations team to explain things. In addition to what everyone else has mentioned, some other things to consider (geared toward colocation - hosting your hardware at someone else's facility): General Is the facility clean? Is the house cabling neat and orderly? If they use cable trays, look up -- Things should be neat, and strapped down with velcro ties. (NO plastic zip ties, NO tape) If they route all cabling through the floor ask to look in the underfloor. (They may say no. If they say yes stick your head down there and look around. Again, all cabling should be neat and bundled with velcro ties. Suspended cable trays are important here to allow airflow. See cooling.) Network What providers do they have for their uplinks? Where (physically) are their network uplinks? Is their network core redundant? Do they provide redundant access drops to your rack? Power What kind of UPS systems does the facility have? How long can they hold the load? What kind of generators does the facility have? How often are the generators tested? How much fuel is on site? What are the fueling provisions in the event of an emergency? How much power can you draw in your rack? (Circuit capacity, cooling capacity). Do they provide diverse redundant power to your rack (circuits from separate UPS banks)? Cooling Is the cooling truly N+1 Redundant? (If they lose an air conditioner will the room stay at temperature?) Are they using a Hot-Aisle/Cold-Aisle layout? (They should be. If not, worry.) Bonus points if they have containment to keep the hot (or cold) air where it belongs. Is there adequate pressure in the floor? (Assuming they use a traditional down-flow cooling system that blows cold air into the floor, stand at
a perforated tile near an air conditioner, then at one as far from the AC unit as you can get -- The breeze should be relatively even) Does the room feel hot?
(Obviously standing in hot aisles it will, but how is it near the door? In cold aisles?) Does the room feel wet? Do they take advantage of "free" cooling? (air-side economizers, heat wheels, etc?) Security and Access Do you have 24x7x365 access to the facility? (You should!) How is that access controlled (Thumbprint? Keycard? See the man at the desk?) Monitoring Do they offer monitoring? (and do you want it?) Ask to see their facility monitoring system (they might say no, but if it's a really slick system they might want to show off) Managed Services (If you want them) What services are included? Typical selections include: Basic monitoring (ping) Advanced monitoring (SNMP, services, etc.) Remote Hands (you call and we type what you tell us to) How many hours of consulting/troubleshooting service are included in your base price? Is patching service included? If yes, how do they handle patching (software used, scheduling, etc.)? Disaster recovery I put this last because it's really a minimal concern -- Datacenters spend money making themselves very reliable and robust in the face of subsystem/component failures. Disaster Recovery in the sense of "what happens if my datacenter goes away" is best addressed by having another datacenter, so the questions I ask are along those lines: Do they have an off-site facility where customers can host cold- or warm-standby racks? Do they have enough bandwidth for you to host a cold/warm standby rack with another provider and replicate everything you need? Does the facility itself have solid plans to recover from system/component failures? Sometimes it's fun to play what-if: "What if the entire northeast lost power for a week?" | {
"source": [
"https://serverfault.com/questions/398779",
"https://serverfault.com",
"https://serverfault.com/users/60894/"
]
} |
398,837 | My resolv.conf looks like this: ; generated by /sbin/dhclient-script
search mcdc
nameserver 10.0.4.48
nameserver 8.8.8.8 if I do nslookup www.google.com it works nslookup www.google.com
;; Got SERVFAIL reply from 10.0.4.48, trying next server
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
www.google.com canonical name = www.l.google.com. but when I curl www.google.com, it cannot resolve the host. I tried running curl under strace, and found curl was only using the first nameserver in resolv.conf, not the second. If I switch the two nameserver lines around, www.google.com resolves, but internal DNS names do not, so that's not a good workaround. How can I fix resolv.conf to use both nameservers? | The default behavior for resolv.conf and the resolver is to try the servers in the order listed. The resolver will only try the next nameserver if the first nameserver times out. The resolv.conf manpage says: nameserver Name server IP address Internet address (in dot notation) of a name server that the resolver
should query. Up to MAXNS (currently 3, see <resolv.h>) name
servers may be listed, one per keyword. If there are multiple
servers, the resolver library queries them in the order listed. And: (The algorithm used is to try a name server, and if the query times out, try the next, until out of name servers, then repeat trying all the name servers until a maximum number of retries are made.) Also see the resolver(5) manual page for more information. You can alter the resolver's behavior using rotate , which will query the Nameservers in a round-robin order: rotate sets RES_ROTATE in _res.options, which causes round robin
selection of nameservers from among those listed. This has the effect
of spreading the query load among all listed servers, rather than
having all clients try the first listed server first every time. However, nslookup will use the second nameserver if it receives a SERVFAIL from the first nameserver. From the nslookup manpage : [no]fail
Try the next nameserver if a nameserver responds with SERVFAIL or a referral (nofail) or terminate query (fail) on such a response. (Default = nofail) | {
"source": [
"https://serverfault.com/questions/398837",
"https://serverfault.com",
"https://serverfault.com/users/124715/"
]
} |
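A sketch of /etc/resolv.conf using the options discussed in the answer above; the timeout and attempts values are illustrative, and note that rotate spreads load but the resolver still only fails over on timeouts, not on SERVFAIL:
# /etc/resolv.conf
search mcdc
nameserver 10.0.4.48
nameserver 8.8.8.8
options rotate timeout:2 attempts:2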
399,089 | At a company I work for we have such a thing called "playlists" which are small files ~100-300 bytes each. There's about a million of them. About 100,000 of them get changed every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour and it needs to happen quick in under 2 mins ideally. It's very important that files that are deleted on the master are also deleted on all the replicas. We currently use Linux for our infrastructure. I was thinking about trying rsync with the -W option to copy whole files without comparing contents. I haven't tried it yet but maybe people who have more experience with rsync could tell me if it's a viable option? What other options are worth considering? Update: I have chosen the lsyncd option as the answer but only because it was the most popular. Other suggested alternatives are also valid in their own way. | Since instant updates are also acceptable, you could use lsyncd . It watches directories (inotify) and will rsync changes to slaves. At startup it will do a full rsync , so that will take some time, but after that only changes are transmitted. Recursive watching of directories is possible, if a slave server is down the sync will be retried until it comes back. If this is all in a single directory (or a static list of directories) you could also use incron . The drawback there is that it does not allow recursive watching of folders and you need to implement the sync functionality yourself. | {
"source": [
"https://serverfault.com/questions/399089",
"https://serverfault.com",
"https://serverfault.com/users/61396/"
]
} |
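A minimal lsyncd 2.x configuration sketch along the lines of the answer above; the paths, target host, and log locations are assumptions for illustration, not taken from the question:
-- /etc/lsyncd/lsyncd.conf.lua (hypothetical paths and host)
settings {
    logfile = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd-status.log",
}
sync {
    default.rsync,
    source = "/srv/playlists",
    target = "replica1.example.com:/srv/playlists",
    delete = true,  -- propagate deletions, as the question requires
}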
399,262 | I'm building a Bash script for some tasks. One of those tasks is to create a MySQL DB from within the same bash script. What I'm doing right now is creating two vars: one to store the user name and the other to store the password. This is the relevant part of my script: MYSQL_USER=root
MYSQL_PASS=mypass_goes_here
touch /tmp/$PROY.sql && echo "CREATE DATABASE $DB_NAME;" > /tmp/script.sql
mysql --user=$MYSQL_USER --password="$MYSQL_PASS" < /tmp/script.sql
rm -rf /tmp/script.sql But I always get an error saying access denied for user root with NO PASSWORD; what am I doing wrong? I need to do the same for PostgreSQL. | Both for MySQL and PostgreSQL you can specify your user and password in a local config file: .my.cnf for MySQL and .pgpass for PostgreSQL. These files should be in your home directory (i.e. ~/.my.cnf). .my.cnf: [mysql]
user=user
password=password .pgpass: host:port:database:user:password You can have a wildcard entry here, substituting any field for *. PS: DO NOT EVER SPECIFY A PASSWORD ON THE COMMAND LINE! This can be perfectly visible with ps if your system is not configured to not show processes that belong to other users. @thinice: If you want to create those files really securely you should do: umask 077
touch .my.new.config
umask 022 # or whatever was your default This way the file would be created with secure permissions from the start and no eavesdropper would have a chance of leeching your password. PostgreSQL will refuse to use the file with permissions higher than 0600 anyway. | {
"source": [
"https://serverfault.com/questions/399262",
"https://serverfault.com",
"https://serverfault.com/users/124850/"
]
} |
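Putting the answer's pieces together, a hedged bash sketch that creates ~/.my.cnf with safe permissions and then issues the CREATE DATABASE with no password on the command line; the database name is a placeholder:
umask 077                      # new files readable by owner only
cat > ~/.my.cnf <<'EOF'
[mysql]
user=root
password=mypass_goes_here
EOF
umask 022                      # restore your usual default
# The mysql client reads ~/.my.cnf automatically; no credentials appear in ps.
mysql -e "CREATE DATABASE mydb;"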
399,428 | I'm working with a bash script, trying to stop it from attempting to replace variables inside my heredoc. How do I set a heredoc to either A) escape the variable names instead of parsing them or B) return the entire string untouched? cat > /etc/nginx/sites-available/default_php <<END
server {
listen 80 default;
server_name _;
root /var/www/$host; <--- $host is a problem child
}
END As is, when it finishes injecting it into a file I'm left with this: server {
listen 80 default;
server_name _;
root /var/www/;
} | From the bash(1) man page: If any characters in word are
quoted, the delimiter is the result of quote removal on word , and the
lines in the here-document are not expanded. cat > /etc/nginx/sites-available/default_php <<"END" | {
"source": [
"https://serverfault.com/questions/399428",
"https://serverfault.com",
"https://serverfault.com/users/32759/"
]
} |
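If only some variables should stay literal, keep the delimiter unquoted and escape just the problem ones; a small sketch where $DOCROOT (an assumed helper variable) still expands while \$host survives untouched:
DOCROOT=/var/www   # assumed variable, for illustration only
cat > /etc/nginx/sites-available/default_php <<END
server {
    listen 80 default;
    server_name _;
    root $DOCROOT/\$host;   # \$host reaches the file as the literal string $host
}
END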
399,514 | Can someone tell me what this means? I tried a command like lastb to see last user logins and I see some strange logins from China (the server is in the EU, I am in the EU). I was wondering if these could be login attempts or successful logins? These seem to be very old and usually I lock port 22 to my IPs only; I think I had the port open for a while, the last log is from July. root ssh:notty 222.92.89.xx Sat Jul 9 12:26 - 12:26 (00:00)
root ssh:notty 222.92.89.xx Sat Jul 9 12:04 - 12:04 (00:00)
oracle ssh:notty 222.92.89.xx Sat Jul 9 11:43 - 11:43 (00:00)
gary ssh:notty 222.92.89.xx Sat Jul 9 11:22 - 11:22 (00:00)
root ssh:notty 222.92.89.xx Sat Jul 9 11:01 - 11:01 (00:00)
gt05 ssh:notty 222.92.89.xx Sat Jul 9 10:40 - 10:40 (00:00)
admin ssh:notty 222.92.89.xx Sat Jul 9 10:18 - 10:18 (00:00) | lastb only shows login failures . Use last to see successful logins. | {
"source": [
"https://serverfault.com/questions/399514",
"https://serverfault.com",
"https://serverfault.com/users/95818/"
]
} |
399,616 | I have two Ruby on Rails 3 applications running on the same server (Ubuntu 10.04), both with SSL. Here is my apache config file: <VirtualHost *:80>
ServerName example1.com
DocumentRoot /home/me/example1/production/current/public
</VirtualHost>
<VirtualHost *:443>
ServerName example1.com
DocumentRoot /home/me/example1/production/current/public
SSLEngine on
SSLCertificateFile /home/me/example1/production/shared/example1.crt
SSLCertificateKeyFile /home/me/example1/production/shared/example1.key
SSLCertificateChainFile /home/me/example1/production/shared/gd_bundle.crt
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
</VirtualHost>
<VirtualHost *:80>
ServerName example2.com
DocumentRoot /home/me/example2/production/current/public
</VirtualHost>
<VirtualHost *:443>
ServerName example2.com
DocumentRoot /home/me/example2/production/current/public
SSLEngine on
SSLCertificateFile /home/me/example2/production/shared/iwanto.crt
SSLCertificateKeyFile /home/me/example2/production/shared/iwanto.key
SSLCertificateChainFile /home/me/example2/production/shared/gd_bundle.crt
SSLProtocol -all +TLSv1 +SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
</VirtualHost> What's the issue: On restarting my server it gives me some output like this: * Restarting web server apache2
[Sun Jun 17 17:57:49 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence
... waiting [Sun Jun 17 17:57:50 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence When I googled why this happens, I found something like this: You cannot use name based virtual hosts with SSL because the SSL handshake (when the browser accepts the secure Web server's certificate) occurs before the HTTP request, which identifies the appropriate name based virtual host. If you plan to use name-based virtual hosts, remember that they only work with your non-secure Web server. But I'm not able to figure out how to run two SSL applications on the same server. Can anyone help me? | Almost there! Add this to ports.conf or http.conf and keep your above config. <IfModule mod_ssl.c>
# If you add NameVirtualHost *:443 here, you will also have to change
# the VirtualHost statement in /etc/apache2/sites-available/default-ssl
# to <VirtualHost *:443>
# Server Name Indication for SSL named virtual hosts is currently not
# supported by MSIE on Windows XP.
# !important below!
NameVirtualHost *:443
Listen 443
</IfModule> | {
"source": [
"https://serverfault.com/questions/399616",
"https://serverfault.com",
"https://serverfault.com/users/26114/"
]
} |
401,437 | I know how to retrieve the last modification date of a single file in a Git repository: git log -1 --format="%ad" -- path/to/file Is there a simple and efficient way to do the same for all the files currently present in the repository? | A simple answer would be to iterate through each file and display its modification time, i.e.: git ls-tree -r --name-only HEAD | while read filename; do
echo "$(git log -1 --format="%ad" -- $filename) $filename"
done This will yield output like so: Fri Dec 23 19:01:01 2011 +0000 Config
Fri Dec 23 19:01:01 2011 +0000 Makefile Obviously, you can control this since it's just a bash script at this point--so feel free to customize to your heart's content! | {
"source": [
"https://serverfault.com/questions/401437",
"https://serverfault.com",
"https://serverfault.com/users/65570/"
]
} |
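A variation in the same spirit as the answer's loop: %at emits the author date as a Unix timestamp, which makes the output trivially sortable, oldest first:
git ls-tree -r --name-only HEAD | while read filename; do
  echo "$(git log -1 --format="%at" -- "$filename") $filename"
done | sort -n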
401,704 | virsh create somefile.xml creates my machine just fine, but when I shut the machine down the whole thing disappears. Machines I made with the virt-manager GUI are persistent (stick around after shutdown) and the xml file is derived from those virt-manager created machines. | Use virsh define somefile.xml and virsh start domain-name ; this way the VM will be persistent.
I can't check right now, but I think you can use virsh define on an already started VM and this will make it persistent. | {
"source": [
"https://serverfault.com/questions/401704",
"https://serverfault.com",
"https://serverfault.com/users/48965/"
]
} |
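Following up on the answer's closing thought, a sketch for making an already-running transient domain persistent by dumping its live XML and defining it; the domain name is a placeholder:
virsh dumpxml mydomain > /tmp/mydomain.xml   # capture the live definition
virsh define /tmp/mydomain.xml               # register it as a persistent domain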
402,023 | I'm setting up a Debian box as a router for 4 subnets. For that I have defined 4 virtual interfaces on the NIC where the LAN is connected ( eth1 ). eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98
inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0
inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6026521 errors:0 dropped:0 overruns:0 frame:0
TX packets:35331299 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:673201397 (642.0 MiB) TX bytes:177276932 (169.0 MiB)
Interrupt:19 Base address:0x6000
eth1:0 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98
inet addr:10.1.2.1 Bcast:10.1.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:19 Base address:0x6000
eth1:1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98
inet addr:10.1.3.1 Bcast:10.1.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:19 Base address:0x6000
eth1:2 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98
inet addr:10.1.4.1 Bcast:10.1.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:19 Base address:0x6000
eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38
inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:199809345 errors:0 dropped:0 overruns:0 frame:0
TX packets:158362936 errors:0 dropped:0 overruns:0 carrier:1
collisions:0 txqueuelen:1000
RX bytes:3656983762 (3.4 GiB) TX bytes:1715848473 (1.5 GiB)
Interrupt:27
eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72
inet addr:192.168.2.5 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::960c:6dff:fe82:c872/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:110814 errors:0 dropped:0 overruns:0 frame:0
TX packets:73386 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16044901 (15.3 MiB) TX bytes:42125647 (40.1 MiB)
Interrupt:20 Base address:0x2000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:22351 errors:0 dropped:0 overruns:0 frame:0
TX packets:22351 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2625143 (2.5 MiB) TX bytes:2625143 (2.5 MiB)
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:41358924 errors:0 dropped:0 overruns:0 frame:0
TX packets:23116350 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:3065505744 (2.8 GiB) TX bytes:1324358330 (1.2 GiB) I have two other computers connected to this network. One has IP 10.1.1.12 (subnet mask 255.255.255.0) and the other one 10.1.2.20 (subnet mask 255.255.255.0). I want to be able to reach 10.1.1.12 from 10.1.2.20. Since packet forwarding is enabled in the router and the policy of the FORWARD chain is ACCEPT (and there are no other rules), I understand that there should be no problem to ping from 10.1.2.20 to 10.1.1.12 going through the router. However, this is what I get: $ ping -c15 10.1.1.12
PING 10.1.1.12 (10.1.1.12): 56 data bytes
Request timeout for icmp_seq 0
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 81d4 0 0000 3f 01 e2b3 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 1
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 899b 0 0000 3f 01 daec 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 2
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 78fe 0 0000 3f 01 eb89 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 3
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 14b8 0 0000 3f 01 4fd0 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 4
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 8ef7 0 0000 3f 01 d590 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 5
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 ec9d 0 0000 3f 01 77ea 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 6
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 70e6 0 0000 3f 01 f3a1 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 7
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 b0d2 0 0000 3f 01 b3b5 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 8
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 f8b4 0 0000 3f 01 6bd3 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 9
Request timeout for icmp_seq 10
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 1c95 0 0000 3f 01 47f3 10.1.2.20 10.1.1.12
Request timeout for icmp_seq 11
Request timeout for icmp_seq 12
Request timeout for icmp_seq 13
92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12)
Vr HL TOS Len ID Flg off TTL Pro cks Src Dst
4 5 00 0054 62bc 0 0000 3f 01 01cc 10.1.2.20 10.1.1.12 Why does this happen? From what I've read the Redirect Host response has something to do with the fact that the two hosts are in the same network and there being a shorter route (or so I understood). They are in fact in the same physical network, but why would there be a better route if they are not on the same subnet (they can't see each other)? What am I missing? Some extra info you might want to see: # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3
10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0
192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2
10.1.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
10.1.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2
0.0.0.0 192.168.2.1 0.0.0.0 UG 100 0 0 eth3
# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- !10.0.0.0/8 10.0.0.0/8
MASQUERADE all -- 10.0.0.0/8 !10.0.0.0/8
Chain OUTPUT (policy ACCEPT)
target prot opt source destination | At first blush, it looks like Debian is stretching the boundaries for sending an ICMP redirect; quoting RFC 792 (Internet Protocol) . The gateway sends a redirect message to a host in the following situation.
A gateway, G1, receives an internet datagram from a host on a network
to which the gateway is attached. The gateway, G1, checks its routing
table and obtains the address of the next gateway, G2, on the route to
the datagram's internet destination network, X. If G2 and the host
identified by the internet source address of the datagram are on the same
network, a redirect message is sent to the host. The redirect message
advises the host to send its traffic for network X directly to gateway
G2 as this is a shorter path to the destination. The gateway forwards
the original datagram's data to its internet destination. In this case, G1 is 10.1.2.1 ( eth1:0 above), X is 10.1.1.0/24 and G2 is 10.1.1.12 , and the source is 10.1.2.20 (i.e. G2 and the host identified by the internet source address of the datagram are **NOT** on the same network ). Maybe this has been historically interpreted differently in the case of interface aliases (or secondary addresses) on the same interface, but strictly speaking I'm not sure we should see Debian send that redirect. Depending on your requirements, you might be able to solve this by making the subnet for eth1 something like 10.1.0.0/22 (host addresses from 10.1.0.1 - 10.1.3.254 ) instead of using interface aliases for individual /24 blocks ( eth1 , eth1:0 , eth1:1 , eth1:2 ); if you did this, you'll need to change the netmask of all hosts attached and you wouldn't be able to use 10.1.4.x unless you expanded to a /21 . EDIT We're venturing a bit outside the scope of the original question, but I'll help work through the design/security issues mentioned in your comment. If you want to isolate users in your office from each other, let's step back for a second and look at some security issues with what you have now: You currently have four subnets in one ethernet broadcast domain. All users in one broadcast domain doesn't meet the security requirements you articulated in the comments (all machines will see broadcasts from other machines and could spontaneously send traffic to each other at Layer2, regardless of their default gateway being eth1 , eth1:0 , eth1:1 or eth1:2 ). There is nothing your Debian firewall can do to change this (or maybe I should say there is nothing your Debian firewall should do to change this :-). Acquire a managed ethernet switch which supports vlans and dot1q tagging Plug all your users into the ethernet switch Assign users into Vlans (in linux and on the ethernet switch) based on security policy stated in the comments. A properly-configured Vlan will go a long way to fixing the issues mentioned above. With respect to multiple security domains accessing 10.1.1.12 , you have a couple of options: Option 1 : Given the requirement for all users to access services on 10.1.1.12 , you could put all users in one IP subnet and implement security policies with Private Vlans (RFC 5517) , assuming your ethernet switch supports this. This option will not require iptables rules to limit intra-office traffic from crossing security boundaries (that is accomplished with private Vlans). Option 2 : You could put users into different subnets (corresponding to Vlans) and implement iptables rules to deploy your security policies After you have secured your network at the Vlan level, set up source-based routing policies to send different users out your multiple uplinks. FYI, if you have a router or Layer3 ethernet switch that supports VRFs , some of this gets even easier; IIRC, you have a Cisco IOS machine onsite. Depending on the model and software image you already have, that Cisco could do a fantastic job isolating your users from each other and implement source-based routing policies. | {
"source": [
"https://serverfault.com/questions/402023",
"https://serverfault.com",
"https://serverfault.com/users/111334/"
]
} |
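Separately from the readdressing discussed above: if you simply want the Debian router to stop emitting ICMP redirects, that is a sysctl switch; a sketch (add the same keys to /etc/sysctl.conf to persist them):
sysctl -w net.ipv4.conf.all.send_redirects=0    # all interfaces
sysctl -w net.ipv4.conf.eth1.send_redirects=0   # the LAN-facing interface from the question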
402,026 | In my 2003 domain, I am being requested to set a password policy to require passwords to expire every 4 months, and also require users to change their password on their next login, due to a security issue. In my domain, my OUs are set up by location, then drilled down to city, then the users and computers are in separate sub-domains. My question is, how do I set this up for my domain? Will I need to set the policy up for loopback? Can I configure this for just a specific OU? Any suggestions on how to move forward? Any advice is much appreciated, and thanks in advance! | | {
"source": [
"https://serverfault.com/questions/402026",
"https://serverfault.com",
"https://serverfault.com/users/117870/"
]
} |
402,035 | I've been challenged to "improve Skype performance" for calls within my organisation. Having read the Skype IT Administrators Guide I am wondering whether we might have a performance issue where the Skype Clients in a call are all on our WAN. The call is initiated by a Skype Client at our head office, and terminated on a Skype Client in a remote office connected via IPSEC VPN. Where this happens, I assume the traffic from Client A (encrypted by Skype) goes to our ASA 5510, where it is further encrypted, sent to the remote ASA 5505, decrypted, then passed to Client B which decrypts the Skype encryption. Would the call quality benefit if the traffic didn't go over the VPN, but instead only relied on Skype's encryption? I imagine I could achieve this by setting up a SOCKS5 proxy in our HQ DMZ for Skype traffic. Then the traffic goes from Client A to Proxy, over the Skype relay network, then arrives at Cisco ASA 5505 as any other internet traffic, and then to Client B. Is there likely to be any performance benefit in doing this? If so, is there a way to do it that doesn't require a proxy? Has anyone else tackled this? | | {
"source": [
"https://serverfault.com/questions/402035",
"https://serverfault.com",
"https://serverfault.com/users/31143/"
]
} |
402,046 | I'm trying to use a PHP script to create at jobs, but when it comes time to execute the jobs, nothing seems to be happening. I've tried to output any errors to log files, but have had no luck. It seems obvious that it's a permissions issue, because when I set apache to run as my personal user, everything works fine. However, when I exec wget directly from PHP, everything works fine so it seems that apache has the correct permissions to use it. The problem appears to be when using at in conjunction with apache. So I need to find a way to make this work with apache running as its own user. Here is the command I'm using: echo "wget -qO- http://example.com/" | at now + 1 minute 2>&1 Any ideas? EDIT: Apache can create the at jobs, it just seems that when they execute nothing is happening. | | {
"source": [
"https://serverfault.com/questions/402046",
"https://serverfault.com",
"https://serverfault.com/users/125921/"
]
} |
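A hedged debugging sketch for the question above (not taken from an original answer): at honours /etc/at.allow and /etc/at.deny, and jobs run with the submitting user's minimal environment, so check the access files and capture the job's own output:
# Is the web server's user (name assumed, e.g. www-data or apache) denied access to at?
grep -E 'www-data|apache' /etc/at.allow /etc/at.deny 2>/dev/null
# Have the job log its own output so silent failures become visible:
echo "wget -qO- http://example.com/ >> /tmp/atjob.log 2>&1" | at now + 1 minute
atq   # list the pending jobs for the current user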
402,440 | In Windows Server 2008 R2 there is the "Server Manager" program that always starts up when I log on. I would like to make it so that this does not start up every time that I log into the server. How can I do this? | I found this blog post by Alen Siljak which describes how you can keep it from starting when logging on. There are two different methods to solve the problem. The first and most simple is a checkbox in the Server Manager itself. The second involves modifying the registry, which can be used to automate and script the process for a large number of servers. UI Method - In the "Server Manager" program there is the "Server Summary -> Computer Information" section. At the bottom of the section there is a checkbox "Do not show me this console at logon". Check this box and exit the program and at next log on you will not see the Server Manager. Registry Method - Go to the registry editor and HKLM\Software\Microsoft\ServerManager and set the variable DoNotOpenServerManagerAtLogon to 1 . Then go to another entry at HKCU\Software\Microsoft\ServerManager and set the CheckedUnattendLaunchSetting to 0 (Note that this will only set it for the current user). After logging out and logging back on you should no longer see the server manager. | {
"source": [
"https://serverfault.com/questions/402440",
"https://serverfault.com",
"https://serverfault.com/users/57931/"
]
} |
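The registry method from the answer above can be scripted with the built-in reg command; a sketch to run from an elevated prompt:
:: Machine-wide: do not open Server Manager at logon.
reg add "HKLM\Software\Microsoft\ServerManager" /v DoNotOpenServerManagerAtLogon /t REG_DWORD /d 1 /f
:: Current user only.
reg add "HKCU\Software\Microsoft\ServerManager" /v CheckedUnattendLaunchSetting /t REG_DWORD /d 0 /f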
402,455 | I have a VPS with a limit of 2GB of RAM and 8 CPU cores. I have 5 sites on that VPS (one of them is just for testing, no visitors except me). All 5 sites are image galleries, like wallpaper sites.
Last week I noticed a problem on one site (the main domain, used for name servers, and also the one with the most traffic and visitors). That site has two image galleries: one is an old static HTML gallery made a few years ago, and the other, main one is powered by the ZENphoto CMS.
I also have that same gallery CMS on another two sites on that same VPS (on one running site and on one just-for-testing site). On the other two sites I have a different PHP-driven gallery. The problem is that after some time (it varies from 10 minutes to a few hours after an Apache restart), loading of pages on the main site becomes very slow, or I get a 503 Service Temporarily Unavailable error. So pages become unavailable.
But only the part with the new CMS gallery; the old part of the site with static HTML pages keeps working fast and just fine. Also, the other two sites with the same CMS gallery and the other two with the different PHP-driven gallery are working fine and fast at the same time.
I thought it must be something with the CMS on that main site, because the other sites were working nicely. Then I tried to open the contact and guest book pages on that main site, which are outside of that CMS but are also PHP pages, and they did not load either, yet those same contact PHP scripts were working on other sites at the same time. So, when the site starts to hang, ONLY PHP-generated content is not working; like I said, the static pages keep working. And ONLY on that one main site do I have problems.
Then I need to restart Apache; after the restart everything works nicely and fast for some time, then again just the PHP pages on the main site become slower. If I do not restart Apache, that slowness lasts some time (several minutes or hours, depending on traffic), and during that time PHP-driven content loads very slowly or is unavailable on that site. After some time, at moments, everything starts to work and is fast again for a while, and then it repeats.
In hours with more traffic, PHP content loads slowly or is unavailable; in hours with less traffic it is sometimes fast and sometimes a little slower than usual.
And once again: only on that main site, and only PHP-driven pages; static pages are working fast even in the busiest traffic hours, and the other sites, even with the same CMS, are working fast. Currently I have about 7000 unique visitors on that site but the site worked nicely even with 11500 visitors per day. And about 17000 total visitors on the VPS, all sites (about 3 pages per unique visitor). When the site starts to slow down, sometimes in Apache status I can see something like this: mod_fcgid status: Total FastCGI processes: 37 Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 11300 39 28 7 Working 11274 47 28 7 Working 11296 40 29 3 Working 11283 45 30 3 Working 11304 36 31 1 Working 11282 46 32 3 Working 11292 42 33 1 Working 11289 44 34 1 Working 11305 35 35 0 Working 11273 48 36 2 Working 11280 47 39 1 Working 10125 133 40 12 Exiting(communication error) 11294 41 41 1 Exiting(communication error) 11277 47 42 2 Exiting(communication error) 11291 43 43 1 Exiting(communication error) 10187 108 43 10 Exiting(communication error) 10209 95 44 7 Exiting(communication error) 10171 113 44 5 Exiting(communication error) 11275 47 47 1 Exiting(communication error) 10144 125 48 8 Exiting(communication error) 10086 149 48 20 Exiting(communication error) 10212 94 49 5 Exiting(communication error) 10158 118 49 5 Exiting(communication error) 10169 114 50 4 Exiting(communication error) 10105 141 50 16 Exiting(communication error) 10094 146 50 15 Exiting(communication error) 10115 139 51 17 Exiting(communication error) 10213 93 51 9 Exiting(communication error) 10197 103 51 7 Exiting(communication error) Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7983 1079 2 149 Ready 7979 1079 11 151 Ready Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7990 1066 0 57 Ready 8001 1031 64 35 Ready 7999 1032 94 29 Ready 8000 1031 91 36 Ready 8002 1029 34 52 Ready Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7991 1064 29 115 Ready When it is working nicely there are no lines with "Exiting(communication error)". Active and Idle are time active and time since last request, in seconds. Here is the system info. System info: Total processors: 8 Processor #1
Vendor
GenuineIntel
Name
Intel(R) Xeon(R) CPU E5440 @ 2.83GHz
Speed
88.320 MHz
Cache
6144 KB All other seven are the same. System Information Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux Current Memory Usage
total used free shared buffers cached
Mem: 8388608 882164 7506444 0 0 0
-/+ buffers/cache: 882164 7506444
Swap: 0 0 0
Total: 8388608 882164 7506444 Current Disk Usage
Filesystem Size Used Avail Use% Mounted on
/dev/vzfs 100G 34G 67G 34% /
none System Details: Running on: Apache/2.2.22
System info: (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6
Powered by: PHP/5.3.10 Current Configuration Default PHP Version (.php files) 5 PHP 5 Handler fcgi
PHP 4 Handler suphp Apache suEXEC on Apache Ruid2 off PHP 4 Handler suphp Apache suEXEC on Apache Configuration The following settings have been saved: fileetag: All keepalive: On keepalivetimeout: 3 maxclients: 150 maxkeepaliverequests: 10 maxrequestsperchild: 10000 maxspareservers: 10 minspareservers: 5 root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch serverlimit: 256 serversignature: Off servertokens: Full sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH startservers: 5 timeout: 30 I hope, I explained my problem nicely. Any help would be nice. | | {
"source": [
"https://serverfault.com/questions/402455",
"https://serverfault.com",
"https://serverfault.com/users/119171/"
]
} |
402,496 | I've seen tons of examples where a & follows the end of a command string, but I can't seem to find an explanation of what it does. It's not even in nohup 's man page. Is this a shell thing? Either using & or not, I find that any process run with nohup seems to exhibit immunity to any hangup signal. | From the bash manpage : If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0. So yes, it's a shell thing, and can be used with any command. It essentially returns control to you immediately and allows the command to complete in the background. This is almost always used with nohup because you typically want to exit the shell after starting the command. | {
"source": [
"https://serverfault.com/questions/402496",
"https://serverfault.com",
"https://serverfault.com/users/81366/"
]
} |
402,580 | This is a Canonical Question about Active Directory Domain Services (AD DS). What is Active Directory? What does it do and how does it work? How is Active Directory organized: Forest, Child Domain, Tree, Site, or OU I find myself explaining some of what I assume is common knowledge about it almost daily. This question will, hopefully, serve as a canonical question and answer for most basic Active Directory questions. If you feel that you can improve the answer to this question, please edit away. | What is Active Directory? Active Directory Domain Services is Microsoft's Directory Server. It provides authentication and authorization mechanisms as well as a framework within which other related services can be deployed (AD Certificate Services, AD Federated Services, etc). It is an LDAP compliant database that contains objects. The most commonly used objects are users, computers, and groups. These objects can be organized into organizational units (OUs) by any number of logical or business needs. Group Policy Objects (GPOs) can then be linked to OUs to centralize the settings for various users or computers across an organization. When people say "Active Directory" they typically are referring to "Active Directory Domain Services." It is important to note that there are other Active Directory roles/products such as Certificate Services, Federation Services, Lightweight Directory Services, Rights Management Services, etc. This answer refers specifically to Active Directory Domain Services. What is a domain and what is a forest? A forest is a security boundary. Objects in separate forests are not able to interact with each other, unless the administrators of each separate forest create a trust between them. For example, an Enterprise Administrator account for domain1.com , which is normally the most privileged account of a forest, will have, no permissions at all in a second forest named domain2.com , even if those forests exist within the same LAN, unless there is a trust in place. If you have multiple disjoint business units or have the need for separate security boundaries, you need multiple forests. A domain is a management boundary. Domains are part of a forest. The first domain in a forest is known as the forest root domain. In many small and medium organizations (and even some large ones), you will only find a single domain in a single forest. The forest root domain defines the default namespace for the forest. For example, if the first domain in a new forest is named domain1.com , then that is the forest root domain. If you have a business need for a child domain, for example - a branch office in Chicago, you might name the child domain chi . The FQDN of the child domain would be chi.domain1.com . You can see that the child domain's name was prepended forest root domain's name. This is typically how it works. You can have disjoint namespaces in the same forest, but that's a whole separate can of worms for a different time. In most cases, you'll want to try and do everything possible to have a single AD domain. It simplifies management, and modern versions of AD make it very easy to delegate control based on OU, which lessens the need for child domains. I can name my domain whatever I want, right? Not really. dcpromo.exe , the tool that handles the promotion of a server to a DC isn't idiot-proof. It does let you make bad decisions with your naming, so pay attention to this section if you are unsure. (Edit: dcpromo is deprecated in Server 2012. 
Use the Install-ADDSForest PowerShell cmdlet or install AD DS from Server Manager.) First of all, don't use made up TLDs like .local, .lan, .corp, or any of that other crap. Those TLDs are not reserved. ICANN is selling TLDs now, so your mycompany.corp that you're using today could actually belong to someone tomorrow. If you own mycompany.com , then the smart thing to do is use something like internal.mycompany.com or ad.mycompany.com for your internal AD name. If you use mycompany.com as an externally resolvable website, you should avoid using that as your internal AD name as well, since you'll end up with a split-brain DNS. Domain Controllers and Global Catalogs A server that responds to authentication or authorization requests is a Domain Controller (DC). In most cases, a Domain Controller will hold a copy of the Global Catalog . A Global Catalog (GC) is a partial set of objects in all domains in a forest. It is directly searchable, which means that cross-domain queries can usually be performed on a GC without needing a referral to a DC in the target domain. If a DC is queried on port 3268 (3269 if using SSL), then the GC is being queried. If port 389 (636 if using SSL) is queried, then a standard LDAP query is being used and objects existing in other domains may require a referral . When a user tries to log in to a computer that is joined to AD using their AD credentials, the salted and hashed username and password combination are sent to the DC for both the user account and the computer account that are logging in. Yes, the computer logs in too. This is important, because if something happens to the computer account in AD, like someone resets the account or deletes it, you may get an error that says that a trust relationship doesn't exist between the computer and the domain. Even though your network credentials are fine, the computer is no longer trusted to log into the domain. Domain Controller Availability Concerns I hear "I have a Primary Domain Controller (PDC) and want to install a Backup Domain Controller (BDC)" much more frequently than I would like to believe. The concept of PDCs and BDCs died with Windows NT4. The last bastion for PDCs was in a Windows 2000 transitional mixed mode AD when you still had NT4 DCs around. Basically, unless you're supporting a 15+ year old
install that has never been upgraded, you really don't have a PDC or a BDC, you just have two domain controllers. Multiple DCs are capable of answering authentication requests from different users and computers simultaneously. If one fails, then the others will continue to offer authentication services without having to make one "primary" like you would have had to do in the NT4 days. It is best practice to have at least two DCs per domain. These DCs should both hold a copy of the GC and should both be DNS servers that hold a copy of the Active Directory Integrated DNS zones for your domain as well. FSMO Roles "So, if there are no PDCs, why is there a PDC role that only a single DC can have?" I hear this a lot. There is a PDC Emulator role. It's different than being a PDC. In fact, there are 5 Flexible Single Master Operations (FSMO) roles. These are also called Operations Master roles; the two terms are interchangeable. What are they and what do they do? Good question! The 5 roles and their functions are: Domain Naming Master - There is only one Domain Naming Master per forest. The Domain Naming Master makes sure that when a new domain is added to a forest it is unique. If the server holding this role is offline, you won't be able to make changes to the AD namespace, which includes things like adding new child domains. Schema Master - There is only one Schema Operations Master in a forest. It is responsible for updating the Active Directory Schema. Tasks that require this, such as preparing AD for a new version of Windows Server functioning as a DC or the installation of Exchange, require Schema modifications. These modifications must be done from the Schema Master. Infrastructure Master - There is one Infrastructure Master per domain. If you only have a single domain in your forest, you don't really need to worry about it. If you have multiple domains, then you should make sure that this role is not held by a server that is also a GC holder unless every DC in the forest is a GC . The infrastructure master is responsible for making sure that cross-domain references are handled properly. If a user in one domain is added to a group in another domain, the infrastructure master for the domains in question makes sure that it is handled properly. This role will not function correctly if it is on a global catalog. RID Master - The Relative ID Master (RID Master) is responsible for issuing RID pools to DCs. There is one RID master per domain. Any object in an AD domain has a unique Security Identifier (SID) . This is made up of a combination of the domain identifier and a relative identifier. Every object in a given domain has the same domain identifier, so the relative identifier is what makes objects unique. Each DC has a pool of relative IDs to use, so when that DC creates a new object, it appends a RID that it hasn't used yet. Since DCs are issued non-overlapping pools, each RID should remain unique for the duration of the life of the domain. When a DC gets to ~100 RIDs left in its pool, it requests a new pool from the RID master. If the RID master is offline for an extended period of time, object creation may fail. PDC Emulator - Finally, we get to the most widely misunderstood role of them all, the PDC Emulator role. There is one PDC Emulator per domain. If there is a failed authentication attempt, it is forwarded to the PDC Emulator. The PDC Emulator functions as the "tie-breaker" if a password was updated on one DC and hasn't yet replicated to the others.
The PDC Emulator is also the server that controls time sync across the domain. All other DCs sync their time from the PDC Emulator. All clients sync their time from the DC that they logged in to. It's important that everything remain within 5 minutes of each other, otherwise Kerberos breaks and when that happens, everyone cries. The important thing to remember is that the servers that these roles run on are not set in stone. It's usually trivial to move these roles around, so while some DCs do slightly more than others, if they go down for short periods of time, everything will usually function normally. If they're down for a long time, it's easy to transparently transfer the roles. It's much nicer than the NT4 PDC/BDC days, so please stop calling your DCs by those old names. :) So, um...how do the DCs share information if they can function independently of each other? Replication, of course . By default, DCs belonging to the same domain in the same site will replicate their data to each other at 15 second intervals. This makes sure that everything is relatively up to date. There are some "urgent" events that trigger immediate replication. These events are: an account is locked out for too many failed logins, a change is made to the domain password or lockout policies, the LSA secret is changed, the password is changed on a DC's computer account, or the RID Master role is transferred to a new DC. Any of these events will trigger an immediate replication event. Password changes fall somewhere between urgent and non-urgent and are handled uniquely. If a user's password is changed on DC01 and a user tries to log into a computer that is authenticating against DC02 before replication occurs, you'd expect this to fail, right? Fortunately that doesn't happen. Assume that there is also a third DC here called DC03 that holds the PDC Emulator role. When DC01 is updated with the user's new password, that change is immediately replicated to DC03 also. When the authentication attempt on DC02 fails, DC02 then forwards that authentication attempt to DC03, which verifies that it is, indeed, good, and the logon is allowed. Let's talk about DNS DNS is critical to a properly functioning AD. The official Microsoft party line is that any DNS server can be used if it is set up properly. If you try and use BIND to host your AD zones, you're high. Seriously. Stick with using AD Integrated DNS zones and use conditional or global forwarders for other zones if you must. Your clients should all be configured to use your AD DNS servers, so it's important to have redundancy here. If you have two DCs, have them both run DNS and configure your clients to use both of them for name resolution. Also, you're going to want to make sure that if you have more than one DC, they don't list themselves first for DNS resolution. This can lead to a situation where they are on a "replication island" where they are disconnected from the rest of the AD replication topology and cannot recover. If you have two servers DC01 - 10.1.1.1 and DC02 - 10.1.1.2 , then their DNS server lists should be configured like this: Server: DC01 (10.1.1.1) Primary DNS - 10.1.1.2 Secondary DNS - 127.0.0.1 Server: DC02 (10.1.1.2) Primary DNS - 10.1.1.1 Secondary DNS - 127.0.0.1 OK, this seems complicated. Why do I want to use AD at all? Because once you know what you're doing, your life becomes infinitely better. AD allows for the centralization of user and computer management, as well as the centralization of resource access and usage.
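As a concrete illustration of that centralized user management (this sketch is mine, not part of the original answer — the account name and OU path are made up, and it assumes the RSAT Active Directory PowerShell module is installed), creating one domain account that every domain-joined machine will accept is a single cmdlet:

Import-Module ActiveDirectory
# Create one user object; every domain-joined PC can now authenticate this user
New-ADUser -Name "jdoe" -SamAccountName "jdoe" -Path "OU=Staff,DC=internal,DC=mycompany,DC=com" -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true

Compare that one-time step with the no-AD alternative described next.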
Imagine a situation where you have 50 users in an office. If you wanted each user to have their own login to each computer, you'd have to configure 50 local user accounts on each PC. With AD, you only have to make the user account once and it can log into any PC on the domain by default. If you wanted to harden security, you'd have to do it 50 times. Sort of a nightmare, right? Also imagine that you have a file share that you only want half of those people to get to. If you're not using AD, you'd either need to replicate their usernames and passwords by hand on the server to give seamless access, or you'd have to make a shared account and give each user the username and password. One way means that you know (and have to constantly update) users' passwords. The other way means that you have no audit trail. Not good, right? You also get the ability to use Group Policy when you have AD set up. Group Policy is a set of objects that are linked to OUs that define settings for users and/or computers in those OUs. For example, if you want to make it so that "Shutdown" isn't on the start menu for 500 lab PCs, you can do that in one setting in Group Policy. Instead of spending hours or days configuring the proper registry entries by hand, you create a Group Policy Object once, link it to the correct OU or OUs, and never have to think about it again. There are hundreds of GPOs that can be configured, and the flexibility of Group Policy is one of the major reasons that Microsoft is so dominant in the enterprise market. | {
"source": [
"https://serverfault.com/questions/402580",
"https://serverfault.com",
"https://serverfault.com/users/10472/"
]
} |
402,908 | I've been working with linux for a while but in a rather simple manner. I understand that scripts in init.d are executed when the OS starts, but how exactly does it work? What if I want to keep a script but don't want it to start automatically? Say I have a /etc/init.d/varnish and want to disable it temporarily. How do I make sure it doesn't start if the OS reboots? I don't want to delete the script. What if I want to add it again? | There are a couple ways. If you just want to do this temporarily, you can remove the execute bit from the file: $ chmod -x /etc/init.d/varnish Then re-add it when appropriate: $ chmod +x /etc/init.d/varnish The "official" way in Ubuntu (as well as in Debian and other Debian derivatives), though, is to use the update-rc.d command: $ update-rc.d varnish disable This will remove all of the symlinks from the /etc/rcX.d folders, which take care of starting and stopping the service when appropriate. See the update-rc.d man page for more information. | {
"source": [
"https://serverfault.com/questions/402908",
"https://serverfault.com",
"https://serverfault.com/users/121735/"
]
} |
402,938 | We all know what 127.0.0.1 is used for (loopback). What are use cases for the rest of the reserved 127.0.0.0/8 loopback space? | It's also reserved for loopback, so no, it's not widely used for anything. In practice, 127.0.0.1 is usually used as "the" loopback address, but the rest of the block should loop back as well, meaning it's just generally not used for anything. (Though, for example, larger Cisco switches will use 127.0.0.xx IPs to listen for attached cards and modules, so at least some of the other addresses are in use.) From RFC3330: Special-Use IPv4 addresses 127.0.0.0/8 - This block is assigned for use as the Internet host
loopback address. A datagram sent by a higher level protocol to an
address anywhere within this block should loop back inside the host.
This is ordinarily implemented using only 127.0.0.1/32 for loopback,
but no addresses within this block should ever appear on any network
anywhere [RFC1700, page 5]. | {
"source": [
"https://serverfault.com/questions/402938",
"https://serverfault.com",
"https://serverfault.com/users/87940/"
]
} |
403,409 | My google-fu is failing me on this one. What are the little things called that let you mount devices like switches which expect threaded round holes in a server rack with square holes? | Are you looking for cage nuts ? | {
"source": [
"https://serverfault.com/questions/403409",
"https://serverfault.com",
"https://serverfault.com/users/108667/"
]
} |
403,732 | *NOTE: if your server still has issues due to confused kernels, and you can't reboot - the simplest solution proposed with gnu date installed on your system is: date -s now. This will reset the kernel's internal "time_was_set" variable and fix the CPU hogging futex loops in java and other userspace tools. I have straced this command on my own system and confirmed it's doing what it says on the tin * POSTMORTEM Anticlimax: the only thing that died was my VPN (openvpn) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine, and starting up ntp went cleanly after the leap second had passed. I have written up my full experience of the day at http://blog.fastmail.fm/2012/07/03/a-story-of-leaping-seconds/ If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second - he has a solution for phasing the time change over 24 hours using ntpd -x to avoid the 1 second skip. This is an alternative smearing method to running your own ntp infrastructure. Just today, Sat June 30th, 2012 - starting soon after the start of the day GMT - we've had a handful of servers in different datacentres, as managed by different teams, all go dark: not responding to pings, screen blank. They're all running Debian Squeeze - with everything from stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510 and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said: [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
[3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0 Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. Just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21. THE WORKAROUND Ok people, here's how I worked around it. disabled ntp: /etc/init.d/ntp stop created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco, see blog posts in comments) ran fixtime.pl without an argument to see that there was a leap second set ran fixtime.pl with an argument to remove the leap second NOTE: depends on adjtimex . I've put a copy of the squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex — it will run without dependencies on a squeeze 64 bit system. If you put it in the same directory as fixtime.pl , it will be used if the system one isn't present. Obviously if you don't have squeeze 64-bit… find your own. I'm going to start ntp again tomorrow. As an anonymous user suggested - an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leapsecond counter. | This is caused by a livelock when ntpd calls adjtimex(2) to tell the kernel to insert a leap second. See lkml posting http://lkml.indiana.edu/hypermail/linux/kernel/1203.1/04598.html Red Hat should also be updating their KB article as well. https://access.redhat.com/knowledge/articles/15145 UPDATE: Red Hat has a second KB article just for this issue here: https://access.redhat.com/knowledge/solutions/154713 - the previous article is for an earlier, unrelated problem The work-around is to just turn off ntpd. If ntpd already issued the adjtimex(2) call, you may need to disable ntpd and reboot to be 100% safe. This affects RHEL 6 and other distros running newer kernels (newer than approx 2.6.26), but not RHEL 5. The reason this is occurring before the leap second is actually scheduled to occur is that ntpd lets the kernel handle the leap second at midnight, but needs to alert the kernel to insert the leap second before midnight. ntpd therefore calls adjtimex(2) sometime during the day of the leap second, at which point this bug is triggered. If you have adjtimex(8) installed, you can use this script to determine if flag 16 is set. Flag 16 is "inserting leap second": adjtimex -p | perl -p -e 'undef $_, next unless m/status: (\d+)/; (16 & $1) && print "leap second flag is set:\n"' UPDATE: Red Hat has updated their KB article to note: "RHEL 6 customers may be affected by a known issue that causes NMI Watchdog to detect a hang when receiving the NTP leapsecond announcement. This issue is being addressed in a timely manner. If your systems received the leapsecond announcement and did not experience this issue, then they are no longer affected." 
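If you'd rather not depend on the adjtimex(8) utility to clear flag 16, the same check-and-clear can be done directly against the adjtimex(2) system call. This is a minimal C sketch of mine (not from the original post; it must run as root, and it blindly zeroes the whole read-write status word rather than clearing just STA_INS):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { 0 };
    tx.modes = ADJ_STATUS;  /* tell the kernel we are writing the status word */
    tx.status = 0;          /* no STA_INS bit => no leap second pending */
    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    /* the kernel fills tx back in with its view after the update */
    printf("STA_INS is %s\n", (tx.status & STA_INS) ? "still set" : "clear");
    return 0;
}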
UPDATE: The above language was removed from the Red Hat article; and a second KB solution was added detailing the adjtimex(2) crash issue: https://access.redhat.com/knowledge/solutions/154713 However, the code change in the LKML post by IBM Engineer John Stultz notes there may also be a deadlock when the leap second is actually applied, so you may want to disable the leap second by rebooting or using adjtimex(8) after disabling ntpd. FINAL UPDATE: Well, I'm no kernel dev, but I reviewed John Stultz's patch again here: https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=6b43ae8a619d17c4935c3320d2ef9e92bdeed05d If I'm reading it right this time, I was wrong about there being another deadlock when the leap second is applied. That seems to be Red Hat's opinion as well, based on their KB entry. However, if you have disabled ntpd, keep it disabled for another 10 minutes, so that you don't hit the deadlock when ntpd calls adjtimex(2). We'll find out if there are any more bugs soon :) POST-LEAP SECOND UPDATE: I spent the last few hours reading through the ntpd and pre-patch (buggy) kernel code, and while I may be very wrong here, I'll attempt to explain what I think was going on: First, ntpd calls adjtimex(2) all the time. It does this as part of its "clock loop filter", defined in local_clock in ntp_loopfilter.c. You can see that code here: http://www.opensource.apple.com/source/ntp/ntp-70/ntpd/ntp_loopfilter.c (from ntp version 4.2.6). The clock loop filter runs quite often -- it runs every time ntpd polls its upstream servers, which by default is every 17 minutes or more. The relevant bit of the clock loop filter is: if (sys_leap == LEAP_ADDSECOND)
ntv.status |= STA_INS; And then: ntp_adjtime(&ntv) In other words, on days when there's a leap second, ntpd sets the "STA_INS" flag and calls adjtimex(2) (via its portability-wrapper). That system call makes its way to the kernel. Here's the relevant kernel code: https://github.com/mirrors/linux/blob/a078c6d0e6288fad6d83fb6d5edd91ddb7b6ab33/kernel/time/ntp.c The kernel codepath is roughly this: line 663 - start of do_adjtimex routine. line 691 - cancel any existing leap-second timer. line 709 - grab the ntp_lock spinlock (this lock is involved in the possible livelock crash) line 724 - call process_adjtimex_modes. line 616 - call process_adj_status. line 590 - set time_status global variable, based on flags set in adjtimex(2) call line 592 - check time_state global variable. in most cases, call ntp_start_leap_timer. line 554 - check time_status global variable. STA_INS will be set, so set time_state to TIME_INS and call hrtimer_start (another kernel function) to start the leap second timer. in the process of creating a timer, this code grabs the xtime_lock. if this happens while another CPU has already grabbed the xtime_lock and the ntp_lock, then the kernel livelocks. this is why John Stultz wrote the patch to avoid using hrtimers. This is what was causing everyone trouble today. line 598 - if ntp_start_leap_timer did not actually start a leap timer, set time_state to TIME_OK line 751 - assuming the kernel does not livelock, the stack is unwound and the ntp_lock spinlock is released. There are a couple interesting things here. First, line 691 cancels the existing timer every time adjtimex(2) is called. Then, 554 re-creates that timer. This means each time ntpd ran its clock loop filter, the buggy code was invoked. Therefore I believe Red Hat was wrong when they said that once ntpd had set the leap-second flag, the system would not crash. I believe each system running ntpd had the potential to livelock every 17 minutes (or more) for the 24-hour period before the leap-second. I believe this may also explain why so many systems crashed; a one-time chance of crashing would be much less likely to hit as compared to 3 chances an hour. UPDATE: In Red Hat's KB solution at https://access.redhat.com/knowledge/solutions/154713 , Red Hat engineers did come to the same conclusion (that running ntpd would continuously hit the buggy code). And indeed they did so several hours before I did. This solution wasn't linked to the main article at https://access.redhat.com/knowledge/articles/15145 , so I didn't notice it until now. Second, this explains why loaded systems were more likely to crash. Loaded systems will be handling more interrupts, causing the "do_tick" kernel function to be called more often, giving more of a chance for this code to run and grab the ntp_lock while the timer was being created. Third, is there a chance of the system crashing when the leap-second actually occurs? I don't know for sure, but possibly yes, because the timer that fires and actually executes the leap-second adjustment (ntp_leap_second, on line 388) also grabs the ntp_lock spinlock, and has a call to hrtimer_add_expires_ns. I don't know if that call might also be able to cause a livelock, but it doesn't seem impossible. Finally, what causes the leap-second flag to be disabled after the leap-second has run? The answer there is ntpd stops setting the leap-second flag at some point after midnight when it calls adjtimex(2). 
Since the flag isn't set, the check on line 554 will not be true, and no timer will be created, and line 598 will reset the time_state global variable to TIME_OK. This explains why if you checked the flag with adjtimex(8) just after the leap second, you would still see the leap-second flag set. In short, the best advice for today seems to be the first I gave after all: disable ntpd, and disable the leap-second flag. And some final thoughts: none of the Linux vendors noticed John Stultz's patch and applied it to their kernels :( why didn't John Stultz alert some of the vendors this was needed? perhaps the chance of the livelock seemed low enough making noise wasn't warranted. I've heard reports of Java processes locking up or spinning when the leap-second was applied. Perhaps we should follow Google's lead and rethink how we apply leap-seconds to our systems: http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html 06/02 Update from John Stultz: https://lkml.org/lkml/2012/7/1/203 The post contained a step-by-step walk-through of why the leap second caused the futex timers to expire prematurely and continuously, spiking the CPU load. | {
"source": [
"https://serverfault.com/questions/403732",
"https://serverfault.com",
"https://serverfault.com/users/126591/"
]
} |
404,072 | Is it dangerous or not advisable to change the UPS battery while it is plugged in? We have an APC Smart-UPS 1500 and we need to replace its battery. I have the new battery but the UPS powers a few machines that I would rather not turn off for the replacement procedure. Would something bad happen if I just replace it while it's still plugged in and powered on? | According to the manual : Replacing the Battery Module This UPS has an easy to replace,
hot-swappable battery module. Replacement is a safe procedure,
isolated from electrical hazards. You may leave the UPS and connected
equipment on for this procedure. | {
"source": [
"https://serverfault.com/questions/404072",
"https://serverfault.com",
"https://serverfault.com/users/31131/"
]
} |
404,447 | On my own computer, running MacOSX, I have this in ~/.ssh/config Host *
ForwardAgent yes
Host b1
ForwardAgent yes b1 is a virtual machine running Ubuntu 12.04. I ssh to it like this: ssh pupeno@b1 and I get logged in without being asked for a password because I already copied my public key. Due to forwarding, I should be able to ssh to pupeno@b1 from b1 and it should work, without asking me for a password, but it doesn't. It asks me for a password. What am I missing? This is the verbose output of the second ssh: pupeno@b1:~$ ssh -v pupeno@b1
OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to b1 [127.0.1.1] port 22.
debug1: Connection established.
debug1: identity file /home/pupeno/.ssh/id_rsa type -1
debug1: identity file /home/pupeno/.ssh/id_rsa-cert type -1
debug1: identity file /home/pupeno/.ssh/id_dsa type -1
debug1: identity file /home/pupeno/.ssh/id_dsa-cert type -1
debug1: identity file /home/pupeno/.ssh/id_ecdsa type -1
debug1: identity file /home/pupeno/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 35:c0:7f:24:43:06:df:a0:bc:a7:34:4b:da:ff:66:eb
debug1: Host 'b1' is known and matches the ECDSA host key.
debug1: Found key in /home/pupeno/.ssh/known_hosts:1
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /home/pupeno/.ssh/id_rsa
debug1: Trying private key: /home/pupeno/.ssh/id_dsa
debug1: Trying private key: /home/pupeno/.ssh/id_ecdsa
debug1: Next authentication method: password
pupeno@b1's password: | It turns out my key was not in the agent, and this fixed it: OS X : ssh-add -K Linux/Unix : ssh-add -k You can list loaded keys using: ssh-add -l
ssh-add -L # for more detail | {
"source": [
"https://serverfault.com/questions/404447",
"https://serverfault.com",
"https://serverfault.com/users/2563/"
]
} |
404,455 | Possible Duplicate: Prevent service accounts from logging in locally or remotely I've got a few accounts being used to run various services eg SQL.Service , TFS.Application , etc... and want to mark those accounts as not supporting interactive login in AD Presumably I should put them in a specific security group (I've created one called MyOrg.Services ) but I don't know how to flag users in that group as being services not "real" users | It turns out my key was not in the agent, and this fixed it: OS X : ssh-add -K Linux/Unix : ssh-add -k You can list loaded keys using: ssh-add -l
ssh-add -L # for more detail | {
"source": [
"https://serverfault.com/questions/404455",
"https://serverfault.com",
"https://serverfault.com/users/20386/"
]
} |
404,462 | we are trying to get a MongoDB setup in EC2 going. I had a few questions - Should we turn on auth since the MongoDB endpoint will have a public VIP? Any big hit on perf with auth enabled? Best way to deploy a replicaset in EC2? Do I have to deploy all 3 nodes individually and configure them or can I use a tool to automate the deployment? We would like one of the secondaries to be located in a different DC than the primary. Ubuntu or RHEL? And what version? Thanks! | It turns out my key was not in the agent, and this fixed it: OS X : ssh-add -K Linux/Unix : ssh-add -k You can list loaded keys using: ssh-add -l
ssh-add -L # for more detail | {
"source": [
"https://serverfault.com/questions/404462",
"https://serverfault.com",
"https://serverfault.com/users/121971/"
]
} |
404,626 | I am testing nginx and want to output variables to the log files. How can I do that, and which log file will it go to (access or error)? | You can send nginx variable values via headers. Handy for development. add_header X-uri "$uri"; and you'll see in your browser's response headers: X-uri:/index.php I sometimes do this during local development. It's also handy for telling you if a subsection is getting executed or not. Just sprinkle it inside your clauses to see if they're getting used. location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
add_header X-debug-message "A static file was served" always;
...
}
location ~ \.php$ {
add_header X-debug-message "A php file was used" always;
...
} So visiting a url like http://www.example.com/index.php will trigger the latter header while visiting http://www.example.com/img/my-ducky.png will trigger the former header. | {
"source": [
"https://serverfault.com/questions/404626",
"https://serverfault.com",
"https://serverfault.com/users/126898/"
]
} |
404,815 | Currently, my PHP is on 5.3.3, how can I upgrade it? Also how can I upgrade anything? For example, if I want to upgrade phpMyAdmin as well? | Upgrade all packages: apt-get update; apt-get upgrade; If you want to upgrade just one package (e.g. php5): apt-get update; apt-get install php5; For the package versions available on Debian take a look at: http://www.debian.org/distrib/packages If you want to install php5 5.4.4-2, that is only available on wheezy, you should add wheezy to your /etc/apt/sources.list: deb http://ftp.us.debian.org/debian/ wheezy main non-free contrib and then: apt-get update
apt-get install -t wheezy php5 To avoid any surprises, you should use apt pinning to prevent your system from installing packages from wheezy; just create the file /etc/apt/preferences :
Package: *
Pin: release n=squeeze
Pin-Priority: 650
Package: *
Pin: release n=wheezy
Pin-Priority: -10 So when you do apt-get install, if you don't specify -t wheezy it will by default install the package from squeeze. | {
"source": [
"https://serverfault.com/questions/404815",
"https://serverfault.com",
"https://serverfault.com/users/56241/"
]
} |
404,840 | Possible Duplicate: Is it true that a nameserver have to answer queries over TCP? I know DNS uses UDP for most of its queries, but in what circumstances will it use TCP instead? | DNS uses TCP when the size of the request or the response is greater than a single packet such as with responses that have many records or many IPv6 responses or most DNSSEC responses. The maximum size was originally 512 bytes but there is an extension to the DNS protocol that allows clients to indicate that they can handle UDP responses of up to 4096 bytes. DNSSEC responses are usually larger than the maximum UDP size. Transfer requests are usually larger than the maximum UDP size and hence will also be done over TCP. | {
"source": [
"https://serverfault.com/questions/404840",
"https://serverfault.com",
"https://serverfault.com/users/95256/"
]
} |
405,482 | I'm working with lxc in Ubuntu 12.04, and it's really great. However, I am unable to disconnect from a lxc-console session after I've connected. I read somewhere that Ctrl-a q will disconnect me from the console but it doesn't seem to work. Should I be running lxc-console via screen instead? | Yes, Ctrl-a q should work by default. However, no, lxc-console does not actually use screen to accomplish its console behavior. In fact, you might be encountering a conflict if you are using screen since it also uses Ctrl-a as a prefix. If you're inside screen but don't realize it then you'll need to type Ctrl-a a q since the default behavior of screen is that you have to type Ctrl-a a to actually send ^a to the shell running inside of it. You can change the prefix for escape by passing the -e or --escape=PREFIX option to lxc-console . Also, it appears there may be a bug in lxc-start so that if it immediately goes into console mode when you start the container you can't use Ctrl-a q to escape -- in fact, all the control characters seem to be screwed up and print to the screen instead of behaving the way you expect. One workaround is to run it with the -d or --daemon option so that it doesn't immediately start a console, and then connect to it by hand: lxc-start -d -n container-name
lxc-console -n container-name | {
"source": [
"https://serverfault.com/questions/405482",
"https://serverfault.com",
"https://serverfault.com/users/19561/"
]
} |
405,647 | I am wondering what is the command/utility to have a real-time view of incoming IPs to my server, ideally along with the port and connected. | Use pktstat -n interface: eth0
bps
bps % desc
162.3 0% arp
286.5 0% llc 802.1d -> 802.1d
544.3 1% tcp 172.16.1.5:22 <-> 172.16.1.95:8074
34.0k 87% udp 172.16.1.1:514 <-> 172.16.1.5:514
350.1 0% udp 172.16.1.5:24330 <-> 209.18.47.62:53
329.4 0% udp 172.16.1.5:34870 <-> 209.18.47.62:53
388.3 0% udp 172.16.1.5:4470 <-> 209.18.47.62:53
407.4 1% udp 172.16.1.5:47008 <-> 209.18.47.62:53
741.6 1% udp 172.16.1.5:53 <-> 172.16.1.74:43289
663.6 1% udp 172.16.1.5:53 <-> 172.16.1.74:44589
647.7 1% udp 172.16.1.5:53 <-> 172.16.1.74:58223
128.9 0% udp 172.16.1.74:5353 <-> 224.0.0.251:5353
160.7 0% udp6 fe80::21c:bfff:fecf:a798,5353 <-> ff02::fb,5353 The pktstat source code is hosted on Debian's site, or you can get it from SourceArchive.com | {
"source": [
"https://serverfault.com/questions/405647",
"https://serverfault.com",
"https://serverfault.com/users/58595/"
]
} |
406,240 | This is a Canonical Question about Active Directory Group Policy Basics What is Group Policy? How does it work and why should I use it? Note: This is a Question & Answer to new administrator that might not be familiar with how it functions and how powerful it is. | What is Group Policy? Group Policy is a tool that is available to administrators that are running a Windows 2000 or later Active Directory Domain . It allows for centralized management of settings on client computers and servers joined to the domain as well as providing a rudimentary way to distribute software. Settings are grouped into objects called Group Policy Objects (GPOs). GPOs are linked to an Active Directory organizational unit (OU) and can be applied to users and computers. GPOs cannot be applied to groups directly, though you can use security filtering or item-level targeting to filter policy application based on group membership. That's cool, what can it do? Anything. Seriously, you can do anything that you want to users or computers in your domain. There are hundreds of pre-defined settings for things like folder redirection, password complexity, power settings, drive mappings, drive encryption, Windows Update , and so on. Anything that you can't configure via a pre-defined setting you can control via scripting. Batch and VBScript scripts are supported on all supported clients and PowerShell scripts can be run on Windows 7 hosts. Professional tip: You can actually run PowerShell startup scripts on Windows XP and Windows Vista hosts as well as long as they have PowerShell 2.0 installed. You can make a batch file that calls the script with this syntax: powershell Set-ExecutionPolicy RemoteSigned
powershell \\\\server\share\script.ps1
powershell Set-ExecutionPolicy Restricted The first line allows unsigned scripts from remote shares to be run on that host and the second line calls the script from the batch file. The third line sets the policy back to restricted (the default) for maximum security. How are Group Policy Objects applied? GPOs are applied in a predictable order. Local policies are applied first. These are policies set on the local machine via gpedit.msc. Site policies are applied second. Domain policies are applied third, and OU policies are applied fourth. If an object is nested inside of multiple OUs, then the GPOs are applied at the OUs closest to the root first. Keep in mind that if there is a conflict, the last GPO applied "wins." This means, for example, that the policy linked at the OU that a computer resides in will win if there is a conflict between a setting in that GPO and one linked in a parent OU. Logon and Startup Scripts seem cool, how do those work? A logon or startup script can live on any network share as long as the Domain Users and Domain Computers groups have read access to the share that they are on. Traditionally, they reside in \\domain.tld\sysvol , but that's not a requirement. Startup scripts are run when the computer starts up. They are run as the SYSTEM account on the local machine. This means that they access network resources as the computer's account. For example, if you wanted a startup script to have access to a network resource on a share that has the UNC of \\server01\share1 and the computer's name was WORKSTATION01 you would need to make sure that WORKSTATION01$ had access to that share. Since this script is run as system, it can do stuff like install software, modify privileged sections of the registry, and modify most files on the local machine. Logon scripts are run in the security context of the locally logged on user. Hopefully your users aren't administrators, so that means that you won't be able to use these to install software or modify protected registry settings. Logon and startup scripts were a cornerstone of Windows 2003 and earlier domains, but their usefulness has been diminished in later releases of Windows Server. Group Policy Preferences gives administrators a much better way to handle drive and printer mappings, shortcuts, files, registry entries, local group membership and many other things that could only be done in a startup or logon script. If you're thinking that you might need to use a script for a simple task, there's probably a Group Policy or preference for it instead. Nowadays on domains with Windows 7 (or later) clients, only complex tasks require startup or logon scripts. I found a cool GPO, but it applies to users, I want it to apply to computers! Yeah, I know. I've been there. This is especially prevalent in academic lab or other shared computer scenarios where you want some of the user policies for printers or similar resources to be based on the computer, not the user. Guess what, you're in luck! You want to enable the GPO setting for Group Policy Loopback Mode . You're welcome. You said I can use this to install software, right? Yep, you can. There are some caveats, though. The software must be in MSI format, and any modifications to it must be in an MST file. You can make an MST with software like ORCA or any other MSI editor.
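Purely to give a feel for what a transform does at install time, this is roughly the command line that corresponds to deploying a package plus transform (my own illustrative example; the share and file names are made up):

msiexec /i \\server01\software\app.msi TRANSFORMS=\\server01\software\app-custom.mst /q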
If you don't make a transform, your end result will be the same as running msiexec /i <path to software> /q The software is also only installed at startup, so it's not a very fast way of distributing software, but it's free. In a low-budget lab environment, I've made a scheduled task (via GPO) that will reboot every lab computer at midnight with a random 30 minute offset. This will ensure that software is, at a maximum, one day out of date in those labs. Still, software like SCCM , LANDesk , Altiris , or anything else that can "push" software on an on-demand basis is preferable. How often is it applied? Clients refresh their Group Policy Objects every 90 minutes with a 30 minute randomization. That means that, by default, there can be up to a 120 minute wait. Also, some settings, like drive mappings, folder redirection, and file preferences, are only applied on startup or logon. Group Policy is meant for long-term planned management, not for instant quick-fix situations. Domain Controllers refresh their policy every five minutes. | {
"source": [
"https://serverfault.com/questions/406240",
"https://serverfault.com",
"https://serverfault.com/users/10472/"
]
} |
406,606 | Say you're seeing this message: FATAL: Ident authentication failed for user "..." What are the causes of this error message? | It means that Postgres is trying to authenticate a user using the Ident protocol, and can't. Ident auth automatically matches Unix usernames with Postgres usernames. It works like this: You have database role 'foo' on database 'db' Your pg_hba.conf file (in /etc/postgres-something/main ) defines 'Ident' as the protocol to connect to database db for users connecting from certain hosts The unix username making the connection is 'foo' An Ident server running on the machine the user is connecting from confirms that their username really is 'foo' Possible causes and solutions: There is no Ident server running on the machine you're trying to connect from. Test this by trying to connect to it on port 113. If that fails, install an Ident server (eg, sudo apt-get install oidentd ). There's an Ident server, but there's no database role matching the name you're trying to connect with ('foo' in the above example). So create it by connecting somehow to the database with superuser rights and do CREATE ROLE foo . Alternatively add an entry to /etc/postgresql/.../main/pg_ident.conf (or /var/lib/pgsql/12/data or wherever). Maybe the shell username doesn't match the database role. You may be able to test this by connecting to the Ident server while a connection is going on, and passing the right port numbers. Maybe you actually want to connect with a password , not Ident. Edit the pg_hba.conf file appropriately. For example, change: host all all 127.0.0.1/32 ident to host all all 127.0.0.1/32 md5 Be sure to restart Postgres after updating the pg_hba.conf file. You do that by issuing the following command: sudo service postgresql-12 restart | {
"source": [
"https://serverfault.com/questions/406606",
"https://serverfault.com",
"https://serverfault.com/users/68259/"
]
} |
406,791 | I know that head and tail can take the -c option to specify a byte offset. I'm looking for a way to efficiently extract a byte range from a large log file. | The DareDevil of the Unix commands, dd to the rescue! dd if=yourfile ibs=1 skip=200 count=100 That would start from byte 200 and show the next 100 bytes, or in other words, bytes 200-300. ibs means dd only reads one byte at a time instead of the default 512 bytes, but still writes out in default 512 byte chunks. Go and see if ibs harms the performance, I hope not. | {
"source": [
"https://serverfault.com/questions/406791",
"https://serverfault.com",
"https://serverfault.com/users/42474/"
]
} |
406,803 | I have a 4 port Digium card in there, and have 4 lines running smoothly. Now, we added ANOTHER 4 port card 1AEX410PELF and have 4 more analog lines coming into the Trixbox server. It still runs the 4 fine, but what do I need to do to add the additional 4 phone numbers/lines? I want it to act exactly as before, there's nothing special about the new lines. We just need more lines so that when we have 4 out of state customers call, we can have 4 more call and not get the busy signal. Trixbox CE 2.8 bob is the name of the server | The DareDevil of the Unix commands, dd to the rescue! dd if=yourfile ibs=1 skip=200 count=100 That would start from byte 200 and show 100 next bytes, or in other words, bytes 200-300. ibs means dd only reads one byte at a time instead of the default 512 bytes, but still writes out in default 512 byte chunks. Go and see if ibs harms the performance, I hope not. | {
"source": [
"https://serverfault.com/questions/406803",
"https://serverfault.com",
"https://serverfault.com/users/108429/"
]
} |
407,033 | I have a VPS for my website hosting. It is running an Ubuntu server. Every time I log in to my server by ssh, it displays a lengthy welcome message in my terminal. Linux node61.buyvm.net 2.6.18-pony6-3 #1 SMP Tue Mar 13 07:31:44 PDT
2012 x86_64 The programs included with the Debian GNU/Linux system are free
software; the exact distribution terms for each program are described
in the individual files in /usr/share/doc/*/copyright. Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law. Last login: Wed Jul 11 12:08:19 2012 from
113.72.193.52 Linux node61.buyvm.net 2.6.18-pony6-3 #1 SMP Tue Mar 13 07:31:44 PDT 2012 x86_64 The programs included with the Debian GNU/Linux system are free
software; the exact distribution terms for each program are described
in the individual files in /usr/share/doc/*/copyright. Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law. entered into CT 17323
-bash-4.2# After doing some research about this (yes, I was just googling around), I realized that my server should have a .bashrc and .bash_profile (or .profile ) controlling this. I use vim to open my .bashrc and .profile and I couldn't seem to find any line of code that would display a message in my terminal. Therefore I am wondering if there is another file for this? I want to comment out the welcome message because my SFTP is not working, with the error ( Received message too long 761422195 ). I am pretty sure that this error is caused by my server's welcome message. | You need to edit two files: /etc/motd (Message of the Day) /etc/ssh/sshd_config : Change the setting PrintLastLog to "no", this will disable the "Last login" message. And then restart your sshd. | {
"source": [
"https://serverfault.com/questions/407033",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
407,317 | I'm trying to write a configuration script for new servers, and one of the first steps is to install a series of required packages, such as MySQL, phpMyAdmin, etc. using apt-get install However, when dpkg tries to configure them it asks you for a few options, such as MySQL root password, phpMyAdmin passwords, what server to use, etc. Since I will likely be passing this script on to co-workers who are unlikely to read the prompts, and my desire to simply start it and walk away, I'd like to know how to pass in a series of "default" answers/values for it to use. This might include usernames/passwords/other dynamic values passed on command line. -- I realize that having passwords in a script is a security issue, but I'm willing to ignore it, particularly in the more general sense of installing packages that an answer to this would imply. | Use debconf's configuration preseeding. Do a test install to get the values that you want: root@test1:~# apt-get install mysql-server ..and set the root password when prompted during the install. Then you can check what the debconf settings look like for what you just installed (you may need to install debconf-utils ): root@test1:~# debconf-get-selections | grep mysql-server
mysql-server-5.5 mysql-server/root_password_again password
mysql-server-5.5 mysql-server/root_password password
mysql-server-5.5 mysql-server/error_setting_password error
mysql-server-5.5 mysql-server-5.5/postrm_remove_databases boolean false
mysql-server-5.5 mysql-server-5.5/start_on_boot boolean true
mysql-server-5.5 mysql-server-5.5/nis_warning note
mysql-server-5.5 mysql-server-5.5/really_downgrade boolean false
mysql-server-5.5 mysql-server/password_mismatch error
mysql-server-5.5 mysql-server/no_upgrade_when_using_ndb error There's some noise there, but the important part is the password settings. Then, for a fresh install, you can avoid the prompts completely by setting the password beforehand: root@test2:~# echo "mysql-server-5.5 mysql-server/root_password_again password Som3Passw0rd" | debconf-set-selections
root@test2:~# echo "mysql-server-5.5 mysql-server/root_password password Som3Passw0rd" | debconf-set-selections
root@test2:~# apt-get install mysql-server No prompts at all during that install. | {
"source": [
"https://serverfault.com/questions/407317",
"https://serverfault.com",
"https://serverfault.com/users/123070/"
]
} |
407,954 | I've just deployed an update to an existing ASP.NET MVC3 site (it was already configured) and I'm getting the IIS blue screen of death stating HTTP Error 500.0 - Internal Server Error The page cannot be displayed because an internal server error has occurred. However, there is nothing showing up in the Application Event Log where I would expect to see a (more) detailed description of the entry. How can I go about diagnosing this issue? | Take a look at IIS7's Failed Request Tracing feature: Troubleshooting Failed Requests Using Tracing in IIS 7 Troubleshoot with Failed Request Tracing The other thing I would do is tweak your <httpErrors> setting because IIS may be swallowing an error message from further up the pipeline: <configuration>
<system.webServer>
<httpErrors existingResponse="PassThrough" />
</system.webServer>
</configuration> If the site is written in Classic ASP then be sure to turn on the Send Errors to Browser setting in the ASP configuration feature: And finally, if you're using Internet Explorer then make sure you've turned off Show friendly HTTP error messages in the Advanced settings (though I suspect you've done that already or are using a different browser). | {
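Since the application is ASP.NET, the framework's own error page can also mask the real message for remote clients. A hedged example of the standard setting that turns that off while you debug (this is the stock ASP.NET customErrors element, not anything specific to this site; don't leave it off in production):

<configuration>
  <system.web>
    <customErrors mode="Off" />
  </system.web>
</configuration>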
"source": [
"https://serverfault.com/questions/407954",
"https://serverfault.com",
"https://serverfault.com/users/1653/"
]
} |