source_id | question | response | metadata |
---|---|---|---|
109,970 | I was running out of space on an Ubuntu server, so I ran this command to save space: sudo rm -rf /var/cache/apt/archives However, now when trying to do things with apt, I get the following errors: E: Could not open lock file /var/cache/apt/archives/lock - open (2: No such file or directory)
E: Unable to lock the download directory and things like Archive directory /var/cache/apt/archives/partial is missing. Clearly I have removed some directory structure. Is there some way to do an apt-get rebuild-var-tree or similar? | You need two things there: sudo mkdir -p /var/cache/apt/archives/partial
sudo touch /var/cache/apt/archives/lock
sudo chmod 640 /var/cache/apt/archives/lock Removing this directory manually is generally a bad idea. To clean out the archives properly, use: sudo apt-get clean | {
"source": [
"https://serverfault.com/questions/109970",
"https://serverfault.com",
"https://serverfault.com/users/8950/"
]
} |
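(A small follow-up sketch, assuming a standard Debian/Ubuntu apt layout: after recreating the directories as above, apt can verify itself.)

```bash
# Recreate the expected directory structure, then let apt confirm it is usable again
sudo mkdir -p /var/cache/apt/archives/partial
sudo touch /var/cache/apt/archives/lock
sudo chmod 640 /var/cache/apt/archives/lock
sudo apt-get check     # reports broken dependencies or remaining lock problems, if any
sudo apt-get update    # repopulates the package lists
```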
110,152 | I just bought a new VPS and I basically need to set up a LAMP(HP) stack, I'm deciding between CentOS and Ubuntu as my operating system but I also need a hosting control panel to ease the system administrator tasks involved in hosting a website since I'm not much of a Linux guru ... I've used cPanel and Plesk in the past to host hundreds of virtual hosts on some dedicated servers and besides the license price I have nothing to complain about. I've also used OpenPanel in the last dedicated server I bought to host about 5 websites, the interface is quite nice but there are still some minor bugs / lack of features and I also think the project has stalled because the last release was back in July, 2008. I've also asked a related question about free hosting control panel alternatives, but honestly the answers were not very helpful to me. Having considered all the suggestions I've found [User|Web|Virtual]min to be the most appropriate for me, I've already installed and tried Virtualmin (it also installs Webmin) and it seems to do the job, but since I'm running on a resource-limited VPS I want to know what the differences between these 3 solutions are - I only need to host and manage one website in the VPS. Between Usermin, Webmin and Virtualmin, which one does the job and is least resource-hungry? | Webmin is a Perl-based (not Apache-based) administration interface which, unlike cPanel, allows you to control every aspect of your server, either visually or manually, through the use of web forms. It also features a cool Java file manager which allows you to get a visual idea of what's on your HDDs and it can perform basic file operations on them. In terms of security, you can restrict access to its interface by specifying a list of IPs or classes of IPs. If you intend to handle multiple domains then Virtualmin (it's a module for Webmin) is the best choice as it allows you to manage a domain in a centralized way, that is, it automatically takes care of DNS zones, email aliases and Apache vhosts. Of course, you can fine-tune BIND, Apache and the mail server by using either the visual or the manual configuration. If you intend to give others access to the server then Usermin is a good choice as it allows normal users to access the SQL server, email server and more, but be careful which modules you activate; that is, don't enable modules unless you intend to use them. Support: Webmin offers good support for Ubuntu and it can give you good information about outdated packages plus the possibility to update them. It also has a couple of modules which were specially designed for Ubuntu administration tasks. | {
"source": [
"https://serverfault.com/questions/110152",
"https://serverfault.com",
"https://serverfault.com/users/17287/"
]
} |
110,154 | I have just installed postgres 8.4 on Ubuntu 9.10 and it has never asked me to create a superuser. Is there a default superuser and its password? If not, how do I create a new one? | CAUTION The answer about changing the UNIX password for "postgres" through "$ sudo passwd postgres" is not preferred, and can even be DANGEROUS ! This is why: By default, the UNIX account "postgres" is locked, which means it cannot be logged in using a password. If you use "sudo passwd postgres", the account is immediately unlocked. Worse, if you set the password to something weak, like "postgres", then you are exposed to a great security danger. For example, there are a number of bots out there trying the username/password combo "postgres/postgres" to log into your UNIX system. What you should do is follow Chris James 's answer: sudo -u postgres psql postgres
# \password postgres
Enter new password: To explain it a little bit: there are usually two default ways to log in to a PostgreSQL server: By running the "psql" command as a UNIX user (so-called IDENT/PEER authentication), e.g.: sudo -u postgres psql . Note that sudo -u does NOT unlock the UNIX user. By TCP/IP connection using PostgreSQL's own managed username/password (so-called TCP authentication), i.e., NOT the UNIX password. So you never want to set the password for the UNIX account "postgres". Leave it locked as it is by default. Of course things can change if you configure it differently from the default setting. For example, one could sync the PostgreSQL password with the UNIX password and only allow local logins. That would be beyond the scope of this question. | {
"source": [
"https://serverfault.com/questions/110154",
"https://serverfault.com",
"https://serverfault.com/users/16033/"
]
} |
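(If you prefer a single non-interactive command over the \password prompt, a minimal sketch — the password string is just a placeholder, and note that it will end up in your shell history.)

```bash
# Set the PostgreSQL (not UNIX) password for the database role "postgres" in one step
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'choose-a-strong-password';"
```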
110,436 | I've launched something that took lots of memory and now everything lags a lot.
I guess all applications' memory has gone to swap in order to free some space for the memory-intensive process, and now everything is slowly returning to RAM when accessed. Is there a way to explicitly move everything possible from swap back to RAM? Or maybe not everything, but just some particular process's data? | I'd recommend letting the normal Linux memory management swap things back in as they are actually used. The only thing I can think of is to turn swap off, then on again: sudo swapoff -a
sudo swapon -a That assumes you have enough spare physical memory to contain everything in swap... | {
"source": [
"https://serverfault.com/questions/110436",
"https://serverfault.com",
"https://serverfault.com/users/12097/"
]
} |
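(Before cycling swap like that, a quick sanity check that what is swapped out will actually fit in free RAM — a rough sketch.)

```bash
# Compare swap "used" against free/cached RAM before forcing everything back in
free -m
# If it looks safe, cycle swap to pull everything back into RAM:
sudo swapoff -a && sudo swapon -a
```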
110,593 | Is there a way to find out which partition a directory is located in? I know I can use df to list partitions and mount points but I need to be able to find out which partition any directory is located in with a simple command. | df -h . gives you: $ df -h .
Filesystem Size Used Avail Capacity Mounted on
/dev/disk0s2 1.4Ti 390Gi 1.0Ti 28% / so you have the 'Mounted on' entry for that directory | {
"source": [
"https://serverfault.com/questions/110593",
"https://serverfault.com",
"https://serverfault.com/users/31749/"
]
} |
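(If you only want the device name rather than the whole df table, a small portable sketch; the directory is just an example.)

```bash
# Print only the filesystem/device that holds a given directory
dir=/var/log
df -P "$dir" | awk 'NR==2 {print $1}'
```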
110,699 | I'm transferring files between servers and just started noticing that some of them are getting modified to be one long continuous line as opposed to having the returns and line-breaks they originally had. I'm assuming this has something to do with the transfer-type of my FTP Client which was originally set to "Auto," but sporting "Binary" and "ASCII" as additional options. In short, what are the differences between the ways I transfer a file from one server to another, and will these differences be capable of modifying the file in such a way as I mentioned above? Transferring FROM Windows TO Linux. | The "Binary" transfer mode of FTP copies files exactly, byte for byte. Simple and straightforward. When bringing text files between different operating systems, though, this might not be what you want -- different operating systems use different codes to represent line breaks. The "ASCII" mode exists for this purpose: it automatically translates all line endings from the source system's format to the destination's. Not sure about "Auto", but I imagine it looks at the file's extension or something similar to decide whether it's a text file, and tries to guess the appropriate mode. Which mode you want depends on exactly what you're doing with the files... if you're just copying them to back them up, then you'll probably want to copy in binary mode so they'll be exactly the same when you later restore them to the Windows server again. If they need to be usable as text files (perhaps as config files for a cross-platform program?) on both sides, you'll want to use ASCII mode to translate them. EDIT: As far as I can tell, FTPing files from Windows to Linux should never result in line breaks disappearing... however, if you copy them in ASCII mode, and then bring them back to the Windows server in binary mode, the Linux-style line endings might not be recognized on the Windows box. (Notepad won't see them; Wordpad will; YMMV with other software.) (Today, such a convenience -- converting line endings automatically -- might seem odd in such a basic protocol as FTP. When FTP was invented, though, sending text files was the norm, and one of the goals of the protocol was to make this as easy as possible.) | {
"source": [
"https://serverfault.com/questions/110699",
"https://serverfault.com",
"https://serverfault.com/users/9364/"
]
} |
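(If some files have already been transferred in the wrong mode, the line endings can be repaired afterwards on the Linux side — a sketch assuming GNU sed; dos2unix does the same job if installed. The paths are examples only.)

```bash
# Strip the Windows carriage returns (\r) from line ends, editing the file in place
sed -i 's/\r$//' config.txt
# Or for every .txt file under a directory:
find /path/to/files -name '*.txt' -exec sed -i 's/\r$//' {} +
```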
110,725 | What command would you use in cmd.exe to find the number of files in the current directory? Is there a powershell option here? Update : I was hoping to avoid dir , as I know there are 10,000+ files in the current directory. Wanted to avoid the enumeration output to the cmd window. Thank you! | If you want to do it with cmd , then the following is the trivial way to do it: set count=0 & for %x in (*) do @(set /a count+=1 >nul)
echo %count% That's assuming the command line. In a batch file you would do @echo off
setlocal enableextensions
set count=0
for %%x in (*) do set /a count+=1
echo %count%
endlocal which does things a little nicer. You can drop the >nul in a batch, since set /a won't display the result if run from a batch file—it does directly from the command line. Furthermore the % sign in the for loop has to be doubled. I've seen quite a few instances where people try nifty tricks with find /c . Be very careful with those, as various things can break this. Common mistakes: Using find /c /v and try finding something that is never included in a file name, such as :: . Won't. Work. Reliably. When the console window is set to raster fonts then you can get those character combination. I can include characters in a file name such as : , ? , etc. in their full-width variants for example, which will then get converted to their normal ASCII counterparts which will break this. If you need an accurate count, then don't try this. Using find /c and try finding something that is always included in a file name. Obviously the dot ( . ) is a poor choice. Another answer suggests dir /a-d | find /c ":" which assumes several things about the user's locale, not all of which are guaranteed to be true (I've left a comment detailing the problems there) and returns one result too much. Generally, you want to use find on dir /b which cuts away all the non-filename stuff and avoids fencepost errors that way. So the elegant variant would be: dir /b /a-d | find /c /v "" which will first output all file names, one line each. And then count all lines of that output which are not empty. Since the file name can't be empty (unless I'm missing something, but Unicode will not trip this up according to my tests). | {
"source": [
"https://serverfault.com/questions/110725",
"https://serverfault.com",
"https://serverfault.com/users/658/"
]
} |
110,791 | On my Windows 2008 server I have a network share. I am logged on to the server with full administrator rights. I would like to know what users have active connections to that share. How do I find that information? | There are two ways to go about this that I know of. One is unreliable, but probably good enough for most scenarios. One is extensive, but hard to implement at any scale that exceeds a handful of user connections. The Kinda-Sorta Way: In Computer Management, select System Tools >> Shared Folders >> Open Files to see what files are open on the file server. From there you can correlate the user accounts that have open files to the shares that they are connected to. However, that can be insufficient. Don't believe me? Go into Computer Management and select System Tools >> Shared Folders >> Sessions to see who is connected. Then look at the # open files column. Some sessions should have 0 open files. How do you know what share they are technically connected to? I'm glad you asked... The Extensive but Hard to Scale Way: Perform net share [sharename] on each share in question to get a list of the users that are connected to it. In my testing, even users that have no open file are listed. You can also utilize the Share and Storage Manager administrative tool in Server 2008 and beyond instead of Computer Management. Find the share in the list of shares, and then in the action pane to the right, click "Manage Sessions." You will see a list of sessions including those that have zero open files. But... but... I want to find a specific user without querying each share! If you have a specific user that you want to track down, it appears that your only means of finding that information is to query each share and eyeball it to find the user you want. And by eyeball I mean piping output to findstr or select-string . One could extrapolate the workflow to a script that enumerates all available shares, queries for connected users, and searches the output for the user in question, but that appears to be an exercise for the reader and not something that Microsoft has included as a native feature. | {
"source": [
"https://serverfault.com/questions/110791",
"https://serverfault.com",
"https://serverfault.com/users/33171/"
]
} |
111,064 | I don't want to comment out the line in /etc/sudoers : Defaults requiretty Instead, I only want a certain user not to require a tty.
How can this be done? | You said that you want one particular user to not require a tty. That's the default behavior. Nevertheless, you can explicitly set that like this: Defaults:username !requiretty If you want everyone else to require a tty, then you'll have to uncomment the line. | {
"source": [
"https://serverfault.com/questions/111064",
"https://serverfault.com",
"https://serverfault.com/users/27451/"
]
} |
111,151 | How can I compare two directories with subdirectories to see where the differences are? | Under Linux: $ diff -r /first/directory /second/directory Under Windows: you'd probably be better off downloading and installing WinMerge, then > WinMerge /r c:\first\folder c:\second\folder M | {
"source": [
"https://serverfault.com/questions/111151",
"https://serverfault.com",
"https://serverfault.com/users/10683/"
]
} |
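(A brief variant of the same Linux command if you only want to see which files differ, without printing every content difference.)

```bash
# -q reports only whether files differ; -r recurses into subdirectories
diff -rq /first/directory /second/directory
```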
111,358 | I figured I'd share my question here and then answer, as there seem to be many people stuck in my position - but no definitive answer. The problem is, if you apt-get remove mysql-server, it does not clean up the configuration and database files, so if you've somehow screwed them up, then installing again will not replace them. So there seem to be many people asking "how do I completely remove mysql-server, so that I can reinstall afresh?" -- everyone answers with apt-get remove --purge mysql-server -- I'm not sure why, but this does not fully uninstall. My answer follows... | Removing mysql-server does not work because mysql-server is just a metapackage that depends on the specific server version. apt-get remove --purge 'mysql-.*' or apt-get remove --purge 'mysql-server.*' will do the trick. | {
"source": [
"https://serverfault.com/questions/111358",
"https://serverfault.com",
"https://serverfault.com/users/17170/"
]
} |
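(A small sketch for checking what is actually left behind before and after the purge.)

```bash
# List installed (ii) and residual-config (rc) mysql packages, then any leftover data/config dirs
dpkg -l 'mysql*' | grep -E '^(ii|rc)'
ls -ld /etc/mysql /var/lib/mysql 2>/dev/null
```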
111,360 | I have set up a subversion server through Apache http (mod_dav_svn). I created a Subversion project at /usr/local/svn. I can check out the project via svn co http://host/svn . However, when I try to commit, I get this error: svn: Can't create directory '/usr/local/svn/db/transactions/0-1.txn': Permission denied. I tried changing the owner and permissions of /usr/local/svn (the repository) as follows: chown -R apache.apache /usr/local/svn
chmod -R g-w /usr/local/svn Unfortunately this does not solve the problem. Please help! | removing mysql-server does not work because mysql-server is just a metapackage that depends on the specific server version apt-get remove --purge 'mysql-.*' or apt-get remove --purge 'mysql-server.*' will do the trick. | {
"source": [
"https://serverfault.com/questions/111360",
"https://serverfault.com",
"https://serverfault.com/users/13573/"
]
} |
111,368 | I am looking at deploying SNMP Settings for windows servers via Group Policy and have an administrative template prepared. However I am noting strange behaviour when applying the policy. Two Community strings are created, with different permissions. ie: MYCOMMUNITY - READ ONLY
MYCOMMUNITY - READ CREATE The Windows Registry also shows two values being created in HKLM\Software\Policies\SNMP\Parameters\ValidCommunities
NAME TYPE DATA
1 REG_SZ MYCOMMUNITY
MYCOMMUNITY REG_DWORD 0x00000010 (16) It is the latter of these two registry values that generates the READ CREATE community that I want, yet I seem to be unable to stop the first string entry from being generated. My question is Whether READ CREATE Permissions will take precedence over READ ONLY permissions, or will there be "Random" behaviour with this configuration? Thanks | removing mysql-server does not work because mysql-server is just a metapackage that depends on the specific server version apt-get remove --purge 'mysql-.*' or apt-get remove --purge 'mysql-server.*' will do the trick. | {
"source": [
"https://serverfault.com/questions/111368",
"https://serverfault.com",
"https://serverfault.com/users/54805/"
]
} |
111,525 | I'm having an issue on my server when working with my VM guests, and I think its due to a recently installed update. What is the correct command to uninstall Windows Updates from either the command prompt, or Powershell? | To obtain a list of installed patches you can do: wmic qfe list To uninstall a listed patch, you do: wusa /uninstall /kb:<kbnumber> Here are some links with more information: http://www.systemcentercentral.com/BlogDetails/tabid/143/indexid/57960/Default.aspx http://support.microsoft.com/kb/934307 http://technet.microsoft.com/en-us/library/dd883262(WS.10).aspx Note: the 934307 KB article says that you can't use /uninstall on Windows 2008 - this does not apply to Windows 2008 R2 - they enabled the uninstall switch on R2 (see the last link). | {
"source": [
"https://serverfault.com/questions/111525",
"https://serverfault.com",
"https://serverfault.com/users/9007/"
]
} |
111,609 | I have a job which runs forever the moment it starts.
So I want to start it only once after entering it into the "crontab -e" file and saving it, or whenever a reboot happens. How can I achieve this? | If you want a command to run once at a later date, use the at command. If you want a command to be run once at system boot, the correct solution is to use either: system RC scripts (/etc/rc.local) crontab with the @reboot special prefix (see manpage) The latter is the only option for a non-root user. | {
"source": [
"https://serverfault.com/questions/111609",
"https://serverfault.com",
"https://serverfault.com/users/31480/"
]
} |
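(For the crontab route, the entry is just the @reboot prefix followed by the command — a minimal sketch; the script path is only an example.)

```bash
# crontab -e, then add a line like:
@reboot /home/user/bin/long-running-job.sh
```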
111,650 | I'm running a Windows XP desktop in a corporate environment. How can I find out what AD groups I belong to? | Try running gpresult /R for RSoP summary or gpresult /V for verbose output from the command line as an administrator on the computer. It should output something like this: C:\Windows\system32>gpresult /V
Microsoft (R) Windows (R) Operating System Group Policy Result tool v2.0
Copyright (C) Microsoft Corp. 1981-2001
Created On 2/10/2010 at 10:27:41 AM
RSOP data for OQMSupport01\- on OQMSUPPORT01 : Logging Mode
------------------------------------------------------------
OS Configuration: Standalone Workstation
OS Version: 6.1.7600
Site Name: N/A
Roaming Profile: N/A
Local Profile: C:\Users\-
Connected over a slow link?: No
COMPUTER SETTINGS
------------------
Last time Group Policy was applied: 2/10/2010 at 10:16:09 AM
Group Policy was applied from: N/A
Group Policy slow link threshold: 500 kbps
Domain Name: OQMSUPPORT01
Domain Type: <Local Computer>
Applied Group Policy Objects
-----------------------------
N/A
The following GPOs were not applied because they were filtered out
-------------------------------------------------------------------
Local Group Policy
Filtering: Not Applied (Empty)
The computer is a part of the following security groups
-------------------------------------------------------
System Mandatory Level
Everyone
Debugger Users
IIS_WPG
SQLServer2005MSSQLUser$OQMSUPPORT01$ACT7
SQLServerMSSQLServerADHelperUser$OQMSUPPORT01
BUILTIN\Users
NT AUTHORITY\SERVICE
CONSOLE LOGON
NT AUTHORITY\Authenticated Users
This Organization
BDESVC
BITS
CertPropSvc
EapHost
hkmsvc
IKEEXT
iphlpsvc
LanmanServer
MMCSS
MSiSCSI
RasAuto
RasMan
RemoteAccess
Schedule
SCPolicySvc
SENS
SessionEnv
SharedAccess
ShellHWDetection
wercplsupport
Winmgmt
wuauserv
LOCAL
BUILTIN\Administrators
USER SETTINGS
--------------
Last time Group Policy was applied: 2/10/2010 at 10:00:51 AM
Group Policy was applied from: N/A
Group Policy slow link threshold: 500 kbps
Domain Name: OQMSupport01
Domain Type: <Local Computer>
The user is a part of the following security groups
---------------------------------------------------
None
Everyone
Debugger Users
HomeUsers
BUILTIN\Administrators
BUILTIN\Users
NT AUTHORITY\INTERACTIVE
CONSOLE LOGON
NT AUTHORITY\Authenticated Users
This Organization
LOCAL
NTLM Authentication
High Mandatory Level
The user has the following security privileges
----------------------------------------------
Bypass traverse checking
Manage auditing and security log
Back up files and directories
Restore files and directories
Change the system time
Shut down the system
Force shutdown from a remote system
Take ownership of files or other objects
Debug programs
Modify firmware environment values
Profile system performance
Profile single process
Increase scheduling priority
Load and unload device drivers
Create a pagefile
Adjust memory quotas for a process
Remove computer from docking station
Perform volume maintenance tasks
Impersonate a client after authentication
Create global objects
Change the time zone
Create symbolic links
Increase a process working set Or if you are logged in to a Windows Server OS with the ActiveDirectory PowerShell Module (or Client OS with the Remote Server Administration Tools) try the Get-ADPrincipalGroupMembership cmdlet: C:\Users\username\Documents> Get-ADPrincipalGroupMembership username | Select name
name
----
Domain Users
All
Announcements
employees_US
remotes
ceo-report
all-engineering
not-sales
Global-NotSales | {
"source": [
"https://serverfault.com/questions/111650",
"https://serverfault.com",
"https://serverfault.com/users/2100/"
]
} |
111,766 | Is there a way to force puppet to do certain things first? For instance, I need it to install an RPM on all servers to add a yum repository (IUS Community) before I install any of the packages. | If you want to make sure a repository is installed on all your servers then I would suggest something like this: node default {
include base
}
class base {
yumrepo { "IUS":
baseurl => "http://dl.iuscommunity.org/pub/ius/stable/$operatingsystem/$operatingsystemrelease/$architecture",
descr => "IUS Community repository",
enabled => 1,
gpgcheck => 0
}
} Then, for any node that extends base you can say class foo {
package { "bar": ensure => installed, require => Yumrepo["IUS"] }
} This will ensure that: the package bar will not be installed unless the IUS repository is defined, and the package will not attempt to install before the IUS repository is defined. | {
"source": [
"https://serverfault.com/questions/111766",
"https://serverfault.com",
"https://serverfault.com/users/30142/"
]
} |
111,954 | I've set up an NTP client for my DC to sync time with time.windows.com but I want it to query the NTP server at least twice a day. I made all the changes via the registry; is there a polling-interval registry value I can amend, and how? | You have the list of registry values here . Referring to this, try setting the following values: SpecialPollInterval : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient Explanation: Version: Windows XP, Windows Vista, Windows Server 2003, and Windows Server 2008 This entry specifies the special poll interval in seconds for manual peers. When the SpecialInterval 0x1 flag is enabled (see next key: NtpServer), W32Time uses this poll interval instead of a poll interval determined by the operating system. The default value on domain members is 3,600 (1 hour). The default value on stand-alone clients and servers is 604,800 (7 days). NtpServer : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters Explanation: Version: Windows Server 2003 and Windows Server 2008 This entry specifies a space-delimited list of peers from which a computer obtains time stamps, consisting of one or more DNS names or IP addresses per line. Each DNS name or IP address listed must be unique. Computers connected to a domain must synchronize with a more reliable time source, such as the official U.S. time clock. 0x01 SpecialInterval There is no default value for this registry entry on domain members. The default value on stand-alone clients and servers is time.windows.com,0x1. UpdateInterval : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config Explanation: Version: Windows XP, Windows Vista, Windows Server 2003, and Windows Server 2008 This entry specifies the number of clock ticks between phase correction adjustments. The default value for domain controllers is 100. The default value for domain members is 30,000. The default value for stand-alone clients and servers is 360,000. MinPollInterval : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config Explanation: Version: Windows XP, Windows Vista, Windows Server 2003, and Windows Server 2008 This entry specifies the smallest interval, in log2 seconds, allowed for the system polling interval. Note that while a system does not request samples more frequently than this, a provider can produce samples at times other than the scheduled interval. The default value for domain controllers is 6. The default value for domain members is 10. The default value for stand-alone clients and servers is 10. | {
"source": [
"https://serverfault.com/questions/111954",
"https://serverfault.com",
"https://serverfault.com/users/8987/"
]
} |
112,292 | I was researching different load balancing algorithms for HTTP and I just found 3: Random, Round Robin and Weighted Round Robin. Are there any other options? Thanks
Paul | The most common load balancing algorithms for HTTP load balancers are IMHO: Round Robin (sometimes called "Next in Loop"). Weighted Round Robin -- as Round Robin, but some servers get a larger share of the overall traffic. Random . Source IP hash. Connections are distributed to backend servers based on the source IP address. If a webnode fails and is taken out of service the distribution changes. As long as all servers are running a given client IP address will always go to the same web server. URL hash. Much like source IP hash, except hashing is done on the URL of the request. Useful when load balancing in front of proxy caches, as requests for a given object will always go to just one backend cache. This avoids cache duplication, having the same object stored in several / all caches, and increases effective capacity of the backend caches. Least connections , weighted least connections. The load balancer monitors the number of open connections for each server, and sends to the least busy server. Least traffic , weighted least traffic. The load balancer monitors the bitrate from each server, and sends to the server that has the least outgoing traffic. Least latency . Perlbal makes a quick HTTP OPTIONS request to backend servers, and sends the request to the first server to answer. Arguably the above aren't algorithms in a strict computer science sense, they're more general descriptions of common approaches. Here is one little paper from Cisco which describes some of the algorithms they use in more detail . Implementations from other vendors will be slightly different. There are edge cases where the more exotic algorithms are useful -- for example video streaming may lend itself well to "least traffic". But generally speaking, for most web applications and web sites, the optimal solution is: A shared / distributed session system , so that any webnode can answer any user request (i.e. user session data such as session cookies is equally available to all servers). Load balancing using Round Robin (optionally Weighted Round Robin) or Random distribution. Round Robin and Random are simple and resilient algorithms without any 'hot spot' problems, i.e. the load distribution to backends remains fair in all situations. | {
"source": [
"https://serverfault.com/questions/112292",
"https://serverfault.com",
"https://serverfault.com/users/2506/"
]
} |
112,457 | Is there a way to remotely tail 2 files? I have two servers (A and B) behind a load balancer and I would like to do something like this if possible: tail -f admin@serverA:~/mylogs/log admin@serverB:~/mylogs/log Thanks! | This worked for me: ssh -n user@hostname1 'tail -f /mylogs/log' &
ssh -n user@hostname2 'tail -f /mylogs/log' & | {
"source": [
"https://serverfault.com/questions/112457",
"https://serverfault.com",
"https://serverfault.com/users/13909/"
]
} |
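(A small sketch generalising the same idea to any number of hosts, with each output line labelled by its origin; the hostnames and log path are taken from the question, and the sed prefix is purely for readability.)

```bash
#!/bin/bash
# Tail the same log on several servers at once, prefixing each line with its host
for h in serverA serverB; do
  ssh -n "admin@$h" 'tail -f ~/mylogs/log' | sed "s/^/[$h] /" &
done
wait   # Ctrl-C stops the background ssh sessions
```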
112,542 | Possible Duplicate: How to find out details about hardware on the Linux machine? How can I get processor/RAM/disk specs from the Linux command line? | CPU: $ cat /proc/cpuinfo Memory: $ free $ cat /proc/meminfo HDD: $ df -h $ sudo fdisk -l $ hdparm -i /dev/device (for example sda1, hda3...) | {
"source": [
"https://serverfault.com/questions/112542",
"https://serverfault.com",
"https://serverfault.com/users/26257/"
]
} |
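(The same information can be pulled into a one-screen summary — a rough sketch assuming /proc and GNU coreutils; df --total needs a reasonably recent coreutils.)

```bash
#!/bin/bash
# Quick hardware summary from /proc and df
echo "CPUs: $(grep -c ^processor /proc/cpuinfo)"
echo "RAM : $(awk '/MemTotal/ {printf "%d MB", $2/1024}' /proc/meminfo)"
df -h --total | tail -1    # combined size/used/avail of all mounted filesystems
```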
112,711 | How can I get CPU count and total RAM from the OS X command line? | You can get this from the system_profiler tool: system_profiler SPHardwareDataType | grep " Memory:"
system_profiler SPHardwareDataType | grep Cores:
system_profiler SPHardwareDataType | grep Processors: or, if you want to go low-level, use sysctl : sysctl hw.memsize
sysctl hw.ncpu Or to capture the values in a script (credit: @bleater): mem_size=$(sysctl -n hw.memsize)
cpus_virtual=$(sysctl -n hw.ncpu) btw, there are a bunch of other interesting things you can get from sysctl . Try: sysctl -a | grep cpu to see a few of them | {
"source": [
"https://serverfault.com/questions/112711",
"https://serverfault.com",
"https://serverfault.com/users/26257/"
]
} |
112,795 | I have Googled for a solution for quite some time, but couldn't find an answer. I am on Ubuntu Linux and want to run a server on port 80, but due to a security mechanism of Ubuntu, I get the following error: java.net.BindException: Permission denied:80 I think it should be simple enough to either disable this security mechanism so that port 80 is available to all users or to assign the required privileges to the current user to access port 80. | Short answer: you can't. Ports below 1024 can be opened only by root. As per comment - well, you can, using CAP_NET_BIND_SERVICE , but that approach, applied to the java binary, will make any Java program run with this setting, which is undesirable, if not a security risk. The long answer: you can redirect connections on port 80 to some other port you can open as a normal user. Run as root: # iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080 As loopback devices (like localhost) do not use the prerouting rules, if you need to use localhost, etc., add this rule as well ( thanks @Francesco ): # iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080 NOTE: The above solution is not well suited for multi-user systems, as any user can open port 8080 (or any other high port you decide to use), thus intercepting the traffic. (Credits to CesarB ). EDIT: as per comment question - to delete the above rule: # iptables -t nat --line-numbers -n -L This will output something like: Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
1 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 redir ports 8088
2 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8080 The rule you are interested in is nr. 2, so to delete it: # iptables -t nat -D PREROUTING 2 | {
"source": [
"https://serverfault.com/questions/112795",
"https://serverfault.com",
"https://serverfault.com/users/32722/"
]
} |
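(An alternative sketch to the iptables redirect, on systems that support file capabilities: grant just one binary the right to bind low ports. The same caveat as above applies — applied to the system-wide java binary it affects every Java program, so point it at a dedicated copy; the path below is only an example, and setcap comes from the libcap2-bin package on Debian/Ubuntu.)

```bash
# Allow this specific binary to bind ports below 1024 without running as root
sudo setcap 'cap_net_bind_service=+ep' /opt/myapp/jre/bin/java
getcap /opt/myapp/jre/bin/java    # verify the capability was applied
```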
112,806 | I have several erros in the system event log of my single Windows 2003 SP2 domain controller. Multiple member computers on the domain are listed in these errors. I am seeing two similar errors for each computer one second apart in the event log. Event ID 7 Source KDC The Security Account Manager failed a
KDC request in an unexpected way. The
error is in the data field. The
account name was [email protected] and lookup type
0x8. followed by Event ID 7 Source KDC The Security Account Manager failed a
KDC request in an unexpected way. The
error is in the data field. The
account name was MEMBERNAME$ and lookup type
0x8. The Lookup Types are also different, I have 0x8, 0x28, 0x0, 0x20. I am also receiving other authentication errors in the same time frame as all of the KDC errors Event ID 5722 Source NETLOGON The session setup from the computer
MEMBERNAME failed to authenticate. The
name(s) of the account(s) referenced
in the security database is MEMBERNAME$.
The following error occurred: Access
is denied. I have run dcdiag /v to see if there was something wrong with Active Directory, but all tests passed.
I also ran netdiag /v and it appears all of those tests ran. Any ideas on where to start for this issue? Thank you, Keith | Short answer: you can't. Ports below 1024 can be opened only by root. As per comment - well, you can, using CAP_NET_BIND_SERVICE , but that approach, applied to the java binary, will make any Java program run with this setting, which is undesirable, if not a security risk. The long answer: you can redirect connections on port 80 to some other port you can open as a normal user. Run as root: # iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080 As loopback devices (like localhost) do not use the prerouting rules, if you need to use localhost, etc., add this rule as well ( thanks @Francesco ): # iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080 NOTE: The above solution is not well suited for multi-user systems, as any user can open port 8080 (or any other high port you decide to use), thus intercepting the traffic. (Credits to CesarB ). EDIT: as per comment question - to delete the above rule: # iptables -t nat --line-numbers -n -L This will output something like: Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
1 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 redir ports 8088
2 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8080 The rule you are interested in is nr. 2, so to delete it: # iptables -t nat -D PREROUTING 2 | {
"source": [
"https://serverfault.com/questions/112806",
"https://serverfault.com",
"https://serverfault.com/users/991/"
]
} |
113,188 | I am tweaking my homepage for performance; currently it handles about 200 requests/second on 3.14.by, which eats 6 SQL queries, and 20 req/second on 3.14.by/forum, which is a phpBB forum. Strangely enough, the numbers are about the same on a VPS and a dedicated Atom 330 server. Server software is the following: Apache2+mod_php prefork with 4 children (tried different numbers here), php5, APC, nginx, memcached for PHP session storage. MySQL is configured to eat about 30% of available RAM (~150Mb on the VPS, 700Mb on the dedicated server). This looks like there is a bottleneck somewhere not allowing me to go higher; any suggestions? (i.e. I know that doing less than 6 SQL's would make it faster, but this does not look like a limiting factor, as sqld eats no more than a few % in top due to cached queries) Has anyone tested whether kicking preforked apache2 and leaving just nginx+php is much faster? Some more benchmarks: Small 40-byte static file: 1484 r/s via nginx+apache2, 2452 if we talk to apache2 directly.
Small "Hello world" php script: 458 r/s via nginx+apache2. Update: It appears the bottleneck is MySQL performance on cached data.
Page with a single SQL query shows 354 req/sec, with 6 SQL's - 180 req/sec.
What do you think I can tweak here? (I can fork out 100-200Mb for MySQL) [client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
default-character-set=cp1251
collation-server=cp1251_general_cs
skip-character-set-client-handshake
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 8M
thread_stack = 64K
thread_cache_size = 16
sort_buffer_size = 8M
read_buffer_size = 1M
myisam-recover = BACKUP
max_connections = 650
table_cache = 256
thread_concurrency = 10
query_cache_limit = 1M
query_cache_size = 16M
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 8M
[mysql]
[isamchk]
key_buffer = 8M
!includedir /etc/mysql/conf.d/ | Obviously, there is a lot you can try. Your best bet is chasing your logs for queries that don't use indexes (enable logs for those) and other non-optimized queries. I have compiled a huge list of performance related options over the years, so I've included a small subset here for your information - hopefully it helps.
Here are some general notes for things you can try (if you haven't already): MySQL query_cache_type=1 - cache SQL queries is on. If set to 2, queries are only cached if the SQL_CACHE hint is passed to them. Similarly with type 1, you can disable cache for a particular query with the SQL_NO_CACHE hint key_buffer_size=128M (default: 8M) - memory buffer for MyISAM table indexes. On dedicated servers, aim to set the key_buffer_size to at least a quarter, but no more than half, of the total amount of memory on the server query_cache_size=64M (default: 0) - size of the query cache back_log=100 (default: 50, max: 65535) - The queue of outstanding connection requests. Only matters when there are lots of connections in short time join_buffer_size=1M (default: 131072) - a buffer that's used when having full table scans (no indexes) table_cache=2048 (default: 256) - should be max_user_connections multiplied by the maximum number of JOINs your heaviest SQL query contains. Use the "open_tables" variable at peak times as a guide. Also look at the "opened_tables" variable - it should be close to "open_tables" query_prealloc_size=32K (default: 8K) - persistant memory for statements parsing and execution. Increase if having complex queries sort_buffer_size=16M (default: 2M) - helps with sorting (ORDER BY and GROUP BY operations) read_buffer_size=2M (default: 128K) - Helps with sequential scans. Increase if there are many sequential scans. read_rnd_buffer_size=4M - helps MyISAM table speed up read after sort max_length_for_sort_data - row size to store instead of row pointer in sort file. Can avoid random table reads key_cache_age_threshold=3000 (default: 300) - time to keep key cache in the hot-zone (before it's demoted to warm) key_cache_division_limit=50 (default: 100) - enables a more sophisticated cache eviction mechanism (two levels). Denotes the percentage to keep for the bottom level.
delay_key_write=ALL - the key buffer is not flushed for the table on every index update, but only when the table is closed. This speeds up writes on keys a lot, but if you use this feature, you should add automatic checking of all MyISAM tables by starting the server with the --myisam-recover=BACKUP,FORCE option memlock=1 - lock process in memory (to reduce swapping in/out) Apache change the spawning method (to mpm for example) disable logs if possible AllowOverride None - whenever possible disable .htaccess. It stops apache for looking for .htaccess files if they are not used so it saves a file lookup request SendBufferSize - Set to OS default. On congested networks, you should set this parameter close to the size of the largest file normally downloaded KeepAlive Off (default On) - and install lingerd to properly close network connections and is faster DirectoryIndex index.php - Keep file list as short and absolute as possible. Options FollowSymLinks - to simplify file access process in Apache Avoid using mod_rewrite or at least complex regexs ServerToken=prod PHP variables_order="GPCS" (If you don't need environment variables) register_globals=Off - apart from being a security risk, it also has a performance impact Keep include_path as minimal as possible (avoids extra filesystem lookups) display_errors=Off - Disable showing errors. Strongly recommended for all production servers (doesn't display ugly error messages in case of a problem). magic_quotes_gpc=Off magic_quotes_*=Off output_buffering=On Disable logging if possible expose_php=Off register_argc_argv=Off always_populate_raw_post_data=Off place php.ini file where php would look for it first. session.gc_divisor=1000 or 10000 session.save_path = "N;/path" - For large sites consider using it. Splits session files into subdirectories OS Tweaks Mount used hard disks with the -o noatime option (no access time). Also add this option to /etc/fstab file. Tweak the /proc/sys/vm/swappiness (from 0 to 100) to see what has best results Use RAM Disks - mount --bind -ttmpfs /tmp /tmp | {
"source": [
"https://serverfault.com/questions/113188",
"https://serverfault.com",
"https://serverfault.com/users/34947/"
]
} |
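(To act on the first suggestion — finding queries that don't use indexes — a sketch for enabling the slow query log at runtime. The variable names assume MySQL 5.1 or later; on 5.0 use log_slow_queries and log-queries-not-using-indexes in my.cnf instead.)

```bash
# Turn on slow-query logging without restarting the server
mysql -u root -p -e "SET GLOBAL slow_query_log = 1;
                     SET GLOBAL long_query_time = 1;
                     SET GLOBAL log_queries_not_using_indexes = 1;"
```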
113,200 | Based on my understanding, there are 3 ways I could do it. Filestream File-system Store images in the database itself I am having a hard time deciding which one would be more appropriate for me. I will have a lot of more reads than writes, but writes wouldnt be completely infrequent either. All images would be less than 1 MB in size for sure, but they would usually be around 50KB. | Obviously, there is a lot you can try. Your best bet is chasing your logs for queries that don't use indexes (enable logs for those) and other non-optimized queries. I have compiled a huge list of performance related options over the years, so I've included a small subset here for your information - hopefully it helps.
Here are some general notes for things you can try (if you haven't already): MySQL query_cache_type=1 - cache SQL queries is on. If set to 2, queries are only cached if the SQL_CACHE hint is passed to them. Similarly with type 1, you can disable cache for a particular query with the SQL_NO_CACHE hint key_buffer_size=128M (default: 8M) - memory buffer for MyISAM table indexes. On dedicated servers, aim to set the key_buffer_size to at least a quarter, but no more than half, of the total amount of memory on the server query_cache_size=64M (default: 0) - size of the query cache back_log=100 (default: 50, max: 65535) - The queue of outstanding connection requests. Only matters when there are lots of connections in short time join_buffer_size=1M (default: 131072) - a buffer that's used when having full table scans (no indexes) table_cache=2048 (default: 256) - should be max_user_connections multiplied by the maximum number of JOINs your heaviest SQL query contains. Use the "open_tables" variable at peak times as a guide. Also look at the "opened_tables" variable - it should be close to "open_tables" query_prealloc_size=32K (default: 8K) - persistant memory for statements parsing and execution. Increase if having complex queries sort_buffer_size=16M (default: 2M) - helps with sorting (ORDER BY and GROUP BY operations) read_buffer_size=2M (default: 128K) - Helps with sequential scans. Increase if there are many sequential scans. read_rnd_buffer_size=4M - helps MyISAM table speed up read after sort max_length_for_sort_data - row size to store instead of row pointer in sort file. Can avoid random table reads key_cache_age_threshold=3000 (default: 300) - time to keep key cache in the hot-zone (before it's demoted to warm) key_cache_division_limit=50 (default: 100) - enables a more sophisticated cache eviction mechanism (two levels). Denotes the percentage to keep for the bottom level.
delay_key_write=ALL - the key buffer is not flushed for the table on every index update, but only when the table is closed. This speeds up writes on keys a lot, but if you use this feature, you should add automatic checking of all MyISAM tables by starting the server with the --myisam-recover=BACKUP,FORCE option memlock=1 - lock process in memory (to reduce swapping in/out) Apache change the spawning method (to mpm for example) disable logs if possible AllowOverride None - whenever possible disable .htaccess. It stops apache for looking for .htaccess files if they are not used so it saves a file lookup request SendBufferSize - Set to OS default. On congested networks, you should set this parameter close to the size of the largest file normally downloaded KeepAlive Off (default On) - and install lingerd to properly close network connections and is faster DirectoryIndex index.php - Keep file list as short and absolute as possible. Options FollowSymLinks - to simplify file access process in Apache Avoid using mod_rewrite or at least complex regexs ServerToken=prod PHP variables_order="GPCS" (If you don't need environment variables) register_globals=Off - apart from being a security risk, it also has a performance impact Keep include_path as minimal as possible (avoids extra filesystem lookups) display_errors=Off - Disable showing errors. Strongly recommended for all production servers (doesn't display ugly error messages in case of a problem). magic_quotes_gpc=Off magic_quotes_*=Off output_buffering=On Disable logging if possible expose_php=Off register_argc_argv=Off always_populate_raw_post_data=Off place php.ini file where php would look for it first. session.gc_divisor=1000 or 10000 session.save_path = "N;/path" - For large sites consider using it. Splits session files into subdirectories OS Tweaks Mount used hard disks with the -o noatime option (no access time). Also add this option to /etc/fstab file. Tweak the /proc/sys/vm/swappiness (from 0 to 100) to see what has best results Use RAM Disks - mount --bind -ttmpfs /tmp /tmp | {
"source": [
"https://serverfault.com/questions/113200",
"https://serverfault.com",
"https://serverfault.com/users/34956/"
]
} |
114,388 | Let's say that the user name on my Mac machine is John. I have a fully configured Slicehost account. Note that on this slice there is no ssh key for John. Now I configure this box for ssh access for user deploy. On my Mac machine I have the private key for user deploy. Slicehost has the public key for user deploy. Again, Slicehost has nothing for user John. If I want to ssh into the Slicehost box as user deploy, do I need to put the public key for John there too? | Sort of. You need to put your public key for the account you're coming from on the remote server. If that is your John key, then put that key on the server that you are connecting to using the appropriate account for that server. In your case, you're connecting as deploy. So, when you connect, you'll type: ssh deploy@slicehost If your key for John is in the .ssh/authorized_keys file of the account deploy, then you'll get direct access. | {
"source": [
"https://serverfault.com/questions/114388",
"https://serverfault.com",
"https://serverfault.com/users/35340/"
]
} |
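(If the deploy account's authorized_keys does not contain your key yet, ssh-copy-id is the usual shortcut — a sketch; it ships with OpenSSH on most Linux distributions and can be installed on a Mac via Homebrew or MacPorts.)

```bash
# Append your local public key (e.g. ~/.ssh/id_rsa.pub) to deploy's authorized_keys on the slice
ssh-copy-id deploy@slicehost
# Afterwards a plain "ssh deploy@slicehost" should log in without a password prompt
```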
114,477 | I have supervisor set up to manage a few processes. It works perfectly fine when I boot my server; however, when I stop it and try to start it again it fails and gives me this error msg: * Starting Supervisor daemon manager...
Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
For help, use /usr/bin/supervisord -h
...fail! I'm running nginx on port 80 and 4 web servers on ports 8000, 8001, 8002, 8003 Does anyone have any idea of what is going on? When I reboot everything works fine. | Just ran into this as well. I fixed it by doing either of these: sudo unlink /tmp/supervisor.sock
sudo unlink /var/run/supervisor.sock This .sock file is defined in /etc/supervisord.conf [unix_http_server] file config value (default is /tmp/supervisor.sock or /var/run/supervisor.sock ). Hope this helps someone in the future. | {
"source": [
"https://serverfault.com/questions/114477",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
114,897 | I have this shell command: kill `cat -- $PIDFILE` What does the double -- do here? Why not just use kill `cat $PIDFILE` | The -- tells cat not to try to parse what comes after it as command-line options. As an example, think of what would happen in the two cases if the variable $PIDFILE was defined as PIDFILE="--version" . On my machine, they give the following results: $ cat $PIDFILE
cat (GNU coreutils) 6.10
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Torbjorn Granlund and Richard M. Stallman.
$ cat -- $PIDFILE
cat: --version: No such file or directory | {
"source": [
"https://serverfault.com/questions/114897",
"https://serverfault.com",
"https://serverfault.com/users/7587/"
]
} |
114,908 | I run an LVM setup on a raid1 created by mdadm. md2 is based on sda6 (major:minor 8:6) and sdb6 (8:22). md2 is partition 9:2. The VG on top of md2 has 4 LVs: var, home, usr, tmp. First, the problem: while booting, it seems as if the device mapper takes the wrong partition for the mapping! Immediately after boot the information looks like this: ~# dmsetup table
systemlvm-home: 0 4194304 linear 8:22 384
systemlvm-home: 4194304 16777216 linear 8:22 69206400
systemlvm-home: 20971520 8388608 linear 8:22 119538048
systemlvm-home: 29360128 6291456 linear 8:22 243270016
systemlvm-tmp: 0 2097152 linear 8:22 41943424
systemlvm-usr: 0 10485760 linear 8:22 20971904
systemlvm-var: 0 10485760 linear 8:22 10486144
systemlvm-var: 10485760 6291456 linear 8:22 4194688
systemlvm-var: 16777216 4194304 linear 8:22 44040576
systemlvm-var: 20971520 10485760 linear 8:22 31457664
systemlvm-var: 31457280 20971520 linear 8:22 48234880
systemlvm-var: 52428800 33554432 linear 8:22 85983616
systemlvm-var: 85983232 115343360 linear 8:22 127926656
~# cat /proc/mdstat
Personalities : [raid1]
md2 : active (auto-read-only) raid1 sda6[0]
151798080 blocks [2/1] [U_]
md0 : active raid1 sda1[0] sdb1[1]
96256 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
2931776 blocks [2/2] [UU] I have to manually "lvchange -an" all LVs, add /dev/sdb6 back to the raid and reactivate the LVs, then all is fine. But it prevents me from automounting the partitions and obviously leads to a bunch of other problems. If everything works fine, the information is like ~$ cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb6[1] sda6[0]
151798080 blocks [2/2] [UU]
...
~# dmsetup table
systemlvm-home: 0 4194304 linear 9:2 384
systemlvm-home: 4194304 16777216 linear 9:2 69206400
systemlvm-home: 20971520 8388608 linear 9:2 119538048
systemlvm-home: 29360128 6291456 linear 9:2 243270016
systemlvm-tmp: 0 2097152 linear 9:2 41943424
systemlvm-usr: 0 10485760 linear 9:2 20971904
systemlvm-var: 0 10485760 linear 9:2 10486144
systemlvm-var: 10485760 6291456 linear 9:2 4194688
systemlvm-var: 16777216 4194304 linear 9:2 44040576
systemlvm-var: 20971520 10485760 linear 9:2 31457664
systemlvm-var: 31457280 20971520 linear 9:2 48234880
systemlvm-var: 52428800 33554432 linear 9:2 85983616
systemlvm-var: 85983232 115343360 linear 9:2 127926656 I think that LVM for some reason just "takes" /dev/sdb6 which is then missing in the raid. I tried almost all options in the lvm.conf but none seems to work. Below is some more information, like config files. Does anyone have any idea about what is going on here and how to prevent that? If you need any additional information, please let me know Thanks in advance!
Dominik The information (off a "repaired" system): ~# cat /etc/debian_version
5.0.4
~# uname -a
Linux kermit 2.6.26-2-686 #1 SMP Wed Feb 10 08:59:21 UTC 2010 i686 GNU/Linux
~# lvm version
LVM version: 2.02.39 (2008-06-27)
Library version: 1.02.27 (2008-06-25)
Driver version: 4.13.0
~# cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=11e9dc6c:1da99f3f:b3088ca6:c6fe60e9
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=92ed1e4b:897361d3:070682b3:3baa4fa1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=00.90 UUID=601d4642:39dc80d7:96e8bbac:649924ba
~# mount
/dev/md1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
/dev/mapper/systemlvm-usr on /usr type reiserfs (rw)
/dev/mapper/systemlvm-tmp on /tmp type reiserfs (rw)
/dev/mapper/systemlvm-home on /home type reiserfs (rw)
/dev/mapper/systemlvm-var on /var type reiserfs (rw)
~# grep -v ^$ /etc/lvm/lvm.conf | grep -v "#"
devices {
dir = "/dev"
scan = [ "/dev" ]
preferred_names = [ ]
filter = [ "a|/dev/md.*|", "r/.*/" ]
cache_dir = "/etc/lvm/cache"
cache_file_prefix = ""
write_cache_state = 1
sysfs_scan = 1
md_component_detection = 1
ignore_suspended_devices = 0
}
log {
verbose = 0
syslog = 1
overwrite = 0
level = 0
indent = 1
command_names = 0
prefix = " "
}
backup {
backup = 1
backup_dir = "/etc/lvm/backup"
archive = 1
archive_dir = "/etc/lvm/archive"
retain_min = 10
retain_days = 30
}
shell {
history_size = 100
}
global {
umask = 077
test = 0
units = "h"
activation = 1
proc = "/proc"
locking_type = 1
fallback_to_clustered_locking = 1
fallback_to_local_locking = 1
locking_dir = "/lib/init/rw"
}
activation {
missing_stripe_filler = "/dev/ioerror"
reserved_stack = 256
reserved_memory = 8192
process_priority = -18
mirror_region_size = 512
readahead = "auto"
mirror_log_fault_policy = "allocate"
mirror_device_fault_policy = "remove"
}
:~# vgscan -vvv
Processing: vgscan -vvv
O_DIRECT will be used
Setting global/locking_type to 1
File-based locking selected.
Setting global/locking_dir to /lib/init/rw
Locking /lib/init/rw/P_global WB
Wiping cache of LVM-capable devices
/dev/block/1:0: Added to device cache
/dev/block/1:1: Added to device cache
/dev/block/1:10: Added to device cache
/dev/block/1:11: Added to device cache
/dev/block/1:12: Added to device cache
/dev/block/1:13: Added to device cache
/dev/block/1:14: Added to device cache
/dev/block/1:15: Added to device cache
/dev/block/1:2: Added to device cache
/dev/block/1:3: Added to device cache
/dev/block/1:4: Added to device cache
/dev/block/1:5: Added to device cache
/dev/block/1:6: Added to device cache
/dev/block/1:7: Added to device cache
/dev/block/1:8: Added to device cache
/dev/block/1:9: Added to device cache
/dev/block/253:0: Added to device cache
/dev/block/253:1: Added to device cache
/dev/block/253:2: Added to device cache
/dev/block/253:3: Added to device cache
/dev/block/8:0: Added to device cache
/dev/block/8:1: Added to device cache
/dev/block/8:16: Added to device cache
/dev/block/8:17: Added to device cache
/dev/block/8:18: Added to device cache
/dev/block/8:19: Added to device cache
/dev/block/8:2: Added to device cache
/dev/block/8:21: Added to device cache
/dev/block/8:22: Added to device cache
/dev/block/8:3: Added to device cache
/dev/block/8:5: Added to device cache
/dev/block/8:6: Added to device cache
/dev/block/9:0: Already in device cache
/dev/block/9:1: Already in device cache
/dev/block/9:2: Already in device cache
/dev/bsg/0:0:0:0: Not a block device
/dev/bsg/1:0:0:0: Not a block device
/dev/bus/usb/001/001: Not a block device
[... many more "not a block device"]
/dev/core: Not a block device
/dev/cpu_dma_latency: Not a block device
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895: Aliased to /dev/block/8:16 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800: Aliased to /dev/block/8:0 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache
/dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache
/dev/disk/by-id/dm-name-systemlvm-home: Aliased to /dev/block/253:2 in device cache
/dev/disk/by-id/dm-name-systemlvm-tmp: Aliased to /dev/block/253:3 in device cache
/dev/disk/by-id/dm-name-systemlvm-usr: Aliased to /dev/block/253:1 in device cache
/dev/disk/by-id/dm-name-systemlvm-var: Aliased to /dev/block/253:0 in device cache
/dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr25N7CRZpUMzR18NfS6zeSeAVnVT98LuU: Aliased to /dev/block/253:0 in device cache
/dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr3TpFXtLjYGEwn79IdXsSCZPl8AxmqbmQ: Aliased to /dev/block/253:1 in device cache
/dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrc5MJ4KolevMjt85PPBrQuRTkXbx6NvTi: Aliased to /dev/block/253:3 in device cache
/dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrYXrfdg5OSYDVkNeiQeQksgCI849Z2hx8: Aliased to /dev/block/253:2 in device cache
/dev/disk/by-id/md-uuid-11e9dc6c:1da99f3f:b3088ca6:c6fe60e9: Already in device cache
/dev/disk/by-id/md-uuid-601d4642:39dc80d7:96e8bbac:649924ba: Already in device cache
/dev/disk/by-id/md-uuid-92ed1e4b:897361d3:070682b3:3baa4fa1: Already in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895: Aliased to /dev/block/8:16 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800: Aliased to /dev/block/8:0 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache
/dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0: Aliased to /dev/block/8:0 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part1: Aliased to /dev/block/8:1 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part2: Aliased to /dev/block/8:2 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part3: Aliased to /dev/block/8:3 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part5: Aliased to /dev/block/8:5 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part6: Aliased to /dev/block/8:6 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0: Aliased to /dev/block/8:16 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part1: Aliased to /dev/block/8:17 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part2: Aliased to /dev/block/8:18 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part3: Aliased to /dev/block/8:19 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part5: Aliased to /dev/block/8:21 in device cache
/dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part6: Aliased to /dev/block/8:22 in device cache
/dev/disk/by-uuid/13c1262b-e06f-40ce-b088-ce410640a6dc: Aliased to /dev/block/253:3 in device cache
/dev/disk/by-uuid/379f57b0-2e03-414c-808a-f76160617336: Aliased to /dev/block/253:2 in device cache
/dev/disk/by-uuid/4fb2d6d3-bd51-48d3-95ee-8e404faf243d: Already in device cache
/dev/disk/by-uuid/5c6728ec-82c1-49c0-93c5-f6dbd5c0d659: Aliased to /dev/block/8:5 in device cache
/dev/disk/by-uuid/a13cdfcd-2191-4185-a727-ffefaf7a382e: Aliased to /dev/block/253:1 in device cache
/dev/disk/by-uuid/e0d5893d-ff88-412f-b753-9e3e9af3242d: Aliased to /dev/block/8:21 in device cache
/dev/disk/by-uuid/e79c9da6-8533-4e55-93ec-208876671edc: Aliased to /dev/block/253:0 in device cache
/dev/disk/by-uuid/f3f176f5-12f7-4af8-952a-c6ac43a6e332: Already in device cache
/dev/dm-0: Aliased to /dev/block/253:0 in device cache (preferred name)
/dev/dm-1: Aliased to /dev/block/253:1 in device cache (preferred name)
/dev/dm-2: Aliased to /dev/block/253:2 in device cache (preferred name)
/dev/dm-3: Aliased to /dev/block/253:3 in device cache (preferred name)
/dev/fd: Symbolic link to directory
/dev/full: Not a block device
/dev/hpet: Not a block device
/dev/initctl: Not a block device
/dev/input/by-path/platform-i8042-serio-0-event-kbd: Not a block device
/dev/input/event0: Not a block device
/dev/input/mice: Not a block device
/dev/kmem: Not a block device
/dev/kmsg: Not a block device
/dev/log: Not a block device
/dev/loop/0: Added to device cache
/dev/MAKEDEV: Not a block device
/dev/mapper/control: Not a block device
/dev/mapper/systemlvm-home: Aliased to /dev/dm-2 in device cache
/dev/mapper/systemlvm-tmp: Aliased to /dev/dm-3 in device cache
/dev/mapper/systemlvm-usr: Aliased to /dev/dm-1 in device cache
/dev/mapper/systemlvm-var: Aliased to /dev/dm-0 in device cache
/dev/md0: Already in device cache
/dev/md1: Already in device cache
/dev/md2: Already in device cache
/dev/mem: Not a block device
/dev/net/tun: Not a block device
/dev/network_latency: Not a block device
/dev/network_throughput: Not a block device
/dev/null: Not a block device
/dev/port: Not a block device
/dev/ppp: Not a block device
/dev/psaux: Not a block device
/dev/ptmx: Not a block device
/dev/pts/0: Not a block device
/dev/ram0: Aliased to /dev/block/1:0 in device cache (preferred name)
/dev/ram1: Aliased to /dev/block/1:1 in device cache (preferred name)
/dev/ram10: Aliased to /dev/block/1:10 in device cache (preferred name)
/dev/ram11: Aliased to /dev/block/1:11 in device cache (preferred name)
/dev/ram12: Aliased to /dev/block/1:12 in device cache (preferred name)
/dev/ram13: Aliased to /dev/block/1:13 in device cache (preferred name)
/dev/ram14: Aliased to /dev/block/1:14 in device cache (preferred name)
/dev/ram15: Aliased to /dev/block/1:15 in device cache (preferred name)
/dev/ram2: Aliased to /dev/block/1:2 in device cache (preferred name)
/dev/ram3: Aliased to /dev/block/1:3 in device cache (preferred name)
/dev/ram4: Aliased to /dev/block/1:4 in device cache (preferred name)
/dev/ram5: Aliased to /dev/block/1:5 in device cache (preferred name)
/dev/ram6: Aliased to /dev/block/1:6 in device cache (preferred name)
/dev/ram7: Aliased to /dev/block/1:7 in device cache (preferred name)
/dev/ram8: Aliased to /dev/block/1:8 in device cache (preferred name)
/dev/ram9: Aliased to /dev/block/1:9 in device cache (preferred name)
/dev/random: Not a block device
/dev/root: Already in device cache
/dev/rtc: Not a block device
/dev/rtc0: Not a block device
/dev/sda: Aliased to /dev/block/8:0 in device cache (preferred name)
/dev/sda1: Aliased to /dev/block/8:1 in device cache (preferred name)
/dev/sda2: Aliased to /dev/block/8:2 in device cache (preferred name)
/dev/sda3: Aliased to /dev/block/8:3 in device cache (preferred name)
/dev/sda5: Aliased to /dev/block/8:5 in device cache (preferred name)
/dev/sda6: Aliased to /dev/block/8:6 in device cache (preferred name)
/dev/sdb: Aliased to /dev/block/8:16 in device cache (preferred name)
/dev/sdb1: Aliased to /dev/block/8:17 in device cache (preferred name)
/dev/sdb2: Aliased to /dev/block/8:18 in device cache (preferred name)
/dev/sdb3: Aliased to /dev/block/8:19 in device cache (preferred name)
/dev/sdb5: Aliased to /dev/block/8:21 in device cache (preferred name)
/dev/sdb6: Aliased to /dev/block/8:22 in device cache (preferred name)
/dev/shm/network/ifstate: Not a block device
/dev/snapshot: Not a block device
/dev/sndstat: stat failed: No such file or directory (original German locale message: "Datei oder Verzeichnis nicht gefunden")
/dev/stderr: Not a block device
/dev/stdin: Not a block device
/dev/stdout: Not a block device
/dev/systemlvm/home: Aliased to /dev/dm-2 in device cache
/dev/systemlvm/tmp: Aliased to /dev/dm-3 in device cache
/dev/systemlvm/usr: Aliased to /dev/dm-1 in device cache
/dev/systemlvm/var: Aliased to /dev/dm-0 in device cache
/dev/tty: Not a block device
/dev/tty0: Not a block device
[... many more "not a block device"]
/dev/vcsa6: Not a block device
/dev/xconsole: Not a block device
/dev/zero: Not a block device
Wiping internal VG cache
lvmcache: initialised VG #orphans_lvm1
lvmcache: initialised VG #orphans_pool
lvmcache: initialised VG #orphans_lvm2
Reading all physical volumes. This may take a while...
Finding all volume groups
/dev/ram0: Skipping (regex)
/dev/loop/0: Skipping (sysfs)
/dev/sda: Skipping (regex)
Opened /dev/md0 RO
/dev/md0: size is 192512 sectors
Closed /dev/md0
/dev/md0: size is 192512 sectors
Opened /dev/md0 RW O_DIRECT
/dev/md0: block size is 1024 bytes
Closed /dev/md0
Using /dev/md0
Opened /dev/md0 RW O_DIRECT
/dev/md0: block size is 1024 bytes
/dev/md0: No label detected
Closed /dev/md0
/dev/dm-0: Skipping (regex)
/dev/ram1: Skipping (regex)
/dev/sda1: Skipping (regex)
Opened /dev/md1 RO
/dev/md1: size is 5863552 sectors
Closed /dev/md1
/dev/md1: size is 5863552 sectors
Opened /dev/md1 RW O_DIRECT
/dev/md1: block size is 4096 bytes
Closed /dev/md1
Using /dev/md1
Opened /dev/md1 RW O_DIRECT
/dev/md1: block size is 4096 bytes
/dev/md1: No label detected
Closed /dev/md1
/dev/dm-1: Skipping (regex)
/dev/ram2: Skipping (regex)
/dev/sda2: Skipping (regex)
Opened /dev/md2 RO
/dev/md2: size is 303596160 sectors
Closed /dev/md2
/dev/md2: size is 303596160 sectors
Opened /dev/md2 RW O_DIRECT
/dev/md2: block size is 4096 bytes
Closed /dev/md2
Using /dev/md2
Opened /dev/md2 RW O_DIRECT
/dev/md2: block size is 4096 bytes
/dev/md2: lvm2 label detected
lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2)
/dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr)
lvmcache: /dev/md2: now in VG systemlvm with 1 mdas
lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr
lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue.
Closed /dev/md2
/dev/dm-2: Skipping (regex)
/dev/ram3: Skipping (regex)
/dev/sda3: Skipping (regex)
/dev/dm-3: Skipping (regex)
/dev/ram4: Skipping (regex)
/dev/ram5: Skipping (regex)
/dev/sda5: Skipping (regex)
/dev/ram6: Skipping (regex)
/dev/sda6: Skipping (regex)
/dev/ram7: Skipping (regex)
/dev/ram8: Skipping (regex)
/dev/ram9: Skipping (regex)
/dev/ram10: Skipping (regex)
/dev/ram11: Skipping (regex)
/dev/ram12: Skipping (regex)
/dev/ram13: Skipping (regex)
/dev/ram14: Skipping (regex)
/dev/ram15: Skipping (regex)
/dev/sdb: Skipping (regex)
/dev/sdb1: Skipping (regex)
/dev/sdb2: Skipping (regex)
/dev/sdb3: Skipping (regex)
/dev/sdb5: Skipping (regex)
/dev/sdb6: Skipping (regex)
Locking /lib/init/rw/V_systemlvm RB
Finding volume group "systemlvm"
Opened /dev/md2 RW O_DIRECT
/dev/md2: block size is 4096 bytes
/dev/md2: lvm2 label detected
lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mdas
/dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr)
lvmcache: /dev/md2: now in VG systemlvm with 1 mdas
lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr
lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue.
Using cached label for /dev/md2
Read systemlvm metadata (19) from /dev/md2 at 39936 size 2632
/dev/md2 0: 0 16: home(0:0)
/dev/md2 1: 16 24: var(40:0)
/dev/md2 2: 40 40: var(0:0)
/dev/md2 3: 80 40: usr(0:0)
/dev/md2 4: 120 40: var(80:0)
/dev/md2 5: 160 8: tmp(0:0)
/dev/md2 6: 168 16: var(64:0)
/dev/md2 7: 184 80: var(120:0)
/dev/md2 8: 264 64: home(16:0)
/dev/md2 9: 328 128: var(200:0)
/dev/md2 10: 456 32: home(80:0)
/dev/md2 11: 488 440: var(328:0)
/dev/md2 12: 928 24: home(112:0)
/dev/md2 13: 952 206: NULL(0:0)
Found volume group "systemlvm" using metadata type lvm2
Read volume group systemlvm from /etc/lvm/backup/systemlvm
Unlocking /lib/init/rw/V_systemlvm
Closed /dev/md2
Unlocking /lib/init/rw/P_global
~# vgdisplay
--- Volume group ---
VG Name systemlvm
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 19
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 144,75 GB
PE Size 128,00 MB
Total PE 1158
Alloc PE / Size 952 / 119,00 GB
Free PE / Size 206 / 25,75 GB
VG UUID rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr
~# pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name systemlvm
PV Size 144,77 GB / not usable 16,31 MB
Allocatable yes
PE Size (KByte) 131072
Total PE 1158
Free PE 206
Allocated PE 952
PV UUID ZSAzP5-iBvr-L7jy-wB8T-AiWz-0g3m-HLK66Y
:~# lvdisplay
--- Logical volume ---
LV Name /dev/systemlvm/home
VG Name systemlvm
LV UUID YXrfdg-5OSY-DVkN-eiQe-Qksg-CI84-9Z2hx8
LV Write Access read/write
LV Status available
# open 2
LV Size 17,00 GB
Current LE 136
Segments 4
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Name /dev/systemlvm/var
VG Name systemlvm
LV UUID 25N7CR-ZpUM-zR18-NfS6-zeSe-AVnV-T98LuU
LV Write Access read/write
LV Status available
# open 2
LV Size 96,00 GB
Current LE 768
Segments 7
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/systemlvm/usr
VG Name systemlvm
LV UUID 3TpFXt-LjYG-Ewn7-9IdX-sSCZ-Pl8A-xmqbmQ
LV Write Access read/write
LV Status available
# open 2
LV Size 5,00 GB
Current LE 40
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Name /dev/systemlvm/tmp
VG Name systemlvm
LV UUID c5MJ4K-olev-Mjt8-5PPB-rQuR-TkXb-x6NvTi
LV Write Access read/write
LV Status available
# open 2
LV Size 1,00 GB
Current LE 8
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3 | The -- tells cat not to try to parse what comes after it as command line options. As an example, think of what would happen in the two cases if the variable $PIDFILE was defined as PIDFILE="--version" . On my machine, they give the following results: $ cat $PIDFILE
cat (GNU coreutils) 6.10
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Torbjorn Granlund and Richard M. Stallman.
$ cat -- $PIDFILE
cat: --version: No such file or directory | {
"source": [
"https://serverfault.com/questions/114908",
"https://serverfault.com",
"https://serverfault.com/users/4959/"
]
} |
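A small follow-up sketch of the end-of-options behaviour discussed in the answer above; the PIDFILE value is the contrived example from that answer, and the rm line only illustrates the general pattern for any file name that begins with a dash.

$ PIDFILE="--version"
$ cat -- "$PIDFILE"
cat: --version: No such file or directory
$ rm -- "$PIDFILE"      # "--" stops option parsing, so a name starting with "-" is treated as an operand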
115,161 | I have a web site that needs to send email to customers to deliver files that they have purchased. Reliable email delivery is vital to this business. Unfortunately, most of the emails sent by my server are not delivered due to my MTA's "poor reputation." Here are some sample lines from my mail.log : Feb 20 02:40:41 servername postfix/smtp[14580]: 4E30B1100C7: host aspmx.l.google.com[209.85.211.78] said: 421-4.7.0 [174.143.183.26] Our system has detected an unusual amount of 421-4.7.0 unsolicited mail originating from your IP address. To protect our 421-4.7.0 users from spam, mail sent from your IP address has been temporarily 421-4.7.0 blocked. Please visit http://www.google.com/mail/help/bulk_mail.html 421 4.7.0 to review our Bulk Email Senders Guidelines. 10si1216690ywh.92 (in reply to end of DATA command)
Feb 20 12:49:22 servername postfix/smtp[5651]: A86CB1CC0CF: to=<[email protected]>, relay=mx3.comcast.net[76.96.58.14]:25, delay=55186, delays=55185/0.01/0.93/0, dsn=4.0.0, status=deferred (host mx3.comcast.net[76.96.58.14] refused to talk to me: 554 imta36.westchester.pa.mail.comcast.net comcast 174.143.206.168 found on one or more DNSBLs, see http://help.comcast.net/content/faq/BL000001)
Feb 16 10:50:11 servername postfix/smtp[6931]: 98B94380A1: host mx-in-2.webreus.nl[212.61.252.240] refused to talk to me: 554-mx-in-2.webreus.nl 554-Your access to this mail system has been rejected due to the sending MTA's poor reputation. If you believe that this failure is in error, please contact the intended recipient via alternate means. 554 More information can be found on http://senderbase.org/senderbase_queries/detailhost?search_string=174.143.206.168
Feb 16 10:50:12 servername postfix/smtp[6931]: 98B94380A1: to=<[email protected]>, relay=mx-in-1.webreus.nl[212.61.10.240]:25, delay=173653, delays=173650/0.22/2.8/0, dsn=4.0.0, status=deferred (host mx-in-1.webreus.nl[212.61.10.240] refused to talk to me: 554-mx-in-1.webreus.nl 554-Your access to this mail system has been rejected due to the sending MTA's poor reputation. If you believe that this failure is in error, please contact the intended recipient via alternate means. 554 More information can be found on http://senderbase.org/senderbase_queries/detailhost?search_string=174.143.206.168) Steps I've taken to try to improve the situation: set up reverse DNS lookups to work correctly set up SPF records for my domain disallow incoming connections to my SMTP server format messages according to RFC 2822 never send unsolicited messages (I never have) My server is in Rackspace's cloud. Is it possible that the IP address's bad reputation was inherited from a previous customer? Some of the above steps have been taken in the past week--am I going to have to wait for the situation to improve? Are there other things I should be doing? Should I hire a third party to send emails for me? | Unfortunately the IP is blacklisted irrespective of the hardware that sits behind it, so there's not a lot you can do about your existing reputation except ensuring you are sending mail correctly and contacting the relevant spam lists. You may have to wait a few days or weeks for the situation to improve. I would recommend using Google Apps for SMTP to eradicate these problems :) Otherwise, if you keep going on your own, check: You're not on a blacklist Your server has MX and reverse DNS records You have SPF DNS records (many servers reject mail without a valid SPF, GMail for example, here's an explanation and a wizard ) Your mailserver's HELO response matches your hostname Your mailserver is not an open relay Your DNS records' TTL is not too low - 86400 (24 hours) is recommended (some spammers set their TTL very low to regularly update forged DNS records) You comply with AOL's Technical Standards | {
"source": [
"https://serverfault.com/questions/115161",
"https://serverfault.com",
"https://serverfault.com/users/29556/"
]
} |
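To make the checklist in the answer above concrete, here is a hedged sketch of checking the reverse DNS and SPF items from a shell; the IP is the one that appears in the question's logs, the domain is a placeholder, and the SPF record shown is only an illustration of the syntax, not a record anyone has actually published.

$ dig -x 174.143.206.168 +short                      # PTR record for the sending IP
$ dig +short A $(dig -x 174.143.206.168 +short)      # should resolve back to the same IP (forward-confirmed rDNS)
$ dig +short TXT yourdomain.example | grep spf1      # the SPF policy currently published for the sending domain
# A minimal SPF TXT record to publish in the zone (illustrative values only):
#   yourdomain.example.  IN TXT  "v=spf1 mx ip4:174.143.206.168 -all"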
115,503 | Why are Class C IP addresses preferred over A and B in private networks? My possible answer is "In class C, the number of host IP address available in the network is less than class A or B thus making it easier for DHCP to manage." But I'd like to double confirm. | I don't think they're preferred. I've seen plenty of networks using RFC1918 class A, B, and C addressing schemes. Use the class that suits your needs: How many subnets do you need? How many hosts per subnet do you need? What routing needs do you have to route traffic between subnets? Do you anticipate having a large number of hosts per subnet and want to reduce the size of your broadcast domains? | {
"source": [
"https://serverfault.com/questions/115503",
"https://serverfault.com",
"https://serverfault.com/users/34006/"
]
} |
115,856 | Given this example: mkdir a
ln -s a b
ln -s b c
ln -s c d If I execute: ls -l d It will show: d -> c Is there a way for ls or any other linux command to show d -> c -> b -> a instead? | Just use namei : $ namei d
f: d
l d -> c
l c -> b
l b -> a
d a | {
"source": [
"https://serverfault.com/questions/115856",
"https://serverfault.com",
"https://serverfault.com/users/35802/"
]
} |
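Two hedged alternatives to namei for walking the same chain, assuming GNU coreutils are installed; the resolved path printed by readlink -f depends on where the links were created, so the sample output below is only illustrative.

$ namei -l d                  # same chain, plus owner and permissions for each hop
$ readlink -f d               # prints only the final resolved target
$ p=d; while [ -L "$p" ]; do printf '%s -> ' "$p"; p=$(readlink "$p"); done; echo "$p"
d -> c -> b -> a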
115,950 | I understand how to create a new user with privileges, but what is the correct way to change privileges for users that are already created? We are running a DB audit and some of the users have way more access then is needed. Plus I don't know the passwords for most of these MySQL users, so I don't want to delete them and create new ones. | To list users: select user,host from mysql.user; To show privileges: show grants for 'user'@'host'; To change privileges, first revoke. Such as: revoke all privileges on *.* from 'user'@'host'; Then grant the appropriate privileges as desired: grant SELECT,INSERT,UPDATE,DELETE ON `db`.* TO 'user'@'host'; Finally, flush: flush privileges; The MySQL documentation is excellent: Access Control and Account Management | {
"source": [
"https://serverfault.com/questions/115950",
"https://serverfault.com",
"https://serverfault.com/users/32828/"
]
} |
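The same audit steps as the answer above, collected into one shell session. 'appuser'@'localhost' and appdb are placeholders, and the statements must be run by an account that has the GRANT OPTION privilege; revoking and re-granting does not touch the audited users' passwords, which matches the constraint in the question.

$ mysql -u root -p -e "SELECT user, host FROM mysql.user;"
$ mysql -u root -p -e "SHOW GRANTS FOR 'appuser'@'localhost';"
$ mysql -u root -p -e "REVOKE ALL PRIVILEGES ON *.* FROM 'appuser'@'localhost';"
$ mysql -u root -p -e "GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'localhost';"
$ mysql -u root -p -e "FLUSH PRIVILEGES;"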
115,968 | If I was running a command before the SSH connection was dropped, will the command continue executing? | In most cases, no. Processes will be sent a SIGHUP on loss of terminal. You can prefix a command with 'nohup' to ignore the signal. See: http://en.wikipedia.org/wiki/Nohup | {
"source": [
"https://serverfault.com/questions/115968",
"https://serverfault.com",
"https://serverfault.com/users/28207/"
]
} |
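A minimal usage sketch for the nohup suggestion above; long_job.sh is a placeholder for whatever command needs to outlive the SSH session.

$ nohup ./long_job.sh > ~/long_job.log 2>&1 &
# If the job is already running in the foreground: press Ctrl-Z, then
$ bg
$ disown -h %1      # bash builtin: keep the job but stop forwarding SIGHUP to it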
115,969 | EDIT Sorry forgot details, I'm running SQL Server 2008 R2 16GB of RAM My min/max memory allocation is set to 13 GB. Running perfmon - SQL Server:Memory Manager. Target Server Memory and Total Server Memory confirms this. The ram counter in Task Manager is at 15 GB (assuming 2GB is used for other applications) What I don't understand is why the process sqlservr.exe is only showing 82KB usage in task manager? Further to that, all of memory consumed by the processes running on my server from Task Manager doesn't add up to 15 GB either. So what gives | In most cases, no. Processes will be sent a SIGHUP on loss of terminal. You can prefix a command with 'nohup' to ignore the signal. See: http://en.wikipedia.org/wiki/Nohup | {
"source": [
"https://serverfault.com/questions/115969",
"https://serverfault.com",
"https://serverfault.com/users/23950/"
]
} |
115,999 | Asking this after a prolonged discussion with a coworker, I'd really like a clarification here. I launch a background process, either by appending " & " to the command line or by stopping it with CTRL-Z and resuming it in background with " bg ". Then I log out. What happens? We were quite sure it should have been killed by a SIGHUP, but this didn't happen; upon logging in again, the process was happily running and pstree showed it was "adopted" by init . Is this the expected behaviour? But then, if it is, what's the nohup command's purpose? It just looks like the process isn't going to be killed anyway, with or without it... Edit 1 Some more details: The command was launched from a SSH session, not from the physical console. The command was launched without nohup and/or & ; it was then suspended with CTRL-Z and resumed in background with bg . The ssh session did not drop. There was an actual logout (" exit " command). The process was a scp file copy operation. Upon logging in again, pstree showed the process running and being child of init . Edit 2 To state the question more clearly: will putting a process in background (using & or bg ) make it ignore SIGHUP , just like the nohup command does? Edit 3 I tried manually sending a SIGHUP to scp : it exited, so it definitely doesn't ignore the signal. Then I tried again launching it, putting it in the background and logging off: it got "adopted" by init and kept running, and I found it there when logging back on. I'm quite puzzled now. Looks like no SIGHUP was sent at all upong logging off. | Answer found. For BASH, this depends on the huponexit shell option, which can be viewed and/or set using the built-in shopt command. Looks like this options is off by default, at least on RedHat-based systems. More info on the BASH man page : The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h. If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits. | {
"source": [
"https://serverfault.com/questions/115999",
"https://serverfault.com",
"https://serverfault.com/users/6352/"
]
} |
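A quick way to reproduce the behaviour described in the answer above from a bash login shell; the "off" output is what the poster observed on RedHat-style systems and may differ elsewhere.

$ shopt huponexit
huponexit       off
$ shopt -s huponexit    # from now on, this login shell will SIGHUP its jobs when it exits
$ shopt -u huponexit    # revert to the default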
116,100 | On my Windows dev box MySQL is running on port 3306. How can I check what port it is running on on the Unix server that I have to upload the app to? | I did mysql> SHOW GLOBAL VARIABLES LIKE 'PORT'; and that indicated that I was using port 3306, and that my search for the error continues. | {
"source": [
"https://serverfault.com/questions/116100",
"https://serverfault.com",
"https://serverfault.com/users/3198/"
]
} |
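Two hedged ways to confirm the port on the Unix server itself, assuming shell access there; note the daemon may additionally (or only) listen on a local Unix socket, which netstat will not show as a TCP port, and the config path shown follows the Debian/Ubuntu layout.

$ mysql -u someuser -p -e "SHOW GLOBAL VARIABLES LIKE 'port';"
$ sudo netstat -plnt | grep mysqld
$ grep -i '^port' /etc/mysql/my.cnf      # adjust the path for other distributions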
116,177 | Just wanted a quick summary of the differences between ~/.ssh/authorized_keys and ~/.ssh/authorized_keys2, and why there are two? | In OpenSSH prior to version 3, the sshd man page used to say: The $HOME/.ssh/authorized_keys file
lists the RSA keys that are permitted
for RSA authentication in SSH
protocols 1.3 and 1.5 Similarly, the
$HOME/.ssh/authorized_keys2 file lists
the DSA and RSA keys that are permitted
for public key authentication
(PubkeyAuthentication) in SSH protocol 2.0. The release announcement for version 3 states that authorized_keys2 is deprecated and all keys should be put in the authorized_keys file. | {
"source": [
"https://serverfault.com/questions/116177",
"https://serverfault.com",
"https://serverfault.com/users/33575/"
]
} |
116,299 | When building a new kernel based on a previous config, is there a way to automate the make oldconfig process so that it sets new options to their default values? Edit: What I mean is that when using a .config (from /boot/config-* or /proc/config.gz ) on a newer kernel, the make oldconfig process will ask wether or not you want to enable options that were not available in your older kernel. You can answer Y/n/m or press enter to accept default. I would like to accept defaults automatically with no user interaction. | Use the command : yes "" | make oldconfig The 'yes' command repeatedly output a line with all specified string, or 'y' by default. So, you can use it to simply "press enter", which will result in using the defaults value for the 'make oldconfig' command. | {
"source": [
"https://serverfault.com/questions/116299",
"https://serverfault.com",
"https://serverfault.com/users/35913/"
]
} |
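A hedged end-to-end sketch of the non-interactive flow; the copy step assumes the running kernel's config is available in /boot, and make olddefconfig is only present in newer kernel source trees.

$ cp /boot/config-$(uname -r) .config
$ yes "" | make oldconfig        # answer every new option with its default
$ make olddefconfig              # newer trees: same effect, no pipe needed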
116,395 | I am running Windows 7 Ultimate. If I open up IIS Manager, I see a list of "connections" on the left hand side. In previous versions, I would be able to select an option to "connect to another server" or "connect to another machine", but there is no such option visible anywhere here. The only thing in the list is my local machine. Even in the address bar, if I manually type in the server location (\servername, even tried just servername), nothing happens (it just reverts back to my current local computer) The documentation at http://technet.microsoft.com/en-us/library/cc732466%28WS.10%29.aspx seems to imply the very same steps... but there is just no button or menu option anywhere to do this. Am I missing something? I'm not even seeing a grayed out menu option. EDIT: Under the "File" menu, I see 2 options: Save Connections (grayed out) Exit Under the "Connections" pane, I see 1 button, grayed out. When I hover the mouse, it simply says "Up", appears to be usable if I browse into an element in my local computers IIS settings If I right click inside the pane itself, I see Refresh Add website (to the current host) Start Stop Rename Switch to Content View UPDATE: I downloaded and installed the Remote Server Administration tools from http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en , and I enabled everything listed under "Remote Server Administration Tools" under "Turn Windows Features On or Off". Still nothing. | Turns out I needed to install "IIS Manager for Remote Administration", which is discussed (and available for download) at http://www.iis.net/expand/IISManager . Why this is not part of the Windows 7 "Remote Administration Tools" I'm not sure. But after installing this I get the additional "connect" options. | {
"source": [
"https://serverfault.com/questions/116395",
"https://serverfault.com",
"https://serverfault.com/users/18496/"
]
} |
116,396 | I need to set up a smallish Windows network at the office. I want the email profile in Outlook connected to the login account, like in a corporate network, and each user needs to be able to hot desk, i.e. log in to their account on any PC in the office. What is the minimum set of software required to do this in Windows? | Turns out I needed to install "IIS Manager for Remote Administration", which is discussed (and available for download) at http://www.iis.net/expand/IISManager . Why this is not part of the Windows 7 "Remote Administration Tools" I'm not sure. But after installing this I get the additional "connect" options. | {
"source": [
"https://serverfault.com/questions/116396",
"https://serverfault.com",
"https://serverfault.com/users/27345/"
]
} |
116,402 | we are running a Windows Server 2003 with IIS 6.0 installed and I need to block access to certain file being exposed to the internet. Im the virtual folder I have a index.html file with an embed index.swf file. This swf file needs a config.xml file to run (which is in the same folder) and other multiple swf file located in a com folder (also in the same folder). Here's a diagram : Virtual folder
|-> index.html
|-> index.swf
|-> config.xml
|-> com
|-> a lot of swf files How can I possibly restrict the access so the user can't look at the content of the config.xml file and the com folder but can still be able to "run" the index.html (therefore the index.swf and all the other swf files located in the com folder). Thank | Turns out I needed to install "IIS Manager for Remote Administration", which is discussed (and available for download) at http://www.iis.net/expand/IISManager . Why this is not part of the Windows 7 "Remote Administration Tools" I'm not sure. But after installing this I get the additional "connect" options. | {
"source": [
"https://serverfault.com/questions/116402",
"https://serverfault.com",
"https://serverfault.com/users/11598/"
]
} |
116,417 | I'm obviously using the wrong search terms, the answer must be somewhere out there, so please throw some URLs at me. I'm about to create a cluster with 2 virtual servers in the cloud, namely at Rackspace. One for the frontend (Apache+PHP), one for the backend (presumably PostgreSQL). Apart from pointing the database host to the another IP instead of localhost, and opening up the , is there anything else I'm supposed to learn or prepare to fully utilize this architecture? | Turns out I needed to install "IIS Manager for Remote Administration", which is discussed (and available for download) at http://www.iis.net/expand/IISManager . Why this is not part of the Windows 7 "Remote Administration Tools" I'm not sure. But after installing this I get the additional "connect" options. | {
"source": [
"https://serverfault.com/questions/116417",
"https://serverfault.com",
"https://serverfault.com/users/555/"
]
} |
116,728 | I have just installed Apache web server on my computer. I have managed to use it locally (I can open index.php from my computer using my web browser). But I would like to make my web site available publicly. I found out that for that I need to open port 80. I started to do it and now I have to specify to which protocol I need to apply these rules (TCP or UDP). Can anybody please help me? | Web servers work with the HTTP (and HTTPS) protocol which is TCP based. As a general rule, if people neglect to specify whether they mean TCP/UDP/SomethingElse then they probably mean TCP. | {
"source": [
"https://serverfault.com/questions/116728",
"https://serverfault.com",
"https://serverfault.com/users/35301/"
]
} |
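For the port-80 question above, a sketch of opening TCP/80 on the server's own firewall; the exact tool depends on the distribution, and any router/NAT device in front of the machine may also need a port-forwarding rule.

$ sudo ufw allow 80/tcp                                   # Ubuntu's ufw front end
$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT      # plain iptables equivalent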
116,775 | Found out today that running screen as a different user that I sudo into won't work! i.e. ssh bob@server # ssh into server as bob
sudo su "monitor" -
screen # fails: Cannot open your terminal '/dev/pts/0' I have a script that runs as the "monitor" user. We run it in a screen session in order to see output on the screen. The problem is, we have a number of user who logs in with their own account (i.e. bob, james, susie, etc...) and then they sudo into the "monitor" user. Giving them access to the "monitor" user is out of the question. | Try running script /dev/null as the user you su to before launching screen - its a ghetto little hack, but it should make screen happy. | {
"source": [
"https://serverfault.com/questions/116775",
"https://serverfault.com",
"https://serverfault.com/users/25170/"
]
} |
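The workaround from the answer above, written out as a session; "monitor" is the shared account from the question and the screen session name is a placeholder.

bob@server:~$ sudo su - monitor
monitor@server:~$ screen -S watcher
Cannot open your terminal '/dev/pts/0' - please check.
monitor@server:~$ script /dev/null        # allocates a new pty owned by "monitor", output discarded
monitor@server:~$ screen -S watcher       # now starts normally inside the shell that script spawned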
116,950 | On Ubuntu server load graphs I see 4 types of CPU consumption: User, System, Nice and Idle. What does Nice type mean? | On a CPU utilization graph or report, the "nice" CPU percentage is the % of CPU time occupied by user level processes with a positive nice value (lower scheduling priority -- see man nice for details). Basically it's CPU time that's currently "in use", but if a normal (nice value 0) or high-priority (negative nice value) process comes along those programs will be kicked off the CPU. | {
"source": [
"https://serverfault.com/questions/116950",
"https://serverfault.com",
"https://serverfault.com/users/12808/"
]
} |
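A small illustration of where "nice" CPU time comes from; batch_job.sh and the PID are placeholders.

$ nice -n 10 ./batch_job.sh &     # runs at lower priority; its CPU time is accounted as "nice"
$ renice -n 5 -p 12345            # push an already-running process into the "nice" bucket
$ top                             # the "ni" field in top's CPU summary line is the same counter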
117,104 | In SQL Server 2005/2008, how can I tell if Snapshot Isolation is turned on? I know how to turn it on, but I can't find the incantation to get google to tell me how to query the state of the Snapshot Isolation option. | Powershell, really? what's wrong with good ol' fashioned T-SQL? sys.databases is what you want. It has human readable description columns like snapshot_isolation_state_desc SELECT snapshot_isolation_state_desc from sys.databases
where name='adventureworks' | {
"source": [
"https://serverfault.com/questions/117104",
"https://serverfault.com",
"https://serverfault.com/users/177/"
]
} |
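A slightly fuller check than the answer above, run through sqlcmd from a command prompt; the server and database names are placeholders, and is_read_committed_snapshot_on reports the related READ_COMMITTED_SNAPSHOT setting rather than snapshot isolation itself.

$ sqlcmd -S myserver -E -Q "SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on FROM sys.databases WHERE name = 'AdventureWorks';"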
117,123 | How do I remove instances? I created some test instances and have now terminated them, and I want to remove them entirely, but I can only find a terminate action, not a delete or remove action. Also, how do I change the key pair? I don't have the key pair for the old instances and I want to be able to configure them. | If your instance uses the instance store as its root device (e.g. it's not an EBS backed instance), simply terminating it will destroy the instance and you won't have to do anything else. | {
"source": [
"https://serverfault.com/questions/117123",
"https://serverfault.com",
"https://serverfault.com/users/36134/"
]
} |
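With today's AWS CLI the same cleanup looks roughly like this; the instance ID and key names are placeholders. Note that a key pair cannot be changed on a running instance: launch a replacement with the new key pair, or edit ~/.ssh/authorized_keys on the instance if you can still log in.

$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
$ aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].State.Name"
$ aws ec2 delete-key-pair --key-name old-key
$ aws ec2 create-key-pair --key-name new-key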
117,152 | This is a followup to this question . I've run some more tests; looks like it really doesn't matter if this is done at the physical console or via SSH, neither does this happen only with SCP; I also tested it with cat /dev/zero > /dev/null . The behaviour is exactly the same: Start a process in the background using & (or put it in background after it's started using CTRL-Z and bg ); this is done without using nohup . Log off. Log on again. The process is still there, running happily, and is now a direct child of init . I can confirm both SCP and CAT quits immediately if sent a SIGHUP ; I tested this using kill -HUP . So, it really looks like SIGHUP is not sent upon logoff, at least to background processes (can't test with a foreground one for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on RedHat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in background, upon logging off? Why is this not happening? Edit I checked with strace , as per Kyle's answer. As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH. | Answer found. For BASH, this depends on the huponexit shell option, which can be viewed and/or set using the built-in shopt command. Looks like this options is off by default, at least on RedHat-based systems. More info on the BASH man page : The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h. If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits. | {
"source": [
"https://serverfault.com/questions/117152",
"https://serverfault.com",
"https://serverfault.com/users/6352/"
]
} |
117,255 | We have a remote git repo that we normally deploy from using git push on our dev server then git pull on on our live servers to get the latest pushed version of the repo. But if we have committed and pushed a few revisions (without a git pull on the live servers) how can we do a git pull that is referring to the older commit that we want? i.e. something like git pull -r 3ef0dedda699f56dc1062b5dcc2c59f7ad93ede4 | Once you've pulled the repository you should be able to go: git checkout 3ef0d... | {
"source": [
"https://serverfault.com/questions/117255",
"https://serverfault.com",
"https://serverfault.com/users/27683/"
]
} |
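A sketch of the deploy-an-older-revision flow on a live server, using the commit ID from the question; the branch name is a placeholder.

$ git fetch origin
$ git checkout 3ef0dedda699f56dc1062b5dcc2c59f7ad93ede4     # detached HEAD at that revision
$ git checkout -b hotfix-rollback 3ef0dedd                  # or park it on a named branch instead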
117,360 | I have a crontab like this on a LAMP setup: 0 0 * * * /some/path/to/a/file.php > $HOME/cron.log 2>&1 This writes the output of the file to cron.log . However, when it runs again, it overwrites whatever was previously in the file. How can I get cron to output to a file with a timestamp in its filename? An example filename would be something like this: 2010-02-26-000000-cron.log I don't really care about the format, as long as it has a timestamp of some kind. Thanks in advance. | Try: 0 0 * * * /some/path/to/a/file.php > $HOME/`date +\%Y\%m\%d\%H\%M\%S`-cron.log 2>&1 Play around with the date format, if you like; just be sure to escape any % like \% , as above. | {
"source": [
"https://serverfault.com/questions/117360",
"https://serverfault.com",
"https://serverfault.com/users/864/"
]
} |
117,400 | Is it common practice to use certain private IP address ranges for certain purposes? I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access. Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently? I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc. | Most systems I've seen attempt to map the IP ranges to a hierarchy of geography and/or system components. One employer tended to use: 10.building.floor.device (with non-user resource VLANs using 10.x.100.x to 10.x.120.x ) and 10.major_system.tier_or_subsystem.component | {
"source": [
"https://serverfault.com/questions/117400",
"https://serverfault.com",
"https://serverfault.com/users/27557/"
]
} |
117,525 | I know that I can set user's privileges in the following simple way: grant all on [database name].[table name] to [user name]@[host name]; But how can I see existing privileges? I need to see data similar to those which are used in grant. In other words I want to know that a given user has a given access to a given table of a given database from a given host. How can I get it? | The command SHOW GRANTS [FOR user] is what you're looking for. See the SHOW GRANTS Statement section for more detail. | {
"source": [
"https://serverfault.com/questions/117525",
"https://serverfault.com",
"https://serverfault.com/users/35301/"
]
} |
117,531 | I tried googling around and checked the man page but couldn't find what I was looking for. Basically need to extract a rar archive to a separate volume. I know: rar e archive.rar will extract it to the current folder but I want to extract it to somewhere else. Can this be achieved without having to first move the archive to that location? | rar x archive.rar path/to/extract/to Worked. | {
"source": [
"https://serverfault.com/questions/117531",
"https://serverfault.com",
"https://serverfault.com/users/31944/"
]
} |
117,584 | I'm working on moving my current server setup to newer hardware, and migrating from ubuntu karmic koala to lucid lynx. Currently i'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get ipv6 access for my systems. On the karmic koala system, i used simple init.d script to get the ipv6 client started #! /bin/sh
/usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf I figured since this runs at any runlevel, it should translate to respawn
console none
start on startup
stop on shutdown
script
exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf
emit free6_ipv6_started
end script this works fine started from initctrl, but it apparently fails to start when it boots. - its status being stop/waiting. It works fine (and respawns) when started otherwise.Any ideas on where i'm going wrong, and what would be the appropriate 'start on' arguement? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8' followed by the process respawning , with xxx being a PID i suspect. I'm also half suspecting this is cause gw6c starts before networking does, and i need my eth0 up before gw6c is | Apparently respawn
console none
start on (local-filesystems and net-device-up IFACE!=lo)
stop on [!12345]
script
chdir /usr/local/gw6c/bin/
exec /usr/local/gw6c/bin/gw6c
end script seems to work | {
"source": [
"https://serverfault.com/questions/117584",
"https://serverfault.com",
"https://serverfault.com/users/33193/"
]
} |
117,834 | I know that by default PostgreSQL listens on port 5432, but what is the command to actually determine PostgreSQL's port? Configuration: Ubuntu 9.10 with PostgreSQL 8.4 | lsof and nmap are solutions, but they're not installed by default. What you want is netstat(8). sudo netstat -plunt |grep postgres | {
"source": [
"https://serverfault.com/questions/117834",
"https://serverfault.com",
"https://serverfault.com/users/20648/"
]
} |
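Two more ways to confirm the port, assuming shell access on the Ubuntu box; the config path follows Debian/Ubuntu's layout for PostgreSQL 8.4.

$ sudo -u postgres psql -c "SHOW port;"
$ grep '^port' /etc/postgresql/8.4/main/postgresql.conf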
118,181 | Just curious, I have 6 x 1TB 7200RPM Near Line SAS for my new server. I can either configure it as RAID5+1 Hot Spare or RAID6. What should I choose? | You have disadvantages and advantages with each approach; it depends on why you're using RAID. Most people use it for availability. They don't want a drive to die and end up having to take their system or server down. In that, you don't use RAID 5. I learned it the hard way and hammer this point home with every RAID-related question I get into on SF. Why? Because as drives are getting larger, there's more tolerance for URE, unrecoverable read errors. We had it happen and it isn't what you want to discover in the middle of a rebuild. Scenario: RAID system with 3 drives. We got an alarm on our Dell with a hardware PERC card that drive C died. Order new drive, swap it out, no problem. In the middle of the rebuild, it died. According to the diagnostics, there was a "bad spot" on drive B. The system had a silently failed on that drive repeatedly, and now that it was rebuilding the data, it couldn't read that spot, and no matter how many times we ran the repair even off the controller directly and it each time said everything was fixed, it wouldn't rebuild. So we have one dead drive and one drive that couldn't read from a spot...we end up replacing 2 drives and restoring from backup. Lesson: RAID isn't a backup, and RAID 5 is no longer an availability option for larger drives. If you're looking to increase speed or increase storage sizes, then you can balance that into your decision. You need to define your needs in terms of your needs and goals, not in terms of "I need RAID, which do I use?" | {
"source": [
"https://serverfault.com/questions/118181",
"https://serverfault.com",
"https://serverfault.com/users/35989/"
]
} |
118,290 | Is it possible to enter some sort of CNAME record or alias in the windows hosts file (C:\Windows\System32\drivers\etc\hosts)? Basically I want to forward all requests made to host A to host B, without having to hard code the IP address (which changes frequently). | The windows hosts file supports only ip->name mappings, it does not support any other standard DNS record types. See here: http://technet.microsoft.com/en-us/library/bb727005.aspx#EDAA I would recommend running a simple dns server in order to do the redirect you are talking about. Try powerdns http://www.powerdns.com/ | {
"source": [
"https://serverfault.com/questions/118290",
"https://serverfault.com",
"https://serverfault.com/users/36477/"
]
} |
118,297 | I can't seem to get Windows installed into this Dell PowerEdge 2950 server (As embarrassing as it is). The server has no floopy and the install needs a RAID driver (Perc 5/i). So far I have tried: A usb floppy drive 2 Thumb drives after turning on floppy emulation for USB in the BIOS for these drives. The drives have the drivers in the root of the drive (no folders). I have a dell disk for a 2850 not a 2950, but that asks for a replication floppy? Anything else I can try besides building a custom windows install disk? | The windows hosts file supports only ip->name mappings, it does not support any other standard DNS record types. See here: http://technet.microsoft.com/en-us/library/bb727005.aspx#EDAA I would recommend running a simple dns server in order to do the redirect you are talking about. Try powerdns http://www.powerdns.com/ | {
"source": [
"https://serverfault.com/questions/118297",
"https://serverfault.com",
"https://serverfault.com/users/2561/"
]
} |
118,378 | I need to test sub-domains on my localhost. How can I effectively have this result of adding *.localhost.com to my /etc/hosts/ file? If it's not possible, how do I work around this problem? I need to test wildcard sub-domains on my localserver. It is a Django devserver, can the Django dev server handle the sub-domains? Can some other piece of software/routing give me the end result I want? | Install dnsmasq (I do this on all my Linux desktops as a DNS cache anyways). In dnsmasq.conf add the line: address=/localhost.com/127.0.0.1 | {
"source": [
"https://serverfault.com/questions/118378",
"https://serverfault.com",
"https://serverfault.com/users/2476/"
]
} |
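A slightly fuller sketch of the dnsmasq approach; the domain is the one from the question, and the resolver change assumes you want every local lookup to go through dnsmasq first (via resolvconf or NetworkManager where applicable).

# /etc/dnsmasq.conf
address=/localhost.com/127.0.0.1

# /etc/resolv.conf (or the resolvconf/NetworkManager equivalent)
nameserver 127.0.0.1

$ sudo /etc/init.d/dnsmasq restart
$ dig +short anything.localhost.com @127.0.0.1
127.0.0.1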
118,652 | First, what exactly Bonjour does (pleas read my guesses written bellow)? Here I found out that Bonjour enables automatic discovery of computers, devices, and services on IP networks. But I thought that it not only "discovers devices on IP network" it also creates an IP network by assigning IP addresses to devices where Bonjour is running. Am I right? And I still miss the essence. Does it work in the following way? First I connect devices (for example laptops) physically so that they potentially can communicate with each other. Then, let say, on some laptops I have Bonjour running and then, as a consequence, these laptops assign IP addresses to them self in automatic way. So, laptops (where Bonjour is running) build an IP network. Does it work in this way? Or may be a computer running Bonjour is not considered as a service and it does not broadcast itself just because Bonjour is running on this computer. I mean that the applications running on the computers need to use Bonjour to broadcast themself. So, it is applications that broadcast themself (not computers) and it is not done automatically (application needs to broadcast themself explicitly). Is it right? How exactly my application can broadcast itself? Can I use command line to register an service (so that all applications using Bonjour knows that a new service appeared)? Further, I would like to have an application which use the IP network created by Bonjour. For that my application needs to know which devices/services are present in the network. In more details, my application needs to have a list of services. Each service in the list should have a name, the IP address where it is running and the port which is used by the application. Can Bonjour provide this information in some way? If it is the case, how exactly it works. How my program can get this information from Bonjour? Can my program read some file created by Bonjour and containing the above mentioned information? Can I use some commands in command line to retrieve this information? I have a special interest in accessing the information about services from files, environment variables or commands in command line. These options seems to me to be the simplest! Since in these case I do not need to use any additional libraries to communicate with Bonjour from a particular programming language. P.S. Pleas ask questions if something is not clear in my question. I will try to formulate my question in a more clear way. P.P.S. I use Windows 7 . ADDED:
I plan to write my applications in PHP. Every computer should run a Apache web server. And I want to use Bonjour to help computer discover each other (computers are working in a local network). | Yes. Stuart Cheshire, who was the creator and is a primary maintainer of Rendezvous/Bonjour at Apple, who also co-chaired the IETF ZeroConf working group, and wrote the O’Reilly book on Zero Configuration Networking, has described Bonjour as a “three-legged stool” where the legs are: IPv4 (and IPv6) link-local addressing Multicast Name Resolution (mDNS) DNS Service Discovery (DNS-SD) The IETF ZeroConf working group and Apple both consider link-local addressing, especially IPv4 link-local addressing ( 169.254.0.0/16 addresses) to be part of ZeroConf/Bonjour, even though link-local addressing shipped years before the other two “legs of the stool”. Note that since Windows already supports automatic link-local addressing even without Apple’s Bonjour for Windows software installed, many Windows users do not think of IPv4 link-local addressing to be part of Bonjour/ZeroConf. Yes. Macs and Windows machines, by default, do IPv4 link-local addressing if they are configured for DHCP but there is no DHCP server available. Linux and BSD machines with Avahi (or possibly other ZeroConf implementations) installed will also do this. If a computer is running Bonjour, its hostname is published on the LAN via mDNS. If your machine’s name is “Alice”, it will be Alice.local via mDNS. From another computer (let’s call it “Bob”) on the same LAN (specifically, on the same link-local multicast domain), you should be able to simply type ping Alice.local , and Bob should do an mDNS lookup of Alice.local to discover Alice's IP address(es), and ping (one of) the address(es) it gets back. Note, though, that Bonjour differentiates between hostnames and service names. For example, if you have two separate USB printers, let’s say “HP” and “Canon”, connected to Alice, and Alice is acting as, say, an lpr print server for both of them, they can each show up as their own service, which maps back to Alice.local as the host. Their service names would show up to the user as “HP” and “Canon” with no mention of Alice. Behind the scenes, they would be known as HP._printer._tcp.local and Canon._printer._tcp.local , and DNS-SD lookups on those service names would show that those services are available on Alice.local on two different TCP ports. So yes, applications must notify the Bonjour daemon (called mDNSResponder in Apple’s implementation) that they have services they want to advertise. macOS has mechanisms to automatically handle service advertisement for legacy services that are not natively Bonjour-aware. For instance, macOS's sshd is OpenSSH, which doesn't support Bonjour directly, but macOS takes care of advertising the ssh service via Bonjour so that you can just ssh [email protected] from other machines on the LAN. On macOS, there's a "dns-sd" command-line tool that can register a service using this syntax: dns-sd -R <Name> <Type> <Domain> <Port> [<TXT>...]
# (Register a service) So for example: dns-sd -R MyWebsite _http._tcp local 80 I would not be surprised if it is included in Bonjour for Windows, or the Bonjour SDK for Windows, or if you can compile it for Windows from Apple’s mDNSResponder open-source project. Googling for dns-sd.exe , I see such a thing exists. I am not sure I would just download a binary for it. Instead I would try to get it from one of the packages mentioned above, or compile it myself from the mDNSResponder project sources. You can also use the dns-sd command-line tool to browse for services and look them up. Here is an example of looking up a local web service: Browse for local web services with -B : $ dns-sd -B _http._tcp local
Browsing for _http._tcp.local
Timestamp A/R Flags if Domain Service Type Instance Name
16:30:59.870 Add 3 6 local. _http._tcp. My Cool Web App
16:30:59.871 Add 3 6 local. _http._tcp. Someone Else's Web Service
16:30:59.871 Add 3 6 local. _http._tcp. A Third One
^C Look up the one I want, "My Cool Web App", with -L : $ dns-sd -L "My Cool Web App" _http._tcp local
Lookup My Cool Web App._http._tcp.local
16:31:52.678 My\032Cool\032Web\032App._http._tcp.local. can be reached at MyWebServer.local.:80 (interface 6)
^C Query the IP addresses for MyWebServer.local, with -Q : $ dns-sd -Q MyWebServer.local
Timestamp A/R Flags if Name T C Rdata
16:32:40.786 Add 2 6 MyWebServer.local. 1 1 169.254.45.209
^C Note in these examples that you must Ctrl-C out of the dns-sd tool. Otherwise it will stay open forever, continuously watching the network and reporting any changes in the results of the query you issued (such as web servers coming and going on the network, while you sit with a -B browse query open). I have found that for this and other reasons, the dns-sd tool is not well suited for being called from a script. You might want to look at what the ZeroConf libraries for your preferred language after all. To answer one of your other questions, I am not aware of any ZeroConf implementation that allows you to perform queries and get results just by reading/writing files. Most apps that use Bonjour do so by calling the APIs, either directly (C/C++/Obj-C/Swift apps) or through a library specific to the language (interpreted/scripting languages). | {
"source": [
"https://serverfault.com/questions/118652",
"https://serverfault.com",
"https://serverfault.com/users/35301/"
]
} |
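Since the poster is on Windows 7, the same registration and browse commands from the answer should work with the dns-sd.exe that ships with the Bonjour SDK for Windows; the service name and TXT value below are placeholders.

C:\> dns-sd -R "My PHP App" _http._tcp local 80 path=/index.php
C:\> dns-sd -B _http._tcp local
C:\> dns-sd -L "My PHP App" _http._tcp local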
118,653 | Is there a way to localize DNS entries? Meaning that users from Asia resolve mydomain.com to a different IP than users from the USA or Europe.
This would be helpful to give the users the server nearby. DNS is the only technique used so far, meaning I cannot place some softwarerouting or central system replacing the dns to solve this. | Yes, there are currently two popular solutions to this problem. The first is called Anycast , where the same IP block is literally in use in multiple locations around the world. That is to say, the name servers for your domain always return the same IP address, but that IP address is actually assigned to more than one set of physical servers. You can read more about it here http://en.wikipedia.org/wiki/Anycast The second technique again involves AnyCast, however this time, the IP address range being anycasted referes to our name servers themselves. As the nameservers will only requests from clients who they are closest too (as determined by the magic of BGP), they can themselves return IP addresses that are logically local to the client. An example of this is google's l.google.com domain From a host in Australia crimson:~ dave$ host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com is an alias for www-notmumbai.l.google.com.
www-notmumbai.l.google.com has address 66.249.89.99
www-notmumbai.l.google.com has address 66.249.89.147
www-notmumbai.l.google.com has address 66.249.89.103
www-notmumbai.l.google.com has address 66.249.89.104 From a host in the US [dave@odessa ~]$ host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 74.125.95.99
www.l.google.com has address 74.125.95.147
www.l.google.com has address 74.125.95.104
www.l.google.com has address 74.125.95.106
www.l.google.com has address 74.125.95.105
www.l.google.com has address 74.125.95.103 So, the CNAME for www.google.com resolves to www.l.google.com , but when you resolve that, depending on your location, your client receives a different set of IP addresses. This is because the name server that received the request for www.l.google.com was the local nameserver, relative to the client. | {
"source": [
"https://serverfault.com/questions/118653",
"https://serverfault.com",
"https://serverfault.com/users/13589/"
]
} |
118,791 | I'm running e2fsck on a very large (1TB+) ext3 disk with e2fsck -v /dev/sda1 from RIPLinux booted with PXE. I get e2fsck 1.41.6 (30-May-2009)
/dev/sda1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes and then a very long pause... How do I get some idea of activity? Ideally a count of completed items vs total and some kind of ETA. | The -C flag will display a progress bar. Performance differs depending on how fsck is called. And very cool, if e2fsck is already running, you can send a USR1 signal for it to start displaying a progress bar. USR2 to stop. Example: killall -USR1 e2fsck From FSCK(8): -C Display completion/progress bars for those filesys-
tems checkers (currently only for ext2) which sup-
port them. Fsck will manage the filesystem check-
ers so that only one of them will display a
progress bar at a time. From E2FSCK(8): -C fd This option causes e2fsck to write completion
information to the specified file descriptor so
that the progress of the filesystem check can be
monitored. This option is typically used by pro-
grams which are running e2fsck. If the file
descriptor specified is 0, e2fsck will print a com-
pletion bar as it goes about its business. This
requires that e2fsck is running on a video console
or terminal. | {
"source": [
"https://serverfault.com/questions/118791",
"https://serverfault.com",
"https://serverfault.com/users/15016/"
]
} |
118,923 | What is the difference between /etc/hosts and /etc/resolv.conf? Also, is there good documentation that explains all these configuration files? Thank you.
Bala | resolv.conf specifies the nameservers for resolver lookups, where it will actually use the DNS protocol for resolving the hostnames. Typically the hosts file is used for administrative purposes, such as backend and internal functions, which is substantially more isolated in scope, as only the local server will reference it. /etc/nsswitch.conf specifies the lookup order with the hosts entry. If this does not answer your question, please clarify further. Look at the following manpages: HOSTS(5)
RESOLVER(5) | {
"source": [
"https://serverfault.com/questions/118923",
"https://serverfault.com",
"https://serverfault.com/users/32389/"
]
} |
119,105 | I tried searching a lot but was unable to find how to actually set up an Ubuntu server so that I can send mail through PHP using the mail() function. I have apache2, mysql and php5 installed on my server. Thank You. | I also agree there is a lack of tutorials for people who just need a script to send, e.g. error emails, and don't need a full-blown mail server. First, if postfix is not already installed, do: sudo apt-get install postfix It prompts with a couple of questions. For the first I chose "Internet site"; for a machine behind a firewall I might choose smarthost instead.
For the second question it defaults to the machine name; I appended a domain name that I control (so I can set DNS for it later, should I need to). At this point you should be able to use "mail" from the commandline to send a test. (I usually follow instructions on http://ubuntuforums.org/showthread.php?t=38429 first, otherwise I have to use the -f flag to /usr/bin/sendmail . I also like to create /etc/aliases with entries for root and my normal user, and then run newaliases ) Then under /etc/php5/conf.d create a file (e.g. mailconfig.ini) with these contents: sendmail_from = "[email protected]"
sendmail_path = "/usr/sbin/sendmail -t -i -f [email protected]" Change [email protected] to your email address. They mean all email will look like it is sent by you, which can help prevent it being rejected. This is sufficient for just sending error emails to a developer. (The above instructions tested on Ubuntu 10.04, 11.04, 11.10, 12.04) P.S. As pointed out by razzed in the comments, mail is not always there (e.g. on Ubuntu 11.10 it is missing). This does not actually affect the above instructions, you only need mail for the test, and you can use sendmail just as well for that. But mail is also useful for reading email, so it is usually worth installing it, with: apt-get install mailutils (as root). | {
"source": [
"https://serverfault.com/questions/119105",
"https://serverfault.com",
"https://serverfault.com/users/26789/"
]
} |
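A minimal way to test the setup from the answer above, both from the shell and from PHP. The recipient address is a placeholder, and the log path assumes Ubuntu's default of /var/log/mail.log:
echo "test body" | mail -s "test subject" someone@example.com
php -r 'var_dump(mail("someone@example.com", "test subject", "test body"));'
tail -n 20 /var/log/mail.log   # watch postfix accept and deliver (or bounce) the message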
119,111 | I'd like to relay mail from my laptop through my server. I've successfully configured postfix and SASL, and I can AUTH successfully using telnet. dhcp-241:~ jgorset$ telnet mail.example.com 25
Trying 85.25.124.196...
Connected to mail.example.com.
Escape character is '^]'.
220 mail.example.com ESMTP Postfix
EHLO mail.example.com
250-mail.example.com
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-AUTH PLAIN LOGIN
250-AUTH=PLAIN LOGIN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
AUTH PLAIN ***********************
235 2.7.0 Authentication successful
MAIL FROM: <[email protected]>
250 2.1.0 Ok
RCPT TO: <[email protected]>
554 5.7.1 <[email protected]>: Recipient address rejected: Access denied As demonstrated above, I'm being denied relay through my postfix server even though I've authenticated and configured postfix to allow authenticated clients to relay mail. # /etc/postfix/main.cf
# SMTP authentication
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_security_options = noanonymous
smtpd_recipient_restrictions = reject, permit_sasl_authenticated Am I being rejected because the smtpd_recipient_restrictions option is set to reject , even though is specifically states to permit_sasl_authenticated ? I was under the impression that the latter would override the former, seeing that you are required to enter either reject , defer , defer_if_permit or reject_unauth_destination with this option. Update : Turns out this also happens if I telnet from localhost. If I comment out the smtpd_recipient_restrictions line, I can send mail to anyone (though only from localhost). I'd like to do so from any computer that authenticates using SASL. How should I go about this? Thanks! | I also agree there is a lack of tutorials for people who just need a script to send, e.g. error emails, and don't need a full-blown mail server. First, if postfix not already installed do: sudo apt-get install postfix It prompts with a couple of questions. For the first I chose "Internet site"; for a machine behind a firewall I might choose smarthost instead.
For the second question it defaults to the machine name; I appended a domain name that I control (so I can set DNS for it later, should I need to). At this point you should be able to use "mail" from the commandline to send a test. (I usually follow instructions on http://ubuntuforums.org/showthread.php?t=38429 first, otherwise I have to use the -f flag to /usr/bin/sendmail . I also like to create /etc/aliases with entries for root and my normal user, and then run newaliases ) Then under /etc/php5/conf.d create a file (e.g. mailconfig.ini) with these contents: sendmail_from = "[email protected]"
sendmail_path = "/usr/sbin/sendmail -t -i -f [email protected]" Change [email protected] to your email address. They mean all email will look like it is sent by you, which can help prevent it being rejected. This is sufficient for just sending error emails to a developer. (The above instructions tested on Ubuntu 10.04, 11.04, 11.10, 12.04) P.S. As pointed out by razzed in the comments, mail is not always there (e.g. on Ubuntu 11.10 it is missing). This does not actually affect the above instructions, you only need mail for the test, and you can use sendmail just as well for that. But mail is also useful for reading email, so it is usually worth installing it, with: apt-get install mailutils (as root). | {
"source": [
"https://serverfault.com/questions/119111",
"https://serverfault.com",
"https://serverfault.com/users/36453/"
]
} |
119,299 | My /var/log/btmp file is 1.3 GB in size. I've read that the file is "Used to store information about failed login". What does this mean for my server? And can I delete this file? | This means people are trying to brute-force your passwords (common on any public-facing server). It shouldn't cause any harm to clear out this file. One way to reduce this is to change the port for SSH from 22 to something arbitrary. For some additional security, DenyHosts can block login attempts after a certain number of failures. I'd highly recommend installing and configuring it. | {
"source": [
"https://serverfault.com/questions/119299",
"https://serverfault.com",
"https://serverfault.com/users/36766/"
]
} |
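A small sketch of inspecting and clearing the file discussed above (the logrotate check assumes your distribution ships a btmp stanza, which many do):
lastb | head                        # lastb reads /var/log/btmp and lists the failed logins
> /var/log/btmp                     # truncate it in place (or: truncate -s 0 /var/log/btmp)
grep -A4 btmp /etc/logrotate.conf   # check whether logrotate already rotates it for you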
119,869 | Does anyone know how I would turn off the interactive mode when using cp? I am trying to copy a directory recursively into another and for each file that is getting overwritten I have to answer 'y'. The command I am using is: cp -r /usr/share/drupal-update/* /usr/share/drupal But I get asked to confirm each overwrite: cp: overwrite `./CHANGELOG.txt'? y
cp: overwrite `./COPYRIGHT.txt'? y
cp: overwrite `./INSTALL.mysql.txt'? y
cp: overwrite `./INSTALL.pgsql.txt'? y
... I am using ubuntu server version jaunty. Thanks! | Execute: alias cp To see if cp has been aliased to cp -i In that case run: \cp -r /usr/share/drupal-update/* /usr/share/drupal to ignore the alias | {
"source": [
"https://serverfault.com/questions/119869",
"https://serverfault.com",
"https://serverfault.com/users/36571/"
]
} |
120,418 | There are some programs which can display used disk space using a treemap , such as WinDirStat for Windows and KDirStat for KDE/Linux: I'm looking for something similar, but for a headless Linux box. (E.g. run console data collection program on the server, then load the file in a graphical program in a GUI environment.) Alternatively, what are other good ways to get a structured used disk space representation, with just SSH access? | NCurses Disk Usage (ncdu) is good for this. See http://dev.yorhel.nl/ncdu for details. It's available as a package for most popular distributions and lets you browse and find out where your disk space is used. It uses text characters to display a bar-chart of directory usage so you get a semi-graphical interface, in a text only environment. | {
"source": [
"https://serverfault.com/questions/120418",
"https://serverfault.com",
"https://serverfault.com/users/25229/"
]
} |
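A minimal sketch of installing and running ncdu as described above (the package command assumes Debian/Ubuntu; on RHEL/CentOS it is available from EPEL):
sudo apt-get install ncdu
ncdu -x /    # -x keeps the scan on one filesystem so network mounts are not counted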
120,431 | I switched a few weeks ago from a dedicated server to a VPS. Now that everything is working well on the VPS I would like to shutdown the dedicated server and close my account with the hosting company. For peace of mind and in order to be more safe I would like to do a full backup of the server before stopping it. The best would be a backup that I could browse if I find that I need a something in the backup. What would be the best solution from command line? Update : Medium : Network | The best tool to use for this is probably dump, which is a standard linux tool and will give you the whole filesystem. I would do something like this: /sbin/dump -0uan -f - / | gzip -2 | ssh -c blowfish [email protected] dd of=/backup/server-full-backup-`date '+%d-%B-%Y'`.dump.gz This will do a file system dump of / (make sure you don't need to dump any other mounts!), compress it with gzip and ssh it to a remote server (backupserver.example.com), storing it in /backup/. If you later need to browse the backup you use restore: restore -i Another option, if you don't have access to dump is to use tar and do something like tar -zcvpf /backup/full-backup-`date '+%d-%B-%Y'`.tar.gz --directory / --exclude=mnt --exclude=proc --exclude=tmp . But tar does not handle changes in the file system as well. | {
"source": [
"https://serverfault.com/questions/120431",
"https://serverfault.com",
"https://serverfault.com/users/34271/"
]
} |
120,437 | How can I add a service (like tomcat) to my startup routine so it starts up upon server restarts? | The best tool to use for this is probably dump, which is a standard linux tool and will give you the whole filesystem. I would do something like this: /sbin/dump -0uan -f - / | gzip -2 | ssh -c blowfish [email protected] dd of=/backup/server-full-backup-`date '+%d-%B-%Y'`.dump.gz This will do a file system dump of / (make sure you don't need to dump any other mounts!), compress it with gzip and ssh it to a remote server (backupserver.example.com), storing it in /backup/. If you later need to browse the backup you use restore: restore -i Another option, if you don't have access to dump is to use tar and do something like tar -zcvpf /backup/full-backup-`date '+%d-%B-%Y'`.tar.gz --directory / --exclude=mnt --exclude=proc --exclude=tmp . But tar does not handle changes in the file system as well. | {
"source": [
"https://serverfault.com/questions/120437",
"https://serverfault.com",
"https://serverfault.com/users/9900/"
]
} |
120,488 | I have a dedicated server with Apache, on which I've set up some VirtualHosts. I've set up one to handle the www domain as well as the non-www domain. My VH .conf file for the www: <VirtualHost *>
DocumentRoot /var/www/site
ServerName www.example.com
<Directory "/var/www/site">
allow from all
</Directory>
</VirtualHost> With this .htaccess : RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} ^www.example.com [NC]
RewriteRule ^(.*)$ http://example.com/$1 [L,R=301] Is there a simple way to redirect the www to the non-www version? Currently I'm sending both versions to the same DocumentRoot and using .htaccess but I'm sure I must be able to do it in the VirtualHost file. | Turns out mod_rewrite rules are fine in the VirtualHosts file, apart from the RewriteBase rule. I ended up with this: <VirtualHost *>
ServerName www.example.com
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.example.com
RewriteRule ^/(.*)$ http://example.com/$1 [L,R=301]
</VirtualHost> EDIT: on the advice of joschi in the comments, I'm now using this simplified version using the Redirect directive from mod_alias : <VirtualHost *>
ServerName www.example.com
Redirect 301 / http://example.com/
</VirtualHost> | {
"source": [
"https://serverfault.com/questions/120488",
"https://serverfault.com",
"https://serverfault.com/users/17166/"
]
} |
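A quick way to verify the redirect above from the command line, using the placeholder domain from the question:
curl -sI http://www.example.com/ | egrep -i '^(HTTP|Location)'
# expected: HTTP/1.1 301 Moved Permanently and Location: http://example.com/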
120,843 | I am trying to import a mysqldump into a new database. When I run: mysqldump -umydbuser -p --database testimport < database.dump I get the following output: Enter password:
-- MySQL dump 10.11
--
-- Host: localhost Database: testimport
-- ------------------------------------------------------
-- Server version 5.0.75-0ubuntu10.3
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
--
-- Current Database: `testimport`
--
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `testimport` /*!40100 DEFAULT CHARACTER SET latin1 */;
USE `testimport`;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
-- Dump completed on 2010-03-09 17:46:03 However, when I look at the testimport database, there are no tables and no data. Even if I export a working database by: mysqldump -umydbuser -p --database workingdatabase > test.sql and then import: mysqldump -umydbuser -p --database testimport < test.sql I get the same output, but nothing is imported into the testimport database. I don't see any errors in the output and it is using the proper database. If I tail the exported .sql file, I see the create statements for all tables and the inserts for all data. Why isn't this data importing? Is there any additional logging I can see? | You want to run the dump through the mysql client. Example: mysql -uroot -p testimport < database.dump | {
"source": [
"https://serverfault.com/questions/120843",
"https://serverfault.com",
"https://serverfault.com/users/37112/"
]
} |
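A hedged sketch of the full export/import round trip, reusing the database and user names from the question (adjust to taste):
mysqldump -umydbuser -p --databases workingdatabase > test.sql   # --databases embeds CREATE DATABASE and USE
mysql -umydbuser -p < test.sql                                   # so no target database is needed on import
mysql -umydbuser -p testimport < database.dump                   # for a dump made without CREATE/USE statements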
120,877 | I want to use file transfer via SSH on some scripts. I've read it's possible to tar over ssh. Where should I start reading? | To do file transfer over ssh you can use scp scp -r /srcdir/ user@remotehost:/destdir/ use rsync over ssh (see the -e parameter) rsync -e ssh -a /srcdir/ user@remotehost:/destdir/ use some tool that transfers data via stdin/out ( tar , cpio , etc) cd /sourcedir; tar -c . | ssh username@remotehost 'cd /dstdir; tar -x' Mount the filesystem via sshfs (if fuse is supported on your system) | {
"source": [
"https://serverfault.com/questions/120877",
"https://serverfault.com",
"https://serverfault.com/users/701/"
]
} |
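The sshfs option mentioned in the answer above, as a rough sketch (the package command assumes Debian/Ubuntu and the mount point is arbitrary):
sudo apt-get install sshfs
mkdir -p /mnt/remote
sshfs user@remotehost:/destdir /mnt/remote   # browse the remote directory like a local one
fusermount -u /mnt/remote                    # unmount when done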
120,878 | Does there exist a router that supports multiple GRE connections over NAT? Im currently running pfSense, and it only supports 1 at a time. I understand why, its just a drag as there are multiple people in this office trying connect to the same VPN server. Obviously there are other ways to solve this, like different VPN setups, multiple interfaces, etc, but for a variety of reasons Id much prefer multiple GRE connections. | To do file transfer over ssh you can use scp scp -r /srcdir/ user@remotehost:/destdir/ use rsync over ssh (see the -e parameter) rsync -e ssh -a /srcdir/ user@remotehost:/destdir/ use some tool that transfers data via stdin/out ( tar , cpio , etc) cd /sourcedir; tar -c . | ssh username@remotehost bash 'cd /dstdir; tar -x Mount the filesystem via sshfs (if fuse is supported on your system) | {
"source": [
"https://serverfault.com/questions/120878",
"https://serverfault.com",
"https://serverfault.com/users/11087/"
]
} |
121,121 | I use remote SMTP via nullmailer and it requires the From field to be set to a specific name, but cron sets it as [email protected]. How could I change it to something like [email protected]? | Modern versions of cron do accept "MAILFROM=..." in the crontab format. I suggest that you try "man 5 crontab". If it mentions MAILFROM, your version should support it. The phrase to look for is towards the end of the paragraph discussing MAILTO, and should be something like this: If MAILFROM is defined (and non-empty), it will be used as the envelope sender address, otherwise, ''root'' will be used. | {
"source": [
"https://serverfault.com/questions/121121",
"https://serverfault.com",
"https://serverfault.com/users/10421/"
]
} |
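A hedged crontab sketch of the MAILTO/MAILFROM variables discussed above. The addresses and script path are placeholders, and whether MAILFROM is honored depends on the cron implementation (cronie supports it):
MAILTO=alerts@example.com
MAILFROM=cron@example.com
0 2 * * * /usr/local/bin/nightly-job.sh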
121,133 | I've been reading a lot about java memory management, garbage collecting et al and I'm trying to find the best settings for my limited memory (1.7g on a small ec2 instance) I'm wondering if there is a direct correlation between my code size and the permgen setting. According to sun: The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation. To me this means that it's literally storing my class def'ns etc... Does this mean there is a direct correlation between my compiled code size and the permgen I should be setting? My whole app is about 40mb and i noticed we're using 256mb permgen. I'm thinking maybe we're using memory that could be better allocated to dynamic code like object instances etc... | Modern versions of cron do accept "MAILFROM=..." in the crontab format. I suggest that you try "man 5 crontab". If it mentions MAILFROM, your version should support it. The phrase to look for is towards the end of the paragraph discussing MAILTO, and should be something like this: If MAILFROM is defined (and non-empty), it will be used as the envelope sender address, otherwise, ''root'' will be used. | {
"source": [
"https://serverfault.com/questions/121133",
"https://serverfault.com",
"https://serverfault.com/users/19656/"
]
} |
121,890 | I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something changed. I'm not sure what or when, and not sure whether I changed any configs. So now I have a really weird situation: according to the logs, name resolving should work normally. /etc/hosts 127.0.0.1 localhost test
127.0.1.1 desktop /etc/host.conf order hosts,bind
multi on /etc/resolv.conf # Generated by NetworkManager
search search servers obtained via DHCP
nameserver 192.168.0.3 /etc/nsswitch.conf passwd: compat
group: compat
shadow: compat
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
netgroup: nis But if fact it is not. user@test ~>ping test PING localhost (127.0.0.1) 56(84) bytes of data.
[skip] Pinging is ok. user@test ~>host test test.mydomain.com has address xx.xxx.161.201 I suspect that NetworkManager might cause this misbehavior, but don't know where to start to check it.
Any thoughts, suggestions? | With this configuration, most applications will happily work with your entry from /etc/hosts . However host doesn't look at /etc/nsswitch.conf . That is by design, not by accident, since host is specifically a DNS lookup program. /etc/hosts is not DNS, it's (mostly) what we used before we had DNS. The same is also true for dig and nslookup - they're DNS specific too. | {
"source": [
"https://serverfault.com/questions/121890",
"https://serverfault.com",
"https://serverfault.com/users/51983/"
]
} |
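A quick way to see the difference described above on the same box: getent goes through nsswitch.conf (so it honors /etc/hosts), while host talks straight to the DNS server:
getent hosts test    # returns 127.0.0.1 from /etc/hosts
host test            # returns the public DNS record, e.g. test.mydomain.com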
122,157 | I'm looking for an easy way to follow a packet through the iptables rules. This is not so much about logging, because I don't want to log all traffic (and I only want to have LOG targets for very few rules). Something like Wireshark for Iptables. Or maybe even something similar to a debugger for a programming language. Thanks
Chris Note: It doesn't have to be a fancy GUI tool. But it must do more than just showing a packet counter or so. Update: It almost looks as if we can't find anything that provides the functionality that is asked for. In that case: Let's at least find a good technique that's based on iptables logging - which can be easily turned on and off, and doesn't require writing iptables rules redundantly (having to write the same rule for -j LOG and -j ... ) | If you have a recent enough kernel and version of iptables you can use the TRACE target (Seems to be builtin on at least Debian 5.0). You should set the conditions of your trace to be as specific as possible and disable any TRACE rules when you are not debugging because it does spew a lot of information to the logs. TRACE This target marks packets so that
match the packets as those traverse
the tables, chains, rules. (The
ipt_LOG or ip6t_LOG module is required
for the logging.) The packets are
logged with the string prefix: "TRACE:
tablename:chainname:type:rulenum "
where type can be "rule" for plain
rule, "return" for implicit rule at
the end of a user defined chain and
"policy" for the policy of the built
in chains. It can only be used in the
raw table. If you added rules like this iptables -t raw -A PREROUTING -p tcp --destination 192.168.0.0/24 --dport 80 -j TRACE
iptables -t raw -A OUTPUT -p tcp --destination 192.168.0.0/24 --dport 80 -j TRACE You will be supplied with output that looks like this. # cat /var/log/kern.log | grep 'TRACE:'
Mar 24 22:41:52 enterprise kernel: [885386.325658] TRACE: raw:PREROUTING:policy:2 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.12.152 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=80 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325689] TRACE: mangle:PREROUTING:policy:1 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.12.152 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=80 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325713] TRACE: nat:PREROUTING:rule:1 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.12.152 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=80 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: nat:nat.1:rule:1 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.12.152 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=80 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: mangle:INPUT:policy:1 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.32.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=3128 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: filter:INPUT:rule:2 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.32.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=3128 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: filter:in_world:rule:1 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.32.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=3128 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: filter:in_world_all_c1:return:2 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.32.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=3128 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: filter:in_world:rule:2 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.32.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=3128 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402)
Mar 24 22:41:52 enterprise kernel: [885386.325731] TRACE: filter:in_world_irc_c2:return:2 IN=eth0 OUT= MAC=00:1d:7d:aa:e3:4e:00:04:4b:05:b4:dc:08:00 SRC=192.168.32.18 DST=192.168.32.10 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=30561 DF PROTO=TCP SPT=53054 DPT=3128 SEQ=3653700382 ACK=0 WINDOW=8192 RES=0x00 SYN URGP=0 OPT (020405B40103030201010402) | {
"source": [
"https://serverfault.com/questions/122157",
"https://serverfault.com",
"https://serverfault.com/users/37454/"
]
} |
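To disable the TRACE rules again when you are done debugging, as the answer above recommends (the rule numbers here are examples - check the listing first):
iptables -t raw -L -n --line-numbers   # find the TRACE rules and their positions
iptables -t raw -D PREROUTING 1        # delete by position, or repeat the original rule with -D instead of -A
iptables -t raw -D OUTPUT 1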
122,178 | I'm using CentOS and Red Hat Enterprise Linux on a few machines without the GUI. How can I check if recently installed updates require a reboot? In Ubuntu, I'm used to checking if /var/run/reboot-required is present. | https://access.redhat.com/discussions/3106621#comment-1196821 Don't forget that you might need to reboot because of core library updates, at least if it is glibc. (And also, services may need to be restarted after updates). If you install the yum-utils package, you can use a command called needs-restarting . You can use it both for checking if a full reboot is required because of kernel or core libraries updates (using the -r option), or what services need to be restarted (using the -s option). needs-restarting -r returns 0 if reboot is not needed, and 1 if it is, so it is perfect to use in a script. An example: root@server1:~> needs-restarting -r ; echo $?
Core libraries or services have been updated:
openssl-libs -> 1:1.0.1e-60.el7_3.1
systemd -> 219-30.el7_3.9
Reboot is required to ensure that your system benefits from these updates.
More information:
https://access.redhat.com/solutions/27943
1 | {
"source": [
"https://serverfault.com/questions/122178",
"https://serverfault.com",
"https://serverfault.com/users/1753/"
]
} |
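A small scripted sketch of the exit-code behaviour described above (the -s flag only exists in newer yum-utils versions):
yum -y install yum-utils
if needs-restarting -r > /dev/null; then
    echo "no reboot needed"
else
    echo "reboot required"
fi
needs-restarting -s    # list services that should be restarted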
122,562 | I am upgrading the LAMP stack on a customer's server and need to ./configure mysql and apache with exactly the same settings they were compiled with last time. Where do I get these? The PHP configure string can be obtained with php -i. What about the others? | Was the decompressed source directory kept around? If so, the configure flags would typically be in config.status or config.log . This differs slightly depending on the software and whether or not autoconf was used.
"source": [
"https://serverfault.com/questions/122562",
"https://serverfault.com",
"https://serverfault.com/users/20381/"
]
} |
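Two hedged ways to pull the flags back out of an autoconf build tree, assuming the source directory from the answer above is still around (the config.status output varies with the autoconf version):
grep '\$ ./configure' config.log   # the exact invocation is recorded near the top of config.log
./config.status --version          # newer autoconf also echoes the configure options here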
122,564 | We are struggling with our users visiting infected or "attack" sites and Phising in general. Most of our machines are protected by an Enterprise anti virus and monitoring solution (McAffe ePO) and we try to get people to use Firefox... But no AV is perfect and we have to endure personal machines as well (albeit on their own 'Plague' VLANs) and would like to do something about Phishing as our users seem intent on disclosing their passwords to the world... To complicate matters we don't want to implement a block for many many reasons (political, ideological, legal etc) instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" - "Get me out of here" or crucially "Let me in anyway", giving the user a choice to still infect themselves if they feel like it (or look at a site incorrectly blacklisted). The reason we can't just use Firefox is we have a core enterprise app only certified on IE6&7 - thank you Oracle. Is it possible to implement this type of advisory filtering either using a proxy (in our case Squid) or DNS? What free options are available for web content filtering? Open Source Filtering of HTTPS Traffic Were a good start, but they don't address the advisory aspect of the filtering. | Was the decompressed source directory kept around? If so, the configure flags would typically be in config.status or config.log . This differs slightly depending on the software and whether or not autoconf was used. | {
"source": [
"https://serverfault.com/questions/122564",
"https://serverfault.com",
"https://serverfault.com/users/3256/"
]
} |
122,679 | I notice that on a new CentOS image that I just booted up off of EC2 that the ulimit default is 1024 open files, but /proc/sys/fs/file-max is set at 761,408 and I'm wondering how these two limits work together. I'm guessing that ulimit -n is a per-user limit of number of file descriptors while /proc/sys/fs/file-max is system-wide? If that's the case, say I've logged in twice as the same user -- does each logged-in user have a 1024 limit on number of open files, or is it a limit of 1024 combined open files between each of those logged-in users? And is there much performance impact to setting your max file descriptors to a very high number, if your system isn't ever opening very many files? | file-max is the maximum File Descriptors (FD) enforced on a kernel level, which cannot be surpassed by all processes without increasing. The ulimit is enforced on a process level, which can be less than the file-max . There is no performance impact risk by increasing file-max . Modern distributions have the maximum FD set pretty high, whereas in the past it required kernel recompilation and modification to increase past 1024. I wouldn't increase system-wide unless you have a technical need. The per-process configuration often needs tuned for serving a particular daemon be it either a database or a Web server. If you remove the limit entirely, that daemon could potentially exhaust all available system resources; meaning you would be unable to fix the problem except by pressing the reset button or power cycling. Of course, either of those is likely to result in corruption of any open files. | {
"source": [
"https://serverfault.com/questions/122679",
"https://serverfault.com",
"https://serverfault.com/users/37767/"
]
} |
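A few commands for inspecting the two limits discussed above, plus a hedged example of raising the per-user limit (the user name and values are placeholders and assume pam_limits is in use):
cat /proc/sys/fs/file-max    # system-wide ceiling
cat /proc/sys/fs/file-nr     # handles allocated / free / maximum right now
ulimit -Sn; ulimit -Hn       # soft and hard per-process limits for this shell
# in /etc/security/limits.conf:
#   someuser  soft  nofile  4096
#   someuser  hard  nofile  8192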
122,681 | I'm trying to set the caller id number for an outbound call. My asterisk .call file looks like this: Channel: SIP/flowroute/1234567890
Context: test
Extension: 1234567890
Priority: 1 Here's my extensions.conf: [test]
exten => _1NXXXXXXXXX,1,Set(CALLERID(num)=8005552222)
exten => _1NXXXXXXXXX,n,Dial(SIP/${EXTEN}@flowroute)
exten => _1NXXXXXXXXX,n,Playback(hello-world) When I receive the call, the caller id number is 1-206-445-6979, even though the CDR log has both src and clid set to 8005552222 . I'm using flowroute as my carrier. Is there something wrong on their side? | file-max is the maximum File Descriptors (FD) enforced on a kernel level, which cannot be surpassed by all processes without increasing. The ulimit is enforced on a process level, which can be less than the file-max . There is no performance impact risk by increasing file-max . Modern distributions have the maximum FD set pretty high, whereas in the past it required kernel recompilation and modification to increase past 1024. I wouldn't increase system-wide unless you have a technical need. The per-process configuration often needs tuned for serving a particular daemon be it either a database or a Web server. If you remove the limit entirely, that daemon could potentially exhaust all available system resources; meaning you would be unable to fix the problem except by pressing the reset button or power cycling. Of course, either of those is likely to result in corruption of any open files. | {
"source": [
"https://serverfault.com/questions/122681",
"https://serverfault.com",
"https://serverfault.com/users/3303/"
]
} |
122,737 | Is the expansion of a wildcard in Bash guaranteed to be in alphabetical order? I am forced to split a large file into 10 MB pieces so that they can be accepted by my Mercurial repository. So I was thinking I could use: split -b 10485760 Big.file BigFilePiece. and then in place of: cat BigFile | bigFileProcessor I could do: cat BigFilePiece.* | bigFileProcessor in its place. However, I could not find anywhere that guaranteed that the expansion of the asterisk (aka wildcard, aka * ) would always be in alphabetical order so that .aa came before .ab (as opposed to timestamp ordering or something like that). Also, are there any flaws in my plan? How great is the performance cost of cat ing the file together? | Yes, globbing expansion is alphabetical. From the Bash man page: Pathname Expansion After word splitting, unless the -f option has been set, bash scans
each word for the characters * , ? , and [ . If one of these characters
appears, then the word is regarded as a pattern, and replaced with an
alphabetically sorted list of file names matching the pattern. | {
"source": [
"https://serverfault.com/questions/122737",
"https://serverfault.com",
"https://serverfault.com/users/37787/"
]
} |
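A tiny demonstration of the ordering guarantee above (note that the sort follows the current locale's collation, which makes no difference for split's all-lowercase suffixes):
printf 1 > piece.ab; printf 2 > piece.aa; printf 3 > piece.ac
echo piece.*    # piece.aa piece.ab piece.ac
cat piece.*     # prints 213, the same as cat piece.aa piece.ab piece.ac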
122,741 | I installed Oracle SOA Suite 11g. I start the NodeManager successfully. I start the AdminServer successfully.
When I go to start the soa server (soa_server1) I get this output: NMProcess: <Mar 15, 2010 3:33:30 PM> <WARNING> <Exception while starting server 'soa_server1'>
NMProcess: java.io.IOException: Server failed to start up. See server output log for more details.
NMProcess: at weblogic.nodemanager.server.ServerManager.start(ServerManager.java:331)
NMProcess: at weblogic.nodemanager.server.Handler.handleStart(Handler.java:541)
NMProcess: at weblogic.nodemanager.server.Handler.handleCommand(Handler.java:116)
NMProcess: at weblogic.nodemanager.server.Handler.run(Handler.java:70)
NMProcess: at java.lang.Thread.run(Thread.java:619)
NMProcess:
NMProcess: Mar 15, 2010 3:33:30 PM weblogic.nodemanager.server.Handler handleStart
NMProcess: WARNING: Exception while starting server 'soa_server1'
NMProcess: java.io.IOException: Server failed to start up. See server output log for more details.
NMProcess: at weblogic.nodemanager.server.ServerManager.start(ServerManager.java:331)
NMProcess: at weblogic.nodemanager.server.Handler.handleStart(Handler.java:541)
NMProcess: at weblogic.nodemanager.server.Handler.handleCommand(Handler.java:116)
NMProcess: at weblogic.nodemanager.server.Handler.run(Handler.java:70)
NMProcess: at java.lang.Thread.run(Thread.java:619)
Error Starting server soa_server1: weblogic.nodemanager.NMException: Exception while starting server 'soa_server1' In the log file I've got this: <Mar 15, 2010 3:33:27 PM> <INFO> <NodeManager> <Starting WebLogic server with command line: /usr/java/jdk1.6.0_18/jre/bin/java -Dweblogic.Name=soa_server1 -Djava.security.policy=null -Djava.library.path="/usr/java/jdk1.6.0_18/jre/lib/amd64 server:/usr/java/jdk1.6.0_18/jre/lib/amd64:/usr/java/jdk1.6.0_18/jre/../lib/amd64:/u01/app/oracle/product/11.1.1/mw/patch_wls1032/profiles/default/native:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/server/native/linux/x86_64:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/server/native/linux/x86_64/oci920_8:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib" -Djava.class.path=/usr/java/jdk1.6.0_18/jre/lib/rt.jar:/usr/java/jdk1.6.0_18/jre/lib/i18n.jar:/u01/app/oracle/product/11.1.1/mw/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/u01/app/oracle/product/11.1.1/mw/utils/config/10.3/config-launch.jar:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/server/lib/weblogic_sp.jar:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/server/lib/weblogic.jar:/u01/app/oracle/product/11.1.1/mw/modules/features/weblogic.server.modules_10.3.2.0.jar:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/server/lib/webservices.jar:/u01/app/oracle/product/11.1.1/mw/modules/org.apache.ant_1.7.0/lib/ant-all.jar:/u01/app/oracle/product/11.1.1/mw/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/common/eval/pointbase/lib/pbembedded57.jar:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/common/eval/pointbase/lib/pbclient57.jar:/u01/app/oracle/product/11.1.1/mw/wlserver_10.3/common/eval/pointbase/lib/pbtools57.jar -Dweblogic.nodemanager.ServiceEnabled=true weblogic.Server >
<Mar 15, 2010 3:33:27 PM> <INFO> <NodeManager> <Working directory is '/u01/app/oracle/user_projects/domains/soa_domain2'>
<Mar 15, 2010 3:33:27 PM> <INFO> <NodeManager> <Server output log file is '/u01/app/oracle/user_projects/domains/soa_domain2/servers/soa_server1/logs/soa_server1.out'>
<Mar 15, 2010 3:33:28 PM ART> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 16.0-b13 from Sun Microsystems Inc.>
<Mar 15, 2010 3:33:28 PM ART> <Info> <Management> <BEA-141107> <Version: WebLogic Server 10.3.2.0 Tue Oct 20 12:16:15 PDT 2009 1267925 >
<Mar 15, 2010 3:33:30 PM ART> <Info> <Security> <BEA-090065> <Getting boot identity from user.>
Enter username to boot WebLogic server:Enter password to boot WebLogic server:
<Mar 15, 2010 3:33:30 PM ART> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:
There are 1 nested errors:
weblogic.management.ManagementException: Booting as admin server, but servername, soa_server1, does not match the admin server name, AdminServer
at weblogic.management.provider.internal.RuntimeAccessService.start(RuntimeAccessService.java:67)
at weblogic.t3.srvr.ServerServicesManager.startService(ServerServicesManager.java:461)
at weblogic.t3.srvr.ServerServicesManager.startInStandbyState(ServerServicesManager.java:166)
at weblogic.t3.srvr.T3Srvr.initializeStandby(T3Srvr.java:749)
at weblogic.t3.srvr.T3Srvr.startup(T3Srvr.java:488)
at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:446)
at weblogic.Server.main(Server.java:67)
>
<Mar 15, 2010 3:33:30 PM ART> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
<Mar 15, 2010 3:33:30 PM ART> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down>
<Mar 15, 2010 3:33:30 PM ART> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
<Mar 15, 2010 3:33:30 PM> <FINEST> <NodeManager> <Waiting for the process to die: 31144>
<Mar 15, 2010 3:33:30 PM> <INFO> <NodeManager> <Server failed during startup so will not be restarted>
<Mar 15, 2010 3:33:30 PM> <FINEST> <NodeManager> <runMonitor returned, setting finished=true and notifying waiters> The commands to start the servers and each result are: $wlst.sh
>startNodeManager()
Successfully started
>nmConnect(......)
Successfully connected
>nmStart('AdminServer')
Successfully started
>nmStart('soa_server1')
Given error... When I start the servers using the scripts it works correctly: $nohup $WL_HOME/server/bin/startNodeManager.sh > nodemanager.out &
$nohup $MW_HOME/user_projects/domains/soa_domain/startWebLogic.sh > adminserver.out &
$nohup $MW_HOME/user_projects/domains/soa_domain/bin/startManagedServer.sh soa_server > soa_server.out & Do you have any clue of what is happening? If you need more info, just ask for it. thanks in advance | Yes, globbing expansion is alphabetical. From the Bash man page: Pathname Expansion After word splitting, unless the -f option has been set, bash scans
each word for the characters * , ? , and [ . If one of these characters
appears, then the word is regarded as a pattern, and replaced with an
alphabetically sorted list of file names matching the pattern. | {
"source": [
"https://serverfault.com/questions/122741",
"https://serverfault.com",
"https://serverfault.com/users/37642/"
]
} |
122,824 | find has good support for finding files that were modified less than X days ago, but how can I use find to locate all files modified before a certain date? I can't find anything in the find man page to do this, only to compare against another file's time or to check for differences between created time and now. Is making a file with the desired time and comparing against that the only way to do this? | No, you can use a date/time string. From man find :
reference argument is normally the name of a file (and one of
its timestamps is used for the comparison) but it may also be a
string describing an absolute time. X and Y are placeholders
for other letters, and these letters select which time belonging
to how reference is used for the comparison. a The access time of the file reference
B The birth time of the file reference
c The inode status change time of reference
m The modification time of the file reference
t reference is interpreted directly as a time Example: find -newermt "mar 03, 2010" -ls
find -newermt yesterday -ls
find -newermt "mar 03, 2010 09:00" -not -newermt "mar 11, 2010" -ls | {
"source": [
"https://serverfault.com/questions/122824",
"https://serverfault.com",
"https://serverfault.com/users/11495/"
]
} |
123,419 | I am writing a RHEL kickstart script, and in my %post, I need to install a JRE. Basically, the current setup involves me needing to manually go in after first boot and set the newly installed JRE as the default using the alternatives --config command. Is there a way for me to pass arguments to alternatives so I don't have to manually pick the correct JRE? | Does your version have --set ? --set name path Set the program path as alternative for name. This is equivalent to --config but is non-interactive and thus scriptable. | {
"source": [
"https://serverfault.com/questions/123419",
"https://serverfault.com",
"https://serverfault.com/users/35677/"
]
} |
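A hedged %post sketch using --set as described above (the JRE path and priority are examples only - point them at wherever your JRE package actually installs):
alternatives --install /usr/bin/java java /usr/java/jre1.6.0_18/bin/java 20000
alternatives --set java /usr/java/jre1.6.0_18/bin/java
alternatives --display java   # verify which alternative is now active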
123,553 | I have my network set up like this: In words: I have a machine (Calcium, running Arch Linux) that has two network interfaces. Interface eth0 is hooked up to a router, and is gigabit. Interface eth1 is hooked up directly to the university network over 10Megabit. The router's uplink is hooked up to the university network as well, and it is also 10Megabit. Currently (I believe) all traffic on Calcium is going through eth0, through the router, regardless of whether it is internal or external. (How can I confirm this?) Ideally, traffic that is destined for the internal network (192.168.10.0/24) would travel over eth0 to the router, and wherever it is going. ALL other traffic should go over eth1. | Here's the complete answer, in case it helps others: To make packets with destinations 192.168.10.* use eth0, and all other packets use eth1: 1) View your current routing table ip route list One entry will be something like "default via 192.168.1.1" where 192.168.1.1 is your router (a.k.a. gateway) ip address. Remember the gateways for eth0 and eth1, as we'll need them later. 2) Delete the default route(s). (Warning: this will kick you offline.) ip route del default 3) Add a new default route (this will bring you back online). Replace 192.168.1.1, below, with your gateway ip address from above. ip route add default via 192.168.1.1 dev eth1 4) Add a specific route that will be served by eth0. More-specific routes automatically take precedence over less-specific ones. ip route add 192.168.10.0/24 via 192.168.1.1 dev eth0 Finally, you can ask Linux which interface will be used to send a packet to a specific ip address: ip route get 8.8.8.8 If the configuration worked, packets to 8.8.8.8 (Google's server) will use eth1. Packets to any ip on your local network: ip route get 192.168.10.7 will use eth0. | {
"source": [
"https://serverfault.com/questions/123553",
"https://serverfault.com",
"https://serverfault.com/users/11785/"
]
} |
123,726 | What's the difference between a Layer 2 & Layer 3 switch? I've always wondered and never needed to know until now. | I will complete Zoredache's answer. A L2 switch does switching only. This means that it uses MAC addresses to switch the packets from a port to the destination port (and only the destination port). It therefore maintains a MAC address table so that it can remember which ports have which MAC address associated. A L3 switch also does switching exactly like a L2 switch. The L3 means that it has an identity from the L3 layer. Practically this means that a L3 switch is capable of having IP addresses and doing routing. For intra-VLAN communication, it uses the MAC address table. For extra-VLAN communication, it uses the IP routing table. This is simple but you could say "Hey but my Cisco 2960 is a L2 switch and it has a VLAN interface with an IP !". You are perfectly right but that VLAN interface cannot be used for IP routing since the switch does not maintain an IP routing table. | {
"source": [
"https://serverfault.com/questions/123726",
"https://serverfault.com",
"https://serverfault.com/users/3256/"
]
} |
123,729 | Problem when I manually set the HTTP Status of my response stream to, say, 404 or 503 , IIS renders up the stock IIS content/view, instead of my custom view. When I do this with the web development server (AKA. Cassini ), it works correctly (that is, my content is displayed and the response.statuscode == my entered data). Is there any way I can override this behaviour? How To Replicate Make a default ASP.NET MVC1 web application. Add the following route public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
"Default",
"{*catchall}",
new { controller = "Home", action = "Index" }
);
} Now replace the the HomeController's Index method with... [HandleError]
public class HomeController : Controller
{
public ActionResult Index()
{
Response.StatusCode = 404;
return View();
}
} | Ok - found the answer. As I expected, IIS is hijacking my non 200 responses. Not sure (ie. I'm not sure if this is the default behaviour OR it's because of a setting one of the team members updated in the machine config, etc...). Anyways, the key here is tell IIS to not handle any non-200 status result resources. How? Config entry in the web.config. <system.webServer>
<httpErrors errorMode="DetailedLocalOnly" existingResponse="PassThrough"/>
.... snipped other IIS relevant elements ...
</system.webServer> Now, the key here is existingResponse="PassThrough" . That bad boy tells IIS to leave my resources alone if the HTTP status code != 200. Want more info? Sure: Read More about this Element on the Official IIS Website . | {
"source": [
"https://serverfault.com/questions/123729",
"https://serverfault.com",
"https://serverfault.com/users/58/"
]
} |
123,761 | I'm using munin as a tool for monitoring my servers. On some of the graphs, the units are marked with a ' m '. For instance, my apache accesses graph is labeled 100m, 200m, 300m, along the y-axis. What does the ' m ' mean? I understand ' M ' (caps) is mega as in megabytes, the ' k ' is kilo, the ' G ' is giga, but what about ' m '? At first I thought it was million, but there's no way apache is serving 100 million accesses even per decade. | The 'm' stands for milli , meaning 10^(-3) or 1/1000th of the unit. | {
"source": [
"https://serverfault.com/questions/123761",
"https://serverfault.com",
"https://serverfault.com/users/19478/"
]
} |
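For example, assuming the graph plots accesses per second (munin's usual unit for that plugin), a reading of 300m means 0.3 accesses per second - roughly 1,080 per hour, or about 26,000 per day.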
124,153 | I'm a newbie of server administration and I'm looking for a powerful hosting service to host my new website. This website is basically a back-end of an mobile online game, and it will: handle up to 10 million of HTTPS request and mySQL queries a day store up to 2000 GB file on the hard disk transfer probably 5000 GB data in and out per month it runs on PHP and mySQL have 10 million records in mySQL database, for each record there are 5-10 fields, around 100 bytes each I really don't know what kind of a server I need to handle these requirements, my question is: What CPU/RAM do I need for a dedicated server or VPS? What hosting companies are able to offer this kind of dedicated server or VPS? What about cloud computing? I've researched Amazon EC2 but it seems complicated to me. And I've contacted Rackspace but strangely they said Cloudsites is not suitable for my requirements. I wonder if there is other cloud hosting company. Any other alternative method? | A cheap desktop? Let's get into the math. 10 million requests. That breaks down to 416667 requests per hour. That breaks down to 6944 requests per minute. That breaks down to 116 requests per second. Double that (peak load) and we talk of a load a cheap quad core desktop can handle IF the queries are simple enough, and you don't really say how complex they are. 5000 GB per month is trivial - seriously, same math applies. That breaks down to 208GB / day That breaks down to 8GB / hour That breaks down to 148MB / minute That breaks down to 2,5MB / second, 25Mbit. Double for peak - 50Mbit, trivial for any hosting center. Will cost you, though. Store 2000 GB on the hard disc.
That is 2x2000 GB hard discs in a RAID?
Unless: it is for the database, an has lots of complex IO, then it is anything between some dozen discs and a LOT of 73GB 15.000RPM SAS discs in a RAID 10 (around 60 discs) to get the I/O needed - this question is not answerable without a LOT more info on data access patterns. Runs PHP and MySQL - My mobile phone can do that ;) The question is how complex the application is. MySQL MAY or MAY NOT be an acceptable solution here, BTW l. - that would require more testing. There is a reason some people still use other larger commercial databases. What CPU/Ram do I need for a Dedicated Server or VPS? One would say that depends on logic (how much calculations in the PHP part, smartness or lack of programmers and a lot of other questions. Seriously, this is a non-trivial setup. Get some specialists look into it. Basically you need to get down and get your homework done. A lot of the questions are not answerable in this form. Especially because you don't seem to care about your data... Backups? No contingency plan? I mean, servers die - so you are OK with the site being down for days while replacement is configured? | {
"source": [
"https://serverfault.com/questions/124153",
"https://serverfault.com",
"https://serverfault.com/users/38180/"
]
} |
124,156 | We are using physical disk on two of Guest operating systems. Is this a know issue? Do we need to have DPM 2010? "One or more physical disks are attached to virtual machine 'Myserver'. Back up programs that use the Hyper-V VSS writer cannot back up volumes that are attached to virtual machines as physical disks. To avoid potential data loss, use another method to back up the data on the physical disks. If you restore the data on this virtual machine, make sure to check the data of the physical disk for integrity. (Virtual machine ID 8EF3C0CB-967D-4D67-B4D8-7B782C7AC07C)" | A cheap desktop? Let's get into the math. 10 million requests. That breaks down to 416667 requests per hour. That breaks down to 6944 requests per minute. That breaks down to 116 requests per second. Double that (peak load) and we talk of a load a cheap quad core desktop can handle IF the queries are simple enough, and you don't really say how complex they are. 5000 GB per month is trivial - seriously, same math applies. That breaks down to 208GB / day That breaks down to 8GB / hour That breaks down to 148MB / minute That breaks down to 2,5MB / second, 25Mbit. Double for peak - 50Mbit, trivial for any hosting center. Will cost you, though. Store 2000 GB on the hard disc.
That is 2x2000 GB hard discs in a RAID?
Unless: it is for the database, an has lots of complex IO, then it is anything between some dozen discs and a LOT of 73GB 15.000RPM SAS discs in a RAID 10 (around 60 discs) to get the I/O needed - this question is not answerable without a LOT more info on data access patterns. Runs PHP and MySQL - My mobile phone can do that ;) The question is how complex the application is. MySQL MAY or MAY NOT be an acceptable solution here, BTW l. - that would require more testing. There is a reason some people still use other larger commercial databases. What CPU/Ram do I need for a Dedicated Server or VPS? One would say that depends on logic (how much calculations in the PHP part, smartness or lack of programmers and a lot of other questions. Seriously, this is a non-trivial setup. Get some specialists look into it. Basically you need to get down and get your homework done. A lot of the questions are not answerable in this form. Especially because you don't seem to care about your data... Backups? No contingency plan? I mean, servers die - so you are OK with the site being down for days while replacement is configured? | {
"source": [
"https://serverfault.com/questions/124156",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
]
} |
124,274 | I'm not sure if this is the right way to ask this question, but here's basically what i'd like to do: 1.) Push a changeset to a site in IIS. 2.) Don't interrupt the users. 3.) Be able to roll back effortlessly. So, there are a few things that I know have to happen: 1.) Out of Proc session - handled 2.) Out of Proc cache - handled So the questions that remain: 1.) How do i keep from interrupting the users? If i just upload the files to bin, the app recycles and takes 10+ seconds to come back online 2.) How do i roll back effortlessly? I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but Im not sure of the mechanics. Any ideas? | Here is how I would approach this problem - keep in mind I haven't done this before, it is just concepts that I tested out a little bit in my dev environment. You should be able to setup a pretty robust framework using this and some scripting in your language of choice. Basically we are going to setup a ghetto load balancing environment and use that to switch between the new site and the old site. To get it setup, you are going to need: IIS Application Request Routing (ARR) module IIS Web Deployment tool (msdeploy) 3 different IIS websites (with 3 different IP addresses - just using ports or host headers might not work here) Install ARR to begin with. Setup the 3 websites in IIS: Website 1 will be the site your users actually connect to, lets say http://192.168.1.1/ . This is also the ARR site. Just setup an empty directory for this to point to, and put it in its own app pool. Setup the app pool to not timeout according to these instructions . Website 2 and 3 will be the sites that actually host your content. These need to be on their own IPs and due to how ARR works, on a different port than website 1. Lets say they are http://192.168.1.2:8080 and http://192.168.1.3:8080 . They should also be in their own app pools, and point to different directories on the file system (but both directories have the same content typically) After installing ARR, there will be a new category in IIS Manager called "Server Farms" - right click that and create a new farm. give it a name that is meaningful to you add Webserver 2 and Webserver 3 as the servers - make sure to click the "advanced settings" button, open up the "applicationRequestRouting" category and change the httpPort to 8080 for each server Finish the wizard - you will be asked if you want to create URL Rewrite rules - click Yes You now have a server farm - to finish the configuration, go to the farm and click the Proxy configuration button - turn on "reverse rewrite host in response headers" and apply the changes In IIS Manager, go to the root level server category and click the URL Rewrite button, there will be a rule that was created for your farm double click the rule to get to the settings open the Conditions box add a new condition for {SERVER_PORT} does not match 8080 apply the changes At this point you have the basics of what we need to accomplish your request. If you go to http://192.168.1.1/ you will get your website from either Website 1 or Website 2, but it will be completely seamless that there are other sites. 
Now what you can do when you want to deploy a new version of your application is: 1) drainstop one of the servers in your farm (in the server farm tools, go to "Monitoring and Management", choose a server and choose "Make server unavailable gracefully"); 2) deploy your new version of the site to the system that is offline; 3) warm up the site that is offline using its alternate IP/port; 4) make the site available to the farm again; 5) repeat the process for the other server. The Web Deployment tool comes into play when you talk about wanting to script all of this. It makes it super easy to create a package for your application and deploy it from the command line. You can also then roll back that package easily if there are problems. ARR is also scriptable using the Microsoft.Web.Administration DLLs. One other thing - if you are actually on Windows 2008 R2 (which is IIS 7.5), take a look at the Application Warmup module - it should make the warming-up portion of this easier on you as well.
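A minimal msdeploy sketch of the package/deploy/rollback part - the package paths and site names are assumptions for illustration, and drainstopping the farm member is still done in IIS Manager or via the Microsoft.Web.Administration API as described above:

rem snapshot the currently offline content site so you can roll back later
msdeploy -verb:sync -source:contentPath="Website2" -dest:package=C:\deploy\website2_backup.zip
rem push the new build (an msdeploy package produced by your build process) onto the offline site
msdeploy -verb:sync -source:package=C:\deploy\site_v2.zip -dest:contentPath="Website2"
rem warm it up on its private binding (http://192.168.1.2:8080/) before putting it back in the farm
rem rollback, if needed, is just syncing the backup package back
msdeploy -verb:sync -source:package=C:\deploy\website2_backup.zip -dest:contentPath="Website2"
 | {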
"source": [
"https://serverfault.com/questions/124274",
"https://serverfault.com",
"https://serverfault.com/users/6436/"
]
} |
124,517 | It's from this answer: https://stackoverflow.com/questions/2482411/is-this-pdo-bug-fixed-now/2482424#2482424 When the host is "localhost", MySQL Unix clients use a Unix socket, AKA Unix Domain Socket, rather than a TCP/IP socket for the connection, thus the TCP port doesn't matter. | A UNIX socket, AKA Unix Domain Socket, is an inter-process communication mechanism that allows bidirectional data exchange between processes running on the same machine. IP sockets (especially TCP/IP sockets) are a mechanism allowing communication between processes over the network. In some cases, you can use TCP/IP sockets to talk with processes running on the same computer (by using the loopback interface). UNIX domain sockets know that they're executing on the same system, so they can avoid some checks and operations (like routing), which makes them faster and lighter than IP sockets. So if you plan to communicate with processes on the same host, this is a better option than IP sockets. Edit: As per Nils Toedtmann's comment: UNIX domain sockets are subject to file system permissions, while TCP sockets can be controlled only on the packet filter level.
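A quick way to see the difference from a shell, assuming a local MySQL server and valid credentials (the user name is a placeholder):

# "localhost" makes the client use the Unix domain socket, so the port is ignored;
# the "status" output reports "Localhost via UNIX socket"
mysql -u someuser -p -h localhost -e "status"
# using the loopback IP (or --protocol=TCP) forces TCP/IP, and then the port matters;
# the "status" output reports "127.0.0.1 via TCP/IP"
mysql -u someuser -p -h 127.0.0.1 -P 3306 -e "status"

The same distinction applies to PDO DSNs: host=localhost goes through the socket, host=127.0.0.1 goes through TCP.
 | {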
"source": [
"https://serverfault.com/questions/124517",
"https://serverfault.com",
"https://serverfault.com/users/38199/"
]
} |
124,659 | I am running Ubuntu, and would like to open a file whose file name starts with "-" (minus). When I try to open the file with pico or vim, the command thinks that the "-" sign is an option for the command. I tried enclosing the file name with quotes ('), but I still get the same error. I tried with bash and zsh, but still the same error. | In cases where command -- -file does not work (not every program uses the same option-parsing routines), command ./-file works everywhere.
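A few concrete variants, using a hypothetical file named -notes.txt:

# "--" tells most GNU-style tools that everything after it is an operand, not an option
vim -- -notes.txt
rm -- -notes.txt
# prefixing a path works with every program, because the argument no longer starts with "-"
vim ./-notes.txt
cat ./-notes.txt
cat /home/user/-notes.txt
 | {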
"source": [
"https://serverfault.com/questions/124659",
"https://serverfault.com",
"https://serverfault.com/users/28583/"
]
} |
124,800 | Updated Summary: The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no-one should be logging in as "root"), we need to fix this. Only two entities need access: 1) PHP/Perl/Ruby/Python all need access to the folders and files since they create many of them (i.e. /uploads/ ). These scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP). 2) The developers. How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there you would think that there would be more information on this topic. I know that 777 is full read/write/execute permission for owner/group/other. So that doesn't seem correct, as it gives random users full permissions. What permissions need to be used on /var/www so that: source control like git or svn, users in a group like "websites" (or even added to "www-data"), servers like apache or lighttpd, and PHP/Perl/Ruby can all read, create, and run files (and directories) there? If I'm correct, Ruby and PHP scripts are not "executed" directly but passed to an interpreter, so there is no need for execute permission on files in /var/www ...? Therefore, it seems like the correct permission would be chmod -R 1660, which would: make all files shareable by these four entities; make all files non-executable by mistake; block everyone else from the directory entirely; set the permission mode to "sticky" for all future files. Is this correct? Update 1: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be. Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and subfolders many levels deep. They also create and remove files that the other 3 entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access. Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host. Update 4: This article by slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them. Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person who has the best method of securing and managing the www directory wins. | After more research it seems like another (possibly better) way to answer this would be to set up the www folder like so:
1. sudo usermod -a -G developer user1 (add each user to the developer group)
2. sudo chgrp -R developer /var/www/site.com/ so that developers can work in there
3. sudo chmod -R 2774 /var/www/site.com/ so that only developers can create/edit files (other/world can read)
4. sudo chgrp -R www-data /var/www/site.com/uploads so that www-data (apache/nginx) can create uploads.
Since git runs as whatever user is calling it, then as long as the user is in the "developer" group they should be able to create folders, edit PHP files, and manage the git repository. Note: in step (3), the '2' in 2774 means 'set Group ID' for the directory. This causes new files and subdirectories created within it to inherit the group ID of the parent directory (instead of the primary group of the user). Reference: http://en.wikipedia.org/wiki/Setuid#setuid_and_setgid_on_directories
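If you want directories and plain files to end up with different modes (directories need the execute bit to be traversable, data files do not), one follow-up sketch is below - the site path and groups are the same assumptions used in the steps above:

# directories: setgid bit plus the group-writable mode from step (3)
find /var/www/site.com -type d -exec chmod 2774 {} \;
# files: read/write for owner and group, read-only for others, no execute bit
find /var/www/site.com -type f -exec chmod 664 {} \;
# verify the setgid bit and group ownership took effect
ls -ld /var/www/site.com /var/www/site.com/uploads
 | {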
"source": [
"https://serverfault.com/questions/124800",
"https://serverfault.com",
"https://serverfault.com/users/32759/"
]
} |
124,952 | I am looking for a tool to test a website from a Linux command line. From the output, I need to know the HTTP response (status codes) but also benchmark the time it takes to download the different elements of the site. Thank you in advance. | You can try wget with the -p option: wget -p http://site.com . It will tell you how long it takes to download each element and the return codes for each request.
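Two hedged variations on that, in case you also want the raw status codes and a per-request timing breakdown (site.com and the log path are just placeholders):

# wget: fetch the page plus all its requisites, logging server responses and timings
wget -p --server-response -o /tmp/site-test.log http://site.com
grep -E "HTTP/|saved" /tmp/site-test.log
# curl: status code and timing breakdown for a single URL
curl -o /dev/null -s -w "HTTP %{http_code}  dns %{time_namelookup}s  connect %{time_connect}s  total %{time_total}s\n" http://site.com
 | {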
"source": [
"https://serverfault.com/questions/124952",
"https://serverfault.com",
"https://serverfault.com/users/36288/"
]
} |