149,355
So far I've been using vim */** which seems to open all files in subdirectories but not those in the current directory, and vim * which opens all files in the current directory. But how do I open all files in the current directory and all subdirectories?
With zsh:

    vim ./**/*(.)

Other shells:

    find . -name '.?*' -prune -o -type f -exec vim {} +

to open only the (non-hidden) regular files (not directories, symlinks, pipes, devices, doors, sockets...) at any level of subdirectories.

    vim ./**/*(D-.)

Other shells, with GNU find:

    find . -xtype f -exec vim {} +

to also open hidden files (traversing hidden directories) and symlinks to regular files. And:

    vim ./***/*(D-.)

other shells:

    find -L . -type f -exec vim {} +

to also traverse symlinks when looking into subdirectories. If you only want one level of subdirectories:

    vim ./* ./*/*

Note that it's a good habit to prefix your globs with ./ in case some of the file names start with - or +. (Of course, the find variants also work in zsh. Note that they may run several instances of vim if the list of files is big, and at least with GNU find, they will fail to skip hidden files/dirs whose names contain sequences of bytes that don't form valid characters in your locale.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
149,358
I am implementing a backup scheme using rsync and hardlinks. I know I can use link-dest with rsync to do the hardlinks, but I saw mention of using "cp -l" before "link-dest" was implemented in rsync. Another method of hardlinking I know of is "ln". So my question is, out of curiosity: is there a difference in making hardlinks using "cp -l" as compared to using "ln"?
The result of both is the same, in that a hard link is created to the original file. The difference is in the intended usage and therefore in the options available to each command. For example, cp can use recursion whereas ln cannot:

    cp -lr <src> <target>

will create hard links in <target> to all files in <src> (it creates new directories, not links). The resulting directory tree structure under <target> will look identical to the one under <src>. It differs from cp -r <src> <target> in that the latter copies each file and folder and gives each a new inode, whereas the former just makes hard links to files and therefore simply increases their link count. When used to copy a single file, as in your example, the results are identical.
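A quick way to see this on your own system is to compare inode numbers, which ls -i prints: a hard link shares the inode of the original file. A minimal demonstration (the file names here are arbitrary):

    $ touch original
    $ cp -l original copy_via_cp
    $ ln original copy_via_ln
    $ ls -li original copy_via_cp copy_via_ln

All three directory entries show the same inode number and a link count of 3.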
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77711/" ] }
149,359
I wish to install OpenVPN on OpenBSD 5.5 using the OpenVPN source tarball. According to the instructions here, I have to install lzo and add CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" directives to "configure", since gcc will not find them otherwise. I have googled extensively for a guide on how to do the above on OpenBSD but there is none. This is what I plan to do:

1. Untar the source tarball to a freshly created directory
2. Issue the command ./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
3. Issue the command make
4. Issue the command make install

Which of the following syntaxes is correct?

    ./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"

or

    ./configure --CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"

or

    ./configure --CFLAGS="-I/usr/local/include" --LDFLAGS="-L/usr/local/lib"
The correct way is:

    ./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"

but this may not work with all configure scripts. It's probably better to set environment variables such as CPATH and LIBRARY_PATH (see the gcc man page). An example:

    export CPATH=/usr/local/include
    export LIBRARY_PATH=/usr/local/lib
    export LD_LIBRARY_PATH=/usr/local/lib

in your .profile, for instance. LD_LIBRARY_PATH can be needed in the case of shared libraries if a run path is not used (this depends on the OS, the build tools and the options that are used, but it shouldn't hurt).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/149359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66229/" ] }
149,377
I am trying to get meld 3.11 to work on Ubuntu 14.04. I tried following the method below:

    cd ~
    git clone https://git.gnome.org/browse/meld
    cd meld
    sudo ln -s ~/meld/bin/meld /usr/bin/meld

But when I run meld in the terminal I get the following error:

    Traceback (most recent call last):
      File "/usr/bin/meld", line 223, in <module>
        setup_settings()
      File "/usr/bin/meld", line 197, in setup_settings
        cwd=melddir)
      File "/usr/lib/python2.7/subprocess.py", line 522, in call
        return Popen(*popenargs, **kwargs).wait()
      File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
        errread, errwrite)
      File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory

I am not sure if it is a dependency issue or a Python path problem.
    # Make a clean working directory
    mkdir -p work/crap
    # Get in to that directory
    cd work/crap
    # Clone git head
    git clone https://git.gnome.org/browse/meld
    # Get in to that project directory
    cd meld
    # Install dependencies
    sudo apt-get install intltool itstool gir1.2-gtksource-3.0 libxml2-utils
    # Install meld
    sudo python setup.py install

If you want to work on the code itself without re-installing, I typically do that by installing into a venv and opening the installed-to folder of the venv in an IDE. After those steps, running meld in a terminal starts the application.

Note that the current version of meld requires GTK+ 3.14, which is not available on Ubuntu 14.04 (you get a "Meld requires GTK+ 3.14 or higher." error). So for Ubuntu 14.04 you need to check out the last version that does not need GTK+ 3.14. That is the meld 3.14.* branch (currently 3.14.3), so check out the branch using git checkout meld-3-14.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56385/" ] }
149,419
I am running my below shell script on a machine on which a C++ application server is running on port 8080, and in the shell script I execute a URL and store the response coming from that URL in the DATA variable. But if the app server is down, it will not be able to execute the URL; it will print out Retrying Again, sleep for 30 seconds, and then execute the same URL again.

    #!/bin/bash
    HOSTNAME=$hostname
    DATA=""
    RETRY=15
    echo $HOSTNAME
    while [ $RETRY -gt 0 ]
    do
        DATA=$(wget -O - -q -t 1 http://$HOSTNAME:8080/beat)
        if [ $? -eq 0 ]
        then
            break
        else
            echo "Retrying Again" >&2
            # restart the server
            let RETRY-=1
            sleep 30
        fi
    done
    echo "Server is UP"

And here HOSTNAME is the local hostname of the server on which I am running the above shell script.

Problem statement: Now what I am trying to do is, if the server is down, it will print out Retrying Again; after that I want to check whether port 8080 is open on $HOSTNAME or not. If not, that means the server is down, so I want to restart the server by executing the command below and then sleep for 30 seconds as shown above in the shell script.

    /opt/app/test/start_stop.sh start

Is this possible to do in my above shell script? I am running this shell script on Ubuntu 12.04.
The program lsof allows you to check which processes are using which resources, like files or ports. To show which processes are listening on port 8080:

    lsof -Pi :8080 -sTCP:LISTEN

In your case, you want to test whether a process is listening on 8080; the return value of this command tells you just that. It also prints the pid of the process.

    lsof -Pi :8080 -sTCP:LISTEN -t

If you need just the test, with no output, redirect it to /dev/null:

    if lsof -Pi :8080 -sTCP:LISTEN -t >/dev/null ; then
        echo "running"
    else
        echo "not running"
    fi

If you use multiple host names with multiple IP addresses locally, specify the hostname too, like:

    lsof -Pi @someLocalName:8080 -sTCP:LISTEN
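Tying this back to the script in the question, the check can sit in the retry loop's else branch; a minimal sketch, reusing the start_stop.sh path given in the question:

    else
        echo "Retrying Again" >&2
        # restart the server only if nothing is listening on 8080
        if ! lsof -Pi :8080 -sTCP:LISTEN -t >/dev/null ; then
            /opt/app/test/start_stop.sh start
        fi
        let RETRY-=1
        sleep 30
    fi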
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/149419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64455/" ] }
149,451
How can I install a new version of R in my own directory, e.g., /local/data/project/behi .
The easiest way to do this is to install R from source:

    $ wget http://cran.rstudio.com/src/base/R-3/R-3.4.1.tar.gz
    $ tar xvf R-3.4.1.tar.gz
    $ cd R-3.4.1
    $ ./configure --prefix=$HOME/R
    $ make && make install

The second-to-last step is the critical one. It configures R to be installed into a subdirectory of your own home directory. To run it on Linux, macOS and similar systems, add $HOME/R/bin to your PATH. Then, shell commands like R and Rscript will work. On macOS, you have another alternative: build R.app and install it into your user's private Applications folder. You need to have Xcode installed to do this.

You might consider giving --prefix=$HOME instead. That installs R at the top level of your home directory, so that the R and Rscript binaries end up in $HOME/bin, which is likely already in your user's PATH. The downside is that it makes later uninstallation harder, since R would be intermingled among your other $HOME contents. (If this is the first thing you've installed to $HOME/bin, you might have to log out and back in to get this in your PATH, since it's often added conditionally only if $HOME/bin exists at login time.)

This general pattern applies to a large amount of Unix software you can install from source code. If the software has a configure script, it probably understands the --prefix option, and if not, there is usually some alternative with the same effect. These features are common for a number of reasons. In decreasing order of likelihood, in my experience:

- The safe default (/usr/local) is not the right $prefix in all situations. Circumstances might dictate something else such as /usr, /opt/$PKGNAME, etc.
- Binary package building systems (RPM, DEB, PKG, Cygport...) typically build and install the package into a special staging directory, then pack that up in such a way that it expands into the desired installation location.
- Your case, where you can't get root to install the software into a typical location, so you install into $HOME instead.
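Since the question asks for a specific directory rather than $HOME, the same recipe applies with that path as the prefix. A sketch, assuming /local/data/project/behi exists and is writable by your user:

    ./configure --prefix=/local/data/project/behi
    make && make install
    # add to ~/.profile to make the change permanent
    export PATH=/local/data/project/behi/bin:$PATH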
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/149451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80343/" ] }
149,452
I'm trying to find my Java location within my Linux system and got this:

    [980@b449 ~]$ which java
    /usr/bin/java
    [980@b449 ~]$ readlink -f $(which java)
    /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java

What is the difference between the two commands?
Which two commands? /usr/bin/java is a soft (symbolic) link to /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java, so there is no difference, as they are the same file. If you type something like:

    ls -l /usr/bin/java

you might get a result such as:

    lrwxrwxrwx. 1 root root 22 Aug 5 17:01 /usr/bin/java -> /etc/alternatives/java

which would mean you can have several Java versions on your system and use alternatives to change the default one. Otherwise you can simply add and remove links to change the default one manually. To create symbolic links, use the command:

    ln -s /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java /usr/bin/java

or, in general form:

    ln -s <original file> <link to file>

And use rm to delete the link as you would delete any other file.
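To see the distinction the questioner is after: which only reports where the java command is found on $PATH, while readlink -f follows every symlink until it reaches the real file. On an alternatives-managed system you can walk the chain one hop at a time; a sketch (the exact paths will differ per machine):

    $ readlink /usr/bin/java        # first hop only
    /etc/alternatives/java
    $ readlink -f /usr/bin/java     # full resolution to the real binary
    /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java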
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/149452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65920/" ] }
149,474
I need to start a cronjob every day, but an hour later each day. What I have so far works for the most part, except for 1 day of the year: 0 0 * * * sleep $((3600 * (10#$(date +\%j) \% 24))) && /usr/local/bin/myprog When the day of year is 365 the job will start at 5:00, but the next day (not counting a leap year) will have a day of year as 1, so the job will start at 1:00. How can I get rid of this corner case?
My preferred solution would be to start the job every hour but have the script itself check whether it's time to run or not, and exit without doing anything 24 times out of 25.

crontab:

    0 * * * * /usr/local/bin/myprog

at the top of myprog:

    [ 0 -eq $(( $(date +%s) / 3600 % 25 )) ] || exit 0

If you don't want to make any changes to the script itself, you can also put the "time to run" check in the crontab entry, but it makes for a long, unsightly line:

    0 * * * * [ 0 -eq $(( $(date +\%s) / 3600 \% 25 )) ] && /usr/local/bin/myprog
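The check works because $(date +%s) / 3600 is the number of whole hours since the Unix epoch, so the test succeeds exactly once per 25-hour cycle, with no day-of-year discontinuity. You can evaluate the gate by hand to see where you are in the cycle:

    $ echo $(( $(date +%s) / 3600 % 25 ))   # prints 0 only during the hour the job should run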
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/149474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80352/" ] }
149,494
I'm trying to create a tar.gz file using the following command:

    sudo tar -vcfz dvr_rdk_v1.tar.gz dvr_rdk/

It then starts to create files (many files in the folder), but then I get the following error:

    tar: dvr_rdk_v1.tar.gz: Cannot stat: No such file or directory
    tar: Exiting with failure status due to previous errors

I don't see any description of this error; what does it mean?
Remove the - from the vcfz options; tar does not need a hyphen for its options. With a hyphen, the argument for the -f option is z. So the command is in effect trying to archive dvr_rdk_v1.tar.gz and dvr_rdk into an archive called z. Without the hyphen, the semantics of the options changes, so that the next argument on the command line, i.e. your archive's filename, becomes the argument to the f flag. Also check your write permission to the directory from which you are executing the command.
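If you prefer to keep the hyphen, reordering the flags so that f comes last (immediately before the archive name) also works, since f consumes the next argument:

    sudo tar -czvf dvr_rdk_v1.tar.gz dvr_rdk/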
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/149494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79842/" ] }
149,525
Braiam said that Firefox stores the password data for login websites in the ~/.mozilla/firefox/key3.db and ~/.mozilla/firefox/signons.sqlite files. These files can be read with some sqlite editor. I am trying to query for my username and password for a website (e.g. https://sourceforge.net/account/login.php) from Firefox's database. I can't do it through Firefox, because my Firefox GUI is not working, and I am fairly new to, and also interested in learning, using databases to do the job.

What are the different roles of key3.db and signons.sqlite? I searched on the internet; is it correct that I should use sqlite3 to open a database?

    $ sqlite3 key3.db
    SQLite version 3.7.9 2011-11-01 00:52:41
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite> .tables
    Error: file is encrypted or is not a database

I guess the reason for the failure is that, in Firefox, I set up a master password to access the passwords it stores. How should I proceed to query the password for a given website? My OS is Ubuntu; here is the file type of key3.db:

    $ file key3.db
    key3.db: Berkeley DB 1.85 (Hash, version 2, native byte-order)

What shall I read and learn in order to query the password for a given website name? Will reading http://www.sqlite.org/cli.html help?

To garethTheRed: I tried your command. It doesn't return anything useful, however; the output is abysmal:

    $ sqlite3 signons.sqlite
    SQLite version 3.7.9 2011-11-01 00:52:41
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite> .tables
    moz_deleted_logins  moz_disabledHosts   moz_logins
    sqlite> select * from moz_logins;
    ...
    55|https://sourceforge.net||https://sourceforge.net|form_loginname|form_pw|MDIEEPgAAAAAAAAAAAAAAAAAAAEwF\AYIKoZIhvcNAwcECCPrVdOzWamBBAjPs0DI8FrUnQ==|MDoEEPgAAAAAAAAAAAAAAAAAAAEwFAYIKoZIhvcNAwcECCnZved1LRQMBBBV\DtXpOvAp0TQHibFeX3NL|{16e782de-4c65-426f-81dc-ee0361816262}|1|1327675445094|1403706275829|1327675445094|\4
    ...

Does Firefox encrypt passwords regardless of whether there is a master key? If yes, can we decrypt them on the command line (my Firefox CLI may still work)? Alternatively, is it possible for the Chrome browser to read and import the passwords stored by Firefox?
Some guy seems to have glued all the necessary code together here:

    #!/usr/bin/env python
    "Recovers your Firefox or Thunderbird passwords"

    import base64
    from collections import namedtuple
    from ConfigParser import RawConfigParser, NoOptionError
    from ctypes import (Structure, CDLL, byref, cast, string_at,
                        c_void_p, c_uint, c_ubyte, c_char_p)
    from getpass import getpass
    import logging
    from optparse import OptionParser
    import os
    try:
        from sqlite3 import dbapi2 as sqlite
    except ImportError:
        from pysqlite2 import dbapi2 as sqlite
    from subprocess import Popen, CalledProcessError, PIPE
    import sys

    LOGLEVEL_DEFAULT = 'warn'

    log = logging.getLogger()
    PWDECRYPT = 'pwdecrypt'

    SITEFIELDS = ['id', 'hostname', 'httpRealm', 'formSubmitURL',
                  'usernameField', 'passwordField', 'encryptedUsername',
                  'encryptedPassword', 'guid', 'encType',
                  'plain_username', 'plain_password']
    Site = namedtuple('FirefoxSite', SITEFIELDS)
    '''The format of the SQLite database is:
    (id INTEGER PRIMARY KEY,
     hostname TEXT NOT NULL,
     httpRealm TEXT,
     formSubmitURL TEXT,
     usernameField TEXT NOT NULL,
     passwordField TEXT NOT NULL,
     encryptedUsername TEXT NOT NULL,
     encryptedPassword TEXT NOT NULL,
     guid TEXT,
     encType INTEGER);
    '''

    #### These are libnss definitions ####
    class SECItem(Structure):
        _fields_ = [('type', c_uint), ('data', c_void_p), ('len', c_uint)]

    class secuPWData(Structure):
        _fields_ = [('source', c_ubyte), ('data', c_char_p)]

    (PW_NONE, PW_FROMFILE, PW_PLAINTEXT, PW_EXTERNAL) = (0, 1, 2, 3)
    # SECStatus
    (SECWouldBlock, SECFailure, SECSuccess) = (-2, -1, 0)
    #### End of libnss definitions ####

    def get_default_firefox_profile_directory(dir='~/.mozilla/firefox'):
        '''Returns the directory name of the default profile

        If you changed the default dir to something like ~/.thunderbird,
        you would get the Thunderbird default profile directory.'''
        profiles_dir = os.path.expanduser(dir)
        profile_path = None

        cp = RawConfigParser()
        cp.read(os.path.join(profiles_dir, "profiles.ini"))
        for section in cp.sections():
            if not cp.has_option(section, "Path"):
                continue
            if (not profile_path or
                    (cp.has_option(section, "Default") and
                     cp.get(section, "Default").strip() == "1")):
                profile_path = os.path.join(profiles_dir,
                                            cp.get(section, "Path").strip())
        if not profile_path:
            raise RuntimeError("Cannot find default Firefox profile")
        return profile_path

    def get_encrypted_sites(firefox_profile_dir=None):
        'Opens signons.sqlite and yields encryped password data'
        if firefox_profile_dir is None:
            firefox_profile_dir = get_default_firefox_profile_directory()
        password_sqlite = os.path.join(firefox_profile_dir, "signons.sqlite")
        query = '''SELECT id, hostname, httpRealm, formSubmitURL,
                   usernameField, passwordField, encryptedUsername,
                   encryptedPassword, guid, encType,
                   'noplainuser', 'noplainpasswd' FROM moz_logins;'''
        # We don't want to type out all the column from the DB as we have
        ## stored them in the SITEFIELDS already. However, we have two
        ## components extra, the plain usename and password. So we remove
        ## that from the list, because the table doesn't have that column.
        ## And we add two literal SQL strings to make our "Site" data
        ## structure happy
        #queryfields = SITEFIELDS[:-2] + ["'noplainuser'", "'noplainpassword'"]
        #query = '''SELECT %s
        #           FROM moz_logins;''' % ', '.join(queryfields)
        connection = sqlite.connect(password_sqlite)
        try:
            cursor = connection.cursor()
            cursor.execute(query)
            for site in map(Site._make, cursor.fetchall()):
                yield site
        finally:
            connection.close()

    def decrypt(encrypted_string, firefox_profile_directory, password=None):
        '''Opens an external tool to decrypt strings

        This is mostly for historical reasons or if the API changes. It is
        very slow because it needs to call out a lot. It uses the
        "pwdecrypt" tool which you might have packaged. Otherwise, you need
        to build it yourself.'''
        log = logging.getLogger('firefoxpasswd.decrypt')
        execute = [PWDECRYPT, '-d', firefox_profile_directory]
        if password:
            execute.extend(['-p', password])
        process = Popen(execute, stdin=PIPE, stdout=PIPE, stderr=PIPE)
        output, error = process.communicate(encrypted_string)
        log.debug('Sent: %s', encrypted_string)
        log.debug('Got: %s', output)

        NEEDLE = 'Decrypted: "'  # This string is prepended to the decrypted password if found
        output = output.strip()
        if output == encrypted_string:
            log.error('Password was not correct. Please try again without a '
                      'password or with the correct one')

        index = output.index(NEEDLE) + len(NEEDLE)
        password = output[index:-1]  # And we strip the final quotation mark
        return password

    class NativeDecryptor(object):
        'Calls the NSS API to decrypt strings'

        def __init__(self, directory, password=''):
            '''You need to give the profile directory and optionally a
            password. If you don't give a password but one is needed, you
            will be prompted by getpass to provide one.'''
            self.directory = directory
            self.log = logging.getLogger('NativeDecryptor')
            self.log.debug('Trying to work on %s', directory)

            self.libnss = CDLL('libnss3.so')
            if self.libnss.NSS_Init(directory) != 0:
                self.log.error('Could not initialize NSS')

            # Initialize to the empty string, not None, because the password
            # function expects rather an empty string
            self.password = password = password or ''

            slot = self.libnss.PK11_GetInternalKeySlot()
            pw_good = self.libnss.PK11_CheckUserPassword(slot, c_char_p(password))
            while pw_good != SECSuccess:
                msg = 'Password is not good (%d)!' % pw_good
                print >>sys.stderr, msg
                password = getpass('Please enter password: ')
                pw_good = self.libnss.PK11_CheckUserPassword(slot, c_char_p(password))
                #raise RuntimeError(msg)

            # That's it, we're done with passwords, but we leave the old
            # code below in, for nostalgic reasons.
            if password is None:
                pwdata = secuPWData()
                pwdata.source = PW_NONE
                pwdata.data = 0
            else:
                # It's not clear whether this actually works
                pwdata = secuPWData()
                pwdata.source = PW_PLAINTEXT
                pwdata.data = c_char_p(password)
                # It doesn't actually work :-(

            # Now follow some attempts that were not succesful!
            def setpwfunc():
                # One attempt was to use PK11PassworFunc. Didn't work.
                def password_cb(slot, retry, arg):
                    #s = self.libnss.PL_strdup(password)
                    s = self.libnss.PL_strdup("foo")
                    return s
                PK11PasswordFunc = CFUNCTYPE(c_void_p, PRBool, c_void_p)
                c_password_cb = PK11PasswordFunc(password_cb)
                #self.libnss.PK11_SetPasswordFunc(c_password_cb)

            # To be ignored
            def changepw():
                # Another attempt was to use ChangePW. Again, no effect.
                #ret = self.libnss.PK11_ChangePW(slot, pwdata.data, 0);
                ret = self.libnss.PK11_ChangePW(slot, password, 0)
                if ret == SECFailure:
                    raise RuntimeError('Setting password failed! %s' % ret)

            #self.pwdata = pwdata

        def __del__(self):
            self.libnss.NSS_Shutdown()

        def decrypt(self, string, *args):
            'Decrypts a given string'
            libnss = self.libnss
            uname = SECItem()
            dectext = SECItem()
            #pwdata = self.pwdata

            cstring = SECItem()
            cstring.data = cast(c_char_p(base64.b64decode(string)), c_void_p)
            cstring.len = len(base64.b64decode(string))
            #if libnss.PK11SDR_Decrypt(byref(cstring), byref(dectext), byref(pwdata)) == -1:
            self.log.debug('Trying to decrypt %s (error: %s)', string, libnss.PORT_GetError())
            if libnss.PK11SDR_Decrypt(byref(cstring), byref(dectext)) == -1:
                error = libnss.PORT_GetError()
                libnss.PR_ErrorToString.restype = c_char_p
                error_str = libnss.PR_ErrorToString(error)
                raise Exception("%d: %s" % (error, error_str))

            decrypted_data = string_at(dectext.data, dectext.len)
            return decrypted_data

        def encrypted_sites(self):
            'Yields the encryped passwords from the profile'
            sites = get_encrypted_sites(self.directory)
            return sites

        def decrypted_sites(self):
            'Decrypts the encrypted_sites and yields the results'
            sites = self.encrypted_sites()
            for site in sites:
                plain_user = self.decrypt(site.encryptedUsername)
                plain_password = self.decrypt(site.encryptedPassword)
                site = site._replace(plain_username=plain_user,
                                     plain_password=plain_password)
                yield site

    def get_firefox_sites_with_decrypted_passwords(firefox_profile_directory=None, password=None):
        'Old school decryption of passwords using the external tool'
        if not firefox_profile_directory:
            firefox_profile_directory = get_default_firefox_profile_directory()
        #decrypt = NativeDecryptor(firefox_profile_directory).decrypt
        for site in get_encrypted_sites(firefox_profile_directory):
            plain_user = decrypt(site.encryptedUsername, firefox_profile_directory, password)
            plain_password = decrypt(site.encryptedPassword, firefox_profile_directory, password)
            site = site._replace(plain_username=plain_user,
                                 plain_password=plain_password)
            log.debug("Dealing with Site: %r", site)
            log.info("user: %s, passwd: %s", plain_user, plain_password)
            yield site

    def main_decryptor(firefox_profile_directory, password, thunderbird=False):
        'Main function to get Firefox and Thunderbird passwords'
        if not firefox_profile_directory:
            if thunderbird:
                dir = '~/.thunderbird/'
            else:
                dir = '~/.mozilla/firefox'
            firefox_profile_directory = get_default_firefox_profile_directory(dir)

        decryptor = NativeDecryptor(firefox_profile_directory, password)
        for site in decryptor.decrypted_sites():
            print site

    if __name__ == "__main__":
        parser = OptionParser()
        parser.add_option("-d", "--directory", default=None,
                          help="the Firefox profile directory to use")
        parser.add_option("-p", "--password", default=None,
                          help="the master password for the Firefox profile")
        parser.add_option("-l", "--loglevel", default=LOGLEVEL_DEFAULT,
                          help="the level of logging detail [debug, info, warn, critical, error]")
        parser.add_option("-t", "--thunderbird", default=False, action='store_true',
                          help="by default we try to find the Firefox default profile."
                               " But you can as well ask for Thunderbird's default profile."
                               " For a more reliable way, give the directory with -d.")
        parser.add_option("-n", "--native", default=True, action='store_true',
                          help="use the native decryptor, i.e. make Python use "
                               "libnss directly instead of invoking the helper program. "
                               "DEFUNCT! this option will not be checked.")
        parser.add_option("-e", "--external", default=False, action='store_true',
                          help="use an external program `pwdecrypt' to actually "
                               "decrypt the passwords. This calls out a lot and is dead "
                               "slow. You need to use this method if you have a password "
                               "protected database though.")
        options, args = parser.parse_args()

        loglevel = {'debug': logging.DEBUG, 'info': logging.INFO,
                    'warn': logging.WARN, 'critical': logging.CRITICAL,
                    'error': logging.ERROR}.get(options.loglevel, LOGLEVEL_DEFAULT)
        logging.basicConfig(level=loglevel)
        log = logging.getLogger()
        password = options.password

        if not options.external:
            sys.exit(main_decryptor(options.directory, password,
                                    thunderbird=options.thunderbird))
        else:
            for site in get_firefox_sites_with_decrypted_passwords(options.directory, password):
                print site

See the related discussion in the mozilla fora.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
149,543
I want to copy a directory into another directory. For example, cp -r dir1 dir2 copies the contents of dir1 into dir2. I want to copy dir1 itself into dir2 so that if I ls dir2 it will output dir1 and not whatever was inside of dir1.
Just do as you did:

    cp -r dir1 dir2

and you will have dir1 (with its content as well) inside dir2. Try it if you don't believe me ;-). The command that would copy the content of dir1 into dir2 is:

    cp -r dir1/* dir2
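A quick, self-contained way to see both behaviours (using throwaway names in a scratch directory):

    $ mkdir -p demo/dir1 demo/dir2 && touch demo/dir1/file && cd demo
    $ cp -r dir1 dir2
    $ ls dir2          # dir2 now contains dir1 itself
    dir1
    $ cp -r dir1/* dir2
    $ ls dir2          # now the contents of dir1 are copied in alongside it
    dir1  file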
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80391/" ] }
149,589
I have happened upon a command that sometimes works and sometimes does not, even when executed multiple times in rapid succession in a bash shell (I have not tested the behavior in other shells). The problem has been localized to the reading of a variable in the BEGIN block of an awk statement at the end of the pipeline. During some executions, the variable is correctly read in the BEGIN block, and during other executions, the operation fails. Supposing this aberrant behavior can be reproduced by others (and is not a consequence of some problem with my system), can its inconsistency be explained?

Take as input the following file called tmp:

    cat > tmp <<EOF
    a a
    b *
    aa a
    aaa a
    aa a
    a a
    c *
    aaa a
    aaaa a
    d *
    aaa a
    a a
    aaaaa a
    e *
    aaaa a
    aaa a
    f *
    aa a
    a a
    g *
    EOF

On my system, the pipeline

    awk '{if($2!~/\*/) print $1}' tmp | tee >(wc -l | awk '{print $1}' > n.txt) | sort | uniq -c | sort -k 1,1nr | awk 'BEGIN{getline n < "n.txt"}{print $1 "\t" $1/n*100 "\t" $2}'

will either produce the correct output:

    4       28.5714 a
    4       28.5714 aaa
    3       21.4286 aa
    2       14.2857 aaaa
    1       7.14286 aaaaa

or the error message:

    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted

How can a command possibly give different output when run twice in succession, when no random number generation is involved and no change to the environment is made in the interim? To demonstrate how absurd the behavior is, consider the output generated by executing the above pipeline ten times consecutively in a loop:

    for x in {1..10}; do echo "Iteration ${x}"; awk '{if($2!~/\*/) print $1}' tmp | tee >(wc -l | awk '{print $1}' > n.txt) | sort | uniq -c | sort -k 1,1nr | awk 'BEGIN{getline n < "n.txt"}{print $1 "\t" $1/n*100 "\t" $2}'; done
    Iteration 1
    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted
    Iteration 2
    4       28.5714 a
    4       28.5714 aaa
    3       21.4286 aa
    2       14.2857 aaaa
    1       7.14286 aaaaa
    Iteration 3
    4       28.5714 a
    4       28.5714 aaa
    3       21.4286 aa
    2       14.2857 aaaa
    1       7.14286 aaaaa
    Iteration 4
    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted
    Iteration 5
    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted
    Iteration 6
    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted
    Iteration 7
    4       28.5714 a
    4       28.5714 aaa
    3       21.4286 aa
    2       14.2857 aaaa
    1       7.14286 aaaaa
    Iteration 8
    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted
    Iteration 9
    4       28.5714 a
    4       28.5714 aaa
    3       21.4286 aa
    2       14.2857 aaaa
    1       7.14286 aaaaa
    Iteration 10
    awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted

Note: I have also tried closing the file (with awk's close()) after reading the variable, in case the problem relates to the file being left open. However, the inconsistent output remains.
Your redirections have a race condition. This:

    >(wc -l | awk '{print $1}' > n.txt)

runs in parallel with:

    awk 'BEGIN{getline n < "n.txt"}...'

later in the pipeline. Sometimes, n.txt is still empty when the awk program starts running. This is (obliquely) documented in the Bash Reference Manual. In a pipeline:

    The output of each command in the pipeline is connected via a pipe to the input of the next command. That is, each command reads the previous command's output. This connection is performed before any redirections specified by the command.

and then:

    Each command in a pipeline is executed in its own subshell

(emphasis added). All the processes in the pipeline are started, with their input and output connected together, without waiting for any of the earlier programs to finish or even start doing anything. Before that, process substitution with >(...) is:

    performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion.

What that means is that the subprocess running the wc -l | awk ... command starts early on, and the redirection empties n.txt just before that, but the awk process that causes the error is started shortly after. Both of those commands execute in parallel; you'll have several processes going at once here.

The error occurs when awk runs its BEGIN block before the wc command's output has been written into n.txt. In that case, the n variable is empty, and so is zero when used as a number. If the BEGIN runs after the file is filled in, everything works. When that happens depends on the operating system scheduler and which process gets a slot first, which is essentially random from the user's perspective. If the final awk gets to run early, or the wc pipeline gets scheduled a little later, the file will still be empty when awk starts doing its work and the whole thing will break. In all likelihood the processes will run on different cores actually simultaneously, and it comes down to which one gets to the point of contention first. The effect you'll get is probably that the command works more often than not, but sometimes fails with the error you posted.

In general, pipelines are only safe in so far as they're just pipelines: standard output into standard input is fine, but because the processes execute in parallel, it's not reliable to rely on the sequencing of any other communication channels, like files, or of any part of any one process executing before or after any part of another, unless they're locked together by reading standard input.

The workaround here is probably to do all your file writing in advance of needing it: at the end of a line, it's guaranteed that an entire pipeline and all of its redirections have completed before the next command runs. This command will never be reliable, but if you really do need it to work in this sort of a structure, you can insert a delay (sleep) or loop until n.txt is non-empty before running the final awk command, to increase the chances of things working how you want.
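Following the answer's advice of finishing the write before the read starts, one race-free restructuring is to compute the count in a separate, earlier command and hand it to the final awk with -v, skipping the temp file entirely; a sketch:

    n=$(awk '$2 !~ /\*/' tmp | wc -l)
    awk '{if($2!~/\*/) print $1}' tmp | sort | uniq -c | sort -k 1,1nr |
        awk -v n="$n" '{print $1 "\t" $1/n*100 "\t" $2}'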
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14960/" ] }
149,612
I know that I can sort the output of ls by time with the -t option. I know that when I have so many files that they don't fit in a single ls invocation, I can normally use xargs (or find ... -exec ... {} + ) to let ls be called multiple times. How can I combine the two? I have more files than fit on the command-line, and wish to list them sorted by time. find . -type f -exec ls -t {} + doesn't work, because supposing exactly 1000 file names fit on the command-line, and 3000 files are present, this will run ls -t [first 1000 files]; ls -t [second 1000 files]; ls -t [last 1000 files] , where the last 1000 files find sees may well have a modification time before any of the first 1000. It doesn't seem like anything involving xargs or equivalent has any chance whatsoever of working, it seems like that approach is fundamentally flawed, but I cannot find a way that does work.
ls -t on its own will list all files in the current directory with that sorting, without ever needing to list them on the command line at all. If you need the recursion behaviour of find, or to do some other tests on the files, you can have find generate timestamped entries, either through stat or through GNU find's -printf option, and sort them. Something like:

    find . -type f -printf '%T@ %p\0' | sort -zn

-printf '%T@ %p\0' generates null-separated Unix timestamp (%T@) / filename (%p) pairs. sort -z is also a non-standard GNU extension, which uses null-delimited records to be filename-safe. The sort option is supported in most of the BSDs too, but -printf is GNU-only as far as I know. You can cut that output back into filenames only, or any other format you like.
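For the last step, cutting the stream back to bare file names, one possibility (a sketch, assuming bash, whose read -d '' splits on the NULs) is:

    find . -type f -printf '%T@ %p\0' | sort -zn |
        while IFS= read -r -d '' entry; do
            printf '%s\n' "${entry#* }"   # strip the leading timestamp field
        done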
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149612", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56650/" ] }
149,660
I can do df . to get some of the info on the mount that the current directory is in, and I can get all the info I want from mount. However, I get too much info (info about other mounts). I can grep it down, but am wondering if there is a better way. Is there some command mountinfo such that mountinfo . gives the info I want (like df ., but with the info that mount gives)? I am using Debian GNU+Linux.
I think you want something like this:

    findmnt -T .

When using the option -T, --target path, if the path is not a mountpoint file or directory, findmnt checks the path elements in reverse order to get the mountpoint. You can print only certain fields via -o, --output [list]. See findmnt --help for the list of available fields. Alternatively, you could run:

    (until findmnt . ; do cd .. ; done)

The problem you're running into is that all paths are relative to something or other, so you just have to walk the tree. Every time. findmnt is a member of the util-linux package and has been for a few years now. By now, regardless of your distro, it should already be installed on your Linux machine if you also have the mount tool:

    $ man mount | grep findmnt -B1 -m1
    For more robust and customizable output use
    findmnt(8), especially in your scripts.

findmnt will print out all mounts' info without a mount-point argument, and only that for its argument with one. The -D option emulates df. Without -D its output is similar to mount's, but far more configurable. Try findmnt --help and see for yourself. I stick it in a subshell so the current shell's current directory doesn't change. So:

    $ mkdir -p /tmp/1/2/3/4/5/6 && cd $_
    $ (until findmnt . ; do cd .. ; done && findmnt -D .) && pwd

OUTPUT:

    TARGET SOURCE FSTYPE OPTIONS
    /tmp   tmpfs  tmpfs  rw
    SOURCE FSTYPE   SIZE   USED AVAIL USE% TARGET
    tmpfs  tmpfs  11.8G 839.7M   11G   7% /tmp
    /tmp/1/2/3/4/5/6

If you do not have the -D option available to you (it is not in older versions of util-linux), then you need never fear; it is little more than a convenience switch in any case. Notice the column headings it produces for each call: you can include or exclude those for each invocation with the -o utput switch. I can get the same output as -D might provide like:

    $ findmnt /tmp -o SOURCE,FSTYPE,SIZE,USED,AVAIL,USE%,TARGET

OUTPUT:

    SOURCE FSTYPE  SIZE USED AVAIL USE% TARGET
    tmpfs  tmpfs 11.8G 1.1G 10.6G  10% /tmp
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/149660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4778/" ] }
149,665
I have made a user in Mint 16 but want to change its first and last name without deleting it and starting from scratch. I know how to do it with a GUI but I want to know the terminal command alternative. I've checked the man pages of usermod but couldn't figure out how in sh.
You can change it with the -c option; -c sets the comment (GECOS) field, which holds the user's full name:

    usermod -c "YOUR NAME" username

For example:

    sudo usermod -c "Jecht Tyre" jecht
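To confirm the change took effect, you can read the account's passwd entry back; the comment is the fifth colon-separated field. A sketch (the UID, home directory and shell shown are placeholders):

    $ getent passwd jecht
    jecht:x:1001:1001:Jecht Tyre:/home/jecht:/bin/bash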
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80476/" ] }
149,715
I have been reading through the study guides for the LPIC-1.

    echo "This is a sentence. " !#:* !#:1->text3

I'm having trouble understanding how the above line of code repeats the echo command multiple times. I know that it is using a feature of bash's history, but I can't find any documentation on !#:* or !#:1. Could someone explain this for me?
Yes, this is using history. !# is a history event designator that refers to the entire command line typed so far. :* is a word (range) designator that refers to all of the words except the 0th. So, after you have typed echo "This is a sentence. ", !#:* expands to "This is a sentence. ".

And x-y (where x and y are integers) is a word (range) designator that refers to word number x through word number y. If y is omitted (x-), this is interpreted to mean word number x through the second-to-last word. So, after your "entire command line typed so far" stands as

    echo "This is a sentence. " "This is a sentence. "

then !#:1- expands to "This is a sentence. ", because each of the quoted "This is a sentence. " strings counts as one word, and so !#:1- is equivalent to !#:1 (just word number 1). So you end up with

    echo "This is a sentence. " "This is a sentence. " "This is a sentence. " >text3

The fact that the - and the > appear together in the command is just a confusion; they don't interact. And the fact that "This is a sentence." is quoted obscures what is going on; if you said

    echo This is a sentence. !#:* !#:1-

it would expand to

    echo This is a sentence. This is a sentence. !#:1-

and thence to

    echo This is a sentence. This is a sentence. This is a sentence. This is a

(because !#:1- expands to word number 1 through the second-to-last word).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74033/" ] }
149,726
I'm looking to write a script that takes a .txt filename as an argument, reads the file line by line, and passes each line to a command. For example, it runs command --option "LINE 1" , then command --option "LINE 2" , etc. The output of the command is written to another file. How do I go about doing that? I don't know where to start.
Use a while read loop:

    : > another_file   ## Truncate file.
    while IFS= read -r line; do
        command --option "$line" >> another_file
    done < file

Another is to redirect output by block:

    while IFS= read -r line; do
        command --option "$line"
    done < file > another_file

Last is to open the file:

    exec 4> another_file
    while IFS= read -r line; do
        command --option "$line" >&4
        echo xyz   ## Another optional command that sends output to stdout.
    done < file

If one of the commands reads input, it would be a good idea to use another fd for input so the commands won't eat it (here assuming ksh, zsh or bash for -u 3; use <&3 instead portably):

    while IFS= read -ru 3 line; do
        ...
    done 3< file

Finally, to accept arguments, you can do:

    #!/bin/bash
    file=$1
    another_file=$2
    exec 4> "$another_file"
    while IFS= read -ru 3 line; do
        command --option "$line" >&4
    done 3< "$file"

Which one could run as:

    bash script.sh file another_file

Extra idea. With bash, use readarray:

    readarray -t lines < "$file"
    for line in "${lines[@]}"; do
        ...
    done

Note: IFS= can be omitted if you don't mind having line values trimmed of leading and trailing spaces.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/149726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80515/" ] }
149,741
Given a shell process (e.g. sh) and its child process (e.g. cat), how can I simulate the behavior of Ctrl+C using the shell's process ID?

This is what I've tried. Running sh and then cat:

    [user@host ~]$ sh
    sh-4.3$ cat
    test
    test

Sending SIGINT to cat from another terminal:

    [user@host ~]$ kill -SIGINT $PID_OF_CAT

cat received the signal and terminated (as expected). Sending the signal to the parent process does not seem to work. Why is the signal not propagated to cat when sent to its parent process sh? This does not work:

    [user@host ~]$ kill -SIGINT $PID_OF_SH
How Ctrl+C works

The first thing is to understand how Ctrl+C works. When you press Ctrl+C, your terminal emulator sends an ETX character (end-of-text / 0x03). The TTY is configured such that when it receives this character, it sends a SIGINT to the foreground process group of the terminal. This configuration can be viewed by doing stty -a and looking at intr = ^C;. The POSIX specification says that when INTR is received, it should send a SIGINT to the foreground process group of that terminal.

What is the foreground process group?

So, now the question is, how do you determine what the foreground process group is? The foreground process group is simply the group of processes which will receive any signals generated by the keyboard (SIGTSTP, SIGINT, etc). The simplest way to determine the process group ID is to use ps:

    ps ax -O tpgid

The second column will be the process group ID.

How do I send a signal to the process group?

Now that we know what the process group ID is, we need to simulate the POSIX behavior of sending a signal to the entire group. This can be done with kill by putting a - in front of the group ID. For example, if your process group ID is 1234, you would use:

    kill -INT -1234

Simulate Ctrl+C using the terminal number

So the above covers how to simulate Ctrl+C as a manual process. But what if you know the TTY number, and you want to simulate Ctrl+C for that terminal? This becomes very easy. Let's assume $tty is the terminal you want to target (you can get this by running tty | sed 's#^/dev/##' in the terminal):

    kill -INT -$(ps h -t $tty -o tpgid | uniq)

This will send a SIGINT to whatever the foreground process group of $tty is.
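Putting the pieces together for the question as asked, given only $PID_OF_SH: look up the foreground process group on that shell's terminal via the shell's own tpgid field, then signal the group. A sketch (assuming a ps that accepts -o tpgid=, as procps and the BSDs do):

    pgid=$(ps -o tpgid= -p "$PID_OF_SH" | tr -d ' ')
    kill -INT -"$pgid"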
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/149741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80522/" ] }
149,760
I need to delete text from a line before a delimiter. For example: (123434): hello::{apple,orange,mango}. I need to delete the text before the first :, i.e. (123434). Is there any command in Linux to perform this task?
This sed command should do the trick. The following command will overwrite the file: sed -i 's/^[^:]*:/:/' file To just print the output, remove the -i flag. To put the output in a new file, remove the -i flag and redirect the output: sed 's/^[^:]*:/:/' file > new_file
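Applied to the sample line from the question, the substitution removes everything up to and including the first colon and puts a bare colon back in its place:

    $ echo '(123434): hello::{apple,orange,mango}.' | sed 's/^[^:]*:/:/'
    : hello::{apple,orange,mango}.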
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80538/" ] }
149,805
How can I get, not the dependencies of a package, but the packages that depend on a certain package? I'm on Debian 6.0 Squeeze-LTS (the first-time extension to squeeze for long term support!) for my web server, and it reports that support has ended for a certain package:

    Unfortunately, it has been necessary to limit security support for some packages.
    The following packages found on this system are affected by this:
    * Source: libplrpc-perl, ended on 2014-05-31 at version 0.2020-2
      Details: Not supported in squeeze LTS
      Affected binary package:
      - libplrpc-perl (installed version: 0.2020-2)

I don't really want to try to uninstall that binary package without seeing what depends on it, and its description describes stuff that I've never heard of before:

    libplrpc-perl: Perl extensions for writing PlRPC servers and clients

So I'd be fine with just removing the package if possible, but want to determine the things that depend on it before doing so.
Why it is installed:

    aptitude why libplrpc-perl

What depends on this package:

    aptitude search '~i~Dlibplrpc-perl'

What would happen if libplrpc-perl were removed:

    aptitude -s purge libplrpc-perl
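If aptitude isn't installed, apt-cache can answer the same reverse-dependency question; limited to installed packages it is roughly equivalent to the aptitude search above:

    apt-cache rdepends --installed libplrpc-perl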
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3932/" ] }
149,820
I want my output to be:

    VDD
    GND
    AGNDSUB
    VMEASPOS
    VMEASNEG
    VREFEXT

File1 has the following content:

    Power and signal
    VDD Digital Power This pin provides power supply connection for the digital blocks.
    GND Digital Ground This pin provides ground connection for the digital blocks.
    AGNDSUB Ground This pin provides substrate connection.
    VMEASPOS Digital Power Voltage to be measured.
    VMEASNEG Ground Ground for the voltage to be measured.
    VREFEXT Digital Power Reference voltage input of 1.024V %for VSENS calibration.
    operating voltage
With GNU grep:

    grep -Eow '^[[:upper:]]+' file
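Briefly: -o prints only the matched text rather than the whole line, -w requires the match to end on a word boundary (which is why the leading P of "Power and signal" is not printed), and the anchored [[:upper:]]+ grabs the run of capital letters at the start of each line. On the sample file you should see:

    $ grep -Eow '^[[:upper:]]+' File1
    VDD
    GND
    AGNDSUB
    VMEASPOS
    VMEASNEG
    VREFEXT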
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80576/" ] }
149,823
expr does not seem to like parentheses (used in mathematics to make operator priority explicit):

    $ expr 3 * (2 + 1)
    bash: syntax error near unexpected token `('

How do I express operator priority in bash?
You can use arithmetic expansion instead:

    $ echo "$(( 3 * ( 2 + 1 ) ))"
    9

In my personal opinion, this looks a bit nicer than using expr.

From man bash:

    Arithmetic Expansion
        Arithmetic expansion allows the evaluation of an arithmetic expression and the substitution of the result. The format for arithmetic expansion is:

            $((expression))

        The expression is treated as if it were within double quotes, but a double quote inside the parentheses is not treated specially. All tokens in the expression undergo parameter expansion, string expansion, command substitution, and quote removal. Arithmetic expansions may be nested. The evaluation is performed according to the rules listed below under ARITHMETIC EVALUATION. If expression is invalid, bash prints a message indicating failure and no substitution occurs.
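If you need the same calculation outside the shell's integer arithmetic (for example, with fractions or arbitrary precision), bc accepts ordinary parentheses too:

    $ echo '3 * (2 + 1)' | bc
    9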
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/149823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2305/" ] }
149,874
I am experiencing slow ssh login from a machine to a remote machine. The verbose output of ssh is shown below in broken blocks. ssh freezes for 15 seconds at the end of the block shown below:

    [root@zabbix ~]# ssh -vvv root@172.18.xxx.xx
    OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to 172.18.xxx.xx [172.18.xxx.xx] port 22.
    debug1: Connection established.
    debug1: permanently_set_uid: 0/0
    debug1: identity file /root/.ssh/identity type -1
    debug1: identity file /root/.ssh/id_rsa type -1
    debug1: identity file /root/.ssh/id_dsa type -1
    debug1: Remote protocol version 2.0, remote software version Sun_SSH_1.1.4
    debug1: no match: Sun_SSH_1.1.4
    debug1: Enabling compatibility mode for protocol 2.0
    ..............................
    debug1: Trying private key: /root/.ssh/id_dsa
    debug3: no such identity: /root/.ssh/id_dsa
    debug2: we did not send a packet, disable method
    debug3: authmethod_lookup keyboard-interactive
    debug3: remaining preferred: password
    debug3: authmethod_is_enabled keyboard-interactive
    debug1: Next authentication method: keyboard-interactive
    debug2: userauth_kbdint
    debug2: we sent a keyboard-interactive packet, wait for reply
    debug3: Wrote 96 bytes for a total of 1205

ssh hangs here for approx 15 seconds and then it asks for the password:

    debug2: input_userauth_info_req
    debug2: input_userauth_info_req: num_prompts 1
    Password:

After password input, it hangs at the end of the last line shown below:

    debug3: packet_send2: adding 32 (len 23 padlen 9 extra_pad 64)
    debug3: Wrote 80 bytes for a total of 1285
    debug1: Authentication succeeded (keyboard-interactive).
    debug1: channel 0: new [client-session]
    debug3: ssh_session2_open: channel_new: 0
    debug2: channel 0: send open
    debug1: Entering interactive session.
    debug3: Wrote 64 bytes for a total of 1349

After approx 15 seconds, login completes successfully. My question is, what can I do to make this ssh attempt faster? This login attempt is made from a RHEL 6.2 machine to a Solaris 10 machine. At first I thought it could be a network issue, but later I found that I could log in without any such freezes from another Solaris 10 machine to the same remote Solaris machine mentioned above. The version of SSH on the remote Solaris machine:

    $ ssh -V
    Sun_SSH_1.1.4, SSH protocols 1.5/2.0, OpenSSL 0x0090704f

The version of SSH on the RHEL machine:

    [root@zabbix ~]# ssh -V
    OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010

Please note that the remote Solaris 10 machine is a zone on a physical Solaris 10 machine. An attempt to log in using ssh to the physical Solaris 10 machine from the local RHEL machine is very fast, so I suppose this is not a network issue at all.

Update: Adding below all the enabled directives in the sshd_config file on the remote Solaris 10 machine:

    Protocol 2
    Port 22
    ListenAddress 0.0.0.0
    AllowTcpForwarding yes
    GatewayPorts yes
    X11Forwarding yes
    X11DisplayOffset 10
    X11UseLocalhost yes
    PrintMotd no
    KeepAlive yes
    SyslogFacility auth
    LogLevel info
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_dsa_key
    ServerKeyBits 768
    KeyRegenerationInterval 3600
    StrictModes yes
    LoginGraceTime 600
    MaxAuthTries 6
    MaxAuthTriesLog 3
    PermitEmptyPasswords no
    PasswordAuthentication yes
    PAMAuthenticationViaKBDInt yes
    Subsystem sftp internal-sftp
    IgnoreRhosts yes
    RhostsAuthentication no
    RhostsRSAAuthentication no
    RSAAuthentication yes

Your input is highly appreciated. Thanks.
On the RHEL machine, try:

    ssh -o GSSAPIAuthentication=no root@172.18.xxx.xx

If that works, make it permanent by editing ~/.ssh/config and adding:

    GSSAPIAuthentication no

Also, check that the RHEL machine is visible in DNS (from the server's point of view). The server tries to check your reverse DNS resolution; if that fails, you'll suffer a delay. This check can be disabled by editing /etc/ssh/sshd_config:

    OpenSSH: use    UseDNS no
    Solaris: use    LookupClientHostnames no

Restart sshd and it should be quicker to log on.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80599/" ] }
149,920
Accidentally, I ran sudo rm -r /tmp. Is that a problem? I recreated it using sudo mkdir /tmp. Does that fix the problem? After I recreated the directory, in the Places section of the sidebar in Nautilus in Ubuntu 14.04 I can see /tmp, which wasn't there before. Is that a problem? One last thing: do I have to run sudo chown $USER:$USER /tmp to make it accessible as it was before? Would there be any side effects after this? By the way, I get this seemingly related error when I try to use bash autocompletion:

    bash: cannot create temp file for here-document: Permission denied
/tmp can be considered as a typical directory in most cases. You can recreate it, give it to root ( chown root:root /tmp ) and set 1777 permissions on it so that everyone can use it ( chmod 1777 /tmp ). This operation will be even more important if your /tmp is on a separate partition (which makes it a mount point). By the way, since many programs rely on temporary files, I would recommend a reboot to ensure that all programs resume as usual. Even if most programs are designed to handle these situations properly, some may not.
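Concretely, the recovery described above amounts to the following (the sticky bit in mode 1777 is what stops users from deleting each other's temp files):

    sudo mkdir -p /tmp
    sudo chown root:root /tmp
    sudo chmod 1777 /tmp

Do not chown it to your own user as the question contemplates; /tmp is shared by every user and service on the system.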
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80638/" ] }
149,940
I have a (hopefully) interesting problem that I could use some advice on. I have a system that's essentially used for storing logs. It has a directory structure like: YYYY/MM/DD/hostname/ There are a number of hostnames, and under each one are a bunch of gzipped hourly logs (access, error, etc). What I'm interested in is the total count of a given string in the access logs broken down by day and hostname. What's the best way to do this? Is this possible with a find and grep combination, or is it too complicated for that and instead need a script?
    for d in */*/*/*; do
        printf '%s: ' "$d"
        zcat -- "$d/"*.gz | grep -Fc STRING
    done

would count the number of lines that contain STRING. Replace grep -Fc STRING with grep -Fo STRING | wc -l (assuming GNU grep) to get the number of occurrences. Replace zcat with gzip -dc if your zcat doesn't support .gz files. With zsh and GNU grep, you can shorten it to:

    for d (*/*/*/*) zcat $d/*.gz | grep -FcH --label=$d STRING
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80645/" ] }
149,946
I've just used the Parallels upgrade service ( autoinstaller in terminal). I upgraded from Parallels 11 to 12. I logged into the panel, it asked me all the personal info and I started to get nervous. Now that I am properly logged in, all my 165 customers and domains are missing!?! I've checked on the server, and the data is still present in the /var/www/vhosts directories, although nothing shows up in Parallels. I've also looked through the Parallels docs/forums/help but I can't find any starting points, apart from this very helpful suggestion to restore the server from backup: http://kb.sp.parallels.com/en/11190 Is there any way I can roll back to eleven? And, is there any chance I can get my domains/customers/sites/dignity back?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80650/" ] }
149,959
Based on various sources I have cobbled together ~/.config/systemd/user/screenlock.service:

    [Unit]
    Description=Lock X session
    Before=sleep.target

    [Service]
    Environment=DISPLAY=:0
    ExecStart=/usr/bin/xautolock -locknow

    [Install]
    WantedBy=sleep.target

I've enabled it using systemctl --user enable screenlock.service. But after rebooting, logging in, suspending and resuming (tested both with systemctl suspend and by closing the lid), the screen is not locked and there is nothing in journalctl --user-unit screenlock.service. What am I doing wrong?

Running DISPLAY=:0 /usr/bin/xautolock -locknow locks the screen as expected.

    $ systemctl --version
    systemd 215
    +PAM -AUDIT -SELINUX -IMA -SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ +SECCOMP -APPARMOR
    $ awesome --version
    awesome v3.5.5 (Kansas City Shuffle)
     • Build: Apr 11 2014 09:36:33 for x86_64 by gcc version 4.8.2 (nobody@)
     • Compiled against Lua 5.2.3 (running with Lua 5.2)
     • D-Bus support: ✔
    $ slim -v
    slim version 1.3.6

If I run systemctl --user start screenlock.service, the screen locks immediately and I get a log message in journalctl --user-unit screenlock.service, so ExecStart clearly is correct.

Relevant .xinitrc section:

    xautolock -locker slock &

Creating a system service with the same file works (that is, slock is active when resuming):

    # ln -s "${HOME}/.config/systemd/user/screenlock.service" /usr/lib/systemd/system/screenlock.service
    # systemctl enable screenlock.service
    $ systemctl suspend

But I do not want to add a user-specific file outside $HOME for several reasons:

- User services should be clearly separated from system services
- User services should be controlled without using superuser privileges
- Configuration should be easily version controlled
sleep.target is specific to system services. The reason is, sleep.target is not a magic target that automatically gets activated when going to sleep. It's just a regular target that puts the system to sleep – so the 'user' instances of course won't have an equivalent. (And unfortunately the 'user' instances currently have no way to depend on systemwide services.)

(That, and there's the whole "hardcoding $DISPLAY" business. Every time you hardcode session parameters in an OS that's based on the heavily multi-user/multi-seat Unix, root kills a kitten.)

So there are two good ways to do this (I suggest the 2nd one):

Method 1

Create a system service (or a systemd-sleep(8) hook) that makes systemd-logind broadcast the "lock all sessions" signal when the system goes to sleep:

ExecStart=/usr/bin/loginctl lock-sessions

Then, within your X11 session (i.e. from ~/.xinitrc), run something that reacts to the signal:

systemd-lock-handler slock &
xss-lock --ignore-sleep slock &

(GNOME, Cinnamon, KDE, Enlightenment already support this natively.)

Method 2

Within your X11 session, run something that directly watches for the system going to sleep, e.g. by hooking into systemd-logind's "inhibitors". The aforementioned xss-lock actually does exactly that, even without the explicit "lock all" signal, so it is enough to have it running:

xss-lock slock &

It will run slock as soon as it sees systemd-logind preparing to suspend the computer.
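For Method 1, a complete systemwide unit might look like the sketch below. This is only an illustration: the file name and path are assumptions, not something the answer prescribes; only the ExecStart line comes from the answer itself.

# /etc/systemd/system/lock-before-sleep.service   (hypothetical name and location)
[Unit]
Description=Tell logind to lock all sessions before sleeping
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/bin/loginctl lock-sessions

[Install]
WantedBy=sleep.target

After systemctl enable lock-before-sleep.service, the unit runs once on every suspend, and whatever listens for the lock signal inside your session (xss-lock, systemd-lock-handler, or a desktop environment) does the actual locking.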
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/149959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
149,965
I have two directories, images and images2, with this structure in Linux:

/images/ad
/images/fe
/images/foo
... and other 4000 folders

and the other is like:

/images2/ad
/images2/fe
/images2/foo
... and other 4000 folders

Each of these folders contains images, and the directory names under images and images2 are exactly the same; however, their content is different. I want to know how I can copy-merge the images of /images2/ad into /images/ad, the images of /images2/foo into /images/foo, and so on for all 4000 folders.
This is a job for rsync . There's no benefit to doing this manually with a shell loop unless you want to move the files rather than copy them.

rsync -a /path/to/source/ /path/to/destination

In your case:

rsync -a /images2/ /images/

(Note the trailing slash on images2 , otherwise it would copy to /images/images2 .) If images with the same name exist in both directories, the command above will overwrite /images/SOMEPATH/SOMEFILE with /images2/SOMEPATH/SOMEFILE . If you want to replace only older files, add the option -u . If you want to always keep the version in /images , add the option --ignore-existing . If you want to move the files from /images2 , with rsync, you can pass the option --remove-source-files . Then rsync copies all the files in turn, and removes each file when it's done. This is a lot slower than moving if the source and destination directories are on the same filesystem.
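A cautious way to apply this is to preview the transfer first; -n (--dry-run) and -v are standard rsync options, and the paths below simply reuse the ones from the question:

# show what would be copied, without changing anything
rsync -avn /images2/ /images/
# then do the real merge, keeping the newer file when both sides have one
rsync -au /images2/ /images/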
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/149965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80666/" ] }
149,970
I often pipe commands to less in order to read through the output (e.g. compiler errors). MyCommand | less This is great because it makes trawling through large amounts of output easy, but when I exit less the output is gone. How can I make the output still visible after quitting less? This question differs from Is there a way to redirect a program's output and still have it go to stdout? because that question relates to output to a text file via tee , which, as far as I know, doesn't provide a facility to split output between less and stdout .
Using less -X :

Disables sending the termcap initialization and deinitialization strings to the terminal.

That will leave any text on-screen behind before and after paging. So:

command | less -X

will have the effect you want. Note that this output will still be wrong (duplicated lines) if you ever scrolled up - that's unavoidable without writing to a file. You can also set the environment variable LESS to a value that contains X to do this by default for every invocation of less . If you want to write to a file without resorting to tee , you can use the less -o filename or --log-file=filename options.
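Putting those two suggestions together, a small sketch; both lines use only options mentioned in the answer plus standard shell syntax:

# in ~/.bashrc or similar: make every less invocation leave its output on screen
export LESS="-X"
# one-off: page the output and also keep a copy in a file (-o works when reading a pipe)
MyCommand | less -X -o output.log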
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/149970", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
149,993
My preferred keyboard configuration is US-International. When I use it on MS-Windows and type certain special characters (i.e., ~ ' " ) followed by a , o n I get á ó or ñ as I wish. However when I type these characters followed by a letter that doesn't match, I will get that special character followed by the letter. So if I want to type, let's say, "q I can do it by pressing " q . However, if I do the same on Gnome I need to do it like " ␣ q , otherwise I get an audible bell. Also, if I want to type two single quotes, in Gnome I need to type ' four times, instead of the two times it takes on MS-windows. I know it is a rather minor annoyance, but it does get to me, and I see GNU+Linux as a highly customizable OS, and I'd like to learn more about how do I do such things. Changing to the normal US keyboard is not an option, because I need to use characters such as á ó or ñ for typing in Spanish. note: I am using Debian
The dead key sequences are configured in the same place as the compose key sequences, in the compose map. The compose map file is loaded by each application when it starts up; the following files are tried in order:

- the file name indicated by the environment variable XCOMPOSEFILE
- ~/.XCompose
- /usr/share/X11/locale/LOCALE_NAME/Compose (e.g. /usr/share/X11/locale/en_US.UTF-8/Compose )

There's no include mechanism, so if you want to modify the table, you'll need to make a copy of the standard file and edit it. For example, to define dead ¨ q to insert "q and dead ' dead ' to insert '' :

<dead_diaeresis> <q> : "\"q"
<dead_acute> <dead_acute> : "''"

You'll have to repeat the "q sequence for all letters, there's no macro facility. As far as I know, there's no fallback facility either: if a sequence is defined for <dead_diaeresis> <a> but none for <dead_diaeresis> <q> then typing <dead_diaeresis> <q> will not insert anything. If you want more sophisticated capabilities, you'll have to move on from the basic compose facility into the world of input methods . Input methods are primarily intended for people who use non-alphabetic scripts or multiple scripts, but of course you can use it for diacritics as well. Several input method frameworks are available. Since Gnome 3.6 , Gnome integrates support for iBus .
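The "copy then edit" workflow from the answer, as a terminal sketch; the en_US.UTF-8 locale directory is the usual one for a US UTF-8 locale and may differ on your system:

# start from the system table for your locale, then append custom sequences
cp /usr/share/X11/locale/en_US.UTF-8/Compose ~/.XCompose
cat >> ~/.XCompose <<'EOF'
<dead_diaeresis> <q> : "\"q"
<dead_acute> <dead_acute> : "''"
EOF

Newly started applications will pick up ~/.XCompose; already running ones need to be restarted.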
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80678/" ] }
149,997
I want to do some troubleshooting on my bash script. Is there a short and simple command that generates both stdout and stderr so that I can use 2>&1 on it? Sorry if this looks trivial, but I just can't think of one at the moment.
A simple approach would be to use ls to list actual and imaginary files:

ls . *.blah

This assumes that there are visible files in the working directory and that you don't have any files that end in .blah ¹

¹ ...and if you do, we won't judge you.
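To see that the two streams really are separate, here is a small check using only standard redirection syntax:

# stdout discarded: the complaint about *.blah still appears, since it is on stderr
ls . *.blah > /dev/null
# with 2>&1, both streams flow into the pipe together
ls . *.blah 2>&1 | less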
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/149997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24509/" ] }
150,039
Hi, in my previous question I got clarity on how to use associative arrays in the zsh shell. But whenever I use the following command in my script to iterate over the keys in my array, I get a bad substitution error:

for KEY in ${!array[@]}

Even echo ${!array[@]} gives the same. NB: array is the name of my associative array.
zsh has different parameter substitution than Bash, which is documented in man zshexpn . It supports a variety of modifiers to expansion behaviour , which are put in parentheses before the variable name: ${(X)name} . The modifier to include array keys (including for associative arrays) is k : ${(k)array} expands to the list of keys in the array, except that if a key is the empty string, it is omitted. Use double quotes and the @ modifier to retain the empty key.

for x in "${(@k)array}" ; ...

will loop over the keys of the array array .
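A self-contained zsh sketch of that loop; the array name and its contents are made up for illustration:

typeset -A colors
colors=(apple red banana yellow)
for k in "${(@k)colors}"; do
  print -r -- "$k -> $colors[$k]"   # zsh allows unbraced subscripting here
done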
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20845/" ] }
150,048
I want to install CentOS 6.4 on an HP DL380 G4 server. First of all I select "install or upgrade an existing system", but it boots into something like text mode, without graphics (as if only a basic video card were available). My main problem is with the partitioning type: it shows the screen below and won't let me do custom partitioning:

Partitioning Type
Installation requires partitioning of your hard drive. The default layout is suitable for most users. Select what space to use and which drive to use as the install layout.

  Use entire drive
  Replace existing Linux system
  Use free space

  [*] cciss/c0d0 .... MB (Compaq Smart Array)

        OK            Back

I want to do the partitioning manually myself, but selecting each one of the options doesn't let me do that. What should I do?
zsh has different parameter substitution than Bash, which is documented in man zshexpn . It supports a variety of modifiers to expansion behaviour , which are put in parentheses before the variable name: ${(X)name} . The modifier to include array keys (including for associative arrays) is k : ${(k)array} expands to the list of keys in the array, except that if a key is the empty string, it is omitted. Use double quotes and the @ modifier to retain the empty key.

for x in "${(@k)array}" ; ...

will loop over the keys of the array array .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78188/" ] }
150,111
Say I have several files with the following naming:

000001
000002
000003
...

Each of these files is a csv file (may include escape characters). In total the folder has ~20GB of data. How can I stitch these files together into a single final file? In case it matters I usually use Zsh .
cat <->.csv > all.csv

Where <-> matches any positive integer decimal number, will concatenate all those (in lexical order, which for 0 padded numbers is the same as numerical order) into all.csv . That will double the space on disk though. If you don't intend to keep the original files, you could do:

for i in <->.csv; do
  cat $i && rm -f $i || break
done > all.csv
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150111", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
150,120
I have the file:

key value
blah blah
blah blah
blahblah
man1 boy1
blah blah
man1 boy2
man1 boy1

I do this to remove duplicate lines:

awk '/man1/ { print $1,$2} ' file | awk '!x[$0]++'

and the command takes the first occurrence and ignores the others:

man1 boy1
man1 boy2

but I want to ignore all lines except the last one:

man1 boy2
man1 boy1

as ramesh said I want something like:

cat filename
blah blah
blah blah
blahblah
man1 boy1
blah blah
man1 boy2
man1 boy1
man1 boy2
man1 boy3
man1 boy4
man1 boy2

the desired output:

man1 boy1
man1 boy3
man1 boy4
man1 boy2
you can do this using this shell script:

#!/bin/bash
awk '/man1/{pos[$0] = NR}
END {
  for(key in pos)
    reverse[pos[key]] = key
  for(nr=1;nr<=NR;nr++)
    if(nr in reverse)
      print reverse[nr]
}' yourfile

Output:

[root@host ~]# sh shell.sh
man1 boy1
man1 boy3
man1 boy4
man1 boy2

Source
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79550/" ] }
150,121
I was analyzing some web heads looking at htop and noticed the following Uptime: 301 days(!), 23:47:39 What does the (!) mean?
From htop source code, file UptimeMeter.c , you can see:

char daysbuf[15];
if (days > 100) {
   sprintf(daysbuf, "%d days(!), ", days);
} else if (days > 1) {
   sprintf(daysbuf, "%d days, ", days);
} else if (days == 1) {
   sprintf(daysbuf, "1 day, ");
} else {
   daysbuf[0] = '\0';
}

I think ! here is just a mark that the server has been up for more than 100 days.

Reference
http://sourceforge.net/p/htop/mailman/htop-general/?viewmonth=200707
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/150121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78937/" ] }
150,135
I like the image preview in ranger, but I also like my terminal transparent. Is there really no way to get the image preview to work with w3m and a transparent background? (I'm willing to change my terminal emulator if that's necessary; currently I use urxvt.)
From htop source code, file UptimeMeter.c , you can see:

char daysbuf[15];
if (days > 100) {
   sprintf(daysbuf, "%d days(!), ", days);
} else if (days > 1) {
   sprintf(daysbuf, "%d days, ", days);
} else if (days == 1) {
   sprintf(daysbuf, "1 day, ");
} else {
   daysbuf[0] = '\0';
}

I think ! here is just a mark that the server has been up for more than 100 days.

Reference
http://sourceforge.net/p/htop/mailman/htop-general/?viewmonth=200707
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/150135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33165/" ] }
150,205
Actually I have a file called test1.txt . In that file I have the following five lines:

'test message1'
'test message2'
'test message3'
'test message4'
'test message5'

Now I want to add a new line 'testing testing' into the test1.txt file, after the 'test message1' line. How to do that?
This is what the a command does:

sed -e "/test message1/a\\
'testing testing'" < data

This command will:

Queue the lines of text which follow this command (each but the last ending with a \, which are removed from the output) to be output at the end of the current cycle, or when the next input line is read.

So in this case, when we match a line with /test message1/ , we run the command a ppend with the text argument " 'testing testing' ", which becomes a new line of the file:

'test message1'
'testing testing'
'test message2'
'test message3'
'test message4'
'test message5'

You can insert multiple lines by ending each of the non-final lines with a backslash. The double backslash above is to prevent the shell from eating it; if you're using it in a standalone sed script you use a single backslash. GNU sed accepts a single line of text immediately following a as well, but that is not portable.
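Relying on the GNU extensions the last paragraph mentions (same-line text after a, plus -i for in-place editing, neither of which is portable), the whole task collapses to a one-liner:

sed -i "/test message1/a 'testing testing'" test1.txt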
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61108/" ] }
150,219
I'm trying to write a script to unify two commands I run by hand into a single bash script that I can then run from cron. The first command is a simple find on files with a certain name and size:

find /some/path -type f -name file.pl -size +10M

This will produce several matching files with their full paths. I then copy these paths by hand into a for loop as arguments to the next script:

for path in /some/path/1/file.pl /some/path/2/file.pl /some/path/3/file.pl ; do perl /my/script.pl $path ; done

It seems like it should be easy to get this into a single shell script, but I'm finding it a struggle.
That's what the -exec predicate is for:

find /some/path -type f -name file.pl -size +10M -exec perl /my/script.pl {} \;

If you do want to have your shell run the commands based on the output of find , then that will have to be bash / zsh specific if you want to be reliable, as in:

zsh :

IFS=$'\0'
for f ($(find /some/path -type f -name file.pl -size +10M -print0)) {
  /my/script.pl $f
}

though in zsh , you can simply do:

for f (./**/file.pl(.LM+10)) /my/script.pl $f

bash / zsh :

while IFS= read -rd '' -u3 file; do
  /my/script.pl "$file"
done 3< <(find /some/path -type f -name file.pl -size +10M -print0)

Whatever you do, in bash or other POSIX shells, avoid:

for file in $(find...)

Or at least make it less bad by fixing the field separator to newline and disable globbing:

IFS='
'; set -f; for file in $(find...)

(which will still fail for file paths that contain newline characters).
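One more find variant worth knowing: if /my/script.pl can accept several file arguments in one invocation (an assumption about that script), terminating -exec with + instead of \; batches the paths and spawns far fewer perl processes:

find /some/path -type f -name file.pl -size +10M -exec perl /my/script.pl {} +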
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80825/" ] }
150,236
When I start Thunderbird or the IDE 'Eclipse', there are no icons in the menu entries. Several solutions found on the Internet suggest things like setting a specific dconf value, but with my installation (Arch) this is not possible:

% gsettings set org.gnome.desktop.interface menus-have-icons true
No such key 'menus-have-icons'

So what is the current way for enabling these icons?
It seems that since GTK 3.10 the value 'menus-have-icons' is deprecated . I found a solution by using this command:

% gsettings set org.gnome.settings-daemon.plugins.xsettings overrides "{'Gtk/ButtonImages': <1>, 'Gtk/MenuImages': <1>}"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80832/" ] }
150,248
I have a file which has embedded ^M characters. I wish to join the lines at the ^M character, i.e. change this:

^M être,
^M étant,
^M été,Indicatif,Présent,suis,es,est,sommes,êtes,sont
^M être,
^M étant,
^M été,Indicatif,Imparfait,étais,étais,était,étions,étiez,étaient

to this:

être,étant,été,Indicatif,Présent,suis,es,est,sommes,êtes,sont
être,étant,été,Indicatif,Imparfait,étais,étais,était,étions,étiez,étaient

This command removes the ^M but the lines are not joined:

%s/\r//g
It seems that since GTK 3.10 the value 'menus-have-icons' is deprecated . I found a solution by using this command:

% gsettings set org.gnome.settings-daemon.plugins.xsettings overrides "{'Gtk/ButtonImages': <1>, 'Gtk/MenuImages': <1>}"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63878/" ] }
150,249
I want to convert the following string ( 20140805234656 ) into a date-time stamp ( 2014-08-05 23:46:56 ). I am new to gawk and I don't know the exact syntax: how can I put a - at positions 5 and 8, a : at positions 14 and 17, and a space at position 11 of the result? Is there an efficient way to achieve this in awk?

EDIT: Please note that I have the string as a variable in awk; I generated it during some processing of records.
One way of doing it using GNU awk is this:

echo 20140805234656 |
awk 'BEGIN { FIELDWIDTHS = "4 2 2 2 2 2" }
     { printf "%s-%s-%s %s:%s:%s\n", $1, $2, $3, $4, $5, $6 }'
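Since the question's edit says the value is already held in an awk variable, a substr-based variant avoids FIELDWIDTHS (a GNU extension) and works in any POSIX awk; the variable names ts and dt here are placeholders:

# ts holds e.g. "20140805234656"
dt = substr(ts,1,4) "-" substr(ts,5,2) "-" substr(ts,7,2) " " \
     substr(ts,9,2) ":" substr(ts,11,2) ":" substr(ts,13,2)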
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58317/" ] }
150,275
# pvdisplay -s
  Device "/dev/sda2" has a capacity of 0
# vgdisplay -s
  "vg_vpsny23" 1.36 TiB [1.36 TiB used / 0 free]
# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_vpsny23-lv_root    50G  4.0G   43G   9% /
tmpfs                             16G     0   16G   0% /dev/shm
/dev/sda1                        485M   65M  395M  15% /boot
/dev/mapper/vg_vpsny23-lv_home   1.3T  300M  1.3T   1% /home
# umount /home
# vgdisplay
  --- Volume group ---
  VG Name               vg_vpsny23
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TiB
  PE Size               4.00 MiB
  Total PE              357314
  Alloc PE / Size       357314 / 1.36 TiB
  Free  PE / Size       0 / 0

I ran umount /home . Now do I destroy /home and then merge all the space into the / mount point?
Yes: when you do the lvremove (warning: this kills the data) on the vg_vpsny23-lv_home volume, the space will become available in the volume group again, which will let you do an lvextend on the vg_vpsny23-lv_root volume. In other words:

# lvremove /dev/mapper/vg_vpsny23-lv_home
# lvextend -l +100%FREE -r /dev/mapper/vg_vpsny23-lv_root
# systemctl daemon-reload    (if using systemd)

This should extend the root volume online. Remember that you can grow a filesystem online, but you have to unmount a filesystem to shrink it; for the root filesystem, taking it offline means booting into rescue mode. So if you may want to use some of this space elsewhere, you may want to modify the argument to the -l option given above. Make sure to remove the /home entry from /etc/fstab and reload systemd (or reboot), as other services may be relying on the removed LV's device/mount unit file (e.g. the ssh server).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47368/" ] }
150,341
I started Emacs in my terminal, and then started running an Emacs server inside it with M-x server-start . Now I would like to stop the Emacs server without exiting the Emacs process. There seems to be no command for that. How can I do that?
The command to do that from inside Emacs is:

M-x server-mode

The first time you run it, it'll restart the server it's running. The second time, it'll stop the server. To make sure that you're stopping the server, pass a non-positive prefix argument:

M-0 M-x server-mode RET
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
150,358
In bash, how do you prevent a specific space from being expanded in a variable? Let's say I have this:

JAVA_OPTS="-Xmx1g"
JAVA_OPTS="$JAVA_OPTS -XX:OnError='/path/to/a/script.sh %p'"

function args() {
  printf "%d :" $#
  printf " <%s> " $@
  echo
}

args $JAVA_OPTS

You get this:

3 : <-Xmx1g> <-XX:OnError='/path/to/a/script.sh> <%p'>

I would like this:

2 : <-Xmx1g> <-XX:OnError='/path/to/a/script.sh %p'>
First note that args will show two arguments even if you only give it one:

$ args "abc def"
1 : <abc> <def>

To get it to display correctly, double-quotes need to be added:

$ function args() { printf "%d :" $#; printf " <%s> " "$@"; echo; }
$ args "abc def"
1 : <abc def>

However, there are still issues with the definition of JAVA_OPTS . Observe:

$ args $JAVA_OPTS
3 : <-Xmx1g> <-XX:OnError='/path/to/a/script.sh> <%p'>

This is because, when $JAVA_OPTS appears on a command line, the shell will do word splitting on the contents of JAVA_OPTS but it does not respect or process the quotes contained therein. For this type of application, you are much better off with JAVA_OPTS defined as a bash array:

$ JAVA_OPTS="-Xmx1g"
$ JAVA_OPTS=("$JAVA_OPTS" "-XX:OnError=/path/to/a/script.sh %p")
$ args "${JAVA_OPTS[@]}"
2 : <-Xmx1g> <-XX:OnError=/path/to/a/script.sh %p>

By the way, when working with arrays, a handy way to see what is in them is with declare -p :

$ declare -p JAVA_OPTS
declare -a JAVA_OPTS='([0]="-Xmx1g" [1]="-XX:OnError=/path/to/a/script.sh %p")'
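The payoff of the array form is at the point of use: expanding it quoted hands each option to the JVM as a single argument. MainClass is just a placeholder for whatever you actually launch:

java "${JAVA_OPTS[@]}" MainClass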
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17346/" ] }
150,402
I can't seem to find an answer to this simple question, which I need for some compliance documentation. On a default install of CentOS 6.5 (OpenSSH 5.3p1-94.el6), after how long of being idle will a user's SSH session be terminated? I believe the following can be set to increase the idle timeout, but they are commented out by default:

$ grep -i alive /etc/ssh/sshd_config
#TCPKeepAlive yes
#ClientAliveInterval 0
#ClientAliveCountMax 3

Also, is there a command to dump a list of the current sshd settings? I don't see anything in man sshd .
The commented lines in sshd_config usually display the defaults. This is the case with all of the lines in your question. You can verify this in the sshd_config manpage . So with the stock defaults, sshd itself never disconnects a session merely for being idle: ClientAliveInterval defaults to 0, meaning no client-alive probes are sent, and TCPKeepAlive only notices dead connections, not idle ones. Here are the relevant snippets:

TCPKeepAlive
Specifies whether the system should send TCP keepalive messages to the other side. If they are sent, death of the connection or crash of one of the machines will be properly noticed. However, this means that connections will die if the route is down temporarily, and some people find it annoying. On the other hand, if TCP keepalives are not sent, sessions may hang indefinitely on the server, leaving "ghost" users and consuming server resources.
The default is "yes" (to send TCP keepalive messages), and the server will notice if the network goes down or the client host crashes. This avoids infinitely hanging sessions.
To disable TCP keepalive messages, the value should be set to "no". This option was formerly called KeepAlive .

ClientAliveCountMax
Sets the number of client alive messages (see below) which may be sent without sshd(8) receiving any messages back from the client. If this threshold is reached while client alive messages are being sent, sshd will disconnect the client, terminating the session. It is important to note that the use of client alive messages is very different from TCPKeepAlive (above). The client alive messages are sent through the encrypted channel and therefore will not be spoofable. The TCP keepalive option enabled by TCPKeepAlive is spoofable. The client alive mechanism is valuable when the client or server depend on knowing when a connection has become inactive.
The default value is 3. If ClientAliveInterval (see below) is set to 15, and ClientAliveCountMax is left at the default, unresponsive SSH clients will be disconnected after approximately 45 seconds. This option applies to protocol version 2 only.

ClientAliveInterval
Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client. This option applies to protocol version 2 only.
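For the side question about dumping the active settings: sshd has an extended test mode that prints the effective configuration (it exists in the OpenSSH 5.3 that CentOS 6 ships; run it as root):

sshd -T | grep -i -E 'clientalive|tcpkeepalive'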
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/689/" ] }
150,427
Is it possible to move to the next flagged message in Mutt? In looking at the development manual , I wasn't able to see anything that would let me do this. I'm looking for something like <next-new> , but for skipping around to the next flagged message. If there's nothing like this in Mutt, can I fake it with macros?
Just search for the next flagged message: / followed by ~F . Well, the only drawback is that this doesn't work from the pager menu (but this would be a valid RFE). And you can write a macro with the value: <search>~F\r Note: similarly, I suppose that <next-new> is almost the same as <search>~N\r in the index menu (the only difference I can see is the different error message when there are no new messages). Note 2: from the pager, I suppose that a macro <exit><search>~F\r<display-message> would do what you want.
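Written out as muttrc macros, using exactly the key sequences from the answer; binding them to F is an arbitrary choice, pick any free key:

macro index F "<search>~F\r" "jump to next flagged message"
macro pager F "<exit><search>~F\r<display-message>" "jump to next flagged message"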
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48106/" ] }
150,448
I installed a new icon set (numix), however not all icons were changed (e.g. the software manager). How can I manually change icons?
One way of finding the location of the icon for an application is to add it to the panel (right click > add to panel) and then right-click on the newly added icon to edit it. By clicking on the icon in "Launcher Properties" you'll get its location. For instance, mintInstall's icon is found at /usr/lib/linuxmint/mintInstall/icon.svg . Having this, you can then replace the icon with your own file, and you can remove the application from the panel again.
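For the software manager example, the swap itself might look like this; the source path ~/my-icons/software-manager.svg is purely hypothetical, and backing up the original first is just prudence:

sudo cp /usr/lib/linuxmint/mintInstall/icon.svg /usr/lib/linuxmint/mintInstall/icon.svg.bak
sudo cp ~/my-icons/software-manager.svg /usr/lib/linuxmint/mintInstall/icon.svg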
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80860/" ] }
150,451
Over the past week my server (running Debian Jessie) has rebooted twice. In the syslog I see this before each reboot, and at no other points: Aug 15 13:32:58 hoshimiya kernel: [296512.005355] {1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 1Aug 15 13:32:58 hoshimiya kernel: [296512.005360] {1}[Hardware Error]: It has been corrected by h/w and requires no further actionAug 15 13:32:58 hoshimiya kernel: [296512.005361] {1}[Hardware Error]: event severity: correctedAug 15 13:32:58 hoshimiya kernel: [296512.005362] {1}[Hardware Error]: Error 0, type: correctedAug 15 13:32:58 hoshimiya kernel: [296512.005363] {1}[Hardware Error]: fru_text: CorrectedErrAug 15 13:32:58 hoshimiya kernel: [296512.005364] {1}[Hardware Error]: section_type: memory errorAug 15 13:32:58 hoshimiya kernel: [296512.005365] [Firmware Warn]: error section length is too small Some googling leads me to believe that this is to do with my ECC RAM detecting and recovering from an error. Is this correct? If it's recovering, why does the system reboot? I'd like to prevent the system from rebooting, if at all possible.
Looks like your RAM is failing, or having errors that are being corrected. Depending on the severity, it sounds like these errors are impacting its ability to function and it's having to reboot afterwards. From the looks of this thread, the message at the end about the error section length being too small is likely the culprit. excerpt - [PATCH 1/1] efi: cper: Support different length of Error Section

Some fields might be added to the Error Section in the newer UEFI spec. For example, the fields 'Reserved', 'Rank Number', 'Card Handle' and 'Module Handle' are added to the Memory Error Section started from UEFI spec 2.3. Unfortunately, there will have the following warning message if the memory corrected error is detected and the field 'revision' in struct acpi_generic_data is less then 0x203 (UEFI spec 2.3):

{1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 3
{1}[Hardware Error]: It has been corrected by h/w and requires no further action
{1}[Hardware Error]: event severity: corrected
{1}[Hardware Error]: Error 0, type: corrected
{1}[Hardware Error]: section_type: memory error
[Firmware Warn]: error section length is too small

This behavior causes this corrected error cannot be displayed correctly. To solve the issue, this patch supports different length of the Error Section for different UEFI spec version. And, this patch employs a pre-defined structure to clean up the duplicated codes in function cper_estatus_print_section. With applying this patch, the memory corrected error could be displayed correctly after injecting the error. Tested on v3.14-rc5 with Grantley platform and Intel RAStool.

So it would seem a patch for that particular error is in the works and might be available in a newer version of the kernel.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80981/" ] }
150,476
I'm adding some Fedora 20 workstations to our Windows 2003 domain. I've successfully joined the domain with the boxes, and can log in with domain accounts. Now I'm trying to allow the default AD group Enterprise Admins to use sudo; however, whatever I do, it seems that the group cannot be found (or at least it tells me my user account is not in the sudoers file). Structure of the OU (default really):

mydomain.local
  Builtin
  Computers
  DCOM-Users
  Domain Controllers
  ForeignSecurityPrincipals
  CompanyName
    Management
    Accounting
    Admins
    SysAccounts
    CustomerService
    Warehouse
  Users

I used realmd and sssd to join the domain, and am trying to allow sudo for groups located under the Users OU, but would also like to add some from the CompanyName --> Admins OU/sub-group as well. I'm currently trying this with no luck (in /etc/sudoers):

%MYDOMAIN\\Enterprise^Admins ALL=(ALL) ALL

I've also tried variations, such as:

%MYDOMAIN\\Users\Enterprise^Admins ALL=(ALL) ALL
%Enterprise^Admins@mydomain.local ALL=(ALL) ALL

etc... nothing seems to be working, even after reboots and/or systemctl restart sssd . If I explicitly add my domain account to the /etc/sudoers file, it works no problem:

myuser@mydomain.local ALL=(ALL) ALL

There are a few resources that seem to indicate it should be possible to add AD groups to sudoers, however so far none of them have worked for me:

http://funwithlinux.net/2013/09/join-fedora-19-to-active-directory-domain-realmd/
https://serverfault.com/questions/387950/how-to-map-ad-domain-admins-group-to-ubuntu-admins
https://help.ubuntu.com/community/LikewiseOpen
Several months after you asked, but the correct answer is that you remove all domain information from the group. All the information is set and extracted by SSSD automatically. The only flaw I see in some of your examples is that you escaped the space with a ^. An AD group of Enterprise Admins would have a sudoers line that starts with %Enterprise\ Admins . For example, if your domain is example.com , then the sudoers line looks like:

%Enterprise\ Admins@example.com ALL=(ALL) ALL

You can verify this by calling getent on the group:

getent group Enterprise\ Admins
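Before retrying sudo, it can help to confirm that the domain user is actually resolved as a member of the group; both commands below are standard tools, and aduser@example.com is a placeholder whose exact format depends on your sssd naming settings:

id 'aduser@example.com'          # should list the enterprise admins group
sudo -l -U 'aduser@example.com'  # shows what sudo would grant that user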
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34072/" ] }
150,487
I just restored my Raspberry Pi server from an rsync image. During the backup, I had excluded /var/cache/* , thinking that this would restore an empty directory. This worked, but when I rebooted, a process complained that it couldn't write there, in the following mail:

Subject: status report from [email protected]: updating <url>.dynu.com: nochg: No update required; unnecessary attempts to change to the current address are considered abusive
FATAL: Cannot create file '/var/cache/ddclient/ddclient.cache'. (No such file or directory)

I checked the permissions of /var/cache , which were consistent with my Arch desktop system:

$ ls -ld /var/cache/
drwxr-xr-x 3 root root 4096 Aug 15 13:23 /var/cache/

Do I have to do anything else? If the permissions are a-w , then how can non-root processes write in here?
/var/cache is not a free-for-all like /var/tmp . Each service that requires it has a subdirectory in /var/cache with appropriate permissions for it to store files. On Debian and derived distributions, you can run dpkg -S /var/cache to find what packages have set up directories under /var/cache , and apt-get --reinstall install PACKAGE_NAME … to reinstall these packages and re-create the directories under /var/cache . Some applications repopulate their cache on the fly. Others need to have the cache filled explicitly; this is often done by a cron job. A few need to be populated manually; for example, to use apt-file , you'll first need to run apt-file update as root. There is one piece of /var/cache on Debian that cannot be reconstructed: /var/cache/debconf/config.dat . This file contains the answers that you gave during the installation of Debian packages. This is a long-standing bug in Debconf.
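Applying the answer's recipe to the ddclient failure from the question (this assumes the Pi runs a Debian-derived system, which the answer's dpkg/apt-get commands already presuppose):

dpkg -S /var/cache/ddclient            # confirm which package owns the directory
apt-get --reinstall install ddclient   # recreate its /var/cache subdirectory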
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18887/" ] }
150,523
How do I fix the iptables: command not found problem that happened to Debian 7.6? batman@gotham:~$ uname -aLinux gotham 3.14-0.bpo.2-amd64 #1 SMP Debian 3.14.13-2~bpo70+1 (2014-07-31) x86_64 GNU/Linuxbatman@gotham:~$ iptables -Lbash: iptables: command not foundbatman@gotham:~$ sudo apt-get install iptables[sudo] password for batman: Reading package lists... DoneBuilding dependency tree Reading state information... Doneiptables is already the newest version.0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.batman@gotham:~$ I googled extensively and most of the answers were for CentOS and Fedora that dated back to 2005 and 2009.
The iptables command can pretty much only be usefully run as root, not as another user. So it is not in the default command search path for users other than root. To run iptables , run it as root, with either of these commands:

su -c 'iptables --some-option …'
sudo iptables --some-option …

The executable is located in /sbin , which is in the default command search path for root.
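Since the binary's location is known, calling it by absolute path also works (still as root), which sidesteps the search-path issue entirely:

sudo /sbin/iptables -L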
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70167/" ] }
150,533
I was looking into special parameters in bash. I am curious to know what $& is and how it is different from $_ . I see the following output when running the commands, but could not find the meaning documented anywhere:

k@Linux:~$ echo $&
[1] 12397
$
k@Linux:~$ echo $n
[1]+  Done                    echo $
k@Linux:~$
$& is not a single token/special variable; it is simply $ and & . The command echo $& is treated as echo $ & , which echoes a literal $ in the background. $_ , on the other hand, is a special variable that expands to the last argument of the most recent command executed.
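A quick way to watch $_ do its thing at an interactive prompt:

$ echo hello world
hello world
$ echo "$_"
world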
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74355/" ] }
150,546
I'm a new Linux user. I'm trying to run a crontab job, as the vagrant user, to back up my database:

* * * * * /usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +%Y-%m-%d`.sql.gz >/dev/null 2>&1

When the crontab runs, there is no backup file in the folder ( /var/backup/all has the permission scheme 755 ). This is the error from /var/log/syslog :

Aug 16 11:55:01 precise64 CRON[2213]: (vagrant) CMD (/usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +%Y-%m-%d`.sql.gz >/dev/null 2>&1)
Aug 16 11:55:01 precise64 CRON[2212]: (CRON) info (No MTA installed, discarding output)

So I think the problem is that crontab can't create the backup file because of a permission error. Also, I know I didn't install an MTA, but I use >/dev/null 2>&1 to stop crontab from mailing the output. Why the error?
Of course, the error is that you don't have a mailer (sendmail, postfix, etc.) installed and active. That being said, your other problem is that the >/dev/null 2>&1 only applies to the last command in the pipeline, in this case gzip. Thus there must be some output going to STDERR from your mysqldump. The correct way to do what I think you want is:

* * * * * (command | command) >/dev/null 2>&1
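Applied to the question's job, the grouped form would look like the line below. One extra caveat: in a crontab entry the % character is special (it is treated as a newline unless escaped), so the date format needs each % written as \% ; the unescaped % in the original entry is itself enough to break the job:

* * * * * (/usr/bin/mysqldump -h localhost -u root -p root mydb | gzip > /var/backup/all/database_`date +\%Y-\%m-\%d`.sql.gz) >/dev/null 2>&1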
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81042/" ] }
150,557
echo ' 1 2     3  ' | sed 's/[[:space:]]*/_/g'

I want to see _1_2_____3__ as the output. Instead, I am seeing _1_2_3_ as the output. What am I doing wrong??

EDIT: The other thing confusing me, which I forgot to put in the above example, is why this:

echo ' test1 test2 ' | sed 's/[[:space:]]*/_/g'

shows this:

_t_e_s_t_1_t_e_s_t_2_

How can [[:space:]] match between t and e and s ...?
Because you use * , which matches zero or more spaces, each run of spaces (however long) is substituted by one underscore _ . To replace every single space, match exactly one at a time:

$ echo ' 1 2     3  ' | sed 's/[[:space:]]/_/g'
_1_2_____3__

The edit's _t_e_s_t_1_t_e_s_t_2_ output happens for the same reason: * also matches the empty string between two letters, and each of those empty matches is replaced by an underscore too. Remember that [[:space:]] also matches tab, newline, carriage return.

Note BREs Matching Multiple Characters
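Going the other way, if the goal were to collapse each run of whitespace into a single underscore while requiring at least one whitespace character, an ERE + quantifier does it; -E is accepted by BSD sed and by GNU sed (where it long existed as an alias of -r):

echo ' 1 2     3  ' | sed -E 's/[[:space:]]+/_/g'
# prints: _1_2_3_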
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21276/" ] }
150,578
Bash offers many useful emacs-style keybindings for simple command-line editing. For example, Ctrl+w deletes ("kills") the word to the left of the cursor. Another keybinding, Alt+d, is supposed to be a "mirror" of the first one: it is supposed to delete the word to the right of the cursor. However, I have noticed that these two keybindings do not act completely symmetrically. Whereas Ctrl+w treats foo.bar as one word, Alt+d treats it as two words. Even more annoyingly, # echo is two words for Ctrl+w, but one word for Alt+d. Is there some logic in this? Is there some reason why they don't treat words in the same way? Is there any way for me to change this? I am using bash on Debian Wheezy.
Different bash commands use different notions of word. Check the description of each command in the manual . C-w kills to the previous whitespace. M-DEL (usually Alt + BackSpace ) kills to the previous word boundary where words contain only letters and digits (the same as M-b and M-f ), and M-d kills forward similarly. Bash uses the Readline library to process user input, and can be configured either via ~/.inputrc or via the bind builtin in ~/.bashrc . You can bind a key to a different readline command if you wish. You can also use bind -x to bind a key to a bash function that modifies the READLINE_LINE variable. For example, to make M-d kill a shell word, bind it to shell-kill-word in your .bashrc :

bind '"\M-d": shell-kill-word'

To make M-d delete a whitespace-delimited word, there is no built-in function, so you need to write either a macro or a shell function. Since there is no motion command that goes by whitespace-delimited words, you need a function at least for that part.

delete_whitespace_word () {
  local suffix="${READLINE_LINE:$READLINE_POINT}"
  if [[ $suffix =~ ^[[:space:]]*[^[:space:]]+ ]]; then
    local -i s=READLINE_POINT+${#BASH_REMATCH[0]}
    READLINE_LINE="${READLINE_LINE:0:$READLINE_POINT}${READLINE_LINE:$s}"
  fi
}
bind -x '"\ed": delete_whitespace_word'

To make M-d kill a whitespace-delimited word is more complicated because as far as I know, there is no way to access the kill ring from bash code. So this requires a function to find the end of the portion to kill, and a macro to follow this by the actual killing.

forward_whitespace_word () {
  local suffix="${READLINE_LINE:$READLINE_POINT}"
  if [[ $suffix =~ ^[[:space:]]*[^[:space:]]+ ]]; then
    ((READLINE_POINT += ${#BASH_REMATCH[0]}))
  else
    READLINE_POINT=${#READLINE_LINE}
  fi
}
bind -x '"\C-xF": forward_whitespace_word'
bind '"\C-x\C-w": kill-region'
bind '"\ed": "\e \C-xF\C-x\C-w"'

All of this would be a lot easier in zsh.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
150,582
Given the scenario: Remote machine: SSH server; user does not have admin privileges; Local machine: SSH client; user has admin privileges. If user , logging in to remote from local , wishes to interact with remote using a shell not installed on remote , how can user accomplish this alone? Example: user uses fish on local , and wishes also to use it on remote , but remote only has bash and zsh installed.
Install your favorite shell on the remote machine. You don't need any administrator privileges to do that, you can install programs in your home directory, it's just less convenient. See Installation on debian 5 32-bit without being a root , How to install program locally without sudo privileges? , Keeping track of programs and other questions. If you want to automatically log into a shell that you installed yourself instead of the default one, see Making zsh default shell without root access

If all you want to do is manipulate remote files, you can use SSHFS to mount the remote directory tree on your local machine.

mkdir ~/remote.d
sshfs remote.example.com:/ ~/remote.d
ls ~/remote.d/…
fusermount -u ~/remote.d

If you have no room in your home directory or it's a shared account, you can make do with setting up a reverse SSH tunnel and mount your local directory tree on the remote machine with SSHFS , assuming that the two machines are running the same architecture (same unix variant on the same processor type). If the two machines have incompatible architectures, you can even install the programs for the remote architecture in your local home directory. This may not be very convenient as you'll have to set up paths correctly for the programs to find their libraries, configuration files and other data files.

Emacs's eshell is compatible with Tramp : if you change to a remote directory in Eshell, you'll be executing commands on the remote machine.
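A common follow-up once the shell is installed: starting it automatically on login without chsh. The snippet below is one well-known pattern, not an official recipe; the install path and the guard variable are assumptions for illustration:

# at the end of ~/.bashrc on the remote machine
if [ -x "$HOME/.local/bin/fish" ] && [ -z "$ALREADY_IN_FISH" ]; then
    ALREADY_IN_FISH=1 exec "$HOME/.local/bin/fish" -l
fi

exec replaces the login bash with fish, and the guard variable prevents a loop in case anything ever re-reads .bashrc from inside fish.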
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
150,594
The program ed , a minimal text editor, cannot be exited by sending it an interrupt through using Ctrl - C , instead printing the error message "?" to the console. Why doesn't ed just exit when it receives the interrupt? Surely there's no reason why a cryptic error message is more useful here than just exiting. This behavior leads many new users into the following sort of interaction:

$ ed
hello
?
help
?
exit
?
quit
?
^C
?
^C
?
?
?
^D
$ su
# rm -f /bin/ed

Such a tragic waste—easily avoidable if ed simply agreed to be interrupted. Another stubborn program exhibiting similar behavior is less , which also doesn't appear to have much reason to ignore C-c . Why don't these programs just take a hint?
Ctrl + C sends SIGINT . The conventional action for SIGINT is to return to a program's toplevel loop, cancelling the current command and entering a mode where the program waits for the next command. Only non-interactive programs are supposed to die from SIGINT. So it's natural that Ctrl + C doesn't kill ed, but causes it to return to its toplevel loop. Ctrl + C aborts the current input line and returns to the ed prompt. The same goes for less: Ctrl + C interrupts the current command and brings you back to its command prompt. For historical reasons, ed ignores SIGQUIT ( Ctrl + \ ). Normal applications should not catch this signal and allow themselves to be terminated, with a core dump if enabled.
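The convention is easy to reproduce in a toy bash "command loop": with a handler trapped on SIGINT, Ctrl+C cancels the pending input and redisplays the prompt, while Ctrl+D (end-of-file) is what actually exits; in bash, a read interrupted by a trapped signal returns a status above 128:

#!/bin/bash
trap 'echo' INT    # on Ctrl+C: just move to a fresh line, keep running
while true; do
    if read -r -p 'toy> ' line; then
        printf 'you typed: %s\n' "$line"
    else
        rc=$?
        (( rc > 128 )) && continue   # interrupted: back to the prompt
        break                        # EOF (Ctrl+D): exit cleanly
    fi
done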
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/150594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44164/" ] }
150,628
So the following behaviour of unix find just cost me dearly:

> touch foo
> touch bar
> ls
bar  foo
> find . -name '*oo' -delete
> ls
bar
> touch baz
> ls
bar  baz
> find . -delete -name '*ar'
> ls
> #WHAAAT?

How does this make sense?
The command line of find is made from different kinds of options that are combined to form expressions. The find option -delete is an action. That means it is executed for each file matched so far. As first option after the paths, all files are matched... oops! It is dangerous - but the man page at least has a big warning. From man find :

ACTIONS
-delete
Delete files; true if removal succeeded. If the removal failed, an error message is issued. If -delete fails, find's exit status will be nonzero (when it eventually exits). Use of -delete automatically turns on the -depth option.
Warnings: Don't forget that the find command line is evaluated as an expression, so putting -delete first will make find try to delete everything below the starting points you specified. When testing a find command line that you later intend to use with -delete, you should explicitly specify -depth in order to avoid later surprises. Because -delete implies -depth, you cannot usefully use -prune and -delete together.

From further up in man find :

EXPRESSIONS
The expression is made up of options (which affect overall operation rather than the processing of a specific file, and always return true), tests (which return a true or false value), and actions (which have side effects and return a true or false value), all separated by operators. -and is assumed where the operator is omitted. If the expression contains no actions other than -prune, -print is performed on all files for which the expression is true.

On trying out what a find command will do:

To see what a command like find . -name '*ar' -delete will delete, you can first replace the action -delete by a more harmless action - like -fls or -print :

find . -name '*ar' -print

This will print which files are affected by the action. In this example, the -print can be left out. In that case, there is no action at all, so the most obvious one is added implicitly: -print . (See the second paragraph of the section "EXPRESSIONS" cited above.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81080/" ] }
150,637
I'm trying to copy the Documents and Settings folder of a Windows XP system over to an NTFS external disk using a USB live image of Puppy Linux. I encountered an encoding problem with the file names: the system doesn't recognize Italian special characters (part of UTF-8), so using cp or the GUI file manager produces the error "invalid or incomplete multibyte or wide character". How can I copy the files whose names include the special characters to the NTFS drive?
Are you sure the file names are valid on the NTFS filesystem? Do you require that the file names stay the same? If not, you could remove the "strange" characters to make your life easier. There is a tool for that, detox . You can check what would get renamed without changing the filenames first:

$ detox -n somedir/*

And then, actually do it:

$ detox somedir/*

Another approach is to mount the NTFS filesystem in a way that it cleans up ('sanitizes') the file names itself. There is a mount option to enable this, windows_names . From man ntfs-3g :

windows_names
This option prevents files, directories and extended attributes to be created with a name not allowed by windows, either because it contains some not allowed character (which are the nine characters " * / : < > ? \ | and those whose code is less than 0x20) or because the last character is a space or a dot. Existing such files can still be read (and renamed).
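A mount invocation sketch using that option; the device name and mountpoint are placeholders for your external disk:

mount -t ntfs-3g -o windows_names /dev/sdb1 /mnt/external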
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81083/" ] }
150,644
In his answer to the question "mixed raid types" , HBruijn suggests using LVM to implement RAID vs the more standard MDRAID. After a little investigation, it seems LVM also supports RAID functionality. In the past, I have used LVM on top of MDRAID, and was not aware till now that LVM also supports RAID functionality. This seems to be a relatively recent development, but I have not found out exactly when this was implemented. So, these are alternative ways to implement software RAID on Linux. What are the pros and cons of these two different approaches? I'm looking for feature comparisons between the two approaches so people can decide which is better for them. Conclusions based on experimentation (as in, this feature doesn't work as well as this feature and here is why) are also OK, provided you include your data in the answer. Some specific issues to address:

- Suppose I want to do sw RAID + LVM (a common scenario). Should I use LVM's support for sw RAID and thus use one utility instead of two? Does this more integrated approach have any advantages?
- Does LVM's support for sw RAID have significant deficiencies compared to the more mature MDADM? Specifically, how stable/bug-free is the LVM support for sw RAID? It seems this support only goes back to 2011 (see below), while MDADM is much older. Also, how does it compare in terms of feature set? Does it have significant feature deficiencies compared to MDADM? Conversely, does it have support for any sw RAID features that MDADM does not have?

NOTES:

- There is a detailed discussion at http://www.olearycomputers.com/ll/linux_mirrors.html but I could not find out what date it was written on.
- Similar question on Serverfault: linux LVM mirror vs. MD mirror. However, this question was asked in 2010, and the answers may be out of date.
- The changelog entry for version 2.02.87 - 12th August 2011 has:

  Add configure --with-raid for new segtype 'raid' for MD RAID 1/4/5/6 support

  So, it looks like RAID support in LVM is about 3 years old.
How mature and featureful is LVM RAID?

LVM-RAID is actually mdraid under the covers. It basically works by creating two logical volumes per RAID device (one for data, called "rimage"; one for metadata, called "rmeta"). It then passes those off to the existing mdraid drivers. So things like handling disk read errors, I/O load balancing, etc. should be fairly mature. That's the good news.

Tools

You can't use mdadm on it (at least, not in any easy way¹) and the LVM RAID tools are nowhere near as mature. For example, in Debian Wheezy, lvs can't tell you RAID5 sync status. I very much doubt repair and recovery (especially from "that should never happen!" situations) is anywhere near as good as mdadm (and I accidentally ran into one of those in my testing, and finally just gave up on recovering it—recovery with mdadm would have been easy). Especially if you're not using the newest versions of all the tools, it gets worse.

Missing Features

Current versions of LVM-RAID do not support shrinking ( lvreduce ) a RAID logical volume. Nor do they support changing the number of disks or RAID level ( lvconvert gives an error message saying not supported yet). lvextend does work, and can even grow RAID levels that mdraid only recently gained support for, such as RAID10. In my experience, extending LVs is much more common than reducing them, so that's actually reasonable. Some other mdraid features aren't present, and especially you can't customize all the options you can with mdadm. On older versions (as found in, for example, Debian Wheezy), LVM RAID does not support growing, either. For example, on Wheezy:

root@LVM-RAID:~# lvextend -L+1g vg0/root
  Extending logical volume root to 11.00 GiB
  Internal error: _alloc_init called for non-virtual segment with no disk space.

In general, you don't want to run the Wheezy versions. The above is once you get it installed. That is not a trivial process either.

Tool problems

Playing with my Jessie VM, I disconnected (virtually) one disk. That worked, the machine stayed running. lvs , though, gave no indication the arrays were degraded. I re-attached the disk, and removed a second. Stayed running (this is raid6). Re-attached, still no indication from lvs . I ran lvconvert --repair on the volume, it told me it was OK. Then I pulled a third disk... and the machine died. Re-inserted it, rebooted, and am now unsure how to fix. mdadm --force --assemble would fix this; neither vgchange nor lvchange appears to have that option (lvchange accepts --force , but it doesn't seem to do anything). Even trying dmsetup to directly feed the mapping table to the kernel, I could not figure out how to recover it.

Also, mdadm is a dedicated tool just for managing RAID. LVM does a lot more, but it feels (and I admit this is pretty subjective) like the RAID functionality has sort of been shoved in there; it doesn't quite fit.

How do you actually install a system with LVM RAID?

Here is a brief outline of getting it installed on Debian Jessie or Wheezy. Jessie is far easier; note if you're going to try this on Wheezy, read the whole thing first…

1. Use a full CD image to install, not a netinst image.
2. Proceed as normal, get to disk partitioning, set up your LVM physical volumes. You can put /boot on LVM-RAID (on Jessie, and on Wheezy with some work detailed below).
3. Create your volume group(s). Leave it in the LVM menu.
4. First bit of fun—the installer doesn't have the dm-raid.ko module loaded, or even available! So you get to grab it from the linux-image package that will be installed.
5. Switch to a console (e.g., Alt - F2 ) and:

   cd /tmp
   dpkg-deb --fsys-tarfile /cdrom/pool/main/l/linux/linux-image-*.deb | tar x
   depmod -a -b /tmp
   modprobe -d /tmp dm-raid

6. The installer doesn't know how to create LVM-RAID LVs, so you have to use the command line to do it. Note I didn't do any benchmarking; the stripe size ( -I ) below is entirely a guess for my VM setup:

   lvcreate --type raid5 -i 4 -I 256 -L 10G -n root vg0

7. On Jessie, you can use RAID10 for swap. On Wheezy, RAID10 isn't supported. So instead you can use two swap partitions, each RAID1. But you must tell it exactly which physical volumes to put them on or it puts both halves of the mirror on the same disk . Yes. Seriously. Anyway, that looks like:

   lvcreate --type raid1 -m1 -L 1G -n swap0 vg0 /dev/vda1 /dev/vdb1
   lvcreate --type raid1 -m1 -L 1G -n swap1 vg0 /dev/vdc1 /dev/vdd1

8. Finally, switch back to the installer, and hit 'Finish' in the LVM menu. You'll now be presented with a lot of logical volumes showing. That's the installer not understanding what's going on; ignore everything with rimage or rmeta in their name (see the first paragraph way above for an explanation of what those are).
9. Go ahead and create filesystems, swap partitions, etc. as normal. Install the base system, etc., until you get to the grub prompt.
10. On Jessie, grub2 will work if installed to the MBR (or probably with EFI, but I haven't tested that). On Wheezy, install will fail, and the only solution is to backport Jessie's grub2. That is actually fairly easy, it compiles cleanly on Wheezy. Somehow, get your backported grub packages into /target (or do it in a second, after the chroot) then:

    chroot /target /bin/bash
    mount /sys
    dpkg -i grub-pc_*.deb grub-pc-bin_*.deb grub-common_*.deb grub2-common_*.deb
    grub-install /dev/vda
    …
    grub-install /dev/vdd   # for each disk
    echo 'dm_raid' >> /etc/initramfs-tools/modules
    update-initramfs -kall -u
    update-grub   # should work, technically not quite tested²
    umount /sys
    exit

    Actually, on my most recent Jessie VM grub-install hung. Switching to F2 and doing while kill $(pidof vgs); do sleep 0.25; done , followed by the same for lvs , got it through grub-install. It appeared to generate a valid config despite that, but just in case I did a chroot /target /bin/bash , made sure /proc and /sys were mounted, and did an update-grub . That time, it completed. I then did a dpkg-reconfigure grub-pc to select installing grub on all the virtual disks' MBRs. On Wheezy, after doing the above, select 'continue without a bootloader'.
11. Finish the install. It'll boot. Probably.

Community Knowledge

There are a fair number of people who know about mdadm , and have a lot of deployment experience with it. Google is likely to answer most questions about it you have. You can generally expect a question about it here to get answers, probably within a day. The same can't be said for LVM RAID. It's hard to find guides. Most Google searches I've run instead find me stuff on using mdadm arrays as PVs. To be honest, this is probably largely because it's newer, and less commonly used. Somewhat, it feels unfair to hold this against it—but if something goes wrong, the much larger existing community around mdadm makes recovering my data more likely.

Conclusion

LVM-RAID is advancing fairly rapidly. On Wheezy, it isn't really usable (at least, without doing backports of LVM and the kernel). Earlier, in 2014, on Debian testing, it felt like an interesting, but unfinished idea.
Current testing, basically what will become Jessie, feels like something that you might actually use, if you frequently need to create small slices with different RAID configurations (something that is an administrative nightmare with mdadm ). If your needs are adequately served by a few large mdadm RAID arrays, sliced into partitions using LVM, I'd suggest continuing to use that. If instead you wind up having to create many arrays (or even arrays of logical volumes), consider switching to LVM-RAID instead. But keep good backups.

A lot of the uses of LVM RAID (and even mdadm RAID) are being taken over by things like cluster storage/object systems, ZFS, and btrfs. I recommend also investigating those, they may better meet your needs.

Thank yous

I'd like to thank psusi for getting me to revisit the state of LVM-RAID and update this post.

Footnotes

1. I suspect you could use device mapper to glue the metadata and data together in such a way that mdadm --assemble will take it. Of course, you could just run mdadm on logical volumes just fine... and that'd be saner.

2. When doing the Wheezy install, I failed to do this first time, and wound up with no grub config. I had to boot the system by entering all the info at the grub prompt. Once booted, that worked, so I think it'll work just fine from the installer. If you wind up at the grub prompt, here are the magic lines to type:

    linux /boot/vmlinuz-3.2.0-4-amd64 root=/dev/mapper/vg0-root
    initrd /boot/initrd.image-3.2.0-4-amd64
    boot

PS: It's been a while since I actually did the original experiments. I have made my original notes available. Note that I have now done more recent ones, covered in this answer, and not in those notes.
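PPS: if you're inspecting RAID LVs on a current lvm2, the reporting fields have improved since I wrote the above; something along these lines should show sync progress and health (field names as in recent lvm2 releases, so availability depends on your version):

    lvs -a -o name,segtype,sync_percent,lv_health_status vg0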
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/150644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
150,670
I want to dual boot my MacBook with Arch Linux, so I tried to install rEFInd using the install script. However, after the install, rEFInd doesn't seem to start at all. Has anyone experienced the same issue, or does anyone have possible solutions for how to fix this?
I'm going to reanswer my own question here because there is now an official solution from rEFInd, and thus I believe this is the right way to go about this. The official guidelines can be found on the rEFInd web site . Following are the steps presented there:

1. Boot to OS X, using whatever means is available to you. Holding Option (or Alt) while powering up will normally give you Apple's own boot manager, which should enable you to boot to OS X. If your rEFInd installation is currently starting but is not showing an OS X option, skip to step #7; but if rEFInd isn't starting, follow steps #2–7.
2. If you've made changes to /EFI/refind/refind.conf , back it up.
3. Remove the /EFI/refind directory tree; it's useless now, and its presence may cause confusion.
4. Re-install rEFInd, as described in the Installing rEFInd page ; but be sure to use the --esp or --ownhfs device-file option. The latter is preferable, but requires either a dedicated partition for rEFInd or an HFS+ data partition that is currently not bootable.
5. Ensure that the partition to which you've installed rEFInd is mounted. The details depend on how you installed it:
   - If you installed rEFInd to your ESP, typing mkdir /Volumes/esp followed by sudo mount -t msdos /dev/disk0s1 /Volumes/esp will probably work, although in some cases your ESP won't be /dev/disk0s1 , so you may need to change this detail.
   - If you used the --ownhfs device-file installation option, the target partition should already be mounted, normally somewhere under /Volumes. If not, locate it and mount it with Disk Utility or mount .
6. If you backed up your refind.conf file, you can now copy it over your new refind.conf file. You should copy the file to either /Volumes/esp/EFI/refind/ (if you used --esp and mounted the ESP at /Volumes/esp ) or to /Volumes/Mountpoint/System/Library/CoreServices/ (if you used a dedicated HFS+ volume; note that Mountpoint will be the name of the volume).
7. Edit your new refind.conf file, which should be located as described in the previous step. In your favorite editor, locate the dont_scan_volumes line, which is commented out with a # symbol at the start of the line by default. Uncomment this line and remove the "Recovery HD" item from the line. Some users report that they need to enter one or two dummy entries, as in dont_scan_volumes foo,bar , to get it to work.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48986/" ] }
150,706
I prefer to launch GUI applications from a terminal window rather than by using a graphical desktop. A frequent annoyance is that often the developers haven't anticipated this type of use, so the app prints lots of useless, cryptic, or uninformative messages to stdout or stderr. Further clutter on the terminal occurs because running the program in the background, with an &, generates reports of the creation and termination of the job. What is a workaround for these problems that will accept command line arguments and handle autocompletion? Related: https://stackoverflow.com/questions/7131670/make-bash-alias-that-takes-parameter
Redirecting the standard error immediately to /dev/null is a bad idea as it will hide early error messages, and failures may be hard to diagnose. I suggest something like the following start-app zsh script:

    #!/usr/bin/env zsh
    coproc "$@" 2>&1
    quit=$(($(date +%s)+5))
    nlines=0
    while [[ $((nlines++)) -lt 10 ]] && read -p -t 5 line
    do
      [[ $(date +%s) -ge $quit ]] && break
      printf "[%s] %s\n" "$(date +%T)" "$line"
    done &

Just run it with:

    start-app your_command argument ...

This script will output at most 10 lines of messages and for at most 5 seconds. Note however that if the application crashes immediately (e.g. due to a segmentation fault), you won't see any error message. Of course, you can modify this script in various ways to do what you want...

Note: To make completions work with start-app in zsh, it suffices to do:

    compdef _precommand start-app

and in bash:

    complete -F _command start-app

(copied from the one for exec and time in /usr/share/bash-completion/bash_completion ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
150,718
I've been using Linux for a while now and whenever I typed sudo I thought I was switching over to the root user for a command. Apparently this is not true because all I need is my user account's password. I'm guessing since I haven't worked with multiple users I haven't really noticed this in the real world. I am unsure how Ubuntu sets up my first account. Is there a root user? Am I root? I'm guessing I just created a new user upon installation but it gave me root privileges? Just a little confused here... So why am I allowed to run root commands with my user's password?
In detail, it works the following way:

The /usr/bin/sudo executable file has the setuid bit set, so even when executed by another user, it runs with the file owner's user id (root in that case).

sudo checks in the /etc/sudoers file what privileges you have and whether you are permitted to run the command you are invoking. Put simply, /etc/sudoers is a file which defines which users can run which commands using the sudo mechanism.

That's how that file looks on my Ubuntu:

    # User privilege specification
    root    ALL=(ALL:ALL) ALL

    # Members of the admin group may gain root privileges
    %admin  ALL=(ALL) ALL

    # Allow members of group sudo to execute any command
    %sudo   ALL=(ALL:ALL) ALL

The third line is what presumably interests you. It lets anybody in the "sudo" group execute any command as any user.

When Ubuntu sets up the first account during installation it adds that account to the "sudo" group. You can check which groups which users belong to with the groups command.

sudo asks you for a password. Regarding the fact that it needs the user's password, not root's, here is an excerpt from the sudoers manual:

    Authentication and logging

    The sudoers security policy requires that most users authenticate themselves before they can use sudo. A password is not required if the invoking user is root, if the target user is the same as the invoking user, or if the policy has disabled authentication for the user or command. Unlike su(1), when sudoers requires authentication, it validates the invoking user's credentials, not the target user's (or root's) credentials. This can be changed via the rootpw, targetpw and runaspw flags, described later.

However, in fact, sudo does not need your user password for anything. It asks for it just to ensure that you are really you and to provide you some kind of warning (or chance to stop) before invoking some potentially dangerous command. If you want to turn off password asking, change the sudoers entry to:

    %sudo ALL=(ALL:ALL) NOPASSWD: ALL

After authentication sudo spawns a child process which runs the invoked command. The child inherits the root user id from its parent -- the sudo process.

So, answering your questions precisely:

I thought I was switching over to the root user for a command.

You were right. Each command preceded with sudo runs with the root user id.

Is there a root user?

Yes, there is a root user account, separate from your user account created during system installation. However, by default in Ubuntu you are not allowed to log in to an interactive terminal as the root user.

Am I root?

No, you are not root. You only have the privilege to run individual commands as root, using the sudo mechanism described above.

So why am I allowed to run root commands with my user's password?

You have to enter the user's password only due to sudo's internal security mechanism. It can be easily turned off. You gain your root powers because of the setuid bit of /usr/bin/sudo , not because of any passwords you enter.
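You can see the mechanism for yourself: the s in the owner-execute position below is the setuid bit that makes sudo run as root regardless of who invokes it (size and date will of course differ on your system):

    $ ls -l /usr/bin/sudo
    -rwsr-xr-x 1 root root 136808 Feb  1  2014 /usr/bin/sudo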
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/150718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22494/" ] }
150,734
A PCB or process control block , is defined like this on Wikipedia

    Process Control Block (PCB, also called Task Controlling Block,[1] Task Struct, or Switchframe) is a data structure in the operating system kernel containing the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system" and its duty is:

    - Process identification data
    - Processor state data
    - Process control data

So where can the PCB of a process be found?
In the Linux kernel, each process is represented by a task_struct in a doubly-linked list, the head of which is init_task (pid 0, not pid 1). This is commonly known as the process table . In user mode, the process table is visible to normal users under /proc . Taking the headings for your question: Process identification data is the process ID (which is in the path /proc/<process-id>/... ), the command line ( cmd ), and possibly other attributes depending on your definition of 'identification'. Process state data includes scheduling data ( sched , stat and schedstat ), what the process is currently waiting on ( wchan ), its environment ( environ ) etc. Process control data could be said to be its credentials ( uid_map ) and resource limits ( limits ). So it all depends how you define your terms... but in general, all data about a process can be found in /proc .
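To tie this to the three headings concretely, here are a few /proc files you can inspect for any process (shown with /proc/self , which always points at the process doing the reading):

    # identification data: the PID and the command line
    echo $$; tr '\0' ' ' < /proc/self/cmdline; echo
    # state data: scheduler state and what the process is waiting on
    cat /proc/self/stat; cat /proc/self/wchan; echo
    # control data: the resource limits in force
    cat /proc/self/limits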
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49764/" ] }
150,764
I have a file like this

    ...
    1562 first part
    1563 H col3 H col4
    1564 H col3 H col4
    ...
    3241 H col3 H col4
    3242 third part
    ...

I want to replace only the first H in every line with H# , where # is its number of appearance. The output should be:

    ...
    1562 first part
    1563 H1 col3 H col4
    1564 H2 col3 H col4
    ...
    3241 H1652 col3 H col4
    3242 third part
    ...

So far, I've tried:

    max=`grep -c ' H ' b`
    while [[ "$i" -le $max ]];
    do
       grep -m $i ' H ' b|tail -n1|sed "s/H/H$i/1"
       let i=i+1
    done

This code is slow, it needs to read every line to replace and can't add the first part and third part of the file. Is there any better way to do this? Maybe awk? Thank you.
You can for example use this:

    $ awk '/H/{sub("H", "H"++v)}1' file
    1562 first part
    1563 H1 col3 H col4
    1564 H2 col3 H col4
    3241 H3 col3 H col4
    3242 third part
    ...

This looks for those lines containing H and replaces that H with H together with a variable we keep incrementing. Note you could use gsub() instead of sub() if you wanted to perform this change in all the matched patterns instead of in just one. The final 1 is a true condition, hence it performs the default awk action: {print $0} , that is, it prints the full line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60602/" ] }
150,786
I'm trying to run a simple script: clone a git repository into a certain directory, then cd to this directory in order to execute an installation script. This script is in a Makefile. But the cd seems not to be working. It doesn't find my installation script. I added a pwd after the cd in the script, and it shows me the directory from where I'm executing the script, not the directory where I cd into. What's the problem?

    git clone http://somerepo ~/some_dir
    cd ~/some_dir/
    pwd
    python myscript.py install

    => pwd : /hereIsPathToDirectoryFromWhichIRunTheScript
    python: can't open file 'setup.py': [Errno 2] No such file or directory

It also doesn't work with ./setup.py . If I enter the absolute path ~/some_dir/setup.py the script fails later because it's trying to access resources in the same folder.
You're using a makefile. Makefiles aren't scripts, each line is executed in a new shell. Meaning when you change the environment in a line (such as cd ), that change is not propagated to the next line.

The solution is that when you want to preserve the environment between commands, you run all the commands in the same line. All the commands will then be executed in the same shell, and the environment is preserved. For example:

    target:
            git clone http://somerepo ~/some_dir
            cd ~/some_dir/ && python myscript.py install
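If you are on GNU make 3.82 or later, an alternative is the .ONESHELL special target, which runs each whole recipe in a single shell invocation so the cd persists between lines (recipe lines must still start with a tab):

    .ONESHELL:
    target:
            git clone http://somerepo ~/some_dir
            cd ~/some_dir/
            python myscript.py install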
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/150786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81167/" ] }
150,789
I have a Linux system in which we force /dev/devname entries for running the system:

    proc        /proc             proc         defaults                     0 0
    /dev/sda1   /                 ext3         barrier=1,errors=remount-ro  0 1
    /dev/sda5   /opt              ext3         barrier=1,defaults           0 2
    /dev/sda2   /opt/vortex/dvss  ext3         barrier=1,defaults           0 3
    /dev/sda6   none              swap         sw                           0 0
    /dev/scd0   /media/cdrom0     udf,iso9660  user,noauto                  0 0

We have had this system running without issues to date. But often, on some installed machines, we see that the system is not able to boot properly and suddenly drops into "grub rescue". When I mount the device as a secondary disk and run e2fsck, I see that the system can be restored. Now, we are trying to address this failure (fixing the system boot failure due to the GRUB error).

I noticed that in some forums they recommend setting up UUID-based booting in fstab. What are the advantages of setting it up through UUIDs? Is there a possibility that it would reduce my GRUB error?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/150789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52764/" ] }
150,800
I know how to get the length of the longest line in a text file with awk

    awk ' { if ( length > L ) { L=length} }END{ print L}' file.txt

but how can I get the length of the longest line of all files in a directory?
The most straightforward solution is to concatenate all the files and pipe the result to your script:

    cat ./* | awk '{ if ( length > L ) { L=length} }END{ print L}'

You can also pass several files directly to awk:

    awk '{ if ( length > L ) { L=length} }END{ print L}' ./*

Of course, there can be some warnings if files are in fact directories but it should be harmless. You may have bigger problems with binary files because they don't have a concept of line . So, in order to be more specific, you can do something like

    awk '{ if ( length > L ) { L=length} }END{ print L}' ./*.txt

to match only the .txt files in the current directory. And, as @G-Man stated in his comment, * won't match hidden files (starting with a dot). If you want those, use * .* .
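If you also need to descend into subdirectories, the same awk script works on a find -generated stream, which additionally sidesteps shell globbing limits:

    find . -type f -name '*.txt' -exec cat {} + |
        awk '{ if ( length > L ) { L=length} }END{ print L}'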
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81180/" ] }
150,816
I have a bash script that I use to adjust my monitor brightness that uses xrandr --verbose to get the current brightness. It works fine, but using xrandr is kind of slow on my machine, as you can see here:

    [PROMPT REDACTED]$ time xrandr --verbose
    # xrandr output omitted for brevity
    real    0m0.976s
    user    0m0.003s
    sys     0m0.002s

This outputs lots of information that I don't need, in addition to taking almost a full second. The only line out of the output that I actually need is Brightness: X . I am currently using this line to get the value from it:

    BRIGHTNESS=`xrandr --verbose | grep -i brightness | cut -f2 -d ' ' | head -n1`

Side note: head is called at the end because I have 2 monitors, so I end up with 2 values, but only need 1, since I am keeping them both at the same brightness.

Since I only need that one line from xrandr --verbose , I was wondering if there is a way I could "lazily" evaluate it, by doing something like:

- Stopping xrandr outputting once it reaches that line
- Ignoring the rest of the output from xrandr once I have read that line
- Something else?

I realize bash may not be the language best suited for this, so I am open to solutions in other languages as well.
Let's try to stop and ignore upon the first find of brightness . From the grep man page:

    -m NUM, --max-count=NUM
        Stop reading a file after NUM matching lines.

This is my final version. Note that we don't even need the head :

    BRIGHTNESS=`xrandr --verbose | grep -m 1 -i brightness | cut -f2 -d ' '`
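For what it's worth, a single awk process can do the matching, the field extraction and the early exit in one go, and should behave like the grep/cut/head pipeline above:

    BRIGHTNESS=$(xrandr --verbose | awk '/Brightness/ {print $2; exit}')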
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54458/" ] }
150,839
I am trying to remove all files that have a hexadecimal digit in the first two digits, so I am using the following expression:

    ls | grep -Z '^[0-9a-f][0-9a-f]' | xargs -0 rm

However, the terminal outputs the error:

    xargs: argument line too long

To double-check, I ran:

    ls | grep -Z '^[0-9a-f][0-9a-f]'

which outputs all of the files that I want to delete. Why am I getting this error? Additionally, how can I delete these files?

Also, my file names are similar to the following:

    ffc1abfa3149067e990620dbecfa96d325fbbd
    ffcc72282168e33110ecf436e2726a5f901ca6
    ffd010299a02ded0a8d41ee1ccc242f2193df2
    ffd27295acbe3d35088a5a754f5593eac6a0ae
    ffd332a39f7be05d58863fe3bf55d7aba68b69
    ffd7ba85b0577b90c0fb1b3922303c486127d4
    ffdb37718feaf64c404a6c2a3648f15cdf27b1
    ffdbffe5b187c8a73d15da9e5f6cc0fb8d4df3
    ffdd8c340650848759c7e59f90f8c112ac33ce
    ffde57cb4ba9b69531a3b3f2c6588d2802f71b
    ffdeb529353a85b642efa1404aa27e58982da1
    ffe0bec99e3e64c61dd45e404c8ccf12d7bea5
    ffe58837e9d976499781de17628f2f41e16c9a
    ffee6887889924583762e43d5a6b9cd29b6690
    fff0b6886aff6cb4073742fbf7bcc1b47d9b45

Perhaps the file names themselves are too long for xargs ?
-Z is to output a NUL after each file name with grep -l , not to change the newlines to NULs in the lines it outputs.

So xargs -0 sees only one huge record (with several newline characters in it) as there's no NUL delimiter, so that's only one argument to pass to rm , and it probably is bigger than the maximum size of an argument (128kB on Linux), and anyway there's no such file called ...ffd7ba85b0577b90c0fb1b3922303c486127d4<newline>...fff0b6886aff6cb4073742fbf7bcc1b47d9b45 .

Simply do:

    rm [0-9a-f][0-9a-f]*

Or if the list is too big:

    printf '%s\0' [0-9a-f][0-9a-f]* | xargs -r0 rm

Or with zsh :

    autoload zargs # best in ~/.zshrc
    setopt extendedglob # ditto
    zargs [0-9a-f](#c2)* -- rm

Or with ksh93 :

    command -x rm {2}([0-9a-f])*

Or:

    find . ! -name . -prune -name '[0-9a-f][0-9a-f]*' -exec rm {} +

Beware that in non-C locales [a-f] may match more than [abcdef] .
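With GNU find you can also let find delete the matches itself, which sidesteps argument-length limits entirely (restricted here to regular files in the current directory, like the globs above):

    find . -maxdepth 1 -type f -name '[0-9a-f][0-9a-f]*' -delete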
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59802/" ] }
150,917
I used the following command to redirect 80 to 3000 . All the requests that come, from any domain, are redirected to 3000 :

    sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000

Having two processes, one that listens on 8001 and another one that listens on 8002 , how can I link two domains to the two ports?

Is it possible to have a JSON configuration like below?

    [
        {
            "port": 8001,
            "domains": ["example.com", "example2.com"]
        },
        {
            "port": 8002,
            "domains": ["domain.com", "domain2.com"]
        }
    ]

So, domain.com , domain2.com will send their requests to the process that listens on 8002 and the other two domains on 8001 ?
Usually you would have to set up your web server with virtual hosts and maybe mod_proxy (for Apache). However, I would suggest that you use a reverse proxy such as haproxy to take care of that. Set up haproxy so that it listens on port 80 and directs your traffic to your webservers using ACLs on the domain name. Set up your webserver with virtual hosts that listen on 127.0.0.1:8001 and 127.0.0.1:8002 (if haproxy runs on the same server). Pretty simple setup. Look at this example . It's for putting haproxy in front of Docker containers, but you can adapt the configuration to suit your needs.
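For illustration, a minimal haproxy frontend for the layout in the question might look like this (domain names and ports taken from the question; the global and defaults sections are omitted, and directive details can vary between haproxy versions):

    frontend http-in
        bind *:80
        acl is_domain hdr(host) -i domain.com domain2.com
        use_backend app_8002 if is_domain
        default_backend app_8001

    backend app_8001
        server app1 127.0.0.1:8001

    backend app_8002
        server app2 127.0.0.1:8002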
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45370/" ] }
150,922
Note that this is not a duplicate. I am asking about disabling the cache, not clearing it. If you have a cache to clear, then it is obviously not disabled. On the rare occasions that I notice bash's cache of things that it has found in the path, it's not because it's helpful, it's because it's bloody annoying. An example: ~ dc$ export PATH=$HOME/bin:$PATH~ dc$ cat bin/which#!/bin/bashecho "my which"~ dc$ whichmy which~ dc$ rm bin/which~ dc$ which which-bash: /Users/dc/bin/which: No such file or directory In another shell ... ~ dc$ which which/usr/bin/which I'm sure that this caching made sense back in the good old days when disks were slow and memory was expensive and limited and so you couldn't cache much - caching a path is cheaper than caching all the disk blocks necessary to find a command. But these days it provides no noticeable benefit and causes more problems than it solves. It's a misfeature, verging on being a bug. And I can't even find a way of disabling it. Any pointers?
You can just clear the hashed executables before the prompt gets drawn:

    PROMPT_COMMAND='hash -r'

From help hash :

    hash: hash [-lr] [-p pathname] [-dt] [name ...]
        Remember or display program locations.

        Determine and remember the full pathname of each command NAME.  If
        no arguments are given, information about remembered commands is displayed.

        Options:
          -d                forget the remembered location of each NAME
          -l                display in a format that may be reused as input
          -p pathname       use PATHNAME as the full pathname of NAME
          -r                forget all remembered locations
          -t                print the remembered location of each NAME, preceding
                            each location with the corresponding NAME if multiple
                            NAMEs are given

        Arguments:
          NAME              Each NAME is searched for in $PATH and added to the list
                            of remembered commands.

        Exit Status:
        Returns success unless NAME is not found or an invalid option is given.
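If flushing the whole table on every prompt feels too aggressive, you can also forget a single stale entry on demand, per the -d option above:

    hash -d which    # forget only the remembered location of which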
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38386/" ] }
150,925
I'm watching different logs by tail -q -f /var/log/syslog -f /var/log/fail2ban.log -f /var/log/nginx/error.log How can I have the output of each log colored differently?
Using GNU grep for the colouring:

    color() { GREP_COLOR=$1 grep --color '.*'; }

    (tail -qf /var/log/syslog | color 31 &
     tail -qf /var/log/fail2ban.log | color 32 &
     tail -qf /var/log/nginx/error.log | color 33)

Note that the first 2 are started in background. That means they won't be killed if you press Ctrl-C (the shell explicitly ignores SIGINT for asynchronous jobs). To prevent that, you can do instead:

    color() { GREP_COLOR=$1 grep --line-buffered --color=always '.*'; }

    (tail -qf /var/log/syslog | color 31 &
     tail -qf /var/log/fail2ban.log | color 32 &
     tail -qf /var/log/nginx/error.log | color 33) | cat

That way, upon Ctrl-C , the last tail+grep and cat die (of the SIGINT) and the other two grep+tails will die of a SIGPIPE the next time they write something.

Or restore the SIGINT handler (won't work with all shells):

    color() { GREP_COLOR=$1 grep --color '.*'; }

    ((trap - INT; tail -qf /var/log/syslog | color 31) &
     (trap - INT; tail -qf /var/log/fail2ban.log | color 32) &
     tail -qf /var/log/nginx/error.log | color 33)

You can also do it in the color function. That won't apply to tail , but tail will die of a SIGPIPE the next time it writes if grep dies.

    color() (trap - INT; GREP_COLOR=$1 exec grep --color '.*')

    (tail -qf /var/log/syslog | color 31 &
     tail -qf /var/log/fail2ban.log | color 32 &
     tail -qf /var/log/nginx/error.log | color 33)

Or make the whole tail+grep a function:

    tailc() (trap - INT; export GREP_COLOR="$1"; shift; tail -qf -- "$@" | grep --color '.*')

    tailc 31 /var/log/syslog &
    tailc 32 /var/log/syslog &
    tailc 33 /var/log/nginx/error.log

Or the whole thing:

    tailc() (
      while [ "$#" -ge 2 ]; do
        (trap - INT; tail -f -- "$2" | GREP_COLOR=$1 grep --color '.*') &
        shift 2
      done
      wait
    )
    tailc 31 /var/log/syslog 32 /var/log/syslog 33 /var/log/nginx/error.log
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40549/" ] }
150,944
According to the answers in Stack Overflow question How to use sed/grep to extract text between two words? , we can get text between two words:

    sed -n "/first/,/second/p" file

But what if I want to get text between a word and the last line of the file, as in the following?

    sed -n "/word/,/lastline/p" file
Including the last line you'd do:

    sed -n '/word/,$p'

That matches the first occurrence of word all the way until the last line and prints all matches.

Not including the last line:

    sed '/word/,$!d;$d'

...which deletes negated matches and then deletes the last line.

And to get from only the last match to the last line you have to try a little harder:

    sed -e :n -e '/\n.*word/D;N;$q;bn'

It loops - it never completes the normal sed line cycle but instead appends the next input line to the pattern space buffer and b ranches back to do so again. But when it has at least two lines in pattern space and the last matches word it deletes everything in the buffer but the line that matches word . On the last line it just quits and breaks the loop. So what gets printed is everything from the last occurring line containing word to the last line.

Hmm... maybe I made that harder than it has to be:

    sed 'H;$x;/word/h;$!d'

With that one every line is appended to hold space. But lines matching word then overwrite hold space. Every line in pattern space that is not the last line is deleted. And on the last line, just after it is appended to hold space, the hold and pattern spaces are exchanged (in case the last line also contains word) and everything from the last time word overwrote hold space is printed.
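For completeness, the first variant (from the first match through the last line) is often written in awk too; 0 as the end of the range never becomes true, so it prints to end of file:

    awk '/word/,0' file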
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78050/" ] }
150,952
I have a bunch of symlinks in /home to files and sub-directories in /foo . I want to target the new directory, /bar . My approach was to look at all invalid symlinks and verify that they were pointing to /foo . I then did the following:

    sudo find . -type l ! -exec test -e {} \; -exec sh -c '\
        old_link_target=$(readlink "$0"); \
        new_link_target=${old_link_target//foo/bar}; \
        ln -snf $new_link_target $0' {} \;

However, I want a more precise approach that would not include the initial step of putting eyes on the invalid symlinks. So, for the sake of this question, assume /foo still exists so another approach is required.
GNUly:

    find . -lname '/foo*' -printf '%p\0%l\0' | awk -vRS='\0' '
      {
        getline target
        sub("^/foo", "/bar", target)
        printf("%s\0%s\0", target, $0)
      }' | xargs -r0n2 ln -sfT

Or with recent GNU sed :

    find . -lname '/foo*' -printf '%l\0%p\0' |
      sed -z 's|^/foo|/bar|;n' |
      xargs -r0n2 ln -sfT

Beware that you will potentially be affecting the ownership of the symlinks (so for instance, their original author won't be able to remove them any longer if they're in a directory they don't own but have write access to and that has the t bit set (like /tmp )). To prevent that, you could use GNU tar instead:

    find . -lname '/foo*' -print0 |
      tar --null -T - -cf - --transform='s@^/foo@/bar@' |
      tar xpf -
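If you'd rather not rely on the xargs pairing trick, a plain shell loop over the matching links does the same rewrite (still GNU find for -lname and GNU ln for -T ; it has the same ownership caveat as above since the links are recreated):

    find . -lname '/foo*' -exec sh -c '
      for link do
        target=$(readlink "$link")
        ln -sfT "/bar${target#/foo}" "$link"
      done' sh {} +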
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2372/" ] }
150,957
How can I generate 10 MB files from /dev/urandom filled with:

- ASCII 1s and 0s
- ASCII numbers between 0 and 9
ASCII numbers between 0 and 9:

    < /dev/urandom tr -dc '[:digit:]' | head -c 10000000 > 10mb.txt

ASCII 1s and 0s:

    < /dev/urandom tr -dc 01 | head -c 10000000 > 10mb.txt
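Both commands produce a single unbroken 10 MB line; if you want line-structured output instead, insert newlines with fold before truncating (the newline characters then count toward the 10 MB):

    < /dev/urandom tr -dc 01 | fold -w 72 | head -c 10000000 > 10mb.txt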
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81287/" ] }
150,960
Here's something that kept me wondering for a while:

    [15:40:50][/tmp]$ mkdir a
    [15:40:52][/tmp]$ strace rmdir a
    execve("/usr/bin/rmdir", ["rmdir", "a"], [/* 78 vars */]) = 0
    brk(0) = 0x11bb000
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff3772c3000
    access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
    open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=245801, ...}) = 0
    mmap(NULL, 245801, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff377286000
    close(3) = 0
    open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
    read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\36\3428<\0\0\0"..., 832) = 832
    fstat(3, {st_mode=S_IFREG|0755, st_size=2100672, ...}) = 0
    mmap(0x3c38e00000, 3924576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3c38e00000
    mprotect(0x3c38fb4000, 2097152, PROT_NONE) = 0
    mmap(0x3c391b4000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x3c391b4000
    mmap(0x3c391ba000, 16992, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3c391ba000
    close(3) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff377285000
    mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff377283000
    arch_prctl(ARCH_SET_FS, 0x7ff377283740) = 0
    mprotect(0x609000, 4096, PROT_READ) = 0
    mprotect(0x3c391b4000, 16384, PROT_READ) = 0
    mprotect(0x3c38c1f000, 4096, PROT_READ) = 0
    munmap(0x7ff377286000, 245801) = 0
    brk(0) = 0x11bb000
    brk(0x11dc000) = 0x11dc000
    brk(0) = 0x11dc000
    open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=106070960, ...}) = 0
    mmap(NULL, 106070960, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff370d5a000
    close(3) = 0
    rmdir("a") = 0
    close(1) = 0
    close(2) = 0
    exit_group(0) = ?
    +++ exited with 0 +++
    [15:40:55][/tmp]$ touch a
    [15:41:16][/tmp]$ strace rm a
    execve("/usr/bin/rm", ["rm", "a"], [/* 78 vars */]) = 0
    brk(0) = 0xfa8000
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3b2388a000
    access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
    open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=245801, ...}) = 0
    mmap(NULL, 245801, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f3b2384d000
    close(3) = 0
    open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
    read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\36\3428<\0\0\0"..., 832) = 832
    fstat(3, {st_mode=S_IFREG|0755, st_size=2100672, ...}) = 0
    mmap(0x3c38e00000, 3924576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3c38e00000
    mprotect(0x3c38fb4000, 2097152, PROT_NONE) = 0
    mmap(0x3c391b4000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b4000) = 0x3c391b4000
    mmap(0x3c391ba000, 16992, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3c391ba000
    close(3) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3b2384c000
    mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3b2384a000
    arch_prctl(ARCH_SET_FS, 0x7f3b2384a740) = 0
    mprotect(0x60d000, 4096, PROT_READ) = 0
    mprotect(0x3c391b4000, 16384, PROT_READ) = 0
    mprotect(0x3c38c1f000, 4096, PROT_READ) = 0
    munmap(0x7f3b2384d000, 245801) = 0
    brk(0) = 0xfa8000
    brk(0xfc9000) = 0xfc9000
    brk(0) = 0xfc9000
    open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=106070960, ...}) = 0
    mmap(NULL, 106070960, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f3b1d321000
    close(3) = 0
    ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
    newfstatat(AT_FDCWD, "a", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
    geteuid() = 1000
    newfstatat(AT_FDCWD, "a", {st_mode=S_IFREG|0664, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
    faccessat(AT_FDCWD, "a", W_OK) = 0
    unlinkat(AT_FDCWD, "a", 0) = 0
    lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
    close(0) = 0
    close(1) = 0
    close(2) = 0
    exit_group(0) = ?
    +++ exited with 0 +++

Why are there separate system calls for removing a directory and files? Why would these two operations be semantically distinct?
Directories are special in the sense that within a directory you can have references to several files and directories, so, if you remove the parent directory, all those files lose the reference point from which they can be accessed; the same goes for processes. For such cases, rmdir() has different checks, which differ from unlink() :

If the directory is not empty. If a directory is not empty it can't be removed until its contents are unlink 'd/removed.

    ENOTEMPTY pathname contains entries other than . and .. ; or, pathname has .. as its final component. POSIX.1-2001 also allows EEXIST for this condition.

If the directory is in use. If a process loses its current directory, it could lead to problems and undefined behaviors. It's better to prevent them.

    EBUSY pathname is currently in use by the system or some process that prevents its removal. On Linux this means pathname is currently used as a mount point or is the root directory of the calling process.

In the case of unlink() these checks don't exist. In fact, you can delete the name of a file with unlink() and a process that is still using/making reference to it can modify it without problems. The file exists for as long as a file descriptor references it; it is just inaccessible to new processes (unless you know where to search). This is part of the rainbow-colored-hands magic of the *NIX file systems.

Now, there's unlinkat() , which behaves as either unlink() or rmdir(2) depending on the flags passed (with AT_REMOVEDIR it removes a directory), which is what you expect.
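You can watch the unlink() side of this from a shell: keep a file descriptor open, delete the name, and the data is still reachable through the descriptor:

    echo hello > f
    exec 3< f      # open f on fd 3
    rm f           # the name is gone from the directory...
    cat <&3        # ...but the open descriptor still reads "hello"
    exec 3<&-      # closing the last reference finally frees the inode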
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26489/" ] }
150,975
For SysV init , I need /etc/inittab respawning getty entries, the /sbin/init binary, the binaries and shared libraries for the shell, login , the getty , the PAM/security/shadow stuff, and a few device files.

For upstart I need pretty much the same requirements, but instead of /etc/inittab , I have a few *.conf files under /etc/init : one *.conf that starts on startup and sets a runlevel with telinit , and one *.conf for each tty that starts/respawns getty on that tty on the appropriate runlevels.

What configuration and binaries do I need for systemd init ? The documentation I find all seems to be focused on how to use an already-installed system to start and stop services. A minimal list of files to copy (except the kernel/initrd) from a running Arch or fedora installation would do fine, but I cannot seem to find that kind of information about systemd .

What I would like to know is, for systemd , what files are required, and what must they contain, to start a login shell after an initramfs does its switch_root call to the systemd /sbin/init .

Example for upstart , the binaries and two *.conf files:

File /etc/init/whatever.conf :

    start on startup
    emits runlevel
    task
    script
        telinit 2
    end script

File /etc/init/tty1.conf :

    start on runlevel [12345]
    respawn
    exec /sbin/agetty -8 --noclear 38400 tty1 linux

Example for sysvinit , the binaries and 1 conf file named /etc/inittab :

    id:2:initdefault:
    c1:12345:respawn:/sbin/agetty 38400 tty1 linux

Now I'm after the systemd equivalent. I assume at least 1 *.service file is needed somewhere, with a [Service] entry containing ExecStart=-/sbin/agetty --noclear %I linux and Restart=always , but what else is needed?
First of all, systemd is not a traditional unix init . Systemd is so much more, so it's a bit unfair to compare the two.

To answer the question, what appears to be necessary are some binaries and the following configuration files:

    /usr/lib/systemd/system/default.target
    /usr/lib/systemd/system/basic.target
    /usr/lib/systemd/system/sysinit.target
    /usr/lib/systemd/system/getty.target
    /usr/lib/systemd/system/getty@.service
    /usr/lib/systemd/system/console-getty.service

issuing

    systemctl enable console-getty.service getty@tty1.service

then creates these symlinks:

    /etc/systemd/system/default.target.wants/getty@tty1.service -> /lib/systemd/system/getty@.service
    /etc/systemd/system/getty.target.wants/console-getty.service -> /lib/systemd/system/console-getty.service

NOTE : To utilize systemd 's special features for starting agetty dynamically, on-demand when pressing Alt + F3 and so on, it appears that you must also have at least these two files:

    /etc/systemd/logind.conf
    /lib/systemd/system/autovt@.service

where autovt@.service is a symlink to getty@.service .

Contents of configuration files:

The default.target , getty.target , sysinit.target files can be empty except for the [Unit] tag and (probably) Description=xxx .

basic.target also contains dependency information:

    [Unit]
    Description=Basic System
    Requires=sysinit.target
    Wants=sockets.target timers.target paths.target slices.target
    After=sysinit.target sockets.target timers.target paths.target slices.target

I'm not sure if the references to targets that don't exist as files are needed or not. They are described on the systemd.special(7) man page.

console-getty.service : (Special case for agetty on the console)

    [Unit]
    Description=Console Getty
    After=systemd-user-sessions.service plymouth-quit-wait.service
    Before=getty.target

    [Service]
    ExecStart=-/sbin/agetty --noclear --keep-baud console 115200,38400,9600 $TERM
    Type=idle
    Restart=always
    RestartSec=0
    UtmpIdentifier=cons
    TTYPath=/dev/console
    TTYReset=yes
    TTYVHangup=yes
    KillMode=process
    IgnoreSIGPIPE=no
    SendSIGHUP=yes

    [Install]
    WantedBy=getty.target

getty@.service : (generic config for all getty services except console)

    [Unit]
    Description=Getty on %I
    After=systemd-user-sessions.service plymouth-quit-wait.service
    Before=getty.target
    IgnoreOnIsolate=yes
    ConditionPathExists=/dev/tty0

    [Service]
    ExecStart=-/sbin/agetty --noclear %I $TERM
    Type=idle
    Restart=always
    RestartSec=0
    UtmpIdentifier=%I
    TTYPath=/dev/%I
    TTYReset=yes
    TTYVHangup=yes
    TTYVTDisallocate=no
    KillMode=process
    IgnoreSIGPIPE=no
    SendSIGHUP=yes

    [Install]
    WantedBy=getty.target
    DefaultInstance=tty1

Finally you probably need a few of these special binaries (I haven't tried which ones are crucial):

    /lib/systemd/systemd    (/sbin/init usually points to this)
    /lib/systemd/systemd-logind
    /lib/systemd/systemd-cgroups-agent
    /lib/systemd/systemd-user-sessions
    /lib/systemd/systemd-vconsole-setup
    /lib/systemd/systemd-update-utmp
    /lib/systemd/systemd-sleep
    /lib/systemd/systemd-sysctl
    /lib/systemd/systemd-initctl
    /lib/systemd/systemd-reply-password
    /lib/systemd/systemd-ac-power
    /lib/systemd/systemd-activate
    /lib/systemd/systemd-backlight
    /lib/systemd/systemd-binfmt
    /lib/systemd/systemd-bootchart
    /lib/systemd/systemd-bus-proxyd
    /lib/systemd/systemd-coredump
    /lib/systemd/systemd-cryptsetup
    /lib/systemd/systemd-fsck
    /lib/systemd/systemd-hostnamed
    /lib/systemd/systemd-journald
    /lib/systemd/systemd-journal-gatewayd
    /lib/systemd/systemd-journal-remote
    /lib/systemd/systemd-localed
    /lib/systemd/systemd-machined
    /lib/systemd/systemd-modules-load
    /lib/systemd/systemd-multi-seat-x
    /lib/systemd/systemd-networkd
    /lib/systemd/systemd-networkd-wait-online
    /lib/systemd/systemd-quotacheck
    /lib/systemd/systemd-random-seed
    /lib/systemd/systemd-readahead
    /lib/systemd/systemd-remount-fs
    /lib/systemd/systemd-resolved
    /lib/systemd/systemd-rfkill
    /lib/systemd/systemd-shutdown
    /lib/systemd/systemd-shutdownd
    /lib/systemd/systemd-socket-proxyd
    /lib/systemd/systemd-timedated
    /lib/systemd/systemd-timesyncd
    /lib/systemd/systemd-udevd
    /lib/systemd/systemd-update-done

To summarize the systemd start process, I think it works something like this:

1. systemd locates basic.target (or all *.target files?)
2. dependencies are resolved based on WantedBy= , Wants= , Before= , After= ... directives in the [Install] section of the *.service and *.target configuration files.
3. *.service s that should start (that are not "special" services), have a [Service] section with an ExecStart= directive, that points out the executable to start.
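Once such a tree is in place, you can ask systemd itself whether the wiring came out as expected, for example:

    systemctl list-dependencies getty.target    # should list console-getty and the getty@ instance
    systemctl status getty@tty1.service         # verify the enabled template instance is active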
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/150975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5923/" ] }
150,988
Filling a drive with /dev/urandom seems to be very slow, so I created a file filled with FF :

    dd if=/dev/zero ibs=1k count=1000 | tr "\000" "\377" >ff.bin

I'd like to fill the drive with copies of this file but the following command only writes once:

    dd if=ff.bin of=/dev/sdb count=10000

How do I fill the drive with copies of the file, or is there a faster way to fill the drive with 1 's?
Simply do:

    tr '\0' '\377' < /dev/zero > /dev/sdb

It will abort with an error when the drive is full.

Using dd does not make sense here. You use dd to make sure reads and writes are made of a specific size. There's no reason to do it here. tr will do reads/writes of 4 or 8 kiB which should be good enough.
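To double-check the result, read a few bytes back and confirm they are all 0xff ( od is in coreutils); every byte shown should be ff :

    head -c 16 /dev/sdb | od -An -tx1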
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/150988", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28458/" ] }
151,008
I need to recognize the type of data contained in random files. I am new to Linux.

I am planning to use the file command to understand what type of data a file has. I tried that command and got the output below. Someone suggested to me that the file command looks at the initial bytes of a file to determine data type. The file command doesn't look at a file extension at all. Is that correct? I looked at the man page but felt that it was too technical. I would appreciate it if anyone could provide a link which has a much simpler explanation regarding how the file command works.

What are the different possible answers that I could get after running the file command? For example, in the transcript below I get JPEG, ISO media, ASCII, etc:

The screen output is as follows

    m7% file date-file.csv
    date-file.csv: ASCII text, with CRLF line terminators
    m7% file image-file.JPG
    image-file.JPG: JPEG image data, EXIF standard
    m7% file music-file.m4a
    music-file.m4a: ISO Media, MPEG v4 system, iTunes AAC-LC
    m7% file numbers-file.txt
    numbers-file.txt: ASCII text
    m7% file pdf-file.pdf
    pdf-file.pdf: PDF document, version 1.4
    m7% file text-file.txt
    text-file.txt: ASCII text
    m7% file video-file.MOV
    video-file.MOV: data

Update 1

Thanks for the answers; they clarified a couple of things for me. So if I understand correctly, the folder /usr/share/mime/magic has a database that will tell me the currently possible file formats (outputs that I can get when I type the file command followed by a file name). Is that correct?

Is it true that whenever the file command output contains the word "text" it refers to something that you can read with a text viewer, and anything without "text" is some kind of binary?
file uses several kinds of test : 1: If file does not exist, cannot be read, or its file status could not be determined, the output shall indicate that the file was processed, but that its type could not be determined. This will be output like cannot open file: No such file or directory . 2: If the file is not a regular file, its file type shall be identified. The file types directory, FIFO, socket, block special, and character special shall be identified as such. Other implementation-defined file types may also be identified. If file is a symbolic link, by default the link shall be resolved and file shall test the type of file referenced by the symbolic link. (See the -h and -i options below.) This will be output like .: directory and /dev/sda: block special . Much of the format for this and the previous point is partially defined by POSIX - you can rely on certain strings being in the output. 3: If the length of file is zero, it shall be identified as an empty file. This is foo: empty . 4: The file utility shall examine an initial segment of file and shall make a guess at identifying its contents based on position-sensitive tests. (The answer is not guaranteed to be correct; see the -d, -M, and -m options below.) 5: The file utility shall examine file and make a guess at identifying its contents based on context-sensitive default system tests. (The answer is not guaranteed to be correct.) These two use magic number identification and are the most interesting part of the command. A magic number is a special sequence of bytes that's in a known place in a file that identifies its type. Traditionally that place is the first two bytes, but the term has been extended further to include longer strings and other locations. See this other question for more detail about magic numbers in the file command. The file command has a database of these numbers and what type they correspond to; that database is usually in /usr/share/mime/magic , and maps file contents to MIME types . The output there (often part of file -i if you don't get it by default) will be a defined media type or an extension. "Context-sensitive tests" use the same sort of approach, but are a bit fuzzier. None of these are guaranteed to be right, but they're intended to be good guesses. file also has a database mapping those types to names, by which it will know that a file it has identified as application/pdf can be described as a PDF document . Those human-readable names may be localised to another language too. These will always be some high-level description of the file type in a way a person will understand, rather than a machine. The majority of different outputs you can get will come from these stages. You can look at the magic file for a list of supported types and how they're identified - my system knows 376 different types. The names given and the types supported are determined by your system packaging and configuration, and so your system may support more or fewer than mine, but there are generally a lot of them. libmagic also includes additional hard-coded tests in it. 6: The file shall be identified as a data file. This is foo: data , when it failed to figure out anything at all about the file. There are also other little tags that can appear. An executable ( +x ) file will include " executable " in the output, usually comma-separated. The file implementation may also know extra things about some file formats to be able to describe additional points about them, as in your " PDF document, version 1.4 ".
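If you want the machine-friendly identification rather than the human-readable description, the MIME lookup described above is exposed directly (using file names from your transcript):

    m7% file --mime-type pdf-file.pdf
    pdf-file.pdf: application/pdf
    m7% file -i text-file.txt
    text-file.txt: text/plain; charset=us-ascii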
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/151008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81320/" ] }
151,009
I have six Linux logical volumes that together back a virtual machine. The VM is currently shut down, so it's easy to take consistent images of them.

I'd like to pack all six images together in an archive. Trivially, I could do something like this:

    cp /dev/Zia/vm_lvraid_* /tmp/somedir
    tar c /tmp/somedir | whatever

But that of course creates an extra copy. I'd like to avoid the extra copy.

The obvious approach:

    tar c /dev/Zia/vm_lvraid_* | whatever

does not work, as tar recognizes the files as special (symlinks in this case) and basically stores the ln -s in the archive. Or, with --dereference or directly pointed at /dev/dm-X , it recognizes them as special (device files) and basically stores the mknod in the archive.

I've searched for command-line options to tar to override this behavior, and couldn't find any. I also tried cpio , same problem, and couldn't find any options to override it there, either. I also tried 7z (ditto). Same with pax . I even tried zip , which just got itself confused.

edit: Looking at the source code of GNU tar and GNU cpio, it appears neither of them can do this. At least, not without serious trickery (the special handling of device files can't be disabled). So, suggestions of serious trickery would be appreciated or alternate utilities.

TLDR: Is there some archiver that will pack multiple disk images together (taken from raw devices) and stream that output, without making extra on-disk copies? My preference would be output in a common format, like POSIX or GNU tar.
So recently I wanted to do this with tar . Some investigation indicated to me that it was more than a little nonsensical that I couldn't. I did come up with this weird split --filter="cat >file; tar -r ..." thing, but, well, it was terribly slow. And the more I read about tar the more nonsensical it seemed.

You see, tar is just a concatenated list of records. The constituent files are not altered in any way - they're whole within the archive. But they are blocked off on 512-byte block boundaries, and preceding every file there is a header . That's it. The header format is really, very simple as well.

So, I wrote my own tar . I call it... shitar .

    z()   (IFS=0; printf '%.s\\0' $(printf "%.$(($1-${#2}))d"))
    chk() (IFS=${IFS#??}; set -f; set -- $( printf "$(fmt)" "$n" "$@" '' "$un" "$gn" )
           IFS=; a="$*"; printf %06o "$(($( while printf %d+ "'${a:?}"; do a=${a#?}; done 2>/dev/null)0))")
    fmt() { printf '%s\\'"${1:-n}" %s "${1:+$(z 99 "$n")}%07d" \
            %07o %07o %011o %011o "%-${1:-7}s" ' 0' "${1:+$(z 99)}ustar " %s \
            "${1:+$(z 31 "$un")}%s"
    }

That's the meat and potatoes, really. It writes the headers and computes the chksum - which, relatively speaking, is the only hard part. It does the ustar header format... maybe . At least, it emulates what GNU tar seems to think is the ustar header format to the point that it does not complain. And there's more to it, it's just that I haven't really coagulated it yet. Here, I'll show you:

    for f in 1 2; do echo hey > file$f; done
    { tar -cf - file[123]; echo .; } | tr \\0 \\n | grep -b .

    0:file1            #filename - first 100 bytes
    100:0000644        #octal mode - next 8
    108:0001750        #octal uid,
    116:0001750        #gid - next 16
    124:00000000004    #octal filesize - next 12
    136:12401536267    #octal epoch mod time - next 12
    148:012235         #chksum - more on this
    155: 0             #file type - gnu is weird here - so is shitar
    257:ustar          #magic string - header type
    265:mikeserv       #owner
    297:mikeserv       #group - link name... others shitar doesnt do
    512:hey            #512-bytes - start of file
    1024:file2         #512 more - start of header 2
    1124:0000644
    1132:0001750
    1140:0001750
    1148:00000000004
    1160:12401536267
    1172:012236
    1179: 0
    1281:ustar
    1289:mikeserv
    1321:mikeserv
    1536:hey
    10240:.            #default blocking factor 20 * 512

That's tar . Everything's padded with \0 nulls so I just turn em into \n ewlines for readability. And shitar :

    #the rest, kind of, calls z(), fmt(), chk() + gets $mdata and blocks w/ dd
    for n in file[123]
    do    d=$n; un=$USER; gn=$(id --group --name)
          set -- $(stat --printf "%a\n%u\n%g\n%s\n%Y" "$n")
          printf "$(fmt 0)" "$n" "$@" "$(chk "$@")" "$un" "$gn"
          printf "$(z $((512-298)) "$gn")"; cat "$d"
          printf "$(x=$(($4%512));z $(($4>512?($x>0?$x:512):512-$4)))"
    done |
    { dd iflag=fullblock conv=sync bs=10240 2>/dev/null; echo .; } |
    tr \\0 \\n | grep -b .

OUTPUT

    0:file1            #it's the same. I shortened it.
    100:0000644        #but the whole first file is here
    108:0001750
    116:0001750
    124:00000000004
    136:12401536267
    148:012235         #including its checksum
    155: 0
    257:ustar
    265:mikeserv
    297:mikeserv
    512:hey
    1024:file2
    ...
    1172:012236        #and file2s checksum
    ...
    1536:hey
    10240:.

I say kind of up there because that isn't shitar 's purpose - tar already does that beautifully. I just wanted to show how it works - which means I need to touch on the chksum . If it wasn't for that I would just be dd ing off the head of a tar file and done with it. That might even work sometimes, but it gets messy when there are multiple members in the archive. Still, the chksum is really easy.

First, make it 7 spaces - (which is a weird gnu thing, I think, as the spec says 8, but whatever - a hack is a hack) .
Then add up the octal values of every byte in the header. That's your chksum. So you need the file metadata before you do the header, or you don't have a chksum. And that's a ustar archive, mostly.

Ok. Now, what it is meant to do:

    cd /tmp; mkdir -p mnt
    for d in 1 2 3
    do    fallocate -l $((1024*1024*500)) disk$d
          lp=$(sudo losetup -f --show disk$d)
          sync
          sudo mkfs.vfat -n disk$d "$lp"
          sudo mount "$lp" mnt
          echo disk$d file$d | sudo tee mnt/file$d
          sudo umount mnt
          sudo losetup -d "$lp"
    done

That makes three 500M disk images, formats and mounts each, and writes a file to each.

    for n in disk[123]
    do    d=$(sudo losetup -f --show "$n")
          un=$USER; gn=$(id --group --name)
          set -- $(stat --printf "%a\n%u\n%g\n$(lsblk -bno SIZE "$d")\n%Y" "$n")
          printf "$(fmt 0)" "$n" "$@" "$(chk "$@")" "$un" "$gn"
          printf "$(z $((512-298)) "$gn")"
          sudo cat "$d"
          sudo losetup -d "$d"
    done |
    dd iflag=fullblock conv=sync bs=10240 2>/dev/null |
    xz >disks.tar.xz

Note - apparently block devices will just always block correctly. Pretty handy.

That tar 's the contents of the disk device files in-stream and pipes the output to xz .

    ls -l disk*

    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk1
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk2
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk3
    -rw-r--r-- 1 mikeserv mikeserv    229796 Sep  3 01:05 disks.tar.xz

Now, the moment of truth...

    xz -d <./disks.tar.xz | tar -tvf -

    -rw-r--r-- mikeserv/mikeserv 524288000 2014-09-03 01:01 disk1
    -rw-r--r-- mikeserv/mikeserv 524288000 2014-09-03 01:01 disk2
    -rw-r--r-- mikeserv/mikeserv 524288000 2014-09-03 01:01 disk3

Hooray! Extraction...

    xz -d <./disks.tar.xz | tar -xf - --xform='s/[123]/1&/'

    ls -l disk*

    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk1
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk11
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk12
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk13
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk2
    -rw-r--r-- 1 mikeserv mikeserv 524288000 Sep  3 01:01 disk3
    -rw-r--r-- 1 mikeserv mikeserv    229796 Sep  3 01:05 disks.tar.xz

Comparison...

    cmp disk1 disk11 && echo yay || echo shite

    yay

And the mount...

    sudo mount disk13 mnt
    cat mnt/*

    disk3 file3

And so, in this case, shitar performs ok, I guess. I'd rather not go into all of the things which it won't do well. But, I will say - don't do newlines in the filenames at the least.

You can also do - and maybe should, considering the alternatives I've offered - this with squashfs . Not only do you get the single archive built from the stream - but it's mount able and builtin to the kernel's vfs :

From pseudo-file.example :

    # Copy 10K from the device /dev/sda1 into the file input. Ordinarily
    # Mksquashfs given a device, fifo, or named socket will place that special file
    # within the Squashfs filesystem, this allows input from these special
    # files to be captured and placed in the Squashfs filesystem.
    input f 444 root root dd if=/dev/sda1 bs=1024 count=10

    # Creating a block or character device examples
    # Create a character device "chr_dev" with major:minor 100:1 and
    # a block device "blk_dev" with major:minor 200:200, both with root
    # uid/gid and a mode of rw-rw-rw.
    chr_dev c 666 root root 100 1
    blk_dev b 666 0 0 200 200

You might also use btrfs (send|receive) to stream out a subvolume into whatever stdin -capable compressor you liked. This subvolume need not exist before you decide to use it as compression container, of course.

Still, about squashfs ... I don't believe I'm doing this justice.
Here's a very simple example:

cd /tmp; mkdir ./emptydir
mksquashfs ./emptydir /tmp/tmp.sfs -p \
    'file f 644 mikeserv mikeserv echo "this is the contents of file"'

Parallel mksquashfs: Using 6 processors
Creating 4.0 filesystem on /tmp/tmp.sfs, block size 131072.
[==================================================================|] 1/1 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
        compressed data, compressed metadata, compressed fragments,
        ...
###...
###AND SO ON
###...

echo '/tmp/tmp.sfs /tmp/imgmnt squashfs loop,defaults,user 0 0' |
sudo tee -a /etc/fstab >/dev/null
mount ./tmp.sfs
cd ./imgmnt
ls
total 1
-rw-r--r-- 1 mikeserv mikeserv 29 Aug 20 11:34 file
cat file
this is the contents of file
cd ..
umount ./imgmnt

That's only the inline -p argument for mksquash . You can source a file with -pf containing as many of those as you like. The format is simple - you define a target file's name/path in the new archive's filesystem, you give it a mode and an owner, and then you tell it which process to execute and read stdout from. You can create as many as you like - and you can use LZMA, GZIP, LZ4, XZ... hmm, there are more... compression formats as you like. And the end result is an archive into which you cd .

More on the format though: this is, of course, not just an archive - it is a compressed, mountable Linux file-system image. Its format is the Linux kernel's - it is a vanilla kernel supported filesystem. In this way it is as common as the vanilla Linux kernel. So if you told me you were running a vanilla Linux system on which the tar program was not installed I would be dubious - but I would probably believe you. But if you told me you were running a vanilla Linux system on which the squashfs filesystem was not supported I would not believe you.
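Coming back to the chksum for a moment - if you ever want to verify one by hand, here is a minimal sketch in plain shell. The archive name archive.tar is hypothetical, the offsets are the standard ustar ones, and the stored chksum field (bytes 148-155) is summed as if it were eight ASCII spaces:

# recompute the chksum of the first header in archive.tar
h=$(dd if=archive.tar bs=512 count=1 2>/dev/null | od -An -vtu1)
sum=0 i=0
for byte in $h
do  case $i in
    (14[89]|15[0-5]) sum=$((sum+32));;  # the chksum field itself counts as spaces
    (*)              sum=$((sum+byte));;
    esac
    i=$((i+1))
done
printf 'computed chksum: %06o\n' "$sum"

Compare the printed octal value with what shows up at offset 148 in the grep -b dumps above - they should agree.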
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/151009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/977/" ] }
151,048
I'm trying to find the virsh commands corresponding to the buttons in virt-manager. I read through virsh help domain and found start, shutdown, reset, etc., but the one for Force Off is missing. Does anyone know what it is?
virsh destroy , from man virsh :

    Immediately terminate the domain domain. This doesn't give the domain
    OS any chance to react, and it's the equivalent of ripping the power
    cord out on a physical machine.
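For example, with a hypothetical guest named guest01:

virsh destroy guest01

Note that, despite the name, destroy only kills the running instance - the domain definition and its disk images are left intact (removing the definition itself is virsh undefine).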
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/151048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
151,068
How do I handle backspaces entered during read -n? It shows ^? when I press Backspace, and I don't follow how read counts the characters: after typing 12^?3, five characters were already consumed (though not all of them were actual input), but only after 12^?3^? did it return the prompt. Weird. Please help!

-bash-3.2$ read -n 5
12^?3^?
-bash-3.2$
When you read a whole line with plain read (or read -r or other options that don't affect this behavior), the kernel-provided line editor recognizes the Backspace key to erase one character, as well as a very few other commands (including Return to finish the input line and send it). The shortcut keys can be configured with the stty utility. The terminal is said to be in cooked mode when its line editor is active. In raw mode, each character typed on the keyboard is transmitted to the application immediately. In cooked mode, the characters are stored in a buffer and only complete lines are transmitted to the application. In order to stop reading after a fixed number of characters so as to implement read -n , bash has to switch to raw mode. In raw mode, the terminal doesn't do any processing of the Backspace key (by the time you press Backspace , the preceding character has already been sent to bash), and bash doesn't do any processing either (presumably because this gives the greater flexibility of allowing the script to do its own processing). You can pass the option -e to enable bash's own line editor (readline, which is a proper line editor, not like the kernel's extremely crude one). Since bash is doing the line edition, it can stop reading once it has the requested number of characters.
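To see which erase character your terminal's cooked-mode line editor is configured with, you can check stty's output (look for the erase entry, typically ^? or ^H):

stty -a | tr ';' '\n' | grep erase

And a small sketch combining readline editing with a length limit, as described above (bash-specific; the prompt string is illustrative):

IFS= read -r -e -n 5 -p 'up to 5 chars: ' reply
printf 'you typed: %s\n' "$reply"

Here Backspace behaves as expected because readline, not the kernel, is doing the editing.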
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/151068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74910/" ] }
151,100
We all know that !! can repeat the last command you ran in bash. But sometimes we need to do a sequence like:

$ python test.py
$ vim test.py
$ python test.py # here is where I need to repeat the second most recent command

I can use the up-arrow key to do that, but that requires me to move my right hand away to an uncomfortable position. So I'm wondering if there is a command which, like !!, would work?
You can use !-2 :

$ echo foo
foo
$ echo bar
bar
$ !-2
echo foo
foo

That may not help with your right-hand situation. You can also use !string history searching for this sort of case:

$ python test.py
$ vim test.py
$ !py
python test.py # Printed, then run

This may be more convenient to use. It will run:

    the most recent command preceding the current position in the history list starting with string .

Even just !p would work. You can use !?string to search the whole command line, rather than just the start.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
151,118
From help compgen :

$ help compgen
compgen: compgen [-abcdefgjksuv] [-o option] [-A action] [-G globpat] [-W wordlist] [-F function] [-C command] [-X filterpat] [-P prefix] [-S suffix] [word]
    Display possible completions depending on the options.

    Intended to be used from within a shell function generating possible
    completions. If the optional WORD argument is supplied, matches against
    WORD are generated.

    Exit Status:
    Returns success unless an invalid option is supplied or an error occurs.

What do the options [-abcdefgjksuv] stand for? In other words, I want to know how to use all the options.
Options for the compgen command are the same as for complete , except -p and -r . From the compgen man page:

compgen

compgen [option] [word]

    Generate possible completion matches for word according to the options, which may be any option accepted by the complete builtin with the exception of -p and -r, and write the matches to the standard output.

For the options [abcdefgjksuv] :

-a means names of aliases
-b means names of shell builtins
-c means names of all commands
-d means names of directories
-e means names of exported shell variables
-f means names of files
-g means names of groups
-j means names of jobs
-k means names of shell reserved words (keywords)
-s means names of services
-u means names of users
-v means names of shell variables

You can see the complete man page here .
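A few concrete invocations (output depends on your shell setup, so these are illustrative):

compgen -c ls      # all commands whose names start with "ls"
compgen -a         # every alias currently defined
compgen -b unal    # builtins starting with "unal" (i.e. unalias)
compgen -k         # the shell's reserved words: if, then, do, ...
compgen -v HIST    # shell variables starting with "HIST"

Since compgen prints one match per line, it also combines nicely with tools like grep or wc -l.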
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/151118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
151,121
Is there any issue, from a security point of view, with using / as the root user's home directory instead of /root ?
Yes. /root has 700 permissions (rwx------) whereas / has 555 (r-xr-xr-x) permissions for all users. Now if you use various common utilities you would have /root/.config with rwxr-xr-x permissions. If you were in /, that directory becomes accessible to anyone on the server, whereas if it was in /root it would not be. Having root's data accessible to any user is a security risk.

Edit 1

Note that /root/.config is an example; there are many other folders, folders as diverse as any one of hundreds of thousands of possible programs that root can run. Yes, technically it is security by obscurity. But for example, would you send me the IP address of your server please? Why not? Why do people obscure IP addresses and server names etc. in posts? For the exact same reason you don't want unauthorised people accessing root's data. The same reason you don't hand out a network map. If root's data is not secure, you must vet every single program to ensure it secures its data properly instead of just knowing it's safe because it's in /root.

Morpheus: We've survived by hiding from them, by running from them. But they are the gatekeepers. They are guarding all the doors, they are holding all the keys. Which means that sooner or later, someone is going to have to fight them.

In the case of root, root is the gatekeeper, guarding all the doors, holding all the keys. That's why root is a big fat target for everyone trying to hack a server.

Edit 2

In warfare you never give your enemy anything. He is not to know when your patrols are scheduled, when your convoys are due to arrive, where your potatoes come from, what time breakfast is served, what time the guards are changed, where your main powerline runs, which tent or barracks belongs to the commander, who the commander's driver is, what jeep he drives - anything. In counter-intelligence we want the enemy to know nothing about us at all, because throughout the long history of the world we can find many, many examples of how what was thought to be the most trivial piece of information has been used to bring down kingdoms, destroy nations, assassinate kings and win battles. So ask yourself this question. Which is more secure?

Knowing something about root
Knowing nothing about root

The choice of whether or not to restrict access to any information about root, root's activity or root's data should be trivially obvious. No professional answer can be otherwise.
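You can see the difference directly with stat (the exact modes vary by distribution, but the pattern of a world-readable / versus a root-only /root should hold):

$ stat -c '%a %U %n' / /root
555 root /
700 root /root

With / as root's home, every dotfile and dot-directory a program creates would land in that world-visible top level instead.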
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78907/" ] }
151,149
If I do a sudo cp /etc/foo.txt ~/foo.txt , the new file is created with root as the owner. Right now, I see no way around this other than using the last two commands below ( ls shown to clarify the use-case):

belmin@server1$ ls /etc/foo.txt
-rw------- 1 root root 3848 Mar  6 20:35 /etc/foo.txt
belmin@server1$ sudo cp /etc/foo.txt ~/foo.txt
belmin@server1$ sudo chown belmin: $_

I would prefer:

Doing it in one sudo command.
Not having to specify my current user (maybe using a variable?).
Use install instead of cp : sudo install -o belmin /etc/foo.txt ~/foo.txt
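This also covers both of the stated preferences: it is a single sudo command, and the user needn't be hard-coded, since variables expand in your shell before sudo runs. A variant with illustrative group and mode (600 matching the source file above):

sudo install -o "$USER" -g "$(id -gn)" -m 600 /etc/foo.txt ~/foo.txt

install sets owner, group and permissions in the same step, which is exactly the part cp makes you do afterwards.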
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/151149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2372/" ] }
151,154
On an Ubuntu server I own, I am running out of space. When I ran sudo parted /dev/sda -l to find all available drives, I got this:

Model: ATA ST31000528AS (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      1049kB  256MB   255MB   primary   ext2         boot
 2      257MB   1000GB  1000GB  extended
 5      257MB   1000GB  1000GB  logical                lvm

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/server--vg-swap_1: 2135MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  2135MB  2135MB  linux-swap(v1)

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/server--vg-root: 998GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  998GB  998GB  ext4

I understand /dev/mapper/server--vg-root is the filesystem, and /dev/sda1 has some stuff related to GRUB. But what about /dev/sda2 and /dev/sda5? When I tried to mount /dev/sda2, it said that I needed to specify the file system, which according to the table is nonexistent. So, is it safe to format this with, say, ext4 and mount it?

Also, when I tried to mount /dev/sda5, it gave me this error:

mount: unknown filesystem type 'LVM2_member'

I assume it is NOT safe to reformat this. If I'm wrong, then that would be great, because I could save some space. Please let me know either way. Thanks in advance!

UPDATE: Here is the result of mount :

/dev/mapper/server--vg-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /boot type ext2 (rw,acl)
/dev/sda1 on /media/hd2 type ext2 (rw)
No! /dev/sda contains:

a small /dev/sda1 which is needed to boot
an extended partition /dev/sda2

The extended partition contains a logical partition /dev/sda5 . The logical partition contains an LVM setup, broken down into two logical volumes:

/dev/mapper/server--vg-swap_1 , which is your swap space
/dev/mapper/server--vg-root , which is your root ( / ) partition where everything is stored

There is nothing there that is unused, so the bottom line is you are out of space. You will need to add additional storage to the system. Luckily, you have LVM, so you can add it to the volume group and simply expand the logical volume server--vg-root and then the filesystem within it.
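To give a rough idea of what that expansion looks like once a new disk is attached - the device name /dev/sdb is hypothetical, and the volume group here is server-vg (device-mapper doubles the dashes, which is why it shows up as server--vg):

sudo pvcreate /dev/sdb                          # make the new disk a physical volume
sudo vgextend server-vg /dev/sdb                # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/server-vg/root  # grow the root logical volume
sudo resize2fs /dev/server-vg/root              # grow the ext4 filesystem (can be done online)

Check the actual volume group and logical volume names on your system with vgs and lvs first.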
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151154", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81405/" ] }
151,162
I have a concatenated log file with multiple logs inside that I'm trying to parse out into individual log files. I will later rename them to the date/time of each. Each log is separated by "--- LOG REPORT ---". So far I have:

sed -n '/--- LOG REPORT ---/,/--- LOG REPORT ---/p' logname.log > test.out

However, as you can imagine, that only outputs the first instance of the pattern. I looked over the man page for sed and I'm not convinced it can output multiple files. Perhaps I could keep extracting from a file until it's empty, but that seems like too much work. How can I achieve this? Maybe I should be using awk instead?

Example of input file filename.log :

--- LOG REPORT ---
Mary
Had
A
Little
Lamb
--- LOG REPORT ---
Her
Fleece
Was
White
As
Snow

Desired output:

In filename_1.log :

--- LOG REPORT ---
Mary
Had
A
Little
Lamb

In filename_2.log :

--- LOG REPORT ---
Her
Fleece
Was
White
As
Snow
How about something like awk '/--- LOG REPORT ---/ {n++;next} {print > "test"n".out"}' logname.log
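Two caveats with the above, depending on your needs. The next discards the --- LOG REPORT --- line itself, while the desired output keeps it at the top of each file; and every output file stays open until awk exits, which can hit the open-file limit on inputs with many logs (some non-GNU awks allow very few simultaneously open files). A variant addressing both, with the file naming from the example:

awk '/--- LOG REPORT ---/ {if (out) close(out); n++; out = "filename_" n ".log"}
     out {print > out}' filename.log

The out {print > out} guard also quietly skips anything that precedes the first separator.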
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41913/" ] }
151,171
I did something stupid. I simply scp ed my rpmforge repo files from another (working) machine on to my dev box, and ran yum update . This yielded:

root@dev07 /etc/yum.repos.d # yum update
Loaded plugins: refresh-packagekit, security
rpmforge                                                 | 1.9 kB     00:00
rpmforge/primary_db                                      | 2.7 MB     00:01
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package htop.x86_64 0:1.0.1-2.el6 will be updated
---> Package htop.x86_64 0:1.0.3-1.el6.rf will be an update
---> Package libewf.x86_64 0:20100226-1.el6 will be updated
---> Package libewf.x86_64 0:20100226-1.el6.rf will be an update
---> Package perl-Compress-Raw-Bzip2.x86_64 0:2.021-136.el6 will be updated
---> Package perl-Compress-Raw-Bzip2.x86_64 0:2.052-1.el6.rf will be an update
---> Package testdisk.x86_64 0:6.14-1.el6 will be updated
---> Package testdisk.x86_64 0:6.14-1.el6.rf will be an update
--> Processing Dependency: libreiserfs-0.3.so.0()(64bit) for package: testdisk-6.14-1.el6.rf.x86_64
--> Processing Dependency: libntfs-3g.so.84()(64bit) for package: testdisk-6.14-1.el6.rf.x86_64
---> Package xclip.x86_64 0:0.12-1.el6 will be updated
---> Package xclip.x86_64 0:0.12-1.el6.rf will be an update
--> Running transaction check
---> Package fuse-ntfs-3g.x86_64 0:2013.1.13-2.el6.rf will be installed
---> Package progsreiserfs.x86_64 0:0.3.0.4-1.2.el6.rf will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                  Arch    Version             Repository    Size
================================================================================
Updating:
 htop                     x86_64  1.0.3-1.el6.rf      rpmforge      87 k
 libewf                   x86_64  20100226-1.el6.rf   rpmforge     343 k
 perl-Compress-Raw-Bzip2  x86_64  2.052-1.el6.rf      rpmforge     104 k
 testdisk                 x86_64  6.14-1.el6.rf       rpmforge     451 k
 xclip                    x86_64  0.12-1.el6.rf       rpmforge      27 k
Installing for dependencies:
 fuse-ntfs-3g             x86_64  2013.1.13-2.el6.rf  rpmforge     483 k
 progsreiserfs            x86_64  0.3.0.4-1.2.el6.rf  rpmforge     119 k

Transaction Summary
================================================================================
Install       2 Package(s)
Upgrade       5 Package(s)

Total download size: 1.6 M
Is this ok [y/N]: y
Downloading Packages:
(1/7): fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64.rpm        | 483 kB     00:00
(2/7): htop-1.0.3-1.el6.rf.x86_64.rpm                    |  87 kB     00:00
(3/7): libewf-20100226-1.el6.rf.x86_64.rpm               | 343 kB     00:00
(4/7): perl-Compress-Raw-Bzip2-2.052-1.el6.rf.x86_64.rpm | 104 kB     00:00
(5/7): progsreiserfs-0.3.0.4-1.2.el6.rf.x86_64.rpm       | 119 kB     00:00
(6/7): testdisk-6.14-1.el6.rf.x86_64.rpm                 | 451 kB     00:00
(7/7): xclip-0.12-1.el6.rf.x86_64.rpm                    |  27 kB     00:00
--------------------------------------------------------------------------------
Total                                           873 kB/s | 1.6 MB     00:01
warning: rpmts_HdrFromFdno: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag

Oops. I didn't follow the directions . So I deleted the *rpmforge* files from /etc/yum.repos.d/ , and started over the correct way.
Which all went fine until I tried another yum update :

[snip]
Transaction Check Error:
  file /sbin/mount.lowntfs-3g from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64
  file /sbin/mount.ntfs from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64
  file /sbin/mount.ntfs-3g from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64
  file /usr/bin/ntfs-3g from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64
  file /usr/bin/ntfsmount from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64
  file /usr/share/man/man8/ntfs-3g.8.gz from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64
  file /usr/share/man/man8/ntfs-3g.probe.8.gz from install of fuse-ntfs-3g-2013.1.13-2.el6.rf.x86_64 conflicts with file from package ntfs-3g-2:2011.4.12-5.el6.x86_64

Error Summary
-------------

root@dev07 /etc/yum.repos.d #

I suppose I could just delete those files, but I want to be sure that by getting a bigger hammer, I'm not just breaking my system into smaller pieces first. How should I fix this?
Try removing the ntfs-3g-2:2011.4.12-5.el6.x86_64 package:

yum remove ntfs-3g

See which packages depend on it. If only one package appears in yum's list of packages to remove, you can confidently remove it. (Note: don't press 'y' to remove packages if you don't know what they are.) After this, go for yum update .
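If you'd like to check the dependents yourself before letting yum decide, you can also query the rpm database directly (the exact output varies by system - on this box it would presumably list testdisk, per the transaction above):

rpm -q --whatrequires ntfs-3g

Once the removal looks safe, the sequence is simply yum remove ntfs-3g followed by yum update, which should then pull in fuse-ntfs-3g from rpmforge without the file conflicts.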
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19581/" ] }
151,207
Input file:

A 1,2,3,4        #length($2)=4
B 1,2            #length($2)=2
C 9,8,7,6,5,4    #length($2)=6

Expected output:

12               #4+2+6

A method like:

awk -F '[\t,]' '{print length($2)}'

but working on the whole file.
If there are no other columns with commas, this will do it: awk -F, '{c+=NF} END {print c+0}' file
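If other columns might contain commas too, you can restrict the count to the second column - split returns the number of fields it produces, so (assuming whitespace-separated columns as in the example):

awk '{c += split($2, a, ",")} END {print c+0}' file

The c+0 keeps the output 0 rather than empty on an empty input file, same as above.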
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74555/" ] }
151,281
I am trying to install a ruby package:

$ sudo gem install pdfbeads
ERROR:  Error installing pdfbeads:
        nokogiri requires Ruby version >= 1.9.2.

It says that it needs a ruby version greater than 1.9.1. My ruby is 1.8.7:

$ which ruby
/usr/bin/ruby
$ ruby --version
ruby 1.8.7 (2011-06-30 patchlevel 352) [i686-linux]
$ gem --version
1.8.15

I have ruby 1.9.1 and 1.9.3:

$ whereis ruby1.9.1
ruby1.9: /usr/bin/ruby1.9.1 /usr/bin/ruby1.9.3 /usr/bin/X11/ruby1.9.1 /usr/bin/X11/ruby1.9.3

but 1.9.3 is linked to 1.9.1:

$ ls /usr/bin/ruby* -l
lrwxrwxrwx 1 root root   22 Jul 10 02:33 /usr/bin/ruby -> /etc/alternatives/ruby
-rwxr-xr-x 1 root root 5504 Nov 26  2013 /usr/bin/ruby1.8
-rwxr-xr-x 1 root root 5552 Nov 26  2013 /usr/bin/ruby1.9.1
lrwxrwxrwx 1 root root    9 Nov 26  2013 /usr/bin/ruby1.9.3 -> ruby1.9.1

It reports itself as 1.9.3, however:

$ /usr/bin/ruby1.9.3 --version
ruby 1.9.3p0 (2011-10-30 revision 33570) [i686-linux]

I change the link to ruby1.9.3 anyway:

$ ls -l /usr/bin/ruby*
lrwxrwxrwx 1 root root    9 Aug 20 21:16 /usr/bin/ruby -> ruby1.9.3
-rwxr-xr-x 1 root root 5504 Nov 26  2013 /usr/bin/ruby1.8
-rwxr-xr-x 1 root root 5552 Nov 26  2013 /usr/bin/ruby1.9.1
lrwxrwxrwx 1 root root    9 Nov 26  2013 /usr/bin/ruby1.9.3 -> ruby1.9.1

The installation still says it needs ruby >= 1.9.2:

$ sudo gem install pdfbeads
ERROR:  Error installing pdfbeads:
        nokogiri requires Ruby version >= 1.9.2.
$ gem --version
1.8.15
$ ruby --version
ruby 1.9.3p0 (2011-10-30 revision 33570) [i686-linux]

Do I have ruby 1.9.3 or just ruby 1.9.1? How can I make sudo gem install pdfbeads use ruby 1.9.3?

Update: I have now followed the way of installing ruby 2.1.0 by RVM, and added the path of rvm to my PATH. I then successfully installed ruby 2.1.0 by

$ rvm install 2.1.0

and made it the default:

$ rvm use 2.1.0
$ ruby -v
ruby 2.1.0p0 (2013-12-25 revision 44422) [i686-linux]
$ which ruby
/home/tim/.rvm/rubies/ruby-2.1.0/bin/ruby

Now back to installing the package pdfbeads, but without sudo (because I thought I had installed ruby 2.1.0 under my account, not under root, and installation of the package requires the newer ruby):

$ gem install pdfbeads
ERROR:  While executing gem ... (Gem::FilePermissionError)
    You don't have write permissions into the /var/lib/gems/1.8 directory.

So I think I have to use sudo . But I still get the original error:

$ sudo gem install pdfbeads
ERROR:  Error installing pdfbeads:
        nokogiri requires Ruby version >= 1.9.2.

I think it is because under sudo the user is root , which still has the older ruby 1.8.7 as the default. So I wonder what I can do now?
You may wish to consider using a ruby package manager like rvm or rbenv . You can install different rubies and switch between them easily. You might also want to consider trying 2.0+.

Sample output from rvm:

21:59:48 durrantm Castle2012 /home/durrantm
$ rvm list

rvm rubies

   ruby-1.8.7-p374 [ x86_64 ]
   ruby-1.9.3-p125 [ x86_64 ]
   ruby-1.9.3-p194 [ x86_64 ]
   ruby-1.9.3-p448 [ x86_64 ]
   ruby-2.0.0-p195 [ x86_64 ]
=* ruby-2.0.0-p247 [ x86_64 ]
   ruby-2.0.0-p481 [ x86_64 ]
   ruby-2.1.1 [ x86_64 ]
   ruby-2.1.2 [ x86_64 ]

# => - current
# =* - current && default
#  * - default

21:59:50 durrantm Castle2012 /home/durrantm
$ rvm use 2.0.0
Using /home/durrantm/.rvm/gems/ruby-2.0.0-p481
$ rvm use 2.1.1
Using /home/durrantm/.rvm/gems/ruby-2.1.1
$ rvm use 1.9.3
ruby-1.9.3-p547 is not installed.
$ rvm use 1.9.3-p448
Using /home/durrantm/.rvm/gems/ruby-1.9.3-p448

Get rvm at http://rvm.io/

Install with its famous 1 liner:

$ \curl -sSL https://get.rvm.io | bash -s stable
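Regarding the update in the question: once a ruby comes from rvm, gems install under ~/.rvm as well, so sudo is exactly what to avoid - under sudo you get root's environment, where the rvm function was never loaded and gem is still the system 1.8 one (hence the /var/lib/gems/1.8 permission error). In a shell where rvm is loaded, something like this should be all that's needed:

rvm use 2.1.0 --default
gem env home        # sanity check: should point inside ~/.rvm, not /var/lib/gems
gem install pdfbeads

If gem env home still shows the system path, the shell didn't source rvm - open a new login shell or run source ~/.rvm/scripts/rvm first.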
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
151,289
I'm trying to compile a Linux kernel on Arch Linux to debug an issue. While configuring modules I didn't know what a lot of the stuff was, so I left it all enabled. If this kernel works I plan to keep it, but would all those extra modules slow the system down, or are they only loaded when needed?
While you won't notice any performance improvement (assuming you build your kernel with the modules you actually require), there is some benefit in removing unneeded modules: first, it can significantly reduce the compile time, and secondly, it will reduce the size of the final kernel. Creating a .config with make localmodconfig is a good way to get your feet wet. See the Arch Wiki for the traditional compilation approach .
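A sketch of the localmodconfig workflow, run from inside the kernel source tree (plug in any hardware you care about first so its modules are loaded; the snapshot file path is arbitrary):

lsmod > /tmp/my-lsmod                     # record what the running system actually uses
make LSMOD=/tmp/my-lsmod localmodconfig   # disable every module not in that list
make -j"$(nproc)"                         # compile; far fewer modules, far less time

This only trims modules that were not loaded when the snapshot was taken, so it errs on the side of your current hardware.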
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81000/" ] }
151,310
When running sudo <command> under a user login session, will thatchange $PATH to be the root's $PATH during the running of sudo<command> ? If <command> relies on the user's $PATH , not the root's $PATH ,how can the user run sudo <command> successfully? One way is to sudo su to be the root, change the root's $PATH to be the user's, and run <command> directly. This is how I solved my problem of How to specify a higher ruby version for installing a gem? . Any way simpler? Can it be done without switching to the root from the user?
This is actually configuration-dependent .

There is an env_reset option in sudoers that, combined with env_check and env_delete , controls whether to replace, extend, or pass through some or all environment variables, including PATH . The default behaviour is to have env_reset enabled, and to reset PATH . The value PATH is set to can be controlled with the secure_path option, and otherwise it is determined by the user configuration.

You can disable env_reset or add PATH to env_keep to change that behaviour, but note that it may not have the effect you want overall - there are often directories ( sbin ) in root's PATH that aren't in your user's. You can enable setenv instead to allow overriding environment for a single execution of sudo using the -E option to sudo . All of these could be changed in your distribution's default configuration already. Run sudo visudo to have a look at what's currently in your sudoers file.

There are alternative approaches. One simple one is to use sudo 's built-in environment variable setting or env :

sudo PATH="$PATH" command ...
sudo env PATH="$PATH" command ...

will both run just this command with your current user's PATH . You can set other variables there as well in the same way, which is often useful. One or other of those may be disallowed by your configuration.
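For reference, the relevant sudoers directives tend to look like the following - edit only via visudo, and note the secure_path value shown is a typical distribution default, not necessarily yours:

Defaults    env_reset
Defaults    secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
#Defaults   env_keep += "PATH"    # only effective when secure_path is not set

Since secure_path wins over env_keep when both are present, removing or commenting out secure_path is the step people usually miss.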
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
151,316
How do I redirect the output of a Unix command from one server to another? I want to send a command's output from Server-1, receive it on Server-2, and write it into a file there.
In general, you can always do:

<command> | ssh user@remote-server "cat > output.txt"

It saves the output of <command> to the file output.txt on the remote server. In your case, on Server-1:

echo "qwerty" | ssh user@Server-2 "cat > output.txt"

If the two servers have no direct connectivity, but you can ssh to both, then from your local machine you can do:

ssh user@Server-1 "<command>" | ssh user@Server-2 "cat > output.txt"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/151316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81493/" ] }
151,323
Does anybody know how to log in to Skype on Linux Mint from the terminal, or any other way? I have tried a lot but found nothing.

Desktop # ./skype.desktop
./skype.desktop: line 1: [Desktop: command not found
./skype.desktop: line 3: Internet: command not found
./skype.desktop: line 4: fg: no job control
./skype.desktop: line 9: Application: command not found
./skype.desktop: line 11: X-KDE-Protocols=skype: command not found
The skype.desktop files are not meant for you to run. To invoke skype you should be able to simply type skype in your terminal or run it via the full path here:

$ /usr/bin/skype

Files that end in .desktop are configuration files for your desktop, not for you to execute directly. See here for example:

$ more /usr/share/applications/skype.desktop
[Desktop Entry]
Name=Skype
Comment=Skype Internet Telephony
Exec=skype %U
Icon=skype.png
Terminal=false
Type=Application
Encoding=UTF-8
Categories=Network;Application;
MimeType=x-scheme-handler/skype;
X-KDE-Protocols=skype

You could try using gnome-open ./skype.desktop or xdg-open ./skype.desktop . These used to work, but there appears to be a bug that's been present for some time, which breaks these 2 commands' ability to process .desktop files. See this AU Q&A titled: Running a .desktop file in the terminal for more on this.

exo-open

Using exo-open ./skype.desktop worked for me. exo-open is part of the Xfce DE, but will properly invoke the .desktop files.

$ exo-open /usr/share/applications/skype.desktop

gtk-launch

Using gtk-launch skype.desktop /path/to/desktop/file also works.

$ gtk-launch skype.desktop /usr/share/applications

Command line login

If you take a look at the output of skype --help :

    --pipelogin    Command line login. "echo username password | skype --pipelogin"

So you could achieve what you want like so:

$ echo username password | skype --pipelogin
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81497/" ] }
151,325
When I open my .java file in vim, I can see a couple of lines prefixed with one or more ^I characters. It looks like tabs from Eclipse got converted into ^I. I would like to replace each single ^I with 4 spaces, e.g.:

^I^I^I^IList<History> rulePackagesHistory = result.getHistory();

How can I do that in the vim editor?
Add these lines to your .vimrc :

set tabstop=4
set shiftwidth=4
set expandtab

After that, each new tab character you enter will be turned into 4 spaces; existing tabs are left alone. To convert those, type:

:retab

This will convert all existing tabs in the file to spaces.

If you don't want to use retab , you can use perl to replace each tab with 4 spaces:

perl -i.bak -pe 's/\t/    /g' file
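Two small additions that help here. You can make the tabs visible before and after the conversion, and retab also accepts a range if you only want to touch part of the file:

:set list      " render tabs as ^I and line endings as $
:10,20retab    " convert tabs to spaces on lines 10-20 only
:set nolist

An equivalent one-shot substitution inside vim would be :%s/\t/    /g, though :retab is preferable since it respects your tabstop setting.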
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81498/" ] }
151,329
How can I efficiently reorder windows in tmux? For example, having this set of windows: 0:zsh 1:elinks 2:mutt 3:irssi 4:emacs 5:rss 6:htop What would I have to do to move rss to between elinks and mutt , ending up with: 0:zsh 1:elinks 2:rss 3:mutt 4:irssi 5:emacs 6:htop I know how to use move-window to move a window to a yet-unused index, and I could use a series of them to achieve this—but, obviously, this is very tedious.
swap-window can help you:

swap-window -t -1

It moves the current window to the left by one position. From man tmux :

swap-window [-d] [-s src-window] [-t dst-window]
              (alias: swapw)
        This is similar to link-window, except the source and destination
        windows are swapped. It is an error if no window exists at src-window.

You can bind it to a key:

bind-key -n S-Left swap-window -t -1
bind-key -n S-Right swap-window -t +1

Then you can use Shift+Left and Shift+Right to change the current window's position.
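One refinement: whether the focus follows the swapped window differs between tmux versions, so if you want repeated keypresses to keep dragging the same window along, make the selection explicit:

bind-key -n S-Left  swap-window -t -1 \; select-window -t -1
bind-key -n S-Right swap-window -t +1 \; select-window -t +1

(In a config file the \; separates the two commands within one binding.)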
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/151329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2916/" ] }
151,341
I'd like to remove from a given column ($2 in the example) the duplicate fields (comma separated).

Input file:

A 1,2,3,4
B 4,5,6,3
C 2,15

Expected output:

A 1,2,3,4
B 5,6
C 15
perl -lpe 's/\s\K\S+/join ",", grep {!$seen{$_}++} split ",", $&/e'

You can run the above like so:

$ perl -lpe 's/\s\K\S+/join ",", grep {!$seen{$_}++} split ",", $&/e' afile
A 1,2,3,4
B 5,6
C 15

How it works

First, calling perl with -lpe does the following 3 things.

-l[octal]   enable line ending processing, specifies line terminator
-p          assume loop like -n but print line also, like sed
-e program  one line of program (several -e's allowed, omit programfile)

This essentially takes the file in, strips off the newlines, operates on a line, and then tacks a newline character back onto it when it's done. So it's just looping through the file and executing our Perl code against each line in turn.

As for the actual Perl code:

\s   means a spacing character (the five characters [ \f\n\r\t] and \v in newer versions of perl , like [[:space:]] )
\K   keep the stuff left of the \K, don't include it in $&
\S+  one or more characters not in the set [ \f\n\r\t\v]

The join ",", is going to take the results and rejoin each field so that it's separated by a comma.

The split ",", $& will take the matches that were found by the \S+ and split them into just the fields, without the comma.

The grep {!$seen{$_}++} will take each field's number and add it to the hash %seen , where each field's number is $_ as we go through each of them. Each time a field number is "seen" it's counted via the ++ operator, $seen{$_}++ . The grep {!$seen{$_}++} returns a field value only the first time it is seen.

Modified to see what's happening

If you use this modified abomination you can see what's going on as this Perl one-liner moves across the lines from the file.

$ perl -lpe 's/\s\K\S+/join ",", grep {!$seen{$_}++} split ",", $&/e; @a=keys %seen; @b=values %seen; print "keys: @a | vals: @b"' afile
keys: 4 1 3 2 | vals: 1 1 1 1
A 1,2,3,4
keys: 6 4 1 3 2 5 | vals: 1 2 1 2 1 1
B 5,6
keys: 6 4 1 3 2 15 5 | vals: 1 2 1 2 2 1 1
C 15

This is showing you the contents of %seen at the end of processing a line from the file. Let's take the 2nd line of the file.

B 4,5,6,3

And here's what my modified version shows that line as:

keys: 6 4 1 3 2 15 5 | vals: 1 2 1 2 2 1 1

So this is saying that we've seen field #6 (1 time), field #4 (2 times), etc. and field #5 (1 time). So when grep returns the results it will only return results from this array if it was present in this line (4,5,6,3) and if we've seen it only 1 time (6,1,15,5). The intersection of these 2 lists is (5,6) and so that's what gets returned by grep .

References

perlre - perldoc.perl.org
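For completeness, the same global dedupe (duplicates removed across all lines, as the expected output shows) can be written in awk. This sketch assumes exactly two whitespace-separated columns, and since it rebuilds $2 the output separator becomes a single space:

awk '{
    n = split($2, f, ","); out = ""
    for (i = 1; i <= n; i++)
        if (!seen[f[i]]++)                          # global: never reset between lines
            out = out (out == "" ? "" : ",") f[i]
    $2 = out
    print
}' afile

Because seen[] persists for the whole run, a value that appeared on any earlier line is dropped, exactly like the %seen hash in the Perl version. A line whose fields have all been seen before would end up with an empty second column.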
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/151341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74555/" ] }