107,885
I often have a problem unmounting a directory:

    umount /mnt/dir
    umount: /mnt/dir: device is busy

There are many reasons why the device is busy. Sometimes there are processes running which have open locks on it, sometimes there are other directories mounted on top of /mnt/dir.

My question: what are the steps to check why a directory couldn't be unmounted? I know there are many reasons, but it's OK if you explain a specific solution.

[EDIT]
[X] running processes on mounted volumes
[X] another volume is mounted on top of a volume we want to unmount
[_] NFS locks the volume we want to unmount
The way to check is fuser -vm /mnt/dir, which must be run as root. It will tell you which processes are accessing the mount point. An alternative is lsof /mnt/dir, which will show each open file on the mount. Again, this is best run as root. You can run either of these as non-root, but then the output will be limited to your own processes; ones from other users will simply not be shown, even though they will prevent unmounting the filesystem.

Example:

    Watt:~# fuser -vm /mnt/Zia/src
                         USER        PID ACCESS COMMAND
    /mnt/Zia/src:        root     kernel mount /mnt/Zia/src
                         anthony   24909 ..c.. bash
                         anthony   25041 F.c.. gvim

The "access" field tells you how it's being accessed. In this case, the kernel has it in use as a mount (duh, but unmount will be OK with only this), bash has it as the current working directory (you will have to cd to a different directory before unmounting), and gvim both has it as the current directory and has a file open (you will need to close that gvim).

    Watt:~# lsof /mnt/Zia/src
    COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
    bash    24909 anthony  cwd    DIR   0,26    12288 3527682 /mnt/Zia/src/perl (zia.vpn.home:/home/anthony/src)
    gvim    25041 anthony  cwd    DIR   0,26    12288 3527682 /mnt/Zia/src/perl (zia.vpn.home:/home/anthony/src)
    gvim    25041 anthony    6u   REG   0,26    16384 3526219 /mnt/Zia/src/perl/.utf8.c.swp (zia.vpn.home:/home/anthony/src)

In this output, you can see the current directories for both bash and gvim (as type DIR). You can also see which file gvim has open for write.

How to force the issue:

fuser has a -k option which will send a signal (default: SIGKILL) to each process using the mount. This is a rather forceful way to stop the mount from being busy. (And of course, be careful what you SIGKILL!)

umount has a -l option to perform a lazy unmount. The mount will be removed from the filesystem namespace (so you won't see it under /mnt/Zia/src anymore, in the example) but it stays mounted, so programs accessing it can continue to do so. When the last program accessing it exits, the unmount will actually occur.

There is one final fixable cause of unmount failing, and that's an NFS server going down. Here you can use umount -f, but you risk data loss if you do so. (The client may have cached writes that haven't been confirmed by the server yet, and those writes will be discarded. Applications, however, have already been told the write was successful.)
{ "source": [ "https://unix.stackexchange.com/questions/107885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
107,939
I am running my Python script in the background on my Ubuntu machine (12.04) like this:

    nohup python testing.py > test.out &

Now, it might be possible that at some stage my above Python script can die for whatever reason. So I am thinking of having some sort of cron agent in a bash shell script which can restart my above Python script automatically if it is killed for whatever reason. Is this possible to do? If yes, then what's the best way to solve this kind of problem?

UPDATE: After creating the testing.conf file like this:

    chdir /tekooz
    exec python testing.py
    respawn

I ran the below sudo command to start it, but I cannot see that process running in the background using ps ax:

    root@bx13:/bezook# sudo start testing
    testing start/running, process 27794
    root@bx13:/bezook# ps ax | grep testing.py
    27806 pts/3    S+     0:00 grep --color=auto testing.py

Any idea why ps ax is not showing me anything? And how do I check whether my program is running or not? This is my python script:

    #!/usr/bin/python
    while True:
        print "Hello World"
        time.sleep(5)
On Ubuntu (up to 14.04; 16.04 and later use systemd) you can use upstart to do this, which is better than a cron job. You put a config file in /etc/init and make sure you specify respawn. It could be a minimal file /etc/init/testing.conf (edit as root):

    chdir /your/base/directory
    exec python testing.py
    respawn

And you can test with /your/base/directory/testing.py:

    from __future__ import print_function
    import time
    with open('/var/tmp/testing.log', 'a') as fp:
        print(time.time(), 'done', file=fp)
    time.sleep(3)

and start with:

    sudo start testing

and follow what happens (in another window) with:

    tail -f /var/tmp/testing.log

and stop with:

    sudo stop testing

You can also add a "start on" stanza to have the command start on boot of the system.
{ "source": [ "https://unix.stackexchange.com/questions/107939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22434/" ] }
108,020
I already know vim -b , however, depending on the locale used, it displays multi-byte characters (like UTF-8) as single letters. How can I ask vim to only display ASCII printable characters, and treat the rest as binary data, no matter the charset?
When using vim -b , this displays all high characters as <xx> : set encoding=latin1 set isprint= set display+=uhex Any single-byte encoding will work, vim uses ASCII for all lower chars and has them hard-coded as printable. Setting isprint to empty will mark everything else as non-printable. Setting uhex will display them as hexadecimal. Here is how the display changes after each command:
{ "source": [ "https://unix.stackexchange.com/questions/108020", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30196/" ] }
108,100
More and more tar archives use the xz format based on LZMA2 for compression instead of the traditional bzip2(bz2) compression. In fact kernel.org made a late " Good-bye bzip2 " announcement, 27th Dec. 2013 , indicating kernel sources would from this point on be released in both tar.gz and tar.xz format - and on the main page of the website what's directly offered is in tar.xz . Are there any specific reasons explaining why this is happening and what is the relevance of gzip in this context?
For distributing archives over the Internet, the following things are generally a priority:

1. Compression ratio (i.e., how small the compressor makes the data);
2. Decompression time (CPU requirements);
3. Decompression memory requirements; and
4. Compatibility (how widespread the decompression program is)

Compression memory and CPU requirements aren't very important, because you can use a large fast machine for that, and you only have to do it once.

Compared to bzip2, xz has a better compression ratio and lower (better) decompression time. However, at the compression settings typically used, it requires more memory to decompress [1] and is somewhat less widespread. Gzip uses less memory than either.

So, both gzip and xz format archives are posted, allowing you to pick:

- Need to decompress on a machine with very limited memory (<32 MB): gzip. Given, not very likely when talking about kernel sources.
- Need to decompress with only minimal tools available: gzip
- Want to save download time and/or bandwidth: xz

There isn't really a realistic combination of factors that would get you to pick bzip2, so it's being phased out.

I looked at compression comparisons in a blog post. I didn't attempt to replicate the results, and I suspect some of it has changed (mostly, I expect xz has improved, as it's the newest).

(There are some specific scenarios where a good bzip2 implementation may be preferable to xz: bzip2 can compress a file with lots of zeros and genome DNA sequences better than xz. Newer versions of xz now have an (optional) block mode which allows data recovery after the point of corruption and parallel compression and [in theory] decompression. Previously, only bzip2 offered these. [2] However, none of these are relevant for kernel distribution.)

1: In archive size, xz -3 is around bzip -9. Then xz uses less memory to decompress. But xz -9 (as, e.g., used for Linux kernel tarballs) uses much more than bzip -9. (And even xz -0 needs more than gzip -9).

2: F21 System Wide Change: lbzip2 as default bzip2 implementation
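Purely as an illustration of the trade-off (not from the original answer; it assumes GNU tar built with gzip and xz support, and a hypothetical source tree named linux/):

    # create the same tree compressed two ways
    tar -czf linux.tar.gz linux/    # gzip
    tar -cJf linux.tar.xz linux/    # xz

    # compare sizes and decompression times
    ls -l linux.tar.gz linux.tar.xz
    time gzip -dc linux.tar.gz > /dev/null
    time xz   -dc linux.tar.xz > /dev/null

On typical source trees the .tar.xz will be noticeably smaller, at the cost of more memory during decompression.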
{ "source": [ "https://unix.stackexchange.com/questions/108100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
108,136
I made a bad choice when configuring Zsh the first time around, so now I would like to run the configuration wizard (the thing that runs the first time you log in) again. How do I do this?
The wizard is provided by the function zsh-newuser-install . To run it again, make a backup of your .zshrc (because there's a small risk that zsh-newuser-install will mess up your manual configuration), then run autoload -U zsh-newuser-install zsh-newuser-install -f
{ "source": [ "https://unix.stackexchange.com/questions/108136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56089/" ] }
108,145
I stumbled across a blog that mentioned the following command. who mom likes It appears to be equivalent to who am i The author warns to never enter the following into the command line (I suspect he is being facetious) who mom hates There is nothing documented about the mom command. What does it do?
Yes, it's a joke, included by the developers of the who command. See the man page for who.

excerpt

    If FILE is not specified, use /var/run/utmp.  /var/log/wtmp as FILE is common.
    If ARG1 ARG2 given, -m presumed: 'am i' or 'mom likes' are usual.

This U&L Q&A titled: What is a "non-option argument"? explains some of the terminology from the man page, and my answer there also covers alternatives to these two-argument who commands.

Details

There really isn't anything special about am I or am i. The who command is designed to return the same results for any 2 arguments. Actually it behaves as if you called it with its -m switch.

    -m     only hostname and user associated with stdin

Examples

    $ who -m
    saml     pts/1        2014-01-06 09:44 (:0)

    $ who likes candy
    saml     pts/1        2014-01-06 09:44 (:0)

    $ who eats cookies
    saml     pts/1        2014-01-06 09:44 (:0)

    $ who blah blah
    saml     pts/1        2014-01-06 09:44 (:0)

Other implementations

If you take a look at The Heirloom Project, you can gain access to an older implementation of who.

    The Heirloom Toolchest is a collection of standard Unix utilities. Highlights are:
    - Derived from original Unix material released as Open Source by Caldera and Sun.

The man page that comes with who in this distribution also has the same "feature", except it's more obvious.

    $ groff -Tascii -man who.1 | less
    ...
    SYNOPSIS
        who [-abdHlmpqRrstTu] [utmp_file]
        who -q [-n x] [utmp_file]
        who [am i]
        who [am I]
    ...
    With the two-argument synopsis forms `who am i' and `who am I',
    who tells who you are logged in as.
    ...
{ "source": [ "https://unix.stackexchange.com/questions/108145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }
108,169
I'm reading this howto, and there's something like this: We can allow established sessions to receive traffic: $ sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT The above rule has no spaces either side of the comma in ESTABLISHED,RELATED If the line above doesn't work, you may be on a castrated VPS whose provider has not made available the extension, in which case an inferior version can be used as last resort: $ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT Is there a significant difference in working between -m conntrack --ctstate and -m state --state ? They say that one may not work, but they don't say why. Why should I prefer one over the other?
I don't claim to be an expert with iptables rules but the first command is making use of the connection tracking extension ( conntrack ) while the second is making use of the state extension. Data point #1 According to this document the conntrack extension superseded state . Obsolete extensions: • -m state: replaced by -m conntrack Data point #2 Even so I found this SF Q&A titled: Firewall questions about state and policy? where the OP claimed to have asked this question on IRC in #iptables@freenode. After discussing it there he came to the conclusion that: Technically the conntrack match supersedes - and so obsoletes - the state match. But practically the state match is not obsoleted in any way. Data point #3 Lastly I found this SF Q&A titled: Iptables, what's the difference between -m state and -m conntrack? . The answer from this question is probably the best evidence and advice on how to view the usage of conntrack and state . excerpt Both use same kernel internals underneath (connection tracking subsystem). Header of xt_conntrack.c: xt_conntrack - Netfilter module to match connection tracking information. (Superset of Rusty's minimalistic state match.) So I would say -- state module is simpler (and maybe less error prone). It's also longer in kernel. Conntrack on the other side has more options and features [1] . My call is to use conntrack if you need it's features, otherwise stick with state module. Similar question on netfilter maillist. [1] Quite useful like "-m conntrack --ctstate DNAT -j MASQUERADE" routing/DNAT fixup ;-) Data point #4 I found this thread from the [email protected] netfilte/iptables discussions, titled: state match is obsolete 1.4.17 , which pretty much says that state is just an alias to conntrack so it doesn't really matter which you use, in both circumstances you're using conntrack . excerpt Actually, I have to agree. Why don't we keep "state" as an alias and accept the old syntax in "conntrack"? state is currently aliased and translated to conntrack in iptables if the kernel has it. No scripts are broken. If the aliasing is done in userspace, the kernel part can be removed - someday maybe. The aliasing is already done in userspace. One types in "state" and it's converted into "conntrack" and that is then sent to the kernel. (So as far as I see if the ipt_state, etc module aliases were added to the conntrack module, even the state kernel module could be removed.) References Firewall questions about state and policy? iptables: differences using conntrack or state module
{ "source": [ "https://unix.stackexchange.com/questions/108169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
108,171
I came across the following config file:

    # Generated by iptables-save v1.3.1 on Sun Apr 23 06:19:53 2006
    *filter
    :INPUT ACCEPT [368:102354]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [92952:20764374]
    -A INPUT -i lo -j ACCEPT
    -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
    -A INPUT -j DROP
    COMMIT
    # Completed on Sun Apr 23 06:19:53 2006

Does anyone know what [368:102354], [0:0] and [92952:20764374] mean?
The two values correspond to the number of packets and the number of bytes that the chain's default policy has been applied to so far (see this other answer for details). According to the source code in iptables-save.c itself:

    /* Dump out chain names first,
     * thereby preventing dependency conflicts */
    for (chain = iptc_first_chain(h); chain; chain = iptc_next_chain(h)) {
        printf(":%s ", chain);
        if (iptc_builtin(chain, h)) {
            struct xt_counters count;
            printf("%s ", iptc_get_policy(chain, &count, h));
            printf("[%llu:%llu]\n", (unsigned long long)count.pcnt, (unsigned long long)count.bcnt);
        } else {
            printf("- [0:0]\n");
        }
    }

And the structure xt_counters is defined as follows in include/linux/netfilter/x_tables.h:

    struct xt_counters {
        __u64 pcnt, bcnt;   /* Packet and byte counters */
    };

Note also that chains which are not builtin are marked with [0:0] anyway (it's a quirk in the code).
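If you want to watch these counters outside of a dump file, iptables can print them directly; a small illustration (standard iptables options, run as root):

    iptables -L -v -n    # list chains with per-chain and per-rule packet/byte counters
    iptables-save -c     # same [packets:bytes] prefixes in iptables-save format
    iptables -Z          # reset all counters back to [0:0]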
{ "source": [ "https://unix.stackexchange.com/questions/108171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
108,174
Before Mavericks, I could use /etc/launchd.conf file to change maximum system resource consumption, for example: limit maxfiles 16384 unlimited limit maxproc 16384 unlimited It no longer works in Mavericks. What is the correct way to do it in the recent version of OS X?
Shell Session Limit

The limits set via ulimit only affect processes created by the current shell session. The "soft limit" is the actual limit that is used. It can be set, as long as it's not greater than the "hard limit". The "hard limit" can also be set, but only to a value less than the current one, and only to a value not less than the "soft limit". The "hard limit", as well as system-wide limits, can be raised by root (the administrator) by executing system configuration commands or modifying system configuration files.

After you terminate the shell session (by Ctrl + D, exit, or closing the Terminal.app window, etc.), the settings are gone. If you want the same setting in the next shell session, add the setting to the shell startup script. NOTE: If you are using bash, then it should be ~/.bash_profile or ~/.bash_login. If you are using other shells, it should probably be ~/.profile.

System Limit (Requires Reboot to Take Effect)

For 10.9 (Mavericks), 10.10 (Yosemite), 10.11 (El Capitan), and 10.12 (Sierra):

You have to create a file at /Library/LaunchDaemons/limit.maxfiles.plist (owner: root:wheel, mode: 0644):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
      <dict>
        <key>Label</key>
        <string>limit.maxfiles</string>
        <key>ProgramArguments</key>
        <array>
          <string>launchctl</string>
          <string>limit</string>
          <string>maxfiles</string>
          <string>262144</string>
          <string>524288</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
        <key>ServiceIPC</key>
        <false/>
      </dict>
    </plist>

You should change the numbers according to your needs. They are the "soft limit" (262144) and the "hard limit" (524288) respectively. For more information, consult the manual page by running man launchd.plist.

For 10.8 (Mountain Lion):

You may add the following lines to /etc/sysctl.conf (owner: root:wheel, mode: 0644):

    kern.maxfiles=524288
    kern.maxfilesperproc=262144

You should change the numbers according to your needs. They are the "system-wide limit" (kern.maxfiles) and the "per-process limit" (kern.maxfilesperproc) respectively. For more settings, consult the manual page by running man sysctl, or read the source code at /usr/include/sys/sysctl.h.

For older Mac OS X (I guess it works on 10.7 (Lion) or before):

You may add the following line to /etc/launchd.conf (owner: root:wheel, mode: 0644):

    limit maxfiles 262144 524288

You should change the numbers according to your needs. They are the "soft limit" (262144) and the "hard limit" (524288) respectively.

If the system doesn't let you set the limits above a certain value...

The system doesn't let you set a value higher than a "hard maximum" (proposed by Apple). To increase this "hard maximum", you have to purchase "OS X Server" from the App Store, then you have to execute the following command once:

    sudo serverinfo --setperfmode true

This activates "server performance mode" on your machine. You can then set the maximum according to the configuration of your machine (see this). I tried this before (on Mountain Lion and Mavericks) and it works! Please see my post (here) for more information.

References

- Open Files Limit | riakdocs
- HT3854 Not applicable in Mac OS X Server v10.8 (Mountain Lion)?
- Mac OS X Server v10.6: Understanding process limits - Apple Support
- OS X Server: Dedicating system resources for high performance services - Apple Support
- launchctl(1) Mac OS X Manual Page
- launchd.conf(5) Mac OS X Manual Page
- launchd.plist(5) Mac OS X Manual Page
- sysctl(8) Mac OS X Manual Page
{ "source": [ "https://unix.stackexchange.com/questions/108174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22534/" ] }
108,216
When I run find with -execdir I don't get the results I was expecting. For example:

    mkdir -p a/b/c
    find . -type d -execdir touch foo \;

    $ tree a
    a
    ├── b
    │   ├── c
    │   └── foo
    └── foo

Directory c does not contain a foo file. How do I get find to visit and do something locally in each directory?
For every matching file (i.e. every directory), find switches to the directory that contains it (i.e. its parent directory) and executes the specified command. Since the command doesn't use the name of the match, it's never going to act on all the directories. For this particular directory tree, you're doing (cd . && touch foo) # because ./a matches (cd ./a && touch foo) # because ./a/b matches (cd ./a/b && touch foo) # because ./a/b/c matches To create a file in every directory, you can simply use -exec instead of -execdir , provided your implementation of find allows {} inside an argument (many do): find . -type d -exec touch '{}/foo' + For POSIX portability, you would need to do the assembling of the directory name and the file base name manually. find . -type d -exec sh -c 'touch "$0/foo"' {} \; or (slightly faster) find . -type d -exec sh -c 'for d; do touch "$d/foo"; done' sh {} + Alternatively, you can use bash's recursive wildcard matching. Beware that (unlike the corresponding feature in ksh and zsh, and unlike your find command) earlier versions of bash used to recurse under symbolic links to directories. shopt -s globstar for d in **/*/; do touch -- "$d/foo"; done A zsh solution: touch ./**/*(/e[REPLY+=/foo]) Or: (){ touch $^@/foo; } ./**/*(/)
{ "source": [ "https://unix.stackexchange.com/questions/108216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24044/" ] }
108,217
The bash built-in type can be used for this purpose by checking its exit status: Exit Status: Returns success if all of the NAMEs are found; fails if any are not found. How portable is it? The POSIX spec is a little bit less clear regarding the exit status of type : EXIT STATUS The following exit values shall be returned: 0 Successful completion. >0 An error occurred. Source: http://pubs.opengroup.org/onlinepubs/009695399/utilities/type.html
{ "source": [ "https://unix.stackexchange.com/questions/108217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53904/" ] }
108,271
I have a txt file:

    $ file -i x.txt
    x.txt: text/plain; charset=unknown-8bit
    $ file x.txt
    x.txt: Non-ISO extended-ASCII text, with CRLF line terminators

And there are some characters that are incorrectly encoded: trwa³y, sta³y, usuwaæ

How can I change this file's encoding to UTF-8? I have tried the following so far:

    $ iconv -f ASCII -t UTF-8 x.txt
    iconv: illegal input sequence at position 4

Maybe I should somehow use extended ASCII (high ASCII), but I cannot find it in iconv's encoding list.
file tells you "Non-ISO extended-ASCII text" because it detects that this is:

- most likely a "text" file, from the lack of control characters (byte values 0–31) other than line breaks;
- "extended-ASCII" because there are characters outside the ASCII range (byte values ≥128);
- "non-ISO" because there are characters in the 128–159 range (ISO 8859 reserves this range for control characters).

You have to figure out which encoding this file seems to be in. You can try Enca's automatic recognition. You might need to nudge it in the right direction by telling it in what language the text is.

    enca x.txt
    enca -L polish x.txt

To convert the file, pass the -x option:

    enca -L polish x.txt -x utf8 >x.utf8.txt

If you can't or don't want to use Enca, you can guess the encoding manually. A bit of looking around told me that this is Polish text and the words are trwały, stały, usuwać, so we're looking for a translation where ³ → ł and æ → ć. This looks like latin-2 or latin-10, or more likely (given "non-ISO") CP1250, which you're viewing as latin1. To convert the file to UTF-8, you can use recode or iconv.

    recode CP1250..utf8 <x.txt >x.utf8.txt
    iconv -f CP1250 -t UTF-8 <x.txt >x.utf8.txt
{ "source": [ "https://unix.stackexchange.com/questions/108271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15387/" ] }
108,485
Sometimes I need to send a fragment of code on google-group inline. Text does not help here; I can type it in markdown, convert it to html (using pandoc etc.), attach to mutt as text/html and send it. There is one good solution available here but it uses external sendmail program to send email. I am using mutt which has capabilities to send emails over IMAP by itself.
After you compose a message, but before sending you have lots of options available to you. Press ? to view them. Some that may help here: F to filter the attachment through an external processor Use pandoc -s -f markdown -t html to convert to HTML ^T to edit the attachment MIME type Change from text/plain to text/html . Now a macro that will do everything in one step. Add this to your .muttrc : macro compose \e5 "F pandoc -s -f markdown -t html \ny^T^Utext/html; charset=utf-8\n" set wait_key=no To use this macro, after you have finished composing your message but before you send, press Esc then 5 to convert your markdown formatted message into HTML. You can naturally customize this macro as you see fit. Mutt has lots of key bindings already built in, so whatever key sequence you choose to bind to, make sure it doesn't overwrite something else (or it's something you can live without). The option set wait_key=no suppresses Mutt's Press any key to continue... prompt when external commands are run. If wait_key is yes (which is the default) you'll have to press Esc , then 5 , then any other key to continue.
{ "source": [ "https://unix.stackexchange.com/questions/108485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5362/" ] }
108,493
Are there any command line tools on Linux that allow you to post output from commands or text files directly to a sharing service such as pastebin.com?
There are several services that provide this, but 2 that are pretty easy to use from the command line are fpaste and pastebinit. These 2 tools link to the sites paste.fedoraproject.org and pastebin.com.

fpaste

NOTE: This is a Fedora/CentOS/RHEL only option. If you're using any of the Red Hat based distros you can install the package fpaste, which gives you a command line tool for posting content to paste.fedoraproject.org.

Basic commands

For basic posting of a single text file you can do the following.

    $ fpaste hello_unixnlinux.txt

The above command will return a URL where your content can now be accessed by others.

    ...Uploading (0.1KiB)...
    http://ur1.ca/gddtt -> http://paste.fedoraproject.org/66894/89230131

Other commands

There are of course a whole host of other options.

- paste clipboard: fpaste -i
- paste system info: fpaste --sysinfo
- dry run: fpaste --printonly somefile.txt

See the man page, man fpaste, for more details.

pastebinit

This is probably the more popular of the 2 tools. It's supported on most of the distros I frequent, such as Fedora, CentOS, and Ubuntu, just to name a few. It has similar features to fpaste, but you can do a whole lot more with it, for example:

list of services

For starters we can get a list of all the "supported" URLs via the -l switch.

    $ pastebinit -l
    Supported pastebins:
    - cxg.de
    - fpaste.org
    - p.defau.lt
    - paste.debian.net
    - paste.drizzle.org
    - paste.kde.org
    - paste.openstack.org
    - paste.pocoo.org
    - paste.pound-python.org
    - paste.ubuntu.com
    - paste.ubuntu.org.cn
    - paste2.org
    - pastebin.com
    - pastie.org
    - pb.daviey.com
    - slexy.org
    - sprunge.us

If you don't bother to select one using the -b switch, it will pick one based on your distro, assuming there's one for it, otherwise falling back to pastebin.com.

Posting a simple text file

To post a sample file to pastebin.com:

    $ pastebinit -i hello_unixnlinux.txt -b http://pastebin.com
    http://pastebin.com/d6uXieZj

Posting code

You can also tell it that the content you're pasting is code, using the -f switch. For example here's a Bash script. We're also going to name the upload using the -a switch, so that it will show up with the name "ex_bash_1".

    $ pastebinit -i sample.bash -f bash -a ex_bash_1 -b http://pastebin.com
    http://pastebin.com/jGvyysQ9

A full list of the syntaxes supported is covered in the pastebin.com FAQ under the topic titled: For which languages do you offer syntax highlighting?. For further details be sure to check the man page, man pastebinit.

Samples

Here are 2 examples of the file that I posted to each service.

- fpaste - http://ur1.ca/gddtt
- pastebin - http://pastebin.com/jGvyysQ9

References

- COMMAND LINE OUTPUT TO PASTEBIN - PASTEBINIT
{ "source": [ "https://unix.stackexchange.com/questions/108493", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
108,581
Is there any Linux command one can use to sample subset of a file? For instance, a file contains one million lines, and we want to randomly sample only one thousand lines from that file. For random I mean that every line gets the same probability to be chosen and none of the lines chosen are repetitive. head and tail can pick a subset of the file but not randomly. I know I can always write a python script to do so but just wondering is there a command for this usage.
The shuf command (part of GNU coreutils) can do this:

    shuf -n 1000 file

At least in non-ancient versions (the capability was added in a commit from 2013), it will use reservoir sampling when appropriate, meaning it shouldn't run out of memory, and it uses a fast algorithm.
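A couple of usage variants, assuming GNU coreutils shuf (the file names here are only examples):

    shuf -n 1000 big.txt > sample.txt     # sample 1000 distinct lines into a new file
    shuf -n 1000 -o sample.txt big.txt    # same, using shuf's own output option
    zcat big.txt.gz | shuf -n 1000        # sample directly from a stream or pipe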
{ "source": [ "https://unix.stackexchange.com/questions/108581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30034/" ] }
108,591
Is there any command that would show all the available services in my wheezy Debian based OS? I know that in order to see all the running services you can use service --status-all .
Wheezy uses SysV init, and all the services are controlled with special shell scripts in /etc/init.d, so ls /etc/init.d will list them. These files also contain a description of the service at the top, and the directory contains a README. Some, but not all, of them have a .sh suffix; you should leave that off when using, e.g., update-rc.d.
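To illustrate on a Wheezy box (the script names are just examples of what may be installed):

    ls /etc/init.d                # every installed SysV init script
    head -n 15 /etc/init.d/ssh    # the description block at the top of a script
    ls /etc/rc2.d                 # which services are started in the default runlevel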
{ "source": [ "https://unix.stackexchange.com/questions/108591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53006/" ] }
108,603
Do changes in /etc/security/limits.conf require a reboot before taking effect? For example, if I have a script that sets the following limits in /etc/security/limits.conf , does this require a system reboot before those limits will take effect? * hard nofile 94000 * soft nofile 94000 * hard nproc 64000 * soft nproc 64000
No, but you should close all active sessions and windows, because they still remember the old values. In other words, log out and back in. Any new session, whether a fresh local login or a new SSH connection, will pick up the changed limits (see the quick check below).
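A quick way to verify, assuming bash and the nofile/nproc values from the question:

    # in a session opened before the change: still the old values
    ulimit -n     # soft limit on open files
    ulimit -Hn    # hard limit on open files

    # log out and back in (or open a new SSH session), then check again
    ulimit -n     # should now report 94000
    ulimit -u     # max user processes, should now report 64000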
{ "source": [ "https://unix.stackexchange.com/questions/108603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43029/" ] }
108,613
My friend has an colored image with Chinese handwriting (basically by taking photo of or scanning what he wrote on a piece of white paper), and he would like me to convert it into a black and white binary image. Are there applications under Ubuntu that can accomplish that? Here is an example image:
What you want is referred to as "threshold" in image processing. Basically, it takes an image as an input and outputs an image that has all pixels with a value below a given threshold set to black, and all pixels the value of which is above the threshold set to white. This results in a black-and-white image from an arbitrary input image. Generally, you want to convert to grayscale first for more predictable results, but it is possible to threshold a full-color image as well. You can use a graphical tool such as GIMP to do this interactively (you'll find the tool through the main menu -> Colors -> Threshold), or you can use ImageMagick something like this: convert colored.png -threshold 75% thres_colored.png Running the above command on the example image produces the result shown below. Since thresholding is often somewhat of a trial-and-error process to get a result you're happy with, particularly if the source image is not very close to black-and-white already, I recommend the GUI approach if possible, but if that is not an option for whatever reason you can do it through the command line as well. For finer control of the output, you can use tools like color curves, levels and contrast first to isolate the light and dark portions of the image better before thresholding. (Actually, threshold can be seen as an extreme case of using the color curves tool.)
{ "source": [ "https://unix.stackexchange.com/questions/108613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
108,628
On my Debian system I have a box with certain configuration settings that I like but I've had to change them continuously. There have been times that I've gotten into a some-what irreversible position. Q: how do I create a compressed copy of my hard drive so that I can put it on external storage and reuse it. I think there's a way to use dd but then I want to compress it too to reduce the size to only have in the file that which I need.
{ "source": [ "https://unix.stackexchange.com/questions/108628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55794/" ] }
108,635
I'm trying to escape the space char for a path in Bash, but neither using a backslash or quotes works. .sh script: ROOT="/home/hogar/Documents/files/" FILE=${ROOT}"bdd.encrypted" DESTINATION="/home/hogar/Ubuntu\ One/folder" mv ${FILE} ${DESTINATION} After execute the script ( ./file ) this is the result: mv: target 'One/folder' is not a directory Why does the mv command split the string, and how do I stop this from happening?
You are expanding the DESTINATION variable, if you did echo this is what you would get: echo ${DESTINATION} /home/hogar/Ubuntu\ One/folder But mv doesn't understand this: mv ${FILE} ${DESTINATION} mv: cannot move '/home/hogar/Documents/files/bdd.encrypted' to '/home/hogar/Ubuntu\\ One/folder': No such file or directory (for some reason my mv is more verbose) To prevent this you should use quotes instead: mv "${FILE}" "${DESTINATION}" If you don't need expansion (since you are already expanding before) just using "$..." should suffice: mv "$FILE" "$DESTINATION"
{ "source": [ "https://unix.stackexchange.com/questions/108635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53809/" ] }
108,699
I see these pretty colors in manpages viewed with less when setting a bunch of these variables. What do they mean, where is the documentation? Is this interpreted by less or termcap? screenshot
Termcap is a library that Less uses to access the terminal. Termcap is largely obsolete, having been replaced by Terminfo , but Terminfo offers a Termcap compatibility interface to applications. Less is content with the Termcap interface and uses that. The Termcap library is a description of the terminal's facilities. Each facility is identified by a two-letter (or more generally two-character) code. For example, hc identifies hardcopy terminals (i.e. printers, not screens); co is the number of columns; md starts displaying bold text. Each capability has a value, which can be a boolean (as with hc ), an integer (as with co ) or a string (as with md ). Many of the strings are escape sequences that applications can send to the terminal to achieve a certain effect. Why escape sequences? Because the interface between the terminal and the application is a character stream (more precisely, one character stream in each direction: one for user input, one for output to display). When an application writes a character to the terminal, it is usually displayed. A few characters have a different behavior: they are control characters, which do things like moving the cursor around, switching display attributes, etc. There are a lot more commands than control characters, so most commands are accessed by escape sequences, which begin with a special character (often the escape character, hence the name). For example, when Less wants to display some bold text, it looks up the value of the md capability. This is a string, which Less writes to the terminal. The terminal recognizes this string as an escape sequence, and adjusts its internal state so that subsequent characters will be displayed in bold. In the early days of hardware terminals, different brands had different escape sequences and capabilities; the Termcap database and interface was invented so that applications wouldn't have to know about every terminal model. Nowadays most terminal emulators have very similar capabilities, but the Termcap or Terminfo database is still useful to cope with minor differences. The LESS_TERMCAP_* variables can be set in the environment or in the .lesskey file . It provides Less with alternative values for Terminal capabilities. When Less wants to use a terminal capability, say switch to bold, it first checks if there is a LESS_TERMCAP_md variable. If this variable exists, Less uses its value as the escape sequence to switch to bold. If not, it uses the value from the Termcap database. This mechanism allows the user to override Termcap database settings for Less. The most useful LESS_TERMCAP_* settings are escape sequences. You can map attributes to different attributes. You can use the tput command to look up the value of a capability for the current terminal in the system's Termcap or Terminfo database. You can use escape sequences directly if you don't mind being terminal-dependent. For example, this setting tells Less to display in bold red when instructed to display in bold: LESS_TERMCAP_md=$(tput md; tput AF 1) or if your tput command doesn't support Termcap names: LESS_TERMCAP_md=$(tput bold; tput setaf 1) Man sends Less text with some very simple formatting that can only express bold and italics. In addition, Less uses various formatting capabilities for its internal use, such as to highlight search results and to display the mode line at the bottom. 
Here are some of the escape sequences that Less uses (I only list capabilities that it is reasonably useful to remap):

    termcap  terminfo
    ks       smkx      make the keypad send commands
    ke       rmkx      make the keypad send digits
    vb       flash     emit visual bell
    mb       blink     start blink
    md       bold      start bold
    me       sgr0      turn off bold, blink and underline
    so       smso      start standout (reverse video)
    se       rmso      stop standout
    us       smul      start underline
    ue       rmul      stop underline

To show output in color, use the setaf capability (or AF with Termcap).

The LESS_TERMCAP_* settings are not mentioned in the LESS documentation. The best reference I can offer is my answer here.
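As a concrete illustration (my own example, not part of the quoted tables), a typical set of overrides for colored man pages, using the terminfo names with tput; the color choices are arbitrary:

    export LESS_TERMCAP_md=$(tput bold; tput setaf 1)    # start bold  -> bold red
    export LESS_TERMCAP_me=$(tput sgr0)                  # end bold/blink
    export LESS_TERMCAP_us=$(tput smul; tput setaf 2)    # start underline -> green underline
    export LESS_TERMCAP_ue=$(tput rmul; tput sgr0)       # end underline
    export LESS_TERMCAP_so=$(tput smso)                  # start standout
    export LESS_TERMCAP_se=$(tput rmso)                  # end standout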
{ "source": [ "https://unix.stackexchange.com/questions/108699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56392/" ] }
108,782
I've a command that outputs data to stdout ( command1 -p=aaa -v=bbb -i=4 ). The output line can have the following value: rate (10%) - name: value - 10Kbps I want to grep that output in order to store that 'rate' (I guess pipe will be useful here). And finally, I would like that rate to be a value of a parameter on a second command (let's say command2 -t=${rate} ) It looks to be tricky on my side; I would like to know better how to use pipes, grep, sed and so on. I've tried lots of combinations like that one but I'm getting confused of these: $ command1 -p=aaa -v=bbb -i=4 | grep "rate" 2>&1 command2 -t="rate was "${rate}
You are confusing two very different types of inputs. Standard input ( stdin ) Command line arguments These are different, and are useful for different purposes. Some commands can take input in both ways, but they typically use them differently. Take for example the wc command: Passing input by stdin : ls | wc -l This will count the lines in the output of ls Passing input by command line arguments: wc -l $(ls) This will count lines in the list of files printed by ls Completely different things. To answer your question, it sounds like you want to capture the rate from the output of the first command, and then use the rate as a command line argument for the second command. Here's one way to do that: rate=$(command1 | sed -ne 's/^rate..\([0-9]*\)%.*/\1/p') command2 -t "rate was $rate" Explanation of the sed : The s/pattern/replacement/ command is to replace some pattern The pattern means: the line must start with "rate" ( ^rate ) followed by any two character ( .. ), followed by 0 or more digits, followed by a % , followed by the rest of the text ( .* ) \1 in the replacement means the content of the first expression captured within \(...\) , so in this case the digits before the % sign The -n flag of the sed command means to not print lines by default. The p at the end of the s/// command means to print the line if there was a replacement. In short, the command will print something only if there was a match.
{ "source": [ "https://unix.stackexchange.com/questions/108782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56442/" ] }
108,784
Let's say I have 2 user accounts user1 and user2 . When I login as user1 , and then switch to user2 using su , I can execute command-line programs, but GUI programs fail. Example: user1@laptop:~$ su - user2 user2@laptop:~$ leafpad ~/somefile.txt No protocol specified leafpad: Cannot open display: So how can I run a GUI application?
su vs. su - When becoming another user you generally want to use su - user2 . The dash will force user2's .bash_profile to get sourced. xhost Additionally you'll need to grant users access to your display. This is governed by X. You can use the command xhost + to allow other users permission to display GUI's to user1's desktop. NOTE: When running xhost + you'll want to run this while still in a shell that belongs to user1. $DISPLAY When you become user2 you may need to set the environment variable $DISPLAY . $ export DISPLAY=:0.0
{ "source": [ "https://unix.stackexchange.com/questions/108784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9108/" ] }
108,785
I am running a production server and my web application is running at port 8099. so if front end wants to access any backend endpoint, they make a call to this url: http://production.server.com:8099/mainserver/some/get?xxxxxxx I need to setup production.server.com:8099 to something like api.production-server.com, which will point to production.server.com:8099. What's the best approach to this? This is for ubuntu platform.
{ "source": [ "https://unix.stackexchange.com/questions/108785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41043/" ] }
108,838
I've seen commands to benchmark one's HDD such as this using dd : $ time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" Are there better methods to do so than this?
I usually use hdparm to benchmark my HDD's. You can benchmark both the direct reads and the cached reads. You'll want to run the commands a couple of times to establish an average value. Examples Here's a direct read. $ sudo hdparm -t /dev/sda2 /dev/sda2: Timing buffered disk reads: 302 MB in 3.00 seconds = 100.58 MB/sec And here's a cached read. $ sudo hdparm -T /dev/sda2 /dev/sda2: Timing cached reads: 4636 MB in 2.00 seconds = 2318.89 MB/sec Details -t Perform timings of device reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead. To ensure accurate measurements, the buffer cache is flushed during the processing of -t using the BLKFLSBUF ioctl. -T Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test. Using dd I too have used dd for this type of testing as well. One modification I would make to the above command is to add this bit to the end of your command, ; rm ddfile . $ time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"; rm ddfile This will remove the ddfile after the command has completed. NOTE: ddfile is a transient file that you don't need to keep, it's the file that dd is writing to ( of=ddfile ), when it's putting your HDD under load. Going beyond If you need more rigorous testing of your HDD's you can use Bonnie++ . References How to use 'dd' to benchmark your disk or CPU? Benchmark disk IO with DD and Bonnie++
{ "source": [ "https://unix.stackexchange.com/questions/108838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
108,873
I'm trying to compile wxWidgets using MingW, and I have cygwin in my path, which seems to conflict. So I would like to remove /d/Programme/cygwin/bin from the PATH variable and I wonder if there is some elegant way to do this. The naive approach would be to echo it to a file, remove it manually and source it, but I bet there is better approach to this.
There are no standard tools to "edit" the value of $PATH (i.e. "add folder only when it doesn't already exists" or "remove this folder"). You can execute: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games that would be for the current session, if you want to change permanently add it to any .bashrc, bash.bashrc, /etc/profile - whatever fits your system and user needs. However if you're using BASH, you can also do the following if, let's say, you want to remove the directory /home/wrong/dir/ from your PATH variable, assuming it's at the end: PATH=$(echo "$PATH" | sed -e 's/:\/home\/wrong\/dir$//') So in your case you may use PATH=$(echo "$PATH" | sed -e 's/:\/d\/Programme\/cygwin\/bin$//')
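If escaping the slashes for sed gets tedious, here is an alternative sketch that splits $PATH on colons instead; it assumes standard tr, grep and paste, and the function name is just an example:

    remove_path_entry() {
        # drop every $PATH component that exactly matches $1
        PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | grep -vxF -- "$1" | paste -s -d ':' -)
        export PATH
    }

    remove_path_entry /d/Programme/cygwin/bin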
{ "source": [ "https://unix.stackexchange.com/questions/108873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37423/" ] }
109,003
What's the point of the touch command? I know I can create empty files with it, but so is also the case with echo -n . Otherwise, why would someone need to change the timestamps of a file? Unless to create the false impression about the age of a file, I don't see any other use, and this one is not a legitimate one (from my point of view).
One advantage of touch is that it can specify arbitrary timestamps, while echo -n will always result in the current time. An example of a legitimate use is to update a timestamp of a source code file so a program like make will consider the source file newer than its compiled object and rebuild it. Other uses are to create files that function solely based on their existence e.g. /etc/nologin , which disallows logins if it exists. I'd also argue that touch myfile is a simpler construct than echo -n >> myfile , as well as being shorter to type.
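For example (the file names are made up; -t is the portable form, -d is a GNU coreutils extension):

    # give a file an arbitrary modification time ([[CC]YY]MMDDhhmm[.ss])
    touch -t 201401011230 notes.txt
    touch -d '2014-01-01 12:30' notes.txt   # GNU touch only

    # mark a source file newer than its object so make rebuilds it
    touch src/main.c
    make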
{ "source": [ "https://unix.stackexchange.com/questions/109003", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46600/" ] }
109,032
Suppose I have entered a function at the bash prompt, on multiple lines rather than squeezing it onto one with semicolons:

    $ function blah {
    echo blah
    }
    $ history -1
    12690  function blah {\necho blah\n}

How do I get this to display with real newline characters instead of '\n'?
You can enable and disable this feature within Bash using the shopt command. From the Bash man page. excerpt cmdhist If set, bash attempts to save all lines of a multiple-line command in the same history entry. This allows easy re-editing of multiline commands. lithist If set, and the cmdhist option is enabled, multi-line commands are saved to the history with embedded newlines rather than using semicolon separators where possible. Enables the feature $ shopt -s cmdhist $ shopt -s lithist Disables the feature $ shopt -u cmdhist $ shopt -u lithist Example $ shopt -s cmdhist $ shopt -s lithist Now when I run history : 70 text=$(cat<<'EOF' hello world\ foo\bar EOF) 71 text=$(cat<<'EOF' hello world\ foo\bar EOF ) 72 ls 73 cd IT/ ... ... 414 shopt -s lithist 415 history | less 416 function blah { echo blah } 417 history | less
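To keep this behaviour across sessions, the two options can go into your bash startup file, e.g.:

    # ~/.bashrc
    shopt -s cmdhist    # save multi-line commands as a single history entry
    shopt -s lithist    # store them with embedded newlines instead of semicolons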
{ "source": [ "https://unix.stackexchange.com/questions/109032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
109,216
So, through typing several commands I've found that there's not only ls , but l and la too. There doesn't appear to be any man entries on Ubuntu 12.14. They all appear to do similar things with minor differences: $ ls app config CONTRIBUTING.md doc Gemfile Guardfile LICENSE MAINTENANCE.md Procfile Rakefile script tmp VERSION CHANGELOG config.ru db features Gemfile.lock lib log PROCESS.md public README.md spec vendor $ la app CHANGELOG config.ru db features Gemfile .git Guardfile LICENSE MAINTENANCE.md Procfile Rakefile .rspec .secret spec .travis.yml VERSION .bundle config CONTRIBUTING.md doc .foreman Gemfile.lock .gitignore lib log PROCESS.md public README.md script .simplecov tmp vendor $ l app/ config/ CONTRIBUTING.md doc/ Gemfile Guardfile LICENSE MAINTENANCE.md Procfile Rakefile script/ tmp/ VERSION CHANGELOG config.ru db/ features/ Gemfile.lock lib/ log/ PROCESS.md public/ README.md spec/ vendor/ Just as a bit of trivia, are there more of these and what do they do? Is here any place to find this out? Unfortunately, google searching these commands gets ignored because they're so short.
Aliases ls is a command, l and la are most likely aliases which make use of the command ls . If you run the command alias you can find all the aliases on your system. $ alias | grep -E ' l=| la=' This will return all the aliases that match the pattern l=... or la=... . Debugging it further You can also use the command type to see how a particular command is getting executed. Is it a command, an alias, or a function. Example On my system I have the command ls aliased so that it calls ls but also includes a bunch of extra switches, like so: $ type -a ls ls is aliased to `ls --color=auto' ls is /usr/bin/ls ls is /bin/ls In the above output you can see that ls is aliases, but then also on my system's $PATH in the directories /usr/bin and /bin .
{ "source": [ "https://unix.stackexchange.com/questions/109216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56671/" ] }
109,380
Why does sshd require an absolute path when restarting, e.g /usr/sbin/sshd rather than sshd Are there any security implications? P.S the error message: # sshd sshd re-exec requires execution with an absolute path
This is specific to OpenSSH from version 3.9 onwards. For every new connection, sshd will re-execute itself, to ensure that all execute-time randomisations are re-generated for each new connection. In order for sshd to re-execute itself, it needs to know the full path to itself. Here's a quote from the release notes for 3.9: Make sshd(8) re-execute itself on accepting a new connection. This security measure ensures that all execute-time randomisations are reapplied for each connection rather than once, for the master process' lifetime. This includes mmap and malloc mappings, shared library addressing, shared library mapping order, ProPolice and StackGhost cookies on systems that support such things In any case, it is usually better to restart a service using either its init script (e.g. /etc/init.d/sshd restart ) or using service sshd restart . If nothing else, it will help you verify that the service will start properly after the next reboot... ( original answer, now irrelevant: My first guess would be that /usr/sbin isn't in your $PATH. )
{ "source": [ "https://unix.stackexchange.com/questions/109380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
109,382
I often start to read a huge file and then want to quit after a while, but there is a lag from pressing Ctrl + C to the program stops. Is there a chance of shortening the lag by pressing the Ctrl + C key several times? Or am I wasting my keypresses?
After the first Ctrl-C , the program will receive SIGINT and usually starts cleaning up (deleting tmp files, closing sockets, etc.). If you hit Ctrl-C again while that is going on, it may happen that you interrupt the clean up routine (i.e. the additional signal might be acted upon instead of being left alone), leaving a mess behind. While this usually is not the case, more commonly the additional signals are in fact sent after the process finished (because of the inherent delays in the interaction of the operator with the system). That means that signals are received by another process (often shell, but not always). If that recipient doesn't handle this signal properly (like shell usually does - see Jenny D's answer) you may be unpleasantly surprised by the outcome of such an action.
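As a rough sketch of the kind of clean-up a program runs when it gets the first SIGINT (a made-up POSIX sh script, not anything from the answer above):

    #!/bin/sh
    tmpfile=$(mktemp)
    # on Ctrl-C: remove the temporary file, then exit with the conventional 128+SIGINT code
    trap 'rm -f "$tmpfile"; exit 130' INT

    # pretend to be a long-running job that owns a temporary file
    while :; do
        date >> "$tmpfile"
        sleep 1
    done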
{ "source": [ "https://unix.stackexchange.com/questions/109382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26674/" ] }
109,443
I'm trying to open the Port 80 in my CentOS 6.5, on my virtual machine, so I can access the apache from my desktop's browser. If you take a look at the screenshot above.... I've added the line before the blue arrow, as is written on http://www.cyberciti.biz/faq/linux-iptables-firewall-open-port-80/ Now I do get the apache test page when entering the IP-address in my browser, but still when restarting the iptables, I get a "FAILED" when CentOS tries to apply the new rule. Does anyone know a solution for this? Or do I need to ignore the failure?
Rather than key the rules in manually you can use iptables to add the rules to the appropriate chains and then save them. This will allow you to debug the rules live, confirming they're correct, rather than having to add them to the file like you appear to be doing. To open port 80 I do this: $ sudo iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT $ sudo /etc/init.d/iptables save The last command will save the added rules. This is the rule I would use to open up the port for web traffic. Why your rule is causing issues If you notice the rule you're attempting to use: -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT Has a chain called "RH-Firewall-1-INPUT". If you do not have this chain, or a link from the INPUT chain to this chain, then this rule will never be reachable. This rule could likely be like this: -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT Or your INPUT chain should link to this chain RH-Firewall-1-INPUT with a rule like this: $ sudo iptables --list Chain INPUT (policy ACCEPT) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 .... NOTE: You can see what chains you have with this command: $ sudo iptables -L| grep Chain Chain INPUT (policy ACCEPT) Chain FORWARD (policy ACCEPT) Chain OUTPUT (policy ACCEPT) ... Also the states might need to be modified so that existing connections are allowed as well. -A INPUT -m state --state NEW,ESTABLISHED -m tcp -p tcp --dport 80 -j ACCEPT Also when you use the -A switch you're appending the rule to chain INPUT . If there are other rules before it that are blocking and/or interfering with the reaching of this rule, it will never get executed. So you might want to move it to the top by inserting rather than appending, like this: -I INPUT -m state --state NEW,ESTABLISHED -m tcp -p tcp --dport 80 -j ACCEPT Using the GUI Firewalls can be complicated beasts. So you might want to try the TUI instead (TUI's are GUI's for the terminal). $ sudo system-config-firewall-tui You can then go through the various screens setting up iptables rules. References Linux Firewall Tutorial: IPTables Tables, Chains, Rules Fundamentals
{ "source": [ "https://unix.stackexchange.com/questions/109443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56816/" ] }
109,496
I am currently having some issues with the cache. It is a little too much right now so I wanted to clear it. I googled and found this little command: sync && echo 3 > /proc/sys/vm/drop_caches . I am logged in as root over SSH (not using sudo). These are the attempts I made: root@server: ~ # ll /proc/sys/vm/drop_caches -rw-r--r-- 1 root root 0 15. Jan 20:21 /proc/sys/vm/drop_caches root@server: ~ # echo 3 > /proc/sys/vm/drop_caches -bash: /proc/sys/vm/drop_caches: Permission denied root@server: ~ # sudo su -c "echo 3 > /proc/sys/vm/drop_caches" bash: /proc/sys/vm/drop_caches: Permission denied root@server: ~ # echo 3 | sudo tee /proc/sys/vm/drop_caches tee: /proc/sys/vm/drop_caches: Permission denied 3 It is a remote machine running Debian. As far as I know there are some vCores in this machine and it uses Virtuozzo for the virtualization. I really just want to clear the cache (So I can only access it using SSH) . I also tried registering this as a cronjob. But it simply fails too!
I am logged in as root over SSH...It is a remote machine running Debian. Is it actually a remote machine, or a just a remote system ? If this is a VPS slice somewhere, (at least some forms of) OS virtualization (e.g. openVZ) won't permit this from within the container. You don't run the machine, you just run your slice.
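If you are not sure whether you are inside an OpenVZ container, a couple of quick, best-effort checks (none of them is authoritative on every kernel/tool version):
$ cat /proc/user_beancounters   # readable inside OpenVZ containers (shows the container's limits)
$ ls /proc/vz                   # present on OpenVZ hosts and containers
$ sudo virt-what                # if installed, prints the detected virtualization type, e.g. "openvz"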
{ "source": [ "https://unix.stackexchange.com/questions/109496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45867/" ] }
109,536
I accidentally "stopped" my telnet process. Now I can neither "switch back" into it, nor can I kill it (it won't respond to kill 92929 , where 92929 is the processid.) So, my question is, if you have a stopped process on linux command line, how do you switch back into it, or kill it, without having to resort to kill -9 ?
The easiest way is to run fg to bring it to the foreground: $ help fg fg: fg [job_spec] Move job to the foreground. Place the job identified by JOB_SPEC in the foreground, making it the current job. If JOB_SPEC is not present, the shell's notion of the current job is used. Exit Status: Status of command placed in foreground, or failure if an error occurs. Alternatively, you can run bg to have it continue in the background: $ help bg bg: bg [job_spec ...] Move jobs to the background. Place the jobs identified by each JOB_SPEC in the background, as if they had been started with `&'. If JOB_SPEC is not present, the shell's notion of the current job is used. Exit Status: Returns success unless job control is not enabled or an error occurs. If you have just hit Ctrl Z , then to bring the job back just run fg with no arguments.
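A short example session tying it together (the job numbers will of course differ on your machine):
$ sleep 300        # oops, forgot the &
^Z                 # Ctrl-Z suspends it
[1]+  Stopped      sleep 300
$ jobs             # list jobs with their job IDs
[1]+  Stopped      sleep 300
$ bg %1            # let job 1 continue in the background
$ fg %1            # ...or bring it back to the foreground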
{ "source": [ "https://unix.stackexchange.com/questions/109536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37011/" ] }
109,563
About 5 times a day, I type "vi" when I meant "cd", and end up opening a directory in vi. It's making me NUTS. It seems like there should be a way to detect when I type in "vi + directory" and automatically change it to "cd + directory". Thoughts?
With the assumption that you call vi with the directory as the last argument: vi() { if [[ -d ${!#} ]]; then cd "$@" else command vi "$@" fi }
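For what it's worth, ${!#} is bash indirect expansion: $# is the number of arguments, so ${!#} is the last argument — the function only switches to cd when that last argument is an existing directory. A slightly more defensive variant (still just a sketch) also guards against being called with no arguments at all:
vi() {
    if [ "$#" -gt 0 ] && [ -d "${!#}" ]; then
        cd "$@"
    else
        command vi "$@"
    fi
}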
{ "source": [ "https://unix.stackexchange.com/questions/109563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56875/" ] }
109,625
I have a shell script where we have following lines if [ -z "$xyz" ] and if [ -n "$abc" ] , but I am not sure what their purpose is. Can anyone please explain?
You can find a very nice reference for bash's operators here . If you are using a different shell, just search for <my shell> operators and you will find everything you need. In your particular case, you are using: -n string is not null. -z string is null, that is, has zero length To illustrate: $ foo="bar"; $ [ -n "$foo" ] && echo "foo is not null" foo is not null $ [ -z "$foo" ] && echo "foo is null" $ foo=""; $ [ -n "$foo" ] && echo "foo is not null" $ [ -z "$foo" ] && echo "foo is null" foo is null
{ "source": [ "https://unix.stackexchange.com/questions/109625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56913/" ] }
109,698
I started an upgrade of my Kubuntu 12.04 system with this command, as usual: sudo apt-get --show-upgraded dist-upgrade I came back later and it had failed: Preconfiguring packages ... (Reading database ... 478306 files and directories currently installed.) Preparing to replace ... Unpacking replacement base-files ... Processing triggers for man-db ... Processing triggers for install-info ... ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.8.0-32-lowlatency gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.8.0-32-lowlatency with 1. dpkg: error processing initramfs-tools (--unpack): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) here's the problem: $ df -h output: Filesystem Size Used Avail Use% Mounted on /dev/sda1 894M 879M 0 100% /boot manually deleted older files and now some space is free Filesystem Size Used Avail Use% Mounted on /dev/sda1 894M 129M 717M 16% /boot I ran this next: sudo apt-get autoremove Next: sudo apt-get -f install output: The following extra packages will be installed: initramfs-tools The following packages will be upgraded: initramfs-tools dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.3.1~); however: Version of initramfs-tools-bin on system is 0.99ubuntu13.4. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) sudo apt-get install initramfs-tools the above fails dpkg -l initramfs-tools output: Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Description +++-==============-==============-============================================ iF initramfs-tool 0.99ubuntu13.3 tools for generating an initramfs sudo apt-get install --reinstall initramfs-tools output: The following packages will be upgraded: initramfs-tools 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. Need to get 0 B/49.2 kB of archives. After this operation, 0 B of additional disk space will be used. dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.3.1~); however: Version of initramfs-tools-bin on system is 0.99ubuntu13.4. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. 
Errors were encountered while processing: initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Here is the output of apt-cache policy initramfs-tools-bin initramfs-tools : initramfs-tools-bin: Installed: 0.99ubuntu13.4 Candidate: 0.99ubuntu13.4 Version table: *** 0.99ubuntu13.4 0 500 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages initramfs-tools: Installed: 0.99ubuntu13.3 Candidate: 0.99ubuntu13.4 Version table: 0.99ubuntu13.4 0 500 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages *** 0.99ubuntu13.3 0 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://us.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages As suggested below, here are my next steps: $ sudo apt-get update $ sudo apt-get -f install initramfs-tools=0.99ubuntu13 initramfs-tools-bin=0.99ubuntu13 Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: ... Use 'apt-get autoremove' to remove them. The following packages will be DOWNGRADED: initramfs-tools initramfs-tools-bin 0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 4 not upgraded. 1 not fully installed or removed. Need to get 59.2 kB of archives. After this operation, 2,048 B disk space will be freed. Do you want to continue [Y/n]? Get:1 http://us.archive.ubuntu.com/ubuntu/ precise/main initramfs-tools all 0.99ubuntu13 [49.2 kB] Get:2 http://us.archive.ubuntu.com/ubuntu/ precise/main initramfs-tools-bin amd64 0.99ubuntu13 [9,988 B] Fetched 59.2 kB in 0s (124 kB/s) dpkg: warning: downgrading initramfs-tools-bin from 0.99ubuntu13.4 to 0.99ubuntu13. (Reading database ... 478624 files and directories currently installed.) Preparing to replace initramfs-tools-bin 0.99ubuntu13.4 (using .../initramfs-tools-bin_0.99ubuntu13_amd64.deb) ... Unpacking replacement initramfs-tools-bin ... Setting up initramfs-tools-bin (0.99ubuntu13) ... dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (>= 0.99ubuntu13.3); however: Version of initramfs-tools-bin on system is 0.99ubuntu13. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Next I tried Giles's suggestion: sudo dpkg --configure -a --force-depends sudo apt-get install -f sudo apt-get dist-upgrade
Your system is in a state which I think should not happen: you have the new version of the dependency initramfs-tools-bin in the installed state, but the old version of the dependent package initramfs-tools in a half-installed state. I'm not sure whether the problem is that APT is letting the system get into a state where it can't recover, dpkg is letting the system get into a state where it can't recover, the package maintainer used a combination of dependencies which isn't supported, or my limited understanding doesn't cover this case. Try using dpkg directly: dpkg --configure -a If this still complains about dependencies, try dpkg --configure -a --force-depends If this works, you have the dpkg database in a consistent state. You need to get APT in a good state (which requires no broken dependencies): apt-get -f install After this you can resume normal upgrading. If your purge of /boot was deleting old kernels that were in packages, you won't be able to remove the kernel packages anymore. You'll have to recreate the files. You can create empty files ( touch `cat /var/lib/dpkg/info/linux-image-1.2.3-foo.list` — the .list file records which files the package owns) if you're removing the linux-image-1.2.3-foo package and you manually removed some of its files.
{ "source": [ "https://unix.stackexchange.com/questions/109698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
109,747
In a directory size 80GB with approximately 700,000 files, there are some file names with non-English characters in the file name. Other than trawling through the file list laboriously is there: An easy way to list or otherwise identify these file names? A way to generate printable non-English language characters - those characters that are not listed in the printable range of man ascii (so I can test that these files are being identified)?
Assuming that "foreign" means "not an ASCII character", then you can use find with a pattern to find all files not having printable ASCII characters in their names: LC_ALL=C find . -name '*[! -~]*' (The space is the first printable character listed on http://www.asciitable.com/ , ~ is the last.) The hint for LC_ALL=C is required (actually, LC_CTYPE=C and LC_COLLATE=C ), otherwise the character range is interpreted incorrectly. See also the manual page glob(7) . Since LC_ALL=C causes find to interpret strings as ASCII, it will print multi-byte characters (such as π ) as question marks. To fix this, pipe to some program (e.g. cat ) or redirect to file. Instead of specifying character ranges, [:print:] can also be used to select "printable characters". Be sure to set the C locale or you get quite (seemingly) arbitrary behavior. Example: $ touch $(printf '\u03c0') "$(printf 'x\ty')" $ ls -F dir/ foo foo.c xrestop-0.4/ xrestop-0.4.tar.gz π $ find -name '*[! -~]*' # this is broken (LC_COLLATE=en_US.UTF-8) ./x?y ./dir ./π ... (a lot more) ./foo.c $ LC_ALL=C find . -name '*[! -~]*' ./x?y ./?? $ LC_ALL=C find . -name '*[! -~]*' | cat ./x y ./π $ LC_ALL=C find . -name '*[![:print:]]*' | cat ./x y ./π
{ "source": [ "https://unix.stackexchange.com/questions/109747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34921/" ] }
109,804
man 5 crontab is pretty clear on how to use crontab to run a script on boot: These special time specification "nicknames" are supported, which replace the 5 initial time and date fields, and are prefixed by the `@` character: @reboot : Run once after reboot. So I happily added a single line to my crontab (under my user account, not root): @reboot /home/me/myscript.sh But for some reason, myscript.sh wouldn't run on machine reboot. (it runs fine if I invoke it from the command line, so it's not a permissions problem) What am I missing? Update to answer @Anthon's questions: Oracle-linux version: 5.8 (uname: 2.6.32-300.39.2.el5uek #1 SMP) Cron version: vixie-cron-4.1-81.el5.x86_64 Yes, /home is a mounted partition. Looks like this is the problem. How do I workaround this? Currently, myscript.sh only echos a text message to a file in /home/me .
This can be a bit of a confusing topic because there are different implementations of cron. Also there were several bugs that broke this feature, and there are also some use cases where it simply won't work, specifically if you do a shutdown/boot vs. a reboot. Bugs datapoint #1 One such bug in Debian is covered here, titled: cron: @reboot jobs are not run . This seems to have made it's way into Ubuntu as well, which I can't confirm directly. datapoint #2 Evidence of the bug in Ubuntu would seem to be confirmed here in this SO Q&A titled: @reboot cronjob not executing . excerpt comment #1: .... 3) your version of crond may not support @reboot are you using vix's crond? ... show results of crontab -l -u user comment #2: ... It might be a good idea to set it up as an init script instead of relying on a specific version of cron's @reboot. comment #3: ... @MarkRoberts removed the reboot and modified the 1 * * * * , to */1 * * * * , problem is solved! Where do I send the rep pts Mark? Thank you! The accepted answer in that Q&A also had this comment: Seems to me Lubuntu doesn't support the @Reboot Cron syntax. Additional evidence datapoint #3 As additional evidence there was this thread that someone was attempting the very same thing and getting frustrated that it didn't work. It's titled: Thread: Cron - @reboot jobs not working . excerpt Re: Cron - @reboot jobs not working Quote Originally Posted by ceallred View Post This is killing me... Tried the wrapper script. Running manually generates the log file... rebooting and the job doesn't run or create log file. Syslog shows that CRON ran the job... but again, no output and the process isn't running. Jul 15 20:07:45 RavenWing cron[1026]: (CRON) INFO (Running @reboot jobs) Jul 15 20:07:45 RavenWing CRON[1053]: (ceallred) CMD (/home/ceallred/Scripts/run_spideroak.sh > /home/ceallred/Scripts/SpiderOak.log 2>&1 &) It's seems like cron doesn't like the @reboot command.... Any other ideas? Okay... Partially solved. I'll mark this one as solved and start a new thread with the new issue..... I think the answer was my encrypted home directory wasn't mounted when CRON was trying to run the script (stored in /home/username/scripts). Moved to /usr/scripts and the job runs as expected. So now it appears to be a spideroak issue. Process starts, but by the time the boot process is finished, it's gone. I'm guessing a crash for some reason.... New thread to ask about that. Thanks for all the help! Once this above user figured out his issue he was able to get @reboot working out of the crontab entry of a user. I'm not entirely sure what version of cron is used on Ubuntu, but this would seem to indicate that user's can use @reboot too, or that the bug was fixed at some point in subsequent versions of cron. datapoint #4 I tested on CentOS 6 the following and it worked. Example $ crontab -l @reboot echo "hi" > /home/sam/reboot.txt 2>&1 I then rebooted the system. $ sudo reboot After the reboot. $ cat reboot.txt hi Take aways This feature does seem to be supported for both system and user crontab entries. You have to make sure that it's supported/working in your particular distro and/or version of the cron package. For more on how the actual mechanism works for @reboot I did come across this blog post which discusses the innards. It's titled: @reboot - explaining simple cron magic . Debugging crond You can turn up the verbosity of crond by adding the following to this configuration file on RHEL/CentOS/Fedora based distros. $ more crond # Settings for the CRON daemon. 
# CRONDARGS= : any extra command-line startup arguments for crond CRONDARGS="-L 2" The valid levels are 0, 1, or 2. To revert this file back to its default logging level, simply remove the "-L 2" when you're done debugging the situation.
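A pattern that tends to work, assuming your distribution's cron supports @reboot at all: use absolute paths (cron's PATH is minimal), keep the script outside any encrypted or late-mounted home directory, and log somewhere you can inspect afterwards. The paths below are only illustrative:
# crontab -e
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
@reboot /usr/local/bin/myscript.sh >> /var/tmp/myscript.boot.log 2>&1
If it still doesn't fire, grep for "Running @reboot jobs" in /var/log/syslog (Debian/Ubuntu) or /var/log/cron (RHEL/CentOS) to see whether cron even attempted it.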
{ "source": [ "https://unix.stackexchange.com/questions/109804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28712/" ] }
109,827
I'm trying to see how many times foo bar appears in /var/log/foo.log within an arbitrary amount of time on a remote server, but nothing that I've tried so far has worked. I've already got a timer script that I use to keep track of how long it has been since I started tailing /var/log/foo.log , and now I'd just like a way to tell how many times foo bar has appeared in the tailed output. I searched google, but I didn't find anything pertinent within the first 10 pages of results. Here's what I've tried with frustrating results: ## works on local machine, but doesn't work as expected on remote tail -f /var/log/foo.log | grep foo\ bar | sed '=' ## works on local, but not remote tail -f /var/log/foo.log | grep foo\ bar | cat -n - ## works on local, but not remote tail -f /var/log/foo.log | grep foo\ bar | awk -F'\n' '{printf "[%d]> ", NR; print $1}' I even tried to write a sed script that'd act like tail -f , but I made limited-to-no headway with that. NOTE the remote server is running an older version of coreutils, and upgrading is an option, but is NOT in any way the desired solution.
tail -f | nl works for me and is the first thing I thought of - that is, if you really want the lines numbered from 1 and not with the real line number from the file being watched. Optionally add grep if needed at the appropriate place (either before or after nl ). However, remember that buffering may occur. In my particular case, grep has the --line-buffered option, but nl buffers its output and doesn't have an option to switch that off. Hence the tail | nl | grep combo doesn't really flow nicely. That said, tail -f | grep -n pattern works for me as well. Numbering starts from the beginning of the "tailing" rather than the beginning of the whole log file.
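If what you ultimately want is a running count of the matches rather than line numbers, a small awk filter avoids the nl buffering issue entirely (awk's output is normally line-buffered when it goes to a terminal):
tail -f /var/log/foo.log | awk '/foo bar/ { printf "[%d] %s\n", ++n, $0 }'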
{ "source": [ "https://unix.stackexchange.com/questions/109827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43029/" ] }
109,835
I have this input, which is displayed in columns. I would like to get the second last column with the numbers of this sample: [ 3] 1.0- 2.0 sec 1.00 MBytes 8.39 Mbits/sec [ 3] 2.0- 3.0 sec 768 KBytes 6.29 Mbits/sec [ 3] 3.0- 4.0 sec 512 KBytes 4.19 Mbits/sec [ 3] 4.0- 5.0 sec 256 KBytes 2.10 Mbits/sec ... If I use cut -d\ -f 13 I get Mbits/sec 6.29 4.19 2.10 because sometimes there are additional spaces in between.
If we use the tr command with its squeeze option ( -s flag ) to convert every run of consecutive spaces to a single space, and then perform the cut operation with a space as the delimiter, we can access the required column carrying the numbers: < file tr -s ' ' | cut -d ' ' -f 8
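An equivalent that sidesteps the delimiter question altogether is to let awk split on any run of whitespace and print the second-to-last field:
awk '{ print $(NF-1) }' file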
{ "source": [ "https://unix.stackexchange.com/questions/109835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
110,240
Are you literally "ending a file" by inputting this escape sequence, i.e. is the interactive shell session is seen as a real file stream by the shell, like any other file stream? If so, which file? Or, is the Ctrl + D signal just a placeholder which means "the user has finished providing input and you may terminate"?
The ^D character (also known as \04 or 0x4, END OF TRANSMISSION in Unicode) is the default value for the eof special control character parameter of the terminal or pseudo-terminal driver in the kernel (more precisely of the tty line discipline attached to the serial or pseudo-tty device ). That's the c_cc[VEOF] of the termios structure passed to the TCSETS/TCGETS ioctl one issues to the terminal device to affect the driver behaviour. The typical command that sends those ioctls is the stty command. To retrieve all the parameters: $ stty -a speed 38400 baud; rows 58; columns 191; line = 0; intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D ; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0; -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel iutf8 opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke That eof parameter is only relevant when the terminal device is in icanon mode. In that mode, the terminal driver (not the terminal emulator) implements a very simple line editor , where you can type Backspace to erase a character, Ctrl-U to erase the whole line... When an application reads from the terminal device, it sees nothing until you press Return at which point the read() returns the full line including the last LF character (by default, the terminal driver also translates the CR sent by your terminal upon Return to LF ). Now, if you want to send what you typed so far without pressing Enter , that's where you can enter the eof character. Upon receiving that character from the terminal emulator, the terminal driver submits the current content of the line, so that the application doing the read on it will receive it as is (and it won't include a trailing LF character). Now, if the current line was empty, and provided the application will have fully read the previously entered lines, the read will return 0 character. That signifies end of file to the application (when you read from a file, you read until there's nothing more to be read). That's why it's called the eof character, because sending it causes the application to see that no more input is available. Now, modern shells, at their prompt do not set the terminal in icanon mode because they implement their own line editor which is much more advanced than the terminal driver built-in one. However, in their own line editor , to avoid confusing the users, they give the ^D character (or whatever the terminal's eof setting is with some) the same meaning (to signify eof ).
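You can watch this behaviour directly with cat, which simply read()s from the terminal (the output file name is arbitrary):
$ stty -a | grep -o 'eof = [^;]*'    # confirm which key currently acts as eof
eof = ^D
$ cat > out.txt
hello<Enter>      - Return submits the line, including the trailing newline
world<Ctrl-D>     - eof mid-line: "world" is submitted with no newline, cat keeps running
<Ctrl-D>          - eof at the start of an empty line: read() returns 0 and cat exits
(and stty eof '^X' would make Ctrl-X play that role instead).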
{ "source": [ "https://unix.stackexchange.com/questions/110240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16279/" ] }
110,251
Any quicker navigation trick to place the line at which the cursor is at the moment to the top of the screen? center of the screen? bottom of the screen?
z Enter or zt puts current line to top of screen z. or zz puts current line to center of screen z- or zb puts current line to bottom of screen ( z Enter , z. , and z- puts the cursor in the first non blank column. zt , zz , and zb leaves the cursor in the current column) More info about scrolling at http://vimdoc.sourceforge.net/htmldoc/scroll.html or in vim type :help scroll-cursor
{ "source": [ "https://unix.stackexchange.com/questions/110251", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17265/" ] }
110,283
I have the following in my ~/.ssh/config. HOST 10.2.192.* USER foo PreferredAuthentications publickey IdentityFile ~/.ssh/foo/id_rsa The above configuration lets me connect to a machine while typing half as many words. ssh 10.2.192.x Before my ssh config, I had to type in all of this: ssh [email protected] -i ~/.ssh/foo/id_rsa However there is one machine in the 10.2.192.x subnet that I want to connect to with password-based authentication instead of key-based authentication. Because ssh looks at my config file and finds a match for PreferredAuthentications publickey I am unable to log in with just my password. I don't intend to ssh into this special snowflake VM often enough to warrant adding a new rule to my ssh config. How can I make ssh ignore my config file just this once, and allow me to authenticate with a password?
To make your ssh client ignore your configuration file, use ssh -F /dev/null [email protected] . Because your subnet's IdentityFile is in ~/.ssh/foo rather than ~/.ssh/ , you don't need to whip up a whole new file to eschew your extant private key. From the ssh man page: -F configfile Specifies an alternative per-user configuration file. If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config.
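For this particular case you don't even have to bypass the whole file — you can override just the offending options on the command line (the host below is a placeholder for your one password-only machine):
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no [email protected]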
{ "source": [ "https://unix.stackexchange.com/questions/110283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }
110,348
Is there a way I can do what is stated in the title from terminal commands, or will I have to look into the codes?
There is no "recipe" to get the meanings of an exit status of a given terminal command. My first attempt would be the manpage: user@host:~# man ls Exit status: 0 if OK, 1 if minor problems (e.g., cannot access subdirectory), 2 if serious trouble (e.g., cannot access command-line argument). Second : Google . See wget as an example. Third : The exit statuses of the shell, for example bash. Bash and it's builtins may use values above 125 specially. 127 for command not found, 126 for command not executable. For more information see the bash exit codes .
{ "source": [ "https://unix.stackexchange.com/questions/110348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55118/" ] }
110,490
I know this is sort of a duplicate of another question ( Why this sort command gives me an empty file? ) but I wanted to expand on the question in response to the answers given. The command shuf example.txt > example.txt Returns a blank file, because the shell truncates the file before shuffling it, leaving only a blank file to shuffle. However, cat example.txt | shuf > example.txt will produce a shuffled file as expected. Why does the pipeline method work when the simple redirection doesn't? If the file is truncated before the commands are run, shouldn't the second method also leave an empty file?
The problem is that > example.txt starts writing to that file, before shuf example.txt starts reading it. So as there was no output yet, example.txt is empty, shuf reads an empty file, and as shuf makes no output in this case, the final result stays empty. Your other command may suffer from the same issue. > example.txt may kill the file before cat example.txt starts reading it; it depends on the order the shell executes those things, and how long it takes cat to actually open the file. To avoid such issues entirely, you could use shuf example.txt > example.txt.shuf && mv example.txt.shuf example.txt . Or you could go with shuf example.txt --output=example.txt instead.
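If you find yourself doing this kind of in-place rewrite often, sponge from the moreutils package exists for exactly this: it soaks up all of its input before it opens (and truncates) the output file:
shuf example.txt | sponge example.txt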
{ "source": [ "https://unix.stackexchange.com/questions/110490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56089/" ] }
110,522
I'm setting the timezone to GMT+6 on my Linux machine by copying the zoneinfo file to /etc/localtime , but the date command is still showing the time as UTCtime-6 . Can any one explain to me this behavior? I'm assuming the date command should display UTCtime+6 time. Here are steps I'm following: date Wed Jan 22 17:29:01 IST 2014 date -u Wed Jan 22 11:59:01 UTC 2014 cp /usr/share/zoneinfo/Etc/GMT+6 /etc/localtime date Wed Jan 22 05:59:21 GMT+6 2014 date -u Wed Jan 22 11:59:01 UTC 2014
Take a look at this blog post titled: How To: 2 Methods To Change TimeZone in Linux . Red Hat distros If you're using a distribution such as Red Hat then your approach of copying the file would be mostly acceptable. NOTE: If you're looking for a distro-agnostic solution, this also works on Debian, though there are simpler approaches below if you only need to be concerned with Debian machines. $ ls /usr/share/zoneinfo/ Africa/ CET Etc/ Hongkong Kwajalein Pacific/ ROK zone.tab America/ Chile/ Europe/ HST Libya Poland Singapore Zulu Antarctica/ CST6CDT GB Iceland MET Portugal Turkey Arctic/ Cuba GB-Eire Indian/ Mexico/ posix/ UCT Asia/ EET GMT Iran MST posixrules Universal Atlantic/ Egypt GMT0 iso3166.tab MST7MDT PRC US/ Australia/ Eire GMT-0 Israel Navajo PST8PDT UTC Brazil/ EST GMT+0 Jamaica NZ right/ WET Canada/ EST5EDT Greenwich Japan NZ-CHAT ROC W-SU I would recommend linking to it rather than copying however. $ sudo unlink /etc/localtime $ sudo ln -s /usr/share/zoneinfo/Etc/GMT+6 /etc/localtime Now date shows the different timezone: $ date -u Thu Jan 23 05:40:31 UTC 2014 $ date Wed Jan 22 23:40:38 GMT+6 2014 Ubuntu/Debian Distros To change the timezone on either of these distros you can use this command: $ sudo dpkg-reconfigure tzdata $ sudo dpkg-reconfigure tzdata Current default time zone: 'Etc/GMT-6' Local time is now: Thu Jan 23 11:52:16 GMT-6 2014. Universal Time is now: Thu Jan 23 05:52:16 UTC 2014. Now when we check it out: $ date -u Thu Jan 23 05:53:32 UTC 2014 $ date Thu Jan 23 11:53:33 GMT-6 2014 NOTE: There's also this option in Ubuntu 14.04 and higher with a single command (source: Ask Ubuntu - setting timezone from terminal ): $ sudo timedatectl set-timezone Etc/GMT-6 On the use of "Etc/GMT+6" excerpt from @MattJohnson's answer on SO Zones like Etc/GMT+6 are intentionally reversed for backwards compatibility with POSIX standards. See the comments in this file . You should almost never need to use these zones. Instead you should be using a fully named time zone like America/New_York or Europe/London or whatever is appropriate for your location. Refer to the list here .
{ "source": [ "https://unix.stackexchange.com/questions/110522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57432/" ] }
110,557
Is there a way to avoid ssh printing warning messages like this? @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Although the remote host identity has changed but I know it is fine and just want to get rid of this warning.
Four ways: To just connect once to a system with a new host key, without having to answer questions, connect with the following option: ssh -q -o "StrictHostKeyChecking no" this.one.host.name To permanently remove the warning for all systems, edit your ~/.ssh/config file to add the following lines: Host * StrictHostKeyChecking no To permanently remove all warnings for this one server, edit your ~/.ssh/config file and add the following lines: Host this.one.hostname StrictHostKeyChecking no To remove the warning for this one change for this one server, remove the host key for that server from ~/.ssh/known_hosts . The next time you connect, the new host key will be added.
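For the fourth option you don't have to edit ~/.ssh/known_hosts by hand — ssh-keygen can remove the stale entry for you (replace the names with the host from your warning message):
ssh-keygen -R remotehost.example.com
ssh-keygen -R 192.0.2.10     # also remove an entry stored under the IP address, if there is one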
{ "source": [ "https://unix.stackexchange.com/questions/110557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28650/" ] }
110,558
As root, I'm connecting to a remote host to execute a command. Only "standarduser" has the appropriate id-file and correct .ssh/config, so I'm switching the user first: su standarduser -c 'ssh -x remotehost ./remotecommand' The command works fine, but despite the fact that I used "-x" (disable X11-Forwarding) and having X11Forwards disabled in /etc/ssh/ssh_config , I still get the error message: X11 connection rejected because of wrong authentication. I'm not getting the error message when I'm logged in as "standarduser". This is quite annoying as I would like to integrate the command in a cron job file. I understand that the error message refers to the wrong authentication of root's .XAuth file, but I'm not even trying to connect via X11. Why is "ssh -x" not disabling the X11 connection and throwing the error message? UPDATE : The message only shows when I'm logged in within a screen, when using the command stated above on the local machine itself (without screen), I don't get an error message, so this should be fine with cron, too. I also started the same command with -v and surprisingly got the error message FIRST, even before the status information from SSH: root@localhost:~# su standarduser -c 'ssh -x remotehost ./remotecommand' X11 connection rejected because of wrong authentication. OpenSSH_6.2p2 Ubuntu-6ubuntu0.1, OpenSSL 1.0.1e 11 Feb 2013 This led me to the problem itself, it is NOT the ssh which is throwing the error message, it's su : root@localhost:~# su standarduser -c 'echo Hi' X11 connection rejected because of wrong authentication. Hi Why do I only get this error within screen ? How can I disable this error message?
Seems like your root lacks some X11 magic cookie in the .Xauthority , which your standarduser has. Here is how to fix this. SHORT VERSION (thanks to @bmaupin ) standarduser@localhost:~$ xauth list | grep unix`echo $DISPLAY | cut -c10-12` > /tmp/xauth standarduser@localhost:~$ sudo su root@localhost:~$ xauth add `cat /tmp/xauth` Attention: check the backticks! They cannot be replaced with quotes! You need sudo installed to proceed further the second command! ORIGINAL LONG VERSION To fix things, first detect which display number standarduser uses: standarduser@localhost:~$ echo $DISPLAY localhost:21.0 In this case it is 21.0 . Secondly, display standarduser 's list of cookies: standarduser@localhost:~$ xauth list localhost/unix:1 MIT-MAGIC-COOKIE-1 51a3801fd7776704575752f09015c61d localhost/unix:21 MIT-MAGIC-COOKIE-1 0ba2913f8d9df0ee9eda295cad7b104f localhost/unix:22 MIT-MAGIC-COOKIE-1 22ba6595c270f20f6315c53e27958dfe localhost/unix:20 MIT-MAGIC-COOKIE-1 267f68b51726a8a381cfc10c91783a13 The cookie for the 21.0 display is the second in the list and ends with 104f . The last thing to do is to add this particular cookie to the root's .Xauthority . Log in as root and do the following: root@localhost:~$ xauth add localhost/unix:21 MIT-MAGIC-COOKIE-1 0ba2913f8d9df0ee9eda295cad7b104f This is how you can mitigate the X11 connection rejected because of wrong authentication error when you run su as a different user in Bash script or screen . Thanks to this guy for inspiration.
{ "source": [ "https://unix.stackexchange.com/questions/110558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47505/" ] }
110,613
I am running Linux Mint Debian edition (essentially Debian testing) and the Cinnamon desktop environment. Every time I launch google-chrome it asks to become the default browser. I have told it to do so in all ways I can think of but I still get this pop-up: What I have tried: Clicking on "Set as default" in the pop-up. Making chrome the default in its settings: Using my desktop environment's (cinnamon) settings app to set it as default: Associating it with all relevant mimetypes in the various ways and files where such things are defined: $ xdg-mime query default text/html chrome.desktop $ grep chrome .local/share/applications/mimeapps.list text/html=chrome.desktop x-scheme-handler/http=chrome.desktop x-scheme-handler/https=chrome.desktop x-scheme-handler/about=google-chrome.desktop x-scheme-handler/about=google-chrome.desktop; text/html=emacs.desktop;google-chrome.desktop;firefox.desktop; x-scheme-handler/http=chrome.desktop; $ grep chrome /usr/share/applications/defaults.list application/xhtml+xml=google-chrome.desktop text/html=google-chrome.desktop text/xml=gedit.desktop;pluma.desktop;google-chrome.desktop x-scheme-handler/http=google-chrome.desktop x-scheme-handler/https=google-chrome.desktop In those files, I replaced all occurrences of firefox (my previous default) with google-chrome . No other browsers are defined anywhere in the file: $ grep -E 'firefox|opera|chromium' /usr/share/applications/defaults.list \ .local/share/applications/mimeapps.list $ Launching chrome as root in case that helps but it won't let me: Using Debian's alternatives system to set it as default: $ sudo update-alternatives --install /usr/bin/www-browser www-browser /usr/bin/google-chrome 1080 update-alternatives: using /usr/bin/google-chrome to provide /usr/bin/www-browser (www-browser) in auto mode $ ls -l /etc/alternatives/www-browser lrwxrwxrwx 1 root root 22 Jan 23 17:03 /etc/alternatives/www-browser -> /usr/bin/google-chrome None of these seem to have any effect. Will no one rid me of this turbulent pop-up?
For Chromium, when I choose "Don't ask again", Chromium stores the following setting in my ~/.config/chromium/Profile 1/Preferences file: { "alternate_error_pages": { "enabled": false }, "apps": { "shortcuts_have_been_created": true }, "autofill": { "negative_upload_rate": 1.0, "positive_upload_rate": 1.0 }, "bookmark_bar": { "show_on_all_tabs": true }, "bookmark_editor": { "expanded_nodes": [ "1" ] }, "browser": { "check_default_browser": false, [...] For standard Google Chrome: Close Chrome, then open the Preferences file in a text editor — on Linux it is ~/.config/google-chrome/Default/Preferences (on macOS it lives at ~/Library/Application Support/Google/Chrome/Default/Preferences). Search for "browser":{ and replace it with "browser":{"check_default_browser":false, When you start Chrome back up it shouldn't prompt you anymore. Note: The preferences setting seems to differ substantially between Chrome versions. On Chrome-78.0 the setting "browser":{"default_browser_infobar_last_declined":"13236762067983049"} seems to work. I assume it simulates clicking the x .
{ "source": [ "https://unix.stackexchange.com/questions/110613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
110,624
I had troubles with the screen brightness control in my laptop and I fixed it by adding the acpi_osi=linux and acpi_backlight=vendor parameters to the file grub.cfg . I'd like to know what these parameters mean and why they work.
The kernel parameters are documented at kernel.org . To understand what acpi_osi does, you roughly need to know how ACPI works. ACPI consists of so-called tables that the BIOS loads into RAM before the operating system starts. Some of them simply contain information about essential devices on the mainboard in a fixed format, but some like the DSDT table contain AML code. This code is executed by the operating system and provides the OS with a tree structure describing many devices on the mainboard and callable functions that are executed by the OS when e.g. power saving is enabled. The AML code can ask the OS which OS it is by calling the _OSI function. This is often used by vendors to make workarounds e.g. around bugs in some Windows versions. As many hardware vendors only test their products with the (at that time) latest version of Windows, the "regular" code paths without the workarounds are often buggy. Because of this Linux usually answers yes when asked if it's Windows. Linux also used to answer yes when asked if it's "Linux", but that caused BIOS vendors to work around bugs or missing functionality in the (at that time) latest Linux kernel version instead of opening bug reports or providing patches. When these bugs were fixed the workarounds caused unnecessary performance penalities and other problems for all later Linux versions. acpi_osi=Linux makes Linux answer yes again when asked if it's "Linux" by the ACPI code, thus allowing the ACPI code to enable workarounds for Linux and/or disable workarounds for Windows. acpi_backlight=vendor changes the order in which the ACPI drivers for backlights are checked. Usually Linux will use the generic video driver, when the ACPI DSDT provides a backlight device claiming standard compatibility and will only check other vendor specific drivers if such a device is not found. acpi_backlight=vendor reverses this order, so that the vendor specific drivers are tried first.
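For completeness, this is the usual way such parameters end up on the kernel command line with GRUB 2 (file locations are the Debian/Ubuntu/Mint defaults; on Fedora/RHEL you would run grub2-mkconfig instead of update-grub):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"
$ sudo update-grub       # regenerates grub.cfg from /etc/default/grub
$ cat /proc/cmdline      # after a reboot, confirms the parameters are actually in effect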
{ "source": [ "https://unix.stackexchange.com/questions/110624", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57493/" ] }
110,645
I use a lot of grep awk sort in my unix shell to work with medium-sized (around 10M-100M lines) tab-separated column text files. In this respect unix shell is my spreadsheet. But I have one huge problem, that is selecting records given a list of IDs. Having table.csv file with format id\tfoo\tbar... and ids.csv file with list of ids, only select records from table.csv with id present in ids.csv . kind of https://stackoverflow.com/questions/13732295/extract-all-lines-from-text-file-based-on-a-given-list-of-ids but with shell, not perl. grep -F obviously produces false positives if ids are variable width. join is an utility I could never figure out. First of all, it requires alphabetic sorting (my files are usually numerically sorted), but even then I can't get it to work without complaining about incorrect order and skipping some records. So I don't like it. grep -f against file with ^id\t -s is very slow when number of ids is large. awk is cumbersome. Are there any good solutions for this? Any specific tools for tab-separated files? Extra functionality will be most welcome too. UPD: Corrected sort -> join
I guess you meant grep -f not grep -F but you actually need a combination of both and -w : grep -Fwf ids.csv table.csv The reason you were getting false positives is (I guess, you did not explain) because if an id can be contained in another, then both will be printed. -w removes this problem and -F makes sure your patterns are treated as strings, not regular expressions. From man grep : -F, --fixed-strings Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. (-F is specified by POSIX.) -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. -f FILE, --file=FILE Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.) If your false positives are because an ID can be present in a non-ID field, loop through your file instead: while read pat; do grep -w "^$pat" table.csv; done < ids.csv or, faster: xargs -I {} grep "^{}" table.csv < ids.csv Personally, I would do this in perl though: perl -lane 'BEGIN{open(A,"ids.csv"); while(<A>){chomp; $k{$_}++}} print $_ if defined($k{$F[0]}); ' table.csv
{ "source": [ "https://unix.stackexchange.com/questions/110645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57502/" ] }
110,648
I have a virtual server (Debian) and the clock fails to sync from time to time (probably because I save/restore state with vboxheadlesstray). To fix this issue I run dpkg-reconfigure ntp && ntpq -p ; it works when I run it as root, but doesn't work with cron. I have added it in crontab -e (as the root user) and am using this line: 1 * * * * dpkg-reconfigure ntp && ntpq -p > /dev/null 2>&1 My ordinary user gets mail about it saying /bin/sh: 1: dpkg-reconfigure: not found , so why is my ordinary user getting the mail and not root, and what do I need to change to make it work?
The error comes from cron's environment rather than from dpkg-reconfigure itself. Cron runs jobs with a very minimal PATH (typically just /usr/bin:/bin ), and dpkg-reconfigure lives in /usr/sbin , so /bin/sh cannot find it — hence the "not found" message. Either call the commands by their full paths: 1 * * * * /usr/sbin/dpkg-reconfigure ntp && /usr/bin/ntpq -p > /dev/null 2>&1 or set a PATH line at the top of the crontab: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Note also that with cmd1 && cmd2 > /dev/null 2>&1 only the second command's output is redirected; if you want the whole job silent, wrap it, e.g. ( /usr/sbin/dpkg-reconfigure ntp && /usr/bin/ntpq -p ) > /dev/null 2>&1 . As for why your ordinary user gets the mail: cron mails a job's output to the crontab's owner unless MAILTO says otherwise, and on many systems mail for root is aliased to a regular user in /etc/aliases — check your MAILTO setting and that aliases file.
{ "source": [ "https://unix.stackexchange.com/questions/110648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49673/" ] }
110,682
I am attempting to install the VMWare player in Fedora 19. I am running into the problem that multiple users have had where VMware player cannot find the kernel headers. I have installed the kernel-headers and kernel-devel packages through yum and the directory that appears in /usr/src/kernels is: 3.12.8-200.fc19.x86_64 However, when I do uname -r my Fedora kernel version is: 3.9.5-301.fc19.x86_64 which is a different version. This seems to mean that when I point VMware player at the path of the kernels I get this error: C header files matching your running kernel were not found. Refer to your distribution's documentation for installation instructions. How can I install the correct kernel headers, and where should I be pointing VMware if it's not /usr/src/kernels/<my-kernel> ?
You can install the correct kernel header files like so: $ sudo yum install "kernel-devel-uname-r == $(uname -r)" Example This command will always install the right version. $ sudo yum install "kernel-devel-uname-r == $(uname -r)" Loaded plugins: auto-update-debuginfo, changelog, langpacks, refresh-packagekit No package kernel-devel-uname-r == 3.12.6-200.fc19.x86_64 available. Error: Nothing to do Or you can search for them like this: $ yum search "kernel-headers-uname-r == $(uname -r)" --disableexcludes=all Loaded plugins: auto-update-debuginfo, changelog, langpacks, refresh-packagekit Warning: No matches found for: kernel-headers-uname-r == 3.12.6-200.fc19.x86_64 No matches found However I've notice this issue as well where specific versions of headers are not present in the repositories. You might have to reach into Koji to find a particular version of a build. Information for build kernel-3.12.6-200.fc19 That page includes all the assets for that particular version of the Kernel.
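Once a kernel-devel package matching uname -r is installed (you may simply need to reboot into the newest installed kernel first), the header tree should appear under /usr/src/kernels — a quick sanity check, with the version naturally depending on your system:
$ ls -d /usr/src/kernels/$(uname -r)
/usr/src/kernels/3.9.5-301.fc19.x86_64
If the VMware installer still prompts for a path, pointing it at that directory (or its include/ subdirectory) is typically what it expects; treat the exact subpath as a guess, since it varies with the VMware Player version.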
{ "source": [ "https://unix.stackexchange.com/questions/110682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57514/" ] }
110,750
What's the difference between du -sh * and du -sh ./* ? Note: What interests me is the * and ./* parts.
$ touch ./-c $'a\n12\tb' foo $ du -hs * 0 a 12 b 0 foo 0 total As you can see, the -c file was taken as an option to du and is not reported (and you see the total line because of du -c ). Also, the file called a\n12\tb is making us think that there are files called a and b . $ du -hs -- * 0 a 12 b 0 -c 0 foo That's better. At least this time -c is not taken as an option. $ du -hs ./* 0 ./a 12 b 0 ./-c 0 ./foo That's even better. The ./ prefix prevents -c from being taken as an option and the absence of ./ before b in the output indicates that there's no b file in there, but there's a file with a newline character (but see below 1 for further digressions on that). It's good practice to use the ./ prefix when possible, and if not and for arbitrary data, you should always use: cmd -- "$var" or: cmd -- $patterns If cmd doesn't support -- to mark the end of options, you should report it as a bug to its author (except when it's by choice and documented like for echo ). There are cases where ./* solves problems that -- doesn't. For instance: awk -f file.awk -- * fails if there is a file called a=b.txt in the current directory (sets the awk variable a to b.txt instead of telling it to process the file). awk -f file.awk ./* Doesn't have the problem because ./a is not a valid awk variable name, so ./a=b.txt is not taken as a variable assignment. cat -- * | wc -l fails if there a file called - in the current directory, as that tells cat to read from its stdin ( - is special to most text processing utilities and to cd / pushd ). cat ./* | wc -l is OK because ./- is not special to cat . Things like: grep -l -- foo *.txt | wc -l to count the number of files that contain foo are wrong because it assumes file names don't contain newline characters ( wc -l counts the newline characters, those output by grep for each file and those in the filenames themselves). You should use instead: grep -l foo ./*.txt | grep -c / (counting the number of lines with a / character is more reliable as there can only be one per filename). For recursive grep , the equivalent trick is to use: grep -rl foo .//. | grep -c // ./* may have some unwanted side effects though. cat ./* adds two more character per file, so would make you reach the limit of the maximum size of arguments+environment sooner. And sometimes you don't want that ./ to be reported in the output. Like: grep foo ./* Would output: ./a.txt: foobar instead of: a.txt: foobar Further digressions 1 . I feel like I have to expand on that here, following the discussion in comments. $ du -hs ./* 0 ./a 12 b 0 ./-c 0 ./foo Above, that ./ marking the beginning of each file means we can clearly identify where each filename starts (at ./ ) and where it ends (at the newline before the next ./ or the end of the output). What that means is that the output of du ./* , contrary to that of du -- * ) can be parsed reliably, albeit not that easily in a script. When the output goes to a terminal though, there are plenty more ways a filename may fool you: Control characters, escape sequences can affect the way things are displayed. For instance, \r moves the cursor to the beginning of the line, \b moves the cursor back, \e[C forward (in most terminals)... many characters are invisible on a terminal starting with the most obvious one: the space character. There are Unicode characters that look just the same as the slash in most fonts $ printf '\u002f \u2044 \u2215 \u2571 \u29F8\n' / ⁄ ∕ ╱ ⧸ (see how it goes in your browser). 
An example: $ touch x 'x ' $'y\bx' $'x\n0\t.\u2215x' $'y\r0\t.\e[Cx' $ ln x y $ du -hs ./* 0 ./x 0 ./x 0 ./x 0 .∕x 0 ./x 0 ./x Lots of x 's but y is missing. Some tools like GNU ls would replace the non-printable characters with a question mark (note that ∕ (U+2215) is printable though) when the output goes to a terminal. GNU du does not. There are ways to make them reveal themselves: $ ls x x x?0?.∕x y y?0?.?[Cx y?x $ LC_ALL=C ls x x?0?.???x x y y?x y?0?.?[Cx See how ∕ turned to ??? after we told ls that our character set was ASCII. $ du -hs ./* | LC_ALL=C sed -n l 0\t./x$ 0\t./x $ 0\t./x$ 0\t.\342\210\225x$ 0\t./y\r0\t.\033[Cx$ 0\t./y\bx$ $ marks the end of the line, so we can spot the "x" vs "x " , all non-printable characters and non-ASCII characters are represented by a backslash sequence (backslash itself would be represented with two backslashes) which means it is unambiguous. That was GNU sed , it should be the same in all POSIX compliant sed implementations but note that some old sed implementations are not nearly as helpful. $ du -hs ./* | cat -vte 0^I./x$ 0^I./x $ 0^I./x$ 0^I.M-bM-^HM-^Ux$ (not standard but pretty common, also cat -A with some implementations). That one is helpful and uses a different representation but is ambiguous ( "^I" and <TAB> are displayed the same for instance). $ du -hs ./* | od -vtc 0000000 0 \t . / x \n 0 \t . / x \n 0 \t . 0000020 / x \n 0 \t . 342 210 225 x \n 0 \t . / y 0000040 \r 0 \t . 033 [ C x \n 0 \t . / y \b x 0000060 \n 0000061 That one is standard and unambiguous (and consistent from implementation to implementation) but not as easy to read. You'll notice that y never showed up above. That's a completely unrelated issue with du -hs * that has nothing to do with file names but should be noted: because du reports disk usage, it doesn't report other links to a file already listed (not all du implementations behave like that though when the hard links are listed on the command line).
{ "source": [ "https://unix.stackexchange.com/questions/110750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55560/" ] }
110,757
I've installed Debian 7 i386 on my VPS (OpenVZ). Everything works fine, except locales - any attempt to install anything shows: [...] perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "pl_PL.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory [...] What I've tried: Generating locales myself - update-locale LC_ALL="pl_PL.UTF-8" - shows: http://www.wklej.org/id/1248438/ apt-get install --reinstall locales http://www.wklej.org/id/1248442/ The same with dpkg-reconfigure locales + setting pl_PL.UTF-8 , pl_PL.ISO-8859-2 or even en_US : http://www.wklej.org/id/1248446/ export LC_ALL=pl_PL.UTF-8 (even on root): -bash: warning: setlocale: LC_ALL: cannot change locale (pl_PL.UTF-8) Here is what shows locale: root:~# locale locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory LANG=pl_PL.UTF-8 LANGUAGE= LC_CTYPE="pl_PL.UTF-8" LC_NUMERIC="pl_PL.UTF-8" LC_TIME="pl_PL.UTF-8" LC_COLLATE="pl_PL.UTF-8" LC_MONETARY="pl_PL.UTF-8" LC_MESSAGES="pl_PL.UTF-8" LC_PAPER="pl_PL.UTF-8" LC_NAME="pl_PL.UTF-8" LC_ADDRESS="pl_PL.UTF-8" LC_TELEPHONE="pl_PL.UTF-8" LC_MEASUREMENT="pl_PL.UTF-8" LC_IDENTIFICATION="pl_PL.UTF-8" LC_ALL= Nothing interesting found in /var/log. Even changing repo to official + purge and manual installation locales doesn't solve my problem, which manifests itself on each fresh installation of Debian 7.
It seems that no locale is generated. Have you selected pl_PL.UTF-8 properly in dpkg-reconfigure locales by pressing space on the corresponding line? If so, the line pl_PL.UTF-8 UTF-8 in /etc/locale.gen should be uncommented (i.e. it does not start with # ). If you have to fix this by hand, you also need to run locale-gen afterwards to generate the locales. Its output should be: Generating locales (this might take a while)... pl_PL.UTF-8... done Generation complete. If it does not list the locales you want to generate, something is wrong with your system. One possible reason is that you have localepurge installed. If there are no files in /usr/share/locale/pl/LC_MESSAGES or /usr/share/locale/pl_PL/LC_MESSAGES , then either localepurge removed them or your locales installation is broken.
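If you prefer to do the whole sequence non-interactively, something along these lines should work on Debian 7 (a sketch assuming the standard /etc/locale.gen location; adjust the locale name if you want a different one):

# uncomment the Polish UTF-8 locale in /etc/locale.gen
sed -i 's/^# *\(pl_PL.UTF-8 UTF-8\)/\1/' /etc/locale.gen
# regenerate the locale data
locale-gen
# set it as the system-wide default
update-locale LANG=pl_PL.UTF-8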
{ "source": [ "https://unix.stackexchange.com/questions/110757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56616/" ] }
110,911
As we know, the shell enables the user to run background processes using & at the command line's end. Each background process is identified by a job ID and, of course, by it's PID. When I'm executing a new job, the output is something like [1] 1234 (the second number is the process ID). Trying to invoke commands like stop 1 and stop %1 causes a failure message: stop: Unknown job: 1 Understanding that the stop command causes a job to be suspended, I was wondering how to get the job ID and do it right. If the only way to kill a job is by it's process ID, what is the purpose of the job ID ?
After a process is sent to the background with & , its PID can be retrieved from the variable $! . The job IDs can be displayed using the jobs command, the -l switch displays the PID as well. $ sleep 42 & [1] 5260 $ echo $! 5260 $ jobs -l [1] - 5260 running sleep 42 Some kill implementations allow killing by job ID instead of PID. But a more sensible use of the job ID is to selectively foreground a particular process. If you start five processes in the background and want to foreground the third one, you can run the jobs command to see what processes you launched and then fg %3 to foreground the one with the job ID three.
{ "source": [ "https://unix.stackexchange.com/questions/110911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52533/" ] }
110,923
My camera is very basic (understatement), but it does have one notable feature: its thumbnail mode is impressively fast; it loads a screenful of 9 thumbnails in less than a quarter of a second. Now, when I select an image, it first instantly loads an extremely blocky, blurry rendition of my picture, then locks up annoyingly (:P) for umpteen seconds while reloading the image at "full" resolution. Clearly, that low-quality "instant" load is what it's using for thumbnail mode. There's no semi-hidden "thumbnails" directory hiding on my SD card; instead, what I think it's doing is exploiting the fact that JPGs can load "progressively" like GIFs can, and I think my camera is running through the first progressive "scan" of each picture, and then immediately stopping and rendering just that data. I roughly estimate the size of each thumbnail at around 90x90, and the first "scan" of a 7MP image scaled down to that size looks just fine. (It's when I select the image and it displays like that for a few seconds that it looks blocky.) Now, using eg "feh" in thumbnail mode, pulling JPGs off my MicroSD card reader is just about as mind-numbingly slow as viewing fullscreen images inside the camera, because feh loads the full image then thumbnails it (which is quite inefficient if you think about it...). What applications exist for Linux which will show "instant" thumbnails, without retrieving and processing the full image, as per how my camera does it?
{ "source": [ "https://unix.stackexchange.com/questions/110923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57627/" ] }
110,990
I'm replacing a failed harddrive in a mirrored btrfs. btrfs device delete missing /[mountpoint] is taking very long, so I assume that it's actually rebalancing data across to the replacement drive. Is there any way to monitor the progress of such an operation? I don't necessarily expect a pretty looking GUI, or even a % counter; and I'm willing to write a couple of lines of shell script if that's necessary, but I don't even know where to start looking for relevant data. btrfs filesystem show for example just hangs, presumably waiting for the balance operation to finish before it displays any information about the mirrored fs.
btrfs balance status /mountpoint man 8 btrfs [filesystem] balance status [-v] <path> Show status of running or paused balance. Options -v be verbose
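If you want to keep an eye on it while the device delete / rebalance is running, you can simply wrap that in watch (adjust the mount point and interval to taste):

watch -n 60 btrfs balance status /mountpoint

This just re-runs the status command every 60 seconds, so you can see the progress move without retyping it.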
{ "source": [ "https://unix.stackexchange.com/questions/110990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18863/" ] }
111,145
The output of dmesg shows the number of second+milliseconds since the system start. [ 10.470000] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 14.610000] device eth0 entered promiscuous mode [ 18.750000] cfg80211: Calling CRDA for country: DE [ 18.750000] cfg80211: Regulatory domain changed to country: DE Q: How to put the seconds+milliseconds in to a readable format? My dmesg: root@OpenWrt:/tmp# dmesg -h dmesg: invalid option -- h BusyBox v1.19.4 (2013-03-14 11:28:31 UTC) multi-call binary. Usage: dmesg [-c] [-n LEVEL] [-s SIZE] Print or control the kernel ring buffer -c Clear ring buffer after printing -n LEVEL Set console logging level -s SIZE Buffer size To install util-Linux won't be possible, because there is not much available space: root@OpenWrt:~# df -h Filesystem Size Used Available Use% Mounted on rootfs 1.1M 956.0K 132.0K 88% / /dev/root 2.0M 2.0M 0 100% /rom tmpfs 14.3M 688.0K 13.6M 5% /tmp tmpfs 512.0K 0 512.0K 0% /dev /dev/mtdblock3 1.1M 956.0K 132.0K 88% /overlay overlayfs:/overlay 1.1M 956.0K 132.0K 88% / . root@OpenWrt:/tmp# which awk perl sed bash sh shell tcsh /usr/bin/awk /bin/sed /bin/sh root@OpenWrt:~# date -h date: invalid option -- h BusyBox v1.19.4 (2013-03-14 11:28:31 UTC) multi-call binary. Usage: date [OPTIONS] [+FMT] [TIME] Display time (using +FMT), or set time [-s,--set] TIME Set time to TIME -u,--utc Work in UTC (don't convert to local time) -R,--rfc-2822 Output RFC-2822 compliant date string -I[SPEC] Output ISO-8601 compliant date string SPEC='date' (default) for date only, 'hours', 'minutes', or 'seconds' for date and time to the indicated precision -r,--reference FILE Display last modification time of FILE -d,--date TIME Display TIME, not 'now' -D FMT Use FMT for -d TIME conversion -k Set Kernel timezone from localtime and exit
I think that what you're looking for is -T as documented in man dmesg : -T, --ctime Print human readable timestamps. The timestamp could be inaccurate! The time source used for the logs is not updated after system SUSPEND/RESUME. So, for example: [ 518.511925] usb 2-1.1: new low-speed USB device number 7 using ehci-pci [ 518.615735] usb 2-1.1: New USB device found, idVendor=1c4f, idProduct=0002 [ 518.615742] usb 2-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 518.615747] usb 2-1.1: Product: USB Keykoard Becomes: [Mon Jan 27 16:22:42 2014] hid-generic 0003:1C4F:0002.0007: input,hidraw0: USB HID v1.10 Keyboard [USB USB Keykoard] on usb-0000:00:1d.0-1.1/input0 [Mon Jan 27 16:22:42 2014] input: USB USB Keykoard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.1/2-1.1:1.1/input/input24 [Mon Jan 27 16:22:42 2014] hid-generic 0003:1C4F:0002.0008: input,hidraw1: USB HID v1.10 Device [USB USB Keykoard] on usb-0000:00:1d.0-1.1/input1 I found a cool trick here . The sed expression used there was wrong since it would fail when there was more than one ] in the dmesg line. I have modified it to work with all cases I found in my own dmesg output. So, this should work assuming your date behaves as expected: base=$(cut -d '.' -f1 /proc/uptime); seconds=$(date +%s); dmesg | sed 's/\]//;s/\[//;s/\([^.]\)\.\([^ ]*\)\(.*\)/\1\n\3/' | while read first; do read second; first=`date +"%d/%m/%Y %H:%M:%S" --date="@$(($seconds - $base + $first))"`; printf "[%s] %s\n" "$first" "$second"; done Output looks like: [27/01/2014 16:14:45] usb 2-1.1: new low-speed USB device number 7 using ehci-pci [27/01/2014 16:14:45] usb 2-1.1: New USB device found, idVendor=1c4f, idProduct=0002 [27/01/2014 16:14:45] usb 2-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [27/01/2014 16:14:45] usb 2-1.1: Product: USB Keykoard
{ "source": [ "https://unix.stackexchange.com/questions/111145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
111,188
I am using Arch Linux with KDE/Awesome WM. I am trying to get notify-send to work with cron . I have tried setting DISPLAY / XAUTHORITY variables, and running notify-send with "sudo -u", all without result. I am able to call notify-send interactively from the session and get notifications. FWIW, the cron job is running fine which I verified by echoing stuff to a temporary file. It is just the "notify-send" that fails to work. Code: [matrix@morpheus ~]$ crontab -l * * * * * /home/matrix/scripts/notify.sh [matrix@morpheus ~]$ cat /home/matrix/scripts/notify.sh #!/bin/bash export DISPLAY=127.0.0.1:0.0 export XAUTHORITY=/home/matrix/.Xauthority echo "testing cron" >/tmp/crontest sudo -u matrix /usr/bin/notify-send "hello" echo "now tested notify-send" >>/tmp/crontest [matrix@morpheus ~]$ cat /tmp/crontest testing cron now tested notify-send [matrix@morpheus ~]$ As you can see the echo before & after notify-send worked. Also I have tried setting DISPLAY=:0.0 UPDATE: I searched a bit more and found that DBUS_SESSION_BUS_ADDRESS needs to be set. And after hardcoding this using the value I got from my interactive session, the tiny little "hello" message started popping up on the screen every minute! But the catch is this variable is not permanent per that post, so I'll have try the the named pipe solution suggested there. [matrix@morpheus ~]$ cat scripts/notify.sh #!/bin/bash export DISPLAY=127.0.0.1:0.0 export XAUTHORITY=/home/matrix/.Xauthority export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-BouFPQKgqg,guid=64b483d7678f2196e780849752e67d3c echo "testing cron" >/tmp/crontest /usr/bin/notify-send "hello" echo "now tested notify-send" >>/tmp/crontest Since cron doesn't seem to support notify-send (at least not directly) is there some other notification system that is more cron friendly that I can use?
You need to set the DBUS_SESSION_BUS_ADDRESS variable. By default cron does not have access to the variable. To remedy this put the following script somewhere and call it when the user logs in, for example using awesome and the run_once function mentioned on the wiki. Any method will do, since it does not harm if the function is called more often than required. #!/bin/sh touch $HOME/.dbus/Xdbus chmod 600 $HOME/.dbus/Xdbus env | grep DBUS_SESSION_BUS_ADDRESS > $HOME/.dbus/Xdbus echo 'export DBUS_SESSION_BUS_ADDRESS' >> $HOME/.dbus/Xdbus exit 0 This creates a file containing the required Dbus evironment variable. Then in the script called by cron you import the variable by sourcing the script: if [ -r "$HOME/.dbus/Xdbus" ]; then . "$HOME/.dbus/Xdbus" fi Here is an answer that uses the same mechanism.
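Putting the pieces together, a crontab entry for the notify.sh case in the question could look roughly like this (a sketch; the schedule, the DISPLAY value and the message are just placeholders for illustration):

* * * * * . "$HOME/.dbus/Xdbus"; export DISPLAY=:0; /usr/bin/notify-send "hello from cron"

i.e. source the saved Dbus variable first, then run notify-send as usual.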
{ "source": [ "https://unix.stackexchange.com/questions/111188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56530/" ] }
111,237
I am interested to learn about how Linux deals with a separate boot partition. I am not interested in actually doing this, but I would like to know how this works under the hood. Consider a hard drive sda , which has two partitions sda1 and sda2 . Let's say that sda2 is the root partition / which contains the Linux OS. My understanding is that the bootloader GRUB2 is mounted to /boot . When the directory /boot is on the separate partition sda1 , however, how is it that this can happen before / is actually mounted? How does the interaction between the BIOS, the Master Boot Record and GRUB (or the files in /boot ) successfully happen in this case? Is it that the data in /boot is not actually mounted to the / filesystem at this early stage? Note: this question deals with mounting the root partition, but does not discuss a separate boot partition.
Here is the problem in your understanding: My understanding is that the bootloader GRUB2 is mounted to /boot. GRUB is not "mounted" on boot. GRUB is installed to /boot , and is loaded from code in the Master Boot Record. Here is a simplified overview of the modern boot process, assuming a GNU/Linux distribution with an MBR/BIOS (not GPT/UEFI):
1. The BIOS loads.
2. The BIOS loads the small piece of code that is in the Master Boot Record.
3. GRUB does not fit in 440 bytes, the size of the Master Boot Record. Therefore, the code that is loaded actually just parses the partition table, finds the /boot partition (which I believe is determined when you install GRUB to the Master Boot Record), and parses the filesystem information. It then loads Stage 2 GRUB. (This is where the simplification comes in.)
4. Stage 2 GRUB loads everything it needs, including the GRUB configuration, then presents a menu (or not, depending on user configuration).
5. A boot sequence is chosen. This could be by a timeout, by the user selecting a menu entry, or by booting a command list.
6. The boot sequence starts executing. This can do a number of things - for example, loading a kernel, chainloading to another bootloader - but let's assume that the boot sequence is standard GNU/Linux.
7. GRUB loads the Linux kernel.
8. GRUB loads the initial ramdisk .
9. The initial ramdisk mounts / under /new_root (possibly cryptographically unlocking it), starts udev, starts resume-from-swap, etc.
10. The initial ramdisk uses the pivot_root utility to set /new_root as the real / .
11. init starts. Partitions get mounted, daemons get started, and the system boots.
Notice how the kernel is only loaded at step 7. Because of this, there is no concept of mounting until step 7. This is why /boot has to be mounted again in step 11, even though GRUB has already used it. It may also be of use to look at the GRUB 2 section of the Wikipedia page on GRUB.
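To connect this back to the question: the re-mounting of /boot in step 11 is normally driven by /etc/fstab . A hypothetical entry for the layout described in the question (separate /boot on sda1 , filesystem type assumed to be ext4) would look like:

/dev/sda1   /boot   ext4   defaults   0   2

After booting you can confirm it with findmnt /boot or mount | grep boot .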
{ "source": [ "https://unix.stackexchange.com/questions/111237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57770/" ] }
111,256
I've brought down a network interface with ifconfig wlan0 down , but every few hours or so, the wlan0 interface comes back up and I can't figure out why. I don't restart the machine, and I have never changed /etc/network/interfaces . I guess my question is, how would I go about just "permanently" disabling wlan0 ? Do I use /etc/network/interfaces ? I already have ifconfig wlan0 down in my rc.local .
Method #1 - from NetworkManager's Applet
Try disabling the wireless networking under the Network Applet that's accessible from the icons in the upper right of your desktop. NOTE: The networking applet's icon looks like a triangle wedge (marked as arrow #1 in the original screenshot). If you click it you should see a menu slide out from which you can disable wireless permanently (arrow #2).
Method #2 - /etc/network/interfaces
From the file /etc/network/interfaces you can specify that NetworkManager shouldn't control the wlan0 interface. To do so simply add this line to the above-mentioned file: iface wlan0 inet manual Then restart NetworkManager: $ sudo service network-manager restart
References
How to disable built-in wifi and use only USB wifi card?
{ "source": [ "https://unix.stackexchange.com/questions/111256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57786/" ] }
111,355
I'm trying to write a simple script to monitor my network status, without all of ping 's output: ping -q -c 1 google.com > /dev/null && echo online || echo offline The problem is that when I'm not connected, I'm still getting an error message in my output: ping: unknown host google.com offline How can I keep this error message out of my output?
When you run: ping -q -c 1 google.com > /dev/null && echo online || echo offline you are essentially only redirecting the output of Stream 1 (i.e. stdout ) to /dev/null . This is fine when you want to redirect the output that is produced by the normal execution of a program. However, in case you also wish to redirect the output caused by errors, warnings or failures, you should also redirect the stderr or Standard Error stream to /dev/null . One way of doing this is prepending the number of the stream you wish to redirect to the redirection operator > , like this: Command 2> /dev/null Hence, your command would look like: ping -q -c 1 google.com > /dev/null 2> /dev/null && echo online || echo offline But notice that we have already redirected one stream to /dev/null . Why not simply piggyback on the same redirection? Bash allows us to do this by specifying the stream number to redirect to: 2>&1 . Notice the & character after the redirection operator. This tells the shell that what appears next is not a filename, but an identifier for the output stream. ping -q -c 1 google.com > /dev/null 2>&1 && echo online || echo offline Be careful with the redirection operators: their order matters a lot. If you redirect in the wrong order, you'll end up with unexpected results. Another way in which you can attain complete silence is by redirecting all output streams to /dev/null using this shortcut: &>/dev/null (or redirect to a log file with &>/path/to/file.log ). Hence, write your command as: ping -q -c 1 google.com &> /dev/null && echo online || echo offline
{ "source": [ "https://unix.stackexchange.com/questions/111355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55773/" ] }
111,365
I installed ZSH on a VM of mine, where I compiled it from source. The location of ZSH is /usr/local/bin/zsh . When I run chsh -s /usr/local/bin/zsh it outputs chsh: /usr/local/bin/zsh is an invalid shell . I tried this with sudo as well. How can I change this?
Add zsh to /etc/shells : command -v zsh | sudo tee -a /etc/shells You can now use chsh to set zsh as shell: sudo chsh -s "$(command -v zsh)" "${USER}" See this documentation: Changing your login shell
{ "source": [ "https://unix.stackexchange.com/questions/111365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45483/" ] }
111,372
I have Mint 13 Xfce installed on my ThinkPad R31. For this laptop, I have a Tornado 211g PCMCIA wireless adapter. This is not installed automatically in Mint and I am searching for a way to get this card working. How can I achieve this? I have the Windows XP drivers and have added these using ndiswrapper but so far, nothing has happened. In the terminal, I have typed lspci , which returned my wireless adapter as an ACX111 chipset.
{ "source": [ "https://unix.stackexchange.com/questions/111372", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57927/" ] }
111,397
I saw a trick for implementing associative arrays in a shell script. For example print array["apples"] could be scripted as echo \$array$key where key=apples. However, there was no mention of how to generate the keys to iterate over the array. The only way I could think of was to store the keys in a variable delimited by spaces so I could use a for-loop to iterate over the array. So, is there some other way to store the keys for later use?
Shells with associative arrays Some modern shells provide associative arrays: ksh93, bash ≥4, zsh. In ksh93 and bash, if a is an associative array, then "${!a[@]}" is the array of its keys: for k in "${!a[@]}"; do echo "$k -> ${a[$k]}" done In zsh, that syntax only works in ksh emulation mode. Otherwise you have to use zsh's native syntax: for k in "${(@k)a}"; do echo "$k -> $a[$k]" done ${(k)a} also works if a does not have an empty key. In zsh, you could also loop on both k eys and v alues at the same time: for k v ("${(@kv)a}") echo "$k -> $v" Shells without associative arrays Emulating associative arrays in shells that don't have them is a lot more work. If you need associative arrays, it's probably time to bring in a bigger tool, such as ksh93 or Perl. If you do need associative arrays in a mere POSIX shell, here's a way to simulate them, when keys are restricted to contain only the characters 0-9A-Z_a-z (ASCII digits, letters and underscore). Under this assumption, keys can be used as part of variable names. The functions below act on an array identified by a naming prefix, the “stem”, which must not contain two consecutive underscores. ## ainit STEM ## Declare an empty associative array named STEM. ainit () { eval "__aa__${1}=' '" } ## akeys STEM ## List the keys in the associatve array named STEM. akeys () { eval "echo \"\$__aa__${1}\"" } ## aget STEM KEY VAR ## Set VAR to the value of KEY in the associative array named STEM. ## If KEY is not present, unset VAR. aget () { eval "unset $3 case \$__aa__${1} in *\" $2 \"*) $3=\$__aa__${1}__$2;; esac" } ## aset STEM KEY VALUE ## Set KEY to VALUE in the associative array named STEM. aset () { eval "__aa__${1}__${2}=\$3 case \$__aa__${1} in *\" $2 \"*) :;; *) __aa__${1}=\"\${__aa__${1}}$2 \";; esac" } ## aunset STEM KEY ## Remove KEY from the associative array named STEM. aunset () { eval "unset __aa__${1}__${2} case \$__aa__${1} in *\" $2 \"*) __aa__${1}=\"\${__aa__${1}%%* $2 } \${__aa__${1}#* $2 }\";; esac" } (Warning, untested code. Error detection for syntactically invalid stems and keys is not provided.)
{ "source": [ "https://unix.stackexchange.com/questions/111397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57091/" ] }
111,433
I have a service running on 127.0.0.1 with port 2222. I need to forward all requests to 192.168.2.2:2222 (outside IP) only from subnet 192.168.1.0/24 to 127.0.0.1:2222. I'm trying to use this, but it's not working. $ iptables -t nat -I PREROUTING -p tcp -d 192.168.1.0/24 --dport 2222 -j DNAT --to-destination 127.0.0.1:2222 How can I get this to work? UPD: Edit address scheme.
The iptables rule you are using will work, but there is one additional change you need to make: sysctl -w net.ipv4.conf.eth0.route_localnet=1 (replacing eth0 with the nic 192.168.2.2 resides on) By default this value is 0 , which instructs the kernel to not route external traffic destined to 127.0.0.0/8 . This is just for security as such traffic is not normal.
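Since the question also asks to restrict this to 192.168.1.0/24 , the two pieces combined would look roughly like this (a sketch; eth0 and the addresses are taken from the question and may need adjusting):

sysctl -w net.ipv4.conf.eth0.route_localnet=1
iptables -t nat -I PREROUTING -s 192.168.1.0/24 -d 192.168.2.2 -p tcp --dport 2222 -j DNAT --to-destination 127.0.0.1:2222

The -s 192.168.1.0/24 match is what limits the redirect to that subnet. Also note the sysctl setting does not survive a reboot unless you add it to /etc/sysctl.conf (or a file under /etc/sysctl.d/ ).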
{ "source": [ "https://unix.stackexchange.com/questions/111433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57174/" ] }
111,455
I have some old scripts that I try to update. Some of the code condenses to: export X=`(echo "abc"; echo "def")` echo $X which gives the expected output: abc def Now the internet tells me backticks are out $() is what I need to use, but when I try: export X=$((echo "abc"; echo "def")) X is not set and I get the error: bash: echo "abc"; echo "def": syntax error: invalid arithmetic operator (error token is ""abc"; echo "def"") What am I doing wrong?
The $(( … )) syntax is an arithmetic expression . What is missing is a space between the $( and the following ( , to avoid the arithmetic expression syntax. The section on command substitution in the shell command language specification actually warns about that: If the command substitution consists of a single subshell, such as: $( (command) ) a conforming application shall separate the "$(" and '(' into two tokens (that is, separate them with white space). This is required to avoid any ambiguities with arithmetic expansion.
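With that extra space, either of the following should behave like the old backtick version (the first keeps the explicit subshell; the second drops it, since a subshell is not needed just to group two commands):

export X=$( (echo "abc"; echo "def") )
export X=$(echo "abc"; echo "def")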
{ "source": [ "https://unix.stackexchange.com/questions/111455", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58003/" ] }
111,508
I am looking for a construct in bash , to decide if a variable $WORD is one of defined words. I need something like this: if "$WORD" in dog cat horse ; then echo yes else echo no fi does bash have such construct? If not, what would be the closest?
This is a Bash-only (>= version 3) solution that uses regular expressions: if [[ "$WORD" =~ ^(cat|dog|horse)$ ]]; then echo "$WORD is in the list" else echo "$WORD is not in the list" fi If your word list is long, you can store it in a file (one word per line) and do this: if [[ "$WORD" =~ $(echo ^\($(paste -sd'|' /your/file)\)$) ]]; then echo "$WORD is in the list" else echo "$WORD is not in the list" fi One caveat with the file approach: It will break if the file has whitespace. This can be remedied by something like: sed 's/[[:blank:]]//g' /your/file | paste -sd '|' /dev/stdin Thanks to @terdon for reminding me to properly anchor the pattern with ^ and $ .
{ "source": [ "https://unix.stackexchange.com/questions/111508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
111,519
When I ssh into a remote server that's not running any type of X11 desktop environment I get the following message. $ ssh user@server X11 forwarding request failed $ ssh user@server ls X11 forwarding request failed on channel 1 file1 file2 ... How can I get rid of these messages?
These messages can be eliminated through one of three methods, using just SSH options. You can always send the messages to /dev/null too, but these methods try to deal with the message through configuration, rather than just trapping and dumping it.
Method #1 - install xauth
The server you're remoting into is complaining that it cannot create an entry in the user's .Xauthority file, because xauth is not installed. So you can install it on each server to get rid of this annoying message. On Fedora 19 you install xauth like so: $ sudo yum install xorg-x11-xauth If you then attempt to ssh into the server you'll see a message that an entry is being created in the user's .Xauthority file. $ ssh root@server /usr/bin/xauth: creating new authority file /root/.Xauthority $ Subsequent logins will no longer show this message.
Method #2 - disable it via ForwardX11
You can instruct the ssh client not to attempt to enable X11 forwarding by including the SSH parameter ForwardX11. $ ssh -o ForwardX11=no root@server You can do the same thing with the -x switch: $ ssh -x root@server This only temporarily disables the message, but it is a good option if you're unable or unwilling to install xauth on the remote server.
Method #3 - disable it via sshd_config
This is typically the default, but in case it isn't, you can set up your sshd server so that X11Forwarding is off, in /etc/ssh/sshd_config . X11Forwarding no Of the 3 methods I generally use #2, because I'll often want X11Forwarding on for most of my servers, but then don't want to see the X11... warnings.
$HOME/.ssh/config
Much of the time these messages won't even show up. They're usually only present when you have the following entries in your $HOME/.ssh/config file, at the top. ServerAliveInterval 15 ForwardX11 yes ForwardAgent yes ForwardX11Trusted yes GatewayPorts yes So it's this setup which is ultimately driving the generation of those X11... messages, so again, method #2 would seem to be the most appropriate if you want to operate with ForwardX11 yes on by default, but then selectively disable it for certain connections from the ssh client's perspective.
Security
It's generally ill-advised to run with ForwardX11 yes on at all times. So if you want to operate your SSH connections in the most secure manner possible, it's best to do the following: Don't include ForwardX11 yes in your $HOME/.ssh/config file. Only use ForwardX11 when you need to, via ssh -X user@server . If you can, disable X11Forwarding completely on the server so it's disallowed.
References
SSH: The Secure Shell - The Definitive Guide - 9.3. X Forwarding
{ "source": [ "https://unix.stackexchange.com/questions/111519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
111,526
I need to execute rsync , without it prompting me for password. I've seen in rsync manpage that it doesn't allow specifying the password as command line argument. But I noticed that it allows specifying the password via the variable RSYNC_PASSWORD . So I've tried exporting the variable, but rsync keeps asking me for password. export RSYNC_PASSWORD="abcdef" rsync [email protected]:/abc /def What am I doing wrong? Please consider: I understand that this is a bad idea from security aspect I must use only rsync , can't use other software I can't use key-based authentication I've already read many SE question, e.g.: how-to-pass-password-for-rsync-ssh-command @ stackoverflow.com rsync-cron-job-with-a-password @ superuser.com how-to-setup-rsync-without-password-with-ssh-on-unix-linux @ superuser.com In other words, I need to have the RSYNC_PASSWORD approach working! :-)
If the rsync daemon isn't running on the target machine, and you don't care about exposing passwords to everyone on the local machine ( Why shouldn't someone use passwords in the command line? ), you can use sshpass : sshpass -p "password" rsync [email protected]:/abc /def Note the space at the start of the command, in the bash shell this will stop the command (and the password) from being stored in the history. I don't recommend using the RSYNC_PASSWORD variable unless absolutely necessary (as per a previous edit to this answer), I recommend suppressing history storage or at least clearing history after. In addition, you can use tput reset to clear your terminal history.
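As to why the RSYNC_PASSWORD approach in the question did not work: that variable (like --password-file ) is only consulted when rsync talks to an rsync daemon, i.e. with the host::module or rsync://host/module syntax, not when it runs over a remote ssh shell as in user@host:/path . So it would only help with something like this (the module name is hypothetical, and it assumes a daemon is actually running on the remote side):

export RSYNC_PASSWORD="abcdef"
rsync [email protected]::module/abc /def

For the plain ssh transport shown in the question, something like sshpass is needed.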
{ "source": [ "https://unix.stackexchange.com/questions/111526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4313/" ] }
111,558
I am using vim and I need a way to always be able to see the file that I am working on without having to do ^G . I see the file name when I start vim but when I start to work and use various functions it gets lost. Also I have seen other people have some kind of "addons" in the lower part of the vim console that seem like they are "button"/"tabs" (I am not sure how to describe them) that show various info constantly including the file name. Any idea what are these plugins? Or how can I achieve what I want?
You can add this to your .vimrc file, or temporarily while in vim . vimrc - set laststatus=2 in vim - :set laststatus=2 To get the full path you can add this command, again to either your .vimrc or while in vim . vimrc - set statusline+=%F in vim - :set statusline+=%F Examples normal mode command line mode For more info than you care to read through there's additional info on both of these available in vim . :help laststatus :help statusline References Displaying status line always 5. Modes, introduction - vim-modes-intro vim-modes Learning the vi Editor/Vim/Modes How can I permanently display the path of the current file in Vim?
{ "source": [ "https://unix.stackexchange.com/questions/111558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42132/" ] }
111,611
Does anybody have an idea about the full form of rc.d in, for example, /etc/rc.d ? It contains scripts to used to control the starting, stopping and restarting of daemons. But what exactly is the meaning of rc here?
This goes pretty far back in the Unix history. rc stands for "run commands", and makes sense actually: rc runcom (as in .cshrc or /etc/rc) The rc command derives from the runcom facility from the MIT CTSS system, ca. 1965. From Brian Kernighan and Dennis Ritchie, as told to Vicki Brown: "There was a facility that would execute a bunch of commands stored in a file ; it was called runcom for "run commands", and the file began to be called "a runcom". rc in Unix is a fossil from that usage." Source The idea of having the command processing shell be an ordinary slave program came from the Multics design, and a predecessor program on CTSS by Louis Pouzin called RUNCOM, the source of the ".rc" suffix on some Unix configuration files . The first time I remember the name "shell" for this function was in a Multics design document by Doug Eastwood (of BTL). Commands that return a value into the command line were called "evaluated commands" in the original Multics shell, which used square brackets where Unix uses backticks. ( source ) In summary, rc.d stands for "run commands" at runlevel which is their actual usage. The meaning of .d can be found here What does the .d stand for in directory names? Please don't mix up with rc the shell interpreter which was included in Plan 9, which were descendant of Bourne Shell.
{ "source": [ "https://unix.stackexchange.com/questions/111611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57818/" ] }
111,620
When I schedule a job, some seem to be applied immediately, while others after a reboot. So is it recommended to restart cron ( crond ) after adding a new cron job? How to do that properly (esp. in a Debian system), and should that be done with sudo (like sudo service cron restart ) even for that of normal users'? I tried: /etc/init.d/cron restart which doesn't seem to work (neither does /etc/init.d/cron stop or service cron stop ) and completes with return code 1. Here's a part of the message output: Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) utility, e.g. stop cron stop: Rejected send message, 1 matched rules; type="method_call", sender=":1.91" (uid=1000 pid=3647 comm="stop cron ") interface="com.ubuntu.Upstart0_6.Job" member="Stop" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init") (what does that mean?)
No you don't have to restart cron , it will notice the changes to your crontab files (either /etc/crontab or a users crontab file). At the top of your /etc/crontab you probably have (if you have the Vixie implementation of cron that IIRC is the one on Debian): # /etc/crontab: system-wide crontab # Unlike any other crontab you don't have to run the `crontab' # command to install the new version when you edit this file # and files in /etc/cron.d. These files also have username fields, # that none of the other crontabs do. The reason you might not see specific changes implemented is if you add things to e.g. /etc/cron.daily and the daily run has already occurred. The message that you get is because you use an old way of restarting cron on your system. The recommended way (but not necessary if you just edit cron files) is: restart cron You of course have to reboot in order to see the effects of a @reboot cron job
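If you want to double-check that cron noticed an edit, you can look at its log. On Debian and Ubuntu the cron daemon logs via syslog, so something like:

grep CRON /var/log/syslog

should show the crontab being reloaded as well as each job execution.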
{ "source": [ "https://unix.stackexchange.com/questions/111620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55118/" ] }
111,625
It often baffles me that, although I have been working professionally with computers for several decades and Linux for a decade, I actually treat most of the OS' functionality as a black box, not unlike magic. Today I thought about the kill command, and while I use it multiple times per day (both in its "normal" and -9 flavor) I must admit that I have absolutely no idea how it works behind the scenes. From my viewpoint, if a running process is "hung", I call kill on its PID, and then it suddenly isn't running anymore. Magic! What really happens there? Manpages talk about "signals" but surely that's just an abstraction. Sending kill -9 to a process doesn't require the process' cooperation (like handling a signal), it just kills it off. How does Linux stop the process from continuing to take up CPU time? Is it removed from scheduling? Does it disconnect the process from its open file handles? How is the process' virtual memory released? Is there something like a global table in memory, where Linux keeps references to all resources taken up by a process, and when I "kill" a process, Linux simply goes through that table and frees the resources one by one? I'd really like to know all that!
Sending kill -9 to a process doesn't require the process' cooperation (like handling a signal), it just kills it off. You're presuming that because some signals can be caught and ignored they all involve cooperation. But as per man 2 signal , "the signals SIGKILL and SIGSTOP cannot be caught or ignored". SIGTERM can be caught, which is why plain kill is not always effective – generally this means something in the process's handler has gone awry. 1 If a process doesn't (or can't) define a handler for a given signal, the kernel performs a default action. In the case of SIGTERM and SIGKILL, this is to terminate the process (unless its PID is 1; the kernel will not terminate init ) 2 meaning its file handles are closed, its memory returned to the system pool, its parent receives SIGCHILD, its orphan children are inherited by init, etc., just as if it had called exit (see man 2 exit ). The process no longer exists – unless it ends up as a zombie, in which case it is still listed in the kernel's process table with some information; that happens when its parent does not wait and deal with this information properly. However, zombie processes no longer have any memory allocated to them and hence cannot continue to execute. Is there something like a global table in memory where Linux keeps references to all resources taken up by a process and when I "kill" a process Linux simply goes through that table and frees the resources one by one? I think that's accurate enough. Physical memory is tracked by page (one page usually equalling a 4 KB chunk) and those pages are taken from and returned to a global pool. It's a little more complicated in that some freed pages are cached in case the data they contain is required again (that is, data which was read from a still existing file). Manpages talk about "signals" but surely that's just an abstraction. Sure, all signals are an abstraction. They're conceptual, just like "processes". I'm playing semantics a bit, but if you mean SIGKILL is qualitatively different than SIGTERM, then yes and no. Yes in the sense that it can't be caught, but no in the sense that they are both signals. By analogy, an apple is not an orange but apples and oranges are, according to a preconceived definition, both fruit. SIGKILL seems more abstract since you can't catch it, but it is still a signal. Here's an example of SIGTERM handling, I'm sure you've seen these before: #include <stdio.h> #include <signal.h> #include <unistd.h> #include <string.h> void sighandler (int signum, siginfo_t *info, void *context) { fprintf ( stderr, "Received %d from pid %u, uid %u.\n", info->si_signo, info->si_pid, info->si_uid ); } int main (void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = sighandler; sa.sa_flags = SA_SIGINFO; sigaction(SIGTERM, &sa, NULL); while (1) sleep(10); return 0; } This process will just sleep forever. You can run it in a terminal and send it SIGTERM with kill . It spits out stuff like: Received 15 from pid 25331, uid 1066. 1066 is my UID. The PID will be that of the shell from which kill is executed, or the PID of kill if you fork it ( kill 25309 & echo $? ). Again, there's no point in setting a handler for SIGKILL because it won't work. 3 If I kill -9 25309 the process will terminate. But that's still a signal; the kernel has the information about who sent the signal , what kind of signal it is, etc. 1. If you haven't looked at the list of possible signals , see kill -l . 2. 
Another exception, as Tim Post mentions below, applies to processes in uninterruptible sleep . These can't be woken up until the underlying issue is resolved, and so have ALL signals (including SIGKILL) deferred for the duration. A process can't create that situation on purpose, however. 3. This doesn't mean using kill -9 is a better thing to do in practice. My example handler is a bad one in the sense that it doesn't lead to exit() . The real purpose of a SIGTERM handler is to give the process a chance to do things like clean up temporary files, then exit voluntarily. If you use kill -9 , it doesn't get this chance, so only do that if the "exit voluntarily" part seems to have failed.
{ "source": [ "https://unix.stackexchange.com/questions/111625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58115/" ] }
111,722
Is there any way to tell ack to only search for text on the current folder? (or specify a max-depth level?) And with grep ?
You can couple find with the -exec argument. Example: find . -maxdepth 1 -exec grep foo {} \; This can be scaled, e.g. -maxdepth 2 . Edit As mentioned in the answer by @Stéphane Chazelas, it is advisable to restrict find to regular files so that grep doesn't produce an error when the argument {} actually is a directory path: find . -maxdepth 1 -type f -exec grep -H foo {} \; -type f is a filter for find that limits the search results to files -H is a grep option used to print a filename for every match (desired behavior when more than one file matches)
{ "source": [ "https://unix.stackexchange.com/questions/111722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
111,889
In Ubuntu, we use this command to update GRUB: # update-grub But how do I update GRUB version 2.00 in Arch Linux?
The update-grub command is just a script which runs the grub-mkconfig tool to generate a grub.cfg file. See the Archlinux GRUB documentation . It refers to the following: # grub-mkconfig -o /boot/grub/grub.cfg
{ "source": [ "https://unix.stackexchange.com/questions/111889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52995/" ] }
111,949
How can I get a list of the subdirectories which contain a file whose name matches a particular pattern? More specifically, I am looking for directories which contain a file with the letter 'f' somewhere occurring in the file name. Ideally, the list would not have duplicates and only contain the path without the filename.
find . -type f -name '*f*' | sed -r 's|/[^/]+$||' |sort |uniq The above finds all files below the current directory ( . ) that are regular files ( -type f ) and have f somewhere in their name ( -name '*f*' ). Next, sed removes the file name, leaving just the directory name. Then, the list of directories is sorted ( sort ) and duplicates removed ( uniq ). The sed command consists of a single substitute. It looks for matches to the regular expression /[^/]+$ and replaces anything matching that with nothing. The dollar sign means the end of the line. [^/]+ means one or more characters that are not slashes. Thus, /[^/]+$ means all characters from the final slash to the end of the line. In other words, this matches the file name at the end of the full path. Thus, the sed command removes the file name, leaving unchanged the name of the directory that the file was in. Simplifications Many modern sort commands support a -u flag which makes uniq unnecessary. For GNU sed: find . -type f -name '*f*' | sed -r 's|/[^/]+$||' |sort -u And, for MacOS sed: find . -type f -name '*f*' | sed -E 's|/[^/]+$||' |sort -u Also, if your find command supports it, it is possible to have find print the directory names directly. This avoids the need for sed : find . -type f -name '*f*' -printf '%h\n' | sort -u More robust version (Requires GNU tools) The above versions will be confused by file names that include newlines. A more robust solution is to do the sorting on NUL-terminated strings: find . -type f -name '*f*' -printf '%h\0' | sort -zu | sed -z 's/$/\n/' Simplified using dirname Imagine needing the command in a script where the command will be in single quotes; escaping the sed command is painful and less than ideal, so replace it with dirname . Issues regarding special characters and newlines are also moot if you do not need to sort, or if directory names are not affected. find . -type f -name "*f*" -exec dirname "{}" \; |sort -u To take care of the newline issue: find . -type f -name "*f*" -exec dirname -z "{}" \; |sort -zu |sed -z 's/$/\n/'
{ "source": [ "https://unix.stackexchange.com/questions/111949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54375/" ] }
111,975
I found out it's possible to show the output of the ls command vertically using the -1 switch: $ ls -1 But couldn't find it in the manual of ls . Is it a secret option?
The manual is out of date with the program. Try ls --help | grep -- ' -1' : -1 list one file per line It is one of the last options described if you just do ls --help .
{ "source": [ "https://unix.stackexchange.com/questions/111975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
112,023
Replacing strings in files based on certain search criteria is a very common task. How can I replace string foo with bar in all files in the current directory? do the same recursively for sub directories? replace only if the file name matches another string? replace only if the string is found in a certain context? replace if the string is on a certain line number? replace multiple strings with the same replacement replace multiple strings with different replacements
1. Replacing all occurrences of one string with another in all files in the current directory: These are for cases where you know that the directory contains only regular files and that you want to process all non-hidden files. If that is not the case, use the approaches in 2. All sed solutions in this answer assume GNU sed . If using FreeBSD or macOS, replace -i with -i '' . Also note that the use of the -i switch with any version of sed has certain filesystem security implications and is inadvisable in any script which you plan to distribute in any way. Non recursive, files in this directory only: sed -i -- 's/foo/bar/g' * perl -i -pe 's/foo/bar/g' ./* (the perl one will fail for file names ending in | or space) ). Recursive, regular files ( including hidden ones ) in this and all subdirectories find . -type f -exec sed -i 's/foo/bar/g' {} + If you are using zsh: sed -i -- 's/foo/bar/g' **/*(D.) (may fail if the list is too big, see zargs to work around). Bash can't check directly for regular files, a loop is needed (braces avoid setting the options globally): ( shopt -s globstar dotglob; for file in **; do if [[ -f $file ]] && [[ -w $file ]]; then sed -i -- 's/foo/bar/g' "$file" fi done ) The files are selected when they are actual files (-f) and they are writable (-w). 2. Replace only if the file name matches another string / has a specific extension / is of a certain type etc: Non-recursive, files in this directory only: sed -i -- 's/foo/bar/g' *baz* ## all files whose name contains baz sed -i -- 's/foo/bar/g' *.baz ## files ending in .baz Recursive, regular files in this and all subdirectories find . -type f -name "*baz*" -exec sed -i 's/foo/bar/g' {} + If you are using bash (braces avoid setting the options globally): ( shopt -s globstar dotglob sed -i -- 's/foo/bar/g' **baz* sed -i -- 's/foo/bar/g' **.baz ) If you are using zsh: sed -i -- 's/foo/bar/g' **/*baz*(D.) sed -i -- 's/foo/bar/g' **/*.baz(D.) The -- serves to tell sed that no more flags will be given in the command line. This is useful to protect against file names starting with - . If a file is of a certain type, for example, executable (see man find for more options): find . -type f -executable -exec sed -i 's/foo/bar/g' {} + zsh : sed -i -- 's/foo/bar/g' **/*(D*) 3. Replace only if the string is found in a certain context Replace foo with bar only if there is a baz later on the same line: sed -i 's/foo\(.*baz\)/bar\1/' file In sed , using \( \) saves whatever is in the parentheses and you can then access it with \1 . There are many variations of this theme, to learn more about such regular expressions, see here . Replace foo with bar only if foo is found on the 3d column (field) of the input file (assuming whitespace-separated fields): gawk -i inplace '{gsub(/foo/,"baz",$3); print}' file (needs gawk 4.1.0 or newer). For a different field just use $N where N is the number of the field of interest. For a different field separator ( : in this example) use: gawk -i inplace -F':' '{gsub(/foo/,"baz",$3);print}' file Another solution using perl : perl -i -ane '$F[2]=~s/foo/baz/g; $" = " "; print "@F\n"' foo NOTE: both the awk and perl solutions will affect spacing in the file (remove the leading and trailing blanks, and convert sequences of blanks to one space character in those lines that match). 
For a different field, use $F[N-1] where N is the field number you want and for a different field separator use (the $"=":" sets the output field separator to : ): perl -i -F':' -ane '$F[2]=~s/foo/baz/g; $"=":";print "@F"' foo Replace foo with bar only on the 4th line: sed -i '4s/foo/bar/g' file gawk -i inplace 'NR==4{gsub(/foo/,"baz")};1' file perl -i -pe 's/foo/bar/g if $.==4' file 4. Multiple replace operations: replace with different strings You can combine sed commands: sed -i 's/foo/bar/g; s/baz/zab/g; s/Alice/Joan/g' file Be aware that order matters ( sed 's/foo/bar/g; s/bar/baz/g' will substitute foo with baz ). or Perl commands perl -i -pe 's/foo/bar/g; s/baz/zab/g; s/Alice/Joan/g' file If you have a large number of patterns, it is easier to save your patterns and their replacements in a sed script file: #! /usr/bin/sed -f s/foo/bar/g s/baz/zab/g Or, if you have too many pattern pairs for the above to be feasible, you can read pattern pairs from a file (two space separated patterns, $pattern and $replacement, per line): while read -r pattern replacement; do sed -i "s/$pattern/$replacement/" file done < patterns.txt That will be quite slow for long lists of patterns and large data files so you might want to read the patterns and create a sed script from them instead. The following assumes a <<!>space<!>> delimiter separates a list of MATCH<<!>space<!>>REPLACE pairs occurring one-per-line in the file patterns.txt : sed 's| *\([^ ]*\) *\([^ ]*\).*|s/\1/\2/g|' <patterns.txt | sed -f- ./editfile >outfile The above format is largely arbitrary and, for example, doesn't allow for a <<!>space<!>> in either of MATCH or REPLACE . The method is very general though: basically, if you can create an output stream which looks like a sed script, then you can source that stream as a sed script by specifying sed 's script file as - stdin. You can combine and concatenate multiple scripts in similar fashion: SOME_PIPELINE | sed -e'#some expression script' \ -f./script_file -f- \ -e'#more inline expressions' \ ./actual_edit_file >./outfile A POSIX sed will concatenate all scripts into one in the order they appear on the command-line. None of these need end in a \n ewline. grep can work the same way: sed -e'#generate a pattern list' <in | grep -f- ./grepped_file When working with fixed-strings as patterns, it is good practice to escape regular expression metacharacters . You can do this rather easily: sed 's/[]$&^*\./[]/\\&/g s| *\([^ ]*\) *\([^ ]*\).*|s/\1/\2/g| ' <patterns.txt | sed -f- ./editfile >outfile 5. Multiple replace operations: replace multiple patterns with the same string Replace any of foo , bar or baz with foobar sed -Ei 's/foo|bar|baz/foobar/g' file or perl -i -pe 's/foo|bar|baz/foobar/g' file 6. Replace File paths in multiple files Another use case of using different delimiter: sed -i 's|path/to/foo|path/to/bar|g' *
{ "source": [ "https://unix.stackexchange.com/questions/112023", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
112,117
I have a hard disk in my computer that I use to make backups of my data. I do not use this disk otherwise. How can I stop this disk from spinning once my backup is finished? Also how would I make it spin back up again before the backup takes place later on? The drive is a regular SATA drive.
Umount the filesystem and then run hdparm -S 1 /dev/sdb to set it to spin down after five seconds (replace /dev/sdb with the actual device for the hard disk). This will minimize the power used and heat generated by the hard disk.
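To address the other half of the question: you normally don't need to do anything special to spin the disk back up, because the first read or write (for example, mounting it at the start of the backup job) wakes it automatically; it just takes a few seconds. If you want to check or control the state explicitly, hdparm can do that too (a sketch; replace /dev/sdb as before):

hdparm -C /dev/sdb    # report the current power state (active/idle or standby)
hdparm -y /dev/sdb    # put the drive into standby (spin it down) right now
dd if=/dev/sdb of=/dev/null bs=4k count=1    # any read like this forces a spin-up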
{ "source": [ "https://unix.stackexchange.com/questions/112117", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30196/" ] }
112,132
It seems I am misusing grep / egrep . I was trying to search for strings in multiple line and could not find a match while I know that what I'm looking for should match. Originally I thought that my regexes were wrong but I eventually read that these tools operate per line (also my regexes were so trivial it could not be the issue). So which tool would one use to search patterns across multiple lines?
Here's a sed one that will give you grep -like behavior across multiple lines: sed -n '/foo/{:start /bar/!{N;b start};/your_regex/p}' your_file How it works -n suppresses the default behavior of printing every line /foo/{} instructs it to match foo and do what comes inside the squigglies to the matching lines. Replace foo with the starting part of the pattern. :start is a branching label to help us keep looping until we find the end to our regex. /bar/!{} will execute what's in the squigglies to the lines that don't match bar . Replace bar with the ending part of the pattern. N appends the next line to the active buffer ( sed calls this the pattern space) b start will unconditionally branch to the start label we created earlier so as to keep appending the next line as long as the pattern space doesn't contain bar . /your_regex/p prints the pattern space if it matches your_regex . You should replace your_regex by the whole expression you want to match across multiple lines.
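If installing an extra tool is an option, pcregrep gives you grep -like usage with multi-line support through its -M flag, for example:

pcregrep -M 'foo(\n|.)*?bar' file

(assuming, as above, that foo and bar mark the start and end of what you are looking for). GNU grep -Pz can be bent to do something similar, but it then treats the whole input as a single line, which is usually less convenient.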
{ "source": [ "https://unix.stackexchange.com/questions/112132", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42132/" ] }
112,157
Suppose I want a more recent version of software than is available for my current version of an operating system, what can I do? Cases to consider: There are semiofficial/official sources of additional packages available for that version of the OS. E.g. backports.org for Debian or PPAs for Ubuntu. There are no more recent versions of the package available for that version of the OS, but there are more recent versions available for more recent versions of the OS. This is the standard case for backporting. There are no packaged versions of more recent versions of the software available. Options available are to package the more recent version. Per Let's compile a list of canonical Q&As this is intended as a place to put canonical answers for the following. Answers should probably be made community wiki.
(If you have questions/comments about this answer, please add a comment. Or, if you have sufficient rep, you can ping me in chat.) Directly installing binary packages from a newer version of Debian - not the answer. Suppose you are running some version of a Debian-based distribution. You want a more recent version of a package than is available to you. The first thing that every beginner tries to do it to install the binary package directly on your version of Debian. This may or not work, depending on what version you are running, and how much newer the package is. In general, this procedure will not work well. Consider for example the case where one is trying to install a binary package from testing/unstable directly on stable. This will most likely not go well, unless testing/unstable happen to very close to stable at that moment. The reason has to do with the nature of a Linux-based binary distribution like Debian. Such operating systems depend heavily on shared libraries, and these dependencies are often very tightly version-dependent; often much more so than necessary. Debian currently does not have a good way of making version dependencies "tight" - a shorthand way of saying that the version dependency is exactly as restrictive as necessary. What does this mean for the user? Suppose for example that you are trying to install say slrn from Debian unstable to Debian stable. What would this look like? # apt-get install slrn/unstable Reading package lists... Done Building dependency tree Reading state information... Done Selected version '1.0.1-10' (Debian:testing [amd64]) for 'slrn' Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: slrn : Depends: libc6 (>= 2.15) but 2.13-38+deb7u1 is to be installed E: Unable to correct problems, you have held broken packages. Despite the error produced by apt , there are no broken packages here. So, what went wrong? The problem is that the version of libc6 that the unstable slrn was compiled against is different (and has a higher version number) than the one available on Debian stable. ( libc6 is the GNU C library. The C library is central to any Unix-like operating system, and GNU C library is the version that Linux-based operating systems generally use.) Therefore the unstable slrn requires a higher numbered version of libc6 than is available for stable. Note that because a package has been compiled against a higher version of library does not necessarily require a higher version of that library, but it is often the case. The syntax apt-get install slrn/unstable means: use the unstable slrn but for all other packages only use the versions from stable. To be more precise, it uses priority numbers. See man apt_preferences for details. One can also do apt-get install -t unstable slrn This is much more likely to work, but you generally don't want to do it. Why? This means: temporarily treat all packages in unstable on an equal footing with the packages in stable. Therefore this will pull in the unstable slrn 's dependencies from unstable if they are of a higher version number, and they generally will be. This will generally include the GNU C library for reasons already explained. 
Now, this approach will generally "succeed", in that the dependencies will be satisfied by definition (unstable's slrn has dependencies which are satisfied in unstable), but you end up with a mixture of packages that suddenly are being forced to run with versions of libraries different from what they were built for. This will probably not end well. The answer is... BACKPORTS! So, what is the correct way to do this? It is to rebuild the Debian sources of more recent versions on your system, popularly known as "backporting". Consider the following cases: There are semiofficial/official sources of additional packages available for that version of Debian. The first place to look is Debian Backports , which is the official site for Debian backports. For a concrete example: Add the appropriate backports line for your release, update to find the new packages, then install something from backports explicitly (because backports are deactivated by default). echo "deb http://ftp.debian.org/debian stretch-backports main" | sudo tee /etc/apt/sources.list.d/stretch-backports.list sudo apt-get update sudo apt-get install -t stretch-backports git This will get the latest stable version of git , which has useful newer features compared to the one included with stretch (e.g. 'include', which allows you to combine multiple config files or change your username for ~/work/projects/ vs ~/personal/projects/). Another place to look is the various PPAs by Ubuntu maintainers. You can do a search for "packagename PPA". There are no more recent versions of the package available for that version of the OS, but there are more recent versions available for more recent versions/releases of the OS. This is the standard case for backporting. Backporting means that you rebuild the Debian sources from a later version of Debian on the version you are running. This procedure may be easy or involved and difficult depending on the package. Here is an outline of how to do this. A Brief Backporting Tutorial for Beginners For concreteness I will assume you are running the current Debian stable, currently wheezy. I'll use the package slrn as an example. First, note that all the Debian packaging files live in the debian/ subdirectory of the source directory. The first step is to check whether a more recent version is available. You can do this using apt-cache policy . apt-cache policy slrn slrn: Installed: 1.0.0~pre18-1.3 Candidate: 1.0.0~pre18-1.3 Version table: 1.0.1-10 0 50 http://debian.lcs.mit.edu/debian/ testing/main amd64 Packages 50 http://debian.lcs.mit.edu/debian/ unstable/main amd64 Packages *** 1.0.0~pre18-1.3 0 500 http://debian.lcs.mit.edu/debian/ wheezy/main amd64 Packages 100 /var/lib/dpkg/status 1.0.0~pre18-1.1 0 500 http://debian.lcs.mit.edu/debian/ squeeze/main amd64 Packages We would like to backport 1.0.1-10 . STEP 1: NB: Make sure that the deb-src lines for the source version you want to download appear in your /etc/apt/sources.list . For example, if you want to download the unstable version of slrn , you need the deb-src line for unstable, or it won't work. Note that you don't need the corresponding deb lines to download the sources, though apt-cache policy uses that information, so if you don't have the corresponding deb lines, then apt-cache policy won't show you the relevant version(s). If you do have the deb lines, don't forget to pin the newer versions using an entry in /etc/apt/preferences or similar. An entry in /etc/apt/preferences like this (for unstable) will work, for example.
Package: * Pin: release a=unstable Pin-Priority: 50 If you add lines in /etc/apt/sources.list , don't forget to run apt-get update afterwards. Download the sources for slrn . A good place is /usr/local/src/slrn . apt-get source slrn=1.0.1-10 STEP 2: Change the version number slightly, so as to distinguish your backport from the upstream version. Run dch --bpo , which will automatically add an entry to the debian/changelog file with an appropriate version number, for example slrn (1.0.1-10~bpo10+1) UNRELEASED; urgency=low * Backport to buster. -- User <user@domain> Sun, 02 Feb 2014 23:54:13 +0530 STEP 3: Attempt to build the sources. If the packages required for the build are not available, then the attempt will fail. Change directory into the source directory. Use debuild from the devscripts package. cd slrn-1.0.1/ debuild -uc -us If the build dependencies are satisfied, then the sources will build and produce some debs at the level above the source directory; in this case /usr/local/src/slrn . STEP 4: Suppose the build dependencies are not satisfied. Then you need to try to install the build dependencies. This may or may not work, as the dependencies may not be available for your version, or if available, may not be available in the right version. NB: It is unfortunately not uncommon for Debian packages to require versions of build dependencies that are higher than necessary. There is no automated way in Debian to check this, and often package maintainers don't care as long as it works on the corresponding version/release. Therefore, take a skeptical attitude to dependency versions, and use common sense. For example, widely used packages like Python and the GNU tools will not depend on very specific versions of their dependencies, regardless of what the Debian packager lists. In any case, you can try to install them by doing apt-get build-dep slrn=1.0.1-10 If this succeeds, then try building the package again (STEP 3). If it fails, then further work is needed. Note that debuild looks at the Build Dependencies in the debian/control file, and you can change these if necessary. So let us talk about that now. Here are the Build Dependencies for slrn. Build-Depends: debhelper (>=9), libslang2-dev, libuu-dev, exim4 | mail-transport-agent, libgnutls-openssl-dev, po-debconf, autoconf, libcanlock2-dev, autotools-dev, dpkg-dev (>= 1.16.0), chrpath, dh-autoreconf, inn2-inews An alternative to using apt-get build-dep is to install these manually, by doing apt-get install debhelper libslang2-dev ... If you start changing these values in the control file, then you should switch to a manual installation, as then apt-get build-dep will no longer be doing the right thing. There are no packaged versions of more recent versions of the software available. The remaining option is to package the more recent version yourself. In many cases, one can reuse the packaging from earlier versions of the software in conjunction with newer sources. This approach can run into problems, notably patches that applied to earlier versions of the software may not apply here, so one may need to resync them with the sources. The 3.0 (quilt) source format which is now becoming standard uses quilt, and patches are located in the debian/patches directory. However, a detailed discussion of these issues is out of scope for this post.
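For reference (not part of the original steps), the whole cycle above can be condensed into a short shell sequence; the package name and version are just the wheezy/slrn example used in this answer, and it assumes the deb-src lines and pinning are already in place:
# condensed sketch of STEPs 1-4 (adjust package and version to your case)
apt-get source slrn=1.0.1-10
cd slrn-1.0.1/
dch --bpo                          # bump the version for the backport
apt-get build-dep slrn=1.0.1-10    # or install the Build-Depends by hand
debuild -uc -us                    # the .debs land one directory up
dpkg -i ../slrn_*.deb              # install the rebuilt package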
{ "source": [ "https://unix.stackexchange.com/questions/112157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
112,159
I have a file with about 30.000.000 lines (Radius Accounting) and I need to find the last match of a given pattern. The command: tac accounting.log | grep $pattern gives what I need, but it's too slow because the OS has to first read the whole file and then send to the pipe. So, I need something fast that can read the file from the last line to the first.
tac only helps if you also use grep -m 1 (assuming GNU grep ) to have grep stop after the first match: tac accounting.log | grep -m 1 foo From man grep : -m NUM, --max-count=NUM Stop reading a file after NUM matching lines. In the example in your question, both tac and grep need to process the entire file so using tac is kind of pointless. So, unless you use grep -m , don't use tac at all, just parse the output of grep to get the last match: grep foo accounting.log | tail -n 1 Another approach would be to use Perl or any other scripting language. For example (where $pattern=foo ): perl -ne '$l=$_ if /foo/; END{print $l}' file or awk '/foo/{k=$0}END{print k}' file
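If you also want the line number of the last match in the original file, one sketch combining the two tools (assuming GNU grep; wc -l still scans the whole file, but cheaply):
total=$(wc -l < accounting.log)
from_end=$(tac accounting.log | grep -n -m 1 foo | cut -d: -f1)
[ -n "$from_end" ] && echo "last match on line $(( total - from_end + 1 ))"
Here grep -n numbers the reversed lines, so the original line number is total - from_end + 1.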
{ "source": [ "https://unix.stackexchange.com/questions/112159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58408/" ] }
112,225
I have to start an application with my own user rights, but the group must be different. So, instead of: $ ps -eo "user,group,args" | grep qbittorrent morfik morfik /usr/bin/qbittorrent it should be for example: $ ps -eo "user,group,args" | grep qbittorrent morfik p2p /usr/bin/qbittorrent It also has to be done without being asked for a password. Is there a way to achieve this?
Use sg . For example, the following command will invoke sleep for group group-name sg group-name -c 'sleep 100' From the man page: NAME sg - execute command as different group ID SYNOPSIS sg [-] [group [-c ] command] DESCRIPTION The sg command works similar to newgrp but accepts a command. The command will be executed with the /bin/sh shell...
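As a quick illustration (not from the original answer; it assumes the calling user is already a member of the target group, otherwise sg prompts for the group password):
sg p2p -c 'id -gn'                  # should print "p2p"
sg p2p -c '/usr/bin/qbittorrent &'  # start the application under that group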
{ "source": [ "https://unix.stackexchange.com/questions/112225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
113,497
I often run grep commands to find things in my code, but the problem with web projects is that there will often be compressed JavaScript and CSS files which create one huge line of text, so that if a match is found, the whole terminal window is filled for more than a 1000 lines, making it extremely impractical to find what I'm looking for. So is there a way to avoid files that have, say, single lines of text over 200 characters?
With GNU grep and xargs: grep -rLZE '.{200}' . | xargs -r0 grep pattern Alternatively, you could cut the output of grep: grep -r pattern . | cut -c1-"$COLUMNS" or tell your terminal not to wrap text if it supports it: tput rmam grep -r pattern . ( rmam stands for reset mode auto-margin; smam restores it; see man 5 terminfo for details). or use less -S : grep -r pattern . | less -S
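Another option (a sketch, not in the original answer): keep a plain recursive grep but drop over-long matching lines from the output with awk; the 500 limit is arbitrary, and the length test applies to the whole file:line:content line that grep prints:
grep -rn pattern . | awk 'length($0) < 500'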
{ "source": [ "https://unix.stackexchange.com/questions/113497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59576/" ] }
113,499
I have a large directory with files whose filenames contain special characters such as line breaks. The origin of these characters is seemingly that I copied and pasted text from within pdf files (titles and author names) to the 'save as' dialog of pdf-readers, ignoring that they contained these invalid characters. With ls or a file manager I do not see the special characters, but their presence prevents me from copying or renaming the files. So how can I recursively rename the files, removing all invalid characters? Note that I do not want to remove regular utf8 characters such as umlauts, spaces etc.
{ "source": [ "https://unix.stackexchange.com/questions/113499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18047/" ] }
113,530
I already asked a question about how to list all namespaces in Linux , but there weren't any correct and exact answers, so now I want to find a method which can tell me the namespaces of a given PID, or of some process or group of processes. How can it be done in Linux?
I'll try and answer both this and your earlier question as they are related. The doors to namespaces are files in /proc/*/ns/* and /proc/*/task/*/ns/* . A namespace is created by a process unsharing its namespace. A namespace can then be made permanent by bind-mounting the ns file to some other place. That's what ip netns does for instance for net namespaces. It unshares its net namespace and bind-mounts /proc/self/ns/net to /run/netns/ netns-name . In a /proc mounted in the root pid namespace, you can list all the namespaces that have a process in them by doing: # readlink /proc/*/task/*/ns/* | sort -u ipc:[4026531839] mnt:[4026531840] mnt:[4026531856] mnt:[4026532469] net:[4026531956] net:[4026532375] pid:[4026531836] pid:[4026532373] uts:[4026531838] The number in square brackets is the inode number. To get that for a given process: # ls -Li /proc/1/ns/pid 4026531836 /proc/1/ns/pid Now, there may be permanent namespaces that don't have any process in them. Finding them out can be a lot trickier AFAICT. First, you have to bear in mind that there can be several mount namespaces. # awk '$9 == "proc" {print FILENAME,$0}' /proc/*/task/*/mountinfo | sort -k2 -u /proc/1070/task/1070/mountinfo 15 19 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw /proc/19877/task/19877/mountinfo 50 49 0:3 / /run/netns/a rw,nosuid,nodev,noexec,relatime shared:2 - proc proc rw /proc/19877/task/19877/mountinfo 57 40 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw /proc/1070/task/1070/mountinfo 66 39 0:3 / /run/netns/a rw,nosuid,nodev,noexec,relatime shared:2 - proc proc rw /proc/19877/task/19877/mountinfo 68 67 0:3 / /mnt/1/a rw,nosuid,nodev,noexec,relatime unbindable - proc proc rw Those /mnt/1/a , /run/netns/a may be namespace files. We can get an inode number: # nsenter --mount=/proc/19877/task/19877/ns/mnt -- ls -Li /mnt/1/a 4026532471 /mnt/1/a But that doesn't tell us much other than it's not in the list computed above. We can try and enter it as any of the different types: # nsenter --mount=/proc/19877/task/19877/ns/mnt -- nsenter --pid=/mnt/1/a true nsenter: reassociate to namespace 'ns/pid' failed: Invalid argument # nsenter --mount=/proc/19877/task/19877/ns/mnt -- nsenter --mount=/mnt/1/a true nsenter: reassociate to namespace 'ns/mnt' failed: Invalid argument # nsenter --mount=/proc/19877/task/19877/ns/mnt -- nsenter --net=/mnt/1/a true # OK, that was a net namespace file. So it would seem we have a method to list the name spaces: list the ns directories of all the tasks, then find all the proc mountpoints in all the /proc/*/task/*/mountinfo and figure out their type by trying to enter them.
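As a small illustration (a sketch only, not from the original answer), the namespaces of a single PID can be read off directly; equal inode numbers mean two processes share that namespace (run as root to see other users' processes):
ls -Li /proc/1234/ns/*                         # namespaces (with inodes) of PID 1234
readlink /proc/1234/ns/net /proc/5678/ns/net   # same value => same net namespace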
{ "source": [ "https://unix.stackexchange.com/questions/113530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50482/" ] }
113,544
I have an output from lstopo --output-format txt -v --no-io > lstopo.txt for an 8-core node in a cluster, which is https://dl.dropboxusercontent.com/u/13029929/lstopo.txt The file is a text drawing of the node. It is too wide for both the terminal and gedit on Ubuntu on my laptop, and some of its right part is wrapped around to the left and overlaps the left part of the drawing. I wonder how I can view the file properly? ( Added: I realize that I can view the drawing properly by uploading to dropbox and opening in Firefox, which zooms out the drawing properly. But opening the local file in Firefox mis-displays the dash lines "-", and I wonder why? Other than Firefox, is there any other software that can work on it?) What does "PU P#" mean in each core "Core P#"? Why are their numbers not the same? Does "L1i" mean an L1 instruction cache, and "L1d" an L1 data cache? Why do L2 and L3 caches not have a distinction between instruction cache and data cache? Is this common for computers? What does "Socket P#" mean? Is the "socket" used for the connection between the L3 caches and the main memory? What does "NUMANode P# (16GB)" mean? Is it a main memory chip? Does the drawing show that there are four cores sharing a main memory chip, and the other four cores sharing another main memory chip? Is there not a main memory shared by all the 8 cores in the node? So is the node just like a distributed system with two 4-core computers without shared memory between them? How can the two 4-core groups communicate with each other? Does "Machine (32GB)" mean the sum of the sizes of the two main memory chips mentioned in 6?
Here are the answers to your questions: I'd view it as a graphical image rather than an ASCII image. $ lstopo --output-format png -v --no-io > cpu.png NOTE: You can view the generated file cpu.png "PU P#" = Processing Unit Processor #. These are processing elements within the cores of the CPU. On my laptop (Intel i5) I have 2 cores that each have 2 processing elements, for a total of 4. But in actuality I have only 2 physical cores. L#i = Instruction Cache, L#d = Data Cache. L1 = a Level 1 cache. In the Intel architectures the instruction & data get mixed as you move down from L1 → L2 → L3. "Socket P#" means that there are 2 physical sockets on the motherboard, i.e. 2 physically discrete CPUs in this setup. In multiple CPU architectures the RAM is usually split so that a portion of it is assigned to each CPU (socket). If CPU0 needs data from CPU1's RAM, then it needs to "request" this data through CPU1. There are a number of reasons why this is done, too many to elaborate here. Read up on NUMA style memory architectures if you're really curious. The drawing is showing 4 cores (with 1 Processing Unit in each) in each of the 2 physical CPU packages. Each physical CPU has "isolated" access to 16 GB of RAM. No, there is no shared memory among all the CPUs. The 2 CPUs have to interact with each other's RAM through the other CPU. Again see the NUMA Wikipage for more on Non-Uniform Memory Access . Yes, the system has a total of 32 GB of RAM. But only 1/2 of the RAM is accessible by either physical CPU directly. What's a socket? A socket is the term used to describe the actual package that a CPU is contained inside of, for mounting on the motherboard. There are many different styles and configurations; check out the Wikipedia page on CPU Sockets . This picture also kind of illustrates the relationships between the "cores", the CPUs, and the "sockets".
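If you want to cross-check what lstopo reports (an optional sketch, assuming the util-linux and numactl packages are installed):
lscpu              # sockets, cores per socket, threads per core, cache sizes
numactl --hardware # each NUMA node with its CPUs and its local RAM size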
{ "source": [ "https://unix.stackexchange.com/questions/113544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
113,563
function foo { (cd $FOOBAR; <some command> if [$? -ne 0] then echo "Nope!" else echo "OK!" fi ) } I am trying to write a function like the one above and place it in my .bashrc file. After I source the file and run, I get: Total time: 51 seconds -bash: [1: command not found OK! Can someone help me understand what I did wrong?
Add a space after the [ , and another before ] : function foo { (cd "$FOOBAR"; <some command> if [ "$?" -ne 0 ] then echo "Nope!" else echo "OK!" fi ) } [ is a shell builtin; it is a command just like echo , read , expr ... It needs a space after it, and requires a matching ] . Writing [ "$?" -ne 0 ] is actually invoking [ and giving it 4 parameters: $? , -ne , 0 , and ] . Note: the fact that you are getting an error saying [1: command not found means that $? had the value 1 .
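A common, slightly tidier variant (just a sketch; <some command> remains the placeholder from the question) tests the command's exit status directly instead of inspecting $? afterwards:
foo() {
    (
        cd "$FOOBAR" || exit    # exit only leaves the subshell here
        if <some command>; then
            echo "OK!"
        else
            echo "Nope!"
        fi
    )
}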
{ "source": [ "https://unix.stackexchange.com/questions/113563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3115/" ] }
113,670
Is updatedb necessary at all? I never use locate , and my servers tend to have dozens of millions of files, which usually makes updatedb run for a long time and consume I/O needed by MySQL and/or other software. Can I just remove it from cron and expect everything to work? (By everything I mean the usual software found on a server: linux, cpanel, mysql, apache, php etc.)
Yes, you can disable it in the cron jobs or remove the package that provides updatedb . On a Red Hat system you'd go through the following steps to determine whether anything requires it prior to removal. First find out where the program is located on disk. $ type updatedb updatedb is /usr/bin/updatedb Next find out what package provides updatedb . $ rpm -qf /usr/bin/updatedb mlocate-0.26-3.fc19.x86_64 See if anything requires mlocate . $ rpm -q --whatrequires mlocate no package requires mlocate Nothing requires it so you can remove the package. $ yum remove mlocate
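On Debian/Ubuntu-style systems the equivalent checks would look roughly like this (a sketch only; package and cron file names can vary by release):
dpkg -S "$(command -v updatedb)"   # which package ships updatedb (usually mlocate)
apt-get remove mlocate             # remove it entirely, or ...
chmod -x /etc/cron.daily/mlocate   # ... just stop the daily cron job from running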
{ "source": [ "https://unix.stackexchange.com/questions/113670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59666/" ] }
113,690
Could anyone explain what happens in the system after unplugging the CD-ROM while working in a LiveCD session? Suppose that I'm using an Ubuntu LiveCD and suddenly, by accident, the CD-ROM disconnects for a matter of seconds because of, for example, a power outage (in the case of external devices) or just opening the CD tray, if the OS allows it. Assume that after a while the CD-ROM is plugged in again. Can anyone explain what exactly happens then, why the OS stops (or rather keeps working, but not as before), and how, if it's possible, stability and usefulness can be brought back so the user can continue working? According to my observations based on Ubuntu LiveCDs (versions under 12, the kernel version is probably 2.6.XX) the system reaction is like the following: all applications in the X desktop disappear (I think it's due to a kill signal automatically being sent to all processes, as if the OS wanted to shut down, but actually it seems to wait for something) and only the background image with the cursor remains, which can still be moved all TTYs are available and can be displayed, but there are only error messages there like INFO: task <process name>:<pid> blocked for more than 120 seconds and SquashFS error: unable to read [...] all the time I can see a blinking '_' and I can type something, but pressing enter just skips to a new line and obviously the OS doesn't execute it the terminal's shortcuts for cancelling and quitting the current process also don't work magic SysRq combinations are generally working and I can see the actual output, but I don't know which of them would be useful in this case Can anyone explain this reaction and tell what exactly these errors mean? And what can one do then to fix it - is there any way for that, and if not, why not? Here: What does "INFO: task XXX blocked for more than 120 seconds" exactly mean on Linux? I've just read that "if a task is blocked, it waits for resources to become available again". So if I got it correctly it's waiting for resources - is that true, or a wrong interpretation, and will it wait endlessly? The CD-ROM is surely plugged in again as fast as possible. How should I understand it? I know that the processes are gone (or "blocked", as the error messages are suggesting?), but how about files in the ramdisk, are they still there until reboot? Is it possible to access them somehow? Or for example just extract text strings and so on? There is a magic SysRq shortcut which remounts filesystems - is there any chance that it would help in such situations, or should it be tried eventually at the end after other trials? If the CD is working as a virtual fs, what effects can remounting filesystems bring in that case? @bersch: It's surely a real situation - rare but possible. It happened to me, so it happens, and I just wonder what I should, or actually what I can, do next time - I like to understand things happening around me. It can happen when you're using, for example, a server machine with a UPS and no optical drive, or just a laptop or netbook with battery power supply and no built-in CD-ROM, and in both cases you have to use an external CD-ROM connected to wall power to run the LiveCD. I know that it's possible to boot through the network or just use a USB stick instead of a CD, but sometimes using a LiveCD is necessary and irreplaceable. So just imagine what happens when in such circumstances there is a power outage, and what the effect of that is: the computer - I mean all hardware except the CD-ROM - is still powered on and working well thanks to the UPS or battery, but the CD-ROM is not.
It's rare because it's obviously impossible when you're using a standard laptop with the CD-ROM inside, which would be powered all the time from the battery just like the rest of the hardware in such a situation, or a standard PC, which would shut down when the power goes off at the same time as the CD-ROM because they share a power supply; it can happen accidentally only in the cases mentioned above. I really don't know why developers didn't predict it and did not include any solution for such eventualities. It's untypical but still possible, and the questions are not only about a hypothetical situation. I would really like to know what exactly happens in the system then and how it works in this state: what's working and what's not, what can be done, whether the filesystem with files still exists on the ramdisk, whether it can be restored, whether processes are blocked or killed, and so on - generally speaking, how it works. Where can I find out some more about it?
{ "source": [ "https://unix.stackexchange.com/questions/113690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55602/" ] }
113,695
To take a static screenshot of a selected part of my screen, I often use scrot with -s shot.png . This is great for adding illustrations to StackExchange posts. I even found this script to automatically upload such a screenshot to Imgur.com and put a link in my X clipboard! Let's turn this up to twelve : How do I similarly create a GIF screencast? There are programs like recordmydesktop , byzanz & co as discussed on Ask Ubuntu that aim to be "user friendly", but in my experience are buggy, inefficient, mostly unscriptable and unsuited for little one-off things like this. I just want to select an area and record a GIF, with a console command I can understand, not some arcane unscriptable GUI monstrosity. How can I do this?
OK then I started ffcast , did vim , quit ffcast , then convert ed .avi → .gif . I ran the recording commands in another terminal. Polished script for your $PATH at the end of this answer. What happened? Capturing FFcast helps the user interactively select a screen region and hands over the geometry to an external command, such as FFmpeg, for screen recording. ffcast is the glorious product of some hacking at the Arch Linux community (mainly lolilolicon ). You can find it on github (or in the AUR for Arch ers). Its dependency list is just bash and ffmpeg , though you'll want xrectsel ( AUR link ) for interactive rectangle selection. You can also append ffmpeg flags right after the command. I set -r 15 to capture at 15 frames per second and -codec:v huffyuv for lossless recording. (Play with these to tweak the size/quality tradeoff.) GIFfing ImageMagick can read .avi videos and has some GIF optimisation tricks that drastically reduce file size while preserving quality: The -layers Optimize to convert invokes the general-purpose optimiser. The ImageMagick manual has a page on advanced optimisations too. Final script This is what I have in my $PATH . It records into a temporary file before converting. #!/bin/bash TMP_AVI=$(mktemp /tmp/outXXXXXXXXXX.avi) ffcast -s % ffmpeg -y -f x11grab -show_region 1 -framerate 15 \ -video_size %s -i %D+%c -codec:v huffyuv \ -vf crop="iw-mod(iw\\,2):ih-mod(ih\\,2)" $TMP_AVI \ && convert -set delay 10 -layers Optimize $TMP_AVI out.gif Thanks to BenC for detective work in figuring out the correct flags after the recent ffcast update. If you'd like to install the dependencies on a Debian-based distro, Louis has written helpful installation notes .
{ "source": [ "https://unix.stackexchange.com/questions/113695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16404/" ] }
113,702
I run a console window through my PuTTY session. That console window has a column width of 140. When I start the screen session, the console shrinks to 80 columns. I do not see this behavior on CentOS 5, only on CentOS 6. Does anybody know what has to be tweaked?
{ "source": [ "https://unix.stackexchange.com/questions/113702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1257/" ] }
113,719
Is there a unix command that can check if any two lines in a file are the same? For example, consider a file sentences.txt This is sentence X This is sentence Y This is sentence Z This is sentence X This is sentence A This is sentence B We see that the sentence This is sentence X is repeated. Is there any command that can quickly detect this, so that I can perhaps execute it like this - $ cat sentences.txt | thecommand Line 1:This is sentence X Line 4:This is sentence X
Here is one way to get the exact output you're looking for: $ grep -nFx "$(sort sentences.txt | uniq -d)" sentences.txt 1:This is sentence X 4:This is sentence X Explanation: The inner $(sort sentences.txt | uniq -d) lists each line that occurs more than once. The outer grep -nFx looks again in sentences.txt for exact -x matches to any of these lines -F and prepends their line number -n
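A one-pass-per-read awk sketch (not part of the original answer) that prints every duplicated line with its line number; it reads the file twice, once to count and once to print:
awk 'NR==FNR { count[$0]++; next } count[$0] > 1 { print "Line " FNR ":" $0 }' sentences.txt sentences.txt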
{ "source": [ "https://unix.stackexchange.com/questions/113719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17133/" ] }
113,732
I am using 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux Debian GNU/Linux 7.1 (wheezy) Release: 7.1. I typed "sudo apt-get upgrade" and hoped for the best (I updated all my packages just before doing that with "sudo apt-get update"). I am not aware of any changes to grub I could have made, although I am not the owner of this machine, I just happen to have sudo permissions and use it. Please, what should I do? I am afraid of breaking my system:( A new version of configuration file /etc/default/grub is available, but the version installed currently has been locally modified. │ What do you want to do about modified configuration file grub? │ │ │ │ install the package maintainer's version │ │ keep the local version currently installed │ │ show the differences between the versions │ │ show a side-by-side difference between the versions │ │ show a 3-way difference between available versions │ │ do a 3-way merge between available versions (experimental) │ │ start a new shell to examine the situation │ Here is the screen after "show the differences between the versions"
To sum up: Use the show the differences between the versions option to check what the differences are. From the diff view, you can recognize the changes you have made to the file (if any), and the differences between the current file and the maintainer's file. Now you need to merge the maintainer's file with the local changes: either install the package maintainer's version and then edit to reintroduce your changes to the settings, or keep the local version currently installed and then edit to introduce the changes made by the package maintainer. In your case you have made no changes to the file, and the differences are minor and irrelevant to your setup, so you can ignore them and proceed with install the package maintainer's version without the need to edit the file any further.
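If you prefer to compare outside the dialog, the other version is usually left next to the file with a suffix (a sketch only; the exact suffix varies with how the package manages the file, e.g. .dpkg-dist, .dpkg-new or .ucf-dist):
ls /etc/default/grub*
diff -u /etc/default/grub /etc/default/grub.ucf-dist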
{ "source": [ "https://unix.stackexchange.com/questions/113732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36400/" ] }
113,754
I need to allow user martin to switch to user martin-test without a password: su - martin-test I think this can be configured in /etc/pam.d/su . There are already some lines in that file which can be uncommented. However, I don't like the idea of adding user martin to group wheel . I don't want to give martin any more privileges than to be able to switch to martin-test . I also do not want to use sudo . What would be the best way to do it, while keeping the privileges of user martin minimal?
Add the following lines underneath the pam_rootok.so line in your /etc/pam.d/su : auth [success=ignore default=1] pam_succeed_if.so user = martin-test auth sufficient pam_succeed_if.so use_uid user = martin These lines perform checks using the pam_succeed_if.so module. See also the Linux-PAM configuration file syntax to learn more about the auth lines. The first line checks whether the target user is martin-test . If it is, nothing happens ( success=ignore ) and we can continue on the next line to check the current user . If it is not, the next line will be skipped ( default=1 ) and we continue on subsequent lines with the usual authentication steps. The second line checks whether the current user is martin or not. If it is, the system considers the authentication process successful and returns ( sufficient ); if it is not, nothing happens and we continue on subsequent lines with the usual authentication steps. You can also restrict su to a group; here the group allowedpeople can su without a password: auth sufficient pam_succeed_if.so use_uid user ingroup allowedpeople
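A quick way to verify the change (just an illustrative sketch), run from martin's shell after editing /etc/pam.d/su :
su - martin-test -c 'id -un'   # should print "martin-test" with no password prompt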
{ "source": [ "https://unix.stackexchange.com/questions/113754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
113,774
It appears I still miss some things about the way permissions work. I am on a debian 7 system btw. Just now I have this file which I downloaded, and it belongs to myuser:myuser , that is, both user and group are set to me. It also resides in my $HOME directory since that is where I downloaded it to. So far so good. Now I want to share this file with some other users of the pc and for that I want to switch the group ownership of the file to group "users". However that fails: nass@quarx:~/xmas_carol$ chgrp -R users * chgrp: changing group of 'movie.mov': Operation not permitted And the contents of the folder are: -rwxr-xr-x 1 nass nass 2482411461 Feb 6 03:57 movie.mov I am fuzzy about what is going on with the permissions. Can someone explain?
Your user is probably not a member of the users group, so you don't have the right to give a file to that group. To illustrate: $ groups terdon sudo netdev fuse vboxsf vboxusers $ ls -l file -rw-r--r-- 1 terdon terdon 604 Feb 6 03:04 file $ chgrp users file chgrp: changing group of ‘file’: Operation not permitted $ chgrp vboxusers file $ ls -l file -rw-r--r-- 1 terdon vboxusers 604 Feb 6 03:04 file This behavior is mentioned in the POSIX specs : Only the owner of a file or the user with appropriate privileges may change the owner or group of a file. Some implementations restrict the use of chgrp to a user with appropriate privileges when the group specified is not the effective group ID or one of the supplementary group IDs of the calling process. The main reason for this is that if you aren't a member of a group, you should not be able to modify what that group has access to. This answer on chown permissions is also relevant. Traditionally, on shared systems, you have a users group to which all regular users belong and that is the primary group of each user. That way, files are created owned by the users group and all users can read them. Anyway, since that is not the way that Debian-based distros are set up these days, the way to give a specific user access to your file would be to either Change the group ownership of the file/directory to a group that both you and the other user are members of; Just change the permissions of the file/directory accordingly: $ chmod 755 /home/terdon $ ls -ld /home/terdon drwxr-xr-x 170 terdon terdon 491520 Apr 20 13:43 /home/terdon/ That will make the directory accessible to everybody.
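For completeness, a sketch of the first route (it needs root; the names are the ones from the question, so treat them as examples):
sudo gpasswd -a nass users   # make the file's owner a member of the "users" group
# log out and back in (or run: newgrp users) so the new membership takes effect
chgrp users movie.mov
chmod g+r movie.mov          # let the group read the file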
{ "source": [ "https://unix.stackexchange.com/questions/113774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29529/" ] }
113,795
In python re.sub(r"(?<=.)(?=(?:...)+$)", ",", stroke ) To split a number by triplets, e.g.: echo 123456789 | python -c 'import sys;import re; print re.sub(r"(?<=.)(?=(?:...)+$)", ",", sys.stdin.read());' 123,456,789 How to do the same with bash/awk?
With sed : $ echo "123456789" | sed 's/\([[:digit:]]\{3\}\)\([[:digit:]]\{3\}\)\([[:digit:]]\{3\}\)/\1,\2,\3/g' 123,456,789 (Note that this only works for exactly 9 digits!) or this with sed : $ echo "123456789" | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta' 123,456,789 With printf : $ LC_NUMERIC=en_US printf "%'.f\n" 123456789 123,456,789
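Since the question asks for awk as well, here is a pure-awk sketch (not in the original answer; it handles plain positive integers only, no signs or decimals):
echo 123456789 | awk '{
    s = $0; out = ""
    while (length(s) > 3) {
        out = "," substr(s, length(s) - 2) out
        s = substr(s, 1, length(s) - 3)
    }
    print s out
}'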
{ "source": [ "https://unix.stackexchange.com/questions/113795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59716/" ] }