41,563
I found this image on the internet, but I don't know how it was made. Can anyone provide a sample of it? I would just like to have a color table as well.
Install the package named colortest and enjoy the colors by running its binaries, such as colortest-8, colortest-16, colortest-16b, and so on.
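A minimal sketch of trying this out, assuming a Debian/Ubuntu-style system where the colortest package is available in the repositories:

sudo apt-get install colortest   # pulls in the colortest-* binaries
colortest-16b                    # prints a 16-color table with bold variants

On other distributions the package may be named differently or split up; the equivalent table can also be printed with a small shell loop over the ANSI escape codes.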
{ "source": [ "https://unix.stackexchange.com/questions/41563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
41,571
Consider the following code:

foo () {
    echo $*
}

bar () {
    echo $@
}

foo 1 2 3 4
bar 1 2 3 4

It outputs:

1 2 3 4
1 2 3 4

I am using ksh88, but I am interested in other common shells as well. If you happen to know any particularity of specific shells, please do mention it. I found the following in the ksh man page on Solaris:

The meaning of $* and $@ is identical when not quoted or when used as a parameter assignment value or as a file name. However, when used as a command argument, $* is equivalent to ``$1d$2d...'', where d is the first character of the IFS variable, whereas $@ is equivalent to $1 $2 ....

I tried modifying the IFS variable, but it doesn't change the output. Maybe I'm doing something wrong?
When they are not quoted, $* and $@ are the same. You shouldn't use either of these, because they can break unexpectedly as soon as you have arguments containing spaces or wildcards.

"$*" expands to a single word "$1c$2c...". c was a space in the Bourne shell but is now the first character of IFS in modern Bourne-like shells (from ksh and specified by POSIX for sh), so it can be anything¹ you choose. The only good use I've ever found for it is:

join arguments with comma (simple version)

function join1 {
    typeset IFS=,        # typeset makes a local variable in ksh²
    print -r -- "$*"     # using print instead of unreliable echo³
}

join1 a b c    # => a,b,c

join arguments with the specified delimiter (better version)

function join2 {
    typeset IFS="$1"
    shift
    print -r -- "$*"
}

join2 + a b c    # => a+b+c

"$@" expands to separate words: "$1" "$2" ... This is almost always what you want. It expands each positional parameter to a separate word, which makes it perfect for taking command line or function arguments in and then passing them on to another command or function. And because it expands using double quotes, it means things don't break if, say, "$1" contains a space or an asterisk (*)⁴.

Let's write a script called svim that runs vim with sudo. We'll do three versions to illustrate the difference.

svim1

#!/bin/sh
sudo vim $*

svim2

#!/bin/sh
sudo vim "$*"

svim3

#!/bin/sh
sudo vim "$@"

All of them will be fine for simple cases, e.g. a single file name that doesn't contain spaces:

svim1 foo.txt    # == sudo vim foo.txt
svim2 foo.txt    # == sudo vim "foo.txt"
svim3 foo.txt    # == sudo vim "foo.txt"

But only $* and "$@" work properly if you have multiple arguments.

svim1 foo.txt bar.txt    # == sudo vim foo.txt bar.txt
svim2 foo.txt bar.txt    # == sudo vim "foo.txt bar.txt"    # one file name!
svim3 foo.txt bar.txt    # == sudo vim "foo.txt" "bar.txt"

And only "$*" and "$@" work properly if you have arguments containing spaces.

svim1 "shopping list.txt"    # == sudo vim shopping list.txt    # two file names!
svim2 "shopping list.txt"    # == sudo vim "shopping list.txt"
svim3 "shopping list.txt"    # == sudo vim "shopping list.txt"

So only "$@" will work properly all the time.

¹ though beware that in some shells it doesn't work for multibyte characters.

² typeset, which is used to set types and attributes of variables, also makes a variable local in ksh⁵ (in ksh93, that's only for functions defined with the Korn function f {} syntax, not the Bourne f() ... syntax). It means here that IFS will be restored to its previous value when the function returns. This is important, because the commands you run afterward might not work as expected if IFS is set to something non-standard and you forgot to quote some expansions.

³ echo will or may fail to print its arguments properly if the first starts with - or any contain backslashes; print can be told not to do backslash processing with -r and to guard against arguments starting with - or + with the -- (or -) option delimiter. printf '%s\n' "$*" would be the standard alternative, but note that ksh88 and pdksh and some of its derivatives still don't have printf built in.

⁴ Note that "$@" didn't work properly in the Bourne shell and ksh88 when $IFS didn't contain the space character, as effectively it was implemented as the positional parameters being joined with unquoted spaces and the result subject to $IFS splitting.
Early versions of the Bourne shell also had the bug that "$@" was expanded to one empty argument when there was no positional parameter, which is one of the reasons why you sometimes see ${1+"$@"} in place of "$@". Neither of those bugs affects modern Bourne-like shells.

⁵ The Almquist shell and bosh have local for that instead. bash, yash and zsh also have typeset, aliased to local (and also to declare in bash and zsh), with the caveat that in bash, local can only be used in a function.
{ "source": [ "https://unix.stackexchange.com/questions/41571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4098/" ] }
41,647
I have this issue with a Lenovo Thinkcentre Edge. Its keyboard has an Fn key, which acts in my Ubuntu (with Fluxbox) as if it were always "active/pressed". I can't use the standard F1-F12 keys unless I hold down this stupid key. You see, I'm a programmer, so it's a real pain for me. So I decided to remap the function keys with xev and xmodmap. I remapped F1-F3, and till this point everything is fine, but F4 does some kind of window minimization. When I run xev and hit F4, I don't get a reply from the program with a keycode and such; instead the window is minimized, and when I maximize the window again there is no response from the key. Important info: the function of the Fn key can't be disabled in the BIOS. So the question is: do you have ANY idea how to solve my mystery?

EDIT:

# content of .fluxbox/keys

# click on the desktop to get menus
OnDesktop Mouse1 :HideMenus
OnDesktop Mouse2 :WorkspaceMenu
OnDesktop Mouse3 :RootMenu

# scroll on the desktop to change workspaces
OnDesktop Mouse4 :PrevWorkspace
OnDesktop Mouse5 :NextWorkspace

# scroll on the toolbar to change current window
OnToolbar Mouse4 :PrevWindow {static groups} (iconhidden=no)
OnToolbar Mouse5 :NextWindow {static groups} (iconhidden=no)

# alt + left/right click to move/resize a window
OnWindow Mod1 Mouse1 :MacroCmd {Raise} {Focus} {StartMoving}
OnWindowBorder Move1 :StartMoving
OnWindow Mod1 Mouse3 :MacroCmd {Raise} {Focus} {StartResizing NearestCorner}
OnLeftGrip Move1 :StartResizing bottomleft
OnRightGrip Move1 :StartResizing bottomright

# alt + middle click to lower the window
OnWindow Mod1 Mouse2 :Lower

# control-click a window's titlebar and drag to attach windows
OnTitlebar Control Mouse1 :StartTabbing

# double click on the titlebar to shade
OnTitlebar Double Mouse1 :Shade

# left click on the titlebar to move the window
OnTitlebar Mouse1 :MacroCmd {Raise} {Focus} {ActivateTab}
OnTitlebar Move1 :StartMoving

# middle click on the titlebar to lower
OnTitlebar Mouse2 :Lower

# right click on the titlebar for a menu of options
OnTitlebar Mouse3 :WindowMenu

# alt-tab
Mod1 Tab :NextWindow {groups} (workspace=[current])
Mod1 Shift Tab :PrevWindow {groups} (workspace=[current])

# cycle through tabs in the current window
Control Tab :NextTab
Control Shift Tab :PrevTab

# go to a specific tab in the current window
Mod4 1 :Tab 1
Mod4 2 :Tab 2
Mod4 3 :Tab 3
Mod4 4 :Tab 4
Mod4 5 :Tab 5
Mod4 6 :Tab 6
Mod4 7 :Tab 7
Mod4 8 :Tab 8
Mod4 9 :Tab 9

# open a terminal
Mod1 F1 :Exec x-terminal-emulator

# open a dialog to run programs
Mod1 F2 :Exec fbrun

# volume settings, using common keycodes
# if these don't work, use xev to find out your real keycodes
176 :Exec amixer sset Master,0 1+
174 :Exec amixer sset Master,0 1-
160 :Exec amixer sset Master,0 toggle

# current window commands
Mod1 F4 :Close
Mod1 F5 :Kill

# open the window menu
Mod1 space :WindowMenu

# exit fluxbox
Control Mod1 Delete :Exit

# change to previous/next workspace
Control Mod1 Left :PrevWorkspace
Control Mod1 Right :NextWorkspace

# change to a specific workspace
Control F1 :Workspace 1
Control F2 :Workspace 2
Control F3 :Workspace 3
Control F4 :Workspace 4

#osobni
Mod4 d :ShowDesktop
Mod4 m :Maximize
Mod4 f :Exec firefox
Mod4 u :Exec unison-gtk
Mod4 e :Exec eclipse
Mod4 t :Exec thunderbird
Mod4 q :Exec qutim
Mod4 s :Exec skype

Ubuntu is 12.04 LTS, kernel 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
On a Lenovo Thinkpad Edge, press Fn + Esc; the Fn key will light up, and you can then use the F1-F12 keys as default.
{ "source": [ "https://unix.stackexchange.com/questions/41647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19109/" ] }
41,655
This is the data that I want to sort, but sort treats the numbers as strings, so the data is not sorted as I expect:

/home/files/profile1
/home/files/profile10
/home/files/profile11
/home/files/profile12
/home/files/profile14
/home/files/profile15
/home/files/profile16
/home/files/profile2
/home/files/profile3
/home/files/profile4
/home/files/profile5
/home/files/profile6
/home/files/profile7
/home/files/profile8
/home/files/profile9

I want to sort this to:

/home/files/profile1
/home/files/profile2
/home/files/profile3
/home/files/profile4
/home/files/profile5
/home/files/profile6
/home/files/profile7
/home/files/profile8
/home/files/profile9
/home/files/profile10
/home/files/profile11
/home/files/profile12
/home/files/profile14
/home/files/profile15
/home/files/profile16

Is there a good way to do this with a bash script? I can't use Ruby or Python scripts here.
This is very similar to this question. The trouble is that you are sorting on an alphanumeric field, and -n doesn't treat that sensibly; version sort (-V), however, does. Thus use:

sort -V

Note that this feature is currently supported by the GNU, FreeBSD and OpenBSD sort implementations.
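If your sort lacks -V, a keyed numeric sort works for this particular data, assuming the fixed /home/files/profile prefix from the question:

sort -t/ -k4.8n files.txt    # 4th /-separated field, numeric from its 8th character

Here files.txt is a hypothetical file holding the list; the same works on piped input.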
{ "source": [ "https://unix.stackexchange.com/questions/41655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20261/" ] }
41,668
Suppose I read (cat) a file while another process is rewriting its contents. Is the output predictable? What would happen?
That depends on what the writer does.

If the writer overwrites the existing file, then the reader will see the new content when the writer overtakes the reader, if ever. If the writer and the reader proceed at variable speeds, the reader may alternately see old and new content.

If the writer truncates the file before it starts to write, the reader will run against the end of the file at that point.

If the writer creates a new file then moves the new file to the old name, the reader will keep reading from the old file. If an opened file is moved or removed, the processes that have the file open keep reading from that same file. If the file is removed, it actually remains on the disk (but with no way to open it again) until the last process has closed it.

Unix systems tend not to have mandatory locks. If an application wants to ensure that its writer component and its reader component don't step on each other's toes, it's up to the developer to use proper locking. There are a few exceptions where a file that's open by the kernel may be protected from writing by user applications, for example a loop-mounted filesystem image or an executable that's being executed on some Unix variants.
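A small sketch of the remove-while-open behaviour described above, runnable in any POSIX shell (the file name is arbitrary):

echo 'old content' > demo.txt
exec 3< demo.txt      # open the file on descriptor 3
rm demo.txt           # unlink it; the data stays on disk while fd 3 is open
cat <&3               # still prints: old content
exec 3<&-             # close the descriptor; only now is the space reclaimed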
{ "source": [ "https://unix.stackexchange.com/questions/41668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17874/" ] }
41,682
Is there a way to back out of all SSH connections and close PuTTY in "one shot"? I work in Windows 7 and use PuTTY to SSH to various Linux hosts. An example of the way I find myself working:

SSH to host1 with PuTTY...

banjer@host1:~> #...doin some work...ooh! need to go check something on host8...
banjer@host1:~> ssh host8
banjer@host8:~> #...doin some work...OK time for lunch. lets close putty...
banjer@host8:~> exit
banjer@host1:~> exit

PuTTY closes.

Per the above, is there any way to get from host8 to closing PuTTY in one shot? Sometimes I find myself 5 or 10 hosts deep. I realize I can click the X to close the PuTTY window, but I like to make sure my SSH connections get closed properly by using the exit command. I also realize I'm asking for tips on how to increase laziness. I'll just write it off as "how can I be more efficient".
Try using the ssh connection termination escape sequence. In the ssh session, enter ~. (tilde dot); the escape is only recognized at the start of a line, i.e. immediately after a newline. You won't see the characters when you type them, but the session will terminate immediately.

$ ~.
$ Connection to me.myhost.com closed.

From man 1 ssh:

The supported escapes (assuming the default '~') are:

~.    Disconnect.
~^Z   Background ssh.
~#    List forwarded connections.
~&    Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.
~?    Display a list of escape characters.
~B    Send a BREAK to the remote system (only useful for SSH protocol version 2 and if the peer supports it).
~C    Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing remote port-forwardings using -KR[bind_address:]port. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option.
~R    Request rekeying of the connection (only useful for SSH protocol version 2 and if the peer supports it).
{ "source": [ "https://unix.stackexchange.com/questions/41682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/689/" ] }
41,693
So, you can use * as a wildcard for all files when using cp within the context of a directory. Is there a way to copy all files except one particular file?
In bash you can use extglob:

$ shopt -s extglob    # to enable extglob
$ cp !(b*) new_dir/

where !(b*) excludes all b* files. You can later disable extglob with:

$ shopt -u extglob
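To exclude a single named file rather than a pattern, the same mechanism applies; here x.txt stands in for whatever file you want to skip:

$ cp !(x.txt) new_dir/
$ cp !(x.txt|b*) new_dir/    # several exclusions, separated by |

Note this matches everything in the current directory except the listed names, including directories, which cp will complain about unless you add -r.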
{ "source": [ "https://unix.stackexchange.com/questions/41693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10979/" ] }
41,739
Sometimes I misunderstand the syntax of a command:

# mysql -d test
mysql: unknown option '-d'
# echo $?
2

I try again and get it right:

# mysql --database test
Welcome to the MySQL monitor.
mysql > ...

How do I prevent the first command, whose exit code differs from 0, from entering the history?
I don't think you really want that. My usual workflow goes like this:

1. Type a command
2. Run it
3. Notice it failing
4. Press the UP key
5. Edit the command
6. Run it again

Now, if the failed command weren't saved into the history, I couldn't easily get it back to fix and run again.
{ "source": [ "https://unix.stackexchange.com/questions/41739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1079/" ] }
41,740
I understand that the -exec can take a + option to mimic the behaviour of xargs . Is there any situation where you'd prefer one form over the other? I personally tend to prefer the first form, if only to avoid using a pipe. I figure surely the developers of find must've done the appropriate optimizations. Am I correct?
You might want to chain calls to find (once you have learned that it is possible, which might be today). This is, of course, only possible as long as you stay in find; once you pipe to xargs it's out of scope.

A small example, two files a.lst and b.lst:

cat a.lst
fuddel.sh
fiddel.sh

cat b.lst
fuddel.sh

No trick here - simply the fact that both contain "fuddel" but only one contains "fiddel". Assume we didn't know that. We search for a file which matches two conditions:

find -exec grep -q fuddel {} ";" -exec grep -q fiddel {} ";" -ls
192097 4 -rw-r--r-- 1 stefan stefan 20 Jun 27 17:05 ./a.lst

Well, maybe you know the syntax for grep or another program to pass both strings as a condition, but that's not the point. Every program which can return true or false, given a file as argument, can be used here - grep was just a popular example.

And note, you may follow find -exec with other find commands, like -ls or -delete or something similar. Note that -delete not only does rm (removes files), but rmdir (removes directories) too.

Such a chain is read as an AND combination of commands, as long as not otherwise specified (namely with an -or switch (and parens (which need masking))). So you aren't leaving the find chain, which is a handy thing. I don't see any advantage in using xargs, since you have to be careful in passing the files, which is something find doesn't need to do - it automatically handles passing each file as a single argument for you.

If you believe you need some masking for find's {} braces, feel free to visit my question which asks for evidence. My assertion is: you don't.
{ "source": [ "https://unix.stackexchange.com/questions/41740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4098/" ] }
41,803
I love vim's colorization of /var/log/messages , but it only works for that – the absolute filename. It doesn't work for older rotations of messages (e.g. /var/log/messages-20120610 ) or for messages files I get from other systems. How can I tweak this?
When you have the file open, you can run:

:set filetype=messages

To automate this for all files called messages, put the following into ~/.vim/ftdetect/messages.vim:

autocmd BufNewFile,BufReadPost *messages* :set filetype=messages
{ "source": [ "https://unix.stackexchange.com/questions/41803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14833/" ] }
41,817
If my target has one device connected and many drivers for that device loaded, how can I find out which device is using which driver?
Just use /sys. Example: I want to find the driver for my Ethernet card:

$ sudo lspci
...
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)

$ find /sys | grep drivers.*02:00
/sys/bus/pci/drivers/r8169/0000:02:00.0

That is r8169. First I find the coordinates of the device using lspci; then I find the driver that is used for the device with these coordinates.
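An equivalent lookup that goes straight to the device's driver symlink - a sketch assuming the same PCI address as above:

$ basename "$(readlink /sys/bus/pci/devices/0000:02:00.0/driver)"
r8169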
{ "source": [ "https://unix.stackexchange.com/questions/41817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
41,828
Given the following command:

gzip -dc /cdrom/cdrom0/file.tar.gz | tar xvf -

What does the - at the end of the command mean? Is it some kind of placeholder?
In this case, it means ‘standard input’. It's used by some software (e.g. tar ) when a file argument is required and you need to use stdin instead. It's not a shell construct and it depends on the program you're using. Check the manpage if in doubt! In this instance, standard input is the argument to the -f option. In cases where - isn't supported, you can get away with using something like tar xvf /proc/self/fd/0 or tar xvf /dev/stdin (the latter is widely supported in various unices). Don't rely on this to mean ‘standard input’ universally. Since it's not interpreted by the shell, every program is free to deal with it as it pleases. In some cases, it's standard output or something entirely different: on su it signifies ‘start a login shell’. In other cases, it's not interpreted at all. Muscle memory has made me create quite a few files named - because some version of some program I was used to didn't understand the dash.
{ "source": [ "https://unix.stackexchange.com/questions/41828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15142/" ] }
41,840
By default on RHEL 5.5 I have [deuberger@saleen trunk]$ sudo cat /etc/securetty console vc/1 vc/2 vc/3 vc/4 vc/5 vc/6 vc/7 vc/8 vc/9 vc/10 vc/11 tty1 tty2 tty3 tty4 tty5 tty6 tty7 tty8 tty9 tty10 tty11 What is the difference between each of the entry types ( console , vc/* , and tty* )? Specifically, what is the end result of adding and removing each entry type? My understanding is that they affect how and when you can login, but are there any other effects? And when can you and when can you not login depending on which entries are there? EDIT 1 What I do know is that tty1-6 correspond to whether you can login from the first 6 consoles that you reach using Ctrl - Alt - F1 through Ctrl - Alt - F6 . I always thought those were virtual consoles, so I'm a bit confused. And what does console correspond to? Thanks. EDIT 2 What is the effect, if any, in single user mode?
/etc/securetty is consulted by the pam_securetty module to decide from which virtual terminals (tty*) root is allowed to login. In the past, /etc/securetty was consulted by programs like login directly, but now PAM handles that. So changes to /etc/securetty will affect anything using PAM with a configuration file that uses pam_securetty.so. So, only the login program is affected by default. /etc/pam.d/login is used for local logins and /etc/pam.d/remote is used for remote logins (like telnet).

The primary entry types and their effects are as follows:

- If /etc/securetty doesn't exist, root is allowed to login from any tty.
- If /etc/securetty exists and is empty, root access will be restricted to single user mode or programs that are not restricted by pam_securetty (i.e. su, sudo, ssh, scp, sftp).
- If you are using devfs (a deprecated filesystem for handling /dev), adding entries of the form vc/[0-9]* will permit root login from the given virtual console number.
- If you are using udev (for dynamic device management and a replacement for devfs), adding entries of the form tty[0-9]* will permit root login from the given virtual console number.
- Listing console in /etc/securetty normally has no effect, since /dev/console points to the current console and is normally only used as the tty filename in single user mode, which is unaffected by /etc/securetty.
- Adding entries like pts/[0-9]* will allow programs that use pseudo-terminals (pty) and pam_securetty to log in as root, assuming the allocated pty is one of the ones listed; it's normally a good idea not to include these entries because it's a security risk: it would allow, for instance, someone to log in as root via telnet, which sends passwords in plaintext (note that pts/[0-9]* is the format for udev, which is used in RHEL 5.5; it will be different if using devfs or some other form of device management).

For single user mode, /etc/securetty is not consulted because sulogin is used instead of login (see the sulogin man page for more info). Also, you can change the login program used in /etc/inittab for each runlevel.

Note that you should not use /etc/securetty to control root logins via ssh. To do that, change the value of PermitRootLogin in /etc/ssh/sshd_config. By default /etc/pam.d/sshd is not configured to consult pam_securetty (and therefore /etc/securetty). You could add a line to do so, but ssh doesn't set the actual tty until sometime after the auth stage, so it doesn't work as expected. During the auth and account stages - at least for openssh - the tty (PAM_TTY) is hardcoded to ssh.

The above answer is based on RHEL 5.5. Much of it will pertain to current distributions of other *nix systems, but there are differences, some of which I noted, but not all. I answered this myself because the other answers were incomplete and/or inaccurate. Many other forums, blogs, etc. online have inaccurate and incomplete information on this topic as well, so I've done extensive research and testing to try to get the correct details. If anything I've said is wrong, please let me know.
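For reference, the PAM hook that ties all of this together is a single line in /etc/pam.d/login; a minimal sketch (the exact control flag varies by distribution, so check your own file):

auth       required     pam_securetty.so

Removing or commenting out that line disables the /etc/securetty check entirely for login.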
Sources:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/ch-sec-network.html#s1-wstation-privileges
http://www.mathematik.uni-marburg.de/local-doc/centos5/pam-0.99.6.2/html/sag-pam_securetty.html
http://linux.die.net/man/1/login
http://www.tldp.org/HOWTO/html_single/Text-Terminal-HOWTO/
http://www.kernel.org/doc/Documentation/devices.txt
http://en.wikipedia.org/wiki/Virtual_console
http://en.wikipedia.org/wiki/Linux_console
http://www.kernel.org/doc/man-pages/online/pages/man4/console.4.html
http://www.unix.com/security/8527-restricting-root-login.html
http://www.redhat.com/mirrors/LDP/HOWTO/Serial-HOWTO-11.html#ss11.3
http://www.mathematik.uni-marburg.de/local-doc/centos5/udev-095/udev_vs_devfs
{ "source": [ "https://unix.stackexchange.com/questions/41840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6708/" ] }
41,863
du and df do rather similar things, and so I always find myself typing the wrong one. I think if I knew what "du" and "df" stands for it might make it easier to remember which to use. What is a way to differentiate between these two so I can remember which does which action?
du == Disk Usage. It walks through the directory tree and counts the summed size of all files therein. It may not output exact information due to the possibility of unreadable files, hardlinks in the directory tree, etc. It will show information about the specific directory requested. Think: "How much disk space is being used by these files?"

df == Disk Free. It looks at used disk blocks directly in the filesystem metadata. Because of this it returns much faster than du, but can only show info about the entire disk/partition. Think: "How much free disk space do I have?"
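A quick way to see the difference side by side (the directory is an arbitrary example):

$ du -sh /var/log    # "how much space do these files use?" - walks the tree
$ df -h /var/log     # "how much space is free?" - asks the filesystem containing it

df accepts a path and reports on whichever filesystem holds it, which makes the pairing convenient.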
{ "source": [ "https://unix.stackexchange.com/questions/41863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4143/" ] }
41,889
I'm trying to chroot into an Arch Linux ARM filesystem from x86_64. I've seen that it's possible to do using a static qemu, by copying the binary into the chroot:

$ cp /usr/bin/qemu-arm archarm-chroot/usr/bin

But despite this I always get the following error:

chroot: failed to run command '/bin/bash': Exec format error

I know this means that the architectures differ. Am I doing something wrong?
I use an ARM chroot from time to time: my phone runs Linux Deploy and the image dies now and then. I then copy it to my computer and examine the situation with chroot like this:

# This provides the qemu-arm-static binary
apt-get install qemu-user-static

# Mount my target filesystem on /mnt
mount -o loop fs.img /mnt

# Copy the static ARM binary that provides emulation
cp $(which qemu-arm-static) /mnt/usr/bin
# Or, more simply:
cp /usr/bin/qemu-arm-static /mnt/usr/bin

# Finally chroot into /mnt, then run 'qemu-arm-static bash'
# This chroots; runs the emulator; and the emulator runs bash
chroot /mnt qemu-arm-static /bin/bash
{ "source": [ "https://unix.stackexchange.com/questions/41889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14667/" ] }
41,954
I have got new earphones, the AKG K318s to be exact, and they have one of those remotes. On a smartphone, such as an Android phone or iPhone, the buttons map to actions on the music player such as play/pause, volume up/down, skip, previous - you get the idea. I was wondering how I could replicate the same functions on my computer. I imagine the process consists of getting X to recognize the input, and then somehow mapping those inputs for an application to use. The "device" (which would connect via the sound jack) isn't listed in xinput, nor do the buttons trigger regular keyboard events. How can I use the earphones plugged into the sound output jack as X key inputs?
Those 'special' headphones or earphones which can be used on specialized devices to control media players, volume and mute usually have FOUR connections on the plug, versus the typical THREE a normal headphone output jack has. The usual three are Left Channel, Right Channel and Ground (common), while the fourth is often set up as a multi-value resistance: each button, when pressed, presents a particular resistance on the fourth wire (+ ground), which the media device can sense and from that determine what function is needed. Pretty slick method of getting several buttons to work off one wire without resorting to expensive digital signal generators and stuff (all packed in that little blob on the wires!). Four buttons might use four resistances (of any unit):

volume up: 1 ohm
volume down: 2 ohms
stop: 4 ohms
play: 8 ohms

If this looks suspiciously like a binary encoding scheme... it is!! (You're so smart!!) Using values similarly ratio'd, you can sense 16 different outputs, even handling multiple keys pressed at the same time. Taa Daa!

Old people might remember the first iPods, which had a little 4-connector jack next to the audio out plug, into which many devices plugged alongside their audio plug, enabling control signals to be sent back and forth. This was phased out in favor of the (imho cooler!) fourth-wire system... standard headphones will work as expected, and headphones set up to interface with the fourth wire are accepted too.

But to answer your question (finally!!)... no, there is no 'standard' way to enable the functionality you're looking for. Bluetooth headsets would be your best solution. (Mine are COOL!)
{ "source": [ "https://unix.stackexchange.com/questions/41954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11724/" ] }
42,015
I am trying to run zerofree on Ubuntu 11.04 so that I can compact the VirtualBox vdi image using:

VBoxManage modifyhd Ubuntu.vdi --compact

In order to run zerofree the disk image has to be mounted read-only. I'm following these instructions, which say to use this to remount read-only from recovery mode (drop to root shell prompt):

mount -n -o remount,ro -t ext2 /dev/sda1 /

But when I do this I get the error:

mount: / is busy

Any ideas on how to do this?

Follow-up: following Jari's answer and this post, running these commands resolved the issue:

service rsyslog stop
service network-manager stop
killall dhclient
Some processes are keeping files open for writing. These could be, for example, programs that write logs, like rsyslogd , networking tools, like dhclient or something else. Shutting these down one by one and trying the remount might work. You can find processes that use certain files by using the program fuser . For example, fuser -v -m / will return a list of processes. However, I am not sure if it is one of these which keeps the file system busy.
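lsof offers a similar view; a sketch that narrows the listing to processes holding files open for writing on the root filesystem (the awk pattern keys off lsof's FD column, e.g. "3w" or "4u"):

lsof / | awk 'NR==1 || $4 ~ /[0-9]+[uw]/'

Those are the processes most likely to block a read-only remount.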
{ "source": [ "https://unix.stackexchange.com/questions/42015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20405/" ] }
42,020
If I set the current/working directory (navigating to it using cd) to some particular directory and then type:

rm *.xvg

what will this command do? Is it true that the above command will delete files with the extension .xvg only in the working directory? I was nervous about trying this before asking, because I want to be absolutely sure that the above command will only delete .xvg files LOCATED IN THE WORKING DIRECTORY.
Yes, rm *.xvg will only delete the files with the specified extension in your current directory. A good way to make sure you are indeed in the directory where you want to delete your files is to use the pwd command, which will display your current directory, and then do an ls to verify you find the files you are expecting.

If you are a bit apprehensive about issuing the rm command, there are two things you can do:

1. Type ls *.xvg to see a list of the files that would be affected by this command.
2. Use the -i command line switch for rm (it also exists for cp and mv). Using rm -i *.xvg will prompt you for each individual file whether it is OK to delete it, so you can be sure nothing you didn't expect gets deleted. (This will be tedious if you have a lot of files, though.)
{ "source": [ "https://unix.stackexchange.com/questions/42020", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
42,131
Server A exports directory /srv via NFS with the nohide option. A subdirectory within /srv, /srv/foo, is a mount point for another location on the NFS server, bind-mounted like this:

server# mount --bind /bar/foo/ /srv/foo/

Client B imports A:/srv and mounts it on /mnt/srv using NFS. The contents of /mnt/srv are the contents of A:/srv. The problem is that /mnt/srv/foo is empty, while I'm expecting to see the contents of A:/bar/foo/ there. How do I properly export and import NFS shares that have subdirectories as mount points?
crossmnt is your friend:

/srv *(rw,fsid=0,no_subtree_check,crossmnt)
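To spell the workflow out - a sketch assuming the line above lives in /etc/exports on server A:

# /etc/exports on server A
/srv *(rw,fsid=0,no_subtree_check,crossmnt)

Then re-export without restarting the NFS server:

server# exportfs -ra

With crossmnt, filesystems mounted below /srv (such as the bind mount on /srv/foo) are exported along with it, so the client sees their contents.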
{ "source": [ "https://unix.stackexchange.com/questions/42131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3766/" ] }
42,198
I have a large tarball that is busy being FTP'd over from a remote system to our local system. I want to know if it is possible to start untarring, let's say, 50 files at a time so that those files can begin being processed while the transfer takes place.
Here is a detailed explanation of how it is possible to extract specific files from an archive. Specifically, GNU tar can be used to extract a single file or more from a tarball. To extract specific archive members, give their exact member names as arguments. For example:

tar --extract --file={tarball.tar} {file}

You can also extract those files that match a specific globbing pattern (wildcards). For example, to extract from cbz.tar all files that begin with pic, no matter their directory prefix, you could type:

tar -xf cbz.tar --wildcards --no-anchored 'pic*'

To extract all php files, enter:

tar -xf cbz.tar --wildcards --no-anchored '*.php'

Where:

-x : instructs tar to extract files.
-f : specifies the filename / tarball name.
-v : verbose (show progress while extracting files).
-j : filter the archive through bzip2; use to decompress .bz2 files.
-z : filter the archive through gzip; use to decompress .gz files.
--wildcards : instructs tar to treat command line arguments as globbing patterns.
--no-anchored : informs it that the patterns apply to member names after any / delimiter.
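For the incremental case in the question, one rough sketch (assuming the partial archive is called partial.tar and its member names contain no whitespace) is to list what has arrived so far and extract the first 50 entries:

tar tf partial.tar | head -n 50 | xargs tar xf partial.tar

Expect tar to complain about an unexpected end of file when it reaches the truncated tail; re-running it later, once more data has arrived, picks up the newer members.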
{ "source": [ "https://unix.stackexchange.com/questions/42198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7726/" ] }
42,277
The tl;dr: how would I go about fixing a bad block on one disk in a RAID1 array? But please read this whole thing for what I've tried already and possible errors in my methods. I've tried to be as detailed as possible, and I'm really hoping for some feedback.

This is my situation: I have two 2TB disks (same model) set up in a RAID1 array managed by mdadm. About 6 months ago I noticed the first bad block when SMART reported it. Today I noticed more, and am now trying to fix it.

This HOWTO page seems to be the one article everyone links to for fixing bad blocks that SMART is reporting. It's a great page, full of info, however it is fairly outdated and doesn't address my particular setup. Here is how my config is different:

- Instead of one disk, I'm using two disks in a RAID1 array. One disk is reporting errors while the other is fine. The HOWTO is written with only one disk in mind, which brings up various questions, such as: do I use this command on the disk device or the RAID device?
- I'm using GPT, which fdisk does not support. I've been using gdisk instead, and I'm hoping that it is giving me the same info that I need.

So, let's get down to it. This is what I have done, however it doesn't seem to be working. Please feel free to double-check my calculations and method for errors. The disk reporting errors is /dev/sda:

# smartctl -l selftest /dev/sda
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.4.4-2-ARCH] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%     12169         3212761936

With this, we gather that the error resides on LBA 3212761936. Following the HOWTO, I use gdisk to find the start sector to be used later in determining the block number (as I cannot use fdisk since it does not support GPT):

# gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): CFB87C67-1993-4517-8301-76E16BBEA901
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      3907029134   1.8 TiB     FD00  Linux RAID

Using tune2fs I find the block size to be 4096. Using this info and the calculation from the HOWTO, I conclude that the block in question is ((3212761936 - 2048) * 512) / 4096 = 401594986.

The HOWTO then directs me to debugfs to see if the block is in use (I use the RAID device as it needs an EXT filesystem; this was one of the commands that confused me, as I did not, at first, know whether I should use /dev/sda or /dev/md0):

# debugfs
debugfs 1.42.4 (12-June-2012)
debugfs:  open /dev/md0
debugfs:  testb 401594986
Block 401594986 not in use

So block 401594986 is empty space; I should be able to write over it without problems. Before writing to it, though, I try to make sure that it, indeed, cannot be read:

# dd if=/dev/sda1 of=/dev/null bs=4096 count=1 seek=401594986
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000198887 s, 20.6 MB/s

If the block could not be read, I wouldn't expect this to work. However, it does.
I repeat using /dev/sda, /dev/sda1, /dev/sdb, /dev/sdb1, /dev/md0, and +-5 to the block number to search around the bad block. It all works. I shrug my shoulders and go ahead and commit the write and sync (I use /dev/md0 because I figured modifying one disk and not the other might cause issues; this way both disks overwrite the bad block):

# dd if=/dev/zero of=/dev/md0 bs=4096 count=1 seek=401594986
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000142366 s, 28.8 MB/s
# sync

I would expect that writing to the bad block would have the disks reassign the block to a good one, however running another SMART test shows differently:

# 1  Short offline       Completed: read failure       90%     12170         3212761936

Back to square one. So basically, how would I fix a bad block on one disk in a RAID1 array? I'm sure I've not done something correctly...

Thanks for your time and patience.

EDIT 1: I've tried to run a long SMART test, with the same LBA returning as bad (the only difference is that it reports 30% remaining rather than 90%):

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       30%     12180         3212761936
# 2  Short offline       Completed: read failure       90%     12170         3212761936

I've also used badblocks with the following output. The output is strange and seems to be misformatted, but I tried to test the numbers output as blocks, and debugfs gives an error:

# badblocks -sv /dev/sda
Checking blocks 0 to 1953514583
Checking for bad blocks (read-only test):
1606380968ne, 3:57:08 elapsed. (0/0/0 errors)
1606380969ne, 3:57:39 elapsed. (1/0/0 errors)
1606380970ne, 3:58:11 elapsed. (2/0/0 errors)
1606380971ne, 3:58:43 elapsed. (3/0/0 errors)
done
Pass completed, 4 bad blocks found. (4/0/0 errors)

# debugfs
debugfs 1.42.4 (12-June-2012)
debugfs:  open /dev/md0
debugfs:  testb 1606380968
Illegal block number passed to ext2fs_test_block_bitmap #1606380968 for block bitmap for /dev/md0
Block 1606380968 not in use

Not sure where to go from here. badblocks definitely found something, but I'm not sure what to do with the information presented...

EDIT 2: More commands and info. I feel like an idiot for forgetting to include this originally. These are the SMART values for /dev/sda. I have 1 Current_Pending_Sector, and 0 Offline_Uncorrectable.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       166
  2 Throughput_Performance  0x0026   055   055   000    Old_age   Always       -       18345
  3 Spin_Up_Time            0x0023   084   068   025    Pre-fail  Always       -       5078
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       75
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       12224
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       75
181 Program_Fail_Cnt_Total  0x0022   100   100   000    Old_age   Always       -       1646911
191 G-Sense_Error_Rate      0x0022   100   100   000    Old_age   Always       -       12
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   059   000    Old_age   Always       -       36 (Min/Max 22/41)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   252   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       30
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       77

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu May  5 06:30:21 2011
     Raid Level : raid1
     Array Size : 1953512383 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953512383 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Jul  3 22:15:51 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : server:0  (local to host server)
           UUID : e7ebaefd:e05c9d6e:3b558391:9b131afb
         Events : 67889

    Number   Major   Minor   RaidDevice State
       2       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

As per one of the answers: it would seem I switched seek and skip for dd. I was using seek because that's what the HOWTO uses. Using this command causes dd to hang:

# dd if=/dev/sda1 of=/dev/null bs=4096 count=1 skip=401594986

Using blocks around that one (..84, ..85, ..87, ..88) seems to work just fine, and using /dev/sdb1 with block 401594986 reads just fine as well (as expected, as that disk passed SMART testing). Now, the question I have is: when writing over this area to reassign the blocks, do I use /dev/sda1 or /dev/md0? I don't want to cause any issues with the RAID array by writing directly to one disk and not having the other disk update.

EDIT 3: Writing to the block directly produced filesystem errors. I've chosen an answer that solved the problem quickly:

# 1  Short offline       Completed without error       00%     14211         -
# 2  Extended offline    Completed: read failure       30%     12244         3212761936

Thanks to everyone who helped. =)
All these "poke the sector" answers are, quite frankly, insane. They risk (possibly hidden) filesystem corruption. If the data were already gone, because that disk stored the only copy, it'd be reasonable. But there is a perfectly good copy on the mirror. You just need to have mdraid scrub the mirror. It'll notice the bad sector, and rewrite it automatically. # echo 'check' > /sys/block/mdX/md/sync_action # use 'repair' instead for older kernels You need to put the right device in there (e.g., md0 instead of mdX). This will take a while, as it does the entire array by default. On a new enough kernel, you can write sector numbers to sync_min/sync_max first, to limit it to only a portion of the array. This is a safe operation. You can do it on all of your mdraid devices. In fact, you should do it on all your mdraid devices, regularly. Your distro likely ships with a cronjob to handle this, maybe you need to do something to enable it? Script for all RAID devices on the system A while back, I wrote this script to "repair" all RAID devices on the system. This was written for older kernel versions where only 'repair' would fix the bad sector; now just doing check is sufficient (repair still works fine on newer kernels, but it also re-copies/rebuilds parity, which isn't always what you want, especially on flash drives) #!/bin/bash save="$(tput sc)"; clear="$(tput rc)$(tput el)"; for sync in /sys/block/md*/md/sync_action; do md="$(echo "$sync" | cut -d/ -f4)" cmpl="/sys/block/$md/md/sync_completed" # check current state and get it repairing. read current < "$sync" case "$current" in idle) echo 'repair' > "$sync" true ;; repair) echo "WARNING: $md already repairing" ;; check) echo "WARNING: $md checking, aborting check and starting repair" echo 'idle' > "$sync" echo 'repair' > "$sync" ;; *) echo "ERROR: $md in unknown state $current. ABORT." exit 1 ;; esac echo -n "Repair $md...$save" >&2 read current < "$sync" while [ "$current" != "idle" ]; do read stat < "$cmpl" echo -n "$clear $stat" >&2 sleep 1 read current < "$sync" done echo "$clear done." >&2; done for dev in /dev/sd?; do echo "Starting offline data collection for $dev." smartctl -t offline "$dev" done If you want to do check instead of repair , then this (untested) first block should work: case "$current" in idle) echo 'check' > "$sync" true ;; repair|check) echo "NOTE: $md $current already in progress." ;; *) echo "ERROR: $md in unknown state $current. ABORT." exit 1 ;; esac
{ "source": [ "https://unix.stackexchange.com/questions/42277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20536/" ] }
42,287
I have a command that I want to have run again automatically each time it terminates, so I ran something like this:

while [ 1 ]; do COMMAND; done;

but I can't stop the loop with Ctrl-C, as that just kills COMMAND and not the entire loop. How would I achieve something similar which I can stop without having to close the terminal?
Check the exit status of the command. If the command was terminated by a signal, the exit code will be 128 + the signal number. From the GNU online documentation for bash:

For the shell's purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.

POSIX also specifies that the value of a command that terminated by a signal is greater than 128, but does not seem to specify its exact value like GNU does:

The exit status of a command that terminated because it received a signal shall be reported as greater than 128.

For example, if you interrupt a command with Ctrl-C the exit code will be 130, because SIGINT is signal 2 on Unix systems. So:

while [ 1 ]; do COMMAND; test $? -gt 128 && break; done
{ "source": [ "https://unix.stackexchange.com/questions/42287", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12093/" ] }
42,320
Having recently come across wordlist and wordnet , two great discoveries on their own, I'm now looking for a similar tool, if simpler, that will take the bare infinitive of a verb and return the simple past and past participle. Example: $ verbteacher throw Simple past: threw Past participle: thrown Does anybody know where to find verbteacher(1) ?
Seems the easiest way is to write it yourself. At first look I found a pretty good website that can give us all the information we need. Thus all we need to do is write a function that will parse it. So, five minutes with bash, and voila:

$ function verbteacher() {
    wget -qO - http://conjugator.reverso.net/conjugation-english-verb-$1.html | \
    sed -n "/>Preterite\|>Past</{s@<[^>]*>@ @g;s/\s\+/ /g;/e I/s/.* I \([^ ]*\) you .*/Simple past: \1/;/ Past/s/ Past /Past participle: /;p}"
}
$ verbteacher go
Simple past: went
Past participle: gone
$ verbteacher throw
Simple past: threw
Past participle: thrown

So you can put this function in your ~/.bashrc and use it until the site changes its structure. Hope it never does. Obviously it won't work without an internet connection.
{ "source": [ "https://unix.stackexchange.com/questions/42320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20557/" ] }
42,349
I sometimes run long xargs jobs overnight and it is really annoying to discover in the morning that xargs died somewhere in the middle, for example because of a segmentation fault in one single special case, as happened last night. If even one xargs child is killed, it does not process any more input:

Console 1:

[09:35:48] % seq 40 | xargs -i --max-procs=4 bash -c 'sleep 10; date +"%H:%M:%S {}";'
xargs: bash: terminated by signal 15
09:35:58 3
09:35:58 4
09:35:58 2
<Exit with code 125>

Console 2:

[09:35:54] kill 5601

Can I somehow prevent xargs from refusing to process any more input once a child process dies, and instead have it continue processing?
No, you can't. From the xargs sources at savannah.gnu.org:

if (WEXITSTATUS (status) == CHILD_EXIT_PLEASE_STOP_IMMEDIATELY)
  error (XARGS_EXIT_CLIENT_EXIT_255, 0,
         _("%s: exited with status 255; aborting"), bc_state.cmd_argv[0]);
if (WIFSTOPPED (status))
  error (XARGS_EXIT_CLIENT_FATAL_SIG, 0,
         _("%s: stopped by signal %d"), bc_state.cmd_argv[0], WSTOPSIG (status));
if (WIFSIGNALED (status))
  error (XARGS_EXIT_CLIENT_FATAL_SIG, 0,
         _("%s: terminated by signal %d"), bc_state.cmd_argv[0], WTERMSIG (status));
if (WEXITSTATUS (status) != 0)
  child_error = XARGS_EXIT_CLIENT_EXIT_NONZERO;

There's no flag around that check, or around the function that calls it. It does seem to be related to max procs, which I suppose makes sense: if you set max procs high enough, it won't bother checking until it's hit the limit, which might be never.

A better solution for what you're trying to do might be to use GNU Make:

TARGETS=$(patsubst %,target-%,$(shell seq 1 40))

all: $(TARGETS)

target-%:
	sleep 10; date +"%H:%M:%S $*"

(note that the recipe line must start with a tab). Then:

$ make -k -j4

will have the same effect, and give you much better control.
{ "source": [ "https://unix.stackexchange.com/questions/42349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4297/" ] }
42,359
How can I configure systemd to automatically log me in to my desktop environment, preferably without using a login manager? I'm using Arch Linux.
This is described in the ArchWiki. Create a new service file similar to getty@.service by copying it to /etc/systemd/system/:

cp /usr/lib/systemd/system/getty@.service /etc/systemd/system/autologin@.service

This basically copies the already existing getty@.service to a new file autologin@.service which can be freely modified. It is copied to /etc/systemd/system because that's where site-specific unit files are stored; /usr/lib/systemd/system contains unit files provided by packages, so you shouldn't change anything in there.

You will then have to symlink that autologin@.service to the getty service for the tty on which you want to autologin, for example for tty1:

ln -s /etc/systemd/system/autologin@.service /etc/systemd/system/getty.target.wants/getty@tty1.service

Up to now, this is still the same as the usual getty@.service file, but the most important part is to modify autologin@.service to actually log you in automatically. To do that, you only need to change the ExecStart line to read:

ExecStart=-/sbin/agetty -a USERNAME %I 38400

The difference between the ExecStart line in getty@.service and autologin@.service is only the -a USERNAME, which tells agetty to log the user with the username USERNAME in automatically.

Now you only have to tell systemd to reload its daemon files and start the service:

systemctl daemon-reload
systemctl start getty@tty1.service

(I'm not sure if the service will start properly if you're already logged in on tty1; the safest way is probably to just reboot instead of starting the service.)

If you then want to automatically start X, insert the following snippet into your ~/.bash_profile (taken from the wiki again):

if [[ -z $DISPLAY ]] && [[ $(tty) = /dev/tty1 ]]; then
    exec startx
fi

You will have to modify your ~/.xinitrc to start your desktop environment; how to do that depends on the DE and is probably described in the ArchWiki as well.
{ "source": [ "https://unix.stackexchange.com/questions/42359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20583/" ] }
42,376
I have a serial port device that I would like to test using the Linux command line. I am able to use stty and echo for sending commands to the serial port, but when the device responds I have no way of reading what is coming from it. I am using:

stty -F /dev/ttyS0 speed 9600 cs8 -cstopb -parenb && echo -n ^R^B > /dev/ttyS0

to send a command to the device. The device operates and sends a response back within 300 ms. How do I print that response to the console using the command line?
Same as with output. Example:

cat /dev/ttyS0

Or:

cat < /dev/ttyS0

The first example is an app that opens the serial port and relays what it reads from it to its stdout (your console). The second is the shell directing the serial port traffic to any app that you like; this particular app then just relays its stdin to its stdout.

To get better visibility into the traffic, you may prefer a hex dump:

od -x < /dev/ttyS0
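Putting both directions together - a sketch reusing the exact settings and command bytes from the question (^R^B stands for the question's literal control characters):

stty -F /dev/ttyS0 speed 9600 cs8 -cstopb -parenb
cat /dev/ttyS0 &              # reader in the background prints whatever arrives
echo -n ^R^B > /dev/ttyS0     # send the command; the ~300 ms reply shows up on the console
kill %1                       # stop the background reader when done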
{ "source": [ "https://unix.stackexchange.com/questions/42376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22056/" ] }
42,398
I used the useradd command to create a new account, but I did so without specifying a password. Now, when the user tries to log in, it asks for a password. If I didn't set one up initially, how do I set the password now?
The easiest way to do this from the command line is to use the passwd command with root privileges:

passwd username

From man 1 passwd:

NAME
       passwd - update user's authentication token

SYNOPSIS
       passwd [-k] [-l] [-u [-f]] [-d] [-n mindays] [-x maxdays] [-w warndays] [-i inactivedays] [-S] [--stdin] [username]

DESCRIPTION
       The passwd utility is used to update user's authentication token(s).

After you set the user's password, you can force the user to change it on next login using the chage command (also with root privileges), which expires the password:

chage -d 0 username

When the user successfully authenticates with the password you set, the user will automatically be prompted to change it. After a successful password change, the user will be disconnected, forcing re-authentication with the new password. See man 1 chage for more information on password expiry.
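If you need to set the password non-interactively (say, from a provisioning script), chpasswd reads user:password pairs on stdin; a sketch with placeholder values:

echo 'username:NewPassw0rd' | chpasswd

Be aware the plaintext password can end up in your shell history this way, so prefer the interactive passwd when working by hand.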
{ "source": [ "https://unix.stackexchange.com/questions/42398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15417/" ] }
42,407
I'm trying to find all files that are of a certain type and do not contain a certain string. I am trying to go about it by piping find to grep -v, for example:

find -type f -name '*.java' | xargs grep -v "something something"

This does not seem to work; it seems to be just returning all the files that the find command found. What I am trying to do is basically find all .java files that match a certain filename (e.g. ending with 'Pb', as in SessionPb.java) and that do not have an 'extends SomethingSomething' inside them. My suspicion is that I'm doing it wrong. So what should the command look like instead?
There is no need for xargs here. Also, you need to use grep with the -L option (files without a match), because otherwise it will output the file content instead of the file names, as in your example:

find . -type f -iname "*.java" -exec grep -L "something something" {} \+
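Adapted to the exact goal stated in the question (names ending in Pb, no 'extends SomethingSomething' inside), the same shape would be:

find . -type f -name '*Pb.java' -exec grep -L "extends SomethingSomething" {} \+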
{ "source": [ "https://unix.stackexchange.com/questions/42407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20600/" ] }
42,567
Assume I have ssh access to some Ubuntu server as a regular user and I need some non-system tools installed for convenience (mc, rtorrent, mcedit). I do not want to bother the admins for these small programs. Is there a way to install them (make them run) without using something like sudo apt-get install?
You need to compile these from source. It should just be a matter of:

apt-get source PACKAGE
./configure --prefix=$HOME/myapps
make
make install

The binary would then be located in ~/myapps/bin. So, add:

export PATH="$HOME/myapps/bin:$PATH"

to your .bashrc file and reload it with source ~/.bashrc. Of course, this assumes that gcc is installed on the system.
{ "source": [ "https://unix.stackexchange.com/questions/42567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6852/" ] }
42,572
I haven't found a clear answer to the differences between the two options to the command shutdown . Is halt the same as shutdown -H and poweroff the same as shutdown -P ?
It's a bit historical. halt was used before ACPI (which today will turn off the power for you)*. It would halt the system and then print a message to the effect of "it's ok to power off now". Back then there were physical on/off switches, rather than the combo ACPI controlled power button of modern computers. poweroff , naturally will halt the system and then call ACPI power off. * These days halt is smart enough to automatically call poweroff if ACPI is enabled. In fact, they are functionally equivalent now.
{ "source": [ "https://unix.stackexchange.com/questions/42572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
42,629
In OS X I can just hold down the option key and press the left cursor key until I get to the word I need to edit (or in Vi I can just hit b ), but I haven't been able to figure out how to do this in Terminal yet...
To set the key binding: You first have to find out what key codes the Ctrl + Left key sequence creates. Just use the command cat to switch off any interference with existing key bindings, and then type the key sequence. On my system (Linux), this looks like this: $ cat ^[[1;5D Press Ctrl + d to exit cat. Now you have found out that Ctrl-Left issues 6 key codes: Escape (^[) [ 1 ; 5 D Now you can issue the bind command: bind '"\e[1;5D": backward-word'
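To make the binding permanent for every readline-based program (not only the current bash session), the same sequence can go into ~/.inputrc . Note that the exact escape sequence is terminal-dependent, so use whatever the cat test above showed on your system; the Ctrl + Right counterpart is added here on the same assumption:
# ~/.inputrc
"\e[1;5D": backward-word
"\e[1;5C": forward-word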
{ "source": [ "https://unix.stackexchange.com/questions/42629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20710/" ] }
42,636
If I try wget on a webpage, I get the page as HTML. Is it possible to retrieve only the text of a page, without the associated HTML? (This is required for me since some of the HTML pages I download contain C programs wrapped in HTML tags. I have to open them in a browser and manually copy the text to make a .c file.)
wget will only retrieve the document. If the document is in HTML, what you want is the result of parsing the document. You could, for example, use lynx -dump -nolist , if you have lynx around. lynx is a lightweight, simple web browser, which has the -dump feature, used to output the result of the parsing process. -nolist avoids the list of links at the end, which will appear if the page has any hyperlinks. As mentioned by @Thor, elinks can be used for this too, as it also has a -dump option (and has -no-references to omit the list of links). It may be especially useful if you walk across some site using -sigh- frames (MTFBWY). Also, keep in mind that, unless the page is really just C code with HTML tags, you will need to check the result, just to make sure there's nothing more than C code there.
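As a sketch of the use case from the question — a page that is really just a C program — the dump can be sent straight to a file (the URL is a placeholder):
lynx -dump -nolist 'http://example.com/hello.html' > hello.c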
{ "source": [ "https://unix.stackexchange.com/questions/42636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17137/" ] }
42,643
I read about setting up ssh keys in Linux and have some questions. Correct me if I'm wrong… Let's say host tr-lgto wants to connect to host tr-mdm using ssh. If we want to be sure that it's the real tr-mdm, we generate a pair of keys on tr-mdm and we add the public key to known_hosts on tr-lgto. If tr-mdm wants to check that it's the real tr-lgto, then tr-lgto has to generate a keypair and add the public key to authorized_keys on tr-mdm. Question 1 : There is no user field in the known_hosts file, just IP addresses and hostnames. tr-mdm might have a lot of users, each with their own .ssh folder. Should we add the public key to each user's known_hosts file? Question 2 : I found that ssh-keyscan -t rsa tr-mdm will return the public key of tr-mdm. How do I know which user this key belongs to? Moreover, the public key in /root/.ssh/ is different from what that command returns. How can this be?
You're mixing up the authentication of the server machine to the client machine, and the authentication of the user to the server machine. Server authentication One of the first things that happens when the SSH connection is being established is that the server sends its public key to the client, and proves (thanks to public-key cryptography ) to the client that it knows the associated private key. This authenticates the server: if this part of the protocol is successful, the client knows that the server is who it pretends it is. The client may check that the server is a known one, and not some rogue server trying to pass off as the right one. SSH provides only a simple mechanism to verify the server's legitimacy: it remembers servers you've already connected to, in the ~/.ssh/known_hosts file on the client machine (there's also a system-wide file /etc/ssh/known_hosts ). The first time you connect to a server, you need to check by some other means that the public key presented by the server is really the public key of the server you wanted to connect to. If you have the public key of the server you're about to connect to, you can add it to ~/.ssh/known_hosts on the client manually. Authenticating the server has to be done before you send any confidential data to it. In particular, if the user authentication involves a password, the password must not be sent to an unauthenticated server. User authentication The server only lets a remote user log in if that user can prove that they have the right to access that account. Depending on the server's configuration and the user's choice, the user may present one of several forms of credentials (the list below is not exhaustive). The user may present the password for the account that he is trying to log into; the server then verifies that the password is correct. The user may present a public key and prove that he possesses the private key associated with that public key. This is exactly the same method that is used to authenticate the server, but now the user is trying to prove their identity and the server is verifying them. The login attempt is accepted if the user proves that he knows the private key and the public key is in the account's authorization list ( ~/.ssh/authorized_keys on the server). Another type of method involves delegating part of the work of authenticating the user to the client machine. This happens in controlled environments such as enterprises, when many machines share the same accounts. The server authenticates the client machine by the same mechanism that is used the other way round, then relies on the client to authenticate the user.
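To make question 2 concrete: the key that ssh-keyscan returns is the host key, which lives in /etc/ssh/ (e.g. /etc/ssh/ssh_host_rsa_key.pub ) and belongs to the machine, not to any user; the key in /root/.ssh/ is root's personal user key, which is why the two differ. A sketch for comparing fingerprints before trusting a new server, assuming OpenSSH:
ssh-keyscan -t rsa tr-mdm > /tmp/tr-mdm.pub
ssh-keygen -lf /tmp/tr-mdm.pub
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub   # run this on tr-mdm itself, via the console
If the two fingerprints match, it is safe to accept the key into known_hosts .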
{ "source": [ "https://unix.stackexchange.com/questions/42643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20715/" ] }
42,712
I recently managed to set up my mailcap so that Mutt can show HTML e-mails in the message window: # ~/.mailcap text/html; lynx -dump '%s' | more; nametemplate=%s.html; copiousoutput; which is automated by: # ~/.muttrc auto_view text/html Although I think Lynx does a decent job on converting the HTML to text, sometimes this doesn't cut it and I would like to be able to open the HTML attachment in my web browser Luakit . Is there a way to transparently do this? A good workflow for me would look like: open mail (Lynx converts it) see that it is too complicated for Lynx press v navigate to HTML attachment press Enter to open the mail in Luakit.
You can do this with mutt's mime support . In addition, you can use this with Autoview to denote two commands for viewing an attachment, one to be viewed automatically, the other to be viewed interactively from the attachment menu. Essentially, you include two options in your mailcap file 1 . text/html; luakit '%s' &; test=test -n "$DISPLAY"; needsterminal; text/html; lynx -dump %s; nametemplate=%s.html; copiousoutput; The first entry tests that X is running, and if it is, it hands the file to luakit. The default, however, is determined by the copiousoutput tag, so it will be rendered in Mutt by lynx. You will need these options in your .muttrc : auto_view text/html # view HTML automatically alternative_order text/plain text/enriched text/html # save HTML for last If you want to look at it in your browser, it is just a matter of hitting v to view the attached HTML and then m to send it to mailcap. For convenience, I bind Enter to that function in muttrc : bind attach <return> view-mailcap 1. Note, I don't use lynx or luakit, so these options are indicative only. Shamelessly reproduced from this blog post: https://jasonwryan.com/blog/2012/05/12/mutt/
{ "source": [ "https://unix.stackexchange.com/questions/42712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11279/" ] }
42,715
I have a cron job that is scheduled to run everyday, other than changing the schedule, is there any other way to do a test run of the command right now to see if it works as intended? EDIT: (from the comments) I know the command works fine when enter it in shell (my shell), but I want to know if it works correctly when cron runs it, it could be affected by ENV or shell specific stuff (~ expansion) or ownership and permission stuff or ...
You can simulate the cron user environment as explained in "Running a cron job manually and immediately" . This will allow you to test the job works when it would be run as the cron user. Excerpt from link:
Step 1 : I put this line temporarily in the user's crontab:
* * * * * /usr/bin/env > /home/username/tmp/cron-env
then took it out once the file was written.
Step 2 : Made myself a little run-as-cron bash script containing:
#!/bin/bash
/usr/bin/env -i $(cat /home/username/tmp/cron-env) "$@"
So then, as the user in question, I was able to run
run-as-cron /the/problematic/script --with arguments --and parameters
{ "source": [ "https://unix.stackexchange.com/questions/42715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3850/" ] }
42,726
"Joe's own editor" does not come naturally to me. How do I change to using nano or vim? I've tried export EDITOR=nano but it doesn't seem to be respected. I'd like visudo to respect this as well.
To change the default editor at the system level: sudo update-alternatives --config editor and then follow the onscreen prompts.
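If you already know which editor you want, the choice can also be made non-interactively. The path below is illustrative; update-alternatives --list editor shows the paths actually registered on your system:
update-alternatives --list editor
sudo update-alternatives --set editor /bin/nano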
{ "source": [ "https://unix.stackexchange.com/questions/42726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20758/" ] }
42,728
I saw this line in a script: DEVICE=`dialog --inputbox "Festplatten-Laufzeit auslesen. Gebe Sie das gewünschte Device an: " 0 70 "" 3>&1 1>&2 2>&3` What is 3>&1 1>&2 2>&3 doing? I know that 1 = stdout and 2 = stderr, but what are the 3 and the & for?
The numbers are file descriptors and only the first three (starting with zero) have a standardized meaning: 0 - stdin 1 - stdout 2 - stderr So each of these numbers in your command refers to a file descriptor. You can either redirect a file descriptor to a file with > or duplicate it onto another file descriptor with >& . The 3>&1 in your command line creates a new file descriptor 3 and makes it a copy of 1, which is STDOUT . Then 1>&2 points file descriptor 1 at STDERR , and 2>&3 points file descriptor 2 at file descriptor 3's target, the original STDOUT . So basically you switched STDOUT and STDERR ; these are the steps: Create a new fd 3 as a copy of fd 1. If we hadn't saved the original stdout in 3, we would lose it in the next step. Redirect file descriptor 1 to file descriptor 2's target (STDERR). Redirect file descriptor 2 to file descriptor 3's target (the original STDOUT). Now file descriptors one and two are switched: if the program prints something to file descriptor 1, it ends up on the original STDERR, and vice versa.
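A minimal sketch of the swap in action: dialog draws its interface on stdout and writes the value the user typed to stderr, so without the swap the $(...) would capture the interface rather than the answer. Here the two echo s stand in for those two streams:
result=$( { echo "pretend UI" ; echo "user input" >&2 ; } 3>&1 1>&2 2>&3 )
echo "captured: $result"   # prints: captured: user input
The "pretend UI" line ends up on the terminal (via the original stderr), while "user input" is what the command substitution captures.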
{ "source": [ "https://unix.stackexchange.com/questions/42728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20434/" ] }
42,757
Is there a Linux script / application which, instead of deleting files, moves them to a special “trash” location? I’d like this as a replacement for rm (maybe even aliasing the latter; there are pros and cons for that). By “trash” I mean a special folder. A single mv "$@" ~/.trash is a first step, but ideally this should also handle trashing several files of the same name without overwriting older trashed files, and allow to restore files to their original location with a simple command (a kind of “undo”). Furthermore, it’d be nice if the trash was automatically emptied on reboot (or a similar mechanism to prevent endless growth). Partial solutions for this exist, but the “restore” action in particular isn’t trivial. Are there any existing solutions for this which don’t rely on a trash system from a graphical shell? (As an aside, there have been endless discussions whether this approach is justified, rather than using frequent backups and VCS. While those discussions have a point, I believe there’s still a niche for my request.)
There is a specification (draft) for Trash on freedesktop.org. It is apparently what is usually implemented by desktop environments. A command-line implementation would be trash-cli . Without having had a closer look, it seems to provide the functionality you want. If not, tell us to what extent it is only a partial solution. As far as using any program as a replacement/alias for rm is concerned, there are good reasons not to do that. Most important for me are: The program would need to understand/handle all of rm 's options and act accordingly You risk getting used to the semantics of your "new rm" and then performing commands with fatal consequences when working on other people's systems
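As a quick sketch of the trash-cli workflow (the exact command names have varied a little between releases, so check what your version installs):
trash-put notes.txt   # use instead of rm
trash-list            # shows deletion date and original path of each item
trash-restore         # interactively restore a file to its original location
trash-empty 30        # purge anything trashed more than 30 days ago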
{ "source": [ "https://unix.stackexchange.com/questions/42757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3651/" ] }
42,801
Possible Duplicate: Redirecting stdout to a file you don't have write permission on I am trying to append a line of text to a write protected file. I tried to accomplish this with sudo echo "New line to write" >> file.txt but I get a permission denied error — presumably because it is trying to sudo the string, not the act of appending it to a file. If I run sudo vi file.txt and authenticate I can happily write away. Any help would be greatly appreciated.
Use the command below: echo "New line to write" | sudo tee -a file.txt This works because in your failing version the >> redirection is performed by your unprivileged shell before sudo ever runs, whereas here the file is opened for appending by tee itself, which runs as root.
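Two common variations, for the record: drop -a to overwrite rather than append, and redirect tee's stdout to /dev/null if you don't want the line echoed back to the terminal:
echo "New line to write" | sudo tee file.txt > /dev/null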
{ "source": [ "https://unix.stackexchange.com/questions/42801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/625/" ] }
42,847
Most languages have naming conventions for variables, the most common style I see in shell scripts is MY_VARIABLE=foo . Is this the convention or is it only for global variables? What about variables local to the script?
Environment variables or shell variables introduced by the operating system, shell startup scripts, or the shell itself, etc., are usually all in CAPITALS 1 . To prevent your variables from conflicting with these variables, it is a good practice to use lower_case variable names. 1 A notable exception that may be worth knowing about is the path array, used by the zsh shell. This is the same as the common PATH variable but represented as an array.
{ "source": [ "https://unix.stackexchange.com/questions/42847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11897/" ] }
42,898
Is it possible to find any lines in a file that exceed 79 characters?
In order of decreasing speed (on a GNU system in a UTF-8 locale and on ASCII input) according to my tests:
grep '.\{80\}' file
perl -nle 'print if length$_>79' file
awk 'length>79' file
sed -n '/.\{80\}/p' file
Except for the perl ¹ one (and for awk / grep / sed implementations, like mawk or busybox, that don't support multi-byte characters), these count the length in terms of number of characters (according to the LC_CTYPE setting of the locale) instead of bytes . If there are bytes in the input that don't form part of valid characters (which happens sometimes when the locale's character set is UTF-8 and the input is in a different encoding), then depending on the solution and tool implementation, those bytes will either count as 1 character, or 0, or not match . at all. For instance, a line that consists of 30 a s, a 0x80 byte, 30 b s, a 0x81 byte and 30 UTF-8 é s (encoded as 0xc3 0xa9), in a UTF-8 locale would not match .\{80\} with GNU grep / sed (as that standalone 0x80 byte doesn't match . ), would have a length of 30+1+30+1+2*30=122 with perl or mawk , and 3*30=90 with gawk . If you want to count in terms of bytes, fix the locale to C with LC_ALL=C grep/awk/sed... . That would have all 4 solutions consider that the line above contains 122 characters. Except in perl and GNU tools, you'd still have potential issues for lines that contain NUL characters (0x0 byte).
¹ the perl behaviour can be affected by the PERL_UNICODE environment variable though
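If you also want to see where the long lines are, the awk one-liner above extends naturally to print the file name and line number:
awk 'length > 79 {print FILENAME ": " FNR ": " $0}' file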
{ "source": [ "https://unix.stackexchange.com/questions/42898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20841/" ] }
42,901
I have a program which produces useful information on stdout but also reads from stdin . I want to redirect its standard output to a file without providing anything on standard input. So far, so good: I can do: program > output and don't do anything in the tty. However, the problem is I want to do this in the background. If I do: program > output & the program will get suspended ("suspended (tty input)"). If I do: program < /dev/null > output & the program terminates immediately because it reaches EOF. It seems that what I need is to pipe into program something which does not do anything for an indefinite amount of time and does not read stdin . The following approaches work: while true; do sleep 100; done | program > output & mkfifo fifo && cat fifo | program > output & tail -f /dev/null | program > output & However, this is all very ugly. There has to be an elegant way, using standard Unix utilities, to "do nothing, indefinitely" (to paraphrase man true ). How could I achieve this? (My main criteria for elegance here: no temporary files; no busy-waiting or periodic wakeups; no exotic utilities; as short as possible.)
I don't think you're going to get any more elegant than the tail -f /dev/null that you already suggested (assuming this uses inotify internally, there should be no polling or wakeups, so other than being odd looking, it should be sufficient). You need a utility that will run indefinitely, will keep its stdout open, but won't actually write anything to stdout, and won't exit when its stdin is closed. Something like yes actually writes to stdout. cat will exit when its stdin is closed (or when whatever you redirect into it is done). I think sleep 1000000000d might work, but the tail is clearly better. My Debian box has a tailf that shortens the command slightly. Taking a different tack, how about running the program under screen ?
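One small addition: GNU coreutils' sleep accepts the literal argument infinity , which reads almost exactly like the requested "do nothing, indefinitely" (a GNU-only spelling, so not portable):
sleep infinity | program > output &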
{ "source": [ "https://unix.stackexchange.com/questions/42901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8446/" ] }
42,930
In Ubuntu 12.04 I use CTRL - R to enter a reverse history search. If the command I want is not found (after repeated CTRL - R ), how do I immediately exit back to the (empty) command prompt with no historical command entered or executed on the command line?
Ctrl G — this will abort the search and restore the command line to what it contained before the search began.
{ "source": [ "https://unix.stackexchange.com/questions/42930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
42,964
So I have a standard RS232 serial port that is looped back to itself by simply running a wire from Tx to Rx. I'm testing loopback by running echo and cat in two separate terminals: cat /dev/ttyS1 echo "hi" > /dev/ttyS1 My issue is with the output. I would expect to see one "hi" come back on the terminal running cat but instead I get this: hi [2 newlines] hi [4 newlines] hi [8 newlines] hi [16 newlines] hi [32 newlines] hi ...and so on until I ctrl + c cat . After interrupting cat, if I run it again it will not output "hi"s until I run echo a second time. Is this normal? Any idea why I'm seeing this behavior? Edit : By newline, I mean ASCII 0x0A . There are no carriage returns in this output.
Thanks to the second comment by Bruce, I was able to figure out the problem on my own. After running stty -a -F /dev/ttyS1 , there were 3 options I found to contribute to the problem: "echo", "onlcr", and "icrnl". Since this serial port is looped back to itself, here is what happened after running echo "hi" > /dev/ttyS1 : The echo command appends a newline to the end of the message by default, so "hi" + LF is sent out to /dev/ttyS1 Because "onlcr" was set, the serial device converted the LF to CRLF so the physical message sent out the Tx line was "hi" + CRLF Because "icrnl" was set, the physical message received on the Rx line converted the CR to LF. So the message outputted by 'cat' was "hi" + LFLF. Because "echo" was set, the message received on the Rx ("hi" + LFLF), was then sent back out on the Tx line. Because of onlcr, "hi" + LFLF became "hi" + CRLFCRLF. Because of icrnl, "hi" + CRLFCRLF became "hi" + LFLFLFLF Because of echo, "hi" + LFLFLFLF was then sent out the Tx And so on... In order to fix this problem, I ran the following command: stty -F /dev/ttyS1 -echo -onlcr Disabling "echo" prevents an infinite loop of messages and disabling "onlcr" prevents the serial device from converting LF to CRLF on output. Now cat receives one "hi" (with a single newline!) for each time I run echo . CR = carriage return (ASCII 0x0D); LF = line feed or newline (ASCII 0x0A)
{ "source": [ "https://unix.stackexchange.com/questions/42964", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4805/" ] }
43,003
I used to have a co-worker who was really good at UNIX. He showed me how to use Vi key bindings to edit my shell commands. He placed the command in a file that ran every time I logged in. Since then, I've moved to a different project. Unfortunately I don't remember how to set this up. Is there anyone here who knows how to use Vi key bindings to edit commands in the terminal? How can I make that setting permanent?
You're talking about the greatest feature ever! You can use vi commands to edit shell commands (and command history) by adding this to your .bashrc file: set -o vi You can also run that command from the command line to affect only your current session. If you don't use bash, substitute the appropriate rc file for your shell. This allows you to use vi commands to edit any command... You can also use j and k to move through your history (after pressing ESC ). You can also use / (after hitting ESC ) to search for old commands. In other words, to find that super-long cp command you did ten minutes ago: ESC / cp ENTER Then you can cycle through all the matching commands in your history with n and N . All this makes me 10 trillion times more productive at the command line!
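A related trick: the line below in ~/.inputrc enables vi keybindings in every program that uses GNU readline (bash, gdb, the python REPL, and many others), not just in bash:
# ~/.inputrc
set editing-mode vi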
{ "source": [ "https://unix.stackexchange.com/questions/43003", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38140/" ] }
43,037
In the manual page of tar command, an option for following hard links is listed. -h, --dereference follow symlinks; archive and dump the files they point to --hard-dereference follow hard links; archive and dump the files they refer to How does tar know that a file is a hard link? How does it follow it? What if I don't choose this option? How does it not hard-dereference?
By default, if you tell tar to archive a file with hard links, and more than one such link is included among the files to be archived, it archives the file only once, and records the second (and any additional names) as hard links. This means that when you extract that archive, the hard links will be restored. If you use the --hard-dereference option, then tar does not preserve hard links. Instead, it treats them as independent files that just happen to have the same contents and metadata. When you extract the archive, the files will be independent. Note: It recognizes hard links by first checking the link count of the file. It records the device number and inode of each file with more than one link, and uses that to detect when the same file is being archived again. (When you use --hard-dereference , it does not do this.)
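A small demonstration with GNU tar; the verbose listing makes the difference visible:
touch a
ln a b                                      # b is a hard link to a
tar -cf linked.tar a b
tar -tvf linked.tar                         # b is listed as: hrw-r--r-- ... b link to a
tar -cf copies.tar --hard-dereference a b
tar -tvf copies.tar                         # a and b are listed as two independent files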
{ "source": [ "https://unix.stackexchange.com/questions/43037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3352/" ] }
43,046
The output of the ls -l command yields the following result: What is the number field between the file permissions and the owner? i.e. what are those 1, 1, 1, and 2 ? I checked the --help but that doesn't explain it. [EDIT] I thought it was the number of files in a directory but it isn't. See image. "tempFolder" has 3 files but still shows a "2"
Note: edited after @StephaneChazelas comment The first number of the ls -l output after the permission block is the number of hard links . It is the same value as the one returned by the stat command in "Links". This number is the hardlink count of the file, when referring to a file, or the number of contained directory entries, when referring to a directory. A file typically has a hard link count of 1, but this changes if hard links are made with the ln command. See the Debian Reference manual . In your example, adding a hard link for tempFile2 will increase its link count:
ls -l
ln tempFile2 tempHardLink
ls -l
Both tempFile2 and tempHardLink will have a link count of 2. If you do the same exercise with a symbolic link ( ln -s tempFile2 tempSymLink ), the count value will not increase. A directory will have a minimum count of 2, for '.' (the link to itself) and for the entry in its parent's directory. In your example, if you want to increase the link count of tempFolder , create a new directory and the number will go up:
ls -l tempFolder
mkdir tempFolder/anotherFolder
ls -l tempFolder
The link from anotherFolder/ to tempFolder/ (which is .. ) will be added to the count.
{ "source": [ "https://unix.stackexchange.com/questions/43046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20215/" ] }
43,075
So, when wget gets a web page, it shows you a status bar that indicated how much the file(s) is/are downloaded. It looks like this: 25%[=============>______________________________________] 25,000 100.0K/s (underscores are spaces; I just couldn't figure out how to get more than one consecutive space in there) However, instead of writing another line to stdout and adding another progress bar, it updates it, like this: 50%[===========================>________________________] 50,000 100.0K/s And wget isn't the only example of this, either. For example, when you pipe something into less and then exit, your original prompt is still there, along with the result of whatever commands that you ran previously. It's like you never left. So, my questions are, what is this called, how do I implement it, does it only work for a single line at a time, and can I use this in C?
First of all, your question has nothing to do with bash but with the terminal. The terminal is responsible for displaying the text of the programs, and bash itself has no control over programs once they are launched. Terminals offer control sequences to control color, font, cursor position and more. For a list of standardized terminal sequences have a look at http://www.termsys.demon.co.uk/vtansi.htm You can, for example, position the cursor at the beginning of the line, delete the line, and then write a new line to create a progress bar. More advanced terminal escape sequences are typically terminal dependent, e.g. they work only with Eterm or xterm. ncurses is a programming library for creating interactive terminal programs, so that you don't have to use escape sequences yourself. How to overwrite an existing line with terminal sequences:
echo long text
sleep 1
printf "\033[1A" # move cursor one line up
printf "\033[K"  # delete till end of line
echo foo
How to overwrite an existing line without terminal sequences: One simple solution is to not write a newline at the end but a carriage return, which basically resets the cursor to the beginning of the line, e.g.:
echo -n first
sleep 1
echo -ne "\rsecond"
echo
The \r or carriage return will put the cursor at the beginning of the line and allows you to overwrite the content of the line. Switching between buffers like less or vi: The behavior of less is due to a more advanced terminal feature, the alternate screen: In VT102 mode, there are escape sequences to activate and deactivate an alternate screen buffer, which is the same size as the display area of the window. When activated, the current screen is saved and replaced with the alternate screen. Saving of lines scrolled off the top of the window is disabled until the normal screen is restored. The termcap(5) entry for xterm allows the visual editor vi(1) to switch to the alternate screen for editing and to restore the screen on exit. A popup menu entry makes it simple to switch between the normal and alternate screens for cut and paste. http://rosettacode.org/wiki/Terminal_control/Preserve_screen lists some examples of how to do it yourself, either via tput or via some escape sequences.
{ "source": [ "https://unix.stackexchange.com/questions/43075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20615/" ] }
43,102
Is useful to use -T largefile flag at creating a file-system for a partition with big files like video, and audio in flac format? I tested the same partition with that flag and without it, and using tune2fs -l [partition] , I checked in "Filesystem features" that both have "large_file" enabled. So, is not necessary to use -T flag largefile ?
The -T largefile flag adjusts the number of inodes that are allocated at the creation of the file system. Once allocated, their number cannot be adjusted (at least for ext2/3, not fully sure about ext4). The default is one inode for every 16 KB of disk space. -T largefile makes it one inode for every megabyte. Each file requires one inode. If you don't have any inodes left, you cannot create new files. But these statically allocated inodes take space, too. You can expect to save around 1.5 gigabytes for every 100 GB of disk by setting -T largefile , as opposed to the default. -T largefile4 (one inode per 4 MB) does not have such a dramatic effect. If you are certain that the average size of the files stored on the device will be above 1 megabyte, then by all means, set -T largefile . I'm happily using it on my storage partitions, and think that it is not too radical a setting. However, if you unpack a very large source tarball of many files (think hundreds of thousands) to that partition, you have a chance of running out of inodes for that partition. There is little you can do in that situation, apart from choosing another partition to untar to. You can check how many inodes you have available on a live filesystem with the dumpe2fs command:
# dumpe2fs /dev/hda5
[...]
Inode count: 98784
Block count: 1574362
Reserved block count: 78718
Free blocks: 395001
Free inodes: 34750
Here, I can still create 34 thousand files. Here's what I got after doing mkfs.ext3 -T largefile -m 0 on a 100-GB partition:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/loop1 102369 188 102181 1% /mnt/largefile
/dev/loop2 100794 188 100606 1% /mnt/normal
The largefile version has 102 400 inodes while the normal one created 6 553 600 inodes, and saved 1.5 GB in the process. If you have a good clue about what size files you are going to put on the file system, you can fine-tune the number of inodes directly with the -i switch. It sets the bytes-per-inode ratio. You would gain 75% of the space savings if you used -i 65536 while still being able to create over a million files. I generally calculate to keep at least 100 000 inodes spare.
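On a live system, df -i is a quick complement to dumpe2fs for watching inode consumption per mounted filesystem (columns: Inodes, IUsed, IFree, IUse%):
df -i /mnt/largefile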
{ "source": [ "https://unix.stackexchange.com/questions/43102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18516/" ] }
43,103
When I ssh into another machine running Debian with my account (which has sudo permissions), my backspace key generates some awkward symbols when pressed. The Tab and Del keys don't work either. On the other hand, I also have another account on the same machine, and when I ssh in through that account, its terminal works perfectly fine. I couldn't figure out why this is happening.
I have seen such problems before. Take the backspace for example: the remote host expects some character to be used as "erase/backspace" , and when you press backspace in the terminal, the terminal program sends some character to the remote host. If what the remote host expects differs from what the terminal program sends, you get this issue. So a quick fix is as below: Run the command #stty -a on the remote host, and find what is expected as the erase code in the output. Say erase=^? . In the terminal, press Ctrl + v and press your backspace. You'll see what code is sent as "erase". Say it is ^H . On the remote host, run #stty erase ^H . (Note: use Ctrl v + Backspace , do not type the ^ manually) You can fix the Tab issue the same way as above.
{ "source": [ "https://unix.stackexchange.com/questions/43103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19252/" ] }
43,106
I want Firefox window to be opened in a specific size, and location on screen using a shell command, for example: firefox myfile.html size 800x600 location bottom-left Is there such a command?
Here is a community version of the answer by Yokai that incorporates examples offered by Rudolf Olah . You can use the tool called xdotool to control window size and location. Not only that, any script you write in bash , using xdotool , can be setup to work with a fully maximized window and it can be scripted to set the window size and x:y coordinates by manipulating the mousemove and click commands. Find the window ID: xdotool search --onlyvisible --name firefox Set the window size xdotool windowsize $WINDOW_ID_GOES_HERE $WIDTH $HEIGHT Move the window xdotool windowmove $WINDOW_ID_GOES_HERE $X $Y For example, if the window id for firefox is 123 you would do this: xdotool windowsize 123 800 600 xdotool windowmove 123 0 1080 The bottom-left positioning will have to be figured out based on your screen resolution.
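As a worked example of that last point: bottom-left placement means subtracting the window height from the screen height. On a hypothetical 1920×1080 display, an 800×600 window would therefore go to y = 1080 − 600 = 480 (window-manager decorations may shift this by a few pixels):
xdotool windowsize 123 800 600
xdotool windowmove 123 0 480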
{ "source": [ "https://unix.stackexchange.com/questions/43106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20039/" ] }
43,135
On a sandbox VM environment, I have a setup of Ubuntu Linux which is firewalled and cannot be accessed from outside the local system. Therefore, on that VM, I'd like to give the administrative user (which I set up) the ability to run anything with sudo and not need a password. While I know this is not secure, this VM is not on all the time, and requires my personal passcode to run. So even though this is not "secure", is there a way to get the desired functionality?
From man sudoers : NOPASSWD and PASSWD By default, sudo requires that a user authenticate him or herself before running a command. This behavior can be modified via the NOPASSWD tag. Like a Runas_Spec, the NOPASSWD tag sets a default for the commands that follow it in the Cmnd_Spec_List. Conversely, the PASSWD tag can be used to reverse things. For example: ray rushmore = NOPASSWD: /bin/kill, /bin/ls, /usr/bin/lprm would allow the user ray to run /bin/kill, /bin/ls, and /usr/bin/lprm as root on the machine rushmore without authenticating himself. One other tag is ALL , to allow the user ray to run any command on any host without password you can use: ray ALL= NOPASSWD: ALL
{ "source": [ "https://unix.stackexchange.com/questions/43135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5807/" ] }
43,196
I have a dual boot Linux/windows system set up, and frequently switch from one to the other. I was thinking if I could add a menu item in one of the menus to reboot directly into windows, without stopping at the GRUB prompt. I saw this question on a forum, that's exactly what I want but it's dealing with lilo, which is not my case. I thought of a solution that would modify the default entry in the GRUB menu and then reboot, but there are some drawbacks, and I was wondering if there was a cleaner alternative. (Also, I would be interested in a solution to boot from Windows directly into Linux, but that might be harder, and does not belong here. Anyway, as long as I have it in one way, the other way could be set up as the default. UPDATE It seems someone asked a similar question , and if those are the suggested answers, I might as well edit /boot/grub/grubenv as grub-reboot and grub-set-default and grub-editenv do. ) Thanks in advance for any tips. UPDATE : This is my GRUB version: (GRUB) 1.99-12ubuntu5-1linuxmint1 I tried running grubonce , the command is not found. And searching for it in the repositories gives me nothing. I'm on Linux Mint, so that might be it... Seeing man grub-reboot , it seems like it does what I want, as grubonce does. It is also available everywhere (at least it is for me, I think it is part of the grub package). I saw two related commands: grub-editenv and grub-set-default . I found out that after running sudo grub-set-default 4 , when running grub-editenv list you get something similar to: saved_entry=4 And when running grub-reboot 4 , you get something like: prev_saved_entry=0 saved_entry=4 Which means both do the same thing (one is temporary one is not). Surprisingly, when I tried: sudo grub-reboot 4 sudo reboot now It did not work, as if I hadn't done anything, it just showed me the menu as usual, and selected the first entry, saying it will boot this entry in 10s. I tried it again, I thought I might have written the wrong entry (it is zero-based, right?). That time, it just hanged at the menu screen, and I had to hard-reset the PC to be able to boot. If anyone can try this out, just to see if it's just me, I'd appreciate it. (mint has been giving me a hard time, and that would be a good occasion to change :P). Reading the code in /boot/grub/grub.cfg , seems like this is the way to go, but from my observations, it's just ignoring these settings...
In order for the grub-reboot command to work, several required configuration changes must be in place: The default entry for grub must be set to saved . One possible location for this is the GRUB_DEFAULT= line in /etc/default/grub Use grub-set-default to set your default entry to the one you normally use. Update your grub config (e.g. update-grub ). This should take care of the initial set-up. In the future, just do grub-reboot <entry> for a one-time boot of <entry> .
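Putting the whole procedure together — after GRUB_DEFAULT=saved is in place — with entry numbers zero-based and the Windows entry assumed to be number 4, as in the question:
sudo grub-set-default 0   # the entry you normally boot
sudo update-grub          # or: grub-mkconfig -o /boot/grub/grub.cfg
sudo grub-reboot 4        # one-time default: the Windows entry
sudo reboot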
{ "source": [ "https://unix.stackexchange.com/questions/43196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17484/" ] }
43,263
root user can write to a file even if its write permissions are not set. root user can read a file even if its read permissions are not set. root user can cd into a directory even if its execute permissions are not set. root user cannot execute a file when its execute permissions are not set. Why? user$ echo '#!'$(which bash) > file user$ chmod 000 file user$ ls -l file ---------- 1 user user 12 Jul 17 11:11 file user$ cat file # Normal user cannot read cat: file: Permission denied user$ su root$ echo 'echo hello' >> file # root can write root$ cat file # root can read #!/bin/bash echo hello root$ ./file # root cannot execute bash: ./file: Permission denied
In short, because the execute bit is considered special; if it's not set at all , then the file is considered to be not an executable and thus can't be executed. However, if even ONE of the execute bits is set, root can and will execute it. Observe: caleburn: ~/ >cat hello.sh #!/bin/sh echo "Hello!" caleburn: ~/ >chmod 000 hello.sh caleburn: ~/ >./hello.sh -bash: ./hello.sh: Permission denied caleburn: ~/ >sudo ./hello.sh sudo: ./hello.sh: command not found caleburn: ~/ >chmod 100 hello.sh caleburn: ~/ >./hello.sh /bin/sh: ./hello.sh: Permission denied caleburn: ~/ >sudo ./hello.sh Hello!
{ "source": [ "https://unix.stackexchange.com/questions/43263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3352/" ] }
43,340
I want to run a shell script that has a loop in it and could run forever, which I do not want to happen. So I need to introduce a timeout for the whole script. How can I introduce a timeout for the whole shell script under SuSE?
If GNU timeout is not available you can use expect (Mac OS X, BSD, ... do not usually have GNU tools and utilities by default).
################################################################################
# Executes command with a timeout
# Params:
#   $1 timeout in seconds
#   $2 command
# Returns 1 if timed out, 0 otherwise
timeout() {
    time=$1
    # start the command in a subshell to avoid problem with pipes
    # (spawn accepts one command)
    command="/bin/sh -c \"$2\""
    expect -c "set echo \"-noecho\"; set timeout $time; spawn -noecho $command; expect timeout { exit 1 } eof { exit 0 }"
    if [ $? = 1 ] ; then
        echo "Timeout after ${time} seconds"
    fi
}
Edit Example:
timeout 10 "ls ${HOME}"
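For completeness, where GNU coreutils is available the dedicated utility is a one-liner; it exits with status 124 when the time limit is hit:
timeout 300 ./myscript.sh
[ $? -eq 124 ] && echo "Timed out after 300 seconds"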
{ "source": [ "https://unix.stackexchange.com/questions/43340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7221/" ] }
43,413
I sometimes need to plug a disk into a disk bay. At other times, I have the very weird setup of connecting a SSD using a SATA-eSATA cable on my laptop while pulling power from a desktop. How can I safely remove the SATA disk from the system? This Phoronix forum thread has some suggestions: justsumdood wrote: An(noymous)droid wrote: What then do you do on the software side before unplugging? Is it a simple "umount /dev/sd"[drive letter]? after unmounting the device, to "power off" (or sleep) the unit: hdparm -Y /dev/sdX (where X represents the device you wish to power off. for example: /dev/sdb) this will power the drive down allowing for it's removal w/o risk of voltage surge. Does this mean that the disk caches are properly flushed and powered off thereafter? Another suggestion from the same thread: chithanh wrote: All SATA and eSATA hardware is physically able to be hotplugged (ie. not damaged if you insert/pull the plug). How the chipset and driver handles this is another question. Some driver/chipset combinations do not properly handle hotplugging and need a warmplug command such as the following one: echo 0 - 0 > /sys/class/scsi_host/hostX/scan Replace X with the appropriate number for your SATA/eSATA port. I doubt whether is the correct way to do so, but I cannot find some proof against it either. So, what is the correct way to remove an attached disk from a system? Assume that I have already unmounted every partition on the disk and ran sync . Please point to some official documentation if possible, I could not find anything in the Linux documentation tree, nor the Linux ATA wiki .
Unmount any filesystems on the disk ( umount ... ).
Deactivate any LVM groups ( vgchange -an ).
Make sure nothing is using the disk for anything.
You could unplug the HDD here, but it is recommended to also do the last two steps:
Spin the HDD down (irrelevant for SSDs): sudo hdparm -Y /dev/(whatever)
Tell the system that we are unplugging the HDD, so it can prepare itself: echo 1 | sudo tee /sys/block/(whatever)/device/delete
If you want to be extra cautious, do echo 1 | sudo tee /sys/block/(whatever)/device/delete first. That'll unregister the device from the kernel, so you know nothing's using it when you unplug it. When I do that with a drive in an eSATA enclosure, I can hear the drive's heads park themselves, so the kernel apparently tells the drive to prepare for power-down. If you're using an AHCI controller, it should cope with devices being unplugged. If you're using some other sort of SATA controller, the driver might be confused by hotplugging. In my experience, SATA hotplugging (with AHCI) works pretty well in Linux. I've unplugged an optical drive, plugged in a hard drive, scanned it for errors, made a filesystem and copied data to it, unmounted and unplugged it, plugged in a different DVD drive, and burned a disc, all with the machine up and running.
{ "source": [ "https://unix.stackexchange.com/questions/43413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8250/" ] }
43,414
In your .tmux.conf file you can set the window history with something like: set -g history-limit 4096 Is there a way to set an unlimited history for each window?
From what I can tell, you can only do this in a "practical" fashion, by setting the history to an absurdly large number. e.g.: set -g history-limit 999999999 UPDATE: see the other answer as to why you don't want to use a number this high. Something more reasonable (less 9's) would be best. UPDATE again: perhaps pre-allocation doesn't occur. @Volker Siegel's comment on the other answer indicates that setting the value does not cause memory allocation.
{ "source": [ "https://unix.stackexchange.com/questions/43414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4954/" ] }
43,465
For a long period I thought the default behavior of the sort program was using ASCII order. However, when I input the following lines into sort without any arguments: # @ I got: @ # But according to the ASCII table, # is 35 and @ is 64. Another example is: A a And the output is: a A Can anybody explain this? By the way, what is 'dictionary-order' when using sort -d ?
Looks like you are using a non-POSIX locale. Try: export LC_ALL=C and then sort . info sort clearly says: (1) If you use a non-POSIX locale (e.g., by setting `LC_ALL' to `en_US'), then `sort' may produce output that is sorted differently than you're accustomed to. In that case, set the `LC_ALL' environment variable to `C'. Note that setting only `LC_COLLATE' has two problems. First, it is ineffective if `LC_ALL' is also set. Second, it has undefined behavior if `LC_CTYPE' (or `LANG', if `LC_CTYPE' is unset) is set to an incompatible value. For example, you get undefined behavior if `LC_CTYPE' is `ja_JP.PCK' but `LC_COLLATE' is `en_US.UTF-8'.
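A quick way to see both behaviours side by side; the first order is locale-dependent (the question's @ -before- # result is typical of UTF-8 locales), while the second is strict byte order:
printf '%s\n' '#' '@' a A | sort            # locale collation, e.g. @ # a A
printf '%s\n' '#' '@' a A | LC_ALL=C sort   # byte order: # (35), @ (64), A (65), a (97)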
{ "source": [ "https://unix.stackexchange.com/questions/43465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
43,478
While in an active Vim buffer, how can I write out a specific range of lines to a new file without closing the current buffer first?
You can do :100,200w filename Of course 100,200 is the range of lines you want to write.
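A few variants in the same vein, all standard Vim:
:100,200w newfile       write lines 100-200 to a new file
:100,200w >> existing   append the range to an existing file
:.,$w rest.txt          from the current line to the end of the buffer
:'<,'>w selection.txt   write the last visual selection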
{ "source": [ "https://unix.stackexchange.com/questions/43478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
43,496
I have to run top command on one computer being on another. My targeted PC has IP 192.168.0.81 I was trying to do it: ssh 192.168.0.81 top But I got this result: top: tcgetattr() failed: Invalid argument Could anybody help me with this issue? System info: Linux iRP-C-09 2.4.18-timesys-4.0.642 Top version: 2.0.7
top is a full screen interactive console application. It requires a tty to run. Try ssh -t or ssh -tt to force pseudo-tty allocation.
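For the example in the question, that would be:
ssh -t 192.168.0.81 top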
{ "source": [ "https://unix.stackexchange.com/questions/43496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21071/" ] }
43,526
I'm setting up virtualized Linux boxes (as local development servers) for developers at a company that is primarily Windows-based, and some of the developers make negative cracks about vim (among other things). (It seems to them to represent Linux/Unix in some way, and prove that the environment is obtusely difficult to use.) I remember when I was first forced to use vim (the sysadmins refused to install emacs!) and the difficult initial learning curve, so I'm somewhat sympathetic. It occurred to me that, rather than introduce them to nano (which they would probably never get past), it might be possible to set up nano-like menus in vim to make the transition easier. (I've found a very beginner-friendly .vimrc file to give them, but it doesn't have anything like nano-style menus.) The only problem is that the only thing I've been able to find that claims it's possible to set up menus in vim (not gvim) didn't work, and my attempts to correct the problem just left me with yet another problem to solve. Before I waste lots of time I'd like to know if it is in fact possible, since there seems to be very little information about how to do it.
Yes, it is possible. You can load menu.vim (the default gvim menu definitions), or you can just start from scratch and create your own, then access them through :emenu . This doesn't give you nano-like always-visible menus, though; it gives you the ability to navigate menus using command-line tab completion. If the user doesn't have a vimrc, you'll want to start by disabling vi compatibility: :set nocompatible Enable smart command line completion on <Tab> (enable listing all possible choices, and navigating the results with <Up> , <Down> , <Left> , <Right> , and <Enter> ): :set wildmenu Make repeated presses cycle between all matching choices: :set wildmode=full Load the default menus (this would happen automatically in gvim, but not in terminal vim): :source $VIMRUNTIME/menu.vim After those four commands, you can manually trigger menu completion by invoking tab completion on the :emenu command, by doing :emenu<space><tab> You can navigate the results using the tab key and the arrow keys, and the enter key (it both expands submenus and selects items). You can then make that more convenient by going a step further, and binding a mapping to pop up the menu without having to type :emenu every time: Make Ctrl-Z in a mapping act like pressing <Tab> interactively on the command line: :set wildcharm=<C-Z> And make a binding that automatically invokes :emenu completion for you: :map <F4> :emenu <C-Z>
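Collected into a ~/.vimrc snippet — exactly the commands above, in file form:
" nano-style menus for terminal vim
set nocompatible
set wildmenu
set wildmode=full
source $VIMRUNTIME/menu.vim
set wildcharm=<C-Z>
map <F4> :emenu <C-Z>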
{ "source": [ "https://unix.stackexchange.com/questions/43526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
43,527
Is there a more compact form of killing background jobs than: for i in {1..5}; do kill %$i; done Also, {1..5} obviously has a hard-coded magic number in it, how can I make it "N" with N being the right number, without doing a: $(jobs | wc -l) I actually use \j in PS1 to get the # of managed jobs, is this equivalent?
To just kill all background jobs managed by bash , do kill $(jobs -p) Note that since both jobs and kill are built into bash , you shouldn't run into any errors of the Argument list too long type.
{ "source": [ "https://unix.stackexchange.com/questions/43527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10283/" ] }
43,539
How can I tell whether my processor has a particular feature? (64-bit instruction set, hardware-assisted virtualization, cryptographic accelerators, etc.) I know that the file /proc/cpuinfo contains this information, in the flags line, but what do all these cryptic abbreviations mean? For example, given the following extract from /proc/cpuinfo , do I have a 64-bit CPU? Do I have hardware virtualization? model name : Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz … flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm tpr_shadow vnmi flexpriority
x86 (32-bit a.k.a. i386–i686 and 64-bit a.k.a. amd64. In other words, your workstation, laptop or server.)

FAQ: Do I have…
64-bit (x86_64/AMD64/Intel64)? lm
Hardware virtualization (VMX/AMD-V)? vmx (Intel), svm (AMD)
Accelerated AES (AES-NI)? aes
TXT (TPM)? smx
a hypervisor (announced as such)? hypervisor

Most of the other features are only of interest to compiler or kernel authors.

All the flags

The full listing is in the kernel source, in the file arch/x86/include/asm/cpufeatures.h .

Intel-defined CPU features, CPUID level 0x00000001 (edx)
See also Wikipedia and table 2-27 in Intel Advanced Vector Extensions Programming Reference

fpu : Onboard FPU (floating point support)
vme : Virtual 8086 mode enhancements
de : Debugging Extensions (CR4.DE)
pse : Page Size Extensions (4MB memory pages)
tsc : Time Stamp Counter (RDTSC)
msr : Model-Specific Registers (RDMSR, WRMSR)
pae : Physical Address Extensions (support for more than 4GB of RAM)
mce : Machine Check Exception
cx8 : CMPXCHG8 instruction (64-bit compare-and-swap)
apic : Onboard APIC
sep : SYSENTER/SYSEXIT
mtrr : Memory Type Range Registers
pge : Page Global Enable (global bit in PDEs and PTEs)
mca : Machine Check Architecture
cmov : CMOV instructions (conditional move) (also FCMOV)
pat : Page Attribute Table
pse36 : 36-bit PSEs (huge pages)
pn : Processor serial number
clflush : Cache Line Flush instruction
dts : Debug Store (buffer for debugging and profiling instructions)
acpi : ACPI via MSR (temperature monitoring and clock speed modulation)
mmx : Multimedia Extensions
fxsr : FXSAVE/FXRSTOR, CR4.OSFXSR
sse : Intel SSE vector instructions
sse2 : SSE2
ss : CPU self snoop
ht : Hyper-Threading and/or multi-core
tm : Automatic clock control (Thermal Monitor)
ia64 : Intel Itanium Architecture 64-bit (not to be confused with Intel's 64-bit x86 architecture with flag x86-64 or "AMD64" bit indicated by flag lm)
pbe : Pending Break Enable (PBE# pin) wakeup support

AMD-defined CPU features, CPUID level 0x80000001
See also Wikipedia and table 2-23 in Intel Advanced Vector Extensions Programming Reference

syscall : SYSCALL (Fast System Call) and SYSRET (Return From Fast System Call)
mp : Multiprocessing Capable
nx : Execute Disable
mmxext : AMD MMX extensions
fxsr_opt : FXSAVE/FXRSTOR optimizations
pdpe1gb : One GB pages (allows hugepagesz=1G)
rdtscp : Read Time-Stamp Counter and Processor ID
lm : Long Mode (x86-64: amd64, also known as Intel 64, i.e. 64-bit capable)
3dnowext : AMD 3DNow! extensions
3dnow : 3DNow! (AMD vector instructions, competing with Intel's SSE1)

Transmeta-defined CPU features, CPUID level 0x80860001

recovery : CPU in recovery mode
longrun : Longrun power control
lrti : LongRun table interface

Other features, Linux-defined mapping

cxmmx : Cyrix MMX extensions
k6_mtrr : AMD K6 nonstandard MTRRs
cyrix_arr : Cyrix ARRs (= MTRRs)
centaur_mcr : Centaur MCRs (= MTRRs)
constant_tsc : TSC ticks at a constant rate
up : SMP kernel running on UP
art : Always-Running Timer
arch_perfmon : Intel Architectural PerfMon
pebs : Precise-Event Based Sampling
bts : Branch Trace Store
rep_good : rep microcode works well
acc_power : AMD accumulated power mechanism
nopl : The NOPL (0F 1F) instructions
xtopology : cpu topology enum extensions
tsc_reliable : TSC is known to be reliable
nonstop_tsc : TSC does not stop in C states
cpuid : CPU has CPUID instruction itself
extd_apicid : has extended APICID (8 bits)
amd_dcm : multi-node processor
aperfmperf : APERFMPERF
eagerfpu : Non lazy FPU restore
nonstop_tsc_s3 : TSC doesn't stop in S3 state
tsc_known_freq : TSC has known frequency
mce_recovery : CPU has recoverable machine checks

Intel-defined CPU features, CPUID level 0x00000001 (ecx)
See also Wikipedia and table 2-26 in Intel Advanced Vector Extensions Programming Reference

pni : SSE-3 (“Prescott New Instructions”)
pclmulqdq : Perform a Carry-Less Multiplication of Quadword instruction (an accelerator for GCM)
dtes64 : 64-bit Debug Store
monitor : Monitor/Mwait support (Intel SSE3 supplements)
ds_cpl : CPL Qual. Debug Store
vmx : Hardware virtualization: Intel VMX
smx : Safer mode: TXT (TPM support)
est : Enhanced SpeedStep
tm2 : Thermal Monitor 2
ssse3 : Supplemental SSE-3
cid : Context ID
sdbg : silicon debug
fma : Fused multiply-add
cx16 : CMPXCHG16B
xtpr : Send Task Priority Messages
pdcm : Performance Capabilities
pcid : Process Context Identifiers
dca : Direct Cache Access
sse4_1 : SSE-4.1
sse4_2 : SSE-4.2
x2apic : x2APIC
movbe : Move Data After Swapping Bytes instruction
popcnt : Return the Count of Number of Bits Set to 1 instruction (Hamming weight, i.e. bit count)
tsc_deadline_timer : TSC deadline timer
aes / aes-ni : Advanced Encryption Standard (New Instructions)
xsave : Save Processor Extended States; also provides XGETBV, XRSTOR, XSETBV
avx : Advanced Vector Extensions
f16c : 16-bit fp conversions (CVT16)
rdrand : Read Random Number from hardware random number generator instruction
hypervisor : Running on a hypervisor

VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001

rng : Random Number Generator present (xstore)
rng_en : Random Number Generator enabled
ace : on-CPU crypto (xcrypt)
ace_en : on-CPU crypto enabled
ace2 : Advanced Cryptography Engine v2
ace2_en : ACE v2 enabled
phe : PadLock Hash Engine
phe_en : PHE enabled
pmm : PadLock Montgomery Multiplier
pmm_en : PMM enabled

More extended AMD flags: CPUID level 0x80000001, ecx

lahf_lm : Load AH from Flags (LAHF) and Store AH into Flags (SAHF) in long mode
cmp_legacy : If yes, HyperThreading not valid
svm : “Secure virtual machine”: AMD-V
extapic : Extended APIC space
cr8_legacy : CR8 in 32-bit mode
abm : Advanced Bit Manipulation
sse4a : SSE-4A
misalignsse : indicates if a general-protection exception (#GP) is generated when some legacy SSE instructions operate on unaligned data. Also depends on CR0 and Alignment Checking bit
3dnowprefetch : 3DNow prefetch instructions
osvw : indicates OS Visible Workaround, which allows the OS to work around processor errata
ibs : Instruction Based Sampling
xop : extended AVX instructions
skinit : SKINIT/STGI instructions
wdt : Watchdog timer
lwp : Light Weight Profiling
fma4 : 4 operands MAC instructions
tce : translation cache extension
nodeid_msr : NodeId MSR
tbm : Trailing Bit Manipulation
topoext : Topology Extensions CPUID leafs
perfctr_core : Core Performance Counter Extensions
perfctr_nb : NB Performance Counter Extensions
bpext : data breakpoint extension
ptsc : performance time-stamp counter
perfctr_l2 : L2 Performance Counter Extensions
mwaitx : MWAIT extension (MONITORX/MWAITX)

Auxiliary flags: Linux defined - For features scattered in various CPUID levels

ring3mwait : Ring 3 MONITOR/MWAIT
cpuid_fault : Intel CPUID faulting
cpb : AMD Core Performance Boost
epb : IA32_ENERGY_PERF_BIAS support
cat_l3 : Cache Allocation Technology L3
cat_l2 : Cache Allocation Technology L2
cdp_l3 : Code and Data Prioritization L3
invpcid_single : effectively invpcid and CR4.PCIDE=1
hw_pstate : AMD HW-PState
proc_feedback : AMD ProcFeedbackInterface
sme : AMD Secure Memory Encryption
pti : Kernel Page Table Isolation (Kaiser)
retpoline : Retpoline mitigation for Spectre variant 2 (indirect branches)
retpoline_amd : AMD Retpoline mitigation
intel_ppin : Intel Processor Inventory Number
avx512_4vnniw : AVX-512 Neural Network Instructions
avx512_4fmaps : AVX-512 Multiply Accumulation Single precision
mba : Memory Bandwidth Allocation
rsb_ctxsw : Fill RSB on context switches

Virtualization flags: Linux defined

tpr_shadow : Intel TPR Shadow
vnmi : Intel Virtual NMI
flexpriority : Intel FlexPriority
ept : Intel Extended Page Table
vpid : Intel Virtual Processor ID
vmmcall : prefer VMMCALL to VMCALL

Intel-defined CPU features, CPUID level 0x00000007:0 (ebx)

fsgsbase : {RD/WR}{FS/GS}BASE instructions
tsc_adjust : TSC adjustment MSR
bmi1 : 1st group bit manipulation extensions
hle : Hardware Lock Elision
avx2 : AVX2 instructions
smep : Supervisor Mode Execution Protection
bmi2 : 2nd group bit manipulation extensions
erms : Enhanced REP MOVSB/STOSB
invpcid : Invalidate Processor Context ID
rtm : Restricted Transactional Memory
cqm : Cache QoS Monitoring
mpx : Memory Protection Extension
rdt_a : Resource Director Technology Allocation
avx512f : AVX-512 foundation
avx512dq : AVX-512 Double/Quad instructions
rdseed : The RDSEED instruction
adx : The ADCX and ADOX instructions
smap : Supervisor Mode Access Prevention
avx512ifma : AVX-512 Integer Fused Multiply Add instructions
clflushopt : CLFLUSHOPT instruction
clwb : CLWB instruction
intel_pt : Intel Processor Tracing
avx512pf : AVX-512 Prefetch
avx512er : AVX-512 Exponential and Reciprocal
avx512cd : AVX-512 Conflict Detection
sha_ni : SHA1/SHA256 Instruction Extensions
avx512bw : AVX-512 Byte/Word instructions
avx512vl : AVX-512 128/256 Vector Length extensions

Extended state features, CPUID level 0x0000000d:1 (eax)

xsaveopt : Optimized XSAVE
xsavec : XSAVEC
xgetbv1 : XGETBV with ECX = 1
xsaves : XSAVES/XRSTORS

Intel-defined CPU QoS sub-leaf, CPUID level 0x0000000F:0 (edx)

cqm_llc : LLC QoS

Intel-defined CPU QoS sub-leaf, CPUID level 0x0000000F:1 (edx)

cqm_occup_llc : LLC occupancy monitoring
cqm_mbm_total : LLC total MBM monitoring
cqm_mbm_local : LLC local MBM monitoring

AMD-defined CPU features, CPUID level 0x80000008 (ebx)

clzero : CLZERO instruction
irperf : instructions retired performance counter
xsaveerptr : Always save/restore FP error pointers

Thermal and Power Management leaf, CPUID level 0x00000006 (eax)

dtherm (formerly dts) : digital thermal sensor
ida : Intel Dynamic Acceleration
arat : Always Running APIC Timer
pln : Intel Power Limit Notification
pts : Intel Package Thermal Status
hwp : Intel Hardware P-states
hwp_notify : HWP notification
hwp_act_window : HWP Activity Window
hwp_epp : HWP Energy Performance Preference
hwp_pkg_req : HWP package-level request

AMD SVM Feature Identification, CPUID level 0x8000000a (edx)

npt : AMD Nested Page Table support
lbrv : AMD LBR Virtualization support
svm_lock : AMD SVM locking MSR
nrip_save : AMD SVM next_rip save
tsc_scale : AMD TSC scaling support
vmcb_clean : AMD VMCB clean bits support
flushbyasid : AMD flush-by-ASID support
decodeassists : AMD Decode Assists support
pausefilter : AMD filtered pause intercept
pfthreshold : AMD pause filter threshold
avic : Virtual Interrupt Controller
vmsave_vmload : Virtual VMSAVE VMLOAD
vgif : Virtual GIF

Intel-defined CPU features, CPUID level 0x00000007:0 (ecx)

avx512vbmi : AVX512 Vector Bit Manipulation instructions
umip : User Mode Instruction Protection
pku : Protection Keys for Userspace
ospke : OS Protection Keys Enable
avx512_vbmi2 : Additional AVX512 Vector Bit Manipulation instructions
gfni : Galois Field New Instructions
vaes : Vector AES
vpclmulqdq : Carry-Less Multiplication Double Quadword
avx512_vnni : Vector Neural Network Instructions
avx512_bitalg : VPOPCNT[B,W] and VPSHUF-BITQMB instructions
avx512_vpopcntdq : POPCNT for vectors of DW/QW
la57 : 5-level page tables
rdpid : RDPID instruction

AMD-defined CPU features, CPUID level 0x80000007 (ebx)

overflow_recov : MCA overflow recovery support
succor : uncorrectable error containment and recovery
smca : Scalable MCA

Detected CPU bugs (Linux-defined)

f00f : Intel F00F
fdiv : CPU FDIV
coma : Cyrix 6x86 coma
amd_tlb_mmatch : tlb_mmatch AMD Erratum 383
amd_apic_c1e : apic_c1e AMD Erratum 400
11ap : Bad local APIC aka 11AP
fxsave_leak : FXSAVE leaks FOP/FIP/FOP
clflush_monitor : AAI65, CLFLUSH required before MONITOR
sysret_ss_attrs : SYSRET doesn't fix up SS attrs
espfix : IRET to 16-bit SS corrupts ESP/RSP high bits
null_seg : Nulling a selector preserves the base
swapgs_fence : SWAPGS without input dep on GS
monitor : IPI required to wake up remote CPU
amd_e400 : CPU is among the affected by Erratum 400
cpu_meltdown : CPU is affected by meltdown attack and needs kernel page table isolation
spectre_v1 : CPU is affected by Spectre variant 1 attack with conditional branches
spectre_v2 : CPU is affected by Spectre variant 2 attack with indirect branches
spec_store_bypass : CPU is affected by the Speculative Store Bypass vulnerability (Spectre variant 4)

P.S. This listing was derived from arch/x86/include/asm/cpufeatures.h in the kernel source. The flags are listed in the same order as the source code. Please help by adding links to descriptions of features when they're missing, by writing a short description of features that have unexpressive names, and by updating the list for new kernel versions. The current list is from Linux 4.15 plus some later additions.
{ "source": [ "https://unix.stackexchange.com/questions/43539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
43,601
I would like my default bash shell to go straight into tmux instead of my always having to type tmux every time.
@StarNamer's answer is generally accurate, though I typically include the following tests to make sure that:

tmux exists on the system
we're in an interactive shell
tmux doesn't try to run within itself

So, I would add this to the .bashrc :

if command -v tmux &> /dev/null && [ -n "$PS1" ] && [[ ! "$TERM" =~ screen ]] && [[ ! "$TERM" =~ tmux ]] && [ -z "$TMUX" ]; then
  exec tmux
fi

References

Using bash's command to check for existence of a command - http://man7.org/linux/man-pages/man1/bash.1.html#SHELL_BUILTIN_COMMANDS
Why to use command instead of which to check for the existence of commands - https://unix.stackexchange.com/a/85250
Using $PS1 to check for interactive shell - https://www.gnu.org/software/bash/manual/html_node/Is-this-Shell-Interactive_003f.html
Expected state of $TERM environment variable "for all programs running inside tmux" - http://man7.org/linux/man-pages/man1/tmux.1.html#WINDOWS_AND_PANES
{ "source": [ "https://unix.stackexchange.com/questions/43601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
43,605
So I was going to back up my home folder by copying it to an external drive as follows: sudo cp -r /home/my_home /media/backup/my_home With the result that all folders on the external drives are now owned by root:root . How can I have cp keep the ownership and permissions from the original?
sudo cp -rp /home/my_home /media/backup/my_home

From the cp manpage:

-p     same as --preserve=mode,ownership,timestamps
--preserve[=ATTR_LIST]
       preserve the specified attributes (default: mode,ownership,timestamps),
       if possible additional attributes: context, links, xattr, all
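If you prefer rsync for this kind of backup, rsync -a similarly preserves mode, ownership and timestamps; a rough equivalent using the paths from the question would be:

rsync -a /home/my_home/ /media/backup/my_home/

Note the trailing slash on the source, which copies the directory's contents rather than nesting another my_home inside the destination.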
{ "source": [ "https://unix.stackexchange.com/questions/43605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21110/" ] }
43,713
The following command will tar all "dot" files and folders: tar -zcvf dotfiles.tar.gz .??* I am familiar with regular expressions , but I don't understand how to interpret .??* . I executed ls .??* and tree .??* and looked at the files which were listed. Why does this regular expression include all files within folders starting with . for example?
Globs are not regular expressions. In general, the shell will try to interpret anything you type on the command line that you don't quote as a glob. Shells are not required to support regular expressions at all (although in reality many of the fancier more modern ones do, e.g. the =~ regex match operator in the bash [[ construct). The .??* is a glob. It matches any file name that begins with a literal dot . , followed by any two (not necessarily the same) characters, ?? , followed by the regular expression equivalent of [^/]* , i.e. 0 or more characters that are not / . For the full details of shell pathname expansion (the full name for "globbing"), see the POSIX spec .
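A quick way to see what the glob actually matches is to let the shell expand it before tar ever runs — echo just prints whatever .??* expands to (the file names below are made-up examples):

$ ls -a
.  ..  .a  .bashrc  .config  notes.txt
$ echo .??*
.bashrc .config

A two-character name like .a is not matched, because the pattern requires at least three characters (the dot plus two more); the same property conveniently excludes . and .. from the expansion.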
{ "source": [ "https://unix.stackexchange.com/questions/43713", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3895/" ] }
43,744
What does GID actually mean? I have Googled it and this is what linux.about.com said: Group identification number for the process. Valid group numbers are given in /etc/group , and in the GID field of /etc/passwd file. When a process is started, its GID is set to the GID of its parent process. But what does that mean? The permissions I have for my folder is currently at 0755 I understand if I set the UID for the Owner it will be 4755 And if I set the GID of the Group it will be 2755 If I set the Sticky Bit for Others it will be 1755 Is it even important to set those permissions?
Every process in a UNIX-like system, just like every file, has an owner (the user, either real or a system "pseudo-user", such as daemon , bin , man , etc) and a group owner. The group owner for a user's files is typically that user's primary group, and in a similar fashion, any processes you start are typically owned by your user ID and by your primary group ID. Sometimes, though, it is necessary to have elevated privileges to run certain commands, but it is not desirable to give full administrative rights. For example, the passwd command needs access to the system's shadow password file, so that it can update your password. Obviously, you don't want to give every user root privileges, just so they can reset their password - that would undoubtedly lead to chaos! Instead, there needs to be another way to temporarily grant elevated privileges to users to perform certain tasks. That is what the SETUID and SETGID bits are for. It is a way to tell the kernel to temporarily raise the user's privileges, for the duration of the marked command's execution. A SETUID binary will be executed with the privileges of the owner of the executable file (usually root ), and a SETGID binary will be executed with the group privileges of the group owner of the executable file. In the case of the passwd command, which belongs to root and is SETUID, it allows normal users to directly affect the contents of the password file, in a controlled and predictable manner, by executing with root privileges. There are numerous other SETUID commands on UNIX-like systems ( chsh , screen , ping , su , etc), all of which require elevated privileges to operate correctly. There are also a few SETGID programs, where the kernel temporarily changes the GID of the process, to allow access to logfiles, etc. sendmail is such a utility. The sticky bit serves a slightly different purpose. Its most common use is to ensure that only the user account that created a file may delete it. Think about the /tmp directory. It has very liberal permissions, which allow anyone to create files there. This is good, and allows users' processes to create temporary files ( screen , ssh , etc, keep state information in /tmp ). To protect a user's temp files, /tmp has the sticky bit set, so that only I can delete my files, and only you can delete yours. Of course, root can do anything, but we have to hope that the sysadmin isn't deranged! For normal files (that is, for non-executable files), there is little point in setting the SETUID/SETGID bits. SETGID on directories on some systems controls the default group owner for new files created in that directory.
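As a rough illustration of how these bits are set and how they appear (output abbreviated, file names are examples):

$ touch demo; mkdir demodir
$ chmod 4755 demo; ls -l demo       # SETUID: s in the owner execute position
-rwsr-xr-x ... demo
$ chmod 2755 demo; ls -l demo       # SETGID: s in the group execute position
-rwxr-sr-x ... demo
$ chmod 1777 demodir; ls -ld demodir  # sticky bit: t at the end, like /tmp
drwxrwxrwt ... demodir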
{ "source": [ "https://unix.stackexchange.com/questions/43744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20752/" ] }
43,830
If I type ls *ro* I also get files in subdirectories that match the *ro* pattern. Is there any option for ls similar to prune? Ideally a flag, otherwise perhaps an exec?
Use the -d switch:

ls -d *ro*
{ "source": [ "https://unix.stackexchange.com/questions/43830", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
43,854
I developed an algorithm for a fairly hard problem in mathematics which is likely to need several months to finish. As I have limited resources only, I started this on my Ubuntu 12.04 (x86) laptop. Now I want to install some updates and actually restart the laptop (the "please reboot" message is just annoying). Is there a way to save an entire process including its allocated memory for continuation beyond a reboot? Here is some information about the process you might need. Please feel free to ask for further information if needed. I called the process in a terminal with the command " ./binary > ./somefile & " or "time ./binary > ./somefile &", I cannot really remember. It's printing some debug information to std::cerr (not very often). It's currently using roughly 600.0 kiB and even though this will increase, it's unlikely to increase rapidly. the process runs with normal priority the kernel is 3.2.0-26-generic-pae, the cpu is an AMD, the operating system is Ubuntu 12.04 x86. it runs since 9 days and 14 hours (so too long to cancel it ;-) )
The best/simplest solution is to change your program to save the state to a file and reuse that file to restore the process. Based upon the wikipedia page about application snapshots there are multiple alternatives:

There is also cryopid but it seems to be unmaintained.
Linux checkpoint/restart seems to be a good choice but your kernel needs to have CONFIG_CHECKPOINT_RESTORE enabled.
criu is probably the most up-to-date project and probably your best shot, but it also depends on some specific kernel options which your distribution probably hasn't set.

This is already too late, but another more hands-on approach is to start your process in a dedicated VM and just suspend and restore the whole virtual machine. Depending on your hypervisor you can also move the machine between different hosts.

For the future, think about where you run your long-running processes, how to parallelize them and how to handle problems, e.g. full disks, the process getting killed, etc.
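As a sketch of what the criu route looks like (the PID and image directory are placeholders; exact options vary by criu version, and a job started from a shell needs --shell-job):

sudo criu dump -t <pid> -D /tmp/ckpt --shell-job     # checkpoint the process tree into image files
sudo criu restore -D /tmp/ckpt --shell-job           # later (e.g. after reboot), restore from the images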
{ "source": [ "https://unix.stackexchange.com/questions/43854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17986/" ] }
43,882
What's the difference between executing a script like this: ./test.sh and executing a script like this: . test.sh ? I tried a simple, two-line script to see if I could find if there was a difference: #!/bin/bash ls But both . test.sh and ./test.sh returned the same information.
./test.sh runs test.sh as a separate program. It may happen to be a bash script, if the file test.sh starts with #!/bin/bash . But it could be something else altogether. . ./test.sh executes the code of the file test.sh inside the running instance of bash. It works as if the content of file test.sh had been included textually instead of the . ./test.sh line. (Almost: there are a few details that differ, such as the value of $BASH_LINENO , and the behavior of the return builtin.) source ./test.sh is identical to . ./test.sh in bash (in other shells, source may be slightly different or not exist altogether; . for inclusion is in the POSIX standard). The most commonly visible difference between running a separate script with ./test.sh and including a script with the . builtin is that if the test.sh script sets some environment variables, with a separate process, only the environment of the child process is set, whereas with script inclusion, the environment of the sole shell process is set. If you add a line foo=bar in test.sh and echo $foo at the end of the calling script, you'll see the difference:

$ cat test.sh
#!/bin/sh
foo=bar
$ ./test.sh
$ echo $foo

$ . ./test.sh
$ echo $foo
bar
{ "source": [ "https://unix.stackexchange.com/questions/43882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21238/" ] }
43,896
I wonder if it's possible to merge video files using the cat command? I mean will the resultant file play seamlessly?
Yes, it is possible. But not all formats support it. ffmpeg FAQ : A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to join video files by merely concatenating them. When converting to RAW formats you also have a high chance that the files can be concatenated.

ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi

But using cat in this way creates an extra intermediate file, which is not necessary. This is a better approach that avoids creating it:

ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
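On newer ffmpeg builds there is also a concat demuxer that skips the MPEG re-encoding step entirely, provided the inputs share the same codecs and parameters (file names here are illustrative):

$ cat list.txt
file 'input1.avi'
file 'input2.avi'
$ ffmpeg -f concat -safe 0 -i list.txt -c copy output.avi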
{ "source": [ "https://unix.stackexchange.com/questions/43896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9833/" ] }
43,922
I accidentally overwrote my /dev/sda partition table with GParted ( full story on AskUbuntu ). Since I haven't rebooted yet and my filesystem is still perfectly usable, I was told I might be able to recover the partition table from in-kernel memory. Is that possible? If so, how do I recover it and restore it?
Yes, you can do this with the /sys filesystem. /sys is a fake filesystem dynamically generated by the kernel & kernel drivers. In this specific case you can go to /sys/block/sda and you will see a directory for each partition on the drive. There are 2 specific files in those folders you need, start and size . start contains the offset from the beginning of the drive, and size is the size of the partition. Just delete the partitions and recreate them with the exact same starts and sizes as found in /sys . For example this is what my drive looks like:

Device     Boot      Start        End     Blocks  Id  System
/dev/sda1  *          2048     133119      65536  83  Linux
/dev/sda2  *        133120  134340607   67103744   7  HPFS/NTFS/exFAT
/dev/sda3        134340608  974675967  420167680  8e  Linux LVM
/dev/sda4        974675968  976773167    1048600  82  Linux swap / Solaris

And this is what I have in /sys/block/sda :

sda1/  start: 2048       size: 131072
sda2/  start: 133120     size: 134207488
sda3/  start: 134340608  size: 840335360
sda4/  start: 974675968  size: 2097200

I have tested this to verify information is accurate after modifying the partition table on a running system
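A quick sketch to capture those numbers before doing anything else (assumes the affected disk is sda; the values are in 512-byte sectors on typical drives):

for p in /sys/block/sda/sda*; do
  echo "$(basename "$p"): start=$(cat "$p/start") size=$(cat "$p/size")"
done

Save that output somewhere off the affected disk, then recreate each partition with exactly those start sectors and sizes.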
{ "source": [ "https://unix.stackexchange.com/questions/43922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2683/" ] }
43,945
I tried vt100, vt102, vt220, and xterm by using top . But I can't find their difference. Is there any other term type? What's their difference?
xterm is supposed to be a superset of vt220 , in other words it's like vt220 but has more features. For example, xterm usually supports colors, but vt220 doesn't. You can test this by pressing z inside top . In the same way, vt220 has more features than vt100 . For example, vt100 doesn't seem to support F11 and F12 . Compare their features and escape sequences that your system thinks they have by running infocmp <term type 1> <term type 2> , e.g. infocmp vt100 vt220 . The full list varies from system to system. You should be able to get the list using toe , toe /usr/share/terminfo , or find ${TERMINFO:-/usr/share/terminfo} . If none of those work, you could also look at ncurses' terminfo.src , which is where most distributions get the data from these days. But unless your terminal looks like this or this , there's only a few others you might want to use:

xterm-color - if you're on an older system and colors don't work
putty , konsole , Eterm , rxvt , gnome , etc. - if you're running an XTerm emulator and some of the function keys, Backspace, Delete, Home, and End don't work properly
screen - if running inside GNU screen (or tmux)
linux - when logging in via a Linux console (e.g. Ctrl+Alt+F1 )
dumb - when everything is broken
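To poke at what your current terminal type claims to support, tput is handy (the values shown depend on your terminfo database):

$ echo $TERM
xterm
$ tput colors              # number of colors the entry advertises, -1 if none
8
$ infocmp -1 $TERM | head  # dump the entry, one capability per line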
{ "source": [ "https://unix.stackexchange.com/questions/43945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10826/" ] }
43,957
I have been using rsync to copy files for some time. My understanding is that rsync is faster than cp when some of the files to transfer are already in the destination directory, transferring only the incremental difference (i.e. the "deltas"). If this is correct, would there be any advantage to using rsync to moving the contents of a folder A , to say, a folder B , with B being empty? The folder A has close to 1TB of data (and millions of files in it). The transfer would be done over a local network ( A and B being on different filesystems, both mounted on a supercomputer, e.g. A is NFS and B is lustre ). Aside from that, what flags should I use to ask rsync to move (not copy) files from A to B (i.e. to delete A when the transfer has successfully finished)?
You can pass --remove-source-files to rsync to move files instead of copying them. But in your case, there's no point in using rsync, since the destination is empty. A plain mv will do the job as fast as possible. In your case, what could make a difference to performance is the choice of network protocol, if you have a choice among NFS, Samba, sshfs, sftp, rsync over ssh, tar piped into ssh, etc. The relative speed of these methods depends on the file sizes, the network and disk bandwidth, and other factors, so there's no way to give general advice, you'll need to run your own benchmarks.
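For completeness, the two commands discussed would look like this (paths are placeholders; the rsync form is mainly worth it if you want resumability or progress reporting):

mv /mnt/X/A /mnt/Y/                                # across filesystems this is effectively copy + delete
rsync -a --remove-source-files /mnt/X/A /mnt/Y/    # copies, then deletes source files (empty dirs remain)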
{ "source": [ "https://unix.stackexchange.com/questions/43957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
43,976
Is there a way from command line to retrieve the list of all available keyboard layouts and relative variants? I need to list all the valid layout/variants choices to be used then from setxkbmap. Also about the layout toggle options, is there a way to retrieve a list of all available choices (e.g. grp:shift_caps_toggle , ...) I know that with setxkbmap -query I retrieve the list of my current ones, but I need the whole list of options. UPDATE: I've been told about the command man xkeyboard-config which provides all the info to the command line. Furthermore, using man -P cat xkeyboard-config the output goes to stdout and can be parsed with scripts or c code
Take a look at localectl , especially the following options:

localectl list-x11-keymap-layouts - gives you layouts (~100 on modern systems)
localectl list-x11-keymap-variants de - gives you variants for this layout (or all variants if no layout is specified, ~300 on modern systems)
localectl list-x11-keymap-options | grep grp: - gives you all layout switching options
{ "source": [ "https://unix.stackexchange.com/questions/43976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19704/" ] }
44,027
It seems that I have added incorrect record to /etc/fstab : //servername/share /mnt/share cifs defaults,username=myuser 0 0 When I did mount -a , it asked user password to mount network share. It seems that it cannot proceed without password on boot, so it is just hung. How can I fix fstab to prevent boot failure?
It seems that I’ve found a solution:

At the GRUB prompt, hit A to append options. Add init=/bin/bash to the end of the kernel command line and press Enter . The system will boot to a prompt like bash-3.2# . Enter the following command at the prompt to remount the root filesystem read-write:

mount -o remount,rw /

Then edit the fstab:

vim /etc/fstab

Edit the fstab file, commenting out the errors by adding a # at the beginning of each problematic line, save the file and reboot by pressing Ctrl + Alt + Del .
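To keep a password-prompting or unreachable network share from blocking boot in the future, the fstab entry can be given options such as noauto (don't mount at boot) or nofail (a failed mount isn't fatal); for the share from the question that might look like:

//servername/share /mnt/share cifs noauto,username=myuser 0 0

With noauto you then mount it on demand with mount /mnt/share when you actually need it.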
{ "source": [ "https://unix.stackexchange.com/questions/44027", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6702/" ] }
44,040
Is there a standard tool which converts an integer count of Bytes into a human-readable count of the largest possible unit-size, while keeping the numeric value between 1.00 and 1023.99 ? I have my own bash/awk script, but I am looking for a standard tool, which is found on many/most distros... something more generally available, and ideally has simple command line args, and/or can accept piped input. Here are some examples of the type of output I am looking for. 1 Byt 173.00 KiB 46.57 MiB 1.84 GiB 29.23 GiB 265.72 GiB 1.63 TiB Here is the bytes-human script (used for the above output) awk -v pfix="$1" -v sfix="$2" 'BEGIN { split( "Byt KiB MiB GiB TiB PiB", unit ) uix = uct = length( unit ) for( i=1; i<=uct; i++ ) val[i] = (2**(10*(i-1)))-1 }{ if( int($1) == 0 ) uix = 1; else while( $1 < val[uix]+1 ) uix-- num = $1 / (val[uix]+1) if( uix==1 ) n = "%5d "; else n = "%8.2f" printf( "%s"n" %s%s\n", pfix, num, unit[uix], sfix ) }' Update Here is a modified version of Gilles' script, as described in a comment to his answer ..(modified to suit my preferred look). awk 'function human(x) { s=" B KiB MiB GiB TiB EiB PiB YiB ZiB" while (x>=1024 && length(s)>1) {x/=1024; s=substr(s,5)} s=substr(s,1,4) xf=(s==" B ")?"%5d ":"%8.2f" return sprintf( xf"%s\n", x, s) } {gsub(/^[0-9]+/, human($1)); print}'
There is nothing like this in POSIX, but there's a number formatting program in modern GNU coreutils: numfmt that at least gets close to your sample output. With GNU coreutils ≥8.24 (2015, so present on all non-embedded Linux except the oldest releases with a very long-term support cycle):

$ numfmt --to=iec-i --suffix=B --format="%9.2f" 1 177152 48832200 1975684956
     1.00B
   173.00KiB
    46.58MiB
     1.84GiB

Many older GNU tools can produce this format and GNU sort can sort numbers with units since coreutils 7.5 (Aug 2009, so present on virtually all non-embedded Linux distributions).

I find your code a bit convoluted. Here's a cleaner awk version (the output format isn't exactly identical):

awk '
function human(x) {
    if (x<1000) {return x} else {x/=1024}
    s="kMGTEPZY";
    while (x>=1000 && length(s)>1) {x/=1024; s=substr(s,2)}
    return int(x+0.5) substr(s,1,1)
}
{sub(/^[0-9]+/, human($1)); print}'

( Reposted from a more specialized question )
{ "source": [ "https://unix.stackexchange.com/questions/44040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
44,095
I know I can set the volume name when I format the partition with the -n option of mkfs.vfat . But how to just change the name without formatting? I especially want to be able to use lower and uppercase letters. In worst case, I can use a windows tool, but windows by default transforms all letters to uppercase (but works fine with lowercase letters in volumes created with mkfs.vfat ).
Dosfstools , which provides mkfs.vfat and friends, also provides fatlabel (called dosfslabel in older versions) to change the label.
{ "source": [ "https://unix.stackexchange.com/questions/44095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20782/" ] }
44,103
How can I find which process is constantly writing to disk? I like my workstation to be close to silent and I just build a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it. And something is getting on my nerve: I can hear some kind of pattern like if the hard disk was writing or seeking someting ( tick...tick...tick...trrrrrr rinse and repeat every second or so). In the past I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something and I simply redirected that one (not important) logging to a (real) RAM disk. But here I'm not sure. I tried the following: ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp but nothing is changing there. Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing. Could it be something in the kernel/system I just installed or do I have a faulty harddisk? hdparm -tT /dev/sda report a correct HD speed (130 GB/s non-cached, sata 6GB) and I've already installed and compiled from big sources (Emacs) without issue so I don't think the system is bad. (HD is a Seagate Barracude 500GB)
Did you try examining what programs like iotop show? It will tell you exactly what kind of process is currently writing to the disk. Example output:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER   DISK READ  DISK WRITE  SWAPIN    IO>  COMMAND
    1  be/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2  be/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3  be/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
    6  rt/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
    7  rt/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
    8  rt/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
 1033  be/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
   10  be/4  root    0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
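A couple of invocations that are handy when hunting an intermittent writer (flags per the iotop man page):

sudo iotop -o -a        # -o: only processes actually doing I/O; -a: accumulate totals instead of rates
sudo iotop -b -o -n 10  # batch mode, 10 iterations — convenient for redirecting to a log file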
{ "source": [ "https://unix.stackexchange.com/questions/44103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11923/" ] }
44,115
I use vim for essentially all my editing needs, so I decided to once again try vi-mode for my shell (currently ZSH w/ oh-my-zsh on OS X), but I find myself trying (and failing) to use Ctrl-R constantly. What's the equivalent key-binding? And for future reference, how would I figure this out myself? I'm pretty sure I could use bind -P in bash.
You can run bindkey with no arguments to get a list of existing bindings, e.g.:

# Enter vi mode
chopper:~> bindkey -v

# Search for history key bindings
chopper:~> bindkey | fgrep history
"^[OA" up-line-or-history
"^[OB" down-line-or-history
"^[[A" up-line-or-history
"^[[B" down-line-or-history

In emacs mode, the binding you want is history-incremental-search-backward , but that is not bound by default in vi mode. To bind Ctrl-R yourself, you can run this command, or add it to your ~/.zshrc :

bindkey "^R" history-incremental-search-backward

The zshzle manpage ( man zshzle ) has more information on zsh's line editor, bindkey, and emacs/vi modes.
{ "source": [ "https://unix.stackexchange.com/questions/44115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/916/" ] }
44,127
I want to mount a password-protected SMB share (served by a Windows machine). The share is protected by a user name and password, and I may not write the password in a file, I want to be prompted for the password at mount time. I need a solution that works even for when the user on the client machine does not have any administrative privileges, so whatever method is used to mount the share must not allow him to get root permissions. The initial installation can be done as root. Users must be able to specify arbitrary server names. My immediate need is with Ubuntu 12.04, but the wider applicable a solution is the better. The client is headless, so I'm looking for a command-line tool. What I tried: mount.cifs : while it can be made setuid root, its authors do not consider it secure . Running it under sudo has the same problem. smbnetfs , fusesmb : I couldn't convince either of them to prompt me for a password. Nautilus and gvfs: gvfs-mount smb://servername/sharename fails with Error mounting location: volume doesn't implement mount . How can I mount a Samba share from the command line, as a non-root user, with a password prompt?
“Error mounting location: volume doesn't implement mount” apparently translates to “I need D-Bus but it isn't available”. (Thanks to venturax's guru colleague for this information.) Within an SSH session, I can use gvfs-mount provided that dbus-daemon is launched first and the environment variable DBUS_SESSION_BUS_ADDRESS is set.

export $(dbus-launch)
gvfs-mount smb://workgroupname\;username@hostname/sharename
# Type password
ls ~/.gvfs/'sharename on hostname'

gvfs-mount and other GVFS utilities must all talk to the same D-Bus session. Hence, if you use multiple SSH sessions or otherwise use mounts across login sessions, you must:

start D-Bus the first time it is needed, at the latest;
take care not to let D-Bus end with the session, as long as there are mounted GVFS filesystems;
reuse the existing D-Bus session at login time if there is one.

See Reuse D-Bus sessions across login sessions for that.
{ "source": [ "https://unix.stackexchange.com/questions/44127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
44,215
I used Mozilla Firefox in Windows, and now I'm using Iceweasel in Debian 6. Is there any difference to the two programs? What are the advantages and disadvantages to each program? Which one seems better?
It's the same thing. See wikipedia . Basically, you are not allowed to re-compile the source code and still call it Firefox for trademark reasons.
{ "source": [ "https://unix.stackexchange.com/questions/44215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13636/" ] }
44,221
I am trying to burn a DVD using GParted (not a DVD of GParted ). I see that GParted uses a Debian distro (Wheezy). I am trying to install dvd+rw-tools:

sudo apt-get install dvd+rw-tools
Package dvd+rw-tools is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source

Fine. Then I try to add another repository:

sudo apt-add repository ppa:ferramroberto/extra
sudo: add-apt-repository: command not found

Then, I try to install python-software-properties :

sudo apt-get install python-software-properties

And get

Unable to locate package python-software-properties

How can I make this work?
{ "source": [ "https://unix.stackexchange.com/questions/44221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
44,226
Is there any other command line calculator that supports log , n! calculations? At least bc can't do that, it produced a parse error. It's best if I could use it in a script, e.g echo '5!' | program .
bc supports the natural logarithm if invoked with the -l flag. You can calculate the base-10 or base-2 log with it:

$ bc -l
...
l(100)/l(10)
2.00000000000000000000
l(256)/l(2)
8.00000000000000000007

I don't think there's a built-in factorial, but that's easy enough to write yourself:

$ bc
...
define fact_rec (n) {
    if (n < 0) { print "oops"; halt; }
    if (n < 2) return 1;
    return n*fact_rec(n-1);
}
fact_rec(5)
120

Or:

define fact_it (n) {
    if (n < 0) { print "oops"; halt; }
    res = 1;
    for (; n > 1; n--) { res *= n; }
    return res;
}
fact_it(100)
93326215443944152681699238856266700490715968264381621468592963895217\
59999322991560894146397615651828625369792082722375825118521091686400\
0000000000000000000000

To be POSIX compliant, you'd need to write it:

define f(n) {
    auto s, m
    if (n <= 0) {
        "Invalid input: "
        n
        return(-1)
    }
    s = scale
    scale = 0
    m = n / 1
    scale = s
    if (n != m) {
        "Invalid input: "
        n
        return(-1)
    }
    if (n < 2) return(1)
    return(n * f(n - 1))
}

That is: single character function name, no print , no halt , parenthesis required in return(x) . If you don't need input validation (here for positive integer numbers), it's just:

define f(n) {
    if (n < 2) return(1)
    return(n * f(n - 1))
}
{ "source": [ "https://unix.stackexchange.com/questions/44226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
44,234
How to clear unused space with zeros? (ext3, ext4) I'm looking for something smarter than

cat /dev/zero > /mnt/X/big_zero ; sync; rm /mnt/X/big_zero

Like FSArchiver, which looks for "used space" and ignores unused space, but from the opposite side. Purpose: I'd like to compress partition images, so filling unused space with zeros is highly recommended. Btw. For btrfs : Clear unused space with zeros (btrfs)
Such a utility is zerofree . From its description:

Zerofree finds the unallocated, non-zeroed blocks in an ext2 or ext3 file-system and fills them with zeroes. This is useful if the device on which this file-system resides is a disk image. In this case, depending on the type of disk image, a secondary utility may be able to reduce the size of the disk image after zerofree has been run. Zerofree requires the file-system to be unmounted or mounted read-only.

The usual way to achieve the same result (zeroing the unused blocks) is to run "dd" to create a file full of zeroes that takes up the entire free space on the drive, and then delete this file. This has many disadvantages, which zerofree alleviates:

it is slow
it makes the disk image (temporarily) grow to its maximal extent
it (temporarily) uses all free space on the disk, so other concurrent write actions may fail.

Zerofree has been written to be run from GNU/Linux systems installed as guest OSes inside a virtual machine. If this is not your case, you almost certainly don't need this package.

UPDATE #1

The description of the .deb package contains the following paragraph now, which would imply this will work fine with ext4 too.

Description: zero free blocks from ext2, ext3 and ext4 file-systems Zerofree finds the unallocated blocks with non-zero value content in an ext2, ext3 or ext4 file-system and fills them with zeroes...

Other uses

Another application of this utility is to compress disk images that are a backup of a real disk. A typical example of this is the dump of the SD card in a BeagleBone or a Raspberry Pi. Once empty spaces have been zeroed, backup images can be compressed more efficiently.
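A typical run, assuming the filesystem is unmounted or remounted read-only first (device and mount point are examples):

mount -o remount,ro /mnt/X    # or unmount it / work from a live system
zerofree -v /dev/sda1         # -v reports progress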
{ "source": [ "https://unix.stackexchange.com/questions/44234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
44,247
How to move directories that have files in common from one partition to another? Let's assume we have a partition mounted on /mnt/X with directories sharing files via hardlinks. How to move such directories to another partition, let it be /mnt/Y , while preserving those hardlinks? For a better illustration of what I mean by "directories sharing files in common with hardlinks", here is an example:

# let's create a tree of directories and files
mkdir -p a/{b,c,d}/{x,y,z}
touch a/{b,c,d}/{x,y,z}/f{1,2,3,4,5}

# and copy it with hardlinks
cp -r -l a hardlinks_of_a

To be more specific, let's assume that the total size of the files is 10G and each file has 10 hardlinks. The question is how to move it to the destination using 10G (someone might say about copying it with 100G and then running deduplication - it is not what I am asking about)
First answer: The GNU Way

GNU cp -a copies recursively preserving as much structure and metadata as possible. Hard links between files in the source directory are included in that. To select hard link preservation specifically without all the other features of -a , use --preserve=links .

mkdir src
cd src
mkdir -p a/{b,c,d}/{x,y,z}
touch a/{b,c,d}/{x,y,z}/f{1,2,3,4,5}
cp -r -l a hardlinks_of_a
cd ..
cp -a src dst
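To spot-check that the links survived the copy, compare inode numbers and link counts on both sides (paths follow the example above):

ls -li dst/a/b/x/f1 dst/hardlinks_of_a/b/x/f1   # same inode number and a link count of 2 within dst
du -sh src dst                                  # totals should match, not double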
{ "source": [ "https://unix.stackexchange.com/questions/44247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
44,249
What's the best way to check if two directories belong to the same filesystem? Acceptable answers: bash, python, C/C++.
It can be done by comparing device numbers . In a shell script on Linux it can be done with stat :

stat -c "%d" /path    # returns the decimal device number

In python :

os.lstat('/path...').st_dev or os.stat('/path...').st_dev
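Wrapped up as a small shell function (returns success when both paths are on the same filesystem):

same_fs() {
  [ "$(stat -c %d -- "$1")" = "$(stat -c %d -- "$2")" ]
}
same_fs /home /tmp && echo same || echo different

Note that stat -c is the GNU coreutils syntax; on BSD/macOS the equivalent is stat -f %d .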
{ "source": [ "https://unix.stackexchange.com/questions/44249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
44,266
Is there a way to color output for git (or any command)? Consider: baller@Laptop:~/rails/spunky-monkey$ git status # On branch new-message-types # Changes not staged for commit: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: app/models/message_type.rb # no changes added to commit (use "git add" and/or "git commit -a") baller@Laptop:~/rails/spunky-monkey$ git add app/models And baller@Laptop:~/rails/spunky-monkey$ git status # On branch new-message-types # Changes to be committed: # (use "git reset HEAD <file>..." to unstage) # # modified: app/models/message_type.rb # The output looks the same, but the information is totally different: the file has gone from unstaged to staged for commit. Is there a way to colorize the output? For example, files that are unstaged are red, staged are green? Or even Changes not staged for commit: to red and # Changes to be committed: to green? Working in Ubuntu. EDIT: Googling found this answer which works great: git config --global --add color.ui true . However, is there any more general solution for adding color to a command output?
You can create a section [color] in your ~/.gitconfig with e.g. the following content

[color]
  diff = auto
  status = auto
  branch = auto
  interactive = auto
  ui = true
  pager = true

You can also fine control what you want to have coloured in what way, e.g.

[color "status"]
  added = green
  changed = red bold
  untracked = magenta bold

[color "branch"]
  remote = yellow

I hope this gets you started. And of course, you need a terminal which supports colour. Also see this answer for a way to add colorization directly from the command line.
{ "source": [ "https://unix.stackexchange.com/questions/44266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11815/" ] }
44,271
While working with Ubuntu and other Debian-based distros, I've noticed that packages in the software repos often contain the major version number. For example, Apache: apache2 Tomcat: tomcat7 PHP: php5 Wine: wine1.4 MySQL: mysql-server-5.5 I notice however that there's no apache1 package available, and similar for the rest. If the name of the package changes with updates to the software, doesn't that get in the way of one of the major goals of package management (easy upgrades)? If Apache 3 comes out tomorrow, am I going to have to install the apache3 package manually if I want to upgrade?`
Packages are named like that where there is (or was) a need to ease the transition between two major versions of a package, and the time needed to do so is expected to be long. During the transition period, both new and old versions are kept available, with the understanding that at some future time the older one(s) will be discontinued. Sometimes the transition period is happening during the system release you're currently using. For some packages, it happens often enough that you can expect to see transitional package versions in every new system release. Software development tools often fall into this category, since upgrading to new tools on the same schedule as system releases may not be practical. My company's dependence on particular versions of GCC, Autoconf and Perl might be on a 5 year cycle, while my OS might be on a 3 year upgrade cycle. It therefore makes it easier for me to adopt new OSes if it includes my older versions of some packages in addition to whatever was current at the time the new OS was being developed. Other times, these major version changes happened long ago, in the past, and now everyone is on the current version. This is the case with Apache, for example. The 1.3 to 2.0 change was a far bigger deal from a compatibility standpoint than any of the 2.x version changes, so once everyone was off 1.3, there was no longer a need to keep offering multiple Apache versions within a given OS release. But, once you've got everyone using the apache2 package, there isn't a very good argument for renaming it back to just apache . That would cause an unnecessary upgrade hassle. Besides, where there was a perceived need in the past to provide two parallel versions temporarily, the need will probably recur in the future. This package naming practice typically happens only with libraries or important core packages. For more peripheral packages, you're expected to just upgrade to whatever's current at the moment. Libraries are more commonly treated this way than applications because, by their nature, other packages depend on them. The more popular a library is, the more impractical it is to demand that every other package depending on it be rebuilt and relinked against it purely so that the library can be step-upgraded to a new major version without this transition period. Often when an application is being treated this way, it is because it contains a library element. For example, Apache is not just a web server, it also provides a development API for the plugins. ( mod_foo and such.) If someone has an old mod_something linked against the Apache 1.3 plugin ABI and hasn't upgraded it to use the newer 2.0 API, it's convenient if your OS continues to offer the old Apache 1.3 until all the plugin creators have a chance to update their plugins.
{ "source": [ "https://unix.stackexchange.com/questions/44271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12223/" ] }
44,370
I have set my environment variable using /etc/profile : export VAR=/home/userhome Then if I do echo $VAR it shows /home/userhome But when I put reference to this variable into the /etc/init.d/servicename file, it cannot find this variable. When I run service servicename status using /etc/init.d/servicename file with following content: case "$1" in status) cd $VAR/dir ;; esac it says /dir: No such file or directory But it works if I run /etc/init.d/servicename status instead of service servicename status How can I make unix service see environment variables?
The problem is service strips all environment variables but TERM , PATH and LANG , which is a good thing. If you are executing the script directly, nothing removes the environment variables, so everything works. You don't want to rely on external environment variables because at startup the environment variable probably isn't present and your init system probably won't set it anyway. If you still want to rely on such variables, source a file and read the variables from it, e.g. create /etc/default/servicename with the content:

VAR=value

and source it from your init script, e.g.:

[ -f /etc/default/servicename ] && . /etc/default/servicename

if [ -z "$VAR" ] ; then
    echo "VAR is not set, please set it in /etc/default/servicename" >&2
    exit 1
fi

case "$1" in
  status)
    cd "$VAR"/dir
    ;;
esac
{ "source": [ "https://unix.stackexchange.com/questions/44370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6702/" ] }
44,523
Is there an editor which has the option to "split the screen" into two or more parts, accessing more than one file (possibly with a file tree) without opening more editor windows at once, and how would one do this (what are the commands). I don't know if I made myself clear, but "split screen" is the only way to describe what I want to achieve. I want to use it to program, having more than one file open for editing. Note that I'm pretty new to both vi and emacs, if these are capable of doing this. Also, if this has to be done through a terminal editor, can it be done in the same terminal, regardless of the screen size?
vim can easily do that:

ctrl + w s - split windows
ctrl + w w - switch between windows
ctrl + w q - quit a window
ctrl + w v - split windows vertically

:sp filename will open filename in a new buffer and split the window. You can also do

vim -o file1 file2

to open the files in a split screen layout. Replace -o with -O for vertical split instead of horizontal.
{ "source": [ "https://unix.stackexchange.com/questions/44523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21353/" ] }
44,634
Is it possible to use the mouse to navigate between different window panes which are split vertically or horizontally?
As of tmux 2.1 , you can enable this by adding it to your .tmux.conf :

set -g mouse on

Mouse-mode has been rewritten. There are now no longer options for:

mouse-resize-pane
mouse-select-pane
mouse-select-window
mode-mouse

Instead there is just one option: 'mouse' which turns on mouse support entirely.

On older tmux versions, see the mouse-select-pane option in man tmux :

mouse-select-pane [on | off]
If on, tmux captures the mouse and when a window is split into multiple panes the mouse may be used to select the current pane. The mouse click is also passed through to the application as normal.

You can enable this by adding it to your .tmux.conf :

set -g mouse-select-pane on
{ "source": [ "https://unix.stackexchange.com/questions/44634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
44,643
I use Alt+Space in Emacs, but in Xfce it pops up window manager menu at the upper left corner of a window. How do i disable Alt+Space for Xfce and change global keyboard shortcuts in general?
In the Xfce4 Settings Manager (or launch xfce4-settings-manager from a terminal), open the Window Manager configuration, find the keyboard part, look for Window operations menu , and then hit the Clear button, which will remove that shortcut key. The change takes effect immediately.
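If you'd rather script it, the shortcut lives in the xfce4-keyboard-shortcuts xfconf channel; something along these lines should clear it (the exact property path can differ between Xfce versions, so list the channel first):

xfconf-query -c xfce4-keyboard-shortcuts -l | grep -i space                     # find the exact property
xfconf-query -c xfce4-keyboard-shortcuts -p '/xfwm4/default/<Alt>space' -r      # reset/remove it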
{ "source": [ "https://unix.stackexchange.com/questions/44643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11397/" ] }
44,677
I'm switching to Cygwin from the bash shell that ships with Git for Windows, and encountering a strange problem. Someone thought it would be a good idea to add /cygdrive/ to all paths, while I think it's a horribly ugly idea. I've been able to determine that I can partially fix this by adding mount --change-cygdrive-prefix / export HOME=/c/Users/BZISAD0 in my .bashrc, but if I take a look at the PATH variable, everything still has /cygdrive/ in it. I suppose I could write a script to fix the PATH but that's even more kludgey than what I'm already doing. There's got to be a better way, and I'm pretty confident there is since Git's bash shell uses (AFAIK) an older version of Cygwin, and it's somehow configured to not prepend /cygdrive everywhere. So, how can I turn the "Suck" knob to zero?
Grepping around in /etc turned up a link that Googling did not. It turns out you can control this in the file /etc/fstab . Just add a line that says

none / cygdrive binary 0 0

and the problem should be fixed. No more kludgey fixes in .bashrc, and no messed-up $PATH.
{ "source": [ "https://unix.stackexchange.com/questions/44677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
44,678
I'm attempting to look for grub: [root /]# find / -iname "*grub*" /sbin/grubby /usr/share/man/man8/grubby.8.gz /usr/share/vim/vim70/syntax/grub.vim /usr/share/vim/vim70/ftplugin/grub.vim /usr/lib/pm-utils/sleep.d/01grub Now I'm attempting to look for lilo: [root /]# find / -iname "*lilo*" /usr/share/doc/syslinux-3.11/keytab-lilo.doc /usr/share/vim/vim70/syntax/lilo.vim /usr/lib/syslinux/keytab-lilo.pl I thought perhaps it was somehow being hidden with SELinux so I tried to turn that off (temporarily): [root@ /]# setenforce 0 setenforce: SELinux is disabled Hmm, look like it was already off. What about turning that on? [root@ /]# setenforce 1 setenforce: SELinux is disabled Ok, now I have no clue why I can't find any bootloader files. I re-run the find commands and get the same thing. Next I had read the bootloader section in the Linux Administration Handbook and it didn't mention not being able to find bootloader configuration files. This is an box on Amazon EC2: CentOS release 5.4 final selinux Is this normal to not have these files? I also don't seem to have any /etc/sysconfig/selinux or /etc/selinux/config files.... Hmmm.... Update - Why am I asking? This article (among others) mentions using boot flags to enable or disable selinux in the grub.conf file. Without a boot loader how do you specify boot flags?
{ "source": [ "https://unix.stackexchange.com/questions/44678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
44,686
I've just executed a long-running process from the bash prompt. In hindsight, I wish I'd run time on it, or noted down the time at which I kicked it off. Is there any way of getting this information retrospectively? The .bash_history doesn't seem to include timestamps. In my particular case it's Mac OS X, but I'm interested in general Unix/Linux solutions. To clarify, the process has now completed, and I'd prefer not to run it again unless absolutely necessary!
bash actually remembers the times until you close the shell. So try running

HISTTIMEFORMAT='%x %X ' history

If you also put HISTTIMEFORMAT=<some format> in your ~/.bashrc , it will also get written to ~/.bash_history on exit, so you can check what happened in previous shell sessions too.
{ "source": [ "https://unix.stackexchange.com/questions/44686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14482/" ] }
44,692
I need to read the file (contains 16K rows) and print the entire row if any column contains the max value (100) or all columns contain the min value (0). An output example is given below.

input.txt (tab-delimited)

Id  sno1  sno2  sno3  sno4
E1  98    100   88    78
E2  33    99    78    66
E3  0     0     100   56
E4  0     0     0     0
E5  45    55    65    100
E6  0     0     99    88
E7  100   100   100   100

Output.txt

E1  98   100  88   78
E3  0    0    100  56
E4  0    0    0    0
E5  45   55   65   100
E7  100  100  100  100
{ "source": [ "https://unix.stackexchange.com/questions/44692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20346/" ] }
44,713
I would like to configure bash to execute clear command every time I type some command in the terminal (before executing my command). How can I do that? I'm using Debian Linux.
Bash has a precommand hook . Sort of.

preexec () {
    clear
}

preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return                      # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return  # don't cause a preexec for $PROMPT_COMMAND
    # obtain the command from the history, removing the history number at the beginning
    local this_command=`history 1 | sed -e "s/^[ ]*[0-9]*[ ]*//g"`
    preexec "$this_command"
}

trap 'preexec_invoke_exec' DEBUG
{ "source": [ "https://unix.stackexchange.com/questions/44713", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
44,735
How can I get only the filename using sed? I have this:

out_file=$(echo $in_file | sed "s/\(.*\.\).*/\1mp4/g")

But I get the path too ( /root/video.mp4 ), and I want only video.mp4 .
basename from the GNU coreutils can help you doing this job:

$ basename /root/video.mp4
video.mp4

If you already know the extension of the file, you can invoke basename using the syntax basename NAME [SUFFIX] in order to remove it:

$ basename /root/video.mp4 .mp4
video

Or another option would be cutting everything after the last dot using sed :

$ basename /root/video.old.mp4 | sed 's/\.[^.]*$//'
video.old
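Pure-shell parameter expansion does the same without spawning extra processes (POSIX, so it works in sh as well as bash); applied to the variable from the question:

in_file=/root/video.old.mp4
name=${in_file##*/}        # strip the directory part  -> video.old.mp4
out_file=${name%.*}.mp4    # swap the last extension for .mp4
echo "$out_file"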
{ "source": [ "https://unix.stackexchange.com/questions/44735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21633/" ] }
44,736
I have Tomcat and Apache installed on CentOS 5. They're integrated with the help of mod_jk.so. They both display the same content at http://www.tource.com/cms/admin and http://www.tource.com:8080/cms/admin , but I'd like to make the context cms displayed only when I access it with the address below:

http://cms.tource.com/

How could the context "www.tource.com/cms" turn into "cms.tource.com" ?
{ "source": [ "https://unix.stackexchange.com/questions/44736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21634/" ] }