source_id (int64, 1–4.64M) | question (string, 0–28.4k chars) | response (string, 0–28.8k chars) | metadata (dict)
---|---|---|---|
37,915 | I'm new to C++ programming and I have a very basic question.
I've noticed that when I return -1 in a C++ program and then check the exit status, I get 255. Why is that? The code is the most basic: int main()
{
return -1;
} Then, after running the compiled code: echo $?
255 | Because in UNIX/POSIX, the exit code of a program is defined to be an unsigned 8-bit value. Converting -1 to unsigned 8-bit gives 255. Edit to add: To give more detail: the wait*() family of system calls in UNIX encode the result of a process into a single 32-bit integer. The 32 bits of that result are further broken up to provide information such as whether the process dumped core, exited due to a signal (and which one), etc. Of those 32 bits, only 8 are reserved for the exit code of the process, and those are interpreted as an unsigned value. The fork/exec/wait model of UNIX/POSIX is one of its very oldest and most deeply embedded features; if you were designing a new operating system today you might do something different (at least use 64 bits :-)). On the other hand, practically speaking, is it really useful to have >255 exit codes? I doubt it. If you really wanted something more powerful I'd suggest switching to an "exit string" instead of a numeric exit code with a wider range. | {
"source": [
"https://unix.stackexchange.com/questions/37915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15142/"
]
} |
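To see the 8-bit truncation described above in action, here is a minimal shell sketch (assumes bash; a.out stands for the compiled program from the question, and the numbers follow from reduction modulo 256):
./a.out; echo $?              # prints 255, as in the question
bash -c 'exit 300'; echo $?   # prints 44, i.e. 300 mod 256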
37,950 | Is there a command to delete all the files in a directory that haven't been modified in N days? I need to clean up some old logs. | This will delete all files older than 5 days. You can add a -name '*log' to be more precise, and you might want to specify a -maxdepth in the find command too. find /some/dir -type f -mtime +5 -delete | {
"source": [
"https://unix.stackexchange.com/questions/37950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1024/"
]
} |
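Putting the answer's suggestions together, a more precise sketch might look like this (the path, pattern and age are placeholders to adapt):
find /some/dir -maxdepth 1 -type f -name '*log' -mtime +5 -delete
# -maxdepth 1 keeps find out of subdirectories; -name '*log' restricts the match to log files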
38,055 | In Unix-based operating systems, are UTF-8 filenames permissible? If so, do I need to do anything special to write the file to disk? Let me explain what I'm hoping to do. I'm writing an application that will transfer a file via ftp to a remote system, but the filename is dynamically set via some set of metadata which potentially could be in UTF-8. I'm wondering if there's something I need to do to write the file to disk in Unix/Linux. Also, as a follow-up, does anyone know what would happen if I did upload a UTF-8 filename to a system that doesn't support UTF-8? | On Unix/Linux, a filename is a sequence of any bytes except for a slash or a NUL. A slash separates path components, and a NUL terminates a path name. So, you can use whatever encoding you want for filenames. Some applications may have trouble with some encodings if they are naïve about what characters may be in filenames - for example, poorly-written shell scripts often do not handle filenames with spaces. Modern Unix/Linux environments handle UTF-8 encoded filenames just fine. | {
"source": [
"https://unix.stackexchange.com/questions/38055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8041/"
]
} |
38,072 | When I am running my analyses using the bash shell, I often want to save the commands I've used that gave me good results to a file in the same directory (my "LOGBOOK", as it's called) so that I can check what I did to get those results. So far this has meant me either copy-pasting the command from the terminal or pressing "up" and modifying the command to an echo "my command" >> LOGBOOK , or other similar antics. I found there was a history tool the other day, but I can't find a way of using it to get the previously executed command so that I can do something like getlast >> LOGBOOK . Is there a nice easy way to do this? Alternatively, how do others deal with saving the commands for results they have? | If you are using bash , you can use the fc command to display your history in the way you want: fc -ln -1 That will print out your last command. -l means list, -n means not to prefix lines with command numbers and -1 says to show just the last command. If the whitespace at the start of the line (only the first line on multi-line commands) is bothersome, you can get rid of that easily enough with sed . Make that into a shell function, and you have a solution as requested ( getlast >> LOGBOOK ): getlast() {
fc -ln "$1" "$1" | sed '1s/^[[:space:]]*//'
} That should function as you have asked in your question. I have added a slight variation by adding "$1" "$1" to the fc command. This will allow you to say, for example, getlast mycommand to print out the last command line invoking mycommand , so if you forgot to save before running another command, you can still easily save the last instance of a command. If you do not pass an argument to getlast (i.e. fc is invoked as fc -ln "" "" ), it prints out just the last command. [Note: Answer edited to account for @Bram's comment and the issue mentioned in @glenn jackman's answer.] | {
"source": [
"https://unix.stackexchange.com/questions/38072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9569/"
]
} |
38,109 | What does gvfs do for me on my Kubuntu machine and why is /usr/lib/gvfs/gvfs-gdu-volume-monitor eating so much CPU time? BTW: I read https://en.wikipedia.org/wiki/GVFS and still don't know what's in it for me, especially on KDE / Kubuntu. lsof shows me that thunderbird , firefox and pidgin have gvfs libraries open, but for what functionality? | GVFS ( GNOME Virtual File System ) provides a layer just below the user applications you use, like firefox. This layer is called a virtual filesystem and basically presents to firefox, thunderbird and pidgin a common layer that allows them to see local file resources and remote file resources as a single set of resources, meaning that access to a resource is transparent to the user whether it lives on the local machine or a remote one. This layer is mostly there to make it easier for application developers to code against a single set of interfaces rather than having to distinguish between local and remote file systems in their low-level code. For the user this could mean that the same file manager you use to browse your local files could also be used to browse files on a remote server. As a simplified contrast, on Windows I can browse my local files with Explorer, but to browse files on an NFS or SFTP server I would need a separate application. | {
"source": [
"https://unix.stackexchange.com/questions/38109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16841/"
]
} |
38,163 | I just learned that Linux has a sudo !! command which literally applies sudo to the last entered command. I had never heard about it. Is that a common control? Where can I find documentation about it? | This is just a bash shortcut. It's not sudo!! , by the way. It's sudo !! (note the space). The !! in bash is basically an expansion of the previous command. Take a look at the "History Expansion" section of the bash man page: http://www.gnu.org/software/bash/manual/bashref.html#Event-Designators | {
"source": [
"https://unix.stackexchange.com/questions/38163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18515/"
]
} |
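A quick illustration of the expansion (a sketch; whoami is just a stand-in for any previously run command):
$ whoami
user
$ sudo !!
sudo whoami
root
# bash echoes the line after history expansion, so you can see what actually ran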
38,164 | I'm partitioning a non-SSD hard disk with parted because I want a GPT partition table. parted /dev/sda mklabel gpt Now, I'm trying to create the partitions correctly aligned so I use the following command to know where the first sector begins: parted /dev/sda unit s p free
Disk /dev/sda: 488397168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
34s 488397134s 488397101s Free Space We can see that it starts in sector 34 (that's the default when this partition table is used). So, to create the first partition I tried: parted /dev/sda mkpart primary 63s 127s to align it on sector 64 since it's a multiple of 8 but it shows: Warning: The resulting partition is not properly aligned for best performance. The logical and physical sector sizes in my hard disk are both 512 bytes: cat /sys/block/sda/queue/physical_block_size
512
cat /sys/block/sda/queue/logical_block_size
512 How do I create partitions correctly aligned? What am I doing wrong? | In order to align partitions with parted you can use the --align option. Valid alignment types are: none - Use the minimum alignment allowed by the disk type. cylinder - Align partitions to cylinders. minimal - Use minimum alignment as given by the disk topology information. This and the opt value will use layout information provided by the disk to align the logical partition table addresses to actual physical blocks on the disks. The min value is the minimum alignment needed to align the partition properly to physical blocks, which avoids performance degradation. optimal - Use optimum alignment as given by the disk topology information. This aligns to a multiple of the physical block size in a way that guarantees optimal performance. Another useful tip is that you can set the size with percentages to get it aligned. Start at 0% and end at 100%. For example: parted -a optimal /dev/sda mkpart primary 0% 4096MB | {
"source": [
"https://unix.stackexchange.com/questions/38164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18516/"
]
} |
38,170 | I have an installation of openSUSE 11.4 which was installed in German. I started Yast2 (which displayed in German, like "Netzwerkdienste") and switched the language to English under "System" -> "Sprache". Yast2 downloaded some files, installed something and remained German ("Netzwerkdienste" can still be seen instead of, presumably, "Network Services"). I rebooted the machine, same result. I un-installed the German Yast language pack. Yast2 persists in displaying in German. I don't know how many of Yast2's screens are supposed to be translated, but I think it might only be the main screen that is in German. However, it is annoying. How can I change it? Update: I checked environment variables (for the root user). There are several variables that refer to language settings and all remain set to German. declare -x LANG="de_DE.utf8"
declare -x LANGUAGE="de_DE.utf8"
declare -x LC_ALL="de_DE.utf8" Shouldn't Yast2 have modified them? Update: I just started vi and it's also in German... does the language setting in Yast2 do anything at all? | | {
"source": [
"https://unix.stackexchange.com/questions/38170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9839/"
]
} |
38,172 | I'm looking to switch from bash to zsh but concerned about compatibility of bash scripts. Are all bash scripts/functions compatible with zsh? Therefore, if that is true is zsh just an enhancement to bash? | If your scripts start with the line #!/bin/bash they will still be run using bash, even if your default shell is zsh. I've found the syntax of zsh really close to the one of bash, and I did not pay attention if there was really some incompatibilities. I switched 6 years ago from bash to zsh seamlessly. | {
"source": [
"https://unix.stackexchange.com/questions/38172",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7768/"
]
} |
38,175 | I understand the basic difference between an interactive shell and a non-interactive shell. But what exactly differentiates a login shell from a non-login shell? Can you give examples for uses of a non-login interactive shell? | A login shell is the first process that executes under your user ID when you log in for an interactive session. The login process tells the shell to behave as a login shell with a convention: passing argument 0, which is normally the name of the shell executable, with a - character prepended (e.g. -bash whereas it would normally be bash . Login shells typically read a file that does things like setting environment variables: /etc/profile and ~/.profile for the traditional Bourne shell, ~/.bash_profile additionally for bash † , /etc/zprofile and ~/.zprofile for zsh † , /etc/csh.login and ~/.login for csh, etc. When you log in on a text console, or through SSH, or with su - , you get an interactive login shell. When you log in in graphical mode (on an X display manager ), you don't get a login shell, instead you get a session manager or a window manager. It's rare to run a non-interactive login shell, but some X settings do that when you log in with a display manager, so as to arrange to read the profile files. Other settings (this depends on the distribution and on the display manager) read /etc/profile and ~/.profile explicitly, or don't read them. Another way to get a non-interactive login shell is to log in remotely with a command passed through standard input which is not a terminal, e.g. ssh example.com <my-script-which-is-stored-locally (as opposed to ssh example.com my-script-which-is-on-the-remote-machine , which runs a non-interactive, non-login shell). When you start a shell in a terminal in an existing session (screen, X terminal, Emacs terminal buffer, a shell inside another, etc.), you get an interactive, non-login shell. That shell might read a shell configuration file ( ~/.bashrc for bash invoked as bash , /etc/zshrc and ~/.zshrc for zsh, /etc/csh.cshrc and ~/.cshrc for csh, the file indicated by the ENV variable for POSIX/XSI-compliant shells such as dash, ksh, and bash when invoked as sh , $ENV if set and ~/.mkshrc for mksh, etc.). When a shell runs a script or a command passed on its command line, it's a non-interactive, non-login shell. Such shells run all the time: it's very common that when a program calls another program, it really runs a tiny script in a shell to invoke that other program. Some shells read a startup file in this case (bash runs the file indicated by the BASH_ENV variable, zsh runs /etc/zshenv and ~/.zshenv ), but this is risky: the shell can be invoked in all sorts of contexts, and there's hardly anything you can do that might not break something. † I'm simplifying a little, see the manual for the gory details. | {
"source": [
"https://unix.stackexchange.com/questions/38175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8624/"
]
} |
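To check which kind of shell you are currently in, a small sketch (shopt is bash-specific):
echo $0                                        # a leading dash, e.g. -bash, marks a login shell
shopt -q login_shell && echo login || echo non-login
Both checks rest on the convention described above: the login process prepends - to argument 0.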
38,205 | How might it be possible to alter some variable in the env of an already running process, for example through /proc/PID/environ? That "file" is read-only . I need to change or unset the DISPLAY variable of a long-running batch job without killing it. | You can't do this without nasty hacks - there's no API for this, no way to notify the process that its environment has changed (since that's not really possible anyway). Even if you do manage to do that, there is no way to be sure that it will have any effect - the process could very well have cached the environment variable you're trying to poke (since nothing is supposed to be able to change it). If you really do want to do this, and are prepared to pick up the pieces if things go wrong, you can use a debugger. See for instance this Stack Overflow question: Is there a way to change another process's environment variables? Essentially: (gdb) attach process_id
(gdb) call putenv ("DISPLAY=your.new:value")
(gdb) detach Other possible functions you could try to call are setenv or unsetenv . Please do really keep in mind that this may not work, or have dire consequences if the process you target does "interesting" things with its environment block. Do test it out on non-critical processes first, but make sure these test processes mirror as close as possible the one you're trying to poke. | {
"source": [
"https://unix.stackexchange.com/questions/38205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13496/"
]
} |
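While /proc/PID/environ cannot be written, it can at least be read to verify what the process was started with (a sketch; 1234 stands for the real PID, and the entries are NUL-separated, hence the tr):
tr '\0' '\n' < /proc/1234/environ | grep '^DISPLAY='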
38,207 | How can I assign folder permissions in Fedora 14 using the command line? I'd like to give read/write/execute permissions to : 2 users (tom, jim) to the folder /Home/Share 1 group (employees) to the folder /Home/Share/Employees However, I'm not sure how to do this properly from a shell. | | {
"source": [
"https://unix.stackexchange.com/questions/38207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19382/"
]
} |
38,310 | Say I've got the following pipeline: cmd1 < input.txt |\
cmd2 |\
cmd4 |\
cmd5 |\
cmd6 |\
(...) |\
cmdN > result.txt Under certain conditions I would like to add a cmd3 between cmd2 and cmd4 . Is there a way to create a kind of conditional pipeline without saving the result of cmd2 into a temporary file? I would think of something like: cmd1 < input.txt |\
cmd2 |\
(${DEFINED}? cmd3 : cat ) |\
cmd4 |\
cmd5 |\
cmd6 |\
(...) |\
cmdN > result.txt | Just the usual && and || operators: cmd1 < input.txt |
cmd2 |
( [ -n "$DEFINED" ] && cmd3 || cat ) |
cmd4 |
cmd5 |
cmd6 |
(...) |
cmdN > result.txt Although, as specified by this answer , you would generally prefer if ... else , if you're after the if-else syntax: ...
cmd2 |
if [ -n "$DEFINED" ]; then cmd3; else cat; fi |
cmd4 |
... (Note that no trailing backslash is needed when the line ends with a pipe.) Update according to Jonas' observation. If cmd3 may terminate with a non-zero exit code and you do not want cat to process the remaining input, reverse the logic: cmd1 < input.txt |
cmd2 |
( [ -z "$DEFINED" ] && cat || cmd3 ) |
cmd4 |
cmd5 |
cmd6 |
(...) |
cmdN > result.txt | {
"source": [
"https://unix.stackexchange.com/questions/38310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5921/"
]
} |
38,330 | I'm trying to find where a specific alias has been declared. I've searched all the usual places I know to look for aliases: ~/.bashrc ~/.bash_profile /etc/bashrc /etc/profile With no luck. I know it's an alias because when I do which COMMAND , I get: alias COMMAND='/path/to/command'
/path/to/command Is there a way to find what file declares an alias, knowing only the alias name? | I would look in /etc/profile.d/ for the offending alias . You could also do the following to find it: grep -r '^alias COMMAND' /etc This will recursively grep through files looking for a line beginning with alias COMMAND . If all else fails, put this at the end of your ~/.bashrc unalias COMMAND | {
"source": [
"https://unix.stackexchange.com/questions/38330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
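If the alias is defined outside /etc, one way to hunt it down is to trace everything an interactive login shell sources (a sketch using bash's xtrace; it shows the definition as it executes, though not which file it came from):
bash -lixc : 2>&1 | grep "alias COMMAND"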
38,379 | I've switched to using Arch Linux for most of my day-to-day work and don't need Windows for anything but gaming and the couple of apps that aren't ported to Linux, like OneNote. My Linux distribution is hosted in VirtualBox with Windows as host, and I quite like it that way; snapshots are incredibly useful. Let's say I were to pretty much never care about the Windows host and spend 95% of the time in the guest, what would I be missing out on? Are there serious downsides? Is performance severely affected, and will installing straight onto the machine make my life much more amazing? | Assuming you can get everything working, and you don't want to do resource-intensive tasks such as playing games or doing large compiles, then I think you'll be fine. There are some basic issues you will probably encounter: guest time incorrect guest screen size or color depth incorrect can't access USB devices (printers, phones, etc.) To fix this, you should install VirtualBox guest additions . See the VirtualBox Arch Linux guests guide for details. To get some extra features, such as USB 2.0 and Intel PXE support, you can also install the VirtualBox extension pack . After that, there are a few issues you should know about: can't use USB 3.0 can't use IEEE1394/"FireWire" can't use seamless mode in combination with dual-head time gets out of sync on 64-bit guests Obviously your Linux VM will be affected if your Windows system crashes too. Issues I've had happen recently: Windows host crashes due to driver bug (blue screen) Windows host reboots due to security update When running a virtual machine the biggest performance hit will be to your disk I/O . If at all possible, put your VM on a separate disk and/or use a solid-state drive . Using a virtual SATA drive instead of a virtual IDE drive can help too. | {
"source": [
"https://unix.stackexchange.com/questions/38379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16539/"
]
} |
38,436 | Is it possible to change the tmux prefix keyboard shortcut Ctrl + B to Alt + B ? | Yes. See for example this link : set-option -g prefix M-b | {
"source": [
"https://unix.stackexchange.com/questions/38436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6215/"
]
} |
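A slightly fuller ~/.tmux.conf sketch (the send-prefix binding is a common companion so that pressing the prefix twice passes a literal Alt + B through to the program inside tmux):
unbind C-b
set-option -g prefix M-b
bind-key M-b send-prefix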
38,538 | After upgrading to a new release version, my bash scripts start spitting errors: bash: /dev/stderr: Permission denied In previous versions, Bash would internally recognize those file names (which is why this question is not a duplicate of this one ) and do the right thing (tm); however, this has stopped working now. What can I do to be able to run my scripts again successfully? I have tried adding the user running the script to the group tty , but this makes no difference (even after logging out and back in). I can reproduce this on the command line without problem: $ echo test > /dev/stdout
bash: /dev/stdout: Permission denied
$ echo test > /dev/stderr
bash: /dev/stderr: Permission denied
$ ls -l /dev/stdout /dev/stderr
lrwxrwxrwx 1 root root 15 May 13 02:04 /dev/stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 May 13 02:04 /dev/stdout -> /proc/self/fd/1
$ ls -lL /dev/stdout /dev/stderr
crw--w---- 1 username tty 136, 1 May 13 05:01 /dev/stderr
crw--w---- 1 username tty 136, 1 May 13 05:01 /dev/stdout
$ echo $BASH_VERSION
4.2.24(1)-release On an older system (Ubuntu 10.04): $ echo $BASH_VERSION
4.1.5(1)-release | I don't think this is entirely a bash issue. In a comment, you said that you saw this error after doing sudo su username2 when logged in as username . It's the su that's triggering the problem. /dev/stdout is a symlink to /proc/self/fd/1 , which is a symlink to, for example, /dev/pts/1 . /dev/pts/1 , which is a pseudoterminal, is owned by, and writable by, username ; that ownership was granted when username logged in. When you sudo su username2 , the ownership of /dev/pts/1 doesn't change, and username2 doesn't have write permission. I'd argue that this is a bug. /dev/stdout should be, in effect, an alias for the standard output stream, but here we see a situation where echo hello works but echo hello > /dev/stdout fails. One workaround would be to make username2 a member of group tty , but that would give username2 permission to write to any tty, which is probably undesirable. Another workaround would be to login to the username2 account rather than using su , so that /dev/stdout points to a newly allocated pseudoterminal owned by username2 . This might not be practical. Another workaround would be to modify your scripts so they don't refer to /dev/stdout and /dev/stderr ; for example, replace this: echo OUT > /dev/stdout
echo ERR > /dev/stderr by this: echo OUT
echo ERR 1>&2 I see this on my own system, Ubuntu 12.04, with bash 4.2.24 -- even though the bash document ( info bash ) on my system says that /dev/stdout and /dev/stderr are treated specially when used in redirections. But even if bash doesn't treat those names specially, they should still act as equivalents for the standard I/O streams. (POSIX doesn't mention /dev/std{in,out,err} , so it may be difficult to argue that this is a bug.) Looking at old versions of bash, the documentation implies that /dev/stdout et al are treated specially whether the files exist or not. The feature was introduced in bash 2.04, and the NEWS file for that version says: The redirection code now handles several filenames specially:
/dev/fd/N, /dev/stdin, /dev/stdout, and /dev/stderr, whether or not
they are present in the file system. But if you examine the source code ( redir.c ), you'll see that that special handling is enabled only if the symbol HAVE_DEV_STDIN is defined (this is determined when bash is built from source). As far as I can tell, no released version of bash has made the special handling of /dev/stdout et al unconditional -- unless some distribution has patched it. So another workaround (which I haven't tried) would be to grab the bash sources , modify redir.c to make the special /dev/* handling unconditional, and use your rebuilt version rather than the one that came with your system. This is probably overkill, though. SUMMARY : Your OS, like mine, is not handling the ownership and permissions of /dev/stdout and /dev/stderr correctly. bash supposedly treats these names specially in redirections, but in fact it does so only if the files don't exist. That wouldn't matter if /dev/stdout and /dev/stderr worked correctly. This problem only shows up when you su to another account or do something similar; if you simply login to an account, the permissions are correct. | {
"source": [
"https://unix.stackexchange.com/questions/38538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
38,560 | I installed the CUDA toolkit on my computer and started a BOINC project on the GPU. In BOINC I can see that it is running on the GPU, but is there a tool that can show me more details about what is running on the GPU - GPU usage and memory usage? | For Nvidia GPUs there is a tool nvidia-smi that can show memory usage, GPU utilization and the temperature of the GPU. There is also a list of compute processes and a few more options, but my graphics card (GeForce 9600 GT) is not fully supported. Sun May 13 20:02:49 2012
+------------------------------------------------------+
| NVIDIA-SMI 3.295.40 Driver Version: 295.40 |
|-------------------------------+----------------------+----------------------+
| Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |
| Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
|===============================+======================+======================|
| 0. GeForce 9600 GT | 0000:01:00.0 N/A | N/A N/A |
| 0% 51 C N/A N/A / N/A | 90% 459MB / 511MB | N/A Default |
|-------------------------------+----------------------+----------------------|
| Compute processes: GPU Memory |
| GPU PID Process name Usage |
|=============================================================================|
| 0. Not Supported |
+-----------------------------------------------------------------------------+ | {
"source": [
"https://unix.stackexchange.com/questions/38560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2661/"
]
} |
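To keep those numbers updating continuously, you can combine it with watch (a sketch; watch ships with procps on Linux):
watch -n 1 nvidia-smi    # re-run the report every second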
38,608 | Yesterday, I ran a bash script for about 10 hours. When I went to use the computer, it locked up. I have an Eee PC with Debian. The screen was still visible, but the mouse and keyboard had no effect. I tried Ctrl Alt Delete , Ctrl Alt Backspace , Ctrl Alt F1 , but to no effect. The hard drive light showed no activity. How can I determine what went wrong? What logs can I check? | You can find all messages in /var/log/syslog and in other /var/log/ files. Old messages are in /var/log/syslog.1 , /var/log/syslog.2.gz etc. if logrotate is installed. However, if the kernel really locks up, the probability is low that you will find any related message. It could be that only the X server locked up. In this case, you can usually still access the PC over the network via ssh (if you have installed it). There is also the Magic SysRq key to unRaw the keyboard so that the shortcuts you tried could work, too. | {
"source": [
"https://unix.stackexchange.com/questions/38608",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
38,620 | I'm looking for a SQLite graphical administration utility for Linux but I can't seem to find any (I found an extension for Firefox, but I'm not a user of that browser). Are there any that you know of? | Tried Sqliteman ? Look for sqliteman in your package manager. It is stable, so it should be broadly available. | {
"source": [
"https://unix.stackexchange.com/questions/38620",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18817/"
]
} |
38,634 | Relatively often, I find myself wanting to quit less but leave what I was viewing on the screen, to refer back to. Is there any way to do this? Workarounds? (My current workaround is to quit, then use more . So any workaround that's better than that is welcomed. The ideal would be something I can use once I'm already inside less , perhaps with a shell setting or some scripting.) My desktop is OSX, but I use RHEL and Ubuntu servers. | This is actually a function of the terminal emulator you are using (xterm, gnome-terminal, konsole, screen). An alternate screen, or altscreen, gets launched when programs such as less or vim are invoked. This altscreen has no history buffer and exits immediately when you quit the program, switching back to the original screen which restores the previous window content history and placement. You can prevent less from launching in an altscreen by passing the argument "-X". less -X /path/to/some/file You can also pass "-X" as an environment variable. So if you are using bash , place this in ~/.bashrc : export LESS="-X" However, this disables the termcap (terminal capability) initialization and deinitialization, so other views when you use less may appear off. Another option would be to use screen and set the option altscreen off in your ~/.screenrc . less will not clear the screen and should preserve color formatting. Presumably tmux will have the same option. This blog entry describes the problem and offers some different solutions specific to gnome-terminal with varying success. | {
"source": [
"https://unix.stackexchange.com/questions/38634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18823/"
]
} |
38,681 | How should one open .eml files in Linux? I'm not sure if mutt can handle it? UPDATE I partially worked it out by creating a new mailbox: mkdir -p a/{cur,tmp,new} After placing the eml file in a/cur , I could read it with: mutt -f But that's not exactly what I want yet | mutt doesn't seem able to open individual messages. What you can do is convert the .eml file into an mbox folder containing a single message. This basically involves adding a From line at the top, which can be done using formail -b : formail -b < themessage.eml > themessage.mbox This can then be opened within mutt using change-folder (default key c ). | {
"source": [
"https://unix.stackexchange.com/questions/38681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
38,737 | I need to log in to a user that I've created on a remote host running Ubuntu. I can't use an ssh key because the ssh login will happen from a bash script run within a server that I won't have access to (think of a continuous integration server like Bamboo). I understand this isn't an ideal practice, but I want to either set the remote host to not ask for the password or be able to log in with something like ssh --password foobar user@host , kind of like MySQL allows you to do for logins. I'm not finding this in man ssh and I'm open to any alternatives to getting around this issue. | On Ubuntu, install the sshpass package, then use it like this: sshpass -p 'YourPassword' ssh user@host sshpass also supports passing the keyboard-interactive password from a file or an environment variable, which might be a more appropriate option in any situation where security is relevant. See man sshpass for the details. | {
"source": [
"https://unix.stackexchange.com/questions/38737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18833/"
]
} |
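One of the variants the answer alludes to (a sketch; -e makes sshpass read the password from the SSHPASS environment variable, keeping it off the command line and out of the process list):
export SSHPASS='YourPassword'
sshpass -e ssh user@host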
38,755 | I have a server in the USA (Linux box B) and my home PC (Linux box A),
and I need to download a file from website C. The issue is that it is very slow to download the file directly to A,
so I currently download the file while logged in to B, and then fetch it to A with sftp. Is there any way I can download the file using B as a proxy, with a single command? | (Strange situation, doesn't something like the triangle inequality hold for internet routing?) Anyway, try the following: on A , ssh into B with a -D argument, ssh -D 1080 address-of-B which acts as a SOCKS5 proxy on 127.0.0.1:1080 , which can be used by anything supporting SOCKS5 proxied connections. Apparently, wget can do this , by using the environment variable export SOCKS_SERVER=127.0.0.1:1080
wget http://server-C/whatever Note that sometimes curl is more handy (i.e. I'm not sure if wget can do hostname lookups via SOCKS5; but this is not one of your concerns I suppose); also Firefox is able to work completely through such a SOCKS5 proxy. Edit I've just now noticed that you're looking for a one-line solution. Well, how about ssh address-of-B 'wget -O - http://server-C/whatever' >> whatever i.e. redirecting the wget -fetched output to stdout , and redirecting the local output (from ssh running wget remotely) to a file. This seems to work; the wget output is just a little confusing (" saved to - "), but you can get rid of it by adding -q to the wget call. | {
"source": [
"https://unix.stackexchange.com/questions/38755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18893/"
]
} |
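For the tunnel variant, curl can also be pointed at the ssh -D proxy and resolve hostnames on the far side (a sketch; --socks5-hostname is the relevant curl option):
curl --socks5-hostname 127.0.0.1:1080 -o whatever http://server-C/whatever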
38,793 | I found that if I search using grep without specifying a path, like grep -r 'mytext' it takes infinitely long. Meanwhile, if I search with the path specified, grep -r 'mytext' . instantly finds what I need. So, I'm curious: in the first form, in which directory does grep search? UPDATE: grep version: grep (GNU grep) 2.10 | Actually it doesn't search anywhere. It waits for input from standard input. Try this: beast:~ viroos$ grep foo When you type a line containing "foo" and hit enter, the line will be repeated; otherwise the cursor will move to a new line, but grep won't print anything. | {
"source": [
"https://unix.stackexchange.com/questions/38793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
38,808 | I've always wondered why cd isn't a program, but never managed to find the answer. Anyone know why this is the case? | The cd command modifies the "current working directory", right? "current working directory" is a property that is unique to each process. So, if cd were a program it would work like this: cd foo - the cd process starts; the cd process changes the directory for the cd process; the cd process exits; your shell still has the same state, including the current working directory, that it had before you started. | {
"source": [
"https://unix.stackexchange.com/questions/38808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18917/"
]
} |
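You can watch this happen by running cd in a child process (a sketch; sh -c starts a separate process, just like an external cd binary would):
cd /tmp
sh -c 'cd /; pwd'   # prints / - the child changed its own directory
pwd                 # still prints /tmp - the parent shell is unaffected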
38,837 | When doing an apt-get upgrade I sometimes get a message saying "The following packages have been kept back". For example: $ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
linux-headers-server linux-image-server linux-server
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded. What does this mean exactly? Obviously the packages have been held back and not installed, but why? The follow-on question would be: how does one upgrade these kept-back packages? | If the upgrade would require another package to be deleted, or a new package to be installed, the package will be "kept back." As the man page for apt-get upgrade explains: Packages currently installed with new versions available are retrieved
and upgraded; under no circumstances are currently installed packages
removed, or packages not already installed retrieved and installed. To get past this, you can do sudo apt-get --with-new-pkgs upgrade This allows new packages to be installed. It will let you know what packages would be installed and prompt you before actually doing the install. | {
"source": [
"https://unix.stackexchange.com/questions/38837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18068/"
]
} |
38,839 | I need to write a script that looks in some folders and sends an email if there are files in those folders. I've tried to do something like this, but I get errors with the command variable and printing the folder. for folder in "FOLDER1" "FOLDER2"; do
command=`ssh -q user@host "ls /usr/local/username/`{print $folder}` | wc -l"`
#echo $command
if [ $command -ne '0' ]
then
#send error email
fi
done | | {
"source": [
"https://unix.stackexchange.com/questions/38839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18944/"
]
} |
38,867 | I need a solution for getting the current active (focused) window information on a Gnome 2 desktop. I'm mostly interested in the process running that window and window title. Is it possible? SOLUTION: Getting window title: xwininfo -root -children | grep $(printf '%x\n' $(xdotool getwindowfocus)) | grep -oEi '"[^"]+"' | head -1 Getting process name: ps -e | grep $(xdotool getwindowpid $(xdotool getwindowfocus)) | grep -v grep | awk '{print $4}' or: cat /proc/$(xdotool getwindowpid $(xdotool getwindowfocus))/comm | You can use xdotool , a versatile X window automation tool. focused_window_id=$(xdotool getwindowfocus)
active_window_id=$(xdotool getactivewindow)
active_window_pid=$(xdotool getwindowpid "$active_window_id") (I don't know what the difference between focused and active is.) (I thought wmctrl could do this, but apparently not.) | {
"source": [
"https://unix.stackexchange.com/questions/38867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15487/"
]
} |
38,870 | I am new to scripting ... I can do very basic stuff, but now I need a hand. I have a local filesystem that will only be mounted when I need to do a backup. I'm starting with this. #!/bin/bash
export MOUNT=/myfilesystem
if grep -qs $MOUNT /proc/mounts; then
echo "It's mounted."
else
echo "It's not mounted."; then
mount $MOUNT;
fi As I said, I'm very basic at scripting. I heard that you can check the status of the mount command by looking at the return codes. RETURN CODES
mount has the following return codes (the bits can be ORed):
0 success
1 incorrect invocation or permissions
2 system error (out of memory, cannot fork, no more loop devices)
4 internal mount bug
8 user interrupt
16 problems writing or locking /etc/mtab
32 mount failure
64 some mount succeeded I don't know how to check that. Any guidance? | You can check the status code of mount , and of most well-written executables, with the shell special parameter ? . From man bash :
mount: no medium found on /dev/sr0
# echo $?
32 Not all executables have well defined status codes. At a minimum, it should exit with a success (0) or failure (1) code, but that's not always the case. To expand on (and correct) your example script, I added a nested if construct for clarity. It's not the only way to test the status code and perform an action, but it's the easiest to read when learning. #!/bin/bash
mount="/myfilesystem"
if grep -qs "$mount" /proc/mounts; then
echo "It's mounted."
else
echo "It's not mounted."
mount "$mount"
if [ $? -eq 0 ]; then
echo "Mount success!"
else
echo "Something went wrong with the mount..."
fi
fi For more information on "Exit and Exit Status", you can refer to the Advanced Bash-Scripting Guide . | {
"source": [
"https://unix.stackexchange.com/questions/38870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18251/"
]
} |
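As an alternative to grepping /proc/mounts, there is a dedicated utility for this test (a sketch; mountpoint ships with util-linux on most Linux systems):
if mountpoint -q "$mount"; then
    echo "It's mounted."
else
    mount "$mount"
fi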
38,910 | I installed Cygwin on my Windows XP to use some of its commands. Now I want to check a file every 2 minutes and want to use watch for this purpose, but I see that Cygwin has no such command. Is this really true? What can I do instead? | There is watch in Cygwin, but it isn't installed by default. You need to install the procps-ng package for watch to appear. (You can run the Cygwin installer again; it allows you to install only the missing packages without reinstalling the whole of Cygwin.) Instead of watch you can use a simple loop like: while true ; do check file ; sleep 2 ; done where check is your command of choice. | {
"source": [
"https://unix.stackexchange.com/questions/38910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18957/"
]
} |
38,951 | I have a script that works when I run it from the command line, but when I schedule it with cron I get errors that it cannot find files or commands. My question is twofold: When I schedule a cron job using crontab -e , does it use my user ID as the basis for its permissions? Or does it use a cron user ID of some sort and its related permissions? When a cron job is launched, what is the working directory? Is it the directory where I specify the script to run, or a different directory? Here is my cron job: 15 7 * * * /home/xxxx/Documents/Scripts/email_ip_script.sh Here is the actual script: vIP_ADDR="`curl automation.whatismyip.com/n09230945.asp`"
echo "$vIP_ADDR"
sed "s/IPADDR/$vIP_ADDR/g" template.txt > emailmsg.txt
ssmtp [email protected] < emailmsg.txt Here are the errors I get when I view the mail message produced by cron : sed: can't read template.txt: No such file or directory
/home/xxxx/Documents/Scripts/email_ip_script.sh: line 15: ssmtp: command not found It cannot find the template.txt but it resides in the same directory as the script. It also cannot run ssmtp , but I can as my user. What am I missing to get this to work properly? | Add cd /home/xxxx/Documents/Scripts/ if you want your job to run in that directory. There's no reason why cron would change to that particular directory. Cron runs your commands in your home directory. As for ssmtp , it might not be in your default PATH . Cron's default path is implementation-dependent, so check your man page, but in all likelihood ssmtp is in /usr/sbin which is not in your default PATH , only root's. PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
15 7 * * * cd /home/xxxx/Documents/Scripts && ./email_ip_script.sh | {
"source": [
"https://unix.stackexchange.com/questions/38951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19015/"
]
} |
38,955 | I get the following error when trying to ls *.txt | wc -l a directory that contains many files: -bash: /bin/ls: Argument list too long Is the threshold of this "Argument list" dependent on the distro or the computer's spec? Usually, I'd pipe such a big result to some other commands ( wc -l for example), so I'm not concerned with limits of the terminal. | Your error message argument list too long comes from the * of ls *.txt . This limit is a safety for both binary programs and your kernel. See ARG_MAX, maximum length of arguments for a new process for more information about it, and how it's used and computed. There is no such limit on pipe size. So you can simply issue this command: find -type f -name '*.txt' | wc -l NB: On modern Linux, weird characters in filenames (like newlines) will be escaped with tools like ls or find , but still displayed from * . If you are on an old Unix, you'll need this command find -type f -name '*.txt' -exec echo \; | wc -l NB2: I was wondering how one can create a file with a newline in its name. It's not that hard, once you know the trick: touch "hello
world" | {
"source": [
"https://unix.stackexchange.com/questions/38955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
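If you want to keep the glob instead of switching to find, a shell builtin sidesteps the limit entirely, because ARG_MAX only applies when an external program is executed (a sketch; it miscounts names that themselves contain newlines):
printf '%s\n' *.txt | wc -l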
38,978 | Does anyone know where file access logs are stored, so I can run a tail -f command in order to see who is accessing a particular file? I have XAMPP, an Apache distribution, installed on my machine, and it automatically logs accesses. The log is stored in my installation folder. | Ultimately, this depends on your Apache configuration. Look for CustomLog directives in your Apache configuration; see the manual for examples. A typical location for all log files is /var/log and its subdirectories. Try /var/log/apache/access.log or /var/log/apache2/access.log or /var/log/httpd/access.log . If the logs aren't there, try running locate access.log access_log . | {
"source": [
"https://unix.stackexchange.com/questions/38978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18917/"
]
} |
39,039 | I had a command which would work through a text file, count all the occurrences of the words and print it out like this: user@box $˜ magic-command-i-forgot | with grep | and awk | sort ./textfile.txt
66: the
54: and
32: I
16: unix
12: bash
5: internet
3: sh
1: GNU/Linux So it does not search line-by-line, but word by word, and it does it for all the words, not just for 1 word. I'd found it somewhere on the internets a long time ago, but I cannot find or remember it.. | I would use tr instead of awk : echo "Lorem ipsum dolor sit sit amet et cetera." | tr '[:space:]' '[\n*]' | grep -v "^\s*$" | sort | uniq -c | sort -bnr tr just replaces spaces with newlines grep -v "^\s*$" trims out empty lines sort to prepare as input for uniq uniq -c to count occurrences sort -bnr sorts in numeric reverse order while ignoring whitespace wow. it turned out to be a great command to count swear-per-lines find . -name "*.py" -exec cat {} \; | tr '[:space:]' '[\n*]' | grep -v "^\s*$" | sort | uniq -c | sort -bnr | grep fuck | {
"source": [
"https://unix.stackexchange.com/questions/39039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
39,064 | I have a very old 2.5" IDE drive inside a USB enclosure that gives some buffer I/O error. I tried to use smartctl to see what SMART says about it, but I can't manage to make it work. Being root , if I just write: #> smartctl --all /dev/sde smartctl answers: /dev/sde: Unknown USB bridge [0x14cd:0x6600 (0x201)]
Smartctl: please specify device type with the -d option. So I've tried every -d TYPE available in the help summary, and the best result is achieved with: #> smartctl --all -d scsi /dev/sde that outputs: Vendor: IC25N030
Product: ATMR04-0
User Capacity: 30,005,821,440 bytes [30,0 GB]
Logical block size: 512 bytes
scsiModePageOffset: response length too short, resp_len=4 offset=4 bd_len=0
>> Terminate command early due to bad response to IEC mode page
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options. If I also add -T permissive the last line is replaced with: Error Counter logging not supported
Device does not support Self Test logging It seems that just a few models of USB enclosures are officially supported by smartmontools . Is there something that I'm missing, or does the device simply implement an archaic version of SMART without any counters (and hence almost useless)? | There is a vendor-independent SAT (SCSI/ATA Translation) standard, but AFAIK this is not widely supported on (cheaper) bridges. There are several vendor-specific ATA pass-through commands that you can select with smartctl with the -d option:
Specify device type to one of: ata, scsi, sat[,N][+TYPE],
usbcypress[,X], usbjmicron[,x][,N], usbsunplus, marvell,
areca,N, 3ware,N, hpt,L/M/N, megaraid,N, cciss,N, auto, test where -d sat is for SAT compatible devices. The USB Device Support lists devices and their command line options, so if you get a USB controller with one of the devices listed as supported, you have a better chance of getting things to work. | {
"source": [
"https://unix.stackexchange.com/questions/39064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18904/"
]
} |
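So with a SAT-capable bridge, the invocation would look like this (a sketch reusing the device name from the question):
smartctl --all -d sat /dev/sde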
39,175 | I have a hard time understanding how the file name encoding works. On unix.SE
I find contradictory explanations. File names are stored as characters To quote another answer: Several questions about file-system character encoding on linux […] as you mention in your question, a UNIX file name is just a sequence of
characters; the kernel knows nothing about the encoding, which is entirely a
user-space (i.e., application-level) concept. If file names are stored as characters, there has to be some kind of encoding
involved, since ultimately the file name has to end up as a bit or byte sequence
on the disk. If the user can choose any encoding to map the characters to a
byte sequence that is fed to the kernel, it is possible to create any byte
sequence for a valid file name. Assume the following: A user uses a random encoding X , which translates
the file foo into the byte sequence α and saves it to disk. Another user
uses encoding Y . In this encoding α translates to / , which is not
allowed as a file name. However, for the first user the file is valid. I assume that this scenario cannot happen. File names are stored as binary blobs To quote another answer: What charset encoding is used for filenames and paths on Linux? As noted by others, there isn't really an answer to this: filenames and
paths do not have an encoding; the OS only deals with sequence of bytes.
Individual applications may choose to interpret them as being encoded in
some way, but this varies. If the system does not deal with characters, how can particular characters
(e.g. / or NULL ) be forbidden in file names? There is no notion of a / without an encoding. An explanation would be that the file system can store file names containing any characters and it's only the user programs that take an encoding into account
that would choke on file names containing invalid characters. That, in turn,
means that file systems and the kernel can, without any difficulty, handle
file names containing a / . I also assume that this is wrong. Where does the encoding take place and where is the restriction imposed of not
allowing particular characters? | Short answer: restrictions are imposed in the Unix/Linux/BSD kernel, in the namei() function. Encoding takes place in user-level programs like xterm , firefox or ls . I think you're starting from incorrect premises. A file name in Unix is a string of bytes with arbitrary values. A few values, 0x0 (ASCII Nul) and 0x2f (ASCII '/'), are just not allowed, not as part of a multi-byte character encoding, not as anything. A "byte" can contain a number representing a character (in ASCII and some other encodings) but a "character" can require more than 1 byte (for example, code points above 0x7f in the UTF-8 representation of Unicode). These restrictions arise from file name printing conventions and the ASCII character set. The original Unixes used ASCII '/' (numerically 0x2f) valued bytes to separate pieces of a partially- or fully-qualified path (like '/usr/bin/cat' has pieces "usr", "bin" and "cat"). The original Unixes used ASCII Nul to terminate strings. Other than those two values, bytes in file names may assume any other value. You can see an echo of this in the UTF-8 encoding for Unicode. Printable ASCII characters, including '/', take only one byte in UTF-8, and the UTF-8 encoding of code points above 0x7f does not include any zero-valued bytes; only the Nul control character encodes to a zero byte. UTF-8 was invented for Plan-9, The Pretender to the Throne of Unix. Older Unixes (and it looks like Linux) had a namei() function that just looks at paths a byte at a time, and breaks the paths into pieces at 0x2F valued bytes, stopping at a zero-valued byte. namei() is part of the Unix/Linux/BSD kernel, so that's where the exceptional byte values get enforced. Notice that so far, I've talked about byte values, not characters. namei() does not enforce any character semantics on the bytes. That's up to the user-level programs, like ls , which might sort file names based on byte values, or character values. xterm decides what pixels to light up for file names based on the character encoding. If you don't tell xterm you've got UTF-8 encoded filenames, you'll see a lot of gibberish when you invoke it. If vim isn't compiled to detect UTF-8 (or whatever, UTF-16, UTF-32) encodings, you'll see a lot of gibberish when you open a "text file" containing UTF-8 encoded characters.
"source": [
"https://unix.stackexchange.com/questions/39175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12779/"
]
} |
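A small demonstration that filenames really are raw bytes (a sketch; the 0xff byte is not valid UTF-8, yet the kernel accepts it without complaint):
touch "$(printf 'weird\xffname')"
ls    # a UTF-8 terminal will likely render the 0xff byte as gibberish or a replacement character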
39,210 | The Java community uses 4 spaces as the unit of indentation. [1] The Ruby community uses 2 spaces, which is generally agreed upon. [2] What's the standard for indentation in shell scripts? 2 or 4 spaces or 1 tab? | There is no standard indentation in shell scripts that matters. Slightly less flippant answer: Pick a standard in your team that you can all work to, to simplify things. Use something your editor makes easy so you don't have to fight to stick to the standard. | {
"source": [
"https://unix.stackexchange.com/questions/39210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6219/"
]
} |
39,211 | Context I do not know all standard linux/unix commands, so I need manpages. I found QNX's manuals (invoked by "use [command name]") terse; I prefer them to linux manpages. I can get them from QNX with a shell script, but I need to have QNX installed, and it seems the system does not have "use" for all commands. Question Is there an archive of QNX's use messages on the internet? | | {
"source": [
"https://unix.stackexchange.com/questions/39211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8342/"
]
} |
39,218 | Is it possible to throttle (limit) the download speed of wget or curl ? Is it possible to change the throttle value while it is downloading? | Yes, both wget and curl support limiting your download rate. Both options are directly mentioned in their man pages. curl --limit-rate <speed>
Specify the maximum transfer rate you want curl to use.
This feature is useful if you have a limited pipe and
you'd like your transfer not to use your entire bandwidth.
The given speed is measured in bytes/second, unless a suffix
is appended. Appending 'k' or 'K' will count the number
as kilobytes, 'm' or M' makes it megabytes, while 'g' or 'G'
makes it gigabytes. Examples: 200K, 3m and 1G. E.g: curl --limit-rate 423K wget --limit-rate=amount
Limit the download speed to amount bytes per second. Amount may
be expressed in bytes, kilobytes with the k suffix, or
megabytes with the m suffix. For example, --limit-rate=20k will limit
the retrieval rate to 20KB/s. This is useful when, for
whatever reason, you don't want Wget to consume
the entire available bandwidth. E.g: wget --limit-rate=423k | {
"source": [
"https://unix.stackexchange.com/questions/39218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10611/"
]
} |
39,226 | What do I need to put in the [install] section, so that systemd runs /home/me/so.pl right before shutdown and also before /proc/self/net/dev gets destroyed? [Unit]
Description=Log Traffic
[Service]
ExecStart=/home/me/so.pl
[Install]
? | The suggested solution is to run the service unit as a normal service - have a look at the [Install] section. So everything has to be thought of in reverse, dependencies too, because the shutdown order is the reverse of the startup order. That's why the script has to be placed in ExecStop= . The following solution is working for me: [Unit]
Description=...
[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=<your script/program>
[Install]
WantedBy=multi-user.target RemainAfterExit=true is needed when you don't have an ExecStart action. After creating the file, make sure to systemctl daemon-reload and systemctl enable yourservice --now . I just got it from systemd IRC, credits are going to mezcalero. | {
"source": [
"https://unix.stackexchange.com/questions/39226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17617/"
]
} |
39,261 | I want to see the version of a package before I install it. How can I do this? | Packages known by your system / offline You can use apt-cache to query the APT cache. To show the versions known by your system use apt-cache policy . Example: apt-cache policy iceweasel
iceweasel:
Installed: 10.0.4esr-3
Candidate: 10.0.4esr-3
Version table:
12.0-7 0
1 http://ftp.us.debian.org/debian/ experimental/main amd64 Packages
*** 10.0.4esr-3 0
500 http://ftp.us.debian.org/debian/ sid/main amd64 Packages
100 /var/lib/dpkg/status
10.0.4esr-2 0
500 http://ftp.us.debian.org/debian/ testing/main amd64 Packages This means iceweasel version 12.0-7 is available in experimental and has priority 1, version 10.0.4esr-3 is installed from sid and has priority 500 and 10.0.4esr-2 is in testing. For a detailed description about the meaning of priorities have a look at apt_preferences(5) You can also display a brief description and some meta information about the package with apt-cache show package-name Information about all debian packages / online If you want to get version information about all available debian packages (basically what http://packages.debian.org does) you can use rmadison(1) to remotely query the database. rmadison is in the devscripts package which you have to install via apt-get install devscripts . $ rmadison iceweasel
iceweasel | 3.0.6-3 | lenny-security | source, alpha, amd64, arm, armel, hppa, i386, ia64, mips, mipsel, powerpc, s390, sparc
iceweasel | 3.0.6-3 | lenny | source, alpha, amd64, arm, armel, hppa, i386, ia64, mips, mipsel, powerpc, s390, sparc
iceweasel | 3.5.16-11~bpo50+1 | backports/lenny | source, alpha, amd64, armel, i386, ia64, mips, mipsel, powerpc, s390, sparc
iceweasel | 3.5.16-14 | squeeze | source, amd64, armel, i386, ia64, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, s390, sparc
iceweasel | 3.5.16-15 | squeeze-p-u | source, amd64, armel, i386, ia64, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, s390, sparc
iceweasel | 3.5.16-15 | squeeze-security | source, amd64, armel, i386, ia64, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, s390, sparc
iceweasel | 10.0.4esr-2~bpo60+1 | squeeze-backports | source, amd64, i386, kfreebsd-amd64, kfreebsd-i386, s390
iceweasel | 10.0.4esr-2 | wheezy | source, amd64, armel, armhf, i386, ia64, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, s390, s390x, sparc
iceweasel | 10.0.4esr-3 | sid | source, amd64, armel, armhf, hurd-i386, i386, ia64, kfreebsd-amd64, kfreebsd-i386, mips, mipsel, powerpc, s390, s390x, sparc
iceweasel | 11.0-4 | experimental | source, armel
iceweasel | 12.0-3 | experimental | source, mips
iceweasel | 12.0-7 | experimental | source, amd64, armhf, hurd-i386, i386, ia64, kfreebsd-amd64, kfreebsd-i386, powerpc, s390, s390x, sparc The difference between apt-cache and rmadison is that apt-cache shows only the information known to your system (but can be used offline) while rmadison shows all versions of available packages
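A similar table covering just the sources your own system knows about can be printed locally (without installing devscripts) with apt-cache madison iceweasel , which mimics rmadison's output format. | {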
"source": [
"https://unix.stackexchange.com/questions/39261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18027/"
]
} |
39,273 | Bash offers the functionality to reverse search via Ctrl + R . Then one can type in part of a command and it will show a fitting entry from the history. Assume this is my history: vim foo1
vim foo2 # I want to go here
vim foo3 # this is where I land, how to go back? I search for foo . Hitting Ctrl + R again shows the next fitting search entry. Often it happens to me that I am too fast and navigate past my intended result, so vim foo3 is shown and now I want to go back to vim foo2 . Hence my question is: How do I navigate within the reverse search? | You can access this via the forward-search-history function, which is bound by default to ctrl+s . Unfortunately ctrl+s is used to signal XOFF by default, which means you can't use it to change the direction of the search. There are two ways of solving the problem: one is disabling the XON/XOFF flow control, the other is changing the keybinding for forward-search-history Disable xon/xoff Run stty -ixon in your terminal or add it to your ~/.bashrc . This allows you to use ctrl+s for the forward-search-history function. For more information about flow control have a look at How to unfreeze after accidentally pressing Ctrl-S in a terminal? and some of the answers Change the keybinding If you don't want to change the default behavior of ctrl+s you can change the keybinding for forward-search-history with bind . As most keys are already defined in bash you may have to get creative: bind '"\C-t": forward-search-history' This will bind ctrl+t to forward-search-history, but please be aware that by default ctrl+t runs transpose-chars | {
"source": [
"https://unix.stackexchange.com/questions/39273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
39,291 | Let's say I have the following alias in bash - alias ls='ls --color=auto' - and I want to call ordinary ls without options. Is the only way to do that to unalias, run the command and then alias again? Or is there some nifty trick or workaround? | You can also prefix a backslash to disable the alias: \ls Edit: Other ways of doing the same include: Use "command": command ls as per Mikel . Use the full path: /bin/ls as per uther . Quote the command: "ls" or 'ls' as per Mikel's comment. You can remove the alias temporarily for that terminal session with unalias command_name . | {
"source": [
"https://unix.stackexchange.com/questions/39291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
39,314 | While logged in as root I would like to su to a specific regular user. I run su username and immediately receive the prompt back, still as root. There is no error given. I'm aware of the old "the user you're trying to su to doesn't have permission for the folder you're currently in" problem, and that's not the case in this scenario. Furthermore, there is no error displayed, which is always the case (as far as I know) when that particular permissions issue is encountered. I've tried su - username with the same effect. The command is processed, no errors are seen, and I receive the prompt back immediately. What could be causing this behavior? How can I troubleshoot this? | Check what shell the user has in /etc/passwd . If the shell is /bin/false (a common shell to disallow logins), then you will see the behavior you describe. Alternatively, it may be some other immediately-terminating program that gives the same effective result. | {
"source": [
"https://unix.stackexchange.com/questions/39314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
39,333 | With the command: ls -la * I can list all my symbolic links. How can I remove all symbolic links which are linked to a special folder? For example: In my directory usr/local/bin I have the following entries: lrwxrwxrwx 1 root root 50 Apr 22 14:52 allneeded -> /usr/local/texlive/2011/bin/x86_64-linux/allneeded
lrwxrwxrwx 1 root root 47 Apr 22 14:52 amstex -> /usr/local/texlive/2011/bin/x86_64-linux/amstex
lrwxrwxrwx 1 root root 24 Apr 23 19:09 arara -> /home/marco/.arara/arara Now I want to remove all links with the path /usr/local/texlive/ | Please make sure to read the alternative answer . It's even more to the point although not voted as high at this point. You can use this to delete all symbolic links: find -type l -delete with modern find versions. On older find versions it may have to be: find -type l -exec rm {} \;
# or
find -type l -exec unlink {} \; To limit to a certain link target, assuming none of the paths contain any newline character: find -type l | while IFS= read -r lnkname; do if [ "$(readlink "$lnkname")" = "/your/exact/path" ]; then rm -- "$lnkname"; fi; done or nicely formatted find -type l |
while IFS= read -r lnkname;
do
if [ "$(readlink '$lnkname')" = "/your/exact/path" ];
then
rm -- "$lnkname"
fi
done The if could of course also include a more complex condition such as matching a pattern with grep . Tailored to your case: find -type l | while IFS= read -r lnk; do if (readlink "$lnk" | grep -q '^/usr/local/texlive/'); then rm "$lnk"; fi; done or nicely formatted: find -type l | while IFS= read -r lnk
do
if readlink "$lnk" | grep -q '^/usr/local/texlive/'
then
rm "$lnk"
fi
done | {
"source": [
"https://unix.stackexchange.com/questions/39333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10943/"
]
} |
39,370 | How should one reload udev rules, so that newly created ones can function? I'm running Arch Linux, and I don't have a udevstart command here. Also checked /etc/rc.d , no udev service there. | # udevadm control --reload-rules && udevadm trigger | {
"source": [
"https://unix.stackexchange.com/questions/39370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
39,384 | I want to update my Linux in one shell, but by default wget or axel in the updater use all the bandwidth. How can I limit the speed in this shell? I want other shells to have a fair share, and to limit everything in that shell – something like a proxy! I use Zsh and Arch Linux. This question focuses on process-wide or session-wide solutions. See How to limit network bandwidth? for system-wide or container-wide solutions on Linux. | Have a look at trickle , a userspace bandwidth shaper. Just start your shell with trickle and specify the speed, e.g.: trickle -d 100 zsh which tries to limit the download speed to 100KB/s for all programs launched inside this shell. As trickle uses LD_PRELOAD this won't work with statically linked programs, but this isn't a problem for most programs. | {
"source": [
"https://unix.stackexchange.com/questions/39384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5280/"
]
} |
39,458 | I was following a tutorial and it told me to run sudo chmod +a "SOME_PARAMS" some/dir I was surprised to see that fail telling me chmod: invalid mode: `+a' So I wonder: What does the +a mode mean? How would I translate it into something Ubuntu understands? And I also like to know why it isn't universally supported. | I have never seen +a , only something like chmod a+r which means "add read permissions to all users" (owner/user, group, others). From man 1 chmod : The format of a symbolic mode is [ugoa...][[+-=][perms...]...], where perms is either zero or more letters from the set rwxXst, or a single letter from the set ugo. Multiple symbolic modes can be given, separated by commas. A combination of the letters ugoa controls which users' access to the file will be changed: the user who owns it (u), other users in the file's group (g), other users not in the file's group (o), or all users (a). If none of these are given, the effect is as if a were given, but bits that are set in the umask are not affected. Right, as you said in a comment, it's Mac OS X specific. From http://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/chmod.1.html : The ACL manipulation options are as follows: +a The +a mode parses a new ACL entry from the next argument on the commandline and inserts it into the canonical location in the ACL. If the supplied entry refers to an identity already listed, the two entries are combined. | {
"source": [
"https://unix.stackexchange.com/questions/39458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
39,464 | For scripting I need to get the page dimensions of a PDF file (in mm). pdfinfo just prints it in 'pts', e.g.: Page size: 624 x 312 pts What should I use? Or what unit is 'pts' anyway - in case I want to convert them ... | The 'pts' unit used by pdfinfo denotes a PostScript point. A PostScript point is defined in terms of an inch and a resolution of 72 dots per inch: In the late 1980s to the 1990s, the traditional point was supplanted by the desktop publishing point (also called the PostScript point), which was defined as 72 points to the inch ( 1 point = 1/72 inch = 25.4/72 mm = 0.3527... mm, i.e. approximately 0.3528 mm ). The manual to gv contains a list of common paper formats specified in PostScript points.
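To convert to millimetres, multiply by 25.4/72; for the example above, 624 pts is about 220.1 mm and 312 pts is about 110.1 mm. In a script this can be done with, e.g., awk 'BEGIN{printf "%.1f\n", 624*25.4/72}' . | {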
"source": [
"https://unix.stackexchange.com/questions/39464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
39,466 | I've set up vsftpd on Amazon EC2 with the Amazon Linux AMI. I created a user and can now successfully connect via ftp. However, if I try to upload something I get the error message 553 Could not create file. I assume this has to do with permissions, but I don't know enough about it to be able to fix it. So basically, what do I have to do to be able to upload files? | There are two likely reasons this could happen: you do not have write and execute permissions on the directories leading to the directory you are trying to upload to, or vsftpd is configured not to allow you to upload. In the former case, use chmod and chown as appropriate to make sure that your user has these permissions on every intermediate directory. The write bit allows the affected user to create, rename, or delete files within the directory, and modify the directory's attributes, whilst the read bit allows the affected user to list the files within the directory. Since intermediate directories in the path also affect this, the permissions must be set appropriately leading up to the ultimate destination that you intend to upload to. In the latter case, look at your vsftpd.conf . write_enable must be true to allow writing (and it is false by default). There is good documentation on this configuration file at man 5 vsftpd.conf .
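For example, the relevant line in vsftpd.conf (its location varies by distribution; /etc/vsftpd.conf and /etc/vsftpd/vsftpd.conf are common) is: write_enable=YES after which vsftpd must be restarted for the change to take effect. | {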
"source": [
"https://unix.stackexchange.com/questions/39466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
39,501 | What is the difference between ifconfig and ipconfig? What do dhcpcd and ifconfig actually do? | ipconfig ( i nternet p rotocol config uration) in Microsoft Windows is a console application that displays all current TCP/IP network configuration values and can modify Dynamic Host Configuration Protocol DHCP and Domain Name System DNS settings. ifconfig (short for i nter f ace config uration) is a system administration utility in Unix-like operating systems to configure, control, and query TCP/IP network interface parameters from a command line interface (CLI) or in system configuration scripts. dhcpcd is a DHCP client. It is used to obtain an IP address and other information from a dhcp server, renew the IP address lease time, and automatically configure the network interface. The program performs a similar function as dhclient. | {
"source": [
"https://unix.stackexchange.com/questions/39501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19290/"
]
} |
39,572 | Is it possible to allow non-root users to install packages system-wide using apt or rpm? The place where I work currently has an out of date setup on the linux boxes, and admins are sick of having to do all the installations for users on request, so they are thinking of giving full sudo rights to all users. This has obvious security disadvantages. So I'm wondering if there's a way to allow normal users to install software - and to upgrade and remove it? | You can specify the allowed commands with sudo; you don't have to allow unlimited access, e.g. username ALL = NOPASSWD: /usr/bin/apt-get, /usr/bin/aptitude This would allow username to run sudo apt-get and sudo aptitude without any password but would not allow any other commands. You can also use packagekit combined with PolicyKit for a finer level of control than sudo. Allowing users to install/remove packages can be a risk. They can pretty easily render a system nonfunctional just by uninstalling necessary software like libc6, dpkg, rpm etc. Installing arbitrary software from the defined archives may allow attackers to install outdated or exploitable software and gain root access. The main question in my opinion is how much do you trust your employees? Of course your admin team could also start using a configuration management system like puppet, chef or look into spacewalk to manage your system. This would allow them to configure and manage the system from a central system. | {
"source": [
"https://unix.stackexchange.com/questions/39572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17416/"
]
} |
39,623 | I'm trying to create some error reporting using a Trap to call a function on all errors: Trap "_func" ERR Is it possible to get what line the ERR signal was sent from? The shell is bash. If I do that, I can read and report what command was used and log/perform some actions. Or maybe I'm going at this all wrong? I tested with the following: #!/bin/bash
trap "ECHO $LINENO" ERR
echo hello | grep "asdf" And $LINENO is returning 2. Not working. | As pointed out in comments, your quoting is wrong. You need single quotes to prevent $LINENO from being expanded when the trap line is first parsed. This works: #! /bin/bash
err_report() {
echo "Error on line $1"
}
trap 'err_report $LINENO' ERR
echo hello | grep foo # This is line number 9 Running it: $ ./test.sh
Error on line 9 | {
"source": [
"https://unix.stackexchange.com/questions/39623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9786/"
]
} |
39,710 | I need to set the same permissions with chmod; how do I get the numeric mode for -rw-r--r-- ? | Please check stat output: # stat .xsession-errors
File: ‘.xsession-errors’
Size: 839123 Blocks: 1648 IO Block: 4096 regular file
Device: 816h/2070d Inode: 3539028 Links: 1
Access: (0600/-rw-------) Uid: ( 1000/ lik) Gid: ( 1000/ lik)
Access: 2012-05-30 23:11:48.053999289 +0300
Modify: 2012-05-31 07:53:26.912690288 +0300
Change: 2012-05-31 07:53:26.912690288 +0300
Birth: - The octal mode is shown in the Access: line above; here 0600 corresponds to -rw-------.
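With GNU coreutils you can also print just the number: stat -c '%a' somefile outputs e.g. 644 for a file with -rw-r--r-- permissions. | {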
"source": [
"https://unix.stackexchange.com/questions/39710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3373/"
]
} |
39,718 | Is there a reason to use scp instead of rsync ? I can see no reason for using scp ever again, rsync does everything that scp does, with more safety (can preserve symlinks etc). | scp provides a cp like method to copy files from one machine to a remote machine over a secure SSH connection. rsync allows you to syncronise remote folders. They are different programs and both have their uses. scp is always secure, whereas rsync must travel over SSH to be secure. | {
"source": [
"https://unix.stackexchange.com/questions/39718",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19380/"
]
} |
39,722 | I want to backup 1 terabyte of data to an external disk. I am using this command: tar cf /media/MYDISK/backup.tar mydata PROBLEM: My poor laptop freezes and crashes whenever I use 100% CPU or 100% disk (if you want to react about this please write here ) .
So I want to stay at around 50% CPU and 50% disk max. My question: How to throttle CPU and disk with the tar command? Rsync has a --bwlimit option, but I want an archive because 1) there are many small files 2) I prefer to manage a single file rather than a tree. That's why I use tar . | You can use pv to throttle the bandwidth of a pipe. Since your use case is strongly IO-bound, the added CPU overhead of going through a pipe shouldn't be noticeable, and you don't need to do any CPU throttling. tar cf - mydata | pv -L 1m >/media/MYDISK/backup.tar | {
"source": [
"https://unix.stackexchange.com/questions/39722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
39,729 | I normally watch many logs in a directory doing tail -f directory/* .
The problem is that a new log created after that will not show on the screen (because * was expanded already). Is there a way to monitor every file in a directory, even those that are created after the process has started? | You can tail multiple files with… multitail . multitail -Q 1 'directory/*' -Q 1 PATTERN means to check for new content in existing or new files matching PATTERN every 1 second. Lines from all files are shown in the same window; use -q instead of -Q to have separate windows. | {
"source": [
"https://unix.stackexchange.com/questions/39729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19386/"
]
} |
39,761 | Possible Duplicate: How to apply recursively chmod directories without affecting files? What is the command to apply execute permission for directories (for traversal), but leave the execute bit off for files contained in the directory? | If you don't want to remove the executable bit from existing files you can use the X mode. To recursively set the executable bit on all directories use: chmod -R a+X dir From man chmod: execute/search only if the file is a directory or already has
execute permission for some user (X) | {
"source": [
"https://unix.stackexchange.com/questions/39761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6442/"
]
} |
39,800 | I am looking for a way to replace a string in a file with a string that contains a slash by using sed. connect="192.168.100.61/foo"
srcText="foo.bar=XPLACEHOLDERX"
echo $srcText | sed "s/XPLACEHOLDERX/$connect" The result is: sed: -e Expression #1, Character 32: Unknown option for `s' | Use another character as delimiter in the s command: printf '%s\n' "$srcText" | sed "s|XPLACEHOLDERX|$connect|" Or escape the slashes with ksh93's ${var//pattern/replacement} parameter expansion operator (now also supported by zsh , bash , mksh , yash and recent versions of busybox sh ). printf '%s\n' "$srcText" | sed "s/XPLACEHOLDERX/${connect//\//\\/}/" | {
"source": [
"https://unix.stackexchange.com/questions/39800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11341/"
]
} |
39,846 | I use Ubuntu. Sometimes, the system does not have any response with mouse and keyboard. Is there any way to solve this problem except hitting the reset button on the machine? | If you want a way to reboot, without saving open documents, but without hitting the reset button, then there are ways that are less likely to cause data loss. First, try Ctrl + Alt + F1 . That should bring you to a virtual console , as ixtmixilix said. Once you're in a virtual console, Ctrl + Alt + Delete will shut down and reboot the machine. If that technique doesn't work, there's always Alt + SysRq + R E I S U B . As for fixing the problem without rebooting, without more information about what is going on, it would be difficult to give a good answer. If you could describe the circumstances under which this occurs (the best way to do that is to edit your question to add the information), then that may help people to give good answers. The other thing to consider is that, if your computer is becoming unresponsive--especially if it takes more than a a few seconds for Ctrl + Alt + F1 to bring up a virtual console--then you almost certainly have a bug, and by reporting it you can both help the community and maybe get an answer. GUI Glitches Causing Unresponsive WM or X11/Wayland This might be happening due to an interaction between an application and a window manager --or the X11 server or Wayland. A sign that this is the nature of the problem is if an application stops responding and prevents you from entering input with the keyboard or mouse to other application windows. (No application should be able to do this; some GUI component must have a bug in it for this to occur.) If that's what's happening, then you can kill the offending process in a virtual console (as ixtmixilix alluded to): Press Ctrl + Alt + F1 . Log in. You won't see anything as you enter your password. That's normal. Use a utility like ps to figure out the offending program's process name. Sometimes this is easy in Ubuntu, and other times it isn't. For example, the name of an Archive Manager process is file-roller . If you have trouble figuring it out, you can usually find the information online without too much trouble (or if you can't, you can post a question about it). You can pipe ps 's output to grep to narrow things down. Suppose it was Archive Manager that was causing the problem. Then you could run: ps x | grep file-roller You'll see an entry for your own grep command, plus an entry for file-roller . Attempt to kill the offending process with SIGTERM . This gives it the chance to do last-minute cleanup like flushing file buffers, signaling to remote servers that it is about to disconnect (for protocols that do that), and releasing other sorts of resources. To do this, use the kill command: kill PID where PID is the process ID number of the process you want to kill, obtained from running ps in step 3. SIGTERM is a way to firmly ask a process to quit. The process can ignore that signal, and will do so when malfunctioning under certain circumstances. So you should check to see that it worked. If it didn't, kill it with SIGKILL , which it cannot ignore, and which always works except in the rare case where the process is in uninterruptible sleep (or if it is not really running, but is rather a zombie process ). 
You can both check to see if the process is still running, and kill it with SIGKILL if it is, with just one command: kill -KILL PID If you get a message like kill: ( PID ) - No such process , you know killing it with SIGTERM worked. If you get no output, you know SIGTERM didn't work. In that case, SIGKILL probably did, but it's worth checking by running it again. (Press the up arrow key to bring up previous commands, for ease of typing.) In rare instances for your own processes, or always with processes belonging to root or another user besides yourself, you must kill the process as root . To do that, prepend sudo (including the trailing space) before the above kill commands. If the above commands don't work or you're told you don't have the necessary access to kill the process, try it as root with sudo . (By the way, kill -KILL is the same as the widely popular kill -9 . I recommend kill -KILL because SIGKILL is not guaranteed to have 9 as its signal number on all platforms. It works on x86, but that doesn't mean it will necessarily work everywhere. In this way, kill -KILL is more likely to successfully end the process than kill -9 . But they're equivalent on x86, so feel free to use it there if you like.) If you know there are no other processes with the same name as the one you want to kill, you can use killall instead of kill and the name of the process instead of the process ID number. A Process Monopolizing CPU Resources If a process runs at or very near the highest possible priority (or to state it more properly, at or near the lowest possible niceness ), it could potentially render your graphical user interface completely, or near-completely, unresponsive. However, in this situation, you would likely not be able to switch to a virtual console and run commands (or maybe even reboot). If a process or a combination of processes running at normal or moderately elevated priority are slowing your machine down, you should be able to kill them using the technique in the section above. But if they are graphical programs, you can likely also kill them by clicking the close button on their windows--the desktop environment will give you the option to kill them if they are not responding. If this doesn't work, of course you can (almost) always kill them with kill -KILL . I/O Problems Buggy I/O can cause prolonged (even perpetual) unresponsiveness. This can be due to a kernel bug and/or buggy drivers. A partial workaround is to avoid heavy and simultaneous read and/or write operations (for example, don't copy two big files at once, in two simultaneous copy processes; don't copy a big file while watching an HD video or installing an OS in a virtual machine). This is obviously unsatisfactory and the real solution is to find the problem and report it. Unless you're running a mainline kernel from kernel.org , kernel bugs should be reported against the package linux in Ubuntu (since Ubuntu gives special kernel builds that integrate distro-specific patches, and bug reports not confirmed against a mainline kernel will be rejected at kernel.org ). You should do this by running ubuntu-bug linux (or apport-cli linux ) on the affected machine. See the Ubuntu bug reporting documentation first; it explains how to do this properly. Graphics Card Problems Some GUI lockups can be caused by graphics card problems. 
There are a few things you can try, to alleviate this: Search the web to see if other people have experienced similar problems with the same video card (and/or make and model of machine) on Ubuntu or other GNU/Linux distributions. There may be solutions more specific than what I can offer in this answer, without more specific information than is currently in your question. See if different video drivers are available for you to try. You can do this by checking in Additional Drivers; you can also search the web to see what Linux drivers are available for your video card. Most proprietary video cards are Intel, AMD/ATi , or Nvidia (click those links to see the community documentation on installing and using proprietary drivers for these cards in Ubuntu). For Intel, you're best off sticking with the FOSS drivers that are present in Ubuntu, but there's still helpful information you can use. Regardless of what card you have, this general information may help. If you're currently using proprietary drivers, you can try using different proprietary drivers (for example, directly from NVidia or AMD/ATi), or you can try using the free open source drivers instead. Try selecting a graphical login session type that doesn't require/use graphics acceleration. To do this, log out, and on the graphical login screen click the Ubuntu logo or gear icon near your login name. A drop-down menu is shown. Change the selection from Ubuntu to Ubuntu 2D . This makes you use Unity 2D instead of Unity . (If you're using GNOME Shell , you can select GNOME Fallback / GNOME Classic instead.) If in doubt and there's a selection that says "no effects," pick that, as that's probably the safest. This question has some more information about different graphical interfaces you can choose between in Ubuntu. In newer versions of Ubuntu, you can choose between X.org and Wayland on the login screen. Whichever you've been using, try the other. Sometimes a problem with Wayland can be fixed by using X.org, or vice versa. Report a bug. Hopefully the information above has conveyed some general information about what could be causing this kind of problem. It should also serve to illuminate what kind of information might be useful for you to add to your question (depending on the specific details of the problem), to make it possible to get an even better answer. (Or to improve this answer with additional information specific to your situation.) | {
"source": [
"https://unix.stackexchange.com/questions/39846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9502/"
]
} |
39,881 | I want to change my shell from bash to zsh. I have tried running the following while logged in as user zol: $ chsh -s /bin/zsh
$ sudo chsh -s /bin/zsh zol
$ su -c 'chsh -s /bin/zsh zol'
# The above all results with:
$ password:
$ chsh: Shell not changed.
# zsh exists in /etc/shells..
chsh -l
/bin/sh
/bin/bash
/sbin/nologin
/bin/zsh What could be wrong? How can I fix it? | User account modifications will not be saved if you have opened /etc/passwd (vim /etc/passwd) when you try to change the info. Alternative: try with usermod (as zol): $ usermod -s /bin/zsh or $ sudo usermod -s /bin/zsh zol If this doesn't work either, edit /etc/passwd by hand. sudo vipw
# set zol's shell to /bin/zsh
:wq | {
"source": [
"https://unix.stackexchange.com/questions/39881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/647/"
]
} |
39,887 | Is there a way to add and remove packages at the same time with a single yum command? For example, installing postfix and removing sendmail without running two separate commands/transactions. | Yes. Invoking yum shell will allow you to specify multiple commands that are all applied in a single transaction when run is entered.
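For the example in the question, a session looks roughly like this: yum shell
> install postfix
> remove sendmail
> run
> exit
Both operations are resolved and committed in a single transaction. | {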
"source": [
"https://unix.stackexchange.com/questions/39887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2538/"
]
} |
39,905 | I want to list and remove the content of a directory on a removable hard drive. But I have experienced "Input/output error": $ rm pic -R
rm: cannot remove `pic/60.jpg': Input/output error
rm: cannot remove `pic/006.jpg': Input/output error
rm: cannot remove `pic/008.jpg': Input/output error
rm: cannot remove `pic/011.jpg': Input/output error
$ ls -la pic
ls: cannot access pic/60.jpg: Input/output error
-????????? ? ? ? ? ? 006.jpg
-????????? ? ? ? ? ? 006.jpg
-????????? ? ? ? ? ? 011.jpg I was wondering what the problem is? How can I recover or remove the directory pic and all of its content? My OS is Ubuntu 12.04, and the removable hard drive has ntfs filesystem. Other directories not containing or inside pic on the removable hard drive are working fine. Added: Last part of output of dmesg after I tried to list the content of the directory: [19000.712070] usb 1-1: new high-speed USB device number 2 using ehci_hcd
[19000.853167] usb-storage 1-1:1.0: Quirks match for vid 05e3 pid 0702: 520
[19000.853195] scsi5 : usb-storage 1-1:1.0
[19001.856687] scsi 5:0:0:0: Direct-Access ST316002 1A 0811 PQ: 0 ANSI: 0
[19001.858821] sd 5:0:0:0: Attached scsi generic sg2 type 0
[19001.861733] sd 5:0:0:0: [sdb] 312581808 512-byte logical blocks: (160 GB/149 GiB)
[19001.862969] sd 5:0:0:0: [sdb] Test WP failed, assume Write Enabled
[19001.865223] sd 5:0:0:0: [sdb] Cache data unavailable
[19001.865232] sd 5:0:0:0: [sdb] Assuming drive cache: write through
[19001.867597] sd 5:0:0:0: [sdb] Test WP failed, assume Write Enabled
[19001.869214] sd 5:0:0:0: [sdb] Cache data unavailable
[19001.869218] sd 5:0:0:0: [sdb] Assuming drive cache: write through
[19001.891946] sdb: sdb1
[19001.894713] sd 5:0:0:0: [sdb] Test WP failed, assume Write Enabled
[19001.895950] sd 5:0:0:0: [sdb] Cache data unavailable
[19001.895953] sd 5:0:0:0: [sdb] Assuming drive cache: write through
[19001.895958] sd 5:0:0:0: [sdb] Attached SCSI disk
[19113.024123] usb 2-1: new high-speed USB device number 3 using ehci_hcd
[19113.218157] scsi6 : usb-storage 2-1:1.0
[19114.232249] scsi 6:0:0:0: Direct-Access USB 2.0 Storage Device 0100 PQ: 0 ANSI: 0 CCS
[19114.233992] sd 6:0:0:0: Attached scsi generic sg3 type 0
[19114.242547] sd 6:0:0:0: [sdc] 312581808 512-byte logical blocks: (160 GB/149 GiB)
[19114.243144] sd 6:0:0:0: [sdc] Write Protect is off
[19114.243154] sd 6:0:0:0: [sdc] Mode Sense: 08 00 00 00
[19114.243770] sd 6:0:0:0: [sdc] No Caching mode page present
[19114.243778] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[19114.252797] sd 6:0:0:0: [sdc] No Caching mode page present
[19114.252807] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[19114.280407] sdc: sdc1 < sdc5 >
[19114.289774] sd 6:0:0:0: [sdc] No Caching mode page present
[19114.289779] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[19114.289783] sd 6:0:0:0: [sdc] Attached SCSI disk | Input/Output errors during filesystem access attempts generally mean hardware issues. Type dmesg and check the last few lines of output. If the disc or the connection to it is failing, it'll be noted there. EDIT Are you mounting it via ntfs or ntfs-3g ? As I recall, the legacy ntfs driver had no stable write support and was largely abandoned when it turned out ntfs-3g was significantly more stable and secure. | {
"source": [
"https://unix.stackexchange.com/questions/39905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
39,934 | Is it possible to compress a very large file (~30 GB) using gzip? If so, what commands, switches, and options should I use? Or is there another program (preferably one commonly available on Ubuntu distributions) that I can use to compress/zip very large files? Do you have any experience with this? | AFAIK there is no size limit for gzip - at least not 30GB. Of course, you need the space for the zipped file on your disc, both versions will be there simultaneously while compressing. bzip2 compresses files (not only big ones :-) better, but it is (sometimes a lot) slower.
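In the simplest case gzip bigfile is enough (it produces bigfile.gz and removes the original); gzip -9 bigfile trades speed for a somewhat better ratio, and on multi-core machines pigz bigfile (a parallel gzip implementation packaged for Ubuntu) can speed things up considerably. | {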
"source": [
"https://unix.stackexchange.com/questions/39934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
39,982 | Is there any way to set +x bit on script while creating? For example I run: vim -some_option_to_make_file_executable script.sh and after saving I can run file without any additional movings. ps. I can run chmod from vim or even from console itself, but this is a little annoying, cause vim suggests to reload file. Also it's annoying to type chmod command every time.
pps. It would be great to make it depending on file extension (I don't need executable .txt :-) ) | I don't recall where I found this, but I use the following in my ~/.vimrc " Set scripts to be executable from the shell
au BufWritePost * if getline(1) =~ "^#!" | if getline(1) =~ "/bin/" | silent !chmod +x <afile> | endif | endif The command automatically sets the executable bit if the first line starts with "#!" and also contains "/bin/". | {
"source": [
"https://unix.stackexchange.com/questions/39982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13003/"
]
} |
40,087 | I have an Angstrom Linux device acting as an access point, running hostapd , dhcpd , which works fine. Can I get a list of devices connected to the Wi-Fi? I know I can get the DHCP leases, but I need to know which devices connect through wlan0 . I've tried this ( iwlist has options): iwlist wlan0 ap
iwlist wlan0 accesspoints
iwlist wlan0 peers but all return: wlan0 Interface doesn't have a list of Peers/Access-Points iwconfig , iwgetid , iwpriv and iwspy are also present in /sbin , but don't seem to have options to display the client list. | You should use iw dev wlan0 station dump as root | {
"source": [
"https://unix.stackexchange.com/questions/40087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8702/"
]
} |
40,179 | I have installed some RPM packages on my Fedora 17. Some packages had a lot of dependencies.
I have removed some packages but I forgot to remove unused dependencies with yum remove. How can I do that now? | It's not easy. How do you distinguish "a file that was required by something I have since removed" from "a file that is not required by anything else that I really want"? You can use the package-cleanup command from the yum-utils package to list "leaf nodes" in your package dependency graph. These are packages that can be removed without affecting anything else: $ package-cleanup --leaves
"source": [
"https://unix.stackexchange.com/questions/40179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
40,242 | I use screen for my command-line tasks while managing the servers where I work. I usually run small commands (mostly file-system tasks) but sometimes I run more extensive tasks (like DBA). The output of those tasks is important to me. Since I use Ubuntu and OS X (both Terminal Windows) for my tasks, yet I need to use screen, the scrolling is not available, so any long output (think a 500-row table from a select) is invisible for me. Mouse-wheel is out of the question. When I say "scroll is invisible for me", I mean this: I was thinking about two options: Pause (think paginate ) the output of a certain command. When the output begins, it would let me read what's happening, then I press "Enter", then the output continues until there's nothing more to show. Scroll inside screen. But I don't know if this is possible. Of course, I don't know if those options are actually possible . If they are, how can I achieve them? Other alternatives will be well received. | Screen has its own scroll buffer, as it is a terminal multiplexer and has to deal with several buffers. Maybe there's a better way, but I'm used to scrolling using the "copy mode" (which you can use to copy text using screen itself, although that requires the paste command too): Hit your screen prefix combination ( C-a / control + A by default), then hit Escape .
"source": [
"https://unix.stackexchange.com/questions/40242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5317/"
]
} |
40,342 | Possible Duplicate: Can I identify my RAM without shutting down linux? I'd like to know the type, size, and model. But I'd like to avoid having to shut down and open the machine. | Check out this How do I detect the RAM memory chip specification from within a Linux machine question. This tool might help: http://www.cyberciti.biz/faq/check-ram-speed-linux/ $ sudo dmidecode --type 17 | more Sample output: # dmidecode 2.9
SMBIOS 2.4 present.
Handle 0x0018, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x0017
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 2048 MB
Form Factor: DIMM
Set: None
Locator: J6H1
Bank Locator: CHAN A DIMM 0
Type: DDR2
Type Detail: Synchronous
Speed: 800 MHz (1.2 ns)
Manufacturer: 0x2CFFFFFFFFFFFFFF
Serial Number: 0x00000000
Asset Tag: Unknown
Part Number: 0x5A494F4E203830302D3247422D413131382D
Handle 0x001A, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x0017
Error Information Handle: Not Provided
Total Width: Unknown
Data Width: Unknown
Size: No Module Installed
Form Factor: DIMM
Set: None
Locator: J6H2
Bank Locator: CHAN A DIMM 1
Type: DDR2
Type Detail: None
Speed: Unknown
Manufacturer: NO DIMM
Serial Number: NO DIMM
Asset Tag: NO DIMM
Part Number: NO DIMM Alternatively, both newegg.com and crucial.com among other sites have memory upgrade advisors/scanners that I've used regularly under Windows. Some of them were web-based at some point, so you could try that, or if you could possibly boot into Windows (even if temporarily) it might help. Not sure what the results would be under a Windows VM, and unfortunately I am currently running Linux in a VM under Windows 7, so can't reliably test for this myself. I do realize that this doesn't necessarily give you exactly what you asked for... but perhaps it will be of use nonetheless. | {
"source": [
"https://unix.stackexchange.com/questions/40342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18276/"
]
} |
40,350 | I'm looking for one that is frequently updated and full-featured. | There are many text web browsers, as there are many graphical web browsers, so it really depends on what you're looking for. lynx is a common slim choice, Elinks has many features. Both of these support other protocols, such as ftp and gopher ( Elinks even supports bittorrent ). Elinks may also be built with support for JavaScript, using Mozilla's former JavaScript implementation, Spidermonkey. There is also w3m , which can also be used through Emacs . If you want to try one at random, Wikipedia has a list of text based browsers . How to install these has more to do with how your distribution manages packages than with the browsers themselves. Some of them are probably in the package repositories for your distribution. | {
"source": [
"https://unix.stackexchange.com/questions/40350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18276/"
]
} |
40,351 | A drive is beginning to fail and I only know the device by its /dev/sdb device file designation. What are the ways that I can use to correlate that device file to an actual hardware device to know which drive to physically replace? Bonus: What if I don't have /dev/disk/ and its sub directories on this installation? (Which, sadly, I don't) | You can look in /sys/block : -bash-3.2$ ls -ld /sys/block/sd*/device
lrwxrwxrwx 1 root root 0 Jun 8 21:09 /sys/block/sda/device -> ../../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdb/device -> ../../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdc/device -> ../../devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0
lrwxrwxrwx 1 root root 0 Jun 8 21:10 /sys/block/sdd/device -> ../../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0 Or if you don't have /sys , you can look at /proc/scsi/scsi : -bash-3.2$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST31000340AS Rev: SD1A
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: PepperC Model: Virtual Disc 1 Rev: 0.01
Type: CD-ROM ANSI SCSI revision: 03
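If the vendor/model strings are not enough to identify the physical drive, smartmontools can usually print the serial number as well: smartctl -i /dev/sdb (run as root) shows the device model and serial, which you can match against the label on the drive itself. | {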
"source": [
"https://unix.stackexchange.com/questions/40351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
40,442 | Disk space on my root partition is running low, so I want to delete some applications from the system. How can I see which software packages use the most disk space? Is it possible to view that from aptitude ? I know about generic disk space analyzers like df or baobab , but I need solutions for installed applications. | The easiest way (without installing extra packages) is: dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n which displays packages in estimated size order, in kilobytes, largest package last. Unfortunately on at least some systems, this list includes packages that have been removed but not purged. All such packages can be purged by running: dpkg --list |grep "^rc" | cut -d " " -f 3 | xargs sudo dpkg --purge Or if you don't want to purge uninstalled packages you can use this variant to filter out the packages which aren't in the 'installed' state from the list: dpkg-query -Wf '${db:Status-Status} ${Installed-Size}\t${Package}\n' | sed -ne 's/^installed //p'|sort -n | {
"source": [
"https://unix.stackexchange.com/questions/40442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
40,458 | Sometimes Firefox doesn't release the mouse after dragging, so I need to kill the application to force it to release its pointer grab. Is there any command to force an application to ungrab the pointer without killing it? | On modern-ish X.org installations, there is an XF86Ungrab keysym, which causes the server to release all active pointer or keyboard grabs. You can make the server break all grabs by enabling break action XKB option, then generating the keysym either with a command or with the keyboard. With xdotool : setxkbmap -option grab:break_actions
xdotool key XF86Ungrab On some systems, the XF86Ungrab keysym is bound to the key combination Ctrl + Alt + Keypad / . However this possibility is often turned off (because it could allow bypassing a screensaver). | {
"source": [
"https://unix.stackexchange.com/questions/40458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11920/"
]
} |
40,480 | I need to upload a 400mb file to my web server, but I'm limited to 200mb uploads. My host suggested I use a spanned archive, which I've never done on Linux. I created a test in its own folder, zipping up a PDF into test.zip.001 , .002 , and .003 . How do I go about unzipping it? Do I need to join them first? Please note that I'd be just as happy using 7z as I am using ZIP formats. If this makes any difference to the outcome. | You will need to join them first. You may use the common Linux utility cat as in the example below: cat test.zip* > ~/test.zip This will concatenate all of your test.zip.001 , test.zip.002 , etc files into one larger, test.zip file. Once you have that single file, you may run unzip test.zip . The article "How to create, split, join and extract zip archives in Linux" may help.
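Since the .001/.002/.003 naming is what 7z uses for volumes, 7z can handle such a set directly: a spanned archive like this can be created with e.g. 7z a -v190m test.zip file.pdf (the 190m volume size is just an illustration), and 7z x test.zip.001 extracts it without a manual join. | {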
"source": [
"https://unix.stackexchange.com/questions/40480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19717/"
]
} |
40,587 | Why do almost all the shared libraries in /usr/lib/ have the executable permission bit set? I don't see any use case for executing them. Some do manage to hook up some form of main function to print a short copyright and version note, but many don't do even that and segfault upon execution. So, what's the point of setting this x ? Must all library packagers do that? What will happen if I dlopen() a shared library that has 0644 permissions? | Under HP-UX, shared libraries are mapped into memory using mmap(), and all memory pages in the system have protection bits which are coupled with the kernel and processor hardware's memory page protection mechanisms. In order to execute the contents of any page of memory on the system, that page must have PROT_EXEC set - a useful feature to prevent data execution exploits. The mmap() call uses the permission bits on the file it is about to map to define the protection bits of the mapped memory pages which are to contain it: rwx -> PROT_READ|PROT_WRITE|PROT_EXEC (from sys/mman.h). So in order for a shared library to be usable on HP-UX, the file containing the shared library must have execute permissions to ensure that the mapped library also has execute permission. A shared library with mode 644 on an HP-UX system will cause core dumps. | {
"source": [
"https://unix.stackexchange.com/questions/40587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19759/"
]
} |
40,638 | What command would I use to convert an mp4 or m4v video file to an animated gif, and vice versa. That is, convert an animated gif to an mp4 or m4v. | Here's what worked for me: ffmpeg -i animated.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" video.mp4 movflags – This option optimizes the structure of the MP4 file so the browser can load it as quickly as possible. pix_fmt – MP4 videos store pixels in different formats. We include this option to specify a specific format which has maximum compatibility across all browsers. vf – MP4 videos using H.264 need to have dimensions that are divisible by 2. This option ensures that's the case. Source: http://rigor.com/blog/2015/12/optimizing-animated-gifs-with-html5-video
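For the other direction (mp4 to gif), a simple sketch; the frame rate and output width here are assumptions you may want to tune: ffmpeg -i video.mp4 -vf "fps=10,scale=320:-1" animated.gif | {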
"source": [
"https://unix.stackexchange.com/questions/40638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
40,694 | I have a script converting video files and I run it at server on test data and measure its time by time . In result I saw: real 2m48.326s
user 6m57.498s
sys 0m3.120s Why is the real time so much lower than the user time? Does this have any connection with multithreading? Or what else? Edit: And I think that script was running circa 2m48s | The output you show is a bit odd, since real time would usually be bigger than the other two. Real time is wall clock time. (what we could measure with a stopwatch) User time is the amount of time spent in user-mode within the
process Sys is the CPU time spent in the kernel within the process. So I suppose if the work was done by several processors concurrently, the CPU time would be higher than the elapsed wall clock time. Was this a concurrent/multi-threaded/parallel type of application? Just as an example, this is what I get on my Linux system when I issue the time find . command. As expected the elapsed real time is much larger than the others on this single user/single core process. real 0m5.231s
user 0m0.072s
sys 0m0.088s The rule of thumb is: real < user: The process is CPU bound and takes advantage of parallel execution on multiple cores/CPUs. real ≈ user: The process is CPU bound and takes no advantage of parallel execution. real > user: The process is I/O bound. Execution on multiple cores would be of little to no advantage. | {
"source": [
"https://unix.stackexchange.com/questions/40694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19800/"
]
} |
40,708 | Answers to the questions on SO and askubuntu , along with poking through (and reading headers of) $HOME and /etc/ , indicate a number of files that can be used to set environment variables, including: ~/.profile ~/.bashrc ~/.bash_profile ~/.gnomerc ~/.Rprofile /etc/bash_bashrc /etc/profile /etc/screenrc I gather that files in /etc/ work for all users whereas files in $HOME are user-specific. I also gather that .profile is loaded at login whereas .bashrc is loaded when /bin/bash is executed. I also understand that different programs have different settings files (e.g. .Rprofile for R). But I would appreciate some clarification: Are *rc and *profile files fundamentally different? What is the scope of such files (e.g. which files are commonly used with Linux) Is there a hierarchy (e.g. .bashrc overwrites variables set in .settings ) What is a good reference for this class of files? For the options in these files? Linked questions "How to access a bash environment variable from within R in emacs-ess?" "Difference between launching an application from a keyboard shortcut vs the terminal?" | The organization of configuration files is much less uniform than your questions seem to imply. There is no "class", there is no "hierarchy", and there is no global "configuration czar" nor committee that decrees a common syntax or other nice clean generalizations like the ones you are seeking. There is only a multitude of separate applications like R , bash , screen and the GNOME desktop environment, all of which have their own ways of doing things, so you should look at the documentation for each individual program to answer any specific questions about a particular file. If it seems ad-hoc, that's because it is: most of Unix / Linux software out there was developed for different purposes by different people who all went about configuration slightly differently. To answer your other questions pointwise: *rc and *profile do not mean very much, so this question can't really be answered. "rc" is merely a commonly used abbreviation or suffix for configuration files. Its etymology goes back to ancient times (in computer years), and probably means run commands (from runcom ). Just because applications use the same word does not mean they agree on conventions. "profile" is a much less common suffix. Define "scope". Most applications do not share configuration files with other non-related applications. The one possible exception is /etc/profile and .profile , which may be used by multiple different shells (including at least sh and bash ). There is something called an environment associated with every running process which can contain variables that may affect the behavior of said process. Generally, environment variables are set by the appropriate shell configuration files, or perhaps the configuration files of whatever graphical desktop environment you are using. There are also configuration files for "libraries", like .inputrc for readline and .gtkrc* for GTK, which will affect every application that uses the library. No, there is no global hierarchy for configuration files. Again, refer to the documentation for the specific program in question, for example, the bash manual for bash . A general convention you can usually rely on is that user settings in $HOME override system-wide configuration in /etc . This is typically accomplished by reading the user file after the system one, so that later settings overwrite earlier ones.
However, this is not a guarantee, and for definitive answers you should refer to the documentation for the specific program you are using. There is no "class", at least none general enough to encompass all the files you've listed in your question, so the question of a reference for such a "class" is moot. Again, refer to the documentation of the specific program you are using.
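A minimal illustration of that last-wins convention, using bash's usual pair of files (the variable and values here are just an example added for this writeup, not from the original answer): a system-wide default such as export EDITOR=nano in /etc/profile is overridden by export EDITOR=vim in ~/.profile , simply because the user's file is read later. | {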
"source": [
"https://unix.stackexchange.com/questions/40708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19809/"
]
} |
40,719 | When working on the command line, I often change to sudo using sudo -i . However, my working directory changes automatically to /root . I never want to go there; I want to stay where I was! How can I achieve this? | You could use sudo -s instead; it would not change your current directory to /root , though some of your environment variables would not be those of root. This page from the Ubuntu Forums has a nice summary: Summary of the differences found
            HOME=/root   uses root's PATH   corrupted by user's env vars
sudo -i     Y            Y[2]               N
sudo -s     N            Y[2]               Y
sudo bash   N            Y[2]               Y
sudo su     Y            N[1]               Y
This page from Ubuntu's documentation has much more background information on sudo . | {
"source": [
"https://unix.stackexchange.com/questions/40719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14584/"
]
} |
40,749 | I am trying to write a bash shell function that will allow me to remove duplicate copies of directories from my PATH environment variable. I was told that it is possible to achieve this with a one line command using the awk command, but I cannot figure out how to do it. Anybody know how? | If you don't already have duplicates in the PATH and you only want to add directories if they are not already there, you can do it easily with the shell alone. for x in /path/to/add …; do
case ":$PATH:" in
*":$x:"*) :;; # already there
*) PATH="$x:$PATH";;
esac
done And here's a shell snippet that removes duplicates from $PATH . It goes through the entries one by one, and copies those that haven't been seen yet. if [ -n "$PATH" ]; then
  old_PATH=$PATH:; PATH=
  while [ -n "$old_PATH" ]; do
    x=${old_PATH%%:*} # the first remaining entry
    case $PATH: in
      *:"$x":*) ;; # already there
      *) PATH=$PATH:$x;; # not there yet
    esac
    old_PATH=${old_PATH#*:}
  done
  PATH=${PATH#:}
  unset old_PATH x
fi
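Since the question asks about awk : the same deduplication can be written as a one-liner by splitting $PATH on colons and keeping only the first occurrence of each entry. This is a sketch added here, not part of the original answer; it should work with any POSIX awk :
PATH=$(printf %s "$PATH" | awk -v RS=: '!seen[$0]++ {printf "%s%s", sep, $0; sep=":"}') | {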
"source": [
"https://unix.stackexchange.com/questions/40749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19696/"
]
} |
40,786 | Using echo "20+5" literally produces the text " 20+5 ". What command can I use to get the numeric sum, 25 in this case? Also, what's the easiest way to do it just using bash for floating point? For example, echo $((3224/3807.0)) prints 0 :(. I am looking for answers using either the basic command shell ('command line') itself or through using languages that are available from the command line. | There are lots of options!!! Summary
$ echo "$((20.0/7))" # (ksh93/zsh/yash, some bash)
$ awk "BEGIN {print (20+5)/2}"
$ zcalc
$ bc <<< 20+5/2
$ bc <<< "scale=4; (20+5)/2"
$ dc <<< "4 k 20 5 + 2 / p"
$ expr 20 + 5
$ calc 2 + 4
$ node -pe 20+5/2 # Uses the power of JavaScript, e.g. : node -pe 20+5/Math.PI
$ echo 20 5 2 / + p | dc
$ echo 4 k 20 5 2 / + p | dc
$ perl -E "say 20+5/2"
$ python -c "print(20+5/2)"
$ python -c "print(20+5/2.0)"
$ clisp -x "(+ 2 2)"
$ lua -e "print(20+5/2)"
$ php -r 'echo 20+5/2;'
$ ruby -e 'p 20+5/2'
$ ruby -e 'p 20+5/2.0'
$ guile -c '(display (+ 20 (/ 5 2)))'
$ guile -c '(display (+ 20 (/ 5 2.0)))'
$ slsh -e 'printf("%f",20+5/2)'
$ slsh -e 'printf("%f",20+5/2.0)'
$ tclsh <<< 'puts [expr 20+5/2]'
$ tclsh <<< 'puts [expr 20+5/2.0]'
$ sqlite3 <<< 'select 20+5/2;'
$ sqlite3 <<< 'select 20+5/2.0;'
$ echo 'select 1 + 1;' | sqlite3
$ psql -tAc 'select 1+1'
$ R -q -e 'print(sd(rnorm(1000)))'
$ r -e 'cat(pi^2, "\n")'
$ r -e 'print(sum(1:100))'
$ smjs
$ jspl
$ gs -q <<< "5 2 div 20 add =" Details Shells You can use POSIX arithmetic expansion for integer arithmetic echo "$((...))" : $ echo "$((20+5))"
25
$ echo "$((20+5/2))"
22 Quite portable ( ash dash yash bash ksh93 lksh zsh ): Using printf ability to print floats we can extend most shells to do floating point math albeit with a limited range (no more than 10 digits): $ printf %.10f\\n "$((1000000000 * 20/7 ))e-9"
2.8571428570 ksh93 , yash and zsh do support floats here: $ echo "$((1.2 / 3))"
0.4 only ksh93 (directly) and zsh loading library mathfunc here: $ echo "$((4*atan(1)))"
3.14159265358979324 ( zsh need to load zmodload zsh/mathfunc to get functions like atan ). Interactively with zsh: $ autoload zcalc
$ zcalc
1> PI/2
1.5708
2> cos($1)
6.12323e-17
3> :sci 12
6.12323399574e-17 With (t)csh (integer only): % @ a=25 / 3; echo $a
8 In the rc shell family, akanga is the one with arithmetic expansion: ; echo $:25/3
8 POSIX toolchest bc (see below for interactive mode), manual here Mnemonic: b est c alculator (though the b is in fact for basic ). $ echo 20+5/2 | bc
22
$ echo 'scale=4;20+5/2' | bc
22.5000 (supports arbitrary precision numbers) bc interactive mode: $ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
5+5
10
2.2+3.3
5.5 Rush 's solution, expr (no interactive mode): $ expr 20 + 5
25
$ expr 20 + 5 / 2
22 Joshua's solution : awk (no interactive mode): $ calc() { awk "BEGIN{print $*}"; }
$ calc 1/3
0.333333 Other more or less portable tools Arcege 's solution, dc (interactive mode: dc ): Which is even more fun since it works by reverse polish notation. $ echo 20 5 2 / + p | dc
22
$ echo 4 k 20 5 2 / + p | dc
22.5000 But not as practical unless you work with reverse polish notation a lot. Note that dc predates bc and bc has been historically implemented as a wrapper around dc but dc was not standardised by POSIX DQdims 's calc (required sudo apt-get install apcalc) : $ calc 2 + 4
6 General purpose language interpreters: manatwork 's solution, node (interactive mode: node ; output function not needed): $ node -pe 20+5/2 # Uses the power of JavaScript, e.g. : node -pe 20+5/Math.PI
22.5 Perl (interactive mode: perl -de 1 ): $ perl -E "say 20+5/2"
22.5 Python (interactive mode: python ; output function not needed): $ python -c "print(20+5/2)"
22 # 22.5 with python3
$ python -c "print(20+5/2.0)"
22.5 Also supports arbitrary precision numbers: $ python -c 'print(2**1234)'
295811224608098629060044695716103590786339687135372992239556207050657350796238924261053837248378050186443647759070955993120820899330381760937027212482840944941362110665443775183495726811929203861182015218323892077355983393191208928867652655993602487903113708549402668624521100611794270340232766099317098048887493809023127398253860618772619035009883272941129544640111837184 If you have clisp installed, you can also use polish notation: $ clisp -x "(+ 2 2)" Marco 's solution, lua (interactive mode: lua ): $ lua -e "print(20+5/2)"
22.5 PHP (interactive mode: php -a ): $ php -r 'echo 20+5/2;'
22.5 Ruby (interactive mode: irb ; output function not needed): $ ruby -e 'p 20+5/2'
22
$ ruby -e 'p 20+5/2.0'
22.5 Guile (interactive mode: guile ): $ guile -c '(display (+ 20 (/ 5 2)))'
45/2
$ guile -c '(display (+ 20 (/ 5 2.0)))'
22.5 S-Lang (interactive mode: slsh ; output function not needed, just a ; terminator): $ slsh -e 'printf("%f",20+5/2)'
22.000000
$ slsh -e 'printf("%f",20+5/2.0)'
22.500000 Tcl (interactive mode: tclsh ; output function not needed, but expr is): $ tclsh <<< 'puts [expr 20+5/2]'
22
$ tclsh <<< 'puts [expr 20+5/2.0]'
22.5 Javascript shells: $ smjs
js> 25/3
8.333333333333334
js>
$ jspl
JSC: 25/3
RP: 8.33333333333333
RJS: [object Number]
JSC:
Good bye...
$ node
> 25/3
8.333333333333334
> Various SQL's: SQLite (interactive mode: sqlite3 ): $ sqlite3 <<< 'select 20+5/2;'
22
$ sqlite3 <<< 'select 20+5/2.0;'
22.5 MySQL : mysql -BNe 'select 1+1' PostgreSQL : psql -tAc 'select 1+1' The options on mysql and postgres stop the 'ascii art' table output. Specialised math-oriented languages: R in plain mode - let's generate 1000 Normal random numbers and get the standard deviation and print it $ R -q -e 'print(sd(rnorm(1000)))'
> print(sd(rnorm(1000)))
[1] 1.031997 R using the littler script - let's print pi squared $ r -e 'cat(pi^2, "\n")'
9.869604
$ r -e 'print(sum(1:100))'
[1] 5050 PARI/GP , an extensive computer algebra system for number theory, linear algebra, and many other things $ echo "prime(1000)"|gp -q
7919 // the 1000th prime
$ echo "factor(1000)" | gp -q
[2 3]
[5 3] // 2^3*5^3
$ echo "sum(x=1,5,x)" | gp -q
15 // 1+2+3+4+5 GNU Octave (a high-level interpreted language, primarily intended for numerical computations) Also supports complex numbers: $ octave
>> 1.2 / 7
ans = 0.17143
>> sqrt(-1)
ans = 0 + 1i Julia , high-performance language and interpreter for scientific and numerical computing. Non-interactive option: $ julia -E '2.5+3.7'
6.2 GhostScript GhostScript is a PostScript interpreter, very commonly installed even in very old distributions. See PostScript docs for a list of supported math commands. Interactive example: $ GS_DEVICE=display gs
GPL Ghostscript 9.07 (2013-02-14)
Copyright (C) 2012 Artifex Software, Inc. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
GS>5 2 div 20 add =
22.5
GS> | {
"source": [
"https://unix.stackexchange.com/questions/40786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
40,922 | And with the oldest file on bottom? Also, if I do this, is it also possible to strip out the redundant headers contained within each HTML file? I'm seeing myself concatenate a lot of HTML files up, and it would be nice to reduce the file size of the ultimate file a bit. | To concatenate files you use cat file1 file2 file3 ... To get a list of filenames sorted by time, newest first, you use ls -t Putting it all together, cat $(ls -t) > outputfile You might want to give some arguments to ls (eg, *.html ). But if you have filenames with spaces in them, this will not work. My file.html will be assumed to be two filenames: My and file.html . You can make ls quote the filenames, and then use xargs , which understands the quoting, to pass the arguments to cat . ls -tQ | xargs cat As for your second question, filtering out parts of files isn't difficult, but it depends on what exactly you want to strip out. What are the “redundant headers”?
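If you happen to use zsh, its glob qualifiers sidestep the whitespace problem entirely (an addition to this writeup, not part of the original answer): cat *.html(om) > outputfile The (om) qualifier sorts matches by modification time, newest first; use (Om) for oldest first. | {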
"source": [
"https://unix.stackexchange.com/questions/40922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9431/"
]
} |
40,928 | I was trying to re-attach to a long-running tmux session to check up on a python web-application. However tmux attach claims that there is no running session, and ps shows a tmux process (first line), but with a question mark instead of the pts number. What does this mean---is this tmux session permanently lost, and what could have caused it? Is there still a way to look at the current state of the python process, spawned in the tmux session and running in pts/19 (second line)? [mhermans@web314 ~]$ ps -ef | grep mhermans
mhermans 16709 1 0 Mar04 ? 00:26:32 tmux
mhermans 8526 16710 0 Mar04 pts/19 00:20:04 python2.7 webapp.py
root 9985 6671 0 10:18 ? 00:00:00 sshd: mhermans [priv]
mhermans 10028 9985 0 10:18 ? 00:00:00 sshd: mhermans@pts/16
mhermans 10030 10028 0 10:18 pts/16 00:00:00 -bash
mhermans 16247 10030 6 10:28 pts/16 00:00:00 ps -ef
mhermans 16276 10030 0 10:28 pts/16 00:00:00 grep mhermans
mhermans 16710 16709 0 Mar04 pts/19 00:00:00 -bash
mhermans 16777 16709 0 Mar04 pts/21 00:00:00 -bash | Solution courtesy of the Webfaction-support : As the process was still running, the issue was a deleted socket, possibly caused by a purged tmp-directory. According to the tmux mapage: If the socket is accidentally removed, the SIGUSR1 signal may be sent to the tmux server process to recreate it. So sending the signal and attaching works: killall -s SIGUSR1 tmux
tmux attach | {
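If more than one tmux server is running and you only want to signal one of them, you can send the signal to that specific server process instead, using the PID from the ps output above as a hypothetical example: kill -s USR1 16709 | {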
"source": [
"https://unix.stackexchange.com/questions/40928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19919/"
]
} |
40,954 | After entering an incorrect password at a login prompt, there is an approximately 3-second delay. How can I change that on a Linux system with PAM? | I assume you are using Linux and pam. The delay is probably caused by pam_faildelay.so . Check your pam configuration in /etc/pam.d for lines using pam_faildelay , e.g.: # Enforce a minimal delay in case of failure (in microseconds).
# (Replaces the `FAIL_DELAY' setting from login.defs)
# Note that other modules may require another minimal delay. (for example,
# to disable any delay, you should add the nodelay option to pam_unix)
auth optional pam_faildelay.so delay=3000000 To change the time, adjust the delay parameter. If you want to get rid of the delay you can delete/comment out the complete line. Another source for the delay may be pam_unix.so . To disable the delay caused by pam_unix.so add the nodelay parameter, and optionally add a line calling pam_faildelay.so to add a (variable) delay instead, e.g.: auth optional pam_faildelay.so delay=100000
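For completeness, a sketch of what the pam_unix change might look like (the bracketed control value is typical of Debian's common-auth; your distribution's line may differ, and the nodelay flag is the relevant part): auth [success=1 default=ignore] pam_unix.so nodelay | {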
"source": [
"https://unix.stackexchange.com/questions/40954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2180/"
]
} |
41,141 | A big part of my RAM is being used as cache and I want to clean it up; will doing so cause any problems? | There is no need to do this, the kernel manages RAM efficiently by using it for caches and buffers if it is not needed by processes. If processes request more RAM the kernel will deallocate caches and buffers if necessary to satisfy the request. This ServerFault answer explains how to interpret the memory usage reported by free .
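That said, if you want to drop the caches explicitly anyway (occasionally useful for benchmarking; only clean cache entries are discarded, so no data is lost), the kernel exposes a knob for it. This is an addition to this writeup, not part of the original answer: sync; echo 3 | sudo tee /proc/sys/vm/drop_caches Here 1 drops the page cache, 2 drops dentries and inodes, and 3 drops both; running sync first writes out dirty pages so more of the cache is clean and droppable. | {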
"source": [
"https://unix.stackexchange.com/questions/41141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18276/"
]
} |
41,225 | I've been trying various terminal emulators lately, from the built-in gnome-terminal, aterm, xterm, wterm, to rxvt. The test I've been doing is in this order: Open up a tmux window with 2 panes The left pane will be a verbose-intensive task such as grep a /etc -r or a simple time seq -f 'blah blah %g' 100000 The right pane will be a vim window with syntax on, opening any file that has more than 100 lines of code. When the left pane is printing a lot of output, the right pane seems to be very slow and unresponsive; I try to scroll in vim but it takes 1-2 seconds for it to change. When I try to press Ctrl C on the left pane it waits for more than 10 seconds before it stops. When I do the same thing in a TTY (pressing CTRL + ALT +( F[1-6] )), it doesn't happen and both panes are very responsive. I've turned off some config options such as antialiased fonts, turned off coloring, used default settings, and changed to xmonad and openbox, but it doesn't change anything. The result of time seq -f 'blah blah %g' 100000 is not really different among these terminals, but the responsiveness is really different, especially when I'm running split-pane tmux (or other multiplexers). FYI, I'm running all of them in maximized mode. I've read about frame-buffered terminals but I'm not sure how they work and how they can be used to speed up my terminal emulator. So my question is, what makes a terminal emulator far slower than a TTY? Is there any possibility to make it as fast as a TTY? Maybe hardware acceleration or something? One thing I know, my resolution in X server when running a maximized terminal emulator is 1920x1080, and when I'm running a TTY it is less than that, but I'm not sure how this would affect the performance. | When a GUI terminal emulator prints out a string, it has to convert the string to font codepoints, send the codepoints to a font renderer, get back a bitmap and blit that bitmap to the display via the X server. The font renderer has to retrieve the glyphs and run them (did you know that Truetype/Opentype fonts are programs running inside a virtual machine in the font renderer?). During the process of running each glyph, an insane number of decisions are made with respect to font metrics, kerning (though monospace fonts and kerning don't mix well), Unicode compliance, and that's before we even reach the rasteriser, which probably uses sub-pixel addressing. The terminal then has to take the buffer produced by the font rasteriser and blit it to the right place, taking care of pixel format conversions, alpha channels (for sub-pixel addressing), scrolling (which involves more blitting), et cetera. In comparison, writing a string to a Virtual Terminal running in text mode (note: not a graphical console) involves writing that string to video memory. ‘Hello, World!’ involves writing 13 bytes (13 16-bit words if you want colours, too). The X font rasteriser hasn't even started its stretching exercises and knuckle cracking yet, and we're done. This is why text mode was so incredibly important in decades past. It's very fast to implement. Even scrolling is easier than you think: even on the venerable Motorola 6845-based MDA and CGA, you could scroll the screen vertically by writing a single 8-bit value to a register (could be 16... it's been too long). The screen refresh circuitry did the rest. You were essentially changing the start address of the frame buffer. There's nothing you can do to make a graphical terminal as fast as a text mode terminal on the same computer.
But take heart: there have been computers with slower text modes than the slowest graphical terminal you're ever likely to see on a modern computer. The original IBM PC was pretty bad (DOS did software scrolling, sigh). When I saw my first Minix console on an 80286, I was amazed at the speed of the (jump) scrolling. Progress is good. Update: how to accelerate the terminal @poige has already mentioned three in his answer , but here's my own take on them: Decrease the size of the terminal. My own terminals tend to grow till they fill screens, and they get slow as they do that. I get exasperated, annoyed at graphical terminals, then I resize them and everything's better. :) (@poige) Use a different terminal emulator. You can get a huge speed boost at the cost of some modern features. xterm and rxvt work really well and are fantastic terminal emulators. I suspect your tests may have shown they perform better than the ‘modern’ ones. (@poige) Don't use scalable fonts. 1986 may call and ask for its terminals back, but you can't deny they're faster. ;) (@poige) Dumb down the font rasteriser by turning off anti-aliasing/sub-pixel addressing and hinting. Most of them allow overrides in environment variables, so you don't have to do this globally. Note: pointless if you choose a bitmap font. This will hurt the most: don't use (multiple panes in) tmux — run two separate terminals side by side. When tmux displays two panes, it has to use terminal directives to move the cursor around a lot. Even though modern terminal libraries are very fast and good at optimising, they're still stealing bytes from your raw terminal bandwidth. To move the cursor to an arbitrary row on a DEC VT-compatible terminal, you send ESC [ row ; col H . That's 6–10 bytes. With multiple terminals, you're segregating the work, doing away with the need for positioning, optimisation, buffering and all the other stuff curses does, and making better use of multiple CPU cores. | {
"source": [
"https://unix.stackexchange.com/questions/41225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20073/"
]
} |
41,246 | How to redirect standard output to multiple log files?
The following does not work: some_command 1> output_log_1 output_log_2 2>&1 | See man tee : NAME: tee - read from standard input and write to standard output and files SYNOPSIS: tee [OPTION]... [FILE]... Accordingly: echo test | tee file1 file2 file3
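Applied to the question's own command (a sketch): merge stderr into stdout first, then let tee fan the stream out to the log files: some_command 2>&1 | tee output_log_1 output_log_2 Add -a ( tee -a ) if the logs should be appended to rather than truncated. | {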
"source": [
"https://unix.stackexchange.com/questions/41246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16270/"
]
} |
41,274 | My current workflow is: CTRL + SHIFT + T to launch a new terminal window. That starts a new zsh terminal. Type tmux to start tmux. How can I have tmux load by default with a new terminal window? | There are at least two ways: Write something like if [ "$TMUX" = "" ]; then tmux; fi at the beginning of ~/.zshrc . Note the conditional test, which avoids an infinite loop when tmux spawns its own zsh . Modify the terminal launching command to something like xterm -e tmux I prefer the second way, because sometimes I need to launch a terminal without tmux (for example when I need to reconnect to an existing session).
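A slightly more robust variant of the first approach (an illustrative sketch, not from the original answer; requires a reasonably recent tmux): [ -z "$TMUX" ] && exec tmux new-session -A -s main Here exec replaces the shell, so quitting or detaching tmux closes the window, and new-session -A attaches to the session named main if it already exists instead of creating a new one. | {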
"source": [
"https://unix.stackexchange.com/questions/41274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14091/"
]
} |
41,287 | I have a bunch of MP3 files that have their album art included within the file itself. I am now looking for a way to extract them to store them separately, at best from the command line. Is there a way to achieve this? | You can use eyeD3 , which is a great utility for handling id3 tags. To extract all images from an mp3 file you can use: eyeD3 --write-images=DIR mp3_file
This will write all embedded images from the mp3 file to the specified directory.
"https://unix.stackexchange.com/questions/41287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
41,336 | I just accidentally mounted a new drive to a folder that already contained files. I don't care about them and have them somewhere else, but that folder appears empty now. I'm curious what happened to the files. Are they simply deleted by Linux? | Just "shadowed" and will be there again when unmounted. :) In fact the files are "there" intact, and if you need to reach them right away, w/o unmounting, this can be worked around with a so-called bind mount: mount --bind /Original/FS/Mount/Point /Somewhere/Else It works (so) because when you ask the kernel to mount a filesystem to some mountpoint, the kernel treats that mountpoint as a "view port" to the filesystem you're mounting, so it's expected you shall see the mounted FS content there. But this is not the only way those FS "layers" can be combined into a single view. There's the so-called " union mount " approach (it's funny to know that this is "a central concept in Plan 9", BTW). On Linux you could use Aufs , which never made its way into the mainline kernel, or, currently (since 3.18), OverlayFS — it did . | {
"source": [
"https://unix.stackexchange.com/questions/41336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8977/"
]
} |
41,362 | I want to view pdf files directly on our cluster rather than copying them to my local machine and then opening them in a viewer. How can I view a pdf file in my terminal? | On many systems less uses lesspipe, which can handle pdftotext automatically. Therefore, you can immediately try less file.pdf which will show the output of pdftotext in less .
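If lesspipe is not set up on the cluster, you can get the same effect explicitly, assuming the pdftotext tool (from poppler-utils) is installed; the - writes the extracted text to stdout: pdftotext file.pdf - | less | {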
"source": [
"https://unix.stackexchange.com/questions/41362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17938/"
]
} |
41,382 | I seem to misunderstand the Bash rule for creating a subshell. I thought parentheses always create a subshell, which runs as its own process. However, this doesn't seem to be the case. In Code Snippet A (below), the second sleep command does not run in a separate shell (as determined by pstree in another terminal). However, in Code Snippet B, the second sleep command does run in a separate shell. The only difference between the snippets is that the second snippet has two commands within the parentheses. Could somebody please explain the rule for when subshells are created? CODE SNIPPET A: sleep 5
(
sleep 5
) CODE SNIPPET B: sleep 5
(
x=1
sleep 5
) | The parentheses always start a subshell. What's happening is that bash detects that sleep 5 is the last command executed by that subshell, so it calls exec instead of fork + exec . The sleep command replaces the subshell in the same process. In other words, the base case is: ( … ) creates a subshell. The original process calls fork and wait . In the subprocess, which is a subshell: sleep is an external command which requires a subprocess of the subshell. The subshell calls fork and wait . In the subsubprocess: The subsubprocess executes the external command → exec . Eventually the command terminates → exit . wait completes in the subshell. wait completes in the original process. The optimization is: ( … ) creates a subshell. The original process calls fork and wait . In the subprocess, which is a subshell until it calls exec : sleep is an external command, and it's the last thing this process needs to do. The subprocess executes the external command → exec . Eventually the command terminates → exit . wait completes in the original process. When you add something else after the call to sleep , the subshell needs to be kept around, so this optimization can't happen. When you add something else before the call to sleep , the optimization could be made (and ksh does it), but bash doesn't do it (it's very conservative with this optimization).
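A quick way to observe this yourself (a sketch added here; exact output varies):
$ ( sleep 30 ) &
$ ps -o pid,comm -p $! # shows 'sleep': the subshell exec'd into it
$ ( x=1; sleep 30 ) &
$ ps -o pid,comm -p $! # shows 'bash': the subshell was kept, and sleep runs as its child | {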
"source": [
"https://unix.stackexchange.com/questions/41382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20149/"
]
} |
41,406 | I'm sure it is relatively simple, I just don't know how to do it. #!/usr/bin/ksh
set `iostat`
myvar=6 I want to do something like echo ${$myvar} which I want interpreted as ${$myvar} -> ${6} -> value | You can do this with eval , built-in to many fine shells, including ksh: #!/usr/bin/ksh
set $(iostat)
myvar=6
eval "echo \${$myvar}" The trick is to double-quote the string you feed to eval so that $myvar gets substituted with "6", and to backslash the outer dollar-sign, so that eval gets a string "$6". I got "%user" for the output, but I tried it on a multi-processor RHEL machine. | {
"source": [
"https://unix.stackexchange.com/questions/41406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20145/"
]
} |
41,421 | $ ls -og /proc/self /proc/self/fd
lrwxrwxrwx 1 64 Jun 18 11:12 /proc/self -> 32157
/proc/self/fd:
total 0
lrwx------ 1 64 Jun 22 2012 0 -> /dev/tty1
lrwx------ 1 64 Jun 22 2012 1 -> /dev/tty1
lrwx------ 1 64 Jun 22 2012 2 -> /dev/tty1
lr-x------ 1 64 Jun 22 2012 3 -> /proc/32157/fd What is the file descriptor 3 assigned by default? | Nothing: there are three standard file descriptors, STDIN, STDOUT, and STDERR. They are assigned to 0, 1, and 2 respectively. What you are seeing there is an artifact of the way ls(1) works: in order to read the content of the /proc/self/fd directory and display it, it needs to open that directory. That means it gets a file handle, typically the first available ... thus, 3. If you were to run, say, cat on a separate console and inspect /proc/${pid}/fd for it you would find that only the first three were assigned.
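A quick way to confirm this (a sketch): start a process that opens nothing beyond the standard streams and inspect its fd directory from the shell:
$ sleep 100 &
$ ls -og /proc/$!/fd
Only 0, 1 and 2 show up; the 3 in the question belongs to the ls process itself. | {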
"source": [
"https://unix.stackexchange.com/questions/41421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17253/"
]
} |
41,479 | I want to find, say, the 10 most common words in a text file. Firstly, the solution should be optimized for keystrokes (in other words - my time). Secondly, for the performance. Here is what I have so far to get top 10: cat test.txt | tr -c '[:alnum:]' '[\n*]' | uniq -c | sort -nr | head -10
6 k
2 g
2 e
2 a
1 r
1 k22
1 k
1 f
1 eeeeeeeeeeeeeeeeeeeee
1 d I could make a java, python etc. program where I store (word, numberOfOccurences) in a dictionary and sort the value or I could use MapReduce, but I optimize for keystrokes. Are there any false positives? Is there a better way? | That's pretty much the most common way of finding "N most common things", except you're missing a sort , and you've got a gratuitous cat : tr -c '[:alnum:]' '[\n*]' < test.txt | sort | uniq -c | sort -nr | head -10 If you don't put in a sort before the uniq -c you'll probably get a lot of false singleton words. uniq only does unique runs of lines, not overall uniqueness. EDIT: I forgot a trick, "stop words". If you're looking at English text (sorry, monolingual North American here), words like "of", "and", "the" almost always take the top two or three places. You probably want to eliminate them. The GNU Groff distribution has a file named eign in it which contains a pretty decent list of stop words. My Arch distro has /usr/share/groff/current/eign , but I think I've also seen /usr/share/dict/eign or /usr/dict/eign in old Unixes. You can use stop words like this: tr -c '[:alnum:]' '[\n*]' < test.txt |
fgrep -v -w -f /usr/share/groff/current/eign |
sort | uniq -c | sort -nr | head -10 My guess is that most human languages need similar "stop words" removed from meaningful word frequency counts, but I don't know where to suggest getting other languages' stop-word lists. EDIT: fgrep should use the -w option, which enables whole-word matching. This avoids false positives on words that merely contain short stop words, like "a" or "i".
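One more small wrinkle (an addition to the original answer): runs of punctuation or whitespace make tr output consecutive newlines, so empty lines can get counted as a "word". The -s flag squeezes those repeats: tr -cs '[:alnum:]' '[\n*]' < test.txt | sort | uniq -c | sort -nr | head -10 | {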
"source": [
"https://unix.stackexchange.com/questions/41479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20116/"
]
} |
41,493 | Possible Duplicate: ssh via multiple hosts For connecting to server B I have to first ssh to server A .
What's the command line to access server B ? | If server B is reachable via ssh and you only need ssh (not direct scp or sftp ), this also works very well: ssh -t $SERVER_A ssh $SERVER_B The -t option forces allocation of a pseudo-tty even when running a single command at the other end. This is helpful, since ssh needs a pseudo-tty. Since you're using two nested instances of ssh , the escape character in the inner session is Enter ~ ~ (two tildes). One tilde will send the escape to the first shell.
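Worth knowing (an addition; requires OpenSSH 7.3 or newer on the client): the ProxyJump option does this in one hop, ssh -J $SERVER_A $SERVER_B , or persistently with a ProxyJump server_a line in ~/.ssh/config , which then works for scp and sftp as well. | {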
"source": [
"https://unix.stackexchange.com/questions/41493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20188/"
]
} |
41,496 | I am trying to make a productivity suite for myself. My first goal is to block Facebook, Gmail and Stackexchange from 0900 to 1600. As of now, I have edited my /etc/hosts and added 0.0.0.0 www.facebook.com and similar ones for gmail and stackexchange. But I am a little confused about how to include the blocking duration in my script. What I thought is having 2 different files (hosts_allow, hosts_block) and then cp hosts_allow hosts or cp hosts_block hosts depending upon time but then this would need to be put in an infinite loop or something which I'm not really sure is the best way of approaching the problem. Any clues? | Use cron . Say crontab -e as root — or sudo crontab -e if you have sudo set up — and put the following in the file that comes up in the text editor: 0 9 * * * cp /etc/hosts_worktime /etc/hosts
0 16 * * * cp /etc/hosts_playtime /etc/hosts This says that on the zeroth minute of the 9th and 16th hours of every day of the month, overwrite /etc/hosts using the shell commands given. You might actually want something a little more complicated: 0 9 * * 1-5 cp /etc/hosts_worktime /etc/hosts
0 16 * * 1-5 cp /etc/hosts_playtime /etc/hosts That one change — putting 1-5 in the fifth position — says the change between work and play time happens only on Monday through Friday. Say man 5 crontab to get a full explanation of what all you can do in a crontab file. By the way, I changed the names of your hosts files above, because hosts_allow is too close to hosts.allow , used by TCP Wrappers . | {
"source": [
"https://unix.stackexchange.com/questions/41496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
41,501 | When I start Nightly, a new Firefox window opens instead, so I have to close Firefox and start Nightly; then if I open Firefox, a new Nightly window opens. What I am asking is: can I run them both simultaneously? | | {
"source": [
"https://unix.stackexchange.com/questions/41501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2490/"
]
} |
41,550 | Assume there's an image storage directory, say, ./photos/john_doe , within which there are multiple subdirectories, where many files of a certain kind reside (say, *.jpg ). How can I calculate a summary size of those files below the john_doe branch? I tried du -hs ./photos/john_doe/*/*.jpg , but this shows individual files only. Also, this tracks only the first nest level of the john_doe directory, like john_doe/june/ , but skips john_doe/june/outrageous/ . So, how could I traverse the entire branch, summing up the size of those files? | find ./photos/john_doe -type f -name '*.jpg' -exec du -ch {} + | grep total$ If more than one invocation of du is required because the file list is very long, multiple totals will be reported and need to be summed.
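When that happens, the totals can be summed mechanically. A sketch using GNU du 's -b option (exact byte counts, which unlike the human-readable -h figures can be added up) and awk : find ./photos/john_doe -type f -name '*.jpg' -exec du -cb {} + | awk '/total$/ {sum += $1} END {print sum, "bytes"}' | {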
"source": [
"https://unix.stackexchange.com/questions/41550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3766/"
]
} |