Columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
346,426
run multiple values from a file on a single command one after another I have the following command: ./test.sh -f test.txt Completed Success I have 1000 inputs to be passed to the same script, which I have in a file example.txt; every time the script executes it outputs "Completed Success". cat example.txt test.txt test1.txt test2.txt test3.txt etc I want the command to fetch each and every line and execute them in a batch process, like ./test.sh -f test.txt Completed Success ./test.sh -f test1.txt Completed Success ./test.sh -f test2.txt Completed Success and so on.
The very short answer is: a file is an anonymous blob of data a hardlink is a name for a file a symbolic link is a special file whose content is a pathname Unix files and directories work exactly like files and directories in the real world (and not like folders in the real world); Unix filesystems are (conceptually) structured like this: a file is an anonymous blob of data; it doesn't have a name, only a number (inode) a directory is a special kind of file which contains a mapping of names to files (more specifically inodes); since a directory is just a file, directories can have entries for directories, that's how recursion is implemented (note that when Unix filesystems were introduced, this was not at all obvious, a lot of operating systems didn't allow directories to contain directories back then) these directory entries are called hardlinks a symbolic link is another special kind of file, whose content is a pathname; this pathname is interpreted as the name of another file other kinds of special files are: sockets, fifos, block devices, character devices Keeping this metaphor in mind, and specifically keeping in mind that Unix directories work like real-world directories and not like real-world folders explains many of the "oddities" that newcomers often encounter, like: why can I delete a file I don't have write access to? Well, for one, you're not deleting the file, you are deleting one of many possible names for the file, and in order to do that, you only need write access to the directory, not the file. Just like in the real world. Or, why can I have dangling symlinks? Well, the symlink simply contains a pathname. There is nothing that says that there actually has to be a file with that name. My question is simply what is the difference of a file and a hard link ? The difference between a file and a hard link is the same as the difference between you and the line with your name in the phone book. Hard link is pointing to an inode, so what is a file ? Inode entry itself ? Or an Inode with a hard link ? A file is an anonymous piece of data. That's it. A file is not an inode, a file has an inode, just like you are not a Social Security Number, you have a SSN. A hard link is a name for a file. A file can have many names. Let's say, I create a file with touch, then an Inode entry is created in the Inode Table . Yes. And I create a hard link, which has the same Inode number with the file. No. A hard link doesn't have an inode number, since it's not a file. Only files have inode numbers. The hardlink associates a name with an inode number. So did I create a new file ? Yes. Or the file is just defined as an Inode ? No. The file has an inode, it isn't an inode.
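A quick way to see all of this in action (a sketch; the inode numbers below are invented and the file names are just examples):
$ touch data                       # creates a file and its first hardlink (the name "data")
$ ln data data-hardlink            # a second name for the same anonymous file
$ ln -s data data-symlink          # a special file whose content is the pathname "data"
$ ls -li data data-hardlink data-symlink
1234567 -rw-r--r-- 2 user user 0 Mar  8 10:00 data
1234567 -rw-r--r-- 2 user user 0 Mar  8 10:00 data-hardlink
1234570 lrwxrwxrwx 1 user user 4 Mar  8 10:00 data-symlink -> data
$ rm data                          # removes one name; the file lives on under its other name
$ cat data-symlink                 # the symlink now dangles: its stored pathname no longer resolves
cat: data-symlink: No such file or directory
The shared inode number and the link count of 2 show that data and data-hardlink are two names for one and the same file.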
{ "source": [ "https://unix.stackexchange.com/questions/346426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34348/" ] }
346,549
Why is it that ssh -t doesn't wait for background jobs to finish? Example: ssh user@example 'sleep 2 &' This works as expected, since ssh returns after 2 seconds, whereas ssh user@example -t 'sleep 2 &' does not wait for sleep to finish and returns immediately. Can anyone explain the reason behind this? Is there a way to let ssh -t wait for all background processes to finish before returning? My use case is that I start a script with ssh -t , and this script starts several background jobs that should stay alive after the main script finishes. With ssh -t this is not possible so far.
Without -t , sshd gets the stdout of the remote shell (and children like sleep ) and stderr via two pipes (and also sends the client's input via another pipe). sshd does wait for the process in which it has started the user's login shell, but also, after that process has terminated waits for eof on the stdout pipe (not the stderr pipe in the case of openssh at least). And eof happens when there's no file descriptor by any process open on the writing end of the pipe, which typically only happens when all the processes that didn't have their stdout redirected to something else are gone. When you use -t , sshd doesn't use pipes. Instead, all the interaction (stdin, stdout, stderr) with the remote shell and its children are done using one pseudo-terminal pair. With a pseudo-terminal pair, for sshd interacting with the master side, there's no similar eof handling and while at least some systems provide alternative ways to know if there are still processes with fds open to the slave side of the pseudo-terminal (see @JdeBP comment below), sshd doesn't use them, so it just waits for the termination of the process in which it executed the login shell of the remote user and then exits. Upon that exit, the master side of the pty pair is closed which means the pty is destroyed, so processes controlled by the slave will receive a SIGHUP (which by default would terminate them). Edit : that last part was incorrect, though the end result is the same. See @pynexj's answer for a correct description of what exactly happens.
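A workaround that often helps here (a sketch, not guaranteed for every setup; long_job and the log path are placeholders): detach the background job from the pseudo-terminal and protect it from the SIGHUP sent when the pty is destroyed, so ssh -t can return while the job survives:
ssh -t user@example 'nohup long_job </dev/null >/var/tmp/long_job.log 2>&1 &'
Conversely, without -t, redirecting the job's output away from the ssh-provided pipe makes the session return immediately, because nothing keeps the writing end of the stdout pipe open:
ssh user@example 'sleep 2 >/dev/null 2>&1 &'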
{ "source": [ "https://unix.stackexchange.com/questions/346549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91327/" ] }
346,770
How can I safely access "my" web services and accounts on a computer on which I do have sudo rights but the administrator(s) have, naturally, remote root access as well? Details I use a desktop which is connected to a (large) closed/restricted LAN. Even log-in to the system is only successful if connected to the LAN. The administrator has, of course, remote root access (which I will suggest changing in favour of password-less ssh-key based authentication). As well, my userid is assigned to the sudoers group, i.e., in the /etc/sudoers file, there is: userid ALL=(ALL) NOPASSWD: ALL I am hesitant to use my passwords for accessing my webmail client and my Firefox account. And more. Questions What can I do to ensure that my passwords, used to access external web services, are protected from anyone other than me? For example, since I do have sudo rights, how can I ensure that no key loggers are running? I access various services password-lessly, based on SSH key(s). How can I protect my passphrase from being logged? Would 2FA be safe for accessing external services in such a use-case? Is there a collection of "Safe practices for using a Linux-based computer which others can access remotely as root?"
You can't. The root user has full access to the machine. This includes the possibility of running keyloggers, reading any file, causing the programs you run to do things without showing them in the user interface... Whether this is likely to happen depends on your environment so we can't tell you that. Even 2FA isn't safe because of the possibility of session hijacking. In general, if you suspect a machine isn't safe, you shouldn't use it to access your services.
{ "source": [ "https://unix.stackexchange.com/questions/346770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13011/" ] }
346,973
I already got this installed: 1 core/archlinux-keyring 20170104-1 [installed] 10 blackarch/blackarch-keyring 20140118-3 [installed] But I get an error when upgrading libc++abi from AUR: ==> Verifying source file signatures with gpg... llvm-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) libcxx-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) libcxxabi-3.9.1.src.tar.xz ... FAILED (unknown public key 8F0871F202119294) ==> ERROR: One or more PGP signatures could not be verified! ==> ERROR: Makepkg was unable to build libc++. ==> Restart building libc++abi ? [y/N] How to resolve this? Is there a way to know which keyring I should install to solve this issue?
gpg --recv-keys 8F0871F202119294 For AUR packages, the missing key needs to be added to your USER keyring. I did not need to trust the key for makepkg to finish the build. In my particular case, ~/.gnupg/gpg.conf also needed: keyserver-options no-honor-keyserver-url Missing keys for official Arch repos normally mean you are missing an updated archlinux-keyring
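If the default keyserver is unreachable, the key can be fetched from an explicit one; note that the keyserver address below is only an example, and on some gpg versions the keyserver must be configured in ~/.gnupg/dirmngr.conf rather than passed on the command line:
gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 8F0871F202119294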
{ "source": [ "https://unix.stackexchange.com/questions/346973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
347,188
The purpose of this question is to answer a curiosity, not to solve a particular computing problem. The question is: Why are POSIX mandatory utilities not commonly built into shell implementations? For example, I have a script that basically reads a few small text files and checks that they are properly formatted, but it takes 27 seconds to run, on my machine, due to a significant amount of string manipulation. This string manipulation makes thousands of new processes by calling various utilities, hence the slowness. I am pretty confident that if some of the utilities were built in, namely grep , sed , cut , tr , and expr , then the script would run in a second or less (based on my experience in C). It seems there would be a lot of situations where building these utilities in would make the difference between whether or not a solution in shell script has acceptable performance. Obviously, there is a reason it was chosen not to make these utilities built in. Maybe having one version of a utility at a system level avoids having multiple unequal versions of that utility being used by various shells. I really can't think of many other reasons to keep the overhead of creating so many new processes, and POSIX defines enough about the utilities that it does not seem like much of a problem to have different implementations, so long as they are each POSIX compliant. At least not as big a problem as the inefficiency of having so many processes.
Why are POSIX mandatory utilities not built into shell? Because to be POSIX compliant, a system is required [1] to provide most utilities as standalone commands. Having them builtin would imply they have to exist in two different locations, inside the shell and outside it. Of course, it would be possible to implement the external version by using a shell script wrapper to the builtin, but that would disadvantage non-shell applications calling the utilities. Note that BusyBox took the path you suggested by implementing many commands internally, and providing the standalone variant using links to itself. One issue is that while the command set can be quite large, the implementations are often a subset of the standard so aren't compliant. Note also that at least ksh93, bash and zsh go further by providing custom methods for the running shell to dynamically load builtins from shared libraries. Technically, nothing then prevents all POSIX utilities from being implemented and made available as builtins. Finally, spawning new processes has become quite a fast operation with modern OSes. If you are really hit by a performance issue, there might be some improvements to make your scripts run faster. [1] POSIX.1-2008: However, all of the standard utilities, including the regular built-ins in the table, but not the special built-ins described in Special Built-In Utilities, shall be implemented in a manner so that they can be accessed via the exec family of functions as defined in the System Interfaces volume of POSIX.1-2008 and can be invoked directly by those standard utilities that require it (env, find, nice, nohup, time, xargs).
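As an illustration of that last point, much of the per-call process-spawning cost can often be avoided with shell parameter expansion instead of external utilities (a sketch; the two forms differ slightly for paths that contain no slash):
# one or two extra processes per call:
dir=$(dirname -- "$path"); base=$(basename -- "$path")
# pure shell, no fork/exec:
dir=${path%/*}; base=${path##*/}
The dynamic loading mentioned above is exposed in bash through enable -f, e.g. enable -f /usr/lib/bash/dirname dirname; the path and the set of shipped loadables vary by distribution, so treat that location as an assumption.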
{ "source": [ "https://unix.stackexchange.com/questions/347188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183171/" ] }
347,280
I have 2TB ext4 partition with half million files on it. I want to check whether this partition contains any errors or not. I don't want to search for bad blocks, only logical structure should be checked. I have unmounted the partition and run fsck /dev/sda2 , but fsck returns immediately with exit code 0 without actually checking whole file system. I'm expecting full partition check would take hours to complete. I have read man fsck but did not find an option for "thorough testing". I'm afraid my partition may have some sectors accidentally overwritten by garbage data. My HDD was previously connected to another OS, and ext4 partition may get harmed by wrong behavior of that OS. That's why I want to be sure the whole tree structure is completely correct. In other words, I want to perform a check similar to what utility chkdsk.exe does on Windows. What should I use on Debian to completely check ext4 file system?
As mentioned by Satō Katsura , run e2fsck in "force" mode: e2fsck -f /dev/sda2 This will force a check even if the system thinks the file system is clean. The "verbose" option is helpful too: e2fsck -vf /dev/sda2 As a side-note, and not applicable in your case, but if you use LVM for your storage you can use the neat little lvcheck tool to run an "offline" file system check on a mounted file system (it uses an LVM snapshot and updates the file system metadata if the check doesn't find any errors).
{ "source": [ "https://unix.stackexchange.com/questions/347280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124598/" ] }
347,873
How does one set the PATH for non-login shells in CentOS 7? Specifically, I have a systemd unit that needs binaries in /usr/local/texlive/2016/bin/x86_64-linux . I attempted to set it in /etc/environment with PATH=/usr/local/texlive/2016/bin/x86_64-linux:$PATH but then my PATH was /usr/local/texlive/2016/bin/x86_64-linux:$PATH:/usr/local/sbin:/usr/sbin . I created /etc/profile.d/texlive.sh with export PATH="/usr/local/texlive/2016/bin/x86_64-linux:${PATH}" but that only worked for login shells. I looked at Set Path for all Users (Login and Non-login Shells) but the solution was already attempted above. I looked at How to add a path to system $PATH for all users's non-login shell and login shell on debian but there's no accepted solution and I'm not sure I want to modify /etc/login.defs because it might get changed in an update.
The simplest answer is to set the PATH as part of your ExecStart command in the systemd Unit file. For example, if you currently have ExecStart=/bin/mycmd arg1 arg2 then change it to ExecStart=/bin/bash -c 'PATH=/new/path:$PATH exec /bin/mycmd arg1 arg2' The expansion of $PATH will be done by bash, not systemd. Alternatives such as using Environment=PATH=/new/path:$PATH will not work as systemd will not expand the $PATH .
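Alternatively, since the only obstacle is that systemd will not expand $PATH, the whole value can be spelled out literally in the unit (a sketch; adjust the directory list to your system):
[Service]
Environment="PATH=/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/bin/mycmd arg1 arg2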
{ "source": [ "https://unix.stackexchange.com/questions/347873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218185/" ] }
348,450
I have a systemd service that needs to create a directory in /run , but otherwise run as a non-root user. From a blog example, I derived the following solution: [Unit] Description=Startup Thing [Service] Type=oneshot ExecStart=/usr/bin/python3 -u /opt/thing/doStartup WorkingDirectory=/opt/thing StandardOutput=journal User=thingUser # Make sure the /run/thing directory exists PermissionsStartOnly=true ExecStartPre=-/bin/mkdir -p /run/thing ExecStartPre=/bin/chmod -R 777 /run/thing [Install] WantedBy=multi-user.target The magic is in the 3 lines that follow the comment. Apparently the ExecStartPre 's will run as root this way, but the ExecStart will run as the specified user. This has lead to 3 questions though: What does the - do in front of the /bin/mkdir ? I don't know why it's there or what it does. When there are multiple ExecStartPre 's in a unit file, are they just run serially in the order that they are found in the unit file? Or some other method? Is this actually the best technique to accomplish my goal of getting the run directory created so that the non-root user can use it?
For any questions about a systemd directives, you can use man systemd.directives to lookup the man page that documents the directive. In the case of ExecStartPre= , you'll find it documented in man systemd.service . There in docs for ExecStartPre= , you'll find it explained that the leading "-" is used to note that failure is tolerated for these commands. In this case, it's tolerated if /run/thing already exists. The docs there also explain that "multiple command lines are allowed and the commands are executed one after the other, serially." One improvement to your method of pre-creating the directory is not make it world-writable when you only need it to be writable by a particular user. More limited permissions would be accomplished with: ExecStartPre=-/bin/chown thingUser /run/thing ExecStartPre=-/bin/chmod 700 /run/thing That makes the directory owned by and fully accessible from a particular user.
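On reasonably recent systemd versions there is an even simpler route, assuming your systemd supports it: the RuntimeDirectory= directive creates the directory under /run, owned by the service's User=, and removes it when the service stops:
[Service]
Type=oneshot
User=thingUser
RuntimeDirectory=thing
RuntimeDirectoryMode=0700
ExecStart=/usr/bin/python3 -u /opt/thing/doStartup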
{ "source": [ "https://unix.stackexchange.com/questions/348450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101140/" ] }
348,771
Environment: Fedora 25 (4.9.12-200.fc25.x86_64) GNOME Terminal 3.22.1 Using VTE version 0.46.1 +GNUTLS VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Feb 22 2017 16:26:11) tmux 2.2 I recently started using tmux and have observed that the colors within Vim change depending on whether I'm running inside or outside of tmux. Below are screenshots of Vim outside (left) and inside (right) of tmux while viewing a Git diff: My TERM variable is Outside tmux: xterm-256color Inside tmux: screen-256color Vim reports these terminal types as expected (via :set term? ): Outside tmux: term=xterm-256color Inside tmux: term=screen-256color Vim also reports both instances are running in 256-color mode (via :set t_Co? ): Outside tmux: t_Co=256 Inside tmux: t_Co=256 There are many similar questions out there regarding getting Vim to run in 256-color mode inside tmux (the best answer I found is here ), but I don't think that's my problem given the above information. I can duplicate the problem outside of tmux if I run Vim with the terminal type set to screen-256color : $ TERM=screen-256color vim So that makes me believe there's simply some difference between the xterm-256color and screen-256color terminal capabilities that causes the difference in color. Which leads to the question posed in the title: what specifically in the terminal capabilities causes the Vim colors to be different? I see the differences between running :set termcap inside and outside of tmux, but I'm curious as to which variables actually cause the difference in behavior. Independent of the previous question, is it possible to have the Vim colors be consistent when running inside or outside of tmux? Some things I've tried include: Explicitly setting the default terminal tmux uses in ~/.tmux.conf to various values (some against the advice of the tmux FAQ ): set -g default-terminal "screen-256color" set -g default-terminal "xterm-256color" set -g default-terminal "screen.xterm-256color" set -g default-terminal "tmux-256color" Starting tmux using tmux -2 . In all cases, Vim continued to display different colors inside of tmux.
tmux doesn't support the terminfo capability bce (back color erase), which vim checks for, to decide whether to use its "default color" scheme. That characteristic of tmux has been mentioned a few times - Reset background to transparent with tmux? Clear to end of line uses the wrong background color in tmux
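A commonly used workaround (a sketch, not an official fix) is to tell Vim not to rely on background color erase under 256-color terminals, so it paints the background explicitly and renders the same inside and outside tmux; in ~/.vimrc:
if &term =~ '256color'
  " Disable Background Color Erase (BCE)
  set t_ut=
endif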
{ "source": [ "https://unix.stackexchange.com/questions/348771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124274/" ] }
348,913
If I select text with a mouse in tmux in iTerm2 on macOS I get the selected text copied into my clipboard. I do not have to click any extra buttons - just select the text you want and you're done. I've tested tmux in terminal.app on macOS but it doesn't work there - I have to hit y to copy the selection to my clipboard. I thought that there is a mouse binding (something like MouseOnSelection similar to MouseDown1Pane ) but I couldn't find anything useful on the web and man tmux . I wonder if there is a way to have a similar behaviour on Ubuntu 16.10 - preferably in the Gnome terminal.
Tmux 2.4+ with vi copy mode bindings and xclip : set-option -g mouse on set-option -s set-clipboard off bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i" For older tmux versions, emacs copy mode bindings (the default), or non-X platforms (i.e., no xclip), see the explanation below. Explanation: First we need to enable the mouse option so tmux will capture the mouse and let us bind mouse events: set-option -g mouse on Gnome-terminal doesn't support setting the clipboard using xterm escape sequences so we should ensure the set-clipboard option is off: set-option -s set-clipboard off This option might be supported and enabled by default on iTerm2 (see set-clipboard in the tmux manual), which would explain the behavior on there. We can then bind the copy mode MouseDragEnd1Pane "key", i.e., when the first mouse button is released after clicking and dragging in a pane, to a tmux command which takes the current copy mode selection (made by the default binding for MouseDrag1Pane ) and pipes it to a shell command. This tmux command was copy-pipe before tmux 2.4, and has since changed to send-keys -X copy-pipe[-and-cancel] . As for the shell command, we simply need something which will set the contents of the system clipboard to whatever is piped to it; xclip is used to do this in the following commands. Some equivalent replacements for "xclip -selection clipboard -i" below on non-X platforms are "wl-copy" (Wayland), "pbcopy" (macOS), "clip.exe" (Windows, WSL), and "cat /dev/clipboard" (Cygwin, MinGW). Tmux 2.4+: # For vi copy mode bindings bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i" # For emacs copy mode bindings bind-key -T copy-mode MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -selection clipboard -i" Tmux 2.2 to 2.4: # For vi copy mode bindings bind-key -t vi-copy MouseDragEnd1Pane copy-pipe "xclip -selection clipboard -i" # For emacs copy mode bindings bind-key -t emacs-copy MouseDragEnd1Pane copy-pipe "xclip -selection clipboard -i" Before tmux 2.2: Copy after mouse drag support was originally added in Tmux 1.3 through setting the new mode-mouse option to on . Tmux 2.1 changed the mouse support to the familiar mouse key bindings, but did not have DragEnd bindings, which were introduced in 2.2. Thus, before 2.2 I believe the only method of setting the system clipboard on mouse drag was through the built-in use of xterm escape sequences (the set-clipboard option). This means that it's necessary to update to at least tmux 2.2 to obtain the drag-and-copy behavior for terminals that don't support set-clipboard , such as GNOME Terminal.
{ "source": [ "https://unix.stackexchange.com/questions/348913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128489/" ] }
349,005
Why do people use apt-get instead of apt ? In nearly every tutorial I see, the suggestion is to use apt-get . apt is prettier (by default), shorter, and generally more intuitive. ( apt-cache search vs apt search , for example) I don't know if I'm missing something because apt just seems better in every way. What's the argument for apt-get over apt for everyday use?
The apt front-end is a recent addition, it was added in version 1.0 in April 2014. So it's only been part of one Debian stable release, Debian 8. People who've used Debian for longer are used to apt-get and apt-cache , and old habits die hard — and old tutorials die harder (and new users learn old habits from those). apt is nicer for end users as a command-line tool, although even there it has competition — I prefer aptitude for example. As a general-purpose tool though it's not necessarily ideal, because its interface is explicitly not guaranteed to stay the same from one release to the next, and it's not designed for use in scripts. Thus in any circumstance where instructions may be used in a script, it should be avoided; so it's typically safer to suggest apt-get rather than apt in answers on Unix.SE and similar sites.
{ "source": [ "https://unix.stackexchange.com/questions/349005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219064/" ] }
349,015
I'm using sshpass and ssh to send a command to a Linux box, and then disconnect. The command is sent ok, but I don't get the response I expect. I noticed that upon login the host sends 5 blank lines, then a 5 line banner. It appears that the ssh command (when passing a command as a parameter) is returning only the first blank line. Is there a way to cause it to return ALL text? (or wait for 5 seconds to capture all text before returning) Command looks like this, and capturing response into Bash variable RESPONSE=$(sshpass .... ssh..... "my command")
{ "source": [ "https://unix.stackexchange.com/questions/349015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23091/" ] }
349,052
I have been trying to format an sd card with the lastest debian jessie-lite image for use with raspberry pi. When using the dd command, it states that there is no space left on device after copying 10 megs. I have searched SE and have tried to use various answers to questions but I always end up back at the same place. Below are the outputs of dd, fdisk, df and ls commands that may be of interest. /dev/sdb is the sd card dd bs=4M if=/home/user/Downloads/2017-02-16-raspbian-jessie-lite.img of=/dev/sdb dd: error writing ‘/dev/sdb’: No space left on device 3+0 records in 2+0 records out 10485760 bytes (10 MB) copied, 0.0137885 s, 760 MB/s fdisk -l /dev/sdb Disk /dev/sdb: 10 MiB, 10485760 bytes, 20480 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0xdbcc7ab3 Device Boot Start End Sectors Size Id Type /dev/sdb1 8192 137215 129024 63M c W95 FAT32 (LBA) /dev/sdb2 137216 2807807 2670592 1.3G 83 Linux ls -al /dev/sdb* -rw-r--r-- 1 root root 10485760 Mar 3 22:04 /dev/sdb brw-rw---- 1 root disk 8, 17 Mar 3 22:05 /dev/sdb1 brw-rw---- 1 root disk 8, 18 Mar 3 22:05 /dev/sdb2 brw-rw---- 1 root disk 8, 19 Mar 3 22:05 /dev/sdb3 df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 226G 7.3G 207G 4% / udev 10M 10M 0 100% /dev tmpfs 1.6G 9.3M 1.6G 1% /run tmpfs 3.9G 112K 3.9G 1% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup tmpfs 792M 4.0K 792M 1% /run/user/119 tmpfs 792M 8.0K 792M 1% /run/user/1000
-rw-r--r-- 1 root root 10485760 Mar 3 22:04 /dev/sdb /dev/sdb is a regular file, not a device. You must have run rm /dev/sdb at some point. It is created automatically when the device is inserted, but when you run commands as root, you can mess up with it. Now that /dev/sdb is a regular file, it's stored in memory, on a filesystem which has a low size limit because it's only meant to contain device files that have no content as such since they're just markers to say “call this device driver to store the contents”. Remove the file ( rm /dev/sdb as root). Then, to re-create the proper /dev/sdb , the easiest way is to eject the SD card and insert it back it. Once you've done that, you can copy the image with the command you were using, or simply </home/user/Downloads/2017-02-16-raspbian-jessie-lite.img sudo tee /dev/sdb >/dev/null
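If re-plugging the card is inconvenient, the device node can also be recreated by hand. On typical Linux systems /dev/sdb is block device major 8, minor 16, but verify against a sibling entry before trusting these numbers (a sketch; udev would normally recreate the node for you):
$ ls -l /dev/sda                       # check the major/minor convention first
brw-rw---- 1 root disk 8, 0 Mar  3 22:00 /dev/sda
$ sudo mknod /dev/sdb b 8 16
$ sudo chown root:disk /dev/sdb && sudo chmod 660 /dev/sdb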
{ "source": [ "https://unix.stackexchange.com/questions/349052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141314/" ] }
349,118
I am looking for a command like mount 1234-SOME-UUID /some/mount/folder I am connecting a couple of external USB hard drives. I want them to be mounted on specific folders during startup. I am unable to boot using /etc/fstab if one of the drives is not connected, so I am using an init script. But the /dev/sdbx enumeration is not always the same, so I cannot use mount /dev/sdX /some/mount/folder in the init script.
From the manpage of mount . -U, --uuid uuid Mount the partition that has the specified uuid. So your mount command should look like as follows. mount -U 1234-SOME-UUID /some/mount/folder or mount --uuid 1234-SOME-UUID /some/mount/folder A third possibility would be mount UUID=1234-SOME-UUID /some/mount/folder
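And since the original problem was a boot that fails when a drive is absent, the matching /etc/fstab entry can combine the UUID with the nofail option so boot continues when the disk is unplugged (a sketch; the filesystem type and mount point are examples):
UUID=1234-SOME-UUID  /some/mount/folder  ext4  defaults,nofail  0  2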
{ "source": [ "https://unix.stackexchange.com/questions/349118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27711/" ] }
349,252
I run Debian Jessie on the host (64-bit) and in VirtualBox (32-bit). To spare traffic I try to cp the i386 packages from the host to the shared folder, for use in VirtualBox. My Hostname/var/cache/apt/archives$ ls -al /var/cache/apt/archives/ | grep 'i386' | awk '{print $9}' alsa-oss_1.0.28-1_i386.deb gcc-4.9-base_4.9.2-10_i386.deb i965-va-driver_1.4.1-2_i386.deb libaacplus2_2.0.2-dmo2_i386.deb libaio1_0.3.110-1_i386.deb libasound2_1.0.28-1_i386.deb libasound2-dev_1.0.28-1_i386.deb libasound2-plugins_1.0.28-1+b1_i386.deb shows me the packages I'm looking for, but when I try to cp them via xargs My Hostname/var/cache/apt/archives$ ls -al /var/cache/apt/archives/ | grep 'i386' | awk '{print $9}' | LANG=C xargs cp -u /home/alex/debian-share/apt-archives/ cp: target 'zlib1g_1%3a1.2.8.dfsg-2+b1_i386.deb' is not a directory I cannot figure out what I am doing wrong. Is this way even possible? My problem is that I cannot write scripts. Probably it is something like for i in *_i386.deb ; do cp [option] full-path to shared-folder I didn't try it, because I don't want to mess up my host.
While you already know how you should solve your current problem, I'll still answer about xargs . xargs puts the string it got in the end of command, while in your case you need that string before the last argument of cp . Use -I option of xargs to construct the command. Like this: ls /source/path/*pattern* | xargs -I{} cp -u {} /destination/path In this example I'm using {} to as a replacement string, so the syntax looks similar to find .
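A variant that avoids parsing ls output altogether (a sketch using the same paths as above; the -u, -t and -maxdepth options assume GNU cp/find):
cp -u /var/cache/apt/archives/*_i386.deb /home/alex/debian-share/apt-archives/
# or, if the file list might exceed the command-line length limit:
find /var/cache/apt/archives -maxdepth 1 -name '*_i386.deb' -exec cp -u -t /home/alex/debian-share/apt-archives/ {} +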
{ "source": [ "https://unix.stackexchange.com/questions/349252", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
349,264
I'm trying to mount a CIFS device after the system boots (using systemd), but the system tries to mount the share before the network is established, so it fails. After logging into the system I can mount it without any problem, using sudo mount -a. How can I tell my Arch (ARM) system to wait until the network is available?
Adding _netdev to the mount options in /etc/fstab might be sufficient. Mount units referring to local and network file systems are distinguished by their file system type specification. In some cases this is not sufficient (for example network block device based mounts, such as iSCSI), in which case _netdev may be added to the mount option string of the unit, which forces systemd to consider the mount unit a network mount. Additionally systemd supports explicit order dependencies between mount entries and other units: Adding x-systemd.after=network-online.target to the mount options might work if _netdev is not enough. See the systemd mount unit documentation for more details.
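A corresponding /etc/fstab line might look like this (a sketch; the server, share, credentials file and mount point are placeholders):
//fileserver/share  /mnt/share  cifs  credentials=/etc/cifs-credentials,_netdev  0  0
Adding x-systemd.automount to the options is another common approach: the share is then only mounted on first access, by which time the network is usually up.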
{ "source": [ "https://unix.stackexchange.com/questions/349264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148745/" ] }
349,669
How will you ssh into some other system's root account? Assume that you have access to the target system. This is a question I was asked in a quiz. Apparently simply using ssh root@192.168.xxx.xxx wasn't the answer. I'd like to know the answer.
That is actually the proper way to SSH into a server (192.168.xxx.xxx) that accepts SSH connections on the default port (22). To specify the user you want to use for login, you can use: ssh -l root 192.168.xxx.xxx or ssh root@192.168.xxx.xxx If the SSH service is configured to allow root login, you should be able to connect without problems (PermitRootLogin yes under sshd_config).
{ "source": [ "https://unix.stackexchange.com/questions/349669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219550/" ] }
350,085
I am using the find -type f command to recursively find all files from a certain starting directory. However, I would like to prevent find from entering some directories and listing the files inside them. So basically I am looking for something like: find . -type f ! -name "avoid_this_directory_please" Is there a functioning alternative to this?
This is what the -prune option is for: find . -type d -name 'avoid_this_directory_please' -prune -o \ -type f -print You may interpret the above as "if there's a directory called avoid_this_directory_please , don't enter it, otherwise, if it's a regular file, print its pathname." You may also prune the directory given any other criteria, e.g. its full pathnames from the top-level search path: find . -type d -path './some/dir/avoid_this_directory_please' -prune -o \ -type f -print
{ "source": [ "https://unix.stackexchange.com/questions/350085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219860/" ] }
350,240
If I execute the following simple script: #!/bin/bash printf "%-20s %s\n" "Früchte und Gemüse" "foo" printf "%-20s %s\n" "Milchprodukte" "bar" printf "%-20s %s\n" "12345678901234567890" "baz" It prints: Früchte und Gemüse foo Milchprodukte bar 12345678901234567890 baz that is, text with umlauts (such as ü ) is "shrunk" by one character per umlaut. Certainly, I have some wrong setting somewhere, but I am not able to figure out which one that could be. This occurs if the file's encoding is UTF-8. If I change its encoding to latin-1, the alignment is correct, but the umlauts are rendered wrong: Fr�chte und Gem�se foo Milchprodukte bar 12345678901234567890 baz
POSIX requires printf 's %-20s to count those 20 in terms of bytes not characters even though that makes little sense as printf is to print text , formatted (see discussion at the Austin Group (POSIX) and bash mailing lists). The printf builtin of bash and most other POSIX shells honour that. zsh ignores that silly requirement (even in sh emulation) so printf works as you'd expect there. Same for the printf builtin of fish (not a POSIX-like shell). The ü character (U+00FC), when encoded in UTF-8 is made of two bytes (0xc3 and 0xbc), which explains the discrepancy. $ printf %s 'Früchte und Gemüse' | wc -mcL 18 20 18 That string is made of 18 characters, is 18 columns wide ( -L being a GNU wc extension to report the display width of the widest line in the input) but is encoded on 20 bytes. In zsh or fish , the text would be aligned correctly. Now, there are also characters that have 0-width (like combining characters such as U+0308, the combining diaresis) or have double-width like in many Asiatic scripts (not to mention control characters like Tab) and even zsh wouldn't align those properly. Example, in zsh : $ printf '%3s|\n' u ü $'u\u308' $'\u1100' u| ü| ü| ᄀ| In bash : $ printf '%3s|\n' u ü $'u\u308' $'\u1100' u| ü| ü| ᄀ| ksh93 has a %Ls format specification to count the width in terms of display width. $ printf '%3Ls|\n' u ü $'u\u308' $'\u1100' u| ü| ü| ᄀ| That still doesn't work if the text contains control characters like TAB (how could it? printf would have to know how far apart the tab stops are in the output device and what position it starts printing at). It does work by accident with backspace characters (like in the roff output where X (bold X ) is written as X\bX ) though as ksh93 considers all control characters as having a width of -1 . Other options In zsh , you can use its padding parameter expansion flags ( l for left-padding, r for right-padding), which when combined with the m flag considers the display width of characters (as opposed to the number of characters in the string): $ () { printf '%s|\n' "${(ml[3])@}"; } u ü $'u\u308' $'\u1100' u| ü| ü| ᄀ| With expand : printf '%s\t|\n' u ü $'u\u308' $'\u1100' | expand -t3 That works with some expand implementations (not GNU's though). On GNU systems, you could use GNU awk whose printf counts in chars (not bytes, not display-widths, so still not OK for the 0-width or 2-width characters, but OK for your sample): gawk 'BEGIN {for (i = 1; i < ARGC; i++) printf "%-3s|\n", ARGV[i]} ' u ü $'u\u308' $'\u1100' If the output goes to a terminal, you can also use cursor positioning escape sequences. Like: forward21=$(tput cuf 21) printf '%s\r%s%s\n' \ "Früchte und Gemüse" "$forward21" "foo" \ "Milchprodukte" "$forward21" "bar" \ "12345678901234567890" "$forward21" "baz"
{ "source": [ "https://unix.stackexchange.com/questions/350240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6479/" ] }
350,246
I'm working on a bash script to copy files from a single USB drive to multiple others. I'm currently using rsync, which copies from the source to a single destination, going through all of the output drives in a loop one at a time: for line in $(cat output_drives_list); do rsync -ah --progress --delete mountpoints/SOURCE/ mountpoints/$line/ done I'm trying to optimize the process to get maximum use of the USB bandwidth, avoiding the bottleneck of a single drive's write speed. Is it possible to do something like rsync, but with multiple output directories, that will write to all output drives at once, but read only once from the input? I guess that some of this is already taken care of by the system cache, but that only optimizes for read. If I run multiple rsync processes in parallel, this might optimize the write speed, but I'm also afraid it'll butcher the read speed. Do I need to care about single-read when copying in parallel?
{ "source": [ "https://unix.stackexchange.com/questions/350246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67203/" ] }
350,315
I have to sort the following list with a shell script and make the latest version appear on the bottom or top. How would I do that with shell tools only? release-5.0.0.rc1 release-5.0.0.rc2 release-5.0.0 release-5.0.1 release-5.0.10 release-5.0.11 release-5.0.13 release-5.0.14 release-5.0.15 release-5.0.16 release-5.0.17 release-5.0.18 release-5.0.19 release-5.0.2 release-5.0.20 release-5.0.21 release-5.0.22 release-5.0.23 release-5.0.24 release-5.0.25 release-5.0.26 release-5.0.27 release-5.0.28 release-5.0.29 release-5.0.3
GNU sort has -V that can mostly deal with a list like that ( details ): -V, --version-sort natural sort of (version) numbers within text $ cat vers release-5.0.19 release-5.0.19~pre1 release-5.0.19-bigbugfix release-5.0.2 release-5.0.20 $ sort -V vers release-5.0.2 release-5.0.19~pre1 release-5.0.19 release-5.0.19-bigbugfix release-5.0.20 However, those .rc* versions could be a bit of a problem, since they probably should be sorted before the corresponding non-rc version, if there happened to be both, that is. Some versioning systems (like Debian's), use suffixes starting with a tilde ( ~ ) to mark pre-releases, and they sort before the version without a suffix, which sorts before versions with other suffixes. Apparently this is supported by at least the sort on my system, as shown above ( sort (GNU coreutils) 8.23 ). To sort the example list, you could use the following: perl -pe 's/\.(?=rc)/~/' < versions.txt | sort -V | perl -pe 's/~/./' > versions-sorted.txt
{ "source": [ "https://unix.stackexchange.com/questions/350315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120583/" ] }
350,625
I was reading up on chmod and its octal modes . I saw that 1 is execute only. What is a valid use case for an execute only permission? To execute a file, one typically would want read and execute permission. $ echo 'echo foo' > say_foo $ chmod 100 ./say_foo $ ./say_foo bash: ./say_foo: Permission denied $ chmod 500 ./say_foo $ ./say_foo foo
Shell scripts require the read permission to be executed, but binary files do not: $ cat hello.cpp #include<iostream> int main() { std::cout << "Hello, world!" << std::endl; return 0; } $ g++ -o hello hello.cpp $ chmod 100 hello $ ./hello Hello, world! $ file hello hello: executable, regular file, no read permission Displaying the contents of a file and executing them are two different things. With shell scripts, these things are related because they are "executed" by "reading" them into a new shell (or the current one), if you'll forgive the simplification. This is why you need to be able to read them. Binaries don't use that mechanism. For directories, the execute permission is a little different; it means you can do things to files within that directory (e. g. read or execute them). So let's say you have a set of tools in /tools that you want people to be able to use, but only if they know about them. chmod 711 /tools . Then executable things in /tools can be run explicitly (e. g. /tools/mytool ), but ls /tools/ will be denied. Similarly, documents could be stored in /private-docs which could be read if and only if the file names are known.
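The directory case can be demonstrated in the shell as well (a sketch; /tools and mytool are examples, and the listing is attempted as a user other than the directory's owner):
$ sudo mkdir /tools && sudo cp mytool /tools/ && sudo chmod 711 /tools
$ ls /tools
ls: cannot open directory '/tools': Permission denied
$ /tools/mytool                 # works if you already know the name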
{ "source": [ "https://unix.stackexchange.com/questions/350625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181811/" ] }
351,331
Is there a way to execute a command with arguments in linux without whitespaces? cat file.txt needs to be: cat(somereplacementforthiswhitespace)file.txt
If only there was a variable whose value is a space… Or more generally, contains a space. cat${IFS}file.txt The default value of IFS is space, tab, newline. All of these characters are whitespace. If you need a single space, you can use ${IFS%??} . More precisely, the reason this works has to do with how word splitting works. Critically, it's applied after substituting the value of variables. And word splitting treats each character in the value of IFS as a separator, so by construction, as long as IFS is set to a non-empty value, ${IFS} separates words. If IFS is more than one character long, each character is a word separator. Consecutive separator characters that are whitespace are treated as a single separator, so the result of the expansion of cat${IFS}file.txt is two words: cat and file.txt . Non-whitespace separators are treated separately, with something like IFS=',.'; cat${IFS}file.txt , cat would receive two arguments: an empty argument and file.txt .
{ "source": [ "https://unix.stackexchange.com/questions/351331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220775/" ] }
351,593
Most answers here [ 1 ] [ 2 ] [ 3 ] use a single angle bracket to redirect to /dev/null, like this : command > /dev/null But appending to /dev/null works too : command >> /dev/null Except for the extra character, is there any reason not to do this ? Is either of these "nicer" to the underlying implementation of /dev/null ? Edit: The open(2) manpage says lseek is called before each write to a file in append mode: O_APPEND The file is opened in append mode. Before each write(2), the file offset is positioned at the end of the file, as if with lseek(2). The modification of the file offset and the write operation are performed as a single atomic step. which makes me think there might be a tiny performance penalty for using >> . But on the other hand truncating /dev/null seems like an undefined operation according to that document: O_TRUNC If the file already exists and is a regular file and the access mode allows writing (i.e., is O_RDWR or O_WRONLY) it will be truncated to length 0. If the file is a FIFO or terminal device file, the O_TRUNC flag is ignored. Otherwise, the effect of O_TRUNC is unspecified. and the POSIX spec says > shall truncate an existing file , but O_TRUNC is implementation-defined for device files and there's no word on how /dev/null should respond to being truncated . So, is truncating /dev/null actually unspecified ? And do the lseek calls have any impact on write performance ?
By definition /dev/null sinks anything written to it , so it doesn't matter if you write in append mode or not, it's all discarded. Since it doesn't store the data, there's nothing to append to, really. So in the end, it's just shorter to write > /dev/null with one > sign. As for the edited addition: The open(2) manpage says lseek is called before each write to a file in append mode. If you read closely, you'll see it says (emphasis mine): the file offset is positioned at the end of the file, as if with lseek(2) Meaning, it doesn't (need to) actually call the lseek system call, and the effect is not strictly the same either: calling lseek(fd, SEEK_END, 0); write(fd, buf, size); without O_APPEND isn't the same as a write in append mode, since with separate calls another process could write to the file in between the system calls, trashing the appended data. In append mode, this doesn't happen (except over NFS, which doesn't support real append mode ). The text in the standard doesn't mention lseek at that point, only that writes shall go the end of the file. So, is truncating /dev/null actually unspecified? Judging by the scripture you refer to, apparently it's implementation-defined. Meaning that any sane implementation will do the same as with pipes and TTY's, namely, nothing. An insane implementation might do something else, and perhaps truncation might mean something sensible in the case of some other device file. And do the lseek calls have any impact on write performance? Test it. It's the only way to know for sure on a given system. Or read the source to see where the append mode changes the behaviour, if anywhere.
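"Test it" could look something like this (a sketch; the absolute numbers depend entirely on the system, shell and load, so only the relative difference between the two runs is interesting):
$ time sh -c 'i=0; while [ $i -lt 100000 ]; do echo hi; i=$((i+1)); done >  /dev/null'
$ time sh -c 'i=0; while [ $i -lt 100000 ]; do echo hi; i=$((i+1)); done >> /dev/null'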
{ "source": [ "https://unix.stackexchange.com/questions/351593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/187346/" ] }
351,765
When I use either of these commands with an argument as the name of a process, both of them return the exact same number. Are they the same commands? Are they two different commands that do the same thing? Is one of them an alias to the other? pidof firefox pgrep firefox
The programs pgrep and pidof are not quite the same thing, but they are very similar. For example: $ pidof 'firefox' 5696 $ pgrep '[i]ref' 5696 $ pidof '[i]ref' $ printf '%s\n' "$?" 1 As you can see, pidof failed to find a match for [i]ref . This is because pidof program returns a list of all process IDs associated with a program called program . On the other hand, pgrep re returns a list of all process IDs associated with a program whose name matches the regular expression re . In their most basic forms, the equivalence is actually: $ pidof 'program' $ pgrep '^program$' As yet another concrete example, consider: $ ps ax | grep '[w]atch' 12 ? S 0:04 [watchdog/0] 15 ? S 0:04 [watchdog/1] 33 ? S< 0:00 [watchdogd] 18451 pts/5 S+ 0:02 watch -n600 tail log-file $ pgrep watch 12 15 33 18451 $ pidof watch 18451
{ "source": [ "https://unix.stackexchange.com/questions/351765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166244/" ] }
351,779
I dualbooted Kali Linux on my 2013 MacBook Pro and at first there was no wireless extension showing when I typed iwconfig in the terminal, but then after following this video https://www.youtube.com/watch?v=Lp3snFy9Jbs I got wlan1 and wlan0 showing, but they don't detect any wireless network. I tried it in a VM, as a live boot, and now I have even dualbooted it to my hard drive, but it still won't detect any wifi network. I posted what shows up when I type iwconfig in the terminal. How do I fix this? root@kali:~# iwconfig lo no wireless extensions. wlan1 IEEE 802.11abgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm Retry short limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off wlan0 IEEE 802.11abgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm Retry short limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off hwsim0 no wireless extensions. eth0 no wireless extensions. output of lspci -knn | grep Net -A2 root@kali:~# lspci -knn | grep Net -A2 03:00.0 Network controller [0280]: Broadcom Corporation BCM4360 802.11ac Wireless Network Adapter [14e4:43a0] (rev 03) Subsystem: Apple Inc. BCM4360 802.11ac Wireless Network Adapter [106b:0112] Kernel driver in use: bcma-pci-bridge Kernel modules: bcma root@kali:~#
{ "source": [ "https://unix.stackexchange.com/questions/351779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221115/" ] }
351,916
Suppose file stores the pathname of a non-dir file. How can I get its parent directory? Why does the following way, appending /.. to its value, not work? $ cd $file/.. cd: ./Tools/build.bat/..: No such file or directory Thanks.
Assuming $ file=./Tools/build.bat With a POSIX compatible shell (including zsh): $ echo "${file%/*}" ./Tools With dirname : $ echo "$(dirname -- "$file")" ./Tools (at least GNU dirname takes options, so the -- is required in case the path starts with a dash.)
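To actually change into the file's parent directory, as the failing cd in the question tried to do, either form works when quoted (note that ${file%/*} leaves the value unchanged if it contains no slash, whereas dirname would print "."):
cd -- "$(dirname -- "$file")"
cd -- "${file%/*}"        # POSIX shells; see the caveat above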
{ "source": [ "https://unix.stackexchange.com/questions/351916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
352,089
I tried finding an answer to this question, but got no luck so far: I have a script that runs some other scripts, and many of those other scripts have "set -x" in them, which makes them print every command they execute. I would like to get rid of that but retain the information if any of the scripts send the error message to stderr. So I can't simply write ./script 2>/dev/null Also, I don't have privileges to edit those other scripts, so I can't manually change the set option. I was thinking about logging everything from stderr to the separate file and filtering out the tracing commands, but maybe there is a simpler way?
With bash 4.1 and above, you can do BASH_XTRACEFD=7 ./script.bash 7> /dev/null (also works when bash is invoked as sh ). Basically, we're telling bash to output the xtrace output on file descriptor 7 instead of the default of 2, and redirect that file descriptor to /dev/null . The fd number is arbitrary. Use a fd above 2 that is not otherwise used in your script. If the shell you're entering this command in is bash or yash , you can even use a number above 9 (though you may run into problems if the file descriptor is used internally by the shell). If the shell you're calling that bash script from is zsh , you can also do: (export BASH_XTRACEFD; ./script.bash {BASH_XTRACEFD}> /dev/null) for the variable to be automatically assigned the first free fd above 9. For older versions of bash , another option, if the xtrace is turned on with set -x (as opposed to #! /bin/bash -x or set -o xtrace ) would be to redefine set as an exported function that does nothing when passed -x (though that would break the script if it (or any other bash script it invokes) used set to set the positional parameters). Like: set() case $1 in (-x) return 0;; (-[!-]|"") builtin set "$@";; (*) echo >&2 That was a bad idea, try something else; builtin set "$@";; esac export -f set ./script.bash Another option is to add a DEBUG trap in a $BASH_ENV file that does set +x before every command. echo 'trap "{ set +x; } 2>/dev/null" DEBUG' > ~/.no-xtrace BASH_ENV=~/.no-xtrace ./script.bash That won't work when set -x is done in a sub-shell though. As @ilkkachu said, provided you have write permission to any folder on the filesystem, you should at least be able to make a copy of the script and edit it. If there's nowhere you can write a copy of the script, or if it's not convenient to make and edit a new copy every time there's an update to the original script, you may still be able to do: bash <(sed 's/set -x/set +x/g' ./script.bash) That (and the copy approach) may not work properly if the script does anything fancy with $0 or special variables like $BASH_SOURCE (such as looking for files that are relative to the location of the script itself), so you may need to do some more editing like replace $0 with the path of the script...
{ "source": [ "https://unix.stackexchange.com/questions/352089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151203/" ] }
352,601
Playing with e2fsprogs debugfs , by change/accident, a file named filen/ame was created. Obviously the forward slash character / serves as the special separator character in pathnames. Still using debugfs I wanted to remove the file named filen/ame , but I had little success, since the / character is not interpreted as part of the filename? Does debugfs provide a way to remove this file containing the slash? If so how? I used: cd /tmp echo "content" > contentfile dd if=/dev/zero of=/tmp/ext4fs bs=1M count=50 mkfs.ext4 /tmp/ext4fs debugfs -w -R "write /tmp/contentfile filen/ame" /tmp/ext4fs debugfs -w -R "ls" /tmp/ext4fs which outputs: debugfs 1.43.4 (31-Jan-2017) 2 (12) . 2 (12) .. 11 (20) lost+found 12 (980) filen/ame I tried the following to remove the filen/ame file: debugfs -w -R "rm filen/ame" /tmp/ext4fs but this did not work and only produced: debugfs 1.43.4 (31-Jan-2017) rm: File not found by ext2_lookup while trying to resolve filename Apart from changing the content of the directory node manually, is there a way to remove the file using debugfs ?
If you want a fix and are not just trying out debugfs , you can have fsck do the work for you. Mark the filesystem as dirty and run fsck -y to get the filename changed: $ debugfs -w -R "dirty" /tmp/ext4fs $ fsck -y /tmp/ext4fs ... /tmp/ext4fs was not cleanly unmounted, check forced. Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Entry 'filen/ame' in / (2) has illegal characters in its name. Fix? yes ... $ debugfs -w -R "ls" /tmp/ext4fs 2 (12) . 2 (12) .. 11 (20) lost+found 12 (980) filen.ame
{ "source": [ "https://unix.stackexchange.com/questions/352601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
353,044
I want to be able to log in to a (publicly-accessible) SSH server from the local network (192.168.1.*) using some SSH key, but I don't want that key to be usable from outside the local network. I want some other key to be used for external access instead (same user in both cases). Is such a thing possible to achieve in SSH?
Yes. In the file ~/.ssh/authorized_keys on the server, each entry now probably looks like ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment (or similar) There is an optional first column that may contain options. These are described in the sshd manual. One of the options is from="pattern-list" Specifies that in addition to public key authentication, either the canonical name of the remote host or its IP address must be present in the comma-separated list of patterns. See PATTERNS in ssh_config(5) for more information on patterns. In addition to the wildcard matching that may be applied to hostnames or addresses, a from stanza may match IP addresses using CIDR address/masklen notation. The purpose of this option is to optionally increase security: public key authentication by itself does not trust the network or name servers or anything (but the key); however, if somebody somehow steals the key, the key permits an intruder to log in from anywhere in the world. This additional option makes using a stolen key more difficult (name servers and/or routers would have to be compromised in addition to just the key). This means that you should be able to modify ~/.ssh/authorized_keys from ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment to from="pattern" ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment Where pattern is a pattern matching the client host that you're connecting from, for example by its public DNS name, IP address, or some network block: from="192.168.1.0/24" ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment (this would only allow the use of this key from a host in the 192.168.1.* network)
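To get exactly the behaviour asked about (one key usable only from the LAN, another usable from anywhere), the authorized_keys file could then contain two entries along these lines; the key material and comments are placeholders:

    from="192.168.1.0/24" ssh-ed25519 AAAA...internal-key user@lan-laptop
    ssh-ed25519 AAAA...external-key user@roaming-laptop

The first key is rejected when the connection comes from outside 192.168.1.*, while the second one works from any address.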
{ "source": [ "https://unix.stackexchange.com/questions/353044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6252/" ] }
353,206
Let's say I have a script that will be executed on various machines with root privileges, but I want to execute certain commands within that script without root. Is that possible?
Both su and sudo can do this. They run a command as another user; by default that "another user" is root, but it can be any user. For example, sudo -u www-data ls will run ls as the user www-data . However... The usual way is to run the script as the invoking user and use sudo for those commands which need it. sudo caches the credentials, so it should prompt at most once.
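A minimal sketch of a script that runs as root but drops privileges for selected commands could look like this; builduser is an assumed account name, not something taken from the question:

    #!/bin/sh
    # running as root here
    apt-get update                                   # needs root
    # run a single command as an unprivileged user
    sudo -u builduser make -C /home/builduser/project
    # the su equivalent, useful if sudo is not installed
    su -s /bin/sh -c 'whoami' builduser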
{ "source": [ "https://unix.stackexchange.com/questions/353206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222288/" ] }
353,452
I'm trying to connect to port 25 with netcat from one virtual machine to another but It's telling me no route to host although i can ping. I do have my firewall default policy set to drop but I have an exception to accept traffic for port 25 on that specific subnet. I can connect from VM 3 TO VM 2 on port 25 with nc but not from VM 2 TO 3. Here's a preview of my firewall rules for VM2 Here's a preview of my firewall rules for VM 3 When I show the listening services I have *:25 which means it's listening for all ipv4 ip addresses and :::25 for ipv6 addresses. I don't understand where the error is and why is not working both firewall rules accept traffic on port 25 so it's supposed to be connecting. I tried comparing the differences between both to see why I can connect from vm3 to vm2 but the configuration is all the same. Any suggestions on what could be the problem? Update stopping the iptable service resolves the issue but I still need those rules to be present.
Your no route to host while the machine is ping-able is the sign of a firewall that denies you access politely (i.e. with an ICMP message rather than just DROP-ping). See your REJECT lines? They match the description (REJECT with ICMP xxx). The problem is that those seemingly (#) catch-all REJECT lines sit in the middle of your rules, so the rules that follow them are never evaluated at all. (#) It is difficult to say whether those are actual catch-all lines; the output of iptables -nvL would be preferable. Put those REJECT rules at the end and everything should work as expected.
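Assuming the rules are managed with plain iptables commands, one way to check the ordering and put the ACCEPT rule above the REJECT lines is roughly the following; the subnet and rule position are placeholders, adjust them to your setup:

    iptables -nvL INPUT --line-numbers            # see where the REJECT rules sit
    iptables -I INPUT 1 -p tcp -s 192.168.56.0/24 --dport 25 -j ACCEPT
    service iptables save                          # persist, on systems using the iptables service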
{ "source": [ "https://unix.stackexchange.com/questions/353452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139921/" ] }
353,684
I want to call a Linux syscall (or at least the libc wrapper) directly from a scripting language. I don't care what scripting language - it's just important that it not be compiled (the reason basically has to do with not wanting a compiler in the dependency path, but that's neither here nor there). Are there any scripting languages (shell, Python, Ruby, etc) that allow this? In particular, it's the getrandom syscall.
Perl allows this with its syscall function: $ perldoc -f syscall syscall NUMBER, LIST Calls the system call specified as the first element of the list, passing the remaining elements as arguments to the system call. If ⋮ The documentation also gives an example of calling write(2): require 'syscall.ph'; # may need to run h2ph my $s = "hi there\n"; syscall(SYS_write(), fileno(STDOUT), $s, length $s); Can't say I've ever used this feature, though. Well, before just now to confirm the example does indeed work. This appears to work with getrandom : $ perl -E 'require "syscall.ph"; $v = " "x8; syscall(SYS_getrandom(), $v, length $v, 0); print $v' | xxd 00000000: 5790 8a6d 714f 8dbe W..mqO.. And if you don't have getrandom in your syscall.ph, then you could use the number instead. It's 318 on my Debian testing (amd64) box. Beware that Linux syscall numbers are architecture-specific.
{ "source": [ "https://unix.stackexchange.com/questions/353684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60181/" ] }
354,043
I need to convert ".xlsx" file to ".xls" using shell command. At my work we are currently using xlsx2csv command but now requirement has been changed and we need to convert all ".xlsx" files to ".xls" files for further calculation. For that, Some guy at my work has developed one command that can convert ".xlsx" to ".xls" but, that is applicable for only one sheet.. We have multiple sheets in one file. Thanks in advance....
If you install LibreOffice, you can use the following command: libreoffice --headless --convert-to xls myfile.xlsx or just: libreoffice --convert-to xls myfile.xlsx in recent version (>= 4.5) where --convert-to implies --headless . This will create myfile.xls , and keep the original myfile.xlsx —so you’ll probably need to do a cleanup after you've validated the conversion is successful.
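Since the question mentions many files, a simple loop over a directory works; this assumes the .xlsx files sit in the current directory (recent LibreOffice versions also accept several input files in one invocation):

    for f in *.xlsx; do
        libreoffice --headless --convert-to xls "$f"
    done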
{ "source": [ "https://unix.stackexchange.com/questions/354043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216530/" ] }
354,322
I have CDs for Age of Empire III and I would like to play it in a Windows 10 VM. Is this possible? I know how to insert virtual CDs (i.e., ISO files) into a VirtualBox VM (via the "Storage" settings), but physical CDs are a different story. The best solution I can think of is to add where I've mounted the CDs on my Linux system to the system via shared folders.
Yes you can, but you need to have DVD passthrough active. Go to VirtualBox's Machine > Settings > Storage > Enable Passthrough for the DVD drive. To allow an external DVD drive to be recognized by a VirtualBox Virtual Machine (VM) it must be configured in such a way that "passthrough" is enabled. Enabling Passthrough allows the underlying operating system to pass the required commands through to the device that is connected to the Virtual Machine as opposed to the host operating system instance. http://www.tempusfugit.ca/techwatch.ca/passthrough.html
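If you prefer the command line, the same setting can be toggled with VBoxManage while the VM is powered off; the VM name, controller name, port/device numbers and host device below are examples and will differ on your machine:

    VBoxManage storageattach "Windows 10" --storagectl "SATA" \
        --port 1 --device 0 --type dvddrive \
        --medium host:/dev/sr0 --passthrough on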
{ "source": [ "https://unix.stackexchange.com/questions/354322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
354,364
If I do the following: touch /tmp/test and then perform ls -la /tmp/ I could see the test file with 0 Bytes in the directory. But how does the Operating System handle a concept of 0 Bytes . If I put it in layman terms: 0 Bytes is no memory at all, hence nothing is created. Creation of a file, must or should at least require certain memory, right?
A file is (roughly) three separate things: An "inode", a metadata structure that keeps track of who owns the file, permissions, and a list of blocks on disk that actually contain the data. One or more directory entries (the file names) that point to that inode The actual blocks of data themselves When you create an empty file, you create only the inode and a directory entry pointing to that inode. Same for sparse files ( dd if=/dev/null of=sparse_file bs=10M seek=1 ). When you create hardlinks to an existing file, you just create additional directory entries that point to the same inode. I have simplified things here, but you get the idea.
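You can see this on a live system; here is a small demonstration with GNU coreutils (the paths are arbitrary):

    touch /tmp/test
    ls -i /tmp/test                             # the empty file already has an inode number
    stat -c '%s bytes, %b blocks' /tmp/test     # 0 bytes of data, 0 allocated blocks
    # a sparse file: large apparent size, almost nothing allocated on disk
    dd if=/dev/null of=/tmp/sparse bs=10M seek=1
    du -h --apparent-size /tmp/sparse
    du -h /tmp/sparse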
{ "source": [ "https://unix.stackexchange.com/questions/354364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178625/" ] }
354,377
For the purpose of a forensic mission, we must get a docker image without using the famous export from a docker command. Does copy and paste of the folder /var/lib/docker/containers in another server allow us to retrieve information without any corrupted data ? Thanks.
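A hedged note rather than a definitive answer, since the details depend on the Docker version and storage driver in use: /var/lib/docker/containers only holds per-container metadata and JSON logs, while the image and layer data live under the storage-driver directory (for example /var/lib/docker/overlay2 or /var/lib/docker/aufs), so copying only containers/ will not preserve the images. For a consistent copy the daemon is normally stopped first and the whole directory transferred, preserving hard links and extended attributes; the target host name is an example:

    systemctl stop docker
    rsync -aHAX /var/lib/docker/ root@analysis-host:/var/lib/docker/
    systemctl start docker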
{ "source": [ "https://unix.stackexchange.com/questions/354377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160856/" ] }
354,462
I have downloaded BASH for Windows 10. How would I navigate to a network address as I would in a Windows environment? I have seen SAMBA mentioned and have downloaded smbclient . I have tried: smbclient \\localhost\ I receive the error ERROR: Could not determine network interfaces, you must use a interfaces config file I am a novice user of BASH, and see this as an opportunity to be more efficient. As a bonus please show how I could accomplish some common tasks such as copying files across a network, as well as how to authenticate since this would likely be required for such operations.
In the latest Windows release "Fall Creators Update" it is possible to mount UNC paths, or any other filesystem that Windows can access, from within WSL . You can do this with the mount command as usual, with the filesystem " drvfs " provided by WSL: sudo mount -t drvfs '\\server\share' /mnt/share Single quotes are useful around the UNC path so that you don't have to escape the backslashes. You can mount on an arbitrary directory; I've used /mnt/share as an example here, but any empty directory will do. All files will show up with full a+rwx 777 permissions. The real access rights will be checked when you try to access a file, and you can get an error at that point even if it looks like the operation should succeed. Every readable file will be treated as executable. For locations that require credentials you have three options: Prior to mounting, navigate to the location using Windows' File Explorer and authenticate. WSL will inherit your credentials and permissions. This is the easiest way for a one-off. Use the net use command from a cmd prompt, or net.exe use from inside WSL ( cd /mnt/c first to suppress a warning). You'll need something like net.exe use \\server\share <PASSWORD> /USER:<USERNAME> . You can use '*' for the password to be prompted instead. Other configurations are shown with net.exe help use . Use the Windows Credential Manager to set up a stored credential. I've never done this one. I understand that Samba proper can be made to work under WSL as well, but since the host provides the same functionality I would use the built-in version from Windows when it's available. smbclient is primarily for FTP-style access to SMB servers and retrieving/putting individual files, and it should work when appropriately configured as usual.
{ "source": [ "https://unix.stackexchange.com/questions/354462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140356/" ] }
354,594
I'd like to set ssh_config so after just typing ssh my_hostname i end up in specific folder. Just like I would type cd /folder/another_one/much_much_deeper/ . How can i achieve that? EDIT. It's have been marked as duplicate of "How to ssh into dir..." yet it is not my question. I know i can execute any commands by tailing them to ssh command. My question is about /ssh_config file not the command.
There wasn't a way to do that, until OpenSSH 7.6 . From manual : RemoteCommand Specifies a command to execute on the remote machine after successfully connecting to the server. The command string extends to the end of the line, and is executed with the user's shell. Arguments to RemoteCommand accept the tokens described in the TOKENS section. So now you can have RemoteCommand cd /tmp && bash It was introduced in this commit .
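Put together in ~/.ssh/config, a host entry could look roughly like this (the host name and path are taken from the question, the remote shell is assumed to be bash); RequestTTY is needed because RemoteCommand would otherwise leave you without an interactive session, and the client must be OpenSSH 7.6 or newer:

    Host my_hostname
        HostName server.example.com
        RequestTTY yes
        RemoteCommand cd /folder/another_one/much_much_deeper && exec bash -l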
{ "source": [ "https://unix.stackexchange.com/questions/354594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223364/" ] }
354,928
I am trying to deploy django app. When I print apt-get update I see W: Unable to read /etc/apt/apt.conf.d/ - DirectoryExists (13: Permission denied) W: Unable to read /etc/apt/sources.list.d/ - DirectoryExists (13: Permission denied) W: Unable to read /etc/apt/sources.list - RealFileExists (13: Permission denied) E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) E: Unable to read /var/cache/apt/ - opendir (13: Permission denied) E: Unable to read /var/cache/apt/ - opendir (13: Permission denied) E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied) E: Unable to lock the administration directory (/var/lib/dpkg/), are you root? When I print sudo apt-get update I see -bash: sudo: command not found I tried to use su instead of sudo . But it is strange. For example I print su apt-get update And nothing happens I just see a new line, (uiserver):u78600811:~$ su apt-get update (uiserver):u78600811:~$ The same if I try to install some packages. What do I do? If it is useful info - I am using Debian (uiserver):u87600811:~$ uname -a Linux infong1559 3.14.0-ui16294-uiabi1-infong-amd64 #1 SMP Debian 3.14.79-2~ui80+4 (2016-10-20) x86_64 GNU/Linux
By default sudo is not installed on Debian, but you can install it. First switch to root: su - Install sudo by running: apt-get install sudo -y After that you need to set up users and permissions. Give sudo rights to your own user: usermod -aG sudo yourusername Make sure your sudoers file has the sudo group enabled. Run: visudo to modify the sudoers file and add the following line to it (if it is missing): # Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL You need to log in again or reboot the machine completely for the changes to take effect.
{ "source": [ "https://unix.stackexchange.com/questions/354928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104867/" ] }
355,266
How can I sort input such as this using the sort command? I would like the numbers to be sorted numerically before the letters. 10 11 12 1 13 14 15 16 17 18 19 20 21 2 22 3 4 5 6 7 8 9 X Y
As @terdon noticed, the inclusion of X and Y and the fact that the numbers run from 1 to 22 identifies this as a possible list of human chromosomes (which is why he says that chromosome M (mitochondrial) may be missing). To sort a list of numbers, one would usually use sort -n : $ sort -n -o list.sorted list where list is the unsorted list, and list.sorted will be the resulting sorted list. With -n , sort will perform a numerical sort on its input. However, since some of the input is not numerical, the result is probably not the intended; X and Y will appear first in the sorted list, not last (the sex chromosomes are usually listed after chromosome 22). However, if you use sort -V (for "version sorting"), you will actually get what you want: $ sort -V -o list.sorted list $ cat list.sorted 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 X Y This will probably still not work if you do add M as that would be sorted before X and not at the end (which I believe is how it's usually presented).
{ "source": [ "https://unix.stackexchange.com/questions/355266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223875/" ] }
355,407
I want to cut 30% from the top of the image. I know the thread How to cut a really large raster image into smaller chunks? but there is no successful approach because I cannot find a distance measure of convert from zero to the end , only by absolute value dimensions. Pseudocode convert -crop-y -units-percentage 0x30 heart.png Fig. 1 Input figure I can do the task with LaTeX's adjustbox but the output in the pdf file is not really end result but a presentation of it. So copying the image from the pdf document yields the original image. So this approach failed.
You can crop a percentage of your image though in this case, to avoid running additional commands to get the image height and width (in order to calculate crop offset which by default is relative to top-left corner) you'll also have to crop relative to gravity (so that your crop offset position is relative to the bottom-left corner of the image): convert -gravity SouthWest -crop 100x70%x+0+0 infile.jpg outfile.jpg
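If you literally want to cut 30% off the top rather than keep the bottom 70% with an explicit crop, -chop combined with a gravity also works and avoids thinking about offsets; heart.png is the input from the question, the output name is arbitrary:

    convert heart.png -gravity North -chop 0x30% heart_cut.png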
{ "source": [ "https://unix.stackexchange.com/questions/355407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
355,559
I have one quick question. Is it normal that bash (i am using 4.4.11) is not displaying lines/text that is separated / end with plain \r ? I was a bit surprised to see this behavior: $ a=$(printf "hello\ragain\rgeorge\r\n") $ echo "$a" george But "hello again" text is still there,somehow "hidden": $ echo "$a" |od -w32 -t x1c 0000000 68 65 6c 6c 6f 0d 61 67 61 69 6e 0d 67 65 6f 72 67 65 0d 0a h e l l o \r a g a i n \r g e o r g e \r \n And as soon as we just play with bash is fine.... But is this a potential security risk? What if contents of variable "a" come from outter world and include "bad commands" instead of just hello? Another test, a bit unsecure this time: $ a=$(printf "ls;\rGeorge\n") $ echo "$a" George $ eval "$a" 0 awkprof.out event-tester.log helloworld.c oneshot.sh rightclick-tester.py tmp uinput-simple.py <directory listing appears with an error message at the end for command George> Imagine a hidden rm instead of a hidden ls . Same behavior when using echo -e: $ a=$(echo -e "ls;\rGeorge\r\n"); echo "$a" George Is it me that does something wrong...?
Your echo "$a" prints "hello", then goes back to the beginning of the line (which is what \r does), print "again", goes back again, prints "george", goes back again, and goes to the next line ( \n ). It’s all perfectly normal, but as chepner points out, it doesn’t have anything to do with Bash: \r and \n are interpreted by the terminal, not by Bash (which is why you get the full output when you pipe the command to od ). You can see this better with $ a=$(printf "hellooooo\r again,\rgeorge\r\n") $ echo "$a" since that will leave the end of the overwritten text: georgen,o You can’t really use that to hide commands though, only their output (and only if you can be sure to overwrite with enough characters), unless using eval as you show (but using eval is generally not recommended). A more dangerous trick is using CSS to mask commands intended to be copied and pasted from web sites.
{ "source": [ "https://unix.stackexchange.com/questions/355559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188385/" ] }
355,610
You should never paste from web to your terminal . Instead, you should paste to your text editor, check the command and then paste to the terminal. That's OK, but what if Vim is my text editor? Could one forge a content that switches Vim to command mode and executes the malicious command?
Short answer: In many situations, Vim is vulnerable to this kind of attack (when pasting text in Insert mode). Proof of concept Using the linked article as a starting point, I was able to quickly create a web page with the following code, using HTML span elements and CSS to hide the middle part of the text so that only ls -la is visible to the casual viewer (not viewing the source). Note: the ^[ is the Escape character and the ^M is the carriage return character. Stack Exchange sanitises user input and protects against hiding of content using CSS so I’ve uploaded the proof of concept . ls ^[:echom "This could be a silent command."^Mi -la If you were in Insert mode and pasted this text into terminal Vim (with some qualifiers, see below) you would see ls -la but if you run the :messages command, you can see the results of the hidden Vim command. Defence To defend against this attack it’s best to stay in Normal mode and to paste using "*p or "+p . In Normal mode, when p utting text from a register, the full text (including the hidden part) is pasted. This same doesn’t happen in Insert mode (even if :set paste ) has been set. Bracketed paste mode Recent versions of Vim support bracketed paste mode that mitigate this type of copy-paste attack. Sato Katsura has clarified that “Support for bracketed paste appeared in Vim 8.0.210, and was most recently fixed in version 8.0.303 (released on 2nd February 2017)”. Note: As I understand it, versions of Vim with support for bracketed paste mode should protect you when pasting using Ctrl - Shift - V (most GNU/Linux desktop environments), Ctrl - V (MS Windows), Command - V (Mac OS X), Shift - Insert or a mouse middle-click. Testing I did some testing from a Lubuntu 16.04 desktop machine later but my results were confusing and inconclusive. I’ve since realised that this is because I always use GNU screen but it turns out that screen filters the escape sequence used to enable/disable the bracketed paste mode (there is a patch but it looks like it was submitted at a time when the project was not being actively maintained). In my testing, the proof of concept always works when running Vim via GNU screen, regardless of whether Vim or the terminal emulator support bracketed paste mode. Further testing would be useful but, so far, I found that support for bracketed paste mode by the terminal emulator block my Proof of Concept – as long as GNU screen isn’t blocking the relevant escape sequences. However, user nneonneo reports that careful crafting of escape sequences may be used to exit bracketed paste mode. Note that even with an up-to-date version of Vim, the Proof of Concept always works if the user pastes from the * register while in Insert mode by typing ( Ctrl - R * ). This also applies to GVim which can differentiate between typed and pasted input. In this case, Vim leaves it to the user to trust the contents of their register contents. So don’t ever use this method when pasting from an untrusted source (it’s something I often do – but I’ve now started training myself not to). Related links What you see is not what you copy (from 2009, first mention of this kind of exploit that I found) How can I protect myself from this kind of clipboard abuse? Recent discussion on vim_dev mailing list (Jan 2017) Conclusion Use Normal mode when pasting text (from the + or * registers). … or use Emacs. I hear it’s a decent operating system. :)
{ "source": [ "https://unix.stackexchange.com/questions/355610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6117/" ] }
355,763
Situation: I need a filesystem on thumbdrives that can be used across Windows and Linux. Problem: By default, the common FS between Windows and Linux are just exFAT and NTFS (at least in the more updated kernels) Question: In terms of performance on Linux (since my base OS is Linux), which is a better FS? Additional information: If there are other filesystems that you think is better and satisfies the situation, I am open to hearing it. EDIT 14/4/2020: ExFAT is being integrated into the Linux kernel and may provide better performance in comparison to NTFS (which I have learnt since that the packages that read-write to NTFS partitions are not the fastest [granted, it is a great interface]). Bottom line is still -- if you need the journal to prevent simple corruptions, go NTFS. EDIT 18/9/2021: NTFS is now being integrated into the Linux kernel (soon), and perhaps this will mean that NTFS performance will be much faster due to the lesser overhead than when it was a userland module. EDIT 15/6/2022: The NTFS3 kernel driver is officially part of the Linux Kernel as of version 5.15 (Released November 2021). Will do some testing and update this question with results.
NTFS is a Microsoft proprietary filesystem. All exFAT patents were released to the Open Invention Network, and exFAT has had a fully functional in-kernel Linux driver since version 5.4 (2019). [1] exFAT, also called FAT64, is a very simple filesystem, practically an extension of FAT32. Because of its simplicity it is well implemented in Linux and very fast, but that same simple structure makes it prone to fragmentation, so performance can degrade with use. exFAT doesn't support journaling, which means it needs a full check after an unclean shutdown. NTFS is slower than exFAT, especially on Linux, but it's more resistant to fragmentation. Due to its proprietary nature it's not as well implemented on Linux as on Windows, but in my experience it works quite well. In case of corruption, NTFS can easily be repaired under Windows (and on Linux there's ntfsfix ), and there are plenty of tools able to recover lost files. Personally, I prefer NTFS for its reliability. Another option is to use ext4 and mount it under Windows with Ext2Fsd; ext4 is better on Linux, but the Windows driver is not as mature. Ext2Fsd doesn't fully support journaling, so writing under Windows carries some risk, but ext is easier to repair under Linux than exFAT.
{ "source": [ "https://unix.stackexchange.com/questions/355763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208121/" ] }
355,775
On Debian Jessie, using php5.6 and telnet version: $ dpkg -l | grep telnet ii telnet 0.17-36 amd64 The telnet client I have written a php script to listen on port 23 for incoming tcp connections. For testing, I telnet into it, however I have noticed that it actually makes a difference wither I telnet into it like this: $ telnet localhost 23 vs like this: $ telnet localhost But according to man telnet , it should not make a difference: port Specifies a port number or service name to contact. If not specified, the telnet port (23) is used. If I do not specify the port, then I get some weird noise on the line. Or maybe its not noise? But if I do specify the port then I do not get this noise on the line. The noise is the following set of ascii characters: <FF><FD><03><FF><FB><18><FF><FB><1F><FF><FB><20><FF><FB><21><FF><FB><22><FF><FB><27><FF><FD><05> And just in case this is due to a bug in my server-side code, here is a cut down version of the script, which does exhibit the noise (though I don't think there are any bugs in the code, I just include this because someone is bound to ask): #!/usr/bin/php <?php set_time_limit(0); // infinite execution time for this script define("LISTEN_ADDRESS", "127.0.0.1"); $sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP); socket_set_option($sock, SOL_SOCKET, SO_RCVTIMEO, array('sec' => 30, 'usec' => 0)); // timeout after 30 sec socket_bind($sock, LISTEN_ADDRESS, 23); // port = 23 socket_listen($sock); echo "waiting for a client to connect...\n"; // accept incoming requests and handle them as child processes // block for 30 seconds or until there is a connection. $client = socket_accept($sock); //get the handle to this client echo "got a connection. client handle is $client\n"; $raw_data = socket_read($client, 1024); $human_readable_data = human_str($raw_data); echo "raw data: [$raw_data], human readable data: [$human_readable_data]\n"; echo "closing the connection\n"; socket_close($client); socket_close($sock); function human_str($str) { $strlen = strlen($str); $new_str = ""; // init for($i = 0; $i < $strlen; $i++) { $new_str .= sprintf("<%02X>", ord($str[$i])); } return $new_str; } ?> And the output from the script (from connecting like so: telnet localhost ) is: waiting for a client to connect... got a connection. client handle is Resource id #5 raw data: [�������� ��!��"��'��], human readable data: [<FF><FD><03><FF><FB><18><FF><FB><1F><FF><FB><20><FF><FB><21><FF><FB><22><FF><FB><27><FF><FD><05>] closing the connection But when connecting like telnet localhost 23 (and issuing the word hi ) the output is: waiting for a client to connect... got a connection. client handle is Resource id #5 raw data: [hi ], human readable data: [<68><69><0D><0A>] closing the connection So my question is whether this is expected behavior from the telnet client, or whether this is noise? It is very consistent - its always the same data - so it could be some kind of handshake? Here is the "noise" string again with spaces and without spaces, in case its more useful: FFFD03FFFB18FFFB1FFFFB20FFFB21FFFB22FFFB27FFFD05 FF FD 03 FF FB 18 FF FB 1F FF FB 20 FF FB 21 FF FB 22 FF FB 27 FF FD 05
telnet is not netcat . The telnet protocol is more than raw TCP. Among other things it can have a number of options , and the "noise" you're seeing is the negotiation of these options between your client and the server. When you specify a port you don't see any noise because according to the manual: When connecting to a non-standard port, telnet omits any automatic initiation of TELNET options. When the port number is preceded by a minus sign, the initial option negotiation is done. So apparently your implementation of telnet disables option negotiation when you specify a port (even when the port is 23), and re-enables it when the port is preceded by a minus sign. On a more general note, it's generally safe to forget about telnet these days. Use netcat instead if you need a simple plain TCP client (or server, for that matter).
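For a raw TCP test against the PHP listener, netcat sends exactly what you type and nothing more, so none of that negotiation shows up; for example:

    nc localhost 25
    # or scripted:
    printf 'hi\r\n' | nc localhost 25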
{ "source": [ "https://unix.stackexchange.com/questions/355775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5451/" ] }
356,385
I have a folder named 'sample' and it has 3 files in it. I want to write a shell script which will read these files inside the sample folder and post it to an HTTP site using curl. I have written the following for listing files inside the folder: for dir in sample/*; do echo $dir; done But it gives me the following output: sample/log sample/clk sample/demo It is attaching the parent folder in it. I want the output as follows (without the parent folder name) log clk demo How do I do this?
Use basename to strip the leading path off of the files: for file in sample/*; do echo "$(basename "$file")" done Though why not: ( cd sample; ls )
{ "source": [ "https://unix.stackexchange.com/questions/356385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224290/" ] }
356,569
When I'm trying to connect to x11vnc server started on Ubuntu 16.10 x11vnc The "Screen Sharing" app on on OS X 10.11.6 just hangs. How can I fix this?
If you want to connect to x11vnc server using "Screen Sharing" app on OS X, you need to tweak the x11vnc starting command: x11vnc -display :0 -noxrecord -noxfixes -noxdamage -forever -passwd 123456 You can't use -ncache You have to use -passwd [source]
{ "source": [ "https://unix.stackexchange.com/questions/356569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67785/" ] }
356,576
I am quite new to bash scripting and so any help would be much appreciated. Below is what I want to achieve. I have two text files. I want to delete all of the lines on the first file where it matches any of my string on the second file before the a comma. e.g. File 1: this_is_a_test.txt,11dsdsdsdsd this_is_a_test24.txt,545467ddd this_is_a_test22,121244442 File 2: this_is_a_test.txt this_is_a_test24.txt this_is_a_test22 Desired Output: Blank
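A possible approach, sketched here under the assumption that the part before the comma should be treated as the key: drop every line of the first file whose key appears in the second file.

    awk -F, 'NR==FNR { skip[$0]; next } !($1 in skip)' file2.txt file1.txt
    # or, if substring matches anywhere on the line are acceptable:
    grep -v -F -f file2.txt file1.txt

With the sample data both commands leave nothing, which matches the desired blank output.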
{ "source": [ "https://unix.stackexchange.com/questions/356576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224879/" ] }
357,928
AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets which their destination address is not equal to its ip. I want to develop an application that monitors the internet usage of users. Each user has a fixed IP address. I and some other people are connected to a DES-108 8-Port Fast Ethernet Unmanaged Desktop Switch As said earlier I want to capture all the traffics from all users not only those packets that are belong to me. How should I force my NIC or other components to receive all of packets?
AFAIK, the NIC receives all packets from the wire in a Local Area Network but rejects those packets which their destination address is not equal to its ip. Correction: it rejects those packets which their destination MAC address is not equal to its MAC address (or multicast or any additional addresses in its filter. Packet capture utilities can trivially put the network device into promiscuous mode, which is to say that the above check is bypassed and the device accepts everything it receives. In fact, this is usually the default: with tcpdump , you have to specify the -p option in order to not do it. The more important issue is whether the packets you are interested are even being carried down the wire to your sniffing port at all. Since you are using an unmanaged ethernet switch, they almost certainly are not. The switch is deciding to prune packets that don't belong to you from your port before your network device can hope to see them. You need to connect to a specially configured mirroring or monitoring port on a managed ethernet switch in order to do this.
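For completeness, the capture side is straightforward once the traffic actually reaches your port (for example via a mirror/monitoring port on a managed switch); the interface name is an example:

    sudo tcpdump -i eth0 -n              # tcpdump enables promiscuous mode by default
    sudo ip link set eth0 promisc on     # or set it explicitly for other tools
    ip -d link show eth0                 # "promiscuity 1" confirms the setting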
{ "source": [ "https://unix.stackexchange.com/questions/357928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172829/" ] }
358,079
My company resells an application whose brand name is mixed case, for example "ApplicationName". The application's installer creates all paths and file names in this standard. E.g. The main directory is /opt/ApplicationName , the init file is called ApplicationName so I have to run service ApplicationName status and so on. To me, this breaks all sensible conventions and I feel the files and directories should all be lower case (there is precedent in other applications such as MySQL, whose files and dirs are all called mysql , even applications like Apache and Tomcat do away with the preceding upper case letter). If I raise this as a bug report, I'd like to put up a stronger argument than just "I think it's wrong". So is it dictated in something like the POSIX standard that system files like this should be lower case?
The POSIX standard has a section with guidelines for conforming utilities (i.e., "such as those written specific to a local system or that are components of a larger application") that says Utility names should be between two and nine characters, inclusive. Utility names should include lowercase letters (the lower character classification) and digits only from the portable character set. [ref: 12.2 Utility Syntax Guidelines ] It's unclear to me whether the use of the words "should include" really means "should only include". (The consensus in the comments below is that it means "should only include"). An application on a Unix system that does not claim to be a POSIX conformant utility may otherwise use whatever name it wants. If it does claim to be a POSIX conformant utility that is part of the POSIX shell utilities , the text after the guidelines in section 12.2 says that "should" changes meaning to "shall". There are no similar guideline regarding directory names as far as I know. macOS (which is a certified UNIX 03 product when running on an Intel-based Mac computer) uses /Users as the prefix for user's home directories, for example, as well as a number of other mixed-case directory names.
{ "source": [ "https://unix.stackexchange.com/questions/358079", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226207/" ] }
358,224
I believe (not sure) that the owner of a file/directory and the root user are the only users that are allowed to change the permissions of a file/directory. Am I correct or are there other users that are also allowed to change the permissions?
Only the owner and root (super user) are allowed to change the permissions of a file or directory. This means that the owner and the super user can set the read ( r ), write ( w ) and execute ( x ) permissions. But changing the ownership (user/group) of files and directories with the commands chown / chgrp is only allowed to root .
{ "source": [ "https://unix.stackexchange.com/questions/358224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226341/" ] }
358,229
can't mount my hard disk with ntfs file format in linux mint
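There is not enough detail in the question for a definitive fix, but a common recipe on Linux Mint is to install the ntfs-3g driver and mount the partition explicitly; the device name below is an example, check yours with lsblk:

    sudo apt-get install ntfs-3g
    lsblk -f                                     # find the NTFS partition, e.g. /dev/sdb1
    sudo mkdir -p /mnt/windows
    sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows

If the mount is refused because the volume is marked dirty (typically after Windows fast startup or hibernation), shut Windows down fully or, as a last resort, run sudo ntfsfix /dev/sdb1 before mounting.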
{ "source": [ "https://unix.stackexchange.com/questions/358229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226347/" ] }
358,272
The variable BUILDNUMBER is set to value 230. I expect 230_ to be printed for the command echo $BUILDNUMBER_ but the output is empty as shown below. # echo $BUILDNUMBER_ # echo $BUILDNUMBER 230
The command echo $BUILDNUMBER_ is going to print the value of variable $BUILDNUMBER_ which is not set (underscore is a valid character for a variable name as explicitly noted by Jeff Schaller) You just need to apply braces (curly brackets) around the variable name or use the most rigid printf tool: echo "${BUILDNUMBER}_" printf '%s_\n' "$BUILDNUMBER" PS: Always quote your variables.
{ "source": [ "https://unix.stackexchange.com/questions/358272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29049/" ] }
358,319
I need to find out what's contributing to the disk usage on a specific filesystem ( /dev/sda2 ): $ df -h / Filesystem Size Used Avail Use% Mounted on /dev/sda2 96G 82G 9.9G 90% / I can't just do du -csh / because I have many other filesystems mounted underneath / , some of which are huge and slow: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 96G 82G 9.9G 90% / /dev/sdb1 5.2T 3.7T 1.3T 76% /disk3 /dev/sda1 99M 18M 76M 20% /boot tmpfs 16G 4.0K 16G 1% /dev/shm nfshome.XXX.net:/home/userA 5.3T 1.6T 3.5T 32% /home/userA nfshome.XXX.net:/home/userB 5.3T 1.6T 3.5T 32% /home/userB How can I retrieve disk usage only on /dev/sda2 ? None of these work: Attempt 1: $ du -csh /dev/sda2 0 /dev/sda2 0 total Attempt 2: $ cd /dev/sda2/ cd: not a directory: /dev/sda2/
Use the -x (single file system) option: du -cshx / This instructs du to only consider directories of / which are on the same file system.
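To see which directories on that filesystem are the big consumers, the same flag combines well with a depth limit and a human-readable sort (GNU du and sort assumed):

    du -xh --max-depth=1 / | sort -h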
{ "source": [ "https://unix.stackexchange.com/questions/358319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
358,994
I understand* the primary admin user is given a user ID of 501 and subsequent users get incremental numbers ( 502 , 503 , …). But why 501 ? What’s special about 50x , what’s the historical/technical reason for this choice? * I started looking into this when I got curious as to why my external hard drive had all its trashed files inside .Trashes/501 . My search led me to the conclusion 501 is the user ID for the primary admin in *nix systems (I am on macOS), but not why .
Many Unix systems start handing out UIDs to users at some particular number. Solaris will give the first general purpose user UID 100, on OpenBSD it's 1000, and on macOS it appears it's UID 501 that will be the UID for the first created interactive user, which is also likely a macOS admin user (which is not the same as the root user). The accounts with lower numbers are system user accounts for daemons etc. This makes it easier to distinguish interactive "human" accounts from system services accounts. This may also make user management, authentication etc. easier in various software. YP/NIS , a slightly outdated system for keeping user accounts (and other information) on a central server without having to create local users on multiple client machines, for example, has a MINUID and MAXUID setting for the range of user accounts that it should handle. On some Unices, a range of the system service accounts may be allocated to third-party software, such as UIDs 50 to 999 on FreeBSD or 500 to 999 on OpenBSD. All of these ranges are chosen by the makers and maintainers of the individual Unices according to the expected needs of their operating system. The POSIX standard does not say anything about these things. The lowest and highest allocatable UID (and GID) is often configurable by a local admin (see your adduser manual). Most Unices reserve UID 0 for root , the super-user, and assigns the highest possible UID (or at least some high value) to the user nobody (Solaris uses UID 60001, OpenBSD uses 32768, but UIDs may be much larger than that). (See comments about UID 0 always being root (or not), which is a slight digression from this topic) Update: The OpenBSD project recently rejected the idea of randomizing UID/GID allocation.
{ "source": [ "https://unix.stackexchange.com/questions/358994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49593/" ] }
359,038
I am studying the history of computers to better understand why Linux terminals work the way they do. I have read that in the mid 1970's to the mid 1980's, most people used real terminals (as opposed to terminal emulators) to communicate with large computers, this is an example of a real terminal: But I am unable to find information about these large computers that the real terminals were connected to. Can anybody provide a name/picture of such large computer?
That terminal would typically be connected to a PDP-11 , or a VAX-11 (it can be used with many, many different types of computers though!). The PDP-11, like many mini-computers, was often housed in a rack: You can see detailed photos of a Data General Nova rack (along with a terminal) on our sister Retrocomputing site . Some variants were housed in cabinets; this was also typically the case for Vaxen: (Both photos taken from the Wikipedia articles linked above.) Terminals were used with computers of all sizes, from room-sized mainframes such as the PDP-10 to tower PC-sized VAXServers (thanks to hobbs for the link to that photo — the server shown there is smaller than many PC servers of the time!) or even pizza-box workstations in the mid-nineties. You can still connect many of these terminals to a modern PC running Linux or various other operating systems, as long as the PC has serial ports, or USB-to-RS-232 adapters (as pointed out by Michael Kjörling ), and you use null-modem cables to connect them (as pointed out by Mark Plotnick ). Check out Dinosaur’s Pen for many, many more photos of such systems in actual use. Some applications still in production use software dating back to these kinds of systems, although commonly the hardware is emulated; an example was given recently at Systems we love .
{ "source": [ "https://unix.stackexchange.com/questions/359038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226968/" ] }
359,225
I would like to create self-signed certificates on the fly with arbitrary start- and end-dates, including end-dates in the past . I would prefer to use standard tools, e.g., OpenSSL, but anything that gets the job done would be great. The Stack Overflow question How to generate openssl certificate with expiry less than one day? asks a similar question, but I want my certificate to be self-signed. In case you were wondering, the certificates are needed for automated testing.
You have two ways of creating certificates in the past. Either faking the time (1)(2), or defining the time interval when signing the certificate (3). 1) Firstly, about faking the time: to make one program think it is in a different date from the system, have a look at libfaketime and faketime To install it in Debian: sudo apt-get install faketime You would then use faketime before the openssl command. For examples of use: $faketime 'last friday 5 pm' /bin/date Fri Apr 14 17:00:00 WEST 2017 $faketime '2008-12-24 08:15:42' /bin/date Wed Dec 24 08:15:42 WET 2008 From man faketime : The given command will be tricked into believing that the current system time is the one specified in the timestamp. The wall clock will continue to run from this date and time unless specified otherwise (see advanced options). Actually, faketime is a simple wrapper for libfaketime, which uses the LD_PRELOAD mechanism to load a small library which intercepts system calls to functions such as time(2) and fstat(2). So for instance, in your case, you can very well define a date of 2008, and create then a certificate with the validity of 2 years up to 2010. faketime '2008-12-24 08:15:42' openssl ... As a side note, this utility can be used in several Unix versions, including MacOS, as an wrapper to any kind of programs (not exclusive to the command line). As a clarification, only the binaries loaded with this method (and their children) have their time changed, and the fake time does not affect the current time of the rest of the system. 2) As @Wyzard states, you also have the datefudge package which is very similar in use to faketime . As differences, datefudge does not influence fstat (i.e. does not change file time creation). It also has it´s own library, datefudge.so, that it loads using LD_PRELOAD. It also has a -s static time where the time referenced is always returned despite how many extra seconds have passed. $ datefudge --static "2007-04-01 10:23" sh -c "sleep 3; date -R" Sun, 01 Apr 2007 10:23:00 +0100 3) Besides faking the time, and even more simply, you can also define the starting point and ending point of validity of the certificate when signing the certificate in OpenSSL. The misconception of the question you link to in your question, is that certificate validity is not defined at request time (at the CSR request), but when signing it. When using openssl ca to create the self-signed certificate, add the options -startdate and -enddate . The date format in those two options, according to openssl sources at openssl/crypto/x509/x509_vfy.c , is ASN1_TIME aka ASN1UTCTime: the format must be either YYMMDDHHMMSSZ or YYYYMMDDHHMMSSZ. Quoting openssl/crypto/x509/x509_vfy.c : int X509_cmp_time(const ASN1_TIME *ctm, time_t *cmp_time) { static const size_t utctime_length = sizeof("YYMMDDHHMMSSZ") - 1; static const size_t generalizedtime_length = sizeof("YYYYMMDDHHMMSSZ") - 1; ASN1_TIME *asn1_cmp_time = NULL; int i, day, sec, ret = 0; /* * Note that ASN.1 allows much more slack in the time format than RFC5280. * In RFC5280, the representation is fixed: * UTCTime: YYMMDDHHMMSSZ * GeneralizedTime: YYYYMMDDHHMMSSZ * * We do NOT currently enforce the following RFC 5280 requirement: * "CAs conforming to this profile MUST always encode certificate * validity dates through the year 2049 as UTCTime; certificate validity * dates in 2050 or later MUST be encoded as GeneralizedTime." */ And from the CHANGE log (2038 bug?) - This change log is just as an additional footnote, as it only concerns those using directly the API. 
Changes between 1.1.0e and 1.1.1 [xx XXX xxxx] *) Add the ASN.1 types INT32, UINT32, INT64, UINT64 and variants prefixed with Z. These are meant to replace LONG and ZLONG and to be size safe. The use of LONG and ZLONG is discouraged and scheduled for deprecation in OpenSSL 1.2.0. So, creating a certificate from the 1st of January 2008 to the 1st of January of 2010, can be done as: openssl ca -config /path/to/myca.conf -in req.csr -out ourdomain.pem \ -startdate 200801010000Z -enddate 201001010000Z or openssl ca -config /path/to/myca.conf -in req.csr -out ourdomain.pem \ -startdate 0801010000Z -enddate 1001010000Z -startdate and -enddate do appear in the openssl sources and CHANGE log; as @guntbert noted, while they do not appear in the main man openssl page, they also appear in man ca : -startdate date this allows the start date to be explicitly set. The format of the date is YYMMDDHHMMSSZ (the same as an ASN1 UTCTime structure). -enddate date this allows the expiry date to be explicitly set. The format of the date is YYMMDDHHMMSSZ (the same as an ASN1 UTCTime structure). Quoting openssl/CHANGE : Changes between 0.9.3a and 0.9.4 [09 Aug 1999] *) Fix -startdate and -enddate (which was missing) arguments to 'ca' program. P.S. As for the chosen answer of the question you reference from StackExchange: it is generally a bad idea to change the system time, especially in production systems; and with the methods in this answer you do not need root privileges when using them.
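Since the question is specifically about self-signed certificates for automated tests, the faketime approach combines nicely with openssl req -x509, so no CA setup is needed at all; the key size, dates and subject below are arbitrary:

    faketime '2008-01-01 00:00:00' \
        openssl req -x509 -newkey rsa:2048 -nodes \
            -keyout key.pem -out cert.pem -days 730 -subj '/CN=expired.test'

The resulting certificate is valid from the start of 2008 for two years and is therefore already expired today.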
{ "source": [ "https://unix.stackexchange.com/questions/359225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14097/" ] }
359,303
How can I check the validity of a gz file, I do not have the hash of the file, I'm using gzip -t but it is not returning any output.
The gzip -t command only returns an exit code to the shell saying whether the file passed the integrity test or not. Example (in a script): if gzip -t file.gz; then echo 'file is ok' else echo 'file is corrupt' fi Adding -v will make it actually report the result with a message. Example: $ gzip -v -t file.gz file.gz: OK So the file is ok. Let's corrupt the file (by writing the character 0 at byte 40 in the file) and try again. $ dd seek=40 bs=1 count=1 of=file.gz <<<"0" 1+0 records in 1+0 records out 1 bytes transferred in 0.000 secs (2028 bytes/sec) $ gzip -v -t file.gz file.gz: gzip: file.gz: Inappropriate file type or format The integrity of a file with respect to its compression does not guarantee that the file contents is what you believe it is. If you have an MD5 checksum (or some similar checksum) of the file from whomever provided it, then you would be able to get an additional confirmation that the file not only is a valid gzip archive, but also that its contents is what you expect it to be.
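If there are many archives to verify, a small loop reports only the broken ones; the glob is an example:

    for f in *.gz; do
        gzip -t "$f" 2>/dev/null || echo "$f: corrupt"
    done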
{ "source": [ "https://unix.stackexchange.com/questions/359303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17678/" ] }
359,470
Is there a way to re-write the command structure A && B || C | D so that either B or C is piped into D? With the current command either only B or both C and D are run. For example:
Yes, in bash you can use parentheses: (A && B || C) | D This way the output of A && B || C will be piped into D .
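The same grouping can also be written with a brace group, which avoids spawning one extra subshell (bash still runs each side of the pipeline in its own process); note the required spaces around the braces and the semicolon before the closing one:

    { A && B || C; } | D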
{ "source": [ "https://unix.stackexchange.com/questions/359470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
359,832
I have a number of large CSV files and would like them in TSV (tab separated format). The complication is that there are commas in the fields of the CSV file, eg: A,,C,"D,E,F","G",I,"K,L,M",Z Expected output: A C D,E,F G I K,L,M Z (where whitespace in between are 'hard' tabs) I have Perl, Python, and coreutils installed on this server.
Python Add to file named csv2tab , and make it executable touch csv2tab && chmod u+x csv2tab Add to it #!/usr/bin/env python import csv, sys csv.writer(sys.stdout, dialect='excel-tab').writerows(csv.reader(sys.stdin)) Test runs $ echo 'A,,C,"D,E,F","G",I,"K,L,M",Z' | ./csv2tab A C D,E,F G I K,L,M Z $ ./csv2tab < data.csv > data.tsv && head data.tsv 1A C D,E,F G I K,L,M Z 2A C D,E,F G I K,L,M Z 3A C D,E,F G I K,L,M Z
{ "source": [ "https://unix.stackexchange.com/questions/359832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
359,902
I have CentOS 5.6 on my laptop. When I type yum update , I get the below error: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile YumRepo Error: All mirror URLs are not using ftp, http[s] or file. Eg. Invalid release/ removing mirrorlist with no valid mirrors: /var/cache/yum/base/mirrorlist.txt Error: Cannot find a valid baseurl for repo: base Below is my /etc/yum.repos.d/CentOS-Base.repo file (I didn't change anything in it): [base] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os #baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 #released updates [updates] name=CentOS-$releasever - Updates mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates #baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 #additional packages that may be useful [extras] name=CentOS-$releasever - Extras mirrorlist=http://mirrorlist.centos.org/? release=$releasever&arch=$basearch&repo=extras #baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 #additional packages that extend functionality of existing packages [centosplus] name=CentOS-$releasever - Plus mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus #baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/ gpgcheck=1 enabled=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 #contrib - packages by Centos Users [contrib] name=CentOS-$releasever - Contrib mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib #baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/ gpgcheck=1 enabled=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 Below is my /etc/yum.conf file (I didn't change anything in it): [main] cachedir=/var/cache/yum keepcache=0 debuglevel=2 logfile=/var/log/yum.log distroverpkg=redhat-release tolerant=1 exactarch=1 obsoletes=1 gpgcheck=1 plugins=1 bugtracker_url=http://bugs.centos.org/set_project.php?project_id=16&ref=http://bugs.centos.org/bug_report_page.php?category=yum Why I can't update my CentoOS to 5.11? Previously I was able to update CentOS to 5.11 without any problems. Can someone please help me?
CentOS-5 reached end-of-life on March 31, 2017. This means that no new updates will be released by Red Hat. The current 5.11 tree you seek has been moved to vault.centos.org . To obtain access to the 5.11 branch, edit /etc/yum.repos.d/CentOS-Base.repo and comment out the mirrorlist directives. Furthermore, in each enabled section add baseurl=http://vault.centos.org/5.11/os/$basearch or baseurl=http://vault.centos.org/5.11/updates/$basearch , appropriately. For example, for a base repo that looks like: [base] name=CentOS-$releasever - Base mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5 ...change to: [base] name=CentOS-$releasever - Base # mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os baseurl=http://vault.centos.org/5.11/os/$basearch gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
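If several repo files need the same treatment, the edit can be scripted; this is only a sketch (GNU sed assumed), so keep the .bak backup and double-check the result before running yum:

    cd /etc/yum.repos.d
    sed -i.bak -e 's/^mirrorlist=/#mirrorlist=/' \
        -e 's|^#baseurl=http://mirror.centos.org/centos/\$releasever|baseurl=http://vault.centos.org/5.11|' \
        CentOS-Base.repo
    yum clean all
    yum update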
{ "source": [ "https://unix.stackexchange.com/questions/359902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111388/" ] }
360,375
Is it good to delete the variables used in a shell script at the end of the script? rm -rf $abc rm -rf $def or unset $abc unset $def and so on. Is this actually a good practice?
This is a very bad practice. rm deletes files . It is nothing to do with variables. In any case, the variables themselves will be disposed of when the script ends and the operating system reclaims the shell's memory. In the simple case, a file with the same name as the value of one of the variables will be deleted by this construction: abc=filename rm -f $abc # Deletes "filename" in the current directory It gets worse. If abc and def contain the names of files as individual words separated by a space (or any other character of IFS ), you will delete those files, and wildcards like * will be expanded too if they appear in any of the words. abc='hello world' rm -f $abc # Deletes file "hello" and "world" (leaves "hello world" alone) abc='5 * 3' rm -f $abc # Deletes all files, because * is expanded (!) def='-r /' rm -f $def # Really deletes *all* files this user can access Shell parameter expansion with $var is subject to word splitting , where every character of the IFS variable divides the variable into different arguments. Each word is then subject to filename expansion , which uses * , ? , and [abc...] patterns to create filenames. This could get very bad, depending on what your variables have in them. Do not do this. There is no need to blank or unset variables at the end of a shell script in any way.
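If, despite the above, you still want to clear the variables, note that the question's unset $abc syntax is also wrong: unset takes the name of the variable, not its expanded value. A minimal illustration (hypothetical variable names):
abc='hello world'
unset $abc       # wrong: expands to `unset hello world` and tries to unset variables named "hello" and "world"
unset abc        # correct: operates on the name "abc" itself
Even the correct form is unnecessary at the end of a script, since the shell's memory is reclaimed when the script exits.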
{ "source": [ "https://unix.stackexchange.com/questions/360375", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152598/" ] }
360,545
I recently decided to change my PS1 variable to accommodate some pretty Solarized colors for my terminal viewing pleasure. When not in a tmux session, everything is great! Rainbows, ponies, unicorns and a distinguishable prompt! Cool! The problem is within tmux, however. I've verified that the value of PS1 is what I expect it to be and the same as it is when tmux isn't running, namely \[\033]0;\w\007\]\[\[\]\]\u\[\]@\[\[\]\]\h\[\]:\[\]\W\[\]$ \[\] . All of my aliases, etc. in my .bash_profile are also functioning as expected. tmux is also displaying colors without incident, as echo -ne "\033[1;33m hi" behaves as expected as does gls --color . The current relevant line in my .bash_profile is export PS1="\[\033]0;\w\007\]\[\[\]\]\u\[\]@\[\[\]\]\h\[\]:\[\]\W\[\]$ \[\]" , although originally I was sourcing a script located in a .bash_prompt file to handle some conditionals, etc. I tried reverting to the simpler version. Executing bash will cause the prompt to colorize, but must be done in each pane. export PS1=[that long string I've already posted] will not. My .tmux.conf is as follows: set-option -g default-command "reattach-to-user-namespace -l /usr/local/bin/bash" set -g default-terminal "xterm-256color" set-window-option -g automatic-rename on bind '"' split-window -c "#{pane_current_path}" bind % split-window -h -c "#{pane_current_path}" bind c new-window -c "#{pane_current_path}" Relevant portions of .bash_profile: export TERM="xterm-256color" if which tmux >/dev/null 2>&1; then test -z "$TMUX" && (tmux attach || tmux new-session) fi I'm using macOS Sierra, iTerm 2, I've tried both the current homebrew version of bash and the system bash (it's currently using the homebrew), tmux 2.4. I also placed touch testing_touch_from_bash_profile in my .bash_profile while in a tmux session with two panes, killed one pane, opened a pane and verified that the file was in fact created. echo $TERM returns xterm-256color . I've ensured that when exiting tmux to test settings changes that I've exited tmux and that no tmux process is currently running on the system via ps -ax | grep tmux . Oddly, sourcing the .bash_prompt script also changes the color so long as I do it within each tmux pane. I've looked at https://stackoverflow.com/questions/21005966/tmux-prompt-not-following-normal-bash-prompt-ps1-w and tried adding the --login flag after the bash call in the first line of my .tmux.conf. Launching tmux with tmux new bash will cause the first pane to colorize, but subsequent panes will not. The $PS1 variable is being honored for seemingly all aspects except colorizing any of the fields. Anyone have any ideas?
On my machine the solution is to add set -g default-terminal "xterm-256color" to ~/.tmux.conf .
{ "source": [ "https://unix.stackexchange.com/questions/360545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228084/" ] }
360,547
In my LAN I am using a PFSense server with one DHCP server on it. I need to block a second DHCP server showing up in my LAN. I think I can use the PfSense firewall to refuse the other DHCP server IP address. What should I do?
On my machine the solution is to add set -g default-terminal "xterm-256color" to ~/.tmux.conf .
{ "source": [ "https://unix.stackexchange.com/questions/360547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227734/" ] }
360,559
I am trying to install Debian but I don't know how to partition the disk. I have a 1 TB hard disk. I want to give 60 GB to the Debian system files, 2 GB to swap, and use the rest for media files. What are / , /home and /usr/local ?
On my machine the solution is to add set -g default-terminal "xterm-256color" to ~/.tmux.conf .
{ "source": [ "https://unix.stackexchange.com/questions/360559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227559/" ] }
361,134
I have a variable named descr which can contain a string Blah: -> r1-ae0-2 / [123] , -> s7-Gi0-0-1:1-US / Foo , etc. I want to get the -> r1-ae0-2 , -> s7-Gi0-0-1:1-US part from the string. At the moment I use descr=$(grep -oP '\->\s*\S+' <<< "$descr") for this. Is there a better way to do this? Is it also possible to do this with parameter expansion?
ksh93 and zsh have back-reference (or more accurately 1 , references to capture groups in the replacement) support inside ${var/pattern/replacement} , not bash . ksh93 : $ var='Blah: -> r1-ae0-2 / [123]' $ printf '%s\n' "${var/*@(->*([[:space:]])+([^[:space:]]))*/\1}" -> r1-ae0-2 zsh : $ var='Blah: -> r1-ae0-2 / [123]' $ set -o extendedglob $ printf '%s\n' "${var/(#b)*(->[[:space:]]#[^[:space:]]##)*/$match[1]}" -> r1-ae0-2 ( mksh man page also mentions that future versions will support it with ${KSH_MATCH[1]} for the first capture group. Not available yet as of 2017-04-25). However, with bash , you can do: $ [[ $var =~ -\>[[:space:]]*[^[:space:]]+ ]] && printf '%s\n' "${BASH_REMATCH[0]}" -> r1-ae0-2 Which is better as it checks that the pattern is found first. If your system's regexps support \s / \S , you can also do: re='->\s*\S+' [[ $var =~ $re ]] With zsh , you can get the full power of PCREs with: $ set -o rematchpcre $ [[ $var =~ '->\s*\S+' ]] && printf '%s\n' $MATCH -> r1-ae0-2 With zsh -o extendedglob , see also: $ printf '%s\n' ${(SM)var##-\>[[:space:]]#[^[:space:]]##} -> r1-ae0-2 Portably: $ expr " $var" : '.*\(->[[:space:]]*[^[:space:]]\{1,\}\)' -> r1-ae0-2 If there are several occurrences of the pattern in the string, the behaviour will vary with all those solutions. However none of them will give you a newline separated list of all matches like in your GNU- grep -based solution. To do that, you'd need to do the looping by hand. For instance, with bash : re='(->\s*\S+)(.*)' while [[ $var =~ $re ]]; do printf '%s\n' "${BASH_REMATCH[1]}" var=${BASH_REMATCH[2]} done With zsh , you could resort to this kind of trick to store all the matches in an array: set -o extendedglob matches=() n=0 : ${var//(#m)->[[:space:]]#[^[:space:]]##/${matches[++n]::=$MATCH}} printf '%s\n' $matches 1 back-references does more commonly designate a pattern that references what was matched by an earlier group. For instance, the \(.\)\1 basic regular expression matches a single character followed by that same character (it matches on aa , not on ab ). That \1 is a back-reference to that \(.\) capture group in the same pattern. ksh93 does support back-references in its patterns (for instance ls -d -- @(?)\1 will list the file names that consist of two identical characters), not other shells. Standard BREs and PCREs support back-references but not standard ERE, though some ERE implementations support it as an extension. bash 's [[ foo =~ re ]] uses EREs. [[ aa =~ (.)\1 ]] will not match, but re='(.)\1'; [[ aa =~ $re ]] may if the system's EREs support it.
{ "source": [ "https://unix.stackexchange.com/questions/361134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
361,213
Adding a gpg key via apt-key systematically fails since I've switched to Ubuntu 17.04 (I doubt it's directly related though). Example with Spotify's repo key : $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys BBEBDCB318AD50EC6865090613B00F1FD2C19886 Executing: /tmp/apt-key-gpghome.wRE6z9GBF8/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys BBEBDCB318AD50EC6865090613B00F1FD2C19886 gpg: keyserver receive failed: No keyserver available Same thing if I remove the hkp:// prefix. Context: I use CNTLM to cope with the local corporate proxy. Env variables are set (in /etc/environment ): $ env | grep 3128 https_proxy=http://localhost:3128 http_proxy=http://localhost:3128 ftp_proxy=http://localhost:3128 /etc/apt/apt.conf is configured ( apt commands are working fine): $ cat /etc/apt/apt.conf Acquire::http::Proxy "http://localhost:3128"; Acquire::https::Proxy "http://localhost:3128"; Acquire::ftp::Proxy "http://localhost:3128"; Finally, the specified keyserver seems reachable: $ curl keyserver.ubuntu.com:80 <?xml version="1.0"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>SKS OpenPGP Public Key Server</title> </head> <body> [...] What can I do ? I'm not even sure on how to further debug it... Things I already tried to do, without any result: run sudo with -E (preserve env) option run apt-key adv with --keyserver-options http-proxy=http://localhost:3128/ option ( source ) run $ gpg --list-keys for some reason ( source ) use another keyserver ( --keyserver pgp.mit.edu ) remove the hkp:// part ( --keyserver keyserver.ubuntu.com:80 ) Weird thing is that I never see any "cntlm" entry in /var/log/syslog when running apt-key .
You usually have a proxy for ftp, http and https; I am seeing there hkp:// as an URL; so it should not be directed via a pure http proxy, hence failing the communication. Use this instead: sudo apt-key adv --keyserver keyserver.ubuntu.com --keyserver-options http-proxy=http://localhost:3128 --recv-keys BBEBDCB318AD50EC6865090613B00F1FD2C19886 As for the system updates, I would advise using an APT proxy, for instance, apt-cacher-ng . Another way of doing it, is searching in the public web interface, with a browser, for instance on your working station for the key you want at https://keyserver.ubuntu.com Open the site, and you got a form. In this case I used the "Search String" "Spotify"; then select "Search" ; it will list several keys. Searching for the signature/fingerprint that you mentioned in the result page: pub 4096R/D2C19886 2015-05-28 Fingerprint=BBEB DCB3 18AD 50EC 6865 0906 13B0 0F1F D2C1 9886 uid Spotify Public Repository Signing Key <[email protected]> sig sig3 D2C19886 2015-05-29 __________ 2017-11-22 [selfsig] sig sig 94558F59 2015-06-02 __________ __________ Spotify Public Repository Signing Key <[email protected]> We see this is the entry that interests us. So we click in D2C19886 and are presented with a page with the key at https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x13B00F1FD2C19886 . Public Key Server -- Get "0x13b00f1fd2c19886 " -----BEGIN PGP PUBLIC KEY BLOCK----- Version: SKS 1.1.6 Comment: Hostname: keyserver.ubuntu.com mQINBFVm7dMBEADGcdfhx/pjGtiVhsyXH4r8TrFgsGyHEsOWaYeU2JL1tEi+YI1qjpExb2Te TReDTiGEFFMWgPTS0y5HQGm+2P3XGv0pShvgg9A6FWZmZmT+tymA2zvNrdpmKdhScZ52StPL Fz9wsmXHG4DIKVuzgzuV4YxJ1i2wFtoVp8zT9ORu1BxLZ0IBwTvLRbaQGZ8DwXVAHak9cK91 Ujj6gJ1MJPohZLHH2BjrOjEl/I36jFUjK0AadznNzo08lLAi94qjtheJtuJD3IEOAlCkaknz 6vbEFpszLGlLD7GENMzJk46ObuJuvW5R2PkOU2U8jS0GaUD9Ou/SIdJ6vIdvjSs/ettc2wwd nbSdadvjovIfvEBRsEVMpRG+42B+DZpJbS9pCb8sxTJtnUy1YViZmG0++FhPGGPGzQYhC/Mz 07lsx5PkC7Kka2FCNmhauxw5deO43Ck181oQVdbt/VxmChzchUJ6N6/uOV5JKm7B9UnDNyqU Yv6goeLvFnT9ag+FCxiroTrq+dINr6d+XT/cI9WtSagfmhcekwhyfcCgYsFemAOckRifjEGF MksQlnWkGwWNoKe91KBxjgaJaazSbZRk0dFPSSmfKWaxuTwkR74pbaueyijnQJgHAjfCyzQe 9miN9DitON5l6T2gVAN3Jn1QQmV7tt5GB7amcHf5/b0oYmmRPQARAQABtD5TcG90aWZ5IFB1 YmxpYyBSZXBvc2l0b3J5IFNpZ25pbmcgS2V5IDxvcGVyYXRpb25zQHNwb3RpZnkuY29tPokB HAQQAQIABgUCVW3SWAAKCRAILM7flFWPWUk5B/wOqqD9/2Do9PyPucfUs/rrP4+M8iJLpv8U +bX/qHryTTWfpk3YuKL4+c8saHySK4HLGyxd3mdo1XMF351KrxLQvWMSSPbIRV9cSqZROOVn 2ya+3xpWk6t1omLzxtBBMOC4B5qAfWhog7ioAmzQNY5NUz5mqXVP5WbgR/G+GOszzuQUgeu1 Xxxzir3JqWQ0g8mp3EtX7dB76zxkkuTYbeVDPOvtJPn/38d3oSLUI1QJnL8pjREHeE8fO5mW ncJmyZNhkYd+rfnPk+W0ZkTr59QBIEOGMTmATtNh+x1mo5e2dW91Oj4jEWipMUouLGqbo/gJ uHFMt8RWBmy+zFYUEPYHiQI+BBMBAgAoAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUC VWg3sAUJBK3QLQAKCRATsA8f0sGYhl6hEACJ1CrYjaflKKR2Znuh0g0gM89NAwO8AA4+SpkW HagdGLo7OV/rGB3mlwD4mhaa8CbEnBT/za3jFnT19KsYQWiT21oOX/eo47ITbAspjDZTiXLi nyAcOJn+q/EFkelROzbVaxZHi6SN5kCEd8KAew8h2jZf8wWqaYVyMPNSqotUhin6YjWsu57B GixVThoMmxx3udsGAiYqt8buAANWbkUphrvtJuNCKkGym7psnS4Q5EnHPfvbYii9iAfBswX6 nZQlehva7aToN73elYL3opCArAxKAFx70bpGxb7T16KjKzkKS0a4iQ7xdbBGylb+AE/RhICa +RM5tma2YnB3pZvFM/n0BNeYReCgvxkl1rqrB1KxmFHfGqjLkb2YAZ5RYnP3gEt+nbEWxL8F O0Bhakn1RB3NqTC2oiQAUfh+66yUawUNkHRHlGAEzZAxvpfnf0hSJp734lyQZJs+zqXUAXa2 UmEZ6se62PgZRQIz5IbAVxSiGz4xIZs1yS36N2vZ34LFJa9o/HVk5OfpqZM0zjWwQIQN2b4O BizL5r4h2Mi5BHUEyYMsDZn+txoJjPPYLolRlf31sqi5MJE+cbOAXSn8PC9k4i+hrbfqFzts 47+6xgCH3aXbhUkJh1CH/0/qEXfTPYTyayijm4rdvSBczzEORWGT5E38oV9h1eUqp4nVPg== =/qip -----END PGP PUBLIC KEY BLOCK----- You cut between the line that begins with "-----BEGIN" and the 
line ending with "-----END", including those lines, and paste to a file, say spotify.pgp on the intended server you want to import that key. (do not cut it from here, as I added 4 spaces before each line while formatting) Finally to import the key into the server you do: $sudo apt-key add spotify.pgp OK
{ "source": [ "https://unix.stackexchange.com/questions/361213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101593/" ] }
361,245
Looking at the source of strace I found the use of the clone flag CLONE_IDLETASK which is described there as: #define CLONE_IDLETASK 0x00001000 /* kernel-only flag */ After looking deeper into it I found that, although that flag is not covered in man clone it is actually used by the kernel during the boot process to create idle processes (all of which should have PID 0) for each CPU on the machine. i.e. a machine with 8 CPUs will have at least 7 (see question below) such processes "running" (note quotes). Now, this leads me to a couple of question about what that "idle" process actually do. My assumption is that it executes NOP operation continuously until its timeframe ends and the kernel assigns a real process to run or assign the idle process once again (if the CPU is not being used). Yet, that's a complete guess. So: On a machine with, say, 8 CPUs will 7 such idle processes be created? (and one CPU will be held by the kernel itself whilst no performing userspace work?) Is the idle process really just an infinite stream of NOP operations? (or a loop that does the same). Is CPU usage (say uptime ) simply calculated by how long the idle process was on the CPU and how long it was not there during a certain period of time? P.S. It is likely that a good deal of this question is due to the fact that I do not fully understand how a CPU works. i.e. I understand the assembly, the timeframes and the interrupts but I do not know how, for example, a CPU may use more or less energy depending on what it is executing. I would be grateful if someone can enlighten me on that too.
The idle task is used for process accounting, and also to reduce energy consumption. In Linux, one idle task is created for every processor, and locked to that processor; whenever there’s no other process to run on that CPU, the idle task is scheduled. Time spent in the idle tasks appears as “idle” time in tools such as top . (Uptime is calculated differently.) Unix seems to always have had an idle loop of some sort (but not necessarily an actual idle task, see Gilles’ answer ), and even in V1 it used a WAIT instruction which stopped the processor until an interrupt occurred (it stood for “wait for interrupt”). Some other operating systems used busy loops, DOS, OS/2 , and early versions of Windows in particular. For quite a long time now, CPUs have used this kind of “wait” instruction to reduce their energy consumption and heat production. You can see various implementations of idle tasks for example in arch/x86/kernel/process.c in the Linux kernel: the basic one just calls HLT , which stops the processor until an interrupt occurs (and enables the C1 energy-saving mode), the other implementations handle various bugs or inefficiencies ( e.g. using MWAIT instead of HLT on some CPUs). All this is completely separate from idle states in processes, when they’re waiting for an event (I/O etc.).
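If you want to see that accounting directly rather than through top , the kernel exposes the cumulative per-CPU counters in /proc/stat ; on each cpu line the fields are user, nice, system, idle, iowait, irq, softirq, steal, guest and guest_nice, counted in clock ticks, and the idle column is precisely the time the idle task has been scheduled on that CPU:
grep '^cpu' /proc/stat    # one aggregate line plus one line per CPU
getconf CLK_TCK           # ticks per second, to convert the counters to seconds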
{ "source": [ "https://unix.stackexchange.com/questions/361245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172635/" ] }
361,895
I am logged into Sun Solaris OS. I want to create and extract a compressed tar file. I tried this normal UNIX command: tar -cvzf file.tar.gz directory1 It is failing to execute in Sun OS with following error bash-3.2$ tar -cvzf file.tar.tz directory1 tar: z: unknown function modifier Usage: tar {c|r|t|u|x}[BDeEFhilmnopPqTvw@[0-7]][bfk][X...] [blocksize] [tarfile] [size] [exclude-file...] {file | -I include-file | -C directory file}...
To avoid the creation of a temporary intermediate file you can use this command: tar cvf - directory1 | gzip -c > file.tar.gz
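Presumably you will also need the matching extract step on the same Solaris box, since its tar lacks the z modifier there as well; the reverse pipe does it:
gzip -dc file.tar.gz | tar xvf -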
{ "source": [ "https://unix.stackexchange.com/questions/361895", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176232/" ] }
361,923
I am trying to identify a strange character I have found in a file I am working with: $ cat file � $ od file 0000000 005353 0000002 $ od -c file 0000000 353 \n 0000002 $ od -x file 0000000 0aeb 0000002 The file is using ISO-8859 encoding and can't be converted to UTF-8: $ iconv -f ISO-8859 -t UTF-8 file iconv: conversion from `ISO-8859' is not supported Try `iconv --help' or `iconv --usage' for more information. $ iconv -t UTF-8 file iconv: illegal input sequence at position 0 $ file file file: ISO-8859 text My main question is how can I interpret the output of od here? I am trying to use this page which lets me translate between different character representations, but it tells me that 005353 as a "Hex code point" is 卓 which doesn't seem right and 0aeb as a "Hex code point" is ૫ which, again, seems wrong. So, how can I use any of the three options ( 355 , 005353 or 0aeb ) to find out what character they are supposed to represent? And yes, I did try with Unicode tools but it doesn't seem to be a valid UTF character either: $ uniprops $(cat file) U+FFFD ‹�› \N{REPLACEMENT CHARACTER} \pS \p{So} All Any Assigned Common Zyyy So S Gr_Base Grapheme_Base Graph X_POSIX_Graph GrBase Other_Symbol Print X_POSIX_Print Symbol Specials Unicode if I understand the description of the Unicode U+FFFD character, it isn't a real character at all but a placeholder for a corrupted character. Which makes sense since the file isn't actually UTF-8 encoded.
Your file contains two bytes, EB and 0A in hex. It’s likely that the file is using a character set with one byte per character, such as ISO-8859-1 ; in that character set, EB is ë: $ printf "\353\n" | iconv -f ISO-8859-1 ë Other candidates would be δ in code page 437 , Ù in code page 850 ... od -x ’s output is confusing in this case because of endianness; a better option is -t x1 which uses single bytes: $ printf "\353\n" | od -t x1 0000000 eb 0a 0000002 od -x maps to od -t x2 which reads two bytes at a time, and on little-endian systems outputs the bytes in reverse order. When you come across a file like this, which isn’t valid UTF-8 (or makes no sense when interpreted as a UTF-8 file), there’s no fool-proof way to automatically determine its encoding (and character set). Context can help: if it’s a file produced on a Western PC in the last couple of decades, there’s a fair chance it’s encoded in ISO-8859-1, -15 (the Euro variant), or Windows-1252; if it’s older than that, CP-437 and CP-850 are likely candidates. Files from Eastern European systems, or Russian systems, or Asian systems, would use different character sets that I don’t know much about. Then there’s EBCDIC... iconv -l will list all the character sets that iconv knows about, and you can proceed by trial and error from there. (At one point I knew most of CP-437 and ATASCII off by heart, them were the days.)
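One way to do that trial and error from the shell is to loop over a handful of likely candidate encodings and eyeball which conversion looks sensible. This is only a sketch — adjust the list to whatever iconv -l reports on your system:
for enc in ISO-8859-1 ISO-8859-15 WINDOWS-1252 CP437 CP850; do
    printf '%-14s: ' "$enc"
    iconv -f "$enc" -t UTF-8 file
done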
{ "source": [ "https://unix.stackexchange.com/questions/361923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
362,100
Do I need to check & create /tmp before writing to a file inside of it? Assume that no one has run sudo rm -rf /tmp because that's a very rare case
The FHS mandates that /tmp exist, as does POSIX so you can rely on its being there (at least on compliant systems; but really it’s pretty much guaranteed to be present on Unix-like systems). But you shouldn’t: the system administrator or the user may prefer other locations for temporary files. See Finding the correct tmp dir on multiple platforms for more details.
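In practice that means letting the environment choose the directory instead of hard-coding /tmp . A small sketch using GNU mktemp , which (when given no template) honours $TMPDIR if it is set and falls back to /tmp otherwise:
tmpfile=$(mktemp) || exit 1      # e.g. /tmp/tmp.XXXXXXXXXX, or under $TMPDIR if set
trap 'rm -f "$tmpfile"' EXIT
printf 'scratch data\n' > "$tmpfile"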
{ "source": [ "https://unix.stackexchange.com/questions/362100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27362/" ] }
362,115
I'm about to run a Python script on Ubuntu on a VPS. It's a machine learning training process, so it takes a huge amount of time to train. How can I close PuTTY without stopping that process?
You have two main choices: Run the command with nohup . This will disassociate it from your session and let it continue running after you disconnect: nohup pythonScript.py Note that the stdout of the command will be appended to a file called nohup.out unless you redirect it ( nohup pythonScript.py > outfile ). Use a screen multiplexer like tmux . This will let you disconnect from the remote machine but then, next time you connect, if you run tmux attach again, you will find yourself in exactly the same session. The command will still be running (it will continue running when you log out) and you will be able to see its stdout and stderr just as though you'd never logged out: tmux pythonScript.py Once you've launched that, just close the PuTTY window. Then, connect again the next day, run tmux attach again and you're back where you started.
{ "source": [ "https://unix.stackexchange.com/questions/362115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229191/" ] }
362,229
I was looking at the man page for the rm command on my MacBook and I noticed the the following: -W Attempt to undelete the named files. Currently, this option can only be used to recover files covered by whiteouts. What does this mean? What is a "whiteout"?
A whiteout is a special marker file placed by some "see-through" higher-order filesystems (those which use one or more real locations as a basis for their presentation), particularly union filesystems, to indicate that a file that exists in one of the base locations has been deleted within the artificial filesystem even though it still exists elsewhere. Listing the union filesystem won't show the whited-out file. Having a special kind of file representing these is in the BSD tradition that macOS derives from: macOS uses st_mode bits 0160000 to mark them . Using ls -F , those files will be marked with a % sign , and ls -W will show that they exist (otherwise, they're generally omitted from listings). Many union systems also make normal files with a special name to represent whiteouts on systems that don't support those files. I'm not sure that macOS exposes these itself in any way, but other systems from its BSD heritage do and it's possible that external filesystem drivers could use them.
{ "source": [ "https://unix.stackexchange.com/questions/362229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
362,292
I'm trying to look for a file called Book1 . In my test I'm trying to look for the aforementioned file and in this test, I don't know where that file is located. I tried find / -iname book1 but there is no output. How do I find my file called book1 using the command line if I don't know where the file is located? EDIT: My scenario is described in more detail below: The file extension is unknown The exact name (i.e. Capitalized letters, numbers, etc.) is unknown The location of the file is unknown
First, an argument to -iname is a shell pattern . You can read more about patterns in Bash manual . The gist is that in order for find to actually find a file the filename must match the specified pattern. To make a case-insensitive string book1 match Book1.gnumeric you either have to add * so it looks like this: find / -iname 'book1*' or specify the full name: find / -iname 'Book1.gnumeric' Second, -iname will make find ignore the filename case so if you specify -iname book1 it might also find Book1 , bOok1 etc. If you're sure the file you're looking for is called Book1.gnumeric then don't use -iname but -name , it will be faster: find / -name 'Book1.gnumeric' Third, remember about quoting the pattern as said in the other answer . And last - are you sure that you want to look for the file everywhere on your system? It's possible that the file you're looking for is actually in your $HOME directory if you worked on that or downloaded it from somewhere. Again, that may be much faster. EDIT : I noticed that you edited your question. If you don't know the full filename, capitalization and location indeed you should use something like this: find / -iname 'book1*' I also suggest putting 2>/dev/null at the end of the line to hide all *permission denied* and other errors that will be present if you invoke find as a non-root user: find / -iname 'book1*' 2>/dev/null And if you're sure that you're looking for a single file, and there is only a single file on your system that match the criteria you can tell find to exit after finding the first matching file: find / -iname 'book1*' -print -quit 2>/dev/null
{ "source": [ "https://unix.stackexchange.com/questions/362292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224726/" ] }
362,559
Where can I find a complete list of the keyboard combinations which send signals in Linux? Eg: Ctrl + C - SIGINT Ctrl + \ - SIGQUIT
The Linux N_TTY line discipline only sends three different signals: SIGINT, SIGQUIT, and SIGTSTP. By default the following control characters produce the signals: Ctrl + C - SIGINT Ctrl + \ - SIGQUIT Ctrl + Z - SIGTSTP
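The authoritative list for your own terminal is whatever the tty line discipline is currently configured with, which you can inspect — and remap — with stty :
stty -a           # the line with "intr = ^C; quit = ^\; ... susp = ^Z" shows the current bindings
stty intr '^T'    # example: make Ctrl+T send SIGINT instead of Ctrl+C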
{ "source": [ "https://unix.stackexchange.com/questions/362559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
362,642
Is there a standard location in Linux for holding source files for example OpenSSL . I am building Nginx from source with non default version of OpenSSL. I need to download and untar OpenSSL and I did it in home directory. Now, I wonder is there a standard location in Linux maybe /opt ?
Whenever you ask yourself something like this, check out the Filesystem Hierarchy Standard (FHS). There, you will find the following entry: /usr/src : Source code (optional) Purpose: Source code may be placed in this subdirectory, only for reference purposes. So you can put your source files in subdirectories of /usr/src . That said, this is an optional directory so you can really keep them wherever you like. Source code is not relevant after you've compiled it into an executable, so the system will never require the source of something to be accessible at a specific location. In conclusion: /usr/src is a pretty standard location but feel free to choose your own if you prefer.
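So for the Nginx-with-custom-OpenSSL case in the question, a conventional (but in no way mandatory) layout would look like this — the version number and nginx source directory are placeholders, not values from the question:
sudo mkdir -p /usr/src
sudo tar -C /usr/src -xzf openssl-1.1.0e.tar.gz
cd ~/nginx-1.12.0
./configure --with-openssl=/usr/src/openssl-1.1.0e
Nginx's --with-openssl option points the build at an OpenSSL source tree rather than an installed library, which is exactly the "build against a non-default OpenSSL" scenario.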
{ "source": [ "https://unix.stackexchange.com/questions/362642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154659/" ] }
363,048
I'm trying to install Docker on a Ubuntu 64 machine following the official installation guide . Sadly Ubuntu seems it is not able to locate the docker-ce package. Any idea to fix it or at least to track what is happening ? Here some details for you... $ uname --all; sudo grep docker /etc/apt/sources.list; sudo apt-get install docker-ce Linux ubuntu 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable. # deb-src [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable. Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package docker-ce
Ubuntu 22.10 (Kinetic) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu kinetic stable" Ubuntu 22.04 (Jammy) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable" Ubuntu 21.10 (Impish) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu impish stable" Ubuntu 21.04 (hirsute) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu hirsute stable" Ubuntu 20.10 (Groovy) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu groovy stable" Ubuntu 20.04 (Focal) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" Ubuntu 19.10 (Eoan) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu eoan stable" Ubuntu 19.04 (Disco) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu disco stable" Ubuntu 18.10 (Cosmic) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu cosmic test" Ubuntu 18.04 (bionic) sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" Ubuntu 17.10 docker-ce package is available on the official docker (Ubutu Artful) repository , to install it use the following commands : sudo apt install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu artful stable" Ubuntu 16.04 You can install docker-ce on Ubuntu as follows: sudo apt-get install apt-transport-https ca-certificates curl software-properties-common curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" Run the following: 
sudo apt update apt-cache search docker-ce sample output: docker-ce - Docker: the open-source application container engine Install docker-ce : For Ubuntu 16.04 you need to run sudo apt update . For Ubuntu 18.04 and higher, add-apt-repository will execute apt update automatically: sudo apt install docker-ce To check the available and permitted Ubuntu codenames: curl -sSL https://download.docker.com/linux/ubuntu/dists/ |awk -F'"' 'FNR >7 {print $2}' sample output (Results may be different after the directory updates): ../ artful/ bionic/ cosmic/ disco/ eoan/ focal/ groovy/ hirsute/ trusty/ xenial/ yakkety/ zesty/ Docker , OS requirements
{ "source": [ "https://unix.stackexchange.com/questions/363048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85598/" ] }
363,164
I have this following code: find ./ -iname '*phpmyadmin' -exec rm -rf {} \; It deletes a dir called phpmyadmin , but it does not delete a file called phpMyAdmin-Version-XYZ.zip Even if I remove the -rf , it still won't delete it (probably because a second problem with the -iname not affecting case insensitivity). Is there a way to delete any inode in a single rm (file, dir, softlink)? Why does adding the -iname not have an effect? Note: I didn't find a "delete any inode" argument in man rm .
The problem is that you are matching a file that ends in phpmyadmin ( case-insensitively ) by using the pattern *phpmyadmin . To get any file that contains the string phpmyadmin (case-insensitively), use -iname '*phpmyadmin*' : find ./ -iname '*phpmyadmin*' -exec rm -rf {} \; Perhaps getting the matched files before removal would be sane: find ./ -iname '*phpmyadmin*' To answer your first question, there is no option in rm in userspace to deal with inodes.
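As an aside, when -exec rm -rf {} \; removes a matched directory, find will often still try to descend into it and print "No such file or directory" warnings. A slightly more robust and more efficient variant (same pattern, just batched and pruned) is:
find . -iname '*phpmyadmin*' -prune -exec rm -rf {} +
-prune stops find from descending into anything it has just matched, and {} + passes many names to a single rm invocation.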
{ "source": [ "https://unix.stackexchange.com/questions/363164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
363,297
I want to run some commands in parallel. When all of these commands are finished, start the next one. I thought the following approach would work #!/bin/bash command1 & command2 & command3 && command4 but it didn't. I need to run command4 when all the first three commands have completely finished.
#!/bin/bash command1 & command2 & command3 & wait command4 wait (without any arguments) will wait until all the backgrounded processes have exited. The complete description of wait in the bash manual: wait [-n] [n ...] Wait for each specified child process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If the -n option is supplied, wait waits for any job to terminate and returns its exit status. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
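If you also want command4 to run only when the first three actually succeeded, one common pattern is to remember each PID and wait for them individually so you can collect the exit statuses (a sketch — substitute your real commands):
#!/bin/bash
command1 & pid1=$!
command2 & pid2=$!
command3 & pid3=$!
wait "$pid1"; s1=$?
wait "$pid2"; s2=$?
wait "$pid3"; s3=$?
if [ "$s1" -eq 0 ] && [ "$s2" -eq 0 ] && [ "$s3" -eq 0 ]; then
    command4
fi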
{ "source": [ "https://unix.stackexchange.com/questions/363297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10780/" ] }
363,575
Let’s say I install a package using dpkg : sudo dpkg -i package-name.deb then without running the package binaries I just remove it: sudo dpkg -r package-name Is there anything harmful that can happen in this process? For example, any malicious configuration script in the .deb file? What are other possible threats that might happen?
Yes, packages can contain “maintainer scripts” which are run before and/or after installation. You can see the scripts, if any, by extracting the control archive from the package: dpkg-deb --ctrl-tarfile package-name.deb > control.tar tar tf control.tar or, if you know you want to extract the control archive’s contents: dpkg-deb -e package-name.deb package-control (which places the extracted files in a directory named package-control ). They run as root and can do whatever the package author wants on your system. You should really consider that installing a package is equivalent to granting the maintainer (and anyone else involved in the package’s maintenance and build) root access to your system. Who do you trust?
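For packages that are already installed, dpkg keeps copies of the maintainer scripts under /var/lib/dpkg/info/ , so you can also review after the fact what has been (or would be) run — the package name here is just an example:
ls /var/lib/dpkg/info/package-name.*     # .preinst, .postinst, .prerm, .postrm, if the package ships them
less /var/lib/dpkg/info/package-name.postinst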
{ "source": [ "https://unix.stackexchange.com/questions/363575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64321/" ] }
363,814
Using Raspbian and Ubunntu 16.04 LTS so need a generic Linux solution. Requirement is simple: I need a way to send one-line email messages from the command line. I have set up a gmail account just for this particular Rpi3, with the address of [email protected] - with no 2FA So now I need to be able to send one-line mail messages from anywhere (including cron) without user intervention. I also would like it to be able to send text files; basically, anything from stdin .
The simplest answer to sending one-line messages via gmail is to use ssmtp . Install it with the following commands: sudo apt-get update sudo apt-get install ssmtp Edit /etc/ssmtp/ssmtp.conf to look like this: [email protected] mailhub=smtp.gmail.com:465 FromLineOverride=YES [email protected] AuthPass=testing123 UseTLS=YES Send a one-liner like so: echo "Testing...1...2...3" | ssmtp [email protected] or printf "Subject: Test\n\nTesting...1...2...3" | ssmtp [email protected] Then, true to *nix, you just get the prompt back in a few seconds. Check your [email protected] account, and voila, it is there! This also works well when sending a file, like so: cat program.py | ssmtp [email protected] And the program will show up in the mailbox. If the file is a text file, it can have a first line that says Subject: xxxxxx . This can be used with various cron jobs that send me data with subject lines indicating the content. This will work with anything that prepares a message that is piped into ssmtp via stdin. For more details, such as securing these files against other users, visit this article: Send Email from Raspberry Pi Command Line Be sure to also look at the answer posted by Rui below about locking down the FROM: address that might be changed in formatted message files, if necessary. Now if only I could figure out how to send SMS the same way.
{ "source": [ "https://unix.stackexchange.com/questions/363814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/197427/" ] }
363,819
OS: Linux (Debian 8) Environment details: We use LXDE and uxterm . Virtual console form till ctrl +Alt + F6 is disabled. Total users available user1, user2, user3 the default system is set with auto login with user1 and we trigger user terminal via script ( uxterm ). On user pressing special combination in LXDE it invoked a Uxterminal in screen with User2 as login. on login success, user2 console can be used. We have 3rd user, where his role is very limited and needed for our current scenario of our operation. To create a BASH profile for the user, we have added info in sudoers with NOPASSWD option. etc/sudoers User_Alias PRIVILEGEDUSER = user1,user2 Runas_Alias TARGETUSER = user2,user3 PRIVILEGEDUSER ALL=(TARGETUSER) NOPASSWD: /bin/bash also we have added env_keep in Defaults Defaults env_reset Defaults env_keep += "JRE_HOME" Defaults mail_badpass Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" With this option when we try switching user, JRE_HOME is not getting set. Also added JRE_HOME in etc/.profile manually for all available users in system for testing purposed, even that did not help JRE_HOME=/usr/lib/jvm/java-7-openjdk-i386/jre export JRE_HOME Need some guidance in setting JRE_HOME during sudo -u switching.
The simplest answer to sending one-line messages via gmail is to use ssmtp Install it with the following commands: sudo apt-get update sudo apt-get install ssmtp Edit /etc/ssmtp/ssmtp.conf to look like this: [email protected] mailhub=smtp.gmail.com:465 FromLineOverride=YES [email protected] AuthPass=testing123 UseTLS=YES Send a one-liner like so: echo "Testing...1...2...3" | ssmtp [email protected] or printf "Subject: Test\n\nTesting...1...2...3" | ssmtp [email protected] Then, true to *nix, you just get the prompt back in a few seconds. Check your [email protected] account, and voila, it is there! This also works well when sending a file, as so: cat program.py | ssmtp [email protected] And the program will show up in the mailbox If the file is a text file, it can have a first line that says Subject: xxxxxx This can be used with various cron jobs can send me data with subject lines indicating the content. This will work with anything that prepares a message that is piped into ssmtp via stdin. For more details such as securing these files against other users and such, visit this article: Send Email from Raspberry Pi Command Line Be sure to also look down below to the answer posted by Rui about locking down the FROM: address that might be changed in formatted message files, if necessary. Now if only I could figure out how to send SMS the same way.
{ "source": [ "https://unix.stackexchange.com/questions/363819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52764/" ] }
363,967
A few years ago I recall using the terminal and reading a tutorial in the Linux manual (using man ) on how a computer worked after it was turned on. It walked you through the whole process explaining the role of the BIOS, ROM, RAM and OS on this process. Which page was this, if any? How can I read it again?
You're thinking of the boot(7) manual ( man 7 boot ) and/or the bootup(7) manual ( man 7 bootup ). Those are the manuals I can think of on (Ubuntu) Linux that best fits your description. These manuals are available on the web (see links above), but the definite text is what's available on the system that you are using. If a web-based manual says one thing but the manual on your system says another thing, then the manual on your system is the more correct one for you. This goes for all manuals. See also the "See also" section in those manuals. This other question may also be of interest: How does the Linux or Unix " / " get mounted during bootup? For a non-Linux take on the boot process, the OpenBSD first-stage system bootstrap ( biosboot(8) ) and second-stage bootstrap ( boot(8) ) manuals, followed by rc(8) , may be interesting.
{ "source": [ "https://unix.stackexchange.com/questions/363967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134848/" ] }
364,105
btrfs ( often pronounced "better fs" ) has quite a few features that ext4 lacks. However, comparing the functionality of btrfs vs ext4, what is lacking in btrfs? 1 In other words, what can I do with ext4 that I can't with btrfs? 1 Ignoring the lesser battle-ground testing of btrfs given ext4 is so widely used
Disadvantages of btrfs compared to ext4: btrfs doesn't support badblocks This means that if you've run out of spare non-addressable sectors that the HDD firmware keeps to cover for a limited number of failures, there is no way to mark blocks bad and avoid them at the filesystem level. Swap files are only supported via a loopback device , which complicates things because it seems impossible to resume from suspend using this method It's quite tricky to calculate free space , so much so that... You can get "No space left on device" errors even though btrfs' own tools say there is space
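For completeness, the loopback workaround mentioned above looks roughly like the following; treat it as a sketch only, since it costs performance and the resume-from-suspend caveat still applies:
dd if=/dev/zero of=/var/swapfile bs=1M count=2048    # fully allocated backing file on the btrfs volume
loopdev=$(losetup -f --show /var/swapfile)
mkswap "$loopdev"
swapon "$loopdev"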
{ "source": [ "https://unix.stackexchange.com/questions/364105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
364,112
How can I wrap paragraphs in plain text with paragraph tags {p} before and {/p} after each paragraph using sed ? Each paragraph is separated by blank lines. I can use sed -e 's/^\s*$/<r>/ somefile.txt to find every blank line in the text file, but this will always insert {p} everywhere and I don't quite understand, how to vary them. Also, there's no empty line after the very last paragraph, so it won't do anything for the last one. Input text: Section 5. General Information About Project Gutenberg-tm electronic works. Description Professor Michael S. Hart is the originator of the Project Gutenberg-tm concept of a library of electronic works that could be freely shared with anyone. Project Gutenberg-tm eBooks are often created from several printed editions, all of which are confirmed as Public Domain in the U.S. unless a copyright notice is included. Required Output: Section 5. General Information About Project Gutenberg-tm electronic works. {p} Description {/p} {p} Professor Michael S. Hart is the originator of the Project Gutenberg-tm concept of a library of electronic works that could be freely shared with anyone. {/p} {p} Project Gutenberg-tm eBooks are often created from several printed editions, all of which are confirmed as Public Domain in the U.S. unless a copyright notice is included. {/p}
Disadvantages of btrfs compared to ext4: btrfs doesn't support badblocks This means that if you've run out of spare non-addressable sectors that the HDD firmware keeps to cover for a limited number of failures, there is no way to mark blocks bad and avoid them at the filesystem level. Swap files are only supported via a loopback device , which complicates things because it seems impossible to resume from suspend using this method It's quite tricky to calculate free space , so much so that... You can get "No space left on device" errors even though btrfs' own tools say there is space
{ "source": [ "https://unix.stackexchange.com/questions/364112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230685/" ] }
364,156
I want to time a command which consists of two separate commands with one piping output to another. For example, consider the two scripts below: $ cat foo.sh #!/bin/sh sleep 4 $ cat bar.sh #!/bin/sh sleep 2 Now, how can I get time to report the time taken by foo.sh | bar.sh (and yes, I know the pipe makes no sense here, but this is just an example)? It does work as expected if I run them sequentially in a subshell without piping: $ time ( foo.sh; bar.sh ) real 0m6.020s user 0m0.010s sys 0m0.003s But I can't get it to work when piping: $ time ( foo.sh | bar.sh ) real 0m4.009s user 0m0.007s sys 0m0.003s $ time ( { foo.sh | bar.sh; } ) real 0m4.008s user 0m0.007s sys 0m0.000s $ time sh -c "foo.sh | bar.sh " real 0m4.006s user 0m0.000s sys 0m0.000s I've read through a similar question ( How to run time on multiple commands AND write the time output to file? ) and also tried the standalone time executable: $ /usr/bin/time -p sh -c "foo.sh | bar.sh" real 4.01 user 0.00 sys 0.00 It doesn't even work if I create a third script which only runs the pipe: $ cat baz.sh #!/bin/sh foo.sh | bar.sh And then time that: $ time baz.sh real 0m4.009s user 0m0.003s sys 0m0.000s Interestingly, it doesn't appear as though time exits as soon as the first command is done. If I change bar.sh to: #!/bin/sh sleep 2 seq 1 5 And then time again, I was expecting the time output to be printed before the seq but it isn't: $ time ( { foo.sh | bar.sh; } ) 1 2 3 4 5 real 0m4.005s user 0m0.003s sys 0m0.000s Looks like time doesn't count the time it took to execute bar.sh despite waiting for it to finish before printing its report 1 . All tests were run on an Arch system and using bash 4.4.12(1)-release. I can only use bash for the project this is a part of so even if zsh or some other powerful shell can get around it, that won't be a viable solution for me. So, how can I get the time a set of piped commands took to run? And, while we're at it, why doesn't it work? It looks like time immediately exits as soon as the first command has finished. Why? I know I can get the individual times with something like this: ( time foo.sh ) 2>foo.time | ( time bar.sh ) 2> bar.time But I still would like to know if it's possible to time the whole thing as a single operation. 1 This doesn't seem to be a buffer issue, I tried running the scripts with unbuffered and stdbuf -i0 -o0 -e0 and the numbers were still printed before the time output.
It is working. The different parts of a pipeline are executed concurrently. The only thing that synchronises/serialises the processes in the pipeline is IO, i.e. one process writing to the next process in the pipeline and the next process reading what the first one writes. Apart from that, they are executing independently of each other. Since there is no reading or writing happening between the processes in your pipeline, the time take to execute the pipeline is that of the longest sleep call. You might as well have written time ( foo.sh & bar.sh &; wait ) Terdon posted a couple of slightly modified example scripts in the chat : #!/bin/sh # This is "foo.sh" echo 1; sleep 1 echo 2; sleep 1 echo 3; sleep 1 echo 4 and #!/bin/sh # This is "bar.sh" sleep 2 while read line; do echo "LL $line" done sleep 1 The query was "why does time ( sh foo.sh | sh bar.sh ) return 4 seconds rather than 3+3 = 6 seconds?" To see what's happening, including the approximate time each command is executed, one may do this (the output contains my annotations): $ time ( env PS4='$SECONDS foo: ' sh -x foo.sh | PS4='$SECONDS bar: ' sh -x bar.sh ) 0 bar: sleep 2 0 foo: echo 1 ; The output is buffered 0 foo: sleep 1 1 foo: echo 2 ; The output is buffered 1 foo: sleep 1 2 bar: read line ; "bar" wakes up and reads the two first echoes 2 bar: echo LL 1 LL 1 2 bar: read line 2 bar: echo LL 2 LL 2 2 bar: read line ; "bar" waits for more 2 foo: echo 3 ; "foo" wakes up from its second sleep 2 bar: echo LL 3 LL 3 2 bar: read line 2 foo: sleep 1 3 foo: echo 4 ; "foo" does the last echo and exits 3 bar: echo LL 4 LL 4 3 bar: read line ; "bar" fails to read more 3 bar: sleep 1 ; ... and goes to sleep for one second real 0m4.14s user 0m0.00s sys 0m0.10s So, to conclude, the pipeline takes 4 seconds, not 6, due to the buffering of the output of the first two calls to echo in foo.sh .
{ "source": [ "https://unix.stackexchange.com/questions/364156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
364,401
I typed help suspend and got this short explanation: suspend: suspend [-f] Suspend shell execution. Suspend the execution of this shell until it receives a SIGCONT signal. Unless forced, login shells cannot be suspended. Options: -f force the suspend, even if the shell is a login shell Exit Status: Returns success unless job control is not enabled or an error occurs. How I understand this is: I type suspend and the terminal freezes, not even Ctrl + C can unfreeze it. But when I open another terminal, search for the PID of the frozen one and type kill -SIGCONT PID , a SIGCONT signal is sent to the frozen terminal and thaws it up, so that it gets unfrozen. But, what is the actual purpose of suspending a terminal? Which everyday applications are typical for it? What did the people who made it a shell builtin have in mind?
If you start a shell from another shell, you can suspend the inner one. Say when using su , and wanting to switch back to the regular user for a moment: user$ su Password: ... root# do something root# suspend user$ do something as the ordinary user again user$ fg root# ... (If you do that, don't forget the privileged shell open in the background...) Similarly, if you escape to a shell from some other program (the ! command in e.g. less ), you can still suspend the shell. But I wouldn't expect many other programs to handle it nicely when they launch a subprocess, which then suspends itself.
{ "source": [ "https://unix.stackexchange.com/questions/364401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229576/" ] }
364,660
I'm going through this book , Advanced Linux Programming by Mark Mitchell, Jeffrey Oldham, and Alex Samuel. It's from 2001, so a bit old. But I find it quite good anyhow. However, I got to a point when it diverges from what my Linux produces in the shell output. On page 92 (116 in the viewer), the chapter 4.5 GNU/Linux Thread Implementation begins with the paragraph containing this statement: The implementation of POSIX threads on GNU/Linux differs from the thread implementation on many other UNIX-like systems in an important way: on GNU/Linux, threads are implemented as processes. This seems like a key point and is later illustrated with a C code. The output in the book is: main thread pid is 14608 child thread pid is 14610 And in my Ubuntu 16.04 it is: main thread pid is 3615 child thread pid is 3615 ps output supports this. I guess something must have changed between 2001 and now. The next subchapter on the next page, 4.5.1 Signal Handling, builds up on the previous statement: The behavior of the interaction between signals and threads varies from one UNIX-like system to another. In GNU/Linux, the behavior is dictated by the fact that threads are implemented as processes. And it looks like this will be even more important later on in the book. Could someone explain what's going on here? I've seen this one Are Linux kernel threads really kernel processes? , but it doesn't help much. I'm confused. This is the C code: #include <pthread.h> #include <stdio.h> #include <unistd.h> void* thread_function (void* arg) { fprintf (stderr, "child thread pid is %d\n", (int) getpid ()); /* Spin forever. */ while (1); return NULL; } int main () { pthread_t thread; fprintf (stderr, "main thread pid is %d\n", (int) getpid ()); pthread_create (&thread, NULL, &thread_function, NULL); /* Spin forever. */ while (1); return 0; }
I think this part of the clone(2) man page may clear up the difference re. the PID: CLONE_THREAD (since Linux 2.4.0-test8) If CLONE_THREAD is set, the child is placed in the same thread group as the calling process. Thread groups were a feature added in Linux 2.4 to support the POSIX threads notion of a set of threads that share a single PID. Internally, this shared PID is the so-called thread group identifier (TGID) for the thread group. Since Linux 2.4, calls to getpid(2) return the TGID of the caller. The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID (*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes. Signal handling was another problematic area with the old implementation, this is described in more detail in the paper @FooF refers to in their answer . As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print.
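A small experiment makes the current behaviour visible: in every thread getpid() returns the shared TGID, while the per-thread kernel task ID is only visible through the gettid system call. This is a hedged sketch of such a test (compile with gcc demo.c -pthread ):
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *thread_function(void *arg)
{
    /* same PID (TGID) as main, but a different kernel task ID */
    fprintf(stderr, "child: pid=%d tid=%ld\n",
            (int) getpid(), (long) syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t thread;
    fprintf(stderr, "main:  pid=%d tid=%ld\n",
            (int) getpid(), (long) syscall(SYS_gettid));
    pthread_create(&thread, NULL, thread_function, NULL);
    pthread_join(thread, NULL);
    return 0;
}
On a 2.4-or-later kernel both lines print the same pid and differ only in the tid column; on the old LinuxThreads implementation described in the book, the pids themselves would differ.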
{ "source": [ "https://unix.stackexchange.com/questions/364660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
364,669
Working on Linux Mint 18.1, VirtualBox 5.0.40_Ubuntu. I have a VDI file from a VirtualBox VM: ~/VirtualBox\ VMs/Win10x64/Win10x64.vdi I've taken a Snapshot: ~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi I want to mount the guest's HDD from the snapshot . I can successfully mount the base VDI using qemu-nbd : qemu-nbd -c /dev/nbd0 ~/VirtualBox\ VMs/Win10x64/Win10x64.vdi But if I try with the Snapshot file: qemu-nbd -c /dev/nbd0 ~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi it fails with: unsupported VDI image (non-NULL link UUID) I did notice the --snapshot parameter for qemu-nbd but this doesn't seem to be the right thing. How can I mount the HDD as it is in the snapshot? Edit #1 I've also tried vdfuse , but again, doesn't seem to be any way of "applying" the differencing disk.
I think this part of the clone(2) man page may clear up the difference re. the PID: CLONE_THREAD (since Linux 2.4.0-test8) If CLONE_THREAD is set, the child is placed in the same thread group as the calling process. Thread groups were a feature added in Linux 2.4 to support the POSIX threads notion of a set of threads that share a single PID. Internally, this shared PID is the so-called thread group identifier (TGID) for the thread group. Since Linux 2.4, calls to getpid(2) return the TGID of the caller. The "threads are implemented as processes" phrase refers to the issue of threads having had separate PIDs in the past. Basically, Linux originally didn't have threads within a process, just separate processes (with separate PIDs) that might have had some shared resources, like virtual memory or file descriptors. CLONE_THREAD and the separation of process ID (*) and thread ID make the Linux behaviour look more like other systems and more like the POSIX requirements in this sense. Though technically the OS still doesn't have separate implementations for threads and processes. Signal handling was another problematic area with the old implementation, this is described in more detail in the paper @FooF refers to in their answer . As noted in the comments, Linux 2.4 was also released in 2001, the same year as the book, so it's not surprising the news didn't get to that print.
{ "source": [ "https://unix.stackexchange.com/questions/364669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119822/" ] }
364,671
I did the following in BASH: while true;do bash;done I wrote this one liner, but I was not sure at first whether it: stays in the main shell and fathers as many subshells as it runs dry of memory or some other stuff. the main shell fathers a subshell, than this subshells fathers a subshell until this lineage runs dry of memory or some other stuff. But, I suppose it is the second case, because once I run the one liner, I got shortly after my prompt back and I began to type exit and another exit and exit, exit, exit...and I still was not back in the main shell. Now, since so many subshells have been opened and each one is a program, I thought each one should have its own PID. So I did: ps aux | grep bash expecting to see a lot of processes with bash in their names. However, nothing like this, there were only two bash shells. How is it possible, I guess I have somewhere a very wrong idea of processes, shells, subshells and PIDs, but do not know where.
{ "source": [ "https://unix.stackexchange.com/questions/364671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229576/" ] }
364,782
I have a service that stopped suddenly. I tried to restart that service but failed and was asked to run: systemctl daemon-reload . What does it exactly do? What is a daemon-reload ?
man systemctl says: daemon-reload Reload systemd manager configuration. This will rerun all generators (see systemd.generator(7)), reload all unit files, and recreate the entire dependency tree. While the daemon is being reloaded, all sockets systemd listens on behalf of user configuration will stay accessible. This command should not be confused with the reload command. So, essentially, it's a "soft" reload: it picks up changed configuration from the filesystem and regenerates the dependency tree. Consequently, systemd.generator states: Generators are small binaries that live in /usr/lib/systemd/user-generators/ and other directories listed above. systemd(1) will execute those binaries very early at bootup and at configuration reload time — before unit files are loaded. Generators can dynamically generate unit files or create symbolic links to unit files to add additional dependencies, thus extending or overriding existing definitions. Their main purpose is to convert configuration files that are not native unit files dynamically into native unit files. Generators are loaded from a set of paths determined during compilation, listed above. System and user generators are loaded from directories with names ending in system-generators/ and user-generators/, respectively. Generators found in directories listed earlier override the ones with the same name in directories lower in the list. A symlink to /dev/null or an empty file can be used to mask a generator, thereby preventing it from running. Please note that the order of the two directories with the highest priority is reversed with respect to the unit load path and generators in /run overwrite those in /etc. After installing new generators or updating the configuration, systemctl daemon-reload may be executed. This will delete the previous configuration created by generators, re-run all generators, and cause systemd to reload units from disk. See systemctl(1) for more information.
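A typical sequence where the reload matters, sketched with a made-up unit name (myapp.service):

    sudo vi /etc/systemd/system/myapp.service   # change the unit file on disk
    sudo systemctl daemon-reload                # systemd re-reads unit files and rebuilds the dependency tree
    sudo systemctl restart myapp.service        # the restart now uses the new definition
    systemctl status myapp.service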
{ "source": [ "https://unix.stackexchange.com/questions/364782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231172/" ] }
365,023
I'm learning CentOS/RHEL and currently doing some stuff about process management. The RHCSA book I'm reading describes running kill 1234 as sending SIGQUIT. I always thought the kill command without adding a switch for signal type should default to kill -15 SIGTERM is kill -15 and SIGKILL is kill -9 , right? Does CentOS/RHEL use a slightly different method of kill -15 or have I just been mistaken? EDIT: kill -l gives SIGQUIT as kill -3 and it seems to be associated with using the keyboard to terminate a process. man 7 signal also states that SIGQUIT is kill -3 , so I can only assume that my book is wrong in stating that SIGQUIT is kill -15 default.
No, they're not the same. The default action for both is to terminate the process, but SIGQUIT also dumps core. See e.g. the Linux man page signal(7) . kill by default sends SIGTERM, so I can only imagine that the mention of SIGQUIT being default is indeed just a mistake. That default is in POSIX , and so are the numbers for SIGTERM, SIGKILL and SIGQUIT.
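You can see the difference on a throwaway process; this is only a sketch, and it assumes core dumps are allowed (ulimit -c) and not redirected to something like systemd-coredump:

    ulimit -c unlimited
    sleep 600 & pid=$!
    kill -TERM "$pid"    # plain kill default: "Terminated", no core file
    sleep 600 & pid=$!
    kill -QUIT "$pid"    # "Quit (core dumped)" and a core file, if dumping is permitted
    ls core*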
{ "source": [ "https://unix.stackexchange.com/questions/365023", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204207/" ] }
365,225
Is there a clean, simple way to get an IP address for a network interface from /proc , similar to the way I can get the MAC address for a network interface? Ideally I would just type cat /proc/<foo>/{interface_name} and get the IPv4 address. I'd rather not run anything other than cat .
Under the /proc directory, you can also find the IPv4 addresses in the Forwarding Information Base table, at /proc/net/fib_trie . The table is fairly readable with a plain cat ; the Main: table comes first, followed by Local: cat /proc/net/fib_trie or, to see your network, IP addresses and netmask: cat /proc/net/fib_trie | grep "|--" | egrep -v "0.0.0.0| 127." |-- 193.136.1.0 |-- 193.136.1.2 |-- 193.136.1.255 |-- 193.136.1.0 |-- 193.136.1.2 |-- 193.136.1.255
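If you only want the locally configured addresses, the /32 host entries in that same file mark them, so they can be picked out without leaving /proc; a sketch (it needs awk rather than just cat, and it also lists 127.0.0.1):

    awk '/32 host/ {print f} {f=$2}' /proc/net/fib_trie | sort -u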
{ "source": [ "https://unix.stackexchange.com/questions/365225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115294/" ] }
365,399
I know that, given l="a b c" , echo $l | xargs ls yields ls a b c Which construct yields mycommand -f a -f b -f c
One way to do it: echo "a b c" | xargs printf -- '-f %s\n' | xargs mycommand This assumes a , b , and c don't contain blanks, newlines, quotes or backslashes. :) With GNU findutils you can handle the general case, but it's slightly more complicated: echo -n "a|b|c" | tr \| \\0 | xargs -0 printf -- '-f\0%s\0' | xargs -0 mycommand You can replace the | separator with some other character that doesn't appear in a , b , or c . Edit: As @MichaelMol notes, with a very long list of arguments there is a risk of overflowing the maximum length of arguments that can be passed to mycommand . When that happens, the last xargs will split the list and run another copy of mycommand , and there is a risk of it leaving an unterminated -f . If you are worried about that situation, you could replace the last xargs -0 above with something like this: ... | xargs -x -0 mycommand This won't solve the problem, but it will abort running mycommand when the list of arguments gets too long.
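To check what the general-case pipeline actually hands to mycommand, you can substitute a printf that brackets each argument; a sketch with file names containing spaces:

    echo -n "file one|file two" | tr \| \\0 | xargs -0 printf -- '-f\0%s\0' | xargs -0 printf '[%s] '; echo
    # prints: [-f] [file one] [-f] [file two]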
{ "source": [ "https://unix.stackexchange.com/questions/365399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52728/" ] }
365,436
Is there any way to dynamically choose the interpreter that's executing a script? I have a script that I'm running on two different systems, and the interpreter I want to use is located in different locations on the two systems. What I end up having to to is change the hashbang line every time I switch over. I would like to do something that is the logical equivalent of this (I realize that this exact construct is impossible): if running on system A: #!/path/to/python/on/systemA elif running on system B: #!/path/on/systemB #Rest of script goes here Or even better would be this, so that it tries to use the first interpreter, and if it doesn't find it uses the second: try: #!/path/to/python/on/systemA except: #!path/on/systemB #Rest of script goes here Obviously, I can instead execute it as /path/to/python/on/systemA myscript.py or /path/on/systemB myscript.py depending on where I am, but I actually have a wrapper script that launches myscript.py , so I would like to specify the path to the python interpreter programmatically rather than by hand.
No, that won't work. The two characters #! absolutely need to be the first two characters in the file (how would you specify what interprets the if-statement anyway?). This constitutes the "magic number" that the exec() family of functions detects when they determine whether a file that they are about to execute is a script (which needs an interpreter) or a binary file (which doesn't). The format of the shebang line is quite strict. It needs to have an absolute path to an interpreter and at most one argument to it. What you can do is use env : #!/usr/bin/env interpreter Now, the path to env is usually /usr/bin/env , but technically that's no guarantee. This allows you to adjust the PATH environment variable on each system so that interpreter (be it bash , python or perl or whatever you have) is found. A downside with this approach is that it will be impossible to portably pass an argument to the interpreter. This means that #!/usr/bin/env awk -f and #!/usr/bin/env sed -f are unlikely to work on some systems. Another obvious approach is to use GNU autotools (or some simpler templating system) to find the interpreter and place the correct path into the file in a ./configure step, which would be run upon installing the script on each system. One could also resort to running the script with an explicit interpreter, but that's obviously what you're trying to avoid: $ sed -f script.sed
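Since you already launch myscript.py from a wrapper, the wrapper itself can pick the interpreter at run time; a sketch using the (made-up) paths from the question:

    #!/bin/sh
    # try each known interpreter location in order; exec replaces the wrapper
    for py in /path/to/python/on/systemA /path/on/systemB; do
        if [ -x "$py" ]; then
            exec "$py" /path/to/myscript.py "$@"
        fi
    done
    echo "no usable python interpreter found" >&2
    exit 1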
{ "source": [ "https://unix.stackexchange.com/questions/365436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230958/" ] }
365,550
I uploaded my source code to an SVN repository. After committing I found many files starting with ._filename. How can I remove all those files starting with ._filename ? I have many subfolders and each subfolder has the same problem. It would be better for me to verify that only those files which match a particular pattern are deleted. So kindly help.
find . -type f -name "._*" -print This will find and display the names of all the files matching the filename globbing pattern ._* in the current directory, or in any of its subdirectories. To remove them, change -print to -delete , or just add -delete to the end if you want to see what gets deleted.
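Since the files have already been committed, deleting them from the working copy alone won't remove them from the repository; a sketch that also schedules them for deletion in SVN (assuming a reasonably recent svn client):

    # skip .svn metadata, schedule every ._* file for removal, then commit
    find . -type f -name '._*' -not -path '*/.svn/*' -exec svn delete --force {} +
    svn commit -m "Remove stray ._ files"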
{ "source": [ "https://unix.stackexchange.com/questions/365550", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231598/" ] }
365,592
I have a large set of JPEG pictures, all with the same resolution. It would take too long to open each one inside the graphical interface of ImageMagick or GIMP. How can I rotate each picture and save it under the same filename?
You can use the convert command: convert input.jpg -rotate <angle in degrees> out.jpg To rotate 90 degrees clockwise: convert input.jpg -rotate 90 out.jpg To save the file with the same name: convert file.jpg -rotate 90 file.jpg To rotate all files (quoting the variable so names with spaces survive): for photo in *.jpg ; do convert "$photo" -rotate 90 "$photo" ; done Alternatively, you can use the mogrify command-line tool recommended by @don-crissti, which rewrites the files in place: mogrify -rotate 90 *.jpg
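If the pictures only look rotated because of their EXIF orientation tag, rotating by a fixed angle may be the wrong fix; ImageMagick can instead rotate each file according to that tag (try it on copies first, since mogrify overwrites files in place):

    mogrify -auto-orient *.jpg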
{ "source": [ "https://unix.stackexchange.com/questions/365592", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229576/" ] }
365,740
I recently started using CentOS. I went to try to use the killall utility but found it missing, with me receiving a command not found message when trying to use it. How can I get this functionality on my system so that I can, for instance, kill all processes whose names match a pattern?
The pkill utility is a much better alternative to killall . killall is not portable as the behavior of the command is very different across OSs. pkill is portable and behaves the same everywhere. It's also a lot more flexible as it provides a lot of different ways of matching the processes. It also shares the same matching behavior and arguments as the pgrep utility , which allows you to see what processes would be matched and signaled without actually signalling them. Usage: pkill foo (which would be the same as killall foo )
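A sketch of the preview-then-kill workflow, with a made-up process name:

    pgrep -l myworker            # list the PIDs and names that would match
    pkill myworker               # send the default SIGTERM to all of them
    pgrep -l myworker || echo "all gone"
    # pkill -9 myworker          # SIGKILL, only as a last resort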
{ "source": [ "https://unix.stackexchange.com/questions/365740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163827/" ] }
366,407
Is it possible to use a commit message from stdout, like: echo "Test commit" | git commit - Tried also to echo the message content in .git/COMMIT_EDITMSG , but then running git commit would ask to add changes in mentioned file.
You can use the -F <file>, --file=<file> option. echo "Test commit" | git commit -F - Its usage is described in the man page for git commit : Take the commit message from the given file. Use - to read the message from the standard input.
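This is handy for multi-line messages, which are awkward to pass with -m; for example (a sketch):

    printf 'Fix widget parsing\n\nLonger explanation of why the change was needed.\n' | git commit -F -

For a simple one-line message, git commit -m "Test commit" avoids the pipe entirely.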
{ "source": [ "https://unix.stackexchange.com/questions/366407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126666/" ] }