source_id | question | response | metadata
---|---|---|---|
266,888 | I want to install an older version of package <x> , and when I use dnf it only shows the current version of the package <x> . Is there any way to install an older version using dnf ? | You can install using a specific name-version as described in the man page: dnf install tito-0.5.6-1.fc22 Install package with specific version. If the package is already installed it will automatically try to downgrade or upgrade to specific version. To view all versions of a package in your enabled repositories, use: dnf --showduplicates list <package> | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/266888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148747/"
]
} |
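A minimal shell sketch of the workflow this answer describes; the package name and version below are hypothetical, so substitute your own:

```sh
# List every available version of a package across the enabled repos
dnf --showduplicates list httpd

# Install (or downgrade/upgrade to) a specific name-version-release
sudo dnf install httpd-2.4.6-45.el7
```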
266,904 | I am trying to extract a value from a long string that may change over time. So for example the string could look something like this ....../filename-1.9.0.3.tar.gz"<.... And what I want to extract is the value between filename- and .tar.gz , essentially the file version (1.9.0.3 in this case). The reason I need to do it this way is because I may later run the command and the value will be 1.9.0.6 or 2.0.0.2 or something entirely different. How can I do this? I'm currently only using grep, but I wouldn't mind using other utilities such as sed or awk or cut or whatever. To be perfectly clear, I need to extract only the file version part of the string, since it is very long (on both sides) everything else needs to be cut out somehow. | With grep -P / pcregrep , using a positive look-behind and a positive look-ahead: grep -P -o '(?<=STRING1).*?(?=STRING2)' infile in your case replace STRING1 with filename- and STRING2 with \.tar\.gz If you don't have access to pcregrep and/or if your grep doesn't support -P you can do this with your favourite text processing tool. Here's a portable way with ed that gives you the same output: ed -s infile <<\INg/STRING1/s//\ &/gv/STRING1.*STRING2/d,s/STRING1//,s/STRING2.*//,pIN How it works: a newline is prepended to each STRING1 occurrence (so now there's at most one occurrence per line) then all lines not matching STRING1.*STRING2 are deleted; on the remaining ones we only keep what's between STRING1 and STRING2 and print the result. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/266904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72554/"
]
} |
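A runnable sketch of the grep -P approach from the answer, using a hypothetical input line:

```sh
# Extract whatever sits between "filename-" and ".tar.gz";
# the look-behind/look-ahead keep the delimiters out of the match
printf '%s\n' '....../filename-1.9.0.3.tar.gz"<....' |
  grep -Po '(?<=filename-).*?(?=\.tar\.gz)'
# prints: 1.9.0.3
```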
266,921 | I'm printing a message in a Bash script, and I want to colourise a portion of it; for example, #!/bin/bashnormal='\e[0m'yellow='\e[33m'cat <<- EOF ${yellow}Warning:${normal} This script repo is currently located in: [ more messages... ]EOF But when I run in the terminal ( tmux inside gnome-terminal ) the ANSI escape characters are just printed in \ form; for example, \e[33mWarning\e[0m This scr.... If I move the portion I want to colourise into a printf command outside the here-doc, it works. For example, this works: printf "${yellow}Warning:${normal}"cat <<- EOF This script repo is currently located in: [ more messages... ]EOF From man bash – Here Documents: No parameter and variable expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word . If any characters in word are quoted, the delimiter is the result of quote removal on word , and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion. In the latter case, the character sequence \<newline> is ignored, and \ must be used to quote the characters \ , $ , and ` . I can't work out how this would affect ANSI escape codes. Is it possible to use ANSI escape codes in a Bash here document that is cat ted out? | In your script, these assignments normal='\e[0m'yellow='\e[33m' put those characters literally into the variables, i.e., \ e [ 0 m , rather than the escape sequence. You can construct an escape character using printf (or some versions of echo ), e.g., normal=$(printf '\033[0m')yellow=$(printf '\033[33m') but you would do much better to use tput , as this will work for any correctly set up terminal: normal=$(tput sgr0)yellow=$(tput setaf 3) Looking at your example, it seems that the version of printf you are using treats \e as the escape character (which may work on your system, but is not generally portable to other systems). To see this, try yellow='\e[33m'printf 'Yellow:%s\n' $yellow and you would see the literal characters: Yellow:\e[33m rather than the escape sequence. Putting those in the printf format tells printf to interpret them (if it can). Further reading: tput, reset - initialize a terminal or query terminfo database printf - write formatted output (POSIX) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/266921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
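Putting the answer's advice together, a sketch of the corrected script using tput (plain << EOF here, to avoid the tab-indentation requirement of <<-):

```sh
#!/bin/bash
# The escape characters are generated at assignment time by tput,
# so the here-doc only has to expand ordinary variables
normal=$(tput sgr0)
yellow=$(tput setaf 3)

cat << EOF
${yellow}Warning:${normal} This script repo is currently located in:
[ more messages... ]
EOF
```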
266,931 | Say I started a su command, and I want to cancel it. Control + C doesn't work for su like it does for sudo ... I have to finish the prompt (either by getting the password wrong enough times or by getting it right). Is there something that I can type to kill a password prompt? | su is running with elevated privileges, and you are not seeing it respond to ^C (which sends a signal with your privileges). You could su to another shell and kill it from the other shell. Also (depending on the system), it might respond to SIGHUP (a hangup signal) if you closed the terminal session where the awkward su is in progress. There's more than one way that su can ignore your ^C , e.g., establishing signal handlers or running under a different controlling terminal. A quick read of Debian's su suggests that it uses the latter. Your system of course may be different. Further reading: /bin/su no longer listens to SIGINT! how to terminate some process which is run with sudo with kill | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/266931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61235/"
]
} |
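If the stuck su won't die from its own terminal, the answer's suggestion translates to something like this in a second terminal (the PID is whatever pgrep reports, and signalling a root-owned process may itself require elevated privileges):

```sh
pgrep -a su        # locate the stuck su and note its PID
kill -HUP 12345    # 12345 is illustrative; use the PID from pgrep
```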
266,999 | I have a text file, abc.text , whose contents are Hi I'm a text file. If I double click to open this file, then the file is opened in the gedit editor. Whereas, if I rename the file to abc.html (without changing any of its contents) then by default it opens in Chrome. This sort of behavior is acceptable on a Windows machine, since Windows uses file extensions to identify file types. But as far as I've read, Linux doesn't need file extensions. So why does changing a file's extension in Linux change the default program that opens it? | Linux doesn't use file extensions to decide how to open a file, but Linux uses file extensions to decide how to open a file. The problem here is that “Linux” can designate different parts of the operating system, and “opening a file” can mean different things too. A difference between Linux and Windows is how they treat application files vs data files. On Windows, the line between the two is blurred; there are a few types of executable files, and they are determined by their extension ( .exe , .bat , etc.), but in most contexts you can “execute” any file (e.g. by clicking in Explorer), and this executes the executable that is associated with that file type, where the file type is entirely determined by the extension (so executing a .doc file might start c:\Program Files\something or other\winword.exe , executing a .py file might start a Python interpreter, etc.). On Linux, there is a notion of executable file which is independent of the file name. Executables generally have no extension, because they're meant to be typed by the user. The type of the file is irrelevant, all the user wants to do is execute the file. The kernel determines how to execute the file from the file contents: it knows some file types natively, and the shebang mechanism allows a file to declare any other executable file¹ as its interpreter. On the other hand, data files usually do have an extension that indicates the type of data. The general idea here is that the type of data is not synonymous with what application to use to open the file with. You may want to view a PDF in Okular, or in Evince, or in Xpdf, or in Acroread, or in Mupdf, etc. There are many tools that do however allow opening a data file without having to explicitly specify what application to use. These tools almost exclusively base their decision on the file extension. The file extension and the file's content are the only information that these tools have at their disposal: Linux does not store any meta information regarding the file format. So when you click on a .pdf file in a file manager (or when you run the .pdf file on a suitably-configured zsh command line, etc.), the file manager consults a database to find what application is the preferred one for a .pdf file. This database may be structured in two sections, one that associates extensions to MIME types ( /etc/mime.types , ~/.local/share/mime ) and one that associates MIME types to applications ( /etc/mailcap , ~/.local/share/applications ), but even so the origin is the extension. While it would often be possible to figure out the application from the file content, this would be slower, and not always possible (many formats look just like text files, a .jar is a type of .zip , etc.). Linux doesn't need file extensions, and it doesn't use them to determine how to run an executable file, but it does use them to determine which program to use to open a data file.
¹ That file has to be a native executable, a shebang executable can't point to another shebang executable to avoid potentially unending recursion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/266999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159144/"
]
} |
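You can watch the extension-based lookup described above with the freedesktop query tools; a small illustrative session (outputs depend on your installed applications):

```sh
xdg-mime query filetype abc.html    # e.g. text/html, typically resolved via the extension glob
xdg-mime query default text/html    # e.g. google-chrome.desktop
file --mime-type abc.text           # content-based guess; the extension is ignored
```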
267,056 | I currently have this code: cat /proc/cpuinfo | grep "MHz" | cut -d ':' -f2 | sed 's/.*/Core NUM&Mhz/' Which outputs the following (for 8 cores): Core NUM 1941.054MhzCore NUM 1949.820MhzCore NUM 2022.734MhzCore NUM 1877.171MhzCore NUM 1938.265MhzCore NUM 1945.703MhzCore NUM 1845.562MhzCore NUM 1781.546Mhz What I want to do is replace "Core NUM" with actual numbers (e.g. Core 0, Core 1, Core 2, Core 3...) I would prefer a solution that can be done on one line with the rest of this command, but since I am in the end working from a bash script, I don't mind a multi-line/scripted solution either. The biggest problem I guess is that grep is one command outputting many lines, and I don't know if there's a way to target these lines one by one in something like a for or while loop. So, how can I do this? | Let awk do it all for you: </proc/cpuinfo awk -F : '/MHz/{printf "Core %d%sMhz\n", n++, $2}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72554/"
]
} |
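For reference, a run of that one-liner produces output in the asker's desired shape (frequencies here are illustrative):

```sh
$ </proc/cpuinfo awk -F : '/MHz/{printf "Core %d%sMhz\n", n++, $2}'
Core 0 1941.054Mhz
Core 1 1949.820Mhz
Core 2 2022.734Mhz
```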
267,094 | In my .profile , I use the following code to ensure that Bash-related aliases and functions are only sourced if the login shell actually is Bash : # If the current (login) shell is Bash, thenif [ "${BASH_VERSION:-}" ]; then # source ~/.bashrc if it exists. if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc" fifi I’m currently in the process of putting my shell configuration files, scripts and functions under version control. I’ve also recently started the process of removing casual Bashisms from shell scripts that don’t benefit from Bash-specific features, e.g., replacing function funcname() with funcname() . For my shell files repository, I’ve configured a pre-commit hook that runs the checkbashisms utility from Debian’s devscripts package on each sh file in the repository to ensure that I don’t inadvertently introduce Bash-specific syntax. However, this raises an error for my .profile : possible bashism in .profile line 51 ($BASH_SOMETHING):if [ "${BASH_VERSION:-}" ]; then I was wondering if there was a way to check which shell is running that wouldn’t trigger a warning in checkbashisms . I checked the list of shell-related variables listed by POSIX in the hope that one of them could used to show the current shell. I’ve also looked at the variables set in an interactive Dash shell but, again, failed to find a suitable candidate. At the moment, I’ve excluded .profile from being processed by checkbashisms ; it’s a small file so it’s not hard to check it manually. However, having researched the issue, I’d still like to know if there is a POSIX compliant method to determine which shell is running (or at least a way that doesn’t cause checkbashisms to fail). Further background/clarification One of the reasons I’m putting my shell configuration files under version control is to configure my environment on all the systems I currently log in to on a regular basis: Cygwin, Ubuntu and CentOS (both 5 and 7, using Active Directory for user authentication). I most often log on via X Windows / desktop environments and SSH for remote hosts. However, I’d like this to be future proof and have the least reliance on system dependencies and other tools as possible. I’ve been using checkbashisms as a simple, automated sanity check for the syntax of my shell-related files. It’s not a perfect tool, e.g., I’ve already applied a patch to it so that it doesn’t complain about the use of command -v in my scripts. While researching, I’ve learned that the program’s actual purpose is to ensure compliance with Debian policy which, as I understand it, is based on POSIX 2004 rather than 2008 (or its 2013 revision). | Your # If the current (login) shell is Bash, thenif [ "${BASH_VERSION:-}" ]; then # source ~/.bashrc if it exists. if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc" fifi code is completely POSIX conformant and the best way to check that you're currently running bash . Of course the $BASH_VERSION variable is bash-specific, but that's specifically why you're using it! To check that you're running bash ! Note that $BASH_VERSION will be set whether bash is invoked as bash or sh . Once you've asserted that you're running bash , you can use [ -o posix ] as an indicator that the shell was invoked as sh (though that option is also set when POSIXLY_CORRECT is in the environment or bash is called with -o posix , or with SHELLOPTS=posix in the environment. But in all those cases, bash will behave as if called as sh ). 
Another variable you could use instead of $BASH_VERSION and that checkbashism doesn't seem to complain about unless passed the -x option is $BASH . That is also specific to bash so one you should also be able to use to determine whether you're running bash or not. I'd also argue it's not really a proper use of checkbashisms . checkbashisms is a tool to help you write portable sh scripts (as per the sh specification in the Debian policy, a superset of POSIX), it helps identify non-standard syntax introduced by people writing scripts on systems where sh is a symlink to bash . A .profile is interpreted by different shells, many of which are not POSIX compliant. Generally, you don't use sh as your login shell, but shells like zsh , fish or bash with more advanced interactive features. bash and zsh , when not called as sh and when their respective profile session file ( .bash_profile , .zprofile ) are not POSIX conformant (especially zsh ) but still read .profile . So it's not POSIX syntax you want for .profile but a syntax that is compatible with POSIX (for sh ), bash and zsh if you're ever to use those shells (possibly even Bourne as the Bourne shell also reads .profile but is not commonly found on Linux-based systems). checkbashisms would definitely help you find out bashisms but may not point out POSIX syntax that is not compatible with zsh or bash . Here, if you want to use bash -specific code (like the work around of that bash bug whereby it doesn't read ~/.bashrc in interactive login shells), a better approach would be to have ~/.bash_profile do that (before or after sourcing ~/.profile where you put your common session initialisations). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22812/"
]
} |
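A sketch of the layout the answer recommends, with bash-specific startup kept out of ~/.profile:

```sh
# ~/.bash_profile -- read by bash for login shells, so bash-only material
# lives here while ~/.profile stays portable across shells
[ -f "$HOME/.profile" ] && . "$HOME/.profile"   # shared, POSIX-compatible setup
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"                           # interactive bash extras
fi
```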
267,100 | Trying to find the Linux distribution with the highest system requirements? What is the most demanding of all the Linux OS according to the published minimum specifications of each distro? Based on default graphical installation. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159202/"
]
} |
267,148 | For a large log file, how do I display the lines that neither contain "success" nor end with "ok"? | To remove lines that contain either string, specifically with grep: In one command, per jordanm's comment: grep -Ev 'success|ok$' or: grep -ve success -e 'ok$' or: grep -v 'success\|ok$' In two commands: grep -v success file | grep -v 'ok$' Example: $ cat filesuccess something elsesuccess okjust something else$ grep -Ev 'success|ok$' filejust something else$ grep -v success file | grep -v 'ok$'just something else To remove lines that contain both strings, specifically with grep: grep -v 'success.*ok$' file Example: $ cat filesuccess something elsesuccess okjust something else$ grep -v 'success.*ok$' filesuccess something elsejust something else | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
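If you prefer a single process with explicit boolean logic, an awk equivalent of the either-string case:

```sh
# Print only lines that neither contain "success" nor end in "ok"
awk '!/success/ && !/ok$/' file
```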
267,153 | What are the fundamental differences between some arbitrary account and root ? Is it just the UID being something other than zero? So, what exactly does the su binary do and how does it elevate a user to root? I know a user must first be a part of the sudo group through what we find in /etc/sudoers . # User privilege specificationroot ALL=(ALL:ALL) ALL# Members of the admin group may gain root privileges%admin ALL=(ALL) ALL# Allow members of group sudo to execute any command%sudo ALL=(ALL:ALL) ALL Taking a look at the su executable's permissions we find -rwsr-xr-x , or 4755 (i.e. setuid is set). Is it the su binary that reads this configuration file and checks if the user requesting root permissions is part of either group sudo or admin ? If so, does the binary spawn another shell as root (considering the setuid bit) assuming the user is part of the expected groups and knows the appropriate user's password that is attempting to be substituted (e.g. root , in particular)? tl;dr Does the act of privilege elevation rely on the setuid bit in the su binary, or are there other mechanisms to change the UID of the current process? In the case of the former, it seems that only the EUID would change leaving UID != EUID. Is this ever a problem? related How is all of this emulated in the Android environment? As far as I have read, access to root has been entirely stripped -- although processes still run at this privilege level. If we removed sudo and su would that be enough to prevent privilege elevation or has Android taken further steps? | Root is user 0 The key thing is the user ID 0. There are many places in the kernel that check the user ID of the calling process and grant permission to do something only if the user ID is 0. The user name is irrelevant; the kernel doesn't even know about user names. Android's permission mechanism is identical at the kernel level but completely different at the application level. Android has a root user (UID 0), just like any other system based on a Linux kernel. Android doesn't have user accounts though, and on most setups doesn't allow the user (as in the human operating and owning the device) to perform actions as the root user. A “rooted” Android is a setup that does allow the device owner/user to perform actions as root. How setuid works A setuid executable runs as the user who owns the executable. For example, su is setuid and owned by root, so when any user runs it, the process running su runs as the root user. The job of su is to verify that the user that calls it is allowed to use the root account, to run the specified command (or a shell if no command is specified) if this verification succeeds, and to exit if this verification fails. For example, su might ask the user to prove that they know the root password. In more detail, a process has three user IDs : the effective UID, which is used for security checks; the real UID, which is used in a few privilege checks but is mainly useful as a backup of the original user ID, and the saved user ID which allows a process to temporarily switch its effective UID to the real user ID and then go back to the former effective UID (this is useful e.g. when a setuid program needs to access a file as the original user). Running a setuid executable sets the effective UID to the owner of the executable and retains the real UID. Running a setuid executable (and similar mechanisms, e.g. setgid) is the only way to elevate the privileges of a process.
Pretty much everything else can only decrease the privileges of a process. Beyond traditional Unix Until now I described traditional Unix systems. All of this is true on a modern Linux system, but Linux brings several additional complications. Linux has a capability system. Remember how I said that the kernel has many checks where only processes running as user ID 0 are allowed? In fact, each check gets its own capability (well, not quite, some checks use the same capability). For example, there's a capability for accessing raw network sockets, and another capability for rebooting the system. Each process has a set of capabilities alongside its users and groups. The process passes the check if it is running as user 0 or if it has the capability that corresponds to the check. A process that requires a specific privilege can run as a non-root user but with the requisite capability; this limits the impact if the process has a security hole. An executable can be setcap to one or more capabilities: this is similar to setuid, but works on the process's capability set instead of the process's user ID. For example, ping only needs raw network sockets, so it can be setcap CAP_NET_RAW instead of setuid root. Linux has several security modules , the best known being SELinux . Security modules introduce additional security checks, which can apply even to processes running as root. For example, it's possible (not easy!) to set up SELinux so as to run a process as user ID 0 but with so many restrictions that it can't actually do anything . Linux has user namespaces . Inside the kernel, a user is in fact not just a user ID, but a pair consisting of a user ID and a namespace. Namespaces form a hierarchy: a child namespace refines permissions within its parent. The all-powerful user is user 0 in the root namespace. User 0 in a namespace has powers only inside that namespace. For example, user 0 in a user namespace can impersonate any user of that namespace; but from the outside all the processes in that namespace run as the same user. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72608/"
]
} |
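A few commands for observing these mechanisms on a typical Linux system (paths and output vary by distribution):

```sh
ls -l /bin/su           # the 's' in -rwsr-xr-x is the setuid bit
getcap /usr/bin/ping    # e.g. cap_net_raw=ep, a capability instead of setuid root
id -u                   # prints 0 when your effective user ID is root
```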
267,154 | I know this question was asked on google a few times here and there but everywhere so far people just tell how it works on Linux and I've learned that already, so you don't have to tell me how partitions work nor how the operating system works :) I want to know how to practically install my packages onto a sd/usb stick. I have only 16 GB available initially on my Chromebook's drive which is already occupied by both Chrome OS and Ubuntu, so I am pretty low on that and I would like to not use any more of that drive space. I would like to either: 1) Set a default installation path to that sd/usb drive or 2) Enter the path manually each time. Both are okay with me. If there is a piece of software which would allow you to do that in GUI - that'd be even better! | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159237/"
]
} |
267,155 | Back in the day I used to build chroot jails using yum . It was simple, easy, and let me do things that weren't really available at the time (build packages for multiple distros on platforms like ia64 and ppc using the same infrastructure). Fast forward 5 years, I'd like to build a simple chroot jail on Fedora 23. However, dnf doesn't make this easy. I used to be able to just create an /etc/yum.repo.d/ file in the jail dir and call yum --installroot . Unfortunately dnf is still reading the local repo and not the one created in the chroot jail. Is it possible to have dnf use conf files that aren't /etc/dnf/dnf.conf or in /etc/yum.repos.d/ ? | As you found out, with dnf you need to specify the --releaserver argument. In addition, if you want to use repositories specific to the chroot, then you'll need a bit more work. I find the easiest solution is to create your own dnf.conf file inside the chroot, put the repository configurations inside, and then use it. For example, let's say you want to create a Fedora 24 chroot in the $(pwd)/mychroot folder, using only packages from the fedora and rpmfusion-free repositories. You would create the mychroot/etc/dnf/dnf.conf file, with the following content: [main]gpgcheck=1installonly_limit=3clean_requirements_on_remove=Truereposdir=[fedora]name=Fedora $releasever - $basearchfailovermethod=prioritymetalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearchenabled=1metadata_expire=7dgpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearchskip_if_unavailable=False[updates]name=Fedora $releasever - $basearch - Updatesfailovermethod=prioritymetalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearchenabled=1gpgcheck=1metadata_expire=6hgpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearchskip_if_unavailable=False[rpmfusion-free]name=RPM Fusion for Fedora $releasever - Freemetalink=https://mirrors.rpmfusion.org/metalink?repo=free-fedora-$releasever&arch=$basearchenabled=1metadata_expire=14dgpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmfusion-free-fedora-$releasever (look at the /etc/yum.repos.d/*.repo files on your system and just copy-paste) The important part is this line in the main section, which tells dnf not to search for repositories in any directory, but only in the main configuration file, which will make it ignore your system repositories: reposdir= Finally, you can run dnf: # dnf -c $(pwd)/mychroot/etc/dnf/dnf.conf install --installroot=$(pwd)/mychroot --releasever=24 gstreamer1-libav | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159238/"
]
} |
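Once the installroot is populated you can spot-check it by chrooting in; a sketch, assuming coreutils made it into the tree:

```sh
sudo chroot "$PWD/mychroot" /usr/bin/cat /etc/os-release
```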
267,169 | Oracle Linux 5.10 BASH shell [oracle@src01]$ getconf ARG_MAX131072[oracle@srv01]$ ls -1 | wc -l40496#!/bin/bash## delete files in /imr_report_repo that are older than 15-daysfind /imr_report_repo/* -maxdepth 0 -type f -mtime +15 |while read filedo rm -f $file done/usr/bin/find: Argument list too long If I'm reading this right the maximum arguments allowed is 131,072 and I only have 40,496 files in this directory. I haven't checked, but I'm probably trying to delete 40,000 files (over 2-weeks old). | I think this has been answered here: https://arstechnica.com/civis/viewtopic.php?t=1136262 The shell is doing a file expansion of /imr_report_repo/* , which causes the problem. I had a similar issue which I fixed by changing the find command from find /imr_report_repo/* -maxdepth 0 -type f -mtime +15 to find /imr_report_repo/ -maxdepth 1 -name "*" -type f -mtime +15 The quotes keep the shell from expanding the wildcard and then find can match it as a glob pattern. It also helps if you need to search for a large number of files that match specific criteria (like "*.foo"). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88123/"
]
} |
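With GNU find you can skip both the shell glob and the per-file rm; a sketch of the equivalent cleanup:

```sh
# -maxdepth 1 restricts matching to the directory's immediate entries,
# and -delete removes them without spawning rm (GNU find)
find /imr_report_repo/ -maxdepth 1 -type f -mtime +15 -delete
```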
267,178 | I have a list of base64-encoded file names in the pattern of {base64-encoded part here}_2015-11-12.pdf . I'm trying to decode that list of files and return it on the command line as another list, separated by newlines. Here's what I'm trying now: find . -name "*_*" -printf "%f\0" | sed 's/_....-..-..\.pdf//g' | xargs -0 -i echo "{}" | base64 -d I think what I'm doing here is . . . finding the files, printing out only the file's name (i.e., stripping off the "./" prefix) separated by a null character using sed to preserve only the base64-encoded part (i.e., removing the _2015-11-12.pdf part of the file's name) using xargs to ostensibly pass each file name to echo then decoding the value returned by echo. The result of that is apparently a big string of all of the base64-decoded file names, each name separated by a null character, with the entire string followed by a newline. The desired result would be each individual decoded file name on a line by itself. I've tried all kinds of tricks to try and fix this but I haven't found anything that works. I've tried ... | base64 -d | echo , ... | base64 -d && echo , etc., trying to insert a newline at various points along the way. It seems like by the time the values end up at | base64 -d , they are all processed at once, as a single string. I'm trying to find a way to send each value to base64 -d one at a time, NOT as a monolithic list of file names. | Just add the base64 encoding of newline ( Cg== ) after each file name and pipe the whole thing to base64 -d : find . -name "*_*" -printf "%f\n" | sed -n 's/_....-..-..\.pdf$/Cg==/p' | base64 -d With your approach, that would have to be something like: find . -name "*_*" -printf "%f\0" | sed -zn 's/_....-..-..\.pdf$//p' | xargs -r0 sh -c ' for i do echo "$i" | base64 -d done' sh as you need a shell to create those pipelines. But that would mean running several commands per file which would be quite inefficient. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37920/"
]
} |
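A self-contained demo of the Cg== trick with hypothetical encoded names; this assumes a base64 -d (such as GNU coreutils') that decodes each padded 4-character group and continues:

```sh
# cmVwb3J0IEE= and cmVwb3J0IEI= are "report A" and "report B" encoded
printf '%s\n' cmVwb3J0IEE= cmVwb3J0IEI= | sed 's/$/Cg==/' | base64 -d
# report A
# report B
```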
267,182 | I'm working on a script to create a fully encrypted washable system from debootstrap . It's doing some good, but the initramfs image that comes out does not pick up the cryptroot properly. After booting the image with qemu, I'm dropped to a busybox shell and I have to unlock the luks encryption manually with cryptsetup : cryptsetup luksOpen /dev/sda1 system/scripts/local-premount/flashbackexit (flashback does some btrfs snapshoting magic to forget changes made on every boot) After this, boot in qemu continues normally and I am then able to generate a good initramfs image. I copy this to the btrfs @root.base subvolume and all is well from then on. I need help with figuring out why the cryptsetup/cryptroot part is not being picked up in the chroot environment by update-initramfs : echo "CRYPTSETUP=y" >> /usr/share/initramfs-tools/conf-hooks.d/cryptsetupecho "export CRYPTSETUP=y" >> /usr/share/initramfs-tools/conf-hooks.d/cryptsetupupdate-initramfs -ut I have tried many things, I write a good fstab and crypttab and even tried to explicitly set cryptdevice in grub.cfg. Refer to the specific version of the script . Here's how I create the fstab and crypttab: export partuuid=$(blkid $partition | sed -re 's/.*: UUID="([^"]+)".*/\1/')export decruuid=$(blkid /dev/mapper/$decrypted | sed -re 's/.*: UUID="([^"]+)".*/\1/')echo "Adding flashback with uuid $partuuid"echo "system UUID=$partuuid none luks" >> "$rootmount/etc/crypttab"echo "UUID=$decruuid / btrfs [email protected] 0 0" >> "$rootmount/etc/fstab"echo "UUID=$decruuid /home btrfs subvol=@home 0 0" >> "$rootmount/etc/fstab" The question in principle is: How do you generate a functioning initramfs image in an encrypted chroot of a debootstrapped debian? Thanks a bunch | Using /etc/initramfs-tools/conf.d/cryptsetup is deprecated in stretch. The new preferred method is to set "CRYPTSETUP=y" in /etc/cryptsetup-initramfs/conf-hook . In buster and later, this configuration parameter appears to be redundant, as the default behaviour seems to be to configure cryptsetup in initramfs IFF the initramfs-cryptsetup package is installed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159260/"
]
} |
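On stretch or later, the chroot steps from the question would therefore become something like:

```sh
# Inside the chroot, assuming the cryptsetup-initramfs package is installed
echo 'CRYPTSETUP=y' >> /etc/cryptsetup-initramfs/conf-hook
update-initramfs -u -k all
```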
267,185 | So, not concerning ourselves with the WHY, and more so with the HOW, I'd like to see if anyone knows where I'm going wrong here. Basically, I'd like to forward all packets headed for port 80 on an IP that I've aliased to the loopback device (169.254.169.254) to be forwarded to port 8080 on another IP, which happens to be the public IP of the same box (we'll use 1.1.1.1 for the purpose of this question). In doing so, I should [ostensibly] be able to run telnet 169.254.169.254 80 and reach 1.1.1.1:8080, however, this is not happening. Here is my nat table in iptables: ~# iptables -nvL -t natChain PREROUTING (policy ACCEPT 66 packets, 3857 bytes) pkts bytes target prot opt in out source destination 0 0 DNAT tcp -- * * 0.0.0.0/0 169.254.169.254 tcp dpt:80 to:1.1.1.1:8080Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Am I missing something? I've followed most of the information in the iptables man pages and also in the links below, however, am still getting a "connection refused" during my telnet attempts. I have tried adding ~#iptables -t nat -A POSTROUTING -j MASQUERADE to my iptables, but to no avail :/ If anyone could point me in the right direction that would be phenomenal! http://linux-ip.net/html/nat-dnat.html https://www.frozentux.net/iptables-tutorial/chunkyhtml/x4033.html EDIT I wanted to add that I do indeed have the following sysctl parameter enabled ~# sysctl net.ipv4.ip_forwardnet.ipv4.ip_forward = 1 EDIT No. 2 I was able to solve this by adding the rule to the OUTPUT chain in the nat table, vs. the PREROUTING chain as I originally tried. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31779/"
]
} |
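Per the asker's EDIT No. 2, locally generated packets traverse the nat table's OUTPUT chain rather than PREROUTING, so the working rule looks roughly like:

```sh
iptables -t nat -A OUTPUT -p tcp -d 169.254.169.254 --dport 80 \
    -j DNAT --to-destination 1.1.1.1:8080
```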
267,204 | When rsync ing a directory to a freshly plugged-in external USB flash drive, via rsync -av /source/ /dest/ all files get transferred (i.e. rewritten) despite no changes in the files. Note that overwriting the files only takes place once the USB is un- and replugged. Doing the rsync command twice in a row without unplugging the drive in-between does successfully skip the whole directory contents. Including the -u update option and explicitly adding the -t option did not change anything. The mount point remains the same (i.e. /media/user/<UUID> , the drive is automouted by xfce , the /dev/sdxy obviously changes)The hard drive source is ext4 , while the USB is vfat with utf8 character encoding. What could be the reason for this behaviour is it the change in the /dev/ name entry? How can I make rsync run with properly recognizing file changes? My backup should just take seconds without this, while it now is always minutes due to the large amount of data being overwritten repeatedly, nor is the massive writing the best for the flash drive's life time expectancy. | Your FAT drive can store timestamps only to two second accuracy. When you unplug and replug the drive you effectively break all the file times. See the --modify-window option for a workaround. Secondly, you're never going to see fast backups with rsync like this, because when copying locally it behaves much like cp . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123460/"
]
} |
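Concretely, the workaround the answer points to looks like this (the option is documented in the rsync man page):

```sh
# Treat timestamps differing by up to 2 seconds as equal,
# matching FAT's two-second timestamp resolution
rsync -av --modify-window=2 /source/ /dest/
```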
267,210 | strace runs a specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. When running an external command in a bash shell, the shell first fork() a child process, and then execve() the command in the child process. So I guess that strace will report fork() or something similar such as clone() But the following example shows it doesn't. Why doesn't strace report that the parent shell fork() the child process before execve() the command? Thanks. $ strace -f timeexecve("/usr/bin/time", ["time"], [/* 66 vars */]) = 0brk(0) = 0x84c000access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efe9b2a5000access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3fstat(3, {st_mode=S_IFREG|0644, st_size=141491, ...}) = 0mmap(NULL, 141491, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7efe9b282000close(3) = 0access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\37\2\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=1840928, ...}) = 0mmap(NULL, 3949248, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7efe9acc0000mprotect(0x7efe9ae7b000, 2093056, PROT_NONE) = 0mmap(0x7efe9b07a000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1ba000) = 0x7efe9b07a000mmap(0x7efe9b080000, 17088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7efe9b080000close(3) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efe9b281000mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efe9b27f000arch_prctl(ARCH_SET_FS, 0x7efe9b27f740) = 0mprotect(0x7efe9b07a000, 16384, PROT_READ) = 0mprotect(0x602000, 4096, PROT_READ) = 0mprotect(0x7efe9b2a7000, 4096, PROT_READ) = 0munmap(0x7efe9b282000, 141491) = 0write(2, "Usage: time [-apvV] [-f format] "..., 177Usage: time [-apvV] [-f format] [-o file] [--append] [--verbose] [--portability] [--format=format] [--output=file] [--version] [--quiet] [--help] command [arg...]) = 177exit_group(1) = ?+++ exited with 1 +++ | $ strace -f timeexecve("/usr/bin/time", ["time"], [/* 66 vars */]) = 0brk(0) = 0x84c000... Strace directly invokes the program to be traced. It doesn't use the shell to run child commands, unless the child command is a shell invocation. The approximate sequence of events here is as follows: The shell executes strace with arguments "strace", "-f", "time". Strace starts up, parses its command line, and eventually forks. The original (parent) strace process begins tracing the child strace process. The child strace process executes /usr/bin/time with the argument "time". The time program starts up. After step 1, the original shell process is idle, waiting for strace to exit. It's not actively doing anything. And even if it were doing something, it's not being traced by strace, so its activity wouldn't appear in the strace output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
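To actually watch a shell fork and then exec a command, trace the shell itself; an illustrative sketch:

```sh
# The trailing ':' stops sh from exec-ing time directly, so the trace
# shows a clone()/fork() followed by the child's execve()
strace -f -e trace=process sh -c '/usr/bin/time true; :'
```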
267,216 | I developed a module which works as an emulator for a block device. When I write into the block device, I get this in dmesg and the module crashes. I cannot get any hint about what is going on. [82013.054224] CPU: 9 PID: 15452 Comm: my_blk/0 Tainted: G B I E 3.19.0+ #1[82013.054226] Hardware name: Dell Inc. PowerEdge R730xd/0599V5, BIOS 1.0.4 08/28/2014[82013.054229] ffffffff81aa8fb8 ffff881fe1613778 ffffffff817a7f98 0000000000000000[82013.054234] 0000000000000009 ffff881fe16137a8 ffffffff813c45b5 ffff880030243600[82013.054239] ffff881fe0a4c798 ffff883feb3ced00 ffff881fe00c3900 ffff881fe16137b8[82013.054244] Call Trace:[82013.054251] [<ffffffff817a7f98>] dump_stack+0x4f/0x7b[82013.054257] [<ffffffff813c45b5>] check_preemption_disabled+0xf5/0x110[82013.054262] [<ffffffff813c4607>] debug_smp_processor_id+0x17/0x20[82013.054276] [<ffffffffc03599dd>] megasas_build_io_fusion+0x54d/0x5a0 [megaraid_sas][82013.054287] [<ffffffffc0359af1>] megasas_build_and_issue_cmd_fusion+0x71/0x110 [megaraid_sas][82013.054296] [<ffffffffc034cf35>] megasas_queue_command+0x145/0x1b0 [megaraid_sas][82013.054301] [<ffffffff8154ae03>] scsi_dispatch_cmd+0x103/0x370[82013.054306] [<ffffffff8154dcbf>] scsi_request_fn+0x4af/0x6c0[82013.054311] [<ffffffff81374177>] __blk_run_queue+0x37/0x50[82013.054315] [<ffffffff81374dd1>] queue_unplugged+0x41/0xf0[82013.054320] [<ffffffff8137a042>] blk_flush_plug_list+0x1d2/0x210[82013.054325] [<ffffffff8137a098>] blk_finish_plug+0x18/0x50[82013.054331] [<ffffffff8127e54b>] ext4_writepages+0x55b/0xd10[82013.054336] [<ffffffff812144ad>] ? __mnt_drop_write+0x2d/0x50[82013.054342] [<ffffffff8109d624>] ? finish_task_switch+0x64/0x110[82013.054348] [<ffffffff81187ea0>] do_writepages+0x20/0x40[82013.054352] [<ffffffff8117c1a9>] __filemap_fdatawrite_range+0x59/0x60[82013.054356] [<ffffffff8117c1e7>] filemap_write_and_wait_range+0x37/0x80[82013.054360] [<ffffffff8127376a>] ext4_sync_file+0x12a/0x390///// calling some functions in my_blk[82013.054397] [<ffffffff81097b19>] kthread+0xc9/0xe0[82013.054402] [<ffffffff81097a50>] ? flush_kthread_worker+0x90/0x90[82013.054407] [<ffffffff817af7bc>] ret_from_fork+0x7c/0xb0[82013.054412] [<ffffffff81097a50>] ? flush_kthread_worker+0x90/0x90 | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157029/"
]
} |
267,259 | In the root directory of my USB flash drive, sometimes when I run ls , the output is normal and it lists the files. At other times, the output is simply one line: $ ls. If I try ls -la at one of those times, I get this: $ ls -lals: .: Invalid argument If I run ls back to back multiple times, it seems to return either the normal output or the abnormal one basically at random. ls appears to work normally in other directories. ls $drivename even appears to work fine from the parent directory, and ls .. seems to work fine from a child directory. (Though I can't be 100% sure of the ones that "work normally" since the behavior is indeterminate to begin with.) I tried two other external USB drives and got the same behavior. What's going on here? I'm on Mac OS X 10.11.3. Edit: Nice idea, but I don't seem to be using an alias, and /bin/ls gives the same result. | It may be a bug in the filesystem driver for FAT32 on recent versions of OSX. This also only appears to occur when the working directory is at the root of the mounted drive. If it's in a subdirectory or anywhere else on the system things seem to work. There's some interesting discussion in this thread including system traces. https://github.com/robbyrussell/oh-my-zsh/issues/4161 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267259",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157777/"
]
} |
267,317 | I'm quite new to shell scripting. I need to create a script or reuse a script that converts PDF to PS (PostScript) files; is there any way this can be done? | I need to create a script or reuse a script that converts PDF to PS (PostScript) files There are tools but there are also pros and cons for each tool. pdf2ps is my go-to tool since my needs are pretty low. It is a simple one-line command. pdf2ps [options] input.pdf [output.ps] If you don't give an output.ps it will keep the input file name and just change the extension from pdf to ps. pdf2ps will convert the files, but the output may be larger and it will take longer than pdftops. pdf2ps converts the fonts to bitmap fonts. Poppler's pdftops is the successor to Xpdf (poppler-utils in Ubuntu). It runs fast, represents the fonts better and has a ton of neat tools. Its invocation synopsis is identical to that of pdf2ps: pdftops [options] input.pdf [output.ps] Wikipedia says: poppler-utils is a collection of command-line utilities built on Poppler's library API, to manage PDF and extract contents: pdfattach – add a new embedded file (attachment) to an existing PDF pdfdetach – extract embedded documents from a PDF pdffonts – lists the fonts used in a PDF pdfimages – extract all embedded images at native resolution from a PDF pdfinfo – list all information of a PDF pdfseparate – extract single pages from a PDF pdftocairo – convert single pages from a PDF to vector or bitmap formats using cairo pdftohtml – convert PDF to HTML format retaining formatting pdftoppm – convert a PDF page to a bitmap pdftops – convert PDF to printable PS format pdftotext – extract all text from PDF pdfunite – merges several PDFs To make it a bash script to convert all pdf to ps converting the extensions only: #!/bin/bashclearfind . -type f -iname '*.pdf' -print0 | while IFS= read -r -d '' file do pdftops "${file}" "${file%.*}.ps"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159355/"
]
} |
267,320 | I need to grab the first lines of a long text file for some bugfixing on a smaller file (a Python script does not digest the large text file as intended). However, for the bugfixing to make any sense, I really need the lines to be perfect copies, basically byte-by-byte, and pick up any potential problems with character encoding, end-of-line characters, invisible characters or what not in the original txt. Will the following simple solution accomplish that or I'd lose something using the output of head ? head infile.txt > output.txt A more general question on the binary copy with head , dd , or else is now posted here . | POSIX says that the input to head is a text file , and defines a text file: 3.397 Text File A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character. Although POSIX.1-2008 does not distinguish between text files and binary files (see the ISO C standard), many utilities only produce predictable or meaningful output when operating on text files. The standard utilities that have such restrictions always specify "text files" in their STDIN or INPUT FILES sections. So there is a possibility of losing information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101254/"
]
} |
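If byte-exactness matters more than a line count, copying a fixed number of bytes sidesteps the text-file caveat entirely; a sketch (head -c is a GNU extension):

```sh
# First 64 KiB, byte for byte, with no line or encoding interpretation
head -c 65536 infile.txt > output.txt
# or, equivalently, with dd
dd if=infile.txt of=output.txt bs=65536 count=1
```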
267,329 | How does one stop a Postgresql instance, 9.2, on CentOS 7? I found nothing under /etc/init.d/ nor a ctl program. | Welcome to the world of systemd. Try something like: service postgresql-9.2 stop If that does not work, try to find the correct service name: systemctl list-units|grep postgresql and retry the above command with the part of the result just before the ".service". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159424/"
]
} |
267,361 | Lots of programming-oriented editors will colorize source code. Is there a command that will colorize source code for viewing in the terminal? I could open a file with emacs -nw (which opens in the terminal instead of popping up a new window), but I'm looking for something that works like less (or that works with less -R , which passes through color escape sequences in its input). | With highlight on a terminal that supports the same colour escape sequences as xterm : highlight -O xterm256 your-file | less -R With ruby-rouge : rougify your-file | less -R With python-pygments : pygmentize your-file | less -R With GNU source-highlight : source-highlight -f esc256 -i your-file | less -R You can also use vim as a pager with the help of macros/less.sh script shipped with vim (see :h less within vim for details): On my system: sh /usr/share/vim/vim74/macros/less.sh your-file Or you could use any of the syntax highlighters that support HTML output and use elinks or w3m as the pager (or elinks -dump -dump-color-mode 3 | less -R ) like with GNU source-highlight : source-highlight -o STDOUT -i your-file | elinks -dump -dump-color-mode 3 | less -R | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/267361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23344/"
]
} |
267,367 | Currently, my whole system is located at the end of my hdd. I'd like to move that data to the beginning and still have booting and other details working. dd seems to do exactly what I want (to copy my data exactly how it is placed), but I'm not sure about things like booting, grub configs and so on. Will I need to set these things later, or will dd do this job for me? | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/267367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119794/"
]
} |
267,437 | How can I make the second echo echo out test in this example as well: echo test | xargs -I {} echo {} && echo {} | Just write {} two times in your command. The following would work: $ echo test | xargs -I {} echo {} {}test test Your problem is how the commands are nested . Let's look at this: echo test | xargs -I {} echo {} && echo {} bash will execute echo test | xargs -I {} echo {} . If it runs successfully, echo {} is executed. To change the nesting, you could do something like this: echo test | xargs -I {} sh -c "echo {} && echo {}" However, you could get into trouble because this approach is prone to code injection. When "test" is substituted with shell code, it gets executed. Therefore, you should instead pass the input to the nested shell as an argument: echo test | xargs -I {} sh -c 'echo "$1" && echo "$1"' sh {} | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/267437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
267,478 | I am trying to make use of Amazon's elasticity and bring instances online and off using the AWS cli . But as you know, every time you offline a system, and bring it back to life, it comes with a new IP address. I have AWS-cli package installed on my Centos 6 server, located elsewhere. I have been researching this for days now and am not able to find a working command, that I can issue from my Centos box and get the IP address of the instance at Amazon EC2. EC2 instance is up and running. Most relevant information I found was aws ec2 describe-instances but all I am getting back from this command is something like a usage syntax output. I also found (and promptly lost) a switch --query followed by a set of keywords to extract this information. But that command gave me a response, saying --query is not a recognized argument. I checked the Amazon's cli reference for this command and only parameter it seems to accept is --filter and examples are way far from being helpful. Does anyone know how to accomplish this ? EDIT More on the issue I discovered over the weekend. Before my attempt to get the Public DNS info from the instance, I need a way to connect to this instance. I am unable to get information about the instances I have, no matter what I do: $ ec2-describe-instances i-b78a096fsanity-check: Your system clock is 50 seconds behind.+----------------------------+---------------------------------------------+| Code | Message |+----------------------------+---------------------------------------------+| InvalidInstanceID.NotFound | The instance ID 'i-b78a096f' does not exist |+----------------------------+---------------------------------------------+ I know that my AWS_ACCESS_KEY and AWS_SECRET_KEY variables are correctly assigned to their proper variable names. The instance id has been copied and pasted from the AWS management console. To test, I spun up a new instance and tested the same command against it, with no different result. Although, when I run ec2-describe-regions command, I can see the regions list available to me with no problems. I am dumbfounded right now. | An ec2 instance can be identified by its instance-id, which will never change, no matter how many times you stop and start the instance. So you can try this out if you have an instance id and you need its IP address. For public IP address: aws ec2 describe-instances --instance-ids i-b78a096f | grep PublicIpAddress | awk -F ":" '{print $2}' | sed 's/[",]//g' For private IP address: aws ec2 describe-instances --instance-ids i-b78a096f | grep PrivateIpAddress | head -1 | awk -F ":" '{print $2}' | sed 's/[",]//g' Still, I personally think that it would be much much better if you try the same thing in python. I have implemented same logic in my previous organization with the python boto library and it was much more simple and manageable. 
setting up virtual env:

#!/usr/bin/env bash
set -e
HOME_DIR=/home/$(whoami)
# Dependencies
sudo apt-get install ncurses-devel patch openssl openssl-devel zlib-devel
# Install python locally
mkdir -p $HOME_DIR/src
mkdir -p $HOME_DIR/.localpython
cd $HOME_DIR/src
wget https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
tar -zxvf Python-2.7.8.tgz
cd Python-2.7.8
./configure --prefix=$HOME_DIR/.localpython
make
make install
# Install virtualenv locally
cd $HOME_DIR/src
wget --no-check-certificate https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.11.6.tar.gz#md5=f61cdd983d2c4e6aeabb70b1060d6f49
tar -zxvf virtualenv-1.11.6.tar.gz
cd virtualenv-1.11.6/
~/.localpython/bin/python setup.py install
# Create a test virtual environment
mkdir -p $HOME_DIR/virtualenvs
cd $HOME_DIR/virtualenvs
~/.localpython/bin/virtualenv my_virtual_env --python=$HOME_DIR/.localpython/bin/python2.7
cd my_virtual_env
source bin/activate
pip install awscli
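As a footnote on the --query switch the question mentions: recent aws-cli builds support JMESPath filters, which replace the whole grep/awk/sed pipeline. A sketch, reusing the question's example instance id (the error in the question suggests an old build, so treat this as something to try after upgrading):

aws ec2 describe-instances --instance-ids i-b78a096f --query 'Reservations[0].Instances[0].PublicIpAddress' --output text

Swap PublicIpAddress for PrivateIpAddress to get the private address. | {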
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27383/"
]
} |
267,486 | I wrote simple script which is running all the time and whenever the size of a file is changed it will write something like "The size has changed" to terminal but instead of terminal message, is it possible to actually get some prompt or some alert sound like in C? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159391/"
]
} |
267,496 | I am running the following to find all directories without a number in them (not including the original path): find /path/2/directory -type d -name '[!0-9]*' The problem is that it finds directories that are subdirectories of directories that have numbers. Example: /path/2/directory/20160303/backup or even /path/2/directory/backup20160303a/backup Is there any way to prevent find from returning such directories? I cannot solve this problem by limiting the depth, the depth can vary. Example: /path/2/directory/subdirectory/20160303/backup | Use -prune to ignore those directories: find /path/to/directory -name '*[0-9]*' -prune -o -type d -print though if you're on a GNU setup you may want to use the C locale when running the above, see Stéphane's comment below.
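A sketch of the locale-pinned variant (the point being that range expressions like [0-9] are locale-dependent on GNU systems and may match more than the ASCII digits):

LC_ALL=C find /path/to/directory -name '*[0-9]*' -prune -o -type d -print | {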
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159472/"
]
} |
267,506 | I recently noticed the following in my cygwin profile, more precisely: /usr/local/bin:/usr/bin${PATH:+:${PATH}} What does it mean? Why is not just $PATH? Is this an 'if $PATH exists then add :$PATH'? My purpose is to swap the order and put the cygwin paths behind the windows path. In the past I would have $PATH:/usr/local/bin:/usr/bin but this confuses me. Maybe I should be doing PATH="${PATH:+${PATH}:}/usr/local/bin:/usr/bin" to append the : at the end of the $PATH? | The :+ is a form of parameter expansion : ${parameter:+[word]} : Use Alternative Value. If parameter is unset or null, null shall be substituted; otherwise, the expansion of word (or an empty string if word is omitted) shall be substituted. In other words, if the variable $var is defined, echo ${var:+foo} will print foo and, if it is not, it will print the empty string. The second : is nothing special. It is the character used as a separator in the list of directories in $PATH . So, PATH="/usr/local/bin:/usr/bin${PATH:+:${PATH}}" is a shorthand way of writing: if [ -z "$PATH" ]; then PATH=/usr/local/bin:/usr/binelse PATH=/usr/local/bin:/usr/bin:$PATHfi It's just a clever trick to avoid adding an extra : when $PATH is not set. For example: $ PATH="/usr/bin"$ PATH="/new/dir:$PATH" ## Add a directory$ echo "$PATH"/new/dir:/usr/bin But if PATH is unset: $ unset PATH$ PATH="/new/dir:$PATH"$ echo "$PATH"/new/dir: A : by itself adds the current directory to the $PATH . Using PATH="/new/dir${PATH:+:$PATH}" avoids this. So sure, you can use PATH="${PATH:+${PATH}:}/usr/local/bin:/usr/bin" if you want to, or you can use PATH="$PATH:/usr/local/bin:/usr/bin" if you prefer. The only difference is that the former might add an extra : , thereby adding your current directory to your $PATH . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/267506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110907/"
]
} |
267,507 | I want to know how to create a shell script inside a text editor. So this is what I have inside the text editor.

#!/bin/bash
mkdir -p temp
cd temp
if [ $1 > $2 ] ; then
  echo $1
else
  echo $2
fi
./max.sh 4 6
./max.sh -2 -5
./max.sh 7 -3

So basically inside the text editor I want to create a shell script called max.sh so that below it I can pass arguments through it but in the same text editor. To make it more clear: I want the if-statement to be inside a script called max.sh, so below it I can call the max.sh with arguments and it will work. | What you want is called a function :

#!/bin/bash
max() {
  if [ "$1" -gt "$2" ] ; then
    printf %s\\n "$1"
  else
    printf %s\\n "$2"
  fi
}
max 4 6
max -2 -5
max 7 -3

Further reading: In Bash, when to alias, when to script, and when to write a function? Why is printf better than echo? Why does my shell script choke on whitespace or other special characters? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155767/"
]
} |
267,528 | Trying the command mv foo&foo.jpg images/ but I get command not found; then if I try to rename the file it won't let me. | Use single quotes. For example: mv 'foo&foo.jpg' images/ Unless you quote or escape the & symbol, it's interpreted as a special token by the shell.
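That also explains the exact error you saw: an unquoted & ends the command and runs it in the background, so the shell ran mv foo in the background and then tried to execute foo.jpg as a command of its own, hence "command not found". Escaping the ampersand works just as well as quoting:

mv foo\&foo.jpg images/ | {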
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40129/"
]
} |
267,536 | I saw in a linux script there was a command argument > /dev/null 2>&1 , I know it is to redirect the output to null, meaning silencing it. I also know about the numbering for 0,1,2 (STDIN, STDOUT, STDERR), but I don't get why we need to have this line? 2>&1 Basically I want to know what is the difference between >/dev/null and >/dev/null 2>&1 | 2>&1 will redirect stderr to wherever stdout currently points. The argument >/dev/null will redirect stdout to /dev/null , i.e. discard/silence the command's output. But if you also want to discard (silence) the stderr, then after redirecting stdout to /dev/null , specify 2>&1 to redirect stderr to the same place. Example (for visualizing the difference):

$ ls
file1
file2
$ ls file1 > /dev/null
$

Here the output of ls file1 is file1 which is sent to /dev/null and hence we get nothing. But:

$ ls file12 > /dev/null
ls: cannot access file12: No such file or directory

which prints the error message, since only stdout was sent to /dev/null . So, if you want to discard/silence stderr as well, you can redirect stderr to stdout and hence both will be sent to /dev/null as follows:

$ ls file12 >/dev/null 2>&1
$

Note that the order/sequence of redirection matters. 2>&1 to redirect standard error must always be placed after redirecting standard output or it doesn't do anything. In the above example if you run ls file12 2>&1 >/dev/null you will see the stderr printed to the terminal; if you run ls file12 >/dev/null 2>&1 you won't. Alternatively you could also use ls file1 file12 2>/dev/null 1>&2 with the same effect: it first redirects stderr to /dev/null and then redirects stdout to point to the same place stderr is currently pointing to. With newer versions of bash you can also simply use >& like: ls file12 >& /dev/null which redirects both stdout & stderr to /dev/null | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158235/"
]
} |
267,562 | When I use df or mount , I'm most of all interested in physical disk partitions. Nowadays the output of those commands is overwhelmed by temporary and virtual filesystems, cgroups and other things I am not interested in on a regular basis. My physical partitions in the output always start with ' / ', so I tried making aliases for df and mount :

alias df1="df | egrep '^/'"
alias mount1="mount | egrep '^/'"

That works OK for mount1 (although it shows the ' / ' in red), but for df1 I would sometimes like to add the -h option to df and cannot do df1 -h . I would prefer not to have an alias for every option combination I might want to use. Do I really have to look into defining functions in bash (I would prefer not to)? Is there a better solution for df1 ? | You can solve the df1 argument issue by using the following alias: alias df1='df --type btrfs --type ext4 --type ext3 --type ext2 --type vfat --type iso9660' make sure to add any other type ( xfs , fuseblk (for modern NTFS support, as @Pandya pointed out), etc) you're interested in. With that you can do df1 -h and get the expected result. mount does have a -t option but you cannot specify it multiple times (only the last is taken), so there I would use: alias mount1="mount | /bin/grep -E '^/'" I am using grep -E as egrep is deprecated, and using /bin/grep makes sure you're not using --colour=auto from an alias for grep / egrep
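A sketch of the inverse approach, assuming GNU coreutils: df also has -x / --exclude-type, which is shorter when you only want to hide the virtual filesystems rather than whitelist the real ones:

alias df1='df -x tmpfs -x devtmpfs -x squashfs'

df1 -h then works the same way, and you only have to extend the list when a new noise filesystem shows up rather than when you add a new real one. | {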
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267562",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159523/"
]
} |
267,566 | Running the following command works as expected, creating a symlink to src from include/bb : ln -sf ../src include/bb However, calling the same command again will cause an additional symlink to src to be created in include/bb aka src/src . What can I do to prevent it? Edit: Apparently this happens when ln is called twice, possibly the semicolon has nothing to do with it after all. Here's a sequence of commands that will cause this to happen:

mkdir test
cd test
mkdir src
touch src/main.cpp
mkdir include
ln -sf ../src include/bb
ln -sf ../src include/bb | ln -s ../src include/bb either creates a link named include/bb referring to ../src (relative to its location), or it creates a file include/bb/src referring to ../src (relative to its location). The latter is the case if include/bb exists and is a directory (actually or as a link to a directory). Thus, after you've run the command once, there is a link include/bb that points to the directory src , sibling to include . I.e., include/bb is now a directory. Therefore, when you run the command a second time, a new link named src is added to that directory. It has nothing to do with ; . You might have intended to use ln -sTf ../src include/bb instead, so as to (by -T ) tell ln to treat any existing include/bb as a plain file (even though it's a link that refers to a directory), and (by -f ) force that file to be replaced. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159526/"
]
} |
267,631 | I'm actually working on a shell script to monitor a server's resources. I have a function and I want to know: how can I call a second function inside the main one? Example:

funct mainfunct(){
echo "Text to show here"
**$secondfunct**
}
funct secondfunct(){
commands
} | In ksh or bash,

mainfunct() {
  echo "Text to show here" $(secondfunct)
}
secondfunct() {
  echo commands here
}
mainfunct

Generates the following: Text to show here commands here | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158831/"
]
} |
267,636 | I have a list of strings, for every of those strings, I want to check if it occurs in a big source code directory. I came to a GNU grep solution that gives me what I want: for key in $(cat /tmp/listOfKeys.txt); do if [ "$(grep -rio -m 1 "$key" . | wc -l)" = "0" ]; then echo "$key has no occurence"; fidone However, it's not efficient at all since it always grep every file of the directory, even if it finds a match early. Since there are a lot of keys to lookup, and pretty much files to search in, it is not usable as-is. Do you know a way to do this efficiently with a "standard" unix tool? | It can at least be simplified to: set -f # needed if you're using the split+glob operator and don't want the # glob partfor key in $(cat /tmp/listOfKeys.txt); do grep -riFqe "$key" . || printf '%s\n' "$key has no occurrence"done Which would stop searching after the first occurrence of the key and not consider the key as a regular expression (or possible option to grep ). To avoid having to read files several times, and assuming your list of keys is one key per line (as opposed to space/tab/newline separated in the code above), you could do with GNU tools: find . -type f -size +0 -printf '%p\0' | awk ' ARGIND == 2 {ARGV[ARGC++] = $0; next} ARGIND == 4 {a[tolower($0)]; n++; next} { l = tolower($0) for (i in a) if (index(l, i)) { delete a[i] if (!--n) exit } } END { for (i in a) print i, "has no occurrence" }' RS='\0' - RS='\n' /tmp/listOfKeys.txt It's optimised in that it will stop looking for a key as soon as it's seen it and will stop as soon as all the keys have been found and will read the files only once. It assumes keys are unique in listOfKeys.txt . It will output the keys in lower case. The GNUisms above are -printf '%p\0' , ARGIND and the ability of awk to handle NUL delimited records. The first two can be addressed with: find . -type f -size +0 -exec printf '%s\0' {} + | awk ' step == 1 {ARGV[ARGC++] = $0; next} step == 2 {a[tolower($0)]; n++; next} { l = tolower($0) for (i in a) if (index(l, i)) { delete a[i] if (!--n) exit } } END { for (i in a) print i, "has no occurrence" }' step=1 RS='\0' - step=2 RS='\n' /tmp/listOfKeys.txt step=3 The third one could be addressed with tricks like this one , but that's probably not worth the effort. See Barefoot IO's solution for a way to bypass the problem altogether. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137586/"
]
} |
267,677 | I have the following string y10_zcis y10_nom y10_infl y20_zcis y20_infl y30_zcis I would like to transform this to "y10_zcis", "y10_nom", "y10_infl", "y20_zcis", "y20_infl", "y30_zcis" I accomplished something similar with the extremely ugly:

$ cat in.txt | sed 's/ /\'$'\n/g' | sed 's/\(.*\)/"\1",/g' | tr -d '\n'
"y10_zcis","y10_nom","y10_infl","y20_zcis","y20_infl","y30_zcis",

But that feels like an utter failure, and it doesn't take care of the last unwanted , (but perhaps this is best to just delete afterwards) | You can do sed -e 's| |", "|g' -e 's|^|"|g' -e 's|$|"|g' in.txt Where 's| |", "|g' will replace every space with ", " 's|^|"|g' since there is no space at the beginning of the line, you match the start of the line with ^ and insert a " there. 's|$|"|g' same thing, but for the end of every line, matched with $ UPDATE As @don_crissti pointed out, you can do it shorter with the following sed 's| |", "|g;s|.*|"&"|' Where ; separates the instructions .* matches the entire line. & an ampersand on the RHS is replaced by the entire expression matched on the LHS, in this case .* RHS=Right hand side LHS=Left hand side | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143576/"
]
} |
267,694 | From the manpage for ln : -d, -F, --directory allow the superuser to attempt to hard link directories (note: will probably fail due to system restrictions, even for the superuser) Are there any filesystem drivers that actually allow this, or is the only option mount --bind <src> <dest> ? Or is this kind of behavior blocked by the kernel before it even gets to the filesystem-specific driver? NOTE: I'm not actually planning on doing this on any machines, just curious. | First a note: the ln command does not have options like -d , -F , --directory ; this is a non-portable GNUism. The feature you are looking for is implemented by the link(1) command. Back to your original question: On a typical UNIX system the decision whether hard links on directories are possible is made in the filesystem driver. The Solaris UFS driver supports hard links on directories, the ZFS driver does not. The reason why UFS on Solaris supports hard links is that AT&T was interested in this feature - UFS from BSD does not support hard linked directories. The reason why ZFS does not support hardlinked directories is that Jeff Bonwick does not like that feature. Regarding Linux, I would guess that Linux blocks attempts to create hard links on directories in the upper kernel layers. The reason for this assumption is that Linus Torvalds wrote code for GIT that did shred directories when git clone was called as root on a platform that supports hard linked directories. Note that a filesystem that supports creating hard-linked directories also needs to support unlink(2) to remove non-empty directories as root. So if we assume that Torvalds knows how Linux works and if Linux did support hard linked directories, Torvalds should have known that calling unlink(2) on a directory while being root will not return with an error but shred that directory. In other words, it is unlikely that Linux permits a file system driver to implement hard linked directories. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74972/"
]
} |
267,704 | Is there a command that exists that can simulate keypresses? I want to pipe some data to it to make it type into a GUI program for me. | Yes, it is xdotool . To simulate a key press, use: xdotool key <key> For example, to simulate pressing F2 : xdotool key F2 To simulate pressing ctrl + c : xdotool key ctrl+c To simulate pressing ctrl + c and then a Backspace : xdotool key ctrl+c BackSpace Check man xdotool for more details. You might need to install the xdotool package first to use the xdotool command.
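For the piping part of the question, xdotool can also type whole strings rather than single keys. A sketch; the --file option reads the text to type from a file, with - for stdin (check your build's man page, as older versions may lack it):

xdotool type 'some text'
printf 'piped text' | xdotool type --file -

Combined with a small delay (xdotool type --delay 100 ...) this is usually enough to drive a GUI form. | {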
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/267704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146386/"
]
} |
267,706 | The Bash Manual says { list; } Placing a list of commands between curly braces causes the list to be executed in the current shell context. No subshell is created. The semicolon (or newline) following list is required. The braces are reserved words, so they must be separated from the list by blanks or other shell metacharacters. The parentheses are operators, and are recognized as separate tokens by the shell even if they are not separated from the list by whitespace. If I remove the semicolon, like this: $ { date }> Why does it expect stdin input? A metacharacter is a character that separates words. The semicolon and whitespace are both "shell metacharacters". Why can't whitespace separate words date and } ? Why do we need a semicolon instead of a whitespace for separating the words? | It's waiting for the closing } . You've entered a list with one command, date } , and a newline, so you're still inside the command group and can either add another command to the list or terminate it. So it's not waiting for standard input (exactly), it's waiting for you to complete the command you started on the first line. If you enter } here, you'll (probably) get an error from the date command saying that it doesn't understand " } ". { and } are valid arguments to common commands, as seen in point 1. For example, find uses {} as an argument. Specifically, only exactly " { " and " } " are reserved words . Reserved words in the shell only matter when they're given exactly as an entire word of their own, and only where they're specifically expected. The most important place they're expected is at the start of a command. The semicolon or newline means that } appears at the start of the next command in the list, where it can be recognised as a reserved word and given its special treatment. This is specified by POSIX for the shell grammar : This rule also implies that reserved words are not recognized except in certain positions in the input, such as after a <newline> or <semicolon> ; the grammar presumes that if the reserved word is intended, it is properly delimited by the user It would be annoying if "then", another reserved word, couldn't be used as a normal word, so that basically makes sense. ( and ) , by contrast, are operators, can appear anywhere, and need to be escaped if they're used for their literal values. This is essentially a historical artefact, and given a do-over perhaps a more consistent choice would be made in one direction or another, or perhaps a more complex parser would be mandated. For Bash in particular, braces for command grouping also need to be distinguished from braces for brace expansion , and the parser makes an assumption that you're unlikely to be brace-expanding a command that starts with a space. There is a choice whether to maintain corner-case historical compatibility or not, and some other shells, such as zsh, have cleverer parsers and are able to deal with { date } with the meaning you intended. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
267,719 | I'm trying to install Elementary OS on a USB drive so I can carry my OS with me. I have installed the ISO to the pen drive and can boot into the desktop no problem. When I come to do an install the installer doesn't see the pen drive as a disk it only sees my internal HDD's. Any ideas? Thank you | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85353/"
]
} |
267,729 | How can I print $myvar padded so that it is in the center of the terminal, and to either side are = to the edge of the screen? | I found two pieces of information here on the stackexchange network that helped me arrive at this working answer: https://stackoverflow.com/q/263890/5419599 https://stackoverflow.com/q/4409399/5419599 However the code in this answer is my own. See the edit history if you want more verbosity; I've edited out all the cruft and "steps along the way." I think the best way is:

center() {
  termwidth="$(tput cols)"
  padding="$(printf '%0.1s' ={1..500})"
  printf '%*.*s %s %*.*s\n' 0 "$(((termwidth-2-${#1})/2))" "$padding" "$1" 0 "$(((termwidth-1-${#1})/2))" "$padding"
}
center "Something I want to print"

Output on a terminal 80 columns wide: ========================== Something I want to print =========================== Note that the padding doesn't have to be a single character; in fact the padding variable isn't, it's 500 characters long in the above code. You could use some other form of padding by changing just the padding line: padding="$(printf '%0.2s' ^v{1..500})" Results in: ^v^v^v^v^v^v^v^v^v^v^v^v^v Something I want to print ^v^v^v^v^v^v^v^v^v^v^v^v^v^ Another handy use is: clear && center "This is my header" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
267,736 | I always use rpm --query --package-name to see what files (scripts and configurations) I have installed. I'm looking for the similar command in ubuntu. Is there any equivalent command for apt-get? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157029/"
]
} |
267,743 | When I type ctrl+r and then start typing I can see what commands in the history match which is great. Now is there a way to search the history on commands I have already typed in the terminal? For example if I type ctrl+r and then type ping I can cycle through servers I have pinged. But if I type "ping" first and then hit ctrl+r it ignores the "ping" I have already typed. Some times I'll get half way though typing out a string of commands and then think "oh I already typed this it sure would be nice to search the history on what I have already typed instead of starting over". Does this make sense what I am asking? | If you start typing a command and then, after typing some of it, remember to do a history search, you just need to: CTRL + A CTRL + R CTRL + Y CTRL + R ... (keep searching or) CTRL + S ... (search in the other direction*) Note: CTRL + S will suspend your terminal unless you explicitly revoked this behavior with [[ $- == *i* ]] && stty -ixon in your .bashrc Edited: a shortcut can be seen here in this somewhat duplicated question https://superuser.com/questions/384051/is-there-a-way-of-using-ctrl-r-after-typing-part-of-command-in-bash/1271740#1271740 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29731/"
]
} |
267,751 | I have a script which runs the command apt-get update but the script is unable to detect failure due to no internet. i.e. if apt-get is not run with root privileges it exits with code 100 . so I can do a check like if apt-get updatethen echo "success!"else echo "something went wrongfi But if apt-get update is run without an internet connect, it produces error messages, some to STDOUT, some to STDERR, e.g. this goes to STDERR ... W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/InRelease ... This goes to STDOUT ...Err http://archive.ubuntu.com trusty-updates Release.gpg Temporary failure resolving 'archive.ubuntu.com'Ign https://deb.nodesource.com trusty InReleaseIgn https://deb.nodesource.com trusty Release.gpg... But ultimately it still returns an exit of 0 . How can I detect an apt-get upgrade failure caused by no internet access? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
267,761 | From Make bash use external `time` command rather than shell built-in , Stéphane Chazelas wrote: There is no time bash builtin. time is a keyword so you can do for instance time { foo; bar; } We can verify it:

$ type -a time
time is a shell keyword
time is /usr/bin/time

It doesn't show that time can be a builtin command. What is the definition of a "keyword"? Is "keyword" the same concept as "reserved word" in the Bash Reference Manual? reserved word A word that has a special meaning to the shell. Most reserved words introduce shell flow control constructs, such as for and while . Is a keyword necessarily not a command (or not a builtin command)? As a keyword, is time not a command (or not a builtin command)? Based on the definitions of keyword and of builtin, why is time not a builtin but a keyword? Why "you can do for instance time { foo; bar; } " because " time is a keyword"? | Keywords, reserved words, and builtins are all the "first word" of a simple command. They can be placed in two groups: Keywords and builtins. The two are mutually exclusive. A word (token) can be either a keyword or a builtin, but not both. Why the "first word" From the POSIX definition of "simple command" (emphasis mine): A "simple command" is a sequence of optional variable assignments and redirections, in any sequence, optionally followed by words and redirections, terminated by a control operator. 2.- The words that are not variable assignments or redirections shall be expanded. If any fields remain following their expansion, the first field shall be considered the command name and remaining fields are the arguments for the command. After that "first word" has been identified, and after it has been expanded (by an alias, for example), the final word is "the command"; there can be only one command in each line. That command word could be a builtin or a keyword. Keyword Yes, a keyword is a "reserved word". Load "man bash" and search for keyword or just execute this command: LESS=+/'keyword' man bash . The first hit on search says this: keyword Shell reserved words. It happens in the completion section, but is quite clear. Reserved words In POSIX, there is this definition of "reserved words" and some description of what reserved words do . But the Bash manual has a better working definition. Search for "RESERVED WORDS" ( LESS=+/'RESERVED WORDS' man bash ) and find this: RESERVED WORDS Reserved words are words that have a special meaning to the shell. The following words are recognized as reserved when unquoted and either the first word of a simple command or the third word of a case or for command: ! case do done elif else esac fi for function if in select then until while { } time [[ ]] Builtin It is not defined in the Bash manual, but it is quite simple: It is a command that has been implemented inside the shell for essential needs of the shell (cd, pwd, eval), or for speed in general, or to avoid conflicting interpretations of external utilities in some cases. Time is a keyword why is time not a builtin but a keyword? To allow the existence of a command as the second word. It is similar to how an if ... then ... fi allows the inclusion of commands (even compound commands) after the first keyword if . Or while or case , etc.
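A quick way to feel the difference (a sketch): because time is a keyword, it can wrap an entire compound command, while the external binary only ever receives ordinary argument words:

time { sleep 1; sleep 1; }   # keyword: times the whole group, about 2s
/usr/bin/time { sleep 1 }    # external: "{" is just an argument here

In the second line nothing is special about the braces, so /usr/bin/time tries to execute a program literally named { and fails (the exact error message varies by implementation). | {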
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
267,797 | I want to grep a link from an external file example.txt . example.txt contains: (https://example.com/pathto/music.mp3)music.mp3 the code: egrep -o -m1 '(https)[^'\"]+.mp3' example.txt output: https://example.com/pathto/music1.mp3)music.mp3 When I run grep, it detects the last .mp3 as the end of the output, while I need it to end after the first occurrence. How can I tell grep to stop after finding the first pattern? My desired output: https://example.com/pathto/music.mp3 I just want to extract any string starting with https and ending with mp3 | Standard grep does not accept the ? modifier that would normally make it non-greedy. But you can try the -P option that - if enabled in your distro - will make it accept Perl style regexes: grep -oP -m1 "(https)[^'\"]+?.mp3" mp3.txt If that does not work, you could, for your specific example, add the closing parenthesis to the negated bracket expression so the match cannot run past it: egrep -o -m1 "(https)[^'\")]+?.mp3" mp3.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267797",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153322/"
]
} |
267,835 | What is the Linux kernel source tree? What does it contain and what is its purpose? I'm trying to build an external module and the tutorial I'm using says to make sure that a kernel source tree is available. If it is available, where can I find it in Ubuntu? There is a similar question here: What does a kernel source tree contain? Is this related to Linux kernel headers? but I don't see the answer to my questions. It would be nice to have this clarified. | The source-tree is a directory which contains all of the kernel source. You could build a new kernel, install that, and reboot your machine to use the rebuilt kernel. Other than for learning, people rebuild the kernel to select less-used options, or to add device drivers which are normally not bundled with Linux. You may not find it in Ubuntu, but would have to download the source tar-file, e.g., from kernel.org . Ubuntu uses Debian packages for many things, and the latter's website makes it easier to find the packages. http://packages.ubuntu.com/ https://www.debian.org/distrib/packages Those consist (in either case) of a "pristine" tar-file (from "upstream") and a "debian" add-on (scripts and packages). You can download both of those from Debian. If you are looking for the source for the kernel package which you have installed, you would download both parts. You can also install the "linux-source" package: Debian and Ubuntu provide a few source-packages, this is one of the few (a quick check finds only a couple of dozen packages with "-source" in their names, compared to tens of thousands of other packages). The source-package is preferred, since there are many fixes (and customizations) needed, and the source-package has those patches incorporated into the tree. I used to routinely build kernels until about ten years ago, since the drivers for sound, video and network were lacking.
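On Ubuntu specifically, two hedged sketches of getting a tree (package names vary by release, so verify against your version):

sudo apt-get install linux-source        # drops a source tarball under /usr/src
apt-get source linux-image-$(uname -r)   # fetches the source of the running kernel's package

The second form needs deb-src lines enabled in /etc/apt/sources.list. Either tree can then serve as the "kernel source tree" an external-module build expects, though for building modules alone the linux-headers package matching your running kernel is usually sufficient. | {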
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159727/"
]
} |
267,865 | I have a C program (say a simple Queue system) which I compile and get an executable file. I want to run this executable as a service on a specific TCP port on a CentOS system which I can connect to via telnet and use it as a service (issuing command like getHead, queue, dequeue, etc). Do I need to code this in the C program itself, like which port to listen on? How can I achieve this? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159755/"
]
} |
267,879 | Base on this I'm running the command < /dev/urandom hexdump -v -e '/1 "%u\n"' |awk '{ split("0,2,4,5,7,9,11,12",a,","); for (i = 0; i < 1; i+= 0.0001) printf("%08X\n", 100*sin(1382*exp((a[$1 % 8]/12)*log(2))*i)) }' |xxd -r -p |sox -traw -r44100 -b16 -e unsigned-integer - -tcoreaudio I notice that the memory used by awk continually grows while this command is running, for example consuming over 500MB of memory by the time 75MB of raw audio data has been played. All of the other commands in the pipeline maintain a constant amount of memory. What is awk using this memory for and is there an alternative that does the intended stream processing using only a constant amount of memory? in case the awk version matters: ⑆ awk --versionawk version 20070501 Here's the command I tested based on Thomas Dickey's answer: < /dev/urandom hexdump -v -e '/1 "%u\n"' |awk 'BEGIN { split("0,2,4,5,7,9,11,12",a,",") } { for (i = 0; i < 1; i+= 0.0001) printf("%08X\n", 100*sin(1382*exp((a[$1 % 8]/12)*log(2))*i)) }' |xxd -r -p |sox -traw -r44100 -b16 -e unsigned-integer - -tcoreaudio | This statement is odd: split("0,2,4,5,7,9,11,12",a,","); It repetitively splits a constant string to create an array a . If you move that into a BEGIN section, the program should work the same — without allocating a new copy of the a array for each input-record. Addressing comments: the for-loop and expression do not allocate memory in a simple manner. A quick comparison of mawk, gawk and awk shows that there is no problem with the first two, but /usr/bin/awk on OSX does leak rapidly. If Apple had a bug-reporting system, that would be the place to go. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/267879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86575/"
]
} |
267,885 | I have a Dell XPS 13 9343 2015 with a resolution of 3200x1800 pixels. I am trying to use i3 windows manager on it but everything is tiny and hardly readable. I managed to scale every applications (firefox, terminal, etc...) using .Xresources : ! Fonts {{{Xft.antialias: trueXft.hinting: trueXft.rgba: rgbXft.hintstyle: hintfullXft.dpi: 220! }}} but i3 interface still does not scale... I have understood that xrandr --dpi 220 may solve the problem, but I don't know how/where to use it. Can somebody enlighten me on this issue ? | Since version 4.13 i3 reads DPI information from Xft.dpi ( source ). So, to set i3 to work with high DPI screens you'll probably need to modify two files. Add this line to ~/.Xresources with your preferred value: Xft.dpi: 120 Make sure the settings are loaded properly when X starts in your ~/.xinitrc ( source ): xrdb -merge ~/.Xresourcesexec i3 Note that it will affect other applications (e.g. your terminal) that read DPI settings from X resources. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/267885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115216/"
]
} |
267,908 | I am setting up an automated backup job for some computers on my network. There is a server that will, daily, run an rsync command to backup each of the other computers. I'd like the user that the rsync job runs as to be able to read everyone's home directories (including sensitive files like encrypted secret SSH keys) but not be able to write anywhere on the system (except for /tmp ). I'd also like to prevent normal users from reading each other's home directories, especially the sensitive parts. My first thought was to make a group comprising only the backup user. Then I'd have the users chgrp their files to the backup group. Not being members themselves, they wouldn't be able to read each other's files but the backup user could read everything they wanted backed up. However, users cannot chgrp to a group they are not a part of. I can't add them to the group since that would enable users to read each other's home directories. I had considered giving the backup user a NOPASSWD entry in the sudoers file that allowed him to only run the exact rsync command it needs as root, but that seems potentially disastrous if I don't set it up right (if there was a way to make a symlink to /etc/sudoers and to get the rsync command to use it as a destination, for example). | TL;DR: run the backup as root. There's nothing wrong with authorizing the precise rsync command via sudo, as long as you carefully review the parameters; what would be wrong would be to allow the caller to specify parameters. If you want the backup user to be able to read the files, see Allow a user to read some other users' home directories The idea is to create a bindfs view of the filesystem where this user can read everything. But the file level isn't the best level to solve this particular problem. The problem with backups made by rsync is that they're inconsistent: if a user changes file1 then file2 while the backup is in progress, but the backup reaches file2 before file1 , then the backup will contain the old version of file2 and the new version of file1 . If file2 is the new version of file1 and file1 is removed, that means that this file won't appear in the backup at all, which is clearly bad. The solution to this problem is to create a snapshot of the filesystem, and run the backup from that. Depending on your snapshot technology, there may be a way to ensure that a user can read the snapshot. If not, mount the snapshot and use the generic filesystem-based solution. And even if there is, rsync is still problematic, because if you run it as an ordinary user, it won't be able to back up ownership. So if you're backing up multiple users' directories, you need to run the backup as root.
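If you do go the sudo route, a hedged sketch of what a tightly-scoped sudoers entry could look like (every path and flag here is a placeholder; keep the real entry just as literal, since any wildcard or omitted argument widens what the backup user may run):

backup ALL=(root) NOPASSWD: /usr/bin/rsync -a --delete /home/ /backup/home/

The backup job must then invoke exactly sudo /usr/bin/rsync -a --delete /home/ /backup/home/ ; sudo matches the listed arguments verbatim, so the caller cannot swap in a different source or destination. | {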
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17749/"
]
} |
267,922 |

#!/bin/sh
if [ $num -eq 9 -o $num -eq 75 -o $num -eq 200 ]; then
  echo "do this"
elif [ $num -eq 40 -o $num -eq 53 -o $num -eq 63 ]; then
  echo "do something for this"
else
  echo "for other do this"
fi

Is there any other way to shrink the expression in the if statement? Perhaps something like [ $num -eq (9,75,200) ] BTW, I don't have GNU utils on this OS. | Sometimes a different construction may end up more readable:

case $num in
9|75|200) echo "do this" ;;
40|53|63) echo "do something for this" ;;
*) echo "for other do this" ;;
esac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159795/"
]
} |
267,923 | I have shell command: printf '/^PermitRootLogin/\nc\nPermitRootLogin no\n.\nw\nq\n' | ed -s /etc/ssh/sshd_config And I need catch #PermitRootLogin like PermitRootLogin but not commented paragrath with this phrase in sentence, I'm using ed text editor and try to find some way to catch all occurrences with one RegExp in my command. I don't have GNU utils on this OS.Thank you for answer! | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159795/"
]
} |
267,934 | I'm trying to find "corrupt" files inside a directory structure, namely files which the file command would interpret as "data".Here is the command I've been trying to run, but fails: $ find . -type f -exec if [[ $(file \{} | cut -f2 -d':') == " data" ]] \; then echo " \{} is CORRUPT" \; else echo " \{} is DATA" \; fi \;find: paths must precede expression: then Does anyone know what I'm doing wrong here? I realize I never saw an if inside an -exec parameter. Is it even possible? Basically, I'm trying to find files that match that criteria (file would report it as "data", while not identifying a specific file type), and then list them, so that I can analyze before removing. | You need a shell to interpret those if/then/else constructs or run those pipelines (though you don't really need those here): find . -type f -exec sh -c ' for file do case $(file -b "$file") in (data) printf "%s is CORRUPT\n" "$file";; (*) printf "%s is DATA\n" "$file";; esac done' sh {} + (like in your question, it prints "CORRUPT" when file says data . I'm not sure that's what you meant though). Whatever you do, don't include {} in the shell code like others have suggested! That would be very dangerous (and non-portable btw) as for instance a file called $(rm -rf "$HOME") would make you remove your whole home directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/267934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149759/"
]
} |
267,965 | I have a new CentOS 7 installation, and noticed that my /var/log/messages file is full of messages like this

Mar 6 08:40:01 myhostname systemd: Started Session 2043 of user root.
Mar 6 08:40:01 myhostname systemd: Starting Session 2043 of user root.
Mar 6 08:40:01 myhostname systemd: Created slice user-1001.slice.
Mar 6 08:40:01 myhostname systemd: Starting user-1001.slice.
Mar 6 08:40:01 myhostname systemd: Started Session 2042 of user userx.
Mar 6 08:40:01 myhostname systemd: Starting Session 2042 of user userx.
Mar 6 08:40:01 myhostname systemd: Started Session 2041 of user root.
Mar 6 08:40:01 myhostname systemd: Starting Session 2041 of user root.
Mar 6 08:40:31 myhostname systemd: Removed slice user-1001.slice.
Mar 6 08:40:31 myhostname systemd: Stopping user-1001.slice.
Mar 6 08:41:01 myhostname systemd: Created slice user-1001.slice.
Mar 6 08:41:01 myhostname systemd: Starting user-1001.slice.
Mar 6 08:41:01 myhostname systemd: Started Session 2044 of user userx.
Mar 6 08:41:01 myhostname systemd: Starting Session 2044 of user userx.
Mar 6 08:41:21 myhostname systemd: Removed slice user-1001.slice.
Mar 6 08:41:21 myhostname systemd: Stopping user-1001.slice.

What do all of these mean, and why are they there? If this is normal background noise then it seems like an enormous waste of resources to log this... | (this question is also answered over on superuser here ) Those are messages pertaining to the creation and deletion of slices, which are used in systemd to group processes and manage their resources. Why they are logged by default escapes me but I've seen two ways to disable them: The less intrusive way is to filter them out by creating /etc/rsyslog.d/ignore-systemd-session-slice.conf with the following contents:

if $programname == "systemd" and ($msg contains "Starting Session" or $msg contains "Started Session" or $msg contains "Created slice" or $msg contains "Starting user-" or $msg contains "Removed slice" or $msg contains "Stopping user-") then stop

and restart rsyslogd with systemctl restart rsyslog The broader way is to set the systemd logging level a bit higher by editing /etc/systemd/system.conf :

#LogLevel=info
LogLevel=notice

References: https://access.redhat.com/solutions/1564823 I have more but can't post more than 2 links. Hooray. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/267965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23091/"
]
} |
268,006 | Below is the process I took to create a user on bash in Linux.

$ sudo useradd Alexandra
$ sudo passwd Alexandra
Enter new UNIX password:
Enter new UNIX password:
passwd: password updated successfully

I understand that the password shouldn't be displayed for security purposes, but what I mean is, why do asterisks (or the characters I entered) not appear? | Because that's the way we do things in *nix land. :) It gives a little bit of extra security by not displaying a bunch of asterisks. That way, someone who sees your screen can't see the length of your password. But I must admit it is a little bit scary not getting any feedback when you're entering a password, especially if you've got a bad keyboard. So most GUI password dialog on *nix systems do give you some kind of feedback, e.g. using asterisks, or more commonly ⬤. And some even display each character as you type it, but then immediately replace it with a * or ⬤, but that's not so good if someone may be looking over your shoulder. Or if they have a device that can pick up & decode the video signal being sent from your computer to your monitor.
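Mechanically, all that happens is that passwd turns off the terminal driver's echo flag while it reads. A minimal sketch of the same effect in a shell script (assumes a terminal; stty is POSIX):

stty -echo
printf 'Password: '
read password
stty echo
printf '\n'

Nothing about the input changes; the characters simply are not echoed back, which is why there are no asterisks either. | {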
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/268006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142920/"
]
} |
268,027 | I was reading the book Linux Kernel Development on Chapter Process Scheduling . On Page 61, section Waking Up , the first paragraph reads: Waking is handled via wake_up(), which wakes up all the tasks waiting on the given wait queue. It ( Q1 : what does this it refer to ?) calls try_to_wake_up(), which sets the task’s( Q2 : which task? all awoken tasks? ) state to TASK_RUNNING, calls enqueue_task() to add the task to the red-black tree, and sets need_resched if the awakened task’s priority is higher than the priority of the current task.The code that causes the event to occur typically calls wake_up() itself. For example, when data arrives from the hard disk, the VFS calls wake_up() on the wait queue that holds the processes waiting for the data. I am quite confused about the above. Let me just use the example in the above paragraph, i.e., the disk interrupted after reading data, but with a more complete picture. Please correct me if any of the following is wrong or incomplete: Some user process issued a blocking read operation, triggering a sys call and the process is in the realm of kernel. Kernel sets up the disk controller requesting the needed data and puts this process on sleep (this process is put into a wait queue). Kernel schedules another process to run. Disk interrupt occurs. CPU suspends the current executing process and jumps to the disk interrupt handling. Disk controller will kick in sometime during the interrupt handling to transfer the data read from disk to main memory (either under the direction of CPU or by DMA) (Not sure, please correct) As the paragraph says, VFS calls wake_up() on the wait queue that holds the processes waiting for the data. My specific questions are the following: Q1 (refer to the quoted paragraph): I assume the It in the second sentence refers to the function wake_up() . Why does the function wake_up wakes up all tasks instead of just the one waiting for this disk data? Q2 (refer to the quoted paragraph): Does try_to_wake_up() somehow knows the specific task whose state needs to be set to TASK_RUNNING? Or try_to_wake_up() sets all awoken tasks' state to TASK_RUNNING? Q3 : How many wait queues are there for the kernel to manage? If there are more than 2 such wait queues, how does the kernel know which queue to select, such that the process waiting for the disk data is on that wait queue? Q4 : Now say we know the queue where the waiting process is on. How does the kernel know which process is waiting for the data from the disk. I can only image that some info specific to the process requesting the disk data is passed to the disk controller, like the process's PID, memory address or something. Then upon completing the interrupt handling, the disk controller(or kernel?) uses this info to pinpoint the process on the wait queue. Please help me complete this picture of process wake_up! Thanks! | Q1: “It” is wake_up . It wakes up all tasks that are waiting for the disk data . If they weren't waiting for that data, they wouldn't be waiting on that queue. Q2: I'm not sure I understand the question. Each wake queue entry contains a pointer to the task. try_to_wake_up receives a pointer to the task that it's supposed to wake up. It is called once per function. Q3: There are lots of wait queues. There's one for every event that can happen. The disk driver sets up a wait queue for each request to the disk. 
For example, when the filesystem driver wants the content of a certain disk block, it asks the disk driver for that block, and the wait queue for that request initially contains the task that made the filesystem request. Other entries may be added to the wait queue if another request for the same block comes in while this one is still outstanding. When an interrupt happens, the disk driver determines which disk has data available from the information passed by the hardware and looks up the data structure that contains the kernel data for this disk to find which request was to be filled. In this data structure, among other things, are the location where the data is to be written and the corresponding wake queue indicating what to do next. Q4: The process makes a system call, let's say to read a file. This triggers some code in the filesystem driver which determines that the data needs to be loaded from the disk. That code makes a request to the disk driver and adds the calling process to the request's wait queue. (There are actually more layers than that, but you get the idea.) When the disk read completes, the wait queue event triggers, and the process is thus removed from the disk's wait queue. The code triggered by the wait queue event is a function supplied by the filesystem driver, which copies the data to the process's memory and causes the read system call to return. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24608/"
]
} |
268,094 | So basically I have three files, two with permission 744 and one with 601, and I want to list the names of the files that have permission 744 and, in addition, also contain the string "def" anywhere in their lines. All I got is how to print out if they have permission 744: find * -perm 744 How do I get it so it can check for the string "def"? Any help is appreciated. | Follow your current options with -exec grep -l def {} + e.g., find * -perm 744 -exec grep -l def {} + The -l option of grep tells it to just list the names of files where a match occurs, and since that is the only action for find , the output of grep will be all that you see. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155767/"
]
} |
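A slightly more defensive variant of the find in the answer above, assuming GNU find and that only regular files in the current directory should be checked (the -maxdepth and -type tests are additions, not part of the original answer):
$ find . -maxdepth 1 -type f -perm 744 -exec grep -l def {} +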
268,284 | I have a simple Dockerfile like this FROM ubuntu:latestADD run.sh /run.shCMD /run.sh In run.sh, I have this (also dead simple) #!/bin/bashnohup awk 'BEGIN { while (c++<50) print "y" }' &sleep 10 When I run the script in docker from bash (that is, run bash in interactive mode and run the script there), it works correctly - it goes into "nohup mode" and the output is, correctly, in nohup.out. However, if I run the docker container with /run.sh as the default command, the output is still on STDOUT. What am I doing wrong? Why does it work in docker, in bash, but only when it's run from interactive mode? | nohup only redirects the command's output if it's going to a terminal. If the output is already going to another type of file (e.g. regular file or a pipe), nohup assumes that this is desired and does not redirect to nohup.out . By default, docker run runs the command via a socket (connecting the host with the virtual environment; that's how they communicate). A socket isn't a terminal, so nohup doesn't perform any redirection. If you run docker run -t then Docker will emulate a terminal in the container and so nohup will redirect to nohup.out . If you don't pass a command name then docker acts as if you'd used docker run -t bash . The best solution is to explicitly redirect the output of the command to your choice of log file. Don't forget to redirect stderr as well. That way you'll know where they're going. nohup awk 'BEGIN { while (c++<50) print "y" }' >myscript.log 2>&1 & | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10393/"
]
} |
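A minimal sketch of the difference described above, assuming an image named demo built from the Dockerfile in the question (the image name is hypothetical):
$ docker run demo      # stdout is a socket, so nohup does not redirect
$ docker run -t demo   # a pseudo-terminal is allocated, so output lands in nohup.out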
268,297 | Bash can print a function definition: $ bash -c 'y(){ echo z; }; export -f y; export -f'y (){ echo z}declare -fx y However this fails under POSIX Bash, /bin/sh and /bin/dash : $ bash --posix -c 'y(){ echo z; }; export -f y; export -f'export -f y Can a function definition be printed under a POSIX shell? | You can not do it portably. The POSIX spec does not specify any way to dump a function definition, nor how functions are implemented. In bash , you don't have to export the function to the environment, you can use: declare -f funcname (Works in zsh ) This works even if you run bash in posix mode: $ bash --posix -c 'y(){ echo z; }; declare -f y'y () { echo z} In ksh : typeset -f funcname (Works in bash , zsh , mksh , pdksh , lksh ) In yash : typeset -fp funcname This won't work if yash enters POSIXly-correct mode : $ yash -o posixly-correct -c 'y() { echo z; }; typeset -fp y'yash: no such command `typeset' With zsh : print -rl -- $functions[funcname]whence -f funcnametype -f funcnamewhich funcname Note that whence -f , which and type -f will all report an alias with the same name first. You can use -a to make zsh report all definitions. POSIXly, you'd have to record your function definition yourself, which you could do with: myfunction_code='myfunction() { echo Hello World; }'eval "$myfunction_code" or a helper function defn() { code=$(cat) eval "${1}_code=\$code; $1() { $code; }"}defn myfunction << 'EOF'echo Hello WorldEOFprintf '%s\n' "$myfunction_code" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
268,305 | I'm having a bit of an issue with SSH and my CentOS 7 server.Here's a description of my problem. I have open-ssh setup on my server, and am able to ssh in as root (The server is a remote server, and open-ssh/centos 7 was setup by default). However, I cannot ssh in as another user, but I can do things like SFTP as that user. In /var/log/messages it says: Mar 7 22:10:04 mail systemd: Started Hostname Service.Mar 7 22:10:42 mail systemd-logind: New session 1107 of user -----.Mar 7 22:10:42 mail systemd: Started Session 1107 of user -----.Mar 7 22:10:42 mail systemd: Starting Session 1107 of user -----.Mar 7 22:10:42 mail systemd-logind: Removed session 1107. (I have removed the username for security) While on the client's end it says: 2016-03-08 03:37:29 Sent password2016-03-08 03:37:29 Access granted2016-03-08 03:37:29 Opening session as main channel2016-03-08 03:37:29 Server unexpectedly closed network connection I have not edited the base sshd config that was given with the server yet.Any help would be appreciated greatly. Normally I use XRDP for Remote Desktop Access into the server, but I need to allow SSH access for another user to start/stop a process. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160080/"
]
} |
268,324 | I want to add date and time to the file name, for example: 08032016out.log.zip This is what I try to do: _now=$(date +"%m_%d_%Y")_file="$_nowout.log"touch $_file.txt # creating new file with the timedate How can I create the new file with the datetime? | You have created a variable named _now , but later you reference a variable named _nowout . To avoid such issues, use curly braces to delimit variable names: _now=$(date +"%m_%d_%Y")_file="${_now}out.log"touch "$_file.txt" Note that I have left "$_file.txt" as is, because . is already a variable-name delimiter. When in doubt, "${_file}.txt" could be used just as well. Bonus 1: ${varname} syntax actually provides several useful string operations on variables, in addition to delimiting. Bonus 2: creative shell escaping and quoting can also be used to delimit variable names. You could quote the variable and the string literal separately (i.e. file="$_now""out.log" or file="$_now"'out.log' ) or leave one of the parts unquoted (i.e. file=$_now"out.log" or file="$_now"out.log ). Finally, you can escape a single character which follows your variable name: file=$_now\out.log . Though I wouldn't recommend reusing these examples without a good understanding of shell quoting and escaping rules. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160098/"
]
} |
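For illustration, the corrected commands from the answer above produce a name like this (the date shown is just an example run):
$ _now=$(date +"%m_%d_%Y")
$ _file="${_now}out.log"
$ touch "$_file.txt"
$ ls
03_08_2016out.log.txt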
268,357 | I've installed fail2ban on a Debian Jessie LXC container, and currently it's failing due to: Starting authentication failure monitor: fail2banERROR No file(s) found for glob /var/log/auth.logERROR Failed during configuration: Have not found any log file for ssh jail There's no syslog or rsyslog on the system and thus /var/log/auth.log is not generated. Is there a way to tell fail2ban to use the output of journalctl _COMM=sshd ? | For systemd systems: You have to specify the backend in /etc/fail2ban/jail.conf to use systemd as follows: backend = systemd Then restart fail2ban: systemctl restart fail2ban Edit: I'm a heavy CentOS/RHEL/Fedora guy so you may have to adapt what I say a bit. As far as this answer goes, you may have to update the fail2ban package to a version that supports systemd as a backend, or you'll have to install rsyslog and add the following to your /etc/rsyslog.conf : authpriv.* /var/log/auth.log This will make sure sshd auth logs are written to /var/log/auth.log , which will be read by the default pyinotify backend in fail2ban. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268357",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34514/"
]
} |
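Rather than editing /etc/fail2ban/jail.conf directly, the same setting can live in an override file that survives package upgrades; a minimal sketch, assuming a fail2ban version (0.9 or later) whose ssh jail is named [sshd]:
# /etc/fail2ban/jail.local
[sshd]
enabled = yes
backend = systemd
followed by systemctl restart fail2ban as above.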
268,386 | I recently moved from GNU screen to tmux . I find it quite similar but with broader support (I switched due to a problem with escape-time in neovim - the resolution was only for tmux). Unfortunately in tmux I'm unable to find a similar command to this: screen -X eval "chdir $(some_dir)" The command above changed the default directory for new window/screen/pane from within GNU screen, so when I pressed Ctrl + a (similar to tmux Ctrl + b ) a new window opened in the $(some_dir) directory. Is there a similar thing in tmux? ANSWER: I have used @Lqueryvg's answer and combined it with @Vincent Nivolier's suggestion from a comment, and that gave me a new binding for the command attach -c "#{pane_current_path}" which sets my current directory as the default one. Thanks. | tl;dr Ctrl + b : attach -c desired/directory/path Long Answer Start tmux as follows: (cd /aaa/bbb; tmux) Now, any new windows (or panes) you create will start in directory /aaa/bbb , regardless of the current directory of the current pane. If you want to change the default directory once tmux is up and running, use attach-session with -c . Quoting from the tmux man page for attach-session : -c will set the session working directory (used for new windows) to working-directory. For example: Ctrl + b : attach -c /ddd/eee New windows (or panes) will now start in directory /ddd/eee , regardless of the directory of the current pane. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/268386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116320/"
]
} |
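To make the behaviour from the question's ANSWER permanent, one possible ~/.tmux.conf binding (the key C is an arbitrary choice, and #{pane_current_path} needs tmux 1.9 or later):
bind C attach-session -c "#{pane_current_path}"
After prefix + C, new windows and panes open in the directory of the pane that was active.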
268,412 | I have CentOS 6.7. This question is about creating a new mount point on an existing partition. Following is my current disk configuration. Filesystem Size Used Avail Use% Mounted on/dev/mapper/vg_i200-lv_root 50G 41G 6.1G 87% / tmpfs 3.8G 0 3.8G 0% /dev/shm/dev/sda1 477M 67M 385M 15% /boot/dev/mapper/vg_i200-lv_home 172G 21G 143G 13% /home I want to create a new mount point mymount that points to the folder /home/myfolder. I followed the instructions given in this thread in superuser. But when I execute this command: mount --bind /home/myfolder/ /mymount/ I get the following error: mount: mount point /mymount/ does not exist So, I want to know how I can create a new mount point that points to a folder in an existing filesystem. | The mount point itself has to exist first. Create the directory as root, mkdir /mymount , and then the bind mount will succeed: mount --bind /home/myfolder/ /mymount/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59999/"
]
} |
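To make the bind mount survive a reboot, the usual approach is an /etc/fstab entry along these lines (a sketch; adjust the paths as needed):
/home/myfolder  /mymount  none  bind  0 0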
268,460 | I've read that with qemu-nbd and the network block device kernel module, I can mount a qcow2 image. I haven't seen any tutorials on mounting a qcow2 via a loop device. Is it possible? If not, why? I don't really understand the difference between a qcow2 and an iso. | A loop device just turns a file into a block device. If the file has some special internal mapping of its blocks, the loop device won't translate any of it. qcow2 is special... it has special mapping inside that handles different snapshots of the same blocks stored in different places. If you mount that as a loop device, you'll just get one big block device that doesn't represent the actual data in the image. Another option is to convert to raw and mount as a loop device: qemu-img convert -p -O raw oldfile.qcow2 newfile.raw But then you have to convert it back to qcow2 to use it again as before. I think using qemu-nbd is not the most efficient IO, but is easy. Mounting it in a VM, like one booted with a live usb, is easy too. Converting doesn't make much sense... it was just an example of how they're different. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107073/"
]
} |
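For reference, a typical qemu-nbd session looks something like this (a sketch; the device and partition names will vary by system):
$ sudo modprobe nbd max_part=8
$ sudo qemu-nbd --connect=/dev/nbd0 image.qcow2
$ sudo mount /dev/nbd0p1 /mnt
# ...work on the files...
$ sudo umount /mnt
$ sudo qemu-nbd --disconnect /dev/nbd0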
268,464 | What's the simplest way to express "allow all connections to the local lan" for iptables output? Including connections to 192.* , 172.* , 10.* , etc. Can all of this compressed within a single rule? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49297/"
]
} |
268,474 | I need a file (preferably a .list file) which contains the absolute path of every file in a directory. Example dir1: file1.txt file2.txt file3.txt listOfFiles.list : /Users/haddad/dir1/file1.txt/Users/haddad/dir1/file2.txt/Users/haddad/dir1/file3.txt How can I accomplish this in linux/mac? | ls -d "$PWD"/* > listOfFiles.list | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/268474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160195/"
]
} |
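An equivalent using find, in case subdirectories should be excluded explicitly or the list may be long (a sketch):
$ find "$PWD" -maxdepth 1 -type f > listOfFiles.list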
268,480 | I would like to find directories that does not contain a file with name OSZICAR and then cd into that directory and do something more... all i have now is: find `pwd` -mindepth 2 -maxdepth 2 -type d -exec sh -c "echo {}; cd {}; ls; if [! -f $0/OSZICAR];echo "doing my thing";fi" \; but there is error, could anyone help? Thank you My original command without the Criteria of Not having OSZICAR is: find `pwd` -mindepth 2 -maxdepth 2 -type d -exec sh -c "echo {}; cd {}; ls; cp ../../submit_script_Stampede.sh .; ls;sed -i s/Monkhorst/Gamma/ KPOINTS; cp CONTCAR POSCAR ;sbatch submit_script_Stampede.sh" \; | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/268480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
268,488 | I am learning how to efficiently use different set options in my script and came across set -u that appears to be perfect for exiting my script if a variable does not get set properly (ex. deleting users). According to the man page, set -u and set -e do the following... -e Exit immediately if a command exits with a non-zero status.-u Treat unset variables as an error when substituting. I created a test script to test this functionality, but it does not appear to be working as expected. Perhaps someone could better explain my issue to me and where I am mis-interpreting? Test script is below. Thank you. set -eset -utesting="This works"echo $?echo ${testing}testing2=echo $?echo ${testing2}testing3="This should not appear"echo $?echo ${testing3} I expect the script to display 0 and "This works" , and then fail as ${testing2} is not set. Instead I am shown 0 and "This works" , followed by 0 and then 0 This should not appear Can anyone provide some knowledge? Thank you. | From "man Bash": A parameter is set if it has been assigned a value. The null string is a valid value. Once a variable is set, it may be unset only by using the unset builtin command. When you do testing2= you are setting the variable to the null string. Change that to unset testing2 and try again. The set -e does not help in this case as an assignment never has an exit code of 1. Try this to see that the last command executed (the assignment) has an exit code of 0, or read this question : $ false; a=""; echo $?0 And I also believe that the use of set -e is more a problem than a solution. What does raise an error on unset variables is set -u : #!/bin/bashset -utesting="This works"echo ${testing}unset testing2echo ${testing2}testing3="This should not appear"echo ${testing3} Will output: $ ./script.shThis works./script.sh: line 9: testing2: unbound variable | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147475/"
]
} |
268,491 | I'm trying to install virtuabox on Debian 8.3 using the contrib repos.When I use apt-get install virtualbox it wants to install gcc-4.8 , but I already have gcc version 4.9.2 installed.What am I doing wrong? Is it safe to do so I can keep my gcc-4.9.2? apt-get output apt-get install virtualboxReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following extra packages will be installed: cpp-4.8 dkms gcc-4.8 libasan0 libgcc-4.8-dev libgsoap5 libvncserver0 linux-compiler-gcc-4.8-x86 linux-headers-3.16.0-4-amd64 linux-headers-3.16.0-4-common linux-headers-amd64 linux-kbuild-3.16 virtualbox-dkms virtualbox-qtSuggested packages: gcc-4.8-locales gcc-4.8-multilib gcc-4.8-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg libasan0-dbg libtsan0-dbg libquadmath0-dbg vde2 virtualbox-guest-additions-isoRecommended packages: linux-imageThe following NEW packages will be installed: cpp-4.8 dkms gcc-4.8 libasan0 libgcc-4.8-dev libgsoap5 libvncserver0 linux-compiler-gcc-4.8-x86 linux-headers-3.16.0-4-amd64 linux-headers-3.16.0-4-common linux-headers-amd64 linux-kbuild-3.16 virtualbox virtualbox-dkms virtualbox-qt0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.Need to get 0 B/35.7 MB of archives.After this operation, 148 MB of additional disk space will be used.Do you want to continue? [Y/n]n apt-cache policy virtualbox virtualbox: Installed: (none) Installation candidates: 4.3.32-dfsg-1+deb8u2 Version table: 4.3.32-dfsg-1+deb8u2 0 500 http://httpredir.debian.org/debian/ jessie/contrib amd64 Packages | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159834/"
]
} |
268,514 | How can I print the numerical ASCII values of each character in a text file? Like cat , but showing the ASCII values only... (hex or decimal is fine). Example output for a file containing the word Apple (with a line feed) might look like: 065 112 112 108 101 013 004 | The standard command for that is od , for octal dump (though with options, you can change from octal to decimal or hexadecimal...): $ echo Apple | od -An -vtu1 65 112 112 108 101 10 Note that it outputs the byte value of every byte in the file. It has nothing to do with ASCII or any other character set. If the file contains an A in a given character set, and you would like to see 65, because that's the byte used for A in ASCII, then you would need to do: < file iconv -f that-charset -t ascii | od -An -vtu1 To first convert that file to ascii and then dump the corresponding byte values. For instance Apple<LF> in EBCDIC-UK would be 193 151 151 147 133 37 ( 301 227 227 223 205 045 in octal). $ printf '\301\227\227\223\205\045' | iconv -f ebcdic-uk -t ascii | od -An -vtu1 65 112 112 108 101 10 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153478/"
]
} |
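The same dump in hexadecimal instead of decimal, for comparison:
$ echo Apple | od -An -vtx1
 41 70 70 6c 65 0a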
268,534 | I'd like to be able to run a script as another user, and only as that user. The way I currently have this set up is to have alice ALL = (bob) NOPASSWD: /home/alice/script.sh in the sudoers file and alice@foo:~$ ls script.sh-rwxr-xr-x 1 root root ..... script.shalice@foo:~$ lsattr script.sh----i----------- script.shalice@foo:~$ head -1 script.sh#!/bin/bashalice@foo:~$ sudo -u bob ./script.shok Is there a way to have the shebang line be something like #!/usr/bin/sudo -u bob -- /bin/bash so that alice could just run alice@foo:~$ ./script.shok ? If I try this I simply get the error message sudo: unknown user: blog -- /bin/bashsudo: unable to initialize policy plugin | Linux (like many other Unix variants) only supports passing a single argument to the interpreter of a script. (The interpreter is the program on the shebang line.) A script starting with #!/usr/bin/sudo -u bob -- /bin/bash is executed by calling /usr/bin/sudo with the arguments -u bob -- /bin/bash and /home/alice/script.sh . One solution is to use a wrapper script: make /home/alice/script.sh contain #!/bin/shexec sudo -u bob /home/alice/script.real and put the code in /home/alice/script.real starting with #!/bin/bash and make the sudo rule refer to /home/alice/script.real . Another solution is to make the script reexecute itself. You need to be careful to detect the desirable condition properly, otherwise you risk creating an infinite loop. #!/bin/bashif ((EUID != 123)); then exec sudo -u \#123 /home/alice/script.shfi (where 123 is the user ID of bob ) A simple solution is to tell people to run sudo -u bob /home/alice/script.sh instead of running the script directly. You can provide shell aliases, .desktop files, etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27371/"
]
} |
268,536 | I'm retrieving a list of two PIDs that I want to kill. My pipeline looks something like ps -ef | grep foo | grep -v grep | awk {'print $2'} | xargs kill -9 Both processes are killed when executing this locally. But when using it with ssh like ssh foo@<IP Address> "ps -ef | grep foo | grep -v grep | awk {'print $2'} | xargs kill -9" only one PID out of two is getting deleted. It outputs kill: <PID>: No such process . Could it be trying to kill the second process locally? | The $2 is within double quotes on the local shell command line, so expanded by your local shell (probably to the empty string). So the command run on the remote host is something like: ps -ef | grep foo | grep -v grep | awk {'print '} | xargs kill -9 That awk command is a no-op (prints all the lines) and xargs will eventually run: kill -9 <all-the-words-present-in-the-output-of-ps|grep...> That includes pid and ppid, user names... Also if the pid of the kill command is in that list, it will kill itself and thus not be able to kill the remaining ones in the list. Here, you'd want: ssh foo@<IP Address> 'pkill -KILL -f foo' (kills all processes whose command line (at least the first few kilobytes of it) contains foo ). Or if the remote system doesn't have a pkill command: ssh foo@<IP Address> ' ps -Ao pid= -o args= | awk "/[f]oo/ {print \$1}" | xargs kill -s KILL' We use single quotes on the local side (so no variable expansion there), and double quote for the awk code in the remote shell command line, so we need to escape the $ in $1 so that it is not expanded by the remote shell. awk is a superset of grep , you generally don't need to pipe them together. Best is to tell ps to output only the things you want to match on, here assuming you want to search for foo in the list of arguments displayed by ps as your usage of -f suggests. If you want to kill all processes of the remote user (here of user foo and sshing as foo), you could do: ssh foo@<IP Address> 'kill -s KILL -- -1' That will kill all the processes that the user has permission to kill. For a normal user, that's all processes whose real or saved set user id is the same as the real or effective user id of the killing process. Or: ssh foo@<IP Address> 'pkill -KILL -U "$(id -u)"' (or pkill -U foo ). To kill all processes whose real user id is the same as the effective user id of the remote user. Or you could do: ssh foo@<IP Address> ' trap "" HUP ps -Ao sid= -o pid= -U "$(id -u)" | awk "\$1 != $(ps -eo sid= -p "\$\$") {print \$2}" | xargs kill -s KILL' (checking on sid so that kill doesn't kill itself, the shell, awk or xargs , and ignoring SIGHUP in case killing the user sshd process causes that signal to be sent) We're assuming the login shell of the remote user is Bourne-like. Those commands would have to be adapted if the login shell of the remote user is of the csh or rc families which have a different syntax for instance. It's also better practice to refer to signals by their name rather than number as that makes it clearer and the signal number can change from system to system (9 for SIGKILL is quite universal though). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93215/"
]
} |
268,565 | I seem to recall from comments on this site that the contents of arithmetic expansion may be word split, but I can't find the comment again. Consider the following code: printf '%d\n' "$(($(sed -n '/my regex/{=;q;}' myfile)-1))" If the sed command outputs a multi-digit number and $IFS contains digits, will the command substitution get word split before the arithmetic occurs? (I've already tested using extra double quotes: printf '%d\n' "$(("$(sed -n '/my regex/{=;q;}' myfile)"-1))" and this doesn't work.) Incidentally the example code above is a reduced-to-simplest-form alteration of this function that I just posted on Stack Overflow. | No, it doesn't. In $((expression)) , expression is treated as if it were inside double quotes, as POSIX specifies . But beware that a command substitution inside it is still subject to split+glob : $ printf '%d\n' "$(( $(IFS=0; a=10; echo $a) + 1 ))"2 With double quotes: $ printf '%d\n' "$(( $(IFS=0; a=10; echo "$a") + 1 ))"11 Like other expansions, an arithmetic expansion, if not inside double quotes, undergoes split+glob : $ IFS=0$ echo $((10))1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268565",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
268,582 | Is there any way I can configure terminal applications to display command text from front of $ to under the user name on terminal window? See image attached. I use terminator . | POSIXly (the value of NL is a literal newline typed between the quotes):
$ NL='
'
$ PS1=${PS1}${NL}$<cursor here> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156356/"
]
} |
268,603 | System-left or System-Right binds the window to half screen size on the left or right side of the screen canvas. Assuming I have only 30% open on the right side how to make it consume only 30% of screen canvas instead of 50%? Is it even possible in Gnome? I'm using Fedora 23. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15958/"
]
} |
268,640 | I'm trying to use sed to edit a config file. There are a few lines I'd like to change. I know that under Linux sed -i allows for in place edits but it requires you save to a backup file. However I would like to avoid having multiple backup files and make all my in place changes at once. Is there a way to do so with sed -i or is there a better alternative? | You can tell sed to carry out multiple operations by just repeating -e (or -f if your script is in a file). sed -i -e 's/a/b/g' -e 's/b/d/g' file makes both changes in the single file named file , in-place. Without a backup file. sed -ibak -e 's/a/b/g' -e 's/b/d/g' file makes both changes in the single file named file , in-place. With a single backup file named filebak . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/268640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153274/"
]
} |
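With GNU sed, the commands can also be separated by semicolons (or newlines) inside a single expression; this is equivalent to the two -e options above:
sed -i 's/a/b/g; s/b/d/g' file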
268,642 | I am trying to get the number of seconds since the epoch in both Solaris 10 and Solaris 11. On Solaris 11, "date +%s" is giving me the output (from bash), but the same is failing on Solaris 10. What is the right command in Solaris 10? | I would use nawk : nawk 'BEGIN{srand(); print srand()}' ( srand() reseeds the generator from the time of day and returns the previous seed, so the second call prints the epoch time that the first call set.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158782/"
]
} |
268,651 | Here is the output of my normal ls command: f1 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f2 f20 f3 f4 f5 f6 f7 f8 f9 So I have 20 files. I need them displayed as : f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20 Is there any single-line command to do this other than writing a script? I am sure some of you must have faced this weird situation. Note: the above is just a sample. In the actual scenario I need a list of all the file names in proper sorted order. (ranging from f{0..10000} ) | Fortunately, there is a single-line command: ls -lav should do what you are looking for | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153692/"
]
} |
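If the long listing from -l is not wanted, the natural-sort flag does the work on its own (GNU ls; -v is the part doing the sorting):
$ ls -v
f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18 f19 f20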
268,720 | I had Ubuntu-14.04 installed in a minimum configuration, i.e. with no X Windows support. Later I added x-server packages for my card, and a lightweight WindowManager (I don't want KDE or GNOME), so I normally launch X with startx, however someone keeps on creating Documents, Desktop, Download, Video, Music etc. directories in my $HOME. I thought this isually done by "advanced" desktop environments. What application/daemon can be behind this anyways? | This is carried out by the xdg-user-dirs-update 1 package. The file /usr/bin/xdg-user-dirs-update is run at logon and creates the files based on defaults in /etc/xdg/user-dirs.defaults , or if it exists $HOME/.config/user-dirs.dirs . If you want to disable it, the setting is in /etc/xdg/user-dirs.conf , or uninstall the package, if dependencies allow. 1: The package name above is for Ubuntu. On Fedora and Arch it is xdg-user-dirs . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268720",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86215/"
]
} |
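For reference, disabling the directory creation system-wide is a one-line change in /etc/xdg/user-dirs.conf (a sketch):
enabled=False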
268,722 | Is there a way to search for a string test123 in all text files on the hdd? | You can use find to get all the .txt files and then grep the desired string $ find / -type f -name '*.txt' -exec grep 'test123' {} + Where: / search the whole filesystem, starting from the root. -type f find only files, not directories -name '*.txt' find all .txt files -exec grep 'test123' search test123 in all the files found {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone + it will improve execution time significantly (since it will concatenate arguments prior to execution, up to the system's argument-length limit) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153097/"
]
} |
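If GNU grep is available, the same search can be done without find at all:
$ grep -rl --include='*.txt' 'test123' /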
268,764 | In a set of if/elif/else/fi statements, I have made 'else' present the user with an error message, but I also want it to take the user back to the question which was asked before the if/else statements so that they can try to answer it again. How do I take the user back to a previous line of code? Or, if this is not possible, is there another way to do this? | I think the easiest way would be to wrap the prompting code into a function, and then drop it into an until loop. Since all you need really is to call the function until it succeeds, you can put the noop command " : " in the until loop. Something like this: #!/bin/bashgetgender() { read -p "What is the gender of the user? (male/female): " gender case "$gender" in m|M) grouptoaddto="boys" return 0 ;; f|F) grouptoaddto="girls" return 0 ;; *) printf %s\\n "Please enter 'M' or 'F'" return 1 ;; esac}until getgender; do : ; donesudo usermod -a -G "$grouptoaddto" "$username" The point here is the function called with until , so it is repeatedly called until it succeeds. The case switch within the function is just an example. Simpler example, without using a function: while [ -z "$groupname" ]; do read -p "What gender is the user?" answer case "$answer" in [MmBb]|[Mm]ale|[Bb]oy) groupname="boys" ;; [FfGg]|[Ff]emale|[Gg]irl) groupname="girls" ;; *) echo "Please choose male/female (or boy/girl)" ;; esacdonesudo usermod -a -G "$groupname" "$username" In this last example, I'm using the -z switch to the [ (test) command, to continue the loop as long as the "groupname" variable has zero length. The keynote is the use of while or until . To translate this last example into human readable pseudocode: While groupname is empty, ask user for gender. If he answers with one letter "m" or "B", or the word "Male" or "boy", set the groupname as "boys". If she answers with one letter "F" or "g", or the word "female" or "Girl", set the groupname as "girls". If he/she answers anything else, complain.(And then repeat, since groupname is still empty.)Once you have groupname populated, add the user to that group. Yet another example, without the groupname variable: while true; do read -p "What gender is the user?" answer case "$answer" in [MmBb]|[Mm]ale|[Bb]oy) sudo usermod -a -G boys "$username" break ;; [FfGg]|[Ff]emale|[Gg]irl) sudo usermod -a -G girls "$username" break ;; *) echo "Please choose male/female (or boy/girl)" ;; esacdone | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142920/"
]
} |
268,766 | So, I thought I had a good understanding of this, but just ran a test (in response to a conversation where I disagreed with someone) and found that my understanding is flawed... In as much detail as possible what exactly happens when I execute a file in my shell? What I mean is, if I type: ./somefile some arguments into my shell and press return (and somefile exists in the cwd, and I have read+execute permissions on somefile ) then what happens under the hood? I thought the answer was: The shell makes a syscall to exec , passing the path to somefile The kernel examines somefile and looks at the magic number of the file to determine if it is a format the processor can handle If the magic number indicates that the file is in a format the processor can execute, then a new process is created (with an entry in the process table) somefile is read/mapped to memory. A stack is created and execution jumps to the entry point of the code of somefile , with ARGV initialized to an array of the parameters (a char** , ["some","arguments"] ) If the magic number is a shebang then exec() spawns a new process as above, but the executable used is the interpreter referenced by the shebang (e.g. /bin/bash or /bin/perl ) and somefile is passed to STDIN If the file doesn't have a valid magic number, then an error like "invalid file (bad magic number): Exec format error" occurs However someone told me that if the file is plain text, then the shell tries to execute the commands (as if I had typed bash somefile ). I didn't believe this, but I just tried it, and it was correct. So I clearly have some misconceptions about what actually happens here, and would like to understand the mechanics. What exactly happens when I execute a file in my shell? (in as much detail as is reasonable...) | The definitive answer to "how programs get run" on Linux is the pair of articles on LWN.net titled, surprisingly enough, How programs get run and How programs get run: ELF binaries . The first article addresses scripts briefly. (Strictly speaking the definitive answer is in the source code, but these articles are easier to read and provide links to the source code.) A little experimentation shows that you pretty much got it right, and that the execution of a file containing a simple list of commands, without a shebang, needs to be handled by the shell. The execve(2) manpage contains source code for a test program, execve; we'll use that to see what happens without a shell. First, write a test script, testscr1 , containing #!/bin/shpstree and another one, testscr2 , containing only pstree Make them both executable, and verify that they both run from a shell: chmod u+x testscr[12]./testscr1 | less./testscr2 | less Now try again, using execve (assuming you built it in the current directory): ./execve ./testscr1./execve ./testscr2 testscr1 still runs, but testscr2 produces execve: Exec format error This shows that the shell handles testscr2 differently. It doesn't process the script itself though, it still uses /bin/sh to do that; this can be verified by piping testscr2 to less : ./testscr2 | less -ppstree On my system, I get |-gnome-terminal--+-4*[zsh] | |-zsh-+-less | | `-sh---pstree As you can see, there's the shell I was using, zsh , which started less , and a second shell, plain sh ( dash on my system), to run the script, which ran pstree .
In zsh this is handled by zexecve in Src/exec.c : the shell uses execve(2) to try to run the command, and if that fails, it reads the file to see if it has a shebang, processing it accordingly (which the kernel will also have done), and if that fails it tries to run the file with sh , as long as it didn't read any zero byte from the file: for (t0 = 0; t0 != ct; t0++) if (!execvebuf[t0]) break; if (t0 == ct) { argv[-1] = "sh"; winch_unblock(); execve("/bin/sh", argv - 1, newenvp); } bash has the same behaviour, implemented in execute_cmd.c with a helpful comment (as pointed out by taliezin ): Execute a simple command that is hopefully defined in a disk file somewhere. fork () connect pipes look up the command do redirections execve () If the execve failed, see if the file has executable mode set. If so, and it isn't a directory, then execute its contents as a shell script. POSIX defines a set of functions, known as the exec(3) functions , which wrap execve(2) and provide this functionality too; see muru 's answer for details. On Linux at least these functions are implemented by the C library, not by the kernel. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/268766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1974/"
]
} |
268,810 | Suppose I want a bash command to do something extra. As a simple example, imagine I just want it to echo "123" before running. One simple way to do this would be to alias the command. Since we still need the original, we can just refer to it by its exact path, which we can find using which . For example: $ which rm/bin/rm$ echo "alias rm='echo 123 && /bin/rm'" >> .bashrc This was easy because I was able to look up the path to rm using which . However, I am trying to do this with exit , and which doesn't seem to know anything about it. $ which exit$ echo $?1 The command did not output a path, and in fact it returned a non-zero exit code, which which does when a command is not in $PATH . I thought maybe it's a function, but apparently that's not the case either: $ typeset -F | grep exit$ echo $?1 So the exit command is not defined anywhere as a function or as a command in $PATH , and yet, when I type exit , it closes the terminal. So it clearly is defined somewhere but I can't figure out where. Where is it defined, and how can I call it explicitly? | exit is a shell special built-in command. It is built into the shell interpreter; the shell knows about it and can execute it directly without searching anywhere. In most shells, you can use: $ type exitexit is a shell builtin You have to read the source of the shell to see how its builtins are implemented; here is a link to the source of the bash exit builtin . With bash , zsh , ksh93 , mksh , pdksh , to invoke the exit built-in explicitly, use the builtin builtin command: builtin exit See How to invoke a shell built-in explicitly? for more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38639/"
]
} |
268,818 | In Bash, suppose I visit a directory, and then another directory. I would like to copy a file from the first directory to the second directory, but without specifying the long pathnames of them. Is it possible? My temporary solution is to use /tmp as a temporary place to store a copy of the file. cp myfile /tmp when I am in the first directory, and then cp /tmp/myfile . when I am in the second directory. But I may check if the file will overwrite anything in /tmp . Is there something similar to a clipboard for copying and pasting a file? | Using Bash, I would just visit the directories: $ cd /path/to/source/directory$ cd /path/to/destination/directory Then, I would use the shortcut ~- , which points to the previous directory: $ cp -v ~-/file1.txt .$ cp -v ~-/file2.txt .$ cp -v ~-/file3.txt . If one wants to visit directories in reverse order, then: $ cp -v fileA.txt ~-$ cp -v fileB.txt ~-$ cp -v fileC.txt ~- | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/268818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
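~- expands to $OLDPWD, so an equivalent that also works in plain POSIX sh (for scripts or shells without the shortcut):
$ cp -v "$OLDPWD"/file1.txt .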
268,849 | Is there a tool or even an entire distribution that supports rolling-back changed packages after an update? As an example: I upgraded packages A, B and C. After working with those packages for several days, I encounter a bug in B that is deal-breaking. While I'd submit a bugreport, I'd also need to downgrade B to the previous version so that I can finish what I was about to do. Meanwhile A is dependent on B, so it'd need to be downgraded as well, but C is independent of both, so it could stay at its current version. Is there a tool or a distribution that supports this? I know that most distributions have a way of downgrading a package but that's usually kind of sketchy or not even possible because the previous package was removed from repositories and it some cases (for example after upgrading the X server and Mesa) it gets really... messy. | NixOS supports upgrade rollbacks, although as I understand it, it doesn't go quite as far as you'd like: if you upgrade A, B and C in one operation, you can roll that entire operation back, but not just A and B. (You should be able to roll A, B and C back, and then upgrade C...) That makes sense from a transactional perspective though. Debian (in combination with the snapshot archive if you no longer have the old packages) will allow you to downgrade B, and tools like apt or aptitude will in many cases figure out that A also needs to be downgraded (once you've convinced them that you don't want to simply upgrade B). But as you say that tends to be somewhat messy, and package downgrades are unsupported in Debian anyway (which means that most of the time they work, but if they break it's not a bug). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/268849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55018/"
]
} |
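For reference, rolling the whole system back to the previous generation on NixOS is a single command (a sketch; older generations can also be picked from the boot menu):
$ sudo nixos-rebuild switch --rollback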
268,889 | When I want to easily read my PostgreSQL schema, I dump it to stdout and redirect it to vim : pg_dump -h localhost -U postgres dog_food --schema-only | vim - This gives: vim does not have a syntax highlight schema, because it has no filename extension when reading from stdin, so I use the following: :set syntax=sql Which gives: Being the lazy developer I am, I would like to force vim to use the SQL syntax by passing a command line argument, saving me the chore of re-typing set syntax=<whatever> every time I open it with stdin data. Is there a way to set vim syntax by passing a command line argument? | You can use: vim -c 'set syntax=sql' - | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/268889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1079/"
]
} |
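If this is a frequent pipeline, an alias keeps it short (the alias name vsql is arbitrary):
alias vsql="vim -c 'set syntax=sql' -"
pg_dump -h localhost -U postgres dog_food --schema-only | vsql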
268,952 | I'm looking for ways to use /dev/random (or /dev/urandom ) from the command line. In particular, I'd like to know how to use such a stream as stdin to write streams of random numbers to stdout (one number per line). I'm interested in random numbers for all the numeric types that the machine's architecture supports natively. E.g. for a 64-bit architecture, these would include 64-bit signed and unsigned integers, and 64-bit floating point numbers. As far as ranges go, the maximal ranges for the various numeric types will do. I know how to do all this with all-purpose interpreters like Perl, Python, etc., but I'd like to know how to do this with "simpler" tools from the shell. (By "simpler" I mean "more likely to be available even in a very minimal Unix installation".) Basically the problem reduces that of converting binary data to their string representations on the command line. (E.g., this won't do: printf '%f\n' $(head -c8 /dev/random) .) I'm looking for shell-agnostic answers. Also, the difference between /dev/random and /dev/urandom is not important for this question. I expect that any procedure that works for one will work for the other, even when the semantics of the results may differ. I adapted EightBitTony's answer to produce the functions toints , etc. shown below. Example use: % < /dev/urandom toprobs -n 50.2376162817789280.855784791255320.03300496820197560.7988123916552430.138499033902422 Remarks: I'm using hexdump instead of od because it gave me an easier way to format the output the way I wanted it; Annoyingly though, hexdump does not support 64-bit integers (wtf???); The functions' interface needs work (e.g. they should accept -n5 as well as -n 5 ), but given my pitiful shell programming skillz, this was the best I could put together quickly. (Comments/improvements welcome, as always.) The big surprise I got from this exercise was to discover how hard it is to program on the shell the most elementary numerical stuff (e.g. read a hexadecimal float, or get the maximum native float value)... _tonums () { local FUNCTION_NAME=$1 BYTES=$2 CODE=$3 shift 3 local USAGE="Usage: $FUNCTION_NAME [-n <INTEGER>] [FILE...]" local -a PREFIX case $1 in ( -n ) if (( $# > 1 )) then PREFIX=( head -c $(( $2 * $BYTES )) ) shift 2 else echo $USAGE >&2 return 1 fi ;; ( -* ) echo $USAGE >&2 return 1 ;; ( * ) PREFIX=( cat ) ;; esac local FORMAT=$( printf '"%%%s\\n"' $CODE ) $PREFIX "$@" | hexdump -ve $FORMAT}toints () { _tonums toints 4 d "$@"}touints () { _tonums touints 4 u "$@"}tofloats () { _tonums tofloats 8 g "$@"}toprobs () { _tonums toprobs 4 u "$@" | perl -lpe '$_/=4294967295'} | You can use od to get numbers out of /dev/random and /dev/urandom . For example, 2 byte unsigned decimal integers, $ od -vAn -N2 -tu2 < /dev/urandom24352 1 byte signed decimal integer, $ od -vAn -N1 -td1 < /dev/urandom-78 4 byte unsigned decimal integers, $ od -vAn -N4 -tu4 < /dev/urandom3394619386 man od for more information on od . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/268952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
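For the 64-bit case mentioned in the question, GNU od handles 8-byte integers directly (the number shown is, of course, random):
$ od -vAn -N8 -tu8 < /dev/urandom
 15453721966920335186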
268,963 | I have a docker container and every time I build and start it, I want just to press UP for choosing a predefined command. What's the best way to handle this? | $ echo "<yourcommand>" >> ~/.bash_history or set up an alias in your .bashrc / .bash_aliases alias s='<yourcommand(s)>' so every time you input s and hit enter it executes your commands. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/268963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160535/"
]
} |
269,024 | I have installed openssh server. I want to disable the banner which is shown when I do :: nc 0.0.0.0 22 It shows something like this :: SSH-2.0-OpenSSH_6.7p1 Raspbian-5 . How to make it show something else or nothing at all ? | This banner SSH-2.0-OpenSSH_6.7p1 Raspbian-5 is part of the SSH protocol as described in chapter 4.2. Protocol Version Exchange of RFC 4253 : When the connection has been established, both sides MUST send an identification string. This identification string MUST be SSH-protoversion-softwareversion SP comments CR LF You can't get rid of the SSH-2.0 part. The softwareversion part is commonly used for interoperability and it is also not a good idea to remove it. The comments are optional and don't need to be there (but Debian puts them in by default). You can get rid of the comment using the DebianBanner option in sshd_config . Setting it to no and restarting the ssh server will stop it from being shown. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/269024",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87903/"
]
} |
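Concretely, on Debian and derivatives such as Raspbian, that is (a sketch):
# in /etc/ssh/sshd_config
DebianBanner no
followed by service ssh restart (or systemctl restart ssh).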
269,049 | I have gone through many sites and tutorials for KVM installation and every tutorial says "install KVM under XYZ OS". KVM is a type 1 (bare metal) hypervisor. So shouldn't KVM be installed directly on top of hardware? Is it possible to install KVM on a completely bare metal without any OS just like ESXi? For ESXi we don't need any OS, we can directly install it from media. Our goal is to directly install KVM hypervisor on a bare metal CPU with no OS. | I believe you're misunderstanding how it works. KVM is a combination of the kernel modules (mainlined in the kernel since 2.6.20 if I remember correctly) and utilities needed to run a Virtual Environment ( libvirt , virt-install , virt-manager , qemu , etc). Look at ESXi. That is a Linux system all by itself that sits on bare metal with the bits required to run the Virtual Host piece, including the kernel modules, binaries, etc. Any machine that is considered a KVM host will be doing the same thing, acting as a Virtual Host. Think about it. The OS is always installed to bare metal. I would recommend reading here: http://www.linux-kvm.org/page/Main_Page I know this isn't part of your question, but I really recommend installing an absolute BARE system, meaning, just the minimum amount of packages for a system to be functional, and then going from there. Keep the host to one purpose, and one purpose only: To be a virtual host for a virtual environment. I run four CentOS 7 KVM machines at home in a cluster. That's all they do, run libvirt (the vital service for KVM). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/269049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160600/"
]
} |
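As a sketch of the "bare minimum" host the answer recommends — package names assume CentOS 7, other distributions differ:

sudo yum install qemu-kvm libvirt virt-install
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
lsmod | grep kvm                      # kvm plus kvm_intel or kvm_amd should be loaded
grep -cE '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU has hardware virtualization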
269,077 | I am in the process of colorizing my terminal’s PS1 . I am setting color variables using tput ; for example, here’s purple: PURPLE=$(tput setaf 125) Question: How do I find the color codes (e.g. 125 ) of other colors? Is there a color table guide/cheat sheet somewhere? I’m just not sure what 125 is … Is there some way to take a hex color and convert it into a number that setaf can use? | The count of colors available to tput is given by tput colors . To see the basic 8 colors (as used by setf in the urxvt terminal and setaf in the xterm terminal):

$ printf '\e[%sm▒' {30..37} 0; echo    ### foreground
$ printf '\e[%sm ' {40..47} 0; echo    ### background

They are usually named as follows:

Color     #define          Value   RGB
black     COLOR_BLACK      0       0, 0, 0
red       COLOR_RED        1       max,0,0
green     COLOR_GREEN      2       0,max,0
yellow    COLOR_YELLOW     3       max,max,0
blue      COLOR_BLUE       4       0,0,max
magenta   COLOR_MAGENTA    5       max,0,max
cyan      COLOR_CYAN       6       0,max,max
white     COLOR_WHITE      7       max,max,max

To see the extended 256 colors (as used by setaf in urxvt):

$ printf '\e[48;5;%dm ' {0..255}; printf '\e[0m \n'

If you want numbers and an ordered output:

#!/bin/bash
color(){
    for c; do
        printf '\e[48;5;%dm%03d' $c $c
    done
    printf '\e[0m \n'
}

IFS=$' \t\n'
color {0..15}
for ((i=0;i<6;i++)); do
    color $(seq $((i*36+16)) $((i*36+51)))
done
color {232..255}

The 16 million colors need quite a bit of code (some consoles cannot show this). The basic form is:

fb=3;r=255;g=1;b=1;printf '\e[0;%s8;2;%s;%s;%sm▒▒▒ ' "$fb" "$r" "$g" "$b"

fb is front/back or 3/4 . A simple test of your console's capacity to present so many colors is:

for r in {200..255..5}; do fb=4;g=1;b=1;printf '\e[0;%s8;2;%s;%s;%sm ' "$fb" "$r" "$g" "$b"; done; echo

It will present a red line with a very small change in tone from left to right. If that small change is visible, your console is capable of 16 million colors. Each r , g , and b is a value from 0 to 255 for RGB (Red,Green,Blue). If your console type supports this, this code will create a color table:

mode2header(){
    #### For 16 Million colors use \e[0;38;2;R;G;Bm each RGB is {0..255}
    printf '\e[mR\n' # reset the colors.
    printf '\n\e[m%59s\n' "Some samples of colors for r;g;b. Each one may be 000..255"
    printf '\e[m%59s\n' "for the ansi option: \e[0;38;2;r;g;bm or \e[0;48;2;r;g;bm :"
}
mode2colors(){
    # foreground or background (only 3 or 4 are accepted)
    local fb="$1"
    [[ $fb != 3 ]] && fb=4
    local samples=(0 63 127 191 255)
    for r in "${samples[@]}"; do
        for g in "${samples[@]}"; do
            for b in "${samples[@]}"; do
                printf '\e[0;%s8;2;%s;%s;%sm%03d;%03d;%03d ' "$fb" "$r" "$g" "$b" "$r" "$g" "$b"
            done; printf '\e[m\n'
        done; printf '\e[m'
    done; printf '\e[mReset\n'
}
mode2header
mode2colors 3
mode2colors 4

To convert a hex color value to a (nearest) 0-255 color index:

fromhex(){
    hex=${1#"#"}
    r=$(printf '0x%0.2s' "$hex")
    g=$(printf '0x%0.2s' ${hex#??})
    b=$(printf '0x%0.2s' ${hex#????})
    printf '%03d' "$(( (r<75?0:(r-35)/40)*6*6 + (g<75?0:(g-35)/40)*6 + (b<75?0:(b-35)/40) + 16 ))"
}

Use it as:

$ fromhex 00fc7b
048
$ fromhex #00fc7b
048

To find the color number as used in HTML color format:

#!/bin/dash
tohex(){
    dec=$(($1%256))   ### input must be a number in range 0-255.
    if [ "$dec" -lt "16" ]; then
        bas=$(( dec%16 ))
        mul=128
        [ "$bas" -eq "7" ] && mul=192
        [ "$bas" -eq "8" ] && bas=7
        [ "$bas" -gt "8" ] && mul=255
        a="$(( (bas&1) *mul ))"
        b="$(( ((bas&2)>>1)*mul ))"
        c="$(( ((bas&4)>>2)*mul ))"
        printf 'dec= %3s basic= #%02x%02x%02x\n' "$dec" "$a" "$b" "$c"
    elif [ "$dec" -gt 15 ] && [ "$dec" -lt 232 ]; then
        b=$(( (dec-16)%6 )); b=$(( b==0?0: b*40 + 55 ))
        g=$(( (dec-16)/6%6)); g=$(( g==0?0: g*40 + 55 ))
        r=$(( (dec-16)/36 )); r=$(( r==0?0: r*40 + 55 ))
        printf 'dec= %3s color= #%02x%02x%02x\n' "$dec" "$r" "$g" "$b"
    else
        gray=$(( (dec-232)*10+8 ))
        printf 'dec= %3s gray= #%02x%02x%02x\n' "$dec" "$gray" "$gray" "$gray"
    fi
}
for i in $(seq 0 255); do
    tohex ${i}
done

Use it as ("basic" is the first 16 colors, "color" is the main group, "gray" is the last gray colors):

$ tohex 125   ### A number in range 0-255
dec= 125 color= #af005f
$ tohex 6
dec= 6 basic= #008080
$ tohex 235
dec= 235 gray= #262626 | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/269077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67282/"
]
} |
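Tying this back to the PS1 in the question — a sketch, assuming bash: wrap the tput output in \[ \] so readline computes the prompt width correctly, and note that index 125 only renders on terminals reporting 256 colors:

PURPLE=$(tput setaf 125)
RESET=$(tput sgr0)
[ "$(tput colors)" -ge 256 ] && PS1="\[$PURPLE\]\u@\h\[$RESET\] \w \$ "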
269,078 | I have a script that does a number of different things, most of which do not require any special privileges. However, one specific section, which I have contained within a function, needs root privileges. I don't wish to require the entire script to run as root, and I want to be able to call this function, with root privileges, from within the script. Prompting for a password if necessary isn't an issue since it is mostly interactive anyway. However, when I try to use sudo functionx , I get:

sudo: functionx: command not found

As I expected, export didn't make a difference. I'd like to be able to execute the function directly in the script rather than breaking it out and executing it as a separate script for a number of reasons. Is there some way I can make my function "visible" to sudo without extracting it, finding the appropriate directory, and then executing it as a stand-alone script? The function is about a page long itself and contains multiple strings, some double-quoted and some single-quoted. It is also dependent upon a menu function defined elsewhere in the main script. I would only expect someone with sudo ANY to be able to run the function, as one of the things it does is change passwords. | I will admit that there's no simple, intuitive way to do this, and this is a bit hackey. But, you can do it like this:

function hello(){
    echo "Hello!"
}

# Test that it works.
hello

FUNC=$(declare -f hello)
sudo bash -c "$FUNC; hello"

Or more simply:

sudo bash -c "$(declare -f hello); hello"

It works for me:

$ bash --version
GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin14.5.0)
$ hello
Hello!
$
$ FUNC=$(declare -f hello)
$ sudo bash -c "$FUNC; hello"
Hello!

Basically, declare -f will return the contents of the function, which you then pass to bash -c inline. If you want to export all functions from the outer instance of bash, change FUNC=$(declare -f hello) to FUNC=$(declare -f) . Edit To address the comments about quoting, see this example:

$ hello()
> {
> echo "This 'is a' test."
> }
$ declare -f hello
hello ()
{
    echo "This 'is a' test."
}
$ FUNC=$(declare -f hello)
$ sudo bash -c "$FUNC; hello"
Password:
This 'is a' test. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/269078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160045/"
]
} |
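Since the question mentions the function depends on a menu function defined elsewhere, note that declare -f accepts several names, so all dependencies can be shipped in one go — a sketch with placeholder function names:

sudo bash -c "$(declare -f menu change_passwords); change_passwords"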
269,094 | Given input with mixed text and numbers (w/o leading zeros), how can I get it sorted in the "natural" order? For example, given the following input (hostnames):

whatever-1.example.org
whatever-10.example.org
whatever-11.example.org
whatever-12.example.org
whatever-13.example.org
whatever-2.example.org
whatever-3.example.org
whatever-4.example.org
whatever-5.example.org
whatever-6.example.org
whatever-7.example.org
whatever-8.example.org
whatever-9.example.org

I would like this output:

whatever-1.example.org
whatever-2.example.org
whatever-3.example.org
whatever-4.example.org
whatever-5.example.org
whatever-6.example.org
whatever-7.example.org
whatever-8.example.org
whatever-9.example.org
whatever-10.example.org
whatever-11.example.org
whatever-12.example.org
whatever-13.example.org

EDIT I should have mentioned that in addition to the "whatever"s there would also be thingaroo-#.example.org, blargh-#.example.org, ...etc... Thanks! | If you have GNU coreutils ≥ 7.0, then you can use version sort. This is lexicographic order except that sequences of digits are ordered according to their value as an integer in decimal notation.

sort -V | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/269094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151566/"
]
} |
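If sort -V is unavailable (coreutils < 7.0), a field-based numeric sort works for this specific layout — a sketch assuming exactly one '-' before the number:

sort -t- -k1,1 -k2,2n hosts.txt    # group by prefix, then sort numerically on the part after '-'

The -k2,2n key reads only the leading digits of a field like '10.example.org' as the number 10, which is what makes this work.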
269,098 | My understanding is that hard drives and SSDs implement some basic error correction inside the drive, and most RAID configurations e.g. mdadm will depend on this to decide when a drive has failed to correct an error and needs to be taken offline. However, this depends on the storage being 100% accurate in its error diagnosis. That's not so, and a common configuration like a two-drive RAID-1 mirror will be vulnerable: suppose some bits on one drive are silently corrupted and the drive does not report a read error. Thus, file systems like btrfs and ZFS implement their own checksums, so as not to trust buggy drive firmwares, glitchy SATA cables, and so on. Similarly, RAM can also have reliability problems and thus we have ECC RAM to solve this problem. My question is this : what's the canonical way to protect the Linux swap file from silent corruption / bit rot not caught by drive firmware on a two-disk configuration (i.e. using mainline kernel drivers)? It seems to me that a configuration that lacks end-to-end protection here (such as that provided by btrfs) somewhat negates the peace of mind brought by ECC RAM. Yet I cannot think of a good way: btrfs does not support swapfiles at all. You could set up a loop device from a btrfs file and make a swap on that. But that has problems: Random writes don't perform well: https://btrfs.wiki.kernel.org/index.php/Gotchas#Fragmentation The suggestion there to disable copy-on-write will also disable checksumming - thus defeating the whole point of this exercise. Their assumption is that the data file has its own internal protections. ZFS on Linux allows using a ZVOL as swap, which I guess could work: http://zfsonlinux.org/faq.html#CanIUseaZVOLforSwap - however, from my reading, ZFS is normally demanding on memory, and getting it working in a swap-only application sounds like some work figuring it out. I think this is not my first choice. Why you would have to use some out-of-tree kernel module just to have a reliable swap is beyond me - surely there is a way to accomplish this with most modern Linux distributions / kernels in this day & age? There was actually a thread on a Linux kernel mailing list with patches to enable checksums within the memory manager itself, for exactly the reasons I discuss in this question: http://thread.gmane.org/gmane.linux.kernel/989246 - unfortunately, as far as I can tell, the patch died and never made it upstream for reasons unknown to me. Too bad, it sounded like a nice feature. On the other hand, if you put swap on a RAID-1 - if the corruption is beyond the ability of the checksum to repair, you'd want the memory manager to try to read from the other drive before panicking or whatever, which is probably outside the scope of what a memory manager should do. In summary: RAM has ECC to correct errors Files on permanent storage have btrfs to correct errors Swap has ??? <--- this is my question | We trust the integrity of the data retrieved from swap because the storage hardware has checksums, CRCs, and such. In one of the comments above, you say: true, but it won't protect against bit flips outside of the disk itself "It" meaning the disk's checksums here. That is true, but SATA uses 32-bit CRCs for commands and data. Thus, you have a 1 in 4 billion chance of corrupting data undetectably between the disk and the SATA controller. 
That means that a continuous error source could introduce an error as often as every 125 MiB transferred, but a rare, random error source like cosmic rays would cause undetectable errors at a vanishingly small rate. Realize also that if you've got a source that causes an undetected error at a rate anywhere near one per 125 MiB transferred, performance will be terrible because of the high number of detected errors requiring re-transfer. Monitoring and logging will probably alert you to the problem in time to avoid undetected corruption. As for the storage medium's checksums, every SATA (and before it, PATA) disk uses per-sector checksums of some kind. One of the characteristic features of "enterprise" hard disks is larger sectors protected by additional data integrity features , greatly reducing the chance of an undetected error. Without such measures, there would be no point to the spare sector pool in every hard drive: the drive itself could not detect a bad sector, so it could never swap fresh sectors in. In another comment, you ask: if SATA is so trustworthy, why are there checksummed file systems like ZFS, btrfs, ReFS? Generally speaking, we aren't asking swap to store data long-term. The limit on swap storage is the system's uptime , and most data in swap doesn't last nearly that long, since most data that goes through your system's virtual memory system belongs to much shorter-lived processes. On top of that, uptimes have generally gotten shorter over the years, what with the increased frequency of kernel and libc updates, virtualization, cloud architectures, etc. Furthermore, most data in swap is inherently disused in a well-managed system, being one that doesn't run itself out of main RAM. In such a system, the only things that end up in swap are pages that the program doesn't use often, if ever. This is more common than you might guess. Most dynamic libraries that your programs link to have routines in them that your program doesn't use, but they had to be loaded into RAM by the dynamic linker . When the OS sees that you aren't using all of the program text in the library, it swaps it out, making room for code and data that your programs are using. If such swapped-out memory pages are corrupted, who would ever know? Contrast this with the likes of ZFS where we expect the data to be durably and persistently stored, so that it lasts not only beyond the system's current uptime, but also beyond the life of the individual storage devices that comprise the storage system. ZFS and such are solving a problem with a time scale roughly two orders of magnitude longer than the problem solved by swap. We therefore have much higher corruption detection requirements for ZFS than for Linux swap. ZFS and such differ from swap in another key way here: we don't RAID swap filesystems together. When multiple swap devices are in use on a single machine, it's a JBOD scheme, not like RAID-0 or higher. (e.g. macOS's chained swap files scheme , Linux's swapon , etc.) Since the swap devices are independent, rather than interdependent as with RAID, we don't need extensive checksumming because replacing a swap device doesn't involve looking at other interdependent swap devices for the data that should go on the replacement device. In ZFS terms, we don't resilver swap devices from redundant copies on other storage devices. All of this does mean that you must use a reliable swap device. 
I once used a $20 external USB HDD enclosure to rescue an ailing ZFS pool, only to discover that the enclosure was itself unreliable, introducing errors of its own into the process. ZFS's strong checksumming saved me here. You can't get away with such cavalier treatment of storage media with a swap file. If the swap device is dying, and is thus approaching that worst case where it could inject an undetectable error every 125 MiB transferred, you simply have to replace it, ASAP. The overall sense of paranoia in this question devolves to an instance of the Byzantine generals problem . Read up on that, ponder the 1982 date on the academic paper describing the problem to the computer science world, and then decide whether you, in 2019, have fresh thoughts to add to this problem. And if not, then perhaps you will just use the technology designed by three decades of CS graduates who all know about the Byzantine Generals Problem. This is well-trod ground. You probably can't come up with an idea, objection, or solution that hasn't already been discussed to death in the computer science journals. SATA is certainly not utterly reliable, but unless you are going to join academia or one of the kernel development teams, you are not going to be in a position to add materially to the state of the art here. These problems are already well in hand, as you've already noted: ZFS, btrfs, ReFS... As an OS user, you simply have to trust that the OS's creators are taking care of these problems for you, because they also know about the Byzantine Generals. It is currently not practical to put your swap file on top of ZFS or Btrfs, but if the above doesn't reassure you, you could at least put it atop xfs or ext4. That would be better than using a dedicated swap partition. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/269098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160630/"
]
} |
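And a sketch of that last suggestion — a swap file on ext4 (on some filesystems fallocate-created files cannot be swapped on, in which case use dd if=/dev/zero instead):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile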
269,103 | How can I find all folders starting with a value in the number range 500 to 899? I just need to list them in a file. Additional information: Maxdepth 2 Examples of folder names: 593091_azerty_qwerty or 849934_blablablabla_bla_blabla | A leading number in that range always starts with a digit 5-8 followed by two more digits, so a shell glob in find 's -name test is enough (a sketch; folders.txt is a placeholder output file):

find . -maxdepth 2 -type d -name '[5-8][0-9][0-9]*' > folders.txt

The [5-8][0-9][0-9] pattern matches any name whose first three digits fall in 500-899 (which covers your examples 593091_... and 849934_... ), -maxdepth 2 honours the depth limit (a GNU/BSD extension, not POSIX), and the redirection writes the list to a file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/269103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160644/"
]
} |
269,159 | Whenever I try to install a new package I get this message:

Can't set locale; make sure $LC_* and $LANG are correct!
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = "en_GB:en",
    LC_ALL = (unset),
    LC_CTYPE = "en_GB.UTF-8",
    LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory

My OS is Debian Jessie 8.3 (Mate) using English with a French keyboard. When I type locale, I get this:

locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.UTF-8
LANGUAGE=en_GB:en
LC_CTYPE=en_GB.UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL= | Debian ships locales in source form. They need to be compiled explicitly. The reason for this is that compiled locales use a lot more disk space, but most people only use a few of them. Run dpkg-reconfigure locales as root, select the locales you want in the list (with your settings, you need en_GB and en_US.UTF-8 — I recommend selecting en_US and en_GB.UTF-8 as well) then press <OK> . Alternatively, edit /etc/locale.gen , uncomment the lines for the locales you want, and run locale-gen as root. (Note: on Ubuntu, this works differently: run locale-gen with the locales you want to generate as arguments, e.g. sudo locale-gen en_GB en_US en_GB.UTF-8 en_US.UTF-8 .) Alternatively, Debian now has a package locales-all which you can install instead of locales . It has all the locales pre-generated. The downside is that they use up more disk space (112MB vs 16MB). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/269159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31699/"
]
} |
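For a non-interactive version of the same fix (Debian; the exact locale names come from the question's output):

sudo sed -i -e 's/^# en_US.UTF-8/en_US.UTF-8/' -e 's/^# en_GB.UTF-8/en_GB.UTF-8/' /etc/locale.gen
sudo locale-gen
sudo update-locale LANG=en_US.UTF-8 LC_CTYPE=en_GB.UTF-8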