Columns: source_id (int64), question (string), response (string), metadata (dict)
464,574
ssh-add alone is not working: Error connecting to agent: No such file or directory How should I use that tool?
You need to initialize ssh-agent first. You can do this in multiple ways: either by starting a new shell with ssh-agent bash , or by evaluating the script returned by ssh-agent in your current shell: eval "$(ssh-agent)" I suggest using the second method, because you keep all your history and variables.
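A typical session might look like this (a minimal sketch; the key path ~/.ssh/id_rsa is just the common default, adjust it to your own key):
eval "$(ssh-agent)"      # start the agent and export SSH_AUTH_SOCK / SSH_AGENT_PID into this shell
ssh-add ~/.ssh/id_rsa    # ssh-add can now reach the agent and load the key
ssh-add -l               # list the keys the agent currently holds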
{ "source": [ "https://unix.stackexchange.com/questions/464574", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287413/" ] }
465,305
If you were to globally set alias ':(){ :|:& };:'='echo fork bomb averted' would that be an effective security strategy to avoid the Bash fork bomb execution or would there still be a way to execute it? I suppose the question cashes out to: is there a way to execute a command when it's aliased to something else?
The two, no, three, ... main obstacles to that are: It's not a valid name for an alias. Bash's online manual: "The characters ... and any of the shell metacharacters or quoting characters listed above may not appear in an alias name." (, ), &, | and whitespace are out in Bash 4.4. That particular string is not the only way to write a fork bomb in the shell; it is just famous because it looks obscure. For example, there's no need to call the function : instead of something actually composed of letters. If you could set the alias, the user could unset the alias, circumvent it by escaping the alias name on the command line, or disable aliases altogether, possibly by running the function in a script (Bash doesn't expand aliases in noninteractive shells). Even if the shell is restricted enough to stop all versions of a fork bomb, a general-purpose system will have other programmable utilities that can recurse and fork off subprocesses. Got Perl or a C compiler? Easy enough. Even awk could probably do it. Even if you don't have those installed, you'll also need to stop the user from bringing in compiled binaries from outside the system, or from running /bin/sh , which probably needs to be a fully operational shell for the rest of the system to function. Just use ulimit -u (i.e. RLIMIT_NPROC ) or equivalent to restrict the number of processes a user can start. On most Linux systems there's pam_limits, which can set the process count limit before any commands chosen by the user are started. Something like this in /etc/security/limits.conf would put a hard limit of 50 processes on all users: * hard nproc 50 (Stephen Kitt already mentioned point 1, Jeff Schaller mentioned 2 and 3.)
{ "source": [ "https://unix.stackexchange.com/questions/465305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288980/" ] }
465,322
at 18:00 shutdown now and shutdown 18:00 , are they starting the same service? Do they work the same way?
at 18:00 shutdown now creates an "at" job, which is performed at the specified time by the at daemon or perhaps the cron daemon, depending on your system. shutdown 18:00 starts a process in your shell that waits until the specified time and then performs the shutdown. This command can be terminated if e.g. your shell session is terminated. The net result in most cases will be the same: the system is shut down at 18:00. One difference is that if you use at , the job will be stored, and if the system is shut down by some other means before 18:00, the job will still be waiting to be run when the system boots again; if the time has already passed, the shutdown will be performed immediately, which could be quite unexpected. Another difference is that shutdown 18:00 will create a /run/nologin file 5 minutes before the scheduled time to prevent people logging in after that moment. Also, broadcast messages will be sent to warn logged-in users that the system is about to be shut down. You need to take these differences into account when deciding which to use.
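As a rough illustration (assuming the at package is installed, atd is running, and you are root), scheduling, inspecting and cancelling look like this; the job number 3 is hypothetical:
echo "shutdown -h now" | at 18:00   # queue the shutdown as an at job
atq                                 # list pending at jobs and their job numbers
atrm 3                              # remove at job number 3
shutdown -c                         # cancel a pending "shutdown 18:00" instead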
{ "source": [ "https://unix.stackexchange.com/questions/465322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/307871/" ] }
465,758
I have a coworker who says you need to be careful extracting tarballs because they can make changes you don't know about. I always thought a tarball was just a hierarchy of compressed files, so if you extract it to /tmp/example/ it can't possibly sneak a file into /etc/ or anything like that.
Different tar utilities behave differently in this regard, so it's good to be careful. For a tar file that you didn't create, always list the table of contents before extracting it. Solaris tar : The named files are extracted from the tarfile and written to the directory specified in the tarfile, relative to the current directory. Use the relative path names of files and directories to be extracted. Absolute path names contained in the tar archive are unpacked using the absolute path names, that is, the leading forward slash (/) is not stripped off. In the case of a tar file with full (absolute) path names, such as: /tmp/real-file /etc/sneaky-file-here ... if you extract such a file, you'll end up with both files. GNU tar : By default, GNU tar drops a leading / on input or output, and complains about file names containing a .. component. There is an option that turns off this behavior: --absolute-names -P Do not strip leading slashes from file names, and permit file names containing a .. file name component. ... if you extract a fully-pathed tar file using GNU tar without using the -P option, it will tell you: tar: Removing leading / from member names and will extract the file into subdirectories of your current directory. AIX tar : says nothing about it, and behaves as the Solaris tar -- it will create and extract tar files with full/absolute path names. HP-UX tar : (better online reference welcomed) WARNINGS There is no way to restore an absolute path name to a relative position. OpenBSD tar : -P Do not strip leading slashes ( / ) from pathnames. The default is to strip leading slashes. There are -P options implemented for tar on macOS, FreeBSD and NetBSD as well, with the same semantics, with the addition that tar on FreeBSD and macOS will "refuse to extract archive entries whose pathnames contain .. or whose target directory would be altered by a symlink" without -P . schilytools star : -/ Don't strip leading slashes from file names when extracting an archive. Tar archives containing absolute pathnames are usually a bad idea. With other tar implementations, they may possibly never be extracted without clobbering existing files. Star for that reason, by default strips leading slashes from filenames when in extract mode.
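A quick sanity check before extracting an untrusted archive (a sketch; the archive name is hypothetical):
tar -tvf suspicious.tar                                 # list the table of contents without extracting anything
tar -tf suspicious.tar | grep -E '^/|(^|/)\.\.(/|$)'    # flag entries with absolute paths or ".." components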
{ "source": [ "https://unix.stackexchange.com/questions/465758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20840/" ] }
466,244
What are the consequences for a ext4 filesystem when I terminate a copying cp command by typing Ctrl + C while it is running? Does the filesystem get corrupted? Is the partition's space occupied by the incomplete copied file still usable after deleting it? And, most importantly, is terminating a cp process a safe thing to do?
This is safe to do, but naturally you may not have finished the copy. When the cp command is run, it makes syscalls that instruct the kernel to make copies of the file. A syscall, or system call, is a function that an application can use to request a service from the kernel, such as reading or writing data to the disk. The userspace process simply waits for the syscall to finish. If you were to trace the calls from cp ~/hello.txt /mnt , it would look like: open("/home/user/hello.txt", O_RDONLY) = 3 open("/mnt/hello.txt", O_CREAT|O_WRONLY, 0644) = 4 read(3, "Hello, world!\n", 131072) = 14 write(4, "Hello, world!\n", 14) = 14 close(3) = 0 close(4) = 0 This repeats for each file that is to be copied. No corruption will occur because of the way these syscalls work. When syscalls like these are entered, the fatal signal will only take effect after the syscall has finished, not while it is running (in fact, signals only arrive during a kernelspace-to-userspace context switch). Note that some syscalls, like read() , can be interrupted early. Because of this, forcibly killing the process will only cause it to terminate after the currently running syscall has returned. This means that the kernel, where the filesystem driver lives, is free to finish the operations that it needs to complete to put the filesystem into a sane state. Any I/O of this kind will never be terminated in the middle of an operation, so there is no risk of filesystem corruption.
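If you want to watch this yourself, something like the following should work (assuming strace is installed; the file names are just examples):
strace -e trace=open,openat,read,write,close cp hello.txt /mnt/
# In another terminal, send the signal; cp will only die between syscalls:
kill -TERM <pid-of-cp>    # <pid-of-cp> is a placeholder for the actual PID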
{ "source": [ "https://unix.stackexchange.com/questions/466244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233964/" ] }
466,747
Let's say, from kernel 2.6 onwards, I watch all the running processes on the system. Are the PIDs of children always greater than their parents' PIDs? Is it possible to have special cases of "inversion"?
No, for the very simple reason that there is a maximum numerical value the PID can have. If a process has the highest PID, no child it forks can have a greater PID. The alternative to giving the child a lower PID would be to fail the fork() altogether, which wouldn't be very productive. The PIDs are allocated in order, and after the highest one is used, the system wraps around to reusing the (free) lower ones, so you can get lower PIDs for a child in other cases too. The default maximum PID on my system ( /proc/sys/kernel/pid_max ) is just 32768, so it's not hard to reach the condition where the wraparound happens. $ echo $$ 27468 $ bash -c 'echo $$' 1296 $ bash -c 'echo $$' 1297 If your system were to allocate PIDs randomly ( like OpenBSD appears to do ) instead of consecutively (like Linux), there would be two options. Either the random choice was made over the whole space of possible PIDs, in which case it would be obvious that a child's PID can be lower than the parent's. Or, the child's PID would be chosen by random from the values greater than the parent's PID, which would on average put it halfway between the parent's PID and the maximum. Processes forking recursively would then quickly reach the maximum and we'd be at the same point as mentioned above: a new fork would need to use a lower PID to succeed.
{ "source": [ "https://unix.stackexchange.com/questions/466747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12985/" ] }
466,770
It's best to use public keys for SSH. So my sshd_config has PasswordAuthentication no . Some users never log in, e.g. an sftp user with shell /usr/sbin/nologin , or a system account. So I can create such a user without a password with adduser gary --shell /usr/sbin/nologin --disabled-password . Is that a good/bad idea? Are there ramifications I've not considered?
If you have root access to the server and can regenerate ssh keys for your users in case they lose them AND you're sure a user (as a person) won't have multiple user accounts and they need to switch between those on an SSH session (well, they can also open multiple SSH sessions if the need arises) AND they will never need "physical" (via keyboard+monitor or via remote console for a VM) access to the server AND no users have password-gated sudo access (i.e. they either don't have sudo access at all, or have sudo access with NOPASSWD ) I think you'll be good. We have many servers at work configured like this (only some accounts need access to the VM via vmware remote console, the others connect only via SSH with pubkey auth).
{ "source": [ "https://unix.stackexchange.com/questions/466770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291147/" ] }
466,938
I have an iso file named ubuntu.iso . I can mount it with the command: mount ubuntu.iso /mnt . After mounting it, I can see it in the output of the command df -h : /dev/loop0 825M 825M 0 100% /mnt . However, if I execute the command mount -o loop ubuntu.iso /mnt , I get the same result. As I understand it, a loop device allows us to access the iso file as a device, and I think this is why we add the option -o loop . But I can access my iso file even if I only execute mount ubuntu.iso /mnt . So I can't see the difference between mount and mount -o loop .
Both versions use loop devices, and produce the same result; the short version relies on “cleverness” added to mount in recent years. mount -o loop tells mount explicitly to use a loop device; it leaves the loop device itself up to mount , which will look for an available device, set it up, and use that. (You can specify the device too with e.g. mount -o loop=/dev/loop1 .) The cleverness is that, when given a file to mount, mount will automatically use a loop device to mount it when necessary — i.e. , the file system isn’t specified, or libblkid determines that the file system is only supported on block devices (and therefore a loop device is needed to translate the file into a block device). The loop device section of the mount man page has more details.
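You can confirm that a loop device was set up either way (a sketch; the device number will vary):
mount ubuntu.iso /mnt    # no -o loop given
losetup -a               # lists active loop devices, e.g. /dev/loop0 backed by ubuntu.iso
findmnt /mnt             # shows which device the mount point is using
umount /mnt              # the automatically allocated loop device is normally released again here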
{ "source": [ "https://unix.stackexchange.com/questions/466938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
467,039
Given this minimal example ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) it outputs LINE 1 and then, after one second, outputs LINE 2 , as expected . If we pipe this to grep LINE ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | grep LINE the behavior is the same as in the previous case, as expected . If, alternatively, we pipe this to cat ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | cat the behavior is again the same, as expected . However , if we pipe to grep LINE , and then to cat , ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | grep LINE | cat there is no output until one second passes, and both lines appear on the output immediately, which I did not expect . Why is this happening and how can I make the last version to behave in the same way as the first three commands?
When (at least GNU) grep ’s output is not a terminal, it buffers its output, which is what causes the behaviour you’re seeing. You can disable this either using GNU grep ’s --line-buffered option: ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | grep --line-buffered LINE | cat or the stdbuf utility: ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | stdbuf -oL grep LINE | cat Turn off buffering in pipe has more on this topic.
{ "source": [ "https://unix.stackexchange.com/questions/467039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/249779/" ] }
468,416
I recently set up both Fedora 28 & Ubuntu 18.04 systems and would like to configure my primary user account on both so that I can run sudo commands without being prompted for a password. How can I do this on the respective systems?
This is pretty trivial if you make use of the special Unix group called wheel on Fedora systems. You merely have to do the following: Add your primary user to the wheel group $ sudo gpasswd -a <primary account> wheel Enable NOPASSWD for the %wheel group in /etc/sudoers $ sudo visudo Then comment out this line: ## Allows people in group wheel to run all commands # %wheel ALL=(ALL) ALL And uncomment this line: ## Same thing without a password %wheel ALL=(ALL) NOPASSWD: ALL Save this file with Shift + Z + Z . Logout and log back in NOTE: This last step is mandatory so that your desktop and any corresponding top level shells are re-execed showing that your primary account is now a member of the wheel Unix group.
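On Ubuntu the administrative group is sudo rather than wheel, so a rough equivalent (shown here with a sudoers.d drop-in instead of editing the main file; <primary account> is a placeholder) would be:
sudo usermod -aG sudo <primary account>
echo '<primary account> ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/<primary account>
sudo chmod 0440 /etc/sudoers.d/<primary account>
sudo visudo -cf /etc/sudoers.d/<primary account>   # syntax-check the drop-in
Log out and back in afterwards, for the same reason as on Fedora.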
{ "source": [ "https://unix.stackexchange.com/questions/468416", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
469,070
I would like to install Debian 5 on an older PC, because I expect that the kernel of Debian 5 would work better on this computer. I downloaded the netinstall ISO from debian.org and I tried to install it on a Virtualbox machine. I got this error: Bad mirror . I changed the mirror to archive.debian.org as a hostname, then /debian/ and the problem got resolved. My problem right now is that the installation stucks on Please wait... , on the screen of Select and install (exactly after choosing what to install - only Standard System - at 13%). I don't get any errors. I don't know also how to check logs or something else if there exists some. When I Press CTRL + ALT + F4 , I see the following on the screen: > sep 14 15:36:00 in-target: You should only proceed with the installation if you re certain that > sep 14 15:36:00 in-target: this is what you want to do. > sep 14 15:36:00 in-target: > sep 14 15:36:00 in-target: ispell ibritish wamerican mlocate exim4-config libnfsidmapZ bind9-host > sep 14 15:36:00 in-target: mime-support libidn11 telnet lsof bash-completion dsutils > sep 14 15:36:00 in-target: exim4-daemon-light perl libcap2 mutt reportbug libds58 bc m4 doc-debian > sep 14 15:36:00 in-target: dc at libeuent1 ncurses-term libpcre3 doc-linux-texwhois libsqlite3-0 > sep 14 15:36:00 in-target: python2.5 python-minimal libisccc50 procmail time 1ibrpcsecgss3 > sep 14 15:36:00 in-target: liblwres50 python ftp pciutils dictionaries-commonpython-central w3m > sep 14 15:36:00 in-target: openbsd-inetd libbind9-50 libxle libgme debian-fafile ucf > sep 14 15:36:00 in-target: perl-modules python2.5-minimal libldap-2.4-2 libiscfg50 libdb4.5 > sep 14 15:36:00 in-target: bsd-mailx exim4 libgc1c2 exim4-base patch libisc50 libgssgluel iamerican > sep 14 15:36:00 in-target: portmap nfs-common less libmagicl texinfo liblockfile1 > sep 14 15:36:00 in-target: > sep 14 15:36:00 in-target: Do you want to ignore this warning and proceed anyway > sep 14 15:36:00 in-target: To continue, enter "Yes": to abort, enter "No": What is this warning message about? What can I do? Important to note that I had tried to install Debian 9 on a VirtualBox and it worked. I tried to install Debian 6 and had the same problem.
"I would like to install Debian 5 on an older PC, because Debian 5's kernel should work well on this computer." Umm... no! That is in fact a Really Bad Idea. There are multiple GNU/Linux distributions available that will run on - and are in fact made for - older 32-bit PCs (AntiX, Bodhi etc.). You should never run operating systems that have reached end of life and as such do not receive security updates in a timely manner. And I fail to see why an older kernel should work better than a new one; if it is non-PAE you are looking for, there are alternatives (see above).
{ "source": [ "https://unix.stackexchange.com/questions/469070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177948/" ] }
469,537
The established structure of the .deb file name is package_version_architecture.deb . According to this paragraph: Some packages don't follow the name structure package_version_architecture.deb . Packages renamed by dpkg-name will follow this structure. Generally this will have no impact on how packages are installed by dselect/dpkg, but other installation tools might depend on this naming structure. Question: However, are there any real situations when renaming the .deb package file is strongly discouraged? Is it normal practice to provide a custom .deb file name for my software? Example: My Program for Linux v1.0.0 (Pro).deb (the custom naming) vs. my-program_1.0.0-1_amd64.deb (the proper official naming). Note: I'm not planning to create a repo, I'm just hosting the .deb package of my software on my website for direct download.
Over the years, I’ve accumulated a large number of .deb packages with non-standard names, and I don’t remember running into any problems. “Famous” packages with non-standard names that people might come across nowadays include google-chrome-stable_current_amd64.deb and steam.deb . (In both cases, the fixed, versionless name ensures that a stable URL can be used for downloads, and a stable name for installation instructions.) However I don’t remember running across any with spaces in their names; that shouldn’t cause issues with tools either, but it might cause confusion for your users (since they’ll need to quote the filename or escape the spaces if they’re using shell-based tools). Another point to note is that using a non-standard name which isn’t the same as your package name (as stored in the control file) could also cause confusion, e.g. when attempting to remove the package (since the package name won’t be the same as the name used to install it). As a result of all this, if you don’t want to stick to the canonical name I would recommend something like my-program.deb or my-program_amd64.deb (depending on whether you want to support multiple architectures). You can make that a symlink to the versioned filename too if you want to allow older versions to be downloaded.
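For example (names are illustrative), you can keep the canonical file and expose a stable name via a symlink, and check the internal package name that dpkg will actually use:
ln -s my-program_1.0.0-1_amd64.deb my-program.deb          # stable URL/file name for downloads
dpkg-deb -f my-program.deb Package Version Architecture    # show the name stored in the control file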
{ "source": [ "https://unix.stackexchange.com/questions/469537", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183516/" ] }
469,585
When using the command strace with the flag -T , I would like to know what time unit is used to display the time spent in syscalls. I assume it should be seconds, but I am not quite sure and it seems to be omitted from the manual.
From the source code : if (Tflag) { ts_sub(ts, ts, &tcp->etime); tprintf(" <%ld.%06ld>", (long) ts->tv_sec, (long) ts->tv_nsec / 1000); } This means that the time is shown in seconds, with microseconds (calculated from the nanosecond value) after the decimal point.
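For example, you can see the format in practice (the exact figures below are only illustrative):
strace -T -e trace=read cat /etc/hostname
# ...
# read(3, "myhost\n", 131072) = 7 <0.000011>
The bracketed value at the end of each line is seconds, with microsecond resolution after the decimal point.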
{ "source": [ "https://unix.stackexchange.com/questions/469585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311285/" ] }
469,736
I need to run tail -f against a log file, but only for a specific amount of time, for example 20 seconds, and then exit. What would be an elegant way to achieve this?
With GNU timeout: timeout 20 tail -f /path/to/file
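If GNU timeout isn't available, a rough portable sketch is to background tail and kill it after the delay:
tail -f /path/to/file &   # follow in the background
sleep 20                  # let it run for 20 seconds
kill "$!"                 # $! still holds the PID of the backgrounded tail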
{ "source": [ "https://unix.stackexchange.com/questions/469736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/256195/" ] }
469,950
Recently I've been digging up information about processes in GNU/Linux and I met the infamous fork bomb : :(){ : | :& }; : Theoretically, it is supposed to duplicate itself infinitely until the system runs out of resources... However, I've tried testing it both on a CLI Debian and a GUI Mint distro, and it doesn't seem to impact the system much. Yes, tons of processes are created, and after a while I read console messages like: bash: fork: Resource temporarily unavailable bash: fork: retry: No child processes But after some time, all the processes just get killed and everything goes back to normal. I've read that ulimit sets a maximum number of processes per user, but I can't seem to be able to raise it really far. What are the system protections against a fork bomb? Why doesn't it replicate itself until everything freezes or at least lags a lot? Is there a way to really crash a system with a fork bomb?
You probably have a Linux distro that uses systemd. Systemd creates a cgroup for each user, and all processes of a user belong to the same cgroup. Cgroups is a Linux mechanism to set limits on system resources like max number of processes, CPU cycles, RAM usage, etc. This is a different, more modern, layer of resource limiting than ulimit (which uses the getrlimit() syscall). If you run systemctl status user-<uid>.slice (which represents the user's cgroup), you can see the current and maximum number of tasks (processes and threads) that is allowed within that cgroup. $ systemctl status user-$UID.slice ● user-22001.slice - User Slice of UID 22001 Loaded: loaded Drop-In: /usr/lib/systemd/system/user-.slice.d └─10-defaults.conf Active: active since Mon 2018-09-10 17:36:35 EEST; 1 weeks 3 days ago Tasks: 17 (limit: 10267) Memory: 616.7M By default, the maximum number of tasks that systemd will allow for each user is 33% of the "system-wide maximum" ( sysctl kernel.threads-max ); this usually amounts to ~10,000 tasks. If you want to change this limit: In systemd v239 and later, the user default is set via TasksMax= in: /usr/lib/systemd/system/user-.slice.d/10-defaults.conf To adjust the limit for a specific user (which will be applied immediately as well as stored in /etc/systemd/system.control), run: systemctl [--runtime] set-property user-<uid>.slice TasksMax=<value> The usual mechanisms of overriding a unit's settings (such as systemctl edit ) can be used here as well, but they will require a reboot. For example, if you want to change the limit for every user, you could create /etc/systemd/system/user-.slice.d/15-limits.conf . In systemd v238 and earlier, the user default is set via UserTasksMax= in /etc/systemd/logind.conf . Changing the value generally requires a reboot. More info about this: man 5 systemd.resource-control man 5 systemd.slice man 5 logind.conf http://0pointer.de/blog/projects/systemd.html (search this page for cgroups) man 7 cgroups and https://www.kernel.org/doc/Documentation/cgroup-v1/pids.txt https://en.wikipedia.org/wiki/Cgroups
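To quickly inspect the limit that applies to you (a sketch; the exact paths and values depend on your distribution and cgroup setup):
systemctl show -p TasksMax user-$UID.slice                     # e.g. TasksMax=10267
cat /sys/fs/cgroup/pids/user.slice/user-$UID.slice/pids.max    # the underlying pids controller value on a cgroup v1 layout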
{ "source": [ "https://unix.stackexchange.com/questions/469950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275586/" ] }
470,288
What's a good one-liner to generate an easily memorable password, like xkcd's correct horse battery staple or a Bitcoin seed? EDIT 1 : This is not the same as generating a random string since random strings are not at all memorable. Compare to the obligatory xkcd ...
First of all, install a dictionary of a language you're familiar with, using: sudo apt-get install <language-package> To see all available packages: apt-cache search wordlist | grep ^w Note: all installation instructions assume you're on a debian-based OS. After you've installed dictionary run: WORDS=5; LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${WORDS} | paste -sd "-" Which will output ex: blasphemous-commandos-vasts-suitability-arbor To break it down: WORDS=5; — choose how many words you want in your password. LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words — choose only words containing lowercase alphabet characters (it excludes words with ' in them or funky characters like in éclair ). LC_ALL=C ensures that [a-z] in the regex won't match letter-like symbols other than lowercase letters without diacritics. shuf --random-source=/dev/urandom -n ${WORDS} — chose as many WORDS as you've requested. --random-source=/dev/urandom ensures that shuf seeds its random generator securely; without it, shuf defaults to a secure seed, but may fall back to a non-secure seed on some systems such as some Unix emulation layers on Windows. paste -sd "-" — join all words using - (feel free to change the symbol to something else). Alternatively you can wrap it in a function: #!/bin/bash function memorable_password() { words="${1:-5}" sep="${2:--}" LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep" } or #!/bin/sh memorable_password() { words="$1" if [ -z "${words}" ]; then words=5 fi sep="$2" if [ -z "${sep}" ]; then sep="-" fi LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep" } Both of which can be called as such: memorable_password 7 _ memorable_password 4 memorable_password Returning: skipped_cavity_entertainments_gangway_seaports_spread_communique evaporated-clashes-bold-presuming excelling-thoughtless-pardonable-promulgated-forbearing Bonus For a nerdy and fun, but not very secure password, that doesn't require dictionary installation, you can use (courtesy of @jpa): WORDS=5; man git | \ tr ' ' '\n' | \ egrep '^[a-z]{4,}$' | \ sort | uniq | \ shuf --random-source=/dev/urandom -n ${WORDS} | \ paste -sd "-"
{ "source": [ "https://unix.stackexchange.com/questions/470288", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311902/" ] }
470,299
I have a number of variables in a .dat file that I automatically changed with a script. One of these files has a parameter, bar , that is arranged as such in a .dat file: var1 var2 var3 foo bar T T T 100 100 I used to use the following lines of a bash script to change the value of bar from an arbitrary initial value to the desired value, in this case 2000. This script would change 'bar' to 2000. LINE1=$(awk '/bar/ {++n;if (n==1) {print FNR}}' data.dat) ((LINE1=$LINE1 + 1)) OLD1=$(awk '{for(i=1;i<'$LINE1';i++) getline}{print $12}' data.dat) sed -i '' "${LINE1}s/$OLD1/2000/" data.dat However, I now must now change foo alongside bar . In this example, this is setting foo and bar both to 2000. LINE1=$(awk '/foo/ {++n;if (n==1) {print FNR}}' data.dat) ((LINE1=$LINE1 + 1)) OLD1=$(awk '{for(i=1;i<'$LINE1';i++) getline}{print $12}' data.dat) sed -i '' "${LINE1}s/$OLD1/2000/" data.dat LINE1=$(awk '/bar/ {++n;if (n==1) {print FNR}}' data.dat) ((LINE1=$LINE1 + 1)) OLD1=$(awk '{for(i=1;i<'$LINE1';i++) getline}{print $12}' data.dat) sed -i '' "${LINE1}s/$OLD1/2000/" data.dat This instead only changed the foo to 2000 while leaving bar unchanged. I realize that this is an issue with the way I've described the regular expression, but I have been unable to change both variables with an awk/sed expression.
{ "source": [ "https://unix.stackexchange.com/questions/470299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311907/" ] }
470,301
I have a text file, and I would like to add lines to the file arranged as follows; #define ICFGx 0x2y where x is a decimal number that begins at 0 and ends at 255, incrementing by 1 with each line and y is a hexadecimal number that begins at 000 and ends at 3FC, incrementing by 0x004 with each line. #define ICFG0 0x2000 #define ICFG1 0x2004 #define ICFG2 0x2008 #define ICFG3 0x200C I would also like add them from a certain line onward, say line 500. Is there any way to go about this task from the command line? I'm fairly new to using the linux terminal and I haven't done much bash scripting yet.
{ "source": [ "https://unix.stackexchange.com/questions/470301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311910/" ] }
470,440
A few times when I read about programming I came across the "callback" concept. Funnily enough, I never found an explanation of the term "callback function" that I would call "didactic" or "clear" (almost every explanation I read seemed quite different from the others, and I was left confused). Does the "callback" concept exist in Bash programming? If so, please answer with a small, simple Bash example.
In typical imperative programming , you write sequences of instructions and they are executed one after the other, with explicit control flow. For example: if [ -f file1 ]; then # If file1 exists ... cp file1 file2 # ... create file2 as a copy of a file1 fi etc. As can be seen from the example, in imperative programming you follow the execution flow quite easily, always working your way up from any given line of code to determine its execution context, knowing that any instructions you give will be executed as a result of their location in the flow (or their call sites’ locations, if you’re writing functions). How callbacks change the flow When you use callbacks, instead of placing the use of a set of instructions “geographically”, you describe when it should be called. Typical examples in other programming environments are cases such as “download this resource, and when the download is complete, call this callback”. Bash doesn’t have a generic callback construct of this kind, but it does have callbacks, for error-handling and a few other situations; for example (one has to first understand command substitution and Bash exit modes to understand that example): #!/bin/bash scripttmp=$(mktemp -d) # Create a temporary directory (these will usually be created under /tmp or /var/tmp/) cleanup() { # Declare a cleanup function rm -rf "${scripttmp}" # ... which deletes the temporary directory we just created } trap cleanup EXIT # Ask Bash to call cleanup on exit If you want to try this out yourself, save the above in a file, say cleanUpOnExit.sh , make it executable and run it: chmod 755 cleanUpOnExit.sh ./cleanUpOnExit.sh My code here never explicitly calls the cleanup function; it tells Bash when to call it, using trap cleanup EXIT , i.e. “dear Bash, please run the cleanup command when you exit” (and cleanup happens to be a function I defined earlier, but it could be anything Bash understands). Bash supports this for all non-fatal signals, exits, command failures, and general debugging (you can specify a callback which is run before every command). The callback here is the cleanup function, which is “called back” by Bash just before the shell exits. You can use Bash’s ability to evaluate shell parameters as commands, to build a callback-oriented framework; that’s somewhat beyond the scope of this answer, and would perhaps cause more confusion by suggesting that passing functions around always involves callbacks. See Bash: pass a function as parameter for some examples of the underlying functionality. The idea here, as with event-handling callbacks, is that functions can take data as parameters, but also other functions — this allows callers to provide behaviour as well as data. A simple example of this approach could look like #!/bin/bash doonall() { command="$1" shift for arg; do "${command}" "${arg}" done } backup() { mkdir -p ~/backup cp "$1" ~/backup } doonall backup "$@" (I know this is a bit useless since cp can deal with multiple files, it’s only for illustration.) Here we create a function, doonall , which takes another command, given as a parameter, and applies it to the rest of its parameters; then we use that to call the backup function on all the parameters given to the script. The result is a script which copies all its arguments, one by one, to a backup directory. 
This kind of approach allows functions to be written with single responsibilities: doonall ’s responsibility is to run something on all its arguments, one at a time; backup ’s responsibility is to make a copy of its (sole) argument in a backup directory. Both doonall and backup can be used in other contexts, which allows more code re-use, better tests etc. In this case the callback is the backup function, which we tell doonall to “call back” on each of its other arguments — we provide doonall with behaviour (its first argument) as well as data (the remaining arguments). (Note that in the kind of use-case demonstrated in the second example, I wouldn’t use the term “callback” myself, but that’s perhaps a habit resulting from the languages I use. I think of this as passing functions or lambdas around, rather than registering callbacks in an event-oriented system.)
{ "source": [ "https://unix.stackexchange.com/questions/470440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
471,476
I want to try cgroup v2 but am not sure if it is installed on my linux machine >> uname -r 4.14.66-041466-generic Since cgroup v2 is available in 4.12.0-rc5, I assume it should be available in the kernel version I am using. https://www.infradead.org/~mchehab/kernel_docs/unsorted/cgroup-v2.html However, it does not seem like my system has cgroup v2 as the memory interface files mentioned in its documentation are not available on my system. https://www.kernel.org/doc/Documentation/cgroup-v2.txt It seems like I still have cgroup v1. /sys/fs/cgroup/memory# ls cgroup.clone_children memory.kmem.failcnt memory.kmem.tcp.usage_in_bytes memory.memsw.usage_in_bytes memory.swappiness cgroup.event_control memory.kmem.limit_in_bytes memory.kmem.usage_in_bytes memory.move_charge_at_immigrate memory.usage_in_bytes cgroup.procs memory.kmem.max_usage_in_bytes memory.limit_in_bytes memory.numa_stat memory.use_hierarchy cgroup.sane_behavior memory.kmem.slabinfo memory.max_usage_in_bytes memory.oom_control notify_on_release docker memory.kmem.tcp.failcnt memory.memsw.failcnt memory.pressure_level release_agent memory.failcnt memory.kmem.tcp.limit_in_bytes memory.memsw.limit_in_bytes memory.soft_limit_in_bytes tasks memory.force_empty memory.kmem.tcp.max_usage_in_bytes memory.memsw.max_usage_in_bytes memory.stat Follow-up questions Thanks Brian for the help. Please let me know if I should be creating a new question but I think it might be helpful to other if I just ask my questions here. 1) I am unable to add cgroup controllers, following the command in the doc >> echo "+cpu +memory -io" > cgroup.subtree_control However, I got "echo: write error: Invalid argument". Am I missing a prerequisite to this step? 2) I ran a docker container but the docker daemon log complained about not able to find "/sys/fs/cgroup/cpuset/docker/cpuset.cpus". It seems like docker is still expecting cgroupv1. What is the best way to enable cgroupv2 support on my docker daemon? docker -v Docker version 17.09.1-ce, build aedabb7
You could run the following command: grep cgroup /proc/filesystems If your system supports cgroupv2, you would see: nodev cgroup nodev cgroup2 On a system with only cgroupv1, you would only see: nodev cgroup
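You can also check whether the unified (v2) hierarchy is already mounted, and mount it by hand for experiments (the mount point below is arbitrary):
mount | grep cgroup2                      # any cgroup2 filesystem mounted already?
sudo mkdir -p /mnt/cgroup2
sudo mount -t cgroup2 none /mnt/cgroup2
cat /mnt/cgroup2/cgroup.controllers       # controllers available on the v2 hierarchy
Note that a controller can only live in one hierarchy at a time, so controllers still bound to the v1 hierarchies (as cpu and memory usually are on a hybrid systemd setup) will not appear in cgroup.controllers, which is also the usual reason a write to cgroup.subtree_control fails with "Invalid argument".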
{ "source": [ "https://unix.stackexchange.com/questions/471476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/312790/" ] }
471,521
I want to get only the version of PHP installed on CentOS. Output of php -v : PHP 7.1.16 (cli) (built: Mar 28 2018 13:19:29) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.1.0, Copyright (c) 1998-2018 Zend Technologies I tried the following: php -v | grep PHP | awk '{print $2}' But the output I got was: 7.1.16 (c) How can I get only 7.1.16?
Extending Jeff Schaller's answer , skip the pipeline altogether and just ask for the internal constant representation: $ php -r 'echo PHP_VERSION;' 7.1.15 You can extend this pattern to get more, or less, information: $ php -r 'echo PHP_MAJOR_VERSION;' 7 See the PHP list of pre-defined constants for all available. The major benefit: it doesn't rely on a defined output format of php -v . Given it's about the same performance as a pipeline solution, then it seems a more robust choice. If your objective is to test for the version, then you can also use this pattern. For example, this code will exit 0 if PHP >= 7, and 1 otherwise: php -r 'exit((int)version_compare(PHP_VERSION, "7.0.0", "<"));' For reference, here are timings for various test cases, ordered fastest first: $ time for (( i=0; i<1000; i++ )); do php -v | awk '/^PHP [0-9]/ { print $2; }' >/dev/null; done real 0m13.368s user 0m8.064s sys 0m4.036s $ time for (( i=0; i<1000; i++ )); do php -r 'echo PHP_VERSION;' >/dev/null; done real 0m13.624s user 0m8.408s sys 0m3.836s $ time for (( i=0; i<1000; i++ )); do php -v | head -1 | cut -f2 -d' ' >/dev/null; done real 0m13.942s user 0m8.180s sys 0m4.160s
{ "source": [ "https://unix.stackexchange.com/questions/471521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138782/" ] }
471,577
My requirement is to list all files in a directory, except files ending with a ~ (backup files). I tried to use command: ls -l | grep -v ~ I get this output: asdasad asdasad~ file_names.txt normaltest.txt target_filename testshell1.sh testshell1.sh~ testshell2.sh testshell2.sh~ testtwo.txt testtwo.txt~ test.txt test.txt~ I want to get only these files: asdasad file_names.txt normaltest.txt target_filename testshell1.sh testshell2.sh testtwo.txt test.txt
ls -l | grep -v ~ The reason this doesn't work is that the tilde gets expanded to your home directory, so grep never sees a literal tilde. (See e.g. Bash's manual on Tilde Expansion .) You need to quote it to prevent the expansion, i.e. ls -l | grep -v "~" Of course, this will still remove any output lines with a tilde anywhere, even in the middle of a file name or elsewhere in the ls output (though it's probably not likely to appear in usernames, dates or such). If you really only want to ignore files that end with a tilde, you can use ls -l | grep -v "~$"
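Alternatively, GNU ls can filter these entries itself, avoiding the pipe entirely (GNU coreutils only):
ls -lB                 # -B / --ignore-backups skips entries ending in ~
ls -l --ignore='*~'    # the same idea with an explicit pattern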
{ "source": [ "https://unix.stackexchange.com/questions/471577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/312867/" ] }
471,824
I cannot find the correct way to execute some local scripts (or very local commands) under systemd. I already know I should not create a service (in systemd, a unit) for these kinds of scripts (or should I?). The workaround that I found is to create rc.local and give it execute permissions: printf '#!/bin/bash \n\nexit 0' >/etc/rc.local chmod +x /etc/rc.local For example, if I get a legacy server with a simple rc.local configured by you, I will know what you did and how much it is going to hurt to upgrade or install something new on the distro, since rc.local was respected by external packages. On the other hand, if I install a server and create a systemd unit or two or three (or even sysvinit services) just for doing a simple task, this can sometimes make your life harder, and worse, my unit names may someday conflict with the names of new services created by the distribution's developers and perhaps installed on an upgrade, causing trouble for my scripts! I see another question asking where rc.local is, and the answer was to create it and give it execute permissions. I think my question is really not a duplicate, because I do not want to know where it is - believe me, I just want to accept that it is deprecated - but I cannot find the correct way to do this kind of thing. Should I really create a unit just for something simple like that?
As pointed out elsewhere, it becomes moderately unclean to use rc-local.service under systemd . It is theoretically possible that your distribution will not enable it. (I think this is not common, e.g. because disabling the same build option also removes poweroff / reboot commands that a lot of people use). The semantics are not entirely clear. Systemd defines rc-local.service one way, but Debian provides a drop-in file which alters at least one important setting. rc-local.service can often work well. If you're worried about the above, all you need to do is make your own copy of it! Here's the magic: # /etc/systemd/system/my-startup.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/local/libexec/my-startup-script [Install] WantedBy=multi-user.target I don't think you need to understand every single detail[*], but there are two things you need to know here. You need to enable this with systemctl enable my-startup.service . If your script has a dependency on any other service, including network-online.target , you must declare it. E.g. add a [Unit] section, with the lines Wants=network-online.target and After=network-online.target . You don't need to worry about dependencies on "early boot" services - specifically, services that are already ordered before basic.target . Services like my-startup.service are automatically ordered after basic.target , unless they set DefaultDependencies=no . If you're not sure whether one of your dependencies is an "early boot" service, one approach is to list the services that are ordered before basic.target , by running systemctl list-dependencies --after basic.target . (Note that's --after , not --before ). There are some considerations that I think also applied to pre-systemd rc.local : You need to make sure your commands are not conflicting with another program that tries to control the same thing. It is best not to start long-running programs aka daemons from rc.local . [*] I used Type=oneshot + RemainAfterExit=yes because it makes more sense for most one-shot scripts. It formalizes that you will run a series of commands, that my-startup will be shown as "active" once they have completed, and that you will not start a daemon.
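Putting it together, an installation sequence for the example above might look like this (paths as in the unit file; adjust to taste):
sudo install -Dm755 my-startup-script /usr/local/libexec/my-startup-script
sudo install -m644 my-startup.service /etc/systemd/system/my-startup.service
sudo systemctl daemon-reload
sudo systemctl enable --now my-startup.service
systemctl status my-startup.service    # a oneshot unit with RemainAfterExit shows "active (exited)" once the script has run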
{ "source": [ "https://unix.stackexchange.com/questions/471824", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170792/" ] }
472,718
I have a server that it is normally switched off for security reasons. When I want to work on it I switch it on, execute my tasks, and shut it down again. My tasks usually take no more than 15 minutes. I would like to implement a mechanism to shut it down automatically after 60 minutes. I've researched how to do this with cron, but I don't think it's the proper way because cron doesn't take into account when the server was last turned on. I can only set periodic patterns, but they don't take that data into account. How could I do this implementation?
There are several options. Provide the time directly to shutdown -P : shutdown -P +60 Note that the shutdown man page also points out: If the time argument is used, 5 minutes before the system goes down the /run/nologin file is created to ensure that further logins shall not be allowed. Use the at command. Create a systemd unit file or init script which runs shutdown -P +60 at startup. Use cron's @reboot to run the command after boot. Add to the (root) crontab: @reboot shutdown -P +60 For the last two methods, you could also use sleep 3600 && shutdown -P now instead of a time argument to shutdown, to delay the shutdown for 60 minutes. This way logins remain possible up to the last moment before the shutdown is issued.
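A minimal sketch of the unit-file option, using the sleep variant so the unit does not hold up the rest of boot (the unit name is illustrative):
# /etc/systemd/system/auto-poweroff.service
[Unit]
Description=Power off the machine 60 minutes after boot

[Service]
Type=simple
ExecStart=/bin/sh -c 'sleep 3600 && shutdown -P now'

[Install]
WantedBy=multi-user.target
Enable it once with systemctl enable auto-poweroff.service and every subsequent boot schedules the power-off.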
{ "source": [ "https://unix.stackexchange.com/questions/472718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/313774/" ] }
472,991
The accepted answer to Transform an array into arguments of a command? uses the following Bash command: command "${my_array[@]/#/-}" "$1" I'm trying to figure out what the /#/- part does, exactly. Unfortunately, I don't know what to call it, so I'm having trouble finding any documentation. I've looked through the Bash man page section on arrays and a few websites, but can't find anything.
This is an instance of pattern replacement in shell parameter expansion : ${parameter/pattern/replacement} expands ${parameter} , replacing the first instance of pattern with replacement . In the context of a pattern of this kind, # is special: it anchors the pattern to the start of the parameter. The end result of all this is to expand all the values in the my_array array, prepending - to each one (by replacing the empty pattern at the start of each parameter).
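A quick way to see the effect:
my_array=(foo bar baz)
printf '%s\n' "${my_array[@]/#/-}"    # prints -foo, -bar and -baz, one per line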
{ "source": [ "https://unix.stackexchange.com/questions/472991", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303312/" ] }
473,193
I'm trying to sort /etc/passwd numerically by user ID number (third field) in ascending order and then send it to s4. What command would I use to do that? I've been on this for a while now.
Try the command below to sort /etc/passwd numerically on the UID (third) field: sort -n -t ':' -k3 /etc/passwd
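If "send it to s4" means writing the result into a file called s4 (an assumption on my part), just redirect the output:
sort -n -t ':' -k3 /etc/passwd > s4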
{ "source": [ "https://unix.stackexchange.com/questions/473193", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/313386/" ] }
474,623
One of my friends told me that without Python, Linux cannot get an IP address, cannot bring up the network stack, and can't do "port switching"; he even thinks the kernel can't boot without Python. Is Python really a requirement for a Linux system, or is it just another tool like other interpreters and languages? He also says Android already has Python inside.
Python is not mandatory for Linux, and there are plenty of small "embedded" Linux systems that don't have it. However, many distributions require it. So RHEL may have a dependency on Python because some of their management tools and scripts have been written in it. On those systems python is a requirement.
{ "source": [ "https://unix.stackexchange.com/questions/474623", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36410/" ] }
474,709
if grep -q "�" out.txt then echo "working" else cat out.txt fi Basically, if the file "out.txt" contains "�" anywhere in the file I would like it to echo "working" AND if the file "out.txt" does NOT contain "�" anywhere in the file then I would like it to cat out.txt EDIT: So here's what I'm doing. I'm trying to brute force an openssl decrypt. openssl enc returns 0 on success, non-zero otherwise. Note: you will get false positives because AES/CBC can only determine if "decryption works" based on getting the padding right. So the file decrypts, but it will not be the correct password, so it will have gibberish in it. A common character in the gibberish is "�". So I want the do loop to keep going if the output contains "�". Here's my git link https://github.com/Raphaeangelo/OpenSSLCracker Here's the script: while read line do openssl aes-256-cbc -d -a -in $1 -pass pass:$line -out out.txt 2>out.txt >/dev/null && printf "==================================================\n" if grep -q "�" out.txt then : else cat out.txt && printf "\n==================================================" && printf "\npassword is $line\n" && read -p "press return key to continue..." < /dev/tty; fi done < ./password.txt It's still showing me output with the � character in it.
grep is the wrong tool for the job. You see the � U+FFFD REPLACEMENT CHARACTER not because it’s literally in the file content, but because you looked at a binary file with a tool that is supposed to handle only text-based input. The standard way to handle invalid input (i.e., random binary data) is to replace everything that is not valid in the current locale (most probably UTF-8) with U+FFFD before it hits the screen. That means it is very likely that a literal \xEF\xBF\xBD (the UTF-8 byte sequence for the U+FFFD character) never occurs in the file. grep is completely right in telling you, there is none. One way to detect whether a file contains some unknown binary is with the file(1) command: $ head -c 100 /dev/urandom > rubbish.bin $ file rubbish.bin rubbish.bin: data For any unknown file type it will simply say data . Try $ file out.txt | grep '^out.txt: data$' to check whether the file really contains any arbitrary binary and thus most likely rubbish. If you want to make sure that out.txt is a UTF-8 encoded text file only, you can alternatively use iconv : $ iconv -f utf-8 -t utf-16 out.txt >/dev/null
{ "source": [ "https://unix.stackexchange.com/questions/474709", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/315247/" ] }
474,735
May I kindly ask how to match all values of the first column in File with the line text in File 2 so I can copy the fasta sequences of all Object ID in File 1? File 1.csv file Object_ID, Length, Assignment NODE_142_length_92872_cov_11.2497,92872,2005469 NODE_405_length_50717_cov_10.7964,50717,82654 NODE_775_length_33402_cov_18.9306,33402,1147 NODE_1008_length_27630_cov_17.7829,27630,1184 File 2 fasta.file >NODE_1_length_501653_cov_19.284 TGGTGTGAGAGGCGCACCTCGCTAACTTTTCAGTTAGCGAGGCCGTCTACTCGATTAGCT GTTATGAGCCCGACGAGCTACCAACTGCTCCATCCCGCGATATTGTGATGCAAAGGTAAG >NODE_142_length_92872_cov_11.2497 ATTAACTACTAAGTTACAAATTTTAGTAGCTGTCCAGTTTAAAGGAAGTATTTCATATTT TCGCTTACGTTAAATAGGAAAAGCAAGTTCTTTTTTGAGGTACCCAGTGAGTCTGATTTT OUTPUT FILE >NODE_142_length_92872_cov_11.2497 ATTAACTACTAAGTTACAAATTTTAGTAGCTGTCCAGTTTAAAGGAAGTATTTCATATTT TCGCTTACGTTAAATAGGAAAAGCAAGTTCTTTTTTGAGGTACCCAGTGAGTCTGATTTT Thank you
grep is the wrong tool for the job. You see the � U+FFFD REPLACEMENT CHARACTER not because it’s literally in the file content, but because you looked at a binary file with a tool that is supposed to handle only text-based input. The standard way to handle invalid input (i.e., random binary data) is to replace everything that is not valid in the current locale (most probably UTF-8) with U+FFFD before it hits the screen. That means it is very likely that a literal \xEF\xBF\xBD (the UTF-8 byte sequence for the U+FFFD character) never occurs in the file. grep is completely right in telling you, there is none. One way to detect whether a file contains some unknown binary is with the file(1) command: $ head -c 100 /dev/urandom > rubbish.bin $ file rubbish.bin rubbish.bin: data For any unknown file type it will simply say data . Try $ file out.txt | grep '^out.txt: data$' to check whether the file really contains any arbitrary binary and thus most likely rubbish. If you want to make sure that out.txt is a UTF-8 encoded text file only, you can alternatively use iconv : $ iconv -f utf-8 -t utf-16 out.txt >/dev/null
{ "source": [ "https://unix.stackexchange.com/questions/474735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/315228/" ] }
474,926
Operating System Concepts says Consider a sequential read of a file on disk using the standard system calls open(), read(), and write() . Each file access requires a system call and disk access . Alternatively, we can use the virtual memory techniques discussed so far to treat file I/O as routine memory accesses. This approach, known as memory mapping a file , allows a part of the virtual address space to be logically associated with the file. As we shall see, this can lead to significant performance increases . Memory mapping a file is accomplished by mapping a disk block to a page (or pages) in memory. Initial access to the file proceeds through ordinary demand paging, resulting in a page fault. However, a page-sized portion of the file is read from the file system into a physical page (some systems may opt to read in more than a page-sized chunk of memory at a time). Subsequent reads and writes to the file are handled as routine memory accesses. Manipulating files through memory rather than incurring the overhead of using the read() and write() system calls simplifies and speeds up file access and usage. Could you analyze the performance of memory mapped file? If I am correct, memory mapping file works as following. It takes a system call to create a memory mapping. Then when it accesses the mapped memory, page faults happen. Page faults also have overhead. How does memory mapping a file have significant performance increases over the standard I/O system calls?
Memory mapping a file directly avoids the buffer copies that happen with read() and write() calls. Calls to read() and write() include a pointer to a buffer in the process's address space where the data is stored. The kernel has to copy the data to/from those locations. Using mmap() maps the file into the process's address space, so the process can address the file directly and no copies are required. There is also no system call overhead when accessing a memory-mapped file after the initial call, provided the file is loaded into memory at the initial mmap() . If a page of the mapped file is not in memory, access will generate a fault and require the kernel to load the page into memory. Reading a large block with read() can be faster than mmap() in such cases, if mmap() would generate a significant number of faults to read the file. (It is possible to advise the kernel in advance with madvise() so that it may load the pages ahead of access.) For more details, there is a related question on Stack Overflow: mmap() vs. reading blocks
{ "source": [ "https://unix.stackexchange.com/questions/474926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
475,008
I just want to implement an if condition in awk. I created a file named "simple_if" as below. BEGIN{ num=$1; if (num%2==0) printf "%d is Even number.\n",num; else printf "%d is odd Number.\n",num } Then I executed the program, passing 10 as the argument for $1, as below: awk -f simple_if 10 But it doesn't take the input and instead displays 0. Output: 0 is Even number. How do I get a value from the user in awk?
Arguments given at the end of the command line to awk are generally taken as filenames that the awk script will read from. To set a variable on the command line, use -v variable=value , e.g. awk -v num=10 -f script.awk This would enable you to use num as a variable in your script. The initial value of the variable will be 10 in the above example. You may also read environment variables using ENVIRON["variable"] in your script (for some environment variable named variable ), or look at the command line arguments with ARGV[n] where n is some positive integer. With $1 in awk , you would refer to the value of the first field in the current record, but since you are using it in a BEGIN block, no data has yet been read from any file. The number in your code is being interpreted as zero since it's an empty variable used in an arithmetic context.
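Applied to the question's one-liner, the three approaches sketched above look like this (minimal sketches; the value 10 and the variable name num are simply the ones from the question):

    # 1. pass the value with -v
    awk -v num=10 'BEGIN { if (num % 2 == 0) printf "%d is Even number.\n", num; else printf "%d is odd Number.\n", num }'

    # 2. read it from an environment variable
    num=10 awk 'BEGIN { num = ENVIRON["num"]; if (num % 2 == 0) printf "%d is Even number.\n", num; else printf "%d is odd Number.\n", num }'

    # 3. take it from the command-line arguments; since the program only has a
    #    BEGIN block, awk never tries to read "10" as an input file
    awk 'BEGIN { num = ARGV[1]; if (num % 2 == 0) printf "%d is Even number.\n", num; else printf "%d is odd Number.\n", num }' 10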
{ "source": [ "https://unix.stackexchange.com/questions/475008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184505/" ] }
475,292
I want to watch output from a systemd service on CentOS as if I have started this service from console. Yes, I can see output with journalctl , but it doesn't scroll to the bottom automatically. So how can I watch live output from service?
journalctl -f -u mystuff.service It's in the manual: -f, --follow Show only the most recent journal entries, and continuously print new entries as they are appended to the journal. and -u, --unit=UNIT|PATTERN Show messages for the specified systemd unit UNIT (such as a service unit), or for any of the units matched by PATTERN. If a pattern is specified, a list of unit names found in the journal is compared with the specified pattern and all that match are used. For each unit name, a match is added for messages from the unit ("_SYSTEMD_UNIT=UNIT"), along with additional matches for messages from systemd and messages about coredumps for the specified unit. This parameter can be specified multiple times.
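If you also want some recent history before following, -n/--lines sets how many of the latest entries to show first (with -f the default is 10):

    journalctl -u mystuff.service -n 100 -f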
{ "source": [ "https://unix.stackexchange.com/questions/475292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28212/" ] }
476,048
I previously used to create image files using dd , set up a filesystem on them using mkfs and mount them to access them as mounted partitions. Later on, I have seen on the internet that many examples use losetup beforehand to make a loop device entry under /dev , and then mount it. I could not tell why one would practically need an image file to behave as a loop device and have its own /dev entry while the same behaviour can be obtained without all the hassle. Summary: In a real-life scenario, why do we need a /dev/loopX entry to be present at all, when we can just mount the fs image without it? What's the use of a loop device?
Mounts, typically, must be done on block devices. The loop driver puts a block device front-end onto your data file. If you do a loop mount without losetup then the OS does one in the background. eg $ dd if=/dev/zero of=/tmp/foo bs=1M count=100 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 0.0798775 s, 1.3 GB/s $ mke2fs /tmp/foo mke2fs 1.42.9 (28-Dec-2013) .... $ losetup $ mount -o loop /tmp/foo /mnt1 $ losetup NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE /dev/loop0 0 0 1 0 /tmp/foo $ umount /mnt1 $ losetup $ You may need to call losetup directly if your file image has embedded partitions in it. eg if I have this image: $ fdisk -l /tmp/foo2 Disk /tmp/foo2: 104 MB, 104857600 bytes, 204800 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x1f25ff39 Device Boot Start End Blocks Id System /tmp/foo2p1 2048 204799 101376 83 Linux I can't mount that directly $ mount -o loop /tmp/foo2 /mnt1 mount: /dev/loop0 is write-protected, mounting read-only mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error But if I use losetup and kpartx then I can access the partitions: $ losetup -f /tmp/foo2 $ losetup NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE /dev/loop0 0 0 0 0 /tmp/foo2 $ kpartx -a /dev/loop0 $ mount /dev/mapper/loop0p1 /mnt1 $
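On systems with a reasonably recent util-linux, losetup can also ask the kernel to scan the partition table itself with -P/--partscan, which avoids the separate kpartx step. A sketch, where the loop device name is just whatever the kernel happens to assign:

    $ losetup -fP --show /tmp/foo2
    /dev/loop0
    $ mount /dev/loop0p1 /mnt1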
{ "source": [ "https://unix.stackexchange.com/questions/476048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104727/" ] }
476,080
Say I run some processes: #!/usr/bin/env bash foo & bar & baz & wait; I run the above script like so: foobarbaz | cat as far as I can tell, when any of the processes write to stdout/stderr, their output never interleaves - each line of stdio seems to be atomic. How does that work? What utility controls how each line is atomic?
They do interleave! You only tried short output bursts, which remain unsplit, but in practice it's hard to guarantee that any particular output remains unsplit. Output buffering It depends how the programs buffer their output. The stdio library that most programs use when they're writing uses buffers to make output more efficient. Instead of outputting data as soon as the program calls a library function to write to a file, the function stores this data in a buffer, and only actually outputs the data once the buffer has filled up. This means that output is done in batches. More precisely, there are three output modes: Unbuffered: the data is written immediately, without using a buffer. This can be slow if the program writes its output in small pieces, e.g. character by character. This is the default mode for standard error. Fully buffered: the data is only written when the buffer is full. This is the default mode when writing to a pipe or to a regular file, except with stderr. Line-buffered: the data is written after each newline, or when the buffer is full. This is the default mode when writing to a terminal, except with stderr. Programs can reprogram each file to behave differently, and can explicitly flush the buffer. The buffer is flushed automatically when a program closes the file or exits normally. If all the programs that are writing to the same pipe either use line-buffered mode, or use unbuffered mode and write each line with a single call to an output function, and if the lines are short enough to write in a single chunk, then the output will be an interleaving of whole lines. But if one of the programs uses fully-buffered mode, or if the lines are too long, then you will see mixed lines. Here is an example where I interleave the output from two programs. I used GNU coreutils on Linux; different versions of these utilities may behave differently. yes aaaa writes aaaa forever in what is essentially equivalent to line-buffered mode. The yes utility actually writes multiple lines at a time, but each time it emits output, the output is a whole number of lines. while true; do echo bbbb; done | grep b writes bbbb forever in fully-buffered mode. It uses a buffer size of 8192, and each line is 5 bytes long. Since 5 does not divide 8192, the boundaries between writes are not at a line boundary in general. Let's pitch them together. $ { yes aaaa & while true; do echo bbbb; done | grep b & } | head -n 999999 | grep -e ab -e ba bbaaaa bbbbaaaa baaaa bbbaaaa bbaaaa bbbaaaa ab bbbbaaa As you can see, yes sometimes interrupted grep and vice versa. Only about 0.001% of the lines got interrupted, but it happened. The output is randomized so the number of interruptions will vary, but I saw at least a few interruptions every time. There would be a higher fraction of interrupted lines if the lines were longer, since the likelihood of an interruption increases as the number of lines per buffer decreases. There are several ways to adjust output buffering . The main ones are: Turn off buffering in programs that use the stdio library without changing its default settings with the program stdbuf -o0 found in GNU coreutils and some other systems such as FreeBSD. You can alternatively switch to line buffering with stdbuf -oL . Switch to line buffering by directing the program's output through a terminal created just for this purpose with unbuffer . Some programs may behave differently in other ways, for example grep uses colors by default if its output is a terminal. 
Configure the program, for example by passing --line-buffered to GNU grep. Let's see the snippet above again, this time with line buffering on both sides. { stdbuf -oL yes aaaa & while true; do echo bbbb; done | grep --line-buffered b & } | head -n 999999 | grep -e ab -e ba abbbb abbbb abbbb abbbb abbbb abbbb abbbb abbbb abbbb abbbb abbbb abbbb abbbb So this time yes never interrupted grep, but grep sometimes interrupted yes. I'll come to why later. Pipe interleaving As long as each program outputs one line at a time, and the lines are short enough, the output lines will be neatly separated. But there's a limit to how long the lines can be for this to work. The pipe itself has a transfer buffer. When a program outputs to a pipe, the data is copied from the writer program to the pipe's transfer buffer, and then later from the pipe's transfer buffer to the reader program. (At least conceptually — the kernel may sometimes optimize this to a single copy.) If there's more data to copy than fits in the pipe's transfer buffer, then the kernel copies one bufferful at a time. If multiple programs are writing to the same pipe, and the first program that the kernel picks wants to write more than one bufferful, then there's no guarantee that the kernel will pick the same program again the second time. For example, if P is the buffer size, foo wants to write 2* P bytes and bar wants to write 3 bytes, then one possible interleaving is P bytes from foo , then 3 bytes from bar , and P bytes from foo . Coming back to the yes+grep example above, on my system, yes aaaa happens to write as many lines as can fit in a 8192-byte buffer in one go. Since there are 5 bytes to write (4 printable characters and the newline), that means it writes 8190 bytes every time. The pipe buffer size is 4096 bytes. It is therefore possible to get 4096 bytes from yes, then some output from grep, and then the rest of the write from yes (8190 - 4096 = 4094 bytes). 4096 bytes leaves room for 819 lines with aaaa and a lone a . Hence a line with this lone a followed by one write from grep, giving a line with abbbb . If you want to see the details of what's going on, then getconf PIPE_BUF . will tell you the pipe buffer size on your system, and you can see a complete list of system calls made by each program with strace -s9999 -f -o line_buffered.strace sh -c '{ stdbuf -oL yes aaaa & while true; do echo bbbb; done | grep --line-buffered b & }' | head -n 999999 | grep -e ab -e ba How to guarantee clean line interleaving If the line lengths are smaller than the pipe buffer size, then line buffering guarantees that there won't be any mixed line in the output. If the line lengths can be larger, there's no way to avoid arbitrary mixing when multiple programs are writing to the same pipe. To ensure separation, you need to make each program write to a different pipe, and use a program to combine the lines. For example GNU Parallel does this by default.
{ "source": [ "https://unix.stackexchange.com/questions/476080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
476,167
I was reading the manpage for gdb and I came across the line: You can use GDB to debug programs written in C, C@t{++}, Fortran and Modula-2. The C@t{++} looks like a regex but I can't seem to decode it. What does it mean?
GNU hates man pages, so they usually write documentation in another format and generate a man page from that, without really caring if the result is usable. C@t{++} is some texinfo markup which didn't get translated. It wasn't intended to be part of the user-visible documentation. It should simply say C++ (possibly with some special font for the ++ to make it look nice).
{ "source": [ "https://unix.stackexchange.com/questions/476167", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223417/" ] }
476,253
I have a function in my .bashrc file. I know what it does, it steps up X many directories with cd Here it is: up() { local d="" limit=$1 for ((i=1 ; i <= limit ; i++)) do d=$d/.. done d=$(echo $d | sed 's/^\///') if [ -z "$d" ]; then d=.. fi cd $d } But can you explain these three things from it for me? d=$d/.. sed 's/^\///' d=.. Why not just do like this: up() { limit=$1 for ((i=1 ; i <= limit ; i++)) do cd .. done } Usage: <<<>>>~$ up 3 <<<>>>/$
d=$d/.. adds /.. to the current contents of the d variable. d starts off empty, then the first iteration makes it /.. , the second /../.. etc. sed 's/^\///' drops the first / , so /../.. becomes ../.. (this can be done using a parameter expansion, d=${d#/} ). d=.. only makes sense in the context of its condition: if [ -z "$d" ]; then d=.. fi This ensures that, if d is empty at this point, you go to the parent directory. ( up with no argument is equivalent to cd .. .) This approach is better than iterative cd .. because it preserves cd - — the ability to return to the previous directory (from the user’s perspective) in one step. The function can be simplified: up() { local d=.. for ((i = 1; i < ${1:-1}; i++)); do d=$d/..; done cd $d } This assumes we want to move up at least one level, and adds n - 1 levels, so we don’t need to remove the leading / or check for an empty $d . Using Athena jot (the athena-jot package in Debian): up() { cd $(jot -b .. -s / "${1:-1}"); } (based on a variant suggested by glenn jackman ).
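For illustration, a quick session with the function loaded (the paths are just examples):

    $ pwd
    /usr/share/doc/bash
    $ up 2
    $ pwd
    /usr/share
    $ cd -
    /usr/share/doc/bash

With the loop-based version from the question, cd - would only take you back to /usr/share/doc, because each individual cd .. overwrites the previously remembered directory.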
{ "source": [ "https://unix.stackexchange.com/questions/476253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
476,883
I have this program that can run with both a text user interface and a graphical user interface. It lacks any command line switch to force one or the other, rather I guess it somehow auto-detects whether we are in X or not (e.g. if I run it from a virtual terminal it enters its text mode, and if I run it from an X terminal emulator it opens a separate graphical window). I'd like to force it into text mode and have it run inside the X terminal. How would I go about doing it?
Usually you can just unset DISPLAY on the command line of the terminal. Some applications are smarter than that, and actually check the permissions and type of the console versus pseudoterminal.
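For a single run, without changing the environment of the rest of your session (the program name is a placeholder):

    $ env -u DISPLAY ./the-program          # run with DISPLAY removed
    $ (unset DISPLAY; ./the-program)        # same idea, using a subshell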
{ "source": [ "https://unix.stackexchange.com/questions/476883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271462/" ] }
477,210
Below is the curl command output (file information about branch), need script or command to print file name, filetype and size. I have tried with jq but was able fetch single value ( jq '.values[].size' ) { "path": { "components": [], "name": "", "toString": "" }, "revision": "master", "children": { "size": 5, "limit": 500, "isLastPage": true, "values": [ { "path": { "components": [ ".gitignore" ], "parent": "", "name": ".gitignore", "extension": "gitignore", "toString": ".gitignore" }, "contentId": "c9e472ef4e603480cdd85012b01bd5f4eddc86c6", "type": "FILE", "size": 224 }, { "path": { "components": [ "Jenkinsfile" ], "parent": "", "name": "Jenkinsfile", "toString": "Jenkinsfile" }, "contentId": "e878a88eed6b19b2eb0852c39bfd290151b865a4", "type": "FILE", "size": 1396 }, { "path": { "components": [ "README.md" ], "parent": "", "name": "README.md", "extension": "md", "toString": "README.md" }, "contentId": "05782ad495bfe11e00a77c30ea3ce17c7fa39606", "type": "FILE", "size": 237 }, { "path": { "components": [ "pom.xml" ], "parent": "", "name": "pom.xml", "extension": "xml", "toString": "pom.xml" }, "contentId": "9cd4887f8fc8c2ecc69ca08508b0f5d7b019dafd", "type": "FILE", "size": 2548 }, { "path": { "components": [ "src" ], "parent": "", "name": "src", "toString": "src" }, "node": "395c71003030308d1e4148b7786e9f331c269bdf", "type": "DIRECTORY" } ], "start": 0 } } expected output should be something like below .gitignore FILE 224 Jenkinsfile FILE 1396
For the use case provided in the Question, @JigglyNaga's answer is probably better than this, but for some more complicated task, you could also loop through the list items using keys : from file : for k in $(jq '.children.values | keys | .[]' file); do ... done or from string: for k in $(jq '.children.values | keys | .[]' <<< "$MYJSONSTRING"); do ... done So e.g. you might use: for k in $(jq '.children.values | keys | .[]' file); do value=$(jq -r ".children.values[$k]" file); name=$(jq -r '.path.name' <<< "$value"); type=$(jq -r '.type' <<< "$value"); size=$(jq -r '.size' <<< "$value"); printf '%s\t%s\t%s\n' "$name" "$type" "$size"; done | column -t -s$'\t' if you have no newlines for the values, you can make it with a single jq call inside the loop which makes it much faster: for k in $(jq '.children.values | keys | .[]' file); do IFS=$'\n' read -r -d '' name type size \ <<< "$(jq -r ".children.values[$k] | .path.name,.type,.size" file)" printf '%s\t%s\t%s\n' "$name" "$type" "$size"; done | column -t -s$'\t'
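If the goal is only the name/type/size table, a single jq invocation can also build it in one pass, which avoids calling jq repeatedly inside the loop (the field names are the ones from the sample document):

    jq -r '.children.values[] | select(.type == "FILE")
           | [.path.name, .type, (.size|tostring)] | @tsv' file | column -t -s$'\t'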
{ "source": [ "https://unix.stackexchange.com/questions/477210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/317218/" ] }
477,401
I tested cp with the following commands: $ ls first.html second.html third.html $ cat first.html first $ cat second.html second $ cat third.html third Then I copy first.html to second.html : $ cp first.html second.html $ cat second.html first The file second.html is silently overwritten without any errors. However, if I do it in a desktop GUI by dragging and dropping a file with the same name, it will be suffixed as first1.html automatically. This avoids accidentally overwriting an existing file. Why doesn't cp follow this pattern instead of overwriting files silently?
The default overwrite behavior of cp is specified in POSIX. If source_file is of type regular file, the following steps shall be taken: 3.a. The behavior is unspecified if dest_file exists and was written by a previous step. Otherwise, if dest_file exists, the following steps shall be taken: 3.a.i. If the -i option is in effect, the cp utility shall write a prompt to the standard error and read a line from the standard input. If the response is not affirmative, cp shall do nothing more with source_file and go on to any remaining files. 3.a.ii. A file descriptor for dest_file shall be obtained by performing actions equivalent to the open() function defined in the System Interfaces volume of POSIX.1-2017 called using dest_file as the path argument, and the bitwise-inclusive OR of O_WRONLY and O_TRUNC as the oflag argument. 3.a.iii. If the attempt to obtain a file descriptor fails and the -f option is in effect, cp shall attempt to remove the file by performing actions equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1-2017 called using dest_file as the path argument. If this attempt succeeds, cp shall continue with step 3b. When the POSIX specification was written, there already was a large number of scripts in existence, with a built-in assumption for the default overwrite behavior. Many of those scripts were designed to run without direct user presence, e.g. as cron jobs or other background tasks. Changing the behavior would have broken them. Reviewing and modifying them all to add an option to force overwriting wherever needed was probably considered a huge task with minimal benefits. Also, the Unix command line was always designed to allow an experienced user to work efficiently, even at the expense of a hard learning curve for a beginner. When the user enters a command, the computer is to expect that the user really means it, without any second-guessing; it is the user's responsibility to be careful with potentially destructive commands. When the original Unix was developed, the systems then had so little memory and mass storage compared to modern computers that overwrite warnings and prompts were probably seen as wasteful and unnecessary luxuries. When the POSIX standard was being written, the precedent was firmly established, and the writers of the standard were well aware of the virtues of not breaking backwards compatibility . Besides, as others have described, any user can add/enable those features for themselves, by using shell aliases or even by building a replacement cp command and modifying their $PATH to find the replacement before the standard system command, and get the safety net that way if desired. But if you do so, you'll find that you are creating a hazard for yourself. If the cp command behaves one way when used interactively and another way when called from a script, you may not remember that the difference exists. On another system, you might end up being careless because you're become used to the warnings and prompts on your own system. If the behavior in scripts will still match the POSIX standard, you're likely to get used to the prompts in interactive use, then write a script that does some mass copying - and then find you're again inadvertently overwritten something. If you enforce prompting in scripts too, what will the command do when run in a context that has no user around, e.g. background processes or cron jobs? Will the script hang, abort, or overwrite? 
Hanging or aborting means that a task that was supposed to get done automatically will not be done. Not overwriting may sometimes also cause a problem by itself: for example, it might cause old data to be processed twice by another system instead of being replaced with up-to-date data. A large part of the power of the command line comes from the fact that once you know how to do something on the command line, you'll implicitly also know how to make it happen automatically by scripting . But that is only true if the commands you use interactively also work exactly the same when invoked in a script context. Any significant differences in behavior between interactive use and scripted use will create a sort of cognitive dissonance which is annoying to a power user.
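If you do want a safety net for interactive use, the alias approach mentioned above is the usual compromise, since scripts do not read your interactive startup files and are left untouched (sketch; the filenames are placeholders):

    alias cp='cp -i'              # prompt before overwriting, interactive shells only
    cp -n source.txt dest.txt     # "no-clobber": silently refuse to overwrite (GNU and BSD cp)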
{ "source": [ "https://unix.stackexchange.com/questions/477401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260114/" ] }
477,794
'mount -a' works fine as a one-time action. But auto-mount of removable media reverts to settings that were in fstab at the last reboot. How to make the OS actually reload fstab so auto-mounts use the new settings when media is connected? Specific example seen with Raspbian (Debian) Stretch: FAT-formatted SD card; configured fstab to auto-mount; rebooted; volume auto-mounts, but RO Changed umask options in fstab; mount -a while media is connected, and volume is now RW Unmount and re-insert the media; auto-mount works, but using the options in fstab from the last reboot, so volume is RO Reboot; OS loads updated fstab; auto-mount works when media is connected, and volume is RW - how to get this effect without a reboot? FWIW, the (updated) fstab syntax was: /dev/sdb1 /Volumes/boot vfat rw,user,exec,nofail,umask=0000 0 0
I suspect this is caused by systemd’s conversion of /etc/fstab ; traditional mount doesn’t remember the contents of /etc/fstab . To refresh systemd’s view of the world, including changes to /etc/fstab , run systemctl daemon-reload
{ "source": [ "https://unix.stackexchange.com/questions/477794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/317707/" ] }
477,959
I notice that to set IFS to a newline it needs a $ as prefix, IFS=$'\n' , but to set it to a colon it is just IFS=: Is \n a variable?
That $'...' in bash is not parameter expansion, it's a special kind of quote introduced by ksh93 that expands those \n , \x0a , \12 codes to a newline character. zsh also added \u000a / \U0000000a for the characters with the corresponding Unicode code point. ksh93 and bash also have \cj while zsh has \C-J . ksh93 also supports variations like \x{a} . The $ is a cue that it is some form or expansion. But in any case, it differs from other forms of expansions that use $ (like $((1 + 1)) , $param or $(cmd) ) in that it is not performed inside double quotes or here documents ( echo "$'x'" outputs $'x' in all shells though is unspecified per POSIX) and its expansion is not subject to split+glob, it's definitely closer to a quoting operator than an expansion operator. IFS=\n would set IFS to n ( \ is treated as a quoting operator) and IFS="\n" or IFS='\n' would set IFS to the two characters backslash and n . You can also use: IFS=' ' or IFS=" " or IFS=$' ' To pass a literal newline, though that's less legible (and one can't see other than using things like set list in vi whether $IFS contains other spacing characters in that code). IFS=: , IFS=':' , IFS=":" , IFS=$':' all set IFS to : so it doesn't matter which you use. $'...' is supported (with variations) by at least: ksh93 , zsh , bash , mksh , busybox sh , FreeBSD sh . ksh93 and bash also have a $"..." form of quotes used for localisation of text though it's rarely used as it's cumbersome to deploy and use portably and reliably. The es and fish shells can also use \n outside of quotes to expand to newline. Some tools like printf , some implementations of echo or awk can also expand those \n by themselves. For instance, one can do: printf '\n' awk 'BEGIN{printf "\n"}' echo echo '\n\c' # UNIX compliant echos only to output of newline character, but note that: IFS=$(printf '\n') won't work because command substitution ( $(...) ) strips all trailing newline characters. You can however use: eval "$(printf 'IFS="\n"')" Which works because the output of printf ends in a " character, not a newline. Now, for completeness, in the rc shell and derivatives (like es or akanga ), $'\n' is indeed the expansion of that \n variable (a variable whose name is the sequence of two characters \ and n ). Those shells don't have a limitation on what characters variable names may contain and only have one type of quotes: '...' . $ rc ; '\n' = (foo bar) ; echo $'\n' foo bar ; echo $'\n'(1) foo rc variables are also all exported to the environment, but at least in the Unix variant of rc , for variable names like \n , the environment variable version undergoes a form of encoding: ; env | grep foo | sed -n l __5cn=foo\001bar$ ( 0x5c being the byte value of ASCII \ ; see also how that array variable was encoded with a 0x1 byte as the separator).
{ "source": [ "https://unix.stackexchange.com/questions/477959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260114/" ] }
477,998
ɛ ("Latin epsilon") is a letter used in certain African languages, usually to represent the vowel sound in English "bed". In Unicode it's encoded as U+025B, very distinct from everyday e . However, if I sort the following: eb ed ɛa ɛc it seems that sort considers ɛ and e equivalent: ɛa eb ɛc ed What's going on here? And is there a way to make ɛ and e distinct for sort ing purposes?
No, it doesn't consider them as equivalent, they just have the same primary weight. So that, in first approximation, they sort the same. If you look at /usr/share/i18n/locales/iso14651_t1_common (as used as basis for most locales) on a GNU system (here with glibc 2.27), you'll see: <U0065> <e>;<BAS>;<MIN>;IGNORE # 259 e <U025B> <e>;<PCL>;<MIN>;IGNORE # 287 ɛ <U0045> <e>;<BAS>;<CAP>;IGNORE # 577 E e , ɛ and E have the same primary weight, e and E same secondary weight, only the third weight differentiates them. When comparing strings, sort (the strcoll() standard libc function is uses to compare strings) starts by comparing the primary weights of all characters, and only go for the second weight if the strings are equal with the primary weights (and so on with the other weights). That's how case seems to be ignored in the sorting order in first approximation. Ab sorts between aa and ac , but Ab can sort before or after ab depending on the language rule (some languages have <MIN> before <CAP> like in British English, some <CAP> before <MIN> like in Estonian). If e had the same sorting order as ɛ , printf '%s\n' e ɛ | sort -u would return only one line. But as <BAS> sorts before <PCL> , e alone sorts before ɛ . eɛe sorts after EEE (at the secondary weight) even though EEE sorts after eee (for which we need to go up to the third weight). Now if on my system with glibc 2.27, I run: sed -n 's/\(.*;[^[:blank:]]*\).*/\1/p' /usr/share/i18n/locales/iso14651_t1_common | sort -k2 | uniq -Df1 You'll notice that there are quite a few characters that have been defined with the exact same 4 weights. In particular, our ɛ has the same weights as: <U01DD> <e>;<PCL>;<MIN>;IGNORE <U0259> <e>;<PCL>;<MIN>;IGNORE <U025B> <e>;<PCL>;<MIN>;IGNORE And sure enough: $ printf '%s\n' $'\u01DD' $'\u0259' $'\u025B' | sort -u ǝ $ expr ɛ = ǝ 1 That can be seen as a bug of GNU libc locales. On most other systems, locales make sure all different characters have different sorting order in the end. On GNU locales, it gets even worse, as there are thousands of characters that don't have a sorting order and end up sorting the same, causing all sorts of problems (like breaking comm , join , ls or globs having non-deterministic orders...), hence the recommendation of using LC_ALL=C to work around those issues . As noted by @ninjalj in comments, glibc 2.28 released in August 2018 came with some improvements on that front though AFAICS, there are still some characters or collating elements defined with identical sorting order. On Ubuntu 18.10 with glibc 2.28 and in a en_GB.UTF-8 locale. $ expr $'L\ub7' = $'L\u387' 1 (why would U+00B7 be considered equivalent as U+0387 only when combined with L / l ?!). And: $ perl -lC -e 'for($i=0; $i<0x110000; $i++) {$i = 0xe000 if $i == 0xd800; print chr($i)}' | sort > all-chars-sorted $ uniq -d all-chars-sorted | wc -l 4 $ uniq -D all-chars-sorted | wc -l 1061355 (still over 1 million characters (95% of the Unicode range, down from 98% in 2.27) sorting the same as other characters as their sorting order is not defined). See also: What does "LC_ALL=C" do? Generate the collating order of a string What is the difference between "sort -u" and "sort | uniq"?
{ "source": [ "https://unix.stackexchange.com/questions/477998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88106/" ] }
478,532
I have two different machines (home and work) running Ubuntu 18.04. Last night vim froze at home. I was in insert mode and typing and went to save ( esc :w ) and nothing happened. The status bar still reads -- INSERT -- , the cursor is still blinking where it was. I was stuck. I couldn't find a way out. I couldn't type (nothing happened when I type), I couldn't move around (the up and down arrows did nothing). It was stuck in insert mode with the cursor blinking where it was. I was definitely multitasking and probably hit some other keys in there, but I don't know what keys. It was late, though, so I closed the terminal window and tried again (I was entering a git commit message). It happened again partway through my typing so I switched to git commit -m "don't need an editor for this" instead. And then I shut down my computer and stopped working. I figured I was just tired, but then it happened to me today at work on a different laptop altogether. Again I was multitasking and can't swear I didn't type any bizarro key sequence but if I did it was accidental. And other tabs in the same terminal aren't frozen. I'm used to getting trapped in visual mode in vim. That's a trick I've learned. But stuck in insert mode? Any ideas on what I might've done and how to get out of it? Per a comment suggestion I tried looking at .viminfo but the only .viminfo I see is owned exclusively by root and only appears to show things I would have edited with sudo : # Input Line History (newest to oldest): # Debug Line History (newest to oldest): # Registers: # File marks: '0 1 0 /etc/neomuttrc |4,48,1,0,1531789956,"/etc/neomuttrc" '1 1 66 /etc/apt/sources.list.d/signal-bionic.list |4,49,1,66,1530816565,"/etc/apt/sources.list.d/signal-bionic.list" '2 51 0 /etc/apt/sources.list |4,50,51,0,1530816531,"/etc/apt/sources.list" # Jumplist (newest first): -' 1 0 /etc/neomuttrc |4,39,1,0,1531789956,"/etc/neomuttrc" -' 1 66 /etc/apt/sources.list.d/signal-bionic.list |4,39,1,66,1530816565,"/etc/apt/sources.list.d/signal-bionic.list" -' 1 66 /etc/apt/sources.list.d/signal-bionic.list |4,39,1,66,1530816565,"/etc/apt/sources.list.d/signal-bionic.list" -' 51 0 /etc/apt/sources.list |4,39,51,0,1530816531,"/etc/apt/sources.list" -' 51 0 /etc/apt/sources.list |4,39,51,0,1530816531,"/etc/apt/sources.list" -' 51 0 /etc/apt/sources.list |4,39,51,0,1530816531,"/etc/apt/sources.list" -' 51 0 /etc/apt/sources.list |4,39,51,0,1530816531,"/etc/apt/sources.list" -' 1 0 /etc/apt/sources.list |4,39,1,0,1530816447,"/etc/apt/sources.list" -' 1 0 /etc/apt/sources.list |4,39,1,0,1530816447,"/etc/apt/sources.list" -' 1 0 /etc/apt/sources.list |4,39,1,0,1530816447,"/etc/apt/sources.list" -' 1 0 /etc/apt/sources.list |4,39,1,0,1530816447,"/etc/apt/sources.list" # History of marks within files (newest to oldest): > /etc/neomuttrc * 1531789952 0 " 1 0 > /etc/apt/sources.list.d/signal-bionic.list * 1530816564 0 " 1 66 ^ 1 67 . 1 66 + 1 66 > /etc/apt/sources.list * 1530816454 0 " 51 0 It seems odd that I wouldn't have an unprivileged .viminfo but I did sudo udpatedb and locate .viminfo and still didn't surface more than the one root-owned file.
One key that I frequently fat-finger by mistake is Ctrl S ; that stops all terminal output until a Ctrl Q is typed. That's the XON/XOFF control-flow, which is enabled by default, and ^S and ^Q are the default VSTART and VSTOP keys respectively -- see the stty(1) and termios(3) manpages. You can disable it with: stty -ixon vim will not reenable it as part of its changing the terminal settings.
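If you want that to stick, put it in your shell startup file, guarded so it only runs in interactive shells (stty complains when there is no terminal attached):

    # ~/.bashrc
    case $- in
      *i*) stty -ixon ;;
    esac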
{ "source": [ "https://unix.stackexchange.com/questions/478532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141494/" ] }
478,543
I have this line word1 word2 1234 4567 word3 8901 word4 word5 2541 5142 word5 I want to split this line in order to insert a line break before a numeric field or before an alphanumeric field that is just after a numeric field, so the output would be: word1 word2 1234 4567 word3 8901 word4 word5 2541 5142 word5 All alphanumeric fields begin with letters
{ "source": [ "https://unix.stackexchange.com/questions/478543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216688/" ] }
478,590
I was just going through the official bash repository (I don't usually do this) for something unrelated, but noticed that bash 5 was already in beta. I was just curious about what's going to be new in bash 5 but couldn't find any information. Can someone summarize the changes between Bash versions 4.4 and 5?
The changes made to bash between release 4.4 and 5.0 (released 2019-01-07) may be found in the NEWS file in the bash source distribution. Here is a link to it (the changes are too numerous to list here).
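For a quick taste, a couple of the user-visible additions described in that NEWS file (recalled from memory, so treat this as a pointer rather than a complete list) can be checked from a bash 5 prompt; the output below is illustrative:

    $ echo "$BASH_VERSION"
    5.0.0(1)-release
    $ echo "$EPOCHSECONDS" "$EPOCHREALTIME"     # new time variables in 5.0
    1546300800 1546300800.123456
    $ echo "$BASH_ARGV0"                        # new in 5.0; assigning to it sets $0
    bash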
{ "source": [ "https://unix.stackexchange.com/questions/478590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90864/" ] }
478,592
Using shell script I am making a db call on database VM and I am getting and storing the query response into a .txt file. Which looks like below: X folder Check: Number of Files on X Outbound 17 Y folder Check: Number of Files on Y Outbound 17 Z folder Check: Number of Files on Z Outbound 18 Now for each of the X,Y and Z. I am basically receiving files(counts) on their respective locations. So I am expecting to get "18" files for each X,Y and Z. Now using shell I want to be able to know/store the folders for which I didn't receive 18 files. Example: here in the above case I should get that I am missing files for X and Y folders.
{ "source": [ "https://unix.stackexchange.com/questions/478592", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246749/" ] }
478,823
I am working on a CentOS server and schedule a task with command at # echo "touch a_long_file_name_file.txt" | at now + 1 minute job 2 at Wed Oct 31 13:52:00 2018 One minute later, # ls | grep a_long_file_name_file.tx a_long_file_name_file.txt the file was successful created. However, if I run it locally on my macOS, $ echo "touch a_long_file_name_file.txt" | at now + 1 minute job 31 at Wed Oct 31 13:58:00 2018 Minutes later, if it failed to make such a file. I checked the version of at on the CentOS server AUTHOR: At was mostly written by Thomas Koenig, [email protected]. 2009-11-14 In contrast, the macOS version AUTHORS At was mostly written by Thomas Koenig <[email protected]>. The time parsing routines are by David Parsons <[email protected]>, with minor enhancements by Joe Halpin <[email protected]>. BSD January 13, 2002 I found that at , atq , atrm are not of GNU coreutils. $ ls /usr/local/opt/coreutils/libexec/gnubin/ | grep at cat date pathchk realpath stat truncate How could I install the latest version of at on macOS and make it work?
Instead of updating at and the associated tools on macOS, lets try to make the default at on macOS work. The at manual on macOS says (my emphasis): IMPLEMENTATION NOTES Note that at is implemented through the launchd(8) daemon periodically invoking atrun(8) , which is disabled by default . See atrun(8) for information about enabling atrun . Checking the atrun manual: DESCRIPTION The atrun utility runs commands queued by at(1) . It is invoked periodically by launchd(8) as specified in the com.apple.atrun.plist property list. By default the property list contains the Disabled key set to true, so atrun is never invoked. Execute the following command as root to enable atrun : launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist What I think may be happening here, and what is prompting your other at -related questions, is that you just haven't enabled atrun on your macOS installation. On macOS Mojave, in addition to running the above launchctl command (with sudo ), you will also have to add /usr/libexec/atrun to the list of commands/applications that have "Full Disk Access" in the "Security & Privacy" preferences on the system. Note that I don't know the security implications of doing this. Personally, I have also added /usr/sbin/cron there to get cron jobs to work (not shown in the screenshot below as this is from another computer). To add a command from the /usr path (which won't show up in the file selection dialog on macOS), press Cmd+Shift+G when the file selection dialog is open (after pressing the plus-icon/button in the bottom of the window). You do not need to reboot the machine after these changes. I have tested this on macOS Mojave 14.10.1.
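Once atrun is enabled (and granted Full Disk Access on Mojave), repeating the test from the question should work; for example (output illustrative):

    $ echo "touch ~/at_test_file" | at now + 1 minute
    job 32 at Wed Oct 31 14:10:00 2018
    $ # about a minute later
    $ ls ~/at_test_file
    /Users/you/at_test_file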
{ "source": [ "https://unix.stackexchange.com/questions/478823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260114/" ] }
479,349
I have a college exercise which is "Find all files which name ends in ".xls" of a directory and sub-directories that have the word "SCHEDULE", without using pipes and using only some of the commands GREP, FIND, CUT, PASTE or LS I have reached this command: ls *.xls /users/home/DESKTOP/*SCHEDULE This shows me only the .xls files on the Desktop and opens all directories with SCHEDULE on the name but when it does it it shows me all the files on the directories insted of only the .xls ones.
Assuming that by "file" they mean "regular file", as opposed to directory, symbolic link, socket, named pipe etc. To find all regular files that have a filename suffix .xls and that reside in or below a directory in the current directory that contain the string SCHEDULE in its name: find . -type f -path '*SCHEDULE*/*' -name '*.xls' With -type f we test the file type of the thing that find is currently processing. If it's a regular file (the f type), the next test is considered (otherwise, if it's anything but a file, the next thing is examined). The -path test is a test agains the complete pathname to the file that find is currently examining. If this pathname matches *SCHEDULE*/* , the next test will be considered. The pattern will only match SCHEDULE in directory names (not in the final filename) due to the / later in the pattern. The last test is a test against the filename itself, and it will succeed if the filename ends with .xls . Any pathname that passes all tests will by default be printed. You could also shorten the command into find . -type f -path '*SCHEDULE*/*.xls'
{ "source": [ "https://unix.stackexchange.com/questions/479349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318998/" ] }
479,352
We know that the backtick character is used for command substitution : chown `id -u` /mydir Which made me wonder: is the tick character ´ used for anything in the Linux shell? Note: incidentally, command substitution can also be written more readably as chown $(id -u) /mydir
The character sets used historically with Unix, including ASCII , don’t have a tick character, so it wasn’t used. As far as I’m aware no common usage for that character has been introduced since it’s become available; nor would it, since it’s not included in POSIX’s portable character set . ` was apparently originally included in ASCII (along with ^ and ~) to serve as a diacritic. When ASCII was defined, the apostrophe was typically represented by a ′-style glyph (“prime”, as used for minutes or feet) rather than a straight apostrophe ', and was used as a diacritic acute accent too. Historically, in Unix shell documentation, ` was referred to as a grave accent , not a backtick. The lack of a forward tick wouldn’t have raised eyebrows, especially since ' was used as the complementary character (see roff syntax).
{ "source": [ "https://unix.stackexchange.com/questions/479352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
479,710
We can use the following in order to test telnet VIA port; in the following example we test port 6667: [root@kafka03 ~]# telnet kafka02 6667 Trying 103.64.35.86... Connected to kafka02. Escape character is '^]'. ^CConnection closed by foreign host Since on some machines we can't use telnet (for internal reasons) what are the alternatives to check ports, as telnet?
Netcat ( nc ) is one option. nc -zv kafka02 6667 -z = sets nc to simply scan for listening daemons, without actually sending any data to them -v = enables verbose mode
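A couple of variations that may help: adding a connect timeout so nc does not hang on filtered ports, and, if nc itself is unavailable, using bash's built-in /dev/tcp pseudo-device:

    nc -zv -w 3 kafka02 6667                                      # give up after 3 seconds

    timeout 3 bash -c '</dev/tcp/kafka02/6667' && echo open || echo "closed/filtered"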
{ "source": [ "https://unix.stackexchange.com/questions/479710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
480,121
Issue: Every now and then I need to do simple arithmetic in a command-line environment. E.G. given the following output: Disk /dev/sdb: 256GB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 1049kB 106MB 105MB fat32 hidden, diag 2 106MB 64.1GB 64.0GB ext4 3 64.1GB 192GB 128GB ext4 5 236GB 256GB 20.0GB linux-swap(v1) What's a simple way to calculate on the command line the size of the unallocated space between partition 3 and 5? What I've tried already: bc bc bc 1.06.95 Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. 236-192 44 quit where the bold above is all the stuff I need to type to do a simple 236-192 as bc 1+1 echoes File 1+1 is unavailable. expr expr 236 - 192 where I need to type spaces before and after the operator as expr 1+1 just echoes 1+1 .
You can reduce the amount of verbosity involved in using bc : $ bc <<<"236-192" 44 $ bc <<<"1+1" 2 (assuming your shell supports that). If you’d rather have that as a function: $ c() { printf "%s\n" "$@" | bc -l; } $ c 1+1 22/7 2 3.14285714285714285714 ( -l enables the standard math library and increases the default scale to 20.) Store the c definition in your favourite shell startup file if you want to make it always available.
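For integer-only arithmetic, the shell itself is even shorter, with no external command at all (but note that division results are truncated):

    $ echo $((236 - 192))
    44
    $ echo $((22 / 7))
    3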
{ "source": [ "https://unix.stackexchange.com/questions/480121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90054/" ] }
481,884
I used sudoedit to create a file: $ sudoedit /etc/systemd/system/apache2.service but when I went to save the file, it wrote it in a temporary directory (/var/temp/blahblah). What is going on? Why is it not saving it to the system directory?
The point of sudoedit is to allow users to edit files they wouldn’t otherwise be allowed to, while running an unprivileged editor. To make this happen, sudoedit copies the file to be edited to a temporary location, makes it writable by the requesting user, and opens it in the configured editor. That’s why the editor shows an unrelated filename in a temporary directory. When the editor exits, sudoedit checks whether any changes were really made, and copies the changed temporary file back to its original location if necessary.
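A small usage note: sudoedit picks the unprivileged editor from SUDO_EDITOR, VISUAL or EDITOR, so you can choose it per invocation, e.g.:

    SUDO_EDITOR=nano sudoedit /etc/systemd/system/apache2.service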
{ "source": [ "https://unix.stackexchange.com/questions/481884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
481,896
I created a service-unit to run apache-httpd and it is working, but I am concerned that my service-unit configuration-file is a file, but the other items in the directory (/etc/systemd/system) are all directories, so my file looks like an anomaly: It works, but why is my definition different than the others? I used the instructions at "Tech Guides" to create the service unit .
{ "source": [ "https://unix.stackexchange.com/questions/481896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
481,939
I have generated keys using GPG, by executing the following command gpg --gen-key Now I need to export the key pair to a file; i.e., private and public keys to private.pgp and public.pgp , respectively.  How do I do it?
Export Public Key This command will export an ascii armored version of the public key: gpg --output public.pgp --armor --export username@email Export Secret Key This command will export an ascii armored version of the secret key: gpg --output private.pgp --armor --export-secret-key username@email Security Concerns, Backup, and Storage A PGP public key contains information about one's email address. This is generally acceptable since the public key is used to encrypt email to your address. However, in some cases, this is undesirable. For most use cases, the secret key need not be exported and should not be distributed . If the purpose is to create a backup key, you should use the backup option: gpg --output backupkeys.pgp --armor --export-secret-keys --export-options export-backup user@email This will export all necessary information to restore the secrets keys including the trust database information. Make sure you store any backup secret keys off the computing platform and in a secure physical location. If this key is important to you, I recommend printing out the key on paper using paperkey . And placing the paper key in a fireproof/waterproof safe. Public Key Servers In general, it's not advisable to post personal public keys to key servers. There is no method of removing a key once it's posted and there is no method of ensuring that the key on the server was placed there by the supposed owner of the key. It is much better to place your public key on a website that you own or control. Some people recommend keybase.io for distribution. However, that method tracks participation in various social and technical communities which may not be desirable for some use cases. For the technically adept, I personally recommend trying out the webkey domain level key discovery service.
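To restore on another machine from a backup made with --export-options export-backup, the matching import looks like this (GnuPG 2.1 or later):

    gpg --import-options restore --import backupkeys.pgp
    # a public key exported with --export can simply be re-imported:
    gpg --import public.pgp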
{ "source": [ "https://unix.stackexchange.com/questions/481939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/321124/" ] }
482,032
When I try to get the week number for Dec 31, it returns 1. When I get the week number for Dec 30, I get 52 --- which is what I would expect. The day Monday is correct. This is on a RPI running Ubuntu. $ date -d "2018-12-30T1:58:55" +"%V%a" 52Sun $ date -d "2018-12-31T1:58:55" +"%V%a" 01Mon Same issue without time string $ date -d "2018-12-31" +"%V%a" 01Mon
This is giving you the ISO week which begins on a Monday. The ISO week date system is effectively a leap week calendar system that is part of the ISO 8601 date and time standard issued by the International Organization for Standardization (ISO) since 1988 (last revised in 2004) and, before that, it was defined in ISO (R) 2015 since 1971. It is used (mainly) in government and business for fiscal years, as well as in timekeeping. This was previously known as "Industrial date coding". The system specifies a week year atop the Gregorian calendar by defining a notation for ordinal weeks of the year. An ISO week-numbering year (also called ISO year informally) has 52 or 53 full weeks. That is 364 or 371 days instead of the usual 365 or 366 days. The extra week is sometimes referred to as a leap week, although ISO 8601 does not use this term. Weeks start with Monday. Each week's year is the Gregorian year in which the Thursday falls. The first week of the year, hence, always contains 4 January. ISO week year numbering therefore slightly deviates from the Gregorian for some days close to 1 January. If you want to show 12/31 as week 52, you should use %U , which does not use the ISO standard: $ date -d "2018-12-31T1:58:55" +"%V%a" 01Mon $ date -d "2018-12-31T1:58:55" +"%U%a" 52Mon
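A related gotcha: if you print the year next to an ISO week number, use %G (the ISO week-numbering year) rather than %Y, otherwise dates like 2018-12-31 look inconsistent:

    $ date -d "2018-12-31" +"%Y-W%V"     # misleading: Gregorian year with ISO week
    2018-W01
    $ date -d "2018-12-31" +"%G-W%V-%u"  # consistent ISO week date
    2019-W01-1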
{ "source": [ "https://unix.stackexchange.com/questions/482032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/321191/" ] }
482,390
After the last upgrade on: Operating System: Debian GNU/Linux buster/sid Kernel: Linux 4.18.0-2-686-pae Architecture: x86 /usr/lib/tracker/tracker-store eats a huge load of CPU. PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 7039 nath 20 0 96136 24460 11480 R 100,0 1,3 0:01.76 tracker-store When I run tracker daemon I get: Miners: 17 Nov 2018, 21:17:06: ? File System - Not running or is a disabled plugin 17 Nov 2018, 21:17:06: ? Applications - Not running or is a disabled plugin 17 Nov 2018, 21:17:06: ? Extractor - Not running or is a disabled plugin I thought I disabled all tracker activities, what is it doing? The fan is going like crazy and a reboot does not improve the situation.
After having tracker-store running at almost 100% CPU, almost all the time for 7 days now, it seems like I found an easy fix: tracker reset --hard CAUTION: This process may irreversibly delete data. Although most content indexed by Tracker can be safely reindexed, it can't be assured that this is the case for all data. Be aware that you may be incurring in a data loss situation, proceed at your own risk. Are you sure you want to proceed? [y|N]: The /usr/lib/tracker/tracker-store process is gone, the fan is spinning down, and everything is still quiet after a week. After a reboot tracker-store still stays quiet. Update for Tracker3: tracker3 reset -s -r
{ "source": [ "https://unix.stackexchange.com/questions/482390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
483,871
I know I can find files using find : find . -type f -name 'sunrise' . Example result: ./sunrise ./events/sunrise ./astronomy/sunrise ./schedule/sunrise I also know that I can determine the file type of a file: file sunrise . Example result: sunrise: PEM RSA private key But how can I find files by file type? For example, my-find . -type f -name 'sunrise' -filetype=bash-script : ./astronomy/sunrise ./schedule/sunrise
"File types" on a Unix system are things like regular files, directories, named pipes, character special files, symbolic links etc. These are the type of files that find can filter on with its -type option. The find utility can not by itself distinguish between a "shell script", "JPEG image file" or any other type of regular file . These types of data may however be distinguished by the file utility, which looks at particular signatures within the files themselves to determine type of the file contents. A common way to label the different types of data files is by their MIME type , and file is able to determine the MIME type of a file. Using file with find to detect the MIME type of regular files, and use that to only find shell scripts: find . -type f -exec sh -c ' case $( file -bi "$1" ) in (*/x-shellscript*) exit 0; esac exit 1' sh {} \; -print or, using bash , find . -type f -exec bash -c ' [[ "$( file -bi "$1" )" == */x-shellscript* ]]' bash {} \; -print Add -name sunrise before the -exec if you wish to only detect scripts with that name. The find command above will find all regular files in or below the current directory, and for each such file call a short in-line shell script. This script runs file -bi on the found file and exits with a zero exit status if the output of that command contains the string /x-shellscript . If the output does not contain that string, it exits with a non-zero exit status which causes find to continue immediately with the next file. If the file was found to be a shell script, the find command will proceed to output the file's pathname (the -print at the end, which could also be replaced by some other action). The file -bi command will output the MIME type of the file. For a shell script on Linux (and most other systems), this would be something like text/x-shellscript; charset=us-ascii while on systems with a slightly older variant of the file utility, it may be application/x-shellscript The common bit is the /x-shellscript substring. Note that on macOS, you would have to use file -bI instead of file -bi because of reasons (the -i option does something quite different). The output on macOS is otherwise similar to that of a Linux system. Would you want to perform some custom action on each found shell script, you could do that with another -exec in place of the -print in the find commands above, but it would also be possible to do find . -type f -exec sh -c ' for pathname do case $( file -bi "$pathname" ) in */x-shellscript*) ;; *) continue esac # some code here that acts on "$pathname" done' sh {} + or, with bash , find . -type f -exec bash -c ' for pathname do [[ "$( file -bi "$pathname" )" != */x-shellscript* ]] && continue # some code here that acts on "$pathname" done' bash {} + Related: Understanding the -exec option of `find`
{ "source": [ "https://unix.stackexchange.com/questions/483871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217242/" ] }
483,879
I have a Python program which listens for and detects environmental sounds. It is not my project; I found it on the web (SoPaRe). With the ./sopare.py -l command, it starts recording sounds, but in an infinite loop. When I want to stop it, I have to press Ctrl+C . My purpose is to stop this program automatically after 10 seconds, but when I talked with the author he said that the program does not have a time limiter. I tried to kill it via kill PID , but the PID changes every time the program runs. How can I stop it after a time interval via bash ? Alternatively, I can execute this command from Python with the os.system() command.
The simplest solution would be to use timeout from the collection of GNU coreutils (probably installed by default on most Linux systems): timeout 10 ./sopare.py -l See the manual for this utility for further options ( man timeout ). On non-GNU systems, this utility may be installed as gtimeout if GNU coreutils is installed at all. Another alternative, if GNU coreutils is not available, is to start the process in the background and wait for 10 seconds before sending it a termination signal: ./sopare.py -l & sleep 10 kill "$!" $! will be the process ID of the most recently started background process, in this case of your Python script. In case the waiting time is used for other things: ./sopare.py -l & pid=$! # whatever code here, as long as it doesn't change the pid variable kill "$pid"
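Since you normally stop the program with Ctrl+C, it may react more gracefully to SIGINT than to the SIGTERM that timeout sends by default; GNU timeout lets you pick the signal and optionally follow up with SIGKILL:
timeout --signal=INT 10 ./sopare.py -l
timeout --signal=INT --kill-after=5 10 ./sopare.py -l    # SIGKILL 5 seconds later if it ignores INT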
{ "source": [ "https://unix.stackexchange.com/questions/483879", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/322153/" ] }
483,881
If it depends on the exact type of block device, then what is the default I/O scheduler for each type of device? Background information Fedora 29 includes a Linux kernel from the 4.19 series. (Technically, the initial release used a 4.18 series kernel. But a 4.19 kernel is installed by the normal software updates). Starting in version 4.19, the mainline kernel has CONFIG_SCSI_MQ_DEFAULT as default y . I.e. that's what you get if you take the tree published by Linus, without applying any Fedora-specific patches. By default, SCSI and SATA devices will use the new multi-queue block layer. (Linux treats SATA devices as being SCSI, using a translation based on the SAT standard ). This is a transitional step towards removing the old code. All the old code will now be removed in version 4.21 5.0 , the next kernel release after 4.20. In the new MQ system, block devices use a new set of I/O schedulers. These include none , mq-deadline , and bfq . In the mainline 4.19 kernel, the default scheduler is set as follows: /* For blk-mq devices, we default to using mq-deadline, if available, for single queue devices. If deadline isn't available OR we have multiple queues, default to "none". */ A suggestion has been made to use BFQ as the default in place of mq-deadline . This suggestion was not accepted for 4.19. For the legacy SQ block layer, the default scheduler is CFQ, which is most similar to BFQ. => The kernel's default I/O scheduler can vary, depending on the type of device: SCSI/SATA, MMC/eMMC, etc. CFQ attempts to support some level of "fairness" and I/O priorities ( ionice ). It has various complexities. BFQ is even more complex; it supports ionice but also has heuristics to classify and prioritize some I/O automatically. deadline style scheduling is simpler; it does not support ionice at all. => Users with the Linux default kernel configuration, SATA devices, and no additional userspace policy (e.g. no udev rules), will be subject to a change in behaviour in 4.19. Where ionice used to work, it will no longer have any effect. However Fedora includes specific kernel patches / configuration. Fedora also includes userspace policies such as default udev rules. What does Fedora Workstation 29 use as the default I/O scheduler? If it depends on the exact type of block device, then what is the default I/O scheduler for each type of device?
{ "source": [ "https://unix.stackexchange.com/questions/483881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29483/" ] }
483,882
I have to make a bash script that changes file names from lower to upper case OR from upper to lower case, controlled by parameters on the command line. So when I type on the command line: ./bashScript lower upper then all files in the directory should change from lower to upper case. I also have to add a 3rd parameter that will let me change only one specific file. So, for example, I have to be able to type on the command line: ./bashScript lower upper fileName
{ "source": [ "https://unix.stackexchange.com/questions/483882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/322613/" ] }
484,060
Will # dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table? Or is it the other way around, i.e, does # fdisk /dev/sda g (for GPT) wipe out the zeros written by /dev/zero ?
Will dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table? Yes, the partition table is in the first part of the drive, so writing over it will destroy it. That dd will write over the whole drive if you let it run (so it will take quite some time). Something like dd bs=512 count=50 if=/dev/zero of=/dev/sda would be enough to overwrite the first 50 sectors, including the MBR partition table and the primary GPT. Though at least according to Wikipedia, GPT has a secondary copy of the partition table at the end of the drive, so overwriting just the part in the head of the drive might not be enough. (You don't have to use dd , though. head -c10000 /dev/zero > /dev/sda or cat /bin/ls > /dev/sda would have the same effect.) does fdisk /dev/sda g (for GPT) wipe out the zeros written by /dev/zero? Also yes (provided you save the changes). (However, the phrasing in the title is just confusing, /dev/zero in itself does not do anything any more than any regular storage does.)
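If the goal is only to clear the partition tables (including the backup GPT at the end of the disk) rather than the whole drive, wipefs is a more targeted tool, assuming it is installed (it is part of util-linux):
wipefs --all /dev/sda    # erases filesystem/partition-table signatures; recent versions also clear the backup GPT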
{ "source": [ "https://unix.stackexchange.com/questions/484060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
484,276
disown causes a shell not to send SIGHUP to its disowned job when the shell terminates, and removes the disowned job from the shell's job control. Is the first the result of the second? In other words, if a process started from a shell is removed from the shell's job control in any way, will the shell not send SIGHUP to the process when the shell terminates? disown -h still keeps a process under a shell's job control. Does it mean that disown -h makes a process still receive SIGHUP sent from the shell, but sets the process's action for SIGHUP to "ignore"? That sounds similar to nohup . $ sleep 123 & disown -h [1] 26103 $ jobs [1]+ Running sleep 123 & $ fg 1 sleep 123 $ ^Z [1]+ Stopped sleep 125 $ bg 1 [1]+ sleep 123 & $ exit $ ps aux | grep sleep t 26103 0.0 0.0 14584 824 ? S 15:19 0:00 sleep 123 Do disown -h and nohup work effectively the same, if we disregard their difference in using a terminal? Thanks.
nohup and disown -h are not exactly the same thing. With disown , a process is removed from the list of jobs in the current interactive shell. Running jobs after starting a background process and running disown will not show that process as a job in the shell. A disowned job will not receive a HUP from the shell when it exits (but see note at end). With disown -h , the job is not removed from the list of jobs, but the shell would not send a HUP signal to it if it exited (but see note at end). The nohup utility ignores the HUP signal and starts the given utility. The utility inherits the signal mask from nohup and will therefore also ignore the HUP signal. When the shell terminates, the process remains as a child process of nohup (and nohup is re-parented to init ). The difference is that the process started with nohup ignores HUP regardless of who sends the signal. The disowned processes are just not sent a HUP signal by the shell , but may still be sent the signal from e.g. kill -s HUP <pid> and will not ignore this. Note that HUP is only sent to the jobs of a shell if the shell is a login shell and the huponexit shell option is set, or the shell itself recieves a HUP signal. Relevant bits from the bash manual (my emphasis): SIGNALS [...] The shell exits by default upon receipt of a SIGHUP . Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP . To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h . If the huponexit shell option has been set with shopt , bash sends a SIGHUP to all jobs when an interactive login shell exits. disown [-ar] [-h] [jobspec ... | pid ... ] Without options, remove each jobspec from the table of active jobs. [...] If the -h option is given, each jobspec is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP . [...] Related: Difference between nohup, disown and &
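A quick way to see the difference in a terminal (a sketch; PIDs and job numbers will differ):
sleep 1000 &
disown -h %1          # still listed by "jobs"; the shell won't HUP it on exit
kill -s HUP %1        # ...but an explicit HUP still terminates it

nohup sleep 1000 &
kill -s HUP $!        # ignored: the process inherited SIG_IGN for HUP from nohup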
{ "source": [ "https://unix.stackexchange.com/questions/484276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
484,434
I have: a Linux server that I connect to via SSH at IP 203.0.113.0 port 1234 a home computer (behind a router), public IP 198.51.100.17, which is either Debian or Windows+Cygwin What's the easiest way to have a folder /home/inprogress/ synchronized (in both directions), a bit like rsync , but with a filesystem watcher , so that each time a file is modified, it is immediately replicated on the other side? (i.e. no need to manually call a sync program) I'm looking for a command-line / no-GUI solution, as the server is headless. Is there a Linux/Debian built-in solution?
Following @Kusalananda's comment, I finally spent a few hours testing Syncthing for this use case and it works great. It automatically detects changes on both sides and the replication is very fast. Example: imagine you're working locally on server.py in your favorite Notepad software, you hit CTRL+S (Save). A few seconds later it's automatically replicated on the distant server (without any popup dialog). One great thing I've noticed is that you don't have to think about the IP of the home computer and server with Syncthing: each "device" (computer, server, phone, etc.) has a unique DeviceID and if you share the ID with another device, it will find out automatically how they should connect to each other. To do: Home computer side (Windows or Linux): Use the normal Syncthing in-browser configuration tool VPS side: First connect the VPS with a port forwarding: ssh <user>@<VPS_IP> -L 8385:localhost:8384 The latter option will redirect the VPS's Syncthing web-configuration tool listening on port 8384 to the home computer's port 8385. Then run this on VPS: wget https://github.com/syncthing/syncthing/releases/download/v0.14.52/syncthing-linux-amd64-v0.14.52.tar.gz tar xvfz syncthing-linux-amd64-v0.14.52.tar.gz nohup syncthing-linux-amd64-v0.14.52/syncthing & Then on the home computer's browser, open http://localhost:8385 : this will be the VPS's Syncthing configuration! Other solution I tried: SSHFS using this tutorial . Please note that in this tutorial they don't use sshfs-win but win-sshfs instead (these are two different projects). I tried both, and I couldn't make any of them work (probably a problem with my VPS configuration). Here is an interesting reference too: https://softwarerecs.stackexchange.com/questions/13875/windows-sshfs-sftp-mounting-clients Additional advantages of Syncthing I've just noticed: you can reduce fsWatcherDelayS in the config.xml from 10 to 2 seconds so that after doing CTRL+S, 2 seconds later (+the time to upload, i.e. less than 1 second for a small text file) it's on the other computer if you sync two computers which are in the same local network (by just giving the DeviceID to each other, no need to care about local IP addresses), it will automatically notice that it doesn't need to transit via internet, but it can deal locally. This is great and allows a very fast speed transfer (4 MB/s!) sync of phone <--> computer both connected to the same home router via WiFi... ...whereas it would be stuck at 100 KB/s on ADSL with a Dropbox sync! (my ADSL is limited at 100 KB/s on upload)
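If you prefer not to rely on nohup on the VPS, a small systemd user unit keeps Syncthing running and restarts it on failure. This is only a sketch: the binary path and version match the commands above, and on a headless server you may additionally need loginctl enable-linger <user> so the user manager starts at boot.
# ~/.config/systemd/user/syncthing.service
[Unit]
Description=Syncthing file synchronization

[Service]
ExecStart=%h/syncthing-linux-amd64-v0.14.52/syncthing -no-browser
Restart=on-failure

[Install]
WantedBy=default.target
Then enable it with:
systemctl --user daemon-reload
systemctl --user enable --now syncthing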
{ "source": [ "https://unix.stackexchange.com/questions/484434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59989/" ] }
484,481
I am trying to list every .tar.gz file, only using the following command: ls *.tar.gz -l ...It shows me the following list: -rw-rw-r-- 1 osm osm 949 Nov 27 16:17 file1.tar.gz -rw-rw-r-- 1 osm osm 949 Nov 27 16:17 file2.tar.gz However, I just need to list it this way: file1.tar.gz file2.tar.gz and also not: file1.tar.gz file2.tar.gz How is this "properly" done?
The -1 option (the digit “one”, not lower-case “L”) will list one file per line with no other information: ls -1 -- *.tar.gz
{ "source": [ "https://unix.stackexchange.com/questions/484481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49478/" ] }
484,655
If I wanted to stay on the same file system, couldn't I just specify an output path for the same file system? Or is it to prevent accidentally leaving the current file system?
It limits where files are copied from , not where they’re copied to. It’s useful with recursive copies, to control how cp descends into subdirectories. Thus cp -xr / blah will only copy the root file system, not any of the other file systems mounted. See the cp -x documentation (although its distinction is subtle).
{ "source": [ "https://unix.stackexchange.com/questions/484655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270469/" ] }
485,156
From this answer to Linux: Difference between /dev/console , /dev/tty and /dev/tty0 From the documentation : /dev/tty Current TTY device /dev/console System console /dev/tty0 Current virtual console In the good old days /dev/console was System Administrator console. And TTYs were users' serial devices attached to a server. Now /dev/console and /dev/tty0 represent current display and usually are the same. You can override it for example by adding console=ttyS0 to grub.conf . After that your /dev/tty0 is a monitor and /dev/console is /dev/ttyS0 . By " System console ", /dev/console seems like the device file of a text physical terminal, just like /dev/tty{1..63} are device files for the virtual consoles. By " /dev/console and /dev/tty0 represent current display and usually are the same", /dev/console seems to me that it can also be the device file of a virtual console. /dev/console seems more like /dev/tty0 than like /dev/tty{1..63} ( /dev/tty0 is the currently active virtual console, and can be any of /dev/tty{1..63} ). What is /dev/console ? What is it used for? Does /dev/console play the same role for Linux kernel as /dev/tty for a process? ( /dev/tty is the controlling terminal of the process session of the process, and can be a pts, /dev/ttyn where n is from 1 to 63, or more?) The other reply mentions: The kernel documentation specifies /dev/console as a character device numbered 5:1. Opening this character device opens the "main" console, which is the last tty in the list of consoles. Does "the list of consoles" mean all the console= 's in the boot option ? By " /dev/console as a character device numbered 5:1", does it mean that /dev/console is the device file of a physical text terminal i.e. a system console? (But again, the first reply I quoted above says /dev/console can be the same as /dev/tty0 which is not a physical text terminal, but a virtual console) Thanks.
/dev/console exists primarily to expose the kernel’s console to userspace. The Linux kernel’s documentation on devices now says The console device, /dev/console , is the device to which system messages should be sent, and on which logins should be permitted in single-user mode. Starting with Linux 2.1.71, /dev/console is managed by the kernel; for previous versions it should be a symbolic link to either /dev/tty0 , a specific virtual console such as /dev/tty1 , or to a serial port primary ( tty* , not cu* ) device, depending on the configuration of the system. /dev/console , the device node with major 5 and minor 1, provides access to whatever the kernel considers to be its primary means of interacting with the system administrator; this can be a physical console connected to the system (with the virtual console abstraction on top, so it can use tty0 or any ttyN where N is between 1 and 63), or a serial console, or a hypervisor console, or even a Braille device. Note that the kernel itself doesn’t use /dev/console : devices nodes are for userspace, not for the kernel; it does, however, check that /dev/console exists and is usable, and sets init up with its standard input, output and error pointing to /dev/console . As described here, /dev/console is a character device with a fixed major and minor because it’s a separate device (as in, a means of accessing the kernel; not a physical device), not equivalent to /dev/tty0 or any other device. This is somewhat similar to the situation with /dev/tty which is its own device (5:0) because it provides slightly different features than the other virtual console or terminal devices. The “list of consoles” is indeed the list of consoles defined by the console= boot parameters (or the default console, if there are none). You can see the consoles defined in this way by looking at /proc/consoles . /dev/console does indeed provide access to the last of these : You can specify multiple console= options on the kernel command line. Output will appear on all of them. The last device will be used when you open /dev/console .
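Both points are easy to check on a running system: the fixed 5:1 device numbers, and the list of registered consoles of which /dev/console maps to the last one (the output shown is illustrative and will differ per machine):
$ ls -l /dev/console
crw------- 1 root root 5, 1 Dec 20 10:00 /dev/console
$ cat /proc/consoles
tty0                 -WU (EC p  )    4:7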
{ "source": [ "https://unix.stackexchange.com/questions/485156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
485,164
I am trying to configure a DNS cache with dnsmasq. The server responds to the query, but the response time is exactly the same as the Cloudflare DNS. To test the DNS server I have removed any internet DNS server from my computer and also from the dnsmasq config file. Here is my /etc/dnsmasq.conf domain=raspberry.local resolv-file=/etc/resolv.dnsmasq min-port=4096 cache-size=10000 I have tried for example: dig facebook.it and the query time is circa 85 msec, and this is exactly the time that I get if I use the Cloudflare DNS. Maybe there is something that I don't understand, but I think that a query time should be less than 10 msec if I use a local cache DNS. Here is the content of the file /etc/resolv.conf # Generated by resolvconf # Domain search xxxxxxx # CloudFlare Servers nameserver 1.1.1.1 nameserver 1.0.0.1 search lan nameserver 127.0.0.1 I don't try 127.0.0.1 because I use the DNS server on the Raspberry Pi for the rest of the LAN. I have tried dig facebook.com and the response arrives from 192.168.100.5, which is the Raspberry Pi LAN IP.
{ "source": [ "https://unix.stackexchange.com/questions/485164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311687/" ] }
485,173
I have an Enigma2 FreeSat recorder that I've now hooked up to my Plex Media Server. Plex can see and play the files from the Enigma2 just fine, but the file naming makes this unattractive. How can I rename files of this format: yyyymmdd nnnn - channel - title.* e.g. 20181128 2100 - BBC One HD - The Apprentice.* To: title - dd-mm-yyyy - channel.* e.g. The Apprentice - 28-11-2018 - BBC One HD.* (in such a way I can run this every few minutes from the command line). I want to be sure that it only matches files in the first format so it doesn't try to rename files already renamed. Later I'll want to have this running as a docker container.
{ "source": [ "https://unix.stackexchange.com/questions/485173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323624/" ] }
485,221
I am trying to get a bash array of all the unstaged modifications of files in a directory (using Git). The following code works to print out all the modified files in a directory: git -C $dir/.. status --porcelain | grep "^.\w" | cut -c 4- This prints "Directory Name/File B.txt" "File A.txt" I tried using arr1=($(git status --porcelain | grep "^.\w" | cut -c 4-)) but then for a in "${arr1[@]}"; do echo "$a"; done (both with and without the quotes around ${arr1[@]} prints "Directory Name/File B.txt" "File A.txt" I also tried git -C $dir/.. status --porcelain | grep "^.\w" | cut -c 4- | readarray arr2 but then for a in "${arr2[@]}"; do echo "$a"; done (both with and without the quotes around ${arr2[@]} ) prints nothing. Using declare -a arr2 beforehand does absolutely nothing either. My question is this: How can I read in these values into an array? (This is being used for my argos plugin gitbar , in case it matters, so you can see all my code).
TL;DR In bash: readarray -t arr2 < <(git … ) printf '%s\n' "${arr2[@]}" There are two distinct problems on your question Shell splitting. When you did: arr1=($(git … )) the "command expansion" is unquoted, and so: it is subject to shell split and glob. The exactly see what that shell splitting do, use printf: $ printf '<%s> ' $(echo word '"one simple sentence"') <word> <"one> <simple> <sentence"> That would be avoided by quoting : $ printf '<%s> ' "$(echo word '"one simple sentence"')" <word "one simple sentence"> But that, also, would avoid the splitting on newlines that you want. Pipe When you executed: git … | … | … | readarray arr2 The array variable arr2 got set but it went away when the pipe ( | ) was closed. You could use the value if you stay inside the last subshell: $ printf '%s\n' "First value." "Second value." | { readarray -t arr2; printf '%s\n' "${arr2[@]}"; } First value. Second value. But the value of arr2 will not survive out of the pipe. Solution(s) You need to use read to split on newlines but not with a pipe. From older to newer: Loop. For old shells without arrays (using positional arguments, the only quasi-array): set -- while IFS='' read -r value; do set -- "$@" "$value" done <<-EOT $(printf '%s\n' "First value." "Second value.") EOT printf '%s\n' "$@" To set an array (ksh, zsh, bash) i=0; arr1=() while IFS='' read -r value; do arr1+=("$value") done <<-EOT $(printf '%s\n' "First value." "Second value.") EOT printf '%s\n' "${arr1[@]}" Here-string Instead of the here document ( << ) we can use a here-string ( <<< ): i=0; arr1=() while IFS='' read -r value; do arr1+=("$value") done <<<"$(printf '%s\n' "First value." "Second value.")" printf '%s\n' "${arr1[@]}" Process substitution In shells that support it (ksh, zsh, bash) you can use <( … ) to replace the here-string: i=0; arr1=() while IFS='' read -r value; do arr1+=("$value") done < <(printf '%s\n' "First value." "Second value.") printf '%s\n' "${arr1[@]}" With differences: <( ) is able to emit NUL bytes while a here-string might remove (or emit a warning) the NULs. A here-string adds a trailing newline by default. There may be others AFAIK. readarray Use readarray in bash [a] (a.k.a mapfile ) to avoid the loop: readarray -t arr2 < <(printf '%s\n' "First value." "Second value.") printf '%s\n' "${arr2[@]}" [a] In ksh you will need to use read -A , which clears the variable before use, but needs some "magic" to split on newlines and read the whole input at once. IFS=$'\n' read -d '' -A arr2 < <(printf '%s\n' "First value." "Second value.") You will need to load a mapfile module in zsh to do something similar.
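Applied to the git command from the question, the readarray approach would look something like this (an untested sketch):
readarray -t arr1 < <(git -C "$dir/.." status --porcelain | grep "^.\w" | cut -c 4-)
printf '%s\n' "${arr1[@]}"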
{ "source": [ "https://unix.stackexchange.com/questions/485221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184237/" ] }
485,246
I used to use Windows but switched to Linux. I am used to using PuTTY on Windows, where to copy from Windows and paste into PuTTY I could just right-click, but I think I might be missing some configuration: when I right-click it does not paste, and when I press CTRL+V it does not paste. I can copy and paste any text anywhere on Elementary OS, but it just won't paste inside PuTTY. Is there some clipboard configuration in PuTTY or something for this?...
{ "source": [ "https://unix.stackexchange.com/questions/485246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/316527/" ] }
486,902
I have this in a bash script: exit 3; exit_code="$?" if [[ "$exit_code" != "0" ]]; then echo -e "${r2g_magenta}Your r2g process is exiting with code $exit_code.${r2g_no_color}"; exit "$exit_code"; fi It looks like it will exit right after the exit command, which makes sense. I was wondering is there some simple command that can provide an exit code without exiting right away? I was going to guess: exec exit 3 but it gives an error message: exec: exit: not found .  What can I do? :)
If you have a script that runs some program and looks at the program's exit status (with $? ), and you want to test that script by doing something that causes $? to be set to some known value (e.g., 3 ), just do (exit 3) The parentheses create a sub-shell.  Then the exit command causes that sub-shell to exit with the specified exit status.
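For example, to check that it behaves as expected:
(exit 3)
echo "$?"    # prints 3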
{ "source": [ "https://unix.stackexchange.com/questions/486902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
487,458
I am trying to sort some simple pipe-delimited data. However, sort isn't actually sorting. It moves my header row to the bottom, but my two rows starting with 241 are being split by a row starting with 24. cat sort_fail.csv column_a|column_b|column_c 241|212|20810378 24|121|2810172 241|213|20810376 sort sort_fail.csv 241|212|20810378 24|121|2810172 241|213|20810376 column_a|column_b|column_c The column headers are being moved to the bottom of the file, so sort is clearly processing it. But, the actual values aren't being sorted like I'd expect. In this case I worked around it with sort sort_fail.csv --field-separator='|' -k1,1 But, I feel like that shouldn't be necessary. Why is sort not sorting?
sort is locale aware, so depending on your LC_COLLATE setting (which is inherited from LANG) you may get different results: $ LANG=C sort sort_fail.csv 241|212|20810378 241|213|20810376 24|121|2810172 column_a|column_b|column_c $ LANG=en_US sort sort_fail.csv 241|212|20810378 24|121|2810172 241|213|20810376 column_a|column_b|column_c This can cause problems in scripts, because you may not be aware of what the calling locale is set to, and so may get different results. It's not uncommon for scripts to force the setting needed e.g. $ grep 'LC.*sort' /bin/precat LC_COLLATE=C sort -u | prezip-bin -z "$cmd: $2" Now what's interesting, here, is the | character looks odd. But that's because the default rule for en_US, which derives from ISO, says $ grep 007C /usr/share/i18n/locales/iso14651_t1_common <U007C> IGNORE;IGNORE;IGNORE;<j> # 142 | Which means the | character is ignored and the sort order would be as if the character doesn't exist.. $ tr -d '|' < sort_fail.csv | LANG=C sort 24121220810378 241212810172 24121320810376 column_acolumn_bcolumn_c And that matches the "unexpected" sorting you are seeing. The work arounds are to use -n (to force numeric sorts), or to use the field separator (as you did) or to use the C locale.
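For this particular file, combining the field-separator workaround from the question with a forced locale makes the result independent of the caller's environment, for example:
LC_ALL=C sort -t '|' -k1,1 sort_fail.csv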
{ "source": [ "https://unix.stackexchange.com/questions/487458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/325417/" ] }
487,955
I'm looking for something like command1 ; command2 , i.e. how to run command2 after command1 , but I'd like to schedule the execution of command2 while command1 is already running. It can be solved by just typing command2 and confirming with Enter , provided that command1 is not consuming standard input and that command1 doesn't produce too much text on output, making it impractical to type (typed characters are blended with the command1 output).
Generally what I do is: Ctrl + Z fg && command2 Ctrl + Z to pause it and let you type more in the shell. Optionally bg , to resume command1 in the background while you type out command2. fg && command2 to resume command1 in the foreground and queue up command2 afterwards if command1 succeeds. You can of course substitute ; or || for the && if you so desire.
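If you would rather let command1 keep running in the background while you wait for it, a wait-based variant does the same thing (a sketch):
# after Ctrl+Z:
bg
wait %1 && command2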
{ "source": [ "https://unix.stackexchange.com/questions/487955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46334/" ] }
489,421
I'm using these commands: du -sh --apparent-size ./* du -sh ./* both reporting: 4.0K ./Lightroom_catalog_from_win_backup 432M ./Lightroom catalog - wine_backup while those directories contain: $ll ./"Lightroom catalog - wine_backup" total 432M -rwxrwx--- 1 gigi gigi 432M Mar 18 2018 Lightroom 5 Catalog Linux.lrcat -rwxrwx--- 1 gigi gigi 227 Nov 21 2015 zbackup.bat $ll ./Lightroom_catalog_from_win_backup total 396M -rwxrwx--- 3 gigi gigi 396M Dec 17 09:35 Lightroom 5 Catalog Linux.lrcat -rwxrwx--- 3 gigi gigi 227 Dec 17 09:35 zbackup.bat Why du is reporting 4.0K for ./Lightroom_catalog_from_win_backup and how could I make it to report correctly? PS: other system information: $stat --file-system $HOME File: "/home/gigi" ID: 5b052c62a5a527bb Namelen: 255 Type: ext2/ext3 Block size: 4096 Fundamental block size: 4096 Blocks: Total: 720651086 Free: 155672577 Available: 119098665 Inodes: Total: 183050240 Free: 178896289 $lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 16.04.5 LTS Release: 16.04 Codename: xenial
I can reproduce if the files are hard links: ~ mkdir foo bar ~ dd if=/dev/urandom of=bar/file1 count=1k bs=1k 1024+0 records in 1024+0 records out 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00985276 s, 106 MB/s ~ ln bar/file1 foo/file1 ~ du -sh --apparent-size foo bar 1.1M foo 4.0K bar This is expected behaviour. From the GNU du docs : If two or more hard links point to the same file, only one of the hard links is counted. The file argument order affects which links are counted, and changing the argument order may change the numbers and entries that du outputs. If you really need repeated sizes of hard links, try the -l option: ‘ -l ’ ‘ --count-links ’ Count the size of all files, even if they have appeared already (as a hard link). ~ du -sh --apparent-size foo bar -l 1.1M foo 1.1M bar
{ "source": [ "https://unix.stackexchange.com/questions/489421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227120/" ] }
489,431
I have the following text: Name= Garen Class= 9C School= US Name= Lulu Class= 4A Name= Kata Class= 10D School= UK I got the awk cmd below: awk '$Name ~/Name/ {printf $0;} $Class ~/Class/ {printf $0;} $School ~/School/ {print $0;} ' file.txt But it outputs in a new line. Like this: Name= Garen Class= 9C School= US Name= Lulu Class= 4A Name= Kata Class= 10D School= UK I want it to output like this : Name= Garen ,Class= 9C ,School= US Name= Lulu , Class= 4A , Name= Kata ,Class= 10D ,School= UK if it falls into a situation : Name= Garen Class= 9C Last Name= Wilson School= US Name= Lulu Class= 4A Last Name= Miller Name= Kata Class= 10D School= UK Last Name= Thomas and print: Name= Garen,Class= 9C,School= US Name= Lulu,Class= 4A Name= Kata,Class= 10D,School= UK
{ "source": [ "https://unix.stackexchange.com/questions/489431", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/327226/" ] }
489,445
How to unzip a file (ex: foo.zip ) to a folder with the same name ( foo/ )? Basically, I want to create an alias of unzip that unzips files into a folder with the same name (instead of the current folder). That's how Mac's unzip utility works and I want to do the same in CLI.
I use unar for this; by default, if an archive contains more than one top-level file or directory, it creates a directory to store the extracted contents, named after the archive in the way you describe: unar foo.zip You can force the creation of a directory in all cases with the -d option: unar -d foo.zip Alternatively, a function can do this with unzip : unzd() { if [[ $# != 1 ]]; then echo I need a single argument, the name of the archive to extract; return 1; fi target="${1%.zip}" unzip "$1" -d "${target##*/}" } The target=${1%.zip} line removes the .zip extension, with no regard for anything else (so foo.zip becomes foo , and ~/foo.zip becomes ~/foo ). The ${target##*/} parameter expansion removes anything up to the last / , so ~/foo becomes foo . This means that the function extracts any .zip file to a directory named after it, in the current directory. Use unzip $1 -d "${target}" if you want to extract the archive to a directory alongside it instead. unar is available for macOS (along with its GUI application, The Unarchiver ), Windows, and Linux; it is packaged in many distributions, e.g. unar in Debian and derivatives, Fedora and derivatives, community/unarchiver in Arch Linux.
{ "source": [ "https://unix.stackexchange.com/questions/489445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323983/" ] }
489,453
We want to replace the script file extension, so we did the following: new_name=` echo run_fix.bash | sed 's/[.].*$//' ` new_file_extension=".in_hold.txt" new_name=$new_name$new_file_extension echo $new_name run_fix.in_hold.txt but I feel my approach is not so elegant. Note: the script extension could be bash, perl or python, and the target extension could be anything after the "."; we want a generic replacement. I am using Red Hat 7.2
old_name=run_fix.bash new_name=${old_name%.bash}.in_hold.txt printf 'New name: %s\n' "$new_name" This would remove the filename suffix .bash from the value of $old_name and add .in_hold.txt to the result of that. The whole thing would be assigned to the variable new_name . The expansion ${variable%pattern} to remove the shortest suffix string matching the pattern pattern from the value of $variable is a standard parameter expansion . To replace any filename suffix (i.e. anything after the last dot in the filename): new_name=${old_name%.*}.new_suffix The .* pattern would match the last dot and anything after it (this would be removed). Had you used %% instead of % , the longest substring that matched the pattern would have been removed (in this case, you would have removed everything after the first dot in the string). If the string does not contain any dots, it remains unaltered.
{ "source": [ "https://unix.stackexchange.com/questions/489453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
489,459
Suppose I am searching for strings that look like the following: anything1.knownKeyWord anything2.knownKeyWord anything3[1].knownKeyWord How can I write a generic pattern for grep such that it matches all 3 strings? I have done it like this ^.*\w+\d[\[]?[0]?[\]]?\.knownKeyWord.*$ But I think the indexing part, e.g. [1] , is not written in a good way; how can I make it so that even if I replace [1] with [2342jdsjf] , I don't have to change the syntax much?
{ "source": [ "https://unix.stackexchange.com/questions/489459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/327259/" ] }
489,628
From the Shell Command Language page of the POSIX specification: If the first line of a file of shell commands starts with the characters "#!", the results are unspecified. Why is the behavior of #! unspecified by POSIX? I find it baffling that something so portable and widely used would have an unspecified behavior.
I think primarily because: the behaviour varies greatly between implementation. See https://www.in-ulm.de/~mascheck/various/shebang/ for all the details. It could however now specify a minimum subset of most Unix-like implementations: like #! *[^ ]+( +[^ ]+)?\n (with only characters from the portable filename character set in those one or two words) where the first word is an absolute path to a native executable, the thing is not too long and behaviour unspecified if the executable is setuid/setgid, and implementation defined whether the interpreter path or the script path is passed as argv[0] to the interpreter. POSIX doesn't specify the path of executables anyway. Several systems have pre-POSIX utilities in /bin / /usr/bin and have the POSIX utilities somewhere else (like on Solaris 10 where /bin/sh is a Bourne shell and the POSIX one is in /usr/xpg4/bin ; Solaris 11 replaced it with ksh93 which is more POSIX compliant, but most of the other tools in /bin are still ancient non-POSIX ones). Some systems are not POSIX ones but have a POSIX mode/emulation. All POSIX requires is that there be a documented environment in which a system behaves POSIXly. See Windows+Cygwin for instance. Actually, with Windows+Cygwin, the she-bang is honoured when a script is invoked by a cygwin application, but not by a native Windows application. So even if POSIX specified the shebang mechanism it could not be used to write POSIX sh / sed / awk ... scripts (also note that the shebang mechanism cannot be used to write reliable sed / awk script as it doesn't allow passing an end-of-option marker). Now the fact that it's unspecified doesn't mean you can't use it (well, it says you shouldn't have the first line start with #! if you expect it to be only a regular comment and not a she-bang), but that POSIX gives you no guarantee if you do. In my experience, using shebangs gives you more guarantee of portability than using POSIX's way of writing shell scripts: leave off the she-bang, write the script in POSIX sh syntax and hope that whatever invokes the script invokes a POSIX compliant sh on it, which is fine if you know the script will be invoked in the right environment by the right tool but not otherwise. You may have to do things like: #! /bin/sh - if : ^ false; then : fine, POSIX system by default else # cover Solaris 10 or older. ": ^ false" returns false # in the Bourne shell as ^ is an alias for | there for # compatibility with the Thompson shell. PATH=`getconf PATH`:$PATH; export PATH exec /usr/xpg4/bin/sh - "$0" ${1+"$@"} fi # rest of script If you want to be portable to Windows+Cygwin, you may have to name your file with a .bat or .ps1 extension and use some similar trick for cmd.exe or powershell.exe to invoke the cygwin sh on the same file.
{ "source": [ "https://unix.stackexchange.com/questions/489628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273175/" ] }
489,647
I have an external hard drive connected to my TP-Link router and shared using USB Share. I am unable to connect to this share from Ubuntu; I can only see the shared volumes but can't gain access. I can connect to it from Windows and even from my Android device using X-plore File Manager. What can I do? My router is old and it supports only SMBv1 shares.
{ "source": [ "https://unix.stackexchange.com/questions/489647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/327108/" ] }
489,661
In CUPS, you can set the system default destination with: lpadmin -d <printer_name> or with: lpoptions -d <printer_name> However, I wasn't able to find a way to remove the default destination (so that there's none in the system). Even worse, if you remove a printer and then re-add it under the same name, it becomes the default automatically! Any ideas how to de-default a printer?
{ "source": [ "https://unix.stackexchange.com/questions/489661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164247/" ] }
489,775
Using common bash tools as part of a shell script, I want to repeatedly insert a newline char ('\n') into a long string at intervals of every N chars. For example, given this string, how would I insert a newline char every 20 chars? head -n 50 /dev/urandom | tr -dc A-Za-z0-9 Example of the results I am trying to achieve: ZL1WEV72TTO0S83LP2I2 MTQ8DEIU3GSSYJOI9CFE 6GEPWUPCBBHLWNA4M28D P2DHDI1L2JQIZJL0ACFV UDYEK7HN7HQY4E2U6VFC RH68ZZJGMSSC5YLHO0KZ 94LMELDIN1BAXQKTNSMH 0DXLM7B5966UEFGZENLZ 4917Y741L2WRTG5ACFGQ GRVDVT3CYOLYKNT2ZYUJ EAVN1EY4O161VTW1P3OY Q17T24S7S9BDG1RMKGBX WOZSI4D35U81P68NF5SB HH7AOYHV2TWQP27A40QC QW5N4JDK5001EAQXF41N FKH3Q5GOQZ54HZG2FFZS Q89KGMQZ46YBW3GVROYH AIBOU8NFM39RYP1XBLQM YLG8SSIW6J6XG6UJEKXO A use-case is to quickly make a set of random passwords or ID's of a fixed length. The way I did the above example is: for i in {1..30}; do head /dev/random | tr -dc A-Z0-9 | head -c 20 ; echo ''; done However, for learning purposes, I want to do it a different way. I want to start with an arbitrarily long string and insert newlines, thus breaking one string into multiple small strings of fixed char length.
The venerable fold command ("written by Bill Joy on June 28, 1977") can wrap lines: $ printf "foobarzot\n" | fold -w 3 foo bar zot However, there are some edge cases BUGS Traditional roff(7) output semantics, implemented both by GNU nroff and by mandoc(1), only uses a single backspace for backing up the previous character, even for double-width characters. The fold backspace semantics required by POSIX mishandles such backspace-encoded sequences, breaking lines early. The fmt(1) utility provides similar functionality and does not suffer from that problem, but isn't standardized by POSIX. so if your input has backspace characters you may need to filter or remove those $ printf "a\bc\bd\be\n" | col -b | fold -w 1 e $ printf "a\bc\bd\be\n" | tr -d "\b" | fold -w 1 a c d e
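Applied to the use case in the question (a batch of fixed-length random strings), fold lets you replace the whole loop with a single pipeline, for example:
tr -dc 'A-Z0-9' < /dev/urandom | fold -w 20 | head -n 30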
{ "source": [ "https://unix.stackexchange.com/questions/489775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268886/" ] }
490,267
I have noticed that a logoff (log out) from my X user session will kill any tmux session I have initiated, even sessions I had run with sudo tmux and similar commands. I am sure that this formerly did not happen, but some recent change has effected this behavior. How do I maintain these tmux (or screen ) sessions, even after I end my X session?
This "feature" existed in systemd before, but the systemd developers decided to change the default, enabling termination of a user's child processes upon logging out of a session. You can revert this setting in your logind.conf ( /etc/systemd/logind.conf ): KillUserProcesses=no You can also run tmux with a systemd-run wrapper like the following: systemd-run --scope --user tmux For such systems, you may just want to alias the tmux (or screen ) command: alias tmux="systemd-run --scope --user tmux"
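A sketch of applying the logind change from a shell (exact steps vary by distribution; restarting logind is normally harmless on recent systemd releases, but double-check before doing it on a remote machine):
# uncomment/set the option, then restart logind so it rereads its configuration
sudo sed -i 's/^#\?KillUserProcesses=.*/KillUserProcesses=no/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind
# if the file has no KillUserProcesses line at all, add one under the [Login] section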
{ "source": [ "https://unix.stackexchange.com/questions/490267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13308/" ] }
490,402
Can anyone explain in detail what is going on with the following? Let's imagine I am mounting a directory with the noexec option as follows: mount -o noexec /dev/mapper/fedora-data /data So to verify this I ran mount | grep data : /dev/mapper/fedora-data on /data type ext4 (rw,noexec,relatime,seclabel,data=ordered) Now within /data I'm creating a simple script called hello_world as follows: #!/bin/bash echo "Hello World" whoami So I made the script executable with chmod u+x hello_world (this will however have no effect on a file system mounted with noexec) and I tried running it: # ./hello_world -bash: ./hello_world: Permission denied However, prepending bash to the file yields: # bash hello_world Hello World root So then I created a simple hello_world.c with the following contents: #include <stdio.h> int main() { printf("Hello World\n"); return 0; } Compiled it using cc -o hello_world hello_world.c Now running: # ./hello_world -bash: ./hello_world: Permission denied So I tried to run it using /lib64/ld-linux-x86-64.so.2 hello_world The error: ./hello_world: error while loading shared libraries: ./hello_world: failed to map segment from shared object: Operation not permitted This is expected, of course, since ldd returns the following: ldd hello_world ldd: warning: you do not have execution permission for `./hello_world' not a dynamic executable On another system where the noexec mount option doesn't apply I see: ldd hello_world linux-vdso.so.1 (0x00007ffc1c127000) libc.so.6 => /lib64/libc.so.6 (0x00007facd9d5a000) /lib64/ld-linux-x86-64.so.2 (0x00007facd9f3e000) Now my question is this: why does running a bash script on a file system mounted with noexec work, but not a compiled C program? What is happening under the hood?
What's happening in both cases is the same: to execute a file directly, the execute bit needs to be set, and the filesystem can't be mounted noexec. But these things don't stop anything from reading those files. When the bash script is run as ./hello_world and the file isn't executable (either no exec permission bit, or noexec on the filesystem), the #! line isn't even checked , because the system doesn't even load the file. The script is never "executed" in the relevant sense. In the case of bash ./hello_world , well, the noexec filesystem option just plain isn't as smart as you'd like it to be. The bash command that's run is /bin/bash , and /bin isn't on a filesystem with noexec . So, it runs no problem. The system doesn't care that bash (or python or perl or whatever) is an interpreter. It just runs the command you gave ( /bin/bash ) with an argument which happens to be a file. In the case of bash or another shell, that file contains a list of commands to execute, but by then we're "past" anything that's going to check file execute bits. That check isn't responsible for what happens later. Consider this case: $ cat hello_world | /bin/bash … or for those who do not like Pointless Use of Cat: $ /bin/bash < hello_world The "shebang" #! sequence at the beginning of a file is just some nice magic for doing effectively the same thing when you try to execute the file as a command. You might find this LWN.net article helpful: How programs get run .
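You can reproduce both behaviours side by side on a scratch mount (sketch; run as root, tmpfs chosen purely for convenience):
mount -t tmpfs -o noexec tmpfs /mnt
printf '#!/bin/bash\necho "Hello World"\n' > /mnt/hello_world
chmod +x /mnt/hello_world
/mnt/hello_world        # Permission denied: the kernel refuses the execve()
bash /mnt/hello_world   # Hello World: only /bin/bash is executed, the file is merely read
umount /mnt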
{ "source": [ "https://unix.stackexchange.com/questions/490402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43380/" ] }
490,524
awk 'processing_script_here' my=file.txt seems to stop and wait indefinitely... What's going on here, and how do I make it work?
As Chris says , arguments of the form variablename=anything are treated as variable assignment (that are performed at the time the arguments are processed as opposed to the (newer) -v var=value ones which are performed before the BEGIN statements) instead of input file names. That can be useful in things like: awk '{print $1}' FS=/ RS='\n' file1 FS='\n' RS= file2 Where you can specify a different FS / RS per file. It's also commonly used in: awk '!file1_processed{a[$0]; next}; {...}' file1 file1_processed=1 file2 Which is a safer version of: awk 'NR==FNR{a[$0]; next}; {...}' file1 file2 (which doesn't work if file1 is empty) But that gets in the way when you have files whose name contains = characters. Now, that's only a problem when what's left of the first = is a valid awk variable name. What constitutes a valid variable name in awk is stricter than in sh . POSIX requires it to be something like: [_a-zA-Z][_a-zA-Z0-9]* With only characters of the portable character set. However, the /usr/xpg4/bin/awk of Solaris 11 at least is not compliant in that regard and allows any alphabetical characters in the locale in variable names, not just a-zA-Z. So an argument like x+y=foo or =bar or ./foo=bar is still treated as an input file name and not an assignment as what's left of the first = is not a valid variable name. An argument like Stéphane=Chazelas.txt may or may not, depending on the awk implementation and locale. That's why with awk, it's recommended to use: awk '...' ./*.txt instead of awk '...' *.txt for instance to avoid the problem if you can't guarantee the name of the txt files won't contain = characters. Also, beware that an argument like -vfoo=bar.txt may be treated as an option if you use: awk -f file.awk -vfoo=bar.txt (also applies to awk '{code}' -vfoo=bar.txt with the awk from busybox versions prior to 1.28.0, see corresponding bug report ). Again, using ./*.txt works around that (using a ./ prefix also helps with a file called - which otherwise awk understands as meaning standard input instead). That's also why #! /usr/bin/awk -f shebangs don't really work. While the var=value ones can be worked around by fixing the ARGV values (add a ./ prefix) in a BEGIN statement: #! /usr/bin/awk -f BEGIN { for (i = 1; i < ARGC; i++) if (ARGV[i] ~ /^[_[:alpha:]][_[:alnum:]]*=/) ARGV[i] = "./" ARGV[i] } # rest of awk script That won't help with the option ones as those ones are seen by awk and not the awk script. One potential cosmetic issue with using that ./ prefix is it ends up in FILENAME , but you can always use substr(FILENAME, 3) to strip it if you don't want it. The GNU implementation of awk fixes all those issues with its -E option. After -E , gawk expects only the path of the awk script (where - still means stdin) and then a list of input file paths only (and there, not even - is treated specially). It's specially designed for: #! /usr/bin/gawk -E shebangs where the list of arguments are always input files (note that you're still free to edit that ARGV list in a BEGIN statement). You can also use it as: gawk -e '...awk code here...' -E /dev/null *.txt We use -E with an empty script ( /dev/null ) just to make sure those *.txt afterwards are always treated as input files, even if they contain = characters.
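A minimal illustration of the difference, using a file named like the one in the question:
printf 'hello\n' > 'my=file.txt'
awk '{print FILENAME, $0}' my=file.txt     # taken as an assignment: awk sits waiting on stdin
awk '{print FILENAME, $0}' ./my=file.txt   # taken as a file: prints "./my=file.txt hello"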
{ "source": [ "https://unix.stackexchange.com/questions/490524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22142/" ] }
490,564
I have a bunch of source files with a very usual structure: some comments in the header, some (optional) imports, and then source code, e.g.: // // AppDelegate.swift // settings // // Created by Mikhail Igonin on 14/06/2018. // Copyright © 2018 Mikhail Igonin. All rights reserved. // import UIKit import Fabric import Crashlytics @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate { //Other comment } I need to add another import after the comments and the import block. So the regex to match the beginning of this file should look like this: (([\n\s]*)((\/\/.*\n)|(import.*\n)))+ And it looks like this regex is OK: https://www.regextester.com/index.php?fam=106706 Now I'm trying to insert the new import with awk and gensub : gawk -v RS='^$' '{$0=gensub(/(([\n\s]*)((\/\/.*\n)|(import.*\n)))+/,"\\1\\2\nimport NEW_IMPORT\n\\2",1)}1' test.swift However it doesn't work and my regex matches the whole file: // // AppDelegate.swift // settings // // Created by Mikhail Igonin on 14/06/2018. // Copyright © 2018 Mikhail Igonin. All rights reserved. // import UIKit import Fabric import Crashlytics @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate { } import NEW_IMPORT What's my mistake? It looks like .* works incorrectly and matches the whole file. I've tried to mark it as lazy ( .*? ) but without success. PS Solutions without awk or gensub would also be useful.
{ "source": [ "https://unix.stackexchange.com/questions/490564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328155/" ] }
490,565
I've been using CentOS 7 and its kernel version is 3.10. To check the kernel version, I typed 'uname -r' and the command showed 3.10.0-957.1.3.el7.x86_64 As far as I know, the MemAvailable metric was introduced in Linux kernel version 3.14. But when I looked at /proc/meminfo it showed the MemAvailable metric: MemTotal: 3880620 kB MemFree: 3440980 kB MemAvailable: 3473820 kB Why does my Linux show the MemAvailable metric when my kernel is below 3.14?
{ "source": [ "https://unix.stackexchange.com/questions/490565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328156/" ] }
491,161
Does an X client necessarily need a window manager to work? Can an X client work with only the X server? If an X client doesn't have a window, does it need a window manager to work? If an X client can work without a window manager, does the X client necessarily have no window? Thanks.
No. Well-written apps don't need a window manager. But some "modern" broken apps will not work properly without a window manager (e.g. Firefox, whose address bar suggestions won't drop down [1]). Many other subpar apps not only assume a window manager, but, to add insult to injury, a click-to-focus window manager. For instance, it used to be that any Java app would simply steal the focus on startup. If you want to test, install Xephyr (a "nested" X11 server), run it with Xephyr :1 , and then start your apps with DISPLAY=:1 in their environment. [1] The "awesome bar" of Firefox won't open its suggestions pane when typed into, or when the history button is clicked, unless there's a window manager running. The auto-hide menu won't work either.
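A concrete version of that test (assuming the Xephyr and xterm packages are installed; the display number and geometry are arbitrary):
Xephyr :1 -screen 1024x768 &
DISPLAY=:1 xterm &      # a well-behaved client, fine with no window manager
DISPLAY=:1 firefox &    # compare how a WM-assuming application misbehaves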
{ "source": [ "https://unix.stackexchange.com/questions/491161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
492,044
Usually when I have programs that are doing a full disk scan and going over all files in the system they take a very long time to run. Why does updatedb run so fast in comparison?
The answer depends on the version of locate you’re using, but there’s a fair chance it’s mlocate , whose updatedb runs quickly by avoiding doing full disk scans: mlocate is a locate/updatedb implementation. The 'm' stands for "merging": updatedb reuses the existing database to avoid rereading most of the file system, which makes updatedb faster and does not trash the system caches as much. (The database stores each directory’s timestamp, ctime or mtime , whichever is newer.) Like most implementations of updatedb , mlocate ’s will also skip file systems and paths which it is configured to ignore. By default there are none in mlocate ’s case, but distributions typically provide a basic updatedb.conf which ignores networked file systems, virtual file systems etc. (see Debian’s configuration file for example; this is standard practice in Debian, so GNU’s updatedb is configured similarly ).
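For illustration, a typical distribution-supplied /etc/updatedb.conf looks something like this (values are examples only; your distribution's defaults will differ):
PRUNE_BIND_MOUNTS="yes"
PRUNEFS="nfs nfs4 cifs smbfs sshfs fuse.sshfs proc sysfs tmpfs"
PRUNEPATHS="/tmp /var/spool /media /mnt"
PRUNENAMES=".git .hg .svn"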
{ "source": [ "https://unix.stackexchange.com/questions/492044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
492,500
I have a file like this: 2018.01.02;1.5;comment 1 2018.01.04;2.75;comment 2 2018.01.07;5.25;comment 4 2018.01.09;1.25;comment 7 I want to replace all dots . in the second column with a comma , as I would with sed 's/\./\,/g' file how can I use sed or preferably awk to only apply this for the second column, so my output would look like this: 2018.01.02;1,5;comment 1 2018.01.04;2,75;comment 2 2018.01.07;5,25;comment 4 2018.01.09;1,25;comment 7
$ awk 'BEGIN{FS=OFS=";"} {gsub(/\./, ",", $2)} 1' ip.txt 2018.01.02;1,5;comment 1 2018.01.04;2,75;comment 2 2018.01.07;5,25;comment 4 2018.01.09;1,25;comment 7 BEGIN{} this block of code will be executed before processing any input line FS=OFS=";" set input and output field separator as ; gsub(/\./, ",", $2) for each input line, replace all the . in 2nd field with , 1 is an awk idiom to print contents of $0 (which contains the input record)
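Since the question also mentions sed, the same restriction to the second ;-separated field can be done there too (GNU sed shown; the loop keeps converting the last remaining dot inside field two until none is left):
sed -E ':a; s/^(([^;]*;)[^;]*)\./\1,/; ta' file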
{ "source": [ "https://unix.stackexchange.com/questions/492500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
492,773
Earlier I was using fsarchiver to create compressed partition image. Due to some weird behavior I am choosing to replace it with dd . However, I like how fsarchiver compressed with zstd . So, I studied, How to make a disk image and restore from it later? Using DD for disk cloning Making full disk image with DD compressing dd backup on the fly How do you monitor the progress of dd? What these essentially say is, I have to use the following command to backup dd if=/dev/sda2 status=progress | gzip -c > /media/mint/Data/_Fsarchiver/MintV1.img.gz And the following command to restore gunzip -c /media/mint/Data/_Fsarchiver/MintV1.img.gz | dd of=/dev/sda2 status=progress Now I want to replace gzip -c & gunzip -c with zstd & zstd -d The commands I came up with are To compress sudo dd if=/dev/sda2 status=progress | zstd -16vT6 > /media/mint/Data/_Fsarchiver/MintV1.zst To decompress zstd -vdcfT6 /media/mint/Data/_Fsarchiver/MintV1.zst | dd of=/dev/sda2 status=progress Is it safe to try these commands or am I doing something wrong?
Using dd like that (without any options) will make your life miserable. Just cut it out entirely. Or at the very least increase its block size and tell it not to object to short reads. Without dd , first run sudo -s to get a root shell: gzip </dev/sda2 >/media/mint/Data/_Fsarchiver/MintV1.img.gz gunzip </media/mint/Data/_Fsarchiver/MintV1.img.gz >/dev/sda2 Your zstd commands look entirely plausible, but just omit dd and read/write the device directly as root. (My version doesn't understand your T6 so I've omitted that here.) zstd -16v </dev/sda2 >/media/mint/Data/_Fsarchiver/MintV1.zst zstdcat -v /media/mint/Data/_Fsarchiver/MintV1.zst >/dev/sda2 With dd , either prefix the dd with sudo or use sudo -s to get a root shell: dd bs=1M iflag=fullblock if=/dev/sda2 status=progress | gzip >/media/mint/Data/_Fsarchiver/MintV1.img.gz gzcat /media/mint/Data/_Fsarchiver/MintV1.img.gz | dd bs=1M iflag=fullblock of=/dev/sda2 status=progress dd bs=1M iflag=fullblock if=/dev/sda2 status=progress | zstd -16v >/media/mint/Data/_Fsarchiver/MintV1.img.zst zstdcat /media/mint/Data/_Fsarchiver/MintV1.img.zst | dd bs=1M iflag=fullblock of=/dev/sda2 status=progress With pv instead of dd . Use sudo -s beforehand to get a root shell: pv /dev/sda2 | gzip >/media/mint/Data/_Fsarchiver/MintV1.img.gz gzcat /media/mint/Data/_Fsarchiver/MintV1.img.gz | pv >/dev/sda2 pv /dev/sda2 | zstd -16 >/media/mint/Data/_Fsarchiver/MintV1.img.zst zstdcat /media/mint/Data/_Fsarchiver/MintV1.img.zst | pv >/dev/sda2 Also see Syntax When Combining dd and pv As always, to read with elevated permissions change command <source to sudo cat source | command , and to write with elevated permissions replace command >target with command | sudo tee target >/dev/null .
{ "source": [ "https://unix.stackexchange.com/questions/492773", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206574/" ] }
492,966
According to Wikipedia , GRUB was released in 1995. By that point Linux and xBSD existed for several years. I know early Unix versions were tied to hardware in the 70s and 80s, but Linux and xBSD were free to distribute and install. Which begs the question how would you boot Linux back then? Were distributions shipping with their own implementations of bootloaders?
The first Linux distribution I used back in the 90s ( Slackware 3.0 IIRC) used LILO as a bootloader. And many distros used LILO for years even when GRUB was becoming the "default" bootloader. Moreover, in the early years of Linux it was common to boot Linux from another OS (i.e. DOS or Windows) instead of relying on a bootloader/dual booting. For example there was loadlin . Don't forget Syslinux , which is a simpler boot loader often used for USB self-bootable installation/recovery distros. Or Isolinux (from the same project) used by many "Live" distros. Keep in mind that today GRUB can be used to load many operating systems, while LILO was more limited, and specifically targeted at Linux (i.e. LInux LOader), with some support for dual booting to Windows. GRUB is very useful for dual/multi booting because of its many configurable options, scripting capabilities, etc... If you just want a single OS on your machine "any" (i.e. whichever bootloader is the default for your Linux/BSD distribution) should be enough.
{ "source": [ "https://unix.stackexchange.com/questions/492966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85039/" ] }
493,729
I know that Bash and Zsh support local variables, but there are systems only have POSIX-compatible shells. And local is undefined in POSIX shells. So I want to ask which shells support local keyword for defining local variables? Edit : About shells I mean the default /bin/sh shell.
It's not as simple as supporting local or not. There is a lot of variation on the syntax and how it's done between shells that have one form or other of local scope. That's why it's very hard to come up with a standard that agrees with all. See http://austingroupbugs.net/bug_view_page.php?bug_id=767 for the POSIX effort on that front. local scope was added first in ksh in the early 80s. The syntax to declare a local variable in a function was with typeset : function f { typeset var=value set -o noglob # also local to the function ... } (function support was added to the Bourne shell later, but with a different syntax ( f() command ) and ksh added support for that one as well later; the Bourne shell never had local scope (except of course via subshells)) The local builtin AFAIK was added first to the Almquist shell (used in BSDs, dash, busybox sh) in 1989, but works significantly differently from ksh 's typeset . ash derivatives don't support typeset as an alias to local , but you can always define one by hand. bash and zsh added typeset aliased to local in 1989 and 1991 respectively. ksh88 added local as an undocumented alias to typeset circa 1990 and pdksh and its derivatives in 1994. posh (based on pdksh ) removed typeset (for strict compliance to the Debian Policy that requires local , but not typeset ). POSIX initially objected to specifying typeset on the ground that it was dynamic scoping. So ksh93 (a rewrite of ksh in 1993 by David Korn) switched to static scoping instead. Also in ksh93, as opposed to ksh88, local scoping is only done for functions declared with the ksh syntax ( function f {...} ), not the Bourne syntax ( f() {...} ) and the local alias was removed. However the ksh93v- beta and final version from AT&T can be compiled with an experimental "bash" mode (actually enabled by default) that does dynamic scoping (in bother forms of functions, including with local and typeset ) when ksh93 is invoked as bash . local differs from typeset in that case in that it can only be invoked from within a function. That bash mode will be disabled by default in ksh2020 though the local / declare aliases to typeset will be retained even when the bash mode is not compiled in (though still with static scoping). yash (written much later), has typeset (à la ksh88), but has only had local as an alias to it since version 2.48 (December 2018). @Schily (who sadly passed away in 2021 ) used to maintain a Bourne shell descendant which has been recently made mostly POSIX compliant, called bosh that supports local scope since version 2016-07-06 (with local , similar to ash ). So the Bourne-like shells that have some form of local scope for variables today are: ksh, all implementations and their derivatives (ksh88, ksh93, pdksh and derivatives like posh, mksh, OpenBSD sh). ash and all its derivatives (NetBSD sh, FreeBSD sh, dash, busybox sh) bash zsh yash bosh As far as the sh of different systems go, note that there are systems where the POSIX sh is in /bin (most), and others where it's not (like Solaris where it's in /usr/xpg4/bin ). For the sh implementation on various systems we have: ksh88: most SysV-derived commercial Unices (AIX, HP/UX, Solaris¹...) bash: most GNU/Linux systems, Cygwin, macOS ash: by default on Debian and most derivatives (including Ubuntu, Linux/Mint) though can be changed by the admin to bash or mksh. NetBSD, FreeBSD and some of their derivatives (not macOS). 
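If a script has to run on Bourne-like shells that may lack both local and typeset, the portable fallback alluded to above is to give the function a subshell body, at the cost of a fork and of not being able to modify the caller's variables:
f() (
  var='only visible here'   # assignments and option changes stay in the subshell
  set -f
  printf '%s\n' "$var"
)
f
printf '%s\n' "${var-var is unset outside f}"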
busybox sh: many if not most embedded Linux systems pdksh or derivatives: OpenBSD, MirBSD, Android Now, where they differ: typeset (ksh, pdksh, bash, zsh, yash) vs local (ksh88, pdksh, bash, zsh, ash, yash 2.48+). the list of supported options. static (ksh93, in function f {...} function), vs dynamic scoping (all other shells). For instance, whether function f { typeset v=1; g; echo "$v"; }; function g { v=2; }; f outputs 1 or 2 . See also how the export attribute affects scoping in ksh93 . whether local / typeset just makes the variable local ( ash , bosh ), or creates a new instance of the variable (other shells). For instance, whether v=1; f() { local v; echo "${v:-empty}"; }; f outputs 1 or empty (see also the localvar_inherit option in bash 5.0 and above). with those that create a new variable, whether the new one inherits the attributes (like export ) and/or type and which ones from the variable in the parent scope. For instance, whether export V=1; f() { local V=2; printenv V; }; f prints 1 , 2 or nothing. whether that new variable has an initial value (empty, 0, empty list, depending on type, zsh ) or is initially unset. whether unset V on a variable in a local scope leaves the variable unset , or just peels one level of scoping ( mksh , yash , bash under some circumstances). For instance, whether v=1; f() { local v=2; unset v; echo "$v"; } outputs 1 or nothing (see also the localvar_unset option in bash 5.0 and above) like for export , whether it's a keyword or only a mere builtin or both and under what condition it's considered as a keyword. like for export , whether the arguments are parsed as normal command arguments or as assignments (and under what condition). whether you can declare local a variable that was readonly in the parent scope. the interactions with v=value myfunction where myfunction itself declares v as local or not. That's the ones I'm thinking of just now. Check the austin group bug above for more details. As far as local scoping for shell options (as opposed to variables ), shells supporting it are: ksh88 (with both function definition syntax): done by default, I'm not aware of any way to disable it. ash (since 1989): with local - . It makes the $- parameter (which stores the list of options) local. ksh93 : now only done for function f {...} functions. zsh (since 1995). With setopt localoptions . Also with emulate -L for the emulation mode (and its set of options) to be made local to the function. bash (since 2016) with local - like in ash , but only for the options managed by set , not the ones managed by shopt . ¹ the POSIX sh on Solaris is /usr/xpg4/bin/sh (though it has many conformance bugs including those options local to functions). /bin/sh up to Solaris 10 was the Bourne shell (so no local scope), and since Solaris 11 is ksh93
{ "source": [ "https://unix.stackexchange.com/questions/493729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178265/" ] }
493,897
I made a recording with ffmpeg -f alsa -ac 2 -i plughw:0,0 /tmp/audio.mp4 I then moved /tmp/audio.mp4 to another directory ( /root/audio.mp4 ) without stopping ffmpeg leading to a broken .mp4 file: ffplay /root/audio.mp4 [...] [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f3524000b80] moov atom not found audio.mp4: Invalid data found when processing input How to recover and read my .mp4 file?
You can try and use Untrunc to fix the file. Restore a damaged (truncated) mp4, m4v, mov, 3gp video. Provided you have a similar not broken video. You may need to compile it from source, but another option is to use a Docker container, binding the folder with the file into the container, and fix it that way. You can use the included Dockerfile to build and execute the package as a container git clone https://github.com/ponchio/untrunc.git cd untrunc docker build -t untrunc . docker run -v ~/Desktop/:/files untrunc /files/filea /files/fileb
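Once built (or inside the container), untrunc expects the healthy reference file first and the damaged one second; the repaired result is typically written next to the broken file with a _fixed suffix (the exact name can vary between versions):
# reference.mp4: an intact recording made with the same encoder/settings
./untrunc reference.mp4 /root/audio.mp4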
{ "source": [ "https://unix.stackexchange.com/questions/493897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
494,637
So I less my file: less myFile.log Then I try to search for a value: /70.5 I've since learned less uses regex, so . is a wildcard. I've tried to escape it with no success.
You can turn off regex mode by hitting Ctrl + R before typing the pattern: ^R Don't interpret regular expression metacharacters; that is, do a simple textual comparison.
{ "source": [ "https://unix.stackexchange.com/questions/494637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218434/" ] }
495,161
I set some environment variables in a terminal, and then run my script. How can I pull in the variables in the script? I need to know their values. Simply referring to them as $MY_VAR1 doesn't work; it is empty.
If the variables are truly environment variables (i.e., they've been exported with export ) in the environment that invokes your script, then they would be available in your script. That they aren't suggests that you haven't exported them, or that you run the script from an environment where they simply don't exist even as shell variables. Example: $ cat script.sh #!/bin/sh echo "$hello" $ sh script.sh (one empty line of output since hello doesn't exist anywhere) $ hello="hi there" $ sh script.sh (still only an empty line as output as hello is only a shell variable, not an environment variable) $ export hello $ sh script.sh hi there Alternatively, to set the environment variable just for this script and not in the calling environment: $ hello="sorry, I'm busy" sh script.sh sorry, I'm busy $ env hello="this works too" sh script.sh this works too
{ "source": [ "https://unix.stackexchange.com/questions/495161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86215/" ] }
495,169
My understanding is that the kernel understands how to communicate with the different hardware in a system via specific device trees. How is it that I can download one version of Ubuntu and I am able to install this on any system where the hardware may vary? The same goes for the BeagleBone embedded boards. There is a default Debian image which can flash to any of the different type of BeagleBone boards which have different peripherals. How does it know which device tree / device tree overlay to use when the same image works for all?
{ "source": [ "https://unix.stackexchange.com/questions/495169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323971/" ] }
495,421
I am trying to mount /boot/config-4.14.90-v8 on to /usr/src/linux/.config . I've tried: rpi-4.14.y:linux Necktwi$ sudo mount -o loop,ro -t vfat /boot/config-4.14.90-v8-g6d68e517b3ec /usr/src/linux/.config mount: /usr/src/linux/.config: cannot mount /dev/loop0 read-only. notice the error cannot mount /dev/loop0 read-only . rootfs is btrfs /boot is vfat /usr/src is nfs (I mounted remote server's /usr/src ) I tried mount --bind but it failed. rpi-4.14.y:linux Necktwi$ sudo mount --bind /boot/config-4.14.90-v8-g6d68e517b3ec /usr/src/linux/.config mount: /usr/src/linux/.config: bind /boot/config-4.14.90-v8-g6d68e517b3ec failed.
If you want to mount a single file, so that the contents of that file are seen on the mount point, then what you want is a bind mount . You can accomplish that with the following command: # mount --bind /boot/config-4.14.90-v8 /usr/src/linux/.config You can use -o ro to make it read-only on the /usr/src/linux/.config path. For more details, look for bind mounts in the man page for mount(8) . Loop devices do something similar, yet different. They mount a filesystem stored into a regular file onto another directory. So if you had a vfat or ext4 etc. filesystem stored into a file, say /vol/myfs.img , you could then mount it into a directory , say /mnt/myfs , using the following command: # mount -o loop /vol/myfs.img /mnt/myfs You can pass it -t vfat etc. to force the filesystem type. Note that the -o loop is usually not needed, since mount will figure that out by you trying to mount a file and will do that for you automatically. Also, mounting a file with -o loop (or automatically detected) is a shortcut to mapping that file to a /dev/loopX device, which you can also do using losetup , and then running the mount command, such as mount /dev/loop0 /mnt/myfs . See the man page for losetup(8) for details on loop devices.
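Since the question asks for a read-only mount of the file: with bind mounts the ro flag historically required a second remount step, although recent util-linux versions accept -o bind,ro in one go:
# mount --bind /boot/config-4.14.90-v8 /usr/src/linux/.config
# mount -o remount,bind,ro /usr/src/linux/.config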
{ "source": [ "https://unix.stackexchange.com/questions/495421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27711/" ] }
495,422
I encountered a Linux command, builtin cd . What is the difference between the commands builtin cd and cd ? In fact, I did some research about the difference, but I could not find a clear and significant explanation of it.
The cd command is a built-in, so normally builtin cd will do the same thing as cd . But there is a difference if cd is redefined as a function or alias, in which case cd will call the function/alias but builtin cd will still change the directory (in other words, will keep the built-in accessible even if clobbered by a function.) For example: user:~$ cd () { echo "I won't let you change directories"; } user:~$ cd mysubdir I won't let you change directories user:~$ builtin cd mysubdir user:~/mysubdir$ unset -f cd # undefine function Or with an alias: user:~$ alias cd='echo Trying to cd to' user:~$ cd mysubdir Trying to cd to mysubdir user:~$ builtin cd mysubdir user:~/mysubdir$ unalias cd # undefine alias Using builtin is also a good way to define a cd function that does something and changes directory (since calling cd from it would just keep calling the function again in an endless recursion.) For example: user:~ $ cd () { echo "Changing directory to ${1-home}"; builtin cd "$@"; } user:~ $ cd mysubdir Changing directory to mysubdir user:~/mysubdir $ cd Changing directory to home user:~ $ unset -f cd # undefine function
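On a related note, the POSIX command utility also bypasses functions and aliases, so the same wrapper trick works in shells such as dash that do not have a builtin builtin (sketch):
user:~$ cd () { echo "Changing directory to ${1-home}"; command cd "$@"; }
user:~$ cd mysubdir
Changing directory to mysubdir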
{ "source": [ "https://unix.stackexchange.com/questions/495422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278582/" ] }
495,643
I have a UTF-8 file that contains a strange character -- visible to me just as <96> This is how it appears in vi, in gedit and under LibreOffice, and it makes a series of basic Unix tools misbehave, including: cat file makes the character disappear, and more as well; I cannot copy and paste it inside vi/vim -- it will not even find itself; grep fails to display anything as well, as if the character did not exist. The program file works fine and recognizes it as a UTF-8 file. I also know that, because of the nature of the file, it most likely came from a copy & paste from the web and the character initially represented an EMDASH. My basic questions are: Is there anything wrong with this file? How can I search for other occurrences of it inside the same file? How can I grep for other files that may contain the same problem/character? The file can be found here: file.txt
This file contains bytes C2 96 , which are the UTF-8 encoding of codepoint U+0096. That codepoint is one of the C1 control characters commonly called SPA "Start of Guarded Area" (or "Protected Area"). That isn't a useful character for any modern system, but it's unlikely to be harmful that it's there. The original source for this was likely a byte 0x96 in some single-byte 8-bit encoding that has been transcoded incorrectly somewhere along the way. Probably this was originally a Windows CP1252 en dash "–", which has byte value 96 in that encoding - most other plausible candidates have the control set at positions 80-9F - which has been translated to UTF-8 as though it was latin-1 ( ISO/IEC 8859-1 ), which is not uncommon. That would lead to the byte being interpreted as the control character and translated accordingly as you've seen. You can fix this file with the iconv tool, which is part of glibc. iconv -f utf-8 -t iso-8859-1 < mwe.txt | iconv -f cp1252 -t utf-8 produces a correct version of your minimal example for me. That works by first converting the UTF-8 to latin-1 (inverting the earlier mistranslation), and then reinterpreting that as cp1252 to convert it back to UTF-8 correctly. It does depend on what else is in the real file, however. If you have characters outside Latin-1 elsewhere it will fail because it can't encode those correctly at the first step. If you don't have iconv, or it doesn't work for the real file, you can replace the bytes directly using sed: LC_ALL=C sed -e $'s/\xc2\x96/\xe2\x80\x93/g' < mwe.txt This replaces C2 96 with the UTF-8 en dash encoding E2 80 93 . You could also replace it with e.g. a hyphen or two by changing \xe2\x80\x93 into -- . You can grep in a similar fashion. We're using LC_ALL=C to make sure we're reading the actual bytes, and not having grep interpret things: LC_ALL=C grep -R $'\xc2\x96' . will list everywhere under this directory that those bytes appear. You may want to limit it to just text files if you have mixed content around, since binary files will include any pair of bytes fairly often.
{ "source": [ "https://unix.stackexchange.com/questions/495643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332501/" ] }