source_id | question | response | metadata
---|---|---|---
489,609 | Reading one of Stephen's excellent replies, I was wondering what the differences are between "When the operating system shuts down, ..." and "When the kernel shuts down, ..." (... I'm considering that the variant which uses an external command to shut down isn't the kernel)? Is "the variant which uses an external command to shut down" "when the OS shuts down" or "when the kernel shuts down"? What does "I'm considering that the variant which uses an external command to shut down isn't the kernel" mean in other words? Does the system call reboot() reboot the OS or the kernel? Does the command reboot reboot the OS but not the kernel? Thanks. | He seems to be noting a difference between the kernel itself and the rest of the operating system: the user-space constructs built on top of the kernel. When you shut down the system with /sbin/reboot or equivalent (which in turn calls systemd or some init scripts or something), it does more than just ask the kernel to shut down. The user-space tools are the ones that do almost all the cleanup, like unmounting filesystems, sending SIGTERM to other processes to ask them to shut down, etc. If, instead, you call the reboot() system call as root directly, then none of that cleanup happens; the kernel just does what it's told to do and shuts down right away (possibly restarting or powering down the machine). The man page notes that reboot() doesn't even do the equivalent of sync(), so it doesn't even do the kinds of cleanup that could be done within the kernel (where the filesystem drivers and I/O buffers reside). As an example from the man page:

    LINUX_REBOOT_CMD_RESTART (RB_AUTOBOOT, 0x1234567). The message
    "Restarting system." is printed, and a default restart is performed
    immediately. If not preceded by a sync(2), data will be lost.

So: Does the system call reboot() reboot the OS or the kernel? It asks the kernel to shut down or reboot; the OS goes down with it. Does the command reboot reboot the OS but not the kernel? It asks user-space processes to shut down, does other cleanup, and only then asks the kernel to shut down or reboot. The reboot() system call has a mode (LINUX_REBOOT_CMD_RESTART2) that is described as "using a command string". However, it doesn't mean a user-mode command, but one internal to the kernel, and one that isn't even used on x86. Note that while we're considering the distinction between the kernel and the OS-on-top-of-the-kernel, you could in principle reboot just the OS but keep the kernel running. You'd need to clean up everything set up by userspace and kill the other userspace processes, then restart init to bring everything back up again instead of asking the kernel to reboot. That might not be very useful, though, and it would be hard to reliably reset all state left in the kernel (you'd need to manually reset all network interfaces, clean up iptables rules, reset RAID and loop devices, etc. There's a good chance of missing something that might then bite back afterwards.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
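To make the distinction above concrete, here is a minimal shell sketch of the two paths; the double --force is the documented systemctl way of requesting the raw reboot(2) call, and syncing first is necessary because reboot(2) does not imply sync(2).

    # Orderly reboot: user space (the service manager) stops services,
    # unmounts filesystems, and only then asks the kernel to restart.
    reboot

    # Raw path: sync manually, then ask for an immediate reboot(2)
    # with no user-space cleanup at all.
    sync
    systemctl reboot --force --force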
489,628 | From the Shell Command Language page of the POSIX specification: "If the first line of a file of shell commands starts with the characters "#!", the results are unspecified." Why is the behavior of #! unspecified by POSIX? I find it baffling that something so portable and widely used would have an unspecified behavior. | I think primarily because the behaviour varies greatly between implementations. See https://www.in-ulm.de/~mascheck/various/shebang/ for all the details. It could however now specify a minimum subset of most Unix-like implementations: like #! *[^ ]+( +[^ ]+)?\n (with only characters from the portable filename character set in those one or two words) where the first word is an absolute path to a native executable, the thing is not too long, behaviour is unspecified if the executable is setuid/setgid, and it is implementation-defined whether the interpreter path or the script path is passed as argv[0] to the interpreter. POSIX doesn't specify the path of executables anyway. Several systems have pre-POSIX utilities in /bin / /usr/bin and have the POSIX utilities somewhere else (like on Solaris 10, where /bin/sh is a Bourne shell and the POSIX one is in /usr/xpg4/bin; Solaris 11 replaced it with ksh93, which is more POSIX compliant, but most of the other tools in /bin are still ancient non-POSIX ones). Some systems are not POSIX ones but have a POSIX mode/emulation. All POSIX requires is that there be a documented environment in which a system behaves POSIXly. See Windows+Cygwin for instance. Actually, with Windows+Cygwin, the she-bang is honoured when a script is invoked by a cygwin application, but not by a native Windows application. So even if POSIX specified the shebang mechanism it could not be used to write POSIX sh / sed / awk ... scripts (also note that the shebang mechanism cannot be used to write reliable sed / awk scripts as it doesn't allow passing an end-of-option marker). Now the fact that it's unspecified doesn't mean you can't use it (well, it says you shouldn't have the first line start with #! if you expect it to be only a regular comment and not a she-bang), but that POSIX gives you no guarantee if you do. In my experience, using shebangs gives you more guarantee of portability than using POSIX's way of writing shell scripts: leave off the she-bang, write the script in POSIX sh syntax and hope that whatever invokes the script invokes a POSIX compliant sh on it, which is fine if you know the script will be invoked in the right environment by the right tool but not otherwise. You may have to do things like:

    #! /bin/sh -
    if : ^ false; then
      : fine, POSIX system by default
    else
      # cover Solaris 10 or older. ": ^ false" returns false
      # in the Bourne shell as ^ is an alias for | there for
      # compatibility with the Thompson shell.
      PATH=`getconf PATH`:$PATH; export PATH
      exec /usr/xpg4/bin/sh - "$0" ${1+"$@"}
    fi
    # rest of script

If you want to be portable to Windows+Cygwin, you may have to name your file with a .bat or .ps1 extension and use some similar trick for cmd.exe or powershell.exe to invoke the cygwin sh on the same file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
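As a small illustration of the implementation variance that answer links to, here is a probe sketch (the /tmp/probe path is arbitrary). On Linux the kernel passes everything after the interpreter as a single argument; some other systems split the line into words, which is exactly the kind of divergence that keeps the behaviour unspecified.

    printf '#!/bin/echo -n one two\n' > /tmp/probe
    chmod +x /tmp/probe
    /tmp/probe
    # Linux: echo receives the single argument "-n one two", so it prints
    # "-n one two /tmp/probe" literally; a system that splits the line
    # would honour -n and print "one two /tmp/probe" without a newline.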
489,647 | i have an external hard drive connect to my TP-Link router and shared using USB Share, i am unable to connect to this Share from Ubuntu, i can only see shared volumes but can't gain access. I can connect to it from Windows and even from my Android device using X-plore File Manager. What can i do ? My router is old and it supports only SMBv1 shares. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327108/"
]
} |
489,661 | In CUPS, you can set system default destination with: lpadmin -d <printer_name> or with: lpoptions -d <printer_name> However I wasn't able to find a way to remove default destination (so that there's none in the system). Even worse, if you remove a printer and then re-add it under the same name it becomes the default automatically! Any ideas how to de-default a printer? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164247/"
]
} |
489,737 | I am trying to automate a process which involves running scripts on various machines via ssh. It is vital to capture both output and the return code (for the detection of errors). Setting the exit code explicitly works as expected:

    ~$ ssh host exit 5 && echo OK || echo FAIL
    FAIL

However, if there is a shell script signalling an unclean exit, ssh always returns 0 (script simulated by string execution):

    ~$ ssh host sh -c 'exit 5' && echo OK || echo FAIL
    OK

Running the very same script on the host in an interactive shell works just fine:

    ~$ sh -c 'exit 5' && echo OK || echo FAIL
    FAIL

I am confused as to why this happens. How can I tell ssh to propagate bash's return code? I may not change the remote scripts. I am using public key authentication, the private key is unlocked – there is no need for user interaction. All systems are Ubuntu 18.04. Application versions are: OpenSSH_7.6p1 Ubuntu-4ubuntu0.1, OpenSSL 1.0.2n 7 Dec 2017; GNU bash, Version 4.4.19(1)-release (x86_64-pc-linux-gnu). Note: This question is different from these seemingly similar questions: bash shell - ssh remote script capture output and exit code? https://stackoverflow.com/questions/15390978/shell-script-ssh-command-exit-status https://stackoverflow.com/questions/36726995/exit-code-from-ssh-command https://superuser.com/questions/652729/command-executed-via-ssh-does-not-return-proper-return-code | I am able to duplicate this using the command you used, and I am able to resolve it by wrapping the remote command in quotes. Here are my test cases:

    #!/bin/bash -x
    echo 'Unquoted Test:'
    ssh evil sh -x -c exit 5 && echo OK || echo FAIL
    echo 'Quoted Test 1:'
    ssh evil sh -x -c 'exit 5' && echo OK || echo FAIL
    echo 'Quoted Test 2:'
    ssh evil 'sh -x -c "exit 5"' && echo OK || echo FAIL

Here are the results:

    bash-[540]$ bash -x test.sh
    + echo 'Unquoted Test:'
    Unquoted Test:
    + ssh evil sh -x -c exit 5
    + exit
    + echo OK
    OK
    + echo 'Quoted Test 1:'
    Quoted Test 1:
    + ssh evil sh -x -c 'exit 5'
    + exit
    + echo OK
    OK
    + echo 'Quoted Test 2:'
    Quoted Test 2:
    + ssh evil 'sh -x -c "exit 5"'
    + exit 5
    + echo FAIL
    FAIL

In the first and second tests, it seems the 5 is not being passed to exit as we would expect it to be. It just seems to be disappearing. It's not going to exit, sh isn't complaining about 5: command not found, and ssh isn't complaining about it. In the third test, exit 5 is quoted within the larger command to run on the remote host, same as in the second test. This ensures that the 5 is passed to exit, and both are executed as the -c option to sh. The difference between the second and third tests is that the whole set of commands and arguments is sent to the remote host quoted as a single command argument to ssh. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67507/"
]
} |
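A sketch of the pattern that follows from this answer — quote the whole remote command so it reaches the remote shell intact; the host name and script path here are placeholders.

    out=$(ssh host 'sh /path/to/script.sh' 2>&1)   # hypothetical host/path
    rc=$?
    printf '%s\n' "$out"
    [ "$rc" -eq 0 ] || echo "remote script failed with status $rc" >&2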
489,742 | From When the operating system shuts down, how does a service manager know that it should sends SIGTERM and SIGKILL to its services? systemd IS BOTH the init and service manager What is the difference between "init" and "service manager"? I guess they are the same thing? What is some example which is "the init" but not "service manager"? Vice versa? Thanks. | init is the conventional name of the program that runs in process #1. It has taken many forms over the years, and the tasks that init programs have performed have significantly varied. Confusingly, it is also the name of a command that administrators use to communicate with process #1. They are best regarded as two separate things, and were certainly documented that way in AT&T Unix, even if they are in some softwares all mixed up in one program that behaves differently according to what process ID it finds that it has. Further confusingly, there can be multiple different programs run by process #1 in the lifetime of a system, at least two of which (in the case of Linux systems with an initramfs) are named init (/init in the initramfs and /sbin/init in the eventual root filesystem, conventionally chained-to by the former).

A service manager is a program that manages services, as the name says. It does not have to be run as process #1, and in fact, over the broad spectrum of operating system softwares over the years, generally has not been process #1. Service managers range from Gerrit Pape's runsv through Laurent Bercot's s6-supervise to the imaginatively named service-manager in my nosh toolset. They also encompass the Service Access Facility of AT&T Unix System 5 Release 4, the System Resource Controller of IBM AIX 3.1, and the Service Management Facility of Solaris. They spawn service programs from a uniform, consistent, known context, and provide mechanisms for those services to be controlled (brought up, terminated and restarted, and taken down) and their statuses to be queried by the rest of the system.

A system manager is a program that manages the system, dealing in system state changes. It generally is run as process #1. This is in part because the kernel of the operating system treats it specially, sending it information about system state change requests, such as power failure events or special key chords on the kernel virtual terminal's keyboard device(s) (e.g. on Linux ⇮ + ↑ generating SIGWINCH or ⎈ + ⎇ + ⌦ generating SIGINT to process #1). It also deals in setting up the initial system state at bootstrap, and sometimes in finalizing the system state at shutdown. The details of system state vary from software to software. The van Smoorenburg init operated in terms of the (now passé) run-levels. BSD init's state machine is entirely internal and has states such as running /etc/rc, multi-user, and single user.

Case studies:

- systemd is a process #1 program. It performs both service management and system management in one program, running as that process. However, it does not deal in finalizing the system state, instead chain loading process #1 to a different program named systemd-shutdown for that. System state changes generally take the form of the service manager starting/stopping targets which in turn cause the start/stop of services. Several services, such as emergency.service and systemd-halt.service for examples, when activated themselves run systemctl which sends commands back to process #1, for making further system state changes. Shutdown is a two-state sequence, in this design.
- The imaginatively named system-manager in my nosh toolset is a process #1 program that only does the system manager rôle. It does the initialization at bootstrap and the finalization at shutdown. It manages the system by spawning the (system-wide) service manager as another process and various invocations of the system-control program in response to events. (The SIGINT resulting from the ⎈ + ⎇ + ⌦ chord on the KVT keyboard causes it to spawn a child process to run system-control start secure-attention-key, for example.) system-control issues commands to the service manager to start and stop services and targets. Similarly, several services/targets invoke system-control to send commands back to the system manager so that upon their activation further system state changes are requested. Service processes are grandchildren of process #1.
- runit is a process #1 program that also does only the system management. It spawns the service manager as other processes. This is done in what runit people call "stage 2", by invoking a shell script that in turn chain loads runsvdir which in turn spawns the several runsv programs as grandchild processes of process #1. Service processes are great-grandchildren of process #1. System management takes a "just run three shell scripts" approach, triggered by a combination of signals and flag files.
- System 5 init was a process #1 program that only did the system management. It had the aforementioned run-levels as its system states, and in theory could be a service manager as well. In reality, its service management capabilities were so feature poor that after a few years they were not even used for TUI login service management any more. It spawned (far more functional) service managers as child processes, in the forms of the aforementioned SAF and SRC. By 1990 the number of run-levels in use shrank to 1, yielding much the same design in actual operation as the nosh system-manager all of these decades later, with process #1 largely just spawning a service manager child process and further child processes to run commands in response to kernel events and administrator commands. Service processes are great-grandchildren of process #1, grand-children and children of the various service manager processes. (A TUI login service process is spawned by the ttymon process, itself spawned from the sac process, spawned by process #1, for example.)
- van Smoorenburg init is like System 3 init and System 5 init were in the few years before the advent of the aforementioned service managers in Unix. It is a process #1 program that performs the system manager rôle and that also manages some services (albeit in the same feature poor way, not allowing fine grained control of starting/stopping individual services, as System 5 init). Service management, if it is done at all (rather than just forking off service programs and largely forgetting about them), is done by other programs in child processes. In contrast to both systemd and the nosh toolset's system-manager it leaves some system management actions to programs running in child processes. Whereas both systemd and system-manager perform the final act of system poweroff/restart/halt (making the appropriate system call to the kernel) in process #1 (albeit in another program in the systemd case), in the van Smoorenburg system these are performed in child processes invoked via rc. For example: The final system call that enacts halting the system is performed via a halt shell script run as a child process of rc (itself a child of process #1) that in turn runs the halt program (as a great-grandchild of process #1) that actually makes the system call.

Further reading:

- Jonathan de Boyne Pollard (2018). The Unix Service Access Facility. Frequently Given Answers.
- Jonathan de Boyne Pollard (2015). /etc/inittab is a thing of the past. Frequently Given Answers.
- Jonathan de Boyne Pollard (2018). run-levels are things of the past. Frequently Given Answers.
- Jonathan de Boyne Pollard (2018). getty spawned from init is a thing of the past. Frequently Given Answers.
- Linux - IPC with init
- What exactly does init do?
- What is the difference between these commands for bringing down a Linux server?
- https://retrocomputing.stackexchange.com/a/8290/1932 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
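To see which of these rôles your own system assigns to process #1, a couple of standard commands suffice (pstree is typically in the psmisc package):

    ps -o pid,comm -p 1      # the program running as process #1
    pstree -p 1 | head -n 5  # the supervision tree it has spawned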
489,771 | I have set up a git server by creating a user "git" and then creating a local repository in the git user's directory. The git server works great; I can pull, push, etc. I allowed users to interact with the git repo by adding their public keys to the .ssh/authorized_keys file. I have disabled password-based logins. But the problem is that these users can log in to the server via ssh since their keys are on the authorized keys list. Okay, the permissions are set to be pretty restricted for the git user, but still, I would prefer it if there was no way for git to log in directly. Is there a way to disable logins for the "git" user, but maintain the ability for the git user to accept pushes and pulls through git/ssh? | You can use git-shell to restrict access to SSH user accounts. From the documentation page: "This is a login shell for SSH accounts to provide restricted Git access. It permits execution only of server-side Git commands implementing the pull/push functionality, plus custom commands present in a subdirectory named git-shell-commands in the user's home directory. git-shell is non-interactive by default." Setting a user's default shell to git-shell will allow you to prevent users from interactively logging into your server, while keeping the functionality of git intact. Some level of customization is possible, which is documented on the same page under the 'EXAMPLES' section. git-shell should be installed along with git at /usr/bin/git-shell. You can set this as a user's default shell using usermod:

    usermod -s /usr/bin/git-shell username

 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94466/"
]
} |
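A hedged sketch of the full setup for an account named git (the user name and paths are assumptions; registering the shell in /etc/shells is only needed for tools such as chsh that consult that file):

    shell=$(command -v git-shell)              # often /usr/bin/git-shell
    grep -qx "$shell" /etc/shells || echo "$shell" | sudo tee -a /etc/shells
    sudo usermod -s "$shell" git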
489,775 | Using common bash tools as part of a shell script, I want to repeatedly insert a newline char ('\n') into a long string at intervals of every N chars. For example, given this string, how would I insert a newline char every 20 chars? head -n 50 /dev/urandom | tr -dc A-Za-z0-9 Example of the results I am trying to achieve: ZL1WEV72TTO0S83LP2I2MTQ8DEIU3GSSYJOI9CFE6GEPWUPCBBHLWNA4M28DP2DHDI1L2JQIZJL0ACFVUDYEK7HN7HQY4E2U6VFCRH68ZZJGMSSC5YLHO0KZ94LMELDIN1BAXQKTNSMH0DXLM7B5966UEFGZENLZ4917Y741L2WRTG5ACFGQGRVDVT3CYOLYKNT2ZYUJEAVN1EY4O161VTW1P3OYQ17T24S7S9BDG1RMKGBXWOZSI4D35U81P68NF5SBHH7AOYHV2TWQP27A40QCQW5N4JDK5001EAQXF41NFKH3Q5GOQZ54HZG2FFZSQ89KGMQZ46YBW3GVROYHAIBOU8NFM39RYP1XBLQMYLG8SSIW6J6XG6UJEKXO A use-case is to quickly make a set of random passwords or ID's of a fixed length. The way I did the above example is:

    for i in {1..30}; do head /dev/random | tr -dc A-Z0-9 | head -c 20 ; echo ''; done

However, for learning purposes, I want to do it a different way. I want to start with an arbitrarily long string and insert newlines, thus breaking one string into multiple small strings of fixed char length. | The venerable fold command ("written by Bill Joy on June 28, 1977") can wrap lines:

    $ printf "foobarzot\n" | fold -w 3
    foo
    bar
    zot

However, there are some edge cases:

    BUGS
    Traditional roff(7) output semantics, implemented both by GNU nroff
    and by mandoc(1), only uses a single backspace for backing up the
    previous character, even for double-width characters. The fold
    backspace semantics required by POSIX mishandles such
    backspace-encoded sequences, breaking lines early. The fmt(1)
    utility provides similar functionality and does not suffer from that
    problem, but isn't standardized by POSIX.

so if your input has backspace characters you may need to filter or remove those:

    $ printf "a\bc\bd\be\n" | col -b | fold -w 1
    e
    $ printf "a\bc\bd\be\n" | tr -d "\b" | fold -w 1
    a
    c
    d
    e

 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/268886/"
]
} |
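Applied to the question's use case, a single pipeline yields the thirty fixed-length strings (8192 bytes deliberately over-reads, since tr -dc discards roughly three quarters of the random bytes):

    head -c 8192 /dev/urandom | tr -dc 'A-Za-z0-9' | fold -w 20 | head -n 30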
489,940 | On all my Red Hat Linux machines version 7.2 we saw that systemd-tmpfiles-clean.service is inactive:

    systemctl status systemd-tmpfiles-clean.service
    ● systemd-tmpfiles-clean.service - Cleanup of Temporary Directories
       Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.service; static; vendor preset: disabled)
       Active: inactive (dead) since Wed 2018-12-19 14:47:14 UTC; 12min ago
         Docs: man:tmpfiles.d(5)
               man:systemd-tmpfiles(8)
      Process: 34231 ExecStart=/usr/bin/systemd-tmpfiles --clean (code=exited, status=0/SUCCESS)
     Main PID: 34231 (code=exited, status=0/SUCCESS)

    Dec 19 14:47:14 master02.uridns.com systemd[1]: Starting Cleanup of Temporary Directories...
    Dec 19 14:47:14 master02.uridns.com systemd[1]: Started Cleanup of Temporary Directories.

It is strange that we saw the files and folders under /tmp, and it seems that cleanup is performed from time to time. I searched in crontab and cron jobs, but I did not find other cleanup jobs. Am I missing something here? Is it possible that in spite of the service being inactive, the cleanup is performed every couple of weeks?

    systemctl enable systemd-tmpfiles-clean.service
    The unit files have no [Install] section. They are not meant to be enabled
    using systemctl.
    Possible reasons for having this kind of units are:
    1) A unit may be statically enabled by being symlinked from another unit's
       .wants/ or .requires/ directory.
    2) A unit's purpose may be to act as a helper for some other unit which has
       a requirement dependency on it.
    3) A unit may be started when needed via activation (socket, path, timer,
       D-Bus, udev, scripted systemctl call, ...).

We also saw a few folders that were really old:

    ls -ltr
    total 137452
    drwxr-xr-x 3 root      root        33 Jun 13  2017 Tools
    drwx--x--x 3 root      root        16 Oct 12 09:33 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-httpd.service-QZqGLA
    drwx--x--x 3 root      root        16 Oct 12 10:02 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-rtkit-daemon.service-BTcGY1
    drwx--x--x 3 root      root        16 Oct 12 10:02 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-vmtoolsd.service-mQ1SXc
    drwxr-xr-x 2 ambari    ambari      18 Oct 12 12:02 hsperfdata_ambari
    drwx--x--x 3 root      root        16 Oct 12 12:17 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-cups.service-PnKaq8
    drwx--x--x 3 root      root        16 Oct 12 12:17 systemd-private-74982d8a24254a1d8b8ec3b5c0d80a9b-colord.service-DNn470
    -rwxr-xr-x 1 root      root     83044 Nov 18 17:27 Spark_Thrift.log
    drwxr-xr-x 2 zookeeper hadoop      18 Nov 18 17:28 hsperfdata_zookeeper
    -rwxr-xr-x 1 root      root       379 Nov 18 17:37 requests.txt
    -rwxr-xr-x 1 root      root    137348 Nov 22 14:50 pp
    -rwxr-xr-x 1 root      root       344 Nov 26 15:24 yyp
    -rwx--x--x 1 root      root         0 Nov 29 21:26 hogsuspend
    -rwxr-xr-x 1 root      root      1032 Dec  3 10:55 aa

From my machine:

    more /lib/systemd/system/systemd-tmpfiles-clean.timer
    # This file is part of systemd.
    #
    # systemd is free software; you can redistribute it and/or modify it
    # under the terms of the GNU Lesser General Public License as published by
    # the Free Software Foundation; either version 2.1 of the License, or
    # (at your option) any later version.

    [Unit]
    Description=Daily Cleanup of Temporary Directories
    Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)

    [Timer]
    OnBootSec=15min
    OnUnitActiveSec=1d

The rules are:

    more /usr/lib/tmpfiles.d/tmp.conf
    # This file is part of systemd.
    #
    # systemd is free software; you can redistribute it and/or modify it
    # under the terms of the GNU Lesser General Public License as published by
    # the Free Software Foundation; either version 2.1 of the License, or
    # (at your option) any later version.

    # See tmpfiles.d(5) for details

    # Clear tmp directories separately, to make them easier to override
    v /tmp 1777 root root 10d
    v /var/tmp 1777 root root 30d

    # Exclude namespace mountpoints created with PrivateTmp=yes
    x /tmp/systemd-private-%b-*
    X /tmp/systemd-private-%b-*/tmp
    x /var/tmp/systemd-private-%b-*
    X /var/tmp/systemd-private-%b-*/tmp

 | You can ask systemd what a unit's triggers are:

    systemctl show -p TriggeredBy systemd-tmpfiles-clean

This will show that the systemd-tmpfiles-clean service is triggered by the systemd-tmpfiles-clean.timer timer. That is defined as:

    # SPDX-License-Identifier: LGPL-2.1+
    #
    # This file is part of systemd.
    #
    # systemd is free software; you can redistribute it and/or modify it
    # under the terms of the GNU Lesser General Public License as published by
    # the Free Software Foundation; either version 2.1 of the License, or
    # (at your option) any later version.

    [Unit]
    Description=Daily Cleanup of Temporary Directories
    Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)

    [Timer]
    OnBootSec=15min
    OnUnitActiveSec=1d

Thus the service runs every day, and cleans directories up based on the tmpfiles.d configuration. See the associated man pages for details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/489940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
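Two follow-up commands that make the trigger relationship visible (both are stock systemctl verbs):

    systemctl list-timers systemd-tmpfiles-clean.timer   # last/next activation
    systemctl cat systemd-tmpfiles-clean.timer           # the unit shown above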
489,959 | Many screen lockers (mine is i3lock) do not block access to other Virtual Terminals. This means that, if I leave a session opened in some VT, then even when the desktop is locked (for example when resuming), a malicious person can switch to the VT and do anything. This is an actual issue for me, as I occasionally switch to a VT, then switch back to the graphical environment and forget to log out from the VT. The question then is: how to add VT-locking on top of an existing screen locker? The Arch Linux wiki suggests to simply disable VTs from Xorg, with this piece of configuration for the X server:

    Section "ServerFlags"
        # disable VT switching:
        Option "DontVTSwitch" "True"
        # disable "zapping", ie. killing the X server with Ctrl-Alt-Bksp:
        Option "DontZap" "True"
    EndSection

This is not an option since I use VTs, as already explained above. Maybe one solution would be to set and reset those options dynamically, but I found nothing to change X server options at runtime, at least in general (there are things like setxkbmap for keyboard layouts, or xset for misc stuff). Is this possible? I also found the command vlock -a which, when called from a text-based VT, locks the session and disables VT switching. However, it does not work from the graphical environment, and would anyway be redundant with the graphical screen locker. How can I solve this problem? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/489959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288527/"
]
} |
490,025 | When the CPU (Intel i5-8400) is heavily loaded, the fan increases its speed and makes noise. I want to eliminate the noise when running a CPU-intensive backup process (the backup2l program). (It is apparently CPU-intensive because it compresses the backup with gzip.) How can I make a process not use turbo boost? My OS is Ubuntu Linux 18.10. If such a feature is not available in Linux, we should report a feature suggestion. | That's what cpulimit is for:

    cpulimit --exe=gzip --background --limit=100
    cpulimit --exe=tar --background --limit=100

This will limit the total CPU usage of the most CPU-resource-intensive programs used by the backup2l script to 100% each, i.e. one core per program. If that would still make too much noise, reduce that number until your machine is quiet again. After backup2l is finished, just killall cpulimit to go back to normal operations. Note: your backup might take twice as long if you limit it to only 2 cores. Just like a car: the faster, the noisier... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9158/"
]
} |
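Since the question literally asks about turbo boost: with the intel_pstate driver (an assumption; acpi-cpufreq exposes a different knob) turbo can be toggled system-wide for the duration of the backup. The backup2l -b invocation is hypothetical shorthand for "run the backup".

    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo  # turbo off
    backup2l -b
    echo 0 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo  # turbo back on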
490,057 | Normally, it is possible to set an environment variable for a command by prefixing it like so:

    hello=hi bash -c 'echo $hello'

I also know that we can use a variable to substitute any part of a command invocation like the following:

    $ cmd=bash
    $ $cmd -c "echo hi"    # equivalent to bash -c "echo hi"

I was very surprised to find out that you cannot use a variable to prefix a command to set an environment variable. Test case:

    $ prefix=hello=hi
    $ echo $prefix    # prints hello=hi
    $ $prefix bash -c 'echo $hello'
    hello=hi: command not found

Why can I not set the environment variable using a variable? Is the prefix part a special part? I was able to get it working by using eval in front, but I still do not understand why. I am using bash 4.4. | I suspect this is the part of the sequence that's catching you: "The words that are not variable assignments or redirections are expanded (see Shell Expansions). If any words remain after expansion, the first word is taken to be the name of the command and the remaining words are the arguments." That's from the Bash reference manual in the section on Simple Command Expansion. In the cmd=bash example, no environment variables are set, and bash processes the command line up through parameter expansion, leaving bash -c "echo hi". In the prefix=hello=hi example, there are again no variable assignments in the first pass, so processing continues to parameter expansion, resulting in a first word of hello=hi. Once the variable assignments have been processed, they are not re-processed during command execution. See the processing and its results under set -x:

    $ prefix=hello=hi
    + prefix=hello=hi
    $ $prefix bash -c 'echo $hello'
    + hello=hi bash -c 'echo $hello'
    -bash: hello=hi: command not found
    $ hello=42 bash -c 'echo $hello'
    + hello=42
    + bash -c 'echo $hello'
    42

For a safer variation of "variable expansion" -as- "environment variables" than eval, consider wjandrea's suggestion of env:

    prefix=hello=hi
    env "$prefix" bash -c 'echo "$hello"'
    hi

It's not strictly a command-line variable assignment, since we're using the env utility's main function of assigning environment variables to a command, but it accomplishes the same goal. The $prefix variable is expanded during the processing of the command-line, providing the name=value to env, who passes it along to bash. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/490057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18135/"
]
} |
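The env route also extends naturally to several assignments kept in an array, which avoids the word-splitting pitfalls of an unquoted scalar:

    envs=( hello=hi other=42 )
    env "${envs[@]}" bash -c 'echo "$hello $other"'   # prints: hi 42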
490,065 | Before anyone says it, yes, I know this is a duplicate, except when I copy/paste that code into ~/.bashrc, the terminal hasn't changed at all. I'm running Debian on VirtualBox (Windows 10). I'm guessing I'm using the MATE desktop because I'm using MATE Terminal. When I go into a directory with git, I need to type in git status or git branch to see what branch I'm on; it doesn't display it like it's supposed to with the code. EDIT: The code I pasted:

    force_color_prompt=yes

    parse_git_branch() {
        git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
    }

    if [ "$color_prompt" = yes ]; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[01;31m\]$(parse_git_branch)\[\033[00m\]\$ '
    else
        PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(parse_git_branch)\$ '
    fi
    unset color_prompt force_color_prompt

EDIT 2: So it turns out my .bashrc file had

    if [ "$color_prompt" = yes ]; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[01;31m\]$(parse_git_branch)\[\033[00m\]\$ '
    else
        PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(parse_git_branch)\$ '
    fi
    unset color_prompt force_color_prompt

printed somewhere else in it. I commented that out and now I get branch (whatever) when I'm in a git directory without needing to type git branch or git status. The problem I have now is the colors won't show up. This isn't a major issue, though it would be nice to have. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/490065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327743/"
]
} |
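For the remaining color problem in EDIT 2, note that the stock Debian ~/.bashrc only turns force_color_prompt into color_prompt=yes via a tput test; a block like the following (this is the skeleton Debian ships) must run before the PS1 assignment above:

    if [ -n "$force_color_prompt" ]; then
        if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
            color_prompt=yes
        else
            color_prompt=
        fi
    fi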
490,077 | When we execute the command wpa_supplicant -iwlan0 -c/etc/wpa_supplicant.conf to connect to an AP, wpa_supplicant follows these steps:

    1. wpa_supplicant requests the kernel driver to scan neighboring BSSes
    2. wpa_supplicant selects a BSS based on its configuration
    3. wpa_supplicant requests the kernel driver to associate with the chosen BSS

Is there any way to skip the scanning part, i.e. step no. 1? Scanning takes a considerable few seconds, as the local environment has 50+ SSIDs. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/490077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
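One common way to cut the scan time, sketched here with made-up SSID, passphrase, BSSID and frequency values, is to pin the network block so the driver only probes the channel the AP lives on (scan_freq and bssid are documented wpa_supplicant network-block options):

    cat >> /etc/wpa_supplicant.conf <<'EOF'
    network={
        ssid="MyAP"
        psk="secret"
        bssid=00:11:22:33:44:55
        scan_freq=2437
    }
    EOF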
490,178 | Tried to use a wrapper around find but hitting the following problem. I want e.g. to provide a few directories as arguments, but the last argument is always a number that indicates how old data should be removed. For example:

    rmoldfiles dir1 dir2 dir3 20

This should remove the files that are older than 20 days, looking at mtime of course. Here is the script:

    #!/bin/bash

    die(){
        echo >&2 "$@"
        exit 1
    }

    usage(){
        echo >&2 "Usage: $0 [dir1 dir2 dir3] [days old]"
        die
    }

    if [[ (($# < 1)) || -f "$1" ]]; then
        if [[ -f "$1" ]]; then
            printf '%s\n' "Please provide a directory"
        fi
        usage
    fi

    while (( $# )); do
        while IFS= read -r -d $'\0' file; do
            printf 'rm %s\n' "$file"
            sleep 1
        done < <(find "$1" -type f -mtime +$2 -print0)
        shift
    done

    echo "Done deleting"

Problem: How to shift directories but not the last argument. | A couple of solutions. Pick out the last command line argument:

    args=( "$@" )
    num=${args[-1]}
    args=( "${args[@]:0:${#args[@]} - 1}" )

(then use find "${args[@]}" -type f -mtime "+$num" -print -delete to delete those files). Or put the number first:

    num=$1; shift

(then use find "$@" -type f -mtime "+$num" -print -delete to delete the files). The loop is only needed if you have hundreds or thousands of directories to process, in which case the find command would be too long with a single invocation. Otherwise, don't loop. find can take multiple search paths. If you want to insert a delay and use rm explicitly, and have some formatted output for each file:

    find "$@" -type f -mtime "+$num" -exec sh -c '
        for pathname do
            printf "Removing %s\n" "$pathname"
            rm -- "$pathname"
            sleep 1
        done' sh {} +

If you find that you do need to loop over the directories (or if this just feels better):

    # Assumes that the arguments are in the order
    # num dir1 dir2 dir3 ...
    num=$1
    shift

    for dir do
        printf 'Processing %s...\n' "$dir"
        find "$dir" -type f -mtime "+$num" -exec sh -c '
            for pathname do
                printf "Removing %s\n" "$pathname"
                rm -- "$pathname"
                sleep 1
            done' sh {} +
    done

or,

    # Assumes that the arguments are in the order
    # dir1 dir2 dir3 ... num
    args=( "$@" )
    num=${args[-1]}
    args=( "${args[@]:0:${#args[@]} - 1}" )

    for dir in "${args[@]}"; do
        # as above
    done

 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45891/"
]
} |
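A quick way to convince yourself of the array-slicing variant — the arguments here are placeholders:

    set -- dir1 dir2 dir3 20                     # hypothetical arguments
    args=( "$@" )
    num=${args[-1]}                              # needs bash 4.3 or later
    args=( "${args[@]:0:${#args[@]} - 1}" )
    printf 'days=%s dirs=%s\n' "$num" "${args[*]}"   # days=20 dirs=dir1 dir2 dir3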
490,267 | I have noticed that logging out of my X user session kills any tmux session I have initiated, even sessions I had run with sudo tmux and similar commands. I am sure that this formerly did not happen, but some recent change has effected this behavior. How do I maintain these tmux (or screen) sessions, even after I end my X session? | This "feature" has existed in systemd previously, but the systemd developers decided to change the default, enabling termination of a user's child processes upon logout of a session. You can revert this setting in your logind.conf (/etc/systemd/logind.conf):

    KillUserProcesses=no

You can also run tmux with a systemd-run wrapper like the following:

    systemd-run --scope --user tmux

For these systems, you may just want to alias the tmux (or screen) command:

    alias tmux="systemd-run --scope --user tmux"

 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/490267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13308/"
]
} |
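A sketch of applying the change non-interactively; the sed pattern assumes the stock commented-out line, and restarting logind can end the current graphical session, so prefer running this from a console:

    sudo sed -i 's/^#\?KillUserProcesses=.*/KillUserProcesses=no/' /etc/systemd/logind.conf
    sudo systemctl restart systemd-logind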
490,277 | I have written this menu that calls forth several scripts. One of these scripts is dbus-monitor --system, so it displays live traffic over dbus. But when I want to exit this I normally do Ctrl + C, but that also exits my menu, and I would like to just return to my menu. Is there code that I can put after the dbus-monitor so that when an exit is detected it starts my menu again? My menu is just another .sh script. ---------------- clarify --------------- I am not that advanced "yet" ;) in scripting. This is the menu where I call my dbus script:

    select opt in "dbus Live Traffic" "option 2" "Main menu" "Quit"
    do
        case $opt in
            "dbus Live Traffic")
                curl -s -u lalala:hihihi ftp://ftp.somewhere.com/folder/dbuslivetraffic.sh | bash
                ;;
            "option 2")
                do_something
                ;;
            "Main menu")
                main_menu;;
            "Quit")
                quit_menu;;
        esac
        if [[ $opt != "Main menu" ]] || [[ $opt != "Quit" ]] ; then
            main_menu
        fi
    done

and this is the content of my dbuslivetraffic.sh:

    dbus-monitor --system

For now just this single line, but maybe in the near future more code will be added to this script. I don't really understand where I need to put the TRAP function like suggested by @RoVo. | You can run the command in a subshell and trap on SIGINT running kill 0 to kill the process group of the subshell only.

    select opt in a b; do
        case $REPLY in
            1)
                (
                    trap "kill -SIGINT 0" SIGINT
                    sleep 10
                )
                ;;
            2)
                sleep 10
                ;;
        esac
    done

Selecting (1) will let you use Ctrl + c without killing the menu. Selecting (2) and pressing Ctrl + c will kill the menu, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325292/"
]
} |
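Adapting that trick to the question's own menu, the dbus entry would become something like this (untested sketch, reusing the question's curl line):

    "dbus Live Traffic")
        (
            trap 'kill -SIGINT 0' SIGINT
            curl -s -u lalala:hihihi ftp://ftp.somewhere.com/folder/dbuslivetraffic.sh | bash
        )
        ;;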
490,301 | While pressing Alt key on the keyboard the control is transferred to the terminal menu. There is an option in ubuntu terminal ( Edit -> Preferences ) to disable this Enable mnemonics (such as alt + f to open the filename) I'm running Lubuntu 18.10 (LXQT). There is no option to do this via gui settings. I also tried looking at ~/.config/aterminal.org/qterminal.ini still no option to turn on meta key. How to disable menu while pressing Alt key? EDIT : One of the answers referred to a file located at ~/.config/openbox/lubuntu-rc.xml . Instead of lubuntu-rc.xml what I have is lxqt-rc.xml and I'm not able to find an entry for my problem. Here is my lxqt-rc.xml file (excluding the commented out example part at the bottom) <?xml version="1.0" encoding="UTF-8"?><!-- Do not edit this file, it will be overwritten on install. Copy the file to $HOME/.config/openbox/ instead. --><openbox_config xmlns="http://openbox.org/3.4/rc" xmlns:xi="http://www.w3.org/2001/XInclude"><resistance> <strength>10</strength> <screen_edge_strength>20</screen_edge_strength></resistance><focus> <focusNew>yes</focusNew> <!-- always try to focus new windows when they appear. other rules do apply --> <followMouse>no</followMouse> <!-- move focus to a window when you move the mouse into it --> <focusLast>yes</focusLast> <!-- focus the last used window when changing desktops, instead of the one under the mouse pointer. when followMouse is enabled --> <underMouse>no</underMouse> <!-- move focus under the mouse, even when the mouse is not moving --> <focusDelay>200</focusDelay> <!-- when followMouse is enabled, the mouse must be inside the window for this many milliseconds (1000 = 1 sec) before moving focus to it --> <raiseOnFocus>no</raiseOnFocus> <!-- when followMouse is enabled, and a window is given focus by moving the mouse into it, also raise the window --></focus><placement> <!-- Lubuntu specific : Place new windows where the mouse is <monitor>Mouse</monitor> <primaryMonitor>Mouse</primaryMonitor> --> <policy>Smart</policy> <!-- 'Smart' or 'UnderMouse' --> <center>yes</center> <!-- whether to place windows in the center of the free area found or the top left corner --> <monitor>Mouse</monitor> <!-- with Smart placement on a multi-monitor system, try to place new windows on: 'Any' - any monitor, 'Mouse' - where the mouse is, 'Active' - where the active window is, 'Primary' - only on the primary monitor --> <primaryMonitor>Mouse</primaryMonitor> <!-- The monitor where Openbox should place popup dialogs such as the focus cycling popup, or the desktop switch popup. It can be an index from 1, specifying a particular monitor. Or it can be one of the following: 'Mouse' - where the mouse is, or 'Active' - where the active window is --></placement><theme> <!-- Lubuntu specific : Theme = Lubuntu and font = Ubuntu --> <name>Lubuntu Arc</name> <titleLayout>NLIMC</titleLayout> <!-- available characters are NDSLIMC, each can occur at most once. N: window icon L: window label (AKA title). 
I: iconify M: maximize C: close S: shade (roll up/down) D: omnipresent (on all desktops).--> <keepBorder>yes</keepBorder> <animateIconify>yes</animateIconify> <font place="ActiveWindow"> <name>Ubuntu Medium</name> <size>11</size> <!-- font size in points --> <weight>bold</weight> <!-- 'bold' or 'normal' --> <slant>normal</slant> <!-- 'italic' or 'normal' --> </font> <font place="InactiveWindow"> <name>Ubuntu Medium</name> <size>11</size> <!-- font size in points --> <weight>bold</weight> <!-- 'bold' or 'normal' --> <slant>normal</slant> <!-- 'italic' or 'normal' --> </font> <font place="MenuHeader"> <name>Ubuntu</name> <size>11</size> <!-- font size in points --> <weight>normal</weight> <!-- 'bold' or 'normal' --> <slant>normal</slant> <!-- 'italic' or 'normal' --> </font> <font place="MenuItem"> <name>Ubuntu</name> <size>11</size> <!-- font size in points --> <weight>normal</weight> <!-- 'bold' or 'normal' --> <slant>normal</slant> <!-- 'italic' or 'normal' --> </font> <font place="ActiveOnScreenDisplay"> <name>Ubuntu Medium</name> <size>11</size> <!-- font size in points --> <weight>bold</weight> <!-- 'bold' or 'normal' --> <slant>normal</slant> <!-- 'italic' or 'normal' --> </font> <font place="InactiveOnScreenDisplay"> <name>Ubuntu Medium</name> <size>11</size> <!-- font size in points --> <weight>bold</weight> <!-- 'bold' or 'normal' --> <slant>normal</slant> <!-- 'italic' or 'normal' --> </font></theme><desktops> <!-- this stuff is only used at startup, pagers allow you to change them during a session these are default values to use when other ones are not already set by other applications, or saved in your session use obconf if you want to change these without having to log out and back in --> <number>4</number> <firstdesk>1</firstdesk> <names> <!-- set names up here if you want to, like this: <name>desktop 1</name> <name>desktop 2</name> --> </names> <popupTime>875</popupTime> <!-- The number of milliseconds to show the popup for when switching desktops. Set this to 0 to disable the popup. --></desktops><resize> <!-- Lubuntu specific : Don't draw content on resize (too heavy). <drawContents>no</drawContents> --> <drawContents>no</drawContents> <popupShow>Nonpixel</popupShow> <!-- 'Always', 'Never', or 'Nonpixel' (xterms and such) --> <popupPosition>Center</popupPosition> <!-- 'Center', 'Top', or 'Fixed' --> <popupFixedPosition> <!-- these are used if popupPosition is set to 'Fixed' --> <x>10</x> <!-- positive number for distance from left edge, negative number for distance from right edge, or 'Center' --> <y>10</y> <!-- positive number for distance from top edge, negative number for distance from bottom edge, or 'Center' --> </popupFixedPosition></resize><!-- You can reserve a portion of your screen where windows will not cover when they are maximized, or when they are initially placed. Many programs reserve space automatically, but you can use this in other cases. 
--><margins> <top>0</top> <bottom>0</bottom> <left>0</left> <right>0</right></margins><dock> <position>TopLeft</position> <!-- (Top|Bottom)(Left|Right|)|Top|Bottom|Left|Right|Floating --> <floatingX>0</floatingX> <floatingY>0</floatingY> <noStrut>no</noStrut> <stacking>Above</stacking> <!-- 'Above', 'Normal', or 'Below' --> <direction>Vertical</direction> <!-- 'Vertical' or 'Horizontal' --> <autoHide>no</autoHide> <hideDelay>300</hideDelay> <!-- in milliseconds (1000 = 1 second) --> <showDelay>300</showDelay> <!-- in milliseconds (1000 = 1 second) --> <moveButton>Middle</moveButton> <!-- 'Left', 'Middle', 'Right' --></dock><keyboard> <chainQuitKey>C-g</chainQuitKey> <!-- Keybindings for desktop switching --> <keybind key="C-A-Left"> <action name="GoToDesktop"><to>left</to><wrap>no</wrap></action> </keybind> <keybind key="C-A-Right"> <action name="GoToDesktop"><to>right</to><wrap>no</wrap></action> </keybind> <keybind key="C-A-Up"> <action name="GoToDesktop"><to>up</to><wrap>no</wrap></action> </keybind> <keybind key="C-A-Down"> <action name="GoToDesktop"><to>down</to><wrap>no</wrap></action> </keybind> <keybind key="S-A-Left"> <action name="SendToDesktop"><to>left</to><wrap>no</wrap></action> </keybind> <keybind key="S-A-Right"> <action name="SendToDesktop"><to>right</to><wrap>no</wrap></action> </keybind> <keybind key="S-A-Up"> <action name="SendToDesktop"><to>up</to><wrap>no</wrap></action> </keybind> <keybind key="S-A-Down"> <action name="SendToDesktop"><to>down</to><wrap>no</wrap></action> </keybind> <keybind key="W-F1"> <action name="GoToDesktop"><to>1</to></action> </keybind> <keybind key="W-F2"> <action name="GoToDesktop"><to>2</to></action> </keybind> <keybind key="W-F3"> <action name="GoToDesktop"><to>3</to></action> </keybind> <keybind key="W-F4"> <action name="GoToDesktop"><to>4</to></action> </keybind> <keybind key="W-d"> <action name="ToggleShowDesktop"/> </keybind> <!-- Keybindings for windows --> <!-- Keybindings for windows --> <keybind key="A-F4"> <action name="Close"/> </keybind> <keybind key="A-Escape"> <action name="Lower"/> <action name="FocusToBottom"/> <action name="Unfocus"/> </keybind> <keybind key="A-space"> <action name="ShowMenu"><menu>client-menu</menu></action> </keybind> <!-- Keybindings for window switching --> <keybind key="A-Tab"> <action name="NextWindow"> <finalactions> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </finalactions> </action> </keybind> <keybind key="A-S-Tab"> <action name="PreviousWindow"> <finalactions> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </finalactions> </action> </keybind> <keybind key="C-A-Tab"> <action name="NextWindow"> <panels>yes</panels><desktop>yes</desktop> <finalactions> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </finalactions> </action> </keybind> <!-- Keybindings for window switching with the arrow keys --> <keybind key="W-S-Right"> <action name="DirectionalCycleWindows"> <direction>right</direction> </action> </keybind> <keybind key="W-S-Left"> <action name="DirectionalCycleWindows"> <direction>left</direction> </action> </keybind> <keybind key="W-S-Up"> <action name="DirectionalCycleWindows"> <direction>up</direction> </action> </keybind> <keybind key="W-S-Down"> <action name="DirectionalCycleWindows"> <direction>down</direction> </action> </keybind> <!-- Lubuntu specific. 
Keybindings for window tiling --> <!-- # HalfLeftScreen --> <keybind key="W-Left"> <action name="UnmaximizeFull"/> <action name="MoveResizeTo"> <x>0</x> <y>0</y> <height>100%</height> <width>50%</width> </action> </keybind> <!-- # HalfRightScreen --> <keybind key="W-Right"> <action name="UnmaximizeFull"/> <action name="MoveResizeTo"> <x>-0</x> <y>0</y> <height>100%</height> <width>50%</width> </action> </keybind> <!-- # HalfUpperScreen --> <keybind key="W-Up"> <action name="UnmaximizeFull"/> <action name="MoveResizeTo"> <x>0</x> <y>0</y> <width>100%</width> <height>50%</height> </action> </keybind> <!-- # HalfLowerScreen --> <keybind key="W-Down"> <action name="UnmaximizeFull"/> <action name="MoveResizeTo"> <x>0</x> <y>-0</y> <width>100%</width> <height>50%</height> </action> </keybind> <!-- Lubuntu specific : Keybindings --> <!-- Keybindings for running applications on Home + E --> <keybind key="W-e"> <action name="Execute"> <startupnotify> <enabled>true</enabled> <name>File manager</name> </startupnotify> <command>pcmanfm-qt</command> </action> </keybind> <!-- Keybindings for running Run menu from Lxpanel on Home + R--> <keybind key="W-r"> <action name="Execute"> <command>lxqt-runner</command> </action> </keybind> <keybind key="A-F2"> <action name="Execute"> <command>lxqt-runner</command> </action> </keybind> <!-- Keybindings for running Menu from Lxpanel --><!-- <keybind key="A-F1"> <action name="Execute"> <command>lxpanelctl menu</command> </action> </keybind> <keybind key="C-Escape"> <action name="Execute"> <command>lxpanelctl menu</command> </action> </keybind>--> <!-- Keybindings to toggle fullscreen --> <keybind key="F11"> <action name="ToggleFullscreen"/> </keybind> <!-- Launch task manager on Ctrl + Alt + Del--> <keybind key="C-A-Delete"> <action name="Execute"> <command>qps</command> </action> </keybind> <!-- Launch a terminal on Ctrl + Alt + T--> <keybind key="C-A-T"> <action name="Execute"> <command>qterminal</command> </action> </keybind> <!-- Lock the screen on Ctrl + Alt + l--> <!-- <keybind key="C-A-l"> <action name="Execute"> <command>lxsession-default lock</command> </action> </keybind>--> <!-- Keybinding for terminal button--> <keybind key="XF86WWW"> <action name="Execute"> <command>qterminal</command> </action> </keybind> <keybind key="XF86Terminal"> <action name="Execute"> <command>qterminal</command> </action> </keybind> <!-- Keybinding for calculator button--> <!-- <keybind key="XF86Calculator"> <action name="Execute"> <command>lxsession-default calculator</command> </action> </keybind>--> <!-- Keybinding for computer button--> <keybind key="XF86MyComputer"> <action name="Execute"> <command>pcmanfm-qt</command> </action> </keybind> <!-- Keybindings for Multimedia Keys and LCD Backlight (alternative when not using gnome-power-manager or xfce4-volumed) --> <keybind key="C-F7"> <action name="Execute"> <command>xset dpms force off</command> </action> </keybind> <keybind key="C-F10"> <action name="Execute"> <command>xbacklight -dec 10</command> </action> </keybind> <keybind key="C-F11"> <action name="Execute"> <command>xbacklight -inc 10</command> </action> </keybind> <!-- Take a screenshot of the current window with scrot when Alt+Print are pressed --> <!-- <keybind key="Print"> <action name="Execute"> <command>lxsession-default screenshot</command> </action> </keybind> <keybind key="A-Print"> <action name="Execute"> <command>lxsession-default screenshot window</command> </action> </keybind> --> <!-- Launch logout when push on the shutdown button --> <!-- <keybind 
key="XF86PowerOff"> <action name="Execute"> <command>lxsession-default quit</command> </action> </keybind>--></keyboard><mouse> <!-- Lubuntu specific : Specific mouse settings <dragThreshold>8</dragThreshold> <doubleClickTime>200</doubleClickTime> --> <dragThreshold>8</dragThreshold> <!-- number of pixels the mouse must move before a drag begins --> <doubleClickTime>200</doubleClickTime> <!-- in milliseconds (1000 = 1 second) --> <screenEdgeWarpTime>400</screenEdgeWarpTime> <!-- Time before changing desktops when the pointer touches the edge of the screen while moving a window, in milliseconds (1000 = 1 second). Set this to 0 to disable warping --> <screenEdgeWarpMouse>false</screenEdgeWarpMouse> <!-- Set this to TRUE to move the mouse pointer across the desktop when switching due to hitting the edge of the screen --> <context name="Frame"> <mousebind button="A-Left" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> <mousebind button="A-Left" action="Click"> <action name="Unshade"/> </mousebind> <mousebind button="A-Left" action="Drag"> <action name="Move"/> </mousebind> <mousebind button="A-Right" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="A-Right" action="Drag"> <action name="Resize"/> </mousebind> <mousebind button="A-Middle" action="Press"> <action name="Lower"/> <action name="FocusToBottom"/> <action name="Unfocus"/> </mousebind> <mousebind button="A-Up" action="Click"> <action name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="A-Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> <mousebind button="C-A-Up" action="Click"> <action name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="C-A-Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> <mousebind button="A-S-Up" action="Click"> <action name="SendToDesktop"><to>previous</to></action> </mousebind> <mousebind button="A-S-Down" action="Click"> <action name="SendToDesktop"><to>next</to></action> </mousebind> </context> <context name="Titlebar"> <mousebind button="Left" action="Drag"> <action name="Move"/> </mousebind> <mousebind button="Left" action="DoubleClick"> <action name="ToggleMaximize"/> </mousebind> <mousebind button="Up" action="Click"> <action name="if"> <shaded>no</shaded> <then> <action name="Shade"/> <action name="FocusToBottom"/> <action name="Unfocus"/> <action name="Lower"/> </then> </action> </mousebind> <mousebind button="Down" action="Click"> <action name="if"> <shaded>yes</shaded> <then> <action name="Unshade"/> <action name="Raise"/> </then> </action> </mousebind> </context> <context name="Titlebar Top Right Bottom Left TLCorner TRCorner BRCorner BLCorner"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Middle" action="Press"> <action name="Lower"/> <action name="FocusToBottom"/> <action name="Unfocus"/> </mousebind> <mousebind button="Right" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="ShowMenu"><menu>client-menu</menu></action> </mousebind> </context> <context name="Top"> <mousebind button="Left" action="Drag"> <action name="Resize"><edge>top</edge></action> </mousebind> </context> <context name="Left"> <mousebind button="Left" action="Drag"> <action name="Resize"><edge>left</edge></action> </mousebind> </context> <context name="Right"> <mousebind button="Left" 
action="Drag"> <action name="Resize"><edge>right</edge></action> </mousebind> </context> <context name="Bottom"> <mousebind button="Left" action="Drag"> <action name="Resize"><edge>bottom</edge></action> </mousebind> <mousebind button="Right" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="ShowMenu"><menu>client-menu</menu></action> </mousebind> </context> <context name="TRCorner BRCorner TLCorner BLCorner"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Left" action="Drag"> <action name="Resize"/> </mousebind> </context> <context name="Client"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> <mousebind button="Middle" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> <mousebind button="Right" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> </context> <context name="Icon"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> <action name="ShowMenu"><menu>client-menu</menu></action> </mousebind> <mousebind button="Right" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="ShowMenu"><menu>client-menu</menu></action> </mousebind> </context> <context name="AllDesktops"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Left" action="Click"> <action name="ToggleOmnipresent"/> </mousebind> </context> <context name="Shade"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> <mousebind button="Left" action="Click"> <action name="ToggleShade"/> </mousebind> </context> <context name="Iconify"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> <mousebind button="Left" action="Click"> <action name="Iconify"/> </mousebind> </context> <context name="Maximize"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Middle" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Right" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Left" action="Click"> <action name="ToggleMaximize"/> </mousebind> <mousebind button="Middle" action="Click"> <action name="ToggleMaximize"><direction>vertical</direction></action> </mousebind> <mousebind button="Right" action="Click"> <action name="ToggleMaximize"><direction>horizontal</direction></action> </mousebind> </context> <context name="Close"> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> <action name="Unshade"/> </mousebind> <mousebind button="Left" action="Click"> <action name="Close"/> </mousebind> </context> <context name="Desktop"> <mousebind button="Up" action="Click"> <action name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> <mousebind button="A-Up" action="Click"> <action name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="A-Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> <mousebind button="C-A-Up" action="Click"> <action 
name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="C-A-Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> <mousebind button="Left" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> <mousebind button="Right" action="Press"> <action name="Focus"/> <action name="Raise"/> </mousebind> </context> <context name="Root"> <!-- Menus --> <mousebind button="Middle" action="Press"> <action name="ShowMenu"><menu>client-list-combined-menu</menu></action> </mousebind> <mousebind button="Right" action="Press"> <action name="ShowMenu"><menu>root-menu</menu></action> </mousebind> </context> <context name="MoveResize"> <mousebind button="Up" action="Click"> <action name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> <mousebind button="A-Up" action="Click"> <action name="GoToDesktop"><to>previous</to></action> </mousebind> <mousebind button="A-Down" action="Click"> <action name="GoToDesktop"><to>next</to></action> </mousebind> </context></mouse><menu> <!-- You can specify more than one menu file in here and they are all loaded, just don't make menu ids clash or, well, it'll be kind of pointless --> <!-- Lubuntu specific : Default menu of Lubuntu --> <file>/usr/share/lubuntu/openbox/menu.xml</file> <!-- default menu file (or custom one in $HOME/.config/openbox/) --> <file>menu.xml</file> <hideDelay>200</hideDelay> <!-- if a press-release lasts longer than this setting (in milliseconds), the menu is hidden again --> <middle>no</middle> <!-- center submenus vertically about the parent entry --> <submenuShowDelay>100</submenuShowDelay> <!-- time to delay before showing a submenu after hovering over the parent entry. if this is a negative value, then the delay is infinite and the submenu will not be shown until it is clicked on --> <submenuHideDelay>400</submenuHideDelay> <!-- time to delay before hiding a submenu when selecting another entry in parent menu if this is a negative value, then the delay is infinite and the submenu will not be hidden until a different submenu is opened --> <applicationIcons>yes</applicationIcons> <!-- Lubuntu specific : Show applications icons if openbox is build with this support --> <manageDesktops>yes</manageDesktops> <!-- show the manage desktops section in the client-list-(combined-)menu --> <showIcons>yes</showIcons> <!-- controls if icons appear in the client-list-(combined-)menu --></menu> | You can run the command in a subshell and trap on SIGINT running kill 0 to kill the process group of the subshell only. select opt in a b; do case $REPLY in 1) ( trap "kill -SIGINT 0" SIGINT sleep 10 ) ;; 2) sleep 10 ;; esacdone Selecting (1) will let you use Ctrl + c without killing the menu. Selecting (2) and pressing Ctrl + c will kill the menu, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286520/"
]
} |
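For reference, here is the snippet from the answer above reformatted so it can be pasted directly (the flattened dump lost the line breaks around esac/done; sleep 10 stands in for the real command):

    select opt in a b; do
        case $REPLY in
            1)
                (
                    # trap in a subshell: kill 0 signals only this
                    # subshell's process group, not the menu itself
                    trap 'kill -SIGINT 0' SIGINT
                    sleep 10
                )
                ;;
            2)
                sleep 10
                ;;
        esac
    done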
490,393 | I have encountered comparisons of variables to string literals multiple times over the years which had one character prefixing the variable and the literal, e.g. if [ "x$A" = "xtrue" ]; then in order to check whether $A is "true" . I assume this is done to achieve shell compatibility or to work around a long-standing bug, an unintuitive behavior, etc. Nothing obvious comes to mind. Today I figured I want to know the reason, but my research didn't turn up anything. Or maybe it's just me making something out of a rather frequent exposure to rare occurrences. Is this practice still useful, maybe even best? | The important thing to understand here is that in most shells¹, [ is just an ordinary command parsed by the shell like any other ordinary command. Then the shell invokes that [ (aka test ) command with a list of arguments, and then it's up to [ to interpret them as a conditional expression. At that point, those are just a list of strings and the information about which ones resulted from some form of expansion is lost, even in those shells where [ is built-in (all Bourne-like ones these days). The [ utility used to have a hard time telling which ones of its arguments were operators and which ones were operands (the things operators work on). It didn't help that the syntax was intrinsically ambiguous. For instance: [ -t ] used to be (and still is in some shells/ [ s) the way to test whether stdout is a terminal. [ x ] is short for [ -n x ] : test whether x is a non-empty string (so you can see there's a conflict with the above). In some shells/ [ s, -a and -o can be both unary ( [ -a file ] for accessible file (now replaced by [ -e file ] ), [ -o option ] for is the option enabled? ) and binary operators ( and and or ). Again, ! -a x can be either and(nonempty("!"), nonempty("x")) or not(isaccessible("x")) . ( , ) and ! add more problems. In normal programming languages like C or perl , in: if ($a eq $b) {...} there's no way the content of $a or $b will be taken as operators, because the conditional expression is parsed before those $a and $b are expanded. But in shells, in: [ "$a" = "$b" ] the shell expands the variables first². For instance, if $a contains ( and $b contains ) , all the [ command sees is the arguments [ , ( , = , ) and ] . So does that mean "(" = ")" (are ( and ) lexically equal) or ( -n = ) (is = a non-empty string)? Historical implementations ( test appeared in Unix V7 in the late 70s) used to fail even in cases where it was not ambiguous, just because of the order in which they were processing their arguments. Here with version 7 Unix in a PDP11 emulator: $ ls -l /bin/[ -rwxr-xr-x 2 bin 2876 Jun 8 1979 /bin/[ $ [ ! = x ] test: argument expected $ [ "(" = x ] test: argument expected Most shell and [ implementations have or have had problems with those or variants thereof. With bash 4.4 today: bash-4.4$ a='(' b=-o c=x bash-4.4$ [ "$a" = "$b" -o "$a" = "$c" ] bash: [: `)' expected, found = POSIX.2 (published in the early 90s) devised an algorithm that would make [ 's behaviour unambiguous and deterministic when passed at most 4 arguments (beside [ and ] ) in the most common usage patterns ( [ -f "$a" -o "$b" ] is still unspecified for instance). It deprecated ( , ) , -a and -o , and dropped -t without operand. bash did implement that algorithm (or at least tried to) in bash 2.0. So, in POSIX compliant [ implementations, [ "$a" = "$b" ] is guaranteed to compare the content of $a and $b for equality, whatever they are. 
Without -o , we would write: [ "$a" = "$b" ] || [ "$a" = "$c" ] That is, call [ twice, each time with fewer than 5 arguments. But it took quite a while for all [ implementations to become compliant. bash 's was not compliant until 4.4 (though the last problem was for [ '(' ! "$var" ')' ] which nobody would really use in real life) The /bin/sh of Solaris 10 and older, which is not a POSIX shell, but a Bourne shell still has problems with [ "$a" = "$b" ] : $ a='!' b='!'$ [ "$a" = "$b" ]test: argument expected Using [ "x$a" = "x$b" ] works around the problem as there is no [ operator that starts with x . Another option is to use case instead: case "$a" in "$b") echo same;; *) echo different;;esac (quoting is necessary around $b , not around $a ). In any case, it is not and never has been about empty values. People have problems with empty values in [ when they forget to quote their variables, but that's not a problem with [ then. $ a= b='-o x'[ $a = $b ] with the default value of $IFS becomes: [ = -o x ] Which is a test of whether = or x is a non-empty string, but no amount of prefixing will help³ as [ x$a = x$b ] will still be: [ x = x-o x ] which would cause an error, and it could get a lot worse including DoS and arbitrary command injection with other values like in bash : bash-4.4$ a= b='x -o -v a[`uname>&2`]'bash-4.4$ [ x$a = x$b ]Linux The correct solution is to always quote : [ "$a" = "$b" ] # OK in POSIX compliant [ / shells[ "x$a" = "x$b" ] # OK in all Bourne-like shells Note that expr has similar (and even worse) problems. expr also has a = operator, though it's for testing whether the two operands are equal integers when they look like decimal integer numbers, or sort the same when not. In many implementations, expr + = + , or expr '(' = ')' or expr index = index don't do equality comparison. expr "x$a" = "x$b" would work around it for string comparison, but prefixing with an x could affect the sorting (in locales that have collating elements starting with x for instance) and obviously can't be used for number comparison expr "0$a" = "0$b" doesn't work for comparing negative integers. expr " $a" = " $b" works for integer comparison in some implementations, but not others (for a=01 b=1 , some would return true, some false). ¹ ksh93 is an exception. In ksh93 , [ can be seen as a reserved word in that [ -t ] is actually different from var=-t; [ "$var" ] , or from ""[ -t ] or cmd='['; "$cmd" -t ] . That's to preserve backward compatibility and still be POSIX compliant in cases where it matters. The -t is only taken as an operator here if it's literal, and ksh93 detects that you're calling the [ command. ² ksh added a [[...]] conditional expression operator with its own syntax parsing rules (and some problems of its own) to address that (also found in some other shells, with some differences). ³ except in zsh where split+glob is not invoked upon parameter expansion, but empty removal still is, or in other shells when disabling split+glob globally with set -o noglob; IFS= | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/490393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63502/"
]
} |
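To see the difference in practice, here is a short plain-sh sketch contrasting the historically fragile form with the two safe ones discussed in the answer above:

    #!/bin/sh
    a='(' b='!'
    # Fragile on old/non-POSIX [ implementations:
    #   [ "$a" = "$b" ]   could error out when a value looks like an operator
    # Safe everywhere, since no [ operator starts with "x":
    [ "x$a" = "x$b" ] && echo same || echo different
    # Equally safe, using case (quote $b, the pattern side):
    case $a in
        "$b") echo same ;;
        *)    echo different ;;
    esac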
490,402 | Can anyone explain in detail what is going on with the following. Let's imagine I am mounting a directory with the noexec option as follows: mount -o noexec /dev/mapper/fedora-data /data So to verify this I ran mount | grep data : /dev/mapper/fedora-data on /data type ext4 (rw,noexec,relatime,seclabel,data=ordered) Now within /data I'm creating a simple script called hello_world as follows: #!/bin/bash echo "Hello World" whoami So I made the script executable with chmod u+x hello_world (this will however have no effect on a file system with the noexec option) and I tried running it: # ./hello_world -bash: ./hello_world: Permission denied However, prepending bash to the file yields: # bash hello_world Hello World root So then I created a simple hello_world.c with the following contents: #include <stdio.h> int main() { printf("Hello World\n"); return 0; } Compiled it using cc -o hello_world hello_world.c Now running: # ./hello_world -bash: ./hello_world: Permission denied So I tried to run it using /lib64/ld-linux-x86-64.so.2 hello_world The error: ./hello_world: error while loading shared libraries: ./hello_world: failed to map segment from shared object: Operation not permitted So this is of course true since ldd returns the following: ldd hello_world ldd: warning: you do not have execution permission for `./hello_world' not a dynamic executable On another system where the noexec mount option doesn't apply I see: ldd hello_world linux-vdso.so.1 (0x00007ffc1c127000) libc.so.6 => /lib64/libc.so.6 (0x00007facd9d5a000) /lib64/ld-linux-x86-64.so.2 (0x00007facd9f3e000) Now my question is this: Why does running a bash script on a file system with the noexec option work but not a compiled C program? What is happening under the hood? | What's happening in both cases is the same: to execute a file directly, the execute bit needs to be set, and the filesystem can't be mounted noexec. But these things don't stop anything from reading those files. When the bash script is run as ./hello_world and the file isn't executable (either no exec permission bit, or noexec on the filesystem), the #! line isn't even checked, because the system doesn't even load the file. The script is never "executed" in the relevant sense. In the case of bash ./hello_world , well, the noexec filesystem option just plain isn't as smart as you'd like it to be. The bash command that's run is /bin/bash , and /bin isn't on a filesystem with noexec . So, it runs no problem. The system doesn't care that bash (or python or perl or whatever) is an interpreter. It just runs the command you gave ( /bin/bash ) with the argument which happens to be a file. In the case of bash or another shell, that file contains a list of commands to execute, but now we're "past" anything that's going to check file execute bits. That check isn't responsible for what happens later. Consider this case: $ cat hello_world | /bin/bash … or for those who do not like Pointless Use of Cat: $ /bin/bash < hello_world The "shbang" #! sequence at the beginning of a file is just some nice magic for doing effectively the same thing when you try to execute the file as a command. You might find this LWN.net article helpful: How programs get run . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/490402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43380/"
]
} |
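The behaviour is easy to reproduce without a dedicated partition by mounting a small noexec tmpfs (requires root; the paths are just examples):

    sudo mkdir -p /mnt/noexec-test
    sudo mount -t tmpfs -o noexec tmpfs /mnt/noexec-test
    printf '#!/bin/sh\necho hello\n' | sudo tee /mnt/noexec-test/hello >/dev/null
    sudo chmod +x /mnt/noexec-test/hello
    /mnt/noexec-test/hello      # fails: Permission denied
    sh /mnt/noexec-test/hello   # works: the file is only read; /bin/sh is what executes
    sudo umount /mnt/noexec-test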
490,457 | There is a command in Microsoft's cmd called color . I know that, in bash , there are special characters that allow you, during echos, to change the text colors. I also know that in Ubuntu you can edit the terminal's parameters by setting a "style" in its configuration, editing it and applying it with the mouse through the menus. What I ask is whether there exists under Debian, Ubuntu and CentOS something very simple like: color 1b so that the console turns from one color scheme to another (the original question showed two screenshots here). | There are multiple ways you can do this. One way is by using tput : tput setab 4 sets the background color to blue. To set the foreground color, use tput setaf . Another way is by using raw ANSI escapes; here is good documentation: https://misc.flogisoft.com/bash/tip_colors_and_formatting | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143935/"
]
} |
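A minimal script tying the tput calls from the answer together (setab/setaf take a colour number 0-7 on basic terminals; on terminals with back-colour-erase, clear repaints the whole screen in the new background):

    #!/bin/sh
    tput setab 4    # blue background
    tput setaf 7    # white foreground
    clear           # repaint the screen with the new colours
    echo "hello in white on blue"
    tput sgr0       # reset all attributes when done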
490,524 | awk 'processing_script_here' my=file.txt seems to stop and wait indefinitely... What's going on here and how do I make it work ? | As Chris says , arguments of the form variablename=anything are treated as variable assignment (that are performed at the time the arguments are processed as opposed to the (newer) -v var=value ones which are performed before the BEGIN statements) instead of input file names. That can be useful in things like: awk '{print $1}' FS=/ RS='\n' file1 FS='\n' RS= file2 Where you can specify a different FS / RS per file. It's also commonly used in: awk '!file1_processed{a[$0]; next}; {...}' file1 file1_processed=1 file2 Which is a safer version of: awk 'NR==FNR{a[$0]; next}; {...}' file1 file2 (which doesn't work if file1 is empty) But that gets in the way when you have files whose name contains = characters. Now, that's only a problem when what's left of the first = is a valid awk variable name. What constitutes a valid variable name in awk is stricter than in sh . POSIX requires it to be something like: [_a-zA-Z][_a-zA-Z0-9]* With only characters of the portable character set. However, the /usr/xpg4/bin/awk of Solaris 11 at least is not compliant in that regard and allows any alphabetical characters in the locale in variable names, not just a-zA-Z. So an argument like x+y=foo or =bar or ./foo=bar is still treated as an input file name and not an assignment as what's left of the first = is not a valid variable name. An argument like Stéphane=Chazelas.txt may or may not, depending on the awk implementation and locale. That's why with awk, it's recommended to use: awk '...' ./*.txt instead of awk '...' *.txt for instance to avoid the problem if you can't guarantee the name of the txt files won't contain = characters. Also, beware that an argument like -vfoo=bar.txt may be treated as an option if you use: awk -f file.awk -vfoo=bar.txt (also applies to awk '{code}' -vfoo=bar.txt with the awk from busybox versions prior to 1.28.0, see corresponding bug report ). Again, using ./*.txt works around that (using a ./ prefix also helps with a file called - which otherwise awk understands as meaning standard input instead). That's also why #! /usr/bin/awk -f shebangs don't really work. While the var=value ones can be worked around by fixing the ARGV values (add a ./ prefix) in a BEGIN statement: #! /usr/bin/awk -fBEGIN { for (i = 1; i < ARGC; i++) if (ARGV[i] ~ /^[_[:alpha:]][_[:alnum:]]*=/) ARGV[i] = "./" ARGV[i]}# rest of awk script That won't help with the option ones as those ones are seen by awk and not the awk script. One potential cosmetic issue with using that ./ prefix is it ends up in FILENAME , but you can always use substr(FILENAME, 3) to strip it if you don't want it. The GNU implementation of awk fixes all those issues with its -E option. After -E , gawk expects only the path of the awk script (where - still means stdin) and then a list of input file paths only (and there, not even - is treated specially). It's specially designed for: #! /usr/bin/gawk -E shebangs where the list of arguments are always input files (note that you're still free to edit that ARGV list in a BEGIN statement). You can also use it as: gawk -e '...awk code here...' -E /dev/null *.txt We use -E with an empty script ( /dev/null ) just to make sure those *.txt afterwards are always treated as input files, even if they contain = characters. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/490524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22142/"
]
} |
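A quick demonstration of both the hang and the fix: with my=file.txt taken as a variable assignment, awk is left with no input files and sits reading stdin, which is exactly the "stop and wait indefinitely" from the question:

    printf 'some text\n' > ./my=file.txt
    awk '{print FILENAME": "$0}' my=file.txt     # hangs: no file arguments, awk reads stdin
    awk '{print FILENAME": "$0}' ./my=file.txt   # prints "./my=file.txt: some text"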
490,564 | I have a bunch of source files with a very usual structure: some comments in the header, some (optional) imports, and then source code, e.g.: // // AppDelegate.swift // settings // // Created by Mikhail Igonin on 14/06/2018. // Copyright © 2018 Mikhail Igonin. All rights reserved. // import UIKit import Fabric import Crashlytics @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate { //Other comment } I need to add another import after the comments and import block. So a regex to match the beginning of this file should look like this: (([\n\s]*)((\/\/.*\n)|(import.*\n)))+ And it looks like this regex is ok: https://www.regextester.com/index.php?fam=106706 Now I'm trying to insert the new import with awk and gensub : gawk -v RS='^$' '{$0=gensub(/(([\n\s]*)((\/\/.*\n)|(import.*\n)))+/,"\\1\\2\nimport NEW_IMPORT\n\\2",1)}1' test.swift However it doesn't work and my regex matches the whole file: // // AppDelegate.swift // settings // // Created by Mikhail Igonin on 14/06/2018. // Copyright © 2018 Mikhail Igonin. All rights reserved. // import UIKit import Fabric import Crashlytics @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate { } import NEW_IMPORT What's my mistake? It looks like .* works incorrectly and matches the whole file. I've tried to mark it as lazy ( .*? ) but without success. PS: Solutions without awk or gensub would also be useful. | Two things are going on. First, in awk regular expressions . matches any character, including newline; since RS='^$' slurps the whole file into one record, the greedy .* in \/\/.*\n can run all the way down to the last newline in the file, which is why the match swallows everything. Marking it lazy ( .*? ) doesn't help because POSIX EREs have no lazy quantifiers. Use [^\n]* instead of .* . Second, \s is not a valid escape inside an awk bracket expression; use the POSIX class [[:space:]] instead of [\n\s] . With both fixes, something along these lines should do what you want (a sketch, not heavily tested): gawk -v RS='^$' '{$0=gensub(/^(([[:space:]]*(\/\/[^\n]*\n|import[^\n]*\n))+)/,"\\1import NEW_IMPORT\n",1)}1' test.swift The ^ anchor keeps the substitution at the top of the file, and capturing the whole comment/import block in \1 lets the new import be appended right after it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/490564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328155/"
]
} |
490,565 | I've been using CentOS 7 and its kernel version is 3.10. To check the kernel version, I typed 'uname -r' and the command showed 3.10.0-957.1.3.el7.x86_64 As far as I know, the MemAvailable metric was introduced in Linux kernel version 3.14. But I looked at /proc/meminfo and it showed the MemAvailable metric: MemTotal: 3880620 kB MemFree: 3440980 kB MemAvailable: 3473820 kB Why does my Linux show the MemAvailable metric when my kernel is below 3.14? | Because Red Hat (and therefore CentOS) does not ship a vanilla 3.10 kernel. The RHEL 7 kernel keeps the 3.10.0 version number for the life of the release, but it carries a large number of features backported from newer upstream kernels, and MemAvailable (added upstream in 3.14) is one of them. In other words, uname -r only tells you which upstream release the distribution kernel was originally based on, not which features it contains. You can usually confirm a backport in the package changelog, e.g.: rpm -q --changelog kernel | grep -i meminfo | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/490565",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328156/"
]
} |
490,572 | When I write the code the way below, I am able to run several commands after the else statement: if [ "$?" -eq 0 ] then echo "OK" else echo "NOK" exit 1 fi However, when I use another syntax I am unable to combine two commands after the OR : [ "$?" -eq 0 ] && echo "OK" || (echo "NOK" >&2 ; exit 1) In my use-case I have a complex script based on "$?" == 0 , so I'm looking for a way to abort (in addition to echoing the message) when it is not true. | The pair of ( ) spawns a subshell, defeating the goal of exiting the whole script with the exit command inside. Just replace the ( ) with { } and adjust the syntax, because { and } are not automatic delimiters but are treated more like commands: a space is needed after { , and the last command inside must end with some terminator ( ; fits). This will run the chain of commands inside in the same shell, so exit will affect this shell. [ "$?" -eq 0 ] && echo "OK" || { echo "NOK" >&2; exit 1;} UPDATE: @D.BenKnoble commented that should echo fail, the behaviour won't be like the former if ...; then ... else ... fi construct. So the first echo 's exit code has to be "escaped" with the no-op : command (which, being a built-in, can't fail): [ "$?" -eq 0 ] && { echo "OK"; :;} || { echo "NOK" >&2; exit 1;} References: POSIX : Grouping Commands The format for grouping commands is as follows: (compound-list) Execute compound-list in a subshell environment; see Shell Execution Environment. Variable assignments and built-in commands that affect the environment shall not remain in effect after the list finishes. [...] { compound-list;} Execute compound-list in the current process environment. The semicolon shown here is an example of a control operator delimiting the } reserved word. Other delimiters are possible, as shown in Shell Grammar; a <newline> is frequently used. dash manpage , bash manpage ,... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267199/"
]
} |
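The difference between the two groupings is easy to verify; with the subshell the script keeps going, with the brace group it really aborts:

    #!/bin/sh
    false
    [ "$?" -eq 0 ] && echo "OK" || ( echo "NOK" >&2; exit 1 )
    echo "still here: exit only left the subshell"
    false
    [ "$?" -eq 0 ] && echo "OK" || { echo "NOK" >&2; exit 1; }
    echo "never printed: exit ended the whole script"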
490,597 | I have a file which looks like this 18DMA H 9996 0.886 5.687 5.320 18DMA H 9997 1.019 5.764 5.247 18DMA Np 9998 0.947 5.584 5.151 18DMA H 9999 1.033 5.541 5.113 18DMA Cn10000 0.880 5.674 5.050 18DMA H10001 0.831 5.616 4.971 18DMA H10002 0.814 5.751 5.091 18DMA H10003 0.957 5.735 5.003 18DMA Cn10004 0.837 5.486 5.185 The desired output is to delete column 3; however, since from a certain row/line onward there is no space between the atom name and the number, I cannot make the deletion by column. Is there any way to make the deletion by selecting a certain number of characters? The desired output should be 18DMA H 0.886 5.687 5.320 18DMA H 1.019 5.764 5.247 18DMA Np 0.947 5.584 5.151 18DMA H 1.033 5.541 5.113 18DMA Cn 0.880 5.674 5.050 18DMA H 0.831 5.616 4.971 18DMA H 0.814 5.751 5.091 18DMA H 0.957 5.735 5.003 18DMA Cn 0.837 5.486 5.185 | Assuming you don't have <TAB> s but multiple spaces as field separators, and by looking at and counting your sample data, I came up with $ sed -E 's/^(.{15}).{5}/\1/' file 18DMA H 0.886 5.687 5.320 18DMA H 1.019 5.764 5.247 18DMA Np 0.947 5.584 5.151 18DMA H 1.033 5.541 5.113 18DMA Cn 0.880 5.674 5.050 18DMA H 0.831 5.616 4.971 18DMA H 0.814 5.751 5.091 18DMA H 0.957 5.735 5.003 18DMA Cn 0.837 5.486 5.185 It's using a "back reference" for the first 15 characters, restoring them with \1 in the replacement part of the s (substitute) command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298739/"
]
} |
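Two equivalent ways of doing the same character surgery, in case a regex feels like overkill (the first needs the GNU implementation of cut; both delete characters 16-20 of every line):

    cut --complement -c16-20 file
    awk '{print substr($0,1,15) substr($0,21)}' file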
490,625 | I am trying to troubleshoot a problem: usb mouse doesn't work on a freshly installed linux. I suspect the problem is that there is no suitable kernel module/driver for my usb hardware. Indeed: $ lspci -knn...01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b9] (rev 02) Subsystem: ASMedia Technology Inc. Device [1b21:1142]01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b5] (rev 02) Subsystem: ASMedia Technology Inc. Device [1b21:1062] Kernel driver in use: ahci... As you can see no kernel driver is reported for USB controller device (I suppose it should be reported in a way similar to driver reported for SATA controller) So, I need to rebuild kernel with a module which would be suitable for my device. But how can I find out what module should I build? I have information which identifies my device: it's vendor id and hardware id ([1b21:43b9]). How to find out corresponding kernel module name given this information? | PCI ID 1022:43b9 is an AMD X370 Series Chipset USB 3.1 xHCI Controller. The PCI subsystem ID 1b21:1142 would suggest it might actually be an ASMedia ASM1042A USB 3 controller, possibly integrated into the AMD chipset. For most USB 3.x controller chips, the appropriate driver module is xhci_pci which depends on module xhci_hcd . Both these modules are part of the standard Linux kernel, so they should be available in all modern Linux distributions. The corresponding kernel configuration options are CONFIG_USB_XHCI_PCI and CONFIG_USB_XHCI_HCD . Many distributions include the kernel configuration file as /boot/config-<kernel version number> . So, you could run this command: $ grep XHCI /boot/config-$(uname -r)CONFIG_USB_XHCI_HCD=mCONFIG_USB_XHCI_PCI=m# CONFIG_USB_XHCI_PLATFORM is not set Here, both xhci_hcd and xhci_pci are configured to be available as modules. If the lines would say ...=y instead, the USB 3 support would be compiled into the main kernel. PCI ID 1022:43b5, subsystem ID 1b21:1062 is an AHCI SATA (or eSATA) controller, which is already covered by module ahci . You can look up PCI IDs in PCI ID Repository . If a driver has been specified by vendor/product IDs, you could use /sbin/modprobe -c | grep '<vendor ID>.*<product ID>' . If you get back a line like this, you've found a match: alias pci:v0000<vendor ID>:d0000<product ID>sv... <module name> This information comes from /lib/modules/modules.alias[.bin] , which is generated by the depmod command from the device support information embedded in the kernel modules themselves (defined in the source code with a MODULE_DEVICE_TABLE macro). You can also use modinfo <module name> | grep alias to view the hardware support claimed by a particular module. However, not all modules are specified by vendor/product IDs. Some drivers will cover an entire class of devices; for example, the xhci_pci module claims support of PCI base class 0x0C, subclass 0x03, interface 0x30... which maps to "Serial bus controller", "USB controller" and "XHCI" respectively. This is expressed as alias: pci:v*d*sv*sd*bc0Csc03i30* Note that you should not normally need to do any of these lookups manually unless you've blacklisted some modules or the auto-detection fails for some reason. 
For example, when the Linux kernel detects the original poster's USB 3 controller, it will cause (the equivalent of) the following command to be executed: modprobe pci:v00001022d000043b9sv1b21sd1142bc0Csc03i30 which contains all the hardware vendor/device/class/subclass/interface IDs available for the device. If one of the wildcarded aliases on record in modprobe configuration matches this string, the respective module will get loaded automatically. For USB devices (and indeed for any autoprobeable buses), there is a similar system of module aliases, and a USB ID repository. If you don't have the appropriate module compiled on your system, your best bet is to use the PCI ID repository to identify the device or the chip used within it. Sometimes the repository entry identifies the Linux kernel module that will provide support for it. If that information is not present, you may need to google using the device/chip model; this will usually allow you to find any alternative/experimental driver modules that have not (yet?) been included in the standard kernel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50667/"
]
} |
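In practice you rarely need to grep modules.alias yourself: the kernel exports a ready-made alias string for every device, and modprobe can resolve it. A sketch using the PCI address from the question (adjust the 0000:01:00.0 part to your device):

    cat /sys/bus/pci/devices/0000:01:00.0/modalias
    # prints something like pci:v00001022d000043B9sv00001B21sd00001142bc0Csc03i30
    modprobe -R "$(cat /sys/bus/pci/devices/0000:01:00.0/modalias)"
    # -R resolves the alias and prints the matching module name(s), e.g. xhci_pci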
490,649 | If I have n files in a directory, for example: a b c How do I get pairwise combinations of these files (non-directional) to pass to a function? The expected output is a-b a-c b-c so that it can be passed to a function like fn -file1 a -file2 b fn -file1 a -file2 c ... This is what I am trying out now. for i in *.txt do for j in *.txt do if [ "$i" != "$j" ] then echo "Pairs $i and $j" fi done done Output Pairs a.txt and b.txt Pairs a.txt and c.txt Pairs b.txt and a.txt Pairs b.txt and c.txt Pairs c.txt and a.txt Pairs c.txt and b.txt I still have duplicates (a-b is the same as b-a) and I am thinking perhaps there is a better way to do this. | Put the file names in an array and run through it manually with two loops. You get each pairing only once if i < j , where i and j are the indexes used in the outer and the inner loop, respectively. $ touch a b c d $ f=(*) $ for ((i = 0; i < ${#f[@]}; i++)); do for ((j = i + 1; j < ${#f[@]}; j++)); do echo "${f[i]} - ${f[j]}"; done; done a - b a - c a - d b - c b - d c - d | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165231/"
]
} |
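Adapting the same double loop to actually invoke the command from the question (fn stands for whatever program you need to run on each pair):

    #!/bin/bash
    f=( *.txt )
    for (( i = 0; i < ${#f[@]}; i++ )); do
        for (( j = i + 1; j < ${#f[@]}; j++ )); do
            fn -file1 "${f[i]}" -file2 "${f[j]}"
        done
    done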
490,693 | I'm currently working with mutt to get it to display HTML mail properly. I've set up auto_view and a w3m invocation in the mailcap, and this works fine. However, the URLs for links in received mail are often longer than the width of my terminal window. This makes clicking them inconvenient, because they become split across multiple lines, and I have to manually reconstruct the original URL before I can copy it to the browser. I've set $markers to off, which successfully removes the plus sign marking a wrapped line. However, mutt is still wrapping the lines on its own (presumably just by echoing \n , since that's what I get on a copy), so it's still inconvenient to get the link. Is there a way to tell mutt to disable wrapping completely, and just let the terminal emulator handle it? This would allow the links to be copied in one operation. There doesn't seem to be such an option for the $wrap variable; I've tried setting $wrap to far wider than the terminal width, but this doesn't work. | Mutt's internal pager always wraps at the terminal width (or at $wrap ), so you can't hand over-long lines to the terminal emulator. The usual way around this for URLs is not to copy them out of the pager at all, but to extract them with an external tool such as urlview or urlscan . For example, with this macro in your muttrc: macro index,pager \cb "|urlview<enter>" "extract URLs" pressing Ctrl-b pipes the current message through urlview, which shows each URL unwrapped on its own line and can hand it straight to your browser. Alternatively, you can pipe the message to an external command of your choice with the | key and deal with wrapping there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103923/"
]
} |
490,764 | Is it possible to use bash to subtract variables containing 24-hour time? #!/bin/bashvar1="23:30" # 11:30pmvar2="20:00" # 08:00pmecho "$(expr $var1 - $var2)" Running it produces the following error. ./test expr: non-integer argument I need the output to appear in decimal form, for example: ./test 3.5 | The date command is pretty flexible about its input. You can use that to your advantage: #!/bin/bashvar1="23:30"var2="20:00"# Convert to epoch time and calculate difference.difference=$(( $(date -d "$var1" "+%s") - $(date -d "$var2" "+%s") ))# Divide the difference by 3600 to calculate hours.echo "scale=2 ; $difference/3600" | bc Output: $ ./test.bash3.50 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328302/"
]
} |
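If you would rather avoid spawning date twice, the arithmetic can be done in bash itself; this sketch assumes both times fall on the same day, and the 10# prefix keeps values like 08 from being parsed as invalid octal:

    #!/bin/bash
    var1="23:30"
    var2="20:00"
    IFS=: read -r h1 m1 <<< "$var1"
    IFS=: read -r h2 m2 <<< "$var2"
    # difference in minutes, then divide by 60 for decimal hours
    diff_min=$(( (10#$h1 * 60 + 10#$m1) - (10#$h2 * 60 + 10#$m2) ))
    echo "scale=2; $diff_min / 60" | bc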
490,801 | I (ab)use Alt + . to recover the last argument in a previous command (I'm using ZSH ): for example, $ convert img.png img.pdf$ llpp (alt + .) # which produces llpp img.pdf but sometimes I review a pdf with llpp $ llpp pdffile.pdf& and then if I try to do something else with pdffile.pdf I run into troubles $ llpp (`Alt` + `.`) # produces llpp & So, is there any way to recover pdffile.pdf using something similar to Alt + . ? $ echo $SHELL/usr/bin/zsh$ echo $TERMxterm | ESC-. ( insert-last-word ) considers any space-separated or space-separable shell token¹ a “word“, including punctuation tokens such as & . You can give it a numeric argument to grab a word other than the last one. Positive arguments count from the right: Alt + 1 Alt + . is equivalent to Alt + . , Alt + 2 Alt + . grabs the previous word, etc. Alt + 0 Alt + . is the previous word, and negative arguments continue from the left, e.g. Alt + - Alt + 1 Alt + . is the first argument. I have copy-earlier-word bound to ESC-, . Where repeated invocations of ESC-. insert the last word of successive commands going back in the history, repeated invocations of ESC-, after ESC-. insert the previous word of the same command. So with the following code in your .zshrc , you can get the next-to-last word of the previous command with Alt + . Alt + , . autoload -U copy-earlier-wordzle -N copy-earlier-wordbindkey '^[,' copy-earlier-word ¹ There are several reasonable definitions of “token” in this context. In this answer I'm going by the definition “something that insert-last-word considers to be a separate word”. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/490801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36165/"
]
} |
490,871 | I have just installed Lubuntu on VM VirtualBox. When I run an app from the terminal, e.g. firefox, it works but the terminal pops up some warnings. maciex@maciex-pc:~$ firefox (firefox:1152): Gtk-WARNING **: 16:15:43.300: Theme parsing error: <data>:1:34: Expected ')' in color definition (firefox:1152): Gtk-WARNING **: 16:15:43.300: Theme parsing error: <data>:1:77: Expected ')' in color definition (firefox:1152): GLib-GIO-CRITICAL **: 16:15:43.425: g_dbus_proxy_new: assertion 'G_IS_DBUS_CONNECTION (connection)' failed (firefox:1152): GLib-GIO-CRITICAL **: 16:15:43.425: g_dbus_proxy_new: assertion 'G_IS_DBUS_CONNECTION (connection)' failed (firefox:1152): GLib-GIO-CRITICAL **: 16:15:43.425: g_dbus_proxy_new: assertion 'G_IS_DBUS_CONNECTION (connection)' failed (firefox:1152): GLib-GIO-CRITICAL **: 16:15:43.426: g_dbus_proxy_new: assertion 'G_IS_DBUS_CONNECTION (connection)' failed (firefox:1152): GLib-GIO-CRITICAL **: 16:15:43.426: g_dbus_proxy_new: assertion 'G_IS_DBUS_CONNECTION (connection)' failed It's not about firefox; the same issue occurs with other apps. But if I start the same app not from the terminal, and then open a terminal and run the same app from the terminal, I do not get any warnings. maciex@maciex-pc:~$ ps -u maciex PID TTY TIME CMD 829 ? 00:00:00 systemd 840 ? 00:00:00 (sd-pam) 865 ? 00:00:00 gnome-keyring-d 868 ? 00:00:00 lxqt-session 886 ? 00:00:00 dbus-daemon 920 ? 00:00:00 ssh-agent 950 ? 00:00:00 openbox 953 ? 00:00:00 at-spi-bus-laun 960 ? 00:00:00 agent 964 ? 00:00:00 gvfsd 969 ? 00:00:00 pcmanfm-qt 970 ? 00:00:00 lxqt-globalkeys 971 ? 00:00:00 lxqt-notificati 972 ? 00:00:00 lxqt-panel 973 ? 00:00:00 lxqt-policykit- 974 ? 00:00:00 lxqt-runner 976 ? 00:00:00 gvfsd-fuse 979 ? 00:00:00 xscreensaver 990 ? 00:00:00 dbus-daemon 992 ? 00:00:00 applet.py 1001 ? 00:00:00 pulseaudio 1063 ? 00:00:00 gvfsd-trash 1069 ? 00:00:00 gvfs-udisks2-vo 1086 ? 00:00:00 gvfs-goa-volume 1092 ? 00:00:00 gvfs-gphoto2-vo 1101 ? 00:00:00 gvfs-mtp-volume 1105 ? 00:00:00 gvfs-afc-volume 1119 ? 00:00:00 lxqt-powermanag 1121 ? 00:00:00 qlipper 1123 ? 00:00:00 nm-tray 1131 ? 00:00:00 qterminal 1134 pts/0 00:00:00 bash 1142 pts/0 00:00:00 ps Could someone explain it? How can I solve it? Thanks | As noticed by John Little (thanks!), this is related to fcitx (hamster-cli:4440): GLib-GIO-CRITICAL **: 13:54:40.431: g_dbus_proxy_new: assertion 'G_IS_DBUS_CONNECTION (connection)' failed sudo apt purge fcitx-module-dbus removed the symptom. Tested in lubuntu-18.10 , default desktop ( LXQt ). That's probably https://gitlab.com/fcitx/fcitx/issues/396 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/490871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328411/"
]
} |
491,004 | I have a directory with files, subdirectories and symlinks. How do I zip only the files and folders from that directory, without the symlinks or the files referred to by the symlinks? | Use the zip -y option: it stores the link as-is instead of the complete file it points to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328551/"
]
} |
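Note that zip -y still records the symlinks in the archive, as link entries. If you want no symlink entries at all, one way (a sketch; find does not follow symlinks by default, and zip -@ reads names from stdin) is to feed zip an explicit list of regular files and directories:

    cd /path/to/dir &&
    find . -mindepth 1 \( -type f -o -type d \) -print | zip /tmp/archive.zip -@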
491,161 | Does an X client necessarily need a window manager to work? Can an X client work with only the X server? If an X client doesn't have a window, does it need a window manager to work? If an X client can work without a window manager, does the X client necessarily have no window? Thanks. | No. Well-written apps don't need a window manager. But some "modern" broken apps will not work fine without a window manager (e.g. firefox and its address bar suggestions, which won't drop down [1]). Many other subpar apps not only assume a window manager but, to add insult to injury, a click-to-focus window manager. For instance, it used to be that any java app would simply steal the focus on startup. If you want to test, install Xephyr (a "nested" X11 server), run it with Xephyr :1 , and then start your apps with DISPLAY=:1 in their environment. [1] the "awesome bar" of Firefox won't open its suggestions pane when typed into, or when the history button is clicked, unless there's a window manager running. The auto-hide menu won't work either. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/491161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
491,173 | I changed to Linux Mint about 2 days ago, and I feel uncomfortable when I type the sudo password, because I used to see nothing and now I see "*****". | Linux Mint added the behavior in /etc/sudoers.d/0pwfeedback . You could simply do as I did and delete the file, as it contains only that adjustment: sudo rm -rf /etc/sudoers.d/0pwfeedback | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328704/"
]
} |
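If you would rather keep the file around than delete it, the file consists only of the pwfeedback adjustment (per the answer above), so commenting it out has the same effect and is easy to revert; use visudo so the syntax gets checked:

    sudo visudo -f /etc/sudoers.d/0pwfeedback
    # change the line
    #     Defaults pwfeedback
    # to
    #     # Defaults pwfeedback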
491,175 | I've hijacked a pretty neat backup script from the internet but somewhere along the line there is something like this going on DIRS="/home/ /var/www/html /etc" tar -czf /backup/file.tar.gz "${DIRS}" Which happens to work fine on my machine but on my server it appears to think it's a single path and tells me it does not exist: /bin/tar: /home/ /var/www/html /etc: Cannot stat: No such file or directory /bin/tar: Exiting with failure status due to previous errors tar version locally is 1.29 while the server is 1.28 What's the proper way to supply the directories to tar separately from that variable? | As long as this is a bash script (or even most versions of sh) you should use an array to pass arguments rather than a variable: DIRS=('/home/' '/var/www/html' '/etc') tar -czf /backup/file.tar.gz "${DIRS[@]}" This can be written as follows if you prefer (usually easier to read if the array gets large): DIRS=( '/home/' '/var/www/html' '/etc' ) In a shell that does not support arrays you will need to unquote your variable to allow word splitting (not recommended if it can be avoided): DIRS="/home/ /var/www/html /etc" tar -czf /backup/file.tar.gz $DIRS When you quote the variable to pass these arguments it's essentially the same as: tar -czf /backup/file.tar.gz "/home/ /var/www/html /etc" However when you pass them through a quoted array it will be more like: tar -czf /backup/file.tar.gz "/home/" "/var/www/html" "/etc" Passing them through an unquoted array or variable will perform something similar to: tar -czf /backup/file.tar.gz /home/ /var/www/html /etc Which in this example should not be an issue but leaves it open to additional word splitting and other types of expansion that may be undesirable or potentially harmful. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318297/"
]
} |
491,184 | I've been playing around with Splunk and got some questions: I have installed it using rpm -iv splunk-7.2.3-06d57c595b80-linux-2.6-x86_64.rpm I removed it next using rpm -e splunk-7.2.3-06d57c595b80.x86_64 My question is, why didn't rpm remove the Splunk user from /etc/passwd ? Also I'm a bit puzzled why removing via splunk-7.2.3-06d57c595b80-linux-2.6-x86_64.rpm did not work (but the installation did) and I had to get the actual package name with rpm -qa | grep splunk first? Is this related to the Splunk rpm package or rather standard? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307920/"
]
} |
491,188 | I use rename to underscore spaces in filenames, simply with: rename "s/ /_/g" * but I encounter the problem that files downloaded from the internet often have multiple spaces. A nasty workaround I used (which works only for 3 spaces, though in most cases that is enough) is the following, but there has to be a more elegant approach: rename "s/ /_/g" *; rename "s/ /_/g" *; rename "s/ /_/g" * | The following worked for me: rename 's/\s+/_/g' * It will match one or more consecutive whitespace characters. Note this would also match newlines and tabs; based on your use case I think that is desirable rather than unwanted. But to match only the space character specifically you could do: rename 's/ +/_/g' * | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
491,189 | When I run my input file I get a file that contains 758 lines that look like this. DISTANCIA1.45_SIMETRIA1_GIRO2_ACTIVOS11/MoN-MVW.out::: Total energy:
DISTANCIA1.45_SIMETRIA1_GIRO2_ACTIVOS7/MoN-MVW.out::: Total energy:
DISTANCIA1.45_SIMETRIA1_GIRO2_ACTIVOS9/MoN-MVW.out::: Total energy:
DISTANCIA1.45_SIMETRIA1_GIRO4_ACTIVOS11/MoN-MVW.out::: Total energy: I need to sort so that it looks like this. DISTANCIA1.45_SIMETRIA1_GIRO2_ACTIVOS7/MoN-MVW.out::: Total energy:
DISTANCIA1.45_SIMETRIA1_GIRO2_ACTIVOS9/MoN-MVW.out::: Total energy:
DISTANCIA1.45_SIMETRIA1_GIRO2_ACTIVOS11/MoN-MVW.out::: Total energy:
DISTANCIA1.45_SIMETRIA1_GIRO4_ACTIVOS11/MoN-MVW.out::: Total energy: In other words I need it to be sorted by the numerical value that comes after the word ACTIVOS. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291942/"
]
} |
491,224 | I would like to see the tree of a big compressed file (specifically only the second level of directories) so I used the following command: tar -tf tarfile | tree -L 2 But it outputs the tree of the directory I am in, not of the compressed file. The other commands work fine, for example if I do: tar -tf tarfile | less It lets me explore the tarfile correctly. Am I doing something wrong, or can't I use tree like other commands through piping? If not, is there any other way to only see the files down to the second level of directories of a compressed file? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/297714/"
]
} |
491,416 | When I look at journalctl , it tells me the PID and the program name(or service name?) of a log entry. Then I wondered, logs are created by other processes, how do systemd-journald know the PID of these processes when processes may only write raw strings to the unix domain socket which systemd-journald is listenning. Also, do sytemd-journald always use the same technique to detect the PID of a piece of log data even when processes are producing log using functions like sd_journal_sendv() ? Is there any documentation I should read about this? I read JdeBP's answer and know systemd-journald listen on an Unix Domian Socket, but even if can know the peer socket address who send the log message, how does it know the PID? What if that sending socket is opened by many non-parent-children processes? | It receives the pid via the SCM_CREDENTIALS ancillary data on the unix socket with recvmsg() , see unix(7) . The credentials don't have to be sent explicitly. Example: $ cc -Wall scm_cred.c -o scm_cred$ ./scm_credscm_cred: received from 10114: pid=10114 uid=2000 gid=2000 Processes with CAP_SYS_ADMIN data can send whatever pid they want via SCM_CREDENTIALS ; in the case of systemd-journald , this means they can fake entries as if logged by another process: # cc -Wall fake.c -o fake# setcap CAP_SYS_ADMIN+ep fake$ ./fake `pgrep -f /usr/sbin/sshd`# journalctl --no-pager -n 1...Dec 29 11:04:57 debin sshd[419]: fake log message from 14202# rm fake# lsb_release -dDescription: Debian GNU/Linux 9.6 (stretch) systemd-journald handles datagrams and credentials sent via ancillary data is in the server_process_datagram() function from journald-server.c . Both the syslog(3) standard function from libc and sd_journal_sendv() from libsystemd will send their data via a SOCK_DGRAM socket by default, and getsockopt(SO_PEERCRED) does not work on datagram (connectionless) sockets. Neither systemd-journald nor rsyslogd accept SOCK_STREAM connections on /dev/log . scm_cred.c #define _GNU_SOURCE 1#include <sys/socket.h>#include <sys/un.h>#include <unistd.h>#include <err.h>int main(void){ int fd[2]; pid_t pid; if(socketpair(AF_LOCAL, SOCK_DGRAM, 0, fd)) err(1, "socketpair"); if((pid = fork()) == -1) err(1, "fork"); if(pid){ /* parent */ int on = 1; union { struct cmsghdr h; char data[CMSG_SPACE(sizeof(struct ucred))]; } buf; struct msghdr m = {0}; struct ucred *uc = (struct ucred*)CMSG_DATA(&buf.h); m.msg_control = &buf; m.msg_controllen = sizeof buf; if(setsockopt(fd[0], SOL_SOCKET, SO_PASSCRED, &on, sizeof on)) err(1, "setsockopt"); if(recvmsg(fd[0], &m, 0) == -1) err(1, "recvmsg"); warnx("received from %d: pid=%d uid=%d gid=%d", pid, uc->pid, uc->uid, uc->gid); }else /* child */ write(fd[1], 0, 0); return 0;} fake.c #define _GNU_SOURCE 1#include <sys/socket.h>#include <sys/un.h>#include <unistd.h>#include <stdlib.h>#include <stdio.h>#include <err.h>int main(int ac, char **av){ union { struct cmsghdr h; char data[CMSG_SPACE(sizeof(struct ucred))]; } cm; int fd; char buf[256]; struct ucred *uc = (struct ucred*)CMSG_DATA(&cm.h); struct msghdr m = {0}; struct sockaddr_un ua = {AF_UNIX, "/dev/log"}; struct iovec iov = {buf}; if((fd = socket(AF_LOCAL, SOCK_DGRAM, 0)) == -1) err(1, "socket"); if(connect(fd, (struct sockaddr*)&ua, SUN_LEN(&ua))) err(1, "connect"); m.msg_control = &cm; m.msg_controllen = cm.h.cmsg_len = CMSG_LEN(sizeof(struct ucred)); cm.h.cmsg_level = SOL_SOCKET; cm.h.cmsg_type = SCM_CREDENTIALS; uc->pid = ac > 1 ? atoi(av[1]) : getpid(); uc->uid = ac > 2 ? atoi(av[2]) : geteuid(); uc->gid = ac > 3 ? 
atoi(av[3]) : getegid(); iov.iov_len = snprintf(buf, sizeof buf, "<13>%s from %d", ac > 4 ? av[4] : "fake log message", getpid()); if(iov.iov_len >= sizeof buf) errx(1, "message too long"); m.msg_iov = &iov; m.msg_iovlen = 1; if(sendmsg(fd, &m, 0) == -1) err(1, "sendmsg"); return 0;} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301641/"
]
} |
491,419 | On an Ubuntu ( $ uname -a : Linux kumanaku 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux ), I just installed fish ( $ fish --version : fish, version 2.7.1 ) using the following commands: sudo apt-add-repository ppa:fish-shell/release-2
sudo apt-get update
sudo apt-get install fish
chsh -s /usr/bin/fish
echo /usr/bin/fish | sudo tee -a /etc/shells I can launch fish and use it but when I launch a simple shell file like: echo "something" I got the following message: $ ./myscript.sh
Failed to execute process './myscript.sh'. Reason:
exec: Exec format error
The file './myscript.sh' is marked as an executable but could not be run by the operating system. There's no shebang in my script. If I add #!/usr/bin/env fish , everything's ok (i.e. the script is successfully launched) but I'd like to avoid such a line to keep my script compatible with different shells. Any idea? | You need a shebang line if the executable file cannot be run natively by the kernel. The kernel can only run machine code in a specific format ( ELF on most Unix variants), or sometimes other formats (e.g. on Linux you can register executable formats through binfmt_misc ). If the executable file needs an interpreter then the kernel needs to know which interpreter to call. That's what the shebang line is for. If your script is in fish syntax, its first line must be #!/usr/bin/env fish (You can use the absolute path instead, but then you'll have to modify the script if you want to run it on a machine where the fish executable is in a different location, e.g. /usr/bin/fish vs /usr/local/bin/fish .) If your script is in sh syntax, use #!/bin/sh (All modern Unix systems have a POSIX sh at /bin/sh so you don't need env .) If your script is in bash syntax (which is sh plus some bash-specific extensions), use #!/usr/bin/env bash On Linux, in practice, #!/bin/bash will also work. All of this is independent of which shell you're calling the script from. All that matters is what language the script is written in. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325612/"
]
} |
491,433 | I want to display free memory as a percentage. I'm pretty sure that I'm not displaying the free memory but the used one. Or am I wrong? mem=`free -m | awk 'NR ==2{printf $3,$2,$3*100/$2}'`
echo $mem | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328879/"
]
} |
491,434 | Ever seen a print of a long discussion thread? Looks like the picture of a skyscraper… with only the building in it! So what I generally do — manually! — is break up the image into segments, and then use this script to stack them side by side using imagemagick: for f in "$@"
do
    h=($(sips -g pixelHeight "$f" | grep -o '[0-9]*$'))
    if [[ $h -gt $height ]]; then
        height=$h
    fi
done
convert +append "$@" -geometry x$height ~/Desktop/Hcombined.png How can I extend it to do the whole thing on its own? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296574/"
]
} |
491,452 | A few posts ago someone asked how to show memory in percentage. Someone replied with: free | awk '/^Mem/ { printf("free: %.2f %\n", $4/$2 * 100.0) }' I was wondering if I can turn this command into an alias in ~/.bashrc. But the syntax of alias is: alias aliasname='command' How can I do this? That command contains both ' and " . I tried different ways, but it didn't work. Is this even possible? Am I missing something? | You need: alias aliasname="free | awk '/^Mem/ { printf(\"free: %.2f %\n\", \$4/\$2 * 100.0) }'" Notice that you need to escape both " and $ . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314925/"
]
} |
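As a side note, a shell function sidesteps the nested-quoting dance entirely and is usually easier to maintain than a complex alias. A sketch for ~/.bashrc (the function name memfree is arbitrary; note that %% is the portable way to print a literal percent sign in awk's printf):

memfree() {
    free | awk '/^Mem/ { printf("free: %.2f %%\n", $4/$2 * 100.0) }'
}

Afterwards, run it as memfree, just like the alias.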
491,501 | I want to format a USB storage device from the terminal and I have found several formats to do it. It's the first time I'm going to do this and I have doubts. I want to do it well. I have these options and I want to know which one is convenient and compatible with all operating systems. # mkfs.vfat -n 'dickEt' -I /dev/sdd1
# mkfs.ntfs -n 'dickEt' -I /dev/sdd1
# mkfs.ext2 -n 'dickEt' -I /dev/sdd1
# mkfs.ext3 -n 'dickEt' -I /dev/sdd1
# mkfs.ext4 -n 'dickEt' -I /dev/sdd1
# mkfs.msdos -n 'dickEt' -I /dev/sdd1
# mkfs.xfs -n 'dickEt' -I /dev/sdd1
# mkfs.bfs -n 'dickEt' -I /dev/sdd1 | The answer to your question¹ is simple: mkfs.msdos -n 'dickEt' -I /dev/sdd1 However, it comes with the following limitations: Maximum file size is 4GB Maximum partition size is 2TB OS - File system compatibility (mini) matrix: FAT NTFS EXT[2..4] BTRFS XFS HPFS
Amiga x
MS-DOS, Win95, 98 x
NT, W2K, ... W10 x x 2
MacOS x 3 4 x
Linux x x x x x x Note 1: You asked for maximum OS compatibility and that's the only answer, as FAT is compatible with most OSes because it's one of the oldest and least capable file systems. ( Not ALL OSes! E.g. the C64 does not support FAT!) Note 2: Commercial Tryware if you want write capabilities. Note 3: Commercial Software if you want write capabilities. Note 4: Read-only | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/290919/"
]
} |
491,503 | I'm trying to figure out how to quickly send the window with focus to the next workspace numerically. Using the following lines in my ~/.config/i3/config file, I'm able to bind Super [ and Super ] to move the currently focused window to the previous and next workspace. # use brackets to move window to adjacent workspace
bindsym $mod+bracketright move to workspace prev
bindsym $mod+bracketleft move to workspace next However, only workspaces with windows currently in them are candidates for receiving the window. In particular, if only one workspace is currently non-empty, Super [ and Super ] can't be used to declutter the current workspace by moving windows to an adjacent workspace. Does i3 expose the ability to send a window to workspace n+1 or n-1 regardless of whether the workspace is empty or not? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86874/"
]
} |
491,513 | I have created a Ubuntu VM on a server running Ubuntu Server 16.04.5 LTS using the following command: sudo virt-install \
    --name TEST \
    --memory 2048 \
    --vcpus 2 \
    --location 'http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/' \
    --os-variant ubuntu16.04 \
    --disk path=/pools/pool0/images/vm/test,size=150,bus=virtio,sparse=no,format=qcow2 \
    --filesystem type=mount,source=/pools/pool0/volumes/shared,target=shared,mode=mapped \
    --network network=vms \
    --graphics none \
    --virt-type kvm \
    --hvm \
    --console pty,target_type=serial \
    --extra-args 'console=ttyS0,115200n8 serial' Note that I have created a shared folder, called shared with mapped access in order to allow reading and writing on the guest. I then start the VM with this command: virsh start TEST --console Inside the guest, I have edited /etc/fstab to auto-mount the shared folder with this line, where UID 1000 is my user and GID 1000 is the associated group which contains no other members: shared /mnt 9p trans=virtio,version=9p2000.L,rw,uid=1000,gid=1000 0 0 In the /mnt directory on the guest, running ls -ln gives the following output: $ ls -ln /mnt
total 42
drwxrwxr-x 8 1000 1000  8 Jul 28 23:52 Backups
drwxrwxr-x 6 1000 1000  6 Dec 28 00:15 Media
drwxrwxr-x 6 1000 1000 67 Mar 31  2018 Misc
drwxrwxr-x 2 1000 1000  4 Mar 31  2018 Recipes I get the same output when running ls -ln on the host in the /pools/pool0/volumes/shared directory: $ ls -ln /pools/pool0/volumes/shared
total 42
drwxrwxr-x 8 1000 1000  8 Jul 28 23:52 Backups
drwxrwxr-x 6 1000 1000  6 Dec 28 00:15 Media
drwxrwxr-x 6 1000 1000 67 Mar 31  2018 Misc
drwxrwxr-x 2 1000 1000  4 Mar 31  2018 Recipes In the guest, I can create and modify files and folders as myself, an unprivileged user: $ mkdir /mnt/Media/test-dir
$ touch /mnt/Media/test-file
$ ls -ln /mnt/Media
total 75
drwxrwxr-x 199 1000 1000 199 Dec 28 22:07 Movies
drwxrwxr-x 152 1000 1000 153 Dec 25 16:26 Music
drwxrwxr-x  75 1000 1000  75 Jul 16 21:02 Photos
drwxrwxr-x   2 1000 1000   2 Dec 29 20:30 test-dir
-rw-rw-r--   1 1000 1000   0 Dec 29 20:31 test-file
drwxrwxr-x  15 1000 1000  15 Dec 18 15:40 TV Shows However, on the host OS, these files and folders have been given root only access: $ ls -ln /pools/pool0/volumes/shared/Media
total 75
drwxrwxr-x 199 1000 1000 199 Dec 28 22:07 Movies
drwxrwxr-x 152 1000 1000 153 Dec 25 16:26 Music
drwxrwxr-x  75 1000 1000  75 Jul 16 21:02 Photos
drwx------   2    0    0   2 Dec 29 20:30 test-dir
-rw-------   1    0    0   0 Dec 29 20:31 test-file
drwxrwxr-x  15 1000 1000  15 Dec 18 15:40 TV Shows I run automated scripts on my server, and for these to work I need these folders and directories to be created with UID 1000 , GID 1000 , permissions of rwxrwxr-x (775) for directories, and permissions of rw-rw-r-- (664) for files. I do not want to have to manually run chmod and chown with sudo each time I create a new file / directory. I need to fix this issue, preferably without having to re-install the VM from scratch. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329004/"
]
} |
491,519 | I would like to remove rows that have an "na" file 0.000
0.000
0.055
0.036
0.003
0.002
0.000
0.002
0.002
0.002
0.000
na
na
0.000
0.000
na
0.002
0.002
0.003 output 0.000
0.000
0.055
0.036
0.003
0.002
0.000
0.002
0.002
0.002
0.000
0.000
0.000
0.002
0.002
0.003 I was trying to do this in R but i couldn't collapse the rows, just remove the na | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/491519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213707/"
]
} |
491,626 | In a bash shell in a terminal emulator window of lxterminal, I run $ trap "echo hello" SIGHUP
$ kill -s HUP $$
hello
$ and then I close the terminal emulator window. Does closing the terminal emulator window only cause SIGHUP to be sent to the controlling process, i.e. the bash process? Since the SIGHUP trap doesn't terminate the bash process, I expect the bash process isn't terminated, but why is the bash process actually terminated? The same thing happens if I change the trap to "" (ignore). Terminal emulator matters. In bash running in a xterm window, setting trap to "" will make a xterm window not closable, while setting trap to echo hello can still close a xterm window. Thanks. | [ I'll ignore the actual and possible behavior of different terminal emulators; a completely reasonable behavior would be to send a ^D ( VEOF ) to the pty on a window close / WM_DELETE_WINDOW , instead of tearing it down and causing the process running in it to receive a SIGHUP ; the following assumes xterm , which will send a SIGHUP to the shell's process group in that case ]. The behavior you're seeing is because of the readline library, which is installing its own signal handlers. If you try the following: xterm -e bash --noediting (or dash , zsh or ksh instead of bash --noediting ), then run trap 'echo HUP' HUP in the terminal, the window will become unclose-able; the shell will just print HUP as expected on trying to close the window; forcefully closing it (eg. with xkill ) will cause the shell to exit with EIO errors, which is completely expected, since the pty was torn down. Here is a simpler testcase for the behavior you're observing, not involving terminal emulators. Run the following in your terminal: bash --rcfile <(echo 'trap "echo HUP" HUP') Then kill -HUP $$ will just print HUP , but (sleep 1; kill -HUP $$) & (or kill -HUP <pid> from another window) will cause the shell to print exit and exit, unless you started it with --noediting (= don't use readline) The readline() function as called by bash will install its own signal handlers upon waiting for input from the user, and restore the original handlers upon returning; a SIGHUP while waiting for input from the user will cause it to return NULL , which will be treated as EOF by bash (in the yy_readline_get() function), before having the chance to run the deferred trap handler. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
491,691 | tgid and pid are the same concept for any process or for any lightweight process. In /proc/${pid}/status , tgid and pid are distinct fields. Are tgid and pid ever different for a process or lightweight process? Thanks. | When looking at /proc/${pid}/status , then the Tgid: and Pid: fields will always match, since they're the same for a process or for the main thread of a process. The reason why there are two separate fields is that the same code is used to produce /proc/${pid}/task/${tid}/status , in which Tgid: and Pid: may differ from each other. (More specifically, Tgid: will match ${pid} and Pid: will match ${tid} in the file name template used above.) The naming is a bit confusing, mainly because threading support was only added to the Linux kernel later on and, at the time, the scheduler code was modified to reuse the logic that used to schedule processes so it would now schedule threads. This resulted in reusing the concept of "pids" to identify individual threads. So, in effect, from the kernel's point of view, "pid" is still used for threads and "tgid" was introduced for processes. But from userspace you still want the PID to identify a process, therefore userspace utilities such as ps , etc. will map kernel's "tgid" to PID and kernel's "pid" to "tid" (thread id.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
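A quick way to see the two status fields side by side is to compare a process with one of its threads; a sketch (pick any multithreaded process and substitute its PID for the placeholder 1234):

pid=1234                                   # placeholder: any multithreaded process
grep -E '^(Tgid|Pid):' /proc/$pid/status   # both lines show $pid
for tid in /proc/$pid/task/*; do
    grep -E '^(Tgid|Pid):' "$tid/status"   # Tgid stays $pid; Pid is the thread id
done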
491,694 | In the process of external HDD recovery, I backed-up the whole disk to LVM LV, run fsck and resize2fs to fit into new drive (both external USB HDD, but the newer one is few MB smaller). I used ddrescue to copy the data into the LV. Now, when I use dd to copy data from the LV to the physical drive I get broken partition table. When I fix the partition table (to be same as the LV's partition table) I get errors from fsck. I run dd bs=100M if=/dev/mapper/backup--vg-backup--lv of=/dev/sdh to get the data from the LV to the physical drive. Both LV and physical drive use same logical/physical sector size. My question is how to copy data from LV (which contains a whole disk) back to physical disk? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329158/"
]
} |
491,773 | I have a while loop in my script which waits for the connection to get online and then continues. #!/bin/sh
while ! ping -c1 $1 &>/dev/null
do
    echo "Ping Fail - `date`"
done
echo "Host Found - `date`" It takes 25 to 45 seconds for the connection to reconnect. I cannot let it wait for any longer than 50 seconds. What would be the best solution to limit the time the while loop works? | Without a while loop: # -W 50 = timeout after 50 seconds
# -c 1 = 1 packet to be sent
response="$(ping -W 50 -c 1 "$1" | grep '1 packets transmitted, 1 received')"
if [ -z "$response" ] ; then
    echo no response after 50 seconds
else
    echo connected
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329096/"
]
} |
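If you would rather keep the original loop shape, a deadline check bounds it to 50 seconds as well; a rough sketch (note that the question's &> redirection is a bashism, hence the explicit >/dev/null 2>&1 under #!/bin/sh):

#!/bin/sh
deadline=$(( $(date +%s) + 50 ))             # give up 50 seconds from now
while ! ping -c 1 "$1" >/dev/null 2>&1
do
    echo "Ping Fail - $(date)"
    if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "Gave up after 50 seconds - $(date)"
        exit 1
    fi
done
echo "Host Found - $(date)"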
491,793 | $ tpgid=$(ps --no-headers -o tpgid -p 1)
$ echo $tpgid
-1
$ if [[ $tpgid == "-1" ]]; then
> echo "yes"
> else
> echo "no"
> fi
no Why is the condition not true? Thanks. $ printf "%s" "$tpgid" > /tmp/test/fff
$ hd /tmp/test/fff
00000000  20 20 20 2d 31                                    |   -1|
00000005 | Even though [[ ... ]] is "smarter" than [ ... ] or test ... , it's still a better idea to explicitly use numerical comparison operators: if [[ "$tpgid" -eq -1 ]]; then ... Further, your hexdump: $ hd /tmp/test/fff
00000000  20 20 20 2d 31                                    |   -1|
00000005 shows that $tpgid expands to "   -1" , not "-1" ; -eq knows how to handle this, while == is rightly doing a string comparison: $ if [[ "   -1" == -1 ]]; then echo truthy; else echo falsy; fi
falsy
$ if [[ "   -1" -eq -1 ]]; then echo truthy; else echo falsy; fi
truthy In short, the string matching condition is not returning true because the strings in fact do not match. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
491,809 | I'm trying to change fonts in URxvt. $ fc-match "FuraCode Nerd Font Mono"
Fura_Code_Regular_Nerd_Font_Complete_Mono.otf: "FuraCode Nerd Font Mono" "Regular" But when changing ~/.Xresources like this URxvt.font: xft:FuraCode Nerd Font Mono:pixelsize=12 and running xrdb -merge ~/.Xresources The next session uses the same font as i3. Tried the same with xterm and it is working. What am i doing wrong? edit: urxvt -fn "xft:FuraCode Nerd Font Mono:pixelsize=15" is working too. In ~/xsession-errors: urxvt: unable to load base fontset, please specify a valid one using -fn, aborting | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224563/"
]
} |
491,814 | On an Orange Pi Zero running a Raspbian server, it's possible to use the watchdog very easily just by running the command echo 1 > /dev/watchdog as root. The idea is that the system will certainly reboot after some time that this command is executed, so I need to keep repeating this command in a regular interval of time to keep the system on. We can implement a watchdog using cron as root and making it execute the following script on boot: #!/bin/bash
while [ true ]; do
    echo 1 > /dev/watchdog
    sleep 5
done This script works fine on the Orange Pi Zero... However, on my desktop computer running Ubuntu 18.04 the command echo 1 > /dev/watchdog doesn't work at all. Is it possible to activate the watchdog on any device running Linux? | There are two types of watchdog; hardware and software. On the Orange Pi the SOC chip provides a hardware watchdog. If initialised then it needs to be pinged every so often, otherwise it performs a board reset. However not many desktops have hardware watchdogs, so the kernel provides a software version. Now the kernel will try and keep track, and force a reboot. This isn't as good as a hardware watchdog because if the kernel, itself, breaks then nothing will trigger the reset. But it works. The software watchdog can be initialised by loading the softdog module % modprobe softdog
% dmesg | tail -1
[  120.573945] softdog: Software Watchdog Timer: 0.08 initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0) We can see this has a 60 second timeout by default. If I then do % echo > /dev/watchdog
% dmesg | tail -1
[  154.514976] watchdog: watchdog0: watchdog did not stop! We can see the watchdog hadn't timed out. I then leave the machine idle for a minute and on the console I see [  214.624112] softdog: Initiating system reboot and the OS reboots. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103357/"
]
} |
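One practical detail worth adding: the Linux watchdog API defines a "magic close". If the kernel was not built with CONFIG_WATCHDOG_NOWAYOUT, writing the character 'V' to the device right before closing it disarms the watchdog instead of letting it fire. A sketch:

exec 3>/dev/watchdog    # arm the watchdog, keep it open on fd 3
for i in 1 2 3; do
    echo 1 >&3          # feed it periodically
    sleep 5
done
printf 'V' >&3          # magic close: disarm rather than reboot
exec 3>&-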
491,823 | Sometimes you need to unmount a filesystem or detach a loop device but it is busy because of open file descriptors, perhaps because of a smb server process. To force the unmount, you can kill the offending process (or try kill -SIGTERM ), but that would close the smb connection (even though some of the files it has open do not need to be closed). A hacky way to force a process to close a given file descriptor is described here using gdb to call close(fd) . This seems dangerous, however. What if the closed descriptor is recycled? The process might use the old stored descriptor not realizing it now refers to a totally different file. I have an idea, but don't know what kind of flaws it has: using gdb , open /dev/null with O_WRONLY (edit: a comment suggested O_PATH as a better alternative), then dup2 to close the offending file descriptor and reuse its descriptor for /dev/null . This way any reads or writes to the file descriptor will fail. Like this: sudo gdb -p 234532
(gdb) set $dummy_fd = open("/dev/null", 0x200000) // O_PATH
(gdb) p dup2($dummy_fd, offending_fd)
(gdb) p close($dummy_fd)
(gdb) detach
(gdb) quit What could go wrong? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240014/"
]
} |
491,854 | I am trying to write a function but I am getting a syntax error. Below is my function: checkNoOfParameter () {
    if [[ ${1} -eq ${2} ]]
    then
        job_Status = $true
    else
        job_Status = $false
        echo "Please provide all \"${2}\" arguments with single space separation"
        readArgumentsFromUser ${2}
} and I am calling the function like this: readArgumentsFromUser () {
    read -a input
    checkNoOfParameter ${#input[*]} ${1}
} readArgumentsFromUser 3 | You're missing the fi to end the if statement before the } closes the function. You also have spaces in your assignments that shouldn't be there, so you will subsequently get errors "job_Status: command not found"; remove the spaces on both sides of the = . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329279/"
]
} |
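Putting the two fixes together (and leaving $true / $false as whatever the asker defines them to be elsewhere), the function would read:

checkNoOfParameter () {
    if [[ ${1} -eq ${2} ]]
    then
        job_Status=$true    # no spaces around =
    else
        job_Status=$false
        echo "Please provide all \"${2}\" arguments with single space separation"
        readArgumentsFromUser "${2}"
    fi                      # the fi that was missing
}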
491,944 | Using a Lenovo Legion Y520 with i7-7700HQ (base clock 2.8Ghz) and GTX 1050. I'm getting CPU overheating warnings in linux and it's affecting my performance in games (found in Payday 2 and CS:GO). I've never had problems in Windows. This is what I found when trying to troubleshoot this issue: In Windows 10 (using aida64) Windows stays at around 3.4 Ghz on idle (because my power settings are set to 'high performance' instead of the default 'balanced'), with a temperature of around 50C. When stressing the cpu, the temperature goes slowly (in a couple seconds instead of instantly) from about 50C to around 75C and stays there comfortably. Clock speeds are about 2.9Ghz when stressing. Utilization is always 100%. Aida64 doesn't report throttling. The voltage on the CPU core goes from about 1.1 to 0.9 when stressing. In Arch Linux (using s-tui) Linux stays at around 2.0Ghz on idle, with a temperature of around 50C. Here's where it gets weird: when stressing the cpu, the temperature IMMEDIATELY goes from 50C to about 93C. Clock speeds are exactly 3.4Ghz when stressing. Utilization is always 100%. When turing the stress test off, the temperature IMMEDIATELY goes back to about 50C, as if nothing ever happened. The laptop certainly doesn't feel like it heats up to 90C+ when doing this, even after a long stress. Here's an image that shows how temperature, power, and frequency all go down at the exact same time. Notice how much cpu temperature changes in so little time. How do I fix this throttling issue? Do I undervolt my CPU in linux? How come it reads temperatures wrong in Linux but not in Windows? I changed the profile using cpupower from powersave to performance. I still see the same throttling in s-tui. There is a jump up in idle cpu frequency when setting to performance (instead of around 2000-2500Mhz to always at 3400Mhz), but that's the only thing that has changed. Fan control I tried to control fans using fancontrol (lm_sensors) , but pwmconfig says there are no pwm-capable sensor modules installed. I tried it with NBFC , but it doesn't seem to be doing anything, no matter what profile I choose. I don't even know if NBFC can control my fans, but it doesn't report any errors when choosing a profile. I also tried thinkfan , but it doesn't seem to help with throttling. It also thinks my fan's speed is at 8RPM, see this thread Solution I found that lowering the maximum allowed cpu frequency using cpupower to something like 3100MHz instead of the default 3800 fixes all issues. sudo cpupower frequency-set -u 3100MHz I also changed max_freq in /etc/default/cpupower to the same value, to make it permanent. I found that this does result in a slight fps drop in games, but nothing serious. At least my fps is stable :) Sadly I think this might result in decreased performance in non-gaming tasks like when compiling something. After 1.5 years I just stability tested Windows again (with AIDA64) and found it now also thermal throttles. As you can see in the image below the temperatures jump quickly to the high 90s and AIDA64 reports throttling. The clock speed idles at 3.4GHz and a few seconds after starting the test it drops to around 800MHz, before jumping up to 3.4GHz again a second later. It doesn't decide to lower the clock speed while stresstesting to something like 2.9GHZ (like before). How come it suddenly stopped lowering the maximum frequency in Windows? | The difference is due to windows and linux using different CPU throttling profiles. You do have some control over this on linux. 
For example, the following command will show you which profile is currently being used: cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor There are ways to choose which profiles to use. The Arch Linux wiki has good information on this, and it may be worth a read: CPU Frequency Scaling - Arch Wiki There is an additional issue of fan control -- you need to make sure you have the proper drivers for controlling your fans and that they are set to a high enough speed when gaming. Linux on Laptops can be a helpful resource. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302467/"
]
} |
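For reference, switching the governor mentioned above can be done through sysfs or with cpupower; a sketch (the available governor names depend on the cpufreq driver, e.g. intel_pstate typically offers only performance and powersave):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# or, equivalently:
sudo cpupower frequency-set -g powersave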
491,948 | I try to change owner to root:users recursively below a directory, if the owner is other than root:users. cd /dir/
find . \( ! -user root -o ! -group users \) -print0 | xargs -0 chown -vc root:users I get this error: chown: missing operand after ‘root:users’
Try 'chown --help' for more information. Why do I get the error?
How can I fix it? | Use the recursive switch on chown: chown -R root:users dir And that should do it. More to why you have an error: if the find command doesn't find any files, then chown will be executed without an operand at the end, which generates this error. If you are really intent on sticking with your original command format, you can add the -r switch to xargs and it should get rid of the error when no files are found. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208148/"
]
} |
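Putting that together, the original pipeline becomes safe against the empty-match case (-r / --no-run-if-empty is a GNU xargs extension):

cd /dir/
find . \( ! -user root -o ! -group users \) -print0 | xargs -0 -r chown -vc root:users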
491,960 | In the case of test -e for example: VAR="<this is a file path: /path/to/file/or/dir"
test -e $VAR && do_something || do_anotherthing Question: Should I use "$VAR" here? I don't like verbosity when it's not necessary, and since $VAR in this case is obviously a path, if it's an empty string the test should always fail, because no path is the empty string; by that logic, the double quotes are not necessary. But in the case of a string test, test -n for example: VAR="<this is a string"
test -n $VAR && do_something || do_anotherthing then by the same logic, $VAR should be put in double quotes: "$VAR" , because it can expand to an empty string, and if unquoted the expansion leaves -n as the only argument, which is always true. So the actual question, given that doubt: should we only use double quotes in the test command with -n and -z against strings? | A general rule is to double quote any variable, unless you want the shell to perform token splitting and wildcard expansion on its content. because $VAR obviously in this case is a path then if it's empty string it should always fail [...] On the contrary. The behavior of test -e with no operands is another reason you should quote the variable in this particular case: $ unset foo     # or foo=""
$ test -e $foo  # note this is equivalent to `test -e'
$ echo $?
0
$ test -e "$foo"
$ echo $?
1
$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/491960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256195/"
]
} |
492,006 | In the directory /home/username/data I have both files and directories. Some of these filenames end in .txt (to which I'll refer as text files ), others don't. The same happens in the subdirectories. One of the subdirectories is called other_files (its full path is /home/username/data/other_files/ ). I'd like to move all the files not ending with .txt in the root of /home/username/data to other_files . I could possibly do it with a loop, but that's not what I want. I want to use commands and piping. I believe this is easy, I'm just not seeing it. A combination of mv , find , grep and xargs should do it, I'm just not sure how. So I'm stuck in trying to match the text files (to then think of way to match everything except them). In the following, assume my current directory is /home/username/data . First I went for find . | grep -E "*\.txt" , but this matches all text files, including the ones in the subdirectories. So I tried find . | grep -E "\./*\.txt" just to see if I would get the same matches to then work my way towards my goal, but this doesn't match anything and this is where I'm stuck. How do I go about doing what I described at the beginning of the question? | The simple shell loop variant (in bash ): shopt -s extglob dotglob nullglob
for pathname in ~username/data/!(*.txt); do
    ! test -d "$pathname" && mv "$pathname" ~username/data/other_files
done The shell options set on the first line will make the bash shell enable extended globbing patterns ( !(*.txt) to match all names not ending with .txt ), it enables glob patterns to match hidden names, and it makes the pattern expand to nothing at all if nothing matches. The body of the loop will skip anything that is a directory (or symbolic link to a directory) and will move everything else to the given directory. The equivalent thing with find and GNU mv (will move symbolic links to directories if there are any, and will invoke mv for as many files as possible at a time, but those are the only differences): find ~username/data -maxdepth 1 ! -type d ! -name '*.txt' \
    -exec mv -t ~username/data/other_files {} + Related: Understanding the -exec option of `find` | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329410/"
]
} |
492,013 | Updating repositories with apt update fails since the public keys of several signatures are not available. $ sudo apt update
Get:1 http://security.debian.org/debian-security stretch/updates InRelease [94.3 kB]
Ign:2 http://deb.debian.org/debian stretch InRelease
Hit:3 http://deb.debian.org/debian stretch Release
Ign:4 http://ftp.fr.debian.org/debian stretch InRelease
Ign:6 http://ftp.fr.debian.org/debian jessie InRelease
Hit:7 https://fr.archive.ubuntu.com/ubuntu bionic InRelease
Hit:8 http://ftp.fr.debian.org/debian stretch Release
Get:5 http://ftp.fr.debian.org/debian stretch-updates InRelease [91.0 kB]
Hit:9 https://riot.im/packages/debian stretch InRelease
Hit:10 http://ftp.fr.debian.org/debian jessie Release
Err:1 http://security.debian.org/debian-security stretch/updates InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9D6D8F6BC857C906 NO_PUBKEY 8B48AD6246925553
Get:11 https://mega.nz/linux/MEGAsync/Debian_9.0 ./ InRelease [1,480 B]
Ign:12 http://download.opensuse.org/repositories/home:/strycore/Debian_9.0 ./ InRelease
Hit:13 http://download.opensuse.org/repositories/home:/strycore/Debian_9.0 ./ Release
Err:14 http://deb.debian.org/debian stretch Release.gpg
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 NO_PUBKEY EF0F382A1A7B6500
Get:15 https://content.runescape.com/downloads/ubuntu trusty InRelease [2,236 B]
Err:16 http://ftp.fr.debian.org/debian stretch Release.gpg
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 NO_PUBKEY EF0F382A1A7B6500
Err:9 https://riot.im/packages/debian stretch InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E019645248E8F4A1
Err:5 http://ftp.fr.debian.org/debian stretch-updates InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010
Err:17 http://ftp.fr.debian.org/debian jessie Release.gpg
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 NO_PUBKEY CBF8D6FD518E17E1
Err:11 https://mega.nz/linux/MEGAsync/Debian_9.0 ./ InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 4B4E7A9523ACD201
Err:18 http://download.opensuse.org/repositories/home:/strycore/Debian_9.0 ./ Release.gpg
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 2F7F0DA5FD5B64B9
Err:15 https://content.runescape.com/downloads/ubuntu trusty InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7373B12CE03BEB4B
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://security.debian.org/debian-security stretch/updates InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9D6D8F6BC857C906 NO_PUBKEY 8B48AD6246925553
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://deb.debian.org/debian stretch Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 NO_PUBKEY EF0F382A1A7B6500
W: Skipping acquire of configured file 'xenial/binary-amd64/Packages' as repository 'https://fr.archive.ubuntu.com/ubuntu bionic InRelease' doesn't have the component 'xenial' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'xenial/binary-i386/Packages' as repository 'https://fr.archive.ubuntu.com/ubuntu bionic InRelease' doesn't have the component 'xenial' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'xenial/i18n/Translation-en_US' as repository 'https://fr.archive.ubuntu.com/ubuntu bionic InRelease' doesn't have the component 'xenial' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'xenial/i18n/Translation-en' as repository 'https://fr.archive.ubuntu.com/ubuntu bionic InRelease' doesn't have the component 'xenial' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'xenial/dep11/Components-amd64.yml' as repository 'https://fr.archive.ubuntu.com/ubuntu bionic InRelease' doesn't have the component 'xenial' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'xenial/dep11/icons-64x64.tar' as repository 'https://fr.archive.ubuntu.com/ubuntu bionic InRelease' doesn't have the component 'xenial' (component misspelt in sources.list?)
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.fr.debian.org/debian stretch Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 NO_PUBKEY EF0F382A1A7B6500
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://riot.im/packages/debian stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E019645248E8F4A1
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.fr.debian.org/debian stretch-updates InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.fr.debian.org/debian jessie Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553 NO_PUBKEY 7638D0442B90D010 NO_PUBKEY CBF8D6FD518E17E1
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://mega.nz/linux/MEGAsync/Debian_9.0 ./ InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 4B4E7A9523ACD201
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://download.opensuse.org/repositories/home:/strycore/Debian_9.0 ./ Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 2F7F0DA5FD5B64B9
W: GPG error: https://content.runescape.com/downloads/ubuntu trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7373B12CE03BEB4B
E: The repository 'https://content.runescape.com/downloads/ubuntu trusty InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details. I tried to update the keys without keyring.debian.org $ gpg --recv-keys 9D6D8F6BC857C906
gpg: key 9D6D8F6BC857C906: 12 signatures not checked due to missing keys
gpg: key 9D6D8F6BC857C906: "Debian Security Archive Automatic Signing Key (8/jessie) <[email protected]>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1 and with $ gpg --keyserver keyring.debian.org --recv-keys 9D6D8F6BC857C906
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0 ca-certificates is up-to-date with version 20180409 as well as debian-keyring with version 2018.03.24 . I have also deleted /etc/apt/trusted.gpg as per https://serverfault.com/q/851724 . @Stephen Kitt's request: $ ls -la /etc/apt/trusted.gpg.d
total 28
drwxr-xr-x 2 root root  4096 Jan  2 10:42 .
drwxr-xr-x 6 root root  4096 Jan  2 11:06 ..
-rw-r--r-- 1 root root 10345 Jan  2 10:42 ubuntu-keyring-2012-archive.gpg
-rw-r--r-- 1 root root  2796 Feb  6  2018 ubuntu-keyring-2012-archive.gpg~
-rw-r--r-- 1 root root  2794 Feb  6  2018 ubuntu-keyring-2012-cdimage.gpg
$ apt policy debian-archive-keyring
debian-archive-keyring:
  Installed: 2017.7ubuntu1
  Candidate: 2017.7ubuntu1
  Version table:
 *** 2017.7ubuntu1 500
        500 https://fr.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
        500 https://fr.archive.ubuntu.com/ubuntu bionic/universe i386 Packages
        100 /var/lib/dpkg/status
     2017.5 500
        500 http://ftp.fr.debian.org/debian stretch/main amd64 Packages
        500 http://ftp.fr.debian.org/debian stretch/main i386 Packages
        500 http://deb.debian.org/debian stretch/main amd64 Packages
        500 http://deb.debian.org/debian stretch/main i386 Packages
     2017.5~deb8u1 500
        500 http://ftp.fr.debian.org/debian jessie/main amd64 Packages
        500 http://ftp.fr.debian.org/debian jessie/main i386 Packages How do I resolve the issue of importing the proper keys? | You need to install Debian’s version of debian-archive-keyring , the package containing the archive keys. You currently have Ubuntu’s. ( debian-keyring contains the developers’ keys.) You’ll probably have to download it manually and install it using dpkg -i (as root, or using sudo ). As a longer-term fix, you should either drop Bionic from your repositories, or configure pinning correctly so that it isn’t used as an upgrade candidate by default. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230119/"
]
} |
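A sketch of the manual download-and-install step, using the 2017.5 stretch version that the question's apt policy output shows (the pool path follows Debian's standard layout; check the pool directory for the current filename before relying on it):

wget http://deb.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2017.5_all.deb
sudo dpkg -i debian-archive-keyring_2017.5_all.deb
sudo apt update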
492,030 | I basically want to go from this: .
├── Alan Walker
│   ├── Different World
│   │   ├── 01 Intro.mp3
│   │   ├── 02 Lost Control.mp3
│   │   └── cover.jpg
│   └── Same World
│       ├── 01 Intro.mp3
│       └── 02 Found Control.mp3
├── Aurora
│   └── Infections Of A Different Kind Step 1
│       ├── 01 Queendom.lrc
│       ├── 02 Forgotten Love.lrc
│       └── 03 Gentle Earthquakes.mp3
└── Guns N' Roses
    └── Use Your Illusion I
        ├── 01 Right Next Door To Hell.lrc
        ├── 01 Right Next Door To Hell.mp3
        ├── 02 Dust N' Bones.lrc
        └── 02 Dust N' Bones.mp3 to this: .
├── Alan Walker - Different World
│   ├── 01 Intro.mp3
│   ├── 02 Lost Control.mp3
│   └── cover.jpg
├── Alan Walker - Same World
│   ├── 01 Intro.mp3
│   └── 02 Found Control.mp3
├── Aurora - Infections Of A Different Kind Step 1
│   ├── 01 Queendom.lrc
│   ├── 02 Forgotten Love.lrc
│   └── 03 Gentle Earthquakes.mp3
└── Guns N' Roses - Use Your Illusion I
    ├── 01 Right Next Door To Hell.lrc
    ├── 01 Right Next Door To Hell.mp3
    ├── 02 Dust N' Bones.lrc
    └── 02 Dust N' Bones.mp3 None of the existing solutions I could find included renaming the directory itself. It'd be great to be able to do this with zmv, but I can't figure out how to. | Something like this maybe? #!/bin/sh
for topdir in */; do
    topdir_name=$( basename "$topdir" )
    for subdir in "$topdir"/*/; do
        subdir_name=$( basename "$subdir" )
        newdir="$topdir_name - $subdir_name"
        if mkdir "$newdir"; then
            mv "$subdir"/* "$newdir"
            rmdir "$subdir"
        fi
    done
    rmdir "$topdir"
done This goes through all the top-level directories in the current directory (the band names). For each such directory, it goes through its subdirectories (the album names). For each pair of band name and album name, a new directory is created and the files from the subdirectory are moved to it. The album subdirectories are removed when they have been processed, as are the original band top-level directories. The rmdir calls will fail if any directory contains hidden filenames or if any of the new directories failed to be created. This is totally untested code. Run it on a backed-up copy of your files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283419/"
]
} |
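Since the question asked about zmv specifically, here is a rough zsh sketch of the same idea (untested, and it assumes each artist directory contains only album directories): zmv renames each two-level path artist/album into a single "artist - album" directory, the files move along with it, and the then-empty artist directories are removed afterwards.
autoload -Uz zmv
zmv -- '(*)/(*)' '$1 - $2'   # e.g. mv 'Alan Walker/Different World' 'Alan Walker - Different World'
rmdir -- *(/^F)              # remove the now-empty artist directories (/ = dirs only, ^F = not full)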
492,044 | Usually when I have programs that are doing a full disk scan and going over all files in the system they take a very long time to run. Why does updatedb run so fast in comparison? | The answer depends on the version of locate you’re using, but there’s a fair chance it’s mlocate , whose updatedb runs quickly by avoiding doing full disk scans: mlocate is a locate/updatedb implementation. The 'm' stands for "merging": updatedb reuses the existing database to avoid rereading most of the file system, which makes updatedb faster and does not trash the system caches as much. (The database stores each directory’s timestamp, ctime or mtime , whichever is newer.) Like most implementations of updatedb , mlocate ’s will also skip file systems and paths which it is configured to ignore. By default there are none in mlocate ’s case, but distributions typically provide a basic updatedb.conf which ignores networked file systems, virtual file systems etc. (see Debian’s configuration file for example; this is standard practice in Debian, so GNU’s updatedb is configured similarly ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/492044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
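For illustration, a minimal /etc/updatedb.conf in the style the answer describes might look like the following; the variable names are mlocate's, but the exact lists shipped by a distribution will differ:
PRUNE_BIND_MOUNTS="yes"
PRUNEFS="NFS nfs nfs4 afs cifs sshfs proc sysfs tmpfs"
PRUNEPATHS="/tmp /var/spool /media /mnt"
PRUNENAMES=".git .hg .svn"
With something like this in place, updatedb never descends into networked or virtual file systems at all, which is another reason it finishes so much faster than a naive full-disk scan.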
492,077 | I just can't find the command to display a *.png image! I tried xdg-open command but it failed:
[student@seqpapl1 Images]$ xdg-open adapter_content.png
xdg-open: no method available for opening 'adapter_content.png'
I am currently running ubuntu linux on the server. | Use mimeopen -d to set the default application:
mimeopen -d image.png
sample output:
Please choose a default application for files of type image/png
1) ImageMagick (color depth=q16) (display-im6.q16)
2) GNU Image Manipulation Program (gimp)
3) Feh (feh)
Select your default application; next time you will be able to use:
mimeopen image.png
or:
xdg-open image.png | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327732/"
]
} |
492,089 | I need to check if there are 12 files landed in a dir. This loop works fine when there is a one file and it will loop 4 times with "sleep 300". but when there are no files at all, it fails and does not loop. what else can I add to make it loop even with NO files at all. In short, I want to check 20 mins for file delivery. retry() {attempt_num=0 while [[ `ls -1 *File*${JulianDate}.* | wc -l` -lt 12 ]] do | Use mimeopen -d to set the default application: mimeopen -d image.png sample output: Please choose a default application for files of type image/png1) ImageMagick (color depth=q16) (display-im6.q16)2) GNU Image Manipulation Program (gimp)3) Feh (feh) Select your default application , next time you will be able to use: mimeopen image.png or: xdg-open image.png | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329491/"
]
} |
492,302 | A pseudoterminal has a pair of master and slave. How can we find out the master device file from a slave device file (e.g. /etc/pts/3 )? I only find /dev/ptmx and /dev/pts/ptmx , but they can't be shared by multiple slaves. Given one of the processes working on the master and slave, how can we find out the other? For example, ps provides information about the controlling tty of each process. Can it be helpful? Thanks. | That is one thing that is harder than it should be. With newer linux kernels, the index of the slave pty paired with a master can be gathered from the tty-index entry from /proc/PID/fdinfo/FD . See this commit . With older kernels, the only way you can get that is by attaching with a debugger to a process holding a master pty, and call ptsname(3) (or directly ioctl(TIOCGPTN) ) on the file descriptor. [but both methods will run into issues on systems using multiple devpts mounts, see below] With that info, you can build a list of master-slave pairings, which will also allow you to look up the master starting up from the slave. Here is a silly script that should do that; it will first try the tty-index way, and if that doesn't work it will fall back to gdb . For the latter case, it needs a working gdb (not gdb-minimal or another half-broken gdb most distros ship with) and because of its use of gdb , it will be very slow . For each pty pair, it will print something like: /dev/pts/1 1227 3 t/ct_test 1228 +* t/ct_test 1230 + t/ct_test/dev/pts/3 975 9 'sshd: root [priv]' '' '' '' '' '' '' '' '' 978 14,18,19 'sshd: root@pts/3' '' '' '' '' '' '' '' 979 -*0,1,2,255 -bash 1222 1 tiocsti 1393 -0,1,2 sleep 3600 1231 +0,2 perl ptys.pl 1232 +1,2 cut -b1-60 the two sshd processes (pids 975 and 978) have open handles to the master side (one as its 9 fd and the other as its 14, 18 and 19 fds). sleep and -bash have open handles to the slave side as their standard (0,1 and 2) fds. The session leader ( bash ) is also marked with a * , the processes in the foreground ( perl and cut ) with a + , and those in the background ( less and -bash ) with a - . The t/ct_test processes are using the pty as their controlling terminal without having any fd open to it. tiocsti has an open handle to it without it being its controlling terminal. Tested on Debian 9 and Fedora 28. Info about the magic numbers it's using can be found in procfs(5) and Documentation/admin-guide/devices.txt in the linux kernel source. This will fail on any system using chroots or namespace containers; that's not fixable without some changes to the kernel, since there's no reliable way to match the tty field from /proc/PID/stat to a pty, and a fd opened via /dev/ptmx to the corresponding /dev/pts mount. See here for a rant about it. This will also not link in any fd opened via /dev/tty ; the real tty could be worked out by attaching to the process and calling ioctl(fd, TIOCGDEV, &dev) , but that will mean another dirty heavy use of gdb, and it will run into the same issues as above with the major, minor numbers being ambiguous for pseudo-tty slaves. 
ptys.pl:
my (%pty, %ctty);
for(</proc/*[0-9]*/{fd/*,stat}>){
    if(my ($pid, $fd) = m{/proc/(\d+)/fd/(\d+)}){
        next unless -c $_;
        my $rdev = (stat)[6];
        my $maj = $rdev >> 8 & 0xfff;
        if($rdev == 0x502){ # /dev/ptmx or /dev/pts/ptmx
            $pty{ptsname($pid, $fd, readlink $_)}{m}{$pid}{$fd} = 1;
        }elsif($maj >= 136 && $maj <= 143){ # /dev/pts/N
            $pty{readlink $_}{s}{$pid}{$fd} = 1;
        }
    }else{
        my @s = readfile($_) =~ /(?<=\().*(?=\))|[^\s()]+/gs;
        $ctty{$s[6]}{$s[0]} =              # ctty{tty}{pid} =
            ($s[4] == $s[7] ? '+' : '-').  # pgrp == tpgid
            ($s[0] == $s[5] ? '*' : '');   # pid == sid
    }
}
for(sort {length($a)<=>length($b) or $a cmp $b} keys %pty){
    print "$_\n";
    pproc(4, $pty{$_}{m});
    pproc(8, $pty{$_}{s}, $ctty{(stat)[6]});
}
sub readfile { local $/; my $h; open $h, '<', shift and <$h> }
sub cmdline {
    join ' ', map { s/'/'\\''/g, $_ = "'$_'" if m{^$|[^\w./+=-]}; $_ }
        readfile("/proc/$_[0]/cmdline") =~ /([^\0]*)\0/g;
}
sub pproc {
    my ($px, $h, $sinfo) = @_;
    exists $$h{$_} or $$h{$_} = {''} for keys %$sinfo;
    return printf "%*s???\n", $px, "" unless $h;
    for my $pid (sort {$a<=>$b} keys %$h){
        printf "%*s%-5d %s%-3s %s\n", $px, "", $pid,
            $$sinfo{$pid}, join(',', sort {$a<=>$b} keys %{$$h{$pid}}),
            cmdline $pid;
    }
}
sub ptsname {
    my ($pid, $fd, $ptmx) = @_;
    return '???' unless defined(my $ptn = getptn($pid, $fd));
    $ptmx =~ m{(.*)(?:/pts)?/ptmx$} ? "$1/pts/$ptn" : "$ptmx ..?? pts/$ptn"
}
sub getptn {
    my ($pid, $fd) = @_;
    return $1 if readfile("/proc/$pid/fdinfo/$fd") =~ /^tty-index:\s*(\d+)$/m;
    return gdb_ioctl($pid, $fd, 0x80045430); # TIOCGPTN
}
sub gdb_ioctl {
    my ($pid, $fd, $ioctl) = @_;
    my $cmd = qq{p (int)ioctl($fd, $ioctl, &errno) ? -1 : errno};
    qx{exec 3>&1; gdb -batch -p $pid -ex '$cmd' 2>&1 >&3 | grep -v '/sysdeps/.*No such file or directory' >&2} =~ /^\$1 *= *(\d+)$/m ? $1 : undef;
} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
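A minimal shell-only sketch of the tty-index approach from the answer above (it assumes a kernel new enough to expose tty-index in fdinfo, a single devpts mount, and enough privileges to read other processes' /proc entries):
#!/bin/sh
# usage: ./ptmx-holders.sh N   -- list processes holding the master of /dev/pts/N
n=$1
for f in /proc/[0-9]*/fdinfo/*; do
    grep -qs "^tty-index:[[:space:]]*$n\$" "$f" || continue   # only ptmx fds have tty-index
    pid=${f#/proc/}; pid=${pid%%/*}
    printf '%s holds master fd %s (%s)\n' "$pid" "${f##*/}" "$(cat "/proc/$pid/comm" 2>/dev/null)"
done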
492,308 | I want to create a .tgz from the content of a directory. I also want to strip the leading " ./ " from the tar'ed content. I had done this as follows: cd /path/to/files/ && find . -type f | cut -c 3- | xargs czf /path/to/tgz/myTgz.tgz I learned recently that using xargs may not be the best way to pull this off because xargs may invoke tar multiple times if the cmdline arg list gets too long, and I was advised to make use of tar 's ability to read a list of input files from stdin. I ended up finding this article on how to do this. However, I find that the recommendation... cd /path/to/files/ && find . -type f | cut -c 3- | tar czf foo.tgz -T - ...seems to not be portable. It runs fine on my dev PC, but on a busybox target, I get the following error from running the same command: tar: can't open '-': No such file or directory So, my question : is there a truly portable/global way to invoke tar to create a .tgz by feeding it input files from stdin (as opposed to cmdline arguments)? (It is not an option available to me to install alternatives to tar such as gnutar / bsdtar /etc.) (Secondary question: Why does the " -T - " argument to tar denote " read files from stdin "? From the tar man page, all I could find was that " -T " means: get names to extract or create from FILE ... but I couldn't see any reference to a plain " - ") | It is a common convention to interpret - to mean standard input where an input file name is expected, and to mean standard output where an output file name is expected. Because this is a common convention, the short help summary in the GNU tar man page does not mention it, but the complete manual (usually available locally through info tar ) does. The POSIX command line utility syntax guidelines includes this convention, so it's pretty widespread (but it's always a choice on the part of the author of the program). BusyBox utilities do follow this convention. But the manual does not mention tar as supporting the option -T , and neither does the version on the machine I'm posting this (1.27.2 on Ubuntu). I don't know why you're getting the error “: No such file or directory” rather than “invalid option -- 'T'”. It seems that your tar interprets -T as an option that does not take an argument, then sees - as a file name. Since in this context tar needs a file name to put in the archive, and not just some content that comes from a file, it would not make sense to use the stdin/stdout interpretation for - . BusyBox utilities support a restricted set of functionality by design, because they're intended for embedded systems where the fancier features of GNU utilities wouldn't fit. Apparently -T is not a feature that the BusyBox designers considered useful. I don't think BusyBox tar has any way to read file names from stdin. If you need to archive a subset of the files in a directory and you don't need any symbolic links in the archive, a workaround is to create a forest of symbolic links in a temporary directory and archive this temporary directory. It's not clear exactly why you're using find . If you only want the files in the current directory, why not tar czf /path/to/archive.tgz -- * ? Your command does make sense if there are subdirectories and you want to archive the files in these subdirectories, but not the directories themselves (presumably to restore them in a place where the directory structure must exist but may have different permissions). In this case a leading ./ wouldn't do any harm. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214773/"
]
} |
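A sketch of the symlink-forest workaround mentioned above, for a BusyBox tar that lacks -T (it assumes tar supports -h to follow symlinks and that no file names contain newlines; the paths are the ones from the question):
cd /path/to/files || exit 1
tmp=$(mktemp -d) || exit 1
find . -type f | while IFS= read -r f; do
    mkdir -p "$tmp/${f%/*}"       # recreate the directory structure
    ln -s "$PWD/$f" "$tmp/$f"     # link each selected file into it
done
( cd "$tmp" && tar -h -czf /path/to/tgz/myTgz.tgz * )   # -h archives link targets; * drops the leading ./
rm -rf "$tmp"
Note that the final * skips hidden entries at the top level, which here matches the question's goal of archiving without a leading ./ prefix.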
492,312 | Why does the following two commands of ps give different CMD field? It matters because screen and SCREEN are different: the first is client and the second is server. Thanks. $ ps -A | grep -i screen 3341 ? 00:00:00 screen 3875 ? 00:00:00 screen27525 ? 00:00:00 screen$ ps -Af | grep -i screentestme 3341 1 0 2018 ? 00:00:00 SCREEN -S testmetestme 3875 1 0 2018 ? 00:00:00 SCREEN -S tmt 27525 1 0 2018 ? 00:00:00 SCREEN -S test SCREEN is not a program, and then why is it showed in ps ? $ SCREENSCREEN: command not found | screen renames its main (server) process SCREEN to distinguish it from later clients. This is very obliquely mentioned once in the man page: Note that this command only affects debugging output from the main "SCREEN" process correctly. Debug output from attacher processes can only be turned off once and forever. but, oddly, not explicitly mentioned anywhere I can see. ps and ps -f display different things for CMD: the executable name ( ps , the "command" format specifier) and the reconstructed command line ( ps -f , the "args" format specifier). The latter uses the process's ARGV and sees changes to it, while the executable name itself is unchanged. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
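To see both fields side by side without relying on the preset -f output, you can ask ps for them explicitly ("comm" is the possibly renamed process name, "args" the reconstructed command line):
$ ps -eo pid=,comm=,args= | grep -i '[s]creen'
 3341 screen  SCREEN -S testme
 3875 screen  SCREEN -S tmt
27525 screen  SCREEN -S test
The [s] in the grep pattern is the usual trick to keep grep from matching its own command line; the output shown is what one would expect for the processes in the question.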
492,315 | To create a column header, looking like: 1234567890123456789 I (am trying to) use seq and echo: seq -s '' 1 9 ; echo -n 0; seq -s '' 1 9 However, seq outputs a newline after each run. How can I avoid that? | Assuming you just want to print 1234567890123456789 , you can do it with: $ printf "%s" $(seq 1 9) $(seq 0 9)1234567890123456789$ That won't have a trailing newline at all though, so maybe you prefer: $ printf "%s" $(seq 1 9) $(seq 0 9) $'\n'1234567890123456789$ A few simpler choices if you don't need to use seq : $ perl -le 'print 1..9,0,1..9'1234567890123456789$ printf "%s" {1..9} {0..9} $'\n'1234567890123456789 Since you mentioned portability, I recommend you use the perl approach, or if you are likely to encounter systems without perl , and yet need the same command to run in shells including bash , sh , dash , tcsh etc, try Kamil's approach . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46948/"
]
} |
492,324 | On 19 Aug 2013, Randal L. Schwartz posted this shell script, which was intended to ensure, on Linux, "that only one instance of [the] script is running, without race conditions or having to clean up lock files": #!/bin/sh# randal_l_schwartz_001.sh( if ! flock -n -x 0 then echo "$$ cannot get flock" exit 0 fi echo "$$ start" sleep 10 # for testing. put the real task here echo "$$ end") < $0 It seems to work as advertised: $ ./randal_l_schwartz_001.sh & ./randal_l_schwartz_001.sh[1] 1186311863 start11864 cannot get flock$ 11863 end[1]+ Done ./randal_l_schwartz_001.sh$ Here is what I do understand: The script redirects ( < ) a copy of its own contents (i.e. from $0 ) to the STDIN (i.e. file descriptor 0 ) of a subshell. Within the subshell, the script attempts to get a non-blocking, exclusive lock ( flock -n -x ) on file descriptor 0 . If that attempt fails, the subshell exits (and so does the main script, as there is nothing else for it to do). If the attempt instead succeeds, the subshell runs the desired task. Here are my questions: Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of its own contents rather than, say, the contents of some other file? (I tried redirecting from a different file and re-running as above, and the execution order changed: the non-backgrounded task gained the lock before the background one. So, maybe using the file's own contents avoids race conditions; but how?) Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of a file's contents, anyway? Why does holding an exclusive lock on file descriptor 0 in one shell prevent a copy of the same script, running in a different shell, from getting an exclusive lock on file descriptor 0 ? Don't shells have their own, separate copies of the standard file descriptors ( 0 , 1 , and 2 , i.e. STDIN, STDOUT, and STDERR)? | Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of its own contents rather than, say, the contents of some other file? You could use any file, as long as all copies of the script use the same one.Using $0 just ties the lock to the script itself: If you copy the script and modify it for some other use, you don't need to come up with a new name for the lock file. This is convenient. If the script is called through a symlink, the lock is on the actual file, and not the link. (Of course, if some process runs the script and gives it a made up value as the zeroth argument instead of the actual path, then this breaks. But that's rarely done.) (I tried using a different file and re-running as above, and the execution order changed) Are you sure that was because of the file used, and not just random variation? As with a pipeline, there's really no way to be sure in what order the commands get to run in cmd1 & cmd . It's mostly up to the OS scheduler. I get random variation on my system. Why does the script need to redirect, to a file descriptor inherited by the subshell, a copy of a file's contents, anyway? It looks like that's so that the shell itself holds a copy of the file description holding the lock, instead of just the flock utility holding it. A lock made with flock(2) is released when the file descriptors having it are closed. 
flock has two modes, either to take a lock based on a file name, and run an external command (in which case flock holds the required open file descriptor), or to take a file descriptor from the outside, so an outside process is responsible for holding it. Note that the contents of the file are not relevant here, and there are no copies made. The redirection to the subshell doesn't copy any data around in itself, it just opens a handle to the file. Why does holding an exclusive lock on file descriptor 0 in one shell prevent a copy of the same script, running in a different shell, from getting an exclusive lock on file descriptor 0? Don't shells have their own, separate copies of the standard file descriptors (0, 1, and 2, i.e. STDIN, STDOUT, and STDERR)? Yes, but the lock is on the file , not the file descriptor. Only one opened instance of the file can hold the lock at a time. I think you should be able to do the same without the subshell, by using exec to open a handle to the lock file:
$ cat lock.sh
#!/bin/sh
exec 9< "$0"
if ! flock -n -x 9; then
    echo "$$/$1 cannot get flock"
    exit 0
fi
echo "$$/$1 got the lock"
sleep 2
echo "$$/$1 exit"
$ ./lock.sh bg & ./lock.sh fg ; wait; echo
[1] 11362
11363/fg got the lock
11362/bg cannot get flock
11363/fg exit
[1]+  Done                    ./lock.sh bg | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
492,434 | I need to remove a column from my tabulated CSV file if this column exists. My CSV file:
GENE  REF  ALT
AKT   A    G
AKT   G    G
Desired output: if column REF exists, delete this column
GENE  ALT
AKT   G
AKT   G
I tried to do that: sed 's/\tREF.[^\t]*//' filename.csv but it doesn't work. | Hi, with miller ( http://johnkerl.org/miller/doc ) and this input.csv
GENE,REF,ALT
AKT,A,G
AKT,G,G
it is very easy:
mlr --csv cut -x -f REF input.csv
The output is:
GENE,ALT
AKT,G
AKT,G | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327975/"
]
} |
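If miller is not available, here is a hedged awk sketch of the same "drop REF only if it exists" logic, for the tab-separated file the question's sed attempt implies (column name and separator taken from the question; adjust as needed):
awk -F'\t' -v OFS='\t' '
    NR==1 { for (i=1; i<=NF; i++) if ($i=="REF") drop=i }   # find the column by header name
    {
        line=""; sep=""
        for (i=1; i<=NF; i++)
            if (i!=drop) { line=line sep $i; sep=OFS }      # copy every other field
        print line
    }' filename.csv
If no REF header is found, drop stays 0 and the file is printed unchanged.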
492,500 | I have a file like this:
2018.01.02;1.5;comment 1
2018.01.04;2.75;comment 2
2018.01.07;5.25;comment 4
2018.01.09;1.25;comment 7
I want to replace all dots . in the second column with a comma , as I would with sed 's/\./\,/g' file how can I use sed or preferably awk to only apply this for the second column, so my output would look like this:
2018.01.02;1,5;comment 1
2018.01.04;2,75;comment 2
2018.01.07;5,25;comment 4
2018.01.09;1,25;comment 7
 | $ awk 'BEGIN{FS=OFS=";"} {gsub(/\./, ",", $2)} 1' ip.txt
2018.01.02;1,5;comment 1
2018.01.04;2,75;comment 2
2018.01.07;5,25;comment 4
2018.01.09;1,25;comment 7
BEGIN{} this block of code will be executed before processing any input line
FS=OFS=";" set input and output field separator as ;
gsub(/\./, ",", $2) for each input line, replace all the . in 2nd field with ,
1 is an awk idiom to print contents of $0 (which contains the input record) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/492500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
492,501 | I have a file with multiple lines. I want to know, for each word that appears in the total file, how many lines contain that word, for example:
0 hello world the man is world
1 this is the world
2 a different man is the possible one
The result I'm expecting is:
0:1
1:1
2:1
a:1
different:1
hello:1
is:3
man:2
one:1
possible:1
the:3
this:1
world:2
Note that the count for "world" is 2, not 3, since the word appears on 2 lines. Because of this, translating blanks to newline chars wouldn't be the exact solution. | Another Perl variant, using List::Util
$ perl -MList::Util=uniq -alne '
    map { $h{$_}++ } uniq @F }{
    for $k (sort keys %h) {print "$k: $h{$k}"}' file
0: 1
1: 1
2: 1
a: 1
different: 1
hello: 1
is: 3
man: 2
one: 1
possible: 1
the: 3
this: 1
world: 2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86942/"
]
} |
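The same per-line counting can be done in awk, for systems without the List::Util module (a sketch; it assumes whitespace-separated words as in the example):
awk '{
        split("", seen)                                 # reset the per-line "seen" set
        for (i=1; i<=NF; i++) if (!seen[$i]++) cnt[$i]++
     }
     END { for (w in cnt) print w ": " cnt[w] }' file | sort
The split("", seen) idiom empties the array portably, and the final sort reproduces the ordered output shown above.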
492,607 | I have a directory /var/mychoot on the same filesystem as / , and I've started the program /var/mychroot/prog as sudo chroot /var/mychroot /prog , so the program is running as EUID 0. If the program executes the chdir("..") escape technique , then it is able to escape the chroot and see everything within / . (I've verified this on Linux 4.18.) I want to prevent such an escape. In fact I want to prevent all kinds of chroot escapes, but in this question I'm only interested in how the chdir("..") escape technique can be prevented on modern Linux systems. For this I'm looking for alternatives of the chroot(2) system call. I've found 2 solutions: pivot_root and MS_MOVE , but they only work if /var/mychroot is a mount point, so they fail if /var/mychroot is just a subdirectory within the / filesystem. Is there another solution in this case? I want to avoid techniques using LD_PRELOAD (because LD_PRELOAD doesn't affect statically linked executables), techniques using ptrace(2) (because then I'm not able to run strace within the chroot, and also ptrace(2) is very tricky to get right: processes will crash or hang) and real virtualization (e.g. Xen or KVM or QEMU; because of the performance overhead and the less flexible memory provisioning). To recap, I need: an alternative of chroot(2) system call, with which root can restrict processes running as root (EUID 0), to a subdirectory of the filesystem of / , which prevents the chdir("..") escape technique , and doesn't use LD_PRELOAD or ptrace(2) or virtualization (e.g. Xen, KVM or QEMU), and it runs on a modern Linux system, with and unpatched kernel. Does it exist? | To protect against the specific chdir("..") escape technique you mentioned, you can simply drop the capability to execute chroot(2) again once you're chrooted to /var/mychroot yourself. The escape technique requires another call to chroot() , so blocking that is enough to prevent it from working. You can do that with Linux capabilities, by dropping CAP_SYS_CHROOT which is the one needed for chroot(2) to be available. For example: outside# chroot /var/mychrootinside# capsh --drop=cap_sys_chroot --inside# ./escape_chrootchroot(baz): Operation not permitted (The second prompt inside the chroot is from a shell spawned from capsh . You can make it run another command with, for example, capsh --drop=cap_sys_chroot -- -c 'exec ./escape_chroot' .) But a much better technique is to just use pivot_root , since it protects from many of the other possible exploits that chroot(2) will not protect from. You mentioned that it only works if /var/mychroot is a mount point, but you can make it a mount point by simply bind mounting it into itself. Note that you need to create a mount namespace to use pivot_root to create a jail, otherwise it will try to change the root of all processes in your filesystem, which is most likely not what you want here... So the whole sequence goes: outside# unshare -moutside# mount --bind /var/mychroot /var/mychrootoutside# cd /var/mychrootoutside# mkdir old_rootoutside# pivot_root . old_rootlimbo# exec chroot .inside# umount /old_root (Again, many of these commands are spawning new shells. unshare does that, so does chroot itself. You can work around those by passing commands as extra arguments. In some cases you might want to pass sh -c '...' for a full script.) 
At this point, you're inside a pivot_root jail in a separate mount namespace, and the fact that /var/mychroot is simply a directory of the original root (and not a mount of a separate device or loop device) didn't really prevent this from working, thanks to the bind mount into itself. Running the escape code, you'll see that the jail works as expected (even though the escape code claims otherwise): inside# touch /inside_jailinside# ./escape_chrootExploit seems to work. =)inside# ls -ld /inside_jail /old_root-rw-r--r--. 1 0 0 0 Jan 5 23:45 /inside_jaildr-xr-xr-x. 20 0 0 4096 Jul 5 23:45 /old_root As you can see, still inside the jail... The exploit code is just a bit naive and assumes that as long as the operations ( chroot , chdir ) have succeeded, that was enough to escape the jail, which is not really the case... So consider using this technique to create a jail that is superior to chroot and does not require using Linux capabilities to block operations inside it (such as creating additional chroot s, which might actually be useful or required for what you're actually trying to run in a jail.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3054/"
]
} |
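The interactive sequence above can be collapsed into a single command for scripting. This is only a sketch: it assumes the tree under /var/mychroot contains a shell plus working chroot and umount binaries (for example a busybox installation), and /prog stands in for whatever you want to run jailed:
unshare -m sh -c '
    mount --bind /var/mychroot /var/mychroot &&
    cd /var/mychroot &&
    mkdir -p old_root &&
    pivot_root . old_root &&
    exec chroot . /bin/sh -c "umount /old_root && exec /prog"
'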
492,630 | The following answer on Stack Overflow, How can I repeat a character in bash? imposes one plausible way of POSIX -ly repeating a single character, as follows. In this example let's use the equal sign 100 times: printf %100s | tr " " "=" My problem is that I don't understand how it works, and I would prefer a straightforward explanation. Please refrain from comments like read the manual , I did so, and since I am not clever from it, I am asking this question as I've never used tr , nor seen such a printf statement. | In short, printf %100s will print 100 spaces, and tr " " "=" will convert those spaces to equal signs, effectively printing 100 equal signs. Breaking it down: printf is a shell built-in. It typically takes two or more arguments, the first of which is a "format string" and the rest will be used to fill up placeholders in that format string. Once that template is fully filled, it will print out the result. If there are more arguments left, it will start over, filling up more arguments and printing the resulting string. The format string used for printf takes format specifications, which start with % and end with a single letter, so %d means an integer (using the decimal base, therefore "d"), %f means a floating-point number and %s means a string of characters. Characters other than letters after the % are modifiers for the format specification and, in particular, numbers are used to specify the requested length of the field on output. So %100s will format the string to have at least 100 characters, it will pad it with spaces and it will keep it aligned right (in other words, add spaces at the beginning of the string.) If passed an extra argument, it would use it for that %s field, so for example printf %100s abc will print 97 spaces (to get to 100 total, considering the 3 in "abc") followed by the actual string "abc". But if no argument is given, then the format specification is filled with an empty or null argument (which is an empty string for %s , it would be 0 for %d , etc.) So that's the same as if an empty string was passed, such as printf %100s '' . The end result is that only the 100 character padding is printed. So, putting it all together, printf %100s results in 100 spaces printed. Now tr is a tool to translate characters from input to output. It takes two arguments, SET1 and SET2, each a set of characters, and then translates the first character of SET1 into the first of SET2, the second character of SET1 into the second of SET2 and so on. tr reads its input from stdin and writes it back to stdout (so it's very useful in pipelines like the one above.) tr will always translate all occurrences of that character in a given string. For example, tr aeiou 12345 will translate lowercase vowels into the numbers 1 to 5 in that order, so it will translate "queueing" into "q52523ng" for example. You can also pass it character ranges, such as tr a-z A-Z to turn any lowercase letter into its corresponding uppercase one. So tr " " "=" is simply translating spaces into equal signs throughout the string. The first space needs to be quoted to be recognized as an argument. The = doesn't actually need to be quoted, but doing so doesn't hurt. tr " " = would have worked the same. Putting it all together, print 100 spaces, then translate each of them into equal signs. Hopefully this explains it in enough detail, but if there's still something you don't understand, please leave a comment and I'll try to address that. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
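Two more details of the same mechanism, easy to verify interactively: a - flag after % switches the padding to the other side (left alignment), and the format string is reused until the arguments run out:
$ printf '[%10s]\n' abc
[       abc]
$ printf '[%-10s]\n' abc
[abc       ]
$ printf '%s-%s\n' a b c d
a-b
c-d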
492,725 | I've exported my email archive of 10 years which is very large. I want to parse all the text for any string that is 64 characters long in search of a bitcoin private key. How can I parse strings of a certain length in characters? | If you have GNU grep (default on Linux), you can do:
grep -Po '(^|\s)\S{64}(\s|$)' file
The -P enables Perl Compatible Regular Expressions, which give us \b (word-boundaries), \S (non-whitespace) and {N} (find exactly N characters), and the -o means "print only the matching part of the line". Then, we look for stretches of non-whitespace that are exactly 64 characters long that are either at the beginning of the line ( ^ ) or after whitespace ( \s ), and which end either at the end of the line ( $ ) or with another whitespace character. Note that the result will include any whitespace characters at the beginning and end of the string, so if you want to parse this further, you might want to use this instead:
grep -Po '(^|\s)\K\S{64}(?=\s|$)'
That will look for a whitespace character or the beginning of the line (^|\s), then discard it with \K, and then look for 64 non-whitespace characters followed by (the (?=foo) is called a "lookahead" and will not be included in the match) either a whitespace character, or the end of the line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
492,773 | Earlier I was using fsarchiver to create compressed partition image. Due to some weird behavior I am choosing to replace it with dd . However, I like how fsarchiver compressed with zstd . So, I studied, How to make a disk image and restore from it later? Using DD for disk cloning Making full disk image with DD compressing dd backup on the fly How do you monitor the progress of dd? What these essentially say is, I have to use the following command to backup dd if=/dev/sda2 status=progress | gzip -c > /media/mint/Data/_Fsarchiver/MintV1.img.gz And the following command to restore gunzip -c /media/mint/Data/_Fsarchiver/MintV1.img.gz | dd of=/dev/sda2 status=progress Now I want to replace gzip -c & gunzip -c with zstd & zstd -d The commands I came up with are To compress sudo dd if=/dev/sda2 status=progress | zstd -16vT6 > /media/mint/Data/_Fsarchiver/MintV1.zst To decompress zstd -vdcfT6 /media/mint/Data/_Fsarchiver/MintV1.zst | dd of=/dev/sda2 status=progress Is it safe to try these commands or am I doing something wrong? | Using dd like that (without any options) will make your life miserable. Just cut it out entirely. Or at the very least increase its block size and tell it not to object to short reads. Without dd , first run sudo -s to get a root shell: gzip </dev/sda2 >/media/mint/Data/_Fsarchiver/MintV1.img.gz gunzip </media/mint/Data/_Fsarchiver/MintV1.img.gz >/dev/sda2 Your zstd commands look entirely plausible, but just omit dd and read/write the device directly as root. (My version doesn't understand your T6 so I've omitted that here.) zstd -16v </dev/sda2 >/media/mint/Data/_Fsarchiver/MintV1.zst zstdcat -v /media/mint/Data/_Fsarchiver/MintV1.zst >/dev/sda2 With dd , either prefix the dd with sudo or use sudo -s to get a root shell: dd bs=1M iflag=fullblock if=/dev/sda2 status=progress | gzip >/media/mint/Data/_Fsarchiver/MintV1.img.gz gzcat /media/mint/Data/_Fsarchiver/MintV1.img.gz | dd bs=1M iflag=fullblock of=/dev/sda2 status=progress dd bs=1M iflag=fullblock if=/dev/sda2 status=progress | zstd -16v >/media/mint/Data/_Fsarchiver/MintV1.img.zst zstdcat /media/mint/Data/_Fsarchiver/MintV1.img.zst | dd bs=1M iflag=fullblock of=/dev/sda2 status=progress With pv instead of dd . Use sudo -s beforehand to get a root shell: pv /dev/sda2 | gzip >/media/mint/Data/_Fsarchiver/MintV1.img.gz gzcat /media/mint/Data/_Fsarchiver/MintV1.img.gz | pv >/dev/sda2 pv /dev/sda2 | zstd -16 >/media/mint/Data/_Fsarchiver/MintV1.img.zst zstdzcat /media/mint/Data/_Fsarchiver/MintV1.img.zst | pv >/dev/sda2 Also see Syntax When Combining dd and pv As always, to read with elevated permissions change command <source to sudo cat source | command , and to write with elevated permissions replace command >target with command | sudo tee target >/dev/null . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/492773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206574/"
]
} |
492,809 | In my interactive bash shell run in lxterminal's window, I don't know what I have messed up. Typing Return key doesn't start a new line, but Ctrl-C will. $ ^C$ $ $ $ $ ^C$ ^C$ $ $ $ $ $ $ When I type a command, although hitting Return key will execute it, the typed command is not visible. Before that happened, I was running some command sudo lsof ... | less (or sudo netstat ... | less ), and there seemed no output, so I hit ctrl-c and/or q multiple times in an arbitrary order. When I finally got out of less , that problem with bash happened. Did I accidentally redirect the stdout of the shell somewhere else? Is there some way to fix the problem without closing my shell? | I think your terminal may be stuck in a “funny” mode. You probably can reset it with the /usr/bin/reset command, that normally comes with the ncurses library. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
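If reset is not installed, or the terminal is so confused that typed commands are invisible, the classic fallback is stty:
$ stty sane
Type it blind if you have to, and end the line with Ctrl-J (a literal line feed) rather than Enter, since a broken terminal mode may no longer translate the carriage return that Enter sends.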
492,835 | Prior to running the dd command, the command lsblk returned the output below: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 931.5G 0 disk The command dd if=/dev/urandom of=/dev/sda conv=fsync status=progress is run. The device however loses power and shuts down. When power is reinstated, the command lsblk returns the following output: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk sda2 8:2 0 487.5G 0 disk | Several possibilities: Linux supports a lot of different partition table types, some of which use very few magic bytes, and then it's easy to mis-identify random data (*) [so it's possible to randomly generate a somewhat "valid" partition table]. Some partition table types have backups at the end of the disk as well (most notably GPT) and that could be picked up on if the start of the drive was replaced with random garbage. The device doesn't work properly and it was disconnected before it finished writing the data, or keeps returning old data, so partition table survives. Sometimes this happens with USB sticks. ... (*) Make 1000 files with random data in them and see what comes out: $ truncate -s 8K {0001..1000}$ shred -n 1 {0001..1000}$ file -s {0001..1000} | grep -v data0099: COM executable for DOS0300: DOS executable (COM)0302: TTComp archive, binary, 4K dictionary0389: Dyalog APL component file 64-bit level 1 journaled checksummed version 192.1920407: COM executable for DOS0475: PGP\011Secret Sub-key -.... The goal of random-shredding a drive is to make old data vanish for good. There is no promise the drive will appear empty, unused, in pristine condition afterwards. It's common to follow up with a zero wipe to achieve that. If you are using LVM, it's normal for LVM to zero out the first few sectors of any LV you create so old data won't interfere. There's also a dedicated utility ( wipefs ) to get rid of old magic byte signatures which you can use to get rid of filesystem and partition table metadata. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321547/"
]
} |
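Following up on the answer's last paragraph, the usual way to make the drive look pristine after a random wipe is to clear the leftover signatures explicitly (a sketch; run as root, destructive, and the device name is the one from the question):
wipefs --all /dev/sda                        # drop filesystem/partition-table magic bytes
dd if=/dev/zero of=/dev/sda bs=1M count=16   # or zero the start of the disk...
dd if=/dev/zero of=/dev/sda bs=1M count=16 \
   seek=$(( $(blockdev --getsz /dev/sda) / 2048 - 16 ))   # ...and the end, where backup GPTs live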
492,861 | I build Vim from source, and install it with checkinstall . Soon after Software&Updates warned there is an update for Vim. But after the installation, Vim path is altered and another build take its place and my build from source disappeared. Any idea what is going on? | Several possibilities: Linux supports a lot of different partition table types, some of which use very few magic bytes, and then it's easy to mis-identify random data (*) [so it's possible to randomly generate a somewhat "valid" partition table]. Some partition table types have backups at the end of the disk as well (most notably GPT) and that could be picked up on if the start of the drive was replaced with random garbage. The device doesn't work properly and it was disconnected before it finished writing the data, or keeps returning old data, so partition table survives. Sometimes this happens with USB sticks. ... (*) Make 1000 files with random data in them and see what comes out: $ truncate -s 8K {0001..1000}$ shred -n 1 {0001..1000}$ file -s {0001..1000} | grep -v data0099: COM executable for DOS0300: DOS executable (COM)0302: TTComp archive, binary, 4K dictionary0389: Dyalog APL component file 64-bit level 1 journaled checksummed version 192.1920407: COM executable for DOS0475: PGP\011Secret Sub-key -.... The goal of random-shredding a drive is to make old data vanish for good. There is no promise the drive will appear empty, unused, in pristine condition afterwards. It's common to follow up with a zero wipe to achieve that. If you are using LVM, it's normal for LVM to zero out the first few sectors of any LV you create so old data won't interfere. There's also a dedicated utility ( wipefs ) to get rid of old magic byte signatures which you can use to get rid of filesystem and partition table metadata. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/492861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
492,912 | With Bash and Dash, you can check for an empty directory using just the shell(ignore dotfiles to keep things simple): set *if [ -e "$1" ]then echo 'not empty'else echo 'empty'fi However I recently learned that Zsh fails spectacularly in this case: % set *zsh: no matches found: *% echo "$? $#"1 0 So not only does the set command fail, but it doesn't even set $@ . I supposeI could test if $# is 0 , but it appears that Zsh even stops execution: % { set *; echo 2; }zsh: no matches found: * Compare with Bash and Dash: $ { set *; echo 2; }2 Can this be done in a way that works in bash, dash and zsh? | There are several problems with that set *if [ -e "$1" ]then echo 'not empty'else echo 'empty'fi code: if the nullglob option (from zsh but now supported by most other shells) is enabled, set * becomes set which lists all the shell variables (and functions in some shells) if the first non-hidden file has a name that starts with - or + , it will be treated as an option by set . Those two issues can be fixed by using set -- * instead. * expands only non-hidden files, so it's not a test whether the directory is empty or not but whether it contains non-hidden files or not. With some shells, you can use a dotglob or globdot option or play with a FIGNORE special variable depending on the shell to work around that. [ -e "$1" ] tests whether a stat() system call succeeds or not. If the first file is a symlink to an inaccessible location, that will return false. You shouldn't need to stat() (not even lstat() ) any file to know whether a directory is empty or not, only check that it has some content. * expansion involves opening the current directory, retrieving all the entries, storing all the non-hidden one and sorting them, which is also quite inefficient. The most efficient way to check if a directory is non-empty (has any entry other than . and .. ) in zsh is with the F glob qualifier ( F for full , here meaning non-empty): if [ .(NF) ]; then print . is not emptyelse print "it's empty or I can't read it"fi N is the nullglob glob qualifier. So .(NF) expands to . if . is full and nothing otherwise. After the lstat() on the directory, if zsh finds it has a link-count greater than 2, then that means it has at least one subdirectory so is not empty, so we don't even need to open that directory (that also means, in that case we can tell that the directory is non-empty even if we don't have read access to it). Otherwise, zsh opens the directory, reads its content and stops at the first entry that is neither . nor .. without having to read, store nor sort everything. With POSIX shells ( zsh only behaves (more) POSIXly in sh emulation), it is very awkward to check that a directory is non-empty with globs only. One way is with: set .[!.]* '.[!.]'[*] .[.]?* [*] *if [ "$#$1$2$3$4$5" = '5.[!.]*.[!.][*].[.]?*[*]*' ]; then echo "empty or can't read"else echo not emptyfi (assuming no glob-related option is changed from the default (POSIX only specifies noglob ) and that the GLOBIGNORE (for bash ) and FIGNORE (for ksh ) variables are not set, and that (for yash ) none of the file names contain sequences of bytes not forming valid characters). The idea is that in POSIX shells, when a glob doesn't match, it is left unexpanded (a misfeature introduced by the Bourne shell in the late 70s). So with set -- * , if we get $1 == * , we don't know whether it was because there was no match or whether there was a file called * . Your (flawed) approach to work around that was to use [ -e "$1" ] . 
Here instead, we use set -- [*] * . That allows to disambiguate the two cases, because if there is no file, the above will stay [*] * , and if there is a file called * , that becomes * * . We do something similar for hidden files. That is a bit awkward because of yet another misfeature of the Bourne shell (also fixed by zsh , the Forsyth shell, pdksh and fish ) whereby the expansion of .* does include the special (pseudo-)entries . and .. when reported by readdir() . So to make it work in all those shells, you could do: cwd_empty() if [ -n "$ZSH_VERSION" ]; then eval '! [ .(NF) ]' else set .[!.]* '.[!.]'[*] .[.]?* [*] * [ "$#$1$2$3$4$5" = '5.[!.]*.[!.][*].[.]?*[*]*' ] fi In any case, the syntax of zsh by default is not compatible with the POSIX sh syntax as it has fixed most of the major issues in the Bourne shell (well before POSIX.2 was first published) in a non-backward compatible way, including that * left unexpanded when there's no match (pre-Bourne shells didn't have that issue, csh , tcsh and fish don't either), and .* including . and .. but several others like split+glob performed upon parameter or arithmetic expansion, so you can't expect code written in the POSIX sh to always work in zsh unless you turn on sh emulation. That sh emulation is especially there so you can use POSIX code in zsh . If you want to source a file written in POSIX sh inside zsh , you can do: emulate sh -c 'source that-file' Then that file will be evaluated in sh emulation and any function declared within will retain that emulation mode. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
492,966 | According to Wikipedia , GRUB was released in 1995. By that point Linux and xBSD existed for several years. I know early Unix versions were tied to hardware in the 70s and 80s, but Linux and xBSD were free to distribute and install. Which begs the question how would you boot Linux back then? Were distributions shipping with their own implementations of bootloaders? | The first Linux distribution I used back in the 90s ( Slackware 3.0 IIRC) used LILO as a bootloader. And many distros used LILO for years even when GRUB was becoming the "default" bootloader. Moreover, in the early years of Linux it was common to boot Linux from another OS (i.e. DOS or Windows) instead of relying on a bootloader/dual booting. For example there was loadlin . Don't forget Syslinux , which is a simpler boot loader often used for USB self-bootable installation/recovery distros. Or Isolinux (from the same project) used by many "Live" distros. Keep in mind that today GRUB can be used to load many operating systems, while LILO was more limited, and specifically targeted at Linux (i.e. LInux LOader), with some support for dual booting to Windows. GRUB is very useful for dual/multi booting because of its many configurable options, scripting capabilities, etc... If you just want a single OS on your machine "any" (i.e. whichever bootloader is the default for your Linux/BSD distribution) should be enough. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/492966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
492,972 | I'm using Devuan ASCII (which is more or less Debian 9, Stretch). Now, my /var/log/auth.log has a bunch of these entries: Jan 6 09:45:01 mybox CRON[20951]: pam_env(cron:session): Unable to open env file: /etc/environment: No such file or directoryJan 6 09:45:01 mybox CRON[20951]: pam_unix(cron:session): session opened for user root by (uid=0) which apparently get generated when I su . Why is cron/pam_env/pam_unix trying to open that file in the first place, rather than checking whether it exists? If they legitimately expect it, why isn't it there? What should I do about this? | Answering all of your questions Why is cron/pam_env/pam_unix trying to open that file in the first place? See BUG #646015 . In some cases (like locale related stuff) this file is deprecated. But it is still used system-wide, and log is made whenever it is missing. If they legitimately expect it, why isn't it there? Cause maybe the bug isn't fixed after all. Steve Langasek ( BUG #646015 ) said it is, and new systems should create that file using postinst scripts the same way old systems being upgraded should already have that file. What should I do about this? Run dpkg-reconfigure libpam-modules and see if it will create the file through its postinst script. If that does not work, create the file manually with touch /etc/environment It's also interesting to report your issue to the Devuan Project with details of the problem and your setup since this issue was resolved before the Debian/Devuan fork happened. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
492,983 | Getting these... Jan 7 06:25:01 debian liblogging-stdlog: [origin software="rsyslogd" swVersion="8.24.0" x-pid="551" x-info="http://www.rsyslog.com"] rsyslogd was HUPed Debian default logcheck rule /etc/logcheck/ignore.d.server/rsyslog ^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rsyslogd: \[origin software="rsyslogd" swVersion="[0-9.]+" x-pid="[0-9]+" x-info="http://www.rsyslog.com"\] rsyslogd was HUPed$ Custom rules tried /etc/logcheck/ignore.d.server/rsyslog-fix ^\w{3} [ :0-9]{11} [._[:alnum:]-]+ liblogging-stdlog: \[origin software="rsyslogd" swVersion="[0-9.]+" x-pid="[0-9]+" x-info="http://www.rsyslog.com"\] rsyslogd was HUPed$^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ liblogging-stdlog: \[origin software="rsyslogd" swVersion="[0-9.]+" x-pid="[[:digit:]]+" x-info="http:\/\/www\.rsyslog\.com"\] rsyslogd was HUPed$ Any idea?? Thx! | Answering all of your questions Why is cron/pam_env/pam_unix trying to open that file in the first place? See BUG #646015 . In some cases (like locale related stuff) this file is deprecated. But it is still used system-wide, and log is made whenever it is missing. If they legitimately expect it, why isn't it there? Cause maybe the bug isn't fixed after all. Steve Langasek ( BUG #646015 ) said it is, and new systems should create that file using postinst scripts the same way old systems being upgraded should already have that file. What should I do about this? Run dpkg-reconfigure libpam-modules and see if it will create the file through its postinst script. If that does not work, create the file manually with touch /etc/environment It's also interesting to report your issue to the Devuan Project with details of the problem and your setup since this issue was resolved before the Debian/Devuan fork happened. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/492983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330024/"
]
} |
493,033 | While following a video tutorial on Linux bash scripting, the teacher uses atom for editor. When I did I tried to install atom : sudo snap install atom I got the message: error: This revision of snap "atom" was published using classic confinement and thus may perform arbitrary system changes outside of the security sandbox that snaps are usually confined to, which may put your system at risk. If you understand and want to proceed repeat the command including --classic.* What would you do/suggest? | This is answered well in snapcraft's official documentation . In the interest of time, here is the pertinent portion: Classic confinement is effectively un-confining the applications inside a snap. Applications which use classic confinement have the same full system access as traditionally packaged applications. Classic confinement is intended as a stop-gap measure to enable developers to publish applications which need more access than the current set of interfaces enable. Over time, as more interfaces are developed, snap publishers can migrate away from classic confinement to strict. Classically confined snaps must be reviewed by the snap store reviewers team before they can be published in the stable channel. Snaps which use classic confinement may be rejected if they don’t meet the requirements. Users should not attempt to override a strictly confined snap to make it ‘classic’ as this undoes the confinement and interfaces defined by the developer. In addition applications published as strict snaps may misbehave when installed with the ‘–classic’ switch. As for a recommendation, you'll need to weigh the risks in your own mind. Consider the publisher of the software, their reputation/recognition and the fact that classic confinement snaps are reviewed before being published. Classic confinement is not all that different than having done a traditional apt install in terms of the access it allows to the program. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/493033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330223/"
]
} |
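For completeness, the exact command the error message asks you to repeat is:
sudo snap install atom --classic
Run it only once you have weighed the considerations above, since it grants the snap the same system access as a traditionally packaged application.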
493,081 | How do I create an environment variable that is the result of a specific command? Specifically, I want an environment variable ($BWD) that is the basename of $PWD
$ cd /home/devel/Autils/lib
$ echo $PWD
/home/devel/Autils/lib
$ # something here to assign BWD
$ echo $BWD
lib
 | In general the sequence foo="$(bar)" will run the command bar and assign the output to the variable. e.g.
% echo $PWD
/home/sweh
% BWD="$(basename "$PWD")"
% echo $BWD
sweh
This creates a shell variable. If you want to make it into an environment variable (which can be seen by sub-shells) you can export it. e.g.
export BWD="$(basename "$PWD")"
However, in this case you don't need to run a command, but can use shell variable expansion:
BWD=${PWD##*/} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/493081",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330266/"
]
} |
493,141 | root@u1804:~# sed --versionsed (GNU sed) 4.5Copyright (C) 2018 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.Written by Jay Fenlason, Tom Lord, Ken Pizzini,and Paolo Bonzini.GNU sed home page: <https://www.gnu.org/software/sed/>.General help using GNU software: <https://www.gnu.org/gethelp/>.E-mail bug reports to: <[email protected]>.root@u1804:~# I'm new to sed and I created below sed's workflow based on my understanding (correct me if you find anything wrong). So it seems the default auto printing of the pattern space will always include a newline at the end. My question is, will p includes a newline, too? I have below examples. root@u1804:~# seq 3 | sed -rn 'p'123root@u1804: Here the newline at the end of each number is added by sed itself (see the diagram "adds back newline to pattern space"). So it seems p will not append a newline. However, see below example. root@u1804:~# seq 3 | sed -rn 'x;p;x;p'123root@u1804:~# Here x exchange pattern space with hold space, which will result in an empty pattern space. Now p applies to the pattern space (nothing in it) should print nothing. But based on the result, it seems here p prints a newline. To me it seems this is inconsistent behavior. Can anyone explain? | To answer your main question: GNU sed will append a <newline> character when executing the p command unless the input line was missing its terminating <newline> character (see the clarifications about lines below). As far as I can tell, sed 's p flag and its auto-print feature implement the same logic to output the pattern space: if the trailing <newline> character was removed, they add it back; otherwise they don't. Examples: $ printf '%s\n%s' '4' '5' | sed ';' | hexdump -C # auto-print00000000 34 0a 35 |4.5|00000003 $ printf '%s\n%s' '4' '5' | sed -n 'p;' | hexdump -C # no auto-print; p flag00000000 34 0a 35 |4.5|00000003 In both cases there is no <newline> character ( 0a ) in the output for the input lines that don't have one. About your diagrams: "Adds back newline to pattern space" is probably inaccurate because the <newline> character is not put in the pattern space 1 . Also, that step is not related to the -n option - but this does not make the diagram wrong ; rather, it should probably be merged into "Print pattern space". Still, I agree with you about the documentation's lack of clarity. 1 The sentence you quote in your own answer , "the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed", means that the <newline> is appended to the stream, not to pattern space. Of course, since pattern space is cleared in a short while, this is a really minor point About your tests involving the x flag: Internally, pattern space and hold space are structures, and "was my trailing <newline> character dropped?" is a member of them. We will call it chomped (as it is named in sed 's source code, by the way). Pattern space is filled with a read line and its chomped attribute depends on how that line is terminated: true if it ends with a <newline> character, false otherwise. On the other hand, hold space is initialized as empty and its chomped attributed is just set to true . Therefore, when you swap pattern space and hold space and print what was born as hold and is now pattern, a <newline> character is printed. 
Examples - these commands have the same output: $ printf '\n' | sed -n 'p;' | hexdump -C # input is only a <newline>00000000 0a |.|00000001 $ printf '%s' '5' | sed -n 'x;p;' | hexdump -C # input has no <newline>00000000 0a |.|00000001 (I gave only a really brief look at sed 's code, so this might well be not accurate). About lines (clarification started with comments to your answer ): It goes without saying that a line without a terminating <newline> character is a problematic concept. Quoting POSIX : 3.206 Line A sequence of zero or more non- <newline> characters plus a terminating <newline> character. Furthermore, POSIX defines a text file: 3.403 Text File A file that contains characters organized into zero or more lines. ... Finally, POSIX on sed (bold mine): DESCRIPTION The sed utility is a stream editor that shall read one or more text files , make editing changes according to a script of editing commands, and write the results to standard output. ... GNU sed , though, seems to be less strict when defining its input: sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). ... So, relating to my first sentence, we should take into account that, for GNU sed , what is read into the pattern space doesn't necessarily have to be a well formed line of text. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8839/"
]
} |
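A minimal way to see the chomped logic end to end, as a sketch (assuming GNU sed and hexdump are available): a missing final newline on input is propagated to the output of p.
printf 'a\nb' | sed -n 'p' | hexdump -C
# 00000000  61 0a 62   |a.b|
# the final 'b' had no trailing newline on input, so p prints none either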
493,181 | I want to set 755 permission on all files and sub-directories under a specific directory, but I want to execute chmod 755 only for those components which do not have 755 permission. find /main_directory/ -exec chmod 755 {} \; If the find command returns a long list, this will take a lot of time. I know that I can use the stat command to check the octal file-level permission of each component and then use if-else to toggle the file permission, but is there any single line approach using find and xargs to first check what permission the file/directory has, and then use chmod to change it to 755 if it is set to something else. | If you want to change permissions to 755 on both files and directories, there's no real benefit to using find (from a performance point of view at least), and you could just do chmod -R 755 /main_directory If you really want to use find to avoid changing permissions on things that already have 755 permissions (to avoid updating their ctime timestamp), then you should also test for the current permissions on each directory and file: find /main_directory ! -perm 0755 -exec chmod 755 {} + The -exec ... {} + will collect as many pathnames as possible that pass the ! -perm 0755 test, and execute chmod on all of them at once. Usually, one would want to change permissions on files and directories separately, so that not all files are executable: find /main_directory -type d ! -perm 0755 -exec chmod 755 {} +
find /main_directory ! -type d ! -perm 0644 -exec chmod 644 {} + | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/493181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
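A related shortcut worth knowing, as a sketch rather than part of the answer above: the symbolic X bit applies execute permission only to directories and to files that are already executable, so a single chmod can approximate the two find commands.
chmod -R u=rwX,go=rX /main_directory
# directories end up 755 and plain files 644;
# unlike the ! -perm variants, this touches every ctime, and files that
# were already executable end up 755 rather than 644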
493,192 | I am working on Linux Ubuntu 16.04. In this challenge I want to remove some commands from the Linux shell. Specifically, I want to remove the exit command. I want that if a user tries to exit the shell with this command, it won't be possible, or maybe it will even print some message instead. Any idea how I can do it? I searched for a file named exit : find / -name exit But it found only directories: /usr/src/linux-headers-4.13.0-41-generic/include/config/have/exit/usr/src/linux-headers-4.13.0-41-generic/include/config/have/irq/exit/usr/src/linux-headers-4.13.0-43-generic/include/config/have/exit/usr/src/linux-headers-4.13.0-43-generic/include/config/have/irq/exit/usr/src/linux-headers-4.13.0-36-generic/include/config/have/exit/usr/src/linux-headers-4.13.0-36-generic/include/config/have/irq/exit/usr/src/linux-headers-4.15.0-43-generic/include/config/have/exit/usr/src/linux-headers-4.15.0-43-generic/include/config/have/irq/exit EDIT: I read here about using trap like this trap "ENTER-COMMAND-HERE" EXIT and I tried trap "sh" EXIT but it still exits the shell. | There is no exit executable; it is one of the shell's (you don't say what shell, so I am assuming bash ) builtin commands. As such, the only way to remove it completely is to edit the source code of the shell and recompile it (but look at @Kusalananda's answer for a better approach). As an alternative, you could add something like this to /etc/bash.bashrc : alias exit='echo "You can check-out any time you like, but you can never leave!"' But this is trivially easy to circumvent by anyone who knows even a little bit about *nix. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/493192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316437/"
]
} |
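A slightly sturdier variant of the same idea, assuming bash (a sketch for /etc/bash.bashrc; builtin exit, command exit, exec and Ctrl-D all still bypass it):
exit() { echo "You can check-out any time you like, but you can never leave!"; }
readonly -f exit   # keeps users from simply unsetting or redefining the function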
493,436 | We have a RHEL 7 machine, with only 2G of available RAM: free -g total used free shared buff/cache availableMem: 31 28 0 0 1 2Swap: 15 9 5 so we decided to increase the swappiness to the maximum with vm.swappiness = 100 in /etc/sysctl.conf instead of 10, and used sysctl -p to apply the setting. After some time we checked the status again: free -g total used free shared buff/cache availableMem: 31 28 0 0 2 2Swap: 15 9 5 As you can see, despite the new swappiness setting, free -g shows that the available RAM stays at 2G. Why? What is wrong here? We expected to see 15G of used swap. We also checked: cat /proc/sys/vm/swappiness100 so everything should work according to the new settings BUT free shows the same situation. What is going on here? | The swappiness setting is working as intended. Increasing swappiness doesn't cause the system to prefer swap to anything else; increasing swappiness affects the balance between the page cache and swap. When the kernel needs to make physical memory available, it can generally use one of two strategies: it can discard pages from the page cache (since their content is on disk), or it can move pages to swap; swappiness determines how much it favours one strategy over another. Setting swappiness to 0 (the minimum) means the kernel will avoid swapping until it hits various high water marks, and evict pages from the page cache instead; setting it to 100 (the maximum) means the kernel will consider swapping and evicting the page cache equally. You'll only see your new setting make a difference when the kernel needs more memory: you'll see the amount of swap used increase before the amount of memory used in the cache decreases. You can't use swappiness to get the kernel to keep more memory available. Physical memory is always best used rather than left free, so the kernel has no incentive to pre-emptively free physical memory (increasing available memory). See the RHEL 7 performance tuning guide for more information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
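To watch the setting actually being exercised under memory pressure, the standard tools are enough; a sketch (column names as printed by procps vmstat):
sysctl vm.swappiness   # confirm the live value
vmstat 5               # 'si'/'so' show pages swapped in/out, 'cache' shows page cache size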
493,464 | I want to know if there's a way to put the ampersand in a variable and still use it to send a process to the background. This works: BCKGRND=yesif [ "$BCKGRND" = "yes" ]; then sleep 5 &else sleep 5fi But wouldn't it be cool to accomplish those five lines with only one? Like so: BCKGRND='&'sleep 5 ${BCKGRND} But that doesn't work. If BCKGRND isn't set it works - but when it is set it's interpreted as a literal '&' and errors out. | It's not possible to use a variable to background the call because variable expansion happens after the command-line is parsed for control operators (such as && and & ). Yet another option would be to wrap the calls in a function: mayberunbg() { if [ "$BCKGRND" = "yes" ]; then "$@" & else "$@" fi} ... and then set the variable as needed: $ BCKGRND=yes mayberunbg sleep 3[1] 14137$[1]+ Done "$@"# or$ BCKGRND=yes$ mayberunbg sleep 3[1] 14203$[1]+ Done "$@"$ BCKGRND=no mayberunbg sleep 3# 3 seconds later$ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/493464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/329458/"
]
} |
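If a one-liner is really wanted, eval also works, because it re-parses the expanded text so the & is seen as a control operator again; a sketch, with the usual caveat that eval on untrusted input is dangerous:
BCKGRND='&'
eval "sleep 5 $BCKGRND"   # backgrounds; with BCKGRND unset or empty it runs in the foreground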
493,485 | I don't want to tie myself to specific configuration-manager modules like Ansible's apt module or yum module. Is there distro-agnostic configuration management software, or at least one with distro-agnostic code to install the following packages for Arch Linux as well? I ask this because I didn't find a suitable Ansible galaxy-role to install LAMP on Arch Linux and the following Bash script for Debian isn't fit for Arch: #!/bin/bashapt update -yapt upgrade ufw sshguard unattended-upgrades wget curl git zip unzip tree -yufw --force enableufw allow 22,25,80,443apt upgrade lamp-server^ ssmtp -yapt upgrade python-certbot-apache -yapt upgrade php-{cli,curl,mbstring,mcrypt,gd} phpmyadmin -y | Technically, Ansible is that, because it's agent-less; I've used it to manage routers, switches, servers, etc. What it seems like you're asking is whether the package module supports Arch Linux. I'm too lazy to test if that supports Arch; but if it doesn't there is always the pacman module ... And if that doesn't work... There is always writing your own module. What you're speaking of though is a larger problem with running multiple different distributions in a production environment . It becomes painful to manage long term. This is why it's good practice to not run multiple distributions in production, as from a management perspective (purely from code), it's a lot of work. The most obvious way to get around this is with Ansible using when in combination with os_family : apt: name: apache2 when: ansible_facts['os_family'] == "Debian" pacman: name: nginx when: ansible_facts['os_family'] == "Archlinux" I've been in a situation where I had to manage Debian Servers and CentOS servers in production; eventually I made the choice to go pure Debian because: The codebase for CM was cut in half (all the logic for distro specific quirks was removed). Testing became less painful (if you're not testing your CM code, then you're doing it wrong). You'll also run into major differences anyways; for example: Some packages are named differently; httpd (RHEL) vs apache2 (Debian). Different "default" configuration directories; /etc/default (Debian) vs /etc/sysconfig (RHEL). Different init systems; although systemd has largely taken over. No SSH; for example WinRM for Windows. Configuration Management systems are a way of abstracting the environment into code; and they give you logic/conditionals to do that yourself . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/493485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
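For the simple cases there is also Ansible's generic package module, which delegates to whatever package manager the target uses; a sketch in ad-hoc form (it only helps when the package name is identical across distributions, as tree happens to be on Debian and Arch):
ansible all -b -m package -a "name=tree state=present"
# -b escalates privileges; in a playbook the same module takes name/state keys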
493,561 | The notification popups stay too long on the screen in my opinion. How to modify the number of seconds notifications are displayed? I see no such option in any of the notification settings. (Kubuntu 18.04 - Plasma 5.12.7) | This can be done by modifying the file /usr/share/plasma/plasmoids/org.kde.plasma.notifications/contents/ui/NotificationPopup.qml . So, open it in kate : kate /usr/share/plasma/plasmoids/org.kde.plasma.notifications/contents/ui/NotificationPopup.qml Find the line notificationTimer.interval = notification.expireTimeout and comment/change it to notificationTimer.interval = 1 * 1000 where 1 is the number of seconds. Test it with notify-send "your notification" Source here . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
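Since plasmashell only reads the file at startup, the edit takes effect after the shell is restarted; a quick test cycle as a sketch (the kquitapp5/kstart5 names are Plasma 5 specific and may vary by version):
kquitapp5 plasmashell && kstart5 plasmashell   # reload the modified QML
notify-send "timeout test"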
493,582 | For my local work environment I would like to access all my companies server directly from my workstation elsewhere.To make this really fun the only access possible is via the VPN on the company managed notebook and the notebook does not allow connections to the local network. Obviously, no IT department is helping you with such a problem. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90557/"
]
} |
493,607 | I have a built-in USB device that is not responding to setup address. It tries to set the device up continuously and fails continuously: wasting battery, CPU, disk space, etc. Is there a way to kill the USB port or otherwise stop the kernel from trying to configure it? I've tried rebooting, using uhubctl (not known as a smart hub), playing with port's power/autosuspend_delay_ms (gets input/output error), playing with the port's power/control (already auto), playing with the hub's power/level (invalid argument). Of course, I cannot try another cable--it is an embedded device. And I would prefer to not disable the hub entirely, but I'd be willing to try it. I can virtually remove the PCI card via Linux, but that takes out stuff I actually need (the high speed USB hub). I'm going to guess that the device is actually a laptop fingerprint reader that I've never used or been able to use, but I remember being around. [ 7283.684834] usb usb1-port7: attempt power cycle[ 7284.312659] usb 1-7: new full-speed USB device number 41 using xhci_hcd[ 7284.312858] usb 1-7: Device not responding to setup address.[ 7284.516966] usb 1-7: Device not responding to setup address.[ 7284.724647] usb 1-7: device not accepting address 41, error -71[ 7284.838653] usb 1-7: new full-speed USB device number 42 using xhci_hcd[ 7284.838852] usb 1-7: Device not responding to setup address.[ 7285.044852] usb 1-7: Device not responding to setup address.[ 7285.252760] usb 1-7: device not accepting address 42, error -71[ 7285.252861] usb usb1-port7: unable to enumerate USB device[ 7285.366647] usb 1-7: new full-speed USB device number 43 using xhci_hcd[ 7285.480810] usb 1-7: device descriptor read/64, error -71[ 7285.702811] usb 1-7: device descriptor read/64, error -71[ 7285.918653] usb 1-7: new full-speed USB device number 44 using xhci_hcd[ 7286.032729] usb 1-7: device descriptor read/64, error -71[ 7286.254780] usb 1-7: device descriptor read/64, error -71[ 7286.356717] usb usb1-port7: attempt power cycleRepeat forever Running lsusb of course does not report the device. However, the upstream hub is: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub00:14.0 USB controller: Intel Corporation 100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller (rev 31) | While, as the OP told, some USB hubs have an additional protocol allowing to power off a single port, thus easily solving the problem with the use of uhubctl, most USB hubs, internal included, have no such control. What is still possible in Linux is to ask the kernel to disable the use of an USB device by writing 0 to the authorized control file of this device in the /sys/bus/usb/devices tree. For a device that behaves normally, this would solve the problem, but not for a device that is disconnecting and connecting back all the time. Still, when any USB hub is thus disabled, it will disable and power off all its ports. So disabling the USB hub where the device is connected will effectively disable and power off the misconducting device. If the loss of any other connected device to this hub is acceptable then that's a possible method. Writing back 1 to the authorized file will again enable the device, and for a hub, power back on its ports, powering back any connected device. Example: # cat /sys/bus/usb/devices/2-1/productUSB2.0 Hub# echo 0 > /sys/bus/usb/devices/2-1/authorized# dmesg|tail -1[226616.900051] usb 2-1.3: USB disconnect, device number 30 usb 2-1.3 was a keyboard and its LEDs go off. 
# echo 1 > /sys/bus/usb/devices/2-1/authorized# dmesg|fgrep 2-1|tail -10[227055.203089] hub 2-1:1.0: USB hub found[227055.204441] hub 2-1:1.0: 4 ports detected[227055.213891] usb 2-1: authorized to connect[227055.405342] usb 2-1.3: new low-speed USB device number 41 using xhci_hcd[227055.511969] usb 2-1.3: New USB device found, idVendor=413c, idProduct=2113, bcdDevice= 1.08[227055.511975] usb 2-1.3: New USB device strings: Mfr=0, Product=2, SerialNumber=0[227055.511978] usb 2-1.3: Product: Dell KB216 Wired Keyboard[227055.520754] input: Dell KB216 Wired Keyboard as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1.3/2-1.3:1.0/0003:413C:2113.001A/input/input136[227055.583032] input: Dell KB216 Wired Keyboard System Control as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1.3/2-1.3:1.1/0003:413C:2113.001B/input/input137[227055.641748] input: Dell KB216 Wired Keyboard Consumer Control as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1.3/2-1.3:1.1/0003:413C:2113.001B/input/input138 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7687/"
]
} |
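To locate the right node under /sys/bus/usb/devices in the first place, dumping the product strings of everything currently enumerated can help; a sketch (devices without a product attribute are simply skipped by the glob):
for d in /sys/bus/usb/devices/*/product; do
    printf '%s: %s\n' "${d%/product}" "$(cat "$d")"
done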
493,621 | I was getting errors on using npm start in my react application. which goes like this ENOSPC system limit for file watcher reached I updated my npm and node versions but still i was getting this problem. After a little research i got to know that there is something known as inotify which watches different files and i need to increase its file watching limit. I used this code echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p in my terminal to increase the file watcher limit. After this i was succesfully able to do npm start and got my project running in localhost:3000, But this slowed my system incredibly, and it freezed every now and then. I highly suspect this is due to increasing number of file watchers. Is it the case if yes what should i do now?? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330709/"
]
} |
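Raising the limit only lifts a cap; each inotify watch pins a small amount of kernel memory (roughly a kilobyte on 64-bit kernels), so a watcher registering huge numbers of watches, rather than the sysctl itself, is the more likely cause of a slowdown. A rough sketch of counting the watches actually in use (run as root to see every process):
grep -c '^inotify' /proc/*/fdinfo/* 2>/dev/null |
    awk -F: '$2 > 0 {s += $2} END {print s, "inotify watches in use"}'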
493,633 | Is there any way to detect downloaded (in cache) and not installed packages using Synaptic, apt or dpkg? Do I have to create a specific filter for that? If so how do I do that? Where could I learn about that? I know the apt or Synaptic cache is on /var/cache/apt/archives and it seems that if you install something using apt nothing goes to cache. On the other hand you have the option in Synaptic for downloading only and install later. What I want is to know what is on that cache that is not installed already. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
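Since /var/cache/apt/archives just holds plain .deb files, comparing their package names against dpkg's database is one rough way to list cached-but-not-installed packages; a sketch (it ignores version differences, so an older cached version of an installed package is not reported):
for f in /var/cache/apt/archives/*.deb; do
    p=$(dpkg-deb -f "$f" Package)          # read the package name from the .deb
    dpkg -s "$p" >/dev/null 2>&1 || echo "$f"  # print it if dpkg has no record of it
done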
493,657 | I have the following path: /dir1/dir2/ In this path I have the following directories containing various (not relevant) application detritus: follower1234 1-Dec-2018follower3456 2-Dec-2018follower4567 3-Dec-2018follower7890 9-Jan-2019follower8901 10-Jan-2019leader8765 4-Dec-2018bystander6789 5-Dec-2018 Assume that today is 10 Jan 2019. Assume that there can be any number of followerXXXX , leaderXXXX and bystanderXXXX directories. What I want is to delete all followerXXXX directories but the latest followerXXXX directory, that are older than two weeks. Now I can delete all directories older than a particular date . But that isn't my question. I'm adding two additional parameters. In this case I want to delete: follower1234 1-Dec-2018follower3456 2-Dec-2018follower4567 3-Dec-2018 But not follower7890 9-Jan-2019follower8901 10-Jan-2019leader8765 4-Dec-2018bystander6789 5-Dec-2018 ie I want to delete files (a) matching a pattern (b) older than two weeks (c) not the latest directory matching the pattern (ie keep the last one) My question is: How to delete all directories in a directory older than 2 weeks except the latest one that match a file pattern? | Introduction The question has been modified. My first alternative (the oneliner) does not match the new specification, but saves the newest directory among the directories that are old enough to be deleted (older than 14 days). I made a second alternative (the shellscript) that uses @ seconds since Jan. 1, 1970, 00:00 GMT, with fractional part, and subtracts the seconds corresponding to 14 days to get a timestamp for the 'limit-in-seconds' at seclim in the sorted list of directories. 1. Oneliner The previous answers are clean and nice, but they do not preserve the newest follower directory. The following command line will do it (and can manage names with spaces but names with newlines create problems), find . -type d -name "follower*" -printf "%T+ %p\n"|sort|head -n -1 | cut -d ' ' -f 2- | sed -e 's/^/"/' -e 's/$/"/' | xargs echo rm -r tested on this directory structure, $ find -printf "%T+ %p\n"|sort2019-01-10+13:11:40.6279621810 ./follower12019-01-10+13:11:40.6279621810 ./follower1/2/32019-01-10+13:11:40.6279621810 ./follower1/2/dirnam with spaces2019-01-10+13:11:40.6279621810 ./follower1/2/name with spaces2019-01-10+13:11:56.5968732640 ./follower1/2/file2019-01-10+13:13:18.3975675510 ./follower22019-01-10+13:13:19.4016254340 ./follower32019-01-10+13:13:20.4056833250 ./follower42019-01-10+13:13:21.4097412230 ./follower52019-01-10+13:13:22.4137991260 ./follower62019-01-10+13:13:23.4138568040 ./follower72019-01-10+13:13:24.4219149500 ./follower82019-01-10+13:13:25.4259728780 ./follower92019-01-10+13:15:34.4094596830 ./leader12019-01-10+13:15:36.8336011960 .2019-01-10+13:15:36.8336011960 ./leader22019-01-10+13:25:03.0751878450 ./follower1/2 like so, $ find . -type d -name "follower*" -printf "%T+ %p\n"|sort|head -n -1 | cut -d ' ' -f 2- | sed -e 's/^/"/' -e 's/$/"/' | xargs echo rm -rrm -r ./follower1 ./follower2 ./follower3 ./follower4 ./follower5 ./follower6 ./follower7 ./follower8 So follower9 is excluded because it is the newest follower directory (directories with names that do not start with follower , i.e. leader1 , leader2 and 2 , are not in the game). Now we add the time criterion, -mtime +14 and do another 'dry run' to check that it works like it should, when we have changed directory to where there are real follower directories, find . 
-type d -name "follower*" -mtime +14 -printf "%T+ %p\n"|sort|head -n -1 | cut -d ' ' -f 2- | sed -e 's/^/"/' -e 's/$/"/' | xargs echo rm -r Finally we remove echo and have a command line that can do what we want, find . -type d -name "follower*" -mtime +14 -printf "%T+ %p\n"|sort|head -n -1 | cut -d ' ' -f 2- | sed -e 's/^/"/' -e 's/$/"/' | xargs rm -r find in the current directory, directories with names beginning with follower , that are not modified since 14 days ago. After printing and sorting head -n -1 will exclude the newest follower directory . The time stamps are cut away, and double quotes are added at the head end and tail end of each directory name. Finally the result is piped via xargs as parameters to rm -r in order to remove the directories, that we want to remove. 2. Shellscript I made a second alternative, (the shellscript) that uses @ seconds since Jan. 1, 1970, 00:00 GMT, with fractional part. It has also two options, -n dry run -v verbose I modified the shellscript according to what the OP wants: enter the pattern as a parameter within single quotes e.g. 'follower*'. I suggest that the name of the shellscript is prune-dirs because it is more general now (no longer only prune-followers to prune directories follower* ). You are recommended to run the shellscript with both options the first time in order to 'see' what you will do, and when it looks correct, remove the -n to make the shellscript remove the directories that are old enough to be removed. So let us call it prune-dirs and make it executable. #!/bin/bash# date sign comment# 2019-01-11 sudodus version 1.1# 2019-01-11 sudodus enter the pattern as a parameter# 2019-01-11 sudodus add usage# 2019-01-14 sudodus version 1.2# 2019-01-14 sudodus check if any parameter to the command to be performed# Usageusage () { echo "Remove directories found via the pattern (older than 'datint') Usage: $0 [options] <pattern>Examples: $0 'follower*' $0 -v -n 'follower*' # 'verbose' and 'dry run'The 'single quotes' around the pattern are important to avoid that the shell expandsthe wild card (for example the star, '*') before it reaches the shellscript" exit}# Manage options and parametersverbose=falsedryrun=falsefor i in in "$@"do if [ "$1" == "-v" ] then verbose=true shift elif [ "$1" == "-n" ] then dryrun=true shift fidoneif [ $# -eq 1 ]then pattern="$1"else usagefi# Command to be performed on the selected directoriescmd () { echo rm -r "$@"}# Pattern to search for and limit between directories to remove and keep#pattern='follower*'datint=14 # daystmpdir=$(mktemp -d)tmpfil1="$tmpdir"/fil1tmpfil2="$tmpdir"/fil2secint=$((60*60*24*datint))seclim=$(date '+%s')seclim=$((seclim - secint))printf "%s limit-in-seconds\n" $seclim > "$tmpfil1"if $verbosethen echo "----------------- excluding newest match:" find . -type d -name "$pattern" -printf "%T@ %p\n" | sort |tail -n1 | cut -d ' ' -f 2- | sed -e 's/^/"/' -e 's/$/"/'fi# exclude the newest match with 'head -n -1'find . 
-type d -name "$pattern" -printf "%T@ %p\n" | sort |head -n -1 >> "$tmpfil1"# put 'limit-in-seconds' in the correct place in the sorted list and remove the timestampssort "$tmpfil1" | cut -d ' ' -f 2- | sed -e 's/^/"/' -e 's/$/"/' > "$tmpfil2"if $verbosethen echo "----------------- listing matches with 'limit-in-seconds' in the sorted list:" cat "$tmpfil2" echo "-----------------"fi# create 'remove task' for the directories older than 'limit-in-seconds'params=while read filnamdo if [ "${filnam/limit-in-seconds}" != "$filnam" ] then break else params="$params $filnam" fidone < "$tmpfil2"cmd $params > "$tmpfil1"cat "$tmpfil1"if ! $dryrun && ! test -z "$params"then bash "$tmpfil1"firm -r $tmpdir Change current directory to the directory with the follower subdirectories create the file prune-dirs make it executable and run with the two options -v -n cd directory-with-subdirectories-to-be-pruned/nano prune-dirs # copy and paste into the editor and save the filechmod +x prune-dirs./prune-dirs -v -n Test I tested prune-dirs in a directory with the following sub-directories, as seen with find $ find . -type d -printf "%T+ %p\n"|sort2018-12-01+02:03:04.0000000000 ./follower12342018-12-02+03:04:05.0000000000 ./follower34562018-12-03+04:05:06.0000000000 ./follower45672018-12-04+05:06:07.0000000000 ./leader87652018-12-05+06:07:08.0000000000 ./bystander67892018-12-06+07:08:09.0000000000 ./follower with spaces old2019-01-09+10:11:12.0000000000 ./follower78902019-01-10+11:12:13.0000000000 ./follower89012019-01-10+13:15:34.4094596830 ./leader12019-01-10+13:15:36.8336011960 ./leader22019-01-10+14:08:36.2606738580 ./22019-01-10+14:08:36.2606738580 ./2/follower with spaces2019-01-10+17:33:01.7615641290 ./follower with spaces new2019-01-10+19:47:19.6519169270 . Usage $ ./prune-dirsRemove directories found via the pattern (older than 'datint') Usage: ./prune-dirs [options] <pattern>Examples: ./prune-dirs 'follower*' ./prune-dirs -v -n 'follower*' # 'verbose' and 'dry run'The 'single quotes' around the pattern are important to avoid that the shell expandsthe wild card (for example the star, '*') before it reaches the shellscript Run with -v -n (a verbose dry run) $ ./prune-dirs -v -n 'follower*'----------------- excluding newest match:"./follower with spaces new"----------------- listing matches with 'limit-in-seconds' in the sorted list:"./follower1234""./follower3456""./follower4567""./follower with spaces old""limit-in-seconds""./follower7890""./follower8901""./2/follower with spaces"-----------------rm -r "./follower1234" "./follower3456" "./follower4567" "./follower with spaces old" A verbose dry run with a more general pattern $ LANG=C ./prune-dirs -v -n '*er*'----------------- excluding newest match:"./follower with spaces new"----------------- listing matches with 'limit-in-seconds' in the sorted list:"./follower1234""./follower3456""./follower4567""./leader8765""./bystander6789""./follower with spaces old""limit-in-seconds""./follower7890""./follower8901""./leader1""./leader2""./2/follower with spaces"-----------------rm -r "./follower1234" "./follower3456" "./follower4567" "./leader8765" "./bystander6789" "./follower with spaces old" Run without any options (a real case removing directories) $ ./prune-dirs 'follower*'rm -r "./follower1234" "./follower3456" "./follower4567" "./follower with spaces old" Run with -v 'trying again' $ LANG=C ./prune-dirs -v 'follower*'----------------- excluding newest match:"./follower with spaces new"----------------- listing matches with 'limit-in-seconds' in the sorted 
list:"limit-in-seconds""./follower7890""./follower8901""./2/follower with spaces"-----------------rm -r The shellscript lists no directory 'above' "limit-in-seconds", and there are no files listed for the rm -r command line, so the work was done already (which is the correct result). But if you run the shellscript again several days later, some new directory may be found 'above' "limit-in-seconds" and be removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/493657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112231/"
]
} |
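For completeness, a NUL-delimited variant of the oneliner that also survives names with newlines, offered as a sketch (it assumes GNU find and coreutils 8.25 or newer for the -z options; keep the echo for a dry run):
find . -maxdepth 1 -type d -name 'follower*' -mtime +14 -printf '%T@ %p\0' |
    sort -z | head -z -n -1 | cut -z -d ' ' -f 2- | xargs -r0 echo rm -r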