source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---
479,900 | On Lubuntu 18.04, I open pcmanfm with $ pcmanfm . and, after looking at the thumbnails of the image files under the current directory in pcmanfm, I closed the pcmanfm window with Alt-F4, but it is still hanging in the foreground in the terminal emulator. I moved it to the background with Ctrl-Z and bg 2 , and tried to kill it, but it doesn't work:
$ jobs -l
[2]+ 31124 Running pcmanfm . &
$ kill %2
$ jobs -l
[2]+ 31124 Running pcmanfm . &
$ sudo kill 31124
$ jobs -l
[2]+ 31124 Running pcmanfm . &
Its state is Sl : S means "interruptible sleep (waiting for an event to complete)" and l means "is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)". So I wonder why I can't kill the process? How would you kill it? Thanks.
$ ps aux | grep [3]1124
t 31124 0.8 0.7 693952 57064 pts/9 Sl 06:34 0:47 pcmanfm . | By default, kill only sends a TERM signal, and for some reason pcmanfm is ignoring this signal. If you pass the option -KILL to kill, it will send SIGKILL instead, which the process cannot catch or ignore, so it will be removed with no chance to clean up, or appeal. You do not need extra privileges ( sudo ) to kill processes that you own. sudo can be dangerous; don't use it just out of frustration. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
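A gentler pattern than reaching straight for SIGKILL is to try TERM first and escalate only if the process survives; a minimal sketch, assuming the PID (31124, taken from the question's output) is known:

```sh
pid=31124
kill "$pid"                          # kill sends SIGTERM by default
sleep 5                              # give the process a chance to clean up
if kill -0 "$pid" 2>/dev/null; then  # -0 sends no signal, only tests existence
    kill -KILL "$pid"                # SIGKILL cannot be caught or ignored
fi
```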
480,035 | I desire to list all inodes in the current directory that are regular files (i.e. not directories, links, or special files), with ls -la ( ll ). I went to the man ls searching for type and found only this which I didn't quite understand in that regard: --indicator-style=WORD append indicator with style WORD to entry names: none (default), slash (-p), file-type (--file-type), classify (-F) How could I list only regular files with ls -la ( ll as my shortcut in Ubuntu 18.04)? | find . -maxdepth 1 -type f -ls This would give you the regular files in the current directory in a format similar to what you would get with ls -lisa (but only showing regular files, thanks to -type f on the command line). Note that -ls (introduced by BSDs) and -maxdepth (introduced by GNU find ) are non-standard (though now common) extensions. POSIXly, you can write it: find . ! -name . -prune -type f -exec ls -ldi {} + (which also has the benefit of sorting the file list, though possibly in big independent chunks if there's a large number of files in the current directory). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
480,052 | Problem/Goal description Ideally, I would like a good way of detecting from a shell script whether or not the window has focus. By a "good" way, I mean some way which requires minimal steps and preferably does not require sifting through each open window blindly to find mine based on title. The purpose is for controlling notifications in many different scripts -- so I'm just looking for a general solution that can apply to any and all of them. What I've come up with so far is roundabout and hacky -- it is as follows: Set my title to something unique or mechanically relevant (in my model, it is my PTS path or, more robustly, a UUID). Hope desperately that this title is not overridden by something. Get a list of all open windows, by title. Iterate through the list to identify my window by matching it to the title element. (Note the possibility of errors here if another window happens to have that same title element.) Detect whether said window has focus or not. It should be noted that I do not want to implement this, and will only do it as a last resort. So what I'm asking for here is something that is not this . Compromises This solution is obviously terrible, so I'd like to know if there is anything remotely better, in any way. I would prefer something portable, elegant, and perfect, but I recognize the potential need to compromise. By better I mean any of the following: A solution that only works with a specific terminal emulator, e.g. by having the terminal emulator itself set an environment variable allowing the script to detect which window it is in. A solution that does not require setting the title, and instead uses some other invisible marker in window state that is accessible and detectable from a shell script attached to said window. Recursing up the parent process ladder to find the parent terminal emulator PID, and working from there (Note that a solution that works by recursing up the process tree to detect the parent process that started the script will only work if the script is running locally, so this solution is incomplete but still good!) Conditions I was getting questions about exactly what conditions my preferred solution is supposed to function under, and the answer is as many as possible . But at minimum, I would like something that works: In a single-tab terminal session running natively (default scenario). In terminal multiplexers like tmux. (Portability between different terminal multiplexers is preferred but really not required.) Extras that I'd really appreciate (in order of importance) include: Ability to function on remote connections over telnet and SSH. Ability to distinguish which tab is open in a multi-tab terminal session. Summary I want a good way of finding what terminal emulator window my shell script is attached to, so that I can detect whether it has focus or not. Note that I'm already aware of the mechanics of how to iterate through open windows, and how to detect whether they have focus or not and what titles they have. I am aware of the existence of xdotool and xprop and this question is not about the basic mechanics of those tools (unless there is some hidden black magic feature I don't know about that side-steps the intrinsic hackiness of my current solution). The reason I don't want to do that is because it's terrible. Any other solution that accomplishes the same thing? | There's a FocusIn/FocusOut mode.
To enable: echo -ne '\e[?1004h'
To disable: echo -ne '\e[?1004l'
On each focus event, you receive either \e[I (in) or \e[O (out) from the input stream. GNOME Terminal (and other VTE based terminals) also report the current state when you enable this mode. That is, you can enable and then immediately disable it to query the value once. You can combine read with a timeout, or specify reading 3 characters, to get the response. Note however that it's subject to a race condition, e.g. in case you have typed ahead certain characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310093/"
]
} |
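The enable/disable query trick described in the answer can be turned into a small test, sketched here for bash (the 3-character reply and the report-on-enable behaviour are the ones the answer attributes to VTE-based terminals; other terminals may not respond at all, hence the read timeout):

```sh
stty -echo                     # keep the terminal's reply from being echoed
printf '\e[?1004h'             # enable focus reporting; VTE reports current state now
printf '\e[?1004l'             # disable it again immediately
IFS= read -r -n 3 -t 1 reply   # bash read: exactly 3 chars, 1-second timeout
stty echo
case $reply in
    $'\e[I') echo "window has focus" ;;
    $'\e[O') echo "window does not have focus" ;;
    *)       echo "no focus report (terminal may not support mode 1004)" ;;
esac
```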
480,059 | On my Lubuntu (18.10), xdg-open launches VLC Player when the file is not associated with any application.
$ xdg-mime query filetype jquery.js
application/javascript
$ xdg-mime query default application/javascript # no output
$ xdg-open jquery.js
Error: no "view" mailcap rules found for type "application/javascript"
Opening "/tmp/jquery.js" with VLC media player (application/javascript)
On some files, it launches Calibre's E-book viewer (.rb for example). EDIT I dug into xdg-open and found it executes the following commands:
Check filetype with xdg-mime query filename "$file" and xdg-mime query default $filetype
run-mailcap --action=view "$file"
mimeopen -L -n "$file"
The problem lies in mimeopen. So how can I change mimeopen to open any unknown files with featherpad, or a specific app? In other words, I'd like to set a default fallback application if mimeopen cannot find any suitable apps. | mimeopen treats unknown files as text/plain or application/octet-stream . To set the default application, run mimeopen with the -d option. Since I could not find an option to specify the mimetype, you need to create dummy files first.
touch text.txt # for text/plain
mimeopen -d text.txt # and choose your favorite app
echo -e \\0 > data.dat # for application/octet-stream
mimeopen -d -M data.dat
or edit "~/.config/mimeapps.list":
[Default Applications]
text/plain=featherpad.desktop;
application/octet-stream=firefox.desktop;
mimeopen , which is shipped with File-MimeInfo , tries to find applications via parent mimetypes. For example, if the filetype starts with "text/", it has "text/plain" as a parent. And every filetype inherits "application/octet-stream". With mimeopen in my environment, the most "suitable" app for octet-stream is VLC Player and for text/plain it is Calibre's E-book Viewer. That's why some files are opened with these apps. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
480,082 | I often connect to a network which has a lot of printers. When printer discovery is ongoing, a lot of distracting messages pop up in GNOME. I use a printer only rarely, so I would prefer to keep CUPS disabled most of the time. Stopping CUPS works and eliminates the annoying notifications: systemctl stop cups I would like to disable it on boot. Surprisingly, after disabling with systemctl disable cups CUPS still runs after reboot. The status command systemctl status cups produces:
● cups.service - CUPS Scheduler
Loaded: loaded (/lib/systemd/system/cups.service; disabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/cups.service.d
Active: active (running) since Tue 2018-11-06 02:35:50 PST; 11s ago
I expected that disabling a service would prevent it from running after reboot. Does activation happen because of the preset? I tried to preset the "disabled" status with --preset-mode , but it did not work. My OS is Debian Stretch.
systemctl --version
systemd 232
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN | No, activation does not happen because of the preset. systemctl disable cups will only prevent it from auto-starting. It's possible that it was started anyway because it was required by another service. This would confirm it: systemctl --reverse list-dependencies cups.service If that's the case then you should evaluate & disable those services as well. Or, if you don't care about the repercussions and want to completely prevent it from being started, mask it: systemctl mask cups | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319585/"
]
} |
480,112 | I have established a reverse ssh tunnel into a restricted network to an aws server, i.e. to access it, I ssh into the aws server and from there I get access to the machine in the restricted network on some custom port. On this restricted network, there are devices which can be configured through a web browser. I believe I could do something like ssh -R 8080:deviceIP:80 user@aws to get it forwarded to the aws machine but then I still can't access it (other than through remote X which is terribly slow). How can I pipe deviceIP:80 through to my browser at home via aws ? I've tried the above ssh command and then directed the browser on my home computer to aws:8080 but that didn't load any page... | SSH tunnels are useful to cross insecure networks, providing end-to-end encryption when connecting two end points that sit on distinct networks. As far as I can tell (thanks to comments), what you have is: A local host-A: your localhost , on your local network (likely behind firewall/NAT) A publicly reachable host-B: the aws server A non-publicly reachable host-C: on the restricted remote network (likely behind firewall/NAT) A non-publicly reachable host-D: the one you refer to as deviceIP , that listens on port 80 and is on the remote restricted network If you want to connect your host A to host D, letting your browser reach it on port 80 , you need: A tunnel from host-A to host-B, that: Lets host-A listen on port 8080 Sends traffic from that port through the tunnel On host-B ( aws ), redirects the traffic coming from the tunnel to the local (i.e. on host-B) port 15872 (I took it from your comments; you can choose any available port; just make sure to use the same one in all commands)
# Execute on host-A
$ ssh -L 8080:localhost:15872 user@host-B
A tunnel from host-C to host-B, that: Lets host-B listen on port 15872 Sends traffic from that port through the tunnel On host-C (your Linux server), redirects that traffic to port 80 on host-D
# Execute on host-C
$ ssh -R *:15872:host-D:80 user@host-B
This way, requests made to host-A on port 8080 will be tunneled to host-B, redirected to port 15872 on the same host-B, tunneled to host-C and redirected on host-C to port 80 of host-D. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46433/"
]
} |
480,121 | Issue: Every now and then I need to do simple arithmetic in a command-line environment. E.g. given the following output:
Disk /dev/sdb: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 106MB 105MB fat32 hidden, diag
2 106MB 64.1GB 64.0GB ext4
3 64.1GB 192GB 128GB ext4
5 236GB 256GB 20.0GB linux-swap(v1)
What's a simple way to calculate on the command line the size of the unallocated space between partition 3 and 5? What I've tried already:
bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
236-192
44
quit
where the bold above is all the stuff I need to type to do a simple 236-192, as bc 1+1 echoes File 1+1 is unavailable.
expr
expr 236 - 192
where I need to type spaces before and after the operator, as expr 1+1 just echoes 1+1 . | You can reduce the amount of verbosity involved in using bc :
$ bc <<<"236-192"
44
$ bc <<<"1+1"
2
(assuming your shell supports that). If you’d rather have that as a function:
$ c() { printf "%s\n" "$@" | bc -l; }
$ c 1+1 22/7
2
3.14285714285714285714
( -l enables the standard math library and increases the default scale to 20.) Store the c definition in your favourite shell startup file if you want to make it always available. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/480121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90054/"
]
} |
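For integer-only cases like the partition sizes in the question, the shell's own arithmetic expansion avoids spawning bc entirely; bc is still needed once fractions are involved, since $(( )) truncates:

```sh
echo $((236 - 192))    # prints 44
echo $((22 / 7))       # prints 3 (integer division)
bc -l <<<"22/7"        # prints 3.14285714285714285714
```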
480,241 | I'm trying to remove a lot of files at once but need to be specific so as not to remove any of the files I actually need. I have a ton of corrupt files that start with master- but there are valid files that start with master-2018 . So, I want to do something like rm -rf master-* --exclude master-2018* Is what I need possible? | Yes, you can use more than one pattern with find : $ find -name 'master-*' \! -name 'master-2018*' -print0 -prune | xargs -0 echo rm -fr (remove the echo if you're satisfied with the dry run). You should add a -maxdepth 1 predicate just after find if you only want to remove files from the current directory, i.e. master-1991 but not subdir/master-1991 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319718/"
]
} |
480,330 | Can we use temporary folders like temporary files,
TMP=$(mktemp ... )
exec 3<>$TMP
rm $TMP
cat <&3
which will be destroyed automatically after this shell exits? | In the case of a temporary file, your example in the question would create it, then unlink it from the directory (making it "disappear"), and when the script closes the file descriptor (probably upon termination), the space taken by the file would be reclaimable by the system. This is a common way to deal with temporary files in languages like C. It is, as far as I know, not possible to open a directory in the same way, at least not in any way that would make the directory usable. A common way to delete temporary files and directories at the termination of a script is by installing a cleanup EXIT trap. The code examples given below avoid having to juggle file descriptors completely.
tmpdir=$(mktemp -d)
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"; rm -rf "$tmpdir"' EXIT
# The rest of the script goes here.
Or you may call a cleanup function:
cleanup () {
    rm -f "$tmpfile"
    rm -rf "$tmpdir"
}
tmpdir=$(mktemp -d)
tmpfile=$(mktemp)
trap cleanup EXIT
# The rest of the script goes here.
The EXIT trap won't be executed upon receiving the KILL signal (which can't be trapped), which means that there will be no cleanup performed then. It will however execute when terminating due to an INT or TERM signal (if running with bash or ksh ; in other shells you may want to add these signals after EXIT in the trap command line), or when exiting normally due to arriving at the end of the script or executing an exit call. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318733/"
]
} |
480,375 | I have a directory with a lot of mp3 files, and I need a simple way to find the accumulated duration for them. I know that I can find the duration for one file with ffmpeg -i <file> 2>&1 | grep Duration I also know that I can run this command on all mp3 files in a directory with the command for file in *.mp3; do ffmpeg -i "$file" 2>&1 | grep Duration; done This can be somewhat filtered with for file in *.mp3; do ffmpeg -i "$file" 2>&1 | grep Duration | cut -f4 -d ' '; done But how do I sum it all up? Using ffmpeg is not necessary. The output format is not so important either. Seconds or mm:ss or something similar will do. I would like it to look something like this: $ <command>84:33 | You can get exactly the duration in seconds, then sum them with bc: for file in *.mp3;do ffprobe -v error -select_streams a:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 "$file";done|paste -sd+|bc -l Convert this number to HH:MM:SS format by yourself. e.g. https://stackoverflow.com/a/12199816/6481121 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236940/"
]
} |
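The seconds-to-HH:MM:SS conversion that the answer leaves to the reader can be folded into the same pipeline; an untested sketch using awk in place of bc (the ffprobe invocation is the one from the answer):

```sh
for file in *.mp3; do
    ffprobe -v error -select_streams a:0 \
        -show_entries stream=duration \
        -of default=noprint_wrappers=1:nokey=1 "$file"
done | awk '{ s += $1 }                     # accumulate total seconds
    END { printf "%02d:%02d:%02d\n", s/3600, (s%3600)/60, s%60 }'
```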
480,429 | I've somehow managed to remove the toolbar and menu-bar from Okular. After I found no way to reactivate them from "outside" – since I can't click on Options anymore – I tried to reinstall a clean version after removing it with apt-get remove --purge okular . However, it didn't work; the tool- and menu-bars were still unavailable. I also looked for any configuration files in the home directory, without success. How should I proceed now in order to restore a clean Okular? | The keyboard shortcut to show/hide the menu bar is Ctrl-m . I found the config file where the setting is stored at ~/.config/okularrc . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58310/"
]
} |
480,459 | When using the debian:stretch Docker image, the /usr/share/man/ directory already contains many manpages , and man can be easily installed to view them:
$ apt-get update
$ apt-get install man
$ man ls
$ man cp
However, when using the debian:stretch-slim Docker image, the /usr/share/man/ directory is intentionally empty: These tags are an experiment in providing a slimmer base (removing some extra files that are normally not necessary within containers, such as man pages and documentation) How do I populate the /usr/share/man/ directory, so I can use man to view manpages for core utilities (such as cat , chmod , chown , cp , ls , mkdir , mv , rm , tail , etc)? | The coreutils package populates the /usr/share/man/man1/ directory with manpages for core utilities . However, simply running apt-get update and apt-get install coreutils is not sufficient, because dpkg has been configured to exclude /usr/share/man/* , using path-exclude in /etc/dpkg/dpkg.cfg.d/docker (see here and here ). So the first step is to remove that line from the /etc/dpkg/dpkg.cfg.d/docker file. One way to do this is by using sed :
$ sed -i '/path-exclude \/usr\/share\/man/d' /etc/dpkg/dpkg.cfg.d/docker
dpkg has also been configured to exclude /usr/share/groff/* , and this needs to be undone too (since groff is required in order to render manpages):
$ sed -i '/path-exclude \/usr\/share\/groff/d' /etc/dpkg/dpkg.cfg.d/docker
Now the /usr/share/man/man1/ directory needs to be populated from the coreutils package. Since coreutils is already installed in the debian:stretch-slim Docker image, it needs to be reinstalled:
$ apt-get update
$ apt-get install --reinstall coreutils
Finally, man can be installed and manpages can be viewed:
$ apt-get install man
$ man ls
$ man cp
It's also helpful to install less , which man will use for paginating the manpages, and provides a better experience than the default more paginator:
$ apt-get install less
Related questions: Remove documentation to save hard drive space Installing packages without docs Reinstall man pages & fix man How can I restore the man page for ls (/usr/share/man/man1/ls.1.gz)? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27789/"
]
} |
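The individual steps above, collected into one copy-paste block (for example the RUN step of a Dockerfile based on debian:stretch-slim; the paths and package names are the ones used in the answer):

```sh
sed -i '/path-exclude \/usr\/share\/man/d'   /etc/dpkg/dpkg.cfg.d/docker
sed -i '/path-exclude \/usr\/share\/groff/d' /etc/dpkg/dpkg.cfg.d/docker
apt-get update
apt-get install -y --reinstall coreutils   # repopulates /usr/share/man/man1/
apt-get install -y man less                # man viewer plus a better pager
```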
480,651 | I'm doing some hands on pen testing and following some guides to get an understanding of the tools of the trade. I'm following along with the guide here and I understand everything except for the last page. I need assistance understanding sudo -l below. I know that it details what the current user can do. However, what does the output below mean? And how about the command below (excluding touch)? It kind of confuses me because after running that command (exploit?), I was able to get root. From my understanding, the line is saying to run the command as root or elevate to root, zip the file called exploit, and place it in tmp/exploit. I believe I'm wrong but that's where my understanding of that line stops. I'm confused as to how I got root with that command and what that line is doing. | For your first question, the indicated lines of output are telling you that you are permitted to run /bin/tar and /usr/bin/zip via sudo as the root user without even needing to provide zico 's password. For your second question, we get the answer from zip 's manual page: --unzip-command cmd Use command cmd instead of 'unzip -tqq' to test an archive when the -T option is used. So, since you're privileged to run zip as the root user through sudo , the exploit is simply telling zip "hey, when you're testing this archive, use the command sh -c /bin/bash to test it, would you?" and it's helpfully doing so, giving you a root shell. The exploit file is just there to provide zip something to compress, so that there would be something to "test". It's never being run or anything and indeed in your demonstration is simply an empty file. $ sudo -u root zip /tmp/exploit.zip /tmp/exploit -T --unzip-command="sh -c /bin/bash" is instructing sudo to, as the root user, run this command: $ zip /tmp/exploit.zip /tmp/exploit -T --unzip-command="sh -c /bin/bash" This command will take the file /tmp/exploit and put it into a new archive, /tmp/exploit.zip . The -T switch tells zip to then T est the integrity of the archive, and the --unzip-command switch is telling zip how to test the archive. This last thing is the actual exploit: because zip is being run as root, running sh -c /bin/bash gives you a shell as the root user. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224726/"
]
} |
480,656 | The user calls my script with a file path that will either be created or overwritten at some point in the script, like foo.sh file.txt or foo.sh dir/file.txt . The create-or-overwrite behavior is much like the requirements for putting the file on the right side of the > output redirect operator, or passing it as an argument to tee (in fact, passing it as an argument to tee is exactly what I'm doing). Before I get into the guts of the script, I want to make a reasonable check if the file can be created/overwritten, but not actually create it. This check doesn't have to be perfect, and yes I realize that the situation can change between the check and the point where the file is actually written - but here I'm OK with a best effort type solution so I can bail out early in the case that the file path is invalid. Examples of reasons the file couldn't be created: the file contains a directory component, like dir/file.txt but the directory dir doesn't exist; the user doesn't have write permissions in the specified directory (or the CWD if no directory was specified). Yes, I realize that checking permissions "up front" is not The UNIX Way™ , rather I should just try the operation and ask forgiveness later. In my particular script however, this leads to a bad user experience and I can't change the responsible component. | The obvious test would be:
if touch /path/to/file; then
    : it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
    rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want. We do, however, have a way around this:
if mkdir /path/to/file; then
    rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file and do whatever it pleases with it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87246/"
]
} |
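The mkdir/rmdir probe from the answer, wrapped into a reusable function; a sketch in which the function name is an invention, and the existing-file branch is an addition that simply tests write permission:

```sh
can_create() {
    if [ -e "$1" ]; then
        [ -w "$1" ]                       # existing target: is it writable?
    else
        mkdir "$1" 2>/dev/null && rmdir "$1"   # probe, then clean up
    fi
}

can_create "$1" || { echo "cannot create $1" >&2; exit 1; }
```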
480,672 | No colors are displayed when running bash -c "ls -l" (CentOS 7.2). I have an obscure reason to want to show the colors after all. Can I disable this... color suppression? For anyone wondering about my obscure use case: I'm running the Parcellite clipboard manager, which supports "actions" in a context menu, and one of the actions I've defined is "Run terminal command" which opens a new terminal and runs a command stored on the clipboard. It is implemented in the following way (so that the command is allowed to contain all special characters except apostrophe):
# Parcellite recognizes %s only once; store in a variable to use twice
CMD='%s' gnome-terminal -e "$SHELL -c 'echo \\\$ \"\$CMD\"; eval \$CMD; exec $SHELL'"
# Or equivalently....
CMD='%s' gnome-terminal -e $SHELL' -c "echo \$ "$CMD"; eval $CMD; exec $SHELL"'
Because gnome-terminal can only run a single command and doesn't understand things like && , I need to call bash -c ( $SHELL -c ) in order to interpret the command correctly and keep the shell running afterward (and since bash doesn't directly support that I have to also sneak in exec $SHELL at the end.) | The obvious test would be:
if touch /path/to/file; then
    : it can be created
fi
But it does actually create the file if it's not already there. We could clean up after ourselves:
if touch /path/to/file; then
    rm /path/to/file
fi
But this would remove a file that already existed, which you probably don't want. We do, however, have a way around this:
if mkdir /path/to/file; then
    rmdir /path/to/file
fi
You can't have a directory with the same name as another object in that directory. I can't think of a situation in which you'd be able to create a directory but not create a file. After this test, your script would be free to create a conventional /path/to/file and do whatever it pleases with it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119798/"
]
} |
480,728 | I am running Debian (Stretch) with XFCE and many applications do not appear in the menu (in my case Whisker Menu). As an example, I often run remote sessions using VNC and at the moment I can only start the VNC viewer from the terminal. Ideally it would have an icon/item so that not only would it be visible in the menu, but I could also select it as a 'favourite' (easy-to-reach) item in the Whisker Menu. Sticking with the example case, the VNC viewer is from an 'official' package:
$ sudo apt --reinstall install tigervnc-viewer
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 1 not upgraded.
Need to get 168 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://mirrorservice.org/sites/ftp.debian.org/debian stretch/main amd64 tigervnc-viewer amd64 1.7.0+dfsg-7 [168 kB]
Fetched 168 kB in 0s (642 kB/s)
(Reading database ... 669847 files and directories currently installed.)
Preparing to unpack .../tigervnc-viewer_1.7.0+dfsg-7_amd64.deb ...
Unpacking tigervnc-viewer (1.7.0+dfsg-7) over (1.7.0+dfsg-7) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up tigervnc-viewer (1.7.0+dfsg-7) ...
so is there a way to have it as an 'official' application in the Whisker Menu? | Basically, those menu items are .desktop files. The usual paths are:
~/.local/share/applications
/usr/local/share/applications
/usr/share/applications
To continue with your example:
[workstation] user ~ >cat /usr/share/applications/vncviewer.desktop
[Desktop Entry]
Name=TigerVNC Viewer
Comment=Connect to VNC server and display remote desktop
Exec=/usr/bin/vncviewer
Icon=tigervnc
Terminal=false
Type=Application
StartupWMClass=TigerVNC Viewer: Connection Details
Categories=Network;RemoteAccess;
I suggest you first check whether those .desktop files were properly created. If they are not stored in one of those paths, you can search for them with the following command: find / -name '*.desktop' If you don't find those files, you can create them using the template from my vncviewer.desktop | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86288/"
]
} |
480,732 | I need to write a shell script that finds and prints all files in a directory which start with the string #include . Now, I know how to check if a string is in the file, by using:
for f in `ls`; do
    if grep -q 'MyString' $f; then
        : #DO SOMETHING
    fi
done
but how can I apply this to the first line? I thought to maybe create a variable of the first line and check if it starts with #include , but I'm not sure how to do this. I tried the read command but failed to read into a variable. I'd like to hear other approaches to this problem; maybe awk? Anyway, remember, I need to check if the first line starts with #include , not if it contains that string. That's why I found those questions: How to print file content only if the first line matches a certain pattern? https://stackoverflow.com/questions/5536018/how-to-print-matched-regex-pattern-using-awk they are not completely helping. | It is easy to check if the first line starts with #include in (GNU and AT&T) sed: sed -n '1{/^#include/p};q' file Or simplified (and POSIX compatible): sed -n '/^#include/p;q' file That will have an output only if the file contains #include in the first line. That only needs to read the first line to make the check, so it will be very fast. So, a shell loop for all files (with sed) should be like this:
for file in *
do
    [ "$(sed -n '/^#include/p;q' "$file")" ] && printf '%s\n' "$file"
done
If there are only files (not directories) in the pwd. If what you need is to print all lines of the file, a solution similar to the first code posted will work (GNU & AT&T version): sed -n '1{/^#include/!q};p' file Or, (BSD compatible POSIXfied version): sed -ne '1{/^#include/!q;}' -e p file Or:
sed -n '1{
    /^#include/!q
    }
    p
    ' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316477/"
]
} |
480,776 | We have env(1) to modify the environment of the command we want to run (for example env MANPAGER=more man dtrace ). Is there something similar but for modifying the directory that the command is going to be started in? Ideally, I would like it to look like this: theMagicCommand /new/cwd myProgram This way it could be "chained" with other env(1)-like commands, e.g., daemon -p /tmp/pid env VAR=value theMagicCommand /new/cwd myProgram So far I can think of the following solution, which unfortunately does not have the same interface as env(1): cd /new/cwd && myProgram Also, I can just create a simple shell script like this:
#! /bin/sh -
cd "${1:?Missing the new working directory}" || exit 1
shift
exec "${@:?Missing the command to run}"
but I am looking for something that already exists (at least on macOS and FreeBSD). myProgram is not necessarily a desktop application (in which case I could just use the Path key in a .desktop file ). | AFAIK, there is no such dedicated utility in the POSIX tool chest. But it's common to invoke sh to set up an environment (cwd, limits, stdout/in/err, umask...) before running a command as you do in your sh script. But you don't have to write that script in a file, you can just inline it: sh -c 'CDPATH= cd -P -- "$1" && shift && exec "$@"' sh /some/dir cmd args (assuming the directory is not - ). Adding CDPATH= (in case there's one in the environment) and -P for it to behave more like a straight chdir() . Alternatively, you could use perl whose chdir() does a straight chdir() out of the box. perl -e 'chdir(shift@ARGV) or die "chdir: $!"; exec @ARGV or die "exec: $!" ' /some/dir cmd args | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128489/"
]
} |
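The inline sh -c form chains with env(1) and other wrappers exactly as the question wanted; for example, reusing the hypothetical daemon wrapper and paths from the question:

```sh
daemon -p /tmp/pid env VAR=value \
    sh -c 'CDPATH= cd -P -- "$1" && shift && exec "$@"' sh /new/cwd myProgram
```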
480,806 | How can I generate a year's full month names programmatically using bash? | locale will tell you the names of the months in the current locale: locale mon Thus:
$ LC_TIME=en_GB.UTF-8 locale mon
January;February;March;April;May;June;July;August;September;October;November;December
$ LC_TIME=fr_FR.UTF-8 locale mon
janvier;février;mars;avril;mai;juin;juillet;août;septembre;octobre;novembre;décembre
If you want the months on separate lines, pipe the output to tr ';' '\n' . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283551/"
]
} |
480,810 | I need to get the first 9 words from a pipe delimited file and then the next 9 words.
cat a.txt
a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|u|v|w|x|y|z|
cat new.ksh
#! /bin/ksh
a=`awk -F "|" ' { print NF-1 } ' a.txt`
echo $a
Expected Output:
grep -i "a|b|c|d|e|f|g|h|i" b.txt >> c.txt
grep -i "j|k|l|m|n|o|p|q|r" b.txt >> c.txt
grep -i "s|t|u|v|w|x|y|z" b.txt >> c.txt
| locale will tell you the names of the months in the current locale: locale mon Thus:
$ LC_TIME=en_GB.UTF-8 locale mon
January;February;March;April;May;June;July;August;September;October;November;December
$ LC_TIME=fr_FR.UTF-8 locale mon
janvier;février;mars;avril;mai;juin;juillet;août;septembre;octobre;novembre;décembre
If you want the months on separate lines, pipe the output to tr ';' '\n' . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/480810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317177/"
]
} |
480,846 | I'm trying to remove the first / from a string in a shell script. For example file:///path/to/file and output to file://path/to/file | If the string is in a shell variable, then you can use shell parameter expansion:
$ var='file:///path/to/file'
$ echo "${var/\//}"
file://path/to/file
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320255/"
]
} |
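For reference, the related bash/ksh expansions: a single / replaces the first match, // replaces every match, and # strips a leading pattern:

```sh
var='file:///path/to/file'
echo "${var/\//}"     # file://path/to/file  (first / removed)
echo "${var//\//}"    # file:pathtofile      (all / removed)
echo "${var#*/}"      # //path/to/file       (shortest prefix up to a / removed)
```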
480,934 | I'm using a bash script script.sh containing a command cmd , launched in background:
#!/bin/bash
…
cmd &
…
If I open a terminal emulator and run script.sh , cmd is properly executed in background, as expected. That is, while script.sh has ended, cmd continues to run in background, with PPID 1. But, if I open another terminal emulator (let's say xfce4-terminal) from the previous one (or at the beginning of the desktop session, which is my real use case), and execute script.sh by xfce4-terminal -H -x script.sh cmd is not properly executed anymore: It is killed by the termination of script.sh . Using nohup to prevent this is not sufficient. I am obliged to put a sleep command after it, otherwise cmd is killed by the termination of script.sh , before being dissociated from it. The only way I found to make cmd properly execute in background is to put set -m in script.sh . Why is it necessary in this case, and not in the first one? Why this difference in behaviour between the two ways of executing script.sh (and hence cmd )? I assume that, in the first case, monitor mode is not activated, as one can see by putting set -o in script.sh . | The process your cmd is supposed to be run in will be killed by the SIGHUP signal between the fork() and the exec() , and any nohup wrapper or other stuff will have no chance to run and have any effect. (You can check that with strace ) Instead of nohup , you should set SIGHUP to SIG_IGN (ignore) in the parent shell before executing your background command; if a signal handler is set to "ignore" or "default", that disposition will be inherited through fork() and exec() . Example:
#! /bin/sh
trap '' HUP # ignore SIGHUP
xclock &
trap - HUP # back to default
Or:
#! /bin/sh
(trap '' HUP; xclock &)
If you run this script with xfce4-terminal -H -x script.sh , the background command ( xclock & ) will not be killed by the SIGHUP sent when script.sh terminates. When a session leader (a process that "owns" the controlling terminal, script.sh in your case) terminates, the kernel will send a SIGHUP to all processes from its foreground process group; but set -m will enable job control and commands started with & will be put in a background process group, and they won't be signaled by SIGHUP . If job control is not enabled (the default for a non-interactive script), commands started with & will be run in the same foreground process group, and the "background" mode will be faked by redirecting their input from /dev/null and letting them ignore SIGINT and SIGQUIT . Processes started this way from a script which once ran as a foreground job but which has already exited won't be signaled with SIGHUP either, since their process group (inherited from their dead parent) is no longer the foreground one on the terminal. Extra notes: The "hold mode" seems to be different between xterm and xfce4-terminal (and probably other vte-based terminals). While the former will keep the master side of the pty open, the latter will tear it off after the program run with -e or -x has exited, causing any write to the slave side to fail with EIO . xterm will also ignore WM_DELETE_WINDOW messages (ie it won't close) while there are still processes from the foreground process group running. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320314/"
]
} |
480,956 | How do I include the redirected command itself in the output file of the redirection? For example echo "hello!">output.txt . I want the content of the output file to look like this:
echo "hello!">output.txt
hello!
| You may want one of two things here. Using tee to get the result of the echo sent to the terminal as well as to the file:
$ echo 'hello!' | tee output
hello!
$ cat output
hello!
Using script to capture the whole terminal session:
$ script
Script started, output file is typescript
$ echo 'hello!' >output
$ cat output
hello!
$ exit
Script done, output file is typescript
$ cat typescript
Script started on Sat Nov 10 15:15:06 2018
$ echo 'hello!' >output
$ cat output
hello!
$ exit
Script done on Sat Nov 10 15:15:37 2018
If this is not what you are asking for, then please clarify your question. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320339/"
]
} |
480,992 | I am using ImageMagick to compose many images.
for counter in {1..$limit}
do
    partial=...
    x=...
    y=...
    convert $canvas $partial -geometry "+$x+$y" -composite $canvas
done
Basically I am mogrifying N images, one at a time. Right now my $limit is small and the script runs in reasonable time. But $limit will increase to thousands of images in the future. I am considering building the convert command to include all the partials at once, and then only running convert once. ImageMagick claims it can work with terabit images no problem. So I imagine giving thousands of tasks at once should not be a problem. The issue is that the shell limits the number of arguments you can pass to a program. Can I pass all the arguments to ImageMagick in some other way, like a script file? And is this a bad idea? | You can pass ImageMagick options from a file, by passing it an @filename argument, in which case ImageMagick will read the arguments from the passed file. From the documentation (see section "File References"): Some ImageMagick command-line options may exceed the capabilities of your command-line processor. Windows, for example, limits command-lines to 8192 characters. If, for example, you have a draw option with polygon points that exceed the command-line length limit, put the draw option instead in a file and reference the file with the @ (e.g. @mypoly.txt ). So that feature is designed specifically to work around limitations in command-line length and number of arguments. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/480992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37065/"
]
} |
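A concrete instance of the documented form, using the -draw case the documentation itself names; whether your ImageMagick build also accepts a whole argument list from an @file is version-dependent, so test that separately before converting the loop from the question. File names here are placeholders:

```sh
# Put an option value that would otherwise bloat the command line in a file:
printf 'polygon 0,0 150,0 100,100 0,50' > mypoly.txt
# Reference it with @ where the option value would normally appear:
convert canvas.png -draw @mypoly.txt result.png
```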
481,063 | I'm on Ubuntu 18.04.1 LTS x64 and I need to update my Qt 5 installation from v5.9.5 to v5.10.0; however, when I issue the command sudo apt-get install qt5-default it tells me that qt5-default is already at the latest version (5.9.5+dfsg-0ubuntu1). But obviously that's not true. I've also tried running:
sudo apt-get update
sudo apt upgrade
sudo apt dist-upgrade
before, but without success. What is wrong with those commands? I just need to install the core libs without the UI stuff (e.g. qtcreator). | You have the latest version of the qt5-default package available from the Ubuntu repositories: qt5-default (5.9.5+dfsg-0ubuntu1) . To install the 5.10.x version you should follow the instructions described on the official website : Install Qt 5 on Ubuntu The installation file can be downloaded from here . The 5.10.0 version: wget http://download.qt.io/official_releases/qt/5.10/5.10.0/qt-opensource-linux-x64-5.10.0.run The 5.10.1 version: wget http://download.qt.io/official_releases/qt/5.10/5.10.1/qt-opensource-linux-x64-5.10.1.run To set Qt 5.10 as the default, edit sudo nano /usr/lib/x86_64-linux-gnu/qtchooser/default.conf with the following lines (replace $USER with your username):
/home/$USER/Qt5.10.0/5.10.0/gcc_64/bin
/home/$USER/Qt5.10.0/5.10.0/gcc_64/lib
then run:
qtchooser -print-env
qmake -v
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/278921/"
]
} |
481,073 | I'm trying to use some arithmetic over the matched patterns on the perl command line. I'm able to do it for one match but not for all.
str="a1b2c3"
perl -pe 's/\d+/$&+1/e' <<<"$str"
a2b2c3
I understand $& refers to the first matched digit 1 here. What do I need to do to add 1 to all the digits? Is there a variable similar to $& that represents all matched patterns? Or does the regex need to be modified to match multiple digits? For the given input, I'm expecting output something like a2b3c4 |
str="a1b2c3"
perl -pe 's/\d+/$&+1/ge' <<<"$str"
The g flag to the substitution would make Perl apply the expression for each non-overlapping match on the input line. Nitpick: There are actually no capture groups involved here ( the original question mentioned capture groups ). The Perl variable $& is the "string matched by the last successful pattern match". This is different from e.g. $1 and $2 etc. that refer to the string matched by the corresponding capture group (parenthesised expression). There are no capture groups in \d+ , but you could have used s/(\d+)/$1+1/ge instead, which does use a single capture group. There is no difference between s/(\d+)/$1+1/ge and s/\d+/$&+1/ge in terms of outcome. In this short in-line Perl script, it makes no difference whether you choose to use one or the other, but generally you'd like to avoid using $& in longer Perl programs that do many regular expression operations, at least if using an older Perl release. From perldoc perlvar (my emphasis): Performance issues Traditionally in Perl, any use of any of the three variables $` , $& or $' (or their use English equivalents) anywhere in the code, caused all subsequent successful pattern matches to make a copy of the matched string , in case the code might subsequently access one of those variables. This imposed a considerable performance penalty across the whole program, so generally the use of these variables has been discouraged. [...] In Perl 5.20.0 a new copy-on-write system was enabled by default, which finally fixes all performance issues with these three variables , and makes them safe to use anywhere. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481073",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112235/"
]
} |
481,082 | I have the following:
2018-11-10 23:57:21 [COMMAND]: sar -u 10 5
AIX host 1 7 11/10/18
System configuration: lcpu=64 mode=Capped
23:57:21 %usr %sys %wio %idle physc
23:57:31 10 7 0 83 16.00
23:57:41 9 6 0 85 16.00
23:57:51 9 6 0 85 16.00
23:58:01 9 7 0 84 16.00
23:58:11 10 6 0 84 16.00
Average 9 6 0 84 16.00
2018-11-10 23:58:21 [COMMAND]: sar -u 10 5
AIX host 1 7 11/10/18
System configuration: lcpu=64 mode=Capped
23:58:21 %usr %sys %wio %idle physc
23:58:31 10 8 0 82 15.99
23:58:41 9 6 0 85 16.00
23:58:51 9 6 0 85 16.00
23:59:01 9 6 0 84 16.00
23:59:11 10 6 0 83 16.00
Average 10 6 0 84 16.00
I need to get the time with the average value of %idle :
2018-11-10 23:57:21|84
2018-11-10 23:58:21|84
| Going by your input file as-is, an awk command as simple as the one below should suffice. awk '/sar/{ time=$1" "$2; next }/Average/{ print time"|"$5 }' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/481082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274738/"
]
} |
481,196 | Is it possible to get the AWS account id with only the AWS access key and secret key on the command line (CLI)? I have the access key and secret key with me. Is it possible to get the account id using those on the command line? | This is the correct way:
~ $ aws sts get-caller-identity
{
"Account": "123456789012",
"UserId": "AIDABCDEFGHJKL...",
"Arn": "arn:aws:iam::123456789012:user/some.user"
}
It works for IAM Users, Cross-account IAM Roles, EC2 IAM Roles, etc. Use together with jq to obtain just the account id:
~ $ aws sts get-caller-identity | jq -r .Account
123456789012
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316417/"
]
} |
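The same value without jq, using the AWS CLI's built-in JMESPath filtering:

```sh
aws sts get-caller-identity --query Account --output text
```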
481,228 | I've created a deb package which installs a service. On our embedded devices, I want this package to automatically enable the service. On our developer workstations, I want the developers to systemctl start foo manually (it's a heavy service, and so it just consumes resources if run all of the time on a desktop environment). How can I go about prompting the user for their decision during the apt-get step? Is that the best solution? Note, I've created the package using dh_make and debhelper and enabled it with:
%:
	dh $@ --with=systemd
override_dh_systemd_enable:
	dh_systemd_enable --name=foo foo.service
| You can use systemd presets to affect whether a systemd service will default to being enabled or disabled at installation time. The Debian presets default to enabling all services as they're installed, so you only need to ship a preset to the development workstations (the default behavior matches what you want to happen in production), by shipping a file such as /etc/systemd/system-preset/80-foo.preset containing a line that says disable foo.service If you manage your developer workstations using a system such as Puppet, Chef, Ansible, etc., you can use them to ship such a systemd preset configuration, that should make it easy for you to apply the policy to developer workstations only and not production machines. Your .deb package should use the systemctl preset command to enable the service, since that command will respect the preset configuration. As @JdeBP and @sourcejedi point out, the Debian macros in deb-helpers (such as dh_systemd_enable ) do that already, they invoke deb-systemd-helper which will use systemctl preset by default (with a small caveat that if you remove (but do not purge) the package, and later re-install it, it will then not enable the service, even if you remove the preset file.) See this comment in deb-systemd-helper 's enable operation :
# We use 'systemctl preset' on the initial installation only.
# On upgrade, we manually add the missing symlinks only if the
# service already has some links installed. Using 'systemctl
# preset' allows administrators and downstreams to alter the
# enable policy using systemd-native tools.
For more information on the systemd feature of presets, see the man page of systemd presets and of the command systemctl preset which implements it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/481228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/272848/"
]
} |
481,249 |
os_version=$(cat /etc/issue |grep Ubuntu)
if $os_version
then
echo found
else
echo notfound
fi
When I tried it on an Ubuntu machine it says:
./test: line 2: Ubuntu: command not found
notfound
This works for me, but I want to assign it to a variable:
if cat /etc/issue |grep Ubuntu
then
echo found
else
echo notfound
fi
| The if statement runs a command , and checks its exit status. Using $os_version as a command works by expanding it, and running the resulting command line. So if the variable contains Ubuntu 18.04.1 LTS \n \l , it'll try to run a command called Ubuntu with the arguments 18.04.1 , LTS , etc. You probably want to use
if [ -n "$os_version" ]; then
...
fi
to check if the variable is empty or not ( [ -n "$var" ] is true if it's not empty, while [ -z "$var" ] is true if the variable is empty). Alternatively, you could use the grep within the if statement itself as you did in the edit, and set a variable there:
distro=unknown
if grep -q Ubuntu < /etc/issue; then
    distro=ubuntu
fi
# ... later
if [ "$distro" = ubuntu ]; then
    # do something Ubuntu-specific
fi
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320578/"
]
} |
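On systems that provide it, /etc/os-release is a more robust source than /etc/issue (which is free-form text meant for login banners); a sketch of the same check built on it:

```sh
distro=unknown
if [ -r /etc/os-release ]; then
    . /etc/os-release        # defines ID, VERSION_ID, PRETTY_NAME, ...
    distro=$ID               # e.g. "ubuntu", "debian"
fi

if [ "$distro" = ubuntu ]; then
    echo found
else
    echo notfound
fi
```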
481,273 | I am trying to add missing quotes at the ends of some lines in a text file. I find that the regex [^\"]$ suffices to find lines with missing terminal doublequotes and so tried the following replacement using a backreference (which tbh I've never used before). Using parens around the 'capture group' I hoped that sed would allow backreference to that group, but sed 's|([^\"]$)|\1\"|g' bigfile.tsv hits sed: -e expression #1, char 17: invalid reference \1 on `s' command's RHS and same if I don't escape the replacement quotes sed 's|([^\"]$)|\1"|g' bigfile.tsv (tho now its char 16 that's offensive) . How does the backreference go? https://xkcd.com/1171/ | When you run sed without -E , then the expression is a basic regular expression and the capture groups must be written as \(...\) . When you use -E to enable extended regular expressions, capture groups are written (...) . The \ inside [...] is literal, so your expression would also avoid adding a double quote on lines ending with \ . Some of the other escaping is also unnecessary. Therefore, you may write your sed command as sed 's/\([^"]\)$/\1"/' or as sed -E 's/([^"])$/\1"/' Or, using & : sed 's/[^"]$/&"/' The & in the replacement part of the expression will be substituted by the part of the input that matched the regular expression. A couple of other alternatives that does not use a capture group: sed '/[^"]$/ s/$/"/' This applies s/$/"/ to all lines that matches /[^"]$/ . Or, alternatively, sed '/"$/ !s/$/"/' This applies s/$/"/ to all lines that don't match /"$/ (there's a slight difference from the other approaches here in that it also adds a " to empty lines). Note that in all cases, the g flag at the end is definitely not needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118522/"
]
} |
481,274 | The following bash script is working completely fine:
#!/bin/bash
echo '!PaSsWoRd!' | openconnect --csd-wrapper=/home/user/.cisco/csd-wrapper.sh --authenticate --user=abcde123 --authgroup="tunnel My Company" --passwd-on-stdin vpn.mycompany.com
However, I want to replace the previous input parameters with variables like that:
#!/bin/bash
WRAPPER=/home/user/.cisco/csd-wrapper.sh
USER=abcde123
PASSWD=!PaSsWoRd!
AUTHGROUP=tunnel My Company
DOMAIN=vpn.mycompany.com
echo '$PASSWD' | openconnect --csd-wrapper=$WRAPPER --authenticate --user=$USER --authgroup="$AUTHGROUP" --passwd-on-stdin $DOMAIN
Unfortunately this attempt does not work anymore. I think I have to put in some quote chars or similar. Do you know what is wrong with the bash script below? | When you run sed without -E , then the expression is a basic regular expression and the capture groups must be written as \(...\) . When you use -E to enable extended regular expressions, capture groups are written (...) . The \ inside [...] is literal, so your expression would also avoid adding a double quote on lines ending with \ . Some of the other escaping is also unnecessary. Therefore, you may write your sed command as sed 's/\([^"]\)$/\1"/' or as sed -E 's/([^"])$/\1"/' Or, using & : sed 's/[^"]$/&"/' The & in the replacement part of the expression will be substituted by the part of the input that matched the regular expression. A couple of other alternatives that does not use a capture group: sed '/[^"]$/ s/$/"/' This applies s/$/"/ to all lines that matches /[^"]$/ . Or, alternatively, sed '/"$/ !s/$/"/' This applies s/$/"/ to all lines that don't match /"$/ (there's a slight difference from the other approaches here in that it also adds a " to empty lines). Note that in all cases, the g flag at the end is definitely not needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241507/"
]
} |
481,288 | I've got a JSON object like so:
{
"1": { "available_memory": 1086419656.0, "available_memory_no_overbooking": 1086419656.0, "conns": 1.0 },
"2": { "available_memory": 108641236.0, "available_memory_no_overbooking": 10861216.0, "conns": 2.0 }
}
I want to retrieve the value of the "conns" attribute for each object id. I am new to jq and I can't find clear examples. I have tried the following: echo "$OUTPUT" | jq -r .[].conns which returns all the values for conns, but that is not what I need. The expected output would be:
1 1.0
2 2.0
Any ideas? |
$ jq -r 'keys[] as $k | "\($k) \(.[$k].conns)"' file.json
1 1
2 2
Seems like jq translates 1.0 to 1 and 2.0 to 2. Altering the input for clarity:
$ cat file.json
{
"1a": { "available_memory": 1086419656.0, "available_memory_no_overbooking": 1086419656.0, "conns": 1.1 },
"2b": { "available_memory": 108641236.0, "available_memory_no_overbooking": 10861216.0, "conns": 2.2 }
}
$ jq -r 'keys[] as $k | "\($k) \(.[$k].conns)"' file.json
1a 1.1
2b 2.2
Refs: https://stedolan.github.io/jq/manual/#Variable/SymbolicBindingOperator:...as$identifier|... https://stedolan.github.io/jq/manual/#Stringinterpolation-\(foo) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320038/"
]
} |
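An equivalent spelling that uses jq's to_entries to turn the object into key/value pairs, avoiding the variable binding:

```sh
jq -r 'to_entries[] | "\(.key) \(.value.conns)"' file.json
```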
481,343 | I have a groff file to generate a pdf in the format:
.TL
Article title
.AU
Author name
.AI
Publication title
.SH
.LP
First paragraph
.PP
More paragraphs
I'm then running groff -ms a.ms -T pdf > a.pdf to generate a pdf. I like how groff makes formatting easy but I'm wondering if it would be possible to include an ascii diagram. For example the following:
+-------------------------+     +-----------------+
|                         |     |                 |
| Hello                   |     |                 |
| +-------------+         |     |                 |
|                         |     |                 |
+-------------------------+     +-----------------+
If input as a normal paragraph, it turns into: Is there any way I can insert a section into groff that will preserve the spaces so these kinds of diagrams can be used? Looking at the manual for groff_ms I see: .PS and .PE Denotes a graphic, to be processed by the pic preprocessor. You can create a pic file by hand, using the AT&T pic manual available on the Web as a reference, or by using a graphics program such as xfig. But this seems to only accept pic language markup . Is there any way I can insert ascii drawings into groff? | Groff supports a CW (constant width) font, and you can select it with .ft CW or \f(CW . To turn off filling, use a display, .DS - .DE , or a .nf - .fi pair.
.TL
Two boxes, two ways
.LP
ASCII drawing
.DS C
.ft CW
+-------------------------+     +-----------------+
|                         |     |                 |
| Hello                   |     |                 |
| +-------------+         |     |                 |
|                         |     |                 |
+-------------------------+     +-----------------+
.ft
.DE
.LP
Pic drawing
.PS
box width 2 "\f(CWHello\fP"
line 1.5
box width 1.5
.PE
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
481,382 | On Linux and Windows, I'm used to the situation that I require a 64-bit kernel to have a system with multiarch/WoW where I can run 32-bit and 64-bit software side-by-side. And then, years ago it blew my mind when someone showed me that MacOS 10.6 Snow Leopard could run 64-bit applications with the kernel in 32-bit mode. This may be largely forgotten now because it was a one-time technology transition. With the hardware ahead of the curve in the mobile space, as far as I know this was never needed in the move to 64-bit for iOS and Android. My question: What would it take to get the same capability in a 32-bit Linux kernel (i386 or armhf)? I understand that this probably isn't trivial. If it was, Microsoft could have put the feature into Windows XP 32-bit. What are the general requirements though? Has there ever been a proposed patch or proof-of-concept? In the embedded world I think this would be especially helpful, as 64-bit support can lag behind for a long time in device drivers. | Running 64-bit applications requires some support from the kernel: the kernel needs to at least set up page tables, interrupt tables etc. as necessary to support running 64-bit code on the CPU, and it needs to save the full 64-bit context when switching between applications (and from applications to the kernel and back). Thus a purely 32-bit kernel can’t support 64-bit userspace. However a kernel can run 32-bit code in kernel space, while supporting 64-bit code in user space. That involves handling similar to the support required to run 32-bit applications with a 64-bit kernel: basically, the kernel has to support the 64-bit interfaces the applications expect. For example, it has to provide some mechanism for 64-bit code to call into the kernel, and preserve the meaning of the parameters (in both directions). The question then is whether it’s worth it. On the Mac, and some other systems, a case can be made since supporting 32-bit kernel code means drivers don’t all have to make the switch simultaneously. On Linux the development model is different: anything in the kernel is migrated as necessary when large changes are made, and anything outside the kernel isn’t really supported by the kernel developers. Supporting 32-bit userland with a 64-bit kernel is certainly useful and worth the effort (at least, it was when x86-64 support was added), I’m not sure there’s a case to be made for 64-bit on 32-bit... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/481382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320681/"
]
} |
481,490 | I'm using a Lenovo laptop with Intel video drivers, and I'm trying to control the brightness. I installed the xbacklight and xserver-xorg-video-intel packages, added these lines to /etc/X11/xorg.conf:

Section "Device"
    Identifier "Card0"
    Driver     "intel"
    Option     "Backlight" "intel_backlight"
EndSection

and I verified that the /sys/class/backlight/intel_backlight/ directory does exist. When I run xbacklight, I get the error:

No outputs have backlight property

How do I configure the backlight? I'm using Debian 9 x64 and the system is fully up to date.

EDIT: I can manually change the brightness by elevating my permissions with sudo and writing to the brightness file:

echo 500 > /sys/class/backlight/intel_backlight/brightness

EDIT: I get the same "No outputs have backlight property" if I run xbacklight as root or with sudo. | There can be numerous reasons why this doesn't work, and they are all too complicated to ask in a comment to the question, so I'll leave this as a resource here - and if none of these work feel free to comment rather than down-voting and I'll remove it (or leave it for others who end up here with the same problem but different causes).

The first thing you can try is adding one of these kernel parameters:

acpi_osi=Linux
acpi_osi="!Windows 2012"
acpi_osi=

This is a pretty common issue where the backlight stops working after a suspend (I know this isn't directly related, but it might be worth mentioning).

Another issue might be that you're lacking sufficient permission to actually modify the brightness (again, probably not related to the OP since the error message is usually different and the OP has already tried it). If that's the case, modify the udev rules by changing/adding this to /etc/udev/rules.d/backlight.rules:

ACTION=="add", SUBSYSTEM=="backlight", KERNEL=="intel_backlight", RUN+="/bin/chgrp video /sys/class/backlight/%k/brightness"
ACTION=="add", SUBSYSTEM=="backlight", KERNEL=="intel_backlight", RUN+="/bin/chmod g+w /sys/class/backlight/%k/brightness"

Another, also common, issue is when used in conjunction with multiple graphic cards or hybrid graphics (like the Optimus project). If so, you can try to add one of the following kernel parameters:

acpi_backlight=video
acpi_backlight=vendor
acpi_backlight=native
acpi_backlight=none   # <-- Mainly for AMD/ATI drivers

Finally, what the OP might be here for: change /etc/X11/xorg.conf.d/20-intel.conf to reflect:

Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "Backlight" "intel_backlight"
EndSection

Odds are xrandr or xbacklight have just got a faulty mapping against /sys/class/backlight/<path>. Thus, manually setting it to intel_backlight might solve your issue. All that might be wrong is the Identifier, judging by the question.

If it still doesn't work, verify and make sure that the Device config is actually the one in use, because it really sounds like a mapping issue between xrandr/xbacklight and the path where it thinks it'll find the backlight directory. Any of these might give you a clue or hint as to which driver and config is being used:

lspci | grep VGA
lsmod | grep "kms\|drm"
find /dev -group video
cat /proc/cmdline
find /etc/modprobe.d/
cat /etc/modprobe.d/*kms*
glxinfo | grep -i "vendor\|rendering"
grep LoadModule /var/log/Xorg.0.log
egrep -i " connected|card detect|primary dev|Setting driver" /var/log/Xorg.0.log
udevadm info -a -p /sys/class/backlight/intel_backlight/

I hope it's as simple as this; if it's not, again, I'd be happy to change my answer or delete it altogether.
Just sharing some knowledge gathered while struggling with the same thing. bugs.debian.org issue Oh, and the kernel parameter nomodeset tends to interfere with backlight settings. I don't know why. But if whoever ends up here uses it, try to remove that and see if at least the backlight kicks in again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49592/"
]
} |
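As a convenience on top of the manual echo into the brightness file, a small sketch like this (assuming the intel_backlight path from the question, and root or the udev rule above) sets the backlight as a percentage of the device-specific maximum:

#!/bin/bash
# usage: ./setbl.sh 40    -> 40% brightness
dir=/sys/class/backlight/intel_backlight
max=$(cat "$dir/max_brightness")     # maximum raw value for this device
echo $(( max * $1 / 100 )) > "$dir/brightness"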
481,552 | I can't make a bash script that checks whether the numbers given on the command line are powers of 2.

Input:

# ./pow2script.sh xyzdf 4 8 12 -2 USAD

Desired output (each match on a separate line):

4
8

because only 4 is 2^2 and 8 is 2^3.

Content of pow2script.sh:

#!/bin/bash

function is_power_of_two () {
    declare -i n=$1
    (( n > 0 && (n & (n - 1)) == 0 ))
}

for number; do
    if is_power_of_two "$number"; then
        printf "%d\n" "$number"
    fi
done | The number is a power of 2 if its Hamming weight is exactly 1. To calculate a number's Hamming weight is the same as to calculate the number of 1s in its binary representation. The following is a short bash script that does that:

#!/bin/bash

# loop over all numbers on the command line
# note: we don't verify that these are in fact numbers
for number do
    w=0        # Hamming weight (count of bits that are 1)
    n=$number  # work on $n to save $number for later

    # test the last bit of the number, and right-shift once
    # repeat until number is zero
    while (( n > 0 )); do
        if (( (n & 1) == 1 )); then
            # last bit was 1, count it
            w=$(( w + 1 ))
        fi
        if (( w > 1 )); then
            # early bail-out: not a power of 2
            break
        fi
        # right-shift number
        n=$(( n >> 1 ))
    done

    if (( w == 1 )); then
        # this was a power of 2
        printf '%d\n' "$number"
    fi
done

Testing:

$ bash script.sh xyzdf 4 8 12 -2 USAD
4
8

Note: There are more efficient ways to do this, and bash is a particularly bad choice of language for it.

Since it's come up a few times in a short time (this appears to be a homework assignment or some other type of exercise):

I will not modify this code to skip the number 1 if it occurs in the input.
I will not make it output the sum of the numbers in any form.
I will not describe the algorithm further than what's already been done in the comments. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320820/"
]
} |
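For what it's worth, the bit trick already present in the question's helper gives a much shorter test: n & (n - 1) clears the lowest set bit, so a positive n is a power of two exactly when that expression is zero. A sketch (like the scripts above, it does not validate that the arguments are numeric):

for number do
    if (( number > 0 && (number & (number - 1)) == 0 )); then
        printf '%d\n' "$number"
    fi
done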
481,587 | In bash:

$ va='\\abc'
$ echo $va
\\abc

In the assignment va='\\abc', the single quotes preserve the two backslashes in the value of va. In the echo command echo $va, is it correct that bash first performs parameter expansion on $va (expanding it to \\abc), and then performs quote removal on the result of the parameter expansion? Quote removal removes backslashes and quotes, but why are the two backslashes still preserved? I expect the result to be \abc. For comparison:

$ echo \\abc
\abc

Do I miss something in the bash manual? I appreciate that someone can point out what I miss. Thanks. | Start with a simpler comparison:

$ echo '\\abc'
\\abc
$ echo \\abc
\abc

In the first command, the apostrophes do not become part of the argument to echo because they have been used for quoting. All of the characters inside, including both backslashes, are passed to echo.

In the second command, the first backslash quotes the second backslash. The one that was used for quoting does not become part of the argument to echo. The other one is passed to echo, along with the abc (which was not quoted, but that doesn't matter because they are not metacharacters).

Now we're ready to talk about your command sequence:

$ va='\\abc'
$ echo $va
\\abc

When the assignment command is executed, the apostrophes quote everything between them. The apostrophes do not become part of the value assigned, but everything else does, including both backslashes.

In the echo command, there are no quoting characters. The value of va is retrieved and inserted into the argument list. Now there is an argument containing 2 backslashes, but they don't function as quoting characters, because the parsing phase where we were looking for quoting characters was done before variable expansion.

Variable expansion is not like macro expansion. The resulting series of arguments is not fed back in to the full command line parser. Some post-processing is done (word-splitting and globbing) but there is not a second pass of quote removal and variable expansion.

When you want to build an argument list and reparse the whole thing as a new command line with all shell features available, you can use eval. This is usually a bad idea because "all shell features" is a lot, and if you aren't careful, something bad can happen.

$ va='\\abc'
$ eval echo $va
\abc

Perfect, right?

$ va='\\abc;rm -rf $important_database'
$ eval echo $va
\abc
^C^C^C

ARGH! When you find yourself wanting to use shell quoting syntax inside the value of a shell variable, try to think of a different way to solve your problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
481,599 | I'm trying to install resolvconf in centos. In Ubuntu I'm able to install resolvconf with sudo apt-get install resolvconf. For centos, I tried to do the same by sudo yum install resolvconf but I get the message:

No package found. Nothing to do

How can I install resolvconf in centos? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253851/"
]
} |
481,685 | -exec allows us either to pass all arguments at once with {} + or to pass them one by one with {} \;. Now let's say I want to rename all jpegs; no problem doing this:

find . \( -name '*.jpg' -o -name '*.jpeg' \) -exec mv '{}' '{}'.new \;

But if I need to redirect output, '{}' isn't accessible after redirection:

find . \( -name '*.jpg' -o -name '*.jpeg' \) -exec cjpeg -quality 80 '{}' > optimized_'{}' \;

This wouldn't work. I'd have to use a for loop, storing find's output into a variable before using it. Let's admit it, it's cumbersome:

for f in `find . \( -name '*.jpg' -o -name '*.jpeg' \)`; do cjpeg -quality 80 $f > optimized_$f; done;

So is there a better way? | You could use bash -c within the find -exec command and use the positional parameter with the bash command:

find . \( -name '*.jpg' -o -name '*.jpeg' \) -exec bash -c 'cjpeg -quality 80 "$1" > "$(dirname "$1")/optimized_$(basename "$1")"' sh {} \;

That way {} is provided with $1. The sh before the {} tells the inner shell its "name"; the string used here is used in e.g. error messages. This is discussed more in this answer on stackoverflow. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/481685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147170/"
]
} |
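When many files match, a variant worth knowing starts one inner shell per batch of files instead of one per file, by combining -exec ... {} + with a loop over the positional parameters:

find . \( -name '*.jpg' -o -name '*.jpeg' \) -exec bash -c '
    for f do
        cjpeg -quality 80 "$f" > "$(dirname "$f")/optimized_$(basename "$f")"
    done' sh {} +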
481,686 | Both distros use Gnome, and both fail at exactly the same moment in the boot process: the background changes, the mouse icon is shown, and then everything freezes. Both distros work when running from a live USB. The problem asserts itself when booting from the HDD. I've tried repartitioning. I am using /boot/efi. Will try to repartition again, as it makes sense that could somehow be the issue.

EDIT (last few lines of boot.log):

[  OK  ] Started WPA supplicant.
[  OK  ] Started Hostname Service.
[  OK  ] Started Modem Manager.
[  OK  ] Started Raise network interfaces.
[  OK  ] Started Dispatcher daemon for systemd-networkd.
[  OK  ] Started Network Manager.
[  OK  ] Reached target Network.
         Starting OpenVPN service...
         Starting Permit User Sessions...
         Starting Network Manager Wait Online...
         Starting Network Manager Script Dispatcher Service...
[  OK  ] Started Permit User Sessions.
         Starting GNOME Display Manager...
         Starting Hold until boot process finishes up...
[  OK  ] Started Disk Manager.
[  OK  ] Started OpenVPN service.
[  OK  ] Started Network Manager Script Dispatcher Service.
[  OK  ] Started Snappy daemon.
         Starting Wait until snapd is fully seeded...

EDIT: I found an answer which helped a lot. However, in my case the problem doesn't seem to be in the display manager; I've tried gdm, gdx and lightdm and all have the same issue. It seems that the problem is likely with the GPU. I've purged the nvidia drivers (no effect) and am reinstalling them at the moment. If this doesn't work I will try to enforce usage of the intel GPU (if that's possible). | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/481686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320925/"
]
} |
481,690 | Is there a way to execute a command on the local machine after disconnecting from an ssh server? This would have the same behavior as the ssh_config LocalCommand option, which executes a command on the local machine after successfully connecting to the server. This could be used to set the terminal window title to that of the current server: using LocalCommand when connecting, and something else after disconnecting. Alternatively, is there a way to automatically run a command after a specific command using bash? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/481690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320930/"
]
} |
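Since ssh_config has no post-disconnect counterpart to LocalCommand, one generic approach is a small wrapper function in the local shell that runs the desired command after the ssh client exits; the title-setting escape sequence below is only an illustration and depends on the terminal:

ssh() {
    command ssh "$@"
    local rc=$?
    printf '\033]0;%s\007' "$HOSTNAME"   # e.g. restore the local window title
    return $rc
}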
481,693 | I am writing a wrapper application for bash scripts and want the application to keep track of which tools/processes have been launched from user scripts. I would like to know the best way to determine the list of child processes that were spawned by this parent process. I have tried:

1. Periodically invoking the ps command and building a process tree (like ps -ejH), but this misses processes that ran to completion very quickly.
2. Using a tool like forkstat that uses the proc connector interface, but that would only run with elevated privileges. While this gives the correct data, running as sudo would not work in my case.

Any suggestions how this can be achieved? | If you're using Linux, you can use strace to trace system calls used by a process. For example:

~ strace -e fork,vfork,clone,execve -fb execve -o log ./foo.sh
foo bar
~ cat log
4817  execve("./foo.sh", ["./foo.sh"], [/* 42 vars */]) = 0
4817  clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f1bb563b9d0) = 4818
4818  execve("/bin/true", ["/bin/true"], [/* 42 vars */] <detached ...>
4817  --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=4818, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
4817  clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f1bb563b9d0) = 4819
4817  clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f1bb563b9d0) = 4820
4820  execve("/bin/echo", ["/bin/echo", "foo", "bar"], [/* 42 vars */] <detached ...>
4817  --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=4820, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
4817  +++ exited with 0 +++
4819  execve("/bin/sleep", ["sleep", "1"], [/* 42 vars */] <detached ...>

You can see that the script forked off three processes (PIDs 4818, 4819, 4820) using the clone(2) system call, and the execve(2) system calls in those forked-off processes show the commands executed.

-e fork,vfork,clone,execve limits strace output to these system calls
-f follows child processes
-b execve detaches from a process when the execve is reached, so we don't see further tracing of child processes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320927/"
]
} |
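Reasonably recent strace versions also accept syscall classes, so the hand-picked list above can often be abbreviated (the %process class covers fork, clone, execve and friends):

strace -f -e trace=%process -o log ./foo.sh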
481,794 | I have a bash script which concatenates its arguments with a given separator:

#! /bin/bash
d="$1"; shift;
echo -n "$1"; shift;
printf "%s" "${@/#/$d}";

This is how I use it:

$ a=(one two 'three four' five)
$ myscript ' *** ' "${a[@]}"
one *** two *** three four *** five

Now I would like to make a newline the separator, but that doesn't happen:

$ myscript '\n' "${a[@]}"
one\ntwo\nthree four\nfive

How shall I pass a newline character to the printf command in the script? (I am not looking for rewriting my script, if that is possible.) Thanks. | Use the $'...' kind of quotes, if you want the \n to be expanded into a newline character:

myscript $'\n' "${a[@]}"

Or pass the newline literally inside single or double quotes:

myscript '
' "${a[@]}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
481,795 | I am trying to make a .txt file that contains only one line gathered from 30 other .log files. To extract only one line I used this:

$ sed -n '/Num mapped reads/p' /home/travc/seq_v2/AgamP4_v2/samples/ERS224561/qualimap/qualimap.log > /data/home/odkirling/Mali/Yeah1.txt

It works great, but now I need to do it for the other 29 files; how can I do that? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321008/"
]
} |
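A sketch of one way to repeat this over all 30 samples, assuming they share the directory layout shown in the question (the sample names like ERS224561 being the only varying component):

for log in /home/travc/seq_v2/AgamP4_v2/samples/*/qualimap/qualimap.log; do
    sed -n '/Num mapped reads/p' "$log"
done > /data/home/odkirling/Mali/Yeah1.txt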
481,857 | I came across this problem after upgrading my system through pacman -Syu. During the upgrade, I encountered a python package conflict which caused the upgrade transaction to abort. So I resolved the conflict by removing the python package with pip uninstall pkg_name, then retried pacman -Syu. This time there were no more errors. Then I rebooted my system and the problem occurred:

Warning: /lib/modules/4.19.1-arch1-1-ARCH/modules.devname not found - ignoring
starting version 239
/dev/nvme0n1p2: clean, 968023/31227904 files, 27066236/124895569 blocks
mount: /new_root: unknown filesystem type 'ext4'
You are now being dropped into an emergency shell.
sh: can't access tty: job control turned off
[rootfs] #

BTW: As the warning indicates, I was upgrading kernel 4.18 to 4.19. | If the update was aborted and the kernel was in the process of being updated, you probably still have the initramfs of the old kernel in your /boot whilst having the new kernel installed, which can prevent booting. This can also happen on a freshly installed system if you forgot to properly mount the /boot partition.

The easiest way to fix this would be to boot with an Arch Linux installation medium, perform a chroot, and reinstall the kernel using pacman:

# mount /dev/yourrootdisk /mnt
# mount /dev/yourbootdisk /mnt/boot           # if needed
# mount /dev/yourefipartition /mnt/boot/EFI   # if you use EFI (optional)
# arch-chroot /mnt
# pacman -S linux

The files that should be modified are /boot/initramfs-linux.img and /boot/initramfs-linux-fallback.img, so you probably don't need to mount the EFI partition.

If for some reason you can't use pacman, you can also launch mkinitcpio by hand to regenerate the initramfs to use the new kernel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321063/"
]
} |
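If pacman is unavailable inside the chroot, the by-hand regeneration mentioned at the end looks roughly like this (mkinitcpio -P runs every installed preset, -p a single one):

# mkinitcpio -P          # all presets
# mkinitcpio -p linux    # only the default kernel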
481,868 | I have this test.debug file in AIX unix with the following fields:

in[  2: ]<0000********0000>
in[  3: ]<0>
in[  4: ]<000000020000>
in[  7: ]<1113>
in[  7: ]<80402>
in[ 11: ]<5530>
in[ 12: ]<181113>
in[ 12: ]<90254>
in[ 17: ]<1113>
in[ 19: ]<960>
in[ 22: ]<510101510000>
in[ 24: ]<400>
in[ 25: ]<4021>
in[ 26: ]<7011>
in[ 28: ]<181115>
in[ 30: ]<000000020000>
in[ 32: ]<000090>
in[ 33: ]<589638>
in[ 37: ]<000000000132>
in[ 41: ]<75000001>
in[ 42: ]<01111111111 >
in[ 49: ]<960>
in[ 56: ]<110000553018111309025406000004>
in[128: ]<98D6F81BFFFFFFFF>
out[000: ]<ISO9090-9999999902299>
in[129: ]<9420>

I want to create a script that will be able to select the following: in[ 32: ]<000090>, in[ 49: ]<960>, out[000: ]<ISO9090-9999999902299> and in[129: ]<9420>, and echo them out. The directory for the logs is /var/debug/logs. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/481868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320907/"
]
} |
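A sketch of one way to pull exactly those four fields with grep -E, assuming the bracket spacing stays as shown above (adjust the padding if it varies between runs):

grep -E 'in\[ 32: \]|in\[ 49: \]|out\[000: \]|in\[129: \]' /var/debug/logs/test.debug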
481,884 | I used sudoedit to create a file:

$ sudoedit /etc/systemd/system/apache2.service

but when I went to save the file, it wrote it to a temporary directory (/var/temp/blahblah). What is going on? Why is it not saving it to the system directory? | The point of sudoedit is to allow users to edit files they wouldn't otherwise be allowed to, while running an unprivileged editor.

To make this happen, sudoedit copies the file to be edited to a temporary location, makes it writable by the requesting user, and opens it in the configured editor. That's why the editor shows an unrelated filename in a temporary directory. When the editor exits, sudoedit checks whether any changes were really made, and copies the changed temporary file back to its original location if necessary. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/481884",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47542/"
]
} |
481,896 | I created a service unit to run apache-httpd and it is working, but I am concerned that my service-unit configuration file is a file, while the other items in the directory (/etc/systemd/system) are all directories, so my file looks like an anomaly. It works, but why is my definition different than the others? I used the instructions at "Tech Guides" to create the service unit. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/481896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47542/"
]
} |
481,939 | I have generated keys using GPG, by executing the following command:

gpg --gen-key

Now I need to export the key pair to files; i.e., the private and public keys to private.pgp and public.pgp, respectively. How do I do it? | Export Public Key

This command will export an ascii armored version of the public key:

gpg --output public.pgp --armor --export username@email

Export Secret Key

This command will export an ascii armored version of the secret key:

gpg --output private.pgp --armor --export-secret-key username@email

Security Concerns, Backup, and Storage

A PGP public key contains information about one's email address. This is generally acceptable since the public key is used to encrypt email to your address. However, in some cases, this is undesirable.

For most use cases, the secret key need not be exported and should not be distributed. If the purpose is to create a backup key, you should use the backup option:

gpg --output backupkeys.pgp --armor --export-secret-keys --export-options export-backup user@email

This will export all necessary information to restore the secret keys including the trust database information. Make sure you store any backup secret keys off the computing platform and in a secure physical location.

If this key is important to you, I recommend printing out the key on paper using paperkey, and placing the paper key in a fireproof/waterproof safe.

Public Key Servers

In general, it's not advisable to post personal public keys to key servers. There is no method of removing a key once it's posted, and there is no method of ensuring that the key on the server was placed there by the supposed owner of the key. It is much better to place your public key on a website that you own or control. Some people recommend keybase.io for distribution. However, that method tracks participation in various social and technical communities, which may not be desirable for some use cases. For the technically adept, I personally recommend trying out the webkey domain level key discovery service. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/481939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321124/"
]
} |
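To restore the exported keys on another machine, the matching import commands are:

gpg --import public.pgp
gpg --import private.pgp

After importing a secret key you will usually also want to re-establish its ownertrust, e.g. with the trust subcommand of gpg --edit-key username@email.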
482,032 | When I try to get the week number for Dec 31, it returns 1. When I get the week number for Dec 30, I get 52 --- which is what I would expect. The day Monday is correct. This is on an RPi running Ubuntu.

$ date -d "2018-12-30T1:58:55" +"%V%a"
52Sun
$ date -d "2018-12-31T1:58:55" +"%V%a"
01Mon

Same issue without the time string:

$ date -d "2018-12-31" +"%V%a"
01Mon | This is giving you the ISO week, which begins on a Monday.

The ISO week date system is effectively a leap week calendar system that is part of the ISO 8601 date and time standard issued by the International Organization for Standardization (ISO) since 1988 (last revised in 2004) and, before that, it was defined in ISO (R) 2015 since 1971. It is used (mainly) in government and business for fiscal years, as well as in timekeeping. This was previously known as "Industrial date coding". The system specifies a week year atop the Gregorian calendar by defining a notation for ordinal weeks of the year.

An ISO week-numbering year (also called ISO year informally) has 52 or 53 full weeks. That is 364 or 371 days instead of the usual 365 or 366 days. The extra week is sometimes referred to as a leap week, although ISO 8601 does not use this term. Weeks start with Monday. Each week's year is the Gregorian year in which the Thursday falls. The first week of the year, hence, always contains 4 January. ISO week year numbering therefore slightly deviates from the Gregorian for some days close to 1 January.

If you want to show 12/31 as week 52, you should use %U, which does not use the ISO standard:

$ date -d "2018-12-31T1:58:55" +"%V%a"
01Mon
$ date -d "2018-12-31T1:58:55" +"%U%a"
52Mon | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/482032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321191/"
]
} |
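When using %V it is usually paired with %G, the ISO week-numbering year, rather than %Y; otherwise dates like this one print a misleading year next to a correct week number:

$ date -d "2018-12-31" +"%G-W%V-%u"
2019-W01-1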
482,348 | I looked into the systemd-networkd and systemd-resolved man pages: systemd.network(5) on manpages.ubuntu.com and systemd-resolved.service(8) on manpages.ubuntu.com, and I am confused by some words.

systemd-resolved.service(8):

Single-label names are routed to all local interfaces capable of IP multicasting, using the LLMNR protocol. Lookups for a hostname ending in one of the per-interface domains are exclusively routed to the matching interfaces.

systemd.network(5):

Both "search" and "routing-only" domains are used for routing of DNS queries: look-ups for host names ending in those domains (hence also single label names, if any "search domains" are listed), are routed to the DNS servers configured for this interface.

My question is: for hosts with a bunch of interfaces configured with "search domains" and LLMNR enabled, where will the single-label lookup requests go?

More details of my confusion:

1. If an interface is configured with search domain "mydomain" and LLMNR disabled, will any single-label lookup request be routed into this interface?
2. If an interface is configured with search domain "mydomain" and LLMNR enabled and a lookup request for "xyz" comes in, will "xyz" through LLMNR and "xyz.mydomain" through the specified DNS server both happen? | This issue gets very long to explain. The short (imprecise) description is:

where will the single-label lookup requests go?

Single label? (not localhost et al.): Always to the LLMNR system.
Multi-label?: To the DNS servers of each interface. On failure (or if not configured), to the global DNS servers.

Yes, the general sequence goes as described in systemd-resolved.service(8) BUT:

Routing of lookups may be influenced by configuring per-interface domain names. See systemd.network(5) for details.

That sets systemd.network(5) as an additional resource for DNS resolution. And, understand that, from RFC 4795:

Since LLMNR only operates on the local link, it cannot be considered a substitute for DNS.

The sequence (simplified) is:

1. The local, configured hostname is resolved to all locally configured IP addresses ordered by their scope, or — if none are configured — the IPv4 address 127.0.0.2 (which is on the local loopback) and the IPv6 address ::1 (which is the local host).
2. The hostnames "localhost" and "localhost.localdomain" (and any hostname ending in ".localhost" or ".localhost.localdomain") are resolved to the IP addresses 127.0.0.1 and ::1.
3. The hostname "_gateway" is resolved to …
4. The mappings defined in /etc/hosts are included (forth and back).
5. If the name to search has no dots (a name like home. has a dot) it is resolved by the LLMNR protocol. LLMNR queries are sent to and received on port 5355. RFC 4795
6. Multi word (one dot or more) names for some domain suffixes (like ".local", see full list with systemd-resolve --status) are resolved via the MulticastDNS protocol.
7. Multi word names are checked against the Domains= list in systemd.network(5) per interface and, if matched, the list of DNS servers of that interface is used.
8. Other multi-label names are routed to all local interfaces that have a DNS server configured, plus the globally configured DNS server if there is one.

Edit

The title of your question reads: How single-label dns lookup requests are handled by systemd-resolved? So, I centered my answer on systemd-resolved exclusively. Now you ask:

If an interface is configured with search domain "mydomain" and LLMNR disabled, will any single-label lookup request be routed into this interface?
If an interface is configured with search domain "mydomain" and LLMNR enabled and a lookup request for "xyz" comes in, will "xyz" through LLMNR and "xyz.mydomain" through the specified DNS server both happen?

Those appear to be outside of systemd-resolved exclusively. Let's try to analyze them:

LLMNR disabled? How, might I ask? By disabling systemd-resolved itself with something similar to systemctl mask systemd-resolved? If systemd-resolved is disabled/stopped there is no LLMNR in use (most probably, unless you install Avahi, Apple Bonjour or a similar program). But certainly, that is outside of systemd-resolved configuration.

In this case, we should ask: what happens when a name resolution fails? (as there is no server to answer it). That is configured in nsswitch (file /etc/nsswitch.conf). The default configuration for Ubuntu (as Debian) contains this line:

hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname

That means (in nsswitch parlance):

Begin by checking the /etc/hosts file. If not found, continue.

Try mdns4_minimal (Avahi et al.), which will attempt to resolve the name via multicast DNS only if it ends with .local. If it does but no such mDNS host is located, mdns4_minimal will return NOTFOUND. The default name service switch response to NOTFOUND would be to try the next listed service, but the [NOTFOUND=return] entry overrides that and stops the search with the name unresolved. If mdns4_minimal returns UNAVAIL (not running) then go to dns.

The plot thickens: everybody wants to be first on the list to resolve names and everyone offers to do all resolutions by themselves. The dns entry in nsswitch actually calls nss-resolve first, which replaces nss-dns:

nss-resolve is a plug-in module for the GNU Name Service Switch (NSS) functionality of the GNU C Library (glibc) enabling it to resolve host names via the systemd-resolved(8) local network name resolution service. It replaces the nss-dns plug-in module that traditionally resolves hostnames via DNS.

Which will depend on the several DOMAINS= entries in the general /etc/systemd/resolved.conf and/or each interface via /etc/systemd/network files. That was explained above the Edit entry. Understand that systemd-resolved might query the DNS servers by itself before the dns entry in nsswitch.

If not found yet (without a [NOTFOUND=return] entry) then try the DNS servers. This will happen more-or-less immediately if the name does not end in .local, or not at all if it does. If you remove the [NOTFOUND=return] entry, nsswitch would try to locate unresolved .local hosts via unicast DNS. This would generally be a bad thing, as it would send many such requests to Internet DNS servers that would never resolve them. Apparently, that happens a lot.

The final myhostname acts as a last-resort resolver for localhost, the hostname, *.local and some other basic names.

If systemd-resolved has LLMNR=no set in /etc/systemd/resolved.conf the same list as above applies, but systemd-resolved is still able to resolve localhost and to apply DOMAINS= settings (global or per interface). Understand that there's the LLMNR setting in systemd-resolved and there's also the per-link LLMNR setting in systemd-networkd (link).

What does all that mean? That it is very difficult to say with any certainty what will happen unless the configuration is very, very specific. You will have to disable services and try (in your computer, with your configuration) what will happen.

Q1: If an interface is configured with search domain "mydomain" and LLMNR disabled, will any single-label lookup request be routed into this interface?

Yes, of course, it may. That LLMNR is disabled only blocks local resolution (no other servers on the local (yes: .local) network will be asked), but the resolution for that name must find an answer (even if negative), so it may happen (if there is no NOTFOUND=return entry, for example) that the DNS servers for a matching interface are contacted to resolve mylocalhost.mylocaldomain when a resolution for mylocalhost was started and there is an entry for mylocaldomain in the "search domain". To answer in a general sense is almost impossible, too many variables.
#Q1 If an interface is configured with search domain "mydomain" and LLMNR disabled, will any single-label lookup request be routed into this interface? Yes, of course, it may . That LLMNR is disabled only blocks local resolution (no other servers on the local ( yes: .local ) network will be asked) but the resolution for that name must find an answer (even if negative) so it may (if there is no NOTFOUND=return entry, for example) happen that the DNS servers for a matching interface are contacted to resolve mylocalhost.mylocaldomain when a resolution for mylocalhost was started and there is an entry for mylocaldomain in the "search domain". to answer in a general sense is almost imposible, too many variables. #Q2 If an interface is configured with search domain "mydomain" and LLMNR enabled and a lookup request for "xyz" comes in, will "xyz" through LLMNR and "xyz.mydomain" throuth specified dns server both happen? No. if all is correctly configured a single label name "xyz" should only be resolved by LLMNR, and, even if asked, a DNS server should not try to resolve it. Well, that's theory. But the DNS system must resolve com (clearly, or the network would fall down as it is now). But there is a simple workaround: ask for com. , it has a dot, it is a FQDN. In any case, a DNS server shuld answer with NOERROR (with an empty A (or AAAA)) if the server doesn't have enough information about a label and the resolution should continue with the root servers (for . ). Or with a NXDOMAIN (the best answer to avoid further resolutions) for domains it knows that do not exist. The only safe way to control this is to have a local DNS server and choose which names to resolve and which not to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321473/"
]
} |
482,351 | I've created a NFS volume with

docker volume create --driver local --opt type=nfs --opt o=addr=preisschild-server-2.lan,rw --opt device=:/mnt/tank/MariaDB MariaDBData

which seems to work, but when I use the volume on a docker container:

docker run --name MariaDB -v MariaDBData:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=topsecretpassword --network bridged -p 3306:3306 -d mariadb:latest

I get

/usr/bin/docker-current: Error response from daemon: SELinux relabeling of /var/lib/docker/volumes/MariaDBDataNFS/_data is not allowed: "operation not supported"

as output. I've already tried setting SELinux to permissive, but that didn't work.

Additional information:
OS: CentOS 7
Docker version: 1.13.1 | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201367/"
]
} |
482,390 | After the last upgrade on:

Operating System: Debian GNU/Linux buster/sid
Kernel: Linux 4.18.0-2-686-pae
Architecture: x86

/usr/lib/tracker/tracker-store eats a huge load of CPU:

 PID  USER  PR  NI  VIRT   RES    SHR    S  %CPU   %MEM  TIME+    COMMAND
7039  nath  20  0   96136  24460  11480  R  100,0  1,3   0:01.76  tracker-store

When I run tracker daemon I get:

Miners:
17 Nov 2018, 21:17:06:  ?  File System  - Not running or is a disabled plugin
17 Nov 2018, 21:17:06:  ?  Applications - Not running or is a disabled plugin
17 Nov 2018, 21:17:06:  ?  Extractor    - Not running or is a disabled plugin

I thought I had disabled all tracker activities, so what is it doing? The fan is going like crazy and a reboot does not improve the situation. | After having tracker-store running with almost 100% CPU, almost all the time, for 7 days now, it seems like I found an easy fix:

tracker reset --hard
CAUTION: This process may irreversibly delete data.
Although most content indexed by Tracker can be safely reindexed, it can't be assured that this is the case for all data. Be aware that you may be incurring in a data loss situation, proceed at your own risk.
Are you sure you want to proceed? [y|N]:

The /usr/lib/tracker/tracker-store process is gone, the fan is spinning down, and everything is quiet after a week. After a reboot, tracker-store still stays quiet.

Update for Tracker3:

tracker3 reset -s -r | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/482390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
482,393 | Given an array of strings, I would like to sort the array according to the length of each element. For example...

array=(
    "tiny string"
    "the longest string in the list"
    "middle string"
    "medium string"
    "also a medium string"
    "short string"
)

Should sort to...

"the longest string in the list"
"also a medium string"
"medium string"
"middle string"
"short string"
"tiny string"

(As a bonus, it would be nice if the list sorted strings of the same length, alphabetically. In the above example medium string was sorted before middle string even though they are the same length. But that's not a "hard" requirement, if it over complicates the solution). It is OK if the array is sorted in-place (i.e. "array" is modified) or if a new sorted array is created. | If the strings don't contain newlines, the following should work. It sorts the indices of the array by the length, using the strings themselves as the secondary sort criterion.

#!/bin/bash

array=(
    "tiny string"
    "the longest string in the list"
    "middle string"
    "medium string"
    "also a medium string"
    "short string"
)

expected=(
    "the longest string in the list"
    "also a medium string"
    "medium string"
    "middle string"
    "short string"
    "tiny string"
)

indexes=( $(
    for i in "${!array[@]}" ; do
        printf '%s %s %s\n' $i "${#array[i]}" "${array[i]}"
    done | sort -nrk2,2 -rk3 | cut -f1 -d' '
))

for i in "${indexes[@]}" ; do
    sorted+=("${array[i]}")
done

diff <(echo "${expected[@]}") \
     <(echo "${sorted[@]}")

Note that moving to a real programming language can greatly simplify the solution, e.g. in Perl, you can just

sort { length $b <=> length $a or $a cmp $b } @array | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321499/"
]
} |
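For strings without newlines, a shorter pipeline sketch of the same idea (bash 4+ for mapfile) prefixes each element with its length, sorts by length descending and then alphabetically, and strips the prefix again:

mapfile -t sorted < <(
    printf '%s\n' "${array[@]}" |
    awk '{ print length, $0 }' |
    sort -k1,1nr -k2 |
    cut -d' ' -f2-
)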
482,417 | If I name a function or script with a dash - instead of an underline _ between words, would that be bad? For example, function duplicate-me() or duplicate-me.sh. In bash, a variable name can contain underlines but no dashes. If I name functions and scripts using dashes instead of underlines, then bash will never intend to interpret such a name as a variable, and I can expect bash to give an error message if I misuse such a name as a variable. Thanks. | Using dashes is bad if you want to be portable: the standard allows only alphanumerics and underscores, so dashes might not work in all shells:

$ dash -c 'foo-bar() { echo foo-bar; }; foo-bar'
dash: 1: Syntax error: Bad function name

(Busybox and ksh also don't accept them; Bash, Zsh, and mksh do.)

That's not a problem if you know you're using Bash, so if you like dashes, you can use them in function names. However, if you want to avoid misusing a function in place of a variable name, note that e.g. $foo-bar and ${foo-bar} are both valid shell syntax. The first expands the variable $foo, and appends the string -bar; the second expands to the value of $foo, or if it's unset, the given value bar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
482,438 | In a Bash script, I call a program like this in several places:

numfmt --suffix=" B" --grouping 231210893

Where the number is different every time, but the other parameters stay the same. I would now like to move the other parameters out of the many different calls, so they are centrally defined and can be easily changed. My attempt was like this:

NUMFMT='--suffix=" B" --grouping'
...
numfmt $NUMFMT 231210893

Unfortunately, this doesn't work. The quote signs are removed at some point, and numfmt complains about an uninterpretable extra argument B. I tried plenty of other versions, using other quotes both in the definition and in the use of NUMFMT, to no avail. How do I do this properly? And if it's not too complicated, I would also like to understand why my version doesn't work and (hopefully) another one does. | Try arrays:

NUMFMT=( --suffix=" B" '--grouping' )
...
numfmt "${NUMFMT[@]}" 231210893 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48147/"
]
} |
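With the array in place, a tiny helper function keeps the call sites even shorter (in a locale that groups thousands, the output is e.g. 231,210,893 B):

numfmt_b() { numfmt "${NUMFMT[@]}" "$@"; }

numfmt_b 231210893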
482,477 | I have a list of pathnames like this in a file:

/path/to/directory/one.txt
/longer/path/to/some/directory/two.py
/path/with spaces/in/it/three.sh

I want to delete all characters after the last occurrence of "/", so the desired output for the above is:

/path/to/directory/
/longer/path/to/some/directory/
/path/with spaces/in/it/ | sed 's![^/]*$!!'

This does the following:

Use a delimiter other than /, because the regular expression contains /. (I like ! and | because they look like dividing lines; other people use @, #, ^, or whatever they feel like. It is possible to use / in a regular expression delimited by /, but that can be hard for a person to read.)

Find a string of (zero or more) characters other than /. (Make it as long as possible.) But it must be at the end of the line. And replace it with nothing.

Caveat: an input line that contains no / characters will be totally wiped out (i.e., the entire contents of the line will be deleted, leaving only a blank line). If we wanted to fix that, and let a line with no slashes pass through unchanged, we could change the command to

sed 's!/[^/]*$!/!'

This is the same as the first answer, except it matches the last / and all the characters after it, and then replaces them with a / (in effect, leaving the final / in the input line alone). So, where the first answer finds one.txt and replaces it with nothing, this finds /one.txt and replaces it with /. But, on a line that contains no / characters, the first answer matches the entire line and replaces it with nothing, while this one would not find a match, and so would not make a substitution.

We could use / as the delimiter in this command, but then it would have to be

sed 's/\/[^/]*$/\//'

“escaping” the slashes that are part of the regular expression and the replacement string by preceding them with back slashes (\). Some people find this jungle of “leaning trees” to be harder to read and maintain, but it basically comes down to a matter of style. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321546/"
]
} |
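A quick demonstration of both variants on sample input (the paths are made up), showing the no-slash caveat:

```sh
printf '%s\n' /path/to/directory/one.txt '/path/with spaces/in/it/three.sh' noslash |
    sed 's![^/]*$!!'        # "noslash" becomes an empty line

printf '%s\n' /path/to/directory/one.txt noslash |
    sed 's!/[^/]*$!/!'      # "noslash" passes through unchanged
```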
482,481 | I have a folder on my desktop named Models. In the folder, there are named folders with jpeg files without the .jpg extensions. The jpeg file names are random hashes. I would like to use a bash script to batch rename these jpeg files to their directory names with increments and append the .jpg extension on each file. It's basically something like Models/ Alice/ a5ccB2ff3e ee420bc4a 2acee54dc ... Alex/ de33fa24c0 d1eaa48e0a ... And I want it to be like Models/ Alice/ Alice001.jpg Alice002.jpg Alice003.jpg ... Alex/ Alex001.jpg Alex002.jpg ... | In bash you can loop over each model directory, derive the prefix from the directory name, and rename the files with an incrementing zero-padded counter while appending the .jpg extension; a sketch is given after this record. While testing, print the mv commands first (or use mv -n ) so a mistake cannot clobber any files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321508/"
]
} |
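The sketch referenced in the answer above — a bash loop to be run inside Models/ (the `echo` keeps it a dry run until the output looks right):

```sh
#!/bin/bash
# Rename hash-named files to <DirName>NNN.jpg, one counter per directory.
shopt -s nullglob
for dir in */; do
    name=${dir%/}               # e.g. "Alice"
    i=0
    for f in "$dir"*; do
        [[ -f $f ]] || continue
        printf -v new '%s%s%03d.jpg' "$dir" "$name" "$((++i))"
        echo mv -n -- "$f" "$new"   # drop "echo" to actually rename
    done
done
```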
482,517 | We use the following script: more test.sh#!/bin/bashwhile read -r linedoecho $linedone < /tmp/file This is the file: kafka-broker,log.retention.hours,12kafka-broker,default.replication.factor,2fefolp-defaults,fefolp.history.fs.cleaner.interval,1dfefolp-defaults,fefolp.history.fs.cleaner.maxAge,2dfefolp-env,fefolp_daemon_memory,10000blo-site,blo.nodemanager.localizer.cache.target-size-mb,10240blo-site,blo.nodemanager.localizer.cache.cleanup.interval-ms,300000ams-env,metrics_collector_heapsize,512fefolp,hbase_master_heapsize,1408fefolp,hbase_regionserver_heapsize,512fefolp,hbase_master_xmn_size,192core-site,blolp.proxyuser.ambari.hosts,*core-site,Hadoop.proxyuser.root.groups,*core-site,Hadoop.proxyuser.root.hosts,*blo-site,blo.scheduler.minimum-allocation-mb,1024blolp-env,fefolp_heapsize,4096 Remark - after the last line - there are no space! But the script prints only these lines (except the last line): ./test.shkafka-broker,log.retention.hours,12kafka-broker,default.replication.factor,2fefolp-defaults,fefolp.history.fs.cleaner.interval,1dfefolp-defaults,fefolp.history.fs.cleaner.maxAge,2dfefolp-env,fefolp_daemon_memory,10000blo-site,blo.nodemanager.localizer.cache.target-size-mb,140blo-site,blo.nodemanager.localizer.cache.cleanup.interval-ms,300ams-env,metrics_collector_heapsize,51fefolp,hbase_master_heapsize,1408fefolp,hbase_regionserver_heapsize,542fefolp,hbase_master_xmn_size,19core-site,blolp.proxyuser.ambari.hosts,*core-site,Hadoop.proxyuser.root.groups,*core-site,Hadoop.proxyuser.root.hosts,*blo-site,blo.scheduler.minimum-allocation-mb,1024 Why does this happen? | Your input text contains an incomplete line as its last line. The last line is not terminated by a newline. while IFS= read -r line || [ -n "$line" ]; do printf '%s\n' "$line"done <file The above loop will read unmodified lines¹ (without stripping whitespaces or interpreting backslashed control sequences) from the file called file and print them to standard output. When an incomplete line is read, read will fail, but $line will still contain data. The extra -n test will detect this so that the loop body is allowed to output the incomplete line. In the iteration after that, read will fail again and $line will be an empty string, thus terminating the loop. ¹ assuming they don't contain NUL characters in shells other than zsh and assuming they don't contain sequences of bytes not forming part of valid characters in the yash shell, both of which shouldn't happen if the input is valid text, though that missing line delimiter on the last line already makes it invalid text. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
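A runnable demonstration of the pattern from the answer — the input file deliberately lacks a final newline:

```sh
printf 'first\nsecond\nlast-without-newline' > /tmp/testfile

while IFS= read -r line || [ -n "$line" ]; do
    printf '<%s>\n' "$line"
done < /tmp/testfile
# prints all three lines; with a plain "while read" loop the third is lost
```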
482,569 | update-grub failed with an error message # update-grubbash: update-grub: command not found @GAD3R Output of # [ -d /sys/firmware/efi ] && echo EFI || echo legacylegacy Note1 I've installed Debian 10 Buster Alpha 3 release (Xfce) using the amd64 CD iso installer using a default installation (except that I removed the print server and added the ssh server ). Note2 I used the root account ( su root ). | Solutions (best ones first) su - root instead of su root - nicest solution (thanks to Rui) extend path of the regular user in /etc/enviroment or ~/.bashrc or similar config file call commands explicitly; using this solution would require that one modifies all scripts that happens to call another command from sbin (this is not practical, nevertheless there is an example of it in the troubleshooting section) Findings This happened because the PATH works in a really strange way (actually works as designed). regular user login -> environment PATH doesn't contain /usr/sbin => opinion: works asdesigned, quite logical su root -> admin rights, but the environment is lacking /usr/sbin:/sbin=> opinion: works as designed, but illogical, because an account with root level of access should be able to execute commands from sbin without adding the path to the binaries manually su - root -> admin rights, /usr/sbin on the path => opinion: works asdesigned, quite logical Some more background There are two PATH defined in /etc/login.defs, but unless I start su - or su - root , I'm going to get the ENV_PATH.I know that this has been designed this way, to keep the environment of the actual user, but in this single case, it really boggles my mind, why not add automatically /usr/sbin and /sbin to thew path of a "regular user" after a successful su root # cat /etc/login.defs |grep PATH=ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binENV_PATH PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games Troubleshooting I've found that there is an update-grub command in /usr/sbin . # find / -name update-grub/usr/sbin/update-grub Ran it, just to get the next error message. # /usr/sbin/update-grub/usr/sbin/update-grub: 4: exec: grub-mkconfig: not found Searched for grub-mkconfig and found it under /usr/sbin/grub-mkconfig .Then it came to me, let's see how the update-grub script looks like? #cat /usr/sbin/update-grub |grep grub-mkconfigexec grub-mkconfig -o /boot/grub/grub.cfg "$@" Modified /usr/sbin/update-grub in order to call grub-mkconfig by it's explicit path ... exec /usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg "$@" ... then called update-grub with it's explicit path and tada, it worked! # /usr/sbin/update-grubGenerating grub configuration file ...Found background image: /usr/share/images/desktop-base/desktop-grub.pngFound linux image: /boot/vmlinuz-4.18.0-2-amd64Found initrd image: /boot/initrd.img-4.18.0-2-amd64Found linux image: /boot/vmlinuz-4.16.0-2-amd64Found initrd image: /boot/initrd.img-4.16.0-2-amd64done Conclusion This must be something about the PATH | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252902/"
]
} |
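A quick way to see the difference the login shell makes (the resulting paths are the ENV_PATH/ENV_SUPATH defaults from /etc/login.defs quoted above):

```sh
su root -c 'echo $PATH'     # plain su: inherits ENV_PATH, no /usr/sbin or /sbin
su - root -c 'echo $PATH'   # login shell: gets ENV_SUPATH, including the sbin dirs
```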
482,596 | When after I change dhcp to static in network-scripts and then I restart service systemctl restart NetworkManager . The static ip settings suppose to be updated but it doesn't. so I try ip link set dev enp0s3 down and then up didn't update the ip and then I try with ifdown enp0s3 then ifup enp0s3 it worked Why has it worked with ifup ? | Preamble: ip and ifconfig are utilities for controlling and monitoring networking. They are not typically used for reading/writing persistent configuration files - and this is why ip link did not work. Persistent configuration management has to be accomplished by other means, such as NetworkManager . (It's likely needless to say, but, as a side note, iproute2 , which provides ip , has been/is being adopted by many distributions as a replacement for net-tools , which provides ifconfig . They are often both shipped as default packages in distributions for compatibility reasons). Why ifup worked and systemctl restart NetworkManager did not: On CentOS (I have checked for CentOS 7), ifup and ifdown are provided by initscripts ; they operate on the scripts in /etc/sysconfig/network-scripts/ , provided by the same package. Thus, no surprise in ifup being able to apply the changes you made there. NetworkManager - the default networking service provider that CentOS inherited from upstream - on Red Hat and Fedora is configured to use the ifcfg-rh plugin to read/write network configuration from /etc/sysconfig/network-scripts/ifcfg-* . But it does not monitor those files. man nm-settings-ifcfg-rh warns that Users can create or modify the ifcfg-rh connection files manually, even if that is not the recommended way of managing the profiles. However, if they choose to do that, they must inform NetworkManager about their changes (see monitor-connection-file in nm-settings(5), and nmcli con (re)load). Thus, systemctl reload NetworkManager is not supposed to reload the configuration of a network connection from file on CentOS. To do that you can invoke nmcli connection reload or change NetworkManager configuration as stated in man NetworkManager.conf : monitor-connection-files Whether the configured settings plugin(s) should set up file monitors and immediately pick up changes made to connection files while NetworkManager is running. This is disabled by default; NetworkManager will only read the connection files at startup, and when explicitly requested via the ReloadConnections D-Bus call. [...] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482596",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/204680/"
]
} |
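The NetworkManager-native way to set the static address, avoiding hand-edited ifcfg files entirely — a sketch in which the connection name enp0s3 and the addresses are placeholders (list your connection names with nmcli con show):

```sh
nmcli con mod enp0s3 ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
nmcli con up enp0s3
```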
482,602 | I have a simple while loop accepting input: #!/bin/bashwhile true; do read -rep $'\n '"$USER"'> ' userInput echo "$userInput"done Example: ./input.sh username> command1command1 username> command2command2 Is it possible to have a command history? So that I can press up on my keyboard to view the previously executed commands (without leaving the while loop)? | You could use the small Readline wrapper rlwrap . This is a neat little tool that provides command history to utilities that don't implement it by themselves. You would use rlwrap on the script itself: rlwrap -a ./script.sh This would save a history file called ~/.script.sh_history and would use that file not only in the current session, but also in future sessions to provide a sort of history that you could step through. See the manual for rlwrap . rlwrap is commonly available as a package on most Unices, but may also be had from its GitHub repository . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321630/"
]
} |
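If installing rlwrap is not an option, bash itself can approximate this: read -e enables Readline editing, and history -s pushes each entry onto the in-memory history list so the Up arrow recalls it within the session. A sketch (session-local only; nothing is saved to a history file):

```sh
#!/bin/bash
while true; do
    read -rep $'\n '"$USER"'> ' userInput || break
    history -s "$userInput"     # make it reachable with the Up arrow
    echo "$userInput"
done
```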
482,613 | I started using dotfiles to sync everything I need to github . But I got some problems when symlinking config files from dotfiles/ to ~/ Examples: $ rm ~/.config/termite/config$ ln -s ~/dotfiles/termite/config - > ~/.config/termite/config $ ll ~/dotfiles/termite total 4.0K-rw-r--r-- 1 hieuc users 1.9K Nov 18 15:19 config It won't let me edit, and it cannot be read by termite ~/.config/termite/config [Permission Denied] Does anyone know how to fix it? | A symlink has no permissions of its own: opening a file through it requires search ( x ) permission on every directory along the target path, plus read permission on the target file itself. Since the file is rw-r--r-- and owned by you, the denial almost certainly comes from a directory component. Run namei -l ~/.config/termite/config to list every component of the resolved path together with its mode, find the directory that is missing the execute bit (typically ~/dotfiles or ~/dotfiles/termite ), and fix it with chmod u+x (or 755 ) on that directory. Also double-check the link itself with ls -l ~/.config/termite/config to make sure it points where you expect. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312326/"
]
} |
482,724 | what I'm trying to do is to enable a Raspberry , physically placed in a client's site, that has internet access via Dynamic IP , to receive SSH commands from the wild without having to manipulate the client's router and firewall . No Inbound connections allowed on that network, plus, the internet address of the Client's site is dynamic. Edit : to know the solution for this problem, refer to the replies of Kevin_Kinsey and Florin Godard , or scroll to the end of my question to know how I was able to get it working on non-standard SSH port 22. I've already tried to study and understand what's proposed on ssh to private-ip on Unix Stack Exchange , but I'm really not getting the point. I do want to connect from my, say, laptop , to the Client's VPS server , and make the VPS server connect to the Raspberry SSH. So: ( firewall access allow in+out ) | => VPS Server \ ( firewall access allow out only ) | | => RaspberryMY PC / Here is it a case scenario with given IP adresses, ports and names configurations: MY PC name: [email protected] VPS Server name: remote.null.tld IP Address: 98.76.54.32 SSH Port: 9876Raspberry model: Zero W name: [email protected] IP Address: dynamic IP ( based on Internet Provider ) SSH Port: 6789 Raspberry's iptables: empty Router's Firewall Restrictions: allow only out Internet stability: very low The Raspberry's external IP is the one assigned from the Internet Provider, and may vary depending on router restarts. Cannot determine it absolutely. Internet Access on the client's network is really unstable . Radio link or something like that. Anyway, internet connection suffers of very dancing bandwidth. Plus, the Client's router cannot be manipulated not because of laziness but because of restrictions imposed by the Client's IT dep. I do have SSH access to the Client's VPS and I'm able to install whatever software on it. Edit: solution to the problem On my configuration, ports were non-standard. So, the solution was this one: On the Raspberry: # login to [email protected] is done via private/public key with no passwordsssh -p 9876 -f -N -T -R 55555:localhost:6789 [email protected] On the Raspberry's crontab: # A re-connect is performed at every 10th minute of every hour to prevent accidental tunnel breakdowns.10 * * * * ps -ef | grep 'ssh -p 9876 -f -N -T -R' | grep -v grep | awk '{print $2}' | xargs -r kill -9 && sleep 30s && ssh -p 9876 -f -N -T -R 55555:localhost:6789 [email protected] >/dev/null 2>&1 On the bridge VPS remote.null.tld ssh -p 55555 raspberry_username@localhost Or, a more elegant solution via modifying the VPS's ssh config: Host tunnelToRemoteRaspberry Hostname localhost User raspberry_username Port 55555 | This is possible. Use "reverse port forwarding" . You'll probably need a cronjob set up to check if it's connected. If not, run something like this: ssh -f -N -T -R 2210:localhost:22 [email protected] "Example.com" is some server outside the FW that you do have access to. You're forwarding port 22 on the RPi to port 2210 on example.com . You can then SSH into example.com and do: ssh RaspberryUser@localhost -p 2210 And you'll be connected to the RPi box. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321696/"
]
} |
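Instead of the cron-based restart, autossh can supervise the reverse tunnel and re-establish it when it drops. A hedged equivalent of the command in the question (same hosts and ports, keepalive values illustrative):

```sh
autossh -M 0 -f -N -T \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -p 9876 -R 55555:localhost:6789 [email protected]
```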
482,725 | Are there any substitutes, alternatives or bash tricks for delaying commands without using sleep ? For example, performing the below command without actually using sleep: $ sleep 10 && echo "This is a test" | With bash builtins, you can do: coproc read -t 10 && wait "$!" || true To sleep for 10 seconds without using sleep . The coproc is to make so that read 's stdin is a pipe where nothing will ever come out from. || true is because wait 's exit status will reflect a SIGALRM delivery which would cause the shell to exit if the errexit option is set. In other shells: mksh and ksh93 have sleep built-in, no point in using anything else there (though they both also support read -t ). zsh also supports read -t , but also has a builtin wrapper around select() , so you can also use: zmodload zsh/zselectzselect -t 1000 # centiseconds If what you want is schedule things to be run from an interactive shell session, see also the zsh/sched module in zsh . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321697/"
]
} |
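Wrapping the bash trick in a function makes it a drop-in replacement — a sketch, where the name snooze is made up:

```sh
snooze() {    # snooze SECONDS -- sleep without /bin/sleep
    coproc read -t "$1" && wait "$!" || true
}

snooze 10 && echo "This is a test"
```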
482,755 | How can I display colors in terminal to handle hexadecimal color values ?It can be useful for theming, XResources etc. For example : $ command '#FF0000'// display a red square I use urxvt, i3wm in manjaro. | An alternative: show_colour() { perl -e 'foreach $a(@ARGV){print "\e[48:2::".join(":",unpack("C*",pack("H*",$a)))."m \e[49m "};print "\n"' "$@"} Example usage: $ show_colour "FF0088" "61E931" "1256E2" This prints spaces with the given RGB background colours. Note that you must not use # in the RGB code. I leave stripping that if present as an exercise for the reader. ☺ This does not alter the terminal emulator's palette. Caveat: Your terminal emulator must understand direct colour SGR control sequences, using the correct ITU T.416 forms. A few do. More understand these control sequences in certain long-standing faulty formulations. And you'll find that rxvt-unicode does not understand them at all. For one common faulty formulation substitute this ambiguous form: show_colour() { perl -e 'foreach $a(@ARGV){print "\e[48;2;".join(";",unpack("C*",pack("H*",$a)))."m \e[49m "};print "\n"' "$@"} Another alternative: Use my portable setterm , which I mentioned at https://unix.stackexchange.com/a/491883/5132 . It understands hexadecimal RGB notation, and even uses # as the indicator for it. Example usage: $ setterm -7 --background '#FF0088' ; printf ' ' ; \> setterm -7 --background '#61E931' ; printf ' ' ; \> setterm -7 --background '#1256E2' ; printf ' ' ; \> setterm -7 --background default ; printf '\n' This prints the same as the other example on terminals that understand direct colour SGR control sequences. One difference from the preceding alternative is that setterm also works on other terminals. It has fallbacks for terminal types that do not understand direct colour SGR control sequences. On terminal types that only understand indexed colour (i.e. only 256 colours) or on terminals that only understand the 16 AIXTerm colours, it tries to pick the nearest to the desired RGB colour: % TERM=rxvt-256color setterm -7 --background "#FF0088" |hexdump -C00000000 1b 5b 34 38 3b 35 3b 31 39 38 6d |.[48;5;198m|0000000b% TERM=ansi COLORTERM=16color setterm -7 --background "#FF0088" |hexdump -C00000000 1b 5b 31 30 35 6d |.[105m|00000006% TERM=ansi setterm -7 --background "#FF0088" |hexdump -C00000000 1b 5b 34 35 6d |.[45m|00000005% Further reading Jonathan de Boyne Pollard (2018). setterm . nosh Guide . Softwares. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291520/"
]
} |
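A perl-free variant in plain bash, using $((16#...)) to parse the hex pairs; note it emits the widespread semicolon form of the direct-colour sequence, with the same terminal-support caveats discussed above:

```sh
show_colour() {
    local hex
    for hex in "$@"; do
        printf '\e[48;2;%d;%d;%dm  \e[49m ' \
            "$((16#${hex:0:2}))" "$((16#${hex:2:2}))" "$((16#${hex:4:2}))"
    done
    printf '\n'
}

show_colour FF0088 61E931 1256E2
```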
482,881 | Troff supports both macro definitions using .de and branching using .if (see pages 5 and 6 of the Troff user's manual ). In these two respects, it is very much like TeX. However, I don't know of highly complex programs written in Troff (unlike say TikZ for TeX). Is Troff Turing complete? | ESR's The Art of Unix Programming claims it is: We'll examine troff in more detail in Chapter 18; for now, it's sufficient to note that it is a good example of an imperative minilanguage that borders on being a full-fledged interpreter (it has conditionals and recursion but not loops; it is accidentally Turing-complete). ("Accidentally" as opposed to m4 , which is said to be "deliberately Turing-complete".) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89474/"
]
} |
482,893 | I know I can do this in Bash: wc -l <<< "${string_variable}" Basically, everything I found involved <<< Bash operator. But in POSIX shell, <<< is undefined, and I have been unable to find an alternative approach for hours. I am quite sure there is a simple solution to this, but unfortunately, I didn't find it so far. | The simple answer is that wc -l <<< "${string_variable}" is a ksh/bash/zsh shortcut for printf "%s\n" "${string_variable}" | wc -l . There are actually differences in the way <<< and a pipe work: <<< creates a temporary file that is passed as input to the command, whereas | creates a pipe. In bash and pdksh/mksh (but not in ksh93 or zsh), the command on right-hand side of the pipe runs in a subshell. But these differences don't matter in this particular case. Note that in terms of counting lines, this assumes that the variable is not empty and does not end with a newline. Not ending with a newline is the case when the variable is the result of a command substitution, so you'll get the right result in most cases, but you'll get 1 for the empty string. There are two differences between var=$(somecommand); wc -l <<<"$var" and somecommand | wc -l : using a command substitution and a temporary variable strips away blank lines at the end, forgets whether the last line of output ended in a newline or not (it always does if the command outputs a valid nonempty text file), and overcounts by one if the output is empty. If you want to both preserve the result and count lines, you can do it by appending some known text and stripping it off at the end: output=$(somecommand; echo .)line_count=$(($(printf "%s\n" "$output" | wc -l) - 1))printf "The exact output is:\n%s" "${output%.}" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
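Another POSIX option is a here-document, which avoids the pipe entirely. Note that a here-document always supplies a final newline, so — like the command-substitution approach — an empty variable still counts as one line:

```sh
wc -l <<EOF
${string_variable}
EOF
```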
482,930 | I know that when the maximum argument list size is reached, xargs keeps creating new lists until all of the files are included; but does find -exec <command> {} + do the same thing or will it not work if the size of the list exceeds the output of getconf ARG_MAX ? | Yes, find -exec ... {} + runs the given command as many times as necessary to accommodate all the arguments without exceeding the maximum argument list size in each invocation. This is specified by POSIX : If the primary expression is punctuated by a <plus-sign>, the primary shall always evaluate as true, and the pathnames for which the primary is evaluated shall be aggregated into sets. [...] An argument containing only the two characters " {} " shall be replaced by the set of aggregated pathnames, with each pathname passed as a separate argument to the invoked utility in the same order that it was aggregated. The size of any set of two or more pathnames shall be limited such that execution of the utility does not cause the system's {ARG_MAX} limit to be exceeded. (emphasis mine). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/482930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
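You can watch the batching happen by having the invoked utility report its argument count (run it on any directory tree):

```sh
find . -type f -exec sh -c 'echo "invoked with $# pathnames"' sh {} +
# one line per batch; a huge tree prints several lines,
# each batch sized to stay under the ARG_MAX limit
```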
482,973 | I want to set a variable if my condition is true on my Ubuntu system. This proves that my if-statement is correct: $ (if [ 1 == 1 ]; then echo "hi there"; fi);hi there This proves that I can set variables: $ a=1$ echo $a1 This shows that setting a variable in the if-statement DOES NOT work: $ (if [ 1 == 1 ]; then a=2; fi);$ echo $a1 Any ideas why? All my google research indicates that it should work like this... | The (...) part of your command is your problem. The parentheses create a separate subshell. The subshell will inherit the environment from its parent shell, but variables set inside it will not retain their new values once the subshell exits. This also goes for any other changes to the environment inside the subshell, including changing directories, setting shell options etc. Therefore, remove the subshell: if [ 1 = 1 ]; then a=2; fiecho "$a" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/482973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238470/"
]
} |
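A side-by-side sketch: parentheses create a subshell whose assignments vanish, while braces group commands in the current shell:

```sh
a=1
( a=2 ); echo "$a"     # prints 1 -- assignment happened in a subshell
{ a=2; }; echo "$a"    # prints 2 -- braces run in the current shell
```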
483,061 | I want to convert uptime to date DD:MM:YY without the | and I want to put a string like "the computer is on since 23-feb-16" | You may get it for free from the output of last reboot : $ last rebootreboot system boot 4.14.81-i7 Sat Nov 17 23:25 still runningreboot system boot 4.14.80-i7 Fri Nov 16 09:16 - 15:49 (06:33)$ printf "On since: "; last reboot | grep "still running" | cut -c 40-56On since: Sat Nov 17 23:25 $ printf "On since: " ; last reboot --time-format iso | grep "still running" | cut -c 40-49On since: 2018-11-17 Your uptime command might also have the -s option: $ uptime -s2018-11-17 23:25:23 Since this format is acceptable to date -d , you can reformat the time however you wish like this:: $ date -d "$(uptime -s)" "+On since: %d:%m:%y"On since: 17:11:18 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321992/"
]
} |
483,135 | I want to add and echo the sum of several files using shell script. How do I start?I have a list of them like that: $ stat /etc/*.conf | grep Size | cut -f4 -d' '123456789101112 | stat -c "%s" /etc/*.conf|paste -sd+|bc -l | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/315631/"
]
} |
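An awk-only alternative that avoids both paste and bc and copes with any number of files:

```sh
stat -c %s /etc/*.conf | awk '{ sum += $1 } END { print sum }'
```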
483,284 | I have a file with words separated by ',' on lines, every line has the same number of words, say 4. So ther are in the form of: something1,something2,something3,something4 I want to search for the line that contains on the 4th column exactly something4 , but how do i do that if there exists another line that is something like this: something1,something2,something3,1_something4 with grep I will get both these lines, but I only want the line that has on the 4th element exactly something4 what should I do? | You can use awk for this: awk -F, '$4 == "something4"' file.csv This should print the entire line for any line where the 4th column is exactly something4 In order to pass a variable into awk you would need to do the following: var1=$(echo "something,something4" | cut -f2 -d,)awk -F, -vsearch="$var1" '$4 == search' file.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/315631/"
]
} |
483,292 | This question is about the proper way to write regular expression literals in a match expression under bash. In zsh, the match below succeeds, as I expected: % [[ ' 123 ' =~ '^ [0-9]+ $' ]]; echo $?0 Not so in bash: $ [[ ' 123 ' =~ '^ [0-9]+ $' ]]; echo $?1 I know that this match succeeds in bash $ ( regexp='^ [0-9]+ $'; [[ ' 123 ' =~ $regexp ]]; echo $? )0 ...but it requires assigning the regular expression to an intermediate variable. My question is: How does one write an arbitrary regular expression literal in a match expression under bash? | It's best to put it in a variable, as recommended by the Bash reference manual 3.2.4.2 Conditional Constructs : Storing the regular expression in a shell variable is often a useful way to avoid problems with quoting characters that are special to the shell. It is sometimes difficult to specify a regular expression literally without using quotes, or to keep track of the quoting used by regular expressions while paying attention to the shell’s quote removal. Using a shell variable to store the pattern decreases these problems. However in order to write it directly inside of the bash extended test you need to remove the quotes and escape the spaces: $ [[ ' 123 ' =~ ^\ [0-9]+\ $ ]]; echo $?0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483292",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
483,354 | I was reading about RHEL 8, and this statement is made : Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. OK, so to me this would imply that /etc/sysconfig/network-scripts would no longer be used, though it is unclear from my reading what is supposed to replace ifcfg-eth0 (or similar). But then I read this page about static IP addresses , which asserted: The procedure to configure a static IP address on RHEL 8: Create a file named /etc/sysconfig/network-scripts/ifcfg-eth0 as follows: DEVICE=eth0 BOOTPROTO=none ONBOOT=yes PREFIX=24 IPADDR=192.168.2.203 Restart network service on RHEL 8: systemctl restart NetworkManager OR sudo nmcli connection reload So, is it only the ifup and ifdown that are deprecated, but the configuration files remain? Is the distinction between scripts and configuration files , even though they seem lumped under a single chapter? Chapter 12 of the RHEL defixed network scripts as the: Chapter 12. Network Scripts ...configuration files for network interfaces and the scripts to activate and deactivate them are located in the /etc/sysconfig/network-scripts/ directory. So, what constitutes what is deprecated? It doesn't seem to be the scripts in /etc/sysconfig/network-scripts since that apparently is still an appropriate way to configure a static IP. I do not yet have a RHEL 8 box running, so I am hoping someone can shed light on what it is one is supposed to avoid. | From your first link: Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: ~]# yum install network-scripts So, anything contained in the RHEL 8 network-scripts RPM file or relying on functionality of that RPM is now deprecated. In particular, if you previously used scripts like /sbin/ifup-local to set up some advanced routing or other specialized network configuration, now it's time to find out a new way to do that. Note that when NetworkManager was introduced into RHEL, it included - and still does - a configuration back-end that uses the old configuration file locations, but with a new NetworkManager infrastructure and an extended version of the old configuration script syntax. So the /etc/sysconfig/network-scripts/ifcfg-* files will still be there and using the same syntax, although they will now be parsed by NetworkManager and not executed as sourced scripts. The deprecated network-scripts package essentially contains: the SysVinit-style service script /etc/init.d/network the ifup* , ifdown* , init.ipv6-global and network-functions* scripts you've used to seeing in the /etc/sysconfig/network-scripts/ directory classic versions of /usr/sbin/ifup and /usr/sbin/ifdown (which would override the compatibility wrappers for nmcli that are present by default) the /usr/sbin/usernetctl command and the associated documentation and example files So, when you're not using the deprecated network-scripts RPM, you would now expect the /etc/sysconfig/network-scripts/ directory to only contain the ifcfg-* files for your network interfaces, and possibly route-* files for custom routes, but no other files at all. 
If you needed the usernetctl command, it's among the deprecated functionality and you should start using the appropriate nmcli subcommands as its replacement. ifup and ifdown will still be available, but now do their job through NetworkManager , unless you install the deprecated network-scripts RPM. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/483354",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108158/"
]
} |
483,396 | I'm able to get the signal strength of all Wi-Fi networks with the following command: $ nmcli -t -f SIGNAL device wifi list$ 77 67 60 59 55 45 44 39 39 37 I would like to reduce this list only to the current Wi-Fi on which I'm connected. I've been through the man page but can't find the necessary flag. One solution would be to use sed or awk , but I would like to avoid piping. Should I use nmcli device wifi instead of parsing directly for the SIGNAL column? | nmcli --versionnmcli tool, version 1.6.2 To get the SIGNAL of the AP on which you are connected, use: nmcli dev wifi list | awk '/\*/{if (NR!=1) {print $7}}' The second * mark in nmcli dev wifi list is set to identify the SSID on which your are connected. nmcli --versionnmcli tool, version 1.22.10 use: nmcli dev wifi list | awk '/\*/{if (NR!=1) {print $6}}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483396",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322275/"
]
} |
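A variant that keys on the ACTIVE field instead of counting columns, which should be less fragile across nmcli versions (a sketch, assuming the terse colon-separated output):

```sh
nmcli -t -f ACTIVE,SIGNAL dev wifi | awk -F: '$1 == "yes" { print $2 }'
```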
483,441 | I would like to rename a series of files named this way name (10).extname (11).ext... to name_10.extname_11.ext... This one-liner works: $ for i in name\ \(* ; do echo "$i" | sed -e 's! (\([0-9]\{2\}\))!_\1!' ; donename_10.extname_11.ext... But this one doesn't: $ for i in name\ \(* ; do mv "$i" "$(echo "$i" | sed -e 's! (\([0-9]\{2\}\))!_\1!')" ; donebash: !_\1!': event not found Why? How to avoid this? Using $ bash --versionGNU bash, versione 4.3.48(1)-release (x86_64-pc-linux-gnu) on Ubuntu 18.04. While in this similar question a simple case with ! is shown, here a ! just inside single quotes is considered and compared to a ! inside single quotes, inside double quotes. As pointed out in the comments, Bash behaves in a different way in these two cases. This is about Bash version 4.3.48(1) ; this problem seems instead to be no more present in 4.4.12(1) (it is however recommended to avoid this one-liner, because the Bash version may be unknown in some cases). As suggested in the Kusalananda answer, if the sed delimiter ! is replaced with # , $ for i in name\ \(* ; do mv "$i" "$(echo "$i" | sed -e 's# (\([0-9]\{2\}\))#_\1#')" ; done this problem does not arise at all. | When used in an interactive shell, the ! may initiate history expansion (not an issue in scripts). The solution is to use another delimiter than ! in the sed expression. An alternative bash loop: for name in "name ("*").ext"; do if [[ "$name" =~ (.*)" ("([^\)]+)").ext" ]]; then newname="${BASH_REMATCH[1]}_${BASH_REMATCH[2]}.ext" printf 'Would move %s to %s\n' "$name" "$newname" # mv -i "$name" "$newname" fidone | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48707/"
]
} |
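Alternatively, history expansion can simply be switched off for the interactive session, after which the original one-liner with ! delimiters works unchanged:

```sh
set +H    # disable interactive history expansion
for i in name\ \(*; do
    mv "$i" "$(echo "$i" | sed -e 's! (\([0-9]\{2\}\))!_\1!')"
done
set -H    # restore it afterwards if desired
```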
483,528 | Is each a different side of the connection or a deeper layer of logging? I am interested because of, for example, this excerpt from a -vvv output: debug3: send packet: type 30debug1: expecting SSH2_MSG_KEX_ECDH_REPLYConnection reset by nnnn port 22 Looking through the output I can't determine which side is saying what. | The short answer: Yes! The long answer: From man ssh ( http://man7.org/linux/man-pages/man1/ssh.1.html ): -v Verbose mode. Causes ssh to print debugging messages about its progress. This is helpful in debugging connection, authentication, and configuration problems. Multiple -v options increase the verbosity. The maximum is 3. To see what it really does, have a look at the edits on this question ( https://unix.stackexchange.com/posts/483302/revisions "SSH forwarded through modem recently started failing" ), as we asked the OP to go from -v to -vvv (debug levels 2 and 3 for -vv and -vvv respectively). For even more information, have a look at RFC 4252, Section 6. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251630/"
]
} |
483,530 | On RHEL 7.4, Apache/2.4.6 using mod_authnz_ldap.so and AuthLDAPInitialBindAsUser config. The configuration is to do LDAP authentication by binding as the user logging on. When the user is prompted for his username/password in the "Authentication required" popup: If the user and either leaves the two fields blank or only enter his username and leaves the password field blank and hits "OK", he gets a page with " The server encountered an internal error or misconfiguration and was unable to complete your request. " - there's no error in /var/log/httpd/error_log If the user enters an invalid username and/or invalid password, the system reacts normally and presents the popup again. If the user enters the correct username and password, the system reacts normally and is presented the desired web page. How to prevent this error page? kibana.conf: <VirtualHost *:80> ServerName kibana.mydomain.com ProxyPass / http://127.0.0.1:5601/ ProxyPassReverse / http://127.0.0.1:5601/ <Location "/app/kibana"> AuthType basic AuthName "LDAP Login" AuthBasicProvider ldap AuthLDAPURL "ldap://ldap.mydomain/dc=example,dc=com?sAMAccountName?sub?(objectClass=*)" AuthLDAPInitialBindAsUser on AuthLDAPInitialBindPattern (.*) "CN=$1,ou=users,dc=example,dc=com" Require valid-user </Location></VirtualHost> I set LDAPLibraryDebug 7 and got the following error message in /var/log/httpd/error_log res_errno: 1, res_error: <000004DC: LdapErr: DSID-0C09075A, comment: In order to perform this operation a successful bind must be completed on the connection. , data 0, v1db1>, res_matched: <> ldap_free_request (origid 2, msgid 2) ldap_parse_result ldap_err2string [Thu Nov 29 20:36:09.068006 2018] [authnz_ldap:info] [pid 25551] [client 72.23.42.165:56915] AH01695: auth_ldap authenticate: user authentication failed; URI /home/debug.jsp [ ldap_search_ext_s() for user failed ] [Operations error] <---- *this should result in authentication failure not Operation errors which cannot be handled by the http landler and ends up as an internal server error* The message " In order to perform this operation a successful bind must be completed on the connection. " leads me to believe that the module is attempting to perform another operation even though the binding is incomplete. But it should actually react exactly like when the password is invalid or when the username was not provided. 
Error message when the username is passed but the password field is left empty: [Thu Nov 29 20:48:48.818805 2018] [authnz_ldap:info] [pid 25550] [client 72.23.42.165:57045] AH01695: auth_ldap authenticate: user the.user authentication failed; URI /home/debug.jsp [LDAP: ldap_simple_bind() failed][Invalid credentials] [Thu Nov 29 20:48:48.818833 2018] [auth_basic:error] [pid 25550] [client 72.23.42.165:57045] AH01617: user the.user: authentication failure for "/home/debug.jsp": Password Mismatch Error message when no user name is entered but there is a value in the password field [Thu Nov 29 20:49:18.720210 2018] [authnz_ldap:info] [pid 25549] [client 72.23.42.165:57050] AH01695: auth_ldap authenticate: user authentication failed; URI /home/debug.jsp [LDAP: ldap_simple_bind() failed][Invalid credentials] [Thu Nov 29 20:49:18.720222 2018] [auth_basic:error] [pid 25549] [client 72.23.42.165:57050] AH01617: user : authentication failure for "/home/debug.jsp": Password Mismatch Message when a correct username/password [Thu Nov 29 20:53:29.658294 2018] [authnz_ldap:debug] [pid 25551] mod_authnz_ldap.c(593): [client 72.23.42.165:57105] AH01697: auth_ldap authenticate: accepting the.user [Thu Nov 29 20:53:29.658328 2018] [authz_core:debug] [pid 25551] mod_authz_core.c(809): [client 72.23.42.165:57105] AH01626: authorization result of Require valid-user : granted [Thu Nov 29 20:53:29.658338 2018] [authz_core:debug] [pid 25551] mod_authz_core.c(809): [client 72.23.42.165:57105] AH01626: authorization result of : granted [Thu Nov 29 20:53:29.658546 2018] [proxy:debug] [pid 25551] mod_proxy.c(1123): [client 72.23.42.165:57105] AH01143: Running scheme http handler (attempt 0) | With AuthLDAPInitialBindAsUser , submitting a username with an empty password makes mod_authnz_ldap attempt a simple bind with a DN but no password — an unauthenticated bind, which many directory servers (Active Directory in particular) accept at the protocol level. The bind therefore does not fail, but the connection is not really authenticated, so the module's follow-up user search fails with the LDAP Operations error shown in your log, and the module surfaces that as a 500 internal server error instead of a 401. The cleanest fix is to make the empty-password case fail at the bind itself: configure the directory server to reject unauthenticated binds, so that an empty password yields Invalid credentials and Apache re-prompts exactly as it does for any wrong password. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156664/"
]
} |
483,613 | I have the following script : echo 'Please select type file , the name of the input file and the name of the output file: a.plain ( please press a ) <input_file> <output_file>; b.complex ( please press b ) <input_file> <output_file>;'read one two threeecho $#if [ $# -ne 3 ]; then echo "Insufficient arguments !" exit 1;else echo "Number of passed parameters is ok"fi $# always outputs 0 , the read command provides the correct variables when I am using $one , $two and $three later in the script Thank you. | To test whether you got values for all variables and exit otherwise, use the -z test to test for empty strings: if [ -z "$one" ] || [ -z "$two" ] || [ -z "$three" ]; then echo 'Did not get three values' >&2 exit 1fi The $# value is the number of positional parameters , usually command line arguments (or values set by the set builtin). These are available in $1 , $2 , $3 , etc. (or collectively in the array "$@" ) and are unrelated to the values read by the read builtin. To make your script take the input as command line arguments instead of reading them interactively (which may be preferred if the user is to supply one or several paths as they may take advantage of tab-completion, and it also makes it easier to use the script from within another script and does not require that there is a terminal connected), use if [ "$#" -ne 3 ]; then echo 'Did not get three command line arguments' >&2 exit 1fione=$1two=$2three=$3 The script would in this case be run as ./script.sh a.plain /path/to/inputfile /path/to/outputfile If the processing of the input can happen from standard input and if the output can be sent to standard output (i.e. if you don't actually need the explicit paths of the input file and output file inside the script), then the script only has to take the first parameter ( a.plain or b.complex ). ./script.sh a.plain </path/to/inputfile >/path/to/outputfile The script would then use the standard input and standard output streams for input and output (and consequently only has to check for a single command line argument). This would make it possible to run the script with data piped in from another program, and it would also allow for further post-processing: gzip -dc <file-in.gz | ./script.sh a.plain | sort | gzip -c >file-out.gz | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322279/"
]
} |
483,692 | This question is similar to What is secure to remove from GNOME Desktop => GNOME (but not a duplicate). I want to remove the GNOME desktop environment. Running the proposed command in this question does not work on Fedora 29. $ sudo dnf group remove gnome-desktop-environmentWarning: Group 'gnome-desktop-environment' is not installed.Error: No groups marked for removal. I noticed there is a group called gnome-desktop instead. However, sudo dnf group remove gnome-desktop lists hundreds (all?) of installed packages for removal. It includes packages that are not at all related to GNOME (such as lib* , plasma-* , texlive-* , ...). I fear that running this command will force me to do a full re-install of the system. Is there a safe way to remove GNOME's desktop environment from Fedora 29 that leaves me with a functioning KDE install? I just want to reclaim some disk space (not all of it..) | You can actually remove and install packages in the same operation, with dnf swap . And since the option takes groups as well as just single package names, you can switch one for another very simply: dnf swap @gnome-desktop @kde-desktop | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/483692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56506/"
]
} |
483,699 | I have an industrial sensor that via an Interface module provides raw values via telnet . In order to connect to it: telnet 169.254.168.150 10001 I can only see garbage values from it. Information The information that I have is from the datasheet of the Interface module. The sensor is an analog one, hence Status flag is not relevant. I have some proprietary software from the company which visualizes the information of the sensor. This software also uses the above mentioned telnet protocol to obtain information. I cross checked it via WireShark. Packet size is 22 Bytes and in little-endian. I cannot programmatically obtain the information as per my SE Query and I am looking to obtain the value maybe store the incoming values directly somewhere (in DB or file). Is there any way to extract this information via the Command Line? | The stream is raw binary, not text, which is why telnet renders it as garbage. You can capture it from the command line with nc (or socat ) and decode the fixed 22-byte little-endian packets yourself: nc 169.254.168.150 10001 | xxd lets you inspect the bytes and locate the value field from the datasheet, and redirecting instead of piping ( nc 169.254.168.150 10001 > sensor.raw ) logs the raw stream to a file for later decoding. A hedged decoding sketch follows this record. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/483699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178625/"
]
} |
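The decoding sketch referenced in the answer above. The 22-byte framing comes from the question, but the field offsets below are placeholders — they must be taken from the datasheet:

```sh
nc 169.254.168.150 10001 | while :; do
    set -- $(dd bs=1 count=22 2>/dev/null | od -An -v -tu1)
    [ "$#" -eq 22 ] || break           # stream ended or short read
    value=$(( $5 | ($6 << 8) ))        # placeholder: 16-bit LE field at bytes 5-6
    printf '%s,%s\n' "$(date +%s)" "$value" >> /tmp/sensor.csv
done
```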
483,701 | In bash, when I use Ctrl-R to retrieve a previous command, why does it not work when the command starts with a whitespace? Can I make it match such a previous command? $ dateFri Nov 23 ... 2018(failed reverse-i-search)` date': cd database/ | Check the value of your HISTCONTROL environment variable. If the value contains ignorespace or ignoreboth , any command starting with a space will not be added to command history. From man bash : HISTCONTROL: A colon-separated list of values controlling how commands are saved on the history list. If the list of values includes ignorespace, lines which begin with a space character are not saved in the history list. A value of ignoredups causes lines matching the previous history entry to not be saved. A value of ignoreboth is shorthand for ignorespace and ignoredups. A value of erasedups causes all previous lines matching the current line to be removed from the history list before that line is saved. Any value not in the above list is ignored. If HISTCONTROL is unset, or does not include a valid value, all lines read by the shell parser are saved on the history list, subject to the value of HISTIGNORE. The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the value of HISTCONTROL. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/483701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
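To record space-prefixed commands while still dropping consecutive duplicates, set the variable without ignorespace, e.g. in ~/.bashrc:

```sh
export HISTCONTROL=ignoredups   # keep space-prefixed commands in history
```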
483,704 | I have found that when running into an out-of-memory OOM situation, my linux box UI freezes completely for a very long time. I have setup the magic-sysrq-key then using echo 1 | tee /proc/sys/kernel/sysrq and encountering a OOM->UI-unresponsive situation was able to press Alt-Sysrq-f which as dmesg log showed causes the OOM to terminate/kill a process and by this resolve the OOM situation. My question is now. Why does linux become unresponsive as much as the GUI froze, however did seem not to trigger the same OOM-Killer, which I did trigger manually via Alt-Sysrq-f key combination? Considering that in the OOM "frozen" situation the system is so unresponsive as to not even allow a timely (< 10sec) response to hitting the Ctrl-Alt-F3 (switch to tty3), I would have to assume the kernel must be aware its unresponsiveness, but still did not by itself invoke the Alt-Sysrq-f OOM-Killer , why? These are some settings that might have an impact on the described behaviour. $> mount | grep memorycgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)$> cat /sys/fs/cgroup/memory/memory.oom_control oom_kill_disable 0under_oom 0oom_kill 0 which while as I understand states that the memory cgroup does not have OOM neithe activated nor disabled (evidently there must be a good reason to have the OOM_kill active and disabled, or maybe I cannot interpret correctly the output, also the under_oom 0 is somewhat unclear, still) | The reason the OOM-killer is not automatically called is, because the system, albeit completely slowed down and unresponsive already when close to out-of-memory y, has not actually reached the out-of-memory situation. Oversimplified the almost full ram contains 3 type of data: kernel data, that is essential pages of essential process data (e.g. any data the process created in ram only) pages of non-essential process data (e.g. data such as the code of executables, for which there is a copy on disk/ in the filesystem, and which while being currently mapped to memory could be "reread" from disk upon usage) In a memory starved situation the linux kernel as far as I can tell it is kswapd0 kernel thread, to prevent data loss and functionality loss, cannot throw away 1. and 2. , but is at liberty to at least temporarily remove those mapped-into-memory-files data from ram that is form processes that are not currently running. While this is behaviour which involves disk-thrashing , to constantly throw away data and reread it from disk, can be seen as helpful as it avoids, or at least postpones the necessariry removing/killing of a process and the freeing-but-also-loosing of its memory, it has a high price: performance. [load pages from disk to ram with code of executable of process 1][ run process 1 ] [evict pages with binary of process 1 from ram][load pages from disk to ram with code of executable of process 2][ run process 2 ] [evict pages with binary of process 2 from ram][load pages from disk to ram with code of executable of process 3][ run process 3 ] [evict pages with binary of process 3 from ram]....[load pages from disk to ram with code of executable of process 1][ run process 1 ] [evict pages with binary of process 1 from ram] is clearly IO expensive and the system is likely to become unresponsive, event though technically it has not yet run out completely of memory. From a user persepective however it seems, to be hung/frozen and the resulting unresponsive UI might not be really preferable, over simply killing the process (e.g. 
of a browser tab, whose memory usage might very well have been the root cause/culprit to begin with.) This is where, as the question indicated, the Magic SysRq key trigger to start the OOM-killer manually seems great, as the Magic SysRq is less impacted by the unresponsiveness of the system. While there might be use-cases where it is important to preserve the processes at all ( performance ) costs, on a desktop users would likely prefer the OOM-killer over the frozen UI. There is a patch that claims to exempt clean mapped fs-backed files from this eviction in such situations, in this answer on stackoverflow . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24394/"
]
} |
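To keep the manual OOM trigger available across reboots, the sysrq setting can be made persistent (the usual sysctl locations):

```sh
echo 1 | sudo tee /proc/sys/kernel/sysrq                         # enable now
echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/90-sysrq.conf   # enable at boot
```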
483,717 | When I perform the following Netcat command and view the packets with Wireshark , it says the UDP packet is malformed. $ echo "this is a test" | nc -u 127.0.0.1 53 Similarly, using commands like $ echo "this is a test" > /dev/udp/127.0.0.1/53 produce "malformed packet" errors in Wireshark. The echo command gets sent/delivered to the Netcat server without errors. But this got me wondering: is it possible to manually construct a proper UDP packet with echo or some other native Unix tool(s)? I'm using Debian and macOS. | Your packet is completely valid, from the viewpoint of IP and UDP. If you expand the protocol details for Ethernet/IP/UDP in the lower pane of Wireshark, you will see that the packet is successfully parsed. However, as it is destined for port 53, Wireshark attempts to parse it as a DNS packet, which it cannot do (since the string "this is a test" is not a valid DNS request per the RFC 1035 spec). If you follow the specification at that link, you will be able to construct a packet that is valid when parsed as a DNS request. If you send the packet to another port, you'll notice that Wireshark will no longer parse it as a DNS request and will hence not show that warning. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/483717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322500/"
]
} |
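To see the difference, here is a sketch of a minimal packet that is valid per the RFC 1035 spec — a recursion-desired A query for example.com with an arbitrary transaction ID 0x1234 — built with printf escapes and sent with the same netcat invocation:

```sh
# header: ID=0x1234, flags=0x0100 (RD), QDCOUNT=1, AN/NS/ARCOUNT=0
# question: 7"example" 3"com" 0, QTYPE=A (1), QCLASS=IN (1)
printf '\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\x03com\x00\x00\x01\x00\x01' |
    nc -u 127.0.0.1 53
```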
483,769 | I want to merge unallocated space to an ext4 partition. But gparted seems to prevent this. I cannot extend the partition. see screenshot: merge unallocated with sda5. | That's because you have unallocated space outside of the extended partition ( sda3 ) which contains the partition you want to extend ( sda5 ) so I would: Take a full system backup using CloneZilla Live ¹ Boot a live environment of gparted Move sda3 to the left (this will move both sda5 and sda6 as well) Extend sda3 with the now free space at the end Move sda6 to the end of sda3 Extend sda5 still leaving 10% of the disk space unallocated between sda5 and sda6 ² Note 1: Yes, take a backup! If you have a power failure in any of the following steps, your entire disk is toast! Note 2: That leaves you some wriggling room to extend your swap or your data (or both) in an emergency. Note 3: Yes, you can also extend sda3 to the left and then extend sda5 to the left but that will not give you the 10% spare (unallocated) space for emergency extension in the future and the entire process is going to take all night anyway. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282380/"
]
} |
483,826 | My ~/.bashrc : https://pastebin.com/VA7RLA2E My ~/.Xresources : https://pastebin.com/qSF1z0w4 How do I make XTerm automatically source .bashrc when it starts?Currently, whenever I open a new XTerm window, it doesn't source ~/.bashrc . My OS is Ubuntu 18.04. | In your ~/.Xresources file, you have the line xterm*loginShell: true This would make XTerm start the shell session as a login shell. When bash runs as a login shell, it reads your ~/.bash_profile file, but it does not read ~/.bashrc (this file is read by non-login interactive sessions) unless ~/.bash_profile reads it with source explicitly. You have two options: Remove the line from ~/.Xresources that specifies that the shell should be a login shell. You will likely have to exit your graphical login session for this file to be re-read and for the changes to take effect. Make your ~/.bash_profile file source your ~/.bashrc file, while making sure that your ~/.bashrc file is not sourcing the ~/.bash_profile file at the same time (which would create an infinite loop). An example of how you may do this (this would be added to the ~/.bash_profile file): if [ -o interactive ] && [ -f ~/.bashrc ]; then source ~/.bashrcfi You may need to do something similar for /etc/profile vs /etc/bash.bashrc , or wherever the system's bashrc is on your system if not already done by your system. However, as /etc/profile is read by all Bourne-like shells, not just bash , it needs to be adapted a little: if [ -n "$BASH" ] && [ "$BASH" != /bin/sh ] && [ -o interactive ] && [ ! -o posix ] && [ -f /etc/bash.bashrc ]then source /etc/bash.bashrcfi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322572/"
]
} |
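A quick way to confirm, from inside a running xterm, which kind of shell the loginShell resource produced: bash records this in the read-only login_shell shell option, so
    shopt -q login_shell && echo 'login shell (reads ~/.bash_profile)' \
                         || echo 'non-login shell (reads ~/.bashrc)'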
483,861 | $ printf "%s" a b
ab
$ printf "%s%s" a b
ab
I have some trouble understanding the format specifier for printf . If I am correct, it is mostly the same as those for strings in the C programming language. Why does the format specifier %s concatenate the two following strings together? Why does %s not mean that there is only one string to substitute, ignoring the remaining string? Why are the results for two strings under %s and under %s%s the same? | That's how printf is specified to behave : The format operand shall be reused as often as necessary to satisfy the argument operands. Any extra b, c, or s conversion specifiers shall be evaluated as if a null string argument were supplied; other extra conversion specifications shall be evaluated as if a zero argument were supplied. If the format operand contains no conversion specifications and argument operands are present, the results are unspecified. In your case, the %s format is repeated as many times as necessary to handle all the arguments. printf "%s" a b and printf "%s%s" a b produce the same result because in the first case, %s is repeated twice, which is equivalent to %s%s . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/483861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
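The format-reuse rule quoted in the answer above also covers mixed specifiers and an odd number of arguments: the missing operands are treated as null strings (or zeros for numeric conversions). A short demonstration:
    $ printf '%s=%s\n' a b c
    a=b
    c=
    $ printf '%d\n' 1 2 3
    1
    2
    3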
483,871 | I know I can find files using find : find . -type f -name 'sunrise' . Example result:
./sunrise
./events/sunrise
./astronomy/sunrise
./schedule/sunrise
I also know that I can determine the file type of a file: file sunrise . Example result: sunrise: PEM RSA private key But how can I find files by file type? For example, my-find . -type f -name 'sunrise' -filetype=bash-script :
./astronomy/sunrise
./schedule/sunrise | "File types" on a Unix system are things like regular files, directories, named pipes, character special files, symbolic links etc. These are the type of files that find can filter on with its -type option. The find utility can not by itself distinguish between a "shell script", "JPEG image file" or any other type of regular file . These types of data may however be distinguished by the file utility, which looks at particular signatures within the files themselves to determine the type of the file contents. A common way to label the different types of data files is by their MIME type , and file is able to determine the MIME type of a file. Using file with find to detect the MIME type of regular files, and use that to only find shell scripts:
find . -type f -exec sh -c '
    case $( file -bi "$1" ) in
        (*/x-shellscript*) exit 0;
    esac
    exit 1' sh {} \; -print
or, using bash ,
find . -type f -exec bash -c '
    [[ "$( file -bi "$1" )" == */x-shellscript* ]]' bash {} \; -print
Add -name sunrise before the -exec if you wish to only detect scripts with that name. The find command above will find all regular files in or below the current directory, and for each such file call a short in-line shell script. This script runs file -bi on the found file and exits with a zero exit status if the output of that command contains the string /x-shellscript . If the output does not contain that string, it exits with a non-zero exit status, which causes find to continue immediately with the next file. If the file was found to be a shell script, the find command will proceed to output the file's pathname (the -print at the end, which could also be replaced by some other action). The file -bi command will output the MIME type of the file. For a shell script on Linux (and most other systems), this would be something like text/x-shellscript; charset=us-ascii while on systems with a slightly older variant of the file utility, it may be application/x-shellscript The common bit is the /x-shellscript substring. Note that on macOS, you would have to use file -bI instead of file -bi because of reasons (the -i option does something quite different). The output on macOS is otherwise similar to that of a Linux system. Should you want to perform some custom action on each found shell script, you could do that with another -exec in place of the -print in the find commands above, but it would also be possible to do
find . -type f -exec sh -c '
    for pathname do
        case $( file -bi "$pathname" ) in
            */x-shellscript*) ;;
            *) continue
        esac

        # some code here that acts on "$pathname"
    done' sh {} +
or, with bash ,
find . -type f -exec bash -c '
    for pathname do
        [[ "$( file -bi "$pathname" )" != */x-shellscript* ]] && continue

        # some code here that acts on "$pathname"
    done' bash {} +
Related: Understanding the -exec option of `find` | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/483871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217242/"
]
} |
483,879 | I have a Python program which listens to and detects environmental sounds. It is not my project; I found it on the web (SoPaRe). With the ./sopare.py -l command, it starts recording sounds, but in an infinite loop. When I want to stop it, I have to press Ctrl+C . My purpose is to stop this program automatically after 10 seconds, but when I talked with the author he said that the program does not have a time limiter. I tried to kill it via kill PID , but the PID changes every time the program runs. How can I stop it after a time interval via bash ? Alternatively, I can execute this command from python with the os.system() command. | The simplest solution would be to use timeout from the collection of GNU coreutils (probably installed by default on most Linux systems):
timeout 10 ./sopare.py -l
See the manual for this utility for further options ( man timeout ). On non-GNU systems, this utility may be installed as gtimeout if GNU coreutils is installed at all. Another alternative, if GNU coreutils is not available, is to start the process in the background and wait for 10 seconds before sending it a termination signal:
./sopare.py -l &
sleep 10
kill "$!"
$! will be the process ID of the most recently started background process, in this case of your Python script. In case the waiting time is used for other things:
./sopare.py -l & pid=$!
# whatever code here, as long as it doesn't change the pid variable
kill "$pid" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/483879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322153/"
]
} |
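One scripting detail worth adding to the answer above: when GNU timeout actually has to kill the command, it exits with status 124, so a script can tell a timeout apart from a normal exit. For example:
    timeout 10 ./sopare.py -l
    if [ "$?" -eq 124 ]; then
        echo "sopare was stopped by the 10-second limit"
    fi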
483,881 | If it depends on the exact type of block device, then what is the default I/O scheduler for each type of device? Background information: Fedora 29 includes a Linux kernel from the 4.19 series. (Technically, the initial release used a 4.18 series kernel, but a 4.19 kernel is installed by the normal software updates.) Starting in version 4.19, the mainline kernel has CONFIG_SCSI_MQ_DEFAULT as default y . I.e. that's what you get if you take the tree published by Linus, without applying any Fedora-specific patches. By default, SCSI and SATA devices will use the new multi-queue block layer. (Linux treats SATA devices as being SCSI, using a translation based on the SAT standard ). This is a transitional step towards removing the old code. All the old code will now be removed in version 5.0 (originally slated as 4.21), the next kernel release after 4.20. In the new MQ system, block devices use a new set of I/O schedulers. These include none , mq-deadline , and bfq . In the mainline 4.19 kernel, the default scheduler is set as follows:
/* For blk-mq devices, we default to using mq-deadline, if available, for single queue devices. If deadline isn't available OR we have multiple queues, default to "none". */
A suggestion has been made to use BFQ as the default in place of mq-deadline . This suggestion was not accepted for 4.19. For the legacy SQ block layer, the default scheduler is CFQ, which is most similar to BFQ. => The kernel's default I/O scheduler can vary, depending on the type of device: SCSI/SATA, MMC/eMMC, etc. CFQ attempts to support some level of "fairness" and I/O priorities ( ionice ). It has various complexities. BFQ is even more complex; it supports ionice but also has heuristics to classify and prioritize some I/O automatically. deadline -style scheduling is simpler; it does not support ionice at all. => Users with the Linux default kernel configuration, SATA devices, and no additional userspace policy (e.g. no udev rules), will be subject to a change in behaviour in 4.19. Where ionice used to work, it will no longer have any effect. However, Fedora includes specific kernel patches / configuration. Fedora also includes userspace policies such as default udev rules. What does Fedora Workstation 29 use as the default I/O scheduler? If it depends on the exact type of block device, then what is the default I/O scheduler for each type of device? | The simplest solution would be to use timeout from the collection of GNU coreutils (probably installed by default on most Linux systems):
timeout 10 ./sopare.py -l
See the manual for this utility for further options ( man timeout ). On non-GNU systems, this utility may be installed as gtimeout if GNU coreutils is installed at all. Another alternative, if GNU coreutils is not available, is to start the process in the background and wait for 10 seconds before sending it a termination signal:
./sopare.py -l &
sleep 10
kill "$!"
$! will be the process ID of the most recently started background process, in this case of your Python script. In case the waiting time is used for other things:
./sopare.py -l & pid=$!
# whatever code here, as long as it doesn't change the pid variable
kill "$pid" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/483881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
483,882 | I have to make a bash script that will change files' names from lower to upper OR from upper to lower case via parameters on the command line. So when I put in the command line: ./bashScript lower upper then all files in the directory should change from lower to upper case. I also have to add a 3rd parameter that will let me change only one specific file. So for example I have to be able to put in the command line: ./bashScript lower upper fileName | The simplest solution would be to use timeout from the collection of GNU coreutils (probably installed by default on most Linux systems):
timeout 10 ./sopare.py -l
See the manual for this utility for further options ( man timeout ). On non-GNU systems, this utility may be installed as gtimeout if GNU coreutils is installed at all. Another alternative, if GNU coreutils is not available, is to start the process in the background and wait for 10 seconds before sending it a termination signal:
./sopare.py -l &
sleep 10
kill "$!"
$! will be the process ID of the most recently started background process, in this case of your Python script. In case the waiting time is used for other things:
./sopare.py -l & pid=$!
# whatever code here, as long as it doesn't change the pid variable
kill "$pid" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/483882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322613/"
]
} |
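Purely as an illustration grounded in the question above, a minimal sketch of the requested renaming script, using tr and assuming the first two arguments are lower and upper in either order (hypothetical and untested; note it would overwrite an existing file whose name matches the converted name):
    #!/bin/bash
    # usage: ./bashScript lower upper [fileName]  (or: upper lower)
    from=$1 to=$2
    if [ -n "$3" ]; then
        set -- "$3"        # only the named file
    else
        set -- *           # every file in the current directory
    fi
    for f do
        [ -f "$f" ] || continue
        new=$(printf '%s' "$f" | tr "[:$from:]" "[:$to:]")
        [ "$new" != "$f" ] && mv -- "$f" "$new"
    done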
483,894 | On tty2, how do I take a text screenshot of the command line? | Did you consider the screendump command? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270469/"
]
} |
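Assuming the screendump utility that ships with the kbd/console-tools package (invocation details vary between versions, so check man screendump), capturing the text contents of virtual console 2 would look roughly like:
    sudo screendump 2 > tty2-capture.txt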
483,898 | Edit 1: The freezing just happened and I was able to recover from it. Log (syslog) from the freezing until 'now': https://ufile.io/ivred Edit 2: It seems to be a bug/problem with GDM3. I'll try Xubuntu. Edit 3: Now I'm using Xubuntu. The problem still happens, but a lot less often. So.. it is indeed a memory issue. I'm currently using an Ubuntu 18.10 Live CD since my HD died. I did some customizations to my LiveCD, mainly towards memory consumption, because I have only 4GB of RAM. When my free memory goes below 100MB, my pendrive LED starts to blink like crazy and the system freezes, leaving me just enough time to get out of the GUI ( Ctrl + Alt + f1 ... f12 ) and reboot ( Ctrl + Alt + Del ) or, sometimes, to close Google Chrome with sudo killall chrome . So I created a very simple script to clean the system cache and close Google Chrome. Closing Chrome out of the blue like that is fine, since it asks you to recover the tabs when it wasn't closed properly. The question: It works like a charm 95% of the time. I don't know if my script is too simple or there is another reason for this intermittent freezing, since I can't check the log because of the need to reboot. Is there a more efficient way to do this? Am I doing it wrong? Note: I have another script to clean the cache that runs every 15 minutes. Since I created those scripts I am able to use my LiveCD every day with almost no freezing. Maybe 1 per day.. Before that I had to reboot every 30-40min, because I use Chrome with several tabs. My script:
#!/bin/bash
while true ; do
  free=`free -m | grep Mem | awk '{print $4}'`
  if [ "$free" -gt 0 ]
  then
    if [ $free -le 120 ]; # When my free memory goes below 120MB, do the commands below.
    then
      if pgrep -x "chrome" > /dev/null
      then
        sudo killall -9 chrome
        sudo su xubuntu /usr/bin/google-chrome-stable --password-store=basic --aggressive-cache-discard --aggressive-tab-discard
      else
        echo "Stopped"
      fi
      sudo sysctl -w vm.drop_caches=3
      sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    fi
  fi &
  sleep 1;
done | Did you consider the screendump command? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322297/"
]
} |
483,929 | (On a Raspberry Pi Zero W, kernel 4.14y) It seems the wireless adapter chip isn't a device in the /dev fs, but is the name of something that 'ifconfig' knows about. I understand that this is an artifact from Berkeley Sockets. It is hardware, so I assume it must be mentioned in the device tree -- to cause some driver to be loaded, but it must not create an entry in /dev (devfs). Where/how does the Sockets API find this device that is not a device? | In Linux, network interfaces don't have a device node in /dev at all. If you need the list of usable network interfaces e.g. in a script, look into the /sys/class/net/ directory; you'll see one symbolic link per interface. Each network interface that has a driver loaded will be listed. Programmatically, you can use the if_nameindex() syscall: see this answer on Stack Overflow. Also, note that /dev is the device filesystem . The device-tree has a specific different meaning: it is a machine-readable description of a system's hardware composition. It is used on systems that don't have Plug-and-Play capable hardware buses, or otherwise have hardware that cannot be automatically discovered. As an example, Linux on ARM SoCs like the Raspberry Pi uses a device tree. The boot sequence of a RasPi is quite interesting: see this question on RasPi.SE. In short, at boot time, under control of the /boot/start.elf file, the GPU loads the appropriate /boot/*.dtb and /boot/overlay/*.dtbo files before the main ARM CPU is started. The *.dtb file is the compiled device tree in binary format. It describes the hardware that can be found on each specific RasPi model, and is produced from a device tree source file ( .dts ), which is just text, formatted in a specific way. The kernel's live image of the device-tree can be seen in /sys/firmware/devicetree/base . Per Ciro Santilli , it can be displayed in .dts format by:
sudo apt-get install device-tree-compiler
dtc -I fs -O dts /sys/firmware/devicetree/base
You can find the specification of the device tree file format here. The specification is intended to be OS-independent. You may also need the Device Tree Reference as clarification of some details. So, the answer to your original question goes like this:
- the Berkeley Sockets API gets the network interface from the kernel
- the kernel gets the essential hardware information from the device tree file
- the device tree file is loaded by the GPU with /boot/start.elf according to configuration in /boot/config.txt
- the device tree file was originally created according to the hardware specifications of each RasPi model and compiled to the appropriate binary format.
The device tree scanning code is mostly concerned with finding a valid driver for each piece of hardware. It won't much care about each device's purpose : that's the driver's job. The driver uses the appropriate *_register_driver() kernel function to document its own existence, takes the appropriate part of the device tree information to find the actual hardware, and then uses other functions to register that hardware as being under its control. Once the driver has initialized the hardware, it uses the kernel's register_netdev() function to register itself as a new network interface , which, among other things, will make the Sockets API (which is just another interface of the kernel, not an independent entity as such) aware that the network interface exists.
The driver is likely to register itself for other things too: it will list a number of ethtool operations it supports for link status monitoring, traffic statistics and other low-level functions, and a driver for a wireless NIC will also use register_wiphy() to declare itself as a wireless network interface with specific Wi-Fi capabilities. The Linux TCP/IP stack has many interfaces: the Berkeley Sockets API is the side of it that will be the most familiar to application programmers. The netdev API is essentially the other, driver-facing side of the same coin. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260674/"
]
} |
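Building on the /sys/class/net/ point in the answer above, a small shell loop (a sketch, not part of the original answer) is enough to enumerate the interfaces and their operational state without parsing ifconfig output:
    for iface in /sys/class/net/*; do
        name=${iface##*/}
        state=$(cat "$iface/operstate" 2>/dev/null)
        printf '%-10s %s\n' "$name" "${state:-unknown}"
    done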
483,934 | Whenever I go to exit i3, a bar shows up on the top giving me the ability to click Yes , to exit, or X to cancel. | Add this to your config:
mode "exit: [l]ogout, [r]eboot, [s]hutdown" {
    bindsym l exec i3-msg exit
    bindsym r exec systemctl reboot
    bindsym s exec systemctl poweroff
    bindsym Escape mode "default"
    bindsym Return mode "default"
}
bindsym $mod+x mode "exit: [l]ogout, [r]eboot, [s]hutdown"
Now use mod + x and then choose l , r , or s . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
483,939 | After reading Stephen Kitt's reply , xargs waits for receiving the stdin input before processing any of the input, such as splitting it into arguments. How does xargs know when a stdin input ends, so that it can start processing it? Is -E used for specifying the end of a stdin input? Without it, how does xargs knows when it ends? Is there some timeout? | To read its input, xargs uses the read function (see also the corresponding Linux manpage ). read reads the next available data from a file descriptor into an area of memory specified by the calling program. As used by xargs , read waits until either data is available, or an error occurs, or it determines that no more data will ever be available. It returns respectively a positive integer, -1, or 0 in each of these cases. To determine when it’s finished reading its input (its standard input, or the input specified by the -a option), xargs looks for the specified end-of-file marker (if the -E option was used), or a 0 return value from read . You can see this in action by running printf '%s ' {1..1024} | strace -e read xargs -s 2048 -x | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
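To see the -E logical end-of-file marker mentioned in the answer above in action: once xargs reads an argument equal to the marker string, everything after it is ignored.
    $ printf '%s\n' one two STOP three | xargs -E STOP echo
    one two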
483,993 | This is the result I'm looking for: # mount /dev/node /mnt/intermediate# mount --bind /mnt/intermediate/sub/dir /final/mountpoint I'd like to do it in a single command, though, without use of the intermediate mount point, /mnt/intermediate . Is this possible? | No, in general you can't mount a sub directory of a file system, unless that file system specifically supports it. Support for sub directory mounting is sometimes found in network file systems, like NFS or SMB, where you can mount a sub directory of an exported file system. BTRFS has the option subvol , but that is file system specific. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/483993",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160312/"
]
} |
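Hedged examples of the two exceptions named in the answer above, a network filesystem export and a btrfs subvolume (server name and subvolume path are placeholders):
    # NFS: mount only a subdirectory of the exported file system
    mount -t nfs server:/export/sub/dir /final/mountpoint
    # btrfs: mount a named subvolume of the file system in one step
    mount -o subvol=sub/dir /dev/node /final/mountpoint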
484,025 | I have a CSV file that looks like this:
text1,text2,string1,string2
text3,text3,string3,string2
text4,text5,string1,string2
text6,text6,string6,string7
I want to extract the rows where column1 and column2 are not equal. The expected result for the above example would be:
text1,text2,string1,string2
text4,text5,string1,string2
I am familiar with commands that allow me to extract a specific column, like the following to extract the first column: cat input.csv | cut -d ',' -f1 > output.csv | Assuming that this is a simple CSV file, without any fancy embedding of commas or newlines within the fields of the actual data, you may use awk to do this:
awk -F ',' '$1 != $2' <input.csv
This is a shorthand way of writing
awk 'BEGIN { FS = "," } $1 != $2 { print }' <input.csv
and it sets the input field separator to a comma and prints each line if the first and second fields ( $1 and $2 ) are not identical. An equivalent Perl variant:
perl -F ',' -na -e 'print if $F[0] ne $F[1]' <input.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299440/"
]
} |
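Run against the sample data from the question, the awk one-liner produces exactly the expected rows:
    $ awk -F ',' '$1 != $2' input.csv
    text1,text2,string1,string2
    text4,text5,string1,string2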