source_id int64 1–74.7M | question stringlengths 0–40.2k | response stringlengths 0–111k | metadata dict |
---|---|---|---|
360,896 | I have a file like this: sample chr start end ref alt gene effectAADA-01 chr1 12336579 12336579 C T VPS13D SilentAADA-02 chr1 20009838 20009838 - CCA TMCO4 MissenseAADA-03 chr1 76397825 76397825 GTCA T ASB17 MissenseAADA-03 chr1 94548954 94548954 C A ABCA4 MissenseAADA-04 chr1 176762782 176762782 TCG C PAPPA2 MissenseAADA-04 chr1 183942764 183942764 - T COLGAL MissenseAADA-05 chr1 186076063 186076063 A TGC HMCN1 SilentAADA-05 chr1 186076063 186076063 A T HM1 Silent I need all the lines where the 5th and 6th columns contain only one character. And the result should look like: sample chr start end ref alt gene effectAADA-01 chr1 12336579 12336579 C T VPS13D SilentAADA-03 chr1 94548954 94548954 C A ABCA4 MissenseAADA-05 chr1 186076063 186076063 A T HM1 Silent I tried using this: awk -F'\t' '$5' filename | awk -F'\t' '$6' filename | wc -l I know this is wrong, but can anyone correct my mistake please? | awk 'NR==1{print; next} $5 ~ /^[A-Z]$/ && $6 ~ /^[A-Z]$/' input.txt Explanation NR==1{print; next} This prints the first line (the header) unconditionally and goes to the next line. $5 ~ /^[A-Z]$/ && $6 ~ /^[A-Z]$/ This is a conditional expression: if the 5th AND the 6th fields both match a single upper-case letter, then the line is printed (the print action is implicit here, being the default action for any condition). $5 and $6 stand for the 5th and 6th columns of each line. && is the logical operator AND. ~ is the regexp matching operator. It returns true if the argument on the left-hand side matches the regexp on the right-hand side. /^[A-Z]$/ is a regular expression (regexp). The character "/" is a delimiter for the regexp, "^" indicates the beginning of a line (or the string), "$" the end, and "[A-Z]" means all upper-case letters from A to Z. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163629/"
]
} |
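A note on the pattern choice in the answer above: a simple length test such as length($5)==1 && length($6)==1 would also accept the - placeholder used for insertions/deletions (the AADA-04 - T line), which the expected output excludes, so the /^[A-Z]$/ regex is the safer match here. If lowercase bases could also appear, a hedged variant would be:

```bash
# Sketch, assuming tab-separated input as in the question
awk -F'\t' 'NR==1 || ($5 ~ /^[A-Za-z]$/ && $6 ~ /^[A-Za-z]$/)' input.txt
```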
360,918 | I'm trying to figure this out. awk '{print $1","$10","$11","$12","$13,$14,$15,$16,$17,$18,$19}' <<< "$PASTE_1" > test.csv I need to print $1 $10 $11 $12 separated by commas, then continue with $13 until the end of the line without comma separation, since there are many blank spaces from $13 onward. | Do you mean something like this: awk '{a = ""; for (i = 13 ; i <= NF ; i++) a = a $i; print $1 "," $10 "," $11 "," $12 "," a}' The input a b c d e f g h i j k l m n o p q r s t u v w x y z gives: a,j,k,l,mnopqrstuvwxyz That is, the fields starting from 13 are concatenated together, and then printed after 1, 10, 11 and 12. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224703/"
]
} |
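If the trailing fields should instead stay separated by single spaces rather than being run together, a small variation of the same loop works; a sketch under the same input assumptions:

```bash
awk '{a = $13; for (i = 14; i <= NF; i++) a = a " " $i; print $1 "," $10 "," $11 "," $12 "," a}'
```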
360,941 | Is there a way to predict when the next release will be out? I read somewhere that it has to do with the number of bugs remaining in the testing branch. Could someone please explain how this works, and on what variables the next release date depends? | See Debian Release Management ; for Debian 9, it stated: As always, Debian 9 “Stretch” will be released “when it’s ready”. and that’s the general rule for all releases. The planned release date for Debian 9, June 17 2017, was announced on May 26 of that year . The planned release date for Debian 10, July 6 2019, was announced on June 11 of that year . (Both releases happened on the planned date.) Debian 11 is currently frozen , and the release is planned for August 14 2021 . Generally speaking, you’re right that “when it’s ready” correlates to the number of (release-critical) bugs in the testing distribution, to a large extent. The release team give regular updates on debian-devel-announce , which are linked from the release management page . These updates list the items which still need to be fixed (including, but not limited to, bugs), and explain how you can help; that’s mainly: test the current testing distribution; help triage bugs; help fix bugs. The best way of knowing when a Debian release will happen is to help fix the issues preventing it: as the number of such issues goes down, the release date gets closer. You can track the release-critical bugs ; those which matter for the next release are counted as “number concerning the next release”. Other important ingredients for a Debian release are its installer and its documentation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/360941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46235/"
]
} |
361,010 | Is it possible to save the state of a virtual machine under QEMU/KVM/libvirt (on x86-64) to disk like you can on VMware Player, meaning that: The RAM and CPU/system state is saved to disk The OS is stopped from the outside (no suspend to disk within the VM) The VM can be continued after rebooting the host? If it is possible, would it need special drivers within the VM? Which ones for Linux and Windows 7 guests? | The virt-manager window has a feature "shut down" -> "save". Additional drivers are not required. I think the obvious problem is with the system time inside the guest. I don't know if there are guest drivers available to let the clock catch up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115138/"
]
} |
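For a headless host, the libvirt command-line equivalent of virt-manager's "save" is virsh ; a sketch, where mydomain and the save path are placeholders for your guest's name and a writable location:

```bash
virsh save mydomain /var/lib/libvirt/mydomain.state   # stops the guest, writes RAM/device state to disk
# ... the host can be rebooted here ...
virsh restore /var/lib/libvirt/mydomain.state         # resumes the guest where it left off
```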
361,018 | > strace w 2>&1 | grep urandomread(4, "/usr/bin/grep\0urandom\0", 2047) = 22> Why does "w" need urandom? How to avoid this? UPDATE: > strace w 2>&1 | awk '/urandom/'read(4, "awk\0/urandom/\0", 2047) = 14> so it is the filtering that has something to do with urandom? > strace who 2>&1 | grep urandom> Then why isn't "who" affected? | As explained in other answers and comments, the reason for what you observe is the way Bash handles pipes. In order to filter what you really want in similar situations, you can try to enclose the first letter of the grep argument in [] like this: $ strace w 2>&1 | grep randomread(4, "grep\0random\0", 2047) = 12$ strace w 2>&1 | grep '[r]andom'$ strace w 2>&1 | grep '[c]lose'close(3) = 0close(3) = 0close(3) = 0close(3) = 0close(3) = 0close(3) = 0(...) EDIT: As correctly noted by R. in the comment below, in fact strace does not see the other side of the pipe. Similarly to ps aux | grep grep , which also shows grep grep in its output, w is walking through the /proc directory and finds the grep process there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214328/"
]
} |
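Another way to avoid matching your own grep is to let strace write to a file instead of a pipe, so the filter only runs after w has exited; a sketch (the /tmp path is arbitrary):

```bash
strace -o /tmp/w.trace w     # trace output goes to a file, not through a pipe
grep urandom /tmp/w.trace    # grep starts after w is done, so it can't appear in w's /proc walk
```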
361,089 | This answer explains .msi and setup.exe files for installing an application on Windows. Are there equivalents to .msi and to setup.exe files in Debian or Ubuntu? Do .deb package files correspond to .msi or setup.exe or something else? | Probably closer to an MSI installer than a setup.exe , a .deb package includes a tree of files to copy into the filesystem, as well as a collection of pre- and post-installation hooks to run (among other things). The hooks can effectively do anything on the system, including something I don't think I've ever seen on Windows: adding users for a system service. One thing they can't do is install another .deb package — the database is locked during installation, so this can only be achieved through dependencies. Installing a .deb package then produces entries in a central database of installed packages for ease of maintenance. The ttf-mscorefonts package is interesting in that the package itself contains only a script to download and install the fonts. This script is executed in one of these hooks. Closer to setup.exe might be downloading a program's source code from the project's homepage, then running ./configure && make && sudo make install , or whatever other method the authors decided to use. Since this method does not add the package to the database of installed programs, removing it later can be much more difficult. Another difference is that a .deb specifies its dependencies, so proper installation can be guaranteed. As far as I know, in the Windows world an MSI cannot cause the installation of another MSI, so setup.exe is typically used for this kind of dependency tracking. Several comments note that MSIs can name dependencies, but since there is no central database of MSIs like there is for .deb packages, missing a dependency will just cause a failure to install. Thus, a .deb is sort of in between an MSI installer and a setup.exe . The package can do whatever it wants during its pre- and post-installation hooks, can name and usually find its own dependencies, and leaves a record of its installation in a central location for ease of maintenance. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
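The pieces described above can be inspected directly with dpkg-deb ; a sketch, with package.deb as a placeholder:

```bash
dpkg-deb -I package.deb          # control information: dependencies, description, etc.
dpkg-deb -c package.deb          # the tree of files that would be copied into the filesystem
dpkg-deb -e package.deb control  # extract the control archive, including pre/post-install hooks
```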
361,134 | I have a variable named descr which can contain a string Blah: -> r1-ae0-2 / [123] , -> s7-Gi0-0-1:1-US / Foo , etc. I want to get the -> r1-ae0-2 , -> s7-Gi0-0-1:1-US part from the string. At the moment I use descr=$(grep -oP '\->\s*\S+' <<< "$descr") for this. Is there a better way to do this? Is it also possible to do this with parameter expansion? | ksh93 and zsh have back-reference (or more accurately 1 , references to capture groups in the replacement) support inside ${var/pattern/replacement} , not bash . ksh93 : $ var='Blah: -> r1-ae0-2 / [123]'$ printf '%s\n' "${var/*@(->*([[:space:]])+([^[:space:]]))*/\1}"-> r1-ae0-2 zsh : $ var='Blah: -> r1-ae0-2 / [123]'$ set -o extendedglob$ printf '%s\n' "${var/(#b)*(->[[:space:]]#[^[:space:]]##)*/$match[1]}"-> r1-ae0-2 ( mksh man page also mentions that future versions will support it with ${KSH_MATCH[1]} for the first capture group. Not available yet as of 2017-04-25). However, with bash , you can do: $ [[ $var =~ -\>[[:space:]]*[^[:space:]]+ ]] && printf '%s\n' "${BASH_REMATCH[0]}"-> r1-ae0-2 Which is better as it checks that the pattern is found first. If your system's regexps support \s / \S , you can also do: re='->\s*\S+'[[ $var =~ $re ]] With zsh , you can get the full power of PCREs with: $ set -o rematchpcre$ [[ $var =~ '->\s*\S+' ]] && printf '%s\n' $MATCH-> r1-ae0-2 With zsh -o extendedglob , see also: $ printf '%s\n' ${(SM)var##-\>[[:space:]]#[^[:space:]]##}-> r1-ae0-2 Portably: $ expr " $var" : '.*\(->[[:space:]]*[^[:space:]]\{1,\}\)'-> r1-ae0-2 If there are several occurrences of the pattern in the string, the behaviour will vary with all those solutions. However none of them will give you a newline-separated list of all matches like in your GNU- grep -based solution. To do that, you'd need to do the looping by hand. For instance, with bash : re='(->\s*\S+)(.*)'while [[ $var =~ $re ]]; do printf '%s\n' "${BASH_REMATCH[1]}" var=${BASH_REMATCH[2]}done With zsh , you could resort to this kind of trick to store all the matches in an array: set -o extendedglobmatches=() n=0: ${var//(#m)->[[:space:]]#[^[:space:]]##/${matches[++n]::=$MATCH}}printf '%s\n' $matches 1 back-references more commonly designate a pattern that references what was matched by an earlier group. For instance, the \(.\)\1 basic regular expression matches a single character followed by that same character (it matches on aa , not on ab ). That \1 is a back-reference to that \(.\) capture group in the same pattern. ksh93 does support back-references in its patterns (for instance ls -d -- @(?)\1 will list the file names that consist of two identical characters), not other shells. Standard BREs and PCREs support back-references but not standard ERE, though some ERE implementations support it as an extension. bash 's [[ foo =~ re ]] uses EREs. [[ aa =~ (.)\1 ]] will not match, but re='(.)\1'; [[ aa =~ $re ]] may if the system's EREs support it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/361134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
361,177 | I'm a long-time user of KDE Dolphin and love the pane mechanism. I know you can open/close the pane using F3 . However, I can't find a shortcut to switch from one pane to another like PyCharm, tmux and other applications allow. I know you can use the mouse, but I find it slow to move away from the keyboard, locate the cursor, move to the right pane, click to focus and repeat, while a single shortcut could do the same. Question Is there such a keyboard shortcut? What is it called? How do I configure it? | Dolphin version 17.08.3 There is an option in Settings > Configure Dolphin… > General to use Tab to switch between panes: | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17362/"
]
} |
361,191 | I found one for loop example online. Now I want to use it in my code, but I am not sure how this loop operates: for entry in "$search_dir"/* do echo "$entry"done Now I want to ask: does it look in search_dir on each iteration and copy the files in search_dir to the entry variable, one file per iteration? Or does it take a snapshot of all the contents of search_dir and then store that snapshot in the entry variable? Is there any change in output if someone inserts some file in search_dir while the loop is still working? | When the shell gets to the for -statement, it will expand the value of $search_dir and perform the file name globbing to generate a list of directory entries that will be iterated over. This happens only once, and if the things in $search_dir disappear or if there are new files/directories added to that directory while the loop is executing, these changes will not be picked up. If the loop operates on the directory entries whose names are in $entry , one might want to test for their existence in the loop, especially if the loop is known to take a long time to run and there are lots of files that are in constant flux for one reason or another: for entry in "$search_dir"/*; do if [ -e "$entry" ]; then # operate on "$entry" else # handle the case that "$entry" went away fidone As Stéphane rightly points out in comments, this is a superfluous test in most cases. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222226/"
]
} |
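One caveat the answer leaves implicit: if the glob matches nothing, POSIX shells iterate once over the literal pattern "$search_dir"/* . In bash this can be avoided with nullglob ; a sketch:

```bash
shopt -s nullglob                  # bash: an unmatched glob expands to nothing
for entry in "$search_dir"/*; do
    echo "$entry"
done
shopt -u nullglob                  # restore the default if the rest of the script expects it
```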
361,213 | Adding a gpg key via apt-key systematically fails since I've switched to Ubuntu 17.04 (I doubt it's directly related though). Example with Spotify's repo key : $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys BBEBDCB318AD50EC6865090613B00F1FD2C19886Executing: /tmp/apt-key-gpghome.wRE6z9GBF8/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys BBEBDCB318AD50EC6865090613B00F1FD2C19886gpg: keyserver receive failed: No keyserver available Same thing if I remove the hkp:// prefix. Context: I use CNTLM to cope with the local corporate proxy. Env variables are set (in /etc/environment ): $ env | grep 3128https_proxy=http://localhost:3128http_proxy=http://localhost:3128ftp_proxy=http://localhost:3128 /etc/apt/apt.conf is configured ( apt commands are working fine): $ cat /etc/apt/apt.confAcquire::http::Proxy "http://localhost:3128";Acquire::https::Proxy "http://localhost:3128";Acquire::ftp::Proxy "http://localhost:3128"; Finally, the specified keyserver seems reachable: $ curl keyserver.ubuntu.com:80<?xml version="1.0"?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>SKS OpenPGP Public Key Server</title> </head> <body> [...] What can I do ? I'm not even sure how to further debug it... Things I already tried to do, without any result: run sudo with -E (preserve env) option run apt-key adv with --keyserver-options http-proxy=http://localhost:3128/ option ( source ) run $ gpg --list-keys for some reason ( source ) use another keyserver ( --keyserver pgp.mit.edu ) remove the hkp:// part ( --keyserver keyserver.ubuntu.com:80 ) The weird thing is that I never see any "cntlm" entry in /var/log/syslog when running apt-key . | You usually have a proxy for ftp, http and https; the URL here uses hkp:// , so it is not routed through the plain http proxy, hence the communication fails. Use this instead: sudo apt-key adv --keyserver keyserver.ubuntu.com --keyserver-options http-proxy=http://localhost:3128 --recv-keys BBEBDCB318AD50EC6865090613B00F1FD2C19886 As for the system updates, I would advise using an APT proxy, for instance, apt-cacher-ng . Another way of doing it is searching the public web interface with a browser, for instance on your workstation, for the key you want at https://keyserver.ubuntu.com Open the site, and you get a form. In this case I used the "Search String" "Spotify"; then select "Search" ; it will list several keys. Searching for the signature/fingerprint that you mentioned in the result page: pub 4096R/D2C19886 2015-05-28 Fingerprint=BBEB DCB3 18AD 50EC 6865 0906 13B0 0F1F D2C1 9886 uid Spotify Public Repository Signing Key <[email protected]>sig sig3 D2C19886 2015-05-29 __________ 2017-11-22 [selfsig]sig sig 94558F59 2015-06-02 __________ __________ Spotify Public Repository Signing Key <[email protected]> We see this is the entry that interests us. So we click on D2C19886 and are presented with a page with the key at https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x13B00F1FD2C19886 .
Public Key Server -- Get "0x13b00f1fd2c19886 "-----BEGIN PGP PUBLIC KEY BLOCK-----Version: SKS 1.1.6Comment: Hostname: keyserver.ubuntu.commQINBFVm7dMBEADGcdfhx/pjGtiVhsyXH4r8TrFgsGyHEsOWaYeU2JL1tEi+YI1qjpExb2TeTReDTiGEFFMWgPTS0y5HQGm+2P3XGv0pShvgg9A6FWZmZmT+tymA2zvNrdpmKdhScZ52StPLFz9wsmXHG4DIKVuzgzuV4YxJ1i2wFtoVp8zT9ORu1BxLZ0IBwTvLRbaQGZ8DwXVAHak9cK91Ujj6gJ1MJPohZLHH2BjrOjEl/I36jFUjK0AadznNzo08lLAi94qjtheJtuJD3IEOAlCkaknz6vbEFpszLGlLD7GENMzJk46ObuJuvW5R2PkOU2U8jS0GaUD9Ou/SIdJ6vIdvjSs/ettc2wwdnbSdadvjovIfvEBRsEVMpRG+42B+DZpJbS9pCb8sxTJtnUy1YViZmG0++FhPGGPGzQYhC/Mz07lsx5PkC7Kka2FCNmhauxw5deO43Ck181oQVdbt/VxmChzchUJ6N6/uOV5JKm7B9UnDNyqUYv6goeLvFnT9ag+FCxiroTrq+dINr6d+XT/cI9WtSagfmhcekwhyfcCgYsFemAOckRifjEGFMksQlnWkGwWNoKe91KBxjgaJaazSbZRk0dFPSSmfKWaxuTwkR74pbaueyijnQJgHAjfCyzQe9miN9DitON5l6T2gVAN3Jn1QQmV7tt5GB7amcHf5/b0oYmmRPQARAQABtD5TcG90aWZ5IFB1YmxpYyBSZXBvc2l0b3J5IFNpZ25pbmcgS2V5IDxvcGVyYXRpb25zQHNwb3RpZnkuY29tPokBHAQQAQIABgUCVW3SWAAKCRAILM7flFWPWUk5B/wOqqD9/2Do9PyPucfUs/rrP4+M8iJLpv8U+bX/qHryTTWfpk3YuKL4+c8saHySK4HLGyxd3mdo1XMF351KrxLQvWMSSPbIRV9cSqZROOVn2ya+3xpWk6t1omLzxtBBMOC4B5qAfWhog7ioAmzQNY5NUz5mqXVP5WbgR/G+GOszzuQUgeu1Xxxzir3JqWQ0g8mp3EtX7dB76zxkkuTYbeVDPOvtJPn/38d3oSLUI1QJnL8pjREHeE8fO5mWncJmyZNhkYd+rfnPk+W0ZkTr59QBIEOGMTmATtNh+x1mo5e2dW91Oj4jEWipMUouLGqbo/gJuHFMt8RWBmy+zFYUEPYHiQI+BBMBAgAoAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCVWg3sAUJBK3QLQAKCRATsA8f0sGYhl6hEACJ1CrYjaflKKR2Znuh0g0gM89NAwO8AA4+SpkWHagdGLo7OV/rGB3mlwD4mhaa8CbEnBT/za3jFnT19KsYQWiT21oOX/eo47ITbAspjDZTiXLinyAcOJn+q/EFkelROzbVaxZHi6SN5kCEd8KAew8h2jZf8wWqaYVyMPNSqotUhin6YjWsu57BGixVThoMmxx3udsGAiYqt8buAANWbkUphrvtJuNCKkGym7psnS4Q5EnHPfvbYii9iAfBswX6nZQlehva7aToN73elYL3opCArAxKAFx70bpGxb7T16KjKzkKS0a4iQ7xdbBGylb+AE/RhICa+RM5tma2YnB3pZvFM/n0BNeYReCgvxkl1rqrB1KxmFHfGqjLkb2YAZ5RYnP3gEt+nbEWxL8FO0Bhakn1RB3NqTC2oiQAUfh+66yUawUNkHRHlGAEzZAxvpfnf0hSJp734lyQZJs+zqXUAXa2UmEZ6se62PgZRQIz5IbAVxSiGz4xIZs1yS36N2vZ34LFJa9o/HVk5OfpqZM0zjWwQIQN2b4OBizL5r4h2Mi5BHUEyYMsDZn+txoJjPPYLolRlf31sqi5MJE+cbOAXSn8PC9k4i+hrbfqFzts47+6xgCH3aXbhUkJh1CH/0/qEXfTPYTyayijm4rdvSBczzEORWGT5E38oV9h1eUqp4nVPg===/qip-----END PGP PUBLIC KEY BLOCK----- You cut between the line that begins with "-----BEGIN" and the line ending with "-----END", including those lines, and paste to a file, say spotify.pgp on the intended server you want to import that key. (do not cut it from here, as I added 4 spaces before each line while formatting) Finally to import the key into the server you do: $sudo apt-key add spotify.pgpOK | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/361213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101593/"
]
} |
361,214 | I'm using Arch Linux + GNOME3 on desktop, and when the system starts or the user logs out, gdm displays the login screen for about 20 seconds and then turns off the display (although the computer is still running). Is it possible to disable this? I want the monitor to keep displaying the login screen "forever". I couldn't find any way to configure this. | That's because of the idle-delay setting. To change it you'll have to alter the corresponding dconf key (and do that as the gdm user): switch to a VT (e.g. Ctrl + Alt + F3 ), login as root and run: su - gdm -s /bin/sh to switch user to gdm . then run: export $(dbus-launch) and set idle delay to 0 (which translates to never ): GSETTINGS_BACKEND=dconf gsettings set org.gnome.desktop.session idle-delay 0 run exit or hit Ctrl + D to return to root account. reboot your machine or restart the display manager: systemctl restart gdm | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228548/"
]
} |
361,239 | I recently purchased an external USB hard drive and wanted to use it as a portable boot drive. I installed Linux Mint 18.1 on it and got everything working. Then I started to think about using that drive to install Linux on other machines. I assumed that whatever a live boot USB does should be possible from a full-blown Linux installation. I looked around and the only option I found was from Ubuntu: Installation/From Linux . Their solution is to create a partition, fill it with the ISO contents and then boot from that to launch the installer. I did follow those instructions and got it working as expected; however, I still feel there must be a way to install Linux from Linux without booting into an ISO. I just found a related question: Installing without booting . There is an answer there that suggests there is some sequence of operations that could be run to install Linux on another partition, but I would need more detail than provided there. Is that process documented somewhere? Honestly, I would be more comfortable if I could just run the installers that are included in the live boot images of each distro. Or some kind of semi-authoritative script that would do the same thing. Is there a package in the repos that would provide such a thing (e.g. a Linux Mint installer package that could be installed using apt-get or yum )? | Here is an example of installing Debian from a Linux Mint live USB (or any Debian-based distro). If you have a Debian-based distribution already installed on your hdd, you can install another Debian-based distro using chroot and debootstrap from the existing OS. Boot from the live USB. Use gparted to create your root , swap , /home ... partitions. If you prefer the command line ( fdisk , parted ..), here is how to activate the swap partition: mkswap /dev/sdaYsyncswapon /dev/sdaY Let's say you need to install Debian bullseye . Install the debootstrap package: sudo apt-get install debootstrap Create /mnt/stable , then mount your root partition ( sdaX ) sudo mkdir /mnt/stablesudo mount /dev/sdaX /mnt/stable Install the base system: sudo debootstrap --arch amd64 bullseye /mnt/stable http://ftp.fr.debian.org/debiansudo mount -t proc none /mnt/stable/procsudo mount -o bind /dev /mnt/stable/devsudo chroot /mnt/stable /bin/bash Set up your root password: passwd Add a new user: adduser your-username Set up the hostname : echo your_hostname > /etc/hostname Configure the /etc/fstab : add the following lines: /dev/sdaX / ext4 defaults 0 1/dev/sdaY none swap sw 0 0proc /proc proc defaults 0 0 Use the Debian documentation to edit your /etc/apt/sources.list . Configure locale : apt install localesdpkg-reconfigure locales Configure your keyboard: apt install console-datadpkg-reconfigure console-data Install the kernel: apt-cache search linux-image Then: apt install linux-image-5.10.0-2-amd64 Configure the network: editor /etc/network/interfaces and paste the following: auto loiface lo inet loopbackallow-hotplug eth0 # replace eth0 with your interfaceiface eth0 inet dhcpallow-hotplug wlan0 # replace wlan0 with your interfaceiface wlan0 inet dhcp To manage the wifi network, install the following packages: apt install iproute2 network-manager iw Install grub : apt install grub2grub-install /dev/sdaupdate-grub You can install a desktop environment through the command tasksel : apt install aptitude tasksel Run the following command and install your favourite GUI: tasksel Finally, exit the chroot and reboot your system. Documentation: D.3. Installing Debian GNU/Linux from a Unix/Linux System ; Debian wiki: chroot , debootstrap | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228564/"
]
} |
361,245 | Looking at the source of strace I found the use of the clone flag CLONE_IDLETASK which is described there as: #define CLONE_IDLETASK 0x00001000 /* kernel-only flag */ After looking deeper into it I found that, although that flag is not covered in man clone , it is actually used by the kernel during the boot process to create idle processes (all of which should have PID 0) for each CPU on the machine. i.e. a machine with 8 CPUs will have at least 7 (see question below) such processes "running" (note quotes). Now, this leads me to a couple of questions about what that "idle" process actually does. My assumption is that it executes NOP operations continuously until its timeframe ends and the kernel assigns a real process to run or assigns the idle process once again (if the CPU is not being used). Yet, that's a complete guess. So: On a machine with, say, 8 CPUs will 7 such idle processes be created? (and one CPU will be held by the kernel itself whilst not performing userspace work?) Is the idle process really just an infinite stream of NOP operations? (or a loop that does the same). Is CPU usage (say uptime ) simply calculated by how long the idle process was on the CPU and how long it was not there during a certain period of time? P.S. It is likely that a good deal of this question is due to the fact that I do not fully understand how a CPU works. i.e. I understand the assembly, the timeframes and the interrupts but I do not know how, for example, a CPU may use more or less energy depending on what it is executing. I would be grateful if someone can enlighten me on that too. | The idle task is used for process accounting, and also to reduce energy consumption. In Linux, one idle task is created for every processor, and locked to that processor; whenever there’s no other process to run on that CPU, the idle task is scheduled. Time spent in the idle tasks appears as “idle” time in tools such as top . (Uptime is calculated differently.) Unix seems to always have had an idle loop of some sort (but not necessarily an actual idle task, see Gilles’ answer ), and even in V1 it used a WAIT instruction which stopped the processor until an interrupt occurred (it stood for “wait for interrupt”). Some other operating systems used busy loops, DOS, OS/2 , and early versions of Windows in particular. For quite a long time now, CPUs have used this kind of “wait” instruction to reduce their energy consumption and heat production. You can see various implementations of idle tasks for example in arch/x86/kernel/process.c in the Linux kernel: the basic one just calls HLT , which stops the processor until an interrupt occurs (and enables the C1 energy-saving mode), the other implementations handle various bugs or inefficiencies ( e.g. using MWAIT instead of HLT on some CPUs). All this is completely separate from idle states in processes, when they’re waiting for an event (I/O etc.). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/361245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172635/"
]
} |
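Regarding the accounting question, the cumulative idle time is exposed in the first line of /proc/stat (the fields are jiffies: user, nice, system, idle, iowait, irq, softirq, ...). A rough since-boot idle percentage can be computed like this; tools such as top instead sample the deltas between two readings:

```bash
awk '/^cpu /{printf "idle since boot: %.1f%%\n", $5 / ($2+$3+$4+$5+$6+$7+$8) * 100}' /proc/stat
```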
361,247 | I have a board that is running a "patch" script. The patch script always runs in the background and it is a shell script that runs the following pseudo code: while true; do # checks if a patch tar file exists and if yes then do patching sleep 10done This script is at /opt/patch.sh and it is started by a SystemV init script. The problem is that when the script finds the tar, it extracts it, and inside there is a shell script called patch.sh which is specific for the contents of the tar. When the script at /opt/patch.sh finds the tar it does the following: tar -xf /opt/update.tar -C /mnt/updatemv /mnt/update/patch.sh /opt/patch.shexec /opt/patch.sh It replaces itself with another script and executes it from the same location. Can any problems occur doing that? | If the file is replaced by being written over in-place (inode stays the same), any processes having it open would see the new data if/when they read from the file. If it's replaced by unlinking the old file and creating a new one with the same name, the inode number changes, and any processes holding the file open would still have the old file. mv might do either, depending on if the move happens between filesystems or not... To make sure you get a completely new file, unlink or rename the original first. Something like this: mv /opt/patch.sh /opt/patch.sh.old # or rmmv /mnt/update/patch.sh /opt/patch.sh That way, the running shell would still have a file handle to the old data, even after the move. That said, as far as I've tested, Bash reads the whole loop before executing any of it, so any changes to the underlying file would not change the running script as long as the execution stays within the loop. (It has to read the whole loop before executing it, since there may be redirections affecting the whole loop at the end.) After exiting the loop, Bash moves the read pointer back and then resumes reading the input file from the position right after the loop ended. Any functions defined in the script are also loaded to memory, so putting the main logic of the script into a function, and only calling it at the end, would make the script quite safe against modifications to the file: #!/bin/shmain() { do_stuff exit}main Anyway, it's not too hard to test what happens when a script is overwritten: $ cat > old.sh <<'EOF'#!/bin/bashfor i in 1 2 3 4 ; do # rm old.sh cat new.sh > old.sh sleep 1 echo $idoneecho will this be reached?EOF$ cat > new.sh <<'EOF'#!/bin/bashecho xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxecho xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxecho xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxecho xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxEOF$ bash old.sh With the rm old.sh commented out, the script will be changed in-place. Without the comment, a new file will be created. (This example partly relies on new.sh being larger than old.sh , as if it were shorter, the shell's read position would be past the end of the new script after the loop.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153587/"
]
} |
361,318 | I have a script running in a folder. If a command fails I want to delete the folder containing the script. Is that possible? Edit: Based on a comment, I took out what I tried. | I give an answer as I'm worried someone might try the OP's suggestion... a BIG word of warning : the script shown in the question deletes the directory given by pwd , which is NOT the directory the script is in but the directory the USER is in when launching the script . If one does: (**DO NOT TRY THIS **) cd ; /path/to/thatscript they would delete THE USER'S WHOLE HOME DIRECTORY (as "cd" went back into it) AND EVERYTHING UNDERNEATH! ... (This is especially bad on some OSes where root's homedir is "/" ... ). Instead in your script you should: mydir="$(cd -P "$(dirname "$0")" && pwd)" #retrieve the script's absolute path, #even if the script was called via ../relative/path/to/scriptecho "the script '$0' is in: ${mydir} "...# and then (if you really want this.... but I think it's a bad idea!)# rm -rf "${mydir:-/tmp/__UNDEFINED__}" #deletes ${mydir}, if defined# once you're sure it is correctly reflecting the real script's directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185414/"
]
} |
361,352 | As I understand for text-based interaction with the Linux kernel, a program called init starts getty (or agetty ) which connects to one of the TTY devices under /dev and prompts for a username. After this, a program called login is run which prompts for the user's password and if correct, then launches the user's preferred shell (e.g. bash or csh ). At this point, bash interacts with the kernel via the TTY device. How does this login process work for X11? Does X11 interact with the kernel over a TTY? | The shell uses a TTY device (if it’s connected to one) to obtain user input and to produce output, and not much else. The fact that a shell is connected to a TTY is determined by getty (and preserved by login ); most of the time the shell doesn’t care whether it’s connected to a TTY or not. Its interaction with the kernel happens via system calls. An X11 server doesn’t know about logins (just like a shell). The login process in X11 works in two ways: either the user logs in on the terminal, and then starts X (typically using startx ); or an X server is started with a “display manager” which prompts the user for a login and password (or whatever authentication information is required). The way X11 servers obtain input and produce output is very different compared to a shell. On the input side, X knows about devices that shells don’t, starting with mice; it typically manages those directly with its own drivers. Even for keyboards, X has its own drivers which complement the kernel’s handling (so as I understand it, on Linux for example X uses the TTY driver to read raw input from the keyboard, but then interprets that using its own driver). On the output side, X drives display devices directly, with or without the kernel’s help, and without going through a TTY device. X11 servers on many systems do use TTY devices though, to synchronise with the kernel: on systems which support virtual terminals, X needs to “reserve” the VT it’s running on, and handle VT switching. There are a few other subtleties along the way; thus on Linux, X tweaks the TTY to disable GPM (a program which allows text-mode use of mice). X can also share a VT... On some workstations in the past, there wasn’t much explicit synchronisation with the kernel; if you didn’t run xconsole , you could end up with kernel messages displayed in “text mode” over the top of your X11 display. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88752/"
]
} |
361,388 | I would like to know what component models (e.g. CORBA , D-Bus , UNO , XPCOM , ActiveX , COM , etc.) are installed on my machine. Is there a command that I can run on the terminal to see a list of the different component models available? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211318/"
]
} |
361,421 | I am in the process of salvaging data from a 1 TB failing drive (asked about it in Procedure to replace a hard disk? ). I have done ddrescue from a system rescue USB with a resulting error size of 557568 B in 191 errors, probably all in /home (I assume what it calls "errors" are not bad sectors, but consecutive sequences of them). Now, the several guides I've seen around suggest doing e2fsck on the new disk, and I expected this to somehow find that some files have been assigned "blank sectors/blocks", to the effect of at least knowing which files could not be saved whole. But no errors were found at all (I ran it without -y to make sure I didn't miss anything). Now I am running it again with -c , but at 95% no errors were found so far; I guess I have a new drive with some normal-looking files with zeroed or random pieces inside, undetectable until one day I open them with the corresponding software, or Linux Mint needs them. Can I do anything with the old/new drives in order to obtain a list of possibly corrupted files? I don't know how many they could be, since that 191 could go across files, but at least the total size is not big; I am mostly concerned about a big bunch of old family photos and videos (1+ MB each), the rest is probably irrelevant or was backed up recently. Update: the new pass of e2fsck did give something new of which I understand nothing: Block bitmap differences: +231216947 +(231216964--231216965) +231216970 +231217707 +231217852 +(231217870--231217871) +231218486Fix<y>? yesFree blocks count wrong for group #7056 (497, counted=488). Fix<y>? yesFree blocks count wrong (44259598, counted=44259589).Fix<y>? yes | You'll need the block numbers of all encountered bad blocks ( ddrescue should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here ). You may want to script this if there are a lot of bad blocks. e2fsck doesn't help, it just checks consistency of the file system itself, so it will only act if the bad blocks contain "administrative" file system information. The bad blocks in the files will just be empty. Edit Ok, let's figure out the block size thingy. Let's make a trial filesystem with 512-byte device blocks: $ dd if=/dev/zero of=fs bs=512 count=200$ /sbin/mke2fs fs$ ll fs-rw-r--r-- 1 dirk dirk 102400 Apr 27 10:03 fs$ /sbin/tune2fs -l fs...Block count: 100...Block size: 1024Fragment size: 1024Blocks per group: 8192Fragments per group: 8192 So the filesystem block size is 1024, and we've 100 of those filesystem blocks (and 200 512-byte device blocks). Rescue it: $ ddrescue -b512 fs fs.new fs.logGNU ddrescue 1.19Press Ctrl-C to interruptrescued: 102400 B, errsize: 0 B, current rate: 102 kB/s ipos: 65536 B, errors: 0, average rate: 102 kB/s opos: 65536 B, run time: 1 s, successful read: 0 s agoFinished $ cat fs.log# Rescue Logfile. Created by GNU ddrescue version 1.19# Command line: ddrescue fs fs.new fs.log# Start time: 2017-04-27 10:04:03# Current time: 2017-04-27 10:04:03# Finished# current_pos current_status0x00010000 +# pos size status0x00000000 0x00019000 +$ printf "%i\n" 0x00019000102400 So the hex ddrescue units are in bytes, not any blocks. Finally, let's see what debugfs uses. 
First, make a file and find its contents: $ sudo mount -o loop fs /mnt/tmp$ sudo chmod go+rwx /mnt/tmp/$ echo 'abcdefghijk' > /mnt/tmp/foo$ sudo umount /mnt/tmp$ hexdump -C fs...00005400 61 62 63 64 65 66 67 68 69 6a 6b 0a 00 00 00 00 |abcdefghijk.....|00005410 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| So the byte address of the data is 0x5400 . Convert this to 1024-byte filesystem blocks: $ printf "%i\n" 0x540021504$ expr 21504 / 102421 and let's also try the block range while we are at it: $ /sbin/debugfs fsdebugfs 1.43.3 (04-Sep-2016)debugfs: testb 0testb: Invalid block number 0debugfs: testb 1Block 1 marked in usedebugfs: testb 99Block 99 not in usedebugfs: testb 100Illegal block number passed to ext2fs_test_block_bitmap #100 for block bitmap for fsBlock 100 not in usedebugfs: testb 21Block 21 marked in usedebugfs: icheck 21Block Inode number21 12debugfs: ncheck 12Inode Pathname12 //foo So that works out as expected, except block 0 is invalid, probably because the file system metadata is there. So, for your byte address 0x30F8A71000 from ddrescue , assuming you worked on the whole disk and not a partition, we subtract the byte address of the partition start 210330128384 - 7815168 * 512 = 206328762368 Divide that by the tune2fs block size to get the filesystem block (note that since multiple physical, possibly damaged, blocks make up a filesystem block, numbers needn't be exact multiples): 206328762368 / 4096 = 50373233.0 and that's the block you should test with debugfs . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228370/"
]
} |
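Since the answer suggests scripting the lookup when there are many bad areas, here is a hedged sketch that maps ddrescue byte offsets to filesystem blocks and asks debugfs which inode owns each one; the offset list, partition start and block size are the example values worked out above, and /dev/sdXn is a placeholder for the actual partition:

```bash
part_off=$((7815168 * 512))            # partition start in bytes, from the partition table
bsize=4096                             # filesystem block size, from tune2fs -l
for pos in 0x30F8A71000; do            # byte offsets of bad areas from the ddrescue map file
    blk=$(( (pos - part_off) / bsize ))
    debugfs -R "icheck $blk" /dev/sdXn # then "ncheck <inode>" turns inodes into pathnames
done
```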
361,505 | Let's say we have a "for loop" as follows: #!/bin/bashfor i in $(cat test_file); do echo $idone The content of the text file is names of folders in a parent folder. If the text_file contains 10000 entries (i.e. variable i), how can I tell the "for loop" to sleep 10 seconds between every 10 "echos"? In other words, when the for loop reads the variable i in the text_file, how can I control the number of variables that the for loop can run in every specific period of time? So the output would be as follows: Variable #1 Variable #2 Variable #3 . . . . sleep 10 Variable #11 Variable #12 Variable #13 . . . | Use the following bash script (to sleep 10 seconds between every 10 "echos"); test.sh is the example name of the script: #!/bin/bashwhile ((++i)); read -r linedo echo "$line" if (( "$i" % 10 == 0)) then sleep 10 fidone < "$1" Usage : bash test.sh test_file while ((++i)) - increments the counter i each time read -r line returns a line from the input if (( "$i" % 10 == 0)) - checks whether the current line number i is divisible by 10 (i.e. another 10 lines have been processed) sleep 10 - pauses the script for 10 seconds | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
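If you prefer to keep the question's original for loop, the same counter trick works there too; a sketch (note that the unquoted $(cat ...) word-splits, so it breaks on folder names containing whitespace):

```bash
i=0
for entry in $(cat test_file); do
    echo "$entry"
    (( ++i % 10 == 0 )) && sleep 10   # every 10th entry, pause for 10 seconds
done
```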
361,551 | I would like to bind both Super + 1 and Super + Home as shortcuts to the "Switch to workspace 1" action in GNOME 3. Is there a way to do this? I think that to achieve this there would either have to be a way to assign multiple keyboard shortcuts to the same action or there would have to be a way to switch workspaces via a command-line action (which would alet me create a shortcut for it in the "custom shortcuts" section). But I'm not sure these are possible... | Yes, this is a dconf setting and the value is an array of strings which means it accepts multiple shortcuts. You can do that via dconf-editor if you navigate to /org/gnome/desktop/wm/keybindings/switch-to-workspace-1 and turn Use default value OFF then insert Custom value : ['<Super>Home', '<Super>1'] Or if you prefer CLI you can use dconf or gsettings e.g. gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-1 "['<Super>Home', '<Super>1']" Keep in mind the values must be quoted and separated by comma+space. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
361,623 | In bash, if I make the assignment a= , what is a now? I am sure a is not the '' string, and not 0 . I found that only the below test is true: if [ $a = ];then echo 'good';fi | It is the empty string. It's the same as doing: a='' Or a="" Or a=$'' Or for that matter: a=''""$'' Those '' , "" , and $'...' are quoting operators to the shell. When you do: a='' You're not assigning a string made of two single quote characters to $a but the empty string. Those '' are superfluous as there's nothing inside them, but that can make your code slightly more legible (make it clearer that you did intend to assign an empty string). To assign a literal '' to $a , you'd need to quote those special characters like: a="''" a=\'\' a=$'\'\'' a="'"\' Your test command is invalid. In [ $a = ] Since you forgot to quote $a , the split+glob operator is applied to $a . Since $a is empty (but that would be the same if it contained only blanks or newlines with the default value of $IFS ) that results in no argument to be passed to the [ command. So all the arguments [ receives are [ , = and ] . For [ , that's a test to tell if = is a non-empty string and it returns true. What you want here is to pass these arguments to the [ command: [ the content of $a for which you need "$a" = the empty string. For which you need '' or "" ... Passing nothing would mean that no argument is passed to [ so the 4th argument would be the closing ] . ] So it should be: if [ "$a" = '' ]; then echo '$a is empty'; fi Or: if [ -z "$a" ]; then echo '$a is empty'; fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31374/"
]
} |
361,627 | I installed ATLAS (with Netlib LAPACK ) in a Docker image, and now every time I run ldconfig , I get the following errors: ldconfig: Can't link /usr/local/lib//usr/local/lib/libtatlas.so to libtatlas.soldconfig: Can't link /usr/local/lib//usr/local/lib/libsatlas.so to libsatlas.so Of course, /usr/local/lib//usr/local/lib/libtatlas.so doesn't exist, but I'm confused why it would try to look for this file, since libtatlas.so isn't a symbolic link: root@cd00953552ab:/usr/local/lib# ls -la | grep atlas-rw-r--r-- 1 root staff 15242054 Apr 27 08:18 libatlas.a-rwxr-xr-x 1 root staff 17590040 Apr 27 08:18 libatlas.so-rwxr-xr-x 1 root staff 17492184 Apr 27 08:18 libsatlas.so-rwxr-xr-x 1 root staff 17590040 Apr 27 08:18 libtatlas.so Why would this be happening, and is there a way to fix it or turn off this error message? Edit: Here's the readelf output: root@cd00953552ab:/usr/local/lib# eu-readelf -a /usr/local/lib/libatlas.so | grep SONAME SONAME Library soname: [/usr/local/lib/libtatlas.so] | For some reason, probably related to the way the libraries were built (and more specifically, linked), they’ve stored their installation directory in their soname: thus libtatlas.so ’s soname is /usr/local/lib/libtatlas.so . ldconfig tries to link libraries to their soname, if it doesn’t exist, in the same directory: it finds /usr/local/lib/libtatlas.so , checks its soname, determines that a link needs to be made from /usr/local/lib//usr/local/lib/libtatlas.so (the directory and soname concatenated) to /usr/local/lib/libtatlas.so , and fails because /usr/local/lib/usr/local/lib doesn’t exist. The appropriate way to fix this is to ensure that the libraries’ sonames are defined correctly. Typically I’d expect libtatlas.so.3 etc. with no directory name (the version would depend on the ABI level of the library being built). You probably need to rebuild the libraries, or find a correctly-built package... Alternatively, you can edit a library’s soname using PatchELF : patchelf --set-soname libtatlas.so /usr/local/lib/libtatlas.so Ideally you should relink the programs you built using this library, since they’ll have the soname embedded too (you can also patch that using PatchELF). In an evolving system, you’d really want to specify a version in the soname, but in a container it probably doesn’t matter — you should be rebuilding the container for upgrades anyway. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18885/"
]
} |
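After patching, the result can be checked and ldconfig re-run; a sketch:

```bash
patchelf --print-soname /usr/local/lib/libtatlas.so  # should now print just "libtatlas.so"
ldconfig                                             # the bogus /usr/local/lib//usr/local/lib/... link attempt should be gone
```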
361,642 | I recently installed Ubuntu 17.04 and I'm not able to add any PPA. I tried to manually add keys using different keyservers, but on every attempt I'm getting a keyserver receive error: $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0F164EEB Error Received: Executing: /tmp/apt-key-gpghome.qm2WNA0lTK/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0F164EEBgpg: keyserver receive failed: No keyserver available$ sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 0F164EEB Error Received: Executing: /tmp/apt-key-gpghome.O681PzEx7r/gpg.1.sh --keyserver keys.gnupg.net --recv-keys 0F164EEBgpg: keyserver receive failed: Connection refused It is the same case with other keys. I'm not able to add any PPA. | I was getting the same 'gpg keyserver connection refused' error with gpg at the command line, GPA, and KGpg. I am using gnupg 2.1.18-8 on Debian Sid. I enabled debugging in dirmngr as follows: sudo pkill dirmngr; dirmngr --debug-all --daemon --standard-resolver The debugging output on the console complained about the lack of a Tor connection. It turned out that "use-tor" was enabled in $HOME/.gnupg/dirmngr.conf . (Thanks, gpgconf!) I commented it out, leaving an empty dirmngr.conf , and keyserver communications are now working normally. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228872/"
]
} |
361,655 | I am looking for a way to get a filename assigned to a variable in my shell script. But my file has a naming format like file-1.2.0-SNAPSHOT.txt . Here the numbers may change sometimes; how can I assign this filename to a variable? Can any regex be used? Or grep ? or find ? or file ? My directory consists of the following files: file-1.2.0-SNAPSHOT.txtnewFile-1.0.0.txtsample.txt My script sc.sh : file_path="/home/user/handsOn"var=$file_path/file-1.2.0-SNAPSHOT.txtnewLocation=/new_pathcp $var $newLocation Now the file version changes sometimes. My script should work for any version number. How can I assign the matched filename to a variable? Help me out. TIA | Let's say your file follows this pattern file-1.2.0-SNAPSHOT.txt , so it can be file-1.2.0-SNAPSHOT.txt or file-1.3.0-SNAPSHOT.txt or file-1.5.1-SNAPSHOT.txt etc. Then you can get the files using the find command like this: find . -type f -iname "*SNAPSHOT.txt" It will give you all the files which end with SNAPSHOT.txt , and then you can use that to do your work. The dot ( . ) in find can be replaced by the parent directory which should contain the file, e.g. find ~/my_files/ -type f -iname "*SNAPSHOT.txt" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227732/"
]
} |
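Since the question asks for the name in a variable, the glob can also be used directly in the script without find ; a sketch assuming exactly one matching file (check ${#files[@]} otherwise):

```bash
file_path="/home/user/handsOn"
files=( "$file_path"/file-*-SNAPSHOT.txt )  # glob expansion, no external command needed
var=${files[0]}                             # first (and assumed only) match
newLocation=/new_path
cp "$var" "$newLocation"
```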
361,658 | I'm trying to write a simple bash script in which the user inputs their username, and then they are greeted, depending on the time of day, by their surname. I currently have the following: echo Please enter your usernameread usernamename=$(grep $username /etc/passwd | cut -d ':' -f 5)h=$(date +%H)if [ "$h" -lt 12 ]; then echo Good morning ${name::-3} etc. etc. I have managed to cut the 3 commas off the end of the name that are there, but I want to be able to cut the first name off. For example: The $name is Amber Martin,,, . I've cut down to Amber Martin . I need to cut down further to Martin . And this needs to work with any name. | Better to use getent passwd than to read /etc/passwd directly. getent also works with LDAP, NIS and such. I think it exists in most Unixes. (My OS X doesn't have it, but it doesn't have my account in /etc/passwd either, so...) name=$(getent -- passwd "$USER" | cut -d: -f5) The string processing can be done with the shell's parameter expansion ; these are POSIX compatible: name=${name%%,*} # remove anything after the first commaname=${name%,,,} # or remove just a literal trailing ",,,"name=${name##* } # remove from start until the last spaceecho "hello $name" Use ${name#* } to remove until the first space. (Just hope no-one has a two-part last name, with a space in between). The cut could also be replaced with word-splitting or read , by setting IFS to a colon. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228886/"
]
} |
361,703 | I have got large 200 MB mail history text file and I have to find a line that has the following structure: lastname namestreetname numberOfHousepostalcode cityname namely: Arthur DentGalaxy 774369 Third Orbit Note : The postal code contains always 5 numbers and the cityname can be of one or two words containing uppercase and lowercase letters of the alphabet. Lastname, name, streetname are just one word containg uppercase and lowercase letters of the alphabet. There are no additional information given. My solution so far does return nothing and simply a new prompt line appears: grep -P '[a-zA-Z]+ [a-zA-Z]+\n[a-zA-Z]+ [0-9]+\n\d{5} [a-zA-Z]+ [a-zA-Z]+' '/home/jublikon/Downloads/emails' An excerpt of the file: Received: from outmail-1.st1.spray.net (outmail-1.st1.spray.net [212.78.202.120]) by pigsty.hamjudo.com (8.12.1/8.12.1/Debian -5) with ESMTP id g58Jd3Y9015868 for ...; Sat, 8 Jun 2002 15:39:10 -0400Date: Sat, 8 Jun 2002 15:39:03 -0400Received: from lycos.co.uk (newwww-2.st1.spray.net [212.78.202.12]) by outmail-1.st1.spray.net (8.8.8/8.8.8) with SMTP id VAA09339; Sat, 8 Jun 2002 21:36:57 +0200 (DST)Posted-Date: Sat, 8 Jun 2002 21:36:57 +0200 (DST)From: Sandra Savimbi [email protected]: [email protected]: [email protected]: Caramail - www.caramail.comX-Originating-IP: [213.251.169.58]Mime-Version: 1.0Subject: Kindly Get Back To Me Please.Content-Type: multipart/mixed;Case there are no risks involved. REPLYASAP. With regards. Dr.Raymond Graham(JP). TEL00-228-949-7287. _____________________________________________________________To meet someone --- http://www.domeconnection.com Get free new car price quoteshttp://autos.yahoo.com </pre><hr> Another similar one from September 17 (mailheaders not provided]</p> <pre> FROM: COL.ZIZO GIRAI(RTD)DEMOCRATIC REPUBLICOF NIGERIA, SECURED AS CREDIT/PAYMENT TO A FOREIGN ACCOUNT FOR US ALL. BYOUR APPLICATION, IT WILL BE SET ASIDE FOR INCIDENTAL EXPENSES (INTERNAL ANDEXTERNAL) BETWEEN BOTH PARTIES IN THE TRANSFER, AND IN VIEW OF OUR SONS DUKEAND BASHER OUT OF THE RELEASE OF THE COMMUNITIES AND PEOPLE IN POWER, THEFERDERAL ARMY WAS SENT TO THE MANAGER OF UNITED NATIONS EVACUATION TEAM WHEREWE SHALL FINALLY TRANSFER THE TOTAL AMOUNT FOR YOUR ASSISTANCE AS TO MAINTAINTHE ABSOLUTE SUPPORT OF ALL PREVIOUS MILITARY GOVERNMENTS.CONTACT SHOULD BECONFIDENTIAL. Best Regards, Dr. Mrs. Marian Sani Abacha. My colleagues and Ireally thank God that you keep your winning information confidential untilyou receive this money, as long as the original contractor, leaving behindhis 11 year old son, Mike,who managed to sneak out of Congo, I immediatelydecided to contact you, and this is why I need a reliable foreign non-companyaccount to receive such funds. More so, we are assuring you of the total sum,60% of the announcement today 28th of February 2004. After this date, all fundswill be for you. Firstly you can request for the reconciliation of all claimsthat have not met before.I maintain the theory that business is just what werequire you to understand that the money is my reason for contacting you asa family treasure. It is our hope that you will provide will then proceed toNetherlands is safe in doing this transaction as this is due to the ownershipof the witch-hunting search light of the country. I shall be revealed to meat once via email as stated above. 
Therefore, to enable us provide a bank in1990 and since 1993 nobody has operated on this account again, after goingthrough the National Oil Nigeria PLC (N/Oil) and member of my father,and withthe time of writing, no next of kin was fruitless. Itherefore made furtherinvestigation and discovered that Mr. Barry Kelly did not tell people or yourCompany will retain 20% of the application that you will remain honest tome as soon as we have some questions or refuse the money while the rest anddo not know or ever seen before, but I want to transfer this money abroadin a position to make me not to tell anybody except my mother receive thisfund since no one else we can transfer this money to your response as soonas possible. Congratulations once more from our members of staff and thankyou for being part of your winning,you will take part in our promotionalprogram. Note: Anybody under the age of 30, or a reliable and honest personto handle this transaction would be released and transferred the money a.Regards,Sandra SavimbiArthur DentGalaxy 774369 Third OrbitDate: Sun, 9 Jun 2002 05:23:27 +0200From: IBRAHIM ALI [email protected]: [email protected]: URGENT FOR INVESTMENTFrom the Desk of MR IBRAHIM ALI NIGERIAN NATIONAL PETROLEUM CORPORATIONLAGOS NIGERIA.ATTN: MANAGING DIRECTOR(CEO) . Your contact was given to me by a friend who was once on diplomatic missionin your country upon my enquiry for a reliable firm to engage in business.The same guaranteed your reliability and trust-worthiness in businessmatters. I therefore wish to explain this lucrative business intentionfor our mutual benefit, though I did not let that friend have the realOf it's swiftness and confidentiality. Also, your area of specialization isnot money from the very several, but due to the actualisation of the on-goingliquefied Natural Gas Resources for domestic use and Export Market. In 1995,a consortium of Engineering firms, Technip, Snamprogetti, M. W. Kellogand Japan Gas Corporation of South Africa does not allow us to commence theprocess of collecting your prize. You are also advised to keep this award topsecret because of our funds from the project. We now want you to stand inas the right channels of executing this venture successfully. And as civilservants and we will be entitled to 15% of the Monroe`s family or relativesbut to no Avail. Should you be willing to pay it into your account. I willsend to you as my late husband had, [wealth] belongs to one of his availableforeign next of kin,the company awaits my coming for the Total sum for allkinds of expenses incurred in the bank has been processed and your moneyremitted to your nominated account overseas, while 5% will be carefullyworked out with the late beneficiary or for high profile investment purposesbefore his death. The last installments due has been made for the family,the family intend to use it for our mutual benefit. REMUNERATION. We havedecided to contact his Next of kin to Mr. Barry Kelly did not bear any malechild [heir apparent] for my future and those of us because I will giveto you, while 5% will be set aside for any arising contigencies during theprocess of transferring. I look forward to receiving your prompt reply-BENSONOKA. __________________________________________________ Do You Yahoo!? 
Signup for sake of unfree environment.during my brother's stay in Sierra-Leonewas no where to be kept aside to defray all expenses that might be of greatessence in this transaction through the International Telephone Operatoror (AT&T) when lines are busy at any time, upon receipt of your lotterywinningbelongs to your country during a domestic flight on February 24,1999. Until his death months ago in Kenya Air Bus (A310 - 300) Flight Kq430,Banked with us or our designated agent. Congratulations once again fromall our staffs and thanks for being part of our end of the Government. Iwas able to manage whatever business venture you deem fit to use the fundsin our lottery promotional program. held on the 24th of January 2004. Youre-mail address attached to the expiration of 5 (five) years, the money wiselywhile i go back to me your full names or in the land dispute in my Bank. Thissum of money coded for safe keeping. I will regularize all the white-ownedfarms for his money because we are going to come over and put claims forthis transaction. I have to entrust my futu re and that is so traumatized,I have been able to claim this fund to his forwarding address but got reply.Regards,IBRAHIM ALIthe DeskSubject: HE CARES FOR THOSE WHO TRUST IN HIMDate: Mon, 10 Jun 2002 22:28:24 +0200From: Mrs Rose Sankoh [email protected]: Mrs Rose Sankoh [email protected]: [email protected] CARES FOR THOSE WHO TRUST IN HIMFROM: MRS. ROSE SANKOHE-MAIL: [email protected],I want to confide in you knowing that you are in the vineyard of God andyou may not have the mind to do otherwise when it finally materialise.With due respect and humility I write you this letter with the belief thatyou would be very much obliged to assist us. Since we have no place orIN SOUTH AFRICA. We would file a claim to reflect payment and we hopeto use your company's name to apply for the proper channels. Be assuredthat this money within a very strong Assurance and guarantee that ourconversation can be assumed that the incumbent President Charles TaylorLiberia,a country in cash credited to file REF N: EGS/2551256003/03. Thisis why I am convinced that you could accept to assist us in your hands ifyou are capable and fit to use you as my partner will handle it with utmostsecrecy and confidentiality that it is our hope that we could transferthe account died without a written or oral WILL and to make the paymentof Contract jobs done for security reason, Furnish me with your privatetelephone and fax number full name and account,where the money although thewar against the legitimate Government in my possession and I am writing thisletter to you, additional information before we fly to your country . Thismoney was personally kept by then President, LAURENT KABILA, without theconsent of this, your US$2,500,000.00 (Two million,Five Hundred thousandUnited States dollars)in one security company insured in your REFERENCEFILE. Due to the point, this money will be well protected. This businessproposal for you. On December 6, 1999, a Foreign Account requiring MaximumConfidence. THE PROPOSITION: A Foreigner, a french, Late Engr.Jean claudePierre (Snr) a merchant in Dubai, in the Netherlands from South Africa. Wewill then come over and put to use. 
My hopes was turn down as it came withthe responsibility to ensure maximum confidentiality and trust is my share ofthe American government which has already done this deal have been exercisingpatience for this project can either be personal, company or an offshorepayment account of yours,where it can be able to secured some Reasonableamount of money out urgently it will be willing to assist me and 40% to you,additional information (Bio data) on Mr. Bantam. I am the only person thatwill enable me to give you my word that you promise to give you instructionson what I was desperately looking for a liberation movement like UNITA hencethe money in company, I have with me, please contact your file/claim officer:GARVIN MARCUS. FOREIGN SERVICE MANAGER, Email : [email protected] :+31-620-885-334. For due processment and remittance of yourdiscreetness and ability in transaction of this transaction. Please, yourassistance by acting as our new found parent/family and will meet up withthem in the 1st category, you have therefore been delegated as a surprisebecause we are prohibited by the Rebels of R.U.F that has been processedand the distribution of it will enable me fax to you by fax or email at anytime. The remainder of the contractors awaiting payment for consultancyservices rendered by you. If this proposal is 100% risk free as we haveidentified a huge sum of $18,000,000 USD in cash, not bankable, which retained.Regards,Mrs RoseFrom: "alex princewill" [email protected]: [email protected]: Tuesday, June 11, 2002 1:51 PMSubject: INVESTMENT PROPOSAL/ TO AUDITING AND ACCOUNT UNIT. FORIEGN REMITTANCE DEPT. UNION TOGOLAISE DU BANQUE LOME-TOGO.IN WEST AFRICA. Attn, I am Mr.Alex princewill. the director in charge of auditing and account section of Union Togolaise Du Banque Lome-Togo with due respect and regard. I have decided to contact you on a business Question: Where could my mistake be? | Firstly, there is no data matching your pattern in the file excerpt you are showing here. Assuming you do have a file with the kind of data you are looking for, you should invoke grep with these options: grep -zoP
-z treats the file as one huge string, so the \n in your pattern can match across line boundaries.
-o prints just the matching portion.
-P enables the Perl regex engine, thereby making grep understand the kind of regex you have there.
For example, your pattern with the second word of the city name made optional:
grep -zoP '[a-zA-Z]+ [a-zA-Z]+\n[a-zA-Z]+ [0-9]+\n\d{5} [a-zA-Z]+( [a-zA-Z]+)?' /home/jublikon/Downloads/emails | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361703",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228922/"
]
} |
361,713 | When running the fork call to create a new process, if it succeeds it returns either 0 (the child) or the parent. I didn't get the idea behind this. Why doesn't fork just always return the child or always the parent? | When you fork() , the code that’s running finds itself running in two processes (assuming the fork is successful): one process is the parent, the other the child. fork() returns 0 in the child process, and the child pid in the parent process: it’s entirely deterministic. This is how you can determine, after the fork() , whether you’re running in the parent or the child. (And also how the parent knows the child pid — it needs to wait on it at some point.) In a little more detail:
the future parent process calls fork() ;
the kernel creates a new process, which is the child, and sets various things up appropriately — but both processes are running the same code and are “waiting” for a return from the same function;
both processes continue running (not necessarily straight away, and not necessarily simultaneously, but that’s beside the point):
fork() returns 0 to the child process, which continues and uses that information to determine that it’s the child;
fork() returns the child pid to the parent process, which continues and uses that information to determine that it’s the parent. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65878/"
]
} |
361,782 | I read strings from stdin and I want to display the users that match the strings. Problem is, if the user inputs the character '[', or a string containing it. grep -F does not work because the line has to start with the string (^ - which is a simple character with -F). Also, getent $user won't be good because I need only the username, not the id as well. if [[ "$user" == *"["* ]]; then
    echo -e "Invalid username.\n"
    continue
fi
if ! getent passwd | grep "^$user:"; then
    echo -e "Invalid username.\n"
    continue
fi This is the workaround for '[', is there another way? | Either escape it or put it in a character class, something along these lines:
grep '\['
grep '[[]'
grep -e "${user//\[/\\\[}"
The syntax ${var//c/d} => in the shell variable $var we replace all the characters c with d . Now, in your case the c is [ but it so happens that [ is special in this syntax (it does globbing) and hence we need to escape it by prefixing it with a backslash, i.e., \[ . Now coming to the replacement part, what we need is a \[ in there. But again, both \ and [ are special in this syntax of ${var//...} parameter substitution and hence both need to be, yes you guessed it, backslashed, leading to the expression: \\\[ : "${var//\[/\\\[}" HTH | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139210/"
]
} |
361,794 | I read in another answer that I'm not able to pass arguments to the interpreter that I'm giving to /usr/bin/env : Another potential problem is that the #!/usr/bin/env trick doesn't let you pass arguments to the interpreter (other than the name of the script, which is passed implicitly). However, it looks like I am able to, because awk is breaking when I don't give it the -f flag and it's fixed when I do give it the -f flag, while using /usr/bin/env : First, without the -f flag:
$ cat wrap_in_quotes
#!/usr/bin/env awk
# wrap each line in quotes
# usage: wrap_in_quotes [ file ... ]
{ print "\""$0"\"" }
$ echo foobar | ./wrap_in_quotes
awk: syntax error at source line 1
 context is
	 >>> . <<< /wrap_in_quotes
awk: bailing out at source line 1
Second, with the -f flag:
$ vim wrap_in_quotes
$ cat wrap_in_quotes
#!/usr/bin/env awk -f
# wrap each line in quotes
# usage: wrap_in_quotes [ file ... ]
{ print "\""$0"\"" }
$ echo foobar | ./wrap_in_quotes
"foobar"
So, if according to the linked answer I'm not able to pass flags to the interpreter, why am I able to pass the -f flag to awk ? I'm running macOS :
$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.12.1
BuildVersion:	16B2657 | Some Unices, most notably macOS (and up until 2005, FreeBSD), will allow for this, while Linux will not, but...
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
361,832 | Only sometimes, I forget to make a backup of a given Linux file such as /etc/rc.local , /etc/rsyslog.conf , /etc/dhcpcd.conf , etc, and later wish I did. Distribution agnostic, is there a good approach to later getting a copy of an unf'd up copy? | While the topic of configuration files backup/versioning might seem simple on the surface, it is one of the hot topics of system/infrastructure administration. Distribution agnostic, to keep automatic backups of /etc as a simple solution you can install etckeeper. By default it commits /etc to a repository/version control system installed on the same system. The commits/backups are by default daily and/or each time there are package updates. The etckeeper package is pretty much present in all Linux distributions. See: https://help.ubuntu.com/lts/serverguide/etckeeper.html or https://wiki.archlinux.org/index.php/Etckeeper It could be argued it is a good standard of the industry to have this package installed. If you do not have etckeeper installed, and need a particular etc file, there are several ways; you might copy it from a similar system of yours, you can ask your package manager to download the installation file or download it by hand, and extract the etc file from there; one of the easiest ways is using mc (midnight commander) to navigate inside packages as if they were directories. You can also use the distribution repositories to get packages; in the case of Debian it is http://packages.debian.org Ultimately, if the etc/configurations are mangled beyond recognition, you always have the option to reinstall the particular package. Move the etc files to a backup name/directory, and, for instance in Debian: apt-get install --reinstall package_name You can also configure and install the source repos for your particular distribution/version, install the source package, and get the etc files from there. https://wiki.debian.org/apt-src (again a Debian example) In some packages, you might also have samples of the configuration files at /usr/share/doc/package_name, which may or may not be fit for use. As a last resort, you may also find etc files in the repositories/GitHub addresses of the corresponding open source projects; just bear in mind that distributions often change default settings and things around. Obviously, none of these alternatives exempt you from having a sound backup policy in place, from which you can retrieve your lost /etc files. Times also move fast, and if following a devops philosophy, you might also choose to discard certain systems altogether and redeploy them in case some files get corrupted; you might also use CI and redeploy the files, for instance from Jenkins. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137912/"
]
} |
361,867 | I want to repeat a command while there's a specific string in its output, which indicates that there was an error. The command is gksu ./installer.run > ./inst.log 2>&1 What I want to do is to repeat it while there's 'string' in ./inst.log . How can I do this from a bash command-line? | You can wrap the command in a shell loop that re-runs it for as long as grep finds the string in the log file (a minimal sketch; replace 'string' with your actual error marker):
while :; do
    gksu ./installer.run > ./inst.log 2>&1
    grep -q 'string' ./inst.log || break
done
grep -q exits with status 0 when the pattern is found, so the loop repeats while the error string is present in ./inst.log and stops as soon as a run completes without it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174627/"
]
} |
361,895 | I am logged into Sun Solaris OS. I want to create and extract a compressed tar file. I tried this normal UNIX command: tar -cvzf file.tar.gz directory1 It is failing to execute in Sun OS with the following error
bash-3.2$ tar -cvzf file.tar.tz directory1
tar: z: unknown function modifier
Usage: tar {c|r|t|u|x}[BDeEFhilmnopPqTvw@[0-7]][bfk][X...] [blocksize] [tarfile] [size] [exclude-file...] {file | -I include-file | -C directory file}... | To avoid creating a temporary intermediate file, you can use this command tar cvf - directory1|gzip -c >file.tar.gz To extract such an archive, pipe the other way round: gzip -dc file.tar.gz | tar xvf - | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/361895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176232/"
]
} |
361,902 | I have a remote server without GUI support. How can I install CentOS 7 there? CentOS 7 is mandatory and I can't switch to another OS or distribution. I get the following text at the end. I am able to mount the CD but I don't know what to do next. FreeBSD has bsdinstall which works in text mode. Debian can also be installed in text mode without any problems. (?- //\ Core is distributed with ABSOLUTELY NO WARRANTY. v_/_ www.tinycorelinux.comtc@box:~$ Switched to clocksource tsc | CentOS 7 has an option to install in text mode. When you see the install CentOS menu option, press the Tab key, add text to the end of any existing installer command line arguments and then press the Return key. This will tell the installer (Anaconda) to install the OS in text mode. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11932/"
]
} |
361,923 | I am trying to identify a strange character I have found in a file I am working with:
$ cat file
�
$ od file
0000000 005353
0000002
$ od -c file
0000000 353 \n
0000002
$ od -x file
0000000 0aeb
0000002
The file is using ISO-8859 encoding and can't be converted to UTF-8:
$ iconv -f ISO-8859 -t UTF-8 file
iconv: conversion from `ISO-8859' is not supported
Try `iconv --help' or `iconv --usage' for more information.
$ iconv -t UTF-8 file
iconv: illegal input sequence at position 0
$ file file
file: ISO-8859 text
My main question is how can I interpret the output of od here? I am trying to use this page which lets me translate between different character representations, but it tells me that 005353 as a "Hex code point" is 卓 which doesn't seem right and 0aeb as a "Hex code point" is ૫ which, again, seems wrong. So, how can I use any of the three options ( 353 , 005353 or 0aeb ) to find out what character they are supposed to represent? And yes, I did try with Unicode tools but it doesn't seem to be a valid UTF character either:
$ uniprops $(cat file)
U+FFFD ‹�› \N{REPLACEMENT CHARACTER} \pS \p{So} All Any Assigned Common Zyyy So S Gr_Base Grapheme_Base Graph X_POSIX_Graph GrBase Other_Symbol Print X_POSIX_Print Symbol Specials Unicode
If I understand the description of the Unicode U+FFFD character, it isn't a real character at all but a placeholder for a corrupted character. Which makes sense since the file isn't actually UTF-8 encoded. | Your file contains two bytes, EB and 0A in hex. It’s likely that the file is using a character set with one byte per character, such as ISO-8859-1 ; in that character set, EB is ë:
$ printf "\353\n" | iconv -f ISO-8859-1
ë
Other candidates would be δ in code page 437 , Ù in code page 850 ... od -x ’s output is confusing in this case because of endianness; a better option is -t x1 which uses single bytes:
$ printf "\353\n" | od -t x1
0000000 eb 0a
0000002
od -x maps to od -t x2 which reads two bytes at a time, and on little-endian systems outputs the bytes in reverse order. When you come across a file like this, which isn’t valid UTF-8 (or makes no sense when interpreted as a UTF-8 file), there’s no fool-proof way to automatically determine its encoding (and character set). Context can help: if it’s a file produced on a Western PC in the last couple of decades, there’s a fair chance it’s encoded in ISO-8859-1, -15 (the Euro variant), or Windows-1252; if it’s older than that, CP-437 and CP-850 are likely candidates. Files from Eastern European systems, or Russian systems, or Asian systems, would use different character sets that I don’t know much about. Then there’s EBCDIC... iconv -l will list all the character sets that iconv knows about, and you can proceed by trial and error from there. (At one point I knew most of CP-437 and ATASCII off by heart, them were the days.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/361923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
361,955 | I have a bash script that runs multiple programs
#!/bin/sh
python program1.py &
python program2.py &
other programs ... &
lastProgram
I run it as ./myscript.sh When I hit Ctrl + C to close the lastProgram it exits and all the other ones keep running in the background. The problem is that the other programs need to be terminated. What is the proper way to handle the closing of all the programs started from the script? | Collect the process IDs, kill the background processes on exit.
#!/bin/bash
killbg() {
        for p in "${pids[@]}" ; do
                kill "$p";
        done
}
trap killbg EXIT
pids=()
background job 1 & pids+=($!)
background job 2... & pids+=($!)
foreground job
Trapping EXIT runs the function when the shell exits, regardless of the reason. You could change that to trap killbg SIGINT to only run it on ^C . This doesn't check if one of the background processes exited before the script tries to shoot them. If they do, you could get errors, or worse, shoot the wrong process. Or kill them by job id. Let's read the output of jobs to find out which ones are still active.
#!/bin/bash
killjobs() {
    for x in $(jobs | awk -F '[][]' '{print $2}' ) ; do
        kill %$x
    done
}
trap killjobs EXIT
sleep 999 &
sleep 1 &
sleep 999 &
sleep 30
If you run background processes that spawn other processes (like a subshell: (sleep 1234 ; echo foo) & ), you need to enable job control with set -m ("monitor mode") for this to work. Otherwise just the lead process is terminated. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226364/"
]
} |
361,961 | I have these two grep regexes grep -e '[Yy].*[Yy].[Ee][Ee]' first.txt and grep -e '[Ee][Ee].*[Yy].*[Yy]' first.txt How do I concatenate these two into a single regex? | By... concatenating the patterns?
grep -e '[Yy].*[Yy].[Ee][Ee][Ee][Ee].*[Yy].*[Yy]' first.txt
Or did you mean essentially doing a logical AND of the two patterns? If the latter, you need to fake it, as while grep has built-in OR ( | ) and NOT ( -v ; [^] ), it does not have a built-in AND. One way is by piping the output of one grep into the other:
grep -e '[Yy].*[Yy].[Ee][Ee]' first.txt | grep '[Ee][Ee].*[Yy].*[Yy]'
The other way is to look for both patterns in series, in either order, with a logical OR (patterns abbreviated for brevity):
grep -Ee 'pattern1.*pattern2|pattern2.*pattern1' input.txt
I find the first to be more succinct and easier to maintain. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/361961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229093/"
]
} |
361,972 | In Ubuntu 14.04, listing the contents of the directory /var/spool/cron with ls -l provides the following permissions on the directories within (irrelevant columns snipped): drwxrwx--T daemon daemon atjobsdrwxrwx--T daemon daemon atspooldrwx-wx--T root crontab crontabs What purpose does setting a sticky bit on a directory without the executable bit serve? | From the manual page for sticky : STICKY DIRECTORIES A directory whose `sticky bit' is set becomes an append-only directory, or, more accurately, a directory in which the deletion of files is restricted. A file in a sticky directory may only be removed or renamed by a user if the user has write permission for the directory and the user is the owner of the file, the owner of the directory, or the super-user. This feature is usefully applied to directories such as /tmp which must be publicly writable but should deny users the license to arbitrarily delete or rename each others' files. Any user may create a sticky directory. See chmod(1) for details about modifying file modes. The upshot of this is that only the owner of a file in a sticky directory can remove the file. In the case of the cron tables, this means that I can't go in there and remove your cron table and replace it with one of my choosing, even though I may have write access to the directory. It is for this reason that /tmp is also sticky. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/361972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186696/"
]
} |
362,031 | I'm trying to filter a part of a file that holds 2 digital certificates. Basically, I want the first part (let's say Cert1) and not the second part (Cert2). Content of the file is:
-----BEGIN CERTIFICATE-----
AAAAAAAAETC
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
AAAAAAAAETC
-----END CERTIFICATE-----
I was under the impression that this would give me the content of Cert1 (the first part between the first BEGIN and the first END) : cat /etc/nginx/cert.pem | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' For some reason, though, it still presents me all the content between the second BEGIN and the second END (basically, nothing changes; all content is the same) . Any pointers? | You can use the following sed command for this task sed '/-----END CERTIFICATE-----/q' /etc/nginx/cert.pem q is a sed command which instructs sed to quit. Therefore sed will print from the beginning of the file and quit when the pattern '-----END CERTIFICATE-----' is encountered. This causes it to stop at the end of the first certificate. Also there is no need to use a pipe to redirect the output of cat to sed. Simply specify the filename in the sed command. Source - http://www.theunixschool.com/2011/09/sed-selective-printing.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/204855/"
]
} |
362,053 | I am a Linux system admin, and I log in to every system of my local network. I don't want my IP to show up via the who command. For example, if someone enters: $ who it reveals my IP. Is there any way to hide my IP from the who Linux command? [EDIT by chrips] This is important for those concerned with their personal utility servers being hacked! Obviously, you would want to hide your current home IP from an attacker lest they find a vector on you! | Most simply you could make the utmp log files non-world readable. This is even mentioned in the utmp man page : Unlike various other systems, where utmp logging can be disabled by removing the file, utmp must always exist on Linux. If you want to disable who(1) then do not make utmp world readable , like this:
sudo chmod go-r /var/log/wtmp /var/run/utmp
who        # shows nothing, not even an error!
sudo who   # still works for root
rudi     :0           2017-04-18 19:08 (console)
So this would disable who completely, not only skip IP addresses. Another idea (maybe a bit silly) to hide only the IPs could be to let your ssh server listen at another port (1234) and on localhost only. Then run a "proxy" (socat, netcat) to forward from public_ip:22 to localhost:1234:
change ssh server config, /etc/ssh/sshd_config:
Port 1234
run a proxy on ssh server machine:
socat TCP-LISTEN:22,fork TCP:localhost:1234
Now all utmp logs ( who , last ) will show the same and useless localhost IP. Note maybe your users could still see the real connections via netstat . Instead of the userspace proxy ( socat ) you could also set up iptables NAT and MASQUERADING rules for the incoming ssh traffic. Or you could always use an extra "ssh hop" to always login from the same IP. This is left as an exercise for the reader. ;) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172333/"
]
} |
362,100 | Do I need to check & create /tmp before writing to a file inside of it? Assume that no one has run sudo rm -rf /tmp because that's a very rare case | The FHS mandates that /tmp exist, as does POSIX, so you can rely on its being there (at least on compliant systems; but really it’s pretty much guaranteed to be present on Unix-like systems). But you shouldn’t: the system administrator or the user may prefer other locations for temporary files. A common convention is to honour the TMPDIR environment variable and fall back to /tmp , e.g. in a shell script: tmpdir=${TMPDIR:-/tmp} See Finding the correct tmp dir on multiple platforms for more details. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/362100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27362/"
]
} |
362,102 | I have a folder called statistics in an Ubuntu server in which data files are regularly stored. How can I rename the statistics folder to backup-xx while re-creating the statistics folder to be available for storing new files? The files in the statistics folder are created by PHP file_put_contents . I prefer renaming the folder, as there are many files in the statistics folder. | mv statistics backup-xx && mkdir statistics This would rename the existing statistics directory to backup-xx , and if that succeeds it would carry on to create a new statistics directory. For a more atomic operation, consider creating a directory statistics-001 (or similar, maybe by replacing 001 with today's date in a suitable format), and a symbolic link to it called statistics :
mkdir statistics-001
ln -s statistics-001 statistics
When you want to "rotate" this so that new data goes into a clean directory, create the directory first, then recreate the statistics link to it:
mkdir statistics-002
ln -sfn statistics-002 statistics
mv statistics-001 backup-001
(The -n flag makes ln replace the statistics symlink itself; without it, GNU ln would follow the existing link and create the new link inside the directory it points to.) This way, any program writing to the statistics directory (i.e. the directory that this symbolic link points to) will never 1 fail to find it. If you need special permissions or ownership set on the directory that statistics points to, set these before (re-)creating the link. 1 Or rather, this way, the time that a program would be without a valid target directory is minimized as much as practically possible using standard Unix tools. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362102",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10780/"
]
} |
362,115 | I'm about to run a python script on Ubuntu on a VPS. It is a machine learning training process, so it takes a long time to train. How can I close PuTTY without stopping that process? | You have two main choices: Run the command with nohup . This will disassociate it from your session and let it continue running after you disconnect:
nohup pythonScript.py
Note that the stdout of the command will be appended to a file called nohup.out unless you redirect it ( nohup pythonScript.py > outfile ). Use a screen multiplexer like tmux . This will let you disconnect from the remote machine but then, next time you connect, if you run tmux attach again, you will find yourself in exactly the same session. The command will still be running (it will continue running when you log out) and you will be able to see its stdout and stderr just as though you'd never logged out:
tmux
pythonScript.py
Once you've launched that, just close the PuTTY window. Then, connect again the next day, run tmux attach again and you're back where you started. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/362115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229191/"
]
} |
362,130 | I tend to quote command substitutions as shown below even when assigning their output to a variable: var="$(command)" Is that actually needed though? When does it break? The accepted answer here claims: DIRNAME="$(dirname $FILE)" will not do what you want if $FILE contains whitespace or globbing characters [?*. The link points to Grey Cat Wiki's great page about quoting but that page doesn't mention quoting command substitutions specifically. And while quoting the variable is clearly needed, quoting the command substitution itself doesn't seem to be. However, the same post concludes with: DIRNAME="$(dirname "$FILE")" is the recommended way. You can replace DIRNAME= with a command and a space without changing anything else, and dirname receives the correct string. Which is what I've always thought as well and have often corrected posts here that didn't quote it. However, the wiki page linked to above also claims that: There are a few cases where double quotes may be safely omitted: On the right-hand side of a simple assignment. You may write foo=$bar without quotes. This is POSIX compliant. [. . . ] While var=$(command) isn't really a "simple" assignment, I was nevertheless unable to find a case where the quotes were actually necessary:
$ var=$(echo "foo bar baz") ## whitespace works
$ echo "$var"
foo bar baz
$ var=$(printf "foo\nbar * baz") ## so do globbing characters
$ echo "$var"
foo
bar * baz
$ var1="foo\nbar * baz"
$ var=$(printf "$var1") ## printing a variable doesn't make any difference
$ echo "$var"
foo
bar * baz
$ var=$(printf '%s\n' "$var1")
$ echo "$var"
foo\nbar * baz
$ var=$(printf -- '-e %s\n' "$var1") ## strings starting with - also work
$ echo "$var"
-e foo\nbar * baz
Of course, the quotes are absolutely necessary if the command substitution is being used directly for things like command1 "$(command2)" , but that doesn't seem to be the case when assigning to a variable. So, what am I missing? Are the quotes ever needed? What corner case will quoting a command substitution when assigning its return value to a variable protect you from? Or is it always OK to not quote a command substitution if it is the right-hand side of a variable assignment operation? | As one reference, Bash's manual is clear on this : A variable may be assigned to by a statement of the form name=[value] If value is not given, the variable is assigned the null string. All values undergo tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, and quote removal (detailed below). [...] Word splitting is not performed , with the exception of "$@" as explained below. Filename expansion is not performed. No word splitting, no filename expansion, therefore no need for quotes. As for POSIX, section 2.9.1 Simple Commands : 2. The words that are not variable assignments or redirections shall be expanded. If any fields remain following their expansion, the first field shall be considered the command name and remaining fields are the arguments for the command. [...] 4. Each variable assignment shall be expanded for tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal prior to assigning the value. I'm not sure if that's supposed to be interpreted to mean that field splitting happens only for expansions done at step 2? Step 4 does not mention field splitting, though the section on Field splitting also doesn't mention variable assignments as an exception to producing multiple fields. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
362,133 | I am running Arch Linux (on a Raspberry Pi 3) and tried to connect both the Ethernet and the Wi-Fi to the same network. route shows me the following:
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    1024   0        0 eth0
default         gateway         0.0.0.0         UG    1024   0        0 wlan0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 wlan0
gateway         0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
gateway         0.0.0.0         255.255.255.255 UH    1024   0        0 wlan0
ip addr shows me the following:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether b8:27:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.103/24 brd 192.168.1.255 scope global dynamic eth0
       valid_lft 85717sec preferred_lft 85717sec
    inet6 fe80::ba27:ebff:fee4:4f60/64 scope link
       valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether b8:27:YY:YY:YY:YY brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.102/24 brd 192.168.1.255 scope global dynamic wlan0
       valid_lft 85727sec preferred_lft 85727sec
    inet6 fe80::ba27:ebff:feb1:1a35/64 scope link
       valid_lft forever preferred_lft forever
Both wlan0 and eth0 interfaces were able to get an IP address from the router. But it turns out that only one of these interfaces ever works. The other interface cannot be pinged and is not connectable. Usually it's the Ethernet that works but sometimes it's the Wi-Fi. What's happening? What can I do to make this work? | As you have found out, from the routing perspective, while possible, it is not ideal to have addresses from the same network in different interfaces. Routing expects a different network per interface, and ultimately one of them will take precedence over the other in routing, since they overlap. The advised solution for having more than one interface connected to the same network is to aggregate them together in a bridge interface. The bridge interface will "own" the IP address, and the actual real interfaces are grouped as a virtual single entity under br0 .
allow-hotplug eth0
iface eth0 inet manual

allow-hotplug wlan0
iface wlan0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0 wlan0
Debian Linux: Configure Network Interfaces As A Bridge / Network Switch | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362133",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228819/"
]
} |
362,227 | Consider a situation that I run these commands in my current shell or I put them inside .bashrc :
alias source='echo hi'
alias .='echo hi'
alias unalias='echo hi'
Or function source(){ echo hi; } , etc. In case of binary commands we can use absolute path like: /bin/ls , however how can I specifically run any of these shell built-in commands inside my current shell? | Bash has the command builtin for that:
builtin: builtin [shell-builtin [arg ...]]
    Execute shell builtins.
    Execute SHELL-BUILTIN with arguments ARGs without performing command lookup.
E.g.
$ cat > hello.sh
echo hello
$ source() { echo x ; }
$ source hello.sh
x
$ builtin source hello.sh
hello
Nothing prevents you from overriding builtin , however. Another way to work around aliases (but not functions) is to quote (part of) the word:
$ alias source="echo x"
$ source hello.sh
x hello.sh
$ \source hello.sh
hello | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64321/"
]
} |
362,229 | I was looking at the man page for the rm command on my MacBook and I noticed the the following: -W Attempt to undelete the named files. Currently, this option can only be used to recover files covered by whiteouts. What does this mean? What is a "whiteout"? | A whiteout is a special marker file placed by some "see-through" higher-order filesystems (those which use one or more real locations as a basis for their presentation), particularly union filesystems, to indicate that a file that exists in one of the base locations has been deleted within the artificial filesystem even though it still exists elsewhere. Listing the union filesystem won't show the whited-out file. Having a special kind of file representing these is in the BSD tradition that macOS derives from: macOS uses st_mode bits 0160000 to mark them . Using ls -F , those files will be marked with a % sign , and ls -W will show that they exist (otherwise, they're generally omitted from listings). Many union systems also make normal files with a special name to represent whiteouts on systems that don't support those files. I'm not sure that macOS exposes these itself in any way, but other systems from its BSD heritage do and it's possible that external filesystem drivers could use them. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/362229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
362,230 | The box is a HP microserver, running Ubuntu 16.04. I recently "upgraded" the boot device to a 64GB SSD. Additionally there is a 1TB SATA drive. usually it boots up with /dev/sda1 as the primary partition (on the SSD) and /dev/sda5 as swap, and /dev/sdb1 pointing to the partition on the 1Tb HDD, that is mounted to /mnt/media0 . The problem is, it sometimes changes all that, and the SSD is now /dev/sdb1 and /dev/sdb5 and the media partition is now /dev/sda1 . This, of course, causes the swap and media mounts to fail as they are listed in /etc/fstab using their previous /dev/sd* names. So, I have: Checked the BIOS, and it consistently lists the 64GB SSD as the first drive and the 1TB IDE as the 2nd. I tried to change /etc/fstab to reference the media drive by volume label, but that causes Ubuntu to fail on startup and put me into a recovery mode. I tried to change /etc/fstab to reference the swap, and (ext4) media partitions using UUID (as, in fact, it lists the primary partition) but I then encounter the 2nd problem I have. When I execute the following to find the UUIDs of the various partitions...
ls /dev/disk/by-uuid
blkid
both only list the 1 entry – the primary partition's UUID. I can only see the UUID of the media partition using (on boots where it does, in fact, get assigned sdb1 obviously) tune2fs -l /dev/sdb1 but again, if I use that UUID in /etc/fstab then Ubuntu fails to boot and goes into recovery mode. So, my questions are: Is there any way to get /dev/sda and /dev/sdb to stop swapping between drives? How can I get the system to see the UUIDs of the other partitions so I can use them in fstab ? and/or is there any other way I can reliably get my swap and media partitions mounted? | You could use the "disk/by-id" names in /etc/fstab , see ls -l /dev/disk/by-id Note that these device names may also be used in other files (initrd, grub configs). So you may need to update your grub config and recreate the initrd too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229271/"
]
} |
362,241 | I have a HDD which I don't entirely trust, but still want to use (burstcoin mining, where if I get a bad block in a file, I'll only lose a few cents). How can I tell btrfs to mark certain blocks as bad (eg from badblocks output)? If I can't pre-mark blocks as bad, will any bad blocks identified by btrfs scrub be avoided in future if the file using them is deleted? | Sadly, no. btrfs doesn't track bad blocks and btrfs scrub doesn't prevent the next file from hitting the same bad block(s). This btrfs mailing list post suggests to use ext4 with mkfs.ext4 -c (this "builds a bad blocks list and then won't use those sectors" ).The suggestion to use btrfs over mdadm 3.1+ with RAID0 will not work . It seems that LVM doesn't support badblock reallocation . A work-around is to build a device excluding blocks known to be bad: btrfs over dmsetup . The btrfs Project Ideas wiki says: Not claimed — no patches yet — Not in kernel yet Currently btrfs doesn't keep track of bad blocks, disk blocks that are very likely to lose data written to them. Btrfs should accept a list in badblocks' output format, store it in a new btree (or maybe in the current extent tree, with a new flag), relocate whatever data the blocks contain, and reserve these blocks so they can't be used for future allocations. Additionally, scrub could be taught to test for bad blocks when a checksum error is found. This would make scrub much more useful; checksum errors are generally caused by the disk, but while scrub detects afflicted files, which in a backup scenario gives the opportunity to recreate them, the next file to reuse the bad blocks will just start getting errors instead. These two items would match an ext4 feature (used through e2fsck). Please comment if the status changes and I will update this answer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
362,278 | How can I use awk to count the total number of input lines in a file? | The special variable NR holds the current line number. Once the entire file has been processed, it will hold the total number of lines of that file. So, you can do: awk 'END{print NR}' file Of course, that is a bit silly when there's a program designed specifically for this: wc -l file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221298/"
]
} |
362,292 | I'm trying to look for a file called Book1 . In my test I'm trying to look for the aforementioned file and in this test, I don't know where that file is located. I tried find / -iname book1 but there is no output. How do I find my file called book1 using the command line if I don't know where the file is located? EDIT: My scenario is described in more detail below: The file extension is unknown The exact name (i.e. Capitalized letters, numbers, etc.) is unknown The location of the file is unknown | First, an argument to -iname is a shell pattern . You can read more about patterns in the Bash manual . The gist is that in order for find to actually find a file the filename must match the specified pattern. To make a case-insensitive string book1 match Book1.gnumeric you either have to add * so it looks like this: find / -iname 'book1*' or specify the full name: find / -iname 'Book1.gnumeric' Second, -iname will make find ignore the filename case so if you specify -iname book1 it might also find Book1 , bOok1 etc. If you're sure the file you're looking for is called Book1.gnumeric then don't use -iname but -name , it will be faster: find / -name 'Book1.gnumeric' Third, remember about quoting the pattern as said in the other answer . And last - are you sure that you want to look for the file everywhere on your system? It's possible that the file you're looking for is actually in your $HOME directory if you worked on that or downloaded it from somewhere. Again, that may be much faster. EDIT : I noticed that you edited your question. If you don't know the full filename, capitalization and location indeed you should use something like this: find / -iname 'book1*' I also suggest putting 2>/dev/null at the end of the line to hide all *permission denied* and other errors that will be present if you invoke find as a non-root user: find / -iname 'book1*' 2>/dev/null And if you're sure that you're looking for a single file, and there is only a single file on your system that matches the criteria, you can tell find to exit after finding the first matching file: find / -iname 'book1*' -print -quit 2>/dev/null | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/362292",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224726/"
]
} |
362,338 | I have a file with information in it:
Name Rate Hours
Clark 8.5 42
Sarah 18.5 19
Joe 10 25
Paul 12 5
I want to calculate the total pay for each employee. But I cannot get my loop to work because I am unsure of what i <= should be due to the headers ( Name , Rate , Hours ) in the beginning of the file. So far I have this: awk 'BEGIN{ total = 0;}{ rate = $2; hours = $3; for (i = 1; i<= NR; i++) { total = rate * hours; }}END { print "Total = $" total;}' testfile.dat Thanks in advance for the help! | As an alternative, something like this works ok and has a nice output:
awk -v OFS="\t" 'NR==1{$4="total"}NR>1{$4=$2*$3}1' testfile.dat
#Output:
Name    Rate    Hours   total
Clark   8.5     42      357
Sarah   18.5    19      351.5
Joe     10      25      250
Paul    12      5       60 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221298/"
]
} |
362,389 | The Wikipedia page on Unix Signals says: SIGWINCH The SIGWINCH signal is sent to a process when its controlling terminal changes its size (a win dow ch ange). Is it possible to send SIGWINCH from the keyboard? If so, how? | Use pgrep myprogram to get the pid of myprogram , then send the signal from your shell:
kill -SIGWINCH pid
You may use kill -l to get the list of supported signals in numerical form, e.g.:
kill -28 1234 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
362,439 | I'm using a bash script on Amazon Linux. When I want to redirect stderr and stdout to a file, I can run ./my_process.pl &>/tmp/out.txt Is there a way I can redirect this output to a file and continue to see it on the console (in my shell)? How is this done? | Yes, by using tee : ./my_process.pl 2>&1 | tee /tmp/out.txt Note that using &>file for redirecting both standard output and standard error to a file is an extension to the POSIX standard that is accepted by some shells. It is safer to use >file 2>&1 . In this case, &> can not be used at all since we're not redirecting to a file. In bash , one may also do ./my_process.pl |& tee /tmp/out.txt which is equivalent to the above. In ksh93 , |& means something completely different though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362439",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
362,441 | Preface: Every couple of days there comes a question of this type, which is easy to solve with sed , but takes time to explain. I'm writing this question and answer, so I can later refer to this generic solution and only explain the adaptation for the specific case. Feel free to contribute. I have files with variable definitions. Variables consist of uppercase letters or underscore _ and their values follow after the := . The values can contain other variables. This is Gnom.def :
NAME:=Gnom
FULL_NAME:=$FIRST_NAME $NAME
FIRST_NAME:=Sman
STREET:=Mainstreet 42
TOWN:=Nowhere
BIRTHDAY:=May 1st, 1999
Then there is another file form.txt with a template form:
$NAME
Full name: $FULL_NAME
Address: $STREET in $TOWN
Birthday: $BIRTHDAY
Don't be confused by $NAMES
Now I want a script which replaces the variables (marked with $ and the identifier) in the form by the definitions in the other file, recursively, if necessary, so I get this text back:
Gnom
Full name: Sman Gnom
Address: Mainstreet 42 in Nowhere
Birthday: May 1st, 1999
Don't be confused by $NAMES
The last line is to ensure that no substrings of variables get replaced accidentally. | The basic idea to solve problems like this is to pass both files to sed . First the definitions, which are stored in the hold space of sed . Then each line of the other file gets the hold space appended and each occurrence of a variable which can be found repeated in the appended definitions gets replaced. Here is the script:
sed '/^[A-Z_]*:=.*/{H;d;}
     G
     :b
     s/$\([A-Z_]*\)\([^A-Z_].*\n\1:=\)\([^[:cntrl:]]*\)/\3\2\3/
     tb
     P
     d' Gnom.def form.txt
And now the detailed explanation:
/^[A-Z_]*:=.*/{H;d;}
This collects the definitions to the hold space. /^[A-Z_]*:=.*/ selects all lines starting with a variable name and the sequence := . On these lines the commands in {} are performed: The H appends them to the hold space, the d deletes them and starts over, so they won't get printed. If you can't assure that all lines in the definition file follow this pattern, or if lines in the other file could match the given pattern, this part needs to be adapted, like explained later.
G
At this point of the script, only lines from the second file are processed. The G appends the hold space to pattern space, so we have the line to be processed with all definitions in the pattern space, separated by newlines.
:b
This starts a loop.
s/$\([A-Z_]*\)\([^A-Z_].*\n\1:=\)\([^[:cntrl:]]*\)/\3\2\3/
This is the key part, the replacement. Right now we have something like
At the $FOO<newline><newline>FOO:=bar<newline>BAR:=baz
       ----=======================###
in the pattern space. (Detail: there are two newlines before the first definition, one produced by appending to the hold space, another by appending to the buffer space.) The part underlined with ---- matches $\([A-Z_]*\) . The \(\) makes it possible to backreference to that string later on. \([^A-Z_].*\n\) matches the part underlined with === , which is everything up to the backreference \1 . Starting with a non-variable character ensures we don't match substrings of a variable. Surrounding the backreference with a newline and := makes sure that a substring of a definition will not match. Finally, \([^[:cntrl:]]*\) matches the ### part, which is the definition. Note, that we assume the definition has no control characters. If this should be possible, you can use [^\n] with GNU sed or do a workaround for POSIX sed . Now the $ and the variable name get replaced by the variable value \3 , the middle part and definition are left as they were: \2\3 .
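To make this concrete (continuing the example above): after one successful substitution the pattern space reads At the bar<newline><newline>FOO:=bar<newline>BAR:=baz , that is, the variable reference in the line part has been replaced by its value bar , while the definition section is left intact, so the next round of the loop can still resolve any variables that remain.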
tb
If a replacement has been made, the t command loops to mark b and tries another replacement.
P
If no further replacements were possible, the uppercase P prints everything up to the first newline (thus, the definition section will not get printed) and d will delete the pattern space and start the next cycle. Done.
Limitations
You can do a nasty thing like including FOO:=$BAR and BAR:=$FOO in the definition file and make the script loop forever. You can define a processing order to avoid this, but it will make the script more difficult to understand. Leave this out, if your script doesn't need to be idiot proof. If the definition can contain control characters, after the G , we can exchange newline with another character like y/\n#/#\n/ and repeat this before printing. I don't know a better workaround. If the definition file can contain lines with different format or the other file can contain lines with definition format, we need a unique separator between both files, either as last line of the definition file or as first line of the other file or as separate file you pass to sed between the other files. Then you have one loop to collect the definitions until the separator line is met, then do a loop for the lines of the other file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216004/"
]
} |
362,550 | I want to remove a line from a file which contains a particular character only once; if it is present more than once or is not present, then keep the line in the file. For example:
DTHGTY
FGTHDC
HYTRHD
HTCCYD
JUTDYC
Here, the character which I want to remove is C so, the command should remove lines FGTHDC and JUTDYC because they have C exactly once. How can I do this using either sed or awk ? | In awk you can set the field separator to anything. If you set it to C , then you'll have one more field than there are occurrences of C . So if you say awk -F'C' '{print NF}' <<< "C1C2C3" you get 4 : the string contains 3 C s, and hence 4 fields. You want to remove lines in which C occurs exactly once. Taking this into consideration, in your case you will want to remove those lines in which there are exactly two C -fields. So just skip them:
$ awk -F'C' 'NF!=2' file
DTHGTY
HYTRHD
HTCCYD | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229483/"
]
} |
362,559 | Where can I find a complete list of the keyboard combinations which send signals in Linux? Eg:
Ctrl + C - SIGINT
Ctrl + \ - SIGQUIT | The Linux N_TTY line discipline only sends three different signals: SIGINT, SIGQUIT, and SIGTSTP. By default the following control characters produce the signals:
Ctrl + C - SIGINT
Ctrl + \ - SIGQUIT
Ctrl + Z - SIGTSTP | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/362559",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
362,571 | Given a text file containing a sorted list of paths, how can I remove all the paths that are redundant due to having their parent (immediate or not) also in the list? For example:
/aaa/bbb
/aaa/bbb/ccc
/ddd/eee
/fff/ggg
/fff/ggg/hhh/iii
/jjj/kkk/lll/mmm
/jjj/kkk/lll/mmm/nnn
Should reduce to:
/aaa/bbb
/ddd/eee
/fff/ggg
/jjj/kkk/lll/mmm
I've tried using substrings in awk but the parent paths are not guaranteed to be at the same level each time so I couldn't get it to work. | I think this should do it. Modified input file to add a couple more cases
$ cat ip.txt
/aaa/bbb
/aaa/bbbd
/aaa/bbb/ccc
/ddd/eee
/fff/ggg
/fff/ggg/hhh/iii
/jjj/kkk/lll/mmm
/jjj/kkk/lll/mmm/nnn
/jjj/kkk/xyz
Using awk
$ awk '{for (i in paths){if (index($0,i"/")==1) next} print; paths[$0]}' ip.txt
/aaa/bbb
/aaa/bbbd
/ddd/eee
/fff/ggg
/jjj/kkk/lll/mmm
/jjj/kkk/xyz
paths[$0] is the reference with input line as key
for (i in paths) every line is compared against all saved keys
if (index($0,i"/")==1) next if input line matches with a saved key appended with / at start of line, then skip that line
/ is used to avoid /aaa/bbbd matching against /aaa/bbb | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229491/"
]
} |
362,614 | I want to create a system-wide directory that contains application-specific (read-write) data (like log files, configuration and other app-specific metadata). After reading a bit more about the Linux file system, I thought about using /var/app_name/, but then I found out that some of the subdirs there are temporary (not persistent across restarts, like run, log, tmp). How significant is this? I mean, should I use another directory (like /home/app_name/) or is using /var/app_name/ OK? | From the Filesystem Hierarchy Standard:
Applications must generally not add directories to the top level of /var. Such directories should only be added if they have some system-wide implication, and in consultation with the FHS mailing list.
You should use /etc/app_name/ to store config files and other stuff for your program, and /var/log/app_name/ to store its logfiles. For the data used by the application, you can store:
in /var/lib/app_name/ the persistent data and metadata
in /var/cache/app_name/ any app cache that can safely be deleted
in /var/spool/app_name/ the data that awaits processing
Definitely do not use /home/app_name/, which is reserved for the home directory of user app_name. If your program needs to create a specific user to run as, that'll be its place.
About your question in the comment: Linux neither deletes nor rotates logs automatically for anything you put into /var/log/. In fact, admins often have the opposite problem of logs filling up all the space... So it's up to you to delete or rotate logfiles; this is done via logrotate or a custom cron job.
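As an illustration, here is a minimal logrotate policy such an application could ship (the path and the seven-rotation retention are assumptions; a file like this would typically go in /etc/logrotate.d/app_name):
/var/log/app_name/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
| {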
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2787/"
]
} |
362,642 | Is there a standard location in Linux for holding source files, for example OpenSSL? I am building Nginx from source with a non-default version of OpenSSL. I need to download and untar OpenSSL, and I did it in my home directory. Now I wonder: is there a standard location in Linux, maybe /opt? | Whenever you ask yourself something like this, check out the Filesystem Hierarchy Standard (FHS). There, you will find the following entry:
usr/src : Source code (optional)
Purpose: Source code may be placed in this subdirectory, only for reference purposes
So you can put your source files in subdirectories of /usr/src. That said, this is an optional directory, so you can really keep them wherever you like. Source code is not relevant after you've compiled it into an executable, so the system will never require the source of something to be accessible at a specific location. In conclusion: /usr/src is a pretty standard location, but feel free to choose your own if you prefer. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/362642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154659/"
]
} |
362,650 | I added the following to my ~/.bashrc:
export PS1="\e[0;35m[\u@\h \W]\$ \e[m "
echo -e "\e[0;35mYOU ARE ON THE LIVE SERVER !!\e[0m"
Sadly, now, every time I paste something long into bash it goes squiffy: ghost tab characters appear all over the screen and lines eat each other. Does anyone know why? | I also had this problem in the past of ANSI colour codes messing up command-line navigation; you need to wrap the ANSI codes in \[ \] in order for the shell to know how (not) to count them as part of the prompt. As in:
export PS1="\[\e[0;35m\][\u@\h \W]\$\[\e[m\] "
(The echo line can stay as it was; \[ and \] are only meaningful inside PS1.)
Some explanation as to why the shell needs \[ and \]: to draw the prompt at the correct positions in the character matrix of a terminal, the shell needs to know the correct length of the prompt string, i.e. the number of printable characters, not control characters or sequences. However, the shell doesn't know which character sequences the terminal considers printable. Therefore one needs to provide hints to the shell to distinguish between printable and non-printable sequences, which is the purpose of \[ and \]. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121189/"
]
} |
362,686 | According to the Intel security-center post dated May 1, 2017, there is a critical vulnerability on Intel processors which could allow an attacker to gain privilege (escalation of privilege) using AMT, ISM and SBT. Because AMT has direct access to the computer's network hardware, this hardware vulnerability will allow an attacker to access any system.
There is an escalation of privilege vulnerability in Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology firmware versions 6.x, 7.x, 8.x, 9.x, 10.x, 11.0, 11.5, and 11.6 that can allow an unprivileged attacker to gain control of the manageability features provided by these products.
Intel have released a detection tool, available for Windows 7 and 10. I am using information from dmidecode -t 4, and by searching on the Intel website I found that my processor uses Intel® Active Management Technology (Intel® AMT) 8.0.
Affected products: The issue has been observed in Intel manageability firmware versions 6.x, 7.x, 8.x, 9.x, 10.x, 11.0, 11.5, and 11.6 for Intel® Active Management Technology, Intel® Small Business Technology, and Intel® Standard Manageability. Versions before 6 or after 11.6 are not impacted.
The description: An unprivileged local attacker could provision manageability features gaining unprivileged network or local system privileges on Intel manageability SKUs: Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology (SBT).
How can I easily detect and mitigate the Intel escalation of privilege vulnerability on a Linux system? | The clearest post I've seen on this issue is Matthew Garrett's (including the comments). Matthew has now released a tool to check your system locally: build it, run it with
sudo ./mei-amt-check
and it will report whether AMT is enabled and provisioned and, if it is, the firmware versions (see below). The README has more details. To scan your network for potentially vulnerable systems, scan ports 623, 664, and 16992 to 16995 (as described in Intel's own mitigation document); for example
nmap -p16992,16993,16994,16995,623,664 192.168.1.0/24
will scan the 192.168.1/24 network and report the status of all hosts which respond. Being able to connect to port 623 might be a false positive (other IPMI systems use that port), but any open port from 16992 to 16995 is a very good indicator of enabled AMT (at least if they respond appropriately: with AMT, that means an HTTP response on 16992 and 16993, the latter with TLS). If you see responses on ports 16992 or 16993, connecting to those and requesting / over HTTP will return a response whose Server line contains "Intel(R) Active Management Technology" on systems with AMT enabled; that same line will also contain the version of the AMT firmware in use, which can then be compared with the list given in Intel's advisory to determine whether it's vulnerable. See CerberusSec's answer for a link to a script automating the above.
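A hedged example of that manual check, assuming a host at 192.168.1.10 with AMT listening on port 16992 (the firmware version shown is purely illustrative):
$ curl -s -D - -o /dev/null http://192.168.1.10:16992/ | grep -i '^Server:'
Server: Intel(R) Active Management Technology 8.1.71
The number at the end of the Server line is the firmware version to compare against the advisory.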
There are two ways to fix the issue “properly”:
upgrade the firmware, once your system's manufacturer provides an update (if ever);
avoid using the network port providing AMT, either by using a non-AMT-capable network interface on your system, or by using a USB adapter (many AMT workstations, such as C226 Xeon E3 systems with i210 network ports, have only one AMT-capable network interface; the rest are safe. Note that AMT can work over wi-fi, at least on Windows, so using built-in wi-fi can also lead to compromise).
If neither of these options is available, you're in mitigation territory. If your AMT-capable system has never been provisioned for AMT, then you're reasonably safe; enabling AMT in that case can apparently only be done locally, and as far as I can tell requires using your system's firmware or Windows software. If AMT is enabled, you can reboot and use the firmware to disable it (press Ctrl P when the AMT message is displayed during boot). Basically, while the privilege vulnerability is quite nasty, it seems most Intel systems aren't actually affected. For your own systems running Linux or another Unix-like operating system, escalation probably requires physical access to the system to enable AMT in the first place. (Windows is another story.) On systems with multiple network interfaces, as pointed out by Rui F Ribeiro, you should treat AMT-capable interfaces in the same way as you'd treat any administrative interface (IPMI-capable, or the host interface for a VM hypervisor) and isolate them on an administrative network (physical or VLAN). You cannot rely on a host to protect itself: iptables etc. are ineffective here, because AMT sees packets before the operating system does (and keeps AMT packets to itself). VMs can complicate matters, but only in the sense that they can confuse AMT and thus produce confusing scanning results if AMT is enabled. amt-howto(7) gives the example of Xen systems where AMT uses the address given to a DomU over DHCP, if any, which means a scan would show AMT active on the DomU, not the Dom0... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
362,731 | I'd like to identify which process a window belongs to in Wayland. Is there anything like xprop for X that allows the user to pick a window by clicking and outputs all window details, including PID? | Good news, there IS something like this built into GNOME Shell, and unlike xprop it works with both Xorg and Wayland. Ultimately this may fall into the realm of other tooling if you're using KDE, i3, or something else. To begin with, press Alt + F2 , which brings up the run dialog. Once it is up, issue the command lg (for "looking glass"). This brings up the Looking Glass window, from which we can extract window information. Select "Windows" from the top right corner of Looking Glass. From there, you'll see a list of windows; click on the name of the window you want to identify. Choosing gedit as an example, the top line of the output reads:
Inspecting object: object instance proxy GType: MetaWindowX11 ...
The "GType" will be one of MetaWindowX11 or MetaWindowWayland. This info comes as per https://fedoraproject.org/wiki/How_to_debug_Wayland_problems | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46334/"
]
} |
362,768 | I want to tar four directories which contain a large number of small files, using a shell script. Because of this the script takes too long to execute, so I want to make these 4 tar commands run in parallel, hoping to make better use of the available resources. Commands that I am currently using:
tar cf - /ebs/uat/uatappl | gzip -c > /ebs/backup/uatappl.tar.gz
tar cf - /ebs/uat/uatcomn | gzip -c > /ebs/backup/uatcomn.tar.gz
tar cf - /ebs/uat/uatora | gzip -c > /ebs/backup/uatora.tar.gz
tar cf - /ebs/uat/uatdata | gzip -c > /ebs/backup/uatdata.tar.gz | You can put all the tars in the background like this:
tar cf - /ebs/uat/uatappl | gzip -c > /ebs/backup/uatappl.tar.gz &
tar cf - /ebs/uat/uatcomn | gzip -c > /ebs/backup/uatcomn.tar.gz &
tar cf - /ebs/uat/uatora | gzip -c > /ebs/backup/uatora.tar.gz &
tar cf - /ebs/uat/uatdata | gzip -c > /ebs/backup/uatdata.tar.gz &
But be aware that you must have enough processor power and fast disks, otherwise the concurrency will make the total execution take longer than running the archives one after the other.
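One caveat worth adding: with only & the script continues immediately, so if anything after these lines depends on the archives being complete, the script should block first. A minimal sketch (the same four pipelines as above, plus a wait):
tar cf - /ebs/uat/uatappl | gzip -c > /ebs/backup/uatappl.tar.gz &
tar cf - /ebs/uat/uatcomn | gzip -c > /ebs/backup/uatcomn.tar.gz &
tar cf - /ebs/uat/uatora | gzip -c > /ebs/backup/uatora.tar.gz &
tar cf - /ebs/uat/uatdata | gzip -c > /ebs/backup/uatdata.tar.gz &
wait   # returns once all four background pipelines have finished
| {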
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176232/"
]
} |
362,793 | I'm trying to get the name of the DNS server 4.2.2.1. Using host 4.2.2.1 I get this output:
1.2.2.4.in-addr.arpa domain name pointer a.resolvers.level3.net
In my script, I do this as:
name="$($host $server)"
How can I use sed/awk on $name to only get a.resolvers.level3.net, keeping in mind that I'll use this on completely different servers, so I can't just grep a.resolvers out of the variable? | Another option is to slice the string:
echo ${name##* }
This will slice the string and keep the part starting from the last space to the end.
${name  <-- from name
##      <-- trim the front
*       <-- matches anything
' '     <-- until the last ' '
}
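A quick demonstration, using the host output from the question as the value of name:
name='1.2.2.4.in-addr.arpa domain name pointer a.resolvers.level3.net'
echo "${name##* }"    # prints: a.resolvers.level3.net
| {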
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229662/"
]
} |
362,800 | The charmap file /usr/share/i18n/charmaps/UTF-8.gz has this line:
<U3400>..<U343F> /xe3/x90/x80 <CJK Ideograph Extension A>
The man page for charmap(5) only says that it means a range. Then I found the spec, but it says that the number in the character name is supposed to be in decimal, not hex, and it uses 3 dots as opposed to 2 in the man page. So, how should I interpret character ranges in charmap files? Especially if I see something like
<U3400>..<U3430> /xe3/x90/x80 <CJK Ideograph Extension A>
then is the range in decimal or hex? | glibc allows three-dot decimal ranges (as in POSIX) and two-dot hexadecimal ranges. This doesn't appear to be documented anywhere, but we can see it in the source code. This is not defined portable behaviour, but an extension of glibc and possibly others. If you're writing your own files, use decimal. Let's confirm that this is the actual behaviour of glibc. When processing a range, glibc uses:
if (decimal_ellipsis)
  while (isdigit (*cp) && cp >= from)
    --cp;
else
  while (isxdigit (*cp) && cp >= from)
    {
      if (!isdigit (*cp) && !isupper (*cp))
        lr_error (lr, _("\
hexadecimal range format should use only capital characters"));
      --cp;
    }
where isxdigit validates a hex digit, and isdigit decimal. Later, it branches the conversion to integer of the consumed substring in the same way and carries on as you'd expect. Earlier, it has determined the kind of ellipsis in question during parsing, obtained from the lexer. The UTF-8 charmap file is mechanically generated from unicode.org's UnicodeData.txt, creating 64-codepoint ranges with two dots. I suppose that this convenient auto-generation is at least partially behind the extension, but I don't know. Earlier versions of glibc also generated it, but using a different program and the same format. Again, this doesn't appear to be documented anywhere, and since it's auto-generated right next to where it's used it conceivably could change, but I imagine it will be stable. If given something like
<U3400>..<U3430> /xe3/x90/x80 <CJK Ideograph Extension A>
then it is a hexadecimal range, because it uses two dots. With three dots, it would be a POSIX decimal range. If you're on another system that doesn't have this extension, it would just be a syntax error. A portable character map file should only use the decimal ranges. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229660/"
]
} |
362,828 | How do I make subtitles appear larger or smaller with mpv? The subtitles are in .srt format most of the time, but sometimes they are embedded in the movie itself. Is there a way to do that? Also, is there a default configuration variable I could set so that subtitles play uniformly, using my own fonts and weights etc.? | The manual has an entire section about subtitles. Two relevant options:
--sub-scale=<0-100>   just scale them
--sub-ass-force-style=<[Style.]Param=Value[,...]>   force a specific style
Add these to ~/.mpv/config after removing the leading double dashes (--). All of that only works for non-image-based subtitle formats.
Edit: @cipricus points out that you can set shortcuts for increasing/decreasing subtitle size and position in the file ~/.config/mpv/input.conf
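A sketch of what such bindings could look like (the key choices are arbitrary assumptions; add sub-scale is mpv's generic property-adjustment command applied to the sub-scale option above):
# ~/.config/mpv/input.conf
F add sub-scale 0.1     # larger subtitles
G add sub-scale -0.1    # smaller subtitles
| {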
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
362,833 | So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won't fail to fork() or suffer from any of the other usual OOM issues). I found the OOM killer, which I understand is useful, but which doesn't really do what I'd need to do. Ideally, if I'm running out of memory, I want to know why. I suppose I could write my own program that runs on startup and uses a fixed amount of memory, then only does stuff once it gets informed of low memory by the kernel, but that brings up its own question... Is there even a syscall to be informed of something like that? A way of saying to the kernel "hey, wake me up when we've only got 128 MB of memory left"? I searched around the web and on here but I didn't find anything fitting that description. It seems like most people use polling on a time delay, but the obvious problem with that is that it makes it far less likely you'll be able to tell which process(es) caused the problem. | What you are asking for is, basically, a kernel-based callback on a low-memory condition, right? If so, I strongly believe that the kernel does not provide such a mechanism, and for a good reason: when low on memory, it should immediately run the only thing that can free some memory, the OOM killer. Anything else could bring the machine to a halt. Anyway, you can run a simple monitoring solution in userspace. I had the same low-memory debug/action requirement in the past, and I wrote a simple bash script which did the following:
monitor a soft watermark: if memory usage is above this threshold, collect some statistics (processes, free/used memory, etc.) and send a warning email;
monitor a hard watermark: if memory usage is above this threshold, collect some statistics, kill the most memory-hungry (or least important) processes, then send an alert email.
Such a script would be very lightweight, and it can poll the machine at a small interval (i.e. 15 seconds).
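A minimal sketch of such a poller, assuming watermarks expressed as MiB of MemAvailable, a 15-second interval, and an illustrative log path (all of these are assumptions to tune):
#!/bin/bash
soft=512 hard=128    # MiB watermarks, adjust to taste
while sleep 15; do
    # MemAvailable is reported in kB in /proc/meminfo
    avail=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
    if [ "$avail" -lt "$hard" ]; then
        ps aux --sort=-rss | head -n 20 >> /var/log/lowmem.log   # top memory users
    elif [ "$avail" -lt "$soft" ]; then
        free -m >> /var/log/lowmem.log
    fi
done
| {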
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74972/"
]
} |
362,835 | I'm trying to install Mono package by package, because the CentOS server doesn't have internet access. However, when I try to install the mono core package using the command:
rpm -i mono-core-4.8.1.0-0.xamarin.1.x86_64.rpm
The system displays the following error message:
error: Failed dependencies:
mono(System.ComponentModel.Composition) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
mono(System.ComponentModel.DataAnnotations) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
mono(System.Data) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
mono(System.IdentityModel) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
mono(System.Runtime.Serialization) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
mono(System.ServiceModel) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
mono(System.ServiceProcess) = 4.0.0.0 is needed by mono-core-4.8.1.0-0.xamarin.1.x86_64
How do I solve these dependencies? UPDATE: I'm trying to use this command:
yum localinstall mono-core-4.8.1.0-0.xamarin.1.x86_64.rpm
Result: yum goes through dependency resolution and fails on the same seven unresolved mono(...) = 4.0.0.0 requirements, ending with:
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest |
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229692/"
]
} |
362,839 | I have a large .csv file where I need to split a specific column by string length. I'm trying to take the last 6 characters of column 2 and move them into a new column. Current:
3102017,90131112,0,74
03022017,8903944,90,
03092017,127037191,475,0
Desired:
3102017,90,131112,0,74
03022017,8,903944,90,
03092017,127,037191,475,0 | With a POSIX-compliant awk:
awk -F, -v OFS=, '{sub(/.{6}$/, OFS "&", $2); print}'
With a POSIX-compliant sed:
sed 's/^\([^,]*,[^,]*\)\([^,]\{6\}\)/\1,\2/'
Those modify the lines only if the second field is at least 6 characters long (note that it will happily change 111,123456,333 to 111,,123456,333, leaving the second field empty).
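A quick check of the awk variant on the first sample line from the question:
$ printf '3102017,90131112,0,74\n' | awk -F, -v OFS=, '{sub(/.{6}$/, OFS "&", $2); print}'
3102017,90,131112,0,74
| {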
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226450/"
]
} |
362,885 | There is an option --ignore which allows specifying files to ignore. At the moment I have only managed to ignore multiple files by doing --ignore file1 --ignore file2 ... Trying to use --ignore "*assets*|*scripts*" does nothing. So is there a catch I'm not aware of? | You could use brace expansion, e.g.
ag pattern --ignore={'*assets*','*scripts*'} path_to_search
or, as Glenn suggests here, process substitution:
ag pattern -p <(printf "*%s*\n" assets scripts) path_to_search | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161233/"
]
} |
362,888 | How can gedit be forced to open a new window, independently of existing gedit windows, whenever a text file (.txt) is double-clicked on a GNOME desktop of Debian 8, Jessie? Suppose that a.txt is already open in a gedit window, and that b.txt is double-clicked on the desktop. Then, unfortunately, by the factory default, b.txt will be opened in a tab in the same window as a.txt. However, I want b.txt to be opened in a new window of gedit, so that there will be two windows: the existing window for a.txt and a new window for b.txt. If GNOME invoked gedit with the "-s" option, as in
gedit -s b.txt
then b.txt would be opened in a new window, while a.txt stays in its existing window. However, by default, GNOME seems to invoke gedit without the "-s" option. The configuration file /usr/share/applications/org.gnome.gedit.desktop contains the execution directive
Exec=gedit %U
So, I changed it to
Exec=gedit -s %U
by the following commands, and restarted the computer:
cd /usr/share/applications
su   # similar to sudo
mv org.gnome.gedit.desktop org.gnome.gedit.desktop.bak
perl -pe 's/Exec=gedit %U/Exec=gedit -s %U/' org.gnome.gedit.desktop.bak > org.gnome.gedit.desktop
diff org.gnome.gedit.desktop org.gnome.gedit.desktop.bak
However, this method has failed: b.txt still opens in a tab in the same window as a.txt. I am stuck and need your help. The default mode of gedit is "single window, multiple tabs"; I want the "multiple windows" mode. By the way, the following useless method turns gedit into the "single window, no tab" mode, which is not what I want:
gsettings set org.gnome.gedit.preferences.ui show-tabs-mode 'never'
With this gsettings method, gedit automatically closes a.txt and reuses the existing window of a.txt to open b.txt in it whenever b.txt is double-clicked on a desktop. Thus, it is the "single window, no tab" mode (as opposed to "multiple windows"). (By the way, the default value for show-tabs-mode is 'auto'.) | The reason why your modification of the Exec key in the .desktop file did not work is that gedit is DBus activated. This means that it is launched via your session's DBus daemon, which provides a common DBus interface for such activatable programs to specify the files to open. You can prevent this by changing the DBusActivatable key to false. Also, it is much better to create a copy of the .desktop file you want to modify in your home directory and use that to override the system-wide one than to modify the system-wide one directly. That way the system one will not be overwritten on distro package updates. To do that, just copy /usr/share/applications/org.gnome.gedit.desktop to ~/.local/share/applications/org.gnome.gedit.desktop. Files in this path override files with the same name from the system-wide directory. Then there is also an important difference between the two possible flags used to open a new window: --new-window or -s. Both will result in the files being opened in a new window, but with -s each window will also belong to its own process. When using --new-window, all windows share the same gedit process. And finally, to make sure that this also works if you select multiple files in your file manager and open them, you need another modification of the Exec key. The %U means that multiple URLs are allowed as arguments for this command, meaning that the file manager would start it like this: gedit --new-window file1.txt file2.txt. This results in a single new window with two tabs.
If you change this to %u, that tells the file manager that the application only accepts a single URL as an argument, and therefore causes it to run the command multiple times, each time with a different file as its argument. For more details on this see the freedesktop desktop entry specification.
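Putting the pieces together, the override copy would end up with these two keys changed (a sketch; every other line of the copied .desktop file stays exactly as copied):
Exec=gedit --new-window %u
DBusActivatable=false
| {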
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/362888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229725/"
]
} |
362,894 | I learned a new command, or at least I thought I had, because this command, chsh, does not behave as described. It was described to work like this:
1. cat /etc/shells to see which shells are installed, so you can choose among them.
2. echo $SHELL to see which shell you are using.
3. Choose one of the shells and type chsh -s /path/to/shell
4. Enter your password, and verify with echo $SHELL that you are in a new shell.
I have done this, and I got no error message when entering the password, but I was still in the same shell:
% echo $SHELL
/bin/bash
% cat /etc/shells
# /etc/shells: valid login shells
/bin/sh
/bin/dash
/bin/bash
/bin/rbash
% chsh -s /bin/sh
Password:
% echo $SHELL
/bin/bash | Log out and log in again. The chsh command will update the /etc/passwd file, but it does not change the current shell nor the value of the $SHELL variable in the current shell (it has no way of doing that). This is the reason you need to log in again; you have to start a new login session for the change to take effect. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
362,898 | I have a set of storage directories on Linux machines, all 770/root:root (perms/owner:group), for which I use ACLs to manage user access. (I am unable to use Unix groups, as the directories are shared across a network where groups are managed via LDAP, for which I'm not an admin.) For each directory, one user has full rwx access via ACLs, and all others have rx access via ACLs. Currently, I have to manually respond to requests to add/remove users, and I'd like this ability to be passed on to the 'rwx' users for the directories they own (because I'm a lazy sysadmin, naturally). The best solution I can think of is to create a script/program with root setuid that checks the 'rwx' ACL status of the calling user on the given directory and allows them to add/remove 'rx' ACL users, as in:
$ modify_acls.sh [--remove] [--add] <my_directory> <other_user>
Is there an easier way of doing it, or will the solution above not work for any reason? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123570/"
]
} |
362,977 | I have a bash function (or alias), for instance function install() {sudo apt-get install $@}. When running the command install dicelab, what I expect to actually be run is sudo apt-get install dicelab. Where can I see what was actually run by the shell? I would like to make sure that my more complicated aliases are working as expected. | Use set -x in the shell.
$ alias hello='echo hello world!'
$ hello
hello world!
$ set -x
$ hello
+ echo hello world!
hello world!
Using set -x turns on the xtrace shell option (set +x turns it off) and should work in all Bourne-like shells, like bash, dash, ksh93, pdksh and zsh. This prompts the shell to display the command that gets executed after alias expansions, variable expansions etc. have been performed. The output will be on the standard error stream of the shell (just like the ordinary prompt), so it will not interfere with redirections of standard output, and it will be preceded by a prompt as defined by the PS4 shell variable (often "+ " by default). Example with a few functions:
$ world () { echo "world"; }
$ hello () { echo "hello"; }
$ helloworld () { printf '%s %s!\n' "$(hello)" "$(world)"; }
$ helloworld
hello world!
$ set -x
$ helloworld
+ helloworld
++ hello
++ echo hello
++ world
++ echo world
+ printf '%s %s!\n' hello world
hello world!
With your specific example (with syntax corrected and added quotes):
$ install () { sudo apt-get install "$@"; }
$ set -x
$ install dicelab
+ install dicelab
+ sudo apt-get install dicelab
bash: sudo: command not found
(I don't use or have sudo on my system, so that error is expected.) Note that there is a common utility already called install, so naming your function something else (aptin?) may be needed if you at some point want to use that utility. Note that the trace output is debugging output. It is a representation of what the shell is doing while executing your command. The output that you see on the screen may not be suitable for shell input. Also note that I was using bash above. Other shells may have another default trace prompt (zsh incorporates the string zsh and the current command's history number, for example) or may not "stack" the prompts up like bash does for nested calls. I used to run with set -x in all my interactive shells by default. It's nice to see what actually got executed... but I've noticed that programmable tab completion etc. may cause unwanted trace output in some shells, and some shells are a bit verbose in their default trace output (e.g. zsh). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/362977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229798/"
]
} |
363,004 | I want to grep the motherboard serial number and the product model of a computer. I used sudo lshw | grep -m1 serial: to grep the serial number (since there are multiple occurrences of "serial:" and the one I want is the first one). How can I do this AND simultaneously grep for "product:" as well? There are also multiple occurrences of product, and the first one is again the one I want. lshw returns this:
user@ubuntu:~$ sudo lshw
ubuntu-pc
    description: Notebook
    product: 23252DG (LENOVO_MT_2325)
    vendor: LENOVO
    version: ThinkPad X230
    serial: R9TWZVR
    width: 64 bits
    capabilities: smbios-2.7 dmi-2.7 vsyscall32
    configuration: administrator_password=disabled chassis=notebook family=ThinkPad X230 power-on_password=disabled sku=LENOVO_MT_2325 uuid=01ECC0B1-8251-CB11-8538-B7D9EC435D9B
  *-core
       description: Motherboard
       product: 23252DG
       vendor: LENOVO
       physical id: 0
       version: Not Defined
       serial: 1ZPAB2AC2C1
       slot: Not Available
     *-cpu
          description: CPU
          product: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
          vendor: Intel Corp.
          physical id: 1
          bus info: cpu@0
          version: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
          serial: None | You want the first two lines that match either product: or serial:. If so, you can try:
$ sudo lshw | grep -Em2 'serial:|product:'
    product: 20FWCTO1WW (LENOVO_MT_20FW_BU_Think_FM_ThinkPad T460p)
    serial: PF0P1EUH
Alternatively, grep all lines that match either of the target strings and then use head to only print the first two:
$ sudo lshw | grep -E 'serial:|product:' | head -n2
    product: 20FWCTO1WW (LENOVO_MT_20FW_BU_Think_FM_ThinkPad T460p)
    serial: PF0P1EUH
Of course, both of these approaches assume that you will never have a second product: before the first serial:, and vice versa. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229821/"
]
} |
363,012 | Running the sort command on the first field of a file, sort -k1,1 file, with input like this:
1,2,3
2,1,1
10,2,1
gives me:
1,2,3
10,2,1
2,1,1
instead of:
1,2,3
2,1,1
10,2,1
I don't want 10 before 2. Is there any way to get sort to do that? | As explained in man sort:
-n, --numeric-sort    compare according to string numerical value
So you want:
$ sort -nk1,1 file
1,2,3
2,1,1
10,2,1
Also note that by default, fields are blank-delimited, so those lines in that file have only one field. For instance, the first field of the first line is 1,2,3, not 1. You'd need to add -t, for comma-separated fields:
sort -t, -nk1,1 file
With -n, sort only considers the sequence of characters that forms a valid number at the start of the sorting key (ignoring leading blanks). For that first line, without -t,, depending on the sort implementation and the locale, 1,2,3 will be considered either as 1, or as 1.2 (when the user's decimal separator is ,), or as 123 (when the user's thousand separator is , and sort ignores any occurrence of it). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
363,048 | I'm trying to install Docker on a 64-bit Ubuntu machine following the official installation guide. Sadly, Ubuntu seems unable to locate the docker-ce package. Any idea how to fix it, or at least how to track down what is happening? Here are some details:
$ uname --all; sudo grep docker /etc/apt/sources.list; sudo apt-get install docker-ce
Linux ubuntu 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable.
# deb-src [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable.
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package docker-ce | Ubuntu 22.10 (Kinetic)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu kinetic stable"
Ubuntu 22.04 (Jammy)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable"
Ubuntu 21.10 (Impish)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu impish stable"
Ubuntu 21.04 (hirsute)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu hirsute stable"
Ubuntu 20.10 (Groovy)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu groovy stable"
Ubuntu 20.04 (Focal)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
Ubuntu 19.10 (Eoan)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu eoan stable"
Ubuntu 19.04 (Disco)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu disco stable"
Ubuntu 18.10 (Cosmic)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu cosmic test"
Ubuntu 18.04 (bionic)
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
Ubuntu 17.10
The docker-ce package is available on the official Docker (Ubuntu Artful) repository; to install it, use the following commands:
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu artful stable"
Ubuntu 16.04
You can install docker-ce on Ubuntu as follows:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
Run the following:
sudo apt update
apt-cache search docker-ce
sample output:
docker-ce - Docker: the open-source application container engine
Install docker-ce. For Ubuntu 16.04 you need to run sudo apt update first. For Ubuntu 18.04 and higher, add-apt-repository will execute apt update automatically:
sudo apt install docker-ce
To check the available and permitted Ubuntu codenames:
curl -sSL https://download.docker.com/linux/ubuntu/dists/ | awk -F'"' 'FNR >7 {print $2}'
sample output (results may differ after directory updates):
../
artful/
bionic/
cosmic/
disco/
eoan/
focal/
groovy/
hirsute/
trusty/
xenial/
yakkety/
zesty/
Docker, OS requirements | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/363048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85598/"
]
} |
363,098 | I created the following service, amos.service, and it needs to run as amos (a member of the amos group):
[Unit]
Description=AMOS Service
After=network.target

[Service]
User=amos
Group=amos
Type=simple
WorkingDirectory=/usr/share/amos
ExecStart=/usr/share/amos/amos_service.sh start
ExecStop=/usr/share/amos/amos_service.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
All the permissions on /usr/share/amos have been set to amos:amos. The amos_service.sh is as follows:
#!/bin/bash
CUDIR=$(dirname "$0")
cd /usr/share/amos
start() { exec /usr/share/amos/run_amos.sh >> /var/log/amos.log 2>&1 }
stop() { exec pkill java }
case $1 in
start|stop) "$1" ;;
esac
cd "$CURDIR"
When I run the service initially without any modifications to the directories, meaning they belong to root and amos.service has no User or Group parameter, everything runs great! Once I change the directory permissions to amos:amos and add User & Group to amos.service, the service won't start and I get an error (screenshot omitted). | Use systemd: to see the problem, run journalctl -xe after you start the service. You don't need a bash script; put this in your service file:
ExecStart=/usr/share/amos/run_amos.sh
There is no need for ExecStop, systemd will stop all child processes. You can view the output with journalctl -u amos.service.
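For reference, a sketch of the unit after that simplification, assuming run_amos.sh stays in the foreground (all keys are taken from the unit above):
[Unit]
Description=AMOS Service
After=network.target

[Service]
User=amos
Group=amos
Type=simple
WorkingDirectory=/usr/share/amos
ExecStart=/usr/share/amos/run_amos.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
| {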
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229825/"
]
} |
363,101 | Is it possible to change the contrast of the reverse mode of the Linux console? I would like something high-contrast, like pure black and pure white. The current reverse mode uses a dark grey as foreground and a light grey as background; it's difficult to read what is in reverse mode, for example the text "Digite caracteres alfanumericos" in the attached screenshot (omitted). | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114939/"
]
} |
363,115 | I am trying to understand what I did wrong with the following mount command. Take the file from http://elinux.org/CI20_Distros#Debian_8_2016-02-02_Beta and simply download the img file. I verified that the md5sum is correct per the upstream page:
$ md5sum nand_2016_06_02.img
3ad5e53c7ee89322ff8132f800dc5ad3  nand_2016_06_02.img
Here is what file has to say:
$ file nand_2016_06_02.img
nand_2016_06_02.img: x86 boot sector; partition 1: ID=0x83, starthead 68, startsector 4096, 3321856 sectors, extended partition table (last)\011, code offset 0x0
So let's check the start of the first partition of this image:
$ /sbin/fdisk -l nand_2016_06_02.img
Disk nand_2016_06_02.img: 1.6 GiB, 1702887424 bytes, 3325952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0212268d
Device               Boot Start     End Sectors  Size Id Type
nand_2016_06_02.img1       4096 3325951 3321856  1.6G 83 Linux
In my case the unit size is 512 and Start is 4096, which means the offset is at byte 2097152. In which case, the following should just work, but doesn't:
$ mkdir /tmp/img
$ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/
mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try dmesg | tail or so.
And dmesg reveals:
$ dmesg | tail
[ 1632.732163] loop: module loaded
[ 1854.815436] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem
[ 1854.815452] EXT4-fs (loop0): bad geometry: block count 967424 exceeds size of device (415232 blocks)
None of the solutions listed here worked for me: resize2fs or sfdisk. What did I miss? Some other experiments that I tried:
$ dd bs=2097152 skip=1 if=nand_2016_06_02.img of=trunc.img
which leads to:
$ file trunc.img
trunc.img: Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=960b67cf-ee8f-4f0d-b6b0-2ffac7b91c1a (large files)
and it's the same story:
$ sudo mount -o loop trunc.img /tmp/img/
mount: wrong fs type, bad option, bad superblock on /dev/loop2, missing codepage or helper program, or other error
I cannot use resize2fs since I am required to run e2fsck first:
$ /sbin/e2fsck -f trunc.img
e2fsck 1.42.9 (28-Dec-2013)
The filesystem size (according to the superblock) is 967424 blocks
The physical size of the device is 415232 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32896/"
]
} |
363,126 | I'm learning about the relationship between processes, process groups (and sessions) in Linux. I compiled the following program...
#include <iostream>
#include <ctime>
#include <unistd.h>

int main( int argc, char* argv[] )
{
    char buf[128];
    time_t now;
    struct tm* tm_now;
    while ( true )
    {
        time( &now );
        tm_now = localtime( &now );
        strftime( buf, sizeof(buf), "%a, %d %b %Y %T %z", tm_now );
        std::cout << buf << std::endl;
        sleep(5);
    }
    return 0;
}
... to a.out and ran it as a background process like so:
a.out &
This website says the following:
Every process is member of a unique process group, identified by its process group ID. (When the process is created, it becomes a member of the process group of its parent.) By convention, the process group ID of a process group equals the process ID of the first member of the process group, called the process group leader.
Per my reading, the first sentence conflicts with the in-parentheses content: is a process a member of a unique process group, or is it a member of the process group of its parent? I tried to investigate with ps...
ps xao pid,ppid,pgid,sid,command | grep "PGID\|a.out"
  PID  PPID  PGID   SID COMMAND
24714 23890 24714 23890 ./a.out
This tells me my a.out process is pid 24714, spawned from parent pid 23890, and part of process group 24714. To begin with, I don't understand why this pgid matches the pid. Next, I tried to investigate the parent process...
ps xao pid,ppid,pgid,sid,command | grep "PGID\|23890"
  PID  PPID  PGID   SID COMMAND
23890 11892 23890 23890 bash
24714 23890 24714 23890 ./a.out
It makes sense to me that the parent process of my a.out is bash. At first I thought "bash's pid matches its pgid - that must be because it's the process group leader. Maybe that makes sense because bash is kind of the "first thing" that got run, from which I ran my process." But that reasoning doesn't make sense, because a.out's pgid also matches its own pid. Why doesn't a.out's pgid equal bash's pgid? That's what I would have expected, from my understanding of the quote. Can someone clarify the relationship between pids and pgids? | There is no conflict; a process will by default be in a unique process group which is the process group of its parent:
$ cat pg.c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork();
    printf("pid=%d pgid=%d\n", getpid(), getpgrp());
}
$ make pg
cc pg.c -o pg
$ ./pg
pid=12495 pgid=12495
pid=12496 pgid=12495
$
The fork splits our process into parent (12495) and child (12496), and the child belongs to the unique process group of the parent (12495). bash departs from this because it issues additional system calls:
$ echo $$
12366
$
And then in another terminal we run:
$ strace -f -o blah -p 12366
And then back in the first terminal:
$ ./pg
pid=12676 pgid=12676
pid=12677 pgid=12676
$
And then we control+c the strace, and inspect the system calls:
$ egrep 'exec|pgid' blah
12366 setpgid(12676, 12676) = 0
12676 setpgid(12676, 12676 <unfinished ...>
12676 <... setpgid resumed> ) = 0
12676 execve("./pg", ["./pg"], [/* 23 vars */]) = 0
12676 write(1, "pid=12676 pgid=12676\n", 21 <unfinished ...>
12677 write(1, "pid=12677 pgid=12676\n", 21 <unfinished ...>
bash has used the setpgid call to set the process group, thus placing our pg process into a process group unrelated to that of the shell. (setsid(2) would be another way to tweak the process group, if you're hunting for system calls.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363126",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214773/"
]
} |
363,164 | I have the following code:
find ./ -iname '*phpmyadmin' -exec rm -rf {} \;
It deletes a dir called phpmyadmin, but it does not delete a file called phpMyAdmin-Version-XYZ.zip. Even if I remove the -rf, it still won't delete it (probably because of a second problem, with the -iname pattern rather than case-insensitivity). Is there a way to delete any inode in a single rm (file, dir, softlink)? And why does adding -iname not have an effect? Note: I didn't find a "delete any inode" argument in man rm. | The problem is that you are matching a file that ends in phpmyadmin (case-insensitively) by using the pattern *phpmyadmin. To get any file that contains the string phpmyadmin (case-insensitively), use -iname '*phpmyadmin*':
find ./ -iname '*phpmyadmin*' -exec rm -rf {} \;
Perhaps getting the matched files before removal would be sane:
find ./ -iname '*phpmyadmin*'
To answer your first question, there is no option in rm in userspace to deal with inodes. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/363164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
363,169 | Is there a GNU make alternative if I don't want to use tab indents in my make program (or make-like program)? For example, when I use make, I need to indent everything after the make opener (% :). This is a recipe for problems in some circumstances (for example, I work cross-platform and use a Windows 10 AutoHotkey mechanism that strips tabs from code I paste into Linux terminals, and it doesn't cope with make, hence I need a solution that does not involve tabs). The necessity to tab-indent everything under % : makes my work with make non-fluent. This is the make I use to create new virtual host conf files; I execute it with make domain.tld.conf:
% :
	printf '%s\n' \
	'<VirtualHost *:80>' \
	'DocumentRoot "/var/www/html/$@"' \
	'ServerName $@' \
	'<Directory "/var/www/html/$@">' \
	'Options +SymLinksIfOwnerMatch' \
	'Require all granted' \
	'</Directory>' \
	'ServerAlias www.$@' \
	'</VirtualHost>' \
	> "$@"
	a2ensite "$@"
	systemctl restart apache2.service
Is there any alternative, maybe something that comes with Unix itself, that provides similar functionality but without tab indents in the pattern file itself? | If that's your whole Makefile, and you're not tracking any dependencies between files, just use a shell script:
#!/bin/sh
for domain; do
> "/etc/apache2/sites-available/${domain}.conf" cat <<EOF
<VirtualHost *:80>
DocumentRoot "/var/www/html/${domain}"
ServerName "${domain}"
<Directory "/var/www/html/${domain}">
Options +SymLinksIfOwnerMatch
Require all granted
</Directory>
ServerAlias www.${domain}
</VirtualHost>
EOF
a2ensite "${domain}"
done
systemctl restart apache2.service
Copy the above into a file named for example create-vhost, make it executable:
chmod 755 create-vhost
then run it as
./create-vhost domain.tld
This even supports creating multiple virtual hosts' configuration files (with a single restart at the end):
./create-vhost domain1.tld domain2.tld | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
363,219 | Several questions are similar to this one, but I have not found a solution that works when I want to search for a pattern over several lines. The following
sed -n '/First string/,/Second string/ p' my.file
will print all occurrences of the matched pattern, but I would like only the first occurrence. I am using GNU sed. | Use q to explicitly quit when the end pattern is reached. In GNU sed:
$ cat foo
foo
START
bar
END
blah
START another
$ sed -n '/START/,/END/p; /END/q' foo
START
bar
END
awk would maybe make it easier to not repeat the end pattern:
$ awk '/START/{p=1} p; /END/{exit}' foo
START
bar
END | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168566/"
]
} |
363,230 | I have an existing script test.sh which does some operations and then finally opens a file in vi. I cannot make any changes to this existing script. When I run the first script, it opens a text file in vi. Now I have another script from which I run the existing test.sh, and it opens a file in vi. How do I :wq from inside the script? Is it even possible? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152598/"
]
} |
363,240 | I want to substitute the pattern "uid=" followed by any single character one or more times. So I use this command:
sed s/uid=.+/uid=something/g file
But this does not work. It seems that the "followed by any single character one or more times" part, that is to say .+, is not correct. Any idea why? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171538/"
]
} |
363,297 | I want to run some commands in parallel, and when all of them are finished, start the next one. I thought the following approach would work:
#!/bin/bash
command1 &
command2 &
command3 &&
command4
but it didn't. I need to run command4 only when the first three commands have completely finished. | #!/bin/bash
command1 &
command2 &
command3 &
wait
command4
wait (without any arguments) will wait until all the backgrounded processes have exited. The complete description of wait in the bash manual:
wait [-n] [n ...]
Wait for each specified child process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If the -n option is supplied, wait waits for any job to terminate and returns its exit status. If n specifies a non-existent process or job, the return status is 127. Otherwise, the return status is the exit status of the last process or job waited for.
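If command4 should only run when all three commands succeeded, a hedged variant is to record each PID and check the individual wait statuses:
command1 & pid1=$!
command2 & pid2=$!
command3 & pid3=$!
failed=0
for pid in "$pid1" "$pid2" "$pid3"; do
    wait "$pid" || failed=1   # wait returns each command's exit status
done
[ "$failed" -eq 0 ] && command4
| {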
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/363297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10780/"
]
} |
363,311 | Here's a use-case to clarify my question. Say I have a calendar program that is set to run in ~/.bashrc , and it ensures the streaming output overwrites the same block of lines. Is it possible to have the streaming output display in the terminal from the background process without clobbering new input? I already looked at Displaying stdout of a background process in specific location of the terminal , but the asker requires outputting new lines at termination, which I don't need to do. Here's a screenshot of the program output, which currently runs in the foreground and terminates after outputting the formatted text once. I just want that formatted text to continually replace itself while allowing foreground processes to function as normal. A solution in Bash, C, and/or C++ using something like zsh or ANSI escape sequences would be perfect for me. For reference, here's the current C code I'm using, but if it's easier for you, you could just formulate a solution that uses cal instead:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

const char months[12][10] = {"January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"};
const char weekDays[7][10] = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};

void printCalendar(void);
int getWeekDay(int, int, int, int, int);
int getMaxDay(int, int);
void getDate(int *, int *, int *, int *, int *, int *, int *);
void formatTime(char *, int, int, int);

int main(void) {
    printCalendar();
    return 0;
}

void printCalendar(void) {
    int second, minute, hour, day, month, year, weekDay, maxDay, col, x = 0, i;
    char str[12];
    getDate(&second, &minute, &hour, &day, &month, &year, &weekDay);
    formatTime(str, hour, minute, second);
    maxDay = getMaxDay(month, year);
    printf("\e[3J");
    printf("%s %s\n", weekDays[weekDay], str);
    printf("%s %d, %d\n\n ", months[month], day, year);
    printf("Sun Mon Tue Wed Thu Fri Sat\n ");
    for (i = 1; i <= maxDay; i++) {
        col = getWeekDay(i, month, year, day, weekDay);
        if (x > col) {
            x = 0;
            printf("\n ");
        }
        while (x < col) {
            x++;
            printf("    ");
        }
        x++;
        if (i == day) {
            if (i < 10) {
                printf(" ");
            }
            printf(" \e[7m%d\e[0m ", i);
        } else {
            printf("%3d ", i);
        }
    }
    printf("\n\n");
}

int getWeekDay(int day, int month, int year, int rmday, int rwday) {
    return (day - rmday + rwday + 35) % 7;
}

int getMaxDay(int month, int year) {
    switch (month) {
        case 3: // April
        case 5: // June
        case 8: // September
        case 10:// November
            return 30;
        case 1: // February
            if ((year % 100 == 0 && year % 400 != 0) || year % 4 != 0) {
                return 28; // Not leap year
            }
            return 29; // Leap year
        default:
            return 31; // Remaining months
    }
}

void getDate(int *second, int *minute, int *hour, int *day, int *month, int *year, int *weekDay) {
    time_t now;
    struct tm *date;
    time(&now);
    date = localtime(&now);
    *second = (date -> tm_sec);
    *minute = (date -> tm_min);
    *hour = (date -> tm_hour);
    *day = (date -> tm_mday);
    *month = (date -> tm_mon);
    *year = (date -> tm_year) + 1900;
    *weekDay = (date -> tm_wday);
}

void formatTime(char *str, int hour, int minute, int second) {
    sprintf(str, "%02d:%02d:%02d %s", (hour % 12) ? (hour % 12) : 12, minute, second, hour / 12 ? "PM" : "AM");
    str[11] = '\0';
}

And the code in ~/.bashrc is just:

clear && ~/Documents/C/Calendar/calendar

Thanks for any help | I recommend GNU screen for this. First, start up a new screen instance:

$ screen

Then make a split with Ctrl + A Shift + S . You can resize the top portion with the resize command. I found a height of 9 to be reasonable for cal : Ctrl + A :resize 9

Then use any command that constantly produces output. I don't use watch or even have it on many systems, but while true; do cal; sleep 3; done works just as well. Then Ctrl + A Tab moves you to the other (bottom) part of the split. Finally, Ctrl + A C opens a new shell in which you can run commands without interference from the other portion of the split. If you want this to occur automatically, you can use .screenrc :

screen /bin/sh -c 'while true; do cal; sleep 3; done'
split
resize 9
focus
screen

See screen(1) for a full description of commands, and possible inspiration for alternative configurations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152446/"
]
} |
363,332 | I am using a Raspberry Pi running Raspbian, which is just Debian. I would like to bridge from the primary WiFi network router that connects to Cox Cable to my cabled router here, so my subnet has reliable internet access. It needs to be a WiFi-to-Ethernet bridge. I have set /etc/networks for a static address for the USB wlan1 with the external adapter and hi-gain antenna. wpa_supplicant is configured to log in to the master router properly. So right now it is set up so I can log in to the proper network with the password, on external wlan1. The static address is set in /etc/networks. Gateway and nameserver are OK. I can browse web pages, etc. The missing link is to bridge this to the eth0 port so my router can connect also, to provide service to my subnet. No need for any extra network services like routing or NAT or DHCP, etc. Just a simple bridge. Can anyone please point me in the right direction to make this happen? | For configuring a bridge from ethernet to wifi, it is as simple as putting this in your /etc/network/interfaces :

auto eth0
allow-hotplug eth0
iface eth0 inet manual

auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual

auto br0
iface br0 inet static
bridge_ports eth0 wlan0
address 192.168.1.100
netmask 255.255.255.0

Replace the IP address with something more appropriate to your network. If you prefer the IP attribution done via DHCP, change it to:

auto br0
iface br0 inet dhcp
bridge_ports eth0 wlan0

After changing /etc/network/interfaces , either restart Debian or run

service networking restart

to activate this configuration. You will have to make sure bridge-utils is installed for this configuration to work. You can install it with:

sudo apt install bridge-utils

For more information, see: BRIDGE-UTILS-INTERFACES

The wlan0 interface also has to be configured to connect to your remote AP, so this configuration is not to be used verbatim. Additional note: bridging eth0 and wlan0 together means, in poor layman's terms, that br0 will present itself as a single logical interface englobing the interfaces that are part of the bridge. Usually such a configuration is made when both extend or belong to the same network.
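Once the bridge is up, a quick sanity check (interface names as in the example above; brctl ships with bridge-utils):

brctl show          # br0 should list eth0 and wlan0 as members
ip addr show br0    # br0 should carry the 192.168.1.100 address
| {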
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197427/"
]
} |
363,376 | I'm going to make a bash script that is executed at boot and runs periodically. I want it user-configurable, so that a user can add a cron job 0 * * * * my_script by running my_script add 0 * * * * , list jobs with my_script list , and remove with my_script remove job_number , where the job number is listed in the output of the my_script list command. If I could manage crontab files separately, this would be easily achieved. However, it seems there is only one crontab file per user (if not, please let me know). Directly dealing with that crontab file is a bad solution, of course. So what is the proper way to handle the cron jobs? Or is there a better way to handle periodically running scripts?

Conditions:

Any user should be able to run it, whether privileged or not.
No dependencies.

Additional question: Since I couldn't find any proper way to manage periodically running scripts, I wondered what I might be doing wrong. In terms of software design, is it not practical to implement an interface for managing the software's scheduled tasks? Should I leave all schedule management to users? | Using cron is the correct way to schedule periodic running of tasks on most Unix systems. Using a personal crontab is the most convenient way for a user to schedule their own tasks. System tasks may be scheduled by root ( not using the script below! ) in the system crontab, which usually has an ever so slightly different format (an extra field with a username). Here's a simple script for you. Any user may use this to manage their own personal crontab. It doesn't do any type of validation of its input except that it will complain if you give it too few arguments. It is therefore completely possible to add improperly formatted crontab entries. The remove sub-command takes a line number and will remove what's on that line in the crontab, regardless of what that is. The number is passed, unsanitized, directly to sed . The crontab entry, when you add one, has to be quoted. This affects how you must handle quotes inside the crontab entry itself. Most of those things should be relatively easy for you to fix.

#!/bin/sh

usage () {
    cat <<USAGE_END
Usage: $0 add "job-spec"
       $0 list
       $0 remove "job-spec-lineno"
USAGE_END
}

if [ -z "$1" ]; then
    usage >&2
    exit 1
fi

case "$1" in
    add)
        if [ -z "$2" ]; then
            usage >&2
            exit 1
        fi
        tmpfile=$(mktemp)
        crontab -l >"$tmpfile"
        printf '%s\n' "$2" >>"$tmpfile"
        crontab "$tmpfile" && rm -f "$tmpfile"
        ;;
    list)
        crontab -l | cat -n
        ;;
    remove)
        if [ -z "$2" ]; then
            usage >&2
            exit 1
        fi
        tmpfile=$(mktemp)
        crontab -l | sed -e "$2d" >"$tmpfile"
        crontab "$tmpfile" && rm -f "$tmpfile"
        ;;
    *)
        usage >&2
        exit 1
esac

Example of use:

$ ./script
Usage: ./script add "job-spec"
       ./script list
       ./script remove "job-spec-lineno"
$ ./script list
     1  */15 * * * * /bin/date >>"$HOME"/.fetchmail.log
     2  @hourly /usr/bin/newsyslog -r -f "$HOME/.newsyslog.conf"
     3  @reboot /usr/local/bin/fetchmail
$ ./script add "0 15 * * * echo 'hello world!'"
$ ./script list
     1  */15 * * * * /bin/date >>"$HOME"/.fetchmail.log
     2  @hourly /usr/bin/newsyslog -r -f "$HOME/.newsyslog.conf"
     3  @reboot /usr/local/bin/fetchmail
     4  0 15 * * * echo 'hello world!'
$ ./script remove 4
$ ./script list
     1  */15 * * * * /bin/date >>"$HOME"/.fetchmail.log
     2  @hourly /usr/bin/newsyslog -r -f "$HOME/.newsyslog.conf"
     3  @reboot /usr/local/bin/fetchmail | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363376",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230144/"
]
} |
363,451 | I was looking in syslog to look into an audio issue and I see a hell of a lot of the Soliciting pool server ntp daemon messages. I've run other Linux systems in the past and never recall seeing so many ntp log messages. Is this due to a new network issue perhaps, is it common for Mint, and is there a way to shush them if it is "common"? I've changed carriers and router hardware since those days, so I do not rule out something in my net. I have no problems accessing the internet or playing online games etc. | The messages mean your ntpd server is looking for more time sources to sync to. Seeing a couple of them is expected, especially after reconnecting to the network after an outage or a restart, but if your ntpd and your network connection are running smoothly, you shouldn't be seeing more than a few per day. If you have several every few minutes, it's likely a problem. Does your ntpd connect to peers and sync time successfully? You can check for that using ntpq . Look at the list of peers in ntpq -c pe and the reported stratum and reftime in ntpq -c rv . Stratum of 16 means "not synchronized". This:

user@localhost $ ntpq -c pe
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 de.pool.ntp.org .POOL.          16 p    -   64    0    0.000    0.000   0.002
user@localhost $ ntpq -c rv
associd=0 status=c016 leap_alarm, sync_unspec, 1 event, restart,
version="ntpd [email protected] Tue Jun 20 08:08:18 UTC 2017 (1)",
processor="x86_64", system="Linux", leap=11, stratum=16,
precision=-23, rootdelay=0.000, rootdisp=0.090, refid=INIT,
reftime=00000000.00000000  Thu, Feb  7 2036  7:28:16.000,
clock=dde6bdf4.dec8453b  Fri, Dec 22 2017  0:10:44.870, peer=0, tc=3,
mintc=3, offset=0.000000, frequency=4.981, sys_jitter=0.000000,
clk_jitter=0.000, clk_wander=0.000

means your NTP doesn't actually work (in this case because I've just started it up), while this:

user@localhost $ ntpq -c pe
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 cz.pool.ntp.org .POOL.          16 p    -   64    0    0.000    0.000   0.002
+mail.nettel.cz  195.113.144.201  3 u    4   64  377    5.215   -0.842   0.332
*fedecks.wuji.cz 195.113.144.238  2 u   61   64  377    2.121   -2.005   0.171
-lx.ujf.cas.cz   .GPS.            1 u   62   64  177    2.662   -0.714   0.215
-pyrrha.fi.muni. 195.113.144.238  2 u   63   64  177    7.445   -0.697   0.340
-host-81-200-57- 192.168.3.246    2 u   55   64  177   15.792    0.098   1.160
 cz.inthouse.clo 147.231.2.6      2 u   47   64   17    5.338   -0.266   0.461
user@localhost $ ntpq -c rv
associd=0 status=0615 leap_none, sync_ntp, 1 event, clock_sync,
version="ntpd [email protected] Sat Jul 29 07:38:14 UTC 2017 (1)",
processor="ppc", system="Linux", leap=00, stratum=2,
precision=-19, rootdelay=2.652, rootdisp=4.409, refid=147.231.100.5,
reftime=dde6be4a.f90912d6  Fri, Dec 22 2017  0:12:10.972,
clock=dde6be4d.12f27b56  Fri, Dec 22 2017  0:12:13.074, peer=10703, tc=6,
mintc=3, offset=-0.387828, frequency=-254.539, sys_jitter=1.572660,
clk_jitter=0.456, clk_wander=0.098

means your NTP works correctly. If it doesn't sync and stays that way for a long time, you likely have a network or configuration problem. Look at man 5 ntp.conf for help and at the NTP.org support page about configuration for examples. In my case, the reason for the unending "Soliciting pool server" spam was the nopeer directive, which has to be off for pool servers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230188/"
]
} |
363,462 | I have customised .bashrc with a number of aliases, specifically ll and export LS_OPTIONS='--color=auto' . Unfortunately this does not work when used with sudo , so I also modified /root/.bashrc , but this seems to have made no difference. sudo env shows HOME=/root and SHELL=/bin/bash . How can I get sudo commands to use the settings in /root/.bashrc ? I understand that this happens only when bash is executed interactively, so I am open to any other suggestions as to how to customise. | Thanks to those who answered, who prompted me to read man sudo more carefully.

sudo -s

    If no command is specified, an interactive shell is executed.

This interactive shell uses /root/.bashrc and thus includes my customisations. It does require the command to be entered separately, but this is OK.
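A related bash trick worth knowing (an addition, not part of the answer above): if an alias value ends in a space, bash also alias-expands the word that follows it, so your own aliases keep working through sudo:

alias sudo='sudo '
sudo ll    # ll is now expanded using your user's alias before sudo runs it
| {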
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47111/"
]
} |
363,507 | There are strange things happening on my VirtualBox CentOS 7 guest.

Interfaces:

enp0s3: 192.168.10.110/24
lo:0    10.0.3.110/24 (IP alias)

Routes:

default via 10.0.3.2 dev lo
192.168.10.0/24 dev enp0s3

enp0s3 is plugged into 10.0.3.0/24, and I have enabled IP forwarding (net.ipv4.ip_forward = 1).

My questions: ping 10.0.3.2 works, but why? tcpdump can't capture the packets on enp0s3 , but does capture them on lo . The default route is via lo ; why does ping 10.0.3.2 work, and why can't I see the packets on enp0s3 ? | The loopback interface is a virtual interface. The only purpose of the loopback interface is to return the packets sent to it, i.e. whatever you send to it is received on the interface. It makes little sense to put a default route on the loopback interface, because the only place it can send packets to is the imaginary piece of wire that is looped from the output of the interface to the input. There is nothing that can change this behaviour of the loopback interface; that's what it is coded to do. When you ping 10.0.3.2, the reply does not come from some external device, but from the loopback interface itself.

When you add an address on the loopback interface with e.g.

sudo ip addr add 10.0.3.1/24 dev lo

a route to 10.0.3.0/24 is added. You can see this with

ip route show table local

Something like

local 10.0.3.0/24 dev lo proto kernel scope host src 10.0.3.1

should show up. This routing table entry tells that a packet sent to any address between 10.0.3.1 and 10.0.3.254 is sent via the lo interface, from which it is immediately returned.

EDIT: clarification as a response to the comment below. Here is what happens when you ping 10.0.3.2: the kernel gets an IP packet for delivery with a destination address 10.0.3.2. Just like with any packet to be delivered, the kernel consults the routing table. In this case the matching entry is this: local 10.0.3.0/24 dev lo proto kernel scope host src 10.0.3.1 , which says the packet should be delivered via the lo interface with the source address 10.0.3.1. Now, because the packet was given to the lo interface, the loopback interface does what it normally does: it takes the packet off the send queue and puts it on the receive queue. From the kernel's point of view, we have now received an incoming packet ready for consumption by a server process listening on a socket. (In the case of ping, the kernel processes it internally.) We have now received a "remote" ICMP packet with a destination address of 10.0.3.2, which is arguably not one of our local addresses, but it was delivered to the loopback interface nonetheless. Next, the kernel sends a response to the ping: an ICMP response packet with the addresses reversed: 10.0.3.2 as source address and 10.0.3.1 as destination. This is delivered via the loopback interface back to the ping program, which shows that we got a reply from 10.0.3.2. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220197/"
]
} |
363,525 | I actually did not know there are two different types of variables I can access from the command line. All I knew is that I can declare variables like:

foo="my dear friends"
bar[0]="one"
bar[1]="two"
bar[2]="three"

or access them with a $ sign, like:

echo $foo
echo ${bar[1]}

or use built-in variables, like:

echo $PWD
PATH=$PATH:"/usr/bin/myProg"

Now, I hear there are two (at least?) types of variables: shell variables and environment variables. What is the purpose of having two different types? How do I know which type a variable is? What are the typical usages for each one? | Environment variables are a list of name=value pairs that exist whatever the program is (shell, application, daemon…). They are typically inherited by children processes (created by a fork / exec sequence): children processes get their own copy of the parent variables.

Shell variables do exist only in the context of a shell. They are only inherited in subshells (i.e. when the shell is forked without an exec operation). Depending on the shell features, variables might not only be simple strings like environment ones but also arrays, compound, typed variables like integer or floating point, etc.

When a shell starts, all the environment variables it inherits from its parent become also shell variables (unless they are invalid as shell variables and other corner cases like IFS which is reset by some shells) but these inherited variables are tagged as exported 1 . That means they will stay available for children processes with the potentially updated value set by the shell. That is also the case with variables created under the shell and tagged as exported with the export keyword. Array and other complex type variables cannot be exported unless their name and value can be converted to the name=value pattern, or when a shell specific mechanism is in place (e.g.: bash exports functions in the environment and some exotic, non POSIX shells like rc and es can export arrays).

So the main difference between environment variables and shell variables is their scope: environment variables are global while non exported shell variables are local to the script.

Note also that modern shells (at least ksh and bash ) support a third shell variable scope. Variables created in functions with the typeset keyword are local to that function (the way the function is declared enables/disables this feature under ksh , and persistence behavior is different between bash and ksh ). See https://unix.stackexchange.com/a/28349/2594

1 This applies to modern shells like ksh , dash , bash and similar. The legacy Bourne shell and non Bourne syntax shells like csh have different behaviors.
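A quick way to observe the scope difference described above (bash; a sketch):

$ FOO=hello                 # shell variable only
$ bash -c 'echo "[$FOO]"'   # child process does not see it
[]
$ export FOO                # tag it as exported: now an environment variable
$ bash -c 'echo "[$FOO]"'
[hello]
| {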
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/363525",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
363,534 | I have a file named file.txt which has some content, say abcdef. When I do cat < file.txt I get the output abcdef, but when I do echo < file.txt , no output is returned. Why doesn't the input redirection work with echo when it works with cat? | The redirection works in both cases; the difference is that echo never reads its standard input. echo only prints its command-line arguments, and given none it just prints an empty line, regardless of what stdin is connected to. cat, on the other hand, reads from stdin when it is given no file operands. If you want echo to print the file's contents, you can have the shell read the file for you (bash/ksh/zsh):

echo "$(<file.txt)"

Sample output:

abcdef
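Seen side by side (a quick sketch):

$ printf 'abcdef\n' > file.txt
$ cat < file.txt       # cat reads its stdin
abcdef
$ echo < file.txt      # echo ignores stdin entirely: empty line

$ echo "$(<file.txt)"  # the shell substitutes the file's contents
abcdef
| {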
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224025/"
]
} |
363,539 | First off, to anticipate confusion: this is indeed asking the opposite of your run-of-the-mill tmux question. How can one ensure that if the SSH tunnel dies, no tmux session is leaked at all? When running multiple shells over the same SSH session, terminating the session and the tmux instance cleanly in one terminal will still cause tmux instances in other terminals to leak. This can cause all kinds of trouble if e.g. one forgot to terminate a resource-consuming process that was lingering hidden in some backgrounded pane. Not cool. I'm using tmux almost exclusively as a terminal emulator and have little use, if any, for its "detach" feature. It's not uncommon that I have multiple tmux instances in terminals accessing the same machine over a single SSH session. If I want to run a background process, I ask the shell to disown it or run it in systemd. Unfortunately, many systems I'm working with day to day haven't upgraded to systemd yet and are unlikely to do that in the near future, so KillUserProcesses is not an option. Ideally I'd just invoke tmux with some command line switch that prevents it from running in the background, so I can alias that to tmux. What SSH tunnels? One OpenSSH connection serving multiple sessions. What tmux session? Separate tmux instances, one per SSH session. What leaks? Leaks of tmux instances. As in: SSH into a machine. Start another shell session over the same connection. Start a tmux instance in each SSH session. You now have two tmux instances. Stop tmux in one session ( <C-d> ); then stop the SSH connection (again <C-d> ): the other SSH session is closed but the tmux instance running inside it is leaked with all its child processes. Please describe: (1) what setup you have now: stock OpenSSH, stock tmux, some shell; mostly reverse SSH connections that I cannot reestablish at will, but that shouldn't matter. And (2) what scenario you're trying to prevent: processes staying alive in the other tmux session. Note that those are different boxes being SSH'd into, so the goal is to prevent even a single tmux instance from remaining alive after the connection dies. Also, since I use split panes and backgrounded windows heavily, I really wish for all the child processes to be collected. | tmux has a session option aimed at exactly this: destroy-unattached . With it switched on, the server destroys a session as soon as no client is attached to it any more, so when the SSH connection drops and the tmux client dies with it, the session and every process in its panes are killed rather than left running. Since you never use the detach feature anyway, enabling it globally costs you nothing. There is also the server option exit-unattached , which makes the whole tmux server exit when its last client goes away. A configuration sketch follows (option names as documented in the tmux manual; double-check them against the tmux version on your boxes).
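# ~/.tmux.conf -- kill any session the moment its client goes away
set-option -g destroy-unattached on

# or per invocation, leaving the config file untouched:
tmux new-session \; set-option destroy-unattached on

Note that this also destroys sessions on a deliberate detach, which matches the stated workflow of never detaching.
| {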
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230251/"
]
} |
363,540 | I built a firmware for an embedded board using Yocto. The ssh server Dropbear seems to be up, running and working properly: I can log in as root without a password. Nevertheless, I cannot mount the filesystem from an Ubuntu desktop using sshfs. On the desktop I'm getting:

sudo sshfs -o allow_other [email protected]:/ /mountpoint
remote host has disconnected

In Poky I can instead see in /var/log/messages :

May 7 00:25:37 raspberrypi3 authpriv.info dropbear[537]: Child connection from 10.42.0.1:48010
May 7 00:25:38 raspberrypi3 authpriv.notice dropbear[537]: Auth succeeded with blank password for 'root' from 10.42.0.1:48010
May 7 00:25:38 raspberrypi3 authpriv.info dropbear[537]: Exit (root): Disconnect received

Is it possible to increase the verbosity somehow? I tried to add "verbose = 1" in /etc/default/dropbear but this is probably wrong as the server does not even start anymore. Maybe sshfs is not supported at all by dropbear? | As for trying to do SSHFS with Dropbear: the issue is that SSHFS needs SFTP, while Dropbear only supports SCP. So there is not much point in debugging why it is happening. From the dd-wrt wiki: https://www.dd-wrt.com/wiki/index.php/Sshfs

Since Dropbear (the default ssh server) apparently does not support sshfs, you will need to install and run Openssh instead.

So indeed, SSHFS is not supported by Dropbear, as you suspected. P.S. For the benefit of other readers, Dropbear is a lightweight replacement for OpenSSH widely used in embedded systems, routers and IoT devices. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12635/"
]
} |
363,575 | Let’s say I install a package using dpkg:

sudo dpkg -i package-name.deb

then without running the package binaries I just remove it:

sudo dpkg -r package-name

Is there anything harmful that can happen in this process? For example, any malicious configuration script in the .deb file? What are other possible threats that might happen? | Yes, packages can contain “maintainer scripts” which are run before and/or after installation. You can see the scripts, if any, by extracting the control archive from the package:

dpkg-deb --ctrl-tarfile package-name.deb > control.tar
tar tf control.tar

or, if you know you want to extract the control archive’s contents:

dpkg-deb -e package-name.deb package-control

(which places the extracted files in a directory named package-control ). They run as root and can do whatever the package author wants on your system. You should really consider that installing a package is equivalent to granting the maintainer (and anyone else involved in the package’s maintenance and build) root access to your system. Who do you trust?
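A typical inspection session might then look like this (the exact files vary per package; preinst, postinst, prerm and postrm are the possible maintainer script names):

$ dpkg-deb -e package-name.deb package-control
$ ls package-control
control  md5sums  postinst  prerm
$ less package-control/postinst
| {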
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/363575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64321/"
]
} |
363,583 | I'm trying to mount my NTFS partition. When I try

$ sudo mount /dev/sda8 /media/FILES

I get something like fuse: device not found, try 'modprobe fuse' first . Then I of course tried

$ modprobe fuse

and I got modprobe: FATAL: Module fuse not found in directory /lib/modules/4.9.25 . I also tried the ntfsfix and ntfs-3g commands. Earlier I didn't have this fuse problem and mounting worked. Could you help me with this issue?

UPD: linux 4.10.13-1 and kernel 4.9.25

UPDATE 12.05.17: All in all, I tried to find the fuse option in the kernel config and rebuilt the kernel. And yes, I had forgotten to enable fuse. After recompiling the kernel and rebooting, mount /dev/sda8 /media works. Thanks you all | Your issue is that you haven't rebooted since upgrading your kernel: you are still running 4.9.25 while the installed package is 4.10.13-1, so the module tree for the running kernel ( /lib/modules/4.9.25 ) no longer matches what is installed, and no module (fuse included) can be loaded until you boot into the new kernel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199124/"
]
} |
363,629 | What sed command do I need to use to turn /08/ into /8/? I am looking to get rid of all the excess 0's in my command output. I have got it down to one pesky extra 0.

sed -ie 's/\/0[1-9]\//\/[1-9]\//g' ~/tmp

Outputs: at 12:27 AM on 5/[1-9]

sed -ie 's/\/0?\//\/?\//g' ~/tmp

Outputs: at 12:27 AM on 5/08

Full script:

#!/bin/bash
echo $@ > ~/tmp
sed -ie 's/\/0[1-9]\//\/[1-9]\//g' ~/tmp
AA=`awk '{print $2}' ~/tmp | awk -F : '{print $1":"$2}' | sed 's/^0*//'`
BB=`awk '{print $3}' ~/tmp`
CC=`awk '{print $1}' ~/tmp | awk -F / '{print $1"/"$2}' | sed 's/^0*//'`
DD=`awk '{print $5}' ~/tmp | awk -F : '{print $1":"$2}' | sed 's/^0*//'`
EE=`awk '{print $6}' ~/tmp`
FF=`awk '{print $4}' ~/tmp | awk -F / '{print $1"/"$2}' | sed 's/^0*//'`
if [ $# = 3 ]; then
    echo "at $AA $BB on $CC"
elif [ $# = 6 ] && [ $CC = $FF ]; then
    echo "from $AA $BB to $DD $EE on $FF"
elif [ $# = 6 ]; then
    echo "from $AA $BB on $CC to $DD $EE on $FF"
fi
rm ~/tmp

Sample input/output (alias=dt):

With the current sed command:

dt 05/08/2017 02:27:25 AM
at 2:27 AM on 5/[1-9]

Without the first sed command:

dt 05/08/2017 02:27:25 AM
at 2:27 AM on 5/08

Solved: third line replaced with

sed -rie 's/\/0(.?)/\/\1/g' ~/tmp

dt 05/08/2017 01:03:56 AM
Outputs: at 1:03 AM on 5/8 | NOTE: this is an edited answer to make the solution as general as possible. See the edit history to see what was originally done and see the comments for issues with the previous answer. The key here is to use grouping via () and -r for extended regular expressions. Grouping patterns with () will allow you to refer to them based on their position via \NUMBER notation. In particular, here's what I came up with:

sed -r 's/0*([^0]+)\/0*([^0]+)/\1\/\2/g'

This reads as so:
- match zero or more characters that are zero
- group together one or more non-zero characters that follow
- then look for a slash
- followed by zero or more characters that are zero
- and group together one or more non-zero characters that follow

In practice this works as so with a variable number of zeroes:

$ echo "at 12:27 AM on 11/08/2017" | sed -r 's/0*([^0]+)\/0*([^0]+)/\1\/\2/g'
at 12:27 AM on 11/8/2017
$ echo "at 12:27 AM on 00000011/000008/00002017" | sed -r 's/0*([^0]+)\/0*([^0]+)/\1\/\2/g'
at 12:27 AM on 11/8/00002017
$ echo "at 12:27 AM on 011/08/00002017" | sed -r 's/0*([^0]+)\/0*([^0]+)/\1\/\2/g'
at 12:27 AM on 11/8/00002017
$ echo "at 12:27 AM on 000000011/0000008/00002017" | sed -r 's/0*([^0]+)\/0*([^0]+)/\1\/\2/g'
at 12:27 AM on 11/8/00002017

Notice that this does a good enough job of retaining whatever comes in the year part, if that was required. If we want to get rid of that as well, we can also add a 3rd grouping:

$ echo "at 12:27 AM on 005/0025/0002017" | sed -r 's/0*([^0]+)\/0*([^0]+)\/0*([^0]+)/\1\/\2\/\3/g'
at 12:27 AM on 5/25/2017

This also works fairly well with other chars (which wasn't required but is nice to have):

$ echo "at 12:27 AM on 0November/00Fifth/2017" | sed -r 's/0*([^0]+)\/0*([^0]+)/\1\/\2/g'
at 12:27 AM on November/Fifth/2017 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/363629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223685/"
]
} |