source_id | question | response | metadata
---|---|---|---
394,461 | I have a scenario where I have to switch to a different user and after that I need to execute some Linux commands. My command is something like this: ( echo myPassword | sudo -S su hduser ) && bash /usr/local/hadoop/sbin/start-dfs.sh but with this command I switch to the user and the next command gets triggered as the previous user. Is there any way I can accomplish this using a shell script? | Try: sudo -H -u TARGET_USER bash -c 'bash /usr/local/hadoop/sbin/start-dfs.sh' See man sudo : -H The -H (HOME) option requests that the security policy set the HOME environment variable to the home directory of the target user (root by default) as specified by the password database. Depending on the policy, this may be the default behavior. -u user The -u (user) option causes sudo to run the specified command as a user other than root. To specify a uid instead of a user name, use #uid. When running commands as a uid, many shells require that the '#' be escaped with a backslash ('\'). Security policies may restrict uids to those listed in the password database. The sudoers policy allows uids that are not in the password database as long as the targetpw option is not set. Other security policies may not support this. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/394461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207576/"
]
} |
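A minimal sketch of the approach from the answer above: run the Hadoop start script directly as the target user instead of switching shells first. The user name hduser and the script path come from the question; everything else is illustrative:

#!/usr/bin/env bash
# Run a command as another user without an interactive `su` shell.
# -H sets $HOME to the target user's home; -u selects the user.
target_user=hduser
sudo -H -u "$target_user" bash -c '/usr/local/hadoop/sbin/start-dfs.sh'

# Sanity check: this should print "hduser".
sudo -H -u "$target_user" whoami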
394,464 | I am trying to download a file through HTTP from a web site using wget . When I use: wget http://abc/geo/download/?acc=GSE48191&format=file I get only a file called index.html?acc=GSE48191 . When I use: wget http://abc/geo/download/?acc=GSE48191&format=file -o asd.rpm I get asd.rpm , but I want to download it with its actual name, and don't want to have to manually change the name of the downloaded file. | wget --content-disposition 'https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE48191&format=file' The file you are downloading is a tar archive (a binary file), provided by a dynamic link from a web server. wget would normally save the file using part of the URL that you're using, but in this case that's just a REST API endpoint (or something similar) so the name would be unfriendly to work with (it would still be a valid name and the file contents would be the same). However, in this case the server provides a "Content-Disposition" header containing the actual file name, which wget is able to use if you use the --content-disposition option. This option is marked "experimental" in my manual for wget . You also need to quote the URL so that the shell does not interpret the & and ? characters in it. The equivalent thing using curl : curl -J -O 'https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE48191&format=file' Or, using the equivalent long options: curl --remote-header-name --remote-name 'https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE48191&format=file' Once you have downloaded the file, you need to unpack it: tar -xvf GSE48191_RAW.tar Due to the way that this particular archive was created, this will unpack the archive's files into the current directory (so creating a new directory, moving the archive there and unpacking it there may be a good idea). The files in this archive are gzip-compressed CEL files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252750/"
]
} |
394,490 | How to cut till the first delimiter / and get the remaining part of the string? Ex: pandi/sha/Dev/bin/boot I want to cut pandi , so the output looks like sha/Dev/bin/boot | Simply with the cut command:
echo "pandi/sha/Dev/bin/boot" | cut -d'/' -f2-
sha/Dev/bin/boot
Here -d'/' sets the field delimiter and -f2- selects a range of fields to output ( -f<from>-<to> ; in our case: from 2 to the last). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/394490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248795/"
]
} |
394,501 | Unfortunately I'm being forced to work with a piece of software that handles automatically mounting and unmounting a network volume very poorly; it has an annoying tendency to leave a directory where the mount point was, but can't cope properly with that being the case when mounting the volume again later (it's supposed to tidy up but often doesn't). Naturally I'm on at the developers to get that fixed, but in the meantime I need to do something to tidy up the mount point(s) myself. So essentially what I need to do is remove the directory, but only if it isn't currently a mount point (as I don't want to delete the volume's contents by accident). Now, I can get the device ID of the directory and compare it to the device ID of root easily enough, but there's a possibility of a race-condition if I use such a comparison, i.e- if the volume is mounted between checking device IDs and calling rm -r /mnt/point . Are there any alternatives? I was intrigued by the possibility of using the find command's -xdev option, but I'm not sure how I would actually provide a point of comparison, as find /mnt/point -xdev won't work as the target and its contents are the same device. Also, using rmdir on the assumption that the leftover folder will always be empty seems unreliable, as on some systems a mount point may have a file inside; macOS for example leaves an .autodiskmounted file inside. While I could create a list of such cases and handle them, it'd be nice (and hopefully useful to others) to have a more general purpose solution for future reference, if such a thing is possible. | If the directory is a mount point, it will be busy and you shouldn't be able to rename it.
$ sudo mv /mnt /mnt.old
mv: cannot move '/mnt' to '/mnt.old': Device or resource busy
If it's just a regular directory, you should be able to rename it.
$ sudo mv /mnt /mnt.old
If the move succeeds, re-create the mount directory and delete the renamed directory. Optionally, you can validate the renamed directory is part of the filesystem you expect before removal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38840/"
]
} |
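A small sketch of the rename-then-remove procedure the answer describes. The path /mnt/point comes from the question; the temporary name and the fallback message are illustrative assumptions:

#!/usr/bin/env bash
# Remove a leftover mount-point directory only if nothing is mounted on it.
# Renaming a busy mount point fails, so a successful rename proves it was
# just a plain directory.
dir=/mnt/point
tmp=${dir}.old.$$

if mv "$dir" "$tmp" 2>/dev/null; then
    mkdir "$dir"    # re-create the mount point for future mounts
    rm -r "$tmp"    # delete the leftover directory and its droppings
else
    echo "$dir appears to be a live mount point; leaving it alone" >&2
fi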
394,539 | I have a file input.txt with the below data in TAB-delimited format:
23776112 Inactive Active
23415312 Inactive Active
As per the requirement, inside the while loop I want to cut the 1st column's data and print it. Below is the relevant part of the code:
............
while read line
do
SN=`echo ${line}|cut -d ' ' -f1`
echo $SN
done < input.txt
........
To use the tab as the delimiter above, I am using Ctrl V Tab . But the output is not as expected. I am getting this output:
23776112 Inactive Active
23415312 Inactive Active
Whereas I want output like:
23776112
23415312 | cut -f 1 input.txt This gives you the first column from the tab-delimited file input.txt . The default field delimiter for cut is the tab character, so there's no need to further specify this. If the delimiter is actually a space, use cut -d ' ' -f 1 input.txt If it turns out that there are multiple tabs and/or spaces, use awk : awk '{ print $1 }' input.txt The shell loop is not necessary for this operation, regardless of whether you use cut or awk . See also " Why is using a shell loop to process text considered bad practice? ". The reason your script does not work is because the tab disappears when you echo the unquoted variable. Related: Why is printf better than echo? Security implications of forgetting to quote a variable in bash/POSIX shells | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252948/"
]
} |
394,655 | I recently got a new laptop and installed Arch on it. I noticed that in a few applications, including chrome and gedit, pressing ctrl+shift+e will cause the next few keys pressed to be underlined, beep when pressed, and then deleted. I've looked around for a while, and the only way I can seem to "fix" it is to unload the pcspkr module. However, this still doesn't fix the issue, it only silences the beeping. It seems to happen under both gnome and i3, but not in a tty. Is there any way I can turn this off? Video of the behavior | See https://askubuntu.com/a/1039039 One needs to run ibus-setup and in the "Emoji" tab change the shortcut (click on the three dots that are focused in the screenshot) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109569/"
]
} |
394,695 | There has to be a simple solution for my problem, but I can't get it. I have multiple files in multiple folders, whose names have a pattern repeated multiple times in a row, like this: 20170223_LibError.log-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-XYZ12-SAE066.log_compressed_at_2017-09-27_03-32-55.gz I need to remove all but one XYZ12 of the patterns from the file names, to get the following result: 20170223_LibError.log-XYZ12-SAE066.log_compressed_at_2017-09-27_03-32-55.gz | a ) find + prename (Perl rename ) solution: find . -type f -name "*-XYZ12-XYZ12-*.gz" -exec prename 's/(-XYZ12)(\1)+/$1/g' {} \; b ) An additional bash + find + sed approach if prename is not supported:
for f in $(find . -type f -name "*-XYZ12-XYZ12-*.gz"); do
  p="${f%/*}"    # full path without basename (parent folders)
  fn="${f##*/}"  # current filename (basename)
  new_fn=$(sed 's/\(-XYZ12\)\+/-XYZ12/' <<<"$fn")  # new file name
  mv "$f" "$p/$new_fn"
done
c ) Also, you are able to avoid using sed in the above bash approach by using just bash variable substitution:
shopt -s extglob
for f in $(find . -type f -name "*-XYZ12-XYZ12-*.gz"); do
  p="${f%/*}"    # full path without basename (parent folders)
  fn="${f##*/}"  # current filename (basename)
  new_fn="${fn/+(-XYZ12)/-XYZ12}"  # new file name
  mv "$f" "$p/$new_fn"
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252383/"
]
} |
394,699 | Let's say we list /usr/bin with ls – this may look like:
CC file2c man sscop
Mail find mandoc ssh
addftinfo finger manpath ssh-add
addr2line flex merge ssh-agent
... but we could also use ls -1 , and we get:
CC
Mail
addftinfo
addr2line
afmtodit
alias
apply
apropos
... A list with all filenames, each of them in a single line. The structure of the output is: filename, newline ( \n ), … This we can pipe to less : ls -1 | less . Now, is it possible to easily apply a command to the string contained in the current line? How exactly this is done is irrelevant, just the number of steps should be small. It could be by using ! in less (this doesn't seem possible?) or by somehow getting the string contained in the current line into a shell variable etc. Under Xorg this is easy of course, by just using the middle mouse button paste. But in text mode, can one do it in a not too complicated way? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
394,709 | My question is how to get, on the shell, the user name of whoever is currently using the Linux desktop (on a "normal" desktop system, where you usually only have one active user, i.e. no server system here, but just your usual laptop etc.). If you really want to imagine a server system, I would be fine with listing all active users. So take e.g. the case that a script is running as root as a cron job (or similar) and I want to get the/all currently active users on the system. I know I could use w or who or users to get the currently logged in users. That's fine, but that users are logged in does not mean that they are actually currently using the desktop, because in all desktop environments I know, users can switch to another user after they have logged in. I could also use last to get the user who last logged in, but this is also no guarantee that this user is still the active one. So how can one do this? It is fine to provide specific solutions for different desktop environments (GNOME, KDE, …), but, of course, a cross-compatible solution is preferred. | On many current distributions, login sessions (graphical and non-graphical) are managed by logind . You can list sessions using loginctl list-sessions and then display each session's properties using loginctl show-session ${SESSIONID} or loginctl session-status ${SESSIONID} (replacing ${SESSIONID} as appropriate); the difference between the two variants is that show-session is designed to be easily parsed, session-status is designed for human consumption. Active sessions are identified by their state; you can query that directly using loginctl show-session -p State ${SESSIONID} which will output State=active for the active session(s). The full show-session output will tell you which user is connected, which TTY is being used, whether it's a remote session, whether it's a graphical session etc. Note that logind can have multiple active sessions, if the system is configured with multiple seats, or if there are remote sessions. Putting this all together,
for sessionid in $(loginctl list-sessions --no-legend | awk '{ print $1 }')
do
  loginctl show-session -p Id -p Name -p User -p State -p Type -p Remote $sessionid
done
will give all the information you need to determine which sessions are active and who is using them, and
for sessionid in $(loginctl list-sessions --no-legend | awk '{ print $1 }')
do
  loginctl show-session -p Id -p Name -p User -p State -p Type -p Remote $sessionid | sort
done |
awk -F= '/Name/ { name = $2 } /User/ { user = $2 } /State/ { state = $2 } /Type/ { type = $2 } /Remote/ { remote = $2 } /User/ && remote == "no" && state == "active" && (type == "x11" || type == "wayland") { print user, name }'
will print the identifiers and logins of all active users with graphical sessions. The LockedHint property now indicates whether a given session is locked, so
for sessionid in $(loginctl list-sessions --no-legend | awk '{ print $1 }')
do
  loginctl show-session -p Id -p Name -p User -p State -p Type -p Remote -p LockedHint $sessionid | sort
done |
awk -F= '/Name/ { name = $2 } /User/ { user = $2 } /State/ { state = $2 } /Type/ { type = $2 } /Remote/ { remote = $2 } /LockedHint/ { locked = $2 } /User/ && remote == "no" && state == "active" && (type == "x11" || type == "wayland") { print user, name, locked == "yes" ? "locked" : "unlocked" }'
will also indicate whether the active session is locked or not. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
394,770 | I tried to install R on Mint without success. Here are the instructions that I already tried to follow from YouTube:
sudo apt-get install r-base
[sudo] password for xwing:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package r-base
Digging more, I found something here about a PPA and a repository that have to be manually included, but I don't know how to exactly apply this on Mint. Note: I was an Ubuntu user that recently migrated to Linux Mint. Someone told me that Mint is very similar to Ubuntu. In fact, some commands for Ubuntu work fine on Mint, but in this case, I'm asking myself if any minor changes should be made to the Ubuntu lines to aid proper execution on Mint. | Try adding the cran.rstudio.com repository for Ubuntu:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
sudo add-apt-repository 'deb [arch=amd64,i386] https://cran.rstudio.com/bin/linux/ubuntu xenial/'
sudo apt-get update
sudo apt-get install r-base
Note: Linux Mint 18.2 uses the Ubuntu Xenial base. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250431/"
]
} |
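If you are unsure which Ubuntu release a given Mint install is based on (and therefore which codename to put in the repository line), a hedged sketch: the /etc/upstream-release/lsb-release path is an assumption about where Mint records its Ubuntu base, with /etc/os-release as a fallback:

# Print the upstream Ubuntu codename this Mint install is built on:
cat /etc/upstream-release/lsb-release 2>/dev/null || grep UBUNTU_CODENAME /etc/os-release
# Expect DISTRIB_CODENAME=xenial (or similar) on Mint 18.x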
394,785 | If I use pacman to install software, I occasionally run into errors of the form package-name: /some/package/file exists in filesystem This usually occurs if I've messed up an installation at some time in the past, unwisely tried to install something manually and so on. In order to deal with the problem, it's useful to have the names of all the conflicting packages together in one file. I can do this using # pacman -S package-name | grep '^package-name: [^ ]* exists in filesystem$' | sed 's/^package-name: \([^ ]*\) exists in filesystem$/\1/' > conflicting_files.txt However, this requires me to type the same thing twice. Is there a way to do the same thing without duplicating the regex? | Many ways. For instance, using sed alone (I am assuming GNU tools here since you're using pacman ): pacman -S package-name | sed -En 's/^package-name: ([^ ]*) exists in filesystem$/\1/p' > conflicting_files.txt Or grep : pacman -S package-name | grep -oP '^package-name: \K\S+' > conflicting_files.txt If you need to match the end of the line, use @RomanPerekhrest's suggestion . Or perl : pacman -S package-name | perl -ne 's/^package-name: (\S*) exists in filesystem$/$1/ && print' > conflicting_files.txt Or perl : pacman -S package-name | perl -lane 'print $F[1] if /^package-name:.*exists in filesystem$/' > conflicting_files.txt Or awk : pacman -S package-name | awk '/^package-name:.*exists in filesystem$/{print $2}' > conflicting_files.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45055/"
]
} |
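Yet another way to avoid typing the pattern twice, complementing the answer above: keep it in a shell variable and reuse it. A sketch; package-name stands for whatever package is being installed:

# Define the fixed pieces once, then build the sed expression from them.
pkg='package-name'
pat="^$pkg: ([^ ]*) exists in filesystem\$"
pacman -S "$pkg" | sed -En "s/$pat/\\1/p" > conflicting_files.txt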
394,845 | I'm trying to figure out how to create an entry in the sudoers file where I allow a limited set of arguments, some optional, but keep the command still very restrictive. Is there any easy way to express these restrictions? I'd like the user to be able to run with the -w flag and an optional value but still be restricted. I don't want to hardcode values for the -w option. The user should be able to run any of these commands, with 10 being any digit:
/usr/bin/iptables -nvL *
/usr/bin/iptables -w -nvL *
/usr/bin/iptables -w 10 -nvL *
I came up with these 4 entries. Is there a better way to have optional values defined?
username ALL=(root) NOPASSWD: /usr/bin/iptables -nvL *
username ALL=(root) NOPASSWD: /usr/bin/iptables -w -nvL *
username ALL=(root) NOPASSWD: /usr/bin/iptables -w [[\:digit\:]] -nvL *
username ALL=(root) NOPASSWD: /usr/bin/iptables -w [[\:digit\:]][[\:digit\:]] -nvL * | I fear not :/ If you don't want to use wildcards like »?« or even »*«, you'll have to specify quite exactly what you want. According to the sudoers man page, it only provides general wildcards:
Wildcards
sudo allows shell-style wildcards (aka meta or glob characters) to be used in host names, path names and command line arguments in the sudoers file. Wildcard matching is done via the glob(3) and fnmatch(3) functions as specified by IEEE Std 1003.1 (“POSIX.1”).
* Matches any set of zero or more characters (including white space).
? Matches any single character (including white space).
[...] Matches any character in the specified range.
[!...] Matches any character not in the specified range.
\x For any character ‘x’, evaluates to ‘x’. This is used to escape special characters such as: ‘*’, ‘?’, ‘[’, and ‘]’.
NOTE THAT THESE ARE NOT REGULAR EXPRESSIONS. Unlike a regular expression there is no way to match one or more characters within a range. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/394845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28101/"
]
} |
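A common workaround when sudoers wildcards are too coarse: hide the argument validation in a small wrapper script and allow only that script. This is a sketch; the wrapper path /usr/local/sbin/iptables-list and its validation logic are illustrative assumptions, not part of the question:

#!/bin/sh
# /usr/local/sbin/iptables-list -- list rules, optionally honouring -w.
if [ $# -eq 0 ]; then
    exec /usr/bin/iptables -nvL
elif [ $# -eq 1 ] && [ "$1" = -w ]; then
    exec /usr/bin/iptables -w -nvL
elif [ $# -eq 2 ] && [ "$1" = -w ]; then
    case $2 in
        [0-9]|[0-9][0-9]) exec /usr/bin/iptables -w "$2" -nvL ;;
    esac
fi
echo "usage: ${0##*/} [-w [seconds]]" >&2
exit 1

The sudoers side then needs only one exact entry, for example: username ALL=(root) NOPASSWD: /usr/local/sbin/iptables-list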
394,910 | My directory structure is as follows:
/home/workspace/build/
/home/workspace/js/jai.1.js
/home/workspace/js/jai.2.js
/home/workspace/js/jai.3.js
/home/workspace/js/jai.4.js
I want to tar all js files whose prefix is jai that are in the /home/workspace/js/ directory to the /home/workspace/build/ directory. Suppose the name of the tar file that is created is bundle.tar.gz So bundle.tar.gz should contain the following files:
jai.1.js
jai.2.js
jai.3.js
jai.4.js
I want to achieve this using the tar command instead of copying files to the destination and then tarring it. EDIT: I want to tar it from the destination directory, i.e. /home/workspace/build/ | (cd ../js && pax -w jai.*.js) | gzip > bundle.tar.gz would do it (or replace pax -w with tar cf - if you have a tar command). The point is to change the current directory only for pax / tar by using a subshell. Some tar implementations ( tar is a very non-portable command which is why POSIX introduced pax ) have a -C option that makes it change directory for the file collection but not for the file output. Those also generally support a z option to call gzip by themselves or implement the gzip compression internally. However, things like: tar czf bundle.tar.gz -C ../js jai.*.js wouldn't work because that jai.*.js is expanded by the shell for which the current directory has not changed. With zsh , you could do: tar czf bundle.tar.gz -C ../js ../js/jai.*.js(:t) Where the :t modifier gets the tail (basename) of the files generated by the glob. With pax , you can also do: pax -'s|.*/||' -w -- ../js/jai.*.js | gzip > bundle.tar.gz But note that the substitution to strip the leading path components would also apply to the target of symlinks (some tar implementations have similar options where you can specify whether you want those translated or not), and would also give a different outcome if any of those jai.*.js files were of type directory and contained more files. With libarchive bsdtar , you can also do: bsdtar zcf bundle.tar.gz -C ../js --include=. --include='jai.*.js' . GNU tar has no --include but has a --exclude , so you could do: tar zcf bundle.tar.gz -C ../js --exclude='*[^s.]' \ --exclude='*?.' \ --exclude='*[^j]s' \ --exclude='*[^.]js' . (those would add an entry for . though) With star : star czf bundle.tar.gz -C ../js 'pat=jai.*.js' . (would include jai.foo/bar.js though, and not non-matching files in jai.*.js directories) star czf bundle.tar.gz -C ../js 'pat=jai.#[^/].js{%!/*}' . (where #<expr> is like <expr>* in EREs and {%!/*} like (/.*)? ( % is nothing and ! is OR)) to do the same as (cd ../js && tar czf - jai.*.js) > bundle.tar.gz . It would also still crawl the entire directory structure for only selecting the files at the top level. With newer versions, you can also use -find . So for the equivalent of our (cd ...) one that doesn't crawl into unnecessary directories but still includes all the files in jai.*.js directories: star czf bundle.tar.gz C=../js -find . \ \( \( -name '*.js' -o -path '*/*' -o -name . \) -o ! -prune \) ! -name . Though more likely, you'd actually want: star czf bundle.tar.gz C=../js \ -find . -maxdepth 1 -name 'jai.*.js' -type f That is, archive only the jai.*.js regular files and only at the top level, which with zsh you could also do with: (cd ../js && pax -w -- jai.*.js(.)) | gzip > bundle.tar.gz ( (.) being a glob qualifier that restricts to regular files). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212912/"
]
} |
394,917 | I am using tune2fs, but it gives data in blocks, and I can't get the exact value of the total size of the partition. I have also used fdisk -l /dev/mmcblk0p1 , but the size I am getting from there is also a different value. How can I find the exact partition size? | The command is: blockdev --getsize64 /dev/mmcblk0p1 It gives the result in bytes, as a 64-bit integer. It queries the byte size of a block device , as the kernel sees its size. The reason why fdisk -l /dev/mmcblk0p1 didn't work is that fdisk does something totally different: it reads in the partition table (= first sector) of the block device, and prints what it found . It doesn't check anything, it only says what is in the partition table. It doesn't even bother if the partition table is damaged, or the block device doesn't have one: it will print a warning that the checksum is not okay, but it still prints what it finds, even if the values are clearly nonsense. This is what happened in your case: /dev/mmcblk0p1 does not have a partition table. As the name of the device shows, it is already the first partition of the physical disk /dev/mmcblk0 . This disk contains a partition table; had you queried it with fdisk -l /dev/mmcblk0 , it would have worked (assuming it had an msdos partition table). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/394917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253236/"
]
} |
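For cross-checking, the same number can be read from lsblk or sysfs. A short sketch; the device name comes from the question, and note that the /sys size file counts 512-byte sectors regardless of the device's real block size:

# Partition size in bytes, three ways that should agree:
blockdev --getsize64 /dev/mmcblk0p1
lsblk -b -n -o SIZE /dev/mmcblk0p1
echo $(( $(cat /sys/class/block/mmcblk0p1/size) * 512 ))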
394,984 | I wiped the disk using wipefs -a /dev/sda. I happily formatted the disk, and it seems that when I'm about to mount /dev/sda3 it says "unknown file system type crypto_LUKS" . I did no encryption on this partition, so it's like the previous configuration is saved somehow. If I apparently wiped or reset the disk, how can this be possible? Do I have to open and decrypt and remove encryption on that drive first? | wipefs -a /dev/sdx only wipes magic signatures on that device, not on its partitions. So at best, it only wipes your partition table, but if you then proceed to re-create the partitions at the same offsets as before, the old data is still there. You'd have to wipe the partitions as well.
wipefs -a /dev/sdx[1-9]*   # wipe old partitions
wipefs -a /dev/sdx         # wipe the disk itself
parted /dev/sdx            # create new partitions
wipefs -a /dev/sdx[1-9]*   # wipe the new partitions, just in case
# create filesystems or whatever
That aside it's also entirely possible for wipefs to not wipe something if it doesn't know the signature. Or for another program to still recognize the data on the partition despite the signature being damaged. wipefs only overwrites a few magic bytes, which is reversible in most cases. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/394984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251954/"
]
} |
395,059 | So I change my MAC address with macchanger -A wlp68s0b1 at boot with crontab. Here is what happens when I disconnect and reconnect. While connecting after boot:
rahman@debian:~$ macchanger -s wlp68s0b1
Current MAC: 00:22:31:c6:38:45 (SMT&C Co., Ltd.)
Permanent MAC: 00:00:00:00:00:00 (FAKE CORPORATION)
After disconnecting:
rahman@debian:~$ macchanger -s wlp68s0b1
Current MAC: 16:7b:e7:3c:d3:cd (unknown)
Permanent MAC: 00:00:00:00:00:00 (FAKE CORPORATION)
After reconnecting:
rahman@debian:~$ macchanger -s wlp68s0b1
Current MAC: 00:00:00:00:00:00 (FAKE CORPORATION)
Permanent MAC: 00:00:00:00:00:00 (FAKE CORPORATION)
And so on; with every disconnect I get a different random MAC address which fades on reconnecting, giving me my real MAC address. What causes that and how do I stop it? Some outputs:
rahman@debian:~$ lspci -nn |grep 14e4
44:00.0 Network controller [0280]: Broadcom Limited BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)
rahman@debian:~$ uname -a
Linux debian 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux
rahman@debian:~$ sudo ifconfig
enp0s25: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
  ether 00:24:c0:7b:a8:8b txqueuelen 1000 (Ethernet)
  RX packets 0 bytes 0 (0.0 B)
  RX errors 0 dropped 0 overruns 0 frame 0
  TX packets 0 bytes 0 (0.0 B)
  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  device interrupt 20 memory 0xd4800000-d4820000
enp0s25:avahi: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
  inet 169.254.9.109 netmask 255.255.0.0 broadcast 169.254.255.255
  ether 00:24:c0:7b:a8:8b txqueuelen 1000 (Ethernet)
  device interrupt 20 memory 0xd4800000-d4820000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
  inet 127.0.0.1 netmask 255.0.0.0
  inet6 ::1 prefixlen 128 scopeid 0x10<host>
  loop txqueuelen 1 (Local Loopback)
  RX packets 9436 bytes 6584515 (6.2 MiB)
  RX errors 0 dropped 0 overruns 0 frame 0
  TX packets 9436 bytes 6584515 (6.2 MiB)
  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp68s0b1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  inet 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255
  inet6 fe80::6711:9875:eb78:24fc prefixlen 64 scopeid 0x20<link>
  inet6 fd9c:c172:b03b:ce00:f1e0:695e:7da0:91a prefixlen 64 scopeid 0x0<global>
  ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
  RX packets 484346 bytes 641850809 (612.1 MiB)
  RX errors 0 dropped 0 overruns 0 frame 0
  TX packets 368394 bytes 44259668 (42.2 MiB)
  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
rahman@debian:~$ sudo iwconfig
lo no wireless extensions.
enp0s25 no wireless extensions.
wlp68s0b1 IEEE 802.11 ESSID:"3bdo"
  Mode:Managed Frequency:2.447 GHz Access Point: 9C:C1:72:B0:3B:D4
  Bit Rate=65 Mb/s Tx-Power=30 dBm
  Retry short limit:7 RTS thr:off Fragment thr:off
  Encryption key:off
  Power Management:off
  Link Quality=54/70 Signal level=-56 dBm
  Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
  Tx excessive retries:4 Invalid misc:183 Missed beacon:0 | Network-Manager will reset your MAC address during Wi-Fi scanning. To permanently change your MAC address: Edit your /etc/NetworkManager/NetworkManager.conf as follows:
[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=false

[device]
wifi.scan-rand-mac-address=no

[keyfile]
Edit your /etc/network/interfaces by adding the following line:
pre-up ifconfig wlp68s0b1 hw ether xx:xx:xx:yy:yy:yy
The xx:xx:xx:yy:yy:yy is the new MAC address obtained from the output of macchanger -A wlp68s0b1 . Reboot and verify your settings.
Arch Linux wiki: Configuring MAC Address Randomization. Randomization during Wi-Fi scanning is enabled by default, but it may be disabled by adding the following lines to /etc/NetworkManager/NetworkManager.conf or a dedicated configuration file under /etc/NetworkManager/conf.d :
[device]
wifi.scan-rand-mac-address=no
Setting it to yes results in a randomly generated MAC address being used when probing for wireless networks. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240312/"
]
} |
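If you would rather pin the MAC per connection than edit config files, newer NetworkManager releases expose this through nmcli as well; a sketch, assuming the connection profile is named 3bdo as in the question's iwconfig output and that the MAC shown is the one you want:

# Clone a fixed MAC address onto one Wi-Fi connection profile:
nmcli connection modify 3bdo 802-11-wireless.cloned-mac-address 00:11:22:33:44:55
nmcli connection down 3bdo && nmcli connection up 3bdo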
395,086 | I am trying to create 50 directories (dir-01..dir-50). And I want to create 50 files (01.txt..50.txt) inside each of the 50 directories. For example: dir-01/01.txt..50.txt dir-02/02.txt..50.txt etc... I am able to create the directories, but I am having trouble with creating the files inside each. I am also trying to compress all these afterwards into a tar file. This is where I am at so far:
for i in {1..50}; do mkdir dir-$i; done
for j in {1..50}; do touch $j.txt.dir-*; done
tar -cf final.tar dir-{1..50}
I know that second loop is wrong, but I am unsure how to proceed. Any advice is appreciated. This seems to work, but I am unsure if it is correct in syntax or format:
for i in {1..50}; do
  mkdir "dir-$i"
  for j in {1..50}; do
    touch "./dir-$i/$j.txt"
  done
done
tar -cf final.tar dir-{1..50} | With zsh or bash or yash -o braceexpand :
$ mkdir dir-{01..50}
$ touch dir-{01..50}/file{01..50}.txt
$ ls dir-45
file01.txt file09.txt file17.txt file25.txt file33.txt file41.txt file49.txt
file02.txt file10.txt file18.txt file26.txt file34.txt file42.txt file50.txt
file03.txt file11.txt file19.txt file27.txt file35.txt file43.txt
file04.txt file12.txt file20.txt file28.txt file36.txt file44.txt
file05.txt file13.txt file21.txt file29.txt file37.txt file45.txt
file06.txt file14.txt file22.txt file30.txt file38.txt file46.txt
file07.txt file15.txt file23.txt file31.txt file39.txt file47.txt
file08.txt file16.txt file24.txt file32.txt file40.txt file48.txt
$ tar -cf archive.tar dir-{01..50}
With ksh93 :
$ mkdir dir-{01..50%02d}
$ touch dir-{01..50%02d}/file{01..50%02d}.txt
$ tar -cf archive.tar dir-{01..50%02d}
The ksh93 brace expansion takes a printf() -style format string that can be used to create the zero-filled numbers. With a POSIX sh :
i=0
while [ "$(( i += 1 ))" -le 50 ]; do
  zi=$( printf '%02d' "$i" )
  mkdir "dir-$zi"
  j=0
  while [ "$(( j += 1 ))" -le 50 ]; do
    zj=$( printf '%02d' "$j" )
    touch "dir-$zi/file$zj.txt"
  done
done
tar -cf archive.tar dir-*  # assuming only the folders we just created exist
An alternative for just creating your tar archive without creating so many files, in bash :
mkdir dir-01
touch dir-01/file{01..50}.txt
tar -cf archive.tar dir-01
for i in {02..50}; do
  mv "dir-$(printf '%02d' "$(( 10#$i - 1 ))")" "dir-$i"
  tar -uf archive.tar "dir-$i"
done
This just creates one of the directories and adds it to the archive. Since all files in all 50 directories are identical in name and contents, it then renames the directory and appends it to the archive in successive iterations to add the other 49 directories. (The 10# base prefix keeps zero-padded numbers like 08 from being read as octal.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253375/"
]
} |
395,156 | If I have files a , b and c in a directory on a Linux machine, how can I get the total number of bytes of these 3 files in a way that does not depend on how e.g. ls shows the information? I mean I am interested in a way that is not error prone. Update: 1) I am interested in binary files, not ascii files. 2) It would be ideal to have a portable solution, e.g. working on GNU/Linux and Mac. | Use du with the -c (print total) and -b (bytes) options:
$ ls -l
total 12
-rw-r--r-- 1 terdon terdon  6 Sep 29 17:36 a.txt
-rw-r--r-- 1 terdon terdon 12 Sep 29 17:38 b.txt
-rw-r--r-- 1 terdon terdon 17 Sep 29 17:38 c.txt
Now, run du :
$ du -bc a.txt b.txt c.txt
6   a.txt
12  b.txt
17  c.txt
35  total
And if you just want the total size in a variable:
$ var=$( du -bc a.txt b.txt c.txt | tail -n1 | cut -f1)
$ echo $var
35 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
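Since -b is a GNU extension that the stock macOS du lacks, here is a hedged portable sketch that sums exact byte sizes with stat instead; the two branches reflect the differing GNU and BSD option spellings, which is an assumption about the platforms in play:

#!/usr/bin/env bash
# Sum exact file sizes in bytes on both GNU/Linux and macOS/BSD.
total=0
for f in a b c; do
    size=$(stat -c %s "$f" 2>/dev/null || stat -f %z "$f")
    total=$(( total + size ))
done
echo "$total"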
395,218 | Staphylococcus_sp_HMSC14C01-KV792037.1:0.00371647154267842634,Staphylococcus_hominis_VCU122-AHLD01000058.1:0.00124439639436691308)69:0.00227646100249620856,(Staphylococcus_sp_HMSC072E01-KV814990.1:0.00288325234399461859,(((Staphylococcus_hominis_793_SHAE-JUSR01000051.1:0.00594391769091206796,Staphylococcus_pettenkoferi_1286_SHAE-JVVL01000037.1:0.00594050248317441135) The comma is separating different items and in each item I want to remove everything between - and : including - but keeping : . How can I do that? So it should look like: Staphylococcus_sp_HMSC14C01:0.00371647154267842634,Staphylococcus_hominis_VCU122:0.00124439639436691308)69:0.00227646100249620856 I used sed 's/-.*://' 1.file > 2.file but ended up removing the whole file and just kept the first and last values. | .* is a greedy regexp, matching the longest possible match. You need to match the shortest match but match it globally on the whole line. Try sed 's/-[^:-]*:/:/g' 1.file > 2.file The character class [^:-] matches anything except colon and dash (and maybe it should match anything except colon only), so the regexp says "dash followed by any number of non-dash, non-colon characters followed by a colon". It then replaces that with a colon (since you wanted to keep that) and does the replacement globally (the trailing g ) on the line. If you omit the g , only the first instance would be replaced. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253480/"
]
} |
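A quick way to sanity-check the answer's pattern on a throwaway string before running it on the real file (the names here are shortened stand-ins for the question's data):

$ echo 'name-KV792037.1:0.0037,other-AH58.1:0.0012' | sed 's/-[^:-]*:/:/g'
name:0.0037,other:0.0012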
395,230 | So I can enter systemc and press Tab and get systemctl . But what if I want to list all commands that end in ctl ? How would I do that? | To list all available commands, including aliases, functions, bash builtins and bash keywords, use compgen -c . You may grep the resulting list with any pattern, for example: compgen -c | grep 'ctl$' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144995/"
]
} |
395,235 | I'm using the tmux-resurrect plugin to recreate tmux sessions after tmux is shut down. It mostly works but it is not restoring running commands. I've tried with vim , less , man and tail , all supported, but I get nothing but a waiting bash prompt. Here's the save data for one pane that was running vim test.txt as seen in one of the save files in ~/.tmux/resurrect : pane 0 1 :bash 1 :* 2 :/tmp 0 vim : Clearly something is missing here. Where is "test.txt"? Pretty hard to recreate a command if the arguments aren't persisted. Why aren't full commands being saved? I am running this on Cygwin which I suspect is relevant. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213782/"
]
} |
395,284 | I found malware on my ec2 instance which was continuously mining bitcoin and using my instance's processing power. I successfully identified the process, but was unable to remove and kill it. I ran this command watch "ps aux | sort -nrk 3,3 | head -n 5" It shows the top five processes running on my instance, from which I found there is a process named ' bashd ' which was consuming 30% of cpu. The process is bashd -a cryptonight -o stratum+tcp://get.bi-chi.com:3333 -u 47EAoaBc5TWDZKVaAYvQ7Y4ZfoJMFathAR882gabJ43wHEfxEp81vfJ3J3j6FQGJxJNQTAwvmJYS2Ei8dbkKcwfPFst8FhG -p x I killed this process by using the kill -9 process_id command. After 5 seconds, the process started again. | If you did not put the software there and/or if you think your cloud instance is compromised: Take it off-line, delete it, and rebuild it from scratch (but read the link below first). It does not belong to you anymore, you can not trust it any longer . See "How to deal with a compromised server" on ServerFault for further information about what to do and how to behave when getting a machine compromised. In addition to the things to do and think about in the list(s) linked to above, be aware that depending on who you are and where you are, you may have a legal obligation to report it to either a local/central IT security team/person within your organization and/or to authorities (possibly even within a certain time frame). In Sweden (since December 2015), for example, any state agency (e.g. universities) are obliged to report IT-related incidents within 24 hours. Your organization will have documented procedures for how to go about doing this. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253538/"
]
} |
395,291 | I am new to shell scripting and I am trying to sequentially number the headers in a fasta file. The sequences in my fasta file look like this:
>Rodentia sp.
MALWILLPLLALLILWGPDPAQAFVNQHLCGSHLVEALYILVCGERGFFYTPMSRREVEDPQVGQVELGAGPGAGSEQTLALEVARQARIVQQCTSGICSLYQENYCN
>Ovis aries
MALWTRLVPLLALLALWAPAPAHAFVNQHLCGSHLVEALYLVCGERGFFYTPKARREVEGPQVGALELAGGPGAGGLEGPPQKRGIVEQCCAGVCSLYQLENYCN
I want to use awk in my shell script so that the headers are sequentially numbered, by inserting a number starting from 1 to n (where n is the number of sequences) after the ">", so that the sequences look like this:
> 1 Rodentia sp.
MALWILLPLLALLILWGPDPAQAFVNQHLCGSHLVEALYILVCGERGFFYTPMSRREVEDPQVGQVELGAGPGAGSEQTLALEVARQARIVQQCTSGICSLYQENYCN
> 2 Ovis aries
MALWTRLVPLLALLALWAPAPAHAFVNQHLCGSHLVEALYLVCGERGFFYTPKARREVEGPQVGALELAGGPGAGGLEGPPQKRGIVEQCCAGVCSLYQLENYCN
I tried using the sub function in awk to do this, replacing every instance of ">" with "> [a number]". awk '/>/{sub(">", "> ++i ")}1' file However, I don't understand how to increment variables using the sub function in awk. I would like to know if there is a way to do this using the sub function. I understand how sub works, but I don't know how to declare the variable to be incremented properly. I declared i to be 1 at the beginning of my shell script: i=1 However, the output I get from the sub function is:
> ++$i Rodentia sp.
> ++$i Ovis aries
How can I declare a variable properly so that I can use the awk sub function to number the headers? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253541/"
]
} |
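For reference, the counter the question is after can live entirely inside awk rather than the shell; a minimal sketch, with file standing for the fasta file:

awk '/^>/{sub(/^>/, "> " (++i) " ")}1' file

Here ++i increments an awk variable (starting from 0) each time a header line is seen, and string concatenation builds the "> 1 " replacement text; shell variables like $i are never expanded inside a single-quoted awk program, which is why the original attempt printed ++$i literally.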
395,297 | I have a directory as follows:
-rw-r--r-- 1 ualaoip2 mcm1   1073233 Sep 30 12:40 database.260.4-0.tar.gz
-rw-r--r-- 1 ualaoip2 mcm1 502373963 Sep 30 12:40 database.260.4-1.tar.gz
-rw-r--r-- 1 ualaoip2 mcm1 880379753 Sep 30 12:40 database.260.4-2.tar.gz
drwxr-xr-x 2 ualaoip2 mcm1      4096 Sep 30 13:41 db0file
drwxr-xr-x 2 ualaoip2 mcm1      4096 Sep 30 13:41 db1file
drwxr-xr-x 2 ualaoip2 mcm1      4096 Sep 30 13:41 db2file
and I want to move the file database...0 into folder0 &c... What's the best way of doing this? I tried various variants of for i in $(ls fi*) do; mv $i ./folder$i but they renamed things and overwrote lots of stuff I didn't want! I tried using variants of find . -maxdepth 1 -type d -printf '%f\n' | sort /* why is it not sorted? but couldn't get rid of the . for the current directory. I used mkdir db{0..7} to create the files - is this the best way? I would appreciate a couple of words of explanation with the answer - not just a monkey see, monkey do! :-) | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99494/"
]
} |
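A hedged sketch for the move itself, deriving the digit from each archive's name so that database.260.4-0.tar.gz lands in db0file and so on; the name pattern is taken from the listing above, the rest is illustrative:

for f in database.260.4-*.tar.gz; do
    n=${f#database.260.4-}   # strip the prefix, leaving e.g. "0.tar.gz"
    n=${n%%.*}               # keep only the leading digit
    mv -- "$f" "db${n}file/"
done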
395,298 | I have a script that uses GNU parallel. I want to pass two parameters for each "iteration". In a serial run I have something like:
for (( i=0; i<=10; i++ ))
do
  a=${tmp1[$i]}
  b=${tmp2[$i]}
done
And I want to make this parallel as:
pf(){
  a=$1
  b=$2
}
export -f pf
parallel --jobs 5 --linebuffer pf ::: <what to write here?> | Omitting your other parallel flags just to stay focused... parallel --link pf ::: A B ::: C D This will run your function first with a=A , b=C followed by a=B , b=D or
a=A b=C
a=B b=D
Without --link you get full combination like this:
a=A b=C
a=A b=D
a=B b=C
a=B b=D
Update: As Ole Tange mentioned in a comment [since deleted - Ed. ] there is another way to do this: use the :::+ operator. However, there is an important difference between the two alternatives if the number of arguments is not the same in each param position. An example will illustrate. parallel --link pf ::: A B ::: C D E output:
a=A b=C
a=B b=D
a=A b=E
parallel pf ::: A B :::+ C D E output:
a=A b=C
a=B b=D
So --link will "wrap" such that all arguments are consumed while :::+ will ignore the extra argument. (In the general case I prefer --link since the alternative is in some sense silently ignoring input. YMMV.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180442/"
]
} |
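Tying the answer back to the question's two arrays, a short sketch assuming tmp1 and tmp2 are bash arrays of equal length:

pf() { a=$1; b=$2; echo "a=$a b=$b"; }
export -f pf
tmp1=(A B); tmp2=(C D)
parallel --jobs 5 --link pf ::: "${tmp1[@]}" ::: "${tmp2[@]}"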
395,316 | I'm looking for where I can install and try the new browser Firefox Quantum, but I didn't find how to get it. Can someone please tell me what repositories or links to use to download and install it? Thank you. | Add deb http://ftp.hr.debian.org/debian sid main contrib non-free to /etc/apt/sources.list and install it with this command: apt install -t sid firefox This will install only Firefox from unstable. The rest of the packages will remain on stretch . Added by cas 2018-04-19 (because it's quite common for people to want to install something from unstable without upgrading everything to unstable, and the answer here is applicable to more than just firefox): This is a good answer, but incomplete. There are two more things that need to be done before running apt install -t sid firefox . Add APT::Default-Release "stable"; to /etc/apt/apt.conf or a file in /etc/apt/apt.conf.d/ so that apt will only install packages from sid/unstable if you explicitly tell it to with -t sid . If you don't set the default release to stable, the next upgrade or dist-upgrade will upgrade your entire system to sid . Most people don't want this. If you're using a named Debian distribution such as jessie or stretch in your sources.list file, use that name rather than the generic stable . Run apt update to update the local package database. Finally, apt install -t sid firefox will install not only the firefox package but also the minimum set of upgraded & new packages required to satisfy the new firefox package's dependencies. This will usually just be a few firefox-related packages, built from the same source, but may also include other packages - e.g. if the new firefox depends on a newer version of a library package. Sometimes it may even cause an important package like libc6 to be upgraded which will then trigger a huge cascade of other package upgrades, effectively upgrading you to a hybrid of stable & unstable. This is generally worse than doing a full dist-upgrade to unstable itself. If this happens, you have two good choices: 1. cancel the firefox upgrade and wait for it to arrive in stable or https://backports.debian.org/ ; 2. cancel it and upgrade to unstable (which is not as bad as it sounds. In Debian, "unstable" doesn't mean "will crash all the time". It means "pre-release, changes constantly, sometimes things may break and require manual fixing"). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96144/"
]
} |
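An equivalent, more granular alternative to APT::Default-Release is an apt preferences pin; a sketch of what such a file could look like (the file name is arbitrary, and priority 100 is the conventional "track unstable only on explicit request" value):

# /etc/apt/preferences.d/unstable-low-priority
Package: *
Pin: release a=unstable
Pin-Priority: 100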
395,328 | Can someone explain the following rules for filtering traffic to the loopback interface?
# Allow all loopback (lo0) traffic and reject traffic
# to localhost that does not originate from lo0.
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT
The way I interpret it: 1. accept all incoming packets to loopback ; 2. reject all incoming packets from 127.x.x.x which are not to loopback . What are the practical uses for these rules? In the case of 1, does this mean that all packets to loopback do not have to go through additional filtering? Is it possible for an incoming packet to loopback to be from an external source? | What the rules mean is exactly what you are describing: all packets are accepted from the loopback interface, and no packets with a loopback address are accepted from other sources. It does not mean per se that data coming from the loopback interface has to go through additional filtering; what it means is that rule 2) is trying to prevent fake/spoofed packets with a loopback address coming from other interfaces. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253565/"
]
} |
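For context, these two rules usually sit near the top of a default-deny input policy; a minimal illustrative ruleset (the policy and the conntrack rule are assumptions about a typical setup, not part of the question):

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT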
395,402 | I am attempting to repair and upgrade an Arch Linux system. I boot off of a Live USB which is a newer version than the original install. Then I mount the sda and chroot to its mount point. When I run mkinitcpio -p linux , I get the error from the title: '/lib/modules/4.9.8-1-ARCH' is not a valid kernel module directory. lib/modules/ has 4.13.3-1-ARCH. How do I tell mkinitcpio to use this directory instead? | The problem is that I forgot to mount my boot partition to /boot when I upgraded my entire system, including the Linux kernel. After dealing with some issues with pacman and PGP keys, I finally ran pacman -S filesystem linux and I am able to boot off of my HDD. (I'm not sure if filesystem was required to fix this problem, but it was referenced in other sources.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21888/"
]
} |
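A sketch of the repair sequence the answer implies: boot the live USB, mount both the root and boot partitions, then reinstall the kernel inside the chroot. The partition names here are placeholders:

# From the Arch live USB (sdX1/sdX2 are assumptions about the layout):
mount /dev/sdX2 /mnt          # root filesystem
mount /dev/sdX1 /mnt/boot     # the boot partition -- the step that was missed
arch-chroot /mnt
pacman -S filesystem linux    # the linux package hook regenerates the initramfs
exit
reboot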
395,428 | If I rename images via exiv to the exif date time, I do the following: find . -iname \*jpg -exec exiv2 -v -t -r '%Y_%m_%d__%H_%M_%S' rename {} \; Now it might happen that pictures have exactly the same timestamp (including seconds). How can I make the filename unique automatically? The command should be stable in the sense that if I execute it on the same directory structure again (perhaps after adding new pictures), the pictures already renamed shouldn't change and if pictures with already existing filenames are added the new filenames should be unique as well. My first attempt was just to leave the original basename in the resulting filename, but then the command wouldn't be stable in the sense above. | You may want to try jhead instead which does that out-of-the-box (with a , b ... z suffixes allowing up to 27 files with the same date) and doesn't have the stability issue mentioned by @meuh: find . -iname '*jpg' -exec jhead -n%Y_%m_%d__%H_%M_%S {} + Or using exiftool (example in man page): exiftool -ext jpg '-FileName<CreateDate' -d %Y_%m_%d__%H_%M_%S%%-c.%%e . (here with %-c being a numerical suffix starting with - ) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
395,444 | The GNOME "Music" app says it searches my Music folder (e.g. it says this in the message you see when the directory is empty). However it does not follow symbolic links inside ~/Music/ to other directories. I already have a hierarchy of music files. I can't sym-link ~/Music to the root of my hierarchy, because that includes duplicates (different codecs). Nor can I point it at a single sub-directory that contains all the files I want, without using symlinks. Is there a way to support the existing hierarchy, that doesn't involve writing a script to copy gigabytes of music files? gnome-music-3.24.2-1.fc26.x86_64 | GNOME Music does not index the ~/Music directory directly. It uses the shared GNOME indexer, which is called tracker . GNOME lets you configure this in Settings -> Search -> Files. (Select Files and click the cog icon). The dialog shows your Places (xdg dirs like ~/Music), Bookmarks, and Other. You can disable searching in individual Places, enable searching any of your bookmarked folders, and/or manually add folders in the Other section. This allows you to add an arbitrary set of folders to be indexed for music. Assuming you don't also need it to be a different set of folders than what the file search will index. tracker status and tracker info can be used to check the current status of the index. tracker appears happy to index files outside your home directory, but GNOME Music does not seem to pick them up. That can be defeated by adding symlinks from your home directory. It looks like album art is cached in some weird fashion. If Music has seen an album before, it may remember the album cover, even if the files you added this time don't include any album art. ("There are only two hard things in Computer Science...") GNOME Music can also overlook an album in some circumstances, so you may have to remove ~/.local/share/gnome-music to force Music to rescan. If you have to change permissions on some music files to allow your user to read them, tracker will not rescan them immediately. tracker index --file ~/Music does not seem reliable in this situation either, but to trigger a rescan you can just move those files in and out of a temporary directory. Thankfully, tracker seems able to process files in a reasonable amount of time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
395,477 | I logged into a Linux system and tried to get all the current users. I used the users command in a terminal and the result is displayed as "user1 user2 user3 user4". Are there any ways to break the username line and make each username occupy one line? | users | tr -s ' ' '\n' This will take the output of the users utility and replace all spaces with newlines using tr , removing multiple consecutive newlines from the result (with -s ). Pipe that through sort -u to get unique usernames. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253666/"
]
} |
395,509 | Given input:
Via: 1.1.1.1
not relevant line
keyword + some text...
not relevant line N
keyword + some text...
not relevant line N
Via: 2.2.2.2
not relevant line
keyword + some text...
not relevant line N
keyword + some text...
not relevant line N
Via: 3.3.3.3
not relevant lines
Via: 4.4.4.4
not relevant
Via: 5.5.5.5
not relevant line
keyword + some text...
not relevant line N
keyword + some text...
not relevant line N
not relevant line N
...
Required output:
Via: 1.1.1.1
keyword + some text A
keyword + some text A
Via: 2.2.2.2
keyword + some text B
keyword + some text C
Via: 5.5.5.5
keyword + some text D
keyword + some text E
The keyword string can occur N times in any Via block, or may not occur at all. In the output I need only those Via blocks where keyword occurs, together with the keyword strings belonging to them. The closest answer I found is here , but I can't make it into what I need. | With sed : sed -n '/^Via:/{ x; /keyword/p; d; }; /keyword/H; ${ x; /keyword/p; }' input.txt Or, if you want keyword anchored at the beginning of line: sed -n '/^Via:/{ x; /\nkeyword/p; d; }; /^keyword/H; ${ x; /\nkeyword/p; }' input.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56306/"
]
} |
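An awk alternative to the answer's hold-space technique, which some may find easier to follow; a sketch that buffers each Via block and prints it only if it collected keyword lines:

awk '/^Via:/ { if (block ~ /keyword/) printf "%s", block; block = $0 ORS; next }
     /^keyword/ { block = block $0 ORS }
     END { if (block ~ /keyword/) printf "%s", block }' input.txt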
395,535 | On my laptop, turning on autorepeat ( xset r on ) does not work. When checking the output of xev , it seems that the reason why autorepeat fails is because another key is being pressed intermittently (although I am not pressing anything), which cancels autorepeating the currently held down key. When no keys are being pressed, the following events are recorded repeating consistently: KeyPress event, serial 33, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1652400, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseKeyRelease event, serial 33, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1652400, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False It seems that a key with keycode 221 is being pressed, even when it is not. Thus, is it possible to completely disable a keycode such that xorg does not recieve the keypress signal from that keycode at all? Or, is it possible to make keys autorepeat when held down, regardless of whether another key is pressed? Update: After running sudo evtest , it appears that when the hidden output is coming from /dev/input/event11 PEAQ WMI hotkeys No other input device seems to send events when nothing is pressed.Autorepeat works when checking keyboard events in evtest. The full output of xev running for a couple seconds when nothing is pressed: Outer window is 0x1200001, inner window is 0x1200002PropertyNotify event, serial 8, synthetic NO, window 0x1200001, atom 0x27 (WM_NAME), time 1651733, state PropertyNewValuePropertyNotify event, serial 9, synthetic NO, window 0x1200001, atom 0x22 (WM_COMMAND), time 1651733, state PropertyNewValuePropertyNotify event, serial 10, synthetic NO, window 0x1200001, atom 0x28 (WM_NORMAL_HINTS), time 1651733, state PropertyNewValueCreateNotify event, serial 11, synthetic NO, window 0x1200001, parent 0x1200001, window 0x1200002, (10,10), width 50, height 50border_width 4, override NOPropertyNotify event, serial 14, synthetic NO, window 0x1200001, atom 0x15c (WM_PROTOCOLS), time 1651734, state PropertyNewValueMapNotify event, serial 15, synthetic NO, window 0x1200001, event 0x1200001, window 0x1200002, override NOReparentNotify event, serial 28, synthetic NO, window 0x1200001, event 0x1200001, window 0x1200001, parent 0x4000d5, (0,0), override NOConfigureNotify event, serial 28, synthetic NO, window 0x1200001, event 0x1200001, window 0x1200001, (2,0), width 952, height 1033, border_width 2, above 0x0, override NOPropertyNotify event, serial 28, synthetic NO, window 0x1200001, atom 0x15e (WM_STATE), time 1651735, state PropertyNewValueMapNotify event, serial 28, synthetic NO, window 0x1200001, event 0x1200001, window 0x1200001, override NOVisibilityNotify event, serial 28, synthetic NO, window 0x1200001, state VisibilityUnobscuredExpose event, serial 28, synthetic NO, window 0x1200001, (0,0), width 952, height 10, count 3Expose event, serial 28, synthetic NO, window 0x1200001, (0,10), width 10, height 58, count 2Expose event, serial 28, synthetic NO, window 0x1200001, (68,10), width 884, height 58, count 1Expose event, serial 28, synthetic NO, window 0x1200001, (0,68), width 952, height 965, count 0ConfigureNotify event, serial 28, synthetic YES, window 0x1200001, event 0x1200001, window 0x1200001, (962,18), width 952, height 1033, border_width 2, above 0x0, override 
NOFocusIn event, serial 28, synthetic NO, window 0x1200001, mode NotifyNormal, detail NotifyNonlinearKeymapNotify event, serial 28, synthetic NO, window 0x0, keys: 4294967236 0 0 0 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PropertyNotify event, serial 28, synthetic NO, window 0x1200001, atom 0x14f (_NET_WM_DESKTOP), time 1651736, state PropertyNewValueKeyRelease event, serial 30, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1651775, (-509,794), root:(455,814), state 0x0, keycode 36 (keysym 0xff0d, Return), same_screen YES, XLookupString gives 1 bytes: (0d) "" XFilterEvent returns: FalseMappingNotify event, serial 33, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248KeyPress event, serial 33, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1652400, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseKeyRelease event, serial 33, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1652400, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: FalseKeyPress event, serial 34, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1653200, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseKeyRelease event, serial 34, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1653200, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: FalseKeyPress event, serial 34, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1654000, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseKeyRelease event, serial 34, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1654000, (-509,794), root:(455,814), state 0x0, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: FalseMappingNotify event, serial 34, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248KeyPress event, serial 34, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1654760, (-509,794), root:(455,814), state 0x0, keycode 133 (keysym 0xffeb, Super_L), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseMappingNotify event, serial 35, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248KeyPress event, serial 35, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1654800, (-509,794), root:(455,814), state 0x40, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseKeyRelease event, serial 35, synthetic NO, window 0x1200001, root 0x123, subw 0x0, time 1654800, (-509,794), root:(455,814), state 0x40, keycode 221 (keysym 0x0, NoSymbol), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: FalseMappingNotify event, serial 36, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248FocusOut event, serial 36, synthetic NO, window 0x1200001, mode NotifyGrab, detail NotifyAncestorClientMessage event, serial 37, synthetic 
YES, window 0x1200001, message_type 0x15c (WM_PROTOCOLS), format 32, message 0x15d (WM_DELETE_WINDOW) | It seems this is a bug introduced with kernel 4.13, as per Red Hat Bugzilla bug #1497861 . I found out that unloading the peaq_wmi module also serves as a workaround; it seems that someone already submitted a patch to fix the issue though. (To unload the peaq_wmi module one can issue the command sudo modprobe -r peaq_wmi .)
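To make that workaround persist across reboots, you can blacklist the module (a sketch; the file name under /etc/modprobe.d is an arbitrary choice): echo 'blacklist peaq_wmi' | sudo tee /etc/modprobe.d/peaq-wmi-blacklist.conf | {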
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253708/"
]
} |
395,539 | Frequently I’ll have to do some digging around to figure out what I’m doing on linux, involving quite a bit of ‘’, ‘ls -la’, ‘cd’, ‘cat’, and ‘vim’ Is there any way to quickly reuse the file/target of the previous command? e.g. I have to look around for a file, say with ls, and when I’ve found it I’ll need to use it with a program like cat or vim. So let’s say I’ve got ‘ls -la /some/path/SomeName’ and hit , say I’ve drilled down to where this is the file I was looking for. What I’d like to do is some kind of ‘!!’ Like when you forget to use sudo; Say I really want to use ‘cat’ but had been searching around with ‘ls’ and ‘’ - or I’ve been searching around with ‘cat’, up-arrow and continuing to refine my ‘cat /file/path/‘ and then when I’ve found what I’m looking for I’ll want to edit that file I’d like to be able to do something like ‘cat !!’ or ‘vim !!’ | It seems this is a bug introduced with kernel 4.13, as per Redhat bugzilla bug #1497861 . I found out that unloading the peaq_wmi module also serves as a workaround; it seems that someone already submitted a patch to fix the issue though. (To unload the peaq_wmi module one can issue the command sudo modprobe -r peaq_wmi .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52888/"
]
} |
395,588 | The following lsblk command prints the disk sizes in bytes: lsblk -bio KNAME,TYPE,SIZE,MODEL| grep disk sda disk 298999349248 AVAGO sdb disk 1998998994944 AVAGO sdc disk 1998998994944 AVAGO sdd disk 1998998994944 AVAGO sde disk 98998994944 AVAGO How do I print only the disks whose size is greater than 300000000000 , by adding an awk or perl one-liner (or something similar) after the pipe? Expected output: lsblk -bio KNAME,TYPE,SIZE,MODEL| grep disk | ...... sdb disk 1998998994944 AVAGO sdc disk 1998998994944 AVAGO sdd disk 1998998994944 AVAGO | You can do it with awk itself for pattern matching instead of using grep . lsblk -bio KNAME,TYPE,SIZE,MODEL| awk '/disk/ && $3> 300000000000 || NR==1' Or use the scientific notation 3e11 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
395,603 | Is the MMU (Memory Management Unit) chip necessary for a processor to have virtual memory support? Is it possible to emulate MMU functionality in software? (I am aware that it will probably have a big impact on performance). | Any system emulator which emulates a system containing a MMU effectively emulates a MMU in software, so yes, it's possible to emulate a MMU. However , virtual memory requires some way of enforcing memory access control, or at least address translation, so it needs either full software emulation of the CPU running the software being controlled, or it needs hardware assistance. So you could conceivably build a system with no MMU, port QEMU to it, add the missing pieces to make virtual memory actually useful ( e.g. , add support for swap on the host system), and run a MMU-requiring operating system in QEMU, with all the protection you'd expect in the guest operating system (barring QEMU bugs). One real, and old, example of an MMU-less "emulation" used to provide virtual memory is the Z-machine , which was capable of paging and swapping its code and data, on 8-bit systems in the late seventies and early eighties. This worked by emulating a virtual processor on the underlying real processor; that way, the interpreter keeps full control over the memory layout which the running program "sees". In practice, it's generally considered that a MMU is required for virtual memory support, at least at the operating system level. As indicated in MMU-less kernel? , it is possible to build the Linux kernel so that it can run on systems without a MMU, but the resulting configuration is very unusual and only appropriate for very specific use-cases (with no hostile software in particular). It might not support many scenarios requiring virtual memory (swapping, mmap ...). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167680/"
]
} |
395,673 | When working in terminal I often have sequences of commands like: cat long/path/to/file/xyz.filenano long/bath/to/file/xyz.filebash long/path/to/file/xyz.file etc Usually I hit control + A, right arrow a few times, backspace a few times, then write the new command. But during the ten seconds I am doing this, I always wonder if there is some magic shortcut that will do this for me. So my question is... Is there a terminal shortcut to delete the command but keep arguments? If not, is there a way to write your own shortcuts? The terminals I use most of the time are Ubuntu and OSX if that matters. | In many shells, Alt D will delete from the cursor to the end of the word under the cursor, so you can do Ctrl A followed by Alt D to delete the first word. Alternatively, in shells with history manipulation, !:1-$ will be replaced by all the parameters of the previous command, so you can type your new command followed by that to copy the arguments of the previous command: $ echo Hello sudo rm -rf slashHello sudo rm -rf slash$ printf "%s " !:1-$Hello sudo rm -rf slash If your commands have single arguments, or if you’re only interested in the last argument, you can shorten this to !$ ; so in your case $ cat long/path/to/file/xyz.file$ nano !$$ bash !$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165874/"
]
} |
395,685 | This syntax prints "linux" when variable equals "no": [[ $LINUX_CONF = no ]] && echo "linux" How would I use regular expressions (or similar) in order to make the comparison case insensitive? | Standard sh No need to use that ksh -style [[...]] command, you can use the standard sh case construct here: case $LINUX_CONF in ([Nn][Oo]) echo linux;; (*) echo not linux;;esac Or naming each possible case individually: case $LINUX_CONF in (No | nO | NO | no) echo linux;; (*) echo not linux;;esac bash For a bash -specific way to do case-insensitive matching, you can do: shopt -s nocasematch[[ $LINUX_CONF = no ]] && echo linux Or: [[ ${LINUX_CONF,,} = no ]] && echo linux (where ${VAR,,} is the syntax to convert a string to lower case). You can also force a variable to be converted to lowercase upon assignment with: typeset -l LINUX_CONF That also comes from ksh and is also supported by bash and zsh . More variants with other shells: zsh set -o nocasematch[[ $LINUX_CONF = no ]] && echo linux (same as in bash ). set -o extendedglob[[ $LINUX_CONF = (#i)no ]] && echo linux (less dangerous than making all matches case insensitive) [[ ${(L)LINUX_CONF} = no ]] && echo linux[[ $LINUX_CONF:l = no ]] && echo linux (convert to lowercase operators) set -o rematchpcre[[ $LINUX_CONF =~ '^(?i)no\z' ]] (PCRE syntax) ksh93 [[ $LINUX_CONF = ~(i)no ]] or [[ $LINUX_CONF = ~(i:no) ]] Note that all approaches above other than [nN][oO] to do case insensitive matching depend on the user's locale. Not all people around the world agree on what the uppercase version of a given letter is, even for ASCII ones. In practice for the ASCII ones, at least on GNU systems, the deviations from the English rules seem to be limited to the i and I letters and whether the dot is there or not on the uppercase or lowercase version. What that means is that [[ ${VAR,,} = oui ]] is not guaranteed to match on OUI in every locale (even when the bug in current versions of bash is fixed). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
395,692 | How can I increment a variable $var that contains the letters a..z? example: var=({b..z}) for x in 1 2 3 4 5 do echo $x,$var $var++ ( this is wrong but I need to do something like this ) done expected output: 1,b 2,c 3,d 4,e 5,f . . . | The simple way: echo "$x,$var" var="$(echo $var | tr '[a-y]z' '[b-z]a')"
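Put into a loop, that produces the expected output from the question (a minimal sketch, assuming the variable starts at b ): var=b; for x in 1 2 3 4 5; do echo "$x,$var"; var=$(echo "$var" | tr '[a-y]z' '[b-z]a'); done | {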
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
395,694 | I am using Ansible to provision MS SQL Server 2017 to a CentOS 7.4 box. I first went through this guide via command line and it works, but my end goal is to "Ansible-ize" it. However, when I get to the step about installing the command line tools, the -y switch does not work for accepting the license. [user@host ~]$ sudo yum install -y mssql-tools unixODBC-develLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfileResolving Dependencies--> Running transaction check---> Package mssql-tools.x86_64 0:14.0.6.0-1 will be installed--> Processing Dependency: msodbcsql < 13.2.0.0 for package: mssql-tools-14.0.6.0-1.x86_64--> Processing Dependency: msodbcsql >= 13.1.0.0 for package: mssql-tools-14.0.6.0-1.x86_64---> Package unixODBC-devel.x86_64 0:2.3.1-11.el7 will be installed--> Running transaction check---> Package msodbcsql.x86_64 0:13.1.9.1-1 will be installed--> Finished Dependency ResolutionDependencies Resolved================================================================================ Package Arch Version Repository Size================================================================================Installing: mssql-tools x86_64 14.0.6.0-1 packages-microsoft-com-prod 249 k unixODBC-devel x86_64 2.3.1-11.el7 pwbank_repo 55 kInstalling for dependencies: msodbcsql x86_64 13.1.9.1-1 packages-microsoft-com-prod 4.0 MTransaction Summary================================================================================Install 2 Packages (+1 Dependent package)Total size: 4.2 MInstalled size: 4.4 MDownloading packages:Running transaction checkRunning transaction testTransaction test succeededRunning transactionWarning: RPMDB altered outside of yum.The license terms for this product can be downloaded fromhttps://aka.ms/odbc131eula and found in/usr/share/doc/msodbcsql/LICENSE.TXT . By entering 'YES',you indicate that you accept the license terms.Do you accept the license terms? (Enter YES or NO)YES Installing : msodbcsql-13.1.9.1-1.x86_64 1/3 The license terms for this product can be downloaded fromhttp://go.microsoft.com/fwlink/?LinkId=746949 and found in/usr/share/doc/mssql-tools/LICENSE.txt . By entering 'YES',you indicate that you accept the license terms.Do you accept the license terms? (Enter YES or NO)YES Installing : mssql-tools-14.0.6.0-1.x86_64 2/3 Installing : unixODBC-devel-2.3.1-11.el7.x86_64 3/3 Verifying : msodbcsql-13.1.9.1-1.x86_64 1/3 Verifying : unixODBC-devel-2.3.1-11.el7.x86_64 2/3 Verifying : mssql-tools-14.0.6.0-1.x86_64 3/3 Installed: mssql-tools.x86_64 0:14.0.6.0-1 unixODBC-devel.x86_64 0:2.3.1-11.el7 Dependency Installed: msodbcsql.x86_64 0:13.1.9.1-1 Complete! I noticed that there is a warning before I am prompted saying RPMDB altered outside of yum. Does this mean that Microsoft has specifically modified this rpm in their own way and, because of this, yum doesn't know how to handle it? My Goal Although the above works for a "by hand" install, I am trying to "ansible-ize" the above. My playbook works up until I get to this play: - name: Upgrade all installed packages, and install new ones package: name: '{{item}}' state: latest with_items: - '*' - mssql-server - mssql-tools - unixODBC-devel The above play will update all of my currently installed packages and install MS SQL Server 2017 just fine, but it will hang while trying to install the mssql-tools package, I assume because it is waiting for the user to accept the license. My Question How can I "ansible-ize" this install if my playbook hangs, waiting for the user to accept the license? 
For bonus points, there's a step where I have to run sudo /opt/mssql/bin/mssql-conf setup and follow the on-screen prompts, which, again, impedes my provisioning. I am in the process of going through it once, finding its output file and seeing if I can't just copy that in whenever I re-provision a new box. Alternatively, I am in the process of reading up on Expect . | - name: install mssql-server repo (CentOS, RedHat) get_url: url: "{{ centos_repo_url }}" dest: /etc/yum.repos.d/mssql-server.repo when: ansible_distribution in ['CentOS', 'RedHat']- name: install mssql-server repo (Ubuntu) get_url: url: "{{ ubuntu_repo_url }}" dest: /etc/apt/sources.list.d/mssql-server.list when: ansible_distribution == 'Ubuntu'- name: refresh apt-get cache for server repo (Ubuntu) command: apt-get update when: ansible_distribution == 'Ubuntu'- name: install mssql-server package package: name: mssql-server state: latest- name: install mssql-tools package package: name: mssql-tools state: latest environment: ACCEPT_EULA: 'y' A sample playbook for installing and configuring SQL Server (along with creating a Pacemaker-managed Availability Group) is available at https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/high%20availability/Linux/Ansible%20Playbook
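For the bonus question, mssql-conf also supports an unattended setup driven by environment variables, so the on-screen prompts can be skipped entirely — a sketch from memory, worth verifying against mssql-conf --help for your version (the password and edition values are placeholders): sudo MSSQL_SA_PASSWORD='<YourStrong!Passw0rd>' MSSQL_PID='Developer' /opt/mssql/bin/mssql-conf -n setup accept-eula | {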
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217888/"
]
} |
395,742 | I already read How to split a string into an array in bash but the question seems a little different to me so I'll ask using my data. I have this line coming from STDIN : (5,[a,b,c,d,e,f,g,h,i,j]) The five is my group ID and the letters are values of an array (the group data). I need to get the group ID into a var and the letters into something I can work with, using IFS=',' read -r -a array <<< "$tline" | bkpIFS="$IFS"IFS=',()][' read -r -a array <<<"(5,[a,b,c,d,e,f,g,h,i,j])"echo ${array[@]} ##Or printf "%s\n" ${array[@]}5 a b c d e f g h i jIFS="$bkpIFS" Explanations: First we take a backup of the default/current shell IFS with bkpIFS="$IFS" ; Then we set IFS to the set of delimiters , , ( , ) , ] and [ with IFS=',()][' which means our input string can be delimited with one or more of these delimiters. Next read -r -a array reads and splits the line into an array called array based on the IFS defined above, from the input string passed via the here-string method. The -r option tells the read command not to treat backslash \ as an escape character in the input. IFS=',()][' read -a array <<<"(5,[a,b,c,d,e,f,g,h,i,j,\,k])"echo ${array[@]}5 a b c d e f g h i j ,k see the last ,k , which is caused by having a backslash in the input while running read without its -r option. With echo ${array[@]} we are printing all elements of the array. see What is the difference between $* and $@? and Gilles's answer about ${array[@]} there for more details. printf "%s\n" ${array[@]} is another approach to printing the array elements. Now you can print a specific element of the array with printf "%s\n" ${array[INDEX]} or the same with echo ${array[INDEX]} . Ah, sorry, forgot to give IFS back to the shell, IFS="$bkpIFS" : ) Or using awk and its split function. awk '{split($0,arr,/[][,)(]/)} END{for (x in arr) printf ("%s ",arr[x]);printf "\n"}' <<<"(5,[a,b,c,d,e,f,g,h,i,j])" Explanations: Same here, we are splitting the entire line of input based on the defined group of delimiters [...] in the regexp constant /[...]/ , which is supported in modern implementations of awk 's split function. Read more in the section on the split() function. Next, in END{for (x in arr) printf ("%s ",arr[x]); ...} we loop over the array called arr and print the corresponding values. x here points to the index of the arr elements. Read more about awk 's BEGIN/END rules . See also How to add/remove an element to the array in bash? . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182393/"
]
} |
395,746 | I have a file A 1A 2A 4A 6 I want to print the difference between adjacent values (below-above) in column 2 to a new column 3, to get this A 1A 2 1A 4 2A 6 2 I have discovered something like this on SO , but failed to print it as a new column. awk 'NR>1{print $1-p} {p=$1}' file | To modify the given code in question $ awk 'NR>1{$3=$2-p} {p=$2} 1' file A 1A 2 1A 4 2A 6 2 Fields are indexed from 1 , so use $2 for second column $0 contains entire input record After modifying, you need to print the record. Default action is printing contents of $0 if condition is true. 1 is used idiomatically for such cases | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240505/"
]
} |
395,776 | I have a linux (debian based) server which is configured to allow SSH sessions for the user 'admin', but not the user 'root'. Both these accounts are linked somehow because they share the same password. During an SSH session as admin, 'sudo' is required to run commands, unless I switch to the user 'root'. I have some services which I need to run now and then, or even at system startup. I'm currently using a private/public key mechanism to remotely execute commands on the server. Some of the commands are manually typed, others are shell scripts that I execute. Currently the server still asks for a password when a command uses sudo. Question: How can I remotely execute as user 'admin' without supplying the password? Is it possible to use a private/public key to satisfy sudo? Or perhaps even a way to start shell scripts as the user 'root'? Is it even possible to avoid having to type the password using sudo? If not, are there other alternatives for a situation like mine? | you can tell sudo to skip the password for some commands. e.g. in /etc/sudoers archemar ALL = (www-data) NOPASSWD: /bin/rm -rf /var/www/log/upload.* this allows me to use sudo -u www-data /bin/rm -rf /var/www/log/upload.* as archemar without a password. Note that sudo -u www-data rm -rf /var/www/log/upload.* won't work (it will ask for a password) as rm differs from /bin/rm . (*) Be sure to edit /etc/sudoers using the visudo command. Once you've reached an advanced level, you might wish to have your own sudo files in /etc/sudoers.d . (*) This changed in modern OSes (redhat 7.x circa 2022): if rm in your path matches /bin/rm in sudoers.conf , you might use rm . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253869/"
]
} |
395,801 | I tried both commands and the command find | grep 'filename' is many, many times slower than the simple find 'filename' command. What would be a proper explanation for this behavior? | (I'm assuming GNU find here) Using just find filename would be quick, because it would just return filename , or the names inside filename if it's a directory, or an error if that name did not exist in the current directory. It's a very quick operation, similar to ls filename (but recursive if filename is a directory). In contrast, find | grep filename would allow find to generate a list of all names from the current directory and below, which grep would then filter. This would obviously be a much slower operation. I'm assuming that what was actually intended was find . -type f -name 'filename' This would look for filename as the name of a regular file anywhere in the current directory or below. This will be as quick (or comparably quick) as find | grep filename , but the grep solution would match filename against the full path of each found name, similarly to what -path '*filename*' would do with find . The confusion comes from a misunderstanding of how find works. The utility takes a number of paths and returns all names beneath these paths. You may then restrict the returned names using various tests that may act on the filename, the path, the timestamp, the file size, the file type, etc. When you say find a b c you ask find to list every name available under the three paths a , b and c . If these happen to be names of regular files in the current directory, then these will be returned. If any of them happens to be the name of a directory, then it will be returned along with all further names inside that directory. When I do find . -type f -name 'filename' This generates a list of all names in the current directory ( . ) and below. Then it restricts the names to those of regular files, i.e. not directories etc., with -type f . Then there is a further restriction to names that match filename using -name 'filename' . The string filename may be a filename globbing pattern, such as *.txt (just remember to quote it!). Example: The following seems to "find" the file called .profile in my home directory: $ pwd/home/kk$ find .profile.profile But in fact, it just returns all names at the path .profile (there is only one name, and that is of this file). Then I cd up one level and try again: $ cd ..$ pwd/home$ find .profilefind: .profile: No such file or directory The find command can no longer find any path called .profile . However, if I get it to look at the current directory, and then restrict the returned names to only .profile , it finds it from there as well: $ pwd/home$ find . -name '.profile'./kk/.profile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167680/"
]
} |
395,803 | After getting WARNING: Your hard drive is failingDevice: /dev/sdb [SAT], 1 Offline uncorrectable sectors I run $ sudo smartctl -a /dev/sdbsmartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-514.26.2.el7.x86_64] (local build)Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION ===Device Model: KingDian S200 60GBSerial Number: 2017022100551LU WWN Device Id: 0 000000 000000000Firmware Version: P0707F1User Capacity: 60,022,480,896 bytes [60.0 GB]Sector Size: 512 bytes logical/physicalRotation Rate: Solid State DeviceDevice is: Not in smartctl database [for details use: -P showall]ATA Version is: ACS-2 T13/2015-D revision 3SATA Version is: SATA >3.1, 6.0 Gb/s (current: 3.0 Gb/s)Local Time is: Tue Oct 3 10:56:08 2017 BSTSMART support is: Available - device has SMART capability.SMART support is: Enabled=== START OF READ SMART DATA SECTION ===SMART overall-health self-assessment test result: PASSEDGeneral SMART Values:Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled.Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.Total time to complete Offline data collection: ( 120) seconds.Offline data collectioncapabilities: (0x11) SMART execute Offline immediate. No Auto Offline data collection support. Suspend Offline collection upon new command. No Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported.SMART capabilities: (0x0002) Does not save SMART data before entering power-saving mode. Supports SMART auto save timer.Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.Short self-test routine recommended polling time: ( 2) minutes.Extended self-test routinerecommended polling time: ( 10) minutes.SMART Attributes Data Structure revision number: 1Vendor Specific SMART Attributes with Thresholds:ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x0032 100 100 050 Old_age Always - 0 5 Reallocated_Sector_Ct 0x0032 100 100 050 Old_age Always - 3 9 Power_On_Hours 0x0032 100 100 050 Old_age Always - 4486 12 Power_Cycle_Count 0x0032 100 100 050 Old_age Always - 13160 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 1161 Unknown_Attribute 0x0033 100 100 050 Pre-fail Always - 98163 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 0164 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 9724165 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 9166 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 1167 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 5168 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 1500169 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 100175 Program_Fail_Count_Chip 0x0032 100 100 050 Old_age Always - 0176 Erase_Fail_Count_Chip 0x0032 100 100 050 Old_age Always - 0177 Wear_Leveling_Count 0x0032 100 100 050 Old_age Always - 9602178 Used_Rsvd_Blk_Cnt_Chip 0x0032 100 100 050 Old_age Always - 3181 Program_Fail_Cnt_Total 0x0032 100 100 050 Old_age Always - 0182 Erase_Fail_Count_Total 0x0032 100 100 050 Old_age Always - 0192 Power-Off_Retract_Count 0x0032 100 100 050 Old_age Always - 13194 Temperature_Celsius 0x0022 100 100 050 Old_age Always - 28195 Hardware_ECC_Recovered 0x0032 100 100 050 Old_age Always - 3994818196 Reallocated_Event_Count 0x0032 100 100 050 Old_age Always - 2414197 
Current_Pending_Sector 0x0032 100 100 050 Old_age Always - 3198 Offline_Uncorrectable 0x0032 100 100 050 Old_age Always - 1199 UDMA_CRC_Error_Count 0x0032 100 100 050 Old_age Always - 0232 Available_Reservd_Space 0x0032 100 100 050 Old_age Always - 98241 Total_LBAs_Written 0x0030 100 100 050 Old_age Offline - 36124242 Total_LBAs_Read 0x0030 100 100 050 Old_age Offline - 10259245 Unknown_Attribute 0x0032 100 100 050 Old_age Always - 9799SMART Error Log Version: 1No Errors LoggedSMART Self-test log structure revision number 1Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error# 1 Extended offline Completed without error 00% 4486 -Selective Self-tests/Logging not supported The detailed smartctl output shows: $ sudo smartctl -x /dev/sdbsmartctl 6.2 2017-02-27 r4394 [x86_64-linux-3.10.0-514.26.2.el7.x86_64] (local build)Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION ===Device Model: KingDian S200 60GBSerial Number: 2017022100551LU WWN Device Id: 0 000000 000000000Firmware Version: P0707F1User Capacity: 60,022,480,896 bytes [60.0 GB]Sector Size: 512 bytes logical/physicalRotation Rate: Solid State DeviceDevice is: Not in smartctl database [for details use: -P showall]ATA Version is: ACS-2 T13/2015-D revision 3SATA Version is: SATA >3.1, 6.0 Gb/s (current: 3.0 Gb/s)Local Time is: Tue Oct 3 15:49:27 2017 BSTSMART support is: Available - device has SMART capability.SMART support is: EnabledAAM feature is: UnavailableAPM level is: 128 (minimum power consumption without standby)Rd look-ahead is: EnabledWrite cache is: EnabledATA Security is: Disabled, frozen [SEC2]Wt Cache Reorder: Unavailable=== START OF READ SMART DATA SECTION ===SMART overall-health self-assessment test result: PASSEDGeneral SMART Values:Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled.Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.Total time to complete Offline data collection: ( 120) seconds.Offline data collectioncapabilities: (0x11) SMART execute Offline immediate. No Auto Offline data collection support. Suspend Offline collection upon new command. No Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported.SMART capabilities: (0x0002) Does not save SMART data before entering power-saving mode. Supports SMART auto save timer.Error logging capability: (0x01) Error logging supported. 
General Purpose Logging supported.Short self-test routine recommended polling time: ( 2) minutes.Extended self-test routinerecommended polling time: ( 10) minutes.SMART Attributes Data Structure revision number: 1Vendor Specific SMART Attributes with Thresholds:ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE 1 Raw_Read_Error_Rate -O--CK 100 100 050 - 0 5 Reallocated_Sector_Ct -O--CK 100 100 050 - 3 9 Power_On_Hours -O--CK 100 100 050 - 4491 12 Power_Cycle_Count -O--CK 100 100 050 - 13160 Unknown_Attribute -O--CK 100 100 050 - 1161 Unknown_Attribute PO--CK 100 100 050 - 98163 Unknown_Attribute -O--CK 100 100 050 - 0164 Unknown_Attribute -O--CK 100 100 050 - 10068165 Unknown_Attribute -O--CK 100 100 050 - 9166 Unknown_Attribute -O--CK 100 100 050 - 1167 Unknown_Attribute -O--CK 100 100 050 - 5168 Unknown_Attribute -O--CK 100 100 050 - 1500169 Unknown_Attribute -O--CK 100 100 050 - 100175 Program_Fail_Count_Chip -O--CK 100 100 050 - 0176 Erase_Fail_Count_Chip -O--CK 100 100 050 - 0177 Wear_Leveling_Count -O--CK 100 100 050 - 9687178 Used_Rsvd_Blk_Cnt_Chip -O--CK 100 100 050 - 3181 Program_Fail_Cnt_Total -O--CK 100 100 050 - 0182 Erase_Fail_Count_Total -O--CK 100 100 050 - 0192 Power-Off_Retract_Count -O--CK 100 100 050 - 13194 Temperature_Celsius -O---K 100 100 050 - 28195 Hardware_ECC_Recovered -O--CK 100 100 050 - 4314392196 Reallocated_Event_Count -O--CK 100 100 050 - 2667197 Current_Pending_Sector -O--CK 100 100 050 - 3198 Offline_Uncorrectable -O--CK 100 100 050 - 1199 UDMA_CRC_Error_Count -O--CK 100 100 050 - 0232 Available_Reservd_Space -O--CK 100 100 050 - 98241 Total_LBAs_Written ----CK 100 100 050 - 36474242 Total_LBAs_Read ----CK 100 100 050 - 10529245 Unknown_Attribute -O--CK 100 100 050 - 10146 ||||||_ K auto-keep |||||__ C event count ||||___ R error rate |||____ S speed/performance ||_____ O updated online |______ P prefailure warningGeneral Purpose Log Directory Version 1SMART Log Directory Version 1 [multi-sector log support]Address Access R/W Size Description0x00 GPL,SL R/O 1 Log Directory0x01 SL R/O 1 Summary SMART error log0x02 SL R/O 1 Comprehensive SMART error log0x03 GPL R/O 1 Ext. 
Comprehensive SMART error log0x04 GPL,SL R/O 8 Device Statistics log0x06 SL R/O 1 SMART self-test log0x07 GPL R/O 1 Extended self-test log0x10 GPL R/O 1 NCQ Command Error log0x11 GPL R/O 1 SATA Phy Event Counters0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log0x80-0x9f GPL,SL R/W 16 Host vendor specific log0xde GPL VS 8 Device vendor specific logSMART Extended Comprehensive Error Log Version: 1 (1 sectors)No Errors LoggedSMART Extended Self-test Log Version: 1 (1 sectors)Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error# 1 Extended offline Completed without error 00% 4488 -# 2 Extended offline Completed without error 00% 4487 -# 3 Extended offline Completed without error 00% 4486 -Selective Self-tests/Logging not supportedSCT Commands not supportedDevice Statistics (GP Log 0x04)Page Offset Size Value Description 1 ===== = = == General Statistics (rev 1) == 1 0x008 4 13 Lifetime Power-On Resets 1 0x010 4 4491 Power-on Hours 1 0x018 6 2390408669 Logical Sectors Written 1 0x020 6 69617191 Number of Write Commands 1 0x028 6 690041929 Logical Sectors Read 1 0x030 6 6959725 Number of Read Commands 7 ===== = = == Solid State Device Statistics (rev 1) == 7 0x008 1 0 Percentage Used Endurance IndicatorSATA Phy Event Counters (GP Log 0x11)ID Size Value Description0x0001 4 0 Command failed due to ICRC error0x0002 4 0 R_ERR response for data FIS0x0005 4 1 R_ERR response for non-data FIS0x000a 4 17 Device-to-host register FISes sent due to a COMRESET | I have had this issue in the past. IIRC, "Offline uncorrectable sectors" means that the disk controller (the one inside the disk, not the SATA/SCSI controller in your PC) has had repeated read failures with one sector and has decided that it was definitely not usable. So, I must declare that sector as bad to the filesystem that uses it? No. Fortunately, today's disks automatically replace bad sectors with good ones taken from a pool of spare sectors. Thus, you don't have to declare those bad sectors to your filesystem so that it doesn't use them anymore. Of course, the size of that pool is limited ( Available_Reservd_Space sectors , I guess) and once all spare sectors have been used, bad sectors will remain unusable and you will have to declare them as such to your FS. So, everything is fine, this is a harmless message? Not really. Your drive has tried several times to read the bad sector, and failed every time; so it's been queued up for replacement, but the drive can't do that on its own (it keeps hoping it will eventually be able to read it). Until the sector is overwritten with new data, it will remain "uncorrectable"; once it is overwritten, or if the drive somehow manages to read it, it will be remapped and replaced with a spare sector (in the smartctl output, Offline_Uncorrectable will be decremented by 1 and Reallocated_Sector_Ct will be incremented by 1). What can I do? In such an event, I usually force my RAID 1 array to resync (good disk -> faulty disk) in order for that new sector to have the right content. In any case, do a fsck and, if you have a backup of that partition (you should), compare the backup to your actual content. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78919/"
]
} |
395,853 | I have a bunch of files stored in various directories. They have been created at different times, but I need to check that their contents are the same. I cannot find how to do a diff on ALL files in one directory. Is this possible or is another CLI tool required? | If you don't need to see the actual differences, and only need to know whether the files differ, you can just diff every file in the directory with any one of the files in the directory via a for-loop... for i in ./*; do diff -q "$i" known-file; done ...where known-file is just any given file in the directory. If you get no output, none of the files differ; else you'll get a list of the files that differ from known-file .
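Another quick way to spot differing files, where a checksum tool such as md5sum is available, is to hash everything and eyeball the result — identical files share a checksum, and sorting groups them together: md5sum ./* | sort | {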
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/395853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106673/"
]
} |
395,875 | I encrypted one file with gpg -c <file> and closed the terminal. After a while, I tried to decrypt it with gpg <file> and it decrypted it, without asking for a password. Is that normal? How can I guarantee that gpg will ask for a password, even on the same computer? | This is normal, gpg now uses gpg-agent to manage private keys, and the agent caches keys for a certain amount of time (up to two hours by default, with a ten minute inactivity timeout). To change the defaults, create or edit a file named ~/.gnupg/gpg-agent.conf , and use the following entries: default-cache-ttl specifies the amount of time a cache entry is kept after its last use, in seconds (600 by default); max-cache-ttl specifies the maximum amount of time a cache entry is kept, in seconds (7200 by default). After changing these, you'll need to reload the configuration (try sending SIGHUP to gpg-agent , or killing it outright).
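For example, to have cached keys expire after at most one minute, ~/.gnupg/gpg-agent.conf could contain (a sketch — the values here are arbitrary): default-cache-ttl 60 max-cache-ttl 60 and then reload the agent with gpgconf --reload gpg-agent . | {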
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63603/"
]
} |
395,908 | It is my second day in the *nix world and searching didn't help me solve my issue. This question here is not relevant either. I installed FreeBSD 11 and I installed KDE. pkg install kde I tried to run it with startkde but it turns out that I also need an X server to run a UI. Ok. So I installed it with pkg install xorg Now I'm running X with "startx" and then I'm running KDE with "startkde" and I'm getting Could not start d-bus. can you call qdbus? How can I call qdbus? What's that? Update 1 As was suggested I edited rc.conf and added dbus_enable=YES the result is the same Update 2 I followed §5.7.2 of the handbook and /proc was mounted by adding this line to /etc/fstab : proc /proc procfs rw 0 0 /etc/rc.conf was edited and now has three lines: dbus_enable="YES"hald_enable="YES"kdm4_enable="YES" Now if I'm running startkde I'm getting the error: "display is not set or cannot connect to x server" I found somewhere that I need to execute type plasma-desktop #kde4 to check if plasma-desktop is installed, and it looks like it is fine. Not sure about kde. Here it is: | This is normal, gpg now uses gpg-agent to manage private keys, and the agent caches keys for a certain amount of time (up to two hours by default, with a ten minute inactivity timeout). To change the defaults, create or edit a file named ~/.gnupg/gpg-agent.conf , and use the following entries: default-cache-ttl specifies the amount of time a cache entry is kept after its last use, in seconds (600 by default); max-cache-ttl specifies the maximum amount of time a cache entry is kept, in seconds (7200 by default). After changing these, you’ll need to reload the configuration (try sending SIGHUP to gpg-agent , or killing it outright). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253970/"
]
} |
395,933 | I am having trouble grasping how to properly check from a bash script if the current time is between 23:00 and 06:30. I am trying to run an infinite loop to check the time now, and to do something if the time range is between 11pm and 6:30 am. Here's what I have written so far, which doesn't work the next day: fireup(){ local starttime=$(date --date="23:00" +"%s") local endtime=$(date --date="06:30" +"%s") while :; do local currenttime=$(date +%s) if [ "$currenttime" -ge "$starttime" -a "$currenttime" -ge "$endtime" ]; then do_something else do_something_else fi test "$?" -gt 128 && break local currenttime=$(date +%s) done & } What am I doing wrong? | If all you need is to check if HH:MM is between 23:00 and 06:30, then don't use Unix timestamps. Just check the HH:MM values directly: fireup(){ while :; do currenttime=$(date +%H:%M) if [[ "$currenttime" > "23:00" ]] || [[ "$currenttime" < "06:30" ]]; then do_something else do_something_else fi test "$?" -gt 128 && break done &} Notes: Time in HH:MM will be in lexicographic order, so you can directly compare them as strings. Avoid using -a or -o in [ ] , use || and && instead. Since this is bash, prefer [[ ]] over [ ] , it makes life easier.
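A quick sanity check of the string comparison (and why the zero-padded %H:%M format matters): [[ "06:29" < "06:30" ]] && echo earlier prints earlier , while an unpadded "9:00" would sort after "10:00" lexicographically and give the wrong answer. | {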
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123334/"
]
} |
395,939 | I need to mass rename files by prepending their parent directory name to them, without using the rename command. e.g. /tmp/2017-09-22/cyber.gz/tmp/2017-09-23/cyber.gz/tmp/2017-09-24/cyber.tar Also, the renamed files have to be copied into /tmp/archive without impacting the original files above. It should look like below /tmp/archive/2017-09-22_cyber.gz/tmp/archive/2017-09-23_cyber.gz/tmp/archive/2017-09-24_cyber.tar | If all you need is to check if HH:MM is between 23:00 and 06:30, then don't use Unix timestamps. Just check the HH:MM values directly: fireup(){ while :; do currenttime=$(date +%H:%M) if [[ "$currenttime" > "23:00" ]] || [[ "$currenttime" < "06:30" ]]; then do_something else do_something_else fi test "$?" -gt 128 && break done &} Notes: Time in HH:MM will be in lexicographic order, so you can directly compare them as strings. Avoid using -a or -o in [ ] , use || and && instead. Since this is bash, prefer [[ ]] over [ ] , it makes life easier. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/395939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254002/"
]
} |
395,990 | I created a directory d and a file f inside it. I then gave myself only read permissions on that directory. I understand this should mean I can list the files (e.g. here ), but I can't. will@wrmpb /p/t/permissions> ls -altotal 0drwxr-xr-x 3 will wheel 102 4 Oct 08:30 .drwxrwxrwt 16 root wheel 544 4 Oct 08:30 ..dr-------- 3 will wheel 102 4 Oct 08:42 dwill@wrmpb /p/t/permissions> ls dwill@wrmpb /p/t/permissions> If I change the permissions to write and execute, I can see the file. will@wrmpb /p/t/permissions> chmod 500 dwill@wrmpb /p/t/permissions> ls dfwill@wrmpb /p/t/permissions> Why is this? I am using MacOS. Edit: with reference to @ccorn's answer, it's relevant that I'm using fish and type ls gives the following: will@wrmpb /p/t/permissions> type lsls is a function with definitionfunction ls --description 'List contents of directory' command ls -G $argvend | Some preparations, just to make sure that ls does not try more things than it should: $ unalias ls 2>/dev/null$ unset -f ls$ unset CLICOLOR Demonstration of the r directory permission: $ ls -ld ddr-------- 3 ccorn ccorn 102 4 Okt 14:35 d$ ls df$ ls -l dls: f: Permission denied$ ls -F dls: f: Permission denied In traditional Unix filesystems, a directory was simply a list of (name, inode number) pairs. An inode number is an integer used as index into the filesystem's inode table where the rest of the file metadata is stored. The r permission on a directory allows to list the names in it, but not to access the information stored in the inode table, that is, getting file type, file length, file permissions etc, or opening the file. For that you need the x permission on the directory. This is why ls -l , ls -F , ls with color-coded output etc fail without x permission, whereas a mere ls succeeds. The x permission alone allows inode access, that is, given an explicit name within that directory, x allows to look up its inode and access that directory entry's metadata: $ chmod 100 d$ ls -l d/f-rw-r--r-- 1 ccorn ccorn 0 4 Okt 14:35 d/f$ ls dls: d: Permission denied Therefore, to open a file /a/b/c/f or list its metadata, the directories / , /a , /a/b , and /a/b/c must be granted x permission. Unsurprisingly, creating directory entries needs both w and x permissions: $ chmod 100 d$ touch d/gtouch: d/g: Permission denied$ chmod 200 d$ touch d/gtouch: d/g: Permission denied$ chmod 300 d$ touch d/g$ Wikipedia has a brief overview in an article on file system permissions . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/395990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86002/"
]
} |
396,013 | I am trying to debug a kernel running on QEMU with GDB. The kernel has been compiled with these options: CONFIG_DEBUG_INFO=yCONFIG_GDB_SCRIPTS=y I launch the kernel in qemu with the following command: qemu-system-x86_64 -s -S -kernel arch/x86_64/boot/bzImage In a separate terminal, I launch GDB from the same path and issue these commands in sequence: gdb ./vmlinux(gdb) target remote localhost:1234(gdb) hbreak start_kernel(gdb) c I did not provide a rootfs, as I am not interested in a full working system as of now, just the kernel. I also tried combinations of hbreak/break. The kernel just boots and reaches a kernel panic as rootfs cannot be found... expected. I want it to stop at start_kernel and then step through the code. observation: if I set an immediate breakpoint, it works and stops, but not on start_kernel / startup_64 / main Is it possible that qemu is not calling all these functions, or is it being masked in some way? Kernel: 4.13.4 GDB: GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.3) 7.7.1GCC: gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4 system: ubuntu 14.04 LTS NOTE: This exact same procedure worked with kernel 3.2.93, but does not work with 4.13.4, so I guess some more configuration is needed. I could not find resources online which enabled this debug procedure for kernel 4.0 and up, so as of now I am continuing with 3.2; any and all input on this is welcome. | I ran into the same problem and found the solution on the linux kernel newbies mailing list . You should disable KASLR on your kernel command line with the nokaslr option, or disable the kernel option "Randomize the kernel memory sections" inside "Processor type and features" when you build your kernel image.
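With the QEMU invocation from the question, the kernel command line can be set with -append , so a sketch of the fixed command would be: qemu-system-x86_64 -s -S -kernel arch/x86_64/boot/bzImage -append "nokaslr" | {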
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32936/"
]
} |
396,015 | I use OS X and have several checksum files that are generated from different external harddisks. If the checksum files are in the same location as the files to check then I can simply run eg.: shasum -c sums.sha1 But in my case sums.sha1 is located in ~/Desktop/sums.sha1 and the files to verify are in /Volumes/fr-ubb-1 (external drive, read only). I understand that it's not possible to pass a location parameter to shasum . What's the best practice to run the verification of my checksum file with files in a different location? | Run it from the directory containing the files to check, and give it the full path to the checksum file: cd /Volumes/fr-ubb-1shasum -c ~/Desktop/sums.sha1 This works with most (perhaps all) checksum verification tools, not just shasum . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26174/"
]
} |
396,038 | I'm currently exploring the directory tree on Linux Mint while supporting it by a book that I bought. Well, the book specifically said that: The /dev directory contains the special device files for all the devices. The device files are created during installation, and later with the /dev/MAKEDEV script. The /dev/MAKEDEV.local is a script written by the system administrator that creates local-only device files or links (...) I can't find that script, am I supposed to find it or is it generated upon installation of a new device? | Your book was correct when it was written, but it is now obsolete. MAKEDEV used to be a script in /dev , potentially supplemented by a local MAKEDEV.local written by the system administrator; nowadays, if it exists, it’s more likely to live in /sbin . Many current Linux systems don’t have a MAKEDEV at all, they rely on the kernel and udev to populate device nodes as necessary. See Why is the name of the MAKEDEV script spelled in all caps? for more on the history of MAKEDEV . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195189/"
]
} |
396,040 | I would like to know how one can generate network load on a Linux-based ARM machine which is equivalent to video streaming over the network. The Linux machine is equipped with tools like iperf. | Your book was correct when it was written, but it is now obsolete. MAKEDEV used to be a script in /dev , potentially supplemented by a local MAKEDEV.local written by the system administrator; nowadays, if it exists, it’s more likely to live in /sbin . Many current Linux systems don’t have a MAKEDEV at all, they rely on the kernel and udev to populate device nodes as necessary. See Why is the name of the MAKEDEV script spelled in all caps? for more on the history of MAKEDEV . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396040",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84819/"
]
} |
396,063 | I have a script generating an index based on each file in a folder. All file names are a number with extension. How can I modify my loop to process them in numeric order? for file in xml/*.xml; do ...done | If you have GNU sort , which has the option to delimit on the \0 character, you can do the following. This way the while loop will start getting files in the sorted order for you to process. Replace the printf in the loop body with your own custom logic. shopt -s nullglobprintf '%s\0' xml/*.xml | sort -zV | while read -rd '' file; do printf "%s\n" "$file"doneshopt -u nullglob The nullglob option is to prevent the shell from expanding an empty glob if no xml files are found in the current folder. The option -u unsets it after your processing is done. As Tony Speight rightly points out, if you don't want to mess with the shell options (e.g. it may be enabled for other reasons) you could just set it in a sub-shell and let the glob expansion happen ( shopt -s nullglob; printf '%s\0' xml/*.xml ) | sort -zV | while read -rd '' file; do printf "%s\n" "$file"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
396,076 | I need to create the file /opt/nginx/conf.d/default.conf with this content via shell script and create the file if it doesn't exist: server { listen 80 default_server; listen [::]:80 default_server; server_name _; root /usr/share/nginx/html;} How do I write multiline content via a shell script? I created the directory sudo mkdir -p /opt/nginx/conf.d But I don't know how to write a file. | summary : use >> to append, use [ -f file ] to test. try if [ ! -f myfile ]then cat <<EOF > myfileserver { listen 80 default_server; listen [::]:80 default_server; server_name $server ; root /usr/share/nginx/html;}EOFfi the syntax cat <<EOF is called a " here document ". $server will be replaced by its value, or empty if undefined. as pointed out, you can use a single-quoted 'EOF' to avoid replacing variables, if any. you can also have multiple echo (this could be painful to maintain if there are too many echo s) echo "## foo.conf" > foo.confecho param1=hello >> foo.confecho param2=world >> foo.conf prepending there is no direct prepend in bash, either use a temporary file mv file file_tmpcat new_content file_tmp > filerm file_tmp or edit it in place with ed printf '%s\n' '0r new_file' w q | ed -s file
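Applied to the question's exact file, a minimal sketch (quoting 'EOF' so nothing inside gets expanded; sudo tee is used so the write itself also runs with the needed privileges): sudo mkdir -p /opt/nginx/conf.d [ -f /opt/nginx/conf.d/default.conf ] || sudo tee /opt/nginx/conf.d/default.conf >/dev/null <<'EOF' server { listen 80 default_server; listen [::]:80 default_server; server_name _; root /usr/share/nginx/html; } EOF | {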
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210969/"
]
} |
396,086 | I have a small program which contains the following folder structure: - main.sh- lib/ - clean.sh - get.sh - index.sh - test.sh Each file contains a single function which I use in main.sh . main.sh : source lib/*get_productsclean_productsmake_indextest_index In the above the first two functions work but the second two don't. Yet if I replace source lib/* with: source lib/get.shsource lib/clean.shsource lib/index.shsource lib/test.sh Everything works as expected. Anyone know why source lib/* doesn't work as expected? | Bash's source builtin only takes a single filename: source filename [arguments] Anything beyond the first parameter becomes a positional parameter to filename . A simple illustration: $ cat myfileecho "param1: $1"$ source myfile fooparam1: foo Full output of help source source: source filename [arguments] Execute commands from a file in the current shell. Read and execute commands from FILENAME in the current shell. The entries in $PATH are used to find the directory containing FILENAME. If any ARGUMENTS are supplied, they become the positional parameters when FILENAME is executed. Exit Status: Returns the status of the last command executed in FILENAME; fails if FILENAME cannot be read. (This also applies to the equivalent "dot source" builtin . which, it's worth noting, is the POSIX way and thus more portable.)
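If the goal is to load every file under lib/ , a loop does what the single glob cannot (a minimal sketch): for f in lib/*.sh; do source "$f"; done | {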
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
396,175 | I have a process that has called unshare to create a new network namespace with just itself inside. When it calls execve to launch bash, the ip command shows that I have just an lo device. If I also create a user namespace and arrange for my process to be root inside the namespace, I can use the ip command to bring that device up and it works. I can also use the ip command to create a veth device in this namespace. But it doesn't show up in ip netns list and the new veth device doesn't show up in the root level namespace (as I'd expect). How do I connect a veth device in the root-level namespace to my new veth device inside my process namespace? The ip command seems to require that the namespace has a name assigned by the ip command, and mine doesn't because I didn't use ip netns add to create it. Maybe I could do it by writing my own program that used the netlink device and set things up. But I'd really prefer not to. Is there a way to do this through the command line? There must be a way to do it, because docker containers have their own network namespace as well, and that namespace is also unnamed. Yet there is a veth device inside it that's connected to a veth device outside it. My goal is to dynamically create a process isolation context, ideally without needing to become root outside the container. To this end I'm going to be creating a PID namespace, a UID namespace, a network namespace, an IPC namespace, and a mount namespace. I may also create a cgroup namespace, but those are newish and I need to be able to run on currently supported versions of SLES, RHEL, and Ubuntu LTS. I've been working through this one namespace at a time, and I currently have User, PID and mount namespaces working satisfactorily. I can mount /proc/pid/ns/net if I must, but I would prefer to do that from inside the user namespace so (again) I don't have to be root outside the namespace. Mostly, I want everything to disappear as soon as all the processes in the namespace are gone. Having a bunch of state to clean up on the filesystem when I'm done would be less than ideal. Though creating it temporarily when the container is first allocated and then immediately removing it is far better than having to clean it up when the container exits. No, I can't use docker , lxc , rkt , or any other existing solution such that I'd be relying on anything other than bog-standard system utilities (like ip ), system libraries like glibc , and Linux system calls. | ip link has a namespace option, which in addition to a network namespace name, can use a PID to refer to a process's namespace. If PID namespaces are shared between the processes, you can move devices either way; it is probably easiest from inside , when you consider PID 1 being "outside" . With separate PID namespaces you need to move from the outer (PID) namespace to the inner one. For example, from inside a network namespace you can create a veth device pair to the PID 1 namespace: ip link add veth0 type veth peer name veth0 netns 1 How namespaces work in Linux Every process has reference files for its namespaces in /proc/<pid>/ns/ . Additionally, ip netns creates persistent reference files in /run/netns/ . These files are used with the setns system call to change the namespace of the running thread to the namespace pointed to by such a file. From the shell you can enter another namespace using the nsenter program, providing namespace files (paths) in arguments. A good overview of Linux namespaces is given in the Namespaces in operation article series on LWN.net.
Setting up namespaces When you set up multiple namespaces ( mount, pid, user, etc.), set up the network namespace as early as possible, before altering mount and pid namespaces. If you do not have shared mount or pid namespaces, you do not have any way to point to the network namespace outside, because you cannot see the files referring to network namespaces outside. If you need more flexibility than the command line utilities provide, you need to use the system calls to manage namespaces directly from your program. For documentation, see the relevant man pages: man 2 setns , man 2 unshare and man 7 namespaces . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8068/"
]
} |
396,195 | I have a list
2
2
2
3
2
2
2
4
2
2
2
I want to print the values which are at least 2 times larger than the values 3 steps above and below in the same column. The output should be 4 How to do that? I have a similar question asked here , just to better illustrate I write it here, thanks. 20171006 Update : Sorry for oversimplifying my actual input file, it is actually a table instead of a list that I need to select in multiple columns (column 2, 3, 4 etc.) and print out column 1. How could I incorporate the column information in such a script?
A 2 2 2
B 2 2 2
C 2 2 2
D 3 3 3
E 2 2 2
F 2 2 2
G 2 2 2
H 4 4 4
I 2 2 2
J 2 2 2
K 2 2 2
And to get H | You could do that in awk . You'd need to save the previous 6 lines to compare the 3rd last line with the 6th last line and the current one. For that, the common trick is to use a ring buffer which is an array indexed by NR%6 where 6 is the number of lines you want to keep.
awk '
  NR > 6 {
    x = saved[NR%6]; y = saved[(NR - 3) % 6]; z = $0
    if (y >= 2*x && y >= 2*z) print y
  }
  {saved[NR % 6] = $0}' < file
For your edit: save the key and value to compare:
awk -v key=1 -v value=2 '
  NR > 6 {
    x = saved_value[NR%6]; y = saved_value[(NR - 3) % 6]; z = $value
    if (y >= 2*x && y >= 2*z) print saved_key[(NR - 3) % 6]
  }
  {saved_key[NR % 6] = $key; saved_value[NR % 6] = $value}' < file
where key is the index of the column you want to print and value the column with the values you want to compare. Or based on whatever metric you'd like from those columns 2, 3, 4 like the average (here metric is an awk variable holding the computed value, so it is referenced without a $ ):
awk -v key=1 '
  {metric = ($2 + $3 + $4) / 3}
  NR > 6 {
    x = saved_metric[NR%6]; y = saved_metric[(NR - 3) % 6]; z = metric
    if (y >= 2*x && y >= 2*z) print saved_key[(NR - 3) % 6]
  }
  {saved_key[NR % 6] = $key; saved_metric[NR % 6] = metric}' < file
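A quick sanity check: with the sample column above saved one value per line in file , the first command prints 4 , since the 4 on line 8 is at least twice the 2 three lines above it (line 5) and the 2 three lines below it (line 11), and no other line qualifies. | {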
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240505/"
]
} |
396,218 | What I am looking to do is block access to WAN and only allow these hosts to talk to each other on the 192.168.1.0/24 LAN. This configuration should be done on the hosts in question. There are some similar posts to this, but tend to be too specific use case, or overly complicated. I now pay for internet per/GB. I have certain VM's that don't really need WAN Access after being setup, but seem to be using large amounts of data. (LDAP Server for some reason?) I'm looking into DD-WRT Filtering, but I wondered how to do this host side. I will also be looking into enabling WAN Access for 1 hour daily. This could be done via " iptables script " with CRON, or just via DD-WRT. I'm guessing IPTables is the way to go. I think all of my servers use IPTables, some have UFW and some have FirewallD. I figure this can be a "generic question" with mostly answers that should work across many/all distros. But just to add, I'm mostly using Ubuntu 14/16 and CentOS 6/7. | Filtering with IPTABLES This can be accomplished by creating a set of rules for allowed traffic and dropping the rest. For the OUTPUT chain, create rules to accept loopback traffic and traffic to the 192.168.1.0/24 network. The default action is applied when no rules are matched, so set it to REJECT .
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -d 192.168.1.0/24 -j ACCEPT
iptables -P OUTPUT REJECT
For the INPUT chain, you can create similar rules. Allow traffic from loopback and local network, drop the rest. You can match established traffic (reply traffic to connections initiated by your host) with a single rule using -m conntrack --ctstate ESTABLISHED . This way you do not need to alter the chain when you want to enable Internet access. This works when you do not run any programs/daemons expecting connections from outside of your local network.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -P INPUT DROP
If you need to allow connections initiated outside of your local network, you need to configure the INPUT chain in the same way as the OUTPUT chain, and use a similar mechanism to open and close access. To allow unrestricted (WAN) network access, change the default action to ACCEPT . To put the limits back, change the default action back to REJECT . The same effect is achieved by adding/removing -j ACCEPT as the last rule.
iptables -P OUTPUT ACCEPT
You can also use the iptables time module to accept the traffic at a specific time of day, in which case you do not need to use cron. For example, to allow any outgoing traffic between 12:00 and 13:00 with the following rule:
iptables -A OUTPUT -m time --timestart 12:00 --timestop 13:00 -j ACCEPT
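If you prefer the cron route mentioned in the question, a minimal sketch would be two root crontab entries toggling the default policy (the 12:00 to 13:00 window is only an example):
0 12 * * * /sbin/iptables -P OUTPUT ACCEPT
0 13 * * * /sbin/iptables -P OUTPUT REJECT
Either mechanism keeps the allow rules in place and only flips the default. | {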
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130767/"
]
} |
396,223 | My script:
date
echo -e "${YELLOW}Network check${NC}\n\n"
while read hostname
do
ping -c 1 "$hostname" > /dev/null 2>&1 &&
echo -e "Network $hostname : ${GREEN}Online${NC}" ||
echo -e "${GRAY}Network $hostname${NC} : ${RED}Offline${NC}"
done < list.txt
sleep 30
clear
done
Is outputting info like this:
Network 10.x.xx.xxx : Online
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.x : Online
Network 139.xxx.x.x : Online
Network 208.xx.xxx.xxx : Online
Network 193.xxx.xxx.x : Online
which I'd like to clean up to get something like this:
Network 10.x.xx.xxx     : Online
Network 10.x.xx.xxx     : Offline
Network 10.x.xx.xxx     : Offline
Network 10.x.xx.x       : Online
Network 139.xxx.x.x     : Online
Network 208.xx.xxx.xxx  : Online
Network 193.xxx.xxx.x   : Online
Network 193.xxx.xxx.xxx : Offline | Simply with column command:
yourscript.sh | column -t
The output:
Network  10.x.xx.xxx     :  Online
Network  10.x.xx.xxx     :  Offline
Network  10.x.xx.xxx     :  Offline
Network  10.x.xx.xxx     :  Offline
Network  10.x.xx.x       :  Online
Network  139.xxx.x.x     :  Online
Network  208.xx.xxx.xxx  :  Online
Network  193.xxx.xxx.x   :  Online | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/396223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254201/"
]
} |
396,240 | I'm new to bash functions but was just starting to write some bits and pieces to speed up my work flow. I like to test this as I go along so I've found myself editing and sourcing my ~/.profile a lot and find ~/. a bit awkward to type... So the first thing I thought I'd do was the following:
sourceProfile(){
    source ~/.profile
}
editProfile(){
    vim ~/.profile && sourceProfile
}
when running editProfile I'm getting an issue on the sourceProfile call. Initially I was getting the error: -bash: ~./profile: No such file or directory Note the lack of typo in my function! However it works if I use an alias instead. alias sourceProfile='source ~/.profile' However after adding that alias and then commenting it out and uncommenting the function I start getting a syntax error instead:
-bash: /home/jonathanramsden/.profile: line 45: syntax error near unexpected token `('
-bash: /home/jonathanramsden/.profile: line 45: `sourceProfile(){'
the preceding line is: alias sservice='sudo service' I'm pretty sure all I did was comment/uncomment! And based on my googling it seems like that's the syntax for defining functions. | Aliases are like some form of macro expansion, similar to the pre-processing done in C with #define except that in shells, there's no clear and obvious delimitation between the pre-processing stage and the interpretation stage (also, aliases are not expanded in all contexts and there can be several rounds of alias expansion like with nested aliases). When you do:
alias sourceProfile='source ~/.profile'
sourceProfile() {
  something
}
The alias expansion turns it into:
source ~/.profile() {
  something
}
which is a syntax error. And:
alias sourceProfile='source ~/.profile'
editProfile(){
  vim ~/.profile && sourceProfile
}
Turns it into:
editProfile(){
  vim ~/.profile && source ~/.profile
}
So, if you later redefine sourceProfile as a function, editProfile will not call it, because the definition of editProfile has the expanded value of the original alias. Also, for functions (or any compound command), aliases are only expanded at function definition time (while they're read and parsed), not at run time. So this:
editProfile(){
  vim ~/.profile && sourceProfile
}
alias sourceProfile='source ~/.profile'
editProfile
won't work because sourceProfile was not defined at the time the body of the editProfile function was parsed, and there won't be any alias expansion at the time of running the editProfile function. So, avoid mixing aliases and functions. And be wary of the implications of using aliases as they're not really commands but some form of macro expansion.
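For what it's worth: once the alias line is removed from ~/.profile entirely, the all-function version from the question works as-is, because, unlike an alias, a function name used inside another function's body is looked up when that body runs, not when it is parsed:
sourceProfile(){
    source ~/.profile
}
editProfile(){
    vim ~/.profile && sourceProfile
}
After re-sourcing the file once, editProfile picks up the function version automatically. | {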
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89369/"
]
} |
396,254 | I have a directory X. Inside X there are many directories. Inside those there are - among other stuff - some .wav files. Using wildcards, how can I access all those .wav files? (so I can copy them to a single directory) | If the .wav files all sit exactly one level down, a plain glob is enough:
cp X/*/*.wav /path/to/destination/
If they may be nested at arbitrary depth, an ordinary wildcard will not reach them all; in bash you can enable the globstar option so that ** matches recursively:
shopt -s globstar
cp X/**/*.wav /path/to/destination/
Or use find , which works in any shell and at any depth:
find X -type f -name '*.wav' -exec cp {} /path/to/destination/ \;
Either way, note that files sharing the same name will overwrite one another in the single destination directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254231/"
]
} |
396,277 | I want to create a cron script to interact with mysql, for example
#!/bin/bash
mysql -uroot -p
echo root
echo "CREATE DATABASE example"
But it doesn't work, it only prompts: Enter password: and when I exit mysql it shows
root
"CREATE DATABASE example"
Any idea? | Put something like:
[client]
user=root
password="my-very-secret-password"
In a file whose permissions ensure that nobody outside the people who are entitled to read it can read it. And run:
#! /bin/sh -
mysql --defaults-extra-file=/path/to/that/file --batch << "EOF"
CREATE DATABASE example
EOF
See MySQL's own guideline itself for more information. You could put the password in the script and restrict read access to the script itself, but you'd also need to make sure that the password is not passed as argument to any command as that would then make it visible to anybody in the output of ps . You could do something like:
#! /bin/sh -
mysql --defaults-extra-file=/dev/fd/3 --batch 3<< "END_OF_AUTH" << "END_OF_SQL"
[client]
user=root
password="my-very-secret-password"
END_OF_AUTH
CREATE DATABASE example
END_OF_SQL
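To make the "permissions ensure that nobody ... can read it" part concrete, one simple sketch (assuming the file is at /path/to/that/file and owned by the user the cron job runs as):
chmod 600 /path/to/that/file
i.e. read/write for the owner only, nothing for group or others. | {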
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254229/"
]
} |
396,320 | I have a global variable $EMAIL . I want to loop through several files and for each that meets a criteria I will add it to $EMAIL for report generation. The problem is when I redefine $EMAIL inside the loop it only changes $EMAIL at the scope of the loop iteration. The global version of $EMAIL remains empty. This is a simplified version of my program:
#!/usr/bin/env bash
EMAIL="I echo without 'nope'"
ls | while read line; do
    if [ 73523 -lt 86400 ]
    then
        echo "Hasn't been backed up in over a day"
        EMAIL="$EMAIL nope"
    fi
done
echo $EMAIL
How can I modify my script so that I can add to $EMAIL from inside the loop? Edit/Update I wanted to further simplify the example so I tried changing ls | while read line; do to: for i in {1..2}; do Strangely with this change nope is appended to $EMAIL , maybe I'm misunderstanding what is going wrong here? | In:
cmd1 | cmd2
bash runs cmd2 in a subshell, so changing a variable there won't be visible to the parent shell. You can do:
#!/usr/bin/env bash
EMAIL="I echo without 'nope'"
while read line; do
    if [ 73523 -lt 86400 ]
    then
        echo "Hasn't been backed up in over a day"
        EMAIL="$EMAIL nope"
    fi
done < <(ls)
echo "$EMAIL"
or using zsh or ksh , which will run cmd2 in the same shell process.
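A bash-specific alternative worth knowing (a sketch; it requires bash 4.2+ and only takes effect when job control is off, i.e. in scripts rather than interactive shells): the lastpipe option runs the last command of a pipeline in the current shell, so the original ls | while ... structure keeps its variable changes:
#!/usr/bin/env bash
shopt -s lastpipe
EMAIL="I echo without 'nope'"
ls | while read line; do
    EMAIL="$EMAIL nope"
done
echo "$EMAIL"
Here the final echo sees the updated value. | {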
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
396,382 | I would like to do 2 things: 1) Revert back the interfaces to the old classic name: eth0 instead of ens33. 2) Rename the interfaces in the way I want so that for example I can call interface eth0 as wan0 or assign eth1, eth2 and so on the mac address I want. | Assuming that you have just installed your debian 9 stretch. 1) For reverting back the old names for the interfaces do:
nano /etc/default/grub
edit the line GRUB_CMDLINE_LINUX="" to GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0" then run grub-mkconfig to apply the changes to the bootloader:
grub-mkconfig -o /boot/grub/grub.cfg
You need a reboot after that. 2) For renaming the interfaces use: For just a temporary modification take a look at the @xhienne answer. For a permanent modification: Start by creating / editing the /etc/udev/rules.d/70-persistent-net.rules file.
nano /etc/udev/rules.d/70-persistent-net.rules
And insert inside lines like:
# interface with MAC address "00:0c:30:50:48:a1" will be assigned "eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:30:50:48:a1", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# interface with MAC address "00:0c:30:50:48:ab" will be assigned "eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:30:50:48:ab", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
If you want to assign for example a name like wan0 to eth0 you can use, given my example:
# interface with MAC address "00:0c:30:50:48:a1" will be assigned "wan0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:30:50:48:a1", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="wan0"
After the next reboot or using service networking restart you should see the changes applied. EXTRA: Remember that after all these modifications you have to edit your /etc/network/interfaces file replacing the old interface names with the new ones! EXTRA: If you want to know what MAC address your interfaces have, just do a ip addr show and look under the link/ section.
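A matching /etc/network/interfaces stanza after the rename might look like this sketch (the addresses are placeholders, adjust to your network):
auto wan0
iface wan0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
Then ifup wan0 or a reboot brings the interface up under its new name. | {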
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/396382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143935/"
]
} |
396,387 | I have a large (~300) set of .csv files, each of which are ~200k lines long, with a regular filename pattern:
outfile_n000.csv
outfile_n001.csv
outfile_n002.csv
...
outfile_nXXX.csv
I need to extract a range of lines (100013-200013) from each file, and save that extracted region to a new .csv file, appending a ptally_ prefix to differentiate it from the original file, while preserving the original file. I know that I can use sed -n '100013,200013p' outfile_nXXX.csv > ptally_outfile_nXXX.csv to do this to a single file, but I need a way to automate this for large batches of files. I can get close by using the -i option in sed to do so: sed -iptally_* -n '100013,200013p' outfile_nXXX.csv > ptally_outfile_nXXX.csv but this writes the extracted lines to outfile_nXXX.csv , and leaves the original file renamed as ptally_outfile_nXXX.csv , as this is the purpose of -i . Likewise, brace expansion in bash won't do the trick, as brace expansion and wildcards don't mix: sed --n 10013,20013p *.csv > {,ptally_}*.csv Any elegant ways to combine the extraction and renaming into a simpler process? Currently, I'm using a bash script to perform the swap between the outfile_nXXX.csv and ptally_outfile_nXXX.csv filenames, but I would prefer a more straightforward workflow. Thanks! | Use a for loop.
for f in outfile_n???.csv; do
    sed -n '100013,200013p' "$f" > ptally_"$f"
done
Alternatively, depending on your exact actual requirements, it may be more applicable to use csplit . Some of the GNU extensions extend its power considerably.
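A hedged micro-optimisation for big files: as written, sed keeps reading each file past line 200013. Telling it to quit there, or using the tail / head equivalent (the range is 100001 lines inclusive), can shave some time off the batch:
sed -n '100013,200013p;200013q' "$f" > ptally_"$f"
tail -n +100013 "$f" | head -n 100001 > ptally_"$f"
Either form drops into the same loop body. | {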
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152717/"
]
} |
396,427 | I'm using find "/home/../.." -type f -not -path "*/FolderName/*" to ignore FolderName from being listed. However, I read that find still traverses this FolderName . Searching for files in this folder using the upper command doesn't list any, so I'm not sure if the folder is still being traversed or not. -prune is said to really ignore the folder from being traversed, but I'm not sure if -prune is really needed? | The -depth (and -delete implies -depth ), -prune , -maxdepth <n> , -depth [+-]<n> , -follow (now replaced with the -L option), -quit , -exit , -xdev / -mount predicates (not all implementations support all of them) are the only ones that affect the directory traversal. Here, instead of ... ! -path '*/whatever/*' ... You can do: ... \( ! -name whatever -o -prune \) ... Or if you also want to exclude whatever itself (which ! -path '*/whatever/*' doesn't): ... -name whatever -prune -o \( ... \) Those would have to be inserted before predicates like -type f . That also avoids the problems whereby * doesn't match sequence of bytes that don't translate to characters in some locales and some implementations (like GNU find in most common locales). So for your example: find "/home/../.." \( ! -name FolderName -o -prune \) -type f -print Or: find "/home/../.." -name FolderName -prune -o \( -type f -print \) (note that it excludes all files called FolderName even those that are not of type directory ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244802/"
]
} |
396,509 | I'm following an installation script on GitHub and one of the steps is: cp sources/openssl/1.0.1p/Android.mk -o sources/openssl/$OPENSSL_VERSION/Android.mk But my terminal threw an error cp: invalid option -- 'o' I checked man cp on my Ubuntu, and there's no option -o . Is this a MAC OS thing? What does cp -o stand for? | No: cp has no -o option on macOS/BSD either, so this looks like a typo in that script rather than a platform difference. You can safely remove the -o option. Btw, is $OPENSSL_VERSION set? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251215/"
]
} |
396,526 | If I want to perform some commands given variables aren't set I'm using:
if [[ -z "$a" || -z "$v" ]]
then
     echo "a or b are not set"
fi
Yet the same syntax doesn't work with -v , I have to use:
if [[ -v a && -v b ]]
then
     echo "a & b are set"
fi
What is the history behind this? I don't understand why the syntax wouldn't be the same. I've read that -v is a somewhat recent addition to bash (4.2) ? | Test operators -v and -z are just not the same. Operator -z tells if a string is empty. So it is true that [[ -z "$a" ]] will give a good approximation of "variable a is unset", but not a perfect one: the expression will yield true if a is set to the empty string rather than unset; the enclosing script will fail if a is unset and the option nounset is enabled. On the other hand, -v a will be exactly "variable a is set", even in edge cases. It should be clear that passing $a rather than a to -v would not be right, as it would expand that possibly-unset variable before the test operator sees it; so it has to be part of that operator's task to inspect that variable, pointed to by its name, and tell whether it is set. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/396526",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
396,551 | Which format (Mac or DOS) should I use on Linux PCs/Clusters? I know the difference : DOS format uses "carriage return" (CR or \r ) then "line feed" (LF or \n ). Mac format uses "carriage return" (CR or \r ) Unix uses "line feed" (LF or \n ) I also know how to select the option : Alt M for Mac format Alt D for DOS format But there is no UNIX format. Then save the file with Enter . | Use neither: enter a filename and press Enter , and the file will be saved with the default Unix line-endings (which is what you want on Linux). If nano tells you it’s going to use DOS or Mac format (which happens if it loaded a file in DOS or Mac format), i.e. you see File Name to Write [DOS Format]: or File Name to Write [Mac Format]: press Alt D or Alt M respectively to deselect DOS or Mac format, which effectively selects the default Unix format. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241592/"
]
} |
396,584 | This is part of the file
N W N N N N N N N N N
N C N N N N N N N N N
N A N N N N N N N N N
N N N N N N N N N N N
N G N N N N N N N N N
N C N N N C N N N N N
N C C N N N N N N N N
In each line I want to count the total number of all characters that are not "N" my desired output
1
1
1
0
1
2
2
| GNU awk solution:
awk -v FPAT='[^N[:space:]]' '{ print NF }' file
FPAT='[^N[:space:]]' - the pattern defining a field value (any character except the N char and whitespace) The expected output:
1
1
1
0
1
2
2
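If your awk lacks GNU's FPAT (e.g. mawk or BusyBox awk), a portable sketch that should produce the same counts is to delete every N , space and tab, then print the length of what remains:
awk '{ gsub(/[N \t]/, ""); print length }' file
For the all- N line this correctly prints 0 . | {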
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216256/"
]
} |
396,605 | There are several good references on systemd timers including this one: systemd.time Unfortunately, it still isn't clear to me how to create a timer that will run periodically, but at a specific number of minutes after the top of the hour. I want to create a timer that runs 30 minutes past the hour, every 2 hours. So it would run at 14:30 (2:30 pm), 16:30, 18:30, 20:30, etc. I tried several things that did not work, including this: OnCalendar=*-*-* *00/2:30 And this: OnCalendar=*-*-* *:00/2:30 I did not find the time specification to produce the desired result. Also, it does not have to run exactly at that moment, so I was thinking about using: AccuracySec=5m | Every 2 hours at 30 minutes past the hour should be
OnCalendar=00/2:30
# iow hh/r:mm
00/2 - the hh value is 00 and the repetition value r is 2 which means the hh value plus all multiples of the repetition value will be matched ( 00 , 02 , 04 .. 14 , 16 ..etc)
30 - the mm value, 30 will match 30 minutes past each hour
I left the date and the seconds out since, per the same man page: date specification may be omitted, in which case the current day [...] is implied [...] If the second component is not specified, ":00" is assumed.
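Put together as a complete (hypothetical) timer unit, including the AccuracySec relaxation from the question; the description is a placeholder and the matching .service unit of the same name holds the actual command:
[Unit]
Description=Run every 2 hours at half past

[Timer]
OnCalendar=00/2:30
AccuracySec=5m

[Install]
WantedBy=timers.target
 | {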
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396605",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
396,615 | So, I'm trying to create a menu system in bash as a learning experience. I'm sure there are countless ways, and even "better" ways to do this, but what I've got is something like this...
echo "2nd Menu
********
1) command
2) command
M) Main menu
X) Exit program
"
read -p "Choose option >" opt
if [ "$opt" -eq 1 ]; then
    commands
elif [ "$opt" -eq 2 ]; then
    commands
elif [[ "$opt" = [m,M] ]]; then
    main    #calls the function main()
elif [[ "$opt" = [x,X] ]]; then
    exit
else
    echo "Invalid option"
    main
fi
The script works for every option except the "X) Exit program". When I run "X" or "x" I get this error...
./acct-mgr.sh: line 10: [: x: integer expression expected
./acct-mgr.sh: line 12: [: x: integer expression expected
Invalid option
This is baffling me! I'm pretty sure I'm using the correct comparison operator for each data type (-eq for integers and = for strings), coupled with the fact that EVERY option works EXCEPT that "x". Any help is greatly appreciated. Thanks. P.S. - Alternative ways of accomplishing the desired result is greatly appreciated, but even then I would like to know why this isn't working for my edification. Thanks again. | When you enter M/m or X/x, you're comparing a non-number to a number using -eq , and that is causing the failure. If you want $opt to be a string sometimes, treat it as a string all of the time:
...
if [ "$opt" = "1" ]; then
    commands
elif [ "$opt" = "2" ]; then
    commands
...
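Since the question also asks for alternatives: a case statement is the idiomatic shape for menus like this; it treats every option as a string and folds the upper/lower-case pairs into one pattern (a sketch reusing the question's options):
case "$opt" in
    1) commands ;;
    2) commands ;;
    [mM]) main ;;
    [xX]) exit ;;
    *) echo "Invalid option"; main ;;
esac
Note the glob patterns here don't need the comma that [m,M] carries in the question (that form would also match a literal comma). | {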
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153418/"
]
} |
396,622 | How do I grep a word from a file and store it into another existing file? What I've tried is cat q1.txt | grep -i "osx" test.txt I want to grep it from test.txt and store it into q1, but this and the other way around doesn't work | You may drop the cat completely, and then you should redirect the output from grep to the result file: grep -i "osx" test.txt >q1.txt This will search for lines in test.txt containing the string osx (case-insensitively), and store those lines in q1.txt . If the file q1.txt exists, it will be truncated (emptied) before the output is stored in it. If you wish to append the output at the end of the file, use >> rather than > as the redirection operator. Your command: cat q1.txt | grep -i "osx" test.txt What this actually does is to start cat and grep concurrently. cat will read from q1.txt and try to write it to its standard output, which is connected to the standard input of grep . However, since you're giving grep a file to read from, it will totally ignore whatever cat is sending it. In the end, all lines in test.txt that contain the string osx will be outputted to the terminal. There is something often referred to as "useless use of cat " (or sometimes "UUoC"), which means that a cat invocation may be completely removed and that the file may instead be read directly by another tool. The extreme example of that would be: cat test.txt | cat | cat | cat | grep -i "osx" | cat | cat >q1.txt but even just cat test.txt | grep -i "osx" >q1.txt is useless as grep is perfectly capable of reading from a file by itself (as seen above). Even if it wasn't able to open test.txt by itself, one could have written grep -i "osx" <test.txt >q1.txt to say that standard input should come from the test.txt file and that standard output should go to the q1.txt file. Use cat only when concatenating data (that's what its main use is). There are a few other uses of cat too, but it's outside the scope of this question. Related: Multiple methods of inputting files How is this command legal? "> file1 < file2 cat" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254472/"
]
} |
396,630 | My problem: I'm writing a bash script and in it I'd like to check if a given service is running. I know how to do this manually, with $ service [service_name] status . But (especially since the move to systemd) that prints a whole bunch of text that's a little messy to parse. I assumed there's a command made for scripts with simple output or a return value I can check. But Googling around only yields a ton of "Oh, just ps aux | grep -v grep | grep [service_name] " results. That can't be the best practice, is it? What if another instance of that command is running, but not one started by the SysV init script? Or should I just shut up and get my hands dirty with a little pgrep? | systemctl has an is-active subcommand for this: systemctl is-active --quiet service will exit with status zero if service is active, non-zero otherwise, making it ideal for scripts: systemctl is-active --quiet service && echo Service is running If you omit --quiet it will also output the current status to its standard output. As pointed out by don_crissti , some units can be active even though nothing is running to provide the service: units marked as “RemainAfterExit” are considered active if they exit successfully, the idea being that they provide a service which doesn’t need a daemon ( e.g. they configure some aspect of the system). Units involving daemons will however only be active if the daemon is still running. | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/396630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109544/"
]
} |
396,654 | Custom Sort based on subject column order should be Maths, English, Science when I use this below command awk -F',' '{if (NR!=1) {print $2,$3,$5,$4}}' myfile.csv on my myfile.csv I am getting like this but I want some other way
"101" "Anna" "Maths" "V"
"102" "Bob" "Maths" "V"
"103" "Charles" "Science" "VI"
"104" "Darwin" "Science" "VI"
"105" "Eva" "English" "VII"
sort based on subject column order should be Maths, English, Science removed double quotes and joined by underscore like this
101_Anna_Maths_V
102_Bob_Maths_V
105_Eva_English_VII
103_Charles_Science_VI
104_Darwin_Science_VI
Original file: output of cat myfile.csv
Sl.No,RollNo,Names,Class,Subject
1,101,Anna,V,Maths
2,102,Bob,V,Maths
3,103,Charles,VI,Science
4,104,Darwin,VI,Science
5,105,Eva,VII,English
| Your original command:
awk -F',' '{if (NR!=1) {print $2,$3,$5,$4}}' myfile.csv
Your command written in the idiomatic awk way:
awk -F',' 'NR > 1 { print $2, $3, $5, $4 }' myfile.csv
Above command, modified to remove all double quotes for every line of input for which NR > 1 :
awk -F',' 'NR > 1 { gsub(/"/, ""); print $2, $3, $5, $4 }' myfile.csv
Above command, modified to output with _ as the output field separator ( OFS ):
awk -F',' -vOFS='_' 'NR > 1 { gsub(/"/, ""); print $2, $3, $5, $4 }' myfile.csv
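Note that this prints in file order (Maths, Maths, Science, Science, English) and does not by itself produce the Maths, English, Science ordering asked for. One hedged way to add that is to emit a numeric rank in the same pass, sort on it, then strip it again:
awk -F',' 'BEGIN { r["Maths"]=1; r["English"]=2; r["Science"]=3 }
    NR > 1 { gsub(/"/, ""); print r[$5], $2 "_" $3 "_" $5 "_" $4 }' myfile.csv |
    sort -n -k1,1 | cut -d' ' -f2-
On the sample file this yields exactly the desired underscore-joined order. | {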
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254487/"
]
} |
396,736 | My CentOS VPS got many IP addresses that I'd like to add to the eth0 network interface. Currently eth0 only got 1 IPv4 address and its other ones doesn't show up. My searching gives me terms like IP Alias but that doesn't seem to apply to CentOS. The CentOS Wiki doesn't really show how it's done. cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0 Now you can edit the new file ifcfg-eth0:0 and specify the network settings of the virtual interface. How do I manually add IPv4 IP addresses to a physical network interface in CentOS 7? | Create a configuration file called ifcfg-<interface name>:0 in /etc/sysconfig/network-scripts/ The syntax of the configuration will be like this (note that DEVICE must match the alias name, eth0:0 , not the parent device):
DEVICE="eth0:0"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=x.x.x.x
GATEWAY=x.x.x.x
NETMASK=255.255.255.0
TYPE=Ethernet
Then restart the service and you should be good to go.
service network restart
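If the machine runs NetworkManager rather than the legacy network service, a hedged equivalent is to stack the extra addresses onto the existing connection instead of creating alias files (the connection name and address here are examples):
nmcli connection modify eth0 +ipv4.addresses "192.0.2.10/24"
nmcli connection up eth0
Repeat the modify step once per additional address. | {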
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63767/"
]
} |
396,763 | I've got a folder with a load of folders in folders in folders etc... Some of the folders have files, and some do not. I want to cleanup the main folder by finding all directories with no files and deleting them. An example might make more sense: So if I start with this:
mainFolder
  folder1
    folder1 (empty)
    folder2
      file.txt
    folder3 (empty)
  folder2
    folder1 (empty)
    folder2 (empty)
    folder3
      folder1
        folder1 (empty)
  folder3
    folder1
      file.txt
I should end up with this:
mainFolder
  folder1
    folder2
      file.txt
  folder3
    folder1
      file.txt
So:
/mainFolder/folder1/folder1 was deleted cause it had no files
/mainFolder/folder1/folder3 was deleted cause it had no files
/mainFolder/folder2 was deleted because it had no files, even though all the sub-folders were empty
I hope this makes sense... The only idea I had was to start at mainFolder and recursively travel down each sub-folder deleting the ones that are empty. | See if this does what you want:
find mainFolder -depth -empty -type d -exec rmdir {} \;
That should find directories in mainFolder using a depth-first traversal that are empty, and remove those directories. Since it does a depth-first traversal, as it removes subdirectories, if the parent directory becomes empty, find will identify it as empty and remove it as well.
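With GNU find you can also let find do the removal itself; -delete implies -depth , so the same cascade of newly-emptied parents applies:
find mainFolder -type d -empty -delete
This needs GNU find (BSD find also supports -delete ), whereas the rmdir form above is more widely portable. | {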
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137367/"
]
} |
396,785 | I have a table
1 1
1 0
0 1
0 0
I want to print the lines with two sets of selection criteria separated by OR . Criteria set 1 : (Column 1 >= 1 and Column 2 = 0) OR Criteria set 2 : (Column 1 = 0 and Column 2 >= 1) Expected output is
1 0
0 1
I have written something like this but didn't work awk '($1>=1 && $2=0)||($1=0 && $2>=1) {print $0}' What's the problem? | The problem is that you are using an assignment operator ( = ) rather than a test of equality ( == ). The boolean result of assigning zero to something is "false". This is why the test never succeeds. The idiomatic awk command would be
awk '($1 >= 1 && $2 == 0) || ($1 == 0 && $2 >= 1)'
The { print $0 } is not needed as this is the default action for any condition that does not have an action. If you just want to skip lines with the same values in column one and two (gives the same output for the given data):
awk '$1 != $2'
The output in both cases is
1 0
0 1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240505/"
]
} |
396,826 | I want to find a subdirectory of the current directory, which (that is the subdirectory) contains 2 or more regular files. I am not interested in directories containing less than 2 files, nor in directories which contain only subdirectories. | Here is a completely different approach based on GNU find and uniq . This is much faster and much more CPU-friendly than answers based on executing a shell command that counts files for each directory found.
find . -type f -printf '%h\n' | sort | uniq -d
The find command prints the directory of all files in the hierarchy and uniq only displays the directories that appear at least twice. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9158/"
]
} |
396,840 | $ pdfgrep -R -i spark . | less &
$ pdfgrep -R -i spark . &
$ jobs
[3]-  Stopped                 pdfgrep -R -i spark . | less
[4]   Running                 pdfgrep -R -i spark . &
Why would the one with | less be stopped, while the other one without it is running? The stopped backgrounded job doesn't read from stdin. So that can't be the reason. The reason that I background the jobs is that I can do something else in the same terminal session. The reason that I pipe to less is because I don't want the output to stdout messes up the screen of my terminal session when I am doing something else. Is there some way to achieve the two goals above? I slightly prefer not saving output to a file over saving output to a file, because it takes a little more to remember the file, read and delete them. Thanks. | Let's look more closely at what's happening to less :
$ pdfgrep -R -i spark . | strace less &
[...]
open("/dev/tty", O_RDONLY|O_LARGEFILE) = 3
ioctl(3, TCGETS, {B38400 opost isig -icanon -echo ...}) = 0
ioctl(3, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig -icanon -echo ...}) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGTTOU {si_signo=SIGTTOU, si_code=SI_KERNEL} ---
--- stopped by SIGTTOU ---
Job control restricts the processes in a background job from performing certain operations on the controlling terminal. If a background process tries to read from the terminal, it will be sent a SIGTTIN signal, which typically stops (pauses) the process. If a background process tries to set a terminal's parameters, it will be sent a SIGTTOU signal, which also typically stops the process. That's what is happening here with the TCSETSW ioctl . The less program tries to put the terminal into raw mode soon after it starts, even before it knows whether it has anything to display. There is a good reason for this: you don't want a background job asynchronously changing your terminal so that, for example, raw mode is on and echo is off. (A background process can get terminal parameters with the TCGETS ioctl without being stopped - see the listing above.) If a background process tries to write to the terminal and the terminal has the tostop flag set, it will be sent the SIGTTOU signal. You probably don't have the tostop flag set (run stty -a to check). If you don't, a background command like pdfgrep -R -i spark . & that doesn't change any terminal settings will be able to write to your terminal whenever it tries. You also wrote: The reason that I pipe to less is because I don't want the output to stdout messes up the screen of my terminal session when I am doing something else The less program is ultimately going to send output to the terminal, one screenful at a time. If you run stty tostop before pdfgrep | less & , or before pdfgrep & , then they will only output to your terminal when they are in the foreground. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
396,843 | Here is my command with the IPs commented out with semantic IPs
ssh -p 2022 -L 9389:localRDPIP:3389 user@publicIP \
su -c "export HISTCONTROL=ignorespace; \
iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination localRDP_IP:3389; \
iptables -t nat -A POSTROUTING -p tcp -d localRDP_IP --dport 3389 -j SNAT --to-source jumpIP";
basically, I'm trying to run some remote routing, which is not the question. The question is how do I run such a command? The best test I've been able to do is: ssh -p 2022 -L 9389:localRDPIP:3389 user@publicIP -t "su -c nano; nano" but I don't know how to do the spaces. If I have spaces in my commands in the -c "quoted area" other than a single command, I get an error. Note : I realize that with ssh port forwarding, iptables commands may be unnecessary. | You need two layers of quoting: one for your local shell (the whole remote command must be a single argument to ssh ) and one for the remote shell (the whole command list must be a single argument to su -c ). The usual pattern is double quotes outside and single quotes inside, plus -t so su can prompt for the root password on a real tty. A sketch keeping your placeholder names:
ssh -t -p 2022 -L 9389:localRDPIP:3389 user@publicIP "su -c ' export HISTCONTROL=ignorespace; iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination localRDP_IP:3389; iptables -t nat -A POSTROUTING -p tcp -d localRDP_IP --dport 3389 -j SNAT --to-source jumpIP'"
Spaces are then no problem, because everything between the single quotes reaches su as one -c argument. If the remote commands themselves must contain single quotes, either escape them as '\'' or put the commands into a script on the remote host and run su -c /path/to/script instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128791/"
]
} |
396,855 | I have 100 million rows in my file. Each row has only one column. e.g.
aaaaa
bb
cc
ddddddd
ee
I would like to list the character count Like this
2 character words - 3
5 character words - 1
7 character words - 1
etc. Is there any easy way to do this in terminal? | $ awk '{ print length }' file | sort -n | uniq -c | awk '{ printf("%d character words: %d\n", $2, $1) }'
2 character words: 3
5 character words: 1
7 character words: 1
The first awk filter will just print the length of each line in the file called file . I'm assuming that this file contains one word per line. The sort -n (sort the lines from the output of awk numerically in ascending order) and uniq -c (count the number of times each line occurs consecutively) will then create the following output from that for the given data:
3 2
1 5
1 7
This is then parsed by the second awk script which interprets each line as "X number of lines having Y characters" and produces the wanted output. The alternative solution is to do it all in awk and keeping counts of lengths in an array. It's a tradeoff between efficiency, readability/ease of understanding (and therefore maintainability) which solution is the "best". Alternative solution:
$ awk '{ len[length]++ } END { for (i in len) printf("%d character words: %d\n", i, len[i]) }' file
2 character words: 3
5 character words: 1
7 character words: 1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254626/"
]
} |
396,890 | I am using openSUSE. I want to update my Firefox browser to the newest version, but I don't know how. How can I do that? | Extending Hunter's answer: there is no guarantee that you will really get the latest Firefox in your distribution. With zypper , you will get the latest Firefox that was included in openSUSE. To get the latest Firefox, you have to download it manually, and install it manually. As Firefox has its own update mechanism (which is turned off in the packaged versions), you will then keep getting the latest Firefox; you can even run the latest alpha version (it is named nightly ). It will, however, be a Firefox independent from the zypper/rpm update mechanism. Nightly also has the latest version of the Firefox web developer tools (since some versions they are merged into Firefox and are no longer a separate plugin). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/396890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114868/"
]
} |
396,895 | I was looking at discussion between Kusalananda and xhienne here , where it's mentioned [ "" -ge 2 ] not being a valid test producing an error in bash --posix and other POSIX-compliant shells.
bash-4.3$ [ "" -gt 10 ]
bash: [: : integer expression expected
bash-4.3$ [ '' -gt 10 ]
bash: [: : integer expression expected
All good there. Out of curiosity, I tried the same with [[ .
bash-4.3$ [[ "" -gt 10 ]] && echo "YES"
bash-4.3$ [[ "" -gt 0 ]] && echo "YES"
bash-4.3$ [[ "" -gt -1 ]] && echo "YES"
YES
bash-4.3$ [[ "" -eq 0 ]] && echo "YES"
YES
As you can see, no errors and it's actually evaluated as numeric expression with "" being equal to 0. So what exactly is happening here ? Is [[ simply being inconsistent with the old test or POSIX ? Is it simply performing string comparison rather than numeric comparison ? | One difference between [ and [[ is that [ does not do arithmetic evaluation but [[ does:
$ [ "2 + 2" -eq 4 ] && echo yes
bash: [: 2 + 2: integer expression expected
$ [[ "2 + 2" -eq 4 ]] && echo yes
yes
The second subtlety is that, wherever arithmetic evaluation is performed under bash, empty strings evaluate to 0. For example:
$ x=""; echo $((0 + x))
0
$ [[ "" -eq 0 ]] && echo yes
yes
Documentation From man bash : Shell variables are allowed as operands; parameter expansion is performed before the expression is evaluated. Within an expression, shell variables may also be referenced by name without using the parameter expansion syntax. A shell variable that is null or unset evaluates to 0 when referenced by name without using the parameter expansion syntax. The value of a variable is evaluated as an arithmetic expression when it is referenced, or when a variable which has been given the integer attribute using declare -i is assigned a value. A null value evaluates to 0 . A shell variable need not have its integer attribute turned on to be used in an expression. [ Emphasis added] Aside: Security Issues Note that bash's arithmetic evaluation is a potential security issue. For example, consider:
x='a[$(rm -i *)]'
[[ x -eq 0 ]] && echo yes
With the -i option, the above is safe but the general lesson is not to use bash's arithmetic evaluation with un-sanitized data. By contrast, with [ , no arithmetic evaluation is performed and, consequently, the command never attempts to delete files. Instead, it safely generates an error:
$ x='a[$(rm -i *)]'
$ [ "$x" -eq 0 ] && echo yes
bash: [: a[$(rm -i *)]: integer expression expected
For more on this issue, see this answer . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/396895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
397,000 | I'm using FreeBSD 11 with PuTTY for SSH. The keyboard's key codes don't seem to be set up at all correctly - for example it beeps on up arrow and inserts '~' for most navigation keys including basics like arrows and delete key. The keyboard is a standard UK English keyboard. Typing is a real pain. I've read a number of threads about setting key codes, both in rc and shell, so I know I can set it up that way as a last resort. But it would be very odd for a client with so much configurability, and an OS with such wide use, not to have some terminal option / setting in common that they both "just understand", that I can set on both and voila - the keys all (or mostly) work. The trouble is I have no idea how to find it and, when I do find it, how to set it for all future sessions. I understand how to find the keycode being sent by the terminal for an individual key, so I could set up my keys that way, one by one. But I would like to find basic terminal settings for my shell rc and for PuTTY, that gets as many keys as possible understood by both, so I only have to set up a few exceptions if I need them. How can I do this? | There are so many knobs to twist and turn. And much advice on the Internet people follow blindly. As always many ways to Rome but when you know how things are connected they are very simple. The ultra short answer is: Change the terminal string in Putty from xterm to putty (under Connection -> Data -> Terminal-type string). The typical pitfall to avoid: Make sure that you are not setting TERM elsewhere in your rc files. The slightly longer answer: First I would start by ensuring that you are actually using the defaults. From my personal Windows 10 laptop using DK keyboard (and mapping) I connect to a FreeBSD 11.1 setup with DK mapping. In my case the arrow keys work as expected on the command-line. Left/right moves on current line. Up/Down goes through command history. I have verified this for both /bin/sh (default user shell) and /bin/tcsh (default root shell). You can read up on shells . You write that you know how you can do your keymapping in the shell rc file. Many suggestions on how to do this are floating around. But it is usually not what you should do. You will find suggestions like this for tcsh keybindings:
# Del(ete), Home and End
bindkey "\e[3~" delete-char
bindkey "\e[1~" beginning-of-line
bindkey "\e[4~" end-of-line
And suggestions like this for bash ( ~/.inputrc):
"\x7F": backward-delete-char
"\e[3~": delete-char
"\e[1~": beginning-of-line
"\e[4~": end-of-line
But rather than setting these bindings locally for each session and each shell you should rather use termcap / terminfo for this purpose (more on this later). In this context Putty is your terminal . The default for Putty is to set TERM for your session to "xterm". It does that because it is reasonably xterm compatible. xterm is not a reference to any terminal but to the program Xterm . PuTTY Configuration Connection -> Data -> Terminal-type string: `xterm` When you have logged in you can verify this setting carries through to your session:
echo $TERM
xterm
If $TERM does not match what you have set in Putty then you might have set an override in your rc files. Notice the warning for /bin/sh in ~/.profile :
# Setting TERM is normally done through /etc/ttys. Do only override
# if you're sure that you'll never log in via telnet or xterm or a
# serial line.
# TERM=xterm; export TERM
Because we do not use a lot of physical DEC VT100 's anymore xterm is what you will see many places. 
Even if you just keep TERM as xterm you will get colour output with default Putty and FreeBSD as ls -G will work. Some will recommend that you set TERM to xterm-color , xterm-256 or rxvt-256color to get "proper" colour support. But remember: All these magic TERM values are just mappings in a database. A reason xterm is so prevalent today is that some programs and scripts check if $TERM begins with xterm (which is a horrible idea). This then brings us back to termcap which is the default on FreeBSD. If you want to use terminfo then you will need to install devel/ncurses . For more on this see: How can I use terminfo entries on FreeBSD? You can find the source of the termcap database in the text file /usr/share/misc/termcap . If you make changes to this file you need to run cap_mkdb to get the system to acknowledge the change. In here you will find the answer to your conundrum. There is an explicit TERM setting for Putty named: putty . FreeBSD has then made the choice not to change the settings for xterm to match Putty's behavior (probably due to compatibility concerns). But they have been nice enough to supply a setting for Putty. So if you change the Putty default setting for Terminal-type string: from xterm to putty then this is reflected in TERM when you log in. And the default FreeBSD termcap has an entry for this. And by magic and without touching a lot of rc files you now have working arrow keys (I had that with xterm as well) but also Home/End moves to start/end of line and Del(ete) deletes. Bonus: It seems the default putty definition does not support all 256 colours. You can then modify your termcap and add these two lines (and run cap_mkdb):
putty-256color:\
        :pa#32767:Co#256:tc=putty:
Then you can set your TERM to putty-256color . Scott Robison suggested this should be added - but the change has not been picked up by FreeBSD. I cannot find this PR in the database anymore. Bonus 2: If you prefer to keep TERM as xterm then you should spend time configuring Putty to match what FreeBSD expects of the xterm terminal. If you go to the settings Terminal -> Keyboard in the setting The Home and End keys you can change "Standard" to "rxvt". With this change you will notice the Home key works on the command line (moves to start of line). But End now does nothing. So it is then a question of getting Putty to agree with what FreeBSD expects from an xterm. Just to show that you can also go the other way around. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120614/"
]
} |
397,078 | Obviously I could scp the key to every host the user needs SSH access to. But if there are many hosts this could take a long time. Especially if public key authentication is not set up yet, every scp would require me to input a password. This could be very time consuming and annoying. Would using auto mounted home directories solve this problem? Because then every host would use the same home directory for each user, public keys would only need to be copied once. This doesn't seem right however. Can someone give me advice? | There are a bunch of ways to do this, especially if you're on recent versions of OpenSSH. Remember also that you need more than a way to add them, you need a way to remove them (and quickly—consider if the key is compromised, the person parts on bad terms, etc.). A key addition that takes a day to propagate is an annoyance; a key removal that takes a day to propagate is a serious security concern. Keeping in mind the importance of removal being easy, that suggests a few approaches:
1. It sounds like you already have some way of creating the users quickly. There is a good chance that's LDAP, for example. LDAP can store SSH public keys, and you can hook this in to sshd using the configuration option AuthorizedKeysCommand . For example, if you're running SSSD, sss_ssh_authorizedkeys is intended for that. (See, e.g., RedHat docs on SSSD authorized keys ). Key addition and removal can be instant, worst case is typically a few seconds for LDAP propagation. You can very likely fully automate this (and if you have a bunch of users probably already have!), requiring no admin intervention.
2. If your servers must handle authentication offline (and beyond what SSSD can do), another approach is to use the certificate authority (CA) support in OpenSSH. This is documented mostly in the ssh-keygen manpage's "Certificates" section . Basically you set up your servers' sshd to trust your CA and to automatically fetch updated revocation lists. Then you sign the client's public key with said CA and give the cert to the client. At that point, the client can log in to all the servers using said cert. To un-authorize the client, you add it to the revocation list (as explained in the immediately following section in the man page). Key addition is instant, removal depends on how often you update revocation lists. Unfortunately there isn't anything like OCSP for SSH CAs. Automation (without admin help) of adds is possible to do securely; of removes is easy.
3. You could—as you suggest—use shared, auto-mounted (or permanently-mounted; auto-mount is not required) home directories so all servers see the same ~/.ssh/authorized_keys — but this is a lot of overhead if you otherwise don't need a shared $HOME . Key addition and removal are instant to fairly quick, depending on caching. Key management likely entirely done by the user, not an admin.
3b. Ulrich Schwarz points out that you can change the location of the user's authorized keys file; it doesn't need to be ~/.ssh/authorized_keys . So you could share a directory containing all users' authorized keys files, and not have the overhead of fully shared home directories.
4. You could use your configuration management tool like @DopeGhoti suggests. Be very careful not to forget about a host—especially one where the key was manually added. Probably means key addition and removal will require manual intervention by the admin.
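For approach 1, the sshd side is only a couple of directives; a sketch using SSSD's helper (the unprivileged account used to run the command is conventional, not mandated):
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody
These go in /etc/ssh/sshd_config , followed by a reload of sshd . | {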
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139546/"
]
} |
397,120 | Does /tmp/.X11-unix , the directory that contains the UNIX sockets for communicating with the X server, ever have a different location (like because of some strange distro)? If so, is there any way of finding the alternate location? | The online source code for the latest X11 Release 7.7 (6 June 2012) is available as several tar files. The source tar for libX11 shows us in file libX11-1.5.0/src/OpenDis.c the implementation of routine XOpenDisplay() . It calls _XConnectXCB() in adjacent file xcb_disp.c to start the connection. This calls xcb_connect() . The tar for libxcb has that function in libxcb-1.8.1/src/xcb_util.c . It calls _xcb_open() in the same file which has the line static const char unix_base[] = "/tmp/.X11-unix/X"; This line is not changed by any configuration option, though there is the use of /tmp/launch as a base if you HAVE_LAUNCHD , which I don't know anything about, and /var/tsol/doors/.X11-unix/X on Solaris Trusted Extensions. There is nothing to stop a distribution patching these sources, of course. You can probably check your distribution with strings /usr/lib*/libxcb.so|grep X11 which on my Fedora certainly shows /tmp/.X11-unix/X . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41033/"
]
} |
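To confirm the compiled-in path on a running system, a quick hedged check (assuming the iproute2 ss tool is installed; socket names are X0, X1, ... one per display):
ls -l /tmp/.X11-unix/ # one socket per running display, e.g. X0
ss -xl | grep X11-unix # the X server's listening UNIX sockets
If strings on your libxcb reports a different base path, the same two commands should reveal where the sockets actually live.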
397,125 | I am trying to install VirtualBox from the Oracle website, but I keep getting the error shown below. I tried to upgrade and update, but nothing works. Please help. | The online source code for the latest X11 release, 7.7 (June 2012), is available as several tar files. The source tar for libX11 shows us in file libX11-1.5.0/src/OpenDis.c the implementation of routine XOpenDisplay() . It calls _XConnectXCB() in adjacent file xcb_disp.c to start the connection. This calls xcb_connect() . The tar for libxcb has that function in libxcb-1.8.1/src/xcb_util.c . It calls _xcb_open() in the same file, which has the line static const char unix_base[] = "/tmp/.X11-unix/X"; This line is not changed by any configuration option, though there is the use of /tmp/launch as a base if you HAVE_LAUNCHD , which I don't know anything about, and /var/tsol/doors/.X11-unix/X on Solaris Trusted Extensions. There is nothing to stop a distribution patching these sources, of course. You can probably check your distribution with strings /usr/lib*/libxcb.so|grep X11 which on my Fedora certainly shows /tmp/.X11-unix/X . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167257/"
]
} |
397,134 | I'm using Kali, and my MacBook Pro has some problems with Wi-Fi monitoring, so I tried to solve them, but when I type the command apt-get update this appears: Ign:1 http://ftp.jp.debian.org/debian main InRelease Err:2 http://ftp.jp.debian.org/debian main Release 404 Not Found Reading package lists... Done E: The repository 'http://ftp.jp.debian.org/debian main Release' does not have a Release file. N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details. | You should not be using a Debian repository for Kali Linux. Either use a Kali repository, or switch away from Kali to a different distribution. Since you're a beginner it would make a lot of sense to consider a beginner's distribution. Kali is not a beginner's distribution . Instead, I would recommend Mint, Ubuntu or Fedora (but this is not an exhaustive list). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254829/"
]
} |
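A minimal sketch of a working Kali sources list, assuming the current kali-rolling branch (check the official Kali documentation for the exact components of your release):
# /etc/apt/sources.list: Kali's own repository instead of Debian's
deb http://http.kali.org/kali kali-rolling main contrib non-free
After replacing the Debian lines with this, apt-get update should succeed.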
397,136 | On Debian I can use apt-get autoremove to remove packages that are no longer needed, i.e., that are not a dependency of any "manually installed" package. However, this does not remove packages that are merely "suggested" or "recommended" by manually installed packages. How can I find out the list of such packages on my system? | You can also tell apt-get autoremove to ignore “Recommends” and “Suggests”: sudo apt-get autoremove -o Apt::AutoRemove::RecommendsImportant=false -o Apt::AutoRemove::SuggestsImportant=false Use -s to get a list of the removals this would lead to without actually changing anything: sudo apt-get autoremove -s -o Apt::AutoRemove::RecommendsImportant=false -o Apt::AutoRemove::SuggestsImportant=false | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8446/"
]
} |
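To reduce the simulation to a plain list of package names, one hedged approach (Remv is the prefix apt-get -s prints for simulated removals; the exact output format may vary between apt versions):
sudo apt-get autoremove -s -o Apt::AutoRemove::RecommendsImportant=false -o Apt::AutoRemove::SuggestsImportant=false | awk '/^Remv/ {print $2}'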
397,205 | I have written a sample script to split the string but it is not working as expected: #!/bin/bash IN="One-XX-X-17.0.0" IFS='-' read -r -a ADDR <<< "$IN" for i in "${ADDR[@]}"; do echo "Element:$i" done #split 17.0.0 into NUM IFS='.' read -a array <<< ${ADDR[3]}; for element in "${array[@]}" do echo "Num:$element" done output OneXXX17.0.017 0 0 but I expected the output to be: One XX X 17.0.0 17 0 0 | In old versions of bash you had to quote variables after <<< . That was fixed in 4.4. In older versions, the variable would be split on IFS and the resulting words joined on space before being stored in the temporary file that makes up that <<< redirection. In 4.2 and before, when redirecting builtins like read or command , that splitting would even take the IFS for that builtin (4.3 fixed that): $ bash-4.2 -c 'a=a.b.c.d; IFS=. read x <<< $a; echo "$x"' a b c d $ bash-4.2 -c 'a=a.b.c.d; IFS=. cat <<< $a' a.b.c.d $ bash-4.2 -c 'a=a.b.c.d; IFS=. command cat <<< $a' a b c d That one was fixed in 4.3: $ bash-4.3 -c 'a=a.b.c.d; IFS=. read x <<< $a; echo "$x"' a.b.c.d But $a is still subject to word splitting there: $ bash-4.3 -c 'a=a.b.c.d; IFS=.; read x <<< $a; echo "$x"' a b c d In 4.4: $ bash-4.4 -c 'a=a.b.c.d; IFS=.; read x <<< $a; echo "$x"' a.b.c.d For portability to older versions, quote your variable (or use zsh, where that <<< comes from in the first place and which doesn't have that issue): $ bash-any-version -c 'a=a.b.c.d; IFS=.; read x <<< "$a"; echo "$x"' a.b.c.d Note that that approach to split a string only works for strings that don't contain newline characters. Also note that a..b.c. would be split into "a" , "" , "b" , "c" (no empty last element). To split arbitrary strings you can use the split+glob operator instead (which would make it standard and avoid storing the content of a variable in a temp file as <<< does): var='a.newline..b.c.' set -o noglob # disable glob IFS=. set -- $var'' # split+glob for i do printf 'item: <%s>\n' "$i" done or: array=($var'') # in shells with array support The '' is to preserve a trailing empty element if any. That would also split an empty $var into one empty element. Or use a shell with a proper splitting operator: zsh : array=(${(s:.:)var}) # removes empty elements array=("${(@s:.:)var}") # preserves empty elements rc : array = ``(.){printf %s $var} # removes empty elements fish : set array (string split . -- $var) # not for multiline $var | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112232/"
]
} |
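Applying the quoting fix to the script from the question, a minimal sketch that behaves the same on bash versions before and after 4.4:
#!/bin/bash
IN="One-XX-X-17.0.0"
IFS='-' read -r -a ADDR <<< "$IN"
for i in "${ADDR[@]}"; do echo "Element:$i"; done
# quote the here-string operand so older bash does not word-split it
IFS='.' read -r -a NUM <<< "${ADDR[3]}"
for n in "${NUM[@]}"; do echo "Num:$n"; done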
397,269 | In a multiple monitor set-up, is there a way to transfer entire workspaces (as opposed to single applications) to a different monitor? | You can define a binding in your i3 config. Note: windows are called "containers", and monitors are called "outputs". move workspace to output left|right|down|up|current|primary|<output> Here's what I use in my config: # move focused workspace between monitors bindsym $mod+Ctrl+greater move workspace to output right bindsym $mod+Ctrl+less move workspace to output left Strangely, I'd expect the $mod+Ctrl+greater to require me to hit Ctrl and Shift at the same time, since you need to press Shift to type < and > . However, pressing just mod, Ctrl, and , works, which is very nice. Note, you can also set a keybinding to send things to a specific monitor by its name. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/397269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161652/"
]
} |
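To target a specific monitor rather than a direction, a hedged sketch (output names such as HDMI-1 and eDP-1 come from xrandr and will differ per machine):
# send the focused workspace to a named output
bindsym $mod+Ctrl+h move workspace to output HDMI-1
bindsym $mod+Ctrl+e move workspace to output eDP-1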
397,334 | I am learning sed. Everything seemed to be going fine until I came across the N (multi-line next). I created this file (guide.txt) for practice/understanding/context purposes. Here are the contents of said file... This guide is meant to walk you through a day as a Network Administrator. By the end, hopefully you will be better equipped to perform your duties as a Network Administrator and maybe even enjoy being a Network Administrator that much more. Network Administrator Network Administrator I'm a Network Administrator So my goal is to substitute ALL instances of "Network Administrator" with "System User". Because the first instance of "Network Administrator" is separated by a newline (\n) I need the multi-line next operator (N) to append the line that starts with "Administrator" to the previous line ending with "Network\n". No problem. But I also want to catch all the other "Network Administrator" single-line instances. From my research, I've learned that I will need two substitution commands; one for the newline-separated string and one for the others. Also, there is some jive happening because of the last line containing the substitution match and the multi-line next. So I craft this... $ sed ' > s/Network Administrator/System User/ > N > s/Network\nAdministrator/System\nUser/ > ' guide.txt This returns these results... This guide is meant to walk you through a day as a System User. By the end, hopefully you will be better equipped to perform your duties as a System User and maybe even enjoy being a Network Administrator that much more. System User Network Administrator I'm a System User I thought that the single-line substitution would catch all the "normal" instances of "Network Administrator" and swap it out for "System User", while the multi-line statement would work its magic on the newline-separated instance, but as you can see it returned, what I consider, unexpected results. After some fiddling, I landed on this... $ sed ' > s/Network Administrator/System User/ > N > s/Network\nAdministrator/System\nUser/ > s/Network Administrator/System User/ > ' guide.txt And voilà, I get the desired output of... This guide is meant to walk you through a day as a System User. By the end, hopefully you will be better equipped to perform your duties as a System User and maybe even enjoy being a System User that much more. System User System User I'm a System User Why does this work and the original sed script doesn't? I really want to understand this. Thanks in advance for any help. | As you are learning sed , I'll take the time to add to @John1024's answer: 1) Please note that you are using \n in the replacement string. This works in GNU sed , but is not part of POSIX, so it will insert a backslash and an n in many other sed s (using \n in the pattern is portable, btw). Instead of this I suggest doing s/Network\([[:space:]]\)Administrator/System\1User/g : The [[:space:]] will match a newline or whitespace, so you don't need two s commands but can combine them into one. By surrounding it with \(...\) you can refer to it in the replacement: The \1 will get replaced by whatever was matched in the first pair of \(\) . 2) To properly match patterns over two lines, you should know the N;P;D pattern: sed '$!N;s/Network\([[:space:]]\)Administrator/System\1User/g;P;D' The N always appends the next line (except for the last line, which is why it's "addressed" with $! , meaning "if not the last line"; you should always consider preceding N with $! to avoid accidentally ending the script). 
Then after the replacement the P prints only the first line in the pattern space and the D deletes this line and starts the next cycle with the remains of the pattern space (without reading the next line). This is probably what you originally intended. Remember this pattern, you will often need it. 3) Another useful pattern for multiline editing, especially when more than two lines are involved: Hold space collecting, as I suggested to John: sed 'H;1h;$!d;g;s/Network\([[:space:]]\)Administrator/System\1User/g' I repeat it to explain it: H appends each line to the hold space. As this would result in an extra newline before the first line, the first line needs to be moved instead of appended with 1h . The following $!d means "for all lines except the last one, delete the pattern space and start over". Thus, the rest of the script is only executed for the last line. At this point, the whole file is collected in the hold space (so don't use this for very large files!) and the g moves it to the pattern space, so you can do all replacements at once like you can with the -z option of GNU sed . This is another useful pattern I suggest to keep in mind. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153418/"
]
} |
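As a quick check, running the one-liner from point 2 against the guide.txt from the question reproduces the desired output in a single pass:
sed '$!N;s/Network\([[:space:]]\)Administrator/System\1User/g;P;D' guide.txt
Every occurrence is replaced, including the one split across the first two lines, without the extra s command the question's working script needed.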
397,378 | I want to tar the files foo and bar into a tar archive archive.tar , but I want them to appear, within the archive, as being within a directory, bazdir . Thus when I untar someplace I want bazdir to be created and foo and bar to be created within it. How can I do that? This would be the opposite of: https://stackoverflow.com/questions/939982/how-do-i-tar-a-directory-of-files-and-folders-without-including-the-directory-it or create flat tar archive: ignoring all parents when adding folders | You can use the --transform option. For example: touch foo bar tar cf archive.tar foo bar --transform 's,^,bazdir/,' tar tvf archive.tar -rw-r--r-- tigger/tigger 0 2017-10-11 19:32 bazdir/foo -rw-r--r-- tigger/tigger 0 2017-10-11 19:32 bazdir/bar For more details and more complex options see How to create a common base folder with tar and how to rename folders? - on the sister site, Ask Ubuntu. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
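To preview the renaming without unpacking anything, GNU tar can print the transformed member names while creating the archive (a hedged sketch; --show-transformed-names is GNU-specific):
tar cvf archive.tar foo bar --transform 's,^,bazdir/,' --show-transformed-names
bazdir/foo
bazdir/bar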
397,381 | I am working on a remote machine over ssh without X, and it has no browser installed. When I invoke browse-url in Emacs (not surprisingly) it gives an error: "No usable browser found." I can install w3m on the remote machine or forward a graphical browser, but I would like to see the URL opened on the local machine with 'browse http://example.com/ '. Has any work been done on this, or if not, how would one write a program that does such a thing (if it is possible at all)? I've seen this answer, but apparently it can't be used in scripting (when ssh'ing back to the original host is impossible): https://stackoverflow.com/questions/38567427/run-a-command-on-local-machine-while-on-ssh-in-bash Or if it's impossible I'll just have to forward Firefox itself (though slow). | You can use the --transform option. For example: touch foo bar tar cf archive.tar foo bar --transform 's,^,bazdir/,' tar tvf archive.tar -rw-r--r-- tigger/tigger 0 2017-10-11 19:32 bazdir/foo -rw-r--r-- tigger/tigger 0 2017-10-11 19:32 bazdir/bar For more details and more complex options see How to create a common base folder with tar and how to rename folders? - on the sister site, Ask Ubuntu. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168299/"
]
} |
397,382 | I would like to compare the two files, and only print this line and append it to "source.txt": 01.02.70 08h00,4.4.4.4,443 Here are my files: source.txt DATETIME,IPSOURCE,PORT 01.01.70 08h00,0.0.0.0,443 01.01.70 08h00,2.2.2.2,443 events.txt DATETIME,IPSOURCE,PORT 01.02.70 09h00,0.0.0.0,443 01.02.70 09h00,2.2.2.2,443 01.02.70 08h00,4.4.4.4,443 I don't care about the DATETIME field in the comparison, I only want to add the new IP that appears in the log file "events.txt" to "source.txt" (addresses 0.0.0.0 and 2.2.2.2 appear in both files). So I want to use grep -vxFf source.txt events.txt without taking the first field into account, I look only for differences in the IPSOURCE field (second column). | You can use the --transform option. For example: touch foo bar tar cf archive.tar foo bar --transform 's,^,bazdir/,' tar tvf archive.tar -rw-r--r-- tigger/tigger 0 2017-10-11 19:32 bazdir/foo -rw-r--r-- tigger/tigger 0 2017-10-11 19:32 bazdir/bar For more details and more complex options see How to create a common base folder with tar and how to rename folders? - on the sister site, Ask Ubuntu. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/397382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255065/"
]
} |
397,390 | This is what I have now: find /home/a -mtime -1 -type f -print0 | xargs -0 tar -pcf $file01 | tee -a $LOGFILE 2>&1 I do understand what tar -cf is but what is tar -pcf ? | The -p flag is not useful when creating an archive (with -c ), only when extracting (with -x ). From the GNU tar manual: -p , --preserve-permissions , --same-permissions extract information about file permissions (default for superuser) That's a horrible way of saying "preserve permissions and ownerships". From the OpenBSD manual : -p Preserve user and group ID as well as file mode regardless of the current umask(2) . The setuid and setgid bits are only preserved if the user and group ID could be preserved. Only meaningful in conjunction with the -x flag. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/397390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245524/"
]
} |
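Given that, the p can simply be dropped from the questioner's pipeline; a hedged sketch keeping the original variable names (tar -cf writes the archive to $file01 itself, so tee only ever sees tar's diagnostics, which this version also captures by redirecting stderr into the pipe):
find /home/a -mtime -1 -type f -print0 | xargs -0 tar -cf "$file01" 2>&1 | tee -a "$LOGFILE"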