Columns: source_id (int64), question (string), response (string), metadata (dict)
536,984
Is gzip atomic? What happens if I stop the gzip process while it's in the middle of gzipping a file? If it's not atomic, and if I already pressed Ctrl+C on a gzip *.txt process, how do I safely resume? (I am not just curious about how to resume, but also about whether gzip specifically is atomic.)
Is gzip atomic? No. It creates a compressed file and then removes the uncompressed original. Specifically, it does not compress a file in situ: there is a period of time while the file is being compressed during which the compressed target is incomplete and the partially compressed file and its source both exist in the filesystem. What happens if I stop the gzip process while it's in the middle of gzipping a file? If you stop the gzip process with a catchable signal (SIGINT from Ctrl+C, for example) it will clean up partially created files. Otherwise, depending on the point at which it's stopped, you may end up with a partially compressed file alongside the untouched original. If it's not atomic, and I already pressed Ctrl+C on a gzip *.txt process, how do I safely resume? Delete the partially compressed version (if it still exists) and restart the gzip.
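A minimal recovery sketch along those lines (my own illustration, not from the answer), assuming the interrupted run was gzip *.txt in the current directory:
# Remove any leftover partial .gz whose source .txt still exists,
# then compress whatever remains uncompressed.
for f in *.txt; do
    [ -e "$f.gz" ] && rm -f -- "$f.gz"   # partial output from the interrupted run
done
gzip -- *.txt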
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/536984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95866/" ] }
537,029
When I try to mount an lvm snapshot device, I get an error: $ sudo mount -o loop /dev/mapper/matrix-snap--of--core /home/me/mountpointmount: /home/me/mountpoint: mount(2) system call failed: File exists. What is the “File exists.” error trying to tell me? What can I do to mount the lvm snapshot device? The mount command has “always worked before”, though last time I checked was in October 2018. A similar error has been encountered in this three-year-old question . However, the error message is slightly different and it’s 2019 now … This is my output for lsblk . NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 465.8G 0 disk └─sda1 8:1 0 465.8G 0 part └─core 254:0 0 465.8G 0 crypt ├─matrix-swapvolume 254:1 0 4G 0 lvm [SWAP] └─matrix-core-real 254:3 0 461.8G 0 lvm ├─matrix-core 254:2 0 461.8G 0 lvm / └─matrix-snap--of--core 254:5 0 461.8G 0 lvm sdb 8:16 1 59.5G 0 disk └─matrix-snap--of--core-cow 254:4 0 59.5G 0 lvm └─matrix-snap--of--core 254:5 0 461.8G 0 lvm I run Parabola Linux, my system is up-to-date. The logical volume /dev/matrix/core uses btrfs , which I suspect has something to do with the error. This is my output of uname -rvs . Linux 5.2.5-gnu-1 #1 SMP PREEMPT Sun Aug 4 02:02:20 UTC 2019
(I'm not sure why you're using the -o loop mount option, as the LVM snapshot device should be just as good a disk device as its original is.) "File exists" is the standard English text for errno value 17, or EEXIST as it is named in #include <errno.h>. That error result is not documented for the mount(2) system call, so a bit of source code reading is in order. The Linux kernel cross-referencer at elixir.bootlin.com can list all the locations where EEXIST is used in the kernel code. Since you're attempting to loop-mount a btrfs filesystem, the places that might be relevant are: drivers/block/loop.c, related to loop device management, and fs/btrfs/super.c, which would be used when mounting a btrfs filesystem. In drivers/block/loop.c, the EEXIST error is generated if you're trying to allocate a particular loop device that is already in use (e.g. mount -o loop=/dev/loop3 ... and /dev/loop3 is already occupied). But that should not be the issue here, unless something is creating a race condition with your mount command. fs/btrfs/super.c actually has a btrfs-specific function for translating error codes into error messages. It translates EEXIST into Object already exists. You are trying to mount what looks like a clone of a btrfs filesystem that is already mounted, so it actually makes sense: historically, this used to confuse btrfs, but it appears some protection has been (sensibly) added at some point. Since this seems to be an LVM-level snapshot, as opposed to a snapshot made with btrfs's built-in snapshot functionality, you must treat the snapshot like a cloned filesystem if you wish to mount it while its origin filesystem is mounted: only LVM will "know" that it's a snapshot, not an actual 1:1 clone. So, you'll need to change the metadata UUID of the snapshot/clone filesystem if you need to mount it on the same system as the original. Warning: I don't have much experience with btrfs, so the below might be wrong or incomplete. Since your kernel is newer than 5.0, you may have the option of using btrfstune -m /dev/mapper/matrix-snap--of--core to make the change. Otherwise you would have to use btrfstune -u /dev/mapper/matrix-snap--of--core which would be slower as it needs to update all the filesystem metadata, not just the metadata_uuid field in the filesystem superblock.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/537029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24229/" ] }
537,061
Hi, I have output like this: [4251][7c3c] and I need to get 517C3C. I've tried
decodeSerial() {
  serial=$1
  #serial=$serial | sed -r 's/(\[|\])//g'
  #serial=$serial | sed 's/]//'
  #serial=$serial | tr a-z A-Z
  serial=${serial: -6}
  echo $serial
}
Only the "last six characters" part works fine.
$ echo '[4251][7c3c]' | tr -d '[]' | tr '[:lower:]' '[:upper:]' | cut -c 3-
517C3C
As a function:
decodeSerial () {
  printf '%s\n' "$1" | tr -d '[]' | tr '[:lower:]' '[:upper:]' | cut -c 3-
}
The pipeline removes all [ and ] characters from the input, converts any lower-case characters to upper-case, and discards the first two characters from the result. With a single sed call (which assumes that the alphabetic characters are hexadecimal digits, a through f):
$ echo '[4251][7c3c]' | sed 's/[][]//g; y/abcdef/ABCDEF/; s/^..//'
517C3C
To keep the last six characters rather than delete the first two, this sed call may be changed to
sed 's/[][]//g; y/abcdef/ABCDEF/; s/^.*\(.\{6\}\)$/\1/'
Using awk:
$ echo '[4251][7c3c]' | awk '{ gsub("[][]", ""); print toupper(substr($1,3)) }'
517C3C
Using the awk command in your function (the sed command above could be inserted in a similar fashion):
decodeSerial () {
  printf '%s\n' "$1" | awk '{ gsub("[][]", ""); print toupper(substr($1,3)) }'
}
Using awk without a pipeline in the shell function:
decodeSerial () {
  awk -v string="$1" 'BEGIN { gsub("[][]", "", string); print toupper(substr(string,3)) }'
}
Note that your ${serial: -6} is bash syntax which might not work with /bin/sh.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368396/" ] }
537,070
So I am trying to see if I can ping another machine on my network and I tried a simple ping And I am getting 100% packet loss. I also tried to ping google.com Also getting 100% packet loss Funny thing is that when I try tnsping (Oracle utility for pinging another database), I am actually able to ping the server I am looking for. Am I using the ping command incorrectly? Or am I using the incorrect command?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368402/" ] }
537,230
I show my full working directory plus other information (git, etc) in my bash prompt and sometimes it gets very long. I want to add a newline to the end of my prompt so I can type the command on the next line, but only if the prompt is long e.g. more than 50 characters. | ~ $ Typing a command here is nice | | ~/foo/longDirectoryName/longsubirectory/src/package (master +*) $ Typing here s|| ucks. I want to just start on a new line | Obviously, If I wanted to always type my command on the next line, I could just add a newline to PS1 (as in this post ). But I haven't found a way to do that conditionally because PS1 is just a format string. P.S. I'm actually using ZSH trying to customize the Agnoster theme but I imagine any solution for bash in general would help.
In zsh , that's what the %<number>(l:<yes>:<no>) prompt expansion is for. When the number is negative, like -30 , if there are at least 30 characters left until the right edge of the screen, then the yes text is output, otherwise no , so: PS1=$'%~%-30(l::\n)$ ' Would insert a newline if fewer than 28 characters (30 minus the "$ " ) are left for you to use on the line. You can do your 50 or more with: PS1=$'%~%50(l:\n:)$ ' But IMO, it's more useful to guarantee a minimum available space, than a maximum unusable space. See the manual for details. You'll find other directives to truncate long prompts and replace with ellipsis for instance which you may also find useful. Note that zsh prompt expansion is completely different from that of bash . It's actually closer to that of tcsh , so solutions for bash are unlikely to be of much use for zsh , though it's generally more true the other way round.
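Since the question mentions that a bash solution would also help, here is a rough bash analogue (my own hedged sketch, not part of the answer) that rebuilds PS1 before every prompt and only appends a newline when the directory part grows beyond the assumed 50-character threshold:
__prompt_nl() {
    local dir="${PWD/#$HOME/\~}"    # the part of the prompt that gets long
    if (( ${#dir} > 50 )); then
        PS1="${dir}\n\$ "           # long prompt: type on the next line
    else
        PS1="${dir} \$ "            # short prompt: stay on the same line
    fi
}
PROMPT_COMMAND=__prompt_nl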
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537230", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/362854/" ] }
537,235
I have just created a fresh install of Debian 10. I have disabled network-manager, since I find it extremely annoying, opting for wpa_supplicant (non-mobile wireless desktop). In order to get this to work, I followed the official guide on wiki.debian.org . Note that dhcpcd.service no longer exists, so I couldn't configure it. Unfortunately, it does not work. There are a few strange things happening with it, namely that it does successfully raise the interface and associate, but fails anyway for some reason (you will see this in journalctl dump). There is also about a minutes wait during boot for "raising network interfaces". What's even stranger is that once logged in, the interface is configured and has an ip address, but it's in a down state. ifup won't work at this point. if I su to root and lower and raise the wifi interface, it connects no problem and in a short amount of time. I have no idea what could be causing this to happen, but it may have to do with something involving systemd targets. Does anyone know what's going on, and how it may be fixed? journalctl dump
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146386/" ] }
537,257
Didn't see an option in the menus or any documentation on what I imagine is something simple: I have a multi-page PDF and want to delete a specific page within Okular's GUI.
There is no way to remove a page in Okular. However, it can refresh a modified PDF document automatically, so there is no need to re-open the document manually after modifying it in an external tool such as pdftk. To remove a given page, use the cat option and specify the range of pages you'd like to keep in the target PDF file. For example, if you want to remove the 5th page in document.pdf, execute the following command:
pdftk document.pdf cat 1-4 6-end output out.pdf && mv out.pdf document.pdf
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/537257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42894/" ] }
537,289
I'm trying to assign -e to a variable in Bash 4. But the variable remains empty. For example:
$ var="-e"
$ echo $var

$ var='-e'
$ echo $var

$ var="-t"
$ echo $var
-t
Why does it work with -t, but not -e?
It works, but running echo -e doesn't output anything in Bash unless both the posix and xpg_echo options are enabled, as the -e is then interpreted as an option:
$ help echo
echo: echo [-neE] [arg ...]
    Write arguments to the standard output.
    Display the ARGs, separated by a single space character and followed by a newline, on the standard output.
    Options:
      -n    do not append a newline
      -e    enable interpretation of the following backslash escapes
      -E    explicitly suppress interpretation of backslash escapes
Use printf "%s\n" "$var" instead. And as cas notes in a comment, Bash's declare -p var (typeset -p var in ksh, zsh and yash (and bash)) can be used to tell the exact type and contents of a variable. See: Why is printf better than echo?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364032/" ] }
537,319
I want to download files from my office computer to my laptop. I can connect my office machine by SSH to the organization server and then SSH from the server to my office machine. The only commands the organization server accepts are ssh, ssh1, and ssh2. How can I download a file from my office (remote) machine through the server into my laptop (local) machine?
The previous answers mention how to use the ProxyJump directive (added in OpenSSH 7.3) to connect through an intermediate server (usually referred to as the bastion host), but they mention it just as a command line argument. Unless it is a machine you won't be connecting to in the future, the best thing is to configure it in ~/.ssh/config. I would put a file like:
Host office-machine
    Hostname yochay-machine.internal.company.local
    ProxyJump bastion-machine

Host bastion-machine
    Hostname organization-server.company.com
    ...
If you are using an earlier version of OpenSSH which doesn't support ProxyJump, you would replace it with the equivalent:
ProxyCommand ssh -W %h:%p bastion-machine
and if your local ssh version were a really ancient one that didn't support -W:
ProxyCommand ssh bastion-machine nc %h %p
although this last one requires that the bastion machine has nc installed. The beauty of ssh is that you can configure each destination in the file, and they will stack very nicely. Thus you end up working with office-machine as the hostname in all the tools (ssh, scp, sftp...) as if they were direct connections, and they will figure out how to connect based on the ssh_config. You could also have wildcards like Host *.internal.company.local to make all hosts ending like that go through a specific bastion, and it will apply to all of them. Once configured correctly, the only difference between doing one-hop connections or twenty would be the slower connection times.
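With a config like that in place, every OpenSSH tool can use the alias directly and the hop through the bastion happens transparently (the file path below is just an illustration):
ssh office-machine                          # interactive login via the bastion
scp office-machine:/home/me/report.txt .    # copy a file straight to the laptop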
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/537319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318266/" ] }
537,413
Sorry if this has an answer elsewhere, I've no idea how to search for my problem. I was running some simulations on a redhat linux HPC server, and my code for handling the folder structure to save the output had an unfortunate bug. My matlab code to create the folder was: folder = [sp.saveLocation, 'run_', sp.run_number, '/']; where sp.run_number was an integer. I forgot to convert it to a string, but for some reason running mkdir(folder); (in matlab) still succeeded. In fact, the simulations ran without a hitch, and the data got saved to the matching directory. Now, when the folder structure is queried/printed I get the following situations: When I try to tab autocomplete: run_ run_^A/ run_^B/ run_^C/ run_^D/ run_^E/ run_^F/ run_^G/ run_^H/ run_^I/ When I use ls : run_ run_? run_? run_? run_? run_? run_? run_? run_? run_? run_? . When I transfer to my mac using rsync the --progress option shows: run_\#003/ etc. with (I assume) the number matching the integer in sp.run_number padded to three digits, so the 10th run is run_\#010/ When I view the folders in finder I see run_ run_ run_ run_ run_ run_ run_ run_ run_ run_? Looking at this question and using the command ls | LC_ALL=C sed -n l I get: run_$run_\001$run_\002$run_\003$run_\004$run_\005$run_\006$run_\a$run_\b$run_\t$run_$ I can't manage to cd into the folders using any of these representations. I have thousands of these folders, so I'll need to fix this with a script.Which of these options is the correct representation of the folder? How can I programmatically refer to these folders so I rename them with a properly formatted name using a bash script? And I guess for the sake of curiosity, how in the hell did this happen in the first place?
You can use the perl rename utility (aka prename or file-rename) to rename the directories. NOTE: This is not to be confused with rename from util-linux, or any other version.
rename -n 's/([[:cntrl:]])/ord($1)/eg' run_*/
This uses perl's ord() function to replace each control-character in the filename with the ordinal number for that character. e.g. ^A becomes 1, ^B becomes 2, etc. The -n option is for a dry-run to show what rename would do if you let it. Remove it (or replace it with -v for verbose output) to actually rename. The e modifier in the s/LHS/RHS/eg operation causes perl to execute the RHS (the replacement) as perl code, and the $1 is the matched data (the control character) from the LHS. If you want zero-padded numbers in the filenames, you could combine ord() with sprintf(). e.g.
$ rename -n 's/([[:cntrl:]])/sprintf("%02i",ord($1))/eg' run_*/ | sed -n l
rename(run_\001, run_01)$
rename(run_\002, run_02)$
rename(run_\003, run_03)$
rename(run_\004, run_04)$
rename(run_\005, run_05)$
rename(run_\006, run_06)$
rename(run_\a, run_07)$
rename(run_\b, run_08)$
rename(run_\t, run_09)$
The above examples work if and only if sp.run_number in your matlab script was in the range of 0..26 (so it produced control-characters in the directory names). To deal with ANY 1-byte character (i.e. from 0..255), you'd use:
rename -n 's/run_(.)/sprintf("run_%03i",ord($1))/e' run_*/
If sp.run_number could be > 255, you'd have to use perl's unpack() function instead of ord(). I don't know exactly how matlab outputs an unconverted int in a string, so you'll have to experiment. See perldoc -f unpack for details. e.g. the following will unpack both 8-bit and 16-bit unsigned values and zero-pad them to 5 digits wide:
rename -n 's/run_(.*)/sprintf("run_%05i",unpack("SC",$1))/e' run_*/
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/537413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128396/" ] }
537,516
I am getting this one when I open a terminal session:
sh: error importing function definition for `read.json'
sh: error importing function definition for `ts-project'
sh doesn't like these functions because they look like:
read.json(){
  :
  :
}
and
ts-project(){
  :
  :
}
the real question is - why is sh touching/interpreting these files? I am on MacOS and seen this before, it's such a mystery. I would think only bash would be loading these files. update: bash and sh are nothing out of the ordinary. When I type bash into the terminal, I get this:
alex$ bash
beginning to load .bashrc
finished loading .bashrc
bash-3.2$
when I type sh in the terminal, I get this:
alex$ sh
sh: error importing function definition for `read.json'
sh: error importing function definition for `ts-project'
sh-3.2$
That error happens when bash masquerading as a POSIX shell tries to import those functions from the environment, not when loading them by interpreting a file like ~/.bashrc or such. Simplified example:
foo.bar(){ true; }; export -f foo.bar; bash --posix -c true
bash: error importing function definition for `foo.bar'
I was expecting bash not to load functions from the environment when in posix mode, but it does, and only complains when their names contain funny characters. Notice that bash will also run in posix mode when the POSIXLY_CORRECT or POSIX_PEDANTIC environment variable is set, or when it was compiled with --enable-strict-posix-default / STRICT_POSIX. This latter seems to be the case for /bin/sh on MacOS (look here for PRODUCT_NAME = sh), where I expect this error to also trigger when using library functions like popen(3) or system(3).
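If the goal is simply to silence the message, one hedged workaround (my own illustration, using the function names from the question) is to define the functions without exporting them, so they never reach the environment that sh inherits:
# ~/.bashrc
read.json() { :; }      # define as usual
ts-project() { :; }
# export -f read.json ts-project   # <- dropping the export avoids the sh warning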
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/537516", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
537,546
Every time I try to upgrade it shows me this message sudo apt-get upgradeReading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: libreoffice-avmedia-backend-gstreamer : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-base : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-base-core : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-base-drivers : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-calc : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-draw : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-gnome : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-gtk3 : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-impress : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-math : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-ogltrans : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-sdbc-hsqldb : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed libreoffice-writer : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installed openjdk-11-jre : Depends: openjdk-11-jre-headless (= 11.0.4+11-1ubuntu2~18.04.3) but 10.0.2+13-1ubuntu0.18.04.4 is installed python3-uno : Depends: libreoffice-core (= 1:6.0.7-0ubuntu0.18.04.9) but 1:6.0.6-0ubuntu0.18.04.1 is installedE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
In certain cases you might also want to force the overwrite sudo dpkg --configure --force-overwrite -a Alternatively: sudo apt -o Dpkg::Options::="--force-overwrite" --fix-broken install
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/537546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368812/" ] }
537,566
I have a file with data like this:
UserName
UserName
UserName
Each piece of data is separated by a new line. I need that data to be converted to an array of strings and stored in a variable. How do I achieve this with just bash or shell execution?
With mapfile:
$ mapfile -t array < yourfile
$ declare -p array   # print array content
declare -a array=([0]="UserName" [1]="UserName" [2]="UserName")
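A small usage sketch for the resulting array (the file name matches the answer; the loop is my own illustration):
for name in "${array[@]}"; do
    printf 'user: %s\n' "$name"
done
printf 'total entries: %d\n' "${#array[@]}"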
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/361095/" ] }
537,576
The setup T460 thinkpad (Intel HD Graphics) docked with lid closedconnected to two external monitors one via DVI and the other via HDMI The problem During boot, up until the login screen both monitors work fine. But as soon as I sign in the monitor connected with the DVI will go blank about 80% of the time. My workaround solutions to fix this problem Going back to the login screen via CTRL + ALT + BACKSPACE and signing in again will almost always fix it. Physically removing/reinserting the cable will almost always fix it. Behavior When logging in and experiencing the problem, the monitor seems to be recognized as I can move my mouse into its resolution, but the monitor claims cable not connected. When did the problem occur? After moving and going from three to two monitors as well as reinstalling Linux Mint for luks encryption I'm pretty green when it comes to diagnosing graphical problem. I would be happy to paste logs, but I'm not sure which ones would be helpful. Please let me know which logs might be worth taking a look at, or any other suggestions you may have! ----- Edits/ Additional Info ----- The issue is isolated with Linux as this machine can dual boot into windows without the described problem $ lspci -k | grep -EA3 'VGA' 00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07) Subsystem: Lenovo Skylake GT2 [HD Graphics 520] Kernel driver in use: i915 Kernel modules: i915 | $ cat /var/log/Xorg.0.log | grep -i driver [ 105.041] X.Org Video Driver: 23.0 [ 105.041] X.Org XInput driver : 24.1 [ 105.062] (==) Matched modesetting as autoconfigured driver 0 [ 105.062] (==) Matched fbdev as autoconfigured driver 1 [ 105.062] (==) Matched vesa as autoconfigured driver 2 [ 105.062] (==) Assigned the driver to the xf86ConfigLayout [ 105.062] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 105.063] Module class: X.Org Video Driver [ 105.063] ABI class: X.Org Video Driver, version 23.0 [ 105.063] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 105.063] Module class: X.Org Video Driver [ 105.063] ABI class: X.Org Video Driver, version 23.0 [ 105.063] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 105.063] Module class: X.Org Video Driver [ 105.063] ABI class: X.Org Video Driver, version 23.0 [ 105.063] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 105.063] (II) FBDEV: driver for framebuffer: fbdev [ 105.063] (II) VESA: driver for VESA chipsets: vesa [ 105.079] ABI class: X.Org Video Driver, version 23.0 [ 105.087] (II) glamor: OpenGL accelerated X.org driver based. [ 105.946] (II) modeset(0): [DRI2] DRI driver: i965 [ 105.946] (II) modeset(0): [DRI2] VDPAU driver: i965 [ 106.020] Module class: X.Org XInput Driver [ 106.020] ABI class: X.Org XInput driver, version 24.1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/342987/" ] }
537,586
I will put scp on a cronjob, so I would rather auto-enter the password of the other server.
sshpass -p 'your_password' scp [email protected]:/usr/etc/Output/*.txt /usr/abc/
This script didn't work for me, I am using Solaris. Please also include an explanation in the answers. Thank you. ADDITIONAL I used
#!/usr/local/bin/expect -f
spawn bash -c "scp /apps/DASHBOARD/xx/xxx/xxxx/TEST1.*.txt dash@MYSERV:/apps/DASHBOARD/xx/xxx1/"
expect "*Password:*"
send "pass123\r"
interact
spawn bash -c "scp /apps/DASHBOARD/xx/xxx/xxxx/SAMP1.*.txt* dash@MYSERV:/apps/DASHBOARD/xx/samp1/"
expect "*Password:*"
send "pass123\r"
interact
And I put this on my CRONJOB. This script works for me if I run it manually, but when I set it on my CRONJOB, it doesn't proceed to the second scp. Please help.
Use public key authentication in order to avoid keeping/maintaining hard-coded passwords. Copy the content of the local user's id_rsa.pub to the remote user's ~/.ssh/authorized_keys file in order to establish a public key authentication connection. If you still want to use a password, then an expect script could be made like the following; change the 2nd line of the expect script to match your user and server, and MY_PASSWORD with your password:
spawn scp "[email protected]:/usr/etc/Output/*.txt" /usr/abc/
expect "[email protected]'s password:"
send "MY_PASSWORD\r"
interact
Thanks to @pynexj at post Link to StackOverFlow I had to modify the script as the following:
#!/usr/bin/expect -f
spawn scp /TEST1.txt [email protected]:/root/
expect "*assword:*"
send "password\r"
expect eof
spawn scp /TEST2.txt [email protected]:/root/
expect "*assword:*"
send "password\r"
expect eof
exit
Note that if you have multiple files or patterns that need to be transferred you can also consider using SFTP with batch mode and expect.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/358760/" ] }
537,631
Consider the following strings:
$ columnA="A1\nA2\nA3"
$ columnB="B1\nB2\nB3"
$ columnC="C1\nC2\nC3"
Using Bash, how can I merge these so that I get another string with the following content:
$ echo "$table"
A1;B1;C1\nA2;B2;C2\nA3;B3;C3
You can use the command paste and process substitution:
table="$(paste -d ';' <(echo -e "$columnA") <(echo -e "$columnB") <(echo -e "$columnC"))"
echo "$table"
will give output as:
A1;B1;C1
A2;B2;C2
A3;B3;C3
Also don't forget to use the -e flag with echo, otherwise it will not interpret \n, and you will get output:
A1\nA2\nA3;B1\nB2\nB3;C1\nC2\nC3
Or, use printf:
table="$(paste -d ';' <(printf "$columnA") <(printf "$columnB") <(printf "$columnC"))"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162056/" ] }
537,637
I'm experiencing a strange behavior on some of our machines atm. At least, it seems strange to me and my colleagues and we didn't find any explanation for it :) [edit 1] Next paragraph seems to be wrong. See edit 2 at end. We're using bash and zsh here. So, when SSHing into some of the zsh-default-machines (plain ssh login@host ) which are configured to use zsh as default shell (with chsh -s /usr/bin/zsh ), the then-opened shell is an interactive but non-login shell, regardless if we're already logged in on the respective machine or not. In my understanding, SSHing into a machine should be starting a new user session on that machine, thus requiring the shell to be a login shell, right? Shouldn't that be the case for zsh, too? When changing the default shell to bash on the machines, logging into the machine uses a login-shell. Is this the normal behavior for zsh? Could it be changed? Or is it some misconfiguration? [/edit 1] [edit 2]Ok, according to the ZSH documentation you could easily test if it is a login shell or not: $ if [[ -o login ]]; then; print yes; else; print no; fi See: http://zsh.sourceforge.net/Guide/zshguide02.html However, due to zsh man entry / documentation, zsh should source /etc/profile which in turn sources the scripts under /etc/profile.d/*.sh . My question above originated in the fact, that the scripts are not sourced and thus most of our environment variables and system configuration stuff isn't properly initialized. However, as described above - when we're using bash as default shell, /etc/profile and the scripts in the profile.d-folder are sourced. [/edit 2] [edit 3 - ANSWER]Thx @StéphaneChazelas for the answer in the comments below!It seems zsh is only sourcing /etc/profile when running in sh / ksh compatibility mode (see the respecitve man entry https://linux.die.net/man/1/zsh ). As logging in via SSH doesn't trigger that compatibility mode, zsh doesn't necessarily source /etc/profile on it's own but have to be triggered via .zprofile [/edit 3] System: OS: Ubuntu 18.04zsh-5.4.2 with omz and some plugins activated. Thank you!
ZSH just works in this way. /etc/profile is NOT an init file for ZSH . ZSH uses /etc/zprofile and ~/.zprofile . Init files for ZSH: /etc/zshenv ~/.zshenv login mode: /etc/zprofile ~/.zprofile interactive: /etc/zshrc ~/.zshrc login mode: /etc/zlogin ~/.zlogin Tips: Default shell opened in your terminal on Linux is a non-login, interactive shell. But on macOS, it's a login shell. References Unix shell initialization Shell Startup Scripts
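If the practical goal (as in the question's edit 3) is for zsh login shells to still pick up /etc/profile and the /etc/profile.d/*.sh scripts, a commonly used line for ~/.zprofile is shown below; treat it as a sketch to adapt rather than an official fix:
# ~/.zprofile — source /etc/profile under sh emulation so its sh syntax parses cleanly
emulate sh -c '. /etc/profile'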
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/537637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161032/" ] }
537,645
I'm trying to limit the total resources accessible from docker (for example only 90% of the RAM and 1500% of the CPU). I cannot use CPU and RAM limit when I'm launching my containers, that's why I need to limit the total resources available for docker containers. I have around 20 containers which can consume the maximum CPU and memory but not at the same time, so I cannot set the CPU and RAM limit, that's why I need to limit the total resource used by docker First of all I've created a slice: I tried the instruction above, but impossible to limit both the RAM and the CPU usage # /etc/systemd/system/docker_limit.slice[Unit]Description=Slice that limits docker resourcesBefore=slices.target[Slice]CPUAccounting=trueCPUQuota=700%#Memory ManagementMemoryAccounting=trueMemoryHigh=20GMemoryMax=25GMemoryMaxSwap=10G And my daemon.json { "insecure-registries" : [ "url1", "url2"], "cgroup-parent": "docker_limit.slice"} But when I try from a container: stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 I can see from docker stats it's using 111Go of Ram (full capacity of my server) stress --cpu 16 I can see from docker stats it's using near 1600 % (full capacity of my server) I think I've missed something but I don't know what
For anyone else who needs a complete answer to this question, here is a full recap. First of all I created a slice called docker_limit. Create the file /etc/systemd/system/docker_limit.slice:
[Unit]
Description=Slice that limits docker resources
Before=slices.target

[Slice]
CPUAccounting=true
CPUQuota=700%
#Memory Management
MemoryAccounting=true
MemoryLimit=25G
Start the unit: systemctl start docker_limit.slice
Edit /etc/docker/daemon.json:
{
  "cgroup-parent": "docker_limit.slice"
}
Restart the Docker daemon: systemctl restart docker
In order to verify all works as expected, run systemd-cgtop; you should see processes listed under docker_limit.slice. Credit to @rwos.
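As an extra sanity check of my own (not from the answer), you can also confirm that a running container was actually placed under the slice; the inspect field is standard Docker, the container name is hypothetical:
docker inspect --format '{{ .HostConfig.CgroupParent }}' my-container
# expected output: docker_limit.slice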
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368897/" ] }
537,702
Looking at the files in my /etc/profile.d directory: cwellsx@DESKTOP-R6KRF36:/etc/profile.d$ ls -ltotal 32-rw-r--r-- 1 root root 96 Aug 20 2018 01-locale-fix.sh-rw-r--r-- 1 root root 1557 Dec 4 2017 Z97-byobu.sh-rwxr-xr-x 1 root root 3417 Mar 11 22:07 Z99-cloud-locale-test.sh-rwxr-xr-x 1 root root 873 Mar 11 22:07 Z99-cloudinit-warnings.sh-rw-r--r-- 1 root root 825 Mar 21 10:55 apps-bin-path.sh-rw-r--r-- 1 root root 664 Apr 2 2018 bash_completion.sh-rw-r--r-- 1 root root 1003 Dec 29 2015 cedilla-portuguese.sh-rw-r--r-- 1 root root 2207 Aug 27 12:25 oraclejdk.sh This is Ubuntu on the "Windows Subsystem for Linux (WSL)". Anyway the content of oraclejdk.sh is like this: export J2SDKDIR=/usr/lib/jvm/oracle_jdk8export J2REDIR=/usr/lib/jvm/oracle_jdk8/jreexport PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files/WindowsApps/CanonicalGroupLimited.Ubuntu18.04onWindows_1804.2019.522.0_x64__79rhkp1fndgsc:/snap/bin:/usr/lib/jvm/oracle_jdk8/bin:/usr/lib/jvm/oracle_jdk8/db/bin:/usr/lib/jvm/oracle_jdk8/jre/binexport JAVA_HOME=/usr/lib/jvm/oracle_jdk8export DERBY_HOME=/usr/lib/jvm/oracle_jdk8/db I'm pretty sure it's run when the bash shell starts. My question is, why don't all the *sh files have the x permission bit set? Don't all shell scripts need the x perission bit set in order to be executable? Please consider me a bit of a novice.
A shell script only needs to be executable if it is to be run as ./scriptname If it is executable, and if it has a valid #! -line pointing to the correct interpreter, then that interpreter (e.g. bash ) will be used to run the script. If the script is not executable (but still readable), then it may still be run with an explicit interpreter from the command line, as for example in bash ./scriptname (if it's a bash script). Note that you would have to know what interpreter to use here as a zsh script might not execute correctly if run with bash , and a bash script likewise would possibly break if executed with sh (just as a Perl script would not work correctly if executed by Python or Ruby). Some script, as the one you show, are not actually scripts but "dot-scripts". These are designed to be sourced , like . ./scriptname i.e. used as an argument to the dot ( . ) utility, or (in bash ), source ./scriptname (the two are equivalent in bash , but the dot utility is more portable) This would run the commands in the dot-script in the same environment as the invoking shell, which would be necessary for e.g. setting environment variables in the current environment. (Scripts that are run as ordinary are run in a child environment, a copy of its parent's environment, and can't set environment variables in, or change the current directory of, their parent shells.) A dot-script is read by (or "sourced by") the current shell, and therefore do not have to be executable, only readable. I can tell that the script that you show the contents of is a dot-script since it does not have a #! -line (it does not need one) and since it just exports a bunch of variables. I believe I picked up the term "dot-script" from the manual for the ksh93 shell. I can't find a more authoritative source for it, but sounds like a good word to use to describe a script that is supposed to be sourced using the . command.
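A tiny demonstration of that difference (file name and variable are my own, hypothetical):
cat > setenv.sh <<'EOF'
export GREETING=hello
EOF

bash setenv.sh             # runs in a child shell
echo "${GREETING-unset}"   # prints "unset": the child could not change this shell

. ./setenv.sh              # sourced into the current shell
echo "${GREETING-unset}"   # now prints "hello"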
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368935/" ] }
537,707
I have a text file on Linux where the contents are like below:
help.helloworld.com:latest.world.com
dev.helloworld.com:latest.world.com
I want to get the contents before the colon, like below:
help.helloworld.com
dev.helloworld.com
How can I do that within the terminal?
This is what cut is for:
$ cat file
help.helloworld.com:latest.world.com
dev.helloworld.com:latest.world.com
foo:baz:bar
foo
$ cut -d: -f1 file
help.helloworld.com
dev.helloworld.com
foo
foo
You just set the delimiter to : with -d: and tell it to only print the 1st field (-f1).
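If cut is not an option for some reason, equivalent results can be had with awk or plain shell (a sketch of mine, same file name as above):
awk -F: '{ print $1 }' file

while IFS=: read -r host _; do
    printf '%s\n' "$host"
done < file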
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/537707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368488/" ] }
537,800
There are duplicates in my booklist (txt file) like the following - The Ideal Team PlayerThe Ideal Team Player: How to Recognize and Cultivate The Three Essential VirtuesIdeal Team Player: Recognize and Cultivate The Three Essential VirtuesJoy on Demand: The Art of Discovering the Happiness WithinCrucial Conversations Tools for Talking When Stakes Are HighJoy on DemandSearch Inside Yourself: The Unexpected Path to Achieving Success, HappinessSearch Inside Yourself.................. I need to find the duplicate books and manually remove them after checking. I searched and found the lines need a pattern. Ex. Remove duplicate lines based on a partial line comparison Find partial duplicate lines in a file and count how many time each line was duplicated? But in my case finding a pattern in lines is difficult. However, I found a pattern in the sequence of words. I want to mark lines as duplicate only if they have three consecutive words (case insensitive) . If you see you will find that in - The Ideal Team PlayerThe Ideal Team Player: How to Recognize and Cultivate The Three Essential VirtuesIdeal Team Player: Recognize and Cultivate The Three Essential Virtues Ideal Team Player is the consecutive words which I am looking for. I would like the output to be something like the following - 3 Ideal Team Player2 Joy on Demand2 Search Inside Yourself.................. How can I do that?
The following awk program stores a count for how many times each set of three consecutive words occurs (after removing punctuation characters), and prints the counts and the set of words at the end if the count is larger than 1:
{
    gsub("[[:punct:]]", "")
    for (i = 3; i <= NF; ++i)
        w[$(i-2),$(i-1),$i]++
}
END {
    for (key in w) {
        count = w[key]
        if (count > 1) {
            gsub(SUBSEP," ",key)
            print count, key
        }
    }
}
Given the text in your question, this produces
2 Search Inside Yourself
2 Cultivate The Three
2 The Three Essential
2 Joy on Demand
2 Recognize and Cultivate
2 Three Essential Virtues
2 and Cultivate The
2 The Ideal Team
3 Ideal Team Player
As you can see, this may not be so useful. Instead, we can collect the same count information and then do a second pass over the file, printing each line that contains a word triplet with a count larger than one:
NR == FNR {
    gsub("[[:punct:]]", "")
    for (i = 3; i <= NF; ++i)
        w[$(i-2),$(i-1),$i]++
    next
}
{
    orig = $0
    gsub("[[:punct:]]", "")
    for (i = 3; i <= NF; ++i)
        if (w[$(i-2),$(i-1),$i] > 1) {
            print orig
            next
        }
}
Testing on your file:
$ cat file
The Ideal Team Player
The Ideal Team Player: How to Recognize and Cultivate The Three Essential Virtues
Ideal Team Player: Recognize and Cultivate The Three Essential Virtues
Joy on Demand: The Art of Discovering the Happiness Within
Crucial Conversations Tools for Talking When Stakes Are High
Joy on Demand
Search Inside Yourself: The Unexpected Path to Achieving Success, Happiness
Search Inside Yourself

$ awk -f script.awk file file
The Ideal Team Player
The Ideal Team Player: How to Recognize and Cultivate The Three Essential Virtues
Ideal Team Player: Recognize and Cultivate The Three Essential Virtues
Joy on Demand: The Art of Discovering the Happiness Within
Joy on Demand
Search Inside Yourself: The Unexpected Path to Achieving Success, Happiness
Search Inside Yourself
Caveat: This awk program needs enough memory to store the text of your file about three times over, and may find duplicates in common phrases even when the entries are actually not truly duplicated (e.g. "how to cook" may be part of the titles of several books).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206574/" ] }
537,803
Consider the following csv file:
A,3300
B,8440
B,8443
B,8444
C,304
C,404
M,5502
M,5511
The real csv file is really big (around 60,000 rows) but I only included a small version for describing purposes. I need to create a script to parse the file and filter rows based on the second field, grouping into a single row those with a matching set of characters (replacing the second field with the matching set of characters). In other words, I expect the following output from the given csv file above:
A,3300
B,844
C,304
C,404
M,55
Please note that ONLY the content of the second csv field is relevant for the purpose of the script, so any matching/un-matching occurrence in the other fields needs to remain in the file as is. Would awk be useful to accomplish this task? Or any other built-in function? Any help will be much appreciated.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369018/" ] }
537,807
I have environment variable, which contains JSON text. I want to add some data to it with jq tool. I want to keep all old fields but add and/or overwrite new ones. The idea is the same as with adding paths to PATH variable, but with JSON . Even I don't understand, how to make and example. For one value I wrote echo "{\"A\":\"Hello world\"}" | jq -r Now suppose I want to merge this object with another one echo "{\"A\":\"Goodbye world\", \"B\": \"This was a joke\"}"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28089/" ] }
537,827
I have some files:
file1.csv
file2.csv
file3.csv
A given script processes them, logging to this file: my.log, with this format (filename col2 col3):
file1.csv 1 a
file2.csv 1 a
file3.csv 1 a
file2.csv 2 b
file1.csv 2 b
file3.csv 2 b
file1.csv 3 c
file2.csv 3 c
file3.csv 3 c
file2.csv 4 d
file3.csv 4 d
I'd like to get one col3 value (only the last one) from the my.log file for each *.csv file. I run this command:
ls *.csv | xargs -I@ bash -c "cat my.log | grep @ | tail -n 1 | awk '{ print $3 }'"
It works well, except that awk is giving me all the columns.
file1.csv 3 c
file2.csv 4 d
file3.csv 4 d
How could I get only the col3 column? For example, this:
c
d
d
In your expression "cat my.log | grep @ | tail -n 1 | awk '{ print $3 }'" ...the double-quotes around this string mean that the single-quotes are treated as literals. They don't protect $3 from the shell, so it's being expanded as an environment variable. As $3 is not actually defined by the shell (unless this is in a script that you've invoked with 3 arguments), it becomes the empty string, and the awk expression is simply { print } , printing the whole line. You could fix this by escaping the $ : ls *.csv | xargs -I@ bash -c "cat my.log | grep @|tail -n 1|awk '{print \$3}'" ...or by moving the awk out of the xargs expression: ls *.csv | xargs -I@ bash -c "cat my.log | grep @|tail -n 1"|awk '{print $3}'
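A slightly safer variant (my own sketch) avoids parsing ls output and keeps the awk program single-quoted so the shell never touches $3; -F makes grep treat the file name (which contains dots) as a fixed string:
for f in *.csv; do
    grep -F -- "$f" my.log | tail -n 1 | awk '{ print $3 }'
done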
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303198/" ] }
537,868
we want to separate each string in a csv line into its own line. Example of a line (could be other lines, with the same concept):
machine23,machine094,machine73,machine83
so we try this
echo machine23,machine094,machine73,machine83 | awk -F"," '{for(i=1;i<=NF;i++){printf "%-20s", $i};printf "\n"}'
but we get
machine23           machine094          machine73           machine83
instead of the following expected results
machine23
machine094
machine73
machine83
any suggestions?
$ echo machine23,machine094,machine73,machine83 | tr ',' '\n'
machine23
machine094
machine73
machine83
or if you really wanted to do it in awk (perhaps because you want to do further processing in awk):
$ echo machine23,machine094,machine73,machine83 | \
    awk -F',' -v OFS='\n' '{$1=$1;$0=$0;print}'
machine23
machine094
machine73
machine83
This uses a neat awk trick where if you change any field (even by setting it equal to itself, as in $1=$1) and then set $0=$0, awk will reformat the entire input line - replacing the original field separators (FS, a comma in this case) with the output field separator (OFS, a newline in this case).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
537,912
I am trying to apply below nftables rule which I adopted from this guide : nft add rule filter INPUT tcp flags != syn counter drop somehow this is ending up with: Error: Could not process rule: No such file or directory Can anyone spot what exactly I might be missing in this rule?
You're probably missing your table or chain. nft list ruleset will give you what you are working with. If it prints out nothing, you're missing both. nft add table ip filter # create tablenft add chain ip filter INPUT { type filter hook input priority 0 \; } # create chain Then you should be able to add your rule to the chain. NOTE: If you're logged in with ssh, your connection will be suspended.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369102/" ] }
537,928
total linux beginner here. So I just installed https://github.com/niruix/sshwifty on my server. I only use SSH keys to log into my server. My SSH keys are protected by a passphrase. Now when I want to connect to my server through sshwifty it asks me to provide my private key.
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,20DD60A29753E6F89413A2F03DE8B20F
My private key is still encrypted and I assume I have to decrypt it and paste that output into Sshwifty. What I already tried:
openssl enc -d -aes-128-cbc -in id_rsa -out dec_rsa
This gives me a bad magic number error. Appreciate the help
The command is openssl rsa -in ~/.ssh/id_rsa. If ~/.ssh/id_rsa is encrypted, openssl will ask you for the passphrase to decrypt the private key; otherwise, the key will be output directly on the screen. But with that being said, you SHOULDN'T use the id_rsa file, because Sshwifty is doing the SSH stuff on the backend. That means the private key you give it will be sent to the backend server through the network, which is kind of horrible, since a private key is intended to live only on your local machine and should never be sent to anybody else. However, we're dealing with a design compromise here: the Sshwifty backend needs the private key to be able to authenticate with the SSH server for you. So, to better protect yourself, you can generate an SSH key pair to use specifically in Sshwifty. To generate the private key, run the command ssh-keygen -t rsa -f ~/.ssh/my_server, and when asked for Enter passphrase (empty for no passphrase):, just hit Enter so ssh-keygen will not encrypt the private key. After the command succeeds, you will find two new files under the ~/.ssh/ directory: my_server, which is the private key that you can use in Sshwifty when connecting to the SSH server, and my_server.pub, which is the public key that you need to deploy on your SSH server. In this case, if my_server is for some reason leaked, only one server will be affected. Now, let's talk about the "design compromise". When designing the software, I spent some time trying to find an SSH client library that could work inside a web browser, but I failed to find a reliable one, if any. After that, I spent a bit more time reading SSH specs, and eventually decided that it was impossible for me to implement such a complex protocol safely within the given deadline. So, I settled on the current design, not only because it's a time saver, but also because it's been battle tested, and should provide the expected safety. And yes, I'm the author of Sshwifty who happened to Google the name of my software due to boredom, and landed on this page. For the record: I totally did not post questions to advertise my software. Hope this answer helps. Sorry for my poor English.
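One way to deploy the new public key to the server is ssh-copy-id (a hedged example of mine; the user and host are placeholders), after which the contents of ~/.ssh/my_server are what you paste into Sshwifty when connecting:
ssh-copy-id -i ~/.ssh/my_server.pub user@your-ssh-server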
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/537928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369116/" ] }
538,011
I'm learning sed's different commands and did some experiments. The command I'm trying is:
root:[~]# seq 7 | sed -n '1~2H; 2~2{G;p}'
2

1
4

1
3
6

1
3
5
root:[~]#
I analyzed the command and to me the last newline character after the number 5 should not exist. Below is my analysis. Based on my analysis, the output should be the cells with the red color background. As you can see, there is no last newline character. Where am I wrong? Thanks in advance.
p adds the newline character:
% printf 1 | sed 'p;s/1/2/'
1
2%
As can be seen, the 2 is printed without a trailing newline, but the 1 before it, from p, is.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8839/" ] }
538,045
I don't understand why the permissions do not change for the user (owner) when I run the chmod command with fakeroot . Initially, the file has these permissions:

-rwxr-xr-x a.txt*

When I change the permissions of the file using plain chmod it works fine:

chmod 111 a.txt
---x--x--x a.txt*

When I run it with fakeroot it doesn't seem to work correctly. It sets the permissions for group and other correctly, but not for the user: the read and write bits stay set, no matter what the first digit of the chmod mode is.

fakeroot chmod 111 a.txt
-rwx--x--x a.txt*

Am I missing something?
Fakeroot doesn't carry out all the file metadata changes, that's the point: it only pretends to the program that runs under it. Fakeroot does not carry out changes that it can't do, such as changing the owner. It also does not carry out changes that would cause failures down the line. For example, the following code succeeds when run as root, because root can always open files regardless of permissions:

chmod 111 a.txt
cp a.txt b.txt

But when run as a non-root user, cp fails because it can't read a.txt . To avoid this, chmod under fakeroot does not remove permissions from the user. Fakeroot does pretend to perform the change for the program it's running.

$ stat -c "Before: %A" a.txt; fakeroot sh -c 'chmod 111 a.txt; stat -c "In fakeroot: %A" a.txt'; stat -c "After: %A" a.txt
Before: -rwx--x--x
In fakeroot: ---x--x--x
After: -rwx--x--x

Generally speaking, file metadata changes done inside fakeroot aren't guaranteed to survive the fakeroot call. That's the point. Make a single fakeroot call that does both the metadata changes and whatever operations (such as packing an archive) you want to do with the changed metadata.
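As a small sketch of that last point (the file and archive names are made up for the example): the metadata changes and the archiving happen inside one fakeroot invocation, so the archive records the pretended root ownership and mode even though the real file keeps its owner and the user's read/write bits.

fakeroot sh -c '
  chown root:root a.txt
  chmod 111 a.txt
  tar cf archive.tar a.txt
'
tar tvf archive.tar    # lists a.txt as root/root with the pretended ---x--x--x mode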
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369220/" ] }
538,214
I have a text file with 61000 columns and 173 rows. I would like to merge data on every 2 columns i.e., columns 1 and 2 should be merged, 3 and 4 should be merged, 5 and 6 should be merged and so on.

Sample input (tab separated):

Ind Pop scaffold1 X scaffold1 X.1 scaffold3 X.2 scaffold4 X.3
a antartica 1 1 1 1 2 2 1 1
b antartica 1 1 1 1 2 1 1 2
c antartica 1 1 1 1 2 1 1 1
d antartica 1 1 1 1 2 1 1 2
e antartica 1 1 1 1 2 1 1 2
f arctic 1 1 1 1 2 1 1 1
g arctic 1 1 1 2 2 1 1 1
h arctic 1 1 1 1 2 1 1 1
I arctic 1 1 1 1 2 1 1 1
j arctic 1 1 1 1 2 1 1 1

desired output (tab separated):

Ind-Pop scaffold1-X scaffold2-X.1 scaffold3-X.2 scaffold4-X.3
a-antartica 1-1 1-1 2-2 1-1
b-antartica 1-1 1-1 2-1 1-2
c-antartica 1-1 1-1 2-1 1-1
d-antartica 1-1 1-1 2-1 1-2
e-antartica 1-1 1-1 2-1 1-2
f-arctic 1-1 1-1 2-1 1-1
g-arctic 1-1 1-2 2-1 1-1
h-arctic 1-1 1-1 2-1 1-1
I-arctic 1-1 1-1 2-1 1-1
j-arctic 1-1 1-1 2-1 1-1

I tried to do it with R using the unite function of the tidyr package. I was able to merge two columns at a time using the following command:

unite(df, newcol, c(scaffold1, X), remove=TRUE)

Not sure how to do it for multiple columns. Any R or perl or linux command line approaches will be appreciated!
sed -E 's/([^\t]+)\t([^\t]+)/\1-\2/g' Explanation sed -E 's/foo/bar/g' : run sed with -E extended regex, replacing foo with bar , multiple times per line /g . ([^\t]+)\t([^\t]+) : match a non-tab character [^\t] that is one or more characters long + , and capture this in a group ([^\t]+) . This is followed by a tab, then the non-tab characters again in another capturing group. \1-\2 : replace this with the first capturing group, - , then the second capturing group. Essentially, replace the tab with - . Why this works sed is "greedy", i.e. tries to grab as many characters as it can. Hence the two capturing groups will try to be as long as possible. e.g. it will grab all of a antartica (replacing it with a-antartica ). The next time the search is run, it has already passed by antartica , and starts searching again on the tab after this word. Hence, the next match will be 1 1 , which it will replace with 1-1 . This pattern will will then repeat for each pair of columns. The greedy + is important. If you omit it, the pattern will just modify every tab.
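A quick sanity check on a couple of fields from the sample header (a sketch assuming GNU sed, since \t in the expression is a GNU extension):

printf 'Ind\tPop\tscaffold1\tX\n' | sed -E 's/([^\t]+)\t([^\t]+)/\1-\2/g'
# prints: Ind-Pop<TAB>scaffold1-X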
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66134/" ] }
538,303
I have a script that goes like this:

ln /myfile /dev/${uniquename}/myfile

I want to remove the link of /dev/somename/myfile to decrease the link count. How do I do this?
TL;DR... just delete the file name you don't want (with rm ). If you create a hard link (which is what your command above is doing), you have two names pointing to the same area of storage. You can delete either name without affecting the other name or the storage - it's only when the last name is removed that the area of storage is released. Compare this to soft links... created with ln -s - there, the link is different, it's a pointer to the original name rather than a pointer to the storage. If you delete the original named file the soft links point to something that has been deleted, so the link remains but is broken.
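A small demonstration of the link count going up and back down (file names are just examples; %h needs GNU stat):

touch /tmp/myfile
ln /tmp/myfile /tmp/myfile-link     # hard link: both names point at the same inode
stat -c %h /tmp/myfile              # prints 2
rm /tmp/myfile-link                 # just delete the name you no longer want
stat -c %h /tmp/myfile              # prints 1; the data is untouched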
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/538303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318043/" ] }
538,397
I want a way to run a command randomly, say 1 out of 10 times. Is there a builtin or GNU coreutil to do this, ideally something like: chance 10 && do_stuff where do_stuff is only executed 1 in 10 times? I know I could write a script, but it seems like a fairly simple thing and I was wondering if there is a defined way.
In ksh , Bash, Zsh, Yash or BusyBox sh :

[ "$RANDOM" -lt 3277 ] && do_stuff

The RANDOM special variable of the Korn, Bash, Yash, Z and BusyBox shells produces a pseudo-random decimal integer value between 0 and 32767 every time it’s evaluated, so the above gives (close to) a one-in-ten chance. You can use this to produce a function which behaves as described in your question, at least in Bash:

function chance {
    [[ -z $1 || $1 -le 0 ]] && return 1
    [[ $RANDOM -lt $((32767 / $1 + 1)) ]]
}

Forgetting to provide an argument, or providing an invalid argument, will produce a result of 1, so chance && do_stuff will never do_stuff . This uses the general formula for “1 in n ” using $RANDOM , which is [[ $RANDOM -lt $((32767 / n + 1)) ]] , giving a (⎣32767 / n ⎦ + 1) in 32768 chance. Values of n which aren’t factors of 32768 introduce a bias because of the uneven split in the range of possible values.
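A rough empirical check of the function (purely illustrative; after defining chance as above in bash, the count will vary around 1000 for 10000 trials):

hits=0
for i in $(seq 10000); do chance 10 && hits=$((hits + 1)); done
echo "$hits"    # roughly 1000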
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/538397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226063/" ] }
538,409
I have installed Kali Linux through Linux Deploy on my Android phone. When I try to ssh into Kali Linux it asks for a password. If I give the password which was entered before installing Kali Linux, it says "Password incorrect". I can VNC with that password but can't connect over ssh and also can't get root access. When I type sudo it says:

sudo: PERM ROOT: setresuid(0,-1,-1): Permission denied
sudo: unable to initialize policy plugin

I also can't install any packages through apt-get install . I am new to Linux, kindly help!
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/538409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369517/" ] }
538,419
Usually, the possible file types in the output of the ls -l command are d and - , which represent directory and regular file respectively. Besides the above, I saw another type, l , in the output on macOS:

drwxr-xr-x  8 yongjia  staff  256 Aug 31 06:58 .
drwxr-xr-x  4 yongjia  staff  128 Aug 30 11:31 ..
lrwxr-xr-x  1 root     wheel    1 Aug 17 07:25 Macintosh HD -> /

So, how many possible file types are there in the output of the ls -l command?
The file types reported by ls depend on the capabilities of the underlying filesystem, the operating system, and on the specific implementation of ls . The l type is the common symbolic link file type. This is (ought to be) documented in your ls manual.

On OpenBSD (macOS and AIX have the same list, but in another order):

-    regular file
b    block special file
c    character special file
d    directory
l    symbolic link
p    FIFO
s    socket link

On NetBSD (FreeBSD has the same without a and A ):

-    Regular file.
a    Archive state 1.
A    Archive state 2.
b    Block special file.
c    Character special file.
d    Directory.
l    Symbolic link.
p    FIFO.
s    Socket link.
w    Whiteout.

From info ls (i.e. the GNU ls manual):

‘-’  regular file
‘b’  block special file
‘c’  character special file
‘C’  high performance (“contiguous data”) file
‘d’  directory
‘D’  door (Solaris 2.5 and up)
‘l’  symbolic link
‘M’  off-line (“migrated”) file (Cray DMF)
‘n’  network special file (HP-UX)
‘p’  FIFO (named pipe)
‘P’  port (Solaris 10 and up)
‘s’  socket
‘?’  some other file type

On Solaris 11:

d    The entry is a directory.
D    The entry is a door.
l    The entry is a symbolic link.
b    The entry is a block special file.
c    The entry is a character special file.
p    The entry is a FIFO (or “named pipe”) special file.
P    The entry is an event port.
s    The entry is an AF_UNIX address family socket.
-    The entry is an ordinary file.
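To see several of the common types side by side on a typical Linux box (the exact paths are only examples and may differ on your system):

mkfifo /tmp/myfifo
ln -s /etc/passwd /tmp/mylink
ls -ld /etc/passwd /tmp /dev/sda /dev/null /tmp/myfifo /tmp/mylink
# first character of each line: -  d  b  c  p  l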
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369527/" ] }
538,430
General question Assuming two directories with identical content are stored on different devices. Is there a way to calculate the size of the directories and always get the exact same number for both?In other words, is there such a thing as a "real size" of a directory irrespective of where it is stored? Practical example I transferred directories between two storage devices using rsync -ahP /dir1/ /dir2/ . After the transfer finished without errors, I checked the sizes of the directories on each device using du -s --apparent-size . For some directories I got the exact same number on both devices, but not for all of them. Specific questions Is it possible that rsync with the chosen options didn't produce an exact copy of the directory? If yes, would there be a way to get an exact copy? Does the used du command not give the "real size" of the directory irrespective of the storage device. If yes, would there be a way to calculate such a size?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369530/" ] }
538,482
When I had to log in to a site, Chrome used to offer me a number of possible user names and then the password associated with the user I chose (provided that I had saved the user-password association previously). Now Chrome still offers the correct possible users but then offers no password; I have to enter the password (if I remember it, that is, otherwise it's password resetting) and, after a successful login, Chrome asks for permission to save the password. I give permission, of course, but next time I log in it's again no password remembered. If I open the Settings->Passwords screen I see (the equivalent of)

Offer to save passwords [YES]
Auto Sign-in [YES]

and below

Saved Passwords
Saved passwords will appear here

that is, no password is really saved. Final consideration: the same page offers to "View and manage saved passwords in your Google Account", and if I go there I can see all my sites, my users and my passwords except, important exception!, that some of these passwords are stale. I'm on Debian Sid, Chrome is Version 76.0.3809.132 (Official Build) (64-bit) from Google's official .deb. What can I do to fix this annoying problem?

UPDATE: This is what I get when I launch Chrome from the shell:

$ google-chrome
[18891:18891:0906/221132.855189:ERROR:sandbox_linux.cc(369)] InitializeSandbox() called with multiple threads in process gpu-process.
[18856:18990:0906/221136.514201:ERROR:object_proxy.cc(619)] Failed to call method: org.freedesktop.Notifications.GetCapabilities: object_path= /org/freedesktop/Notifications: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Notifications was not provided by any .service files
[18891:18891:0906/221143.906640:ERROR:buffer_manager.cc(488)] [.DisplayCompositor]GL ERROR :GL_INVALID_OPERATION : glBufferData: <- error from previous GL command
[18856:18996:0906/221151.512723:ERROR:password_syncable_service.cc(191)] Passwords datatype error was encountered: Failed to get passwords from store.

In particular, Passwords datatype error was encountered: Failed to get passwords from store. seems relevant to my issue.
Sometimes a couple of login files get corrupted and stop google-chrome from saving the passwords. To fix it, close google-chrome, then change to the following directory in a terminal and remove the two files Login Data and Login Data-journal :

cd ~/.config/google-chrome/Default
rm 'Login Data'
rm 'Login Data-journal'

Now open Chrome, go to chrome://settings/passwords and see if the passwords have returned under 'Saved Passwords'. Try saving a password and check the same page to see if it is now being saved.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/538482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164535/" ] }
538,498
I'm using a Ubuntu Budgie 18.04. I've been using it for a year without any problems, but suddenly the wifi stopped working, and in the network settings, I get the message "No Wifi Adapter Found". The result of the iwconfig command is enp59s0 no wireless extensions.lo no wireless extensions. and the result of the lspci command tells me that I have a network controller: Intel Corporation Wireless-AC 9560[Jefferson Peak] (rev 10) . I tried some solutions I found but it doesn't work. Please, help me! Update : ifconfig output: enp59s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.189.125.212 netmask 255.255.0.0 broadcast 10.189.255.255 inet6 fe80::a47c:fa2:5210:181e prefixlen 64 scopeid 0x20<link> ether 54:bf:64:37:5d:ac txqueuelen 1000 (Ethernet) RX packets 126268 bytes 160092432 (160.0 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 47855 bytes 6451226 (6.4 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device interrupt 17 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 988 bytes 97858 (97.8 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 988 bytes 97858 (97.8 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 iw list output:NOTHING lshw -c network output: *-network description: Network controller product: Wireless-AC 9560 [Jefferson Peak] vendor: Intel Corporation physical id: 14.3 bus info: pci@0000:00:14.3 version: 10 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list configuration: driver=iwlwifi latency=0 resources: irq:16 memory:ed31c000-ed31ffff *-network description: Ethernet interface product: Killer E2400 Gigabit Ethernet Controller vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:3b:00.0 logical name: enp59s0 version: 10 serial: 54:bf:64:37:5d:ac size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm pciexpress msi msix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=alx duplex=full ip=10.189.125.212 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s resources: irq:17 memory:ed200000-ed23ffff ioport:3000(size=128) lsmod | grep iwlw output: iwlwifi 286720 1 iwlmvmcfg80211 622592 4 wl,iwlmvm,iwlwifi,mac80211 rfkill list output: 0: hci0: Bluetooth Soft blocked: no Hard blocked: no dmesg | grep iwl output [ 3.234002] iwlwifi 0000:00:14.3: enabling device (0000 -> 0002)[ 3.252973] iwlwifi 0000:00:14.3: loaded firmware version 34.3125811985.0 op_mode iwlmvm[ 3.314535] iwlwifi 0000:00:14.3: Detected Intel(R) Dual Band Wireless AC 9560, REV=0x318[ 3.360663] iwlwifi 0000:00:14.3: Microcode SW error detected. 
Restarting 0x0.[ 3.360668] iwlwifi 0000:00:14.3: Not valid error log pointer 0x00000000 for Init uCode[ 3.360825] iwlwifi 0000:00:14.3: SecBoot CPU1 Status: 0x3, CPU2 Status: 0x2459[ 3.360827] iwlwifi 0000:00:14.3: Failed to start INIT ucode: -5[ 3.372999] iwlwifi 0000:00:14.3: Failed to run INIT ucode: -5 sudo dmesg | grep iwl output after executing sudo rmmod iwlmvm && sudo modprobe iwlmvm : [ 3.255919] iwlwifi 0000:00:14.3: enabling device (0000 -> 0002)[ 3.273432] iwlwifi 0000:00:14.3: loaded firmware version 34.3125811985.0 op_mode iwlmvm[ 3.340146] iwlwifi 0000:00:14.3: Detected Intel(R) Dual Band Wireless AC 9560, REV=0x318[ 3.393635] iwlwifi 0000:00:14.3: base HW address: 34:e1:2d:c7:37:15[ 3.473579] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'[ 3.534582] iwlwifi 0000:00:14.3 wlp0s20f3: renamed from wlan0[ 6.643197] iwlwifi 0000:00:14.3: BIOS contains WGDS but no WRDS[ 989.877842] iwlwifi 0000:00:14.3: Detected Intel(R) Dual Band Wireless AC 9560, REV=0x318[ 989.934163] iwlwifi 0000:00:14.3: base HW address: 34:e1:2d:c7:37:15[ 990.001012] ieee80211 phy1: Selected rate control algorithm 'iwl-mvm-rs'[ 990.010264] iwlwifi 0000:00:14.3 wlp0s20f3: renamed from wlan0[ 990.250978] iwlwifi 0000:00:14.3: BIOS contains WGDS but no WRDS
Per request, here is the solution to fixing the Failed to start INIT ucode: -5 issue.

Solution

First, before you even move on to any steps having anything to do with the Linux kernel itself, make sure you have SecureBoot disabled in your BIOS. While SecureBoot was meant to be a security feature ensuring that all drivers are properly signed, from what I have seen it causes more problems than it solves in the Linux kernel, especially where network and graphics drivers are concerned. This will more often than not be the key to solving this issue, and your wifi driver will be loaded properly upon reboot.

Once inside your Linux distro (and this is a good situation where using the root account is actually appropriate), first ascertain whether your kernel can see your wireless controller. This first command will tell you whether the wireless card/controller can be seen as a device at all by your kernel (even if the driver initialization fails):

lshw -c network

while this one will tell you whether the system actually initialized it as a wireless device:

iw list

Now, in the case of the OP the first command did show the Intel AC 9560, while the second command had no output, telling us that the kernel a) can see the card but b) fails to initialize it. This tells us that the problem is more than likely related to the module/driver of the card.

Just to be safe, run sudo rfkill list and make sure your wifi device is unblocked, or just execute sudo rfkill unblock all to be sure that everything radio-related is unblocked.

If you have disabled SecureBoot in your BIOS, yet for some reason your wifi is still not loaded correctly on reboot, you can then run:

sudo rmmod iwlmvm && sudo modprobe iwlmvm

and the kernel will reload the module and initialize it properly; from then on it will work on every subsequent reboot. Why it often doesn't work right away upon the first reboot is a mystery to me, because as far as I know and have been taught, modules get freshly loaded upon every boot. It is also possible that simply rebooting twice might produce the same outcome as executing the above command.

Once you have a stable internet connection, update your kernel headers and microcode packages.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/538498", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369571/" ] }
538,679
I am trying to set a shell variable and pass it into a command as part of a shell script with the following code:

SLS_DEBUG_TEXT=""
if [ "$ENABLE_DEBUG_LOGGING" = "true" ]; then
    SLS_DEBUG_TEXT="SLS_DEBUG=* "
fi
$SLS_DEBUG_TEXT sls create_domain -v $AWS_PROFILE_FLAG --stage $STAGE   // gives error
#SLS_DEBUG=* sls create_domain -v $AWS_PROFILE_FLAG --stage $STAGE      // works correctly

The error I'm getting is:

./sls-deploy.sh: line 110: SLS_DEBUG_TEXT: command not found

The if statement is mostly irrelevant but I want to show that this variable needs to be set conditionally (i.e. I don't always want the debug logging enabled). The last line in the code snippet (which is commented out) is what the command should look like if debug logging is enabled and this works correctly. I have found a few previous posts on here and Stack Overflow where the issue was due to spaces between the equals sign when setting a variable ( example ) but I don't think I've fallen into that trap here. Thanks for your help.
$SLS_DEBUG_TEXT is expanded too late, after the stage where the shell would otherwise treat its value as an assignment. The variable's value is therefore instead treated as a command. What you could do instead is to use env :

SLS_DEBUG_TEXT='SLS_DEBUG=*'
env "$SLS_DEBUG_TEXT" sls create_domain -v "$AWS_PROFILE_FLAG" --stage "$STAGE"

Note the quoting around every single variable expansion . If SLS_DEBUG_TEXT may be empty or unset, the above would generate an error since env would try to execute a command with no name. To work around that, you may instead use

SLS_DEBUG_TEXT='SLS_DEBUG=*'
env ${SLS_DEBUG_TEXT:+"$SLS_DEBUG_TEXT"} sls create_domain -v "$AWS_PROFILE_FLAG" --stage "$STAGE"

The ${SLS_DEBUG_TEXT:+"$SLS_DEBUG_TEXT"} would expand to "$SLS_DEBUG_TEXT" if that variable is set and non-empty. Otherwise it would expand to nothing (not even an empty string, which would be the case with "$SLS_DEBUG_TEXT" ). Note too that if you have SLS_DEBUG_TEXT='SLS_DEBUG=* ' (as in your code), the space after the * would become part of the value of SLS_DEBUG in the sls process' environment. I don't know if this is intended or not.

I'm also noticing that the error message that you quote says

./sls-deploy.sh: line 110: SLS_DEBUG_TEXT: command not found

To me, this indicates that you are using the variable's name without $ in front of it at some point in the script. This may be totally unrelated to the main issue that you had. The code that you show would instead cause the error

./sls-deploy.sh: line NNN: SLS_DEBUG=*: command not found
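Another common pattern, offered only as a hedged alternative sketch and assuming the script runs under bash, is to collect the optional environment assignment in an array so that an empty value simply disappears from the command line:

sls_env=()
if [ "$ENABLE_DEBUG_LOGGING" = "true" ]; then
  sls_env=( "SLS_DEBUG=*" )
fi
env "${sls_env[@]}" sls create_domain -v "$AWS_PROFILE_FLAG" --stage "$STAGE"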
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184722/" ] }
538,686
I used to add static routes up until Debian 9 this way:

up route add -net 1.2.3.4.5/23 gw 2.3.4.5.6
up route add -host 2.3.4.5 gw 3.4.5.6

What changed for Debian 10 and what's the new syntax for static routes in the /etc/network/interfaces file?
The up ... lines are not stand-alone, but they are extensions of an iface ... line before them. Before Debian 9, the actual network interface used to pretty much always be the last entry in /etc/network/interfaces , so just adding up route add ... lines at the end might have actually worked pretty often.

If you chose to install a desktop environment, the installation is likely to include NetworkManager, and in that case, there may be no iface line for your network interface at all, allowing the interface to be controlled by NetworkManager instead. In that case, you could use one-time nmcli commands to persistently add new routes:

nmcli c modify eno1 +ipv4.routes "1.2.3.4/23 2.3.4.5"    # network route
nmcli c modify eno1 +ipv4.routes "2.3.4.5 3.4.5.6"       # host route

And if you don't use NetworkManager... the net-tools package that includes the old ifconfig and route commands has been deprecated since Debian 9, and is no longer installed by default. So unless you have explicitly chosen to install net-tools , you should use the newer ip route commands instead:

iface eno1 ...
    up /bin/ip route add 1.2.3.4/23 via 2.3.4.5    # network route
    up /bin/ip route add 2.3.4.5/32 via 3.4.5.6    # single host route
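Put together, a full stanza might look like the sketch below; the interface name, addresses and gateways are placeholders for your own values, and this assumes a static ifupdown configuration rather than NetworkManager:

auto eno1
iface eno1 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    up /bin/ip route add 198.51.100.0/23 via 192.0.2.1
    up /bin/ip route add 203.0.113.7/32 via 192.0.2.2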
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/538686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9607/" ] }
538,715
Ex: Input file

A<0>
A<1>
A_D2<2>
A_D2<3>
A<4>
A_D2<6>
A<9>
A_D2<10>
A<13>

Desired output:

A<0>
A<1>
A_D2<2>
A_D2<3>
A<4>
-----
A_D2<6>
-----
-----
A<9>
A_D2<10>
-----
-----
A<13>

Just care about the number in the angle brackets. If the number is not continuous, then add some symbol (or just add a newline) until the numbers continue again. In this case, numbers 5, 7, 8, 11 and 12 are missing. Can anyone solve this problem using an awk or sed (or even grep) command? I am a beginner in Linux. Please explain the details of the whole command line.
Using grep or sed for doing this would not be recommended as grep can't count and sed is really difficult to do any kind of arithmetics in (it would have to be regular expression-based counting, a non-starter for most people except for the dedicated ).

$ awk -F '[<>]' '{ while ($2 >= ++nr) print "---"; print }' file
A<0>
A<1>
A_D2<2>
A_D2<3>
A<4>
---
A_D2<6>
---
---
A<9>
A_D2<10>
---
---
A<13>

The awk code assumes that 0 should be the first number, and then maintains the wanted line number for the current line in the variable nr . If a number is read from the input that requires one or several lines to be inserted, this is done by the while loop (which also increments the nr variable). The number in <...> is parsed out by specifying that < and > should be used as field delimiters. The number is then in $2 (the 2nd field).
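If you want the placeholder to match the five-dash lines from your desired output, the fill string can be made a parameter (a small variation on the same idea):

awk -F '[<>]' -v pad='-----' '{ while ($2 >= ++nr) print pad; print }' file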
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/538715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369099/" ] }
538,844
From C, what's the easiest way to run a standard utility (e.g., ps) and no other? Does POSIX guarantee that, for example, a standard ps is in /bin/ps or should I reset the PATH environment variable to what I get with confstr(_CS_PATH, pathbuf, n); and then run the utility through PATH-search?
No, it doesn't, mainly for the reason that it doesn't require systems to conform by default, or to comply with only the POSIX standard (to the exclusion of any other standard). For instance, Solaris (a certified compliant system) chose backward compatibility for its utilities in /bin , which explains why those behave in arcane ways, and provides POSIX-compliant utilities in separate locations ( /usr/xpg4/bin , /usr/xpg6/bin ... for different versions of the XPG (now merged into POSIX) standard, those being actually part of optional components in Solaris).

Even sh is not guaranteed to be in /bin . On Solaris, /bin/sh used to be the Bourne shell (so not POSIX compliant) until Solaris 10, while it's now ksh93 in Solaris 11 (still not fully POSIX compliant, but in practice more so than /usr/xpg4/bin/sh ).

From C, you could use exec*p() and assume you're in a POSIX environment (in particular regarding the PATH environment variable). You could also set the PATH environment variable yourself:

#define _POSIX_C_SOURCE 200809L   /* before any #include */
...
confstr(_CS_PATH, buf, sizeof(buf));   /* maybe append the original
                                        * PATH if need be */
setenv("PATH", buf, 1);
exec*p("ps"...);

Or you could determine at build time the path of the POSIX utilities you want to run (bearing in mind that on some systems like GNU ones, you need more steps like setting a POSIXLY_CORRECT variable to ensure compliance). You could also try things like:

execlp("sh", "sh", "-c",
       "PATH=`getconf PATH`${PATH+:$PATH};export PATH;"
       "unset IFS;shift \"$1\";"
       "exec ${1+\"$@\"}",
       "2", "1", "ps", "-A"...);

in the hope that there's a sh in $PATH , that it is Bourne-like, that there's also a getconf and that it's the one for the version of POSIX you're interested in.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/538844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
539,122
looking for help diagnosing bluetooth mouse lag. I'm using a Logitech MX Anywhere 2, I've had it a few years now and it's worked well on a number of Linux distros. I recently installed Debian 10 and set it up to use Sid repos. In this environment, the mouse does not work as responsively as normal. I'm on a laptop, and the touchpad works perfectly smoothly, and a wired mouse is also perfectly smooth. What I get with the bluetooth mouse is as if the sampling rate is maybe once every 3 or 4 frames. I still have Pop!_OS installed which is based on Ubuntu 19.04, the mouse works as expected in that environment. Forgetting the mouse and re-adding it offers no change to behaviour, same with reboots. I've updated to the latest state of the repos, no dice. I've also tried switching from Wayland to Xorg with no effect. My best guess would be that it's down to the iwlwifi module (it's a Lenovo Yoga 900 with an Intel Core i7 6560U with integrated Intel Wireless 8260), but no idea where to go from here. Cheers!
Solution from reddit from @ashughes in an above comment - https://www.reddit.com/r/linuxquestions/comments/bc15f8/bluetooth_mouse_is_laggy_very_limited_pollrate/ez3ufhs/

sudo nano /var/lib/bluetooth/xx\:xx\:xx\:xx\:xx\:xx/yy\:yy\:yy\:yy\:yy\:yy/info

where xx:xx... is the PC's bluetooth address and yy:yy... is the mouse's bluetooth address. In the file, I added this section at the end:

[ConnectionParameters]
MinInterval=6
MaxInterval=7
Latency=0
Timeout=216

You may also need to reconnect the mouse. I also tracked this proposal on an Ubuntu bug here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1824559?comments=all
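If you're not sure which addresses to use in that path, a hedged sketch using bluetoothctl (on recent BlueZ versions these subcommands work non-interactively):

bluetoothctl list            # "Controller xx:xx:xx:xx:xx:xx ..." — the PC's adapter address
bluetoothctl devices         # "Device yy:yy:yy:yy:yy:yy ..." — the paired mouse's address
sudo ls /var/lib/bluetooth/  # the controller address appears as a directory name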
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370204/" ] }
539,128
I log all my shell activity using script , but am worried that there are quite a few passwords and other secret stuff in these log files. Does anybody know of any way of logging the shell sessions in an encrypted manner? Are there maybe some similar utilities to script that integrates with GPG?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539128", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111812/" ] }
539,131
After cloning a 1TB hdd "TO" a 2TB hdd, in which the 2TB hdd is now only 1TB... The following format will not allow me to resize it properly, to its max capacity. Seems as though it won't get resized to a bigger size than it is now. FORMAT USED: partedresizepartpartition #size 2000000MB (OR) 2GB "sectors exceed msdos parition max..." It says that it 'exceeds' the max by almost 1GB. UPDATE : after repartitioning it at a slightly larger size than it was, I can no longer mount it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368978/" ] }
539,203
I have a text file, and I want to extract the string from each line coming after "OS=" .

Input file lines:

A0A0A9PBI3_ARUDO Uncharacterized protein OS=Arundo donax OX=35708 PE=4 SV=1
K3Y356_SETIT ATP-dependent DNA helicase OS=Setaria italica OX=4555 PE=3 SV=1

Output desired:

OS=Arundo donax
OS=Setaria italica

OR

Arundo donax
Setaria italica
Use GNU grep (or compatible) with extended regex:

grep -Eo "OS=\w+ \w+" file

or basic regex (you need to escape the + ):

grep -o "OS=\w\+ \w\+" file
# or
grep -o "OS=\w* \w*" file

To get everything from OS= up to OX= you can use grep with perl-compatible regex (PCRE, the -P option) if available and make a lookahead:

grep -Po "OS=.*(?=OX=)" file

# to also leave out "OS=", use a lookbehind
grep -Po "(?<=OS=).*(?=OX=)" file
# or Keep-out \K
grep -Po "OS=\K.*(?=OX=)" file

Or use grep including OX= and remove it with sed afterwards:

grep -o "OS=.*\( OX=\)" file | sed 's/ OX=$//'

Output:

OS=Arundo donax
OS=Setaria italica
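For the second desired output (without the OS= prefix), a portable sed alternative that doesn't need PCRE support might look like this (a sketch, assuming every line contains both OS= and OX= as in the sample):

sed -n 's/.*OS=\(.*\) OX=.*/\1/p' file
# Arundo donax
# Setaria italica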
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370260/" ] }
539,257
I was travelling by train with german railroad company "deutsche Bahn" and wanted to use their provided onboard wlan. I connected to their wlan, and wanted to visit a website in my browser, but redirection to their captive portal was not working. I was also connected with my mobile and everything was working as it should, so I assume that there was no problem with their wlan. I have a HP elitebook 840G5 running Manjaro Linux Gnome edition and docker installed. I was wondering how to figure out what was wrong and how to solve this issue? After investigating a bit I found the solution by myself, but I wanted to share my solution to help others running into the same problem.
This issue occurred because the WLAN of the ICE train was using the same subnet as Docker on my machine: 172.18.x.x . This is also outlined here (unfortunately only in German). I fixed this by defining a new default IP range for Docker, creating /etc/docker/daemon.json :

{
  "default-address-pools": [
    {"base": "172.19.0.0/16", "size": 24}
  ]
}

After this I restarted the Docker daemon: sudo systemctl restart docker.service . Afterwards I was able to access the internet (with proper redirection to the captive portal).
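To confirm this kind of conflict on your own machine, you can compare the address the captive-portal network hands out with Docker's bridge subnets (standard iproute2 and Docker CLI commands; network names may differ):

ip -4 addr show                               # what subnet did the wifi give you?
ip route                                      # look for overlapping 172.x routes
docker network inspect bridge | grep Subnet   # Docker's default bridge subnet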
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/367638/" ] }
539,279
I have a text file with newline delimited strings. My problem is to process each line as follows: shuffle the order of tokens by using space as a delimiter. For example: Input: A B C Output: C A B Running the command/script repeatedly should of course provide different order. My current solution (for a single text line): $ cat <file> | tr " " "\n" | shuf | tr "\n" " " Is there a nice (a better) command line combo to process a text file with multiple lines?
POSIXly, you could do it with awk relatively efficiently (certainly more efficiently than running at least one GNU shuf utility for each line of the input) as:

awk '
  BEGIN {srand()}
  {
    for (i = 1; i <= NF; i++) {
      r = int(rand() * NF) + 1
      x = $r; $r = $i; $i = x
    }
    print
  }' < your-file

(note that in most awk implementations, running that same command twice in the same second is likely to give you the same result as the default random seed as used with srand() is generally based on the current epoch time in seconds).
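If you'd rather stay close to your original shuf pipeline, a simple (though much slower, one shuf per line) loop works too; a sketch assuming bash and GNU shuf:

while IFS= read -r line; do
  printf '%s\n' "$line" | tr ' ' '\n' | shuf | paste -sd ' ' -
done < file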
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104493/" ] }
539,404
I have a folder containing files of the format file(0).jpg to file(100).jpg . I can't directly use convert to generate a PDF here because the order ends up being 0,1,100,2,20,21,... echo *.jpg(n) gives the correct order of files. How do I pipe this into convert? I have tried echo *.jpg(n) | convert - out.pdf
If the command that you'd like to execute is convert 'file(0).jpg' 'file(1).jpg' ...etc... 'file(100).jpg' out.pdf then either use your glob, convert ./*.jpg(n) out.pdf or to only include files in 0..100 range that match that pattern: convert 'file('<0-100>').jpg'(n) out.pdf or you could use a brace expansion: convert 'file('{0..100}').jpg' out.pdf Though note that it's not globbing , the strings file(0).jpg through to file(100).jpg will be passed to convert regardless of whether these are names of existing files or not. Contrary to the previous one, it would also miss files named file(012).jpg for instance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370128/" ] }
539,502
I know that when a page cache page is modified, it is marked dirty and requires a writeback, but what happens when: Scenario: The file /apps/EXE, which is an executable file, is paged in to the page cache completely (all of its pages are in cache/memory) and being executed by process P Continuous release then replaces /apps/EXE with a brand new executable. Assumption 1: I assume that process P (and anyone else with a file descriptor referencing the old executable) will continue to use the old, in memory /apps/EXE without an issue, and any new process which tries to exec that path will get the new executable. Assumption 2: I assume that if not all pages of the file are mapped into memory, that things will be fine until there is a page fault requiring pages from the file that have been replaced, and probably a segfault will occur? Question 1: If you mlock all of the pages of the file with something like vmtouch does that change the scenario at all? Question 2: If /apps/EXE is on a remote NFS, would that make any difference? (I assume not) Please correct or validate my 2 assumptions and answer my 2 questions. Let's assume this is a CentOS 7.6 box with some kind of 3.10.0-957.el7 kernel Update:Thinking about it further, I wonder if this scenario is no different than any other dirty page scenario.. I suppose the process that writes the new binary will do a read and get all cache pages since it’s all paged in, and then all those pages will be marked dirty. If they are mlocked, they will just be useless pages occupying core memory after the ref count goes to zero. I suspect when the currently-executing programs end, anything else will use the new binary. Assuming that’s all correct, I guess it’s only interesting when only some of the file is paged in.
Continuous release then replaces /apps/EXE with a brand new executable. This is the important part. The way a new file is released is by creating a new file (e.g. /apps/EXE.tmp.20190907080000 ), writing the contents, setting permissions and ownership and finally rename(2)ing it to the final name /apps/EXE , replacing the old file. The result is that the new file has a new inode number (which means, in effect, it's a different file.) And the old file had its own inode number, which is actually still around even though the file name is not pointing to it anymore (or there are no file names pointing to that inode anymore.) So, the key here is that when we talk about "files" in Linux, we're most often really talking about "inodes" since once a file has been opened, the inode is the reference we keep to the file. Assumption 1 : I assume that process P (and anyone else with a file descriptor referencing the old executable) will continue to use the old, in memory /apps/EXE without an issue, and any new process which tries to exec that path will get the new executable. Correct. Assumption 2 : I assume that if not all pages of the file are mapped into memory, that things will be fine until there is a page fault requiring pages from the file that have been replaced, and probably a segfault will occur? Incorrect. The old inode is still around, so page faults from the process using the old binary will still be able to find those pages on disk. You can see some effects of this by looking at the /proc/${pid}/exe symlink (or, equivalently, lsof output) for the process running the old binary, which will show /app/EXE (deleted) to indicate the name is no longer there but the inode is still around. You can also see that the diskspace used by the binary will only be released after the process dies (assuming it's the only process with that inode open.) Check output of df before and after killing the process, you'll see it drop by the size of that old binary you thought wasn't around anymore. BTW, this is not only with binaries, but with any open files. If you open a file in a process and remove the file, the file will be kept on disk until that process closes the file (or dies.) Similarly to how hardlinks keep a counter of how many names point to an inode in disk, the filesystem driver (in the Linux kernel) keeps a counter of how many references exist to that inode in memory , and will only release the inode from disk once all references from the running system have been released as well. Question 1 : If you mlock all of the pages of the file with something like vmtouch does that change the scenario This question is based on the incorrect assumption 2 that not locking the pages will cause segfaults. It won't. Question 2 : If /apps/EXE is on a remote NFS, would that make any difference? (I assume not) It's meant to work the same way and most of the time it does, but there are some "gotchas" with NFS. Sometimes you can see the artifacts of deleting a file that's still open in NFS (shows up as a hidden file in that directory.) You also have some way to assign device numbers to NFS exports, to make sure those won't get "reshuffled" when the NFS server reboots. But the main idea is the same. NFS client driver still uses inodes and will try to keep files around (on the server) while the inode is still referenced.
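You can watch this happen with a small, hypothetical demo (the /tmp paths are made up for the example):

cp /bin/sleep /tmp/myexe
/tmp/myexe 300 &            # start a process running the copy
rm /tmp/myexe               # remove/"replace" the file while it runs
ls -l /proc/$!/exe          # -> ... /tmp/myexe (deleted)
df /tmp                     # the space is only freed once the process exits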
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/539502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
539,508
EDIT: attached is the fdisk output! My system has about 300 gigs of unallocated space which you cant see in the terminal output. Using KDE Partition Manager on Ubuntu, I 'shredded' my windows partition which was taking up the majority of my hard drive which I really need now. Unfortunately, I am unable to move or resize any of these partitions according to KDE Partition manager. can someone please help me move all that unallocated memory into my linux partition? I have read a couple tutorials but I don't know enough about the core concepts to actually try it on my particular system setup! Thanks so much!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/539508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370583/" ] }
539,516
I noticed that lvm2 objects use mixed case UUIDs: ~ # lvdisplay (...) LV UUID yD0FAx-1nHj-O8vV-qNyI-k1RA-hZsj-UF439H~ # pvdisplay (...) PV UUID mXOay3-gT0A-3eVM-5nVD-RI2q-D6A9-j2o04v Is there a particular reason for that, taken into account that the standard (see 6.5.4) explicitly discourages such a use (emphasis mine)? NOTE – It is recommended that the hexadecimal representation used in all human-readable formats be restricted to lower-case letters . Software processing this representation is, however, required to accept both upper and lower case letters as specified in 6.5.2.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/539516", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64753/" ] }
539,521
What command lists all files beginning with "a" and having 1 or 2 digits that follow? (The output might display a0, a1, a2, a3, a00, a01, a99,... but not a333, not b12, not art53,...)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/539521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370600/" ] }
539,615
Well, I know there are similar questions out there but none of their answers have actually solved my problem. I am running a Debian 10 ( uname -a: Linux msi-debian 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2 (2019-08-28) x86_64 GNU/Linux ) The thing is I haven't been able to make speakers play a sound, even though microphone and headphones work. On the other hand, while speakers are silent they are detected by the system. This is the info about my computer: $ inxi -FxSystem: Host: msi-debian Kernel: 4.19.0-6-amd64 x86_64 bits: 64 compiler: gcc v: 8.3.0 Desktop: Gnome 3.30.2 Distro: Debian GNU/Linux 10 (buster) Machine: Type: Laptop System: Micro-Star product: GL63 8SE v: REV:1.0 serial: <root required> Mobo: Micro-Star model: MS-16P7 v: REV:1.0 serial: <root required> UEFI: American Megatrends v: E16P7IMS.105 date: 11/26/2018 Battery: ID-1: BAT1 charge: 34.1 Wh condition: 49.6/53.4 Wh (93%) model: MSI BIF0_9 status: Discharging CPU: Topology: 6-Core model: Intel Core i7-8750H bits: 64 type: MT MCP arch: Kaby Lake rev: A L2 cache: 9216 KiB flags: lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 52992 Speed: 800 MHz min/max: 800/4100 MHz Core speeds (MHz): 1: 800 2: 800 3: 801 4: 800 5: 800 6: 800 7: 800 8: 800 9: 800 10: 800 11: 800 12: 800 Graphics: Device-1: Intel UHD Graphics 630 vendor: Micro-Star MSI driver: i915 v: kernel bus ID: 00:02.0 Device-2: NVIDIA TU106M [GeForce RTX 2060 Mobile] vendor: Micro-Star MSI driver: nvidia v: 418.74 bus ID: 01:00.0 Display: x11 server: X.Org 1.20.4 driver: modesetting,nouveau,nvidia unloaded: fbdev,vesa resolution: 1920x1080~120Hz OpenGL: renderer: Mesa DRI Intel UHD Graphics 630 (Coffeelake 3x8 GT2) v: 4.5 Mesa 18.3.6 direct render: Yes Audio: Device-1: Intel Cannon Lake PCH cAVS vendor: Micro-Star MSI driver: snd_hda_intel v: kernel bus ID: 00:1f.3 Device-2: NVIDIA TU106 High Definition Audio vendor: Micro-Star MSI driver: snd_hda_intel v: kernel bus ID: 01:00.1 Sound Server: ALSA v: k4.19.0-6-amd64 Network: Device-1: Intel Wireless-AC 9560 [Jefferson Peak] driver: iwlwifi v: kernel port: 6000 bus ID: 00:14.3 IF: wlo1 state: up mac: 48:a4:72:bd:e7:b4 Device-2: Qualcomm Atheros Killer E2400 Gigabit Ethernet vendor: Micro-Star MSI driver: alx v: kernel port: 3000 bus ID: 03:00.0 IF: enp3s0 state: down mac: 00:d8:61:05:39:c3 IF-ID-1: docker0 state: down mac: 02:42:fc:9c:14:12 Drives: Local Storage: total: 1.14 TiB used: 6.68 GiB (0.6%) ID-1: /dev/nvme0n1 vendor: Kingston model: RBUSNS8154P3256GJ size: 238.47 GiB ID-2: /dev/sda vendor: Seagate model: ST1000LM049-2GH172 size: 931.51 GiB Partition: ID-1: / size: 91.17 GiB used: 5.95 GiB (6.5%) fs: ext4 dev: /dev/sda4 ID-2: /home size: 137.24 GiB used: 749.6 MiB (0.5%) fs: ext4 dev: /dev/sda5 ID-3: swap-1 size: 15.82 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda6 Sensors: System Temperatures: cpu: 49.0 C mobo: N/A Fan Speeds (RPM): N/A Info: Processes: 297 Uptime: 1h 58m Memory: 15.51 GiB used: 3.01 GiB (19.4%) Init: systemd runlevel: 5 Compilers: gcc: 8.3.0 Shell: bash v: 5.0.3 inxi: 3.0.32 The info in pulseaudio is: $ pacmd list-cards2 card(s) available. 
index: 0 name: <alsa_card.pci-0000_01_00.1> driver: <module-alsa-card.c> owner module: 6 properties: alsa.card = "1" alsa.card_name = "HDA NVidia" alsa.long_card_name = "HDA NVidia at 0xa5080000 irq 17" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:01:00.1" sysfs.path = "/devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1" device.bus = "pci" device.vendor.id = "10de" device.vendor.name = "NVIDIA Corporation" device.product.id = "10f9" device.product.name = "TU106 High Definition Audio Controller" device.string = "1" device.description = "TU106 High Definition Audio Controller" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" profiles: output:hdmi-stereo: Digital Stereo (HDMI) Output (priority 5900, available: no) output:hdmi-surround: Digital Surround 5.1 (HDMI) Output (priority 800, available: no) output:hdmi-surround71: Digital Surround 7.1 (HDMI) Output (priority 800, available: no) output:hdmi-stereo-extra1: Digital Stereo (HDMI 2) Output (priority 5700, available: no) output:hdmi-surround-extra1: Digital Surround 5.1 (HDMI 2) Output (priority 600, available: no) output:hdmi-surround71-extra1: Digital Surround 7.1 (HDMI 2) Output (priority 600, available: no) off: Inactiu (priority 0, available: unknown) active profile: <off> ports: hdmi-output-0: HDMI / DisplayPort (priority 5900, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" hdmi-output-1: HDMI / DisplayPort 2 (priority 5800, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" index: 1 name: <alsa_card.pci-0000_00_1f.3> driver: <module-alsa-card.c> owner module: 7 properties: alsa.card = "0" alsa.card_name = "HDA Intel PCH" alsa.long_card_name = "HDA Intel PCH at 0xa5410000 irq 154" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1f.3" sysfs.path = "/devices/pci0000:00/0000:00:1f.3/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.id = "a348" device.product.name = "Cannon Lake PCH cAVS" device.form_factor = "internal" device.string = "0" device.description = "Audio intern" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" profiles: input:analog-stereo: Estèreo analògic Input (priority 65, available: unknown) output:analog-stereo: Estèreo analògic Output (priority 6500, available: unknown) output:analog-stereo+input:analog-stereo: Analog Stereo Duplex (priority 6565, available: unknown) output:iec958-stereo: Estèreo digital (IEC958) Output (priority 5500, available: unknown) output:iec958-stereo+input:analog-stereo: Estèreo digital (IEC958) Output + Estèreo analògic Input (priority 5565, available: unknown) output:iec958-ac3-surround-51: Envolvent digital 5.1 (IEC958/AC3) Output (priority 300, available: no) output:iec958-ac3-surround-51+input:analog-stereo: Envolvent digital 5.1 (IEC958/AC3) Output + Estèreo analògic Input (priority 365, available: unknown) output:hdmi-stereo: Digital Stereo (HDMI) Output (priority 5900, available: no) output:hdmi-stereo+input:analog-stereo: Digital Stereo (HDMI) Output + Estèreo analògic Input (priority 5965, available: unknown) output:hdmi-surround: Digital Surround 5.1 (HDMI) Output (priority 800, available: no) output:hdmi-surround+input:analog-stereo: Digital Surround 5.1 (HDMI) Output + Estèreo analògic Input (priority 865, available: unknown) output:hdmi-surround71: Digital Surround 7.1 (HDMI) Output (priority 800, available: no) output:hdmi-surround71+input:analog-stereo: 
Digital Surround 7.1 (HDMI) Output + Estèreo analògic Input (priority 865, available: unknown) output:hdmi-stereo-extra1: Digital Stereo (HDMI 2) Output (priority 5700, available: no) output:hdmi-stereo-extra1+input:analog-stereo: Digital Stereo (HDMI 2) Output + Estèreo analògic Input (priority 5765, available: unknown) output:hdmi-surround-extra1: Digital Surround 5.1 (HDMI 2) Output (priority 600, available: no) output:hdmi-surround-extra1+input:analog-stereo: Digital Surround 5.1 (HDMI 2) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-surround71-extra1: Digital Surround 7.1 (HDMI 2) Output (priority 600, available: no) output:hdmi-surround71-extra1+input:analog-stereo: Digital Surround 7.1 (HDMI 2) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-stereo-extra2: Digital Stereo (HDMI 3) Output (priority 5700, available: no) output:hdmi-stereo-extra2+input:analog-stereo: Digital Stereo (HDMI 3) Output + Estèreo analògic Input (priority 5765, available: unknown) output:hdmi-surround-extra2: Digital Surround 5.1 (HDMI 3) Output (priority 600, available: no) output:hdmi-surround-extra2+input:analog-stereo: Digital Surround 5.1 (HDMI 3) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-surround71-extra2: Digital Surround 7.1 (HDMI 3) Output (priority 600, available: no) output:hdmi-surround71-extra2+input:analog-stereo: Digital Surround 7.1 (HDMI 3) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-stereo-extra3: Digital Stereo (HDMI 4) Output (priority 5700, available: no) output:hdmi-stereo-extra3+input:analog-stereo: Digital Stereo (HDMI 4) Output + Estèreo analògic Input (priority 5765, available: unknown) output:hdmi-surround-extra3: Digital Surround 5.1 (HDMI 4) Output (priority 600, available: no) output:hdmi-surround-extra3+input:analog-stereo: Digital Surround 5.1 (HDMI 4) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-surround71-extra3: Digital Surround 7.1 (HDMI 4) Output (priority 600, available: no) output:hdmi-surround71-extra3+input:analog-stereo: Digital Surround 7.1 (HDMI 4) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-stereo-extra4: Digital Stereo (HDMI 5) Output (priority 5700, available: no) output:hdmi-stereo-extra4+input:analog-stereo: Digital Stereo (HDMI 5) Output + Estèreo analògic Input (priority 5765, available: unknown) output:hdmi-surround-extra4: Digital Surround 5.1 (HDMI 5) Output (priority 600, available: no) output:hdmi-surround-extra4+input:analog-stereo: Digital Surround 5.1 (HDMI 5) Output + Estèreo analògic Input (priority 665, available: unknown) output:hdmi-surround71-extra4: Digital Surround 7.1 (HDMI 5) Output (priority 600, available: no) output:hdmi-surround71-extra4+input:analog-stereo: Digital Surround 7.1 (HDMI 5) Output + Estèreo analògic Input (priority 665, available: unknown) off: Inactiu (priority 0, available: unknown) active profile: <output:analog-stereo+input:analog-stereo> sinks: alsa_output.pci-0000_00_1f.3.analog-stereo/#35: Audio intern Estèreo analògic sources: alsa_output.pci-0000_00_1f.3.analog-stereo.monitor/#43: Monitor of Audio intern Estèreo analògic alsa_input.pci-0000_00_1f.3.analog-stereo/#44: Audio intern Estèreo analògic ports: analog-input-internal-mic: Internal Microphone (priority 8900, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-input-microphone" analog-input-mic: Microphone (priority 8700, latency 
offset 0 usec, available: no) properties: device.icon_name = "audio-input-microphone" analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-speakers" analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: no) properties: device.icon_name = "audio-headphones" iec958-stereo-output: Digital Output (S/PDIF) (priority 0, latency offset 0 usec, available: unknown) properties: hdmi-output-0: HDMI / DisplayPort (priority 5900, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" hdmi-output-1: HDMI / DisplayPort 2 (priority 5800, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" hdmi-output-2: HDMI / DisplayPort 3 (priority 5700, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" hdmi-output-3: HDMI / DisplayPort 4 (priority 5600, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" hdmi-output-4: HDMI / DisplayPort 5 (priority 5500, latency offset 0 usec, available: no) properties: device.icon_name = "video-display" I have tried every possible profile combination through pacmd set-profile-card and I have no idea what to do. Moreover, through alsamixer I have checked that no speakers have been muted (or I thing none of them are muted): Update: $ pacmd list-sinks1 sink(s) available. * index: 0 name: <alsa_output.pci-0000_00_1f.3.analog-stereo> driver: <module-alsa-card.c> flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY FLAT_VOLUME DYNAMIC_LATENCY state: SUSPENDED suspend cause: IDLE priority: 9039 volume: front-left: 65536 / 100% / 0,00 dB, front-right: 65536 / 100% / 0,00 dB balance 0,00 base volume: 65536 / 100% / 0,00 dB volume steps: 65537 muted: no current latency: 0,00 ms max request: 0 KiB max rewind: 0 KiB monitor source: 0 sample spec: s16le 2ch 44100Hz channel map: front-left,front-right Estèreo used by: 0 linked by: 0 configured latency: 0,00 ms; range is 0,50 .. 
2000,00 ms card: 1 <alsa_card.pci-0000_00_1f.3> module: 7 properties: alsa.resolution_bits = "16" device.api = "alsa" device.class = "sound" alsa.class = "generic" alsa.subclass = "generic-mix" alsa.name = "ALC1220 Analog" alsa.id = "ALC1220 Analog" alsa.subdevice = "0" alsa.subdevice_name = "subdevice #0" alsa.device = "0" alsa.card = "0" alsa.card_name = "HDA Intel PCH" alsa.long_card_name = "HDA Intel PCH at 0xa5410000 irq 154" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1f.3" sysfs.path = "/devices/pci0000:00/0000:00:1f.3/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.id = "a348" device.product.name = "Cannon Lake PCH cAVS" device.form_factor = "internal" device.string = "front:0" device.buffering.buffer_size = "352800" device.buffering.fragment_size = "176400" device.access_mode = "mmap+timer" device.profile.name = "analog-stereo" device.profile.description = "Estèreo analògic" device.description = "Audio intern Estèreo analògic" alsa.mixer_name = "Realtek ALC1220" alsa.components = "HDA:10ec1220,14621275,00100101 HDA:8086280b,80860101,00100000" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" ports: analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-speakers" analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: no) properties: device.icon_name = "audio-headphones" active port: <analog-output-speaker> Update 2: Output of alsamixer -c 0
Looks like it might be the same bug that is known in Ubuntu also: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/1812693

This wiki has a suggestion for a workaround: https://blog.kafaiworks.com/posts/arch-linux-audio-setup-on-msi-gp63/

The workaround is to edit the /usr/share/pulseaudio/alsa-mixer/paths/analog-output-speaker.conf file like this:

[Element Headphone]
switch = off
volume = merge
override-map.1 = all
override-map.2 = all-left,all-right

[Element Speaker]
required-any = any
switch = mute
volume = off

and then to restart pulseaudio (in Debian 10 it's implemented as a systemd per-user service):

systemctl --user restart pulseaudio.service

If I understand the workaround correctly, it looks like the volume/mute controls for the front speakers and headphones may be somehow miswired/cross-connected. The hdajackretask tool in package alsa-tools-gui might be helpful too: if you can find working override settings for your laptop model, you should probably contact the Linux audio driver developers to report your findings, so the override can be made to apply automatically to that particular system model.

MSI GL73 apparently uses the same ALC1220 sound codec and also needed the same fix in sound routing as Clevo P950. As a wild guess, you might try adding a file named /etc/modprobe.d/sound-fixup.conf with the following content:

options snd-hda-intel model=clevo-p950

Then run update-initramfs -u as root to make sure the change will be effective in early boot also, then reboot and see if it results in an improvement. If MSI has wired your model the same as the GL73, this might fix it. If not, delete the /etc/modprobe.d/sound-fixup.conf file and run update-initramfs -u again to fully get rid of the option.
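Before trying codec-specific overrides, it can help to confirm which codec the driver actually detected. A hedged sketch using the HDA proc interface (the card number may differ on other machines):

grep -i '^Codec' /proc/asound/card0/codec#*
# e.g. Codec: Realtek ALC1220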
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370683/" ] }
539,698
I'm trying to check whether the file name of the script I run ends with a number or not:

#!/bin/sh
name=$(basename "$0" .sh)
[ $name =~ ^.[0-9]$ ] && numb=$(echo $name | sed 's/[^0-9]*//g') || numb=1
echo $numb

My shell file is named mh03.sh and this is the output if I run it:

$ ./mh3.sh
./mh3.sh: 3: [: mh3: unexpected operator
1

Can someone tell me why I get this error and how I can fix it?
The regex match operator =~ is not supported in the single square brackets. You need double square brackets for it to work. [[ $name =~ ^.[0-9]$ ]] Note that you don't need a regex, you can use a normal pattern: [[ $name = *[0-9] ]] or, if you need the name to contain something before the digit, [[ $name = *?[0-9] ]]
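One caveat, since the script in the question starts with #!/bin/sh : [[ ... ]] is a bash/ksh/zsh feature, so the shebang would also need to change to bash (the "unexpected operator" message is typical of dash's [ builtin). If the script has to stay plain POSIX sh, a case statement does the same job without any regex; this is a sketch of that approach:

#!/bin/sh
name=$(basename "$0" .sh)
case $name in
    *[0-9]) numb=$(printf '%s\n' "$name" | sed 's/[^0-9]*//g') ;;
    *)      numb=1 ;;
esac
echo "$numb"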
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/539698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194605/" ] }
539,823
I did some research; I can split the string by '.', but the only thing I want to replace is the third '.', and the version number is variable, so how can I do the replace/split?

version: 1.8.0.110

but what I want is output like this:

version: 1.8.0-110
Use sed , for example:

$ echo 'version: 1.8.0.110' | sed 's/\./-/3'
version: 1.8.0-110

Explanation: sed s/search/replace/x searches for a string and replaces it with another string. x determines which occurrence to replace - here the 3rd. Often g is used for x to mean all occurrences. Here we wish to replace the dot . , but this is a special character in the regular expression sed expects in the search term. Therefore we backslash the . to \. to specify a literal . . Since we use special characters in the argument to sed (here, the backslash \ ), we need to put the whole argument in single quotes '' . Many people always use quotes here so as not to run into problems when using characters that might be special to the shell (like space ).
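If the value is already in a shell variable, this particular case (replace the last dot) can also be done with plain POSIX parameter expansion and no external tool; ver here is just an example variable name:

ver="1.8.0.110"
printf '%s\n' "${ver%.*}-${ver##*.}"    # 1.8.0-110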
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/539823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370424/" ] }
539,828
I try to set up SNAT with firewalld on my CentOS-7-Router like described here , with additions from Karl Rupps explanation , but I end up like Eric . I also read some other documentation, but I am not able to get it to work, so that my client-IP is translated into another source IP. Both firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -p tcp -o enp1s0 -d 192.168.15.105 -j SNAT --to-source 192.168.25.121 or firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -p tcp -s 192.168.15.105/32 -j SNAT --to-source 192.168.25.121 gives a "success". I do a firewall-cmd --reload afterwards.But if I try to examine the table with iptables -t nat -nvL POSTROUTING the rule is not listed. But if I apply one of the above rules again, firewalld warns me with e.g. Warning: ALREADY_ENABLED: rule '['-p', 'tcp', '-o', 'enp1s0', '-d', '192.168.15.105', '-j', 'SNAT', '--to-source', '192.168.25.121']' already is in 'ipv4:nat:POSTROUTING' - but no SNAT-functionality for the source-ip 192.168.15.105 to be masqueraded as 192.168.45.121 is working. Maybe someone can explain me what I am doing wrong? After hours of struggling, I still am hanging on DNAT/SNAT.I now use only iptables with: 1.) iptables -t nat -A PREROUTING -p tcp --dport 1433 -i enp1s0 -d 192.168.25.121 -j DNAT --to-destination 192.168.15.105 and 2.) iptables -t nat -A POSTROUTING -p tcp --sport 1433 -o enp1s0 -s 192.168.15.105/32 -j SNAT --to-source 192.168.25.121 so iptables -t nat -nvL PREROUTING shows: pkts bytes target prot opt in out source destination 129 12089 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 129 12089 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 129 12089 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 DNAT tcp -- enp1s0 * 0.0.0.0/0 192.168.25.121 tcp dpt:1433 to:192.168.15.105 and iptables -t nat -nvL POSTROUTING shows: pkts bytes target prot opt in out source destination 97 7442 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0 97 7442 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0 97 7442 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 SNAT tcp -- * enp1s0 192.168.15.105 0.0.0.0/0 tcp spt:1433 to:192.168.25.121 All done right, here are some more good explanations: - https://wiki.ubuntuusers.de/iptables2 - https://netfilter.org/documentation/HOWTO/NAT-HOWTO-6.html - https://serverfault.com/questions/667731/centos-7-firewalld-remove-direct-rule but still iptraf-ng lists: Isn't PREROUTING (resp. POSTROUTING) done before (resp. after) ip_forwarding from internal to external interface?
#!/bin/bash
# Assuming that your Linux box has two NICs; eth0 attached to WAN and eth1 attached to LAN
# eth0 = outside
# eth1 = inside
# [LAN]----> eth1[GATEWAY]eth0 ---->WAN
# Run the following commands on LINUX box that will act as a firewall or NAT gateway
firewall-cmd --query-interface=eth0
firewall-cmd --query-interface=eth1
firewall-cmd --get-active-zone
firewall-cmd --add-interface=eth0 --zone=external
firewall-cmd --add-interface=eth1 --zone=internal
firewall-cmd --zone=external --add-masquerade --permanent
firewall-cmd --reload
firewall-cmd --zone=external --query-masquerade
# ip_forward is activated automatically if masquerading is enabled.
# To verify:
cat /proc/sys/net/ipv4/ip_forward
# set masquerading to internal zone
firewall-cmd --zone=internal --add-masquerade --permanent
firewall-cmd --reload
firewall-cmd --direct --add-rule ipv4 nat POSTROUTING 0 -o eth0 -j MASQUERADE
firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i eth0 -o eth1 -j ACCEPT
firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
firewall-cmd --reload
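A quick way to confirm the rules took effect (a hedged sketch; adjust the zone and interface names to match your own setup):

firewall-cmd --direct --get-all-rules     # direct rules firewalld knows about
firewall-cmd --zone=external --list-all   # interfaces and masquerade flag for the zone
iptables -t nat -nvL POSTROUTING          # check that the MASQUERADE/SNAT entries landed in the nat table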
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/365610/" ] }
539,854
I have this silly script to track the execution time of my cron jobs, to know when a script started and stopped.

start=$(date +'%Y-%m-%d %H:%M:%S') # This line converts to 2019-09-10 00:50:48
echo "$start"
./actual_script_runs_here
stop=$(date +'%Y-%m-%d %H:%M:%S')
echo "$stop"

This method works, but I have to calculate the time it took to run the script myself. Is it possible to calculate this in bash? Or I could use the time command, but it returns 3 different times on 3 new lines. I'd prefer to see that time result on one line together with my script's output.
You could use date +%s to get a date format that you can easily do math on (number of seconds [with caveats] since Jan 1 1970 00:00:00Z). You can also convert your ISO-format date back (possibly incorrectly, due to issues like daylight saving time) using date -d "$start" +%s .

It's not ideal not only due to DST issues when converting back, but also if someone changes the time while your program is running, or if something like ntp does, then you'll get a wrong answer. Possibly a very wrong one. A leap second would give you a slightly wrong answer (off by a second).

Unfortunately, checking time's source code, it appears time suffers from the same problems of using wall-clock time (instead of "monotonic" time).

Thankfully, Perl is available almost everywhere, and can access the monotonic timers:

start=$(perl -MTime::HiRes=clock_gettime,CLOCK_MONOTONIC -E 'say clock_gettime(CLOCK_MONOTONIC)')

That will get a number of seconds since an arbitrary point in the past (on Linux, since boot, excluding time the machine was suspended). It will count seconds, even if the clock is changed.

Since it's somewhat difficult to deal with real numbers (non-integers) in shell, you could do the whole thing in Perl:

#!/usr/bin/perl
use warnings qw(all);
use strict;
use Time::HiRes qw(clock_gettime CLOCK_MONOTONIC);

my $cmd = $ARGV[0];
my $start = clock_gettime(CLOCK_MONOTONIC);
system $cmd @ARGV; # this syntax avoids passing it to "sh -c"
my $exit_code = $?;
my $end = clock_gettime(CLOCK_MONOTONIC);
printf STDERR "%f\n", $end - $start;
exit($exit_code ? 127 : 0);

I named that "monotonic-time"; and it's used just like "time" (without any arguments of its own). So:

$ monotonic-time sleep 5
5.001831          # appeared ≈5s later
$ monotonic-time fortune -s
Good news is just life's way of keeping you off balance.
0.017800
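Another option for the "one line" requirement: if GNU time is installed (the external /usr/bin/time binary, not the shell keyword), it can format the result itself. It shares the wall-clock caveats discussed above, but it is compact; a sketch of a crontab-style invocation:

/usr/bin/time -f 'elapsed: %e s' ./actual_script_runs_here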
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/539854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370871/" ] }
539,933
Is there any method to extend a filesystem when adding more HDDs to it, without using LVM? What are those methods?
In this answer I'll call the resulting device a "volume", and the partitions you use to create it "devices". LVM internally uses the dmsetup tool to set up its volumes, and uses part of the storage space for storing metadata i.e. information about how the devices are set up. An alternative is to use dmsetup manually to create devices without need for explicitly allocated storage space for metadata. It also allows you to start using this feature with a partition that already contains data. Let's say you are currently having a drive /dev/sda5 which is an ext4 filesystem. It is 100 gigabytes big, and to get its exact size in sectors you run: # blockdev --getsz /dev/sda5195310000 Say you got a new hard drive sdb that is 300GB and want to use it to extend sda5. You could use the entire sdb device without a partition table for this, but for your own long-term sanity's sake, perhaps it's better to create a single partition that spans the whole disk so you remember later how the disk was used. So you then have sdb1, whose size blockdev reports as # blockdev --getsz /dev/sdb1583984375 So, to merge these two together, the FIRST THING you do is make sure your old device is unmounted. And of course I should say that even before that, back up your data if anything goes wrong. So, after your backup procedure, run: # umount /dev/sda5 to be sure. NOTE: I've never tried this on a system with systemd, so please check if there is a better way of doing it to avoid it potentially undoing your manual unmount. Next we'll create a file that contains the commands to set up the new volume. It requires some manual calculations. Basically the file we create will tell you, one line at a time, where each part of the new volume is located on the disks. So we want sectors 0-195309999 (195310000 sectors total) to map to device /dev/sda5 sectors 0-195309999. And then we want sectors 195310000-779294374 (583984375 sectors total) to map to device /dev/sdb1 sectors 0-583984374. So to do this, we create a file /etc/mybigvolume.dmsetup.txt with the following lines: 0 195310000 linear /dev/sda5 0195310000 583984375 linear /dev/sdb1 0 Each line has the format (all units in sectors = 512 bytes): <offset inside volume> <number of sectors> "linear" <source device> <source device offset> So, reading out loud, the lines mean: The target volume will have its sectors starting from 0 and going 195310000 sectors forward located in the device /dev/sda5, starting at sector 0 inside /dev/sda5 The target volume will have its sectors starting from 195310000 and going 583984375 sectors forward located in the device /dev/sdb1, starting at sector 0 inside /dev/sdb1 Side note: For the sake of understanding of numbers, should you want to add another identical 300G disk later to the end, the file contents would be: 0 195310000 linear /dev/sda5 0195310000 583984375 linear /dev/sdb1 0779294375 583984375 linear /dev/sdc1 0 Back to the original example; Having created the file, we can now set up the volume so we can start using it. We use dmsetup create for this. # dmsetup create mybigvolume < /etc/mybigvolume.dmsetup.txt If all goes well i.e. it outputs nothing, your new volume should now exist as a new device called /dev/mapper/mybigvolume which is 195310000 + 583984375 = 779294375 sectors big. Let's verify this: # blockdev --getsz /dev/mapper/mybigvolume779294375 You can run # dmsetup table at any point to see which devices have been set up with dmsetup. Yay! 
Now a few important things to think about at this point: You must now start using /dev/mapper/mybigvolume for accessing your disk. Always. If you use /dev/sda5, you can break your filesystem. So make sure you don't have /dev/sda5 anywhere anymore. Except of course in /etc/mybigvolume.dmsetup.txt or wherever you stored your dmsetup config. Your filesystem still only uses the first 195310000 sectors of the disk until you specifically ask it to start using the newly available space. Check your file system management tools for info. Had you used LVM, this step would still be needed. This setup (e.g. dmsetup configuration) only lasts until a reboot. So you will need to either configure your system to run the dmsetup create ... command automatically at boot BEFORE filesystems are mounted OR manually run it on every boot, followed by manually mounting the volume. How the former is done is highly dependent on your Linux distribution. But it would probably be similar to how cryptsetup is implemented (which also uses dmsetup to set up devices). Sample entry in /etc/fstab: /dev/mapper/mybigvolume /data ext4 defaults,noatime 0 0 Finally I'd like to point out that the risk of your volume failing is of course higher than a single device failing. But I don't know about your setup, maybe you use this /dev/mapper/mybigvolume as part of a raid-1 array! Anyway, good luck! :) PS. Feel free to ask questions!
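To make point 2 concrete for ext4 (a sketch using the example volume name from above; XFS users would run xfs_growfs on the mount point instead):

e2fsck -f /dev/mapper/mybigvolume     # resize2fs insists on this before an offline resize
resize2fs /dev/mapper/mybigvolume     # grows the filesystem to fill the whole mapped device
mount /dev/mapper/mybigvolume /data
df -h /data                           # should now show the combined size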
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/539933", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/368308/" ] }
540,018
I need a lot of different aliases to create ssh tunnels to different servers. To give you a few of them:

alias tunnel_1='autossh -M 20000 -N -L 8080:localhost:8080 -N -L 9200:localhost:9200 -N -L 8090:localhost:8090 [email protected]'
alias tunnel_2='autossh -M 20000 -N -L 8000:localhost:8080 -N -L 9200:localhost:9200 -N -L 8090:localhost:8090 [email protected]'

I came up with this function, which I added to my aliases file:

addPort () {
    echo "-N -L $1:localhost:$1 "
}

tunnel () {
    aliasString="autossh -M 20000 "
    for port in "${@:2}"
    do
        aliasString+=$(addPort $port)
    done
    aliasString+="$1"
    eval $aliasString
}

so I just need to do this to tunnel to the server I want:

tunnel [email protected] 8080 9000 7200

It's working well, but I'd like not to use eval if possible. Is there another way to call autossh directly and give it the correct params without using eval?
Use a single shell function:

tunnel () {
    local host="$1"; shift
    local port
    local args

    args=( -M 20000 )

    for port do
        args+=( -N -L "$port:localhost:$port" )
    done

    autossh "${args[@]}" "$host"
}

or, for /bin/sh :

tunnel () {
    host="$1"; shift

    for port do
        set -- "$@" -N -L "$port:localhost:$port"
        shift
    done

    autossh -M 20000 "$@" "$host"
}

Both of these functions extract the first argument into the variable host , and then build a list consisting of strings made up from the provided port numbers. At the end, both functions invoke autossh with the given list and the host.
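With either version defined (for example in your shell startup file), usage then looks like this; the host and ports below are just placeholders:

tunnel user@example.com 8080 9200 7000
# runs: autossh -M 20000 -N -L 8080:localhost:8080 -N -L 9200:localhost:9200 -N -L 7000:localhost:7000 user@example.com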
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/540018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211513/" ] }
540,029
I have this output. [root@linux ~]# cat /tmp/file.txtvirt-top time 11:25:14 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME 1 R 0 0 0 0 0.0 0.0 96:02:53 instance-0000036f 2 R 0 0 0 0 0.0 0.0 95:44:07 instance-00000372virt-top time 11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME 1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f 2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372 You can see it has two blocks and i want to extract last block (if you see first block it has all CPU zero which i don't care) inshort i want to extract following last lines (Notes: sometime i have more than two instance-*) otherwise i could use "tail -n 2" 1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372 I have tried sed/awk/grep and all possible way but not get close to desire result.
This feels a bit silly, but: $ tac file.txt |sed -e '/^virt-top/q' |tacvirt-top time 11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME 1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f 2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372 GNU tac reverses the file (many non-GNU systems have tail -r instead), the sed picks lines until the first that starts with virt-top . You can add sed 1,2d or tail -n +3 to remove the headers. Or in awk: $ awk '/^virt-top/ { a = "" } { a = a $0 ORS } END {printf "%s", a}' file.txt virt-top time 11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME 1 R 0 0 0 0 0.6 12.0 96:02:53 instance-0000036f 2 R 0 0 0 0 0.2 12.0 95:44:08 instance-00000372 It just collects all the lines to a variable, and clears that variable on a line starting with virt-top . If the file is very large, the tac + sed solution is bound to be faster since it only needs to read the tail end of the file while the awk solution reads the full file from the top.
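If the two header lines are not wanted either, the pieces mentioned above combine into one pipeline (a sketch; tail -n +3 simply drops the first two lines of the extracted block):

tac file.txt | sed -e '/^virt-top/q' | tac | tail -n +3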
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/540029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29656/" ] }
540,038
Preface: We have a complex build process and discovered it's working on one system and failing on another (unfortunally/Murphy it worked on the Jenkins build server, so the problem remained undiscovered for long). The error source turned out to be the order in which hard-linked files were put into a tar file by some external component: a and b were hard-linked and tar red, while later, on extracting the tar file, only a was extracted. Of course, when a was the first to be archived, it worked, but if b was the first in the tar, the typical Cannot hard link error shows up. Of course, we could tar cf with --hard-dereference , but this means changing a script in an external tool (which is to be avoided), so my question is different: Question: Our basic goal is to have a reproducable result, independent of the system. Currently, the tar ring order is reproducable on one system, but can be randomly different on another system. Can we force a file order for tar without giving options to tar or splitting the tar call into several calls? The systems are all linux right now, but could as well be FreeBSD or MacOS some time.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/540038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216004/" ] }
541,215
I understand how to use awk's printf function, but I don't want to specify every field. For example, assume this is my file: c1|c2|c3|c4|c5c6|c7|c8|c9|c10c11|c12|c13|c14|c15 I want to format it so that every record's first field is the width of c11 -- the longest cell in the first field: c1 |c2|c3|c4|c5c6 |c7|c8|c9|c10c11|c12|c13|c14|c15 I understand that I could specify: awk -F"|" '{printf "%-3s%s%s%s%s\n", $1, $2, $3, $4, $5}' file > newfile Let's assume I know what I want the width of the first column to be, but I do NOT know how many fields are in the file. Basically I want to do something like: ... '{printf "%-3s|", $1}' ... and then print the rest of the fields in their original format.
You can use sprintf to re-format $1 only. Ex. $ awk 'BEGIN{OFS=FS="|"} {$1 = sprintf("%-3s",$1)} 1' filec1 |c2|c3|c4|c5c6 |c7|c8|c9|c10c11|c12|c13|c14|c15
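If the width should be computed from the data instead of hard-coded, a two-pass variant works: read the file once to find the longest first field, then again to pad. This is a sketch that builds the format string dynamically rather than relying on printf's * width being available in every awk:

awk -F'|' -v OFS='|' '
    NR == FNR { if (length($1) > w) w = length($1); next }   # pass 1: find widest first field
    { $1 = sprintf("%-" w "s", $1) }                          # pass 2: pad field 1 to that width
    1' file file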
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/541215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/372187/" ] }
541,311
I have a script with

for i in 1 2 3 4; do
  do_something $i &
done

and when I call it, it terminates before all the do_something jobs have terminated. I found this question with many different answers. Edit: help wait tells me that

If ID is not given, waits for all currently active child processes, and the return status is zero.

Is it not sufficient to just add a single wait at the end?
Yes, it's enough to use a single wait with no arguments at the end to wait for all background jobs to terminate. Note that background jobs started in a subshell would need to be waited for in the same subshell that they were started in. You have no instance of this in the code that you show. Note also that the question that you link to asks about checking the exit status of the background jobs. This would require wait to be run once for each background job (with the PID of that job as an argument).
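If the exit status of each job matters, a common pattern is to remember each PID and wait for them one by one; a sketch, with do_something standing in for the real command:

status=0
pids=""
for i in 1 2 3 4; do
  do_something "$i" &
  pids="$pids $!"
done
for pid in $pids; do
  wait "$pid" || status=1
done
exit "$status"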
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/541311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279285/" ] }
541,342
Say I have a POSIX shell script that needs to run on different systems/environments that I do not control, and needs to remove the decimal separator from a string that is emitted by a program that respects the locale settings. How can I detect the decimal separator in the most general way?
Ask locale : locale decimal_point This will output the decimal point using the current locale settings. If you need the thousands separator: locale thousands_sep You can view all the numeric keywords by requesting the LC_NUMERIC category : locale -k LC_NUMERIC
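A short usage sketch for the original goal of stripping the separator from a program's output (the num value here is just a made-up example):

sep=$(locale decimal_point)
num="1234${sep}56"
printf '%s\n' "$num" | tr -d "$sep"    # prints 123456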
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/541342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164535/" ] }
541,381
I'm using Debian 10 (Buster) and am getting an error in a simple Bash script. All it does is check to see if one parameter is passed in, and if it's a file, just echo out the file: #!/usr/bin/bashif [ ( '$#' -eq 1 ) && ( -f "$1" ) ]; then echo "$1"fiexit 1 I get this error: line 2: syntax error near unexpected token `'$#''line 2: `if [ ( '$#' -eq 1 ) && ( -f "$1" ) ]; then' I have tried every combination of quotes (", ', no quotes) around the $# , and I always get a variant of those error messages using the type of quotes I use. I can't figure out what it's looking for.
[ is a command, so the problem is really the way you're trying to use multiple conditions. You want this: if [ "$#" -eq 1 ] && [ -f "$1" ]; then
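Putting that into the original script, a sketch of the corrected version (it also assumes the intent was to exit 0 when the file is printed, which the unconditional exit 1 at the end of the original would have prevented):

#!/usr/bin/bash
if [ "$#" -eq 1 ] && [ -f "$1" ]; then
    echo "$1"
    exit 0
fi
exit 1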
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/541381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82788/" ] }
541,417
I often want to get the login name associated with a user ID and because it’s proven to be a common use case, I decided to write a shell function to do this. While I primarily use GNU/Linux distributions, I try to write my scripts to be as portable as possible and to check that what I’m doing is POSIX-compatible. Parse /etc/passwd The first approach I tried was to parse /etc/passwd (using awk ). awk -v uid="$uid" -F: '$3 == uid {print $1}' /etc/passwd However, the problem with this approach is that the logins may not be local, e.g., user authentication could be via NIS or LDAP. Use the getent command Using getent passwd is more portable than parsing /etc/passwd as this also queries non-local NIS or LDAP databases. getent - Wikipedia getent(1) - Linux man page getent passwd "$uid" | cut -d: -f1 Unfortunately, the getent utility does not seem to be specified by POSIX. Use the id command id is the POSIX-standardised utility for getting data about a user’s identity. BSD and GNU implementations accept a User ID as an operand: id(1) - OpenBSD manual pages id(1) – FreeBSD man page GNU Coreutils: id invocation This means it can be used to print the login name associated with a User ID: id -nu "$uid" However, providing User IDs as the operand is not specified in POSIX; it only describes the use of a login name as the operand. Combining all of the above I considered combining the above three approaches into something like the following: get_username(){ uid="$1" # First try using getent getent passwd "$uid" | cut -d: -f1 || # Next try using the UID as an operand to id. id -nu "$uid" || # As a last resort, parse `/etc/passwd`. awk -v uid="$uid" -F: '$3 == uid {print $1}' /etc/passwd} However, this is clunky, inelegant and – more importantly – not robust; it exits with a non-zero status if the User ID is invalid or does not exist. Before I write a longer and clunkier shell script that analyses and stores the exit status of each command invocation, I thought I’d ask here: Is there a more elegant and portable (POSIX-compatible) way of getting the login name associated with a user ID?
One common way to do this is to test if the program you want exists and is available from your PATH . For example: get_username(){ uid="$1" # First try using getent if command -v getent > /dev/null 2>&1; then getent passwd "$uid" | cut -d: -f1 # Next try using the UID as an operand to id. elif command -v id > /dev/null 2>&1 && \ id -nu "$uid" > /dev/null 2>&1; then id -nu "$uid" # Next try perl - perl's getpwuid just calls the system's C library getpwuid elif command -v perl >/dev/null 2>&1; then perl -e '@u=getpwuid($ARGV[0]); if ($u[0]) {print $u[0]} else {exit 2}' "$uid" # As a last resort, parse `/etc/passwd`. else awk -v uid="$uid" -F: ' BEGIN {ec=2}; $3 == uid {print $1; ec=0; exit 0}; END {exit ec}' /etc/passwd fi} Because POSIX id doesn't support UID arguments, the elif clause for id has to test not only whether id is in the PATH, but also whether it will run without error. This means it may run id twice, which fortunately will not have a noticeable impact on performance. It is also possible that both id and awk will be run, with the same negligible performance hit. BTW, with this method, there's no need to store the output. Only one of them will be run, so only one will print output for the function to return.
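Usage is then simply (output depends on the local user database; UID 0 is shown only as an example):

get_username 0
# root
owner=$(get_username 1000)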
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/541417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22812/" ] }
541,424
/proc/iomem indicates that significant address space is mapped to PCI devices, such as a video card on my box: e0000000-efffffff : 0000:01:00.0 which is 250MB if my math is correct. On a 64 bit desktop with only 16GB RAM I assume there is some trick that linux or all modern kernels can do to recover that part of physical memory, but how exactly? A somewhat related question - if northbridge/memory controller routes memory/io accesses based on some programmable rule, such that for write access to memory mapped regions(for example, to pci devices), RAM does not even know about those writes since they are routed away, then there should be some sort of 'routing table'? And where does such a table live? How does linux kernel access this table?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/541424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/348230/" ] }
541,549
journalctl --boot prints log lines since boot and journalctl --follow prints the last 10 lines of the log and then follows it. But journalctl --boot --follow doesn't work like I expect it to. Rather than printing all the journal lines since boot and then following the journal it just ignores --boot flag. Swapping the flags around makes no difference. How do I print all the log lines since boot and then follow the log? Version info: $ journalctl --versionsystemd 239+PAM +AUDIT -SELINUX +IMA +APPARMOR +SMACK -SYSVINIT +UTMP -LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
Adding --lines=all does the trick - rather than overriding --boot they work together to follow lines since boot. journalctl --boot --lines=all --follow
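The same thing with short options, which is a bit less typing for interactive use (equivalent to the long-option form above):

journalctl -b -n all -f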
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/541549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
541,632
I have been trying to find the answer to this question for a while. I am writing a quick script to run a command based on output from awk. ID_minimum=1000for f in /etc/passwd;do awk -F: -vID=$ID_minimum '$3>=1000 && $1!="nfsnobody" { print "xfs_quota -x -c 'limit bsoft=5g bhard=6g $1' /home "}' $f; done The problems are that the -c argument takes a command in single quotes and I can't figure out how to properly escape that and also that $1 doesn't expand into the username. Essentially I am just trying to get it to output: xfs_quota -x -c 'limit bsoft=5g bhard=6g userone' /homexfs_quota -x -c 'limit bsoft=5g bhard=6g usertwo' /home etc...
To run the command xfs_quota -x -c 'limit bsoft=5g bhard=6g USER' /home for each USER whose UID is at least $ID_minimum , consider parsing out those users first and then actually run the command, rather than trying to create a string representing the command that you want to run. If you create the command string, you would have to eval it. This is fiddly and easy to get wrong. It's better to just get a list of usernames and then to run the command. getent passwd |awk -F: -v min="${ID_minimum:-1000}" '$3 >= min && $1 != "nfsnobody" { print $1 }' |while IFS= read -r user; do xfs_quota -x -c "limit bsoft=5g bhard=6g $user" /homedone Note that there is no actual need for single quotes around the argument after -c . Here I use double quotes because I want the shell to expand the $user variable which contains values extracted by awk . I use ${ID_minimum:-1000} when giving the value to the min variable in the awk command. This will expand to the value of $ID_minimum , or to 1000 if that variable is empty or not set. If you really wanted to, you could make the above loop print out the commands instead of executing them: getent passwd |awk -F: -v min="${ID_minimum:-1000}" '$3 >= min && $1 != "nfsnobody" { print $1 }' |while IFS= read -r user; do printf 'xfs_quota -x -c "limit bsoft=5g bhard=6g %s" /home\n' "$user"done Note again that using double quotes in the command string outputted (instead of single quotes) would not confuse a shell in any way if you were to execute the generated commands using eval or though some other means. If it bothers you, just swap the single and double quotes around in the first argument to printf above.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/541632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/372549/" ] }
541,643
How can I use sed to replace new line character with any other character? Input: I cannot conceive that anybody will require multiplications at the rate of 40,000 or even 4,000 per hour ... -- F. H. Wales (1936) Desired output: I cannot conceive that anybody will require multiplications at the rate of 40,000 or even 4,000 per hour ... -- F. H. Wales (1936) I've tried: > pbpaste | sed 's/\n/ /g' but it outputs the same thing as input. I know it's a newline char because I've checked it with cat -ev and it prints $ as expected. What else would be a good command to do this? This shows where there is extra space between new line. I want to remove that as well. So it's like a sentence with spaces. > pbpaste | cat -ev I cannot conceive that anybody will $ require multiplications at the rate of $ 40,000 or even 4,000 per hour ... $ $ -- F. H. Wales (1936) ⏎
tr is probably a better tool for this job. Try the following pbpaste | tr '\n' ' ' With your input, I get the following output. I cannot conceive that anybody will require multiplications at the rate of 40,000 or even 4,000 per hour ... -- F. H. Wales (1936)
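The question also mentions removing the extra blank-line spacing. One hedged way is to squeeze runs of spaces after the newline translation (tr -s collapses repeated occurrences of the given character):

pbpaste | tr '\n' ' ' | tr -s ' '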
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/541643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159858/" ] }
541,760
I'm looking for a "meta-command" xyz such that:

(echo "foo"; echo "bar") | xyz rev

will return:

foo oof
bar rab

I'd like to avoid temporary files, i.e. I'm looking for a solution neater than:

tempfile=$(mktemp)
cat > $tempfile
cat $tempfile | rev | paste $tempfile -

(And of course I want a general solution, for any command, not just rev ; you can assume that the command outputs exactly one line for each input line.) A zsh solution would also be acceptable.
There will be a lot of problems in most cases due to the way that stdio buffering works. A work around for linux might be to use the stdbuf program and run the command with coproc, so you can explicitly control the interleaving of the output. The following assumes that the command will output one line after each line of input. #!/bin/bashcoproc stdbuf -i0 -o0 "$@"IFS=while read -r in ; do printf "%s " "$in" printf "%s\n" "$in" >&${COPROC[1]} read -r out <&${COPROC[0]} printf "%s\n" "$out"done If a more general solution is needed as the OP only required each line of input to the program to eventually output one line rather than immediately, then a more complicated approach is needed. Create an event loop using read -t 0 to try and read from stdin and the co-process, if you have the same "line number" for both then output, otherwise store the line. To avoid using 100% of the cpu if in any round of the event loop neither was ready, then introduce a small delay before running the event loop again. There are additional complications if the process outputs partial lines, these need to be buffered. If this more general solution is needed, I would write this using expect as it already has good support for handling pattern matching on multiple input streams. However this is not a bash/zsh solution.
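Assuming the script above is saved as xyz and made executable, a run with the example from the question would look roughly like this (rev is just the example filter; it behaves line by line, as required):

$ (echo "foo"; echo "bar") | ./xyz rev
foo oof
bar rab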
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/541760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/372665/" ] }
541,779
There's no directory above / , so what's the point of the .. in it?
The .. entry in the root directory is a special case. From the POSIX standard ( 4.13 Pathname Resolution , where the . and .. entries are referred to as "dot" and "dot-dot" respectively):

The special filename dot shall refer to the directory specified by its predecessor. The special filename dot-dot shall refer to the parent directory of its predecessor directory. As a special case, in the root directory, dot-dot may refer to the root directory itself.

The rationale has this to add ( A.4.13 Pathname Resolution ):

What the filename dot-dot refers to relative to the root directory is implementation-defined. In Version 7 it refers to the root directory itself; this is the behavior mentioned in POSIX.1-2017. In some networked systems the construction /../hostname/ is used to refer to the root directory of another host, and POSIX.1 permits this behavior. Other networked systems use the construct //hostname for the same purpose; that is, a double initial <slash> is used. [...]

So, in short, the POSIX standard says that every directory should have both . and .. entries, and permits the .. directory entry in / to refer to the / directory itself (notice the word "may" in the first text quoted), but it also allows an implementation to let it refer to something else. Most common implementations of filesystems make /.. resolve to / .
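You can see the usual behaviour directly: on a typical Linux system, / and /.. resolve to the same directory, so they report the same inode number (the exact numbers, permissions and date below are made up; the point is that both lines match):

$ ls -lid / /..
2 drwxr-xr-x 23 root root 4096 Jan  1 12:00 /
2 drwxr-xr-x 23 root root 4096 Jan  1 12:00 /..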
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/541779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
541,795
I have used ln to write symbolic links for years but I still get the order of parameters the wrong way around. This usually has me writing:

ln -s a b

and then looking at the output to remind myself. I always imagine it to be a -> b as I read it, when it's actually the opposite, b -> a . This feels counter-intuitive, so I find that I'm always second-guessing myself. Does anyone have any tips to help me remember the correct order?
I use the following: ln has a one-argument form (2nd form listed in the manpage ) in which only the target is required (because how could ln work at all without knowing the target) and ln creates the link in the current directory. The two-argument form is an addition to the one-argument form, thus the target is always the first argument.
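A quick illustration of that one-argument form (the target path is just an example):

ln -s /usr/share/zoneinfo/UTC          # creates ./UTC -> /usr/share/zoneinfo/UTC
ln -s /usr/share/zoneinfo/UTC mytz     # two-argument form: ./mytz -> /usr/share/zoneinfo/UTC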
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/541795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81989/" ] }
542,094
I have a tab-delimited file which looks like this:

gene v1 v2 v3 v4
g1 NA NA NA NA
g2 NA NA 2 3
g3 NA NA NA NA
g4 1 2 3 2

The number of fields in every line is fixed and the same. I want to remove those rows from the above file where all the fields from column 2 through the last are NA. Then the output should look like:

gene v1 v2 v3 v4
g2 NA NA 2 3
g4 1 2 3 2
With awk : awk '{ for (i=2;i<=NF;i++) if ($i!="NA"){ print; break } }' file Loop through the fields starting at the second field and print the line if a field not containing NA is found. Then break the loop.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/542094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63486/" ] }
542,098
I am looking for a Linux distro that is accessible to someone who is totally blind. I am aware of Vinux and Sonar GNU, but the former is dormant and the latter is discontinued. What is out there that both is current and also not likely to go away? This search is also satisfied by a mainstream distro like Debian or Ubuntu plus this or that application (like Orca and Lynx); just name which distro, and which app.
As noted in the question, there have been several Linux distributions aimed at blind and visually impaired users, many of which were neglected for a long time or even abandoned. In early 2017, Vinux announced plans to merge with Sonar , using Fedora as a basis. That was the last thing I heard about this. TalkingArch is or was "a respin of the Arch Linux live iso modified to include speech and braille output for blind and visually impaired users". The latest version dates from 2017 and the Arch wiki points out that "TalkingArch project is dead since 2017". It was succeeded by Tarch (see below). Update 22.10.2021 : The link to the Talking Arch page now redirects to the wiki page "Install Arch Linux with accessibility options". Speakup , which is or was a set of tools for several Linux distributions, has not seen any updates for a number of years. Oralux was based on Knoppix and included BRLTTY, Emacspeak, Yasr, Speakup and speech synthesiser for several languages. It was last updated in 2006 or 2007. Some alternatives that are still being maintained are: Luwrain , which describes itself as "A platform for the creation of apps for the blind and partially-sighted". It has ISOs for 32-bit and 64-bit systems and bootable ISO images . Version 1.2.1 was released in May 2019. Tarch , "the new talking arch livecd project" succeeded Talking Arch. Its latest version is 2019.06.22, released in June 2019. Update 22.10.2021 : Tarch is no longer available. There is also ADRIANE , "Audio Desktop Reference Implementation and Networking Environment", available on Knopper.net, the same website where you can find Knoppix. Update 22.10.2021 : The newest linux distro focussing on visually impaired users appears to be Accessible-Coconut , which is based on Ubuntu and was first released in the summer of 2020. Using a distribution that was specifically developed with blind users in mind is not the only option. The decisive aspect is the desktop environment and the availability of packages that blind users need. The Gnome desktop was traditionally the desktop of choice for anybody with accessibility needs. Gnome 3 was a setback with regard to accessibility, which made Mate (a continuation of Gnome 2) the better choice for many years. However, I doubt that this is still the case. For example, I can't find any dedicated accessibility page on the MATE website , whereas GNOME at least has an Accessibility Team .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542098", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/372923/" ] }
542,185
I am trying to use jq to join the values inside a JSON array into a single line comma separated list. (Without trailing comma) { "hardware": [ "abc", "def", "ghi" ]} To create"abc, def, ghi" I can join values together using jq -jr '(.hardware[])' input.jsonabcdefghi I have tried to insert comma and space but cannot work jq -jr '(.hardware[]|join(", ")' jq: error: syntax error, unexpected $end (Unix shell quoting issues?) at <top-level>, line 1:(.hardware[]|join(", ") Could someone point me to the correct syntax to use? Thanks Densha
You are looking for jq -r '.hardware | join(", ")' The syntax error from the version you posted is because the opening ( doesn't have a matching ) , but in any case join needs to be given all the values at once, so .hardware is better than .hardware[] (which will pass them through one at a time).
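One detail worth knowing: join only accepts strings, so if the array may also contain numbers or other non-strings, convert first. A hedged variant of the same idea:

jq -r '.hardware | map(tostring) | join(", ")' input.json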
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/335627/" ] }
542,295
Is it possible (in classical ext4, and/or in any other filesystem) to create two files that point to the same content, such that if one file is modified, the content is duplicated and the two files become different? It would be very practical to save space on my hard drive. Context: I have some heavy videos that I share on an owncloud server that can be modified by lot's of people and therefore it may be possible that some people modify/remove these files... I really would like to make sure I have a backup of these files, and therefore I need for now to maintain two directories, the normal nextcloud one, and one "backup" directory, which (at least) double the size required to store it. I was thinking to great a git repo on top of the nextcloud directory, and it make the backup process much easier when new videos are added (just git add . ), but git still double the space, between the blob and the working directory. Ideally, a solution that can be combined with git would be awesome (i.e. that allows me to create a history of the video changes, with commits, checkouts... without doubling the disk space. Moreover, I'm curious to have solution for various file systems (especially if you have tricks for filesystems that do not implement snapshots ). Note that LVM snapshot is not really a solution as I don't want to backup my full volume, only some specific files/folders. Thanks!
Yes, on copy-on-write file systems (Btrfs, ZFS). git-annex is as close as you are likely to get on ext4. Note that you can mount --bind an LVM-backed volume or a Btrfs file system over a folder in another file system.
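As an illustration, on a filesystem with reflink support (Btrfs, or XFS created with reflinks enabled), GNU cp can make exactly this kind of space-sharing copy; this is only a sketch and will not work on ext4:
# both names share the same extents until one of them is modified
cp --reflink=always video.mkv backup/video.mkv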
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542295", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169695/" ] }
542,304
I would like to find all files which reside in any directory foo . For example, consider the following files: foo/wa/foo/xa/b/foo/ya/b/c/foo/za/b/c/foo/bar/n I would like to find the files w,x,y,z and get them listed as above, i.e., with their relative paths. My attempt was something like $ find . -path '**/foo' -type f which doesn't work. The search should not include file n ; i.e., we are only interested in files whose parent directory is named foo .
Not using find , but globbing in the zsh shell: $ printf '%s\n' **/foo/*(^/)a/b/c/foo/za/b/foo/ya/foo/xfoo/w This uses the ** glob, which matches "recursively" down into directories, to match any directory named foo in the current directory or below, and then *(^/) to match any file in those foo directories. The (^/) at the end of that is a glob qualifier that restricts the pattern from matching directories (use (.) to match regular files only, or (-.) to also include symbolic links to regular files). In bash : shopt -s globstar nullglobfor pathname in **/foo/*; do if [[ ! -d "$pathname" ]] || [[ -h "$pathname" ]]; then printf '%s\n' "$pathname" fidone I set the nullglob option in bash to remove the pattern completely in case it does not match anything . The globstar shell option enables the use of ** in bash (this is enabled by default in zsh ). Since bash does not have the glob qualifiers of zsh , I'm looping over the pathnames that the pattern matches and test each match to make sure it's not a directory before printing it. Change the " ! -d || -h " test to a " -f && ! -h " test to instead pick out only regular files, or just a single " -f " test to also print symbolic links to regular files.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/359000/" ] }
542,332
I'm trying to do floating point arithmetic in a shell script. I learned that awk can do it on the command line, but the way it works there does not seem to carry over to a shell script. When I type awk below, it turns gray, as if to say that bash doesn't recognize it. When I run it, $ ./script a 30 40 , I get 0. What am I doing wrong? #!/bin/bashif [ $1 = a ]then echo | awk '{print $2 + $3}'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542332", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364608/" ] }
542,360
I have Linux Mint 20.0 (Ulyana) Cinnamon , which is Ubuntu 20.04 based. GPU : NVIDIA GeForce GTX 1060 , Max-Q Design , 6 GB GDDR5/X VRAM. Objective : To install the latest available drivers without using any PPA (Personal Package Archive). Status : If I run Mint's integrated Driver Manager, I only see an old version, 390, available.
Disclaimer - please read before you install anything Today, I ran into an old laptop, with Nvidia Geforce GT 520M, which is not supported by the latest driver anymore, version 390 works fine though. Therefore, I must strongly recommend running a search on the Nvidia drivers page before you try to install any driver version! Generic way - the recommended way If you'd like to have the recommended packages installed too, then you could run this ( the version was last updated on 2021-Aug-04 ): sudo apt-get install --install-recommends nvidia-driver-470 I may not update the version anymore, so I will tell you instead, how to find out (manually) that there is a new version. As there are many ways, the most comfortable for me is (as a normal user or root) typing to terminal: apt-cache policy nvidia-driver-4 and double-tapping the Tab , an example output follows: nvidia-driver-418 nvidia-driver-440-server nvidia-driver-460-servernvidia-driver-418-server nvidia-driver-450 nvidia-driver-465nvidia-driver-430 nvidia-driver-450-server nvidia-driver-470nvidia-driver-435 nvidia-driver-455 nvidia-driver-470-servernvidia-driver-440 nvidia-driver-460 Linux Mint 20.2 - Driver Manager It may be possible to even use GUI driver manager for this. Generally, I like the command-line way much more, actually, I never use this GUI, because it does not tell you what is happening, you would just blindly look at the progress bar. Therefore I strongly recommend not using this tool, and do the job via terminal as shown above. Ubuntu way - NOT RECOMMENDED (!!!) Thanks to the Ubuntu base, one can also take advantage of, which takes care of everything, but I do not recommend it due to one has no control over what happens, and things can break as a side effect , so the following I note only for completeness (click your mouse to show): sudo ubuntu-drivers autoinstall To only list drivers applicable to your system, you can do: sudo ubuntu-drivers list which will list all drivers available to install on your Ubuntu-based system.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/542360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
542,365
I want to have access to remote machines that are connected to a remote server behind NAT (actually it's just another computer with Linux). I'm using LogMeIn Hamachi to create a connection between the local machine and the remote server, and then sshuttle to tunnel traffic. But after a few minutes of using sshuttle, Hamachi terminates the connection and I need to restart its service. Hamachi works fine if I am not using sshuttle. My sshuttle command: sshuttle -r [email protected] -x xxx.xxx.xxx.xxx 0/0 -vv Maybe there is another way to share the network through NAT? I need to get access to sites, services, and machines that belong to the server's local network.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/542365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373134/" ] }
542,474
I have a large file. How can I print every 9th line starting at line 6? awk NR % 9 == 0' file1 > file2
In GNU sed you can use the first~step operator : sed -n '6~9p' file1 > file2
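The first~step address is a GNU sed extension; since the question mentions awk, a portable equivalent (a small sketch) is:
awk 'NR % 9 == 6' file1 > file2
which prints lines 6, 15, 24, and so on.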
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
542,522
I want to do something like this: echo 'foo' | tee /dev/stdout > >(cat) where the stdout from echo gets sent to the terminal and to the cat process. Is there a simpler way to do this? When I run this: echo 'foo' | tee >(echo 'bar') for some reason, it does not echo 'foo' it only echoes 'bar', why?
You do it exactly the way you have shown: somecommand | tee >(othercommand) The output of somecommand would be written to the input of othercommand and to standard output. The issue with your echo 'bar' process substitution is that it doesn't care about the input that comes via tee from echo 'foo' , so it just outputs bar as quickly as it can and terminates. The tee utility then tries to write to it, but fails and therefore terminates (from receiving a PIPE signal) before it writes the string to standard output. Or, tee may have time to write the data to the process substitution, in which case both bar and foo would be printed on standard output, it's not deterministic. You need to make sure that the command in the process substitution actually reads the data sent to it (otherwise, what would be the point of sending it data?) As Uncle Billy suggests in comments , this is easily arranged in your example by letting the process substitution simply use cat >/dev/null (assuming you're not interested in the data coming from tee ): echo 'foo' | tee >(cat >/dev/null; echo 'bar') or echo 'foo' | tee >(echo 'bar'; cat >/dev/null) (these two variations would vary only in the order of the final output of the two strings)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
542,554
Last Monday morning, I found my server couldn't run any command, and it showed "input output error". After trying for half an hour, I found the only command that could be executed was sudo poweroff -f (I had to use the -f flag or I got "input output error"). I booted the server manually and checked the system log, but found nothing special. I also ran a smartctl test to confirm whether there was any problem with the hard disk, and it passed without error. Then this Monday the problem showed up again. I shut down the server and booted it manually, and it looks fine, just like nothing happened. Then I used memtest86 8.2 to test whether the memory stick is OK, and made sure the SATA cable and hard disk are in good condition and connected firmly. I think maybe it is a problem with the OS or file system? My OS is Debian 8.11. Can you give me some advice? Thank you all!
I found my server can't run any command, and it shouws "input output error" The error code EIO ("Input/output error") on command launch would happen when your filesystem is damaged; or worse, when you are running on a faulty storage. Cross your fingers; either way, be aware that at this point you should NOT try to power on the server unless really necessary . 1 The Test There is one sure-fire way to distinguish between two root causes: conduct block-level read scan on the system, and watch out for kernel messages. Boot your system with GNU/Linux recovery boot disk. Change the system to the plain old text console (press Ctrl+Alt+F1); don't use graphical terminal for this . Login as root. Run dmesg -E to enable live kernel message display on the console. Run dmesg -n debug to let low-level kernel message though. Run blkid to see which disk contains system partition. (Note that blkid will list partitions; strip number off the end of partition path and you will get the disk) Run time -p dd if=/dev/sda of=/dev/null bs=4M to conduct an entire- disk read test (please type this carefully). If your system disk is not /dev/sda , substitute accordingly. Watch the screen (it will take a long while)... Results In the best case where dd completed successfully and uneventfully, then it is likely a filesystem problem. If you are comfortable doing filesystem check from boot disk, you can do it now (recommended). If you would rather let the system sort it by itself, reboot (also remove the boot disk), and boot your usual system but with fsck.mode=force appended to the end of kernel command line. (See this question for details) Discussing the result of filesystem check will warrant a different question though. However, in the worst case , you would see kernel messages like this spewing on the screen: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0ata2.00: irq_stat 0x40000001ata2.00: failed command: READ DMA EXTata2.00: cmd 25/00:08:78:15:c5/00:00:6c:00:00/e0 tag 0 dma 4096 in res 51/40:00:78:15:c5/00:00:6c:00:00/e0 Emask 0x9 (media error)ata2.00: status: { DRDY ERR }ata2.00: error: { UNC }ata2.00: configured for UDMA/100sd 1:0:0:0: [sda] Unhandled sense codesd 1:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSEsd 1:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]Descriptor sense data with sense descriptors (in hex): 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 6c c5 15 78 sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failedsd 1:0:0:0: [sda] CDB: Read(10): 28 00 6c c5 15 78 00 00 08 00end_request: I/O error, dev sda, sector 1824855416Buffer I/O error on device sda, logical block 228106927ata2: EH complete Look for the key parts: DRDY , ERR and UNC in braces Medium Error status Unrecovered read error sense message If you glanced and find these in the messages (even once), they show that you are facing physical disk error. When this is the case, don't let dd finish, press Ctrl+C to stop, NOW ; shut down your system, and bring your disk to a data recovery shop you trust. If you did not find the above worst-case telltales, and rather found this kind of kernel messages repeated: ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozenata2: irq_stat 0x00000040, connection status changedata2: SError: { CommWake DevExch }ata2: hard resetting linkata2: link is slow to respond, please be patient (ready=0) Key parts: hard resetting link link is slow to respond Then you are rather facing SATA link problem (e.g. 
bad cabling): press Ctrl+C to stop, shut down your system, fix your disk cable and connection, and try again. Side Notes And I made a smartctl test to confirm if there is any promblem with hard disk. And it passed without error. Beware that some hard disks tell straight lies in their S.M.A.R.T status (I'm looking at you, Toshiba); my previous laptop hard disk just ground to halt when reading, spewing read errors, and it still said "nothing's wrong" in its status registers. If your server is mission-critical, then you should consider RAID -based setup. 1 Cautionary tale: My housemate once ignored this warning, and keep filesystem checker grinding on his desktop system anyway. He didn't wait for me to check it up until it eventually failed to boot . Once I got a chance to check it, the disk damage had been already beyond recover (the 500 GB disk could only barely read at snail-pace KB/s, and there was no significant continuous readable area found even after several days). On the other hand, in another case with the same symptom, the machine owner heeded my warning and left the thing off until I could check it. Of course, it was a hard disk failure. After half a day of GNU DDRescue session and one new hard disk, I brought a good news to him that his system and data was 100% recovered at block level- i.e. all files intact, and ready to boot again without any modification.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/359405/" ] }
542,586
filename~contenturl~uuid~nodeid~contentid000224.pdf~store://2018/7/20/11/35/3f176f4b-41a0-4ac4-a795-a2240ffbb7b9.bin~d6203724-1100-4057-9ed5-4ca6a94f5512~1324625~1363256000238.pdf~store://2018/7/20/11/35/4302b390-1134-424d-a92f-ad27b233e8c1.bin~96b7343d-349d-4316-8bc6-def5bd924032~1324641~1363292000262.pdf~store://2018/7/20/11/35/5ff59679-b3ec-46d2-aa7d-5ec28eff6fe9.bin~11827eee-67bb-43b7-a743-966514f26457~1324661~1363375 The above is my .csv file is there with separator as "~" , I want to do substring operation on the second column which is starting from store:// and want to add checksum result of that row to a new column in the same CSV if possible. e.g. filename~contenturl~checksum000224.pdf /opt/xyz/2018/7/20/11/35/3f176f4b-41a0-4ac4-a795-a2240ffbb7b9.bin 23423423425 So if you see the end result, I substring and process path in store:// and added a new column of that file called checksum. I want this via shell script using bin/bash =======================As far as I am newbie, I just tried with AWK and able to get only first and second column values using awk -F "~" '{print $1, $2}' $csv_file Now the next thing is complex for me,Second column values requires text processing and checksum you can get via cksum /opt/xyz/2018/7/20/11/35/3f176f4b-41a0-4ac4-a795-a2240ffbb7b9.bin Yes you heard right,final result would looks like filename~contenturl~checksum000224.pdf /opt/xyz/2018/7/20/11/35/3f176f4b-41a0-4ac4-a795-a2240ffbb7b9.bin 23423423425 rest of other column would be better if we would have, or else above three column is also fine. Note: If it is easy to retain existing column and to add more column named "checksum" that is also fine.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373346/" ] }
542,712
I would want to know the meaning of some items in the ss command output. Eg: # sudo ss -iepn '( dport = :3443 )'Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port tcp ESTAB 0 0 192.168.43.39:45486 190.0.2.1:443 users:(("rocketchat-desk",pid=28697,fd=80)) timer:(keepalive,11sec,0) uid:1000 ino:210510085 sk:16f1 <-> ts sack cubic wscale:7,7 rto:573 rtt:126.827/104.434 ato:40 mss:1388 pmtu:1500 rcvmss:1388 advmss:1448 cwnd:10 bytes_sent:12904 bytes_retrans:385 bytes_acked:12520 bytes_received:13322 segs_out:433 segs_in:444 data_segs_out:215 data_segs_in:253 send 875.5Kbps lastsnd:18722 lastrcv:18723 lastack:18662 pacing_rate 1.8Mbps delivery_rate 298.1Kbps delivered:216 busy:16182ms retrans:0/10 dsack_dups:10 rcv_rtt:305 rcv_space:14480 rcv_ssthresh:6CLOSE-WAIT 1 0 [2800:810:54a:7f0::1000]:37844 [2800:3f0:4002:803::200a]:443 users:(("plasma-browser-",pid=16020,fd=175)) uid:1000 ino:90761 sk:1d --> ts sack cubic wscale:8,7 rto:222 rtt:21.504/5.045 ato:40 mss:1348 pmtu:1500 rcvmss:1208 advmss:1428 cwnd:10 bytes_sent:1470 bytes_acked:1471 bytes_received:11214 segs_out:20 segs_in:20 data_segs_out:8 data_segs_in:13 send 5014881bps lastsnd:96094169 lastrcv:96137280 lastack:96094142 pacing_rate 10029464bps delivery_rate 1363968bps delivered:9 app_limited busy:91ms rcv_space:14280 rcv_ssthresh:64108 minrtt:17.458 Mainly items missing in ss man page, I made some guesses, please correct me if I'm wrong: rcvmss: I wonder is MMS receidev advmss: ? app_limited: ? busy: ? retrans: ? dsack_dups: Duplicated segments? minrtt: Minimum RTT achieved in the socket?
Meaning of some of these fields can be deduced from source code of ss andLinux kernel. Information you see is printed by tcp_show_info() function in iproute2/misc/ss.c . advmss : In ss.c : s.advmss = info->tcpi_advmss;(...) if (s->advmss) out(" advmss:%d", s->advmss); In linux/include/linux/tcp.h : u16 advmss; /* Advertised MSS */ app_limited: In ss.c : s.app_limited = info->tcpi_delivery_rate_app_limited;(..)if (s->app_limited) out(" app_limited"); That one is not documented in linux/include/uapi/linux/tcp.h inLinux: struct tcp_info {(...) __u8 tcpi_delivery_rate_app_limited:1; but surprisingly we can find some information in the commit thatintroduced it: commit eb8329e0a04db0061f714f033b4454326ba147f4Author: Yuchung Cheng <[email protected]>Date: Mon Sep 19 23:39:16 2016 -0400 tcp: export data delivery rate This commit export two new fields in struct tcp_info: tcpi_delivery_rate: The most recent goodput, as measured by tcp_rate_gen(). If the socket is limited by the sending application (e.g., no data to send), it reports the highest measurement instead of the most recent. The unit is bytes per second (like other rate fields in tcp_info). tcpi_delivery_rate_app_limited: A boolean indicating if the goodput was measured when the socket's throughput was limited by the sending application. This delivery rate information can be useful for applications that want to know the current throughput the TCP connection is seeing, e.g. adaptive bitrate video streaming. It can also be very useful for debugging or troubleshooting. A quick git blame in ss.c confirms that app_limited was addedafter tcpi_delivery_rate_app_limited was added to kernel. busy : In ss.c : s.busy_time = info->tcpi_busy_time;(..) if (s->busy_time) { out(" busy:%llums", s->busy_time / 1000); And in include/uapi/linux/tcp.h in Linux it says: struct tcp_info {(...) __u64 tcpi_busy_time; /* Time (usec) busy sending data */ retrans : In ss.c : s.retrans = info->tcpi_retrans;s.retrans_total = info->tcpi_total_retrans;(...) if (s->retrans || s->retrans_total) out(" retrans:%u/%u", s->retrans, s->retrans_total); tcpi_total_retrans is not described in linux/include/uapi/linux/tcp.h : struct tcp_info {(...) __u32 tcpi_total_retrans; but it's used in tcp_get_info() : void tcp_get_info(struct sock *sk, struct tcp_info *info){ const struct tcp_sock *tp = tcp_sk(sk); /* iff sk_type == SOCK_STREAM */(...) info->tcpi_total_retrans = tp->total_retrans; And in linux/include/linux/tcp.h it says: struct tcp_sock {(...) u32 total_retrans; /* Total retransmits for entire connection */ tcpi_retrans is also not described but reading tcp_get_info() again we see: info->tcpi_retrans = tp->retrans_out; And in linux/include/linux/tcp.h : struct tcp_sock {(...) u32 retrans_out; /* Retransmitted packets out */ dsack_dups : In ss.c : s.dsack_dups = info->tcpi_dsack_dups;(...) if (s->dsack_dups) out(" dsack_dups:%u", s->dsack_dups); In include/uapi/linux/tcp.h in Linux: struct tcp_info {(...)__u32 tcpi_dsack_dups; /* RFC4898 tcpEStatsStackDSACKDups */ And in https://www.ietf.org/rfc/rfc4898.txt : The number of duplicate segments reported to the local host by D-SACK blocks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47954/" ] }
542,831
How do I use a variable as the name of the variable whose value I want to check? My code is below; when I run this I get the error: [: A_variable: integer expression expected I think this is because when doing an if on a variable the proper syntax should be: if [ $A_variable -eq 1 ]; but I am not able to put the dollar sign in front of my variable because I am putting a different variable in the value of my variable. It's almost like it should be: if [ $${letter}_variable or some mix of this, but I don't know what? array=("A" "B" "C")A_variable=1B_variable=2for letter in "${array[@]}"doif [ ${letter}_variable -eq 1 ]; then...
This is called indirect expansion in bash. The syntax to get the value of a variable whose name is determined by another variable namevar is: ${!namevar} Since your variable name is composed of a variable and a constant string, you'll need one intermediate step: array=("A" "B" "C")A_variable=1B_variable=2for letter in "${array[@]}"do varname="${letter}_variable" if [ "${!varname}" -eq 1 ]; then ... Note: if the indirectly-referenced variable does not actually exist, the expansion will result in a null string, which will still cause an error. If this is not desirable, add a second test like this first: varname="${letter}_variable" if [ "${!varname}" = "" ]; then echo "Error: $varname is non-existent or empty" elif [ "${!varname}" -eq 1 ]; then ... Also, using double quotes to protect against unexpected characters in the expansion result would be a good idea, if the contents of all the possible matching variables are not initialized to safe values in advance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202110/" ] }
542,892
This while loop for computing factorials works correctly only up to the number 20. After that, it starts to give bad answers, but I can't figure out why. #!/bin/bashn=$1[ "$n" == "" ] && echo Please give a number and try again && exitfactorial=1 ; j=1while [ $j -le $n ]do factorial=$(( $factorial * $j )) j=$(( $j + 1 ))doneecho The factorial of $n, "$n"'!' = $factorialexit 0 If you give 21 as an argument, you get: -4249290049419214848. Could it be a bash issue? I tried doing the factorial calculation below and I get the same bad answer. The answer should be: 51090942171709440000 echo $(( 2432902008176640000 * 21 ))
Bash uses signed 64-bit integers on most systems. You will need another tool like bc to go beyond that. Meaning that with bash the maximum number you can have is 2^63-1 , equals to 9,223,372,036,854,775,807 and you are exceeding this number: awk 'BEGIN{print 2432902008176640000 * 21}'51090942171709440000 #--> 51,090,942,171,709,440,000
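For example, the exact factorial can be computed with arbitrary precision by handing the whole multiplication chain to bc; this is just one possible sketch:
$ seq 1 21 | paste -sd'*' - | bc
51090942171709440000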
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/542892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364608/" ] }
542,989
When did Unix move away from storing clear text passwords in passwd? Also, when was the shadow file introduced?
For the early history of Unix password storage, read Robert Morris and Ken Thompson's Password Security: A Case History . They explain why and how early Unix systems acquired most the features that are still seen today as the important features of password storage (but done better). The first Unix systems stored passwords in plaintext. Unix Third Edition introduced the crypt function which hashes the password. It's described as “encryption” rather than “hashing” because modern cryptographic terminology wasn't established yet and it used an encryption algorithm, albeit in an unconventional way. Rather than encrypt the password with a key, which would be trivial to undo when you have the key (which would have to be stored on the system), they use the password as the key. When Unix switched from an earlier cipher to the then-modern DES , it was also made slower by iterating DES multiple times. I don't know exactly when that happened: V6? V7? Merely hashing the password is vulnerable to multi-target attacks: hash all the most common passwords once and for all, and look in the password table for a match. Including a salt in the hashing mechanism, where each account has a unique salt, defeats this precomputation. Unix acquired a salt in Seventh Edition in 1979 . Unix also acquired password complexity rules such as a minimum length in the 1970s. Originally the password hash was in the publicly-readable file /etc/passwd . Putting the hash in a separate file /etc/shadow that only the system (and the system administrator) could access was one of the many innovations to come from Sun, dating from around SunOS 4 in the mid-1980s. It spread out gradually to other Unix variants (partly via the third party shadow suite whose descendent is still used on Linux today) and wasn't available everywhere until the mid-1990s or so. Over the years, there have been improvements to the hashing algorithm. The biggest jump was Poul-Henning Kamp's MD5-based algorithm in 1994, which replaced the DES-based algorithm by one with a better design. It removed the limitation to 8 password characters and 2 salt characters and had increased slowness. See IEEE's Developing with open source software , Jan–Feb. 2004, p. 7–8 . The SHA-2-based algorithms that are the de facto standard today are based on the same principle, but with slightly better internal design and, most importantly, a configurable slowness factor.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/542989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373706/" ] }
543,013
For example, there's https://aur.archlinux.org/packages/github-desktop/ , https://aur.archlinux.org/packages/github-desktop-bin/ , and https://aur.archlinux.org/packages/github-desktop-git/ . I took a look at the pkgbuilds and found no easily identifiable difference between the packages. This isn't just one package, but many of them. What's the difference between them? Which one should I install?
Normal packages are built from stable versions or stable git tags of a repository. The program is compiled in user's machine and then installed. This will take time. Packages with -bin suffix are already built by upstream maintainer and is available somewhere. So, users do not have to compile the package in their machine. The PKGBUILD script downloads, extracts and install the files. Some proprietary software are released in this format where source code is not available. Packages with -git suffix are built from the latest commit from git repository, no matter it is a stable or not. This way user get latest fix or patches. This also compiled in user machine, then installed. The difference among the AUR packages can be easily understood from their corresponding PKGBUILD file (shell script like) in source() function. Here is an example: For github-desktop the source is a stable git release tag: pkgver=x.y.z_pkgver="${pkgver}-linux1"gitname="release-${_pkgver}"https://github.com/shiftkey/desktop.git#tag=${gitname} For github-desktop-bin the source is a already packed Debian package: pkgver=x.y.z_pkgver="${pkgver}-linux1"gitname="release-${_pkgver}"https://github.com/shiftkey/desktop/releases/download/${gitname}/GitHubDesktop-linux-${_pkgver}.deb For github-desktop-git the source is latest master branch: https://github.com/shiftkey/desktop.git Further readings: Arch Wiki: Arch User Repository (AUR) Manjaro Forum: The difference between bin and non bin packages
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/543013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/321803/" ] }
543,019
A command outputs this: file_0file_1file_10file_11file_12file_13file_14file_15file_2file_3file_4file_5file_6file_7file_8file_9 How can I use awk or some other posix tool to actually sort it by the contiguous digits as a single number: file_0file_1file_2file_3file_4file_5file_6file_7file_8file_9file_10file_11file_12file_13file_14file_15 In general it should also work in case the digits is inside the file name, e.g.: file_0.txtfile_1.txtfile_10.txtfile_11.txtfile_12.txtfile_13.txtfile_14.txtfile_15.txtfile_2.txtfile_3.txtfile_4.txtfile_5.txtfile_6.txtfile_7.txtfile_8.txtfile_9.txt
sort -nt '_' -k2 Output: file_0file_1file_2file_3file_4file_5file_6file_7file_8file_9file_10file_11file_12file_13file_14file_15 or: file_0.txtfile_1.txtfile_2.txtfile_3.txtfile_4.txtfile_5.txtfile_6.txtfile_7.txtfile_8.txtfile_9.txtfile_10.txtfile_11.txtfile_12.txtfile_13.txtfile_14.txtfile_15.txt Tested with FreeBSD and GNU coreutils implementations of sort butwould not work with busybox implementation. All options used arespecified by POSIX .
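With GNU sort there is also version sort, which copes with the digits appearing anywhere in the file name; note that -V is a GNU extension rather than POSIX, so treat it as an optional alternative:
sort -V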
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350619/" ] }
543,085
I need to improve a script which continuously tests whether a website is up. The following script is currently being used, but it is sending a large number of failure emails while the website is still up: #!/bin/bashwhile true; do date > wsdown.txt ; cp /dev/null pingop.txt ; ping -i 1 -c 1 -W 1 website.com > pingop.txt ; sleep 1 ; if grep -q "64 bytes" pingop.txt ; then : else mutt -s "Website Down!" [email protected] < wsdown.txt ; sleep 10 ; fidone I'm now thinking of somehow improving this script or using another approach.
You don't need ; at the end of each line, this is not C. You don't need: cp /dev/null pingop.txt because the very next line in the script ping -i 1 -c 1 -W 1 google.com > pingop.txt will overwrite contents of pingop.txt anyway. And if we're here, youdon't even need to save output of ping to the file if you're notgoing to send it or process it later, just do: if ping -i 1 -c 1 -W 1 website.com >/dev/null 2>&1then sleep 1else mutt -s "Website Down!" [email protected] < wsdown.txt sleep 10 To answer your question about false alarms - ping might not be thebest way for testing if website is up. Some websites just do notrespond to ICMP requests, for example: $ ping -i 1 -c 1 -W 1 httpbin.orgPING httpbin.org (3.222.220.121) 56(84) bytes of data.--- httpbin.org ping statistics ---1 packets transmitted, 0 received, 100% packet loss, time 0ms However, http://httpbin.org is up. If you're using website.com inyour example you most probably access it with HTTP/HTTPS and in thatcase consider using curl -Is : $ curl -Is "httpbin.org" >/dev/null 2>&1$ echo $?0$ curl -Is "non-existing-domain-lalalala.com" >/dev/null 2>&1$ echo $?6 OP asked about speed difference between ping and curl in thecomments. There is no big difference if you're testing website thatresponds to ping : $ time curl -Is google.com >/dev/null 2>&1real 0m0.068suser 0m0.002ssys 0m0.001s$ time ping -i 1 -c 1 -W 1 google.comPING google.com (216.58.215.110) 56(84) bytes of data.64 bytes from waw02s17-in-f14.1e100.net (216.58.215.110): icmp_seq=1 ttl=54 time=8.06 ms--- google.com ping statistics ---1 packets transmitted, 1 received, 0% packet loss, time 0msrtt min/avg/max/mdev = 8.068/8.068/8.068/0.000 msreal 0m0.061suser 0m0.000ssys 0m0.000s But when testing website that does not respond to ping then curl is not only more reliable but also faster than ping with -W that youuse now: $ time ping -i 1 -c 1 -W 1 httpbin.orgPING httpbin.org (3.222.220.121) 56(84) bytes of data.--- httpbin.org ping statistics ---1 packets transmitted, 0 received, 100% packet loss, time 0msreal 0m1.020suser 0m0.000ssys 0m0.000s$ time curl -Is httpbin.org >/dev/null 2>&1real 0m0.256suser 0m0.003ssys 0m0.000s
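Putting those points together, a minimal sketch of the monitoring loop rewritten around curl (adjust the URL, sleep intervals and recipient to your needs) could look like:
#!/bin/bash
while true; do
    # curl exits non-zero when the site cannot be reached
    if curl -Is website.com >/dev/null 2>&1; then
        sleep 1
    else
        date > wsdown.txt
        mutt -s "Website Down!" [email protected] < wsdown.txt
        sleep 10
    fi
done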
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/543085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153059/" ] }
543,149
It is a simple problem. I have a csv file with multiple columns I would like to extract 3 columns and save the output to a text file. sample of my dataset: page_id post_name link post_type likes_count5550296508 Ben Carson www.cnn.com shared_story 1925835830242058 John Smith www.abc.com news_story 4679485676544 Sara John www.msc.com shared_story 462 I would like to select three columns and save them to a text file with a comma seperator.The desired output: (or any similar format that shows the columns in a neat way. it doesn't have to be exactly like this format) "page_id","post_name","post_type""5550296508","Ben Carson","shared_story""5830242058","John Smith", "news_story" "9485676544", "Sara John", "shared_story" I tried to use awk : awk -F',' '{print $1,$2,$4}' Data.csv > output.txt It returns this output with a blank space between the columns, I would like to replace the blank space with a comma: page_id post_name post_type 5550296508 Ben Carson shared_story 5830242058 John Smith news_story 9485676544 Sara John shared_story I tried printf but I am not sure I am using the correct string because it doesn't return the output I want. awk '{printf "%s,%s,%s", $1,$2,$4}' Data.csv > output.txt using sed . This only replaces the first blank with a comma. awk -F',' '{print $2,$5,$10}' Data.csv | sed 's/ /,/' > output.txt
You can use the command below to separate the fields with a comma , : awk '{print $1","$2","$4}' Data.csv > output.txt The output will be: page_id,post_name,post_type5550296508,Ben,www.cnn.com5830242058,John,www.abc.com9485676544,Sara,www.msc.com
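An equivalent that avoids repeating the literal comma is to set awk's output field separator instead; same fields, just a different idiom:
awk 'BEGIN{OFS=","} {print $1, $2, $4}' Data.csv > output.txt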
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373480/" ] }
543,186
I have a number of folders, each with audio files in it. I want to pick out 30% of each of this files and cut them (not copy) to another folder. I can see this post which can help me do this given that I know the number of files in each folder. Sadly the number may change and I want a single piped bash line that can do this. Is this possible? How do I choose 30% of the files and cut them to another folder?
With bash 4.4+ and on a GNU system, you can do: readarray -td '' files < <( shopt -s nullglob dotglob printf '%s\0' * | sort -Rz) to populate the $files array with a shuffled list of all the files in the current directory. Then you can move 30% of them with something like: echo mv -- "${files[@]:0:${#files[@]}*30/100}" /target/directory/ (remove the echo when you're satisfied it's going to do what you want). The equivalent in the zsh shell could be something like: files=(*(NDnoe['REPLY=$RANDOM']))echo mv -- $files[1,$#files*30/100] /target/directory/ That's the same approach, only terser and not needing external utilities. Translation: shopt -s nullglob -> N glob qualifier (create an empty array when there's no file). shopt -s dotglob -> D glob qualifier (do not exclude files whose name start with a dot). GNU sort -Rz : noe['REPLY=$RANDOM'] (shuffle the list by sorting using a random order). ${array[@]:offset:length} -> $array[first,last] (zsh now also supports the Korn shell syntax, but I find the zsh one more legible). With bash we use NUL delimited records ( -d '' / -z / \0 ) to be able to deal with arbitrary file names. It's not needed in zsh as the list is never transformed to a single string/stream.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/355919/" ] }
543,194
Using Raspbian Buster, I have 2 users, The root user (pi) and a user I setup (polysense). I have a script that runs at startup that is owned by the root user. Part of the script is that it writes to a file on the polysense account. However I am getting a permissions error when the script runs.I tried: sudo chmod 777 configSite as configSite is the folder that contains the files I need to write to. When I view the permissions, it does not change. What am I doing wrong here?I am expecting to see drwxr-xr-rwx for the files inside configSite. pi@polysensesolutions:~ $ cd /home/polysense/pi@polysensesolutions:/home/polysense $ sudo chmod 777 configSitepi@polysensesolutions:/home/polysense $ cd configSitepi@polysensesolutions:/home/polysense/configSite $ ls -ltotal 164-rw-r--r-- 1 root root 1985 Sep 17 04:25 '>'drwxr-xr-x 3 polysense polysense 4096 Sep 2 16:33 configSite-rw-r--r-- 1 root root 143360 Sep 17 04:25 db.sqlite3-rwxr-xr-x 1 polysense polysense 630 Sep 2 15:51 manage.pydrwxr-xr-x 4 polysense polysense 4096 Sep 2 16:34 pagesdrwxr-xr-x 2 polysense polysense 4096 Sep 17 04:12 templatesdrwxr-xr-x 4 polysense polysense 4096 Sep 17 04:25 wifiApp
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373889/" ] }
543,195
I have a directory foo with subdirectories. I wish to create the same subdirectory names in another directory without copying their content. How do I do this? Is there a way to get ls output as a brace expansion list?
Try this, cd /source/dir/pathfind . -type d -exec mkdir -p -- /destination/directory/{} \; . -type d To list directories in the current path recursively. mkdir -p -- /destination/directory/{} create directory at destination. This relies on a find that supports expanding {} in the middle of an argument word.
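If rsync is available, another sketch that replicates only the directory skeleton (and skips every regular file) is:
rsync -a --include='*/' --exclude='*' /source/dir/path/ /destination/directory/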
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/543195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/355919/" ] }
543,205
Consider the shared object dependencies of /bin/bash , which includes /lib64/ld-linux-x86-64.so.2 (dynamic linker/loader): ldd /bin/bash linux-vdso.so.1 (0x00007fffd0887000) libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f57a04e3000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f57a04de000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f57a031d000) /lib64/ld-linux-x86-64.so.2 (0x00007f57a0652000) Inspecting /lib64/ld-linux-x86-64.so.2 shows that it is a symlink to /lib/x86_64-linux-gnu/ld-2.28.so : ls -la /lib64/ld-linux-x86-64.so.2 lrwxrwxrwx 1 root root 32 May 1 19:24 /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.28.so Furthermore, file reports /lib/x86_64-linux-gnu/ld-2.28.so to itself be dynamically linked: file -L /lib64/ld-linux-x86-64.so.2/lib64/ld-linux-x86-64.so.2: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped I'd like to know: How can the dynamically linker/loader ( /lib64/ld-linux-x86-64.so.2 ) itself be dynamically linked? Does it link itself at runtime? /lib/x86_64-linux-gnu/ld-2.28.so is documented to handle a.outbinaries ( man ld.so ), but /bin/bash is an ELF executable? The program ld.so handles a.out binaries, a format used long ago; ld-linux.so* (/lib/ld-linux.so.1 for libc5, /lib/ld-linux.so.2 for glibc2) han‐ dles ELF, which everybody has been using for years now.
Yes, it links itself when it initialises. Technically the dynamic linker doesn’t need object resolution and relocation for itself, since it’s fully resolved as-is, but it does define symbols and it has to take care of those when resolving the binary it’s “interpreting”, and those symbols are updated to point to their implementations in the loaded libraries. In particular, this affects malloc — the linker has a minimal version built-in, with the corresponding symbol, but that’s replaced by the C library’s version once it’s loaded and relocated (or even by an interposed version if there is one), with some care taken to ensure this doesn’t happen at a point where it might break the linker. The gory details are in rtld.c , in the dl_main function. Note however that ld.so has no external dependencies. You can see the symbols involved with nm -D ; none of them are undefined. The manpage only refers to entries directly under /lib , i.e. /lib/ld.so (the libc 5 dynamic linker, which supports a.out ) and /lib*/ld-linux*.so* (the libc 6 dynamic linker, which supports ELF). The manpage is very specific, and ld.so is not ld-2.28.so . The dynamic linker found on the vast majority of current systems doesn’t include a.out support. file and ldd report different things for the dynamic linker because they have different definitions of what constitutes a statically-linked binary. For ldd , a binary is statically linked if it has no DT_NEEDED symbols, i.e. no undefined symbols. For file , an ELF binary is statically linked if it doesn’t have a PT_DYNAMIC section (this will change in the release of file following 5.37; it now uses the presence of a PT_INTERP section as the indicator of a dynamically-linked binary, which matches the comment in the code). The GNU C library dynamic linker doesn’t have any DT_NEEDED symbols, but it does have a PT_DYNAMIC section (since it is technically a shared library). As a result, ldd (which is the dynamic linker) indicates that it’s statically linked, but file indicates that it’s dynamically linked. It doesn’t have a PT_INTERP section, so the next release of file will also indicate that it’s statically linked. $ ldd /lib64/ld-linux-x86-64.so.2 statically linked$ file $(readlink /lib64/ld-linux-x86-64.so.2)/lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped (with file 5.35) $ file $(readlink /lib64/ld-linux-x86-64.so.2)/lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), statically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped (with the currently in-development version of file ).
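For instance, to check the earlier claim about undefined symbols yourself, GNU nm can list only the undefined dynamic symbols; an empty result confirms there are none:
nm -D --undefined-only /lib64/ld-linux-x86-64.so.2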
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/543205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128739/" ] }
543,211
Highest Columns Number Records Issue I have a test.txt file with contents as follows: 1:2:3123:534589:5:034567:8:7781:9:09 Could you please help me get the following output from that test.txt file? 345895:0345678:77819:09 Explanation: the below lines contain the highest number of columns, i.e. 3, and removed : 345895:0345678:77819:09
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/543211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373904/" ] }
543,214
I am combining ls , xargs and bash -c to run a specific program on multiple input files (whose names match the pattern barcode0[1-4].fastq . The program that I am calling with xargs is called krocus and below is the code I used to call the program with a single placeholder (for the multiple input files): ls barcode0[1-4].fastq |cut -d '.' -f 1|xargs -I {} bash -c "krocus --verbose --output_file out_krocus/{}_Ecoli1.txt /nexusb/Gridion/MLST_krocus_DBs/e_coli1 {}.fastq In the above code my placeholder are the files I want to call krocus on i.e. barcode0[1-4].fastq . I now want to add a second placeholder that will be passed to the --kmer argument of krocus. For this argument, I want to pass the numbers seq 12 14 i.e. 12, 13 and 14. In this example I would like to create a total of 12 files (which means I want to combine the three --kmer values for the four input files) Below is how I imagine this code would look like with the placeholder for the kmer argument: ls barcode0[1-4].fastq |cut -d '.' -f 1|xargs -I {} bash -c "krocus --verbose --output_file out_krocus/{placeholderForInputFile}_{placeholderForKmerSize}Ecoli1.txt /nexusb/Gridion/MLST_krocus_DBs/e_coli1 --kmer {placeholderForKmerSize} {placeholderForInputFile}.fastq The problem is that I need to pipe the output of two commands i.e. ls barcode0[1-4].fastq and seq 12 14
Don't Parse ls xargs does not support two placeholders. Use a for loop. Actually, two loops (one for the filenames, and one for the 12..14 sequence). For example: #!/bin/bashfor f in barcode0[1-4].fastq; do bn="$(basename "$f" .fastq)" for k in {12..14}; do krocus --verbose --output_file "out_krocus/${bn}_${k}Ecoli1.txt" \ /nexusb/Gridion/MLST_krocus_DBs/e_coli1 --kmer "$k" "$f" donedone
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/321861/" ] }
543,253
I wanted to make a list of similar clear commands clearer to read, so I've made a little terminal loop for what in \ cache \ thumbs \; do my template $what:clear; done It works great, however I want to achieve an equivalent of my template cache:clear && my template thumbs:clear Is there a way to easily and clearly put && to work in such a loop, without the complexity of adding an if/else statement to the loop body that checks the last command's exit code and calls break if it is not 0? If an if condition is enough for you: for what in \ cache \ thumbs \; domy template $what:clear;if [ $? -ne 0 ]; then break; fi \; done
You could use set -e : (set -e;for what in \ cache \ thumbs;do my template $what:clear; done) set -e causes the shell to exit when a command exits with a non-zero exit code, and the failure isn’t handled in some other way. In the above, the whole subshell exits if my ... fails, and the subshell’s exit code is the failed command’s exit code. This works in shell scripts and on the command line. Another approach is to use a disjunction: for what in \ cache \ thumbs; do my template $what:clear || breakdone This has the disadvantage of requiring || break after every command, and of swallowing the exit code indicating failure; the for loop will exit with a zero exit code every time. You can avoid this by making break fail, which will cause the for loop to exit with an exit code of 1 (which is still not as good as the subshell approach above): for what in \ cache \ thumbs; do my template $what:clear || break -1 2>/dev/nulldone ( break -1 causes break to exit with a non-zero exit code, and 2>/dev/null throws away its error message). As far as I know you can’t use && to exit a loop, as in && done or something similar.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223991/" ] }