source_id | question | response | metadata
---|---|---|---|
506,938 | In a tutorial for installing SendEmail, I found instructions to install 2 prerequisites, 2 Perl modules: apt-get install 'perl(Net::SSLeay)' 'perl(IO::Socket::SSL)' This syntax also looked strange to my Debian 9: E: Unable to find package perl(Net::SSLeay) E: Unable to find package perl(IO::Socket::SSL) I tried using cpan (I honestly do not know what it is ...) but it ended in some syntax errors. EDIT, more doubts: In this 'Ask Ubuntu' question, I see instructions to execute apt-get install libnet-ssleay-perl libio-socket-ssl-perl But in the tutorial I am following, the instruction is to execute first apt-get install libnet-ssleay-perl libio-socket-ssl-perl and then apt-get install 'perl(Net::SSLeay)' 'perl(IO::Socket::SSL)' | How to install Perl modules on Debian 9? 1) Through cpan : cpan Module::Name e.g.: cpan IO::Socket::SSL ; cpan Net::SSLeay 2) Through apt: Use apt-file to get the exact package name, then install it. To install apt-file : apt install apt-file ; apt-file update To get the package name: apt-file search Module/Name or: apt-file search Module/Name | awk '{print $1}' | uniq | tr -d \: e.g.: apt-file search IO/Socket/SSL | awk '{print $1}' | uniq | tr -d \: Sample output: libio-socket-ssl-perl Installing the libio-socket-ssl-perl package will install the IO::Socket::SSL Perl module: apt install libio-socket-ssl-perl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/506938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167205/"
]
} |
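A quick way to confirm the two modules are actually usable after installing the Debian packages is to try loading them from the command line. This is just a sanity check, assuming the stock Debian perl; the one-liners exit non-zero if a module is missing:

```bash
# Install the Debian packages that ship the two modules
sudo apt install libnet-ssleay-perl libio-socket-ssl-perl

# Verify each module loads and print its version (fails loudly if not installed)
perl -MNet::SSLeay -e 'print "Net::SSLeay $Net::SSLeay::VERSION\n"'
perl -MIO::Socket::SSL -e 'print "IO::Socket::SSL $IO::Socket::SSL::VERSION\n"'
```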
506,998 | I would like to store the default journal directory /var/log/journal/ on a mounted device, but I am not sure if I can do it due to the fact that journal is an important service which may already run before any device is mounted which would lead to the fact that suddenly the directly changes. Is it safe/possible to store the journal directory on a mounted device and if yes, how to do it, if a simple mounting doesn't work. Directory listing from the journal entries: 2019-03-18 22:16:41 root@AAEB-APP206LY:/var/log/journal/d41cf15550e34487abe7103b61fbf794 => lltotal 792Mdrwxr-sr-x 1 root systemd-journal 884 Mar 12 06:35 ./drwxr-sr-x 1 root systemd-journal 64 Feb 26 18:17 ../-rw-r----- 1 root systemd-journal 96M Mar 18 22:16 system.journal-rw-r----- 1 root root 120M Feb 26 18:17 system@d5301574c947425cb992f7839ae52cdb-0000000000000001-0005827c7effc14d.journal-rw-r----- 1 root systemd-journal 96M Mar 5 12:29 system@d5301574c947425cb992f7839ae52cdb-0000000000051acb-000582cf3a7ba719.journal-rw-r----- 1 root systemd-journal 96M Mar 12 06:35 system@d5301574c947425cb992f7839ae52cdb-00000000000872b4-000583572e31154d.journal-rw-r-----+ 1 root systemd-journal 128M Mar 18 22:16 user-5000.journal-rw-r-----+ 1 root root 128M Mar 5 12:29 user-5000@cf6acecdf28e48c790173a36447ec2e7-0000000000051ad9-000582cf3d435013.journal-rw-r-----+ 1 root systemd-journal 128M Mar 12 06:35 user-5000@cf6acecdf28e48c790173a36447ec2e7-00000000000872b9-000583572e312040.journal As you can see it is occupying almost 800 MB and this fills up the main partition. Therefore the idea to store it on a different filesystem. | As far as I can tell, there is no way to change the location of systemd's predefined logging directories, /run/log/journal and /var/log/journal . It's possible via the Storage configuration option to choose which of those two options are used. But you can't change the path to /anotherfs/log/journal . What you can do is make /var/log/journal a symlink to another directory, and have that directory be in another filesystem. Systemd-tmpfiles can automatically make this link. Drop a file with this into /etc/tmpfiles.d/ : L /var/log/journal - - - - /anotherfs/journal Now if /var/log/journal doesn't exist it will be a link to another filesystem and the journal will go there. But then you will most likely run into another problem. Journald will switch from using a non-persistent journal in /run to the persistent journal in /var during boot, and flush the journal data from /run into /var. Obviously we need this switch to happen after /anotherfs is mounted! If the symlink pointed into the rootfs, or into the same fs as /var/log is on, then this wouldn't be issue and it would work fine as a way to change the path the journal data is at. The way this ordering, mount filesystem then flush journal, is normally achieved is through systemd-journal-flush.service having a RequiresMountsFor=/var/log/journal property. If /var/log/journal is a symlink, systemd only waits for the filesystem the link itself is in to be mounted, not what the link points too. And so the ordering doesn't work. We can get around this via bind mounts. Instead of symlinking /var/log/journal to another fs, we bind mount a directory from another fs (or directly mount the other fs, if the other fs is only for the journal) onto /var/log/journal . 
Create a unit file which must be named var-log-journal.mount : [Unit]Description=Persistent Journal Storage Bind[Mount]What=/anotherfs/journalWhere=/var/log/journalType=noneOptions=bind[Install]WantedBy=local-fs.target Install and enable this. Now the journal flush unit will see there is a mount unit for var-log-journal (it only looks for that exact name!) and wait for it. The mount unit will bind mount the directory /anotherfs/journal onto /var/log/journal . Systemd will automatically add ordering dependencies to wait for /anotherfs to be mounted if it needs that and will automatically create the mountpoint /var/log/journal if it's not already there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/506998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144303/"
]
} |
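For readability, here is the mount unit from the answer above in its usual multi-line layout; /anotherfs/journal is the answer's placeholder for a directory on the other filesystem:

```ini
# /etc/systemd/system/var-log-journal.mount  (the file name must match the mount point)
[Unit]
Description=Persistent Journal Storage Bind

[Mount]
What=/anotherfs/journal
Where=/var/log/journal
Type=none
Options=bind

[Install]
WantedBy=local-fs.target
```

Install and enable it with systemctl enable var-log-journal.mount, as the answer describes.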
507,011 | I use a ProxyJump command for a number of ssh sessions I use daily, and also switch users a lot on these sessions and having to type exit 3 or 4 times in a row isn't too fun. I am aware of newline + ~ + . to terminate an ssh session, I still have to check if it terminates it amicably like an exit would, but how do you exit all sessions in the current shell with a single command or shortcut such that typing exit 3 or 4 times in my case becomes a one-time thing? | Ctrl - D will exit a shell in many cases. It is quicker than typing exit Enter . It's still not a single command to terminate everything, but holding Ctrl and hitting D several times is easier and faster. Not sure how valuable this is for your use case. Discussed in detail here . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/507011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32917/"
]
} |
507,021 | I am having an error where the fields of my variable are not being detected when trying to build a configuration using a jinja2 template. This is to sync Linux repositories to yum and apt based systems from a golden source using ansible. Each repository configuration would go into a different file and the task updated with the variable name. Base system configs should be able to be put in one file using multiple uses of "-" then a list of attributes. I have reviewed: for loop in jinja2 https://omarkhawaja.com/accessing-ansible-variables-with-jinja2-loops/ https://stackoverflow.com/questions/25418158/templating-multiple-yum-repo-files-with-ansible-template-module as well as others that are less relevant to what I am doing. var file: ---repo: - name: google_chrome async: 1 url: http://dl.google.com/linux/chrome/rpm/stable/x86_6... include var task: - name: Include var into the 'chrome' variable. include_vars: file: google_chrome_repo.yaml name: chrome task to use template module: - name: generate config for Centos template: src: yum_template.j2 dest: "/etc/yum.repos.d/{{ item }}.repo" backup: yes with_items: - chrome when: - ansible_distribution == 'CentOS' template: {% for i in item %}[ {{ i.name }} ]async = {{ i.async }}baseurl = {{ i.url }}enabled = {{ i.repo_enable }}enablegroups = {{ i.pkggrp_enable }}failovermethod = {{ i.ha_method }}gpgkey = {{ i.gpgkey_url }}http_caching = {{ i.http_caching }}keepcache = {{ i.keepcache }}metadata_expire = {{ i.metadata_expire }}mirrorlist = {{ i.mirrorlist }}mirrorlist_expire = {{ i.mirrorlist_expire }}name = {{ i.descrip }}protect = {{ i.protect }}proxy = {{ i.proxy_config }}proxy_password = {{ i.proxy_username }}proxy_username = {{ i.proxy_password }}repo_gpgcheck = {{ i.repo_gpgcheck }}retries = {{ i.repo_retry_count }}s3_enabled = {{ i.s3_enabled }}sslverify = {{ i.ssl_verify }}timeout = {{ i.timeout }}{% endfor %} error: failed: [192.168.33.31] (item=chrome) => {"changed": false, "item": "chrome", "msg": "AnsibleUndefinedVariable: 'unicode object' has no attribute 'name'"} whichever attribute in role is first called by the jinja2 template will fail in this way. If I change the following so name isn't referenced and "i.name" becomes just "chrome" it will fail on async I can see the variable is imported ok: [192.168.33.31] => {"ansible_facts": {"chrome": {"repo": [{"async": 1, "descrip": "Google Chrome Repository", "gpgkey_url": "https://dl.google.com/linux/linux_signing_key.pub", "ha_method": "roundrobin", "http_caching": 1, "keepcache": 1, "metadata_expire": 21600, "mirrorlist": null, "mirrorlist_expire": 21600, "name": "google_chrome", "pkggrp_enable": 1, "protect": 0, "proxy_config": "__None__", "proxy_password": null, "proxy_username": null, "repo_enable": 1, "repo_gpgcheck": 1, "repo_retry_count": 10, "s3_enabled": 0, "ssl_verify": 1, "timeout": 1, "url": "http://dl.google.com/linux/chrome/rpm/stable/x86_6"}]}}, "ansible_included_var_files": ["/var/lib/awx/projects/_6__trowe/playbooks/roles/Manage_Linux_Repos/vars/google_chrome_repo.yaml"], "changed": false} I do see it says "unicode" variable where I would expect it to be a dict. I have also tried with_dict and the error says that the variable is not a dictionary. However if I structure the variable file without "repo:", it will error saying it was not passed a dictionary object... | Ctrl - D will exit a shell in many cases. It is quicker than typing exit Enter . 
It's still not a single command to terminate everything, but holding Ctrl and hitting D several times is easier and faster. Not sure how valuable this is for your use case. Discussed in detail here . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/507021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342384/"
]
} |
507,023 | Let's suppose I want to find all .txt files and search for some string. I would do: find ./ -type f -name "*.txt" -exec egrep -iH 'something' '{}' \; What if I want to do a more complex filtering, like this: egrep something file.txt | egrep somethingelse | egrep other Inside find -exec? (or similar) Please keep in mind that I'm searching for a solution that I could easily type when I need it. I know that this could be done with a few lines using a shell script, but that isn't what I'm looking for. | If you must do it from within find, you need to call a shell: find ./ -type f -name "*.txt" -exec sh -c 'grep -EiH something "$1" | grep -E somethingelse | grep -E other' sh {} \; Other alternatives include using xargs instead: find ./ -type f -name "*.txt" | xargs -I{} grep -EiH something {} | grep -EiH somethingelse | grep -EiH other Or, much safer for arbitrary filenames (assuming your find supports -print0 ): find ./ -type f -name "*.txt" -print0 | xargs -0 grep -EiH something {} | grep -Ei somethingelse | grep -Ei other Or, you could just use a shell loop instead: find ./ -type f -name "*.txt" -print0 | while IFS= read -d '' file; do grep -Ei something "$file" | grep -Ei somethingelse | grep -Ei other done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342385/"
]
} |
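When many files match, starting one shell per file is slow; a common refinement is to let find batch the file names with "+" and loop inside a single shell. A sketch along the same lines as the first command in the answer, assuming a POSIX sh and grep -E:

```bash
find . -type f -name '*.txt' -exec sh -c '
  for f; do
    grep -EiH something "$f" | grep -Ei somethingelse | grep -Ei other
  done' sh {} +
```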
507,044 | I have the bash line: expr substr $SUPERBLOCK 64 8 which returns the string: 00080000 I know that this is actually 0x00080000 in little-endian. Is there a way to create an integer variable from it in bash, in big-endian form, like 0x80000? | There is probably a better way to do this, but I've come up with this solution, which converts the number to decimal and then back to hex (and manually adds the 0x ): printf '0x%x\n' "$((16#00080000))" Which you could write as: printf '0x%x\n' "$((16#$(expr substr "$SUPERBLOCK" 64 8)))" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277138/"
]
} |
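Putting the accepted approach into a variable, since the question asks for an integer variable; $SUPERBLOCK is whatever hex string the questioner already has:

```bash
hex=$(expr substr "$SUPERBLOCK" 64 8)   # e.g. "00080000"
val=$((16#$hex))                        # bash base#number syntax -> 524288 decimal
printf '0x%x\n' "$val"                  # prints 0x80000
```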
507,093 | I am using Manjaro Linux 18 (an Arch Linux based distro) with the XFCE desktop environment. I have 8 workspaces on my computer. How do I move to the next and previous workspace using the command line? I have Googled and found multiple apps on GitHub that are said to do that, but none seem to work. | You will need xdotool. Installation: sudo pacman -S xdotool Usage: Going to the next workspace: xdotool set_desktop --relative 1 Going to the previous workspace: xdotool set_desktop --relative -1 NOTE: Negative numbers are said to be allowed, but some versions of xdotool do not allow negative numbers or at least give an error. Test negative numbers before you implement scripts with them. Workaround for going to the previous workspace: If you have n workspaces, then to go to the previous workspace run xdotool set_desktop --relative n-1 where n = number of workspaces. Example: with n = 8 workspaces, xdotool set_desktop --relative 7 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337442/"
]
} |
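If the switch should wrap around (previous from workspace 1 lands on workspace 8, and so on), here is a small sketch using xdotool's get_desktop and get_num_desktops subcommands; workspace numbers are 0-based here:

```bash
#!/bin/bash
# next-workspace: move one workspace forward, wrapping around at the end
cur=$(xdotool get_desktop)        # current workspace (0-based)
n=$(xdotool get_num_desktops)     # total number of workspaces
xdotool set_desktop $(( (cur + 1) % n ))
```

For "previous", replace cur + 1 with cur - 1 + n.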
507,109 | I am thinking to do such thing, but I am not aware for such conditions. So, I wanted to know what will happen if I remount. | You will need xdotool. Installation sudo pacman -S xdotool Usage Going to the next workspace xdotool set_desktop --relative 1 Going to the previous workspace xdotool set_desktop --relative -1 NOTE: Negative numbers are said to be allowed, but some versions of xdotool do not allow negative numbers or at least give an error. Test negative numbers before you implement scripts with negative numbers. Work around for going to the previous workspaceIf you have n workspaces, then for going into the previous workspace xdotool set_desktop --relative n-1 where n = number of workspaces. Example:n = 8 workspaces xdotool set_desktop --relative 7 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342463/"
]
} |
507,111 | How do I get a shell script to remove duplicates in a text file, based on the 11th 21st columns? Sample file: Header:0000000000000001457854500000XP 12345678912yeyeyeyeeye 0000003XP 12345678913yeyeyeyeeye 0000002XP 12345678912yeyeyeyeeye 0000004XP 12345678913yeyeyeyeeye 0000001Footer:0000000000000001245856500004 Expected output: Header:0000000000000001457854500000XP 12345678913yeyeyeyeeye 0000001Xp 12345678912yeyeyeyeeye 0000004Footer:0000000000000001245856500001 | You will need xdotool. Installation sudo pacman -S xdotool Usage Going to the next workspace xdotool set_desktop --relative 1 Going to the previous workspace xdotool set_desktop --relative -1 NOTE: Negative numbers are said to be allowed, but some versions of xdotool do not allow negative numbers or at least give an error. Test negative numbers before you implement scripts with negative numbers. Work around for going to the previous workspaceIf you have n workspaces, then for going into the previous workspace xdotool set_desktop --relative n-1 where n = number of workspaces. Example:n = 8 workspaces xdotool set_desktop --relative 7 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342466/"
]
} |
507,131 | Today I got this warning issued by OpenSSL in Cygwin after updating some packages, I believe openssl was included: *** WARNING : deprecated key derivation used. Using -iter or -pbkdf2 would be better. The OpenSSL version used in Cygwin was: OpenSSL 1.1.1b 26 Feb 2019 This happened while decrypting my Backup on BluRay, which I created on Linux Mint 19.1 , where the OpenSSL version is significantly older : OpenSSL 1.1.0g 2 Nov 2017 The command used to encrypt and decrypt (just add -d to the end) was: $ openssl enc -aes-256-cbc -md sha256 -salt -in "${InputFilePath}" -out "${OutputFilePath}" What does this warning mean and can I do anything to avoid it in the future backups? | Comparing the Synopsys of the two main and recent versions of OpenSSL, let me quote the man pages. OpenSSL 1.1.0 openssl enc -ciphername [-help] [-ciphers] [-in filename] [-out filename] [-pass arg] [-e] [-d] [-a/-base64] [-A] [-k password] [-kfile filename] [-K key] [-iv IV] [-S salt] [-salt] [-nosalt] [-z] [-md digest] [-p] [-P] [-bufsize number] [-nopad] [-debug] [-none] [-engine id] OpenSSL 1.1.1 openssl enc -cipher [-help] [-ciphers] [-in filename] [-out filename] [-pass arg] [-e] [-d] [-a] [-base64] [-A] [-k password] [-kfile filename] [-K key] [-iv IV] [-S salt] [-salt] [-nosalt] [-z] [-md digest] [-iter count] [-pbkdf2] [-p] [-P] [-bufsize number] [-nopad] [-debug] [-none] [-rand file...] [-writerand file] [-engine id] There obviously are some greater differences, namely considering this question, there are these two switches missing in the 1.1.0: pbkdf2 iter You have basically two options now. Either ignore the warning or adjust your encryption command to something like: openssl enc -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 -salt -in InputFilePath -out OutputFilePath Where these switches: -aes-256-cbc is what you should use for maximum protection or the 128-bit version, the 3DES (Triple DES) got abandoned some time ago, see Triple DES has been deprecated by NIST in 2017 , while AES gets accelerated by all modern CPUs by a lot; you can simply verify if your CPU has the AES-NI instruction set for example using grep aes /proc/cpuinfo ; win, win -md sha512 is the faster variant of SHA-2 functions family compared to SHA-256 while it might be a bit more secure; win, win -pbkdf2 : use PBKDF2 (Password-Based Key Derivation Function 2) algorithm -iter 100000 is overriding the default count of iterations (10000) for the password, quoting the man page: Use a given number of iterations on the password in deriving the encryption key. High values increase the time required to brute-force the resulting file. This option enables the use of PBKDF2 algorithm to derive the key. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/507131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
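A full round trip with the suggested options, as a sketch; note that the same -md, -pbkdf2 and -iter values must be repeated when decrypting, otherwise the derived key differs and decryption fails:

```bash
# Encrypt
openssl enc -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 -salt \
    -in backup.tar -out backup.tar.enc

# Decrypt with the same KDF parameters
openssl enc -d -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 \
    -in backup.tar.enc -out backup.tar.out
```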
507,142 | All filenames mentioned here are directories. The permissions of /media/disk are 0744 ( drwxr--r-- ). The permissions of /media/disk/directory are 0755 ( drwxr-xr-x ). I do not own these directories in anyway. Why can I ls /media/disk , but can't ls /media/disk/directory ? My guess is that ls needs run access to /media/disk , but this would be stupid because if I have read access to a file (i.e. if r is set), then I should be able to read the file. In addition to the question above, if I'm correct in saying that the issue is due to lack of run access, I want to ask why what I said is stupid, isn't. System information: DISTRIB_ID=UbuntuDISTRIB_RELEASE=18.04DISTRIB_CODENAME=bionicDISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS" I don't think the proposed duplicate explains why this feature isn't stupid. | Your /media/disk directory lacks the execute bit for group and "others". This means (IMHO in a somewhat confusing way) that you can successfully read from the directory (as the read bit is set) and list its contents, whileyou cannot act on it nor enter it (via cd ), and this includes listing its children's content as long the permission mask is 744 . If you want to access some specific subdirectory(-ies) without giving access to the whole tree, then it's a simple matter of removing read access but setting the execute bit: $ su# mkdir -p /tmp/parent/child# chmod 711 /tmp/parent/# chmod 755 /tmp/parent/child/# touch /tmp/parent/child/test# exit$ ls /tmp/parent/ls: cannot open directory '/tmp/parent/': Permission denied$ cd /tmp/parent/$ pwd/tmp/parent$ lsls: cannot open directory '.': Permission denied$ ls child/test | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342490/"
]
} |
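Applied to the questioner's layout, adding the execute (traversal) bit to the parent is what makes the subdirectory reachable again; a minimal illustration, assuming root or ownership of the mount point:

```bash
# /media/disk is 0744 (drwxr--r--): readable but not traversable by group/others
sudo chmod a+x /media/disk        # 0744 -> 0755 (drwxr-xr-x)
ls /media/disk/directory          # now works, since the subdirectory itself is 0755
```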
507,148 | I use this shell pipeline to get a SQL dump using the terminal: $ cd var/lib/mysql && mysqldump -uroot -p"craft" --add-drop-table craft > ~/../docker-entrypoint-initdb.d/base.sql && cd ~/.. As can be seen, I entered the var/lib/mysql directory and create the dump to a file and come back from where I was initially. The command is correct, but, I guess it can be written concisely like without entering directly the var/lib/mysql directory. Can anyone suggest that? | To be honest, I don't see a reason for the two calls to cd at all. You don't seem to use the directory that you cd into for anything. You give an absolute path for the location of the database dump. If any custom MySQL configuration file is needed, that would be picked up from the user's home directory in any case. You could therefore, quite likely, just use mysqldump -uroot -p"craft" --add-drop-table craft \ > ~/../docker-entrypoint-initdb.d/base.sql regardless of what directory you run that from. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264590/"
]
} |
507,188 | In a (BSD) UNIX environment, I would like to capture a specific substring using a regular expression. Assume that the dmesg command output would include the following line: pass2: <Marvell Console 1.01> Removable Processor SCSI device I would like to capture the text between the < and > characters, like dmesg | <sed command> should output: Marvell Console 1.01 However, it should not output anything if the regex does not match. Many solutions including sed -e 's/$regex/\1/ will output the whole input if no match is found, which is not what i want. The corresponding regexp could be: regex="^pass2\: \<(.*)\>" How would i properly do a regex match using sed or grep ? Note that the grep -P option is unavailable in my BSD UNIX distribution. The sed -E option is available, however. | Try this, sed -nE 's/^pass2:.*<(.*)>.*$/\1/p' Or POSIXly ( -E has not made it to the POSIX standard yet as of 2019): sed -n 's/^pass2:.*<\(.*\)>.*$/\1/p' Output: $ printf '%s\n' 'pass2: <Marvell Console 1.01> Removable Processor SCSI device' | sed -nE 's/^pass2:.*<(.*)>.*$/\1/p'Marvell Console 1.01 This will only print the last occurrence of <...> for each line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312076/"
]
} |
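An alternative without sed, in case it is easier to remember: awk can split each pass2 line on the angle brackets. Only a sketch, and it assumes the first <...> pair on the line is the wanted one:

```bash
dmesg | awk -F'[<>]' '/^pass2:/ { print $2 }'
```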
507,204 | I am currently trying to use Plink to run a few commands on a linux server to monitor a couple of things on it such as the disk space, memory, cpu usage. I have the commands that I want to use but I want to format the output that I get to something a little friendlier to read. This is the commands I am using inside my batch file. FOR /F "" %%G IN (C:\Users\username\Desktop\ip_list.txt) DO "C:\Program Files\PuTTY\plink.exe" -ssh -batch username@%%G -pw password "hostname; df | fgrep '/dev/'; free -h; top -bn2 | grep 'Cpu(s)';" and here is the output that i get Basically, I would like to just add some lines in between the individual command outputs to make this a tiny bit easier to read. Is this possible without writing the output to a text file? Thank you | This can be achieved by adding echo "" in the middle of the commands where the space is required. Here are some example. Adding new line in the middle. Example: df | fgrep '/dev/'; echo ""; free -h output tmpfs 16334344 55772 16278572 1% /dev/shm total used free shared buff/cache availableMem: 31G 4.0G 21G 346M 6.0G 26GSwap: 15G 2.3M 15G Adding detail of the command. Recommended Example: echo "==================" ; echo "This is output of df"; echo "==================" ;df | grep '/dev/shm' ; echo ""; echo "==================" ; echo "This is output of Free"; echo "==================";free -h Output: ==================This is output of df==================tmpfs 16334344 55772 16278572 1% /dev/shm==================This is output of Free================== total used free shared buff/cache availableMem: 31G 4.0G 21G 359M 6.0G 26GSwap: 15G 2.3M 15G | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270038/"
]
} |
507,219 | I'd like to be able to remove a batch of people from a "private" Slack channel. Use case: whoops, I just added 137 people to the wrong channel! The Slack Member ID list is obtained from an Airtable spreadsheet column. copy the range of cells from Airtable format the values into a single-space separated string prepend the Channel ID to the MemberID list Structure the input arguments like so: $ sh script.sh ChannelID MemberID1[ MemberID2 MemberID3 ...] #!/bin/bashargs=("$@")channel=${args[0]}for arg in "${@:2}"; do curl -X POST \ -H 'Authorization: Bearer foobar' \ -H "Content-Type":"application/json; charset=utf-8" \ --data '{"channel":"'"$channel"'", "user":"'"$arg"'"}' \ https://slack.com/api/conversations.kick;done The Slack API method I am cURLing is limited to acting upon one MemberID at a time, and limited to only accepting 50 cURL POSTs per minute ( Tier 3 ). I'd like to take a 50+ list of MemberIDs, and make sure they only get cURL'd in 50-member batches that are a minute apart. I started looking into xargs and made it as far as: if [ ${#args[@]} -gt 2 ]; then echo "channel ID: $channel" echo "${#args[@]} items in the argument array" echo "${@:2}" xargs -n 2 <<<${@:2} | xargs -I {} echo {} | sed -e 's/ /,/g'fi $ sh test.sh FO1O2B3A4R 1 2 3 4 5 6 7 8 9 10 11 channel ID: FO1O2B3A4R 12 items in the argument array 1 2 3 4 5 6 7 8 9 10 11 1,2 3,4 5,6 7,8 9,10 11 Can I use xargs to make the 50 member batch, then for each one of those members, fire off the cURL command, then wait one minute ( sleep 60s ) in between firing off the batches? | This can be achieved by adding echo "" in the middle of the commands where the space is required. Here are some example. Adding new line in the middle. Example: df | fgrep '/dev/'; echo ""; free -h output tmpfs 16334344 55772 16278572 1% /dev/shm total used free shared buff/cache availableMem: 31G 4.0G 21G 346M 6.0G 26GSwap: 15G 2.3M 15G Adding detail of the command. Recommended Example: echo "==================" ; echo "This is output of df"; echo "==================" ;df | grep '/dev/shm' ; echo ""; echo "==================" ; echo "This is output of Free"; echo "==================";free -h Output: ==================This is output of df==================tmpfs 16334344 55772 16278572 1% /dev/shm==================This is output of Free================== total used free shared buff/cache availableMem: 31G 4.0G 21G 359M 6.0G 26GSwap: 15G 2.3M 15G | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209303/"
]
} |
507,259 | I am searching for a POSIX command to shutdown a machine. Is there a POSIX acceptable way to do this? The commands I use to do this are not POSIX compatible (e.g., shutdown, reboot, halt or poweroff). Systemd introduced systemctl to do this, but I am pretty sure that this is not POSIX, either. | No, POSIX does not care about the shutting down or rebooting of a Unix system, nor about how services are started at boot. The following areas are outside of the scope of POSIX.1-2017: Graphics interfaces Database management system interfaces Record I/O considerations Object or binary code portability System configuration and resource availability POSIX.1-2017 describes the external characteristics and facilities that are of importance to application developers, rather than the internal construction techniques employed to achieve these capabilities. Special emphasis is placed on those functions and facilities that are needed in a wide variety of commercial applications. (from the Introduction section of the POSIX Base Definitions) The shutdown command would fall into the "System configuration and resource availability" category, and it's not a tool that is important to application developers. The full POSIX standard is available online . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507259",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170792/"
]
} |
507,349 | In our Linux box we have USB -> serial device which was always identified as /dev/ttyACM0 . So I've written an application and until yesterday, everything worked fine. But suddenly (yeah, during the remote presentation ...) the device stopped working. After quick research, I found that the connection changed to /dev/ttyACM1 . It was a little untimely, but now I have a problem - how to unambiguously identify my device? Like, for example, the storage drive could be initialized using UUID although the /dev/sd** has changed. Is there some way to do that for serial devices? Now I use a stupid workaround: for(int i = 0; i < 10; i ++){ m_port = std::string("/dev/ttyACM") + (char)('0' + i); m_fd = open(m_port.c_str(), O_RDWR | O_NOCTTY | O_NDELAY);} The link to the device we use. | Since we are talking USB devices and assuming you have udev, you could setup some udev rules. I guess, and this is just a wild guess, somebody or something unplugged/removed the device and plugged it back in/added the device again, which bumps up the number. Now, first you need vendor and product id's: $ lsusbBus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 001 Device 011: ID 0403:6001 FTDI FT232 USB-Serial (UART) IC Next, you need the serial number (in case you have several): # udevadm info -a -n /dev/ttyUSB1 | grep '{serial}' | head -n1 ATTRS{serial}=="A6008isP" Now, lets create a udev rule: UDEV rules are usually scattered into many files in /etc/udev/rules.d . Create a new file called 99-usb-serial.rules and put the following line in there, I have three devices, each with a a different serial number: SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A6008isP", SYMLINK+="MySerialDevice"SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A7004IXj", SYMLINK+="MyOtherSerialDevice"SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="FTDIF46B", SYMLINK+="YetAnotherSerialDevice"ls -l /dev/MySerialDevicelrwxrwxrwx 1 root root 7 Nov 25 22:12 /dev/MySerialDevice -> ttyUSB1 If you do not want the serial number, any device from vendor with same chip will then get the same symlink, only one can be plugged in at any given time. SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="MySerialDevice" Taken from here | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92830/"
]
} |
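After dropping 99-usb-serial.rules into /etc/udev/rules.d/, the rules can be applied without rebooting (re-plugging the adapter also works); a short sequence, assuming the rule file from the answer:

```bash
sudo udevadm control --reload-rules   # re-read the rule files
sudo udevadm trigger                  # replay device events so the symlink gets created
ls -l /dev/MySerialDevice             # should point at the current ttyACMx/ttyUSBx node
```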
507,374 | User has a (incremental) backup script using rsync , to external device. This was erroring on an SSD he had. Turns out his device was formatted exFAT . That means I need to detect this in the script , as I need to alter the options to rsync (e.g., exFAT cannot handle symbolic links, no owner/group permissions, etc.). User is running Linux Mint. I run Ubuntu. I can only assume/hope that a solution for my Ubuntu will work for his Mint. I have looked at: How do I know if a partition is ext2, ext3, or ext4? How to tell what type of filesystem you're on? https://www.tecmint.com/find-linux-filesystem-type/ There are a variety of good suggestions there, but I do not see one which meets my requirements, which are: Must report (parseable) ntfs / exfat explicitly, not just say fuseblk (which it will for both exfat & ntfs , I need to distinguish). Must not require sudo . Must be executable starting from a directory path on the file system (can assume it will be mounted), not just starting from a /dev/... . From the suggestions I have tried: fdisk -l , parted -l , file -sL : require sudo and/or /dev/... block device mount : requires /dev/... , only reports fuseblk df -T , stat -f -c %T : accept directory, but report only fuseblk lsblk -f , blkid : require /dev/... block device Is there a single, simple command which meets all these criteria? Or, lsblk / blkid seem to report exfat / ntfs correctly, if I need to pass them the /dev how do I get that suitably from the directory path in script? | Thanks to the other posters for replying/suggesting. Here is my full solution. df -P can be used to obtain device from path, and that can be fed to lsblk --fs to obtain exact file system. So a one-liner is: fs=$( lsblk --fs --noheadings $( df -P $path | awk 'END{print $1}' ) | awk 'END{print $2}' ) If all you need to know is that the file system is fuseblk --- which covers both ntfs & exfat and turns out in the end to be sufficient for my purposes after all --- this can be determined with the much simpler: fs=$( stat -f -c '%T' $path ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507374",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104736/"
]
} |
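Tied back to the backup-script use case from the question, the detected type can then select the rsync options. This is a hedged sketch: the option sets and the /media/backup path are illustrative, not a complete exFAT/NTFS profile:

```bash
path=/media/backup
fs=$(stat -f -c '%T' "$path")

case $fs in
    fuseblk|msdos|vfat|exfat|ntfs)
        # targets with no symlinks and no Unix owner/group/permissions
        rsync_opts="-rtv --modify-window=2 --no-links"
        ;;
    *)
        rsync_opts="-av"
        ;;
esac
rsync $rsync_opts /home/user/ "$path/"
```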
507,383 | I have two Intel SSDSC2CW120A3 SSDs in a SuperMicro X9SCL/X9SCM set for software RAID-1 on CentOS 7: Linux hostname.local 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux In dmesg I keep seeing "hard resetting link" on both ata1 and ata2, most of the time I (or my monitoring) don't notice any downtime but sometimes the server freezes completely and I'll have to do a power reset (Can not SSH to it anymore), according to the output of dmesg the reset happens pretty often: $ dmesg | grep "hard resetting link"[161507.540860] ata1: hard resetting link[161751.123732] ata2: hard resetting link[161798.132697] ata2: hard resetting link[161879.126542] ata2: hard resetting link[161939.134102] ata2: hard resetting link[162536.225103] ata1: hard resetting link[164738.176816] ata1: hard resetting link More output from dmesg : [229999.873718] ata1.00: failed command: WRITE FPDMA QUEUED[229999.879043] ata1.00: cmd 61/08:f0:28:12:d5/00:00:00:00:00/40 tag 30 ncq 4096 out res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)[229999.894050] ata1.00: status: { DRDY }[229999.897815] ata1: hard resetting link[230000.206411] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)[230000.223165] ata1.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded[230000.223179] ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out[230000.231187] ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out[230000.253132] ata1.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded[230000.253137] ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out[230000.261148] ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out[230000.273568] ata1.00: configured for UDMA/133[230000.277980] ata1: EH complete I've checked the SATA cables and they seem alright, unplugged them and plugged them in again, smartctl reports quite some uncorrectable errors but other than that nothing really suspicious. Also there's no kernel updates available. Before I start replacing either the drives or the board I'm wondering if there is there anything else I can check? I'm trying to figure out whether this issue is hard or software related. TIA | Thanks to the other posters for replying/suggesting. Here is my full solution. df -P can be used to obtain device from path, and that can be fed to lsblk --fs to obtain exact file system. So a one-liner is: fs=$( lsblk --fs --noheadings $( df -P $path | awk 'END{print $1}' ) | awk 'END{print $2}' ) If all you need to know is that the file system is fuseblk --- which covers both ntfs & exfat and turns out in the end to be sufficient for my purposes after all --- this can be determined with the much simpler: fs=$( stat -f -c '%T' $path ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319555/"
]
} |
507,441 | I believe that the FAT32 file system does not support file permissions, however when I do ls -l on a FAT32 partition, ls -l shows that the files have permissions: -rw-r--r-- 1 john john 11 Mar 20 15:43 file1.txt-rw-r--r-- 1 john john 5 Mar 20 15:49 file2.txt Why is ls -l displaying the permissions of files? | The filesystem as stored on disk doesn't store all file permissions, but the filesystem driver has to provide them to the operating system since they are an integral part of the Unix filesystem concept and the system call interfaces have no way of presenting that the permissions are missing. Also consider what would happen if a file didn't have any permission bits at all? Would it be the same as 0777 , i.e. access to all; or the same as 0000 , i.e. no access to anyone? But both of those are file permissions, so why not show them? Or do something more useful and have a way to set some sensible permissions. So, the driver fakes some permissions, mostly same ones for all files. The permissions along with the files' owner and group are configurable at mount time. These are described under "Mount options for fat" in the mount(8) man page : Mount options for fat (Note: fat is not a separate filesystem, but a common part of the msdos, umsdos and vfat filesystems.) uid=value and gid=value Set the owner and group of all files. (Default: the UID and GID of the current process.) umask=value Set the umask (the bitmask of the permissions that are not present). The default is the umask of the current process. The valueis given in octal. dmask=value Set the umask applied to directories only. The default is the umask of the current process. The value is given in octal. fmask=value Set the umask applied to regular files only. The default is the umask of the current process. The value is given in octal. Note that the one useful permission the FAT filesystems store is the read-only -bit, and if you run chmod ugo-w file , the read permissions on it will disappear. That's also probably the reason that the above options take their values as permissions to masks away , so fmask=0133 would result in all files having all the x permissions removed and w removed from the group and others. The files would then have the permissions 0644 / rw-r--r-- or 0444 / r--r--r-- , depending on if the read-only bit is cleared or set. Also, the defaults are inherited from the process calling mount() , so if you call mount from the command line, the shell's umask will apply. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/507441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342731/"
]
} |
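For completeness, this is roughly how those options look on the mount command line; the device and mount point are placeholders, and the masks shown yield 0644 files and 0755 directories owned by uid/gid 1000:

```bash
# fmask=0133 -> files rw-r--r--, dmask=0022 -> directories rwxr-xr-x
sudo mount -t vfat -o uid=1000,gid=1000,fmask=0133,dmask=0022 /dev/sdb1 /mnt/usb
```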
507,450 | I need to print some variables to the screen but I need to preferebly obfuscate the first few characters and I was wondering if there was an echo command in bash that can obfuscate the first characters of a secret value while printing it to the terminal: echo 'secretvalue'********lue | The other answers mask a fixed amount of characters from the start, with the plaintext suffix varying in length. An alternative would be to leave a fixed amount of characters in plaintext, and to vary the length of the masked part. I don't know which one is more useful, but here's the other choice: #!/bin/bashmask() { local n=3 # number of chars to leave local a="${1:0:${#1}-n}" # take all but the last n chars local b="${1:${#1}-n}" # take the final n chars printf "%s%s\n" "${a//?/*}" "$b" # substitute a with asterisks}mask abcdemask abcdefghijkl This prints **cde and *********jkl . If you like, you could also modify n for short strings to make sure a majority of the string gets masked. E.g. this would make sure at least three characters are masked even for short strings. (so abcde -> ***de , and abc -> *** ): mask() { local n=3 [[ ${#1} -le 5 ]] && n=$(( ${#1} - 3 )) local a="${1:0:${#1}-n}" local b="${1:${#1}-n}" printf "%s%s\n" "${a//?/*}" "$b"} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341050/"
]
} |
507,457 | I'm trying to build a prototype of a real network in VirtualBox. I have 3 guest systems: Debian-based router with 3 NICs: enp0s3 Looks outsude.(bridged) enp0s8 LAN1.(gateway to internal network 1) enp0s9 LAN2.(gateway to internal network 2) WinXP workstation in LAN1. WinXP workstation in LAN2. in /etc/sysctl.conf: net.ipv4.ip_forward=1 iptables-save output: # Generated by iptables-save v1.6.0 on Thu Mar 21 01:18:29 2019*filter:INPUT DROP [31:8959]:FORWARD DROP [0:0]:OUTPUT DROP [0:0]-A INPUT -i lo -j ACCEPT-A INPUT -i enp0s8 -j ACCEPT-A INPUT -i enp0s9 -j ACCEPT-A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT-A INPUT -p icmp -m icmp --icmp-type 1 -j ACCEPT-A INPUT -p icmp -m icmp --icmp-type 2 -j ACCEPT-A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT-A INPUT -m state --state INVALID -j DROP-A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP-A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP-A INPUT -i enp0s3 -p tcp -m tcp --dport 22 -j ACCEPT-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT-A FORWARD -m state --state INVALID -j DROP-A FORWARD -i enp0s8 -o enp0s3 -j ACCEPT-A FORWARD -i enp0s9 -o enp0s3 -j ACCEPT-A FORWARD -i enp0s3 -o enp0s8 -j REJECT --reject-with icmp-port-unreachable-A FORWARD -i enp0s3 -o enp0s9 -j REJECT --reject-with icmp-port-unreachable-A FORWARD -s 10.10.10.0/24 -d 10.10.11.0/24 -i enp0s8 -o enp0s9 -j ACCEPT-A FORWARD -s 10.10.11.0/24 -d 10.10.10.0/24 -i enp0s9 -o enp0s8 -j ACCEPT-A OUTPUT -o lo -j ACCEPT-A OUTPUT -o enp0s8 -j ACCEPT-A OUTPUT -o enp0s9 -j ACCEPT-A OUTPUT -o enp0s3 -j ACCEPT-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT-A OUTPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROPCOMMIT# Completed on Thu Mar 21 01:18:29 2019# Generated by iptables-save v1.6.0 on Thu Mar 21 01:18:29 2019*nat:PREROUTING ACCEPT [77:16001]:INPUT ACCEPT [2:628]:OUTPUT ACCEPT [2:143]:POSTROUTING ACCEPT [2:143]-A POSTROUTING -s 10.10.10.0/24 -o enp0s3 -j MASQUERADE-A POSTROUTING -s 10.10.11.0/24 -o enp0s3 -j MASQUERADECOMMIT# Completed on Thu Mar 21 01:18:29 2019# Generated by iptables-save v1.6.0 on Thu Mar 21 01:18:29 2019*mangle:PREROUTING ACCEPT [224:28224]:INPUT ACCEPT [180:21810]:FORWARD ACCEPT [0:0]:OUTPUT ACCEPT [144:17877]:POSTROUTING ACCEPT [144:17877]COMMIT# Completed on Thu Mar 21 01:18:29 2019 DNSmasq acts as DHCP server And the question is: how can i make workstation in LAN1 see one in LAN2?The internet is available to both, workstation1(10.100) can ping 11.1 gateway but can't reach 11.100 machine and vice versa. iptables is not likely to be the problem because it does not have dropped packages in statistics. And what seems interesting tcpdump shows 2 packages for every ping request with 0.0000x time difference. P.S. It would be nice to make this all without adding static routes on clients if it is possible. | The other answers mask a fixed amount of characters from the start, with the plaintext suffix varying in length. An alternative would be to leave a fixed amount of characters in plaintext, and to vary the length of the masked part. 
I don't know which one is more useful, but here's the other choice: #!/bin/bashmask() { local n=3 # number of chars to leave local a="${1:0:${#1}-n}" # take all but the last n chars local b="${1:${#1}-n}" # take the final n chars printf "%s%s\n" "${a//?/*}" "$b" # substitute a with asterisks}mask abcdemask abcdefghijkl This prints **cde and *********jkl . If you like, you could also modify n for short strings to make sure a majority of the string gets masked. E.g. this would make sure at least three characters are masked even for short strings. (so abcde -> ***de , and abc -> *** ): mask() { local n=3 [[ ${#1} -le 5 ]] && n=$(( ${#1} - 3 )) local a="${1:0:${#1}-n}" local b="${1:${#1}-n}" printf "%s%s\n" "${a//?/*}" "$b"} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342644/"
]
} |
507,549 | I understand the command date -d 'last-monday - 14 days' +%Y%m%d will print the Monday two weeks ago from "Today's date". I need a way to test this for different dates and see the result. Almost like I need to mention the date command to do calculations off a relative date. I need something to test with different dateslike : date -d 'last-monday - 14 days' %Y%m%d from 20190315date -d 'last-monday - 14 days' %Y%m%d from 20180217date -d 'last-monday - 14 days' %Y%m%d from 201700914 and see the respective outputs. | #!/bin/bash# Our given datesdates=( 20190315 20180217 20170914)# Loop over the given datesfor thedate in "${dates[@]}"; do # Get day of the week as digit (1 is Monday, 7 is Sunday) day=$( date -d "$thedate" +%u ) # The Monday the same week is $(( day - 1 )) days earlier than the given date. # The Monday two weeks earlier is 14 days earlier still. date -d "$thedate -$(( day - 1 + 14 )) days" +"$thedate --> %Y%m%d"done Output: 20190315 --> 2019022520180217 --> 2018012920170914 --> 20170828 The difficult bit about this is to figure out how to construct the correct --date or -d string for GNU date to compute the final date. I opted for computing the day of week of the given date, and then using that to compute a date string that offsets the given date by a number of days so that the resulting date is the Monday two weeks earlier. The actual strings that ends up being used for the option argument to -d in the above script, using the dates given in the script, are 20190315 -18 days20180217 -19 days20170914 -17 days Condensing the script into a single command that does the computation for a single date in $thedate : date -d "$thedate -$(date -d "$thedate" +%u) days -13 days" +%Y%m%d or date -d "$thedate -$(date -d "$thedate" +"-%u days -13 days")" +%Y%m%d | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342808/"
]
} |
507,553 | So my task is to find the file with the most hardlinks in a directory.So far i have : find . -name "file*" | xargs -I{} -n 1 find . -samefile {} which gives me: ./hardlinkFIle245./hardlinkFIle23./hardlinkFIle2./file2.txt./hardlinkFIle1234./hardlinkFIle123./hardlinkFIle12./hardlinkFIle1./file1.txt Now when I pipe it in with |wc -l , I get the total number of lines 9: find . -name "file*" | xargs -I{} -n 1 find . -samefile {} | wc -l What I want is for each xargs batch -n 1 to give me the count :so i want: 45 | #!/bin/bash# Our given datesdates=( 20190315 20180217 20170914)# Loop over the given datesfor thedate in "${dates[@]}"; do # Get day of the week as digit (1 is Monday, 7 is Sunday) day=$( date -d "$thedate" +%u ) # The Monday the same week is $(( day - 1 )) days earlier than the given date. # The Monday two weeks earlier is 14 days earlier still. date -d "$thedate -$(( day - 1 + 14 )) days" +"$thedate --> %Y%m%d"done Output: 20190315 --> 2019022520180217 --> 2018012920170914 --> 20170828 The difficult bit about this is to figure out how to construct the correct --date or -d string for GNU date to compute the final date. I opted for computing the day of week of the given date, and then using that to compute a date string that offsets the given date by a number of days so that the resulting date is the Monday two weeks earlier. The actual strings that ends up being used for the option argument to -d in the above script, using the dates given in the script, are 20190315 -18 days20180217 -19 days20170914 -17 days Condensing the script into a single command that does the computation for a single date in $thedate : date -d "$thedate -$(date -d "$thedate" +%u) days -13 days" +%Y%m%d or date -d "$thedate -$(date -d "$thedate" +"-%u days -13 days")" +%Y%m%d | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342811/"
]
} |
507,656 | I was trying run a few commands using getline() function of GNU awk and print the error number ( errno ) value returned. But for simple failure cases of non existent directory/file the variable doesn't seem to be populated. awk 'BEGIN { cmd = "ls -lrth /non/existing/path" while ( ( cmd | getline result ) > 0 ) { print result } close(cmd); print ENVIRON["ERRNO"]}' When the above puts out the error string from ls , the print statement does not produce a valid error number. I've also tried from the man page to use PROCINFO["errno"] and PROCINFO["ERRNO"] which didn't work. I also tried printing it before closing the file descriptor which also didn't work. Is it wrong to expect ENOENT in this case? | You can not get the error number using getline . In your command, the output is from ls , not print result . In form cmd | getline result , cmd is run, then its output is piped to getline . It returns 1 if got output, 0 if EOF, -1 on failure. The problem is that failure is from running getline itself, not the return code of cmd . Example: awk 'BEGIN {while ( ( getline result < "/etc/shadow") > 0 ) { print result } print "XXX: ", ERRNO}'XXX: Permission denied You will see that /etc/shadow can not be read, so getline fails to run and reports the error in ERRNO variable. Note that GNU awk will return the cmd status if not in posix mode, so you can do: awk 'BEGIN { cmd = "ls -lrth /non/existing/path" while ( ( cmd | getline result ) > 0 ) { print result } status=close(cmd); if (status != 0) { code=and(rshift(status, 8),0xFF) printf("Exit status: %d, exit code: %d\n", status, code) }}'ls: cannot access '/non/existing/path': No such file or directoryExit status: 512, exit code: 2 In POSIX mode, You won't get the exit status: POSXILY_CORRECT=1 awk 'BEGIN { cmd = "ls -lrth /non/existing/path" while ( ( cmd | getline result ) > 0 ) { print result } status=close(cmd); if (status != 0) { code=and(rshift(status, 8),0xFF) printf("Exit status: %d, exit code: %d\n", status, code) }}'ls: cannot access '/non/existing/path': No such file or directory | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112235/"
]
} |
507,702 | In this example of a systemd unit file: # systemd-timesyncd.service...Before=time-sync.target sysinit.target shutdown.targetConflicts=shutdown.targetWants=time-sync.target systemd-timesyncd.service should start before time-sync.target .This defines an ordering dependency . But at the same systemd-timesyncd.service wants time-sync.target . So time-sync.target is it's requirement dependency What is the use case for this relation and why aren't they in some conflict with one another? | The use case of this double relation is similar to a “provides” relation. systemd-timesyncd provides a time synchronisation service, so it satisfies any dependency a unit has on time-sync.target . It must start before time-sync.target because it’s necessary for any service which relies on time synchronisation, and it wants time-sync.target because any unit relying on time synchonisation should be started along with the systemd-timesyncd service. I think the misunderstanding comes from your interpretation of “wants”. The “wants” relation in systemd isn’t a dependency: systemd-timesyncd doesn’t need time-sync to function. It’s a “start along with” relation: it says that the configuring unit ( systemd-timesyncd.service ) wants the listed units ( time-sync.target ) to start along with it. See also Which service provides time-sync.target in systemd? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
507,768 | I'm trying to pull two numerical values out of a string and assign them to variables using awk ( gawk is what I'm using specifically). I want to pull the major and minor version numbers out of a tmux version string into awk variables, e.g.: input: tmux 2.8 ; maj == 2 and min == 8 input: tmux 1.9a ; maj == 1 and min == 9 input: tmux 2.10 ; maj == 2 and min == 10 Assuming my input comes from tmux -V on stdin, I currently have the following: tmux -V | awk '{ maj = +gensub(/([0-9]+)\..*/, "\\1", "g", $2); min = +gensub(/.*\.([0-9]+).*/, "\\1", "g", $2); # ...do something with maj and min... }' This works, but as many users of tmux know, using if-shell in the .tmux.conf file (where I hope to use this stuff) can easily lead to really long lines in the config file, so I'm wondering if there's a way to combine these two variable assignments into one statement to save space...or any other way to glean these two variables from the input and save space. I'm thinking of something like: awk '{ maj, min = +gensub(/([0-9]+)\.([0-9]+).*/, "\\1 \\2", "g", $2); }' ...kind of like in Python, but that particular syntax doesn't exist in awk . Is there anything else that's possible? Note that readability isn't really a concern, just length. | Since you're using GNU awk, you can use the 3-arg form of match() to store multiple capturing groups: awk ' match($0, /([0-9]+)\.([0-9]+)/, m) {maj=m[1]; min=m[2]; print maj, min}' <<ENDtmux 2.8tmux 1.9atmux 2.10END 2 81 92 10 https://www.gnu.org/software/gawk/manual/html_node/String-Functions.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/507768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153578/"
]
} |
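One way the two captured numbers end up being used from a shell (for instance behind if-shell in .tmux.conf) is as a single version test; a sketch with GNU awk, where 2.8 is just an example threshold:

```bash
tmux -V | awk 'match($0, /([0-9]+)\.([0-9]+)/, m) { maj = m[1]; min = m[2] }
               END { exit !(maj > 2 || (maj == 2 && min >= 8)) }' \
    && echo "tmux is 2.8 or newer"
```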
507,837 | My opinion is yes, it does, because all useful exposure to the outside world (non-privileged processor mode) would first require a process running in the outside world. That would require a file system, even a temporary, in-RAM, file system. Another engineer disagrees with me, but I can't seem to prove this beyond all (unknown to me) cases. Does the answer to this question depend on the definition of 'running'? | That's rather an odd question because you don't run the kernel like you run a program. The kernel is a platform to run programs on. Of course there is setup and shutdown code but it's not possible to run the kernel on its own. There must always be a main "init" process. And the kernel will panic if it's not there. If init tries to exit the kernel will also panic. These days the init process is something like systemd. If not otherwise specified the kernel will try to run a program from a list of locations starting with /sbin/init . See the init Param here http://man7.org/linux/man-pages/man7/bootparam.7.html in an emergency you can boot Linux with init=/bin/bash . But notice how you always specify a file on the file system to run. So the kernel will panic if it starts up an has no file system because without one there is no way to load init. Some confusion may arise because of an initialisation phase of the kernel. An initial ramdisk is loaded from an image on disk containing vital drivers and setup scripts. These are executed before the file system is loaded. But make no mistake the initial ramdisk is itself a file system. With an initial ramdisk /init is called (which is stored on the initial ramdisk). In many distributions it is ultimately this which calls /sbin/init . Again without a file system, this is impossible. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/507837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44218/"
]
} |
507,911 | I configured the service - calc_mem.service as the following Restart=on-failureRestartSec=5StartLimitInterval=400StartLimitBurst=3 the configuration above should do the following from my understanding the service have 3 retries when service exit with error and before service start it will wait 5 seconds also I found that "Restart" can be also: Restart=always I can understand that need to restart the service on failurebut what is the meaning of Restart=always ? in which case we need to set - Restart=always | The systemd.service man page has a description of the values Restart= takes, and a table of what options cause a restart when. Always pretty much does what it says on the lid: If set to always , the service will be restarted regardless of whether it exited cleanly or not, got terminated abnormally by a signal, or hit a timeout. I don't know for sure what situation they had in mind for that feature, but we might hypothesise e.g. a service configured to only run for a fixed period of time or to serve fixed number of requests and to then stop to avoid any possible resource leaks. Having systemd do the restarting makes for a cleaner implementation of the service itself. In some sense, we might also ask why not include that option in systemd. Since it is capable of restarting services on failure, they might as well include the option of restarting the service always , just in case someone needs it. To provide tools, not policy. Note also that a "successful exit" here is defined rather broadly: If set to on-success , it will be restarted only when the service process exits cleanly. In this context, a clean exit means an exit code of 0, or one of the signals SIGHUP , SIGINT , SIGTERM or SIGPIPE , [...] SIGHUP is a common way of asking a process to restart, but it unhandled, it terminates the process. So having Restart=always (or Restart=on-success ) allows to use SIGHUP for restarting, even without the service itself supporting that. Also, as far as I can read the man page, always doesn't mean it would override the limits set by StartLimitInterval and StartLimitBurst : Note that service restart is subject to unit start rate limiting configured with StartLimitIntervalSec= and StartLimitBurst= , see systemd.unit(5) for details. A restarted service enters the failed state only after the start limits are reached. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/507911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
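As a sketch of how the settings discussed in entry 507,911 fit together (the service name is the asker's; on recent systemd the rate-limit keys belong in [Unit], older versions used StartLimitInterval= under [Service]):

sudo systemctl edit calc_mem.service     # creates a drop-in override; add:
#   [Unit]
#   StartLimitIntervalSec=400
#   StartLimitBurst=3
#
#   [Service]
#   Restart=always
#   RestartSec=5
sudo systemctl daemon-reload
# With Restart=always the unit should come back even after an "unclean" signal:
sudo systemctl kill -s HUP calc_mem.service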
508,054 | I'm trying to locate nested YAML and HTML files to replace a string with sed after creating the list; I'm not sure I fully understand how to use the -prune option, here's what I have: find . -type f -name '*.yaml' -or -name '*.html' \ -and -path './.git/*' -or -path '*/node_modules/*' -prune For which I still get HTML files under node_modules directories | If you'd like to prune any directory called either .git or node_modules from the search tree, you would use -type d \( -name .git -o -name node_modules \) -prune This would cause find to not even enter these directories (the -type d is not strictly necessary, but I'll use it here for symmetry with -type f ; see below). Then you would add the other conditions, -type d \( -name .git -o -name node_modules \) -prune -o \-type f \( -name '*.yaml' -o -name '*.html' \) -print Ending up with find . \ -type d \( -name .git -o -name node_modules \) -prune -o \ -type f \( -name '*.yaml' -o -name '*.html' \) -print Any action that you'd like to take on the pathnames that passes all tests should be done in place of the -print . Note that the default logical operation between two predicates is -a (AND). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10061/"
]
} |
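Since the goal in entry 508,054 was a string replacement, the accepted find command extends naturally with -exec; old-string and new-string are placeholders, and -i as used here is GNU sed's in-place flag:

find . \
  -type d \( -name .git -o -name node_modules \) -prune -o \
  -type f \( -name '*.yaml' -o -name '*.html' \) \
  -exec sed -i 's/old-string/new-string/g' {} +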
508,104 | By accident I ran chmod -u filename and it removed all of the permissions I had on filename . The man page does not reference a -u option. Experimenting I was able to conclude that it removes not all permissions, but just read and execute access, leaving write access intact. So what does this do exactly? My conclusion above is wrong, I now think that what it does is remove the permissions that the owner has, from all categories. I think the behavior is analogous to a=u , only it is - instead of = and a can be dropped just as it can with, for instance, a+x . | This is not an option, but a standard (but uncommon) way of specifying the permissions. It means to remove ( - ) the permissions associated with the file owner ( u ), for all users (no preceding u , g , or o ). This is documented in the man page. GNU chmod's man page documents this as: The format of a symbolic mode is [ugoa...][[-+=][perms...]...] , where perms is either zero or more letters from the set rwxXst , or a single letter from the set ugo and later Instead of one or more of these letters, you can specify exactly one of the letters ugo: the permissions granted to the user who owns the file ( u ), the permissions granted to other users who are members of the file's group ( g ), and the permissions granted to users that are in neither of the two preceding categories ( o ) So -u means to remove ( - ) whatever permissions are currently enabled for the owner ( u ) for everybody (equivalently to a-u , except honouring the current umask). While that's not often going to be very useful, the analogous chmod +u will sometimes be, to copy the permissions from the owner to others when operating recursively, for example. It's also documented in POSIX , but more obscurely defined: the permission specification is broadly who[+-=]perms (or a number), and the effect of those are further specified: The permcopy symbols u , g , and o shall represent the current permissions associated with the user, group, and other parts of the file mode bits, respectively. For the remainder of this section, perm refers to the non-terminals perm and permcopy in the grammar. and then - ... If who is not specified, the file mode bits represented by perm for the owner, group, and other permissions, except for those with corresponding bits in the file mode creation mask of the invoking process, shall be cleared. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508104",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342951/"
]
} |
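A short transcript illustrating entry 508,104, assuming GNU coreutils and a umask of 022:

$ umask 022
$ touch demo; chmod 750 demo; stat -c %A demo
-rwxr-x---
$ chmod -u demo      # clear the owner's bits (rwx here) from user, group and other
$ stat -c %A demo
----------
$ chmod 750 demo; chmod +u demo; stat -c %A demo   # copy the owner's bits to everyone, minus the umask
-rwxr-xr-x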
508,204 | Is it possible to write commands to text file and then loaded it into terminal as file? If yes, how is the command for loading the file? Thank you. For instance file_commands: awk -f program.awk d01.active > out1awk -f program.awk d02.active > out2 It is because of a problem with running an awk program that doesn't work with command awk -f program.awk d??.active > out I need to use program.awk for lots of files and this seemed to me as easier solution when I am not able to repair program for that command with ??. It is related with this question https://stackoverflow.com/questions/55313187/more-input-files-in-awk?noredirect=1#comment97356807_55313187 | If you have a file with a list of shell commands, one per line, then you have a shell script! All you need to do is run it: sh file_commands However, that isn't the simplest approach for what I think you need. If you want to run program.awk on each d??.active file in the current directory, you can simply use a loop: for file in d??.active; do awk -f program.awk "$file" > "$file".out; done That will create a d01.active.out out file for d01.active , a d02.active.out file for d02.active and so on. | {
508,204 | Is it possible to write commands to a text file and then load that file into the terminal? If yes, what is the command for loading the file? Thank you. For instance file_commands: awk -f program.awk d01.active > out1 awk -f program.awk d02.active > out2 It is because of a problem with running an awk program that doesn't work with the command awk -f program.awk d??.active > out I need to use program.awk for lots of files and this seemed to me the easier solution, since I am not able to repair the program for that command with ??. It is related to this question https://stackoverflow.com/questions/55313187/more-input-files-in-awk?noredirect=1#comment97356807_55313187 | If you have a file with a list of shell commands, one per line, then you have a shell script! All you need to do is run it: sh file_commands However, that isn't the simplest approach for what I think you need. If you want to run program.awk on each d??.active file in the current directory, you can simply use a loop: for file in d??.active; do awk -f program.awk "$file" > "$file".out; done That will create a d01.active.out file for d01.active , a d02.active.out file for d02.active and so on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335430/"
]
} |
508,221 | my raspberry pi 3 model B, running Arch, has an issue with bluetooth. First of all: Bluetooth has worked flawlessly previously pi-bluetooth from the AUR is up to date bluez and bluez-utils are up to date The system is up to date as well (just ran pacman -Syu) Still, when I try to use the bluetooth interface, it doesn't work. bluetoothctl (as root), when I run "scan on", tells me Failed to start discovery: org.bluez.Error.NotReady wminput can't find the bluetooth interface: No Bluetooth interface foundunable to connect "systemctl status bluetooth" has the following output: ● bluetooth.service - Bluetooth service Loaded: loaded (/usr/lib/systemd/system/bluetooth.service; enabled; vendor preset: disabled) Active: active (running) since Sat 2019-03-23 21:32:47 CET; 9min ago Docs: man:bluetoothd(8) Main PID: 2005 (bluetoothd) Status: "Running" Tasks: 1 (limit: 1404) CGroup: /system.slice/bluetooth.service └─2005 /usr/lib/bluetooth/bluetoothdMar 23 21:32:47 media.lan systemd[1]: Starting Bluetooth service...Mar 23 21:32:47 media.lan bluetoothd[2005]: Bluetooth daemon 5.50Mar 23 21:32:47 media.lan systemd[1]: Started Bluetooth service.Mar 23 21:32:47 media.lan bluetoothd[2005]: Starting SDP serverMar 23 21:32:47 media.lan bluetoothd[2005]: Bluetooth management interface 1.14 initialized I am at my wits end here, everything seems to be fine, yet nothing works. What is going on here? | Okay, wow, turns out all I had to do was run bluetoothctl power on | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/508221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343236/"
]
} |
508,229 | According to https://networkengineering.stackexchange.com/a/57909/ , a packet sent to 192.168.1.97 "doesn't leave the host but is treated like a packet received from the network, addressed to 192.168.1.97." So same as sending a packet to loop back 127.0.0.1. why does nmap 127.0.0.1 return more services than nmap 192.168.1.97 ? Does nmap 127.0.0.1 necessarily also return those services returned by nmap 192.168.1.97 ? Does a server listening at 192.168.1.97 necessarily also listen at 127.0.0.1 ? $ nmap -p0-65535 192.168.1.97Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-23 19:18 EDTNmap scan report for ocean (192.168.1.97)Host is up (0.00039s latency).Not shown: 65532 closed portsPORT STATE SERVICE22/tcp open ssh111/tcp open rpcbind3306/tcp open mysql33060/tcp open mysqlxNmap done: 1 IP address (1 host up) scanned in 9.55 seconds$ nmap -p0-65535 localhostStarting Nmap 7.60 ( https://nmap.org ) at 2019-03-23 19:18 EDTNmap scan report for localhost (127.0.0.1)Host is up (0.00033s latency).Other addresses for localhost (not scanned):Not shown: 65529 closed portsPORT STATE SERVICE22/tcp open ssh111/tcp open rpcbind631/tcp open ipp3306/tcp open mysql5432/tcp open postgresql9050/tcp open tor-socks33060/tcp open mysqlxNmap done: 1 IP address (1 host up) scanned in 5.39 seconds Thanks. | In short, they are two different interfaces (192.168.1.97 vs 127.0.0.1), and may have different firewall rules applied and/or services listening. Being on the same machine means relatively little. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
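To see this on the machine from entry 508,229 itself, list which local address each daemon is bound to; a service bound only to 127.0.0.1 (PostgreSQL and CUPS in the scans above) never shows up when scanning the LAN address:

sudo ss -tlnp        # check the Local Address:Port column, e.g. 127.0.0.1:5432 vs 0.0.0.0:22
sudo netstat -tlnp   # equivalent on older systems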
508,295 | How can I echo the value of [ 5 -gt 4 ] ,that is a test expression in bash? [ 5 -gt 4 ] | echo and echo `[ 5 -gt 4 ]` both end up printing a blank line @Thomas Dickey 's answer works but could some one explain why the above two don`t work? | Your commands don't work the way you expect them to because the test does not output anything to its standard output stream. It's the standard output stream that gets piped to the next command in a pipeline (your first command), and it's the standard output that replaces a command substitution (your second command). As an aside, note that even if the left hand side of your first pipeline produced something on its standard output stream, echo on the right would not display it. The echo utility does not read from its standard input (but e.g. cat does). Any shell command returns an exit status. This exit status is what e.g. an if statement acts upon. The exit status is never outputted to e.g. the terminal or it would interfere with the actual output of the command or script. When you use [ 3 -gt 4 ] you call the [ utility with some arguments. That utility returns an exit status. It is exactly equivalent to test 3 -gt 4 See man test and man [ (or help test in the bash shell). The exit status of the most recently executed command is stored in the special variable $? . You may save this in an ordinary variable, or output it to the terminal: [ 3 -gt 4 ]printf 'Exit status of test was %s\n' "$?"printf 'Exit status of printf was %s\n' "$?" Note that printf also produces its own exit status, so if the printf call went ok, the value $? would be zero after outputting the status of the test. The code above would likely output Exit status of test was 1Exit status of printf was 0 Note that the test itself never outputs anything here. It just provides an exit status. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163930/"
]
} |
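A few one-liners that make the point in entry 508,295 concrete:

[ 5 -gt 4 ]; echo "$?"                    # prints 0 (true, i.e. success)
[ 3 -gt 4 ]; echo "$?"                    # prints 1 (false)
[ 5 -gt 4 ] && echo true || echo false    # prints "true" by acting on the exit status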
508,299 | I'm running a gameserver program (SRCDS) that completely ignores SIGTERM. The only way to cleanly shut down the server is to type "quit" interactively. Is there any way that I can wrap this program in a bash script that will catch SIGTERM and send "quit" to the STDIN of the program? Otherwise in normal operation the wrapper should forward STDIN and STDOUT as if wasn't there at all. Diagram of what I'm trying to achieve: Normal operation: --- STDIN ---> | | --- STDIN ---> | | | Bash Script | | Server Program |<-- STDOUT --- | | <-- STDOUT --- | | SIGTERM sent: "quit"--- STDIN ---> | | --- STDIN ---> | |-- SIGTERM --> | Bash Script | | Server Program |<-- STDOUT --- | | <-- STDOUT --- | | | Your commands don't work the way you expect them to because the test does not output anything to its standard output stream. It's the standard output stream that gets piped to the next command in a pipeline (your first command), and it's the standard output that replaces a command substitution (your second command). As an aside, note that even if the left hand side of your first pipeline produced something on its standard output stream, echo on the right would not display it. The echo utility does not read from its standard input (but e.g. cat does). Any shell command returns an exit status. This exit status is what e.g. an if statement acts upon. The exit status is never outputted to e.g. the terminal or it would interfere with the actual output of the command or script. When you use [ 3 -gt 4 ] you call the [ utility with some arguments. That utility returns an exit status. It is exactly equivalent to test 3 -gt 4 See man test and man [ (or help test in the bash shell). The exit status of the most recently executed command is stored in the special variable $? . You may save this in an ordinary variable, or output it to the terminal: [ 3 -gt 4 ]printf 'Exit status of test was %s\n' "$?"printf 'Exit status of printf was %s\n' "$?" Note that printf also produces its own exit status, so if the printf call went ok, the value $? would be zero after outputting the status of the test. The code above would likely output Exit status of test was 1Exit status of printf was 0 Note that the test itself never outputs anything here. It just provides an exit status. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341426/"
]
} |
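One way to get the behaviour diagrammed in entry 508,299 is a FIFO plus a trap. This is only a sketch: ./srcds_run is a placeholder for the real server command, and it assumes the server exits after reading quit.

#!/usr/bin/env bash
fifo=$(mktemp -u) && mkfifo "$fifo"
trap 'rm -f "$fifo"' EXIT

./srcds_run "$@" < "$fifo" &        # server reads its stdin from the FIFO
server=$!

exec 3>"$fifo"                      # keep the FIFO open for writing
trap 'printf "quit\n" >&3' TERM     # on SIGTERM, "type" quit for the server

( cat >&3 ) <&0 &                   # forward the wrapper's own stdin

wait "$server"                      # a trapped signal interrupts this wait...
wait "$server" 2>/dev/null          # ...so wait once more for the real exit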
508,302 | I'm trying to change the passphrase of my GPG's secret key. I actually changed it using seahorse (Also tried gpg --edit-keys and passwd , but when I tried to export my private key it asks me for two passphrase now (Both new and old one) and uses the old one for sub secret key. Now I have to remember two complicated password! What is the correct way to change the passphrase of GPG's secret key? | Your commands don't work the way you expect them to because the test does not output anything to its standard output stream. It's the standard output stream that gets piped to the next command in a pipeline (your first command), and it's the standard output that replaces a command substitution (your second command). As an aside, note that even if the left hand side of your first pipeline produced something on its standard output stream, echo on the right would not display it. The echo utility does not read from its standard input (but e.g. cat does). Any shell command returns an exit status. This exit status is what e.g. an if statement acts upon. The exit status is never outputted to e.g. the terminal or it would interfere with the actual output of the command or script. When you use [ 3 -gt 4 ] you call the [ utility with some arguments. That utility returns an exit status. It is exactly equivalent to test 3 -gt 4 See man test and man [ (or help test in the bash shell). The exit status of the most recently executed command is stored in the special variable $? . You may save this in an ordinary variable, or output it to the terminal: [ 3 -gt 4 ]printf 'Exit status of test was %s\n' "$?"printf 'Exit status of printf was %s\n' "$?" Note that printf also produces its own exit status, so if the printf call went ok, the value $? would be zero after outputting the status of the test. The code above would likely output Exit status of test was 1Exit status of printf was 0 Note that the test itself never outputs anything here. It just provides an exit status. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343268/"
]
} |
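For entry 508,302, GnuPG 2.1 and later can change the passphrase of the primary key and all subkeys in one step, which avoids ending up with two different passphrases; a sketch:

gpg --list-secret-keys --keyid-format long   # find the key ID
gpg --passwd <key-id>                        # prompts for the old passphrase, then re-encrypts primary key and subkeys with the new one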
508,314 | Assume that I have invented a new file system, and now I want to create a file system driver for it. How would I implement this file system driver, is this done using a kernel module? And how can the file system driver access the hard disk, should the file system driver contain code to access the hard disk, or does Linux contain a device driver to access the hard disk that is used by all the file system drivers? | Yes, filesystems in Linux can be implemented as kernel modules. But there is also the FUSE (Filesystem in USErspace) interface, which can allow a regular user-space process to act as a filesystem driver. If you're prototyping a new filesystem, implementing it first using the FUSE interface could make the testing and development easier. Once you have the internals of the filesystem worked out in FUSE form, you might then start implementing a performance-optimized kernel module version of it. Here's some basic information on implementing a filesystem within kernel space. It's rather old (from 1996!), but that should at least give you a basic idea for the kind of things you'll need to do. If you choose to go to the FUSE route, here's libfuse, the reference implementation of the userspace side of the FUSE interface. Filesystem driver as a kernel module Basically, the initialization function of your filesystem driver module needs just to call a register_filesystem() function, and give it as a parameter a structure that includes a function pointer that identifies the function in your filesystem driver that will be used as the first step in identifying your filesystem type and mounting it. Nothing more happens at that stage. When a filesystem is being mounted, and either the filesystem type is specified to match your driver, or filesystem type auto-detection is being performed, the kernel's Virtual FileSystem (VFS for short) layer will call that function. It basically says "Here's a pointer to a kernel-level representation of a standard Linux block device. Take a look at it, see if it's something you can handle, and then tell me what you can do with it." At that point, your driver is supposed to read whatever it needs to verify it's the right driver for the filesystem, and then return a structure that includes pointers to further functions your driver can do with that particular filesystem. Or if the filesystem driver does not recognize the data on the disk, it is supposed to return an appropriate error result, and then VFS will either report a failure to userspace or - if filesystem type auto-detection is being performed - will ask another filesystem driver to try. The other drivers in the kernel will provide the standard block device interface, so the filesystem driver won't have to implement hardware support. Basically, the filesystem driver can read and write disk blocks using standard kernel-level functions with the device pointer given to it. The VFS layer expects the filesystem driver to make a number of standard functions available to the VFS layer; a few of these are mandatory in order for the VFS layer to do anything meaningful with the filesystem, others are optional and you can just return a NULL in place of a pointer to such an optional function. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343344/"
]
} |
508,319 | I run this command on my Fedora: iw list | grep "Supported interface modes" -A 8 It showed my adapter doesn't support AP mode. But I checked this Asus USB-AC51 and Driver capabilities , the website showed mt76 supported AP mode. Is that driver or adapter problem? Supported interface modes: * managed * monitorBand 1: Capabilities: 0x17e HT20/HT40 SM Power Save disabled RX Greenfield RX HT20 SGI My adapter is at Port 12. /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/8p, 10000M/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/16p, 480M |__ Port 10: Dev 22, If 0, Class=Wireless, Driver=btusb, 12M |__ Port 10: Dev 22, If 1, Class=Wireless, Driver=btusb, 12M |__ Port 12: Dev 23, If 0, Class=Vendor Specific Class, Driver=mt76x0u, 480M |__ Port 13: Dev 21, If 1, Class=Human Interface Device, Driver=usbhid, 12M |__ Port 13: Dev 21, If 2, Class=Human Interface Device, Driver=usbhid, 12M |__ Port 13: Dev 21, If 0, Class=Human Interface Device, Driver=usbhid, 12M | Yes, filesystems in Linux can be implemented as kernel modules. But there is also the FUSE (Filesystem in USErspace) interface, which can allow a regular user-space process to act as a filesystem driver. If you're prototyping a new filesystem, implementing it first using the FUSE interface could make the testing and development easier. Once you have the internals of the filesystem worked out in FUSE form, you might then start implementing a performance-optimized kernel module version of it. Here's some basic information on implementing a filesystem within kernel space. It's rather old (from 1996!), but that should at least give you a basic idea for the kind of things you'll need to do. If you choose to go to the FUSE route, here's libfuse, the reference implementation of the userspace side of the FUSE interface. Filesystem driver as a kernel module Basically, the initialization function of your filesystem driver module needs just to call a register_filesystem() function, and give it as a parameter a structure that includes a function pointer that identifies the function in your filesystem driver that will be used as the first step in identifying your filesystem type and mounting it. Nothing more happens at that stage. When a filesystem is being mounted, and either the filesystem type is specified to match your driver, or filesystem type auto-detection is being performed, the kernel's Virtual FileSystem (VFS for short) layer will call that function. It basically says "Here's a pointer to a kernel-level representation of a standard Linux block device. Take a look at it, see if it's something you can handle, and then tell me what you can do with it." At that point, your driver is supposed to read whatever it needs to verify it's the right driver for the filesystem, and then return a structure that includes pointers to further functions your driver can do with that particular filesystem. Or if the filesystem driver does not recognize the data on the disk, it is supposed to return an appropriate error result, and then VFS will either report a failure to userspace or - if filesystem type auto-detection is being performed - will ask another filesystem driver to try. The other drivers in the kernel will provide the standard block device interface, so the filesystem driver won't have to implement hardware support. Basically, the filesystem driver can read and write disk blocks using standard kernel-level functions with the device pointer given to it. 
The VFS layer expects the filesystem driver to make a number of standard functions available to the VFS layer; a few of these are mandatory in order for the VFS layer to do anything meaningful with the filesystem, others are optional and you can just return a NULL in place of a pointer to such an optional function. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343350/"
]
} |
508,321 | Update: I found this question which seems to be very related. Not sure how exactly. What am I missing? I can only use ssh when it resolves through https hostnames? Normal ssh it hangs kinda like encryption keys aren't working, but I am using keys that I know have worked in the past. ssh -v serverOpenSSH_7.9p1, OpenSSL 1.1.1b 26 Feb 2019debug1: Reading configuration data /home/user/.ssh/configdebug1: /home/user/.ssh/config line 39: Applying options for serverdebug1: Reading configuration data /etc/ssh/ssh_configdebug1: Connecting to x.x.x.x [x.x.x.x] port 22. my config Host serverUser userHostName x.x.x.xIdentityFile ~/.ssh/id_key I don't recall ever changing my /etc/ssh/ssh_config and my firewall is set to allow ssh. | Yes, filesystems in Linux can be implemented as kernel modules. But there is also the FUSE (Filesystem in USErspace) interface, which can allow a regular user-space process to act as a filesystem driver. If you're prototyping a new filesystem, implementing it first using the FUSE interface could make the testing and development easier. Once you have the internals of the filesystem worked out in FUSE form, you might then start implementing a performance-optimized kernel module version of it. Here's some basic information on implementing a filesystem within kernel space. It's rather old (from 1996!), but that should at least give you a basic idea for the kind of things you'll need to do. If you choose to go to the FUSE route, here's libfuse, the reference implementation of the userspace side of the FUSE interface. Filesystem driver as a kernel module Basically, the initialization function of your filesystem driver module needs just to call a register_filesystem() function, and give it as a parameter a structure that includes a function pointer that identifies the function in your filesystem driver that will be used as the first step in identifying your filesystem type and mounting it. Nothing more happens at that stage. When a filesystem is being mounted, and either the filesystem type is specified to match your driver, or filesystem type auto-detection is being performed, the kernel's Virtual FileSystem (VFS for short) layer will call that function. It basically says "Here's a pointer to a kernel-level representation of a standard Linux block device. Take a look at it, see if it's something you can handle, and then tell me what you can do with it." At that point, your driver is supposed to read whatever it needs to verify it's the right driver for the filesystem, and then return a structure that includes pointers to further functions your driver can do with that particular filesystem. Or if the filesystem driver does not recognize the data on the disk, it is supposed to return an appropriate error result, and then VFS will either report a failure to userspace or - if filesystem type auto-detection is being performed - will ask another filesystem driver to try. The other drivers in the kernel will provide the standard block device interface, so the filesystem driver won't have to implement hardware support. Basically, the filesystem driver can read and write disk blocks using standard kernel-level functions with the device pointer given to it. The VFS layer expects the filesystem driver to make a number of standard functions available to the VFS layer; a few of these are mandatory in order for the VFS layer to do anything meaningful with the filesystem, others are optional and you can just return a NULL in place of a pointer to such an optional function. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343352/"
]
} |
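For a hang like the one in entry 508,321, where the client stops right after "Connecting to ... port 22", the first things to check are raw TCP reachability and a more verbose handshake trace; the address and key path are the asker's placeholders:

nc -vz x.x.x.x 22                          # does a plain TCP connection to port 22 succeed at all?
ssh -vvv -i ~/.ssh/id_key user@x.x.x.x     # triple -v shows exactly where the connection stalls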
508,355 | With the latest version of Kile installed, trying to run it crashes with a segmentation fault: $ kileqt5ct: using qt5ct pluginInvalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/16/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/16@2x/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/16/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/16@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/22/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/22@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/24/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/24@2x/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/24/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/24@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/32/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/32@2x/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/32/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/32@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/48/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/48@2x/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/48/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/48@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/64/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/64@2x/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/64/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/64@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/96/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/96@2x/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/128/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/128@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/256/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/256@2x/"Invalid Context= "Apps" line for icon theme: "/usr/share/icons/Mint-Y/apps/symbolic/"Invalid Context= "Mimetypes" line for icon theme: "/usr/share/icons/Mint-Y/mimetypes/symbolic/"kf5.kio.core: Refilling KProtocolInfoFactory cache in the hope to find "mtp"kf5.kservice.services: KServiceTypeTrader: serviceType "ThumbCreator" not foundNo text-to-speech plug-ins were found.Segmentation fault (core dumped) How can Kile be run in Linux Mint 19? | After much searching, I found the answer in a Debian bug post exchange: If the okular package is not installed, kile can not start and crashes on a segmentation fault. The solution was to run sudo apt-get install okular | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99989/"
]
} |
508,393 | On bash I press CTRL+R and typing vim bash return list of commands typed in past with vim string.Is possible to make something like this in vim history for commands starting with the : ? | You may move up and down through the commands saved in Vim's command history by using the Up and Down keys after having typed : . If you enter the start of a command and press Up , Vim will give you the most recent saved command with the same prefix string. In this respect it works in the reverse order from what Bash uses in that you first type in a bit of a command and then press Up (rather than, as in Bash, first press Ctrl+R and then type something). This also works for search strings. | {
508,393 | In bash I press CTRL+R and type vim , and bash returns the list of previously typed commands containing the string vim . Is it possible to do something like this in Vim's history for commands starting with : ? | You may move up and down through the commands saved in Vim's command history by using the Up and Down keys after having typed : . If you enter the start of a command and press Up , Vim will give you the most recent saved command with the same prefix string. In this respect it works in the reverse order from what Bash uses in that you first type in a bit of a command and then press Up (rather than, as in Bash, first press Ctrl+R and then type something). This also works for search strings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
508,442 | In /etc/sudoers , there is always: root ALL=(ALL:ALL) ALL However, the root user (with UID 0) doesn't need to enter password when they run sudo command . For other users, a password is required unless their entry contains NOPASSWD or a previous authentication hasn't timed out: user ALL=(ALL:ALL) NOPASSWD:ALL ^^^^^^^^ | sudo allows users to execute commands as UID 0 (or other users) based on how it’s configured. There is no need to ask root for a password to run a command as UID 0, because it already is UID 0. Furthermore, root can also su to anyone it’d like, so there’s no need to prompt for a password when executing sudo -u user as UID 0. Note: I do believe there is a PAM setting that will even require root to provide a password for the target user when using su . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211239/"
]
} |
508,448 | My current prompt format string is generated by an outside script that is provided by the organization. I want to manipulate it a bit (add time to the string) for that I need my current format string. I can understand it by going through the .cshrc (and its linked scripts) - but it would be much easier if I can ask the cshell for my current prompt format string. Do you know of a way to get the format string of the current shell? Thanks | sudo allows users to execute commands as UID 0 (or other users) based on how it’s configured. There is no need to ask root for a password to run a command as UID 0, because it already is UID 0. Furthermore, root can also su to anyone it’d like, so there’s no need to prompt for a password when executing sudo -u user as UID 0. Note: I do believe there is a PAM setting that will even require root to provide a password for the target user when using su . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343461/"
]
} |
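For entry 508,448, tcsh keeps the unexpanded format string in the prompt shell variable, so it can be read (and, for example, prefixed with a time escape) directly; quoting may need adjusting if the existing prompt contains history characters:

% echo $prompt                  # prints the current format string, e.g. %n@%m:%~%#
% set prompt = "%P ${prompt}"   # prepend the current time (%P is tcsh's time-with-seconds escape)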
508,464 | I use this command awk 'NR%2{t=$1;next}{print $1-t,$2}' to get the distance between two consecutive Y points in a file. But I would like to have all positive numbers. How to get that ? like something as modulus. 1577 -46.14921577.57 471578 -47.65281578.87 491579 -49.21061580 -50.77421580.15 51 | sudo allows users to execute commands as UID 0 (or other users) based on how it’s configured. There is no need to ask root for a password to run a command as UID 0, because it already is UID 0. Furthermore, root can also su to anyone it’d like, so there’s no need to prompt for a password when executing sudo -u user as UID 0. Note: I do believe there is a PAM setting that will even require root to provide a password for the target user when using su . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293538/"
]
} |
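For entry 508,464, the sign can be dropped inside the same awk program; a sketch:

awk 'NR%2{t=$1; next}{d=$1-t; print (d<0 ? -d : d), $2}' file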
508,505 | I found this accidentally. I just typed man fork , and instead of showing the system call documentation, it showed to me an awk extension, however, the section page number was 3am , instead of just 3 . What does 3am means? | It appears to be the manual page for a GNU Awk ( gawk ) extension module . The complete list is: $ find /usr/share/man/man3 -name '*3am*' | xargs dpkg -Sgawk: /usr/share/man/man3/readfile.3am.gzgawk: /usr/share/man/man3/inplace.3am.gzgawk: /usr/share/man/man3/ordchr.3am.gzgawk: /usr/share/man/man3/revoutput.3am.gzgawk: /usr/share/man/man3/readdir.3am.gzgawk: /usr/share/man/man3/filefuncs.3am.gzgawk: /usr/share/man/man3/revtwoway.3am.gzgawk: /usr/share/man/man3/time.3am.gzgawk: /usr/share/man/man3/rwarray.3am.gzgawk: /usr/share/man/man3/fork.3am.gzgawk: /usr/share/man/man3/fnmatch.3am.gz I would guess the am stands for a wk m odule. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37434/"
]
} |
508,581 | I am having some issues installing PowerShell on my 32-bit Kali Linux PC. I followed this guide and started with: apt update && apt -y install curl gnupg apt-transport-https Next, I downloaded and added the public repository GPG key so APT will trust the packages and alert the user to any issues with package signatures. curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - With the GPG key added, I added the Microsoft package repository to its own package list file under /etc/apt/sources.list.d/ and updated the list of available packages. echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" \ > /etc/apt/sources.list.d/powershell.listapt update No errors so far indicated in the update process, Microsoft sources are in my source.list, and everything should be good to go. When I execute: apt -y install powershell I get: root@kali:/opt# apt -y install powershellReading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package powershell | You have successfully added the repository for Powershell to your sources.list . However, you report to be using a 32-bit architecture system. Your output of apt-cache confirms that your Repositories do not contain the Powershell package. Taking a look at the Powershell GitHub , it appears that Microsoft does not provide a Linux package for Powershell for 32-bit Linux systems. All of the source and binary packages available for Linux here are for 64-bit systems. As user Bob points out in his comment, Powershell for Linux depends on .Net Core . If you are familiar with building from source, you could potentially build a 32-bit package, but that is a different kind of question. This may not actually work as Powershell maybe has hard requirements for 64-bit instructions and optimizations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277198/"
]
} |
508,706 | I have the following script that SSH to a server with a key and makes a lot of stuff there. #!/usr/bin/env bashssh -i mykey.pem myuser@SERVER_IP << 'ENDSSH'[A LOT OF STUFF]ENDSSH (which I run it with sh scriptname.sh ) Now I want to to the same in another server, so I've to SSH to two different servers ( ip_1 and ip_2 ) with two different .pem files ( mykey1.pem and mykey2.pem ). So far I know how to loop the ips as follows: #!/usr/bin/env baship_list="ip_1 ip_2"for ip in $ip_list; dossh -i mykey.pem myuser@$ip << 'ENDSSH'[A LOT OF STUFF]ENDSSHdone but now I would like to loop also to get the proper pem file. How can I archieve this? Maybe with another list? Can someone provide me an elegant solution? ip_1 should use mykey1.pem ip_2 should use mykey2.pem Thanks in advance | Since you're using bash, you can use associative arrays : #!/usr/bin/env bashdeclare -A ip_list=(["ip_1"]="mykey1.pem" ["ip_2"]="mykey2.pem")for ip in "${!ip_list[@]}"; do ssh -i "${ip_list[$ip]}" myuser@"$ip" << 'ENDSSH'[A LOT OF STUFF]ENDSSHdone Note that associative arrays, unlike regular indexed arrays, are not saved in a specific order, so there is no guarantee that ip_1 will be processed before ip_2 . If you need to use a simple, POSIX compatible shell, create a file with the ip and key files, one per line: $ cat iplist.txtip1 mykey1.pemip2 mykey2.pem Then, use this script: #!/bin/shwhile read -r ip key; do ssh -i "$key" myuser@"$ip" << 'ENDSSH'[A LOT OF STUFF]ENDSSHdone And run it with: sh /path/to/script < /path/to/iplist.txt But if you go that route, Stéphane's approach is better. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343670/"
]
} |
508,724 | I'm using a docker image as a base for my own development that adds the jessie backports repository in its Dockerfile and uses that to install a dependency. This image uses the following command to add the repository: echo "deb http://ftp.debian.org/debian jessie-backports main" >> /etc/apt/sources.list The problem is that fetching packages from the backports repository now fails with the following error (this used to work previously): W: Failed to fetchhttp://ftp.debian.org/debian/dists/jessie-backports/main/binary-amd64/Packages404 Not FoundW: Failed to fetchhttp://deb.debian.org/debian/dists/jessie-updates/main/binary-amd64/Packages 404 Not Found I looked on that server, and those paths are indeed not present there. I tried to figure out on the Debian backports site whether this particular repository should still be available, and I didn't find any indication that this was deprecated or something like that. Is this a temporary issue with the repository, or is the jessie-backports repository not available anymore? And if this is not a temporary issue, what options do I have to use this or an equivalent repository without upgrading to the newer Debian stable version? | Wheezy and Jessie were recently removed from the mirror network , so if you want to continue fetching Jessie backports, you need to use archive.debian.org instead: deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main (Validity checks need to be disabled since the repository is no longer being updated. Jessie’s apt doesn’t support the check-valid-until flag, see inostia’s answer for details, and the configuration summary further down in this answer.) The jessie-updates repository has been removed: all the updates have been merged with the main repository, and there will be no further non-security updates. So any references to jessie-updates in sources.list or sources.list.d files need to be removed. Security updates will continue to be provided , on LTS-supported architectures, in the security repository, until June 30, 2020. Since you’re building a container image, I highly recommend basing it on Debian 9 (Stretch) instead. To stay on Debian 8 (Jessie), your repositories should end up looking like deb http://cdn-fastly.deb.debian.org/debian/ jessie maindeb-src http://cdn-fastly.deb.debian.org/debian/ jessie maindeb http://security.debian.org/ jessie/updates maindeb-src http://security.debian.org/ jessie/updates maindeb http://archive.debian.org/debian jessie-backports maindeb-src http://archive.debian.org/debian jessie-backports main (without the jessie-updates repository). You’ll also need to disable validity checks in /etc/apt/apt.conf (which will apply to all repositories): Acquire::Check-Valid-Until "false"; | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/508724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343693/"
]
} |
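Applied to the Docker image from entry 508,724, the fix amounts to a few shell lines suitable for a RUN step; the file names under sources.list.d and apt.conf.d are arbitrary:

echo "deb http://archive.debian.org/debian jessie-backports main" \
    > /etc/apt/sources.list.d/jessie-backports.list
echo 'Acquire::Check-Valid-Until "false";' \
    > /etc/apt/apt.conf.d/99no-check-valid-until
sed -i '/jessie-updates/d' /etc/apt/sources.list
apt-get update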
508,756 | I've hacked together this bash script which tests if the user has superuser privileges and if they don't it asks for them. Ultimately I'm trying to invert the second 'if' statement so that I can remove the following two lines (the echo "password ok" and the else on the following line) # Root user onlyif [[ "$EUID" != 0 ]]; then sudo -k # make sure to ask for password on next sudo if sudo true; then echo "Password ok" else echo "Aborting script" exit 1 fifiecho "do my ops" Is the purpose of "true" on the fourth line just a null-statement? I need to invert the test on the fourth line, how do I do so? Here is what I've tried: if sudo false; then if sudo true == false; thenif [!(sudo true)]; then | true in bash isn't a keyword, it's a program that instantly exits with a successful exit code.Likewise, false is a program that exits with an unsuccessful exit code. You can try this out by running both programs from your terminal, and then reading the $? variable, which contains the exit code of the last program; trueecho $? # 0falseecho $? #1 if sudo true isn't equivalent to if sudo == true . if sudo true is running the true program using sudo , and checking the exit code. Therefore: if sudo false; then is running the program false as sudo. The return will always be false. if sudo true == false will run the program true with the arguments == and false using sudo . This obviously isn't want you intended. if [!(sudo true)] is invalid syntax. What you are probably looking for is if ! sudo true; | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/508756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85659/"
]
} |
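Putting that together, the inverted test the asker in entry 508,756 was after looks like this:

if [[ "$EUID" != 0 ]]; then
    sudo -k                    # forget cached credentials
    if ! sudo true; then       # true exits 0, so this branch runs only if sudo itself failed
        echo "Aborting script"
        exit 1
    fi
fi
echo "do my ops"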
508,764 | I have written a little function that exits if the value of the function argument is empty, I would like to be able to also print the name of the parameter (not the value!) if it is possible, my following implementation fails to print the name of the parameter. function exitIfEmpty(){ if [ -z "$1" ] then echo "Exiting because ${!1} is empty" exit 1 fi} when called like so exitIfEmpty someKey should print Exiting because someKey is empty | What gets passed to the function is just a string. If you run func somevar , what is passed is the string somevar . If you run func $somevar , what is passed is (the word-split) value of the variable somevar . Neither is a variable reference, a pointer or anything like that, they're just strings. If you want to pass the name of a variable to a function, and then look at the value of that variable, you'll need to use a nameref (Bash 4.3 or later, IIRC), or an indirect reference ${!var} . ${!var} expands to the value of the variable whose name is stored in var . So, you just have it the wrong way in the script, if you pass the name of a variable to function, use "${!1}" to get the value of the variable named in $1 , and plain "$1" to get the name . E.g. this will print variable bar is empty, exiting , and exit the shell: #!/bin/bashexitIfEmpty() { if [ -z "${!1}" ]; then echo "variable $1 is empty, exiting" exit 1 fi}foo=xunset barexitIfEmpty fooexitIfEmpty bar | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341050/"
]
} |
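The answer in entry 508,764 mentions namerefs; with Bash 4.3 or later the same check can be written with declare -n instead of ${!1}; a sketch:

exitIfEmpty() {
    declare -n ref="$1"        # "ref" becomes an alias for the variable named in $1
    if [ -z "$ref" ]; then
        echo "variable $1 is empty, exiting"
        exit 1
    fi
}
foo=x
exitIfEmpty foo                # passes
exitIfEmpty bar                # exits, because bar is unset/empty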
508,767 | For the first single time I just had to log twice from the screensaver, and the second login screen was unusual. I am running XFCE 4.12 on Slackware 14.1, with xcreensaver 5.40. The first login screen to exit the screensaver was normal. However, the screen then became black, and when I moved the mouse or typed a key it became white with a black square and some text saying something like Blank screensaver, enter your password to login or click the icon to lock along with the machine name. However I cannot find any "blank screensaver" in xscreensaver's list, and it didn't look like an xscreensaver anyway (no login box, the login form took the whole screen). Is such a screensaver known ? And could I have triggered its shortcut with random keys ? Call me paranoid, but I wonder if it might be some keylogger… | What gets passed to the function is just a string. If you run func somevar , what is passed is the string somevar . If you run func $somevar , what is passed is (the word-split) value of the variable somevar . Neither is a variable reference, a pointer or anything like that, they're just strings. If you want to pass the name of a variable to a function, and then look at the value of that variable, you'll need to use a nameref (Bash 4.3 or later, IIRC), or an indirect reference ${!var} . ${!var} expands to the value of the variable whose name is stored in var . So, you just have it the wrong way in the script, if you pass the name of a variable to function, use "${!1}" to get the value of the variable named in $1 , and plain "$1" to get the name . E.g. this will print variable bar is empty, exiting , and exit the shell: #!/bin/bashexitIfEmpty() { if [ -z "${!1}" ]; then echo "variable $1 is empty, exiting" exit 1 fi}foo=xunset barexitIfEmpty fooexitIfEmpty bar | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26952/"
]
} |
508,768 | Here's an example of a very simple code snippet that I'd like to paste to my terminal in a way that everything is executed. sudo apt updatesudo apt upgradesudo apt -y install build-essentialsudo apt -y install gitsudo apt -y install libxml2-dev # required for some tools using xml filessudo apt autoremove Unfortunately, what happens if build-essential wasn't installed beforehand is that it only runs until sudo apt -y install build-essential . The subsequent lines are skipped. The same is true if git wasn't installed: It'll run until the git line, then skip the rest. What's the reason for this happening, and is there a way to fix this problem without having to create a script file and running it via bash? | Assuming you are still within sudo's credential cache timeout (if you are unsure, just refresh it with sudo -v before running the snippet), that problem happens because apt(-get) is a very rich console application and thus consumes stdin even when it asks you nothing because of the -y . You can work around that by running the whole snippet in a subshell: At the prompt, start by typing a ( then paste the snippet then type the closing ) and press return It should go. Notice how the snippet is not executed as soon as you paste it. It rather gets “queued” on the command line, waiting for the closing parentheses. (PS: depending on your system you may need to use apt-get autoremove in place of apt autoremove , and you may also need to use -y on update and upgrade too) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341212/"
]
} |
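Concretely, the snippet from entry 508,768 pastes cleanly when wrapped in a subshell (condensed here, and assuming the sudo timestamp is still valid, e.g. after a prior sudo -v):

(
sudo apt update
sudo apt -y upgrade
sudo apt -y install build-essential git libxml2-dev
sudo apt -y autoremove
)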
508,777 | I have file.txt that I need to read into a Bash array. Then I need to remove spaces, double quotes and all but the first comma in every entry . Here's how far I've gotten: $ cat file.txt10,this2 0 , i s30,"all"40,I50,n,e,e,d,260",s e,e"$ cat script.sh#!/bin/bashreadarray -t ARRAY<$1ARRAY=( "${ARRAY[@]// /}" )ARRAY=( "${ARRAY[@]//\"/}" )for ELEMENT in "${ARRAY[@]}";do echo "|ELEMENT|$ELEMENT|"done$ ./script.sh file.txt|ELEMENT|10,this||ELEMENT|20,is||ELEMENT|30,all||ELEMENT|40,I||ELEMENT|50,n,e,e,d,2||ELEMENT|60,se,e| Which works great except for the comma situation. I'm aware that there are multiple ways to skin this cat, but due to the larger script this is a part of, I'd really like to use parameter substitution to get to here: |ELEMENT|10,this||ELEMENT|20,is||ELEMENT|30,all||ELEMENT|40,I||ELEMENT|50,need2||ELEMENT|60,see| Is this possible via parameter substitution? | I would remove what you need to remove using sed before loading into the array (also note the lower case variable names, in general it is best to avoid capitalized variables in shell scripts): #!/bin/bashreadarray -t array< <(sed 's/"//g; s/ *//g; s/,/"/; s/,//g; s/"/,/' "$1")for element in "${array[@]}";do echo "|ELEMENT|$element|"done This produces the following output on your example file: $ foo.sh file |ELEMENT|10,this||ELEMENT|20,is||ELEMENT|30,all||ELEMENT|40,I||ELEMENT|50,need2||ELEMENT|60,see| If you really must use parameter substitution, try something like this: #!/bin/bashreadarray -t array< "$1"array=( "${array[@]// /}" )array=( "${array[@]//\"/}" )array=( "${array[@]/,/\"}" )array=( "${array[@]//,/}" )array=( "${array[@]/\"/,}" )for element in "${array[@]}"; do echo "|ELEMENT|$element|"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221837/"
]
} |
508,781 | I want to increase my swap size to be able to have the hibernate option. First, I tried to add some swapfile. I followed https://bogdancornianu.com/change-swap-size-in-ubuntu/ and typed this in my terminal: sudo dd if=/dev/zero of=swapfile bs=1G count=16 I get: 16+0 records in16+0 records out17179869184 bytes (17 GB, 16 GiB) copied, 206.949 s, 83.0 MB/s then, I followed the instructions: sudo mkswap /swapfile But I get this error: mkswap: cannot open /swapfile: No such file or directory Then, I decided to resize my swap partition instead of swapfile. So I want to delete them. (I didn't create any before so I assume I can delete them all?) I followed this: https://askubuntu.com/questions/904628/default-17-04-swap-file-location I tried: $ cat /proc/swaps$ grep swap /etc/fstab But I get nothing from the first one. Output from the second one is: total used free shared buff/cache availableMem: 11862 3498 1014 138 7349 7907Swap: 0 0 0 I also tried (after reboot): swapon -s and get Filename Type Size Used Priority/dev/sdb3 partition 3905532 0 -2 I wonder that did I successfully create swapfiles? How do I delete them if I did? | The first issue is that your first command created a file, swapfile , in your current directory, and that your subsequent command(s) were explicitly referencing /swapfile , a file called swapfile in the root directory. If that was not your current working directory when you executed the first command, all of the subsequent commands would be referring to a file that is not there to operate upon. If you got no output from cat /proc/swaps , that indicates that either your system does not have procfs running (unlikely), or that you currently have no active swap space configured. The output you claim to get from grep swap /etc/fstab makes no sense whatsoever. That looks like the output of free -m (incidentally confirming that you have no active swap configured), not the partial contents of the filesystem table. Your post-reboot swapon -s (which as the manual states gives the same information as cat /proc/swaps ) indicates that at some point prior to your reboot, someone executed swapoff . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/508781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343727/"
]
} |
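For entry 508,781, the usual sequence with an absolute path (so that dd, mkswap and swapon all refer to the same file) looks like this; the 16 GiB size is the asker's:

sudo dd if=/dev/zero of=/swapfile bs=1G count=16   # note the leading slash
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                                      # confirm it is active
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it persistent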
508,804 | RHEL 7.2 memory use, per free -m : total used free shared buff/cache availableMem: 386564 77941 57186 687 251435 306557Swap: 13383 2936 16381 we see that used swap is 2936M so we want to decrease it to min by the following echo 1 > /proc/sys/vm/swappinesssysctl -w vm.swappiness=1echo "vm.swappiness = 1" >> /etc/sysctl.conf and after 10 min we check again , but still OS used the swap free -m : total used free shared buff/cache availableMem: 386564 77941 57186 687 251435 306557Swap: 13389 2930 16381 why the actions that we did not take affect immeditly? Do we need to restart the OS, in order to get swap used to be 0 ? example we run vmstat : vmstatprocs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 3 0 85740 20255872 2238248 183126400 0 0 7 162 0 0 7 1 92 0 0 we decrease the vm.swappiness=1 and run vmstat after 10min: procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 3 0 85740 20255872 2238248 183126400 0 0 7 162 0 0 7 1 92 0 0 | As you’ve been told before (see Why does swappiness not work? ), changing swappiness only affects future decisions made by the kernel when it needs to free memory. Reducing it won’t cause the kernel to reload everything that’s been swapped out. Your vmstat output shows that swap isn’t being actively used, i.e. your current workloads really don’t need the pages which have been swapped out. There’s no point in trying to micro-manage the kernel’s use of swap in the way you tend to do. Depending on your workload, decide whether you need to favour the page cache or not, adjust swappiness accordingly, then leave the system to run. If you really want to clear swap, disable it and re-enable it: swapoff -a && swapon -a | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
508,818 | On queue-based clusters the Queue of pending jobs is shown from a command, say showqueue . The command returns, in columns, a list of reasonable data like names, etc, but the columns/data don't really matter for the question. I like using the utility watch like watch showqueue at times (with an alias of alias watch="watch " to force alias expansion of my command to watch). There is valuable data (Running jobs), in the first few lines, then pending jobs, etc, and some valuable summaries at the end. However, at times the output of showqueue goes off the screen (Unbelievable, I know)! Ideally, I'd like some way to be able to see the beginning and end of the file at the same time. The best I have so far is: showqueue > file; head -n 20 file > file2; echo "..." >> file2 ; tail -n 20 file >> file2; cat file2 , and using watch on an alias of that. Does anyone know of anything a little more flexible or single-utility? My solution gets a little nastier with bash loops to make the "..." break multilined, it's not adaptive to resizing the terminal window at all, and I'm sure there's more that I missed. Any suggestions? | Are you looking to do something like the following? Shows output from both head and tail. $ showqueue | (head && tail) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118707/"
]
} |
508,833 | My setup is: -Windows 10 on a M.2 SSD -Ubuntu 18.04 on a normal ssd (sdb) -Arch Linux on sdb as well but on different partitions I first installed ubuntu with grub and later added arch linux without installing a seperate boot loader, basically the same set-up this guide advises. I can add kernel parameters to ubuntu via editing /etc/default/grub and then running sudo update-grub . I can confirm the changes made here are persistent through cat /proc/cmdline . But while I have a etc/default/grub on arch I don't have a grub.cfg for it so I can't apply the changes. sudo grub-mkconfig -o /boot/grub/grub.cfg outputs that there is no such file or directory. Obviously the changes made in archs grub-file won't transfer to ubuntu when I update the .cfg nor will the ubuntu .cfg load parameters for arch. Is there any way I can add kernel parameters to my arch installation without installing a second instance of grub? EDIT: I was able to get persistent changes to arch's kernel parameters by editing ubuntu's grub.cfg's arch linux entry manually but this I don't think this is a real solution, is it? Wouldn't I have to manipulate it again every time I ran update-grub ? | Are you looking to do something like the following? Shows output from both head and tail. $ showqueue | (head && tail) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343792/"
]
} |
508,834 | I am using EOF to generate bash scripts that run the Rscripts. In the Rscript I used basename to specify the output file name. When I use EOF to generate a list of bash scripts, I could not get basename to work. The error message is shown below. I was still able to get the bash scripts generated but the ${AF} turned into a blank in both places where it presented. Very strange! I had the bash script tested and it is working so I know the problem is somewhere between EOF and basename . How can I use basename with EOF ? Or is there any alternative methods? Thank you. for letter in {A..Z} do cat <<- EOF > batch_${letter}.sh #!/bin/bash module load R/3.5.1 R_func="/home/dir/R_func" TREAT="/home/dir/POP" BASE="/home/dir/base" OUTPUT="/home/dir/tmp" for AF in ${BASE}/${letter}*.txt_step3; do Rscript ${R_func}P_tools.R \ --ptool ${R_func}/P_tools_linux \ --group ${AF} \ --treat ${TREAT}/pop_exclude24dup \ --out ${OUTPUT}/OUT_$(basename ${AF%%_txt_step3})_noregress \ --binary-target F; done EOF done This is the error message basename: missing operand Try 'basename --help' for more information. | Are you looking to do something like the following? Shows output from both head and tail. $ showqueue | (head && tail) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/508834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324209/"
]
} |
509,000 | I have a small numpad keyboard which I would like to use for launching macros and shortcuts, along side my regular keyboard. I can attach macros and shortcuts to these keys ( i.e, numpad 1 minimises the active window ), but my primary keyboard numpad also activates the shortcut. I would like a way to have the secondary keyboard act completely separately and to then attach shortcuts to it. Here is the output I get from xinput . ⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ LVT Recon gaming mouse id=10 [slave pointer (2)]⎜ ↳ LVT Recon gaming mouse id=11 [slave pointer (2)]⎜ ↳ Corsair Corsair K30A Gaming Keyboard id=13 [slave pointer (2)]⎜ ↳ SIGMACHIP USB Keyboard id=18 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Power Button id=8 [slave keyboard (3)] ↳ Sleep Button id=9 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=12 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=14 [slave keyboard (3)] ↳ LVT Recon gaming mouse id=15 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=16 [slave keyboard (3)] ↳ SIGMACHIP USB Keyboard id=17 [slave keyboard (3)] ↳ SIGMACHIP USB Keyboard id=19 [slave keyboard (3)] | While my other answer will probably work on most Linuxes, even if they're many years old, SystemD and udev actually makes things easier: use lsusb to find the vendor and product code of your additional keyboard. (In my case, it's Vendor 145F, Product 0177. Make sure to have the letters in uppercase.) create a file /etc/udev/hwdb.d/90-extra-keyboard.hwdb , with contents similar to this: evdev:input:b0003v145Fp0177* KEYBOARD_KEY_7005b=stopcd The first line identifies the device: the four letters after the v is the vendor code, after the p, it's the product code, from the previous step. Every further line maps a scancode to a symbolic name. To get the scancode, run evtest : Event: time 1553711252.888538, -------------- SYN_REPORT ------------Event: time 1553711257.656558, type 4 (EV_MSC), code 4 (MSC_SCAN), value 70059Event: time 1553711257.656558, type 1 (EV_KEY), code 79 (KEY_KP1), value 1 To find out what to use for the symbolic name, look at the list of #define KEY_… lines in /usr/include/linux/input-event-codes.h : #define KEY_PLAYPAUSE 164#define KEY_PREVIOUSSONG 165#define KEY_STOPCD 166#define KEY_RECORD 167 re-build and load internal databases by running systemd-hwdb update; udevadm trigger verify the new settings work by running evtest again, or by assigning shortcuts in your settings. When trying this out in applications, just remember that if your desktop environment already uses that shortcut, the application won't even see the keypress. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/509000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/343929/"
]
} |
509,173 | There's a script (let's call it echoer ) that prints to screen a bunch of information. I'd like to be able to only see lines after a pattern is found. I imagine the usage of a solution to look something like echoer | solution_command <pattern> Ideally pattern would be a regular expression, but hard value strings would be enough for me. | AWK can do this with pattern ranges, which allows the use of any regular expression: echoer | awk '/pattern/,0' will print echoer ’s output starting with the first line matching pattern . AWK is pattern-based, and is typically used with a “if this pattern matches, do this” type of approach. “This pattern” can be a range of patterns, defined as “when this pattern matches, start doing this, until this other pattern matches”; this is specified by writing two patterns separated by a comma, as above. Patterns can be text matches, as in /pattern/ , where the current line is checked against the pattern, interpreted as a regular expression; they can also be general expressions, evaluated for every line, and considered to match if their result is non-zero or non-empty. In AWK, the default action is to print the current line. Putting all this together, awk '/pattern/,0' looks for lines matching pattern , and once it finds one, applies the default action to all lines until the 0 condition matches (is non-zero). awk '/pattern/,""' would work too. The Gawk manual goes into much more detail. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/509173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344115/"
]
} |
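For comparison, a sketch of the same behaviour written with an explicit flag variable, which is essentially what the range form does internally:

    echoer | awk '/pattern/ {found=1} found'    # print from the first matching line onwards
    echoer | awk 'found; /pattern/ {found=1}'   # same idea, but excludes the matching line itself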
509,178 | I'm searching a folder in s3 bucket using this command aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir3' It's getting results like dir1/dir2/dir3/1/aaa.txt dir1/dir2/dir3/1/bbb.txtdir1/dir2/dir3/1/ccc.txt However, I need only path of that file like dir1/dir2/dir3 I can able remove unnecessary text to get directory path by this aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir2' | head -n 1 | sed 's/1.*//' But this is not working with multiple string search in grep aws s3 ls s3://bucketname/dir1/dir2/dir3 --recursive | grep -i 'dir3\|folder3' I need output like this dir1/dir2/dir3folder1/folder2/folder3 | AWK can do this with pattern ranges, which allows the use of any regular expression: echoer | awk '/pattern/,0' will print echoer ’s output starting with the first line matching pattern . AWK is pattern-based, and is typically used with a “if this pattern matches, do this” type of approach. “This pattern” can be a range of patterns, defined as “when this pattern matches, start doing this, until this other pattern matches”; this is specified by writing two patterns separated by a comma, as above. Patterns can be text matches, as in /pattern/ , where the current line is checked against the pattern, interpreted as a regular expression; they can also be general expressions, evaluated for every line, and considered to match if their result is non-zero or non-empty. In AWK, the default action is to print the current line. Putting all this together, awk '/pattern/,0' looks for lines matching pattern , and once it finds one, applies the default action to all lines until the 0 condition matches (is non-zero). awk '/pattern/,""' would work too. The Gawk manual goes into much more detail. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/509178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344117/"
]
} |
509,232 | On my Debian GNU/Linux 9 system, when a binary is executed, the stack is uninitialized but the heap is zero-initialized. Why? I assume that zero-initialization promotes security but, if for the heap, then why not also for the stack? Does the stack, too, not need security? My question is not specific to Debian as far as I know. Sample C code: #include <stddef.h>#include <stdlib.h>#include <stdio.h>const size_t n = 8;// --------------------------------------------------------------------// UNINTERESTING CODE// --------------------------------------------------------------------static void print_array( const int *const p, const size_t size, const char *const name){ printf("%s at %p: ", name, p); for (size_t i = 0; i < size; ++i) printf("%d ", p[i]); printf("\n");}// --------------------------------------------------------------------// INTERESTING CODE// --------------------------------------------------------------------int main(){ int a[n]; int *const b = malloc(n*sizeof(int)); print_array(a, n, "a"); print_array(b, n, "b"); free(b); return 0;} Output: a at 0x7ffe118997e0: 194 0 294230047 32766 294230046 32766 -550453275 32713 b at 0x561d4bbfe010: 0 0 0 0 0 0 0 0 The C standard does not ask malloc() to clear memory before allocating it, of course, but my C program is merely for illustration. The question is not a question about C or about C's standard library. Rather, the question is a question about why the kernel and/or run-time loader are zeroing the heap but not the stack. ANOTHER EXPERIMENT My question regards observable GNU/Linux behavior rather than the requirements of standards documents. If unsure what I mean, then try this code, which invokes further undefined behavior ( undefined, that is, as far as the C standard is concerned) to illustrate the point: #include <stddef.h>#include <stdlib.h>#include <stdio.h>const size_t n = 4;int main(){ for (size_t i = n; i; --i) { int *const p = malloc(sizeof(int)); printf("%p %d ", p, *p); ++*p; printf("%d\n", *p); free(p); } return 0;} Output from my machine: 0x555e86696010 0 10x555e86696010 0 10x555e86696010 0 10x555e86696010 0 1 As far as the C standard is concerned, behavior is undefined, so my question does not regard the C standard. A call to malloc() need not return the same address each time but, since this call to malloc() does indeed happen to return the same address each time, it is interesting to notice that the memory, which is on the heap, is zeroed each time. The stack, by contrast, had not seemed to be zeroed. I do not know what the latter code will do on your machine, since I do not know which layer of the GNU/Linux system is causing the observed behavior. You can but try it. UPDATE @Kusalananda has observed in comments: For what it's worth, your most recent code returns different addresses and (occasional) uninitialised (non-zero) data when run on OpenBSD. This obviously does not say anything about the behaviour that you are witnessing on Linux. That my result differs from the result on OpenBSD is indeed interesting. Apparently, my experiments were discovering not a kernel (or linker) security protocol, as I had thought, but a mere implementational artifact. In this light, I believe that, together, the answers below of @mosvy, @StephenKitt and @AndreasGrapentin settle my question. See also on Stack Overflow: Why does malloc initialize the values to 0 in gcc? (credit: @bta). | The storage returned by malloc() is not zero-initialized. Do not ever assume it is. 
In your test program, it's just a fluke: I guess the malloc() just got a fresh block off mmap() , but don't rely on that, either. For an example, if I run your program on my machine this way: $ echo 'void __attribute__((constructor)) p(void){ void *b = malloc(4444); memset(b, 4, 4444); free(b);}' | cc -include stdlib.h -include string.h -xc - -shared -o pollute.so$ LD_PRELOAD=./pollute.so ./your_programa at 0x7ffd40d3aa60: 1256994848 21891 1256994464 21891 1087613792 32765 0 0b at 0x55834c75d010: 67372036 67372036 67372036 67372036 67372036 67372036 67372036 67372036 Your second example is simply exposing an artifact of the malloc implementation in glibc; if you do that repeated malloc / free with a buffer larger than 8 bytes, you will clearly see that only the first 8 bytes are zeroed, as in the following sample code. #include <stddef.h>#include <stdlib.h>#include <stdio.h>const size_t n = 4;const size_t m = 0x10;int main(){ for (size_t i = n; i; --i) { int *const p = malloc(m*sizeof(int)); printf("%p ", p); for (size_t j = 0; j < m; ++j) { printf("%d:", p[j]); ++p[j]; printf("%d ", p[j]); } free(p); printf("\n"); } return 0;} Output: 0x55be12864010 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0x55be12864010 0:1 0:1 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 0x55be12864010 0:1 0:1 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 0x55be12864010 0:1 0:1 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/509232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18202/"
]
} |
509,375 | I am reading APUE and the Interrupted System Calls chapter confuses me. I would like to write down my understanding based on the book, please correct me. A characteristic of earlier UNIX systems was that if a process caught a signal while the process was blocked in a ‘‘slow’’ system call, the system call was interrupted. The system call returned an error and errno was set to EINTR . This was done under the assumption that since a signal occurred and the process caught it, there is a good chance that something has happened that should wake up the blocked system call. So it's saying that the earlier UNIX systems has a feature: if my program uses a system call, it would be interrupted/stopped, if at any time the program catches a signal. (Does default handler also count as a catch?) For example, if I have a read system call, which reads 10GB data, when it's reading, I send any one of signals(e.g. kill -SIGUSR1 pid ), then read would fail and return. To prevent applications from having to handle interrupted system calls, 4.2BSD introduced the automatic restarting of certain interrupted system calls. The system calls that were automatically restarted are ioctl , read , readv , write , writev , wait , and waitpid . As we’ve mentioned, the first five of these functions are interrupted by a signal only if they are operating on a slow device; wait and waitpid are always interrupted when a signal is caught. Since this caused a problem for some applications that didn’t want the operation restarted if it was interrupted, 4.3BSD allowed the process to disable this feature on a per-signal basis. So before automatic restarting was introduced, I had to handle interrupted system call on my own. I need write code like: The problem with interrupted system calls is that we now have to handle the error return explicitly. The typical code sequence (assuming a read operation and assuming that we want to restart the read even if it’s interrupted) would be: again: if ((n = read(fd, buf, BUFFSIZE)) < 0) { if (errno == EINTR) goto again; /* just an interrupted system call */ /* handle other errors */} But nowadays I don't have to write this kind of code, beacause of the automatic restarting mechanism. So if I my understanding are all correct, what/why should I care about interrupted system call now..? It seems the system/OS handles it automatically. | Interruption of a system call by a signal handler occurs only in the case of various blocking system calls, and happens when the system call is interrupted by a signal handler that was explicitly established by the programmer. Furthermore, in the case where a blocking system call is interrupted by a signal handler, automatic system call restarting is an optional feature. You elect to automatically restart system calls by specifying the SA_RESTART flag when establishing the signal handler. As stated in (for example) the Linux signal(7) manual page: If a signal handler is invoked while a system call or library function call is blocked, then either: * the call is automatically restarted after the signal handler returns; or * the call fails with the error EINTR. Which of these two behaviors occurs depends on the interface and whether or not the signal handler was established using the SA_RESTART flag (see sigaction(2)). As hinted by the last sentence quoted above, even when you elect to use this feature, it does not work for all system calls, and the set of system calls for which it does work varies across UNIX implementations. 
The Linux signal(7) manual page notes a number of system calls that are automatically restarted when using the SA_RESTART flag, but also goes on to note various system calls that are never restarted, even if you specify that flag when establishing a handler, including: * "Input" socket interfaces, when a timeout (SO_RCVTIMEO) has been set on the socket using setsockopt(2): accept(2), recv(2), recvfrom(2), recvmmsg(2) (also with a non-NULL timeout argu‐ ment), and recvmsg(2). * "Output" socket interfaces, when a timeout (SO_RCVTIMEO) has been set on the socket using setsockopt(2): connect(2), send(2), sendto(2), and sendmsg(2). * File descriptor multiplexing interfaces: epoll_wait(2), epoll_pwait(2), poll(2), ppoll(2), select(2), and pselect(2). * System V IPC interfaces: msgrcv(2), msgsnd(2), semop(2), and semtimedop(2). For these system calls, manual restarting using a loop of the form described in APUE is essential, something like: while ((ret = some_syscall(...)) == -1 && errno == EINTR) continue;if (ret == -1) /* Handle error */ ; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208590/"
]
} |
509,384 | I am trying to build my source using gcc 8.3.0 root@eqx-sjc-engine2-staging:/usr/local/src# gcc --versiongcc (Debian 8.3.0-2) 8.3.0Copyright (C) 2018 Free Software Foundation, Inc.This is free software; see the source for copying conditions. There is NOwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.root@eqx-sjc-engine2-staging:/usr/local/src# I am getting the below error libs/esl/fs_cli.c:1679:43: error: '%s' directive output may be truncated writing up to 1023 bytes into a region of size 1020 [-Werror=format-truncation=] snprintf(cmd_str, sizeof(cmd_str), "api %s\nconsole_execute: true\n\n", argv_command); libs/esl/fs_cli.c:1679:3: note: 'snprintf' output between 29 and 1052 bytes into a destination of size 1024 snprintf(cmd_str, sizeof(cmd_str), "api %s\nconsole_execute: true\n\n", argv_command); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ cc1: all warnings being treated as errors make[2]: *** [Makefile:2693: fs_cli-fs_cli.o] Error 1 make[2]: Leaving directory '/usr/local/src' make[1]: *** [Makefile:3395: all-recursive] Error 1 make[1]: Leaving directory '/usr/local/src' make: *** [Makefile:1576: all] Error 2 I tried running the make like below make -Wno-error=format-truncation Still I see the same issue. my linux version is root@eqx-sjc-engine2-staging:~# cat /etc/os-release PRETTY_NAME="Debian GNU/Linux buster/sid"NAME="Debian GNU/Linux"ID=debianHOME_URL="https://www.debian.org/"SUPPORT_URL="https://www.debian.org/support"BUG_REPORT_URL="https://bugs.debian.org/" How to fix it? | Depending on the makefile, you probably need something like: make CFLAGS="-Wno-error=format-truncation" The default Makefile rules, and most well-written Makefiles, should see CFLAGS for option arguments to the C compiler being used. Similarly, you can use CXXFLAGS for providing options to the C++ compiler, and LDFLAGS for the linker. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342917/"
]
} |
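Two hedged variations on the same idea; note that CFLAGS passed like this usually replaces the project's default flags, so repeat any optimisation or debug options you still want:

    make CFLAGS="-O2 -g -Wno-error=format-truncation"
    # for autoconf-style projects the flags can also be fixed at configure time:
    ./configure CFLAGS="-O2 -g -Wno-error=format-truncation" && make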
509,487 | I need OpenGL 4.5 to be supported by my graphics card's driver, and as far as I know Mesa is actually able to run it. glxinfo gives me this: $ glxinfo | grep "OpenGL"OpenGL vendor string: Intel Open Source Technology CenterOpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile OpenGL core profile version string: 3.3 (Core Profile) Mesa 13.0.6OpenGL core profile shading language version string: 3.30OpenGL core profile context flags: (none)OpenGL core profile profile mask: core profileOpenGL core profile extensions:OpenGL version string: 3.0 Mesa 13.0.6OpenGL shading language version string: 1.30OpenGL context flags: (none)OpenGL extensions:OpenGL ES profile version string: OpenGL ES 3.1 Mesa 13.0.6OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10OpenGL ES profile extensions: So this means it can only run OpenGL 3.0. So I tried to update it, but I ran into several problems: If I try to update it through apt , i.e. sudo apt-get upgrade libgl1-mesa-dri -t testing , it is broken: $ sudo apt-get upgrade libgl1-mesa-dri -t testingReading package lists... DoneBuilding dependency tree Reading state information... DoneCalculating upgrade... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: libsnmp30 : Depends: libsensors4 (>= 1:3.0.0) but it is not going to be installed mesa-va-drivers : Depends: libsensors4 (>= 1:3.0.0) but it is not going to be installed mesa-va-drivers:i386 : Depends: libsensors4:i386 (>= 1:3.0.0) but it is not going to be installedE: Broken packages Okay, but: $ apt-cache policy libsensors4libsensors4: Installed: 1:3.4.0-4 Candidate: 1:3.4.0-4 Version table: *** 1:3.4.0-4 900 900 http://ftp.ru.debian.org/debian stretch/main amd64 Packages 100 /var/lib/dpkg/status So it depends on the lib version >=1:3.0.0, but I have version 1:3.4.0-4, which is really strange. Generally, I don't understand how should I upgrade Mesa. If using apt , I don't know which packages should I update. If from source, I don't know how will it interact with apt and if it won't be reverted by an update. I am using Debian 9 Stretch, and my graphics card is Intel HD Graphics 5000. | Don't try to install testing directly on stable! or you'll end up with a FrankenDebian (at best) or will lose a lot of packages due to unrealistic dependencies. The good news is that those updated packages are available in stretch-backports . Debian's mesa had several packaging changes in testing so also in stretch-backports, related to the vendor neutral's GL dispatch library turning this non-trivial. Also, since you are using multi-arch with both amd64 and i386 packages, those packages must be upgraded in lockstep or you'll get some of the errors you've seen. I thus can't tell the exact command on how to upgrade mesa only, without upgrading everything (which you should not do: stretch-backports doesn't have security support) but I will give a procedure. First please follow Debian's instructions on how to add stretch-backports properly. I'll put a simplified summary here: # echo 'deb http://deb.debian.org/debian/ stretch-backports main contrib non-free' >> /etc/apt/sources.list.d/stretch-backports.list# apt-get update And DO remove buster/testing/sid entries if you added them. 
Some packages might have disappeared (eg libgles1-mesa isn't provided anymore) and others appeared. You will have to upgrade all involved packages in one single apt-get command, so you'd first have to look at the most involved packages with their current version, and let the dependency resolver pick the missing parts (eg: libdrm2 ). You should do things manually, not in a script because you have to check nothing bad happens (like apt-get offering to delete 100 packages). So something like this: dpkg -l | fgrep 13.0.6-1+b2 or even: dpkg -l | awk '/^.i/ && $3 == "13.0.6-1+b2" { print $2 }' | xargs to get the main part of the list of packages. DO NOTE that for installed multi-arch packages you must provide both the amd64 package (which is by default so doesn't require the extra :amd64 but you can leave it from the cut/paste) and again the same i386 package (using :i386 appended to the package name) if it was also found in the previous dpkg command. So the final installation command should probably look like: apt-get -t stretch-backports install libgl1-mesa-dri:amd64 libgl1-mesa-dri:i386 mesa-opencl-icd:amd64 mesa-opencl-icd:i386 ... You get the idea. Now check the number of to be removed packages that are offered. If there are some mesa related packages to be removed (eg: libgles1-mesa ) that's fine, if most of them or many unrelated packages are offered to be removed, abort and ponder what might be missing. Of course many others should be offered in addition as upgrade (eg: libdrm2 and libdrm2:i386 ). It's probably those that might still cause trouble because of multi-arch, so you might have to add them manually twice (once for each arch) to the growing one-liner list if apt-get isn't smart enough. As suggested by @Stephen Kitt, other useful and related packages, dealing with an improved usage of the hardware, including graphics support, are also available in stretch-backports, and should probably also be upgraded. Among them: linux-image-amd64 which will currently pull linux-image-4.19.0-0.bpo.2-amd64 Various firmware packages (anyway all those that are currently installed should be upgraded), like firmware-misc-nonfree which might include upgraded graphical support and anyway which might have to be upgraded as a (perhaps hidden) dependency for the newer kernel for best results. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317519/"
]
} |
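A sketch for building the lockstep package list with the architecture suffixes included, so the :i386 copies are not forgotten (the version string is the one from the dpkg output in the question):

    dpkg-query -W -f='${Package}:${Architecture} ${Version}\n' \
      | awk '$2 == "13.0.6-1+b2" { print $1 }' | xargs
    # paste the resulting list after:  apt-get -t stretch-backports install ...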
509,490 | I need to find an image, say ABC.jpg, that I know will have been programmatically placed into a directory named ABC_MPSC. I've tried: cd /find . -name "ABC_MPSC/ABC.jpg" But that doesn't return anything (I actually know where the particular one I'm searching for is, so I know it exists). Is there a find command that could allow me not have to search manually? | There's a -path predicate that's useful here: find . -path '*/ABC_MPSC/ABC.jpg' The POSIX description for that predicate is: The primary shall evaluate as true if the current pathname matches pattern using the pattern matching notation described in Pattern Matching Notation. The additional rules in Patterns Used for Filename Expansion do not apply as this is a matching operation, not an expansion. The reason that your -name "ABC_MPSC/ABC.jpg" failed is because the -name predicate: shall evaluate as true if the basename of the current pathname matches pattern In other words, -name never sees the directory of the current filename, only the base filename itself (ABC.jpg, for example). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/509490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344393/"
]
} |
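Two related sketches; -ipath is a GNU/BSD extension rather than POSIX:

    find . -path '*/ABC_MPSC/*' -name 'ABC.jpg'   # any ABC.jpg anywhere below an ABC_MPSC directory
    find . -ipath '*/abc_mpsc/abc.jpg'            # case-insensitive version of the answer's pattern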
509,553 | Like If I have : 1st line (keep) 2nd line (keep) 3rd line (keep) 4rth lines (delete) 5th (del) 6th (keep) 7nth (keep) 8th lines (keep) 9th (del) 10th (del) 11th (keep) 12th (keep) 13th (keep) 14th (del) 15th (del) etc.... | Try: awk '(NR-1)%5<3' file For example: $ awk '(NR-1)%5<3' file1st line (keep)2nd line (keep)3rd line (keep)6th (keep)7nth (keep)8th lines (keep)11th (keep)12th (keep)13th (keep) How it works The command (NR-1)%5<3 tells awk to print any line for which (NR-1)%5<3 is true. In awk , NR is the line number with the first line counting as 1 . For every five lines in the file, that statement will be true for the first three. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344445/"
]
} |
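Where awk is not an option, a sed sketch with the same effect: each p prints a kept line and each bare n consumes one of the skipped lines:

    sed -n 'p;n;p;n;p;n;n' file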
509,572 | I have several files with two columns : file 1: 1 1002 103 file 2 1 2002 203 and around 600 such files with two columns. Now, I would like to combine the second column in every file of the first row in the correct sequence to get a single data file like : 100200... (600 lines) How do I do that? | awk 'FNR==1 {print $2}' file* This prints the second column ( $2 ) of the first line ( FNR==1 ) for every file whose filename starts with file . An alternative is to print the first line and then immediately skip to the next file ( nextfile is a mawk and GNU awk -specific keyword): awk '{print $2; nextfile}' file* | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293538/"
]
} |
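An equivalent sketch using head; the -q flag, which suppresses the ==> file <== headers, is a GNU coreutils extension:

    head -q -n 1 file* | awk '{ print $2 }'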
509,660 | From manpages: docker container rm will remove one or more containers from the host node. The container name or ID can be used. This does not remove images. docker container kill : The main process inside each container specified will be sent SIGKILL, or any signal specified with option --signal. Is a container a running instance of an image?So do docker container rm and docker container kill effectively achieve the same: the container will stop existing? What are their differences? What is "the main process inside a container"? Is a container run exactly as a process in the host machine? Thanks. | If you run a container.. eg docker run alpine echo hello It looks like it cleans up afterwards... % docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES But it doesn't it's still there. % docker ps -aCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES3a4772c0f165 alpine "echo hello" 22 seconds ago Exited (0) 20 seconds ago relaxed_ramanujan This can be cleaned up with the rm command % docker container rm 3a4772c0f165 3a4772c0f165% docker ps -aCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES So: docker kill will kill a container. docker rm will clean up a terminated container. They are different things. Note: you can tell containers to auto-clean: % docker run --rm alpine echo hellohello% docker ps -aCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Then you don't need to manually rm . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/509660",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
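If the goal really is to make a running container stop existing in one step, docker can combine the two; a quick sketch:

    docker rm -f <container>    # send SIGKILL to the running container, then remove it
    docker container prune      # or clean up every stopped container in one go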
509,666 | I want to copy all *out from various subdirectories from my remote account to my local machine my trail is scp --parents -r @remote:~/path/*out . this trail doesn't workI am wondering about the mistake or if there any other alternative way to carry out this job | If you run a container.. eg docker run alpine echo hello It looks like it cleans up afterwards... % docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES But it doesn't it's still there. % docker ps -aCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES3a4772c0f165 alpine "echo hello" 22 seconds ago Exited (0) 20 seconds ago relaxed_ramanujan This can be cleaned up with the rm command % docker container rm 3a4772c0f165 3a4772c0f165% docker ps -aCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES So: docker kill will kill a container. docker rm will clean up a terminated container. They are different things. Note: you can tell containers to auto-clean: % docker run --rm alpine echo hellohello% docker ps -aCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Then you don't need to manually rm . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/509666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112129/"
]
} |
509,714 | I would like to process a multiline string and iterate it line by line, in a POSIX shell ( /bin/sh ) on a BSD platform. Bash is not included in the base BSD-distribution and has a GPL license - so I am trying to make it universally work with /bin/sh instead. I found a solution using a pipe, however in the regular /bin/sh shell, these a processed in a separate process, meaning the following does not work: MULTILINE="`cat ${SOMEFILE}`"SOMEVAR="original value"echo "${MULTILINE}" | while IFS= read -r SINGLELINEdo SOMEVAR="updated value" echo "this is a single line: ${SINGLELINE}" echo "SOMEVAR is now: ${SOMEVAR}"doneecho "Final SOMEVAR is unchanged: ${SOMEVAR}" In the above example, it accomplishes what I want, except for the fact that changes to variables such as ${SOMEVAR} are not accessible outside the while loop. My question: how can I accomplish something like the above without this restriction? Note that many solutions require Bash, whereas I am using the standard POSIX-shell /bin/sh . | You could use a here document: while IFS= read -r SINGLELINEdo SOMEVAR="updated value" printf '%s\n' "this is a single line: ${SINGLELINE}" printf '%s\n' "SOMEVAR is now: ${SOMEVAR}"done << EOF$MULTILINEEOFprintf '%s\n' "Final SOMEVAL is still $SOMEVAR" Depending on the sh implementation, here-documents are implemented either as a deleted temporary file where the shell has stored the expansion of the variable followed by newline beforehand, or a pipe to which the shell feeds the expansion of the variable followed by newline. But in either case, except in the original Bourne shell (a shell that is no longer in use these days and is not a POSIX compliant shell), the command being redirected is not run in a subshell (as POSIX requires). or you could use split+glob: IFS='' # split on newline onlyset -o noglobfor SINGLELINE in $MULTILINEdo SOMEVAR="updated value" printf '%s\n' "this is a single line: ${SINGLELINE}" printf '%s\n' "SOMEVAR is now: ${SOMEVAR}"doneprintf '%s\n' "Final SOMEVAL is still $SOMEVAR" But beware it skips empty lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312076/"
]
} |
509,764 | I've googled around a lot for this and it seems there is no precursor to this need. I need to edit an applications preference file programatically : as part of a shell script. and the prefs are stored in strict json format : this means the app loading that preference file will crash at startup if there is a comma , before a closing curly brace } . normally this wouldn't be an issue. I'd just use my sed s accordingly : if the line containing my faulty text lines up at the end of a section in my example file, then when replacing this text I will always put it without a comma. If another line containing another faulty bit I want to replace is not at the end, I always replace it including a comma. Example : (I use underscore _ as my sed's delimiter because the stings to replace are full of backslashes sometimes) sed -i 's_"executableDecorator".*_"executableDecorator": "'$user_path'/faf/run \\"%s\\"",_' $user_path/.faforever/client.prefs if that line was at the end : sed -i 's_"executableDecorator".*_"executableDecorator": "'$user_path'/faf/run \\"%s\\""_' $user_path/.faforever/client.prefs this would work, But!... I have the app end before I run my script so that both aren't editing the preferences at the same time, but even still, because of this app's asynchronous execution the preferences my script will be receiving will differ every time. it's completely random. sometimes a line could be in the middle sometimes at the end. The app itself (Java & some json java lib) knows how to append comma or not depending on the context but as part of my shell script... I feel things are going to get bloated. (If not and there's a shorthand to ensure I have comma or not depending on if next line is } , then that is a better simpler solution that I would be more interested in) But as it stands I'm looking for a POSIX utility that fixes json so that I can "sanitize" my json prefs file as soon as I'm done butchering it all within my shell script ...does such a thing exist? 
EDIT : here's the base file (whole file, ) : { "mainWindow": { "width": 800, "height": 600, "maximized": false, "lastView": "NEWS", "lastChildViews": {}, "x": 67.0, "y": 27.0 }, "forgedAlliance": { "customMapsDirectory": "/home/t/My Games/Gas Powered Games/Supreme Commander Forged Alliance/Maps", "preferencesFile": "/home/t/.wine/drive_c/users/t/Application Data/Gas Powered Games/Supreme Commander Forged Alliance/Game.prefs", "officialMapsDirectory": "/home/t/faf/./Maps", "modsDirectory": "/home/t/My Games/Gas Powered Games/Supreme Commander Forged Alliance/Mods", "port": 6112, "autoDownloadMaps": true, "executableDecorator": "\"%s\"" }, "login": { "username": "tatsu", "password": "*******", "autoLogin": true }, "chat": { "zoom": 1.0, "learnedAutoComplete": false, "previewImageUrls": true, "maxMessages": 500, "chatColorMode": "CUSTOM", "channelTabScrollPaneWidth": 250, "userToColor": {}, "hideFoeMessages": true, "timeFormat": "AUTO", "chatFormat": "COMPACT", "idleThreshold": 10 }, "notification": { "soundsEnabled": true, "transientNotificationsEnabled": true, "mentionSoundEnabled": true, "infoSoundEnabled": true, "warnSoundEnabled": true, "errorSoundEnabled": true, "friendOnlineToastEnabled": true, "friendOfflineToastEnabled": true, "ladder1v1ToastEnabled": true, "friendOnlineSoundEnabled": true, "friendOfflineSoundEnabled": true, "friendJoinsGameSoundEnabled": true, "friendPlaysGameSoundEnabled": true, "friendPlaysGameToastEnabled": true, "privateMessageSoundEnabled": true, "privateMessageToastEnabled": true, "friendJoinsGameToastEnabled": true, "notifyOnAtMentionOnlyEnabled": false, "afterGameReviewEnabled": true, "toastPosition": "BOTTOM_RIGHT", "toastScreen": 0, "toastDisplayTime": 5000 }, "themeName": "default", "lastGameType": "faf", "localization": {}, "rememberLastTab": true, "showPasswordProtectedGames": true, "showModdedGames": true, "ignoredNotifications": [], "lastGameMinRating": 800, "lastGameMaxRating": 1300, "ladder1v1": { "factions": [ "aeon", "cybran", "uef", "seraphim" ] }, "news": { "lastReadNewsUrl": "http://direct.faforever.com/2019/03/king-of-badlands-tournament-march-30th/" }, "developer": { "gameRepositoryUrl": "https://github.com/FAForever/fa.git" }, "vaultPrefs": { "onlineReplaySortConfig": { "sortProperty": "startTime", "sortOrder": "DESC" }, "mapSortConfig": { "sortProperty": "statistics.plays", "sortOrder": "DESC" }, "modVaultConfig": { "sortProperty": "latestVersion.createTime", "sortOrder": "DESC" } }, "gameListSorting": [], "gameTileSortingOrder": "PLAYER_DES", "unitDataBaseType": "RACKOVER", "storedCookies": {}, "lastGameOnlyFriends": false} the only part that matters is "forgedAlliance" : "forgedAlliance": { "customMapsDirectory": "/home/t/My Games/Gas Powered Games/Supreme Commander Forged Alliance/Maps", "preferencesFile": "/home/t/.wine/drive_c/users/t/Application Data/Gas Powered Games/Supreme Commander Forged Alliance/Game.prefs", "officialMapsDirectory": "/home/t/faf/./Maps", "modsDirectory": "/home/t/My Games/Gas Powered Games/Supreme Commander Forged Alliance/Mods", "port": 6112, "autoDownloadMaps": true, "executableDecorator": "\"%s\"" }, I run commands to obtain this : "forgedAlliance": { "path": "/home/t/.steam/steam/steamapps/common/Supreme Commander Forged Alliance", "installationPath": "/home/t/.steam/steam/steamapps/common/Supreme Commander Forged Alliance", "customMapsDirectory": "/home/t/My Games/Gas Powered Games/Supreme Commander Forged Alliance/Maps", "preferencesFile": 
"/home/t/.steam/steam/steamapps/compatdata/9420/pfx/drive_c/users/steamuser/Local Settings/Application Data/Gas Powered Games/Supreme Commander Forged Alliance/Game.prefs", "officialMapsDirectory": "/home/t/faf/./Maps", "modsDirectory": "/home/t/My Games/Gas Powered Games/Supreme Commander Forged Alliance/Mods", "port": 6112, "autoDownloadMaps": true, "executableDecorator": "/home/t/faf/run \"%s\"" }, the commands that work (in a standard case where things don't move around) are : if ! grep -q '"path"' $user_path/.faforever/client.prefs > /dev/nullthen sed -i '12i"path": "'$user_path'/.steam/steam/steamapps/common/Supreme Commander Forged Alliance",' $user_path/.faforever/client.prefs sed -i '13i"installationPath": "'$user_path'/.steam/steam/steamapps/common/Supreme Commander Forged Alliance",' $user_path/.faforever/client.prefsfi! grep -q '"preferencesFile": "'$user_path'/.steam/steam/steamapps/compatdata/9420/pfx/drive_c/users/steamuser/Local Settings/Application Data/Gas Powered Games/Supreme Commander Forged Alliance/Game.prefs",' $user_path/.faforever/client.prefs > /dev/null && sed -i 's_"preferencesFile".*_"preferencesFile": "'$user_path'/.steam/steam/steamapps/compatdata/9420/pfx/drive\_c/users/steamuser/Local Settings/Application Data/Gas Powered Games/Supreme Commander Forged Alliance/Game.prefs",_' $user_path/.faforever/client.prefs! grep -q '"executableDecorator": "'$user_path'/faf/",' $user_path/.faforever/client.prefs > /dev/null && sed -i 's_"executableDecorator".*_"executableDecorator": "'$user_path'/faf/run \\"%s\\""_' $user_path/.faforever/client.prefs | This jq command will make exactly those changes: jq --arg user_path "$user_path" ' .forgedAlliance += { installationPath: ($user_path + "/.steam/steam/steamapps/common/Supreme Commander Forged Alliance"), path: ($user_path + "/.steam/steam/steamapps/common/Supreme Commander Forged Alliance"), preferencesFile: ($user_path + "/.steam/steam/steamapps/compatdata/9420/pfx/drive_c/users/steamuser/Local Settings/Application Data/Gas Powered Games/Supreme Commander Forged Alliance/Game.prefs"), executableDecorator: ($user_path + "/faf/run \"%s\"") }' This uses --arg user_path "$user_path" to bring the shell variable into the jq program (you could also use the variable binding operator "'"$user_path"'" as $user_path | , but it would involve ugly quote splicing) Update-assignment .forgedAlliance += to process the whole file, updating just the value of the "forgedAlliance" key by merging it with what's on the right. A fresh object constructed from { to } with just the new key values you wanted computed inside it. If there are existing keys with the same name, they will be replaced. $user_path to access that variable binding we made above. The whitespace is optional - it's just there to make it easier to read on this site. jq always outputs as valid JSON, so you don't have any comma cleanup to do. You may find the sponge command from moreutils useful for updating the file itself, because there is no -i equivalent in jq, but you can also just redirect to another file jq ... > tmpfilemv tmpfile prefs.json and step around it manually as well. There is one (slight?) difference to what your code did: you made no changes for path and installationPath if "path" appeared anywhere in the file. There's no way to replicate that with jq directly, but you could split the command in two (one for path, one for all the time) if there's a necessary semantic element to that. 
This command will always make the change, but if a key already holds the same value the update has no visible effect. If this is a fixed set of replacements, you could also make a file containing just the object from point 3 above, literally (as true JSON, not dynamically computed), and then use jq --slurpfile tmp rhs.json '.forgedAlliance += tmp[0]' with the same effect as the big command above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228658/"
]
} |
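A sketch of the in-place update step mentioned at the end of the answer, since jq has no -i option; filter.jq here is a hypothetical file holding the filter shown above:

    prefs="$user_path/.faforever/client.prefs"
    jq --arg user_path "$user_path" -f filter.jq "$prefs" > "$prefs.tmp" && mv "$prefs.tmp" "$prefs"
    # or, with moreutils installed:
    jq --arg user_path "$user_path" -f filter.jq "$prefs" | sponge "$prefs"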
509,765 | I am learning shell-scripting and for that I am using HackerRank. There is a question related to sed on the same site: 'Sed' command #1 : For each line in a given input file, transform the first occurrence of the word 'the' with 'this'. The search and transformation should be strictly case sensitive. First of all I tried, sed 's/the/this/' but in that sample test case failed. Then I tried sed 's/the /this /' and it worked. So, the question arises what difference did the whitespaces created? Am I missing something here? | The difference is whether there is a space after the in the input text. For instance: With a sentence without a space , no replacement: $ echo 'theman' | sed 's/the /this /'theman With a sentence with a space , works as expected: $ echo 'the man' | sed 's/the /this /'this man With a sentence with another whitespace character ,no replacement will occur: $ echo -e 'the\tman' | sed 's/the /this /'the man | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344644/"
]
} |
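A related sketch: if the real intent is "the word the, wherever it appears", GNU sed's word-boundary escape avoids the trailing-space trick breaking at end of line or before punctuation:

    echo 'the man read the book.' | sed 's/\bthe\b/this/'    # replaces the first whole-word "the" only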
509,766 | I just installed Debian 9 on a Lenovo S130-14IGM but the touchpad doesn't work at all. With Ubuntu there is no problem with it. Here are the results from the two operating systems: Ubuntu # egrep -i 'syna|alps|etps|elan' /proc/bus/input/devicesN: Name="SYNA3388:00 06CB:8459 Touchpad"P: Phys=i2c-SYNA3388:00S: Sysfs=/devices/pci0000:00/0000:00:17.0/i2c_designware.0/i2c-4/i2c-SYNA3388:00/0018:06CB:8459.0001/input/input17# apt list xserver-xorg-input-synapticsListing...xserver-xorg-input-synaptics/bionic 1.9.0-1ubuntu1 amd64# dpkg -l | grep -i synaii xserver-xorg-input-synaptics-hwe-18.04 1.9.1-1ubuntu1~18.04.1 amd64 Synaptics TouchPad driver for X.Org server# xinput⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ SYNA3388:00 06CB:8459 Touchpad id=9 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Video Bus id=6 [slave keyboard (3)] ↳ Power Button id=7 [slave keyboard (3)] ↳ EasyCamera: EasyCamera id=8 [slave keyboard (3)] ↳ Ideapad extra buttons id=10 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=11 [slave keyboard (3)] Debian # egrep -i 'syna|alps|etps|elan' /proc/bus/input/devices-# apt list xserver-xorg-input-synapticsEn train de lister…xserver-xorg-input-synaptics/stable 1.9.0-1+b1 amd64# dpkg -l | grep -i synaii synaptic 0.84.2 amd64 Graphical package manager# xinput⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ EasyCamera id=9 [slave keyboard (3)] ↳ Ideapad extra buttons id=10 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=11 [slave keyboard (3)] What should I do? | # egrep -i 'syna|alps|etps|elan' /proc/bus/input/devicesN: Name="SYNA3388:00 06CB:8459 Touchpad"P: Phys=i2c-SYNA3388:00S: Sysfs=/devices/pci0000:00/0000:00:17.0/i2c_designware.0/i2c-4/i2c-SYNA3388:00/0018:06CB:8459.0001/input/input17 Your touchpad is not connected to the system via internal PS/2 or USB wiring, but using the I2C bus. This is a fairly new development, and Debian 9's standard kernel might be too old to support such touchpads very well. You might try with a backport kernel. See here for instructions in enabling the Debian Backports repository - basically, add this line to the /etc/apt/sources.list file: deb http://deb.debian.org/debian stretch-backports main Then you should be able to install a backport kernel with: apt-get updateapt-get -t stretch-backports install linux-image-4.19.0-0.bpo.2-amd64 linux-image-amd64 After a reboot, you might then have better luck with your touchpad. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344645/"
]
} |
509,834 | I have a tab-separated file that looks like this: NZ_CP023599.1 WP_003911075.1 302845 305406NZ_CP023599.1 WP_003898428.1 471171 472583NZ_CP023599.1 WP_003402248.1 534387 535157NZ_CP023599.1 WP_003402301.1 552556 553950NZ_CP023599.1 WP_003402318.1 558837 559697 I need to subtract the number in 4th column of each row from the number in 3rd column of the next line, and then print the difference in the next line as a 5th column. The output would look like this: NZ_CP023599.1 WP_003911075.1 302845 305406 NZ_CP023599.1 WP_003898428.1 471171 472583 165765NZ_CP023599.1 WP_003402248.1 534387 535157 61804NZ_CP023599.1 WP_003402301.1 552556 553950 17399NZ_CP023599.1 WP_003402318.1 558837 559697 4887 How do I go about this using awk? | You can do this as below. Defer the subtraction except for the first line but get its last column value as input for the subsequent line. awk -F'\t' 'BEGIN { OFS = FS } NR == 1 { last = $4; print; next }{ $5 = $3 - last; last = $4 }1' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/290191/"
]
} |
509,915 | Although I like Debian for various reasons, it is not always easy to find some documentation on specific aspects of this distribution and its policies. My question is: what is the difference between contrib and non-free packages repositories? From the little explanations I could find, if I am not mistaken: non-free is for packages whose licences are not free contrib for dependencies of non-free packages (which make them not part of Debian) But it seems odd to me to have two repositories for, so to speak, the same purpose, which is making available non free software inside Debian. I would like to know if I am missing something here. | non-free packages are packages not complying to the Debian Free Software Guidelines definition. E.g.: nvidia-driver which provides a proprietary driver. contrib packages are packages that do comply with the DFSG, but depend on non-free packages, or which depend on some non-free software downloaded (by the package or having to be downloaded manually) to work properly. So they don't end up in main . E.g.: bumblebee-nvidia which while DFSG compliant, isn't really useful without the non-free package nvidia-driver , so it's put in the contrib section, or vice which requires to download (while respecting copyrights and laws) ROMs to work properly. If somebody doesn't want to or can't use non-free software, that person most probably doesn't need or won't be able to use software depending on it, so it's more useful to put them separately in a contrib section. That person won't even have to download the contrib section. UPDATE: the software in contrib , and its sources are still available for free use. Interesting parts could be reused in an other project, or the non-free (or non available) parts it depends upon could be replaced (e.g.: replace graphics, music etc. assets for a game engine in contrib ). Having it separate from non-free helps to know which parts can be reused. trivia: the Open Source Definition was initially created by removing any mention of Debian in DFSG. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/509915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159254/"
]
} |
509,973 | I have an SSD disk with an ext4 filesystem on it: $ lsblk -f /dev/sdc NAME FSTYPE LABEL UUID MOUNTPOINTsdc ext4 142b28fd-c886-4182-892d-67fdc34b522a I am attempting to mount it, but it is failing: $ sudo mkdir /mnt/data$ sudo mount /dev/sdc /mnt/datamount: /mnt/data: cannot mount /dev/sdc read-only. What does the error message mean? How can I diagnose and fix the problem? To add additional information pertinent to an answer below: There is only one partition on the disk. Here is the result of executing lsblk for the boot disk: $ lsblk /dev/sdaNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 10G 0 disk ├─sda1 8:1 0 9.9G 0 part /├─sda14 8:14 0 4M 0 part └─sda15 8:15 0 106M 0 part /boot/efi and here is the result of executing lsblk for the disk in question: $ lsblk /dev/sdcNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsdc 8:32 0 2G 1 disk | I had a similar problem with a USB thumb drive which was down to the ext4 journal recovery not working. dmesg confirmed this: [1455125.992721] EXT4-fs (sdh1): INFO: recovery required on readonly filesystem[1455125.992725] EXT4-fs (sdh1): write access unavailable, cannot proceed (try mounting with noload) As it suggested, mounting with noload worked: sudo mount -o ro,noload /dev/sdh1 /mnt/drive I was then able to backup the content: sudo rsync -av /mnt/drive /data/tmp/ and then use fdisk to delete and recreate the partition and then create a new filesystem with mkfs.ext4 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/509973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28680/"
]
} |
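For the diagnosis part of the question, a couple of read-only checks worth running before rebuilding anything (output will of course differ per system):

    dmesg | tail -n 20            # kernel messages often explain why the mount was refused
    blockdev --getro /dev/sdc     # 1 means the kernel holds the whole device read-only
    cat /sys/block/sdc/ro         # the same flag via sysfs; it matches the RO column in lsblk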
509,995 | I need to execute the following shell script in my macOS terminal. The loop never executes more than its first iteration. function execute_function() {# Launch job number_of_jobs=$1 echo "Launching ${number_of_jobs} jobs" for i in {1..$1}; do job_id=`head /dev/urandom | tr -dc A-Z0-9 | head -c 6 ; echo ''` echo "Launching Job: $job_id" echo $i done} When I run it, I always get: execute_function 10 Launching 10 jobs Launching Job: XX9BWC {1..10} The same happens if I replace: $1 with $number_of_jobs or "${number_of_jobs}" | The problem here is brace expansion: it is performed before parameter expansion, so {1..$1} is never treated as a numeric range; after substitution the loop sees the single word {1..10} and runs exactly once. Try rewriting it with a C-style arithmetic loop:
for ((i=1; i<=$1; i++))
do
    # your code here
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82441/"
]
} |
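The (( )) form needs bash, ksh or zsh; in a strictly POSIX sh the same loop can be written as one of these sketches:

    i=1
    while [ "$i" -le "$1" ]; do
        # your code here
        i=$((i + 1))
    done

    # or, where seq is available (common, though not POSIX):
    for i in $(seq 1 "$1"); do
        : # your code here
    done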
509,998 | We had email from Hetzner network team asking us not to use MAC addresses belonging to the Virtual Machines belonging to the subnet. We configured Xen server host as a router using this guide . After asking for additional details Hetzner support answered that normally, the network configuration of your hypervisor should only let packets exit the system with the MAC address of the real NIC. But if you do not find the issue, you might try to block those outgoing packets using IPtables. So our questions: If anyone had this kind of problem with Hetzner or other dedicated server provider. How you solved it? Thank you Action plans that I tried and didn't help Create separate router VM Turn off IPv6 on all VMs and host using sysctl.conf directives Masquarading on both host interfaces -A POSTROUTING -o xenbr0 -j MASQUERADE-A POSTROUTING -o eth0 -j MASQUERADE Bring up additional xenbr0:1 interface Adding following into /etc/sysctl.conf net.ipv4.conf.default.proxy_arp = 1 Configuration details Host/Router configuration: [root@xenserver-custom ~]# cat /etc/sysctl.confnet.ipv4.ip_forward = 1 net.ipv6.conf.all.forwarding=1net.ipv4.conf.default.proxy_arp = 0 net.ipv4.conf.all.send_redirects = 0 net.ipv4.conf.default.send_redirects = 0 net.ipv4.conf.lo.send_redirects = 0 net.ipv4.conf.xenbr0.send_redirects = 0 [root@xenserver-custom network-scripts]# ip addr add 85.91.107.177/28 dev xenbr0[root@xenserver-custom ~]# ifconfigeth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 ether 0c:c4:7a:e7:dc:33 txqueuelen 1000 (Ethernet) RX packets 4704816217 bytes 6002063739181 (5.4 TiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 6294828922 bytes 7643975899027 (6.9 TiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 loop txqueuelen 1 (Local Loopback) RX packets 518545683 bytes 6322784653872 (5.7 TiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 518545683 bytes 6322784653872 (5.7 TiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0vifxxxx.....................xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 115.35.61.184 netmask 255.255.255.192 broadcast 115.35.61.191 ether 0c:c4:7a:e7:dc:33 txqueuelen 1 (Ethernet) RX packets 3070611738 bytes 8670969429554 (7.8 TiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2680055664 bytes 9822630727363 (8.9 TiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 VM Guest configuration [root@r1213a network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth0DEVICE=eth0TYPE=EthernetONBOOT=yesNM_CONTROLLED=yesBOOTPROTO=staticIPADDR=85.91.107.184PREFIX=28GATEWAY=85.91.107.177DNS1=213.133.98.98DEFROUTE=yesIPV4_FAILURE_FATAL=yesIPV6INIT=no[root@r1213a network-scripts]# ifconfigeth0 Link encap:Ethernet HWaddr B6:8F:14:74:A6:B6 inet addr:85.91.107.184 Bcast:85.91.107.191 Mask:255.255.255.240 inet6 addr: fe80::b48f:14ff:fe74:a6b6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:27122939 errors:0 dropped:2 overruns:0 frame:0 TX packets:2218911 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5404322465 (5.0 GiB) TX bytes:1061055301 (1011.9 MiB)[root@r1213a network-scripts]# ip routedefault via 85.91.107.177 dev eth085.91.107.176/28 dev eth0 proto kernel scope link src 85.91.107.184[root@r1213a network-scripts]# traceroute google.comtraceroute to google.com (172.217.18.110), 30 hops max, 60 byte packets 1 xenserver.localdomain (85.91.107.177) 0.081 ms 0.029 ms 0.039 ms 2 
static.129.61.69.159.clients.your-server.de (159.69.61.129) 0.390 ms 0.410 ms 0.370 ms 3 core22.fsn1.hetzner.com (213.239.245.121) 0.393 ms 0.416 ms 0.424 ms 4 core0.fra.hetzner.com (213.239.252.33) 5.207 ms 5.184 ms core0.fra.hetzner.com (213.239.252.29) 5.049 ms 5 72.14.218.94 (72.14.218.94) 5.273 ms 5.249 ms 72.14.218.176 (72.14.218.176) 4.990 ms 6 108.170.251.193 (108.170.251.193) 5.139 ms * 5.019 ms 7 209.85.241.75 (209.85.241.75) 5.834 ms 216.239.40.58 (216.239.40.58) 5.092 ms 172.253.64.119 (172.253.64.119) 5.707 ms 8 108.170.251.144 (108.170.251.144) 15.292 ms zrh04s05-in-f110.1e100.net (172.217.18.110) 4.952 ms 4.903 ms | The problem here is variable in braces expansion. Try rewriting it to for ((i=1;i<=$1;i++))do #your code heredone | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/509998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267783/"
]
} |
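A short sketch of why the loop rewrite in the answer above helps. It assumes the failing loop used brace expansion with a variable (something like for i in {1..$1}, which is not shown in this entry); the variable n below is only an example:
n=3
for i in {1..$n}; do echo "$i"; done             # prints the literal {1..3}, because brace expansion runs before $n is expanded
for (( i = 1; i <= n; i++ )); do echo "$i"; done # prints 1 2 3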
510,000 | I stopped httpd processes on a CentOS 7.6 machine using the kill -STOP command, and the processes show up as stopped in top output. I tried to telnet to the webserver's IP on port 80 and was able to do so fine. I am trying to understand what exactly stopping the process with kill -STOP does, and why the telnet was successful when the httpd process is no longer running? | The problem here is the variable inside the brace expansion. Try rewriting it to
for ((i=1; i<=$1; i++))
do
 # your code here
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73029/"
]
} |
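For the kill -STOP question above, a hedged sketch of how to observe what stopping the service actually does; the service name httpd and port 80 are taken from the question, and the pkill/ss commands are assumed to be available on CentOS 7:
pkill -STOP httpd        # the processes stay alive but show state "T" (stopped) in ps/top
ss -ltn 'sport = :80'    # the listening socket still exists, so the kernel keeps
                         # completing TCP handshakes and queues them in the accept
                         # backlog; telnet therefore connects, but nothing is served
pkill -CONT httpd        # resume the workers; the queued connections are then accepted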
510,031 | My proc info: lscpuArchitecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 4On-line CPU(s) list: 0-3Thread(s) per core: 1Core(s) per socket: 4Socket(s): 1NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 158Model name: Intel(R) Core(TM) i5-7400 CPU @ 3.00GHzStepping: 9CPU MHz: 1036.788CPU max MHz: 3500,0000CPU min MHz: 800,0000BogoMIPS: 6000.00Virtualization: VT-xL1d cache: 32KL1i cache: 32KL2 cache: 256KL3 cache: 6144KNUMA node0 CPU(s): 0-3 I tried: sudo apt-get install gcc-arm-linux-gnueabi g++-arm-linux-gnueabi If I go for: arm-linux-gccarm-linux-gcc: command not found How to install cross-compiler? | TLDR you need to call arm-linux-gnueabi-gcc not arm-linux-gcc . It looks like you've just got the wrong file name. For reference apt-file is a useful tool. sudo apt-get install apt-filesudo apt-file updateapt-file search -x 'gcc$' | grep 'gcc-arm-linux-gnueabi' This searches any file ending gcc in any package with gcc-arm-linux-gnueabi in the name. The result is: gcc-arm-linux-gnueabi: /usr/bin/arm-linux-gnueabi-gcc So if you have installed gcc-arm-linux-gnueabi you should have a file /usr/bin/arm-linux-gnueabi-gcc . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143955/"
]
} |
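A quick sanity check once gcc-arm-linux-gnueabi is installed, following the answer above; the file names t.c and t are arbitrary examples:
arm-linux-gnueabi-gcc --version         # confirms the cross compiler is on PATH
echo 'int main(void){return 0;}' > t.c  # minimal test program
arm-linux-gnueabi-gcc -o t t.c
file t                                  # should report a 32-bit ARM ELF executable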
510,046 | I can print current directory using pwd , but this gives me the path I navigated to get to where I am.I need to know which disk/partition current directory is on. For example, if I create symlink user@pc:~$ ln -s /media/HD1 hard_disk and then navigate to ~/hard_disk and run pwd it will print /home/user/hard_disk . I would like to get the actual path I'm currently on or better just the actual filesystem I'm currently on, which corresponds to one in df . | pwd -P will give you the physical directory you are in, i.e. the pathname of the current working directory with the symbolic links resolved. Using df . would give you the df output for whatever partition the current directory is residing on. Example (on an OpenBSD machine): $ pwd/usr/ports $ pwd -P/extra/ports $ df .Filesystem 512-blocks Used Avail Capacity Mounted on/dev/sd3a 103196440 55987080 42049540 57% /extra To parse out the mountpoint from this output, you may use something like $ df -P . | sed -n '$s/[^%]*%[[:blank:]]*//p'/extra To parse out the filesystem device used, use $ df -P . | sed -n '$s/[[:blank:]].*//p'/dev/sd3a I believe some Linux systems also supports findmnt --target . (where --target . can be replaced by -T . ) or, for more terse output, findmnt --output target --noheadings --target . (where --noheadings may be replaced by -n , and --output target may be replaced by -o target ) to get the mountpoint holding the filesystem that the current directory is located on. Use --output source to get the mounted device node. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149200/"
]
} |
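The pieces from the answer above can be combined into a small helper; the name whereami is just an example, and it assumes findmnt (util-linux) is available. On systems without it, fall back to the df -P parsing shown above:
whereami() {
 printf 'physical dir: %s\n' "$(pwd -P)"
 printf 'mountpoint:   %s\n' "$(findmnt -n -o TARGET --target .)"
 printf 'device:       %s\n' "$(findmnt -n -o SOURCE --target .)"
}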
510,132 | I currently have a function that prints the position and duration from cmus and formats it like "1/500". The issue I'm having is that I would like the position and duration data to be presented in minutes as opposed to seconds (0:01/8:20 instead of 1/500) but I'm out of ideas on how to achieve this. Currently the relevant part of the function looks like this: print_music(){ if ps -C cmus > /dev/null; then position=`cmus-remote -Q | grep --text '^position' | sed -e 's/position //' | awk '{gsub("position ", "");print}'` duration=`cmus-remote -Q | grep --text '^duration' | sed -e 's/duration //' | awk '{gsub("duration ", "");print}'` echo "[$position/$duration]"; else echo ""; fi} | This will help you: sec2min() { printf "%d:%02d" "$((10#$1 / 60))" "$((10#$1 % 60))"; } $ sec2min 5008:20$ sec2min 10:01 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323736/"
]
} |
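To tie the sec2min helper above back into the function from the question, a sketch that assumes position and duration already hold plain seconds, as cmus-remote -Q reports them:
sec2min() { printf "%d:%02d" "$((10#$1 / 60))" "$((10#$1 % 60))"; }
echo "[$(sec2min "$position")/$(sec2min "$duration")]"   # e.g. [0:01/8:20]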
510,139 | I've made a custom command in bash and I placed it in ~/.local/bin which is a path loaded by the ~/.profile . When I run the command through a terminal without sudo it's fine, but when I try to run it with sudo the output is: sudo: my_command: command not found Could you tell me how I can accomplish that? | This will help you: sec2min() { printf "%d:%02d" "$((10#$1 / 60))" "$((10#$1 % 60))"; } $ sec2min 5008:20$ sec2min 10:01 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345133/"
]
} |
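For the sudo question above, a few commonly used workarounds, sketched under the assumption that sudo resets PATH to the sudoers secure_path (which normally excludes per-user directories); my_command is the name from the question:
sudo ~/.local/bin/my_command        # simplest: give the full path
sudo "$(command -v my_command)"     # let the calling shell resolve the path first
sudo env "PATH=$PATH" my_command    # or pass your own PATH through explicitly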
510,151 | After recently setting up Postgresql on my Raspberry Pi I created an account which is not a power user. Initially to set this up I entered psql from the terminal and then executed createuser pi -P --interactive I responded N for superuser Y for create databasesY for create new roles and then. Create database test; When I try to go to psql now using simply psql I get pi@raspberrypi:~ $ psql psql: FATAL: database "pi" does not exist I can go to psql test and create databases there, but I was wondering what causes this behavior. Does terminal automatically pass the Pi user credentials to postgresql or is it logging me in with my system's Pi Account? | This will help you: sec2min() { printf "%d:%02d" "$((10#$1 / 60))" "$((10#$1 % 60))"; } $ sec2min 5008:20$ sec2min 10:01 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345147/"
]
} |
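On the psql question above: with no database argument, psql tries to connect to a database named after the current role, here "pi", which is why it fails. A sketch of the usual fixes (test is the database created in the question):
createdb pi     # run as the pi role, which was granted the create-database right
psql            # plain "psql" now connects to the new "pi" database
psql -d test    # or simply name an existing database explicitly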
510,153 | Sometimes, the mouse scrolling speed changes to a very slow rate, where it takes many clicks (3-8 clicks) or significant wheel rotation to scroll the page one step. This started after upgrading to linux kernel 5.0. It has not happened yet after booting back into kernel 4.20. When it happens, it's usually upon the wireless mouse 'waking up' after having gone into its power saving mode, but it does not happen every time. Turning the mouse off and on again with its power switch restores normal behavior. The mouse is a wireless Logitech M720 used with a Unifying receiver. There is also a wireless keyboard, a Logitech k830, paired with the same receiver. I know kernel 5.0 introduced new high resolution scrolling support. Is this a bug with that feature? Has anyone else experienced this? | One solution is to have solaar running, and make sure that the Wheel Resolution setting for the mouse (M720 in my case) is ON in kernel 5.0, which results in normal scrolling behavior. With solaar set to autostart, I have not had the slow scrolling problem since. When this setting is OFF it consistently results in the slow scrolling behavior. For whatever reason, without solaar running in kernel 5.0, the mouse sometimes spontaneously switches to that behavior, though without actually changing the setting. Interestingly, in kernel 4.20 and earlier, Wheel Resolution = OFF resulted in normal scrolling behavior, while ON provided much faster, more sensitive scrolling. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211182/"
]
} |
510,178 | I would like to use a program like tail to follow a file as it's being written to, but not display the most recent lines. For instance, when following a new file, no text will be displayed while the file is less than 30 lines. After more than 30 lines are written to the file, lines will be written to the screen starting at line 1. So as lines 31-40 are written to the file, lines 1-10 will be written to the screen. If there is no easy way to do this with tail, maybe there's a way to write to a new file a prior line from the first file each time the first file is extended by a line, and then tail that new file... | Maybe buffer with awk: tail -n +0 -f some/file | awk '{b[NR] = $0} NR > 30 {print b[NR-30]; delete b[NR-30]} END {for (i = NR - 29; i <= NR; i++) print b[i]}' The awk code, expanded:
{
 b[NR] = $0 # save the current line in a buffer array
}
NR > 30 { # once we have more than 30 lines
 print b[NR-30]; # print the line from 30 lines ago
 delete b[NR-30]; # and delete it
}
END { # once the pipe closes, print the rest
 for (i = NR - 29; i <= NR; i++) print b[i]
} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345164/"
]
} |
510,216 | This works on a shell (bash, dash) prompt: [ -z "" ] && echo A || echo BA However, I am trying to write a POSIX shell script, it starts like this: #!/bin/sh[ "${#}" -eq 1 ] || echo "Invalid number of arguments, expected one."; exit 1readonly raw_input_string=${1}[ -z "${raw_input_string}" ] && echo "The given argument is empty."; exit 1 And I don't know why, but I don't get the message : The given argument is empty. if I call the script like this: ./test_empty_argument "" Why is that? | Note that your line [ "${#}" -eq 1 ] || echo "Invalid number of arguments, expected one."; exit 1 this is the same as [ "${#}" -eq 1 ] || echo "Invalid number of arguments, expected one."exit 1 (an unquoted ; can, in most circumstances, be replaced by a newline character) This means that the exit 1 statement is always executed regardless of how many arguments were passed to the script. This in turn means that the message The given argument is empty. would never have a chance of getting printed. To execute more than a single statement after a test using the "short-circuit syntax", group the statements in { ...; } . The alternative is to use a proper if statement (which, IMHO, looks cleaner in a script): if [ "$#" -ne 1 ]; then echo 'Invalid number of arguments, expected one.' >&2 exit 1fi You have the same issue with your second test. Regarding [ -z "" ] && echo A || echo B This would work for the given example, but the generic some-test && command1 || command2 would not be the same as if some-test; then command1else command2fi Instead, it is more like if ! { some-test && command1; }; then command2fi or if some-test && command1; then :else command2fi That is, if either the test or the first command fails, the second command executes, which means it has the potential to execute all three involved statements. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
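To make the grouping advice above concrete, the two checks from the question could be written with { ...; } so that exit only runs when the corresponding test calls for it (a sketch, keeping the question's messages):
[ "$#" -eq 1 ] || { echo 'Invalid number of arguments, expected one.' >&2; exit 1; }
[ -z "$1" ] && { echo 'The given argument is empty.' >&2; exit 1; }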
510,220 | After reading ilkkachu's answer to this question I learned on the existence of the declare (with argument -n ) shell built in. help declare brings: Set variable values and attributes. Declare variables and give them attributes. If no NAMEs are given,display the attributes and values of all variables. -n ... make NAME a reference to the variable named by its value I ask for a general explanation with an example regarding declare because I don't understand the man . I know what is a variable and expanding it but I still miss the man on declare (variable attribute?). Maybe you'd like to explain this based on the code by ilkkachu in the answer: #!/bin/bashfunction read_and_verify { read -p "Please enter value for '$1': " tmp1 read -p "Please repeat the value to verify: " tmp2 if [ "$tmp1" != "$tmp2" ]; then echo "Values unmatched. Please try again."; return 2 else declare -n ref="$1" ref=$tmp1 fi} | In most cases it is enough with an implicit declaration in bash asdf="some text" But, sometimes you want a variable's value to only be integer (so in case it would later change, even automatically, it could only be changed to an integer, defaults to zero in some cases), and can use: declare -i num or declare -i num=15 Sometimes you want arrays, and then you need declare declare -a asdf # indexed type or declare -A asdf # associative type You can find good tutorials about arrays in bash when you browse the internet with the search string 'bash array tutorial' (without quotes), for example linuxconfig.org/how-to-use-arrays-in-bash-script I think these are the most common cases when you declare variables. Please notice also, that in a function, declare makes the variable local (in the function) without any name, it lists all variables (in the active shell) declare Finally, you get a brief summary of the features of the shell built-in command declare in bash with the command help declare | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
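A small illustration of the integer attribute described above; the variable name num is just an example:
declare -i num
num=7+3 ; echo "$num"    # 10, because assignments to an integer variable are evaluated arithmetically
num=hello ; echo "$num"  # 0, because "hello" is taken as an (unset) variable name in the arithmetic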
510,269 | I've set a function in a shell script that checks if a folder exists, if it does not exist, tries to created it, and if it can not create the folder (for example if the user does not the right permission) return 1. Then I check this "return", but I don't understand why "if" does not work because the return equal 1. Code: #!/bin/bash# Main foldersINPUT="input"OUTPUT="output"# Functionsfunction checkFolderExist(){ if [ -d $1 ] then # 0 = true # Change to 0, only for tests. return 1 else mkdir $1 result=$? if [ result==0 ] then # 0 = true return 0 else # 1 = false return 1 fi fi}CHECKINPUT=$(checkFolderExist $INPUT)echo $?CHECKOUTPUT=$(checkFolderExist $OUTPUT)echo $?# If folders does not exist, exit the scriptif [[ "$CHECKINPUT" = 1 || "$CHECKOUTPUT" = 1 ]]; then echo "[+] Error. Folder does not exist. Check user permissions." exit 1fi | There's a few things here. You very seldom have to explicitly check $? against anything or save it in a variable (unless you need to reference the same exit status multiple times). The exit status of a function is the exit status of the last executed command in the function, so an explicit return is seldom needed (seldom with an explicit return value at least). A function that checks whether a directory exists should not create any directories. Better call it create_dir_if_needed . There's an error in [ result==0 ] . The string result==0 is a string of non-zero length, and testing a string in this way will return true if the string has non-zero length, so the test is always true. You probably wanted [ "$result" -eq 0 ] instead. Remember to always double quote variable expansions and command substitutions, unless you know in what contexts this is not needed . With these things in mind: create_dir_if_needed () { mkdir -p -- "$1"} This would return the exit status of mkdir -p -- "$1" . This command would create the named directory (and any intermediate directories) if this did not already exist. If the mkdir command fails to create the directory, it will exit with a non-zero exit status, which will become the exit status of the function. mkdir -p will not fail if the directory already exists. You would use this as if ! create_dir_if_needed "$dirpath"; then printf 'Failed to create directory "%s"\n' "$dirpath" >&2 exit 1fi or, since the function is trivial, you could get rid of it and say if ! mkdir -p -- "$dirpath"; then printf 'Failed to create directory "%s"\n' "$dirpath" >&2 exit 1fi A variation of the create_dir_if_needed function that uses mkdir without -p and will therefore never create missing parent directories to the given directory path: create_dir_if_needed () { if [ -d "$1" ]; then return fi mkdir -- "$1"} or, create_dir_if_needed () { [ -d "$1" ] || mkdir -- "$1"} A call to this function would return true (zero) if the directory already existed or if the mkdir call went well. A return statement with no explicit value will return the exit status of the most recently executed statement, in this case it would return the positive outcome of the [ -d "$1" ] test. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/510269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90105/"
]
} |
510,298 | Suppose that you redirect, in bash, the standard output of a command cmd to a file named f.out , and the standard error to f.err , using tee to preserve console printing: cmd 1> >(tee f.out) 2> >(tee f.err) Then f.out contains the output as well as the error (at least on my system). Now, if you change the order of redirections: cmd 2> >(tee f.err) 1> >(tee f.out) f.out only contains the output (and f.err only contains the error in both cases). So my question is double: how stderr can be redirected to f.out , and why does the order of redirections impact the result? Note that if you don't use tee , but for example cat , like this: cmd 1> >(cat>f.out) 2> >(cat>f.err) you don't have this issue, and the order of redirections doesn't matter, as expected, and as it would be the case without process substitution ( cmd 1>f.out 2>f.err ). | Order of redirection is important because Bash applies them in the order it finds them on the command it interprets. This is on purpose so that you can have idioms like > file 2>&1 working as expected i.e. having stderr the same as stdout. This idiom works as in "assign file to stdout and then make stderr equal to stdout", which yields the expected outcome because by the time stderr gets stdout's same value, stdout's value is file . The other way around (ie 2>&1 1> file ) won't yield the same outcome because stdout's value is changed after it has been copied to stderr's value. File-descriptors can be considered analogous to regular variables, which have their own values and can be made to get a copy of another variable's value, as in var1="${var2}" , and much like such var1 won't follow var2 's subsequent value changes, file-descriptor's value won't too. It is also handy so that you can e.g. swap file-descriptors on the same line, like in 3>&1 1>&2 2>&3- . This swaps fds 1 and 2 using fd 3 as temporary “helper” fd. As such you can consider redirections as instructions executed sequentially, just as if they were on two separate lines of your command or script. On your specific case there are also Process Substitutions involved, and those too get executed in the specified sequence inheriting the redirections expressed up to that point That is, to cap it all: you first redirect stdout to the process running tee f.out ; at this point cmd ’s stdout is connected to tee f.out ’s stdin, as desired then you redirect stderr to the process running tee f.err ; but this inherits its stdout as per the redirection expressed before, i.e. connected to tee f.out ’s stdin Therefore tee f.err , by innocently outputting to its stdout as well as to f.err file, pipes your cmd ’s error messages to tee f.out ’s stdin which will therefore receive all messages, outputting them to f.out file as well as to your terminal window. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320314/"
]
} |
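A self-contained way to see the ordering rule from the answer above, without tee or process substitution; f.log is an arbitrary file name:
{ echo out; echo err >&2; } > f.log 2>&1    # both lines end up in f.log
{ echo out; echo err >&2; } 2>&1 > f.log    # "err" still goes to the terminal; only "out" lands in f.log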
510,338 | Windows executable files ( New or Portable executables ) can contain icons. How can I extract them, either as ICO files or separate images? | There are a number of tools you can use. icoutils , available as the eponymous package in many distributions, includes a tool capable of extracting resources from most Windows executables (16-bit NE, 32-bit PE, and 64-bit PE+), wrestool . wrestool -x --output=. -t14 /path/to/windows.exe will extract the icons present in the given Windows executable and write them to individual files, named after the executable name, with the type and icon name added. 7z can also extract all the resources in a Windows executable; 7z x /path/to/windows.exe .rsrc/ICON will extract all the icons in the given Windows executable and write them to individual files in the .rsrc/ICON directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13308/"
]
} |
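If plain images are wanted rather than .ico files, icoutils also ships icotool, which can split the extracted icons into PNGs; a sketch using the same example path as above (the icons/ directory name is arbitrary):
wrestool -x --output=. -t14 /path/to/windows.exe   # as above: dump the icon resources as .ico files
mkdir -p icons
icotool -x -o icons/ ./*.ico                       # split each .ico into its individual PNG images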
510,450 | This is Bash. The behavior is similar in fish.
$ which python
/usr/bin/python
$ alias py=python
$ type py
py is aliased to `python'
And then, running type -P py prints nothing, whereas I expected it to print /usr/bin/python in a similar fashion to what is seen below.
$ type ls
ls is aliased to `ls --color=auto'
$ type -P ls
/bin/ls
The documentation for the -P option reads: -P force a PATH search for each NAME, even if it is an alias, builtin, or function, and returns the name of the disk file that would be executed. I've confirmed that /usr/bin (the directory where python is located) is in PATH. What is going on here? | This: force a PATH search for each NAME, even if it is an alias, does not mean that bash will expand the alias and then search for the expanded command. It means that, if there were a command foo , and also an alias foo , then type -P foo will still look for the command named foo , even though there's an alias masking it. So bash isn't expanding py in type -P py to be python , and it won't show /usr/bin/python . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328323/"
]
} |
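If the goal is the file behind the alias, one approach is to expand the alias first; a sketch that relies on bash's BASH_ALIASES array and assumes the alias body is a single command word, as with py above:
alias py=python
type -P "${BASH_ALIASES[py]}"   # resolves what "py" expands to, then prints /usr/bin/python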
510,553 | It's not completely clear to me, but what is the difference between mv and rename (from util-linux-ng 2.17.2 as /usr/bin/rename )? Are there advantages of one over the other beyond rename accepting regular expressions and mv doesn't? I believe rename can also handle multiple file renames at once, whereas mv does not do this. I couldn't find a clear indication in their man pages what else sets them apart or through some investigation on my own. | It's basically what it says on the lid, for both. mv is a standard utility to move one or more files to a given target. It can be used to rename a file, if there's only one file to move. If there are several, mv only works if the target is directory, and moves the files there. So mv foo bar will either move the file foo to the directory bar (if it exists), or rename foo to bar (if bar doesn't exist or isn't a directory). mv foo1 foo2 bar will just move both files to directory bar , or complain if bar isn't a directory. mv will call the rename() C library function to move the files, and if that doesn't work (they're being moved to another filesystem), it will copy the files and remove the originals. If all you have is mv and you want to rename multiple files, you'll have to use a shell loop. There are a number of questions on that here on the site, see e.g. this , this , and others . On the other hand, the various rename utilities rename files, individually. The rename from util-linux which you mention makes a simple string substitution, e.g. rename foo bar * would change foobar to barbar , and asdffoo to asdfbar . It does not , however, take a regular expression! The Perl rename utility ( or various instances of it ) takes a Perl expression to transform the filenames. One will most likely use an s/ pattern / replacement / command, where the pattern is a regular expression. Both the util-linux rename and the Perl rename can be used to move files to another directory at the same time, by making appropriate changes to the file name, but it's a bit awkward. Neither does more than call rename() on the files, so moving from one filesystem to another does not work. As for which rename you have, it may depend on your distribution , and/or what you have installed. Most of them support rename --version , so use that to identify which one you have. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293632/"
]
} |
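Typical invocations of the two renamers discussed above, for comparison; the file patterns are only examples, and which rename you actually get depends on the distribution, as noted:
rename foo bar *foo*               # util-linux rename: fixed-string substitution in each name
rename 's/\.jpeg$/.jpg/' *.jpeg    # Perl rename: regular-expression substitution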
510,601 | We have a huge text file containing millions of ordered timestamped observations and given the start point and the end point, we need a fast method to extract the observations in that period. For instance, this could be part of the file: "2018-04-05 12:53:00",28,13.6,7.961,1746,104.7878,102.2,9.78,29.1,0,2.432,76.12,955,38.25,249.9,362.4,281.1,0.04"2018-04-05 12:54:00",29,13.59,7.915,1738,104.2898,102.2,10.01,29.53,0,1.45,200.3,952,40.63,249.3,361.4,281.1,0.043"2018-04-05 12:55:00",30,13.59,7.907,1734,104.0326,102.2,10.33,28.79,0,2.457,164.1,948,41.39,249.8,361.3,281.1,0.044"2018-04-05 12:56:00",31,13.59,7.937,1718,103.0523,102.2,10.72,31.42,0,1.545,8.22,941,42.06,249.4,361.1,281.1,0.045"2018-04-05 12:57:00",32,13.59,7.975,1719,103.1556,102.2,10.68,29.26,0,2.541,0.018,940,41.95,249.1,360.1,281.1,0.045"2018-04-05 12:58:00",33,13.59,8,1724,103.4344,102.2,10.35,29.58,0,1.908,329.8,942,42.65,249.5,361.4,281.1,0.045"2018-04-05 12:59:00",34,13.59,8,1733,103.9831,102.2,10.23,30.17,0,2.59,333.1,948,42.21,250.2,362,281.2,0.045"2018-04-05 13:00:00",35,13.59,7.98,1753,105.1546,102.2,10.17,29.06,0,3.306,332.4,960,42,250.4,362.7,281.1,0.044"2018-04-05 13:01:00",36,13.59,7.964,1757,105.3951,102.2,10.24,30.75,0,2.452,0.012,962,42.03,250.4,362.4,281.1,0.044"2018-04-05 13:02:00",37,13.59,7.953,1757,105.4047,102.2,10.31,31.66,0,3.907,2.997,961,41.1,250.6,362.4,281.1,0.043"2018-04-05 13:03:00",38,13.59,7.923,1758,105.4588,102.2,10.28,29.64,0,4.336,50.19,962,40.85,250.3,362.6,281.1,0.042"2018-04-05 13:04:00",39,13.59,7.893,1757,105.449,102.1,10.27,30.42,0,1.771,12.98,962,41.73,249.8,362.1,281.1,0.043"2018-04-05 13:05:00",40,13.6,7.89,1757,105.4433,102.1,10.46,29.54,0,2.296,93.7,962,43.02,249.9,361.7,281,0.045"2018-04-05 13:06:00",41,13.59,7.915,1756,105.3322,102.1,10.52,29.53,0,0.632,190.8,961,43.64,249.3,361.5,281,0.045"2018-04-05 13:07:00",42,13.6,7.972,1758,105.4697,102.1,10.77,29.49,0,0.376,322.5,961,44.69,249.1,360.9,281.1,0.046"2018-04-05 13:08:00",43,13.6,8.05,1754,105.233,102.1,11.26,28.66,0,0.493,216.8,959,44.8,248.4,360.1,281.2,0.047 If we want the datapoints between "2018-04-05 13:00:00" and "2018-04-05 13:05:00", the output should be: "2018-04-05 13:00:00",35,13.59,7.98,1753,105.1546,102.2,10.17,29.06,0,3.306,332.4,960,42,250.4,362.7,281.1,0.044"2018-04-05 13:01:00",36,13.59,7.964,1757,105.3951,102.2,10.24,30.75,0,2.452,0.012,962,42.03,250.4,362.4,281.1,0.044"2018-04-05 13:02:00",37,13.59,7.953,1757,105.4047,102.2,10.31,31.66,0,3.907,2.997,961,41.1,250.6,362.4,281.1,0.043"2018-04-05 13:03:00",38,13.59,7.923,1758,105.4588,102.2,10.28,29.64,0,4.336,50.19,962,40.85,250.3,362.6,281.1,0.042"2018-04-05 13:04:00",39,13.59,7.893,1757,105.449,102.1,10.27,30.42,0,1.771,12.98,962,41.73,249.8,362.1,281.1,0.043"2018-04-05 13:05:00",40,13.6,7.89,1757,105.4433,102.1,10.46,29.54,0,2.296,93.7,962,43.02,249.9,361.7,281,0.045 Regular tools like grep or sed or awk are not optimized to be applied to sorted files. So they are not fast enough for. A tool which uses a binary search would be ideal for this type of problems. | It's basically what it says on the lid, for both. mv is a standard utility to move one or more files to a given target. It can be used to rename a file, if there's only one file to move. If there are several, mv only works if the target is directory, and moves the files there. So mv foo bar will either move the file foo to the directory bar (if it exists), or rename foo to bar (if bar doesn't exist or isn't a directory). 
mv foo1 foo2 bar will just move both files to directory bar , or complain if bar isn't a directory. mv will call the rename() C library function to move the files, and if that doesn't work (they're being moved to another filesystem), it will copy the files and remove the originals. If all you have is mv and you want to rename multiple files, you'll have to use a shell loop. There are a number of questions on that here on the site, see e.g. this , this , and others . On the other hand, the various rename utilities rename files, individually. The rename from util-linux which you mention makes a simple string substitution, e.g. rename foo bar * would change foobar to barbar , and asdffoo to asdfbar . It does not , however, take a regular expression! The Perl rename utility ( or various instances of it ) takes a Perl expression to transform the filenames. One will most likely use an s/ pattern / replacement / command, where the pattern is a regular expression. Both the util-linux rename and the Perl rename can be used to move files to another directory at the same time, by making appropriate changes to the file name, but it's a bit awkward. Neither does more than call rename() on the files, so moving from one filesystem to another does not work. As for which rename you have, it may depend on your distribution , and/or what you have installed. Most of them support rename --version , so use that to identify which one you have. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/510601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286985/"
]
} |
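For the range-extraction question above: not a binary search, but a hedged sketch that at least stops reading once the end of the window has passed. The timestamps are the ones from the example, observations.txt is an assumed file name, and it relies on the quoted, fixed-width timestamps comparing correctly as strings:
awk -F, -v start='"2018-04-05 13:00:00"' -v end='"2018-04-05 13:05:00"' '
 $1 > end { exit }   # past the window: stop scanning the rest of the file
 $1 >= start         # inside the window: print the record
' observations.txt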
510,687 | For a project I need to write and execute Ansible scripts in a Linux environment (CentOS). Though using the command line and vi is interesting, I need to use a graphical file explorer and Visual Studio Code to edit files. Because the Linux VMs available to me have low memory (3GB) and run on slower CPUs, GNOME 3 for Desktop is too slow. Are there lighter GUIs in which I can run Visual Studio Code? | Given your use case, and if you are this low on memory, the best choice would be to switch to a lightweight Desktop Environment (DE), such as: XFCE Mate LXDE LXQt etc. If you are (even sort of) new to Linux, I would suggest you stay away from Tiling Window Managers (TWM); although they are extremely lightweight and powerful once configured and mastered, I do not think one of them would be a good idea given your situation. If you want to install XFCE (example): First, you need to add the Extra Packages for Enterprise Linux (EPEL) repository, as this is where you will install packages from: # yum -y install epel-release Then you can install the XFCE Desktop Environment as follows:
# yum -y groupinstall X11
# yum -y groups install "Xfce"
After a reboot, you will be able to switch to XFCE4 instead of GNOME3 at the login screen. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342869/"
]
} |
510,709 | I have two tab-separated files which look as follows: file1: NC_008146.1 WP_011558474.1 1155234 1156286 44173NC_008146.1 WP_011558475.1 1156298 1156807 12NC_008146.1 WP_011558476.1 1156804 1157820 -3NC_008705.1 WP_011558474.1 1159543 1160595 42748NC_008705.1 WP_011558475.1 1160607 1161116 12NC_008705.1 WP_011558476.1 1161113 1162129 -3NC_009077.1 WP_011559727.1 2481079 2481633 8NC_009077.1 WP_011854835.1 1163068 1164120 42559NC_009077.1 WP_011854836.1 1164127 1164636 7 file2: NC_008146.1 GCF_000014165.1_ASM1416v1_protein.faaNC_008705.1 GCF_000015405.1_ASM1540v1_protein.faaNC_009077.1 GCF_000016005.1_ASM1600v1_protein.faa I want to match column 1 of file1 to file2 and replace itself with the respective column 2 entry of file 2.The output would look like this: GCF_000014165.1_ASM1416v1_protein.faa WP_011558474.1 1155234 1156286 44173GCF_000014165.1_ASM1416v1_protein.faa WP_011558475.1 1156298 1156807 12GCF_000014165.1_ASM1416v1_protein.faa WP_011558476.1 1156804 1157820 -3GCF_000015405.1_ASM1540v1_protein.faa WP_011558474.1 1159543 1160595 42748GCF_000015405.1_ASM1540v1_protein.faa WP_011558475.1 1160607 1161116 12GCF_000015405.1_ASM1540v1_protein.faa WP_011558476.1 1161113 1162129 -3GCF_000016005.1_ASM1600v1_protein.faa WP_011559727.1 2481079 2481633 8GCF_000016005.1_ASM1600v1_protein.faa WP_011854835.1 1163068 1164120 42559GCF_000016005.1_ASM1600v1_protein.faa WP_011854836.1 1164127 1164636 7 | You can do this very easily with awk : $ awk 'NR==FNR{a[$1]=$2; next}{$1=a[$1]; print}' file2 file1GCF_000014165.1_ASM1416v1_protein.faa WP_011558474.1 1155234 1156286 44173GCF_000014165.1_ASM1416v1_protein.faa WP_011558475.1 1156298 1156807 12GCF_000014165.1_ASM1416v1_protein.faa WP_011558476.1 1156804 1157820 -3GCF_000015405.1_ASM1540v1_protein.faa WP_011558474.1 1159543 1160595 42748GCF_000015405.1_ASM1540v1_protein.faa WP_011558475.1 1160607 1161116 12GCF_000015405.1_ASM1540v1_protein.faa WP_011558476.1 1161113 1162129 -3GCF_000016005.1_ASM1600v1_protein.faa WP_011559727.1 2481079 2481633 8GCF_000016005.1_ASM1600v1_protein.faa WP_011854835.1 1163068 1164120 42559GCF_000016005.1_ASM1600v1_protein.faa WP_011854836.1 1164127 1164636 7 Or, since that looks like a tab-separated file: $ awk -vOFS="\t" 'NR==FNR{a[$1]=$2; next}{$1=a[$1]; print}' file2 file1GCF_000014165.1_ASM1416v1_protein.faa WP_011558474.1 1155234 1156286 44173GCF_000014165.1_ASM1416v1_protein.faa WP_011558475.1 1156298 1156807 12GCF_000014165.1_ASM1416v1_protein.faa WP_011558476.1 1156804 1157820 -3GCF_000015405.1_ASM1540v1_protein.faa WP_011558474.1 1159543 1160595 42748GCF_000015405.1_ASM1540v1_protein.faa WP_011558475.1 1160607 1161116 12GCF_000015405.1_ASM1540v1_protein.faa WP_011558476.1 1161113 1162129 -3GCF_000016005.1_ASM1600v1_protein.faa WP_011559727.1 2481079 2481633 8GCF_000016005.1_ASM1600v1_protein.faa WP_011854835.1 1163068 1164120 42559GCF_000016005.1_ASM1600v1_protein.faa WP_011854836.1 1164127 1164636 7 This assumes that every RefSeq ( NC_* ) id in file1 has a corresponding entry in file2 . Explanation NR==FNR : NR is the current line number, FNR is the line number of the current file. The two will be identical only while the 1st file (here, file2 ) is being read. a[$1]=$2; next : if this is the first file (see above), save the 2nd field in an array whose key is the 1st field. Then, move on to the next line. This ensures the next block isn't executed for the 1st file. 
{$1=a[$1]; print} : now, in the second file, set the 1st field to whatever value was saved in the array a for the 1st field (so, the associated value from file2 ) and print the resulting line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/510709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/290191/"
]
} |
510,715 | After reading this answer by Kusalananda I got some sense of what a variable attribute is, yet I still don't understand what a "name reference" (type?) of a variable attribute is, what it is used for, or why one would want to use it in a Bash script. I tried to Google the term "name reference" (without quote marks) but I didn't find a wiki article on this term. | Some of the attributes are more like what would be called variable types in other languages. Name references are such a "type". Like references in many languages, access to the variable actually accesses some other variable (the one referenced). The only exception is when using declare -n to set what variable is referenced, or declare -p to show it. So, e.g.
foo=123
declare -n ref=foo # set what 'ref' points to
ref=456 # set the value of 'foo'
echo "$foo $ref" # both are the value of 'foo'
would print 456 456 . However, declare -p will show that ref is a reference to foo , and foo the variable with an actual value.
$ declare -p foo ref
declare -- foo="456"
declare -n ref="foo" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
510,756 | I am trying to verify When choosing a Docker container image file for my Ubuntu, what do I need to match between them? On Lubuntu, a CentOS container says it is CentOS, by $ sudo docker run centos bash -c "cat /etc/*-release "CentOS Linux release 7.6.1810 (Core) NAME="CentOS Linux"VERSION="7 (Core)"ID="centos"ID_LIKE="rhel fedora"VERSION_ID="7"PRETTY_NAME="CentOS Linux 7 (Core)"ANSI_COLOR="0;31"CPE_NAME="cpe:/o:centos:centos:7"HOME_URL="https://www.centos.org/"BUG_REPORT_URL="https://bugs.centos.org/"CENTOS_MANTISBT_PROJECT="CentOS-7"CENTOS_MANTISBT_PROJECT_VERSION="7"REDHAT_SUPPORT_PRODUCT="centos"REDHAT_SUPPORT_PRODUCT_VERSION="7"CentOS Linux release 7.6.1810 (Core) CentOS Linux release 7.6.1810 (Core) but also says it is the same Ubuntu as the host: $ sudo docker run centos bash -c "cat /proc/version"Linux version 4.15.0-46-generic (buildd@lgw01-amd64-038) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019$ cat /proc/versionLinux version 4.15.0-46-generic (buildd@lgw01-amd64-038) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 I wonder why the two commands differ in OS distribution and kernel version? Does a container share the same kernel as its host? If yes, should their kernel versions be the same? When choosing a Docker container image file for my Ubuntu, what do I need to match between them? says "you don't need to match distributions or kernel versions." | cat /proc/version is showing kernel version. As containers run on the same kernel as the host. It is the same kernel as the host. cat /etc/*-release is showing the distribution release. It is the OS version, minus the kernel. A container is not virtualisation, in is an isolation system that runs directly on the Linux kernel. It uses the kernel name-spaces, and cgroups. Name-spaces allow separate networks, process ids, mount points, users, hostname, Inter-process-communication. cgroups allows limiting resources. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/510756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
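A quick way to see both points from the answer above on an Ubuntu host; the centos image is the one used in the question, and --rm just discards the throwaway container:
uname -r                                              # the kernel as seen on the host
sudo docker run --rm centos uname -r                  # the same kernel, reported from inside the container
sudo docker run --rm centos cat /etc/centos-release   # but the container's user space is CentOS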