567,224
My build/deploy script does not behave as I was expecting. If clone or build fails it will still go on and call restart. I suspect it's because I return 0, but how should I write it if I want to return the result from the inner commands? And in "clone" I want to move into the cloned folder only if the clone was successful. How would you improve this script?

function clone {
    git clone /volume1/repos/project.git
    cd project
    return 0
}
function build {
    docker build -t project .
    return 0
}
function cleanup {
    ...
}
function restart {
    ...
    docker run ...
}
trap cleanup EXIT
if clone && build; then
    restart
    exit 0
else
    echo Failed...
    exit 1
fi
Yes, as you point out, since your function always returns 0 it will always report success, no matter what happens. The simplest solution is to not have your function return anything at all:

function clone {
    git clone /volume1/repos/project.git
    cd project
}
function build {
    docker build -t project .
}

Then, it will return the exit status of the last command run (from man bash):

    when executed, the exit status of a function is the exit status of the last command executed in the body.

Of course, since the last command of your clone function is cd, that means that if the git clone fails but cd project succeeds, your function will still return success. To avoid this, you can save the exit status of the git command in a variable and then have your function return that variable:

function clone {
    git clone /volume1/repos/project.git
    out=$?
    cd project
    return $out
}

Or, perhaps better:

function clone {
    git clone /volume1/repos/project.git && cd project
}

By combining the commands, if either fails, your function will return failure.
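Putting it together, the whole script might then look like the sketch below (a minimal sketch built from the question's script; the cleanup and restart bodies are left elided as in the original):

function clone {
    git clone /volume1/repos/project.git && cd project
}
function build {
    docker build -t project .
}
function cleanup {
    ...
}
function restart {
    ...
    docker run ...
}
trap cleanup EXIT
if clone && build; then
    restart
    exit 0
else
    echo "Failed..."
    exit 1
fi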
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/567224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395102/" ] }
567,338
I have a number of files (Jupyter notebooks, .ipynb) which are text files. All of these contain some LaTeX markup. But when I run file, I get:

$ file nb_*
nb_1.ipynb: ASCII text
nb_2.ipynb: ASCII text
nb_3.ipynb: ASCII text, with very long lines
nb_4.ipynb: LaTeX document, ASCII text, with very long lines
nb_5.ipynb: text, with very long lines

How does file distinguish these? I would like all files to have the same type. (Why should the files have the same type? I am uploading them to an online system for sharing. The system classifies them somehow and treats them differently, with no possibility for me to change this. I suspect the platform uses file or maybe libmagic internally and would like to work around this.)
The file type recognition is driven by so-called magic patterns. The magic file for analyzing TeX family source code contains a number of macro names that cause a file to be classified as LaTeX. Each match is assigned a strength, e.g. 15 in case of \begin and 18 for \chapter. This makes the heuristic more robust against false positives like misclassification of Plain TeX or ConTeXt documents that happen to define their own macros with those names.
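To see the mechanism in action locally, one could experiment with a custom magic file; this is an untested sketch (the file name, pattern and strength value are assumptions, not taken from the real magic database), and it only changes what a local file invocation reports, not what a remote platform's libmagic copy does:

# ipynb.magic: classify files containing "nbformat" as Jupyter notebooks
0	search/4096	nbformat	Jupyter notebook document text
!:strength + 50

file -m ipynb.magic nb_*.ipynb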
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/567338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395210/" ] }
567,351
Both the hier(7) and file-hierarchy(7) man pages claim to describe the conventional file system hierarchy. However, there are some differences between them. For example, hier(7) describes /opt and /var/crash, but file-hierarchy(7) does not. What are the differences between these two descriptions? Which one do real Linux systems use?
The hier manual page has a long history that dates back to Unix Seventh Edition in 1979. The one in Linux operating systems is not the original Unix one, but a clone. At the turn of the century, FreeBSD people documented existing long-standing practice, namely that system administrators adjust stuff for their own systems, and that a good system administrator changes that manual page to match the local adjustments.

Of course, Linux operating systems are notoriously bad when it comes to doco. The hier manual page is rarely fully adjusted to the actual operating system by the distribution maintainers, if it is adjusted at all. Debian, for example, does not patch it at all, and simply provides the underlying generic hier manual page from Michael Kerrisk's Linux Manpages Project as-is.

(The BSDs have a generally much stronger tradition of the people who are making changes to the operating system including changes to its doco in what they do. Their doco is better as a result. But it is itself still woefully outdated in some areas. For example: the FreeBSD manual for the ul command has been missing large parts of the tool since 2.9BSD.)

So Lennart Poettering wrote his own manual page for systemd, file-hierarchy, in 2014. As you can see, despite its claim it really is not "more minimal" than the hier page. For starters, it documents a whole load of additional things about user home directories.

Thus there are two different manual pages from two different sets of people, none of whom are the distribution maintainers themselves, who actually decide this stuff. The simple truth is that real Linux-based operating systems adhere to neither. There are distribution variations from vanilla systemd that don't get patched into the file-hierarchy page by the distribution maintainers; and as mentioned the hier page often does not get locally patched either.

They do not adhere to the Linux Filesystem Hierarchy Standard, moreover. Several operating systems purposefully deviate from it, and a few of them document this. A few Linux operating systems intentionally do not reference it at all, such as GoboLinux. As you can see from the further reading, Arch Linux used to reference it but has since dropped it. (I have a strong suspicion, albeit that I have done no rigorous survey, that Arch Linux dropping the FHS is the tipping point, and that adherence to the FHS is the exception rather than the norm for Linux operating systems now.)

For many Linux operating systems there simply is not a single manual page for this. The actual operating system will be an admixture of hier, file-hierarchy, the Linux Filesystem Hierarchy Standard, and individual operating system norms with varying degrees of documentation.

Further reading

Jonathan de Boyne Pollard (2016). "Gazetteer". nosh Guide. Softwares.
Binh Nguyen (2004-07-30). Linux Filesystem Hierarchy. Version 0.65. The Linux Documentation Project.
https://wiki.archlinux.org/index.php/Frequently_asked_questions#Does_Arch_follow_the_Linux_Foundation.27s_Filesystem_Hierarchy_Standard_.28FHS.29.3F
https://netarky.com/programming/arch_linux/Arch_Linux_directory_structure.html
https://wiki.gentoo.org/wiki/Complete_Handbook/Users_and_the_Linux_file_system#Linux_file_system_hierarchy
https://www.suse.com/support/kb/doc/?id=7004448
https://sta.li/filesystem/
Daniel J. Bernstein. The root directory. cr.yp.to.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/567351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217242/" ] }
567,354
I am trying to write a script that will do ssh and run a few commands on the remote host with the help of a loop. One of the problems I've run into is that, when I run the for loop, its output is null.

migration_array=($1)
Environment=$2

case "$Environment" in
  "Feature")
    echo "Running test on Feature"
    ssh -tt [email protected] -p 1234 <<EOF
    for i in ${migration_array[@]}; do
      echo "file: $i"
    done
    exit
EOF
    ;;
  *)
    echo "unknown environment!"
    ;;
esac

Output

Last login: Thu Feb 13 15:52:43 2020 from 10.0.10.88
jenkins@feature:~$
> do
> echo "file: "
> done
file:
jenkins@ffeature:~$ exit
logout
Connection to 10.0.10.100 closed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/567354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390951/" ] }
567,419
The final purpose is to build the image of a partition sector by sector. I want the sector size to be 4096. As a first step I am trying to create an empty image of 32 MiB with 4096-byte sectors rather than 512-byte ones. For this I am trying:

dd if=/dev/zero of=empty4k.img bs=4096 count=8192

Then I do fdisk -l empty4k.img and it shows 512-byte sectors. I believe it is because if you do fdisk -l /dev/zero it also says 512-byte sectors. Can anyone help me?
The bs given to dd just tells how large the buffer should be while creating the file. In the end, the file consists of nothing but zero-bytes; there is no information about alignment. You have to use the specific parameter to fdisk, which is -b, as per the man page of fdisk(8):

-b, --sector-size sectorsize
    Specify the sector size of the disk. Valid values are 512, 1024, 2048, and 4096. (Recent kernels know the sector size. Use this option only on old kernels or to override the kernel's ideas.) Since util-linux-2.17, fdisk differentiates between logical and physical sector size. This option changes both sector sizes to sectorsize.
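For the image from the question, that would look like this (a quick sketch of the two commands combined):

dd if=/dev/zero of=empty4k.img bs=4096 count=8192
fdisk -b 4096 -l empty4k.img   # now reported with 4096-byte sectors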
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/567419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/319668/" ] }
567,531
I always do this to append text to a file:

echo "text text text ..." >> file
# or
printf "%s\n" "text text text ..." >> file

I wonder if there are more ways to achieve the same, a more elegant or unusual way.
I quite like this one, where I can set up a log file at the top of a script and write to it throughout without needing either a global variable or to remember to change all occurrences of a filename:

exec 3>> /tmp/somefile.log
...
echo "This is a log message" >&3
echo "This goes to stdout"
echo "This is written to stderr" >&2

The exec 3>dest construct opens the file dest for writing (use >> for appending, < for reading - just as usual) and attaches it to file descriptor #3. You then get descriptor #1 for stdout, #2 for stderr, and this new #3 for the file dest. You can join stderr to stdout for the duration of a script with a construct such as exec 2>&1 - there are lots of powerful possibilities. The documentation (man bash) has this to say about it:

exec [-cl] [-a name] [command [arguments]]
    If command is specified, it replaces the shell. [...] If command is not specified, any redirections take effect in the current shell [...].
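A small usage sketch (the file name and messages are just placeholders); the descriptor can also be closed again with exec 3>&- once the script no longer needs the log:

#!/bin/bash
exec 3>> /tmp/somefile.log     # open the log once, at the top
echo "script started" >&3      # goes to the log
echo "doing work"              # goes to stdout as usual
exec 3>&-                      # close the log descriptor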
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/567531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245871/" ] }
567,733
I have been using some Ubuntu clones for a while, because they work on my uncommon computer setup (Intel Atom, UEFI, 64-bit). Unfortunately, because of this specific architecture, I cannot put just any Linux distribution on this machine. But I worked with Debian a very long time ago, and I was very happy with it because of its minimalism, etc. I read some articles about Debian + UEFI, and they say that it is quite a difficult operation or even impossible. Do you share the opinion of those two articles, or do you have some tips and tricks for installing this OS the right and easy way?
I suspect that your system combines a 64-bit Atom CPU with a 32-bit UEFI; this was common for Atom systems in the past. This causes problems for a number of distributions, and Ubuntu has had specific support for this scenario for a long time. Debian 7 couldn’t be installed in 64-bit mode on these systems, which is why there are many claims online that it doesn’t work; however, mixed mode is supported since Debian 8, so it should be possible to install current releases of Debian without issue. Download a multi-arch Debian 10 installation image, choose the 64-bit installation option, and the installer will set up 64-bit Debian with a 32-bit loader.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395568/" ] }
567,764
I'm using wc -l to count the lines in the output of a command, as the input is piped to it:

command | wc -l

This works fine, but if command is doing some heavy computation, this is quite slow. Is there an alternative which displays the count of lines which have been "piped in so far"? Such a thing would especially be useful when I'm doing a sort of per-item computation, like

cat something | xargs -L1 heavy-per-line-computation | wc -l

One way I could do this manually is to pipe the output to a file (command > file) and periodically run cat file | wc -l on it. But a single command (which doesn't redirect to files, to avoid wasteful I/O) is what I'm after.
awk '{print NR}'

This command prints a new number for each line encountered. If the final line is complete then the last number will agree with what wc -l would say. If the final line is incomplete then awk may count it (in my Kubuntu, GNU awk does) but wc -l would not (because it really counts newlines); so there may be a discrepancy. Another discrepancy is if the input is completely empty: wc -l will print 0, our awk will print nothing. To make it print 0 use this variant:

awk '{print NR} END {if (NR==0) print NR}'

Or maybe you want each new number to overwrite the old one in the same line of your console. Then this:

awk '{printf "\r%s",NR} END {print "\r"NR}'

Example:

yes | head -n 76543 | awk '{printf "\r%s",NR} END {print "\r"NR}'

Note the command consumes its input (tee may be handy). For monitoring purposes you may be interested in:

awk '{print NR OFS $0}'

which (with the default OFS being space) is almost like cat -n (if your cat supports -n). pv -l counts lines and it can be used inside a pipeline. Example:

for i in 1 2 3 4 5; do date; sleep 1; done | pv -l | wc -l

Consider pv -lb for quite minimal output.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39009/" ] }
567,792
I have a variable:

❯ echo $numholds
409503
409929
409930
409932
409934
409936
409941
409942
409944
409946

I want to do a for loop over it, but the newline-delimitation doesn't work:

❯ for num in $numholds; do
echo $num ,
echo New Item!
> done
409503
409929
409930
409932
409934
409936
409941
409942
409944
409946 ,
New Item!

This even happens when I set IFS=$"\n \t". How can I make this work?
To split on newline in zsh, you use the f parameter expansion flag (f for line feed), which is short for ps:\n::

for num (${(f)numholds}) print -r New item: $num

You could also use $IFS-splitting, which in zsh (contrary to other Bourne-like shells) you have to ask for explicitly for parameter expansion, using the $=param syntax ($= looks a bit like a pair of scissors):

for num ($=numholds) print -r New item: $num
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/567792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297140/" ] }
567,837
I'm looking for a backup utility with incremental backups, but in a more complicated way. I tried rsync, but it doesn't seem to be able to do what I want, or more likely, I don't know how to make it do that. So this is an example of what I want to achieve with it. I have the following files:

testdir
├── picture1
├── randomfile1
├── randomfile2
└── textfile1

I want to run the backup utility and basically create an archive (or a tarball) of all of these files in a different directory:

$ mystery-command testdir/ testbak
testbak
└── 2020-02-16--05-10-45--testdir.tar

Now, let's say the following day, I add a file, such that my structure looks like:

testdir
├── picture1
├── randomfile1
├── randomfile2
├── randomfile3
└── textfile1

Now when I run the mystery command, I will get another tarball for that day:

$ mystery-command testdir/ testbak
testbak
├── 2020-02-16--05-10-45--testdir.tar
└── 2020-02-17--03-24-16--testdir.tar

Here's the kicker: I want the backup utility to detect the fact that picture1, randomfile1, randomfile2 and textfile1 have not been changed since the last backup, and only back up the new/changed files, which in this case is randomfile3, such that:

tester@raspberrypi:~ $ tar -tf testbak/2020-02-16--05-10-45--testdir.tar
testdir/
testdir/randomfile1
testdir/textfile1
testdir/randomfile2
testdir/picture1
tester@raspberrypi:~ $ tar -tf testbak/2020-02-17--03-24-16--testdir.tar
testdir/randomfile3

So as a last example, let's say the next day I changed textfile1, and added picture2 and picture3:

$ mystery-command testdir/ testbak
testbak/
├── 2020-02-16--05-10-45--testdir.tar
├── 2020-02-17--03-24-16--testdir.tar
└── 2020-02-18--01-54-41--testdir.tar
tester@raspberrypi:~ $ tar -tf testbak/2020-02-16--05-10-45--testdir.tar
testdir/
testdir/randomfile1
testdir/textfile1
testdir/randomfile2
testdir/picture1
tester@raspberrypi:~ $ tar -tf testbak/2020-02-17--03-24-16--testdir.tar
testdir/randomfile3
tester@raspberrypi:~ $ tar -tf testbak/2020-02-18--01-54-41--testdir.tar
testdir/textfile1
testdir/picture2
testdir/picture3

With this system, I would save space by only backing up the incremental changes between each backup (with obviously the master backup that has all the initial files), and I would have backups of the incremental changes, so for example if I made a change on day 2, and changed the same thing again on day 3, I can still get the file with the change from day 2, but before the change from day 3. I think it's kinda like how GitHub works :) I know I could probably create a script that runs a diff and then selects the files to backup based on the result (or more efficiently, just get a checksum and compare), but I want to know if there's any utility that can do this a tad easier :)
Update: Please see some caveats here: Is it possible to use tar for full system backups? According to that answer, restoration of incremental backups with tar is prone to errors and should be avoided. Do not use the below method unless you're absolutely sure you can recover your data when you need it.

According to the documentation you can use the -g/--listed-incremental option to create incremental tar files, e.g.

tar -cg data.inc -f DATE-data.tar /path/to/data

Then next time do something like

tar -cg data.inc -f NEWDATE-data.tar /path/to/data

Where data.inc is your incremental metadata, and DATE-data.tar are your incremental archives.
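To restore, the archives are extracted in order, oldest first; with GNU tar you can pass /dev/null as the snapshot file on extraction, since the metadata is not needed then (a sketch using the file names from the question):

tar --extract --listed-incremental=/dev/null -f 2020-02-16--05-10-45--testdir.tar
tar --extract --listed-incremental=/dev/null -f 2020-02-17--03-24-16--testdir.tar
tar --extract --listed-incremental=/dev/null -f 2020-02-18--01-54-41--testdir.tar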
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395667/" ] }
567,964
I have gone through some notes saying that it is not a good idea to upgrade pip using the sudo command. My question is, if I don't use sudo I get permission errors. How can I resolve this? Also, what is the reason sudo is not suggested in order to upgrade pip?

$ python -m pip install --upgrade pip
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting pip
  Using cached https://files.pythonhosted.org/packages/54/0c/d01aa759fdc501a58f431eb594a17495f15b88da142ce14b5845662c13f3/pip-20.0.2-py2.py3-none-any.whl
Installing collected packages: pip
  Found existing installation: pip 19.2.3
    Uninstalling pip-19.2.3:
      Successfully uninstalled pip-19.2.3
  Rolling back uninstall of pip
  Moving to /home/abc/.local/bin/pip
   from /tmp/pip-uninstall-V4F8Pj/pip
  Moving to /home/abc/.local/bin/pip2
   from /tmp/pip-uninstall-V4F8Pj/pip2
  Moving to /home/abc/.local/bin/pip2.7
   from /tmp/pip-uninstall-V4F8Pj/pip2.7
  Moving to /home/abc/.local/lib/python2.7/site-packages/pip-19.2.3.dist-info/
   from /home/abc/.local/lib/python2.7/site-packages/~ip-19.2.3.dist-info
  Moving to /home/abc/.local/lib/python2.7/site-packages/pip/
   from /home/abc/.local/lib/python2.7/site-packages/~ip
ERROR: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip-20.0.2.dist-info/top_level.txt'
Consider using the `--user` option or check the permissions.
WARNING: You are using pip version 19.2.3, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Never upgrade the OS-provided version of tools outside of the package management system, because if there's a new package released it will overwrite your changes. So sudo pip install --upgrade pip is a bad thing. The OS package system believes it controls the files, and you've overridden them. Odd behaviour may result, including installing an older version than you've previously installed! If you want a newer version then you can install it in the user profile:

% pip install --upgrade --user pip
Collecting pip
  Downloading https://files.pythonhosted.org/packages/54/0c/d01aa759fdc501a58f431eb594a17495f15b88da142ce14b5845662c13f3/pip-20.0.2-py2.py3-none-any.whl (1.4MB)
    100% |################################| 1.4MB 615kB/s
Installing collected packages: pip
Successfully installed pip-20.0.2

This will install the latest version in $HOME/.local/bin:

% ls -l .local/bin/pip
-rwxr-xr-x 1 sweh sweh 223 Feb 16 21:49 .local/bin/pip

If you have $HOME/.local/bin on your PATH then you'll always pick up user pip installed programs. Most of the time, however, you don't need to upgrade pip.
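If $HOME/.local/bin is not already on the PATH, a line like the following in ~/.profile or ~/.bashrc takes care of it (a common idiom, not from the original answer):

export PATH="$HOME/.local/bin:$PATH"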
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567964", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/363046/" ] }
568,029
Can someone explain how "real" process priority (i.e. the pri_baz field of ps) is calculated? My guess is:

pri_baz = 99 - static_priority                            # if static_priority > 0 (real-time process)
pri_baz = 100 + min(20 + nice + dynamic_adjustment, 39)   # if static_priority = 0 (time-shared process)

This is supported by the following test:

# chrt -r 1 sleep 1 \
> & chrt -r 99 sleep 1 \
> & nice --20 sleep 1 \
> & nice -19 sleep 1 \
> & ps -C sleep -O pri_baz
[1] 25408
[2] 25409
[3] 25410
[4] 25411
  PID BAZ S TTY          TIME COMMAND
25408  98 S pts/3    00:00:00 sleep 1
25409   0 S pts/3    00:00:00 sleep 1
25410 100 S pts/3    00:00:00 sleep 1
25411 139 S pts/3    00:00:00 sleep 1

However I'm puzzled because pri_baz = 99 appears to be unused. I knew Linux handled (by default) 140 priority queues, and this scheme gives only 139 values of priority.
In ps’s output, pri_baz is calculated as pp->priority + 100, and pp->priority is the prio value from the kernel. This is described as

    Priority of a process goes from 0..MAX_PRIO-1, valid RT priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH tasks are in the range MAX_RT_PRIO..MAX_PRIO-1. Priority values are inverted: lower p->prio value means higher priority. The MAX_USER_RT_PRIO value allows the actual maximum RT priority to be separate from the value exported to user-space. This allows kernel threads to set their priority to a value higher than any user task. Note: MAX_RT_PRIO must not be smaller than MAX_USER_RT_PRIO.

So the range in the kernel does cover 140 values, from 0 to MAX_PRIO-1 (139). However, the minimum FIFO and RT priority is 1, and this explains the missing value: the input values (at least, that can be set from userspace, using sched_setscheduler) go from 1 to 99, and the kernel converts those to prio values using the formula MAX_RT_PRIO - 1 - priority, giving values from 0 to 98.
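The time-shared half of the question's formula is easy to check with nice (a small sketch; PIDs will differ):

nice -n 5 sleep 10 &
ps -C sleep -O pri_baz    # expect BAZ = 100 + 20 + 5 = 125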
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/385179/" ] }
568,034
I'm currently trying to share and use my wildcard certificate from Let's Encrypt over NFS, but the servers that are supposed to use it cannot do so. About my setup: I have 3 VMs (in future maybe 4) running. One is a reverse proxy that receives all HTTP and HTTPS traffic and redirects it to my mail server and my Kanboard. My mail server runs iRedMail. My problem is that I fail to deploy the certificate on both the Kanboard and the iRedMail server. Kanboard (Apache2) tells me this:

SSLCertificateFile: file '/mnt/letsencrypt/live/domain.com/fullchain.pem' does not exist or is empty

and iRedMail (Nginx) this:

nginx: [emerg] BIO_new_file("/etc/ssl/certs/iRedMail.crt") failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/ssl/certs/iRedMail.crt'

Since I don't want this post to drag on too long, I created some pastebins with my configs and the things I have done: Reverse-proxy, iRedMail, Kanboard. All will be accessible for 6 months. HTTPS access for domain.com (meaning the reverse proxy) works without a problem. Output for sudo ls -l /etc/letsencrypt/ (live):

drwxrwxrwx 3 administrator root 4096 Feb 13 16:25 live

All 3 servers run Ubuntu 18.04 Server and the user "administrator" uses the same credentials. If you need any more information, feel free to ask.

Edits

Outputs for namei -lx /path/to/private/key
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353377/" ] }
568,058
As far as I know, every desktop/server distribution of Linux has Perl (and core modules) installed by default; AIX and Solaris (IIRC) also do. By "by default" I mean that even the most lightweight variant has it. I haven't worked on BSD or similar ones; do they come with Perl? Motivation: I'm trying to figure out whether encouraging people in my team to use Perl instead of awk/sed/other text utilities would make sense. Note: "Which is the most portable of sed, awk, perl and sh?" doesn't answer my question. Portability is not my main concern here; availability out of the box is. Even if all unix-like systems have awk/sed I would still prefer Perl. Note: also, by Perl I mean Perl 5.8+
Yes, if you mean available as an ordinary third-party application rather than being bundled as part of the operating system. FreeBSD dropped Perl from contrib back in 2003, in version 5.0. It's in ports/packages, of course. This is also the case for NetBSD and MirBSD (a.k.a. MirOS BSD), for FreeBSD derivatives such as GhostBSD and TrueOS, and for FreeBSD fork DragonFlyBSD. FreeBSD fork MidnightBSD has retained Perl in contrib, however.

Further reading

FreeBSD/i386 5.0-RELEASE Release Notes. 2003. The FreeBSD Documentation Project.
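So on those systems Perl is one package install away; for example, on FreeBSD (the exact package name may differ between releases):

pkg install perl5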
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395846/" ] }
568,219
In a shell script, I have the following code to exit early if the database is offline:

Br3trans -u / -c -f dbstate && ( echo Datenbank ist online. ; echo Nächster Schritt... ) || ( echo "Datenbank ist offline" ; exit 1 );
echo Test

If the database is offline, it doesn't exit the script and instead goes on to echo Test. Any ideas?
The exit statement exits the subshell. If you unroll your one-liner it will start to work as you expect:

#!/bin/bash
if Br3trans -u / -c -f dbstate
then
    echo "Datenbank ist online."
    echo "Nächster Schritt..."
else
    echo "Datenbank ist offline"
    exit 1
fi
echo Test

Alternatively, in some shells you can use a grouping {...} instead of a subshell (...), but I think for readability it would be better to use the slightly longer version I've given you.
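For completeness, the grouping variant of the original one-liner would look like this (note that, unlike with (...), each command inside {...} must be terminated with a ; or a newline):

Br3trans -u / -c -f dbstate && { echo "Datenbank ist online."; echo "Nächster Schritt..."; } || { echo "Datenbank ist offline"; exit 1; }
echo Test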
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395974/" ] }
568,225
In a folder I have files with nanosecond timestamps:

FileLastestFile.ext
FileOther0.ext
FileOther1.ext
FileOther2.ext
FileOther3.ext
FileOther4.ext
...
FileOther.ext

I want to keep only the files that were created within 10 seconds of the latest file in that directory (FileLastestFile.ext). I tried to use the command find .. -newermt, but I don't know how to specify the FileLastestFile.ext timestamp as the origin.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395981/" ] }
568,241
according to the test(1) manpages:

-n STRING
    the length of STRING is nonzero

so I expected this to run fine:

[ -n ${var} ] && echo "var is not empty"

I used this logic in a real case, a script like this:

[...]
dne() {
    echo DEBUG: wc -c: $(echo -n $peopkg |wc -c)
    echo DEBUG: pdir: $pdir
    echo "Error: ${package}: doesn't exist in local repos"
    exit 1
}

# packages are listed as such:
# p/pname/package-version-release-distrelease
# ...
# pname is inconsistent, but it is guaranteed that the first word in package
# (all to lowercase) will match at least one pname.
# NOTE: package-[0-9]* matches package-32bit. *-32bit is a subpackage we want
# to match later, but not when it's not already in pcase.
# So that must be negated too
pfirst=${pcase%%-*}
for pdir in ${p}/${pfirst}*
do
    # check if the glob matched anything at all
    [ ${pdir} = "${p}/${pfirst}*" ] && dne

    peopkg=$(find ${pdir} \
        -name ${package}-[0-9]* \
        ! -name *.delta.eopkg \
        ! -name ${package}-32bit* |sort -rn |head -1)
    echo DEBUG: in-loop peopkg: $peopkg
    echo DEBUG: in-loop wc -c: $(echo -n $peopkg |wc -c)
    echo DEBUG: in-loop test -n: $(test -n $peopkg && echo true || echo false)
    #------------------------------------------------------------#
    # break on ANY match. There's supposed to be only one anyway #
    [ -n ${peopkg} ] && break # <--- [issue here]                #
    #------------------------------------------------------------#
done
[ -z ${peopkg} ] && dne
[...]

what matters here is that when I run this, I get these messages:

DEBUG: in-loop peopkg:
DEBUG: in-loop wc -c: 0
DEBUG: in-loop test -n: true
DEBUG: wc -c: 0
DEBUG: pdir: a/alsa-firmware
Error: alsa-utils: doesn't exist in local repos

This makes zero sense to me.. DEBUG: pdir: a/alsa-firmware indicates that the loop exits always on the first iteration. Which can only happen if the glob pattern a/alsa* matched something AND peopkg was nonzero length. PS: I'm trying to be POSIX compliant.
If var contains the empty string, [ -n $var ] expands (after word-splitting $var) to the words [, -n and ]. That's the one-argument version of test, which tests if that single argument is non-empty. The string -n is not empty, so the test is true. In the GNU manpage, that's mentioned just after your quoted passage:

-n STRING
    the length of STRING is nonzero
STRING
    equivalent to -n STRING

The problem is of course the lack of quoting, discussed in e.g. "Why does my shell script choke on whitespace or other special characters?" and "When is double-quoting necessary?" Note that it's not only the empty string that will break in the unquoted case. Problems also appear if var contains multiple words:

$ var='foo bar'; [ -n $var ]
bash: [: foo: binary operator expected

Or wildcard characters:

$ var='*'; [ -n $var ]
bash: [: file.txt: binary operator expected
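Quoting the expansions fixes both lines in the script (same logic, just with double quotes):

[ -n "${peopkg}" ] && break
[ -z "${peopkg}" ] && dne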
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293023/" ] }
568,385
If I run ls -a:

user@users-MacBook-Pro:~$ ls -a
.  ..

I get . and .. (current directory and parent directory?) Is there a reason why they show up after ls -a? Do they do anything interesting?
Because -a means show all files. It is useful when combined with -l. As to why show those useless files when not using -l: because of consistency, and because Unix does not try to second-guess what is good for you. There is an option -A (for at least GNU ls) that excludes these two (. and ..). Interestingly, the idea of hidden files in Unix came about from a bug in ls where it was trying to hide these two files. To make the code simple, the original implementation only checked the first character. People used this to hide files; it then became a feature, and the -a option was added to show the hidden files. Later someone wondered, the same as you, why . and .. are shown; we know they are there. The -A option was born. Note: Unix has a much looser meaning of file than you may have: FILE ⊇ {normal-file, directory, named-pipe, unix-socket, symbolic-link, devices}.
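A quick illustration of the difference (hypothetical directory contents):

$ ls -a
.  ..  .profile  notes.txt
$ ls -A
.profile  notes.txt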
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/568385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/391527/" ] }
568,495
I have multiple files with the following text:

20~02~19~05-01-522249\\\2249\\\2249\\\2249\\\2249\\\2249\\\2248\\\

I'd like to use sed or another Linux command to replace \\\ with a newline.
sed 's|\\\\\\|\n|g' filename

does it, if you're using GNU sed. If you want POSIX sed, then this should work (quite a lot of escaping!):

sed 's|\\\\\\|\
|g' filename
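As an alternative (not from the original answer), awk avoids the newline-escaping issue entirely, since "\n" is valid in its replacement string:

awk '{gsub(/\\\\\\/, "\n")} 1' filename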
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/386213/" ] }
568,537
I know that I can pass an env variable to a command by prepending it like this:

env_variable=value command

but today I accidentally put && between the variable and the command, exactly like this:

env_variable=value && command

and I got curious how it differs from the correct way. I know that you can use && to chain commands together. But what's interesting is that the command didn't receive the variable; why? I'd be grateful if anyone explained how exactly the second variant is different from the first one and why the command didn't see the variable. Thanks
foo=bar && somecmd

is pretty much the same as (since the assignment isn't likely to fail)

foo=bar; somecmd

which is the same as (on separate lines)

foo=bar
somecmd

which is the assignment of a shell variable called foo, and then running a command somecmd. If foo is not exported (shell variables aren't by default), then it's not present in the environment of somecmd. But you could use it within the same shell. See, e.g. "What do the bash-builtins 'set' and 'export' do?" and "If processes inherit the parent's environment, why do we need export?"
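A quick way to see the difference (a sketch; sh -c starts a child process whose environment we can inspect):

$ foo=bar sh -c 'echo "[$foo]"'      # prefix assignment: exported to the command
[bar]
$ foo=bar && sh -c 'echo "[$foo]"'   # plain shell variable: not in the environment
[]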
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568537", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90372/" ] }
568,569
I ran the command ll /dev/null and got this output:

crw-rw-rw- 1 root root 1, 3 Feb 19 10:20 /dev/null

I know d means directory. Can someone please explain what the c special flag means?
It's a character-device-based file. Within Linux, devices such as hardware are characterised in two ways:

Character devices (c), which transfer data in characters, also known as bytes or bits, such as mice, speakers, etc.
Block devices (b), which transfer data in blocks of data, such as USB drives, hard disks, etc.

These types of files can commonly be found within the /dev directory, which is where device files are stored; just type ls -lah and you can see the various types. If you're running a decent Linux distro, that information (plus more than you could probably ever need) can be obtained with the command:

info ls

which contains this little snippet:

The file type is one of the following characters:
- regular file
b block special file
c character special file
C high performance ("contiguous data") file
d directory
D door (Solaris 2.5 and up)
l symbolic link
M off-line ("migrated") file (Cray DMF)
n network special file (HP-UX)
p FIFO (named pipe)
P port (Solaris 10 and up)
s socket
? some other file type
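stat can report the same classification in words (GNU stat shown; the block-device path is just an example):

$ stat -c '%F' /dev/null
character special file
$ stat -c '%F' /dev/sda
block special file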
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/356234/" ] }
568,634
I'm basically trying to figure out how one would go about making a GUI from absolute scratch with nothing but the Linux kernel and programming in C. I am not looking to create a GUI desktop environment from scratch, but I would like to create some desktop applications, and in my search for knowledge, all the information I have been able to find is on GUI APIs and toolkits. I would like to know, at the very least for my understanding of the fundamentals of how a Linux GUI is made, how one would go about making a GUI environment or a GUI application without using any APIs or toolkits. I am wondering if, for example:

- existing APIs and toolkits work via system calls to the kernel (and the kernel is responsible at the lowest level for constructing a GUI image in pixels or something)
- these toolkits perform syscalls which simply pass information to screen drivers (is there a standard format for sending this information that all screen drivers abide by, or do GUI APIs need to be able to output this information in multiple formats depending on the specific screen/driver?)
- and also, if this is roughly true, does the raw Linux kernel usually just send information to the screen in the form of 8-bit characters?

I just really want to understand what happens between the Linux kernel and what I see on my screen (control/information flow through both software and hardware if you know, what format the information takes, etc.). I would so greatly appreciate a detailed explanation; I understand this might be a doozy to explain in sufficient detail, but I think such an explanation would be a great resource for others who are curious and learning. For context, I'm a 3rd-year comp sci student who recently started programming in C for my Systems Programming course and I have an intermediate (or so I would describe it) understanding of Linux and programming. Again, thank you to anyone who helps me out!!!
How it works (Gnu/Linux + X11)

Overview

It looks something like this (not drawn to scale):

┌───────────────────────────────────────────────┐
│ User                                          │
│     ┌─────────────────────────────────────────┤
│     │ Application                             │
│     │            ┌──────────┬─────┬─────┬─────┤
│     │            │   ...    │ SDL │ GTK │ QT  │
│     │            ├──────────┴─────┴─────┴─────┤
│     │            │            xLib            │
│     │            ├────────────────────────────┤
├─────┴───┬────────┴──┐          X11            │
│   Gnu   │ Libraries │         Server          │
│  Tools  │           │                         │
├─────────┘           │                         │
├─────────────────────┤                         │
│ Linux (kernel)      │                         │
├─────────────────────┴─────────────────────────┤
│ Hardware                                      │
└───────────────────────────────────────────────┘

We see from the diagram that X11 talks mostly with the hardware. However it needs to talk via the kernel, to initially get access to this hardware. I am a bit hazy on the detail (and I think it changed since I last looked into it). There is a device /dev/mem that gives access to the whole of memory (I think physical memory); as most of the graphics hardware is memory mapped, this file (see "everything is a file") can be used to access it. X11 would open the file (the kernel uses file permissions to see if it can do this), then X11 uses mmap to map the file into virtual memory (make it look like memory); now the memory looks like memory. After mmap, the kernel is not involved. X11 needs to know about the various graphics hardware, as it accesses it directly, via memory. (This may have changed; specifically, the security model may no longer give access to ALL of the memory.)

Linux

At the bottom is Linux (the kernel): a small part of the system. It provides access to hardware, and implements security.

Gnu

Then Gnu (libraries; bash; tools: ls, etc.; C compiler, etc.). Most of the operating system.

X11 server (e.g. x.org)

Then X11 (or Wayland, or ...), the base GUI subsystem. This runs in user-land (outside of the kernel): it is just another process, with some privileges. The kernel does not get involved, except to give access to the hardware, and to provide inter-process communication so that other processes can talk with the X11 server.

X11 library

A simple abstraction to allow you to write code for X11.

GUI libraries

Libraries such as qt, gtk, sdl are next; they make it easier to use X11, and work on other systems such as Wayland, Microsoft's Windows, or MacOS.

Applications

Applications sit on top of the libraries.

Some low-level entry points, for programming

xlib

Using xlib is a good way to learn about X11. However, do some reading about X11 first.

SDL

SDL will give you low-level access, direct to bit-planes for you to directly draw to.

Going lower

If you want to go lower, then I am not sure what good current options are, but here are some ideas. Get an old Amiga, or a simulator, and some good documentation, e.g. https://archive.org/details/Amiga_System_Programmers_Guide_1988_Abacus/mode/2up (I had 2 books, this one and similar). Or look at what can be done on a Raspberry Pi; I have not looked into this.

Links

X11: https://en.wikipedia.org/wiki/X_Window_System

Modern ways

Writing this got my interest, so I had a look at what the modern fast way to do it is. Here are some links: https://blogs.igalia.com/itoral/2014/07/29/a-brief-introduction-to-the-linux-graphics-stack/
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/568634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/396359/" ] }
568,639
I have a pipe-delimited text file named data.txt like

Kalpesh|100|1
Kalpesh|500|1
Ramesh|500|1
Ramesh|500|1
Ramesh|500|1
Naresh|500|1
Ganesh|500|1
Ganesh|500|1
Ganesh|500|1
Ganesh|500|1

I am using an awk script as follows:

awk -F"|" 'BEGIN { ln=0;slno=0;pg=0; }
{
  name=$1;
  {
    if (name !=x||ln > 50)   #if same name repeates more than 50times then new page
    {
      tot=0;pg++;printf("\f");
      print "PERSONS HAVING OUTSTANDING ADVANCE SALARY"
      print "+==============================+"
      print "|Sr.| name |Amount Rs.|Nos |"
      print "+==============================+"
      ln=0;
    }
    if (name!=x)
      slno=1;
    tot+=$2;
    {
      printf ("|%3s|%10s|%10.2f|%4d|\n",slno,$1,$2,$3,tot,$4);
      ln++;slno++;x=name;
    }
  }
}
END {
  print "================================"
  print "Total for",$1,slno,tot
  print "================================"
  print "\f"
}' data.txt

This is giving a result like

PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|   Kalpesh|    100.00|   1|
|  2|   Kalpesh|    500.00|   1|
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|    Ramesh|    500.00|   1|
|  2|    Ramesh|    500.00|   1|
|  3|    Ramesh|    500.00|   1|
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|    Naresh|    500.00|   1|
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|    Ganesh|    500.00|   1|
|  2|    Ganesh|    500.00|   1|
|  3|    Ganesh|    500.00|   1|
|  4|    Ganesh|    500.00|   1|
================================
Total for Ganesh 5 2000
================================

My desired output is like

PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|   Kalpesh|    100.00|   1|
|  2|   Kalpesh|    500.00|   1|
================================
Total for Kalpesh 2 600
================================
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|    Ramesh|    500.00|   1|
|  2|    Ramesh|    500.00|   1|
|  3|    Ramesh|    500.00|   1|
================================
Total for Ramesh 3 1500
================================
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|    Naresh|    500.00|   1|
================================
Total for Naresh 1 500
================================
PERSONS HAVING OUTSTANDING ADVANCE SALARY
+==============================+
|Sr.| name |Amount Rs.|Nos |
+==============================+
|  1|    Ganesh|    500.00|   1|
|  2|    Ganesh|    500.00|   1|
|  3|    Ganesh|    500.00|   1|
|  4|    Ganesh|    500.00|   1|
================================
Total for Ganesh 5 2000
================================
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/568639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343138/" ] }
568,666
From my understanding, $1 is the first field. But strangely enough, awk '$1=$1' omits the extra spaces.

$ echo "$string"
foo  foo   bar  bar
$ echo "$string" | awk '$1=$1'
foo foo bar bar

Why is this happening?
When we assign a value to a field variable, i.e. a value is assigned to field $1, awk actually rebuilds its $0 by concatenating the fields with the default field delimiter (or OFS), a space. We can see the same effect in the following scenarios as well:

echo -e "foo foo\tbar\t\tbar" | awk '$1=$1'
foo foo bar bar
echo -e "foo foo\tbar\t\tbar" | awk -v OFS=',' '$1=$1'
foo,foo,bar,bar
echo -e "foo foo\tbar\t\tbar" | awk '$3=1'
foo foo 1 bar

For GNU AWK this behavior is documented here: https://www.gnu.org/software/gawk/manual/html_node/Changing-Fields.html

$1 = $1   # force record to be reconstituted
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/568666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245871/" ] }
568,671
I have a question regarding changing the home folder for a user on the system. I was thinking I could do something like:

new_folder_name="$2"
user_name="$3"
mkdir /home/$new_folder_name
usermod -d -m /home/$new_folder_name/$user_name

This unfortunately did not work and now I feel kinda lost. Anyone have some advice on how to do this? I used

mkdir /home/$2
chown $3:$3 /home/$2
chmod 700 /home/$2
usermod --home /home/$2 $3

instead, which works, but it prints chown: invalid group: username:username afterwards; why is that?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/568671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395310/" ] }
568,767
I want to find where a word appears in a text file — as in the number of words into the text that a word occurs — for all instances of that word, but I'm not sure even where to start. I imagine I'll need a loop, and some combination of grep and wc. As an example, here is an article about the iPhone 11:

On Tuesday, in a sign that Apple is paying attention to consumers who aren’t racing to buy more expensive phones, the company said the iPhone 11, its entry-level phone, would start at $700, compared with $750 for the comparable model last year. Apple kept the starting prices of its more advanced models, the iPhone 11 Pro and iPhone 11 Pro Max, at $1,000 and $1,100. The company unveiled the new phones at a 90-minute press event at its Silicon Valley campus.

There are 81 words in the text.

jaireaux@macbook:~$ wc -w temp.txt
      81 temp.txt

The word 'iPhone' appears three times.

jaireaux@macbook:~$ grep -o -i iphone temp.txt | wc -w
       3

The output I want would be like this:

jaireaux@macbook:~$ whereword iPhone temp.txt
24
54
57

What would I do to get that output?
Here's one way, using GNU tools:

$ tr ' ' '\n' < file | tr -d '[:punct:]' | grep . | grep -nFx iPhone
25:iPhone
54:iPhone
58:iPhone

The first tr replaces all spaces with newlines, and then the second deletes all punctuation (so that iPhone, can be found as a word). The grep . ensures that we skip any blank lines (we don't want to count those) and the grep -n appends the line number to the output. Then, the -F tells grep not to treat its input as a regular expression, and the -x that it should only find matches that span the entire line (so that job will not count as a match for jobs). Note that the numbers you gave in your question were off by one. If you only want the numbers, you could add another step:

$ tr ' ' '\n' < file | tr -d '[:punct:]' | grep . | grep -nFx iPhone | cut -d: -f1
25
54
58

As has been pointed out in the comments, this will still have problems with "words" such as aren't or double-barreled. You can improve on that using:

tr '[[:space:][:punct:]]' '\n' < file | grep . | grep -nFx iPhone
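Wrapped up as the whereword command from the question (a small sketch using the improved pipeline; the word comes first, the file second):

whereword() {
    tr '[[:space:][:punct:]]' '\n' < "$2" | grep . | grep -nFx "$1" | cut -d: -f1
}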
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234187/" ] }
568,770
$ sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
...
 600   2020-01-25 21:01:11
 601   2020-02-03 13:44:19
 602   2020-02-09 14:06:20
 603   2020-02-09 14:11:11
 604   2020-02-11 00:02:43
 605   2020-02-13 12:26:22
 606   2020-02-16 16:40:02   (current)

How could I get the commit / channel generation (is this a thing?) for a NixOS generation, and potentially roll back the channel to it? Or in other words, how can I roll back my 'channel state' to what it was at generation 605? Why I want to know is because I did a sudo nix-channel --update and sudo nixos-rebuild switch a few days ago; however, whatever package updates took place resulted in an unstable system. So to mitigate this, I booted and continue to use an old generation (605). I now want to update a specific package in my NixOS system configuration, and base the changes on 605 as opposed to the latest 606. I did find https://stackoverflow.com/questions/39090387/how-to-undo-nix-channel-update (nix-channel --rollback..), however I may have updated the channels a few times, so the 'last' channel state might not be what I need. I did notice you can specify a channel generation number as a parameter to this command, but I'm not sure what the relationship between that and the NixOS generation is. It does not seem to be the same thing, as I tried for 605 with the following results:

sudo nix-channel --rollback 605
error: generation 605 does not exist
error: program '/nix/store/cs47wjxwiqgyl1nkjnksyf3s2rb93piq-nix-2.3.2/bin/nix-env' failed with exit code 1
I assume you want sudo nix-channel --rollback ? For example, you may also manually inspect /nix/var/nix/profiles/per-user/root/channels-*/manifest.nix – those contain name, commit hash, etc.
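For the record, a minimal sketch of the rollback itself — note that channel generations are separate from your NixOS system generations, so the number you pass will generally not be 605 (the 42 below is purely illustrative):
sudo nix-channel --rollback        # go back one channel generation
sudo nix-channel --rollback 42     # or jump to a specific channel generation
ls /nix/var/nix/profiles/per-user/root/   # the channels-* links show which generations exist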
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
568,793
How can I increase the size of these System Tray Icons? : Update: In Debian 11, KDE Plasma 5.20.5, there is now an option to scale system-tray icons to the panel's height. Here's is a short video showing that. Thanks goes to KDE!
Open ~/.config/plasma-org.kde.plasma.desktop-appletsrc with a text editor. Find every line starting with extraItems= Add another line iconSize=3 below each of those. Save the edited file and exit. After a reboot the icons should have a much higher limit of size when adjusting the height of the panel.
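If there are many extraItems= lines, a hedged sed sketch can do the editing for you (GNU sed's a command assumed; back the file up first, since Plasma rewrites it):
cp ~/.config/plasma-org.kde.plasma.desktop-appletsrc{,.bak}
sed -i '/^extraItems=/a iconSize=3' ~/.config/plasma-org.kde.plasma.desktop-appletsrc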
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40149/" ] }
568,794
During my travel abroad, I will connect to the SSH server in my home country. I can find my connection times by: last userlast | grep user but these two commands give me a connection time based on my country time, not local time. How can I get connection time based on local time?
last reads the login records from wtmp and formats the timestamps in the server's configured time zone, but that formatting honours the TZ environment variable, so you can ask for any zone on a per-command basis without touching the system clock: TZ="Europe/Paris" last user (substitute the zone you are currently in; valid names live under /usr/share/zoneinfo, and timedatectl list-timezones lists them on systemd systems). If you always want that zone while travelling, export TZ=Europe/Paris in your shell startup file on the server, and every time-displaying command in that session — last included — will use it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395300/" ] }
568,899
Say that I have the following files in the current directory : aa01.txtaa02.txtbb01.txtbb02.txtcc01.txt... Is there a way I can search for a given pattern and select ALL the matching files at once (not just selecting the first matching file and then selecting the next one, then the next one...), so that I can process them further (e.g. delete, move, copy... them as a group)? For example, say I'd like to select all the files above containing the string "aa" (maybe to delete them), or maybe all the file containing "02" (maybe to copy them)...
I usually do this by setting a filter first (with zf + expression), then selecting everything in the result ( v ) and turning the filter off again ( zf + Enter).
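A concrete run of that idea, assuming you want to act on every file whose name contains aa: press zf, type aa and hit Enter (zf is bound to ranger's :filter console command, so :filter aa works too), press v to select everything the filter shows, press zf followed by Enter again to clear the filter, and then operate on the selection — for example :delete to remove the files, or dd followed by pp in another directory to move them.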
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/249286/" ] }
568,907
Every once in a while I discover my zsh history has been truncated (or maybe lost entirely, difficult to tell), and I have to restore it from backup. For example, today: ls -lh ~/.zsh_history-rw------- 1 stripe staff 32K 21 Feb 10:20 /Users/stripe/.zsh_history But in my backup from a few days ago: -rw-------@ 1 stripe staff 203K 17 Feb 22:36 /Volumes/Time Machine Backups/.../Users/stripe/.zsh_history I have configured zsh to save lots of history so it shouldn't be a matter of the shell intentionally trimming the file. unsetopt share_historysetopt inc_append_historysetopt hist_ignore_all_dupsHISTSIZE=500000SAVEHIST=$HISTSIZE Has anyone else experienced this and found a way to mitigate it? Is there some zsh spring cleaning feature I'm unaware of?
ZSH history files can be truncated/lost/cleaned for multiple reasons, among them: Corruption of the zsh history file (because of a power cut or system failure while a shell is open; for this case fsck needs to be set up to run after the system fails) The zsh config file is not loaded (for example if the $HOME env variable is not defined) An unsupported character in the history file can make zsh reset the history Cleaning tools like bleachbit Zsh misconfiguration Sharing the $HISTFILE among shells with a more restrictive $HISTSIZE (even implicitly: for example, if you run a bash shell under zsh without a $HISTFILE in your bashrc, the subshell will use the inherited variable from zsh and will apply the $HISTSIZE defined in bashrc) etc. Notes History available setup options
HISTFILE="$HOME/.zsh_history"
HISTSIZE=500000
SAVEHIST=500000
setopt BANG_HIST # Treat the '!' character specially during expansion.
setopt EXTENDED_HISTORY # Write the history file in the ":start:elapsed;command" format.
setopt INC_APPEND_HISTORY # Write to the history file immediately, not when the shell exits.
setopt SHARE_HISTORY # Share history between all sessions.
setopt HIST_EXPIRE_DUPS_FIRST # Expire duplicate entries first when trimming history.
setopt HIST_IGNORE_DUPS # Don't record an entry that was just recorded again.
setopt HIST_IGNORE_ALL_DUPS # Delete old recorded entry if new entry is a duplicate.
setopt HIST_FIND_NO_DUPS # Do not display a line previously found.
setopt HIST_IGNORE_SPACE # Don't record an entry starting with a space.
setopt HIST_SAVE_NO_DUPS # Don't write duplicate entries in the history file.
setopt HIST_REDUCE_BLANKS # Remove superfluous blanks before recording entry.
setopt HIST_VERIFY # Don't execute immediately upon history expansion.
setopt HIST_BEEP # Beep when accessing nonexistent history.
Answer The following configuration is recommended for this situation (to be set up in the ~/.zshrc file)
HISTFILE=/specify/a/fixed/and/different/location/.history
HISTSIZE=500000
SAVEHIST=500000
setopt appendhistory
setopt INC_APPEND_HISTORY
setopt SHARE_HISTORY
Alternative You can use a little script that checks the history file size and restores it from backup when necessary (in ~/.zshrc):
if [ "$(wc -c < "$HISTFILE")" -lt 64000 ]; then
    echo "History file is smaller than 64 kbytes, restoring backup..."
    cp -f /mybackup/histfile "$HISTFILE"
fi
Links Additional info is available in this and this question.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/568907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/396604/" ] }
569,097
I have a gazillion files which need to be reduced in size. I found that most (not all) files have an end section which can be cut without losing information: Data 1Data 2something_unimportant_here END DATARubbish 1Rubbish 2 How can I edit a file (end hence, all) by deleting the line including "END DATA" and all following, in-place, changing only those the files that contain the pattern, thereby minimising write access to the disk (many, many files and slow disk). If possible, I would like to add a new last line to the file (my own end tag) so that the file's syntax stays correct -- again, only in those files containing the pattern. I was thinking of using ed , like echo ',s/END DATA/ ???? '\\n'q'\\n'wq' | ed "$file" but cannot seem to manage the ???? part correcty. Expected output: Data 1Data 2NEW END
It sounds like the sequence of commands you're looking for is /END DATA/,$dq.aNEW END.wq or as a one-liner printf '%s\n' '/END DATA/,$d' 'q' '.a' 'NEW END' '.' 'wq' (You can replace wq with ,p for testing.) Ex. given $ cat fileData 1Data 2something_unimportant_here END DATARubbish 1Rubbish 2 then $ printf '%s\n' '/END DATA/,$d' 'q' '.a' 'NEW END' '.' 'wq' | ed -s file gives $ cat fileData 1Data 2NEW END
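To run this across many files while only modifying the ones that contain the marker — the q in the script already quits cleanly when the address fails to match, but the grep guard below avoids invoking ed at all on clean files, which keeps write access to the slow disk minimal. The *.txt glob is just an assumption; adjust it to however your files are named:
for f in ./*.txt; do
    grep -q 'END DATA' "$f" || continue
    printf '%s\n' '/END DATA/,$d' 'q' '.a' 'NEW END' '.' 'wq' | ed -s "$f"
done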
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115138/" ] }
569,153
I'm trying to join lines with POSIX sed. With GNU sed (without --posix) this works as intended: $ sed ':a; N; s/\n//; b a' <<< $'a\nb\nc'abc But if I use --posix I get no output. Why is this and how can I how can I do it otherwise?
That's a job for paste : printf '%s\n' a b c | paste -sd '\0' - (no, that's not joining with NULs, that's joining with no separator as required by POSIX. Some paste implementations also support paste -sd '' - but that's neither standard nor portable). Note that except in the busybox implementation, it produces one empty line as output if the input is empty (a historical bug/misfeature unfortunately now engraved in the POSIX specification). With POSIX sed : sed -e :a -e '$!{N;ba' -e '}' -e 's/\n//g' Or: sed ':a$!{ N ba}s/\n//g' The b , : , and } commands cannot be followed by another command. In earlier versions of the POSIX specification, b a;s/a/b/ would require b to branch to the label called a;s/a/b/ , in newer versions of the specification, it's now unspecified, to allow the GNU sed behaviour. The following command has to be in a subsequent expression or on a separate line. Also POSIX requires N on the last line to exit without printing the pattern space. GNU sed only does it in POSIX mode, like when there's a POSIXLY_CORRECT variable in the environment or with your --posix option which explains why you get no output with --posix . Also note that the minimum size of the pattern space guaranteed by POSIX is 8192 bytes. You can only use that approach portably for very small files. paste has no size limitation, and contrary to the sed approach doesn't need to load the whole file in memory before printing it. Another approach is tr -d '\n' . However note that contrary to paste / sed , it produces a non-delimited line on output (outputs abc instead of abc\n on the example above). In any case, <<< is a zsh operator (now supported by a few other shells), and $'...' is a ksh93 operator (now supported by most other POSIX-like shells), neither are POSIX sh operators (though the latter is likely to be added to the next major revision of the standard), so should not be used in POSIX sh scripts.
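An awk alternative, for what it's worth — like paste it streams the input instead of buffering it in the pattern space, and it prints a properly newline-terminated result:
awk '{ printf "%s", $0 } END { print "" }' file
Note that, like the paste version, it emits one empty line when the input is empty.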
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370904/" ] }
569,170
I'm trying to modify the You have new mail message that is shown below MOTD when you login via SSH.My goal is to make that message more visible (bright color would be great) so it gets my attention when I log in.
In Bash, custom messages can be set with MAILPATH. The man page has this example: MAILPATH='/var/mail/bfox?"You have mail":~/shell-mail?"$_ has mail!"' Trying it: $ export MAILPATH="$MAIL?\"Santa was here.\""$$$$ "Santa was here." Oh, uh, okay. Must have misread the man page there. bright color would be great So we have to smuggle us some color escape codes into the message... $ esc=$'\e'$ export MAILPATH="$MAIL?$esc[1;37;44mREAD YOUR MAIL RIGHT NOW$esc[0m"$ echo $MAILPATH/var/spool/mail/frostschutz?READ YOUR MAIL RIGHT NOW$$READ YOUR MAIL RIGHT NOW I don't know how to color things here, just imagine it screaming in bright white blue. Color choices are subject to taste and local terminal color scheme settings. Also check that MAILPATH was not already in use and MAIL actually has the correct path to use for MAILPATH.
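Putting it together, a sketch for ~/.bashrc — the colour (bold yellow), the message text and the /var/mail fallback are just suggestions to adapt:
esc=$'\e'
export MAILPATH="${MAIL:-/var/mail/$USER}?${esc}[1;33mYou have new mail!${esc}[0m"
unset esc   # MAILPATH already contains the literal escape bytes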
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393356/" ] }
569,199
I want to publish an app that should be registered as a systemd service. Alongside the app, I want to provide a script to register the app as a systemd service. So currently, I'm publishing the app with a myapp.service template file, and a script that copies the myapp.service and updates its values. But I don't like this solution. Can I create a .service file from the command-line? Something like systemd new -name myapp -description "my description" -after network.target -exec path/to/exec ?
There are two basic approaches. Compile-time configuration This is the approach taken by many software packagers. One uses macros. At package generation time, some form of macro preprocessor is run over a macro-ized service unit file, with parameters that represent the configuration. The output, then put into the package, is a tailored service unit file. An example is in systemd itself. The kmod-static-nodes.service.in file contains macros: ExecStart=@KMOD@ static-nodes --format=tmpfiles --output=/run/tmpfiles.d/static-nodes.conf The large collection of Python scripts that build systemd (named "Meson") contain amongst many other things a regular-expression-based macro preprocessor. This preprocessor is run on kmod-static-nodes.service.in , replacing the KMOD macro to produce the kmod-static-nodes.service that goes into the package. The replacement is a string that is the path to the kmod executable program image file, searched for at compile time by the Python scripts. (I'm not going to go into details of the large Python script system, as that's well beyond the scope of this answer.) The disadvantage of this scheme is that one has to create the package on a machine that is laid out the same way as, and contains the same programs as, the target machine on which the package is installed. Every operating system's packages could be different (even from version to version). Install-time configuration drop-ins Another approach would be to have a static service unit file that is configured at install time, by generating drop-in .conf files that adjust the service unit to the target system. They are not ideal, as of course they are exported to the service process, but environment variables can be used for this. For example: A static service unit file /usr/lib/systemd/system/wibble.service could say ExecStart=/usr/bin/wibble $OPTIONS The package maintenance script for the install action then creates a /usr/lib/systemd/system/wibble.service.d/20-options.conf file on the fly at install time containing the tailored options calculated on the current machine at install time rather than on a different machine at package creation time (note that the drop-in must repeat the section header, and that an assignment containing spaces must be double-quoted): [Service] Environment="OPTIONS=wobble --jelly -o plate" The disadvantage of this scheme is that the package maintenance script for the deinstall action has to remember to remove this file if the package is being completely purged, and one has to remember to hook this into the configure action so that system administrators can explicitly force regeneration of the drop-in file if they reconfigure or rearrange stuff. preprocessing A variation on the aforementioned is to go back to macro preprocessing. Actually ship the macro-ized service unit file, and preprocess it at install time. If one is not making a package at all, but just shipping everything in a ustar archive for system administrators to install by hand, this is more workable than generated drop-ins. The disadvantage of this is that macro preprocessing systems like the one in Meson are tightly coupled to the Python script collection and not available as standalone tools. For standalone preprocessing tools one has the likes of m4 , cpp , and others, none of which are a good fit for pre-processing .INI files and making the sorts of decisions (e.g. "Detect what options the wibble program takes on this machine." "Where is kmod on this machine?") that are usually involved. There is clearly a niche for a tool that no-one has spotted, here — that I have heard of, at any rate. 
One can sort-of make do with command -v to do the path lookup and ex to do the replacements, in a shell script; but it's non-trivial to (and most people do not) harden that against things like spaces and metacharacters in pathnames, or allow for an escaping mechanism to prevent macro expansion where it is not desired. Perl has somewhat safer string processing than shell script, but not every operating system has Perl out of the box (c.f. " Is there a unix-like system that doesn't come with Perl? "), necessitating that system administrators be told to ensure that Perl is installed first. Python obviously can do this, given systemd's large Python script collection build system. But then there's the problem of system adminstrators going and changing the version of Python on you, and pulling the rug out from under you (c.f. " apt-get update no longer works after rebuilding corrupted PATH variable " and all of the Ask Ubuntu questions hyperlinked-to there). Other possibilities include TCL. On the gripping hand … Sometimes such configurability is unnecessary in the first place. The @KMOD@ macro in systemd is not actually necessary. This (nowadays) would work: ExecStart=kmod static-nodes --format=tmpfiles --output=/run/tmpfiles.d/static-nodes.conf Turning a database table (of some sort) into service units? Ship a generator. Specifying a default enable/disable state? Ship presets.
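For the install-time drop-in approach, a hedged sketch of what a package's post-install step might run — the unit name, the option string and the paths are all illustrative, carried over from the wibble example above:
mkdir -p /usr/lib/systemd/system/wibble.service.d
printf '[Service]\nEnvironment="OPTIONS=%s"\n' "wobble --jelly -o plate" \
    > /usr/lib/systemd/system/wibble.service.d/20-options.conf
systemctl daemon-reload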
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332026/" ] }
569,265
I need to disable the touchpad of my laptop. I am using Gnome on Wayland . libinput should provide this functionality, but apparently it doesn't. xinput doesn't work because I'm on Wayland. I think Gnome offered some switch to do so in the input settings, but it isn't there anymore (Gnome shell 3.34 — maybe this is an Xorg exclusive feature?). Is it really too much to ask to be able to disable an input device? Edit: xinput is NOT a solution! Its man page has a whole section on Wayland (emphasis mine): XWAYLAND Xwayland is an X server that uses a Wayland Compositor as backend. Xwayland acts as translation layer between the X protocol and the Wayland protocol but does not have direct access to the hardware. The X Input Extension devices created by Xwayland ("xwayland-pointer", "xwayland-keyboard", etc.) map to the Wayland protocol devices, not to physical devices. These X Input Extension devices are only visible to other X clients connected to the same Xwayland process. Changing properties on Xwayland devices only affects the behavior of those clients . For example, disabling an Xwayland device with xinput does not disable the device in Wayland-native applications . Other changes may not have any effect at all. In most instances, using xinput with an Xwayland device is indicative of a bug in a shell script and xinput will print a warning. Use the Wayland Compositor's native device configuration methods instead. TL;DR : If I disable the touchpad using xinput , it will still continue working as before, but XWayland applications won't see the cursor move anymore.
First of all, try whether this dconf setting is of any use: gsettings set org.gnome.desktop.peripherals.touchpad disable-while-typing 'false' (mind the inverted logic: 'false' keeps the touchpad active while typing, so set it to 'true' if you only want it suppressed while you type). The setting that actually turns the device off — it is what GNOME's own touchpad switch toggles — is: gsettings set org.gnome.desktop.peripherals.touchpad send-events 'disabled' Your notebook may also be able to disable the touchpad with Fn + F5 ; you may try that. If that doesn't work, please add the output of libinput list-devices (or libinput-list-devices on older versions) to your question. You may also want to have a look at the Touchpad Indicator GNOME Shell extension.
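If you end up toggling this often, a small sketch you could bind to a custom keyboard shortcut (setting name as above; everything else is plain POSIX sh):
#!/bin/sh
key=org.gnome.desktop.peripherals.touchpad
case $(gsettings get $key send-events) in
    *enabled*) gsettings set $key send-events disabled ;;   # it was on: turn it off
    *)         gsettings set $key send-events enabled  ;;   # it was off: turn it back on
esac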
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569265", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233767/" ] }
569,270
I am writing a shell script to download the latest version of some software. After I parse the output of curl I go through several steps to find the exact version string (at the moment of writing this is 0.65.3 ): (In all code samples below, > is my prompt. Output from Bash 3.2 or Zsh is on a line without a > prefix.) > url="https://github.com/gohugoio/hugo/releases/latest"> latest=$(curl --silent --head "$url" | grep Location)> tag=$(echo "$latest" | cut -d'/' -f8)> version=$(echo "${tag//v}")> echo "hugo_${version}_Linux-64bit.tar.gz"_Linux-64bit.tar.gz The output I expected was hugo_0.65.3_Linux-64bit.tar.gz , but in the output of the call to echo with the quoted string it seems the bytes following ${version} have been used to overwrite the bytes at the beginning of the quoted string. Here I use two different quoted strings to clarify what is happening: > echo "hugo_${version}test"test_0.65.3> echo "hugo_${version}lorem ipsum dolor sit amet"lorem ipsum dolor sit amet I get the same unexpected result if I do this: > version=$(echo "${tag:1}")> echo "hugo_${version}_Linux-64bit.tar.gz"_Linux-64bit.tar.gz But, I get the expected result if I do this: > version=0.65.3hugo_0.65.3_Linux-64bit.tar.gz This last result is what is required, but of course it makes my script static rather than dynamic, and thus not very useful to me. How can I get the desired result without hard-coding the value of $version in my script?
The line returned by curl ends with carriage-return line-feed. (MS-dos line ending). The line-feed is removed by the Unix tools, however this leaves a carriage-return at the end. Fix this line to use dos2unix (and quote your argument to echo , avoiding the bugs described in BashPitfalls #14 ): version="$(echo "${tag//v}" | dos2unix)" ...or, using the shell's built-in syntax to enact both changes at once: version=${tag//[$'v\r']/} dos2unix does make some other changes (like adding a trailing newline after the last line of text, which UNIX requires but DOS does not), but none of them matter for a single-line string like this.
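As a general debugging aid for this class of problem: hidden carriage returns show up if you dump the variable byte by byte, e.g. printf '%s' "$tag" | od -c — a trailing \r in that output (something along the lines of v 0 . 6 5 . 3 \r) is the giveaway that a DOS line ending survived the pipeline.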
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/366680/" ] }
569,271
I'm basically trying to put two scripts I found online together. The first, in two parts, checks if a directory exists, if not creates it then goes on to take a picture through the webcam if an incorrect login password is entered. As far as I can tell, this part is working properly. #!/bin/bash## Variablesdir=/tmp/gotchagot=/tmp/gotcha/[email protected][email protected]=smtp.googlemail.com:[email protected]=xxxxxSUBJECT="Someone tried to access your computer while you were away."if test [ -d "$dir" ]then echo "Found directory /tmp/gotcha/" ]else mkdir /tmp/gotcha/fits=`date +%s`ffmpeg -f video4linux2 -s vga -i /dev/video0 -vframes 1 $gotexit 0 ## NOTE - must exit with status 0 The second is SUPPOSED to email me the photo taken, then delete the photo from the storage directory, but as far as I can tell it's not finding the photo even though i can find the file in question through both the command line and the GUI. ## sendEmail script, found in /home/josh/scripts/emailscriptif test -f [ "$got" ]then sendemail -f $SMTPFROM -t $SMTPTO -u $SUBJECT -a $got -s $SMTPSERVER -$fi## Remove gotcha.jpgif test -f [ "$got" ]then rm $got When I run the script in the terminal, this is the output: /usr/local/bin/gotcha: line 13: test: too many argumentsmkdir: cannot create directory ‘/tmp/gotcha/’: File existsffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7 (Ubuntu 7.3.0-16ubuntu3) configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared WARNING: library configuration mismatch avcodec configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 
--enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared --enable-version3 --disable-doc --disable-programs --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libtesseract --enable-libvo_amrwbenc libavutil 55. 78.100 / 55. 78.100 libavcodec 57.107.100 / 57.107.100 libavformat 57. 83.100 / 57. 83.100 libavdevice 57. 10.100 / 57. 10.100 libavfilter 6.107.100 / 6.107.100 libavresample 3. 7. 0 / 3. 7. 0 libswscale 4. 8.100 / 4. 8.100 libswresample 2. 9.100 / 2. 9.100 libpostproc 54. 7.100 / 54. 7.100Input #0, video4linux2,v4l2, from '/dev/video0': Duration: N/A, start: 10364.818498, bitrate: 147456 kb/s Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbcStream mapping: Stream #0:0 -> #0:0 (rawvideo (native) -> mjpeg (native))Press [q] to stop, [?] for help[swscaler @ 0x558898563320] deprecated pixel format used, make sure you did set range correctlyOutput #0, image2, to '/tmp/gotcha/gotcha.jpg': Metadata: encoder : Lavf57.83.100 Stream #0:0: Video: mjpeg, yuvj422p(pc), 640x480, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc Metadata: encoder : Lavc57.107.100 mjpeg Side data: cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1frame= 1 fps=0.0 q=5.7 Lsize=N/A time=00:00:00.03 bitrate=N/A speed=3.32x video:24kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown Like I said, I'm new to bash scripting, so I'm probably overlooking something silly, but any help is appreciated!
The error messages point at the problem: test and [ are two names for the same command, and you are using both at once. The line if test [ -d "$dir" ] passes four arguments ( [ , -d , "$dir" and ] ) to test, which is exactly why you get test: too many arguments — the check never succeeds, so the script falls through to mkdir (hence "File exists" on later runs). Write one or the other: if [ -d "$dir" ]; then or if test -d "$dir"; then The same mistake appears in the mail script: if test -f [ "$got" ] should be if [ -f "$got" ]; then There is also a stray ] at the end of your echo "Found directory..." line, and mkdir /tmp/gotcha/ is better written mkdir -p "$dir" so it never complains when the directory already exists. With those fixed, the existence tests work and the mail/cleanup part actually sees /tmp/gotcha/gotcha.jpg: if [ -f "$got" ]; then sendemail -f "$SMTPFROM" -t "$SMTPTO" -u "$SUBJECT" -a "$got" -s "$SMTPSERVER"; rm -f "$got"; fi (quote your variables, and remember each if needs a closing fi). One more thing to check: if the sendEmail part lives in a separate script file, it will not inherit $got and the SMTP variables from the first script unless you define them there too, or export them and run the second script from the first.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/396916/" ] }
569,312
I have a problem with missing dependencies when I try to install some software via apt-get and also from downloaded .deb files. I am using Debian 10. After installation I removed a lot of software which I didn't need, e.g. Deluge, some terminals (I have only LXTerminal now), some tools from the LibreOffice package etc. Now, when I try to install Atom, TeamViewer, or some RDP tools, every time I get information about missing unresolved dependencies; commonly it is libgconf , libcurl etc. How can I resolve this issue? Now I can not install a lot of must-have software, and when I try to apt-get install these packages, I get an error that no packages with these names were found. Here is my sources.list. Maybe it will give some tip to fix this problem: deb http://security.debian.org/debian-security/ buster/updates non-free contrib main deb-src deb-src http://security.debian.org/debian-security/ buster/updates non-free contrib main deb-src deb http://deb.debian.org/debian/ buster-backports main contrib non-free deb-src http://deb.debian.org/debian/ buster-backports non-free contrib main deb http://ftp.de.debian.org/debian stretch main contrib non-freedeb-src http://ftp.de.debian.org/debian stretch main contrib non-free Stretch repos were added for test purposes, but when I comment them out, it looks the same.
You need to add the main Debian 10 repository to your sources.list : deb http://deb.debian.org/debian/ buster main contrib non-free deb-src http://deb.debian.org/debian/ buster non-free contrib main You should also remove the Debian 9 lines (Stretch).
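After saving sources.list, refresh the package index and let apt repair anything that was left half-installed:
sudo apt update
sudo apt --fix-broken install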
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395568/" ] }
569,322
Using the command line, how do I find all the files recursively beginning at a specific directory where all those files fall within a size range? Additionally list the results sorted by size.
You can use find /PATH/TO/specific_directory -size +MIN -size -MAX For precise info about what MIN and MAX could be, check man find -size n[cwbkMG] File uses n units of space, rounding up. The following suffixes can be used: `b' for 512-byte blocks (this is the default if no suffix is used) `c' for bytes `w' for two-byte words `k' for kibibytes (KiB, units of 1024 bytes) `M' for mebibytes (MiB, units of 1024 * 1024 = 1048576 bytes) `G' for gibibytes (GiB, units of 1024 * 1024 * 1024 = 1073741824 bytes) The size is simply the st_size member of the struct stat populated by the lstat (or stat) system call, rounded up as shown above. In other words, it's consistent with the result you get for ls -l. Bear in mind that the `%k' and `%b' format specifiers of -printf handle sparse files differently. The `b' suffix always denotes 512-byte blocks and never 1024-byte blocks, which is different to the behaviour of -ls. The + and - prefixes signify greater than and less than, as usual; i.e., an exact size of n units does not match. Bear in mind that the size is rounded up to the next unit. Therefore -size -1M is not equivalent to -size -1048576c. The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes. Update to meet your new requirements : find /PATH/TO/specific_directory -size +MIN -size -MAX -print0 | du --human-readable --files0-from=- | sort --human-numeric-sort or, in its short form: find /PATH/TO/specific_directory -size +MIN -size -MAX -print0 | du -h --files0-from=- | sort -h Update to meet your new requirements (2) : find /PATH/TO/specific_directory -size +MIN -size -MAX -print0 | du --human-readable --bytes --files0-from=- | sort
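A concrete run with the placeholders filled in — the path, bounds and units here are illustrative, and -type f (optional) restricts the match to regular files:
find /var/log -type f -size +1M -size -100M -print0 | du -h --files0-from=- | sort -h
This lists regular files between roughly 1 MiB and 100 MiB under /var/log, smallest first.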
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394724/" ] }
569,389
I have a file named 35554842200284685106000166550020003504201637715423.xml and I simply need to rename it to 42200284685106000166550020003504201637715423.xml (remove everything before the last 48 characters). This simple regular expression ( .{48}$ ) can extract the last 48 characters, but I can't make it work using rename in Bash. How can I use rename and this regular expression to rename it to only the last 48 characters? Edit: Output of rename --help [root@ip-000-00-0-000 tmp]# rename --helpUsage: rename [options] <expression> <replacement> <file>...Rename files.Options: -v, --verbose explain what is being done -s, --symlink act on the target of symlinks -n, --no-act do not make any changes -h, --help display this help and exit -V, --version output version information and exitFor more details see rename(1). Thank you.
You don't actually need rename here, you can work around it: $ file=35554842200284685106000166550020003504201637715423.xml$ newname=$(sed -E 's/.*(.{48})/\1/'<<<"$file"); $ mv -v "$file" "$newname"renamed '35554842200284685106000166550020003504201637715423.xml' -> '42200284685106000166550020003504201637715423.xml'
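Since the length is fixed, bash's own substring expansion can replace the sed call entirely — note the space before -48, which keeps it from being parsed as a default-value expansion:
mv -v -- "$file" "${file: -48}"
or, for a whole directory of such files (the length guard skips names that are already 48 characters or shorter, for which the expansion would be empty):
for f in *.xml; do [ "${#f}" -gt 48 ] && mv -v -- "$f" "${f: -48}"; done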
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397029/" ] }
569,435
I have a large text file which contains data, formatted like this: 12345678910 I am trying to convert it to this: 1 2 34 5 67 8 910 I tried awk : '{ if (NR%2) {printf "%40s\n", $0} else {printf "%80s\n", $0} }' file.txt
A solution with paste seq 10 | paste - - -1 2 34 5 67 8 910 paste is a Unix standard tool, and the standard guarantees that this works for at least 12 columns.
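If you don't mind single-space separation instead of paste's tabs, two rough alternatives:
seq 10 | xargs -n3
seq 10 | awk '{ printf "%s%s", $0, (NR % 3 ? OFS : ORS) } END { if (NR % 3) print "" }'
xargs groups three arguments per output line; the awk version does the same by hand, and its END clause terminates a final partial row.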
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/569435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343138/" ] }
569,445
I have written some scripts and stored them in my ~/bin folder. I'm already able to run them just by calling their title during a shell session. However, they aren't running interactively (I mean, my ~/.bashrc aliases are not loaded). Is there a way to mark them to run interactively by default? Or must I source ~/.bashrc inside of them to use any aliases defined there? Other alternatives are welcome!
If you add the -i option to your hashbang(s) it will specify that the script runs in interactive mode. #!/bin/bash -i Alternatively you could call the scripts with that option: bash -i /path/to/script.sh
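If making the whole script interactive feels heavy-handed (it also enables job control and sources your entire startup files), a lighter sketch is to opt in to alias expansion only — the file name here is an assumption, so point it at wherever your aliases are actually defined:
#!/bin/bash
shopt -s expand_aliases
source ~/.bash_aliases
myalias some args   # hypothetical alias defined in that file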
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364080/" ] }
569,471
I am trying to convert string variable to an array with following code. #/bin/shVERSION='1.2.3'echo $VERSIONIFS='.' read -a array <<< "$VERSION"echo ${#array[@]}echo ${array[@]} But getting following error when I run sh test.sh 1.2.3test.sh: 4: test.sh: Syntax error: redirection unexpected Version : ubuntu@jenkins-slave1:~$ $SHELL --versionGNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu)Copyright (C) 2016 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> OS : ubuntu@jenkins-slave1:~$ cat /etc/os-releaseNAME="Ubuntu"VERSION="18.04.1 LTS (Bionic Beaver)"ID=ubuntuID_LIKE=debianPRETTY_NAME="Ubuntu 18.04.1 LTS"
<<< is a zsh operator now supported by a few other shells (including bash ). read -a is bash-specific. ksh had read -A for that long before bash (consistent with set -A ; set -a being something else inherited from the Bourne shell; also supported by zsh and yash ) sh these days is an implementation or another of an interpreter for the POSIX sh language. That language has neither <<< nor read -a nor arrays (other than "$@" ). On Ubuntu, by default, the shell used as sh interpreter is dash which is very close to the POSIX sh specification in that it implements very few extensions over what POSIX specifies. It implements none of <<< , read -a nor arrays. In POSIX sh , to split a string into the one POSIX sh array ( "$@" ), you'd use: IFS=. # split on .set -o noglob # disable globset -- $VERSION"" # split+glob with glob disabled.echo "$# elements:"printf ' - "%s"\n' "$@" It has a few advantages over the bash-specific IFS=. read -a <<< $string syntax: it doesn't need creating a temporary file and storing the contents of $string in it. it works even if $string contains newline characters ( read -a <<< $string ) would only read the first line. it works even if $string contains backslash characters (with read , you need the -r option for backslash not to undergo a special processing). it's standard. it works for values like 1.2. . read -a <<< $string would split it into "1" and "2" only instead of "1" , "2" and "" . in older versions of bash , you had to quote the $string ( IFS=. read -ra <<< "$string" ) or otherwise it would undergo splitting (followed by joining with space).
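A self-contained sketch tying this together — saving and restoring IFS and the glob state matters if the script goes on to do other work afterwards:
#!/bin/sh
VERSION=1.2.3
old_IFS=$IFS
IFS=.
set -o noglob
set -- $VERSION""     # split+glob with glob disabled, as above
IFS=$old_IFS
set +o noglob
printf 'major=%s minor=%s patch=%s\n' "$1" "$2" "$3"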
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66640/" ] }
569,570
I am trying to echo the content of key and certificate files encoded with base64 so that I can then copy the output into other places. I found this thread: Redirecting the content of a file to the command echo? which shows how to echo the file content and also found ways to keep the newline characters for encoding. However when I add the | base64 this breaks the output into multiple lines, and trying to add a second echo just replaces the newlines with white spaces. $ echo "$(cat test.key)" | base64LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRZ0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQ1N3d2dna29BZ0VBQW9JQ0FRRFF4Tkh0aHZvcEp1Z0EKOHBsSUNUUU1pOGMwMzRERlR6Z1E5ME5tcE5zN2hRczNQZ0QwU2JuSFcyVGxqTS9oM1F1QVE0Q1dqaHRiV1ZUbgpSREcveGxWRFBESVVVMzB1UHJnK0N6dlhOUkhzQkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==$ echo $(echo "$(cat test.key)" | base64)LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRZ0lCQURBTkJna3Foa2lHOXcwQkFRRUZB QVNDQ1N3d2dna29BZ0VBQW9JQ0FRRFF4Tkh0aHZvcEp1Z0EKOHBsSUNUUU1pOGMwMzRERlR6Z1E5 ME5tcE5zN2hRczNQZ0QwU2JuSFcyVGxqTS9oM1F1QVE0Q1dqaHRiV1ZUbgpSREcveGxWRFBESVVV MzB1UHJnK0N6dlhOUkhzQkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg== The desired output would be: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRZ0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQ1N3d2dna29BZ0VBQW9JQ0FRRFF4Tkh0aHZvcEp1Z0EKOHBsSUNUUU1pOGMwMzRERlR6Z1E5ME5tcE5zN2hRczNQZ0QwU2JuSFcyVGxqTS9oM1F1QVE0Q1dqaHRiV1ZUbgpSREcveGxWRFBESVVVMzB1UHJnK0N6dlhOUkhzQkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg== How can I achieve this output?
Use the -w option (line wrapping) of base64 like this: ... | base64 -w 0 A value of 0 will disable line wrapping.
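On systems whose base64 lacks -w (macOS and some BSDs), two rough equivalents:
... | openssl base64 -A
... | base64 | tr -d '\n'
openssl's -A writes the encoding as a single line; the tr variant simply strips the wrapping afterwards.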
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/569570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397172/" ] }
569,595
I might want to run Zwift on a Linux distribution instead of Windows. Is this at all possible? Anyone got experience running it with WineHQ? Edit: the reason I'd want to run it on a PC is that I use the companion app on my phone and don't have a tablet. The PC is also preferable because the screen size is larger than on a tablet if I had one. Currently I am using Windows, so right now there is no problem. However, Microsoft has some marketing strategies in place where people are drawn to Windows 10, and they might start ripping people off by asking for a monthly/yearly charge to use Windows 10 and upward in the future. I'm considering changing from Windows to Linux, because I don't want to buy into their strategy. Unfortunately there is too much software that is not available on Linux. It'd just be nice if other companies embraced developing applications for all operating systems. That's why I'm asking around if anyone knows a good way to handle this with Zwift.
Zwift can now be run in Linux using the latest versions of Wine (5.0 and greater) and the workaround from user wentam42 detailed in comment #7 of this bug report . Here are the steps. You can also find a video documenting the process here . Install Wine 5.0+ following the instructions for your distribution Install winetricks script Run winetricks dotnet35sp1 win7 Download the RunFromProcess.exe utility from nirsoft here Download the Windows installation file for Zwift Run wine ZwiftSetup.exe and wait for the installation to complete (~1hr for me) At this point you will be greeted by a blank white window. Leaving this window open (or relaunching wine ZwiftLauncher.exe if you closed the window), run wine RunFromProcess.exe ZwiftLauncher.exe ZwiftApp.exe The Zwift splash screen should open followed by a login prompt. Proceed until you are prompted to connect sensors. Bluetooth compatibility in Wine is currently immature . However, I had no trouble using the Zwift Companion app on my phone to sync with sensors. The phone app then relays the information to the Zwift servers so that you can ride. After launching Zwift Companion, turn on relevant settings (e.g., for BLE sensors, location and bluetooth), pedal, and in Zwift click 'Search sensors'. Everything from this point forward should work as it does natively in Windows.
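If that works for you, a convenience launcher can bundle steps 7–8 — every path here is an assumption (default wine prefix, default install directory, RunFromProcess.exe copied into the Zwift folder), so adjust it to wherever the installer put things:
#!/bin/sh
cd "$HOME/.wine/drive_c/Program Files (x86)/Zwift" || exit 1
wine ZwiftLauncher.exe &
sleep 10   # crude: give the launcher window time to come up
wine RunFromProcess.exe ZwiftLauncher.exe ZwiftApp.exe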
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266362/" ] }
569,796
Disk/Partition Backup What are the backup options and good practice to make a solid and easy to use full system backup? With the following requirement: Live backup Image backup Encrypted backup Incremental backups Mount/access the backup disk/files easily Full system backup, restorable in one shot Can be scheduled automatically (with cron or else) Encrypted or classic backup source (luks, dm-crypt, ext3/ext4/btrfs).
Linux system backup When targeting a true full system backup, disk image backup (as asked) offers substantial advantages (detailed below) compared to file-based backup. With file-based backup the disk/partition structure is not saved; most of the time a full restore is a huge time consumer — many time-consuming steps (like a system reinstall) are required; and finally, backing up installed applications can be tricky. Disk image backup avoids all these cons, and the restore process is a one-shot step. Tools like clonezilla and fsarchiver are not suitable for this question because they are missing one or multiple requested features. As a reminder, luks encrypted partitions are not dependent on the file system used (ext3/ext4/etc.); keep in mind that performance is not the same depending on the chosen file system ( details ). Also note that btrfs ( video-1 , video-2 ) may be a very good option because of its snapshot feature and data structure — but this is just an additional protection layer, because btrfs snapshots are not true backups! (classic snapshots reside on the same partition). As a side note, in addition to disk image backup we may want to do a simple file sync backup for some particular locations; to achieve this, tools like rsync / grsync (or btrfs-send in the case of btrfs) can be used in combination with cron (if required) and an encrypted backup destination (like a luks partition/vault/truecrypt). File-based backup tools can be: rsync / grsync , rsnapshot , cronopete , dump / restore , timeshift , deja-dup , systemback , freefilesync , realtimesync , luckybackup , vembu . Annotations lsblk --fs output: sda is the main disk, sda1/sda2 are the encrypted partitions, crypt_sda1/crypt_sda2 the virtual (mapped) un-encrypted partitions
sda
├─sda1 crypto_LUKS f3df6579-UUID...
│ └─crypt_sda1 ext4 bc324232-UUID... /mount-location-1
└─sda2 crypto_LUKS c3423434-UUID...
  └─crypt_sda2 ext4 a6546765-UUID... /mount-location-2
Method #1 Backup the original luks disk/partition ( sda or sda1 ) encrypted as it is to any location bdsync / bdsync-manager is an amazing tool that can do image backup (full/incremental) by fast block device syncing. This can be used along with luks directly on the encrypted partition, and incremental backups work very well in this case as well. This tool supports mounting/compression/network/etc. dd : classic method for disk imaging, can be used with a command similar to dd if=/dev/sda1 of=/backup/location/crypted.img bs=128K status=progress — but note that imaging a mounted partition with dd may lead to data corruption for files in use while the backup runs, like sql databases, X config files, or documents being edited. To guarantee data integrity with such a backup, closing all running applications and databases is recommended; we can also mount the image after its creation and check its integrity with fsck . Cons for #1 : backup size, compression, and incremental backups can be tricky Method #2 This method is for disks without encryption, or to back up the mapped luks un-encrypted partitions crypt_sda1/crypt_sda2 ... An encrypted backup destination location (like a luks partition/vault/truecrypt), or an encrypted archive/image if the backup tool supports such a feature, is recommended. Veeam : free/paid professional backup solution (on linux only command line and TUI); the kernel module is opensource. This tool can not be used for the first method; backups can be encrypted, and incremental and mounting backups are supported. 
bdsync / bdsync-manager : same as in the first method, but the backup is made from the un-encrypted mapped partition (crypt_sda1/crypt_sda2). dd : classic method for disk imaging, can be used with a command similar to dd if=/dev/mapper/crypt_sda1 of=/backup/location/un-encrypted-sda1.img bs=128K status=progress — the same data-integrity caveats as in Method #1 apply: close all running applications and databases while imaging a mounted partition, and optionally mount the image after its creation and check its integrity with fsck . Cons for #2 : disk headers, mbr, partition structure, uuids etc. are not saved — the additional backup steps detailed below are required for a full backup Backup luks headers: cryptsetup luksHeaderBackup /dev/sda1 --header-backup-file /backup/location/sda1_luks_headers_backup Backup mbr: dd if=/dev/sda of=/backup/location/backup-sda.mbr bs=512 count=1 Backup partition structure: sfdisk -d /dev/sda > /location/backup-sda.sfdisk Backup disk uuid Note: Images made with dd can be mounted with commands similar to:
fdisk -l -u /location/image.img
kpartx -l -v /location/image.img
kpartx -a -v /location/image.img
cryptsetup luksOpen /dev/mapper/loop0p1 imgroot
mount /dev/mapper/imgroot /mnt/backup/
Alternatives: Bareos : open source backup solution ( demo-video ) Bacula : open source backup solution ( demo-video ) Weresync : disk image solution with incremental feature. Other tools can be found here , here , here or here There is a Wikipedia page comparing disk cloning software An analysis by Gartner of some professional backup solutions is available here Other tools Acronis backup may be used for both methods, but their kernel module is always updated very late (not working with current/recent kernel versions), plus mounting backups is not working as of 02/2020. Partclone : used by clonezilla; this tool only backs up used disk blocks. It supports image mounting but does not support live/hot backup nor encryption/luks. Partimage : dd alternative with a TUI; it supports live/hot backups, but images can not be mounted and it does not support luks (only ext4/btrfs etc.). Doclone : very nice live/hot backup imaging solution, supporting many file systems (but not luks...) — ext4 etc. It supports the network; mounting is not possible. Rsnapshot : snapshot file backup system using rsync, used in many distros (like mageia); the backup jobs are scheduled with cron, and when running in the background the backup status is not automatically visible. Rsync / Grsync : sync folders with the rsync command; grsync is the gui... Cronopete : file backup alternative to rsync (the application is limited in how it works compared to modern solutions) Simple-backup : file backup solution with tray icon and incremental feature; backups are made to tar archives Backintime : python backup app for file-based backup (the app has many unsolved issues ) Shadowprotect : acronis alternative with mount feature... luks support is not obvious. Datto : professional backup solution; luks support is not obvious, the linux agent needs to be networked to a backup server... the kernel module is opensource on github... the interface is web based without a modern design. FSArchiver : live/hot image backup solution; backups can not be mounted. Dump : image backup system; mount is not supported.
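To follow the advice above about verifying a dd image, a sketch using a loop device — paths are illustrative, and if the image contains a whole disk with a partition table rather than a bare filesystem, use the kpartx sequence shown earlier instead:
loop=$(losetup --find --show /backup/location/un-encrypted-sda1.img)
fsck -f "$loop"
losetup -d "$loop"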
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120919/" ] }
569,810
I tried to set an environment variable and made an error in the input due to a new keyboard I'm not used to. In the input (and output) below, did I accidentally start a job of some kind? I'm mystified by what happened. This is bash on latest OS X. mbp:e4 m$ DATABASE_URI=jdbc:postgresql://localhost:5432/e4-data?user=m&password=?"> "[1] 59932[1]+ Done DATABASE_URI=jdbc:postgresql://localhost:5432/e4-data?user=m
Yes — the unquoted & started a background job, although a very short-lived one. The shell parses & as a control operator that terminates a command and runs it asynchronously, so DATABASE_URI=jdbc:postgresql://localhost:5432/e4-data?user=m was executed as background job [1] (that is the [1] 59932 you saw). A bare variable assignment run that way lives only in the background job's environment, finishes immediately, and the shell later reports [1]+ Done — nothing was set in your interactive shell. The remainder, password=?" , then began a new assignment with an unterminated double quote, which is why you were dropped to the > continuation prompt; typing the closing " (or pressing Ctrl-C to abandon the line) gets you back. Nothing is left running. The fix is to quote the value so the & and ? are taken literally: DATABASE_URI='jdbc:postgresql://localhost:5432/e4-data?user=m&password=?'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/174126/" ] }
569,876
My goal is to output a JSON object using jq on the output of a find command in bash. It could either be a one-line command or a bash script. I have this command which creates JSON objects from each line of output: find ~/ -maxdepth 1 -name "D*" | \while read line; \do jq -n \--arg name "$(basename "$line")" \--arg path "$line" \'{name: $name, path: $path}'; \done The output looks like this: { "name": "Desktop", "path": "/Users/username/Desktop"}{ "name": "Documents", "path": "/Users/username/Documents"}{ "name": "Downloads", "path": "/Users/username/Downloads"} But I need these objects to be in an array, and I need the array to be the value of a a parent object's single key called items , like so: {"items": [ { "name": "Desktop", "path": "/Users/username/Desktop" }, { "name": "Documents", "path": "/Users/username/Documents" }, { "name": "Downloads", "path": "/Users/username/Downloads" } ]} I tried adding the square brackets to the jq output string for each line ( '[{name: $name, path: $path}]'; ) and that adds the brackets but not the commas between the array elements. I found possible solutions here but I could not figure out how to use them while looping through each line.
This trick with the jq 1.5 inputs streaming filter seems to do it ... | jq -n '.items |= [inputs]' Ex. $ find ~/ -maxdepth 1 -name "D*" | while read line; do jq -n --arg name "$(basename "$line")" --arg path "$line" '{name: $name, path: $path}' done | jq -n '.items |= [inputs]'{ "items": [ { "name": "Downloads", "path": "/home/steeldriver/Downloads" }, { "name": "Desktop", "path": "/home/steeldriver/Desktop" }, { "name": "Documents", "path": "/home/steeldriver/Documents" } ]}
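The shell loop can also be dropped entirely and the whole object built in one jq call — this sketch assumes the paths contain no newlines, and uses jq's regex sub to emulate basename: find ~/ -maxdepth 1 -name "D*" | jq -Rn '{items: [inputs | {name: sub(".*/"; ""), path: .}]}'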
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152074/" ] }
569,882
I have a system which can be accessed via SSH and HTTP. The system has two interfaces (eth0, eth1) and is running Slackware 14.1.

eth0 : 192.168.1.99, LTE Ethernet Gateway/Modem
eth1 : 172.16.101.250, Local network (with internet access)

eth1 should be used as the default route for outgoing traffic, automatically switching to eth0 when the internet is not available via eth1. This part is working using a cron job and a script. The main concern is that when the default gateway is switched, incoming traffic to SSH and HTTP only works via the interface of the default gateway.

/etc/rc.d/rc.inet1.conf

# Config information for eth0:
IPADDR[0]="192.168.1.99"
NETMASK[0]="255.255.255.0"
USE_DHCP[0]="no"
DHCP_HOSTNAME[0]="bridge"

# Config information for eth1:
IPADDR[1]="172.16.101.250"
NETMASK[1]="255.255.128.0"
USE_DHCP[1]="no"
DHCP_HOSTNAME[1]="bridge"

# Default gateway IP address:
GATEWAY="172.16.0.1"

Script executed every minute to verify internet availability on both networks:

#!/bin/bash

DEF_GATEWAY="172.16.0.1"   # Default Gateway
BCK_GATEWAY="192.168.1.1"  # Backup Gateway
RMT_IP_1="8.8.8.8"         # first remote ip
RMT_IP_2="8.8.4.4"         # second remote ip
PING_TIMEOUT="1"           # Ping timeout in seconds

# Check user
if [ `whoami` != "root" ]
then
    echo "Failover script must be run as root!"
    exit 1
fi

# Check GW
CURRENT_GW=`ip route show | grep default | awk '{ print $3 }'`

if [ "$CURRENT_GW" == "$DEF_GATEWAY" ]
then
    ping -c 2 -W $PING_TIMEOUT $RMT_IP_1 > /dev/null
    PING=$?
else
    # Add static routes to remote ip's
    ip route add $RMT_IP_1 via $DEF_GATEWAY
    ip route add $RMT_IP_2 via $DEF_GATEWAY

    ping -c 2 -W $PING_TIMEOUT $RMT_IP_1 > /dev/null
    PING_1=$?
    ping -c 2 -W $PING_TIMEOUT $RMT_IP_2 > /dev/null
    PING_2=$?

    # Del static route to remote ip's
    ip route del $RMT_IP_1
    ip route del $RMT_IP_2
fi

if [ "$PING" == "1" ] && [ "$PING_2" == "1" ]
then
    if [ "$CURRENT_GW" == "$DEF_GATEWAY" ]
    then
        ip route replace default via $BCK_GATEWAY
    fi
elif [ "$CURRENT_GW" != "$DEF_GATEWAY" ]
then
    # Switching to default
    ip route replace default via $DEF_GATEWAY
fi

Here are the services listening:

# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State
tcp        0      0 *:http           *:*                LISTEN
tcp        0      0 *:auth           *:*                LISTEN
tcp        0      0 *:ssh            *:*                LISTEN
tcp        0      0 *:https          *:*                LISTEN
tcp        0      0 *:time           *:*                LISTEN
tcp6       0      0 [::]:ssh         [::]:*

Here is the routing table:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.16.0.1      0.0.0.0         UG    1      0        0 eth1
loopback        *               255.0.0.0       U     0      0        0 lo
172.16.0.0      *               255.255.128.0   U     0      0        0 eth1
192.168.1.0     *               255.255.255.0   U     0      0        0 eth0
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/569882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37935/" ] }
569,941
I was wondering if there was a quick way to save a command in Ubuntu's terminal. The scenario is:

Problem:
Typed out a [long command]
Forgot I needed to run [another command] before I could run the [long command]

I want to be able to save the command for later use in an easy way that's not just putting a # before it and recalling it from the up/down-arrow history. Ideally it would be saved directly to a register or the clipboard. I forgot to mention that I don't want to echo it, either.
This is not your terminal , this is your shell . The name for the shell mechanism that you are looking for is a kill buffer . People forget that shell command line editors have these. ZLE in the Z shell has them, as have GNU Readline in the Bourne Again shell, libedit in the (FreeBSD) Almquist shell, the Korn shell's line editor, and the TENEX C shell's line editor. In all of these shells in emacs mode, simply go to the end of the line to be saved, kill it to the head kill buffer with ⎈ Control + U , type and run the intermediate command, and then yank the kill buffer contents with ⎈ Control + Y . Ensure that you do not do anything with the kill buffer when entering the intermediate command. In the Z shell in vi mode, you have the vi prefix sequences for specifying a named vi -style buffer to kill the line into. You can use one of the other buffers instead of the default buffer. Simply use something like " a d d (in vicmd mode) to delete the whole line into buffer "a", type and run the intermediate command, and then put that buffer's contents with " a p . In their vi modes, the Korn shell, GNU Readline in the Bourne Again shell, and libedit in the (FreeBSD) Almquist shell do not have named vi -style buffers, only the one cut buffer. d d to delete the line into that buffer, followed by putting the buffer contents with p , will work. But it uses the same vi -style buffer that killing and yanking will while entering the intermediate command.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/569941", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397494/" ] }
569,993
I am using VirtualBox 6.1 and Ubuntu 18.04. Here are my steps:

start Ubuntu in VirtualBox
in the VirtualBox guest window menubar, select Devices -> Install Guest Additions
if prompted to automatically attempt to run software from the CD, just hit cancel
sudo apt-get install build-essential linux-headers-generic
open a command-prompt window ( Applications -> Accessories -> Terminal )

I have already set copy/paste to bidirectional in the settings, but it is still not working. I have tried rebooting the machine too. What am I missing here? Please help. Also, I have tried:

sudo apt-get install virtualbox-guest-dkms virtualbox-guest-utils
There is actually a problem with VirtualBox version 6.1.4. I posted the question in the Oracle VBox forum, and it worked after downgrading to VBox version 6.1.2. Here is the link - https://forums.virtualbox.org/viewtopic.php?f=6&t=97052
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/569993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/363046/" ] }
570,008
I ran into a problem and can't find a solution to fix the following issue. I want to copy the first word of a line to the beginning of the following n lines if they start with a special character, else copy the new word.

Input:

aaa random words
`dsf
|df
bbb
|d

Output:

aaa random words
aaa`dsf
aaa|df
bbb
bbb|d
An awk oneliner: awk '/^[[:alnum:]]/ {prefix = $1; print; next} {print prefix $0}' input On lines starting with an alphanumeric character, store the first word in prefix , print the line and continue to the next line. On all other lines print prefix before the line.
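For reference, running the one-liner against the sample input from the question produces the desired output:

$ awk '/^[[:alnum:]]/ {prefix = $1; print; next} {print prefix $0}' input
aaa random words
aaa`dsf
aaa|df
bbb
bbb|d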
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397562/" ] }
570,054
Is it possible to configure a domain-dependent nameserver for address resolution (e.g. in resolv.conf)? For example:

nameserver 1.2.3.4 for any domain abc.com
nameserver 4.3.2.1 for any domain cba.com
nameserver 1.4.2.3 for anything else

I am using a modern Debian.
You can’t do this with only resolv.conf , but with an intermediary DNS forwarding daemon such as Dnsmasq (packaged in Debian as dnsmasq and related packages). With Dnsmasq, you’d configure Dnsmasq itself with the list of servers:

server=/abc.com/1.2.3.4
server=/cba.com/4.3.2.1
server=1.4.2.3

and tell it not to look at resolv.conf :

no-resolv

Then you’d change your resolv.conf so it points to the Dnsmasq daemon, by removing all the nameserver entries therein. You’d also need to ensure that any DHCP setup doesn’t overwrite resolv.conf .
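With those entries removed, resolv.conf only needs to point at the local Dnsmasq instance (assuming Dnsmasq listens on the loopback address, which is the default):

nameserver 127.0.0.1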
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570054", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215038/" ] }
570,094
In my ~/.bashrc I have the alias link='ln -sf' set, and it works as expected during my shell sessions. However, for root-protected locations, where I need to put sudo at the beginning of the command, it throws the following error:

link: cannot create link '<$2>' to '<$1>': Operation not permitted

What exactly is happening? How could I work around it?
sudo is an external command which doesn't know about your aliases. Only your shell knows about your aliases, and sudo is not part of it. In this case I guess that sudo tries to run the /usr/bin/link binary, which on my Linux system is a simple command which always creates a hard link by invoking the link(2) system call:

$ link --help
Usage: link FILE1 FILE2
  or:  link OPTION
Call the link function to create a link named FILE2 to an existing FILE1.

      --help     display this help and exit
      --version  output version information and exit

GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
Full documentation at: <https://www.gnu.org/software/coreutils/link>
or available locally via: info '(coreutils) link invocation'
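If you do want sudo to honour your aliases, one common workaround (not part of the answer above, but a documented bash behaviour) is to alias sudo itself with a trailing space; bash then also checks the word following sudo for alias expansion:

alias sudo='sudo '   # trailing space: the next word is also expanded as an alias

With that in ~/.bashrc, sudo link a b would expand to sudo ln -sf a b.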
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364080/" ] }
570,188
I am studying Linux device drivers; my main focus is on wifi drivers. I want to know how the code flows when I plug in my device. Maybe I can do something like add a printk line in every function. The device I have is supported by the ath9k_htc driver. I want to make some changes in the driver code for learning purposes. What is the correct or general approach for understanding the code flow of driver modules in Linux?
When I want to do this, I use the ftrace framework . Start by mounting the special file system: mount -t tracefs nodev /sys/kernel/tracing (as root; you should become root for all this, you’ll be doing everything as root anyway, and it’s easier to have a root shell than to use sudo ). Then change to that directory: cd /sys/kernel/tracing It contains a basic README which provides a short summary. To explore function calls, I use the function graph tracer , function_graph in available_tracers . Identify the functions you’re interested in, for example ath9k_htc_tx , and set them up echo ath9k_htc_tx > set_graph_function You can append other functions, make sure to use >> after the first function. You can see the configured functions with cat set_graph_function When you write to set_graph_function , the function is checked against the running kernel; if the function can’t be found, the write will fail, so you’ll know straight away if you’ll end up not tracing anything. Once the functions are set up, enable the tracer: echo function_graph > current_tracer then watch the trace file. To disable the tracer again, echo nop > current_tracer or flip tracing_on by writing 0 or 1 to it (0 to disable tracing, 1 to re-enable it).
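Putting those steps together, a minimal session (as root) might look like the sketch below. The function name is the one from the question, and trace_pipe is read so the output streams as it is generated; adjust to taste:

mount -t tracefs nodev /sys/kernel/tracing
cd /sys/kernel/tracing
echo ath9k_htc_tx > set_graph_function
echo function_graph > current_tracer
cat trace_pipe              # stream the call graph; interrupt with Ctrl-C
echo nop > current_tracer   # stop tracing when done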
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/570188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397364/" ] }
570,240
What I did: install a minimal debian/testing (no GUI, no standard utilities) install build-essential, dkms, linux-headers-$(uname -r) click Devices -> Insert guest additions CD ran m-a prepare mount /dev/sr0 somewhere, cd there and ./VBoxLinuxAdditions.run What I got: ...Building the modules for kernel 5.4.0-4-amd64.Look at /var/log/vboxadd-setup.log to find out what went wrong.modprobe vboxsf failed... What is in the log: ...test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo >&2; \ echo >&2 " ERROR: Kernel configuration is invalid."; \ echo >&2 " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo >&2 " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo >&2 ; \ /bin/false) ... What I investigated: $modprobe vboxsf modprobe: FATAL: Module vboxsf not found in directory /lib/modules/5.4.0-4-amd $lsmod | grep vboxsf <no output> $find /usr/src/linux-headers-5.4.0-4-amd64/ -name autoconf.h /usr/src/linux-headers-5.4.0-4-amd64/include/generated/autoconf.h The host OS is Ubuntu 18. EDIT: after installing openbox to get an X server, this is how the log looks like: # less /var/log/vboxadd-setup.logBuilding the main Guest Additions module for kernel 5.4.0-4-amd64.Error building the module. Build output follows.make V=1 CONFIG_MODULE_SIG= -C /lib/modules/5.4.0-4-amd64/build M=/tmp/vbox.0 SRCROOT=/tmp/vbox.0 -j1 modulesmake -C /usr/src/linux-headers-5.4.0-4-amd64 -f /usr/src/linux-headers-5.4.0-4-common/Makefile modulestest -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \echo >&2; \echo >&2 " ERROR: Kernel configuration is invalid."; \echo >&2 " include/generated/autoconf.h or include/config/auto.conf are missing.";\echo >&2 " Run 'make oldconfig && make prepare' on kernel src to fix it."; \echo >&2 ; \/bin/false)make -f /usr/src/linux-headers-5.4.0-4-common/scripts/Makefile.build obj=/tmp/vbox.0 single-build= need-builtin=1 need-modorder=1 gcc-9 -Wp,-MD,/tmp/vbox.0/.VBoxGuest-linux.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/9/include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include -I./arch/x86/include/generated -I/usr/src/linux-headers-5.4.0-4-common/include -I./include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I/usr/src/linux-headers-5.4.0-4-common/include/uapi -I./include/generated/uapi -include /usr/src/linux-headers-5.4.0-4-common/include/linux/kconfig.h -include /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h -D__KERNEL__ -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Wno-format-security -std=gnu89 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -m64 -falign-jumps=1 -falign-loops=1 -mno-80387 -mno-fp-ret-in-387 -mpreferred-stack-boundary=3 -mskip-rax-setup -mtune=generic -mno-red-zone -mcmodel=kernel -DCONFIG_X86_X32_ABI -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_SSSE3=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -DCONFIG_AS_AVX512=1 -DCONFIG_AS_SHA1_NI=1 -DCONFIG_AS_SHA256_NI=1 -Wno-sign-compare -fno-asynchronous-unwind-tables -mindirect-branch=thunk-extern -mindirect-branch-register -fno-jump-tables -fno-delete-null-pointer-checks -Wno-frame-address -Wno-format-truncation -Wno-format-overflow -Wno-address-of-packed-member -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fstack-protector-strong -Wno-unused-but-set-variable 
-Wimplicit-fallthrough -Wno-unused-const-variable -fno-var-tracking-assignments -g -pg -mrecord-mcount -mfentry -DCC_USING_FENTRY -flive-patching=inline-clone -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wno-stringop-truncation -fno-strict-overflow -fno-merge-all-constants -fmerge-constants -fno-stack-check -fconserve-stack -Werror=date-time -Werror=incompatible-pointer-types -Werror=designated-init -fmacro-prefix-map=/usr/src/linux-headers-5.4.0-4-common/= -fcf-protection=none -Wno-packed-not-aligned -Wno-declaration-after-statement -include /tmp/vbox.0/include/VBox/VBoxGuestMangling.h -fno-pie -I/usr/src/linux-headers-5.4.0-4-common/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxguest/ -I/tmp/vbox.0/vboxguest/include -I/tmp/vbox.0/vboxguest/r0drv/linux -D__KERNEL__ -DMODULE -DVBOX -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_GUEST -DIN_GUEST_R0 -DIN_MODULE -DRT_WITH_VBOX -DVBGL_VBOXGUEST -DVBOX_WITH_HGCM -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -DKBUILD_BASENAME='"VBoxGuest_linux"' -DKBUILD_MODNAME='"vboxguest"' -c -o /tmp/vbox.0/VBoxGuest-linux.o /tmp/vbox.0/VBoxGuest-linux.c ./tools/objtool/objtool orc generate --module --no-fp --retpoline --uaccess /tmp/vbox.0/VBoxGuest-linux.o if objdump -h /tmp/vbox.0/VBoxGuest-linux.o | grep -q __ksymtab; then gcc-9 -E -D__GENKSYMS__ -Wp,-MD,/tmp/vbox.0/.VBoxGuest-linux.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/9/include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include -I./arch/x86/include/generated -I/usr/src/linux-headers-5.4.0-4-common/include -I./include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I/usr/src/linux-headers-5.4.0-4-common/include/uapi -I./include/generated/uapi -include /usr/src/linux-headers-5.4.0-4-common/include/linux/kconfig.h -include /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h -D__KERNEL__ -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Wno-format-security -std=gnu89 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -m64 -falign-jumps=1 -falign-loops=1 -mno-80387 -mno-fp-ret-in-387 -mpreferred-stack-boundary=3 -mskip-rax-setup -mtune=generic -mno-red-zone -mcmodel=kernel -DCONFIG_X86_X32_ABI -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_SSSE3=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -DCONFIG_AS_AVX512=1 -DCONFIG_AS_SHA1_NI=1 -DCONFIG_AS_SHA256_NI=1 -Wno-sign-compare -fno-asynchronous-unwind-tables -mindirect-branch=thunk-extern -mindirect-branch-register -fno-jump-tables -fno-delete-null-pointer-checks -Wno-frame-address -Wno-format-truncation -Wno-format-overflow -Wno-address-of-packed-member -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fstack-protector-strong -Wno-unused-but-set-variable -Wimplicit-fallthrough -Wno-unused-const-variable -fno-var-tracking-assignments -g -pg -mrecord-mcount -mfentry -DCC_USING_FENTRY -flive-patching=inline-clone -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wno-stringop-truncation -fno-strict-overflow -fno-merge-all-constants -fmerge-constants -fno-stack-check -fconserve-stack -Werror=date-time -Werror=incompatible-pointer-types -Werror=designated-init -fmacro-prefix-map=/usr/src/linux-headers-5.4.0-4-common/= -fcf-protection=none -Wno-packed-not-aligned -Wno-declaration-after-statement -include /tmp/vbox.0/include/VBox/VBoxGuestMangling.h -fno-pie 
-I/usr/src/linux-headers-5.4.0-4-common/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxguest/ -I/tmp/vbox.0/vboxguest/include -I/tmp/vbox.0/vboxguest/r0drv/linux -D__KERNEL__ -DMODULE -DVBOX -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_GUEST -DIN_GUEST_R0 -DIN_MODULE -DRT_WITH_VBOX -DVBGL_VBOXGUEST -DVBOX_WITH_HGCM -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -DKBUILD_BASENAME='"VBoxGuest_linux"' -DKBUILD_MODNAME='"vboxguest"' /tmp/vbox.0/VBoxGuest-linux.c | scripts/genksyms/genksyms -r /dev/null > /tmp/vbox.0/.tmp_VBoxGuest-linux.ver; ld -m elf_x86_64 -z max-page-size=0x200000 -r -o /tmp/vbox.0/.tmp_VBoxGuest-linux.o /tmp/vbox.0/VBoxGuest-linux.o -T /tmp/vbox.0/.tmp_VBoxGuest-linux.ver; mv -f /tmp/vbox.0/.tmp_VBoxGuest-linux.o /tmp/vbox.0/VBoxGuest-linux.o; rm -f /tmp/vbox.0/.tmp_VBoxGuest-linux.ver; fi gcc-9 -Wp,-MD,/tmp/vbox.0/.VBoxGuest.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/9/include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include -I./arch/x86/include/generated -I/usr/src/linux-headers-5.4.0-4-common/include -I./include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I/usr/src/linux-headers-5.4.0-4-common/include/uapi -I./include/generated/uapi -include /usr/src/linux-headers-5.4.0-4-common/include/linux/kconfig.h -include /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h -D__KERNEL__ -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Wno-format-security -std=gnu89 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -m64 -falign-jumps=1 -falign-loops=1 -mno-80387 -mno-fp-ret-in-387 -mpreferred-stack-boundary=3 -mskip-rax-setup -mtune=generic -mno-red-zone -mcmodel=kernel -DCONFIG_X86_X32_ABI -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_SSSE3=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -DCONFIG_AS_AVX512=1 -DCONFIG_AS_SHA1_NI=1 -DCONFIG_AS_SHA256_NI=1 -Wno-sign-compare -fno-asynchronous-unwind-tables -mindirect-branch=thunk-extern -mindirect-branch-register -fno-jump-tables -fno-delete-null-pointer-checks -Wno-frame-address -Wno-format-truncation -Wno-format-overflow -Wno-address-of-packed-member -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fstack-protector-strong -Wno-unused-but-set-variable -Wimplicit-fallthrough -Wno-unused-const-variable -fno-var-tracking-assignments -g -pg -mrecord-mcount -mfentry -DCC_USING_FENTRY -flive-patching=inline-clone -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wno-stringop-truncation -fno-strict-overflow -fno-merge-all-constants -fmerge-constants -fno-stack-check -fconserve-stack -Werror=date-time -Werror=incompatible-pointer-types -Werror=designated-init -fmacro-prefix-map=/usr/src/linux-headers-5.4.0-4-common/= -fcf-protection=none -Wno-packed-not-aligned -Wno-declaration-after-statement -include /tmp/vbox.0/include/VBox/VBoxGuestMangling.h -fno-pie -I/usr/src/linux-headers-5.4.0-4-common/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxguest/ -I/tmp/vbox.0/vboxguest/include -I/tmp/vbox.0/vboxguest/r0drv/linux -D__KERNEL__ -DMODULE -DVBOX -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_GUEST -DIN_GUEST_R0 -DIN_MODULE -DRT_WITH_VBOX -DVBGL_VBOXGUEST -DVBOX_WITH_HGCM -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -DKBUILD_BASENAME='"VBoxGuest"' -DKBUILD_MODNAME='"vboxguest"' -c -o 
/tmp/vbox.0/VBoxGuest.o /tmp/vbox.0/VBoxGuest.c/tmp/vbox.0/VBoxGuest.c: In function ‘vgdrvCheckIfVmmReqIsAllowed’:/tmp/vbox.0/VBoxGuest.c:2060:16: warning: this statement may fall through [-Wimplicit-fallthrough=] 2060 | if (pSession->fUserSession) | ^/tmp/vbox.0/VBoxGuest.c:2062:9: note: here 2062 | case kLevel_AllUsers: | ^~~~ ./tools/objtool/objtool orc generate --module --no-fp --retpoline --uaccess /tmp/vbox.0/VBoxGuest.o if objdump -h /tmp/vbox.0/VBoxGuest.o | grep -q __ksymtab; then gcc-9 <compile flags deleted to fit in 30 000 characters>fi gcc-9 -Wp,-MD,/tmp/vbox.0/.VBoxGuestR0LibGenericRequest.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/9/include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include -I./arch/x86/include/generated -I/usr/src/linux-headers-5.4.0-4-common/include -I./include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I/usr/src/linux-headers-5.4.0-4-common/include/uapi -I./include/generated/uapi -include /usr/src/linux-headers-5.4.0-4-common/include/linux/kconfig.h -include /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h -D__KERNEL__ -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Wno-format-security -std=gnu89 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -m64 -falign-jumps=1 -falign-loops=1 -mno-80387 -mno-fp-ret-in-387 -mpreferred-stack-boundary=3 -mskip-rax-setup -mtune=generic -mno-red-zone -mcmodel=kernel -DCONFIG_X86_X32_ABI -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_SSSE3=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -DCONFIG_AS_AVX512=1 -DCONFIG_AS_SHA1_NI=1 -DCONFIG_AS_SHA256_NI=1 -Wno-sign-compare -fno-asynchronous-unwind-tables -mindirect-branch=thunk-extern -mindirect-branch-register -fno-jump-tables -fno-delete-null-pointer-checks -Wno-frame-address -Wno-format-truncation -Wno-format-overflow -Wno-address-of-packed-member -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fstack-protector-strong -Wno-unused-but-set-variable -Wimplicit-fallthrough -Wno-unused-const-variable -fno-var-tracking-assignments -g -pg -mrecord-mcount -mfentry -DCC_USING_FENTRY -flive-patching=inline-clone -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wno-stringop-truncation -fno-strict-overflow -fno-merge-all-constants -fmerge-constants -fno-stack-check -fconserve-stack -Werror=date-time -Werror=incompatible-pointer-types -Werror=designated-init -fmacro-prefix-map=/usr/src/linux-headers-5.4.0-4-common/= -fcf-protection=none -Wno-packed-not-aligned -Wno-declaration-after-statement -include /tmp/vbox.0/include/VBox/VBoxGuestMangling.h -fno-pie -I/usr/src/linux-headers-5.4.0-4-common/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxguest/ -I/tmp/vbox.0/vboxguest/include -I/tmp/vbox.0/vboxguest/r0drv/linux -D__KERNEL__ -DMODULE -DVBOX -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_GUEST -DIN_GUEST_R0 -DIN_MODULE -DRT_WITH_VBOX -DVBGL_VBOXGUEST -DVBOX_WITH_HGCM -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -DKBUILD_BASENAME='"VBoxGuestR0LibGenericRequest"' -DKBUILD_MODNAME='"vboxguest"' -c -o /tmp/vbox.0/VBoxGuestR0LibGenericRequest.o /tmp/vbox.0/VBoxGuestR0LibGenericRequest.c ./tools/objtool/objtool orc generate --module --no-fp --retpoline --uaccess /tmp/vbox.0/VBoxGuestR0LibGenericRequest.o if objdump -h /tmp/vbox.0/VBoxGuestR0LibGenericRequest.o | grep -q __ksymtab; then 
gcc-9 -E -D__GENKSYMS__ -Wp,-MD,/tmp/vbox.0/.VBoxGuestR0LibGenericRequest.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/9/include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include -I./arch/x86/include/generated -I/usr/src/linux-headers-5.4.0-4-common/include -I./include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I/usr/src/linux-headers-5.4.0-4-common/include/uapi -I./include/generated/uapi -include /usr/src/linux-headers-5.4.0-4-common/include/linux/kconfig.h -include /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h -D__KERNEL__ -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Wno-format-security -std=gnu89 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -m64 -falign-jumps=1 -falign-loops=1 -mno-80387 -mno-fp-ret-in-387 -mpreferred-stack-boundary=3 -mskip-rax-setup -mtune=generic -mno-red-zone -mcmodel=kernel -DCONFIG_X86_X32_ABI -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_SSSE3=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -DCONFIG_AS_AVX512=1 -DCONFIG_AS_SHA1_NI=1 -DCONFIG_AS_SHA256_NI=1 -Wno-sign-compare -fno-asynchronous-unwind-tables -mindirect-<some flags removed to fit into 30 000 char> -DVBOX_WITH_64_BITS_GUESTS -DMODULE -DKBUILD_BASENAME='"VBoxGuestR0LibGenericRequest"' -DKBUILD_MODNAME='"vboxguest"' /tmp/vbox.0/VBoxGuestR0LibGenericRequest.c | scripts/genksyms/genksyms -r /dev/null > /tmp/vbox.0/.tmp_VBoxGuestR0LibGenericRequest.ver; ld -m elf_x86_64 -z max-page-size=0x200000 -r -o /tmp/vbox.0/.tmp_VBoxGuestR0LibGenericRequest.o /tmp/vbox.0/VBoxGuestR0LibGenericRequest.o -T /tmp/vbox.0/.tmp_VBoxGuestR0LibGenericRequest.ver; mv -f /tmp/vbox.0/.tmp_VBoxGuestR0LibGenericRequest.o /tmp/vbox.0/VBoxGuestR0LibGenericRequest.o; rm -f /tmp/vbox.0/.tmp_VBoxGuestR0LibGenericRequest.ver; fi gcc-9 -Wp,-MD,/tmp/vbox.0/.VBoxGuestR0LibHGCMInternal.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/9/include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include -I./arch/x86/include/generated -I/usr/src/linux-headers-5.4.0-4-common/include -I./include -I/usr/src/linux-headers-5.4.0-4-common/arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I/usr/src/linux-headers-5.4.0-4-common/include/uapi -I./include/generated/uapi -include /usr/src/linux-headers-5.4.0-4-common/include/linux/kconfig.h -include /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h -D__KERNEL__ -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Wno-format-security -std=gnu89 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -m64 -falign-jumps=1 -falign-loops=1 -mno-80387 -mno-fp-ret-in-387 -mpreferred-stack-boundary=3 -mskip-rax-setup -mtune=generic -mno-red-zone -mcmodel=kernel -DCONFIG_X86_X32_ABI -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_SSSE3=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -DCONFIG_AS_AVX512=1 -DCONFIG_AS_SHA1_NI=1 -DCONFIG_AS_SHA256_NI=1 -Wno-sign-compare -fno-asynchronous-unwind-tables -mindirect-branch=thunk-extern -mindirect-branch-register -fno-jump-tables -fno-delete-null-pointer-checks -Wno-frame-address -Wno-format-truncation -Wno-format-overflow -Wno-address-of-packed-member -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fstack-protector-strong 
-Wno-unused-but-set-variable -Wimplicit-fallthrough -Wno-unused-const-variable -fno-var-tracking-assignments -g -pg -mrecord-mcount -mfentry -DCC_USING_FENTRY -flive-patching=inline-clone -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wno-stringop-truncation -fno-strict-overflow -fno-merge-all-constants -fmerge-constants -fno-stack-check -fconserve-stack -Werror=date-time -Werror=incompatible-pointer-types -Werror=designated-init -fmacro-prefix-map=/usr/src/linux-headers-5.4.0-4-common/= -fcf-protection=none -Wno-packed-not-aligned -Wno-declaration-after-statement -include /tmp/vbox.0/include/VBox/VBoxGuestMangling.h -fno-pie -I/usr/src/linux-headers-5.4.0-4-common/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxguest/ -I/tmp/vbox.0/vboxguest/include -I/tmp/vbox.0/vboxguest/r0drv/linux -D__KERNEL__ -DMODULE -DVBOX -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_GUEST -DIN_GUEST_R0 -DIN_MODULE -DRT_WITH_VBOX -DVBGL_VBOXGUEST -DVBOX_WITH_HGCM -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -DKBUILD_BASENAME='"VBoxGuestR0LibHGCMInternal"' -DKBUILD_MODNAME='"vboxguest"' -c -o /tmp/vbox.0/VBoxGuestR0LibHGCMInternal.o /tmp/vbox.0/VBoxGuestR0LibHGCMInternal.cIn file included from /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h:59, from <command-line>:/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c: In function ‘vbglR0HGCMInternalPreprocessCall’:/usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_attributes.h:200:41: error: expected ‘)’ before ‘__attribute__’ 200 | # define fallthrough __attribute__((__fallthrough__)) | ^~~~~~~~~~~~~/tmp/vbox.0/include/iprt/cdefs.h:1116:48: note: in expansion of macro ‘fallthrough’ 1116 | # define RT_FALL_THROUGH() __attribute__((fallthrough)) | ^~~~~~~~~~~/tmp/vbox.0/include/iprt/cdefs.h:1123:33: note: in expansion of macro ‘RT_FALL_THROUGH’ 1123 | #define RT_FALL_THRU() RT_FALL_THROUGH() | ^~~~~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:271:17: note: in expansion of macro ‘RT_FALL_THRU’ 271 | RT_FALL_THRU(); | ^~~~~~~~~~~~In file included from /tmp/vbox.0/include/iprt/types.h:29, from /tmp/vbox.0/VBoxGuestR0LibInternal.h:33, from /tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:33:/tmp/vbox.0/include/iprt/cdefs.h:1116:60: error: expected identifier or ‘(’ before ‘)’ token 1116 | # define RT_FALL_THROUGH() __attribute__((fallthrough)) | ^/tmp/vbox.0/include/iprt/cdefs.h:1123:33: note: in expansion of macro ‘RT_FALL_THROUGH’ 1123 | #define RT_FALL_THRU() RT_FALL_THROUGH() | ^~~~~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:271:17: note: in expansion of macro ‘RT_FALL_THRU’ 271 | RT_FALL_THRU(); | ^~~~~~~~~~~~In file included from /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h:59, from <command-line>:/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c: In function ‘vbglR0HGCMInternalInitCall’:/usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_attributes.h:200:41: error: expected ‘)’ before ‘__attribute__’ 200 | # define fallthrough __attribute__((__fallthrough__)) | ^~~~~~~~~~~~~/tmp/vbox.0/include/iprt/cdefs.h:1116:48: note: in expansion of macro ‘fallthrough’ 1116 | # define RT_FALL_THROUGH() __attribute__((fallthrough)) | ^~~~~~~~~~~/tmp/vbox.0/include/iprt/cdefs.h:1123:33: note: in expansion of macro ‘RT_FALL_THROUGH’ 1123 | #define RT_FALL_THRU() RT_FALL_THROUGH() | ^~~~~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:545:17: note: in expansion of macro ‘RT_FALL_THRU’ 545 | RT_FALL_THRU(); | ^~~~~~~~~~~~In file included from /tmp/vbox.0/include/iprt/types.h:29, 
from /tmp/vbox.0/VBoxGuestR0LibInternal.h:33, from /tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:33:/tmp/vbox.0/include/iprt/cdefs.h:1116:60: error: expected identifier or ‘(’ before ‘)’ token 1116 | # define RT_FALL_THROUGH() __attribute__((fallthrough)) | ^/tmp/vbox.0/include/iprt/cdefs.h:1123:33: note: in expansion of macro ‘RT_FALL_THROUGH’ 1123 | #define RT_FALL_THRU() RT_FALL_THROUGH() | ^~~~~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:545:17: note: in expansion of macro ‘RT_FALL_THRU’ 545 | RT_FALL_THRU(); | ^~~~~~~~~~~~In file included from /usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_types.h:59, from <command-line>:/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c: In function ‘vbglR0HGCMInternalCopyBackResult’:/usr/src/linux-headers-5.4.0-4-common/include/linux/compiler_attributes.h:200:41: error: expected ‘)’ before ‘__attribute__’ 200 | # define fallthrough __attribute__((__fallthrough__)) | ^~~~~~~~~~~~~/tmp/vbox.0/include/iprt/cdefs.h:1116:48: note: in expansion of macro ‘fallthrough’ 1116 | # define RT_FALL_THROUGH() __attribute__((fallthrough)) | ^~~~~~~~~~~/tmp/vbox.0/include/iprt/cdefs.h:1123:33: note: in expansion of macro ‘RT_FALL_THROUGH’ 1123 | #define RT_FALL_THRU() RT_FALL_THROUGH() | ^~~~~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:812:17: note: in expansion of macro ‘RT_FALL_THRU’ 812 | RT_FALL_THRU(); | ^~~~~~~~~~~~In file included from /tmp/vbox.0/include/iprt/types.h:29, from /tmp/vbox.0/VBoxGuestR0LibInternal.h:33, from /tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:33:/tmp/vbox.0/include/iprt/cdefs.h:1116:60: error: expected identifier or ‘(’ before ‘)’ token 1116 | # define RT_FALL_THROUGH() __attribute__((fallthrough)) | ^/tmp/vbox.0/include/iprt/cdefs.h:1123:33: note: in expansion of macro ‘RT_FALL_THROUGH’ 1123 | #define RT_FALL_THRU() RT_FALL_THROUGH() | ^~~~~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:812:17: note: in expansion of macro ‘RT_FALL_THRU’ 812 | RT_FALL_THRU(); | ^~~~~~~~~~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c: In function ‘vbglR0HGCMInternalPreprocessCall’:/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:259:20: warning: this statement may fall through [-Wimplicit-fallthrough=] 259 | if (!VBGLR0_CAN_USE_PHYS_PAGE_LIST(/*a_fLocked =*/ true)) | ^/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:273:13: note: here 273 | case VMMDevHGCMParmType_LinAddr_In: | ^~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c: In function ‘vbglR0HGCMInternalInitCall’:/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:539:20: warning: this statement may fall through [-Wimplicit-fallthrough=] 539 | if (!VBGLR0_CAN_USE_PHYS_PAGE_LIST(/*a_fLocked =*/ true)) | ^/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:547:13: note: here 547 | case VMMDevHGCMParmType_LinAddr_In: | ^~~~/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c: In function ‘vbglR0HGCMInternalCopyBackResult’:/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:807:20: warning: this statement may fall through [-Wimplicit-fallthrough=] 807 | if (!VBGLR0_CAN_USE_PHYS_PAGE_LIST(/*a_fLocked =*/ true)) | ^/tmp/vbox.0/VBoxGuestR0LibHGCMInternal.c:814:13: note: here 814 | case VMMDevHGCMParmType_LinAddr_Out: | ^~~~make[3]: *** [/usr/src/linux-headers-5.4.0-4-common/scripts/Makefile.build:271: /tmp/vbox.0/VBoxGuestR0LibHGCMInternal.o] Error 1make[2]: *** [/usr/src/linux-headers-5.4.0-4-common/Makefile:1665: /tmp/vbox.0] Error 2make[1]: *** [/usr/src/linux-headers-5.4.0-4-common/Makefile:179: sub-make] Error 2make: *** [/tmp/vbox.0/Makefile.include.footer:100: vboxguest] Error 2 My compiler is: $ gcc -vUsing built-in 
specs.COLLECT_GCC=gccCOLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapperOFFLOAD_TARGET_NAMES=nvptx-none:hsaOFFLOAD_TARGET_DEFAULT=1Target: x86_64-linux-gnuConfigured with: ../src/configure -v --with-pkgversion='Debian 9.2.1-30' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-bootstrap --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu --with-build-config=bootstrap-lto-lean --enable-link-mutexThread model: posixgcc version 9.2.1 20200224 (Debian 9.2.1-30)
The problem seems to be with the guest additions ISO that comes with the VirtualBox installation. The best option is to download the guest additions from the repository.

Run sudo apt install virtualbox-guest-additions-iso to get the latest packaged version. The guest ISO will be downloaded to /usr/share/virtualbox/VBoxGuestAdditions.iso
Create a mount point and mount the ISO: sudo mkdir -p /mnt/cdrom && sudo mount /usr/share/virtualbox/VBoxGuestAdditions.iso /mnt/cdrom
Navigate to the ISO and install: cd /mnt/cdrom && sudo sh ./VBoxLinuxAdditions.run --nox11
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20506/" ] }
570,297
I routinely disable Caps Lock and the respective modifier with a script, using xmodmap . That all works fine. Sometimes, however, for some reason unknown, Caps Lock is active. Having no key bound to Caps_Lock and no key bound to toggle the Lock modifier, I cannot switch Caps Lock off unless I reset the keymap, press the key, then re-map it to my desired configuration. So: How can I disable Caps Lock (currently active) without re-mapping keys and with no keys bound to do the job? Perhaps a command line tool can set the state? For anyone interested, here is how my script disables the accidental activation of Caps Lock by a key press (I never enable it intentionally):

#!/bin/sh

# I never want to use Caps_Lock. Make Caps_Lock another Control_L...
xmodmap -e "remove Lock = Caps_Lock" 2> /dev/null
xmodmap -e "keysym Caps_Lock = Control_L" 2> /dev/null
xmodmap -e "add Control = Control_L" 2> /dev/null
I don't know of any utility which does that (except maybe xdotool key Caps_Lock ?), but in the meanwhile you can compile this little program with cc xkb_unlock.c -s -lX11 -o ./xkb_unlock (provided that you have installed a compiler and the libc & xorg development packages) and use it as simply ./xkb_unlock .

xkb_unlock.c

#include <X11/Xlib.h>
#include <X11/XKBlib.h>
#include <err.h>
#include <stdlib.h>

int main(void)
{
    Display *dpy;

    if(!(dpy = XOpenDisplay(0)))
        errx(1, "cannot open display '%s'", XDisplayName(0));
    XkbLockModifiers(dpy, XkbUseCoreKbd, 0xff, 0);
    XSync(dpy, False);
}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115138/" ] }
570,477
I have a few thousand files that are individually GZip compressed (passing of course the -n flag so the output is deterministic). They then go into a Git repository. I just discovered that for 3 of these files, Gzip doesn't produce the same output on macOS vs Linux. Here's an example:

macOS

$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455  -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
6e145c6239e64b7e28f61cbab49caacbe0dae846ce33d539bf5c7f2761053712  -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
3562fd9f1d18d52e500619b4a5d5dfa709f5da8601b9dd64088fb5da8de7b281  -
$ gzip --version
Apple gzip 272.250.1

Linux

$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455  -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
10ac8b80af8d734ad3688aa6c7d9b582ab62cf7eda6bc1a0f08d6159cad96ddc  -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
cbf249e3a35f62a4f3b13e2c91fe0161af5d96a58727d17cf7a62e0ac3806393  -
$ gzip --version
gzip 1.6
Copyright (C) 2007, 2010, 2011 Free Software Foundation, Inc.
Copyright (C) 1993 Jean-loup Gailly.
This is free software.  You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.

Written by Jean-loup Gailly.

How is this possible? I thought the GZip implementation was completely standard?

UPDATE: Just to confirm that macOS and Linux versions do produce the same output most of the time, both OSes output the same hash for:

$ echo "Vive la France" | gzip --fast -n | shasum -a 256
af842c0cb2dbf94ae19f31c55e05fa0e403b249c8faead413ac2fa5e9b854768  -
Note that the compression algorithm (Deflate) in GZip is not strictly bijective. To elaborate: For some data, there's more than one possible compressed output depending on the algorithmic implementation and used parameters. So there's no guarantee at all that Apple GZip and gzip 1.6 will return the same compressed output. These outputs are all valid GZip streams, the standard just guarantees that every of these possible outputs will be decompressed to the same original data.
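A quick way to convince yourself that only the encoding differs, not the content, is to hash the round-tripped data instead of the compressed stream: the following should print the same checksum as the original file on both macOS and Linux.

$ gzip -n < finalizer | gzip -dc | shasum -a 256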
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/570477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397994/" ] }
570,486
I am trying to bind Ctrl+LeftArrow to backward-word in the terminal (no X Window System). But I observe that Ctrl+LeftArrow and LeftArrow generate identical escape sequences in the terminal:

I press Ctrl+V
I press LeftArrow
I receive ^[[D

I press Ctrl+V
I press Ctrl+LeftArrow
I receive ^[[D

The same problem occurs with Ctrl+RightArrow. How can I fix it? (Debian: Linux v4.19.0-8-amd64)
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/570486", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398001/" ] }
570,494
Edit: Is there a way to tell if a Linux ISO will provide the "try X without installing" option? From https://ubuntu.com/download/desktop for example or https://lubuntu.net/downloads/ it is not clear which will give that option. (My expectation was that all could "try without installing", but recently I tried some which excluded that option from the boot menu.)

Original, confused questions: If I create a bootable USB (Ubuntu for example, if it matters), does that mean it is a "live usb", where I can run the OS off the USB, or not necessarily, only that I could install the OS off the USB? Are there certain ISOs to choose to get live USBs, or do all ISOs for certain distros work? Are there other names for 'live usb'?
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/570494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398009/" ] }
570,507
I work with cash registers that are bare-boned RedHat Linux boxes. These registers have a scanner attached to a serial port ( ttyS0 ). We do a lot of testing that requires someone to stand at the register and swipe products across the scanner. I'm trying to figure out a way to remove the person from the equation. Is there an easy way to simulate input FROM the serial port? Obviously, sending information TO the serial port is easy: echo [whatever] > /dev/ttyS0 but what I'd really love is some sort of bash code where I could type: echo [barcode number] > (some code that makes the machine think the barcode number is coming from the serial port) Is this possible? I'm also restricted in what I can actually put on the register. I can't install any new utilities onto the machine. I can put bash scripts on there, but that's about it.
You can do that (as root) with the TIOCSTI ioctl. See the example program at the end of this answer . Use it as echo [whatever] | ./tiocsti > /dev/ttyS0 You can also do that from perl without having to compile anything -- search this site, there are plenty of examples . Another option that I am using, is doing it in hardware: simply connect the machine to itself via an USB->serial adapter, and write to the /dev/ttyUSBx (or /dev/serial/by-id/xx-xx-xx ) whatever you want read from /dev/ttySx (make /dev/ttyUSBx raw with stty raw so it does pass the data unchanged).
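For the hardware loopback option, the setup on the register might look like this sketch (the /dev/ttyUSB0 name and the barcode string are assumptions for illustration; check /dev/serial/by-id/ for a stable device path):

stty -F /dev/ttyUSB0 raw              # pass the bytes through unchanged
echo "012345678905" > /dev/ttyUSB0    # arrives as input on the ttyS0 it is wired to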
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570507", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398021/" ] }
570,514
How to run exactly 1 instance of a program as a process? Or alternatively, how to test for the existence of a running program? Portable methods are preferred.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186590/" ] }
570,530
For example, if I run sudo apt-get -y upgrade and there is a package that requires a restart to upgrade, will the yes flag cause the system to reboot after the command finishes upgrading everything? Or will it still require a manual reboot?

OS and Software:
Debian Buster 10 -> kernel version 4.19 on Raspbian HW
apt 1.8.2 ( armhf )
No, apt on its own won’t reboot. You can check whether the file /var/run/reboot-required exists after running apt to see if a reboot is required. If you use unattended-upgrades , you can configure that to reboot for you.
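A sketch of how you might chain the check after an unattended upgrade, using the path given above:

sudo apt-get -y upgrade
if [ -f /var/run/reboot-required ]; then
    sudo reboot
fi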
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/570530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398034/" ] }
570,548
I'm looking at solutions to convert JSON to a CSV. It seems most of the solutions expect the JSON to be a single object rather than an array of objects. All the solutions I've tried from here seem to break with my input, which comes from curling this site . How can I convert the JSON into a CSV with jq or another tool when the input is an array instead of an object. [ { "id": "4", "link": "https://pressbooks.online.ucf.edu/amnatgov/", "metadata": { "@context": "http://schema.org", "@type": "Book", "name": "American Government", "inLanguage": "en", "copyrightYear": "2016", "disambiguatingDescription": "The content of this textbook has been developed and arranged to provide a logical progression from the fundamental principles of institutional design at the founding, to avenues of political participation, to thorough coverage of the political structures that constitute American government. The book builds upon what students have already learned and emphasizes connections between topics as well as between theory and applications. The goal of each section is to enable students not just to recognize concepts, but to work with them in ways that will be useful in later courses, future careers, and as engaged citizens. ", "image": "https://pressbooks.online.ucf.edu/app/uploads/sites/4/2020/01/American-Government.png", "isBasedOn": "https://ucf-dev.pb.unizin.org/pos2041", "author": [ { "@type": "Person", "name": "OpenStax" } ], "datePublished": "2016-01-06", "copyrightHolder": { "@type": "Organization", "name": "cnxamgov" }, "license": { "@type": "CreativeWork", "url": "https://creativecommons.org/licenses/by/4.0/", "name": "CC BY (Attribution)" } }, "_links": { "api": [ { "href": "https://pressbooks.online.ucf.edu/amnatgov/wp-json/" } ], "metadata": [ { "href": "https://pressbooks.online.ucf.edu/amnatgov/wp-json/pressbooks/v2/metadata" } ], "self": [ { "href": "https://pressbooks.online.ucf.edu/wp-json/pressbooks/v2/books/4" } ] } }] Desired Format: id, link, context, type, name, inLanguage, image, author_type, author_name, license_type, license_url, license_name
The issue is not really that the JSON that you show is an array, but that each element of the array (of which you only have one) is a fairly complex structure. It is straightforward to extract the relevant data from each array entry into a shorter flat array and convert that into CSV with @csv in jq :

jq -r '.[] | [ .id, .link,
        .metadata."@context", .metadata."@type",
        .metadata.name, .metadata.inLanguage, .metadata.image,
        .metadata.author[0]."@type", .metadata.author[0].name,
        .metadata.license."@type", .metadata.license.url,
        .metadata.license.name] | @csv' file.json

... but notice how I'm forced to decide that we're only ever interested in the first author (the .metadata.author sub-structure is an array). The output:

"4","https://pressbooks.online.ucf.edu/amnatgov/","http://schema.org","Book","American Government","en","https://pressbooks.online.ucf.edu/app/uploads/sites/4/2020/01/American-Government.png","Person","OpenStax","CreativeWork","https://creativecommons.org/licenses/by/4.0/","CC BY (Attribution)"

To create author name strings that are concatenations of all author names (and similarly for author types), with ; as delimiter, you could instead of .metadata.author[0].name in the above use [.metadata.author[].name]|join(";") (and [.metadata.author[]."@type"]|join(";") for the type), so that your command becomes

jq -r '.[] | [ .id, .link,
        .metadata."@context", .metadata."@type",
        .metadata.name, .metadata.inLanguage, .metadata.image,
        ( [ .metadata.author[]."@type" ] | join(";") ),
        ( [ .metadata.author[].name ] | join(";") ),
        .metadata.license."@type", .metadata.license.url,
        .metadata.license.name] | @csv' file.json
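To also emit the header row from the desired format, you can prepend a fixed array of column names through the same @csv filter; a sketch using the names from the question:

jq -r '["id","link","context","type","name","inLanguage","image",
        "author_type","author_name","license_type","license_url","license_name"],
       (.[] | [ .id, .link,
                .metadata."@context", .metadata."@type",
                .metadata.name, .metadata.inLanguage, .metadata.image,
                ( [ .metadata.author[]."@type" ] | join(";") ),
                ( [ .metadata.author[].name ] | join(";") ),
                .metadata.license."@type", .metadata.license.url,
                .metadata.license.name ])
       | @csv' file.json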
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/570548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
570,657
I faced an interesting question: count the words from a file where the number of lines is also specified as a command line argument. For example, if the input text file had

Unix is an OS
Linux is the child of Unix
Unix is fun.
End of File

The command to be executed is:

bash test.sh unix.txt 3

where unix.txt is the test file containing the sentences and 3 is the number of lines whose words are to be counted. The answer would be 13. I have used the basic wc commands, but none of them would give the correct answer. So I tried to use a for loop, but I could not work out how to take only that number of lines.
With head -n 3 unix.txt you get the first three lines of your file, and then you can pipe them to wc . So for any arbitrary filename stored in the $file shell variable:

{ head -n 3 | wc -w; } < "$file"

Or:

head -n 3 -- "$file" | wc -w

Though the latter wouldn't work with a file called - with some head implementations, and would output 0 in addition to an error message (by head ) when the file can't be opened, and the failure exit status in that case would be lost unless you use the (non-standard) pipefail option found in some shells.
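Wrapped into the test.sh from the question, with the file name and line count taken from the positional parameters, that might look like this sketch:

#!/bin/sh
# usage: test.sh FILE NLINES
{ head -n "$2" | wc -w; } < "$1"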
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393572/" ] }
570,692
I copied a project directory to a portable hard disk, wiped my laptop and copied the project back, but now git reports loads of changes due to files that previously had 644 permission changing to 755. I could recursively chmod all files and directories to 644, but then I get a load more changes in git (so looks like not everything was 644 previously). Is there any way to chmod only files that have 755 permissions?
The documentation for find (see man find ) writes, -perm mode File's permission bits are exactly mode (octal or symbolic). [...] See the EXAMPLES section for some illustrative examples. So you can match files and change their permissions like this find path/to/files -type f -perm 0755 -exec echo chmod 0644 {} + Remove the echo when you are comfortable that it's showing you what you expect, and run it again.
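Once the dry run prints the commands you expect, drop the echo and, if you like, rerun the match afterwards to confirm nothing with mode 755 remains:

find path/to/files -type f -perm 0755 -exec chmod 0644 {} +
find path/to/files -type f -perm 0755    # should now print nothing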
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/570692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266775/" ] }
570,703
On my local machine I have some directories (with subdirectories) with many files ( jpeg and nef ), like:

1.jpg
1.nef
2.jpg
2.nef
....

What I want to do is sync all files (both jpeg and nef ) to an HD, but after syncing, delete only the nef files on my local computer and keep the jpg . Is there a way to combine this with --remove-source-files ?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/570703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252913/" ] }
570,729
There are some utilities that accept a -- (double dash) as the signal for "end of options", required when a file name starts with a dash:

$ echo "Hello World!" >-file
$ cat -- -file
Hello World!
$ cat -file    # cat - -file fails in the same way.
cat: invalid option -- 'f'
Try 'cat --help' for more information.

But some of those utilities don't show such an option in the manual page. The man page for cat doesn't document the use (or validity) of a -- argument in any of the OS'es. This is not meant to be a Unix - Linux flame war , it is a valid, and, I believe, useful concern. Neither cat , mv , ed (and I am sure many others) document such an option in their manual page that I can find. Note that ./-file is a more portable workaround to the use of -- . For example, the source (dot) command (and written as . ) doesn't (generally) work well with an -- argument:

$ echo 'echo "Hello World!"' >-file
$ . ./-file
Hello World!
$ . -file
ksh: .: -f: unknown option
ksh: .: -i: unknown option
ksh: .: -l: unknown option
ksh: .: -e: unknown option
Usage: . [ options ] name [arg ...]
$ . -- -file    # works in bash. Not in dash, ksh, zsh.
ksh: .: -file: cannot open [No such file or directory]
This is a POSIX requirement for all utilities; see POSIX chapter 12.02 , Guideline 10, for more information:

The first -- argument that is not an option-argument should be accepted as a delimiter indicating the end of options. Any following arguments should be treated as operands, even if they begin with the '-' character.

POSIX recommends all utilities to follow these guidelines. There are a few exceptions like echo (read at OPTIONS) , and special builtins that do not follow the guidelines (like break , dot , exec , etc.):

Some of the special built-ins are described as conforming to XBD Utility Syntax Guidelines. For those that are not, the requirement in Utility Description Defaults that "--" be recognized as a first argument to be discarded does not apply and a conforming application shall not use that argument.

The intent is to document all commands that do not follow the guidelines in their POSIX man page; from POSIX chapter 12.02, third paragraph:

Some of the standard utilities do not conform to all of these guidelines; in those cases, the OPTIONS sections describe the deviations.

As the cat POSIX man page documents no deviations in the OPTIONS section, it is expected that it accept -- as a valid argument. There still may be (faulty) implementations that fail to follow the guideline; most GNU core utilities, in particular, do follow it.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/570729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
570,741
Example contents of the playlist file:

    1. The fire is on - 03:50
    2. Abc dge khji kkt mmy kdj - 09:20
    3. Blowing in the winds - 14:16
    4. By the rivers of Babylon - 15:46
    5. Waka waka it's time for africa - 20:30
    6. DGF djf Kmf pffg jdkf dhf - 28:25
    7. Fdsa djf | kf |- 34:25
    8. Despacito despatico - 41:33
    .........

The command:

    ffmpeg -i "a" -ss "b" -to "c" "output"

Now, the contents from the beginning (i.e. from the serial no.) till the end of the text line (which may include pipes as well) should be the last argument of the command (in the position of 'output'), the timestamp at the end should be the argument for the -ss parameter, and the timestamp in the next line should be the argument of the -to parameter. This is quite similar to this question, but I am not quite sure how to modify the awk command to suit this particular case.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/570741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332496/" ] }
570,764
I'm trying to calculate the geometric mean of a file full of numbers (1 column). The basic formula for a geometric mean is to average the natural log (or log) of all the values and then raise e (or base 10) to that value. My current bash-only script looks like this:

    # Geometric Mean
    count=0;
    total=0;

    for i in $( awk '{ print $1; }' input.txt )
    do
      if (( $(echo " "$i" > "0" " | bc -l) )); then
        total="$(echo " "$total" + l("$i") " | bc -l )"
        ((count++))
      else
        total="$total"
      fi
    done

    Geometric_Mean="$( printf "%.2f" "$(echo "scale=3; e( "$total" / "$count" )" | bc -l )" )"
    echo "$Geometric_Mean"

Essentially:

 - Check every entry in the input file to make sure it is larger than 0, calling bc every time
 - If the entry is > 0, take the natural log (l) of that value and add it to the running total, calling bc every time
 - If the entry is <= 0, do nothing
 - Calculate the geometric mean

This works perfectly fine for a small data set. Unfortunately, I am trying to use this on a large data set (input.txt has 250,000 values). While I believe this will eventually work, it is extremely slow. I've never been patient enough to let it finish (45+ minutes). I need a way of processing this file more efficiently. There are alternative ways, such as using Python:

    # Import the library you need for math
    import numpy as np

    # Open the file
    # Load the lines into a list of float objects
    # Close the file
    infile = open('time_trial.txt', 'r')
    x = [float(line) for line in infile.readlines()]
    infile.close()

    # Define a function called geo_mean
    # Use numpy to create a variable "a" with the ln of all the values
    # Use numpy to EXP() the sum of all of a and divide it by the count of a
    # Note ... this will break if you have values <= 0
    def geo_mean(x):
        a = np.log(x)
        return np.exp(a.sum()/len(a))

    print("The Geometric Mean is: ", geo_mean(x))

I would like to avoid using Python, Ruby, Perl ... etc. Any suggestions on how to write my bash script more efficiently?
Please don't do this in the shell. There is no amount of tweaking that would ever make it remotely efficient. Shell loops are slow and using the shell to parse text is just bad practice. Your whole script can be replaced by this simple awk one-liner which will be orders of magnitude faster:

    awk 'BEGIN{E = exp(1);} $1>0{tot+=log($1); c++} END{m=tot/c; printf "%.2f\n", E^m}' file

For example, if I run that on a file containing the numbers from 1 to 100, I get:

    $ seq 100 > file
    $ awk 'BEGIN{E = exp(1);} $1>0{tot+=log($1); c++} END{m=tot/c; printf "%.2f\n", E^m}' file
    37.99

In terms of speed, I tested your shell solution, your python solution and the awk I gave above on a file containing the numbers from 1 to 10000:

    ## Shell
    $ time foo.sh
    3677.54

    real    1m0.720s
    user    0m48.720s
    sys     0m24.733s

    ### Python
    $ time foo.py
    The Geometric Mean is:  3680.827182220091

    real    0m0.149s
    user    0m0.121s
    sys     0m0.027s

    ### Awk
    $ time awk 'BEGIN{E = exp(1);} $1>0{tot+=log($1); c++} END{m=tot/c; printf "%.2f\n", E^m}' input.txt
    3680.83

    real    0m0.011s
    user    0m0.010s
    sys     0m0.001s

As you can see, the awk is even faster than the python and far simpler to write. You can also make it into a "shell" script, if you like. Either like this:

    #!/bin/awk -f
    BEGIN{
        E = exp(1);
    }
    $1>0{
        tot+=log($1);
        c++;
    }
    END{
        m=tot/c;
        printf "%.2f\n", E^m
    }

or by saving the command in a shell script:

    #!/bin/sh
    awk 'BEGIN{E = exp(1);} $1>0{tot+=log($1); c++;} END{m=tot/c; printf "%.2f\n", E^m}' "$1"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/570764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398239/" ] }
570,770
The sysctl utility allows a Linux admin to query and modify kernel parameters at runtime. For example, to change the swappiness of a Linux system to 0, we can:

    echo 0 > /proc/sys/vm/swappiness

Or we can use sysctl :

    sysctl -w vm.swappiness=0

To make the value persistent, Archwiki suggests writing vm.swappiness=0 to the /etc/sysctl.d/99-swappiness.conf file. For persistent silent boot , Archwiki suggests writing kernel.printk = 3 3 3 3 to /etc/sysctl.d/20-quiet-printk.conf . Similarly, I have a 99-sysrq.conf on my system which works without the number as well. Archwiki has a sysctl page which mentions the importance of the number: Note: From version 207 and 21x, systemd only applies settings from /etc/sysctl.d/*.conf and /usr/lib/sysctl.d/*.conf .  If you had customized /etc/sysctl.conf , you need to rename it as /etc/sysctl.d/99-sysctl.conf .  If you had e.g. /etc/sysctl.d/foo , you need to rename it to /etc/sysctl.d/foo.conf . What does the number in 99-swappiness.conf and 20-quiet-printk.conf denote here?
The number at the beginning of the name of configuration files is used as an easily readable way to sort them, with the aim of defining the order of precedence among the entries they contain. From man 5 sysctl.d 1 (emphasis mine): CONFIGURATION DIRECTORIES AND PRECEDENCE ... All configuration files are sorted by their filename in lexicographic order, regardless of which of the directories they reside in. If multiple files specify the same option, the entry in the file with the lexicographically latest name will take precedence . It is recommended to prefix all filenames with a two-digit number and a dash, to simplify the ordering of the files. 1 The man page for sysctl.d is shipped as part of the systemd package and the quoted text comes from version 244.3 on Arch Linux. The wording differs to some extent, but not significantly (for the purpose of this Q/A), from both the version currently available at The Linux man-pages project and the version you can find on freedesktop.org .
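A minimal sketch of how the precedence plays out (the file names and the vm.swappiness values here are purely illustrative):

    # /etc/sysctl.d/20-example.conf contains:  vm.swappiness = 60
    # /etc/sysctl.d/99-example.conf contains:  vm.swappiness = 10
    sysctl --system        # applies all sysctl.d files, sorted lexicographically
    sysctl vm.swappiness   # reports 10: the 99-* file sorts later, so it wins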
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/570770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274717/" ] }
571,075
if I have a folder that's restricted with say 600 , thus no access for group or everyone , but the folder contains files with 777 , would this be safe? Are there any work-arounds to access the 777 file as group or everyone , despite it residing inside a 600 folder?
You can't access/enter a directory (or create files in it) with permissions set to 600 as a regular user, because the execute (search) bit is missing. You are also not able to access the files inside at all with those folder permissions; at most the owner can still list the file names.
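Both effects can be seen in a quick test, run as the directory's owner (for any other user even the listing fails, since 600 grants group and others nothing):

    mkdir dir && touch dir/file && chmod 777 dir/file
    chmod 600 dir
    ls dir           # the read bit still allows listing the names
    cat dir/file     # fails with "Permission denied": no execute (search) bit on dir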
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352497/" ] }
571,135
After installing PyCharm on Pop! OS (by extracting the download) there is no easy way to run the program. I have probably installed it in my Documents folder. Not sure what the convention is. To run PyCharm I need to go to the folder pycharm-community-2019.2.4/bin , open terminal and run ./pycharm.sh Any way to make my life easier?
You can use main menu. There is Tools -> Create Desktop Entry . It might require root permissions.
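If the menu entry is missing for some reason, a hand-written launcher is a workable fallback. This is only a sketch: the Exec path below assumes the folder layout from the question and must match wherever you actually unpacked PyCharm:

    cat > ~/.local/share/applications/pycharm.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=PyCharm Community
    Exec=/home/user/Documents/pycharm-community-2019.2.4/bin/pycharm.sh
    Terminal=false
    EOF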
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233248/" ] }
571,141
I have a printer shared over the network, connected via USB to a Windows box. I am printing from my Linux laptop with Pop!_OS 19.10, but sometimes it will say that the print job is complete when actually nothing has printed. I've already seen a lot of people with the same problem; however, in my case it happens randomly. Today I received an email with two PDFs. The first PDF I tried to print didn't print, the second one printed without any problem. I tried to print the first PDF again, just to check, but I had the same problem again. This has been happening to me for a couple of weeks or so. Until then I had no problem printing from my Linux laptop.

    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=19.10
    DISTRIB_CODENAME=eoan
    DISTRIB_DESCRIPTION="Pop!_OS 19.10"
    NAME="Pop!_OS"
    VERSION="19.10"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Pop!_OS 19.10"
    VERSION_ID="19.10"
    HOME_URL="https://system76.com/pop"
    SUPPORT_URL="http://support.system76.com"
    BUG_REPORT_URL="https://github.com/pop-os/pop/issues"
    PRIVACY_POLICY_URL="https://system76.com/privacy"
    VERSION_CODENAME=eoan
    UBUNTU_CODENAME=eoan
    LOGO=distributor-logo-pop-os

    cups:
      Installed: 2.2.12-2ubuntu1
      Candidate: 2.2.12-2ubuntu1
      Version table:
     *** 2.2.12-2ubuntu1 500
            500 http://us.archive.ubuntu.com/ubuntu eoan/main amd64 Packages
            100 /var/lib/dpkg/status

    smbclient:
      Installed: 2:4.10.7+dfsg-0ubuntu2.4
      Candidate: 2:4.10.7+dfsg-0ubuntu2.4
      Version table:
     *** 2:4.10.7+dfsg-0ubuntu2.4 500
            500 http://us.archive.ubuntu.com/ubuntu eoan-security/main amd64 Packages
            500 http://us.archive.ubuntu.com/ubuntu eoan-updates/main amd64 Packages
            100 /var/lib/dpkg/status
        2:4.10.7+dfsg-0ubuntu2 500
            500 http://us.archive.ubuntu.com/ubuntu eoan/main amd64 Packages

EDIT: I could print the file from the CLI using lp file.pdf .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/308127/" ] }
571,233
I understand the concept of managing permissions on Linux with chmod, using the first digit for the user , the second for the group and the third for other users, as described in this answer in Understanding UNIX permissions and file types . Let's say I have a Linux system with 5 users: admin , usera , userb , userc and guest . By default, the users usera , userb and userc will have execute permission on all files inside /usr/bin , so these users can use the command line of the system, executing the files in there, as those files have 755 permissions. So far it's completely ok. However, I'd like to forbid the user guest from executing files in the folder /usr/bin . I know I could achieve that by changing the permission of all files inside this folder to something like 750 with chmod , but if I do that I'll mess up the permissions of the users usera , userb and userc because they will also be forbidden to execute files. On my computer, all the files in /usr/bin belong to the group root , so I know I could create a newgroup , change the group of all those files to it and add usera , userb and userc to newgroup . But doing that sounds like way too much modification of the system's default settings. Does anyone know a smarter way of solving this problem? How can I forbid a single user from using the command line (or executing any file on PATH ) without an overcomplicated solution that requires changing the permissions of too many files?
Use ACLs to remove the permissions. In this case, you don't need to modify the permissions of all the executables; just remove the execute permission from /usr/bin/ to disallow traversal of that directory and therefore access to any files within.

    setfacl -m u:guest:r /usr/bin

This sets the permissions of the user ( u: ) guest to just read ( r ) for the directory /usr/bin , so they can ls that directory but not access anything within. You could also just remove all permissions:

    setfacl -m u:guest:- /usr/bin
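To check the result, and to undo it later, a hedged sketch (user name as in the question):

    getfacl /usr/bin             # should now show an entry like: user:guest:r--
    setfacl -x u:guest /usr/bin  # removes the ACL entry again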
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/571233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103357/" ] }
571,293
I have over 20 years of experience in Un*x, and I have been using scp since time immemorial. scp runs over SSH, therefore I consider it as secure as the latter. Now, in my company, where I recently took up a job, the Security Officer says that scp should not be used, because it is unsafe and obsolete; sftp should be favoured over it (yet that runs over SSH too...). I don't immediately agree with this, based on the infrastructure underneath scp, and based on how popular and trusted scp is in the professional community, and among my colleagues from my previous companies. I don't want to change my mind just because some CSO said it. That said, is there a real issue here? Has scp suddenly become the black sheep? Is it just a debate among security experts in higher spheres? Or is scp just fine to use?
The way I read it is, "it depends". According to CVE-2019-6111 : However, the scp client only performs cursory validation of the object name returned (only directory traversal attacks are prevented). A malicious scp server (or Man-in-The-Middle attacker) can overwrite arbitrary files in the scp client target directory. If recursive operation (-r) is performed, the server can manipulate subdirectories as well (for example, to overwrite the .ssh/authorized_keys file). So if scp runs between hosts on your premises (datacenter), it is likely (but not certain) that no MITM attack can be performed. However, if you fetch business files from a customer/partner over the world wide web, you should rely on sftp , or better, ftps . (At least, ensure that the scp client and ssh server have proper versions.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/571293", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219899/" ] }
571,315
The file looks like this (one big line):

    a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a; etc......

Now I want to cut the text and do a line break after every fifth semicolon ( ; ) so it looks like this:

    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    etc....

How do I do this?
With tr and paste :

    tr ';' '\n' < semicolons | paste -d';' - - - - -

Tests:

    $ cat semicolons
    a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a;a
    $ tr ';' '\n' < semicolons | paste -d';' - - - - -
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a
    a;a;a;a;a

Both tr and paste are specified in the POSIX standard. To add the required semicolon ; at the end of the lines:

    tr ';' '\n' < semicolons | paste -d';' - - - - - | sed s/$/\;/

Tests:

    $ tr ';' '\n' < semicolons | paste -d';' - - - - - | sed s/$/\;/
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
    a;a;a;a;a;
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398776/" ] }
571,355
File names :

    _10_-_Overriding_or_customizing_the_rest_end_point-rkkfgI502f0.mp4
    _11_-_Expose_ids_in_json_response-CrDXtLfiZos.mp4
    _12_-_Create_angular_8_project_using_Angular_CLI-kSXkW1hF0KU.mp4
    _13_-_Create_a_model_class_for_Book_entity-Hfm3da1Ze8E.mp4
    _14_-_Display_the_list_of_books_in_html_table_with_hard-coded_values-b5R8CsMrOO4.mp4
    _15_-_Create_a_new_book-list_component_and_display_the_book_images-Tto3r229fFA.mp4
    _16_-_Make_a_HTTP_GET_request_to_the_Spring_boot_application-98RfVQ9Z3ZM.mp4
    _17_-_Understanding_the_Observable_and_Observer-NKLirs5SFYk.mp4
    _18_-_Call_a_service_method_to_get_the_book_array-yQ34aPdH1_0.mp4
    _19_-_Fix_the_error_CORS_policy_and_display_the_data_in_html_table-YSEAdODxMfE.mp4
    _1_-_Course_Introduction-b4pjjftApmY.mp4
    _20_-_Replace_the_blank_images_with_real_images-fut1f40FHo4.mp4
    _2_-_Setup_the_development_environment-RbUGvRAUpSM.mp4
    _3_-_Setup_the_MySQL_database-D3krImBhofo.mp4
    _4_-_Create_repository_in_Github_and_add_it_to_Eclipse_IDE-MAkVtB_MhzI.mp4
    _5_-_Create_spring_boot_project_using_spring_initializer-GsmqGxEv6rg.mp4
    _6_-_Configure_application_properties_and_commit_changes_to_github-HqDZKih-Ehk.mp4
    _7_-_Create_an_entity_class_for_book_table-pfxt3BeU_e0.mp4
    _8_-_Create_an_entity_class_for_book_table-eg1pJJLAzAQ.mp4
    _9_-_Create_rest_repositories_for_book_and_category_entity-w7vFTSCWCOM.mp4

How can I remove the single _ character from the beginning of the file names?
In the directory that contains those files, issue:

    for file in _*; do mv "$file" "${file#_}"; done

${file#exp} deletes the shortest match of the pattern exp from the beginning of file .
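If you would rather preview the renames first, a harmless dry run is to print the commands instead of executing them:

    for file in _*; do echo mv "$file" "${file#_}"; done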
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332496/" ] }
571,362
If I understand correctly, SSH tunneling works as follows:

 1. Machine B is running the SSH daemon (process b ).
 2. Machine A opens an ssh client (process a ).
 3. Machine A connects its ssh client to machine B's server (say on port 22). In other words, process a on machine A is communicating with process b on machine B.
 4. A tunnel is opened, say by running -L 4444:localhost:5555 on machine A.
 5. Any traffic to localhost:4444 on machine A is intercepted and sent to process a , which sends a special message over the connection on port 22.
 6. Process b receives the special message and redirects it to localhost:5555 on machine B.

Is there a replacement for processes a and b that performs this tunneling trick but that isn't SSH? In other words, it doesn't have to perform encryption, and importantly, it must not require the user on machine A to log into machine B.
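For the unencrypted, login-free case, one hedged sketch uses socat (assuming it is installed; machineB is a placeholder host name). Note one difference from steps 4 to 6 above: machine B's port 5555 must be reachable from machine A directly, unless you run a second socat on B:

    # on machine A: listen on 4444 and forward every connection to machineB:5555
    socat TCP-LISTEN:4444,fork,reuseaddr TCP:machineB:5555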
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238234/" ] }
571,390
I have the following script and I try to create an empty text file. I tried touch and I tried echo -n > ; they both fail.

    #!/bin/bash
    set filename_result="contents"
    echo "filename=$filename_result" # Shows: filename=
    rm -f "$filename_result".json
    touch $filename_result # Error: usage: touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
    # Note that `% touch myfile` works fine on the command line
    touch -- "$filename_result" # Error: touch: : No such file or directory
    echo -n > "$filename_result" # Error: No such file or directory
    echo "[" >> "$filename_result"
    for file in *json
    do
      echo "examining file: $file"
      echo "\n" >> "$filename_result"
      cat $file >> "$filename_result"
      echo "\n" >> "$filename_result"
    done
    echo "\n]" >> "$filename_result"
    mv $filename_result "$filename_result".json

EDIT: Printing the filename_result variable to the screen shows that it's empty after setting it?

    echo "filename=$filename_result" # Shows: filename=
Try this:

    filename_result=contents
    touch -- "$filename_result"

Or:

    set contents
    touch -- "$1"
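The reason the original script printed an empty name: in bash, set does not assign variables (that is csh syntax); it sets the positional parameters instead, so $filename_result stays empty while the word lands in $1 :

    set filename_result="contents"
    echo "$1"                  # prints: filename_result=contents
    echo "$filename_result"    # prints nothing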
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571390", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184563/" ] }
571,598
I just started learning UNIX, so the question might seem really newbie, but I would appreciate the answer, as I've been trying to work it out on my own for an hour already with Google's help, with no success however.

    cat /etc/shadow 2>&1 | wc -l

What would be the effect of this command? My guess is one of:

 1. The command prints the line count of /etc/shadow ; if there is a standard error, it will be redirected to standard output and the error lines will be counted.
 2. The command prints the files of /etc/shadow ; if there is a standard error, it will be redirected to standard output and the error's lines will be counted.
X>&Y is file descriptor redirection : everything written to fd X actually goes to wherever fd Y currently points. 2>&1 therefore throws STDERR's output into STDOUT. wc -l writes the number of input lines to STDOUT. Together , the command cat /etc/shadow 2>&1 | wc -l returns the number of lines in /etc/shadow plus the number of error lines cat produces. If you don't want to count those error lines, just use cat /etc/shadow | wc -l .
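You can watch the error line being counted by running it as an unprivileged user (the exact wording of the message varies by system):

    cat /etc/shadow | wc -l        # "Permission denied" goes to the terminal; wc prints 0
    cat /etc/shadow 2>&1 | wc -l   # the error line goes through the pipe; wc prints 1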
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399034/" ] }
571,741
I read man ssh-add and I don't see a flag for passing a passphrase like the -p we have in ssh. I tried:

    # the -K is for macOS's ssh-add but it's not relevant
    ssh-add -q -K ~/.ssh/id_rsa <<< $passphrase

But I get the following output:

    ssh_askpass: exec(/usr/X11R6/bin/ssh-askpass): No such file or directory

How can I pass a password without triggering a prompt? Update: Adding more context to what I'm doing: I'm creating a script to create SSH keys for me. It will generate the passphrase, generate the SSH key using that passphrase, and add it to the agent.

    # ...
    passphrase=$(generate_password)
    ssh-keygen -t rsa -b 4096 -C $email -f $filename -N $passphrase -q
    ssh-add -q -K $filename

Following Stephen's answer, I'm not sure how that would work. It seems I'd have to create a temporary script and save it to disk in order for SSH_ASKPASS to work. Any ideas?
From the ssh-add manpage:

    DISPLAY and SSH_ASKPASS
        If ssh-add needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh-add does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS (by default ``ssh-askpass'') and open an X11 window to read the passphrase. This is particularly useful when calling ssh-add from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.)

So we can use this to cheat a little. We start with no identities in the agent:

    $ ssh-add -l
    The agent has no identities.

So now we need a program that will supply the password:

    $ cat x
    #!/bin/sh
    echo test123

And then convince ssh-add to use that script:

    $ DISPLAY=1 SSH_ASKPASS="./x" ssh-add test < /dev/null
    Identity added: test (sweh@godzilla)

And there it is:

    $ ssh-add -l
    2048 SHA256:07qZby7TafI10LWAMSvGFreY75L/js94pFuNcbhfSC0 sweh@godzilla (RSA)

Edit to add, based on the revised question: The password could be passed as a variable, and the askpass script use that variable. For example:

    $ cat /usr/local/sbin/auto-add-key
    #!/bin/sh
    echo $SSH_PASS
    $ SSH_PASS=test123 DISPLAY=1 SSH_ASKPASS=/usr/local/sbin/auto-add-key ssh-add test < /dev/null
    Identity added: test (sweh@godzilla)

In the workflow presented you would do SSH_PASS=$passphrase to use the newly generated passphrase.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/571741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77781/" ] }
571,801
I suspect this is intentional (rather than just a bug). If so, please direct me to the relevant documentation for a justification. ~$ i=0; ((i++)) && echo true || echo falsefalse~$ i=1; ((i++)) && echo true || echo falsetrue The only difference between the two lines is i=0 vs i=1 .
This is because i++ does post-increment , as described in man bash . This means that the value of the expression is the original value of i , not the incremented value.

    ARITHMETIC EVALUATION
        The shell allows arithmetic expressions to be evaluated, under certain circumstances (see the let and declare builtin commands and Arithmetic Expansion). Evaluation is done in fixed-width integers with no check for overflow, though division by 0 is trapped and flagged as an error. The operators and their precedence, associativity, and values are the same as in the C language. The following list of operators is grouped into levels of equal-precedence operators. The levels are listed in order of decreasing precedence.

        id++ id--
            variable post-increment and post-decrement

So that:

    i=0; ((i++)) && echo true || echo false

acts like:

    i=0; ((0)) && echo true || echo false

except that i is incremented too; and:

    i=1; ((i++)) && echo true || echo false

acts like:

    i=1; ((1)) && echo true || echo false

except that i is incremented too. The return value of the (( )) construct is truthy ( 0 ) if the value is nonzero, and vice versa. You can also test how the post-increment operator works:

    $ i=0
    $ echo $((i++))
    0
    $ echo $i
    1

And pre-increment for comparison:

    $ i=0
    $ echo $((++i))
    1
    $ echo $i
    1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136914/" ] }
571,812
Short question: This is done on a Mac:

    time for i in {1..10}; do python3 foo.py ; done

but CTRL-C won't be able to stop it. How can I make it work? (Or: how else can I time N runs?)

Details: There is a standard test program from GitHub, and I can run it with python3 foo.py or time python3 foo.py , and it reports a 0.27 second running time. Since the test is wired up for 1000 test cases and to a test framework, I don't want to change the test, so I want to time it 100 times. It seems the time command has no way to time something 10 or 100 times (it is somewhat strange that, after so many years, something this useful and simple is not built into time ). The following can be used:

    time for i in {1..10}; do python3 foo.py ; done

but CTRL-C won't be able to stop it: it will stop that one Python script and keep running the rest. So fortunately it wasn't 100 or 1000 that was typed in. Is there a way to use time , or a better way to do it (besides changing the program)?
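One hedged workaround: since Ctrl-C makes the interrupted python3 exit non-zero, letting the loop break on failure means a single interrupt ends the whole timing run:

    time (for i in {1..10}; do python3 foo.py || break; done)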
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19342/" ] }
571,815
I'm trying to update bash on CentOS 7 - I need at least 4.4 for my project and the default shell on it is 4.2. It's a production server, so I literally just want a new version of bash as the default shell and that's it; I don't want to be messing around too much or updating anything else. Anyway, running: yum update bash returns No packages marked for update The command: yum repolist all shows that the CentOS 7 updates repo is enabled (not CentOS 7.* base/updates though). As a result, this command: yum --enablerepo=updates update bash does nothing. I can share my CentOS-Base.repo file, if it helps. What am I doing wrong?
The point of distributions like RedHat (and thus CentOS) is that they're stable; they don't have the latest version of every piece of software, they have a consistent version. For CentOS 7 the current version is bash-4.2.46-33.el7 . RedHat will backport security fixes but may not backport functionality enhancements, because they can cause compatibility issues. If you need a different version then you may need to compile it from source and place it in a non-standard location (e.g. $HOME/bin/bash ). Don't overwrite /bin/bash , because the OS may replace it at any time through yum update . In comparison, RedHat 8 (CentOS 8) has bash 4.4, and Debian 10 has bash 5.0.
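A sketch of such a local, non-destructive build (the version number is an example; check ftp.gnu.org/gnu/bash for the latest 4.4 patch release):

    curl -O https://ftp.gnu.org/gnu/bash/bash-4.4.18.tar.gz
    tar xzf bash-4.4.18.tar.gz && cd bash-4.4.18
    ./configure --prefix="$HOME/local" && make && make install
    "$HOME/local/bin/bash" --version   # point your project at this path, not /bin/bash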
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/571815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/391872/" ] }
571,877
I am using Cygwin on Windows 10 to ssh to my servers, and recently I noticed that whenever I press CTRL+o the entire session/Cygwin window freezes up. I tested it out in PuTTY as well and the same thing happens. I have also noticed that pressing CTRL+o again causes it to unfreeze. I am pretty sure this did not happen before, as I usually use the nano text editor and CTRL+o for saving the file. This is causing a lot of frustration now, as every time I try to save a file the whole session gets stuck. I have also tried the stty -ixon command, but it did not help, and this seems unrelated to CTRL+s and CTRL+q anyway. Any ideas what might be causing this and whether it can be disabled?
Do you have, by any chance, NZXT CAM software running? The default hotkey to activate its overlay is Ctrl+O. Remove the assigned hotkey in NZXT CAM Settings > Overlay and you should be good.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103422/" ] }
571,949
I know that I can use unset -f $FUNCTION_NAME to unset a single function in bash / zsh, but how do I unset all functions?
In the zsh shell, you may disable all functions using disable -f -m '*' (literally, "disable each function whose name matches * "). You may then enable them again with the analogous enable call. You may also use unset in a similar way to remove the functions completely from the current environment: unset -f -m '*'
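A quick sanity check of the difference between the two, in an interactive zsh (the function name is arbitrary):

    f() { echo hi; }
    disable -f -m '*'   # f is still defined, but calling it now fails
    enable -f -m '*'    # brings every disabled function back
    unset -f -m '*'     # after this, f is gone for good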
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/571949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/344185/" ] }
572,051
In bash I did the following. This if expression will evaluate to true if the Redhat version is 7.5 or 7.6:

    if [[ ` cat /etc/redhat-release | awk '{print $7}' ` == "7.5" ]] || [[ ` cat /etc/redhat-release | awk '{print $7}' ` == "7.6" ]]
    then
    ...

Can we do it in a more elegant way with regular expressions? This is the content of /etc/redhat-release :

    cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 7.6 (Maipo)
You can do this with bash 's built-in string matching. Note that this uses glob (wildcard) patterns, not regular expressions.

    if [[ $(cat /etc/redhat-release | awk '{print $7}') == 7.[56] ]]

Or, if we eliminate the UUoC :

    if [[ $(awk '{print $7}' /etc/redhat-release) == 7.[56] ]]

or...

    if [[ $(cat /etc/redhat-release) == *" release 7."[56]" "* ]]

or even (thanks to @kojiro)...

    if [[ $(< /etc/redhat-release) == *" release 7."[56]" "* ]]

(Note that wildcards at the beginning and end are needed to make it match the entire line. The quoted space after the number is to make sure it doesn't accidentally match "7.50".) Or if you really want to use regular expressions, use =~ and switch to RE syntax:

    if [[ $(< /etc/redhat-release) =~ " release 7."[56]" " ]]

(Note that the part in quotes will be matched literally, so . doesn't need to be escaped or bracketed (as long as you don't enable bash 3.1 compatibility). And RE matches aren't anchored by default, so you don't need anything at the ends like in the last one.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/572051", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
572,241
In my scripts I have a function called messages . I wrote it in Linux Mint with no problems running it, but when I moved it to a Debian Buster station, the function clashes with /usr/bin/messages . I have a startup script that sources the script messages :

startup_script

    # call to messages script
    . messages

messages

    messages() {
      # reformat the arguments and return them
    }

later on in startup_script

    messages "This is a message"

Which throws:

    ./startup_script: line 35: .: /usr/bin/messages: cannot execute binary file
    messages: could not open mailbox `/path/to/my/script/<string passed to my function>': No such file or directory

So I get a bunch of errors related to /usr/bin/messages being called instead of my function. After adding type messages "This is a message" , the relevant output is:

    messages is /usr/bin/messages

I have the option of renaming my function¹, but maybe there's a better way to handle this situation. How do I tell my script to ignore system binaries and use my own functions? ¹ The function is called in several scripts, many times, so it is not the easiest option to just change the name.
This is how . file works: If file does not contain a <slash>, the shell shall use the search path specified by PATH to find the directory containing file. This behavior is specified by POSIX . Your first error is ./startup_script: line 35: .: /usr/bin/messages: cannot execute binary file It's similar when you call . echo : -bash: .: /bin/echo: cannot execute binary file You're trying to source the binary file /usr/bin/messages . In effect your file where the function is defined is not sourced at all, the function is not defined in the current script. This means later messages still means /usr/bin/messages , not the function. The relevant line should be . ./messages or . /full/path/to/messages (i.e. path to your file, not to the binary).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/572241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338177/" ] }
572,281
When I use ! as a field separator, e.g. awk -F! , it gives the error message bash: !: event not found . Why? It accepts awk -F"\!" . Bash version 3.2.25.
! is the trigger character for Bash's history expansion . If that's enabled, something like !foo on a command line gets expanded to the latest command that starts with foo . Or if there is no such command line, the shell gives an error, like here. It shouldn't do anything as the last character of a word, though. This actually works as you intended in all versions of Bash I tried:

    $ echo 'aa!bb' | awk -F! '{print $1}'
    aa

In more recent versions, ! also shouldn't do anything before an ending double-quote, so -F"!" works in later versions, but not in 3.2. A backslash or single-quotes will work to escape it, i.e. neither \!foo nor '!foo' would expand. Of course you could also stop it by disabling history expansion completely, with set +H . With double-quotes it's weirder. Within them, a backslash disables the history expansion but the backslash itself gets left in place:

    $ printf '%s\n' "\!foo"
    \!foo

In the case of awk -F"\!" this works because awk itself removes the backslash.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/572281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343138/" ] }
572,294
cp is a massively popular Linux tool maintained by the coreutils team of the GNU foundation. By default, files with the same name will be overwritten, if the user wants to change this behaviour they can add --no-clobber to their copy command: -n, --no-clobber do not overwrite an existing file (overrides a previous -i option) Why not something like --no-overwrite ?
“ Clobber ” in the context of data manipulation means destroying data by overwriting it. In the context of files in a Unix environment, the word was used at least as far back as the early 1980s, possibly earlier. Csh had set noclobber to configure > to refuse to overwrite an existing file (later set -o noclobber in ksh93 and other sh-style shells). When GNU coreutils added --no-clobber (in 2009), they used the same vocabulary that shells were using.
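The shell behaviour the name comes from is easy to demonstrate in bash (the file name is arbitrary):

    set -o noclobber
    echo one > file
    echo two > file    # refused: "cannot overwrite existing file"
    echo two >| file   # '>|' overrides noclobber explicitly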
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/572294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123737/" ] }
572,424
I want a script to curl to a file and to put the status code into a variable (or, at least, enable me to test the status code). I can see I can do it in two calls, e.g.:

    url=https://www.gitignore.io/api/nonexistentlanguage
    x=$(curl -sI $url | grep HTTP | grep -oe '\d\d\d')
    if [[ $x != 200 ]] ; then
        echo "$url SAID $x" ; return
    fi
    curl $url # etc

... but presumably there's a way to avoid the redundant extra call? $? doesn't help: status code 404 still gets a return code of 0.
    #!/bin/bash
    URL="https://www.gitignore.io/api/nonexistentlanguage"

    response=$(curl -s -w "%{http_code}" $URL)

    http_code=$(tail -n1 <<< "$response")  # get the last line
    content=$(sed '$ d' <<< "$response")   # get all but the last line which contains the status code

    echo "$http_code"
    echo "$content"

(There are other ways, like --write-out to a temporary file. But my example does not need to touch the disk to write any temporary file and remember to delete it; everything is done in RAM.)
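If you only need the status code and can discard the body, a shorter variant of the same --write-out idea (same URL variable as above):

    http_code=$(curl -s -o /dev/null -w "%{http_code}" "$URL")
    [ "$http_code" = 200 ] || echo "$URL SAID $http_code"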
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/572424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181263/" ] }
572,427
I've done tons of research on Computrace from Absolute Software and I haven't found a solid answer to: Does it work on Linux? I've read the following research papers and they don't refer to Linux or *NIX like systems once: Deactivate the Rootkit Absolute Backdoor Revisited They're all focused on reverse engineering the agents/binaries that are dropped onto Windows machines. I've looked at the running processes on multiple Linux systems with Computrace enabled and there isn't any sign of it. So I guess I've answered my own question, but for some reason I don't feel assured that it is 100% NOT working on Linux. If anyone here has experience with Computrace or has also tested themselves, please let me know!
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/572427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
572,616
I have two scripts that use the GPU and train ML models. I want to start them before I go to sleep so they work during the night, and I expect to see some results in the morning. But because GPU memory is limited, I want to run them serially instead of in parallel. I can do it with python train_v1.py && python train_v2.py ; but suppose I started training train_v1 . In the meantime, because the training takes a long time, I implemented and finished the second script, train_v2.py , and I want to run it automatically when python train_v1.py is finished. How can I achieve that? Thank you.
Here's an approach that doesn't involve looping and checking if the other process is still alive, or calling train_v1.py in a manner different from what you'd normally do:

    $ python train_v1.py
    ^Z
    [1]+  Stopped                 python train_v1.py
    $ % && python train_v2.py

The ^Z is me pressing Ctrl + Z while the process is running, suspending train_v1.py by sending it a SIGTSTP signal. Then, I tell the shell to wake it with % , using it as a command to which I can add the && python train_v2.py at the end. This makes it behave just as if you'd done python train_v1.py && python train_v2.py from the very beginning. Instead of % , you can also use fg . It's the same thing. If you want to learn more about these types of features of the shell, you can read about them in the "JOB CONTROL" section of bash's manpage .

EDIT: How to keep adding to the queue. As pointed out by jamesdlin in a comment, if you try to continue the pattern to add train_v3.py for example before v2 starts, you'll find that you can't:

    $ % && python train_v2.py
    ^Z
    [1]+  Stopped                 python train_v1.py

Only train_v1.py gets stopped because train_v2.py hasn't started, and you can't stop/suspend/sleep something that hasn't even started. $ % && python train_v3.py would result in the same as python train_v1.py && python train_v3.py because % corresponds to the last suspended process. Instead of trying to add v3 like that, one should instead use history:

    $ !! && python train_v3.py
    % && python train_v2.py && python train_v3.py

One can do history expansion like above, or recall the last command with a keybinding (like up) and add v3 to the end:

    $ % && python train_v2.py && python train_v3.py

That's something that can be repeated to add more to the pipeline:

    $ !! && python train_v3.py
    % && python train_v2.py && python train_v3.py
    ^Z
    [1]+  Stopped                 python train_v1.py
    $ !! && python train_v4.py
    % && python train_v2.py && python train_v3.py && python train_v4.py
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/572616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398373/" ] }
572,628
I just upgraded an Ubuntu Server from 18.10 to 19.04 and then to 19.10. I think this upgrade also upgraded tmux to a newer version. Since then my tmux scripts, which build some dashboards, are no longer working. When issuing a command like tmux send-keys "echo 'test'" C-m; I get a lost server message. This happens when nothing has attached to the session which contains the pane being targeted. When I start a session and attach to it, then send-keys does work. The syslog contains the following entry:

    Mar 12 23:27:33 machine kernel: [   27.074805] tmux: server[2657]: segfault at 751 ip 000056042469f029 sp 00007ffe602aa6f0 error 4 in tmux[560424675000+62000]

This is what my creation script looks like; it is invoked in crontab as @reboot , but the problem also exists when executing it manually:

    SESSION=stuff
    tmux new-session -d -s $SESSION -n 'homepage'
    tmux split-window -h -p 50
    tmux select-pane -t 1; tmux send-keys "./lhp.sh" C-m;
    tmux select-pane -t 2; tmux send-keys "./lnginx.sh" C-m;
    tmux split-window -v -p 50
    tmux select-pane -t 3; tmux send-keys "./lsmr.sh" C-m;
    tmux new-window -t $SESSION -n 'shells'
    tmux split-window -h -p 50
    tmux select-window -t :1;

And at some later point in time (hours or days) I invoke tmux attach-session -t stuff to view the content. Does anyone know how I can continue using it as I used to?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/572628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61956/" ] }
572,695
I want to run some simulations using a Python tool that I made. The catch is that I have to call it multiple times with different parameters/arguments. For now, I am using multiple for loops for the task, like:

    for simSeed in 1 2 3 4 5
    do
        for launchPower in 17.76 20.01 21.510 23.76
        do
            python sim -a $simSeed -p $launchPower
        done
    done

In order for the simulations to run simultaneously, I append a & at the end of the line where I call the simulator:

    python sim -a $simSeed -p $launchPower &

Using this method I am able to run multiple such seeds. However, since my computer has limited memory, I want to re-write the above script so that it launches the inner for loop in parallel and the outer for loop sequentially. As an example, for simSeed = 1 , I want several processes to run in parallel with launchPower equal to 17.76 20.01 21.510 23.76 . As soon as this part is complete, I want the script to run for simSeed = 2 , again with parallel processes with launchPower equal to 17.76 20.01 21.510 23.76 . How can I achieve this task? TLDR: I want the outer loop to run sequentially and the inner loop to run in parallel, such that when the last parallel process of the inner loop finishes, the outer loop moves to the next iteration.
GNU parallel has several options to limit resource usage when starting jobs in parallel. The basic usage for two nested loops would be:

    parallel python sim -a {1} -p {2} ::: 1 2 3 4 5 ::: 17.76 20.01 21.510 23.76

If you want to launch at most 5 jobs at the same time, e.g., you could say:

    parallel -j5 python <etc.>

Alternatively, you can use the --memfree option to start new jobs only when enough memory is free, e.g. at least 256 MByte:

    parallel --memfree 256M python <etc.>

Note that the last option will kill the most recently started job if the memory falls below 50% of the "reserve" value stated (but it will be re-queued for catch-up automatically).
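To get exactly the behaviour asked for (seeds strictly one after another, powers in parallel within each seed), one hedged combination is a plain outer loop around parallel, which only returns once all of its jobs have finished:

    for simSeed in 1 2 3 4 5; do
        parallel python sim -a "$simSeed" -p {} ::: 17.76 20.01 21.510 23.76
    done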
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/572695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400024/" ] }
572,720
I have an application which opens a large file (about 1-2GB) every time it runs. For development/testing reasons, I need to keep restarting the application, and the 30s-1m wait time to load the file from HDD becomes a bit inconvenient. Is there a way to put the file in the RAM (and keep it there), so loading would be faster?
If your system has enough RAM, the file should be cached in memory, so it shouldn’t be re-read from the drive every time. You can try to force the issue by copying the file to a tmpfs file system, and load it from there. tmpfs file systems are RAM-based. Most distributions now use a tmpfs for /tmp , so copying the file there will work; if you need to, you can mount a tmpfs yourself somewhere else and copy the file there. However, if memory is short, the contents of a tmpfs can be swapped out, so you may end up reading from swap.
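A minimal sketch of the manual route (mount point and size are arbitrary; size the tmpfs comfortably above the file):

    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=3G tmpfs /mnt/ramdisk
    cp bigfile /mnt/ramdisk/    # later loads of this copy come from RAM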
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/572720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52446/" ] }
572,725
I have a problem with label printing on a Raspberry Pi: when I start a print on a Dymo LabelWriter 450 Turbo with CUPS and drivers from matthiasbock/dymo-cups-drivers, it takes about 6-7 seconds until the print starts. The printing itself is then very slow. At the end of printing, the printer pauses for about 1-2 seconds and then advances the last piece of the label. The system is a Raspberry Pi 4 with Raspbian, but I have the exact same problem on a current Ubuntu on my workstation. The quirks file from "8-10 Sec delay between jobs DYMO 450 Turbo USB" is integrated and doesn't help. Setting usb no-reattach doesn't help either. Does anyone know about this problem or have an idea where I could tackle it? I would be happy about any idea and any suggestion!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/572725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400043/" ] }
572,754
A CPU in a virtual machine is only virtual, so I would assume that the manufacturer's microcode does not need to be loaded. The same is probably true for GPUs. Is this correct? Is there any risk or disadvantage to loading or not loading it in a KVM/QEMU VM? I am talking about the microcode update that takes place early in the boot process of the Linux VM. Both host and VM CPUs are the same. The host does load the latest microcode upon its boot. A reply with references would be appreciated, as I have already made an educated guess myself.
I’m not sure there is a reference in the documentation, but Paolo Bonzini (the KVM maintainer) said this on the qemu-devel mailing list : The guest has no microcode of it's own, but you need to update the microcode in the host. You also need to update the kernel, QEMU and libvirt if you are using it. and then, specifically with regards to updating microcode inside the guest , No, that has no effect.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/572754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115138/" ] }