598,145
If I have a file /abc/def/ghi/jkl, is there a way to tell which device it is located on, or should I parse /etc/mtab and see which entry best matches /abc/def/ghi/jkl?
df will tell you device name and mount point, and ls will tell you device numbering:

paul $ pwd
/home/paul/SandBox/Toys/hSort
paul $ ls -l ReadMe
-rw-r--r-- 1 paul paul 296 Jan 8 2020 ReadMe
paul $ df ReadMe
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda9      103818480 3796556  94725184   4% /home
paul $ ls -l /dev/sda9
brw-rw---- 1 root disk 8, 9 Jul 12 12:10 /dev/sda9
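For a non-interactive check, the same information can be read directly; this is a sketch assuming GNU coreutils and util-linux are installed (the path is the one from the question):

# device number (hex) and mount point of the filesystem holding the file
stat -c 'dev=%D mounted-on=%m' /abc/def/ghi/jkl    # %m needs coreutils >= 8.6
# device node and mount point in a script-friendly form
df --output=source,target /abc/def/ghi/jkl
# util-linux alternative
findmnt -T /abc/def/ghi/jkl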
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/598145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420461/" ] }
598,221
I've set up a systemd service to run my Minecraft server. Now, I need it to repeat the start script when the server crashes. Here's my code so far:

#!/bin/bash
while true; do
    cd /home/mcserver/Spigot
    echo Starting Spigot...
    echo 600 > ./restart-info
    java -jar spigot.jar
    echo Server has stopped or crashed, starting again in 5 minutes...
    sleep 300
done

I can actually view the output of spigot.jar using systemctl status spigot, but I also want to control the server console, maybe using screen. When I try to do this:

screen -S "Spigot" java -jar spigot.jar

I get the "Must be connected to a terminal" error. This command only works in a terminal (not in a script) and I can attach to it using screen -r. Is there any way to "bypass" this screen bug? I already tried to place script /dev/null before the screen command. I don't want to use screen with -d and -m because it'll run in the background and the script will keep restarting my server.
I suspect you've stumbled on this blog post which uses screen to solve a problem where your minecraft server stops when you run java -jar spigot.jar, then close your ssh or putty session. That method seems to have become the canonical answer for how to run a minecraft server, even though it isn't necessary. systemd is a totally different (and better) solution to this problem, circumventing the need for screen. You can achieve everything you've done in your script with systemd service options.

To run a vanilla minecraft server, create /etc/systemd/system/minecraft.service with this content:

[Unit]
Description=Minecraft Server

[Service]
Type=simple
WorkingDirectory=/home/minecraft
ExecStart=java -Xmx1024M -Xms1024M -jar /home/minecraft/server.jar nogui
User=minecraft
Restart=on-failure

[Install]
WantedBy=multi-user.target

Set it to launch automatically after boot with systemctl enable minecraft. You asked about how to control it:

$ sudo systemctl start minecraft     # Starts the service if it wasn't running
$ sudo systemctl stop minecraft      # Stops the service
$ sudo systemctl restart minecraft   # Restarts the service
$ sudo systemctl status minecraft    # Find out how the service is doing
$ sudo journalctl -u minecraft -f    # Monitor the logs

This does everything except give you a means to send commands to the console. To do that, we'll set up a file that the server will listen to, where you can write your commands, by creating the following systemd units:

/etc/systemd/system/minecraft.socket:

[Unit]
PartOf=minecraft.service

[Socket]
ListenFIFO=%t/minecraft.stdin

and /etc/systemd/system/minecraft.service:

[Unit]
Description=Minecraft Server

[Service]
Type=simple
WorkingDirectory=/home/minecraft
ExecStart=java -Xmx1024M -Xms1024M -jar /home/minecraft/server.jar nogui
User=minecraft
Restart=on-failure
Sockets=minecraft.socket
StandardInput=socket
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

Now you can send console commands by echoing stuff into that file:

echo "help" > /run/minecraft.stdin
echo "/stop" > /run/minecraft.stdin

What's also cool is that you can make your own custom sequences of commands and cat the entire file into the console. For example, if you play UHC, you can start a new world, have people log in, then cat uhc.commands > /run/minecraft.stdin to set the gamerules, spread the players, and start the event.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420387/" ] }
598,243
Let us just agree that the output of pod2html is boooring. Is there a tool that adds some CSS/HTML5 to make it a bit less 1990s?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
598,246
I want to make it clear that I know there are better ways to write graphics programs, but our teacher is using the Turbo C++ program on Windows with the graphics.h header file. As I am using the Linux Kali distro, I wanted an optimal way to run C programs with the graphics.h header file. The problem is that graphics.h is not part of the standard GCC library. There are multiple approaches to solve this issue: installing Turbo C++ and DOSBox; adding the "graphics.h" C/C++ library to the GCC compiler in Linux; or downloading a compiler which has the graphics.h library (I don't know how much of that makes sense, but I wrote it according to my basic understanding).

Now let's get to the solutions. I didn't try the first one; I wanted a more optimal solution if there is one. You can describe all of the solutions so I can have a better understanding of this one.

The second solution is to get dependencies and necessary packages. For this, I looked into this article: How to add "graphics.h" C/C++ library to gcc compiler in Linux. It was a 4-step process.

Get build-essentials. Done.

sudo apt-get install build-essential

Additional packages. Partially done.

sudo apt-get install libsdl-image1.2 libsdl-image1.2-dev guile-2.0 \
  guile-2.0-dev libsdl1.2debian libart-2.0-dev libaudiofile-dev \
  libesd0-dev libdirectfb-dev libdirectfb-extra libfreetype6-dev \
  libxext-dev x11proto-xext-dev libfreetype6 libaa1 libaa1-dev \
  libslang2-dev libasound2 libasound2-dev

The command above gives the following error:

E: unable to locate package libesd0-dev

So here I could find the package by adding some repository to the sources list, but I don't want to break my system. I could find the package online somehow, but I don't know where I should place the files. It also brings me to the question: where did the above packages get installed?

Download and install the libgraph-1.0.2.tar.gz file. Partially done. I downloaded the file and extracted it, ran the command ./configure, and then when I ran make it gave me the following output on the terminal (it gave 1 fatal error if you ignore the warnings):

guile-libgraph.c:25:10 fatal error: libguile.h: no file or directory found.

Link to the Pastebin below: make command output

I tried other solutions such as the one suggested on this link: Trying to install libgraph. With the best solution mentioned (using the alternative method), the output I got was: make command output. No errors and just warnings, so I moved forward with the make install command. I got the following errors as output: finally, make install

As you can see I am stuck. Thanks for reading, and please help. The main objective is to run C programs with graphics.h on Linux Kali.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354118/" ] }
598,293
server:~# cat file1.txt
abc
pqr
xyz

I would like to convert it as below:

abc,pqr,xyz

I'm using the below command:

server:~# cat file1.txt | tr "\n" ", "
abc,pqr,xyz,server:~#

Please note my input may contain n number of lines, which we don't know in advance. How can we achieve the following?

server:~# cat file1.txt | tr "\n" ", "
abc,pqr,xyz
server:~#
You can use the paste command:

paste -sd, file1.txt

By default, paste pastes lines from multiple files side-by-side separated by tabs; the -d option sets an alternate delimiter and the -s option tells it to take lines serially from one file at a time (or, as in this case, serially from a single file).
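A quick demonstration on the sample input from the question (a sketch; the file is recreated here only to make the example self-contained):

$ printf 'abc\npqr\nxyz\n' > file1.txt
$ paste -sd, file1.txt
abc,pqr,xyz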
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598293", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418692/" ] }
598,611
Input:

hello
enrico

output:

ocirne
olleh

To do this, I can simply tac a file and pipe the output to rev (or the other way around), so one function that does the job is just this:

revtac() { tac "$@" | rev; }

Is there a built-in function for the job? I suspect that this could potentially break something, as it would reverse <CR> and <LF> on Windows-generated files, but I'm still interested.
No, there's no builtin function for the job. BTW, neither tac nor rev are builtins. They are external binary programs, some *nix systems even come without them. You can also use Perl to simulate the combo: perl -lne 'push @lines, scalar reverse; END { print for reverse @lines }' -- file
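For illustration, a run of the Perl one-liner on the sample input from the question (a sketch; the temporary file name is made up):

$ printf 'hello\nenrico\n' > file
$ perl -lne 'push @lines, scalar reverse; END { print for reverse @lines }' -- file
ocirne
olleh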
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164309/" ] }
598,622
We need to write to a file when the user logs in, logs off, reboots, shuts down, locks, or unlocks the session in Ubuntu 20.04. We have tried, but it is not working. Not found: /etc/rc.d/rc.local. None of the below URLs worked:

1 -> https://www.tecmint.com/auto-execute-linux-scripts-during-reboot-or-startup/
2 -> https://ccm.net/faq/3348-execute-a-script-at-startup-and-shutdown-on-ubuntu
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598622", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420523/" ] }
598,634
I have a 4 column tab delimited file and the last column sometimes has duplicates. This is an excerpt from that file:

chr7    116038644   116039744   GeneA
chr7    116030947   116032047   GeneA
chr7    115846040   115847140   GeneA
chr7    115824610   115825710   GeneA
chr7    115801509   115802609   GeneA
chr7    115994986   115996086   GeneA
chrX    143933024   143934124   GeneB
chrX    143933119   143934219   GeneB
chrY    143933129   143933229   GeneC

For every set of duplicates in that column, I want to convert them into something like this (without really touching the non-duplicate values in that column):

chr7    116038644   116039744   GeneA-1
chr7    116030947   116032047   GeneA-2
chr7    115846040   115847140   GeneA-3
chr7    115824610   115825710   GeneA-4
chr7    115801509   115802609   GeneA-5
chr7    115994986   115996086   GeneA-6
chrX    143933024   143934124   GeneB-1
chrX    143933119   143934219   GeneB-2
chrY    143933129   143933229   GeneC

How can I do this with awk or sed or Bash's for loop?
Try this:

awk -F'\t' -v OFS='\t' '{$4=$4 "-" (++count[$4])}1' file.tsv

This will store the occurrence of each value of the 4th field in a counter array count (where the value of the 4th field is used as "index"), and append the pre-incremented value of that counter to the 4th field, separated by a dash.

The above "simple" example has a disadvantage: it will add a disambiguation number even to those values in column 4 that only appear once in the file. In order to suppress that, the following double-pass approach will work (command broken over two lines via \ to improve readability):

awk -F'\t' -v OFS='\t' 'NR==FNR{f[$4]++}\
  NR>FNR{if (f[$4]>1) {$4=$4 "-" (++count[$4])}; print}' file.tsv file.tsv

Note that the file to be processed is stated twice as argument, and will therefore be read twice. The first time it is read (indicated by FNR, the per-file line counter, being equal to NR, the global line counter), we simply count how often every distinct value of column 4 appears in the file, and store that in an array f. The second time the file is read, we perform the actual text processing just like in the "simple" approach and append the occurrence counter to column 4, but only if the total number of occurrences as found in the first pass is larger than 1. This approach avoids buffering the entire file, which can be an advantage if the file is very large. The processing time is of course longer because the file is read two times.

As a general rule, using shell loops for text processing is rarely necessary, as e.g. awk can perform loop operations by itself in a much more efficient way.
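As a usage sketch, running the double-pass version on the excerpt from the question (saved tab-separated as file.tsv) should produce the requested numbering:

$ awk -F'\t' -v OFS='\t' 'NR==FNR{f[$4]++} NR>FNR{if (f[$4]>1) {$4=$4 "-" (++count[$4])}; print}' file.tsv file.tsv
chr7    116038644   116039744   GeneA-1
chr7    116030947   116032047   GeneA-2
chr7    115846040   115847140   GeneA-3
chr7    115824610   115825710   GeneA-4
chr7    115801509   115802609   GeneA-5
chr7    115994986   115996086   GeneA-6
chrX    143933024   143934124   GeneB-1
chrX    143933119   143934219   GeneB-2
chrY    143933129   143933229   GeneC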
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394420/" ] }
598,642
I have an array c_arr containing table columns with table aliases. But there are some elements in the array which are actually not columns and so they don't have the format alias.column_name. I need to remove those elements which do not contain a ".". How can I do that? The array is created using the below statement:

c_arr=($echo $(grep -io "\b$alias.\w[a-zA-Z_0-9]*" $output_file))

There is another problem with the above line. Even though I am searching for $alias. (having a dot after the alias), the array c_arr is getting other values too which do not contain a dot. Sample values of the array are as follows:

cab.SYSTEM_NAME
cab.row_id
cab.name
cabxa
cabxa
cab.x_sys_name
cab.status_Cd
cab.LAST_UPD

UPDATE: Now, the question at hand is how to remove the elements in the array c_arr which do not contain the character "." if at all the array is having dot and non-dot elements. The contents of c_arr are as below:

cab.SYSTEM_NAME
cab.row_id
cab.name
cabxa
cabxa
cab.x_sys_name
cab.status_Cd
cab.LAST_UPD

The desired output is:

cab.SYSTEM_NAME
cab.row_id
cab.name
cab.x_sys_name
cab.status_Cd
cab.LAST_UPD
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415990/" ] }
598,653
I'm generating a private key; this key is for demonstration purposes only:

$ openssl genrsa
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvB8fZFRS83Kztend5KO9cnWXaqLWot0qLDeLcS8ly718FUdm
3VcCY5j737zz4iwmFf3b20Q2XxlbYC/M13wTJzHBf2d1mRDlpZq7CgX/JSEUW/Hr
uXiF6PI+ypkvskyoQcz04rlT8skd7tanXhXINnLwW7gCiNlxQQFkrpfO8Fkh+vYL
...
Ewac3GAh9CiMikQEYNxpsuLLboS4NcaQWiGB+1imtPtbp8Gf89pJSVBDubgza2Bb
rucNxP3HZtPd6G9CvkMJREYL7jHkXYa5DBzs9LB9mLB4b5H/6KN/fsfj
-----END RSA PRIVATE KEY-----

There is a newline \n at the end of each of these lines that needs removing; I want everything on a single line so I can set it to an env var. Note: I'm unable to store a multiline env var in .env as docker-compose doesn't support it.

I've stripped out all the new lines with this:

$(openssl genrsa | tr -d '\n')
-----BEGIN RSA PRIVATE KEY-----MII...-----END RSA PRIVATE KEY-----

I then manually insert two newlines \n, which I'm looking to automate through a script (hence this post). If I don't do this the signing of the JWT fails.

-----BEGIN RSA PRIVATE KEY-----\nMII...\n-----END RSA PRIVATE KEY-----

I define it within a .env file:

JWT_PRIVATE_KEY=-----BEGIN RSA PRIVATE KEY-----\nMII...\n-----END RSA PRIVATE KEY-----

With node and dotenv I access it like so:

privateRsaKey = process.env.JWT_PRIVATE_KEY.replace(/\\n/gm, '\n'),

Now privateRsaKey looks like this:

-----BEGIN RSA PRIVATE KEY-----
MII...
-----END RSA PRIVATE KEY-----

Now I actually use the private key to sign a JWT:

const signed = jwt.sign(payload, privateRsaKey, { algorithm: 'RS256', ...});

All of the above is working as expected when I bring up the Docker containers. I need help with the scripting so I don't have to manually insert the two \n. Thank you all for your help and patience, it's much appreciated.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/422813/" ] }
598,735
I have a shell script that I run as a systemd service, and I want to log messages with granular priorities into the systemd log for that service. When I use logger(1), journald only logs some of the messages and discards the rest. Which message ends up getting logged to the service log appears to be completely random; sometimes only one or two messages get logged, sometimes no message gets logged at all. At first I thought it was a boot order/dependency issue, but that doesn't seem to be the case because all messages do appear in the system log (i.e. journalctl --system) but not the service log (i.e. journalctl -u SERVICE.service). I also tried systemd-cat but its behavior was unfortunately similar. What is the correct way for a script-based service to log messages with priorities to its own systemd log?
The problem that systemd has with logger is the same problem that it has with its own systemd-notify tool. The protocol is datagram-based and asynchronous, and the tools do one job. The invoker forks a process to run the tool; it fires off a datagram and forgets about it; and the process exits.

systemd server processes for the logging and readiness notification protocols want to know what service the sender belongs to, the former in order to add the right service name field to the log entry and the latter in order to know what service is being talked about. They get the process ID of the datagram sender from Linux, and then they go to the process table to look up what control groups the process belongs to, and hence what service it belongs to.

If the sending process has done its job and immediately exited, this does not work (subject to a race condition). The process is no longer in the process table. systemd-notify notifications fail; logger information does not get marked as belonging to the relevant service. Switching to a stream protocol (e.g. with logger's --tcp option) wouldn't fix this unless the logging protocol itself were also altered so that the client awaited a response from the server before closing the stream and exiting, which it does not. RFC 5426 has no server acknowledgements sent back to clients.

So although the log information is in the journal, it isn't tagged with the service name and isn't pulled out when you query by service name. (These aren't separate logs as you think, by the way. journalctl is simply applying filters to one big log. -u is a filter.) This is a long-standing, widely-known bug.

The systemd people describe this as a deficiency of Linux. It doesn't have proper job objects that can be used to encapsulate and track sets of processes; nor does its AF_LOCAL datagram socket mechanism transport such information. If it did, systemd could put all service processes into a job, and its logging and readiness notification servers could pull out the client-end job information whenever they receive a datagram, even if the client process had since exited.

There is an idiosyncratic protocol specific to systemd-journald that some versions of logger even speak. No, _SYSTEMD_UNIT is a "trusted field" set at the server end, and attempts by clients to set it will be ignored; and that's a datagram-based, asynchronous protocol without acknowledgements too. It has exactly the same problem.

To reliably have log entries tagged with the right service, write to standard error. That's long-lived and more reliably connectable to the service unit name at the server end. Yes, you cannot specify the old facilities and priorities; that's the trade-off that you have to make.

Further reading

Jonathan de Boyne Pollard (2015). "Using a synchronous protocol when pulling client credentials". Readiness protocol problems with Unix dæmons. Frequently Given Answers.
Jonathan de Boyne Pollard (2016). Linux control groups are not jobs. Frequently Given Answers.
https://unix.stackexchange.com/a/383575/5132
Davide Lima Daum (2017-02-23). Journalctl miss to show logs from unit. RedHat bug #1426152.
https://unix.stackexchange.com/a/294206/5132
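As a minimal sketch of the stderr approach: when the unit's StandardError= goes to the journal and SyslogLevelPrefix= is enabled (both are the usual defaults), journald will also honour an optional <N> syslog-level prefix on each line, which recovers per-message priorities even though facilities are lost. The script name and messages here are made up:

#!/bin/bash
# myscript.sh: log via stderr so journald can attribute lines to this unit;
# the <N> prefixes map to syslog priorities (3=err, 4=warning, 6=info)
echo "<6>starting up"                     >&2
echo "<4>config missing, using defaults"  >&2
echo "<3>cannot reach backend, giving up" >&2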
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353063/" ] }
598,737
I have designed a script to easily and quickly download videos from YouTube that uses options at the end to ask if you want to download another. Here's my script:

#!/bin/bash
cd ~/Videos
read -p "Enter A Valid YouTube URL: " url
reset
youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4' $url
reset
while true; do
    read -p "Do you wish to download another? " yn
    case $yn in
        [Yy]* ) reset; cd ~/Desktop; ./Youtube-DL-Video.sh;;
        [Nn]* ) exit;;
        * ) echo "Please answer yes or no.";;
    esac
done

My Goal

I'd like to add an option to play the file it just downloaded (via OMXPlayer). Is there any command that youtube-dl has that would give the download path?
youtube-dl has the --output option which allows you to set the output destination:

-o, --output TEMPLATE    Output filename template, see the "OUTPUT TEMPLATE" for all the info

As an example, I use the following template

--output "$XDG_DOWNLOAD_DIR/youtube/%(title)s.%(ext)s"

which downloads the video to a youtube folder in my download directory and uses the video title as the filename. In order to add an option to play the file you just downloaded you can use the --get-filename option, which does not download the video but only returns the filename corresponding to your template:

youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4' -o <insert_your_template_here> $url
file=$(youtube-dl -o <insert_your_template_here> --get-filename $url)
...
<video_player> $file
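Putting that together with the script from the question, a hedged sketch (the template is only an example; note that with merged bestvideo+bestaudio formats the extension predicted by --get-filename can occasionally differ from the final merged file):

template="$HOME/Videos/%(title)s.%(ext)s"
fmt='bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4'
youtube-dl -f "$fmt" -o "$template" "$url"
file=$(youtube-dl -f "$fmt" -o "$template" --get-filename "$url")
read -p "Play the downloaded file now? " yn
case $yn in
    [Yy]* ) omxplayer "$file";;
esac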
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/422903/" ] }
598,768
When I run this: for f in $(grep -r --include=*.directory "search-string"); do echo $f; done; I get the search results split up by the spaces in the search results' filenames. How can I escape the spaces in the grep search results?
If you want the list of files in an array and are using the bash shell (version 4.4 or above), you'd do:

readarray -td '' files < <(
  grep -rlZ --include='*.directory' "search-string" .
)

With the zsh shell:

files=(${(0)"$(grep -rlZ --include='*.directory' "search-string" .)"})

And loop over them with:

for file in "${files[@]}"; do
  printf '%s\n' "$file"
done

With zsh, you can skip the intermediary array with:

for file in ${(0)"$(grep -rlZ --include='*.directory' "search-string" .)"}; do
  printf '%s\n' "$file"
done

Beware that leaving a word expansion unquoted (like $f or $(...)) has a very special meaning in bash, generally not the one you want, and file names can contain any byte value other than 0, so 0 aka NUL is the only delimiter that can safely be used when expressing a list of file paths as a stream of bytes with a delimiter. That's what the -Z / --null option of GNU grep is for.

With simple shells like dash, you could have gawk for instance take the output of GNU grep to generate a list of shell-quoted file names for sh to evaluate as shell code:

eval set -- "$(
  grep -rlZ --include='*.directory' "search-string" . |
    gawk -v RS='\0' -v ORS=' ' -v q="'" '
      {gsub(q, q "\\" q q); print q $0 q}'
)"
for file do
  printf '%s\n' "$file"
done

If you can guarantee your file names won't contain newline characters, you can simplify it to:

IFS='
'
set -o noglob
for file in $(grep -rl --include='*.directory' "search-string" .); do
  printf '%s\n' "$file"
done

You can skip the set -o noglob if you can guarantee the file names also don't contain *, ?, [ (and possibly \, and more glob operators depending on the shell and shell version).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233262/" ] }
598,857
Regarding /etc/ssh/sshd_config having Banner /etc/issue specified within it: since the SSH banner does not get presented until after entering the value for the SSH login, is it possible to have a different (unique) banner presented based on the username entered for the SSH login? Or is it possible to use specific banners based on the connecting IP address? Is either of those somehow possible with the SSH version used in RHEL/CentOS 7.8?
Well, if you mean showing a different banner either per user or per IP address connecting through ssh, you have options for both of these using the Match keyword.

A different banner based on username:

# put in a Match section like
Match User sshUser
    Banner /path/to/specific_banner

A different banner based on IP address:

# put in a Match section like
Match Address 10.20.30.0/24
    Banner /path/to/specific_banner

So, it's possible; you will just need to reload sshd for the changes to take effect. If your sshd version has no reload command (in the worst case), you will need to restart it.
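Pulling that together, a sketch of what the end of /etc/ssh/sshd_config could look like (the user name, network and file paths are examples; Match blocks apply until the next Match keyword, so they must come after the global settings):

Banner /etc/issue

Match User alice
    Banner /etc/ssh/banner-alice.txt

Match Address 203.0.113.0/24
    Banner /etc/ssh/banner-external.txt

On RHEL/CentOS 7 you can validate the file with sshd -t and then apply it with systemctl reload sshd.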
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154426/" ] }
598,906
I am using a bash command, gps location, that returns date, time and location information.

[john@hostname :~/develp] $ gps location
Location: {"date": "16/07/20", "time": "19:01:22", "latitude": "34.321", "longitude": "133.453", "altitude": "30m"}

I want to write the longitude to a file; before I get there I need to correctly parse the string.

[john@hostname :~/develp] $ variable=`gps location | awk '/"longitude":/ {print $9}'`
[john@hostname :~/develp] $ echo $variable
"133.453",
[john@hostname :~/develp] $

Currently, awk isn't searching for longitude; it is simply taking the whole string and finding the 9th field. Ideally, I would like to use a regex/keyword approach to find longitude and then the next string after it. I have tried using grep | cut and also tried sed. No luck; the best I can do is using awk.
Strip off the Location: and you're left with JSON:

$ echo '{"date": "16/07/20", "time": "19:01:22", "latitude": "34.321", "longitude": "133.453", "altitude": "30m"}' | jq .longitude
"133.453"

See in the man page if gps has an option to not print the Location: keyword up front; if not, stripping it is easy, e.g.:

$ echo 'Location: {"date": "16/07/20", "time": "19:01:22", "latitude": "34.321", "longitude": "133.453", "altitude": "30m"}' | cut -d':' -f2- | jq .longitude
"133.453"

or:

$ echo 'Location: {"date": "16/07/20", "time": "19:01:22", "latitude": "34.321", "longitude": "133.453", "altitude": "30m"}' | sed 's/Location://' | jq .longitude
"133.453"
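To get back to the original goal of writing the longitude to a file, a small sketch building on the same pipeline (the gps output is assumed to be exactly as shown above; -r makes jq print the raw value without quotes, and the log path is a placeholder):

longitude=$(gps location | cut -d':' -f2- | jq -r '.longitude')
printf '%s\n' "$longitude" >> /path/to/longitude.log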
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/598906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178795/" ] }
598,920
Trying to replace

window.location = '/loft-run'+ResourceManager.hotlegs + mainPage + ".html#" + newhash;

with

window.location = ResourceManager.hotlegs + mainPage + ".html#" + newhash;

in a file. What I have tried:

sed -i 's~/loft-run'+ResourceManager.hotlegs + mainPage + ".html#" + newhash"~ResourceManager.hotlegs + mainPage + ".html#" + newhash"' warmblanket.js

I have tried some sed commands but they were not much help. Your suggestions would be of great help.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/598920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146463/" ] }
598,986
After executing the command declare -i a=5 , the command a+=2 succeeds, but the command a-=2 fails. Can someone explain this strange behavior of bash?
In Bash, arithmetic evaluation is done inside (( )), e.g. ((i=i+3)). From Bash's man page (man bash):

((expression))
    The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION.

Both -= and += are documented in the ARITHMETIC EVALUATION section, along with = *= /= %= <<= >>= &= ^= |=, and all work as you expect if you use the arithmetic notation. += working without that notation is an exception described under the PARAMETERS section of the manual:

When += is applied to a variable for which the integer attribute has been set, value is evaluated as an arithmetic expression and added to the variable's current value, which is also evaluated.

All in all, to get the desired behavior:

#!/bin/bash
declare -i a=5
((a+=2))
echo $a
((a-=2))
echo $a

The output is 7 and 5.
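For comparison, here is what the failing case looks like interactively; a-=2 is not valid assignment syntax outside of arithmetic evaluation, so bash tries to run it as a command:

$ declare -i a=5
$ a+=2; echo $a      # the documented += exception for integer variables
7
$ a-=2
bash: a-=2: command not found
$ ((a-=2)); echo $a
5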
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/598986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145001/" ] }
599,066
I am trying to do the following indirect task:

host_1=(192.168.0.100 user1 pass1)
host_2=(192.168.0.101 user2 pass2)
hostlist=( "host_1" "host_2" )

for item in ${hostlist[@]}; do
  current_host_ip=${!item[0]}
  current_host_user=${!item[1]}
  current_host_pass=${!item[2]}
  echo "IP: $current_host_ip User: $current_host_user Pass: $current_host_pass"
done

I'm trying to understand how I should perform this indirect request so I pull the hostname from the array "hostlist", and then do an indirect request to pull host 1's IP, user and pass. But when I try to do it, I'm stuck with either only the first variable (only the IP), all variables inside one (if I add [@] to the end of the variable name), an empty result, or numbers from the array. I can't understand how I can first copy the host_1 array into the current_ variables and then (after my script does some work) pass the host_2 variables to the same current_ variables. Can you pinpoint my mistake? I think this is the solution to the problem, I just can't adapt it: Indirect return of all elements in an array
You could use a name reference to your array variable:

for item in "${hostlist[@]}"; do
  declare -n hostvar=$item
  current_host_ip=${hostvar[0]}
  current_host_user=${hostvar[1]}
  current_host_pass=${hostvar[2]}
  echo "IP: $current_host_ip User: $current_host_user Pass: $current_host_pass"
done

Here, variable hostvar refers to the variable named $item, which is either array host_1 or host_2.

Using variable indirection and a copy of the array values:

for item in "${hostlist[@]}"; do
  x=${item}[@]
  y=( "${!x}" )
  current_host_ip=${y[0]}
  current_host_user=${y[1]}
  current_host_pass=${y[2]}
  echo "IP: $current_host_ip User: $current_host_user Pass: $current_host_pass"
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/423205/" ] }
599,350
I invoked the top command and got this:

  PID USER  PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 3433 klutt 20   0 4790760   1.0g 282208 S  8.3  4.2   1261:15 firefox-esr
 2063 klutt  9 -11 3424532  33644  24432 S  7.0  0.1 432:44.69 pulseaudio
 3681 klutt 20   0 3958364 545000 139800 S  6.6  2.2 434:35.72 Web Content

I understand that firefox and Web Content are using a lot of memory, but pulseaudio? Is it normal that it is using over 3GB? Is it a bug?

$ uname -a
Linux desktop 5.7.0-1-amd64 #1 SMP Debian 5.7.6-1 (2020-06-24) x86_64 GNU/Linux
$ pulseaudio --version
pulseaudio 13.0
$ cat /etc/debian_version
bullseye/sid
In your example, pulseaudio is using 32MB not 3GB. The RES column is physical memory. The VIRT column shows all the virtual memory used by the process. According to man top , that includes all code, data, and shared libraries plus pages that have been swapped out and pages that have been mapped but not used.
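If you want to confirm this outside of top, a quick sketch (values are in KiB; rss corresponds to top's RES column, vsz to VIRT, sample output taken from the figures in the question):

$ ps -o pid,rss,vsz,comm -C pulseaudio
  PID   RSS     VSZ COMMAND
 2063 33644 3424532 pulseaudio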
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/599350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236940/" ] }
599,363
I'm experimenting with systemd timers to launch a service every day at, let's say, 7am. This service launches an application that must run continuously until 10pm, so if the application crashes, the service must restart it. The service is stopped at 10pm by crontab, which also shuts down the system. I'm using a timer with OnCalendar and Persistent=true, which works, but I cannot ensure that the service is started after a power loss that occurs after 7am (the system is restarted by the BIOS when power is back), because the timer already triggered successfully at 7am and so will wait until the next day. I cannot simply run my service at boot, because the system can start before 7am (power loss during the night) and then the service would start too, and I don't want it running before 7am. Any idea?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/599363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171796/" ] }
599,371
I created an ext4 partition on Ubuntu 18.04.4 LTS in order to transfer a large amount of data to a production server. The server is running CentOS 6.10 with kernel 2.6.32. The Ext4 Howto states that "Ext4 was released as a functionally complete and stable filesystem in Linux 2.6.28" so I assumed I was going to be able to just mount the partition. However when trying to mount the partition on the server I get the errors:

localhost kernel: EXT4-fs (sdd1): couldn't mount RDWR because of unsupported optional features (400)
localhost kernel: JBD: Unrecognised features on journal
localhost kernel: EXT4-fs (sdd1): error loading journal

I have full root access to the server, but I am unable to upgrade any of the operating system components due to compatibility issues with the running software. Initial Googling suggested that the issue was due to the metadata checksum feature, so I downloaded and compiled the latest e2fsprogs (1.46-WIP (20-Mar-2020)) and used those to disable the feature:

sudo /home/user/bin/e2fsck -f /dev/sdd1
sudo /home/user/bin/tune2fs -O ^metadata_csum /dev/sdd1

However the partition still fails to mount, although I don't get the "unsupported optional features (400)" message any more:

$ sudo mount /dev/sdd1 /mnt/disk1
mount: wrong fs type, bad option, bad superblock on /dev/sdd1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

$ sudo tail /var/log/messages
Jul 20 08:01:21 localhost kernel: JBD: Unrecognised features on journal
Jul 20 08:01:21 localhost kernel: EXT4-fs (sdd1): error loading journal

Is there some way I can access the data on this partition without rebooting the server or changing any of the system software? There seem to be two options: either I mount the partition as is (using FUSE, or compiling my own mount.ext4 binary), or I use tune2fs to remove the remaining incompatible features (how do I find out what they are?)

I should mention that due to COVID-19 lockdown measures, there's a two to three week wait for someone to physically unplug the drive from the server and plug it into a different machine. I need to find a solution which I can implement quicker than that.
First try running

sudo e2fsck -f -v -C 0 -t /dev/sdd1

An e2fsck run may be required to complete the removal of the feature. If it still doesn't help, try removing and recreating the journal:

sudo /home/user/bin/tune2fs -O '^has_journal,^64bit' /dev/sdd1
sudo /home/user/bin/resize2fs -s /dev/sdd1
sudo /home/user/bin/tune2fs -j /dev/sdd1

Lastly, if it's still unmountable, compare the flags reported by

sudo dumpe2fs /dev/existing_partition

and

sudo dumpe2fs /dev/sdd1

and remove the ones which are not present on your already existing partitions.

For future reference, if you format the filesystem on the old system instead of on the new system, it should always be usable by the new kernel. If you need to format on the new system, you could use

mke2fs -t ext4 -O '^metadata_csum,^64bit'

to avoid some of the newer features (though this may be a moving target), or

mke2fs -t ext3

(though this may be somewhat slower than ext4 as a result, but is very safe compatibility-wise).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136056/" ] }
599,401
We want to grep for a string in both of these files:

/confluent/logs/server.log
/confluent/logs/server.log.1

But we do not want to match the other files, such as

/confluent/logs/server.log.2
/confluent/logs/server.log.3

etc. So instead of doing a double grep, as in

grep log.retention.bytes /confluent/logs/server.log
grep log.retention.bytes /confluent/logs/server.log.1

we want to find the matches of log.retention.bytes in both files at the same time. We tried

grep log.retention.bytes /opt/mcspace/confluent/logs/server.log.*[1]

but this is wrong.
grep log.retention.bytes server.log{,.1}

In order to keep log entries (appended) in chronological order, you might want to reverse the order of the files:

grep log.retention.bytes server.log{.1,}

which is of course equivalent to:

grep log.retention.bytes server.log.1 server.log

as the brace expansion is done by the shell before executing the grep command. Moreover, with the zsh shell you can easily glob for the last N files matching a pattern with:

grep log.retention.bytes server.log*(Om[-2,-1])

where Om means order by mtime descending and [-2,-1] fetches the 2 last matches. This trick is worth remembering if you wish to search more files and do not want to type them manually.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
599,440
The thing is /home has only used 11GB whereas /var uses 14GB, /temp 11GB and /swapfile 2.4GB. Can I safely do sudo rm * in the last three directories? Contents from sudo du /var | sort -n | tail -20:

679376 /var/snap/microk8s/common/run
683324 /var/lib/docker/overlay2/3ecccaf38f1f0837b174563be1ce108c862264359047750fd3daceae9a015182/diff/usr
683424 /var/lib/docker/overlay2/2313ac4c63c3915860ed097576334e5167ca94569ebfafd585f30d456dd1e33b/diff/usr
735748 /var/lib/docker/overlay2/3ecccaf38f1f0837b174563be1ce108c862264359047750fd3daceae9a015182/diff
735756 /var/lib/docker/overlay2/3ecccaf38f1f0837b174563be1ce108c862264359047750fd3daceae9a015182
735840 /var/lib/docker/overlay2/2313ac4c63c3915860ed097576334e5167ca94569ebfafd585f30d456dd1e33b/diff
735848 /var/lib/docker/overlay2/2313ac4c63c3915860ed097576334e5167ca94569ebfafd585f30d456dd1e33b
879292 /var/snap/microk8s/common/var/lib/containerd
954104 /var/snap/microk8s/common/var/lib
1161476 /var/snap/microk8s/common/var
1451924 /var/lib/docker/volumes
1840856 /var/snap/microk8s/common
1878948 /var/snap/microk8s
1879156 /var/snap
2923700 /var/lib/snapd/snaps
3967480 /var/lib/snapd
4971824 /var/lib/docker/overlay2
6437580 /var/lib/docker
10813292 /var/lib
12804788 /var
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/423587/" ] }
599,469
My test json file is as follows:

{
  "server": "xxx",
  "cert": "line1\nline2",
  "security/path": "/var/log/systems.log"
}

I would like to filter by the key security/path, and the commands below all don't work.

jq .security/path test.json
jq: error: path/0 is not defined at <top-level>, line 1:
.security/path
jq: 1 compile error

jq '.security/path' test.json has the same result.
From Basic filters:

Object Identifier-Index: ... If the key contains special characters or starts with a digit, you need to surround it with double quotes like this: ."foo$", or else .["foo$"]. ...

So,

$ jq -r '."security/path"' test.json
/var/log/systems.log
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216704/" ] }
599,476
On some distros it is /lib64, on some of them it is /lib/x86_64-linux-gnu, and there may be other formats. Is there a uniform way to determine this in bash?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77353/" ] }
599,510
First of all, I am a newbie in Linux; now that is out of the way: I am experiencing issues with the Overmind installation. After downloading the binary file I tried the usual trinity of installation steps, but for me it did not work: there is no configuration, and the readme is the same as the one on GitHub, which is next to no help for me. I attempted to use package installers, but they don't recognize the binary files as such. I extracted the contents and tried all kinds of stuff I read online, but to no avail. I am sure I am missing something really simple, but apparently I can't figure out what.
Download the .gz and .sha256 files from the official releases on the official github repo (latest release normally). Either use the web UI and right-click these two files or download directly:

cd ~/Download
mkdir overmind && cd overmind
wget https://github.com/DarthSim/overmind/releases/download/v2.2.2/overmind-v2.2.2-linux-amd64.gz
wget https://github.com/DarthSim/overmind/releases/download/v2.2.2/overmind-v2.2.2-linux-amd64.gz.sha256sum

now make sure the sha256 checks out

shasum -a 256 overmind-v2.2.2-linux-amd64.gz | awk '{print $1}' && cat overmind-v2.2.2-linux-amd64.gz.sha256sum

You should see two long hash strings; they should match identically. If so proceed, if not, don't.

# unzip the binary
gunzip -d overmind-v2.2.2-linux-amd64.gz
# make it executable
sudo chmod +x overmind-v2.2.2-linux-amd64
# move to an appropriate binary path
sudo mv overmind-v2.2.2-linux-amd64 /usr/local/bin/
# symlink it for easier use
sudo ln -s /usr/local/bin/overmind-v2.2.2-linux-amd64 /usr/bin/overmind
# restart your terminal session
# now you can run it from anywhere by just running
overmind start
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/423678/" ] }
599,641
> zsh --version
zsh 5.7.1 (x86_64-apple-darwin19.0)

> setopt
alwaystoend
autocd
autopushd
combiningchars
completeinword
correct
extendedhistory
noflowcontrol
histexpiredupsfirst
histignorealldups
histignorespace
histreduceblanks
histsavenodups
histverify
interactive
interactivecomments
login
longlistjobs
monitor
promptsubst
pushdignoredups
pushdminus
sharehistory
shinstdin
zle

> cat ~/.zsh_history
: 1595363811:0;ls 2&>1 /dev/null
: 1595363821:0;ls /dev/null
: 1595363831:0;cat ~/.zsh_history
: 1595363837:0;ls /dev/null
: 1595363841:0;setopt
: 1595363845:0;cat ~/.zsh_history
: 1595363993:0;setopt
: 1595364000:0;ls
: 1595364009:0;cat ~/.zsh_history

It is ignoring them if they are one after the other like histignoredups but AFAIK, my configuration should ignore any and all.

> cat ~/.zshrc ~/.zshenv ~/.zprofile
# secrets-management -> master####this file is generated edit ~/.config/yadm/alt/.gitconfig##template instead####
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH
# Path to your oh-my-zsh installation.
export ZSH="/Users/calebcushing/.oh-my-zsh"
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="typewritten"
TYPEWRITTEN_CURSOR="block"
TYPEWRITTEN_RIGHT_PROMPT_PREFIX="# "
TYPEWRITTEN_GIT_RELATIVE_PATH=true
# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in ~/.oh-my-zsh/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )
# Uncomment the following line to use case-sensitive completion.
# CASE_SENSITIVE="true"
# Uncomment the following line to use hyphen-insensitive completion.
# Case-sensitive completion must be off. _ and - will be interchangeable.
# HYPHEN_INSENSITIVE="true"
# Uncomment the following line to disable bi-weekly auto-update checks.
# DISABLE_AUTO_UPDATE="true"
# Uncomment the following line to automatically update without prompting.
DISABLE_UPDATE_PROMPT="true"
# Uncomment the following line to change how often to auto-update (in days).
# export UPDATE_ZSH_DAYS=13
# Uncomment the following line if pasting URLs and other text is messed up.
# DISABLE_MAGIC_FUNCTIONS=true
# Uncomment the following line to disable colors in ls.
# DISABLE_LS_COLORS="true"
# Uncomment the following line to disable auto-setting terminal title.
# DISABLE_AUTO_TITLE="true"
# Uncomment the following line to enable command auto-correction.
# ENABLE_CORRECTION="true"
# Uncomment the following line to display red dots whilst waiting for completion.
# COMPLETION_WAITING_DOTS="true"
# Uncomment the following line if you want to disable marking untracked files
# under VCS as dirty. This makes repository status check for large repositories
# much, much faster.
# DISABLE_UNTRACKED_FILES_DIRTY="true"
# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"
# Would you like to use another custom folder than $ZSH/custom?
# ZSH_CUSTOM=/path/to/new-custom-folder
ZSH_COLORIZE_STYLE="monokai"
# Which plugins would you like to load?
# Standard plugins can be found in ~/.oh-my-zsh/plugins/*
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=( colored-man-pages colorize command-not-found direnv history-substring-search gitfast git-auto-fetch git-escape-magic gitignore magic-enter safe-paste scd themes z )
source $ZSH/oh-my-zsh.sh
# User configuration
source $HOME/.config/my/rc.sh
# export MANPATH="/usr/local/man:$MANPATH"
# You may need to manually set your language environment
# export LANG=en_US.UTF-8
# Preferred editor for local and remote sessions
export EDITOR='vim'
export VISUAL='vim'
# Compilation flags
# export ARCHFLAGS="-arch x86_64"
# Set personal aliases, overriding those provided by oh-my-zsh libs,
# plugins, and themes. Aliases can be placed here, though oh-my-zsh
# users are encouraged to define aliases within the ZSH_CUSTOM folder.
# For a full list of active aliases, run `alias`.
# Example aliases
# alias zshconfig="mate ~/.zshrc"
# alias ohmyzsh="mate ~/.oh-my-zsh"
alias vi="vim -Xp"
alias vim="vim -Xp"
jdk() {
  version=$1
  export JAVA_HOME=$(/usr/libexec/java_home -v"$version");
  java -version
}
setopt CORRECT
setopt SHARE_HISTORY
setopt HIST_IGNORE_ALL_DUPS
unsetopt HIST_IGNORE_DUPS
export LESS="-R --no-init --quit-if-one-screen"
export PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
export HISTSIZE="1000"
eval "$(perl -I$HOME/perl5/lib/perl5 -Mlocal::lib)"

Second question: Should I setopt in ~/.zshrc or ~/.zshenv?
The only option needed to trim all duplicates is histignorealldups, and you have it already set, so yes, duplicates are being removed, but from memory, and you are looking at the history stored in the file (cat $HISTFILE).

How to reproduce: start a new zsh instance, erase all history entries and execute some commands

% zsh -i
% a=( $(setopt) )
% unsetopt $a
% HISTSIZE=0
% HISTSIZE=99
% history
   95  HISTSIZE=99
% setopt INC_APPEND_HISTORY
% ls >/dev/null
% clear
% ls >/dev/null
% history
  113  HISTSIZE=99
  114  history
  115  setopt INC_APPEND_HISTORY
  116  ls >/dev/null
  117  clear
  118  ls >/dev/null

Now, you can set the option histignorealldups and all duplicates will disappear (from memory):

% setopt histignorealldups
% history
  113  HISTSIZE=99
  115  setopt INC_APPEND_HISTORY
  117  clear
  118  ls >/dev/null
  122  setopt histignorealldups

But that doesn't mean that the lines have been erased from the history file:

% cat ~/.histfile | tail -n 10
setopt INC_APPEND_HISTORY
ls >/dev/null
clear
ls >/dev/null
history
setopt histignorealldups
history
setopt histignorealldups
history
cat ~/.histfile | tail -n 10

To remove duplicates from the file you would have to edit the file. I recommend you don't do that, as the history might be shared by several zsh instances running in parallel. This is not a trivial problem.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/599641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
599,757
I got a quite stupid question I am afraid but I am kind of in need of written confirmation of my suspicion. Consider a Debian 9 with PHP from the official repositories. The PHP version shipped by Debian 9 is 7.0 . I did not enable third party repositories such as Sury . In my research I found the Debian PHP documentation which gives all the information I could need except for the following question: What happens, when the PHP version is not maintained upstream any more? The PHP Project states in their supported versions document , that PHP 7.0 does not receive security updates since the beginning of 2019. So is the default PHP version in Debian 9 potentially vulnerable? Thanks in advance for any input and information!
The PHP packages are covered as part of Debian Stretch LTS , until June 2022, on the LTS architectures ( i386 , amd64 , arm64 , armel and armhf ). Ondřej Surý backports security fixes from later releases, see his July 6 upload for a recent example. If you install the debian-security-support package, you’ll be told if your system uses any unsupported package.
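As a quick illustration (a sketch, assuming a Debian 9 system and the package named above; output will vary per system):
sudo apt-get update
sudo apt-get install debian-security-support
check-support-status      # lists packages whose security support is limited or has ended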
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139745/" ] }
599,771
I'm looking for the simplest solution that takes $* as input, and expands to each element prefixed and suffixed with a given string: $*=foo bar baz<solution(x,y)>=xfooy xbary xbazy I can do either prepending or appending, but not both: echo ${*/#/x}# prints xfoo xbar xbaz echo ${*/%/y}# prints fooy bary bazy I'm unable to combine the two solutions. The documentation claims the value returned by the expansion in the parameter=* case is a list, but I'm unable to use it as such. I want to pass the resulting array of values to a further command as separate arugments, therefore simply building a single string wouldn't work.
${var/pattern/replacement} is a ksh93 parameter expansion operator, also supported by zsh , mksh , and bash , though with variations ( mksh 's currently can't operate on arrays). ksh93 In ksh93 , you'd do ${var/*/x\0y} to prefix the expansion of $var with x and suffix with y , and ${array[@]/*/x\0y} to do that for each element of the array. So, for the array of positional parameters: print -r -- "${@/*/x\0y}" (beware however that like for your ${*/#/x} , it's buggy when the list of positional parameters is empty). zsh zsh 's equivalent of ksh93 's \0 to recall the matched string in the replacement is $MATCH , but only if you use (#m) in the pattern (for which you need the extendedglob option): set -o extendedglobprint -r -- "${@/(#m)*/x${MATCH}y}" But in zsh , you can nest parameter expansions, so you can also do: print -r -- ${${@/#/x}/%/y} Though you would probably rather use the $^array operator which turns on rcexpandparam for the expansion of that array, making it behave like brace expansion: print -r -- x$^@y Or you could use: printf -v argv x%sy "$@" To modify $@ (aka $argv in zsh ) in-place (here assuming "$@" is not the empty list ). bash In the bash shell, you'd probably need to do it in two steps with an intermediary array as shown by @L.ScottJohnson , or modifying $@ in place with: set -- "${@/#/x}"echo -E "${@/%/y}" (here assuming the prefix ( x in this case), doesn't start with - ). POSIXly You could modify the positional parameters in-place with a loop: for i do set -- "$@" "x${i}y" shiftdoneecho "$@" (though beware that echo can't be used portably to display arbitrary data that may contain backslash characters or start with - ) Note Note that the $* form of parameter expansion (which is only useful quoted), is the one that is meant to concatenate the positional parameters (with the first character of $IFS , SPC by default). You need $@ (again, quoted) to expand to all positional parameters as separated arguments. Unquoted, $* and $@ make little sense (except in zsh where they expand to the non-empty positional parameters) as they would be subject to split+glob, and the behaviour varies between shells.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40300/" ] }
599,949
I had swap from a swapfile working for quite some time, but for some reason it stopped working. sudo fallocate -l 4G /home/.swap/swapfilesudo chmod 600 /home/.swap/swapfilesudo mkswap /home/.swap/swapfile# /etc/fstab/home/.swap/swapfile swap swap defaults 0 0sudo swapon -a swapon: /home/.swap/swapfile: swapon failed: Invalid argument I'm running the newest version of Fedora, so is it maybe possible something has changed with an update or what could be the reason?
Please try replacing fallocate -l 4G /home/.swap/swapfile with dd if=/dev/zero of=/home/.swap/swapfile bs=1M count=4096 . The likely cause of the swapon failed: Invalid argument error is that a file preallocated with fallocate can contain unwritten extents (holes) on some filesystems, which the kernel's swap code refuses to use; dd writes real zero-filled blocks, so mkswap and swapon then accept the file.
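Putting the whole sequence back together, a sketch of recreating the swap file with dd (same path and size as in the question):
sudo swapoff /home/.swap/swapfile 2>/dev/null    # ignore the error if it was never active
sudo rm -f /home/.swap/swapfile
sudo dd if=/dev/zero of=/home/.swap/swapfile bs=1M count=4096 status=progress
sudo chmod 600 /home/.swap/swapfile
sudo mkswap /home/.swap/swapfile
sudo swapon -a                                   # picks up the existing fstab entry
swapon --show                                    # verify the file is now listed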
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/599949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107359/" ] }
600,024
I have a file like this: 20001 17001170015300190001 9000190001 and I'm trying to modify $1 by adding one to it when it's a duplicate entry, so the output will be like this: 20001 17001170025300190001 9000290003
awk '{$1+=seen[$1]++} 1' file Add post-incremented hash value to current value of $1 before printing. The above will produce duplicate numbers when values are close together, such as the sequence 2,2,3 – the output is 2,3,3. A loop can be used to make that 2,3,4: awk '{while (c[$1]) {$1 += c[$1] += c[$1+c[$1]]} c[$1]++} 1' Array c stores the offset by which $1 is to be increased (like seen in the first example). Instead of increasing $1 only by the offset for that unique value, it's also increased by the offset from the next value until a new previously unseen $1 has been reached.
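To see the close-values issue described above, both variants can be run on the 2,2,3 sample (a quick demonstration; any POSIX awk should do):
printf '%s\n' 2 2 3 | awk '{$1+=seen[$1]++} 1'
# prints 2, 3, 3 on separate lines - the last two collide
printf '%s\n' 2 2 3 | awk '{while (c[$1]) {$1 += c[$1] += c[$1+c[$1]]} c[$1]++} 1'
# prints 2, 3, 4 - the loop keeps bumping the value until it is unused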
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395867/" ] }
600,266
When I list files that have spaces in their names, most of the time I see them surrounded by apostrophes : 'famille, vacances, loisirs' I had to edit a file whose name had to be : Chapitre 2 : L'accès au système.md , with an apostrophe inside of it. I tried : gvim 'Chapitre 2 : L''accès au système.md' doubling the apostrophe in the middle. But it created the file 'Chapitre 2 : Laccès au système.md' instead, with no apostrophe at all. I tried : gvim "Chapitre 2 : L'accès au système.md" and it created a file named : "Chapitre 2 : L'accès au système.md" I have two questions : Is (for the system) the file name : "Chapitre 2 : L'accès au système.md" the same than a file named 'Chapitre 2 : L'accès au système.md' if I had succeded in doing so ? how should I write the file name in my gvim command to get the exact file name 'Chapitre 2 : L'accès au système.md' I would like to read in the outpout of a ls command ?
You're using some system where ls outputs filenames with the shell's quoting rules, to make the output unambiguous. Possibly e.g. GNU ls with QUOTING_STYLE set to shell , or ls from coreutils >= 8.25 where that is the default. The quoting rules of the shell are also important when entering the filenames on the command line. gvim 'Chapitre 2 : L''accès au système.md' created the file 'Chapitre 2 : Laccès au système.md' instead, with no apostrophe at all. You gave the shell two back-to-back single-quoted strings, which just get concatenated. In SQL, you can get a literal single quote that way, but in the most common shells you can't (†) . The outer single quotes you show shouldn't be part of the file name, they're just what ls shows to make the output unambiguous. The actual filename is Chapitre 2 : Laccès au système.md . († POSIX-style shells (like Bash, ksh, and zsh with default settings), (t)csh, and fish take that as concatenation, which is how it works in e.g. Python too. Some other shells (rc/es/akanga) do what SQL does, though, and zsh has the rcquotes option for that.) gvim "Chapitre 2 : L'accès au système.md" created a file named : "Chapitre 2 : L'accès au système.md" It most likely created a file called Chapitre 2 : L'accès au système.md . The double quotes aren't part of the name, they're just printed by ls to make the output unambiguous. It used double quotes instead of single quotes here, since the name had a single quote but nothing that would be special in double quotes, so that format was the cleanest. Though Is (for the system) the file name : "Chapitre 2 : L'accès au système.md" the same than a file named 'Chapitre 2 : L'accès au système.md' if I had succeded in doing so? If those were the full filenames -- and they're valid as filenames! -- then no, they're not equivalent, since the other contains two double quotes and one single quote, and the other contains three single quotes. If you mean if they're the same when interpreted using the quoting rules of the shell, then no, again. "Chapitre 2 : L'accès au système.md" represents the string Chapitre 2 : L'accès au système.md , as a single shell word (since the quotes keep it together). On the other hand, 'Chapitre 2 : L'accès au système.md' represents the strings Chapitre 2 : Laccès , au , système.md (three distinct shell words since there are unquoted spaces) and an open quote with no closing partner. If you entered that on the shell command line, it would wait for input from another line in hope of getting the closing quote. If you entered those as arguments to a command on the shell command line without the final stray quote, that command would probably try to access those three distinct files. how should I write the file name in my gvim command to get the exact file name 'Chapitre 2 : L'accès au système.md' I would like to read in the outpout of a ls command ? You can't get ls to output 'Chapitre 2 : L'accès au système.md' in the mode where it outputs shell-style quoted strings, since that's not a valid shell-style quoted string: it has an unclosed quote in the end. Now, if we go back to what you said first: I had to edit a file whose name had to be : Chapitre 2 : L'accès au système.md , with an apostrophe inside of it. There's a few ways to represent that in the shell. One of them is using double quotes, which ls also did for you: "Chapitre 2 : L'accès au système.md" . 
This works because none of the characters inside are special in double quotes (it only has spaces and the single quote to protect), but wouldn't work if the filename contained e.g. a dollar sign. If it did have dollar signs, you could escape them with a backslash: \$ . Another way is to use single quotes for everything but the single quote itself, and to put an escaped single quote where we want one: 'Chapitre 2 : L'\''accès au système.md' . That has three parts: 'Chapitre 2 : L' , \' , and 'accès au système.md' , the quotes and backslash get removed, and the result is concatenated to the single word Chapitre 2 : L'accès au système.md .
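A quick way to convince yourself that the double-quoted and the escaped-single-quote forms name the same file (a sketch; run it in an empty test directory):
touch "Chapitre 2 : L'accès au système.md"        # create it with the double-quoted form
ls 'Chapitre 2 : L'\''accès au système.md'        # refer to it with the single-quote form
# ls lists the file instead of reporting "No such file or directory",
# so both command lines denote exactly the same name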
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350549/" ] }
600,268
Given a directory containing: note 1.txt , last modified yesterday note 2.txt , last modified the day before yesterday note 3.txt , last modified today What is the best way to fetch the array note 3 note 1 note 2 ? To define "best," I'm more concerned about robustness (in the context of Zsh in macOS) than I am about efficiency and portability. The intended use case is a directory of hundreds or thousands of plain text files, but—at the risk of muddling the question—this is a specific case of a more general question I have, of what best practices are in performing string manipulations on filepaths printed by commands like ls , find , and mdfind . I've been using a macro which invokes this command to achieve the above: ls -t | sed -e 's/.[^.]*$//' It's never failed, but: Greg's Wiki strongly recommends against parsing the output of ls . ( Parsing ls ; Practices , under "5. Don't Ever Do These"). Is invoking sed inefficient where parameter expansion would do? Using find (safely delimiting filepaths with NUL characters rather than newlines), and parameter expansion to extract the basenames, this produces an unsorted list: find . -type f -print0 | while IFS= read -d '' -r l ; do print "${${l%.*}##*/}" ; done But sorting by modification date would seem to require invoking stat and sort , because macOS's find lacks the -printf flag which might otherwise serve well . Finally, using Zsh's glob qualifiers : for f in *(om) ; do print "${f%.*}" ; done Though not portable, this last method seems most robust and efficient to me. Is this correct, and is there any reason I shouldn't use a modified version of the find command above when I'm actually performing a search rather than simply listing files in a directory?
In zsh , list=(*(Nom:r)) Is definitely the most robust. print -rC1 -- *(Nom:r) to print them one per line, or print -rNC1 -- *(Nom:r) as NUL-delimited records to be able to do anything with that output since NUL is the only character not allowed in a file path. Change to *(N-om:r) if you want the modification time to be considered after symlink resolution (mtime of the target instead of the symlink like with ls -Lt ). :r (for root name) is the history modifier (from csh ) to remove the extension. Beware that it turns .bashrc into the empty string which would only be a concern here if you enabled the dotglob option. Change to **/*(N-om:t:r) to do it recursively ( :t for the tail (basename), that is, to remove the directory components). Doing it reliably for arbitrary file names with ls is going to be very painful. One approach could be to run ls -td -- ./* (assuming the list of file names fits in the arg list limit) and parse that output, relying on the fact that each file names starts with ./ , and generate either a NUL-delimited list or a shell-quoted list to pass it to the shell, but doing that portably is also very painful unless you resort to perl or python . But if you can rely on perl or python being there, you would be able to have them generate and sort the list of files and output it NUL-delimited (though possibly not that easily portably if you want to support sub-second precision). ls -t | sed -e 's/.[^.]*$//' Would not work properly for filenames that contain newline characters (IIRC some versions of macOS did ship with such filenames in /etc by default). It could also fail for file names that contain sequence of bytes not forming valid characters as . or [^.] could fail to match on them. It may not apply to macOS though, and could be fixed by setting the locale to C / POSIX for sed . The . should be escaped ( s/\.[^.]*$// ) as it's the regexp operator that matches any character as otherwise, it turns dot-less files like foobar into empty strings. Note that to print a string raw , it's: print -r -- "$string" print "$string" would fail for values of $string that start with - , even introducing a command injection vulnerability (try for instance with string='-va[$(uname>&2)1]' , here using a harmless uname command). And would mangle values that contain \ characters. Your: find . -type f -print0 | while IFS= read -d '' -r l ; do print "${${l%.*}##*/}" ; done Also has an issue in that you strip the .* before removing the directory components. So for instance a ./foo.d/bar would become foo instead of bar and ./foo would become the empty string. About safe ways to process the find output in various shells, see Why is looping over find's output bad practice?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/600268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424322/" ] }
600,496
I have workspace with many folders which all are consisted of long paths. For example: |- workspace|-- this.is.very.long.name.context|-- this.is.another.long.path.authors|-- // 20 more folders|-- this.is.folder.with.too.many.characters.folder They all start with same phase ( this.is ) which in my real case is 20 characters long and they mostly differ in last sequence. Is there any way to quickly navigate through them using cd command? Any wild characters like ? ?
I can't speak for others (e.g., zsh) but if you are using bash, wildcards do work to an extent. Example:
~ $ ls
Documents  Desktop  Downloads
If you use an asterisk ( * ), you get:
~ $ cd *ments
~/Documents $
That's because bash can do the substitutions before the command ever gets to cd . In the case of cd , if multiple matches work, you would expect the behaviour to be undefined:
~ $ cd *s
bash: cd: too many arguments
bash expands this to cd Documents Downloads , which doesn't make sense to cd . You can also rely on bash 's autocomplete. In your example, you can simply type cd t ; then hitting Tab will auto-complete to: cd this.is. or whatever the next ambiguous character is. Hit Tab a second time to see all options in this filtered set. You can repeat by entering another character to narrow it down, Tab to autocomplete to the next ambiguous character, and then Tab to see all options. Going further, bash can handle wildcards in autocomplete. In the first case above, you can type cd D*s then hit Tab to get suggestions of what could match the pattern:
~ $ cd D*s
Documents/  Downloads/
~ $ cd D*s
If only one match exists, it'll get completed for you.
~ $ cd *loads
~ $ cd Downloads/
You could also use ls if you don't mind being in the directory in question. The -d tells ls to list directories themselves instead of their contents.
$ ls -d *long*
this.is.very.long.name.context
this.is.another.long.path.authors
or you could use find if you want to look recursively:
$ find workspace -type d -name '*long*'
workspace/this.is.very.long.name.context
workspace/this.is.another.long.path.authors
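If the shared prefix never changes, a tiny helper function can also cut down the typing (a sketch for bash or zsh; the cdw name and the ~/workspace/this.is. prefix are assumptions based on the example):
# jump to the directory whose name ends with the given suffix
cdw() {
    # if more than one directory matches, cd will complain about
    # too many arguments, exactly as with the plain wildcard above
    cd ~/workspace/this.is.*"$1" || return
}
# usage: cdw context   -> cd ~/workspace/this.is.very.long.name.context
#        cdw authors   -> cd ~/workspace/this.is.another.long.path.authors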
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/600496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424574/" ] }
600,519
I have a large file that contains numeric data like this 123124124124126127127 I want to get total Number of repetitions (counted from each number occurring more than once).The output should be 5 as (124 is repeating 3 times and 127 two times). I am able to count repetitions using cat file | sort | uniq -d | wc -l but it gives output as 2 i.e two numbers are repeated (124 &127) and i want output 5.
You can use awk to count the numbers: sort file | uniq -dc | awk '{n+=$1}END{print n}' Output: 5 (you don't need cat here, as sort accepts input) If your uniq does not support -dc , then sort file | uniq -c | awk '$1>1{n+=$1}END{print n}'
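The same total can also be computed by awk alone, without sorting first (a sketch; it sums every occurrence of any value that appears more than once):
awk '{ c[$0]++ }
     END { for (v in c) if (c[v] > 1) n += c[v]; print n+0 }' file
# prints 5 for the sample data (three 124s plus two 127s)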
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421632/" ] }
600,543
Both curl and wget offer the ability to download a sequential range of files ( [1-100] in curl , {1..100} in wget ) but each of them have a shortcoming: curl offers no easy way to pause between each download in the sequence. Some servers cut off downloads after several rapid downloads, and, in any case, it is polite and proper to pause between downloads anyways to be a good scraper citizen. If one wanted to, say, pause 5 seconds between each request, my understanding is there is no way to do this without additional scripting that essentially defeats the point of having the built-in support for a sequential range by making individual requests. A solution to this is to use wget which has the handy --wait=5 flag to achieve the above desired result. Unfortunately, wget has other problems. It seems to struggle with special characters in URLs, and quotes around the URL can't be used because the range {1..100} then appears to go unrecognized. This means some manual escaping of special characters is sometimes needed. This is manageable, but annoying. However, more importantly, wget has no support for naming the output dynamically (the -O flag is of no help here). Though curl offers the convenient -o "#1.jpg" there appears to be no way to achieve the same dynamic result in wget without, again, bypassing built-in sequential range support and making a scripted collection of single requests, or else having to rename or otherwise edit the file names after download. This strikes me as a fairly common task: downloading a sequential range of source files, politely pausing between each request, and renaming the output dynamically. Am I missing some alternative to curl and wget that overcomes the two problems above: 1) pause between each request 2) output file names dynamically.
It seems to struggle with special characters in URLs, and quotes around the URL can't be used because the range {1..100} then appears to go unrecognized. This is because this range syntax is not actually a feature of wget , it's a feature of your shell (e.g. bash), which expands the arguments before passing them to wget , compare: $ echo abc{1..5}abc1 abc2 abc3 abc4 abc5 or $ ruby -e 'p ARGV' abc{1..5}["abc1", "abc2", "abc3", "abc4", "abc5"] If you quote the argument then the shell will not expand it: $ echo 'abc{1..5}'abc{1..5} However you can quote everything except the range: $ echo 'abc'{1..5}'def'abc1def abc2def abc3def abc4def abc5def However, more importantly, wget has no support for naming the output dynamically wget has no features for dealing with ranges like this, because ranges like this are not a wget feature. So no, it seems you can't do all of this with a single command. But you can still fit it in a oneliner: for i in {1..100}; do curl "https://example.com/${i}.jpg" -o "output_${i}.jpg"; sleep 5; done UNIX tools are designed to be fairly focused but easily scriptable. Some of them have grown many options to accomplish common tasks in one go, but they'll never be able to cover every use case on their own.
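A slightly more defensive version of that one-liner, in case some files in the range do not exist (a sketch; the URL and output pattern are placeholders):
for i in {1..100}; do
  # --fail makes curl exit with an error instead of saving an HTML error page
  curl --fail --silent --show-error \
       "https://example.com/${i}.jpg" -o "output_${i}.jpg" \
    || echo "download ${i} failed" >&2
  sleep 5
done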
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43983/" ] }
600,589
I am trying to create a run script with some simple flags using the /bin/sh #!/bin/shset -euxif [ "$DEBUG" = "true" ]; then debug="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=$SUSPEND,address=$DEBUG_PORT"else debug=''fiserv="--server.port=${SERVER_PORT}"prop="--spring.profiles.active=${PROFILES}"all="${serv:-} ${prop:-}"java "${debug:-}"-jar /opt/someJar.jar "${all:-}" Now I have found some issues which I cannot explain neither find in google why it behaves like that. note at "${debug:-}"-jar there is not space. When I put a space then application breaks and says "java cannot find or load the Main class". When I remove the space it works as expected. I have two variables serv and prop . If I put those two as "${serv:-}" "${prop:-}" at the end of the java command then those two arguments are passed separately in the application. But when I put the "${all:-}" then from the exception rising in java I can see that those two are concatenated. Caused by: java.lang.NumberFormatException: For input string: "1234--spring.profiles.active=some-profile" but because of the set -eux it prints + java -jar /opt/someJar.jar --server.port=1234 --spring.profiles.active=some-profile which is what I actually want. If I did not provide enough info for any reason I can provide more.What I try to achieve is to understand why it behaves that way and how can I use /bin/sh for some simple scripts like this one.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424639/" ] }
600,642
Trying to monitor 3 different folders and run a script when one changes. The problem is the script needs to know which folder changed, and I can't seem to find a way to pass that. [Path]PathChanged=/x/y/zPathChanged=/a/b/cPathChanged=/foo/barUnit=123.service[Install]WantedBy=multi-user.target I'm assuming there just isn't a way to do it. And that I need to either have 3 separate .path files(gross), or have the script just iterate through all 3 folders every time one of them changes(inefficient and also gross). But I figured I'd ask here. Perhaps there's a systemd variable I'm missing, or a more efficient way to do it without systemd. So is there? Thanks.
After some playing I found the easiest way was to use one *.path file per path and template each path into a single *@.service file. Here's something using your example: $ systemctl --user cat 123* *.path# /home/stew/.config/systemd/user/[email protected][Service]Type=oneshotExecStart=/bin/echo %I# /home/stew/.config/systemd/user/abc.path[Path]PathChanged=/a/b/[email protected]# /home/stew/.config/systemd/user/foobar.path[Path]PathChanged=/foo/[email protected]# /home/stew/.config/systemd/user/xyz.path[Path]PathChanged=/x/y/[email protected] The *.service can access the path through the %I specifier To get the Unit= names, I used systemd-escape : $ systemd-escape [email protected] \ '/x/y/z' \ '/a/b/c' \ '/foo/bar'[email protected] [email protected] [email protected] Relevant man pages: systemd.path(5) systemd.exec(5) systemd.service(5) systemd-escape(1) systemd-unit(5) In case you're wondering whether there is an easier solution, here are the things I tried: Experiment 1 Hypothesis: It's in an environment variable. systemd.exec(5) gives a list of environment variables. It's possible that something like $RUNTIME_DIRECTORY or $LISTEN_FDS is set. Experiment Setup: $ mkdir /home/stew/systemdpath$ systemctl --user cat simplepath.*# /home/stew/.config/systemd/user/simplepath.path[Unit]Description=Path testing[Path]DirectoryNotEmpty=/home/stew/systemdpath# /home/stew/.config/systemd/user/simplepath.service[Unit]Description=Path testing unit[Service]Type=oneshotExecStart=/usr/bin/env$ systemctl --user start simplepath.path Experiment Results: $ touch ~/systemdpath/file$ journalctl --user simplepath.serviceJul 28 08:26:16 stewbian systemd[31634]: Starting Path testing unit...Jul 28 08:26:16 stewbian env[334512]: HOME=/home/stewJul 28 08:26:16 stewbian env[334512]: LANG=en_GB.UTF-8Jul 28 08:26:16 stewbian env[334512]: LANGUAGE=en_GB:enJul 28 08:26:16 stewbian env[334512]: LOGNAME=stewJul 28 08:26:16 stewbian env[334512]: PATH=/home/stew/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/gamesJul 28 08:26:16 stewbian env[334512]: SHELL=/bin/bashJul 28 08:26:16 stewbian env[334512]: USER=stewJul 28 08:26:16 stewbian env[334512]: XDG_RUNTIME_DIR=/run/user/1000Jul 28 08:26:16 stewbian env[334512]: GTK_MODULES=gail:atk-bridgeJul 28 08:26:16 stewbian env[334512]: QT_ACCESSIBILITY=1Jul 28 08:26:16 stewbian env[334512]: DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/busJul 28 08:26:16 stewbian env[334512]: DESKTOP_SESSION=/usr/share/xsessions/i3Jul 28 08:26:16 stewbian env[334512]: DISPLAY=:0Jul 28 08:26:16 stewbian env[334512]: GPG_AGENT_INFO=/run/user/1000/gnupg/S.gpg-agent:0:1Jul 28 08:26:16 stewbian env[334512]: PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socketJul 28 08:26:16 stewbian env[334512]: PWD=/home/stewJul 28 08:26:16 stewbian env[334512]: SHLVL=1Jul 28 08:26:16 stewbian env[334512]: XAUTHORITY=/home/stew/.XauthorityJul 28 08:26:16 stewbian env[334512]: XDG_CURRENT_DESKTOP=i3Jul 28 08:26:16 stewbian env[334512]: XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0Jul 28 08:26:16 stewbian env[334512]: XDG_SESSION_CLASS=userJul 28 08:26:16 stewbian env[334512]: XDG_SESSION_DESKTOP=i3Jul 28 08:26:16 stewbian env[334512]: XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session5Jul 28 08:26:16 stewbian env[334512]: XDG_SESSION_TYPE=x11Jul 28 08:26:16 stewbian env[334512]: _=/usr/bin/dbus-update-activation-environmentJul 28 08:26:16 stewbian env[334512]: MANAGERPID=31634Jul 28 08:26:16 stewbian env[334512]: INVOCATION_ID=837f6b2e56b543c9b51cda4ee8952fa8Jul 28 08:26:16 stewbian env[334512]: 
JOURNAL_STREAM=8:21581980Jul 28 08:26:16 stewbian systemd[31634]: simplepath.service: Succeeded.Jul 28 08:26:16 stewbian systemd[31634]: Finished Path testing unit Conclusion: Systemd does not put the path into an environment variable. Experiment 2 Hypothesis: It can be templated Thinking of $LISTEN_FDS there are some parallels between socket and paths. Sockets are templated when Accept=yes is set, so what if we try that with paths? Initial setup: $ systemctl --user cat simplepath*# /home/stew/.config/systemd/user/simplepath.path[Unit]Description=Path testing[Path]DirectoryNotEmpty=/home/stew/[email protected]# /home/stew/.config/systemd/user/[email protected][Unit]Description=Path testing unit[Service]Type=oneshotExecStart=/bin/echo %i Experiment Results: $ systemctl --user start simplepath.path$ touch ~/systemdpath/file$ journalctl --user --since "5 minutes ago"Jul 28 09:14:25 stewbian systemd[31634]: Starting Path testing unit...Jul 28 09:14:25 stewbian echo[336171]: simplepathJul 28 09:14:25 stewbian systemd[31634]: [email protected]: Succeeded. Conclusion The instance echo'd in the service was the service name itself. That doesn't help. Experiment 3 Hypothesis: Each path can have its own file and template the service Experiment Setup: $ mkdir ~/path1$ mkdir ~/path2$ systemctl --user cat path*# /home/stew/.config/systemd/user/path1.path[Unit]Description=Path1 testing[Path]DirectoryNotEmpty=%h/[email protected]# /home/stew/.config/systemd/user/path2.path[Unit]Description=Path2 testing[Path]DirectoryNotEmpty=%h/[email protected]# /home/stew/.config/systemd/user/[email protected][Unit]Description=Path testing unit[Service]Type=oneshotExecStart=/bin/echo %h/%i$ systemctl --user start path1.path path2.path Experiment: $ touch ~/path1$ touch ~/path2$ journalctl --user --since "5 minutes ago"Jul 28 09:43:45 stewbian systemd[31634]: Starting Path testing unit...Jul 28 09:43:45 stewbian echo[336517]: /home/stew/path1Jul 28 09:43:45 stewbian systemd[31634]: [email protected]: Succeeded.Jul 28 09:43:45 stewbian systemd[31634]: Finished Path testing unit.Jul 28 09:43:50 stewbian systemd[31634]: Starting Path testing unit...Jul 28 09:43:50 stewbian echo[336519]: /home/stew/path2Jul 28 09:43:50 stewbian systemd[31634]: [email protected]: Succeeded.Jul 28 09:43:50 stewbian systemd[31634]: Finished Path testing unit. Conclusion You can have a single templated service unit working for several path units. This seems to be the simplest approach. The problem I ran into here is that the service uses %h which is the home directory. I had problems when I included the / character in the template names. systemd-escape(1) seems to help out with this.
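If the list of watched directories grows, the per-path unit files can also be generated in a loop (a rough sketch; the directory list, the watch- file names and the myjob@.service template are placeholders to be replaced by whatever templated service is actually used):
#!/bin/sh
for dir in /x/y/z /a/b/c /foo/bar; do
    name=$(systemd-escape --path "$dir")                     # e.g. x-y-z
    unit=$(systemd-escape --template=myjob@.service "$dir")  # e.g. myjob@-x-y-z.service
    cat > "$HOME/.config/systemd/user/watch-$name.path" <<EOF
[Path]
PathChanged=$dir
Unit=$unit

[Install]
WantedBy=default.target
EOF
    systemctl --user enable --now "watch-$name.path"
done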
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/600642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345942/" ] }
600,765
I have around 3K JSON files inside a directory and each file contains approximately 250 (+/-) JSON elements inside it, I want to count the total number of JSON elements from those files as a sum. Currently using jq like below command which returns numbers in each line but I want to count total by adding them, jq length *.json Current Output , 250248250240......250 Expected Output (approx), 600530
Using jq only: jq -s 'map(length) | add' ./*.json -s / --slurp makes jq read its input as a single array, running the specified filter only once against it. map is used to run length for each element of that virtual array, returning an array of numbers, and add finally sums them. To also make sure not to hit the command line length limit (but note that this would also recursively process files in subdirectories 1 ): find . -name "*.json" -exec jq 'length' {} + | jq -s 'add' Found files are passed to jq 'length' in batches whose size depends on the maximum command line length allowed on your system. Since find may run jq more than once, making it slurp its input won't reliably work and its output is piped to a second (slurping) jq instead. 1 Several Q/As on this site show how to prevent find from descending into directories; for instance, Using "find" non-recursively?
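If files in subdirectories should be left out and your find supports the widespread (but non-POSIX) -maxdepth option, the second command could be limited to the current directory (a sketch):
find . -maxdepth 1 -name "*.json" -exec jq 'length' {} + | jq -s 'add'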
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288313/" ] }
600,768
I'm trying to mount USB thumb drive to my router. My USB thumb drive is 32GB,divided to two partitions : 16GB NTFS and 16GB ext4 . The 16GB NTFS partition will be automatically detected in router as sda1 and by default mounted to /mnt/sda1 and /tmp/ftp/Volume_A1. The 16GB ext4 automatically detected in router as sda2 but has not been mounted. So I want to mount sda2 to /test these what I did: mount /dev/sda2 /test <====== sda2 will be mounted to /test, but gone after router rebooted added the UUID of /dev/sda2 as below to /etc/fstab to mount it on /test .<========== I check on df never been mounted , please see below root@router:/# blkid/dev/sda2: UUID="14a0f0f0-27ac-4101-8d11-3057f10d1385" TYPE="ext4"/dev/sda1: LABEL="usbdata" UUID="23D9FBBC72AB064E" TYPE="ntfs"/dev/ubi1_0: UUID="9c7f4c41-289f-4c49-8036-3698b24c7687" TYPE="ubifs"/dev/ubi0_0: UUID="66fa53a5-cc19-454d-b1a4-6a691051fb9e" TYPE="ubifs" I added the UUID of /dev/sda2 (listed above) to /etc/fstab to mount it on /test : root@router:/# nano /etc/fstab # fstab file - used to mount file systemsproc /proc proc defaults 0 0tmpfs /var tmpfs size=420k,noexec 0 0tmpfs /mnt tmpfs size=16k,noexec 0 0tmpfs /dev tmpfs size=64k,mode=0755,noexec 0 0sysfs /sys sysfs defaults 0 0debugfs /sys/kernel/debug debugfs nofail 0 0mtd:bootfs /bootfs jffs2 ro 0 0UUID=14a0f0f0-27ac-4101-8d11-3057f10d1385 /test auto nosuid,nodev,nofail 0 0 root@router:/# dfFilesystem 1K-blocks Used Available Use% Mounted onubi:rootfs_ubifs 44840 38760 6080 86% /mtd:bootfs 4480 3440 1040 77% /bootfsmtd:data 4096 464 3632 11% /dataubi1:tp_data 4584 844 3472 20% /tp_dataubi:rootfs_ubifs 44840 38760 6080 86% /tmp/root/dev/sda1 15452156 84620 15367536 1% /mnt/sda1/dev/sda1 15452156 84620 15367536 1% /tmp/ftp/Volume_A1 [Spacing has been modified in an attempt to increase readability.] Please advise and thank you =========================================================================== Following up comment's below : =========================================================================== As suggested by Aaron D. Marasco , I changed auto to ext4 : UUID=14a0f0f0-27ac-4101-8d11-3057f10d1385 /test ext4 nosuid,nodev,nofail 0 0 still no luck. df as same result as before And here is the output from ps , as requested by Hauke Laging . (The router's Busybox doesn’t recognize the -p option.) 
root@router:/# ps -o pid,argsPID COMMAND 1 init 2 [kthreadd] 3 [ksoftirqd/0] 4 [kworker/0:0] 5 [kworker/0:0H] 6 [kworker/u4:0] 7 [rcu_preempt] 8 [rcu_sched] 9 [rcu_bh] 10 [migration/0] 11 [migration/1] 12 [ksoftirqd/1] 14 [kworker/1:0H] 15 [khelper] 122 [writeback] 125 [ksmd] 126 [crypto] 127 [bioset] 129 [kblockd] 151 [skbFreeTask] 152 [bcmFapDrv] 173 [kswapd0] 174 [fsnotify_mark] 294 [cfinteractive] 344 [kworker/1:1] 351 [linkwatch] 352 [ipv6_addrconf] 357 [deferwq] 362 [ubi_bgt0d] 926 [jffs2_gcd_mtd2] 947 [ubi_bgt1d] 962 [ubifs_bgt1_0] 1039 [bcmFlwStatsTask] 1113 [kworker/1:2] 1137 {rcS} /bin/sh /etc/init.d/rcS S boot 1139 init 1140 logger -s -p 6 -t sysinit 1286 /sbin/klogd 1540 /sbin/hotplug2 --override --persistent --set-rules-file /etc/hotplug2.rul 1550 /usr/sbin/logd -C 128 1555 /sbin/ubusd 1558 {S12ledctrl} /bin/sh /etc/rc.common /etc/rc.d/S12ledctrl boot 1560 /usr/bin/ledctrl 1627 [bcmsw_rx] 1629 [bcmsw] 1636 [pdc_rx] 1649 /bin/swmdk 1766 /sbin/netifd 4265 [dhd_watchdog_th] 4272 [wfd0-thrd] 4425 [check_task] 4493 [kworker/0:2] 4559 [scsi_eh_0] 4562 [scsi_tmf_0] 4568 [usb-storage] 4917 [kworker/u4:2] 4919 [kworker/1:1H] 5039 /usr/sbin/imbd 5207 /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf 5219 [ telnetDBGD ] 5220 [ acktelnetDBGD ] 5243 [NU TCP] 5248 [NU UDP] 5356 eapd 5369 nas 5395 wps_monitor 6095 acsd 7008 /usr/sbin/mcud 7592 /usr/sbin/dropbear -P /var/run/dropbear.1.pid -p 22 7598 {S50postcenter} /bin/sh /etc/rc.common /etc/rc.d/S50postcenter boot 7600 /usr/sbin/postcenter 7612 /usr/sbin/sysmond 7620 {S50tmpServer} /bin/sh /etc/rc.common /etc/rc.d/S50tmpServer boot 7622 /usr/bin/tmpServer 7626 /usr/sbin/tsched 7628 /usr/bin/tmpServer 7777 /usr/bin/client_mgmt 8350 /usr/sbin/ntpd -n -p time.nist.gov -p time-nw.nist.gov -p time-a.nist.gov 8398 [ubifs_bgt0_0] 8403 /usr/bin/cloud-https 8639 {S99switch_led} /bin/sh /etc/rc.common /etc/rc.d/S99switch_led boot 8644 /usr/bin/switch_led 8758 /usr/bin/tm_shn -b start 8948 [tntfsiupdated] 9217 /usr/sbin/smbd -D 9219 /usr/sbin/nmbd -D 9264 proftpd: (accepting connections) 9279 udhcpc -p /var/run/udhcpc-eth0.pid -s /lib/netifd/dhcp.script -O 33 -O 12 9330 /usr/sbin/minidlnad -f /tmp/minidlna.conf -P /var/run/minidlnad.pid 9533 /usr/sbin/crond -c /etc/crontabs -l 5 9568 {dnsproxy_deamon} /bin/sh /usr/lib/dnsproxy/dnsproxy_deamon.sh 9974 /usr/sbin/improxy -c /etc/improxy.conf -p /tmp/improxy.pid10122 /usr/sbin/miniupnpd -f /var/etc/miniupnpd.conf10332 /usr/bin/cloud-brd -c /etc/cloud_config.cfg10341 /usr/bin/cloud-client10778 {lic-setup.sh} /bin/sh ./lic-setup.sh10783 ./gen_lic11185 {tcd_monitor.sh} /bin/sh ./tcd_monitor.sh11186 {dc_monitor.sh} /bin/sh ./dc_monitor.sh11187 {wred-setup.sh} /bin/sh ./wred-setup.sh11200 ./tcd11204 ./dcd -i 1800 -p 43200 -S 4 -b11217 ./wred -B11241 {clean-cache.sh} /bin/sh ./clean-cache.sh11244 /usr/bin/tm_shn -t start15903 sh /lib/deleteTmSigToken.sh 8640015906 sleep 8640019612 /usr/sbin/dropbear -P /var/run/dropbear.1.pid -p 2219771 -ash19884 sleep 60021950 sleep 3022135 sleep 522137 sleep 522158 sleep 522160 sleep 5 as answer by Hauke Laging . Sounds right, I do mount -a or mount /test and sda2 will be mounted to /test , how to permanently mount with udev rule ? On my router , I have no idea to run udev rule ( cant find any udev.conf ) , so I test with run script mount /test in /etc/rc.local , reboot the router but still won't mounted /test then I add sleep 20 for delaying in script and test by reboot the router, and working , automatically mounted /test now !Thank you all
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/423992/" ] }
600,775
I installed WSL2 on my computer, and with it I grabbed OpenSUSE so I could get some experience with a distro besides Ubuntu. I sideloaded OpenSUSE 15.1, and it installed fine and loads into the terminal fine. Now, in order to use things that have a GUI [I'm trying to get KDE], I need some sort of X11 window manager. I'm using Xming, which is probably the most popular one.When I set the $DISPLAY variable with export DISPLAY=0.0 , it runs fine, and using echo $DISPLAY returns the same thing that I inputted. However, when I run startkde , I get the following: $DISPLAY is not set or cannot connect to the X server. .What might be causing this issue, and how might I get around it?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424817/" ] }
600,852
I am reading the zsh's manual and I know that I can assign the integer value to a parameter by: (( val = 1 )) And if I am calling a script, and my first parameter is an integer like ./scirpt.zsh $((val)) , how can I refer to it in my script to do some arithmetic processes like reassignment, comparison,etc? I hope the code like following: #!/bin/zsh(( {1} = $1++ ))
In zsh, the array variable argv is an alternative way to access the positional parameters. So: ((argv[1]++))
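For example, a minimal script along the lines of the question (a sketch; the file name is made up):
#!/bin/zsh
# script.zsh - $1 and argv[1] are two names for the same positional
# parameter, so incrementing argv[1] changes what $1 expands to
(( argv[1]++ ))
print -r -- "$1"
Running ./script.zsh 41 would print 42.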
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/600852", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350311/" ] }
600,866
I want to explore arbitrary document using jq . To that end, I would like to limit the depth to which jq descends into documents and only show me the first n , e.g. 3, levels. Suppose I have the following JSON document: { "a": { "b": { "c": { "d": { "e": "foo" } } } }, "f": { "g": { "h": { "i": { "j": "bar" } } } }, "k": { "l": { "m": { "n": { "o": "baz" } } } }} I would expect the result { "a": { "b": { "c": {} } }, "f": { "g": { "h": {} } }, "k": { "l": { "m": {} } }} This is a fairly simple task if I know the document's structure in advance, but frequently, I don't. That is why I want to be able to have jq show only the first n levels of the document structure, which may be an arbitrary nesting of dictionaries and arrays. A more complex example could be: [ { "a": { "b": { "c": { "d": { "e": "foo"}}}}}, { "f": [ { "g": "foo"}]}, [ "h", "i", "j" ]] where I would expect the result [ { "a": { "b": {}}, { "f": [{}]}, [ "h", "i", "j" ]] Can I make jq do this?
Combining the del function with the .[]? array/object value iterator to delete any key/value nested at the fourth level seems to give the result you are looking for: $ jq 'del(.[]?[]?[]?[]?)' <<'EOT'[ { "a": { "b": { "c": { "d": { "e": "foo"}}}}}, { "f": [ { "g": "foo"}]}, [ "h", "i", "j" ]]EOT[ { "a": { "b": {} } }, { "f": [ {} ] }, [ "h", "i", "j" ]] The .[]? version of the .[] iterator filter is needed to prevent jq from complaining when it tries to iterate over an item that is not an array or object. To be honest, I couldn't find any direct mention of the array/object iterator filter in the form shown above (basically: .[][] ) anywhere in the documentation. A less concise but clearly documented version would be: $ jq 'del(.[]? | .[]? | .[]? | .[]?)' ...
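If the depth should be a parameter rather than hard-coded, the filter can be assembled in the shell first, using the documented .[]? | .[]? form (a sketch; assumes a seq utility and keeps three levels, as in the question):
depth=3
filter=".[]?$(printf '|.[]?%.0s' $(seq "$depth"))"   # builds .[]?|.[]?|.[]?|.[]?
jq "del($filter)" file.json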
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/600866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19575/" ] }
601,080
Is there an analog for dpkg -S <file> , but for configuration files, such as /etc/samba/smb.conf ? Which are generated on dpkg-reconfigure , I guess. user@host:~$ dpkg -S /etc/samba/smb.confdpkg-query: no path found matching pattern /etc/samba/smb.conf
dpkg -S will only find configuration files which are shipped directly in packages, not those which are generated by maintainer scripts (or other tools). There’s no general solution for the latter, but looking for references to the file in /var/lib/dpkg/info is your best bet. In this instance: $ grep -rl /etc/samba/smb.conf /var/lib/dpkg/info/var/lib/dpkg/info/samba-common.config/var/lib/dpkg/info/samba-common.postinst/var/lib/dpkg/info/samba-common.postrm/var/lib/dpkg/info/samba-common.templates This suggests that the file is managed by samba-common ; reading the postinst file will confirm that.
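That grep can be wrapped in a small convenience function that reports just the package names (a sketch; the function name is made up):
owns_conffile() {
    # list packages whose maintainer scripts or templates mention the given path
    grep -rl -- "$1" /var/lib/dpkg/info 2>/dev/null |
        sed 's!.*/!!; s!\.[^.]*$!!' |
        sort -u
}
# owns_conffile /etc/samba/smb.conf   ->  samba-common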
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/601080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/320683/" ] }
601,138
I am looking into automating some processes / calculations but I may first need to format a slightly awkward CSV file set. (For this I am using bash , as requested). The csv file set follows (roughly) the format below CODE,Sitting,Jan,Feb,Mar,Apr,May,Jun,Jul,TotalsCLLK_J9,First Sitting,,,2,5,2,,,10,Second Sitting,,,,,,,,1RTHM_A8,First Sitting,,,1,,3,,,6,Second Sitting,,,,,1,,,1FFBJ_FA9,First Sitting,,,,8,6,,,25,Second Sitting,,,,,11,,,12UUYIOR_HJ9,First Sitting,,,1,3,6,,,17IKRO_Lk8,First Sitting,,,,3,3,,,37,Second Sitting,,,,6,11,,,34 I am trying to fill in the empty fields in the column CODE with the content of the field from the previous line (ordinarily these empty fields occur next to a "second sitting" instance in column 2). So, for the above example, the result should be like CODE,Sitting,Jan,Feb,Mar,Apr,May,Jun,Jul,TotalsCLLK_J9,First Sitting,,,2,5,2,,,10CLLK_J9,Second Sitting,,,,,,,,1etc. I am starting to read some awk documentation as it seems a reasonably powerful utility for this task - but haven't made any progress yet. Thoughts? ta
Using Miller ( https://github.com/johnkerl/miller ) is very simple. Running mlr --csv fill-down -f CODE input.csv >output.csv you will have +------------+----------------+-----+-----+-----+-----+-----+-----+-----+--------+| CODE | Sitting | Jan | Feb | Mar | Apr | May | Jun | Jul | Totals |+------------+----------------+-----+-----+-----+-----+-----+-----+-----+--------+| CLLK_J9 | First Sitting | - | - | 2 | 5 | 2 | - | - | 10 || CLLK_J9 | Second Sitting | - | - | - | - | - | - | - | 1 || RTHM_A8 | First Sitting | - | - | 1 | - | 3 | - | - | 6 || RTHM_A8 | Second Sitting | - | - | - | - | 1 | - | - | 1 || FFBJ_FA9 | First Sitting | - | - | - | 8 | 6 | - | - | 25 || FFBJ_FA9 | Second Sitting | - | - | - | - | 11 | - | - | 12 || UUYIOR_HJ9 | First Sitting | - | - | 1 | 3 | 6 | - | - | 17 || IKRO_Lk8 | First Sitting | - | - | - | 3 | 3 | - | - | 37 || IKRO_Lk8 | Second Sitting | - | - | - | 6 | 11 | - | - | 34 |+------------+----------------+-----+-----+-----+-----+-----+-----+-----+--------+
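If Miller is not available, plain awk can do the same fill-down, provided the CSV has no quoted fields containing commas (a sketch):
awk 'BEGIN { FS = OFS = "," }
     NR == 1  { print; next }     # keep the header untouched
     $1 != "" { prev = $1 }       # remember the last non-empty CODE
     $1 == "" { $1 = prev }       # fill the gap from the row above
              { print }' input.csv > output.csv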
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/601138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346469/" ] }
601,160
A common way to skip aliases is to add a backslash before the aliased command.For example, $ alias ls='ls -l'$ ls file-rw-r--r-- 1 user user 70 Jul 30 14:37 file$ \ls filefile My research has led me to discover other exotic ways to skip aliases, such as l\s , "ls" , l""s , ls'' . But where is that behavior documented, or (put another way) why is that the correct behavior? The POSIX specification does not seem todocument any special case which deactivates aliases when preceding themwith a backslash. I also could not find it in my shell manual . This is a self-answered question.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/601160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388654/" ] }
602,189
2nd Command is getting ignored ssh -q -t -o ConnectTimeout=10 learnserver sudo su - root -c 'hostname' && "/grep PermitRootLogin /opt/ssh/etc/sshd_config/"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426225/" ] }
602,216
tl;dr : Does cron use the numerical value of an interval compared to the numerical value of the day to determine its time of execution or is it literally "every 3 days" at the prescribed time from creation? Question: If I add the following job with crontab -e will it run at midnight tomorrow for the first time or three days from tomorrow? Or is it only on "third" days of the month? Day 1, 4, 7, 10...? 0 0 */3 * * /home/user/script.sh I put this cron in yesterday and it ran this morning (that might be the answer to my question) but I want to verify that this is correct. Today is the 31st and that interval value does appear to fall into the sequence. If cron starts executing an interval on the 1st of the month, will it run again tomorrow for me? Additional notes: There are already some excellent posts and resources about cron in general (it is a common topic I know) however the starting point for a specific interval isn't as clear to me. Multiple sources word it in multiple ways: This unixgeeks.org post states: Cron also supports 'step' values.A value of */2 in the dom field would mean the command runs every two daysand likewise, */5 in the hours field would mean the command runs every5 hours. So what is implied really by every two days? This answer states that a cronjob of 0 0 */2 * * would execute "at 00:00 on every odd-numbered day (default range with step 2, i.e. 1,3,5,7,...,31)" Does cron always step from the first day of the month? It appears that the blog states the cron will execute on the 31st and then again on the 1st of the next month (so two days in a row) due to the interval being based on the numeric value of the day. Another example from this blog post 0 1 1 */2 * command to be executed is supposed to execute the first day of month, every two months at 1am Does this imply that the cron will execute months 1,3,5,7,9,11? It appears that cron is designed to execute interval cronjobs ( */3 ) based on the numerical value of the interval compared to the numerical value of the day (or second, minute, hour, month). Is this 100% correct? P.S. This is a very specific question about one particular feature of cron that (I believe) needs some clarification. This should allow Google to tell you, with 100% certainty, when your "every 3 months" cron will run for the first time after it's been added to crontab.
The crontab(5) man page uses wording that is pretty clear: Step values can be used in conjunction with ranges. Following a range with "/number" specifies skips of the number's value through the range. For example, "0-23/2" can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is "0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an asterisk, so if you want to say "every two hours", just use "*/2". The exact wording (and the example) is "skips of the number's value through the range" - and it is implied that it starts at the first number in the range. This means that if the range is 1-31 for days, the values produced by 1-31/2 or */2 are 1,3,5,7, etc. It also means that the range is reset to its start value once it has run through. So you are also correct that in this case, the cronjob would run both on the 31st and on the 1st of the month after. Please note that cron has 2 fields that are mutually exclusive - the "day of month" and "day of week". So you have to choose one or the other when running jobs with an interval of days. If you want to define a cronjob that runs perfectly every other day, you have to use multiple lines and define each month by hand according to the current calendar.
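One common workaround for a true every-N-days schedule is to trigger the job daily and let a date test decide whether it actually runs, counting days from the Unix epoch instead of from the first of the month (a sketch; note that % must be escaped as \% inside a crontab line):
# runs script.sh at midnight on every third day counted from the epoch,
# so the interval stays exact across month boundaries
0 0 * * * [ $(( $(date +\%s) / 86400 \% 3 )) -eq 0 ] && /home/user/script.sh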
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/167947/" ] }
602,225
Both Debian and Ubuntu certificates were added to the dbx EFI/UEFI firmware database because of the BootHole vulnerability , and the two major linux distros are now forced to use different private keys in order to sign software they provide. But there's no information on where the new certificates can be downloaded from. So how to get them?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
602,240
As ll , I use ls -alh /some/file frequently, and just as often then, follow immediately with cat /some/file . I have to believe this has been addressed with various aliases. And on the next level, with shell functions. I have written a couple hundred aliases, but no shell function. Maybe that's what the doctor orders. Is this an old hat? Is there a shelf remedy? Thank you.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192563/" ] }
602,267
I have two home LANs (100km apart) connected to internet via internet provider routers and would like to them connect with wireguard VPN with two single board computers (NanoPi R2S). NanoPi R2S boards already have armbian and wireguard installed. One of internet connections has static and other dynamic internet IP. Both are entered in dynamic DNS, and now I have the chance to install dynamic DNS client on nanopis, because clients on routers don't work very well. I can expose wireguard computers via NAT virtual servers functionality on both internet routers. Site A: Static public IP A.A.A.A and name sitea LAN 192.168.0.0/24 with default gateway 192.168.0.1 NanoPi eth0 with DHCP reservation 192.168.0.250 Site B: Dynamic public IP B.B.B.B and name siteb LAN 192.168.1.0/24 with default gateway 192.168.1.1 NanoPi eth0 with DHCP reservation 192.168.1.250 I thought Site B would establish a connections from dynamic IP site to Site B exposed server port. Then all computers on Site A (192.168.0.0/24) should have an additional route, which would direct traffic for Site B (192.168.1.0/24) via 192.168.0.250. Similarly, all computers on Site B (192.168.1.0/24) should have a route to direct traffic for Site A (192.168.0.0/24) via 192.168.1.250. Will that work? What IPs should I assign to wg0 interfaces? 10.0.0.0/24? How would then routes look like? How do I route 10.0.0.0/24 traffic then? Do I need to route it? How should I configure wireguard? I think I don't need exact commands for wireguard. I just need pointers, how the traffic (IPs, routes) should be organized. With eth0 and wg0, NanoPi R2S would need eth0 for traffic in both ways. Could I easily use another ethernet port on NanoPi (lan0) to increase throughput or to dedicate one port for wireguard and another for LAN? How? What about devices (phones, laptops), which I move from one site to another and they connect to WiFi/DHCP? I will probably have to enter routes every time, as route in Site A would make a mess on another Site B and vice versa. I know that one of the routers allows entering some static routes, but I need to figure out what they are for.
The solution to this is pretty straightforward; however, I have not found any description on the internet of how to make such a site-to-site VPN, so I decided to post a solution that works for me here. Disclaimer: this solution works for IPv4 only. Since one of my ISPs does not provide any IPv6, I did not configure it for IPv6; that remains a future improvement.

IP forwarding

First, IP forwarding needs to be enabled, so the nanopi will forward traffic from one interface to another. A line net.ipv4.ip_forward=1 should be present in /etc/sysctl.conf or in some .conf file in /etc/sysctl.d/ . Run sysctl -p or service procps reload (depending on the linux distribution), or just reboot, to make the change effective.

Wireguard

I will not go into the details of how to configure wireguard; there are plenty of guides on the internet. I will just note the network details used in my case, and point out two atypical configuration changes that I made in /etc/wireguard/wg0.conf . As mentioned in the question there are two sites:

192.168.0.0/24 with a nanopi acting as the wireguard server, with wireguard address 10.0.0.1 on the wg0 interface.
192.168.1.0/24 with a nanopi acting as the wireguard client, with wireguard address 10.0.0.2 on the wg0 interface.

Changes in /etc/wireguard/wg0.conf are:

First, an instruction is added to prevent wg-quick from setting up its own ip rules and routes. A line is added to the [Interface] section:

Table = off

Second, a route is added to direct traffic through the VPN connection. Again in the [Interface] section:

PostUp = ip route add 192.168.1.0/24 via 10.0.0.2 dev wg0
PreDown = ip route del 192.168.1.0/24 via 10.0.0.2 dev wg0

The lines above are from the first site, and tell the kernel to route all traffic for the other network ( 192.168.1.0/24 ) to the IP on the other side of the VPN ( 10.0.0.2 ) via device wg0 . On the other site the IP numbers in the configuration are different ( 192.168.0.0/24 and 10.0.0.1 ).

Third, the second wireguard computer, acting as a wireguard client, has an Endpoint defined in the [Peer] section, while the first one, acting as a server, does not. That endpoint is the public IP and port where the server nanopi is accessible. On the router, the server nanopi has to be exposed to the internet in order to allow incoming connections: the internet router on the server site needs NAT/port forwarding so that the UDP port wireguard listens on is forwarded to the IP and port of the wireguard server. I will not show that here, because every router has a different GUI for setting that up.

Routing & DHCP

Now, when the wireguard connection works, you should be able to access the nanopi on the other site via the VPN. While logged in to 192.168.0.250 ( 10.0.0.1 on wg0 ) one should be able to ping (or log in to) 10.0.0.2 , and vice versa. However, other clients on both networks do not have the information on how to reach the other site through the VPN. They have a default gateway where they forward all traffic that is not for the local network. Since addresses from the other site are not local, all traffic that should go through the VPN goes out to the internet via the ISP routers.

One way would be to add a custom static route on each device on the network. Some allow that, but many don't: Windows or linux computers have an option to add a route, but on android devices that is already a problem. So another solution should be found, where one does not need to set a static route on each device. If the ISP router has an option to add custom routes, such a route can be added there.
I cannot show here how exactly that can be accomplished. In general, all addresses from the other site (e.g. 192.168.0.0 with netmask 24 or 255.255.255.0 ) shall be redirected to 192.168.1.250 . If that is not possible, I would recommend setting up a DHCP server on the nanopi and disabling the DHCP function on the ISP router. isc-dhcp-server can be installed on practically all linux distributions, and that server can send proper routing information to all clients on the network, as long as they use DHCP. I will not go into details of DHCP server configuration. Here is an example of /etc/dhcp/dhcpd.conf :

default-lease-time 600;
max-lease-time 7200;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.0.255;
option routers 192.168.0.1;
option domain-name "localdomain";
option domain-name-servers 192.168.0.250;
option rfc3442-classless-static-routes code 121 = array of integer 8;
option ms-classless-static-routes code 249 = array of integer 8;
option rfc3442-classless-static-routes 24, 192, 168, 1, 192, 168, 0, 250, 0, 192, 168, 0, 1;
option ms-classless-static-routes 24, 192, 168, 1, 192, 168, 0, 250, 0, 192, 168, 0, 1;

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.150 192.168.0.229;
  default-lease-time 86400;
  max-lease-time 172800;
}

host staticip {
  default-lease-time 86400;
  max-lease-time 172800;
  hardware ethernet aa:bb:cc:dd:ee:ff;
  fixed-address 192.168.0.4;
}

Everything in the dhcp configuration is pretty standard. DHCP is assigning addresses from 192.168.0.150 to 192.168.0.229 ; change that to your preferences. I should point out the lines option rfc3442... and option ms-classless... . They define options for DHCP to send route information. One is for clients following RFC 3442, and the other is for Microsoft clients, which use a different option number. The numbers 24, 192, 168, 1, 192, 168, 0, 250, 0, 192, 168, 0, 1 have the following meaning:

First static route:
24 - netmask size
192, 168, 1 - prefix for the netmask defined above
192, 168, 0, 250 - gateway for the network defined above

Second static route:
0 - netmask size (default gateway)
no network part, since the mask above is 0
192, 168, 0, 1 - gateway for the network defined above

The default route has to be provided here too, because the client ignores the ordinary routers option when it receives classless static route information via DHCP. At the end of the dhcp config file there is an example of how to make a DHCP reservation for clients that will have static IP addresses. Enter your hardware ethernet (MAC) address and the desired IP.

Firewalls

Now clients from one network should be able to access clients on the other network via the wireguard VPN, provided the firewalls let the traffic through. If there is a firewall on your VPN computer (e.g. the nanopi) it may not allow traffic to enter. Make firewall rules according to your preferences and the security requirements of your environment. You can open each individual port that you need to go through on the VPN computer, or you can open all of them if it is a trusted environment. One way to do that is with firewall-cmd commands.
Here is an example:

# set "trusted" as a default zone
firewall-cmd --set-default-zone=trusted
# see in which zones the interfaces are
firewall-cmd --get-active-zones
# you may have to remove eth0 from the zone where it belongs to (public in this case)
firewall-cmd --permanent --zone=public --remove-interface=eth0
# add eth0 to trusted zone
firewall-cmd --permanent --zone=trusted --add-interface=eth0
# you may have to remove wg0 from the zone where it belongs to (public in this case)
firewall-cmd --permanent --zone=public --remove-interface=wg0
# add wg0 to trusted zone
firewall-cmd --permanent --zone=trusted --add-interface=wg0
# reload the new config
firewall-cmd --reload

Then there are firewalls on each device that may stop the incoming traffic. Typically, the Windows firewall allows some connections from the "local network" only. The other site is not on the local network, so the server will block connections coming through the VPN. You have to add the other network (e.g. 192.168.1.0/24 ) to each rule that is blocking a particular connection. For instance, for a Windows share you have to change all incoming rules for ports 135-139 and 445 on the computer doing the sharing. Similarly, you have to change outgoing rules on the computer accessing the share. I would certainly recommend taking some time to fine-tune the firewalls instead of just shutting them off. I hope the instructions are clear enough and will help to set up a site-to-site VPN, or at least give an idea of how to do it. Some knowledge of networking and administration is required.
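As a recap of the wireguard part, a minimal consolidated sketch of the two wg0.conf files using the addresses above. The UDP port 51820, the key placeholders, the endpoint hostname and the PersistentKeepalive line are assumptions of mine, not values from the setup described; AllowedIPs must include the remote LAN so that forwarded traffic is accepted:

# site A (server), /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <site A private key>
Table = off
PostUp = ip route add 192.168.1.0/24 via 10.0.0.2 dev wg0
PreDown = ip route del 192.168.1.0/24 via 10.0.0.2 dev wg0

[Peer]
PublicKey = <site B public key>
AllowedIPs = 10.0.0.2/32, 192.168.1.0/24

# site B (client), /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.2/24
PrivateKey = <site B private key>
Table = off
PostUp = ip route add 192.168.0.0/24 via 10.0.0.1 dev wg0
PreDown = ip route del 192.168.0.0/24 via 10.0.0.1 dev wg0

[Peer]
PublicKey = <site A public key>
Endpoint = sitea.example.net:51820
AllowedIPs = 10.0.0.1/32, 192.168.0.0/24
# optional, helps keep NAT mappings open
PersistentKeepalive = 25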
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130631/" ] }
602,313
I am trying to unlock the Gnome Keyring Daemon from the command line, by directly passing it a password. I tried a few variations of --daemonize, --login, --start, but I can't get it to work. echo $password | gnome-keyring-daemon --unlock returns SSH_AUTH_SOCK=/run/user/1000/keyring/ssh but doesn't unlock anything. Basically I want something along the lines of: gnome-keyring-daemon unlock --pw $password Not sure if it makes any difference, but I'm on Manjaro i3wm version, so not using a desktop environment. Background: I'm using KeePassXC to manage my keyring. The one downside to this is, that I can't automatically unlock the keyring on login. Since I don't want to enter two long passwords I came up with the following script as a workaround: Logging in automatically unlocks Gnome Keyring Daemon Gnome Keyring Daemon contains (a part of) the PW to KeePassXC as the only entry enter the last characters of the pw in a prompt kill Gnome Keyring Daemon use the combined pw to unlock KeePassXC Now I want to do the opposite to lock KeePassXC again: Get PW to Gnome Keyring Daemon from KeePassXC Kill KeePassXC Unlock GnomeKeyringDaemon <- this is the part I can't get to work
This is a very brutal, dirty, and probably very wrong way to do this, but after struggling with unlocking my keyring over SSH for a while, I came up with this little script:

echo 'NOTE: This script will only work if launched via source or .' >&2
echo -n 'Login password: ' >&2
read -s _UNLOCK_PASSWORD || return
killall -q -u "$(whoami)" gnome-keyring-daemon
eval $(echo -n "${_UNLOCK_PASSWORD}" \
    | gnome-keyring-daemon --daemonize --login \
    | sed -e 's/^/export /')
unset _UNLOCK_PASSWORD
echo '' >&2

And yes, when I call . ~/bin/unlock-gnome-keyring and enter my login password, it unlocks my login keyring, I can view it in seahorse running through remote X and use it via libsecret applications. Please be warned though, I'm not a security expert and there might be serious security implications to doing it this way. I did not check whether the password is properly cleaned in memory etc., which might render you more exposed to attacks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602313", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426338/" ] }
602,495
I want to switch from Debian Buster to Debian Bullseye. After doing this, can I return all Bullseye packages to Buster? (Unupgrade packages)
Technically, you can, but downgrades aren’t supported and we don’t test them. The downgrade might work fine, but it might not, and Bullseye is now sufficiently far ahead of Buster that I wouldn’t try it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424650/" ] }
602,518
I have been using ssh to access remote servers for many months, but recently I haven't been able to establish a reliable connection. Sometimes I cannot login and get the message "Connection reset by port 22", when I can login I get the error message "client_loop: send disconnect: Broken pipe" in a few minutes (even if the terminal is not idle). My ~/.ssh/config file has: Host * ServerAliveInterval 300 ServerAliveCountMax 2 TCPKeepAlive yes My /etc/ssh/sshd_config file has: #ClientAliveInterval 300#ClientAliveCountMax 3 I recently upgraded my xfinity plan to a faster speed and the problem started happening then. But xfinity insists the issue is on my end. Note that my roommate also has the same issue with ssh... Is there something that I'm missing on my end? Any help would be greatly appreciated!(I'm running on a Mac)
I solved the same problem by editing the file ~/.ssh/config to have:

Host *
    ServerAliveInterval 20
    TCPKeepAlive no

Motivation: TCPKeepAlive no means "do not send keepalive messages to the server". When the opposite, TCPKeepAlive yes , is set, the client sends keepalive messages to the server and requires a response in order to maintain its end of the connection. This will detect if the server goes down, reboots, etc. The trouble with this is that if the connection between the client and server is broken for a brief period of time (due to a flaky network connection), this will cause the keepalive messages to fail, and the client will end the connection with "broken pipe". Setting TCPKeepAlive no tells the client to just assume the connection is still good until proven otherwise by a user request, meaning that temporary connection breakages while your ssh term is sitting idle in the background won't kill the connection.
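If you want to test the effect before changing ~/.ssh/config , the same options can be given on the command line for a single connection (user@host is a placeholder):

ssh -o ServerAliveInterval=20 -o TCPKeepAlive=no user@host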
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/602518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426520/" ] }
602,522
I was recently surprised to find out, that the POSIX list of utilities doesn't include the column utility. column first appeared way back in 4.3BSD, andit's is pretty useful. Is there a POSIX equivalent of it? The exact command I would like to replace is: $ more -- ./tmp.txt 1 SMALL 000a2 VERY_VERY_VERY_VERY_LONG 000b3 SMALL 000c$ column -t -- ./tmp.txt 1 SMALL 000a2 VERY_VERY_VERY_VERY_LONG 000b3 SMALL 000c$ That is, expand tabs to create prettier columns.
POSIX prefers to codify existing behavior, and only mandates new features, or features that are not widely adopted, when the existing behavior is unsatisfactory. Nowadays there isn't much call for presentation of data using unformatted text in fixed-width fonts, so column is unlikely to become mandatory. Unlike pr , it's from BSD and isn't present in System V and other historical unices, so it isn't grandfathered in. Like any other text utility, you can express it in awk with a moderate amount of work. Here's a minimally tested implementation of column -t . Awk's -F option is similar to column 's -s in simple cases (with a single character).

#!/usr/bin/awk -f
# collect every field, remembering the widest entry seen in each column
{
    if (max_column < NF) max_column = NF;
    for (i = 1; i <= NF; i++) {
        if (width[i] < length($i)) width[i] = length($i);
        data[NR, i] = $i;
    }
}
# at the end, build one printf format per column and print the padded rows
END {
    for (i = 1; i < max_column; i++)
        format[i] = sprintf("%%-%ds ", width[i]);
    format[max_column] = "%s\n";
    for (k = 1; k <= NR; k++) {
        for (i = 1; i <= max_column; i++)
            printf format[i], data[k, i];
    }
}
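Assuming the script above is saved as col.awk (the name is arbitrary), usage roughly mirrors column -t , and -F plays the role of column -s for a single-character separator:

awk -f col.awk tmp.txt
awk -F: -f col.awk /etc/passwd    # columnate a colon-separated file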
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105635/" ] }
602,587
I was making some routine checks just now and realized this: Raspberry Pi OS (previously called Raspbian) Source: Raspberry Pi OS I found no mention of this in their blog, nor on the Wikipedia page. Why change such a good name as "Raspbian" into the cumbersome and problematic "Raspberry Pi OS"? Now I have to rename a bunch of established code and stuff...
First some background: The original Pi fell uncomfortably between two stools hardware-wise. A Debian "armel" userland (with a Pi-specific kernel) could run on the Pi but was far from taking advantage of it. Debian "armhf" wouldn't run because its minimum CPU requirements were too high. To get around this Mike and I formed the Raspbian project and set about re-building all of Debian, and I have been maintaining Raspbian since.

While we did produce one or two complete OS images in the early days, the Raspbian project has mostly focused on maintaining a repository of packages and left the building of OS images up to other people. Some time later the Raspberry Pi foundation started building their own Raspbian images. Over the years the delta between plain Raspbian and the Raspberry Pi foundation Raspbian images has grown, as Raspberry Pi have developed their own desktop environment and have backported a substantial number of graphics-related packages in support of their migration from their Pi-specific graphics stack to a Mesa-based graphics stack. I have not been particularly happy with the lack of distinction between plain Raspbian and the Raspberry Pi foundation Raspbian images, but I also didn't feel like pressing the issue too hard.

Separately, the Pi lineup has been evolving. The original Pi used an ARMv6 CPU, the Pi 2 an ARMv7. It could run a Debian "armhf" userland, and after a while Debian also added support for the Pi 2 in their kernel, though being an "upstream" kernel some things that are supported in the downstream Raspberry Pi kernels are not supported. The Pi 3 added 64-bit cores, which (after a bit of kernel development) meant Debian "arm64" could now run on the Pi. Then the Pi 4 came along offering up to 4GB of RAM. Through most of this the Raspberry Pi foundation decided to stick with a single OS image based on Raspbian as their official main OS. They decided that the benefits from multiple OS images did not justify the extra work.

So that brings us forward to April 2020. The 8GB Pi 4 was in alpha testing and Raspberry Pi decided it was finally time to start producing a 64-bit OS image. I got an e-mail from Eben asking my opinion on naming. I expressed that I would not be happy about the name Raspbian being used for an image that did not actually use anything from the Raspbian project. The name Debian wasn't exactly great either, because Debian were building their own images for the Pi. So Raspberry Pi decided to use the term "Raspberry Pi OS" for all their OS images (32-bit for Pi, 64-bit for Pi and 32-bit for PC) based on Debian or Raspbian.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/602587", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426579/" ] }
602,613
When I connect my headphones into the front jack, my speakers (line-out) get muted. I would like to switch to speakers without disconnecting my headphones. I see two possible options in gnome-control-center and pavucontrol : Headphones and Line-out . But when I switch to line-out, I hear nothing. How can I set correct behaviour? I have Realtek ALC1150 with Alsa and PulseAudio installed.
1. Software switching support

Check if your sound card supports software switching for the front audio panel. Some older motherboards don't support software switching at all. Some sound cards have connectors for both variants on the motherboard: software and hardware switching. In this case, make sure from your motherboard manual that you use the connector with software switching.

2. Alsamixer Auto-Mute

Auto-Mute is a feature of Alsa. It ensures that when you connect your headphones, the other audio outputs are automatically muted. You can find and disable this in alsamixer. Open AlsaMixer, choose your sound card with F6, then move with < and > and find Auto-Mute. If it's enabled, disable it with the down arrow key.

3. PulseAudio configuration

This was the most problematic part for me because PulseAudio is poorly documented. When I disabled Auto-Mute in the previous step, the speakers played in both cases, no matter whether I switched to Headphones or Line-out. So what I had to do was look into AlsaMixer again and understand how the volume bars react to audio switching in settings or pavucontrol.

(screenshot: alsamixer levels with Line-out selected)
(screenshot: alsamixer levels with Headphones selected)

As you can see, after switching to headphones, almost all volume bars got muted. But as I found out after a while, the volume of my speakers is for some reason controlled by the Front bar. So the last thing I had to do was configure PulseAudio to mute this Front volume bar after switching to headphones.

The PulseAudio configuration files we need to edit are stored in /usr/share/pulseaudio/alsa-mixer/paths/ . In my case, I only edited the file analog-output-headphones.conf but this may vary depending on the configuration. You have to edit the file as root to contain these lines:

[Element Front]
switch = off
volume = off

Once you are done, save the file and restart PulseAudio with pulseaudio -k . Output switching should now work as expected.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220439/" ] }
602,617
I want to sort the output below by year, month and date. I tried a few combinations that don't seem to be working. In the below output 2nd column is the date, 3rd is the month and 4th is the year. I tried to apply the sort command this way: sort -nk2 -Mk3 -nk4 but it didn't work because I got the error sort: options '-Mn' are incompatible Here's my sample datafile adcblz01 14 Mar 2018 adcblz03 23 Nov 2018 aktestlb02 26 Aug 2019 ckicbrwlz1 23 Mar 2018 ckilabbrwlb1 23 Mar 2018 bhuiflz28 09 Mar 2017 bhuiflz47 09 Mar 2017 bhuiflz48 09 Mar 2017 olkeflb24 23 Jul 2019 olkeflz46t2 09 Mar 2017 rrjugflb7 03 Jul 2019
You're close, but you've got the order of the sort operations wrong at two levels You should sort by year (field 4), then by month (field 3), and finally by day (field 2) You should apply the sort qualifiers ( n and M ) just to the keys, not globally The resulting sort command is thus, sort -k4n -k3M -k2n In case you're wondering why you got the error sort: options '-Mn' are incompatible with your original command sort -nk2 -Mk3 -nk4 , it's because the options are parsed mostly left-to-right. Effectively you wrote sort -n -k2 -M -k3 -n -k4 . You can more easily now see that you specified the flags -n and -M as global operations rather than per-key qualifiers.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426614/" ] }
602,644
I'm new to awk programming and I'm learning how arrays work. I found this code: awk 'BEGIN{OFS=FS=","}NR==FNR{a[$1]=$2;next}$3 in a && $3 = a[$3]' filez filex They put $1 as index in array and $2 as value then if $3 is equal to an index and for this one $3 = a[$3] . I can't understand what the meaning is, because the first file has just 2 columns -- where did they come up with $3 in comparison to $3 from 2nd file?!? Input file filez : 111,111112,114113,113 Input file filex : A,bb,111,xxx,nnnA,cc,112,yyy,nnnA,dd,113,zzz,ppp
The aim of this script is to replace the values in the third column of the second file ( filex ) with the corresponding values stored in the second column of the first file ( filez ). NR is the number of the current line relative to the first line of the first processed file. It is a "global" line counter. FNR is the number of the current line relative to the beginning of the currently processed file. NR==FNR is a condition that only evaluates to true for the first file. The corresponding action ( {a[$1]=$2;next} ) slurps the whole first file, line by line, into the a dictionary, an associative array whose aim is to look up the values of the first file's second column based on the corresponding values in the first column. next makes awk skip the remaining conditions and restart a cycle, reading the next line. $3 in a && $3 = a[$3] is a condition with a potential side effect (the assignment to $3 ). It is only evaluated for the second file (when NR==FNR is false; remember that, when NR==FNR was true, $3 in a && $3 = a[$3] was skipped). For each line, if the value of the third field is found (as an index) in the a dictionary, it is replaced with the corresponding value from the dictionary. Then, if $3 = a[$3] evaluates to true, the line is printed (because in AWK programs, which are basically composed of condition (or "pattern")-action pairs, either the condition or the action may be omitted, and an omitted action is equivalent to print ). Assuming this shortened filez : 111,111112,114 and filex : A,bb,111,xxx,nnnA,cc,112,yyy,nnn what happens, step by step, is: the first line of filez is read; NR==FNR evaluates to 1==1 , true; thus, {a[$1]=$2;next} is executed; a[111] is set to 111 ; next means that the rest of the script ( $3 in a && $3 = a[$3] ) is skipped for this line (specifically, $3 is not used for now); nothing is printed, because the performed action doesn't include any printing command; the second and last line of filez is read; NR==FNR evaluates to 2==2 , true; {a[$1]=$2;next} is executed; a[112] is set to 114 ; $3 in a && $3 = a[$3] is again skipped because of next ; and again, nothing is printed; the first line of filex is read; NR==FNR evaluates to 3==1 , false; thus {a[$1]=$2;next} is not executed; the next condition is evaluated: $3 is 111 and is an index value of a , hence $3 in a is true and $3 = a[$3] is evaluated; it causes the assignment of a[111] , which is 111 , to $3 ; since 111 is not 0 nor the empty string, the condition-assignment also evaluates to true and the current line is printed; A,bb,111,xxx,nnn the second and last line of filex is read; NR==FNR evaluates to 4==2 , false; thus {a[$1]=$2;next} is not executed; the next condition is evaluated: $3 is 112 and is an index value of a , hence $3 in a is true and $3 = a[$3] is evaluated; it causes the assignment of a[112] , which is 114 , to $3 ; since 114 is not 0 nor the empty string, the condition-assignment also evaluates to true and the current line is printed. A,cc,114,yyy,nnn
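To see the whole command in action on the sample filez and filex from the question:

$ awk 'BEGIN{OFS=FS=","}NR==FNR{a[$1]=$2;next}$3 in a && $3 = a[$3]' filez filex
A,bb,111,xxx,nnn
A,cc,114,yyy,nnn
A,dd,113,zzz,ppp

Only the 112 row changes (to 114); the other two map to themselves.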
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395867/" ] }
602,647
I have a text file which lists a hostname and in the line directly under states the results of a ping of a salt minion. Here is an example output: T5F6Z12: Minion did not return. [Not connected] I need to be able to first identify is the text (Minion did not return) exists and if so grab the hostname associated with the error so I can run other commands against that server. I have started with this: if grep -q "Minion" /srv/salt/test/ping_resultsthen So I'm pretty sure I need to grep for the word "Minion" because it will only show up for servers that failed the test. But once I've identified it exists, I'm not sure how to grab the associated hostname above it in the text file.
You could use -B1 to print the previous line as well and then grab only the first line:

$ grep -B1 'Minion' ip.txt
T5F6Z12:
     Minion did not return. [Not connected]
$ grep -B1 'Minion' ip.txt | head -n1
T5F6Z12:

Or, do it with awk :

$ awk '/Minion/{print p} {p=$0}' ip.txt
T5F6Z12:
$ awk '/Minion/{sub(/:$/, "", p); print p} {p=$0}' ip.txt
T5F6Z12

Here p keeps saving the last line read. When an input line contains Minion , the previously saved line p gets printed (the second variant also strips the trailing colon). Note that this will work for multiple matches, unlike the grep | head solution above which gives only the first match.
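If the goal is to run further commands against each failed host, a minimal sketch building on the awk variant above (the ssh line is only an illustration of such "other commands"):

awk '/Minion/{sub(/:$/, "", p); print p} {p=$0}' /srv/salt/test/ping_results |
while IFS= read -r host; do
    printf 'host with failed ping: %s\n' "$host"
    # e.g. ssh "$host" some-diagnostic-command
done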
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/384906/" ] }
602,730
I'm using a Dell XPS13 laptop running Linux debian 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07) x86_64 GNU/Linux . Every so often my bluetooth stops working and I have to restart my computer to fix it. Running sudo hciconfig hci0 reset produces: Can't init device hci0: Connection timed out (110) hciconfig -a output: hci0: Type: Primary Bus: USB BD Address: 9C:B6:D0:8C:C6:42 ACL MTU: 1024:8 SCO MTU: 50:8 DOWN RX bytes:16038619 acl:914914 sco:0 events:16294 errors:0 TX bytes:29114 acl:183 sco:0 commands:3232 errors:0 Features: 0xff 0xfe 0x8f 0xfe 0xd8 0x3f 0x5b 0x87 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: RSWITCH HOLD SNIFF Link mode: SLAVE ACCEPT This is my lsusb; sudo rfkill list output: Bus 004 Device 004: ID 0bda:8153 Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet AdapterBus 004 Device 003: ID 0424:5807 Standard Microsystems Corp. Bus 004 Device 002: ID 2109:0820 VIA Labs, Inc. Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 003 Device 004: ID 2109:8888 VIA Labs, Inc. Bus 003 Device 008: ID 03f0:0667 HP, Inc Bus 003 Device 007: ID 03f0:0269 HP, Inc Bus 003 Device 006: ID 046d:0892 Logitech, Inc. OrbiCamBus 003 Device 005: ID b58e:0005 Blue Microphones Bus 003 Device 003: ID 0424:2807 Standard Microsystems Corp. Bus 003 Device 002: ID 2109:2820 VIA Labs, Inc. Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 001 Device 003: ID 0489:e0a2 Foxconn / Hon Hai Bus 001 Device 002: ID 0bda:58f4 Realtek Semiconductor Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub0: hci0: Bluetooth Soft blocked: no Hard blocked: no1: phy0: Wireless LAN Soft blocked: no Hard blocked: no Here is my dmesg output: [199822.744526] Bluetooth: hci0: setting interface failed (32)[199824.609972] Bluetooth: hci0: command 0x200c tx timeout[199826.629956] Bluetooth: hci0: command 0x0401 tx timeout[199830.675115] Bluetooth: hci0: setting interface failed (32)[199832.577943] Bluetooth: hci0: command 0x2005 tx timeout[199834.593891] Bluetooth: hci0: command 0x200b tx timeout[199834.753788] Bluetooth: hci0: setting interface failed (32)[199836.609853] Bluetooth: hci0: command 0x200c tx timeout[199838.625852] Bluetooth: hci0: command 0x0401 tx timeout[199842.678915] Bluetooth: hci0: setting interface failed (32)[199844.577771] Bluetooth: hci0: command 0x2005 tx timeout[199846.593753] Bluetooth: hci0: command 0x200b tx timeout[199846.744753] Bluetooth: hci0: setting interface failed (32)[199848.609746] Bluetooth: hci0: command 0x200c tx timeout[199850.625697] Bluetooth: hci0: command 0x0401 tx timeout And my sudo systemctl status bluetooth output: Aug 03 18:06:39 debian bluetoothd[24758]: RFCOMM server failed for :1.260/Profile/HSPHSProfile/00001108-0000-1000-8000-00805f9b34fb: rfcomm_Aug 03 18:06:41 debian bluetoothd[24758]: Loading LTKs timed out for hci0Aug 03 18:06:49 debian bluetoothd[24758]: Failed to set mode: Failed (0x03)Aug 03 18:44:01 debian bluetoothd[24758]: Failed to set mode: Failed (0x03) Any help would be appreciated, this is a really annoying problem. Thank you.
I also have these intermittent connection issues with the bluetooth. I use the following commands to get it working without having to reboot:

hciconfig hci0 down
rmmod btusb
modprobe btusb
hciconfig hci0 up

I hope this helps you!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/380243/" ] }
602,822
I have the following csv file with 2 columns: Header1,Header2AU3CB0222255,EBFXFRAU3CB0222271,DBFXFRAU3CB0225233,DBFXFRAU3CB0225662,DBFXFRAU3CB0226264,DBFXFR I want to count the fields in column 2 which don't start with E . I tried the command below but it's not working properly: awk '$2 !~ /^E_/ { count++ }END{ print count }' FinalOutput.csv
Your awk command has several issues. You have not specified the field separator, so awk splits the lines at whitespace, not , . You can use the -F',' command-line option to set the field separator. Your RegExp states /^E_/ and hence would look for fields that don't start with E_ (which none of your column 2 values does), not merely those that don't start with E . Remove the _ . Your command would also count the header line. You can use the FNR internal variable (which is automatically set to the current line number within the current file ) to exclude the first line. As noted by Rakesh Sharma, if all lines start with E , the command would print the empty string at the end instead of a 0 because of the use of an uninitialized variable. You can force interpretation as number by printing count+0 instead of count . A corrected version would be awk -F',' 'FNR>1 && $2!~/^E/{count++} END{print count+0}' FinalOutput.csv Note that since I used the FNR per-file line-counter (rather then the global line-counter NR ), this would also work with more than one input file where all of them have a header line, i.e. you could even use it as awk -F',' ' ... ' FinalOutput1.csv FinalOutput2.csv ...
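Run against the sample file from the question, where only the first data row's second column starts with E :

$ awk -F',' 'FNR>1 && $2!~/^E/{count++} END{print count+0}' FinalOutput.csv
4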
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602822", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426811/" ] }
602,827
I switched the sources to Bullseye and the upgrade went smoothly, but when I do a full-upgrade, I get: libc6-dev : Breaks: libgcc-8-dev (< 8.4.0-2~) but 8.3.0-6 is to be installed My sources are as follows: deb http://deb.debian.org/debian bullseye maindeb-src http://deb.debian.org/debian bullseye main#deb http://deb.debian.org/debian buster-updates main#deb-src http://deb.debian.org/debian buster-updates main#deb http://security.debian.org/debian-security/ buster/updates main#deb-src http://security.debian.org/debian-security/ buster/updates main How can I fix this to finalize the upgrade? P.S. I've looked at a recent issue here: Full-upgrade to Debian testing fails due to libc6-dev : Breaks: libgcc-8-dev Which didn't help.
Debian 10 uses GCC 8, but Debian 11 currently uses GCC 9 (probably 10 by the time it’s released). For some reason, in your case, the upgrade fails to replace the GCC 8 packages, and the upgrade is blocked. To fix this, remove gcc-8 and its dependencies. This is a symptom of a more general problem with GCC libraries when upgrading from Debian 10 to testing; see Ryan Pavlik’s repository for a general solution and details of the problem, as well as links to bugs filed against GCC in the hope of an official fix.
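A hedged sketch of that cleanup; the exact set of GCC 8 packages varies between systems, so review what apt plans to remove and install before confirming (the package names below are only the usual suspects):

sudo apt remove gcc-8 cpp-8 libgcc-8-dev
sudo apt full-upgrade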
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426814/" ] }
602,953
I have a file with entries like this: chr1 740678 740720chr1 2917480 2917507 I want to remove the entries which start with chr1 but retain others which start with chr11 or chr19 and so on. When I use grep -v "chr1" it removes the others which start with chr11 or chr19.Can I use another regular expression?
First, you should anchor your regular expression to only match at the beginning of the line ( ^chr1 ) to avoid finding lines that contain chr1 but it isn't the first string (this can easily happen with an annotated VCF file, for example). Next, you can use the -w option for (GNU) grep : -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. If your grep doesn't support that, then use this: grep -v '^chr1\s' file The \s matches whitespace (including both tabs and spaces), so that will exclude any lines that start with chr1 and then any kind of whitespace character.
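With a made-up sample that mixes chr1 and chr11 / chr19 entries, the anchored pattern keeps the longer chromosome names:

$ cat file
chr1    740678   740720
chr11   2917480  2917507
chr19   123      456
$ grep -v '^chr1\s' file
chr11   2917480  2917507
chr19   123      456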
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/602953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/315919/" ] }
603,012
I have the following script: file="home/report.csv"while IFS= read -r linedosed 's/\,/;/' > tmp.txtdone <"$file"file2="home/tmp.txt"while IFS= read -r linedoawk -F. '{print $1";service" > "report_v2.csv"}' OFS=;done <"$file2" After the first "While", the file " tmp.txt " does not have the first line of " report.csv ". Then, after the second "While", the file report_v2.csv does not have the first line of tmp.txt . Hence, the last file has two lines less than the original one. This is an example of my files: report.csv 1,foo2,pippo3,pluto4,davis tmp.txt 2;pippo3;pluto4;davis report_v2.csv 3;pluto;service4;davis;service I need to keep the first two lines of the original file also in the last file. How can I do? Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214104/" ] }
603,030
I have a Dell Optiplex for which I purchased four 4TB disks. I am trying to install Debian 10 on this machine. I'm hitting a wall when it comes to installing GRUB. I get vague error at this point of the installation: Executing 'grub-install /dev/sda' failed.This is a fatal error. This is my first time trying to install Debian on a RAID array. In the past, I have only done single-disk installs. All four disks are connected to a PCIe RAID controller. The controller is compatible with the Linux kernel and I am able to see and partition the drives no problem. When I get to the partitioning stage, I am presented with four empty disks. I do the following (at this point all disks have no partitions): Manual Configure software RAID Create MD device: RAID 5 active devices = 4 spare devices = 0 partitions = sda, sdb, sdc, sde At this point my partitions look like this: RAID5 device #0 - 12TB software RAID #1 - 12TB SCSI1, sda - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE SCSI1, sdb - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE SCSI1, sdc - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE SCSI1, sde - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE Next, I select: guided partitioning > guided - use entire disk > RAID5 device #0 > All files in one partition. The partitions now look like this: RAID5 device #0 - 12TB software RAID #1 - 1MB K biosgrub #2 - 12TB f ext4 / #3 - 17.1GB f swap swap SCSI1, sda - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE SCSI1, sdb - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE SCSI1, sdc - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE SCSI1, sde - 4TB 1MB FREE SPACE #1 - 4TB K raid 859.6 kB FREE SPACE At this point, I select "finish partitioning and write the changes to disk". The installer then proceeds unpacking and installing software, which seems to go fine. When it is time to install GRUB I selected sda , and this fails. I have also tried selecting sdb and manually entering /dev/md as GRUB installation locations. These have also failed. At this point I suspect I am not partitioning things correctly. I have searched around the web and I have found several articles and blogs with guides but none are focused on RAID5. I read them anyway looking for useful information but found nothing.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426995/" ] }
603,048
I have a RHEL 7 server that we are using as a jumpbox to authenticate to all servers. Typically users will login to jumpserver(rhel 7) from local machine and then login to their respective virtual machines but lately users are storing data on their directory in jumpserver which is very small vm and we don't what users to store any data in-fact i want restrict users to just use it as jumpserver by disabling all others permissions to run any command. flow: localmachine(open-ldap auth) --> jumpserver-rhel 7(ssh-only) ---> virtual machine can someone shed some light on how to restrict domain/ldap and linux users to only ssh and do nothing apart from jump to their virtual machine. Env: everything is rhel 7 based.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603048", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427042/" ] }
603,078
I have a directory (e.g. /home/various/ ) with many subdirectories (e.g. /home/various/foo/ , /home/various/ber/ , /home/various/kol/ and /home/various/whatever/ ). Is there a command I can run, which will breakdown the contents per file extension showing totals like total size number of files Let's say, I don't want to manually type each file extension in the terminal, in part because I don't know all the file extensions inside (recursively) /various/ . An output like this, would be great: *.txt 23 files, 10.2MB*.pdf 8 files, 23.2MB*.db 3 files, 2.3MB*.cbz 24 files, 2.3GB*.html 2,508 files, 43.9MB*.readme 13 files, 4KB
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603078", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427061/" ] }
603,087
I'm using an awk script which takes the column headers after 166th column and prints it into each of it's subsequent rows. Ex. col165 col166 col167a 1 2b 3 4c 5 6 Becomes - col165 col166 col167a col166|1 col167|2b col166|3 col167|4c col166|5 col167|6 However, the file which I'm processing is quite huge (around 1.6M lines) which takes about 1.5 hours to process. To speed up the process, I thought of splitting the huge file into 100k lines, then use gnu parallel to process each file separately. I've run into a problem however, the script takes the header of the file and uses that to obtain the headers. I wanted to use another file just to specify the headers, else I would have to add headers to each of the split files (which seems like a hassle in of itself). The code which I'm using is - awk 'BEGIN { FS="\t";OFS="\t" }; NR == 1 { split($0, headers); print; next } {for (i=166;i<=NF;i++) $i=headers[i] "|" $i } 1' input > output I wanted to use a file column_headers to specify the headers. Would it be possible? I tried the following code but it didn't work and I'm not sure if my code is correct: awk -v head='$(cat column_headers)' 'BEGIN{ FS="\t";OFS="\t" }; NR == 1 { split($head, headers); print; next } {for (i=166;i<=NF;i++) $i=headers[i] "|" $i } 1' input > output I think I'm doing something wrong, not sure what. Any help would be appreciated. EDIT: Thanks. I had missed out another command in the chain, which was actually the culprit for long times. As @Ole Tange mentioned, I used the command though modified it slightly - time cat input_1|parallel -k -q -j 24 --tmpdir tmp/ --block 900M --pipe awk -f culprit_script > output The script basically splits each field, and removes/retains them based on their value. The execution time of the first command is around 15-20 min, the second script takes little more than an hour. With parallel and 24 threads, it gets over in 7 minutes!! I think I'll use parallel for first command too :P Thanks for the input everyone!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373511/" ] }
603,094
ln -s creates 'fast' symbolic links. These break if you copy them (and their targets) to e.g. optical media. I believe old style 'slow' symbolic links would work, but how can I create them? There's no ln flag or other command that I can find. Some info for context, from the wikpedia page on symbolic links : Early implementations of symbolic links stored the symbolic linkinformation as data in regular files. The file contained the textualreference to the link's target, and the file mode bits indicated thatthe type of the file is a symbolic link. This method was slow and an inefficient use of disk-space on smallsystems. An improvement, called fast symlinks, allowed storage of thetarget path within the data structures used for storing fileinformation on disk (inodes). This space normally stores a list ofdisk block addresses allocated to a file. Thus, symlinks with shorttarget paths are accessed quickly. Systems with fast symlinks oftenfall back to using the original method if the target path exceeds theavailable inode space. The original style is retroactively termed aslow symlink. It is also used for disk compatibility with other orolder versions of operating systems.
There’s no way to tell ln to create “fast” or “slow” symlinks, the file system determines how it stores symlinks. Dealing with symlink representation on optical media is up to the program handling the conversion, or the file system driver providing access to the medium, not up to the source file system. For example, mkisofs can use Rock Ridge extensions or TRANS.TBL files to represent symlinks. It can also handle hard links.
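As a rough example, when you master the ISO yourself you would ask for Rock Ridge extensions so that symlinks survive (output name and source directory are placeholders; on Debian-based systems the same tool is usually installed as genisoimage):

mkisofs -R -o image.iso /path/to/tree    # Rock Ridge, keeps symlinks
mkisofs -r -o image.iso /path/to/tree    # Rock Ridge with "rationalized" ownership/permissions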
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33301/" ] }
603,184
Is there a way to edit a matched pattern and then replace another pattern with the edited pattern? Input: a11.tsome text herea06.tsome text here Output: a11.t 11some text herea06.t 06some text here The above example shows the first two digits (matched by first pattern) extracted and placed at the end of the line (second pattern). In a programming language, I would load the file into a data structure, edit, replace, and write to a new file. But is there a one-line equivalent? Trial: sed 's/\(a[0-9][0-9].*\)/& \1/I' stack.fa | sed -e 's#a##g2' -e 's#\.\w##g2' Trial output: a11.t 11some text herea06.t 06some text here Obviously the trial works, but is there a more robust way? Further, is there another text processing language this could done in more easily?
sed here is the perfect tool for the task. However note that you almost never need to pipe several sed invocations together as a sed script can be made of several commands. If you wanted to extract the first sequence of 2 decimal digits and append following a space to end of the line if found, you'd do: sed 's/\([[:digit:]]\{2\}\).*$/& \1/' < your-file If you wanted to do that only if it's found in second position on the line and following a a : sed 's/^a\([[:digit:]]\{2\}\).*$/& \1/' < your-file And if you don't want to do it if that sequence of 2 digits is followed by more digits: sed 's/^a\([[:digit:]]\{2\}\)\([^[:digit:]].*\)\{0,1\}$/& \1/' < your-file In terms of robustness it all boils down to answering the question: what should be matched? and what should not be? . That's why it's important to specify your requirements clearly, and also understand what the input may look like (like can there be digits in the lines where you don't want to find a match? , can there be non-ASCII characters in the input? , is the input encoded in the locale's charset? etc.). Above, depending on the sed implementation, the input will be decoded into text based on the locale's charmap (see output of locale charmap ), or interpreted as if each byte corresponded to a character and bytes 0 to 127 interpreted as per the ASCII charmap (assuming you're not on a EBCDIC based system). For sed implementations in the first category, it may not work properly if the file is not encoded in the right charset. For those in the second category, it could fail if there are characters in the input whose encoding contains the encoding of decimal digits.
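Applied to the sample lines from the question, the second variant produces the desired output:

$ printf 'a11.t\nsome text here\na06.t\nsome text here\n' | sed 's/^a\([[:digit:]]\{2\}\).*$/& \1/'
a11.t 11
some text here
a06.t 06
some text here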
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427161/" ] }
603,236
In the cd, bash help page: The variable CDPATH defines the search path for the directory containingDIR. Alternative directory names in CDPATH are separated by a colon (:).A null directory name is the same as the current directory. If DIR beginswith a slash (/), then CDPATH is not used. But I don't understand the concept of "Alternative directory", and can't find an example that illustrates the use of the colon ( : ) with the cd command.
The variable is not set by default (at least in the systems I am familiar with) but can be set to use a different directory to search for the target dir you gave cd . This is probably easier to illustrate with an example:

$ echo $CDPATH ## CDPATH is not set

$ cd etc ## fails: there is no "etc" directory here
bash: cd: etc: No such file or directory
$ CDPATH="/" ##CDPATH is now set to /
$ cd etc ## This now moves us to /etc
/etc

In other words, the default behavior for cd foo is "move into the directory named 'foo' which is a subdirectory of the current directory or of any other directory that is given in CDPATH". When CDPATH is not set, cd will only look in the current directory but, when it is set, it will also look for a match in any of the directories you set it to. The colon is not used with cd , it is used to separate the directories you want to set in CDPATH :

CDPATH="/path/to/dir1:/path/to/dir2:/path/to/dirN"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/603236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427202/" ] }
603,514
How do I upgrade a single Debian package without marking it as "manually installed"? apt install upgradeable-lib works of course, but then I have to apt-mark auto (or the package is no longer autoremoveable).
By poking at the sources , I found the feature you are looking for, but it was made available only in a commit made a few months ago , so it is available only in bullseye (future Debian 11):

Support marking all newly installed packages as automatically installed

Add option '--mark-auto' to 'apt install' that marks all newly installed packages as automatically installed.

The equivalent configuration option (having no effect in Debian 10) is APT::Get::Mark-Auto .
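On bullseye or later that looks like the following (the package name is just the example from the question); on buster the two-step workaround remains necessary:

sudo apt install --mark-auto upgradeable-lib
sudo apt install upgradeable-lib && sudo apt-mark auto upgradeable-lib    # buster workaround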
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417907/" ] }
603,682
I'm currently writing a nemo action script in bash, and I need a way to get input from the user. How? The terminal isn't shown when running an action script. Is there any way to pop up a query window in the GUI to ask the user for input?
Zenity is a good tool for that. user_input=$(zenity --entry) That assigns to variable user_input whatever the user types in the GUI window, unless the user presses cancel, in which case the exit code is not zero. user_input=$(zenity --entry)if [ $? = 0 ]; then echo "User has pressed OK. The input was:" echo "$user_input"else echo "User has pressed cancel"fi Gxmessage is an alternative, with very similar syntax. user_input=$(gxmessage --entry "Enter your input") More information in man zenity and man gxmessage .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/382913/" ] }
603,703
I am running a shell script that gives the below result, and I want to store the result in Excel in different columns (HOST, Status, Expires, Days). How can I convert into Excel? Host Status Expires Days----------------------------------------------- ------------ ------------ ----FILE:certs/dnscert.crt Valid Aug 4, 2021 359
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427598/" ] }
603,733
I have the following data in a text file: Name,7/27,7/28,7/29,7/30,7/31,8/1,8/2,8/3,8/4abc,5,3,8,8,0,0,2,0,11def,6,7,0,0,0,0,0,2,5ghi,1,3,5,2,0,0,5,3,6 I need to find out which file (the "Name" column holds the file name) is giving 5 consecutive 0s as output. In this example, that would be def .
I'd probably do this in awk , using , as a delimiter: $ awk -F, '/,0,0,0,0,0/{print $1}' file def However, that will also catch a line like this: mno,6,7,0,0,0,0,0.5 To avoid that, match only if the last 0 is followed by a , or the end of the line: awk -F, '/,0,0,0,0,0(,|$)/{print $1}' file
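If you'd rather not reason about the regex edge cases at all, a field-by-field check is another option. This is a rough sketch (it requires the fields to be literally 0 , not 0.0 or 00 ):
awk -F, '{
  run = 0
  for (i = 2; i <= NF; i++) {            # skip the Name column
    if ($i == "0") run++; else run = 0   # count consecutive "0" fields
    if (run >= 5) { print $1; next }     # print the name once and move on
  }
}' file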
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/603733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427313/" ] }
603,734
Model: ATA Samsung SSD 850 (scsi)Sector size (logical/physical): 512B/512BPartition Table: gptDisk Flags: Number Start End Size File system Name Flags 1 24576B 1048575B 1024000B bios_grub 2 1048576B 537919487B 536870912B fat32 boot, esp 3 537919488B 1611661311B 1073741824B zfs 4 1611661312B 500107845119B 498496183808B zfs parted /dev/sda align-check optimal 1> 1 not alignedparted /dev/sda align-check optimal 2> 2 alignedparted /dev/sda align-check optimal 3> 3 alignedparted /dev/sda align-check optimal 4> 4 aligned The sector size says 512B, but interally I am guessing 4096B because it is a SSD, either way it should be divisible, 24576 / 512 = 48 , 24576 / 4096 = 6 . Is there any reason why parted says it is not aligned. I am aware that this current config should not have any effects on performance as it is only read (if at all) at boot, but just curious in why it is reported as it is. For reference the partition layout is the one suggested by Debian ZFS on Root ( https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html )
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/603734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427430/" ] }
603,851
I have a big csv file (Test.csv), which looks like this: 1,2,3,A,51,2,3,B,51,2,3,E,51,2,3,D,51,2,3,Z,51,2,3,B,5 I want to print the lines in which the 4th column has the same content in different files. Actually, I need to join these lines that have the same content in a new csv or txt file, named as the 4th column content. For example: Output: File A 1,2,3,A,51,2,3,A,51,2,3,A,5 File B 1,2,3,B,51,2,3,B,5 Since the input file is large, I have no idea how many different patterns there are in this 4th column. Column 4 contains only words and the other columns contain words and/or numbers. As I have no experience, I researched similar questions and even tried the following code: awk 'NR==FNR{a[$4]=NR; next} $NF in a {print > "outfile" a[$NF]}' Test.csv but nothing worked. Can anyone help me, please? Thanks in advance.
This will work efficiently using POSIX sort and any awk in any shell on every UNIX box: $ sort -t, -k4,4 test.csv | awk -F, '$4!=prev{close(out); out="File"$4; prev=$4} {print > out}'$ head -n 20 File*==> FileA <==1,2,3,A,5==> FileB <==1,2,3,B,51,2,3,B,5==> FileD <==1,2,3,D,5==> FileE <==1,2,3,E,5==> FileZ <==1,2,3,Z,5 Some things to note: some awks need parentheses around an expression on the right side of an output redirection; some awks fail once they get past a dozen or so open output files if you don't close them as you go; keeping many output files open at once is very inefficient in the awks that do allow it; and closing/reopening the output file for every line to work around that would be very inefficient in any awk. Sorting first and closing each file as soon as its group is done avoids all of those problems.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/603851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/427703/" ] }
604,123
In his autobiography, Just for Fun , Linus mentions the "page-to-disk" feature that was pivotal in making Linux a worthy competitor to Minix and other UNIX clones of the day: I remember that, in December, there was this guy in Germany who only had 2 megabytes of RAM, and he was trying to compile the kernel and he couldn't run GCC because GCC at the time needed more than a megabyte. He asked me if Linux could be compiled with a smaller compiler that wouldn't need as much memory. So I decided that even though I didn't need the particular feature, I would make it happen for him. It's called page-to-disk, and it means that even though someone only has 2 mgs of RAM, he can make it appear to be more using the disk for memory. This was around Christmas 1991. Page-to-disk was a fairly big thing because it was something Minix had never done. It was included in version 0.12, which was released in the first week of January 1992. Immediately, people started to compare Linux not only to Minix but to Coherent, which was a small Unix clone developed by Mark Williams Company. From the beginning, the act of adding page-to-disk caused Linux to rise above the competition. That's when Linux took off. Suddenly there were people switching from Minix to Linux. Is he essentially talking about swapping here? People with some historical perspective on Linux would probably know.
Yes, this is effectively swapping. Quoting the release notes for 0.12 : Virtual memory. In addition to the "mkfs" program, there is now a "mkswap" program onthe root disk. The syntax is identical: "mkswap -c /dev/hdX nnn", andagain: this writes over the partition, so be careful. Swapping can thenbe enabled by changing the word at offset 506 in the bootimage to thedesired device. Use the same program as for setting the root filesystem (but change the 508 offset to 506 of course). NOTE! This has been tested by Robert Blum, who has a 2M machine, and itallows you to run gcc without much memory. HOWEVER, I had to stop usingit, as my diskspace was eaten up by the beta-gcc-2.0, so I'd like tohear that it still works: I've been totally unable to make aswap-partition for even rudimentary testing since about christmastime.Thus the new changes could possibly just have backfired on the VM, but Idoubt it. In 0.12, paging is used for a number of features, not just swapping to a device: demand-loading (only loading pages from binaries as they’re used), sharing (sharing common pages between processes).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/604123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399259/" ] }
604,258
dbus is supposed to provide "a simple way for applications to talk to one another". But I am still not sure what it is useful for, practically. I have never seen a situation where dbus is useful, I only see warnings that some dbus component has experienced errors, such as when I start terminator from commandline (so that I can see errors): Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files I got rid of the above error by adding NO_AT_BRIDGE=1 to /etc/environment . I have no idea what that does. Almost all gui applications seem to be linked with dbus . Some allow to be started without dbus , ie: terminator --no-dbus I see no difference in behavior. What is supposed to stop working, when terminator is started without dbus ? Also, I have tried disabling various dbus components to see what stops working: I have deleted /etc/X11/Xsession.d/95dbus_update-activation-env just to see what happens. It contained the following code: if [ -n "$DBUS_SESSION_BUS_ADDRESS" ] && [ -x "/usr/bin/dbus-update-activation-environment" ]; then # subshell so we can unset environment variables ( # unset login-session-specifics unset XDG_SEAT unset XDG_SESSION_ID unset XDG_VTNR # tell dbus-daemon --session to put the Xsession's environment in activated services' environments dbus-update-activation-environment --verbose --all )fi Everything works the same, as far as I can tell. What was the purpose of the above script? In what situation would it be useful for my applications to talk to each other via dbus ? Are there applications that don't work without dbus ? My system is Debian Buster, and I am using plain openbox environment (without any desktop environment such as Gnome or KDE)
dbus does exactly what you said: it allows two-way communication between applications. For your specific example you mentioned terminator . From terminator's man page , we see: --new-tab If this is specified and Terminator is already running, DBus will be used to spawn a new tab in the first Terminator window. So if you do this from another terminal (konsole, xterm, gnome-terminal): $ terminator &$ terminator --new-tab & You'll see that the first command opens a new window. The second command opens a new tab in the first window. That's done by the second process using dbus to find the first process, asking it to open a new tab, then quitting. If you do this from another terminal: $ terminator --no-dbus &$ terminator --new-tab & You'll see that the first command opens a new window. The second command fails to find the first window's dbus, so it launches a new window. I installed terminator to test this, and it's true. In addition, I suspect polkit would be affected. Polkit uses dbus to elevate privileges for GUI applications. It's like the sudo of the GUI world. If you are in gnome, and see the whole screen get covered while you are asked for the administrator's password, that's polkit in action. I suspect you won't get that prompt in any GUI application you start from terminator if you have --no-dbus . It'll either fail to authenticate, or fallback to some terminal authentication. From terminator try pkexec ls . That will run ls with elevated privileges. See if it's different with and without the --no-dbus option. I don't have a polkit agent in my window manager (i3) so I can't test this one out. I mostly know about dbus in terms of systemd, so that's where the rest of my answer will come from. Are there applications that don't work without dbus ? Yes. Take systemctl . systemctl status will issue a query to "org.freedesktop.systemd1" , and will present that to you. systemctl start will call a dbus method and pass the unit as an argument to that method. systemd recieves the call and performs the action. If you want to take action in response to a systemd unit (i.e. foo.service) changing states, you can get a file descriptor for interface org.freedesktop.DBus.Properties with path /org/freedesktop/systemd1/unit/foo_2eservice and member PropertiesChanged . Setup an inotify on that FD and you suddenly have a way to react to a service starting, stopping, failing, etc. If you want to take a look at what's available on the systemd dbus for a specific unit (i.e. 
ssh.service ) try this command: busctl introspect \ org.freedesktop.systemd1 \ /org/freedesktop/systemd1/unit/ssh_2eserviceNAME TYPE SIGNATURE RESULT/VALUE FLAGSorg.freedesktop.DBus.Introspectable interface - - -.Introspect method - s -org.freedesktop.DBus.Peer interface - - -.GetMachineId method - s -.Ping method - - -org.freedesktop.DBus.Properties interface - - -.Get method ss v -.GetAll method s a{sv} -.Set method ssv - -.PropertiesChanged signal sa{sv}as - -org.freedesktop.systemd1.Service interface - - -.AttachProcesses method sau - -.GetProcesses method - a(sus) -.AllowedCPUs property ay 0 -.AllowedMemoryNodes property ay 0 -.AmbientCapabilities property t 0 const.AppArmorProfile property (bs) false "" const.BindPaths property a(ssbt) 0 const.BindReadOnlyPaths property a(ssbt) 0 const.BlockIOAccounting property b false -.BlockIODeviceWeight property a(st) 0 -.BlockIOReadBandwidth property a(st) 0 -.BlockIOWeight property t 18446744073709551615 -.BlockIOWriteBandwidth property a(st) 0 -.BusName property s "" const.CPUAccounting property b false -.CPUAffinity property ay 0 const.CPUAffinityFromNUMA property b false const.CPUQuotaPerSecUSec property t 18446744073709551615 -.CPUQuotaPeriodUSec property t 18446744073709551615 -.CPUSchedulingPolicy property i 0 const.CPUSchedulingPriority property i 0 const.CPUSchedulingResetOnFork property b false const.CPUShares property t 18446744073709551615 -.CPUUsageNSec property t 18446744073709551615 -.CPUWeight property t 18446744073709551615 -.CacheDirectory property as 0 const.CacheDirectoryMode property u 493 const.CapabilityBoundingSet property t 18446744073709551615 const.CleanResult property s "success" emits-change.ConfigurationDirectory property as 0 const.ConfigurationDirectoryMode property u 493 const.ControlGroup property s "/system.slice/ssh.service" -.ControlPID property u 0 emits-change.CoredumpFilter property t 51 const.DefaultMemoryLow property t 0 -.DefaultMemoryMin property t 0 -.Delegate property b false -.DelegateControllers property as 0 -.DeviceAllow property a(ss) 0 -.DevicePolicy property s "auto" -.DisableControllers property as 0 -.DynamicUser property b false const.EffectiveCPUs property ay 0 -.EffectiveMemoryNodes property ay 0 -.Environment property as 0 const.EnvironmentFiles property a(sb) 1 "/etc/default/ssh" true const.ExecCondition property a(sasbttttuii) 0 emits-invalidation.ExecConditionEx property a(sasasttttuii) 0 emits-invalidation.ExecMainCode property i 0 emits-change.ExecMainExitTimestamp property t 0 emits-change.ExecMainExitTimestampMonotonic property t 0 emits-change.ExecMainPID property u 835 emits-change.ExecMainStartTimestamp property t 1597235861087584 emits-change.ExecMainStartTimestampMonotonic property t 5386565 emits-change.ExecMainStatus property i 0 emits-change.ExecReload property a(sasbttttuii) 2 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "… emits-invalidation.ExecReloadEx property a(sasasttttuii) 2 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "… emits-invalidation.ExecStart property a(sasbttttuii) 1 "/usr/sbin/sshd" 3 "/usr/sbin/sshd" "… emits-invalidation.ExecStartEx property a(sasasttttuii) 1 "/usr/sbin/sshd" 3 "/usr/sbin/sshd" "… emits-invalidation.ExecStartPost property a(sasbttttuii) 0 emits-invalidation.ExecStartPostEx property a(sasasttttuii) 0 emits-invalidation.ExecStartPre property a(sasbttttuii) 1 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "… emits-invalidation.ExecStartPreEx property a(sasasttttuii) 1 "/usr/sbin/sshd" 2 "/usr/sbin/sshd" "… emits-invalidation.ExecStop property a(sasbttttuii) 0 
emits-invalidation.ExecStopEx property a(sasasttttuii) 0 emits-invalidation.ExecStopPost property a(sasbttttuii) 0 emits-invalidation.ExecStopPostEx property a(sasasttttuii) 0 emits-invalidation.FileDescriptorStoreMax property u 0 const.FinalKillSignal property i 9 const.GID property u 4294967295 emits-change.Group property s "" const.GuessMainPID property b true const.IOAccounting property b false -.IODeviceLatencyTargetUSec property a(st) 0 -.IODeviceWeight property a(st) 0 -.IOReadBandwidthMax property a(st) 0 -.IOReadBytes property t 18446744073709551615 -.IOReadIOPSMax property a(st) 0 -.IOReadOperations property t 18446744073709551615 -.IOSchedulingClass property i 0 const.IOSchedulingPriority property i 0 const.IOWeight property t 18446744073709551615 -.IOWriteBandwidthMax property a(st) 0 -.IOWriteBytes property t 18446744073709551615 -.IOWriteIOPSMax property a(st) 0 -.IOWriteOperations property t 18446744073709551615 -.IPAccounting property b false -.IPAddressAllow property a(iayu) 0 -.IPAddressDeny property a(iayu) 0 -.IPEgressBytes property t 18446744073709551615 -.IPEgressFilterPath property as 0 -.IPEgressPackets property t 18446744073709551615 -.IPIngressBytes property t 18446744073709551615 -.IPIngressFilterPath property as 0 -.IPIngressPackets property t 18446744073709551615 -.IgnoreSIGPIPE property b true const.InaccessiblePaths property as 0 const...skipping....CollectMode property s "inactive" const.ConditionResult property b true emits-change.ConditionTimestamp property t 1597235861034899 emits-change.ConditionTimestampMonotonic property t 5333881 emits-change.Conditions property a(sbbsi) 1 "ConditionPathExists" false true "/et… emits-invalidation.ConflictedBy property as 0 const.Conflicts property as 1 "shutdown.target" const.ConsistsOf property as 0 const.DefaultDependencies property b true const.Description property s "OpenBSD Secure Shell server" const.Documentation property as 2 "man:sshd(8)" "man:sshd_config(5)" const.DropInPaths property as 0 const.FailureAction property s "none" const.FailureActionExitStatus property i -1 const.Following property s "" -.FragmentPath property s "/lib/systemd/system/ssh.service" const.FreezerState property s "running" emits-change.Id property s "ssh.service" const.IgnoreOnIsolate property b false const.InactiveEnterTimestamp property t 0 emits-change.InactiveEnterTimestampMonotonic property t 0 emits-change.InactiveExitTimestamp property t 1597235861039525 emits-change.InactiveExitTimestampMonotonic property t 5338505 emits-change.InvocationID property ay 16 90 215 118 165 228 162 72 57 179 144… emits-change.Job property (uo) 0 "/" emits-change.JobRunningTimeoutUSec property t 18446744073709551615 const.JobTimeoutAction property s "none" const.JobTimeoutRebootArgument property s "" const.JobTimeoutUSec property t 18446744073709551615 const.JoinsNamespaceOf property as 0 const.LoadError property (ss) "" "" const.LoadState property s "loaded" const.Names property as 2 "ssh.service" "sshd.service" const.NeedDaemonReload property b false const.OnFailure property as 0 const.OnFailureJobMode property s "replace" const.PartOf property as 0 const.Perpetual property b false const.PropagatesReloadTo property as 0 const.RebootArgument property s "" const.Refs property as 0 -.RefuseManualStart property b false const.RefuseManualStop property b false const.ReloadPropagatedFrom property as 0 const.RequiredBy property as 0 const.Requires property as 3 "system.slice" "-.mount" "sysinit.tar… const.RequiresMountsFor property as 1 "/run/sshd" 
const.Requisite property as 0 const.RequisiteOf property as 0 const.SourcePath property s "" const.StartLimitAction property s "none" const.StartLimitBurst property u 5 const.StartLimitIntervalUSec property t 10000000 const.StateChangeTimestamp property t 1597235861208937 emits-change.StateChangeTimestampMonotonic property t 5507917 emits-change.StopWhenUnneeded property b false const.SubState property s "running" emits-change.SuccessAction property s "none" const.SuccessActionExitStatus property i -1 const.Transient property b false const.TriggeredBy property as 0 const.Triggers property as 0 const.UnitFilePreset property s "enabled" -.UnitFileState property s "enabled" -.WantedBy property as 1 "multi-user.target" const.Wants property as 0 const You can see from this that the dbus interface is pretty powerful. You might ask: Why don't these applications just communicate via sockets or files? DBus provides a common interface. You don't need different logic to call methods or check properties based on the application you are talking to. You just need to know the name of the path. I've used systemd as an example because that's what I understand best, but there are tons of uses of dbus on most desktops. Everything from authentication, to display settings are available on dbus.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/604258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155832/" ] }
604,376
I want to change the first line of hundreds of files recursively in the most efficient way possible. An example of what I want to do is to change #!/bin/bash to #!/bin/sh , so I came up with this command: find ./* -type f -exec sed -i '1s/^#!\/bin\/bash/#!\/bin\/sh/' {} \; But, to my understanding, doing it this way sed has to read the whole file and replace the original. Is there a more efficient way to do this?
Yes, sed -i reads and rewrites the file in full, and since the line length changes, it has to, as it moves the positions of all other lines. ...but in this case, the line length doesn't actually need to change. We can replace the hashbang line with #!/bin/sh␣␣ instead, with two trailing spaces. The OS will remove those when parsing the hashbang line. (Alternatively, use two newlines, or a newline + hash sign, both of which create extra lines the shell will eventually ignore.) All we need to do is to open the file for writing from the start, without truncating it. The usual redirections > and >> can't do that, but in Bash, the read-write redirection <> seems to work: echo '#!/bin/sh ' 1<> foo.sh or using dd (these should be standard POSIX options): echo '#!/bin/sh ' | dd of=foo.sh conv=notrunc Note that strictly speaking, both of those rewrite the newline at the end of the line too, but it doesn't matter. Of course, the above overwrites the start of the given file unconditionally. Adding a check that the original file has the correct hashbang is left as an exercise... Regardless, I probably wouldn't do this in production, and obviously, this won't work if you need to change the line to a longer one.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/604376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428096/" ] }
604,616
I want to write my own systemd unit files to manage really long running commands 1 (in the order of hours). While looking the ArchWiki article on systemd , it says the following regarding choosing a start up type: Type=simple (default): systemd considers the service to be started up immediately. The process must not fork . Do not use this type if other services need to be ordered on this service, unless it is socket activated. Why must the process not fork at all? Is it referring to forking in the style of the daemon summoning process (parent forks, then exits), or any kind of forking? 1 I don't want tmux/screen because I want a more elegant way of checking status and restarting the service without resorting to tmux send-keys .
The service is allowed to call the fork system call. Systemd won't prevent it, or even notice if it does. This sentence is referring specifically to the practice of forking at the beginning of a daemon to isolate the daemon from its parent process. “The process must not fork [and exit the parent while running the service in a child process]”. The man page explains this more verbosely, and with a wording that doesn't lead to this particular confusion. Many programs that are meant to be used as daemons have a mode (often the default mode) where when they start, they isolate themselves from their parent. The daemon starts, calls fork() , and the parent exits. The child process calls setsid() so that it runs in its own process group and session, and runs the service. The purpose is that if the daemon is invoked from a shell command line, the daemon won't receive any signal from the kernel or from the shell even if something happens to the terminal such as the terminal closing (in which case the shell sends SIGHUP to all the process groups it knows of). This also causes the servicing process to be adopted by init, which will reap it when it exits, avoiding a zombie if the daemon was started by something that wouldn't wait() for it (this wouldn't happen if the daemon was started by a shell). When a daemon is started by a monitoring process such as systemd, forking is counterproductive. The monitoring process is supposed to restart the service if it crashes, so it needs to know if the service exits, and that's difficult if the service isn't a direct child of the monitoring process. The monitoring process is not supposed to ever die and does not have a controlling terminal, so there are no concerns around unwanted signals or reaping. Thus there's no reason for the service process not to be a child of the monitor, and there's a good reason for it to be.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/604616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428422/" ] }
604,670
I use the Windows Subsystem For Linux. On launching Ubuntu, I get this errors: -bash: /home/divyansh/.bashrc: line 119: syntax error near unexpected token `('-bash: /home/divyansh/.bashrc: line 119: `export PATH=/mnt/z/usr/local/bin:/mnt/z/usr/local/bin:/home/divyansh/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files/WindowsApps/CanonicalGroupLimited.UbuntuonWindows_2004.2020.424.0_x64__79rhkp1fndgsc:/mnt/c/Python38/Scripts:/mnt/c/Python38:/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath:/mnt/c/Windows/System32:/mnt/c/Windows:/mnt/c/Windows/System32/wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0:/mnt/c/Windows/System32/OpenSSH:/mnt/c/Program Files/Intel/WiFi/bin:/mnt/c/Program Files/Common Files/Intel/WirelessCommon:/mnt/c/MinGW/bin:/mnt/c/ProgramData/pbox:/mnt/c/Program Files/nodejs:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Program Files/Git/cmd:/mnt/c/Users/asus/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/asus/AppData/Local/Programs/Microsoft VS Code/bin:/mnt/c/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin:/mnt/c/tools:/mnt/c/Users/asus/AppData/Roaming/npm:/mnt/c/Users/asus/AppData/Local/atom/bin:/snap/bin' The '(' token appears only once in the line :/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath: . I do not understand why this error is caused. What can I do to clear this? If I leave it as it is, how does it affect the respective path variables?
With regard to the shell syntax, ( is a special character (like ; , > , & etc.), it can't appear as part of an assigned value without being escaped or quoted. It's used e.g. to start subshells, but as you noticed, in most cases it just causes a syntax error. (Unlike, say & , which would just silently end the command.) However, the parenthesis aren't your only problem, you also have whitespace in the path. That's not a syntax error, but changes the meaning of the command. export PATH=/mnt/c/Program Files/Somepath means to assign /mnt/c/Program to PATH , and to export a variable called Files/Somepath , which also causes an error because the slash is not valid in a variable name. You'll need to either escape all the parenthesis and spaces, as in Program\ Files\ \(x86\) , or simply quote the whole string: PATH='/mnt/z/usr/local/bin:...:/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath:...' or just parts of it, though that may be harder to read: PATH=/mnt/z/usr/local/bin:...:'/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath':... (Note that you can't do both inside each other, PATH='/mnt/c/Program\ Files\ \(x86\)/...' would assign a string containing literal backslashes.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/604670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428468/" ] }
604,738
I've just installed the kali 2020.2 with kali-linux-2020.2-installer-amd64.iso in VMware 15.1.0. And I choose the default desktop environment "xfce". Everything looks good except when I open the terminal, the font size is too small that I almost can't see it. I clicked the "file" => "propertities" => "font" in terminal and changed the font size from 10 to 20. But after I clicked the "ok" button, nothing happened. And from the "properties" view, the font size is still 10. Could anyone tell me how to change the font size in the terminal in kali 2020.2? Thank you.
First of all you can zoom in terminal with control + “+” [^+]and zoom out with control + “-“ [^-] But if you want to edit INTERNAL TERMINAL SIZE ...then : Open terminalClick EDIT>PREFERENCES>PROFILES Now you can create a profile or edit current one you have and costumize as you want
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/604738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428516/" ] }
604,765
I know that the SHELL allows variable assignment to take place immediately before a command, such that IFS=":" read a b c d <<< "$here_string" works... What I was wondering is do such assignments not work when done with compound statements such as loops? I tried something like IFS=":" for i in $PATH; do echo $i; done but it results in a syntax error. I could always do something like oldIFS="$IFS"; IFS=":"; for....; IFS="$oldIFS" , but I wanted to know if there was any way I could make such inline assignments work for compound statements like for loops?
for is a reserved word and as such follows special rules : The following words shall be recognized as reserved words: ! { } case do done elif else esac fi for if in then until while This recognition shall only occur when none of the characters is quoted and when the word is used as: The first word of a command The first word following one of the reserved words other than case , for , or in The third word in a case command (only in is valid in this case) The third word in a for command (only in and do are valid in this case) If you try IFS=":" for i in $PATH; do echo $i; done then by the rules above that is not a for loop, as the keyword is notthe first word of the command. But you can get the desired output with tr ':' '\n' <<< "$PATH" #Bash, Ksh, Zshprintf "%s\n" "$PATH" | tr ':' '\n' #Any standard shell where tr replaces each : by a newline. You may be familiar with this valid approach: while IFS= read -r line; do while is the first word of the command, and the IFS assignment applies to read ,so all is OK.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/604765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399259/" ] }
604,791
I recently bought a Varmilo VA109M mechanical keyboard. It works fine on Windows, but seems to confuse my Ubuntu install in that the F1 - F12 function keys appear always to activate media shortcuts, regardless of whether I've held the dedicated Fn modifier key or not. For instance, F12 will increase my system volume if I press it on its own, and will do the same if I press Fn + F12 ; there is no way to get it to act like a normal F12 key. This is causing me issues because I do a lot of programming, and many IDE shortcuts rely on the standard function keys. I have tried resetting the keyboard's internal settings by holding Fn + Esc , but this didn't help. My Windows install on the same machine functions perfectly fine with this keyboard. Is there anything I can do to try and diagnose exactly what Ubuntu is getting confused about? EDIT: lsusb outputs the following: Bus 001 Device 003: ID 05ac:024f Apple, Inc. Varmilo KeyboardDevice Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x05ac Apple, Inc. idProduct 0x024f bcdDevice 1.00 iManufacturer 1 iProduct 2 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 0x005b bNumInterfaces 3 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 350mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 75 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 85 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0010 1x 16 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 2 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 33 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0020 1x 32 bytes bInterval 4 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x04 EP 4 OUT bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0020 1x 32 bytes bInterval 4
This is solvable! So I did some research into this myself recently and while Jd3eBP is right about the keyboard pretending to be an Apple keyboard, it's actually probably an issue with Varmilo's flashing at the factory. They sell a Mac version of the keyboard that I think differs only in firmware and labeling, by default I think it supports the Mac layout, it's also supposed to be able to switch to "windows mode" which probably swaps the order of the keys to what you'd expect, it identifies itself as an Apple keyboard to get Macs to treat it properly. However it seems like maybe they accidentally flashed that firmware onto every keyboard instead of just the Mac only ones, which isn't noticeable on Windows since it ignores the id, but on linux will activate the hid_apple driver. Solution: On to the answer part. There's two big options for solving this, I tested both and ended up finding the second much better. Change hid_apple into a mode where it treats the function keys normally, afaik this will basically solve the issue. You can find instructions here for how to do that, it will work on Ubuntu as well. https://wiki.archlinux.org/index.php/Apple_Keyboard#Function_keys_do_not_work . Reflash the keyboard with the product and vendor ID such that it will not be detected. This is arguably the right answer but a little more risky. You can get the firmware files from the manufacturer site here, https://en.varmilo.com/keyboardproscenium/Driverdownload , using the VA87M download. The updater itself didn't work (I think I needed Chinese localization installed), so you can use the updater that was supplied to someone here https://www.reddit.com/r/Varmilo/comments/g4sabk/fn_lock_on_va87m/ , using the official firmware file from the for good measure. If you don't trust that, I hear that if you email Varmilo about the issue they will provide the required files. That updater worked under wine for me after installing wine from the official site. This just reflashes the vendor and product ID to not come up as an Apple keyboard, it also removes the "switch to windows/mac mode" functionality that was unused on the Windows only version. You could probably flash the Mac firmware to revert to the old behavior if you want I didn't test that however.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/604791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303655/" ] }
604,796
I have two files receiving daily basis as TRN_HIST_TBL_16_AUG_2020 and TRN_HIST_TBL_NON_NPI_16_AUG_2020 and I want to rename them like TRN_HIST_TBL.txt and TRN_HIST_TBL_NON_NPI.txt I have written the rename command like below but it is creating only one file TRN_HIST_TBL.txt . mv TRN_HIST_TBL* TRN_HIST_TBL.txt mv TRN_HIST_TBL_NON_NPI* TRN_HIST_TBL_NON_NPI.txt
This is solvable! So I did some research into this myself recently and while Jd3eBP is right about the keyboard pretending to be an Apple keyboard, it's actually probably an issue with Varmilo's flashing at the factory. They sell a Mac version of the keyboard that I think differs only in firmware and labeling, by default I think it supports the Mac layout, it's also supposed to be able to switch to "windows mode" which probably swaps the order of the keys to what you'd expect, it identifies itself as an Apple keyboard to get Macs to treat it properly. However it seems like maybe they accidentally flashed that firmware onto every keyboard instead of just the Mac only ones, which isn't noticeable on Windows since it ignores the id, but on linux will activate the hid_apple driver. Solution: On to the answer part. There's two big options for solving this, I tested both and ended up finding the second much better. Change hid_apple into a mode where it treats the function keys normally, afaik this will basically solve the issue. You can find instructions here for how to do that, it will work on Ubuntu as well. https://wiki.archlinux.org/index.php/Apple_Keyboard#Function_keys_do_not_work . Reflash the keyboard with the product and vendor ID such that it will not be detected. This is arguably the right answer but a little more risky. You can get the firmware files from the manufacturer site here, https://en.varmilo.com/keyboardproscenium/Driverdownload , using the VA87M download. The updater itself didn't work (I think I needed Chinese localization installed), so you can use the updater that was supplied to someone here https://www.reddit.com/r/Varmilo/comments/g4sabk/fn_lock_on_va87m/ , using the official firmware file from the for good measure. If you don't trust that, I hear that if you email Varmilo about the issue they will provide the required files. That updater worked under wine for me after installing wine from the official site. This just reflashes the vendor and product ID to not come up as an Apple keyboard, it also removes the "switch to windows/mac mode" functionality that was unused on the Windows only version. You could probably flash the Mac firmware to revert to the old behavior if you want I didn't test that however.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/604796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428556/" ] }
604,875
I have some aliases setup (in this order) in .bashrc: alias ls="lsc"alias lsc='ls -Flatr --color=always'alias lscR='ls -FlatrR --color=always' Confirming them with alias after sourcing: alias ls='lsc'alias lsc='ls -Flatr --color=always'alias lscR='ls -FlatrR --color=always' I can run the newly aliased ls just fine, and it chains through to the lsc alias, and then executes the command associated with the lsc alias. I can also run lscR and it operates as expected. If I try to run lsc itself though, I get: $ lsclsc: command not found Any idea why the shell seems to be shadowing/hiding the lsc alias in this scenario?(I realise it's pointless to run 'lsc' when I can just run 'ls' to get the same result here, but I'm trying to understand the shells behaviour in this scenario). EDIT: Workarounds below for the (bash) shell behaviour provided in the question answers. Some really helpful answers have been provided to the original question. In order to short-circuit the expansion behaviour that is explained in the answers, there seems to be at least two ways of preventing a second alias, from trying to expand a command that you have already aliased. For example, if you have alias cmd='cmd --stuff' which is overriding a native command called cmd , you can prevent the 'cmd' alias from being used in place of the native cmd within other aliases, by: (thanks to wjandrea's comment for this first approach) prefixing cmd with 'command' in the other alias e.g. alias other-cmd-alias='command cmd --other-stuff' or, Similarly, you can escape aliases (as you can also do on the command line), within other aliases by prefixing with a backslash '', e.g. alias other-cmd-alias='\cmd --other-stuff' .
Bash does allow aliases to contain aliases but it has built-in protections against infinite loops. In your case, when you type lsc , bash first expands the alias to: ls -Flatr --color=always Since ls is also an alias, bash expands it to: lsc -Flatr --color=always lsc is an alias but, quite sensibly, bash refuses to expand it a second time . If there was a program named lsc , bash would run it. But, there is not and that is why you get command not found . Addendum It is different when lscR runs. lscR expands to: ls -FlatrR --color=always Since ls is an alias, this expands to: lsc -FlatrR --color=always Since lsc is an alias, this expands to: ls -Flatr --color=always -FlatrR --color=always Since ls has already been expanded once, bash refuses to expand it a second time . Since a real command called ls exists, it is run. History As noted by Schily in the comments, bash borrowed the concept of not expanding an alias a second time from ksh . Aside Aliases are useful but not very powerful. If you are tempted to do something complex with an alias, such as argument substitution, don't; use a shell function instead.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/604875", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266125/" ] }
605,067
We have a Debian Buster box (nftables 0.9.0, kernel 4.19) attached to four different network segments. Three of these segments are home to devices running Syncthing, which runs its own local discovery via broadcasts to UDP port 21027. The devices thus can't all "see" each other as the broadcasts don't cross segments; the Buster box itself does not participate in the sync cluster. While we could solve this by running Syncthing's discovery or relay servers on the Buster box, it's been requested that we not use them (reasons around configuration and devices which roam to other sites). Hence, we're looking at a nftables-based solution; my understanding is that this isn't normally done, but to make this work, we have to: Match incoming packets on UDP 21027 Copy those packets to the other segment interface(s) they need to be seen on Change the destination IP of the new packet(s) to match the new segment's broadcast address (while preserving the source IP as the discovery protocol can rely on it) Emit the new broadcasts without them getting duplicated again Only three of the attached segments participate with devices; all are subnet masked as /24. Segment A (eth0, 192.168.0.1) should not be forwarded Segment B (eth1, 192.168.1.1) should be forwarded to segment A only Segment C (eth2, 192.168.2.1) should be forwarded to both A and B The closest we have to a working rule for this so far is (other DNAT/MASQ and local filtering rules omitted for brevity): table ip mangle { chain repeater { type filter hook prerouting priority -152; policy accept; ip protocol tcp return udp dport != 21027 return iifname "eth1" ip saddr 192.168.2.0/24 counter ip daddr set 192.168.1.255 return iifname "eth0" ip saddr 192.168.2.0/24 counter ip daddr set 192.168.0.255 return iifname "eth0" ip saddr 192.168.1.0/24 counter ip daddr set 192.168.0.255 return iifname "eth2" ip saddr 192.168.2.0/24 counter dup to 192.168.0.255 device "eth0" nftrace set 1 iifname "eth2" ip saddr 192.168.2.0/24 counter dup to 192.168.1.255 device "eth1" nftrace set 1 iifname "eth1" ip saddr 192.168.1.0/24 counter dup to 192.168.0.255 device "eth0" nftrace set 1 }} The counters show that the rules are being hit, though without the daddr set rules the broadcast address remains the same as on the originating segment. nft monitor trace shows least some packets are reaching the intended interface with the correct destination IP, but are then landing in the input hook for the box itself and are not seen by other devices on the segment. Is the outcome we're looking for here achievable in practice, and if so, with which rules?
It's still possible to use nftables in the netdev family (rather than ip family) for this case, since only ingress is needed (nftables still doesn't have egress available). The behaviour of dup and fwd in the ingress hook is exactly the same as tc-mirred 's mirror and redirect . I also addressed a minor detail: rewrite the Ethernet source address to the new Ethernet outgoing interface's MAC address, as would have been done for a truly routed packet, even if it works for you without this. So the interfaces' MAC addresses has to be known beforehand. I put the two required ( eth0 's and eth1 's) in variables/macro definitions, which should be edited with the correct values. define eth0mac = 02:0a:00:00:00:01define eth1mac = 02:0b:00:00:00:01table netdev statelessnatdelete table netdev statelessnattable netdev statelessnat { chain b { type filter hook ingress device eth1 priority 0; pkttype broadcast ether type ip ip daddr 192.168.1.255 udp dport 21027 jump b-to-a } chain c { type filter hook ingress device eth2 priority 0; pkttype broadcast ether type ip ip daddr 192.168.2.255 udp dport 21027 counter jump c-to-b-a } chain b-to-a { ether saddr set $eth0mac ip daddr set 192.168.0.255 fwd to eth0 } chain c-to-b-a { ether saddr set $eth1mac ip daddr set 192.168.1.255 dup to eth1 goto b-to-a }}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428783/" ] }
605,084
Why isn't the Linux module API backward compatible? I'm frustrated to find updated drivers after updating the Linux kernel. I have a wireless adapter that needs a proprietary driver, but the manufacturer has discontinued this device about 7 years ago. As the code is very old and was written for Linux 2.6.0.0, it doesn't compile with the latest Linux kernels. I have used many Linux distributions but the same problem is everywhere. Although there is an open-source driver distributed with Linux kernel, it doesn't work. Some people are trying to modify the old proprietary code to make it compatible with the latest Linux kernels, but when a new Linux kernel is released, it takes months to make code compatible with that. Within that time, another new version is released. For this reason, I can't upgrade to a new Linux kernel; sometimes I can't even upgrade my distribution.
Greg Kroah-Hartman has written on this topic here: https://www.kernel.org/doc/html/v4.10/process/stable-api-nonsense.html Besides some technical details regarding compiling C code he draws out a couple of basic software engineering issues that make their decision. Linux Kernel is always a work in progress. This happens for many reasons: New requirements come along. People want their software to do more, that's why most of us upgrade, we want the latest and greatest features. These can require rework to the existing software. Bugs are found which need fixing, sometimes bugs are with the design itself and cannot be fixed without significant rework New ideas and idioms in the software world happen and people find much easier / elegant / efficient ways to do things. This is true of most software , and any software that is not maintained will die a slow and painful death . What you are asking is why doesn't that old unmaintained code still work? Why aren't old interfaces maintained? To ensure backward compatibility would require that old (often "broken" and insecure) interfaces are maintained. Of course it's theoretically possible to do this except it does carry significant cost . Greg Kroah-Hartman writes If Linux had to ensure that it will preserve a stable source interface, a new interface would have been created, and the older, broken one would have had to be maintained over time, leading to extra work for the [developers]. Since all Linux [developers] do their work on their own time, asking programmers to do extra work for no gain, for free, is not a possibility. Even though Linux is open source, there is still only limited developer time to maintain it. So manpower can still be discussed in terms of "cost". The developers have to chose how they spend their time: Spend a lot of time maintaining old / broken / slow / insecure interfaces. This can sometimes be double to triple the time it took to write the interface in the fist instance. Thow away the old interfaces and expect other software maintainers to [do their job and] maintain their own software. On balance, binning interfaces is really cost-effective (for the kernel developers) . If you want to know why developers don't spend months and years of their life saving others from paying $10 for a new wifi adaptor... that's the reason. Remember that's time/cost effective for the kernel developers, not necessarily cost-effective for you or manufacturers.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364840/" ] }
605,121
I have a service managed by systemd that has the following systemd config telling systemd to write the logs to a file directly (no syslog or anything) StandardOutput=file:/var/log/foo/my.log I have a logrotate rule /var/log/foo/*.log{ rotate 31 daily missingok notifempty compress delaycompress sharedscripts} What's happening is that the logs are being rotated but the service is still writing to the old rotated file and the new log file stays empty. I have a similar working setup where the service instead writes to syslog. That one works fine because the logrotate config has postrotate invoke-rc.d rsyslog rotate > /dev/null , which notifies syslog that its log has been rotated. The issue is that in my problematic case the log is going directly to the file so I don't know if (or which one) i need to send a similar signal to systemd or to the actual service process. I found the copytruncate option in logrotate which i'm pretty sure will fix my problem but i'm getting the feeling that this is not the ideal way to do it, otherwise copytruncate would be the default behaviour of logrotate. How do I solve this problem? do i need to send some signal to systemd? do i need to send some signal to the service process? do i have to use copytruncate in logrotate instead? If it matters, the service is a java process using logback to write to stdout
copytruncate is the right answer in this case. It's not the default because it's less common to need it, because you'd have a proper daemon that you can signal to re-open the log file. The alternative is to restart the service in the post-rotation script, but that may not be convenient or desirable.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206068/" ] }
605,232
I'd like to improve the readability of a script by writting 1 pattern per line. Is there a syntax that would allow me to transform grep 'foo\|bar\|barz' to something like the following? grep 'foo\| bar\| barz'
I don't see a need for | there. You could use multiple -e options spread across lines: grep -e foo \ -e bar \ -e barz
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/605232", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/366091/" ] }
605,333
I have observed that the size of mp4 videos are larger than webm for a same resolution video. Which makes me wonder that either there is some quality loss or compression in webm. youtube-dl -F url Generally using the above code the 4k videos are shown in webm quality. Can we force download 4k videos as mp4 instead of webm?
You can find the available mp4 resolutions available for a video with: youtube-dl --list-formats https://youtu.be/LXb3EKWsInQ | grep mp4 ... 401 mp4 3840x2160 2160p60 18167k , av01.0.13M.10.0.110.09.16.09.0, 60fps, video only, 460.07MiB ... For this example, you can find that 401 is the format number you need. Then, use the -f flag: youtube-dl -f 401 https://youtu.be/LXb3EKWsInQ
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259721/" ] }
605,374
I'm new to Unix scripting, so please bear with me. I am given a file which has information on processes on each line. I need to extract certain information on these processes from each line. Example of the file - process1 port=1234 appID=dummyAppId1 authenticate=true <some more params>process3 port=1244 authenticate=false appID=dummyAppId2 <some more params>process2 appID=dummyAppId3 port=1235 authenticate=true <some more params> The desired output is - 1port=1234 authenticate=true appID=dummyAppId1 2port=1244 authenticate=false appID=dummyAppId23port=1235 authenticate=true appID=dummyAppId3 The numbers 1, 2, and 3 on each line just denote the line number of the output file. I have already tried using the sed s/ command but it is order-specific, while the parameters in the input file don't follow an order - as a result, some lines in the input file are skipped. Here is my command - sed -nr 'appId/s/(\w+).*port=([^ ]+) .*authenticate=[^ ]+) .*appId=[^ ]+) .*/\2\t\3\t\4/p' | sed = Could anyone guide me on how to extract those parameters regardless of order? Thanks! Edit 1: I managed to use grep's look-behind zero-width assertion feature this way - grep -Po '(?<=pattern1=)[^ ,]+|(?<=pattern2=)[^ ,]+|(?<=pattern3=)[^ ,]+|(?<=pattern4=)[^ ,]+' filename but this seems to give the output for each line in new lines i.e. 1234truedummyAppId1 Trying to figure out how to get it on one line using grep (i.e. not via merging X lines into 1) Edit 2: mixed-up the order of parameters in the input Edit 3: I'm sorry, I should have mentioned this earlier - perl seems to be restricted on the machines I work on. While the answers provided by Stephane and Sundeep work perfectly when I test it out locally, it wouldn't work on the machines I need it to finally run on.It looks like awk, grep, and sed are the mainly supported options :(
With awk (tested with GNU awk , not sure if it works with other implementations) $ cat kv.awk/appID/ { for (i = 1; i <= NF; i++) { $i ~ /^port=/ && (a = $i) $i ~ /^authenticate=/ && (b = $i) $i ~ /^appID=/ && (c = $i) } print NR "\n" a, b, c}$ awk -v OFS='\t' -f kv.awk ip.txt1port=1234 authenticate=true appID=dummyAppId12port=1244 authenticate=false appID=dummyAppId23port=1235 authenticate=true appID=dummyAppId3 With perl $ # note that the order is changed for second line here$ cat ip.txtprocess1 port=1234 authenticate=true appID=dummyAppId1 <some more params>process3 port=1244 appID=dummyAppId2 authenticate=false <some more params>process2 port=1235 authenticate=true appID=dummyAppId3 <some more params>$ perl -lpe 's/(?=.*(port=[^ ]+))(?=.*(authenticate=[^ ]+))(?=.*(appID=[^ ]+)).*/$1\t$2\t$3/; print $.' ip.txt 1port=1234 authenticate=true appID=dummyAppId12port=1244 authenticate=false appID=dummyAppId23port=1235 authenticate=true appID=dummyAppId3 (?=.*(port=[^ ]+)) first capture group for port (?=.*(authenticate=[^ ]+)) second capture group for authenticate and so on print $. for line number To avoid partial matches, use \bport , \bappID etc if word boundary is enough. Otherwise, use (?<!\S)(port=[^ ]+) to restrict based on whitespace. If you need to print only lines containing appID or any other such condition, change -lpe to -lne and change print $. to print "$.\n$_" if /appID/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217209/" ] }
605,378
I'm looking for a way to start a real-time process, or to set an already running process as a real-time process.
You can do both with chrt (part of util-linux). To start a new command with a real-time scheduling policy, for example SCHED_FIFO at priority 99: sudo chrt --fifo 99 mycommand To change an already running process, pass its PID with -p : sudo chrt --fifo -p 99 1234 You can verify the result with: chrt -p 1234 chrt also supports other policies, e.g. --rr for SCHED_RR. Note that setting a real-time policy normally requires root or the CAP_SYS_NICE capability; unprivileged users are limited by the rtprio resource limit (RLIMIT_RTPRIO, configurable in /etc/security/limits.conf). Be careful with high priorities: a runaway SCHED_FIFO task at priority 99 can starve the rest of the system.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/335415/" ] }
605,410
I am trying to get only the matching strings ( match_E2 and pattern_2 ) along with the 1st column. abcd.corp;;a123,Virtual,aws,Linux,Linux,match_E2,Databaseweb1.corp;;,Virtual,azure,match_E2,Linux,corpo,Databaseweb2.corp;;match_E2,Virtual,a2responsible,Linux_Suse,Linux,corpo,Databaseweb3.corp;;Virtual,Virtual,corpo,pattern_2,Linux,corpo,Databaseweb4.corp;;Virtual,Virtual,corpo,,Linux,pattern_2,Database The expected output would be as below: abcd.corp,match_E2web1.corp,match_E2web2.corp,match_E2web3.corp,pattern_2web4.corp,pattern_2 I tried the -o option in grep, but it gives only the matching strings, without the first column.
I daresay your case might be better handled with sed . For match_E2 pattern: $ sed -nE 's/^([^;]+).*(match_E2).*/\1,\2/p' file.txt For pattern_2 pattern: $ sed -nE 's/^([^;]+).*(pattern_2).*/\1,\2/p' file.txt For both those patterns in one go: $ sed -nE 's/^([^;]+).*(match_E2|pattern_2).*/\1,\2/p' file.txt That is, basically: $ sed -nE 's/^([^;]+).*( ).*/\1,\2/p' file.txt# ^ ^# | |# ---------------------# put within these two parentheses the same (Extended Regular Expression) pattern you would use with `grep -E` Note that it only relies on at least one ; being the separator between the first field and the rest of the line.
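An awk alternative, if you'd rather build the output from the matched text instead of rewriting the whole line (same assumption that ; separates the first field from the rest): awk -F';' 'match($0, /match_E2|pattern_2/) {print $1 "," substr($0, RSTART, RLENGTH)}' file.txt Here match() records where the first match starts and how long it is in RSTART and RLENGTH , and substr() extracts exactly that matched string to print next to the first field.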
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332231/" ] }
605,468
I'm just learning linux. Working as root, I created a new user called testuser. I then changed testuser to a nologin shell. Now I want to login as the testuser to see what a user with a nologin shell can/can't do. I tried: su testuser and got: This account is currently not available.I tried: su - testuser and got:su: warning: cannot change directory to /home/testuser: No such file or directoryThis account is currently not available. How do I switch from root to a user with a nologin shell?
The point of the nologin shell is to prevent the user from logging in. Such a user may still use your server's services like FTP, IMAP/POP3 and others, but they won't be able to log in, e.g. via sshd or the console, period. How do I switch from root to a user with a nologin shell? sudo -u USERNAME /bin/bash will work, but only root can do that.
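If you just want to poke around as that user for testing, you can also override the account's shell on the fly (as root, and assuming /bin/bash exists on the box): su -s /bin/bash - testuser The warning about /home/testuser simply means the home directory was never created; create it with mkdir and chown it to testuser if you need a working home.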
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605468", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/429180/" ] }
605,531
What is the difference between the primary and secondary group?Why is there a need for a primary group? Are the user's permissions the same as the primary group permission? Assume there is a user A with primary group grp1 and user B with primary group grp2 and secondary group grp1 . Then can B have the same permission as A for the files created by A ?
What is the difference between primary and secondary group? What is the need of a primary group? From Unix Groups : Primary group – Specifies a group that the operating system assigns to files that are created by the user. Each user must belong to a primary group . Secondary groups – Specifies one or more groups to which a user also belongs. Users can belong to up to 15 secondary groups. What is the need of a primary group? Imagine there were no primary group assigned to my user and I'm in ten secondary groups. I create a new file: which group does this file belong to? Which one of these ten? The primary group tackles this issue and defines which group a file belongs to by default. You don't want the files you create in your home directory to be owned by a development group, so that anyone in that group could do whatever the group permissions allow with them. Assume there is a user A with primary group grp1 and user B with primary group grp2 and secondary group grp1 . Then can B have the same permissions as A for the files created by A ? No! Only the permissions granted by the group part of the file mode. Of course it is possible to arrange that, by adjusting the umask for example.
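A quick way to see the effect of the primary group in practice, assuming user B really is a member of both grp2 (primary) and grp1 (secondary): id -gn # prints B's current primary group, grp2 touch f1; ls -l f1 # f1 is group-owned by grp2 newgrp grp1 # start a new shell with grp1 as the effective primary group touch f2; ls -l f2 # f2 is group-owned by grp1 newgrp only works for groups the user already belongs to, and it only affects that shell and its children.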
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/429250/" ] }
605,806
I saw that in /lib/modules/ I have 7 directories related to out-of-date kernel versions. Can I delete them completely? Will doing so change or hurt my system in any way? $ ls /lib/modules5.4.0-26-generic 5.4.0-31-generic 5.4.0-37-generic 5.4.0-40-generic5.4.0-29-generic 5.4.0-33-generic 5.4.0-39-generic 5.4.0-42-generic$ uname -r5.4.0-42-generic # remove all directories except this kernel's directory
You should run dpkg -S /lib/modules/* to check whether any installed package matches those directories. You can delete any directory for which the above says dpkg-query: no path found matching pattern /lib/modules/... For directories still matching a package, you should remove the corresponding package first. If you’re using Ubuntu, sudo apt autoremove --purge should take care of this for you , but do pay attention to the list of packages it shows before confirming the removal.
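If you want that check done in one go, something along these lines should work; it relies on dpkg -S exiting with a non-zero status when no package owns the path, which is how current dpkg behaves: for d in /lib/modules/*; do dpkg -S "$d" >/dev/null 2>&1 || echo "not owned by any package: $d"; done Only delete the directories it reports as unowned, and let apt autoremove --purge deal with the ones still owned by kernel packages.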
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605806", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/429460/" ] }
605,843
I want to list files in a certain subdirectory, but I'm doing so as part of a docker exec inside a docker container, so I don't want to bother starting up a shell that I don't really need. Is it possible to find all the matches for a glob with a simple command line tool, and not just a shell? For example, my current invocation is bash -l -c 'echo /usr/local/conda-meta/*.json' . Is it possible to simplify this using a commonly available tool, resulting in something like globber /usr/local/conda-meta/*.json , which would be much simpler and lighter weight?
sh is simple and commonly available. sh is the tool that is invoked to parse command lines in things like system(cmdline) in many languages. Many OSes including some GNU ones have stopped using bash (the GNU shell) to implement sh for the reason that it has become too bloated to do just that simple thing of parsing command lines and interpreting POSIX sh scripts. Your bash -l -c 'echo /usr/local/conda-meta/*.json' command line is possibly being interpreted by a sh invocation already. So possibly you can just do: printf '%s\n' /usr/local/conda-meta/*.json directly. If not: sh -c 'printf "%s\n" /usr/local/conda-meta/*.json' You could also use find here. find doesn't do globbing but it can report file names that match patterns similar to shell ones. LC_ALL=C find /usr/local/conda-meta/. ! -name . -prune -name '*.json' Or with some find implementations: LC_ALL=C find /usr/local/conda-meta -mindepth 1 -maxdepth 1 -name '*.json' (note that the LC_ALL=C needed here so that * matches any sequence of bytes, not just those that are forming valid characters in the current locale, is a shell construct. If that command line is not interpreted by a shell, you may need to change it to env LC_ALL=C find... ) Some differences with shell globs: the list of files is not sorted hidden files are included (you could add a ! -name '.*' to exclude them) you get no output if there's no matching file. globs have that misfeature that they leave the pattern as-is unexpanded in that case. with the first (standard) variant, files will be output as /usr/local/conda-meta/./file.json . some globs such as x*/y/../*z are not easily translated (also note the differing behaviour with respect to symlinks to directories in that case). In any case, you can't use echo to output arbitrary data. My next question would be: what are you going to do with that output? With echo , you're outputting those file paths separated by SPC characters, and with my printf or find above, delimited by NL characters. Both NL and SPC are perfectly valid characters in file names, so those outputs are not post-processable reliable. You could use '%s\0' instead of '%s\n' (or use find 's -print0 if supported), not suitable for display to a user, but post-processable. In terms of efficiency, comparing Ubuntu 20.04's /bin/sh (dash 0.5.10.2) with its find (GNU find 4.7.0). Startup time: $ time (repeat 1000 sh -c '')( repeat 1000; do; sh -c ''; done; ) 0.91s user 0.66s system 105% cpu 1.483 total$ time (repeat 1000 find . -quit)( repeat 1000; do; find . -quit; done; ) 1.35s user 1.25s system 103% cpu 2.507 total Globbing some json files: $ TIMEFMT='%U user %S system %P cpu %*E total'$ time (repeat 1000 sh -c 'printf "%s\n" /usr/share/iso-codes/json/*.json') > /dev/null0.95s user 0.72s system 105% cpu 1.587 total$ time (repeat 1000 find /usr/share/iso-codes/json -mindepth 1 -maxdepth 1 -name '*.json') > /dev/null1.34s user 1.35s system 103% cpu 2.599 total Even bash is hardly slower than find here: $ time (repeat 1000 bash -c 'printf "%s\n" /usr/share/iso-codes/json/*.json') > /dev/null1.53s user 1.36s system 102% cpu 2.808 total Of course YMMV depending on the system, implementation, version of the respective utilities and the libraries they're linked against. Now on the history note, the glob name actually comes from the name of a utility called glob in the very first versions of Unix in the early 70s. It was located in /etc and was invoked by sh as a helper to expand wildcard patterns. 
You'll find a few projects online to revive that very old shell such as https://etsh.nl/ . More as an exercise in archaeology, you could build the glob utility from there and then be able to do: glob printf '%s\n' '/usr/local/conda-meta/*.json' A few notes of warning though. those are ancient globs, [!x] (let alone [^x] ) is not supported. it's not 8 bit safe. Actually, the 8th bit is used for escaping the glob operators ( $'\xe9*' would match the same thing as i* , $'\xaa*' would match on filenames that start with * ; the shell would set that 8th bit for the quoted characters before invoking glob ) ranges like [a-f] match on byte value rather than collation order (in practice, that's generally an advantage IMO). Non-matching globs result in a No match error (again, probably preferably, that's something that was broken by the Bourne shell in the late 70s). The glob functionality was later moved into the shell starting with the PWB shell and Bourne shell in the late 70s. Later, some fnmatch() and glob() functions were added to the C library to allow that feature to be used from other applications, but I'm not aware of a standard nor common utility that is a bare interface to that function. Even perl used to invoke csh in its early days to expand glob patterns.
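As an aside, since /usr/local/conda-meta suggests a conda installation, the image almost certainly ships Python, whose glob module is another shell-free option (a sketch, not tested in your container): python3 -c 'import glob, sys; print("\n".join(sorted(glob.glob(sys.argv[1]))))' '/usr/local/conda-meta/*.json' Like the find variants above, it prints nothing when there is no match, and it skips hidden files by default just like shell globs.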
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/605843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/294528/" ] }