source_id | question | response | metadata |
---|---|---|---|
484,060 | Will # dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table? Or is it the other way around, i.e, does # fdisk /dev/sda g (for GPT) wipe out the zeros written by /dev/zero ? | Will dd if=/dev/zero of=/dev/sda wipe out a pre-existing partition table? Yes, the partition table is in the first part of the drive, so writing over it will destroy it. That dd will write over the whole drive if you let it run (so it will take quite some time). Something like dd bs=512 count=50 if=/dev/zero of=/dev/sda would be enough to overwrite the first 50 sectors, including the MBR partition table and the primary GPT. Though at least according to Wikipedia, GPT has a secondary copy of the partition table at the end of the drive, so overwriting just the part in the head of the drive might not be enough. (You don't have to use dd , though. head -c10000 /dev/zero > /dev/sda or cat /bin/ls > /dev/sda would have the same effect.) does fdisk /dev/sda g (for GPT) wipe out the zeros written by /dev/zero? Also yes (provided you save the changes). (However, the phrasing in the title is just confusing, /dev/zero in itself does not do anything any more than any regular storage does.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/484060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
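The answer's note that GPT keeps a backup table at the end of the drive invites a concrete illustration. A minimal sketch (not taken from the answer) that zeroes both the primary and the backup GPT, assuming 512-byte logical sectors and the standard 128-entry table, i.e. 33 sectors at each end:

```sh
# Assumed device name; adjust. Requires root.
dev=/dev/sda
last=$(blockdev --getsz "$dev")                                  # size in 512-byte sectors
dd bs=512 count=50 if=/dev/zero of="$dev"                        # MBR + primary GPT
dd bs=512 count=33 seek=$(( last - 33 )) if=/dev/zero of="$dev"  # backup GPT
```

(gdisk's "zap" operation covers the same ground.)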
484,086 | I have multiple revisions of a text file in separate files in the same folder. How can I grep all files in that folder without listing any duplicate of lines with identical text? | How about cat * | grep exampletext | sort -u | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270469/"
]
} |
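A variant without cat, since grep accepts multiple files directly; the -h flag suppresses the filename prefixes that would otherwise make identical lines look distinct to sort -u:

```sh
grep -h exampletext ./* | sort -u
```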
484,228 | On Linux, is there a way for a shell script to check if its standard input is redirected from the null device (1, 3) * , ideally without reading anything? The expected behavior would be: ./checkstdinnull-> no./checkstdinnull < /dev/null-> yesecho -n | ./checkstdinnull-> noEDITmknod secretunknownname c 1 3exec 6<secretunknownnamerm secretunknownname./checkstdinnull <&6-> yes I suspect I "just" need to read the maj/min number of the input device . But I can't find a way of doing that from the shell. * No necessary just /dev/null , but any null device even if manually created with mknod . | On linux, you can do it with: stdin_is_dev_null(){ test "`stat -Lc %t:%T /dev/stdin`" = "`stat -Lc %t:%T /dev/null`"; } On a linux without stat(1) (eg. the busybox on your router): stdin_is_dev_null(){ ls -Ll /proc/self/fd/0 | grep -q ' 1, *3 '; } On *bsd: stdin_is_dev_null(){ test "`stat -f %Z`" = "`stat -Lf %Z /dev/null`"; } On systems like *bsd and solaris, /dev/stdin , /dev/fd/0 and /proc/PID/fd/0 are not "magical" symlinks as on linux, but character devices which will switch to the real file when opened . A stat(2) on their path will return something different than a fstat(2) on the opened file descriptor. This means that the linux example will not work there, even with GNU coreutils installed. If the versions of GNU stat(1) is recent enough, you can use the - argument to let it do a fstat(2) on the file descriptor 0, just like the stat(1) from *bsd: stdin_is_dev_null(){ test "`stat -Lc %t:%T -`" = "`stat -Lc %t:%T /dev/null`"; } It's also very easy to do the check portably in any language which offers an interface to fstat(2), eg. in perl : stdin_is_dev_null(){ perl -e 'exit((stat STDIN)[6]!=(stat "/dev/null")[6])'; } | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/484228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40697/"
]
} |
484,247 | I am having troubles understanding how to manage recurring tasks in taskwarrior I start with an empty database: $ task[task next]No matches. I add a recurring daily task: $ task add recur:daily due:later test It shows up in the report: $ task[task next]ID Age Recur Due Description Urg 2 - P1D 19.2y test 2.41 taskCreating recurring task instance 'test' If I mark it done like this: $ task 2 doneCompleted task 2 'test'.Completed 1 task.$ task[task next]No matches. it disappears from the report. I believe it makes sense, since "I completed the daily task today". The problem is it never appears again the next day and further. What am I doing wrong? | Apparently recurring tasks should have the same companion due settings. For example: $ task add "a daily recurring task" recur:daily due:eod$ task add "a weekly recurring task" recur:weekly due:eow$ task add "a monthly recurring task" recur:monthly due:eom this way, the daily task can be marked done and won't show up again till next day same for weekly/monthly etc tasks | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21126/"
]
} |
484,276 | disown causes a shell not to send SIGHUP to its disowned job when the shell terminates, and removes the disowned job from the shell's job control. Is the first the result of the second?In other words, if a process started from a shell is removed from the shell's job control by any way, will the shell not send SIGHUP to the process when the shell terminates? disown -h still keeps a process under a shell's job control. Does it mean that disown -h makes a process still receives SIGHUP sent from the shell, but sets up the action of SIGHUP by the process to be "ignore"? That sounds similar to nohup . $ sleep 123 & disown -h[1] 26103$ jobs[1]+ Running sleep 123 &$ fg 1sleep 123$ ^Z[1]+ Stopped sleep 125$ bg 1[1]+ sleep 123 &$ exit$ ps aux | grep sleept 26103 0.0 0.0 14584 824 ? S 15:19 0:00 sleep 123 Do disown -h and nohup work effectively the same, if we disregard their difference in using a terminal? Thanks. | nohup and disown -h are not exactly the same thing. With disown , a process is removed from the list of jobs in the current interactive shell. Running jobs after starting a background process and running disown will not show that process as a job in the shell. A disowned job will not receive a HUP from the shell when it exits (but see note at end). With disown -h , the job is not removed from the list of jobs, but the shell would not send a HUP signal to it if it exited (but see note at end). The nohup utility ignores the HUP signal and starts the given utility. The utility inherits the signal mask from nohup and will therefore also ignore the HUP signal. When the shell terminates, the process remains as a child process of nohup (and nohup is re-parented to init ). The difference is that the process started with nohup ignores HUP regardless of who sends the signal. The disowned processes are just not sent a HUP signal by the shell , but may still be sent the signal from e.g. kill -s HUP <pid> and will not ignore this. Note that HUP is only sent to the jobs of a shell if the shell is a login shell and the huponexit shell option is set, or the shell itself recieves a HUP signal. Relevant bits from the bash manual (my emphasis): SIGNALS [...] The shell exits by default upon receipt of a SIGHUP . Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP . To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h . If the huponexit shell option has been set with shopt , bash sends a SIGHUP to all jobs when an interactive login shell exits. disown [-ar] [-h] [jobspec ... | pid ... ] Without options, remove each jobspec from the table of active jobs. [...] If the -h option is given, each jobspec is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP . [...] Related: Difference between nohup, disown and & | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/484276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
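A small experiment along the lines the answer draws (run in an interactive bash; nohup.out is created as a side effect): the nohup'ed process survives a directly sent HUP, while the merely disowned one does not.

```bash
nohup sleep 300 &          # inherits SIG_IGN for SIGHUP from nohup
pid_nohup=$!
sleep 300 & disown -h      # shell won't send HUP; default disposition kept
pid_disown=$!
kill -s HUP "$pid_nohup" "$pid_disown"
sleep 1
ps -p "$pid_nohup" -p "$pid_disown"   # only the nohup'ed sleep should remain
```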
484,320 | I am constantly frustrated by this simple command: find / | fgrep somestuff.ext When I don't use sudo , I get line after line of permission denied - which is fair enough, but why isn't this output ignored when grep reads it from pipe? Why is this form of output sent straight to the terminal window and not passed into the pipe (what I suspect must be happening) and subsequently ignored by grep, while the same lines produced by cat (say I had permission denied messages stored in a text file) would correctly go into the pipe and be ignored by my grep pattern? I feel like there is something about the STDIN/STDOUT process I'm not understanding here | The permission denied messages are not sent to stdout from find but to stderr. You can redirect the whole stderr to the bit bucket: find 2>/dev/null | fgrep somestuff.ext Also, to find the given file, you don't need any grepping: find . -name somestuff.ext to which you can still apply the 2>/dev/null . To only suppress the permission denied messages, you can use 2> >(grep -v 'Permission denied' >&2) in bash. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322926/"
]
} |
484,388 | Sometimes, when I have numerous tabs open in Firefox, one of those tabs will start consuming a lot of CPU%, and I want to know which tab is the culprit. Doing this is a very manual process for which I'd like to find automation. I wish I had an application that could monitor firefox exclusively in a manner that produces concise output of only the firefox-facts I want to know. I'm looking for a command/application that will list the processes of each tab running in firefox filtered to only include the following info for each tab-process: Process ID Webpage Address of Tab CPU % usage Memory used Additionally, I'd like the info sorted by CPU % descending. Basically, I hoping there exists a program like htop, but that's exclusively dedicated to just the pertinent stuff I want to monitor in Firefox (while leaving out all the details I don't care about). | You can type about:performance in the address bar of firefox. Then you will get a table where there will be pid of each tab of firefox with Resident Set size and Unique Set Size . And below this there will be some lines explaining the performance of each tab (like performing well ) and if a tab is not performing well then it will show there and you can close that tab from there using the Close Tab option. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40149/"
]
} |
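As a command-line complement, a sketch using procps ps; it cannot show tab URLs, and the content-process name varies by Firefox version ("Web Content", "Isolated Web Co", and so on), so adjust the name to match:

```sh
ps -o pid,pcpu,pmem,args --sort=-pcpu -C 'Web Content'
```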
484,391 | I have a rather large bunch of files that contains several fields pipe-delimited. 5595340959340|1|MXPYAQWE|870569689456954654|0|0|20181018224126| 1212121212121212121212121212 |2|0|1000|70|33107||1|Event 5595340959340|1|MXPYAQWE|870569689456954654|0|0|20181018224126| 2323232323232323232323232323 |2|0|1000|70|33107||1|Event 5595340959340|1|MXPYAQWE|870569689456954654|0|0|20181018224126| 3434343434343434343434343434 |2|0|1000|70|33107||1|Event 5595340959340|1|MXPYAQWE|870569689456954654|0|0|20181018224126| 4545454545454545454545454545 |2|0|1000|70|33107||1|Event 5595340959340|1|MXPYAQWE|870569689456954654|0|0|20181018224126| 5656565656565656565656565656 |2|0|1000|70|33107||1|Event Notice the eighth field. It currently has 29 characters and I'm supposed to trim it so it has only five characters left. The only (convoluted) solution I've come up with is this: Isolate the fields I want to trim: awk -F "|" '{print $8}' > Original_Fields Trim the fields cp Original_Fields Tempmore Temp | cut -c -5 > Trimmed_Fields Create a susbtitution script with sed grep -rh -f <file_with_matching_strings> /path/to/files > Original_Stringsvi Original_Strings:%s/^/grep -rl "/g:%s/$/" \/path\/to\/file | xargs sed -i 's\//g:wq! And then edit the Original_Fields and Trimmed_Fields files, so I end up with grep -rl /path/to/file | xargs sed -i 's/Original_Field/Trimmed_Field/g' This works, but I strongly suspect there must be a quicker way to accomplish this with AWK and SED, so I can do all of this in just one step. | Yes, you can trim and rebuild each line with AWK: awk -F'|' 'BEGIN { OFS = FS } { $8 = substr($8, 1, 5); print }' This sets the input and output separators to “|”, and for each line of input, trims the eighth field to five characters at most, and prints all the fields (including the updated field). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/484391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322980/"
]
} |
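Since the question concerns a whole batch of files and awk has no portable in-place mode, a hedged sketch that applies the same program to each file through a temporary copy (the path is illustrative):

```sh
for f in /path/to/files/*; do
  awk 'BEGIN { OFS = FS = "|" } { $8 = substr($8, 1, 5); print }' "$f" > "$f.tmp" &&
    mv -- "$f.tmp" "$f"
done
```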
484,434 | I have: a Linux server that I connect via SSH on IP 203.0.113.0 port 1234 a home computer (behind a router), public IP 198.51.100.17, which is either Debian or Windows+Cygwin What's the easiest to have a folder /home/inprogress/ synchronized (in both directions), a bit like rsync , but with a filesystem watcher , so that each time a file is modified, it is immediately replicated on the other side? (i.e. no need to manually call a sync program) I'm looking for a command-line / no-GUI solution, as the server is headless. Is there a Linux/Debian built-in solution? | Following @Kusalananda's comment, I finally spent a few hours testing Syncthing for this use case and it works great. It automatically detects changes on both sides and the replication is very fast. Example: imagine you're working locally on server.py in your favorite Notepad software, you hit CTRL+S (Save). A few seconds later it's automatically replicated on the distant server (without any popup dialog). One great thing I've noticed is that you don't have to think about the IP of the home computer and server with Syncthing: each "device" (computer, server, phone, etc.) has a unique DeviceID and if you share the ID with another device, it will find out automatically how they should connect to each other. To do: Home computer side (Windows or Linux): Use the normal Syncthing in-browser configuration tool VPS side: First connect the VPS with a port forwarding: ssh <user>@<VPS_IP> -L 8385:localhost:8384 The latter option will redirect the VPS's Syncthing web-configuration tool listening on port 8384 to the home computer's port 8385. Then run this on VPS: wget https://github.com/syncthing/syncthing/releases/download/v0.14.52/syncthing-linux-amd64-v0.14.52.tar.gz tar xvfz syncthing-linux-amd64-v0.14.52.tar.gznohup syncthing-linux-amd64-v0.14.52/syncthing & Then on the home computer's browser, open http://localhost:8385 : this will be the VPS's Syncthing configuration! Other solution I tried: SSHFS using this tutorial . Please note that in this tutorial they don't use sshfs-win but win-sshfs instead (these are two different projects). I tried both, and I couldn't make any of them work (probably a problem with my VPS configuration). Here is an interesting reference too: https://softwarerecs.stackexchange.com/questions/13875/windows-sshfs-sftp-mounting-clients Additional advantages of Syncthing I've just noticed: you can reduce fsWatcherDelayS in the config.xml from 10 to 2 seconds so that after doing CTRL+S, 2 seconds later (+the time to upload, i.e. less than 1 second for a small text file) it's on the other computer if you sync two computers which are in the same local network (by just giving the DeviceID to each other, no need to care about local IP addresses), it will automatically notice that it doesn't need to transit via internet, but it can deal locally. This is great and allows a very fast speed transfer (4 MB/s!) sync of phone <--> computer both connected to the same home router via WiFi... ...whereas it would be stuck at 100 KB/s on ADSL with a Dropbox sync! (my ADSL is limited at 100 KB/s on upload) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/484434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59989/"
]
} |
484,442 | How can I get the pid of a subshell? For example: $ echo $$16808 This doesn't work, because the original shell expands $$ : $ ( echo $$ )16808 Why does single quoting not work? After the original shell removes the single quote, does the subshell not expand $$ in itself? $ ( echo '$$' )$$ Why does eval not work either? Is eval run by the subshell? Why does it give me the original shell's PID? $ ( eval echo '$$' )16808 Thanks. | $ echo $BASHPID37152$ ( echo $BASHPID )18633 From the manual: BASHPID Expands to the process ID of the current bash process. This differs from $$ under certain circumstances, such as subshells that do not require bash to be re-initialized. $ Expands to the process ID of the shell. In a () subshell, it expands to the process ID of the current shell, not the subshell. Related: Do parentheses really put the command in a subshell? , especially parts of Gilles' answer . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/484442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
484,448 | Ask the user how many papers to grade? Create a for loop that will loop the times necessary for each paper's score to be entered. Ask the user for each score (1-100). count the number of loops At the end of the program display the average score of all the papers ive done one for a while loop but not sure how to do for loop #!/bin/bashset -xcount=0papers=0score=0grade=0average=0read -p " How many papers would you like to grade? " paperswhile [ $count -lt $papers ]do read -p " Please enter a score " grade score=`expr $score + $grade` count=$((count + 1))doneaverage=`expr $score / $papers`echo $average | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/484448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322957/"
]
} |
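A for-loop variant of the while-loop script shown in the question, as a minimal sketch using bash's C-style for and arithmetic expansion in place of expr:

```bash
#!/bin/bash
read -p "How many papers would you like to grade? " papers
score=0
for (( count = 1; count <= papers; count++ )); do
    read -p "Please enter a score " grade
    score=$(( score + grade ))
done
echo $(( score / papers ))   # integer division, as in the original
```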
484,481 | I am trying to list every .tar.gz file, only using the following command: ls *.tar.gz -l ...It shows me the following list: -rw-rw-r-- 1 osm osm 949 Nov 27 16:17 file1.tar.gz-rw-rw-r-- 1 osm osm 949 Nov 27 16:17 file2.tar.gz However, I just need to list it this way: file1.tar.gz file2.tar.gz and also not: file1.tar.gz file2.tar.gz How is this "properly" done? | The -1 option (the digit “one”, not lower-case “L”) will list one file per line with no other information: ls -1 -- *.tar.gz | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/484481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49478/"
]
} |
484,556 | Linux has 7 virtual consoles, which correspond to 7 device files /dev/tty[n] . Is a virtual console running as a process, just like a terminal emulator? (I am not sure. It seems a virtual console is part of the kernel, and if that is correct, it can't be a process.) Is a virtual console implemented based on pseudoterminal, just like a terminal emulator? (I guess no. Otherwise, a virtual console's device file will be /dev/pts/[n] , instead of /dev/tty[n] ) Thanks. | That is incorrect. There's a terminal emulator program built into the Linux kernel. It doesn't manifest as a running process with open file handles. Nor does it require pseudo-terminal devices. It's layered on top of the framebuffer and the input event subsystem, which it uses internal kernel interfaces to access. It presents itself to application-mode systems as a series of 63 (not 7) kernel virtual terminal devices, /dev/tty1 to /dev/tty63 . User-space virtual terminals are implemented using pseudo-terminal devices. Pseudo-terminal devices, kernel virtual terminal devices, and real terminal devices layered on top of serial ports are the three types of terminal device (as far as applications programs are concerned) in Linux. Because of a lack of coördination, Linux documentation is now quite bad on this subject. There has been for several years no manual page for kernel virtual terminal devices on several Linux operating systems, although there are pages for the other two types of terminal device. This manual page would have explained the correct number or devices and their device file names and used to read: A Linux system has up to 63 virtual consoles (character devices with major number 4 and minor number 1 to 63), usually called /dev/tty n with 1 <= n <= 63. The current console is also addressed by /dev/console or /dev/tty0 , the character device with major number 4 and minor number 0. Debian people noticed that Debian was missing a console (4) manual page in 2014, and switched to installing the one from the Linux Manpages Project, only for people in that same project to delete their console (4) manual page a year and a bit later in 2016 because "Debian and derivatives don't install this page" and "Debian no longer carries it". Further reading https://unix.stackexchange.com/a/177209/5132 https://unix.stackexchange.com/a/333922/5132 Linux: Difference between /dev/console , /dev/tty and /dev/tty0 What are TTYs >12 used for? ttyS . Linux Programmers' Manual . Michael Kerrisk. 1992-12-19. pty . Linux Programmers' Manual . Michael Kerrisk. 2017-09-15. https://dyn.manpages.debian.org/jessie/manpages/console.4.html https://dyn.manpages.debian.org/stretch/manpages/console.4.html https://dyn.manpages.debian.org/testing/manpages/console.4.html http://manpages.ubuntu.com/manpages/trusty/en/man4/console.4.html http://manpages.ubuntu.com/manpages/artful/en/man4/console.4.html http://manpages.ubuntu.com/manpages/bionic/en/man4/console.4.html http://manpages.ubuntu.com/manpages/cosmic/en/man4/console.4.html Vincent Lefevre (2014-12-27). manpages: some man pages have references to console (4), which no longer exists . Debian bug #774022. Dr. Tobias Quathamer (2016-01-05). " console.4 : Is now included in this package. (Closes: #774022) ". manpages 4.04-0.1 . changelog. Marko Myllynen (2016-01-07). console (4) is out of date . Kernel bug #110481. Michael Kerrisk (2016-03-15). " console.4 : Remove outdated page ". man-pages . kernel.org. Jonathan de Boyne Pollard (2016). " Terminals ". nosh Guide . Softwares. 
Jonathan de Boyne Pollard (2018). Manual pages for Linux kernel virtual terminal devices . Proposals. Jonathan de Boyne Pollard (2018). console . Linux Programmers' Manual . Proposals. Jonathan de Boyne Pollard (2018). vt . Linux Programmers' Manual . Proposals. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
484,561 | I'm using IBM AIX which doesn't have much support like sed -i and sed with \t is not working in my case. I would like to replace replace NotApplicable string with a single space ' ' then replace @@@ multi-char-delimiter with a tab delimiter , in a specified order using single command be it awk, or sed. I tried using sed as following but it didn't work. Couldn't add search and replace for NotApplicable with a ' ' single space in below command. sed 's/@@@/\t/g' file.csv > file.xls Sample data. cola@@@colb@@@colbctest@@@test@@@testtest@@@NotApplicable@@@test123@@@145@@@567333@@@444@@@NotApplicablecola colb colbctest test testtest test123 145 567333 444 | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3303/"
]
} |
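For the AIX question above, a hedged awk sketch: awk interprets \t itself, so neither the shell nor sed needs tab support, and a field-by-field pass blanks the NotApplicable fields before the delimiter is rewritten. The assignment $1 = $1 forces awk to rebuild each record with the new OFS even when no field changed:

```sh
awk 'BEGIN { FS = "@@@"; OFS = "\t" }
     { for (i = 1; i <= NF; i++) if ($i == "NotApplicable") $i = " "
       $1 = $1; print }' file.csv > file.xls
```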
484,630 | How to check, in a Bash script, that no command line arguments or STDIN was provided ? I mean if I run: #> ./myscript.sh... Show message "No data provided..." and exit Or: #> ./myscript.sh filename.txt... Read from filename.txt Or: #> ./myscript.sh < filename.txt**... Read from STDIN | Does this fit your requirements ? #!/bin/shif test -n "$1"; then echo "Read from $1";elif test ! -t 0; then echo "Read from stdin"else echo "No data provided..."fi The major tricks are as follow: Detecting that you have an argument is done through the test -n $1 which is checking if a first argument exists. Then, checking if stdin is not open on the terminal (because it is piped to a file) is done with test ! -t 0 (check if the file descriptor zero (aka stdin ) is not open). And, finally, everything else fall in the last case ( No data provided... ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/484630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323194/"
]
} |
484,655 | If I wanted to stay on the same file system, couldn't I just specify an output path for the same file system? Or is it to prevent accidentally leaving the current file system? | It limits where files are copied from , not where they’re copied to. It’s useful with recursive copies, to control how cp descends into subdirectories. Thus cp -xr / blah will only copy the root file system, not any of the other file systems mounted. See the cp -x documentation (although its distinction is subtle). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/484655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270469/"
]
} |
484,675 | I have many files like xyz_123_foo.ext for which I would like to add -bar to the filenames at the end to result in xyz_123_foo-bar.ext . I tried: rename . -bar. xyz_* which resulted in: rename: invalid option -- 'b' followed by the usage text. I then tried variations with '-bar' and "-bar" to no avail. How can I get rename to accept - as part of the replacement string? Or would another command be more efficient or appropriate? My shell is bash and I am using the rename from util-linux on SuSe Linux SLE12. | mmv is nice for tasks like this ex. mmv -n -- '*.ext' '#1-bar.ext' or for any dot extension mmv -n -- '*.*' '#1-bar.#2' Remove the -n once you are happy that it is doing the right thing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323219/"
]
} |
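On the original rename question: if the util-linux build parses options with standard getopt (an assumption worth verifying on SLE12), a -- end-of-options marker lets the dash-leading replacement through. Note it substitutes the first '.', so re-running it on already-renamed files would append -bar again:

```sh
rename -- . -bar. xyz_*
```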
484,685 | I am trying to install Debian in ASUS P2440UA laptop. Previously I was using Windows 7 and Linux Mint in dual boot without any issue (for 6 months). But I decided to format whole hard drive and install only Debian. I have formatted the hard drive as GPT and tried to install Debian in UEFI mode. I used 256 MB EFI system partition, 60 GB root and 4 GB swap. Rest of the space is mounted as home. But at the end of installation process "failed to install grub bootloader on a hard drive".Then I have followed this https://forums.kali.org/showthread.php?37091-GRUB-Boot-Loader-Not-installing-on-Hard-Drive but still didn't worked. It was an "input output error".Then I have found this When wouldn't you want to install GRUB bootloader? saying that installing grub in a modern computer is not the best option. In this case what is the best option for me? I am trying for four days.Note that I havd an empty hard drive. I want to use gpt format for hard drive. I will use multi boot. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259704/"
]
} |
484,789 | In the question " Testing if a file descriptor is valid ", a test is sought for testing whether a file descriptor is opened or not. The answers all focus on testing whether the file descriptor is opened for output , but how may one test whether the file descriptor is opened for input ? This came up in a comment thread for an answer to another question, where the answer said, paraphrasing, if [ -n "$1" ]; then # read input from file "$1" (we're assuming it exists)elif [ ! -t 0 ]; then # read input from standard input (from pipe or redirection)else # no input given (we don't want to read from the terminal)fi The problem with [ ! -t 0 ] is that the -t test is true if the file descriptor is open and associated with a terminal. If the test is false, then the descriptor is either closed, or not associated with a terminal (i.e. we're reading from a pipe or redirection). Testing with [ ! -t 0 ] is therefore not a guarantee that the file descriptor is even valid. How to determine whether it's valid (so that read would not complain) or whether it's closed? | The check is easy to do in C with either read(fd, 0, 0) or (fcntl(fd, F_GETFL) & O_WRONLY) == 0 . I wasn't able to trick any standard utility into doing just that, so here are some possible workarounds. On linux, you can use /proc/PID/fdinfo/FD : if [ ! -d "/proc/$$/fd/$fd" ] && grep -sq '^flags.*[02]$' "/proc/$$/fdinfo/$fd"; then echo "$fd" is valid for inputfi On OpenBSD and NetBSD, you can use /dev/fd/FD and dd with a zero count: if dd if=/dev/fd/3 count=0 3<&"$fd" 2>/dev/null; then echo "$fd" is valid for inputfi On FreeBSD, only the first 3 fds are provided by default in /dev/fd ; you should either mount fdescfs(5) on /dev/fd or: if (dd if=/dev/fd/0 count=0 <&"$fd") 2>/dev/null; then echo "$fd" is valid for inputfi Notes: On some systems, bash does its emulation of /dev/fd/FD , and so cat </dev/fd/7 may work completely different from cat /dev/fd/7 . Same caveat applies to gawk . A read(2) with length 0 (or an open(2) without O_TRUNC in its flags) will not update the access time or any other timestamps. On linux, a read(2) will always fail on a directory, even if it was opened without the O_DIRECTORY flag. On other Unix systems, a directory may be read just like another file. The standard leaves unspecified whether dd count=0 will copy no blocks or all blocks from the file: the former is the behaviour of GNU dd ( gdd ) and of the dd from *BSD. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484789",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
484,966 | The file name staff.txt and sample contents are: JHONMANAGER10000 I want to find JHON in a file and I want to change whatever's in the 2nd line after that one with another word/number. How can I accomplish this? | With sed you can move to next line with n : sed '/JHON/{n;n;s/.*/42/}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323452/"
]
} |
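An equivalent awk sketch, which may scale more readably if the offset grows; the value 20000 is just an illustrative replacement:

```sh
awk -v new='20000' '/JHON/ { n = NR + 2 } NR == n { $0 = new } 1' staff.txt
```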
484,980 | I get a long list of such arrays { "id": "byu6g6c4cjys5mdkg5znh8ju8c", "create_at": 1511875272294, "update_at": 1511875272294, "delete_at": 0, "display_name": "BMA", "name": "BMA", "description": "", "email": "[email protected]", "type": "O", "company_name": "", "allowed_domains": "", "invite_id": "gdgz1tbxuinntx1fbr1ax7kehy", "allow_open_invite": false, "scheme_id": null } I want to get by JQ only the ID where the name is BMA.At the moment I parse " jq -r '.[]["name"]" and I can filter the output from curl by name and I will get "BMA" and also 100 other names, but I need to filter only the ID where name is =BMA.Any ideas? | jq You should be able to accomplish this with the following: jq '.[] | select( .name == "BMA" ).id' If name is BMA it will extract and output the corresponding .id . To use the value of a shell variable, import it into jq with --arg : myvariable=BMAjq --arg name "$myvariable" '.[] | select( .name == $name ).id' json json -c 'this.name === "BMA"' -a id | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322111/"
]
} |
484,997 | How to set the vi or emacs command line editing mode the Bash AND how to determine which mode is currently set? | To set : set -o vi Or: set -o emacs (setting one unsets the other. You can do set -o vi +o vi to unset both) To check: if [[ -o emacs ]]; then echo emacs modeelif [[ -o vi ]]; then echo vi modeelse echo neitherfi That syntax comes from ksh . The set -o vi is POSIX. set -o emacs is not (as Richard Stallman objected to the emacs mode being specified by POSIX) but very common among shell implementations. Some shells support extra editing modes. [[ -o option ]] is not POSIX, but supported by ksh, bash and zsh. [ -o option ] is supported by bash , ksh and yash (note that -o is also a binary OR operator for [ ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/484997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323481/"
]
} |
485,004 | Let's say I have a file as below { "fruit": "Apple",} I want to remove the comma at the end of the line, if and only if the next line contains "}". So, the output will be : { "fruit": "Apple"} However, if the file is as below. I do not want to do any change. Since the , s are not followed by a } { "fruit": "Apple", "size": "Large", "color": "Red"} Anything with sed would be fantastic. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323492/"
]
} |
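For the trailing-comma question above, a hedged sed sketch: it keeps a sliding two-line window and drops the comma only when the following line opens, after optional whitespace, with a closing brace, as in both samples:

```sh
sed '$!N;s/,\n\([[:space:]]*}\)/\n\1/;P;D' file.json
```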
485,065 | I tried created a very simple cron task that echo's "Hello World" into a file named /tmp/example.txt . You can see from the screenshot, I tried: 28 23 * * * echo "Hello World" >> /tmp/example.txt28 23 * * * root /bin/bash echo "Hello World" >> /tmp/example.txt Also in the screenshot, you can see the date . I have also tried getting the exact date in the cron task (e.g.: 28 23 29 11 4), but that didn't work either. I also tried using an actual file 28 23 * * * ./pingstuff The pingstuff file just pings Google. I can run this file and it executes properly. But, when I try using it in the crontab -e, nothing happens at the scheduled time. (Times have been changed accordingly after each attempt.) I am logged in as a super user. I have permissions to read/write/execute all of these files. I'm not really sure what I'm doing wrong. | A cronjob may fail to execute because the cron daemon is not running there's a syntax error in the crontab there's a syntax error in the command or there's a permission problem (e.g. execute bit not set) To check if the daemon is running, try service cron status or systemctl status cron (the service manager depends on your distribution). The daemon may also be called something slightly different, like crond or cronie . If it's not running, start it (replace status with start ). If it's running, proceed to check the relevant log files to see whether the job was actually run. Depending on your distribution, this might be logged to /var/log/syslog , /var/log/messages , a daemon-specific file like /var/log/cron , or a systemd binary log file ( journalctl -u cron to view). You should see a line for each execution of the job. When testing crontabs, set an execution time that's more than one minute in the future. Some cron implementations "pre-plan" what tasks to run, i.e. they'll decide at 12:33:00 what to run at 12:34:00, so you'll miss the window of opportunity if you add a 34 12 … cronjob at 12:33:30. If the job runs but doesn't produce the expected result, try running the command from the crontab manually, as the same user, with the same shell (usually the minimalistic /bin/sh ). One common pitfall (though not the case here) are % characters in the command: they are treated specially by cron and need to be escaped ( \% ) to be seen by the actual command invocation. Most cron implementations also send the output (if any) of each cronjob as an email. If you haven't set up mail delivery over the internet, those mails should be stored locally and be readable using the mail command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323548/"
]
} |
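A quick way to test the daemon, the crontab syntax, and the logging in one go is a hypothetical one-minute test entry:

```
* * * * * date >> /tmp/cron-test.log 2>&1
```

If the log gains a line every minute, the machinery works and the fault lies with the specific command or its environment.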
485,156 | From this answer to Linux: Difference between /dev/console , /dev/tty and /dev/tty0 From the documentation : /dev/tty Current TTY device/dev/console System console/dev/tty0 Current virtual console In the good old days /dev/console was System Administrator console. And TTYs were users' serial devices attached to a server. Now /dev/console and /dev/tty0 represent current display and usually are the same. You can override it for example by adding console=ttyS0 to grub.conf . After that your /dev/tty0 is a monitor and /dev/console is /dev/ttyS0 . By " System console ", /dev/console seems like the device file of a text physical terminal, just like /dev/tty{1..63} are device files for the virtual consoles. By " /dev/console and /dev/tty0 represent current display and usually are the same", /dev/console seems to me that it can also be the device file of a virtual console. /dev/console seems more like /dev/tty0 than like /dev/tty{1..63} ( /dev/tty0 is the currently active virtual console, and can be any of /dev/tty{1..63} ). What is /dev/console ? What is it used for? Does /dev/console play the same role for Linux kernel as /dev/tty for a process? ( /dev/tty is the controlling terminal of the process session of the process, and can be a pts, /dev/ttyn where n is from 1 to 63, or more?) The other reply mentions: The kernel documentation specifies /dev/console as a character device numbered 5:1. Opening this character device opens the "main" console, which is the last tty in the list of consoles. Does "the list of consoles" mean all the console= 's in the boot option ? By " /dev/console as a character device numbered 5:1", does it mean that /dev/console is the device file of a physical text terminal i.e. a system console? (But again, the first reply I quoted above says /dev/console can be the same as /dev/tty0 which is not a physical text terminal, but a virtual console) Thanks. | /dev/console exists primarily to expose the kernel’s console to userspace. The Linux kernel’s documentation on devices now says The console device, /dev/console , is the device to which system messages should be sent, and on which logins should be permitted in single-user mode. Starting with Linux 2.1.71, /dev/console is managed by the kernel; for previous versions it should be a symbolic link to either /dev/tty0 , a specific virtual console such as /dev/tty1 , or to a serial port primary ( tty* , not cu* ) device, depending on the configuration of the system. /dev/console , the device node with major 5 and minor 1, provides access to whatever the kernel considers to be its primary means of interacting with the system administrator; this can be a physical console connected to the system (with the virtual console abstraction on top, so it can use tty0 or any ttyN where N is between 1 and 63), or a serial console, or a hypervisor console, or even a Braille device. Note that the kernel itself doesn’t use /dev/console : devices nodes are for userspace, not for the kernel; it does, however, check that /dev/console exists and is usable, and sets init up with its standard input, output and error pointing to /dev/console . As described here, /dev/console is a character device with a fixed major and minor because it’s a separate device (as in, a means of accessing the kernel; not a physical device), not equivalent to /dev/tty0 or any other device. 
This is somewhat similar to the situation with /dev/tty which is its own device (5:0) because it provides slightly different features than the other virtual console or terminal devices. The “list of consoles” is indeed the list of consoles defined by the console= boot parameters (or the default console, if there are none). You can see the consoles defined in this way by looking at /proc/consoles . /dev/console does indeed provide access to the last of these : You can specify multiple console= options on the kernel command line. Output will appear on all of them. The last device will be used when you open /dev/console . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/485156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
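Both claims are easy to observe on a running system (output varies per machine):

```sh
ls -l /dev/console    # character device, major 5, minor 1
cat /proc/consoles    # consoles registered via console=; the last one backs /dev/console
```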
485,164 | I am trying to configure a DNS cache with dnsmasq.The server responds to the query, but the response time is exactly the same as the Cloudflare DNS.To test the DNS Server I have removed any internet DNS Server from my computer and also in the dnsmasq config file. Here my /etc/dnsmasq.conf domain=raspberry.localresolv-file=/etc/resolv.dnsmasqmin-port=4096cache-size=10000 I have tried for example: dig facebook.it and the Query time is circa 85 msec, end this is the exactly tile that I have if I use Clodflare DNS.maybe there is something that I don't understand, but I think that a Query time should be less than 10 msec if I use a local cache DNS. Here the content of the file /etc/resolv.conf # Generated by resolvconf# Domainsearch xxxxxxx# CloudFlare Serversnameserver 1.1.1.1nameserver 1.0.0.1search lannameserver 127.0.0.1 I don't try 127.0.0.1 because I use the DNS server on raspberry pi for the rest of lan. I have tried dig facebook.com and the response arrive from 192.168.100.5 that is the raspberry pi LAN IP | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/485164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311687/"
]
} |
485,173 | I have an Enigma2 FreeSat recorder that I've now hooked up to my Plex Media Server. Plex can see and play the files from the Enigma2 just fine, but the file naming makes this unattractive. How can I rename files of this format: yyyymmdd nnnn - channel - title.* e.g. 20181128 2100 - BBC One HD - The Apprentice.* To: title - dd-mm-yyyy - channel.* e.g. The Apprentice - 28-11-2018 - BBC One HD.* (in such a way I can run this every few minutes from the command line). I want to be sure that it only matches files in the first format so it doesn't try to rename files already renamed. Later I'll want to have this running as a docker container. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/485173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323624/"
]
} |
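For the Enigma2 renaming question above, a bash sketch using [[ =~ ]] capture groups. The anchored pattern matches only the original 'yyyymmdd nnnn - channel - title.ext' shape, so already-renamed files are skipped; it assumes the title itself contains no further ' - ', since the greedy match splits at the last separator:

```bash
for f in *' - '*' - '*.*; do
  [[ $f =~ ^([0-9]{4})([0-9]{2})([0-9]{2})\ [0-9]{4}\ -\ (.*)\ -\ (.*)(\.[^.]*)$ ]] || continue
  mv -n -- "$f" "${BASH_REMATCH[5]} - ${BASH_REMATCH[3]}-${BASH_REMATCH[2]}-${BASH_REMATCH[1]} - ${BASH_REMATCH[4]}${BASH_REMATCH[6]}"
done
```

Run from cron every few minutes this stays idempotent, because renamed files no longer begin with eight digits.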
485,181 | I need to compare a command output with a string.This is the scenario: pvs_var=$(pvs | grep "sdb1") so pvs var is: /dev/sdb1 vg_name lvm2 a-- 100.00g 0 if [[ $($pvs_var | awk '{ print $2 }') = vg_name ]]; then do somethingfi The issue is that the output of the if statement is -bash: /dev/sdb1: Permission denied I don't understand this behavior. Thank you | You are attempting to execute the contents of $pvs_var as a command, rather than passing the string to awk. To fix this, add an echo or printf in your if statement: if [[ $(echo "$pvs_var" | awk '{ print $2 }') = vg_name ]]; then do somethingfi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/485181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271131/"
]
} |
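The same test without the extra echo process, using a bash here-string:

```bash
if [[ $(awk '{ print $2 }' <<< "$pvs_var") = vg_name ]]; then
    : # do something
fi
```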
485,187 | While doing my Linux training I found this curiosity: If I do "sudo su - username" and log into that username account, it doesn't count as login when I do "finger username" Take this image where "jonathan" it's me (current user) and alumne2 is another account I created to test basic commands on it. Why it doesn't show up the last logging (finger alumne2) if from jonathan I "log in" as "sudo su - alumne2" ? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/485187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319873/"
]
} |
485,221 | I am trying to get a bash array of all the unstaged modifications of files in a directory (using Git). The following code works to print out all the modified files in a directory: git -C $dir/.. status --porcelain | grep "^.\w" | cut -c 4- This prints "Directory Name/File B.txt""File A.txt" I tried using arr1=($(git status --porcelain | grep "^.\w" | cut -c 4-)) but then for a in "${arr1[@]}"; do echo "$a"; done (both with and without the quotes around ${arr1[@]} prints "DirectoryName/FileB.txt""FileA.txt" I also tried git -C $dir/.. status --porcelain | grep "^.\w" | cut -c 4- | readarray arr2 but then for a in "${arr2[@]}"; do echo "$a"; done (both with and without the quotes around ${arr2[@]} ) prints nothing. Using declare -a arr2 beforehand does absolutely nothing either. My question is this: How can I read in these values into an array? (This is being used for my argos plugin gitbar , in case it matters, so you can see all my code). | TL;DR In bash: readarray -t arr2 < <(git … )printf '%s\n' "${arr2[@]}" There are two distinct problems on your question Shell splitting. When you did: arr1=($(git … )) the "command expansion" is unquoted, and so: it is subject to shell split and glob. The exactly see what that shell splitting do, use printf: $ printf '<%s> ' $(echo word '"one simple sentence"')<word> <"one> <simple> <sentence"> That would be avoided by quoting : $ printf '<%s> ' "$(echo word '"one simple sentence"')"<word "one simple sentence"> But that, also, would avoid the splitting on newlines that you want. Pipe When you executed: git … | … | … | readarray arr2 The array variable arr2 got set but it went away when the pipe ( | ) was closed. You could use the value if you stay inside the last subshell: $ printf '%s\n' "First value." "Second value." | { readarray -t arr2; printf '%s\n' "${arr2[@]}"; }First value.Second value. But the value of arr2 will not survive out of the pipe. Solution(s) You need to use read to split on newlines but not with a pipe. From older to newer: Loop. For old shells without arrays (using positional arguments, the only quasi-array): set --while IFS='' read -r value; do set -- "$@" "$value"done <<-EOT$(printf '%s\n' "First value." "Second value.")EOTprintf '%s\n' "$@" To set an array (ksh, zsh, bash) i=0; arr1=()while IFS='' read -r value; do arr1+=("$value")done <<-EOT$(printf '%s\n' "First value." "Second value.")EOTprintf '%s\n' "${arr1[@]}" Here-string Instead of the here document ( << ) we can use a here-string ( <<< ): i=0; arr1=()while IFS='' read -r value; do arr1+=("$value")done <<<"$(printf '%s\n' "First value." "Second value.")"printf '%s\n' "${arr1[@]}" Process substitution In shells that support it (ksh, zsh, bash) you can use <( … ) to replace the here-string: i=0; arr1=()while IFS='' read -r value; do arr1+=("$value")done < <(printf '%s\n' "First value." "Second value.")printf '%s\n' "${arr1[@]}" With differences: <( ) is able to emit NUL bytes while a here-string might remove (or emit a warning) the NULs. A here-string adds a trailing newline by default. There may be others AFAIK. readarray Use readarray in bash [a] (a.k.a mapfile ) to avoid the loop: readarray -t arr2 < <(printf '%s\n' "First value." "Second value.")printf '%s\n' "${arr2[@]}" [a] In ksh you will need to use read -A , which clears the variable before use, but needs some "magic" to split on newlines and read the whole input at once. IFS=$'\n' read -d '' -A arr2 < <(printf '%s\n' "First value." 
"Second value.") You will need to load a mapfile module in zsh to do something similar. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/485221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184237/"
]
} |
485,246 | I used to use Windows but switched to Linux, I am used to use putty on Windows and to copy from Windows and paste on Putty I could just right click, but I think I might be missing some configuration, when I right click it does not paste, when I CTRL+V it does not paste, I can copy and paste any text anywhere on Elementary OS, but it just won't paste inside putty, is there some clipboard configuration on putty or something for this?... | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/485246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316527/"
]
} |
485,284 | I want to install several gcc with different versions in centos. The default version of gcc in centos 6 is 4.9.3. So I use devtoolset to install a higher version of gcc. Then I switch to the higher version of gcc by executing "source /opt/rh/devtoolset-5/enable". But now if I want to switch back to the default gcc, how should I do it? By the way, is there any solution to install multiple gcc with different versions in centos 5? | The version of gcc that's distributed with CentOS 6 is actually 4.4.7. You can install as many versions of gcc as you like, either by installing devtoolset-# via yum or by compiling them from source. The first way is the easiest. Make sure that you are installing the devtoolset packages via the scl repo . I figure that you already did, as you have installed one already, but in case you didn't: yum install centos-release-scl You can then use the below command to set the gcc version to whichever one you want. Using 5 for this example and assuming that your shell is bash : scl enable devtoolset-5 bash If you want to change to 6: scl enable devtoolset-6 bash If you want to change back to the default then any of the following will work assuming bash is your shell: bash source ~/.bash_profile The first will start a new shell session and set any aliases/variables/commands in ~/.bashrc . The second will set it with the variables/commands in ~/.bash_profile (without the devtoolset enabled). You can even put scl enable devtoolset-5 bash , for example, in ~/.bashrc or ~/.bash_profile so that it sets the gcc version to one of the devtoolset versions at login. To go back to the system default if you use this method, comment the line out in ~/.bashrc or ~/.bash_profile and then run bash or source ~/.bash_profile , respectively. That will start a new shell session with everything in one of those shell init files except the scl enable command that you commented out. The only downside is that any variables that you've set via the export command will no longer be there as the shell session will be new. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323703/"
]
} |
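A quick sanity check of which compiler a session is using after switching with scl (the version strings below are only illustrative):
$ gcc --version | head -n1    # system compiler
gcc (GCC) 4.4.7 ...
$ scl enable devtoolset-6 bash
$ gcc --version | head -n1    # the devtoolset compiler is now first in PATH
gcc (GCC) 6.3.1 ...
$ exit                        # leaving the scl shell restores the system gcc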
485,379 | I read a tutorial and was instructed to do chown root:root /home/mynewuser as part of the process to get my ssh key working with a new user I created "kevind", however it broke the path. What does this do? How can I reset it back to default? What ever the default would be, / , ~ or something? Tutorial came from this answer comment . | In general: Do not execute commands from the web if you do not know exactly what they do. Especially as root!! The command chown root:root /home/mynewuser is: ch anging the own ership to user:group of /home/mynewuser . However, the first comment from your linked page adds an -R (keep reading). Assuming the user kevind (using the specific name you provided) already has a main group also called kevind (you can create it if needed), the command to revert the effect is: chown kevind:kevind /home/kevind It must be executed as/by root to revert the ownership from root to the user kevind . A more extensive change, to ensure that kevind doesn't have some file owned by root inside his directories (for security reasons), is: chown -R kevind:kevind /home/kevind That will R ecurse inside all directories and subdirectories of the given top directory. That is a safe command; there is no real reason for a user to have a file (or directory) owned by root inside his home directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323775/"
]
} |
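To verify that the recursive chown left nothing behind, the following (run as root) lists anything under the home directory that is still not owned by kevind, and should print nothing on a clean home:
find /home/kevind ! -user kevind -ls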
485,463 | we run the smartctl on sdb disk smartctl -a /dev/sdbsmartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build)Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.orgSmartctl open device: /dev/sdb failed: DELL or MegaRaid controller, please try adding '-d megaraid,N' according to the output from smartctl we change it to smartctl -a -d megaraid,0 /dev/sdbsmartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build)Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION ===Vendor: TOSHIBAProduct: MG04SCA20ENY.. and I set the - 0 , according to the first bus ( from smartctl --scan ) smartctl --scan/dev/sda -d scsi # /dev/sda, SCSI device/dev/sdb -d scsi # /dev/sdb, SCSI device/dev/bus/0 -d megaraid,0 # /dev/bus/0 [megaraid_disk_00], SCSI device/dev/bus/0 -d megaraid,12 # /dev/bus/0 [megaraid_disk_12], SCSI device/dev/bus/0 -d megaraid,13 # /dev/bus/0 [megaraid_disk_13], SCSI device/dev/bus/0 -d megaraid,14 # /dev/bus/0 [megaraid_disk_14], SCSI device/dev/bus/0 -d megaraid,16 # /dev/bus/0 [megaraid_disk_16], SCSI device but I am not sure if this value "0" is the right value am I right here ? | Yes, you can use 0, or 12, or 13, or 14, or 16 for N. If your scan output isn't complete, possibly even more numbers. And you already tried with 0, and it worked. So try the others, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
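To probe every ID from smartctl --scan in one go, a small loop works (device path and ID list taken from the scan output in the question):
for n in 0 12 13 14 16; do echo "=== megaraid,$n ==="; smartctl -i -d megaraid,$n /dev/bus/0; done
smartctl -i prints only the information section (vendor, model, serial number), which is usually enough to map each N to a physical drive.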
485,475 | On Ubuntu, I have a problem to establish a connection to redis server using SSH tunnel and SSH key with Redis Desktop Manager (RDM). What are the symptoms? I can connect to the server where redis is running using "plain" ssh and my id_rsa , other utilities which use either the SSH agent or keys in .ssh can connect to this server and create tunnels (e.g. DB apps), I can connect with RDM to redis servers using SSH tunnel and password (so the question is not a duplicate of Unable to establish an SSH tunnel using Redis Desktop Manager ); but this is not a perfect solutions, because I would rather use private/public keys authorization, I cannot convert keys in .ssh to a working PEM format required by RDM: any PEM files I've generated using different methods I googled are rejected by RDM with a message Connection: Disconnect on error: SSH Connection error(Authentication Error): Unable to extract public key from private key file: Unable to open private key file , I tried entering either a path to id_rsa ( ~/.ssh/id_rsa ) or just a path to a directory where my private key is stored ( ~/.ssh ). So, does anyone have an idea how to properly convert my SSH keys to a PEM format RDM needs and accepts? | That's a known issue of RDM: https://github.com/uglide/RedisDesktopManager/issues/4230 Workaround: copy your id_rsa file into a directory without "." in its name... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40495/"
]
} |
485,477 | I have following json [ { "ip":"105.105.105.105", "timestamp":"1543746097", "ports":[ { "port":80, "proto":"tcp", "status":"open", "reason":"syn-ack", "ttl":128 } ] }, { "ip":"105.105.105.105", "timestamp":"1543746097", "ports":[ { "port":53, "proto":"tcp", "status":"open", "reason":"syn-ack", "ttl":128 } ] }] I want to extract ports to simple csv output 80,53 I tried jq -r '.[]."ports" | map(.port) | @csv' 105.105.105.105_tcp.json and jq -r '.[]."ports" | map(.port) | join(",")' 105.105.105.105_tcp.json but none of them work. | Here's a jq-only solution: jq -r '[ .[].ports[].port ]|@csv' network.json80,53 The approach here is to retrieve the port numbers, wrap them into an array, and then convert it to the CSV format. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257520/"
]
} |
485,505 | As I wrote at https://unix.stackexchange.com/a/484626/5132 this is worthy of its own Q&A. On Linux operating systems … % getent passwd binbin:x:2:2:bin:/bin:/usr/sbin/nologin% … and on FreeBSD … % getent passwd binbin:*:3:7:Binaries Commands and Source:/:/usr/sbin/nologin% … and on OpenBSD … $ getent passwd binbin:*:3:7:Binaries Commands and Source:/:/sbin/nologin$ … one can still today find a bin account. But it is pretty much undocumented. The Linux Standard Base version 5 says merely … Notes: The bin User ID/Group ID is included for compatibility with legacy applications. New applications should no longer use the bin User ID/Group ID. … without explaining the nature of the compatibility mechanism. As Joey Hess put it back in 2001 : bin : HELP : No files on my system are owned by user or group bin . What good are they? Historically they were probably the owners of binaries in /bin ? It is not mentioned in the FHS, Debian policy, or the changelog of base-passwd or base-files. M. Hess's question remains unanswered in the Debian doco for its base-passwd package to this day , 17 years later. So what is the bin account for? | bin has not properly been for anything for the entire lifetime of Linux. Like run-levels and init spawning getty because of records in /etc/inittab , the bin account was obsolete in the Unix world before Linux was even invented. It was an idea from the 1980s that was broken by the invention and adoption of NFS (Network File System) and its nobody user. Its continued presence in user account databases in the late 2010s when people in the commercial Unix world actively discontinued its use in the 1990s is a testament to inertia. The idea was that the bin user owned various directories such as /bin and /usr/bin (and indeed some of the others mentioned at https://unix.stackexchange.com/a/448799/5132 such as /usr/mbin and /usr/5bin ) and the non-set-UID/non-set-GID files within them. It also owned doco files and directories, such as manual pages. (In more extreme cases on some Unices it even owned / and /etc , although the latter was an acknowledged mistake in the creation of a SunOS operating system image. The former was just dunderheaded.) Thus the permission to enact software updates, running as user bin , was not a blanket permission, running as the superuser, to perform any action whatsoever against the system. The software upgrader could not read/write private user files, access mailboxes, and so forth; which updating softwares as the superuser of course could. Several other special account entries in your /etc/passwd file must have passwords. These are the administrative accounts — bin , daemon , sys , uucp , lp , and adm . […] The primary reason for the existence of these accounts is the secure ownership of commands, scripts, files, and devices. And some administrators install passwords for these accounts and actually use them. […] A password-free bin account is extremely useful for a system breaker. — Rebecca Thomas and Rik Farrow (1989). UNIX Administration Guide for System V . Prentice Hall. ISBN 9780139428890. p. 452. NFS was invented in the early 1980s, and broke this idea completely. It was already on shaky ground, as the aforegiven quotation alludes. 
This was because the ability to update the program image files for basic utilities that the superuser executed as a matter of course, such as /bin/ls for example, is a direct vector for gaining superuser privileges, and the division of access in using a bin account merely prevented accidentally modifying the wrong directories rather than prevented a malefactor from gaining superuser access. The advent of NFS highlighted this. Although NFS had a mechanism for remapping the superuser account to an ordinary non-system user account, it did not have the same for non-root-accounts like bin . So if someone could gain bin access on an NFS client it gave them bin access to the operating system files on an NFS server. Indeed, it enabled a superuser on an NFS client, who would otherwise be remapped to an ordinary non-system user on the server, to gain owner access to the server's operating system files and directories. This was common knowledge by the early 1990s, and at that time received wisdom was to chown stuff that was owned by bin to being owned by the superuser, to plug this hole that had already become a standard reporting item in Unix security auditing tools and was warned against in the likes of the installation doco for Sendmail. As far as M. Hess's questions are concerned, this idea was never adopted on Debian, which only came into existence years after it was known to be a bad idea in the Unix world, which indeed knew it to be a bad idea before Linux itself was invented. The BSD operating systems whose history does stretch back into the 1980s have long since done away with the actual ownership, but nonetheless retain the user account in the account database. FreeBSD converted bin : bin ownership to root : wheel ownership back in 1998 , for example. Further reading Jonathan de Boyne Pollard (2015). /etc/inittab is a thing of the past. . Frequently Given Answers. Jonathan de Boyne Pollard (2018). run-levels are things of the past. . Frequently Given Answers. Jonathan de Boyne Pollard (2018). Don't abuse nobody for running dæmons. . Frequently Given Answers. Thomas Benjamin (1995-11-07). Tiger flags *bin* owned system files, disabled accounts . comp.sys.hp.hpux. Bob Proulx (2002-09-15). Re: question about /etc/passwd entries . linux.debian.user. Doug Siebert (1999-06-22). Re: OS files owned by bin not root . comp.security.unix. Doug Siebert (1998-06-01). Re: AIX : "/" is owned by bin.bin . umn.local-lists.security. Theo de Raadt (1999-06-22). Re: OS files owned by bin not root . comp.security.unix. gene (1994-09-08). Why should bin not own a directory . comp.security.unix. Brad Powell (1993-08-12). Re: Root directory 'bin bin'? . comp.security.unix. Mario Wolczko (1991-09-13). Re: What breaks if /etc is not owned by bin? . alt.security. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/485505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5132/"
]
} |
485,677 | Currently, Debian testing codename is buster , and next-stable will be buster . I have installed Debian testing to keep packages up-to-date. My question is: Will my Debian become the stable release of Debian after buster is released as stable? If the answer is yes: I want to keep my Debian to testing version and keep packages up-to-date, What should I do? | If your /etc/apt/sources.list file references buster then it will stay on buster from testing through stable and then old-stable . If you have referenced testing then it will stay on testing regardless of the current testing version. You can see more details on the Debian Wiki , which includes a suggested sources.list file. I've taken that and tweaked it to reference testing as mentioned in your question: deb http://deb.debian.org/debian testing main contrib non-freedeb-src http://deb.debian.org/debian testing main contrib non-freedeb http://deb.debian.org/debian-security/ testing/updates main contrib non-freedeb-src http://deb.debian.org/debian-security/ testing/updates main contrib non-freedeb http://deb.debian.org/debian testing-updates main contrib non-freedeb-src http://deb.debian.org/debian testing-updates main contrib non-free | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299017/"
]
} |
485,705 | in a file containing : ...18-11-2018:othercharacters10-11-2018:othercharacters03-10-2018:othercharacters30-10-2018:othercharacters27-09-2018:othercharacters03-12-2018:othercharacters... the command : sort -t- -k2 -k1 does not sort by date, what am I missing ? | That's one of the reasons why the recommended date format is YYYY-MM-DD. -k2 sorts on the portion of the line that starts with the second field, you need -k2,2 to sort on the second field only, so: sort -b -t- -k2,2 -k1,1 Or: sort -b -k1.7,1.10 -k1.4,1.5 -k1.1,1.2 To sort first on year (7th to 10th character of the first field (counted after having ignored the leading blanks in that field with -b , and with the default field separator (transition from a non-blank to a blank))), then month then day. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205321/"
]
} |
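To see why the ,2 matters, compare both forms on the sample lines (assumed saved as dates.txt): sort -b -t- -k2,2 -k1,1 dates.txt compares only the month field and then the day, while sort -t- -k2 -k1 dates.txt compares everything from the month to the end of the line — year and trailing text included — which is why the original command appeared not to sort by date.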
485,918 | For lazy reasons I pushed a bunch of commits with default messages and now it has become cumbersome, as I don't really know what I've changed in each commit. How do I edit just the messages of previous commits and (if possible) keep the commit tree? | To edit the commit messages of a series of commits, I run git rebase -i firstsha where firstsha is an identifier for the parent commit of the first commit I want to edit. (You can use any valid reference here, so git rebase -i HEAD~4 will show the last four commits.) In the editor that opens, change all the “pick” entries to “reword” on commits you wish to modify, then close the editor; you will then be asked to enter commit messages for all the commits you chose. Note that this will change the commit tree, because the hashes of the commits will change. You will have to force-push your new tree, or push it to a new branch. Take care when rebasing merges; you’ll need to use -r ( --rebase-merges ), and read the “Rebasing merges” section of the git rebase manpage . To quickly edit only the last commit, run git commit --amend (but beware of anything already staged for commit). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/485918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256195/"
]
} |
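Concretely, after git rebase -i HEAD~3 the editor buffer looks something like this (hashes and messages here are made up):
pick a1b2c3d default message
pick e4f5a6b default message
pick c7d8e9f default message
Changing pick to reword on the commits to fix, then saving and closing, is enough — git reopens the editor once per reworded commit so the new message can be typed.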
486,019 | Double quoting command substitution is a good practice. Is it the same for process substitution ( <() and >() )? It seems double quotes allow command substitution, but disallow process substitution: $ echo <(printf "%s" hello)/dev/fd/63$ echo "<(printf "%s" hello)"<(printf %s hello) What if the result of any process substitution contains whitespaces, or that never happens? Thanks. | Quoting process substitution will inhibit it, as is trivially tested: $ echo <(echo 123)/dev/fd/63$ cat <(echo 123)123$ echo "<(echo 123)"<(echo 123)$ cat "<(echo 123)"cat: '<(echo 123)': No such file or directory It is bizarre to me that you did not make an attempt to do so before asking the question, but it is at least easily verified now that it will not work. This is no different to what happens when quoting other shell operators; echo "(" is not a syntax error for the same reason, nor does echo "> /dev/sda" cause you any problems. The documented behaviour is that: This filename is passed as an argument to the current command as the result of the expansion As a single argument, the presence or absence of whitespace is not material and word-splitting is not relevant and not performed. It is plausible that on some platform whitespace appeared within the generated path, but it would have no impact. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
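The single-argument behaviour is easy to demonstrate with a throwaway function that counts its arguments (the /dev/fd number may differ on your system):
$ nargs() { echo "$# argument(s), first: $1"; }
$ nargs <(printf '%s\n' "words with spaces")
1 argument(s), first: /dev/fd/63
Whatever the inner command prints, the substitution itself expands to exactly one word.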
486,029 | According to man ps : -p pidlist Select by PID. This selects the processes whose process ID numbers appear in pidlist. Identical to p and --pid. -q pidlist Select by PID (quick mode). This selects the processes whose process ID numbers appear in pidlist. With this option ps reads the necessary info only for the pids listed in the pidlist and doesn't apply additional filtering rules. The order of pids is unsorted and preserved. No additional selection options, sorting and forest type listings are allowed in this mode. Identical to q and --quick-pid. I see that -q is considerably faster than -p , taking at most one quarter the time to produce an identical listing. For example: $ time ps -fq "$$"UID PID PPID C STIME TTY TIME CMDvagrant 8115 3337 0 23:05 pts/0 00:00:00 bashreal 0m0.003suser 0m0.001ssys 0m0.002s$ time ps -fp "$$"UID PID PPID C STIME TTY TIME CMDvagrant 8115 3337 0 23:05 pts/0 00:00:00 bashreal 0m0.013suser 0m0.003ssys 0m0.009s$ On another system, I observed ps -q to take less than a tenth the time of ps -p . However, I'm not using a forest-type listing, and I've only passed a single PID so the sorting isn't taking any time (and sorting should be negligible anyway for moderately short PID lists). There are no additional filtering rules in my command. What all is ps -p doing that ps -q is not? | What I can answer exactly is: What exactly ps -q PID does not do? Sort and/or select a tree from the process list given. From add -q/q/--quick-pid option with bolding added: This commit introduces a new option q/-q/--quick-pid to the 'ps' command. The option does a similar job to the p/-p/--pid option (i.e. selection of PIDs listed in the comma separated list that follows the option), but the new option is optimized for speed. In cases where users only need to specify a list of PIDs to be shown and don't need other selection options, forest type output and sorting options, the new option is recommended as it decreases the initial processing delay by avoiding reading the necessary information from all the processes running on the system and by simplifying the internal filtering logic. The option is designed to be fast. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
486,153 | I have a Red Hat Enterprise Linux server (7.5 x86_64). I have OpenSSH version 7.4. I was asked to upgrade it to a later version for security reasons: Nessus states that OpenSSH should be ugraded from 7.4 to 7.6 or later . However the Red Hat software and downloads does not have the latest package RPM. I found some clues on where to get the latest package for OpenSSH. I found this link , however, I do not know on how to upgrade it and trust this website. I do not want the SSH and other configuration to be modified by the ugrade. I did find links but however they are not useful, for example this one . I would like to know how to upgrade OpenSSH without using yum . | RHEL 7 ships OpenSSH 7.4p1 with any patches necessary to fix security issues. RHEL 7 is fully supported until 2024 (and longer with extended support contracts). This means that all known vulnerabilities in your version of OpenSSH are fixed, and newly-discovered vulnerabilities which are discovered in the future will be fixed — there’s no need to upgrade to the latest version of OpenSSH to avoid vulnerabilities. That’s one of the points of using a supported distribution: you can rely on your distributor to take care of upstream vulnerabilities for you (as long as you keep your systems up-to-date). To upgrade to a version of OpenSSH later than 7.4 you’d have to upgrade to RHEL 8 (which is currently in beta and has OpenSSH 7.8), or build it yourself for RHEL 7 (and take on support for future vulnerabilities). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324393/"
]
} |
486,180 | I have a program that throws a segmentation fault on certain circumstances. I want to execute a command when the segmentation fault occurs to process the data, then execute the command again, and keep doing so until the segmentation fault stops. As a rough attempt at pseudo code, dodgy_commandwhile SegFault dataProcessing dodgy_commandend I think I need to be using a Trap command, but I don't understand the syntax for this command. | When a program aborts abnormally the exit code (as seen by the shell) typically has the high bit set so the value is 128 or higher. So a simple solution might be dodgy_commandwhile [ $? -ge 128 ]do process_data dodgy_commanddone If you specifically only want a segfault and not any other type of error, the while line becomes $? -eq 139 (because SEGV is signal 11; 128+11=139). If you don't get an high valued exit code on failure then it probably means the application is trapping the error, itself, and forcing a different exit code. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324418/"
]
} |
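The 139 value can be reproduced without a real crash by making a throwaway shell send itself SIGSEGV:
$ bash -c 'kill -SEGV $$'; echo $?
139
which confirms the 128 + signal number convention that the loop relies on.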
486,230 | I want to store an array of some quotes (so basically real-world strings with new lines) in a file. How can I achieve it? I thought of setting the IFS to something like “xxxxxxxxx74765xxx” (which will never occur in my strings), but of course, IFS only works for single chars. I can think of some ugly hacks to do it (e.g., store that nonsense string as a line between elements, read the file line by line and check each line against it, and rebuild the array thus.), but I will appreciate some more experienced opinions. | Just do: typeset array > file To load: source file (you can also use typeset -p array to also save the attributes of the array variable (exported, unique...)). Alternatively: print -rl -- ${(qq)array} > file To load: eval "array=($(<file))" For your separator idea: print -r -- ${(j[separator])array} > file To load: array=("${(@s[separator])"$(<file)"}") (though beware it removes all trailing newline characters from the last element of the array and it doesn't work for an empty array). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282382/"
]
} |
486,326 | Sometimes I need to run some math operations. I know I can use bc or echo $(( 6/2 )) . I have created my own function for bc to read input. But sometimes it takes a long time to type: _bc "6/2" . So I have this question: Is there way to teach zsh/bash how to run math operation for numbers in command line? One example is more than thousands words. $ 6/2$ 3.0 It means that zsh/bash must recognize numbers and call i.e. bc . | Shortcut Alt - c (bash) With bash, using the readline utility, we can define a key sequence to place the word calc at the start and enclose the text written so far into double quotes: bind '"\ec": "\C-acalc \"\e[F\""' Having executed that, you type 23 + 46 * 89 for example, then Alt - c to get: calc "23 + 46 * 89" Just press enter and the math will be executed by the function defined as calc, which could be as simple as, or a lot more complex: calc () { <<<"$*" bc -l; } a (+) Alias We can define an alias: alias +='calc #' Which will comment the whole command line typed so far. You type: + (56 * 23 + 26) / 17 When you press enter, the line will be converted to calc #(56 * 23 + 26) / 17 and the command calc will be called. If calc is this function: bash calc(){ s=$(HISTTIMEFORMAT='' history 1); # recover last command line. s=${s#*[ ]}; # remove initial spaces. s=${s#*[0-9]}; # remove history line number. s=${s#*[ ]+}; # remove more spaces. eval 'bc -l <<<"'"$s"'"'; # calculate the line. } ksh calc(){ s=$(history -1 | # last command(s) sed '$!d;s/^[ \t]*[0-9]*[ \t]*+ //'); # clean it up # (assume one line commads) eval 'bc -l <<<"'"$s"'"'; # Do the math. } zsh zsh doesn't allow neither a + alias nor a # character. The value will be printed as: $ + (56 * 23 + 26) / 17 77.29411764705882352941 Only a + is required, String is quoted (no globs), shell variables accepted: $ a=23 $ + (56 * 23 + $a) / 17 77.11764705882352941176 a (+) Function With some limitations, this is the closest I got to your request with a function (in bash): +() { bc -l <<< "$*"; } Which will work like this: $ + 25+68+8/2493.33333333333333333333 The problem is that the shell parsing isn't avoided and a * (for example) could get expanded to the list of files in the pwd. If you write the command line without (white) spaces you will probably be ok. Beware of writing things like $(...) because they will get expanded. The safe solution is to quote the string to be evaluated: $ + '45 + (58+3 * l(23))/7'54.62949752111249272462$ + '4 * a(1) * 2'6.28318530717958647688 Which is only two characters shorter that your _bc "6/2" , but a + seems more intuitive to me. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/486326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95722/"
]
} |
486,430 | I want to use AWK to get the filename or the last folders name if the string is only a directory. I have: awk -F '/' '{print $NF}' to print the last column and: awk -F '/' '{print $(NF - 1)} to print one column before the last. How can I make awk recognize if the string contains only a directory and no filename and in this case print one column before the last. My problem is that a directory might look like: ./folder1/folder2/folder3/ and in this case the last column would be empty. I want awk to recognize this and then print folder3 (so one column before the last one). | You can either use if or the ?: operator for this. awk -F '/' '{print $NF == "" ? $(NF - 1) : $NF}' awk -F '/' '{if($NF == "") print $(NF - 1); else print $NF}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
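A quick check of both input shapes (here-strings used just for the demo):
$ awk -F '/' '{print $NF == "" ? $(NF - 1) : $NF}' <<< './folder1/folder2/folder3/'
folder3
$ awk -F '/' '{print $NF == "" ? $(NF - 1) : $NF}' <<< './folder1/folder2/file.txt'
file.txt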
486,510 | I'm thinking about extracting the time from the 'date' command, subtracting a certain time in the future from it to get the number of seconds left until 'date' reaches that time, then to divide that number by 60 for minutes, and 60 for hours. I want to use this as an argument for the 'shutdown' command for example. how do I do this? | Something like this? echo $(( $(date +%s -d "tomorrow 12:00") - $( date +%s ) ))59856 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/486510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324679/"
]
} |
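To feed this to shutdown as the question intends, divide by 60, since shutdown +m takes minutes (the target time is illustrative):
$ sudo shutdown +$(( ( $(date +%s -d "tomorrow 12:00") - $(date +%s) ) / 60 ))
shutdown also accepts an absolute time directly, e.g. sudo shutdown 12:00 , which avoids the arithmetic entirely.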
486,902 | I have this in a bash script: exit 3;exit_code="$?"if [[ "$exit_code" != "0" ]]; then echo -e "${r2g_magenta}Your r2g process is exiting with code $exit_code.${r2g_no_color}"; exit "$exit_code";fi It looks like it will exit right after the exit command, which makes sense.I was wondering is there some simple command that can provide an exit code without exiting right away? I was going to guess: exec exit 3 but it gives an error message: exec: exit: not found . What can I do? :) | If you have a script that runs some programand looks at the program's exit status (with $? ),and you want to test that script by doing something that causes $? to be set to some known value (e.g., 3 ), just do (exit 3) The parentheses create a sub-shell. Then the exit command causes that sub-shellto exit with the specified exit status. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/486902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
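Seen at a prompt:
$ (exit 3); echo "exit code: $?"
exit code: 3
The parent shell keeps running, because only the subshell exited.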
486,908 | I want to generate a sorted list with all 8-digit numbers — from 00000000 to 99999999. I typed in the shell: f() { while IFS="" read -r line; do for i in {0..9}; do echo "$line$i"; done; done}echo | f | f | f | f | f | f | f | f | tee result.txt | wc -l response is bash: echo: write error: Interrupted system callbash: echo: write error: Interrupted system callbash: echo: write error: Interrupted system call99998890 Why have I got these three errors and malformed result.txt ? I use GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) Debian GNU/Linux 9.6 (stretch) Linux kernel: 4.19.0 #2 SMP Thu Nov 1 15:31:34 EET 2018 x86_64 GNU/Linux | The specific write error: Interrupted system call error is generated when the console window size is changed while the script is being executed. Doing a: trap '' SIGWINCH will avoid it. Note that seq -w 0 99999999 >result.txt; wc -l <result.txt will be both faster and will avoid the SIGWINCH issue ( -w zero-pads to the width of the largest number, so the output runs from 00000000 to 99999999 as intended). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62358/"
]
} |
486,916 | I am trying to mount an overlayfs inside an archivemount (as a follow-up to Layered or Virtual filesystem on Linux ). I am doing this: mkdir -p {upper,work,mount} ; tar zcf somefile upper/ work/ mount/ ; mkdir tmp ; archivemount -o allow_root somefile tmp ; sudo mount -t overlay -o lowerdir=/,upperdir=tmp/upper,workdir=tmp/work overlayfs tmp/mount Note that I allow root to access the mounted archive (had to update /etc/fuse.conf for that). It fails with: mount: tmp/mount: wrong fs type, bad option, bad superblock on overlayfs, missing codepage or helper program, or other error. It works with the original folders. I checked, and archivemount is mounting in read/write by default. I can also write a file in every folder. I also checked the access rights and they seem to be correct. Root has access to the mount and can write to it. What am I doing wrong? | The problem is where upperdir and workdir live, not your permissions. overlayfs requires the filesystem holding the upper and work directories to be a normal, local, writable filesystem that supports features such as trusted.* extended attributes and whiteouts; a FUSE filesystem like the one archivemount provides does not qualify, so the kernel rejects it as an upperdir/workdir, and mount only shows the generic "wrong fs type" message. Run dmesg right after the failed mount — overlayfs normally logs the precise reason there. Using the archive as a read-only lower layer should generally be fine, since lower layers are never written to (though whether a given FUSE filesystem is accepted as lowerdir can still depend on the kernel version); just put upper and work on a real filesystem such as ext4 or tmpfs. A working layout is sketched after this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226943/"
]
} |
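A sketch of a layout that should work (paths are illustrative, and acceptance of a FUSE lowerdir can depend on the kernel): keep the archive read-only as the lower layer and put upper/work on a local filesystem such as tmpfs:
mkdir -p /tmp/ovl/{upper,work,mount} lower
archivemount -o allow_root somefile lower
sudo mount -t overlay overlay -o lowerdir=lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work /tmp/ovl/mount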
486,926 | I would like to subscribe to the following Oracle yum repository so I can install virtual box guest additions packages. https://yum.oracle.com/repo/OracleLinux/OL7/developer/x86_64/index.html Where can I find the .repo file URL in order for me to add this to my Oracle Linux subscription list? Edit: I am already subscribed to the Oracle public yum repo [root@localhost yum.repos.d]# yum repolistLoaded plugins: langpacks, ulninforepo id repo name statusol6_UEK_latest/x86_64 Latest Unbreakable Enterprise Kernel for Oracle Lin 820ol6_latest/x86_64 Oracle Linux 7Server Latest (x86_64) 11,323ol7_UEKR5/x86_64 Latest Unbreakable Enterprise Kernel Release 5 for 108ol7_latest/x86_64 Oracle Linux 7Server Latest (x86_64) 11,688repolist: 23,939[root@localhost yum.repos.d]# yum search vboxLoaded plugins: langpacks, ulninfo============================== N/S matched: vbox ===============================isdn4k-utils-vboxgetty.x86_64 : ISDN voice box (getty)Name and summary matches only, use "search all" for everything. | There is no separate .repo URL you have to hunt down: a yum repository is fully described by its base URL, and the page you linked already gives it away — everything up to (but excluding) index.html is the baseurl. You can therefore subscribe by dropping your own stanza into /etc/yum.repos.d/ and enabling it; a sketch of such a file is shown after this record. It is also worth checking the .repo files Oracle's release packages already installed under /etc/yum.repos.d/ — the public-yum configuration for Oracle Linux 7 usually contains a developer section that is merely disabled, in which case setting enabled=1 there is all that is needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167905/"
]
} |
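A sketch of such a stanza, saved for example as /etc/yum.repos.d/ol7-developer.repo — the repo id and the gpgkey path are assumptions, so adjust them to your system (the baseurl is simply the question's URL without index.html):
[ol7_developer]
name=Oracle Linux 7 Developer ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/developer/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
After that, yum repolist should list the new entry, and yum search virtualbox should find the guest-additions packages if they are shipped in that repository.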
486,931 | I need to find all files that cointains in the first line the strings: "StockID" and "SellPrice". Here is are some exemples of files : 1.csv : StockID Dept Cat2 Cat4 Cat5 Cat6 Cat1 Cat3 Title Notes Active Weight Sizestr Colorstr Quantity Newprice StockCode DateAdded SellPrice PhotoQuant PhotoStatus Description stockcontrl Agerestricted<blank> 1 0 0 0 0 22 0 RAF Air Crew Oxygen Connector 50801 1 150 <blank> <blank> 0 0 50866 2018-09-11 05:54:03 65 5 1 <br />\r\nA wartime RAF aircrew oxygen hose connector.<br />\r\n<br />\r\nAir Ministry stamped with Ref. No. 6D/482, Mk IVA.<br />\r\n<br />\r\nBrass spring loaded top bayonet fitting for the 'walk around' oxygen bottle extension hose (see last photo).<br />\r\n<br />\r\nIn a good condition. 2 0<blank> 1 0 0 0 0 15 0 WW2 US Airforce Type Handheld Microphone 50619 1 300 <blank> <blank> 1 0 50691 2017-12-06 09:02:11 20 9 1 <br />\r\nWW2 US Airforce Handheld Microphone type NAF 213264-6 and sprung mounting Bracket No. 213264-2.<br />\r\n<br />\r\nType RS 38-A.<br />\r\n<br />\r\nMade by Telephonics Corp.<br />\r\n<br />\r\nIn a un-issued condition. 3 0<blank> 1 0 0 0 0 22 0 RAF Seat Type Parachute Harness <blank> 1 4500 <blank> <blank> 1 0 50367 2016-11-04 12:02:26 155 8 1 <br />\r\nPost War RAF Pilot Seat Type Parachute Harness.<br />\r\n<br />\r\nThis Irvin manufactured harness is 'new old' stock and is unissued.<br />\r\n<br />\r\nThe label states Irvin Harness type C, Mk10, date 1976.<br />\r\nIt has Irvin marked buckles and complete harness straps all in 'mint' condition.<br />\r\n<br />\r\nFully working Irvin Quick Release Box and a canopy release Irvin 'D-Ring' Handle.<br />\r\n<br />\r\nThis harness is the same style type as the WW2 pattern seat type, and with some work could be made to look like one.<br />\r\n<br />\r\nIdeal for the re-enactor or collector (Not sold for parachuting).<br />\r\n<br />\r\nTotal weight of 4500 gms. 3 0 2.csv : id user_id organization_id hash name email date first_name hear_about1 2 15 <blank> Fairley [email protected] 1129889679 John 0 I only want to find the files that contains on 1st line : "StockID" and "SellPrice" ; So in this exemple, i want to output only ./1.csv I manage to do this, but i`m stuck now ;( where=$(find "./backup -type f)for x in $where; do head -1 $x | grep -w "StockID"done | find + awk solution: find ./backup -type f -exec \awk 'NR == 1{ if (/StockID.*SellPrice/) print FILENAME; exit }' {} \; In case if the order of crucial words may be different - replace pattern /StockID.*SellPrice/ with /StockID/ && /SellPrice/ . In case of huge number of files a more efficient alternative would be (processing a bunch of files at once; the total number of invocations of the command will be much less than the number of matched files): find ./backup -type f -exec \awk 'FNR == 1 && /StockID.*SellPrice/{ print FILENAME }{ nextfile }' {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/486931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325014/"
]
} |
486,976 | I am running Debian. Many times I get cramped (or something) for being on the computer for too long. Is there a tool which will tell me after 30-40 minutes to take a break? I remember seeing something, but I have forgotten what it is called. | I use Workrave for this; it’s available in Debian as the workrave package. I also noticed Safe Eyes , available as the safeeyes package, but haven’t tried it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/486976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
487,005 | I personally usually use Debian based systems but I ask the following question on the entire *nix family of operating systems. Is the www-data user usually using as the Apache/Nginx already comes with some *nix systems or is it usually created by a given software, i.e Apache/Nginx? If it already comes with some systems (Debian in my case) it will prevent me for creating it for software different than Apache/Nginx before I install Apache/Nginx that might create it themselves, thus saving myself from some possible conflict. BTW, I was thinking using it for Ansible with become: yes . | Since you’re using Ansible, you should specify that you want a www-data user to be present, using the user module with state=present and whatever other attributes are appropriate ( e.g. system=yes ). That will create the user if necessary, and won’t if one is already present. That’s a general principle of configuration management — describe the situation you want the system to be in, not the steps to get there. On Debian, and presumably most derivatives, the www-data user is always present , it’s not created by a specific package for its own purposes (it’s “created” by base-passwd , along with all the other entries in the default /etc/passwd ). I don’t know off-hand about other systems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
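As a concrete sketch, the same idea as an Ansible ad-hoc command (the host pattern is a placeholder):
ansible webservers -b -m user -a 'name=www-data system=yes state=present'
In a playbook the equivalent user task is usually preferable, since it keeps the desired state declared alongside the rest of the role.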
487,028 | Is it possible to have SHA sums print without the - appended to the end? $ echo test | sha1sum 4e1243bd22c66e76c2ba9eddc1f91394e57f9f83 - <--- this "-" dash/hyphen I know we can use awk and other command line tools, but can it be done without using another tool ? $ echo test | sha1sum | awk '{print $1}'4e1243bd22c66e76c2ba9eddc1f91394e57f9f83 | This is not possible without another tool or without editing the actual sha1sum script/binary. When sha1sum is fed a file, it prints the filename after the sum; when it is not fed a file (e.g. when reading from a pipe), it prints the - as a placeholder to indicate that the input was not a file. sha1sum itself has no option to suppress it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325067/"
]
} |
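That said, if "another tool" only means another external program, the shell's own parameter expansion can strip the marker without spawning anything:
$ s=$(echo test | sha1sum); printf '%s\n' "${s%% *}"
4e1243bd22c66e76c2ba9eddc1f91394e57f9f83
${s%% *} deletes everything from the first space onwards, leaving just the digest.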
487,030 | I desire to totally upgrade everything in Debian:Stable including the release version, to the newest stable release available: Packages update Packages upgrade D:S minor_version D:S major_version D:S release_version Each action will be done respective to others in that entire recursive (monthly/yearly) single process, while I assume that release_version will surely be the last. In other words, I'd like to create a "fully rolling release stable Debian". I do it when having at least weekly/daily automatic backups (per month) of all the data so if something was broken I restore a backup. What will be the command to "brutally" upgrade everything whatsoever including doing a release upgrade? I was thinking about: apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade -y | The Debian operating system is not bleeding edge. It enjoys great stability when installed on supported hardware. However, as a result, the software that Debian uses and that is in its repos is slightly older than that in, say, Ubuntu. Even though Ubuntu is Debian based, it is constantly being updated and things are getting tweaked day to day. If you successfully complete the commands you listed, everything should be up to date and considered the newest stable version. If you are, however, looking to go from Debian 8 to 9, the process is more involved. After doing the above commands: If everything went smoothly, perform database sanity and consistency checks for partially installed, missing and obsolete packages: dpkg -C If no issues are reported, check what packages are held back: apt-mark showhold Packages on hold will not be upgraded, which may cause inconsistencies after the Stretch upgrade. Before you move to the next part, it is recommended to fix all issues produced by both above commands. Make a backup of your sources.list: cp /etc/apt/sources.list /etc/apt/sources.list_backup Change to stretch: sed -i 's/jessie/stretch/g' /etc/apt/sources.list Update: apt-get update List upgradeable packages: apt list --upgradable Note that if you see anything that alarms you at this point you can undo everything in reverse. After the following commands there is no undoing: apt-get upgrade apt-get dist-upgrade More information can be found: HERE | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
487,037 | I have a configuration line in my .inputrc : set enable-bracketed-paste on # Insert paste as a string rather than possibly running it This is valid when typed at the command line: bind 'set enable-bracketed-paste on' However the variable is not being set when I start bash v4.4.23 . Why is this line being ignored? | The trailing comment is the problem. Readline's inputrc parser does not support comments at the end of a set line: a # only introduces a comment when it starts the line, so here everything after the variable name — on # Insert paste as a string rather than possibly running it — is taken as the value. Since that string is not recognised as a true value, the variable ends up off. Your bind 'set enable-bracketed-paste on' works at the prompt precisely because there is no trailing text there. Move the comment onto its own line above the setting (see the corrected lines after this record) and it will take effect; you can verify with bind -v | grep bracketed . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
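The corrected ~/.inputrc lines, with the comment moved onto its own line:
# Insert paste as a string rather than possibly running it
set enable-bracketed-paste on
In a new shell, bind -v | grep bracketed should then report the variable as on.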
487,113 | What inspired this question is that I am testing the functionality of watchdog device and I was thinking if there is a shell inbuilt command to just open the device and do nothing/wait until terminated? Echo/touch seem to just open and close the device immediately after performing the operation. Cat does not seem to work. I am using a C application to do the same but was wondering if shell script has some provision for it | In Bourne-like shells, exec 3< "$device" opens the device on file descriptor 3 of the shell. That would be more or less equivalent to C's: fd = open(device, O_RDONLY);if (fd < 0) handle_error(...);if (fd != 3) { dup2(fd, 3); close(fd); } ( ksh93 also does a fcntl(3, F_SETFD, FD_CLOEXEC) on that fd). To close it: exec 3<&- In zsh , ksh93 and bash , the equivalent of fd = open(device, O_RDONLY) could also be written as: exec {fd}< "$device" Where the file descriptor would be the first free one above 9 and stored in $fd . To close it: exec {fd}<&- Replace < with > for O_WRONLY|O_CREAT|O_TRUNC , with <> for O_RDWR|O_CREAT and with >> for O_WRONLY|O_CREAT|O_APPEND . zsh also has a sysopen builtin (in the zsh/system module) where you can specify the flags exactly. Note that in POSIX compliant shells, exec being a special builtin , if the file can't be opened, it's a fatal error when non-interactive (it exits the script). You can disable that by using the command command. if command exec 3< "$device"; then do-what-you-need-to-do else handle-the-error-yourself fi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/487113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293013/"
]
} |
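Putting this together for the watchdog use case in the question — open the device and hold it until the script is terminated. A minimal sketch (the device path and the GNU-specific sleep infinity are assumptions; note that merely opening a real watchdog device typically arms it):
#!/bin/sh
dev=/dev/watchdog
exec 3<> "$dev"    # fd 3 stays open for the life of the script
sleep infinity     # wait until killed; the fd is closed when the shell exits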
487,164 | How can I write Bash Code for Hiding Password Or convert into '*'user input will be in string so no spaces, and change or hide password String. | Use read -s to not echo the input, i.e. show nothing when the user types the password: read -p 'Password? ' -s passwordecho Your password is "$password". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324884/"
]
} |
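For the '*' feedback the question also asks about, a minimal bash sketch (purely illustrative — no backspace or editing handling):
password=''
while IFS= read -r -s -n1 ch; do
  [ -z "$ch" ] && break    # Enter yields an empty character
  password+=$ch
  printf '*'
done
printf '\n'
echo "Your password is $password"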
487,238 | poweroff complains that it can't connect to systemd via DBus (of course, it's not alive). I did sync followed by kill $$ , thinking that pid 1 dying would cue the kernel to poweroff, but that caused a kernel panic. I then held the power button to force the poweroff. What's the most proper way to power-off in this scenario? | Unmount the filesystems that you had mounted. The root filesystem is a special case; for this you can use mount / -o remount,ro . On Linux, umount / also happens to work, because it is effectively converted to the former command. That said, you don't need to worry about unmounting too much, unless You have mounted an old filesystem like FAT - as used by the EFI system partition - or ext2, which does not implement journalling or equivalent. With a modern filesystem, sync is supposed to be enough, and the filesystem will repair itself very quickly on the next boot. You might have left a running process that writes to the filesystem, and you had intended to shut it down cleanly. In that case it's useful to attempt to umount the filesystems, because umount would fail and show a busy error to remind you about the remaining writer. The above is the important part. After that, you can also conveniently power off the hardware using poweroff -f . Or reboot with reboot -f . There is a systemd -specific equivalent of poweroff -f : systemctl poweroff -f -f . However poweroff -f does the same thing, and systemd supports this command even if it has been built without SysV compatibility. Technically, I remember my USB hard drive was documented as requiring Windows "safe remove" or equivalent. But this requirement is not powerfail safe, and Linux does not do this during a normal shutdown anyway. It's better interpreted as meaning that you shouldn't jog the hard drive while it is spinning - including by trying to unplug it. A full power off should stop the drive spinning. You can probably hear, feel, or see if it does not stop :-). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189744/"
]
} |
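Condensed into the order of operations for the minimal case described above (any extra read-write filesystems should be unmounted first):
sync
mount -o remount,ro /
poweroff -f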
487,245 | This may seem a bit of a strange question, but it occurred to me that when typing a command such as the following, I always have to copy and paste the character from Wikipedia . echo '5 μs' >> /tmp/Output Is there a way to input such a character directly using an escape sequence or keyboard shortcut on a standard English keyboard? For example, in Vim , one can do C-k,m* to produce this character. | Yes, there are at least these four (five?) ways: AltGr Make your keyboard layout English (international AltGr dead keys) . Then the right Alt key is the AltGr key. Pressing AltGr - m will generate a µ. Many other keys also generate other characters: AltGr-s for ß , and Shift-AltGr-s for § . There is a separate keyboard layout for the console (the true consoles at Alt Ctrl F1 up to F6 ) and one for the GUI in X. Changing either depends on your distro (CentOS, Fedora, Debian, etc.) and display manager (gnome, kde, xfce, etc.). For example, installing xfce4-xkb-plugin will allow a button on a panel to configure the keyboard and switch between several keyboard layouts for X in XFCE. Compose Make some key the Compose key. Then, pressing Compose , releasing it, pressing / , releasing it, and then u will generate a µ. Defining a Compose key is usually done with xkb or with a keyboard layout applet. For example, in Gnome it is usually available in the Region & Language section, or maybe, Switch Keyboard Layout Easily . Unicode There is a generic way to type any Unicode character (if your console supports it). Yes, any codepoint of the 1,111,998 possible characters (visible if your font(s) can draw them). Press, as one chord (at the same time) Shift Ctrl u , release them (probably, an underlined u̲ will appear), then type b5 which is the Unicode codepoint (always in hex) for the character. And to end, type space or enter (at least). Readline In a bash prompt (as you tagged the question) it is possible to use readline to generate a µ ( mu ). bind '"\eu": "µ"' Or add the line: "\eu": "µ" to ~/.inputrc , read it with Alt - x Alt - r or start a new bash shell (execute bash ) and when you type: Alt - u A µ will appear. Input method Probably too much for a short answer like this. A mistake: technically, the character requested in the question was Unicode \U3bc while this answer has provided solutions for \Ub5 . Yes, they are different, my mistake, sorry. $ unicode $(printf '\U3bc\Ub5') U+03BC GREEK SMALL LETTER MU UTF-8: ce bc UTF-16BE: 03bc Decimal: μ Octal: \01674 μ (Μ) Uppercase: 039C Category: Ll (Letter, Lowercase) Unicode block: 0370..03FF; Greek and Coptic Bidi: L (Left-to-Right) U+00B5 MICRO SIGN UTF-8: c2 b5 UTF-16BE: 00b5 Decimal: µ Octal: \0265 µ (Μ) Uppercase: 039C Category: Ll (Letter, Lowercase) Unicode block: 0080..00FF; Latin-1 Supplement Bidi: L (Left-to-Right) Decomposition: <compat> 03BC And technically, the only valid solutions are numbers 3 and 4. In 3 the Unicode number could be changed from b5 to 3bc to get even this Greek character. In 4 just copy the correct character and done. Not in my defense, but both b5 and 3bc have as uppercase 39c . So, both are the lowercase of MU. Both look very, very similar (probably the same glyph from the font): Alternatives. AltGr It's quite possible and already done by changing the AltGr-g (with xkb) to: key <AC05> { [ g, G, dead_greek, dead_greek ]}; And then typing AltGr-g m to get a true Greek mu.
Compose The Compose table is incorrect; even the Greek Compose file (/usr/share/X11/locale/el_GR.UTF-8/Compose) lists: <Multi_key> <slash> <u> : "µ" mu <Multi_key> <u> <slash> : "µ" mu <Multi_key> <slash> <U> : "µ" mu <Multi_key> <U> <slash> : "µ" mu presenting those compositions as Greek, which they are not. The correct solution for compose is to include a ~/.XCompose for Greek and reboot. Unicode Works as posted, with the Unicode number 3bc . Readline Works as posted; change the bound character to whichever one is wanted. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/487245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34334/"
]
} |
487,346 | I have installed Ubuntu 18.04 on my laptop Dell Inspiron 5000 (AMD Ryzen 52500U/8 GB RAM/1 TB HDD/Windows 10/39.62 cm (15.6 Inch) FHD/Vega 8 Graphics) Inspiron 5575 The os is freezing randomly even sometimes with no application on or sometimes just Chrome on with 7-8 tabs. I checked memory footprint also had a call with Dell support centre. They confirmed there is no issue with hardware. Also for more info I have 8 GB of space with 100GB of file system partition and remaining for backup or other storage. I need to identify and resolve this. Output of free command: total used free shared buff/cache availableMem: 7863936 3474352 1285924 82252 3103660 4002564Swap: 7812092 0 7812092 lsblk output: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTloop0 7:0 0 3.7M 1 loop /snap/gnome-system-monitor/57loop1 7:1 0 14.5M 1 loop /snap/gnome-logs/45loop2 7:2 0 42.1M 1 loop /snap/gtk-common-themes/701loop3 7:3 0 140.7M 1 loop /snap/gnome-3-26-1604/74loop4 7:4 0 45M 1 loop /snap/core18/442loop5 7:5 0 34.6M 1 loop /snap/gtk-common-themes/818loop6 7:6 0 44.1M 1 loop /snap/core18/437loop7 7:7 0 2.3M 1 loop /snap/gnome-calculator/238loop8 7:8 0 144.4M 1 loop /snap/skype/63loop9 7:9 0 17.6M 1 loop /snap/chromium-ffmpeg/9loop10 7:10 0 2.3M 1 loop /snap/gnome-calculator/180loop11 7:11 0 13.9M 1 loop /snap/chromium-ffmpeg/8loop12 7:12 0 3.7M 1 loop /snap/gnome-system-monitor/51loop13 7:13 0 13M 1 loop /snap/gnome-characters/124loop14 7:14 0 13M 1 loop /snap/gnome-characters/139loop15 7:15 0 14.5M 1 loop /snap/gnome-logs/37loop16 7:16 0 259.6M 1 loop /snap/phpstorm/67loop17 7:17 0 259.9M 1 loop /snap/phpstorm/74loop18 7:18 0 13M 1 loop /snap/gnome-characters/103loop19 7:19 0 10.2M 1 loop /snap/chromium-ffmpeg/5loop20 7:20 0 147.3M 1 loop /snap/skype/66loop21 7:21 0 89.5M 1 loop /snap/core/6034loop22 7:22 0 87.9M 1 loop /snap/core/5742loop23 7:23 0 23.6M 1 loop /snap/core18/19loop24 7:24 0 88.2M 1 loop /snap/core/5897loop25 7:25 0 140.9M 1 loop /snap/gnome-3-26-1604/70loop26 7:26 0 2.3M 1 loop /snap/gnome-calculator/260loop27 7:27 0 141.8M 1 loop /snap/skype/60loop28 7:28 0 259.6M 1 loop /snap/phpstorm/69loop29 7:29 0 34.2M 1 loop /snap/gtk-common-themes/808sda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 94M 0 part /boot/efi├─sda2 8:2 0 7.5G 0 part [SWAP]├─sda3 8:3 0 83.8G 0 part /└─sda4 8:4 0 840.2G 0 part sr0 11:0 1 1024M 0 rom Output of smartctl smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-36-generic] (local build)Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF INFORMATION SECTION ===Device Model: ST1000LM035-1RK172Serial Number: ZDE7YBWJLU WWN Device Id: 5 000c50 0b000ca9bFirmware Version: SDM2User Capacity: 1,000,204,886,016 bytes [1.00 TB]Sector Sizes: 512 bytes logical, 4096 bytes physicalRotation Rate: 5400 rpmForm Factor: 2.5 inchesDevice is: Not in smartctl database [for details use: -P showall]ATA Version is: ACS-3 T13/2161-D revision 3bSATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)Local Time is: Wed Dec 12 11:07:45 2018 ISTSMART support is: Available - device has SMART capability.SMART support is: EnabledAAM feature is: UnavailableAPM level is: 254 (maximum performance)Rd look-ahead is: EnabledWrite cache is: EnabledATA Security is: Disabled, NOT FROZEN [SEC1]=== START OF READ SMART DATA SECTION ===SMART overall-health self-assessment test result: PASSEDGeneral SMART Values:Offline data collection status: (0x00) Offline data collection activity was never started. 
Auto Offline Data Collection: Disabled.Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.Total time to complete Offline data collection: ( 0) seconds.Offline data collectioncapabilities: (0x71) SMART execute Offline immediate. No Auto Offline data collection support. Suspend Offline collection upon new command. No Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.Short self-test routine recommended polling time: ( 1) minutes.Extended self-test routinerecommended polling time: ( 160) minutes.Conveyance self-test routinerecommended polling time: ( 2) minutes.SCT capabilities: (0x3035) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.SMART Attributes Data Structure revision number: 10Vendor Specific SMART Attributes with Thresholds:ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE 1 Raw_Read_Error_Rate POSR-- 067 057 006 - 159234009 3 Spin_Up_Time PO---- 099 099 000 - 0 4 Start_Stop_Count -O--CK 100 100 020 - 495 5 Reallocated_Sector_Ct PO--CK 100 100 036 - 16 7 Seek_Error_Rate POSR-- 071 060 045 - 12990802 9 Power_On_Hours -O--CK 100 100 000 - 304 (229 20 0) 10 Spin_Retry_Count PO--C- 100 100 097 - 0 12 Power_Cycle_Count -O--CK 100 100 020 - 234184 End-to-End_Error -O--CK 100 100 099 - 0187 Reported_Uncorrect -O--CK 080 080 000 - 20188 Command_Timeout -O--CK 100 100 000 - 0189 High_Fly_Writes -O-RCK 100 100 000 - 0190 Airflow_Temperature_Cel -O---K 062 051 040 - 38 (Min/Max 24/38)191 G-Sense_Error_Rate -O--CK 100 100 000 - 15192 Power-Off_Retract_Count -O--CK 100 100 000 - 32193 Load_Cycle_Count -O--CK 099 099 000 - 3769194 Temperature_Celsius -O---K 038 049 000 - 38 (0 22 0 0 0)197 Current_Pending_Sector -O--C- 100 100 000 - 0198 Offline_Uncorrectable ----C- 100 100 000 - 0199 UDMA_CRC_Error_Count -OSRCK 200 200 000 - 0240 Head_Flying_Hours ------ 100 253 000 - 289 (128 32 0)241 Total_LBAs_Written ------ 100 253 000 - 1059218227242 Total_LBAs_Read ------ 100 253 000 - 809232907254 Free_Fall_Sensor -O--CK 100 100 000 - 0 ||||||_ K auto-keep |||||__ C event count ||||___ R error rate |||____ S speed/performance ||_____ O updated online |______ P prefailure warningGeneral Purpose Log Directory Version 1SMART Log Directory Version 1 [multi-sector log support]Address Access R/W Size Description0x00 GPL,SL R/O 1 Log Directory0x01 SL R/O 1 Summary SMART error log0x02 SL R/O 5 Comprehensive SMART error log0x03 GPL R/O 5 Ext. 
Comprehensive SMART error log0x04 GPL,SL R/O 8 Device Statistics log0x06 SL R/O 1 SMART self-test log0x07 GPL R/O 1 Extended self-test log0x09 SL R/W 1 Selective self-test log0x10 GPL R/O 1 SATA NCQ Queued Error log0x11 GPL R/O 1 SATA Phy Event Counters log0x21 GPL R/O 1 Write stream error log0x22 GPL R/O 1 Read stream error log0x24 GPL R/O 512 Current Device Internal Status Data log0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log0x80-0x9f GPL,SL R/W 16 Host vendor specific log0xa1 GPL,SL VS 24 Device vendor specific log0xa2 GPL VS 8160 Device vendor specific log0xa8 GPL,SL VS 136 Device vendor specific log0xa9 GPL,SL VS 1 Device vendor specific log0xab GPL VS 1 Device vendor specific log0xb0 GPL VS 8920 Device vendor specific log0xbe-0xbf GPL VS 65535 Device vendor specific log0xc0 GPL,SL VS 1 Device vendor specific log0xc1 GPL,SL VS 16 Device vendor specific log0xc2 GPL,SL VS 240 Device vendor specific log0xc3 GPL,SL VS 8 Device vendor specific log0xc4 GPL,SL VS 24 Device vendor specific log0xc9 GPL,SL VS 1 Device vendor specific log0xca GPL,SL VS 16 Device vendor specific log0xd3 GPL VS 1920 Device vendor specific log0xe0 GPL,SL R/W 1 SCT Command/Status0xe1 GPL,SL R/W 1 SCT Data TransferSMART Extended Comprehensive Error Log Version: 1 (5 sectors)Device Error Count: 20 CR = Command Register FEATR = Features Register COUNT = Count (was: Sector Count) Register LBA_48 = Upper bytes of LBA High/Mid/Low Registers ] ATA-8 LH = LBA High (was: Cylinder High) Register ] LBA LM = LBA Mid (was: Cylinder Low) Register ] Register LL = LBA Low (was: Sector Number) Register ] DV = Device (was: Device/Head) Register DC = Device Control Register ER = Error register ST = Status registerPowered_Up_Time is measured from power on, and printed asDDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,SS=sec, and sss=millisec. It "wraps" after 49.710 days.Error 20 [19] log entry is emptyError 19 [18] log entry is emptyError 18 [17] log entry is emptyError 17 [16] log entry is emptyError 16 [15] log entry is emptyError 15 [14] log entry is emptyError 14 [13] log entry is emptyError 13 [12] log entry is emptyError 12 [11] log entry is emptyError 11 [10] log entry is emptyError 10 [9] log entry is emptyError 9 [8] log entry is emptyError 8 [7] log entry is emptyError 7 [6] log entry is emptyError 6 [5] log entry is emptyError 5 [4] log entry is emptyError 4 [3] occurred at disk power-on lifetime: 205 hours (8 days + 13 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 05 f9 81 40 00 00 Error: UNC at LBA = 0x05f98140 = 100237632 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 60 00 00 00 20 00 00 05 f9 81 40 40 00 00:00:24.250 READ FPDMA QUEUED 60 00 00 00 08 00 00 05 f9 81 38 40 00 00:00:24.250 READ FPDMA QUEUED 60 00 00 00 20 00 00 05 f9 81 10 40 00 00:00:24.250 READ FPDMA QUEUED 60 00 00 00 08 00 00 05 f9 81 08 40 00 00:00:24.238 READ FPDMA QUEUED 60 00 00 01 00 00 00 05 f9 7f a8 40 00 00:00:24.200 READ FPDMA QUEUEDError 3 [2] occurred at disk power-on lifetime: 205 hours (8 days + 13 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 05 f9 81 40 00 00 Error: UNC at LBA = 0x05f98140 = 100237632 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 60 00 00 00 08 00 00 05 f9 81 40 40 00 00:00:35.990 READ FPDMA QUEUED ef 00 10 00 02 00 00 00 00 00 00 a0 00 00:00:35.980 SET FEATURES [Enable SATA feature] 27 00 00 00 00 00 00 00 00 00 00 e0 00 00:00:35.953 READ NATIVE MAX ADDRESS EXT [OBS-ACS-3] ec 00 00 00 00 00 00 00 00 00 00 a0 00 00:00:35.951 IDENTIFY DEVICE ef 00 03 00 46 00 00 00 00 00 00 a0 00 00:00:35.939 SET FEATURES [Set transfer mode]Error 2 [1] occurred at disk power-on lifetime: 205 hours (8 days + 13 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 05 f9 81 40 00 00 Error: UNC at LBA = 0x05f98140 = 100237632 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 60 00 00 00 08 00 00 05 f9 81 40 40 00 00:00:33.065 READ FPDMA QUEUED ef 00 10 00 02 00 00 00 00 00 00 a0 00 00:00:33.056 SET FEATURES [Enable SATA feature] 27 00 00 00 00 00 00 00 00 00 00 e0 00 00:00:33.029 READ NATIVE MAX ADDRESS EXT [OBS-ACS-3] ec 00 00 00 00 00 00 00 00 00 00 a0 00 00:00:33.027 IDENTIFY DEVICE ef 00 03 00 46 00 00 00 00 00 00 a0 00 00:00:33.014 SET FEATURES [Set transfer mode]Error 1 [0] occurred at disk power-on lifetime: 205 hours (8 days + 13 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 05 f9 81 40 00 00 Error: UNC at LBA = 0x05f98140 = 100237632 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 60 00 00 00 20 00 00 05 f9 81 40 40 00 00:00:28.429 READ FPDMA QUEUED 60 00 00 00 08 00 00 05 f9 81 38 40 00 00:00:28.428 READ FPDMA QUEUED 60 00 00 00 20 00 00 05 f9 81 10 40 00 00:00:28.428 READ FPDMA QUEUED 60 00 00 00 08 00 00 05 f9 81 08 40 00 00:00:28.416 READ FPDMA QUEUED 60 00 00 01 00 00 00 05 f9 7f a8 40 00 00:00:28.379 READ FPDMA QUEUEDSMART Extended Self-test Log Version: 1 (1 sectors)Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error# 1 Short offline Completed without error 00% 206 -# 2 Short offline Completed without error 00% 0 -SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testingSelective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk.If Selective self-test is pending on power-up, resume after 0 minute delay.SCT Status Version: 3SCT Version (vendor specific): 522 (0x020a)SCT Support Level: 1Device State: Active (0)Current Temperature: 38 CelsiusPower Cycle Min/Max Temperature: 24/38 CelsiusLifetime Min/Max Temperature: 22/50 CelsiusLifetime Average Temperature: 38 CelsiusUnder/Over Temperature Limit Count: 0/0SCT Temperature History Version: 2Temperature Sampling Period: 1 minuteTemperature Logging Interval: 30 minutesMin/Max recommended Temperature: 14/55 CelsiusMin/Max Temperature Limit: 10/60 CelsiusTemperature History Size (Index): 128 (78)Index Estimated Time Temperature Celsius 79 2018-12-09 19:30 42 *********************** 80 2018-12-09 20:00 ? - 81 2018-12-09 20:30 26 ******* 82 2018-12-09 21:00 39 ******************** 83 2018-12-09 21:30 41 ********************** 84 2018-12-09 22:00 ? - 85 2018-12-09 22:30 33 ************** 86 2018-12-09 23:00 39 ******************** 87 2018-12-09 23:30 ? - 88 2018-12-10 00:00 24 ***** 89 2018-12-10 00:30 35 **************** 90 2018-12-10 01:00 ? - 91 2018-12-10 01:30 37 ****************** 92 2018-12-10 02:00 39 ******************** 93 2018-12-10 02:30 40 ********************* 94 2018-12-10 03:00 40 ********************* 95 2018-12-10 03:30 40 ********************* 96 2018-12-10 04:00 ? - 97 2018-12-10 04:30 27 ******** 98 2018-12-10 05:00 ? - 99 2018-12-10 05:30 30 *********** 100 2018-12-10 06:00 38 ******************* 101 2018-12-10 06:30 39 ******************** 102 2018-12-10 07:00 40 ********************* 103 2018-12-10 07:30 40 ********************* 104 2018-12-10 08:00 39 ******************** 105 2018-12-10 08:30 38 ******************* 106 2018-12-10 09:00 39 ******************** 107 2018-12-10 09:30 ? - 108 2018-12-10 10:00 27 ******** 109 2018-12-10 10:30 ? - 110 2018-12-10 11:00 25 ****** 111 2018-12-10 11:30 ? - 112 2018-12-10 12:00 28 ********* 113 2018-12-10 12:30 36 ***************** 114 2018-12-10 13:00 38 ******************* 115 2018-12-10 13:30 ? - 116 2018-12-10 14:00 27 ******** 117 2018-12-10 14:30 37 ****************** 118 2018-12-10 15:00 ? 
- 119 2018-12-10 15:30 30 *********** 120 2018-12-10 16:00 38 ******************* 121 2018-12-10 16:30 39 ******************** 122 2018-12-10 17:00 38 ******************* 123 2018-12-10 17:30 38 ******************* 124 2018-12-10 18:00 38 ******************* 125 2018-12-10 18:30 39 ******************** 126 2018-12-10 19:00 39 ******************** 127 2018-12-10 19:30 ? - 0 2018-12-10 20:00 39 ******************** 1 2018-12-10 20:30 ? - 2 2018-12-10 21:00 28 ********* 3 2018-12-10 21:30 39 ******************** 4 2018-12-10 22:00 40 ********************* 5 2018-12-10 22:30 ? - 6 2018-12-10 23:00 24 ***** 7 2018-12-10 23:30 37 ****************** 8 2018-12-11 00:00 37 ****************** 9 2018-12-11 00:30 37 ****************** 10 2018-12-11 01:00 ? - 11 2018-12-11 01:30 28 ********* 12 2018-12-11 02:00 ? - 13 2018-12-11 02:30 32 ************* 14 2018-12-11 03:00 ? - 15 2018-12-11 03:30 23 **** 16 2018-12-11 04:00 ? - 17 2018-12-11 04:30 25 ****** 18 2018-12-11 05:00 36 ***************** 19 2018-12-11 05:30 ? - 20 2018-12-11 06:00 23 **** 21 2018-12-11 06:30 ? - 22 2018-12-11 07:00 27 ******** 23 2018-12-11 07:30 37 ****************** 24 2018-12-11 08:00 37 ****************** 25 2018-12-11 08:30 ? - 26 2018-12-11 09:00 25 ****** 27 2018-12-11 09:30 36 ***************** 28 2018-12-11 10:00 ? - 29 2018-12-11 10:30 29 ********** 30 2018-12-11 11:00 36 ***************** 31 2018-12-11 11:30 37 ****************** 32 2018-12-11 12:00 39 ******************** 33 2018-12-11 12:30 37 ****************** 34 2018-12-11 13:00 ? - 35 2018-12-11 13:30 29 ********** 36 2018-12-11 14:00 38 ******************* 37 2018-12-11 14:30 40 ********************* 38 2018-12-11 15:00 39 ******************** 39 2018-12-11 15:30 ? - 40 2018-12-11 16:00 39 ******************** 41 2018-12-11 16:30 40 ********************* 42 2018-12-11 17:00 40 ********************* 43 2018-12-11 17:30 ? - 44 2018-12-11 18:00 39 ******************** 45 2018-12-11 18:30 ? - 46 2018-12-11 19:00 30 *********** 47 2018-12-11 19:30 ? - 48 2018-12-11 20:00 22 *** 49 2018-12-11 20:30 36 ***************** 50 2018-12-11 21:00 ? - 51 2018-12-11 21:30 25 ****** 52 2018-12-11 22:00 ? - 53 2018-12-11 22:30 29 ********** 54 2018-12-11 23:00 ? - 55 2018-12-11 23:30 38 ******************* 56 2018-12-12 00:00 40 ********************* 57 2018-12-12 00:30 40 ********************* 58 2018-12-12 01:00 40 ********************* 59 2018-12-12 01:30 39 ******************** 60 2018-12-12 02:00 39 ******************** 61 2018-12-12 02:30 ? - 62 2018-12-12 03:00 26 ******* 63 2018-12-12 03:30 38 ******************* 64 2018-12-12 04:00 38 ******************* 65 2018-12-12 04:30 ? - 66 2018-12-12 05:00 39 ******************** ... ..( 3 skipped). .. ******************** 70 2018-12-12 07:00 39 ******************** 71 2018-12-12 07:30 43 ************************ 72 2018-12-12 08:00 45 ************************** 73 2018-12-12 08:30 46 *************************** 74 2018-12-12 09:00 47 **************************** 75 2018-12-12 09:30 48 ***************************** 76 2018-12-12 10:00 49 ****************************** 77 2018-12-12 10:30 ? 
- 78 2018-12-12 11:00 24 *****SCT Error Recovery Control command not supportedDevice Statistics (GP Log 0x04)Page Offset Size Value Flags Description0x01 ===== = = === == General Statistics (rev 1) ==0x01 0x008 4 234 --- Lifetime Power-On Resets0x01 0x010 4 304 --- Power-on Hours0x01 0x018 6 1059632915 --- Logical Sectors Written0x01 0x020 6 10726128 --- Number of Write Commands0x01 0x028 6 809348555 --- Logical Sectors Read0x01 0x030 6 11805715 --- Number of Read Commands0x01 0x038 6 - --- Date and Time TimeStamp0x03 ===== = = === == Rotating Media Statistics (rev 1) ==0x03 0x008 4 301 --- Spindle Motor Power-on Hours0x03 0x010 4 52 --- Head Flying Hours0x03 0x018 4 3769 --- Head Load Events0x03 0x020 4 16 --- Number of Reallocated Logical Sectors0x03 0x028 4 34 --- Read Recovery Attempts0x03 0x030 4 0 --- Number of Mechanical Start Failures0x03 0x038 4 0 --- Number of Realloc. Candidate Logical Sectors0x04 ===== = = === == General Errors Statistics (rev 1) ==0x04 0x008 4 435 --- Number of Reported Uncorrectable Errors0x04 0x010 4 0 --- Resets Between Cmd Acceptance and Completion |||_ C monitored condition met ||__ D supports DSN |___ N normalized valueSATA Phy Event Counters (GP Log 0x11)ID Size Value Description0x000a 2 2 Device-to-host register FISes sent due to a COMRESET0x0001 2 0 Command failed due to ICRC error0x0003 2 0 R_ERR response for device-to-host data FIS0x0004 2 0 R_ERR response for host-to-device data FIS0x0006 2 0 R_ERR response for device-to-host non-data FIS0x0007 2 0 R_ERR response for host-to-device non-data FIS Some Screenshots: Output of Ubuntu logs: Important Tab: 11:36:46 AM kernel: unrecognized option 'nic-lo'11:21:01 AM sendmail-msp: unable to qualify my own domain name (MY-DEVICE-NAME) -- using short name11:16:46 AM kernel: unrecognized option 'nic-lo'11:01:01 AM sendmail-msp: unable to qualify my own domain name (MY-DEVICE-NAME) -- using short name10:56:46 AM pppd: unrecognized option 'nic-lo'10:52:57 AM sendmail-msp: unable to qualify my own domain name (MY-DEVICE-NAME) -- using short name10:52:19 AM bluetoothd: Failed to set mode: Blocked through rfkill (0x12)10:52:19 AM spice-vdagent: Cannot access vdagent virtio channel /dev/virtio-ports/com.redhat.spice.010:52:17 AM pulseaudio: [pulseaudio] backend-ofono.c: Failed to register as a handsfree audio agent with ofono: org.freedesktop.DBus.Error.ServiceUnknown: The name org.ofono was not provided by any .service files10:52:05 AM bluetoothd: Failed to set mode: Blocked through rfkill (0x12)10:52:05 AM spice-vdagent: Cannot access vdagent virtio channel /dev/virtio-ports/com.redhat.spice.010:51:57 AM sendmail-msp: My unqualified host name (MY-DEVICE-NAME) unknown; sleeping for retry10:51:45 AM pppd: unrecognized option 'nic-lo'10:51:44 AM wpa_supplicant: dbus: Failed to construct signal10:51:39 AM systemd: Failed to start Process error reports when automatic reporting is enabled.10:51:36 AM bluetoothd: Failed to set mode: Blocked through rfkill (0x12)10:51:31 AM kernel: [drm:generic_reg_wait [amdgpu]] *ERROR* REG_WAIT timeout 1us * 100 tries - tgn10_lock line:56610:51:28 AM kernel: pcieport 0000:00:01.7: [12] Replay Timer Timeout 10:51:22 AM kernel: Couldn't get size: 0x800000000000000e10:51:22 AM kernel: tpm_crb MSFT0101:00: can't request region for resource [mem 0xbf774000-0xbf777fff]10:51:22 AM kernel: AMD-Vi: Disabling interrupt remapping10:51:22 AM kernel: [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found10:51:22 AM kernel: ACPI Error: 1 table load failures, 9 successful (20170831/tbxfload-246) 
UPDATE (03 June 2020): I upgraded my Ubuntu from 18.04 to 20.04 LTS a few months back (hoping to resolve the issue) and, gladly, it worked great. I have experienced no freezing since the upgrade. I am writing this after about 2-3 months of usage; during these 2-3 months I have hardly shut down my laptop more than 3-4 times (as far as I remember). I would suggest that everyone here who is experiencing the freezing problem with 18.04 upgrade to 20.04 LTS, as frequent freezing and forced reboots are bad for the HDD. This is not the actual solution to the problem, so I will keep this thread open for others. | For me, it turned out that Google Chrome was hanging the entire system. I was plagued with this issue for months until I disabled Chrome's GPU hardware acceleration. I applied this fix around February/March of this year, and haven't had a system freeze since. Credit to this post: Chrome freeze very frequently with ubuntu 16.04 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325338/"
]
} |
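To try the fix from the answer above in a terminal before changing the browser setting, Chrome accepts a --disable-gpu flag; this sketch assumes the binary is installed as google-chrome:
google-chrome --disable-gpu
The permanent equivalent is the "Use hardware acceleration when available" toggle under Chrome's Settings, Advanced, System.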
487,374 | I am using the Mac OS terminal (similar to Linux) and trying to find the best way to search inside all files on a computer that have the extension *.py What is the best way to achieve this? I want to supply one keyword and quickly show the full paths of the Python files that contain the requested keyword. | Try this, find / -type f -name '*.py' -exec grep -l "keyword" {} \; Explanation: find / -type f -name '*.py' : Find files below / with the py extension. -exec grep -l keyword {} \; Within the files found, grep for keyword and output the filename instead of the match ( -l ). I'm not familiar with Mac OS, but if you have the globstar option in your shell, you can use the following:
shopt -s globstar
grep -l keyword /**/*.py | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487374",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271301/"
]
} |
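If your grep supports --include (GNU grep does, and so does the BSD grep shipped with macOS), you can skip find entirely; this is a sketch, not part of the answer above:
grep -rl --include='*.py' 'keyword' /
The find variant can also be sped up by batching files with + instead of spawning one grep per file:
find / -type f -name '*.py' -exec grep -l 'keyword' {} +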
487,378 | Load Average itself is not very informative about the reasons for overload, so I want an additional characteristic: the percentage of disk read operations that are served from cache. To measure it, I probably need: the amount of data read from cache (or, instead, the amount of data read directly from disk); the total amount of read data. Both for a certain period of time, e.g., 1 second. Is this information available in procfs/sysfs? | Try this, find / -type f -name '*.py' -exec grep -l "keyword" {} \; Explanation: find / -type f -name '*.py' : Find files below / with the py extension. -exec grep -l keyword {} \; Within the files found, grep for keyword and output the filename instead of the match ( -l ). I'm not familiar with Mac OS, but if you have the globstar option in your shell, you can use the following:
shopt -s globstar
grep -l keyword /**/*.py | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129375/"
]
} |
487,415 | I wrote a function in bash to see manpages in vim viman () { man "$@" | vim -R +":set ft=man" - ; } This works fine; the only problem occurs if I pass a manpage to it which doesn't exist. It prints that the manpage doesn't exist but still opens vim with an empty buffer. So, I changed the function to check the error code (which is 16 here) and exit if the manpage doesn't exist. The modified function looks somewhat like this: viman () { man "$@" | [[ $? == 16 ]] && exit 1 | vim -R +":set ft=man" - ; } But now it doesn't do anything!! I just want to quit the program if the manpage doesn't exist, otherwise open the manpage with vim | Try this: capture the man output, and if successful launch vim viman () { text=$(man "$@") && echo "$text" | vim -R +":set ft=man" - ; } | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273967/"
]
} |
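A slightly more defensive sketch of the accepted function, preserving man's exit status and using printf to avoid echo's backslash quirks; the behaviour is otherwise the same as the answer above:
viman () { local text; text=$(man "$@") || return; printf '%s\n' "$text" | vim -R +':set ft=man' -; }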
487,458 | I am trying to sort some simple pipe-delimited data. However, sort isn't actually sorting. It moves my header row to the bottom, but my two rows starting with 241 are being split by a row starting with 24.
cat sort_fail.csv
column_a|column_b|column_c
241|212|20810378
24|121|2810172
241|213|20810376
sort sort_fail.csv
241|212|20810378
24|121|2810172
241|213|20810376
column_a|column_b|column_c
The column headers are being moved to the bottom of the file, so sort is clearly processing it. But the actual values aren't being sorted like I'd expect. In this case I worked around it with sort sort_fail.csv --field-separator='|' -k1,1 But I feel like that shouldn't be necessary. Why is sort not sorting? | sort is locale aware, so depending on your LC_COLLATE setting (which is inherited from LANG) you may get different results:
$ LANG=C sort sort_fail.csv
241|212|20810378
241|213|20810376
24|121|2810172
column_a|column_b|column_c
$ LANG=en_US sort sort_fail.csv
241|212|20810378
24|121|2810172
241|213|20810376
column_a|column_b|column_c
This can cause problems in scripts, because you may not be aware of what the calling locale is set to, and so may get different results. It's not uncommon for scripts to force the setting needed, e.g.
$ grep 'LC.*sort' /bin/precat
LC_COLLATE=C sort -u | prezip-bin -z "$cmd: $2"
Now what's interesting, here, is that the | character looks odd. But that's because the default rule for en_US, which derives from ISO, says
$ grep 007C /usr/share/i18n/locales/iso14651_t1_common
<U007C> IGNORE;IGNORE;IGNORE;<j> # 142 |
Which means the | character is ignored and the sort order would be as if the character doesn't exist.
$ tr -d '|' < sort_fail.csv | LANG=C sort
24121220810378
241212810172
24121320810376
column_acolumn_bcolumn_c
And that matches the "unexpected" sorting you are seeing. The workarounds are to use -n (to force numeric sorts), or to use the field separator (as you did), or to use the C locale. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/487458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325417/"
]
} |
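To make the workaround robust regardless of the caller's environment, the locale can be forced for just that one invocation; a sketch combining the locale fix with the field separator:
LC_ALL=C sort -t'|' -k1,1 sort_fail.csv
LC_ALL overrides both LANG and LC_COLLATE, so the result no longer depends on who runs the command.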
487,484 | I use a regular expression to handle the string "abc123". The command below works and returns the value "c123": echo abc123 | grep -o [a-z][0-9]*$ But the command below does not work: echo abc123 | grep -o [a-z][0-9]+$ Why do I get this result? I know that '*' matches the preceding pattern element zero or more times, and '+' matches the preceding pattern element one or more times. So this situation makes me confused. | + is only a quantifier in extended regular expressions (ERE):
$ echo abc123 | grep -Eo '[a-z][0-9]+$'
c123
In basic regular expressions (BRE) it matches a literal + , although you can use \{1,\} instead, or in GNU grep ( -o is already a GNU extension anyway), \+ :
$ echo abc123 | grep -o '[a-z][0-9]\+$'
c123
(note the quotes to prevent [ and \ from being interpreted by the shell). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325455/"
]
} |
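For completeness, the portable BRE interval form mentioned in the answer gives the same result; a quick sketch:
echo abc123 | grep -o '[a-z][0-9]\{1,\}$'
which also prints c123 (keeping in mind that -o itself is a GNU extension, as the answer notes).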
487,508 | I have a file with the lines below:
This is an PLUTO
This is PINEAPPLE
This is ORANGE
This is RICE
How do I insert a new line above each line, containing the last string of that line, so the output is as below:
PLUTO:
This is an PLUTO
PINEAPPLE:
This is an PINEAPPLE
ORANGE:
This is an ORANGE
RICE:
This is an RICE
Thanks | Using awk to print the last field of each line followed by a colon before printing the line itself:
$ awk '{ print $NF ":"; print }' file
PLUTO:
This is an PLUTO
PINEAPPLE:
This is PINEAPPLE
ORANGE:
This is ORANGE
RICE:
This is RICE
Variation that uses a single print statement but that explicitly prints the output record separator (a newline) and $0 (the line): awk '{ print $NF ":" ORS $0 }' file Variation using printf instead: awk '{ printf("%s:\n%s\n", $NF, $0) }' file Using sed :
$ sed 'h; s/.* //; s/$/:/; G' file
PLUTO:
This is an PLUTO
PINEAPPLE:
This is PINEAPPLE
ORANGE:
This is ORANGE
RICE:
This is RICE
Annotated sed script:
h;        # Copy the pattern space (the current line) into the hold space (general purpose buffer)
s/.* //;  # Remove everything up to the last space character in the pattern space
s/$/:/;   # Add colon at the end
G;        # Append the hold space (original line) with an embedded newline character
          # (implicit print) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324294/"
]
} |
487,521 | I want to loop over some values and I know I can simply use a for loop, but is there any other way that I can replace a variable value in a command at the end of the command? Somewhat like echo 11"$p" < 8 # to print 118 echo 11"$p" < 9 # to print 119 echo 11"$p" < a # to print 11a I want to be able to replace certain variables with my value, but at the end of the command. I know there are multiple ways to do this, so I'm not asking for other ways. | Using awk to print the last field of each line followed by a colon before printing the line itself:
$ awk '{ print $NF ":"; print }' file
PLUTO:
This is an PLUTO
PINEAPPLE:
This is PINEAPPLE
ORANGE:
This is ORANGE
RICE:
This is RICE
Variation that uses a single print statement but that explicitly prints the output record separator (a newline) and $0 (the line): awk '{ print $NF ":" ORS $0 }' file Variation using printf instead: awk '{ printf("%s:\n%s\n", $NF, $0) }' file Using sed :
$ sed 'h; s/.* //; s/$/:/; G' file
PLUTO:
This is an PLUTO
PINEAPPLE:
This is PINEAPPLE
ORANGE:
This is ORANGE
RICE:
This is RICE
Annotated sed script:
h;        # Copy the pattern space (the current line) into the hold space (general purpose buffer)
s/.* //;  # Remove everything up to the last space character in the pattern space
s/$/:/;   # Add colon at the end
G;        # Append the hold space (original line) with an embedded newline character
          # (implicit print) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310202/"
]
} |
487,626 | Does Lubuntu have an anti-blue-light feature? | Yes, at least you can use Redshift on it — install the redshift-gtk package. Some desktop environments such as GNOME have similar features built in; I'm not sure Lubuntu's does. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3999/"
]
} |
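If Redshift's automatic day/night transitions are unwanted, it can also be driven manually; a sketch assuming the redshift binary is installed:
redshift -O 4500   # one-shot: set colour temperature to 4500K
redshift -x        # reset to normal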
487,683 | To declare a variable in Bash, say in a Bash script-file (that doesn't include a Bash function), I do for example x=y and when I finish using it inside that script-file I do unset x . Is there a way (without using a function) to unset the variable after, say, 5 minutes, both in one line? A plausible approach might be x=y && echo "unset x" | at now + 5 minutes . In this particular case I run the script-file directly in the terminal by copy-pasting its content from GitHub to the terminal. This falls under "sourcing", I assume. Given I use GitHub, an alternative might be executing a raw version of the script-file directly from GitHub with bash in a separate shell as follows, but I don't like that way because it can't be user/repo/branch/path/file -agnostic: wget -O - https://raw.githubusercontent.com/<username>/<repo>/<branch>/<path>/<file> | bash | You can't use at jobs because they run in a different context, and can't affect the current shell. But we can do something similar. This code will trigger an alarm signal, which we can catch and perform an action on:
#!/bin/bash
x=100
trap 'unset x' SIGALRM
mypid=$$
( /bin/sleep 3 ; kill -ALRM $mypid ) &
for a in 1 2 3 4 5 6
do
  echo Now x=$x
  sleep 1
done
This example is only 3 seconds long to demonstrate the solution; you can pick your delay as you need. In action:
Now x=100
Now x=100
Now x=100
Now x=
Now x=
Now x=
You can easily make it one line with ; ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487683",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
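The one-liner the answer alludes to, written out with a real 5-minute delay; a sketch meant for an interactive bash session, where $$ is the shell that should receive the signal:
x=y; trap 'unset x' ALRM; { sleep 300; kill -ALRM $$; } &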
487,730 | I need to do IFS=",";echo {1..5} so that it can output 1,2,3,4,5 instead of 1 2 3 4 5 . How do I make bash echo {1..5} and output the values with a comma? | With Bash's builtins: This is a bit ugly since we need to separate the 5 to avoid a trailing comma:
$ printf '%s,' {1..4}; echo 5
1,2,3,4,5
Though since printf can output directly to a variable, that can be worked around and the final comma removed with a parameter expansion:
$ printf -v tmpvar "%s," {1..5}; echo "${tmpvar%,}"
1,2,3,4,5
Or with "$*" , which joins using the first character of IFS . This trashes global state, but you could rather easily avoid that by running it in a subshell or in a function with local IFS :
$ IFS=,; set -- {1..5}; echo "$*";
1,2,3,4,5
If the limits are in variables, it's probably easiest to just do it manually with a loop since you can't use variables as endpoints in a brace expansion range. Again, the upper limit is a special case:
a=1; b=5
for (( i=a ; i<b ; i++ )); do
    printf "$i,";
done;
printf "$b\n" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487730",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309050/"
]
} |
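If this comes up often, the "$*" trick is commonly wrapped in a helper so the IFS change stays local; a sketch of that widely used idiom:
join_by () { local IFS=$1; shift; echo "$*"; }
join_by , {1..5}   # prints 1,2,3,4,5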
487,742 | Is there a way (other than permitting root login on the target machine) to work around the following: $ ssh [email protected]'s password: "System is booting up. Unprivileged users are not permitted to log in yet. Please come back later. For technical details, see pam_nologin(8)." I am trying to debug remotely a failure to start an X session. At this time, the following link is not working for me: https://github.com/systemd/systemd/issues/8228 | This issue may come from /run/nologin . /run/nologin is created by systemd-tmpfiles-setup.service . It is then removed by systemd-user-sessions.service . So you have to delete this:
$ ls -l /run/nologin
# rm /run/nologin
Let us know if it works! Good luck! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/487742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32896/"
]
} |
487,750 | I am trying to send a mail with the command below. I am sending this command from SQR to the command line, which is working fine. $FilePath_mail has To, From and other information along with the mail body, which is in HTML format. I want to have an image (logo) in the body, so I wanted to send it as an attachment. /usr/sbin/sendmail -t < $FilePath_mail How do I need to change the above command to add the attachment (basically an image) to the mail? | This issue may come from /run/nologin . /run/nologin is created by systemd-tmpfiles-setup.service . It is then removed by systemd-user-sessions.service . So you have to delete this:
$ ls -l /run/nologin
# rm /run/nologin
Let us know if it works! Good luck! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/487750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325699/"
]
} |
487,830 | If a program is not allowed to handle or ignore SIGKILL and SIGSTOP, and must immediately terminate, why does the kernel even send the signal to the program? Can't the kernel simply evict the program from the CPU and memory? I assume the kernel would have the ability to do this directly. | The user-space part of a process terminated by SIGKILL never gets to know about it; the kernel deals with everything. (This does mean that some resources can leak: temporary files, shared memory allocations, resources held on behalf of the killed process by another process such as an X server…) However the signal still needs to be delivered, so that other processes can find out how the killed process was terminated. A parent, using wait , will get the information that the killed process was terminated by SIGKILL . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325760/"
]
} |
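The last point is easy to observe from a shell, since bash encodes death-by-signal as 128 plus the signal number; a quick sketch:
sleep 100 & pid=$!
kill -KILL "$pid"
wait "$pid"; echo "$?"   # prints 137, i.e. 128 + 9 (SIGKILL)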
487,831 | So I have a text file:
4556 4618 7843 8732
4532 0861 1932 5122
3478 893* 6788 6312
5440 3173 8207 0451 6788
66011 2966 7184 4668
3678 3905 5323 2389 4387 9336 2783 239 235 453 345
8182 534 654 765
4485 0721 1308 2759
46759 543 2345
I want to grep only the numbers that have 4 digits together, 4 times in a row (separated by a space). For example: 4556 4618 7843 8732 I am using: grep -E "([0-9]{4} [0-9]{4} [0-9]{4} [0-9]{4})" test.txt Which shows me:
4556 4618 7843 8732
4532 0861 1932 5122
5440 3173 8207 0451 6788
66011 2966 7184 4668
4485 0721 1308 2759
Using this there is an extra line that shouldn't appear, where there is a 5th set of numbers that has 5 digits on the end. So I used: grep -E "([0-9]{4} [0-9]{4} [0-9]{4} [0-9]{4})$" test.txt But this only gave me two results instead of the 4 it should:
4556 4618 7843 8732
4485 0721 1308 2759
Can someone tell me what I'm doing wrong? | The user-space part of a process terminated by SIGKILL never gets to know about it; the kernel deals with everything. (This does mean that some resources can leak: temporary files, shared memory allocations, resources held on behalf of the killed process by another process such as an X server…) However the signal still needs to be delivered, so that other processes can find out how the killed process was terminated. A parent, using wait , will get the information that the killed process was terminated by SIGKILL . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319349/"
]
} |
487,837 | I'm on a minimalist FreeBSD system and need to use the built-in vi editor to edit files. To be specific, this is not vim, or vim-tiny, or another replacement. It's the "4BSD bug-for-bug compatible" nvi editor. It works almost as expected. The man page says that control-T and control-D will indent/unindent according to shiftwidth . Control-T does work, but control-D does not. It actually enters the ^D character into the file. If I do get vim onto the system, control-T and control-D work as expected, so it's not an issue of the terminal mis-interpreting the key. Vi itself is not interpreting control-D. Anyone run into this? Any solutions? Using vim is not an option. | tl;dr; vim is not vi . In vi , you should use Control-T instead of Tab to indent a line. If you find it hard to retrain, you could add an input mode mapping from Tab to Control-T: printf 'map! \x16\t \x14\n' >> ~/.nexrc In the real vi , and in the nvi clone (used in FreeBSD), a control-D will erase autoindent characters up to the previous "shiftwidth" boundary. It will not erase the Tab or Space characters you entered by hand, either by pressing Control-I, Tab or Space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282407/"
]
} |
487,850 | I used to use unquoted expansion $variable when the variable stored compiler flags, but I learned recently that glob metacharacters like * and ? contained in the variable are still expanded, e.g.
$ f='*'
$ echo $f
foo.bash
Is there a portable way to just perform field splitting without globbing, besides set -f ? The most explicit way I can come up with to do this in bash is to define a read_words function like so, which populates an array name with the contents of a string passed in as an argument and then uses ${arr[@]} to expand the string.
#!/bin/bash
count() {
    printf '%s\n' "$#"
}
read_words() {
    IFS=$' \t\n' read -a "$1" <<< "$2"
    return 0
}
read_words arr 'a b *'
count "${arr[@]}" | tl;dr; vim is not vi . In vi , you should use Control-T instead of Tab to indent a line. If you find it hard to retrain, you could add an input mode mapping from Tab to Control-T: printf 'map! \x16\t \x14\n' >> ~/.nexrc In the real vi , and in the nvi clone (used in FreeBSD), a control-D will erase autoindent characters up to the previous "shiftwidth" boundary. It will not erase the Tab or Space characters you entered by hand, either by pressing Control-I, Tab or Space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86874/"
]
} |
487,873 | I would like to remove just the numbers and "_" after the ">" symbol, for example:
>1_CR-B_CR56_t
MTKIIKFVYFMTIFISPNHHCPVYNCTHPKQPWCKLVRLQLLFHGSLIGLCDCI
>2_R-B_R46_t
MVEVTKLVNVMLIFLTLSPLVYDCQAYECELPFKPDCLMVEYSPQFVALRCGCV
>3000_N-N274_M
MVEVTKLVNVMLIFLTLFVYTDSDCQAYACELPFKPDCLMVEYAPQFFRLACGCV
Expected results:
>CR-B_CR56_t
MTKIIKFVYFMTIFISPNHHCPVYNCTHPKQPWCKLVRLQLLFHGSLIGLCDCI
>R-B_R46_t
MVEVTKLVNVMLIFLTLSPLVYDCQAYECELPFKPDCLMVEYSPQFVALRCGCV
>N-N274_M
MVEVTKLVNVMLIFLTLFVYTDSDCQAYACELPFKPDCLMVEYAPQFFRLACGCV
I used sed "s/>[0-9][_]//g" but it removed ">" as well. | Just a slight modification of your sed command: sed 's/^>[0-9]\+[_]/>/g' The s is the sed substitute command; it searches for the pattern on the left-hand side and replaces it with the string on the right-hand side. Instead of replacing the match with nothing, you can replace it with the > character that you would like to keep. ^ is used to specify that the match may only start at the beginning of a line. Additionally, \+ is used to match one or more digits instead of just a single one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137472/"
]
} |
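If your sed lacks GNU's \+ (the answer's command relies on it), the POSIX interval form is equivalent; a sketch:
sed 's/^>[0-9]\{1,\}_/>/' file
Here the bracket expression [_] is shortened to a plain _ , which matches the same single character.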
487,885 | I'm building an IP camera server for RTSP ffmpeg capture and 24/7 purposes. The only thing that is missing is a script that checks the connectivity of the camera; if it's not reachable, another script should be triggered which checks the cam for online status, so that a new ffmpeg capture process can then be started. I already spent plenty of time testing, but nothing works right now. So for the job I have three scripts. The 1st should check if the camera is still reachable and, if not, then go to the 2nd:
#!/bin/sh
# record-ping_cam1.sh
# Check 24h if cam is alive, in case of error code 1 (offline) start record-waitfor_xxx.sh
#
IPCAM=192.168.xxx.xxx
ping -w 86400 -i2 $IPCAM 0>/dev/null
OFFLINE=$?
if [ $OFFLINE -eq 1 ]
then
    source /home/xxx/record-ping-waitfor_cam1.sh
fi
The 2nd should check if it is reachable again and, if it is, then go to the 3rd:
#!/bin/sh
# record-ping-waitfor_cam1.sh
# Check if Cam is alive, if yes (exit code 0) then execute record-ping-reconnect_xxx.sh
#
# Ping with infinite loop - as soon as reachable (exit code 0) then go on with record script
IPCAM=192.168.xxx.xxx
while true; do ping -c1 $IPCAM > /dev/null && break; done
ONLINE=$?
if [ $ONLINE -eq 0 ]
then
    source /home/xxx/record-ping-reconnect_cam1.sh
fi
The 3rd starts the new ffmpeg process and writes the ffmpeg and ping PIDs to file (needed later):
#!/bin/sh
# record-ping-reconnect_cam1.sh
# Record IPcam after any case of signal lost
#
# This will print the current date and time in a format appropriate for storage
STARTTIME=$(/bin/date +"%d.%m.%Y")-"("$(/bin/date +"%H").$(/bin/date +"%M")Uhr")"
### IP Camera Names ##
# Creating date stamps for each of the Cameras
CAM=Cam1_$STARTTIME
### Network and Local Storage Locations ##
# Trailing '/' is necessary here
RCDIR="/home/xxx/Reconnect/"
### Record Time per File sec ##
LENGTH="86400" # (24h)
### Record Settings ###
# wait until cam is ready to capture again
sleep 40s
# start capture of this cam source
ffmpeg -v 0 -rtsp_transport tcp -i "rtsp://device:port/11" -vcodec copy -an -t $LENGTH $RCDIR$CAM1.mkv & echo $! > /home/xxx/Reconnect/PIDs/ffmpeg_cam1.pid
# start the ping routine, check the cam for connectivity
source /home/xxx/record-ping_cam1.sh & echo $! > /home/xxx/Reconnect/PIDs/ping_cam1.pid
exit
The thing is... the 1st script worked fine but I had trouble with the 2nd. I then tried different things with fping but without luck. Now with ping in the while loop it's working flawlessly. But then the 1st script stopped working... that seems weird to me. The server is an RPi 3B+ with Raspbian Stretch. | Just a slight modification of your sed command: sed 's/^>[0-9]\+[_]/>/g' The s is the sed substitute command; it searches for the pattern on the left-hand side and replaces it with the string on the right-hand side. Instead of replacing the match with nothing, you can replace it with the > character that you would like to keep. ^ is used to specify that the match may only start at the beginning of a line. Additionally, \+ is used to match one or more digits instead of just a single one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325841/"
]
} |
487,941 | I am supplying a while loop with the following:
#!/bin/bash
number1=1
while [ -z "$number2" ] | [ "$number2" == 404 ] & [ "$number2" != 200 ] & [ "$number1" -lt 13 ]; do
    #number2=$(some command which can actually get a number)
    number2=200 # <<< e.g. a command that would return
    let number1=number1+1
done
This is what I need to do: If number2 is null, do the loop. If number2 is 404, do the loop. If number2 is 200, don't do the loop. Do the loop until number1 is 12. When I try the loop with number2=200 it doesn't stop. It seems I am having a challenge with having it stop where number2 is 200 . How do I write the statement such that it will stop the while loop when number2=200 , or is there an alternative? | If number2 is null do the loop If number2 is 404 do the loop If number2 is 200 don't do the loop Do the loop until number1 is 12 In other words, repeat as long as (number2 is null OR number2 = 404) AND (number2 != 200) AND (number1 <= 12) . Note that you need some sort of grouping here, to make the precedence of AND and OR explicit. (In Bash, && and || operate from left to right, but often the AND-operator binds more strongly than an OR-operator.) Though you didn't say what should happen for other values of number2, so we might as well drop the first two conditions, since if number2 is null or 404, then it can't be 200. So we get (number2 != 200) AND (number1 <= 12) . Here, while [ -z "$number2" ] | [ "$number2" == 404 ] & [ "$number2" != 200 ] & [ "$number1" -lt 13 ]; do ... you have | and & instead of || and && . | indicates a pipeline, and & runs the preceding command in the background. So the above would run three commands in parallel: one pipeline with two tests, and another with one test, both in the background; and one test in the foreground. That doesn't make much sense. I mentioned && and || above, those are the logical condition operators in Bash. The simplified form would be: while [ "$number2" != 200 ] && [ "$number1" -le 12 ]; do ... (You may also want to use somewhat more descriptive variable names than "number1" and "number2".) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/487941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309050/"
]
} |
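If you prefer to keep all three original conditions visible instead of simplifying, the grouping the answer mentions can be written with a brace group; a sketch in which some_command is a placeholder for whatever actually fetches the value:
while { [ -z "$number2" ] || [ "$number2" = 404 ]; } && [ "$number1" -le 12 ]; do
    number2=$(some_command)
    number1=$((number1 + 1))
done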
487,955 | I'm looking for something like command1 ; command2 i.e. how to run command2 after command1 , but I'd like to plan the execution of command2 when command1 is already running. It can be solved by just typing command2 and confirming with Enter, provided that command1 is not consuming standard input and that command1 doesn't produce too much text on output, making it impractical to type (typed characters are blended with command1 's output). | Generally what I do is:
Ctrl + Z
fg && command2
Ctrl + Z to pause it and let you type more in the shell. Optionally bg , to resume command1 in the background while you type out command2. fg && command2 to resume command1 in the foreground and queue up command2 afterwards if command1 succeeds. You can of course substitute ; or || for the && if you so desire. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/487955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46334/"
]
} |
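If command1 is already running and you would rather leave it in the background, wait can queue command2 on its completion; a sketch assuming it is job %1: press Ctrl+Z, then
bg; wait %1 && command2
wait blocks until the job finishes and returns its exit status, so the && keeps the same fail-fast behaviour as fg && command2.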
488,068 | I have found the following kind of shebang on the RosettaCode page: --() { :; }; exec db2 -txf "$0" It works for Db2, and a similar thing for Postgres. However, I do not understand the whole line. I know the double dash is a comment in SQL, and after that it calls the Db2 executable with some parameters, passing the file itself as the file. But what about the parentheses, the curly brackets, the colon and semi-colon, and how can it replace a real shebang #! ? https://rosettacode.org/wiki/Multiline_shebang#PostgreSQL | Related: Which shell interpreter runs a script with no shebang? The script does not have a shebang/hashbang/ #! line, simply because a double dash is not #! . However, the script will be executed by a shell (see above linked question and answers), and in that shell, if - is a valid character in a function name, the line declares a shell function called -- that does nothing (well, it runs : , which does nothing ) and which is never called. The function, in the more common multi-line notation (just to make it more obvious what it looks like, as its odd name kinda obscures the fact that it's in fact a function):
-- () {
  :
}
The sole purpose of the function definition is to have a line that is valid in a shell script and at the same time a valid SQL command (a comment). This sort of code is called a polyglot . After declaring the bogus shell function, the script, when executed by a shell script interpreter, uses exec to replace the current shell with the process resulting from running db2 -txf "$0" , which would be the same as using db2 -txf on the pathname of the script from the command line. This trick would probably not work reliably on systems where dash or other ash -based shells, yash , the Bourne shell, ksh88 or ksh93 is used as /bin/sh , as these shells do not accept functions whose name contains dashes. Also related: Shell valid function name characters Will it be bad that a function or script name contains dash `-` instead of underline `_`? I suppose the following would also work (not really tested): --() { exec db2 -txf "$0"; }; -- | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/488068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41086/"
]
} |
488,108 | Unfortunately I keep running into this issue when trying to install Debian. It occurs after I install the nvidia graphics drivers as per this guide: https://wiki.debian.org/NvidiaGraphicsDrivers . I am following the Version 390.48 (via stretch-backports) guide and then the configuration steps via nvidia-xconfig . How can I troubleshoot this and get it working? | Related: Which shell interpreter runs a script with no shebang? The script does not have a shebang/hashbang/ #! line, simply because a double dash is not #! . However, the script will be executed by a shell (see above linked question and answers), and in that shell, if - is a valid character in a function name, the line declares a shell function called -- that does nothing (well, it runs : , which does nothing ) and which is never called. The function, in the more common multi-line notation (just to make it more obvious what it looks like, as its odd name kinda obscures the fact that it's in fact a function):
-- () {
  :
}
The sole purpose of the function definition is to have a line that is valid in a shell script and at the same time a valid SQL command (a comment). This sort of code is called a polyglot . After declaring the bogus shell function, the script, when executed by a shell script interpreter, uses exec to replace the current shell with the process resulting from running db2 -txf "$0" , which would be the same as using db2 -txf on the pathname of the script from the command line. This trick would probably not work reliably on systems where dash or other ash -based shells, yash , the Bourne shell, ksh88 or ksh93 is used as /bin/sh , as these shells do not accept functions whose name contains dashes. Also related: Shell valid function name characters Will it be bad that a function or script name contains dash `-` instead of underline `_`? I suppose the following would also work (not really tested): --() { exec db2 -txf "$0"; }; -- | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/488108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325986/"
]
} |
488,117 | I have a string -w o rd . I need to split it to w o r d or to an array of 'w' 'o' 'r' 'd' ; it doesn't really matter. I have tried the following: IFS='\0- ' read -a string <<< "-w o rd" echo ${string[*]} rd isn't getting split. How can I make it get split? | You can't use IFS in bash to split on nothing (it has to be on a character). There's no character between r and d in rd . No space and no character isn't the same as the null character. If you want each character as a separate element in the array, one way I can think of is to read each character individually and append it to an array (and using IFS to get rid of spaces and - ):
bash-4.4$ while IFS=' -' read -n1 c ; do [[ -n $c ]] && foo+=("$c"); done <<<"-w o rd"
bash-4.4$ declare -p foo
declare -a foo=([0]="w" [1]="o" [2]="r" [3]="d") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/488117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309050/"
]
} |
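An alternative sketch that avoids the per-character read loop, using bash substring expansion after stripping the dashes and spaces:
s='-w o rd'; s=${s//[- ]/}
chars=(); for ((i = 0; i < ${#s}; i++)); do chars+=("${s:i:1}"); done
declare -p chars   # declare -a chars=([0]="w" [1]="o" [2]="r" [3]="d")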
489,145 | I see the term i386 instead of x86 in many places related to Linux. To my knowledge, they are not interchangeable: x86 is a family of instruction set architectures, while the i386 is one specific x86 processor. But why does the Linux world use the term i386 instead of x86 ? References: x86 | Wikipedia Intel 80386 | Wikipedia | The i386, or 80386, was the first 32-bit x86 processor. When it was introduced, the term i386 started to be used in many places, including in OSes and compilers, which made it impossible or very difficult to change later. Even after the introduction of more advanced x86 processors, including the 486 and 586, many manufacturers didn't bother to change the label i386 and kept using it as an alias for any 32-bit x86 processor. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489145",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/307224/"
]
} |
489,150 | I am trying to print two strings separated by a TAB. I have tried:
echo -e 'foo\tbar'
printf '%s\t%s\n' foo bar
Both of them print: foo bar where the whitespace between the two is actually 5 spaces (as per selecting the output with the mouse in PuTTY). I have also tried using CTRL+V and pressing TAB when typing the command, with the same result. What is the correct way to force a tab being printed as a tab, so I can select the output and copy it to somewhere else, with tabs? And the secondary question: why is bash expanding tabs into spaces? Update: Apparently, this is a problem of PuTTY: https://superuser.com/questions/656838/how-to-make-putty-display-tabs-within-a-file-instead-of-changing-them-to-spaces | the whitespace between the two is actually 5 spaces. No, it's not. Not in the output of echo or printf .
$ echo -e 'foo\tbar' | od -c
0000000   f   o   o  \t   b   a   r  \n
0000010
What is the correct way to force tab being printed as tab, so I can select the output and copy it to somewhere else, with tabs? This is a different issue. It's not about the shell but the terminal emulator, which converts the tabs to spaces on output. Many, but not all of them do that. It may be easier to redirect the output with tabs to a file, and copy it from there, or to use unexpand on the output to convert spaces to tabs. (Though it also can't know what whitespace was tabs to begin with, and will convert all of it to tabs, if possible.) This of course would depend on what, exactly, you need to do with the output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489150",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172003/"
]
} |
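To verify that it is the terminal (PuTTY) and not the shell doing the expansion, write the output to a file and inspect the bytes; a quick sketch:
printf 'foo\tbar\n' > out.txt
od -c out.txt   # shows \t, so the tab survived intact
Copying the file itself (e.g. with scp) then preserves the tabs.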
489,154 | For example: tar xvf test.tar.gz ; rm test.tar.gz Is there a faster way to reference the file name in the second command? I was thinking of something like this (which is invalid): tar xvf test.tar.gz ; rm $1 Is anything like that possible? I'm fully aware of wildcards. | You could assign the filename to a variable first: f=test.tar.gz; tar xvf "$f"; rm "$f" Or use the $_ special parameter , it contains the last word of the previous command, which is often (but of course not always) the filename you've been working with: tar xvf test.tar.gz; rm "$_" This works with multiple commands too, as long as the filename is always the last argument to the commands (e.g. echo foo; echo $_; echo $_ prints foo three times.) As an aside, you may want to consider using tar ... && rm ... , i.e. with the && operator instead of a semicolon. That way, the rm will not run if the first command fails. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327021/"
]
} |
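A classic everyday use of $_ in the same spirit; a quick sketch:
mkdir -p /tmp/new/dir && cd "$_"
Here $_ expands to /tmp/new/dir , the last argument of the previous command.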
489,230 | I have many files of music; with the program mp3Tag, I had organized all of them and have the correct metadata, as far as it allows. I am looking for free software that does the same, but for PDF files. | Ghostscript can insert or modify document metadata in any PDF. Caveats: while doing so, Ghostscript will (1) first read in the complete PDF code, (2) second re-process that complete PDF code, (3) write out a completely new PDF file. This process can be wanted (could be for the advantage of the PDF quality, for example by additionally embedding previously missing fonts) or unwanted... How to do it: Create a text file named mydocinfo.pdfmark and put the following content into it:
[ /Title (Jaziel's Important Document)
  /Author (Jaziel Aguirre)
  /Subject (Mr. Aguirre's experiments with pdfmark)
  /Creator (JA's Metadata Inserter)
  /ModDate (D:19700101000000+01'00')
  /Producer (A 'pdfmark' trick with Ghostscript)
  /Keywords (Metadata, Ghostscript, PDF, Linux)
  /CreationDate (D:20181229104653+01'00')
  /DOCINFO pdfmark
Note that the opening [ does NOT require a closing ] -- it is closed by the 'pdfmark' keyword. Now run this Ghostscript command to insert the new metadata into an existing PDF:
gs \
  -o with-metadata.pdf \
  -sDEVICE=pdfwrite \
  existing.pdf \
  mydocinfo.pdfmark
Check the new metadata:
pdfinfo with-metadata.pdf
Title:          Jaziel's Important Document
Subject:        Mr. Aguirre's experiments with pdfmark
Keywords:       Metadata, Ghostscript, PDF, Linux
Author:         Jaziel Aguirre
Creator:        JA's Metadata Inserter
Producer:       A 'pdfmark' trick with Ghostscript
CreationDate:   Sat Dec 29 10:46:53 2018 CET
ModDate:        Thu Jan  1 00:00:00 1970 CET
Tagged:         no
UserProperties: no
Suspects:       no
Form:           none
JavaScript:     no
Pages:          1
Encrypted:      no
Page size:      142.8 x 202.08 pts
Page rot:       0
File size:      5394 bytes
Optimized:      no
PDF version:    1.7
(Tested with Ghostscript v9.27.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327063/"
]
} |
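If rewriting the whole PDF through Ghostscript is unwanted (see the caveats above), exiftool can edit the same Info dictionary in place, appending an incremental update instead of re-processing the pages; a sketch assuming exiftool is installed:
exiftool -Title='My Document' -Author='Jaziel Aguirre' existing.pdf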
489,297 | At the memory address 0x7fffffffeb58 of a program lies a value, and I want to find out the value at that address. Is there a way to get the value just by using commands? I've tried dd but to no avail. | To peek at memory addresses of a process, you can look at /proc/$pid/mem . See also /proc/$pid/maps for what's mapped in the process' address space. You'll want to seek() within that file to the location you want, which you should be able to do with dd :
dd bs=1 skip="$((0x7fffffffeb58))" count=4 if="/proc/$pid/mem" | od -An -vtu4
would read 4 bytes at that address and interpret them as an unsigned 32-bit integer. Another approach is to attach a debugger to the process:
gdb --batch -ex 'x/u 0x7fffffffeb58' -p "$pid"
In any case, note that depending on the value of the kernel.yama.ptrace_scope sysctl, you may need to have superuser privileges to do that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327120/"
]
} |
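To check the restriction the answer mentions before reaching for superuser privileges; a quick sketch:
sysctl kernel.yama.ptrace_scope
A value of 0 allows attaching to any process you own, while 1 (a common default) only allows attaching to direct children, which is why both the /proc read and gdb -p may need elevated privileges.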
489,394 | I know the best solution would be to upgrade the server after Debian wheezy reached End Of Life in May 2018, but is there a way to continue running an old wheezy instance and still get some minimal updates? Maybe some external apt sources that are still maintained somewhere? | As indicated by muru , there is a way to get updated packages for Debian 7, through Freexian's Extended LTS . See also Raphaël Hertzog's blog post introducing it . This is a continuation of the Debian 7 LTS , which stopped in May 2018. In the extended LTS, a subset of Debian 7 packages continue to receive support, on amd64 and possibly i386 . The subset is determined by the sponsors of the project; to be an extended LTS sponsor, you also have to be a regular LTS sponsor. It is however possible to hitch a ride and use the extended LTS without sponsoring it; the instructions and repository are freely available:
wget https://deb.freexian.com/extended-lts/pool/main/f/freexian-archive-keyring/freexian-archive-keyring_2018.05.29_all.deb && sudo dpkg -i freexian-archive-keyring_2018.05.29_all.deb
echo deb http://deb.freexian.com/extended-lts wheezy-lts main contrib non-free | sudo tee /etc/apt/sources.list.d/extended-lts.list
echo deb http://deb.freexian.com/extended-lts wheezy-lts-kernel main | sudo tee -a /etc/apt/sources.list.d/extended-lts.list
Remember to upgrade the kernel too, and set up a local package mirror. Without becoming a sponsor, this can only be a band-aid and the resulting feeling of security will be misleading. For example, Ghostscript was dropped at the start of September... (Freexian handles sponsorship and payments. The package maintenance is carried out by Debian developers whose time is paid for by the Extended LTS project.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
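Once the repository and keyring are in place, the normal apt cycle pulls in the maintained packages. A sketch of the follow-up steps; the kernel metapackage name below is an assumption based on a standard wheezy/amd64 install, so verify it on your system first:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install linux-image-amd64   # pick up the wheezy-lts-kernel build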
489,410 | I can't find this case on the board, so I'm asking the question. This is the input file:

module x
(a,b,c)
module y
(d,e,f,
g,h,i)
module z
(j,k,l)

And the output file should be:

module x(a,b,c)
module y(d,e,f,
g,h,i)
module z(j,k,l)

| What you want to do is to join the module lines with the next line. Using sed :

$ sed '/^module/N;s/\n//' file
module x(a,b,c)
module y(d,e,f,
g,h,i)
module z(j,k,l)

This is with your data copied and pasted as is, with spaces at the end of each line. The sed command will print each line as it is read, but when it encounters a line that starts with the string module , it appends the next line with an embedded newline character in-between (this is what N does). We remove that newline character with a substitution before the result is printed. If your data has no spaces at the end of the lines, use

$ sed '/^module/N;s/\n/ /' file
module x(a,b,c)
module y(d,e,f,
g,h,i)
module z(j,k,l)

Just in case you'd want this (assuming no spaces at end of input lines):

$ sed -e '/^module/bpp' -e 'H;$bpp' -e 'd' \
      -e ':pp' -e 'x;/^$/d;s/\n/ /g' file
module x(a,b,c)
module y(d,e,f, g,h,i)
module z(j,k,l)

Annotated sed script:

/^module/ b print_previous;  # print previous record
H;                           # append this line to hold space
$ b print_previous;          # print previous (last) record
d;                           # end processing this line
:print_previous;             # prints a record accumulated in the hold space
x;                           # swap in the hold space
/^$/ d;                      # if line is empty, delete it
s/\n/ /g;                    # replace embedded newlines by spaces
                             # (implicit print)

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308792/"
]
} |
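For comparison, the same join can be done in awk without the N/substitution dance — a sketch assuming, as in the second sed variant, that the input lines have no trailing spaces:

awk '/^module/ { printf "%s", $0; next } { print }' file

Each module line is printed without its newline, so the following line is appended to it; all other lines pass through unchanged.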
489,421 | I'm using these commands:

du -sh --apparent-size ./*
du -sh ./*

both reporting:

4.0K ./Lightroom_catalog_from_win_backup
432M ./Lightroom catalog - wine_backup

while those directories contain:

$ ll ./"Lightroom catalog - wine_backup"
total 432M
-rwxrwx--- 1 gigi gigi 432M Mar 18  2018 Lightroom 5 Catalog Linux.lrcat
-rwxrwx--- 1 gigi gigi  227 Nov 21  2015 zbackup.bat
$ ll ./Lightroom_catalog_from_win_backup
total 396M
-rwxrwx--- 3 gigi gigi 396M Dec 17 09:35 Lightroom 5 Catalog Linux.lrcat
-rwxrwx--- 3 gigi gigi  227 Dec 17 09:35 zbackup.bat

Why is du reporting 4.0K for ./Lightroom_catalog_from_win_backup and how could I make it report correctly? PS: other system information:

$ stat --file-system $HOME
  File: "/home/gigi"
    ID: 5b052c62a5a527bb Namelen: 255     Type: ext2/ext3
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 720651086  Free: 155672577  Available: 119098665
Inodes: Total: 183050240  Free: 178896289
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:        16.04
Codename:       xenial

| I can reproduce this if the files are hard links:

~ mkdir foo bar
~ dd if=/dev/urandom of=bar/file1 count=1k bs=1k
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00985276 s, 106 MB/s
~ ln bar/file1 foo/file1
~ du -sh --apparent-size foo bar
1.1M    foo
4.0K    bar

This is expected behaviour. From the GNU du docs : If two or more hard links point to the same file, only one of the hard links is counted. The file argument order affects which links are counted, and changing the argument order may change the numbers and entries that du outputs. If you really need repeated sizes of hard links, try the -l option: ‘ -l ’ ‘ --count-links ’ Count the size of all files, even if they have appeared already (as a hard link).

~ du -sh --apparent-size foo bar -l
1.1M    foo
1.1M    bar

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227120/"
]
} |
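To confirm that this is what is happening, compare inode numbers — equal numbers mean the same underlying file (ls -i is POSIX; -samefile is available in GNU and modern BSD find):

$ ls -li foo/file1 bar/file1    # same inode number = same file
$ find . -samefile bar/file1    # every path that links to it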
489,431 | I have the following text:

Name= Garen
Class= 9C
School= US

Name= Lulu
Class= 4A

Name= Kata
Class= 10D
School= UK

I got the awk cmd below: awk '$Name ~/Name/ {printf $0;} $Class ~/Class/ {printf $0;} $School ~/School/ {print $0;} ' file.txt But it runs the records together instead of keeping one record per line. I want it to output like this:

Name= Garen ,Class= 9C ,School= US
Name= Lulu , Class= 4A ,
Name= Kata ,Class= 10D ,School= UK

And if it runs into a situation like this:

Name= Garen
Class= 9C
Last Name= Wilson
School= US

Name= Lulu
Class= 4A
Last Name= Miller

Name= Kata
Class= 10D
School= UK
Last Name= Thomas

it should print:

Name= Garen,Class= 9C,School= US
Name= Lulu,Class= 4A
Name= Kata,Class= 10D,School= UK

| Treat each blank-line-separated block as one record and each line of the block as one field; then it is just a matter of joining the fields you want with commas. In awk, RS="" (paragraph mode) and FS="\n" do exactly that: awk 'BEGIN{RS=""; FS="\n"; OFS=","} {line=""; for (i=1; i<=NF; i++) if ($i !~ /^Last Name=/) line = (line=="" ? $i : line OFS $i); print line}' file.txt The loop walks over the lines of each record, skips any "Last Name=" line, and concatenates the rest with OFS (a comma) in between, so the same command handles both of your inputs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327226/"
]
} |
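A sample run of the one-liner above against the second input, assuming the records are separated by blank lines exactly as shown in the question:

$ awk 'BEGIN{RS=""; FS="\n"; OFS=","} {line=""; for (i=1; i<=NF; i++) if ($i !~ /^Last Name=/) line = (line=="" ? $i : line OFS $i); print line}' file.txt
Name= Garen,Class= 9C,School= US
Name= Lulu,Class= 4A
Name= Kata,Class= 10D,School= UK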
489,445 | How to unzip a file (ex: foo.zip ) to a folder with the same name ( foo/ )? Basically, I want to create an alias of unzip that unzips files into a folder with the same name (instead of the current folder). That's how Mac's unzip utility works and I want to do the same in the CLI. | I use unar for this; by default, if an archive contains more than one top-level file or directory, it creates a directory to store the extracted contents, named after the archive in the way you describe: unar foo.zip You can force the creation of a directory in all cases with the -d option: unar -d foo.zip Alternatively, a function can do this with unzip :

unzd() {
  if [[ $# != 1 ]]; then
    echo I need a single argument, the name of the archive to extract
    return 1
  fi
  target="${1%.zip}"
  unzip "$1" -d "${target##*/}"
}

The target=${1%.zip} line removes the .zip extension, with no regard for anything else (so foo.zip becomes foo , and ~/foo.zip becomes ~/foo ). The ${target##*/} parameter expansion removes anything up to the last / , so ~/foo becomes foo . This means that the function extracts any .zip file to a directory named after it, in the current directory. Use unzip "$1" -d "${target}" if you want to extract the archive to a directory alongside it instead. unar is available for macOS (along with its GUI application, The Unarchiver ), Windows, and Linux; it is packaged in many distributions, e.g. unar in Debian and derivatives, Fedora and derivatives, community/unarchiver in Arch Linux. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323983/"
]
} |
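To batch-extract a directory full of archives with the function above, a minimal loop suffices — a sketch assuming every archive ends in .zip:

for z in ./*.zip; do
  unzd "$z"    # or: unar "$z"
done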
489,453 | We want to replace the script file extension, so we did the following:

new_name=`echo run_fix.bash | sed 's/[.].*$//'`
new_file_extension=".in_hold.txt"
new_name=$new_name$new_file_extension
echo $new_name
run_fix.in_hold.txt

but I feel my approach is not so elegant. Note: the script extension could be bash, perl or python, and the target extension could be anything after the "."; we want a general replacement. I am using Red Hat 7.2. |

old_name=run_fix.bash
new_name=${old_name%.bash}.in_hold.txt
printf 'New name: %s\n' "$new_name"

This would remove the filename suffix .bash from the value of $old_name and add .in_hold.txt to the result of that. The whole thing would be assigned to the variable new_name . The expansion ${variable%pattern} to remove the shortest suffix string matching the pattern pattern from the value of $variable is a standard parameter expansion . To replace any filename suffix (i.e. anything after the last dot in the filename): new_name=${old_name%.*}.new_suffix The .* pattern would match the last dot and anything after it (this would be removed). Had you used %% instead of % , the longest substring that matched the pattern would have been removed (in this case, you would have removed everything after the first dot in the string). If the string does not contain any dots, it remains unaltered. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
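The % versus %% distinction is easiest to see on a name with two dots; these are standard POSIX expansions, safe to try interactively:

$ f=run_fix.tar.gz
$ echo "${f%.*}"     # shortest suffix match removed
run_fix.tar
$ echo "${f%%.*}"    # longest suffix match removed
run_fix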
489,459 | Suppose I have search results like the following:

anything1.knownKeyWord
anything2.knownKeyWord
anything3[1].knownKeyWord

How can I write a generic grep pattern that matches all three strings? I have done this: ^.*\w+\d[\[]?[0]?[\]]?\.knownKeyWord.*$ But I think the indexing part (e.g. [1]) is not written in a good way. How can I write it so that even if [1] is replaced with [2342jdsjf], I don't have to change the syntax much? | Make the whole bracketed index optional and let its contents be arbitrary: grep -E '^[[:alnum:]_]+(\[[^][]*\])?\.knownKeyWord' file Here [[:alnum:]_]+ matches the leading word (anything1, anything2, ...), (\[[^][]*\])? optionally matches one [...] group whose contents can be anything except brackets, and \.knownKeyWord matches the literal suffix. Because the index is matched by [^][]* rather than a fixed digit, [1], [0] and [2342jdsjf] are all covered without touching the pattern. Drop the leading ^ if the token may appear anywhere in the line. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/489459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327259/"
]
} |
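A quick test of the pattern; the third input line is there to show a non-match:

$ printf '%s\n' 'anything1.knownKeyWord' 'anything3[2342jdsjf].knownKeyWord' 'something.else' | grep -E '^[[:alnum:]_]+(\[[^][]*\])?\.knownKeyWord'
anything1.knownKeyWord
anything3[2342jdsjf].knownKeyWord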
489,489 | I'm trying to run a command, write its output to a file, and then use that file for something else. The gist of what I need is: myAPICommand.exe parameters > myFile.txt The problem is that myAPICommand.exe fails a lot. I attempt to fix some of the problems and rerun, but I get hit with "cannot overwrite existing file". I have to run a separate rm command to clean up the blank myFile.txt and then rerun myAPICommand.exe . It's not the most egregious problem, but it is annoying. How can I avoid writing a blank file when my base command fails? | You must have "noclobber" set; check the following example:

$ echo 1 > 1 # create file
$ cat 1
1
$ echo 2 > 1 # overwrite file
$ cat 1
2
$ set -o noclobber
$ echo 3 > 1 # file is now protected from accidental overwrite
bash: 1: cannot overwrite existing file
$ cat 1
2
$ echo 3 >| 1 # temporarily allow overwrite
$ cat 1
3
$ echo 4 > 1
bash: 1: cannot overwrite existing file
$ cat 1
3
$ set +o noclobber
$ echo 4 > 1
$ cat 1
4

"noclobber" is only for overwriting; you can still append, though:

$ echo 4 > 1
bash: 1: cannot overwrite existing file
$ echo 4 >> 1

To check whether you have that flag set, type echo $- and see if the C flag is present (or run set -o | grep clobber). Q: How can I avoid writing a blank file when my base command fails? Any requirements? You could simply store the output in a variable and then check whether it is empty. Check the following example (note that the way you check the variable needs adjusting to your needs; in the example I didn't quote it, or use anything like ${cmd_output+x}, which checks whether a variable is set, in order to avoid writing a file containing only whitespace):

$ cmd_output=$(echo)
$ test $cmd_output && echo yes || echo no
no
$ cmd_output=$(echo -e '\n\n\n')
$ test $cmd_output && echo yes || echo no
no
$ cmd_output=$(echo -e ' ')
$ test $cmd_output && echo yes || echo no
no
$ cmd_output=$(echo -e 'something')
$ test $cmd_output && echo yes || echo no
yes
$ cmd_output=$(myAPICommand.exe parameters)
$ test $cmd_output && echo "$cmd_output" > myFile.txt

Example without using a single variable holding the whole output:

log() { while read data; do echo "$data" >> myFile.txt; done; }
myAPICommand.exe parameters | log

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/489489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160716/"
]
} |
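Another way to avoid leaving an empty myFile.txt behind is to write to a temporary file and only move it into place when the command succeeded and produced output. A sketch, assuming a POSIX shell and that "non-empty output" is the right success test alongside the exit status:

tmp=$(mktemp) || exit 1
if myAPICommand.exe parameters > "$tmp" && [ -s "$tmp" ]; then
    mv "$tmp" myFile.txt    # keep the output only on success
else
    rm -f "$tmp"            # discard the failed/empty attempt
fi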
489,556 | I'm trying to write a shell script that deletes all empty directories as well as any directory that contains only the .DS_Store file that Mac generates. I can do the former pretty easily with find -depth -type d -empty but I can't figure out how to find directories that contain only .DS_Store . Is there an easy way of doing this without writing my own recursive search function? | POSIX sh + find: Here's a solution that relies only on POSIX find and POSIX sh. List all directories, then filter those that only contain an entry called .DS_Store .

find . -type d -exec sh -c '
  cd "$0" &&
  for x in * .[!.]* ..?*; do
    if [ "$x" = ".DS_Store" ]; then continue; fi;
    if [ -e "$x" ] || [ -L "$x" ]; then exit 1; fi;
  done' {} \; -print

I use find to enumerate all directories recursively. On each directory, I call sh to run some shell code. The for loop enumerates all the files in the directory. The body of the loop skips .DS_Store . Each of the three patterns is left unchanged if it doesn't match any file. [ -e "$x" ] || [ -L "$x" ] captures any file including broken symbolic links; the only way they don't match is if a pattern was left unchanged. Therefore the shell snippet runs exit 1 if there is a file other than .DS_Store , and returns 0 for success otherwise. Change -print to -exec … if you want to do something other than printing the names. Zsh: Here's a solution in zsh. Change echo to whatever command you want to run.

setopt extended_glob
echo **/*(/DNe\''a=($REPLY/^.DS_Store(DNY1)); ((!#a))'\')

**/* enumerates all files recursively. With the glob qualifier / , **/*(/) enumerates all directories recursively. The glob qualifier N ensures that you get an empty list if there are no matches (by default zsh signals an error). The glob qualifier D causes dot files to be included. The glob qualifier e\'' CODE '\' runs CODE on each matching file name and limits the matches to those for which CODE succeeds. CODE can use the variable $REPLY to refer to the file name. ^.DS_Store matches files that are not called .DS_Store . Thus the CODE limits the matches to those for which the number of files other than .DS_Store is zero. The glob qualifier Y1 limits the matches to one (it's only an efficiency improvement). Python: Here's a solution in Python (it works in both 2 and 3). The structure is rather clearer despite this being compressed into a one-liner.

python -c 'import os; print("\n".join([path for path, dirs, files in os.walk(".") if dirs == [] and files in ([], [".DS_Store"])]))'

os.walk returns a list of directories recursively under its argument. For each directory, it produces a triple containing path (the path to the directory), dirs (the list of subdirectories) and files (the list of files in the directory that aren't themselves directories). [… for … in os.walk(…) if …] filters the result of os.walk . The if clause keeps an element only if it has no subdirectories and no files other than .DS_Store . The script prints the accepted elements, joined with a newline in between and with a final newline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/489556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327320/"
]
} |
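If the end goal is deletion rather than listing, a two-step find avoids the per-directory shell entirely; -delete is a GNU/BSD find extension, and note that the first step also strips .DS_Store from directories that are otherwise non-empty:

find . -name .DS_Store -type f -delete   # remove the clutter files first
find . -depth -type d -empty -delete     # then prune all now-empty directories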