source_id | question | response | metadata
---|---|---|---|
255,537 | Is there any option for curl that saves a page as text? I mean, in the same way a page can be saved in a browser as Text Files. At least, Firefox has that option. I need it for a script; I simply do something like curl -s http://... But it would make things much easier to deal with the output without all the HTML code. I found an option for lynx that does what I want: lynx -dump , but I'd rather use curl . Thanks. | No. You can use lynx for this: lynx -dump URL UPDATE. Oops. Sorry. I did not see that you know about lynx. I advise using lynx for this purpose. It often produces very readable output. Sometimes you should use the -width option to increase the width of the output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93422/"
]
} |
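A curl-based sketch for the question above, assuming one of the common HTML-to-text converters (html2text or w3m) is installed; neither is part of curl itself:

```
curl -s 'http://example.com/page' | html2text > page.txt                # converter #1
curl -s 'http://example.com/page' | w3m -dump -T text/html > page.txt   # converter #2
```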
255,556 | Running GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu). I don't really understand process substitution (ProcSub) from the perspective of a user interested in lifting the hood on i/o processing and related speed issues. I use ProcSub in scripts, so I have some knowledge of File Descriptors 0, 1, 2, but that's pretty much it. I have read a few pretty good posts, e.g. [1] , and misc. wikis, e.g. [2] , [3] , the latter stating: "Process substitution feeds the output (FD 1 and/or 2) of a process (or processes) into the stdin (FD 0) of another process". By that simplest of definitions and for just one process, it seems operationally no different from a simple unnamed pipe. To look into that I started with tee , in itself interesting from the point of view of i/o. tee permits feeding "stdin to stdout and also to any files given as argument". So:

```
$ for i in 1 2 3; do (( j=i+10 )); printf "%d\n" $j > file_$i; done
# so each file in file_{1,2,3} contains the numeral in its name + 10.
$ cat file_{1,2,3} | tee file_4
11
12
13
$ cat file_4
11
12
13
```

Obviously, I am not interested in seeing data filling my screen ala Matrix, so when: 1) I add a pipe and redirection of shasum 's output ...

```
$ cat file_{1,2,3} | tee file_4 | shasum -a 256 > file_4.sha256
$
```

the one-liner above exits quietly, file_4 is as before (above) and file_4.sha256 contains the computed SHA256 sum. The above is just an example to illustrate my question, trying to understand intermediate i/o's. My layman's conclusion is that tee saves the output of the cat cmd in file_4 and its copy normally sent to stdout is actually not sent to stdout but piped to shasum . Q: Is this even remotely right? 2) I try the same with ProcSub:

```
$ cat file_{1,2,3} | tee file_4 >(shasum -a 256 > file_4.sha256)
11
12
13
$
```

-> No stdout redirection of whatever is being sent to FD 1 by tee ? Q: I am not clear on what ProcSub does or does not do to i/o (obviously it does not affect i/o in this case) and could use an explanation of its mechanism. 3) I try with ProcSub and redirection of final stdout to file_4:

```
$ cat file_{1,2,3} | tee >(shasum -a 256 > file_4.sha256) > file_4
$
```

Again this time the one-liner exits quietly. Q: So the general question is: how are i/o processed for the 3 cases above (or at least for the second and third)? There are obvious and visible differences in i/o terms (just looking at final stdout), but sometimes different i/o processes can lead to identical end-results on the display. Tx. | The idiom >(...) just means (in layman terms): "the name of a file". And it works as a "name of a file" (sort of; all will be clear in an instant):

```
$ echo <(date)
/proc/self/fd/11
```

Or some other number/name on your OS. But echo does print a name, exactly as if you do:

```
$ echo ProcSubs11
ProcSubs11
```

And if a file with the label ProcSubs11 exists, you could also do:

```
$ cat ProcSubs11
contents_of_file_ProcSubs11
```

Which you could do exactly the same with:

```
$ cat <(date)
Fri Jan 15 21:25:18 UTC 2016
```

The difference is that the actual name of the "Process Substitution" is "not visible" and that the details are a lot longer than reading a simple file, as described quite well in all the painful detail in the link to How process substitution is implemented in bash? . Having said the above, let's review your items. Q 1: ...seems operationally no different from a simple unnamed pipe... Well, "Process Substitution" is exactly based on an unnamed pipe, as your given first link states: The bash process creates an unnamed pipe for communication between the two processes created later. The difference is that all the ~6 steps explained in the link are simplified to one idiom: >(...) for writing to and <(...) for reading from. And it could be argued that the connection (pipe) has a name, as a file has; just that that name is hidden from the user (the /proc/self/fd/11 shown at the start). Example 1: 1) I add a pipe and redirection of shasum's output ...

```
$ cat file_{1,2,3} | tee file_4 | shasum -a 256 > file_4.sha256
```

There is no "Process Substitution" there, but it is worth noting (for later) that tee writes what it receives on its stdin to the file file_4 and also sends the same stdin content to stdout, which happens to be connected to a pipe (in this case) that writes to shasum. So, in short, in layman terms, tee copies stdin to both file_4 and shasum . Example 2: 2) I try the same with ProcSub:

```
$ cat file_{1,2,3} | tee file_4 >(shasum -a 256 > file_4.sha256)
```

Re-using the description above (in layman terms) to describe this example: tee copies stdin to three elements: file_4 , shasum and stdout. Why? Remember that >(...) is the name of a file; let's put that in the line:

```
$ cat file_{1,2,3} | tee file_4 /proc/self/fd/11
```

tee is serving the input to two files, file_4 and shasum (via "Process Substitution"), and the stdout of tee is still connected to its default place: the console. That is why you see the numbers in the console. To make this example exactly equal to 1), we could do:

```
$ cat file_{1,2,3} | tee file_4 > /proc/self/fd/11   ### note the added `>`
```

Which becomes (yes, the space between > and >( must be used):

```
$ cat file_{1,2,3} | tee file_4 > >(shasum -a 256 > file_4.sha256)
```

That is redirecting tee 's stdout to the "Process Substitution". Q 3: Q: So the general question is: how are i/o processed for the 3 cases above? I believe I just did explain the 3 cases; if not clear, please comment. Q 4 (in comments; please edit and add the question): why won't the <(...) construct work in the third case? Because (in layman terms) you can not insert a male prong into a male socket. The <(...) idiom reads from what is inside the "Process Substitution" and therefore provides an "output", and should be inserted in the stdin of the outside command. The outside command tee is trying to connect stdout(-like) elements. So, that pair could not match. An important note: the command cat hides some details when applied to "Process Substitution", as both these commands will give the same output:

```
$ cat <(date)
$ cat < <(date)
```

All is correct, but drawing conclusions from a misleading equality is wrong. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72707/"
]
} |
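The three tee cases from the question can be replayed end to end; this is a sketch that just collects the question's own commands in one runnable script (the a/b/c suffixes on the checksum files are added here only to keep the three outputs apart):

```
#!/bin/bash
cd "$(mktemp -d)" || exit 1
for i in 1 2 3; do (( j = i + 10 )); printf '%d\n' "$j" > "file_$i"; done

# Case 1: tee's stdout feeds the pipe to shasum; nothing reaches the console.
cat file_{1,2,3} | tee file_4 | shasum -a 256 > file_4a.sha256

# Case 2: >(...) expands to a file name argument; tee's stdout still hits the console.
cat file_{1,2,3} | tee file_4 >(shasum -a 256 > file_4b.sha256)

# Case 3: tee's stdout is explicitly redirected into the process substitution.
cat file_{1,2,3} | tee file_4 > >(shasum -a 256 > file_4c.sha256)
```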
255,563 | Given a file like this:

```
1,768,12,46576457,78981,123,435,134,146
2,345,6756856,12312,131
15,234,567465,12341,1341
1,3245,4356345,2442,139,423,2342,121,4639,989,342,121,1212
```

I would like to list all rows (in a bash terminal) such that the value in column 1 appears at least twice (in column 1). The result should be

```
1,768,12,46576457,78981,123,435,134,146
1,3245,4356345,2442,139,423,2342,121,4639,989,342,121,1212
```

| To try and avoid storing the whole file in memory, you could do:

```
awk -F , '
  !count[$1]++ {save[$1] = $0; next}
  count[$1] == 2 {
      print save[$1]
      delete save[$1]
  }
  {print}'
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151694/"
]
} |
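A hedged usage sketch of the answer's awk program: the snippet as posted never names its input, so here it is assumed to read a file called input.csv (it works the same reading stdin):

```
awk -F, '
  !count[$1]++ { save[$1] = $0; next }   # first sighting of a key: buffer the line
  count[$1] == 2 {                       # second sighting: flush the buffered first line
      print save[$1]
      delete save[$1]
  }
  { print }                              # print the current (repeat) line
' input.csv
```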
255,581 | What does the set command without arguments do? As I can see, it prints out my environment variables just like the env command, but in alphabetical order. And further it prints some different information (variables? functions?) like:

```
__git_printf_supports_v=yes
__grub_script_check_program=grub-script-check
...
quote ()
{
    local quoted=${1//\'/\'\\\'\'};
    printf "'%s'" "$quoted"
}
quote_readline ()
{
    local quoted;
    _quote_readline_by_ref "$1" ret;
    printf %s "$ret"
}
```

What is it and where does it come from? I cannot find information about the set command without arguments. Actually I don't have a man page for set in my Linux distribution at all. | set is a shell built-in that displays all shell variables, not only the environment ones, and also shell functions, which is what you are seeing at the end of the list. Variables are displayed with a syntax that allows them to be set when the lines are executed or sourced. From the bash manual page: If no options or arguments are supplied, set displays the names and values of all shell variables and functions, sorted according to the current locale, in a format that may be reused as input for setting or resetting the currently-set variables. On different shells, the behavior is not necessarily the same; for example, in ksh set doesn't display shell functions. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/255581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112424/"
]
} |
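A quick way to see the difference between the two commands in practice:

```
env | wc -l    # exported (environment) variables only
set | wc -l    # all shell variables plus function bodies: usually far more lines
declare -F     # bash: list just the names of the functions that set also prints
```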
255,612 | I want to write a perl one-liner that replaces every instance of two specific consecutive strings that may or may not be separated by whitespace. For instance, say my two strings are john paul and george and I want to replace consecutive instances of these strings (in this order) with pete . Running the one-liner on

```
$ cat ~/foo
john paulgeorge
john paul george
john paul
george
george john paul
```

should result in

```
$ cat ~/foo
pete
pete
pete
george john paul
```

The only thing I've thought of is

```
$ perl -p -i -e 's/john paul\s*george/pete/g' ~/foo
```

but this results in

```
$ cat ~/foo
pete
pete
john paul
george
george john paul
```

Is there a way to alter my one-liner? | The only thing you need to add to your one-liner is the option to slurp the file as a single string:

```
perl -0777 -p -i -e 's/john paul\s*george/pete/g' ~/foo
#     ^^^^^
```

See http://perldoc.perl.org/perlrun.html#Command-Switches | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
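A quick before/after check of the accepted one-liner; using -i.bak instead of plain -i keeps the original around for comparison:

```
perl -0777 -p -i.bak -e 's/john paul\s*george/pete/g' ~/foo
diff ~/foo.bak ~/foo    # shows exactly which lines were rewritten
```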
255,633 | I'm trying to copy the current directory as a source. Using full paths gives me the expected behaviour, copying the entire directory into the destination.

```
$ cd /tmp
$ mkdir a b
$ cd a
$ touch 1 2 3
$ cp -r /tmp/a /tmp/b   # use /tmp/a as source
$ ls /tmp/b
a
```

However, using . to refer to the source copies the contents of the source instead of the directory itself.

```
$ cd /tmp
$ mkdir c
$ cd a
$ cp -r . /tmp/c   # use . as source
$ ls /tmp/c
1  2  3
```

What is the difference between . and the absolute path of the current directory? If I want to copy the current directory itself, is there a short reference? (The only way I could see was to use ../a , which seems slightly redundant.) | The only thing you need to add to your one-liner is the option to slurp the file as a single string:

```
perl -0777 -p -i -e 's/john paul\s*george/pete/g' ~/foo
#     ^^^^^
```

See http://perldoc.perl.org/perlrun.html#Command-Switches | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
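For the cp question itself, a short sketch of two standard ways to copy the current directory as a directory rather than just its contents:

```
cp -r "$PWD" /tmp/c      # $PWD is the absolute path of the current directory
cp -r "$(pwd)" /tmp/c    # the same idea via command substitution
```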
255,634 | I have a file ~/foo.txt that has instances of each of the following strings:

```
alpha-1
alpha-2
alpha-3
alpha-4
alpha-5
alpha-6
alpha-7
alpha-8
alpha-9
```

I would like to replace each of these alpha-X with beta-X . Simply replacing alpha- with beta- will not suffice, as there are other instances of alpha- that I want to preserve. I imagine this can be accomplished with something like perl -p -i -e 's/alpha-SOMETHING/beta-SOMETHING/g' but I'm not sure what SOMETHING should be. Is there a solution here? (Of course, I could run nine one-liners, but this seems very inefficient.) | The only thing you need to add to your one-liner is the option to slurp the file as a single string:

```
perl -0777 -p -i -e 's/john paul\s*george/pete/g' ~/foo
#     ^^^^^
```

See http://perldoc.perl.org/perlrun.html#Command-Switches | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
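A hedged single-pass sketch for this question, assuming the suffixes really are the single digits 1-9 listed above:

```
perl -p -i -e 's/alpha-([1-9])/beta-$1/g' ~/foo.txt
```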
255,652 | I am attempting to iterate through 1-30 and say hello #number for each, except for number 10. This is what I have, which is not working:

```
for i in {1..30}; do if [i != 10]; then echo "hello $i"; fi; done
```

My output from this is

```
bash: [i: command not found
```

thirty times. | Your if statement is wrong; it should be if [ $i != 10 ] . Spaces around the [ are mandatory, and your variables should have a $ sign before them if you are reading them.

```
for i in {1..30}; do if [ $i != 10 ]; then echo "hello $i"; fi; done
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29125/"
]
} |
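An equivalent loop using bash arithmetic evaluation, which sidesteps the [ spacing and $ pitfalls the answer describes:

```
for i in {1..30}; do
    (( i == 10 )) && continue   # skip the tenth iteration
    echo "hello $i"
done
```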
255,692 | I'm wondering if $PATH cascades entries. You'll all need to take a leap of faith with me here, but here it goes. Let's say we have a Java executable at /usr/bin/java but this version is very old and outdated. Unfortunately, we don't have su access so we can't just replace it. We can, however, download the current version of the JRE/JDK locally and point to the updated version. My question is, how does bash handle the case where we have two or more executables with the same name but in two or more different locations? Does bash somehow choose which one to execute when we type java into the console? Assuming /usr/bin has many other executables that we need, how would the $PATH look for something like this to work correctly? Ideally, when we type java -version we should see:

```
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
```

instead of

```
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) Client VM (build 24.45-b08, mixed mode, sharing)
```

I'm sure this question has been asked before and has some type of jargon associated with it. I've poked around SE, SO, and some forums but didn't find anything conclusive. | Your $PATH is searched sequentially. For example, if echo $PATH shows /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin , each of those directories is searched in sequence for a given command (assuming the command isn't an alias or a shell builtin). If you want to override specific binaries on a per-user basis (or you just don't have access to override for other users than yourself), I would recommend creating a bin directory in your home directory, and then prefixing your PATH variable with that directory. Like so:

```
$ cd ~
$ pwd
/home/joe
$ mkdir bin
$ echo "$PATH"
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
$ echo 'export PATH="$HOME/bin:$PATH"' >> .bash_profile
```

Then source .bash_profile so the new PATH definition will take effect (or just log out and log in, or restart your terminal emulator).

```
$ source .bash_profile
$ echo "$PATH"
/home/joe/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
```

Now, any executable files you put in /home/joe/bin/ will take precedence over system binaries and executables. Note that if you do have system access and the overrides should apply to all users, the preferred place to put override executables is /usr/local/bin , which is intended for this purpose. In fact, often /usr/local/bin is already the first directory in $PATH specifically to allow this. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29787/"
]
} |
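To confirm which java the shell will actually pick up after such a change, these standard lookups help:

```
type -a java      # every match along $PATH, in search order
command -v java   # only the first match, i.e. what will actually run
hash -r           # clear bash's cached command locations after moving binaries
```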
255,693 | I use feh to view images on my hard drive. However, if I hit [delete] the images are removed from the current slideshow, but not from the disk. What is the right way of removing images from the drive straight from feh? | From man feh :

```
CTRL+delete [delete]
        Remove current file from filelist and delete it
```

CTRL+delete will do the job | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/255693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
255,702 | I tried this command:

```
[silas@mars 11]$ string=xyababcdabababefab
[silas@mars 11]$ echo ${string/abab/"\n"}
xy\ncdabababefab
[silas@mars 11]$
```

I also tried to change "\n" to '\n' and to \n . I can't use AWK or sed (this is part of a homework exercise, and the teacher doesn't allow them in this specific exercise). | You should use -e with echo as follows:

```
echo -e ${string/abab/'\n'}
```

From the manpage:

```
-e     enable interpretation of backslash escapes

If -e is in effect, the following sequences are recognized:
\\     backslash
\a     alert (BEL)
\b     backspace
\c     produce no further output
\e     escape
\f     form feed
\n     new line
\r     carriage return
\t     horizontal tab
\v     vertical tab
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148112/"
]
} |
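An echo-free alternative that sidesteps -e portability differences between shells and echo implementations: build the newline with bash's ANSI-C quoting and print with printf. A sketch using the question's variable:

```
string=xyababcdabababefab
nl=$'\n'                             # a literal newline via ANSI-C quoting
printf '%s\n' "${string/abab/$nl}"   # replace the first abab with a real newline
```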
255,705 | My game server has a custom PHP administration interface, which can run the server manually by running the start shell script with exec and sudo -E . However, if I restart httpd using service httpd restart , both the script and the actual game server get killed, too. How can I change this behaviour so that only the web server get killed, or run the game server some other way? | You should use -e with echo as follows: echo -e ${string/abab/'\n'} From manpage: -e enable interpretation of backslash escapes If -e is in effect, the following sequences are recognized:\\ backslash\a alert (BEL)\b backspace\c produce no further output\e escape\f form feed\n new line\r carriage return\t horizontal tab\v vertical tab | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67178/"
]
} |
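For the service-restart question itself, a common approach is to detach the game server from the web server's session so that restarting httpd cannot take it down. A hedged sketch; the script path and log file are illustrative placeholders, and on systemd-based systems the systemd-run variant is generally the more reliable of the two:

```
# Detach from the web server's session and controlling terminal:
nohup setsid /path/to/start_server.sh > /var/log/game-server.log 2>&1 < /dev/null &

# On systemd systems, run the server as a transient unit instead:
sudo systemd-run --unit=game-server /path/to/start_server.sh
```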
255,707 | I'm talking about the keyboard shortcuts that you use in the command line/terminal. Example: Ctrl + c kills the process, Ctrl + d logs out, Ctrl + z sends a process to the background... etc. I've tested some and found that they are neither terminal (i.e. gnome-terminal , xterm , konsole ) specific nor shell (i.e. bash , zsh ) specific; they even work at tty s. So, I want to know: Who provides these shortcuts? How can I list and modify/define them? | The kernel's terminal driver ( termios ) interprets the special keys that can be typed to send a signal to a process, send end of file, erase characters, etc. This is basic Unix kernel functionality and very similar on most Unix and Linux implementations. The stty command displays or sets the termios special characters, as well as other parameters for the terminal line driver. Invoke stty -a to see the current values of the special characters and other "terminal line settings". In the following examples, you can see that intr is Ctrl + C , eof is Ctrl + D , susp is Ctrl + Z . (I've deleted other output to show only the special character settings.) stty -a special chars on GNU/Linux:

```
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
```

stty -a special characters on FreeBSD:

```
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = ^@; eol2 = ^@; erase = ^?;
        erase2 = ^H; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\;
        reprint = ^R; start = ^Q; status = ^T; stop = ^S; susp = ^Z; time = 0;
        werase = ^W;
```

To change the value of a special character, for example, to change the interrupt character from Ctrl + C to Ctrl + E, invoke stty like this ( ^E is literally two characters, the circumflex ( ^ ) followed by the letter E ):

```
stty intr '^E'
```

For more information see the man pages for stty and termios . On GNU/Linux you can also look at the tty_ioctl man page. Notes: The intr key ( Ctrl + C by default) doesn't actually kill the process, but causes the kernel to send an interrupt signal ( SIGINT ) to all processes within the process group. The processes may arrange to catch or ignore the signal, but most processes will terminate, which is the default behavior. The reason that Ctrl + d logs you out is because the terminal line driver sends EOF (end of file) on the standard input of the shell. The shell exits when it receives end of file on its standard input. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/255707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
255,715 | The file /etc/udev/rules.d/70-persistent-net.rules is auto-generated on a Linux system with udev, if it does not exist, during reboot. But I would like to know how to create this rules file (with a command) without rebooting the server. I was Googling around for a while and found that the rules file is generated by this script: /lib/udev/write_net_rules However, it is impossible to run this script from command line, since (I assume) it wants to be started by udev, with some environment variables set properly. Starting it manually prints error message "missing $INTERFACE". Even if I set env variable INTERFACE=eth0 prior the starting of the script, it still prints error "missing valid match". Not to mention I have two interfaces ( eth0 and eth1 ) and I want the rules file generated for both. I was also thinking to trigger udev events like this, hoping it will start the script from udev itself, but nothing changes: udevadm trigger --type=devices --action=change So, does anybody know how to regenerate the persistent net rules in file /etc/udev/rules.d/70-persistent-net.rules without reboot? | According to man page --action=change is the default value for udevadm . -c, --action=ACTION Type of event to be triggered. The default value is change. Therefore you better try --action=add instead. It should help: /sbin/udevadm trigger --type=devices --action=add | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151804/"
]
} |
255,749 | Ubuntu's do-release-upgrade command upgrades the operating system to the latest release. What is Debian's way or tool for the same purpose (to upgrade to the latest stable release)? | Debian does not provide a single command to upgrade the OS to a new release. The Release Notes for each release include upgrade instructions for supported hardware architectures. You can find release notes for all Debian releases via the Debian Releases page. For example, to upgrade a 64-bit PC from stretch to buster , follow the instructions in Chapter 4. Upgrades from Debian 9 (stretch) under Debian 10 -- Release Notes for Debian 10 (buster), 64-bit PC . You should always be able to find the release notes for the current stable release at https://www.debian.org/releases/stable/releasenotes . Although upgrading a Debian release from " oldstable " to stable is usually painless, it's important to follow the Release Notes because the OS can differ from release to release in ways that could affect your specific installation. The Release Notes also contain information and tips about changes in the new release that can save considerable time and effort.For example, the upgrade process for some previous releases recommended the use of aptitude for the upgrade.For upgrades from stretch to buster , the apt tool is recommended instead of aptitude .(Although aptitude is suggested for resolution of some problems after the upgrade.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49297/"
]
} |
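In practice the release notes' procedure boils down to a short command sequence; a hedged sketch for the stretch-to-buster example mentioned above (the exact steps can differ per release, which is why the notes should still be read first):

```
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak   # keep a fallback copy
sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list    # point APT at the new release
sudo apt update
sudo apt upgrade        # minimal upgrade first, as the release notes recommend
sudo apt full-upgrade   # then the full distribution upgrade
```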
255,854 | I am trying to print a simple value for $AR1_p1 but the variable $i is not evaluating.

```
for i in 1 2 3 4
do
 AR1_p1=22
 AR1_p2=23
 AR1_p3=24
 AR1_p3=25
 echo $AR1_p$i
done
```

It's like concatenating dynamically. Any suggestions on how to fix this? | You can use bash indirect references for that:

```
AR1_p1=22
AR1_p2=23
AR1_p3=24
AR1_p4=25
for i in 1 2 3 4
do
  VARNAME="AR1_p${i}"
  echo "${!VARNAME}"
done
```

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/255854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124631/"
]
} |
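Two related options, sketched for comparison: a nameref (bash 4.3 or later), and a plain array, which avoids the numbered-suffix pattern altogether:

```
AR1_p1=22 AR1_p2=23 AR1_p3=24 AR1_p4=25
for i in 1 2 3 4; do
    declare -n VAR="AR1_p$i"   # VAR now aliases the variable named AR1_p$i
    echo "$VAR"
done
unset -n VAR

# Often simpler: one array, indexed instead of suffixed
AR1=(22 23 24 25)
for v in "${AR1[@]}"; do echo "$v"; done
```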
255,862 | Is it possible to enable "wrap around" in less for the search? This means: if the last occurrence of a pattern has been found, pressing n will start the search again at the beginning of the file (so you do not have to press g and then n ). | Yes, it's possible since release v568. Since this patch (full disclosure: I wrote it), the search modifier ^W enables wrap-around search within the current file. Use it by pressing CTRL-W while on the search prompt. From the manpage:

```
/pattern
    (...)
    ^W    WRAP around the current file. That is, if the search reaches the
          end of the current file without finding a match, the search
          continues from the first line of the current file up to the line
          where it started.
```

One way to make this the default behavior is via the command section in your config file (possibly located at ~/.lesskey , see man lesskey for details):

```
#command
/  forw-search ^W
?  back-search ^W
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
255,875 | How can I start cron at startup of computer, so I can use a cronjob to automatically make a backup using rsync. I am not well known with how to start programs in Linux. I am using Linux Mint 17.3. | Yes , it's possible since release v568 . Since this patch (full disclosure: I wrote it), the search modifier ^W enables wrap-around search within the current file. Use it by pressing CTRL-W while on the search prompt. From the manpage : /pattern (...) ^W WRAP around the current file. That is, if the search reaches the end of the current file without finding a match, the search continues from the first line of the current file up to the line where it started. One way to make this the default behavior is via the command section in your config file (possibly located at ~/.lesskey , see man lesskey for details): #command/ forw-search ^W? back-search ^W | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151916/"
]
} |
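For the cron question itself: on Mint 17.3 the cron daemon is installed and started at boot by default, so usually only the job needs to be added. A hedged sketch; the rsync paths and schedule are illustrative placeholders:

```
sudo service cron status || sudo service cron start   # verify the daemon is running
crontab -e                                            # then add a line such as:
# 0 2 * * * rsync -a --delete /home/me/ /mnt/backup/home-me/
```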
255,882 | I need to use fmt to format some text output in Greek, but it does not behave as it does with Latin characters. Consider for example the sentences with 15 characters below. With Latin characters:

```
$ echo "Have a nice day" | fmt -w 16
Have a nice day
```

but, strangely, with non-Latin characters:

```
$ echo "Ηαωε α νιψε δαυ" | fmt -w 16
Ηαωε α
νιψε δαυ
```

In fact for the above string, the smallest value at which it prints the sentence without line breaks would be -w 28 :

```
$ echo "Ηαωε α νιψε δαυ" | fmt -w 28
Ηαωε α νιψε δαυ
$ echo "Ηαωε α νιψε δαυ" | fmt -w 27
Ηαωε α νιψε
δαυ
```

Can somebody explain why this happens and how to fix it, if possible? | To answer your question, it is not working because Greek characters are non-Latin, Unicode characters, and: Unlike par , fmt has no Unicode support, ... https://en.wikipedia.org/wiki/Fmt Additional notes on the second part of your question, the how-to: unfortunately, although there seems to be a fairly recent technical report regarding how to wrap Unicode, for example Heninger, Unicode Line Breaking Algorithm , 2015-06-01, http://www.unicode.org/reports/tr14/ , this seems to be specification only, with no actual implementation or mention of software how-to examples. You could try asking the author via the email listed. Since the Wikipedia article on fmt referred to par , and it was available via apt-get , I decided to try it on your posted text. But I was unsuccessful; it still doesn't wrap the way you wish:

```
$ echo "Ηαωε α νιψε δαυ" | par 16gr
Ηαωε α
νιψε δαυ
```

The man page was difficult enough that even the author cautioned that it was: not well-written for the end-user , but if you are determined you could try your luck reading it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53510/"
]
} |
255,965 | I am currently following a tutorial about shell scripting located here: http://www.cs.columbia.edu/~sauce/tutorial/ashell.html and I believe my script matches that one exactly. However, when I try to run the example, my output does not match. Since I don't want to be vague and simply ask "why doesn't it work?", I will focus on the part I don't understand: why is there a ":" after the testlogin: command? I have read many forums that discuss the meaning of " : " (with spaces on either side) and also a leading ":" , but none mention the use of the lagging colon. What is the meaning in this context? And if this is a typo, can anyone help me find another typo in the example that might be making it not run correctly? Any help would be greatly appreciated! The code (copied exactly from the site I linked above) is below:

```
#testlogin
useron(){
if ( who | grep $1 > /dev/null)
then
    echo $1 is logged in
else
    echo $1 is not logged in
fi
}
if test $# != 1
then
    echo testlogin: username
else
    useron
fi
```

and the output of

```
paul@paul-LC22UP:~$ ./testlogin paul
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
 is not logged in
```

| testlogin isn't a command here, but an argument to echo . The full command is:

```
echo testlogin: username
```

echo just spits out the text you give it to STDOUT. So the colon has no special meaning; it is part of the verbatim output of echo . This is the output if you call the script with the wrong number of arguments:

```
$ ./testlogin
testlogin: username
$ ./testlogin fred bill adam
testlogin: username
```

This is a strange output, but not necessarily a typo. I would usually expect something more along the lines of:

```
$ ./testlogin
Usage: testlogin username
```

The reason the whole script doesn't work is because the author has made a subtle mistake in calling the useron function, and presumably hasn't tested their script before posting it (or they would have noticed the same error you did). $1 in the useron function is the first argument to that function , not to the whole script; but since useron wasn't called with any arguments, that is the empty string and so it ends up running the command:

```
$ who | grep
```

which will give grep's usage info, and then always trigger the else branch of the condition. You can fix this by passing the first argument to the script into useron like this:

```
else useron $1
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/255965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151965/"
]
} |
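Putting the answer together: the tutorial script with the missing argument passed through. The quoting around $1 and the clearer usage message are extra tidy-ups here, not part of the original one-line fix:

```
#!/bin/bash
#testlogin
useron(){
    if who | grep "$1" > /dev/null
    then
        echo "$1 is logged in"
    else
        echo "$1 is not logged in"
    fi
}
if test $# != 1
then
    echo "Usage: testlogin username"
else
    useron "$1"   # pass the script's first argument into the function
fi
```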
255,976 | In the Bash shell, I can get the command exit status through the $? variable:

```
# ps -ef | grep "haha"
root     15439 15345  0 23:02 pts/0    00:00:00 grep --color=auto haha
# echo $?
0
```

Is it available only in the Bash shell? Or can I also use it in other shells? | The $? exit-code is common to any shell that follows POSIX, and is described in 2.5.2 Special Parameters: ? Expands to the decimal exit status of the most recent pipeline (see Pipelines ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/255976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85056/"
]
} |
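A minimal, shell-agnostic illustration; this runs the same way in any POSIX sh:

```
grep -q root /etc/passwd
status=$?   # capture immediately; the next command will overwrite $?
if [ "$status" -eq 0 ]; then
    echo "found (exit status $status)"
else
    echo "not found (exit status $status)"
fi
```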
256,029 | If I run this command: dmesg | head -n 10 I presume the OS sends back some kind of signal to dmesg once head has read 10 lines. How does this work? What does head tell the kernel? This is different from a program dying, since this is a normal, 'clean' stop. | It depends on the OS buffers and the timing between the 10th and 11th writes of dmesg . After head writes 10 lines, it terminates, and dmesg will receive the SIGPIPE signal if it continues writing to the pipe. Depending on your OS buffer, dmesg will often write more than 10 lines before head consumes them. To see that head has consumed more than 10 lines, you can use: strace -f sh -c 'dmesg | head -n 10' (Look at the head process and count the number of read system calls.) To see the effect of the writing speed: strace -f sh -c "perl -le '$|++;print 1 while 1' | head -n 10" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/256029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5352/"
]
} |
256,055 | I have a new clean installation of CentOS 6.7, it is not on a VM, but on a dedicated notebook. During the installation procedure I've configured my WiFi connection and have added it on eth0 pre-configured connection. I have specified all: name of the connection SSID mode: hoc band: automatic channel: pre-configured MTU: automatic checked automatic connection box and available for all users box too In security section I have inserted WPA & WPA2 Personal, then the corresponding password of the router. In IPv4 section: automatic (DHCP) method and checked the completion of this connection with IPv4 addressing. In IPv6 section: ignore method I log in with root user and corresponding password for have all privileges. The WiFi spy on the WiFi key of the notebook is on, the router is on and Internet works with other devices. But if I ping google.com , it says: unknown host google.com while if I ping 8.8.8.8 , it says: Network is unreachable Since I have configured all the connection data and checked the automatic connection box, I expected that the connection would be automatic when I log in. Is there something that I did wrong? Hope in some friendly advice. | It depends on the OS buffers and the timing between the 10th and 11th writes of dmesg . After head writes 10 lines, it terminates and dmesg will receive SIGPIPE signal if it continues writing to the pipe. Depending on your OS buffer, dmesg will often write more than 10 lines before head consumes them. To see that head had consumed more than 10 lines, you can use: strace -f sh -c 'dmesg | head -n 10' (Look at the head process, count on number of read system calls.) To see how the writing speed effect: strace -f sh -c "perl -le '$|++;print 1 while 1' | head -n 10" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/256055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149852/"
]
} |
256,075 | I noticed that the mknod command required root privileges when a creating a node other than a regular file, FIFO or a Unix socket. Why is that? How can a regular user harm a system or compromise other users' privacy with the mknod command that he can't when creating regular files? | If you could call mknod arbitrarily, then you could create device files owned and accessible by you for any device. The device files give you unlimited access to the corresponding devices; therefore, any user could access devices arbitrarily. For instance, suppose /dev/sda1 holds a file system to which you have no access. (Say, it is mounted to /secret ). Over here, /dev/sda1 is block special 8,1, so if you could call mknod, e.g. mknod ~/my_sda1 b 8 1 , then you could access anything on /dev/sda1 through your own device file for /dev/sda1 regardless of any filesystem restrictions on /dev/sda1 . (You get the device as a flat file without any structure, so you would need to know what to do with it, but there are libraries for accessing block device files.) Likewise, if you could create your own copy of /dev/mem or /dev/kmem , then you could examine anything in main memory; if you could create your own copy of /dev/tty* or /dev/pts/* , then you could record any keyboard input - and so on. Therefore, mknod in the hand of ordinary users is harmful and thus its use must be restricted. N.B. This is why the nodev mount option is crucial for mobile devices, for otherwise you could bring in your own device files on prepared mobile media. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152054/"
]
} |
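The privilege boundary is easy to see from a normal (non-root) shell; a quick sketch using the same 8,1 device numbers as the /dev/sda1 example above:

```
mknod /tmp/myfifo p        # FIFO: allowed for ordinary users
mknod /tmp/mysda1 b 8 1    # block device: fails with "Operation not permitted"
```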
256,100 | I have noticed that when I ssh to a server and then su to the root user, I do not get color in bash. In this specific case when I say "do not get color in bash" I am talking about editing files with vim. Now, if I sudo after login I get color, so no problems there. If I su to root and source /root/.bash_profile then I get color as root. But I do not want to have to source the .bash_profile file every time I su to root. Here are the contents of my /root/.bashrc and /root/.bash_profile files. What can I do to get color when doing su?

```
# .bashrc

# User specific aliases and functions
# You may uncomment the following lines if you want `ls' to be colorized:
export LS_OPTIONS='--color=auto'
eval "`dircolors`"
alias ls='ls $LS_OPTIONS'
alias ll='ls $LS_OPTIONS -l'
alias l='ls $LS_OPTIONS -lA'
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
```

=============================================

```
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH

alias vi='/usr/bin/vim'
alias grep='/bin/grep --color'
export EDITOR=/usr/bin/vim

# HISTSIZE = number of lines in memory while session is ongoing
# HISTFILESIZE = maximum number of lines in the history file on disk
export HISTSIZE=3000
export HISTFILESIZE=5000
export HISTFILE=/root/history/.bash_hist-$(who -m | awk '{print $1}')
```

| Either use su - to get a login shell, or move the aliases to ~/.bashrc . See: Answer on SuperUser | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81926/"
]
} |
256,104 |

```
test@debian:~$ echo `echo \`echo "uh!"\``
uh!
```

How does Bash do that? It seems that it first executes the expression in the non-escaped backticks, which gives back (the double-quotes "" are removed, right?): `echo uh!` So we have an input equivalent to:

```
test@debian:~$ echo `echo uh!`
```

(side note: really, why does it work? Because:

```
test@debian:~$ echo `echo uh!`
-bash: !`: event not found
```

) Then Bash executes the expression in backticks again, which gives:

```
test@debian:~$ echo uh!
```

Which finally gives us the output: uh! Is that right? And how could one encapsulate four echo-backtick-expressions into each other? | Depends on your version of bash :

```
bash-4.2$ echo `echo \`echo "uh!"\``
bash: !"\``: event not found
bash-4.3$ echo `echo \`echo "uh!"\``
uh!
```

In bash-4.3 , !" is no longer eligible as a history event designator, so history expansion does not apply. Other than that, it's just normal backtick nesting syntax. Inside backticks, the backslash character is overloaded (yet again) to do that nested expansion again. You can nest as many levels as you want:

```
echo `echo \`echo \\\`echo \\\\\\\`echo whatever\\\\\\\`\\\`\``
```

Which is the cumbersome equivalent of:

```
echo $(echo $(echo $(echo $(echo whatever))))
```

However note that in both versions, the command substitution is subject to word splitting. So, you'd want to quote them to prevent it. With bash , dash , pdksh , yash , zsh , it's relatively easy:

```
echo "`echo "\`echo "\\\`echo "\\\\\\\`echo whatever\\\\\\\`"\\\`"\`"`"
```

With the Bourne or Korn shell, you also need to escape the " , so that becomes:

```
echo "`echo \"\`echo \\\"\\\`echo \\\\\\\"\\\\\\\`echo whatever\\\\\\\`\\\\\\\"\\\`\\\"\`\"`"
```

Compare with:

```
echo "$(echo "$(echo "$(echo "$(echo whatever)")")")"
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256104",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
256,120 | I would like to simplify the output of a script by suppressing the output of secondary commands that are usually successful. However, using -q on them hides the output when they occasionally fail, so I have no way of understanding the error. Additionally, these commands log their output on stderr . Is there a way to suppress a command's output only if it succeeds ? For example (but not limited to) something like this: mycommand | fingerscrossed If all goes well, fingerscrossed catches the output and discards it. Else it echoes it to the standard or error output (whatever). | moreutils ' chronic command does just that: chronic mycommand will swallow mycommand 's output, unless it fails, in which case the output is displayed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/256120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11791/"
]
} |
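If installing moreutils is not an option, a small shell function gives a similar effect; note that, unlike chronic, this sketch merges stdout and stderr, so their interleaving is lost:

```
quietly() {
    # Run "$@"; discard its output on success, replay it on failure.
    local out status
    out=$("$@" 2>&1); status=$?
    [ "$status" -ne 0 ] && printf '%s\n' "$out" >&2
    return "$status"
}

quietly mycommand
```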
256,128 | I am trying to set up bi-directional or two-way sync with rsync. In my case I only need to delete the files when syncing from B to A. So, I was thinking of running rsync twice, as follows:

```
rsync -rtuv ./A/ ./B/
rsync -rtuv --delete ./B/ ./A/
```

The problem with this solution is that when I run rsync (B->A) right after running rsync (A->B), any new file created in between the syncs will also get removed. Is there a way I can specify a timestamp as a condition, so that it only deletes a file if it was created before this date/time? Updated: I understand there is a unison solution, but the problem with unison is that it is required on both ends. I am syncing with a remote server and I can not install unison on the remote end. | rsync is the wrong tool for this task, for exactly the reasons that you have encountered. Instead, consider using unison :

```
unison A/ B/
```

The first time you run this it will identify files that are uniquely in A , and those that are uniquely in B . It will also flag those that are in both places and ask you to identify which is to be overwritten. The next time you run this it will copy changes from A to B and also B to A , flagging any files that have been changed in both places for manual resolution.

```
mkdir A B
date > A/date
who > B/who
unison A/ B/
# Lots of output from unison, showing synchronisation
ls A
date  who
ls B
date  who
date > A/date
unison A/ B/
# Lots of output from unison, showing synchronisation
```

There are a number of useful flags available for unison which help automate the process by defining assumptions and thereby reducing the number of questions you're asked during the synchronisation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26989/"
]
} |
256,138 | The short version of the question: I am looking for speech recognition software that runs on Linux and has decent accuracy and usability. Any license and price is fine. It should not be restricted to voice commands, as I want to be able to dictate text. More details: I have unsatisfyingly tried the following:

CMU Sphinx
CVoiceControl
Ears
Julius
Kaldi (e.g., Kaldi GStreamer server)
IBM ViaVoice (used to run on Linux but was discontinued years ago)
NICO ANN Toolkit
OpenMindSpeech
RWTH ASR
shout
silvius (built on the Kaldi speech recognition toolkit)
Simon Listens
ViaVoice / Xvoice
Wine + Dragon NaturallySpeaking + NatLink + dragonfly + damselfly
https://github.com/DragonComputer/Dragonfire : only accepts voice commands

All the above-mentioned native Linux solutions have both poor accuracy and usability (or some don't allow free-text dictation but only voice commands). By poor accuracy, I mean an accuracy significantly below what the speech recognition software I mention below for other platforms achieves. As for Wine + Dragon NaturallySpeaking, in my experience it keeps crashing, and I don't seem to be the only one to have such issues, unfortunately. On Microsoft Windows I use Dragon NaturallySpeaking, on Apple Mac OS X I use Apple Dictation and DragonDictate, on Android I use Google speech recognition, and on iOS I use the built-in Apple speech recognition. Baidu Research released yesterday the code for its speech recognition library using Connectionist Temporal Classification implemented with Torch. Benchmarks from Gigaom are encouraging, as shown in the table below, but I am not aware of any good wrapper around it to make it usable without quite some coding (and a large training data set):

```
System            Clean (94)   Noisy (82)   Combined (176)
Apple Dictation        14.24        43.76            26.73
Bing Speech            11.73        36.12            22.05
Google API              6.64        30.47            16.72
wit.ai                  7.94        35.06            19.41
Deep Speech             6.56        19.06            11.85
```

Table 4: Results (%WER) for 3 systems evaluated on the original audio. All systems are scored only on the utterances with predictions given by all systems. The number in the parentheses next to each dataset, e.g. Clean (94), is the number of utterances scored. There exist some very alpha open-source projects:

https://github.com/mozilla/DeepSpeech (part of Mozilla's Vaani project: http://vaani.io ( mirror ))
https://github.com/pannous/tensorflow-speech-recognition
Vox, a system to control a Linux system using Dragon NaturallySpeaking: https://github.com/Franck-Dernoncourt/vox_linux + https://github.com/Franck-Dernoncourt/vox_windows
https://github.com/facebookresearch/wav2letter
https://github.com/espnet/espnet
http://github.com/tensorflow/lingvo (to be released by Google, mentioned at Interspeech 2018)

I am also aware of this attempt at tracking states of the art and recent results (bibliography) on speech recognition, as well as this benchmark of existing speech recognition APIs. I am aware of Aenea , which allows speech recognition via Dragonfly on one computer to send events to another, but it has some latency cost. I am also aware of these two talks exploring Linux options for speech recognition:

2016 - The Eleventh HOPE: Coding by Voice with Open Source Speech Recognition (David Williams-King)
2014 - Pycon: Using Python to Code by Voice (Tavis Rudd)

| Right now I'm experimenting with using KDE Connect in combination with Google speech recognition on my Android smartphone. KDE Connect allows you to use your Android device as an input device for your Linux computer (there are also some other features). You need to install the KDE Connect app from the Google Play store on your smartphone/tablet and install both kdeconnect and indicator-kdeconnect on your Linux computer. For Ubuntu systems the install goes as follows:

```
sudo add-apt-repository ppa:vikoadi/ppa
sudo apt update
sudo apt install kdeconnect indicator-kdeconnect
```

The downside of this installation is that it installs a bunch of KDE packages that you don't need if you don't use the KDE desktop environment. Once you pair your Android device with your computer (they have to be on the same network) you can use the Android keyboard and then click/press the mic to use Google speech recognition. As you talk, text will start to appear wherever your cursor is active on your Linux computer. As for the results, they are a bit mixed for me, as I'm currently writing a technical astrophysics document and Google speech recognition struggles with jargon that you don't typically read. Also forget about it figuring out punctuation or proper capitalization. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16704/"
]
} |
256,144 | Given a file like this:

```
testqweasdxcaasdfarrfsxcadasdfasdcadacdacqaeasdcvasgfasdcvewqqweadffavasfgfasdfeqwqweaefawasdadfaeasdfweasdferafbntsgnjdnuydidhyhnydfgbyasfgadsgeqwqwertargtragaadfgasgaaasgarhsdtjshyjuysysdghjsthtewqsdtjstsasdghysdmksaadfbgns,asfhytewatbafgq4tqweasfdg5abfgshtsadtyhwafbvgnasfgaghafgewqqweafghtaasg56angadfg6435aasdfgr5asdfgfdagh5tewq
```

I want to print all the lines between qwe and ewq in a separate file. This is what I have so far:

```
#!/bin/bash
filename="foo.txt"
#While loop to read line by line
while read -r line
do
    readLine=$line
    #If the line starts with ST then echo the line
    if [[ $readLine = qwe* ]] ; then
        echo "$readLine"
        read line
        readLine=$line
        if [[ $readLine = ewq* ]] ; then
            echo "$readLine"
        fi
    fi
done < "$filename"
```

| You need to make some changes to your script (in no particular order): Use IFS= before read to avoid removing leading and trailing spaces. As $line is not changed anywhere, there is no need for the variable readLine . Do not use read in the middle of the loop! Use a Boolean variable to control printing. Make clear the start and end of printing. With those changes, the script becomes:

```
#!/bin/bash
filename="foo.txt"

#While loop to read line by line
while IFS= read -r line; do

    #If the line starts with ST then set var to yes.
    if [[ $line == qwe* ]] ; then
        printline="yes"
        # Just to make each line start very clear, remove in use.
        echo "----------------------->>"
    fi

    # If variable is yes, print the line.
    if [[ $printline == "yes" ]] ; then
        echo "$line"
    fi

    #If the line starts with ST then set var to no.
    if [[ $line == ewq* ]] ; then
        printline="no"
        # Just to make each line end very clear, remove in use.
        echo "----------------------------<<"
    fi

done < "$filename"
```

Which could be condensed in this way:

```
#!/bin/bash
filename="foo.txt"

while IFS= read -r line; do
    [[ $line == qwe* ]] && printline="yes"
    [[ $printline == "yes" ]] && echo "$line"
    [[ $line == ewq* ]] && printline="no"
done < "$filename"
```

That will print the start and end lines (inclusive). If there is no need to print them, swap the start and end tests:

```
#!/bin/bash
filename="foo.txt"

while IFS= read -r line; do
    [[ $line == ewq* ]] && printline="no"
    [[ $printline == "yes" ]] && echo "$line"
    [[ $line == qwe* ]] && printline="yes"
done < "$filename"
```

However, it would be much better (if you have bash version 4.0 or better) to use readarray and loop over the array elements:

```
#!/bin/bash
filename="infile"

readarray -t lines < "$filename"

for line in "${lines[@]}"; do
    [[ $line == ewq* ]] && printline="no"
    [[ $printline == "yes" ]] && echo "$line"
    [[ $line == qwe* ]] && printline="yes"
done
```

That will avoid most of the issues of using read . Of course, you could use the recommended (in comments; thanks, @costas) sed line to get only the lines to be processed:

```
#!/bin/bash
filename="foo.txt"

readarray -t lines <<< "$(sed -n '/^qwe.*/,/^ewq.*/p' "$filename")"

for line in "${lines[@]}"; do
    :   # Do all your additional processing here, with a clean input.
done
```

| {
"source": [
"https://unix.stackexchange.com/questions/256144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68738/"
]
} |
256,149 | I have found multiple examples of "esac" appearing at the end of a bash case statement but I have not found any clear documentation on it's use. The man page uses it, and even has an index on the word ( https://www.gnu.org/software/bash/manual/bashref.html#index-esac ), but does not define it's use. Is it the required way to end a case statement, best practice, or pure technique? | Like fi for if and done for for , esac is the required way to end a case statement. esac is case spelled backward, rather like fi is if spelled backward. I don't know why the token ending a for block is not rof . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/256149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87949/"
]
} |
256,205 | I am thinking about the man page sections 1 = user commands , 2 = system calls etc. Is there a way, a command that will tell me what sections are available to read besides running something like man 1 gedit , man 2 gedit , man 3 gedit etc? | One option: apropos fork to limit to exact word: apropos -e fork Alternatively, as apropos uses regex by default: apropos "^fork$" Alternatively use man -k instead of apropos . Check out man pages for apropos and man for more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29125/"
]
} |
256,303 | time writes to stderr , so one would assume that adding 2>&1 to the command line should route its output to stdout . But this does not work:

```
test@debian:~$ cat file
one two three four
test@debian:~$ time wc file > wc.out 2>&1

real    0m0.022s
user    0m0.000s
sys     0m0.000s
test@debian:~$ cat wc.out
 1  4 19 file
```

Only with parentheses does it work:

```
test@debian:~$ (time wc file) > wc.out 2>&1
test@debian:~$ cat wc.out
 1  4 19 file

real    0m0.005s
user    0m0.000s
sys     0m0.000s
```

Why are parentheses needed in this case? Why isn't time wc interpreted as one single command? | In ksh , bash and zsh , time is not a command (builtin or not), it's a reserved word in the language like for or while . It's used to time a pipeline [1]. In:

```
time for i in 1 2; do cmd1 "$i"; done | cmd2 > redir
```

You have special syntax that tells the shell to run that pipeline:

```
for i in 1 2; do cmd1 "$i"; done | cmd2 > redir
```

And report timing statistics for it. In:

```
time cmd > output 2> error
```

It's the same, you're timing the cmd > output 2> error command, and the timing statistics still go on the shell's stderr. You need:

```
{ time cmd > output 2> error; } 2> timing-output
```

Or:

```
exec 3>&2 2> timing-output
time cmd > output 2> error 3>&-
exec 2>&3 3>&-
```

For the shell's stderr to be redirected to timing-output before the time construct (again, not command ) is used (here to time cmd > output 2> error 3>&- ). You can also run that time construct in a subshell that has its stderr redirected:

```
(time cmd > output 2> error) 2> timing-output
```

But that subshell is not necessary here; you only need stderr to be redirected at the time that time construct is invoked. Most systems also have a time command. You can invoke that one by disabling the time keyword. All you need to do is quote that keyword somehow, as keywords are only recognised as such when literal.

```
'time' cmd > output 2> error-and-timing-output
```

But beware the format may be different and the stderr of both time and cmd will be merged into error-and-timing-output . Also, the time command, as opposed to the time construct, cannot time pipelines or compound commands or functions or shell builtins... If it were a builtin command, it might be able to time function invocations or builtins, but it could not time redirections or pipelines or compound commands. [1] Note that bash has (what can be considered as) a bug whereby time (cmd) 2> file (but not time cmd | (cmd2) 2> file for instance) redirects the timing output to file | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/256303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
256,321 |

```
$ df /tmp
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/root      480589544 42607368 413546516  10% /
$ ls /dev/root
ls: cannot access /dev/root: No such file or directory
```

I wanted to check if my default Debian installation places /tmp in RAM or on the disk, but now am completely confused. Why would a non-existing device be reported as a filesystem type? What does "mounted on /" mean? Here is the output of mount :

```
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=811520k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1013960,mode=755)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1623020k)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
none_debugfs on /sys/kernel/debug type debugfs (rw,relatime)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=811520k,mode=700,uid=1000,gid=1000)
```

| If the output is as above, it's on the hard disk. You can get /dev/root by looking at the kernel commandline:

```
$ cat /proc/cmdline | grep root
BOOT_IMAGE=/boot/vmlinuz-3.19.0-32-generic root=UUID=0cde5cf9-b15d-4369-b3b1-4405204fd9ff ro
```

So /dev/root is equivalent to the partition with the UUID printed above; yours will differ. To look this UUID up, use

```
$ sudo blkid
/dev/sda1: UUID="0cde5cf9-b15d-4369-b3b1-4405204fd9ff" TYPE="ext4"
/dev/sda5: UUID="37bc6a9c-a27f-43dc-a485-5fb1830e1e42" TYPE="swap"
/dev/sdb1: UUID="177c3cec-5612-44a7-9716-4dcba27c69f9" TYPE="ext4"
```

As you can see, the matching partition is /dev/sda1 . So your /tmp is on the hard disk. Another giveaway in the output of df is the mountpoint / . If you mounted /tmp in RAM, you'd instead get

```
$ df /tmp
Filesystem     1K-blocks  Used Available Use% Mounted on
tmpfs            3640904    20   3640884   1% /tmp
```

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
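As a follow-up to the answer above: on systems that ship util-linux's findmnt , the /proc/cmdline and blkid steps can be collapsed into one command. findmnt / prints the source device, filesystem type and mount options of the root mount, and findmnt -T /tmp shows which mount actually backs /tmp , so the SOURCE/FSTYPE columns tell you immediately whether it is disk-backed or tmpfs.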
256,409 | Updated : Clarify line number requirement, some verbosity reductions From the command line, is there a way to: check a file of English text to find repeat-word typos, along with line numbers where they are found, in order to help correct them? Example 1 Currently, to help finish an article or other piece of English writing, aspell -c text.txt is helpful for catching spelling errors. But, not helpful when the error is an unintentional consecutive repetition of a word. highlander_typo.txt : There can be only one one. Running aspell : $ aspell -c highlander_typo.txt Probably since aspell is a spell-checker, not a grammar-checker, so repeat word typos are beyond its intended feature scope. Thus the result is this file passes aspell 's check because nothing is "wrong" in terms of individual word spelling. The correct sentence is There can be only one. , the second one is an unintended repeat-word typo. Example 2 But a different situation is for example kylie_minogue.txt : La la la Here the repetition is not a typo, as these are part of an artist's song lyrics . So the solution should not presume and "fix" anything by itself, otherwise it could overwrite intentional repeated words. Example 3: Multi-line jefferson_typo.txt : He has has refused his Assent to Laws, the most wholesome and necessaryfor the public good.He has forbidden his Governors to pass Laws of immediate andand pressing importance, unless suspended in their operation till hisAssent should be be obtained; and when so suspended, he has utterlyneglected to attend to them. Modified from The Declaration of Independence In the above six lines, 1: He has has refused should be He has refused , the second has is a repeat-word typo 5: should be be obtained should be should be obtained , the second be is a repeat-word typo However, did you notice a third repeat-word typo? 3: ... immediate and 4: and pressing ... This is also a repeat-word typo because though they are on separate lines they are still part of the same English sentence, the trailing end of the line above has a word that is accidentally added at the start of the next line. Rather tricky to detect by eye due to the repetition being on opposite sides of a passage of text. Intended output an interactive program with a process similar to aspell -c yet able to detect repeat-words, or, a script or combination of commands able to extract line numbers and the suspected repeat words. This info makes it easier to use an editor such as vim to jump to the repeat words and make fixes where appropriate. Using above multi-line jefferson_typo.txt , the desired output would be something like: 1: has has3: and4: and5: be be or: 1: He [has has] refused his Assent to Laws, the most wholesome and necessary3: He has forbidden his Governors to pass Laws of immediate [and]4: [and] pressing importance, unless suspended in their operation till his5: Assent should [be be] obtained; and when so suspended, he has utterly I am actually not entirely sure how to display the difficult case of inter-line or cross-line repeat-word, such as the and repetition above, so don't worry if your solution doesn't resemble this exactly. But I hope that, like the above, it shows: relevant original input's line number some way to draw attention to what repeated, especially helpful if the line of text is also quite long. if the full line is displayed to give context (credit: @Wildcard), then there needs to be a way to somehow render the repeated word or words distinctively. 
The example shown here marks the repetition by enclosing them within ASCII characters [ ] . Alternatively, perhaps mimic grep --colors=always to colorize the line's matches for display in a color terminal Other considerations text, should stay as plain text files no GUI solutions please, just textual. ssh -X X11 forwarding not reliably available and need to edit over ssh Unsuccessful attempts To try to find duplicates, uniq came to mind, so the plan was to first determine how to get repeat-word recognition to work on a single line at first. In order to use uniq we would need to first convert words on a line, to becoming one word per line. $ tr ' ' '\n' < highlander_typo.txtTherecanbeonlyoneone. Unfortunately: $ tr ' ' '\n' < highlander_typo.txt | uniq -D Nothing. This is because for -D option, which normally reveals duplicates, input has to be exactly a duplicate line. Unfortunately the period . at the end of the repeated word one negates this. It just looks like a different line. Not sure how I would work around arbitrary punctuation marks such as this period, and somehow add it back after tr processing. This was unsuccessful. But if it were successful, next there would need to be a way to include this line's line number, since the input file could have hundreds of lines and it would help to indicate which line of the input file, that the repeat-word was detected on. This single-line code processing would perhaps be part of a parent loop in order to do some kind of line-by-line multi-line processing and thus be able to process all lines in a file, but unfortunately getting past even single-line repeat-word recognition has been problematic. | Edited: added install and demo You need to take care of at least some edge cases, like repeated words at the end (and beginning) of the line. search should be case insensitive, because of frequent errors like The the apple . probably you want to restrict search only to word constituent to not match something like ( ( a + b) + c ) (repeated opening parentheses. only full words should match to eliminate the thesis When it comes to human language Unicode characters inside words should properly interpreted All in all I recommend pcregrep solution: pcregrep -Min --color=auto '\b([^[:space:]]+)[[:space:]]+\1\b' file Obviously color and line number ( n option) is optional, but usually nice to have. Install On Debian-based distributions you can install via: $ sudo apt-get install pcregrep Example Run the command on jefferson_typo.txt to see: $ pcregrep -Min --color=auto '\b([^[:space:]]+)[[:space:]]+\1\b' jefferson_typo.txt1:He has has refused his Assent to Laws, the most wholesome and necessary3:He has forbidden his Governors to pass Laws of immediate andand pressing importance, unless suspended in their operation till his5:Assent should be be obtained; and when so suspended, he has utterly The above is just a text capture, but on a color-supported terminal, matches are colorized: has has and and be be | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135509/"
]
} |
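If pcregrep isn't installed, GNU grep can catch the same-line cases (though not the cross-line and ), since it supports back-references and \b as extensions: grep -inE '\b([[:alpha:]]+) +\1\b' jefferson_typo.txt reports lines 1 ( has has ) and 5 ( be be ) but, being line-based, misses the repetition split across lines 3 and 4.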
256,412 | I have A LOT of files named "bar_foo.txt" in a directory, and I'd like to rename them to "foo_bar.txt" instead. In sed and vim the regex to do this would be something like 's/\(bar\)_\(foo\)/\2_\1/g' , backreferencing the search in the replacement. Is there a way to do this with rename ? I've seen some hacks piping ls to sed to bash , but that's obviously not an amazing way to do it. Are there other tools that do this? Is there a name for the "sed and vim flavour" of regex? | If you have the rename implementation with Perl regexes (as on Debian/Ubuntu/…, or prename on Arch Linux), you need $1 instead of \1 . Also, no backslashes on capturing parentheses: rename 's/(.*)_(.*)/$2_$1/' *_* If not, you have to implement it yourself. #! /bin/bashfor file in *_* ; do left=${file%_*} right=${file##*_} mv "$file" "$right"_"$left"done Note: As written, both commands rename a_b_c to c_a_b . To get b_c_a , change the first capture group to .*? in the first case, or % to %% and ## to # in the second one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33928/"
]
} |
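Before running either command in the answer above on a large directory, a dry run is worth doing — the Perl rename accepts -n to print the renames without performing them: rename -n 's/(.*)_(.*)/$2_$1/' *_* Note that on a literal bar_foo.txt the greedy groups swap the extension too (giving foo.txt_bar ); anchoring the extension keeps it in place, e.g. rename -n 's/(.*)_(.*)(\.txt)$/$2_$1$3/' *_*.txt which renames bar_foo.txt to foo_bar.txt .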
256,427 | I am looking to grep an Apache access_log file from a specific date/time to the end of the file, so for example I want to grep from the first match of the following string to the end of the file: 19/Jan/2016:22: What is the simplest way to do this? | Provided there are fewer than 999,999 lines: grep -A 999999 '19/Jan/2016:22:' access_log But this would be a better solution as it doesn't restrict the number of lines after the match: sed -n '/19\/Jan\/2016:22:/,$p' access_log | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66537/"
]
} |
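The same from-first-match-to-EOF filter in awk , if you prefer it to sed : awk '/19\/Jan\/2016:22:/{f=1} f' access_log sets a flag on the first matching line and prints every line once the flag is set. To avoid escaping the slashes, use a literal-string match instead: awk 'index($0, "19/Jan/2016:22:"){f=1} f' access_log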
256,434 | I want to check if a shell variable contains an absolute path. I don't care if the path exists or not—if it doesn't I'm going to create it—but I do want to ensure that I'm dealing with an absolute pathname. My code looks something like the following: myfunction() { [ magic test to see if "$1" is an absolute path ] || return 1 mkdir -p "$(dirname "$1")" || return 1 commands >> "$1"} Or, the use case where the absolute path to be verified is intended to be a directory: anotherfunction() { [ same magic test ] || return 1 mkdir -p "$1" dostuff >> "$1/somefile"} If this were awk I would do the check like so: myvar ~ /^\// There must be a clean way to do this with the shell's string handling, but I'm having trouble coming up with it. (Mentioning a bash -specific solution would be fine but I'd like to know how to do this portably, also. POSIX string handling seems like it should be sufficient for this.) | You can just do: case $1 in (/*) pathchk -- "$1";; (*) ! : ;; esac That should be enough. And it will write diagnostics to stderr and return failure for inaccessible or uncreatable components. pathchk isn't about existing pathnames - it's about usable pathnames. The pathchk utility shall check that one or more pathnames are valid (that is, they could be used to access or create a file without causing syntax errors) and portable (that is, no filename truncation results) . More extensive portability checks are provided by the -p option. By default, the pathchk utility shall check each component of each pathname operand based on the underlying file system. A diagnostic shall be written for each pathname operand that: Is longer than {PATH_MAX} bytes (see Pathname Variable Values in <limits.h> ) Contains any component longer than {NAME_MAX} bytes in its containing directory Contains any component in a directory that is not searchable Contains any character in any component that is not valid in its containing directory The format of the diagnostic message is not specified, but shall indicate the error detected and the corresponding pathname operand. It shall not be considered an error if one or more components of a pathname operand do not exist as long as a file matching the pathname specified by the missing components could be created that does not violate any of the checks specified above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
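If all you need is the starts-with-slash test itself, without the extra validity checking that pathchk adds in the answer above, the portable idiom is the bare case : case $1 in /*) ;; *) return 1;; esac and in bash specifically you can write [[ $1 == /* ]] || return 1 (the [[ ]] construct glob-matches its right-hand side), or the awk-like regex form [[ $1 =~ ^/ ]] .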
256,494 | I understand that sed is a command to manipulate text file. From my Googling, it seems -i means perform the operation on the file itself, is this correct? What about '1d' ? | In sed : -i option will edit the input file in-place '1d' will remove the first line of the input file Example: % cat file.txt foobar% sed -i '1d' file.txt % cat file.txt bar Note that, most of the time it's a good idea to take a backup while using the -i option so that you have the original file backed up in case of any unexpected change. For example, if you do: sed -i.orig '1d' file.txt the original file will be kept as file.txt.orig and the modified file will be file.txt . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/256494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15657/"
]
} |
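A few more examples of the part before d , which is an address (a line number, a range, $ for the last line, or /regex/ ) — all with the same -i backup caveat as above: sed -i '$d' file.txt deletes the last line, sed -i '2,4d' file.txt deletes lines 2 through 4, and sed -i '/^foo/d' file.txt deletes every line starting with foo .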
256,495 | When I try to execute mail from inside a function in a bash script it creates something similar to a fork bomb. To clarify, this creates the issue: #!/bin/bashmail() { echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"}mailexit 0 Sometimes you can just kill the command and it'll kill the child processes, but sometimes you'll have to killall -9 . It doesn't care whether the mail were sent or not. The fork bomb is created either way. And it doesn't seem as adding any check for the exit code, such as if ! [ "$?" = 0 ] , helps. But the script below works as intended, either it outputs an error or it sends the mail. #!/bin/bashecho "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"exit 0 Why does this happen? And how would you go about checking the exit code of the mail command? | You're invoking the function mail from within the same function: #!/bin/bashmail() { # This actually calls the "mail" function # and not the "mail" executable echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"}mailexit 0 This should work: #!/bin/bashmailfunc() { echo "Free of oxens" | mail -s "Do you want to play chicken with the void?" "[email protected]"}mailfuncexit 0 Note that function name is no longer invoked from within the function itself. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/256495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149009/"
]
} |
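If you would rather keep the function named mail , bash can be told to skip function lookup with the command builtin. Inside the function, write: echo "Free of oxens" | command mail -s "Do you want to play chicken with the void?" "[email protected]" command mail resolves only builtins and executables in PATH , never functions, so the function no longer calls itself.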
256,497 | Throughout the POSIX specification, there's provision ( 1 , 2 , 3 ...) to allow implementations to treat a path starting with two / specially. A POSIX application (an application written to the POSIX specification to be portable to all POSIX compliant systems) cannot assume that //foo/bar is the same as /foo/bar (though they can assume that ///foo/bar is the same as /foo/bar ). Now what are those POSIX systems (historical and still maintained) that treat //foo specially? I believed (I've now been proven wrong ) that POSIX provision was pushed by Microsoft for their Unix variant (XENIX) and possibly Windows POSIX layer (can anyone confirm that?). It is used by Cygwin which also is a POSIX-like layer for Microsoft Windows. Are there any non-Microsoft Windows systems? OpenVMS? On systems where //foo/bar is special, what is it used for? //host/path for network file systems access? Virtual file systems? Do some applications running on Unix-likes —if not the system's API— treat //foo/bar paths specially (in contexts where they otherwise treat /foo/bar as the path on the filesystem)? Edit , I've since asked a question on the austin-group mailing list about the origin of //foo/bar handling in the spec, and the discussion is an interesting read (from an archaeology point of view at least). | This is a compilation and index of the answers given so far. This post is community wiki , it can be edited by anybody with 100+ reputation and nobody gets reputation from it. Feel free to post your own answer and add a link to it in here (or wait for me to do it). Ideally, this answer should just be a summary (with short entries while individual other answers would have the details). Currently actively maintained systems: Cygwin . A POSIX layer for Microsoft Windows. Used for Windows UNC paths . UWIN since 1.3. Another POSIX layer for Windows. Used at least for //host/file network file sharing paths. @OlivierDulac IBM z/OS as mentioned in the POSIX bug tracker , z/OS resolves //pathname requests to MVS datasets , not to network files. Example . Defunct systems @BinaryZebra Apollo Domain/OS (confirmed). Also mentioned at Official Description UNC (Universal Naming Convention) as the possible origin of //host/path notations ( see also , page 2-15). According to Donn Terry , it was HP (which acquired Apollo Computers) that pushed for inclusion of that provision in the POSIX spec for Domain/OS. @jillagre Tektronix Utek ( corroborated ), where //host/path is a path on a distributed file system . @gilles QNX 4 with the FLEET distributed processing system, where //123/ path is a / path on node 123. (Mentioned in the QNX 6 documentation .) @roaima AT&T SysV Release 3 (unverified). //host/path in (discontinued in SVR4) RFS Remote File Sharing system. @Scott SEL/Gould UTX-32 (unverified). Used for //host/path . Applications that treat //foo/bar specially for paths @Prem Perforce where //depot/A/B/C/D refers to a path in a depot . @WChargin Blender . In its configuration you use a // prefix for relative paths (to the blend associated with the data-block) . The Bazel build system uses a // prefix for labels of targets within the Bazel build graph . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/256497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
256,508 | Consider a symlink I am making to my Music directory named music . ln -s Music music now consider following sequence of commands: edward@ArchLinux:~$ readlink musicMusicedward@ArchLinux:~$ readlink music/edward@ArchLinux:~$ I am getting output only if I am not using / at the end of symlink name. I wonder if /dir and /dir/ are different. Can anybody explain? | readlink expects a symbolic link, and then displays the file/dir that symbolic link points to. in your first attempt: it sees the symbolic link, so it displays what it points to in your second attempt: it sees music/., which is Music/. which is the pointed directory, not the symbolic link pointing to that directory, so it doesn't have a link to interpret. (In other words, when you add the final "/", the shell's file descriptor is for the pointed directory instead). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
256,518 | I am running arch on a notebook, but I do not need the wireless connection. Nevertheless the adapter is continuously running, even if it is not necessary. Is it possible to disable it temporarily? And if yes, how? | Find the device name with the command ip link , set it to down mode with ip link set <device> down . The device is most likely named something like wlp3s0. If operation isn't permitted, perform the command with sudo . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67222/"
]
} |
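If you want the radio powered down rather than just the interface de-configured, rfkill (where installed) works one level lower than ip : rfkill list shows the wireless devices, rfkill block wifi disables them, and rfkill unblock wifi re-enables them. Unlike a plain ip link set ... down , a soft block is also less likely to be silently undone by a network manager bringing the interface back up.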
256,522 | I have a file that contains a list of names. i.e.: Long Name One (001)Long Name Two (201)Long Name Three (123)... with spaces and some special characters. I wanted to make directories out of these names, i.e.: cat file | xargs -l1 mkdir It makes individual directories separated by spaces, i.e. Long , Name , One , Two , Three , instead of Long Name One (001) , Long Name Two (201) , Long Name Three (123) . How can I do that? | Use -d '\n' with your xargs command: cat file | xargs -d '\n' -l1 mkdir From manpage: -d delim Input items are terminated by the specified character. Quotes and backslash are not special; every character in the input is taken literally. Disables the end-of-file string, which is treated like any other argument. This can be used when the input consists of simply newline-separated items, although it is almost always better to design your program to use --null where this is possible. The specified delimiter may be a single character, a C-style character escape such as \n, or an octal or hexadecimal escape code. Octal and hexadecimal escape codes are understood as for the printf command. Multibyte characters are not supported. Example output: $ lsfile$ cat fileLong Name One (001)Long Name Two (201)Long Name Three (123)$ cat file | xargs -d '\n' -l1 mkdir$ ls -1fileLong Name One (001)Long Name Three (123)Long Name Two (201) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105126/"
]
} |
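If your xargs lacks -d (it is a GNU extension), two widely supported alternatives: convert the newlines to NULs for xargs -0 , tr '\n' '\0' < file | xargs -0 mkdir -- or use a plain shell loop, which works everywhere: while IFS= read -r dir; do mkdir -- "$dir"; done < file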
256,551 | I'm using this code: numbzip=`ls *.plt.zip | wc -l` &>/dev/null and trying to get rid of the output in the command window. No files ending on .plt.zip exist so it comes back with: ls: cannot access *.plt.zip: No such file or directory whatever I try it always writes this line in the command window. I tried: numbzip=`ls *.plt.zip | wc -l` >/dev/null 2>/dev/nullnumbzip=`ls *.plt.zip | wc -l` >/dev/null >>/dev/null 2>/dev/null Regards, Wilco. | The message comes from ls inside the command substitution, and it is written to stderr. Redirections placed after the assignment apply to the assignment itself — which produces no output at all — and never reach the commands inside the backticks. Put the redirection inside the substitution instead: numbzip=$(ls *.plt.zip 2>/dev/null | wc -l) Now the complaint about the missing files is discarded and numbzip is set to 0 , since wc -l only counts what ls writes to stdout. As an aside, counting files by parsing ls is fragile; in bash you can count glob matches directly: shopt -s nullglob; files=(*.plt.zip); numbzip=${#files[@]} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152384/"
]
} |
256,576 | I'm working with ACLs on CentOS 7 and noticed that the partition ( / ) where ACLs are applied is mounted with defaults options. As far as I know ACLs need that the acl option is enabled in the mount, and defaults = rw, suid, dev, exec, auto, nouser, async . Does defaults include acl now and if yes, since which distro version? EDIT: Just found that XFS has native support for ACLs, and XFS is the standard filesystem in CentOS 7, which explains everything. EDIT 2: However, I just tested that ACLs are maintained - even after reboot - in an ext4 filesystem with no explicit acl mount option. Why is that? | defaults still does not include acl — the option set you quoted is correct. What changed is where the option comes from. On ext2/3/4, mkfs records a set of default mount options in the filesystem superblock, and on current distributions (including CentOS/RHEL) that set includes user_xattr and acl . Check it with: tune2fs -l /dev/sdXN | grep 'Default mount options' which typically prints Default mount options: user_xattr acl So your ext4 filesystem is effectively mounted with ACLs enabled even though fstab only says defaults ; you can toggle this per filesystem with tune2fs -o acl or tune2fs -o ^acl . XFS, the CentOS 7 default, is simpler still: its ACL support is built in and always on, with no acl mount option to set at all. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34039/"
]
} |
256,617 | Is there any meaning of the CentOS (I guess RedHat and if I remember correctly Fedora too) boot screen progress bar layers*? (*) Layers ~ white, lilac(?) and violet colours. I can't figure out any good search query for this, but maybe somebody will know... | I'm not an expert, but from the source (ply-text-progress-bar.c) it looks like you set the overall percentage done and that the different colors/layers's progress is hard coded by the following, and other functions within that file: brown_fraction = -(progress_bar->percent_done * progress_bar->percent_done) + 2 * progress_bar->percent_done; blue_fraction = progress_bar->percent_done; white_fraction = progress_bar->percent_done * progress_bar->percent_done; So, it appears that it is a style choice with no underlying meaning. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112421/"
]
} |
256,628 | I love seeing this screen when I start up Vim, but when I type a character the screen disappears. Is there a command I can use to manually show it again? | You could issue the command: :intro | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106512/"
]
} |
256,672 | I'm having trouble finding the domain name of my school's server, I read another question similar to this but their commands didn't work. I tried to use nslookup and then when I was prompted I entered server . I get the following output. I don't understand where the domain name is stored. I'm using a wired LAN connection from my university. Default Server: 127.0.01Address: 127.0.0.1#53Default Server: ::1Address: ::1#53 | You should nslookup the IP. It will return. Example. [nazu@healer ~]$ nslookup> bro-ns-01Server: 10.100.1.2Address: 10.100.1.2#53Name: bro-ns-01.angelsofclockwork.netAddress: 10.100.1.2> 10.100.1.2Server: 10.100.1.2Address: 10.100.1.2#532.1.100.10.in-addr.arpa name = bro-ns-01.angelsofclockwork.net.> I'm assuming you're using dnsmasq or otherwise, because your server address is set to your localhost. If that's the case, it should still be transparent enough to pull the information. You could also look at /etc/resolv.conf if you're on the same network. You're more than likely being given a reverse DNS entry that has the same domain suffix as static servers. [nazu@healer ~]$ cat /etc/resolv.conf ; generated by /usr/sbin/dhclient-scriptsearch angelsofclockwork.net.nameserver 10.100.1.2 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139921/"
]
} |
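dig (from the same bind-utils / dnsutils packages as nslookup ) does the reverse lookup from the answer above more tersely: dig -x 10.100.1.2 +short prints just the PTR name, here bro-ns-01.angelsofclockwork.net. — the -x flag builds the in-addr.arpa query for you.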
256,688 | This page details pseudocode for creating a compose function that takes n commands and executes them in a pipeline: We program a command compose such that compose cmd1 cmd2 ... cmdn behaves like the shell command: cmd1 | cmd2 | ... | cmdn I'm experimenting with named pipes and I'm interested in actually writing compose , in Bash. Unfortunately I don't get any output when I do so, presumably due to race conditions reading and writing to the different pipes. I've made a number of iterations but keep getting confusing behavior. I've whittled it down to this smaller problem: echo foo | # stdin{ mkfifo p # create pipe p cat > p & # direct stdin to pipe p cat < p # read pipe p to stdout rm p # remove pipe p} I would expect this to output foo , but instead I get nothing. What am I doing wrong? | It isn't a race — the FIFO opens synchronize the reader and the writer. The problem is the & . When a shell starts an asynchronous command while job control is not in effect (which is the case in scripts, and in the subshell bash creates for the right-hand side of your pipeline), a command with no explicit stdin redirection gets its standard input implicitly redirected from /dev/null . So the backgrounded cat never sees foo : it reads EOF from /dev/null , writes nothing into p and closes it, and the foreground cat < p then reads an immediate EOF. The fix is to background the reader — whose stdin is explicitly redirected, so the implicit /dev/null does not apply — and keep the writer in the foreground, where it inherits the group's stdin (the pipe from echo ): echo foo | { mkfifo p; cat < p & cat > p; rm p; } This prints foo as expected. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19157/"
]
} |
256,704 | I know in the man pages it puts it as -x, --one-file-system stay on this file system but can someone explain it to me like I'm five. | cp -ax , rsync -x or tar --one-file-system are all the same. It means not to cross file system boundaries . A boundary between file systems is a mount point . If you run df -a , you will see all filesystems and their mount points. To help you understand with an example: If you run df on your filesystem: df / , or on /usr directory: df /usr , you will see that they are mounted on / , your installation partition. But if you run it on /proc df /proc you will see that the mount point is different. So if you issue a recursive copy on your filesystem cp -ax / , it won't copy the /proc directory. You could add the -v option if you want to see exactly what is discarded and what is being copied. As others pointed out, it's used with recursive copies and people generally use it for backup. And you should see "Meaning of crossing filesystem boundaries" for more on mount points. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152520/"
]
} |
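The flag matters most for full-system backups. For example, cp -ax / /mnt/backup copies the root filesystem but skips everything mounted below it — /proc , /sys , /dev (as devtmpfs ), other partitions — because each of those lies past a mount point on a different filesystem. Without -x the copy would descend into pseudo-filesystems like /proc and try to copy kernel-generated files such as the enormous /proc/kcore .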
256,713 | I'd like to execute xinput disable bcm5974 when Gnome Terminal (and maybe some other application) gets focused, and xinput enable bcm5974 when it loses focus. This is because libinput and my macbook's touchpad are not friends, libinput's palm rejection barely works, it's really driving me nuts when editing code in Vim and it scrolls by accident, or when typing a command at the terminal. libinput 1.1.4-1 xf86-input-libinput 0.16.0-1 ArchLinux | X has no built-in focus hooks, so you need a small watcher that tracks the active window and toggles the device. A minimal sketch using xprop (the WM_CLASS match gnome-terminal is an assumption — check yours by running xprop WM_CLASS and clicking the window): xprop -root -spy _NET_ACTIVE_WINDOW | while read -r line; do id=${line##* }; if xprop -id "$id" WM_CLASS 2>/dev/null | grep -qi 'gnome-terminal'; then xinput disable bcm5974; else xinput enable bcm5974; fi; done The -spy flag makes xprop print a new line each time the active-window property changes, so the loop reacts to every focus change. For the underlying annoyance it may also be enough to let libinput disable the touchpad while you type: xinput set-prop bcm5974 'libinput Disable While Typing Enabled' 1 — the exact property name varies by driver version, so list them first with xinput list-props bcm5974 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42313/"
]
} |
256,721 | I have the following configuration in the file, and many configs like it in the same file. I want to print the words that are enclosed in [] brackets and start with summary — each only once; for example summary_sai_verbose appears four times below but should be printed a single time. ########################################################################## Indexes Definition for Verbose for SAI##########################################################################[summary_sai_verbose]maxHotBuckets = 2maxDataSize = auto_high_volumehomePath = volume:SAI_VOLUME/summary_sai_verbose/dbhomePath.maxDataSizeMB = 56842coldPath = volume:SAI_VOLUME/summary_sai_verbose/colddbcoldPath.maxDataSizeMB = 125053thawedPath = $SAI_DB/summary_sai_verbose/thaweddb########################################################################## | With GNU grep , pull out the bracketed stanza headers and let sort -u deduplicate them: grep -o '^\[summary[^]]*\]' file | sort -u prints [summary_sai_verbose] (and every other header starting with summary ) exactly once. If you want the name without the brackets and in file order, awk can split on the brackets and deduplicate as it goes: awk -F'[][]' '/^\[summary/ && !seen[$2]++ {print $2}' file which prints summary_sai_verbose once per distinct header. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150598/"
]
} |
256,736 | I'm having some difficulties understanding why the following cronjob does not work anymore: 30 3 * * * /path/to/backup_script.sh && tar -czvf /path/to/archived/backups/retain_last_3_backups/backup-community_$(date '+%Y%m%dT%H%M%S').tar.gz -C /path/to/source/backup/folder/ . If I run it manually using the same user who owns the crontab, it does work. It stopped working when I edited it a couple of days ago adding && tar -czvf [...] should I call the date command in a different way? or escape the $ (I'm going to test this now, just noticed it)? Thanks to David Sánchez Martín, I found the specific log, and it reports the following error: /bin/sh: 1: Syntax error: Unterminated quoted string | The $ is fine — the culprit is the % characters in the date format. In a crontab, % is special: per crontab(5), the first unescaped % ends the command, and everything after it is passed to the command's standard input, with subsequent % characters turned into newlines. Cron therefore hands /bin/sh only the part of your line up to the first % , leaving the '+ quote unterminated — exactly the error you found. Escape every percent sign with a backslash: backup-community_$(date '+\%Y\%m\%dT\%H\%M\%S').tar.gz or move the whole command line into a script and have cron call the script, which also keeps the crontab readable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55995/"
]
} |
256,739 | This is my file: TLRUIDA CBdms Status DP 6/1/1 DC 6/1/5 0 YTLRUIDA CBdms Status DP 6/2/1 DC 6/2/5 0 YTLRUIDA CBdms Status DP 6/3/1 DC 6/3/5 0 YTLRUIDA CBdms Status DP 6/4/1 DC 6/4/5 0 YTLRUIDA CBdms Status DP 6/5/1 DC 6/5/5 0 YTLRUIDA CBdms Status DP 6/6/1 DC 6/6/5 0 YTLRUIDA CBdms Status DP 6/7/1 DC 6/7/5 0 YTLRUIDA CBdms Status DP 6/8/1 DC 6/8/5 0 YTLRUIDA CBdms Status DP 6/9/1 DC 6/9/5 0 YTLRUIDA CBdms Status DP 6/10/1 DC 6/10/5 0 YTLRUIDA CBdms Status DP 6/11/1 DC 6/11/5 0 YTLRUIDA CBdms Status DP 6/12/1 DC 6/12/5 0 Y I have alignment problem from the row after digit 10 is started. And I want the format below mentioned, TLRUIDA CBdms Status DP 6/1/1 DC 6/1/5 0 YTLRUIDA CBdms Status DP 6/2/1 DC 6/2/5 0 YTLRUIDA CBdms Status DP 6/3/1 DC 6/3/5 0 YTLRUIDA CBdms Status DP 6/4/1 DC 6/4/5 0 YTLRUIDA CBdms Status DP 6/5/1 DC 6/5/5 0 YTLRUIDA CBdms Status DP 6/6/1 DC 6/6/5 0 YTLRUIDA CBdms Status DP 6/7/1 DC 6/7/5 0 YTLRUIDA CBdms Status DP 6/8/1 DC 6/8/5 0 YTLRUIDA CBdms Status DP 6/9/1 DC 6/9/5 0 YTLRUIDA CBdms Status DP 6/10/1 DC 6/10/5 0 YTLRUIDA CBdms Status DP 6/11/1 DC 6/11/5 0 YTLRUIDA CBdms Status DP 6/12/1 DC 6/12/5 0 Y | The right tool for this job is column . You can specify column separator with -o (on OS X it's -s ) , e.g.: column -t -o ' ' file gives TLRUIDA CBdms Status DP 6/1/1 DC 6/1/5 0 YTLRUIDA CBdms Status DP 6/2/1 DC 6/2/5 0 YTLRUIDA CBdms Status DP 6/3/1 DC 6/3/5 0 YTLRUIDA CBdms Status DP 6/4/1 DC 6/4/5 0 YTLRUIDA CBdms Status DP 6/5/1 DC 6/5/5 0 YTLRUIDA CBdms Status DP 6/6/1 DC 6/6/5 0 YTLRUIDA CBdms Status DP 6/7/1 DC 6/7/5 0 YTLRUIDA CBdms Status DP 6/8/1 DC 6/8/5 0 YTLRUIDA CBdms Status DP 6/9/1 DC 6/9/5 0 YTLRUIDA CBdms Status DP 6/10/1 DC 6/10/5 0 YTLRUIDA CBdms Status DP 6/11/1 DC 6/11/5 0 YTLRUIDA CBdms Status DP 6/12/1 DC 6/12/5 0 Y | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
256,893 | After I restart my computer, ld.so.cache still has the information in it, so my questions are as follows: Is the information always kept there? Isn't it being removed after restart or something like that? Like RAM or browser cache being deleted? After I have removed an application that installed some shared libraries, does it know to also remove the information from the ld.so.cache ? If I use ldconfig will it remove the information? How does it actually work? If I am installing a program, how does my computer know to use the new libraries that had been added? After apt-get install is ldconfig run? | Linux programs use libraries called shared objects. Shared objects have the extension .so. To see the S.O. usage of the command ls run ldd /bin/ls By default, libs are stored in /lib /usr/lib and /usr/local/lib (/lib32, /lib64 for 32/64bit). The info about where additional libs can be found is stored in /etc/ld.so.conf.d/ . In there are single .conf files which contain paths to specific libs i.e. /opt/foo/lib . Since the lookup through /etc/ld.so.conf.d/ is slow, ldconfig generates the /etc/ld.so.cache file, a binary version of this information that speeds up the lookup. To answer the first question: No, keep the file. Yes, apt-get or dpkg (?) is triggering ldconfig. How it works - see 1. Yes, see 1. I hope, I got it right. Feel free to correct me. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152709/"
]
} |
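A follow-up: you can query the cache directly instead of reading the .conf files. ldconfig -p prints every library recorded in /etc/ld.so.cache , so ldconfig -p | grep libfoo shows where the dynamic linker would find libfoo ; and after installing a library outside the standard paths, drop a .conf file into /etc/ld.so.conf.d/ and run ldconfig as root to regenerate the cache.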
256,919 | I've seen an interesting pattern in RPM packaging. The main library package will include the shared library itself: /usr/lib64/libavcodec.so.54 The -devel package will provide headers and a symlink: /usr/include/libavcodec/libavcodec.h/usr/lib64/libavcodec.so -> /usr/lib64/libavcodec.so.54 Why is the libavcodec.so symlink provided by the devel package and not just included with the shared library package? What about the symlink has anything to do with something a developer would want? The headers make sense, but why the symlink to the shared object? | Software from the distribution is mechanically linked consistently, and expects to find libavcodec.so.54 , so the unversioned name isn't required for any of the pre-built packages. If you're building software yourself, however, it's common to use -lavcodec or similar, which will find libavcodec.so unversioned. Similarly, build scripts may expect these names to exist. The unversioned names aren't required for the distribution packages, so they're not included by default, but as they're useful when building other software they're included in the -devel package. Other distributions make different delineations and include the .so link in the main package; both are reasonable choices. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
256,926 | In Linux, when a child process terminates and it's parent has not yet waited on it, it becomes a zombie process. The child's exit code is stored in the pid descriptor. If a SIGKILL is sent to the child, there is not supposed to be any effect. Does this mean that the exit code will not be modified by the SIGKILL or will the exit code be modified to indicate that the child exited because it received a SIGKILL ? | To answer that question, you have to understand how signals are sent to a process and how a process exists in the kernel. Each process is represented as a task_struct inside the kernel (the definition is in the sched.h header file and begins here ). That struct holds information about the process; for instance the pid. The important information is in line 1566 where the associated signal is stored. This is set only if a signal is sent to the process. A dead process or a zombie process still has a task_struct . The struct remains, until the parent process (natural or by adoption) has called wait() after receiving SIGCHLD to reap its child process. When a signal is sent, the signal_struct is set. It doesn't matter if the signal is a catchable one or not, in this case. Signals are evaluated every time when the process runs. Or to be exact, before the process would run. The process is then in the TASK_RUNNING state. The kernel runs the schedule() routine which determines the next running process according to its scheduling algorithm. Assuming this process is the next running process, the value of the signal_struct is evaluated, whether there is a waiting signal to be handled or not. If a signal handler is manually defined (via signal() or sigaction() ), the registered function is executed, if not the signal's default action is executed. The default action depends on the signal being sent. For instance, the SIGSTOP signal's default handler will change the current process's state to TASK_STOPPED and then run schedule() to select a new process to run. Notice, SIGSTOP is not catchable (like SIGKILL ), therefore there is no possibility to register a manual signal handler. In case of an uncatchable signal, the default action will always be executed. To your question: A defunct or dead process will never be determined by the scheduler to be in the TASK_RUNNING state again. Thus the kernel will never run the signal handler (default or defined) for the corresponding signal, whichever signal is was. Therefore the exit_signal will never be set again. The signal is "delivered" to the process by setting the signal_struct in task_struct of the process, but nothing else will happen, because the process will never run again. There is no code to run, all that remains of the process is that process struct. However, if the parent process reaps its children by wait() , the exit code it receives is the one when the process "initially" died. It doesn't matter if there is a signal waiting to be handled. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/256926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152733/"
]
} |
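The behaviour is easy to observe from a shell (a sketch — the PIDs are whatever your system hands out): bash -c 'sh -c "exit 7" & exec sleep 60' & The inner sh exits immediately, and because its parent has been replaced by sleep , which never calls wait() , it stays a zombie — ps -eo pid,stat,comm shows it with state Z (reported as defunct). Sending kill -9 to that PID returns success yet the Z entry remains, exactly as described above, until sleep 60 ends and the orphan is reparented and reaped.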
256,930 | I'm having a hard time finding others with the same error, and I'm trying to figure out the best path forward. I have a hard drive that was unusably slow, and then stopped booting. The clonezilla clone failed, and I started a ddrescue, using the gnu rescue tool included with a clonezilla live cd. It is going unbelievably slow averaging about 400 kBps for a 2 TB drive, so I'm estimating almost 4 months! at this point. My last backup was sadly about 2 years ago, and there are a lot of pictures I'd like to get off of it. The surprising part is that its rescued about 50 GB, with no errors so far, even though its taken 3 days.I have a few questions on the best path forward, and why it would take so long but also not have any errors. Is the drive just taking forever to succesfully read, but never actually failing, slowing down the copy time? Is the hard drive itself fine, but something like the control board the problem? I'm very worried about where the logfile is likely going. I can't depend on my computer staying steady, and the command not erroring for four months. If I'm talking at all about keeping it running for even weeks, I'd like to get that logfile onto a flash drive. I originally thought it was going onto the new larger hard drive, but now I realize its likely on the RAM drive clonezilla_live is utilizing. Is it safe to insert a formatted USB drive, mount it, and copy over the log file, then restart the ddrescue? Will the clonezilla shell even recognize that I inserted the USB stick that wasn't there on boot, so I can mount it? I'm assuming I'd try sudo fdisk -l to list the disks, then make a directory? sudo mkdir /logfile/usb then mount it? sudo mount /dev/sdb1 /media/usb , then copy? ANY feedback would be appreciated. I've screwed around in Unix shell a bit, setup a z-pool raid, but always when I knew exactly what I was doing, and not in linux, let alone a bare-bones version. | If anyone is interested or comes across an archived version of this in a few years. I waited the two months, establishing a log file to resume copying. Twice it just started getting reading errors (until the computer was restarted), and once I lost power. After months of copying, I plugged the backup in via a USB adapter to another laptop, I probably had 7.5 mb of the ~2 tb that wasn't copied (still had errors after -r3 (3 retries)). It was unreadable, but I rebuilt the partition table per these instructions: https://perrohunter.com/repair-a-mac-os-x-hfs-partition-table/ - I did have to change the block size since this drive is much larger than the older drives. It then worked close to flawlessly. I did a disk verify and repair, and permissions repair in disk utility, and it booted up fine. Real lesson learned? I'm using backblaze for the really important files (photos and documents) and a mirrored bootable backup on-site. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152731/"
]
} |
256,933 | I recently installed Kali Linux 2.0 and tried to update the software. This is what I did: I edited /etc/apt/sources.list to contain the following mirrors : deb http://http.kali.org/kali kali-rolling main non-free contribdeb http://http.kali.org/kali kali-rolling main contrib non-freedeb-src http://http.kali.org/kali kali-rolling main contrib non-freedeb http://http.kali.org/kali sana main non-free contribdeb http://security.kali.org/kali-security sana/updates main contrib non-freedeb-src http://http.kali.org/kali sana main non-free contribdeb-src http://security.kali.org/kali-security sana/updates main contrib non-free then ran the following commands: apt-get cleanapt-get update While running the apt-get update , I was not able to connect to the Kali server. Here is the error message: Err http://security.kali.org sana/updates InReleaseErr http://http.kali.org sana InReleaseErr http://security.kali.org sana/updates Release.gpgUnable to connect to kali.mirror.garr.it:http:Err http://http.kali.org kali-rolling Release.gpgUnable to connect to kali.mirror.garr.it:http:Err http://http.kali.org sana Release.gpgUnable to connect to kali.mirror.garr.it:http:Segmentation fault Reading package lists... DoneW: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InReleaseW: Failed to fetch http://http.kali.org/kali/dists/sana/InReleaseW: Failed to fetch http://security.kali.org/kali-security/dists/sana/updates/InReleaseW: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/Release.gpgUnable to connect to kali.mirror.garr.it:http:W: Failed to fetch http://security.kali.org/kali-security/dists/sana/updates/Release.gpgUnable to connect to kali.mirror.garr.it:http:W: Failed to fetch http://http.kali.org/kali/dists/sana/Release.gpgUnable to connect to kali.mirror.garr.it:http:W: Some index files failed to download. They have been ignored, or old ones used instead. How can I fix this error? | You should NEVER modify sources.list in Kali Linux. Here's what should be in them: deb http://http.kali.org/kali kali-rolling main contrib non-free# For source package access, uncomment the following line# deb-src http://http.kali.org/kali kali-rolling main contrib non-free You probably have no connection to the internet. That's why the apt-get update failed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/256933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64188/"
]
} |
257,010 | I generated my public/private key pair using ssh-keygen -t rsa -b 2048 -v and then needed a .pem file and followed this https://serverfault.com/questions/706336/how-to-get-a-pem-file-from-ssh-key-pair ssh-keygen -f id_rsa -e -m pem -----BEGIN RSA PUBLIC KEY----- but then i found this https://gist.github.com/mingfang/4aba327add0807fa5e7f openssl rsa -in ~/.ssh/id_rsa -outform pem-----BEGIN RSA PRIVATE KEY----- why is the output different? | That's how they are written; OpenSSH emits the public key material via a PEM_write_RSAPublicKey(stdout, k->rsa) call in the do_convert_to_pem function of ssh-keygen.c , while OpenSSL operates instead on the given private key. With OpenSSH, I'd imagine that the majority of cases would be to convert the public key into a form usable on some foreign server, with the private key remaining private on the client system, so operating on the public key of the keypair makes sense. With OpenSSL, there is no "get a public key into a form suitable for some other SSH server" concern, so that code operates directly on the private key. Different code, different intentions, different results. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133764/"
]
} |
257,014 | I see there is an executable called "[" in /usr/bin . What is its purpose? | In most cases, [ is a shell builtin and is equivalent to test . However, like test , it also exists as a standalone executable: that's the /bin/[ you saw. You can test this with type -a [ (on an Arch Linux system, running bash ): $ type -a [[ is a shell builtin[ is /bin/[ So, on my system, I have two [ : my shell's builtin and the executable in /bin . The executable is documented in man test : TEST(1) User Commands TEST(1)NAME test - check file types and compare valuesSYNOPSIS test EXPRESSION test [ EXPRESSION ] [ ] [ OPTIONDESCRIPTION Exit with the status determined by EXPRESSION.[ ... ] As you can see in the excerpt of the man page quoted above, test and [ are equivalent. The /bin/[ and /bin/test commands are specified by POSIX which is why you'll find them despite the fact that many shells also provide them as builtins. Their presence ensures that constructs like: [ "$var" -gt 10 ] && echo yes will work even if the shell running them doesn't have a [ builtin. For example, in tcsh : > which [/sbin/[> set var = 11> [ "$var" -gt 10 ] && echo yesyes | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130702/"
]
} |
257,020 | I'm currently trying to write a little script to convert all the flac files to mp3 files. However, I ran into a bit of a problem when trying to set up recursion down into all my music folders - the script kept looping into the current directory (.) Here's what I currently have: #!/bin/bash#---# flacToMp3: Converts FLAC files in my originalFLAC folder into mp3 files# and places them in an identical folder structure in my Music# folder.#---function enterDIR { for DIR in "$(find . -maxdepth 1 -type d)"; do #recurse into every directory below top-level directory if [ "$DIR" == "." ]; then #avoid current directory infinite loop continue fi cd "$DIR/" enterDIR done createDirectory convertFLAC}function createDirectory { #recreate directory structure in Music folder curDir="$pwd" newDir=${curDir/originalFLAC/Music} mkdir -p $newDir}function convertFLAC { #convert each flac file in current directory into an mp3 file for FILE in "$(find . -maxdepth 1 -type f)"; do #loop through all regular (non-directory) files in current directory if [ "${FILE: -5}" == ".flac" ]; then #if FILE has extension .flac ffmpeg -i "$FILE" -ab 320k -map_metadata 0 "${FILE%.*}.mp3"; #convert to .mp3 mv -u "${FILE%.*}.mp3" $newDir else #copy all other files to new directory as-is cp -ur "$FILE" $newDir fi done}enterDIR This script is pretty clunky, since I only just started dipping into Bash. The problem (or at least where I think it is) comes from the if [ "$DIR" == "." ]; then line - looking at my output when running the script, it doesn't seem to filter it. How do I filter out (ignore) the current directory? | You can filter it in find by using the -mindepth option. Like this: function enterDIR { find . -mindepth 1 -maxdepth 1 -type d | while read DIR ; do #recurse into every directory below top-level directory cd "$DIR/" enterDIR done createDirectory convertFLAC} But the whole script doesn't look like a good solution. If I understand your idea correctly, you want to walk through the whole directory tree, create a new directory there, convert flac to mp3 if any and copy all non-flac files to a new dir. I would do that this way: find . -mindepth 1 -type d -exec mkdir -p {}/originalFLAC/Music \+find . -type f -iname "*.flac" -exec ffmpeg -i {} -ab 320k -map_metadata 0 {}.mp3 \;find . -type f ! -iname "*.flac" | while read file ; do cp -v "$file" "$(dirname "$file")"/originalFLAC/Music/ ; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257020",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67617/"
]
} |
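One caution on the while read loops above: music paths often contain spaces and odd characters, and a bare read can mangle leading blanks or backslashes. A more robust variant of the last command uses NUL-delimited names (bash syntax): find . -type f ! -iname "*.flac" -print0 | while IFS= read -r -d '' file; do cp -v "$file" "$(dirname "$file")"/originalFLAC/Music/ ; done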
257,046 | I have two large files: f1.txt: 5020118359 |13ZJ24001218 |20141224|R5020120475 |13ZJ38000813 |20141204|R5020127431 |13ZJ38001569 |20141201|R5020127689 |12ZJ44000606 |20141203|R5020127728 |13ZJ38001356 |20141203|R5020127956 |13ZJ62002544 |20141205|R5020127972 |13ZJ49000082 |20141205|R5020128325 |13ZJ57000785 |20141210|R5020128706 |13ZJ38002805 |20141211|R5020129084 |10XJ70107764 |20141217|R5020129102 |12ZJ54000041 |20141217|R f2.txt: 09Y90301055212ZJ5400004111XJ6211838508Y90901894609Y90201195411XJ5712034610XJ7010776411XJ4016532909XJ4200833608Y91202143511XJ5104027207Y910027235 Output: 5020129084 |10XJ70107764 |20141217|R5020129102 |12ZJ54000041 |20141217|R it will compare 2nd column of the first file and 1st column of the second file and then print the matched records of the 1st file. | awk can do this in a single pass: read f2.txt first, remember its first column, then print every line of f1.txt whose second field is in that set. Since the columns of f1.txt are separated by spaces and | , set the field separator to cover both: awk -F'[ |]+' 'NR==FNR { seen[$1]; next } $2 in seen' f2.txt f1.txt NR==FNR is true only while the first file given ( f2.txt ) is being read, so seen ends up holding its keys; for f1.txt , $2 is the second column (e.g. 10XJ70107764 ), and lines whose key was seen are printed unchanged — exactly the two records in your expected output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152841/"
]
} |
257,082 | Using the regexp string, how can I remove all the lines before the first line that contains a match? e.g How can I change this: lostloadlinuxloanlinux into this: linuxloanlinux I tried: echo "lostloadlinuxloanlinux" | sed -e 's/.*^li.*$//g' but it returns this, not changing anything: lostloadlinuxloanlinux I'd like to make it work so that it won't output anything when there's no match. | One way, POSIXly: $ echo "lostloadlinuxloanlinux" | sed -e/linux/\{ -e:1 -en\;b1 -e\} -ed or shorter: sed -n '/linux/,$p' or even shorter: sed '/linux/,$!d' For readers who wonder why I prefer the longer over the shorter version, the longer version will only perform i/o over the rest of the file, while using ranges can affect the performance if the 2nd address is a regex, and the regexes are trying to be matched more than is necessary. Consider: $ time seq 1000000 | sed -ne '/^1$/{' -e:1 -en\;b1 -e\}=====JOB sed -e '/^1$/,$d'87% cpu0.11s real0.10s user0.00s sys with: $ time seq 1000000 | sed -e '/^1$/,/1000000/d'=====JOB sed -e '/^1$/,/1000000/d'96% cpu0.24s real0.23s user0.00s sys you can see the difference between the two versions. With a complex regex, the difference will be big. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152267/"
]
} |
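The awk spelling of the same filter: awk '!f && /linux/{f=1} f' file prints from the first matching line through end of file and prints nothing when there is no match, as requested; thanks to the !f guard it also stops evaluating the regex after the first match — the same performance point made at the end of the answer above.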
257,083 | I have many files with names in the form username_pattern_suffix.ext where "pattern" is fixed, but there are very many usernames, each with several files (i.e. values of 'suffix'). I'd like to concatenate each user's files into a single file, e.g. username.ext to end up with one file per username. Each username contains two underscores _ and ends with a number, but is variable length. | Since pattern is a fixed string, recover the username by stripping everything from _pattern onward, then append each file to its user's output file (a sketch — substitute your real fixed string and extension for pattern / .ext ): for f in *_pattern_*.ext; do user=${f%%_pattern_*}; cat "$f" >> "$user.ext"; done ${f%%_pattern_*} removes the longest suffix matching _pattern_* , so the underscores inside the username itself don't matter, and the shell's lexical glob order means each user's suffixes are concatenated in sorted order. Because this appends, delete any old username.ext files before re-running the loop. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152869/"
]
} |
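For the concatenation actually asked about in 257,083, a plain bash sketch works, assuming the literal word pattern never occurs inside a username:

    for f in *_pattern_*.ext; do
      user=${f%%_pattern_*}     # strip "_pattern_" and everything after it
      cat -- "$f" >> "$user.ext"
    done

Pathname expansion yields the files in sorted order, so each username's pieces are appended in name order.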
257,105 | Is there a way one could make some commands "sticky" in a shell history? I want to save some favourite commands that would be searchable with Ctrl+R, like the rest of the history, but they should never expire. | A simple way is to put code in your shell startup file to read an additional history file, and make sure that you store sufficiently many history lines in memory so that the sticky lines aren't forgotten. bash In ~/.bashrc : history -r ~/.bash_history.sticky Also make sure that HISTSIZE is at least HISTFILESIZE plus the number of lines in ~/.bash_history.sticky plus the number of commands you execute in a long session, e.g. HISTFILESIZE=1000HISTSIZE=10000 If you want to ensure that the sticky history entries remain in memory without having a very large HISTSIZE , you can do it by manually trimming the history in PROMPT_COMMAND with history -d , but it's difficult to get right if you have erasedups in HISTCONTROL . zsh In ~/.zshrc : fc -RI ~/.zsh_history.sticky Also make sure that HISTSIZE is at least SAVEHIST plus the number of lines in ~/.zsh_history.sticky plus the number of commands you execute in a long session, e.g. SAVEHIST=1000HISTSIZE=10000 If you want to ensure that the sticky history entries remain in memory without having a very large HISTSIZE , you can do it by manually trimming the history in precmd , but it's cumbersome (zsh doesn't really support rewriting history, you have to fc -W into a temporary file and read back an edited version) and difficult to get right if you have the hist_ignore_dups or hist_ignore_all_dups option set. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/51026/"
]
} |
257,110 | This will cause LIST to grow very large (even a couple of GB in a short time etc): $ for i in *; do echo $i; cut -d ' ' -f1 $i ; done > LIST For example, after 10 seconds: $ wc -l LIST132654955 LIST$ ls -hl LIST-rw-r--r-- 1 user users 2.3G Jan 22 21:35 LIST I think that the reason is that LIST is added to the list of files that should be processed and cut never finishes processing it. I found 3 solutions for this problem: exclude LIST from being processed: for i in !(LIST); do echo $i; cut -d ' ' -f1 $i ; done > LIST use another directory for LIST : for i in *; do echo $i; cut -d ' ' -f1 $i ; done > /tmp/LIST expand * before running the loop with C-x * or whatever $ bind -p | grep glob-expand-word shows Is my reasoning correct and which way is the best here? | A simple way is to put code in your shell startup file to read an additional history file, and make sure that you store sufficiently many history lines in memory so that the sticky lines aren't forgotten. bash In ~/.bashrc : history -r ~/.bash_history.sticky Also make sure that HISTSIZE is at least HISTFILESIZE plus the number of lines in ~/.bash_history.sticky plus the number of commands you execute in a long session, e.g. HISTFILESIZE=1000HISTSIZE=10000 If you want to ensure that the sticky history entries remain in memory without having a very large HISTSIZE , you can do it by manually trimming the history in PROMPT_COMMAND with history -d , but it's difficult to get right if you have erasedups in HISTCONTROL . zsh In ~/.zshrc : fc -RI ~/.zsh_history.sticky Also make sure that HISTSIZE is at least SAVEHIST plus the number of lines in ~/.zsh_history.sticky plus the number of commands you execute in a long session, e.g. SAVEHIST=1000HISTSIZE=10000 If you want to ensure that the sticky history entries remain in memory without having a very large HISTSIZE , you can do it by manually trimming the history in precmd , but it's cumbersome (zsh doesn't really support rewriting history, you have to fc -W into a temporary file and read back an edited version) and difficult to get right if you have the hist_ignore_dups or hist_ignore_all_dups option set. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257110",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20604/"
]
} |
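On the question itself: the reasoning is correct. The > LIST redirection on the loop creates the file before the for statement expands *, so LIST matches the glob and cut ends up reading the very file the loop keeps appending to. A sketch of the exclusion fix (the !() form requires bash's extglob):

    shopt -s extglob
    for i in !(LIST); do echo "$i"; cut -d ' ' -f1 "$i"; done > LIST

Writing to another directory (e.g. /tmp/LIST) sidesteps the problem without extglob, and quoting $i guards against filenames containing spaces.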
257,163 | I'm running GNOME 3.18 and I would like to reduce the title bar height. How could I do it? | After Gnome 3.20, .header-bar.default-decoration doesn't work. You can put the following content into ~/.config/gtk-3.0/gtk.css : /* shrink headerbars (don't forget semicolons after each property) */headerbar { min-height: 0px; padding-left: 2px; /* same as childrens vertical margins for nicer proportions */ padding-right: 2px; background-color: #2d2d2d;}headerbar entry,headerbar spinbutton,headerbar button,headerbar separator { margin-top: 0px; /* same as headerbar side padding for nicer proportions */ margin-bottom: 0px;}/* shrink ssd titlebars */.default-decoration { min-height: 0; /* let the entry and button drive the titlebar size */ padding: 0px; background-color: #2d2d2d;}.default-decoration .titlebutton { min-height: 0px; /* tweak these two props to reduce button size */ min-width: 0px;}window.ssd headerbar.titlebar { padding-top: 3px; padding-bottom: 3px; min-height: 0;}window.ssd headerbar.titlebar button.titlebutton { padding-top: 3px; padding-bottom:3px; min-height: 0;} via https://ogbe.net/blog/gnome_titles.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152923/"
]
} |
257,233 | I am looking for some command e.g. match ls which should match commands like ls , alsa asls ,.. and return them. I preferably want it to cover both all commands and defined functions. Is there a builtin command/application to do this? Obviously, I can create my own script for that. But, I am asking just in case anyone knows of existing command/script that does the same? | There is a utility in bash called compgen . # List all Commandscompgen -c# List all Commands starting with lscompgen -c ls# List all Commands that has 'ls' in itcompgen -c | grep ls | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121183/"
]
} |
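compgen can also restrict matches by type, which covers the "defined functions" part of the question:

    # list only shell functions whose names start with ls
    compgen -A function ls
    # aliases, builtins, external commands and keywords starting with ls
    compgen -abck ls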
257,270 | During early boot, I get the following error message: [sdb] No Caching mode page found[sdb] Assuming drive cache: write through If I understand correctly, this is actually just a harmless info message and not an actual error. sdb is my USB disk, and it does not use caching . The problem is, I have intentionally set kernel loglevel to 4, to get rid of this kind of useless info message. Why then do I still get this info message? The reason why it's bothering me is that it interferes with my password prompt (for decrypting my LUKS disk) Is there a way to get rid of this message ? | Hard disks have a small amount of RAM cache to speed up write operations. The system can write a chunk of data to the disk cache without actually waiting for it to be written to the disk. This is sometimes called "write-back" mode. If there is no cache on the disk, data is directly written to it in "write-through" mode. The Asking for cache data failed warning usually occurs with devices such as USB flash drives, USB card readers, etc. which present themselves as SCSI devices to the system (sdX), but have no cache. The system asks the device: "Do you have a cache?" and gets no response. So it assumes there is no cache and puts it in "write-through" mode. You may try to go to: /etc/modules and on top of the modules list add the line usb_storage It should look something like this: # /etc/modules: kernel modules to load at boot time.## This file contains the names of kernel modules that should be loaded# at boot time, one per line. Lines beginning with "#" are ignored.
usb_storage
lp
This is how I solved a similar problem. Let me know what happened. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
257,290 | Here's the situation (I'm on a Mac, OS X El Capitan): # This works:$ cd /Applications/Adobe\ Illustrator*/Cool\ Extras.localized/en_US/Templates/;# These do not work:$ INSTALL_DIR=/Applications/Adobe\ Illustrator*/Cool\ Extras.localized/en_US/Templates;$ cd $INSTALL_DIR# Moves me here: /Applications/Adobe$ cd "$INSTALL_DIR"-bash: cd: /Applications/Adobe Illustrator*/Cool Extras.localized/en_US/Templates: No such file or directory$ cd "${INSTALL_DIR}"-bash: cd: /Applications/Adobe Illustrator*/Cool Extras.localized/en_US/Templates: No such file or directory My goal is to use $INSTALL_DIR in tar like so: $ tar -xz $SOURCE_ZIP --strip-components 1 -C $INSTALL_DIR "*.ait"; Unfortunately, the -C (changing to destination directory) doesn't like the spaces in $INSTALL_DIR ; if I use quotes, I can't get the * to work. Is there an elegant way to handle this scenario? | When the * is not quoted, the shell expands the argument list before running the command. It passes the expanded argument list to the program. When the * appears in a quoted string, it is not expanded by the shell before being passed to the program. Try expanding the path, assigning it to another variable, and then quoting the second variable when passing it as an argument. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67282/"
]
} |
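A sketch of that approach: let the glob expand once into a bash array, then quote the stored result everywhere (this assumes the glob matches exactly one directory; note also that macOS's bsdtar matches the '*.ait' member pattern by default, whereas GNU tar would need --wildcards):

    INSTALL_DIR=( /Applications/Adobe\ Illustrator*/Cool\ Extras.localized/en_US/Templates )
    tar -xzf "$SOURCE_ZIP" --strip-components 1 -C "${INSTALL_DIR[0]}" '*.ait'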
257,297 | Let me give an example: $ timeout 1 yes "GNU" > file1$ wc -l file111504640 file1 $ for ((sec0=`date +%S`;sec<=$(($sec0+5));sec=`date +%S`)); do echo "GNU" >> file2; done$ wc -l file21953 file2 Here you can see that the command yes writes 11504640 lines in a second while I can write only 1953 lines in 5 seconds using bash's for and echo . As suggested in the comments, there are various tricks to make it more efficient but none come close to matching the speed of yes : $ ( while :; do echo "GNU" >> file3; done) & pid=$! ; sleep 1 ; kill $pid[1] 3054$ wc -l file319596 file3 $ timeout 1 bash -c 'while true; do echo "GNU" >> file4; done'$ wc -l file418912 file4 These can write up to 20 thousand lines in a second. And they can be further improved to: $ timeout 1 bash -c 'while true; do echo "GNU"; done >> file5' $ wc -l file534517 file5 $ ( while :; do echo "GNU"; done >> file6 ) & pid=$! ; sleep 1 ; kill $pid[1] 5690$ wc -l file640961 file6 These get us up to 40 thousand lines in a second. Better, but still a far cry from yes which can write about 11 million lines in a second! So, how does yes write to file so quickly? | In a nutshell: yes exhibits similar behavior to most other standard utilities which typically write to a FILE STREAM with output buffered by the libC via stdio . These only do the syscall write() every some 4kb (16kb or 64kb) or whatever the output block BUFSIZ is . echo is a write() per GNU . That's a lot of mode-switching (which is not, apparently, as costly as a context-switch ) . And that's not at all to mention that, besides its initial optimization loop, yes is a very simple, tiny, compiled C loop and your shell loop is in no way comparable to a compiler optimized program. But I was wrong: When I said before that yes used stdio , I only assumed it did because it behaves a lot like those that do. This was not correct - it only emulates their behavior in this way. What it actually does is very like an analog to the thing I did below with the shell: it first loops to conflate its arguments (or y if none) until they might grow no more without exceeding BUFSIZ . A comment from the source immediately preceding the relevant for loop states: /* Buffer data locally once, rather than having thelarge overhead of stdio buffering each item. */ yes does its own write() s thereafter. Digression: (As originally included in the question and retained for context to a possibly informative explanation already written here) : I've tried timeout 1 $(while true; do echo "GNU">>file2; done;) but unable to stop loop. The timeout problem you have with the command substitution - I think I get it now and can explain why it doesn't stop. timeout doesn't start because its command-line is never run. Your shell forks a child shell, opens a pipe on its stdout and reads it. It will stop reading when the child quits, and then it will interpret all the child wrote for $IFS mangling and glob expansions, and with the results, it will replace everything from $( to the matching ) . But if the child is an endless loop that never writes to the pipe, then the child never stops looping, and timeout 's command-line is never completed before (as I guess) you do Ctrl + C and kill the child loop. So timeout can never kill the loop which needs to complete before it can start. Other timeout s: ... simply aren't as relevant to your performance issues as the amount of time your shell program must spend switching between user- and kernel-mode to handle output. 
timeout , though, is not as flexible as a shell might be for this purpose: where shells excel is in their ability to mangle arguments and manage other processes. As is noted elsewhere, simply moving your [fd-num] >> named_file redirection to the loop's output target rather than only directing output there for the command looped over can substantially improve performance because that way at least the open() syscall need only be done the once. This also is done below with the | pipe targeted as output for the inner loops. Direct comparison: You might do something like: for cmd in exec\ yes 'while echo y; do :; done'do set +m sh -c '{ sleep 1; kill "$$"; }&'"$cmd" | wc -l set -mdone 256659456505401 Which is kind of like the command sub relationship described before, but there's no pipe and the child is backgrounded until it kills the parent. In the yes case the parent has actually been replaced since the child was spawned, but the shell calls yes by overlaying its own process with the new one and so the PID remains the same and its zombie child still knows who to kill after all. Bigger buffer: Now let's see about increasing the shell's write() buffer. IFS=""; set y "" ### sets up the macro expansion until [ "${512+1}" ] ### gather at least 512 argsdo set "$@$@";done ### exponentially expands "$@"printf %s "$*"| wc -c ### 1 write of 512 concatenated "y\n"'s 1024 I chose that number because output strings any longer than 1kb were getting split out into separate write() 's for me. And so here's the loop again: for cmd in 'exec yes' \ 'until [ "${512+:}" ]; do set "$@$@"; done while printf %s "$*"; do :; done'do set +m sh -c $'IFS="\n"; { sleep 1; kill "$$"; }&'"$cmd" shyes y ""| wc -l set -mdone 26862796815850496 That's 300 times the amount of data written by the shell in the same amount of time for this test than in the last. Not too shabby. But it's not yes . Related: As requested, there is a more thorough description than the mere code comments on what is done here at this link . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/257297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
257,343 | For a while now, I have had the problem that a gzip process randomly starts on my Kubuntu system, uses up quite a bit of resources and causes my notebook fan to go crazy. The process shows up as gzip -c --rsyncable --best in htop and runs for quite a long time. I have no clue what is causing this, the system is a Kubuntu 14.04 and has no backup plan setup or anything like that. Any idea how I can figure out what is causing this the next time the process appears? I have done a bit of googling already but could not figure it out. I saw some suggestions with the ps command but grepping all lines there did not really point to anything. | Process tree While the process is running try to use ps with the f option to see the process hierarchy: ps axuf Then you should get a tree of processes, meaning you should see what the parent process of the gzip is. If gzip is a direct descendant of init then probably its parent has exited already, as it's very unlikely that init would create the gzip process. Crontabs Additionally you should check your crontab s to see whether there's anything creating it. Do sudo crontab -l -u <user> where user is the user of the gzip process you're seeing (in your case that seems to be root ). If you have any other users on that system which might have done stuff like setting up background services, then check their crontab s too. The fact that gzip runs as root doesn't guarantee that the original process that triggered the gzip was running as root as well. You can see a list of all existing crontab s by doing sudo ls /var/spool/cron/crontabs . Logs Check all the systems logs you have, looking for suspicious entries at the time the process is created. I'm not sure whether Kubuntu names its log files differently, but in standard Ubuntu you should at least check /var/log/syslog . Last choice: a gzip wrapper If none of these lead to any result you could rename your gzip binary and put a little wrapper in place which launches gzip with the passed parameters but also captures the system's state at that moment. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153054/"
]
} |
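A sketch of the wrapper idea from the last paragraph of 257,343, assuming the real binary has been renamed to /bin/gzip.real and this script installed in its place as /bin/gzip:

    #!/bin/sh
    # record when we were called, with what arguments, and by which parent
    { date; echo "args: $*"; ps -o pid,ppid,cmd -p "$PPID"; } >> /tmp/gzip-wrapper.log
    exec /bin/gzip.real "$@"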
257,372 | I am running a Fedora Workstation virtual machine on an Ubuntu host. I created a folder /home/cl/share in the guest and mounted the shared folder /media/cl/system/virtual/share as root following the instructions from linux-kvm : mount -t 9p -o trans=virtio,version=9p2000.L /host /home/cl/share I am able to access (read) files and folders from shared folders, but I can not write to them. An example: I am trying to save the file mount-share.txt to the folder /home/cl/share . Both folders (host and guest) have read-write permissions, nevertheless I receive the error Error opening file '/home/cl/share/mount-share.txt': Operation not permitted. Host operating system: Ubuntu 15.10 desktop 64-bit Guest operating system: Fedora 23 workstation 64-bit Virtualization software: qemu qemu-kvm virt-manager Host system location : 1st built-in SSD - ext4 format Virtual storage location : 2nd built-in HDD - NTFS format Shared folders location : 2nd built-in HDD - NTFS format How can I share a directory between guest and host and allow the guest read-write access? sudo chmod a+x /media/cl and sudo chmod -R 777 /media didn't change the situation. Guest system: ls -la /home/cl/share total 16 drwxrwxrwx. 1 cl cl 4096 20. Jan 14:41 . drwx------. 18 cl cl 4096 24. Jan 19:11 .. drwxrwxrwx. 1 cl cl 4096 17. Dez 09:49 fedora drwxrwxrwx. 1 cl cl 0 5. Jan 11:43 solus drwxrwxrwx. 1 cl cl 0 6. Jan 12:10 ubuntu drwxrwxrwx. 1 cl cl 4096 24. Jan 16:58 various stat /home/cl/share File: ‘/home/cl/share’ Size: 4096 Blocks: 8 IO Block: 4096 directory Device: 25h/37d Inode: 135 Links: 1 Access: (0777/drwxrwxrwx) Uid: ( 1000/ cl) Gid: ( 1000/ cl) Access: 2016-01-27 10:11:12.566303000 +0100 Modify: 2016-01-26 21:34:48.647707300 +0100 Change: 2016-01-26 21:34:48.647707300 +0100 Birth: - Host system: ls -ld /media /media/cl/ /media/cl/system /media/cl/system/virtual/ /media/cl/system/virtual/share drwxr-xr-x 3 root root 4096 Okt 22 16:06 /media drwxr-x---+ 6 root root 4096 Jan 24 10:49 /media/cl/ drwxrwxrwx 1 cl cl 4096 Jan 19 15:28 /media/cl/system drwxrwxrwx 1 cl cl 4096 Jan 22 13:43 /media/cl/system/virtual/ drwxrwxrwx 1 cl cl 4096 Jan 20 14:41 /media/cl/system/virtual/share getfacl /media/cl/ getfacl: Removing leading '/' from absolute path names # file: media/cl/ # owner: root # group: root user::rwx user:libvirt-qemu:--x user:cl:r-x group::--- mask::r-x other::---ps aux | grep virtroot 988 0.0 0.2 1207024 39888 ? Ssl 12:48 0:01 /usr/sbin/libvirtd libvirt+ 1204 0.0 0.0 45268 2720 ? S 12:48 0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper root 1207 0.0 0.0 45240 368 ? S 12:48 0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper cl 4204 0.0 0.0 15184 2532 pts/2 S+ 14:06 0:00 grep --color=auto virt | I could reproduce the problem on my system. Your main problem is the ACL restrictions on your host. For this reason, change the ACL attributes for the libvirt-qemu user : sudo setfacl -R -m u:libvirt-qemu:rwx /media/cl Change the Mode settings for Filesystem /host from Passthrough to Mapped . Why? Because your guest system runs as the libvirt-qemu user and your ACL settings restrict the permissions of this user: user:libvirt-qemu:--x The correct output of getfacl should be : user:libvirt-qemu:rwx | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133516/"
]
} |
257,403 | I am trying to install LLVM on my CentOS machine. In the installation tutorial of LLVM, a flag -jn is specified along with make . It says to perform make -jn and also says " Choose n such that make doesn't run in to swap space issue. " What is the use of the -j flag and how can I choose the value of n? | The -j make flag denotes how many jobs (parallel processes) you want to allot for compiling. n is, in this case, a place-holder for the number of processes. The classic rule of thumb is that it's safe to make n = the number of cores your CPU has. So if you are on a dual core machine, you might use -j2 , while on an 8-core machine -j8 In practice, I have found that to be a good starting place, but you should probably feel free to experiment a bit and see what works best for you. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257403",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91015/"
]
} |
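On Linux, nproc prints the available core count, so a common invocation is:

    make -j"$(nproc)"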
257,451 | For example, a temp.txt file contains information like below: adsf on line jhkjhvjdbvjvbvbdjkvn qerwtt on line fdgdgdgdd qwqertg on line safffasffaf wrt on line adaddsd I want to grep for on line in all the lines of the file and write the remaining part of the lines to another file, i.e. after the process on the temp.txt file the new file should contain: on line jhkjhvjdbvjvbvbdjkvn on line fdgdgdgdd on line safffasffaf on line adaddsd How can I do that in a Linux terminal? | Use the -o option of grep to select only the desired portion; in your case, use the pattern on line .* to select the portion starting from on line till the end of the line(s): % grep -o 'on line .*' temp.txt >new.txt% cat new.txt on line jhkjhvjdbvjvbvbdjkvn on line fdgdgdgdd on line safffasffaf on line adaddsd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153138/"
]
} |
257,454 | For example, a file file1.txt contains Hi how are you hello today is monday hello I am fine Hi how are you After the processing of file1.txt it should write to file2.txt and contents should be like this without repeating the same lines. Hi how are you hello today is monday I am fine What command can I use to do that in linux terminal? | This is an easy job for sort , use the unique ( -u ) option of sort : % sort -u file1.txthelloHi how are youI am finetoday is monday To save it in file2.txt : sort -u file1.txt >file2.txt If you want to preserve the initial order: % nl file1.txt | sort -uk2,2 | sort -k1,1n | cut -f2Hi how are youhellotoday is mondayI am fine | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153138/"
]
} |
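Another common order-preserving one-liner, for reference (awk prints a line only the first time it is seen):

    awk '!seen[$0]++' file1.txt > file2.txt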
257,462 | I have several txt-files containing data, where I use grep to search for a current string of text, and use awk to filter out the variable I need. The string is repeated through the file, so I currently use this command to extract the desired string: grep 'text' *.txt | awk ' NR==1 {print $2 } ' > outputfile The problem is that I want to cycle through multiple files in the folder, and for each file get the extracted variable written into a single output-file. I know the question has been answered before, but I am quite fresh to this and have some difficulties implementing it. Any feedback would be highly appreciated! | This is an easy job for sort , use the unique ( -u ) option of sort : % sort -u file1.txthelloHi how are youI am finetoday is monday To save it in file2.txt : sort -u file1.txt >file2.txt If you want to preserve the initial order: % nl file1.txt | sort -uk2,2 | sort -k1,1n | cut -f2Hi how are youhellotoday is mondayI am fine | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153145/"
]
} |
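For the per-file loop actually asked about in 257,462, the grep can be folded into awk; a sketch reusing the question's own placeholders ('text' and field $2):

    for f in *.txt; do
      awk '/text/ { print $2; exit }' "$f"   # first match only, then stop reading the file
    done > outputfile

Prefixing each value with its source file is a one-word change: print FILENAME, $2.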
257,484 | I'm trying to change my terminal emulator from xterm to eterm on Debian Jessie. I can't seem to find the tty config files. I've run: sudo find / -name tty*.conf but that doesn't yield any results. Where are the config files, and how do I change the default terminal emulator? | In Debian, that is x-terminal-emulator : sudo update-alternatives --config x-terminal-emulator Further reading: Debian Alternatives System Virtual Package: x-terminal-emulator Debian Policy Manual: Chapter 11 - Customized programs Debian Policy Manual: 11.8.3 Packages providing a terminal emulator | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137910/"
]
} |
257,485 | I have a list of IPs and I need to check them for opened ports using nmap .So far, my script is like this: #!/bin/bashfilename="$1"port="$2"echo "STARTING NMAP"while IFS= read -r linedo nmap --host-timeout 15s -n $line -p $2 -oN output.txt | grep "Discovered open port" | awk {'print $6'} | awk -F/ {'print $1'} >> total.txtdone <"$filename" It works great but it's slow and I want to check, for example, 100 IPs from the file at once, instead of running them one by one. | In Debian, that is x-terminal-emulator : sudo update-alternatives --config x-terminal-emulator Further reading: Debian Alternatives System Virtual Package: x-terminal-emulator Debian Policy Manual:Chapter 11 - Customized programs Debian Policy Manual: 11.8.3 Packages providing a terminal emulator | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153162/"
]
} |
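As for the parallel scanning asked about in 257,485: nmap probes many hosts concurrently on its own, so the per-line loop can collapse into a single run; a sketch using the script's variables, with -iL reading targets from the file and grepable output on stdout:

    nmap -iL "$filename" -n -p "$port" --host-timeout 15s -oG - |
      awk '/open/ { print $2 }' >> total.txt

In grepable output, field $2 of each "Host:" line is the IP address, so this records the hosts with the port open.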
257,510 | I was able to set up a network namespace, establish a tunnel with openvpn and start an application that uses this tunnel inside the namespace. So far so good, but this application can be accessed via a web interface and I cant't figure out how to route requests to the web interface inside my LAN. I followed a guide from @schnouki explaining how to set up a network namespace and run OpenVPN inside of it ip netns add myvpnip netns exec myvpn ip addr add 127.0.0.1/8 dev loip netns exec myvpn ip link set lo upip link add vpn0 type veth peer name vpn1ip link set vpn0 upip link set vpn1 netns myvpn upip addr add 10.200.200.1/24 dev vpn0ip netns exec myvpn ip addr add 10.200.200.2/24 dev vpn1ip netns exec myvpn ip route add default via 10.200.200.1 dev vpn1iptables -A INPUT \! -i vpn0 -s 10.200.200.0/24 -j DROPiptables -t nat -A POSTROUTING -s 10.200.200.0/24 -o en+ -j MASQUERADEsysctl -q net.ipv4.ip_forward=1mkdir -p /etc/netns/myvpnecho 'nameserver 8.8.8.8' > /etc/netns/myvpn/resolv.conf After that, I can check my external ip and get different results inside and outside of the namespace, just as intended: curl -s ipv4.icanhazip.com<my-isp-ip>ip netns exec myvpn curl -s ipv4.icanhazip.com<my-vpn-ip> The application is started, I'm using deluge for this example. I tried several applications with a web interface to make sure it's not a deluge specific problem. ip netns exec myvpn sudo -u <my-user> /usr/bin/delugedip netns exec myvpn sudo -u <my-user> /usr/bin/deluge-web -fps $(ip netns pids myvpn) PID TTY STAT TIME COMMAND1468 ? Ss 0:13 openvpn --config /etc/openvpn/myvpn/myvpn.conf9302 ? Sl 10:10 /usr/bin/python /usr/bin/deluged9707 ? S 0:37 /usr/bin/python /usr/bin/deluge-web -f I'm able to access the web interface on port 8112 from within the namespace and from outside if I specify the ip of veth vpn1. ip netns exec myvpn curl -Is localhost:8112 | head -1HTTP/1.1 200 OKip netns exec myvpn curl -Is 10.200.200.2:8112 | head -1HTTP/1.1 200 OKcurl -Is 10.200.200.2:8112 | head -1HTTP/1.1 200 OK But I do want to redirect port 8112 from my server to the application in the namespace. The goal is to open a browser on a computer inside my LAN and get the web interface with http://my-server-ip:8112 (my-server-ip being the static ip of the server that instantiated the network interface) EDIT: I removed my attempts to create iptables rules. What I'm trying to do is explained above and the following commands should output a HTTP 200: curl -I localhost:8112curl: (7) Failed to connect to localhost port 8112: Connection refusedcurl -I <my-server-ip>:8112curl: (7) Failed to connect to <my-server-ip> port 8112: Connection refused I tried DNAT and SNAT rules and threw in a MASQUERADE for good measure, but since I don't know what I'm doing, my attempts are futile. Perhaps someone can help me put together this construct. EDIT: The tcpdump output of tcpdump -nn -q tcp port 8112 . Unsurprisingly, the first command returns a HTTP 200 and the second command terminates with a refused connection. 
curl -Is 10.200.200.2:8112 | head -1listening on vpn0, link-type EN10MB (Ethernet), capture size 262144 bytesIP 10.200.200.1.36208 > 10.200.200.2.8112: tcp 82IP 10.200.200.2.8112 > 10.200.200.1.36208: tcp 145curl -Is <my-server-ip>:8112 | head -1listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytesIP <my-server-ip>.58228 > <my-server-ip>.8112: tcp 0IP <my-server-ip>.8112 > <my-server-ip>.58228: tcp 0 EDIT: @schnouki himself pointed me to a Debian Administration article explaining a generic iptables TCP proxy . Applied to the problem at hand, their script would look like this: YourIP=<my-server-ip>YourPort=8112TargetIP=10.200.200.2TargetPort=8112iptables -t nat -A PREROUTING --dst $YourIP -p tcp --dport $YourPort -j DNAT \--to-destination $TargetIP:$TargetPortiptables -t nat -A POSTROUTING -p tcp --dst $TargetIP --dport $TargetPort -j SNAT \--to-source $YourIPiptables -t nat -A OUTPUT --dst $YourIP -p tcp --dport $YourPort -j DNAT \--to-destination $TargetIP:$TargetPort Unfortunately, traffic between the veth interfaces ceased and nothing else happened. However, @schnouki also suggested the use of socat as a TCP proxy and this is working perfectly. curl -Is <my-server-ip>:8112 | head -1IP 10.200.200.1.43384 > 10.200.200.2.8112: tcp 913IP 10.200.200.2.8112 > 10.200.200.1.43384: tcp 1495 I have yet to understand the strange port shuffling while traffic is traversing through the veth interfaces, but my problem is solved now. | I've always had issues with iptables redirections (probably my fault, I'm pretty sure it's doable). But for a case like yours, it's IMO easier to do it in user-land without iptables. Basically, you need to have a daemon in your "default" workspace listening on TCP port 8112 and redirecting all traffic to 10.200.200.2 port 8112. So it's a simple TCP proxy. Here's how to do it with socat : socat tcp-listen:8112,reuseaddr,fork tcp-connect:10.200.200.2:8112 (The fork option is needed to keep socat from stopping after the first proxied connection is closed). EDIT : added reuseaddr as suggested in the comments. If you absolutely want to do it with iptables, there's a guide on the Debian Administration site. But I still prefer socat for more advanced stuff -- like proxying IPv4 to IPv6, or stripping SSL to allow old Java programs to connect to secure services... Beware however that all connections in Deluge will be from your server IP instead of the real client IP. If you want to avoid that, you will need to use a real HTTP reverse proxy that adds the original client IP to the proxied request in a HTTP header. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153171/"
]
} |
257,514 | Suppose I have a list of URLs in a text file: google.com/funnyunix.stackexchange.com/questionsisuckatunix.com/ireallydo I want to delete everything that comes after '.com'. Expected Results: google.comunix.stackexchange.comisuckatunix.com I tried sed 's/.com*//' file.txt but it deleted .com as well. | To explicitly delete everything that comes after ".com", just tweak your existing sed solution to replace ".com(anything)" with ".com": sed 's/\.com.*/.com/' file.txt I tweaked your regex to escape the first period; otherwise it would have matched something like "thisiscommon.com/something". Note that you may want to further anchor the ".com" pattern with a trailing forward-slash so that you don't accidentally trim something like "sub.com.domain.com/foo": sed 's/\.com\/.*/.com/' file.txt | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152598/"
]
} |
257,518 | On CentOS 7.0.1406 I get differing output when running ps -A -o pid,command | grep [r]esque than when I run ps -A -o pid,comm | grep [r]esque The latter returns nothing; the former what I would expect. I was under the impression that comm was an alias for command . Could someone please explain the difference? | They are not aliases: command outputs the full command and comm only the command name, so it is possible that the outputs are different. It all depends on what you want the grep command to match. An example: $ ps -A -o pid,command | grep 9600376 /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220 and the output for the following is empty: ps -A -o pid,comm | grep 9600 The string 9600 is part of the complete command but not the command name. command and cmd are aliases for args , which prints the command with all its arguments as a string. comm is a different format keyword that prints only the executable name. Manpage snippet: args COMMAND command with all its arguments as a string. cmd CMD see args. (alias args, command). comm COMMAND command name (only the executable name). command COMMAND see args. (alias args, cmd). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153185/"
]
} |
257,521 | I have an un-partitioned 500GB disk with an ext3 fs using the entire disk. If I make sure there are no partitions on the disk using fdisk or parted, "ssm list" will still show an ext3 fs on the disk (because this file system exists outside of any partitions). I am also still able to mount the fs and use it. How can I remove any reference to this filesystem? I'm using centos7 and there is no data on the disk that I want to keep. The server is running in a VM, I could just add a new disk to it, but I want to know how to do it. | One easy (and heavy handed) way to do this would be to wipe the whole contents of the disk. The simplest way to do that would be to use dd : $ sudo dd if=/dev/zero of=/dev/<disk> bs=1M count=500000 By the time the command ends (maybe an hour?) your whole disk will be filled with zeros. If you're in a rush, you could kill the process with Ctrl + C after a few seconds/minutes to see if you've wiped enough data for the disk to be considered as blank. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153186/"
]
} |
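If zeroing the entire disk is overkill, util-linux's wipefs erases just the filesystem signature, after which nothing will detect an ext3 filesystem there:

    sudo wipefs -a /dev/<disk>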
257,532 | Quite often, I'm in the following situation: I launch a command on bash (that takes a lot of time to be processed) and I get a long output, which doesn't fit into the terminal and can't be read even using the scrolling. The only way I have to read it is to redirect the output to a file. In order to do so, I have to relaunch the command, something that I want to avoid as it would take too much time. I simply want to print to a file the output that has been generated by the previous command. Is there any way to do that? Al. PS: for example, I give diff folder1 folder2 #the folders contain many files wait 60 seconds, and after I decide to print the output | If you suspect that such a thing, i.e., a long output, may happen, start your session by executing the command script . This will log all your screen output as well as what you type in to the terminal (caveat emptor: backspaces and other normally unprintable characters will make the file harder to read, if you are not careful). When you are done executing your long-winded command, just type exit and it will tell you it saved the session output in a file called typescript . You can also change the name of this file by running your command as script my_output_file_name It is a good tool for debugging scripts etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41107/"
]
} |
257,544 | I have created this question in order to provide an answer. The idea is that, having this question on a Pixma printer, I have found that many people who have the proper drivers (different depending on the model of printer-scanner, but called 'scangearmp') for such an integrated scanner are trying to use it with Simple Scan or Xsane without success . How can such a scanner be used? | If one has the Canon scanner drivers installed, that means that in most cases a scanning application called ScanGear is already installed. That can be started by opening a terminal and doing scangearmp . In some cases it's scangearmp2 . So, other tools like Simple Scan or Xsane are not needed. Some recommend running ScanGear from Gimp , just because ScanGear does not have a /usr/share/applications/ desktop file and cannot be easily accessed. To correct that, using the gedit text editor: gedit ~/.local/share/applications/scan.desktop paste something similar to this:
[Desktop Entry]
Categories=Graphics;Scanning;
Exec=scangearmp
Icon=scanner
Name=Scan
Type=Application
After that, just type 'scan' in a launcher like Dash or Synapse, or put the file ~/.local/share/applications/scan.desktop on the desktop, panel, dock, etc., or otherwise keep a copy at hand. ScanGear can save as png, pdf and pnm formats. It has advanced settings too. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
257,555 | I nearly have this working, This is a script that will run in CRON to keep the latest 10 backups, and remove anything else, I want to log this to keep track to make sure all is in good order, I have it working & writing what files were deleted, alas I cannot set a prefix of a timestamp. TIMESTAMP=$(date +%d-%m-%Y-%H-%M-%S)LOGFILE="/home/user/place/backups/backuplog.txt"echo "$TIMESTAMP, " | ls -1tr | head -n -10 | xargs -d '\n' rm -f -v >> $LOGFILE | You can use the ts command for that: ts [-r] [-i | -s] [format] Something like the following: TS_FORMAT="%d-%m-%Y-%H-%M-%S, "LOGFILE="/home/user/place/backups/backuplog.txt"ls -1tr | head -n -10 | xargs -d '\n' rm -f -v | ts "${TS_FORMAT}" >> $LOGFILE ts is included in the moreutils package. Update: Without Installing More Dependecies You can use xargs again: TIMESTAMP=$(date +%d-%m-%Y-%H-%M-%S)LOGFILE="/home/user/place/backups/backuplog.txt"ls -1tr | head -n -10 | xargs -d '\n' rm -f -v | xargs -L 1 -d '\n' echo "${TIMESTAMP}, " >> $LOGFILE Another possibility is to use sed : TIMESTAMP=$(date +%d-%m-%Y-%H-%M-%S)LOGFILE="/home/user/place/backups/backuplog.txt"ls -1tr | head -n -10 | xargs -d '\n' rm -f -v | sed "s/^/${TIMESTAMP}/" >> $LOGFILE Using awk : LOGFILE="/home/user/place/backups/backuplog.txt"ls -1tr | head -n -10 | xargs -d '\n' rm -f -v | awk '{ print strftime("%d-%m-%Y-%H-%M-%S"), $0}' >> $LOGFILE And so on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91407/"
]
} |
257,571 | On my Arch install, /etc/bash.bashrc and /etc/skel/.bashrc contain these lines: # If not running interactively, don't do anything[[ $- != *i* ]] && return On Debian, /etc/bash.bashrc has: # If not running interactively, don't do anything[ -z "$PS1" ] && return And /etc/skel/.bashrc : # If not running interactively, don't do anythingcase $- in *i*) ;; *) return;;esac According to man bash , however, non-interactive shells don't even read these files: When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following commands were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the filename. If I understand correctly, the *.bashrc files will only be read if BASH_ENV is set to point to them. This is something that can't happen by chance and will only occur if someone has explicitly set the variable accordingly. That seems to break the possibility of having scripts source a user's .bashrc automatically by setting BASH_ENV , something that could come in handy. Given that bash will never read these files when run non-interactively unless explicitly told to do so, why do the default *bashrc files disallow it? | This is a question that I was going to post here a few weeks ago. Like terdon , I understood that a .bashrc is only sourced for interactive Bash shells so there should be no need for .bashrc to check if it is running in an interactive shell. Confusingly, all the distributions I use (Ubuntu, RHEL and Cygwin) had some type of check (testing $- or $PS1 ) to ensure the current shell is interactive. I don’t like cargo cult programming so I set about understanding the purpose of this code in my .bashrc . Bash has a special case for remote shells After researching the issue, I discovered that remote shells are treated differently. While non-interactive Bash shells don’t normally run ~/.bashrc commands at start-up, a special case is made when the shell is Invoked by remote shell daemon : Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd , or the secure shell daemon sshd . If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc, if that file exists and is readable. It will not do this if invoked as sh . The --norc option may be used to inhibit this behavior, and the --rcfile option may be used to force another file to be read, but neither rshd nor sshd generally invoke the shell with those options or allow them to be specified. Example Insert the following at the start of a remote .bashrc . (If .bashrc is sourced by .profile or .bash_profile , temporarily disable this while testing): echo bashrcfun(){ echo functions work} Run the following commands locally: $ ssh remote_host 'echo $- $0'bashrchBc bash No i in $- indicates that the shell is non-interactive . No leading - in $0 indicates that the shell is not a login shell . Shell functions defined in the remote .bashrc can also be run: $ ssh remote_host funbashrcfunctions work I noticed that the ~/.bashrc is only sourced when a command is specified as the argument for ssh . 
This makes sense: when ssh is used to start a regular login shell, .profile or .bash_profile are run (and .bashrc is only sourced if explicitly done so by one of these files). The main benefit I can see to having .bashrc sourced when running a (non-interactive) remote command is that shell functions can be run. However, most of the commands in a typical .bashrc are only relevant in an interactive shell, e.g., aliases aren’t expanded unless the shell is interactive. Remote file transfers can fail This isn’t usually a problem when rsh or ssh are used to start an interactive login shell or when non-interactive shells are used to run commands. However, it can be a problem for programs such as rcp , scp and sftp that use remote shells for transferring data. It turns out that the remote user’s default shell (like Bash) is implicitly started when using the scp command. There’s no mention of this in the man page – only a mention that scp uses ssh for its data transfer. This has the consequence that if the .bashrc contains any commands that print to standard output, file transfers will fail , e.g, scp fails without error . See also this related Red Hat bug report from 15 years ago, scp breaks when there's an echo command in /etc/bashrc (which was eventually closed as WONTFIX ). Why scp and sftp fail SCP (Secure copy) and SFTP (Secure File Transfer Protocol) have their own protocols for the local and remote ends to exchange information about the file(s) being transferred. Any unexpected text from the remote end is (wrongly) interpreted as part of the protocol and the transfer fails. According to a FAQ from the Snail Book What often happens, though, is that there are statements in either the system or per-user shell startup files on the server ( .bashrc , .profile , /etc/csh.cshrc , .login , etc.) which output text messages on login, intended to be read by humans (like fortune , echo "Hi there!" , etc.). Such code should only produce output on interactive logins, when there is a tty attached to standard input. If it does not make this test, it will insert these text messages where they don't belong: in this case, polluting the protocol stream between scp2 / sftp and sftp-server . The reason the shell startup files are relevant at all, is that sshd employs the user's shell when starting any programs on the user's behalf (using e.g. /bin/sh -c "command"). This is a Unix tradition, and has advantages: The user's usual setup (command aliases, environment variables, umask, etc.) are in effect when remote commands are run. The common practice of setting an account's shell to /bin/false to disable it will prevent the owner from running any commands, should authentication still accidentally succeed for some reason. SCP protocol details For those interested in the details of how SCP works, I found interesting information in How the SCP protocol works which includes details on Running scp with talkative shell profiles on the remote side? : For example, this can happen if you add this to your shell profile on the remote system: echo "" Why it just hangs? That comes from the way how scp in source mode waits for the confirmation of the first protocol message. If it's not binary 0, it expects that it's a notification of a remote problem and waits for more characters to form an error message until the new line arrives. Since you didn't print another new line after the first one, your local scp just stays in a loop, blocked on read(2) . 
In the meantime, after the shell profile was processed on the remote side, scp in sink mode was started, which also blocks on read(2) , waiting for a binary zero denoting the start of the data transfer. Conclusion / TLDR Most of the statements in a typical .bashrc are only useful for an interactive shell – not when running remote commands with rsh or ssh . In most such situations, setting shell variables, aliases and defining functions isn’t desired – and printing any text to standard out is actively harmful if transferring files using programs such as scp or sftp . Exiting after verifying that the current shell is non-interactive is the safest behaviour for .bashrc . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/257571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
257,587 | I'm using NixOS on a laptop and want to disable auto-suspension that happens every time I close the laptop lid. Approach described in “ How to disable auto suspend when I close laptop lid? ”, that is, editing /etc/systemd/logind.conf won't work, as it is just a symlink to /etc/static/systemd/logind.conf , which itself is a symlink to a file in /nix/store . AFAIK, you shouldn't edit the Nix store directly, although I'm not entirely sure what would happen if I did. But the file in /nix/store doesn't have write permissions anyway. How do I disable auto-suspension of a laptop in a NixOS idiomatic way? | There is a configuration option services.logind.extraConfig . Open your NixOS configuration file ( /etc/nixos/configuration.nix ). Assign a string "HandleLidSwitch=ignore" (or whatever you would usually put into /etc/systemd/logind.conf ) to that option: services.logind.extraConfig = "HandleLidSwitch=ignore"; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
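Edits to configuration.nix only take effect after a rebuild:

    sudo nixos-rebuild switch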
257,590 | I need to use SSH on my machine to access my website and its databases (setting up a symbolic link- but I digress). Following problem I enter the command: ssh-keygen -t dsa To generate public/private dsa key pair. I save it in the default ( /home/user/.ssh/id_dsa ) and enter Enter passphrase twice. Then I get this back: WARNING: UNPROTECTED PRIVATE KEY FILE! Permissions 0755 for '/home/etc.ssh/id_rsa' are too open. It is recommended that your private key files are NOT accessible by others. This private key will be ignored. bad permissions: ignore key: [then the FILE PATH in VAR/LIB/SOMEWHERE] Now to work round this I then tried sudo chmod 600 ~/.ssh/id_rsa sudo chmod 600 ~/.ssh/id_rsa.pub But shortly after my computer froze up, and on logging back on there was a could not find .ICEauthority error . I got round this problem and deleted the SSH files but want to be able to use the correct permissions to avoid these issues in future. How should I set up ICEauthority, or where should I save the SSH Keys- or what permissions should they have? Would using a virtual machine be best? This is all very new and I am on a very steep learning curve, so any help appreciated. | chmod 600 ~/.ssh/id_rsa; chmod 600 ~/.ssh/id_rsa.pub (i.e. chmod u=rw,go= ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ) are correct. chmod 644 ~/.ssh/id_rsa.pub (i.e. chmod a=r,u+w ~/.ssh/id_rsa.pub ) would also be correct, but chmod 644 ~/.ssh/id_rsa (i.e. chmod a=r,u+w ~/.ssh/id_rsa ) would not be. Your public key can be public, what matters is that your private key is private. Also your .ssh directory itself must be writable only by you: chmod 700 ~/.ssh or chmod u=rwx,go= ~/.ssh . You of course need to be able to read it and access files in it (execute permission). It isn't directly harmful if others can read it, but it isn't useful either. You don't need sudo . Don't use sudo to manipulate your own files, that can only lead to mistakes. The error about .ICEauthority is not related to the chmod commands you show. Either it's a coincidence or you ran some other commands that you aren't showing us. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/257590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153233/"
]
} |
257,598 | I can't seem to find any information on this aside from "the CPU's MMU sends a signal" and "the kernel directs it to the offending program, terminating it". I assumed that it probably sends the signal to the shell and the shell handles it by terminating the offending process and printing "Segmentation fault" . So I tested that assumption by writing an extremely minimal shell I call crsh (crap shell). This shell does not do anything except take user input and feed it to the system() method. #include <stdio.h>#include <stdlib.h>int main(){ char cmdbuf[1000]; while (1){ printf("Crap Shell> "); fgets(cmdbuf, 1000, stdin); system(cmdbuf); }} So I ran this shell in a bare terminal (without bash running underneath). Then I proceeded to run a program that produces a segfault. If my assumptions were correct, this would either a) crash crsh , closing the xterm, b) not print "Segmentation fault" , or c) both. braden@system ~/code/crsh/ $ xterm -e ./crshCrap Shell> ./segfaultSegmentation faultCrap Shell> [still running] Back to square one, I guess. I've just demonstrated that it's not the shell that does this, but the system underneath. How does "Segmentation fault" even get printed? "Who" is doing it? The kernel? Something else? How does the signal and all of its side effects propagate from the hardware to the eventual termination of the program? | All modern CPUs have the capacity to interrupt the currently-executing machine instruction. They save enough state (usually, but not always, on the stack) to make it possible to resume execution later, as if nothing had happened (the interrupted instruction will be restarted from scratch, usually). Then they start executing an interrupt handler , which is just more machine code, but placed at a special location so the CPU knows where it is in advance. Interrupt handlers are always part of the kernel of the operating system: the component that runs with the greatest privilege and is responsible for supervising execution of all the other components. 1,2 Interrupts can be synchronous , meaning that they are triggered by the CPU itself as a direct response to something the currently-executing instruction did, or asynchronous , meaning that they happen at an unpredictable time because of an external event, like data arriving on the network port. Some people reserve the term "interrupt" for asynchronous interrupts, and call synchronous interrupts "traps", "faults", or "exceptions" instead, but those words all have other meanings so I'm going to stick with "synchronous interrupt". Now, most modern operating systems have a notion of processes . At its most basic, this is a mechanism whereby the computer can run more than one program at the same time, but it is also a key aspect of how operating systems configure memory protection , which is a feature of most (but, alas, still not all ) modern CPUs. It goes along with virtual memory , which is the ability to alter the mapping between memory addresses and actual locations in RAM. Memory protection allows the operating system to give each process its own private chunk of RAM, that only it can access. It also allows the operating system (acting on behalf of some process) to designate regions of RAM as read-only, executable, shared among a group of cooperating processes, etc. There will also be a chunk of memory that is only accessible by the kernel. 3 As long as each process accesses memory only in the ways that the CPU is configured to allow, memory protection is invisible. 
When a process breaks the rules, the CPU will generate a synchronous interrupt, asking the kernel to sort things out. It regularly happens that the process didn't really break the rules, only the kernel needs to do some work before the process can be allowed to continue. For instance, if a page of a process's memory needs to be "evicted" to the swap file in order to free up space in RAM for something else, the kernel will mark that page inaccessible. The next time the process tries to use it, the CPU will generate a memory-protection interrupt; the kernel will retrieve the page from swap, put it back where it was, mark it accessible again, and resume execution. But suppose that the process really did break the rules. It tried to access a page that has never had any RAM mapped to it, or it tried to execute a page that is marked as not containing machine code, or whatever. The family of operating systems generally known as "Unix" all use signals to deal with this situation. 4 Signals are similar to interrupts, but they are generated by the kernel and fielded by processes, rather than being generated by the hardware and fielded by the kernel. Processes can define signal handlers in their own code, and tell the kernel where they are. Those signal handlers will then execute, interrupting the normal flow of control, when necessary. Signals all have a number and two names, one of which is a cryptic acronym and the other a slightly less cryptic phrase. The signal that's generated when a process breaks the memory-protection rules is (by convention) number 11, and its names are SIGSEGV and "Segmentation fault". 5,6 An important difference between signals and interrupts is that there is a default behavior for every signal. If the operating system fails to define handlers for all interrupts, that is a bug in the OS, and the entire computer will crash when the CPU tries to invoke a missing handler. But processes are under no obligation to define signal handlers for all signals. If the kernel generates a signal for a process, and that signal has been left at its default behavior, the kernel will just go ahead and do whatever the default is and not bother the process. Most signals' default behaviors are either "do nothing" or "terminate this process and maybe also produce a core dump." SIGSEGV is one of the latter. So, to recap, we have a process that broke the memory-protection rules. The CPU suspended the process and generated a synchronous interrupt. The kernel fielded that interrupt and generated a SIGSEGV signal for the process. Let's assume the process did not set up a signal handler for SIGSEGV , so the kernel carries out the default behavior, which is to terminate the process. This has all the same effects as the _exit system call: open files are closed, memory is deallocated, etc. Up till this point nothing has printed out any messages that a human can see, and the shell (or, more generally, the parent process of the process that just got terminated) has not been involved at all. SIGSEGV goes to the process that broke the rules, not its parent. The next step in the sequence, though, is to notify the parent process that its child has been terminated. This can happen in several different ways, of which the simplest is when the parent is already waiting for this notification, using one of the wait system calls ( wait , waitpid , wait4 , etc). In that case, the kernel will just cause that system call to return, and supply the parent process with a code number called an exit status . 
The exit status informs the parent why the child process was terminated; in this case, it will learn that the child was terminated due to the default behavior of a SIGSEGV signal. The parent process may then report the event to a human by printing a message; shell programs almost always do this. Your crsh doesn't include code to do that, but it happens anyway, because the C library routine system runs a full-featured shell, /bin/sh , "under the hood". crsh is the grandparent in this scenario; the parent-process notification is fielded by /bin/sh , which prints its usual message. Then /bin/sh itself exits, since it has nothing more to do, and the C library's implementation of system receives that exit notification. You can see that exit notification in your code, by inspecting the return value of system ; but it won't tell you that the grandchild process died on a segfault, because that was consumed by the intermediate shell process.

Footnotes

1. Some operating systems don't implement device drivers as part of the kernel; however, all interrupt handlers still have to be part of the kernel, and so does the code that configures memory protection, because the hardware doesn't allow anything but the kernel to do these things.

2. There may be a program called a "hypervisor" or "virtual machine manager" that is even more privileged than the kernel, but for purposes of this answer it can be considered part of the hardware.

3. The kernel is a program, but it is not a process; it is more like a library. All processes execute parts of the kernel's code, from time to time, in addition to their own code. There may be a number of "kernel threads" that only execute kernel code, but they do not concern us here.

4. The one and only OS you are likely to have to deal with anymore that can't be considered an implementation of Unix is, of course, Windows. It does not use signals in this situation. (Indeed, it does not have signals; on Windows the <signal.h> interface is completely faked by the C library.) It uses something called "structured exception handling" instead.

5. Some memory-protection violations generate SIGBUS ("Bus error") instead of SIGSEGV . The line between the two is underspecified and varies from system to system. If you've written a program that defines a handler for SIGSEGV , it is probably a good idea to define the same handler for SIGBUS .

6. "Segmentation fault" was the name of the interrupt generated for memory-protection violations by one of the computers that ran the original Unix , probably the PDP-11 . "Segmentation" is a type of memory protection, but nowadays the term "segmentation fault" refers generically to any sort of memory protection violation.

7. All the other ways the parent process might be notified of a child having terminated end up with the parent calling wait and receiving an exit status. It's just that something else happens first. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/257598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54466/"
]
} |
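A shell-level illustration of the grandparent's view described above, as a sketch assuming a bash-like shell and the question's ./segfault binary: shells encode "killed by signal N" as exit status 128+N, so a parent can decode the death itself instead of relying on /bin/sh's message.

    # Run the crashing program, then decode its exit status.
    ./segfault
    status=$?
    if [ "$status" -gt 128 ]; then
        # kill -l maps a signal number back to its name: 139 - 128 = 11 -> SEGV
        echo "terminated by SIG$(kill -l $((status - 128)))" >&2
    fi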
257,604 | I am looking to search a file for a column that is 3 letters followed by any 3 or 4 numerals. eg.

    if ( $1 ~ /^[A-Z][A-Z][A-Z][0-9][0-9][0-9]/)

However I need the 3 letters to be a variable, so I am looking for the result of SP="ABC"

    if ( $1 ~ /^/SP/[0-9][0-9][0-9]/)

However, this does not work. How do I combine a variable and regular expression in the search pattern? |

    SP="ABC"
    if ( $1 ~ "^" SP "[0-9]{3}")

You can concatenate strings but not /xxx/ s, which are in effect more like regular expression matching operators, and with parsing rules that can get confusing (and vary between implementations). $1 ~ /ABC/ /BCD/ could be seen as the concatenation of $1 matched against the concatenation of /ABC/ (1 or 0 depending on whether $0 matches /ABC/ ) and /BCD/ (1 or 0 depending on whether $0 matches /BCD/ ), or $1 matched against /ABC/ (0 or 1) concatenated with $0 matched against /BCD/ , which would be confusing enough except that the /regexp/ operator doesn't work well when combined with some others like the concatenation operator here, as there's a possible confusion with the / division operator. But with parentheses, you can get interesting (read buggy ) behaviours:

    $ echo 11 ab | gawk '{print $1 ~ /a/ (/b/)}'
    1
    $ echo 11 ab | bwk-awk '{print $1 ~ /a/ (/b/)}'
    01
    $ echo b | bwk-awk '{print /a/ - (/b/)}'
    0-1

(that latter one being the result of /a/ (0) concatenated with the result of - (/b/) ). Note that in $1 ~ "^" SP "[0-9]{3}" , SP's content is still treated as a regexp (if it's ... , that matches 3 characters, not 3 dots); if that's not wanted:

    if (index($1, SP) == 1 && substr($1, length(SP)+1) ~ /^[0-9]{3}/)

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153248/"
]
} |
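As a usage sketch for the answer above (the file name and the -v passing are assumptions, not part of the question): handing the shell variable to awk with -v keeps the concatenation inside awk.

    SP="ABC"
    # Note: -v interprets backslash escapes in the value; harmless for plain letters.
    awk -v sp="$SP" '$1 ~ "^" sp "[0-9]{3}" { print }' datafile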
257,652 | I'm trying to change a partition's UUID, the problem is that I'm trying to change an encrypted volume. So I can't use the usual method described here . Since it throws the following error:

    tune2fs: Bad magic number in super-block while trying to open /dev/sda1
    Couldn't find valid filesystem superblock.

So let's suppose this is my blkid :

    /dev/sda1: UUID="adc4277c-0057-4455-a25e-94dec062571c" TYPE="crypto_LUKS" PARTUUID="23487624-01"
    /dev/sda2: UUID="9f16a55e-954b-4947-87ce-b0055c6ac953" TYPE="crypto_LUKS" PARTUUID="23487624-02"
    /dev/mapper/root: LABEL="root" UUID="6d1b1654-016b-4dc6-8330-3c242b2c538b" TYPE="ext4"
    /dev/mapper/home: LABEL="home" UUID="9c48b8fe-36a6-4958-af26-d15a2a89878b" TYPE="ext4"

What I want to change in this example is the /dev/sda1 UUID. How can I achieve this? | For changing the file system UUID you have to decrypt /dev/sda1 and then run tune2fs on the decrypted device mapper device. sda1 itself does not have a UUID, thus it cannot be changed. The LUKS volume within sda1 does have a UUID (which is of limited use because you probably cannot use it for mounting), though. It can be changed with

    cryptsetup luksUUID /dev/sda1 --uuid "$newuuid"

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153272/"
]
} |
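Putting both steps together, a hedged sketch that assumes the volume is already opened as /dev/mapper/root, as in the question's blkid output:

    # New UUID for the LUKS header on the raw partition.
    newuuid=$(uuidgen)
    cryptsetup luksUUID /dev/sda1 --uuid "$newuuid"

    # New UUID for the ext4 filesystem inside (unmount it first).
    tune2fs -U "$(uuidgen)" /dev/mapper/root

    # Remember to update /etc/crypttab, /etc/fstab, or boot config
    # entries that still reference the old UUIDs.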
257,679 | For example, this keeps the gnuplot-x11 graph window open until a key is pressed:

    gnuplot -e "plot \"file\" ; pause -1 \"text\""

How to keep it open until manually closed? | Use the -p or --persist option:

    gnuplot --persist -e 'plot sin(x)'

This will keep the window open until manually closed. From the man page:

    -p, --persist
        lets plot windows survive after main gnuplot program exits.

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94448/"
]
} |
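Combining this with the question's own invocation, a sketch in which the data file name is an assumption:

    # No pause needed; the window outlives gnuplot and is closed manually.
    gnuplot --persist -e 'plot "file" with lines'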
257,684 | I have a bash script where I'm trying to assign a heredoc string to a variable using read, and it only works if I use read with the -d '' option, i.e. read -d '' <variable>

script block

    #!/usr/bin/env bash

    function print_status() {
        echo
        echo "$1"
        echo
    }

    read -d '' str <<- EOF
        Setup nginx site-config

        NOTE: if an /etc/nginx/sites-available config already exists
        for this website, this routine will replace existing config
        with template from this script.
    EOF

    print_status "$str"

I found this answer on SO which is where I copied the command from, it works, but why? I know the first invocation of read stops when it encounters the first newline character, so if I use some character that doesn't appear in the string the whole heredoc gets read in, e.g.

    read -d '|' <variable>   -- this works
    read -d'' <variable>     -- this doesn't

I'm sure it's simple but what's going on with this read -d '' command option? | I guess the question is why read -d '' works though read -d'' doesn't. The problem doesn't have anything to do with read but is a quoting "problem". A "" / '' which is part of a word is simply removed by the shell's quote removal and never seen by read at all. Let the shell show you what it sees / executes:

    start cmd:> set -x

    start cmd:> echo read -d " " foo
    + echo read -d ' ' foo

    start cmd:> echo read -d" " foo
    + echo read '-d ' foo

    start cmd:> echo read -d "" foo
    + echo read -d '' foo

    start cmd:> echo read -d"" foo
    + echo read -d foo

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
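A small demonstration of the difference, assuming bash (read's -d option is not POSIX): the separated form passes an empty delimiter, so read runs to end of input; the attached form collapses to a bare -d, which then swallows the next word as its delimiter.

    # -d '' : read until NUL/EOF. read returns nonzero at EOF,
    # hence the "|| true" guard often seen under set -e.
    IFS= read -r -d '' str <<EOF || true
    first line
    second line
    EOF
    printf '%s' "$str"

    # -d'' : after quote removal only "-d" remains, so "str" would be
    # taken as the delimiter and no variable name would be left.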
257,785 | I have a text file like this:

    a
    b
    c
    d
    e
    f
    g

How can I group those lines to obtain an output like this:

    a-b
    b-c
    c-d
    d-e
    e-f
    f-g

I have to do this in shell (sh, csh, bash). I have found this: cat file | xargs -n2 but the last element of the first group does not become the first of the second. | With awk :

    awk 'NR!=1{print x"-"$0}{x=$0}' file

- NR!=1 applies to all lines except the first one
- print x"-"$0 prints the values with a dash between
- x=$0 sets x (for the next iteration)

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153396/"
]
} |
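A hedged alternative built from standard tools (assuming bash process substitution and GNU head for the -n -1 form): pair the file against itself shifted by one line.

    # head -n -1 drops the last line; tail -n +2 drops the first;
    # paste joins corresponding lines with "-". Output: a-b, b-c, ... f-g.
    paste -d '-' <(head -n -1 file) <(tail -n +2 file)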
257,797 | The new line characters seem to be lost, even though they appear within quotes.

    foo=
    for i in 1 2 3; do
      foo="$foo$(printf '\n')$i"
    done
    echo "$foo"
    123

Now in practice, I don't want to add a \n before the first element, so I would use a ${foo:+$(printf '\n')} instead, but first I have to figure out why the code above doesn't work. | Command substitution removes trailing newlines, so $(printf '\n') is the same thing as $(printf '\n\n\n\n') , namely the empty string. To include a newline in a string, put it between single or double quotes.

    for i in 1 2 3; do
      foo="$foo
    $i"
    done

You may find it less ugly to define a variable to store just a newline.

    nl='
    '
    for i in 1 2 3; do
      foo="$foo$nl$i"
    done

In ksh93, bash, FreeBSD sh, mksh and zsh, but not plain sh (yet), you can also use dollar-single-quote, which allows backslash escapes.

    for i in 1 2 3; do
      foo="$foo"$'\n'"$i"
    done

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/257797",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53904/"
]
} |
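One more bash-only option, as a sketch: printf -v assigns straight into the variable, so no command substitution runs and nothing strips the newlines.

    foo=
    for i in 1 2 3; do
        printf -v foo '%s%s\n' "$foo" "$i"
    done
    printf '%s' "$foo"    # prints 1, 2, 3 on separate lines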
257,812 | I read through this popular IBM doc (I see it referred quite often on the web) explaining the function of the initial RAM disk. I hit a wall in conceptualizing how this works though. In the doc it says: The boot loader, such as GRUB, identifies the kernel that is to be loaded and copies this kernel image and any associated initrd into memory I'm already confused: Does it copy the entire kernel into memory or just part of it? If the entire kernel is in memory then why do we even need the initial RAM disk? I thought the purpose of initrd was to be able to have a small generalized kernel image and initrd will install the correct modules in it before the kernel image is loaded. But if the entire kernel is already in memory why do we need initrd? That also brings up another thing that confuses me - where are the modules that get loaded into the kernel located? Are all the kernel modules stored inside initrd? | The entire kernel is loaded into memory at boot, typically along with an initramfs nowadays. (It is still possible to set up a system to boot without an initramfs but that's unusual on desktops and servers.) The initramfs's role is to provide the functionality needed to mount the "real" filesystems and continue booting the system. That involves kernel modules, and also various binaries: you need at least udev , perhaps some networking, and kmod which loads modules. Modules can be loaded into the kernel at any time, not just at boot, so there's no special preparation of the kernel by the initramfs . They can be stored anywhere: the initramfs , /lib/modules on the real filesystem, in a development tree if you're developing a module... The initramfs only needs to contain the modules which are necessary to mount the root filesystem (which contains the rest). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/257812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29731/"
]
} |
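A hedged way to see this on a live system; the tool and file names below are distro-dependent assumptions (Debian/Ubuntu shown; Fedora/RHEL use lsinitrd and a different image name):

    # Peek at what the initramfs actually carries: kmod, udev, a few modules.
    lsinitramfs /boot/initrd.img-"$(uname -r)" | head

    # The full module tree lives on the real root filesystem.
    ls /lib/modules/"$(uname -r)"/kernel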
257,819 | I have a file that looks like this toy example. My actual file has 4 million lines, about 10 of which I need to delete.

    ID Data1 Data2
    1 100 100
    2 100 200
    3 200 100
    ID Data1 Data2
    4 100 100
    ID Data1 Data2
    5 200 200

I want to delete the lines that look like the header, except for the first line. Final file:

    ID Data1 Data2
    1 100 100
    2 100 200
    3 200 100
    4 100 100
    5 200 200

How can I do this? |

    header=$(head -n 1 input)
    (printf "%s\n" "$header"; grep -vFxe "$header" input) > output

- grab the header line from the input file into a variable
- print the header
- process the file with grep to omit lines that match the header
- capture the output from the above two steps into the output file

| {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/257819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124543/"
]
} |
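An equivalent single-pass sketch in awk (the input and output names are assumptions carried over from the answer above), which avoids reading the file twice:

    # Remember the first line, print it, then drop any later exact repeats.
    awk 'NR == 1 { hdr = $0; print; next } $0 != hdr' input > output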