source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
196,541 | I have installed CentOS 7 which uses gnome classic, and am finding the hot corner extremely painfully annoying. How do I remove/disable it? | Install the extension No Topleft Hot Corner . You can do this from your browser, visiting https://extensions.gnome.org/ . You'll need to allow the appropriate plugin to run (on Firefox, it's the plugin Gnome Shell Integration). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4430/"
]
} |
196,549 | I'm making a curl request where it displays an html output in the console like this <b>Warning</b>: Cannot modify header information - headers already sent by (output started at /home/domain/public_html/wp-content/themes/explicit/functions/ajax.php:87) in <b>/home/domain/public_html/wp-content/themes/explicit/functions/ajax.php</b> on line <b>149</b><br />...... etc I need to hide these outputs when running the CURL requests, tried running the CURL like this curl -s 'http://example.com' But it still displays the output, how can I hide the output? Thanks | From man curl -s, --silent Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it . So if you don't want any output use: curl -s 'http://example.com' > /dev/null | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/196549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110666/"
]
} |
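A small sketch building on that answer: if you want to discard the body entirely but still confirm the request succeeded, curl's standard -o and -w options can be combined (the URL is just the question's placeholder):

    # discard the body, print only the HTTP status code
    curl -s -o /dev/null -w '%{http_code}\n' 'http://example.com'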
196,565 | I wanted to format the Unix files conditionally, I am currently working on diff command and wanted to know if it is possible to format the text of the diff command output. Example: Matched values should be displayed in green. Unmatched values should be displayed in red. Suppose I have two files file1 and file2 and my command is diff file1 file2 . Now I wanted that suppose output contain 5 mismatch then those mismatch should be displayed in Red color. How to achieve this using unix? In short "Change color to red for the output of diff command for values which mismatch" | diff --color option was added to GNU diffutils 3.4 (2016-08-08) This is the default diff implementation on most distros, which will soon be getting it. Ubuntu 18.04 has diffutils 3.6 and therefore has it. On 3.5 it looks like this: Tested: diff --color -u \ <(seq 6 | sed 's/$/ a/') \ <(seq 8 | grep -Ev '^(2|3)$' | sed 's/$/ a/') Apparently added in commit c0fa19fe92da71404f809aafb5f51cfd99b1bee2 (Mar 2015). Word-level diff Like diff-highlight . Not possible it seems, feature request: https://lists.gnu.org/archive/html/diffutils-devel/2017-01/msg00001.html Related threads: https://stackoverflow.com/questions/1721738/using-diff-or-anything-else-to-get-character-level-diff-between-text-files diff within a line https://superuser.com/questions/496415/using-diff-on-a-long-one-line-file ydiff does it though, see below. ydiff side-by-side word level diff https://github.com/ymattw/ydiff Is this Nirvana? python3 -m pip install --user ydiffdiff -u a b | ydiff -s Outcome: If the lines are too narrow (default 80 columns), fit to screen with: diff -u a b | ydiff -w 0 -s Contents of the test files: a 12345 the original line the original line the original line the original line6789101112131415 the original line the original line the original line the original line1617181920 b 12345 the original line teh original line the original line the original line6789101112131415 the original line the original line the original line the origlnal line1617181920 ydiff Git integration ydiff integrates with Git without any configuration required. From inside a git repository, instead of git diff , you can do just: ydiff -s and instead of git log : ydiff -ls See also: https://stackoverflow.com/questions/7669963/how-can-i-get-a-side-by-side-diff-when-i-do-git-diff/14649328#14649328 Tested on Ubuntu 16.04, git 2.18.0, ydiff 1.1. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196565",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108771/"
]
} |
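A minimal illustration of the --color option described above, assuming GNU diffutils 3.4 or later; forcing colour and piping through less -R keeps the highlighting when the output is paged:

    diff --color=always -u file1 file2 | less -R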
196,575 | I'd been trying to use the rename function in my debian, I searched here at unix.stackexchange but it seems the solution is not the same in my scenario which files contains [] [800p]-[WOLU-H]-test1.mkv[800p]-[WOLU-H]-test2.mkv desired output is just [WOLU-H]-test1.mkv[WOLU-H]-test2.mkv i tried rename [800p]-[WOLU-H] [WOLU-H] [800p]-* but it keeps saying: Bareword found where operator expected at (eval 1) line 1, near "800p" (Missing operator before p?)syntax error at (eval 1) line 1, near "800p" I tried other such as rename 's/[800p]-[WOLU-H]/[WOLU-H]/' [800p]-* but output also failed Invalid [] range "U-H" in regex; marked by <-- HERE in m/[800p]-[WOLU-H <-- HERE ]/ at (eval 1) line 1. can somebody enlighten me with the correct process? Thanks! UPDATE I tried this: rename 's/\[800p\]-\[WOLU-H\]/\[WOLU-H\]/' \[800p\]-* but error: Bareword found where operator expected at (eval 1) line 1, near "800p" (Missing operator before p?)Backslash found where operator expected at (eval 1) line 1, near "p\"Backslash found where operator expected at (eval 1) line 1, near "]\" (Missing operator before \?)Backslash found where operator expected at (eval 1) line 1, near "]\" (Missing operator before \?)syntax error at (eval 1) line 1, near "800p"Unmatched right square bracket at (eval 1) line 1, at end of lineUnmatched right square bracket at (eval 1) line 1, at end of line I also tried this: rename "[800p]-[WOLU-H]" "[WOLU-H]" "[800p]-"* But still error Bareword found where operator expected at (eval 1) line 1, near "800p" (Missing operator before p?)syntax error at (eval 1) line 1, near "800p" I think the - with numerics are messing? | [ and ] have a special meaning in bash and also in regular expressions, so you have to escape them as \[ and \] . Something like this should work: rename 's/\[800p\]-\[WOLU-H\]/\[WOLU-H\]/' \[800p\]-* Example: $ touch [800p]-[WOLU-H]-test1.mkv [800p]-[WOLU-H]-test2.mkv$ ls[800p]-[WOLU-H]-test1.mkv [800p]-[WOLU-H]-test2.mkv$ rename 's/\[800p\]-\[WOLU-H\]/\[WOLU-H\]/' \[800p\]-*$ ls[WOLU-H]-test1.mkv [WOLU-H]-test2.mkv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79493/"
]
} |
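As an alternative to backslash-escaping every bracket on the command line, the Perl expression and the glob prefix can simply be single-quoted; a sketch assuming the Perl rename shipped with Debian (-n does a dry run first):

    rename -n 's/^\[800p\]-//' '[800p]-'*

Inside the s/// pattern the brackets still need escaping because they are regex metacharacters, but the shell no longer sees them as glob characters.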
196,576 | I am running Kali Linux in a virtual box (VMWare Player), and although the resolution fits my screen properly, the icons, windows, text is so small. In example, when I use the terminal or browse using Iceweasel, the windows they appear in show small text, and everything is not proportioned properly to size. It seems that everything has shrunk in size; even the dropdown menus on the desktop, and windows that open are not properly displayed regarding size. How can I increase the size so that windows, text, desktop icons and desktop menus display properly (standard size)? | When you installed Guest Additions to your VBox Kali, there should be a bar at the bottom of your screen that shows up every time your cursor is there at the bottom (with options: File, Machine, View, Input, Devices, Help, Kali). I cannot take a screenshot of the bar but you should be able to find it. Click on the View option, and you can choose full screen or seamless or whichever you like. But with high solution the icons will be small as you mentioned. It's okay because you can see at the bottom of the View list you have "Scale Factor", move your cursor on it and it will show different scaling such as 125% or 150%. I find 125% and 150% suit my screen the best. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110689/"
]
} |
196,586 | If I do not edit the .bashrc or other config files, the environment variables that I've set are gone when I log out or close the terminal. What I'm curious about is: where are those 'temporary' env vars saved? My guess is that they are in memory. That makes sense because they disappear when the terminal is closed (i.e. the terminal I was using is gone from memory). Am I correct? | Environment variables are stored in memory associated with a process. Every process has access to its own set of environment variables. A child process (one started by the "current" process) inherits a copy of those variables. It's not possible for any process to alter any other process's environment variables. Using a shell such as bash you can define environment variables when you log in, or start a new bash process. The shell itself also defines a number of environment variables ( PWD springs to mind, after being prompted by comments), and other environment variables, such as PATH , are used at a much deeper level than just the shell - in this example by the system libraries. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108136/"
]
} |
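A quick demonstration of the inheritance described above (the variable name FOO is just an example):

    # the child gets a copy of the assignment; the parent is unaffected
    FOO=hello bash -c 'echo "child sees: $FOO"'
    echo "parent sees: ${FOO:-<unset>}"
    # the environment of the current shell can be inspected via procfs
    tr '\0' '\n' < /proc/$$/environ | head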
196,603 | On the man page , it just says: -m Job control is enabled. But what does this actually mean? I came across this command in a SO question , I have the same problem as OP, which is "fabric cannot start tomcat". And set -m solved this. The OP explained a little, but I don't quite understand: The issue was in background tasks as they will be killed when the command ends. The solution is simple: just add "set -m;" prefix before command. | Quoting the bash documentation (from man bash ): JOB CONTROL Job control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the operating system kernel's terminal driver and bash. So, quite simply said, having set -m (the default for interactive shells) allows one to use built-ins such as fg and bg , which would be disabled under set +m (the default for non-interactive shells). It's not obvious to me what the connection is between job control and killing background processes on exit, however, but I can confirm that there is one: running set -m; (sleep 10 ; touch control-on) & will create the file if one quits the shell right after typing that command, but set +m; (sleep 10 ; touch control-off) & will not. I think the answer lies in the rest of the documentation for set -m : -m Monitor mode. [...] Background processes run in a separate process group and a line containing their exit status is printed upon their completion. This means that background jobs started under set +m are not actual "background processes" ("Background processes are those whose process group ID differs from the terminal's"): they share the same process group ID as the shell that started them, rather than having their own process group like proper background processes. This explains the behavior observed when the shell quits before some of its background jobs: if I understand correctly, when quitting, a signal is sent to the processes in the same process group as the shell (thus killing background jobs started under set +m ), but not to those of other process groups (thus leaving alone true background processes started under set -m ). So, in your case, the startup.sh script presumably starts a background job. When this script is run non-interactively, such as over SSH as in the question you linked to, job control is disabled, the "background" job shares the process group of the remote shell, and is thus killed as soon as that shell exits. Conversely, by enabling job control in that shell, the background job acquires its own process group, and isn't killed when its parent shell exits. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55914/"
]
} |
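A sketch that makes the process-group difference visible (run in an interactive bash; sleep is just a placeholder job):

    set +m; sleep 5 & ps -o pid,pgid,comm -p "$!,$$"   # sleep shares the shell's PGID
    set -m; sleep 5 & ps -o pid,pgid,comm -p "$!,$$"   # sleep now has its own PGID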
196,633 | I have this idea of running a bash script to check some conditions and using ffmpeg to convert all the videos in my directory from any format to .mkv and it is working great! The thing is, I did not know that a for file in loop does not work recursively ( https://stackoverflow.com/questions/4638874/how-to-loop-through-a-directory-recursively ) But I barely understand "piping" and am looking forward to seeing an example and clearing some uncertainties. I have this scenario in mind that I think would help me a lot to understand. Suppose I have this bash script snippet: for file in *.mkv *avi *mp4 *flv *ogg *mov; dotarget="${file%.*}.mkv" ffmpeg -i "$file" "$target" && rm -rf "$file"done What it does is, for the current directory, search for any *.mkv *avi *mp4 *flv *ogg *mov then declare the output to have its extension to be .mkv then afterwards delete the original file, then the output should be saved to the very same folder the original video is in. How can I convert this to run recursively? if I use find , where to declare the variable $file ? And where should you declare $target ? Are all find just really one-liners? I really need to pass the file to a variable $file , because I will still need to run the condition check. And, assuming that (1) is successful, how to make sure that the requirement "then the output should be save to the very same folder the original video is in" is satisfied? | You've got this code: for file in *.mkv *avi *mp4 *flv *ogg *mov; dotarget="${file%.*}.mkv" ffmpeg -i "$file" "$target" && rm -rf "$file"done which runs in the current directory. To turn it into a recursive process you have a couple of choices. The easiest (IMO) is to use find as you suggested. The syntax for find is very "un-UNIX-like" but the principle here is that each argument can be applied with AND or OR conditions. Here, we're going to say " If this-filename matches OR that-filename matches Then print-it ". The filename patterns are quoted so that the shell can't get hold of them (remember that the shell is responsible for expanding all unquoted patterns, so if you had an unquoted pattern of *.mp4 and you had janeeyre.mp4 in your current directory, the shell would replace *.mp4 with the match, and find would see -name janeeyre.mp4 instead of your desired -name *.mp4 ; it gets worse if *.mp4 matches multiple names...). The brackets are prefixed with \ also to keep the shell from trying to action them as subshell markers (we could quote the brackets instead, if preferred: '(' ). find . \( -name '*.mkv' -o -name '*avi' -o -name '*mp4' -o -name '*flv' -o -name '*ogg' -o -name '*mov' \) -print The output of this needs to be fed into the input of a while loop that processes each file in turn: while IFS= read file ## IFS= prevents "read" stripping whitespacedo target="${file%.*}.mkv" ffmpeg -i "$file" "$target" && rm -rf "$file"done Now all that's left is to join the two parts together with a pipe | so that the output of the find becomes the input of the while loop. While you're testing this code I'd recommend you prefix both ffmpeg and rm with echo so you can see what would be executed - and with what paths. Here is the final result, including the echo statements I recommend for testing: find . \( -name '*.mkv' -o -name '*avi' -o -name '*mp4' -o -name '*flv' -o -name '*ogg' -o -name '*mov' \) -print | while IFS= read file ## IFS= prevents "read" stripping whitespace do target="${file%.*}.mkv" echo ffmpeg -i "$file" "$target" && echo rm -rf "$file" done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79493/"
]
} |
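A variant of the final pipeline that also survives filenames containing newlines, assuming bash and GNU find (-print0 paired with read -d ''); the echo prefixes are kept for the same dry-run testing the answer recommends:

    find . -type f \( -name '*.mkv' -o -name '*.avi' -o -name '*.mp4' \
                   -o -name '*.flv' -o -name '*.ogg' -o -name '*.mov' \) -print0 |
    while IFS= read -r -d '' file; do
        target="${file%.*}.mkv"
        echo ffmpeg -i "$file" "$target" && echo rm -f "$file"
    done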
196,649 | What is the "brandbot" service running in Centos 7? What does it do? All I can find (it has no manpage) is "Flexible Branding Service" and that it's part of the "base" repo. | brandbot : tool to write branding to /etc/os-release When you start it 1 , it looks for /var/lib/rhsm/branded_name and adds in /etc/os-release : PRETTY_NAME=first_line_from_branded_name 1 Path-based activation — System services that support path-based activation can be started on-demand when a particular file or directory changes its state. Systemd uses path units for path-based activation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110738/"
]
} |
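A generic sketch of how such path-based activation is wired up in systemd; the unit and file names below are made up for illustration and are not the actual brandbot units:

    # /etc/systemd/system/example-watch.path
    [Path]
    PathExists=/var/lib/example/trigger-file
    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/example-watch.service (activated when the file above appears)
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/example-handler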
196,656 | I created a soft link ( ln -s 1 2 ) to a directory which is inside the test directory and opened the soft link ( cd 2 ) and displayed the current path using pwd . The displayed path was ~/test/2 and not ~/test/1 . It's different in an OS like Windows, the shortcut brings us to the real directory. I'm little bit confused how this soft link works in Linux. Is it not a shortcut like in Windows ? Why is the path not ~/test/1 ? $ mkdir test $ cd test $ mkdir 1 $ ln -s 1 2 $ cd 2 $ pwd /home/dazz/test/2 | That is a feature of the shell that remembers how you got to where you are. If you have realpath installed you can do: $ realpath /home/dazz/test/1 And lacking that if you have python: $ python -c "import os; print(os.path.realpath('.'))"/home/dazz/test/1 or readlink (from coreutils): $ readlink -f ./home/dazz/test/1 or /bin/pwd (not the shell built-in pwd ): $ /bin/pwd/home/dazz/test/1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110279/"
]
} |
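The shell's own pwd and cd also accept a -P flag that resolves symlinks, so no external tool is strictly needed:

    cd ~/test/2 && pwd -P    # prints /home/dazz/test/1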
196,658 | I'm new to Linux and only have basic knowledge about it. In the past I've booted into single-user mode on some Linux distributions by just appending " single" to the boot command. However, I don't know where I should add it in Ubuntu 14, where the boot command is actually a shell script. Can anyone help me please? Below are some snapshots I captured. | I got the answer. At this line: linux /vmlinuz-3.13.0-32-generic root=/dev/mapper/ubuntu-vg-root ro , append single so that it becomes linux /vmlinuz-3.13.0-32-generic root=/dev/mapper/ubuntu-vg-root ro single Then press Ctrl + x . You will go into single-user mode. To make this permanent, you need to edit /etc/default/grub and change this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT="text" The details may vary depending on your system. The important bit is setting GRUB_CMDLINE_LINUX_DEFAULT to text . Once you have done this, run sudo update-grub and next time you restart you will boot into text mode. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8839/"
]
} |
196,677 | I asked Google the same question and didn't like the results I got. What is /tmp/.X11-unix/ ? | On my fairly up-to-date Arch laptop, /tmp/.X11-unix/ is a directory with one entry: X0 , a Unix-domain socket . The X11 server (usuall Xorg these days) communicates with clients like xterm , firefox, etc via some kind of reliable stream of bytes. A Unix domain socket is probably a bit more secure than a TCP socket open to the world, and probably a bit faster, as the kernel does it all, and does not have to rely on an ethernet or wireless card. My X11 server shows up as: bediger 294 293 0 Apr09 tty1 01:23:26 /usr/lib/xorg-server/Xorg -nolisten tcp :0 vt1 -auth /tmp/serverauth.aK3Lrv5hMV The "-nolisten tcp" keeps it from opening TCP port 6000 for communications. The command lsof -U can tell you what processes are using which Unix domain sockets. I see Xorg as connected to /tmp/.X11-unix/X0 . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61349/"
]
} |
196,690 | I have this code: #!/bin/bashnum=${1:-undefined}cmd=$(which {banner,echo} | head -n1)until [[ "$num" =~ ^[0-9]+$ ]]; do read -p "Type a number: " numdonefor ((num;num>=0;num--)); do $cmd $num sleep 1done But I think that there is a better way to know if a program is installed. I've tried these. However I think that are unclear: which banner && cmd=banner || cmd=echo cmd=$(eval 'which '{banner,echo}'||' :) cmd=$(which {banner,echo} | head -n1) cmd=$(which banner || which echo) I like 3 more than the rest, however I think that someone could help me to find a good method to do this. I accept the use of type , hash or command instead if needed. This is a compact code challenge, so I want to make a oneliner, make it short, and try to not repeat the variable name or any command (except || , && and so) | The function-based approach results in the clearest code. There's at least two variants possible, the one suggested by FloHimself above in the comment (which would not be my preferred way): function myprint{ if type banner >/dev/null then banner "$@" else echo "$@" fi}myprint "Hello World!" or - the variant I'd prefer - to use a conditional function definition : if type banner >/dev/nullthen function myprint { banner "$@" ;}else function myprint { echo "$@" ;}fimyprint "Hello World!" which doesn't have the overhead of an if -condition with each call, and which also does not require duplication of string or variable expressions (as in another suggestion). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107625/"
]
} |
196,715 | I have a file that I want to pad until it reaches 16 MiB (16777216 bytes). Currently it is 16515072 bytes. The difference is 262144 bytes. How do I pad it? This doesn't seem to be working: cp smallfile.img largerfile.imgdd if=/dev/zero of=largerfile.img bs=1 count=262144 | Besides the answers to get a physical padding you may also leave most of the padding space in the file just empty ("holes"), by seek ing to the new end-position of the file and writing a single character: dd if=/dev/zero of=largerfile.txt bs=1 count=1 seek=16777215 (which has the advantage to be much more performant, specifically with bs=1 , and does not occupy large amounts of additional disk space). That method seems to work even without adding any character, by using if=/dev/null and the final desired file size: dd if=/dev/null of=largerfile.txt bs=1 count=1 seek=16777216 A performant variant of a physical padding solution that uses larger block-sizes is: padding=262144 bs=32768 nblocks=$((padding/bs)) rest=$((padding%bs)){ dd if=/dev/zero bs=$bs count=$nblocks dd if=/dev/zero bs=$rest count=1} 2>/dev/null >>largerfile.txt | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
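If GNU coreutils is available, truncate(1) performs the same sparse extension in one step; a sketch using the sizes from the question:

    truncate -s 16M largerfile.img      # grow (or shrink) to exactly 16 MiB
    truncate -s +262144 largerfile.img  # or, alternatively, extend by a relative amount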
196,759 | I'm trying to use env to set environment variables (read from another source, say for example a file) for a subprocess. Essentially, I am attempting the following: env VALUE=thisisatest ./somescript.sh If, for example, somescript.sh was: echo $VALUE Then this would print thisisatest as expected. But I would like to load the variables from a file. I've gotten to this point: env $(cat .vars | xargs -d '\n') ./somescript.sh But, I run into trouble when any of the variables contain spaces (and quotes don't work as expected) . So for example: env $(echo 'VALUE="this is a test"' | xargs -d '\n') ./somescript.sh Will error with env: is: No such file or directory And trying: env $(echo 'VALUE="thisisatest"' | xargs -d '\n') ./somescript.sh Will give me the unexpected: "thisisatest" I assumed this would work properly since running env VALUE="thisisatest" ./somescript.sh prints it without the quotes. From the error, I glean that for some reason env is not understanding that the quotes mean that the value that follows should be a string. However, I'm unsure how to interpolate these vars in a way that the quotes are correctly interpreted. Can anyone provide any hints for how I could accomplish this? Thanks! | You need double quote in command substitution, otherwise, the shell will perform field splitting with the result of command substitution: $ env "$(echo 'VALUE="this is a test"')" ./somescript.sh"this is a test" For env reading from file, you must let the shell does field spliting, but set IFS to newline only, so your command won't break with space: $ IFS=''$ env $(cat .vars) ./somescript.sh If you want to read from file, it's better if you just source (aka dot ) the file in somescript.sh : #!/bin/bash. .vars: The rest of script go here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8506/"
]
} |
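Another common pattern, assuming .vars contains plain shell assignments such as VALUE="this is a test": export everything assigned while sourcing the file, then run the script normally:

    set -a          # mark all subsequently assigned variables for export
    . ./.vars
    set +a
    ./somescript.sh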
196,768 | Assume I have a directory tree as follow: /home/xen/p.c/home/dan/t.c/home/jhon/l.c...etc (many more users) I need to get comma separated (username,full path to c file) list as follow: xen,/home/xen/p.cdan,/home/dan/t.c...etc How to do that from command line? | You need double quote in command substitution, otherwise, the shell will perform field splitting with the result of command substitution: $ env "$(echo 'VALUE="this is a test"')" ./somescript.sh"this is a test" For env reading from file, you must let the shell does field spliting, but set IFS to newline only, so your command won't break with space: $ IFS=''$ env $(cat .vars) ./somescript.sh If you want to read from file, it's better if you just source (aka dot ) the file in somescript.sh : #!/bin/bash. .vars: The rest of script go here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110819/"
]
} |
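A minimal sketch for what this row's question actually asks (one user,path line per .c file), assuming GNU find and that the file's owner is the user of interest:

    find /home -maxdepth 2 -type f -name '*.c' -printf '%u,%p\n'
    # e.g. xen,/home/xen/p.c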
196,780 | For example: sed 's/\u0091//g' file1 Right now, I have to do hexdump to get hex number and put into sed as follows: $ echo -ne '\u9991' | hexdump -C00000000 e9 a6 91 |...|00000003 And then: $ sed 's/\xe9\xa6\x91//g' file1 | Just use that syntax: sed 's/馑//g' file1 Or in the escaped form: sed "s/$(echo -ne '\u9991')//g" file1 (Note that older versions of Bash and some shells do not understand echo -e '\u9991' , so check first.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24077/"
]
} |
196,802 | If I set a passphrase on my private key like so: openssl rsa -des -in insecure.key -out secure.key and I remove the passphrase like so: openssl rsa -in secure.key -out insecure.key then my private key (insecure.key) ends up with a file mode of 644. How can I tell openssl to create insecure.key with a file mode of 600 (or anything)? I know that I can simply chmod the file afterwards, but what if I lose connection? Then there's a private key on the filesystem that anybody could read. | You can try to set umask before converting it umask 077; openssl rsa -in secure.key -out insecure.key Edit: To not affect other files in the current shell environment by the umask setting execute it in a subshell: ( umask 077; openssl rsa -in secure.key -out insecure.key ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47985/"
]
} |
196,829 | In various shell scripts I often see two approaches for getting information from databases supported by Name Service Switch libraries like /etc/group , /etc/hosts or /etc/services . One is getent utility and other is grep or some other text processing tool. For example: root@fw-test:~# getent passwd rootroot:x:0:0:root:/root:/bin/bashroot@fw-test:~# root@fw-test:~# grep root /etc/passwdroot:x:0:0:root:/root:/bin/bashroot@fw-test:~# ..or: root@fw-test:~# getent hosts www.blah.com189.113.174.199 www.blah.comroot@fw-test:~# root@fw-test:~# host www.blah.comwww.blah.com has address 189.113.174.199root@fw-test:~# Which of those two approaches above should be used in scripts? I mean is one of the solutions more elegant or standard than the other? | A lot of this will come down to factors stemming from the specific environment you're in, but I prefer the getent method because it looks up external users as well as local users. Specifically, it will look up the LDAP users in my environment from the LDAP server, whereas a cat /etc/passwd or similar has no idea my LDAP server even exists, much less has valid users on it. If all your users are always local, getent doesn't really buy you much aside from "no need to rewrite if we add an LDAP server in 10 years". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
196,830 | What is the practical usage of /etc/networks file? As I understand, one can give names to networks in this file. For example: root@fw-test:~# cat /etc/networksdefault 0.0.0.0loopback 127.0.0.0link-local 169.254.0.0google-dns 8.8.4.4root@fw-test:~# However, if I try to use this network name for example in ip utility, it does not work: root@fw-test:~# ip route add google-dns via 104.236.63.1 dev eth0Error: an inet prefix is expected rather than "google-dns".root@fw-test:~# ip route add 8.8.4.4 via 104.236.64.1 dev eth0root@fw-test:~# What is the practical usage of /etc/networks file? | As written in the manual page , the /etc/networks file is to describe symbolic names for networks. With network, it is meant the network address with tailing .0 at the end. Only simple Class A, B or C networks are supported. In your example the google-dns entry is wrong. It's not a A,B or C network. It's an ip-address-hostname-relationship therefore it belongs to /etc/hosts . Actually the default entry is also not conformant. Lets imagine you have an ip address 192.168.1.5 from your corporate network. An entry in /etc/network could then be: corpname 192.168.1.0 When using utilities like route or netstat , those networks are translated (if you don't suppress resolution with the -n flag). A routing table could then look like: Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifacedefault 192.168.1.1 0.0.0.0 UG 0 0 0 eth0corpname * 255.255.255.0 U 0 0 0 eth0 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
196,840 | We have an Ubuntu 12.04/apache server and a directory in the "/var/www/foo" and root permission. Something is repeatedly changes the permission of this directory. Question : How can we investigate, what is changing the permission? | You could investigate using auditing to find this. In ubuntu the package is called auditd . Use that command to start a investigation if a file or folder: auditctl -w /var/www/foo -p a -w means watch the file/folder -p a means watch for changes in file attributes Now start tail -f /var/log/audit/audit.log . When the attributes change you will see something like this in the log file: type=SYSCALL msg=audit(1429279282.410:59): arch=c000003e syscall=268 success=yes exit=0 a0=ffffffffffffff9c a1=23f20f0 a2=1c0 a3=7fff90dd96e0 items=1 ppid=26951 pid=32041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts5 ses=4294967295 comm="chmod" exe="/bin/chmod"type=CWD msg=audit(1429279282.410:59): cwd="/root"type=PATH msg=audit(1429279282.410:59): item=0 name="/var/www/foo" inode=18284 dev=00:13 mode=040700 ouid=0 ogid=0 rdev=00:00 I executed chmod 700 /var/www/foo to trigger it. In the 1st line, you see which executable did it: exe="/bin/chmod" the pid of the process: pid=32041 You could also find out which user it was: uid=0 , root in my case. In the 3rd line, you see the changed mode: mode=040700 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105588/"
]
} |
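If the audit package's ausearch tool is installed alongside auditctl, the same events can be pulled back out of the log in interpreted form instead of tailing the raw file:

    ausearch -f /var/www/foo -i | less    # -i translates uids, syscall numbers, etc.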
196,844 | Have the following problem: Computer boots till the fedora logo before login.when I press F2 , I see the following: (1 of 3) A start job is running for Gnome Display Manager (2 of 3) A start job is running for Login Service (3 of 3) A start job is running for Wait for Plymouth Boot Screen The login screen is not showing up. Before reboot the computer crashed because I did run a recovery software (r-linux) but that broke the system. What can I do? | You could investigate using auditing to find this. In ubuntu the package is called auditd . Use that command to start a investigation if a file or folder: auditctl -w /var/www/foo -p a -w means watch the file/folder -p a means watch for changes in file attributes Now start tail -f /var/log/audit/audit.log . When the attributes change you will see something like this in the log file: type=SYSCALL msg=audit(1429279282.410:59): arch=c000003e syscall=268 success=yes exit=0 a0=ffffffffffffff9c a1=23f20f0 a2=1c0 a3=7fff90dd96e0 items=1 ppid=26951 pid=32041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts5 ses=4294967295 comm="chmod" exe="/bin/chmod"type=CWD msg=audit(1429279282.410:59): cwd="/root"type=PATH msg=audit(1429279282.410:59): item=0 name="/var/www/foo" inode=18284 dev=00:13 mode=040700 ouid=0 ogid=0 rdev=00:00 I executed chmod 700 /var/www/foo to trigger it. In the 1st line, you see which executable did it: exe="/bin/chmod" the pid of the process: pid=32041 You could also find out which user it was: uid=0 , root in my case. In the 3rd line, you see the changed mode: mode=040700 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/196844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110862/"
]
} |
196,874 | I would like to make a completely unattended installation of Ubuntu Server 14.04 from an USB drive where I extracted the ubuntu-14.04.2-server.iso In /syslinux/txt.cfg of the USB drive, I added the following section menu label ^Unattended Ubuntu Server installationkernel /install/vmlinuzappend noprompt cdrom-detect/try-usb=true auto=true priority=critical url=http://website.com/preseed.cfg vga=788 initrd=/install/initrd.gz quiet -- But when I tried it, even before this menu shows up, I have to select a language (and therefore force me to have a manual intervention by pressing enter ). I found a similar question where it suggests to echo en >syslinux/langlist but I still get the language selection menu (with only one item). How can I avoid this intervention ? | Doing what py4on suggested does simply shorten the list of available languages (to the extend of having one single element: en ), but does not automate the language selection. It was probably working on older versions of Ubuntu but the requirement is for the Ubuntu Server 14.04. On 16.04 the instructions below may change to isolinux and isolinux.cfg instead of syslinux depending how you create the media. In order to avoid this intervention at the language selection menu, the option timeout of syslinux should be set to a strictly positive value. After the specified timeout, the default language will be selected and the default booting entry of syslinux will be selected. The timeout parameter of syslinux represents a time in deci-seconds, and the default value is 0 , corresponding to an "infinite timeout". Therefore, one could set timeout 10 to make syslinux wait 1 second before proceeding with the default value. The best place to put the parameter is in syslinux/syslinux.cfg . For example: echo "timeout 10" >> syslinux/syslinux.cfg In order to have a different language than en , I would suggest to proceed as py4on suggested by leaving only the chosen language in the syslinux/langlist file. For example: echo "fr" > syslinux/langlist References: http://www.syslinux.org/wiki/index.php/SYSLINUX | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40025/"
]
} |
196,877 | I just had a problem with rsyslog's imjournal module, possibly the issue described here , whereby complications with systemd's journal cause the journal to return the same data endlessly in a tight loop. This results in massive message duplication inside rsyslog probably resulting in a denial-of-service when the system resouces get exhausted In any case, it did indeed max the processor out repeating messages that were months old. I did not realize that rsyslog and journald were so tightly coupled by default, so I reconfigured the former to use a more traditional, more efficient live socket for input as per these instructions : $ModLoad imuxsock$OmitLocalLogging off This seems to work in so far as the socket is created and in use. However, I then noticed a strange thing when testing it. > logger "hello world" Results in this in /var/log/syslog , which is mentioned only once in rsyslog.conf : Apr 17 10:35:45 pidora logger: hello worldApr 17 10:35:45 pidora logger: hello world The message is repeated, and it would seem that all the other messages are as well. Some of them are exactly the same, and some of them differ in only one aspect: Apr 17 10:42:26 pidora systemd[1]: Stopping System Time Synchronized.Apr 17 10:42:26 pidora systemd: Stopping System Time Synchronized. The [1] is a pid. I believe what's going on is rsyslog is getting the message once from the application and then again from journald. This is kind of silly. How can I stop it? | I believe what's going on is rsyslog is getting the message once from the application and then again from journald. Yep. The solution is to include this in /etc/systemd/journald.conf : ForwardToSyslog=no Why there wasn't this problem when using imjournal I'm not sure, but there is a hint in man journald.conf : ForwardToSyslog= [...] the journal daemon shall be forwarded to a traditional syslog daemon [...] If forwarding to syslog is enabled but no syslog daemon is running, the respective option has no effect I'm guessing what's actually meant by a "syslog daemon running" is the literal presence of a traditional syslog socket. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25985/"
]
} |
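A sketch of applying that setting non-interactively; the sed pattern assumes the stock commented-out ForwardToSyslog line, and journald must be restarted for the change to take effect:

    sudo sed -i 's/^#\?ForwardToSyslog=.*/ForwardToSyslog=no/' /etc/systemd/journald.conf
    sudo systemctl restart systemd-journald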
196,884 | The Linux kernel swaps out most pages from memory when I run an application that uses most of the 16GB of physical memory. After the application finishes, every action (typing commands, switching workspaces, opening a new web page, etc.) takes very long to complete because the relevant pages first need to be read back in from swap. Is there a way to tell the Linux kernel to copy pages from swap back into physical memory without manually touching (and waiting for) each application? I run lots of applications so the wait is always painful. I often use swapoff -a && swapon -a to make the system responsive again, but this clears the pages from swap, so they need to be written again the next time I run the script. Is there a kernel interface, perhaps using sysfs, to instruct the kernel to read all pages from swap? Edit: I am indeed looking for a way to make all of swap swapcached. (Thanks derobert!) [P.S. serverfault.com/questions/153946/… and serverfault.com/questions/100448/… are related topics but do not address the question of how to get the Linux kernel to copy pages from swap back into memory without clearing swap.] | It might help to up /proc/sys/vm/page-cluster (default: 3). From the kernel documentation ( sysctl/vm.txt ): page-cluster page-cluster controls the number of pages up to which consecutive pages are read in from swap in a single attempt. This is the swap counterpart to page cache readahead. The mentioned consecutivity is not in terms of virtual/physical addresses, but consecutive on swap space - that means they were swapped out together. It is a logarithmic value - setting it to zero means "1 page", setting it to 1 means "2 pages", setting it to 2 means "4 pages", etc. Zero disables swap readahead completely. The default value is three (eight pages at a time). There may be some small benefits in tuning this to a different value if your workload is swap-intensive. Lower values mean lower latencies for initial faults, but at the same time extra faults and I/O delays for following faults if they would have been part of that consecutive pages readahead would have brought in. The documentation doesn't mention a limit, so possibly you could set this absurdly high to make all of swap be read back in really soon. And of course turn it back to a sane value afterwards. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196884",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96817/"
]
} |
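The knob can be inspected and changed at runtime; remember the value is logarithmic, so 10 means 2^10 = 1024 pages read back per fault:

    cat /proc/sys/vm/page-cluster        # default is 3 (8 pages)
    sudo sysctl -w vm.page-cluster=10    # or: echo 10 | sudo tee /proc/sys/vm/page-cluster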
196,907 | I have a service (docker registry) that runs on port 5000 , I have installed nginx to redirect http request from 8080 to 5000 . If I make a curl to localhost:5000 it works, but when I make a curl to localhost:8080 I get a Bad gateway error. nginx config file: upstream docker-registry { server localhost:5000;}server { listen 8080; server_name registry.mydomain.com; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; client_max_body_size 0; chunked_transfer_encoding on; location / { proxy_pass http://docker-registry; } location /_ping { auth_basic off; proxy_pass http://docker-registry; } location /v1/_ping { auth_basic off; proxy_pass http://docker-registry; }} In /var/log/nginx/error.log I have: [crit] 15595#0: *1 connect() to [::1]:5000 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: registry.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://[::1]:5000/", host: "localhost:8080" Any idea? | I assume its a Linux box, so most likely SELinux is preventing the connection as there is no policy allowing the connection. You should be able to just run # setsebool -P httpd_can_network_connect true and then restart nginx. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64116/"
]
} |
196,915 | I have to replace special characters using shell, so i use sed but i have some mistakes that i don't understand. <%_ by [@, ("_" = dash)_%> by ] for the first 2 characters my synthax is : sed -i y/\<%\/\]\/ test.htm it works, but here how can i add the dash character ?The second should be this way sed -i y/\%>\/\]\/ but i have this mistake bash: /]/: is a folder can you help me please | I assume its a Linux box, so most likely SELinux is preventing the connection as there is no policy allowing the connection. You should be able to just run # setsebool -P httpd_can_network_connect true and then restart nginx. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/196915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110890/"
]
} |
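For the question actually asked in this row: sed's y/// only transliterates single characters, so multi-character replacements need s///. A sketch assuming the intent is to turn "<%-" into "[@" and "-%>" into "]":

    sed -i -e 's/<%-/[@/g' -e 's/-%>/]/g' test.htm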
196,926 | I purchased a multidomain certificate, and I was trying to install it on my server. I put the following information inside the virtual host for one of my sites: <VirtualHost *:443>...SSLEngine onSSLCertificateFile /etc/apache2/ssl/16478325.crtSSLCertificateKeyFile /etc/apache2/ssl/private.keySSLCertificateChainFile /etc/apache2/ssl/16478325.ca-bundle... I then tried to restart apache, and got a failure, and I was told to check my apache logs. The error message received was this: Init: Multiple RSA server certificates not allowed I assumed that that meant I had to install an SSL certificate for the entire site, not on one domain. But that also did not work. I got the error: Illegal attempt to re-initialise SSL for server (SSLEngine On should go in the VirtualHost, not in global scope.) Why isn't this working? How do I configure my domains to each have SSL? | (summary of comments) You have a conflicting SSL virtual host from the Ubuntu/Debian default-ssl virtual host. a2dissite default-ssl will fix the immediate problem. The Apache HTTP Server Wiki has a guide to configuring name-based SSL virtual hosts which you should review. The guide shows them all in one file, but you can split the different VirtualHost sections to different files (in sites-available/ )—they're all included in the main config anyway. You can put stuff like the NameVirtualHost line in ports.conf or in conf-available/ . /usr/share/doc/apache2/README.Debian.gz contains some documentation on the Debian/Ubuntu apache config handling. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66627/"
]
} |
196,932 | How can I limit the number of active logins per user? I've seen this on various servers before, and I was wondering how I could set this up myself. Perhaps in those cases, this was accomplished by limiting the number of active SSH logins per user? And I guess that would be the way to go. How would I set this up? | /etc/security/limits.conf , at least on Debian. Path may vary a little by distro. There is an example in the file to limit all members of the student group to 4 logins (commented out): #<domain> <type> <item> <value>@student - maxlogins 4 You could do * instead of a group, but make sure not to hit users you don't want to limit (e.g., a staff member) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83381/"
]
} |
196,952 | I'm looking for a way to get three informations from a remote repository using git ls-remote like command. I would like to use it in a bash script running in a cron . Currently, if I do git ls-remote https://github.com/torvalds/linux.git master I get the last commit hash on the master branch : 54e514b91b95d6441c12a7955addfb9f9d2afc65 refs/heads/master Is there any way to get commit message and commit author ? | While there is not any utilities that come with git that lets you do what you want, it is rather easy to write a python script that parses a git object and then outputs the author and commit message. Here is a sample one that expects a git commit object on stdin and then prints the author followed by the commit message: from parse import parseimport sys, zlibraw_commit = sys.stdin.buffer.read()commit = zlib.decompress(raw_commit).decode('utf-8').split('\x00')[1](headers, body) = commit.split('\n\n')for line in headers.splitlines(): # `{:S}` is a type identifier meaning 'non-whitespace', so that # the fields will be captured successfully. p = parse('author {name} <{email:S}> {time:S} {tz:S}', line) if (p): print("Author: {} <{}>\n".format(p['name'], p['email'])) print(body) break To make a full utility like you want the server needs to support the dumb git transport protocol over HTTP, as you cannot get a single commit using the smart protocol. GitHub doesn’t support the dumb transport protocol anymore however, so I will be using my self-hosted copy of Linus’ tree as an example. If the remote server supports the dump http git transport you could just use curl to get the object and then pipe it to the above python script. Let’s say that we want to see the author and commit message of commit c3fe5872eb , then we’d execute the following shell script: baseurl=http://git.kyriasis.com/kyrias/linux.git/objectscurl "$baseurl"/c3/fe5872eb3f5f9e027d61d8a3f5d092168fdbca | python parse.py Which will print the following output: Author: Sanidhya Kashyap <[email protected]>bfs: correct return valuesIn case of failed memory allocation, the return should be ENOMEM insteadof ENOSPC.... The full commit SHA of commit c3fe5872eb is c3fe5872eb3f5f9e027d61d8a3f5d092168fdbca , and as you can see in the above shell script the SHA is split after the second character with a slash inbetween. This is because git stores objects namespaced under the first two characters of the SHA, presumably due to legacy filesystem having low limits on the number of files that can reside in a single directory. While this answer doesn’t give a full working implementation of a remote git-show command, it gives the basic parts needed to make a simple one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/196952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88202/"
]
} |
197,005 | When using bash's vi mode (set -o vi), is it possible to recover the last argument of the last executed command? This is done in emacs mode with ESC + . , and I would like to do it in vi mode as well. I know that bash provides !$ and $_ , but they are not expanded and I find quite dangerous to use them directly. I've tried (with no success) some solutions I found on Stack Overflow about editing the .inputrc and adding: set editing-mode viset keymap vi-insert"\e.": yank-last-arg"\e_": yank-last-arg I'm switching to vi mode in bash but I'm quite used to ESC + . and it would be nice to be able to use it, or to find a quick & easy replacement. EDIT: This question has been marked as a duplicate of a similar one that asks about how to recover last argument with Alt+S. I was asking specifically about ESC+. (it's the shortcut I'm used to and it is not covered by the other answer). EDIT: To complement @chaos' solution: the following binding makes ESC+. (well, really '.') paste the last argument, but you lose Vi's dot (.) functionality: bind -m vi-command ".":insert-last-argument | For me it works when I add the following to my .inputrc : $if mode=vi"\e.":yank-last-arg$endif Then, when changing it in bash on the fly, the .inputrc must be read again: set -o vibind -f .inputrc Now, I can get the last argument with alt + . . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49392/"
]
} |
197,018 | I want to keep only first five scheduled jobs (as in the lowest 5 job ID numbers) and remove the rest of the scheduled atq jobs. How to can I do this? | For me it works when I add the following to my .inputrc : $if mode=vi"\e.":yank-last-arg$endif Then, when changing it in bash on the fly, the .inputrc must be read again: set -o vibind -f .inputrc Now, I can get the last argument with alt + . . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28793/"
]
} |
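For the question actually asked in this row, a sketch that keeps the five lowest at job IDs and removes the rest (assumes GNU xargs for the -r flag):

    atq | awk '{print $1}' | sort -n | tail -n +6 | xargs -r atrm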
197,028 | Should this question be moved to stackoverflow instead? I often need to read log files generated by java applications using log4j. Usually, a logged message (let's call it a log entry) spans over multiple lines. Example: INFO 10:57:01.123 [Thread-1] [Logger1] This is a multi-linetext, two linesDEBUG 10:57:01.234 [Thread-1] [Logger2] This entry takes 3 linesline 2line 3 Note that each log entry starts at a new line and the very first word from the line is TRACE, DEBUG, INFO or ERROR and at least one space.Here, there are 2 log entry, the first at millisecond 123, the other at millisecond 234. I would like a fast command (using a combination of sed/grep/awk/etc) to filter log entries (grep only filters lines), eg: remove all the log entries containing text 'Logger2'. I considered doing the following transformations: 1) join lines belonging to the same log entries with a special sequence of chars (eg: ##); this way, all the log entries will take exactly one line INFO 10:57:01.123 [Thread-1] [Logger1] This is a multi-line##text, two linesDEBUG 10:57:01.234 [Thread-1] [Logger2] This entry takes 3 lines##line 2##line 3 2) grep 3) split the lines back (ie: replace ## with \n) I had troubles at step 1 - I do not have enough experience with sed. Perhaps the 3 steps above are not required, maybe sed can do all the work. | There is no need to mix many instruments. Task can be done by sed only sed '/^INFO\|^DEBUG\|^TRACE\|^ERROR/{ /Logger2/{ :1 N /\nINFO\|\nDEBUG\|\nTRACE\|\nERROR/!s/\n// $!t1 D } }' log.entry | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13945/"
]
} |
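An awk alternative to the sed script, which some find easier to read: at each entry header, remember whether the entry should be kept, then print lines while the flag is set (assumes the logger name always appears on the entry's first line):

    awk '/^(TRACE|DEBUG|INFO|ERROR) /{keep = !/Logger2/} keep' app.log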
197,063 | I am using libreoffice under CentOS 6. I can convert ODT files to PDF with: libreoffice --headless --convert-to pdf *.odt but the problem is that it only works when no document is open in libreoffice. When I specify --env:UserInstallation=file:///path/to/some/directory as suggested in one of the comments of this question , it doesn't help. What am I doing wrong? It is a nuisance to close all libreoffice instances before running the before command. | That is unlikely going to work, as the suggestion in the comment is both incomplete (you cannot just specify some directory) and incorrect ( --env:... should be -env:.. . Here is what I recommend you do: Stop all instances of libreoffice Start libreoffice from the commandline without specifying --headless : libreoffice -env:UserInstallation=file:///home/username/.config/libreoffice-alt you should replace /home/username with your home directory (and adjust .config if you don't have that on your CentOS, I did this on Ubuntu and Linux Mint). The above will create a new configuration directory for the alternate libreoffice in your .config directory, without which you would get some error about java not being found. Exit that instance of libreoffice That directory /home/username/.config/libreoffice-alt should now have been created for you. Now start another libreoffice from the command-line (doing so allows you to see some useful messages if things go wrong when starting the second instance), without the -env:... , and while that is still running start the conversion using: libreoffice -env:UserInstallation=file:///home/username/.config/libreoffice-alt --headless --convert-to pdf *.odt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110950/"
]
} |
197,121 | Is there a way to create bash variables and assign them values via a loop? Something along the lines of: #!/bin/bashc=0for file in $( ls ); do var"$c"="$file"; let c=$c+1;done EDIT: Thank you to @Costas and @mdpc for pointing out this would be a poor alternative to a list; the question is theoretical only. | Well, you absolutely can using eval as follows: c=0for file in $( ls ); do eval "var$c=$file"; c=$((c+1));done This code will create variables named var0, var1, var2, ... with each one holding the file name. I assume you will have a good reason you want to do that over using an array ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109083/"
]
} |
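For comparison, the array-based alternative the answer alludes to (no eval needed):

    files=( * )                      # or a specific glob such as *.mkv
    echo "count: ${#files[@]}"
    echo "first: ${files[0]}"
    for f in "${files[@]}"; do printf '%s\n' "$f"; done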
197,122 | First of all I'm new to awk so please excuse if it's something simple. I'm trying to generate a file that contains paths. I'm using for this an ls -LT listing as well as an awk script: This is an example of the input file: vagrant@precise64:/vagrant$ cat structure-of-home.cnf/home/:vagrant/home/vagrant:postinstall.sh This would be the expected output: /home/vagrant/home/vagrant/postinstall.sh The awk script should do the following: Check whether the line has a : in it If yes allocate the string (without : ) to a variable ( $path in my case) If the line is empty print nothing If it's not empty and it does not contain a : print the $path and then the current line $0 Here's the script: BEGIN{path=""}{ if ($1 ~ /\:/) { sub(/\:/,"",$1) if (substr($1, length,1) ~ /\//) { path=$1; } else { path=$1"/" } } else if (length($0) == 0) {} else print $path$1} The problem is that when I run the script I get the following mess: vagrant@precise64:/vagrant$ awk -f format_output.awk structure-of-home.cnfvagrantvagrantpostinstall.shpostinstall.sh What am I doing wrong please? | As pointed out by taliezin , your mistake was to use $ to expand path when printing. Unlike bash or make , awk doesn't use the $ to expand variables names to their value, but to refer to the fields of a line (similar to perl ). So just removing this will make your code work: BEGIN{path=""}{ if ($1 ~ /\:/) { sub(/\:/,"",$1) if (substr($1, length,1) ~ /\//) { path=$1; } else { path=$1"/" } } else if (length($0) == 0) {} else print path$1} However, this is not really an awk ish solution:First of all, there is no need to initialize path in a BEGIN rule, non-defined variables default to "" or 0 , depending on context. Also, any awk script consist of patterns and actions , the former stating when , the latter what to do.You have one action that's always executed (empty pattern ), and internally uses (nested) conditionals to decide what to do. My solution would look like this: # BEGIN is actually a pattern making the following rule run only once:# That is, before any input is read.BEGIN{ # Split lines into chunks (fields) separated by ":". # This is done by setting the field separator (FS) variable accordingly:# FS=":" # this would split lines into fields by ":" # Additionally, if a field ends with "/", # we consider this part of the separator. # So fields should be split by a ":" that *might* # be predecessed by a "/". # This can be done using a regular expression (RE) FS: FS="/?:" # "?" means "the previous character may occur 0 or 1 times" # When printing, we want to join the parts of the paths by "/". # That's the sole purpose of the output field separator (OFS) variable: OFS="/"}# First we want to identify records (i.e. in this [default] case: lines),# that contain(ed) a ":".# We can do that without any RE matching, since records are# automatically split into fields separated by ":".# So asking >>Does the current line contain a ":"?<< is now the same# as asking >>Does the current record have more than 1 field?<<.# Luckily (but not surprisingly), the number of fields (NF) variable# keeps track of this:NF>1{ # The follwoing action is run only if are >1 fields. # All we want to do in this case, is store everything up to the first ":", # without the potential final "/". 
# With our FS choice (see above), that's exactly the 1st field: path=$1}# The printing should be done only for non-empty lines not containing ":".# In our case, that translates to a record that has neither 0 nor >1 fields:NF==1{ # The following action is only run if there is exactly 1 field. # In this case, we want to print the path varible (no need for a "$" here) # followed by the current line, separated by a "/". # Since we defined the proper OFS, we can use "," to join output fields: print path,$1 # ($1==$0 since NF==1)} And that's all. Removing all the comments, shortening the variable name and moving the [O]FS definitions to command line arguments, all you have to write is: awk -F'/?:' -vOFS=\/ 'NF>1{p=$1}NF==1{print p,$1}' structure-of-home.cnf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87188/"
]
} |
197,124 | I am running an Ubuntu 12.04 desktop system. So far I have only installed some programs (I have sudo rights). When I check the list of users on the system, I see a long list, like more than 20 users—when were these users created (e.g. daemon, sys, sync, games, pulse, etc.)? How are these related to new programs being installed? If I run a program on my system, it should run with my UID. But on doing a ps , I see many other programs running with different UID (like root, daemon, avahi, syslog, colord etc.) — how were these programs started with different UIDs? | User accounts are used not only for actual, human users, but also to run system services and sometimes as owners of system files. This is done because the separation between human users' resources (processes, files, etc.) and the separation between system services' resources requires the same mechanisms under the hood. The programs that you run normally run with your user ID. It's only system daemons that run under their own account. Either the configuration file that indicates when to run the daemon also indicates what user should run it, or the daemon switches to an unprivileged account after starting. Some daemons require full administrative privileges, so they run under the root account. Many daemons only need access to a specific hardware device or to specific files, so they run under a dedicated user account. This is done for security: that way, even if there's a bug or misconfiguration in one of these services, it can't lead to a full system attack, because the attacker will be limited to what this service can do and won't be able to overwrite files, spy on processes, etc. Under Ubuntu, user IDs in the range 0–99 are created at system installation. 0 is root; many of the ones in the range 1–99 exist only for historical reasons and are only kept for backward compatibility with some local installations that use them (a few extra entries don't hurt). User IDs in the range 100–999 are created and removed dynamically when services that need a dedicated user ID are installed or removed. The range from 1000 onwards is for human users or any other account created by the system administrator. The same goes for groups. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63934/"
]
} |
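If you want to see that split on your own machine, something like this should work, assuming the Ubuntu/Debian UID conventions described above (65534 is the special "nobody" account):

awk -F: '$3 >= 1000 && $3 != 65534 {print $1, $3}' /etc/passwd   # human users
awk -F: '$3 <  1000 {print $1, $3}' /etc/passwd                  # system accounts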
197,127 | Near as I can tell the zip -T option only determines if files can be extracted -- it doesn't really test the archive for internal integrity. For example, I deliberately corrupted the local (not central directory) CRC for a file, and zip didn't care at all, reporting the archive as OK. Is there some other utility to do this? There's a lot of internal redundancy in ZIP files, and it would be nice to have a way of checking it all. Of course, normally the central directory is all you need, but when repairing a corrupted archive often all you have is a fragment, with the central directory clobbered or missing. I'd like to know if archives I create are as recoverable as possible. | unzip -t Test archive files. This option extracts each specified file in memory and compares the CRC (cyclic redundancy check, an enhanced checksum) of the expanded file with the original's stored CRC value. [ source: https://linux.die.net/man/1/unzip ] | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110982/"
]
} |
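A typical invocation might look like this (archive.zip is a placeholder name):

unzip -t archive.zip      # test every member and print a per-file result
unzip -tq archive.zip     # quieter form: mostly just the final verdict
echo $?                   # non-zero if any CRC check failed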
197,134 | I want to take a 78gb folder and store it in a single file (for upload into a cloud service), as if I am compressing it in an archive, but I don't want any compression (I don't have that much CPU time available). Is there anyway that I can accomplish this, perhaps a terminal command I don't know about? | Use tar : tar -cf my_big_folder.tar /my/big/folder Restore the archive with tar -xf my_big_folder.tar -C / -C will change to the root directory to restore your archive since the archive created above contains absolute paths. EDIT : Due to the relatively big size of the archive, it'd be best to send it [directly] to its final location, using SSH or a mount point of the cloud resource/folder. For example, as Cole Johnson suggests : tar -cf /network/mount/point/my_big_folder.tar /my/big/folder or tar -c /my/big/folder | ssh example.com "cat > my_big_folder.tar" EDIT : As Blacklight Shining also suggests , If you want to avoid absolute paths, you can change to the big folder's parent and tar from there: tar -cf /network/mount/point/my_big_folder.tar \ -C /my/big/folder/location the_big_folder or tar -cC /my/big/folder/location the_big_folder | \ssh example.com "cat > my_big_folder.tar" Personal reflexions Whether to include relative or absolute paths is a matter of personal preference. There are cases absolute paths are obvious, e.g. for a restore in a disaster recovery situation. For local projects or collections it's common to archive a directory tree from the desired folder's parent so as to avoid cluttering the current directory, in case the archive is accidentally unpacked in-place. If big_folder lies somewhere deep in a standard *NIX hierarchy , it may make some sense to start archiving the first non-standard folder where big_folder deviates from and its directory tree from there. Finally — going pedantic here — tar archive members are always relative since a) they may be restored in any directory and b) tar removes the leading / when creating an archive. I personally tend to always use -C when unpacking an archive. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110984/"
]
} |
197,139 | I have my /boot partition in a RAID 1 array using mdadm. This array has degraded a few times in the past, and every time I remove the physical drive, add a new one, bring the array being to normal, it uses a new drive letter. Leaving the old one still in the array and failed. I can't seem to remove those all components that no longer exist. [root@xxx ~]# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdg1[10] sde1[8](F) sdb1[7](F) sdd1[6](F) sda1[4] sdc1[5] 358336 blocks super 1.0 [4/3] [UUU_] Here's what I've tried to remove the non-existent drives and partitions. For example, /dev/sdb1 . [root@xxx ~]# mdadm /dev/md0 -r /dev/sdb1mdadm: Cannot find /dev/sdb1: No such file or directory[root@xxx ~]# mdadm /dev/md0 -r faultymdadm: Cannot find 8:49: No such file or directory[root@xxx ~]# mdadm /dev/md0 -r detachedmdadm: Cannot find 8:49: No such file or directory That 8:49 I believe refers to the major and minor number shown in --detail , but I'm not quite sure where to go from here. I'm trying to avoid a reboot or restarting mdadm. [root@xxx ~]# mdadm --detail /dev/md0 /dev/md0: Version : 1.0 Creation Time : Thu Aug 8 18:07:35 2013 Raid Level : raid1 Array Size : 358336 (350.00 MiB 366.94 MB) Used Dev Size : 358336 (350.00 MiB 366.94 MB) Raid Devices : 4 Total Devices : 6 Persistence : Superblock is persistent Update Time : Sat Apr 18 16:44:20 2015 State : clean, degraded Active Devices : 3Working Devices : 3 Failed Devices : 3 Spare Devices : 0 Name : xxx.xxxxx.xxx:0 (local to host xxx.xxxxx.xxx) UUID : 991eecd2:5662b800:34ba96a4:2039d40a Events : 694 Number Major Minor RaidDevice State 4 8 1 0 active sync /dev/sda1 10 8 97 1 active sync /dev/sdg1 5 8 33 2 active sync /dev/sdc1 6 0 0 6 removed 6 8 49 - faulty 7 8 17 - faulty 8 8 65 - faulty Note: The array is legitimately degraded right now and I'm getting a new drive in there as we speak. However, as you can see above, that shouldn't matter. I should still be able to remove /dev/sdb1 from this array. | It's because the device nodes no longer exist on your system (probably udev removed them when the drive died). You should be able to remove them by using the keyword failed or detached instead: mdadm -r /dev/md0 failed # all failed devicesmdadm -r /dev/md0 detached # failed ones that aren't in /dev anymore If your version of mdadm is too old to do that, you might be able to get it to work by mknod 'ing the device to exist again. Or, honestly, just ignore it—it's not really a problem, and should go away the next time you reboot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62894/"
]
} |
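For older mdadm versions without the failed/detached keywords, the mknod workaround mentioned above might look roughly like this (a sketch only; take the major:minor numbers from your own mdadm --detail output, here 8:17 as listed for the vanished sdb1):

mknod /dev/sdb1 b 8 17      # recreate the missing device node
mdadm /dev/md0 -r /dev/sdb1 # now mdadm can address and remove it
rm /dev/sdb1                # clean up the temporary node afterwards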
197,160 | Within a desktop environment we can resize terminals (GNOME Terminal, for example) for our convenience. How can I know the size of the terminal in terms of pixels or number of columns and rows? | If you issue the command stty size it returns the size of the current terminal in rows and columns. Example: $ stty size prints something like: 24 80 You can read the rows and columns into variables like this (thanks to Janis' comment): $ read myrows mycols < <(stty size) Obtaining the size in pixels requires knowledge of your screen's resolution, and I don't think stty has direct access to such information. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
197,199 | Often times I find myself in need to have the output in a buffer with all the features (scrolling, searching, shortcuts, ...) and I have grown accustomed to less . However, most of the commands I use generate output continuously. Using less with continuous output doesn't really work the way I expected. For instance: while sleep 0.5do echo "$(cat /dev/urandom | tr -cd 'a-zA-Z0-9' | head -c 100)"done | less -R This causes less to capture the output until it reaches maximum terminal height and at this point everything stops (hopefully still accepting data), allowing me to use movement keys to scroll up and down. This is the desired effect. Strangely, when I catch-up with the generated content (usually with PgDn ) it causes less to lock and follow new data, not allowing me to use movement keys until I terminate with ^C and stop the original command. This is not the desired effect. Am I using less incorrectly? Is there any other program that does what I wish? Is it possible to "unlock" from this mode? Thank you! | Works OK for me when looking at a file that's being appended to but not when input comes from a pipe (using the F command - control-C works fine then). See discussion at Follow a pipe using less? - this is a known bug/shortcoming in less . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111027/"
]
} |
197,203 | In Vim, if you type :help into the status bar, you get a window with this (contents truncated to the relevant bits): *help.txt* For Vim version 7.4. Last change: 2012 Dec 06 VIM - main help file...USER MANUAL: These files explain how to accomplish an editing task.Jump to a subject: Position the cursor on a tag (e.g. |bars|) and hit CTRL-].|usr_toc.txt| Table Of ContentsGetting Started ~|usr_01.txt| About the manuals|usr_02.txt| The first steps in Vim|usr_03.txt| Moving around|usr_04.txt| Making small changes|usr_05.txt| Set your settings|usr_06.txt| Using syntax highlighting|usr_07.txt| Editing more than one file|usr_08.txt| Splitting windows|usr_09.txt| Using the GUI Say I want to see more information about this item |usr_07.txt| , what command do I type? I tried this: Jump to a subject: Position the cursor on a tag (e.g. |bars|) and hit CTRL-]. But my terminal window has mapped ctrl - to "decrease text". | You can use :help usr_01.txt to access a specific file. Usually more usefully you can jump to a particular topic: :help syntax:help wq:help CTRL-] This last notes that you can also use Ctrl-Click with the mouse, and double-click works too. You can also use g] to access tagselect , which offers a list that you can select from with just numbers and Enter . In many cases that will be a list of one item, but it still avoids using Ctrl-] at any point. Some other commands you could use to follow these links are also listed in :help tagsrch.txt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
197,260 | echo hai && echo bye prints hai bye while echo hai && echo $? prints hai0 When the first echo command's return value is 0 , how does the echo statement after AND operator gets executed? Doesn't quick AND come out after seeing the return value 0 ? | Your confusion stems from the fact that many popular languages (especially C-based ones) stop evaluating && sequences when 0 is encountered, because 0 is considered false and everything else is true . In Bash, however, that's not the case. By convention, in POSIX systems (and all other Unix-like systems), return code 0 is considered SUCCESS (there was no error, so nothing is returned) and a non-zero return code is considered FAILURE . Every command in Bash, be it an external program such as a C program or a shell builtin, must return a value: A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator . The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command. The return value of a simple command is its exit status, or 128+ n if the command is terminated by signal n . (...) Shell builtin commands return a status of 0 ( true ) if successful, and non-zero ( false ) if an error occurs while they execute. All builtins return an exit status of 2 to indicate incorrect usage. A return value is not a Boolean, though. It's a number between 0 and 255: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes. For the shell's purposes, a command which exits with a zero exit status has succeeded. An exit status of zero indicates success. A non-zero exit status indicates failure. When a command terminates on a fatal signal N , bash uses the value of 128+ N as the exit status. (Bolding mine.) When a command reports its return code back to the shell, it's generally enough to check whether the exit code is 0 or not. Now, the next command in a list glued together with && will be executed only if the previous command returned 0—i.e. SUCCESS : AND and OR lists are sequences of one or more pipelines separated by the && and || control operators, respectively. AND and OR lists are executed with left associativity. An AND list has the form command1 && command2 command2 is executed if, and only if, command1 returns an exit status of zero. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
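A quick way to see the convention in action at a prompt:

true;  echo $?                 # 0 -> success, so a following && command runs
false; echo $?                 # 1 -> failure, so a following && command is skipped
false && echo "not printed"
true  && echo "printed"        # prints "printed"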
197,265 | I want to split 'hello' into h e l l o in an array using only bash, I could do it in sed with sed 's/./& /g' but I want to know how to split a string into an array in Bash when I do not know what the delimiter would be, or the delimiter is any single character. I don't think I can use ${i// /} without some creativity because the delimiter is an unknown, and I don't think that expression accepts regex. I tried using BASH_REMATCH with [[ string =~ ([a-z].).* ]] but it doesn't work as I expected. What is the proper way to use only bash to accomplish a string.split() type of behavior? The reason is that I am trying to write the rev utility in all bash: while read data; do word=($(echo $data|tr ' ' '_'|sed 's/./& /g')) new=() i=$((${#word[@]} - 1)) while [[ $i -ge 0 ]]; do new+=(${word[$i]}) (( i-- )) done echo ${new[@]}|tr -d ' '|tr '_' ' ' done But I used tr and sed, I want to know how to do the split properly and then I will fix it to be all bash. Just for fun. | s="hello"declare -a a # define array afor ((i=0; i<${#s}; i++)); do a[$i]="${s:$i:1}"; donedeclare -p a # print array a in a reusable form Output: declare -a a='([0]="h" [1]="e" [2]="l" [3]="l" [4]="o")' or (please note the comments) s="hello"while read -n 1 c; do a+=($c); done <<< "$s"declare -p a Output: declare -a a='([0]="h" [1]="e" [2]="l" [3]="l" [4]="o")' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197265",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
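Building on that, a sketch of the asker's rev-like goal without tr or sed might look like this:

# Reverse a string using only bash, reusing the substring-split idea from above
s="hello"
out=""
for ((i=${#s}-1; i>=0; i--)); do
    out+="${s:$i:1}"
done
echo "$out"    # olleh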
197,324 | Is there a way to schedule a cron job to run every fortnight? (One way I can think of, within crontab , would be to add two entries for "date-of-month"...) | No, cron only knows about the day of the week, the day of the month and the month. Running a command twice a month on fixed days (e.g. the 1st and the 16th) is easy: 42 4 1,16 * * do_stuff Running a command every other week is another matter. The best you can do is to run a command every week, and make it do nothing every other week. On Linux, you can divide the number of seconds since the epoch ( date +%s ) by the number of seconds in a week to get a number that flips parity every week. Note that in a crontab, % needs to be escaped (cron turns % into newlines before executing the command). 42 4 * * 1 case $(($(date +\%s) / (60*60*24*7))) in *[02468]) do_stuff;; esac | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27297/"
]
} |
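If the weeks-since-epoch bookkeeping feels opaque, a roughly equivalent crontab line keyed to ISO week-number parity is sketched below; note this flips on calendar week numbers, which is not exactly the same as counting fortnights from a fixed date, and the 10# prefix stops weeks 08 and 09 from being read as octal:

42 4 * * 1 [ $(( 10#$(date +\%V) \% 2 )) -eq 0 ] && do_stuff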
197,347 | I was wondering where the files are located for Kde's panel launchers? I can't find anything online. I know where the regular desktop files are, ( ~/.local/share/applications/ ), but I can't find the launcher files. I currently have a launcher that is not working correctly. When I create the new icon in Kmenuedit, it works fine from within Kde's menu/kickoff. But when i pin that icon to the panel, it doesn't. It is a launcher for IntelliJ IDEA. When I hover over the icon on the panel, all it says is java . I checked the .desktop file in ~/.local/share/applications/ , and that one is fine. Any advice would be appreciated. | It is in the text file ~/.kde4/share/config/plasma-desktop-appletsrc . You want to find this section: [Containments][n][Applets][m][Configuration][Launchers] , where n and m are some numbers you need to find out on your system. There should only be one Launchers section for each task manager you have. If it isn't there, you should look through [Containments][n][Applets][m] for one containing the application you are looking for. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111115/"
]
} |
197,352 | I have a folder which has 250+ files of 2 GB each. I need to search for a string/pattern in those files and output the result in an output file. I know I can run the following command, but it is too slow!! grep mypattern * > output I want to speed it up. Being a programmer in Java, I know multi-threading can be used for speeding up the process. I'm stuck on how to start grep in "multi-threaded mode" and write the output into a single output file. | There are two easy solutions for this. Basically, using xargs or parallel . xargs Approach: You can use xargs with find as follows: find . -type f -print0 | xargs -0 -P number_of_processes grep mypattern > output Where you will replace number_of_processes by the maximum number of processes you want to be launched.However, this is not guaranteed to give you a significant performance in case your performance is I/O limited. In which case you might try to start more processes to compensate for the time lost waiting for I/Os. Also, with the inclusion of find, you can specify more advanced options instead of just file patterns, like modification time, etc ... One possible issue with this approach as explained by Stéphane's comments, if there are few files, xargs may not start sufficiently many processes for them. One solution will be to use the -n option for xargs to specify how many arguments should it take from the pipe at a time. Setting -n1 will force xargs to start a new process for each single file. This might be a desired behavior if the files are very large (like in the case of this question) and there is a relatively small number of files. However, if the files themselves are small, the overhead of starting a new process may undermine the advantage of parallelism, in which case a greater -n value will be better. Thus, the -n option might be fine tuned according to the file sizes and number. Parallel Approach: Another way to do it is to use Ole Tange GNU Parallel tool parallel , (available here ). This offers greater fine grain control over parallelism and can even be distributed over multiple hosts (would be beneficial if your directory is shared for example). Simplest syntax using parallel will be: find . -type f | parallel -j+1 grep mypattern where the option -j+1 instructs parallel to start one process in excess of the number of cores on your machine (This can be helpful for I/O limited tasks, you may even try to go higher in number). Parallel also has the advantage over xargs of actually retaining the order of the output from each process and generating a contiguous output. For example, with xargs , if process 1 generates a line say p1L1 , process 2 generates a line p2L1 , process 1 generates another line p1L2 , the output will be: p1L1p2L1p1L2 whereas with parallel the output should be: p1L1p1L2p2L1 This is usually more useful than xargs output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111012/"
]
} |
197,371 | I came across a lot of answers, including theses: sed on OSX insert at a certain line https://stackoverflow.com/questions/14846304/sed-command-error-on-macos-x https://stackoverflow.com/questions/4247068/sed-command-failing-on-mac-but-works-on-linux https://stackoverflow.com/questions/16266281/how-to-add-text-to-the-beginning-of-all-files-in-a-folder And I can't find a way to do what I want to do. I need to insert #encoding:utf-8 at the beginning of every .html.erb file of my directory (recursively). I tried using this command find . -iname "*.erb" -type f -exec sed -ie "1i \#encoding:utf-8" {} \; But it throws this error: sed: 1: "1i #encoding:utf-8": extra characters after \ at the end of i command | To edit file in-place with OSX sed , you need to set empty extension: $ sed -i '' '1i\#encoding:utf-8' filename And you need a literal newline after i\ . This is specified by POSIX sed . Only GNU sed allows text to be inserted on the same line with command. sed can also works with multiple files at once, so you can use -exec command {} + form: $ find . -iname "*.erb" -type f -exec sed -i '' '1i\#encoding:utf-8' {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104998/"
]
} |
197,391 | I'm having trouble getting the full range of colors to work in vim when I'm running through tmux. I belive that it's some sort of trouble with TERM variables or 256colors, but I've done everything I have been able to find online to get 256 colors working in vim, tmux, and iTerm, and nothing has fixed it. It's a small problem, but it seriously bugs me. Here's an example code file in vim just through iTerm: and here's the same file in vim through tmux and iTerm: Notice how the background colors seem slightly mismatched, only when code is written there. Why could this be? I have set t_Co=256 in my vimrc , my iTerm terminal is set to xterm-256color , I have set -g default-terminal xterm-256color in my tmux.conf , and I have: if [ -e /usr/share/terminfo/x/xterm-256color ]; then export TERM='xterm-256color'else export TERM='xterm-color'fi in my .profile . This exact issue is replicated on my Ubuntu based machine at work, and I use all of the same configuration files there. This at least isolates the issue as not being OS/iTerm related. | An old question but it ranked high on my Google search without helping me. This is what finally solved this for me In .tmux.conf : set -g default-terminal "screen-256color"set -ga terminal-overrides ",*256col*:Tc" In .vimrc : if exists('+termguicolors') let &t_8f = "\<Esc>[38;2;%lu;%lu;%lum" let &t_8b = "\<Esc>[48;2;%lu;%lu;%lum" set termguicolorsendif | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93518/"
]
} |
197,437 | I am creating a linux distro and now I need an init program. I can code in c really well and I know quite a bit about linux (not much but I've been using arch linux for development for 4 years), so I thought I should try writing my own basic init script in C. I was just wondering, what tasks does init do to set the system up for a simple shell? (When I ask "what does init do?", I do know what init is and what it's for. I just don't know what tasks it does.) I don't need code and I possibly don't even need basic commands but I do need the order that they are run in. | System 5 init will tell you only a small part of the story. There's a sort of myopia that affects the Linux world. People think that they use a thing called "System 5 init ", and that is both what is traditional and the best place to start. Neither is in fact the case. Tradition isn't in fact what such people say it to be, for starters. System 5 init and System 5 rc date to AT&T UNIX System 5, which was almost as far after the first UNIX as we are now (say) after the first version of Linux-Mandrake. 1st Edition UNIX only had init . It did not have rc . The 1st Edition assembly language init ( whose code has been restored and made available by Warren Toomey et al. ) directly spawned and respawned 12 getty processes, mounted 3 hardwired filesystems from a built-in table, and directly ran a program from the home directory of a user named mel . The getty table was also directly in the program image. It was another decade after UNIX System 5 that the so-called "traditional" Linux init system came along. In 1992, Miquel van Smoorenburg (re-)wrote a Linux init + rc , and their associated tools, which people now refer to as "System 5 init ", even though it isn't actually the software from UNIX System 5 (and isn't just init ). System 5 init / rc isn't the best place to start, and even if one adds on knowledge of systemd that doesn't cover half of what there is to know. There's been a lot of work in the area of init system design (for Linux and the BSDs) that has happened in the past two decades alone. All sorts of engineering decisions have been discussed, made, designed, implemented, and practised. The commercial Unices did a lot, too. Existing systems to study and and learn from Here is an incomplete list of some of the major init systems other than those two, and one or two of their (several) salient points: Joachim Nilsson's finit went the route of using a more human-readable configuration file. Felix von Leitner's minit went for a filesystem-is-the-database configuration system, small memory footprints, and start/stop dependencies amongst things that init starts. Gerrit Pape's runit went for what I have previously described as the just spawn four shell scripts approach. InitNG aimed to have dependencies, named targets, multiple configuration files, and a more flexible configuration syntax with a whole load more settings for child processes. upstart went for a complete redesign, modelling the system not as services and interdependencies at all, but as events and jobs triggered by them. The design of nosh includes pushing all of the service management out (including even the getty spawning and zombie reaping) into a separate service manager, and just handling operating-system-specific "API" devices/symlinks/directories and system events. sinit is a very simple init. It executes /bin/rc.init whose job it is to start programs, mount filesystem, etc. For this you can use something like minirc . 
Moreover, about 10 years ago, there was discussion amongst daemontools users and others of using svscan as process #1, which led to projects like Paul Jarc's svscan as process 1 study , Gerrit Pape's ideas , and Laurent Bercot's svscan as process 1 . Which brings us to what process #1 programs do. What process #1 programs do Notions of what process #1 is "supposed" to do are by their natures subjective. A meaningful objective design criterion is what process #1 at minimum must do. The kernel imposes several requirements on it. And there are always some operating-system-specific things of various kinds that it has to do. When it comes to what process #1 has traditionally done, then we are not at that minimum and never really have been. There are several things that various operating system kernels and other programs demand of process #1 that one simply cannot escape. People will tell you that fork() ing things and acting as the parent of orphaned processes is the prime function of process #1. Ironically, this is untrue. Dealing with orphaned processes is (with recent Linux kernels, as explained at https://unix.stackexchange.com/a/177361/5132 ) a part the system that one can largely factor out of process #1 into other processes, such as a dedicated service manager . All of these are service managers, that run outwith process #1: the IBM AIX srcmstr program, the System Resource Controller Gerrit Pape's runsvdir from runit Daniel J. Bernstein's svscan from daemontools, Adam Sampson's svscan from freedt , Bruce Guenter's svscan from daemontools-encore, and Laurent Bercot's s6-svscan from s6 Wayne Marshall's perpd from perp the Service Management Facility in Solaris 10 the service-manager from nosh Similarly, as explained at https://superuser.com/a/888936/38062 , the whole /dev/initctl idea doesn't need to be anywhere near process #1. Ironically, it is the highly centralized systemd that demonstrates that it can be moved out of process #1. Conversely, the mandatory things for init , that people usually forget in their off-the-top-of-the-head designs, are things such as handling SIGINT , SIGPWR , SIGWINCH , and so forth sent from the kernel and enacting the various system state change requests sent from programs that "know" that certain signals to process #1 mean certain things. (For example: As explained at https://unix.stackexchange.com/a/196471/5132 , BSD toolsets "know" that SIGUSR1 has a specific meaning.) There are also once-off initialization and finalization tasks that one cannot escape, or will suffer greatly from not doing, such as mounting "API" filesystems or flushing the filesystem cache. The basics of dealing with "API" filesystems are little different to the operation of init rom 1st Edition UNIX: One has a list of information hardwired into the program, and one simply mount() s all of the entries in the list. You'll find this mechanism in systems as diverse as BSD (sic!) init , through the nosh system-manager , to systemd. "set the system up for a simple shell" As you have observed, init=/bin/sh doesn't get "API" fileystems mounted, crashes in an ungainly fashion with no cache flush when one types exit ( https://unix.stackexchange.com/a/195978/5132 ), and in general leaves it to the (super)user to manually do the actions that make the system minimally usable. 
To see what one actually has no choice but to do in process #1 programs, and thus set you on a good course for your stated design goal, your best option is to look at the overlaps in the operation of Gerrit Pape's runit, Felix von Leitner's minit, and the system-manager program from the nosh package. The former two show two attempts to be minimalist, yet still handle the stuff that it is impossible to avoid. The latter is useful, I suggest, for its extensive manual entry for the system-manager program, which details exactly what "API" filesystems are mounted, what initialization tasks are run, and what signals are handled; in a system that by design has the system manager just spawn three other things (the service manager, an accompanying logger, and the program to run the state changes) and only do the unavoidable in process #1. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/197437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111203/"
]
} |
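To make the "unavoidable once-off initialization" concrete, here is a hedged sketch of the kind of /bin/rc.init script a sinit-style design hands off to. The paths, the hostname, and the final exec are assumptions for illustration; PID 1 itself still has to reap orphans and handle the signals discussed above.

#!/bin/sh
# Hypothetical /bin/rc.init for a sinit-style process #1 -- a sketch, not a prescription.
mount -t proc     proc /proc      # the "API" filesystems userspace expects
mount -t sysfs    sys  /sys
mount -t devtmpfs dev  /dev
mount -o remount,rw /             # root is usually mounted read-only at first
hostname mybox                    # placeholder hostname
exec /bin/sh                      # hand over to a shell, getty, or service manager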
197,439 | I am reading this Unix tutorial and came across this quote... We should note here that a directory is merely a special type of file. ...but no explanation or details are provided. How is a directory really just a file? | Many entities in *nix style (and other) operating systems are considered files, or have a defining file-like aspect, even though they are not necessarily a sequence of bytes stored in a filesystem. Exactly how directories are implemented depends on the kind of filesystem, but generally what they contain, considered as a list, is a sequence of stored bytes, so in that sense they are not that special. One way of defining what a "file" is in a *nix context is that it is something which has a file descriptor associated with it. As per the wikipedia article, a file descriptor is an abstract indicator used to access a file or other input/output resource , such as a pipe or network connection... In other words, they refer to various kinds of resources from/to which a sequence of bytes may be read/written, although the source/destination of that sequence is unspecified. Put another way, the "where" of the resource could be anything. What defines it is that it is a conduit of information. This is part of why it is sometimes said that in unix "everything is a file". You should not take that completely literally, but it is worth serious consideration. In the case of a directory, this information pertains to what is in the directory, and on a lower, implementation level, how to find it within the filesystem. Directories are sort of special in this sense because in native C code they are not ostensibly associated with a file descriptor; the POSIX API uses a special type of stream handle, DIR* . However, this type does in fact have an underlying descriptor which can be retrieved . Descriptors are managed by the kernel and accessing them always involves system calls, hence, another aspect of what a descriptor is is that it is a conduit controlled by the OS kernel. They have unique (per process) numbers starting with 0, which is usually the descriptor for the standard input stream. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197439",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58080/"
]
} |
197,448 | In the current version of Raspian, I know it is possible to change the password of the current logged in user from the command line like so: sudo passwd which will then prompt the user to enter a new password twice. This will produce output like so: Changing password for pi.(current) UNIX password:Enter new UNIX password:Retype new UNIX password:passwd: password updated successfully I was wondering if there is a possible way to change a password programmatically, like from a shell script. I'm trying to make a configuration script to deploy on my Raspberry Pis and I don't want to manually have to type in new passwords for them. | You're looking for the chpasswd command. You'd do something like this: echo 'pi:newpassword' | chpasswd # change user pi password to newpassword Note that it needs to be run as root, at least with the default PAM configuration. But presumably run as root isn't a problem for a system deployment script. Also, you can do multiple users at once by feeding it multiple lines of input. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146317/"
]
} |
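The multiple-users case mentioned at the end can be done with a here-document; the user names and passwords below are placeholders:

chpasswd <<'EOF'
pi:newpassword
deploy:s3cretvalue
EOF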
197,464 | I have a large folder with 30M small files. I hope to backup the folder into 30 archives, each tar.gz file will have 1M files. The reason to split into multi archives is that to untar one single large archive will take month.. pipe tar to split also won't work because when untar the file, I have to cat all archives together. Also, I hope not to mv each file to a new dir, because even ls is very painful for this huge folder. | I wrote this bash script to do it.It basically forms an array containing the names of the files to go into each tar, then starts tar in parallel on all of them .It might not be the most efficient way, but it will get the job done as you want.I can expect it to consume large amounts of memory though. You will need to adjust the options in the start of the script.You might also want to change the tar options cvjf in the last line (like removing the verbose output v for performance or changing compression j to z , etc ...). Script #!/bin/bash# User configuratoin#===================files=(*.log) # Set the file pattern to be used, e.g. (*.txt) or (*)num_files_per_tar=5 # Number of files per tarnum_procs=4 # Number of tar processes to starttar_file_dir='/tmp' # Tar files dirtar_file_name_prefix='tar' # prefix for tar file namestar_file_name="$tar_file_dir/$tar_file_name_prefix"# Main algorithm#===============num_tars=$((${#files[@]}/num_files_per_tar)) # the number of tar files to createtar_files=() # will hold the names of files for each tartar_start=0 # gets update where each tar starts# Loop over the files adding their names to be taredfor i in `seq 0 $((num_tars-1))`do tar_files[$i]="$tar_file_name$i.tar.bz2 ${files[@]:tar_start:num_files_per_tar}" tar_start=$((tar_start+num_files_per_tar))done# Start tar in parallel for each of the strings we just constructedprintf '%s\n' "${tar_files[@]}" | xargs -n$((num_files_per_tar+1)) -P$num_procs tar cjvf Explanation First, all the file names that match the selected pattern are stored in the array files . Next, the for loop slices this array and forms strings from the slices. The number of the slices is equal to the number of the desired tarballs. The resulting strings are stored in the array tar_files . The for loop also adds the name of the resulting tarball to the beginning of each string. The elements of tar_files take the following form (assuming 5 files/tarball): tar_files[0]="tar0.tar.bz2 file1 file2 file3 file4 file5"tar_files[1]="tar1.tar.bz2 file6 file7 file8 file9 file10"... The last line of the script, xargs is used to start multiple tar processes (up to the maximum specified number) where each one will process one element of tar_files array in parallel. Test List of files: $lsa c e g i k m n p r tb d f h j l o q s Generated Tarballs: $ls /tmp/tar* tar0.tar.bz2 tar1.tar.bz2 tar2.tar.bz2 tar3.tar.bz2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86455/"
]
} |
197,469 | I've installed a web app called scrumblr which uses redis as a database. What I am attempting to do is delete all keys that have been inactive for 30 days, or have not been accessed in 30 days. I ran redis-cli KEYS* Which returns all of the keys, though it does not show a timestamp. Is there a script or a command I can run each day at a specific time, which would seek out all the inactive keys and delete them? | You may make take the advantage of OBJECT IDLETIME command, which returns the number of seconds since the object stored at the specified key is idle (not requested by read or write operations). Example code as follows: #!/bin/shredis-cli -p 6379 keys "*" | while read LINE ;doval=`redis-cli -p 6379 object idletime $LINE`;if [ $val -gt $((30 * 24 * 60 * 60)) ];then echo "$LINE"; # del=`redis-cli -p 6379 del $LINE`; # be careful with del # echo $del;fidone; In your situation, you can replace redis-cli -p 6379 with: redis-cli -h redis_host -p redis_port -a redis_password | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110303/"
]
} |
197,497 | How can I check whether a string is empty in tcsh? Before you freak out, no I'm not writing shell scripts with tcsh. I am asking because I want to use this in my .tcshrc file. Specifically, I want to do the equivalent of this bash code in tcsh: if [[ -z $myVar ]]; then echo "the string is blank"fi | if ("$myVar" == "") then echo "the string is blank"endif Note that in csh, it is an error to attempt to access an undefined variable. (From a Bourne shell perspective, it's as if set -u was always in effect.) To test whether a variable is defined, use $?myVar : if (! $?myVar) then echo "myVar is undefined"else if ("$myVar" == "") then echo "myVar is empty" else echo "myVar is non-empty" endifendif Note the use of a nested if . You can't use else if here, because that would cause the "$myVar" == "" condition to be parsed even when the first condition is true. If you want to treat the empty and the undefined case in the same way, first set the variable: if (! $?myVar) then set myVar=""endifif ("$myVar" == "") then echo "myVar is empty or was undefined"else echo "myVar is non-empty"endif | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83381/"
]
} |
197,504 | I ssh to a server, and want to add some daily jobs (specifially to renew Kerberos tickets even when I log out and I still want my programs in screen or tmux continue to run) to cron. So I run crontab -e , and add the following, 00 00 * * * kinit -R00 12 * * * kinit -R When I save it, I am asked by the editor: File Name to Write: /tmp/crontab.HMpG7V Isn't it that the files in /tmp can be deleted by the OS? Especially after I log out of the server? Where shall I store my crontab file? Can I save the crontab file under $HOME or some better space? | crontab -e opens a file in /tmp instead of the actual crontab so that it can check your new crontab for errors and prevent you from overwriting your actual crontab with those errors. If there are no errors, then your actual crontab will be updated. If crontab -e just wrote straight to your actual crontab, then you would risk all of your cronjobs failing to run due to a syntax error in your new crontab. sudoedit , visudo , vipw , etc. operate on the same principle. Don't worry, your actual crontab lives in a non-volatile location on disk. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
197,550 | To search a file in a directory, I found two commands as below: ls -ltr initialfilename* find ./ -name initialfilename* Sometime, first command gives me the search result but sometime I used to execute second command.What is the difference between these two set of Linux command? Please specify your answer around the major difference only. | ls -ltr file* : This command just list the contents of the current directory in the long listing format ( -l ), sorted by modification time ( -t ) in reverse order ( -r ) of all files and directories beginning with file* . find ./ -name file* : That command searches trough the whole directory structure under the current working directory and all its subdirectories for files and directories beginning with file* in their names. The output format is very simple; only the file/dir paths are printed line by line. Major difference (conclusion): ls only applies to the current working directory, while find applies to all files and subdirectories starting from the current working directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54198/"
]
} |
197,567 | What is the simplest and most versatile way to send files over the network to other computers? By that I mean computers that other people are using at the moment. I don't think SSH works if the computer has an active session open. So far I am using netcat , which works alright. But are there any other simple ways to do this? One problem I have with netcat , is that the receiver needs to know the file ending and has to come up with a name for the stream. | You're complicating your life needlessly. Use scp . To transfer a file myfile from your local directory to directory /foo/bar on machine otherhost as user user , here's the syntax: scp myfile user@otherhost:/foo/bar . EDIT: It is worth noting that transfer via scp/SSH is encrypted while transfer via netcat or HTTP isn't. So if you are transferring sensitive files, always use the former. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111041/"
]
} |
197,568 | I am in the process of moving some Linux servers onto a virtualized environment with their filesystems mounted from LVM volumes, which are in turn hosted on a remote NAS via iSCSI. I am able to start them up and they run perfectly with no issues. However, the NAS server is Windows-based and, when Microsoft issues patches, it automatically applies them and reboots. When it reboots, all of the virtual servers' filesystems detect errors and go into read-only mode. I have attempted to remount them as read/write, but the kernel has the filesystem flagged as write-protected, so this fails. The only way I've been able to find to recover is to shut the virt down, fsck its LVM volume, and restart it. The virts mount these LVM volumes with an fstab entry of the form: /dev/xvda2 / ext3 noatime,nodiratime,errors=remount-ro 0 1 or /dev/xvda2 / ext4 errors=remount-ro 0 1 The virtual host OS also has an LVM/iSCSI mount from the NAS server (in the same volume group, even) which continues working in read/write mode despite these interruptions. Its fstab entry is: /dev/mapper/nas6-dom0 /mnt/nas6 ext4 _netdev 0 0 This leads me to suspect that removing errors=remount-ro from the guests' fstab entries would provide fault-tolerance, but I'm a bit uneasy about doing that - if an actual error develops in the filesystem, I would expect that allowing continued writes to the fs could make things much worse in short order. What is the best practice for resolving this such that the virtual guests will be able to continue running after the NAS reboots itself? | You're complicating your life needlessly. Use scp . To transfer a file myfile from your local directory to directory /foo/bar on machine otherhost as user user , here's the syntax: scp myfile user@otherhost:/foo/bar . EDIT: It is worth noting that transfer via scp/SSH is encrypted while transfer via netcat or HTTP isn't. So if you are transferring sensitive files, always use the former. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20546/"
]
} |
197,588 | I was doing this tutorial, but when it comes to the part where I should run these commands: local-server# ssh -NTCf -w 0:0 87.117.217.27 local-server# ssh -NTCf -w 1:1 87.117.217.44 It says: channel 0: open failed: administratively prohibited: open failed How can I fix that? | After discussing this in a chat and debugging the issue, it turned out that the required directive PermitTunnel yes was not in place and active. After adding the directive to /etc/ssh/sshd_config and reloading sshd with service sshd reload this was resolved. We added -v to the ssh command to get some debugging information and from that we found: debug1: forking to background root@ubuntu:~# debug1: Entering interactive session. debug1: Remote: Server has rejected tunnel device forwarding channel 0: open failed: administratively prohibited: open failed debug1: channel 0: free: tun, nchannels 1 The server actively rejected the tunnel request, which pointed us to the right directive. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110720/"
]
} |
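For reference, the fix described above boils down to something like this on the server (a sketch; check first whether the directive already exists, perhaps commented out, and adjust to your distribution's service tooling):

grep -n PermitTunnel /etc/ssh/sshd_config
echo 'PermitTunnel yes' >> /etc/ssh/sshd_config   # or edit the file by hand
service sshd reload                               # or: systemctl reload sshd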
197,591 | I've been able to remove all special characters from a fixed length file which appear in the first column but as a result all subsequent columns have shifted to the left by the amount of characters deleted.It is a space separated file. Line 1 in input file is corrupt. Line 2 is what it should look like. The string 000022000362700 starts at position 49 on both lines. The problem I'm having is that after removing the 3 special chars the field moves to position 46. GAVISCON LIQUID PEPPERMINT �OT 000022000362700 159588000007979400 50001584182 0006S020000GAVISCON LIQUID PEPPERMINT OT 000022000362700 159588000007979400 50001584182 0006S020000 The command I'm using is as follows : cat file.txt | grep '[^ - ~]' | sed's/[^ - ~]//g' This produces following output: GAVISCON LIQUID PEPPERMINT OT 000022000362700 159588000007979400 50001584182 0006S020000 By removing the special characters every field to the right of the changed field has moved to the left changing the field start positions. I've been searching for a while now and cannot find any solution for this issue. How should I proceed? | After discussing this in a chat and debugged the issue, it turned out that the required directive PermitTunnel yes was not in place and active. After adding the directive to /etc/ssh/sshd_config and reloading sshd by service sshd reload this was resolved. We added -v to the ssh command to get some debugging information and from that we found: debug1: forking to backgroundroot@ubuntu:~# debug1: Entering interactive session.debug1: Remote: Server has rejected tunnel device forwardingchannel 0: open failed: administratively prohibited: open faileddebug1: channel 0: free: tun, nchannels 1 The server actively rejected the tunnel request which pointed us to the right directive. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111292/"
]
} |
197,600 | When looking at the limits of a running process, I see Max pending signals 15725 What is this? How can I determine a sensible value for a busy service? Generally, I can't seem to find a page that explains what each limit is. Some are pretty self-explanatory (max open files), some less so (max msgqueue size). | According to the manual page of sigpending: sigpending() returns the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). So this refers to the signals (sigterm, sigkill, sigstop, ...) that are waiting until the process comes out of the D (uninterruptible sleep) state. Usually a process is in that state when it is waiting for I/O. That sleep can't be interrupted. Even sigkill (kill -9) can't interrupt it, and the kernel waits until the process wakes up (the signal stays pending for delivery until then). For the other unclear values, I would take a look in the manual page of limits.conf. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1533/"
]
} |
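For the practical side of the question, the limit can be inspected and adjusted like this (the user name and value below are placeholders):

ulimit -i              # show the current max-pending-signals soft limit
ulimit -i 32768        # raise it for this shell (bounded by the hard limit)
# Persistent per-user setting via /etc/security/limits.conf, e.g.:
#   someuser  soft  sigpending  32768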
197,601 | How to add vertical space after each command in bash? Looking for a wee bit of vertical space, not a full line. 1/4 or 1/3 of the line height should do it. [edit] The space to add is only after the command+output bundle. The lines between command and associated output still use default spacing. Example: do ls , the output is shown using regular line spacing; only after the output do we get the increased spacing to clearly separate command+output pair from the next command+output. | This is more a "terminal application" feature/configuration option than a bash option. Bash is not aware of fonts or spaces, that's something related to the terminal. For example: Mac Os X's terminal program allows to setup more space between the lines: http://osxdaily.com/2015/01/05/increase-line-spacing-terminal-mac-os-x/ If that's what you're looking for, you should check your terminal program to see if it allows you do configure this. EDIT: If you only want to add extra space to the prompt after the output of the command itselfs, try this: export PS1='\n\[\033[01;31m\]\u@\H:\[\033[02;36m\] \w \$\[\033[00m\] ' It adds an entire line (\n) but maybe it's better than nothing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86552/"
]
} |
197,628 | On a linux distrouser mino report this passwd status passw -S minomino P 04/21/2015 0 90 15 -1 P=passwd ok 04/21/2015 = date creation 0 min pass? 90 max pass valid 15 = ? -1 = ? Thanks | According to the manual: man passwd : -S, --status Display account status information. The status information consists of 7 fields. The first field is the user's login name. The second field indicates if the user account has a locked password (L), has no password (NP), or has a usable password (P). The third field gives the date of the last password change. The next four fields are the minimum age, maximum age, warning period, and inactivity period for the password. These ages are expressed in days. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
197,633 | If I read the ext4 documentation correctly, starting from Linux 3.8 it should be possible to store data directly in the inode in the case of a very small file. I was expecting such a file to have a size of 0 blocks, but it is not the case. # creating a small fileprintf "abcde" > small_file# checking size of file in bytesstat --printf='%s\n' small_file5# number of 512-byte blocks used by filestat --printf='%b\n' small_file8 I would expect this last number here to be 0. Am I am missing something? | To enable inline data in ext4, you'll need to use e2fsprogs 1.43 or later. Support for inline data was added in March 2014 to the Git repository but was only released in May 2016. Once you have that, you can run mke2fs -O inline_data on an appropriate device to create a new filesystem with inline data support; this will erase all your data . It's apparently not yet possible to activate inline data on an existing filesystem (at least, tune2fs doesn't support it). Now create a small file, and run debugfs on the filesystem. cd to the appropriate directory, and run stat smallfile ; you'll get something like Inode: 32770 Type: regular Mode: 0644 Flags: 0x10000000Generation: 2302340561 Version: 0x00000000:00000001User: 1000 Group: 1000 Size: 6File ACL: 0 Directory ACL: 0Links: 1 Blockcount: 0Fragment: Address: 0 Number: 0 Size: 0 ctime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015 atime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015 mtime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015crtime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015Size of extra inode fields: 28Extended attributes: system.data (0)Size of inline data: 60 As you can see the data was stored inline. This can also be seen using df ; before creating the file: % df -i /mnt/new Filesystem Inodes IUsed IFree IUse% Mounted on/dev/mapper/vg--large--mirror-inline 65536 12 65524 1% /mnt/new% df /mnt/new Filesystem 1K-blocks Used Available Use% Mounted on/dev/mapper/vg--large--mirror-inline 1032088 1280 978380 1% /mnt/new After creating the file: % echo Hello > smallfile% ls -ltotal 1-rw-r--r-- 1 steve steve 6 Apr 22 07:35 smallfile% df -i /mnt/newFilesystem Inodes IUsed IFree IUse% Mounted on/dev/mapper/vg--large--mirror-inline 65536 13 65523 1% /mnt/new% df /mnt/newFilesystem 1K-blocks Used Available Use% Mounted on/dev/mapper/vg--large--mirror-inline 1032088 1280 978380 1% /mnt/new The file is there, it uses an inode but the storage space available hasn't changed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4540/"
]
} |
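If you would rather not reformat a real disk, the same experiment as in the answer can be done on a loop device (a sketch only; needs root, e2fsprogs 1.43 or later, and the file and mount point paths here are made up):

truncate -s 64M /tmp/inline.img
mkfs.ext4 -O inline_data /tmp/inline.img
mkdir -p /mnt/inline
mount -o loop /tmp/inline.img /mnt/inline
echo Hello > /mnt/inline/smallfile
stat -c '%s bytes, %b blocks' /mnt/inline/smallfile

For a file small enough to be inlined, the block count reported here should be 0, matching the Blockcount: 0 shown by debugfs in the answer.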
197,636 | I want to run some script when a service fails. The closest thing I see to this is the FailureAction= option (under [Service] section), but it only offers reboot commands. | There is an OnFailure= directive in section [Unit] , documented in systemd.unit(5) . It is defined as follows: A space-separated list of one or more units that are activated when this unit enters the "failed" state. (Also there is an OnFailureJobMode= directive in the same section which allows to set job mode for activating OnFailure= units.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
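A common way to use that directive is a templated failure handler, so several services can share one hook; a sketch with invented unit and script names:

# /etc/systemd/system/failure-handler@.service
[Unit]
Description=Failure handler for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/on-failure.sh %i

Then, in the unit being watched, add:

[Unit]
OnFailure=failure-handler@%n.service

When the watched unit enters the failed state, systemd starts an instance of the handler, and the failing unit's name is passed to the script through the instance specifier.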
197,670 | There is a service I want to run only when another service fails ( [Unit] OnFailure=foo ), but I don't want this service ( foo ) to start up automatically on boot. One option is running systemctl disable foo , but I'm looking for another way. Background: I am creating an OS image, and I don't want to have to boot the machine up, run that command ( systemctl disable foo ), then shut it down before declaring my image final. | systemctl enable works by manipulating symlinks in /etc/systemd/system/ (for system daemons). When you enable a service, it looks at the WantedBy lines in the [Install] section, and plops symlinks in those .wants directories. systemctl disable does the opposite. You can just remove those symlinks—doing that by hand is fully equivalent to using systemctl disable . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/197670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
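Concretely, for a hypothetical unit foo.service installed with WantedBy=multi-user.target, the link created by systemctl enable lives under /etc/systemd/system/multi-user.target.wants/ and points at the unit file (under /usr/lib/systemd/system or /lib/systemd/system, depending on the distro). Removing it by hand while building the image has the same effect as systemctl disable foo:

rm /etc/systemd/system/multi-user.target.wants/foo.service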
197,688 | On Ubuntu 14.04, I found that when I don't plug in my external wireless adapter, its module rt2800usb is still shown in lsmod . When does automatic loading of a driver module happen? Is it when the device is connected to the computer, or when the OS boots? When does automatic unloading of a driver module happen? Is it when the device is disconnected from the computer, or when the OS shuts down? | When the kernel detects a new device, it runs the program modprobe and passes it a name that identifies the device. Most devices are identified through registered numbers for a vendor and model, e.g. PCI or USB identifiers. The modprobe program consults the module alias table /lib/modules/ VERSION /modules.alias to find the name of the file that contains the driver for that particular device. A similar principle applies to drivers for things that are not hardware devices, such as filesystems and cryptographic algorithms. For more details, see Debian does not detect serial PCI card after reboot Once modprobe has identified which module file ( .ko ) contains the requested driver, it loads the module file into the kernel: the module code is dynamically loaded into the kernel. If the module is loaded successfully, it will then appear in the listing from lsmod . The automatic loading of modules happens when the kernel detects new hotpluggable hardware, e.g. when you connect a USB peripheral. The operating system also does a pass of enumerating all the hardware that's present on the system early during startup, in order to load drivers for peripherals that are present at boot time. It's also possible to manually request the loading of a module with the modprobe or insmod command. Most distributions include a startup script that loads the modules listed in /etc/modules . Another way for modules to be loaded is if they're a dependency of a module: if module A depends on module B, then modprobe A loads B before loading A. Once a module is loaded, it remains loaded until explicitly unloaded, even if all devices using that driver have been disconnected. A long time ago, there was a mechanism to automatically unload unused modules, but it was removed, if I remember correctly, when udev came onto the scene. I suspect that automatic module unloading is not a common feature because the systems that would tend to need it are mostly desktop PCs that have lots of memory anyway (on the scale of driver code). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
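You can poke at each of those pieces by hand; for the adapter from the question (the matching alias lines will differ per device and kernel):

modinfo -F alias rt2800usb | head          # device IDs the driver claims to support
grep -i rt2800usb /lib/modules/"$(uname -r)"/modules.alias | head
sudo modprobe -r rt2800usb                 # unload it manually, since nothing unloads it for you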
197,737 | cat < file.txt : who is responsible for reading the file? Does the shell open and read the file and then write its content to the standard input of the command? | For the shell command cat <file.txt : The redirection operator < causes the shell to open file.txt for reading. The shell executes the cat command, with its standard input connected to file.txt . The cat command reads from its standard input (so file.txt ) and copies the content to its standard output. So the shell is the one opening the file, but the cat command is the one reading the data. You can observe what is going on by listing the system calls performed by the shell and its subprocesses. On Linux: $ strace -f sh -c 'cat <file.txt' >/dev/nullexecve("/bin/sh", ["sh", "-c", "cat <file.txt"], [/* 76 vars */]) = 0…open("file.txt", O_RDONLY) = 3…dup2(3, 0) = 0…clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fbc737539d0) = 22703[pid 22702] wait4(-1, <unfinished ...>[pid 22703] execve("/bin/cat", ["cat"], [/* 76 vars */]) = 0[pid 22703] read(0, "wibble"..., 32768) = 6[pid 22703] write(1, "wibble"..., 6) = 6[pid 22703] read(0, "", 32768) = 0[pid 22703] close(0) = 0[pid 22703] close(1) = 0[pid 22703] close(2) = 0[pid 22703] exit_group(0) = ?<... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 22703--- SIGCHLD (Child exited) @ 0 (0) ---rt_sigreturn(0x11) = 22703… (22702 is the parent shell process, 22703 is the child cat ) The shell command cat file.txt works differently. The shell executes the cat command, passing it one parameter, namely file.txt . The cat program opens file.txt for reading. The cat command reads from file.txt and copies the content to its standard output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111372/"
]
} |
197,792 | I'm trying to join all of the arguments to a Bash function into one single string with spaces separating each argument. I also need to have the string include single quotes around the whole string. Here is what I have so far... $array=("$@")str="\'"for arg in "${array[@]}"; do let $str=$str+$arg+" "donelet $str=$str+"\'" Obviously this does not work but I'm wondering if there is a way to achieve this? | I believe that this does what you want. It will put all the arguments in one string, separated by spaces, with single quotes around all: str="'$*'" $* produces all the scripts arguments separated by the first character of $IFS which, by default, is a space. Inside a double quoted string, there is no need to escape single-quotes. Example Let us put the above in a script file: $ cat script.sh #!/bin/shstr="'$*'"echo "$str" Now, run the script with sample arguments: $ sh script.sh one two three four 5'one two three four 5' This script is POSIX. It will work with bash but it does not require bash . A variation: concatenating with slashes instead of spaces We can change from spaces to another character by adjusting IFS : $ cat script.sh #!/bin/shold="$IFS"IFS='/'str="'$*'"echo "$str"IFS=$old For example: $ sh script.sh one two three four 'one/two/three/four' | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/197792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111391/"
]
} |
197,824 | What is the difference between: find . and find . -print What does -print actually do? $ find .../hello.txt./hello./hello/txt./hello/hello2./hello/hello2/hello3./hello/hello2/hello3/txt./hello/hello2/txt$ find . -print../hello.txt./hello./hello/txt./hello/hello2./hello/hello2/hello3./hello/hello2/hello3/txt./hello/hello2/txt | From the findutils find manpage : If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). ( -print is a find expression .) The POSIX documentation confirms this: If no expression is present, -print shall be used as the expression. So find . is exactly equivalent to find . -print ; the first has no expression so -print is added internally. The explanation of what -print does comes further down in the manpage: -print True; print the full file name on the standard output, followed by a newline. If you are piping the output of find into another program and there is the faintest possibility that the files which you are searching for might contain a newline, then you should seriously consider using the -print0 option instead of -print . See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/197824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111372/"
]
} |
197,830 | I have two text files, one file contains entries such as Id Value1 apple 2 orange 3 mango 4 banana 5 strawberry6 papaya In other file I have entries like Id Value6 strawberry 4 banana3 orange 1 mango2 papaya5 straw berry I have to match between Ids and the corresponding strings in the value column and find the string correctness. How can this be done? | From the findutils find manpage : If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). ( -print is a find expression .) The POSIX documentation confirms this: If no expression is present, -print shall be used as the expression. So find . is exactly equivalent to find . -print ; the first has no expression so -print is added internally. The explanation of what -print does comes further down in the manpage: -print True; print the full file name on the standard output, followed by a newline. If you are piping the output of find into another program and there is the faintest possibility that the files which you are searching for might contain a newline, then you should seriously consider using the -print0 option instead of -print . See the UNUSUAL FILENAMES section for information about how unusual characters in filenames are handled. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/197830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111431/"
]
} |
197,835 | Can anyone explain when and how to use << operator for input redirection? I've googled enough but couldn't find. | The << redirection operator introduces a "here document": the text fed into standard input comes just after the redirection. Here's an example: grep Hello <<EOFThis line won't appearHello this one willHello againEOF All the text between <<EOF and EOF is fed into grep . EOF isn't special here, the shell takes the word given just after << and uses it as a delimiter. An interesting variant is <<- which strips leading tabs. See the bash documentation for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50426/"
]
} |
197,854 | If I start a process and then delete the binary of it, I can still recover it from /proc/<pid>/exe : $ cp `which sleep` .$ ./sleep 10m &[1] 13728$ rm sleep$ readlink /proc/13728/exe /tmp/sleep (deleted)$ cp /proc/13728/exe ./sleep-copy$ diff sleep-copy `which sleep` && echo not differentnot different$ stat /proc/13728/exe File: ‘/proc/13728/exe’ -> ‘/tmp/sleep (deleted)’ Size: 0 Blocks: 0 IO Block: 1024 symbolic link On the other hand, if I make a symbolic link myself, delete the target and attempt to copy: cp: cannot stat ‘sleep’: No such file or directory /proc is an interface to the kernel. So does this symbolic link actually point to the copy loaded in memory, but with a more useful name? How does the exe link work, exactly? | /proc/<pid>/exe does not follow the normal semantics for symbolic links. Technically this might count as a violation of POSIX, but /proc is a special filesystem after all. /proc/<pid>/exe appears to be a symlink when you stat it. This is a convenient way for the kernel to export the pathname it knows for the process' executable. But when you actually open that "file", there is none of the normal procedure of reading the following the contents of a symlink. Instead the kernel just gives you access to the open file entry directly. Notice that when you ls -l a /proc/<pid>/exe pseudofile for a process whose executable has been deleted the symlink target has the string " (deleted)" at the end of it. This would normally be non-sensical in a symlink: there definitely isn't a file that lives at the target path with a name that ends with " (deleted)". tl;dr The proc filesystem implementation just does its own magic thing with pathname resolution. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70524/"
]
} |
197,860 | How might one search the first 50 lines of files in a directory for a given string ? I'm specifically looking for which database table files (from mysqldump) define a specific field, but I don't want to grep the whole files, which after 20-40 lines of CREATE TABLE continue on to hundreds of INSERT statements. I could write a Python script to iterate over the first few lines of each file, but from experience Python, though powerful, is slow. I have over 200 *.sql files to go through, and I'd like to learn a solution which I could generalize in the future anyway. | This solution works, but I feel that it is clumsy. Searching the first 50 lines for the string "foobar": $ for I in *.sql ; do echo $I && head -n 50 $I | grep foobar ; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
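With GNU awk the scan can also stop after line 50 of each file, so the long INSERT sections are never read at all (nextfile is a gawk extension, hence just a sketch rather than a portable one-liner):

gawk 'FNR > 50 { nextfile } /foobar/ { print FILENAME ": " $0 }' *.sql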
197,865 | I just upgraded my Manjaro installation to version 0.8.12. I didn't notice any difference right away, but after i rebooted the computer the top window border on every window turned green. I really don't like that color, and I would like to turn it back to black or gray. I have tried to change the XFCE theme, but the border stays the same. How can I change the color? | This solution works, but I feel that it is clumsy. Searching the first 50 lines for the string "foobar": $ for I in *.sql ; do echo $I && head -n 50 $I | grep foobar ; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111457/"
]
} |
197,878 | I'm running into issues with redirection in tcsh. Consider the following commands: vi --version and vi --xxx . And let's assume this is on a machine where vi supports the --version option. The option --xxx is invalid, and therefore vim should display something via stderr . By that reasoning, using 2> /dev/null with both of these commands should give output for the valid case and no output for the invalid case. And that is what I see in bash, zsh, ksh, and dash. $ vi --xxx 2> /dev/null$ vi --version 2> /dev/nullVIM - Vi IMproved 7.4 (2013 Aug 10, compiled Oct 20 2014 16:09:17)... However, when I try this in tcsh, it gives me no output in both cases. $ vi --xxx 2> /dev/null$ vi --version 2> /dev/null(there is no output here) What is going on here? Am I redirecting stderr incorrectly? Here is the output of tcsh --version : tcsh 6.18.01 (Astron) 2012-02-14 (i686-intel-linux) options wide,nls,dl,al,kan,rh,nd,color,filec | This inconsistency is in fact the first reason in the list of reasons why csh programming is considered harmful . Or what if you just want to throw away stderr and leave stdout alone? Pretty simple operation, eh? cmd 2>/dev/null Works in the Bourne shell. In the csh, you can only make a pitiful attempt like this: (cmd > /dev/tty) >& /dev/null But who said that stdout was my tty? So it's wrong. This simple operation CANNOT BE DONE in the csh. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83381/"
]
} |
197,896 | echo "scale=3;1/8" | bc shows .125 on the screen. How can I show 0.125 when the result is less than one? | bc cannot output a leading zero before the decimal point, but you can use printf : $ printf '%.3f\n' "$(echo "scale=3;1/8" | bc)"0.125 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/197896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60221/"
]
} |
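If you are reaching for printf anyway, awk can do the arithmetic and the formatting in one go and prints the leading zero by itself (just one possible alternative, not a fix for bc):

$ awk 'BEGIN { printf "%.3f\n", 1/8 }'
0.125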
197,898 | How to set the default umask for cron jobs, please? (On RHEL 6.) Jobs are started under a non-interactive (obviously) no-login (?) shell. Not only do I prefer dash over bash, but consider also bash called as /bin/sh . It seems that in a non-interactive, no-login invocation neither shell reads any start-up file like /etc/profile . Is the default umask hard-wired in the shell, or is it inherited from the cron daemon? | On RHEL, PAM is used, so you could try using pam_umask . Try putting this in /etc/pam.d/crond session optional pam_umask.so umask=0022 Naturally, this is untested, and may very well break assumptions made by various applications. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111411/"
]
} |
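If only a handful of jobs need a specific umask, a less global alternative is to set it in the crontab entry itself, since cron hands the command line to the shell (the job path here is invented):

# m h dom mon dow command
0 2 * * * umask 0022; /usr/local/bin/nightly-job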
197,904 | I know that [] works in ls pattern matching: $ lsfoo.c foo.h$ ls foo.[ch]foo.c foo.h but I cannot find where this is documented. I would like to know the syntax that would match these: $ lsfoo.asd foo.qwe My best guess was: ls foo.[{asd}{qwe}] . It did not work. | I think you are looking for the brace expansion {asd,qwe} : $ ls foo.{asd,qwe}foo.asd foo.qwe | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38106/"
]
} |
197,917 | I have two files: file1 contains: 1234 file2 contains: JohnSamGeorgeKen I want to combine these files to create one file (file3): 1, John2, Sam3, George4, Ken My thought was to use nested loops and add the comma for each line, for x in file1doecho "$x" >> file3for y in file2echo ",$y" >> file3donedone Is there a command I need to use? How do I get x and y to appear on one line for each entry in both files? | You can use paste : $ :|paste -d',' file1 - | paste -d' ' - file21, John2, Sam3, George4, Ken or: $ :|paste -d', ' file1 - file2 where the -d', ' argument specifies to use a comma and space as a delimiter between the contents of each file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111352/"
]
} |
197,918 | I read that there are two modes called “kernel mode” and “user mode” to handle execution of processes. ( Understanding the Linux Kernel , 3rd Edition.) Is that a hardware switch (kernel/user) that is controlled by Linux, or software feature provided by the Linux kernel? | Kernel mode and user mode are a hardware feature, specifically a feature of the processor. Processors designed for mid-to-high-end systems (PC, feature phone, smartphone, all but the simplest network appliances, …) include this feature. Kernel mode can go by different names: supervisor mode, privileged mode, etc. On x86 (the processor type in PCs), it is called “ring 0”, and user mode is called “ring 3”. The processor has a bit of storage in a register that indicates whether it is in kernel mode or user mode. (This can be more than one bit on processors that have more than two such modes.) Some operations can only be carried out while in kernel mode, in particular changing the virtual memory configuration by modifying the registers that control the MMU . Furthermore, there are only very few ways to switch from user mode to kernel mode, and they all require jumping to addresses controlled by the kernel code. This allows the code running in kernel mode to control the memory that code running in user mode can access. Unix-like operating systems (and most other operating systems with process isolation) are divided in two parts: The kernel runs in kernel mode. The kernel can do everything. Processes run in user mode. Processes can't access hardware and can't access the memory of other processes (except as explicitly shared). The operating system thus leverages the hardware features (privileged mode, MMU) to enforce isolation between processes. Microkernel -based operating systems have a finer-grained architecture, with less code running in kernel mode. When user mode code needs to perform actions that it can't do directly (such as access a file, access a peripheral, communicate with another process, …), it makes a system call : a jump into a predefined place in kernel code. When a hardware peripheral needs to request attention from the CPU, it switches the CPU to kernel mode and jumps to a predefined place in kernel code. This is called an interrupt . Further reading Wikipedia What is the difference between user-level threads and kernel-level threads? Hardware protection needed for operating system kernel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110279/"
]
} |
197,932 | I have two micro-controllers which are connected to my computer(ubuntu 12.04) through serial-USB cables.One of the controllers (ttyUSB0) is programmed such that it sends data serially and another (ttyAMC0) is programmed to receive data serially. So please any one could guide on achieving a link(send the data available at ttyUSB0 to ttyAMC0 ) between these two hardware ports. Thanks in advance. | Kernel mode and user mode are a hardware feature, specifically a feature of the processor. Processors designed for mid-to-high-end systems (PC, feature phone, smartphone, all but the simplest network appliances, …) include this feature. Kernel mode can go by different names: supervisor mode, privileged mode, etc. On x86 (the processor type in PCs), it is called “ring 0”, and user mode is called “ring 3”. The processor has a bit of storage in a register that indicates whether it is in kernel mode or user mode. (This can be more than one bit on processors that have more than two such modes.) Some operations can only be carried out while in kernel mode, in particular changing the virtual memory configuration by modifying the registers that control the MMU . Furthermore, there are only very few ways to switch from user mode to kernel mode, and they all require jumping to addresses controlled by the kernel code. This allows the code running in kernel mode to control the memory that code running in user mode can access. Unix-like operating systems (and most other operating systems with process isolation) are divided in two parts: The kernel runs in kernel mode. The kernel can do everything. Processes run in user mode. Processes can't access hardware and can't access the memory of other processes (except as explicitly shared). The operating system thus leverages the hardware features (privileged mode, MMU) to enforce isolation between processes. Microkernel -based operating systems have a finer-grained architecture, with less code running in kernel mode. When user mode code needs to perform actions that it can't do directly (such as access a file, access a peripheral, communicate with another process, …), it makes a system call : a jump into a predefined place in kernel code. When a hardware peripheral needs to request attention from the CPU, it switches the CPU to kernel mode and jumps to a predefined place in kernel code. This is called an interrupt . Further reading Wikipedia What is the difference between user-level threads and kernel-level threads? Hardware protection needed for operating system kernel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111486/"
]
} |
197,988 | I have a DNS zone which has serial number 2015040500. Today I am going to add some CNAME records there, so I am interested in how to increment the serial number: should I change it based on today's date, e.g. it would become 2015042200, or just increment it by one, so it would be 2015040501 ? | You can do it however you please; the only thing you must make sure of is that the new serial number is greater than the old one. Having said that, I would recommend a timestamp based approach following a scheme like: YYYYMMDDxx where xx starts at 00 and is incremented for all edits on that specific day (when editing on another day, you reset xx to 00 ) The main advantage of this scheme is that you also know the date of the last modification of your zone-file at first glance. It also makes the serial number incrementing more robust. The alternative is to start with 1 and just increment whenever you edit the file. If the serial number is already timestamp based (and 2015040500 looks very much like that), you should probably stick with that decision (even if not made by you), and use the logical successor 2015042200 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/197988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111523/"
]
} |
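The YYYYMMDDxx scheme is easy to script; a minimal sketch that prints the next serial given the current one (no range checking, and it assumes at most 100 edits per day):

#!/bin/sh
cur=$1                          # e.g. 2015040500
today=$(date +%Y%m%d)
if [ "${cur%??}" = "$today" ]; then
    echo $((cur + 1))           # same day: bump the two-digit counter
else
    echo "${today}00"           # new day: the date part changes, counter resets
fi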
198,000 | I have read the following article: How do I bypass/ignore the gpg signature checks of apt? It outlines how to configure apt to not check the signatures of packages at all . However, I'd like to limit the effect of this setting to a single (in this case locally hosted) repository. That is: all official repositories should use the GPG signature check as usual, except for the local repo . How would I go about doing that? Failing that, what would be the advantage (security-wise) of signing the packages during an automated build (some meta-packages and a few programs) and then doing all that secure apt prescribes? After all the host with the repo would then also be the one on which the secret GPG key resides. | You can set options in your sources.list : deb [trusted=yes] http://localmachine/debian wheezy main The trusted option is what turns off the GPG check. See man 5 sources.list for details. Note: this was added in apt 0.8.16~exp3. So it's in wheezy (and of course jessie), but not squeeze. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/198000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
198,003 | How can I pick which kernel GRUB2 should load by default? I recently installed a the linux realtime kernel and now it loads by default. I'd like to load the regular one by default. So far I only managed to pick the default OS.. and for some reason the /boot/grub.cfg already assumes that I want to load the rt-kernel and put it into the generic linux menu entry (in my case Arch Linux). | I think most distributions have moved additional kernels into the advanced options sub menu at this point, as TomTom found was the case with hisArch. I didn't want to alter my top level menu structure in order to select a previous kernel as the default. I found the answer here . To summarize: Find the $menuentry_id_option for the submenu: $ grep submenu /boot/grub/grub.cfgsubmenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { Find the $menuentry_id_option for the menu entry for the kernel you want to use: $ grep gnulinux /boot/grub/grub.cfgmenuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' {submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-rt-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-rt-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.18.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.17.0-0.bpo.1-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.17.0-0.bpo.1-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.9.0-8-amd64-recovery-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc' { Comment out your current default grub in /etc/default/grub and replace it with the sub-menu's $menuentry_id_option from step one, and the selected kernel's $menuentry_id_option from step two separated by > . In my case the modified GRUB_DEFAULT is: #GRUB_DEFAULT=0GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc" Update grub to make the changes. 
For Debian this is done like so: $ sudo update-grub Done. Now when you boot, the advanced menu should have an asterisk and you should boot into the selected kernel. You can confirm this with uname . $ uname -aLinux NAME 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.0-0 (2018-09-13) x86_64 GNU/Linux Changing this back to the most recent kernel is as simple as commenting out the new line and uncommenting #GRUB_DEFAULT=0 : GRUB_DEFAULT=0#GRUB_DEFAULT="gnulinux-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc>gnulinux-4.18.0-0.bpo.1-amd64-advanced-38ea4a12-6cfe-4ed9-a8b5-036295e62ffc" then rerunning update-grub . Specifying IDs for all the entries from the top level menu is mandatory. The format for setting the default boot entry can be found in the documentation . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/198003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111041/"
]
} |
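If you only need the older kernel for a single boot rather than as the permanent default, grub-reboot (grub2-reboot on some distros) accepts the same "submenu>entry" string; note that it relies on GRUB_DEFAULT=saved being configured, so treat this as a sketch:

sudo grub-reboot "Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 4.18.0-0.bpo.1-amd64"
sudo reboot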
198,009 | I have two Debian virtual machines that were built from the same netinstall iso, but probably have different packages installed. One of them has an /etc/exports file for nfs mounts but the other one doesn't. I would like them both to have this file as installed by the package manager. I come from the Fedora world and were I still in it, I would yum whatprovides /etc/exports . I am told that in Debian land, I should do apt-file search . However, I am not getting any results with: apt-file updateapt-file search /etc/exports What am I missing here? | When you're looking for a file belonging to a package which is installed on your machine, you can use dpkg -S (equivalent to dpkg-query -S ): dpkg -S /etc/exports In this case though it won't find anything, because /etc/exports is created by a maintainer script (and that type of file is explicitly not handled by dpkg-query , or for that matter by apt-file ). So if apt-file and dpkg -S fail to find a file, you can try to look through the maintainer scripts: grep /etc/exports /var/lib/dpkg/info/* This should match nfs-kernel-server 's maintainer scripts; that's the package which creates /etc/exports , at least on my NFS servers. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/198009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34796/"
]
} |
198,010 | After a lot of mucking around Xterm, poring through reams of webpages I have thrown in the towel and realized that this isn't something I can figure out on my own. TrueType vs Bitmap Is the option xterm*font used to specify only bitmap fonts and is *faceName used only for TrueType fonts? I'm using the commands xlsfonts and fc-list to find out the Bitmap and TrueType fonts that are installed. Is this correct? I want to set the XTerm font to Ubuntu Mono. This is the output of fc-list | grep -i ubuntu Ubuntu Mono for Powerline:style=RegularForPowerlineUbuntu Mono for Powerline:style=Bold ItalicUbuntu Mono for Powerline:style=BoldForPowerlineUbuntu Mono for Powerline:style=ItalicForPowerline and I added XTerm*faceName: Ubuntu Mono for Powerline:style=RegularForPowerline to my ~/.Xresources and ran xrdb -merge ~/.Xresources xrdb -query all shows that *faceName is set to Ubuntu Mono for Powerline:style=RegularForPowerline However, this doesn't work. What am I missing/screwing up here? | I finally figured out what's wrong just a couple of days ago after scrounging through multiple sources. Combining everyone's responses here: Ubuntu Mono is a TrueType font and TrueType fonts require xterm to be compiled with FreeType library support. To check whether xterm has this, use the ldd /path/to/xterm/binary command and see if it says freetype in there. An alternate way is to see if xterm has the -fa option. If your xterm has FreeType lib support, choose a Bitmap fonts, by running xfontsel -p and use the exact string it prints upon exit. TrueType fonts, using fc-list :scalable=true:spacing=mono: family and use the exact string it outputs. Once you have the font name using one of the above steps, set it via XTerm*faceName: <name of the font> If you install a new font, and it doesn't show up when you run one of the above commands, rebuild your font cache using fc-cache -frv and try again. P.S. I used Ubuntu Mono patched font downloaded from here I'm using XTerm*faceName: Ubuntu Mono derivative Powerline Thanks to Wumpus Q. Wumbley and Thomas Dickey for their detailed responses. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/198010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111534/"
]
} |
198,045 | I have a script which requires a directory as one argument. I want to support two forms: one is like a/b/c (no slash at the end) and the other is like a/b/c/ (with a slash at the end). My question: given either of the two forms, how can I keep the first form unchanged and strip the trailing slash from the second form to convert it to the first form? | dir=${1%/} will take the script's first parameter and remove a trailing slash if there is one. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/198045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87568/"
]
} |
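One edge case to keep in mind: if the argument is the root directory / , the expansion strips its only character and leaves an empty string, so guard for it if / is a legal input:

dir=${1%/}
[ -z "$dir" ] && dir=/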
198,065 | How can I skip the first 6 lines/rows in a text file (input.txt) and process the rest with awk? The format of my awk script (program.awk) is: BEGIN {} { process here} END {} My text file is like this: 0350.1 4.32.0 1.51.5 3.00.3 3.31.5 2.1... I want to process the file starting from: 0.3 3.31.5 2.1... | Use either of the two patterns: NR>6 { this_code_is_active } or this: NR<=6 { next }{ this_code_is_active } Use FNR instead of NR if you have many files as arguments to awk and want to skip 6 lines in every file. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/198065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111564/"
]
} |
198,128 | Consider this line: ${libdir}/bin/licenseTool check "${SERIAL}" "${VERSION}" "${PRODUCT}" ${libdir} | grep '^200' >/dev/null What's the point of looking for the pattern in the output if the result of that is thrown away? And, if a line like that appears as the last thing in a bash script, is its exit value returned to the script's caller, or ignored? (I'm speculating on whether we can assume this is done for side effects only or returns something to the caller somehow.) | Your suspicion is correct; the exit status of the last command of the script will be passed to the calling environment. So the answer is that this script will return exit status 0 if grep matched something in the data, exit status 1 if there was no match, and exit status 2 if some error occurred. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/198128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106567/"
]
} |
198,138 | I can do diff filea fileb to see the difference between files. I can also do head -1 filea to see the first line of filea or fileb. How can I combine these commands to show the difference between the first line of filea and the first line of fileb? | If your shell supports process substitution , try: diff <(head -n 1 filea) <(head -n 1 fileb) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/198138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101831/"
]
} |
198,151 | I have a zipped file like myArchive123.tar.gz . Inside, it contains a folder like helloWorld If I extract it with tar -xf myArchive123.tar.gz , I get the helloWorld folder: ls myArchive123.tar.gzhelloWorld I want the output to be the same as the file name minus the .tar.gz extension. I.e.: tar <magic parameters> myArchive123.tar.gz ls myArchive123.tar.gz myArchive123cd myArchive123ls helloWorld Can this be done? I never know what's inside the archive. It could be a folder, could be many files. I'd be ok with using another tool if tar can't do it. I'd be ok with a longer form that can be turned into a script. EDIT In the meantime, I wrote myself a script that seems to get the job done (see my posted answer below). The main thing is that it should be packageable into a one-liner like: extract <file> | With gnu tar , you could use --xform (or --transform ) to prepend /prefix/ to each file name: tar -xf myArchive.tar.gz --xform='s|^|myArchive/|S' Note there's no leading / in prefix/ and the sed expression ends with S to exclude symbolic link targets from file name transformations. To test it (dry-run): tar -tf myArchive.tar.gz --xform='s|^|myArchive/|S' --verbose --show-transformed-names To get you started, here's a very simplistic script that you could invoke as extract <file> : STRIP=${1%.*} #strip last suffixNAME=${STRIP%.tar} #strip .tar suffix, if presenttar -xf "$1" --xform="s|^|$NAME/|S" #run command | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/198151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77271/"
]
} |
198,178 | I often saw the words "kernel ring buffer", "user level", "log level" and some other words appear together. e.g. /var/log/dmesg Contains kernel ring buffer information. /var/log/kern.log Contains only the kernel's messages of any loglevel /var/log/user.log Contains information about all user level logs Are they all about logs? How are they related and different? By "level", I would imagine a hierarchy of multiple levels? Is "user level" related to "user space"? Are they related to runlevel or protection ring in some way? | Yes, all of this has to do with logging. No, none of it has to do with runlevel or "protection ring". The kernel keeps its logs in a ring buffer. The main reason for this is so that the logs from the system startup get saved until the syslog daemon gets a chance to start up and collect them. Otherwise there would be no record of any logs prior to the startup of the syslog daemon. The contents of that ring buffer can be seen at any time using the dmesg command, and its contents are also saved to /var/log/dmesg just as the syslog daemon is starting up. All logs that do not come from the kernel are sent as they are generated to the syslog daemon so they are not kept in any buffers. The kernel logs are also picked up by the syslog daemon as they are generated but they also continue to be saved (unnecessarily, arguably) to the ring buffer. The log levels can be seen documented in the syslog(3) manpage and are as follows: LOG_EMERG : system is unusable LOG_ALERT : action must be taken immediately LOG_CRIT : critical conditions LOG_ERR : error conditions LOG_WARNING : warning conditions LOG_NOTICE : normal, but significant, condition LOG_INFO : informational message LOG_DEBUG : debug-level message Each level is designed to be less "important" than the previous one. A log file that records logs at one level will also record logs at all of the more important levels too. The difference between /var/log/kern.log and /var/log/mail.log (for example) is not to do with the level but with the facility, or category. The categories are also documented on the manpage. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/198178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
198,236 | What would be the best way to check whether $1 is an integer in /bin/dash ? In bash, I could do: [[ $1 =~ ^([0-9]+)$ ]] But that does not seem to be POSIX compliant and dash does not support that | The following detect integers, positive or negative, and work under dash and are POSIX: Option 1 echo "$1" | grep -Eq '^[+-]?[0-9]+$' && echo "It's an integer" Option 2 case "${1#[+-]}" in ''|*[!0-9]*) echo "Not an integer" ;; *) echo "Integer" ;;esac Or, with a little use of the : (nop) command: ! case ${1#[+-]} in *[!0-9]*) :;; ?*) ! :;; esac && echo Integer | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/198236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
198,254 | When find finds nothing, it still exits with code 0. Is there a way to make it return an exit code indicating failure when no file was found? | If your grep supports reading NUL-delimited lines (like GNU grep with -z ), you can use it to test if anything was output by find : find /some/path -print0 | grep -qz . To pipe the data to another command, you can remove the -q option, letting grep pass on the data unaltered while still reporting an error if nothing came through: find /some/path -print0 | grep -z . | ... Specifically, ${PIPESTATUS[1]} in bash should hold the exit status of grep . If your find doesn't support -print0 , then use grep without -z and hope that newlines in filenames don't cause problems: find ... | grep '^' | ... In this case, using ^ instead of . might be safer. If output has consecutive newlines, ^ will pass them by, but . won't. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/198254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28937/"
]
} |
198,305 | I have an unknown (let's assume uname -a returns nothing) Linux distribution I have root access to. How can I verify, looking at the file structure, that it is based on Debian? | You can check whether a file called /etc/debian_version exists. $ cat /etc/debian_versionwheezy/sid If it exists, you can also see the Debian version. Distributions which are based on Debian, like Ubuntu, Linux Mint, and so on, also have that file. Most distributions also have a release file you can try, to see what comes out: cat /etc/*release | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/198305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
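On reasonably recent systems there is also /etc/os-release, which names both the distribution and what it derives from; field contents vary per distro, so treat this as a best-effort check:

. /etc/os-release
echo "ID=$ID ID_LIKE=${ID_LIKE:-none}"

On Debian derivatives such as Ubuntu or Linux Mint, ID_LIKE normally includes debian.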
198,372 | In zsh, running the command read -p 'erasing all directories (y/n) ?' ans , throws the error, read: -p: no coprocess But in bash, it prints a prompt. How do I do this in zsh? | You can still use read , you just need to print a prompt first. In zsh , -p indicates that input should be read from a coprocess instead of indicating the prompt to use. You can do the following instead, which is POSIX-compliant: printf >&2 '%s ' 'erase all directories? (y/n)'read ans Like for ksh/zsh's read 'var?prompt' or bash's read -p prompt var , the prompt is issued on stderr so as not to pollute the normal output of your script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/198372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
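For completeness, the zsh/ksh-native form mentioned in the answer, where the prompt is attached to the variable name, looks like this:

read "ans?erasing all directories (y/n) ? "
echo "$ans"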
198,394 | If I have some nested and indented statements that echo a multi-line string into a file, the indents from the original shell (in my case bash) will be carried over into the target file. To avoid this, I remove indents from the echo'ed output, but this loses that indented code formatting in my source, for example, #! /bin/bashfunction configure_default_vhost() { case "$2" in [--production]) echo "<VirtualHost *:80># These lines are "breaking" out of the preffered indentingredirect 404 /ErrorDocument 404</VirtualHost>" > /etc/apache/sites-availabe/000-default.conf esac} I'm aiming to get something as close as possible to this: #! /bin/bashfunction configure_default_vhost() { case "$2" in [--production]) echo "<VirtualHost *:80> # These lines are aligned with the preffered indenting redirect 404 / ErrorDocument 404 </VirtualHost> " > /etc/apache/sites-availabe/000-default.conf esac} (Note: this question has been listed as a possible duplicate of a HEREDOC related question. I'm not sure where the correct place to respond to this is so I'll put here for now (someone please let know if otherwise). My question is about indenting code blocks, and heredoc is only one of the many ways to acheive this and actually HEREDOC wasn't the solution I went with.) | You can use a "here-document" with the - modifier. It can be indented by tab characters. You must switch from echo to cat . cat <<-EOF > /etc/apache/sites-availabe/000-default.conf <VirtualHost *:80> redirect 404 / ErrorDocument 404 </VirtualHost>EOF Or, to keep tabs in the result, you can pre-process the HERE document by let's say sed and indent with 4 spaces instead: sed 's/^ //' <<EOF....{....(------)func () {....(------)return....(------)}....}EOF I used . instead of a space and (------) instead of a tab to show how to format the script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/198394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |