source_id | question | response | metadata
---|---|---|---
305,701 | I'm trying to use ls -l and filter only the files that were last modified in the current month. I'm running: ls -l | awk '{if($6 == date +%b) print$0; and get a syntax error. I've been looking everywhere on the internet for simple rules on how to use quotes correctly in awk and in bash scripting, and I can't make sense of it. I'm very confused about this even though I have experience with programming, so a general answer regarding those issues would be far more appreciated than a specific answer that deals only with this specific case. | You probably don't want to call date multiple times in your code; it's "constant" for the duration of the run. So we can set it beforehand and pass it in as a variable, e.g. ls -l | awk -v month="$(date +%b)" '$6 == month { print }' Note that this will fail if you have a file that's from this month but a year old (e.g. from August 2015), or in locales where month names contain spaces, or if there are file names that contain newline characters. Parsing the output of ls is fraught with peril. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/305701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182347/"
]
} |
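One way to avoid parsing ls altogether, assuming GNU find and GNU date are available (the path is a placeholder), is a sketch like:
# list regular files in the current directory modified since the first of this month
find . -maxdepth 1 -type f -newermt "$(date +%Y-%m-01)"
This compares full timestamps rather than month names, so it also sidesteps the "same month, different year" pitfall the answer mentions.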
305,735 | Using aptitude I can make a search like: aptitude search '~i bash' This seems to be an aptitude-specific regex. Is it possible to do the same thing using apt or apt-cache without additional commands? apt search '~i bash' is not working. | You can try: apt list --installed bash This will try to list the installed packages with the name bash. However, if you want to search for a particular file, use apt-file. The following command will list all the packages that have the string bash within their name: apt list -a --installed bash As suggested by @Exostor, apt list -a --installed bash does not always list those packages that start with a particular string; instead use: apt list -a --installed bash* If globbing is what you're searching for, please upvote @Exostor's comment below. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/305735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102020/"
]
} |
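For comparison, dpkg itself can list installed packages matching a glob; a sketch assuming a Debian/Ubuntu system:
# 'ii' marks installed packages; print name and version
dpkg -l 'bash*' | awk '/^ii/ { print $2, $3 }'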
305,868 | I have executed the below command cat /proc/loadavg && date Actual Result: 0.00 0.00 0.00 1/803 26256Fri Aug 26 09:00:56 EEST 2016 Expected Result: 0.00 0.00 0.00 1/803 26256 @@ Fri Aug 26 09:00:56 EEST 2016 I have tried sed and tr , but they didn't work:
cat /proc/loadavg && date | sed 's/\n/ @@ /g'
cat /proc/loadavg && date | tr '\n' ' @@ '
Any idea what I am missing? | Your best bet is to use printf . You have two strings and you want to output them with some additional formatting. That's exactly what printf does. $ printf "%s @@ %s\n" "$(cat /proc/loadavg)" "$(date)" Your tr attempt does not work since tr modifies characters , not words. You could use it to replace the newlines with one single character though: $ ( cat /proc/loadavg; date ) | tr '\n' '@' ... but it doesn't do quite what you need. Your sed attempt does not work since the newline is stripped from the input to sed (i.e. sed -n '/\n/p' inputfile would never print anything). You could still do it with sed if you read the second line (from date ) with the N editing command while editing the first line (which will place a newline between them): $ ( cat /proc/loadavg; date ) | sed 'N;s/\n/ @@ /' ... but I would personally prefer the printf solution. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/305868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81640/"
]
} |
305,882 | I have a shell script which executes an ansible playbook and I want to process the output of this playbook. I am not sure how I can do this. Script:
#!/bin/sh
ansible-playbook -i inventory/ec2.py services_status.yml
The output of the ansible-playbook command is:
PLAY [all] *********************************************************************
TASK [cmx_running_services] ****************************************************
ok: [172.31.35.225]
ok: [172.31.9.253]
TASK [debug] *******************************************************************
ok: [172.31.35.225] => { "services": { "changed": false, "meta": { "services": [ "zk", "kafka" ] } }}
ok: [172.31.9.253] => { "services": { "changed": false, "meta": { "MyService": [ "default" ], "services": [ "monitoring-agent" ] } }}
PLAY RECAP *********************************************************************
172.31.35.225 : ok=2 changed=0 unreachable=0 failed=0
172.31.9.253 : ok=2 changed=0 unreachable=0 failed=0
In my script I want to process this output and store a json object in the format: { "172.31.35.225":{ "services":[ "zk", "kafka" ] }, "172.31.9.253":{ "MyService":[ "default" ], "services":[ "monitoring-agent" ] }} | You can print the result of the playbook as JSON and then parse it in a modern language like Python. All you need to do is set an environment variable ANSIBLE_STDOUT_CALLBACK=json Example: ANSIBLE_STDOUT_CALLBACK=json ansible-playbook -i hosts.ini main.yaml | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/305882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186634/"
]
} |
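If the JSON callback output is piped to jq, a mapping like the one the question asks for can be assembled from it; this is only a sketch, since the exact paths (.plays[].tasks[].hosts) and the registered variable name depend on the Ansible version and the playbook:
ANSIBLE_STDOUT_CALLBACK=json ansible-playbook -i inventory/ec2.py services_status.yml \
  | jq '[.plays[].tasks[].hosts | to_entries[] | select(.value.services) | {(.key): .value.services.meta}] | add'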
305,886 | Doing 7z x on an archive gives me '20 ª.1 ¯® '$'\302\212''¨à®¢®£à ¤áª ï ã«.rtf' IMG_6527.JPG''$'\302\212''¨à®¢®£à ¤áª ï, ¨áâ.doc' IMG_6532.JPG''$'\302\204''®¯ ᮣ« 襨¥(3).doc' IMG_6542.JPG''$'\302\204\302\212\302\217''.doc' IMG_6543.JPG IMG_6526.JPG Clearly some files were encoded differently and 7z by default does not convert to UTF-8. How to tell 7z to do the conversion? The only options I found for charset: -scc{UTF-8|WIN|DOS} : set charset for for console input/output -scs{UTF-8|UTF-16LE|UTF-16BE|WIN|DOS|{id}} : set charset for list files WIN , DOS , UTF-8 do not work. When trying to guess charset via 7z -scsCP1251 l 26-08-2016_10-18-14.zip 7z gives warning: Unsupported charset: cp1251 unzip does this right (cyrillic symbols got converted): '20 к.1 по Кировоградская ул.rtf' IMG_6532.JPG 'Доп соглашение(3).doc'26-08-2016_10-18-14.zip IMG_6542.JPG 'Кировоградская, ист.doc'IMG_6526.JPG IMG_6543.JPGIMG_6527.JPG ДКП.doc Supplementary information p7zip Version: 15.14.1 (locale=ru_RU.UTF-8,Utf16=on,HugeFiles=on,64 bits,4 CPUs AMD Phenom(tm) II X4 960T Processor (100FA0),ASM) hexdump of start of archive ( od -tx1z -Ax ): 000000 50 4b 03 04 14 00 00 00 00 00 81 54 1a 49 7e 35 >PK.........T.I~5<000010 fa 34 00 ec 00 00 00 ec 00 00 07 00 17 00 84 8a >.4..............<000020 8f 2e 64 6f 63 75 70 13 00 01 19 fd 45 54 d0 94 >..docup.....ET..<000030 d0 9a d0 9f 2e 64 6f 63 00 00 00 00 d0 cf 11 e0 >.....doc........<000040 a1 b1 1a e1 00 00 00 00 00 00 00 00 00 00 00 00 >................<000050 00 00 00 00 3e 00 03 00 fe ff 09 00 06 00 00 00 >....>...........<000060 00 00 00 00 00 00 00 00 01 00 00 00 71 00 00 00 >............q...<000070 00 00 00 00 00 10 00 00 73 00 00 00 01 00 00 00 >........s.......<000080 fe ff ff ff 00 00 00 00 70 00 00 00 ff ff ff ff >........p.......<000090 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff >................<*000230 ff ff ff ff ff ff ff ff ff ff ff ff ec a5 c1 00 >................<000240 07 80 19 04 00 00 f0 12 bf 00 00 00 00 00 00 10 >................<000250 00 00 00 00 00 08 00 00 72 7b 00 00 0e 00 62 6a >........r{....bj<000260 62 6a 2a 16 2a 16 00 00 00 00 00 00 00 00 00 00 >bj*.*...........<000270 00 00 00 00 00 00 00 00 19 04 16 00 34 8e 00 00 >............4...<000280 48 7c 00 00 48 7c 00 00 4b 2c 00 00 00 00 00 00 >H|..H|..K,......<000290 19 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >................<0002a0 00 00 00 00 00 00 00 00 ff ff 0f 00 00 00 00 00 >................<0002b0 00 00 00 00 ff ff 0f 00 00 00 00 00 00 00 00 00 >................<0002c0 ff ff 0f 00 00 00 00 00 00 00 00 00 00 00 00 00 >................<0002d0 00 00 00 00 b7 00 00 00 00 00 3e 0e 00 00 00 00 >..........>.....<0002e0 00 00 3e 0e 00 00 a0 1b 00 00 00 00 00 00 a0 1b >..>.............<0002f0 00 00 00 00 00 00 a0 1b 00 00 00 00 00 00 a0 1b >................<000300 00 00 00 00 00 00 a0 1b 00 00 14 00 00 00 00 00 >................<000310 00 00 00 00 00 00 ff ff ff ff 00 00 00 00 b4 1b >................<000320 00 00 00 00 00 00 b4 1b 00 00 00 00 00 00 b4 1b >................<000330 00 00 38 00 00 00 ec 1b 00 00 84 00 00 00 70 1c >..8...........p.<000340 00 00 34 00 00 00 b4 1b 00 00 00 00 00 00 b8 28 >..4............(<000350 00 00 e6 01 00 00 a4 1c 00 00 00 00 00 00 a4 1c >................<000360 00 00 00 00 00 00 a4 1c 00 00 00 00 00 00 a4 1c >................<000370 00 00 00 00 00 00 a4 1c 00 00 00 00 00 00 d8 1d >................<000380 00 00 00 00 00 00 d8 1d 00 00 00 00 00 00 d8 1d >................<000390 00 00 00 00 00 00 43 28 00 00 02 00 00 00 45 28 
>......C(......E(<0003a0 00 00 00 00 00 00 45 28 00 00 00 00 00 00 45 28 >......E(......E(<*0003c0 00 00 00 00 00 00 45 28 00 00 00 00 00 00 9e 2a >......E(.......*<0003d0 00 00 a2 02 00 00 40 2d 00 00 da 00 00 00 45 28 >......@-......E(<0003e0 00 00 2d 00 00 00 00 00 00 00 00 00 00 00 00 00 >..-.............<0003f0 00 00 00 00 00 00 a0 1b 00 00 00 00 00 00 d8 1d >................<000400 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >................<000410 00 00 00 00 00 00 d8 1d 00 00 00 00 00 00 d8 1d >................<000420 | Depending on the encoding used to create the zip file, you might be able to prevent unwanted translations by temporarily setting the locale to "C": LC_ALL=C 7z x $archive (This helped for a zip created by IZArc on Win7, using two of your example filenames.) However, for the archive in the question, the "filename" field contains the CP1251 encoding of "ДКП.doc" ( 84 8a 8f 2e 64 6f 63 ). The "extra" field uses an Info-zip extension (see section 4.6.9 of the Zip Specification v 6.3.4 ) to store the UTF-8 filename. unzip knows about this header, and uses the UTF-8 name, ignoring the CP1251 one. 7z doesn't do anything with this "extra field", and only uses the CP1251 one. Depending on the current locale, it might create the file using that exact name (the raw bytes 84 8a 8f ), or worse, treat them as unicode points to be expanded to UTF-8 first ( c2 84 c2 8a c2 8f ). One option is to use external utilities to change the zip first:
#!/bin/bash
cp orig.zip renamed.zip
index=0
zipinfo -1 orig.zip | while read name ; do
    ziptool renamed.zip rename $index "$name"
    index=$((index+1))
done
ziptool is from libzip . zipinfo is distributed with Info-ZIP's UnZip , so you might as well have just used unzip . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/305886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180116/"
]
} |
305,927 | I'm quite new to Bash scripting. I have a "testscript", which I used as the basis for a more advanced / useful script: #!/bin/bashfiles=$1for a in $filesdo echo "$a"done When I call this without any quotes it just picks up one file in a directory: testscript *.txt But when I call it with quotes it works correctly and picks out all the text files: testscript '*.txt' What is going on here? | When you call a program testscript *.txt then your shell does the expansion and works out all the values. So it might, effectively call your program as testscript file1.txt file2.txt file3.txt file4.txt Now your program only looks at $1 and so only works on file1.txt . By quoting on the command line you are passing the literal string *.txt to the script, and that is what is stored in $1 . Your for loop then expands it. Normally you would use "$@" and not $1 in scripts like this. This is a "gotcha" for people coming from CMD scripting, where the command shell doesn't do globbing (as it's known) and always passes the literal string. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/305927",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186571/"
]
} |
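A sketch of the script rewritten with "$@" as the answer suggests, so it handles every expanded filename, including names with spaces:
#!/bin/bash
for a in "$@"
do
    echo "$a"
done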
305,949 | Ok, so I have one server with pfSense and many virtual servers. I'm using Nginx's upstream functionality to run multiple web servers on the same public IP. Of course I need to know the REAL user's IP, not that of the Nginx proxy, which is 192.168.2.2, but after switching to pfSense (I previously had a simple consumer router) the web servers can't see real user IPs. I have tried to change various settings in System / Advanced / Firewall & NAT, like: NAT Reflection mode for port forwards, Enable automatic outbound NAT for Reflection. Also in Firewall / NAT / Outbound I tried every mode; nothing helped, and every user still has the IP of my proxy server. So how do I disable masquerading, or how do I pass the real client IP? Update Ok, so it seems the problem is with subdomains, not domains. Situation now: If a client goes to domain.com - everything is fine, the backend server can see the real client IP. If a client goes to subdomain.domain.com - the backend server sees the proxy server IP. All domains' A records point to the external IP, then pfSense forwards port 80 to the proxy, then the proxy, depending on the domain, forwards to the corresponding internal server. I have 2 physical servers: 1 - a pfSense router, and another with VirtualBox running many VMs (in this example 4 VMs). Another interesting thing: when I try to reach the troublesome subdomain.domain1.com from inside the local network I get this: Again, no problems with domain1.com and domain2.com and so on... | When you call a program testscript *.txt then your shell does the expansion and works out all the values. So it might, effectively call your program as testscript file1.txt file2.txt file3.txt file4.txt Now your program only looks at $1 and so only works on file1.txt . By quoting on the command line you are passing the literal string *.txt to the script, and that is what is stored in $1 . Your for loop then expands it. Normally you would use "$@" and not $1 in scripts like this. This is a "gotcha" for people coming from CMD scripting, where the command shell doesn't do globbing (as it's known) and always passes the literal string. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/305949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186141/"
]
} |
305,950 | I have a Python script test.py that only contains: print('hi') . I want to run it in a screen so that the output of the screen is saved by script . I use the following command to run test.py in a screen , it works fine: screen -dm bash -c 'python test.py' However, I haven't managed yet to find a way to use script to save the output of the screen . How can I do it? I unsuccessfully tried: script -c "screen -dm bash -c 'python test.py'" output.txt : the output file output.txt doesn't contain hi , but only:
Script started on Fri 26 Aug 2016 01:04:59 PM EDT
Script done on Fri 26 Aug 2016 01:04:59 PM EDT
I use Ubuntu 14.04.4 LTS x64. Documentation: https://www.gnu.org/software/screen/manual/screen.html : -d -m: Start screen in detached mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts. http://linux.about.com/library/cmd/blcmdl1_sh.htm : -c string: If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0. script's man page: -c, --command run command rather than interactive shell | You should do it the other way round, run script inside screen : screen -dm bash -c 'script -c "python test.py" output.txt' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/305950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16704/"
]
} |
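GNU screen can also log a window's output by itself with the -L flag, which writes to screenlog.0 in the current directory; a sketch, assuming a reasonably recent screen:
screen -L -dm bash -c 'python test.py'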
306,007 | Right now I have a one-liner like this: curl -fsSL http://git.io/vvZMn | bash It downloads a script and passes it to bash as a stdin file. I would like to run this script with an additional argument print . Maybe something like this? curl -fsSL http://git.io/vvZMn | bash -- print But this doesn't work. | I believe what you are looking for is the -s option. With -s , you can pass arguments to the script. As a dummy example to illustrate this:
$ echo 'echo 1=$1' | bash -s -- Print
1=Print
Here, you can see that the script provided on stdin is given the positional parameter Print . Your script takes a -u UUID argument and that can be accommodated also:
$ echo 'echo arguments=$*' | bash -s -- -u UUID print
arguments=-u UUID print
So, in your case: curl -fsSL http://git.io/vvZMn | bash -s -- print Or, curl -fsSL http://git.io/vvZMn | bash -s -- -u UUID print As Stephen Harris pointed out, downloading a script and executing it, sight unseen, is a security concern. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/306007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90561/"
]
} |
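A quick local demonstration of how -s -- maps arguments to positional parameters, with no network involved:
$ printf 'echo first=$1 second=$2\n' | bash -s -- foo bar
first=foo second=bar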
306,057 | When I run the following make command in a build directory that is almost empty (the file in question is definitely not there) strace -f -e trace=execve,vfork,open,creat -s 1024 make <target> After it finishes, the file is totally there. So it must have been created by make or one of its child processes (or children of their children and so on). However, when I grep the strace log for either the name of the file or for creat I cannot find the system call responsible for the creation of this file. What am I missing? Are there other system calls I should be monitoring? EDIT: It turns out the mistake was in both my strace command and my grepping. All the answers were helpful, thank you, everyone, for your time. I actually failed to communicate that the file was in a subdir and I was grepping using the name of a file along with the subdir's name. But since strace does not provide info on the current working directory, this approach didn't work so well (I ended up stracing chdir and rename calls to get the desired effect). So PaulHaldane's first suggestion was right and to the point. As well as larsks's answer in which he actually guessed how the file got created. | Run strace without the -e option and see if that improves your result. There are a number of ways to create a file. Instead of open -ing it, it is highly likely that whatever tool produces that file first opens a temporary file, writes the data, and then renames the file on completion. With your current limit ( execve,vfork,open,creat ) you're not going to see that sort of behavior. For example, given this simple python script:
import os
import tempfile

fd = tempfile.NamedTemporaryFile(dir='.', delete=False)
fd.write('this is a test\n')
fd.close()
os.rename(fd.name, 'output')
Running strace with your arguments and then looking for output in the results yields nothing:
$ strace -e trace=execve,vfork,open,creat -o trace -f -s 80 python tmptest.py
$ grep output trace
$
But if I remove the -e filter:
$ strace -o trace -f -s 80 python tmptest.py
$ grep output trace
4523 rename("/home/lars/tmp/tmpZDwvPK", "output") = 0
In the comments on your question, Sato Katsura provides an example in which you will not see your target filename in the strace output, but I think you are unlikely to encounter that when running make as long as you start with a clean build environment. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128174/"
]
} |
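A broader net that would have caught the rename directly is strace's file syscall class, which traces every call that takes a filename; a sketch (the class is spelled %file on newer strace versions):
strace -f -e trace=file -o trace make <target>
grep -E 'rename|creat|open' trace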
306,111 | I am a little bit confused about what these operators do differently when used in bash (brackets, double brackets, parentheses and double parentheses): [[ , [ , ( , (( I have seen people use them in if statements like this:
if [[condition]]
if [condition]
if ((condition))
if (condition) | In Bourne-like shells, an if statement typically looks like
if command-list1
then command-list2
else command-list3
fi
The then clause is executed if the exit code of the command-list1 list of commands is zero. If the exit code is nonzero, then the else clause is executed. command-list1 can be simple or complex. It can, for example, be a sequence of one or more pipelines separated by one of the operators ; , & , && , || or newline. The if conditions shown below are just special cases of command-list1 : if [ condition ] [ is another name for the traditional test command. [ / test is a standard POSIX utility. All POSIX shells have it builtin (though that's not required by POSIX²). The test command sets an exit code and the if statement acts accordingly. Typical tests are whether a file exists or one number is equal to another. if [[ condition ]] This is a new upgraded variation on test ¹ from ksh that bash , zsh , yash , busybox sh also support. This [[ ... ]] construct also sets an exit code and the if statement acts accordingly. Among its extended features, it can test whether a string matches a wildcard pattern (not in busybox sh ). if ((condition)) Another ksh extension that bash and zsh also support. This performs arithmetic. As the result of the arithmetic, an exit code is set and the if statement acts accordingly. It returns an exit code of zero (true) if the result of the arithmetic calculation is nonzero. Like [[...]] , this form is not POSIX and therefore not portable. if (command) This runs command in a subshell. When command completes, it sets an exit code and the if statement acts accordingly. A typical reason for using a subshell like this is to limit side-effects of command if command required variable assignments or other changes to the shell's environment. Such changes do not remain after the subshell completes. if command command is executed and the if statement acts according to its exit code. ¹ though not really a command but a special shell construct with its own separate syntax from that of normal command, and varying significantly between shell implementations ² POSIX does require that there be a standalone test and [ utilities on the system however, though in the case of [ , several Linux distributions have been known to be missing it. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/306111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186194/"
]
} |
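A compact sketch contrasting the test forms from the answer (bash is assumed for the [[ ]] and (( )) lines):
f=report.txt; n=7
if [ -e "$f" ]; then echo "file exists (POSIX test)"; fi
if [[ $f == *.txt ]]; then echo "pattern match (ksh/bash/zsh only)"; fi
if (( n > 5 )); then echo "arithmetic is true (ksh/bash/zsh only)"; fi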
306,147 | Some time ago I used to mount shared folders from my Windows computer using the command: sudo mount.cifs //computer/folder /mnt -o username=user However, now it fails and yields: mount error : cifs filesystem not supported by the system mount error(19): No such device Any idea what might have happened in the meanwhile that prevents it from working? The package cifs-utils is installed. zgrep -i cifs /proc/config.gz returns:
CONFIG_CIFS=m
CONFIG_CIFS_STATS=y
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_WEAK_PW_HASH=y
...
All yes except for STATS2 and DEBUG, basically | CONFIG_CIFS=m means the CIFS functionality is compiled into a kernel module. If the cifs module isn't loaded after a reboot, you can append a line cifs to the file /etc/modules. The file lists modules which will be loaded automatically at boot time. To check if the module is already loaded type: lsmod | grep cifs If you don't see 'cifs' in the output, it isn't loaded. To load it manually without a reboot: modprobe cifs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149443/"
]
} |
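An idempotent way to apply the answer's /etc/modules suggestion; a sketch (on systemd-based distros a file such as /etc/modules-load.d/cifs.conf serves the same purpose):
grep -qx cifs /etc/modules || echo cifs | sudo tee -a /etc/modules
sudo modprobe cifs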
306,189 | Is there a reason why most man pages don't include a few common examples? They usually explain all the possible options, but that makes it even harder for a beginner to understand how it's "usually" used. | That depends on the man pages... Traditionally, they have included a section with examples - but for some reason that is usually missing from the man pages under Linux (and I assume other using GNU commands - which are most these days). On for example Solaris on the other hand, almost every man page include the Example section, often with several examples. If I were to guess, FSF/GNU has for a long time discouraged use of man pages and prefer users to use info for documentation instead. info pages tend to be more comprehensive than man pages, and usually do include examples. info pages are also more "topical" - i.e. related commands (eg. commands for finding files) can often be found together. Another reason may be that GNU and its man pages are used on many different operating systems which may differ from each other (there are after all lots of differences just between different Linux distros). The intention may have been that the publisher added examples relevant to the particular OS/distro - which obviously is rarely done. I would also add that man pages were never intended to "teach beginners". UNIX was developed by computer experts (old term "hackers") and intended to be used by computer experts. The man pages were thus not made to teach a novice, but to quickly assist a computer expert who needed a reminder for some obscure option or strange file format - and this is reflected in how a man page is sectioned. man -pages are thus intended as A quick reference to refresh your memory; showing you how the command should be called, and listing available options. A deep and thorough - and usually very technical - description of all aspects of the command. It's written by computer experts, for fellow computer experts. List of environment variables and files (i.e. config files) used by the command. Reference to other documentation (eg. books), and other man pages - eg. for the format of config files and related/similar commands. That said, I very much agree with you that man pages ought to have examples, since they can explain the usage better than wading through the man page itself. Too bad examples generally aren't available on Linux man pages... Sample of the Example part of a Solaris man page - zfs(1M): (...)EXAMPLES Example 1 Creating a ZFS File System Hierarchy The following commands create a filesystem named pool/home and a filesystem named pool/home/bob. The mount point /export/home is set for the parent filesystem, and is automatically inherited by the child filesystem. # zfs create pool/home # zfs set mountpoint=/export/home pool/home # zfs create pool/home/bob Example 2 Creating a ZFS Snapshot The following command creates a snapshot named yesterday. This snapshot is mounted on demand in the .zfs/snapshot directory at the root of the pool/home/bob file system. # zfs snapshot pool/home/bob@yesterday Example 3 Creating and Destroying Multiple Snapshots The following command creates snapshots named yesterday of pool/home and all of its descendent file systems. Each snapshot is mounted on demand in the .zfs/snapshot directory at the root of its file system. The second command destroys the newly created snapshots. 
# zfs snapshot -r pool/home@yesterday # zfs destroy -r pool/home@yesterday Example 4 Disabling and Enabling File System Compression The following command disables the compression property for(...) This particular man page comes with 16(!) such examples... Kudos to Solaris! (And I'll admit I myself have mostly followed these examples, instead of reading the whole man page for this command...) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/306189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186922/"
]
} |
306,195 | I'm trying to determine if port 25 is available on a server. I tried using nc and telnet to connect, but each of those failed to connect. Is there any other test I can do to find out if there is anything listening on that port? | If you have access to the system and you want to check whether it's blocked or open, you can use netstat -tuplen | grep 25 to see if the service is on and is listening to the IP address or not. You can also try to use iptables -nL | grep <port number> to see if there is any rule set by your firewall. If you saw nothing wrong, finally you can do what you have already done by using telnet yourTarget 25 or nc yourTarget 25 , and if you get a message saying that the connection is refused, it might be blocked and filtered by your ISP since most ISPs block the default SMTP port 25. In that case you can change the default port - if you need it - to an alternative. The other option you have, is to use Nmap ↴ You can use nmap -sT localhost to determine which ports are listening for TCP connections from the network. To check for UDP ports, you should use -sU option. To check for port 25, you can easily use nmap -p25 localhost . And if you do not have access to the system, you can use nmap -sS -p25 yourTargetIP . N.B. Nmap is a super-powerful tool, but you should know how to use it. For instance, sometimes you might be in need of using -Pn option for a pingless scan. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186925/"
]
} |
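bash's /dev/tcp pseudo-device gives a quick connectivity probe without extra tools; a sketch (this is a bash-only feature, and the hostname is a placeholder):
timeout 5 bash -c 'exec 3<>/dev/tcp/yourTarget/25' && echo open || echo 'closed or filtered'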
306,229 | I have Linux installed on a Dell XPS 9343 with a Samsung PM851 SSD. I recently read that many SSDs don't support TRIM operations, so I'd like to check whether the discard option actually works on my system. As a first step, I tried to simply run sudo fstrim --verbose --all and it reported 41GB trimmed ; this worries me because I was expecting a very small value, since I have continuous TRIM enabled; in fact, if I re-run that command I get 0 bytes trimmed . Is that normal, even though I have the discard option in /etc/fstab? PS: I tried to follow the proposed solution here but it gets stuck on the second command due to trim.test: FIBMAP unsupported . PS2: it's a plain SSD (no LVM or RAID) with GPT and an EXT4 filesystem | Try lsblk -D TRIM/discard is available if the DISC-MAX column is not 0B. Example (SSD/trim available):
[root@foo bar]# lsblk -D
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda 0 4K 1G 0
Example (HDD/trim not available):
[root@foo bar]# lsblk -D
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda 0 0B 0B 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186963/"
]
} |
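For SATA drives, the drive's own capability report can be checked too; a sketch assuming hdparm is installed and /dev/sda is the SSD:
sudo hdparm -I /dev/sda | grep -i trim
# prints something like "Data Set Management TRIM supported" if the drive supports it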
306,277 | I have a series of ten images. I would like to have them stretched to fit 5in squares centered in a single A4 PDF, one image per page. I would need something working on Mac (in case of some local command line subtleties). | With ImageMagick, at 300 DPI: convert -page 2480x3508 -extent 2480x3508 -density 300 -gravity Center *.png out.pdf The "magic" numbers 2480x3508 are the dimensions in pixels of an A4 page at 300 DPI. See here for the dimensions at other resolutions if you need something different than 300 DPI and you can't be bothered to do the scaling yourself. Add -background black to get the images on black. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31322/"
]
} |
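To get the 5-inch squares the question asks for rather than full-page images, the squares would be 1500x1500 px at 300 DPI; a sketch (the '!' forces the stretch, and the white background is an assumption):
convert *.png -density 300 -resize '1500x1500!' -gravity center -background white -extent 2480x3508 out.pdf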
306,293 | I keep reading/hearing that /etc is for system configuration files. Can someone explain/give me some intuition for why these scripts that start/stop/restart various programs are usually in /etc instead of /var or /usr or something similar? | Early on (both historically, and in the process of booting...), /etc is a part of / (the first mounted filesystem), while /usr was not (until disks got large). /var holds temporary data, while these scripts are not temporary. It's not that simple, but it started that way and there's little reason to rework the entire directory layout. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/306293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162496/"
]
} |
306,315 | How can ^ be replaced with space in Unix? Input: ab^cd^ef Output: ab cd ef I have tried sub(/^/, " & ", str) but output is same as input. | That's the job for tr :
$ str=ab^cd^ef
$ printf '%s\n' "$str" | tr '^' ' '
ab cd ef
In bash , ksh93 , mksh , zsh : printf '%s\n' "${str//^/ }" In zsh : print -rl -- "${str:gs/^/ /}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187018/"
]
} |
306,336 | I want to run a service file on the last day of every month. How to configure this? I am referring the Systemd.timers but not able to find the right approach. Additionally will it be possible to add another configuration to run on the last working day of the month also? (days other than sun and sat) | As of version 233, systemd supports using "~" in its calendar syntax to specify dates relative to the end of the month. OnCalendar=*-02~03 means the third last day in February (the 26th or 27th, depending on whether or not it's a leap year) Mon *-05~07/1 and Mon *-05~01..07 are synonyms for the last Monday in May. https://github.com/systemd/systemd/blob/v233/NEWS#L174 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64660/"
]
} |
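A minimal timer unit using that syntax might look like this sketch (run a matching .service on the last day of every month; the exact OnCalendar spelling can be verified with systemd-analyze calendar):
[Timer]
OnCalendar=*-*~01 00:00:00
Persistent=true

[Install]
WantedBy=timers.target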
306,364 | If I run the below command, I am getting some strange IP addresses.
hostname -i
198.105.244.11 198.105.254.11
My hosts file is in the default configuration; below are the contents of my /etc/hosts file: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 The actual IP of this machine is 192.168.2.31. I hoped that once I added the below entry in the /etc/hosts file, 192.168.2.31 myhost I might get the expected output: hostname -i 192.168.2.31 But why is it showing a different range of IPs when running hostname -i ? Update:
ip r
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.31 169.254.0.0/16 dev eth0 scope link metric 1002 default via 192.168.2.1 dev eth0
ifconfig eth0
eth0 Link encap:Ethernet HWaddr ##removed## inet addr:192.168.2.31 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:feca:24c2/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2153703 errors:0 dropped:0 overruns:0 frame:0 TX packets:612859 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:189727756 (180.9 MiB) TX bytes:761146814 (725.8 MiB) (Using Cent 6.4) | As of version 233, systemd supports using "~" in its calendar syntax to specify dates relative to the end of the month. OnCalendar=*-02~03 means the third last day in February (the 26th or 27th, depending on whether or not it's a leap year) Mon *-05~07/1 and Mon *-05~01..07 are synonyms for the last Monday in May. https://github.com/systemd/systemd/blob/v233/NEWS#L174 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81640/"
]
} |
306,428 | A former coworker configured screen for me and I just found out I could change this, but I don't know what the setting is supposed to be and the man page is a little vague. I've got this in my .screenrc:
# no annoying audible bell, please
vbell on
I don't have a vbell msg defined, but the man page says the default vbell msg is "Wuff Wuff"; I've never seen that particular message. What I do see is a big annoying full screen flash every time I hit tab when a file doesn't exist (and this is starting to get on my nerves). It's better than a ding, but I'd rather have no indication than an annoying indication. So, is there any way to completely disable the vbell without de facto enabling the real bell? (Note, if you know this is just a PuTTY question, I can close this and ask on SuperUser; I saw the "bell" screen in the PuTTY setup which had some stuff that was already disabled for flashing - and it doesn't flash outside of screen) | You can do what you want in a terminal-independent way using just your .screenrc : vbell onvbell_msg ''termcapinfo * vb=: The settings are: first line (you already did this) second line cancels the Wuff, Wuff! third line sets the flash to an empty string Note that the trailing colon ( : ) is needed for the termcap syntax used by screen . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/306428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4330/"
]
} |
306,438 | Say a file called abc exists in the current directory and it has some text in it. When you execute the command: cat abc > abc Why do the contents of the file abc disappear? Why does the command delete the text in it, making it an empty file? | Because of the order in which things are done. When you do: cat abc > abc > is the output redirection operator; when the shell sees this it opens the file in truncation mode using the O_TRUNC flag with open(2) i.e. open("abc", O_TRUNC) , so whatever was there in the file will be gone. Note that this redirection is done first by the shell, before the cat command runs. So when the command cat abc executes, the file abc is already truncated, hence cat will find the file empty. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/306438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187128/"
]
} |
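Because the truncation happens before cat runs, the usual workarounds are a temporary file or sponge from moreutils; a sketch (tr stands in for whatever transformation you actually want):
cat abc | tr a-z A-Z > abc.tmp && mv abc.tmp abc
# or, if moreutils is installed (sponge soaks up all input before writing):
cat abc | tr a-z A-Z | sponge abc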
306,442 | How to check the version of a XFS filesystem on a system, whether it is V5 or later? | Since version 3.15 , the kernel tells you the version of XFS used in each filesystem as it mounts it; dmesg | grep XFS should give you something like [1578018.463269] XFS (loop0): Mounting V5 Filesystem Instead of loop0 on your system you'll get the underlying device, and V5 will be replaced by whatever version your filesystem uses. Older kernels officially supported XFS version 4 filesystems, but could mount version 5 filesystems (since mid 2013); for the latter, the kernel would print Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled! when the filesystem was mounted. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/306442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128152/"
]
} |
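On a mounted filesystem, xfs_info offers another hint: V5 is the CRC-enabled format, so crc=1 implies V5 and crc=0 implies V4 (old xfsprogs that predate V5 may not print crc= at all); a sketch with a placeholder mount point:
xfs_info /mnt/data | grep -o 'crc=[01]'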
306,463 | Mozilla just released a new tool to check your website configuration: observatory.mozilla.org But the scan is complaining about Cookies (-10 points): Session cookie set without the Secure flag ... Unfortunately the service running behind my nginx can only set the secure header if the SSL terminates there directly and not when SSL terminates on the nginx. Thus the "Secure" flag is not set on the cookies. Is it possible to append the "secure" flag to the cookies somehow using nginx? Modifying the location/path seems to be possible. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path | I know two ways to sorta do this, neither of them great. The first is to just abuse proxy_cookie_path like this: proxy_cookie_path / "/; secure"; The second is to use the more_set_headers directive from the Headers More module like this: more_set_headers 'Set-Cookie: $sent_http_set_cookie; secure'; Both of these can introduce problems because they blindly add the items. For example if the upstream sets the secure flag you will wind up sending the client a duplicate like this: Set-Cookie: foo=bar; secure; secure; and in the second case if the upstream app does not set a cookie nginx will send this to the browser: Set-Cookie; secure; This is doubleplusungood, of course. I think this problem needs to be fixed as many people have asked about it. In my opinion a directive is needed, something like this:
proxy_cookie_set_flags * HttpOnly;
proxy_cookie_set_flags authentication secure HttpOnly;
but alas, this does not currently exist :( | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/306463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154630/"
]
} |
306,467 | During my workflow I have created this file:
AAGGAGGGAGCTGCATGGAACCTGTGGATATATACACACAAGGTTAACCTCTGTCCTGTAAA 8
GGAGTTCAGATGTGTGCTCTTCCGATCTGGAGGTCTCTGCTGGGGCCACCCTGTCCTCTCAG 30
GAGAGAGGAAAGGAAGCGATTGCAGAACTTTCCACAAGGCTTTAGATTCCCCTGTCACAGAG 15
GGAGGAGAAAGAATCAACTTTATAGCATCAGCCCCTTGTTTATTTTAAGTTCAGGGTTTAAG 13
GGGAGAACATTTCCCTCCTTGTCCTCTCCTATCTCACTTACTACATTCCCACTGGTCACTGT 7
GGGACATTTGTGATTACATGGTTGCAGTATTCTTTTTGTTCTTAGTCAGACTGTATAATTGG 4
I would like to select, from each sequence in the first column, the number of leading characters given in the second column. Like the first 8 characters of the first row, the first 30 characters of the second row, etc. As an example, the output for the first two rows would be something like this:
AAGGAGGG
GGAGTTCAGATGTGTGCTCTTCCGATCTGG
Any idea would be really appreciated. | With awk : awk '{ $0 = substr($1, 0, $2) } 1' file.txt With GNU sed :
sed -r 's/.* ([0-9]+).*/s!^(.{\1}).*!\\1!/' file.txt | \
  cat -n | \
  sed -r -f - file.txt
(GNU sed because it can read script files from stdin ). With perl : perl -lpe 's/.*?([ACTG]+)\s+(\d+).*/ substr($1, 0, $2)/e' file.txt Another way with perl : perl -lape '$_ = substr($F[0], 0, $F[1])' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136099/"
]
} |
306,483 | From the following input:
serial0/0
SSL disabled
eth0/0
SSL enabled
SSL only
eth0/1
SSL disabled
bgroup0
SSL enabled
eth0/2
SSL disabled
eth0/3
SSL disabled
eth0/4
SSL disabled
bgroup1
SSL disabled
bgroup2
SSL disabled
bgroup3
SSL disabled
vlan1
SSL enabled
null
SSL disabled
I am trying to obtain a colon-separated list in which, if the next line of the input begins with "SSL", it gets appended to the previous line, so the output would be like so:
serial0/0:SSL disabled
eth0/0:SSL enabled:SSL only
eth0/1:SSL disabled
bgroup0:SSL enabled
eth0/2:SSL disabled
eth0/3:SSL disabled
eth0/4:SSL disabled
bgroup1:SSL disabled
bgroup2:SSL disabled
bgroup3:SSL disabled
vlan1:SSL enabled
null:SSL disabled
As you can see in the second line of this ideal output, the case when more than one line starts with "SSL" should be considered. Based on this classic sed guide, I came up with the following sed script:
sed -r '/^SSL/! {
:again
N
s/(.*)\n(SSL.*)/\1:\2/
t again
}'
Which makes sense to me but returns the following output:
serial0/0:SSL disabled
eth0/0
SSL enabled
SSL only
eth0/1:SSL disabled
bgroup0
SSL enabled
eth0/2:SSL disabled
eth0/3
SSL disabled
eth0/4:SSL disabled
bgroup1
SSL disabled
bgroup2:SSL disabled
bgroup3
SSL disabled
vlan1:SSL enabled
null
SSL disabled
Any idea of what I could be doing wrong? | A much easier solution: sed -z 's/\nSSL/:SSL/g' The -z says to use NUL as a line separator - effectively making the entire stream appear as one line to sed . It then simply replaces a \nSSL sequence with a :SSL sequence, effectively combining lines the way you wish. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/177388/"
]
} |
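For systems whose sed lacks -z, a plain awk sketch does the same join without slurping the file as one line:
awk '{ if (/^SSL/) printf ":%s", $0; else { if (NR > 1) print ""; printf "%s", $0 } } END { print "" }' file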
306,542 | There is one text file (test.txt):
1970-01-01
1971-01-01
1972-01-01
1973-01-01
1974-01-01
....
1993-01-01
1994-01-01
1995-01-01
1996-01-01
...
2015-01-01
2016-01-01
I would like to delete the lines from 1970 through 1995. Below are the sed commands that I made:
sed -i '/197[0-9]/d' test.txt
sed -i '/198[0-9]/d' test.txt
sed -i '/199[0-5]/d' test.txt
Is there a way to combine the three commands into one sed command? | Since your file appears to be in sorted order, you can just delete from the beginning until the end eg sed -i '1,/^1995/d' test.txt If the date starts before 1970, then sed -i '/^1970/,/^1995/d' test.txt If your file isn't in order then there's no easy regex (there's a long boring one) that'll match all the lines, but you can specify multiple sed -i -e '/^19[78][0-9]/d' -e '/^199[0-5]/d' test.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306542",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86643/"
]
} |
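Since each line starts with a four-digit year, a numeric comparison in awk is another option; a sketch assuming that format holds for every line:
awk -F- '$1 < 1970 || $1 > 1995' test.txt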
306,576 | I have a file that contains epoch dates which I need to convert to human-readable form. I already know how to do the date conversion, eg: [server01 ~]$ date -d@1472200700 Fri 26 Aug 09:38:20 BST 2016 ..but I'm struggling to figure out how to get sed to walk through the file and convert all the entries. The file format looks like this:
#1472047795
ll /data/holding/email
#1472047906
cat /etc/rsyslog.conf
#1472048038
ll /data/holding/web | Assuming a consistent file format, with bash you can read the file line by line, test if it's in the given format and then do the conversion: while IFS= read -r i; do [[ $i =~ ^#([0-9]{10})$ ]] && \ date -d@"${BASH_REMATCH[1]}"; done <file.txt BASH_REMATCH is an array whose first element is the first captured group in Regex matching, =~ , in this case the epoch. If you want to keep the file structure: while IFS= read -r i; do if [[ $i =~ ^#([0-9]{10})$ ]]; then printf '#%s\n' \ "$(date -d@"${BASH_REMATCH[1]}")"; else printf '%s\n' "$i"; fi; done <file.txt this will output the modified contents to STDOUT; to save it in a file e.g. out.txt : while ...; do ...; done >out.txt Now if you wish, you can replace the original file: mv out.txt file.txt Example:
$ cat file.txt
#1472047795
ll /data/holding/email
#1472047906
cat /etc/rsyslog.conf
#1472048038
ll /data/holding/web
$ while IFS= read -r i; do [[ $i =~ ^#([0-9]{10})$ ]] && date -d@"${BASH_REMATCH[1]}"; done <file.txt
Wed Aug 24 20:09:55 BDT 2016
Wed Aug 24 20:11:46 BDT 2016
Wed Aug 24 20:13:58 BDT 2016
$ while IFS= read -r i; do if [[ $i =~ ^#([0-9]{10})$ ]]; then printf '#%s\n' "$(date -d@"${BASH_REMATCH[1]}")"; else printf '%s\n' "$i"; fi; done <file.txt
#Wed Aug 24 20:09:55 BDT 2016
ll /data/holding/email
#Wed Aug 24 20:11:46 BDT 2016
cat /etc/rsyslog.conf
#Wed Aug 24 20:13:58 BDT 2016
ll /data/holding/web | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187246/"
]
} |
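GNU awk can do the same conversion in one pass with its built-in strftime; a sketch (gawk-specific, and the output format only approximates date's default):
gawk '/^#[0-9]+$/ { print "#" strftime("%a %b %e %T %Z %Y", substr($0, 2)); next } 1' file.txt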
306,617 | I have the following type of output from a find and grep pipe:
./Columbia/815425_0001104659-11-049107.txt: CENTRAL INDEX KEY: 0000815425
./Columbia/815425_0001104659-12-060231.txt: CENTRAL INDEX KEY: 0000815425
./Columbia/815425_0001104659-13-066298.txt: CENTRAL INDEX KEY: 0000815425
./Dimensional Advisors/355437_0001137439-04-000108.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-05-000205.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-06-000306.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-08-000364.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-09-000076.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-12-000295.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001140361-10-035592.txt: CENTRAL INDEX KEY: 0000355437
I would like to obtain
Columbia 0000815425
Columbia 0000815425
Columbia 0000815425
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
I was thinking sed and grep , but I am stuck with how to combine everything: matching the first part: (how do I match just before the / ?)
erik Funds$ cat myoutput | egrep -o "[A-Z].*/"
Columbia/
Columbia/
Columbia/
Dimensional Advisors/
Dimensional Advisors/
Dimensional Advisors/
Dimensional Advisors/
Dimensional Advisors/
Dimensional Advisors/
Dimensional Advisors/
and the last 10 digit numbers:
erik Funds$ cat myoutput | egrep -o "[0-9]{10}$"
0000815425
0000815425
0000815425
0000355437
0000355437
0000355437
0000355437
0000355437
0000355437
0000355437 | awk with / as field separator, and then printing field 2 and field 3 (with necessary zero padding): ... | awk -F/ '{ printf("%s %010d\n", $2, $3) }' Example:
$ cat file.txt
./Columbia/815425_0001104659-11-049107.txt: CENTRAL INDEX KEY: 0000815425
./Columbia/815425_0001104659-12-060231.txt: CENTRAL INDEX KEY: 0000815425
./Columbia/815425_0001104659-13-066298.txt: CENTRAL INDEX KEY: 0000815425
./Dimensional Advisors/355437_0001137439-04-000108.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-05-000205.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-06-000306.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-08-000364.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-09-000076.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001137439-12-000295.txt: CENTRAL INDEX KEY: 0000355437
./Dimensional Advisors/355437_0001140361-10-035592.txt: CENTRAL INDEX KEY: 0000355437
$ awk -F/ '{ printf("%s %010d\n", $2, $3) }' file.txt
Columbia 0000815425
Columbia 0000815425
Columbia 0000815425
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437
Dimensional Advisors 0000355437 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143576/"
]
} |
306,673 | Run a job in the background $ command & When it's done, the terminal prints [n]+ command or [n]- command So sometimes it's a plus and other times it's a minus following [n] . What does plus/minus mean? | They are to distinguish between current and previous job; the last job and the second last job for more than two jobs, with + for the last and - for the second last one. From man bash : The previous job may be referenced using %- . If there is only a single job, %+ and %- can both be used to refer to that job. In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a + , and the previous job with a - . Example:
$ sleep 5 &
[1] 21795
$ sleep 5 &
[2] 21796
$ sleep 5 &
[3] 21797
$ sleep 5 &
[4] 21798
$ jobs
[1] Running sleep 5 &
[2] Running sleep 5 &
[3]- Running sleep 5 &
[4]+ Running sleep 5 &
$
[1] Done sleep 5
[2] Done sleep 5
[3]- Done sleep 5
[4]+ Done sleep 5 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/306673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
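The %+ and %- jobspecs from the quoted man page text can be used directly wherever a job is expected; a sketch:
$ sleep 100 & sleep 200 &
$ kill %-    # signals sleep 100, the previous job
$ kill %+    # signals sleep 200, the current job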
306,716 | In .bashrc case "$TERM" inxterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;;*) ;;esac I understand ${debian_chroot:+($debian_chroot)}\u@\h: \w , but not \[\e]0; . What does it make? | The \e]0; is an escape sequence; \e is replaced with ASCII 27 (ESC), so the terminal receives the 4 characters ESC ] 0 ; tells xterm to set icon and title bar, that ends in BEL ( \a ). So the sequence \e]0;STUFFGOESHERE\a will set the title of the terminal to STUFFGOESHERE. In your example it'll set the title to user/host/path. FWIW, xterm escape sequences are documented at: https://www.x.org/docs/xterm/ctlseqs.pdf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/306716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
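The same sequence can be tried by hand in any xterm-compatible terminal; a sketch:
printf '\e]0;%s\a' "my new title"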
306,890 | sqlite3 stores command history in .sqlite_history , which is by default created in: $HOME/.sqlite_history How can I change this location to somewhere else? This is possible for example with mysql , where I can define the environment variable MYSQL_HISTFILE=/path/to/whatever/file But I could not find any corresponding environment variable for sqlite3 export SQLITE_HISTFILE=/tmp/history has no effect. I found a post where somebody asks the same question, but no useful answers are given. | Since Version 3.25.3 you can simply set SQLITE_HISTORY to change the history filename, like mattmc3 wrote. In earlier versions, it was hardcoded in line 5576 in shell.c (version 3.14.1): sqlite3_snprintf(nHistory, zHistory,"%s/.sqlite_history", zHome); So, to change it, one option among the others mentioned in this thread was to edit the source and recompile. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
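Usage on 3.25.3 or later is then just an environment variable; a sketch with an assumed path:
export SQLITE_HISTORY="$HOME/.local/share/sqlite_history"
sqlite3 mydb.db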
306,907 | I am trying to run a script that will check internet speed on a Linux machine with a GUI, bring the terminal window down and, when the query is finished, bring the window back up with the answer. As for now - I can take the window down, but not up.
#!/bin/bash
xdotool getactivewindow windowminimize
#xdotool set_window --name speedy
#xdotool set_window --icon-name speedy
speedtest-cli --simple
if [ $? -eq 0 ]
then
    #xdotool windowactivate speedy
    xdotool windowfocus #speedy
    xdotool key "F11"
fi
exec $SHELL | xdotool needs to know the window ID for all its actions. You correctly used getactivewindow to obtain the window for the windowminimize command, but you also need to do it for setting its name. So put xdotool getactivewindow set_window --name speedy before the minimize line. Then you can use search to find it for activating later. xdotool search --name speedy windowactivate See the manpage sections Window stack and Command chaining for an explanation of how this all works. The whole script:
#!/bin/bash
# rename the window for finding it again later
xdotool getactivewindow set_window --name speedy
xdotool search --name speedy windowminimize
speedtest-cli --simple
if [ $? -eq 0 ]
then
    xdotool search --name speedy windowactivate
    xdotool key "F11"
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181400/"
]
} |
306,913 | Ext4 has a maximum filesystem size of 1EB and maximum filesize of 16TB. However is it possible to make the maximum filesize smaller at filesystem level ? For example I wouldn't like to allow to create files greater than a specified value (e.g. 1MB). How can this be achieved on ext4 ? If not ext4 then any other modern filesystem has support for such feature ? | ext4 has a max_dir_size_kb mount option to limit the size of directories , but no similar option for regular files. A process however can be prevented from creating a file bigger than a limit using limits as set by setrlimit() or the ulimit or limit builtin of some shells. Most systems will also let you set those limits system-wide, per user. When a process exceeds that limit, it receives a SIGXFSZ signal. And when it ignores that signal, the operation that would have caused that file size to be exceeded (like a write() or truncate() system call) fails with a EFBIG error. To move that limit to the file system, one trick you could do is use a fuse (file system in user space) file system, where the user space handler is started with that limit set. bindfs is a good candidate for that. If you run bindfs dir dir (that is bind dir over itself), with bindfs started as ( zsh syntax): (limit filesize 1M; trap '' XFSZ; bindfs dir dir) Then any attempt to create a file bigger than 1M in that dir will fail. bindfs forwards the EFBIG error to the process writing the file. Note that that limit only applies to regular files, that won't stop directories to grow past that limit (for instance by creating a large number of files in them). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187198/"
]
} |
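Without fuse, the per-process limit alone can be applied when launching a single program; a sketch in bash, where ulimit -f counts 1024-byte blocks (your_program is a placeholder):
( ulimit -f 1024; exec your_program )   # files it writes are capped at ~1 MiB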
306,930 | I'm trying to understand output redirection, but I'm struggling. I don't think I really get the differences.
1 > file   # - redirect stdout to file (override mode)
1 >> file  # - redirect stdout to file (append mode)
2 > 1      # 1) would that also redirect stderr to stdout, replacing stdout?
2 >> 1     # 2) would this redirect stderr to stdout (appending to it,
           #    i.e. having both sent to stdout?)
1>&9       # - duplicates stdout to file descriptor (pointer) 9
           # 3) what's the difference of 2>&1 to a 2 >> 1? Does >> only run at the end
           #    of output?
echo foo > file  # - redirects output to file (override mode)
>file 2>&1 # - duplicates stderr to stdout, then redirects to file in override mode
           # 4) why is this giving me stdout, too, when the syntax is 2>&1,
           #    i.e. duplicate stderr into stdout - not merge 2 into 1?
I'm assuming the ampersand & means duplicate, as opposed to redirect. But what's the difference between redirecting a to b (will a remain unchanged?) and duplicating a to b (will a and b be the same?)? 2>&1 effectively seems to redirect and merge 2 into 1 , i.e. what would have gone into 2 is now in 1 , but only in 1 ... why? I'm so confused... | First, anything after > or >> is a file name; so > 1 writes to a file named 1 . Numbers in the other forms given in your example are file descriptors. By default, programs start with file descriptors 0 (standard input), 1 (standard output) and 2 (standard error) connected; when you start a program from an interactive shell, these are connected to the terminal's input and output (you can see these by running ls -l /proc/$$/fd on Linux). Specifying a number before > , >> or >& specifies the file descriptor you wish to manipulate; the number has to be right in front of the > symbol. Thus echo Example 2> stderr will print "Example" and create an empty stderr file (which would contain anything sent to the standard error). You can think of file descriptors as entries in a table, pointing to files; thus by default: 0 points to /dev/tty 1 points to /dev/tty 2 points to /dev/tty Specifying 1> file (or simply > file ) updates file descriptor 1 to point to file , opened in truncating mode (so its contents are replaced). Specifying 2> 1 updates file descriptor 2 to point to a file named 1 , opened in truncating mode. Duplicating file descriptors using >& (or &> , which is the preferred form) simply updates one file descriptor to point to whatever the other is pointing at. In your last example, > file updates file descriptor 1: 0 points to /dev/tty 1 points to file 2 points to /dev/tty and then 2>&1 updates file descriptor 2: 0 points to /dev/tty 1 points to file 2 points to file (order is significant: > file 2>&1 produces the above, 2>&1 > file would only end up redirecting file descriptor 1). The 1>&9 form only works if file descriptor 9 has been opened, e.g. by copying file descriptor 1 to it ( 9>&1 ) or by opening a file ( 9> file ). This type of construct can be useful to keep track of the original contents of file descriptors when redirecting; thus in a script you could copy 1 and 2 safely away, redirect standard output and error for whatever purpose you need, and then restore them... The Bash manual has all the details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/306930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95276/"
]
} |
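To see the order-of-redirection point from the answer above in action (out.txt is a placeholder name):
ls /nonexistent > out.txt 2>&1   # error message lands in out.txt
ls /nonexistent 2>&1 > out.txt   # fd 2 copies the terminal *before* fd 1 moves, so the error stays on screen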
307,022 | I am sending files from one server to another using scp. I need to rename the files after sending them. So I use the following command for each file:
scp original-hc.db user@host:/dir/original-hc_1.db
I want to send all files using a single command while renaming the files, like:
scp *.db user@host:/dir/(actual file name before extension)_1.db | This can easily be achieved with a loop:
for f in *.db
do
  scp "$f" user@host:/dir/"${f%.db}"_1.db
done
The ${f%.db} construct strips off the .db suffix from $f. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/151307/"
]
} |
307,026 | I ran a script that creates an oracle tablespace, but I forgot to change the path in the script. The alter tablespace script contained data2. Normally, if you want to rename a datafile, you take the tablespace offline and rename that file. I get the error:
mv <oracle path>data2.dbf data2.dbf
-bash: syntax error near unexpected token `newline'
So, how do I rename a file whose name contains the characters < and >? | You need to escape the space, less than and greater than characters using a backslash:
mv \<oracle\ path\>data2.dbf data2.dbf
Should work. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187557/"
]
} |
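Single quotes are an equivalent alternative to backslash-escaping, and a little easier to read:
mv '<oracle path>data2.dbf' data2.dbf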
307,046 | Is there any tool available to sync files between two or more Linux servers immediately after a file is written to disk? The rsync command does not suit me here, because if I run rsync from cron, the minimum interval I can set is 1 minute, but I need this on a real-time basis. | Haven't used it myself but read about it recently. There is a daemon called lsyncd, which I presume does exactly what you need. Read more about it in the lsyncd documentation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/307046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184773/"
]
} |
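A minimal sketch of running lsyncd from the command line — paths and host name are placeholders, and the exact shorthand flags can differ between lsyncd versions, so check lsyncd --help:
lsyncd -rsyncssh /srv/data backup.example.com /srv/data
# watches /srv/data with inotify and pushes changes to the remote over rsync+ssh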
307,100 | I found a really great tutorial explaining some practical sed examples. The last one (number 10) can be seen below:
$ sed -e 's/<[^>]*>//g'
This <b> is </b> an <i>example</i>.
This is an example.
Can someone please help me through this one? To summarize where I'm at:
1. I understand: s/x/y/g is a command telling sed to "globally substitute the regex x with the regex y"
2. It seems like the -e flag puts sed in some sort of "interactive mode"; from the man page: -e command Append the editing commands specified by the command argument to the list of commands. This seems confusing to me because it doesn't seem like we're giving sed a "list of commands"; it seems like we're giving it a "list of arguments", so I'm not too sure on that one.
3. I understand that the first and only < is nothing more than the single-char regex <, and the last > is nothing more than the single-char regex >
4. I understand that the * is telling sed to match 0 or more occurrences of the pattern before it, which is in this case inside the brackets; however, this is where I'm really confused: can someone please unpack the [^>]* more for me?
So what I'm really confused about is: what's going on with -e in plain English? What's going on with [^>]*? Thanks :) | The sequence s/<[^>]*>//g is a command to the sed processing engine; it tells it to do a "Search and replace". So -e 's/..../g' means "add this search and replace command to the execution of sed". This may make more sense if we do multiple commands in one command: sed -e '1d' -e '$d' would add two commands to the sed processing; "delete first line" and "delete last line". The [^>] is a regular expression that means "any character except for the >". So [^>]* means "zero or more of any character except for the >". And so <[^>]*> means a <, optionally followed by non->, followed by >. And then we put that into a "search and replace" command where this sequence is replaced by nothing, and then do it multiple times in the line (the final g). This means that the string hello <abc> there <def> will first match <abc> (the <, then abc matches "zero or more non->", then the >) and replace that with nothing, and then redo this for the <def>. The result would be hello  there . (Note the extra spaces, 'cos we're not removing them!) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
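A runnable one-liner version of the tutorial's example, feeding the line on stdin instead of typing it interactively:
$ echo 'This <b> is </b> an <i>example</i>.' | sed -e 's/<[^>]*>//g'
This  is  an example.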
307,167 | I would like to know, given a binary's name, which package I should install on Alpine Linux. How can I do that? | You have three ways basically. First: The package should be installed, and you need to specify the full path: apk info --who-owns /path/to/the/file Second: Use the pkgs.alpinelinux.org website. Third: Use the api.alpinelinux.org API by filtering the json output. For this you need a json parser like jq: apk add jq then use the API with the instructions provided here. UPDATE on 2022-04-07 I've released a tiny utility that allows searching via the CLI for what can be found on the pkgs.alpinelinux.org website: https://github.com/fcolista/apkfile .: Francesco | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/307167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42478/"
]
} |
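A concrete example of the first method (the package version in the output will of course vary with your Alpine release):
$ apk info --who-owns /bin/busybox
/bin/busybox is owned by busybox-1.36.1-r0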
307,186 | I am trying to read from the disk and want to use the dd command to issue every request randomly and check the latency of the disk for the read operation. I have used both seek and skip; will that work?
dd if=/dev/rdsk/c2t5000CCA0284F36A4d0 skip=10 of=/dev/null bs=4k count=1024000
1024000+0 records in
1024000+0 records out
4194304000 bytes (4.2 GB) copied, 51.0287 s, 82.2 MB/s
dd if=/dev/rdsk/c2t5000CCA0284F36A4d0 seek=10 of=/dev/null bs=4k count=1024000
1024000+0 records in
1024000+0 records out
4194304000 bytes (4.2 GB) copied, 51.364 s, 81.7 MB/s
Can anybody suggest a new way to read from the disk? | skip (also known as iseek in some dd implementations) moves the current pointer of the input stream while seek moves the current pointer in the output stream. Thus, by using skip you could ignore some data at the beginning of the input stream. The seek is usually used (but not always) in conjunction with conv=notrunc to preserve some data existing at the beginning of the output stream. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187674/"
]
} |
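A small demonstration of seek with conv=notrunc on a throwaway file:
printf 'AAAAAAAA' > out.bin
printf 'BBBB' | dd of=out.bin bs=1 seek=4 conv=notrunc 2>/dev/null
cat out.bin    # prints AAAABBBB — bytes 0-3 survived, bytes 4-7 were overwritten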
307,201 | I mark some packages for installation but I get Fig. 1 Asking to insert the installer DVD again in /media/cdrom but I have only its USB installer media and it does not want to have it, Fig. 2 List of failures without successful step 1 Hardware: Asus Zenbook UX303UA OS: Debian 8.5 | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
307,210 | I would like to upgrade my Linux kernel to 4.7 in Debian 8.5, since it has significantly better Skylake 6th generation support than the current Linux kernel. In Ubuntu 16.04, the upgrade is easy. However, I now need Debian 8.5 because of stability, and I would like to do the same upgrade for it. Testing StephenKitt's answer, I upgrade the kernel and reboot, but loading the OS fails. Fig. 1 Failure messages in startup
2nd iteration - solving the bug in startup [GAD3R]
Boot into Linux kernel 3.16.x
Run, as su, apt-get remove xserver-xorg-video-intel
Reboot
Output: works! The resolution is now 1920x1080. Since Skylake support is not complete in Linux kernel 4.6 etc. without firmware (hence this artifact here in Matlab 2016a), you need some non-free adjustments in the form of firmware; you may already get free firmware in Linux kernel 4.7.
# https://unix.stackexchange.com/a/307435/16920
apt-get -t jessie-backports install firmware-misc-nonfree xserver-xorg-video-intel
Abnormal installation and preventing its effect: I found out that the package xserver-xorg-video-intel may get installed as a dependency (with all its dependencies) under other conditions, as described in the thread "How to Recover Debian of LK backports where runlevel conflict?". The idea is to prevent the package from taking effect, even though it gets installed, by creating the file /etc/X11/xorg.conf:
# https://unix.stackexchange.com/a/308709/16920
Section "Device"
    Identifier "Intel"
    Driver "modesetting"
EndSection
## Bugs
# 1. LK 3.16 will fail now but LK 4.6 will work. TODO in the thread https://unix.stackexchange.com/a/308709/16920
Hardware: Asus Zenbook UX303UA OS: Debian 8.5 Related: Asus Zenbook UX303UA Linux compatibility , Linux Kernel - Mobile Skylake 6th Generation - Power Management | The easiest way to install a newer kernel is to use Jessie backports. First you need to add Jessie backports to your repositories, if it's not already there:
echo deb http://http.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/jessie-backports.list
(as root), then
apt-get update
apt-get -t jessie-backports install linux-image-amd64
will install the current default backported kernel (4.8 as of this writing). To provide the appropriate firmware for your laptop's wi-fi, you need to add non-free and install firmware-iwlwifi:
echo deb http://http.debian.net/debian jessie-backports main contrib non-free > /etc/apt/sources.list.d/jessie-backports.list
apt-get update
apt-get -t jessie-backports install firmware-iwlwifi
To solve the display problems, you can remove xserver-xorg-video-intel (nowadays Intel GPUs don't need a separate driver, they can use the kernel's mode-setting support), as suggested by GAD3R:
apt-get remove xserver-xorg-video-intel
(You may need to install xserver-xorg-video-dummy to satisfy other packages' dependencies.) You should also install the Skylake firmware to enable all the GPU features:
apt-get -t jessie-backports install firmware-misc-nonfree
Enabling backports is safe: newer packages are not picked up automatically from backports, you need to explicitly select them using -t jessie-backports as above (but once you've done that, updates to the upgraded packages are picked up by apt-get upgrade). Version 4.6 of the kernel already provided good support for Skylake, and it's improved since. If you upgrade as above, running apt-get upgrade will automatically upgrade to later versions of the kernel once they become available in the backports. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
307,231 | Why are there 12 md5 sums while there are only 5 ISOs on Debian's official site? http://cdimage.debian.org/debian-cd/current/i386/iso-dvd/ I just downloaded debian-8.5.0-i386-DVD-1.iso and checked the md5 sum, which does not match the given value. Because the md5sum file in the above link has too many entries, I wonder whether it is my download error or their mistake, or whether I missed something. | Some Images are missing! Only the first n images are available! Where is the rest? We don't store/serve the full set of ISO images for all architectures, to reduce the amount of space taken up on the mirrors. You can use the jigdo tool to recreate the missing ISO images instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115506/"
]
} |
307,257 | I want to replace the header in file1.csv with the header in file2.csv
file1.csv
"a","b","c"
file2.csv
"x","y","z"
I want file1.csv's header as "x","y","z". Please tell me how to do it. I tried
sed -i "1 s/^.*$/$file2.csv/" file1.csv
But it doesn't take the "" (quotes). I want the header with the quotes. | If you insist to do it with sed:
( sed 1q file2.csv; sed 1d file1.csv ) >file3.csv && mv file3.csv file1.csv
Without sed:
( head -1 file2.csv; tail -n +2 file1.csv ) >file3.csv && mv file3.csv file1.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187720/"
]
} |
307,278 | Today I noticed that there are a bunch of messages complaining about the RAID array (it's a software RAID10), so I started looking into it but need help because I'm unsure if I interpret the status output correctly (I've kinda forgotten the actual RAID set-up because the machine is at a remote location and I configured it about a year or two ago)... if I remember correctly the system was supposed to have 8x 2TB disks, but that's about all I can remember. System mail:
 N 14 [email protected] Wed May 25 21:30  32/1059 Fail event on /dev/md/0:EDMedia
 N 15 [email protected] Thu May 26 06:25  30/1025 DegradedArray event on /dev/md/0:EDMedia
 N 16 [email protected] Thu May 26 06:25  30/1025 SparesMissing event on /dev/md/0:EDMedia
The bit that's specifically confusing me, now that I'm looking at the outputs, is this:
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
Does it mean that a disk has been removed (or that it dropped from the array)? Should I try re-adding '/dev/sda1' to it? And is there any way I can tell that '/dev/sda1' was part of '/dev/md0' without adding a partitioned disk in-use by something, only to make things worse? Status outputs: 'mdadm -D /dev/md0' output:
/dev/md0:
        Version : 1.2
  Creation Time : Mon Feb 8 23:15:33 2016
     Raid Level : raid10
     Array Size : 2197509120 (2095.71 GiB 2250.25 GB)
  Used Dev Size : 1465006080 (1397.14 GiB 1500.17 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Thu Sep 1 19:54:05 2016
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : EDMEDIA:0
           UUID : 6ebf98c8:d52a13f0:7ab1bffb:4dbe22b6
         Events : 4963861
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
'lsblk' output:
NAME                       MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sda                          8:0    0   1.4T  0 disk
└─sda1                       8:1    0   1.4T  0 part
sdb                          8:16   0   1.4T  0 disk
└─sdb1                       8:17   0   1.4T  0 part
  └─md0                      9:0    0     2T  0 raid10
    ├─md0p1                259:0    0   1.5M  0 md
    ├─md0p2                259:1    0 244.5M  0 md     /boot
    └─md0p3                259:2    0     2T  0 md
      ├─EDMedia--vg-root   253:0    0     2T  0 lvm    /
      └─EDMedia--vg-swap_1 253:1    0    16G  0 lvm    [SWAP]
sdc                          8:32   0   1.4T  0 disk
└─sdc1                       8:33   0   1.4T  0 part
  └─md0                      9:0    0     2T  0 raid10
    ├─md0p1                259:0    0   1.5M  0 md
    ├─md0p2                259:1    0 244.5M  0 md     /boot
    └─md0p3                259:2    0     2T  0 md
      ├─EDMedia--vg-root   253:0    0     2T  0 lvm    /
      └─EDMedia--vg-swap_1 253:1    0    16G  0 lvm    [SWAP]
sdd                          8:48   0   1.4T  0 disk
└─sdd1                       8:49   0   1.4T  0 part
sdj                          8:144  0 298.1G  0 disk
└─sdj1                       8:145  0 298.1G  0 part
sr0                         11:0    1  1024M  0 rom
'df' output:
Filesystem     1K-blocks       Used  Available Use% Mounted on
/dev/dm-0     2146148144 1235118212  801988884  61% /
udev               10240          0      10240   0% /dev
tmpfs            1637644      17124    1620520   2% /run
tmpfs            4094104          0    4094104   0% /dev/shm
tmpfs               5120          0       5120   0% /run/lock
tmpfs            4094104          0    4094104   0% /sys/fs/cgroup
/dev/md0p2        242446      34463     195465  15% /boot
'watch -n1 cat /proc/mdstat' output:
Every 1.0s: cat /proc/mdstat    Thu Sep 1 21:26:22 2016
Personalities : [raid10]
md0 : active raid10 sdb1[1] sdc1[2]
      2197509120 blocks super 1.2 512K chunks 2 near-copies [3/2] [_UU]
      bitmap: 16/17 pages [64KB], 65536KB chunk
unused devices: <none> | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187736/"
]
} |
307,338 | If I'm writing a script or a program, should I output to stderr its name together with the warning or error message? For example:
./script.sh: Warning! Variable "var" lowered down to 10.
or:
./prog.py: Error! No such file: "file.cfg".
I understand that generally it is just a matter of taste (especially if you write your own stuff for yourself), but I wonder if there's anything conventional about that. I believe most UNIX/Linux utilities write their names when something happens, so it seems to be a good thing, but are there any guidelines or unspoken rules on how to do that and how not to? For example, it's not advisable to install binaries under /usr/bin/, rather under /usr/local/bin/ or something else. Are there similar rules about output to stderr? Should I write the name followed by a colon? Or just the "Warning!" and "Error!" words? I couldn't find anything, but maybe someone could point me to where to read about it. This question is a bit about programming practices, but I thought it is more appropriate here rather than on stackoverflow, as it's about UNIX/Linux traditions and not programming in general. | It is common practice to save the 0th argument passed to a C program's main and use that as the parameter for perror — for simple programs:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    char *foo = malloc(9999999999L);
    if (foo == 0)
        perror(argv[0]);
    return 0;
}
Call that program "foo"; running it illustrates the point:
> ./foo
./foo: Cannot allocate memory
Complicated programs may add to the text (or use only the filename without the path), but keeping the program name lets you find where a misbehaving program came from. There is no universally accepted scheme for error messages, but some widely-used programs (such as gcc) add a message category such as "Error" or "Warning". Here's an example from one of my build-logs:
compiling fld_def (obj_s)
../form/fld_def.c: In function '_nc_Copy_Argument':
../form/fld_def.c:164:14: warning: cast discards 'const' qualifier from pointer target type [-Wcast-qual]
   res = (TypeArgument *)argp;
              ^
In this example, gcc separates fields with colons and adds a category "warning" after the filename, line number, column number — and before the actual message. But there are several variations, making it complicated for programs (such as vi-like-emacs) to parse the information. For compilers, using a category in the message makes it simple to detect the fatal errors (which may not be immediately fatal) and warnings. If your program exits on an error, it does not add much to say that some are really warnings and some are errors. But when it behaves differently (or continues to work more or less) the category helps to diagnose the problem encountered. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187755/"
]
} |
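In a shell script, the equivalent of saving argv[0] is a one-liner (a minimal sketch; the message text mirrors the question's example):
#!/bin/sh
progname=${0##*/}    # like basename "$0": keep only the script's own name
printf '%s: Error! No such file: "%s"\n' "$progname" "file.cfg" >&2
exit 1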
307,356 | What is the identification of the font that is being used on Solaris for console text? Is there a Windows equivalent? See the example in the attached screenshot. | I converted the font into a TTF file at some stage, which I have used on OS X with some success. When rendered with anti-aliasing, it's usable at a surprisingly broad range of sizes. It's available here . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1142/"
]
} |
307,378 | I'm trying to look through this: How do I get the MD5 sum of a directory's contents as one sum?, and so I'm trying:
$ find | LC_ALL=C sort | cpio -o | md5sum
25324 blocks
6631718c8856606639a4c9b1ef24d420  -
Hmm... I'd like just the hash, not anything else in the output... so assuming that "25324 blocks" has been printed to stderr, I try to redirect stderr to /dev/null:
$ find | LC_ALL=C sort | cpio -o | md5sum 2>/dev/null
25324 blocks
6631718c8856606639a4c9b1ef24d420  -
Nope, that's not it. Let's, just for tests sake, try to redirect stdout to /dev/null:
$ find | LC_ALL=C sort | cpio -o | md5sum 1>/dev/null
25324 blocks
Ok, so the hash is gone as expected - but the "blocks" message is still there?! Where the hell is this "25324 blocks" printed, via file descriptor 3?!:
$ find | LC_ALL=C sort | cpio -o | md5sum 3>/dev/null
25324 blocks
6631718c8856606639a4c9b1ef24d420  -
Nope, that's not it... In any case, I can get just the hash with awk:
$ find | LC_ALL=C sort | cpio -o | md5sum | awk '{print $1}'
25324 blocks
6631718c8856606639a4c9b1ef24d420
but still the darn "blocks" message is printed... So how is it printed to terminal at all (as it seems not printed via either stdout or stderr), and how can I suppress that message? EDIT: found the answer, the "blocks" message is printed by cpio actually, so the right thing to do is:
$ find | LC_ALL=C sort | cpio -o 2>/dev/null | md5sum | awk '{print $1}'
6631718c8856606639a4c9b1ef24d420
Now we have just the hash... | The message is printed by cpio, this avoids it:
find | LC_ALL=C sort | cpio -o 2> /dev/null | md5sum | awk '{print $1}'
You'll lose any error messages printed by cpio if you use this approach. Some versions of cpio (at least GNU and FreeBSD) support a quiet option instead:
find | LC_ALL=C sort | cpio -o --quiet | md5sum | awk '{print $1}'
To avoid losing errors with a version of cpio which doesn't support --quiet, you could log them to a temporary file:
cpiolog=$(mktemp); find | LC_ALL=C sort | cpio -o 2> "${cpiolog}" | md5sum | awk '{print $1}'; grep -v blocks "${cpiolog}"; rm -f "${cpiolog}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8069/"
]
} |
307,390 | I want to know the difference between ttyS0, ttyUSB0 and ttyAMA0 on Linux. | ttyS0 What you get on the host when you connect it to the target with a classic serial cable plugged into the motherboard's serial port. This port is not present on most laptops or small devboards, but is still present on many desktops, and is very convenient for OS developers as mentioned at: https://askubuntu.com/questions/104771/where-are-kernel-panic-logs/932380#932380 You also get it with qemu -device isa-serial. For example, you could hook up two desktops with one of those cables, and communicate directly between them to get a shell on the remote desktop from your own. From Linux computer 1 you would run: screen /dev/ttyS0 115200 and then computer 2 would reply with the login prompt, and then you can log in from computer 1 into computer 2. So this is a bit like SSH and can be seen as an early form of networking. I think those cables cannot be too long or else the signal is lost though, and you can't do as much without the TCP/IP addressing/packet mechanisms. ttyUSB0 What you get on the host when using something like a USB-to-serial adapter. I also get it when I connect the GPIOs of my Raspberry Pi to my laptop to get a shell on a Raspberry Pi without a screen! There is also a more integrated, RPI-connector version of such an adapter. A concrete RPI example at: https://stackoverflow.com/questions/22054578/how-to-run-a-program-without-an-operating-system/32483545#32483545 ttyACM0 This is the TTY interface that you get when you connect your computer to either of: BBC micro:bit v1, Raspberry Pi Pico via a USB cable. They've just implemented a TTY directly on the same USB that can power on and program the device; it is very convenient! ttyAMA0 Haven't used this one on a real board yet. It seems that I would be able to connect my desktop with a serial port to my RPI with that interface: https://raspberrypi.stackexchange.com/questions/69697/what-is-dev-ttyama0 I've used it with QEMU: https://github.com/buildroot/buildroot/tree/27d7ef126bc7f4eb1a757128466befa11245bbfd/board/qemu/arm-versatile It seems that AMA is a shortened form of AMBA? https://www.raspberrypi.org/forums/viewtopic.php?t=8075 But why would they shorten a 4 character acronym?! The only interesting Linux kernel hits are under: https://github.com/torvalds/linux/blob/v4.16/drivers/tty/serial/amba-pl011.c#L2488 so it seems to be strictly linked to the PL011: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0183g/index.html which is a UART controller licensed by ARM, which I think is the dominating implementation. ttySAC0 Kamil reported this one on his Samsung Artik 710, so that's another one for the collection. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187831/"
]
} |
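For the ttyUSB0 case, the invocation is the same as with ttyS0 — only the device node changes (115200 is the usual baud rate for Raspberry Pi serial consoles):
screen /dev/ttyUSB0 115200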
307,452 | What is the difference between user name, display name and log-in name? What are the consequences of modifying each of them if a substantial difference holds? How do I modify these? I understand that usermod is relevant here, but interpreting its options is not immediate without having that terminology clear. And there might be other commands that serve the same or similar purposes. Pass. Thanks for clarifying this. | Which is which User name is an ambiguous term that could refer to a formal user ID string known to some system, or to a display name like John Smith . For that reason, we have more specific terms like login name , which informs us that this is the character string that is used for logging in, like jsmith , and not John Smith . User ID also serves this purpose, but it is ambiguous against a numeric user ID . That has to be clear from context. For instance in Unix, users don't usually deal with numeric user IDs; if a prompt asks for a "user ID", people just know that they aren't supposed to enter 1003 but jsmith . Display name (also called the real user name ) informs us that this is a name of some software object (such as a user account) used for referring to it in user interfaces and program output such as diagnostic or debug messages. The implication is that a display name is not necessarily unique among such objects and cannot be used as a key to unambiguously refer to an object. It is literally for display purposes only. A "display name" is not necessarily a user name; that has to be established by the context. Anything that can have a name can potentially have a display name. In traditional Unix, the /etc/passwd file associates your numeric user ID with the login name (the textual user ID), and with a display name . Changing and consequences The chfn utility is used for changing the display name aka real user name and related information. Doing this should have no consequence. Changing the textual user ID aka login name requires privilege; root can edit the password file to edit this. The effect will be instant: the new name will appear anywhere in the system where numeric user IDs are displayed as their text equivalent. For instance, if someone lists a directory using ls -l and that directory contains files owned by that user, they will immediately see the new name, since the ls program picks it out of the password database. The change will break or potentially break various things in the system, and so is a bad idea: Firstly, if the new name clashes with another one, that's obviously very bad; I'm mentioning that for the sake of completeness. Let's assume that it's not the case. Let's also assume that it's not the case that some user's name is changed without their knowledge, leaving them unable to log in. The remaining problem is that in the file system there are likely configuration files which encode the textual user ID : both in their path names, and in their contents. These, of course, continue to refer to the old user ID which no longer exists in the password file. The name change is not complete unless all of these are hunted down and repaired. The problem can be further compounded if a new password file entry is created which matches the old name. Those configurations now refer to a valid user, but the wrong one. As an example let's consider that the sudo utility exists in the system and is configured via the /etc/sudoers file. 
Suppose the /etc/sudoers file grants user bob the privilege to run some dangerous administrative command with superuser credentials. Now suppose we rename bob to robert in the password file and don't update this entry. Now robert is not able to run that command any more; the sudoers file grants the privilege to bob not to robert . Next day, a new user is added and happens to be called bob . This bob now has the privilege to run that administrative command as root. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132913/"
]
} |
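For completeness, the display-name change mentioned above is a one-liner on util-linux systems (editing another user's entry needs root; drop the user name to change your own):
chfn -f 'John Smith' jsmith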
307,497 | Is it possible to stop my laptop going to sleep when I close the lid? GNOME 3.20, Fedora 24. My laptop does not reliably wake from sleep. (It happens to be a hardware issue... I think I basically killed it while trying to replace a wifi card. But I want to keep using it for a while longer). | Install GNOME Tweak Tool and go to the Power section. There's an option to disable the automatic suspend on lid close. Option details I compared dconf before and after to find the option, but it turns out that's not how it's implemented. Instead, Tweak Tool creates ~/.config/autostart/ignore-lid-switch-tweak.desktop . The autostart is a script which effectively runs systemd-inhibit --what=handle-lid-switch . So we can see the lid close action is handled purely by systemd-logind. Alternative route An alternative would be to edit /etc/systemd/logind.conf to include: HandleLidSwitch=ignore This would work all the time, not just when your user is logged in. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/307497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
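The inhibitor used by the Tweak Tool trick can also be tested by hand from a terminal — the lid switch is ignored for as long as the command runs:
systemd-inhibit --what=handle-lid-switch sleep infinity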
307,504 | I recently bought a Raspberry Pi 2, Model B. I am intending to mostly connect to it over the local WLAN or Ethernet, using a SSH connection from my main computer. However, right now I have a Raspberry Pi that does not yet have any software installed on it. The guides on setting up a Raspberry Pi that I've found online so far all start with connecting the machine to a HDMI-display.At this time, I do not have a display with an HDMI-connection over here. Is it possible to install (any version of, but raspbian is probably preferred) Linux on the Raspberry Pi without needing to connect it to a HDMI display? | Raspbian from early 2016 allows ssh after second boot. First boot from SD resizes partitions and generates sshd keys, but doesn't start ssh daemon. Wait 5-10 minutes and powercycle RPI. Connect over ssh using default credentials. Finding RPI's ip address is out of scope of this answer :) Update 2017 : raspbian stretch doesn't require powercycle, but needs a file 'ssh' placed in root of smaller SD card partition | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123305/"
]
} |
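A sketch of the stretch-era step from a Linux host, right after flashing the card (/dev/sdX1 is a placeholder for the SD card's first, FAT partition):
sudo mount /dev/sdX1 /mnt
sudo touch /mnt/ssh      # an empty flag file named "ssh" enables the ssh daemon on boot
sudo umount /mnt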
307,580 | So whenever I use the up arrow, it does not show the last command from the history but prints ^[[A, and the down arrow prints ^[[B. I don't know what this is called, but also I have a $ prompt, while running su did not have it. Using Ubuntu Server 16.04.1 | History is not present in all shells. You need to start a shell with history like bash. To do so, just type the name of the shell, like bash, or the full path of the executable, like /bin/bash | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/307580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187309/"
]
} |
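To make the fix stick for future logins rather than typing bash each time, change the login shell (assuming /bin/bash exists and is listed in /etc/shells):
chsh -s /bin/bash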
307,583 | I want to check which directories take the most disk space, quickly. I tried du -sh subdir but it took more than 20 seconds on bigger directories. I'm not sure how to display the size of all the subdirectories in the home directory at once with this method, but I'm afraid that it might take minutes... Is there a fast way to do this? I don't need to display the size of files, just directories. | Sample directory
$ ls -aF
./  ../  .asd/  folder1/  folder2/  list  t1  t2  xyz/
To find sizes only for folders, excluding hidden folders:
$ find -type d -name '[!.]*' -exec du -sh {} +
4.0K    ./folder1
4.0K    ./folder2
8.0K    ./xyz
If you need a total at the end as well:
$ find -type d -name '[!.]*' -exec du -ch {} +
4.0K    ./folder1
4.0K    ./folder2
8.0K    ./xyz
16K     total
To sort the results:
$ find -type d -name '[!.]*' -exec du -sh {} + | sort -h
4.0K    ./folder1
4.0K    ./folder2
8.0K    ./xyz
To reverse the sorting order:
$ find -type d -name '[!.]*' -exec du -sh {} + | sort -hr
8.0K    ./xyz
4.0K    ./folder2
4.0K    ./folder1
If you need hidden directories as well, remove -name '[!.]*' from the find command. I don't know any other command to find the size of folders that is faster than du. Use df for file system disk space usage. Use find -maxdepth 1 -type d -name '[!.]*' -exec du -sh {} + to avoid sub-folders showing up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152267/"
]
} |
307,595 | I have the following script for creating projects:
clear
version="0.0.2"

echo "Project creator"
echo "---------------"

# ask user to specify a directory (could start with spelled-out home directory)
directoryName=""

# Input must meet the following criteria:
# 1.) No spaces at all in the directory path
# 2.) some characters must be entered
while [[ ${#directoryName} -eq 0 || $directoryName == *" "* ]]
do
    echo -e "\nEnter the full directory of the project (no spaces):"
    read directoryName
done
echo "You entered: $directoryName"

# evaluate directoryName so that the rest of the code can understand it
# create directory if it does not exist
eval mkdir "$directoryName"

# if successful
if [ $? -eq 0 ]
then
    # call executable in /home/miwarren/CPPFileCreator to create a main.cpp file to put in the folder
    $HOME/CPPFileCreator/a.out $directoryName

    # copy a run.sh file into the folder, too...
    eval cp $HOME/CPPFileCreator/run.sh $directoryName/run.sh

    # you might want to put a fileCreator.sh in the folder as well
    eval cp $HOME/fileCreator.sh $directoryName/fileCreator.sh
fi
I make sure to strip out any spaces (assuming that every unsafe string has at least a space in it, because of injection-style attacks). I want users to not have to spell out path parts when there exist variables for them (for example, $HOME for the home directory). Am I still good to use eval here, and if not, what should I do instead? | "I want users to not have to spell out path parts when there exist variables for it (for example, $HOME for the home directory)." That can be done without eval:
$ s='$HOME/.config'
$ s="${s//\$HOME/$HOME}"
$ echo "$s"
/home/john1024/.config
This has some limitations. For one, if both HOMES and HOME are names of variables that you want to substitute, then, to avoid false matches, HOMES must be substituted before HOME.
Applying substitutions for all exported variables
Using bash:
while IFS== read -r name val
do
    s="${s//\$$name/$val}"
done < <(printenv)
For example:
$ export A=alpha; export B=beta
$ s='$HOME/$A/$B'
$ while IFS== read -r name val; do s="${s//\$$name/$val}"; done < <(printenv)
$ echo "$s"
/home/john1024/alpha/beta
Because this approach doesn't sort the variable names by length, it has the variable overlap issue mentioned above. We can fix that by sorting the variable names according to length:
while IFS== read -r n name val
do
    s="${s//\$$name/$val}"
done < <(printenv | awk '/^[^ \t]/{key=$0; sub(/=.*/,"",key); printf "%s=%s\n",length(key),$0}' | sort -rnt=)
If the user enters a variable name that does not exist but the initial characters match some shorter name, the shorter name will be substituted. If this matters, we can avoid it by requiring the user to use brace-notation for the variables with this code:
while IFS== read -r n name val
do
    s="${s//\$\{$name\}/$val}"
done < <(printenv | awk '/^[^ \t]/{key=$0; sub(/=.*/,"",key); printf "%s=%s\n",length(key),$0}' | sort -rnt=)
As an example:
$ s='${HOME}/${A}/${B}'
$ while IFS== read -r n name val; do s="${s//\$\{$name\}/$val}"; done < <(printenv | awk '/^[^ \t]/{key=$0; sub(/=.*/,"",key); printf "%s=%s\n",length(key),$0}' | sort -rnt=)
$ echo "$s"
/home/john1024/alpha/beta | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/183708/"
]
} |
307,600 | During the installation, there is a choice to let you choose which desktop and whether or not to install the standard system utilities. See here for the screen shot and the packages included. Personally, I don't like to install many packages I don't need, so I ask here what the consequences are of not installing these utilities. Please explain in plain language what functionality I will lose or what inconvenience I will get. | What are the consequences of not installing the standard system utilities of Debian? Edit: Without installing the standard system utilities, you will get a working operating system, but you will need most of the utilities later. I have tested Debian in a VirtualBox offline install without a GUI and without standard system utilities. The output of apt list --installed > installed.txt is here. In the installed OS, I configured apt because it was not fully working — only the security update was enabled:
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
Then I installed a GUI; here are the two steps that I executed: 1) To configure my sources.list, I commented out the following lines:
deb http://ftp.fr.debian.org/debian/ jessie/updates main
deb http://ftp.fr.debian.org/debian/ jessie/updates main
Then adding:
deb http://ftp.fr.debian.org/debian/ jessie main
deb-src http://ftp.fr.debian.org/debian/ jessie main
2) Running tasksel to install the GUI: I mounted the debian.iso to save bandwidth, connected to the internet, then installed my desktop. After updating the packages, everything works fine. NB: the "standard system utilities" choice isn't available after running tasksel on the installed system. What does the "standard system" task include? This task is available only during the installation; it contains the following packages:
# tasksel --task-packages standard
~pstandard
~prequired
~pimportant
It corresponds to the following command:
aptitude search ~pstandard ~prequired ~pimportant -F%p
The following priority levels are recognized by the Debian package management tools. required Packages which are necessary for the proper functioning of the system (usually, this means that dpkg functionality depends on these packages). Removing a required package may cause your system to become totally broken and you may not even be able to use dpkg to put things back, so only do so if you know what you are doing. Systems with only the required packages are probably unusable, but they do have enough functionality to allow the sysadmin to boot and install more software. important Important programs, including those which one would expect to find on any Unix-like system. If the expectation is that an experienced Unix person who found it missing would say "What on earth is going on, where is foo?", it must be an important package.[6] Other packages without which the system will not run well or be usable must also have priority important. This does not include Emacs, the X Window System, TeX or any other large applications. The important packages are just a bare minimum of commonly-expected and necessary tools. standard These packages provide a reasonably small but not too limited character-mode system. This is what will be installed by default if the user doesn't select anything else. It doesn't include many large applications. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115506/"
]
} |
307,606 | My scenario is to perform side by side diff of directories using: diff -ry <folder1> <folder2> along with the line numbers in the diff output. By default line numbers are not displayed in the side by side diff, and the parameter --new-line-format doesn't work with diff -y, it only works with diff -u. What I have tried is to do [for files only]: diff -y <(cat -n file1) <(cat -n file2) to generate line numbers. The above command first generates line numbers and then passes it to the diff command, so the line numbers are kept intact in the diff result. But when it comes to using diff -ry, I am unable to do it. Is there any way to apply cat -n using something like xargs [like a preprocessor] in the diff -ry command? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186788/"
]
} |
307,641 | My problem is that many programs call xdg-open to open websites, but on my Manjaro system (based on Arch Linux) this is somehow bound to cups :) When such a call to xdg-open happens, the CPU usage goes up a lot, without anything happening. I restart because the laptop gets hot very quickly.
~ $ xdg-settings get default-web-browser
cups.desktop
When I want to change that, I get the following response:
~ $ xdg-settings set default-web-browser firefox.desktop
xdg-settings: $BROWSER is set and can't be changed with xdg-settings
I can go ahead and change the environment variable for the browser and I'm fixed, BUT only for this one terminal. How could I make this change permanent or add it to autostart? I'm using: i3 4.12, fish shell | I had this issue since Chromium made itself the default browser each time I installed it. Using xdg-mime fixed it: xdg-mime default firefox.desktop x-scheme-handler/https x-scheme-handler/http On my Arch Linux system, this added two lines to ~/.config/mimeapps.list, associating HTTP and HTTPS with Firefox. Now I can have both Firefox and Chromium installed with Firefox being the default browser. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180164/"
]
} |
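Since the asker is on fish, a sketch of clearing the offending variable so that xdg-settings stops refusing (fish universal variables persist across sessions; if BROWSER is set again by some config file, it will of course come back):
set -e BROWSER                     # erase it — or: set -Ux BROWSER firefox to pin it
xdg-settings set default-web-browser firefox.desktop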
307,657 | I use this command to download the entire wikipedia content of the main page: wget -p -k https://en.wikipedia.org/wiki/Main_Page But I receive an error when I try to combine it with -O in order to rename the downloaded parent dir: wget -p -k https://en.wikipedia.org/wiki/Main_Page -O mydir Cannot specify both -k or --convert-file-only and -O if multiple URLs are given, or in combination with -p or -r. See the manual for details. How to download the URL entirely and name the downloaded parent dir? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65628/"
]
} |
307,663 | I'm running on Arch Linux, Xfce 4.12. My mouse wheel scrolls too slowly, so I want to increase the number of lines for each scroll "tick". I read that this is possible by setting the Evdev Scrolling Distance with xinput; however, I am using libinput and I do not see anything related to scrolling distance. Output of xinput list-props on my mouse:
Device Enabled (139): 1
Coordinate Transformation Matrix (141): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
libinput Accel Speed (275): -0.640000
libinput Accel Speed Default (276): 0.000000
libinput Accel Profiles Available (277): 1, 1
libinput Accel Profile Enabled (278): 1, 0
libinput Accel Profile Enabled Default (279): 1, 0
libinput Natural Scrolling Enabled (280): 0
libinput Natural Scrolling Enabled Default (281): 0
libinput Send Events Modes Available (259): 1, 0
libinput Send Events Mode Enabled (260): 0, 0
libinput Send Events Mode Enabled Default (261): 0, 0
libinput Left Handed Enabled (282): 0
libinput Left Handed Enabled Default (283): 0
libinput Scroll Methods Available (284): 0, 0, 1
libinput Scroll Method Enabled (285): 0, 0, 0
libinput Scroll Method Enabled Default (286): 0, 0, 0
libinput Button Scrolling Button (287): 2
libinput Button Scrolling Button Default (288): 274
libinput Middle Emulation Enabled (289): 0
libinput Middle Emulation Enabled Default (290): 0
Device Node (262): "/dev/input/event1"
Device Product ID (263): 1133, 50487
libinput Drag Lock Buttons (291): <no items>
libinput Horizonal Scroll Enabled (264): 1
How can I change my scrolling speed? | Libinput does not have any kind of "for every wheel scroll, do n lines/degrees" setting as a common parameter; the setting seems to be device-specific for now, as some Logitech devices have the parameter Evdev Scrolling Distance (278), which possibly comes from the "old" Evdev driver. In my opinion this is a regression in user experience: at first, the inclusion of a configurable mouse scroll sensitivity into the common toolkit (libinput) was refused; it is now part of a pull request to be included in future versions – possibly the function calls will have to be implemented in every Desktop Environment. There are many possibilities to fix such an issue, but they depend on the Linux distribution. With luck, you have driver-specific scroll sensitivity – check by doing a search for all inputs with scroll variables:
xinput list | cut -f2 | cut -f2 -d'=' | \
    xargs -d $'\n' -I'{}' sh -c "xinput list-props '{}' | grep -iq scroll && \
    (echo Listing dev id '{}'; xinput list-props '{}')"
and set the specific variable with xinput --set-prop <ID> <SUB-ID> <values>, where <ID> can be the device name and <SUB-ID> can be the setting name. A general fix is repatching the libinput code and rebuilding. You can try to roll back to udevadm/evdev interfaces with X11, and then try the X11 variable MOUSE_WHEEL_CLICK_ANGLE. Following on from the last item, it's possible to use imwheel to multiply mouse scroll clicks:
# Should use imwheel --kill --buttons "4 5" to restart imwheel,
# if the mouse has back/forward buttons, otherwise imwheel --kill is enough.
# imwheel must be set to autostart in your DE tools.
# Edit ~/.imwheelrc to include the following, where '3' is a multiplier:
".*"
None, Up, Button4, 3
None, Down, Button5, 3
Control_L, Up, Control_L|Button4
Control_L, Down, Control_L|Button5
Shift_L, Up, Shift_L|Button4
Shift_L, Down, Shift_L|Button5
There are also specific application settings for mouse wheel sensitivity, like Chrome SmoothScroll and Firefox SmoothWheel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125089/"
]
} |
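For the driver-specific route, a hypothetical invocation for when the Evdev property does show up for your device — both the device name and the values here are placeholders (use what the listing printed; for the evdev driver the three values are usually the vertical, horizontal and dial scroll distance):
xinput set-prop "Logitech USB Optical Mouse" "Evdev Scrolling Distance" 3 1 1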
307,665 | I just made a clean install of the latest Kali (not the light one) on my TP300. Full disk, encrypted. Then I updated everything and added the Linux headers (I'm not a pro at all with Linux), installed my WiFi driver... and then noticed that I have no sound card in my settings, no sound icon in the control panel, and... no sound! My sound keys don't work. I never had this problem with older versions. I tried a lot of commands found on the net but nothing; I think my PC detects the sound card because I saw it several times when using commands to display cards etc. And before login, I can play with the volume, I hear the sound, I see the sound icon, and the keys work — but after login... nothing! Does someone have an idea how to fix this? | To fix the problem, type this in the terminal: systemctl --user enable pulseaudio This changes a configuration file to enable pulseaudio starting on boot. Reference: https://bugs.kali.org/view.php?id=3128 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188051/"
]
} |
307,666 | I'm writing a daemon to manage my Java app on a headless Ubuntu 16.04 box using jsvc and this (probably pre-systemd) tutorial , and got as far as running update-rc.d mydaemon enable , receiving the error update-rc.d: error: mydaemon Default-Start contains no runlevels, aborting Having Googled around a bit this appears to have something to do with the (fairly?) recent move to systemd , which I have confirmed is running with pidof systemd . How do I achieve the same starting-at-boot behaviour as update-rc.d (and more importantly stopping the service via /etc/init.d/mydaemon stop rather than just killing the process as the Java app needs to clean up). And are systemd and update-rc.d different systems, or does systemd just change how the latter works? | I don't have a Ubuntu 16.04 to test this on, or provide you with many details, but systemd has a compatibility feature to allow older /etc/init.d scripts to continue working. Instead of using update-rc.d to enable your daemon, use the systemd native command equivalent: sudo systemctl enable mydaemon If this still produces the same error, add the missing lines to the starting set of comments in your script: # Default-Start: 2 3 4 5# Default-Stop: 0 1 6 between the ### BEGIN INIT INFO and ### END INIT INFO lines, and try again.See the LSB core description for these lines. You can also explicitly start the daemon with sudo systemctl start mydaemon and ask for its status with sudo systemctl status -l mydaemon See man systemd-sysv-generator for the compatibility feature. See this wiki for converting System V or upstart scripts like yours to native systemd Units. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188049/"
]
} |
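Putting the pieces of the answer above together — after editing /etc/init.d/mydaemon, make systemd regenerate its wrapper unit before enabling it:
sudo systemctl daemon-reload      # re-runs systemd-sysv-generator over /etc/init.d
sudo systemctl enable mydaemon
sudo systemctl start mydaemon
sudo systemctl status -l mydaemon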
307,753 | I'm looking at a bash script with the following code:
#!/bin/sh
set -e # Exit if any command fails

# If multiple args given run this script once for each arg
test $2 && {
  for arg in $@
  do
    $0 $arg
  done
  exit
}
...
As mentioned in the comment, the purpose is to "run the script on each argument if there is more than one". I've some runner code to test this out:
# install-unit

# If multiple args given run this script once for each arg
test $2 && {
  echo "\$0: $0"
  echo "\$1: $1"
  echo "\$2: $2"
  echo "test \$2 && is true"
}

test $2 && {
  # note this is just a regular bash for loop
  # http://goo.gl/ZHpBvT
  for arg in $@
  do
    $0 $arg
  done
  exit
}

echo "running: $1"
which gives the following:
sh ❯ ./multiple-run.sh cats and dogs and stuff
$0: ./multiple-run.sh
$1: cats
$2: and
test $2 && is true
running: cats
running: and
running: dogs
running: and
running: stuff
sh ❯ ./multiple-run.sh cats
running: cats
I'd like for someone to unpack the test $2 && {} part and the exit shell builtin. In the beginning I was thinking that test $2 was checking to see if there are more args, and maybe it is, but it looks weird because it seems like there are two expressions separated by the "and" &&, and I'm also just confused what the heck the exit builtin does. Plain English explanations with examples and documentation references are appreciated :) Thanks!
Edit: Thanks Stéphane Chazelas for that. For anyone who is confused by the change in for loop syntax, check out number 3 in this geekstuff article. To summarize:
echo "positional parameters"
for arg
do
  echo "$arg"
done

echo "\nin keyword"
for arg in "$@"
do
  echo "$arg"
done
gives us:
❯ ./for-loop-positional.sh cats and dogs
positional parameters
cats
and
dogs

in keyword
cats
and
dogs | That's an incorrect way to write:
#!/bin/sh -
set -e # Exit if any command fails

# If multiple args given run this script once for each arg
if [ "$#" -gt 1 ]; then
  for arg do
    "$0" "$arg"
  done
  exit
fi
I think what the author intended to do was check if the second argument was a non-empty string (which is not the same thing as checking whether there are more than 1 argument, as the second argument could be passed but be the empty string).
test somestring
A short form of
test -n somestring
Returns true if somestring is not the empty string. (test '' returns false, test anythingelse returns true (but beware of test -t in some shells that checks whether stdout is a terminal instead)). However, the author forgot the quotes around the variable (on all the variables actually). What that means is that the content of $2 is subject to the split+glob operator. So if $2 contains characters of $IFS (space, tab and newline by default) or glob characters (*, ?, [...]), that won't work properly. And if $2 is empty (like when less than 2 arguments are passed or the second argument is empty), test $2 becomes test, not test ''. test does not receive any argument at all (empty or otherwise). Thankfully in that case, test without arguments returns false. It's slightly better than test -n $2 which would have returned true instead (as that would become test -n, same as test -n -n), so that code would appear to work in some cases. To sum up: to test if 2 or more arguments are passed:
[ "$#" -gt 1 ] or [ "$#" -ge 2 ]
to test if a variable is non-empty:
[ -n "$var" ]
[ "$var" != '' ]
[ "$var" ]
all of which are reliable in POSIX implementations of [, but if you have to deal with very old systems, you may have to use [ '' != "$var" ] instead for implementations of [ that choke on values of $var like =, -t, ( ...
to test if a variable is defined (that could be used to test if the script is passed a second argument, but using $# is a lot easier to read and more idiomatic):
[ "${var+defined}" = defined ]
(or the equivalent with the test form. Using the [ alias for the test command is more common-place). Now on the difference between cmd1 && cmd2 and if cmd1; then cmd2; fi. Both run cmd2 only if cmd1 is successful. The difference in that case is that the exit status of the overall command list will be that of the last command that is run in the && case (so a failure code if cmd1 doesn't return true (though that does not trip set -e here)) while in the if case, that will be that of cmd2 or 0 (success) if cmd2 is not run. So in cases where cmd1 is to be used as a condition (when its failure is not to be regarded as a problem), it's generally better to use if, especially if it's the last thing you do in a script, as that will define your script's exit status. It also makes for more legible code. The cmd1 && cmd2 form is more commonly used in conditions themselves, like in:
if cmd1 && cmd2; then...
while cmd1 && cmd2; do...
That is in contexts where we care for the exit status of both those commands. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
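To make the corrected pattern above concrete, here is a minimal runnable sketch (the script name multiple-run.sh and the printf message are illustrative only):
#!/bin/sh -
set -e
# Re-invoke this script once per argument when more than one is given,
# testing the argument count rather than the contents of $2
if [ "$#" -gt 1 ]; then
    for arg do
        "$0" "$arg"
    done
    exit
fi
printf 'running: %s\n' "$1"
Invoked as ./multiple-run.sh cats and dogs it prints one "running:" line per argument, and a call like ./multiple-run.sh a "" still triggers the per-argument loop, which the test $2 version would miss because its second argument is empty.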
307,764 | I need to extract text strings from a file and put them in a new file.Each string is always between the same text (tags). Here's an example (there are hundreds of blocks like this one where I need the data from to be put into one file): 1731 0 obj<</Page 250/Type/Annot/Subtype/Highlight/Rotate 0/Rect[ 95.4715 347.644 337.068 362.041]/NM(929cd95c-f962-4fa3-b734-2e0e67d7b321)/T(iPad)/CreationDate(D:20160818145053Z00'00')/M(D:20160818145204Z00'00')/C[ 0.454902 0.501961 0.988235]/CA 1/QuadPoints[ 95.4715 362.041 337.068 362.041 95.4715 347.644 337.068 347.644]/Contents(EXAMPLE OF TEXT TO BE EXTRACTED)/F 4/Subj(Highlight)>>endobj I need to extract Page 250 and EXAMPLE OF TEXT TO BE EXTRACTED For the Page 250 example, the relevant tags seem to be: <</ and /Type For the EXAMPLE OF TEXT TO BE EXTRACTED example, the relevant tags seem to be: /Contents( and )/F Eventually I would like the pages and the corresponding text to be sorted in ascending order, but I could manage that in a spreadsheet. I tried to use some answers from here , but I did not manage to make it work... I am most comfortable with the Unix command line, but I know a little bit of Python and AppleScript | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188232/"
]
} |
307,794 | I want to use iwconfig. The install of wireless tools: tristan@debian:~$ sudo apt install wireless-toolsReading package lists... DoneBuilding dependency tree Reading state information... Donewireless-tools is already the newest version. when i try to run iwconfig: tristan@debian:~$ iwconfigbash: iwconfig: command not found As you can see I am not able to use iwconfig. | Run iwconfig as root : su -c "iwconfig" Or grant administrative privileges to the user, then run: sudo iwconfig For an unprivileged user, you can run iwconfig after adding the following line to your .bashrc : export PATH="$PATH:/sbin" Update: On Debian Buster iwconfig is under /usr/sbin , so you can add /usr/sbin to your PATH. Add the following line to your /etc/environment PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/sbin:/usr/local/sbin:/sbin" then : source /etc/environment | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176556/"
]
} |
307,895 | I have crafted a Bash tool that runs on a server. This tool will block certain IP addresses for a certain time range (i.e. from 5 A.M. to 3:30 P.M.). As of currently, the tool works fine, but I have to input IP addresses to certain websites manually into a text file and have the tool pull the IP address based on the nth line using the "head" and "tail" commands. I don't want to do this as I believe a single ping will be much more lightweight and portable. So if I do a ping for Google: ping google.com -c 1 -s 16 The output will be: Ubuntu@yokai:~# ping google.com -c 1 -s 16PING google.com (173.194.66.138) 16(44) bytes of data.24 bytes from qo-in-f138.1e100.net (173.194.66.138): icmp_seq=1 ttl=37 time=46.7 ms--- google.com ping statistics ---1 packets transmitted, 1 received, 0% packet loss, time 0msrtt min/avg/max/mdev = 46.748/46.748/46.748/0.000 ms And the command I have narrowed this output down with is: ping google.com -c 1 -s 16 | grep -o '([^ ]*' | tr -d '(44):\n' Which gives me an output of: 173.19.66.113173.19.66.113ubuntu@yokai As you can see, there is a duplicate of the same IP address. How can I remove the duplicate with sed so that I can store the single IP address into a variable and run the script as a cronjob, or am I on a better track using tr? (EDIT) I already know/knew how to resolve IP address from a host name or domain. That is not what I am asking here. This question is specifically about managing ping output using sed in order to keep the tool I have created more portable as ping comes default with almost any and all linux distros. (UPDATE)Since some marked this question a duplicate of some other bullshit question that has nothing to do with this one, I will make it clear enough that anyone can understand: How to parse the IP ADDRESS "ONLY" from ping OUTPUT when using ping on a domain name. This is NOT asking to resolve a domain name and therefore is NOT A DUPLICATE!!! (LATEST UPDATE)I have, since asking this question, learned proper POSIX regex to do what I needed and I need to make it clear that I was originally asking about the regular expressions for sed that would print a single instance of an IP from ping output. I have since refined my methods and do not use any of the answers here but I thank everyone for their attempts with helping here. I am now using a timer script that I created to configure iptables at certain times to block certain domain name resolutions. Again, thank you to everyone that tried to help. | ping is for checking whether a host is up or down based on ICMP responses; it is never the right tool for merely resolving an IP address, as there are dedicated tools for that. You should look at dig , host , nslookup -- whichever suits you best. Here's a dig output: % dig +short google.com123.108.243.57123.108.243.51123.108.243.54123.108.243.48123.108.243.60123.108.243.52123.108.243.56123.108.243.55123.108.243.61123.108.243.58123.108.243.49123.108.243.50123.108.243.59123.108.243.47123.108.243.53 As a side note, in Linux, if you want to query by the NSSwitch (Name Service Switch) i.e. /etc/nsswitch , then use the getent command with the hosts (or ahosts ) database, e.g.: getent hosts google.com On my computer, I have: hosts: files mdns4 dns in /etc/nsswitch.conf , so getent hosts will query in sequence and use gethostbyaddr(3) or gethostbyname(3) based on the name, and ahosts will use getaddrinfo(3) . In essence, with my configuration, this will first check /etc/hosts , then mDNS, and finally DNS.
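If the goal is a single address stored in a variable, the getent output is easy to trim down; a sketch (the ahostsv4 database is a glibc feature, so this assumes a glibc system):
ip=$(getent ahostsv4 google.com | awk 'NR==1 {print $1}')
echo "$ip"
The first field of the first ahostsv4 line is an IPv4 address, so no deduplication step is needed at all.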
If you insist on using ping and sed , you can do: % ping -c1 google.com | sed -nE 's/^PING[^(]+\(([^)]+)\).*/\1/p'123.108.243.56 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170676/"
]
} |
307,901 | I need to count the number of files under a folder and use the following command. cd testfolderbash-4.1$ ls | wc -l6 In fact, there are only five files under this folder, bash-4.1$ lstotal 44-rw-r--r-- 1 comp 11595 Sep 4 22:51 30.xls.txt-rw-r--r-- 1 comp 14492 Sep 4 22:51 A.pdf.txt-rw-r--r-- 1 comp 8160 Sep 4 22:51 comparison.docx.txt-rw-r--r-- 1 comp 903 Sep 4 22:51 Survey.pdf.txt-rw-r--r-- 1 comp 1206 Sep 4 22:51 Steam Table.xls.txt It looks like ls | wc -l even counts the total 44 as a file, which is not correct. | wc is a char, word, and line counter, not a file counter. You, the programmer/script writer, are responsible for making it count what you want and for adjusting the calculation accordingly. In your case, you could do something like: echo $((`ls|wc -l`-1)) Finally, note that your ls is probably an alias, as it gives a long listing, which is not what plain ls without arguments produces. It may therefore be a good idea to refer to ls 's full path (usually /bin/ls ) to avoid confusion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14207/"
]
} |
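If you would rather not depend on an ls alias or on subtracting the "total" line, two common alternatives (a sketch; adjust to taste):
find . -maxdepth 1 -type f | wc -l
shopt -s nullglob; files=( * ); echo "${#files[@]}"
The find version counts only regular files but can miscount names containing newlines; the bash glob version counts every non-hidden entry, including directories, and nullglob makes an empty directory yield 0 instead of 1.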
307,937 | I have a script that starts my vagrant machine, opens up multiple terminals and connects to the vagrant machine via ssh in every newly opened terminal. My problem is that I need about five terminals, and I don't want to type in the password for each terminal manually. Is there a way to get prompted for the password only once in the main terminal, and use the same password for the ssh command? #!/bin/bashcd /home/kkri/public_html/freitag/vagrantvagrant upfor run in $(seq 1 $1) do gnome-terminal --window-with-profile=dark -e "ssh vagrant@localhost -p 2222" --$ donegnome-terminal --window-with-profile=git clearecho "~~~ Have fun! ~~~" | In general (ignoring vagrant or other system-specific details) your best bet is to set up authentication with SSH keys, and run ssh-agent . Then open the ssh sessions with something like: # load the key to the agent with a 10 s timeout# this asks for the key passphrasessh-add -t10 ~/.ssh/id_rsa for x in 1 2 3 ; do ssh .... done Or, if you can't use keys, you could rig something up with sshpass . read -p "Enter password: " -s SSHPASS ; echofor x in 1 2 3 ; do sshpass -e ssh ...doneunset SSHPASS Though with the terminal in the middle, this would leave the password set in the terminal's environment. To work around that, you could save the password temporarily in a file: read -p "Enter password: " -s SSHPASS ; echoPWFILE=~/.ssh/secret_passwordcat <<< "$SSHPASS" > "$PWFILE"unset SSHPASSfor x in 1 2 3 ; do sshpass -f "$PWFILE" ssh ...doneshred --remove "$PWFILE" This still isn't optimal since there's a chance the password hits the disk, so keys would be better. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188285/"
]
} |
307,951 | If 2 or more consecutive lines contain a specific pattern then delete all matching lines and keep just the first line. In below example when 2 or more consecutive lines contain "logical IO" then we need to delete all matching lines but keep the first line. Input file: select * from test1 where 1=1testing logical IO 24select * from test2 where condition=4parsing logical IO 45testing logical IO 500handling logical IO 49select * from test5 where 1=1testing logical IO 24select * from test5 where condition=78parsing logical IO 346testing logical IO 12 Output file: select * from test1 where 1=1testing logical IO 24select * from test2 where condition=4parsing logical IO 45select * from test5 where 1=1testing logical IO 24select * from test5 where condition=78parsing logical IO 346 | Using awk : awk '/logical IO/ {if (!seen) {print; seen=1}; next}; {print; seen=0}' file.txt /logical IO/ {if (!seen) {print; seen=1}; next} checks if the line contains logical IO , if found and the variable seen is false i.e. previous line does not contain logical IO , then print the line, set seen=1 and go to the next line else go to the next line as the previous line has logical IO For any other line, {print; seen=0} , prints the line and the sets seen=0 Example: $ cat file.txt select * from test1 where 1=1testing logical IO 24select * from test2 where condition=4parsing logical IO 45testing logical IO 500select * from test5 where 1=1testing logical IO 24select * from test5 where condition=78parsing logical IO 346parsing logical IO 346testing logical IO 12$ awk '/logical IO/ {if (!seen) {print; seen=1}; next}; {print; seen=0}' file.txt select * from test1 where 1=1testing logical IO 24select * from test2 where condition=4parsing logical IO 45select * from test5 where 1=1testing logical IO 24select * from test5 where condition=78parsing logical IO 346 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31577/"
]
} |
307,955 | Why does this happen? Everything else printable with uname is shown.I am not looking into fixing this. Even the manual page of uname says that it's a common output. I just want to know why. | POSIX doesn't define -p or -i . In GNU coreutils they are marked as non-portable, as you indicate. The default implementation relies on two optional operating system features, the three-argument form of sysinfo(2) (from SunOS) and the six-argument form of sysctl(3) (from the BSDs); neither of these are available on Linux. Thus on Debian and derived distributions (apart from Ubuntu and its derivatives), you simply get unknown . On Fedora and related distributions, uname is patched to return the machine type ( -m ) as processor ( -p ) and hardware platform ( -i ), with the latter tweaked to produce i386 for any value of the form i?86 . On Ubuntu and derivatives, a variant of the Fedora patch is used, which additionally checks for AMD CPUs on i686 processors and produces athlon instead. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181999/"
]
} |
307,968 | forgive the noobie question... my desktop fails to start, just hangs there with a mobile mouse cursor.when I ctrl + alt + 1 to cli, login, try startx I get Xkeyboard keymap errors; then the connection to X server refused with...xinit: unexpected signal 2 It hints that this could be a missing or incorrect setup of xkeyboard.config please help me sort this out. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188299/"
]
} |
307,969 | Let's suppose in Gentoo Linux I'm emerging a lot of packages with parallel emerging enabled, and one of them fails because compiling its source code takes a lot of RAM, so the compiler ran into an out-of-memory and got axed; this probably happened because the offending package was not the only one that was being built, so if I emerge that package individually, it might build without problems. So I want to emerge only that one package, and then resume the rest of my big previous emerge once it's done. How can I do it? I've seen some solutions posted online such as saving the resume list to a file and then loading it into emerge , but these solutions don't seem to be the best (that one solution didn't seem to support parallel emerging). Ideally, the best solution should allow for issuing emerge --resume to continue the previous emerge after installing the offending package individually. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47085/"
]
} |
307,979 | I have a Linux Debian Voyage installed on a flash card. Works fine, but the flash disk as /dev/sda1 is mounted as read only: /dev/sda1 on / type ext2 (ro,noatime,errors=continue) With mount -o remount,rw / it works: /dev/sda1 on / type ext2 (rw,noatime,errors=continue) I tried with booting a live cd and running this command: fsck -rfv /dev/sda1 Didn't help.How can I fix that for the boot? Or should I make a small startup-script as a workaround? Kind regards UPDATE On startup I saw following: Begin: Checking root file system ... fsck from util-linux 2.25.2fsck: error 2 (No such file or directory) while executing fsck.ext2 for /dev/sda1fsck exited with status code 8done.Warning: File system check failed but did not detect errorsdone. Now I saw following at the end of the boot sequence: Remounting / as read-only ... Done. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/307979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104327/"
]
} |
307,990 | According to the answer to What are login and non-login shells? on Ask Ubuntu, GNOME Terminal is a type of non-login shell.As pointed out in the excellent book, A Practical Guide to Fedora and Red Hat Enterprise Linux, 6th Edition : an interactive non-login shell executes commands in the ~/.bashrc file. The default ~/.bashrc file calls /etc/bashrc. As a result, /etc/profile won't be processed in a non-login shell. However, I found that I have appended the java home path to the PATH variable and, when I use GNOME Terminal and issue the command java , everything goes fine. Also, the value of the PATH variable is the same as the value I defined in /etc/profile . In view of the above-mentioned facts, there is a conflict; what’s wrong with my understanding? | When you log in to your x session via a display manager or in a tty, /etc/profile is (usually - apparently it is being read in your case, though some graphical shells do not read it) sourced by your shell program. After that, a local file (I'm assuming you're using bash here) ~/.bash_profile , ~/.bash_login or ~/.profile will be sourced, and any environment variables defined here will override /etc/profile for the current user. This environment is inherited by any shell you open within the session. This is why we can define environment variables , such as your PATH, in these files. When you open gnome-terminal, yes, by default that starts a non-login shell, but it inherits your user environment already loaded from the login shell or graphical shell. Since ~/.bashrc is sourced when starting an interactive shell (e.g. opening gnome-terminal), it may be used to override some elements of the environment (such as PS1). (gnome-terminal itself is an application, not a shell) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/307990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180407/"
]
} |
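A quick way to observe this from inside a terminal is to ask bash itself which kind of shell it is; a small sketch:
shopt -q login_shell && echo "login shell" || echo "non-login shell"
echo $0
Run inside a gnome-terminal window this typically prints "non-login shell" and a plain bash, whereas a login shell conventionally shows a leading dash, e.g. -bash.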
307,994 | I would like to compute the bcrypt hash of my password. Is there an open source command line tool that would do that ? I would use this hash in the Syncthing configuration file (even if I know from here that I can reset the password by editing the config file to remove the user and password in the gui section, then restart Syncthing). | You can (ab)use htpasswd from the apache-utils package, provided you have version 2.4 or higher. htpasswd -bnBC 10 "" password | tr -d ':\n' -b takes the password from the second command argument -n prints the hash to stdout instead of writing it to a file -B instructs to use bcrypt -C 10 sets the bcrypt cost to 10 The bare htpasswd command outputs in format <name>:<hash> followed by two newlines. Hence the empty string for name and tr stripping the colon and newlines. The command outputs bcrypt with the $2y$ prefix, which may be a problem for some uses, but can easily be fixed by another sed since the OpenBSD variant using $2a$ is compatible with the fixed crypt_blowfish variant using $2y$ . htpasswd -bnBC 10 "" password | tr -d ':\n' | sed 's/$2y/$2a/' Link to htpasswd man page: https://httpd.apache.org/docs/2.4/programs/htpasswd.html Details about bcrypt variants: https://stackoverflow.com/a/36225192/6732096 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/307994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166985/"
]
} |
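To sanity-check the result before pasting it into Syncthing's config, one can confirm the prefix and cost; a sketch (with secret as a stand-in password):
hash=$(htpasswd -bnBC 10 "" secret | tr -d ':\n' | sed 's/$2y/$2a/')
case $hash in '$2a$10$'*) echo "looks like bcrypt, cost 10: $hash" ;; *) echo "unexpected format: $hash" ;; esac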
308,089 | I have a text file as follows: test1,test2,test3test4,test5test6,test7,test8,test9,test10test11,test12,test13,test14 How can I replace commas with semicolons, starting with the second one (and continuing to the last one)? I want to get output as follows: test1,test2;test3test4,test5test6,test7;test8;test9;test10test11,test12;test13;test14 | This can be done like this: $ sed -e 's/,/;/g' -e 's/;/,/1' infiletest1,test2;test3test4,test5test6,test7;test8;test9;test10test11,test12;test13;test14 Explanation s/,/;/g replaces all occurrences of , with ; s/;/,/1 replaces the first occurrence of ; with , If you have GNU sed , you can also try this simple and handy form: sed 's/,/;/2g' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188422/"
]
} |
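The same first-separator-stays idea can be expressed in awk, which some may find easier to tweak; a sketch:
awk -F, '{ out=$1; for (i=2; i<=NF; i++) out = out (i==2 ? "," : ";") $i; print out }' infile
It splits each line on commas, glues field 2 back with a comma, and every later field with a semicolon.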
308,094 | How do I print a 256-colour test pattern in my terminal? I want to check that my terminal correctly supports 256 colours. | 256-colour test pattern For the above, you can use my bash code . ("Look Ma, no subprocesses!") Or for a bash quicky: for i in {0..255} ; do printf "\x1b[38;5;${i}m%3d " "${i}" if (( $i == 15 )) || (( $i > 15 )) && (( ($i-15) % 12 == 0 )); then echo; fidone 24-bit / truecolour test pattern See this question for the full spectrum :) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
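For the 24-bit case, a one-line sanity check (a sketch; it relies on the terminal honouring SGR 38;2 sequences):
printf '\x1b[38;2;255;100;0mTRUECOLOR\x1b[0m\n'
If the word appears in solid orange rather than an approximated palette colour, truecolour output is working.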
308,173 | I've been for long using VIM as my main editor and never touched an IDE since. This works great for most of the programming languages on the market. When it comes to C, though, I still fell limited to simple projects, because writing makefiles is too cumbersome. How do the "Unix as IDE" philosophy deal with makefiles? Is there a tool I'm not aware of that does that particular job from the command line, or is everyone just writing the makefiles themselves? | There are quite a few tools around to generate makefiles. The two most common ones are CMake and Automake ; both of these ask you to describe the components of your project and the desired output, and generate makefiles for you. This is no doubt a matter of opinion, but you'll probably find CMake easier to get to grips with; if you ever need to cross-compile though, you'll end up needing Automake (and Autoconf ). For simple projects, the built-in rules provided with GNU Make can help quite a bit; for example, to build a project consisting of two source files, a.c and b.c , the following Makefile works: all: aa: a.o b.o Running make will figure out that a.c and b.c need to be compiled, and linked to produce a ... (As AProgrammer points out, the built-in rules only go so far, and your makefile needs to specify all the relationships between files, including your project's headers; you'll quickly end up reaching for other tools to help manage dependencies etc.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61175/"
]
} |
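For comparison, a slightly fuller hand-written Makefile for the same two-file project, with an assumed shared header common.h so the dependency bookkeeping is visible (a sketch, not a drop-in):
CC = cc
CFLAGS = -Wall -O2
objects = a.o b.o
a: $(objects)
	$(CC) $(CFLAGS) -o $@ $(objects)
$(objects): common.h
clean:
	rm -f a $(objects)
.PHONY: clean
The built-in rules still do the compiling; the $(objects): common.h line is the part you must maintain by hand, which is exactly the chore tools like CMake and Automake exist to automate.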
308,207 | I am confused about the meaning of the exit code in the end of a bash script:I know that exit code 0 means that it finished successfully, and that there are many more exit codes numbers (127 if I'm not mistaken?) My question is about when seeing exit code 0 at the end of a script, does it force the exit code as 0 even if the script failed or does it have another meaning? | The builtin command exit exits the shell (from Bash's reference ): exit [n] Exit the shell, returning a status of n to the shell’s parent. If n is omitted, the exit status is that of the last command executed. Any trap on EXIT is executed before the shell terminates. Running to the end of file also exits, returning the return code of the last command, so yes, a final exit 0 will make the script exit with successful status regardless of the exit status of the previous commands. (That is, assuming the script reaches the final exit .) At the end of a script you could also use true or : to get an exit code of zero. Of course more often you'd use exit from inside an if to end the script in the middle. These should print a 1 ( $? contains the exit code returned by the previous command): sh -c "false" ; echo $?sh -c "false; exit" ; echo $? While this should print a 0: sh -c "false; exit 0" ; echo $? I'm not sure if the concept of the script "failing" when executing an exit makes sense, as it's quite possible to some commands ran by the script to fail, but the script itself to succeed. It's up to the author of the script to decide what is a success and what isn't. Also, the standard range for exit codes is 0..255. Codes above 127 are used by the shell to indicate a process terminated by a signal, but they can be returned in the usual way. The wait system call actually returns a wider value, with the rest containing status bits set by the operating system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181400/"
]
} |
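A tiny demonstration of the trap-on-EXIT behaviour mentioned above (a sketch):
bash -c 'trap "echo trap ran, status \$?" EXIT; exit 3'; echo "shell returned $?"
This prints the trap message and then "shell returned 3", showing that the trap runs before termination and that the explicit exit code is what the parent sees.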
308,214 | I have a bunch of logs and I don't want to look in each of it individually but instead I'd like to run cat cluster-2010*.log | grep error but the problem is that even though it's returning me what I want I don't have a clue which .log file is containing the error that the previous command was displaying. How can I solve this? | grep will tell you itself if you give it the filenames instead of feeding it their content: grep error cluster-2010*.log | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152365/"
]
} |
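Two related flags worth knowing here (a sketch; -l is POSIX, -H is in GNU and BSD grep):
grep -l error cluster-2010*.log
prints only the names of the files that contain a match, while
grep -H error cluster-2010*.log
forces the filename prefix even when the glob happens to expand to a single file.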
308,253 | What I know and use regularly I love how I can add commands to less with the + -parameter from the commandline. I like this for instant searching: $ less +/DHCP /var/log/syslog# result: syslog at the point of the first occurrence of DHCP Yet I also like to set it to follow the output like this: $ less +F /var/log/syslog# result: a syslog that follows the file, very much like tail -f would. What I would like to use But every once in a while I'd like BOTH. But I do not have any Idea how to do it. $ less +F +/DHCP /var/log/syslog# result: "Pattern not found" - it will actually bunch up both things to one string. Bonus Points to anyone who can tell me how I can automatically filter without having to press at the beginning? $ less +\&DHCP /var/log/syslog# result: "please press enter" after which the filtering search _is_# running correctly. I would love to get rid of that extra <enter> edit2: Funny is, that i can combine these: $ less +?DHCP +G /var/log/syslog# result: jumps to the end and reverse-searches for DHCP# But I have to press enter (which I'd like to avoid) but i can not do this: $ less +G +?DHCP /var/log/syslog# this is being bunched up to ?DHCPG and subsequently not found. So, the order seems to be important, and all strings are interpreted as if it was one? Version info edit here's the version of less installed on my system, but i'd be willing to install another version if neccessary! $ less --versionless 458 (GNU regular expressions)Copyright (C) 1984-2012 Mark Nudelman[...] | I don't think I understand the first part of your question based on your later comments. For me, using less +F +/pattern logfile works and continues to highlight new instances of pattern as they appear in the updated file. As for the bonus part of your question, you can try one of the following commands: less +\&DHCP$'\n' /var/log/syslog or less +\&DHCP^M /var/log/syslog In that second command the ^M is generated by pressing Ctrl-V then Enter . Kinda easier to just press enter when less starts though unless you're looking to script it or something. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77478/"
]
} |
308,260 | I'm trying to understand what this Docker entrypoint does . It seems to me that's a very common pattern when writing Dockerfiles, but my bash skills are limited and I have no idea of all the special bash symbols kung fu. Also, it's hard to google for "--", "$!" etc. What are these called in bash world? To summarize, what is the line below trying to do? if [ "${1#-}" != "$1" ]; then set -- haproxy "$@"fi | The set command (when not setting options) sets the positional parameters, e.g. $ set a b c$ echo $1a$ echo $2b$ echo $3c The -- is the standard "don't treat anything following this as an option" The "$@" are all the existing positional parameters. So the sequence set -- haproxy "$@" Will put the word haproxy in front of $1 $2 etc., e.g. $ echo $1,$2,$3a,b,c$ set -- haproxy "$@"$ echo $1,$2,$3,$4 haproxy,a,b,c | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/308260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109547/"
]
} |
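The whole entrypoint idiom can be exercised straight from the command line; a sketch (the _ fills $0, and -f haproxy.cfg is a made-up option):
sh -c 'if [ "${1#-}" != "$1" ]; then set -- haproxy "$@"; fi; echo "$@"' _ -f haproxy.cfg
prints haproxy -f haproxy.cfg, while passing a non-option first word such as bash leaves the arguments untouched.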
308,269 | While I probably need some kind of monitoring tool like mon or sysstat or something. I am looking for a way to know which tasks take the most of my memory,CPU time etc. While I understand that each workstation/desktop PC is unique, a typical workload on one of my desktops is something like this : Single user (even though the choice is there to have multiple users) games - Aisleriot, kshisen torrent client - qbittorrent mail client - thunderbird messaging clients - empathy, telegram and quasselcore and client. Browser - Firefox and sometimes tor desktop - MATE media player - mpv most of the time it's usually a light workload most of the time but I still see the hdd sensor lighting up which means some background tasks is going intently even though no foreground tasks are happening. While I could use top to find what tasks take most of the CPU and memory cycles, it is only for the moment. I realize I need something which I could figure out over period of time (say a day), runs in the background and produces nice enough graphs to analyze, and most of all has the raw data in user-defined location, say in /home/shirish/mon or whatever directory name is there. It is ok if it is /var/log//logs is where it keeps. I just need to know few things : Which processes take memory and CPU over time, foreground and background. Which background processes take most of the CPU and memory The logging is tunable, taking snaps every 2-5 minutes. I am sure there are tools and ways in which people have done it for servers etc. but has anybody done for the above scenario ? If yes, how they went about it ? | | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/308269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
308,283 | 550 relay not permitted That's the error message when email sent by Exim4 from my Debian laptop bounces. What's weird is that only the first email bounces. Second and subsequent emails pass through the relay and on to their destinations just fine. If I reboot my laptop, though, the first email after reboot bounces again. The relay is plaintext password protected after STARTTLS on port 587. The X.509 certificate is a real one, not a snakeoil. I administer both the relay server and the laptop. The relay server too runs Exim4 on Debian, if it matters. Postfix is not involved. I suppose that one could work around the problem by configuring the laptop to send a dummy email (or maybe even just an SMTP EHLO?) each time it boots, but the behavior worked around just seems odd. I'm stumped. If you know what I should next investigate to solve this, would you advise me? | Your email timeout could well be due to your daemon trying IPv6 first. The IPv6 stack implementation by default has priority over the IPv4 stack, so when programs/daemons try to communicate they will try first to use the IPv6 address, when the destination has both public IPv4 and IPv6 addresses. Even if you do not have public IPv6, you have IPv6 localhost and link-local addresses. It would not be the first time, and probably not the last, that I catch Internet daemons trying the link-local address first as the source IP address to communicate with another address, and only after timing out, if they still have time/tries allotted, falling back to sending the data to the IPv4 destination. (In the past, I already had DNS and email problems due to this in an ISP I used to run.) So for exim, you can disable IPv6 at the application/daemon level, using the directive disable_ipv6=true in /etc/exim4/exim4.conf.template or /etc/exim4/update-exim4.conf.conf depending on whether you are using the non-split or the split configuration scheme. From Exim Internet Mailer-Chapter 14 - Main configuration disable_ipv6 Use: main Type: boolean Default: false If this option is set true, even if the Exim binary has IPv6 support, no IPv6 activities take place. AAAA records are never looked up, and any IPv6 addresses that are listed in local_interfaces, data for the manualroute router, etc. are ignored. If IP literals are enabled, the ipliteral router declines to handle IPv6 literal addresses. An alternative approach might also be binding it only to IPv4 addresses; however, the disadvantage is having to hardcode the IPv4 address(es) in the configuration : local_interfaces = <; 127.0.0.1 ; \ 192.168.23.65 As for the system itself, as you are not actively using IPv6: add the following as the last line of the file /etc/gai.conf , to give priority by default to IPv4: precedence ::ffff:0:0/96 100 Add the following to /etc/sysctl.conf to disable the IPv6 stack by default (setting supported from kernel 3 onwards): net.ipv6.conf.all.disable_ipv6=1 The sysctl will be applied at boot time. To activate it right away, without rebooting, do: sudo sysctl -p Whilst they call it an IPv6 deactivation, the module is still loaded, and while the interfaces do not have IPv6 addresses anymore, you can still see applications connected to their IPv6 sockets. You can also pass the kernel an option to disable IPv6, so that the IPv6 kernel module won't be loaded.
Editing /etc/default/grub : GRUB_CMDLINE_LINUX="ipv6.disable=1" And then to apply it, if you have grub (your grub partition may vary or you might not have it; I do not have it on my ARM servers, and have to edit another file for the kernel options): sudo update-grubsudo grub-install /dev/sda You may have to configure one or another daemon to disable IPv6 at the application level (from the top of my head, xinetd , if you have it installed). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18202/"
]
} |
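After applying the changes, it is easy to confirm the stack is really off before blaming exim again; a sketch:
sysctl net.ipv6.conf.all.disable_ipv6
ip -6 addr show
The first should print a value of 1 and the second should list no addresses once the setting has taken effect.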
308,288 | Why does echo ,,, |sed s':\(,\)\(,\):\1*\2:'g yield " ,*,, " rather than " ,*,*, "? In other words: why, despite the "g" flag, does sed not insert ' * ' between one pair of commas? | Because with the two , s in \(,\)\(,\) , you have already matched the first two , s, and the regex engine won't backtrack over text it has already matched when scanning the rest of the line. Only one , is left now (the last one), so it is just printed as-is, without any * between it and the second-to-last one. If you have another , in the input, you would get the desired (global, g ) response: % echo ,,,, | sed s':\(,\)\(,\):\1*\2:'g,*,,*, | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99298/"
]
} |
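If the goal was genuinely to fill every inter-comma gap, the usual workaround for this no-overlap behaviour is to loop the substitution until it stops matching; a sketch:
echo ,,, | sed -e ':a' -e 's/,,/,*,/' -e 'ta'
which prints ,*,*, because each pass re-scans the line, including text a previous pass already touched.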
308,290 | I would like to tweak vim for using cdo ( Climate Data Operators ) efficiently. Because I need to use cdo in (bash) scripts, I would like to add an autocomplete matching (implemented!) with small description (still searching!). I setup the environment and it works on a basic level. For now, I am wondering whether it's possible to manipulate the output of the pop-up menu on the right side of the matching dictionary keywords. I got a simple keyword matching so far, see picture below: Using the following setup: My .vimrc: set completeopt=longest,menuone Includes different dictionaries, mainly cdo.dic: ~/.vim/ftdetect/cdo.vim: au BufRead,BufNewFile *.sh set dictionary+=~/.vim/dictionary/cdo.dicau BufRead,BufNewFile *.sh set dictionary+=~/.vim/dictionary/hamocc.dicau BufRead,BufNewFile *.sh set dictionary+=~/.vim/dictionary/mpiom.dic My dictionary file ~/.vim/dictionary/cdo.dic: with is matched against abs -abs \ adisit -adisit \ ...around 700 more.... Goal : What I would love to get as output is a small description that is displayed at the right, instead of the filepath of the dictionary file. So preferably a short explanation of the operator (which might be also stored in the dictionary file after the operator?), eg. for illustrative purpose selcode : Select parameters by code number {selcode,code ifile ofile} read from a dictionary line: selcode -selcode {Select parameters by code number [selcode,code ifile ofile]}\ So basically, I quick lookup tool for operator names and short description without large programming, plugins with other external tools. So it's an 'Is it possible and how?' question... I tried so far the vim documentation, and googling about vim, dictionary, complete, completeopt, pmenu, ... I appreciate your suggestions. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188572/"
]
} |
308,297 | Alright, before this is immediately closed as a duplicate, let me explain. I have attempted many methods such as: for d in /Applications ; do echo "$d"done but that returns /Applications instead of the contents of /Applications. I also tried: #!/bin/bashFILES=/Applicationsfor file in $FILESdo echo $filedone Which is basically the same thing. I noticed, though, that when I just do: #!/bin/bashfor file in *do echo $filedone It correctly echoes all of the folders in my home folder. Anyone know what's going on? | As noted by Jeff Schaller , * normally works like ls —it will not show things whose names begin with . (a period). If you are using bash, and you want to get things whose names begin with . , set (turn on) the shell option dotglob with the command shopt -s dotglob This will cause * to work like ls -A — it still won't show . and .. themselves, but will show everything else beginning with . . Your question mentions "everything in a directory". This is a somewhat ambiguous phrase. If you mean everything in the (top-level) /Applications directory, then the other answers are fine. But if you want everything in the /Applications tree (i.e., everything in the /Applications directory and its subdirectories ), and you're using bash, set the shell option globstar with the command shopt -s globstar This will cause ** as a filename component to mean everything here and below. So for file in /Applications/**do echo "$file"done will list all objects (files, directories, etc.) in the /Applications tree . Note that this cannot be combined with other characters at the same component level; i.e., you can't do things like foo**bar . However, you can append other components after the ** . For example, for file in /Applications/**/README will loop through all files named README in the /Applications tree, and for file in /Applications/**/*.txt will find all files whose names end with .txt . You can set multiple options at once; e.g., with shopt -s dotglob globstar See the bash documentation for a complete list of options. You can unset options with shopt -u . You should always quote all your shell variable references (e.g., "$file" ) unless you have a good reason not to, and you're sure you know what you're doing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138280/"
]
} |
308,311 | I created my own service for jekyll and when I start the service it seems like it doesn't run as a background process because I am forced to ctrl + c out of it. It just stays in the foreground because of the --watch. I am not sure how to go around it and make it so that it runs in the background. Any thoughts? # /etc/systemd/system/jekyll-blog.service[Unit]Description=Start blog jekyll[Service]Type=forkingWorkingDirectory=/home/blogExecStart=/usr/local/bin/jekyll build --watch --incremental -s /home/blog -d /var/www/html/blog &ExecReload=/bin/kill -HUP $MAINPIDKillMode=processRestart=on-failureUser=rootGroup=root[Install]WantedBy=multi-user.target | Systemd is able to handle various different service types, specifically one of the following: simple - A long-running process that does not background itself and stays attached to the shell. forking - A typical daemon that forks itself, detaching it from the process that ran it, effectively backgrounding itself. oneshot - A short-lived process that is expected to exit. dbus - Like simple, but notification of the process's startup finishing is sent over dbus. notify - Like simple, but notification of the process's startup finishing is sent via sd_notify. idle - Like simple, but the binary is started after the job has been dispatched. In your case you have picked Type=forking which means systemd is waiting for the process to fork itself and for the parent process to end, which it takes as an indication that the process has started successfully. However, your process is not doing this - it remains in the foreground and so systemctl start will hang indefinitely or until the process crashes. Instead, you want Type=simple , which is the default, so you can remove the line entirely to get the same effect. In this mode systemd does not wait for the process to finish starting up (as it has no way of knowing when this has happened) and so continues on and starts any dependent services straight away. In your case there are none so this does not matter. A small note on security: You are running the service as root; this is discouraged as it is less secure than running it as an unprivileged user. The reason for this is that if there is a vulnerability in jekyll that somehow allows execution of commands (possibly via the code it is parsing) then the attacker needs to do nothing else to completely own your system. If, on the other hand, it is run as a non-privileged user, the attacker is only able to do as much damage as that user and must now attempt to gain root privileges to completely own your system. It simply adds an extra layer attackers must go through. You can simply run it as the same user that is running your web server, but this leaves you open to another potential attack. If there is a vulnerability in your web server that allows the user to manipulate files on your system they can modify the generated html files, or worse the source files, and cause your server to serve anything they want. However, if the generated files and source files are only readable by the webserver and writable by another non-privileged user they will not be able to, as easily, modify them by attacking the web server. However, if you are simply serving static files from this server and keep the server up to date these attacks are very very unlikely - but still possible. It is your responsibility to weigh the risks vs the overhead of setting it up based on how critical your system is, but both of these tips are very simple to set up and have next to no maintenance overhead. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/308311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188594/"
]
} |
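Putting both points together, a hedged rewrite of the unit (assuming a dedicated jekyll system user has been created, e.g. with useradd -r -s /usr/sbin/nologin jekyll, and given write access to the output directory):
# /etc/systemd/system/jekyll-blog.service
[Unit]
Description=Start blog jekyll

[Service]
WorkingDirectory=/home/blog
ExecStart=/usr/local/bin/jekyll build --watch --incremental -s /home/blog -d /var/www/html/blog
User=jekyll
Group=jekyll
Restart=on-failure

[Install]
WantedBy=multi-user.target
Note there is no Type= line (simple is the default), no trailing &, and no ExecReload, since jekyll's watch mode re-reads sources on its own.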
308,315 | Currently when I invoke completion the behaviour is like this: % cd ~/<TAB>Completing directoryDesktop/ Downloads/ Pictures/ system/ Videos/Documents/ Music/ Public/ Templates/ www/ How can I configure the completion to list the hidden files also? | You could add globdots to $_comp_options in your .zshrc e.g. ..... compinit _comp_options+=(globdots) ..... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
308,421 | I would like to be able to save my current environment in a file (for a running interactive session), so that I can: Save it, export/modify/delete variables at will in the running session, then restore the saved environment Switch at will between multiple environment Detect differences between two environment I am only interested in exported variables. As I want to be able to restore the environment it have to be a shell function, I am using bash. Ideally, it would not depends on external programs, and would work on versions of bash from v3.2.25 to current. For now, to save my environment I use the following function: env_save () { export -p > "$STORAGE/$1.sh"} That I use as env_save <filename> in a running session. I have some boilerplate code to keep backups, but let's ignore that. However, I then have difficulties with loading the environment back: env_restore () { source "$STORAGE/$1.sh"} As this would not remove spurious variables that I created in the mean time. That is, calling export -p after env_restore <filename> might not give the same output than cat $STORAGE/$1.sh . Is there a clean way to handle that problem? I will probably need to blacklist some variables such as PWD, OLDPWD, SHELL, SHLVL, USER, SSH_*, STORAGE, etc... That is, those variable should not be saved and should not be changed when restoring as they are special variables. I cannot use a whitelist as I do not know what variables will be there. | POSIXly, you can do: # saveexport -p > saved-env...# restoreblacklisted () { case $1 in PWD|OLDPWD|SHELL|STORAGE|-*) return 0 ;; *) return 1 ;; esac}eval ' export() { blacklisted "${1%%=*}" || unset -v "${1%%=*}" } '"$(export -p)"export() { blacklisted "${1%%=*}" || command export "$@"}. saved-envunset -f export Note that for bash not invoked as sh , you'd need to issue a set -o posix for that to work properly. Also with bash versions prior to 4.4, sourcing the output of export -p is potentially unsafe: $ env -i 'a;reboot;=1' /bin/bash -o posix -c 'export -p'export OLDPWDexport PWD="/"export SHLVL="1"export a;reboot; ksh93 has a similar problem. yash doesn't have that particular one, but still has problems with variable names starting with - : $ env -i -- '-p=' yash -c 'export -p'export '-p'=''export OLDPWDexport PWD='/' Also beware of potential problems if you're not in the same locale when saving and restoring the variables. bash-4.3$ locale charmapISO-8859-15bash-4.3$ export Stéphane=1bash-4.3$ export -p > abash-4.3$ LC_ALL=en_GB.UTF-8 bash -c '. ./a'./a: line 5: export: `Stéphane=1': not a valid identifier | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188719/"
]
} |
308,435 | I'm archiving a few directories every night to LTO-7 tape with about 100 or so large (2GB) files in each of them. As a check that the data has been written correctly, I'm verifying that the number of bytes reported written is the same as what should have been written. I'm first looking at the size of the archive by doing a tar dry-run: tar -cP --warning=no-file-changed $OLDEST_DIR | wc -c Then I'm creating the archive with: tar -cvf /dev/nst0 --warning=no-file-changed --totals $OLDEST_DIR If the filesizes match, then I delete the original file. The problem is that the dry-run has to read the entire contents of the files and can take several hours. Ideally, it should use the reported filesizes, apply the necessary padding / aligning, and report back the size rather than thrashing the disk for hours. Alternatively, is there a better way of checking that the file has been correctly written? I can't trust tar's return code, since I'm ignoring certain warnings (to handle some sort of bug with tar/mdraid) Using du -s or similar doesn't work because the sizes don't quite match (filesystems treat a directory as 4096 bytes, tar treats it as 0 bytes for example). | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187750/"
]
} |
308,476 | Suppose somebody downloaded a Linux distro, like Ubuntu. Suppose further modify one piece of it, say the Window Manager. Would it be perfectly legal for them to sell copies of this slightly modified version of Ubuntu (let's call it Mubuntu = Modified Ubuntu)? What if they made the new window manager portion closed source? Would it still be legal to sell? | Would it be perfectly legal for them to sell copies of this slightly modified version of Ubuntu (let's call it Mubuntu = Modified Ubuntu)? No. While the software licenses may allow you to do this, the trademark license does not: Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu. If you need us to approve, certify or provide modified versions for redistribution you will require a licence agreement from Canonical, for which you may be required to pay. For further information, please contact us (as set out below). and You will require Canonical’s permission to use: (i) any mark ending with the letters UBUNTU or BUNTU which is sufficiently similar to the Trademarks or any other confusingly similar mark, and (ii) any Trademark in a domain name or URL or for merchandising purposes. You would be allowed to sell an unmodified version of Ubuntu, you would be allowed to sell a heavily modified version of Ubuntu that no longer mentions the Ubuntu name, but for this slightly modified version of Ubuntu, you need an agreement with Canonical. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86396/"
]
} |
308,631 | I have a text file: a aa aaa b bb bbb c cc cccd dd ddd e ee eee f ff fffg gg ggg h hh hhh i ii iiij jj jjj How can I process it and get a 2 column file like this: a aaaaa bbb bbbc ccccc ddd ddde eeeee fff fffg ggggg hhh hhhi iiiii jjj jjj Or a three column file like this: a aa aaab bb bbbc cc cccd dd ddde ee eeef ff fffg gg gggh hh hhhi ii iiij jj jj I prefer to get awk solution but other solutions are welcomed too. | Put each field on a line and post-columnate. Each field on one line tr tr -s ' ' '\n' < infile grep grep -o '[[:alnum:]]*' infile sed sed 's/\s\+/\n/g' infile or more portable: sed 's/\s\+/\/g' infile awk awk '$1=$1' OFS='\n' infile or awk -v OFS='\n' '$1=$1' infile Columnate paste For 2 columns: ... | paste - - For 3 columns: ... | paste - - - etc. sed For 2 columns: ... | sed 'N; s/\n/\t/g' For 3 columns: ... | sed 'N; N; s/\n/\t/g' etc. xargs ... | xargs -n number-of-desired-columns As xargs uses /bin/echo to print, beware that data that looks like options to echo will be interpreted as such. awk ... | awk '{ printf "%s", $0 (NR%n==0?ORS:OFS) }' n=number-of-desired-columns OFS='\t' pr ... | pr -at -number-of-desired-columns or ... | pr -at -s$'\t' -number-of-desired-columns columns (from the autogen package) ... | columns -c number-of-desired-columns Typical output: a aa aaab bb bbbc cc cccd dd ddde ee eeef ff fffg gg gggh hh hhhi ii iiij jj jjj | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8839/"
]
} |
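Putting one command from each half of the answer above together, for example tr to flatten and paste to re-columnate into the three-column form:

tr -s ' ' '\n' < infile | paste - - -

Swap paste - - - for paste - - to get the two-column layout instead.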
308,636 | I have a file test.txt that contains many lines like the following:

hello:123: world
hello:783: world
hello:479: world
...

How can I use the sed command to rewrite those lines as follows?

hello:(123, 0): world
hello:(783, 0): world
hello:(479, 0): world
...

Thanks in advance! | How do you like this one? I hope it is what you needed.

sed -e 's/\([0-9]\+\)/(\1, 0)/g'

Test:

echo "hello:123: world
hello:783: world
hello:479: world" | sed -e 's/\([0-9]\+\)/(\1, 0)/g'

Result:

hello:(123, 0): world
hello:(783, 0): world
hello:(479, 0): world | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65170/"
]
} |
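The same substitution can also be written with extended regular expressions, which drops the backslashes around the group. Note that the -E flag is an assumption in strictly portable scripts (it is absent from older POSIX sed), but both GNU and BSD sed accept it:

sed -E 's/([0-9]+)/(\1, 0)/g' test.txt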
308,646 | I found a very nice tutorial on how to create virtual hard disks, and I am considering using these for my work in order to store large datasets, with their associated processing results, reliably and portably. Basically, the tutorial consists in doing this:

dd if=/dev/zero of=MyDrive.vhd bs=1M count=500
mkfs -t ext3 MyDrive.vhd
mount -t auto -o loop MyDrive.vhd /some/user/folder

which creates a 500MB virtual hard drive formatted as ext3 and mounts it somewhere. Now say I use that file and realise I need more than 500MB: is there a way of "dynamically" resizing the virtual disk? (By dynamically I mean other than creating a new, bigger disk and copying the data over.) | There is a better alternative than dd for extending a file. The dd command requires several parameters to run properly (so as not to corrupt your data). I use truncate instead. Despite its name, it can extend the size of a file as well:

truncate - shrink or extend the size of a file to the specified size
-s, --size=SIZE   set or adjust the file size by SIZE
SIZE is an integer and optional unit (example: 10M is 10*1024*1024). Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, ... (powers of 1000). SIZE may also be prefixed by one of the following modifying characters: '+' extend by, '-' reduce by, '<' at most, '>' at least, '/' round down to multiple of, '%' round up to multiple of.

Thus,

truncate -s +1G MyDrive.vhd

safely expands your file by 1 gigabyte. And, yes, it does sparse expansion when supported by the underlying filesystem, so the actual blocks are allocated on demand. When the file has been expanded, don't forget to run resize2fs:

resize2fs MyDrive.vhd

Also, the whole thing may be done online (without unmounting the device) for filesystems that support online resizing:

losetup -c loopdev

updates the in-kernel information on the backing file, and

resize2fs loopdev

resizes the mounted filesystem online. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308646",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45354/"
]
} |
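For completeness, here is the offline path end to end: a sketch assuming the image is currently unmounted, since resize2fs insists on a clean filesystem check before growing an unmounted image:

umount /some/user/folder        # skip if not currently mounted
e2fsck -f MyDrive.vhd           # forced check; resize2fs requires it
truncate -s +1G MyDrive.vhd     # grow the backing file
resize2fs MyDrive.vhd           # grow the filesystem to fill it
mount -t auto -o loop MyDrive.vhd /some/user/folder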
308,668 | Specifically, I am trying to test something on my build server by switching to the "jenkins" user:

sudo su - jenkins
No passwd entry for user 'jenkins' | The error message is pretty much self-explanatory. It says that the user jenkins has no entry in the /etc/passwd file, i.e. the user does not exist on the system. Whenever you do a user-related operation that needs username, password, home directory or shell information, the /etc/passwd file is consulted first; having no entry in that file leads to the very error you are getting. So you need to create the user first (useradd/adduser). As a side note, unless there's a reason to do otherwise, you should create any service-specific (non-human) user such as jenkins as a system user. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/308668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184277/"
]
} |
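A sketch of creating such an account: the home directory and shell below are typical choices for a Jenkins service user, not requirements, so adjust them to your setup:

sudo useradd --system --create-home --home-dir /var/lib/jenkins --shell /bin/bash jenkins
sudo su - jenkins    # should now succeed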
308,687 | I combined the detailed instructions from the original blog post and the more up-to-date instructions from the man page (using dnf instead of yum):

# sudo dnf -y --releasever=24 --installroot=$HOME/fedora-24 --disablerepo='*' --enablerepo=fedora --enablerepo=updates install systemd passwd dnf fedora-release vim-minimal
# sudo systemd-nspawn -D fedora-24
Spawning container fedora-24 on /home/alan-sysop/fedora-24
Press ^] three times within 1s to kill container.
-bash-4.3# passwd
Changing password for user root.
New password:
Retype new password:

Result: passwd: Authentication token manipulation error, and an AVC popup, i.e. a SELinux error. It says passwd is not allowed to unlink (replace) /etc/passwd. One of the suggestions from the "Troubleshoot" button is that I could assign the label passwd_file_t to /etc/passwd. What's wrong, and how can I fix it? | For some reason, dnf didn't set the "right" SELinux label on /etc/passwd. But it did set a label on /bin/passwd. That mismatch is what causes the problem. Further explanations welcomed :).

$ ls -Z fedora-24/etc/passwd
unconfined_u:object_r:etc_t:s0 fedora-24/etc/passwd
$ ls -Z /etc/passwd
system_u:object_r:passwd_file_t:s0 /etc/passwd
$ ls -Z fedora-24/bin/passwd
system_u:object_r:passwd_exec_t:s0 fedora-24/bin/passwd
$ ls -Z /usr/bin/passwd
system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd

Attempting to run restorecon -Rv / inside the container does nothing. IIRC libselinux detects when it's run in a container, and will not do anything.

Solution

We need to run, from outside the container:

restorecon -Rv fedora-24/

It makes sure all the SELinux labels are reset (to the values expected by the container host, i.e. unlabelled). Then we can set the root password successfully. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/308687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
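A quick way to confirm the relabel took effect, reusing the same checks from the answer, before retrying passwd inside the container:

restorecon -Rv fedora-24/
ls -Z fedora-24/etc/passwd fedora-24/bin/passwd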
308,722 | I'm looking for a portable way to obtain the parent block device name (e.g. /dev/sda) given a partition device name (e.g. /dev/sda1). I know I could just drop the last character, but that wouldn't work in some cases: MMC card readers typically have names like /dev/mmcblk0, while their partitions have names like /dev/mmcblk0p1 (notice the extra p). Optionally: some block devices don't have any partition table at all and are formatted as a single partition; in this case, the partition device and the parent block device are the same. LVM volumes are a whole different kettle of fish. I don't need to support them right now, but if taking them into account requires little extra effort, I wouldn't mind. | If you're on Linux you could use lsblk (which is part of util-linux):

lsblk -no pkname /dev/sda1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106927/"
]
} |
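A small wrapper covering the question's no-partition-table case as well: if lsblk reports no parent, the device is taken to be its own parent. This is a sketch assuming util-linux's lsblk; /dev/mmcblk0p1 is just an example input:

part=/dev/mmcblk0p1
parent=$(lsblk -no pkname "$part")
if [ -n "$parent" ]; then
  printf '/dev/%s\n' "$parent"   # lsblk prints the bare kernel name
else
  printf '%s\n' "$part"          # no parent reported: whole-disk device
fi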
308,731 | When using dnf and yum on RPM-based Linux distros (RHEL/Red Hat, Fedora, CentOS, etc.), the utility automatically wraps lines to make the output friendlier for the user to read. This is problematic, as it makes it extremely annoying to work with the data through pipelines. For example:

$ dnf search jenkins-ssh-credentials-plugin-javadoc
Last metadata expiration check: 6 days, 15:30:08 ago on Thu Sep 1 21:09:10 2016.
============= N/S Matched: jenkins-ssh-credentials-plugin-javadoc =============
jenkins-ssh-credentials-plugin-javadoc.noarch : Javadoc for jenkins-ssh-credentials-plugin

$ dnf search jenkins-ssh-credentials-plugin-javadoc | grep ssh
====== N/S Matched: jenkins-ssh-credentials-plugin-javadoc =======
jenkins-ssh-credentials-plugin-javadoc.noarch : Javadoc for
                                              : jenkins-ssh-credentials-plugin

You can see that once the output from dnf is put through grep, it decides to wrap the data in a completely different way than when normally displayed to the user. Multiple issues have been filed about this behavior (#584525, #986740), and they are consistently closed as CLOSED NOTABUG because "Yum is an interactive text-based ui which is not suited, nor intended for piping.". The solution as per the Red Hat developers is to "use a different tool for the job." It seems unreasonable to have to do this, especially when the suggested methods (repoquery, for example) don't even exist within the dnf utilities and require installing a dozen more packages just to parse this output. Ideally a user would be able to just use the data in a pipeline. In lieu of that, it would be nice to have a simple one-liner which could be used to make the data usable. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/308731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14189/"
]
} |
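As a hedged workaround for the piping problem (it swaps dnf search for a different subcommand rather than fixing the wrapping): repoquery, provided by dnf-plugins-core and therefore possibly needing installation first, emits one unwrapped record per matching package:

dnf repoquery --qf '%{name}.%{arch}' 'jenkins-ssh*'

Each match lands on its own line, so grep and friends behave predictably.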