428,279
I have tried two separate scripts for scraping all comments of a YouTube video. Everything works fine, but there is one problem: YouTube video IDs starting with a hyphen, like -FIHqoTcZog, do not work. I was wondering whether there is a way to escape every single character of that ID from shell interpretation, for instance using as an ID: \-\F\I\H\q\o\T\c\Z\o\g, but this did not work in my case. The scripts I used were youtube-comment-downloader and youtube-comment-scraper. Both require a video ID. The ID works even if it is surrounded by single or double quotes, but neither script works if the video ID starts with a hyphen. youtube-dl had a similar issue before, but now it accepts IDs starting with a hyphen: this is done by using the option --id, yet it still does not work in our case unless the hyphen is preceded by --, making the invocation --id -- -FIHqoTcZog, whereas it is just --id xxxxxxxxxxx when the ID does not start with a hyphen. Is there any way around this so my scripts work with IDs starting with a hyphen, like in youtube-dl's case, or some other workaround?
Related question: What does "--" (double-dash) mean? (also known as "bare double dash")

The hyphen character is not interpreted by your shell but by the program/script (its parser, more precisely) you are using. That's why escaping it (at the shell level) doesn't work. Programs often recognize arguments with leading hyphen(s) as options, not as operands. To interpret arguments like -foo as operands, programs usually follow one or more of these ways:

Recognize the first -- argument as the end-of-options marker: program -- -foo
Let you pass operands as option-arguments: program --option -foo
Recognize operands in alternative ways: program prefix-foo

In your specific scenario:

youtube-dl accepts:
-- -FIHqoTcZog
https://www.youtube.com/watch?v=-FIHqoTcZog

youtube-comment-downloader seems to accept:
--youtubeid -FIHqoTcZog

youtube-comment-scraper seems to accept:
-- -FIHqoTcZog
https://www.youtube.com/watch?v=-FIHqoTcZog
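The end-of-options marker can be demonstrated with ordinary utilities, and the ./ prefix trick works too, since the argument then no longer begins with a hyphen (an illustrative sketch, not specific to the scrapers above):

$ touch -- -foo        # create a file literally named "-foo"
$ ls -- -foo
-foo
$ rm ./-foo            # alternative: prefix with ./ so there is no leading hyphen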
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128335/" ] }
428,282
I am attempting to write a script that automates the installation of ports/packages on new FreeBSD installs. To do this, the user who executes the script must be root. The system is "supposed" to be virgin, meaning bash and sudo may or may not be installed, so I am trying to account for that. To do this, I am checking if the user ID equals 0. The problem is that bash and sh use different environment variables:

bash -> $EUID (all caps)
sh   -> $euid (all lower)

Is there a different way, other than the environment variable, to check for the root user, or should I just adjust the check based on which shell is running?
I would check the value of id -u, which is specified to:

Output only the effective user ID, using the format "%u\n".

Perhaps like this:

if [ $(id -u) -eq 0 ]
then
    : root
else
    : not root
fi
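For the original use case (abort early when not run as root), a minimal, shell-agnostic sketch built on the same id -u test:

#!/bin/sh
# works the same under sh and bash; no $EUID/$euid needed
if [ "$(id -u)" -ne 0 ]; then
    echo "this script must be run as root" >&2
    exit 1
fi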
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/428282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107777/" ] }
428,285
The output of du -sh folder is the size followed by the folder name; how can I use grep -o '*G' (or something similar) to get just the size, i.e. get rid of the folder name?
I can offer a simple cut solution:

du -sh . | cut -f1

The standard delimiter in cut is tab, so there is no need for any additional options. Simply print field 1. From your comment, it seems you are concerned with resources/speed, so to quote Gilles from another answer: "Generally speaking, the more specialized a tool is, the faster it is. So in most cases, you can expect cut and grep to be faster than sed, and sed to be faster than awk." Quoted from here

The output of time for both commands shows:

time du -sh /folder | awk '{print $1}'
60K

real    0m0.005s
user    0m0.002s
sys     0m0.004s

time du -sh /folder | cut -f1
60k

real    0m0.003s
user    0m0.000s
sys     0m0.004s

I believe you would need to repeat that multiple times, and take the average, to make it a fair test, but either way there is not much in it. Technically cut should be "faster".
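A rough way to do that repetition, timing many runs in one go (a sketch; /folder is a placeholder path):

time sh -c 'for i in $(seq 1000); do du -sh /folder | cut -f1 >/dev/null; done'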
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37522/" ] }
428,310
This works as expected:

$ echo a b c | xargs --replace="{}" echo x "{}" y
x a b c y

This does too:

$ echo a b c | xargs --max-args=1 echo x
x a
x b
x c

But this doesn't work as expected:

$ echo a b c | xargs --max-args=1 --replace="{}" echo x "{}" y
x a b c y

And neither does this:

$ echo a b c | xargs --delimiter=' ' --max-args=1 --replace="{}" echo x "{}" y
x a y
x b y
x c
 y

I expected this output:

x a y
x b y
x c y

As a workaround, I am using printf and two xargs, but that is ugly:

$ echo a b c | xargs printf '%s\0' | \
> xargs --null --max-args=1 --replace="{}" echo x "{}" y
x a y
x b y
x c y

Any idea why this is happening?
According to the POSIX documentation, xargs should run the given utility with arguments delimited by either spaces or newlines, and this is what happens in the first two examples of yours. However, when --replace (or -I) is used, only newlines will delimit arguments. The remedy is to give xargs arguments on separate lines:

$ printf '%s\n' a b c | xargs --max-args=1 --replace="{}" echo x "{}" y
x a y
x b y
x c y

Using POSIX options:

printf '%s\n' a b c | xargs -n 1 -I "{}" echo x "{}" y

Here, I give xargs not one line but three. It takes one line (at most) and executes the utility with that as the argument. Note also that -n 1 (or --max-args=1) in the above is not needed, as it's the number of replacements made by -I that determines the number of arguments used:

$ printf '%s\n' a b c | xargs -I "{}" echo x "{}" y
x a y
x b y
x c y

In fact, the Rationale section of the POSIX spec on xargs says (my emphasis):

The -I, -L, and -n options are mutually-exclusive. Some implementations use the last one specified if more than one is given on a command line; other implementations treat combinations of the options in different ways.

While testing this, I noticed that OpenBSD's version of xargs will do the following if -n and -I are used together:

$ echo a b c | xargs -n 1 -I "{}" echo x "{}" y
x a y
x b y
x c y

This is different from what GNU findutils' xargs does (which produces x a b c y). This is due to the implementation accepting spaces as argument delimiters with -n, even though -I is used. So, don't use -I and -n together (it's not needed anyway).
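The same idea also works with NUL delimiters directly, which avoids the double-xargs workaround from the question (GNU xargs; a sketch):

$ printf '%s\0' a b c | xargs --null -I "{}" echo x "{}" y
x a y
x b y
x c y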
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94871/" ] }
428,394
My main problem is getting this error: Makefile:463: recipe for target 'znep.out' failed after running make. I was trying to install GPAW (g Projector Augmented Wave method, for DFT simulations) on my machine. ASE is working, and I already installed Libxc and compiled the BLAS libraries as specified here, but when performing 'make' on the extracted package I always get the same error:

~/Downloads/lapack-3.8.0$ make
gfortran -O2 -frecursive -c -o zunt03.o zunt03.f
gfortran -o xeigtstz zchkee.o zbdt01.o zbdt02.o zbdt03.o zbdt05.o zchkbb.o zchkbd.o zchkbk.o zchkbl.o zchkec.o zchkgg.o zchkgk.o zchkgl.o zchkhb.o zchkhs.o zchkst.o zchkst2stg.o zchkhb2stg.o zckcsd.o zckglm.o zckgqr.o zckgsv.o zcklse.o zcsdts.o zdrges.o zdrgev.o zdrges3.o zdrgev3.o zdrgsx.o zdrgvx.o zdrvbd.o zdrves.o zdrvev.o zdrvsg.o zdrvsg2stg.o zdrvst.o zdrvst2stg.o zdrvsx.o zdrvvx.o zerrbd.o zerrec.o zerred.o zerrgg.o zerrhs.o zerrst.o zget02.o zget10.o zget22.o zget23.o zget24.o zget35.o zget36.o zget37.o zget38.o zget51.o zget52.o zget54.o zglmts.o zgqrts.o zgrqts.o zgsvts3.o zhbt21.o zhet21.o zhet22.o zhpt21.o zhst01.o zlarfy.o zlarhs.o zlatm4.o zlctes.o zlctsx.o zlsets.o zsbmv.o zsgt01.o zslect.o zstt21.o zstt22.o zunt01.o zunt03.o dlafts.o dlahd2.o dlasum.o dlatb9.o dstech.o dstect.o dsvdch.o dsvdct.o dsxt1.o alahdg.o alasum.o alasvm.o alareq.o ilaenv.o xerbla.o xlaenv.o chkxer.o ../../libtmglib.a ../../liblapack.a ../../librefblas.a
make[2]: Leaving directory '/home/joshua/Downloads/lapack-3.8.0/TESTING/EIG'
NEP: Testing Nonsymmetric Eigenvalue Problem routines
./EIG/xeigtstz < nep.in > znep.out 2>&1
Makefile:463: recipe for target 'znep.out' failed
make[1]: *** [znep.out] Error 139
make[1]: Leaving directory '/home/joshua/Downloads/lapack-3.8.0/TESTING'
Makefile:42: recipe for target 'lapack_testing' failed
make: *** [lapack_testing] Error 2

I used the default configuration for the 'Makefile', which is proposed in the installation instructions. The default file is in here. Any suggestion? I use Kubuntu 17.10.
After attending an HPC lecture and doing some research, I had the answer. The kernel limits the amount of stack memory available to processes such as these test runs. This limit helps in some cases, when bugs arise and a process starts to allocate unnecessarily large amounts of memory. But sometimes a build legitimately requires more memory than usual, and you start getting errors. The following command removes the stack size limit for the current shell, so the build can use as much stack as it needs:

ulimit -s unlimited

Now everything works fine. Thanks to @steeldriver for the extra questions.
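To see the limit before changing it (the 8192 value shown is a typical Linux default, used here for illustration), and to note that ulimit only affects the current shell and its children, so it must run in the same session as make:

$ ulimit -s          # current stack size limit, in kB
8192
$ ulimit -s unlimited
$ make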
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428394", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112187/" ] }
428,419
Say that I have two files: a.txt and b.txt.

The content of a.txt:

hello world

The content of b.txt:

hello world
something else

Of course I can use vimdiff to check their difference, and I can be sure that a.txt is a subset of b.txt, which means that b.txt must contain all of the lines existing in a.txt (just like the example above). My question is how to record the lines which exist in b.txt but don't exist in a.txt into a file?
comm -1 -3 a.txt b.txt > c.txt

The -1 excludes lines that are only in a.txt, and the -3 excludes lines that are in both. Thus only the lines exclusively in b.txt are output (see man comm or comm --help for details). The output is redirected to c.txt.

If you want the difference between the two files, use diff rather than comm, e.g.

diff -u a.txt b.txt > c.txt
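One caveat worth knowing: comm expects both inputs to be sorted. The example files above happen to be in order already; for arbitrary files, sort them first (a sketch using bash process substitution):

comm -13 <(sort a.txt) <(sort b.txt) > c.txt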
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/428419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
428,481
I have a file that contains a bunch of lines. I would like to remove the entries which start with nnn and are longer than 25 characters. For now I am only able to do it with two separate sed commands:

sed '/^nnn/d' input.txt > output.txt
sed '/^.\{25\}./d' input.txt > output.txt

but this is not my goal. Example input:

nnnASDDGfdgdsfndsjfndsjfdfgGHGHGhfhfd
nnnASDDGfdgdsfnsadb
SADSDDFSDF
rrrRRRRRRRtt
TGGGG

Desired output:

nnnASDDGfdgdsfnsadb
SADSDDFSDF
rrrRRRRRRRtt
TGGGG
sed solution:

sed -E '/^nnn.{23}/d' file

/^nnn.{23}/ - match only lines that start with nnn and continue with at least 23 further characters, i.e. lines that start with nnn and are longer than 25 characters in total
d - delete the matched line

The output:

nnnASDDGfdgdsfnsadb
SADSDDFSDF
rrrRRRRRRRtt
TGGGG

The same with an awk command:

awk '!/^nnn.{23}/' file
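An equivalent grep sketch, which keeps (rather than deletes) the non-matching lines:

grep -vE '^nnn.{23}' input.txt > output.txt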
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279202/" ] }
428,500
I want to replace cat:

var=$(cat "filename" 2>/dev/null)

with the bashism:

var=$(<"filename")

The problem is that I don't know how to make the bashism silent, to avoid warnings such as:

bash: filename: No such file or directory

I've tried this:

var=$(2>/dev/null <"filename")

but it no longer reads existing files into var.
Wrapping the assignment into a compound block and using a redirection on that seems to work:

{ var=$(<"$file"); } 2>/dev/null;

e.g.

$ echo hello > test1; rm -f test2
$ file=test1; { var=$(<"$file"); } 2>/dev/null; echo "${var:-[it is empty]}"
hello
$ file=test2; { var=$(<"$file"); } 2>/dev/null; echo "${var:-"[it is empty]"}"
[it is empty]

Just don't use ( .. ) to create a subshell, since then the assigned variable would be lost.
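An alternative sketch that avoids the error message by testing readability up front (note there is a theoretical race between the test and the read):

if [ -r "$file" ]; then
    var=$(<"$file")
else
    var=""
fi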
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28765/" ] }
428,570
I am trying to grep only for a range of numbers located inside parentheses. My current grep pulls everything within the parentheses, but I just want anything in the range [0001-9999]. How can I fix my grep to do this? I have tried a few different versions of this:

(?<=\()[0001-9999](?=\))

but they don't return anything.

$ cat /tmp/output
This is R3trans version 6.26 (release 745 - 24.03.17 - 20:17:03).
R3trans finished (0000).
R3trans finished (0001).
R3trans finished (9999).
05.03.2018 16:30:02
enserver, EnqueueServer, RED, Running, 2018 02 27 09:15:52, 151:14:10, 19069

Current output

$ cat /tmp/output | grep -Po '(?<=\().*(?=\))'
release 745 - 24.03.17 - 20:17:03
0000
0001
9999

Desired output

$ cat /tmp/output | grep -Po '(?<=\().*(?=\))'
0001
9999
Two approaches to reach the goal:

grep approach (with Perl regex support):

grep -Po '\(\K(?!0000)([0-9]{4})(?=\))' /tmp/output

GNU awk approach:

awk -v FPAT='\\([0-9]{4}\\)' '$1{ n = substr($1,2,4); if (int(n) > 0) print n }' /tmp/output

The output:

0001
9999
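If PCRE support (-P) isn't available, a rough alternative sketch is to extract the four digits with sed and filter out 0000 afterwards (matched against the sample input above, where every numeric match is followed by a closing parenthesis and a period):

sed -n 's/.*(\([0-9]\{4\}\))\..*/\1/p' /tmp/output | grep -v '^0000$'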
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41643/" ] }
428,669
The headline is a generalization of what I want. Specific problem: given some command that outputs multiple timestamps, e.g.:

$ cat timestamps | sort -n
1508349271820
1508349271821
1508349425222
1508349425223
1508349454218
1508349476419
1508349500018
1508349500020
1508349698820
1508349698822
1508350047721
1508350047724
1508351635621
1508351635623
1508351699618
1508351699620
1508351699621
1508351699622
1508351699623
1508352230120
1508352230123
1508352230124
1508352230125
1508352232219
1508352232220
1508352364919
1508352364920
1508352387618
1508352387619

I want to compute the difference of each consecutive pair. I ended up doing something like:

$ wc -l timestamps
29
cat <(sort -n timestamps | head -28) <(sort -n timestamps | tail -28) | sort -n | xargs -n 2 sh -c 'calc $2 - $1' sh
1
153401
1
28995
22201
23599
2
198800
2
348899
3
1587897
2
63995
2
1
1
1
530497
3
1
1
2094
1
132699
1
22698
1

So I managed to get by, but there must be an easier way. The generalization is: given output with multiple lines, how can I compute over a sliding window of x args at once, with a step size of y args?
Awk is well suited for this:

awk 'NR>1{print $1-last} {last=$1}' timestamps

In the above, for each line after the first (NR>1), we print the value on the current line, $1, minus the value on the previous line, last. Then we update the value of last.

Example

$ awk 'NR>1{print $1-last} {last=$1}' timestamps
1
153401
1
28995
22201
23599
2
198800
2
348899
3
1587897
2
63995
2
1
1
1
530497
3
1
1
2094
1
132699
1
22698
1

More complex calculation

The code below starts with the number in the current line, adds twice the number in the preceding line, and then subtracts three times the number on the line five lines previous:

awk '{a[NR]=$1} NR>5{print a[NR]+2*a[NR-1]-3*a[NR-5]}' timestamps
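For the generalized question (a window of x values processed at once, advancing y values per step), a sketch along the same lines, using a window sum as a placeholder for whatever per-window computation is wanted:

awk -v x=3 -v y=2 '
    {a[NR]=$1}
    END {
        for (i = 1; i+x-1 <= NR; i += y) {
            s = 0
            for (j = i; j < i+x; j++) s += a[j]
            print s
        }
    }' timestamps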
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173099/" ] }
428,715
I need to process a file like the one below in a bash script.

input.txt:

host1 53
host1 123
host2 0
host1 222
host3 1
host1 85
host1 25
host1 13
host3 8
host2 90

I need to get, in the result, only one line for each host, based on the maximum value in column 2.

output.txt:

host1 222
host2 90
host3 8

Any ideas?
With GNU sort or compatible: <input.txt sort -k2rn | sort -sbuk1,1 >output.txt
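A brief breakdown of what each stage does (as I read the pipeline):

# sort -k2rn      numeric (-n), reverse (-r) sort on column 2, so each
#                 host's largest value comes first
# sort -sbuk1,1   stable (-s), unique (-u) pass keyed on column 1 only
#                 (-k1,1; -b ignores leading blanks); being stable, it
#                 keeps the first line seen per host, i.e. the maximum
<input.txt sort -k2rn | sort -sbuk1,1 >output.txt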
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81389/" ] }
428,721
In a VM on a cloud provider, I'm seeing a process with a weird random name. It consumes significant network and CPU resources. Here's how the process looks from the pstree view:

systemd(1)───eyshcjdmzg(37775)─┬─{eyshcjdmzg}(37782)
                               ├─{eyshcjdmzg}(37783)
                               └─{eyshcjdmzg}(37784)

I attached to the process using strace -p PID. Here's the output I've got: https://gist.github.com/gmile/eb34d262012afeea82af1c21713b1be9 . Killing the process does not work. It is somehow (via systemd?) resurrected. Here's how it looks from systemd's point of view (note the weird IP address at the bottom):

$ systemctl status 37775
● session-60.scope - Session 60 of user root
   Loaded: loaded
Transient: yes
  Drop-In: /run/systemd/system/session-60.scope.d
           └─50-After-systemd-logind\x2eservice.conf, 50-After-systemd-user-sessions\x2eservice.conf, 50-Description.conf, 50-SendSIGHUP.conf, 50-Slice.conf, 50-TasksMax.conf
   Active: active (abandoned) since Tue 2018-03-06 10:42:51 EET; 1 day 1h ago
    Tasks: 14
   Memory: 155.4M
      CPU: 18h 56min 4.266s
   CGroup: /user.slice/user-0.slice/session-60.scope
           ├─37775 cat resolv.conf
           ├─48798 cd /etc
           ├─48799 sh
           ├─48804 who
           ├─48806 ifconfig eth0
           ├─48807 netstat -an
           ├─48825 cd /etc
           ├─48828 id
           ├─48831 ps -ef
           ├─48833 grep "A"
           └─48834 whoami

Mar 06 10:42:51 k8s-master systemd[1]: Started Session 60 of user root.
Mar 06 10:43:27 k8s-master sshd[37594]: Received disconnect from 23.27.74.92 port 59964:11:
Mar 06 10:43:27 k8s-master sshd[37594]: Disconnected from 23.27.74.92 port 59964
Mar 06 10:43:27 k8s-master sshd[37594]: pam_unix(sshd:session): session closed for user root

What is going on?!
eyshcjdmzg is a Linux DDoS trojan (easily found through a Google search). You've likely been hacked. Take that server off-line now. It's not yours any longer. Please read the following ServerFault Q/A carefully: How to deal with a compromised server . Note that depending on who you are and where you are, you may additionally be legally obliged to report this incident to authorities. This is the case if you are working at a government agency in Sweden (e.g. a university), for example. Related: How can I kill minerd malware on an AWS EC2 instance? (compromised server) Need help understanding suspicious SSH commands
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/428721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30894/" ] }
428,724
Is there a practical and easy way to capture data going through a named pipe? I've tried wireshark, but it only accepts a specific data format. I've also tried cat, but I get mixed results. Thank you
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/428724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279398/" ] }
428,727
I have the following JSON as my input for jq processing:

[
  {
    "Category": "Disk Partition Details",
    "Filesystem": "udev",
    "Size": "3.9G",
    "Used": 0,
    "Avail": "3.9G",
    "Use%": "0%",
    "Mounted": "/dev"
  },
  {
    "Category": "Disk Partition Details",
    "Filesystem": "tmpfs",
    "Size": "799M",
    "Used": "34M",
    "Avail": "766M",
    "Use%": "5%",
    "Mounted": "/run"
  }
]

Using ./csvtojson.sh bb.csv | jq 'map( {(.Category): del(.Category)})' as suggested by @peak here, I've got as far as the JSON below:

[
  {
    "Disk Partition Details": {
      "Filesystem": "udev",
      "Size": "3.9G",
      "Used": 0,
      "Avail": "3.9G",
      "Use%": "0%",
      "Mounted": "/dev"
    }
  },
  {
    "Disk Partition Details": {
      "Filesystem": "tmpfs",
      "Size": "799M",
      "Used": "34M",
      "Avail": "766M",
      "Use%": "5%",
      "Mounted": "/run"
    }
  }
]

All I want is to put the category at the top only once, and to nest the JSON one more level, as I did in the previous step, like this:

[
  {
    "Disk Partition Details": {
      "udev": {
        "Size": "3.9G",
        "Used": 0,
        "Avail": "3.9G",
        "Use%": "0%",
        "Mounted": "/dev"
      },
      "tmpfs": {
        "Size": "799M",
        "Used": "34M",
        "Avail": "766M",
        "Use%": "5%",
        "Mounted": "/run"
      }
    }
  }
]
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/428727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279400/" ] }
428,737
I'm trying to add a random string to each line while running:

awk '{print "name" "'$ran'" "-"$0}' 'myfile'

Before that, the random string is generated:

ran="$(tr -dc '[:alnum:]' </dev/urandom | head -c 6)"

The problem is that it prints the same random string for each line:

nameGQz3Ek-
nameGQz3Ek-
nameGQz3Ek-

What should I do in order to get a different random string for each line?
With awk's system() function:

Sample input.txt:

a
b
c

awk '{ printf "name"; system("tr -dc \047[:alnum:]\047 </dev/urandom | head -c6"); printf "-%s\n", $0 }' input.txt

Sample output:

nameSDbQ7T-a
nameAliHY0-b
nameDUGP2S-c

system(command)
    Execute the operating system command command and then return to the awk program

https://www.gnu.org/software/gawk/manual/gawk.html#index-system_0028_0029-function
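An alternative sketch that reads the command's output into an awk variable with getline, so the string could also be reused within the program; close() makes the command rerun for every input line:

awk '{
    cmd = "tr -dc \047[:alnum:]\047 </dev/urandom | head -c6"
    cmd | getline ran
    close(cmd)
    print "name" ran "-" $0
}' input.txt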
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279406/" ] }
428,747
I've got a Huion DWH69 graphics tablet, and I have Manjaro Linux installed, with two displays connected. It seems to work out of the box. The problem is that I want to bind the tablet surface to the limits of the primary monitor. At the moment the horizontal movements are very fast, since it is a small tablet mapped onto a huge display area. How can I configure that? I guess I have to touch the X server in some way but don't know how.
Also check out bebop's answer here: https://askubuntu.com/questions/839161/limit-a-graphics-tablet-to-one-monitor . It's a longer version with some extras.

For the QUICK AND DIRTY VERSION: I found this, and it fixed my issue, which sounded similar to yours: https://forum.kde.org/viewtopic.php?f=139&t=125532

This is the code for the HUION New 1060, for example: HID 256c:006e Pad

First type:

xinput    # get the IDs for all relevant pieces of my tablet

blah blah: HID 256c:006e Pad    id=17

There might/should be two devices, Pad and Pen. Then do:

xrandr    # get the names of my displays

Look for the ones showing 'connected', like

HDMI-A-0 connected

and maybe

DisplayPort-2 connected

Then you tell xinput to stick the IDs to the screen you want Krita or Photoshop on, such as, if you were using an HDMI port to the main screen:

xinput map-to-output 13 HDMI-A-0
xinput map-to-output 14 HDMI-A-0

That was mine. It resets after a reboot. Thanks to that user timotimo!
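Since the numeric IDs can change between sessions, a sketch for a login/startup script that maps by device name instead (the Pad name is taken from this answer; the Pen name is an assumption and may differ on your device):

xinput map-to-output "HID 256c:006e Pad" HDMI-A-0
xinput map-to-output "HID 256c:006e Pen" HDMI-A-0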
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130327/" ] }
428,825
I'd like to ask: Why is echo {1,2,3} expanded to 1 2 3 which is an expected behavior,while echo [[:digit:]] returns [[:digit:]] while I expected it to print all digits from 0 to 9 ?
Because they are two different things. The {1,2,3} is an example of brace expansion. The {1,2,3} construct is expanded by the shell, before echo even sees it. You can see what happens if you use set -x:

$ set -x
$ echo {1,2,3}
+ echo 1 2 3
1 2 3

As you can see, the command echo {1,2,3} is expanded to:

echo 1 2 3

However, [[:digit:]] is a POSIX character class. When you give it to echo, the shell also processes it first, but this time it is being processed as a shell glob. It works the same way as if you run echo *, which will print all files in the current directory. But [[:digit:]] is a shell glob that will match any single digit. Now, in bash, if a shell glob doesn't match anything, it will be expanded to itself:

$ echo /this*matches*no*files
+ echo '/this*matches*no*files'
/this*matches*no*files

If the glob does match something, that will be printed:

$ echo /e*c
+ echo /etc
/etc

In both cases, echo just prints whatever the shell tells it to print, but in the second case, since the glob matches something (/etc), it is told to print that something. So, since you don't have any files or directories whose name consists of exactly one digit (which is what [[:digit:]] would match), the glob is expanded to itself and you get:

$ echo [[:digit:]]
[[:digit:]]

Now, try creating a file called 5 and running the same command:

$ echo [[:digit:]]
5

And if there is more than one matching file:

$ touch 1 5
$ echo [[:digit:]]
1 5

This is (sort of) documented in man bash in the explanation of the nullglob option, which turns this behavior off:

nullglob
    If set, bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves.

If you set this option:

$ rm 1 5
$ shopt -s nullglob
$ echo [[:digit:]] ## prints nothing

$
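A related option, failglob, makes a non-matching glob an error instead of passing it through literally (shown as a sketch):

$ shopt -s failglob
$ echo [[:digit:]]
bash: no match: [[:digit:]]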
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/428825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268014/" ] }
428,975
I am having trouble getting my touchpad to work. It does not detect movement or clicks most of the time, and will only very sporadically "wake up", respond for ~one second and then stop. The same applies to the trackpoint. I have tried Fedora (27), Mint and Ubuntu (17.10) and the issue is the same in all versions. Everything that follows is w.r.t. Ubuntu 17.10. hwinfo gives Unique ID: AH6Q.Y_f5kDtfqz2. The touchpad does not show up in xinput (it did in Mint, but the problem was also present there). Using libinput debug-events, I get:

(...)
-event5   DEVICE_ADDED     SynPS/2 Synaptics TouchPad   seat0 default group9  cap:pg size 70x50
(...)
(When swiping around on the touchpad, nothing happens. Then, suddenly, it will show:)
-event6   DEVICE_ADDED     PS/2 Generic Mouse           seat0 default group11 cap:p left scroll-nat scroll-button
-event5   POINTER_MOTION   +7.73s    2.98/ 0.00
(...)
-event5   POINTER_MOTION   +7.88s    2.54/ 0.00
(and it will cut out again. When continuing swiping, once the keyboard "wakes up" again, the process repeats.)

What I believe to be a good hint so far is the result from dmesg. It gives the error

psmouse serio1: TouchPad at isa0060/serio1/input0 lost sync at byte 1

multiple times. I have found two ways to circumvent the problem, but both are not satisfactory:

1) modprobe -r psmouse && modprobe psmouse proto=imps will make the touchpad respond, but disables any gestures (two-finger scrolling etc). It also removes the touchpad from the "Settings > Devices > Mouse and touchpad" panel. sudo libinput list-devices confirms that most of the functionality is lost.

2) Disabling the trackpoint in the BIOS also leads to the touchpad working as intended, including two-finger scrolling. It does, however, also disable the physical buttons for the touchpad.

Any advice would be greatly appreciated. Thank you very much!
I also have the model with NFC and the following got both trackpoint and touchpad (with 2-finger scrolling) working: Delete (or comment out) the line i2c_i801 from /etc/modprobe.d/blacklist.conf . Add psmouse.synaptics_intertouch=1 to the GRUB_CMDLINE_LINUX_DEFAULT=... line in /etc/default/grub (caveat: will be reset and needs to be redone after every kernel update). sudo update-grub Reboot. Running Ubuntu 17.10 and kernel 4.16.0 Thanks to user net_life on the Lenovo forum
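For reference, the resulting line in /etc/default/grub would look something like this (the other options shown are common defaults and may differ on your install):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash psmouse.synaptics_intertouch=1"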
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279597/" ] }
429,071
I want to use the new features of C++14 on Linux. Which free compiler supports these features?
According to the standards support pages for clang and gcc , you can use gcc >= 5.0 or clang >= 3.4. Most C++14 support was added in 4.9 for gcc, but a few features did not make it in until 5.0.
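On versions older than gcc 6 (which made gnu++14 the default), the standard has to be requested explicitly, e.g.:

g++ -std=c++14 -o myprog main.cpp
clang++ -std=c++14 -o myprog main.cpp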
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223174/" ] }
429,169
I must be missing something incredibly simple about how to do this, but I have a simple script:

extract () {
  if [ -f $1 ] ; then
    case $1 in
      *.tar.bz2) tar xvjf $1 ;;
      *.tar.gz) tar xvzf $1 ;;
      *.tar.xz) tar xvJf $1 ;;
      *.bz2) bunzip2 $1 ;;
      *.rar) unrar x $1 ;;
      *.gz) gunzip $1 ;;
      *.tar) tar xvf $1 ;;
      *.tbz2) tar xvjf $1 ;;
      *.tgz) tar xvzf $1 ;;
      *.zip) unzip $1 ;;
      *.Z) uncompress $1 ;;
      *.7z) 7z x $1 ;;
      *.xz) unxz $1 ;;
      *.exe) cabextract $1 ;;
      *) echo "\'$1': unrecognized file compression" ;;
    esac
  else
    echo "\'$1' is not a valid file"
  fi
}

The script works, but I can't seem to get it to be executable by default when I log in. I have consulted this very thorough answer: How to define and load your own shell function in zsh, and I have managed to get my $fpath to show the appropriate directory where I have stored the function. In my .zshrc profile, I have added

fpath=( ~/.local/bin "${fpath[@]}" )

to the bottom, which is the path where my functions live. When I input echo $fpath, I get:

/home/me/.local/bin /home/me/.oh-my-zsh/plugins/git /home/me/.oh-my-zsh/functions /home/me/.oh-my-zsh/completions ...

However, it does not work unless I explicitly type autoload -Uz extract each time when I log in. Is there a way I can get the whole directory to autoload when I log in?
You're mixing up scripts and functions.

Making a script

A script is a standalone program. It may happen to be written in zsh, but you can invoke it from anywhere, not just from a zsh command line. If you happen to run a script written in zsh from a zsh command line or another zsh script, that's a coincidence that doesn't affect the script's behavior. The script runs in its own process; it doesn't influence its parent (e.g. it can't change variables or the current directory). Your code accomplishes a standalone task which can be invoked from anywhere and doesn't need to access the state of the shell that runs it, so it should be a script, not a function. A script must be an executable file: chmod +x /path/to/script. It must start with a shebang line to let the kernel know what program to use to interpret the script. In your case, add this line to the top of the file:

#!/usr/bin/env zsh

Put the file in a directory that is listed in the $PATH variable. Many systems set up either ~/bin or ~/.local/bin in a user's default PATH, so you can use these. If you want to add another directory, see http://unix.stackexchange.com/questions/26047/how-to-correctly-add-a-path-to-path

When you type a command name that isn't an alias, a function or a builtin, the shell looks for an executable file of that name in $PATH and executes it. Thus you don't need to declare the script to the shell, you just drop it in the right place.

Making a function

A function is code that runs inside an existing shell instance. It has full access to all the shell's state: variables, current directory, functions, command history, etc. You can only invoke a function in a compatible shell. Your code can work as a function, but you don't gain anything by making it a function, and you lose the ability to invoke it from somewhere else (e.g. from a file manager). In zsh, you can make a function available for interactive sessions by including its definition in ~/.zshrc. Alternatively, to avoid cluttering .zshrc with a very large number of functions, you can use the autoloading mechanism. Autoloading works in two steps:

Declare the function name with autoload -U myfunction.
When myfunction is invoked for the first time, zsh looks for a file called myfunction in the directories listed in $fpath, and uses the first such file it finds as the definition of myfunction.

All functions need to be defined before use. That's why it isn't enough to put the file in $fpath. Declaring the function with autoload actually creates a stub definition that says "load this function from $fpath and try again":

% autoload -U myfunction
% which myfunction
myfunction () {
        # undefined
        builtin autoload -XU
}

Zsh does have a mechanism to generate those stubs by exploring $fpath. It's embedded in the completion subsystem. Put #autoload as the first line of the file. In your .zshrc, make sure that you fully set fpath before calling the completion system initialization function compinit. Note that the file containing a function definition must contain the function body, not the definition of the function, because what zsh executes when the function is called is the content of the file. So if you wanted to put your code in a function, you would put it in a file called extract that is in one of the directories on $fpath, containing

#autoload
if [ -f $1 ]; then
…

If you want to have initialization code that runs when the function is loaded, or to define auxiliary functions, you can use this idiom (used in the zsh distribution).
Put the function definition in the file, plus all the auxiliary definitions and any other initialization code. At the end, call the function, passing the arguments. Then myfunction would contain:

#autoload
my_auxiliary_function () {
  …
}
myfunction () {
  …
}
myfunction "$@"

P.S. 7z x works on most archive types.
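For the script route described above, the one-time installation could look like this (a sketch; it assumes ~/.local/bin is already on $PATH, as in the question):

mkdir -p ~/.local/bin
mv extract ~/.local/bin/extract   # the file begins with the #!/usr/bin/env zsh line
chmod +x ~/.local/bin/extract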
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/429169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277972/" ] }
429,285
[Note: This similar Q concerns the same bash error message. It's been marked a duplicate of this other Q . But because I found a very different source for this error, I will answer my own Q below.] This previously working bash script line while ... do ... done <<< "$foo" one day started producing this error message: cannot create temp file for here-document: Permission denied
I had added umask 777 before the here string. After removing the umask, the error went away. So lesson learned: There is a temporary file created for a here string ( <<< ), and this is related to a here document ( << ), and you must have an appropriate umask set for these to work.
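A reproduction sketch of the failure mode, as I understand it (bash creates the temp file, but the restrictive umask leaves it unreadable when bash reopens it):

$ umask 777
$ cat <<< "hello"
bash: cannot create temp file for here-document: Permission denied
$ umask 022
$ cat <<< "hello"
hello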
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181039/" ] }
429,290
I'm running the following script on multiple machines. It runs fine on a couple of machines, but not on the others, where it gives the error below.

SCRIPT:

host="$(/bin/hostname)";
qSize="$(/usr/sbin/exim -bpc)";
qLimit="100";
sTo="[email protected]";
sFr="root@"$hostname;
if [ $qSize -ge $qLimit ]; then
echo "There are "$qSize "emails in the mail queue" | sed 's/^/To: '"$sTo"'\nSubject: *ALERT* - Mail queue on '"$host"' exceeds limit\nFrom: '"$sFr"'\n\n/' | sendmail -t
else
 echo -e "There are "$qSize "emails in the mail queue"
fi

ERROR!!

sed: -e expression #1, char 79: unterminated `s' command

Does anyone have any idea what the error could be?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
429,296
I am using KDE Plasma 5.12. In the previous version of Plasma I could show the global menu for an application below the title bar(Settings > Application Style > Widget Style > Fine Tuning > Menubar Style: In application). However in Plasma 5.12 I can only show a button in the title bar and click to have a vertical menu, which is annoying because this is slower than the in application menu bar. How can get the "In Application" menu bar back?An image showing the previous setting: https://i.stack.imgur.com/qTMZb.jpg EDIT: also I don't want the global menu on a separate panel (like unity)
My personal opinion, based on some reverse engineering I've just finished on this subject, is that, with all due respect, the KDE developers made a big mess here. In a fit of oversimplification, the option you mention is no longer available. The global menu is now automatically enabled when you place a global menu applet in a panel or add the menu button to the window decoration in the Buttons tab of the Window Decorations module. Otherwise the global menu should be automatically disabled, and the classic "In Application" menu bar used instead. With some exceptions. KCalc, for example, behaves as described: just remove any global menu applet and the application menu button from the window decoration to get back KCalc's "in application" menu. But other applications, such as Ark, KMenuEdit, Muon, Okteta and KHelpCenter, just to mention a few, once you use the application menu button or the global menu applet at least one time, remain in this state even after you remove the application menu button or the global menu applet, with no access to the menu whatsoever. It seems a bug to me. For such applications, you have to manually edit their configuration file (while the application itself is closed, of course). You'll find them in the ~/.config folder; search by the application name. For Ark, the configuration file is:

~/.config/arkrc

There change

MenuBar=Disabled

to

MenuBar=Enabled

This restores the "in application" menu (but remember to remove any global menu applet and the application menu button from the window decoration first!). In addition to the rules above, some applications implement a further mechanism to switch the "in app" application menu on and off, using the CTRL+M hotkey (given that you have restored the "in application" menu as described in point 1). For example, Dolphin and Gwenview support CTRL+M as described. Kate supports CTRL+M but kindly emits a warning before hiding the menu. Konsole instead, which acts as if it's too cool for all the other applications, wants CTRL+SHIFT+M to switch the menu on and off. The selected state is persistent across reboots. And it's not over. Other plasmoids designed as a replacement for the global application menu, such as "Active Window Control", will disable your "in application" menu, despite any other contrary directive you may have set. So, I suggest you make your tests in a clean KDE Plasma environment.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186352/" ] }
429,344
What does the "+" in GTK+ mean and what is its history? I read through https://www.gtk.org/overview.php but did not see mention of the "+" origin or meaning.
They added the + when they redesigned the original GTK (the Gimp ToolKit, based on Motif) to be object oriented (meaning something like GTK on steroids, I guess...). See also the Wikipedia page on GIMP version history.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27902/" ] }
429,374
On a Linux machine, a non-root user opens a file:

$ sudo vi /etc/hosts

and "quits" by typing :sh, thereby getting root access.

1) How does the non-root user become root with the above?
2) Why does Linux allow such a hacking approach that breaches security?
The non-root user became root as soon as they successfully ran sudo (given the assumed root target user); they started running vi as root. When you ask vi for a shell, it dutifully runs a shell, as the current user -- root! I should clarify that you should not "quit" vi with the :sh command, as that's asking for a shell. Quit with :q instead. Linux allows such functionality because that's specifically what sudo is intended to do! Perhaps you've seen the lecture that sudo gives: We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things: #1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility. sudo offers a limited "speed bump" to this when it comes to granting "ALL" access, in the form of the ! negation operator, often demonstrated as: jill SERVERS = /usr/bin/, !SU, !SHELLS where jill is granted permission to run programs from /usr/bin, but not anything listed in the SU or SHELLS aliases. The sudoers man page has a whole "Security Notes" section when it comes to granting large-scale access via sudo and then trying to restrict it. Limitations of the β€˜!’ operator It is generally not effective to β€œsubtract” commands from ALL using the β€˜!’ operator. A user can trivially circumvent this by copying the desired command to a different name and then executing that. and In general, if a user has sudo ALL there is nothing to prevent them from creating their own program that gives them a root shell (or making their own copy of a shell) regardless of any β€˜!’ elements in the user specification. and more pertinently: Preventing shell escapes Once sudo executes a program, that program is free to do whatever it pleases, including run other programs. This can be a security issue since it is not uncommon for a program to allow shell escapes, which lets a user bypass sudo's access control and logging. Common programs that permit shell escapes include shells (obviously), editors , paginators, mail and terminal programs
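sudo also offers a NOEXEC tag, which tries to stop the allowed command from executing further programs (a sketch; note that, as I understand it, this is implemented via LD_PRELOAD-style function interposition and can be defeated by statically linked binaries, so it is a mitigation rather than a guarantee):

jill ALL = NOEXEC: /usr/bin/vi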
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/429374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62659/" ] }
429,408
I have a hacked up Chromebook on which I'm running Gentoo. When I try to compile anything , CPU usage spikes up to 100%, the temperature increases by ~10 degrees C, battery usage spikes (4.X W -> 10 W), and it's a slow process. But I also have an Arch Linux computer running, and I can connect to it over SSH. They are both x86_64 CPUs. Is there any way I could offload the compilation of stuff (Linux kernel, everyday packages, etc.) onto the Arch Linux machine over SSH? I haven't done anything like this before. Might cross-compilation be necessary?
No, you don't have to cross-compile (that would be necessary if you targeted another architecture). There are two ways that I can think of that you could set up your systems to do this:

Use distcc. The Gentoo and Arch wikis do a good job of describing how to install and configure the program, so I won't copy the entire thing here. Briefly, you need to have the following set up in order for it to work:

Your CFLAGS in /etc/portage/make.conf must not use -march=native or -mtune=native, because the remote computer will use its own idea of the "native" CPU, not the local computer's. If you're using "native", find out which flags to use by running:

$ gcc -v -E -x c -march=native -mtune=native - < /dev/null 2>&1 | grep cc1 | perl -pe 's/^.* - //g;'

Both computers need the same compiler and binutils versions.
Both computers need distcc installed, configured and running.

Use a chroot environment on your Arch system with a copy of your Chromebook filesystem (treat this like you're doing an installation of Gentoo, so copy resolv.conf from your Arch installation, and mount the appropriate filesystems inside per the Gentoo installation manual, keeping in mind the warning about /dev/shm if Arch's version is a symlink). It needs to be as close as possible to your Chromebook environment, or else you'll end up with possibly incorrect binaries; if you do a copy, you'll have to rebuild fewer packages. Inside of this environment:

Add FEATURES="buildpkg" to /etc/portage/make.conf. The generated packages will then be in /usr/portage/packages. You can also compile the kernel in this way and simply copy the generated kernel and the appropriate /lib/modules directory to the Chromebook. (Remember that these directory locations are relative to the chroot!) The wiki recommends having an NFS mount or other server so that you don't have to copy files manually; this can be set up on the Arch system proper. I like setting up rsyncd for this purpose, but use whatever method you prefer for file access.

On your Chromebook:

Make sure to add FEATURES="getbinpkg" to /etc/portage/make.conf if you want to prevent it from compiling locally.
If you're using remote file access, add PORTAGE_BINHOST="protocol://path/to/your/chroot/usr/portage/packages" to /etc/portage/make.conf.

Refer to the Binary package guide in the Gentoo wiki for more information. I have done both of these methods in the past, and they both work pretty well. My observations on the two methods:

distcc is finicky to get working, even if you have identical setups on both sides. Keeping gcc and binutils versions the same will be your biggest challenge. Once you get it going, however, it's pretty fast, and if you have extra computers that are fast enough you can add them.
The chroot environment is less finicky, but if you make changes to any part of the portage environment (CFLAGS, USE flags, masks, profiles, etc.) you have to make sure that both sides stay consistent, or else you can end up with packages that have the wrong dependencies. Gentoo is pretty good about making sure the USE flags match, but it doesn't track compiler options in binary packages. One advantage is that you're not limited by the (lack of) disk space and memory on the Chromebook for compilation.
If you're going to use the chroot method, I would make a script to do all the uninteresting work required in setting it up (replace /mnt/gentoo with your chroot location):

cp -L /etc/resolv.conf /mnt/gentoo/etc
mount -t proc proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash

umount -R /mnt/gentoo/dev
umount -R /mnt/gentoo/sys
umount /mnt/gentoo/proc
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201408/" ] }
429,421
When I run chmod +w filename it doesn't give write permission to other , it just gives write permission to user and group . After executing this command chmod +w testfile.txt running ls -l testfile.txt prints -rw-rw-r-- 1 ravi ravi 20 Mar 10 18:09 testfile.txt but in case of +r and +x it works properly. I don't want to use chmod ugo+w filename .
Your specific situation In your specific situation, we can guess that your current umask is 002 (this is a common default value) and this explains your surprise. In that specific situation where umask value is 002 (all numbers octal). +r means ugo+r because 002 & 444 is 000 , which lets all bits to be set +x means ugo+x because 002 & 111 is 000 , which lets all bits to be set but +w means ug+w because 002 & 222 is 002 , which prevents the "o" bit to be set. Other examples With umask 022 +w would mean u+w . With umask 007 +rwx would mean ug+rwx . With umask 077 +rwx would mean u+rwx . What would have matched your expectations When you change umask to 000 , by executing umask 000 in your terminal, then chmod +w file will set permissions to ugo+w. Side note As suggested by ilkkachu, note that umask 000 doesn't mean that everybody can read and write all your files. But umask 000 means everyone that has some kind of access to any user account on your machine (which may include programs running server services ofc) can read and write all the files you make with that mask active and don't change (if the containing chain of directories up to the root also allows them).
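A quick demonstration of the interplay (a sketch, assuming the 002 mask guessed above):

$ umask
0002
$ umask 000        # clear the mask for this shell session
$ chmod +w testfile.txt
$ ls -l testfile.txt
-rw-rw-rw- 1 ravi ravi 20 Mar 10 18:09 testfile.txt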
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/429421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276630/" ] }
429,466
Is it possible to scroll past the bottom in less? Ideally, I'd like to be able to see the last line of the file I am viewing at the top of my terminal window (the rest of the screen may be filled with tildes ( ~ ), which mean empty line/nothing to see here in less).
Yes, using J (as in Shift J ). So you can go to the end of the file with G , then scroll down past the end with J until the last line of the file is at the top of the screen ( less won’t let you scroll any further). K and Y do the same at the top of the file, scrolling up past the beginning until the first line is at the bottom of the screen. As David Ongaro points out, you can use repeat specifiers to avoid having to press J multiple times: G 9 9 J will thus scoll down until the last line is at the top of the screen (unless your terminal has a very large number of rows).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/429466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128489/" ] }
429,476
Trying to substitute single apostrophes with double quotes, but I can't get it right:

sed 's/'/"/'
Single-quotes can't be escaped inside single quotes, so '\'' is the standard way of "embedding" a single-quote inside a single-quoted string in a shell command-line or script. It doesn't actually embed a single quote, but it achieves the desired end result. '\'' = ' (end quote), \' (escaped quote), and ' (start quote). In other words, instead of:

sed 's/'/"/g'

use:

sed 's/'\''/"/g'

Alternatively, double-quotes CAN be backslash-escaped inside double-quotes, so you could use:

sed "s/'/\"/g"

Be careful with this form - you have to escape shell meta-characters you want treated as string literals inside double quotes. e.g. sed 's/foo/$bar/' replaces foo with the string literal $bar, while sed "s/foo/$bar/" replaces foo with the value of the current shell's $bar variable (or with nothing if it isn't defined. Note: some values of variable $bar can break the sed command - e.g. if $bar contains an un-escaped delimiter like bar='a/b', that would cause the sed command to be s/foo/a/b/, a syntax error).

Yet another, probably the best, alternative is to use $'' to quote your string. This style of quoting does allow single-quotes to be escaped. It changes the way the shell interprets the quoted string, so it may have other side effects if the string contains other escaped characters or character sequences (e.g. \xHH - "HH" will be interpreted as the hexadecimal representation of an 8-bit character inside $'', but it would be interpreted as just plain text "xHH" if it was unquoted or inside double-quotes).

sed $'s/\'/"/g'

NOTE: $'' works in bash, ksh, zsh and maybe others. It doesn't work in dash.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280002/" ] }
429,507
a@b:~$ sudo growpart -v /dev/xvda 1
update-partition set to true
resizing 1 on /dev/xvda using resize_sfdisk_dos
6291456000 sectors of 512. total size=3221225472000 bytes
WARN: disk is larger than 2TB. additional space will go unused.
## sfdisk --unit=S --dump /dev/xvda
label: dos
label-id: 0x965243d6
device: /dev/xvda
unit: sectors

/dev/xvda1 : start= 2048, size= 4294965247, type=83, bootable
max_end=4294967296 tot=6291456000 pt_end=4294967295 pt_start=2048 pt_size=4294965247
NOCHANGE: partition 1 could only be grown by 1 [fudge=2048]

a@b:~$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   3T  0 disk
└─xvda1 202:1    0   2T  0 part /
xvde    202:240  0  64G  0 disk

Trying to extend a 2TB partition to 3TB. Is the partition limited to 2TB?
Your drive is formatted as MBR . For drives larger than 2TB, they need to be partitioned as GPT as MBR is limited to 2TB regardless of the OS.
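If converting in place is an option, gdisk can rewrite an MBR partition table as GPT (a sketch; take a backup first, and be aware that booting from the converted disk may need extra work afterwards, e.g. BIOS systems need a small bios_grub partition for GRUB):

sudo gdisk /dev/xvda
# gdisk converts the MBR table to GPT in memory on load;
# inspect with 'p', then write the new table with 'w'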
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429507", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223641/" ] }
429,529
How do I find the DHCP lease time when I am using systemd-networkd? My network is defined in /etc/systemd/network/eth0.network:

[Match]
Name=eth0

[Network]
DHCP=yes

There are other questions on this site asking for the same information, but they weren't using systemd-networkd but dhclient or some other method. I've tried looking in journalctl to no avail. I'm using Arch Linux.
Depending on the OS, enabling debug logging isn't always necessary: systemd-networkd stores the lease info under /run/systemd/netif/leases/, i.e.

cat /run/systemd/netif/leases/2
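As far as I can tell, the file name is the interface's index, which can be looked up with ip link (the "2:" prefix below is illustrative):

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
$ cat /run/systemd/netif/leases/2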
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429529", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
429,535
Every time I boot a Fedora 27 UEFI-installed system it messes with the EFI boot manager entries. For example:

As root, I change the boot order such that Fedora's entry isn't first, and/or I delete the Fedora entry.
At boot, in the system's UEFI boot menu, I boot a generic hard disk boot entry. This boots Fedora fine.
As root, checking with efibootmgr, I see that Fedora somehow managed to add an entry for itself (if it was deleted before) and to put that entry in front of the boot ordering.

This behavior makes sense for standard installations, but not so much if Fedora is installed on a USB stick you want to boot for rescue work, without implicitly changing the EFI boot manager entries. Thus, what Fedora piece is responsible for these boot-time changes? And how can this be disabled?

edit: Another experiment:

As root, delete all Fedora boot entries with efibootmgr and change the boot order to just include one generic entry (000C).
Include efibootmgr in the initramfs (using dracut).
Reboot and drop into the dracut shell. efibootmgr now prints:

BootCurrent: 000C
BootOrder: 000A,0000,......
Boot000A* Fedora    HD(2,GPT,...)/File(\EFI\fedora\shimx64.efi)
Boot000C* UEFI Misc Device 2    PciRoot(0x0)/Pci(0x5,0x0)...
...

The BootCurrent is as expected; the change in BootOrder (it contains everything now) and the new Fedora entry are unexpected. Thus, something running between the shutdown -r now and the initramfs emergency shell has changed the EFI boot manager configuration. It's possible that the UEFI firmware did this change, but I don't see how it would derive the 'Fedora' name and the /EFI/fedora/shimx64.efi path.
It's the shim. With a default Fedora install, the EFI/BOOT/BOOTX64.EFI is a shim (to support Secure Boot) that also executes some fallback logic that restores the Fedora boot manager entry. The 'Fedora' name comes from the EFI/fedora/BOOTX64.CSV file. The fallback logic can be disabled by removing the fallback code and copying the GRUB bits to the BOOT directory, i.e.:

cd /boot/efi/EFI
rm BOOT/fallback.efi BOOT/fbx64.efi
cp fedora/grub*.efi BOOT
cp fedora/MokManager.efi BOOT

The default setup can be restored by removing the copied files and reinstalling the packages:

rm /boot/efi/EFI/fedora/*.efi
dnf reinstall grub2-efi-x64 shim-x64
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
429,612
I noticed strange behaviour when applying an edit using sed -i to a symlink. The documentation says -i will do an in-place edit. However, the symlink is replaced with a regular file. Steps to reproduce:

cd /tmp
echo blah > foo
ln -s foo bar
sed -i -e 's/ah/ub/' bar

ls -l will result in:

-rw-rw-r--. 1 arogge arogge 5 Mar  9 15:07 bar
-rw-rw-r--. 1 arogge arogge 5 Mar  9 15:07 foo

Is this intended behaviour or is it a bug in sed?
This is the expected behaviour. The -i / --in-place flag edits a temp copy of a file and then moves that copy over the original. So when you do:

sed -i 'bla' symlink

what sed is doing is:

sed 'bla' symlink > temp_file
mv temp_file symlink

and hence it destroys the symlink by placing a regular file in its place. Info taken from a comment in How do I prevent sed -i from destroying symlinks?
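GNU sed offers a flag for exactly this situation, and resolving the link by hand first also works (a sketch):

sed -i --follow-symlinks 's/ah/ub/' bar     # GNU sed only
sed -i 's/ah/ub/' "$(readlink -f bar)"      # alternative: edit the target directly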
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429612", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340033/" ] }
429,649
This is my first question ever on Stack Exchange, very excited :) I'm currently going through the OverTheWire war game and learning the basics of ssh. I was stuck at bandit5, for I encountered something that I could not understand. Please refer to Over The Wire War Game - Bandit Level 5. Firstly, here are the steps I took to try and solve the problem:

1. See what was available

bandit5@bandit:~$ ls
inhere

2. Go into 'inhere'

bandit5@bandit:~$ cd inhere
maybehere00 maybehere04 maybehere08 maybehere12 maybehere16
maybehere01 maybehere05 maybehere09 maybehere13 maybehere17
maybehere02 maybehere06 maybehere10 maybehere14 maybehere18
maybehere03 maybehere07 maybehere11 maybehere15 maybehere19

3. Since I did not know which folder was correct, I checked the manual of find and managed to add the file size option.

bandit5@bandit:~/inhere$ find -size 1033c
./maybehere07/.file2

4. Now I knew which file was the correct one, so I accessed it like this...

4a. Go into the folder where the file belonged (sounds logical, right?)

bandit5@bandit:~/inhere$ cd maybehere07

4b. Cat the file so I can access the answer.

bandit5@bandit:~/inhere/maybehere07$ cat ./-file2

Now this produced a weird set of long characters...

J67tSefFKYcCAUUQmclCbDzpijgUE2VZeC2LHFikNP3IuTbERBw6CpeLRqDJskyUvZwpeP6helUWai750jaGVNpGJ94gorbwQLPwHfDwb2XLLzrC4jfmn8JLXT0jeVkIW4VfCqUSeHyKNsozJ2gYgZLInRFlWqxcKG6DR9CIRGAWUKeIBRUN8sxvxdNGvc8jhbg3RIeGq05WlkPxGNPCwxYCcu1hCGqdtfGbqGeyVaYIEDfetHS1siBU1IpM113A2Ysswv79cJ6S2ikv1MpWg8gpWLFaCUCJnyhcLAes1FeQ1e5VqxcxeO11DCxA57thoQ13UnxCBqttGVrez1jmDD22AEVOAASfzbEcXNcmZOBwdbx49AzLyiOmrS2XGZfDKlRVoF09LzUA8XqMPO9B10fSQitGs0Npgy6PQANJNGOVIQoCU4yi4f5lw77KV3f9IGlx2FtChC3F5vyW2fO4YFbp0983sBWScC9UbRhJF1HYCJfRlZ6uuNgcsZJ2I63H7zBPr3t64qEAXABSJcwtiTm68pUuppbApPsA5KjJtC1ih1O3w4kdjnLY2CdLFUZTse9zHzwuoKZNeKL0kkhOqFLDfCetfXlaff3PNmX6q9zw8rfwe1vQSwLOesguhdmArICSQ0Mk86JJQaA79wqt9Eig2BzrSd2Fy5JbxWU7W3zJPnPXA3hCA3lvpe1vlPRIYuU9nnTWhTLlYOlRwuBEoswyFB9QaWOufgNGL85eOJahzeXMLBh8suJlLiz7C4stadra5mdONGv40VzehCM2r6xeQG0JfctB1qX7BBlzB5nJI1g79iK6QBZ655vdMsevMOMj9187wQlWKIRCq8KEfRhs9kii4aJ2l6xsBNxDlaa7Ec3CAfBrumMlIUT4uAHAOKpkoIMGzmmTWsVR1oF48cV8JsOUb92wI7XCz2Ljm8KuTO1RWxJuL3s2K1srWijpnDM4XlQ2PUlvXxRBrBYQF4AFYtLiPSKraimoTST7sxeCrP5OXUpCdFresPVRs7aDQZJz4JOMFdVKP6M4NAu4LomPMGQU84q7YlzIVCkFnGt0nIGBeO7VfwIf6tJbqSWjbiVt7oge2CadpHvPyZRo8QpZJYsJLdvbI8l3Fc2onq6aJi6xDEyle8MQPyWqsIgmDmLA0pDbJYarVgKXyy73QQuvOHk5Fz7ks0KfMaQz94Y3CVemLfPSHpCRTcmOO76suMpIFG0bUDaxGkfw9RCshPGmcNfU4wedjyPlK7Tv0CJVvKpOOy18UW5X9iZ65su5jP5K0mhJTQD71yw7E36FeLi9mf5cS21K8vGWlbt5ggzeUlFkDLV9wIwGK4Ga4zCTfvI2OuCX9mQjzqtMZ59piS6flG9D8zrrwSuxgQ0qTZuWeA660o3nKZuO5M3K1HXfHKFYd33wCdxgLdzaI1KayFO9siDyQY9d5v3mc6lXqFuZOIDmeWQZulZO4OBAYIQ477QRf6mEcSWGve7V4DdGneHg40s93UyhYBthWGfz6bj5nJQNWtgnTbEGyYaHuoaTdw2VAdfxAwWLaiNkzlivEEHKHOjU1hfnwL62REdahU9GyWau8LsZ8jq31TBWxfkhghpLHaKVeFCfStsayhBX4TuHjuVhX6Acl8GIBirk5rQcNUoLupRlqMnnCXDPDiAhLtpTaXO3EYTSU1aUcG9hTG1B0tyBBvw7yQQr349olyczqqgyYpkgd6Lzkc2BlkpjjrNzdUgCZmCZwEA4Ftj4JSb0LZRlt2MbeFMnw33AFoAY3XoSARLuPzlLqE6yTiliGCVUAbVhJkDmP0oSybURITNnCwTvYbbdeXbYbo9BVXMRafxBqZNo4V2lfQdy4WUTgBmhCq0bLyqn7lb8B2E8UuNnVloj4ahn5RrmPfNhRN59X6Ux4nN1ndGj6AOVrJS8BqGMuLKPFIGohyxmylEnTNHbZxg841cLnI57KLQA20DLryXx2qar0X9KvZwoK3Mfm8ydUYlfeAqlzpcfq3rxJAkeV4uIyQMu5ItfXslTTo3pRbbdF8NazwFDEIDzBBBHnA04RW2gdo4FyYKbUHZG2HI8Fc3BQjVLuTJlGH7pfXfubKqza6Q2NJrZ6yGlk1NA2v4XGiAbpl1nonni2u8WnTpNqagMnxbr3fZa1HW0XByt61c1SKMcwKo1PaoPeSvbXOx9ttOCSwoshNSq6GfyWPNUc3iHD3HEIeIfSnJ4G62i0RsLTNxpYfnMk5PjWL7KN83swOBBwYSubE2EWb2nphWADWZo6aeOnoxTcP6Rfl79rCq9P28xiNnV83QG8MVDnEpih2YXQZ5yP66TfoIv3Jth5kRWApANFg6trS6UPHsvEIRBUjknjqdL
zuGUo86C76a1nXvTXKXiXOFKkpmdd1OZ2Km9ModpTFjLcNePOQYkrvpufMJFtBgyEfWSs52rzbpzTqZST7vmLPEI0iD2PuCCBHwx1P14n1HPfwNdvDezkllurmVodiE

Which I thought was the password at first, but of course it wasn't :( So I did a little bit of research and realised all the other players were using the following commands to cat / access the file:

bandit5@bandit:~/inhere$ cat maybehere07/.file2

So the question is, what is the difference between:

bandit5@bandit:~/inhere/maybehere07$ cat ./-file2

and

bandit5@bandit:~/inhere$ cat maybehere07/.file2

and why has it produced that kind of output? Thank you all in advance,
This is the expected behaviour. The -i / --in-place flag edits a temp copy of a file and then moves that copy over the original. So when you do:

sed -i 'bla' symlink

what sed is doing is:

sed 'bla' symlink > temp_file
mv temp_file symlink

and hence destroying the symlink by placing a regular file in its place. Info taken from a comment in How do I prevent sed -i from destroying symlinks?
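If you need the symlink to survive, GNU sed has a --follow-symlinks option for exactly this case. A minimal sketch (the symlink name link.txt is made up for illustration):

# GNU sed only: edit the file the symlink points to, not the link itself
sed --follow-symlinks -i 's/foo/bar/' link.txt

# portable alternative: write to a temp file, then overwrite in place;
# "cat >" reuses the existing inode, so the symlink stays a symlink
sed 's/foo/bar/' link.txt > tmp.$$ && cat tmp.$$ > link.txt && rm tmp.$$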
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429649", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280151/" ] }
429,664
tl;dr: We build an RPM package that automatically detects dependencies (no Requires in the .spec file). How can I remove dependencies from this RPM package? Long story: I'm shipping the dynamic libraries along with the binary, but RPM's automatic dependency mechanism obviously lists those shipped libraries as dependencies too. How can I avoid this?
If you don't want rpm to process these dependencies automatically, you can use:

AutoReqProv: no

However, I have packaged binaries together with the libraries they depend on myself multiple times, and rpm has never caused me any trouble in that way; maybe your way of packaging is not optimal? For further reading on the automatic dependencies: http://ftp.rpm.org/max-rpm/s1-rpm-depend-auto-depend.html
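As a rough sketch of where the directive goes (the package name and fields below are invented for illustration), the line sits in the preamble of the .spec file:

Name:           mytool
Version:        1.0
Release:        1
Summary:        Example package that bundles its own libraries
License:        MIT
AutoReqProv:    no

After building, you can confirm which dependencies actually ended up in the package with:

rpm -qp --requires mytool-1.0-1.x86_64.rpm

If you only want to suppress the automatic Provides (so the bundled libraries are not advertised to other packages) while keeping the automatic Requires, the separate AutoProv: no and AutoReq: no directives let you control the two sides independently.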
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161326/" ] }
429,729
After a dist-upgrade I can not update Kali and sudo apt-get update returns this error:

Err:1 http://http.kali.org/kali kali-rolling InRelease Connection failed [IP: 192.99.200.113 80]
Reading package lists... Done
W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease Connection failed [IP: 192.99.200.113 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.

I searched for update errors in Kali but couldn't find this error. This is my sources.list file:

#
#deb cdrom:[Debian GNU/Linux 2017.3 _Kali-rolling_ - Official Snapshot amd64 LIVE/INSTALL Binary 20171109-13:49]/ kali-rolling contrib main non-free
#deb cdrom:[Debian GNU/Linux 2017.3 _Kali-rolling_ - Official Snapshot amd64 LIVE/INSTALL Binary 20171109-13:49]/ kali-rolling contrib main non-free
deb http://http.kali.org/kali kali-rolling main non-free contrib
# deb-src http://http.kali.org/kali kali-rolling main non-free contrib

How can I fix this?
Can you see the repository (http.kali.org/kali) in a browser? Does it show 'Index of /kali'? If you can't see the index, it may be that a firewall/proxy is blocking your connection; please check with your network admin in that case. Also try to open the URL https://http.kali.org/kali from a browser. If it shows the index in the browser, then go for Solution 1.

Solution 1: Try using the https repository by executing the following command:

echo "deb https://http.kali.org/kali kali-rolling main non-free contrib" > /etc/apt/sources.list

Then try sudo apt-get update. If you find the same error, please choose another solution.

Solution 2: Please execute the following command:

apt-key adv --keyserver hkp://keys.gnupg.net --recv-keys 7D8D0BF6

Then try sudo apt-get update. If you find the same error, please choose another solution.

Solution 3: Please keep a backup copy before changing the sources.list file. Using a text editor, add these lines to /etc/apt/sources.list:

deb http://http.kali.org/ /kali main contrib non-free
deb http://http.kali.org/ /wheezy main contrib non-free
deb http://http.kali.org/kali kali-dev main contrib non-free
deb http://http.kali.org/kali kali-dev main/debian-installer
deb-src http://http.kali.org/kali kali-dev main contrib non-free
deb http://http.kali.org/kali kali main contrib non-free
deb http://http.kali.org/kali kali main/debian-installer
deb-src http://http.kali.org/kali kali main contrib non-free
deb http://security.kali.org/kali-security kali/updates main contrib non-free
deb-src http://security.kali.org/kali-security kali/updates main contrib non-free

Then try sudo apt-get update. If you find the same error, please choose another solution.

Solution 4: Are you using a proxy server? Then check the file /etc/apt/apt.conf and add the following three lines to it:

Acquire::http::proxy "http://proxy:port/";
Acquire::ftp::proxy "ftp://proxy:port/";
Acquire::https::proxy "https://proxy:port/";

Write your IP address in place of 'proxy' and your port number in place of 'port'. Then try sudo apt-get update. If you find the same error, please choose another solution.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/429729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272855/" ] }
429,760
I am trying to run an executable on a remote server, to which I connect via ssh -Y. I think the executable uses OpenGL. The server runs Ubuntu and the local system runs OSX. ssh -Y normally opens a display on my local machine by X11. This works well with other applications (firefox, matlab etc..) This time I get the message:

libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error of failed request:  GLXBadContext
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  6 (X_GLXIsDirect)
  Serial number of failed request:  35
  Current serial number in output stream:  34
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  24 (X_GLXCreateNewContext)
  Value in failed request:  0x0
  Serial number of failed request:  34
  Current serial number in output stream:  35

I also ran glxinfo (I was trying things I found on forums) and got this:

name of display: localhost:11.0
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error of failed request:  GLXBadContext
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  6 (X_GLXIsDirect)
  Serial number of failed request:  23
  Current serial number in output stream:  22

Could someone help with this? Thank you!
EDIT May 5th, 2021: With the release of XQuartz 2.8.0, the configuration path appears to have changed from org.macosforge.xquartz.X11 to org.xquartz.X11. The same instructions still apply, just replace the old path with the new if you are from the future.

Although the answers here have fixes, I'm submitting another one that I can use for future reference when this issue comes up every other year in my work :) It happens often when X forwarding (via SSH, Docker, etc). You need to allow OpenGL drawing (iglx), which by default is disabled on a lot of X11 servers (like XQuartz or the standard X11 server on Ubuntu). Some other logs you may see related to this are below.

XRequest.155: GLXBadContext 0x500003a
XRequest.155: BadValue (integer parameter out of range for operation) 0x0
XRequest.155: GLXBadContext 0x500003b
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
../../src/xcb_io.c:259: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.

The fix is to enable iglx. First, check if you have an XQuartz version that supports this feature. The latest as of this writing does, but it is deprecated so may not in the future. My version is XQuartz 2.7.11 (xorg-server 1.18.4). Next, run:

defaults write org.macosforge.xquartz.X11 enable_iglx -bool true

You should be able to confirm it is set by running:

$ defaults read org.macosforge.xquartz.X11
{
    "app_to_run" = "/opt/X11/bin/xterm";
    "cache_fonts" = 1;
    "done_xinit_check" = 1;
    "enable_iglx" = 1;    ####### this should be truthy
    "login_shell" = "/bin/sh";
    "no_auth" = 0;
    "nolisten_tcp" = 0;
    "startx_script" = "/opt/X11/bin/startx -- /opt/X11/bin/Xquartz";
}

Finally, restart XQuartz (or your whole machine). You may need to re-run xhost + to disable security & authentication (fine for isolated machines, dangerous for internet-exposed). You should now be able to run your GUI applications as expected. Hope this helps!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170338/" ] }
429,869
The study guide LPIC-1 Training and Preparation Guide (Ghori Asghar, ISBN 978-1-7750621-0-3) contains the following question ... Which of the following commands can be used to determine the file type? (A) file (B) type (C) filetype (D) what ... and claims that the answer is: "(B) type ". But isn't "(A) file " the correct answer? I'm beginning to doubt the entire book.
Yes it seems like your book is wrong. The file command tells what kind of file it is. From the man file: "file -- determine file type". A few examples:

$ file /usr/bin/file
/usr/bin/file: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=ecc4d67cf433d0682a5b7f3a08befc45e7d18057, stripped
$ file activemq-all-5.15.0.jar
activemq-all-5.15.0.jar: Java archive data (JAR)

The type command is used to tell if a command is built in or external:

$ type file
file is /usr/bin/file
$ type type
type is a shell builtin
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280366/" ] }
429,877
I'm running this script in a terminal and getting the desired results, but when I set a cron job to run it from /root/somefolder/ every 5 min, it does not do what it's supposed to do. My root user crontab entry looks like this:

*/5 * * * * ~root/somedirectory/script.sh

The script is:

#!/bin/bash
## Variables ##
host="`/bin/hostname`";

## Limits ##
OneMin="1";
FiveMin="6";
FifteenMin="6";

## Mail IDs ##
To="someone@somedomain";
Fr="root@"$host;

## Load Averages ##
LA=(`uptime | grep -Eo '[0-9]+\.[0-9]+' | cut -d"." -f1`)

## Top Process List ##
tp=(`ps -ef | sort -nrk 3,3 | grep -E "(php|httpd)" | grep -v root | head -n30 | awk '{print $2}'`)

## Actions ##
if [ ${LA[0]} -ge $OneMin ]; then

## Send Mail ##
echo -e "From: $Fr
To: $To
Subject: *ALERT* - Current Load on '$host' Is High

 Load Averages Are: \n\n 1:Min\t5:Min\t15:Min \n${LA[0]}\t${LA[1]}\t${LA[2]} \n\n List Of Processes That Were Killed \n" | sendmail -t

## Kill Top Processes ##
for i in $tp ; do
 kill -9 $i
done

fi

Issues: None of the recipients in the $To variable get any alert when the script is run via cron, even though the if statement evaluates true; but when it is run in a terminal, everyone gets an email. I tried pasting all email IDs directly into the To: field, because I thought it was not reading the $To variable:

To: someone@somedomain instead of $To

But still none of the recipients gets any alert, and no actions seem to be performed.
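Two things stand out, though without the cron logs this is partly guesswork. First, cron runs with a minimal environment — typically PATH=/usr/bin:/bin and /bin/sh as the shell — so commands that live elsewhere, like sendmail (usually /usr/sbin/sendmail), are silently not found. Use absolute paths for external commands, or set PATH at the top of the crontab:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * /root/somedirectory/script.sh

Second, there is a bash bug that bites regardless of cron: tp is an array, and an unquoted $tp expands only to its first element, so

for i in $tp ; do kill -9 $i; done

kills at most one process. Iterate over the whole array instead:

for i in "${tp[@]}"; do kill -9 "$i"; done

To see what is actually failing under cron, capture the script's output, e.g. append >> /tmp/script.log 2>&1 to the crontab line (or set MAILTO) and inspect the errors.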
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
429,882
I am running Xubuntu 16.04 with kernel 4.4.0-116-generic (but the same goes for earlier versions) and 16 GB of RAM. I am using a "traditional" hard drive (no SSD) and my swappiness is 0 (RAM is rarely full). I have been experiencing the following with hibernation: hibernation proper takes less than 2 minutes, but resuming from it takes much, much more time until the applications are responsive (today it took more than 10 minutes to get to the light-locker prompt). I do not think this is usable (it takes more time to resume from hibernation than booting to a fresh session and reopening the programs). How can I improve performance? Am I the only one experiencing these problems? True, memory usage right now is 8.8 GB, but what bugs me is the discrepancy between hibernating and resuming times. I understand from other questions that it may be because during hibernation, the kernel freezes all processes and dumps the RAM en bloc to swap, while on resuming it just lets processes ask for their pages in swap. Is this a valid explanation? If so, why is it done this way, as reading big blocks from a hard drive is faster than random accesses? Can I configure hibernation to not proceed this way? The question "Restoring in-memory/swapped page state on resume from hibernation" seems related, but I do not know enough about pages to really understand if what they do ("take note of the tags that label what pages are on disk and in RAM, then restore this exact state on resume") is useful, and I also do not know how to do it.
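I can offer a likely explanation rather than a verified fix. The kernel's hibernation code does read the image back en bloc before thawing processes, but by default it does not put all of RAM into that image: it first shrinks memory down to the limit in /sys/power/image_size (by default roughly 2/5 of RAM), pushing the rest out to swap. Those evicted pages are then faulted back in on demand after resume — as slow random reads on a spinning disk — which matches your pattern of a fast hibernate and many minutes of sluggishness afterwards. If that is what is happening, raising the limit so that more of your 8.8 GB ends up in the sequentially-read image should help, at the cost of a longer hibernate:

# untested sketch: allow a larger hibernation image (value is in bytes)
echo $((12 * 1024 * 1024 * 1024)) | sudo tee /sys/power/image_size

See Documentation/power/interface.txt in the kernel sources for the semantics of image_size (the value is a non-binding target; 0 means "as small as possible").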
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244454/" ] }
429,898
I'd like to load scripts through a script that I called loader.sh.

#### Loader.sh
uload() {
    # ...
    if ! is_loaded $file
    then
        . "${file}" || exit 3
        if [[ ! -z "${callback}" ]]
        then
            "${callback}"
        fi
    fi
}

### log.sh
declare -r WARN=3
warn() {
    echo "${WARN}: $@" >&2
}

Use case:

. "${loader.sh}"
uload "log.sh"
warn 'test'

This causes this error:

WARN: unbound variable

Why am I getting this error?
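The culprit is that declare creates a local variable when it is executed inside a function. Your log.sh is sourced from inside the uload function, so declare -r WARN=3 runs in uload's scope: WARN exists while uload is running and disappears as soon as it returns. When warn later expands ${WARN} — presumably under set -u / set -o nounset, which is what produces the "unbound variable" message — the variable is gone. Two ways around it, assuming bash 4.2+ for the first:

# in log.sh — force global scope even when sourced from inside a function
declare -g -r WARN=3

# or avoid declare entirely
WARN=3
readonly WARN

Note that readonly variables bring their own wrinkle here: sourcing log.sh twice will fail on the re-assignment, so your is_loaded guard is doing real work.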
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142331/" ] }
429,946
So I am currently using Manjaro Linux and urxvt as my terminal, which I love — so before all this starts, switching the terminal is not an option, sorry if I am being rude. I installed zsh as my default shell and added the theme robbyrussell through oh-my-zsh. At first it was all fine and everything was working, but after an update my icons broke. Particularly (if you are familiar with the theme) the arrow icon broke, and the same with all the icons in all the other themes. This problem occurs only with urxvt, because when I try with some other terminal such as sterminal the theme works. Some screenshots you can see here. This is how it's supposed to look (screenshots taken from sterminal). And this is how it looks (screenshots taken from rxvt). I have been asking for help on Reddit and in GitHub repos such as oh-my-zsh and the official robbyrussell repo, but no one seemed to help me, so I am really hoping you guys will give me a hand. Here is some information about my OS and terminal:

URXVT Version 9.22
Operating System: Manjaro i3 4.12.24-1
No Desktop Environment
i3wm as window manager

I am using the default oh-my-zsh configuration for my zshrc file and the default Xresources from Manjaro i3, which you can find here. If you need further information just tell me. Any help would be greatly appreciated!
First of all, there is a significant difference between the terminals types rxvt and unicode-rxvt (often abbreviated to urxvt). You have indicated that the terminal you are using is "URXVT Version 9.22", so to avoid confusion, please use the correct name which is not rxvt but urxvt. As Mikel has pointed out, the Xresources file is telling urxvt to use the 9x15 font which is (a) the old style X11 server provided font method and (b) a limited capability bitmap font. The oh-my-zsh Github README file explains many themes require installing the Powerline Fonts in order to render properly So in order to show the correct arrow shape you need to have the terminal using the appropriate font. Perhaps your update which broke the feature was an update which reset the font usage by urxvt? As you state that sterminal displays the prompt correctly, check which font that is using, then change the .Xresources in your ${HOME} directory to use that font after verifying that it works with the manual test urxvt -font "font_name". (For the newer method of Xft supplied fonts, font_name is preceeded by "xft:" and followed by ":size=12" for font size). Having checked in my urxvt, it seems quite a number of well known truetype and opentype monospace fonts do not provide the "right arrow" glyph and just show an empty box. However one readily available standard font that does work (and should be installed on your system) is Deja Vu Sans Mono. So try firing up a urxvt with urxvt -font "xft:Deja Vu Sans Mono:size=12" & and see if your prompt is correctly displayed. Take a look at https://bbs.archlinux.org/viewtopic.php?id=173477 for discussion on modifying font resource specification for urxvt in an Xresources/Xdefaults file. PS Do not forget that you can use multiple urxvt terminals more efficiently if you first start the urxvtd daemon and then fire up terminals with urxvtc. ADDENDUM Thanks for confirming that you are using urxvt and you have DejaVu Sans Mono installed. Confirm that there is no font substitution happening with the command entered in terminal at the prompt fc-match "DejaVu Sans Mono" producing the output DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book" The actual font file location and styles available for the font can be verified with fc-list | grep --color 'DejaVu Sans Mono' Now assuming that is all okay, you need to check by firing up a urxvt from the command line of a terminal (sorry for not making that absolutely clear above and I had space between Deja and Vu which might have caused a problem) urxvt -font "xft:DejaVu Sans Mono:size=12" & that you can cut'n'paste the right-arrow character (from here) "➜" into that urxvt and that it displays correctly which I have checked does work. I can also confirm that putting the following into an Xresources file URxvt.font: xft:DejaVu Sans Mono:autohint=true:size=12 URxvt.boldFont: xft:DejaVu Sans Mono:autohint=true:bold:size=12 URxvt.italicFont: xft:DejaVu Sans Mono:autohint=true:italic:size=12 URxvt.boldItalicFont: xft:DejaVu Sans Mono:autohint=true:bold:italic:size=12 and loading into the Xorg server resource database with xrdb -merge Xresource_file_name to be 100% certain those values will be used and then firing up a terminal with just urxvt at the command line results in a terminal in which the font correctly shows the right arrow character. (you should also notice that characteristic of this font, the l characters are curly and that there is a dot in the center of the zero characters). 
The font I normally use in urxvt "Luxi Mono" (easier to read, easy on the eyes IMHO) does not display the right arrow correctly even though the "font-manager" program reveals that "Luxi Mono" does have the glyph. Similarly xterm is also broken but a test in lxterminal, mate-terminal, and xfce4-terminal (checked in preferences that font is set to Luxi Mono) all display the right-arrow correctly. So it does appear that something is broken for some fonts in urxvt and xterm (which if I understand correctly share some code origins) just as the others which work similarly share some common code viz libvte.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/429946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277140/" ] }
429,969
I am reading The C Programming Language (2nd Edition). On page 157 and 158 the author gives a code snippet of fopen in the Unix system. At the end of the snippet the author added: In particular, our fopen does not recognize the "b" that signals binary access, since that is meaningless on UNIX systems, nor the "+" that permits both reading and writing. Why does the author say it's meaningless? (The "b" and "+" mentioned here are file access modes)
Some non-Unix systems treat binary and text files in different ways. For example under DOS, Windows and OS/2 (which wouldn’t have been relevant when fopen was designed, but serve as useful examples), opening a file in text mode and writing to it will convert line endings from β€œC” convention ( \n ) to whatever the platform requires. On other systems, opening a file in binary mode will cause it to be processed in records. This is what fopen ’s β€œb” flag controls: files opened without it are opened in text mode, files opened with it are opened in binary mode. Since Unix-style systems don’t have this distinction, β€œb” is ignored (and doesn’t cause an error). My copy of the book doesn’t mention β€œ+”, but I’m guessing fopen didn’t support it then ( it does now ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280417/" ] }
429,987
I have this CSV file:

"ADFS-Administrators","Administrator-Access","arn:aws:iam::279052847476:saml-provider/companyADFS"
"ADFS-amtest-ro","arn:aws:iam::279052847476:saml-provider/companyADFS"
"AWSAccCorpAdmin","arn:aws:iam::279052847476:saml-provider/LastPass"
"AWScompanyCorpAdmin","arn:aws:iam::279052847476:saml-provider/LastPass"
"AWScompanyCorpPowerUser","arn:aws:iam::279052847476:saml-provider/LastPass"
"flowlogsRole","oneClick_flowlogsRole_1495032428381",
"companyDevShutdownEC2Instaces","oneClick_lambda_basic_execution_1516271285849",
"companySAMLUser","arn:aws:iam::279052847476:saml-provider/companyAzureAD"
"lambda_stop_rundeck_instance","oneClick_lambda_basic_execution_1519651160794",
"OneLoginAdmin","arn:aws:iam::279052847476:saml-provider/OneLoginAdmin"
"OneLoginDev","arn:aws:iam::279052847476:saml-provider/OneLoginDev"
"vmimport","vmimport",
"workspaces_DefaultRole","SkyLightServiceAccess",

I want to add another comma in each line if, after the first comma, there is a string which starts with arn:aws:iam.

Desired output (partial):

"ADFS-amtest-ro",,"arn:aws:iam::279052847476:saml-provider/companyADFS"
"AWSAccCorpAdmin",,"arn:aws:iam::279052847476:saml-provider/LastPass"
"AWScompanyCorpAdmin",,"arn:aws:iam::279052847476:saml-provider/LastPass"
"AWScompanyCorpPowerUser",,"arn:aws:iam::279052847476:saml-provider/LastPass

For lines which don't have a string starting with arn:aws:iam after the first comma, don't change anything.
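A sed one-liner should do it, provided (as in your sample) the first field never contains embedded quotes or commas. Anchor on the first quoted field so that lines whose arn sits in the third field (like the ADFS-Administrators one) are left alone:

sed 's/^\("[^"]*",\)"arn:aws:iam/\1,"arn:aws:iam/' file.csv

This matches the opening "name", only at the start of the line and inserts an extra comma when the very next field begins with "arn:aws:iam. Once the output looks right, add -i (GNU sed) to edit the file in place.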
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/429987", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279815/" ] }
430,003
I have the following example script and want to know what exactly the length of an array is — is this bytes, characters or what else?

#!/bin/bash
# Arrays
# @ vs. *
ape=( "Apple Banana" "Emacs Window" "Panda Bamboo Nature" )
cape=( 'Ping Pong' 'King Kong' 'King Fisher Club' 'Blurb' )
jade=( ally belly cally delly )

echo Expansion with \*
echo ${ape[*]}
echo ${cape[*]}
echo -e "${jade[*]}\n"

echo Expansion with \@
echo ${ape[@]}
echo ${cape[*]}
echo -e "${jade[@]}\n"

echo Elements with \*
echo ${#ape[*]}
echo ${#cape[*]}
echo ${#jade[*]}

echo Elements with \@
echo ${#ape[@]}
echo ${#cape[@]}
echo ${#jade[*]}

echo -e "\nLength"
echo ${#ape}
echo ${#cape}
echo ${#jade}

From the man pages I know that the array expansion differs between * and @ depending on whether the word is double-quoted or not, but I cannot see any differences. Why do I have the same results in both cases? The output is as follows:

Expansion with *
Apple Banana Emacs Window Panda Bamboo Nature
Ping Pong King Kong King Fisher Club Blurb
ally belly cally delly

Expansion with @
Apple Banana Emacs Window Panda Bamboo Nature
Ping Pong King Kong King Fisher Club Blurb
ally belly cally delly

Elements with *
3
4
4

Elements with @
3
4
4

Length
12
9
4
You missed the case where it shows that * will expand the array to a single string, and @ expands to individually quoted strings:

printf 'string "%s"\n' "${cape[*]}"

which generates

string "Ping Pong King Kong King Fisher Club Blurb"

and

printf 'string "%s"\n' "${cape[@]}"

which generates

string "Ping Pong"
string "King Kong"
string "King Fisher Club"
string "Blurb"

Remember that echo just concatenates its arguments and prints them, while printf will fill out its format string with the arguments and repeat the same format if more arguments are supplied. Also,

for s in "${cape[*]}"; do
    echo "$s"
done

generates a single line of output (it only iterates over a single string), while

for s in "${cape[@]}"; do
    echo "$s"
done

generates one per array element. You always want to use double quotes around the ${array[*]} and ${array[@]} expansions, unless you for some reason want to explicitly invoke word splitting and file name globbing. And you use * or @ depending on whether you need the array elements all together as one string, or individually quoted. In my experience, one very seldom uses [*]. When getting the length of an array, it doesn't matter which of * or @ you use. But if you use neither, you'll get the length in characters of the first element of the array.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430003", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223883/" ] }
430,037
I have a bootable USB (ADATA Superior Series S102 Pro 16GB USB 3.0 Flash Drive (AS102P-16G-RGY)) with MultiBootUSB (multibootusb.org) with non-persistent Ubuntu, KALI Linux, ParrotSec OS, Arch Linux, and Trinity Rescue Kit. However, if I boot any of those OSs', and then remove the USB drive, any programs that I haven't run so far will fail to run, the display will start flickering, and it will crash and show lots of cmdline outputs like: [ 10.737654] cannot access <whatever> Is there a way to load the entire OS (and all programs, files, etc.) from the USB to RAM so that it can be unplugged after the OS boots , without losing OS functionality?I've already tried the toram thing, the "RAM mode" option, and the "Load system to RAM" option. I'd prefer a solution that works for all aforementioned OSs.
I have found a solution (may not work for all distros): where it says "Try Ubuntu before installing" or "Try from this live CD", just press "E" to edit the kernel parameters. Then, there should be a line that ends like this:

quiet splash ---

or maybe

quiet splash hostname=ubuntu ---

Add toram (or toram=yes if that doesn't work) to that line, before the dashes, so it reads:

quiet splash toram ---

(with or without hostname). Press F10 or Ctrl+X to boot. If it worked, then either the desktop or the file manager should have the USB mounted as a drive. Right-click and click "Eject", then remove the drive.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280485/" ] }
430,056
I want my machine to automatically download some files. That doesn't have to be very efficient, so I decided to do this with a bash script. It works so far when I hard-code the URL, but I want to get the files retrieved in irregular order, and I thought I would use simple variables. How do I get the random number into my variable? My approach:

data_link0="https://example.com/target1.html"
data_link1="https://example.com/target2.html"
data_link2="https://example.com/target3.html"
data_link3="https://example.com/target4.html"

useragent0="Mozilla/5.0 (iPhone; CPU iPhone OS 10_0_1 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Mobile/14A403 Safari/602.1"
useragent1="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/604.5.6 (KHTML, like Gecko) Version/11.0.3 Safari/604.5.6"
useragent3="Mozilla/5.0 (Windows 7; ) Gecko/geckotrail Firefox/firefoxversion"

wget --user-agent="$user_agent[$((RANDOM % 3))]" "$datei_link$((RANDOM % 3))"

unfortunately does not work.
As far as you need to retrieve all the urls, a better way would be using shuf (GNU/Linux coreutils) (or sort -R, coreutils too):

shuf file | xargs wget

File:

$ cat file
"https://example.com/target1.html"
"https://example.com/target2.html"
"https://example.com/target3.html"
"https://example.com/target4.html"

man 1 shuf:

NAME
       shuf - generate random permutations

New comments, new needs, new code (requiring a random user-agent):

$ cat uas
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.52 Safari/537.36
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.52 Safari/537.36 OPR/15.0.1147.100
Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko

Code:

shuf file | while read url; do
    wget --user-agent="$(shuf -n1 uas)" "$url"
done

If you prefer to keep your way (one url):

data_link=(
    "https://example.com/target1.html"
    "https://example.com/target2.html"
    "https://example.com/target3.html"
    "https://example.com/target4.html"
)
user_agent=(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 10_0_1 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Mobile/14A403 Safari/602.1"
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/604.5.6 (KHTML, like Gecko) Version/11.0.3 Safari/604.5.6"
    "Mozilla/5.0 (Windows 7; ) Gecko/geckotrail Firefox/firefoxversion"
)
wget --user-agent="${user_agent[RANDOM % ${#user_agent[@]} ]}" "${data_link[RANDOM % ${#data_link[@]}]}"

Your way for all urls and user-agents (both randomized):

for i in $(seq 0 $((${#data_link[@]} -1)) | shuf); do
    wget -U "${user_agent[RANDOM % ${#user_agent[@]}]}" "${data_link[i]}"
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280532/" ] }
430,109
I have several CSS files under a folder contains 0.2rem or 0.5rem 0.6rem , now I want them to be all divided by 2, become 0.1rem and 0.25rem, 0.3rem . How can I use awk or sed or gawk to accomplish this? I tried the following command but have no success: find . -name "*.css" | xargs gawk -i inplace '{gsub(/([0-9\.]+)rem/, "(\\1 * 0.5)rem"); print $0}'
find + GNU awk solution:

find . -type f -name "*.css" -exec gawk -i inplace \
'{ for (i=1; i<=NF; i++) if ($i ~ /^[0-9]+\.[0-9]+rem/) { v=$i/2; sub(/^[0-9]+\.[0-9]+/, "", $i); $i=v $i } }1' {} \;
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/187844/" ] }
430,161
func() {
    echo 'hello'
    echo 'This is an error' >&2
}

a=$(func)
b=???

I'd like to redirect the stderr to the b variable without creating a temporary file.

echo $b # output should be: "This is an error"

The solution that works, but with a temporary file:

touch temp.txt
exec 3< temp.txt
a=$(func 2> temp.txt);
cat <&3
rm temp.txt

So the question is, how do I redirect the stderr of the bash function func to the variable b without the need of a temporary file?
On Linux and with shells that implement here-documents with writable temporary files (like zsh or bash versions prior to 5.1 do), you can do:

{
  out=$(
    chmod u+w /dev/fd/3 && # needed for bash5.0
    ls /dev/null /x 2> /dev/fd/3
  )
  status=$?
  err=$(cat<&3)
} 3<<EOF
EOF

printf '%s=<%s>\n' out "$out" err "$err" status "$status"

(where ls /dev/null /x is an example command that outputs something on both stdout and stderr). With zsh, you can also do:

(){
  out=$(ls /dev/null /x 2> $1)
  status=$?
  err=$(<$1)
} =(:)

(where =(cmd) is a form of process substitution that uses temporary files, and (){ code; } args anonymous functions). In any case, you'd want to use temporary files. Any solution that would use pipes would be prone to deadlocks in case of large outputs. You could read stdout and stderr through two separate pipes and use select()/poll() and some reads in a loop to read data as it comes from the two pipes without causing lock-ups, but that would be quite involved and AFAIK, only zsh has select() support built-in and only yash a raw interface to pipe() (more on that at Read / write to the same file descriptor with shell redirection). Another approach could be to store one of the streams in temporary memory instead of a temporary file. Like (zsh or bash syntax):

{
  IFS= read -rd '' err
  IFS= read -rd '' out
  IFS= read -rd '' status
} < <({ out=$(ls /dev/null /x); } 2>&1; printf '\0%s' "$out" "$?")

(assuming the command doesn't output any NUL). Note that $err will include the trailing newline character. Other approaches could be to decorate the stdout and stderr differently and remove the decoration upon reading:

out= err= status=
while IFS= read -r line; do
  case $line in
    (out:*) out=$out${line#out:}$'\n';;
    (err:*) err=$err${line#err:}$'\n';;
    (status:*) status=${line#status:};;
  esac
done < <(
  {
    {
      ls /dev/null /x |
        grep --label=out --line-buffered -H '^' >&3
      echo >&3 "status:${PIPESTATUS[0]}" # $pipestatus[1] in zsh
    } 2>&1 |
      grep --label=err --line-buffered -H '^'
  } 3>&1
)

That assumes GNU grep and that the lines are short enough. With lines bigger than PIPEBUF (4K on Linux), lines of the output of the two greps could end up being mangled together in chunks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142331/" ] }
430,196
I am using Linux Mint 18.2 Sonja and want to connect my new Logitech MX Master 2S mouse without using the Bluetooth dongle, but directly with the built-in Bluetooth module of my notebook. This works so far with my PC after running the commands:

~$ sudo hciconfig hci0 sspmode 1
~$ sudo hciconfig hci0 down
~$ sudo hciconfig hci0 up

However, by entering sudo hciconfig hci0 sspmode 1 I get the following error message on my notebook:

Can't set Simple Pairing mode on hci0: Input/output error (5)

After reading a few articles in different communities, they recommended to do the following:

alpha@Pavilion ~ $ bluetoothctl
[NEW] Controller B0:35:9F:0E:4F:3D Pavilion [default]
[NEW] Device C5:E2:3F:77:5C:3D MX Master 2S
[NEW] Device DD:6A:F3:5A:A2:A2 MI Band 2
[NEW] Device C5:E2:3F:77:5C:3B MX Master 2S
[NEW] Device 00:02:3C:51:C6:12 Creative T50 Wireless
[bluetooth]# power on
Changing power on succeeded
[bluetooth]# agent on
Agent registered
[bluetooth]# default-agent
Default agent request successful
[bluetooth]# scan on
Discovery started
[CHG] Device C5:E2:3F:77:5C:3D RSSI: -15
[bluetooth]# scan off
Discovery stopped
[CHG] Device C5:E2:3F:77:5C:3D RSSI: -4
[bluetooth]# trust C5:E2:3F:77:5C:3D
Changing C5:E2:3F:77:5C:3D trust succeeded
[bluetooth]# pair C5:E2:3F:77:5C:3D
Attempting to pair with C5:E2:3F:77:5C:3D
[CHG] Device C5:E2:3F:77:5C:3D Connected: yes
Failed to pair: org.bluez.Error.AuthenticationTimeout
[CHG] Device C5:E2:3F:77:5C:3D Connected: no
[bluetooth]# connect C5:E2:3F:77:5C:3D
Attempting to connect to C5:E2:3F:77:5C:3D
Failed to connect: org.bluez.Error.Failed
[bluetooth]# version
Version 5.37
[bluetooth]# exit
Agent unregistered
[DEL] Controller B0:35:9F:0E:4F:3D Pavilion [default]
alpha@Pavilion ~ $

As you can see, when trying to pair the mouse it is shortly connected, followed by the error:

Failed to pair: org.bluez.Error.AuthenticationTimeout

Has someone an idea? Thanks in advance!
At least I found a way to solve the problem: I installed the Bluetooth Manager called blueman: ~$ sudo apt install blueman In the GUI there is a button which looks like a calculator or so, called "Create pairing with the device" when you hover over it. While by right-click neither the buttons "pair" nor "Setup / pair device" work, the mentioned button above did the job!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245933/" ] }
430,199
Given file.csv:

a,b,c
1,2,3

How can mlr be made to output:

a,b,c
1,2,c

Using the label name of $c without knowing in advance that $c contains the letter "c"? Note: a correct answer must use mlr only.
Edited answer. Hi, you could use this script:

mlr --csv put 'if (NR == 1) {counter=1; for (key in $*) { if (counter == 3) { $[key]=key; } counter += 1; }}' input.csv

And as output you will have:

a,b,c
1,2,c

NR == 1 to get the first row, and counter == 3 to get the third field.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165517/" ] }
430,207
After executing the following to disable ping replies:

# sysctl net.ipv4.icmp_echo_ignore_all=1
# sysctl -p

I obtain different results from pinging localhost vs. 127.0.0.1:

# ping -c 3 localhost
PING localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.101 ms

--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.047/0.072/0.101/0.022 ms

Pinging 127.0.0.1 fails:

ping -c 3 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.

--- 127.0.0.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2032ms

Why are these results different?
The ping command shows the address it resolved the name to. In this case it resolved to the IPv6 localhost address, ::1 . On the other hand, 127.0.0.1 is an IPv4 address, so it explicitly makes ping use IPv4. The sysctl you used only affects IPv4 pings, so you get replies for ::1 , but not for 127.0.0.1 . The address you get from resolving localhost depends on how your DNS is resolver is set up. localhost is probably set in /etc/hosts , but in theory you could get it from an actual name server. As for how to drop IPv6 pings, you may need to look into ip6tables , as there doesn't seem to be a similar sysctl for IPv6. Or just disable IPv6 entirely, if you're not using it in your network. (Though of course that's not a very forward-looking idea, but doable if you're not currently using it anyway.)
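To drop ICMPv6 echo requests as well, an ip6tables rule along these lines should work (an untested sketch; note that dropping all of ICMPv6 would break IPv6 neighbour discovery, so match only echo-request):

ip6tables -A INPUT -p ipv6-icmp --icmpv6-type echo-request -j DROP

After that, ping -c 3 ::1 (or pinging localhost while it resolves to ::1) should time out just like the IPv4 case.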
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/430207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252134/" ] }
430,318
Say if I wrote a program with the following line:

int main(int argc, char** argv)

Now it knows what command line arguments are passed to it by checking the content of argv. Can the program detect how many spaces there are between arguments? Like when I type these in bash:

ibug@linux:~ $ ./myprog aaa bbb
ibug@linux:~ $ ./myprog aaa     bbb

Environment is a modern Linux (like Ubuntu 16.04), but I suppose the answer should apply to any POSIX-compliant systems.
In general, no. Command line parsing is done by the shell which does not make the unparsed line available to the called program. In fact, your program might be executed from another program which created the argv not by parsing a string but by constructing an array of arguments programmatically.
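You can see this from the shell side without writing any C. A stand-in for the C program (myprog.sh is a made-up name) that prints each element of its argv on its own line:

#!/bin/sh
# myprog.sh - print one argv element per line, bracketed
printf '[%s]\n' "$@"

Running ./myprog.sh aaa bbb and ./myprog.sh aaa     bbb prints exactly the same two lines, [aaa] and [bbb]: the run of spaces never survives the shell's word splitting, so there is nothing left for the program to detect. Quoting changes that, of course — ./myprog.sh 'aaa     bbb' passes a single argument with the spaces preserved.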
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/430318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211239/" ] }
430,337
I'm trying to use getopts in a function yet it doesn't seem to work:

#!/bin/bash

function main()
{
    while getopts ":p:t:c:b:" o; do
        case "${o}" in
            p)
                echo "GOt P"
                p=$OPTARG
                ;;
            t)
                echo "GOt T"
                t=$OPTARG
                ;;
            c)
                echo "GOt C"
                c=$OPTARG
                ;;
            b)
                echo "GOt b"
                b=$OPTARG
                ;;
            *)
                #usage
                echo "Unknown Option"
                return
                ;;
        esac
    done
    echo $p
    echo $t
    echo $c
    echo $b
}

main

And then running it like this:

$ ./bin/testArguments.sh -p . -t README.md -c 234 -b 1

I have tried making sure that OPTIND is local, yet this didn't work either. Anything else that might be wrong?
You're not passing any argument to your main function. If you want that function to get the same arguments as passed to the script, pass them along with:

main "$@"

Instead of:

main

Also relevant to your script:

- difference between "function foo() {}" and "foo() {}"
- Why is printf better than echo?
- Security implications of forgetting to quote a variable in bash/POSIX shells
- you'd want to output errors on stderr: echo >&2 Unknown option
- you'd want to return a non-zero exit status upon error (return 1)
- when calling getopts in a function, it's a good habit to set OPTIND to 1 initially, in case getopts has been called before (for instance in a previous invocation of the function)
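Putting those points together, a minimal corrected skeleton might look like this (untested, and trimmed to two options for brevity):

#!/bin/bash

main() {
    local OPTIND=1 p='' t=''
    while getopts ":p:t:" o; do
        case "${o}" in
            p) p=$OPTARG ;;
            t) t=$OPTARG ;;
            *) printf >&2 'Unknown option: -%s\n' "$OPTARG"; return 1 ;;
        esac
    done
    printf '%s\n' "$p" "$t"
}

main "$@"   # the crucial part: forward the script's arguments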
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/430337", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/587/" ] }
430,365
In my application I open a file, using the open() call. My questions are: Is the file automatically closed (as using the close() call on the returned file descriptor) if I kill the process? What happens if the application crashes (e.g. segmentation fault)? Is this documented somewhere?
Yes, the file will be automatically closed when the process terminates, regardless of the reason for the process termination. This is documented in POSIX. In "Consequences of Process Termination", among other consequences:

All of the file descriptors, directory streams, conversion descriptors, and message catalog descriptors open in the calling process shall be closed.

And in "Terminating a Process":

It is important that the consequences of process termination as described occur regardless of whether the process called _exit() (perhaps indirectly through exit()) or instead was terminated due to a signal or for some other reason.
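A quick way to convince yourself, sketched with lsof (the path /tmp/fd-demo is arbitrary):

# start a process that holds fd 3 open on a file and then just sleeps
sh -c 'exec 3>/tmp/fd-demo; exec sleep 300' &
pid=$!

lsof /tmp/fd-demo        # shows the sleep process holding the file open
kill -9 "$pid"           # simulate an abnormal termination
lsof /tmp/fd-demo        # no output: the kernel closed the descriptor

The same applies to a segmentation fault: the file descriptor lives in the kernel's per-process table, and that table is torn down on exit no matter how the process died.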
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/430365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280846/" ] }
430,379
I have a weird problem in that my user (on Linux Mint 18.3 Sylvia) is part of group docker, which I verify by starting a terminal:

$ groups
me adm cdrom sudo dip plugdev lpadmin sambashare docker

Now, when I fire up tmux and run the same command, suddenly my docker membership disappears:

$ tmux
$ groups
me adm cdrom sudo dip plugdev lpadmin sambashare

The weird thing is, when I attempt to forcibly add my user to the group docker when in tmux, it says I'm already in it!

$ sudo adduser me docker
The user `me' is already a member of `docker'.

I'm stumped -- why would being in tmux make a difference? For reference, I use fish shell and this is my .tmux.conf:

$ cat ~/config/.tmux.conf
new-session
set -g default-terminal "screen-256color"
set -g history-limit 10000
setw -g mode-keys vi
set-option -g default-shell /usr/bin/fish
set-option -g default-command /usr/bin/fish

EDIT

When I restarted my computer, everything worked again (both tmux and "normal" shells showed my docker membership). I'm still curious -- why did this happen?
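The most likely explanation: group membership is not looked up live. A process gets its supplementary groups when it is created (ultimately from the login process that read /etc/group), and it keeps that set for its whole life. tmux is client/server — the first tmux you ever ran started a background tmux server, and every later tmux invocation just attaches to it. So if that server was started before you were added to the docker group, every shell it spawns inherits the server's old, docker-less credentials, while freshly started terminals get the new ones. That also explains the adduser message (the file /etc/group already lists you) and why a reboot fixed it (the old server died with the session). Without rebooting, killing the server from an up-to-date shell should have the same effect:

tmux kill-server   # the next tmux start picks up current group membership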
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/430379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41318/" ] }
430,407
It is my understanding that the main difference between a network hub and a network switch is that the hub always sends out all packets to all (other?) hosts while the switch is smart about it and keeps track of which port belongs to which MAC address via MAC address tables and whatnot. If a packet is addressed to a host which is connected to the switch but have never sent or received anything, the switch acts like a hub would. But every subsequent packet addressed to that host is not broadcast to the other hosts but forwarded only to that specific host. Please correct me if I got something wrong. Does a network bridge in Linux act like a hub would, or like a switch?
A bridge is a network aggregation device , similar in practice to a switch. The bridges implemented in the Linux kernel follow this model. Like any bridge, they forward traffic based on destination MAC addresses, once the MAC address mapping is known. They are actually more featureful than most switches, since they also support firewalling, traffic shaping etc., using ebtables . See the bridge documentation for details.
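For the hands-on view, a bridge can be created and inspected with iproute2 (the interface names here are examples):

# create a bridge and enslave two interfaces
ip link add br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
ip link set br0 up

# watch the learned MAC-to-port table fill in as traffic flows
bridge fdb show br br0

The fdb ("forwarding database") output is the switch-like behaviour in action: once a source MAC has been seen on a port, frames destined for that MAC are forwarded only there, while unknown destinations are still flooded to all ports — just as a hardware switch does.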
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33928/" ] }
430,550
Is there a way to split a single line into multiple lines with 3 columns? Newline characters are missing at the end of all the lines in the file. I tried using awk, but it is splitting each column into one row instead of 3 columns in each row.

awk '{ gsub(",", "\n") } 6' filename

where filename's content looks like:

A,B,C,D,E,F,G,H,I,J,K,L,M,N,O

Desired output has 3 columns in each line:

A,B,C
D,E,F
G,H,I
J,K,L
M,N,O
Using awk

$ awk -v RS='[,\n]' '{a=$0;getline b; getline c; print a,b,c}' OFS=, filename
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O

How it works

-v RS='[,\n]'
This tells awk to use any occurrence of either a comma or a newline as a record separator.

a=$0; getline b; getline c
This tells awk to save the current line in variable a, the next line in variable b, and the next line after that in variable c.

print a,b,c
This tells awk to print a, b, and c.

OFS=,
This tells awk to use a comma as the field separator on output.

Using tr and paste

$ tr , '\n' <filename | paste -d, - - -
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O

How it works

tr , '\n' <filename
This reads from filename while converting all commas to newlines.

paste -d, - - -
This tells paste to read three lines from stdin (one for each -) and paste them together, each separated by a comma (-d,).

Alternate awk

$ awk -v RS='[,\n]' '{printf "%s%s",$0,(NR%3?",":"\n")}' filename
A,B,C
D,E,F
G,H,I
J,K,L
M,N,O

How it works

-v RS='[,\n]'
This tells awk to use any occurrence of either a comma or a newline as a record separator.

printf "%s%s",$0,(NR%3?",":"\n")
This tells awk to print the current line followed by either a comma or a newline depending on the value of the current line number, NR, modulo 3.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/430550", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47766/" ] }
430,616
I'm trying to understand permissions better, so I'm doing some "exercises". Here's a sequence of commands that I'm using with their respective output:

$ umask
0022
$ touch file1
$ ls -l file1
-rw-r--r-- 1 user group 0 Mar 16 12:55 file1
$ mkdir dir1
$ ls -ld dir1
drwxr-xr-x 2 user group 4096 Mar 16 12:55 dir1

That makes sense because we know that the default file permissions are 666 (rw-rw-rw-) and directories' default permissions are 777 (rwxrwxrwx). If I subtract the umask value from these default permissions I have 666-022=644, rw-r--r--, for file1, so it's coherent with the previous output; 777-022=755, rwxr-xr-x, for dir1, also coherent. But if I change the umask from 022 to 021 it isn't any more. Here is the example for the file:

$ umask 0021
$ touch file2
$ ls -l file2
-rw-r--rw- user group 0 Mar 16 13:33 file2

-rw-r--rw- is 646, but it should be 666-021=645. So it doesn't work according to the previous computation. Here is the example for the directory:

$ touch dir2
$ ls -ld dir2
drwxr-xrw- 2 user group 4096 Mar 16 13:35 dir2

drwxr-xrw- is 756, and 777-021=756. So in this case the result is coherent with the previous computation. I've read the man page but I haven't found anything about this behaviour. Can somebody explain why?

EXPLANATION

As pointed out in the answers: umask's value is not mathematically subtracted from the default directory and file permissions. The operation effectively involved is a combination of AND (&) and NOT (!) boolean operators. Given:

R = resulting permissions
D = default permissions
U = current umask

R = D & !U

For example:

666 & !0053 = 110 110 110 & !000 101 011
            = 110 110 110 &  111 010 100
            = 110 010 100
            = 624
            = rw--w-r--

777 & !0022 = 111 111 111 & !000 010 010
            = 111 111 111 &  111 101 101
            = 111 101 101
            = 755
            = rwxr-xr-x

TIP

An easy way to quickly know the resulting permissions (at least it helped me) is to think that we can use just 3 decimal values:

r = 100 = 4
w = 010 = 2
x = 001 = 1

Permissions will be a combination of these 3 values. " " is used to indicate that the relative permission is not given.

666 = 4+2+" " 4+2+" " 4+2+" " = rw rw rw

So if my current umask is 0053 I know I'm removing read and execute (4+1) permission from group and write and execute (2+1) from other, resulting in

4+2 " "+2+" " 4+" "+" " = 624 = rw--w-r--

(group and other already hadn't execute permission)
umask is a mask , it’s not a subtracted value. Thus: mode 666, mask 022: the result is 666 & ~022, i.e. 666 & 755, which is 644; mode 666, mask 021: the result is 666 & ~021, i.e. 666 & 756, which is 646. Think of the bits involved. 6 in a mode means bits 1 and 2 are set, read and write. 2 in a mask masks bit 1, the write bit. 1 in a mask masks bit 0, the execute bit. Another way to represent this is to look at the permissions in text form. 666 is rw-rw-rw- ; 022 is ----w--w- ; 021 is ----w---x . The mask drops its set bits from the mode, so rw-rw-rw- masked by ----w--w- becomes rw-r--r-- , masked by ----w---x becomes rw-r--rw- .
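The same computation can be checked directly in the shell, since bash arithmetic understands octal literals and bitwise operators (a one-off demo, not something you need in practice):

$ printf '%o\n' $(( 0666 & ~0022 ))
644
$ printf '%o\n' $(( 0666 & ~0021 ))
646
$ printf '%o\n' $(( 0777 & ~0021 ))
756

which reproduces the 646 and 756 from the question — a mask, not a subtraction.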
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/430616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218981/" ] }
430,643
I have a file testfile.xml that I want to open with Vim and then delete from within Vim's command mode.
There are many ways:

Using the interactive shell in vim. When in the editor:

:sh
rm testfile.xml

Using Bang (!). As DopeGhoti suggested:

:!rm testfile.xml

From this link, add this to your ~/.vimrc:

command! -complete=file -nargs=1 Remove :echo 'Remove: '.'<f-args>'.' '.(delete(<f-args>) == 0 ? 'SUCCEEDED' : 'FAILED')

Then, in vim:

:Remove testfile.xml

Again from the same link in #3, use this command:

:call delete(expand('%')) | bdelete!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247586/" ] }
430,691
I am a little embarrassed to ask this question, but this topic seems to be particularly difficult to search. On Linux systems I almost exclusively use the terminal and access the systems (from macOS most often) using SSH through a terminal emulator. In the general sense, copying and pasting code snippets and errors from logs, etc. is a tricky problem when traversing the buffer across systems with a terminal multiplexer involved, and this is usually achieved by copying via the terminal emulator's own selection feature and using the client OS's paste buffer. This question is NOT ABOUT THAT. My issue is when I have a large number of vim instances open on a single Linux server. I am in runlevel 3 and do not run a GUI. There is no xclip available to me, mainly because X is not installed. When I am in this workflow I find the need to yank parts of files and paste them in other vims on the same remote box. Vim's builtin + and * copy/paste buffers do not work (the clipboard compile option in vim is not enabled on these systems). However, what works is if I yank some text in one vim instance, quit it, and open another vim instance — then pasting works. So something about exiting vim persists the buffer somewhere. I think that if I can just have whatever this system is work in realtime without having to close vims, that would be great. I would like to avoid having to layer a bind on yanks and deletes to implement my own yank/paste implementation.
This is the .viminfo file ( :h viminfo ). When you exit vim it writes out the current state, such as command history and register values to that file. When it starts, it reads the file and restores whatever state it describes. That means that successive vim sessions (appear to) share some state, but concurrent ones don't. You can forcibly re-read the viminfo file with the :rv / :rviminfo command , and manually write it out with :wv . So y :wv in one editor, and :rv p in the other will work, but there will be side effects: all your register values and command history may be reset, and quite a lot of other things, which may or may not matter to you. That can also be an advantage: you can use the full range of registers to get multiple copy buffers between editors, which the system clipboard doesn't provide. On the other hand, it's not terribly convenient unless you rebind y to do this automatically, and p you probably don't want to read the file every time. I have read/write viminfo bound to leader commands, but that only saves me one keypress (and it sounds like you'd use it more often). There are some other approaches you could use, like paging out manually to a specific file yourself, which would avoid the side effects. It doesn't sound like you want that, but it's an option. There are also plugins that do more or less of what you're looking for, and the sessions system as well. They aren't direct answers to your question but they may inform where you want to end up.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/430691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12497/" ] }
430,761
If I were to use the dd command as follows: dd if=/dev/zero of=/dev/sdX bs=16M What happens at the end of the disk if it is not an exact multiple of 16M? Does that mean the last remaining portion of the disk is not zeroed? I noticed in https://www.marksanborn.net/howto/wiping-a-hard-drive-with-dd/ , he writes that the US Government uses dd if=/dev/urandom of=/dev/sda bs=8b conv=notrunc Is the conv=notrunc option the way to ensure every last byte is wiped?
The whole output device is wiped whether its size is a multiple of the block size you pass to dd or not. The notrunc flag has no effect when the output is a device file, because truncating a device file has no effect. If the output was a regular file, it would have the effect that the output file is not truncated before writing, which on some filesystems means that the old data is overwritten (as opposed to writing new blocks of data and leaving the rest unattached); however this wouldn't be useful since this property is not guaranteed by all filesystems and furthermore the command would not only overwrite the file, but also keep writing until it's filled the output disk (or some other error occurs). Instead of using dd and worrying about whether you're using it correctly (as it happens, it works in this particular case, but it's complicated and sometimes doesn't work), just use cat.

cat /dev/zero >/dev/sdX

Despite popular belief on the web, there is absolutely no magic in dd that makes it somehow better suited to writing to a disk. The magic is in the /dev files. Any tool that can cope with binary data, such as any modern cat or head, can do the same job as dd unless you're passing flags such as seek or skip. Note that a problem shared by dd and cat is that on successful operation, they'll error out with "No space left on device" (ENOSPC). If you put this in a script, you'll either need to check that the error is ENOSPC or use a different method. A more reliable method is to first determine the size of the device (e.g. using /proc/partitions under Linux), then write exactly the right number of bytes with a tool such as head.

size=$(</proc/partitions awk '$4 == "sdX" {print $3}')
head -c "${size}k" /dev/zero >/dev/sdX
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/430761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
431,786
In Windows, the system drive C: has a directory program_files , under which each program has its own directory. In Linux, under /usr/ and /usr/local/ , there are /bin, /etc, /share, /src , etc. So in Windows, all the files of each program are grouped in the same directory, while in Linux, files of the same type from all the programs are grouped together. I feel the way that Windows organizes the installed programs is more logical than the way of Linux, and thus installed programs are easier to manage manually. What is the benefit of the way that Linux organizes the files of installed programs? Thanks. I have this question when having the problem of How to organize installed programs in $HOME for shell to search for them when running them? , where I try to organize my programs in $HOME in the way of Windows, but have some problem of specifying the search paths for the programs.
In Linux the different locations usually, when well maintained, mirror some logic. Eg.: /bin contains the most basic tools (programs) /sbin contains the most basic admin programs Both of them contain the elementary commands used by booting and fundamental troubleshooting. And here you see the first difference. Some programs are not meant to be used by regular users. Then take a look in /usr/bin . Here you should find a bigger choice of commands (programs), usually more than 1000 of them. They are standard tools, but not as essential as those in /bin and /sbin . /usr/bin contains the commands, while the configuration files reside elsewhere. This both separates the functional entities (programs) and their config and other files, and in terms of user functionality, this comes in handy, as having the commands not intermixed with anything else allows for the simple use of the PATH variable pointing to the executables. It also introduces clarity. Whatever is there should be executable. Take a look at my PATH ,

$ echo "$PATH" | perl -F: -anlE'$,="\n"; say @F'
/home/tomas/bin
/usr/local/bin
/usr/bin
/bin
/usr/local/games
/usr/games

There are exactly six locations containing the commands I can call directly (ie. not by their paths, but by their executables' names). /home/tomas/bin is my private directory in my home folder for my private executables. /usr/local/bin I'll explain separately below. /usr/bin is described above. /bin is also described above. /usr/local/games is a combination of /usr/local (to be explained below) and games. /usr/games are games. Not to be mixed with utility executables, they have their separate locations. Now to /usr/local/bin . This one is somewhat slippery, and was already explained here: What is /usr/local/bin? . To understand it, you need to know that the folder /usr might be shared by many machines and mounted from a net location. The commands there are not needed at bootup, as noted before, unlike those in /bin , so the location can be mounted in later stages of the bootup process. It can also be mounted in a read-only fashion. /usr/local/bin , on the other hand, is for the locally installed programs, and needs to be writable. So while many network machines might share the general /usr directory, each one of them will have their own /usr/local mounted inside the common /usr . Finally, take a look at the PATH of my root user:

# echo "$PATH" | perl -F: -anlE'$,="\n"; say @F'
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin

It contains these: /usr/local/sbin , which contains the admin commands of the type /usr/local . /usr/local/bin , which are the same ones the regular user can use. Again, their type can be described as /usr/local . /usr/sbin are the non-essential administration utilities. /usr/bin are the non-essential administration and regular user utilities. /sbin are the essential admin tools. /bin are the admin and regular user essential tools.
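To get a feel for these size differences on your own system, a quick sketch (directory names as discussed above; the counts will of course vary per installation):

for d in /bin /sbin /usr/bin /usr/sbin /usr/local/bin; do
    # count the entries in each command directory
    printf '%s: %s commands\n' "$d" "$(ls "$d" 2>/dev/null | wc -l)"
done

On systems with a merged /usr, /bin and /sbin are symlinks into /usr, so some counts will coincide.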
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/431786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
431,809
If I install the ISO from: https://cloudflare.cdn.openbsd.org/pub/OpenBSD/snapshots/amd64/ Then how can I install packages? Or is there no pkg_add method for snapshots? What do I need to do, for example, if I want to install Firefox? Compile it? How?
Install packages with pkg_add as usual, but use -D snapshot (or just -D snap ) to make it look in the correct place on your selected mirror (the mirror listed in /etc/installurl ). So, to install Firefox, as root do: pkg_add -D snapshot firefox See also pkg_add(1) and installurl(5) . Note that you will need to keep your base system up to date to use the snapshot ports as they are rebuilt every once in a while, and the ports and base system should ideally be kept in sync. The sysupgrade(8) utility makes this easy.
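A minimal sketch of the whole sequence, run as root (the CDN mirror below is just an example; any mirror from the OpenBSD mirror list works):

echo 'https://cdn.openbsd.org/pub/OpenBSD' > /etc/installurl
pkg_add -D snap firefox

With /etc/installurl pointing at a mirror's top-level directory, pkg_add derives the snapshot packages path from it automatically.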
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/431809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261470/" ] }
431,881
Given the following directory tree:

.
├── d1
│   └── workspace
├── d2
│   └── workspace
├── d3
│   └── workspace
├── d4
│   └── workspace
└── d5
    └── workspace

I need to set the permissions for all workspace directories as below:

chmod -R 774 d1/workspace
chmod -R 774 d2/workspace
...

How can I do the above operations in one command for all workspace directories? I can run the following command: chmod -R 774 * But this also changes the mode of parent directories, which is not desired.
You can use wildcards on the top level directory. chmod 774 d*/workspace Or to make it more specific you can also limit the wildcard, for example to d followed by a single digit. chmod 774 d[0-9]/workspace A more general approach could be with find . find d* -maxdepth 1 -name workspace -type d -exec chmod 774 "{}" \;
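If you want to be cautious, one way (a sketch; -v is a GNU chmod extension) is to preview the matches first and then let find apply the change:

# list what would be touched, without changing anything
find d[0-9] -maxdepth 1 -type d -name workspace
# then apply; -v (GNU chmod) reports each change made
find d[0-9] -maxdepth 1 -type d -name workspace -exec chmod -v 774 {} +

Using -exec ... {} + batches many directories into one chmod invocation instead of running one per match.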
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/431881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
431,898
I have 4 fields and I want to start from the last column and ignore the first 3 columns. How can I use NF in awk to do that?

root pts/0 192.168.108.1 Mon Mar 19 08:45 still logged in
root tty1 Mon Mar 19 08:45 still logged in
reboot system boot 3.10.0-693.el7.x Mon Mar 19 08:44 - 08:49 (00:04)
root pts/0 192.168.108.1 Sun Mar 18 13:06 - crash (19:37)
root tty1 Sun Mar 18 13:06 - crash (19:38)
reboot system boot 3.10.0-693.el7.x Sun Mar 18 13:01 - 08:49 (19:47)
root pts/2 192.168.108.1 Sat Mar 17 12:38 - 13:09 (00:31)
root pts/1 192.168.108.1 Sat Mar 17 10:49 - down (02:20)
root pts/1 192.168.108.1 Fri Mar 16 23:17 - 00:30 (01:13)
root pts/0 192.168.108.1 Fri Mar 16 21:25 - 12:56 (15:31)
root tty1 Fri Mar 16 21:25 - 13:09 (15:44)
reboot system boot 3.10.0-693.el7.x Fri Mar 16 21:04 - 13:09 (16:05)
root pts/2 192.168.108.1 Fri Mar 16 08:45 - crash (12:18)
root tty1 Fri Mar 16 08:45 - 17:25 (08:40)
syslog **Never logged in**

I want to show only:

Sun Jan 19 13:52:08 -0800 2018
**Never logged in**
Fri Mar 16 08:45 - 17:25 (08:40)
Mon Mar 19 08:45 still logged in
You could do it with a simple awk command to print the last column contents, using multiple spaces as the field separator. Since the default separator in awk is a single white-space, using it directly on the last column would split the contents of the last column. The NR>1 condition takes care of skipping the first line in applying the actual actions defined for awk .

awk -F '[[:space:]][[:space:]]+' 'NR>1{print $NF}' file

or using sed too, assuming multiple spaces between columns and column-separated words only in the last column.

sed '1d;s/^.* //' file

The OP apparently re-phrased the question to dump the output of the last command, from which they wanted the last column's contents. Since the intermediate columns themselves could have spaces, we match the column containing the day name and print from there to the end of the line, i.e.

last | awk '
{
    for (i=1; i<=NF; i++) {
        if ($i ~ /Mon|Tue|Wed|Thu|Fri|Sat|Sun/) {
            j = 0
            str = ""
            for (j=i; j<=NF; j++) {
                str = (str ? (str FS $j) : $j)
            }
            print str
            break
        }
    }
}'
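A shorter sketch in the same spirit, if your grep supports -o (GNU and BSD grep do): print everything from the first day-name onward. Note that header lines and the trailing "wtmp begins ..." line would still need filtering separately.

last | grep -oE '(Mon|Tue|Wed|Thu|Fri|Sat|Sun).*'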
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/431898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4658/" ] }
431,990
I need to expand some shell (not environment) variable names that are thematically related, e.g. B2_... where ... could be one or more different things like ACCOUNT_ID , ACCOUNT_KEY , REPOSITORY and so on. I don't know ahead how many variables there are nor which ones they are. That is what I want to find out. I want to be able to iterate through the B2... variables without having to put each individual name in the list, similar to how I would glob filename expansions. I use zsh for interactive sessions, but solutions for sh or bash are good too.
Using bash parameter expansion :

$ foobar_1=x foobar_2=y foobar_3=z
$ for v in "${!foobar_@}"; do echo "$v"; done

Output:

foobar_1
foobar_2
foobar_3

'dereference':

$ for v in "${!foobar_@}"; do echo "${!v}"; done

Output:

x
y
z
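Since you mention zsh for interactive use, here is a hedged zsh equivalent, a sketch relying on the special $parameters association from the zsh/parameter module and the (P) indirection flag (adjust the B2_ prefix as needed):

for v in ${(k)parameters[(I)B2_*]}; do
  print -r -- "$v=${(P)v}"   # name and its value
done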
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/431990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16501/" ] }
432,002
I accidentally renamed the directory /usr into /usr_bak . I want to change it back, so I append the path /usr_bak/bin to $PATH to allow the system to find the command sudo . But now sudo mv /usr_bak /usr gives me the error: sudo: error while loading shared libraries: libsudo_util.so.0: cannot open shared object file: No such file or directory Is there a way to rename the /usr_bak as /usr besides reinstalling the system?
Since you have set a password for root, use su and busybox , installed by default in Ubuntu. All of su 's required libraries are in /lib . Busybox is a collection of utilities that's statically linked, so missing libraries shouldn't be a problem. Do: su -c '/bin/busybox mv /usr_bak /usr' (While Busybox itself also has a su applet, the /bin/busybox binary is not setuid and so doesn't work unless ran as root.) If you don't have a root password, you could probably use Gilles' solution here using LD_LIBRARY_PATH , or (Gilles says this won't work with setuid binaries like sudo) reboot and edit the GRUB menu to boot with init=/bin/busybox as a kernel parameter and move the folder back.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/432002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
432,052
Ubuntu 16.04

#!/bin/bash
site="hello"
wDir="/home/websites/${site}/httpdocs/"
for file in $(find "${wDir}" -name "*.css")
do
  echo "$file";
done
exit 0;

shellcheck warns me even if I define the start directory but the script works just fine.

root@me /scripts/ # shellcheck test.sh

In test.sh line 6:
for file in $(find "${wDir}" -name "*.css")
            ^-- SC2044: For loops over find output are fragile. Use find -exec or a while read loop.
Using a for loop over find output is an anti-pattern at best. See BashFAQ/001 - How can I read a file (data stream, variable) line-by-line (and/or field-by-field)? for reasons why. Use a while loop as below with a read command. The below command delimits the output of find with a NUL byte and the read command reads by splitting on that byte, so that all files with special characters in their names are safely handled (including newlines).

#!/usr/bin/env bash
site="hello"
wDir="/home/websites/${site}/httpdocs/"
find "${wDir}" -name "*.css" -type f -print0 | while IFS= read -r -d '' file; do
    printf '%s\n' "$file"
done

Or altogether avoid using the pipe-lines and do process-substitution

while IFS= read -r -d '' file; do
    printf '%s\n' "$file"
done < <(find "${wDir}" -name "*.css" -type f -print0)

The web ShellCheck does not report any issues for either of the two snippets above.
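If you are on bash 4 or newer and don't need find 's other predicates, a globbing sketch avoids the subprocess entirely (globstar makes ** recurse; nullglob keeps the loop from running when there are zero matches):

#!/usr/bin/env bash
shopt -s globstar nullglob
site="hello"
wDir="/home/websites/${site}/httpdocs/"
for file in "${wDir}"**/*.css; do
    printf '%s\n' "$file"
done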
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/432052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210789/" ] }
432,057
I have a problem with my tomcat servers running on CentOS 7. I have 7 in all, set up alike. 4 have been patched recently and rebooted, the remaining 3 have an uptime of approx 2 years. When I do: sudo -u tomcat ls /tmp I get an error on the newly patched servers, stating:

sudo: pam_open_session: Permission denied
sudo: policy plugin failed session initialization

on the unpatched servers I get to execute the command. /etc/security/limits are identical:

tomcat soft nofile 5000000
tomcat hard nofile 5000000
tomcat soft nproc 5000000
tomcat hard nproc 5000000

I can circumvent the error by commenting out:

/etc/pam.d/sudo:session required pam_limits.so

I don't get it? Am I looking in the right places? strace from both look like:

Failing:

strace -e setrlimit sudo -u tomcat ls /tmp
setrlimit(RLIMIT_NPROC, {rlim_cur=RLIM64_INFINITY, rlim_max=RLIM64_INFINITY}) = 0
setrlimit(RLIMIT_NPROC, {rlim_cur=1031015, rlim_max=1031015}) = 0
setrlimit(RLIMIT_NPROC, {rlim_cur=5000000, rlim_max=5000000}) = 0
setrlimit(RLIMIT_NOFILE, {rlim_cur=5000000, rlim_max=5000000}) = -1 EPERM (Operation not permitted)
sudo: pam_open_session: Permission denied
sudo: policy plugin failed session initialization
+++ exited with 1 +++

working:

strace -e setrlimit sudo -u tomcat ls /tmp
setrlimit(RLIMIT_NPROC, {rlim_cur=5000000, rlim_max=5000000}) = 0
setrlimit(RLIMIT_NOFILE, {rlim_cur=5000000, rlim_max=5000000}) = -1 EPERM (Operation not permitted)
setrlimit(RLIMIT_NPROC, {rlim_cur=RLIM64_INFINITY, rlim_max=RLIM64_INFINITY}) = 0
hs_err_pid13726.log hsperfdata_cron hsperfdata_tokor hsperfdata_tomcat systemd-private-U8GAP7
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=28963, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++

pam version on working is: pam-1.1.8-12.el7_1.1.x86_64 and on non-working: pam-1.1.8-18.el7.x86_64
This is a bug in the pam_limits module, causing authentication to fail. I think it only affects RHEL/CentOS 7. It affects sudo users who have an unlimited or very high nofile setting (bigger than fs.nr_open = 1024x1024 = 1048576). Your options are:

Remove pam_limits from your sudo PAM rules
Set the nofile limit for the destination user (tomcat) to be something lower than fs.nr_open
Raise the kernel setting fs.nr_open (in /etc/sysctl.conf ) to be higher than your ulimit
Wait for a fix?
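A sketch of the third option (the 5000000 figure comes from the question's limits.conf; pick anything above your configured nofile limit):

# run as root: raise the kernel-wide fd ceiling above the PAM limit
echo 'fs.nr_open = 5242880' >> /etc/sysctl.conf
sysctl -p   # apply without rebooting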
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/432057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271542/" ] }
432,245
Is there a way to get only the name of a physical ethernet interface (i.e. not a virtual ethernet interface)? To give a bit of background, I'm trying to get a few SBCs (RPi 3) to write their IP addresses to a database. But since the names of the physical ethernet interfaces on different SBCs are not usually the same, I'm finding it hard to get their IP addresses. One way I could think of solving this is to give all the SBCs' ethernet interfaces a common name like eth0. But this method feels a bit clunky. So, is there any other alternative to get only the name of the physical ethernet interface?
You can tell which interfaces are virtual via ls -l /sys/class/net/ which gives you this output:

[root@centos7 ~]# ls -l /sys/class/net/
total 0
lrwxrwxrwx. 1 root root 0 Mar 20 08:58 ens33 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:01.0/net/ens33
lrwxrwxrwx. 1 root root 0 Mar 20 08:58 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx. 1 root root 0 Mar 20 08:58 virbr0 -> ../../devices/virtual/net/virbr0
lrwxrwxrwx. 1 root root 0 Mar 20 08:58 virbr0-nic -> ../../devices/virtual/net/virbr0-nic

From there, you could grep to filter only non-virtual interfaces:

ls -l /sys/class/net/ | grep -v virtual

Another option is to use this small script, adapted from this answer , which prints the name of all interfaces which do not have a MAC address of 00:00:00:00:00:00, i.e. physical:

#!/bin/bash
for i in $(ip -o link show | awk -F': ' '{print $2}')
do
    mac=$(ethtool -P "$i")
    [[ $mac != *"00:00:00:00:00:00"* ]] && echo "$i"
done
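Another hedged sketch along the same /sys idea: interfaces backed by real hardware have a device symlink in their /sys/class/net entry, while purely virtual ones (lo, bridges, veths) do not:

for i in /sys/class/net/*; do
    # only hardware-backed interfaces have a "device" link
    [ -e "$i/device" ] && basename "$i"
done

Note that Wi-Fi and USB NICs also count as "physical" by this test.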
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/432245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281619/" ] }
432,285
I'm trying to grep for possible matches to, ex****e So anything with ex at the start and e at the end, with 4 characters in between; how could I do this?
The regular expression operator that matches a single character is . . That's similar to ? in shell wildcards. * itself matches any number of the preceding thing in regular expressions (for instance, a* matches any number (including 0) of a s), and any number of characters in shell wildcards. POSIXly, to find lines that match exactly on that:

grep -xE 'ex.{4}e'

Or:

grep -x ex....e

Or:

grep -x 'ex.\{4\}e'

The second of which being the most portable. grep '^ex....e$' would even work in the original implementation in Unix Version 4 (1973); however -x was added in Unix Version 7 (1979) and is universal nowadays so you can rely on that one. Extended regular expressions were added in egrep in V7 as well but initially without the {x,y} interval operators. That operator was added as \{x,y\} for grep but often not in egrep as that would have broken backward compatibility. In the early nineties however, POSIX introduced the -E option of grep to merge the egrep functionality into grep and requires it to support {x,y} , and egrep is now deprecated. However, you still occasionally find some grep implementations that don't support -E or egrep ones that don't support {x,y} like the /bin/grep and /bin/egrep of Solaris (where you need to use /usr/xpg4/bin/grep instead). Beware that some grep implementations are not multibyte aware and their . regexp operator may match on each byte of a multibyte character (like the non-ASCII UTF-8 characters):

$ locale charmap
UTF-8
$ echo extrême | busybox grep -x ex....e
$ echo extrême | gnu-grep -x ex....e
extrême
$ echo extrême | busybox grep -x ex.....e
extrême

As the ê character is made of two bytes in UTF-8, extrême is 7 characters, but 8 bytes:

$ printf %s extrême | wc -cm
      7       8
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/432285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281676/" ] }
432,309
For certain obvious usages, the grep command can output multiple complaints about search names being directories. Note the multiple "Is a directory" warnings, this typically happens with commands like: grep text * eg: /etc/webmin # grep "log=1" * grep: cluster-webmin: Is a directoryconfig:log=1grep: cpan: Is a directorygrep: man: Is a directoryminiserv.conf:log=1miniserv.conf:syslog=1grep: mon: Is a directory I know I can use the "-s" option to suppress these warnings about directories (and stderr can be redirected to nul, but that's even worse), but I don't like that because it's extra boilerplate which we have to remember every time, and it also suppresses all warnings, not just ones about directories. Is there some way, where this spurious warning can be suppressed forever and globally ? I'm most interested in Debian and Cygwin.
Depending on the way you want to handle directory contents, grep -d recurse will do it (recursing into directories), or grep -d skip (ignoring directories and their content). You could have this be automatic, by adding it to ~/.profile or ~/.bashrc (one user) or /etc/profile or /etc/bashrc (all users):

alias grep="/bin/grep -d skip"
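If you would rather not rely on alias expansion (aliases are off in non-interactive shells), a shell function in ~/.bashrc is a sketch of an alternative with the same effect:

grep() { command grep -d skip "$@"; }

The command builtin calls the real grep, which avoids infinite recursion.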
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255852/" ] }
432,393
I recently encountered some problems with my Dell XPS 13 running Ubuntu 17.10. I updated the linux kernel to the latest version. With the new kernel running, my wifi adapter wasn't working anymore. Also Virtualbox doesn't work anymore. I searched a lot on the internet but couldn't find any solution. I was told my only option would be to downgrade the kernel to the previous working version. Now the problem with this is, I'll have to use GRUB. My laptop has been showing difficulties since some weeks, and one of them is that my laptop won't boot most of the times. The times it does boot, it goes straight to the disk encryption password prompt, meaning I can not boot from USB, enter bios or enter the GRUB. Now my question is, is there any other way of downgrading the kernel without using GRUB? I have figured I might take out the SSD and try to fix it from another computer, but I don't have the necessary tools to open it up, and even if I did, I wouldn't know where to plug in a M.2 ssd.
If I understood your question correctly, your problem is that you cannot successfully access the BIOS setup nor the GRUB prompt at boot time because of other problems, and you need to downgrade your kernel version. Since the package management tools usually won't let you uninstall the kernel version you're currently running, you'll need to somehow first boot an older kernel without interacting with GRUB at boot time . That's easy. First look at /etc/default/grub in your system and find the GRUB_DEFAULT= setting. If it says: GRUB_DEFAULT=saved then you can use sudo grub-set-default <number> to change which of the configured boot options GRUB will pick by default. Normally, it will be the topmost entry, or entry number 0. So if you want to backtrack one kernel update, you'd usually just say sudo grub-set-default 1 and reboot, and then you'll be free to remove the latest kernel package you had problems with. But if the /etc/default/grub instead says: GRUB_DEFAULT=0 then you can edit that file to say GRUB_DEFAULT=1 instead, and then run sudo update-grub to make the change effective. Then reboot, and again, you should be able to remove the newest kernel you have currently installed since you'll no longer be running on it. After removing the problematic kernel version, you should undo your bootloader change, or you'll forever be one step behind in kernel updates :-) So, either run sudo grub-set-default 0 or edit /etc/default/grub again to undo your change + run sudo update-grub , depending on which you originally did.
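To see which number corresponds to which entry, a commonly used sketch lists the top-level menu entries of the generated config. Note that on Ubuntu the older kernels usually sit inside the "Advanced options" submenu, which grub-set-default addresses with strings like "1>2" instead of a single number:

awk -F\' '/^menuentry /{print i++ ": " $2}' /boot/grub/grub.cfg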
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/432393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281758/" ] }
432,528
I'm trying to replace the following section of a YAML file: ssl: enabled: false to read ssl: enabled: true I've tried this, which failed: sed -i s/ssl:\n enabled: false/ssl:\n enabled: true/g
You can use sed ranges: sed '/^ *ssl:/,/^ *[^:]*:/s/enabled: false/enabled: true/' file The range boundaries are /^ *ssl:/ (start of the ssl section) and /^ *[^:]*:/ (any other section). The s is the usual substitution command.
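For an in-place edit with a safety net, the same command works with sed -i (a GNU/BSD sed extension; the suffix keeps a backup copy, and config.yml is a hypothetical filename):

sed -i.bak '/^ *ssl:/,/^ *[^:]*:/s/enabled: false/enabled: true/' config.yml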
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/432528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281873/" ] }
432,604
Say I only need the first 5 lines of an output for logging purposes. I also need to know if and when the log has been truncated. I am trying to use head to do the job; the seq command below outputs 20 lines that get truncated by head , and I echo the truncation information:

> seq -f 'log line %.0f' 20 | head -n 5 && echo '...Output truncated. Only showing first 5 lines...'
log line 1
log line 2
log line 3
log line 4
log line 5
...Output truncated. Only showing first 5 lines...

But if the seq command outputs less than 5 lines, using the same above construction, I get a wrong "truncated" status:

seq -f ' log line %.0f' 3 | head -n 5 && echo '...Output truncated. Only showing first 5 lines...'
log line 1
log line 2
log line 3
...Output truncated. Only showing first 5 lines...

Is there a way for the head command (or another tool) to tell me if it truncated anything so that I only display the "...truncated..." message when needed?
A note of warning: When you do:

cmd | head

and if the output is truncated, that could cause cmd to be killed by a SIGPIPE, if it writes more lines after head has exited. If that's not what you want, if you want cmd to keep running afterwards even if its output is discarded, you'd need to read but discard the remaining lines instead of exiting after 10 lines have been output (for instance, with sed '1,10!d' or awk 'NR<=10' instead of head ). So, for the two different approaches:

output truncated, cmd may be killed

cmd | awk 'NR>5 {print "TRUNCATED"; exit}; {print}'
cmd | sed '6{s/.*/TRUNCATED/;q;}'

Note that the mawk implementation of awk accumulates a buffer-full of input before starting to process it, so cmd may not be killed until it has written a buffer-full (8KiB on my system AFAICT) of data. That can be worked around by using the -Winteractive option. Some sed implementations also read one line in advance (to be able to know which is the last line when using the $ address), so with those, cmd may only be killed after it has output its 7th line.

output truncated, the rest discarded so cmd is not killed

cmd | awk 'NR<=5; NR==6{print "TRUNCATED"}'
cmd | sed '1,6!d;6s/.*/TRUNCATED/'
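Wrapped up as a reusable sketch (the function name truncate_lines is made up for illustration), using the non-killing variant and the question's own message:

truncate_lines() {
    awk -v n="$1" 'NR<=n; NR==n+1 {print "...Output truncated. Only showing first " n " lines..."}'
}
seq -f 'log line %.0f' 20 | truncate_lines 5   # prints 5 lines plus the notice
seq -f 'log line %.0f' 3  | truncate_lines 5   # prints 3 lines, no notice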
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207557/" ] }
432,660
I want to check whether an argument to a shell script is a whole number (i.e., a non-negative integer: 0, 1, 2, 3, …, 17, …, 42, …, etc., but not 3.1416 or -5) expressed in decimal (so nothing like 0x11 or 0x2A). How can I write a case statement using regex as condition (to match numbers)? I tried a few different ways I came up with (e.g., [0-9]+ or ^[0-9][0-9]*$ ); none of them works. Like in the following example, valid numbers are falling through the numeric regex that's intended to catch them and are matching the * wildcard.

i=1
let arg_n=$#+1
while (( $i < $arg_n )); do
    case ${!i} in
        [0-9]+) n=${!i} ;;
        *) echo 'Invalid argument!' ;;
    esac
    let i=$i+1
done

Output:

$ ./cmd.sh 64
Invalid argument!
case does not use regexes, it uses patterns . For "1 or more digits", do this:

shopt -s extglob
...
case ${!i} in
    +([[:digit:]]) ) n=${!i} ;;
    ...

If you want to use regular expressions, use the =~ operator within [[...]]

if [[ ${!i} =~ ^[[:digit:]]+$ ]]; then
    n=${!i}
else
    echo "Invalid"
fi
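For completeness, a POSIX-sh sketch that needs neither extglob nor regexes: reject anything that is empty or contains a non-digit, and accept the rest.

case $1 in
    '' | *[!0-9]*) echo 'Invalid argument!' ;;   # empty, or has a non-digit
    *) n=$1 ;;                                   # all digits
esac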
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264145/" ] }
432,685
lsusb lists it as 2001:3d04. How do I install drivers? Please help.

Output from lsmod | grep mt7601u

mt7601u    98304  0
mac80211  638976  1 mt7601u
cfg80211  573440  2 mac80211,mt7601u
usbcore   241664  5 uhci_hcd,mt7601u,ehci_hcd,ehci_pci,usbhid

Output from ifconfig -a

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
      inet 192.168.146.129 netmask 255.255.255.0 broadcast 192.168.146.255
      inet6 fe80::20c:29ff:fed0:5896 prefixlen 64 scopeid 0x20<link>
      ether 00:0c:29:d0:58:96 txqueuelen 1000 (Ethernet)
      RX packets 38 bytes 4096 (4.0 KiB)
      RX errors 0 dropped 0 overruns 0 frame 0
      TX packets 38 bytes 3691 (3.6 KiB)
      TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 1 (Local Loopback)
    RX packets 18 bytes 1058 (1.0 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 18 bytes 1058 (1.0 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Output from dmesg : screenshot link
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282010/" ] }
432,733
I have a list of hostnames in a file and want to separate them based on the last character. Write the host name to a file if the last character is an odd number. How can I do this in a one-liner? Example:

abc123
abc124
abc348
abc435

Desired output:

abc123
abc435
Short awk command:

awk '/[13579]$/' file > hostnames_odd.txt

[13579] - character class representing the list of allowed digits (odd numbers)
$ - the end of the string/line

Result:

$ cat hostnames_odd.txt
abc123
abc435

Or the same with grep :

grep '[13579]$' file > hostnames_odd.txt

In case there could possibly be whitespace(s) at the end of some line(s), change the crucial pattern to the following: [13579][[:space:]]*$
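If you also need the even-numbered hosts, a one-pass awk sketch writes both files at once (the output filenames are just examples):

awk '{ print > (/[13579]$/ ? "hostnames_odd.txt" : "hostnames_even.txt") }' file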
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134713/" ] }
432,774
Can I take a Linux kernel and use it with, say, FreeBSD and vice versa (FreeBSD kernel in, say, a Debian)? Is there a universal answer? What are the limitations? What are the obstructions?
No, kernels from different implementations of Unix-style operating systems are not interchangeable, notably because they all present different interfaces to the rest of the system (user space): their system calls (including ioctl specifics), the various virtual file systems they use... What is interchangeable to some extent, at the source level, is the combination of the kernel and the C library, or rather, the user-level APIs that the kernel and libraries expose (essentially, the view at the layer described by POSIX, without considering whether it is actually POSIX). Examples of this include Debian GNU/kFreeBSD , which builds a Debian system on top of a FreeBSD kernel, and Debian GNU/Hurd , which builds a Debian system on top of the Hurd. This isn't quite at the level of kernel interchangeability, but there have been attempts to standardise a common application binary interface, to allow binaries to be used on various systems without needing recompilation. One example is the Intel Binary Compatibility Standard , which allows binaries conforming to it to run on any Unix system implementing it, including older versions of Linux with the iBCS 2 layer. I used this in the late 90s to run WordPerfect on Linux. See also How to build a FreeBSD chroot inside of Linux .
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/432774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
432,816
When I cat /etc/os-release I get the following: PRETTY_NAME="Kali GNU/Linux Rolling"NAME="Kali GNU/Linux"ID=kaliVERSION="2018.1"VERSION_ID="2018.1"ID_LIKE=debianANSI_COLOR="1;31"HOME_URL="http://www.kali.org/"SUPPORT_URL="http://forums.kali.org/"BUG_REPORT_URL="http://bugs.kali.org/" How would I grab kali from ID= in bash? How would I grab 2018.1 from VERSION= in bash?
you can source the file and use the var's value

. /etc/os-release
echo $ID
echo $VERSION

or try with awk

awk -F= '$1=="ID" { print $2 ;}' /etc/os-release

where
-F= tells awk to use = as separator
$1=="ID" filters on ID
{ print $2 ;} prints the value
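If you'd rather not pollute the current shell's variables, a subshell sketch reads the file, prints what you need, and throws the rest away:

( . /etc/os-release && printf 'ID=%s VERSION=%s\n' "$ID" "$VERSION" )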
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173825/" ] }
432,824
This question, in different forms, has been asked hundreds of times, and the answers are always similar - I agree with them in general case, but still would like to ask it again as my case is a bit different. My workflow involves interaction over SSH with various devices manufactured by company I work in and I would like my SSH client to: Remember the password for a host once I successfully login Do not require me to manually delete entries from known-hosts file in case host identification changed. The devices are always in an isolated, local network without internet access. I cannot use key-based authentication since these devices have no persistent storage (keys will not survive a reboot). I'd also like to use the same PC to access hardware in different setups - IPs and passwords are the same, but hosts IDs are different. Regular, OpenSSH client is painful in this scenario - while I understand that's because of the security I'd like to voluntarily opt out of it. Is this somehow possible without forking OpenSSH and making these changes by myself?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/432824", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196466/" ] }
433,043
I just configured a new server and installed fail2ban as well, but it is not banning me when I keep trying to connect with the wrong password. fail2ban.log:

2018-03-23 12:46:29,363 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:46:30,747 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:46:33,346 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:46:35,515 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:46:36,372 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:47:45,471 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:47:46,820 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:47:49,503 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:47:50,458 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:47:51,893 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:48:49,699 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:48:51,835 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:48:52,531 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:48:54,477 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:48:57,056 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:50:53,240 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:50:53,677 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:50:55,065 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:50:58,253 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:51:00,494 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:51:00,685 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:52:06,119 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:52:08,300 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:52:11,583 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:52:11,773 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:52:13,498 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:53:07,823 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:53:09,712 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:53:09,842 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:53:11,718 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:53:13,696 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:54:37,181 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:54:37,949 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:54:39,092 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:54:40,906 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:54:42,616 fail2ban.filter [9756]: INFO [sshd] Found [my ip]
2018-03-23 12:54:42,955 fail2ban.actions [9756]: NOTICE [sshd] [my ip] already banned
2018-03-23 12:54:52,074 fail2ban.action [9756]: ERROR iptables -w -n -L INPUT | grep -q 'f2b-sshd[ \t]' -- stdout: ''
2018-03-23 12:54:52,075 fail2ban.action [9756]: ERROR iptables -w -n -L INPUT | grep -q 'f2b-sshd[ \t]' -- stderr: ''
2018-03-23 12:54:52,075 fail2ban.action [9756]: ERROR iptables -w -n -L INPUT | grep -q 'f2b-sshd[ \t]' -- returned 1
2018-03-23 12:54:52,075 fail2ban.CommandAction [9756]: ERROR Invariant check failed.
Trying to restore a sane environment
2018-03-23 12:54:52,180 fail2ban.action [9756]: ERROR iptables -w -D INPUT -p tcp -m multiport --dports ssh -j f2b-sshd
iptables -w -F f2b-sshd
iptables -w -X f2b-sshd -- stdout: ''
2018-03-23 12:54:52,181 fail2ban.action [9756]: ERROR iptables -w -D INPUT -p tcp -m multiport --dports ssh -j f2b-sshd
iptables -w -F f2b-sshd
iptables -w -X f2b-sshd -- stderr: "iptables v1.4.21: Couldn't load target `f2b-sshd':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\niptables: No chain/target/match by that name.\niptables: No chain/target/match by that name.\n"
2018-03-23 12:54:52,181 fail2ban.action [9756]: ERROR iptables -w -D INPUT -p tcp -m multiport --dports ssh -j f2b-sshd
iptables -w -F f2b-sshd
iptables -w -X f2b-sshd -- returned 1
2018-03-23 12:54:52,181 fail2ban.actions [9756]: ERROR Failed to execute unban jail 'sshd' action 'iptables-multiport' info '{'matches': '2018-03-23T11:53:46.707058 149-210-194-176.colo.transip.net sshd[27676]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=ip-[my ip].ip.prioritytelecom.net user=root
2018-03-23T11:53:48.733188 149-210-194-176.colo.transip.net sshd[27676]: Failed password for root from [my ip] port 31224 ssh2
2018-03-23T11:54:51.709842 149-210-194-176.colo.transip.net sshd[27676]: Failed password for root from [my ip] port 31224 ssh2', 'ip': '[my ip]', 'time': 1521802491.930057, 'failures': 3}': Error stopping action

When I tail the log file, I see my ssh login attempts getting logged, but after the 3rd attempt I can just keep trying; if I use the right password after the 10th attempt, for example, it will log me in. I also get errors, seen at the end of the log file, every now and then. My jail.local:

[DEFAULT]
# ban hosts for one hour:
bantime = 3600
# maxtrys
maxretry = 3
# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport

[sshd]
enabled = true

Does anyone have a clue why this is happening?
Looks like your iptables configuration does not include a filter chain named f2b-sshd . First, a mini-primer on iptables . iptables is both a command and the name of the Linux firewall subsystem. The command is used to set up firewall rules in RAM. The iptables firewall rules are arranged first into tables: there is the default filter table, but also nat , mangle , raw and security tables, for various purposes. fail2ban is doing traffic filtering, so it uses the filter table. The tables are then further divisible into filter chains. Each table has certain standard chains: for the filter table, the standard chains are INPUT , FORWARD and OUTPUT . The FORWARD chain is only used when the system is configured to route traffic for other systems. The INPUT chain deals with incoming traffic to this system. If fail2ban added its rules directly to the INPUT chain and wiped that chain clean when all the bans expired, then you would have to turn over full control of your firewall input rules to fail2ban - you could not easily have any custom firewall rules in addition to what fail2ban does. This is clearly not desirable, so fail2ban won't do that. Instead, fail2ban creates its own filter chain it can fully manage on its own, and adds on start-up a single rule to the INPUT chain to send any matching traffic to be processed through fail2ban 's chain. For example, when configured to protect sshd , fail2ban should be executing these commands at start-up:

iptables -N f2b-sshd
iptables -A f2b-sshd -j RETURN
iptables -I INPUT -p tcp -m multiport --dports <TCP ports configured for sshd protection> -j f2b-sshd

These commands create a f2b-sshd filter chain, set RETURN as its last rule (so that when any fail2ban rules have been processed, the normal processing of INPUT rules will continue as without fail2ban ), and finally, add a rule to the beginning of the INPUT table to catch any SSH traffic and send it first to the f2b-sshd chain. Now, when fail2ban needs to ban an IP address for SSH use, it will just insert a new rule to the f2b-sshd chain. If you are using firewalld or some other system that manages iptables firewall rules for you, or if you clear all the iptables rules manually, then these initial rules, and possibly the entire f2b-sshd filter chain, may be wiped out. You should make sure that any firewall management tool you might be using maintains that initial rule in the INPUT chain and doesn't touch the f2b-sshd chain at all. The error messages at the end of your snippet indicate that fail2ban is checking that the initial rules are still there ("invariant check"), and finding that they're not.
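To check and recover from this state, a hedged sketch (the chain name matches the jail; restarting fail2ban makes it recreate its rules):

# is the INPUT hook and the fail2ban chain still there?
iptables -w -n -L INPUT | grep f2b-sshd
iptables -w -n -L f2b-sshd
# if not, restart fail2ban so it sets them up again
systemctl restart fail2ban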
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282146/" ] }
433,045
I'd like to know the possible versions of ls that exist and how they differ from one another. I am actually working on man page of ls but I'm not getting the right results even though the options are correct so I'm thinking maybe the version of ls is the issue here.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282304/" ] }
433,046
I'm in the process of purchasing a RHEL license. In the meantime, I'd like to utilize CentOS 7 repos on my RHEL 7. I created a /etc/yum.repos.d/centos.repo file in the /etc/yum.repos.d directory but I don't know how to move past that. Most of the information I found online either points to Fedora repos or is referring to CentOS 5. Below is something I found online and copy-pasted into my centos.repo file. Thank you.

[centos]
name=CentOS $releasever - $basearch
baseurl=http://ftp.heanet.ie/pub/centos/7/os/$basearch/
enabled=1
gpgcheck=0

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
priority=1

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
priority=1

#packages used/produced in the build but not released
[addons]
name=CentOS-$releasever - Addons
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=addons
#baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
priority=1

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
priority=1

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
priority=2

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
priority=2
Delete this centos.repo (or set enabled=0 for all sections) and create a new repository file centos1.repo in /etc/yum.repos.d/ with the content:

[centos]
name=CentOS-7
baseurl=http://ftp.heanet.ie/pub/centos/7/os/x86_64/
enabled=1
gpgcheck=1
gpgkey=http://ftp.heanet.ie/pub/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7

Then run

yum repolist

Now check if you can install any package, like

yum install nmap -y

Done!!!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245389/" ] }
433,051
Every time I try to mount a NFS share I get this:

>> mount -t nfs gitlab-replica-storage.blah.com:/export/registry-gitlab-prod-data-vol /mnt/test
mount.nfs: Stale file handle

The problem is that I cannot umount, as it says:

>> umount -f -l /mnt/test
umount: /mnt/test: not mounted

I tried checking if any process was using the mountpoint, but that is not the case. Any other alternative to troubleshoot this? As clarification: I can mount it on another machine. I cannot mount it on another mountpoint on the affected machine.
The error, ESTALE, was originally introduced to handle the situation where a file handle, which NFS uses to uniquely identify a file on the server, no longer refers to a valid file on the server. This can happen when the file is removed on the server, either by an application on the server, some other client accessing the server, or sometimes even by another mounted file system from the same client. The NFS server also returns this error when the file resides upon a file system which is no longer exported. Additionally, some NFS servers even change the file handle when a file is renamed, although this practice is discouraged. This error occurs even if a file or directory, with the same name, is recreated on the server without the client being aware of it. The file handle refers to a specific instance of a file and deleting the file and then recreating it creates a new instance of the file. The error, ESTALE, is usually seen when cached directory information is used to convert a pathname to a dentry/inode pair. The information is discovered to be out of date or stale when a subsequent operation is sent to the NFS server. This can easily happen in system calls such as stat(2) when the pathname is converted to a dentry/inode pair using cached information, but then a subsequent GETATTR call to the server discovers that the file handle is no longer valid. This error can also occur when a change is made on the server in between looking up different components of the pathname to be looked up or between a successful lookup and a subsequent operation. Original link about ESTALE: ESTALE LWN . I suggest you check files and directories on the NFS server, or ask the admin of the NFS server to do this. Maybe some old pagecache, inode, or dentry cache entries exist on the NFS server. Please clean them:

# To free pagecache
echo 1 > /proc/sys/vm/drop_caches
# To free dentries and inodes
echo 2 > /proc/sys/vm/drop_caches
# To free pagecache, dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
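One small refinement worth adding (the kernel documentation recommends it, since drop_caches only discards clean objects): flush dirty pages first.

sync
echo 3 > /proc/sys/vm/drop_caches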
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433051", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258261/" ] }
433,066
I've got a bash script I'd like to loop over the lines in stdin, or loop over each argument passed in. Is there a clean way to write this so I don't have to have 2 loops? #!/bin/bash# if we have command line args... if [ -t 0 ]then # loop over arguments for arg in "$@" do # process each argument doneelse # loop over lines from stdin while IFS= read -r line; do # process each line donefi EDIT: I'm looking for a generic solution that just uses a single loop, as I find I want to do this quite often, but have always wrote out 2 loops and then called a function instead. So maybe something that turns stdin into an array, so I could use a single loop instead?
Create data for your while read loop:

#!/bin/sh

if [ "$#" -gt 0 ]; then
    # We have command line arguments.
    # Output them with newlines in-between.
    printf '%s\n' "$@"
else
    # No command line arguments.
    # Just pass stdin on.
    cat
fi |
while IFS= read -r string; do
    printf 'Got "%s"\n' "$string"
done

Note that your concat example can be done with the while read loop replaced by tr '\n' ',' or similar. Also, the -t test says nothing about whether you have command line arguments or not. Alternatively, to process both command line arguments and standard input (in that order):

#!/bin/sh

{
    if [ "$#" -gt 0 ]; then
        # We have command line arguments.
        # Output them with newlines in-between.
        printf '%s\n' "$@"
    fi

    if [ ! -t 0 ]; then
        # Pass stdin on.
        cat
    fi
} |
while IFS= read -r string; do
    printf 'Got "%s"\n' "$string"
done

Or, using short-cut notation that some people seem to like:

#!/bin/sh

{
    [ "$#" -gt 0 ] && printf '%s\n' "$@"
    [ ! -t 0 ] && cat
} |
while IFS= read -r string; do
    printf 'Got "%s"\n' "$string"
done
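A quick usage sketch, assuming the first script above is saved as ./process.sh (a hypothetical name) and made executable:

./process.sh one two             # loops over the arguments
printf 'a\nb\n' | ./process.sh   # loops over stdin lines

Both invocations print Got "..." once per item.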
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22734/" ] }
433,240
On WSL, I executed sudo zypper update , but I got this error message:

Loading repository data...
Warning: Repository 'oss_update' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Nothing to do.

When I executed sudo zypper refresh , I didn't get any error message, though.

Retrieving repository 'The Go Programming Language (openSUSE_Leap_42.3)' metadata ..........[done]
Retrieving repository 'devel:languages:php (openSUSE_Leap_42.3)' metadata ..........[done]
Repository 'oss' is up to date.
Retrieving repository 'oss_update' metadata ..........[done]
Retrieving repository 'PHP7 extensions (php7_openSUSE_Leap_42.3)' metadata ..........[done]
All repositories have been refreshed

I checked the list of the repositories I am using with zypper lr -u :

# | Alias                      | Name                                             | Enabled | GPG Check | Refresh | URI
--+----------------------------+--------------------------------------------------+---------+-----------+---------+-----------------------------------------------------------------------------------------------------
1 | devel_languages_go         | The Go Programming Language (openSUSE_Leap_42.3) | Yes     | (r ) Yes  | No      | http://download.opensuse.org/repositories/devel:/languages:/go/openSUSE_Leap_42.3/
2 | devel_languages_php        | devel:languages:php (openSUSE_Leap_42.3)         | Yes     | (r ) Yes  | No      | http://download.opensuse.org/repositories/devel:/languages:/php/openSUSE_Leap_42.3/
3 | oss                        | oss                                              | Yes     | (r ) Yes  | No      | http://download.opensuse.org/distribution/leap/42.3/repo/oss/suse/
4 | oss_update                 | oss_update                                       | Yes     | (r ) Yes  | No      | http://download.opensuse.org/update/leap/42.3/oss/
5 | server_php_extensions_php7 | PHP7 extensions (php7_openSUSE_Leap_42.3)        | Yes     | (r ) Yes  | No      | http://download.opensuse.org/repositories/server:/php:/extensions:/php7/php7_openSUSE_Leap_42.3/

When I check the content of http://download.opensuse.org/update/leap/42.3/oss/ , I see the files and the directories have been updated on March 23, 2018, so they don't seem obsolete. Why am I getting that error message about the repository being outdated? How do I change the repository I am using? What should I use?
I found the answer from https://www.reddit.com/r/bashonubuntuonwindows/comments/8fcbs5/update_of_opensuse_on_wsl_error/ : you need to change the repository URIs from HTTP to HTTPS. I just did that and was able to see new packages. I only had the oss and oss_update repositories, so the process I followed was:

sudo zypper rr oss
sudo zypper rr oss_update
sudo zypper ar https://download.opensuse.org/distribution/leap/42.3/repo/oss/suse/ oss
sudo zypper ar https://download.opensuse.org/update/leap/42.3/oss/ oss_update
sudo zypper ref
sudo zypper up

To make sure this works for your version, find the version number of your system, and substitute it into the above URLs in the place of 42.3 . (You can make sure the URLs are valid by opening them in a browser.) You can see the version number in the output of the following command:

cat /etc/os-release
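Instead of removing and re-adding the repositories, a sketch that rewrites the scheme in place in the repo definitions should achieve the same (the path and pattern assume a stock openSUSE layout; keep backups if unsure):

sudo sed -i 's|=http://download.opensuse.org|=https://download.opensuse.org|' /etc/zypp/repos.d/*.repo
sudo zypper ref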
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1114/" ] }
433,248
I want to find all PDF files in a directory and its subdirectories, on OSX. I know there are some PDFs in subdirectories, because e.g. this produces lots of results: ls myfolder/pdfs/*.pdf All my Googling suggests I want ls -R , but this produces no results: ls -R *.pdf What am I doing wrong? I can find some results this way: ls -R | grep pdf But I can't see this full paths to the files, which isn't very helpful.
ls -R *.pdf would invoke ls recursively on anything matching *.pdf (if there's nothing matching *.pdf in the current directory, you'll get no result, and if there is, it will only recurse into it if it's a directory). ls -R | grep pdf would show you everything in the ls -R result that matches the regular expression pdf , which is not what you want. This is what you need: find myfolder -type f -name '*.pdf' This will give you the pathnames of all regular files ( -type f ) in or below the myfolder directory whose filenames matches the pattern *.pdf . The pattern needs to be quoted to protect it from the shell. With the zsh shell (which recently became the default shell on macOS): print -rC1 myfolder/**/*.pdf(.ND) This would print out the pathnames of all regular files in or below the directory myfolder that have names ending in .pdf , in a single column. The matching would include hidden names. The N and D in the glob qualifier corresponds to setting the nullglob and dotglob shell options in the bash shell (but only for this single globbing pattern), and the dot makes the pattern only match regular files (i.e. not directories etc.)
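Two small variations worth knowing, both supported by the BSD find on macOS as well as GNU find: case-insensitive matching, and NUL-delimited output for piping into other tools.

find myfolder -type f -iname '*.pdf'                            # also matches .PDF, .Pdf, ...
find myfolder -type f -iname '*.pdf' -print0 | xargs -0 ls -l   # safe with spaces in names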
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118973/" ] }
433,273
I use vi-mode in oh-my-zsh with the af-magic theme . I want the cursor style to indicate whether I am in normal mode (block) or insert mode (beam), both in zsh and in vim . This is what I have so far. In my ~/.zshrc :

# vim mode config
# ---------------

# Activate vim mode.
bindkey -v

# Remove mode switching delay.
KEYTIMEOUT=5

# Change cursor shape for different vi modes.
function zle-keymap-select {
  if [[ ${KEYMAP} == vicmd ]] || [[ $1 = 'block' ]]; then
    echo -ne '\e[1 q'
  elif [[ ${KEYMAP} == main ]] || [[ ${KEYMAP} == viins ]] || [[ ${KEYMAP} = '' ]] || [[ $1 = 'beam' ]]; then
    echo -ne '\e[5 q'
  fi
}
zle -N zle-keymap-select

# Use beam shape cursor on startup.
echo -ne '\e[5 q'

# Use beam shape cursor for each new prompt.
preexec() {
  echo -ne '\e[5 q'
}

As found here . In vim , I use Vundle and terminus . With these configurations, both zsh and vim work as they should when considered independently. However, when I enter vim from zsh in insert mode , vim starts in normal mode (as it should) but still shows the beam shape cursor. Similarly, when I exit vim , I get back to zsh in insert mode , but the cursor is still in block shape (since the last mode in vim was normal ). When after this, I switch modes for the first time (in both zsh and vim ), the cursor behaves the way it should again. How can I make them display the correct cursor after entering and exiting vim as well? I tried putting

autocmd VimEnter * stopinsert
autocmd VimLeave * startinsert

in my ~/.vimrc , but this does not affect the cursor.
I think it's better to use precmd() instead of preexec():

# .zshrc
_fix_cursor() {
  echo -ne '\e[5 q'
}
precmd_functions+=(_fix_cursor)

This way:

- you don't have to change .vimrc
- the cursor is fixed also when you create a new prompt without executing a command
- you don't have to write echo -ne '\e[5 q' twice in your .zshrc.
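If you also want the beam cursor restored at the moment the line editor starts accepting input again (for instance right after vim exits), a small addition in the same spirit uses the standard zle-line-init widget. This is only a sketch: if another plugin already defines zle-line-init, you would have to merge the two definitions:

zle-line-init() {
  echo -ne '\e[5 q'   # beam cursor whenever zle starts a new editing session
}
zle -N zle-line-init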
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282466/" ] }
433,345
Say we have a named pipe called fifo, and we're reading and writing to it from two different shells. Consider these two examples:

shell 1:
$ echo foo > fifo
<hangs>

shell 2:
$ cat fifo
foo

shell 1:
$ echo bar > fifo
<hangs>

and:

shell 1:
$ cat > fifo
<typing> foo
<hangs>

shell 2:
$ cat fifo
foo
^C

shell 1:
<typing> bar
<exits>

I can't wrap my head around what happens in these examples, and in particular why trying to write 'bar' to the pipe in the first example results in a blocking call, whereas in the second example it triggers a SIGPIPE. I do understand that in the first case, two separate processes write to the pipe, and thus it is opened twice, while in the second case it is only opened once by a single process and written to twice, with the process reading from the pipe being killed in the meantime. What I don't understand is how that affects the behaviour of write. The pipe(7) man page states:

If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process.

This condition doesn't sound clear to me. A closed file descriptor just ceases to be a file descriptor, right? How does saying "the reading end of the pipe has been closed" differ from "the reading end of the pipe is not open"? I hope my question was clear enough. By the way, if you could suggest pointers for understanding in detail the functioning of Unix pipes in relationship to open, close, read and write operations, I'd greatly appreciate it.
Your example is using a fifo, not a pipe, so it is subject to fifo(7). pipe(7) also says:

A FIFO (short for First In First Out) has a name within the filesystem (created using mkfifo(3)), and is opened using open(2). Any process may open a FIFO, assuming the file permissions allow it. The read end is opened using the O_RDONLY flag; the write end is opened using the O_WRONLY flag. See fifo(7) for further details. Note: although FIFOs have a pathname in the filesystem, I/O on FIFOs does not involve operations on the underlying device (if there is one).

I/O on pipes and FIFOs: The only difference between pipes and FIFOs is the manner in which they are created and opened. Once these tasks have been accomplished, I/O on pipes and FIFOs has exactly the same semantics.

So now from fifo(7):

The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process. The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also.

So before both ends (here meaning there is at least one reader and one writer) are opened, write blocks, as per fifo(7). After both ends have been opened, and then the reading end(s) closed, write generates SIGPIPE, as per pipe(7). For an example of pipe usage (not fifo), look at the example section of pipe(2): it involves pipe() (no open(), since pipe() actually created the pipe pair already opened), close(), read(), write() and fork() (there's almost always a fork() around when using a pipe). The simplest way to handle SIGPIPE from your own C code, if you don't want it to die when writing to a fifo, would be to call signal(SIGPIPE, SIG_IGN); and handle the error by checking for errno EPIPE after each write() instead.
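You can watch both behaviours from a shell without writing any C. A rough sketch (assuming bash, and that /tmp/demo does not already exist; the exact broken-pipe diagnostics vary by shell):

mkfifo /tmp/demo
cat /tmp/demo &       # opens the read end, then blocks waiting for data
exec 3> /tmp/demo     # open the write end and keep it open on fd 3
echo one >&3          # both ends open: "one" is delivered to cat
kill %1               # the only read end is now closed
/bin/echo two >&3     # write with no readers: /bin/echo dies from SIGPIPE (status 141)
exec 3>&-             # close the write end and clean up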
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123658/" ] }
433,385
I have a problem with QT applications (e.g. picard or masterpdfeditor) under GNOME: their interface looks tiny. GTK applications, in contrast, look good. I would like a global solution which will work across all applications, not a per-application fix. I don't know exactly where the issue begins (is it a QT5 issue or a GNOME issue?) but I'd like to have a bigger interface. How can I do it? I have already tried a trick explained here, but it works only partially: if I launch the apps directly from the terminal, prefixing the command with the right variable, e.g. QT_SCALE_FACTOR=1.35 picard, the trick works! But if I launch them from the menu (gnome-shell), the exported variable is completely ignored. Is there a way to fix it? I have a laptop connected with an external FullHD 24" monitor. I'm on Arch Linux x86_64 and Gnome 3.28/3.30.
According to the Archlinux Wiki:

Since Qt 5.6, Qt 5 applications can be instructed to honor screen DPI by setting the QT_AUTO_SCREEN_SCALE_FACTOR environment variable.

So, you just need to edit ~/.profile or ~/.bash_profile and add a line exporting that environment variable, like this:

export QT_AUTO_SCREEN_SCALE_FACTOR=1

I've tried this with KeepassXC under Ubuntu 18.04 successfully.
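One caveat from the question: variables exported in ~/.bash_profile are often not seen by applications launched from the desktop menu. A possible workaround is to set the variable in /etc/environment instead, which PAM reads for every session, GUI included. Note this requires logging out and back in, and it applies to all users:

# /etc/environment
QT_AUTO_SCREEN_SCALE_FACTOR=1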
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186963/" ] }
433,404
I have the below input with a huge number of rows:

11|ABCD|19900101123123445455|555|AAA|50505050|0000009030
11|ABCD|19900101123123445455|555|AAA|50505050|0000000199
13|ABCD|201803010YYY66666666|600|ETC|20180300|0000084099
11|ABCD|19900101123123445455|555|AAA|50505050|0008995001

And I need to get the below output:

11|ABCD|19900101123123445455|555|AAA|50505050|9004230
13|ABCD|201803010YYY66666666|600|ETC|20180300|84099

I have been trying with the below awk, but I have too limited knowledge of arrays:

cat test|awk -F"|" '{ a[$1]++;b[$2]++;c[$3]++;d[$4]++;e[$5]++;f[$6]+=$6 }; END { for (i in a); print i, f[i]}'

I need to sum the last (7th) pipe-separated column, grouped by the first 6 columns, and print those first 6 columns followed by the sum.
Awk solution:

awk 'BEGIN{ FS=OFS="|" }
     { a[$1 FS $2 FS $3 FS $4 FS $5 FS $6] += $7 }
     END{ for (i in a) print i, a[i] }' file

The output:

11|ABCD|19900101123123445455|555|AAA|50505050|9004230
13|ABCD|201803010YYY66666666|600|ETC|20180300|84099
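Note that for (i in a) gives no guarantee about output order. If you need the groups printed in order of first appearance, a variant like this (an untested sketch) records that order explicitly:

awk 'BEGIN{ FS=OFS="|" }
     { k = $1 FS $2 FS $3 FS $4 FS $5 FS $6 }   # the grouping key
     !(k in a){ order[++n] = k }                 # remember first appearance
     { a[k] += $7 }
     END{ for (j = 1; j <= n; j++) print order[j], a[order[j]] }' file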
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282577/" ] }
433,444
I have a Dockerfile like this:

FROM alpine
COPY setup.sh /setup.sh
CMD ["/setup.sh"]

My setup.sh is like this:

#!/bin/sh

echo "hello world"

Tried to run these commands:

docker build .
docker run --name test 61230f9f45ad

Error returned is this:

standard_init_linux.go:195: exec user process caused "no such file or directory"

I'm using Powershell on Windows 10 LTSB, docker version is 17.12.0-ce, build c97c6d6. Why?
It is probably the Windows-style line endings that break it. Here's what the file looks like when saved with Windows line endings, but read in Unix style:

#!/bin/sh^M
^M
echo "hello world"^M

When interpreting the shebang (#!), exec will see an extra carriage return (denoted CR, \r, ^M) and fail to find /bin/sh^M:

$ exec ./setup.sh
bash: setup.sh: /bin/sh^M: bad interpreter: No such file or directory

Save the file with Unix-style line endings. On Windows, decent text editors (Sublime Text, Notepad++, any IDE, etc.) should be able to do it. There is also a simple command-line tool called dos2unix, which does exactly what you'd expect it to.
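If you have a Unix-style environment handy (WSL or similar), either of these should convert the file in place (the sed form assumes GNU sed for -i):

dos2unix setup.sh
# or, without installing anything extra:
sed -i 's/\r$//' setup.sh

Then rebuild the image with docker build . so the fixed script is copied in.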
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282611/" ] }
433,454
Assume I have iptables configured to block inbound accesses, except 22 and a small number of other ports. Assume also that I wish for iptables to "snitch" on any naughty software that phones home, with a rule like:

iptables -A OUTPUT -p all -s $THIS_SERVER -m conntrack --state NEW -j LOG --log-prefix "OUTBOUND "

That is all well and good, but if I also have SSH tunneled traffic, how would I use iptables to treat that traffic differently? Here is my specific example. Let's say forwarding is allowed by sshd, so that a client could do something like:

ssh -D127.0.0.1:12345 uid@myserver -N

They then have a SOCKS proxy on their end and can make outbound requests through $THIS_SERVER. This is a behaviour that I want to allow, but I would like to log it as different traffic than the above originating on the server itself (or perhaps, not log this forwarded traffic at all). Do you think this can be done, and if so, could you explain how? What I have attempted is to capture the second category with the FORWARD chain, but that did not work. Both categories of traffic seem to be selected by the same criteria, so I can't distinguish them. Is there a way to associate those SSH tunneled packets?

For avoidance of doubt, here are the log entries associated with setting up such a SOCKS proxy through my server, and then opening up Firefox. The UID=980 is the Linux resolver user, and UID=1000 is the remote user who established the tunnel via ssh.

[15719.755667] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=73 TOS=0x00 PREC=0x00 TTL=64 ID=62202 DF PROTO=UDP SPT=35744 DPT=53 LEN=53 UID=980 GID=980
[15719.755771] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=73 TOS=0x00 PREC=0x00 TTL=64 ID=62203 DF PROTO=UDP SPT=32777 DPT=53 LEN=53 UID=980 GID=980
[15720.046668] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=95 TOS=0x00 PREC=0x00 TTL=64 ID=62256 DF PROTO=UDP SPT=33118 DPT=53 LEN=75 UID=980 GID=980
[15720.047446] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=34.210.48.174 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=520 DF PROTO=TCP SPT=34758 DPT=443 WINDOW=29200 RES=0x00 SYN URGP=0 UID=1000 GID=1000
[15720.049736] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=34.210.48.174 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=1938 DF PROTO=TCP SPT=34760 DPT=443 WINDOW=29200 RES=0x00 SYN URGP=0 UID=1000 GID=1000
[15720.614489] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=74 TOS=0x00 PREC=0x00 TTL=64 ID=62296 DF PROTO=UDP SPT=43834 DPT=53 LEN=54 UID=980 GID=980
[15720.614573] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=74 TOS=0x00 PREC=0x00 TTL=64 ID=62297 DF PROTO=UDP SPT=55510 DPT=53 LEN=54 UID=980 GID=980
[15720.615559] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=75 TOS=0x00 PREC=0x00 TTL=64 ID=62298 DF PROTO=UDP SPT=40906 DPT=53 LEN=55 UID=980 GID=980
[15728.039642] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=72 TOS=0x00 PREC=0x00 TTL=64 ID=63174 DF PROTO=UDP SPT=39308 DPT=53 LEN=52 UID=980 GID=980
[15728.039723] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=72 TOS=0x00 PREC=0x00 TTL=64 ID=63175 DF PROTO=UDP SPT=53907 DPT=53 LEN=52 UID=980 GID=980
[15729.529947] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=63315 DF PROTO=UDP SPT=45694 DPT=53 LEN=61 UID=980 GID=980
[15729.530068] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=63316 DF PROTO=UDP SPT=52263 DPT=53 LEN=61 UID=980 GID=980
[15729.896039] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=70 TOS=0x00 PREC=0x00 TTL=64 ID=63355 DF PROTO=UDP SPT=40457 DPT=53 LEN=50 UID=980 GID=980
[15729.896132] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=70 TOS=0x00 PREC=0x00 TTL=64 ID=63356 DF PROTO=UDP SPT=38307 DPT=53 LEN=50 UID=980 GID=980
[15730.189743] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=63442 DF PROTO=UDP SPT=40521 DPT=53 LEN=62 UID=980 GID=980
[15730.189870] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=63443 DF PROTO=UDP SPT=34444 DPT=53 LEN=62 UID=980 GID=980
[15730.190707] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=63444 DF PROTO=UDP SPT=55549 DPT=53 LEN=61 UID=980 GID=980
[15730.192361] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=52.43.38.51 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=22080 DF PROTO=TCP SPT=60876 DPT=443 WINDOW=29200 RES=0x00 SYN URGP=0 UID=1000 GID=1000
[15730.641766] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=63507 DF PROTO=UDP SPT=54359 DPT=53 LEN=61 UID=980 GID=980
[15730.641890] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=63508 DF PROTO=UDP SPT=52250 DPT=53 LEN=61 UID=980 GID=980
[15749.499230] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=87 TOS=0x00 PREC=0x00 TTL=64 ID=198 DF PROTO=UDP SPT=57205 DPT=53 LEN=67 UID=980 GID=980
[15749.499394] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=87 TOS=0x00 PREC=0x00 TTL=64 ID=199 DF PROTO=UDP SPT=45791 DPT=53 LEN=67 UID=980 GID=980
[15749.500301] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=116 TOS=0x00 PREC=0x00 TTL=64 ID=200 DF PROTO=UDP SPT=37022 DPT=53 LEN=96 UID=980 GID=980
[15749.500917] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=54.68.157.14 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=64708 DF PROTO=TCP SPT=43082 DPT=443 WINDOW=29200 RES=0x00 SYN URGP=0 UID=1000 GID=1000
[15752.028535] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=585 DF PROTO=UDP SPT=59245 DPT=53 LEN=61 UID=980 GID=980
[15752.028624] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=81 TOS=0x00 PREC=0x00 TTL=64 ID=586 DF PROTO=UDP SPT=55551 DPT=53 LEN=61 UID=980 GID=980
[15752.029484] OUTPUT:LOG,ACCEPT IN= OUT=eth0 SRC=$THIS_SERVER DST=$NAMESERVER LEN=75 TOS=0x00 PREC=0x00 TTL=64 ID=587 DF PROTO=UDP SPT=34032 DPT=53 LEN=55 UID=980 GID=980
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137096/" ] }
433,559
So I need to repeatedly try submitting a 4 digit number to a port on the local host, and then check whether the response I get from the port contains a specific string, which I do with grep. I used something like:

echo {0000..9999} | nc localhost port

However, I cannot get this nice sequence expansion that I get from this command to work in a bash script. How could I go about writing a bash script for this problem?
You can use a for-loop like the following, sending each candidate number to your fixed target port and grepping the response:

port=1234   # replace with the target port from your question
for i in {0000..9999}; do
    printf '%s\n' "$i" | nc localhost "$port" | grep -q 'specificString' \
        && echo "found with $i" \
        || echo "not found with $i"
done
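One caveat: if the port accepts the connection but never sends anything back, nc can hang the loop on that iteration. Most nc implementations accept -w to cap the wait in seconds, though flags differ between the BSD and traditional variants, so check your man page. The changed pipeline would look like:

printf '%s\n' "$i" | nc -w 2 localhost "$port" | grep -q 'specificString'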
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134933/" ] }
433,630
When I install a program like GIMP or LibreOffice on Linux I'm never asked about permissions. By installing a program on Ubuntu, am I explicitly giving that program full permission to read/write anywhere on my drive and full access to the internet? Theoretically, could GIMP read or delete any directory on my drive, not requiring a sudo-type password? I'm only curious if it's technically possible, not if it's likely or not. Of course, I know it's not likely.
There are two things here:

- When you install a program by standard means (a system installer such as apt/apt-get on Ubuntu), it is usually installed in some directory where it is available to all users (/usr/bin...). This directory requires privileges to be written to, so you need special privileges during installation.

- When you use the program, it runs with your user id and can only read or write where programs executed with your id are allowed to read or write. In the case of Gimp, you will discover for instance that you cannot edit standard resources such as brushes, because they are in the shared /usr/share/gimp/, and you have to copy them first. This also shows in Edit>Preferences>Folders, where most folders come in pairs: a system one which is read-only and a user one that can be written to.
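You can see that split directly in the directory ownership. For example (exact output varies by system):

ls -ld /usr/bin    # owned by root, so installing into it needs elevated privileges
ls -ld "$HOME"     # owned by you, so any program you run can write (or delete) here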
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/433630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282776/" ] }
433,644
As of today, I cannot log in to my account. I start the computer, get to the login screen, log in, and then the following error appears:

Your session only lasted less than 10 seconds, if you...

And my ~/.xsession-errors file has this output:

initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
/etc/mdm/Xsession: Beginning session setup...
/etc/mdm/Xsession: 21: /home/lenduya/.profile: source: not found
localuser:lenduya being added to access control list

lenduya being my username. My setup: Thinkpad T430 dualboot with Windows 10.

What I've tried:

- repair broken packages: chose advanced Ubuntu at the boot screen, selected repair broken packages
- sudo apt-get install cinnamon: 0 new packages installed, cinnamon already at the latest version
- sudo apt-get update: 0 new packages installed

What I did prior:

- no major updates today AFAIK (two packages were updated)
- the computer worked normally, just VLC wouldn't start. I probed around a little bit; there was one suggestion that I should check whether Pulse Audio is one of the programs that starts up automatically. So I went to look, and Pulse Audio wasn't among the things that Startup starts with. I clicked on Add software but didn't find Pulse Audio in the list, so I closed it.
- I tried to restart the computer, hoping that would fix the VLC issue.
- This error occurs.

Any suggestions on how to fix this issue? I'd prefer if I did not have to reinstall it.
It's VirtualBox that is the problem.

Fix: press <Ctrl><Alt><F1> to get a console and log in at the prompt. Then:

sudo apt-get remove virtualbox*

Credit goes to: Mint forum
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282790/" ] }
433,652
I have a script dataProcessing.pl that accepts a tab-delimited .txt file and performs extensive processing tasks on the contained data. Multiple input files exist (file1.txt, file2.txt, file3.txt), which are currently looped over as part of a bash script that invokes perl during each iteration (i.e. input files are processed one at a time). I wish, however, to run multiple instances of Perl (if possible), and process all input files simultaneously via xargs. I'm aware that you can run something akin to:

perl -e 'print "Test" x 100' | xargs -P 100

However, I want to pass a different file to each parallel instance of Perl opened (one instance works on file1.txt, one works on file2.txt and so forth). A file handle or file path can be passed to Perl as an argument. How can I do this? I am not sure how I would pass the file names to xargs, for example.
Use xargs with -n 1 meaning "only pass one single argument to each invocation of the utility". Something like:

printf '%s\n' file*.txt | xargs -n 1 -P 100 perl dataProcessing.pl

which assumes that the filenames don't contain literal newlines. If you have GNU xargs, or an implementation of xargs that understands -0 (for reading nul-delimited arguments, which allows for filenames with newlines) and -r (for not running the utility with empty argument list, when file*.txt doesn't match anything and nullglob is in effect), you may do

printf '%s\0' file*.txt | xargs -r0 -n 1 -P 100 perl dataProcessing.pl

Note that both of these variations may start up to 100 parallel instances of the script, which may not be what you want. You may want to limit it to a reasonable number related to the number of CPUs on your machine (or related to the total amount of available RAM divided by the expected memory usage per task, if it's memory bound).
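As an example of that last point, if your xargs supports the GNU options above and you have nproc from GNU coreutils, you could tie the parallelism to the CPU count:

printf '%s\0' file*.txt | xargs -r0 -n 1 -P "$(nproc)" perl dataProcessing.pl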
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/433652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/257120/" ] }
433,678
I have many json files of such format:

sample file1:

{
  "attributes": [
    {
      "name": "Node",
      "value": "test"
    }
  ]
}

sample file2:

{
  "attributes": [
    {
      "name": "version",
      "value": "11.1"
    }
  ]
}

etc. I need to merge all of them into one json file, e.g.:

{
  "attributes": [
    {
      "name": "Node",
      "value": "test"
    },
    {
      "name": "version",
      "value": "11.1"
    }
  ]
}

Could someone please provide a solution with jq?
jq solution:

jq -s '{ attributes: map(.attributes[0]) }' file*.json

-s (--slurp) - instead of running the filter for each JSON object in the input, read the entire input stream into a large array and run the filter just once.

Sample output:

{
  "attributes": [
    {
      "name": "Node",
      "value": "test"
    },
    {
      "name": "version",
      "value": "11.1"
    }
  ]
}
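One refinement to consider: map(.attributes[0]) keeps only the first entry of each file's attributes array. If any input file may contain several entries, this variant (same -s slurp mechanics) collects them all:

jq -s '{ attributes: map(.attributes[]) }' file*.json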
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/433678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274554/" ] }