source_id | question | response | metadata
---|---|---|---
239,645 | I have a task on which I have spent a lot of time. I am not fluent in Linux, but I can manage basic things. The task is to gather different types of ICMP packets. I can harvest them with tcpdump (which I prefer) or Wireshark. I am able to get the ICMP types of echo reply and echo request using ping, and time exceeded using tracepath or traceroute. Now, what I am trying to get is unreachable or timestamp or something else. I need two more types, but I don't know a way to produce them. I have tried pinging a nonexistent host or a wrong port, and using tracepath the same way, but I am not getting anything. Can someone advise me or tell me what commands I can use, and in which way, to obtain two more types of ICMP packets? | You could try it like this: awk 'NR==1{h=$0; next}!seen[$3]++{f="FILE_"FILENAME"_"$3".txt";print h > f} {print >> f}' infile The above saves the header in a variable h ( NR==1{h=$0; next} ); then, if $3 has not been seen yet ( !seen[$3]++ , i.e. if it's the first time it encounters the current value of $3 ), it sets the filename ( f=... ) and writes the header to that file ( print h > f ). It then appends the entire line to the file ( print >> f ). It uses the default FS (field separator): blank. If you want to use | as FS (or even a regex with GNU awk ) see cas' comment below. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140847/"
]
} |
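A minimal demo of the awk one-liner in the answer above, on hypothetical input (the file name and its columns are made up). One caveat: since f is only recomputed when a new $3 value first appears, the one-liner assumes lines sharing a $3 value are contiguous; recomputing f on every line would lift that assumption.

cat > infile <<'EOF'
id name group
1 alice red
2 carol red
3 bob blue
EOF

awk 'NR==1{h=$0; next}!seen[$3]++{f="FILE_"FILENAME"_"$3".txt";print h > f} {print >> f}' infile

head FILE_infile_*.txt   # each output file starts with the header line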
239,708 | After searching on Google I found out that we can telnet to a web server on its HTTP port and use GET to retrieve an HTML page. For example: $ telnet web-server-name 80 But I am not able to understand how this is possible. I thought that if port 80 is for an HTTP server, then port 80 will only listen for HTTP requests. But how am I able to telnet to an HTTP port? Aren't telnet and HTTP two different protocols? | Congratulations, you've just delved into the concept of networking layers by realizing that ports and protocols are not directly connected with each other. As others are saying, telnet can be used to connect to any TCP port. However, to understand why this is possible you need to understand a bit about networking layers. If you've ever heard of the OSI 7 layer model, this is what allows you to use telnet to connect to another port. Although on the Internet, they only concern themselves with 4 of the layers, and it's called the Internet Protocol Suite . Without layers of networking, each program would not only need to understand its own protocol, but would have to define its own IP addressing scheme and port system, which means each router would need to understand how to route these schemes, and different protocols would be much harder to learn and diagnose. To put it simply, the Internet wouldn't work nearly as well without layers. What you are concerned with are the transport layer and the application layer. At the transport layer we have Internet protocols like TCP and UDP, with port numbers ranging from 1 to 65535 on each. At the application layer we have protocols such as HTTP, SMTP and DNS. Usually each Internet standards document that defines a protocol specifies a default TCP or UDP port that the protocol should use. Such as TCP port 80 for HTTP, TCP port 25 for SMTP, UDP port 53 for DNS and TCP port 23 for Telnet. The telnet program actually speaks the TELNET protocol, which is a standard protocol, but mostly an ancient one by current standards. Because its protocol sequences are made from 8-bit characters, you rarely see the protocol itself and it's mostly transparent when compared with other more modern protocols like HTTP and SMTP that use human-visible words in ASCII such as GET, POST, HELO, LOGIN, etc. Because its protocol isn't generally visible, telnet made for a decent tool for connecting to other TCP ports and allowing the user to type in protocols manually. Some network administrators use this technique in order to diagnose problems with servers. However, because the telnet program still has its own protocol and may send extra bits of data sometimes, you can still experience problems with this technique. When you use telnet you really are "making a connection" at the application layer as well as the transport layer. It just happens that other application layer protocols may work OK through it for most diagnostics and won't interfere with the telnet protocol. There is a better program for doing this, though, called nc (Net Cat; it gets its name from being a network-based version of the cat command). $ nc www.stackexchange.com 80 The nc program doesn't speak any application layer protocol, and when you make a connection with it you are "making a connection" only at the Internet layer (IP address) and transport layer (TCP or UDP). What that means is that you control what application layer protocol is used. Almost anything is fair game, even binary protocols.
This also allows you to do useful things like transfer files without them being corrupted and listen on ports for incoming traffic: nc -l 9000 < movie.mp4 (your friend runs this) and nc friends.computer.hostname 9000 > movie.mp4 (you run this). And then movie.mp4 is transferred over the network using no application layer protocol (such as FTP) at all. The application protocol is actually your friend telling you that they are ready for you to run your command. nc can also handle UDP packets and UNIX-domain sockets. Using it to listen can also be interesting. nc -l 12345 Now in your web browser visit http://localhost:12345/ and in your nc session you should see the browser's GET / HTTP/1.1 request. At this point you can type something in and press Ctrl-D and it should show up in your browser in plain text (if you want HTML to show up, you have to send back the proper HTTP protocol response followed by HTML code). Sometimes, programs which natively speak one protocol like HTTP can connect to other ports that are meant for a different protocol. You typically can't do this in a GUI browser anymore because they have been restricted from connecting to some ports, but if you use a program like curl to connect to port 25 (SMTP, for sending mail) you'll probably see a couple of errors about breaking protocol. $ curl yourispsmtpserverhost.com:25
220 yourispsmtpserverhost.com ESMTP Postfix
221 2.7.0 Error: I can break rules, too. Goodbye. This happens because curl normally speaks the HTTP protocol, so after it establishes a TCP handshake, it starts sending data like this: GET / HTTP/1.1
Host: yourispsmtpserverhost.com:25
User-agent: curl But what the SMTP server is expecting is SMTP, which is more like this: HELO myhomecomputername.local At which point the server sends back its identification line: 250 yourispsmtpserverhost.com So you see that there is nothing that prevents curl from establishing a transport layer connection with the SMTP server, it just can't speak the protocol. But you can speak the protocol yourself with a program like telnet or, preferably, nc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/239708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109220/"
]
} |
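As a small hedged illustration of "you control what application layer protocol is used": you can speak HTTP by hand through nc (the host name here is a stand-in for any web server):

printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80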
239,709 | Once in a while (every 30th boot) my Linux system decides to check the filesystem for errors. I am OK with this - what needs to be done needs to be done. But sometimes I need my laptop to boot fast. I have some urgent job to do and I do not have time to wait for fsck to complete (it may take about 10 minutes). How can I stop the check in this case? The only solution (well, workaround) I have come up with so far is to turn off auto fsck and run it manually occasionally. I do not like this approach, because I have to remember when I last ran it. What I want is to be able to press Ctrl + C to abort the filesystem check. Let the filesystem check run during the next boot! But actually if I press Ctrl + C fsck just restarts. | fsck has an option which makes it delay the automatic check when the laptop is on battery power; that is, if the filesystem is configured to check once every 30 mounts, it will interpret that as once every 60 battery-powered mounts. Most distributions have it enabled these days. However, it only checks for that at startup. What you could do is, if the automatic check starts, remove the power supply from your laptop and then restart the boot by whatever means (hard reset, Ctrl-C, ...) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50667/"
]
} |
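If the periodic check itself is the nuisance, the mount-count settings this answer alludes to can be inspected and tuned with tune2fs (ext2/3/4 only; /dev/sda1 is a placeholder for the actual root device):

sudo tune2fs -l /dev/sda1 | grep -i 'mount count'   # current and maximum mount counts
sudo tune2fs -c 60 /dev/sda1                        # e.g. check every 60 mounts instead of 30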
239,741 | I would like to pipe tarballs ( .tar.gz files in this case) I download with GNU Wget ( wget ) through tar -xzf (to decompress them, if this is unclear) but I don't know how. I have tried: wget -q -O- ${SRC_URI} | tar -xzf > ${DESTDIR} and wget -q -O- ${SRC_URI} | `tar -xzf` > ${DESTDIR} and wget -q -O- ${SRC_URI} | 'tar -xzf' > ${DESTDIR} but not one even came close to doing what I want. I have also tried omitting the output component > ${DESTDIR} and just letting tar extract the tarball's contents the way it does by default. Each attempt usually either returned an error like tar: option requires an argument -- 'f' before it would download the tarball, or nothing; but then I would check whether the path set by ${DESTDIR} had been created (as I was leaving tar to generate it), and it had not. | As you're extracting a tar.gz file from stdin, you don't need to specify the f option; tar defaults to reading from stdin. Assuming you want to extract the contents to $DESTDIR, you also need to use GNU tar 's -C (change directory) option. I've also put " quotes around the variables, in case $SRC_URI or $DESTDIR contain any spaces or shell meta-characters - & , * , ? and the like. Finally, the {} curly braces around the variables aren't strictly necessary here, but I've left them in anyway - they certainly don't cause any harm. Putting that all together, you get: wget -q -O- "${SRC_URI}" | tar -xz -C "${DESTDIR}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
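A hedged usage sketch of the final command (the URL and directory are made-up placeholders; note that tar -C requires the target directory to already exist):

SRC_URI='https://example.com/foo-1.0.tar.gz'
DESTDIR="$PWD/work"
mkdir -p "$DESTDIR"
wget -q -O- "${SRC_URI}" | tar -xz -C "${DESTDIR}"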
239,751 | I'm asking only about the usage which would have a similar effect to traditional input redirection from a file. <<<"$(<file)" as far as I can tell is equivalent to <file . It appears to me that these are functionally equivalent. At the low level it appears that the <<< here document might actually cause more copies of the data to be in memory at once. I know this type of redirection exists in both bash and zsh but I'm not familiar with how it's implemented, though I see the zsh manpages contain some implementation details. | In <<<"$(<file)" (supported by zsh (where <<< was first introduced, inspired by the same operator in the clone of rc for Unix by Byron Rakitzis), ksh93 (the $(<file) operator was introduced by ksh ), mksh and bash ): for $(<file) , the shell reads the content of the file (choking on NUL bytes except for zsh ), removes all the trailing newline characters, and that makes the expansion of $(<file) (so the content of the file is stored as a whole in memory). For <<< some-text , the shell stores some-text followed by one newline character into a temporary file, and opens that temporary file on file descriptor 0 (though some shells, including recent versions of bash , can use pipes instead, at least for small amounts of data). So basically <<<"$(<file)" opens stdin for reading on a temporary copy of file where trailing newline characters have been replaced by just one (and with various misbehaviours if the file contains NUL bytes, except in zsh ). While in < file , it's file that is directly opened for reading on stdin. Of course < file is much more efficient (it doesn't involve a copy on disk and in memory), but one might want to use <<<"$(<file)" to make sure the file open on stdin is a regular file, or to make sure the file has been fully read by the time the command is started (in case that command writes to it, for instance) or another redirection is processed (like one that would truncate file as in tr 1 2 <<< "$(<file)" > file ). Note that yash supports the <<< operator (though it implements it with a pipe (so not a regular file) instead of a temporary file), but not the $(<file) one. You can use <<<"$(cat < file)" instead there. yash strings are characters only, so "$(cat < file)" will choke on sequences of bytes that don't form valid characters, while other shells can usually cope OK with them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139893/"
]
} |
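A rough demonstration of the trailing-newline behaviour described above (bash or zsh; "file" is a throwaway name):

printf 'a\n\n\n' > file
wc -c < file            # 4 bytes: "a" plus three newlines
wc -c <<< "$(<file)"    # 2 bytes: trailing newlines collapsed to a single one
ls -l /dev/stdin <<< "$(<file)"   # stdin may be a deleted temp file or a pipe, depending on the shell and version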
239,772 | I thought this would be simple - but it is proving more complex than I expected. I want to iterate through all the files of a particular type in a directory, so I write this: #!/bin/bash
for fname in *.zip ; do
  echo current file is ${fname}
done This works as long as there is at least one matching file in the directory. However if there are no matching files, I get this: current file is *.zip I then tried: #!/bin/bash
FILES=`ls *.zip`
for fname in "${FILES}" ; do
  echo current file is ${fname}
done While the body of the loop does not execute when there are no files, I get an error from ls: ls: *.zip: No such file or directory How do I write a loop which cleanly handles no matching files? | In bash , you can set the nullglob option so that a pattern that matches nothing "disappears", rather than being treated as a literal string: shopt -s nullglob
for fname in *.zip ; do
    echo "current file is ${fname}"
done In POSIX shell script, you just verify that fname exists (and at the same time, with [ -f ] , check that it is a regular file (or a symlink to a regular file) and not another type like directory/fifo/device...): for fname in *.zip; do
    [ -f "$fname" ] || continue
    printf '%s\n' "current file is $fname"
done Replace [ -f "$fname" ] with [ -e "$fname" ] || [ -L "$fname" ] if you want to loop over all the (non-hidden) files whose name ends in .zip regardless of their type. Replace *.zip with .*.zip .zip *.zip if you also want to consider hidden files whose name ends in .zip . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/239772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9178/"
]
} |
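An alternative sketch (not from the answer): collect the matches into a bash array, so the nullglob option can be switched back off immediately and the emptiness test is explicit:

shopt -s nullglob
files=(*.zip)
shopt -u nullglob
if (( ${#files[@]} == 0 )); then
    echo "no zip files here"
else
    for fname in "${files[@]}"; do
        echo "current file is $fname"
    done
fi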
239,782 | On Ubuntu 15.10, when I want to format an external 4 TB disk, connected over USB3 (on a StarTech USB/eSATA hard disk dock), with the NTFS file system, I get a lot of I/O errors, and the format fails. I tried GParted v 0.19, and GParted on the latest live CD gparted-live-0.23.0-1-i586.iso , with the same problem. After that, using GParted on Ubuntu 15.10 and the same USB3 connection, I tried to format as ext4 , without problems. It's really strange. Because I don't know whether the mkfs.ext4 tool used by GParted to format the disk tests the disk like (or not like) mkntfs , I first supposed that the problem was linked to the new disk. Perhaps this new disk is causing problems. So I tried running e2fsck -c on this HDD. On Ubuntu 15.10, e2fsck -c freezes at 0.45%, and I don't know why. So, using another version of Ubuntu (15.04) on the same PC, I tried to connect the same 4 TB disk using the eSATA connection of the StarTech HDD dock. This time, e2fsck -c ran correctly. After some hours, you can see the result: sudo e2fsck -c /dev/sdc1
e2fsck 1.42.12 (29-Aug-2014)
ColdCase : récupération du journal
Vérification des blocs défectueux (test en mode lecture seule) : complété
ColdCase: Updating bad block inode.
Passe 1 : vérification des i-noeuds, des blocs et des tailles
Passe 2 : vérification de la structure des répertoires
Passe 3 : vérification de la connectivité des répertoires
Passe 4 : vérification des compteurs de référence
Passe 5 : vérification de l'information du sommaire de groupe
ColdCase: ***** LE SYSTÈME DE FICHIERS A ÉTÉ MODIFIÉ *****
ColdCase : 11/244195328 fichiers (0.0% non contigus), 15377150/976754176 blocs I'm not an expert in badblocks output, but it seems there is no bad block at all on this disk? So, if the problem is not the hard drive, maybe the problem is linked to mkntfs used over USB3? What other tests can I try? Some information about the USB dock: ➜ ~ lsusb...Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge...➜ ~ sudo lsusb -v -d 174c:55aaMot de passe [sudo] pour reyman : Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridgeDevice Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 9 idVendor 0x174c ASMedia Technology Inc.
idProduct 0x55aa ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge bcdDevice 1.00 iManufacturer 2 asmedia iProduct 3 ASM1053E iSerial 1 123456789012 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 121 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 36mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk-Only iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 1 bNumEndpoints 4 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 98 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 MaxStreams 16 Data-in pipe (0x03) Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 MaxStreams 16 Data-out pipe (0x04) Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 MaxStreams 16 Status pipe (0x02) Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x04 EP 4 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 0 Command pipe (0x01)Binary Object Store Descriptor: bLength 5 bDescriptorType 15 wTotalLength 22 bNumDeviceCaps 2 USB 2.0 Extension Device Capability: bLength 7 bDescriptorType 16 bDevCapabilityType 2 bmAttributes 0x00000002 Link Power Management (LPM) Supported SuperSpeed USB Device Capability: bLength 10 bDescriptorType 16 bDevCapabilityType 3 bmAttributes 0x00 wSpeedsSupported 0x000e Device can operate at Full Speed (12Mbps) Device can operate at High Speed (480Mbps) Device can operate at SuperSpeed (5Gbps) bFunctionalitySupport 1 Lowest fully-functional device speed is Full Speed (12Mbps) bU1DevExitLat 10 micro seconds bU2DevExitLat 2047 micro secondsDevice Status: 0x0001 Self Powered Information about the disk in question: /dev/sdd ➜ ~ sudo fdisk -l /dev/sddDisque /dev/sdd : 3,7 TiB, 4000787030016 octets, 7814037168 secteursUnités : sectors of 1 * 512 = 512 octetsSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 33553920 bytesDisklabel type: gptDisk identifier: ACD5760B-2898-435E-82C6-CB3119758C9BPériphérique Start Fin Secteurs Size Type/dev/sdd1 2048 7814035455 7814033408 3,7T Linux filesystem dmesg returns a lot of errors about the disk; see this extract: [ 68.856381] scsi host6: uas_eh_bus_reset_handler start[ 68.968376] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd[ 68.989825] scsi host6: uas_eh_bus_reset_handler success[ 99.881608] sd 6:0:0:0: 
[sdd] tag#12 uas_eh_abort_handler 0 uas-tag 13 inflight: CMD OUT [ 99.881618] sd 6:0:0:0: [sdd] tag#12 CDB: Write(16) 8a 00 00 00 00 00 e8 c4 08 00 00 00 00 08 00 00[ 99.881856] sd 6:0:0:0: [sdd] tag#5 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD OUT [ 99.881861] sd 6:0:0:0: [sdd] tag#5 CDB: Write(16) 8a 00 00 00 00 00 cd 01 08 f0 00 00 00 10 00 00[ 99.881967] sd 6:0:0:0: [sdd] tag#4 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD OUT [ 99.881972] sd 6:0:0:0: [sdd] tag#4 CDB: Write(16) 8a 00 00 00 00 00 cd 01 08 00 00 00 00 f0 00 00[ 99.882085] sd 6:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT [ 99.882090] sd 6:0:0:0: [sdd] tag#3 CDB: Write(16) 8a 00 00 00 00 00 cd 01 07 10 00 00 00 f0 00 00[ 99.882171] sd 6:0:0:0: [sdd] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD OUT [ 99.882175] sd 6:0:0:0: [sdd] tag#2 CDB: Write(16) 8a 00 00 00 00 00 cd 01 06 20 00 00 00 f0 00 00[ 99.882255] sd 6:0:0:0: [sdd] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD OUT [ 99.882258] sd 6:0:0:0: [sdd] tag#1 CDB: Write(16) 8a 00 00 00 00 00 cd 01 05 30 00 00 00 f0 00 00[ 99.882338] sd 6:0:0:0: [sdd] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT [ 99.882342] sd 6:0:0:0: [sdd] tag#0 CDB: Write(16) 8a 00 00 00 00 00 cd 01 04 40 00 00 00 f0 00 00[ 99.882419] sd 6:0:0:0: [sdd] tag#11 uas_eh_abort_handler 0 uas-tag 12 inflight: CMD OUT [ 99.882423] sd 6:0:0:0: [sdd] tag#11 CDB: Write(16) 8a 00 00 00 00 00 cd 00 f9 00 00 00 00 f0 00 00[ 99.882480] sd 6:0:0:0: [sdd] tag#10 uas_eh_abort_handler 0 uas-tag 11 inflight: CMD OUT [ 99.882483] sd 6:0:0:0: [sdd] tag#10 CDB: Write(16) 8a 00 00 00 00 00 cd 00 f9 f0 00 00 00 f0 00 00[ 99.882530] sd 6:0:0:0: [sdd] tag#9 uas_eh_abort_handler 0 uas-tag 10 inflight: CMD OUT [ 99.882532] sd 6:0:0:0: [sdd] tag#9 CDB: Write(16) 8a 00 00 00 00 00 cd 00 fa e0 00 00 00 f0 00 00[ 99.882593] sd 6:0:0:0: [sdd] tag#8 uas_eh_abort_handler 0 uas-tag 9 inflight: CMD [ 99.882596] sd 6:0:0:0: [sdd] tag#8 CDB: Write(16) 8a 00 00 00 00 00 cd 00 fb d0 00 00 00 f0 00 00[ 99.882667] scsi host6: uas_eh_bus_reset_handler start[ 99.994700] usb 2-2: reset SuperSpeed USB device number 2 using xhci_hcd[ 100.015613] scsi host6: uas_eh_bus_reset_handler success[ 135.962175] sd 6:0:0:0: [sdd] tag#5 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD OUT [ 135.962185] sd 6:0:0:0: [sdd] tag#5 CDB: Write(16) 8a 00 00 00 00 00 cd 40 78 f0 00 00 00 10 00 00[ 135.962428] sd 6:0:0:0: [sdd] tag#4 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD OUT [ 135.962436] sd 6:0:0:0: [sdd] tag#4 CDB: Write(16) 8a 00 00 00 00 00 cd 40 78 00 00 00 00 f0 00 00[ 135.962567] sd 6:0:0:0: [sdd] tag#3 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD OUT [ 135.962576] sd 6:0:0:0: [sdd] tag#3 CDB: Write(16) 8a 00 00 00 00 00 cd 40 77 10 00 00 00 f0 00 00[ 135.962682] sd 6:0:0:0: [sdd] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD OUT [ 135.962690] sd 6:0:0:0: [sdd] tag#2 CDB: Write(16) 8a 00 00 00 00 00 cd 40 76 20 00 00 00 f0 00 00[ 135.962822] sd 6:0:0:0: [sdd] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD [ 135.962830] sd 6:0:0:0: [sdd] tag#1 CDB: Write(16) 8a 00 00 00 00 00 cd 40 75 30 00 00 00 f0 00 00[ 160.904916] sd 6:0:0:0: [sdd] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD OUT [ 160.904926] sd 6:0:0:0: [sdd] tag#0 CDB: Write(16) 8a 00 00 00 00 00 00 00 29 08 00 00 00 08 00 00[ 160.905068] scsi host6: uas_eh_bus_reset_handler start I found this information on this forum post , that there is some problem with UAS and new Linux kernels? 
It seems the problem is known in many places on the internet; USB3 + Linux seems problematic in many cases -- but for old kernels. Any ideas to resolve this problem on a more recent kernel? My kernel is: ➜ ~ uname -r
4.2.0-16-generic Hmm, it seems UAS is broken for various USB3 chips from ASMedia Technology Inc.; you can see more information here . I suppose this is my problem, but how can I resolve it now, and which chip is used for the USB3 implementation in the StarTech dock? | I ran into this issue today on a 4.8.0 kernel. According to this forum post it can be circumvented by $ echo options usb-storage quirks=357d:7788:u | sudo tee /etc/modprobe.d/blacklist_uas_357d.conf
$ sudo update-initramfs -u and rebooting. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/239782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140938/"
]
} |
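Note that 357d:7788 is the USB ID from the forum poster's enclosure. For the dock in the question, lsusb reports 174c:55aa, so the equivalent (untested) incantation would presumably be:

$ echo options usb-storage quirks=174c:55aa:u | sudo tee /etc/modprobe.d/blacklist_uas_174c.conf
$ sudo update-initramfs -u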
239,786 | How do I execute a command where a file is found? Consider that I've got a directory named testdir that contains the following: $ ls -R testdir/testdir/:dir1 dir2 dir3 dir4 dir5testdir/dir1:doc1.pdftestdir/dir2:file1.txttestdir/dir3:doc2.pdftestdir/dir4:file2.txttestdir/dir5:doc5.pdf Now I want to perform an action (execute a command) where find finds a certain file/file type. For example let me find *.pdf : $ find . -name '*.pdf'./testdir/dir3/doc2.pdf./testdir/dir5/doc5.pdf./testdir/dir1/doc1.pdf Now suppose I want to execute a command (say touch file ) where the above command finds file(s). In other words, I want to create a file named file in each directory where at least one .pdf was found, so that I get: $ ls -R testdir/testdir/:dir1 dir2 dir3 dir4 dir5testdir/dir1:doc1.pdf filetestdir/dir2:file1.txttestdir/dir3:doc2.pdf filetestdir/dir4:file2.txttestdir/dir5:doc5.pdf file How do I accomplish such work? Maybe, for every file found, cd to where the file exists and perform a command. I know that find has an awesome feature, -exec , but I can't get it to work. This is only an example to give an idea of what I want to do. Broadly: how do I execute a command where file(s) are found (by find ) recursively? | If you run this command, your touch file will be run, potentially multiple times, from the directory in which the command has been started: find -name '*.pdf' -exec touch file \; On the other hand, if you run this variant, each instance of the command will be run in the target file's directory: find -name '*.pdf' -execdir touch file \; In both cases you can see this in action by substituting the touch file with either echo {} and/or pwd . From the manpage: -execdir command ; -execdir command {} + Like -exec , but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/239786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
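A quick hedged way to see the difference between the two variants is to print the working directory once per match (run from the directory containing testdir):

find . -name '*.pdf' -exec pwd \;      # prints the starting directory for every match
find . -name '*.pdf' -execdir pwd \;   # prints each matched file's own directory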
239,808 | When you type control characters in the shell they get displayed using what is called "caret notation". Escape for example gets written as ^[ in caret notation. I like to customize my bash shell to make it look cool. I have for example changed my PS1 and PS2 to become colorized. I now want control characters to get a unique appearance as well to make them more distinguishable from regular characters. $ # Here I type CTRL-C to abort the command.$ blahblah^C ^^ I want these two characters to be displayed differently Is there a way to make my shell highlight control characters differently? Is it possible to make it display them in a bold font or maybe make them appear in different colors from regular text? I am using bash shell here but I did not tag the question with bash because maybe there is a solution that applies to many different shells. Note that I do not know at what level highlighting of control characters takes place. I first thought it was in the shell itself. Now I have heard that it is readline that controls how control characters are in shells like bash . So the question is now tagged with readline and I am still looking for answers. | When you press Ctrl+X , your terminal emulator writes the byte 0x18 to the master side of the pseudo-terminal pair. What happens next depends on how the tty line discipline (a software module in the kernel that sits in between the master side (under control of the emulator) and the slave side (which applications running in the terminal interact with)) is configured. A command to configure that tty line discipline is the stty command. When running a dumb application like cat that is not aware of and doesn't care whether its stdin is a terminal or not, the terminal is in a default canonical mode where the tty line discipline implements a crude line editor . Some interactive applications that need more than that crude line editor typically change those settings on start-up and restore them on leaving. Modern shells, at their prompt are examples of such applications. They implement their own more advanced line editor. Typically, while you enter a command line, the shell puts the tty line discipline in that mode, and when you press enter to run the current command, the shell restores the normal tty mode (as was in effect before issuing the prompt). If you run the stty -a command, you'll see the current settings in use for the dumb applications . You're likely to see the icanon , echo and echoctl settings being enabled. What that means is that: icanon : that crude line editor is enabled. echo : characters you type (that the terminal emulator writes to the master side) are echoed back (made available for reading by the terminal emulator). echoctl : instead of being echoed asis, the control characters are echoed as ^X . So, let's say you type A B Backspace-aka-Ctrl+H/? C Ctrl+X Backspace Return . Your terminal emulator will send: AB\bC\x18\b\r . The line discipline will echo back: AB\b \bC^X\b \b\b \b\r\n , and an application that reads the input from the slave side ( /dev/pts/x ) will read AC\n . All the application sees is AC\n , and only when your press Enter so it can't have any control on the output for ^X there. You'll notice that for echo , the first ^H ( ^? with some terminals, see the erase setting) resulted in \b \b being sent back to the terminal. That's the sequence to move the cursor back, overwrite with space, move cursor back again, while the second ^H resulted in \b \b\b \b to erase those two ^ and X characters. 
The ^X (0x18) itself was being translated to ^ and X for output. Like B , it didn't make it to the application, as we deleted it with Backspace. \r (aka ^M ) was translated to \r\n ( ^M^J ) for echo, and \n ( ^J ) for the application. So, what are our options for those dumb applications: disable echo ( stty -echo ). That effectively changes the way control characters are echoed, by... not echoing anything. Not really a solution. disable echoctl . That changes the way control characters (other than ^H , ^M ... and all the other ones used by the line editor) are echoed. They are then echoed as-is. That is, for instance, the ESC character is sent as the \e ( ^[ / 0x1b ) byte (which is recognised as the start of an escape sequence by the terminal), and for ^G you send a \a (a BEL, making your terminal beep)... Not an option. disable the crude line editor ( stty -icanon ). Not really an option as the crude applications would become a lot less usable. edit the kernel code to change the behaviour of the tty line discipline so the echo of a control character sends \e[7m^X\e[m instead of just ^X (here \e[7m usually enables reverse video in most terminals). An option could be to use a wrapper like rlwrap that is a dirty hack to add a fancy line editor to dumb applications. That wrapper in effect tries to replace simple read() s from the terminal device with calls to the readline line editor (which do change the mode of the tty line discipline). Going even further, you could even try solutions like this one that hijacks all input from the terminal to go through zsh's line editor (which happens to highlight ^X s in reverse video) relying on GNU screen's :exec feature. Now for applications that do implement their own line editor, it's up to them to decide how the echo is done. bash uses readline for that, which doesn't have any support for customizing how control characters are echoed. For zsh , see: info --index-search='highlighting, special characters' zsh zsh does highlight non-printable characters by default. You can customize the highlighting with, for instance: zle_highlight=(special:fg=white,bg=red) for white-on-red highlighting for those special characters. The text representation of those characters is not customizable though. In a UTF-8 locale, 0x18 will be rendered as ^X , \u378 , \U7fffffff (two unassigned unicode code points) as <0378> , <7FFFFFFF> , \u200b (a not-really printable unicode character) as <200B> . \x80 in an iso8859-1 locale would be rendered as ^� ... etc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/239808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
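To poke at the echoctl behaviour described above for the dumb-application case (a hedged experiment; run it in a throwaway terminal, and type Ctrl-V Ctrl-X to enter a literal ^X):

stty -a | tr ';' '\n' | grep -E 'icanon|echoctl'   # inspect the current flags
stty -echoctl   # control characters are no longer displayed in caret notation
stty echoctl    # restore the default ^X-style display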
239,863 | I have this cronjob to run every minute: */1 * * * * root sh /test.sh My /test.sh logs the result of the "top" and "free" commands. It works fine when I run it manually in a terminal with "sh /test.sh" and saves the output to a nice file; however, when the cron job runs it only saves the result of the command "free" below. Check this please: printf "\n" >> "log_lojar_top_free.txt"
printf %s "$(date)" >> "log_lojar_top_free.txt"
printf '\t' >> "log_lojar_top_free.txt"
top -b -n 3 -d 1 | grep "Cpu" | tail -n 1 | awk '/^%Cpu\(s\)/ {printf $2}' >> "log_lojar_top_free.txt"
printf '\t' >> "log_lojar_top_free.txt"
free | awk '/^Mem:/ {printf $7}' >> "log_lojar_top_free.txt" What is the error that is causing only the last line (free) to have its output logged? | As suggested by others, try to use the bash shebang directly in your script, or prefix it with bash instead of sh . Since I don't know what system you're actually running: I recently ran into trouble calling a script using /bin/sh -c myscript.sh under Ubuntu, which is a Debian derivative that uses dash instead of bash . Maybe this is the key to your problem. EDIT: I've got it working with this crontab entry, done as root with crontab -e : */1 * * * * /bin/bash -c "/test.sh" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140993/"
]
} |
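A hedged sketch of the script itself with an explicit bash shebang, so the crontab entry can invoke it directly (the log path is an assumed example; an absolute path is safer anyway, since cron's working directory is not the one you tested from):

#!/bin/bash
# /test.sh -- after: chmod +x /test.sh
# crontab entry: */1 * * * * /test.sh
LOG=/var/log/log_lojar_top_free.txt
free | awk '/^Mem:/ {printf "%s", $7}' >> "$LOG"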
239,869 | I have set up my bash shell so that any commands I type appear in bold and the commands' output is shown in normal weight: I did this by adding \e[01m at the end of my PS1 variable to turn on bold, and using trap DEBUG to turn it off: trap 'printf "\e[0m" "$_"' DEBUG That way, the \e[0m is printed before each command is executed and I get normal font weight in the output. How would I go about getting the same effect in zsh ? | The old-fashioned way was to use POSTEDIT : POSTEDIT=$'\e[0m' (and by the way, this isn't bash; don't use a DEBUG trap to simulate preexec : zsh is where preexec comes from ), but since zsh 4.3.11 you can use the command line syntax highlighting facility . Let your prompt care only about your prompt, and set zle_highlight=(default:bold) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/239869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
239,871 | Both the nslookup and host commands return IPv4 addresses only. How can I retrieve the IPv6 address of a website using the terminal? (I have googled around; unfortunately I couldn't find anything useful.) | You need a way to specify that you want to retrieve an AAAA record instead of an A record. You'll want to use the dig command for this, which is the replacement for nslookup anyway. dig AAAA websitehostname or, if you don't want the verbose output: dig AAAA +short websitehostname | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122397/"
]
} |
239,879 | I have been using ls -Rlh /path/to/directory > file to create some text file records of what's in some hard drives. I want to delete some strings from the text files after they've been created. An example of part of a text file is: external1:total 36Kdrwxrwxr-x 2 emma emma 4.0K Oct 31 01:29 dir1drwxrwxr-x 2 emma emma 12K Oct 31 01:29 dir2drwxrwxr-x 2 emma emma 20K Oct 31 01:29 dir3external1/dir1:total 4.5M-rw-rw-r-- 1 emma emma 769K Oct 31 01:12 a001.jpg-rw-rw-r-- 1 emma emma 698K Oct 31 01:12 a002.jpg-rw-rw-r-- 1 emma emma 755K Oct 31 01:12 a003.jpg-rw-rw-r-- 1 emma emma 656K Oct 31 01:12 a004.jpg-rw-rw-r-- 1 emma emma 756K Oct 31 01:12 a005.jpg-rw-rw-r-- 1 emma emma 498K Oct 31 01:12 a006.jpg-rw-rw-r-- 1 emma emma 455K Oct 31 01:12 a007.jpgexternal1/dir2:total 8.7M-rw-rw-r-- 1 emma emma 952K Oct 31 01:13 a001.jpg-rw-rw-r-- 1 emma emma 891K Oct 31 01:13 a002.jpg-rw-rw-r-- 1 emma emma 838K Oct 31 01:13 a003.jpg-rw-rw-r-- 1 emma emma 846K Oct 31 01:13 a004.jpg-rw-rw-r-- 1 emma emma 876K Oct 31 01:13 a005.jpg-rw-rw-r-- 1 emma emma 834K Oct 31 01:13 a006.jpg-rw-rw-r-- 1 emma emma 946K Oct 31 01:13 a007.jpg-rw-rw-r-- 1 emma emma 709K Oct 31 01:13 a008.jpg-rw-rw-r-- 1 emma emma 1007K Oct 31 01:13 a009.jpg-rw-rw-r-- 1 emma emma 940K Oct 31 01:13 a010.jpgexternal1/dir3:total 4.6M-rw-rw-r-- 1 emma emma 408K Oct 31 01:15 a001.jpg-rw-rw-r-- 1 emma emma 525K Oct 31 01:15 a002.jpg-rw-rw-r-- 1 emma emma 383K Oct 31 01:15 a003.jpg-rw-rw-r-- 1 emma emma 512K Oct 31 01:15 a004.jpg-rw-rw-r-- 1 emma emma 531K Oct 31 01:15 a005.jpg-rw-rw-r-- 1 emma emma 532K Oct 31 01:15 a006.jpg-rw-rw-r-- 1 emma emma 400K Oct 31 01:15 a007.jpg-rw-rw-r-- 1 emma emma 470K Oct 31 01:15 a008.jpg-rw-rw-r-- 1 emma emma 407K Oct 31 01:15 a009.jpg-rw-rw-r-- 1 emma emma 470K Oct 31 01:15 a010.jpg The actual text files are thousands of lines long and several megabytes in size. What I want to do is delete everything before the file size from each applicable line, so that each line starts with the file size. E.g. 512K Oct 31 01:15 a004.jpg531K Oct 31 01:15 a005.jpg532K Oct 31 01:15 a006.jpg400K Oct 31 01:15 a007.jpg470K Oct 31 01:15 a008.jpg However, I want to keep all of the other lines (with the directory names and total sizes) intact, so this means that I can't use colrm or cut . | Parsing the output of ls is unreliable, but this should work in this particular case: sed -e 's/^.*emma emma //' file That deletes everything up to "emma emma " on each line. If that string doesn't appear on a line, it is unchanged. I've written the regexp to only remove the first space after emma, so that the size field remains right-aligned (e.g. ' 709K' and '1007K' both take the same amount of chars on the line); if you don't want that, use this instead: sed -e 's/^.*emma emma *//' file That will delete all whitespace after emma until the start of the next field. Here's a sed version that works with any user and group: sed -e 's/^.\{10\} [0-9]\+ [^ ]\+ [^ ]\+ //' file It relies even more heavily on the exact format of your ls output, so it is technically even worse than the first version... but it should work for your particular file. See Why *not* parse `ls`? for info on why parsing ls is bad. If not all files are owned by emma , you might want to use an awk script like this instead: awk 'NF>2 {print $5,$6,$7,$8,$9} ; NF<3 {print}' file For lines with more than 2 fields, it prints only fields 5-9.
For lines with fewer than 3 fields, it prints the entire line. Unfortunately, this loses the right-alignment of the size field; that can be fixed with a slightly more complicated awk script: awk 'NF>2 {printf "%5s %s %s %s %s\n", $5, $6, $7, $8, $9} ; NF<3 {print}' file This final version merges the for loop from jasonwryan's answer, so it copes with filenames that have any number of single spaces in them (but not consecutive spaces, as mentioned by G-Man): awk 'NF>2 {printf "%5s", $5; for(i=6;i<=NF;i++){printf " %s", $i}; printf "\n"} ; NF<3 {print}' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
239,920 | How do I set the fully qualified hostname on CentOS 7.0? I have seen a few posts online, for example using: $ sudo hostnamectl set-hostname nodename.domainname However, running domainname returns nothing: $ domainname
(none) Also: $ hostname
nodename.domainname However: $ hostname -f
hostname: Name or service not known
$ hostname -d
hostname: Name or service not known Some debug output: $ cat /etc/hostname
nodename.domainname
$ grep ^hosts /etc/nsswitch.conf
hosts: files dns | To set the hostname, do use hostnamectl , but only with the hostname, like this: hostnamectl set-hostname nodename To set the (DNS) domainname, edit the /etc/hosts file and ensure that: There is a line <machine's primary, non-loopback IP address> <hostname>.<domainname> <hostname> there. There are NO other lines with <some IP> <hostname> , and this includes lines with 127.0.0.1 and ::1 (IPv6) addresses. Note that unless you're using NIS, (none) is the correct output when running the domainname command. To check whether your DNS domainname is set correctly, use the dnsdomainname command and check the output of hostname vs hostname -f (FQDN). NIS vs. DNS domain This issue confused me when I first came across it. It seems that the domainname command predates the popularity of the Internet. Instead of the DNS domain name, it shows or sets the system's NIS (Network Information Service) aka YP (Yellow Pages) domain name (a group of computers which have services provided by a master NIS server). This command simply displays the name returned by the getdomainname(2) standard library function. ( nisdomainname and ypdomainname are alternative names for this command.) Display the FQDN or DNS domain name To check the DNS (Internet) domain name, you should run the dnsdomainname command or hostname with the -d, --domain options. (Note that the dnsdomainname command can't be used to set the DNS domain name – it's only used to display it.) To display the FQDN (Fully Qualified Domain Name) of the system, run hostname with the -f, --fqdn, --long options (likewise, this command can't be used to set the domain name part). The above commands use the system's resolver (implemented by the gethostbyname(3) function from the standard library, as specified by POSIX) to determine the DNS domain name and the FQDN. Name Resolution In modern operating systems such as RHEL 7, the hosts entry in /etc/nsswitch.conf is used for resolving host names. In your CentOS 7 machine, this line is configured as (default for CentOS 7): hosts: files dns This means that when the resolver functions look up hostnames or IP addresses, they first check for an entry in the /etc/hosts file and next try the DNS server(s) which are listed in /etc/resolv.conf . When running hostname -f to obtain the FQDN of a host, the resolver functions try to get the FQDN for the system's hostname. If the host is not listed in the /etc/hosts file or by the relevant DNS server, the attempt fails and hostname reports that Name or service not known . When hostname -d is run to obtain the domain name, the same operations are carried out, and the domain name part is determined by stripping the hostname part and the first dot from the FQDN. Configure the domain name Update the relevant DNS name server In my case, I had already added an entry for my new CentOS 7 machine in the DNS server for my local LAN, so when the FQDN wasn't found in the /etc/hosts file when I ran hostname with the -d or -f option, the local DNS services were able to fully resolve the FQDN for my new hostname.
Use the /etc/hosts file If the DNS server hasn't been configured, the fully qualified domain name can be specified in the /etc/hosts file. The most common way to do this is to specify the primary IP address of the system followed by its FQDN and its short hostname. E.g., 172.22.0.9 nodename.domainname nodename Excerpt from the hostname man page You cannot change the FQDN with hostname or dnsdomainname . The recommended method of setting the FQDN is to make the hostname be an alias for the fully qualified name using /etc/hosts, DNS, or NIS. For example, if the hostname was "ursula", one might have a line in /etc/hosts which reads: 127.0.1.1 ursula.example.com ursula Technically: The FQDN is the name getaddrinfo(3) returns for the hostname returned by gethostname(2). The DNS domain name is the part after the first dot. Therefore it depends on the configuration of the resolver (usually in /etc/host.conf ) how you can change it. Usually the hosts file is parsed before DNS or NIS, so it is most common to change the FQDN in /etc/hosts . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/239920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24554/"
]
} |
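A hedged end-to-end example tying the answer together (the names and the IP address are the placeholders used above):

sudo hostnamectl set-hostname nodename
echo '172.22.0.9 nodename.domainname nodename' | sudo tee -a /etc/hosts
hostname        # -> nodename
hostname -f     # -> nodename.domainname
hostname -d     # -> domainname
dnsdomainname   # -> domainname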
239,960 | When I run date on my Linux system I get Sat Oct 31 11:53:22 BRST 2015 . I want to get only Sat . How do I do that? So far I think I should use the code below, but it does not work: date | printf $1 | date can output what you want without the help of other commands: $ date +%a
Sat For more details: man 1 date | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140993/"
]
} |
239,964 | I have a cron job on CentOS that I want to execute every 3 minutes, but I have many other cron jobs that run every 3 minutes starting from 0 (0, 3, 6, 9, ...). So, to avoid my server getting too overloaded, I want some of my crons to run every 3 minutes but starting at minute 1 (1, 4, 7, ...). My crons usually look like this: */3 * * * * How can I do this? | 1-59/3 is the more typical and concise way to specify it, meaning "every 3 minutes starting from 1". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140993/"
]
} |
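Concretely, a hedged crontab line using that syntax (the script path is a placeholder):

# runs at minutes 1, 4, 7, ..., 58
1-59/3 * * * * /path/to/job.sh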
239,973 | Problem: When trying to install guest additions in Kali Linux the following error occurs: Oops! There was a problem running this software. Unable to locate program This occurred after a fresh install of Kali Linux 2.0 in VirtualBox 4.3.32. Action taken to get this error: VirtualBox -> Devices -> Insert Guest Additions CD image Then from the Kali Linux GUI the message ""VBOXADDITIONS_4.3.32_103443" contains software intended to be automatically started. Would you like to run it?" appears. Selecting Run produces the error. How to solve this problem? What is the cause? | The question is a bit old, but deserves an answer addressing the root cause of the error, not a work-around. The root cause of your issue is in /etc/fstab . If yours looks anything like mine, the mount options for /dev/sr0 are probably user,noauto . The user option automatically implies noexec , which prevents execution of all binaries on the mounted file system. You simply need to add the exec option to your mount statement in /etc/fstab , from: /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0 to: /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto,exec 0 0 This will allow you to execute binaries from optical media. Cheers, Rich | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/239973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137195/"
]
} |
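If the disc is already mounted, the new option can presumably be applied in place without rebooting (mount point as in the answer's fstab lines):

sudo mount -o remount,exec /media/cdrom0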
239,977 | I accidentally wrote a 512-byte binary to the wrong USB disk with dd , and the device doesn't show any partitions with fdisk anymore. I thought all the data was gone, but dd if=/dev/sdx | strings shows that the data seems to still be there, since dd fortunately limited itself to the first 512 bytes. Is there any way to recover it? The disk had two partitions: an ext4 one (~4GB), with the remaining 16GB formatted as NTFS. | It depends on what exactly was there before, but it might be easy(-ish) to recover from this. Use dd to create a full image of your USB drive on a safe location. Use dd to create a full image of your USB drive on a safe location. Yes, please do keep a full image. Data recovery operations can often cause more damage than one would expect. Try to remember what the partition layout on that USB drive was like. Write it down . It might help if you have system logs from when that disk (before being messed up) was detected by the Linux kernel - quite often it will print out some data about the detected partitions. Use fdisk to recreate the MBR with the same partition table. Do not format and/or fsck any partitions . Try to mount your partitions with the read-only ( -o ro ) mount option. If it succeeds, try to copy all files over to a safe location and watch your terminal and logs for I/O errors - the typical way for partition boundary errors to be expressed is via out-of-bound accesses on the underlying device. If the copy fails, restore the image and go back to step 4. Did I mention having a full image of the USB drive before doing anything else? PS: You might also want to have a look at tools like TestDisk , which attempt to automate the recovery process. But you should still get a full image first. PS2: If you feel comfortable enough, you could also experiment a bit. If you can make a reasonable assumption for the starting point of the first partition, then you can use tune2fs -l to get the exact size of the first partition, which would allow you to hunt for the start of the second one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55467/"
]
} |
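A hedged sketch of the imaging step itself (the device name is a placeholder; status=progress needs GNU coreutils 8.24 or newer):

sudo dd if=/dev/sdX of=usb-backup.img bs=4M conv=noerror,sync status=progress
cp usb-backup.img work.img   # experiment on the copy, never on the only image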
239,985 | I am quite puzzled as to why ssh requires password Authentication although i have generated and copies keys. I installed Ubuntu mini 14.04, and for whatever the reason I am not able to connect to it without a password via SSH. At first I believed it was an issue with the guest machine which generated and copied the keys, however that is not the case. Here is what I have done.Machine A (Let’s call it Client), Machine B (Ubuntu Mini, let’s call it Server). Somehow what to do on which machine gets a little confusing on many of the instructions I have found. Delete all entries in /home/user/.ssh on both the Client and the Server (making sure it was all clean) On Client Generated keys ssh-keygen on the client, went through the questions and did not apply a password. Copied keys to server ssh-copy-id [email protected] - entered password. SSH’d into server, client machine prompt for password, I check the make sure the key had been copied over the server. It was listed in the servers /home/users/.ssh/authorized_keys file I checked permission on /home/user/.ssh folder and made sure it was 700 SSH always requires a password. I repeated the same process on the server and was able to auto login via ssh to the client. SSH Directory on Server username@Server:~$ ls -ld .ssh drwx------ 2 username username 4096 Oct 27 08:24 .ssh .SSH Directory Contents on Server username@Server:~/.ssh$ ls -l total 16 -rw------- 1 username username 789 Oct 26 21:08 authorized_keys -rw------- 1 username username 1675 Oct 26 20:37 id_rsa -rw-r--r-- 1 username username 400 Oct 26 20:37 id_rsa.pub -rw-r--r-- 1 username username 222 Oct 26 20:37 known_hosts Authorized Keys on Server username@Server:~/.ssh$ cat authorized_keys ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJKqmBuPPxzFx/opVJhNQNiUUHLQIT4n2ScQljni489ONzUXmTC8fAhGprDFUhVs GZrlFm+RJrmu5VlasG+dLG33Y7mXTnhsj5FVjUzbbliUbVqizR di18Gh6AM5VyiSqSh/prDmT5xpasQLQopGmB3kxCP6+6RnKnovUk8f4UOs4i0HXZM9VM EnwgPkN9v6LTTI7VI2QApLl/c1aYfMF2jOua/T7Xw4hdz+DbzEQi8ygk9NYpbE1QB8l4TB2Ls6hwBEVlSeHcP3H 6RX8a71ow+qGz5Zz9cK5Eg6v3OKK6YXcwS2osePWgMmJsNW/mVgne3pQvoajIZyMx9+r9mCIF pi@PiScanner RSA Public keys on Client pi@PiScanner ~/.ssh $ cat id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJKqmBuPPxzFx/opVJhNQNiUUHLQIT4n2ScQljni489ONzUXmTC8fAhGprDFUhVs GZrlFm+RJrmu5VlasG+dLG33Y7mXTnhsj5FVjUzbbliUbVqizR di18Gh6AM5VyiSqSh/prDmT5xpasQLQopGmB3kxCP6+6RnKnovUk8f4UOs4i0HXZM9VM EnwgPkN9v6LTTI7VI2QApLl/c1aYfMF2jOua/T7Xw4hdz+DbzEQi8ygk9NYpbE1QB8l4TB2Ls6hwBEVlSeHcP3H 6RX8a71ow+qGz5Zz9cK5Eg6v3OKK6YXcwS2osePWgMmJsNW/mVgne3pQvoajIZyMx9+r9mCIF pi@PiScanner pi@PiScanner ~/.ssh $ ssh -vvv [email protected] OpenSSH_6.0p1 Debian-4+deb7u2, OpenSSL 1.0.1e 11 Feb 2013debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to 192.168.101.2 [192.168.101.2] port 22.debug1: Connection established.debug3: Incorrect RSA1 identifierdebug3: Could not load "/home/pi/.ssh/id_rsa" as a RSA1 public keydebug1: identity file /home/pi/.ssh/id_rsa type 1debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048debug1: identity file /home/pi/.ssh/id_rsa-cert type -1debug1: identity file /home/pi/.ssh/id_dsa type -1debug1: identity file /home/pi/.ssh/id_dsa-cert type -1debug1: identity file /home/pi/.ssh/id_ecdsa type -1debug1: identity file /home/pi/.ssh/id_ecdsa-cert type -1debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 
Ubuntu-2ubuntu2.3debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3 pat OpenSSH*debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.0p1 Debian-4+deb7u2debug2: fd 3 setting O_NONBLOCKdebug3: load_hostkeys: loading entries for host "192.168.101.2" from file "/home/pi/.ssh/known_hosts"debug3: load_hostkeys: found key type ECDSA in file /home/pi/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keysdebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected]@openssh.com,ecdsa-sha2-nistp52 [email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-excha nge-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: [email protected]@openssh.com,[email protected] om,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected] om,[email protected],ssh-rsa,ssh-dssdebug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cb c,arcfour,[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cb c,arcfour,[email protected]: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac [email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac [email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit:debug2: kex_parse_kexinit:debug2: kex_parse_kexinit: first_kex_follows 0debug2: kex_parse_kexinit: reserved 0debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha2 56,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],chacha20-poly1305@o penssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],chacha20-poly1305@o penssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],[email protected],hmac-sha2-256-etm@op enssh.com,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1, [email protected],[email protected]@openssh.com,hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: [email protected],[email protected],hmac-sha2-256-etm@op enssh.com,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1, [email protected],[email protected]@openssh.com,hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit:debug2: kex_parse_kexinit:debug2: kex_parse_kexinit: first_kex_follows 0debug2: 
kex_parse_kexinit: reserved 0debug2: mac_setup: found hmac-md5debug1: kex: server->client aes128-ctr hmac-md5 nonedebug2: mac_setup: found hmac-md5debug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: sending SSH2_MSG_KEX_ECDH_INITdebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ECDSA 73:78:68:3b:58:0d:78:a9:64:96:6e:9c:ca:0c:ae:9fdebug3: load_hostkeys: loading entries for host "192.168.101.2" from file "/home/pi/.ssh/known_hosts"debug3: load_hostkeys: found key type ECDSA in file /home/pi/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keysdebug1: Host '192.168.101.2' is known and matches the ECDSA host key.debug1: Found key in /home/pi/.ssh/known_hosts:1debug1: ssh_ecdsa_verify: signature correctdebug2: kex_derive_keysdebug2: set_newkeys: mode 1debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: /home/pi/.ssh/id_rsa (0x782a3308)debug2: key: /home/pi/.ssh/id_dsa ((nil))debug2: key: /home/pi/.ssh/id_ecdsa ((nil))debug1: Authentications that can continue: publickey,passworddebug3: start over, passed a different list publickey,passworddebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/pi/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug1: Authentications that can continue: publickey,passworddebug1: Trying private key: /home/pi/.ssh/id_dsadebug3: no such identity: /home/pi/.ssh/id_dsadebug1: Trying private key: /home/pi/.ssh/id_ecdsadebug3: no such identity: /home/pi/.ssh/id_ecdsadebug2: we did not send a packet, disable methoddebug3: authmethod_lookup passworddebug3: remaining preferred: ,passworddebug3: authmethod_is_enabled passworddebug1: Next authentication method: [email protected]'s password: Cannot get this to auto log in! any assistance would be greatful! thank you. | It depends on what exactly was there before, but it might be easy(-ish) to recover from this. Use dd to create a full image of your USB drive on a safe location. Use dd to create a full image of your USB drive on a safe location. Yes, please do keep a full image. Data recovery operations can often cause more damage than one would expect. Try to remember what the partition layout on that USB drive was like. Write it down . It might help if you have system logs from when that disk (before being messed-up) is detected by the Linux kernel - quite often it will print-out some data about the detected partitions. Use fdisk to recreate the MBR with the same partition table. Do not format and/or fsck any partitions . Try to mount your partitions with the read-only ( -o ro ) mount option. If it succeeds, try to copy all files over to a safe location and watch your terminal and logs for I/O errors - the typical way for partition boundary errors to be expressed is via out-of-bound accesses on the underlying device. If the copy fails, restore the image and go back to step 4. Did I mention having a full image of the USB drive before doing anything else? PS: You might also want to have a look at tools like TestDisk , that attempt to automate the recovery process. 
PS: You might also want to have a look at tools like TestDisk, which attempt to automate the recovery process. But you should still get a full image first. PS2: If you feel comfortable enough, you could also experiment a bit. If you can make a reasonable assumption for the starting point of the first partition, then you can use tune2fs -l to get the exact size of the first partition, which would allow you to hunt for the start of the second one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/239985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68650/"
]
} |
240,013 | I'm getting a bizarre error message while using git:
$ git clone [email protected]:Itseez/opencv.git
Cloning into 'opencv'
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
X11 forwarding request failed on channel 0
(...)
I was under the impression that X11 wasn't required for git, so this seemed strange. The clone worked successfully, so this is more of a "warning" issue than an "error" issue, but it seems unsettling. After all, git shouldn't need X11. Any suggestions? | Note that to disable ForwardX11 just for github.com you need something like the following in your ~/.ssh/config:
Host github.com
    ForwardX11 no
Host *
    ForwardX11 yes
The last two lines assume that in general you /do/ want to forward your X connection. This can cause confusion because the following is WRONG:
ForwardX11 yes
Host github.com
    ForwardX11 no
That is what I had (and it caused me no end of confusion). This is because in .ssh/config the first matching setting wins and isn't overwritten by subsequent customizations. HTH, Dan.
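Two quick checks worth knowing about (an aside with version caveats: ssh -G needs OpenSSH 6.8 or newer, and GIT_SSH_COMMAND needs git 2.3 or newer):
ssh -G github.com | grep -i forwardx11                 # show the effective setting for this host
GIT_SSH_COMMAND='ssh -o ForwardX11=no' git clone [email protected]:Itseez/opencv.git   # one-off override
| {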
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/240013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141083/"
]
} |
240,039 | I got my hands on a copy of Unix System V. I downloaded software to write img files to floppy disks, but when I try to boot from the floppy on my old 128 MB RAM computer it says "disk image failure". How do I install Unix, and is it even possible? | Unix System V is from 1983. There is a pretty good chance that the disk image you have is not even for Intel x86 architectures and won't work at all on your system or emulator, let alone the other hardware driver incompatibilities. Maybe it would work if you used one of the alternate qemu architectures, but most likely you'd need to get your hands on compatible hardware that still works. There are some youtube videos where people boot these old systems and explore them a bit.
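If the image does turn out to be an x86 one, a minimal qemu attempt could look like this (a sketch; the file name sysv.img is an assumption, and period-appropriate small RAM is given with -m):
qemu-system-i386 -fda sysv.img -boot a -m 16   # boot from floppy image "a"
| {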
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141098/"
]
} |
240,059 | Is there a way to run a script on shutdown, after the file system is remounted as read-only? I have a Raspberry Pi connected to a wireless socket, which I can control via a sender and a script. I want to power off that socket (powering off the Raspberry Pi) on shutdown, after the file system is mounted read-only. I've tried this:
[Unit]
Description=Test
DefaultDependencies=no
Requires=shutdown.target umount.target final.target
After=shutdown.target umount.target final.target

[Service]
Type=oneshot
ExecStart=/test
KillMode=none

[Install]
WantedBy=halt.target
The script /test outputs the current mounts. When it's run on shutdown, it reports read/write for the root file system, not read-only as expected. Edit: Content of /test:
#!/bin/bash
echo -n 'Debug-Mount: ' > /dev/tty1
cat /proc/mounts | grep /dev/sda > /dev/tty1
Screen output on shutdown: | I found a reliable solution: just put the script in /usr/lib/systemd/system-shutdown/. See also: https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
Immediately before executing the actual system halt/poweroff/reboot/kexec systemd-shutdown will run all executables in /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All executables in this directory are executed in parallel, and execution of the action is not continued before all executables finished.
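For the wireless-socket use case, such a script might look like this (a hedged sketch; the script name and the send433 sender command are assumptions standing in for your real sender):
#!/bin/sh
# /usr/lib/systemd/system-shutdown/poweroff-socket  (must be executable)
# $1 is "halt", "poweroff", "reboot" or "kexec"
if [ "$1" = "poweroff" ] || [ "$1" = "halt" ]; then
    /usr/local/bin/send433 socket1 off   # hypothetical command to switch the socket off
fi
| {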
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45235/"
]
} |
240,112 | I've read some answers on this site and found printf's rounding behavior desirable. However when I used it in practice, a subtle bug led me to the following behavior:
$ echo 197.5 | xargs printf '%.0f'
198
$ echo 196.5 | xargs printf '%.0f'
196
$ echo 195.5 | xargs printf '%.0f'
196
Notice that rounding 196.5 becomes 196. I know this could be some subtle floating point bug (but this is not a very large number, huh?), so can someone throw some light upon this? A workaround is also greatly welcomed (because I'm trying to put this to work now). | It is as expected: it is "round to even", or "Banker's rounding". A related site answer explains it. The issue that such a rule is trying to solve is that (for numbers with one decimal) x.1 up to x.4 are rounded down, and x.6 up to x.9 are rounded up. That's 4 down and 4 up. To keep the rounding in balance, we need to round the x.5 up one time and down the next. That is done by the rule: «Round to nearest 'even number'». In code:
sh:
LC_NUMERIC=C printf '%.0f\n' "$value"
awk:
echo "$value" | awk '{ printf("%.0f\n", $1) }'
Options: in total, there are four possible ways to round a number:
1. The already explained Banker's rule.
2. Round towards +infinity: round up (for positive numbers).
3. Round towards -infinity: round down (for positive numbers).
4. Round towards zero: remove the decimals (either positive or negative).
Up. If you do need "round up (toward +infinity)", then you can use:
value=195.5
awk:
echo "$value" | awk '{ printf("%d", $1 + 0.5) }'
bc:
echo "scale=0; ($value+0.5)/1" | bc
Down. If you do need "round down (toward -infinity)", then you can use:
value=195.5
awk:
echo "$value" | awk '{ printf("%d", $1 - 0.5) }'
bc:
echo "scale=0; ($value-0.5)/1" | bc
Trim decimals. To remove the decimals (anything after the dot), we could also directly use the shell (works on most shells - it is POSIX):
value="127.54"   ### Works also for negative numbers.
shell:
echo "${value%%.*}"
awk:
echo "$value" | awk '{ printf("%d\n", $0) }'
bc:
echo "scale=0; ($value)/1" | bc
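You can see the round-to-even pattern directly with a small check (on glibc, %.0f follows the current floating-point rounding mode, which defaults to round-half-to-even; the x.5 values used here are exactly representable in binary):
for v in 194.5 195.5 196.5 197.5; do
    LC_NUMERIC=C printf '%s -> %.0f\n' "$v" "$v"
done
# 194.5 -> 194, 195.5 -> 196, 196.5 -> 196, 197.5 -> 198
| {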
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70229/"
]
} |
240,116 | I tried to install the php-ldap package on my machine but was not able to do so. It shows missing dependencies error for php-common , but the said package is already installed and updated. See the following screenshot: | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141152/"
]
} |
240,146 | Citing the tmux man page: The TERM environment variable must be set to "screen" for all programs running inside tmux. New windows will automatically have "TERM=screen" added to their environment, but care must be taken not to reset this in shell start-up files. Is this because a tmux session can be attached to any terminal (e.g. remotely), screen being the lowest common denominator? If I know my terminal, can I force it without unexpected results, or would some unsupported capabilities break (like scrolling)? | It does this to take advantage of the previously-installed terminal description for GNU screen. Both tmux and screen provide applications with (more or less) the "same" terminal descriptions to simplify connecting from different terminals. The tmux and screen programs are supposed to handle the differences between the internal (TERM=screen) and external (xterm, linux, etc.) terminals. So the screen terminal description has been installed "everywhere" (usually by ncurses). There are some differences: GNU screen has a very old problem representing the "standout" feature (which technically is not a specific terminal feature but rather an abstraction for the convenience of curses applications). tmux does not have that limitation. But the improved TERM=tmux is not necessarily "everywhere". GNU screen has a feature for using hybrid terminal descriptions. Given the external TERM=xterm and the existence of "screen.xterm", it will choose that for the internal TERM value (see for example the terminal database). tmux does not do that.
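To check which of these entries your terminfo database actually has before forcing anything (a small aside; infocmp ships with ncurses and exits non-zero when an entry is missing):
infocmp screen       >/dev/null 2>&1 && echo "screen entry present"
infocmp tmux         >/dev/null 2>&1 && echo "tmux entry present"
infocmp screen.xterm >/dev/null 2>&1 && echo "hybrid screen.xterm entry present"
| {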
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78297/"
]
} |
240,252 | I ran sudo pacman -Syu and I got some interesting errors reading:
error: failed to commit transaction (conflicting files)
and a long list of files followed by
exists in filesystem
Full output is here: http://ix.io/lLw
It appears that many of these files are not associated with a package when I checked them with pacman -Qo <path-to-file>, but I did not check them all. I had a weak connection when I ran pacman -Syu, but I get the same errors when I updated later: http://ix.io/lLx
What should I do? Should I check all files and delete the ones that do not have an associated package? Should I force update (with sudo pacman -S --force <package-name>?)
Update
I tried running sudo pacman -S --force <package-name> and got this:
[my-pc]/home/average-joe$ pacman -Qo /usr/lib/python3.5/site-packages/PyYAML-3.11-py3.5.egg-info
error: No package owns /usr/lib/python3.5/site-packages/PyYAML-3.11-py3.5.egg-info
It looks like pacman -S --force <package-name> does not overwrite directories that contain files. From the man page:
Using --force will not allow overwriting a directory with a file or installing packages with conflicting files and directories.
Should I just delete the conflicting directories? (They do not have associated packages.) | After pacman finally deprecated the --force option and made the surrogate --overwrite option work as expected, the following usage pattern should be noted. A command to reproduce the --force option, blindly overwriting anything that conflicts, is this:
sudo pacman -S --overwrite \* <package_name>
Or:
sudo pacman -S --overwrite "*" <package_name>
The tricky part is escaping the wildcard to stop the shell from expanding it first.
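If you would rather not bypass every conflict check, --overwrite also accepts narrower globs and can be repeated; a sketch using the path from the question:
sudo pacman -S --overwrite '/usr/lib/python3.5/site-packages/*' <package_name>
| {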
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/240252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59802/"
]
} |
240,274 | Up to now I thought that a semicolon in the shell has (somehow) the same meaning as a line break. So I was surprised that for
alias <name>=<replacement text>; <name>
<name> is unknown, while it is known on the next line. csh, tcsh, sh, ksh and bash behave the same. At least for csh it does not matter if alias is used directly or if a script is sourced before the semicolon--the aliases are not known after ; but they are known on the next command line. Is this a bug or is this behavior intended? | The alias syntax you are using is inappropriate for a POSIX shell; for a POSIX shell, you need to use:
alias name='replacement'
But for all shells, this cannot work, as the alias replacement is done early, in the parser. Before your alias setup is executed, the whole line has already been read by the parser, and for this reason your command line will not work. If the alias appears on the next command line, it will work.
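A quick interactive demonstration (the alias name greet is just an example):
$ alias greet='echo hello'; greet
greet: command not found    # the whole line was parsed before the alias existed
$ greet
hello                       # on the next line the alias is known
| {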
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60224/"
]
} |
240,278 | I played around with an LSB init script under Debian Wheezy (init is from sysvinit package version 2.88dsf-41+deb7u1) for learning purposes. My script is the following:
# cat /etc/init.d/test-script
#! /bin/sh
### BEGIN INIT INFO
# Provides:          test
# Required-Start:    $all
# Required-Stop:     $all
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: test script
# Description:       test script
### END INIT INFO

# always executes
touch /tmp/test-file

case "$1" in
  start)
    echo "Starting script test"
    touch /tmp/test-file-start
    ;;
  stop)
    echo "Stopping script test"
    touch /tmp/test-file-stop
    ;;
  restart)
    echo "Restarting script test"
    touch /tmp/test-file-restart
    ;;
  force-reload)
    echo "Force-reloading script test"
    touch /tmp/test-file-force-reload
    ;;
  status)
    echo "Status of test"
    touch /tmp/test-file-status
    ;;
  *)
    echo "Usage: /etc/init.d/test {start|stop}"
    exit 1
    ;;
esac
exit 0
#
I made the /etc/init.d/test-script file executable and added a symlink to the /etc/rc2.d/ directory:
lrwxrwxrwx 1 root root 21 Nov 2 13:19 /etc/rc2.d/S04test-script -> ../init.d/test-script
..as my default runlevel is 2, and rebooted the machine, but the script was not started. As a final step I also added test to the /etc/init.d/.depend.start file, but /etc/init.d/test-script was still not executed during bootup. Which additional steps does insserv take to install an init script? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
240,282 | For example, given:
CREATE TABLE MWWDATA.ACK997
  (
    AKTYPE CHAR(2) DEFAULT '' NOT NULL ,
    AKNUM  CHAR(9) DEFAULT '' NOT NULL
  );
CREATE TABLE MWWDATA.APREIDEXC
  (
    EMPLID NUMBER(15, 0) DEFAULT NULL
  );
I want output like:
CREATE TABLE MWWDATA.ACK997(AKTYPE,AKNUM);
CREATE TABLE MWWDATA.APREIDEXC(EMPLID); | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131356/"
]
} |
240,292 | I have a file which has many entries such as:
US6DWMD01#DW01DDATAPURGE(060009/28)
US6DWMD01#DW01DDATAPURGE(060009/29)
US6DWMD01#DW01DDATAPURGE(060009/30)
US6DWMD01#DW01DDATAPURGE(060011/01)
US6DWMD01#DW11WPURESUN(060011/01)
US6TPA01#PPAORD__LDBASE(000009/26)
US6TPA01#PPAORD__LGBOX(000009/26)
US6TPA01#PPATDD__DEDMGT(060009/25)
US6TPA01#PPATDD__FLNET(060009/25)
US6TPA01#PPATDD__LORTBLS(060009/25)
US6TPA01#PPATDD__PPATTBLS(060009/25)
US6TPA01#PPATDD__P8020RP(060011/01)
I want to use cut/sed/awk commands to insert a space five characters from the end of each line, as below:
US6TPA01#PPATDD__DEDMGT(0600 09/25)
US6TPA01#PPATDD__FLNET(0600 09/25)
US6TPA01#PPATDD__LORTBLS(0600 09/25)
US6TPA01#PPATDD__PPATTBLS(0600 09/25)
US6TPA01#PPATDD__P8020RP(0600 11/01) | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240292",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141294/"
]
} |
240,344 | I always boot up my GNU/Linux laptop in console mode, but sometimes I need to bring up the GUI mode, and it always requires entering the root password. So I wrote the following script "gogui.sh" and put it in /usr/bin:
#!/bin/bash
echo "mypassword" | sudo service lightdm start
It is a really stupid idea: anyone who reads the file can easily see my password. Is there an alternative to this? | Passing a password to sudo in a script is utterly pointless. Instead, add a sudo rule adding the particular command you want to run with the NOPASSWD tag. Take care that the command-specific NOPASSWD rule must come after any general rule:
saeid ALL = (ALL:ALL) ALL
saeid ALL = (root) NOPASSWD: service lightdm start
But this is probably not useful anyway. lightdm start starts a login prompt, but you only need that if you want to let other users log in graphically. You don't need it if all you want is to start a GUI session. Instead, call startx to start a GUI session from your text mode session. This does not require any extra privilege. You may need to explicitly specify your window manager or desktop environment, as startx might not pick up the same default session type that lightdm uses.
startx -- gnome-session
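As a side note, a common way to make this automatic (a sketch; it assumes bash is the login shell and that you log in on tty1) is to add this to ~/.bash_profile:
if [ -z "$DISPLAY" ] && [ "$(tty)" = /dev/tty1 ]; then
    exec startx   # replace the login shell with the X session
fi
| {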
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/240344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122146/"
]
} |
240,353 | The udev rules I've created so far only deal with devices being added or removed, i.e. ACTION=="add"... or ACTION=="remove"... I've come across an example of a rule that seems to deal with device state changes as well:
ACTION=="add|change", KERNEL=="sd[b-z]", ATTR{queue/rotational}=="1", RUN+="/usr/bin/hdparm -B 127 -S 12 /dev/%k"
I take it that the above rule applies whenever a matching device is added OR its state changes. Question: what kind of state changes are possible (generally, and specifically for a USB hard drive)? I've checked all the udev documentation I can find and there's barely any mention of, or usage guidance for, device state changes or specifically ACTION=="change". | "change" corresponds, for example, to removing or inserting an SD card in an SD card reader, or changing the hard disk inside a USB-to-SATA enclosure. The device itself is neither added nor removed, but the media is no longer the same.
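To watch such events live, or to synthesize one for testing a rule (a brief aside; both are standard udevadm subcommands, though option names vary a little between udev versions):
udevadm monitor --udev --property                    # watch events, including ACTION=change
udevadm trigger --action=change --name-match=sdb     # ask udev to emit a change event for /dev/sdb
| {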
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133840/"
]
} |
240,418 | How do I find out the most recently accessed file in a given directory? I can use the find command to list out all files modified/accessed in the last n minutes, but here in my case I'm not sure when the last file was modified/accessed. All that I need is to list all the files which were accessed/modified very recently among all other sub-files or sub-directories, sorted by their access/modified times, for example. Is that possible? | To print the last 3 accessed files (sorted from the last accessed file to the third last accessed file):
find . -type f -exec stat -c '%X %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
To print the last 3 modified files (sorted from the last modified file to the third last modified file):
find . -type f -exec stat -c '%Y %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
find . -type f -exec stat -c '%X %n' {} \; : prints the last access time followed by the file's path for each file in the current directory hierarchy;
find . -type f -exec stat -c '%Y %n' {} \; : prints the last modification time followed by the file's path for each file in the current directory hierarchy;
sort -nr : sorts in an inverse numerical order;
awk 'NR==1,NR==3 {print $2}' : prints the second field of the first, second and third line.
You can change the number of files to be shown by changing 3 to the desired number of files in awk 'NR==1,NR==3 {print $2}'.
% touch file1
% touch file2
% touch file3
% find . -type f -exec stat -c '%X %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
./file3
./file2
./file1
% find . -type f -exec stat -c '%Y %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
./file3
./file2
./file1
% cat file1
% find . -type f -exec stat -c '%X %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
./file1
./file3
./file2
% find . -type f -exec stat -c '%Y %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
./file3
./file2
./file1
% touch file2
% find . -type f -exec stat -c '%X %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
./file2
./file1
./file3
% find . -type f -exec stat -c '%Y %n' {} \; | sort -nr | awk 'NR==1,NR==3 {print $2}'
./file2
./file3
./file1
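With GNU find you can skip the per-file stat calls entirely (an alternative sketch; -printf is GNU-specific, %A@ is access time and %T@ is modification time in seconds since the epoch):
find . -type f -printf '%A@ %p\n' | sort -nr | head -n 3   # by access time
find . -type f -printf '%T@ %p\n' | sort -nr | head -n 3   # by modification time
| {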
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
240,429 | How do I write a bash script that makes lists by selecting a column of a spreadsheet which is in csv format? If I have csv files that have these contents:
[user]$ cat list1.csv
Last, First, user
lname1, fname1, user1
lname2, fname2, user2
[user]$ cat list2.csv
Last, First, user
lname3, fname3, user3
lname4, fname4, user4
And I want the script to be invoked as
CreateList <column> <file1> <file2> ...
For example:
[user]$ CreateList 2 list2.csv list1.csv
list2: fname3, fname4
list1: fname1, fname2 | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140168/"
]
} |
240,444 | I am working on a remote Debian Jessie server. I have started a screen session, started running a script, then been disconnected by a network timeout. Now I have logged in again and want to resume the session. This is what I see when I list screens:
$ screen -ls
There are screens on:
    30608.pts-8.myserver  (11/03/2015 08:47:58 AM)  (Attached)
    21168.pts-0.myserver  (11/03/2015 05:29:24 AM)  (Attached)
    7006.pts-4.myserver   (10/23/2015 09:05:45 AM)  (Detached)
    18228.pts-4.myserver  (10/21/2015 07:50:49 AM)  (Detached)
    17849.pts-0.myserver  (10/21/2015 07:43:53 AM)  (Detached)
5 Sockets in /var/run/screen/S-me.
I seem to be attached to two screens at once. Now I want to resume the session I was running before, to see the results of my script:
$ screen -r 30608.pts-8.myserver
There is a screen on:
    30608.pts-8.OpenPrescribing (11/03/2015 08:47:58 AM) (Attached)
There is no screen to be resumed matching 30608.pts-8.myserver.
Why can't I re-attach? I have the same problem with the other screen:
$ screen -r 21168.pts-0.myserver
There is a screen on:
    21168.pts-0.OpenPrescribing (11/03/2015 05:29:24 AM) (Attached)
There is no screen to be resumed matching 21168.pts-0.myserver. | The session is still attached on another terminal. The server hasn't detected the network outage on that connection: it only detects the outage when it tries to send a packet and gets an error back or no response after a timeout, but this hasn't happened yet. You're in a common situation where the client detected the outage because it tried to send some input and failed, but the server is just sitting there waiting for input. Eventually the server will send a keepalive packet and detect that the connection is dead. In the meantime, use the -d option to detach the screen session from the terminal it's attached to:
screen -r -d 30608
screen -rd is pretty much the standard way to attach to an existing screen session. | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/240444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118973/"
]
} |
240,448 | I am using a shared server at HG and I want to automate a bash script that will run hourly once and notify me to my gmail account with details of authorized/non-authorized users who have logged into the system in the past hour. HG doesn't allow tools like inotify in their shared plans. Is this possible? Do you think it's a decent idea? Although I am the only user, what happens if someone illicitly logs in without my knowledge? The problem is I can't run who every time or scan the logs as it is a tedious process. | | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/240448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140396/"
]
} |
240,470 | I was wondering if there is a better way to run the following command:
cat cisco.log-20151103.log | grep -v "90.192.142.138" | grep -v "PIX" | grep -v "Intrusion"
I tried
cat cisco.log-20151103.log | grep -v "90.192.142.138|PIX|Intrusion"
but it doesn't work. | Two other options:
grep -v -e 90.192.142.138 -e PIX -e Intrusion cisco.log-20151103.log
and, assuming fixed strings (one pattern per line inside the quotes):
grep -vF '90.192.142.138
PIX
Intrusion' cisco.log-20151103.log
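The original attempt failed because plain grep treats | as a literal character; with extended regular expressions it works as a single alternation (a note: the dots in the IP address are regex metacharacters, so they are escaped here):
grep -vE '90\.192\.142\.138|PIX|Intrusion' cisco.log-20151103.log
| {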
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240470",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138120/"
]
} |
240,506 | I am migrating a kafka/zookeeper cluster on Windows to Debian wheezy.
Java version: 1.7.0_80
Debian version: 7.9
Zookeeper version: 3.3.5+dfsg1-2
Kafka version: 2.10-0.8.2.1
If I configure zookeeper on the Debian servers with IP addresses for the other Debian servers, everything works fine. If I use DNS names instead, the leader election fails on the Debian servers. On the Debian servers, I can look up the IP of any of the other Debian servers using the 'host' command, so DNS resolution is working. Everything is automated: server creation, Debian installation, zookeeper installation, zookeeper configuration; so the window for manual config errors is at a bare minimum and everything is easy to reproduce or change. Using clientPortAddress=DNSNAME does not make any difference; it still fails. There is nothing configured in iptables. There is no firewall in between these servers. In the following, servers 1-3 are Windows 2012R2 servers and servers 4-6 are Debian servers.
This config works:
server.1=testkafka400:2888:3888
server.2=testkafka401:2888:3888
server.3=testkafka402:2888:3888
server.4=10.1.132.152:2888:3888
server.5=10.1.132.153:2888:3888
server.6=10.1.132.154:2888:3888
This config does not work:
server.1=testkafka400:2888:3888
server.2=testkafka401:2888:3888
server.3=testkafka402:2888:3888
server.4=testkafka403:2888:3888
server.5=testkafka404:2888:3888
server.6=testkafka405:2888:3888
When I use the DNS names, I get the following output -- where the exceptions just repeat themselves. Please note that the following log is from a cluster setup containing only Debian servers, using DNS names, for the sake of testing. If I shift to IP, the cluster works and can hold an election.
[2015-11-03 13:55:52,309] INFO Reading configuration from: /etc/zookeeper/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2015-11-03 13:55:52,322] INFO Defaulting to majority quorums (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2015-11-03 13:55:52,344] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-11-03 13:55:52,344] INFO autopurge.purgeInterval set to 24 (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-11-03 13:55:52,345] INFO Purge task started. (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-11-03 13:55:52,454] INFO Purge task completed. (org.apache.zookeeper.server.DatadirCleanupManager)
[2015-11-03 13:55:52,472] INFO Starting quorum peer (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2015-11-03 13:55:52,581] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2015-11-03 13:55:52,601] INFO tickTime set to 3000 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2015-11-03 13:55:52,601] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2015-11-03 13:55:52,601] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2015-11-03 13:55:52,601] INFO initLimit set to 20 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2015-11-03 13:55:52,626] INFO Reading snapshot /etc/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap)
[2015-11-03 13:55:52,675] INFO My election bind port: testkafka403.prod.local/127.0.1.1:3888 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2015-11-03 13:55:52,713] INFO LOOKING (org.apache.zookeeper.server.quorum.QuorumPeer)
[2015-11-03 13:55:52,715] INFO New election. My id = 4, proposed zxid=0x100000014 (org.apache.zookeeper.server.quorum.FastLeaderElection)
[2015-11-03 13:55:52,717] INFO Notification: 1 (message format version), 4 (n.leader), 0x100000014 (n.zxid), 0x1 (n.round), LOOKING (n.state), 4 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state) (org.apache.zookeeper.server.quorum.FastLeaderElection)
[2015-11-03 13:55:52,732] WARN Cannot open channel to 5 at election address testkafka404.prod.local/10.1.132.153:3888 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.SocketTimeoutException
    at java.net.SocksSocketImpl.remainingMillis(SocksSocketImpl.java:111)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
    at java.lang.Thread.run(Thread.java:745)
[2015-11-03 13:55:52,737] WARN Cannot open channel to 6 at election address testkafka405.prod.local/10.1.132.154:3888 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
    at java.lang.Thread.run(Thread.java:745)
[2015-11-03 13:55:52,919] WARN Cannot open channel to 6 at election address testkafka405.prod.local/10.1.132.154:3888 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
We really would like to be able to use DNS names, but have no clue as to where we should begin looking for a solution any more. Maybe we missed installing or activating an important Debian or Java feature? | Okay, so I have an idea of what's going on here. I saw the same issue when trying to set up a 3-node Spring-XD cluster in Vagrant, on Linux VMs.
This configuration worked:
server.1=172.28.128.3:2888:3888
server.2=172.28.128.4:2888:3888
server.3=172.28.128.7:2888:3888
But this one didn't:
server.1=spring-xd-1:2888:3888
server.2=spring-xd-2:2888:3888
server.3=spring-xd-3:2888:3888
The "smoking gun" was this line in my zookeeper log:
2015-11-26 20:48:31,439 [myid:1] - INFO [Thread-2:QuorumCnxManager$Listener@504] - My election bind port: spring-xd-1/127.0.0.1:3888
So, why was Zookeeper binding the election port on the loopback interface? Well... My /etc/hosts on one of the VMs looked like this:
127.0.0.1   spring-xd-1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## vagrant-hostmanager-start
172.28.128.3 spring-xd-1
172.28.128.4 spring-xd-2
172.28.128.7 spring-xd-3
## vagrant-hostmanager-end
I removed the hostname from the 127.0.0.1 line in /etc/hosts and bounced the zookeeper service on all 3 nodes, and BAM! everything came up roses. So, now the hosts file on each machine looks like this:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## vagrant-hostmanager-start
172.28.128.3 spring-xd-1
172.28.128.4 spring-xd-2
172.28.128.7 spring-xd-3
## vagrant-hostmanager-end
I'm guessing you didn't see the issue on Windows because the hosts file (C:\Windows\System32\drivers\etc\hosts) has no entries by default. You should be able to reproduce the problem on Windows by adding a similar 127.0.0.1 line to it. I'm calling this a Zookeeper bug. Editing the hosts file was good enough to prove out the issue and remediate it in Vagrant, but I wouldn't recommend it for any "real" environment. EDIT: According to http://ccl.cse.nd.edu/operations/condor/hostname.shtml, this seems to be a fairly common problem with clustered apps on Linux, and it recommends editing the hosts file as I've described above. However, the Zookeeper documentation on cluster setup doesn't mention it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134325/"
]
} |
240,541 | What is the resolution of a jiffy in the Linux kernel? According to the current timer source (cat /sys/devices/system/clocksource/clocksource0/current_clocksource), Linux uses the TSC and has nanosecond resolution. According to http://lxr.free-electrons.com/source/include/linux/jiffies.h a jiffy is not smaller than 1us, but can be larger. Is there a way to determine its current resolution? | If you take a look at the man page man 7 time:
The value of HZ varies across kernel versions and hardware platforms. On i386 the situation is as follows: on kernels up to and including 2.4.x, HZ was 100, giving a jiffy value of 0.01 seconds; starting with 2.6.0, HZ was raised to 1000, giving a jiffy of 0.001 seconds. Since kernel 2.6.13, the HZ value is a kernel configuration parameter and can be 100, 250 (the default) or 1000, yielding a jiffies value of, respectively, 0.01, 0.004, or 0.001 seconds. Since kernel 2.6.20, a further frequency is available: 300, a number that divides evenly for the common video frame rates (PAL, 25 HZ; NTSC, 30 HZ). The times(2) system call is a special case. It reports times with a granularity defined by the kernel constant USER_HZ. User-space applications can determine the value of this constant using sysconf(_SC_CLK_TCK).
You can inquire the CLK_TCK constant:
$ getconf CLK_TCK
100
This gives you the value of USER_HZ, i.e. 100 - the tick unit exposed to userspace by times(2); note that the kernel's internal HZ may differ, as described above.
References:
How does USER_HZ solve the jiffy scaling issue?
time.h - time types
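The same constant is available programmatically via the interface the man page mentions, sysconf(_SC_CLK_TCK); a one-liner aside:
python3 -c 'import os; print(os.sysconf("SC_CLK_TCK"))'
| {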
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141729/"
]
} |
240,594 | With the help of Display command in xterm titlebar I've got gnome-terminal changing the title to reflect the running command, so that I can see which terminal Mutt is running in. But what I'd really like is to push my Mutt status up to the title. I have this in my .muttrc:
set status_format = "%n new | %M in %f [%v]."
and I'd love to push that whole status to my gnome-terminal title. Is there a way to do that in my .bashrc? Or another way? There's a discussion of how to do this from within vim at http://vim.wikia.com/wiki/Automatically_set_screen_title but... that's vim. | mutt can already do this. From man muttrc:
ts_enabled
    Type: boolean
    Default: no
    Controls whether mutt tries to set the terminal status line and icon name. Most terminal emulators emulate the status line in the window title.
ts_status_format
    Type: string
    Default: "Mutt with %?m?%m messages&no messages?%?n? [%n NEW]?"
    Controls the format of the terminal status line (or window title), provided that "$ts_enabled" has been set. This string is identical in formatting to the one used by "$status_format".
Unfortunately it doesn't change the title back when you exit mutt.
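Putting that together with the format string from the question, the .muttrc entries would be:
set ts_enabled = yes
set ts_status_format = "%n new | %M in %f [%v]."
| {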
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240594",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141494/"
]
} |
240,602 | I am trying to handle SIGINT (Ctrl-C) in such a way that if a user accidentally presses Ctrl-C, he is prompted with a message, "Do you wish to quit?(y/n)". If he enters yes, then exit the script. If no, then continue from wherever the interrupt occurred. Basically, I need Ctrl-C to work similarly to SIGTSTP (Ctrl-Z), but in a slightly different way. I have tried various ways to achieve this but I didn't get the expected results. Below are a few scenarios which I tried.
Case 1. Script: play.sh
#!/bin/sh
function stop()
{
while true; do
    read -rep $'\nDo you wish to stop playing?(y/n)' yn
    case $yn in
        [Yy]* ) echo "Thanks for playing !!!"; exit 1;;
        [Nn]* ) break;;
        * ) echo "Please answer (y/n)";;
    esac
done
}
trap 'stop' SIGINT
echo "going to sleep"
for i in {1..100}
do
    echo "$i"
    sleep 3
done
echo "end of sleep"
When I run the above script, I get the expected results. Output:
$ play.sh
going to sleep
1
^C
Do you wish to stop playing?(y/n)y
Thanks for playing !!!
$ play.sh
going to sleep
1
2
^C
Do you wish to stop playing?(y/n)n
3
4
^C
Do you wish to stop playing?(y/n)y
Thanks for playing !!!
$
Case 2. I moved the for loop to a new script loop.sh, thus play.sh becomes the parent process and loop.sh the child process.
Script: play.sh
#!/bin/sh
function stop()
{
while true; do
    read -rep $'\nDo you wish to stop playing?(y/n)' yn
    case $yn in
        [Yy]* ) echo "Thanks for playing !!!"; exit 1;;
        [Nn]* ) break;;
        * ) echo "Please answer (y/n)";;
    esac
done
}
trap 'stop' SIGINT
loop.sh
Script: loop.sh
#!/bin/sh
echo "going to sleep"
for i in {1..100}
do
    echo "$i"
    sleep 3
done
echo "end of sleep"
Output in this case is not as expected. Output:
$ play.sh
going to sleep
1
2
^C
Do you wish to stop playing?(y/n)y
Thanks for playing !!!
$ play.sh
going to sleep
1
2
3
4
^C
Do you wish to stop playing?(y/n)n
$
I understand that when a process receives a SIGINT signal, it propagates the signal to all the child processes, thus my 2nd case is failing. Is there any way that I can avoid SIGINT being propagated to child processes and thus make loop.sh work exactly the way it worked in the 1st case? Note: This is just an example of my actual application. The application I am working on has several child scripts in play.sh and loop.sh. I should make sure that the application, on receiving SIGINT, does not terminate but prompts the user with a message. | Great classic question about managing jobs and signals with good examples! I've developed a stripped down test script to focus on the mechanics of the signal handling. To accomplish this, after starting the children (loop.sh) in the background, call wait, and upon receipt of the INT signal, kill the process group whose PGID equals your PID. For the script in question, play.sh, this can be accomplished by the following:
1. In the stop() function replace exit 1 with
kill -TERM -$$   # note the dash: negative PID, kills the process group
2. Start loop.sh as a background process (multiple background processes can be started here and managed by play.sh):
loop.sh &
3. Add wait at the end of the script to wait for all children:
wait
When your script starts a process, that child becomes a member of a process group with PGID equal to the PID of the parent process, which is $$ in the parent shell.
For example, the script trap.sh started three sleep processes in the background and is now waiting on them; notice the process group ID column (PGID) is the same as the PID of the parent process:
  PID  PGID STAT COMMAND
17121 17121 T    sh trap.sh
17122 17121 T    sleep 600
17123 17121 T    sleep 600
17124 17121 T    sleep 600
In Unix and Linux you can send a signal to every process in that process group by calling kill with the negative value of the PGID. If you give kill a negative number, it will be used as -PGID. Since the script's PID ($$) is the same as its PGID, you can kill your process group in the shell with
kill -TERM -$$   # note the dash before $$
You have to give a signal number or name, otherwise some implementations of kill will tell you "Illegal option" or "invalid signal specification." The simple code below illustrates all of this. It sets a trap signal handler, spawns 3 children, then goes into an endless wait loop, waiting to kill itself by the kill process group command in the signal handler.
$ cat trap.sh
#!/bin/sh
signal_handler() {
    echo
    read -p 'Interrupt: ignore? (y/n) [Y] >' answer
    case $answer in
        [nN])
            kill -TERM -$$   # negative PID, kill process group
            ;;
    esac
}
trap signal_handler INT
for i in 1 2 3
do
    sleep 600 &
done
wait   # don't exit until process group is killed or all children die
Here's a sample run:
$ ps -o pid,pgid,stat,args
  PID  PGID STAT COMMAND
 8073  8073 Ss   /bin/bash
17111 17111 R+   ps -o pid,pgid,stat,args
$
OK, no extra processes running. Start the test script, interrupt it (^C), choose to ignore the interrupt, and then suspend it (^Z):
$ sh trap.sh
^C
Interrupt: ignore? (y/n) [Y] >y
^Z
[1]+  Stopped          sh trap.sh
$
Check the running processes, note the process group numbers (PGID):
$ ps -o pid,pgid,stat,args
  PID  PGID STAT COMMAND
 8073  8073 Ss   /bin/bash
17121 17121 T    sh trap.sh
17122 17121 T    sleep 600
17123 17121 T    sleep 600
17124 17121 T    sleep 600
17143 17143 R+   ps -o pid,pgid,stat,args
$
Bring our test script to the foreground (fg) and interrupt (^C) again, this time choosing not to ignore:
$ fg
sh trap.sh
^C
Interrupt: ignore? (y/n) [Y] >n
Terminated
$
Check running processes, no more sleeping:
$ ps -o pid,pgid,stat,args
  PID  PGID STAT COMMAND
 8073  8073 Ss   /bin/bash
17159 17159 R+   ps -o pid,pgid,stat,args
$
Note about your shell: I had to modify your code to get it to run on my system. You have #!/bin/sh as the first line in your scripts, yet the scripts use extensions (from bash or zsh) which are not available in /bin/sh. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140824/"
]
} |
240,630 | A previous question of mine asked how to pipe downloaded files through tar; now I would like to know how to pipe the output of tar through mv. See, I have this command at the moment:
wget -c https://github.com/JeffHoogland/moksha/archive/0.1.0.tar.gz | tar -xz
and this creates a directory called moksha-0.1.0, but I would like to know how I might rename this output directory as moksha, perhaps via a pipe (|) at the end of this command. Although if you know how to do this without a pipe, but still on the same line of code as wget and tar, I will be happy to accept it too. To be clear, I know that:
wget -c https://github.com/JeffHoogland/moksha/archive/0.1.0.tar.gz | tar -xz -C moksha
will create an output directory moksha, but within this output directory there will be the moksha-0.1.0 directory; rather, I want to rename this moksha-0.1.0 directory as moksha, instead of placing moksha-0.1.0 in a new directory called moksha. | Like this?
[root@b se]# wget -cqO - https://github.com/JeffHoogland/moksha/archive/0.1.0.tar.gz | tar -xz --transform=s/moksha-0.1.0/moksha/
[root@b se]# ls
moksha
[root@b se]# ls moksha
ABOUT-NLS       config.guess          debian                 Makefile.am
aclocal.m4      config.guess.dh-orig  depcomp                Makefile.in
AUTHORS         config.h.in           doc                    missing
autogen.sh      config.rpath          enlightenment.pc.in    netwm.txt
autom4te.cache  config.sub            enlightenment.spec.in  NEWS
BACKPORTS       config.sub.dh-orig    INSTALL                po
BUGS            configure             install-sh             README
ChangeLog       configure.ac          intl                   src
compile         COPYING               ltmain.sh              xdebug.sh
config          data                  m4                     x-ui.sh
From the tar manual page:
--transform=EXPRESSION, --xform=EXPRESSION
    Use sed replace EXPRESSION to transform file names.
So sed is probably required for this to work. Though if you have wget, you probably have sed as well.
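An alternative that avoids the sed-style transform (a sketch; GNU tar's --strip-components drops the leading moksha-0.1.0/ path element, so the target directory must exist first):
mkdir -p moksha && wget -cqO - https://github.com/JeffHoogland/moksha/archive/0.1.0.tar.gz | tar -xz --strip-components=1 -C moksha
| {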
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
240,651 | How do I find the MAX I/O a physical disk can support? My application is doing I/O, and I can find the actual throughput (Blk_wrtn/s) using Linux commands. But how can I find the max limit I can reach? I want to know if it can be loaded further. | Obviously, using Unix tools is the easiest way to do it. You can measure the max operation by creating a test case and using appropriate tools to measure its performance. A good resource can be found here: LINUX - Test READ and WRITE speed of Storage
sudo hdparm -tT /dev/sdX
for example as a read test. And to measure write:
dd if=/dev/random of=<some file on the hd> bs=8k count=10000; sync
# Hit CONTROL-C after 5 seconds to get results
# 65994752 bytes (66 MB) copied, 21.8919 s, 3.0 MB/s
Note: As pointed out in the comments, the dd command also measures the performance of the file system and even /dev/random. It does measure the write performance of an environment that heavily depends on the hard disk's performance, though.
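Because /dev/random is extremely slow, it usually bottlenecks the test itself; a variant that measures the disk rather than the random source (hedged: status=progress needs GNU coreutils 8.24 or newer, and oflag=direct bypasses the page cache):
dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct status=progress; rm testfile
| {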
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15647/"
]
} |
240,673 | Trying to do some operations on all the files in the directory that end in .c (C files). The code is:
#!/bin/sh
clear
for file in *
do
    if [ $file="*.c" ]
    then
        echo $file
    fi
done
exit 0
It doesn't work; it just lists all the files and directories. | The condition on the if is malformed: you're just checking that the string $file=*.c isn't empty. Try instead:
#!/bin/sh
clear
for file in *
do
    if [ "$file" = "*.c" ]
    then
        echo "$file"
    fi
done
exit 0
On the other hand, the wildcard character '*' in this case is not expanded, but is interpreted as a literal string. Try:
#!/bin/sh
clear
for file in *.c
do
    echo "$file"
done
exit 0
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141534/"
]
} |
240,678 | I know this is quite a simple question, but I'm new to using a fairly big server with two 4TB HDDs and I want to learn a lot about it. I've got a php script that runs as a cron. The script calls a shell command (aria2c) that downloads a file from an FTP server and puts the file into a specific folder on my server. This has worked fine for months. The files on the server have to be downloadable for users. But now we mounted a second HDD to get more space for files. As far as I can see, the mounting was successful. Two days ago the quota of HDD1 was reached and the script threw errors. Up to this time, I believed that the upload would automatically switch to HDD2, but this was wrong! Here my question arises: how can I upload files to HDD2 and make them downloadable for users? My upload command for aria2c is:
aria2c --ftp-user $username --ftp-passwd $password -c -p -t 100 -s 2 --max-connection-per-server 1 --max-download-limit $speedLimit --allow-overwrite=true --file-allocation none --summary-interval 0 ftp://$server/$remfile -o files/$remfile | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141538/"
]
} |
240,723 | Here's a simple script that sets up a temp dir in the current dir and a trap to delete it on exit.
#filename: script
set -x
trap 'rm -rf "$d"' exit
d=`TMPDIR=$PWD mktemp -d`
"$@"
If I do ksh script sleep 100 or bash script sleep 100 and interrupt it with C-C, the trap gets executed and the directory is deleted. It doesn't work with dash. Why? Is this a bug or is this intended behavior? | zsh, pdksh (though not recent versions of mksh derived from that), yash, and the Bourne shell behave like dash. Only bash, ksh88, ksh93 and mksh behave otherwise. The POSIX spec is not clear on what should be the correct behaviour, but there's nothing in there that says that the shell is allowed to override the default handler for the SIGINT (or other) signal. It says the EXIT trap action should be evaluated when exit is invoked, but AFAICT, it doesn't even say for instance if it should be evaluated when the shell exits as the result of set -e or set -u or error conditions like syntax errors or failing special builtins. To be able to run the EXIT trap upon reception of a signal, the shell would need to install a handler on that signal. That's what ksh, mksh and bash do, but the list of signals they handle is different between all three implementations. The only signals common between all 3 seem to be INT, QUIT, TERM, ALRM and HUP. If you want the EXIT trap to be run upon some signals, the portable way would be to handle those signals yourself:
trap 'exit 1' INT HUP QUIT TERM ALRM USR1
trap 'cleanup' EXIT
That approach however doesn't work with zsh, which doesn't run the EXIT trap if exit is called from a trap handler. It also fails to report your death-by-signal to your parent. So instead, you could do:
for sig in INT QUIT HUP TERM ALRM USR1; do
  trap "
    cleanup
    trap - $sig EXIT
    kill -s $sig "'"$$"' "$sig"
done
trap cleanup EXIT
Now, beware though that if more signals arrive while you're executing cleanup, cleanup may be run again. You may want to make sure your cleanup works correctly if invoked several times and/or ignore signals during its execution.
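You can see the difference in one line per shell (a quick demonstration; press Ctrl-C while the sleep runs):
bash -c 'trap "echo EXIT trap ran" EXIT; sleep 100'   # ^C prints "EXIT trap ran"
dash -c 'trap "echo EXIT trap ran" EXIT; sleep 100'   # ^C prints nothing
| {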
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
240,728 | I am trying to back up a failing hard drive and rsync would be ideal due to the features it has, such as a progress indicator and the ability to stop and resume. The one issue I am having is that while the file date-modified attribute is preserved, the directories get a new date attribute. This causes issues as I sort many files by date so I know what was added more recently. Is it possible to preserve the directory date-modified attribute with rsync?
sudo rsync -avhX --progress --info=progress2 /mnt/failing/ /mnt/new/
The -t option (included with -a) preserves the file attributes but does not mention directories. Is there any special requirement for ownership/permissions of the /mnt/new partition to preserve certain attributes successfully? | The last modification time of directories is preserved by -a, but you can only see this when rsync finishes. It does not try to set the time on directories that are constantly being updated with new files. You can test this yourself. Create a directory and set the date on it to yesterday, then copy it with rsync:
$ mkdir d1 d2
$ ls -ld d1
drwxr-xr-x 2 40 Nov  4 14:41 d1
$ touch -d 'yesterday' d1
$ ls -ld d1
drwxr-xr-x 2 40 Nov  3 14:41 d1
$ rsync -i -avR d1 d2
$ ls -ld d1 d2/d1/
drwxr-xr-x 2 40 Nov  3 14:41 d1
drwxr-xr-x 2 40 Nov  3 14:41 d2/d1/
The d2/d1 dir has yesterday's date. We can override it and see if rsync fixes things:
$ touch d2/d1
$ ls -ld d1 d2/d1/
drwxr-xr-x 2 40 Nov  3 14:41 d1
drwxr-xr-x 2 40 Nov  4 14:42 d2/d1/
$ rsync -i -avR d1 d2
.d..t...... d1/
$ ls -ld d1 d2/d1/
drwxr-xr-x 2 40 Nov  3 14:41 d1
drwxr-xr-x 2 40 Nov  3 14:41 d2/d1/
rsync -i shows the timestamp is wrong on d2/d1 and fixes it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67187/"
]
} |
240,732 | I'm running a process and I'm counting the number of threads with
ps huH p <PID_OF_U_PROCESS> | wc -l
I can run this with watch like this (the pipeline quoted so it runs inside watch):
watch -n 1 'ps huH p <PID_OF_U_PROCESS> | wc -l'
This will output the number of threads the process is running, but usually that number doesn't change. How can I only print the new number to screen if it changed from the last time the command was run? For example:
64
65
64
(a few minutes go by)
65
Etc. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141585/"
]
} |
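For the print-only-on-change behaviour asked about above, one hedged approach is a small polling loop that remembers the previous value; the interval and the ps invocation are copied from the question, the rest is an assumption:

#!/bin/bash
# Hypothetical monitor: print the thread count only when it changes.
pid=$1
last=
while kill -0 "$pid" 2>/dev/null; do    # stop once the process is gone
    n=$(ps huH p "$pid" | wc -l)
    if [ "$n" != "$last" ]; then
        printf '%s\n' "$n"
        last=$n
    fi
    sleep 1
done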
240,799 | I have a text in a text file, where I want everything that is between strings like \{{[} and {]}\} to be deleted - including these strings themselves. These two strings can lie on different lines as well as on the same line. In either case, on the line on which the beginning \{{[} lies, I don't want the text before, i.e. left, of it to be deleted - and the same holds for the text after {]}\} . Here is an example: Given a text file with the content

Bla Bla bla bla \{{[} more bla bla
even more bla bla bla bla. A lot of stuff might be here.
Bla bla {]}\} finally done.
Nonetheless, the \{{[} show {]}\} goes on.

the script should return another text file with the content

Bla Bla bla bla  finally done.
Nonetheless, the  goes on.

Unfortunately this simple-looking task turned out to be too difficult for me to do with sed . I'm happy with any solution in any language, as long as I don't have to install anything on my standard linux machine (C and some java is already installed). | With perl :

perl -0777 -pe 's/\Q\{{[}\E.*?\Q{]}\}\E//gs'

Note that the whole input is loaded in memory before being processed. \Qsomething\E is for something to be treated as a literal string and not a regular expression. To modify a regular file in-place, add the -i option:

perl -0777 -i -pe 's/\Q\{{[}\E.*?\Q{]}\}\E//gs' file.txt

With GNU awk or mawk :

awk -v 'RS=\\\\\\{\\{\\[}|\\{\\]}\\\\}' -v ORS= NR%2

There, we're defining the record separator as either of those beginning or end markers (only gawk and mawk support RS being a regexp here). But we need to escape the characters that are regexp operators (backslash, { , [ ) and also the backslash once more because it's special in arguments to -v (used for things like \n , \b ...), hence the numerous backslashes. Then all we need to do is print every other record. NR%2 would be 1 (true) for every odd record. For both solutions, we're assuming the markers are matched and those sections not nested. To modify the file in-place, with recent versions of GNU awk , add the -i inplace option. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141629/"
]
} |
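A hedged, self-contained demo of the perl one-liner from the answer above, run on the question's sample text (the file name is a placeholder):

cat > sample.txt <<'EOF'
Bla Bla bla bla \{{[} more bla bla
even more bla bla bla bla. A lot of stuff might be here.
Bla bla {]}\} finally done.
Nonetheless, the \{{[} show {]}\} goes on.
EOF
perl -0777 -pe 's/\Q\{{[}\E.*?\Q{]}\}\E//gs' sample.txt
# Bla Bla bla bla  finally done.
# Nonetheless, the  goes on.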
240,814 | I'm working with two ubuntu instances on AWS (which I use a pem key to access them). I set up rsync for both instances, and it works if I use the default user which is ubuntu@ipaddress. However if I try to use rsync with another user (I'm typing sudo su - jenkins for example or even typing sudo before the rsync command), then I get the following error.

Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.0]

Steps that I've taken: I've tried creating an ssh key (using ssh-keygen) while logged in as jenkins and added that to the authorized_keys file in both /home/ubuntu/.ssh/authorized_keys (where i'm running the rsync from) and even $JENKINS_HOME/.ssh/authorized_keys (where I tried running rsync from there too). I even tried using the pem key to do the same thing and that didn't work either. Here's what I'm trying to run

rsync -avuh --delete -e ssh jenkins@ipaddress:/var/lib/jenkins/* /var/lib/jenkins

And here's with the key file

rsync -avuh --delete -e 'ssh -i path/to/key.pem' [email protected]:/var/lib/jenkins/* /var/lib/jenkins

P.S.: The only reason why I don't want to run it with the ubuntu user is because I get failed: Permission denied (13) on a lot of things (since the files are owned by jenkins). End goal: I'm trying to keep the backup jenkins instance backed up constantly with the primary instance by doing a cronjob:

*/30 * * * * /usr/bin/rsync -avuh --delete -e ssh root@jenkinsprimary:/var/lib/jenkins/* /var/lib/jenkins

| You have to differentiate 2 things: who establishes the SSH connection, and which remote user owns the files that you want to copy. Overview:

(srcmachine)  (rsync)  (destmachine)
srcuser  --- SSH --->  destuser
                          |
                          | sudo su jenkins
                          v
                        jenkins

Let's say that you want to rsync: From: Machine: srcmachine, User: srcuser, Directory: /var/lib/jenkins. To: Machine: destmachine, User: destuser to establish the SSH connection, Directory: /tmp, Final files owner: jenkins . Solution:

rsync --rsync-path 'sudo -u jenkins rsync' -avP --delete /var/lib/jenkins destuser@destmachine:/tmp

Explanations: --rsync-path=PROGRAM specifies the rsync to run on the remote machine. The trick is to tell it to run rsync on the remote machine with another user ( jenkins ) than the one who establishes the SSH connection ( destuser ). Requirements: SSH access:

(srcmachine)  (rsync)  (destmachine)
srcuser  --- SSH --->  destuser
[~/.ssh/id_rsa]        [~/.ssh/authorized_keys] <-- "id_rsa.pub" inside
[~/.ssh/id_rsa.pub]

Don't forget to restrict permissions on ~/.ssh :

chmod 700 ~/.ssh

sudoer for the destuser: The destuser must have the privilege to do sudo -u jenkins rsync . In general, we set the destuser as a member of the sudoers . To do this, as root @ destmachine:

cat > /etc/sudoers.d/destuser << EOF
destuser ALL=(ALL) NOPASSWD:ALL
EOF

To test it before rsync , you can log onto destuser @ destmachine and run this:

sudo su jenkins
echo $USER

If it returns:

jenkins

it means that you are logged in as the jenkins user, and it means that your rsync command will work as well, because the privilege escalation to jenkins works. Note about a bad solution: establish the SSH connection with the destination user jenkins. Why don't we just do this?

(srcmachine)  (rsync)  (destmachine)
srcuser  --- SSH --->  jenkins
[~/.ssh/id_rsa]        [~/.ssh/authorized_keys] <-- "id_rsa.pub" inside
[~/.ssh/id_rsa.pub]

Because jenkins is a "service" account, which means that it runs a service which exposes a port ( 80 or so) for external HTTP access, and it means that it is POSSIBLE that there is a security breach through the Jenkins service over HTTP to gain access. That's why we have the www-data user and similar accounts to run the different services. In case they get hacked from the ports they expose, they can't do much: everything is read-only for them, except writing in /var/log/THE_SERVICE . So allowing SSH access for the jenkins user exposes an attack surface (and so does SSH access as root !!). Moreover, if you want to rsync as another user ( root , www-data , etc.), you would have to copy your SSH public key to those accounts (troublesome). Good solution: You should grant SSH access to as few user accounts ( destuser ) as possible, accounts which CAN escalate to the "service" account you want ( jenkins , root , etc.). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138189/"
]
} |
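Before wiring the rsync above into cron, a hedged end-to-end check that the escalation path works non-interactively (host and user names are the placeholders used above):

# Should print "jenkins" without prompting for a password;
# -n makes sudo fail instead of hanging if a password would be required.
ssh destuser@destmachine 'sudo -n -u jenkins id -un'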
240,822 | I'm trying to set up a cron job under my user. I run crontab -e, make my edits, and try to save and exit. I receive the following error message /var/spool/cron/: mkstemp: Permission denied . Relevant output from ls -al /var/spool/cron/crontabs

drwxr-xr-x 2 root crontab 4096 Nov  4 10:09 .
drwxr-xr-x 5 root root    4096 Nov 19  2014 ..
-rw-rw-rw- 1 greg crontab   91 Nov  4 11:04 greg
-rw------- 1 root crontab 1231 Oct 29 16:18 root

I can directly edit the greg file and save that but I still can't seem to get the job to run, even if I restart cron after updating it. What do I need to do to fix this problem? The output from ls -lha $(which crontab) is:

-rwxr-sr-x 1 root crontab 36K Feb 8 2013 /usr/bin/crontab

The output from groups greg is:

greg : greg adm sudo crontab lpadmin sambashare

| This will fix your immediate problem:

chmod u=rwx,g=wx,o=t /var/spool/cron/crontabs

But, if you can download packages, a more robust way to fix this is to use apt-get to reinstall the appropriate package:

root@ubuntu# dpkg-query -S /var/spool/cron/crontabs
cron: /var/spool/cron/crontabs
root@ubuntu# apt-get install --reinstall cron

after first making sure any local changes you've made to /etc/init/cron.conf , /etc/default/cron , etc. are copied somewhere and then reapplied. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141645/"
]
} |
240,823 | How could I extract just IP numbers from a file formatted like what follows?

test-Zookeeper2-Z1-solr1006 10.15.5.226
10.15.6.103 test-Zookeeper2-Z2-solr1006
10.15.5.92 test-Zookeeper3-Z1-solr1006
10.15.6.217 test-Zookeeper1-Z2-solr1006
10.15.6.83 test-Zookeeper3-Z2-solr1006
test-Zookeeper-Z1-solr1006 10.15.7.106

| Perl has a tried-and-true module for common regular expressions, including IPv4 addresses:

$ perl -MRegexp::Common=net -lane 'print for grep {/^$RE{net}{IPv4}$/} @F' file
10.15.5.226
10.15.6.103
10.15.5.92
10.15.6.217
10.15.6.83
10.15.7.106

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/240823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141649/"
]
} |
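A hedged alternative without perl: a coarse grep -E pattern. Unlike Regexp::Common it would also match invalid octets such as 999.1.1.1, so it is only a sketch:

grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' file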
240,833 | There are a lot of questions about how to undo a finished rm command. In my case, my rm started asking: "Are you sure you want to delete this file?" And I need to confirm for each one of them. Instead of manually confirming "y/n": Can I set "yes" to the remainder while being asked for y/n for the n'th file? (equivalent to the "Yes to All" tickbox on Windows) (I know the yes | rm trick before starting) Can I abort the operation while being asked for y/n for the n'th file? (except for shutting down the machine :)) | To accomplish request 1 you will need to use a more sophisticated program than yes to send y N number of times and then pass keyboard input through beyond that. You can't do it with rm except to always ask ( rm -i ) or to never ask ( rm -f ). You can always abort rm by pressing Control-C to interrupt it (sends SIGINT), pressing Control-Z to stop it (sends SIGTSTP) and then killing it, sending a SIGTERM ( kill ), or sending it a SIGKILL ( kill -9 ). This won't undo any file operations rm has already performed, but they will prevent it from performing any further file operations. If the rm process is currently prompting for user input it is not actively unlinking any files so killing it will merely prevent it from continuing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141653/"
]
} |
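As a hedged sketch of the "send y N times, then pass the keyboard through" idea above (file names are placeholders):

# printf emits "y" three times (%.0s consumes one argument per repetition);
# cat then passes the terminal through, since the pipe only replaces the
# group's stdout, not its stdin.
{ printf 'y\n%.0s' 1 2 3; cat; } | rm -i file1 file2 file3 file4 file5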
240,845 | I'm running CentOS release 6.7 (Final). I'm trying to install

sudo yum install pgadmin3

I keep getting

Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: linux.cc.lehigh.edu
 * extras: mirrors.lga7.us.voxel.net
 * updates: mirror.steadfast.net
base    | 3.7 kB 00:00
extras  | 3.4 kB 00:00
updates | 3.4 kB 00:00
No package pgadmin3 available.
Error: Nothing to do

I've also tried sudo yum update and run sudo yum install pgadmin3 again - still got the same result ! :( Any hints/suggestions will be much appreciated ! | To accomplish request 1 you will need to use a more sophisticated program than yes to send y N number of times and then pass keyboard input through beyond that. You can't do it with rm except to always ask ( rm -i ) or to never ask ( rm -f ). You can always abort rm by pressing Control-C to interrupt it (sends SIGINT), pressing Control-Z to stop it (sends SIGTSTP) and then killing it, sending a SIGTERM ( kill ), or sending it a SIGKILL ( kill -9 ). This won't undo any file operations rm has already performed, but they will prevent it from performing any further file operations. If the rm process is currently prompting for user input it is not actively unlinking any files so killing it will merely prevent it from continuing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/240845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118753/"
]
} |
241,006 | I want to change files with KomodoEdit that need sudo permissions. I can't start KomodoEdit with sudo , though (for whatever reason). Can I somehow grant Komodo permission to edit those files (in particular I am talking about apache2 files and /etc/hosts )? | Use sudoedit <file> . It creates a local copy of the file, edits it with user rights and copies it back to the original location. The advantage is that the editor is running as regular user. To specify a different editor than the default one you can set EDITOR temporarily: EDITOR=/usr/bin/someeditor sudoedit /etc/hosts This requires the sudo package to be installed and the user to be added to the sudo group. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141765/"
]
} |
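Hedged usage examples for the files mentioned in the question above (editor choice is an assumption; a GUI editor must block until the file is closed for sudoedit to pick up the changes):

sudoedit /etc/hosts
EDITOR=nano sudoedit /etc/apache2/apache2.conf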
241,022 | I found myself unable to use the VMWare Workstation 12 trial on Debian Stretch as well as on Ubuntu 15.10 . It installs just fine, but when I attempt to start it, it simply does nothing, no error or anything else. I came across this thread: Ubuntu 15.10 Host - Can't Start VMWare WorkStation Player 12 on VMWare Community so I tried to export LD_LIBRARY_PATH=/usr/lib/vmware/lib/libglibmm-2.4.so.1/:$LD_LIBRARY_PATH , which allowed me to start the GUI, but it states that a required gcc version is not installed (which is) and won't let me proceed. Has anyone else experienced this?How do I get VMWare Workstation 12 running on my system? | Use sudoedit <file> . It creates a local copy of the file, edits it with user rights and copies it back to the original location. The advantage is that the editor is running as regular user. To specify a different editor than the default one you can set EDITOR temporarily: EDITOR=/usr/bin/someeditor sudoedit /etc/hosts This requires the sudo package to be installed and the user to be added to the sudo group. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137596/"
]
} |
241,023 | I have a makefile, where I define some lib and its path:

MY_LIB= dir/lib

Now I know that I can use @D to get the directory part from a target:

#This will go to dir
$(MY_LIB):
	cd $(@D)

But I want to use the directory part of MY_LIB in another target:

$(TARGET):
	doSomething $(INSERT_MY_LIB_DIR_HERE)

How can I do that? | Use sudoedit <file> . It creates a local copy of the file, edits it with user rights and copies it back to the original location. The advantage is that the editor is running as regular user. To specify a different editor than the default one you can set EDITOR temporarily: EDITOR=/usr/bin/someeditor sudoedit /etc/hosts This requires the sudo package to be installed and the user to be added to the sudo group. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
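A hedged sketch for the makefile question above, using GNU make's built-in $(dir ...) text function (variable names are assumptions):

MY_LIB     = dir/lib
MY_LIB_DIR = $(dir $(MY_LIB))   # expands to "dir/"

$(TARGET):
	doSomething $(MY_LIB_DIR)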
241,067 | I have 2 text files:

$ cat /tmp/test1
"AAP" bar
"AEM" bar
"AA" bar
"AEO" bar
"A" bar
$ cat /tmp/test2
"AEM" foo
"AAP" foo
"A" foo
"AEO" foo
"AA" foo

I want to sort them

$ sort /tmp/test1
"AA" bar
"AAP" bar
"A" bar    <-- "A" is in position 3
"AEM" bar
"AEO" bar
$ sort /tmp/test2
"AA" foo
"AAP" foo
"AEM" foo
"AEO" foo
"A" foo    <-- "A" is in position 5

Why does "A" end up in position 3 in /tmp/test1 and in position 5 in /tmp/test2 ? My expectation is that each character per column will be compared. As such, when comparing column 3, 'A' , 'E' and '"' will be compared against each other, and this would be the ultimate determinant in the final sort order of this test data. Clearly my expectation is wrong, so how does sort work, if not in the way I expected? Is there a command line option to sort or some other utility I can use to get the sort order I desire? | You need to have the collation locale changed. The behavior you describe is typical of en_US and many other locales. Fix with:

LC_ALL=C sort /tmp/test1

More in this answer: https://stackoverflow.com/questions/6531674/linux-sort-unexpected-output | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28680/"
]
} |
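A hedged demonstration of the fix on the question's first file: in the C locale, comparison is by byte value, so the quote character (0x22) sorts before letters and the ordering becomes stable and predictable:

$ LC_ALL=C sort /tmp/test1
"A" bar
"AA" bar
"AAP" bar
"AEM" bar
"AEO" bar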
241,077 | On my new Arch installation, perl doesn't seem to play nice with Unicode. For example, given this input file:

ελα ρε
王小红

This command should give me the last two characters of each line:

$ perl -CIO -pe 's/.*(..)$/$1/' file

However, as you can see above, I get gibberish. The correct output is:

ρε
小红

I know that my terminal ( gnome-terminator ) supports UTF-8 since these both work as expected:

$ cat file
ελα ρε
王小红
$ perl -pe '' file
ελα ρε
王小红

Unfortunately, without -CIO , perl doesn't deal with the files correctly either:

$ perl -pe 's/.*(..)$/$1/' file
ε��

It also shouldn't be a locale issue:

$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

I'm guessing I need to install some Perl packages, but I don't know which ones. Some relevant information:

$ perl --version | grep subversion
This is perl 5, version 22, subversion 0 (v5.22.0) built for x86_64-linux-thread-multi
$ pacman -Qs unicode
local/fribidi 0.19.7-1  A Free Implementation of the Unicode Bidirectional Algorithm
local/icu 55.1-1  International Components for Unicode library
local/libunistring 0.9.6-1  Library for manipulating Unicode strings and C strings
local/perl 5.22.0-1 (base)  A highly capable, feature-rich programming language
local/perl-unicode-stringprep 1.105-1  Preparation of Internationalized Strings (RFC 3454)
local/perl-unicode-utf8simple 1.06-5  Conversions to/from UTF8 from/to characters
local/ttf-arphic-uming 0.2.20080216.1-5  CJK Unicode font Ming style

How can I get my perl installation to play nice with Unicode? | The issue you are describing is standard behaviour on the systems I tested on. I and O affect stdin and stdout, so this should work:

→ cat data | perl -CIO -pe 's/.*(..)$/$1/'
ρε
小红

Whereas this might not:

→ perl -CIO -pe 's/.*(..)$/$1/' data

There are two more options to perl -C that produce your desired behaviour:

i   8   UTF-8 is the default PerlIO layer for input streams
o  16   UTF-8 is the default PerlIO layer for output streams

Which is basically saying to perl, use a file open form:

open(F, "<:utf8", "data");

or you can use perl -CSD which is shorthand for perl -CIOEio :

S   7   I + O + E
D  24   i + o

Then you get

→ perl -CSD -pe 's/.*(..)$/$1/' data
ρε
小红

If the PERLIO environment variable is set and includes :utf8 this behaviour would also be enabled. It looks like the default behaviour for perl isn't modifiable at configure/compile time either (cuonglm comment below). Arch certainly doesn't set anything. I doubt debian perl packages would modify default behaviour. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
241,173 | There are special files in Linux that are not really files. The most notable and clear examples of these are in the dev folder, "files" like: /dev/null - Ignores anything you write to the file /dev/random - Outputs random data instead of the contents of a file /dev/tcp - Sends any data you write to this file over the network First of all, what is the name of these types of "files" that are really some sort of script or binary in disguise? Second, how are they created? Are these files built into the system at a kernel level, or is there a way to create a "magic file" yourself (how about a /dev/rickroll )? | /dev/zero is an example of a "special file" — particularly, a "device node". Normally these get created by the distro installation process, but you can totally create them yourself if you want to. If you ask ls about /dev/zero :

# ls -l /dev/zero
crw-rw-rw- 1 root root 1, 5 Nov 5 09:34 /dev/zero

The "c" at the start tells you that this is a "character device"; the other type is "block device" (printed by ls as "b"). Very roughly, random-access devices like harddisks tend to be block devices, while sequential things like tape drives or your sound card tend to be character devices. The "1, 5" part is the "major device number" and the "minor device number". With this information, we can use the mknod command to make our very own device node:

# mknod foobar c 1 5

This creates a new file named foobar , in the current folder, which does exactly the same thing as /dev/zero . (You can of course set different permissions on it if you want.) All this "file" really contains is the three items above — device type, major number, minor number. You can use ls to look up the codes for other devices and recreate those too. When you get bored, just use rm to remove the device nodes you just created. Basically the major number tells the Linux kernel which device driver to talk to, and the minor number tells the device driver which device you're talking about. (E.g., you probably have one SATA controller, but maybe multiple harddisks plugged into it.) If you want to invent new devices that do something new... well, you'll need to edit the source code for the Linux kernel and compile your own custom kernel. So let's not do that! :-) But you can add device files that duplicate the ones you've already got just fine. An automated system like udev is basically just watching for device events and calling mknod / rm for you automatically. Nothing more magic than that. There are still other kinds of special files: Linux considers a directory to be a special kind of file. (Usually you can't directly open a directory, but if you could, you'd find it's a normal file that contains data in a special format, and tells the kernel where to find all the files in that directory.) A symlink is a special file. (But a hard link isn't.) You can create symlinks using the ln -s command. (Look up the manpage for it.) There's also a thing called a "named pipe" or "FIFO" (first-in, first-out queue). You can create one with mkfifo . A FIFO is a magical file that can be opened by two programs at once — one reading, one writing. When this happens, it works like a normal shell pipe. But you can start each program separately... A file that isn't "special" in any way is called a "regular file". You will occasionally see mention of this in Unix documentation. That's what it means; a file that isn't a device node or a symlink or whatever. Just a normal, every day file with no magical properties. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/241173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
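A hedged, copy-pasteable demo of the device-node and FIFO points above (mknod needs root; run in a scratch directory):

# Clone /dev/zero and check it really produces zero bytes.
mknod foobar c 1 5
head -c 4 foobar | od -An -tx1    # expect: 00 00 00 00
rm foobar

# A named pipe: writer and reader are started separately.
mkfifo mypipe
echo hello > mypipe &
cat mypipe                        # prints: hello
rm mypipe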
241,178 | I have a bash script which uses set -o errexit so that on error the entire script exits at the point of failure. The script runs a curl command which sometimes fails to retrieve the intended file - however when this occurs the script doesn't error exit. I have added a for loop to pause for a few seconds then retry the curl command, and use false at the bottom of the for loop to define a default non-zero exit status - if the curl command succeeds - the loop breaks and the exit status of the last command should be zero.

#! /bin/bash
set -o errexit

# ...

for (( i=1; i<5; i++ ))
do
  echo "attempt number: "$i
  curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
  if [ -f ~/.vim/autoload/pathogen.vim ]
  then
    echo "file has been retrieved by curl, so breaking now..."
    break;
  fi
  echo "curl'ed file doesn't yet exist, so now will wait 5 seconds and retry"
  sleep 5
  # exit with non-zero status so main script will errexit
  false
done

# rest of script .....

The problem is when the curl command fails, the loop retries the command - if all attempts are unsuccessful the for loop finishes and the main script resumes - instead of triggering the errexit . How can I get the entire script to exit if this curl statement fails? | Replace:

done

with:

done || exit 1

This will cause the code to exit if the for loop exits with a non-zero exit code. As a point of trivia, the 1 in exit 1 is not needed. A plain exit command would exit with the exit status of the last executed command, which would be false (code=1) if the download fails. If the download succeeds, the exit code of the loop is the exit code of the echo command. echo normally exits with code=0, signalling success. In that case, the || does not trigger and the exit command is not executed. Lastly, note that set -o errexit can be full of surprises. For a discussion of its pros and cons, see Greg's FAQ #105 . Documentation From man bash :

for (( expr1 ; expr2 ; expr3 )) ; do list ; done

First, the arithmetic expression expr1 is evaluated according to the rules described below under ARITHMETIC EVALUATION. The arithmetic expression expr2 is then evaluated repeatedly until it evaluates to zero. Each time expr2 evaluates to a non-zero value, list is executed and the arithmetic expression expr3 is evaluated. If any expression is omitted, it behaves as if it evaluates to 1. The return value is the exit status of the last command in list that is executed, or false if any of the expressions is invalid. [Emphasis added] | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/241178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
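A hedged variant of the same retry loop that avoids relying on the loop's final exit status altogether; curl -f is added here so HTTP error responses also count as failures, which is an assumption beyond the original script:

#!/bin/bash
set -o errexit

ok=
for i in 1 2 3 4; do
    echo "attempt number: $i"
    # A failing command inside an if-condition does not trip errexit.
    if curl -fLSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim; then
        ok=1
        break
    fi
    sleep 5
done
[ -n "$ok" ]    # errexit aborts the script here if every attempt failed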
241,186 | xautolock is clearly running :

$ ps wafux | grep [x]autolock
user 21410 0.0 0.0 20124 2628 ? S Nov05 0:04 xautolock -time 10 -notify 30 -notifier notify-send --urgency low --expire-time=10000 -- 'Locking screen in 30 seconds' -locker slock

However, when I try to lock it :

$ xautolock -locknow
Could not locate a running xautolock.

If I spin up another xautolock it works:

$ xautolock -time 10 -notify 30 -notifier "notify-send --urgency low --expire-time=10000 -- 'Locking screen in 30 seconds'" -locker slock&
[2] 18828
$ ps wafux | grep [x]autolock
user 21410 0.0 0.0 20124 2628 ? S Nov05 0:04 xautolock -time 10 -notify 30 -notifier notify-send --urgency low --expire-time=10000 -- 'Locking screen in 30 seconds' -locker slock
user 18828 0.0 0.0 20124 2708 pts/1 S 08:30 0:00 \_ xautolock -time 10 -notify 30 -notifier notify-send --urgency low --expire-time=10000 -- 'Locking screen in 30 seconds' -locker slock
$ xautolock -locknow    # Runs fine and locks the desktop

What gives? By now I've seen this on both my desktop and laptop. Please note that at least the first time after boot locking works fine. It's only after some unknown time or event that it starts failing. I have not been able to reproduce this reliably. That is, I've tried the following approaches on my laptop and in both cases the screensaver shortcut/command actually locks the desktop afterwards:

1. Close the lid, wait for the computer to hibernate, open the lid, press the power button, provide the login password followed by Enter, and
2. Lock the desktop, then the same steps as above

Tracing the code: The line which prints the error message :

error1 ("Could not locate a running %s.\n", progName);

That happens if messageToSend is truthy and type != XA_INTEGER . It looks like type is set in the following statement:

(void) XGetWindowProperty (d, root, semaphore, 0L, 2L, False, AnyPropertyType, &type, &format, &nofItems, &after, (unsigned char**) &contents);

Does this mean that whether the running xautolock is detected can depend on the window that is focused? I'm also wondering if this call could be related to this known bug : The -disable, -enable, -toggle, -exit, -locknow, -unlocknow, and -restart options depend on access to the X server to do their work. This implies that they will be suspended in case some other application has grabbed the server all for itself. Is it possible that xautolock conflicts with xss-lock , both of which are using slock ? In addition to the xautolock line above I also have this line in .xprofile :

xss-lock slock &

Since both xautolock and xss-lock can call slock , I'm suspecting that the problem goes something like this: xautolock runs slock after 10 minutes of inactivity. xss-lock also tries to run slock after 10 minutes :

$ xset q | grep --after-context=2 --line-regexp --fixed-strings 'Screen Saver:'
Screen Saver:
  prefer blanking:  yes    allow exposures:  yes
  timeout:  600    cycle:  600

Only one slock client is actually spawned. xss-lock kills the wrong slock , which causes xautolock to crash or give up. Since xss-lock can detect laptop sleep I'd like to use it instead of xautolock , but I can't seem to make xss-lock work with notify-send . | For me, the xautolock process was still running in the background, but it wasn't listening to any xautolock -locknow commands. As mentioned by @jrm, an application must be suppressing the "screensaver" . For both of us, this was due to mpv (video player) disabling the screensaver.
For mpv, the fix is to add the following to ~/.config/mpv/config or ~/.mpv/config :

stop-screensaver=no

If you do not use mpv, it might be another application disabling the screensaver. Try some commonly used ones out to see which one it is. If you want to prevent automatic screen locking during video playback , one common way is to use xautolock's "corners" feature:

xautolock -corners 000- -cornersize 30

With the above command, if you put your mouse cursor in the bottom right corner of the screen (within a 30px radius), auto-locking will be temporarily disabled. One more thing to try is the -resetsaver option:

xautolock -resetsaver

Or the -detectsleep option:

xautolock -detectsleep | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
241,215 | I've recently been creating new users and assigning them to certain groups. I was wondering if there is a command that shows all the users assigned to a certain group?I have tried using the 'groups' command however whenever I use this it says 'groups: not found' | You can use grep: grep '^group_name_here:' /etc/group This only lists supplementary group memberships, not the user who have this group as their primary group. And it only finds local groups, not groups from a network service such as LDAP. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/241215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133708/"
]
} |
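Two hedged companions to the grep above, using the sambashare group from the earlier record as an example name: getent also consults network sources such as LDAP, and a GID lookup catches users whose primary group it is:

getent group sambashare                      # supplementary members, any NSS source

# Users whose *primary* group is sambashare (not listed in /etc/group):
gid=$(getent group sambashare | cut -d: -f3)
awk -F: -v g="$gid" '$4 == g { print $1 }' /etc/passwd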
241,321 | $ sh backup-to-s3.sh

backup-to-s3.sh: 11: [: bkup_20151106_150532.zip: unexpected operator
backup-to-s3.sh: 11: [: bkup_20151106_150532.zip: unexpected operator
backup-to-s3.sh: 11: [: bkup_20151106_150532.zip: unexpected operator
backup-to-s3.sh: 11: [: bkup_20151106_150532.zip: unexpected operator
backup-to-s3.sh: 11: [: bkup_20151106_150532.zip: unexpected operator
backup-to-s3.sh: 11: [: bkup_20151106_150532.zip: unexpected operator

ubuntu@accretive-staging-32gb-ephemeral:~$ cat backup-to-s3.sh

#Script to move /home/ubuntu/backup folder to S3://auto-backup
#Author Ashish Karpe
cd /mnt/backup
filename="bkup_$(date +%Y%m%d_)"
/bin/ls -alF | awk '{ print $9 }' > /tmp/file
for i in $(cat /tmp/file); do
#	echo $i;
#	read a;
#	echo $filename;
	if [ $filename* = $i ]
	then
		echo "Copying " $i "to S3://auto-backup";
		s3cmd put $i s3://auto-backup
	fi
done

| don't use for to iterate over the lines of a file, use while IFS= read -r line; do ...; done < filename
you don't need to pipe ls output to a file at all, especially using -F
use bash [[ x == y ]] for pattern comparisons, and the pattern is on the right-hand side:

#!/bin/bash
cd /mnt/backup
prefix="bkup_$(date +%Y%m%d_)"
for file in * .*; do
    [[ -f $file ]] || continue   # skip things like directories and soft links
    if [[ $file == $prefix* ]]; then
        echo "Copying " $file "to S3://auto-backup"
        s3cmd put $file s3://auto-backup
    fi
done

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116075/"
]
} |
241,328 | I am trying to use autoconf to create a configure script. However, some of the headers I want to check for require additional compiler flags (e.g. c++11 ). I can get partly there with the answer here where the relevant lines look like this in the configure.ac file.

AX_CXX_COMPILE_STDCXX_11(,[mandatory])
AC_CHECK_HEADER("CL/cl2.hpp")

but the std=gnu++11 flag isn't passed to the preprocessing step of AC_CHECK_HEADERS where I end up with the strange result where it is usable but not present:

checking CL/cl2.hpp usability... yes
checking CL/cl2.hpp presence... no

Looking in the config.log shows the following lines:

configure:3423: checking CL/cl2.hpp presence
configure:3423: g++ -E conftest.cpp
In file included from conftest.cpp:19:0
/usr/include/CL/cl2.hpp:442:2: error
#error Visual studio 2013 or another C++11-supported compiler required

where I can clearly see that the C++ flag is not being used. How can I have compiler flags be used in these preprocessor steps? EDIT I can manually get around this by setting CXXCPP manually when running configure but I'd like it to run without the end user needing to know this.

./configure CXXCPP="g++ -E -std=gnu++11"

| don't use for to iterate over the lines of a file, use while IFS= read -r line; do ...; done < filename
you don't need to pipe ls output to a file at all, especially using -F
use bash [[ x == y ]] for pattern comparisons, and the pattern is on the right-hand side:

#!/bin/bash
cd /mnt/backup
prefix="bkup_$(date +%Y%m%d_)"
for file in * .*; do
    [[ -f $file ]] || continue   # skip things like directories and soft links
    if [[ $file == $prefix* ]]; then
        echo "Copying " $file "to S3://auto-backup"
        s3cmd put $file s3://auto-backup
    fi
done

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83890/"
]
} |
241,348 | Often I have to annotate multiple pdf documents using xournal . If I start with a directory of "fresh" pdf files, I just open them all with xournal :

for f in *pdf; do xournal $f&; done

When I begin to annotate a file, say a .pdf I save it as a .xoj , i.e. I just switch the file extension. Now suppose I have an interrupted session and want to open all .pdf files in the directory provided that no corresponding .xoj file exists and open the .xoj file otherwise (both with xournal ). How can I do this in command line? | You can just remove .pdf to get the file's name without the extension and check for a file of that name with the .xoj extension:

for f in *.pdf
do
    if [ -f "${f%.pdf}".xoj ]
    then
        xournal "${f%.pdf}".xoj &
    else
        xournal "${f}" &
    fi
done

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
241,394 | Usually I do this on ssh for getting an X application using sudo su :

ssh -X server

OK. I login:

xauth list $DISPLAY

which returns to me

server/unix:10 MIT-MAGIC-COOKIE-1 blablablablabla

Then I do

sudo su
xauth add server/unix:10 MIT-MAGIC-COOKIE-1 blablablablabla

And after running an X application... I get it, it is correct. The problem is on Centos7, I do

xauth list $DISPLAY

And it returns nothing. I try to add the MIT magic cookies given by xauth list but of course it doesn't work. The normal X-forwarding via ssh, without sudo, works. The settings of sshd are the same on 3 servers:

slackware WORKS
hpux WORKS
centos7 NOT WORKING

| Another solution is to merge the .Xauthority file of the current user with that of the root user.

ssh user@host

Change the .Xauthority file permissions so that root also has access to it.

sudo su - root
xauth merge /home/users/user/.Xauthority

Test:

gedit somefile.log

It should open a gedit window. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
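A hedged alternative to merging: point the root process at the invoking user's cookie file for a single command (the path is a placeholder, and the sudoers policy must permit setting the variable):

ssh -X user@host
sudo XAUTHORITY=/home/users/user/.Xauthority gedit somefile.log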
241,414 | For example I'm in my directory: /home/myname and then I want to CD into a different directory. /home/pulsar/... Since I need to go pretty deep into the other directory, how can I go about this without having to type the whole line? I tried cd */thedirectoryiwanttogointo but that doesn't work. I have to type the whole line. | Probably your wildcard does not work because: there were no matches for the wildcard from the location you gave, or there was more than one match. The usual approach (in a shell) to moving frequently among subdirectories is to use the CDPATH feature, as well as pushd and popd . The CDPATH feature (perhaps first seen in tcsh) is a colon-separated list of directories. If the parent of your thedirectoryiwanttogointo name is reasonably unique, then you could add the parent to the list. For further reading (your shell's manual page should be first): What is CDPATH ? 3.4. Changing Directories with cd (Red Hat Enterprise Linux Step By Step Guide) How to change the CDPATH for the C shells: csh and tcsh pushd and popd are newer than CDPATH , but still dating from the mid-1990s. They allow you to save your current directory ("pushing" onto a stack) and restore it ("popping" from a stack) during their respective cd commands. For further reading: How do I use pushd and popd commands? Using pushd and popd in Linux Other people use shell aliases or symbolic links. Those are most useful when going to well-known locations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142064/"
]
} |
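Hedged shell examples of the two approaches described above, with paths borrowed from the question:

# CDPATH: "cd somedir" now also searches these parents for an immediate child.
export CDPATH=.:$HOME:/home/pulsar
cd thedirectoryiwanttogointo    # found if it sits directly under a CDPATH entry

# pushd/popd: jump somewhere deep, then back to where you were.
pushd /home/pulsar/some/deep/dir
popd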
241,552 | Short version: How to disable audit messages (dmesg) on a Fedora system? A Fedora system keeps logging "audit: success" messages in dmesg - in such an extreme way that dmesg has become unusable because it's filled up by these messages ( dmesg | grep -v audit is empty). These messages are completely useless as they obviously want to inform the user that some every-day internal process has succeeded (which might be of interest when debugging something, but it's just noise in this case). Even the command line interface (when switching to a non-X tty with Ctrl + Alt + F2 ) has become unusable as it's always cluttered with these audit messages, it's impossible to read the output of the commands that are actually run by the user. For example, after entering the username (login), an audit message is spewed out (apparently telling the user that something was formatted/printed successfully): audit: type=1131 audit(1446913801.945:10129): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=fprintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' It appears that most of these messages indicate "success", however there are also many audit messages which do not contain this keyword. Running Chromium triggers hundreds of these: audit: type=1326 audit(1446932349.568:10307): auid=500 uid=500 gid=500 ses=2 pid=1593 comm="chrome" exe="/usr/lib64/chromium/chrome" sig=0 arch=c000003e syscall=273 compat=0 ip=0x7f9a1d0a34f4 code=0x50000 Other messages include: audit: type=1131 audit(1446934361.948:10327): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' audit: type=1103 audit(1446926401.821:10253): pid=28148 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="user" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success' Generally, the majority of recent audit messages (at the time of writing) contains the keyword " NetworkManager " or " chrome ". How can these messages be disabled completely? Additional points: In case anyone might be thinking "you should read and analyze these audit messages, not disable them, they could be important", no they are not important, they're almost exclusively "success" messages. Nobody needs to be told that something which is supposed to work did in fact work. However, if one actually significant message was being logged, it would never be noticed in the storm of thousands of insignificant messages. In any case, no audit logging is wanted on this particular system (it's running in a controlled environment anyway). Clearly, something must be very misconfigured on this system. However, it was once a default Fedora installation which has been upgraded whenever a new release came out. Maybe it's just a simple setting that has to be changed, but as it did not happen changing the system configuration manually (on purpose), this stackexchange.com question will hopefully help others who happen to have gotten their system in the same state. It's now a Fedora 22 system, running Linux 4.0.6 (systemd 219). It's a standard Fedora desktop installation, currently running KDE. SELinux is disabled (/etc/selinux/config is set to "disabled"). Update : After upgrading to Fedora 23 (kernel 4.2.5, systemd 222), there are fewer audit messages than before. | Firstly, on fedora, both auditd and auditctl come from the same package (unconfusingly named audit). 
So if you don't have auditctl, something else is wrong. Try this: rpm -ql audit |grep ctl If that gives you nothing, then you do not have the audit package installed at all. Secondly, the first "human" language line in the grub.cfg file you mentioned says "DO NOT EDIT" on my system. This is a clue that any manual changes to the file can be lost. The correct place to edit the grub config on a fedora/redhat system is the one file you specifically suggested as not being necessary to change (/etc/default/grub). In reality, this is the only "safe" way to make the proposed change and survive kernel upgrades. This is because it is used as part of the source configuration during kernel upgrades, to regenerate a working grub.cfg. Look up the grub2-mkconfig command (and it's friends). Details are here: https://fedoraproject.org/wiki/GRUB_2 Your answer is not wrong, but I found it a little confusing. I hate the grub command line, and IMHO anyone who is likely to miss adding a whitespace char on a kernel command line would probably not thank any one for being lead down that road. Still, some people like to learn the hard way I know. All commands below need to be run as root (which is in and of itself a dangerous thing to suggest). For a running system: auditctl -e 0 If you cannot find auditctl, check your PATH and also consider: dnf install audit This should at least reduce if not disable the messages until such a time as you can reboot. To persist beyond reboots, edit /etc/default/grub and change the GRUB_CMDLINE_LINUX line to add "audit=0" to the end, then use grub2-mkconfig to regenerate the grub.cfg. This final step also puts a layer of validation between your change, and the running system. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/241552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58393/"
]
} |
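A hedged sketch of the persistent change described above for a BIOS-booted Fedora; EFI systems write grub.cfg under /boot/efi/EFI/fedora/ instead, so verify the output path before running:

# Append audit=0 to the kernel command line, then regenerate grub.cfg.
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 audit=0"/' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg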
241,623 | I'd like to delete a line from a text file ( input.txt ) if two patterns ( string1 and string2 ) are found on the same row, using sed . I'm trying: sed -i "/\b\(string1\|string2\)\b/d" input.txt , but this is deleting rows containing string1 OR string2 . | sed -i "/string1.*string2\|string2.*string1/d" input.txt This will delete any line where string1 appears before string2, OR string2 appears before string1. Both strings have to be on the line, in either order, for the line to be deleted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
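A hedged demo of the answer's expression on a scratch file (note that \| alternation is a GNU sed extension):

printf '%s\n' 'a string1 b string2 c' 'string2 then string1' 'only string1 here' > input.txt
sed '/string1.*string2\|string2.*string1/d' input.txt
# only string1 here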
241,639 | I've had this reoccurring sudden shutdown issue happening recently, and it always happens when I have PhpStorm opened, I'm not sure if it's related, but it still seems random, meaning that sometimes I have PhpStorm opened without any sudden shutdowns. I'm running Ubuntu 14.04.3 on Lenovo Y50-70. I've tried running stress tests to replicate this issue, but it wasn't successful, no sudden shutdowns. I've converted from java-8-oracle to java-7-openjdk-amd64 thinking maybe the issue is related to java but it happened again. I'm becoming more sure to admit that it's a hardware issue, but it didn't happen on Windows 8. I'm still able to return the laptop back and replace it, but I want to make sure this is a hardware issue, and to have convincing reasons as to why I'm returning it back. I don't want them to tell me, "But it works fine on Windows so it's a software issue", so I want to make sure it's a hardware issue and know if there are any possible fix. Here is my mcelog file, it indicates that there are many reoccuring hardware errors: | sed -i "/string1.*string2\|string2.*string1/d" input.txt This will delete any line where string1 appears before string2, OR string2 appears before string1. Both strings have to be on the line, in either order, for the line to be deleted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74848/"
]
} |
241,726 | When I run ls on a folder with directories that have a 777 permission, the ls colors are purple text with a green background, which is unreadable: What can I do to make this more pleasant to look at? | If you are using Linux (and not, e.g., using a Mac which does things differently) you can use dircolors with a custom database to specify which colors are used for which file attributes. First, create a dircolors database file. $ dircolors -p > ~/.dircolors Then edit it, you probably want to change the STICKY_OTHER_WRITABLE and OTHER_WRITABLE lines to something more pleasant than 34;42 (34 is blue, 42 is green - dircolors -p helpfully includes comments with the color codes listed). Then run eval $(dircolors ~/.dircolors) Edit your ~/.profile (or ~/.bash_profile etc) and find the line that runs eval $(dircolors) and change it to include the filename as above. Or if there isn't such a line in your .profile (etc) add it. Or, if you want it to work whether there is a ~/.dircolors file or not, change it to: [ -e ~/.dircolors ] && eval $(dircolors -b ~/.dircolors) || eval $(dircolors -b) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/241726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141502/"
]
} |
241,728 | I would like to, every 30 seconds or so, copy all text of a certain terminal or terminal emulator to a file, and display it in conky . I'm not talking about simple redirection ( command > file ), which doesn't work for ncurses programs or games such as NetHack. How could I go about doing this? | There is no portable way to ask a terminal emulator to do screen dumps. You can work around this by running your application in GNU screen or tmux and using them to carry out the screen-dumps. GNU screen can do this: Can I take a text screenshot of a GNU Screen session? Taking a screenshot of screen session over ssh Likewise, there is a plugin for tmux to do screen captures. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59616/"
]
} |
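A hedged sketch of the GNU screen route combined with the 30-second cadence from the question (the session name and program are placeholders):

# Inside screen, C-a h writes the current window to ./hardcopy.N.
# Scripted from outside, every 30 seconds:
screen -dmS game nethack                            # run the curses program in a named session
watch -n 30 'screen -S game -X hardcopy /tmp/game.txt'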
241,746 | Today I learnt a bit about the tr command. But I was stuck at understanding the difference between -c and -C . On the manual, it said:

-C    Complement the set of characters in string1, that is ``-C ab'' includes every character except for `a' and `b'.
-c    Same as -C but complement the set of values in string1.

I'm not quite understanding what set of values in string1 of the -c option means. I thought it may treat string1 "ab" as a whole and will escape single a and b . So I did an experiment:

⇒ echo "ab_a_b" | tr -C 'ba' 'c'
abcacbc%
⇒ echo "ab_a_b" | tr -c 'ba' 'c'
abcacbc%

Things didn't match my expectation! So, what's the difference between -C and -c in the tr command? Software Version: BSD 2004 on OSX10.10 | The POSIX manual says this: If the -C option is specified, the complements of the characters specified by string1 (the set of all characters in the current character set, as defined by the current setting of LC_CTYPE, except for those actually specified in the string1 operand) shall be placed in the array in ascending collation sequence, as defined by the current setting of LC_COLLATE. If the -c option is specified, the complement of the values specified by string1 shall be placed in the array in ascending order by binary value. and contains the following note: The ISO POSIX-2:1993 standard had a -c option that behaved similarly to the -C option, but did not supply functionality equivalent to the -c option specified in POSIX.1-2008. This meant that the historical practice of being able to specify tr -cd \000-\177 (which would delete all bytes with the top bit set) would have no effect because, in the C locale, bytes with the values octal 200 to octal 377 are not characters. From this it appears that the -c option lets you specify numeric values representing ASCII characters instead of using the characters themselves. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
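A hedged demo of the value-based behaviour cited in that POSIX note: deleting every byte with the top bit set by giving -c an octal range. The two UTF-8 bytes of é (0xC3 0xA9) fall outside \000-\177 and are removed, while the newline survives:

printf 'caf\303\251\n' | tr -cd '\000-\177'
# caf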
241,783 | I created a new account on my Linux OS, using the useradd command; now I want to delete the account created. I'm trying to use the userdel command to delete that account and I get the following error:

userdel: cannot lock /etc/passwd; try again later.

I don't understand what that error means. The syntax I used to delete the account was: userdel -r "accountname" , I also tried userdel "accountname" but it didn't work. Can someone help me delete this account using the command line? | Probably because you do not use the userdel command as superuser (root) or other privileged user. Try:

sudo userdel accountname

As stated in several comments it is also possible to remove the home directory configured while removing the user account using:

sudo userdel -r accountname | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139921/"
]
} |
241,788 | Currently when I am not using screen utility, I am able to see the VIM content wiped out of the display when I quit VIM. However when I am using GNU Screen utility and opening a file in one of the screen window and closing it, I can see the trailing file content on the display. It’s not wiping out the file content from the display the way it does when I am not using GNU Screen. I found the below post where it has been discussed without GNU Screen. How to set the bash display to not show the vim text after exit? In my case, in both the scenario [with and without GNU Screen] the Terminal Type is “xterm”. But the behavior is different when I close a VIM file. Kindly help. | GNU screen supports the xterm alternate-screen feature using the altscreen setting in your .screenrc file. According to the manual : — Command: altscreen state (none) If set to on, "alternate screen" support is enabled in virtual terminals, just like in xterm. Initial setting is ‘off’. A quick check shows that screen is actually simulating the feature, because it clears and/or restores the screen contents itself without sending the control sequence used by xterm. The screen feature works whether or not the actual terminal (or its terminal description) supports the alternate screen feature. You can test this by setting TERM to "vt100" before running screen . You can read more about the alternate screen feature in the xterm FAQ Why doesn't the screen clear when running vi? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142312/"
]
} |
241,806 | I am writing a script to check whether a backup file exists every morning Monday to Friday. These backup files are saved at the end of every day from Monday to Friday only, named like 02_10_15 . There is a problem: if I run my script on a Monday, say 09_10_15 , it won't find the file, because the file name is 06_10_15 rather than yesterday's 08_10_15 . Please find my date code below:

#Create variables
yday=$(date --date yesterday +"%d_%m_%y")
#yday="02_10_15"
FileName=$(date --date yesterday +"%Y%m%d")

How can I get the date of last Friday when the script runs on a Monday? | GNU screen supports the xterm alternate-screen feature using the altscreen setting in your .screenrc file. According to the manual : — Command: altscreen state (none) If set to on, "alternate screen" support is enabled in virtual terminals, just like in xterm. Initial setting is ‘off’. A quick check shows that screen is actually simulating the feature, because it clears and/or restores the screen contents itself without sending the control sequence used by xterm. The screen feature works whether or not the actual terminal (or its terminal description) supports the alternate screen feature. You can test this by setting TERM to "vt100" before running screen . You can read more about the alternate screen feature in the xterm FAQ Why doesn't the screen clear when running vi? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140937/"
]
} |
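For the Monday-needs-Friday problem in the question above, a hedged sketch using GNU date's %u (day of week, Monday=1); "3 days ago" on a Monday lands on the previous Friday:

#Create variables
if [ "$(date +%u)" -eq 1 ]; then
    yday=$(date --date '3 days ago' +"%d_%m_%y")   # Monday: reach back to Friday
else
    yday=$(date --date yesterday +"%d_%m_%y")
fi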
241,809 | I installed Windows 7 on an SSD and upgraded it to Windows 10. Then I installed Linux Mint 17.2 Cinnamon and had the following partitions: The boot menu was showing Linux Mint and Windows 10 and I thought everything was fine. UEFI boot configuration showed "ubuntu". However after booting Windows and then rebooting, grub was gone, and in the boot configuration there was only "Windows Boot Manager" available. When I repaired grub2 with grub-install and update-grub I was able to boot Linux Mint again, but only as long as I don't boot into Windows 10, which seems to wipe out grub like this every time. Secureboot and Fastboot are disabled. /boot/efi contains folders Boot, Microsoft and ubuntu. Did I do something wrong? How can I get grub2 working permanently? | I found the problem. Looking at the NVRAM with sudo efibootmgr I noticed that the Windows boot loader somehow seems to have the urge to be the first entry in the boot order. When I changed it to grub2 being first, windows overwrites entry 0000 and changes the boot order, even if grub2 was 0000 before, therefore overwriting it. The solution was setting the Windows boot manager entry inactive but leaving it in first position of the boot order:

sudo efibootmgr --bootnum 0000 --inactive
sudo efibootmgr --bootorder 0000,0002,000C,000D

(with 0002 being grub2) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/241809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142310/"
]
} |
241,868 | The wget man page states this, under the section for the --random-wait parameter: Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in the time between requests. [...] A 2001 article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly. Its author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses. I want to obtain a copy of this article for reading, and have tried many searches on the Internet to determine the article. However, all I can find with these searches is the man page for wget hosted on different websites; and some other research papers having no relation at all with this topic. Does anyone know which article is being referred to and where I can obtain a copy? | Even though not a direct answer, git blame and git log reveal that this section was introduced in commit 2c41d783 by a committer called hniksic , who turns out to be Hrvoje Niksic. His email address can be found in wget's ChangeLog file (I won't publish it here for the obvious reasons). I'd suggest asking him directly, as he might be the best to give a more adequate answer. While at it, you might consider asking him whether he would mind updating the manpage accordingly. ;) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
241,873 | Here is the content of two files:

Judi # cat File1
judi /export/home 76
judi /usr 83
Judi # cat File2
judi /export/home 79
judi /usr 82

If column 3 of File2 is greater than column 3 of File1 , then print File2 's lines:

judi /export/home 79

| Even though not a direct answer, git blame and git log reveal that this section was introduced in commit 2c41d783 by a committer called hniksic , who turns out to be Hrvoje Niksic. His email address can be found in wget's ChangeLog file (I won't publish it here for the obvious reasons). I'd suggest asking him directly, as he might be the best to give a more adequate answer. While at it, you might consider asking him whether he would mind updating the manpage accordingly. ;) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131403/"
]
} |
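A minimal awk sketch for this kind of two-file comparison, assuming column 2 is the key shared by both files:
awk 'NR==FNR { a[$2] = $3; next } ($2 in a) && $3 > a[$2]' File1 File2
# NR==FNR holds only while File1 is read, so its column 3 is stored per key;
# lines of File2 are then printed when their column 3 is greater.
With the sample data above this prints: judi /export/home 79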
241,878 | The string \e]11;?\a presumably yields the current terminal's background color, but I have not found a way to use this information to get the color in a human-readable (e.g. RGB) format. I did try (cluelessly) print -P '\e]11;?\a' but it produces nothing, or at least nothing visible. BTW, I'm aware of xtermcontrol --get-bg, but when I run it on the terminal I'm working on, I get the error: xtermcontrol: --get-bg is unsupported or disallowed by this terminal. See also, TROUBLESHOOTING section of xtermcontrol(1) manpage. (The referred-to TROUBLESHOOTING section did not provide any work-around.) BTW, I have deliberately omitted details about the terminal I'm using because I'm hoping to find a general solution, rather than one that works only for a specific terminal. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
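On terminals that do answer the OSC 11 query, the reply arrives on the tty input, so the terminal must briefly be put into raw mode to read it; a minimal sketch (reply format and support vary by terminal):
#!/bin/sh
old=$(stty -g < /dev/tty)
stty raw -echo min 0 time 10 < /dev/tty      # allow ~1 second for the reply
printf '\033]11;?\033\\' > /dev/tty
IFS= read -r reply < /dev/tty
stty "$old" < /dev/tty
# a typical reply: ESC ] 11 ; rgb:ffff/ffff/dddd BEL
printf '%s\n' "$reply" | sed -n 's/.*\(rgb:[0-9a-fA-F/]*\).*/\1/p'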
241,897 | How can I open port 8080 for listening? In the normal situation, I have tomcat7 listening on port 8080. sudo netstat -tanpu | grep ":8080"tcp6 0 0 :::8080 :::* LISTEN 7519/java After that, I stop tomcat7 with sudo service tomcat7 stop . So now port 8080 is closed. I did sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT to open it, but the port is not listening. sudo netstat -tanpu | grep ":8080"tcp6 0 0 127.0.0.1:8080 127.0.0.1:37064 TIME_WAIT - How can I open this port (8080) for listening for another application (not tomcat)? | You are confusing two concepts. Iptables handles access control for your networking. When you accept input traffic with a destination of TCP port 8080, that just means you are letting the internet send traffic to that port. It has no effect on what, if anything, is listening on the port. To listen on a port you need a program set up to do that. In your original case, tomcat was that program. You stopped it, so now nothing is listening on that port. To open it back up as a listener you need to start tomcat, or any other program that you want, to listen on that port. What program you select to listen on that port is entirely dependent on what service you want to provide on that port. The iptables commands don't affect whether or not your program is listening; they just affect whether or not traffic from the internet is allowed to talk to that program. If you just want to open up a network port that dumps whatever is sent to it, the program you want is netcat. The command nc -l -p 8080 will cause netcat to listen on port 8080 and dump whatever is sent to that port to standard output. You can redirect its output to a file if you want to save the data sent to that port. If you want anything more sophisticated than a raw data dump, you will need to determine what specific program(s) are capable of handling your data and start one of those instead. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/241897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15992/"
]
} |
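A few concrete commands along those lines (netcat option syntax differs between variants, so treat the flags as a sketch):
sudo ss -tlnp | grep :8080          # show whether anything is listening on 8080
nc -l -p 8080                       # throwaway listener (some netcat variants want: nc -l 8080)
python3 -m http.server 8080         # or serve the current directory over HTTP, if Python is installed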
241,970 | I'm working with a custom service which essentially runs a web server, called thisismywebserver. Currently it's not working (i.e. I get an "Unable to Connect" error trying to access a page). When I run the command service thisismywebserver status to see the status of the service, I see that the status is "active (exited)". Does this mean the service has stopped working? If not, then what does this mean? root@thisismywebserver-testing:~# service thisismywebserver status● thisismywebserver.service - LSB: ThisIsMyWebServer server Loaded: loaded (/etc/init.d/thisismywebserver) Active: active (exited) since Sun 2015-11-08 23:01:33 EST; 18h agoWarning: Journal has been rotated since unit was started. Log output is incomplete or unavailable. | It seems you are running a system with systemd yet you are using sysV commands. Did you create a sysV init script or a systemd unit file? State active (exited) means that systemd has successfully run the commands but that it does not know there is a daemon to monitor. If there is, you must define it in the unit file by configuring the Type and ExecStart options appropriately, according to whether the process you start is the main process, forks child processes and exits, etc. Check the different systemd man pages, or update your question and post the unit file or init script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/241970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11898/"
]
} |
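For reference, a minimal native unit sketch that would report active (running) instead, because systemd then has a foreground process to track (paths, names, and the --foreground flag here are hypothetical):
# /etc/systemd/system/thisismywebserver.service
[Unit]
Description=ThisIsMyWebServer server
After=network.target
[Service]
Type=simple                                      # the started process is the main process
ExecStart=/usr/local/bin/thisismywebserver --foreground
Restart=on-failure
[Install]
WantedBy=multi-user.target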
241,972 | I have the following general question regarding cron jobs. Suppose I have the following in my crontab: * 10 * * * someScript.sh* 11 * * * someScript2.sh30 11 */2 * * someScript3.sh <-- Takes a long time, let's say 36 hours.* 12 * * * someScript4.sh Is it smart enough to run the remaining jobs at the appropriate times? For example, the long script doesn't need to terminate first? Also, what happens if the initial long script is still running and it gets called by cron again? | Each cron job is executed independently of any other jobs you may have specified. This means that your long-lived script will not impede other jobs from being executed at the specified time. If any of your scripts are still executing at their next scheduled cron interval, then another, concurrent, instance of your script will be executed. This can have unforeseen consequences depending on what your script does. I would recommend reading the Wikipedia article on File Locking, specifically the section on Lock files. A lock file is a simple mechanism to signal that a resource — in your case the someScript3.sh script — is currently 'locked' (i.e. in use) and should not be executed again until the lock file is removed. Take a look at the answers to the following question for details of ways to implement a lock file in your script: How to make sure only one instance of a bash script runs? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/241972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81787/"
]
} |
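A common implementation of that locking advice, assuming util-linux flock(1) is available; the long-running crontab entry becomes:
30 11 */2 * * flock -n /var/lock/someScript3.lock /path/to/someScript3.sh
# -n skips the new run if the previous one still holds the lock;
# drop -n to queue it until the old run finishes instead.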
241,977 | I have an assignment to write a script that reads a number from the user and prints stars; say, if the number is 3, then it would print *********** I understand the idea of how to write it, but when I try to make a variable that saves * in it and then echo $variable, it just shows all the folders/files in my current directory. How do I save the '*' as a string without the shell expanding it? | varname='*' Though you have to be careful with where you use it; since globbing occurs after variable expansion, if you expand it carelessly it'll do the glob operation at expand-time instead. Use echo "$varname" to print it (note the quotes). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142424/"
]
} |
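A short demonstration of why the quoting matters, plus one way (of several) to print n stars for the assignment itself:
star='*'
echo "$star"      # prints: *
echo $star        # unquoted: the * is glob-expanded into the directory listing
read -r n
printf '%*s\n' "$n" '' | tr ' ' '*'    # e.g. n=3 prints: ***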
241,994 | By default, Mutt wants to archive messages to folders by sender name. So when I hit s it prompts me with Save to mailbox ('?' for list): =sendername . I'd like to have it default to =INBOX.Archives.2015 instead. I don't think I need a macro, which is how this one was solved: mutt: save message to specific folder I just want to set a default so that the prompt is always =INBOX.Archives.2015 (I can reset it once a year, the year doesn't need to update.) | You need to add a line such as the following to your .muttrc : save-hook . '=INBOX.Archives.2015' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/241994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141494/"
]
} |
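In context, with a variant that saves the yearly edit (mutt runs backticks in the config once, when it starts):
# ~/.muttrc: '.' matches any address, so this becomes the default for all messages
save-hook . '=INBOX.Archives.2015'
save-hook . "=INBOX.Archives.`date +%Y`"   # auto-filled year, if preferred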
242,013 | If I look for ACPI interrupts, I find: /sys/firmware/acpi/interrupts/sci: 55414/sys/firmware/acpi/interrupts/error: 0/sys/firmware/acpi/interrupts/gpe00: 0 invalid/sys/firmware/acpi/interrupts/gpe01: 0 invalid/sys/firmware/acpi/interrupts/gpe02: 0 invalid/sys/firmware/acpi/interrupts/gpe03: 0 invalid/sys/firmware/acpi/interrupts/gpe04: 0 invalid/sys/firmware/acpi/interrupts/gpe05: 0 invalid/sys/firmware/acpi/interrupts/gpe06: 0 enabled/sys/firmware/acpi/interrupts/gpe07: 0 enabled/sys/firmware/acpi/interrupts/gpe08: 0 invalid/sys/firmware/acpi/interrupts/gpe09: 0 disabled/sys/firmware/acpi/interrupts/gpe10: 0 enabled/sys/firmware/acpi/interrupts/gpe11: 0 invalid/sys/firmware/acpi/interrupts/gpe12: 0 invalid/sys/firmware/acpi/interrupts/gpe13: 0 invalid/sys/firmware/acpi/interrupts/gpe14: 1 enabled/sys/firmware/acpi/interrupts/gpe15: 0 invalid/sys/firmware/acpi/interrupts/gpe16: 1 enabled/sys/firmware/acpi/interrupts/gpe0A: 0 invalid/sys/firmware/acpi/interrupts/gpe17: 54753 enabled/sys/firmware/acpi/interrupts/gpe0B: 0 invalid/sys/firmware/acpi/interrupts/gpe18: 0 invalid/sys/firmware/acpi/interrupts/gpe0C: 0 invalid/sys/firmware/acpi/interrupts/gpe19: 0 invalid/sys/firmware/acpi/interrupts/gpe0D: 0 disabled/sys/firmware/acpi/interrupts/gpe0E: 0 invalid/sys/firmware/acpi/interrupts/gpe20: 0 invalid/sys/firmware/acpi/interrupts/gpe0F: 0 invalid/sys/firmware/acpi/interrupts/gpe21: 0 invalid/sys/firmware/acpi/interrupts/gpe22: 0 invalid/sys/firmware/acpi/interrupts/gpe23: 0 enabled/sys/firmware/acpi/interrupts/gpe24: 0 invalid/sys/firmware/acpi/interrupts/gpe25: 0 invalid/sys/firmware/acpi/interrupts/gpe26: 0 invalid/sys/firmware/acpi/interrupts/gpe1A: 0 invalid/sys/firmware/acpi/interrupts/gpe27: 0 invalid/sys/firmware/acpi/interrupts/gpe1B: 0 invalid/sys/firmware/acpi/interrupts/gpe28: 0 invalid/sys/firmware/acpi/interrupts/gpe1C: 0 invalid/sys/firmware/acpi/interrupts/gpe29: 0 invalid/sys/firmware/acpi/interrupts/gpe1D: 0 invalid/sys/firmware/acpi/interrupts/gpe1E: 0 invalid/sys/firmware/acpi/interrupts/gpe30: 0 invalid/sys/firmware/acpi/interrupts/gpe1F: 0 invalid/sys/firmware/acpi/interrupts/gpe31: 0 invalid/sys/firmware/acpi/interrupts/gpe32: 0 invalid/sys/firmware/acpi/interrupts/gpe33: 0 invalid/sys/firmware/acpi/interrupts/gpe34: 0 invalid/sys/firmware/acpi/interrupts/gpe35: 0 invalid/sys/firmware/acpi/interrupts/gpe36: 0 invalid/sys/firmware/acpi/interrupts/gpe2A: 0 invalid/sys/firmware/acpi/interrupts/gpe37: 0 invalid/sys/firmware/acpi/interrupts/gpe2B: 0 invalid/sys/firmware/acpi/interrupts/gpe38: 0 invalid/sys/firmware/acpi/interrupts/gpe2C: 0 invalid/sys/firmware/acpi/interrupts/gpe39: 0 invalid/sys/firmware/acpi/interrupts/gpe2D: 0 invalid/sys/firmware/acpi/interrupts/gpe2E: 0 invalid/sys/firmware/acpi/interrupts/gpe2F: 0 invalid/sys/firmware/acpi/interrupts/gpe3A: 0 invalid/sys/firmware/acpi/interrupts/gpe3B: 0 invalid/sys/firmware/acpi/interrupts/gpe3C: 0 invalid/sys/firmware/acpi/interrupts/gpe3D: 0 invalid/sys/firmware/acpi/interrupts/gpe3E: 0 invalid/sys/firmware/acpi/interrupts/gpe3F: 0 invalid/sys/firmware/acpi/interrupts/sci_not: 0/sys/firmware/acpi/interrupts/ff_pmtimer: 0 invalid/sys/firmware/acpi/interrupts/ff_rt_clk: 0 disabled/sys/firmware/acpi/interrupts/gpe_all: 55414/sys/firmware/acpi/interrupts/ff_gbl_lock: 0 enabled/sys/firmware/acpi/interrupts/ff_pwr_btn: 0 enabled/sys/firmware/acpi/interrupts/ff_slp_btn: 0 invalid I wrote a service script to disable this on boot: #!/bin/bash### BEGIN INIT INFO# Provides: disable-gpe17# Required-Start: $remote_fs 
$syslog# Required-Stop: $remote_fs $syslog# Default-Start: 2 3 4 5# Default-Stop: 0 1 6# Short-Description: Start daemon at boot time# Description: Enable service provided by daemon.### END INIT INFOlogger -t gpe17 -s "Disabling gpe17 interrupts..."/etc/pm/sleep.d/30_disable_gpe17 thawexit 0 This calls my PM script: #!/bin/shecho 0 > /sys/firmware/acpi/interrupts/gpe17 2>/dev/null I've made both scripts executable, and added disable-gpe17 to the boot scripts with: sudo update-rc.d disable-gpe17 defaults When I look in my boot log, I don't see a record of the syslog entry stating that gpe17 has been disabled. Is there a better, perhaps udev, way of disabling certain interrupts on boot? If not, why is my service script not running on boot? I'm on a MacBook Pro 11,5 running kernel 3.19 with Ubuntu 14.04. | I have the same issue; I needed to disable gpe16 and gpe17 to stop kworker from hogging the CPU. I followed the recipe found here: http://sudoremember.blogspot.com.au/2013/05/high-cpu-usage-due-to-kworker.html An abbreviated (and corrected, at least for my instance) version is here: $ sudo -s# echo "disable" > /sys/firmware/acpi/interrupts/gpe16# echo "disable" > /sys/firmware/acpi/interrupts/gpe17 You should now see the CPU load / hear the fans go down. Make sure this happens again on reboot - still with root privileges: # crontab -e This opens your favourite editor. Add these lines: @reboot echo "disable" > /sys/firmware/acpi/interrupts/gpe16 @reboot echo "disable" > /sys/firmware/acpi/interrupts/gpe17 Since suspend mode doesn't work for me, I didn't bother following the remainder of the instructions on how to create a script that reactivates those settings on resume after suspend. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/242013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
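On systemd-based setups there is also a declarative alternative to the @reboot crontab; the tmpfiles.d w type writes the given value into the file at boot (a sketch, assuming the same GPE numbers):
# /etc/tmpfiles.d/acpi-gpe.conf
w /sys/firmware/acpi/interrupts/gpe16 - - - - disable
w /sys/firmware/acpi/interrupts/gpe17 - - - - disable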
242,014 | I am trying to install the latest version of subversion on Sid, and because it has a bug I receive a warning and I abort the installation. How do I locate the previous version, install it, and pin it until the bug is resolved? root@server01:~# apt-get install subversion Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: db5.3-util subversion-tools The following NEW packages will be installed: subversion 0 upgraded, 1 newly installed, 0 to remove and 205 not upgraded. Need to get 0 B/981 kB of archives. After this operation, 4,844 kB of additional disk space will be used. Retrieving bug reports... Done Parsing Found/Fixed information... Done critical bugs of subversion (-> 1.9.2-2) <Outstanding> b1 - #803725 - subversion: dump-load of a repository modifies verbose log output: M line lostserious bugs of subversion (-> 1.9.2-2) <Outstanding> b2 - #803589 - FTBFS with ruby2.2 (only)Summary: subversion(2 bugs)Are you sure you want to install/upgrade the above packages? [Y/n/?/...] n**************************************************************************** Exiting with an error in order to stop the installation. ****************************************************************************E: Sub-process /usr/sbin/apt-listbugs apt returned an error code (10)E: Failure running script /usr/sbin/apt-listbugs apt | You can tell apt-get to install a specific version of a package. For your example: apt-get install subversion you would append the version to the package name, e.g., apt-get install subversion=1.9.2-1 To find a package version, the Debian wiki page RollbackUpdate shows an example where that information is found in http://www.debian.org/distrib/packages , i.e., https://www.debian.org/distrib/packages#search_packages or (older versions) via http://snapshot.debian.org/ http://snapshot.debian.org/package/subversion/ shows http://snapshot.debian.org/package/subversion/1.9.2-2/ http://snapshot.debian.org/package/subversion/1.9.2-1/ http://snapshot.debian.org/package/subversion/1.9.1-1/ and so forth. Finally, the Debian page shows (for its example) the change to make to /etc/apt/preferences to pin the package. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/242014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
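A concrete pinning stanza matching that rollback, per apt_preferences(5) (the version string is the one from the answer; a priority above 1000 lets the downgrade stick):
# /etc/apt/preferences.d/subversion
Package: subversion
Pin: version 1.9.2-1
Pin-Priority: 1001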
242,019 | How can I set the Service WorkingDirectory using an environment variable? Here is an example service config: [Service]Environment=MYWORKINGDIR=/tmpWorkingDirectory=${MYWORKINGDIR} This generates an error along the lines of "not an absolute path". Adding a slash to the start "fixes" that error, but the path is still not found: [Service]Environment=MYWORKINGDIR=/tmpWorkingDirectory=/${MYWORKINGDIR} Is this even possible? Documentation isn't clear on which directives can/can't use env variables. http://www.freedesktop.org/software/systemd/man/systemd.exec.html | Is this even possible? No, it's not possible. You can use: ~ (the home directory of the configured user), an absolute directory path, or an absolute directory path prefixed with - (in which case errors are ignored). Also, WorkingDirectory understands specifiers. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/242019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73094/"
]
} |
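A common workaround sketch: the variable is in the service's environment, so a shell launched by ExecStart can do the cd itself (the daemon path here is hypothetical):
[Service]
Environment=MYWORKINGDIR=/tmp
ExecStart=/bin/sh -c 'cd "$MYWORKINGDIR" && exec /usr/bin/mydaemon'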
242,079 | Let's assume I have two identical systems. On the first system I've created a symbolic link. On the second one I want to copy that symlink via sftp so that the symlink works the same (i.e. if the symlink links to /etc/, after copying it over, only the symlink will be copied, not the linked files). Any ideas on how I can do this / if I can do this? I only want to copy the symbolic link, nothing else; just the reference. | It is a little unclear how exactly you want to deal with the symlink. As I understand it, you want to recreate the symlink on the other system. Symlinks are filesystem-dependent; the protocol used to copy files must be aware of that. A good example is using rsync with the -a option, more specifically the -l option, but -a is probably what you want. It will recreate symlinks without pulling the targets over (dereferencing the links). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/242079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129282/"
]
} |
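Concretely (the paths and host are placeholders):
rsync -a ~/mylink user@remotehost:/destination/dir/   # copies the link itself; the target is not followed
OpenSSH's sftp can also recreate one on the far side from its own prompt:
sftp> symlink /etc mylink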
242,087 | I have a script including multiple commands. How can I group commands to run together? (I want to make several groups of commands. Within each group, the commands should run in parallel (at the same time). The groups should run sequentially, waiting for one group to finish before starting the next group)... i.e. #!/bin/bashcommand #1command #2command #3command #4command #5command #6command #7command #8command #9command #10 How can I run every 3 commands together? I tried: #!/bin/bash{command #1command #2command #3} & { command #4command #5command #6} & {command #7command #8command #9}&command #10 But this didn't work properly (I want to run the groups of commands in parallel at the same time. Also I need to wait for the first group to finish before running the next group). The script is exiting with an error message! | The commands within each group run in parallel, and the groups run sequentially, each group of parallel commands waiting for the previous group to finish before starting execution. The following is a working example: assume 3 groups of commands as in the code below. In each group the three commands are started in the background with &. The 3 commands will be started almost at the same time and run in parallel while the script waits for them to finish. After all three commands in the third group exit, command 10 will execute. $ cat command_groups.sh #!/bin/shcommand() { echo $1 start sleep $(( $1 & 03 )) # keep the seconds value within 0-3 echo $1 complete}echo First Group:command 1 &command 2 &command 3 &waitecho Second Group:command 4 &command 5 &command 6 &waitecho Third Group:command 7 &command 8 &command 9 &waitecho Not really a group, no need for background/wait:command 10$ sh command_groups.sh First Group:1 start2 start3 start1 complete2 complete3 completeSecond Group:4 start5 start6 start4 complete5 complete6 completeThird Group:7 start8 start9 start8 complete9 complete7 completeNot really a group, no need for background/wait:10 start10 complete$ | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/242087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
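For comparison, the brace-group layout from the question also works once each group waits for its own members; note the spacing and the closing '; }' that brace groups require (command1 etc. are placeholders for the question's commands):
{ command1 & command2 & command3 & wait; }
{ command4 & command5 & command6 & wait; }
{ command7 & command8 & command9 & wait; }
command10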
242,111 | I recently came across this list of Exit Codes With Special Meanings from the Advanced Bash-Scripting Guide. They refer to these codes as being reserved and recommend that: According to the above table, exit codes 1-2, 126-165, and 255 have special meanings, and should therefore be avoided for user-specified exit parameters. A while ago, I wrote a script which used the following exit status codes: 0 - success 1 - incorrect hostname 2 - invalid arguments specified 3 - insufficient user privileges When I wrote the script I wasn’t aware of any special exit codes so I simply started at 1 for the first error condition, and incremented the exit status for each successive error type. I wrote the script with the intention that at a later stage it could be called by other scripts (which could check for the non-zero exit codes). I haven’t actually done that yet; so far I’ve only run the script from my interactive shell (Bash) and I was wondering what / if any problems could be caused by using my custom exit codes. How relevant/important is the recommendation from the Advanced Bash-Scripting Guide? I couldn’t find any corroborating advice in the Bash documentation; its section on Exit Status simply lists the exit codes used by Bash but doesn’t state that any of these are reserved or warn against using them for your own scripts/programs. | There have been several attempts to standardize the meanings of process exit codes. In addition to the one you mention, I know of: the BSDs have sysexits.h which defines meanings for values from 64 on up. GNU grep documents that exit code 0 means at least one match was found, 1 means no matches were found, and 2 means an I/O error occurred; this convention is obviously also useful for other programs for which the distinction between "nothing went wrong but I didn't find anything" and "an I/O error occurred" is meaningful. Many implementations of the C library function system use exit code 127 to indicate the program doesn't exist or failed to start. On Windows, NTSTATUS codes (which are inconveniently scattered all over the 32-bit number space) may be used as exit codes, particularly the ones that indicate a process was terminated due to catastrophic misbehavior (e.g. STATUS_STACK_OVERFLOW ). You can't count on any given program obeying any particular one of these conventions. The only reliable rule is that exit code 0 is success and anything else is some sort of failure. (Note that C89's EXIT_SUCCESS is not guaranteed to have the value zero; however, exit(0) is required to behave identically to exit(EXIT_SUCCESS) even if the values are not the same.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/242111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22812/"
]
} |
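One hedged way to apply this: keep custom statuses in the 3-125 range, clear of the shell's reserved values, for instance by borrowing the BSD sysexits.h numbers:
EX_USAGE=64    # command line usage error
EX_NOHOST=68   # host name unknown
EX_NOPERM=77   # permission denied
[ "$(hostname)" = "$expected_host" ] || exit "$EX_NOHOST"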
242,126 | I'm using a Debian stable box with a Dell monitor and the Dell AX510 "stereo soundbar". The "soundbar" has a large frontal speaker as well as two headphone jacks on the left. While the hardware itself is operational (I can hear static when I crank up the volume), I cannot make it produce any sound. I'm running KDE and have filled up the master bar through alsamixer. The desktop itself also has a headphone jack. I really do not mind using either the soundbar or that particular headphone jack; I'm just looking to have some sound output through either source. I'm going to be using headphones anyway since this is a work computer. Audio-related lspci output: jason@debian:~$ lspci | grep Audio 01:00.1 Audio device: NVIDIA Corporation GF108 High Definition Audio Controller (rev a1) And some machine specs: jason@debian:~$ uname -r3.16.0-4-amd64 jason@debian:~$ cat /etc/debian_version 8.2 | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/242126",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21567/"
]
} |
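Some generic first steps for this kind of silence; card and device numbers vary per machine, so treat the arguments as placeholders:
aplay -l                           # list playback devices and their card numbers
speaker-test -c 2 -D plughw:0,0    # send test noise to one card/device directly
alsamixer -c 0                     # per-card view; unmute any channels showing MM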
242,127 | When I write using O_SYNC, the write call returns once the data has been written to the disk. But how does O_SYNC force Linux to write the data to disk? Normally you would have to wait a maximum of dirty_expire_centisecs + dirty_writeback_centisecs (30 seconds + 5 seconds) at worst for pdflush to write the data to disk. Does O_SYNC set the dirty_expire_centisecs for the data lower, or does something else happen (a manual flush)? Please provide sources for your answer. I couldn't find anything on this topic. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/242127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142516/"
]
} |
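In short, O_SYNC bypasses the dirty_* writeback timers entirely: per the open(2) man page, each write(2) on such a descriptor returns only once the data (and required metadata) have reached the device, so the background flusher never handles those pages. The difference is easy to observe from the shell with GNU dd, which can request the flag itself:
dd if=/dev/zero of=test.bin bs=1M count=100 oflag=sync   # opened with O_SYNC: write-through, slow
dd if=/dev/zero of=test.bin bs=1M count=100              # buffered: fast, flushed later by writeback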
242,129 | I am using GNOME 3.18.1 on Arch Linux 4.2.5-1-ARCH x86_64 on a Dell E6530 laptop. Since I installed this OS years ago, the power button on my laptop has always led my OS to completely power down. However, in the last few weeks this behaviour has changed, so that pressing the power button now puts my laptop into energy savings mode. I did not change my power settings. I always keep my system up to date using pacman -Syyu , however, so I suspect that an update changed this functionality. In the power settings there is no option for this. How can I restore the initial behaviour, so that pressing that button powers the system off? | That's caused by the latest gnome-settings-daemon updates... There is no such option in power settings because it was removed by the GNOME devs (the shutdown/power off action is considered "too destructive" ). Bottom line: you can no longer power off your laptop by pressing the power off button. You could however add a new dconf / gsettings option (i.e. shutdown ) to the settings daemon power plugin if you're willing to patch and rebuild gnome-settings-daemon : --- gnome-settings-daemon-3.18.2/data/gsd-enums.h 2015-11-10 09:07:12.000000000 -0500+++ gnome-settings-daemon-3.18.2/data/gsd-enums.h 2015-11-11 18:43:43.240794875 -0500@@ -114,7 +114,8 @@ { GSD_POWER_BUTTON_ACTION_NOTHING, GSD_POWER_BUTTON_ACTION_SUSPEND,- GSD_POWER_BUTTON_ACTION_HIBERNATE+ GSD_POWER_BUTTON_ACTION_HIBERNATE,+ GSD_POWER_BUTTON_ACTION_SHUTDOWN } GsdPowerButtonActionType; typedef enum--- gnome-settings-daemon-3.18.2/plugins/media-keys/gsd-media-keys-manager.c 2015-11-10 09:07:12.000000000 -0500+++ gnome-settings-daemon-3.18.2/plugins/media-keys/gsd-media-keys-manager.c 2015-11-11 18:47:52.388602012 -0500@@ -1849,6 +1849,9 @@ action_type = g_settings_get_enum (manager->priv->power_settings, "power-button-action"); switch (action_type) {+ case GSD_POWER_BUTTON_ACTION_SHUTDOWN:+ do_config_power_action (manager, GSD_POWER_ACTION_SHUTDOWN, in_lock_screen);+ break; case GSD_POWER_BUTTON_ACTION_SUSPEND: do_config_power_action (manager, GSD_POWER_ACTION_SUSPEND, in_lock_screen); break; Once you install the patched version, a new shutdown option will be available in dconf-editor under org > gnome > settings-daemon > plugins > power > power-button-action : so select that to shutdown via power button or, if you prefer CLI, run in terminal: gsettings set org.gnome.settings-daemon.plugins.power power-button-action shutdown Sure, for the above to work you also need the right settings in /etc/systemd/logind.conf : HandlePowerKey=poweroffPowerKeyIgnoreInhibited=yes Keep in mind that pressing the power button will shutdown your system without any warning. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/242129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22513/"
]
} |
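Two quick checks after applying the patch (both are standard gsettings subcommands):
gsettings get org.gnome.settings-daemon.plugins.power power-button-action
gsettings range org.gnome.settings-daemon.plugins.power power-button-action   # lists the allowed values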
242,140 | Here I want something like the way aircrack-ng displays text in the terminal, or like Matrix-style scripts! For example, if my terminal screen already contains 4 lines, I want to update the first of those lines in its place, and the same for the other lines (using bash). To be more precise, I want a script like the following: #!/bin/bashwhile : do echo "line1" echo "line2" echo "line3" echo "line4" # without using the clear cmd, next cycle line1 should be printed # in line1's place, not on a new line, and the same for the other lines done | On terminals that support it, you can use tput sc to save the cursor position and tput rc to restore it: i=0tput scwhile sleep 1; do tput rc echo "line$((i=i+1))" echo "line$((i=i+1))" echo "line$((i=i+1))" echo "line$((i=i+1))"done You can save those escape sequences in a variable to avoid having to invoke tput every time: rc=$(tput rc) || echo >&2 "Warning: terminal doesn't support restoring the cursor"...printf '%s\n' "${rc}line1..." On the rare terminals that don't support it, you can always use cursor positioning sequences, while sleep 1; do echo "line$((i=i+1))" echo "line$((i=i+1))" echo "line$((i=i+1))" echo "line$((i=i+1))" tput cuu 4 # or up=$(tput cuu1); printf %s "$up$up$up$up"done See the terminfo man page in section 5 (if your system ships with ncurses) for more details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/242140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134906/"
]
} |
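An equivalent with raw CSI sequences, for terminals where tput/terminfo is unavailable (ESC[s and ESC[u are widely, though not universally, supported):
printf '\033[s'            # save the cursor position
i=0
while sleep 1; do
  printf '\033[u'          # jump back to the saved position
  echo "line$((i=i+1))"; echo "line$((i=i+1))"; echo "line$((i=i+1))"; echo "line$((i=i+1))"
done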
242,149 | I have a series of software files downloaded into my subdirectory ~/Downloads on my personal computer. I am also using bash to connect remotely to a computer using ssh . Is it possible to transfer this file via ssh to the remote computer? | You may want to use scp for this purpose. It is a secure means to transfer files using the SSH protocol. For example, to copy a file named yourfile.txt from ~/Downloads to remote computer, use: scp ~/Downloads/yourfile.txt [email protected]:/some/remote/directory You can see more examples here . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/242149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115891/"
]
} |
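Two common variations on the same idea (paths and host are placeholders):
scp -rp ~/Downloads/somedir [email protected]:/some/remote/directory        # whole directory, keeping times/modes
rsync -avP ~/Downloads/yourfile.txt [email protected]:/some/remote/directory/   # resumable, with progress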
242,298 | Respectable projects release tar archives that contain a single directory, for instance zyrgus-3.18.tar.gz contains a zyrgus-3.18 folder which in turn contains src , build , dist , etc. But some punk projects put everything at the root :'-( This results in a total mess when unarchiving. Creating a folder manually every time is a pain, and unnecessary most of the time. Is there a super-fast way to tell whether a .tar or .tar.gz file contains more than a single directory at its root? Even for a big archive. Or even better, is there a tool that in such cases would create a directory (name of the archive without the extension) and put everything inside? | patool handles different kinds of archives and creates a subdirectory in case the archive contains multiple files to prevent cluttering the working directory with the extracted files. Extract archive patool extract archive.tar To obtain a list of the supported formats, use patool formats . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/242298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
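For the detection part alone, a small shell sketch (GNU tar assumed for transparent decompression):
tar tf zyrgus-3.18.tar.gz | cut -d/ -f1 | sort -u | wc -l   # number of distinct top-level entries
# if that prints more than 1, unpack into a directory named after the archive:
f=zyrgus-3.18.tar.gz; d=${f%.tar.gz}
mkdir -p "$d" && tar xf "$f" -C "$d"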
242,334 | I noticed notepad++ adds characters when writing bash scripts. For example, in a simple wait script written in notepad++, it adds \r, breaking the script. The standard Windows notepad does not have this issue. Is there a way to check or to control for these extra characters? I've already tried the different encoding options within notepad++ (utf8, utf8 without bom, ansi). Here is the sample script that works when saved in notepad or nano, but not notepad++ (v.5.9.8): #!/bin/bash#Written in Notepadecho I will say something and wait 5 secondssleep 5echo then say something again | It is a menu item: under Edit > EOL Conversion, choose the Unix conversion and save. This answer was described under the third link provided by Thomas. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/242334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139163/"
]
} |
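The stray \r characters can also be spotted and stripped on the Linux side:
file wait.sh                           # mentions "CRLF line terminators" when affected
grep -c $'\r' wait.sh                  # bash: count the affected lines
tr -d '\r' < wait.sh > wait-fixed.sh   # or: dos2unix wait.sh, or: sed -i 's/\r$//' wait.sh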
242,355 | I have a small bash program that allows me to compare tables (having a similar naming style) and that copies to another directory the file having the highest value at the first row of the fourth column (with space separator) (the values of the fourth column are decimal numbers, e.g. 1111.22). #! /bin/bashriver=lobith_rhinehighest=1for model in H08do for gcm in IPSL-CM5A-LR do for scenario in hist rcp8p5 do for x in ${model}_${gcm}_${scenario}_${river}[1-9]/${model}_${gcm}_${scenario}_${river}[1-9].txt do fourth="$(awk 'NR==1{print $4}' $x)" if [ "$highest" -lt "$fourth" ];then highest=$fourth hifile=$x fi done echo "highest was $highest in $hifile" cp $hifile /home/steve/high_test/${model}_${gcm}_${scenario}_${river}.txt done donedone Unfortunately, the line 13 if [ "$highest" -lt "$fourth" ];then produces the following error message: integer expression expected So, I read some documentation about comparison operators, and found that I can replace the problematic line by if [ "$highest" \< "$fourth" ];then But this is not working properly since it just compares the values in ASCII alphabetical order rather than numerically. Does anyone have an idea about how to handle those issues? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/242355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125347/"
]
} |
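Since [ ... -lt ... ] is integer-only, decimal columns such as 1111.22 need an external numeric comparison; one sketch using awk (bc -l works similarly):
if awk -v a="$highest" -v b="$fourth" 'BEGIN { exit !(a < b) }'; then
    highest=$fourth
    hifile=$x
fi
# bc alternative: [ "$(echo "$highest < $fourth" | bc -l)" -eq 1 ]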